One of the coolest things an AI developer can do is build their own database agent. Such an agent takes a question as input and then autonomously generates the necessary sequence of actions.
This eliminates the need to find a data analyst to translate the question into an SQL database query.
This article series is designed to teach you how to create such an AI agent. We’ll use SQL, the Azure OpenAI service, and the LangChain agent framework throughout the tutorials.
By the end of this series, you should feel confident in your ability to deploy LLMs to build your AI agent.
Let’s dive into the first part of this series to:
- Recap generative AI
- Learn about RAG
- Build your first AI agent
- Enhance your understanding of AI for database applications
Building Your First AI Agent with Azure OpenAI
In this first article, you will build an AI agent with the Azure OpenAI service. You will learn about grounding techniques, in particular retrieval-augmented generation (RAG), as a starting point for building AI agents.
By the end of the lesson, you will deploy your Azure OpenAI service instance and test its API.
You will build a database agent where the user can interact with the application in natural language instead of writing SQL queries.
Before diving in, let’s quickly cover some background to set the context for this course.
Background on Generative AI
Generative AI, which rose to prominence in the 2020s, refers to the capability of AI systems to generate text, images, videos, and other types of content.
This is accomplished through interactions with a generative AI system using natural language prompts.
Moreover, as shown in the figure, generative AI builds on the foundations of machine learning and deep learning.

Let’s Explore The Key Differentiators of Generative AI
- Foundation Models: Foundation models can perform multiple tasks with a single setup. This means that when you are building an AI agent, you can connect it to a database, process documentation, or create various applications, all with the same model.
- Use of Different Data Formats: Generative AI models can handle various data formats, including text, images, videos, and audio. This versatility is ideal for multimodal applications, allowing a single model to process and integrate different types of data.
- Ability to Use Natural Language: Another crucial aspect of generative AI is its ability to use natural language. This means users can communicate with the system in English or any other language instead of interacting with it through code. Accordingly, this makes the technology accessible to a broader audience, including both technical and business users.
These are essential features to consider for this lesson and any future generative applications you create.
Ways to Customize LLMs
Imagine that you have databases, PDFs, and private information for your AI project. How can you integrate these various data sources to enhance the knowledge of a model like GPT-4?
Well, you have two options:

1. RAG/Grounding
The first option is retrieval-augmented generation (RAG). This method uses an orchestration tool to connect the model with your data sources without requiring additional training.
For example, Azure OpenAI GPT-4 can query your database directly without modifying the model itself. This approach is more efficient and practical, as it avoids the overhead associated with model training.
Also, future model updates or replacements can be seamlessly integrated into the existing setup. You only need to reuse the same connection mechanisms to access your data.
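To make the RAG pattern concrete, here is a minimal sketch of the idea: retrieve the pieces of your data most relevant to the question, then prepend them to the prompt so the model answers from your data rather than its training set. The documents and keyword-overlap scoring below are illustrative stand-ins for a real vector store or database connector.

```python
# Minimal RAG sketch: retrieve relevant context, then ground the prompt.
# The documents and the naive scoring are illustrative only; a real system
# would use an orchestration tool (e.g. LangChain) and a vector store.

DOCUMENTS = [
    "Invoice 1042 was paid on 2024-03-01.",
    "The sales table stores one row per order.",
    "Refunds are processed within 5 business days.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str, docs: list[str]) -> str:
    """Prepend the retrieved context so the model answers from your data."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("How are refunds processed?", DOCUMENTS))
```

The key point is that the model itself is never modified: swapping in a newer model later only requires pointing the same retrieval step at it.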
2. Fine-tuning
The second option is fine-tuning the model. This process involves retraining the model on specific datasets and then redeploying the updated model.
While this approach allows for customization, it is resource-intensive and less commonly adopted due to its complexity and operational cost.
The RAG approach will be implemented in this course!
Building Your First AI Agent
Now, you will create a simple AI agent with Azure OpenAI using Python and a Jupyter Notebook. This tutorial is designed for beginners and will guide you through the code and concepts step-by-step.
Step 1: Setting Up the Environment
As described in the official documentation for Azure Chat OpenAI, to access Azure OpenAI models you need to:
- Create an Azure account
- Deploy an Azure OpenAI model
- Obtain the name and endpoint for your deployment
- Acquire an Azure OpenAI API key
- Install the langchain-openai integration package
You will use your own API endpoint and API key, both obtained from your Azure OpenAI environment.
This endpoint is a pre-created cloud resource that gives you access to the Azure OpenAI GPT model.
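Since the code later in this article reads the endpoint from an environment variable, it helps to verify your configuration up front. The snippet below is a sketch: the placeholder values are not real, and in practice you would export these variables in your shell or a .env file rather than hard-coding them in source.

```python
import os

# Illustrative placeholders only; replace with your own deployment values,
# and prefer exporting them in your shell or a .env file over hard-coding.
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"

# Verify both variables are visible to the process before creating a client.
for var in ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY"):
    assert os.getenv(var), f"{var} is not set"
```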
Also, before diving into the code, ensure you have the following dependencies installed.
You can do this by creating a requirements.txt file with the necessary libraries and using pip to install them.
requirements.txt
pyodbc==5.1.0
tabulate==0.9.0
openai==1.12.0
langchain==0.1.6
langchain-community==0.0.20
langchain-core==0.1.23
langchain-experimental==0.0.49
langchain-openai==0.0.5
pandas==2.2.2

To install the dependencies, run the following command:

pip install -r requirements.txt

Now, you can import the libraries OpenAI, LangChain, and Pandas.
import os
import pandas as pd
from IPython.display import Markdown, HTML, display

Step 2: Leveraging the LangChain SQL Agent
Now, you can import the necessary classes.
The HumanMessage object defines the type of message you send, and AzureChatOpenAI connects to the Azure OpenAI model. Initialize the AzureChatOpenAI model with the specified API version and deployment name, reading the endpoint from an environment variable.
from langchain.schema import HumanMessage
from langchain_openai import AzureChatOpenAI
model = AzureChatOpenAI(
    openai_api_version="2024-04-01-preview",
    azure_deployment="gpt-4-1106",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

Step 3: Preparing Your Prompt
You can prepare your prompt by creating a HumanMessage object containing the text to be translated into German and Spanish.
message = HumanMessage(
    content="Translate this sentence from English "
    "to German and Spanish. The quick brown fox "
    "plays with the lazy dog."
)

Step 4: Engaging a Model to Receive a Response
You can send the message to the Azure OpenAI model and invoke the translation process.
model.invoke([message])

Congratulations!
You have built your AI agent!
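Note that model.invoke returns a message object rather than a plain string: the generated text lives on its .content attribute. The sketch below uses a stub in place of a live Azure call so the pattern runs without credentials; with a real deployment you would simply write response = model.invoke([message]) and read response.content.

```python
# The invoke call in Step 4 returns an AIMessage; the generated text is on
# its `.content` attribute. The stub below only mimics that shape so the
# access pattern can run offline, without Azure credentials.

class FakeAIMessage:
    """Stand-in for LangChain's AIMessage: exposes a .content attribute."""
    def __init__(self, content: str):
        self.content = content

def extract_text(response) -> str:
    """Pull the model's generated text out of the response message."""
    return response.content

# With real credentials: response = model.invoke([message])
response = FakeAIMessage("German: ...\nSpanish: ...")
print(extract_text(response))
```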
Final Thoughts on Building Your Own Database Agent
Throughout this article, you learned the key differentiators of generative AI. The article highlighted the importance of retrieval-augmented generation (RAG) patterns, which enable AI models to connect with external data sources without the need for extensive retraining.
The article walked you through building AI agents using Python and Jupyter Notebook.
The next article in this AI course series will guide you through creating a data agent. This agent will interact with CSV data and SQL databases.
It will help expand your skillset and understanding of AI applications. Stay tuned!