I am sure we have all been hearing about AI agents and are not sure where to begin 🤔; no worries, you're in the right place! In this article, I am going to introduce you to the world of AI agents and walk you through, step by step, how to build your first AI agent with LangChain.
LangChain is an incredibly useful tool for connecting AI models to external APIs. In this guided tutorial, we will build our first agent and connect it to live weather data from an open API 🌦️ to make it more interactive and practical.
By the time we're done, you will have your own AI agent 🤖 that can chat, pull in live data, and do so much more!
🤖 What is an AI Agent? Let's Break it Down
An AI agent is like a supercharged virtual assistant that's always ready to help. Whether it's answering your questions, doing small tasks for you, or even making decisions, an AI agent is like having a digital helper at your disposal. It can do everything from fetching data to creating content, or even have a conversation with you. Pretty cool, right? 😎
AI agents aren't just static: they're smart, dynamic, and capable of working on their own, thanks to the power of large language models (LLMs) like GPT-3 or GPT-4.
🧩 What is LangChain? A Developer-Friendly Powerhouse
LangChain is a developer-friendly framework that connects AI models (like GPT-3 or GPT-4) with external tools and data. It helps create structured workflows where the AI agent can talk to APIs or databases to fetch information.
Why LangChain?
- Easy to use: It simplifies integrating large language models with other tools (Jira, Salesforce, calendars, databases, etc.).
- Scalable: You can build anything from a basic chatbot to a complex multi-agent system.
- Community-driven: With a large, active community, LangChain provides a wealth of documentation, examples, and support.
In our case, we're building a simple agent that can answer questions, and to make things cooler, it'll retrieve real-time data like weather information. Let's dive in!
Step 1: Setting Up Your Environment
In this section, let's set up our development environment.
1.1 Install Python (if you haven't already)
Make sure you have Python installed. You can download it from python.org. Once installed, verify it by running:
python --version
1.2 Install LangChain
Now let's install LangChain via pip. For those who are new to Python, pip is the package manager for Python packages. Open your terminal and run:
pip install langchain
1.3 Install OpenAI
We'll also be using the OpenAI API to interact with GPT-3, so you'll need to install the OpenAI Python client:
pip install openai
1.4 Set Up a Virtual Environment (Optional)
It's good practice to work in a virtual environment to keep your project dependencies separate (ideally, create and activate it before running the pip install commands above):
python -m venv langchain-env
source langchain-env/bin/activate # For Mac/Linux
# or for Windows
langchain-env\Scripts\activate
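While we're setting things up, it's also good practice to keep API keys out of your source code. Here's a minimal sketch that reads keys from environment variables instead of hardcoding them; the variable names (OPENAI_API_KEY, OPENWEATHER_API_KEY) are common conventions, not requirements:

```python
import os

# Read API keys from the environment instead of hardcoding them.
# Set them in your shell first, e.g.:
#   export OPENAI_API_KEY="sk-..."
#   export OPENWEATHER_API_KEY="..."
openai_key = os.environ.get("OPENAI_API_KEY", "")
weather_key = os.environ.get("OPENWEATHER_API_KEY", "")

if not openai_key:
    print("Warning: OPENAI_API_KEY is not set")
```

Later in this tutorial, you can pass these variables wherever the snippets show a placeholder key.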
Step 2: Building Your First AI Agent
Now comes the fun part: let's build our first AI agent! In this step, we'll create an agent that can have a simple conversation using OpenAI's language model. For this, you'll need an API key from OpenAI, which you can get by signing up at OpenAI.
Here's a small snippet to create your first agent:
from langchain.llms import OpenAI
# Initialize the model
llm = OpenAI(openai_api_key="your-openai-api-key")
# Define a prompt for the agent
prompt = "What is the weather like in New York today?"
# Get the response from the AI agent
response = llm(prompt)
print(response)
In the above code, we're setting up a very basic agent that takes a prompt (a question about the weather) and returns a response from GPT-3. At this point, the agent doesn't actually retrieve live weather data; it's just generating a response based on the language model's knowledge.
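Since we'll be asking the same kind of question about different cities, it helps to separate the prompt's fixed wording from its variable parts. LangChain provides a PromptTemplate class for exactly this; the core idea can be sketched with plain Python string formatting (a simplified stand-in, not LangChain's actual implementation):

```python
# A minimal stand-in for prompt templating: one question pattern,
# reused for different cities by filling in named placeholders.
def build_prompt(template: str, **values: str) -> str:
    """Fill named placeholders in a prompt template."""
    return template.format(**values)

template = "What is the weather like in {city} today?"
prompt = build_prompt(template, city="New York")
print(prompt)  # What is the weather like in New York today?
```

You could then pass the resulting prompt to llm(prompt) exactly as in the snippet above.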
Step 3: Connecting to an Open API (Weather API)
Now let's step things up by integrating real-time data into our agent. We're going to connect it to a weather API, which will allow the agent to retrieve live weather information 🌦️.
Here's how you do it.
Get an API Key from OpenWeather
Head over to OpenWeather and sign up for a free API key.
Make the API Request
Next, we'll modify our agent so that it fetches live weather data from OpenWeather's API, and then outputs it as part of the conversation.
import requests
from langchain.llms import OpenAI

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url).json()
    # Extract relevant data
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The current temperature in {city} is {temp}°C with {description}."

# Now use the LangChain LLM model to integrate this data
llm = OpenAI(openai_api_key="your-openai-api-key")
city = "New York"
weather_info = get_weather(city)
prompt = f"Tell me about the weather in {city}: {weather_info}"
response = llm(prompt)
print(response)
In the above code, the get_weather function makes a request to the OpenWeather API and extracts data like temperature and weather description.
The response is then integrated into the AI agentās output, making it look like the agent is providing up-to-date weather information.
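Real API responses don't always contain the keys we expect; for example, an invalid city name makes OpenWeather return an error payload instead of weather data. It's worth parsing defensively. Here's a hedged sketch: the field names match OpenWeather's documented response shape, but the sample payloads below are made up for illustration:

```python
def parse_weather(payload, city):
    """Safely extract temperature and description from an
    OpenWeather-style JSON payload, falling back to an error
    message if the expected fields are missing."""
    try:
        temp = payload['main']['temp']
        description = payload['weather'][0]['description']
    except (KeyError, IndexError):
        # OpenWeather error payloads carry a 'message' field
        reason = payload.get('message', 'unknown error')
        return f"Could not get weather for {city}: {reason}"
    return f"The current temperature in {city} is {temp}°C with {description}."

# Sample payloads (made up, but shaped like the real API responses)
ok = {"main": {"temp": 21.5}, "weather": [{"description": "clear sky"}]}
err = {"cod": "404", "message": "city not found"}
print(parse_weather(ok, "New York"))
print(parse_weather(err, "Atlantis"))
```

You could drop a helper like this into get_weather so a typo in the city name produces a readable message instead of a KeyError.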
Step 4: Deploying Your AI Agent as an API
Now that our agent can chat and retrieve live data, let's make it accessible to others by turning it into an API. This way, anyone can interact with the agent through HTTP requests.
Using FastAPI for Deployment
FastAPI is a powerful web framework that makes it easy to create APIs in Python (install it with pip install fastapi uvicorn). Here's how we can deploy our agent using FastAPI:
from fastapi import FastAPI
from langchain.llms import OpenAI
import requests

app = FastAPI()

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url).json()
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The weather in {city} is {temp}°C with {description}."

llm = OpenAI(openai_api_key="your-openai-api-key")

@app.get("/ask")
def ask_question(city: str):
    weather = get_weather(city)
    prompt = f"Tell me about the weather in {city}: {weather}"
    response = llm(prompt)
    return {"response": response}
Save this as main.py, run it locally with uvicorn main:app --reload, and access it by sending HTTP requests to http://localhost:8000/ask?city=New%20York (note that the space in the city name must be URL-encoded).
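Rather than encoding query strings by hand, you can let Python's standard library do it. A small sketch, assuming the server from the previous step is running at localhost:8000:

```python
from urllib.parse import urlencode

# Build the query string for our local /ask endpoint;
# urlencode handles the space in "New York" for us.
base_url = "http://localhost:8000/ask"
query = urlencode({"city": "New York"})
print(f"{base_url}?{query}")  # http://localhost:8000/ask?city=New+York
```

If you use the requests library as a client, requests.get(base_url, params={"city": "New York"}) performs the same encoding automatically.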
Conclusion: What's Next?
Congratulations! 🎉 You've just built your first AI agent from scratch and connected it to an open API to fetch real-time data. You've also deployed your agent as an API that others can interact with. From here, the possibilities are endless: you can integrate more APIs, build multi-agent systems, or deploy it on cloud platforms for broader use. 🚀
If you're ready for more 🔥 and want to explore advanced features of LangChain, like memory management for long conversations, or dive into multi-agent systems to handle more complex tasks, do let me know in the comments below.
Have fun experimenting, and feel free to drop your thoughts in the comments below! 💬
🌐 You can also learn more about my work and projects at https://santhoshvijayabaskar.com