Building Your First AI Agent with LangChain and Open APIs

Santhosh Vijayabaskar - Oct 24 - Dev Community

We've all been hearing about AI agents, and many of us aren't sure where to begin 🤔; no worries, you're in the right place! In this article, I'll introduce you to the world of AI agents and walk you through, step by step, how to build your first AI agent with LangChain.

LangChain is an incredibly useful tool for connecting AI models to external APIs and tools. In this guided tutorial, we will build our first agent and connect it to live weather data from an open API 🌦️ to make it more interactive and practical.

By the time we're done, you will have your own AI agent 🤖, which can chat, pull in live data, and do so much more!


🤖 What is an AI Agent? Let's Break It Down

An AI agent is like a supercharged virtual assistant that's always ready to help. Whether it's answering your questions, handling small tasks for you, or even making decisions, an AI agent is like having a digital helper at your disposal. It can do everything from fetching data to creating content, or even hold a conversation with you. Pretty cool, right? 😎

AI agents aren't just static; they're smart, dynamic, and capable of working on their own, thanks to the power of large language models (LLMs) like GPT-3 or GPT-4.

🧩 What is LangChain? A Developer-Friendly Powerhouse

LangChain is a developer-friendly framework that connects AI models (like GPT-3 or GPT-4) with external tools and data. It helps you create structured workflows, called chains, in which the AI agent can talk to APIs or databases to fetch information.
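As a quick taste, here is a minimal sketch of such a chain, assuming the langchain-openai package (installed in Step 1 below) and an OpenAI API key: a reusable prompt template is piped into a model, and the pair is invoked as one unit.

from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

# A reusable prompt with a placeholder filled in at runtime
prompt = PromptTemplate.from_template("In one sentence, what is {topic}?")

# Piping the prompt into the model builds a chain (LangChain Expression Language)
llm = OpenAI(api_key="your-openai-api-key")
chain = prompt | llm

print(chain.invoke({"topic": "an AI agent"}))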

Why LangChain?

  • Easy to use: It simplifies integrating large language models with other tools (Jira, Salesforce, calendars, databases, etc.).
  • Scalable: You can build anything from a basic chatbot to a complex multi-agent system.
  • Community-driven: With a large, active community, LangChain provides a wealth of documentation, examples, and support.

In our case, we're building a simple agent that can answer questions, and to make things cooler, it'll retrieve real-time data like weather information. Let's dive in!

Step 1: Setting Up Your Environment

In this section, let's set up our development environment.

1.1 Install Python (if you haven't already)

Make sure you have Python installed. You can download it from python.org. Once installed, verify it by running:

python --version

1.2 Install LangChain
Now let's install LangChain via pip (for those new to Python, pip is the package manager for Python packages). The OpenAI integration lives in a separate package, langchain-openai, so we install that as well. Open your terminal and run:

pip install langchain langchain-openai

1.3 Install OpenAI
We'll also be using the OpenAI API to interact with GPT-3, so you'll need the OpenAI Python client (it comes in as a dependency of langchain-openai, but installing it explicitly does no harm):

pip install openai

1.4 Set Up a Virtual Environment (Optional)
It's good practice to work in a virtual environment to keep your project dependencies separate. Ideally, create and activate it before running the pip install commands above:

python -m venv langchain-env
source langchain-env/bin/activate   # For Mac/Linux

# or for Windows
langchain-env\Scripts\activate

Step 2: Building Your First AI Agent

Now comes the fun part: let's build our first AI agent! In this step, we'll create an agent that can have a simple conversation using OpenAI's language model. For this, you'll need an API key from OpenAI, which you can get by signing up at OpenAI.
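A quick note on the key: rather than hard-coding it in your scripts, you can export it as an environment variable, which LangChain's OpenAI integration picks up automatically:

export OPENAI_API_KEY="your-openai-api-key"

The snippets below pass the key explicitly to keep the examples self-contained, but the environment variable is the safer habit.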

Here's a small snippet to create your first agent:

from langchain_openai import OpenAI

# Initialize the model (recent LangChain versions ship the OpenAI
# integration in the separate langchain-openai package)
llm = OpenAI(api_key="your-openai-api-key")

# Define a prompt for the agent
prompt = "What is the weather like in New York today?"

# Get the response from the AI agent
response = llm.invoke(prompt)
print(response)

In the above code, we're setting up a very basic agent that takes a prompt (a question about the weather) and returns a response from GPT-3. At this point, the agent doesn't actually retrieve live weather data; it's just generating a response based on the language model's knowledge.

Step 3: Connecting to an Open API (Weather API)

Now let's step things up by integrating real-time data into our agent. We're going to connect it to a weather API, which will allow the agent to retrieve live weather information 🌦️.

Here's how you do it.

  1. Get an API Key from OpenWeather
    Head over to OpenWeather and sign up for a free API key.

  2. Make the API Request
    In this next part, we'll modify our agent so that it fetches live weather data from OpenWeather's API, and then outputs it as part of the conversation.

import requests
from langchain_openai import OpenAI

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url, timeout=10).json()

    # Extract relevant data
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The current temperature in {city} is {temp}°C with {description}."

# Now use the LangChain LLM to fold this data into the agent's reply
llm = OpenAI(api_key="your-openai-api-key")
city = "New York"
weather_info = get_weather(city)
prompt = f"Tell me about the weather in {city}: {weather_info}"

response = llm.invoke(prompt)
print(response)

In the above code, the get_weather function makes a request to the OpenWeather API and extracts data like temperature and weather description.
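For reference, the fields we read sit in OpenWeather's JSON response roughly like this (trimmed, with illustrative values):

{
  "weather": [{ "description": "light rain" }],
  "main": { "temp": 12.3 }
}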

The response is then integrated into the AI agent's output, making it look like the agent is providing up-to-date weather information.
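Going one step further, you can let the model itself decide when to call get_weather instead of calling it up front. Below is a minimal sketch using LangChain's classic agent interface; note that initialize_agent has been deprecated in newer releases in favour of LangGraph, so treat this as an illustration of the pattern rather than the one true way:

from langchain.agents import AgentType, Tool, initialize_agent

# Wrap get_weather (defined above) as a tool the agent may choose to call
tools = [
    Tool(
        name="get_weather",
        func=get_weather,
        description="Returns the current weather for a given city name.",
    )
]

# A ReAct-style agent: the LLM decides when (and with what input) to call the tool
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
print(agent.run("What is the weather like in New York right now?"))

With verbose=True, the agent prints its intermediate reasoning steps, which is a nice way to watch the reason-act loop unfold.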

Step 4: Deploying Your AI Agent as an API

Now that our agent can chat and retrieve live data, let's make it accessible to others by turning it into an API. This way, anyone can interact with the agent through HTTP requests.

Using FastAPI for Deployment

FastAPI is a powerful web framework that makes it easy to create APIs in Python; you'll also need an ASGI server to run it (pip install fastapi uvicorn). Here's how we can deploy our agent using FastAPI:

from fastapi import FastAPI
from langchain_openai import OpenAI
import requests

app = FastAPI()

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url, timeout=10).json()
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The weather in {city} is {temp}°C with {description}."

llm = OpenAI(api_key="your-openai-api-key")

@app.get("/ask")
def ask_question(city: str):
    weather = get_weather(city)
    prompt = f"Tell me about the weather in {city}: {weather}"
    response = llm.invoke(prompt)
    return {"response": response}
Now you can run the API locally with uvicorn main:app --reload (assuming you saved the file as main.py) and access it by sending HTTP requests to http://localhost:8000/ask?city=New%20York. Note that the space in "New York" must be URL-encoded as %20.
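Once the server is up, you can test the endpoint from a second terminal. Here's a small sketch using the requests library (any HTTP client works):

import requests

# Query the locally running agent API; requests URL-encodes the city name for us
resp = requests.get("http://localhost:8000/ask", params={"city": "New York"})
print(resp.json()["response"])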

Conclusion: What's Next?

Congratulations! 🎉 You've just built your first AI agent from scratch and connected it to an open API to fetch real-time data. You've also deployed your agent as an API that others can interact with. From here, the possibilities are endless: you can integrate more APIs, build multi-agent systems, or deploy it on cloud platforms for broader use. 🚀

If you're ready for more 🔥 and want to explore advanced features of LangChain, like memory management for long conversations, or dive into multi-agent systems that handle more complex tasks, let me know in the comments below.

Have fun experimenting, and feel free to drop your thoughts in the comments below! 💬

šŸŒ You can also learn more about my work and projects at https://santhoshvijayabaskar.com
