AI agents are predicted to become one of the most exciting trends in the AI space. Unlike general models such as GPT or Claude, these agents are highly specialized in a single area. One particular area that interests me is advisory agents in the form of chatbots.
In this tutorial, we will build one using a new, scalable solution from Digital Ocean — in my opinion, one of the most innovative and affordable cloud platforms on the internet.
Why Digital Ocean
I discovered this platform by chance while setting up a new Kubernetes cluster. I was astonished by its all-in-one solution for building agents: from the versatility of models and a knowledge base powered by OpenSearch to a host of tools and guardrails. The guardrails are especially important when deploying real solutions to real problems.
Other Cloud Providers
There are many providers in this space, and competition is fierce. In my experience, we generally encounter two types of providers for AI agents/chatbots:
- SaaS Providers: These allow you to click and build workflows that leverage AI and your data, then publish it on their platform (or on-premise if it’s open source).
- Powerful Cloud Providers: Providers like AWS, Azure, and Google offer very capable solutions, especially for enterprises. However, these often come with a steep learning curve and more complex setups.
With that in mind, let’s dive in.
Project Overview
In this tutorial, we are building a Google search advisor based on data about Google search syntax, which helps users craft quite complex queries by combining search operators. Head over to Digital Ocean and sign up if you haven't already.
https://cloud.digitalocean.com/
Step 1: Create the Agent
Click on GenAI Platform from the services on the left side.
Then create an agent as depicted below. For demonstration purposes, the smallest model, Llama 3.1 Instruct 8B, is perfectly fine. For many use cases it is sufficient and very inexpensive: less than 20 cents per million tokens (input and output). For comparison, the cheapest OpenAI model, GPT-4o mini, costs 15 cents per million input tokens and 60 cents per million output tokens as of this writing.
https://cloud.digitalocean.com/gen-ai/agents/new
Step 2: Create a Knowledge Base
Once the agent is finished, we add our knowledge base—the data the agent will use to "consult" our users.
We upload the search syntax database, which is basically a text file with a set of rules.
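The exact contents of that file aren't shown here, but as an illustration (the wording and layout below are my own, not the actual file), a knowledge base for Google search syntax could simply be a plain-text list of operators and what they do:

```text
site:        Restrict results to one domain, e.g. site:wikipedia.org
intitle:     Match pages whose title contains a word, e.g. intitle:pricing
filetype:    Restrict results to a file type, e.g. filetype:pdf
"..."        Exact-phrase match, e.g. "vector database"
-            Exclude a term, e.g. jaguar -car
OR           Match either term, e.g. llama OR mistral
```

Plain text works well here because the indexer splits it into chunks, and each operator description is a natural, self-contained chunk.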
https://cloud.digitalocean.com/gen-ai/knowledge-bases/new?i=866497
When asked for the data source, select upload file:
https://cloud.digitalocean.com/gen-ai/knowledge-bases
What happens here?
In this step, the platform uploads the file and automatically creates an OpenSearch cluster, then indexes the data to make it available via natural language (so-called semantic search) to the agent (the model, basically).
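The platform handles all of this for you, but conceptually, semantic search boils down to embedding documents and queries as vectors and ranking documents by similarity. Here is a toy sketch in Python that mimics the idea with crude bag-of-words vectors instead of a real embedding model (OpenSearch uses proper dense embeddings and approximate nearest-neighbor search; the documents below are invented for illustration):

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real system would use a learned dense embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# The "knowledge base": indexed rule snippets
docs = [
    "site: restricts results to a single domain",
    "filetype: restricts results to a file type such as pdf",
    "intitle: matches pages whose title contains a word",
]
index = [(d, embed(d)) for d in docs]

# A natural-language question is embedded the same way and
# matched against the index; the best hit becomes model context.
query = embed("how do I limit my search to one domain")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])
```

The retrieved snippet is then injected into the model's context, which is what grounds the agent's answers in your data.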
Finally, take a look at the pricing. For a simple application like this, it costs just a few cents.
The indexing of your data source starts automatically.
Please note that the created OpenSearch cluster, in its default configuration with 2GB RAM, currently costs $19 per month. More details on pricing can be found here.
This is basically the only fixed cost in this setup that I am aware of. Everything else is on a per-use basis:
- The agent usage (see above)
- The indexing (a one-time cost per document you ingest)
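To put the per-use side of the bill in perspective, here is a back-of-the-envelope estimate. The $0.20-per-million-token rate comes from the model pricing quoted earlier; the tokens-per-turn and traffic figures are assumptions of mine, not platform numbers:

```python
# Assumptions (not official figures): token rate from the Llama pricing
# quoted above; turn size and traffic are rough guesses for illustration.
price_per_million_tokens = 0.20   # USD, input and output combined
tokens_per_turn = 500             # prompt + retrieved context + answer
turns_per_month = 10_000

monthly_tokens = tokens_per_turn * turns_per_month   # 5,000,000 tokens
agent_cost = monthly_tokens / 1_000_000 * price_per_million_tokens
print(f"${agent_cost:.2f} per month")  # $1.00 per month
```

Even at ten thousand chat turns a month, the model usage is around a dollar; at low volume, the fixed OpenSearch cluster dominates the bill.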
With that done, let’s head back to our agent and attach the knowledge base.
Wait until the indexing is finished; this might take 10–30 minutes or so, depending on the size of your document.
Then, go back to your agent and attach the knowledge base:
Now we have all the moving parts together: the agent plus the knowledge base. We are almost done. Let’s test the solution with a simple query using Google search syntax.
Here is the prompt:
sites that reference deepseek.com
The answer looks good for a start.
Note that without grounding the model in your knowledge base, you may get incorrect or random answers that are of little help to your users. That is why we built an agent in the first place :)
Everything looks good.
Step 3: Publish It on Your Website
The final step is publishing and sharing your chatbot with your users. If you have a website, it’s as simple as copying and pasting an embed code that places the chatbot in the bottom right corner of your site—as you might have seen on other websites. The embedded chatbot runs entirely on Digital Ocean, saving you the headache of deploying and integrating the bot. This is an incredibly awesome feature.
Scroll to the bottom of the page. You should see a snippet that you can embed on your homepage.
For testing purposes, we are launching a simple HTTP server that only shows this chatbot…
vscode ➜ /tmp/demo $ npx http-server
<!DOCTYPE html>
<html>
<body>
<script async
src="https://agent-e2f0c70935f72016e447-6r8xx.ondigitalocean.app/static/chatbot/widget.js"
data-agent-id="8665a20d-e257-11ef-bf8f-4e013e2ddde4"
data-chatbot-id="92vBefZTNeG-jWZxQbh1aRIzkgxsyltl"
data-name="google-search-advisor Chatbot"
data-primary-color="#031B4E"
data-secondary-color="#E5E8ED"
data-button-background-color="#0061EB"
data-starting-message="Hello! How can I help you today?"
data-logo="/static/chatbot/icons/default-agent.svg">
</script>
</body>
</html>
Now open http://localhost:3007 (or whichever port you chose).
You should now see the chatbot and be able to interact with it, just like in the playground.
That’s it.
We have built an AI agent that scales as needed without having to deal with complex infrastructure, letting you focus on your idea.
Wrap-Up
In this post, we created an AI agent based on Llama and integrated it with our knowledge base. After configuring the agent, we published it on our website.
You can now take this one step further by adding more knowledge bases, integrating additional tools (for example, Google search using the search syntax we created), and, of course, implementing guardrails to avoid misuse of the solution.