I built a Q&A chat app for house recommendations. The idea is simple: you can ask about your dream house using text or an image, and the app will find the most relevant house listings stored in a Vectorize index. Currently, I have 100 house listings in Bogor, Indonesia.
And you read that right: you can upload an image to perform a reverse image search, or more accurately a semantic search using an image embedding model.
Retrieval-Augmented Generation (RAG) for house recommendation.
This project uses multiple AI models to perform Q&A-style house search and recommendation using the RAG method. It's a more advanced use case of CloudFlare AI that integrates several CloudFlare services and AI models.
This is my third and final submission to the CloudFlare Hackathon. My previous submissions were about creating a storybook and a dev.to author recommendation; now I'm focusing on LLMs and RAG for Q&A.
RAG: Retrieval-Augmented Generation.
Building the RAG pipeline
This time, my idea was to build an AI assistant that gives house recommendations based on a text prompt. You enter a prompt describing the house you want, for example the number of bedrooms, bathrooms, and so on, and the model gives you house recommendations based on the house listings stored in the D1 database.
Three parts make up the RAG pipeline (a rough code sketch follows the list).
Query agent: this agent provides context or "memory" from earlier prompts, if any exist. It produces a new "refined prompt," hopefully enriched with context from the previous chat.
Semantic search: the refined prompt is fed to a text embedding model, and a vector search is performed against a Vectorize index, returning the most relevant documents containing house listings.
Answer agent: using the retrieved documents as context, this agent summarizes them and generates the final response to the user.
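Here is a minimal sketch of those three steps in a Worker. The binding names (`AI`, `HOUSE_INDEX`, `DB`), the `listings` table, and the model IDs are assumptions for illustration, not necessarily the exact ones used in the app.

```ts
// Minimal sketch of the three-step pipeline. Binding names, table name,
// and model IDs are placeholders, not the app's actual configuration.
export interface Env {
  AI: Ai;                      // Workers AI binding
  HOUSE_INDEX: VectorizeIndex; // Vectorize index of house listing embeddings
  DB: D1Database;              // D1 database holding the listing details
}

export async function recommend(env: Env, prompt: string, history = ''): Promise<string> {
  // 1. Query agent: fold earlier chat context into a standalone, refined prompt.
  const refined = await env.AI.run('@cf/meta/llama-2-7b-chat-int8', {
    messages: [
      { role: 'system', content: 'Rewrite the user request as a standalone house search query, using the prior conversation as context. Reply with the query only.' },
      { role: 'user', content: `${history}\n${prompt}` },
    ],
  });

  // 2. Semantic search: embed the refined prompt, query Vectorize, then fetch
  //    the matching listings from D1 (assuming vector ids map to listing ids).
  const embedding = await env.AI.run('@cf/baai/bge-base-en-v1.5', {
    text: [refined.response ?? prompt],
  });
  const { matches } = await env.HOUSE_INDEX.query(embedding.data[0], { topK: 3 });
  const ids = matches.map((m) => m.id);
  const listings = await env.DB.prepare(
    `SELECT * FROM listings WHERE id IN (${ids.map(() => '?').join(',')})`
  ).bind(...ids).all();

  // 3. Answer agent: summarize the retrieved listings into the final answer.
  const answer = await env.AI.run('@cf/meta/llama-2-7b-chat-int8', {
    messages: [
      { role: 'system', content: `Recommend a house to the user using only these listings:\n${JSON.stringify(listings.results)}` },
      { role: 'user', content: prompt },
    ],
  });
  return answer.response ?? '';
}
```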
Overall, it is the usual RAG pipeline you'll see in many tutorials on the internet. But can we improve it?
Prompting by text is mainstream, but what about images?
I found using text prompts to be effective, but I wanted to explore if using an image as a query could enhance the experience.
Currently, CloudFlare AI doesn't have an image embedding model available. To solve this, I considered using a third-party service for image embedding. However, I recalled that TensorFlow has a JS version (TensorFlow.js) that could potentially run in a Worker.
Initially, I faced difficulties with image decoding in TensorFlow.js because it is designed mainly for browsers, which have built-in image decoding capabilities. Fortunately, you can decode an image with a pure-JS library such as jpeg-js and then run a TensorFlow.js model in a CloudFlare Worker.
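For illustration, here is roughly what that decode-then-embed step looks like. I'm assuming a MobileNet-style feature-vector GraphModel loaded with `tf.loadGraphModel`; the model URL and the 224×224 input size are assumptions, not details from the actual app.

```ts
import * as tf from '@tensorflow/tfjs';
import jpeg from 'jpeg-js';

// Sketch of image embedding inside a Worker. The model URL and input size
// are assumptions; any TF.js feature-vector model follows the same shape.
export async function embedImage(imageBytes: Uint8Array, modelUrl: string): Promise<number[]> {
  // There is no browser <img> decoder in a Worker, so decode the JPEG bytes
  // to raw RGBA pixels with the pure-JS jpeg-js library.
  const { data, width, height } = jpeg.decode(imageBytes, { useTArray: true });

  // Downloading and initializing the model on every invocation is the slow part.
  const model = await tf.loadGraphModel(modelUrl);

  const embedding = tf.tidy(() => {
    // Build an [h, w, 4] tensor, drop the alpha channel, normalize to [0, 1].
    const rgba = tf.tensor3d(data, [height, width, 4], 'int32');
    const rgb = rgba.slice([0, 0, 0], [height, width, 3]).toFloat().div(255);
    // Resize to the model's expected input and add a batch dimension.
    const input = tf.image.resizeBilinear(rgb as tf.Tensor3D, [224, 224]).expandDims(0);
    return model.predict(input) as tf.Tensor;
  });

  const vector = Array.from(await embedding.data());
  embedding.dispose();
  return vector;
}
```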
BUT, it is slow. Really slow...
It takes about 5 seconds to compute a single image embedding. That is good enough for a prototype, but in the long run it makes for bad UX. The bottleneck appears to be that the Worker has to download the model and set everything up from scratch each time it runs an image embedding; since each Worker invocation is isolated, I cannot cache the model for future inference.
Now that we have the embedding of our image, we can continue with the semantic search and summarize the retrieved documents to generate a conclusive answer.
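In code, the image branch simply swaps the text embedding step for the image one; the rest of the pipeline stays the same. This sketch assumes the listing photos were indexed into their own Vectorize index with the same image model (so query and index vectors share an embedding space), and it reuses the placeholder names from the earlier sketches.

```ts
// Sketch of the image-query branch, reusing embedImage from above.
// Assumes a separate Vectorize index built from the same image model.
export async function searchByImage(
  imageIndex: VectorizeIndex,
  imageBytes: Uint8Array,
  modelUrl: string
) {
  const vector = await embedImage(imageBytes, modelUrl); // ~5 s on a Worker today
  const { matches } = await imageIndex.query(vector, { topK: 3 });
  // ...then fetch the matched listings and hand them to the answer agent,
  // exactly as in the text pipeline sketch.
  return matches;
}
```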
Compared to my previous submissions, this app is definitely more intricate, but fun nonetheless. I didn't even have to use LangChain to build this RAG pipeline. Overall, this project shows that CloudFlare AI, especially its text generation models, is quite good for building RAG apps. The only major problem I faced in this project was hallucination in the query agent, which sometimes reformulated the response as a question instead of a statement. Maybe my system prompt is not optimal yet.
The fact that we can also bring our own TensorFlow.js model to a CloudFlare Worker is a major advantage, as it simplifies the system architecture and allows us to run nearly everything on CloudFlare Workers. But keep in mind the drawback I mentioned above.
Also, big thanks to my friend @rasyidf for building the frontend app. I couldn't have done it without him.