Back in April 2023, Supabase announced its AI-themed hackathon for Launch Week 7. There were a lot of categories to win in, and for me it was the perfect way to spend my weekend. This blog post is about how I built an AI-based app over a single weekend that won the hackathon category for Best use of AI - yes, a weekend. That's how fast it has become to ship web-based software these days, thanks to tools like Next.js and Supabase.
The idea & the stack
The most important yet simplest part of building something for a hackathon is coming up with an idea that shines among all the other submissions. I spent about half a day thinking about what to build; I knew it would be something with OpenAI. My idea was an AI-based Twitter thread generator: you feed the app a blog post or some other piece of content, choose the number of tweets the thread should have, and that's it. OpenAI's model then processes the content and creates Twitter posts from it. I named the app Supathreads, and it's live at Supathreads.vercel.app.
NOTE: The app won't work right now as I've used up all my OpenAI credits, but you can see it in action in the video below.
Here's the stack I used to build the app:
- Next.js as the framework.
- Supabase as my database.
- Prisma ORM.
- NextAuth.js for auth.
- Tailwind CSS for styling.
I decided not to use Supabase's built-in Auth feature since my use case wasn't complex; I just needed a simple "Sign in with GitHub" OAuth button, which NextAuth.js handles nicely.
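For reference, a minimal NextAuth setup with a GitHub provider and the Prisma adapter looks roughly like the sketch below. This isn't the exact config from the project, and the GITHUB_ID / GITHUB_SECRET env variable names are placeholders.
// src/pages/api/auth/[...nextauth].ts (sketch, assuming database sessions via the Prisma adapter)
import NextAuth, { type NextAuthOptions } from "next-auth";
import GitHubProvider from "next-auth/providers/github";
import { PrismaAdapter } from "@next-auth/prisma-adapter";
import { prisma } from "@/lib/prisma";
export const authOptions: NextAuthOptions = {
  adapter: PrismaAdapter(prisma),
  providers: [
    GitHubProvider({
      clientId: process.env.GITHUB_ID ?? "",
      clientSecret: process.env.GITHUB_SECRET ?? "",
    }),
  ],
  callbacks: {
    // the app reads session.user.id later, so copy the database user id onto the session
    session({ session, user }) {
      if (session.user) {
        (session.user as { id?: string }).id = user.id;
      }
      return session;
    },
  },
};
export default NextAuth(authOptions);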
Setting up the project
To build the project, you'll need a Supabase account and an OpenAI account. Both can be created for free, though OpenAI's free credits are limited and expire after a certain period of time.
Once you've created both accounts, grab your OpenAI API key from platform.openai.com/account/api-keys.
Then create a project on Supabase and grab its connection string, which we'll use to connect it with Prisma. Go to the project's Settings -> Database, scroll down, and click on Node.js. You'll also need to replace [YOUR-PASSWORD] in the string with the password you set while creating the project.
Store both the OpenAI API key and the Supabase connection string in a .env file so that you can use them in the project.
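With that, the .env file ends up looking something like this. I'm assuming the conventional DATABASE_URL variable name for the Prisma datasource here, and the values are placeholders:
# .env (placeholder values)
OPENAI_API_KEY="sk-..."
# Supabase connection string for Prisma, with [YOUR-PASSWORD] swapped for your project password
DATABASE_URL="postgresql://postgres:[YOUR-PASSWORD]@db.your-project-ref.supabase.co:5432/postgres"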
Building the backend
To keep this post concise, I'll only focus on the app's main functionality: creating threads from content. Let's first see how the backend for AI generation works.
// src/app/api/thread/route.ts
import { OpenAIStream, OpenAIStreamPayload } from "@/lib/stream";
// OPENAI_API_KEY from env file
if (!process.env.OPENAI_API_KEY) {
throw new Error("Missing env var from OpenAI");
}
// Run this route on the Edge runtime so the response can be streamed
export const runtime = "edge";
export async function POST(req: Request) {
console.log("ok");
if (req.method !== "POST") {
return new Response("Method not allowed", { status: 405 });
}
const { prompt } = (await req.json()) as {
prompt?: string;
};
if (!prompt) {
return new Response("No prompt in the request", { status: 400 });
}
const payload: OpenAIStreamPayload = {
model: "gpt-3.5-turbo",
messages: [{ role: "user", content: prompt }],
temperature: 0.7,
top_p: 1,
frequency_penalty: 0,
presence_penalty: 0,
max_tokens: 2000,
stream: true,
n: 1,
};
const stream = await OpenAIStream(payload);
return new Response(stream);
}
This is the main API route responsible for generating threads. The prompt, along with the content to turn into a thread, is passed in the request body. The route then uses the OpenAIStream function to stream the generated thread to the frontend.
Back when I built this app, there weren't many good or easy options for streaming responses into the UI the way there are now thanks to the vercel/ai library. I researched a bit and found this code in one of @nutlope's projects on GitHub that does exactly what I needed.
// src/lib/stream.ts
import {
createParser,
ParsedEvent,
ReconnectInterval,
} from "eventsource-parser";
export type ChatGPTAgent = "user" | "system";
console.log("...")
export interface ChatGPTMessage {
role: ChatGPTAgent;
content: string;
}
export interface OpenAIStreamPayload {
model: string;
messages: ChatGPTMessage[];
temperature: number;
top_p: number;
frequency_penalty: number;
presence_penalty: number;
max_tokens: number;
stream: boolean;
n: number;
}
export async function OpenAIStream(payload: OpenAIStreamPayload) {
const encoder = new TextEncoder();
const decoder = new TextDecoder();
let counter = 0;
const res = await fetch("https://api.openai.com/v1/chat/completions", {
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ""}`,
},
method: "POST",
body: JSON.stringify(payload),
});
const stream = new ReadableStream({
async start(controller) {
// callback
function onParse(event: ParsedEvent | ReconnectInterval) {
if (event.type === "event") {
const data = event.data;
// https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream
if (data === "[DONE]") {
controller.close();
return;
}
try {
const json = JSON.parse(data);
const text = json.choices[0].delta?.content || "";
if (counter < 2 && (text.match(/\n/) || []).length) {
// this is a prefix character (i.e., "\n\n"), do nothing
return;
}
const queue = encoder.encode(text);
controller.enqueue(queue);
counter++;
} catch (e) {
// maybe parse error
controller.error(e);
}
}
}
// stream response (SSE) from OpenAI may be fragmented into multiple chunks
// this ensures we properly read chunks and invoke an event for each SSE event stream
const parser = createParser(onParse);
// https://web.dev/streams/#asynchronous-iteration
for await (const chunk of res.body as any) {
parser.feed(decoder.decode(chunk));
}
},
});
return stream;
}
This code block might look a bit scary, but what it does isn't hard to understand. It takes the server-sent events coming back from OpenAI, parses each one into a small chunk of text, and pushes those chunks into a ReadableStream that is returned to the client, which makes the data appear to be generated in real time on the frontend.
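For comparison, here's roughly what the same route looks like with the vercel/ai library I mentioned earlier. This is a sketch based on the AI SDK around its v2/v3 era (the API has since changed), not code from the actual project.
// src/app/api/thread/route.ts (hypothetical rewrite using the vercel/ai SDK)
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
export const runtime = "edge";
export async function POST(req: Request) {
  const { prompt } = (await req.json()) as { prompt?: string };
  if (!prompt) {
    return new Response("No prompt in the request", { status: 400 });
  }
  // ask OpenAI for a streaming chat completion and pipe it straight to the client
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    stream: true,
    messages: [{ role: "user", content: prompt }],
  });
  return new StreamingTextResponse(OpenAIStream(response));
}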
Building the frontend
Now that we have our backend ready, let's create the frontend for taking the content as input and showing the generated thread.
// src/components/thread-form.tsx
"use client";
import { useState } from "react";
import { z } from "zod";
import { UtilButton } from "@/components/ui/buttons";
import { SignOut } from "@/app/actions";
import { CopyButton } from "@/components/ui/buttons";
import { toast } from "react-hot-toast";
export const InputSchema = z.object({
article: z.string().trim().nonempty().max(17000),
numberOfTweets: z.number().max(6).min(1).default(4),
tweetType: z.enum(["short", "medium", "long"]).default("medium"),
});
type TweetType = "short" | "medium" | "long";
export default function ThreadForm() {
const [article, setArticle] = useState("");
const [numberOfTweets, setNumberOfTweets] = useState(4);
const [tweetType, setTweetType] = useState<TweetType>("medium");
const [generatedTweets, setGeneratedTweets] = useState<string>("");
const [loading, setLoading] = useState(false);
const [showSaveForm, setShowSaveForm] = useState(false);
const [threadTitle, setThreadTitle] = useState("");
const generateThread = async () => {
const id = toast.loading("Generating thread...");
const input = InputSchema.safeParse({
article,
numberOfTweets,
tweetType,
});
if (!input.success) {
console.log(input.error);
toast.error("Invalid form input!", {
id,
});
return;
}
const prompt = `Turn this article into an interesting twitter thread that catches people's attention, your tone should be like that of a famous twitter thread creator, the first tweet should be an introduction tweet to the thread, the thread should consist of ${input.data.numberOfTweets} tweets of ${input.data.tweetType} length each. Keep the links and people's name in the article as it is, and clearly label them like "1." and "2.",separate each tweet with a line gap, tweets should be written in first person - \n ${input.data.article}`;
setGeneratedTweets("");
setLoading(true);
const response = await fetch("/api/thread", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
prompt,
}),
});
if (!response.ok) {
console.error(response.statusText);
toast.error("Couldn't generate thread!", {
id,
});
setLoading(false);
return;
}
// This data is a ReadableStream
const data = response.body;
if (!data) {
return;
}
const reader = data.getReader();
const decoder = new TextDecoder();
let done = false;
while (!done) {
const { value, done: doneReading } = await reader.read();
done = doneReading;
const chunkValue = decoder.decode(value);
setGeneratedTweets((prev) => prev + chunkValue);
}
setLoading(false);
toast.success("Thread generated successfully!", {
id,
});
};
const toggleSaveForm = async () => {
setShowSaveForm((prev) => !prev);
};
const saveThread = async () => {
const id = toast.loading("Saving thread...");
if (!threadTitle.trim()) {
toast.error("Please enter a thread title!", {
id,
});
return;
}
const reqBody = {
title: threadTitle,
tweets: generatedTweets.split("\n\n").map((tweet, index) => ({
content: tweet.trim().slice(3),
})),
};
const response = await fetch("/api/threads", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(reqBody),
});
if (!response.ok) {
console.error(response.statusText);
toast.error("Couldn't save the thread!", {
id,
});
return;
}
const data = await response.json();
console.log(data);
toast.success("Thread saved successfully!", {
id,
});
};
return (
<div className="lg:w-[56%] w-full flex flex-col gap-2 justify-start items-center p-4 my-6">
<h2 className="text-xl font-bold w-full bg-clip-text text-transparent bg-gradient-to-b from-zinc-900 via-neutral-800 to-stone-900">
Create a new thread
</h2>
<div className="flex flex-col lg:flex-row md:flex-row gap-4 w-full">
<div className="w-full">
<label
htmlFor="numberOfTweets"
className="self-start text-sm mb-1 text-zinc-700"
>
Number of tweets (1 - 6)
</label>
<input
type="number"
className="w-full bg-zinc-200 text-zinc-900 focus:bg-zinc-200/50 transition-all border border-zinc-300/60 shadow-smfocus:border-zinc-500/50 duration-200 rounded-md p-2 outline-none"
max={6}
min={1}
placeholder="Number of tweets"
value={numberOfTweets}
onChange={(e) => setNumberOfTweets(parseInt(e.target.value))}
/>
</div>
<div className="w-full">
<label
htmlFor="tweetType"
className="self-start text-sm mb-1 text-zinc-700"
>
Tweet type
</label>
<select
value={tweetType}
className="w-full bg-zinc-200 text-zinc-900 focus:bg-zinc-200/50 transition-all border border-zinc-300/60 shadow-sm focus:border-zinc-500/50 duration-200 rounded-md p-2 outline-none"
onChange={(e) => setTweetType(e.target.value as TweetType)}
>
<option value="short">Short</option>
<option value="medium">Medium</option>
<option value="long">Long</option>
</select>
</div>
</div>
<div className="w-full">
<label
htmlFor="article"
className="self-start text-sm mb-1 text-zinc-700"
>
Article (max 17k characters)
</label>
<textarea
placeholder="Enter your article"
className="w-full bg-zinc-200 text-zinc-900 focus:bg-zinc-200/50 transition-all h-[400px] border border-zinc-300/60 shadow-sm focus:border-zinc-500/50 duration-200 rounded-md p-2 outline-none scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-500/50 scrollbar-thumb-rounded-md"
onChange={(e) => setArticle(e.target.value)}
/>
</div>
<div className="w-full flex gap-4">
<UtilButton onClick={generateThread} label="Generate" />
<SignOut />
</div>
{generatedTweets.trim() !== "" && (
<div className="w-full whitespace-pre-line text-zinc-200 flex flex-col gap-2 mt-4">
<h2 className="text-xl font-bold w-full bg-clip-text text-transparent bg-gradient-to-b from-zinc-900 via-neutral-800 to-stone-900">
Generated tweets
</h2>
<div className="relative flex flex-col gap-4 z-0 mb-2">
<div className="absolute w-full h-full left-5 z-[-100] top-0 bg-transparent border-l border-zinc-400/60"></div>
{generatedTweets.split("\n\n").map((tweet, index) => (
<p
key={index}
className="bg-zinc-200 shadow-md relative z-[100] border border-zinc-300/60 rounded-md p-4 pr-10 text-zinc-900"
>
{tweet.trim().slice(3)}
<CopyButton text={tweet.trim().slice(3)} />
</p>
))}
</div>
<div>
{showSaveForm && (
<div className="flex gap-4 mb-4">
<input
type="text"
placeholder="Thread name / title"
className="w-full bg-zinc-200 text-zinc-900 focus:bg-zinc-200/50 transition-all border border-zinc-300/60 shadow-sm focus:border-zinc-500/50 duration-200 rounded-md p-2 outline-none"
value={threadTitle}
onChange={(e) => setThreadTitle(e.target.value)}
/>
<UtilButton onClick={saveThread} label="Save" />
</div>
)}
{generatedTweets && (
<UtilButton
onClick={toggleSaveForm}
label={showSaveForm ? "Cancel" : "Save Thread"}
/>
)}
</div>
</div>
)}
</div>
);
}
This is the main component for the thread creation UI. You can see the imports and the predefined prompt for OpenAI; your prompt should be concise but to the point for best results. A few pieces of state handle the input changes. Once someone fills in all the inputs and clicks the Generate button, the whole input is validated against a Zod schema and then fed to the API route we created in the previous step. The generated response is split into separate tweets on the blank lines and shown in the UI as a thread.
Adding the Save functionality to threads
We have the app's main functionality working, but what if someone wants to save a generated thread to use later, or just bookmark it? This is where Supabase comes in. In the code block above, you can see a button in the form for saving the thread once the tweets have been generated. You name your thread, click Save, and it gets added to your dashboard!
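The post doesn't include the Prisma schema, but based on the queries that follow, the two models behind saving threads look roughly like this. It's a sketch: the model and field names are inferred from how they're used, and the User relation created by NextAuth's Prisma adapter is omitted.
// prisma/schema.prisma (sketch inferred from the queries in this post)
model Thread {
  id        String   @id @default(cuid())
  title     String
  createdAt DateTime @default(now())
  authorId  String
  tweets    Tweet[]
}
model Tweet {
  id       String @id @default(cuid())
  content  String
  threadId String
  thread   Thread @relation(fields: [threadId], references: [id])
}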
Let's look at the function and API I'm using for saving threads.
// src/components/thread-form.tsx
...
const saveThread = async () => {
const id = toast.loading("Saving thread...");
if (!threadTitle.trim()) {
toast.error("Please enter a thread title!", {
id,
});
return;
}
const reqBody = {
title: threadTitle,
tweets: generatedTweets.split("\n\n").map((tweet, index) => ({
content: tweet.trim().slice(3),
})),
};
const response = await fetch("/api/threads", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(reqBody),
});
if (!response.ok) {
console.error(response.statusText);
toast.error("Couldn't save the thread!", {
id,
});
return;
}
const data = await response.json();
console.log(data);
toast.success("Thread saved successfully!", {
id,
});
};
...
This function is used in the frontend to save the generated thread. It sends a request to my API route with the thread title and tweets in the request body. Let's look at the API route.
// src/pages/api/threads/index.ts
import type { NextApiRequest, NextApiResponse } from "next";
import { prisma } from "@/lib/prisma";
import { getServerSession } from "next-auth";
import { authOptions } from "../auth/[...nextauth]";
import { ThreadCreateSchema } from "@/lib/schemas";
async function handler(req: NextApiRequest, res: NextApiResponse) {
const session = await getServerSession(req, res, authOptions);
console.log("running");
if (!session || !session.user) {
res.status(401).json({ error: "Unauthorized" });
return;
}
if (req.method === "POST") {
const parsedBody = ThreadCreateSchema.safeParse(req.body);
if (!parsedBody.success) {
res.status(400).json({ error: "Invalid request body" });
return;
}
try {
const thread = await prisma.thread.create({
data: {
title: parsedBody.data.title,
authorId: session.user.id,
tweets: {
create: parsedBody.data.tweets.map((tweet) => ({
content: tweet.content,
})),
},
},
select: {
id: true,
},
});
res.status(201).json(thread);
} catch (error) {
res.status(500).json({ error: "Couldn't create thread" });
}
} else {
res.status(405).json({ error: "Method not allowed" });
}
}
export default handler;
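The ThreadCreateSchema imported from @/lib/schemas isn't shown in the post. Judging by the request body the frontend sends, it's presumably something along these lines (a sketch, not the actual schema):
// src/lib/schemas.ts (sketch inferred from reqBody in thread-form.tsx)
import { z } from "zod";
export const ThreadCreateSchema = z.object({
  title: z.string().trim().nonempty(),
  tweets: z
    .array(z.object({ content: z.string().trim().nonempty() }))
    .min(1)
    .max(6),
});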
You can see that the request body is first validated with Zod and, if parsing succeeds, the thread is added to our Supabase database through Prisma. Now let's create the dashboard UI to show the user all of their saved threads.
// src/app/dashboard/page.tsx
import type { Thread } from "@prisma/client";
import { z } from "zod";
import { getServerSession } from "next-auth";
import { authOptions } from "@/pages/api/auth/[...nextauth]";
import { prisma } from "@/lib/prisma";
import { SignIn } from "../actions";
import type { Metadata } from "next";
export const metadata: Metadata = {
title: "Dashboard"
}
const ThreadSchema = z.array(
z.object({
id: z.string(),
title: z.string(),
createdAt: z.date(),
})
);
async function getDashboardData() {
const session = await getServerSession(authOptions);
if (!session?.user) {
throw new Error("You need to be logged in to access this page");
}
const data = await prisma.thread.findMany({
where: {
authorId: session.user.id,
},
orderBy: {
createdAt: "desc",
}
});
const threads = ThreadSchema.safeParse(data);
if (!threads.success) {
throw new Error("Invalid response from server");
}
return {
threads: threads.data,
session,
};
}
export default async function Dashboard() {
const { threads, session } = await getDashboardData();
return (
<>
{session?.user ? (
<div className="lg:w-[56%] w-full p-4 my-6">
<h2 className="text-3xl font-bold w-full bg-clip-text text-transparent bg-gradient-to-b from-zinc-900 via-neutral-800 to-stone-900 mb-4">
Your Threads
</h2>
<div className="flex flex-col gap-4 w-full ">
{threads?.map((thread) => (
<div
key={thread.id}
className="p-4 bg-zinc-200 rounded-md shadow-md border border-zinc-300/60"
>
<a
className="text-lg font-semibold cursor-pointer text-zinc-950 hover:underline underline-offset-2"
href={`/threads/${thread.id}`}
>
{thread.title}
</a>
<p className="text-zinc-700 mt-4 text-sm">
created {thread.createdAt.toLocaleDateString()}
</p>
</div>
))}
</div>
</div>
) : (
<div className="flex flex-col items-center justify-center h-screen">
<h2 className="text-2xl font-semibold text-zinc-100 mb-4">
You need to be logged in to create a thread
</h2>
<SignIn />
</div>
)}
</>
);
}
Here you can see that I first fetch all of the user's saved threads with a Prisma query. This is a React Server Component, since I'm using Next.js 13's app directory. Once the data is loaded, I map over the threads to show them in the dashboard.
Finishing up
Using Next.js 13 and Supabase with Prisma helped me move fast and ship this idea within a weekend. Both Next.js and Supabase have great DX and great documentation; whenever I felt stuck, the docs were enough to solve my issue (most of the time).
This project isn't a very complex app, nor one packed with features. But it still won the hackathon category for Best use of AI because the idea was unique and I focused only on the core functionality instead of adding a pile of features that would overshadow the app's main purpose. Your goal for a hackathon should be to build an app that demonstrates and implements your idea clearly, in a way others can understand and use :)
Resources & Links
Thanks a lot for reading the blog post!