I've been developing in Python and JavaScript, working on AI and non-AI projects, for quite some time now.
I've noticed that the ecosystems around AI development skew heavily towards Python. However, JavaScript/TypeScript has some clear benefits over Python.
- Performance: JavaScript/TypeScript often performs better in web-based applications due to its asynchronous nature and non-blocking I/O.
- Integration: Easier integration with web technologies makes creating AI-powered web applications more straightforward without switching languages.
- Package Management: Let's be honest: JavaScript's package management is superior to Python's.
The AI scene is currently brimming with Python libraries. Every other new library comes with native Python support.
However, we know JS still rules the web-dev realm, and having JS/TS support makes it super easy to integrate AI into web apps, improving both the front-end and back-end development experience.
So, if you are also searching for JS libraries to develop AI apps, look no further. I have compiled a curated list of open-source libraries for building awesome AI applications.
Feel free to explore their GitHub repositories, contribute to your favourites, and support them by starring the repositories.
1. Composio - Build Reliable Agents 10x Faster
If you have explored this space, you know that building reliable AI agents can be quite challenging, especially if you want to automate workflows involving external applications such as Discord, Slack, Calendar, etc.
This is where Composio comes into the picture. They are building the tooling infrastructure for building AI-powered applications.
You can integrate over 100 popular tools across business verticals, such as CRM, Productivity, Dev, HR, etc., with your AI agents to automate complex workflows.
They provide native support for JavaScript.
You can get started with Composio by running the following command.
npm install composio-core openai
# yarn add composio-core openai
# pnpm add composio-core openai
Define a method to let the user connect their GitHub account.
import { OpenAI } from "openai";
import { OpenAIToolSet } from "composio-core";

const toolset = new OpenAIToolSet({
  apiKey: process.env.COMPOSIO_API_KEY,
});

async function setupUserConnectionIfNotExists(entityId) {
  const entity = await toolset.client.getEntity(entityId);
  const connection = await entity.getConnection('github');

  if (!connection) {
    // If this entity/user hasn't connected their GitHub account yet,
    // initiate a new connection and wait for it to become active.
    const newConnection = await entity.initiateConnection('github');
    console.log("Log in via: ", newConnection.redirectUrl);
    return newConnection.waitUntilActive(60);
  }

  return connection;
}
Add the required tools to the OpenAI SDK and pass the entity name to the executeAgent function.
async function executeAgent(entityName) {
  const entity = await toolset.client.getEntity(entityName);
  await setupUserConnectionIfNotExists(entity.id);

  const tools = await toolset.get_actions({ actions: ["github_activity_star_repo_for_authenticated_user"] }, entity.id);
  const instruction = "Star a repo ComposioHQ/composio on GitHub";

  const client = new OpenAI({ apiKey: process.env.OPEN_AI_API_KEY });
  const response = await client.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [{
      role: "user",
      content: instruction,
    }],
    tools: tools,
    tool_choice: "auto",
  });

  console.log(response.choices[0].message.tool_calls);
  await toolset.handle_tool_call(response, entity.id);
}

executeAgent("joey")
Execute the code and let the agent do the work for you.
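Under the hood, handle_tool_call routes the model's tool calls to the right connected app. Here is a minimal, dependency-free sketch of that dispatch pattern in plain TypeScript; all names here are illustrative, not the Composio API:

```typescript
// An OpenAI-style tool call: a tool name plus JSON-encoded arguments.
type ToolCall = {
  function: { name: string; arguments: string };
};

// A registry mapping tool names to plain functions (stand-ins for real integrations).
const toolRegistry: Record<string, (args: any) => string> = {
  github_star_repo: (args) => `starred ${args.owner}/${args.repo}`,
};

function dispatchToolCalls(toolCalls: ToolCall[]): string[] {
  return toolCalls.map((call) => {
    const handler = toolRegistry[call.function.name];
    if (!handler) throw new Error(`Unknown tool: ${call.function.name}`);
    // Tool-call arguments arrive as a JSON string, so parse before dispatching.
    return handler(JSON.parse(call.function.arguments));
  });
}

const results = dispatchToolCalls([
  { function: { name: "github_star_repo", arguments: '{"owner":"ComposioHQ","repo":"composio"}' } },
]);
console.log(results); // ["starred ComposioHQ/composio"]
```

The real SDK does considerably more (auth, retries, schema validation), but the core loop is exactly this lookup-and-invoke.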
Composio works with popular frameworks like LangChain, LlamaIndex, CrewAI, etc.
For more information, visit the official docs, and for more complex examples, see the examples section of the repository.
Star the Composio.dev repository ⭐
2. Instructor-JS - Structured Data Extraction from LLMs
I always found extracting useful information from LLM responses a chore. But not anymore.
Instructor provides a simple, structured approach to extracting and validating LLM responses.
It has native support for both Python and JavaScript. For JavaScript, it uses Zod for data validation.
Here's how you can quickly get started with it.
npm i @instructor-ai/instructor zod openai
Extract information from LLM responses.
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";

const oai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? undefined,
  organization: process.env.OPENAI_ORG_ID ?? undefined,
});

const client = Instructor({
  client: oai,
  mode: "TOOLS",
});

const UserSchema = z.object({
  // Description will be used in the prompt
  age: z.number().describe("The age of the user"),
  name: z.string(),
});

// User will be of type z.infer<typeof UserSchema>
const user = await client.chat.completions.create({
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  model: "gpt-3.5-turbo",
  response_model: {
    schema: UserSchema,
    name: "User",
  },
});

console.log(user);
// { age: 30, name: "Jason Liu" }
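The point of response_model is that a malformed LLM reply fails loudly instead of silently leaking bad data into your app. A dependency-free sketch of that validation step in plain TypeScript (illustrative only, not the Instructor internals):

```typescript
// The shape we expect the LLM to return.
type User = { name: string; age: number };

// Parse a hypothetical LLM JSON reply and reject anything off-schema.
function parseUser(raw: string): User {
  const data = JSON.parse(raw);
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new Error("LLM response does not match the User schema");
  }
  return { name: data.name, age: data.age };
}

const parsed = parseUser('{"name": "Jason Liu", "age": 30}');
console.log(parsed); // { name: 'Jason Liu', age: 30 }

// A reply with a missing or mistyped field throws instead of propagating:
// parseUser('{"name": "Jason Liu"}') // throws
```

Zod does this with far richer types (nested objects, arrays, unions) and generates the JSON schema that steers the model in the first place.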
For more information, visit the official documentation page.
Star the Instructor-JS repository ⭐
3. CopilotKit - Build AI Copilot for React Apps
If you have an existing project, or are working on a new one, and want to equip it with AI capabilities, CopilotKit is the solution you need.
It is a ready-made copilot that you can integrate into your application or any codebase you have access to (OSS).
It offers React components like text areas, popups, sidebars, and chatbots to augment any application with AI capabilities.
Get started with CopilotKit using the following command.
npm i @copilotkit/react-core @copilotkit/react-ui
A CopilotKit provider must wrap all components that interact with CopilotKit. It's recommended to start with CopilotSidebar (you can swap to a different UI provider later).
"use client";

import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

export default function RootLayout({ children }) {
  return (
    // Use your Copilot Cloud public API key, or self-host instead
    <CopilotKit publicApiKey="your-public-api-key">
      <CopilotSidebar>
        {children}
      </CopilotSidebar>
    </CopilotKit>
  );
}
You can check their documentation for more information.
Star the CopilotKit repository ⭐
4. E2B - Code Interpreting for AI Apps
If you are building an AI web application that requires the LLMs to execute code, such as AI analysts and SWE agents, E2B's Code Interpreter is your go-to choice.
It provides a safe, secure cloud environment for LLMs to execute code.
It allows AI to run safely for long periods, using the same tools as humans, such as GitHub repositories and cloud browsers.
The Code Interpreter SDK lets you run AI-generated code in a small, secure VM (an E2B sandbox) built for AI code execution. Inside the sandbox is a Jupyter server you can control from the SDK.
Get started with E2B with the following command.
npm i @e2b/code-interpreter
Execute a program.
import { CodeInterpreter } from '@e2b/code-interpreter'
const sandbox = await CodeInterpreter.create()
await sandbox.notebook.execCell('x = 1')
const execution = await sandbox.notebook.execCell('x+=1; x')
console.log(execution.text) // outputs 2
await sandbox.close()
For more on how to work with E2B, visit their official documentation.
5. LanceDB - Performant Vector Database for AI Apps
AI applications are incomplete without vector databases. They help you store, query, and manage embeddings of unstructured data like text, images, video, and audio.
LanceDB is one of the best open-source vector databases available with native Javascript support.
It offers production-scale vector search, multi-modal support, zero-copy data access, automatic data versioning, GPU-powered querying, and more.
Get started with LanceDB.
npm install @lancedb/lancedb
Create and query a vector database.
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("data/sample-lancedb");

const table = await db.createTable("vectors", [
  { id: 1, vector: [0.1, 0.2], item: "foo", price: 10 },
  { id: 2, vector: [1.1, 1.2], item: "bar", price: 50 },
], { mode: 'overwrite' });

const query = table.vectorSearch([0.1, 0.3]).limit(2);
const results = await query.toArray();

// You can also search for rows by specific criteria without involving a vector search.
const rowsByCriteria = await table.query().where("price >= 10").toArray();
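Conceptually, vectorSearch ranks rows by the distance between each stored vector and the query vector. A dependency-free sketch of that nearest-neighbour ranking in plain TypeScript (using L2 distance; real vector databases use approximate indexes to make this fast at scale):

```typescript
// In-memory stand-ins for the table rows created above.
type Row = { id: number; vector: number[]; item: string };

const rows: Row[] = [
  { id: 1, vector: [0.1, 0.2], item: "foo" },
  { id: 2, vector: [1.1, 1.2], item: "bar" },
];

// Euclidean (L2) distance between two vectors of equal length.
function l2(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Brute-force nearest-neighbour search: sort all rows by distance, keep the top `limit`.
function nearestRows(query: number[], limit: number): Row[] {
  return [...rows]
    .sort((a, b) => l2(a.vector, query) - l2(b.vector, query))
    .slice(0, limit);
}

console.log(nearestRows([0.1, 0.3], 2).map((r) => r.item)); // [ 'foo', 'bar' ]
```

The brute-force version is O(n) per query; LanceDB's indexes exist precisely so you never pay that cost on millions of rows.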
You can find more on LanceDB in their documentation.
Star the LanceDB repository ⭐
6. Trigger.dev - Long-Running Background Jobs
Trigger.dev is an open-source platform and SDK that lets you create long-running background jobs with no timeouts. Write normal async code, deploy, and never hit a timeout.
It also lets you reliably call AI APIs with no timeouts, automatic retrying, and tracing, and you can keep using your existing SDKs with it.
import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

type Payload = { theme: string; description: string };

// Generate text and an image (OpenAI DALL-E 3) for a piece of content.
// generateTextPrompt and generateImagePrompt are prompt helpers defined elsewhere in your project.
export const generateContent = task({
  id: "generate-content",
  retry: {
    maxAttempts: 3,
  },
  run: async ({ theme, description }: Payload) => {
    const textResult = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: generateTextPrompt(theme, description),
    });

    if (!textResult.choices[0]) {
      throw new Error("No content, retrying…");
    }

    const imageResult = await openai.images.generate({
      model: "dall-e-3",
      prompt: generateImagePrompt(theme, description),
    });

    if (!imageResult.data[0]) {
      throw new Error("No image, retrying…");
    }

    return {
      text: textResult.choices[0],
      image: imageResult.data[0].url,
    };
  },
});
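The automatic retrying that maxAttempts: 3 buys you follows the standard retry-with-exponential-backoff pattern. A dependency-free sketch of it in plain TypeScript (illustrative only; Trigger.dev handles this, plus durability across deploys, for you):

```typescript
// Retry an async operation up to maxAttempts times, doubling the delay between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}

// Example: an operation that fails twice, then succeeds on the third attempt.
let calls = 0;
const result = await withRetries(async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}, 3, 10);
console.log(result, calls); // ok 3
```

Backoff matters for AI APIs in particular: rate-limit errors usually clear on their own if you wait before retrying.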
Star the Trigger.dev repository ⭐
7. Vercel AI SDK - Build AI Web Apps in Typescript
If I were to build a full-stack AI-powered application right now, I would pick Vercel AI SDK in a heartbeat.
It's a toolkit designed to let developers build AI web apps with React, Vue, Next.js, SvelteKit, and more.
Vercel AI SDK abstracts away LLM providers, eliminates boilerplate code for building chatbots, and provides interactive visualization components for a rich user experience.
It has three parts:
- AI SDK Core: A single API for generating text, structured data, and tool interactions with LLMs.
- AI SDK UI: Framework-independent hooks for quickly building chat and generative UIs.
- AI SDK RSC: A library for streaming generative UIs with React Server Components (RSC).
To get started, install the library.
npm install ai
Install the model provider of your choice.
npm install @ai-sdk/openai
Call OpenAI API.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai'; // Ensure the OPENAI_API_KEY environment variable is set

async function main() {
  const { text } = await generateText({
    model: openai('gpt-4-turbo'),
    system: 'You are a friendly assistant!',
    prompt: 'Why is the sky blue?',
  });
  console.log(text);
}

main();
For more on Vercel AI SDK, visit their documentation.
Star the Vercel AI SDK repository ⭐
8. Julep - Managed Backend for AI Apps
Building AI applications with long-term memory is, to say the least, a hard task. Julep, an open-source AI platform, is solving exactly this.
It's like Firebase or Supabase for AI, offering memory (user management), knowledge (built-in RAG and context management), tools (integration with Composio & others), and soon, tasks.
They also provide JavaScript support.
Check out their documentation for more.
Star the Julep AI repository ⭐
9. Gateway - Single API to Access 200+ LLMs
To build AI applications, you often need multiple LLMs from different providers. However, LLM providers have their own SDKs, which makes it difficult to manage multiple providers.
Gateway streamlines requests to 200+ open- and closed-source models with a unified API. It is also production-ready, with support for caching, fallbacks, retries, timeouts, and load balancing, and it can be edge-deployed for minimal latency.
To run it locally, run the following command in your terminal, and it will spin up the Gateway on your local system:
npx @portkey-ai/gateway
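The fallback behaviour a gateway provides boils down to trying providers in order until one succeeds. A dependency-free sketch of that logic in plain TypeScript (with stub providers; illustrative only, not the Gateway internals):

```typescript
// A provider is just a name plus an async call that returns a completion.
type Provider = { name: string; call: (prompt: string) => Promise<string> };

// Try each provider in order; return the first success, or throw with all errors collected.
async function withFallbacks(providers: Provider[], prompt: string): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.call(prompt);
    } catch (err) {
      errors.push(`${p.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}

// Example: the primary provider is rate limited, so the fallback answers.
const answer = await withFallbacks(
  [
    { name: "primary", call: async () => { throw new Error("rate limited"); } },
    { name: "secondary", call: async () => "hello from fallback" },
  ],
  "hi"
);
console.log(answer); // hello from fallback
```

A production gateway layers load balancing, caching, and retries on top of this loop, but the ordered-fallback core is the same.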
For more, refer to their repository.
Star the Gateway repository ⭐
Do you use, or have you built, some other cool AI tool or framework?
Let me know about them in the comments :)