Top 10 generative AI blogs, articles, or tutorials in 2024

IBM Developer - Feb 25 - Dev Community

Discover the most popular content for 2024's hottest topic: generative AI

By Michelle Corbin

This article was originally published on IBM Developer.

Generative AI is revolutionizing the way we build, create, and innovate. With advancements in large language models (LLMs), AI agents, and open-source platforms, there’s never been a better time to dive deep into this transformative field.

Whether you’re a developer eager to build AI applications, a data scientist exploring new forecasting methods, or an enthusiast looking to dive into the world of LLMs, we’ve curated a list of the top 10 blogs, articles, and tutorials to help you stay ahead of the curve and inspire your next AI project. Explore the best content that will shape your generative AI journey this year!

10. Developing a gen AI application using IBM Granite Code

In this tutorial, Developing a gen AI application using IBM Granite Code, you learn how to use the IBM Granite Code model as a code assistant to build a gen AI application using Python and Flask. Discover the benefits of running Granite Code locally, the power of using all open-source code, and just how much you can boost your productivity with Granite Code as your code assistant.

Try the Granite models for yourself by prompting them in the Granite Playground.

Explore more content on IBM Developer about the Granite models.

9. Generating SQL from text with LLMs

In this tutorial, Generating SQL from text with LLMs, you learn how to use a large language model (LLM) from the IBM Granite models to create valid SQL statements from plain-language descriptions of data operations using natural language processing (NLP). The goal is to convert text input into a structured representation and use that structured data to generate a semantically correct SQL query that can be executed on a database.
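
To make the two-step idea concrete, here is a minimal sketch of the second step: once an LLM has mapped natural language to a structured representation, rendering that structure as SQL is deterministic. The intermediate schema below (table/columns/filters) is a hypothetical simplification for illustration, not the tutorial's actual format.

```python
# Render a structured query intent (the kind of output an LLM might produce)
# as a SQL SELECT statement. The intent schema here is illustrative only.

def render_sql(intent: dict) -> str:
    """Render a structured query intent as a SQL SELECT statement."""
    cols = ", ".join(intent.get("columns", ["*"]))
    sql = f"SELECT {cols} FROM {intent['table']}"
    filters = intent.get("filters", [])
    if filters:
        clauses = " AND ".join(
            f"{f['column']} {f['op']} {f['value']!r}" for f in filters
        )
        sql += f" WHERE {clauses}"
    return sql

# Structured output an LLM might produce for the prompt:
# "Show the names of employees in the Sales department"
intent = {
    "table": "employees",
    "columns": ["name"],
    "filters": [{"column": "department", "op": "=", "value": "Sales"}],
}
print(render_sql(intent))  # SELECT name FROM employees WHERE department = 'Sales'
```

Keeping the LLM's output structured, rather than asking it to emit raw SQL, makes the generated query easier to validate before it ever reaches the database.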

Explore more content on IBM Developer about watsonx.ai.

8. Build a RAG agent to answer complex questions

In this tutorial, Build a RAG agent to answer complex questions, you learn how to build a RAG-based LLM agent (AI agent) that handles complex questions and interacts with external information sources such as vector databases and the internet. You are introduced to a sample RAG agent (with diagrams and a full Python implementation) and see how an example question is answered correctly by the sample RAG agent but incorrectly by ChatGPT 4o.
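
The retrieve-then-generate loop at the heart of any RAG agent can be sketched in a few lines. This toy version assumes a keyword-overlap retriever standing in for a real vector database, and a placeholder string standing in for the LLM call; the document set is made up for illustration.

```python
# Toy retrieve-then-generate loop: rank documents by word overlap with the
# query, then ground the (placeholder) generation step in the top result.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_answer(query: str, documents: list[str]) -> str:
    """Ground the answer in retrieved context (LLM call omitted)."""
    context = " ".join(retrieve(query, documents))
    # A real agent would send the context plus the query to an LLM here.
    return f"Context: {context} | Question: {query}"

docs = [
    "The Granite models are a family of open-source LLMs from IBM.",
    "Paris is the capital of France.",
]
print(rag_answer("What are the Granite models?", docs))
```

A production agent swaps the keyword retriever for embedding search over a vector database and routes the assembled context through an LLM, but the control flow is the same.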

Learn more about what AI agents are in this article. Read more about AI agents based on LLMs (LLM agents) in this blog.

Explore more content on IBM Developer about retrieval-augmented generation (RAG).

7. Leveraging CrewAI and IBM watsonx

In this blog, Leveraging CrewAI and IBM watsonx, you are introduced to CrewAI, a framework for orchestrating AI agents, and learn how to combine it with watsonx, IBM's portfolio of AI products that helps accelerate the impact of generative AI.

Explore more content on IBM Developer about watsonx.

6. Create a LangChain AI agent in Python using watsonx

In this tutorial, Create a LangChain AI agent in Python using watsonx, you learn how to create an AI agent using LangChain in Python with watsonx. You'll create a tool to return today's date and another tool to return today's Astronomy Picture of the Day using NASA's open source API.
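
As a plain-Python sketch of the two tools the tutorial wires into a LangChain agent: real LangChain code would wrap functions like these with its tool decorator and hand them to an agent executor. NASA's APOD endpoint and the `DEMO_KEY` placeholder below are real, but the URL is only built here, not requested.

```python
# Two standalone "tools" a LangChain agent could call. In the actual
# tutorial these would be registered with the agent framework; here they
# are shown as ordinary functions.

from datetime import date

def todays_date() -> str:
    """Tool 1: return today's date in ISO format (YYYY-MM-DD)."""
    return date.today().isoformat()

def apod_request_url(api_key: str = "DEMO_KEY") -> str:
    """Tool 2: build the request URL for NASA's Astronomy Picture of the Day."""
    return f"https://api.nasa.gov/planetary/apod?api_key={api_key}"

print(todays_date())
print(apod_request_url())
```

Exposing each capability as a small, well-described function is what lets the agent decide at run time which tool a user's question requires.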

Python, an open-source programming language, is arguably the most popular language for generative AI development.

Explore more content on IBM Developer about Python.

5. Contributing knowledge to open-source LLMs like the Granite models using the InstructLab UI

In this tutorial, Contributing knowledge to open-source LLMs like the Granite models using the InstructLab UI, you learn how to contribute to open-source large language models (LLMs), such as the IBM Granite models, using InstructLab UI. InstructLab is a community-based approach to building truly open-source LLMs.

Learn more about what InstructLab is and why developers need it in this article on IBM Developer.

Explore more content on IBM Developer about InstructLab.

4. Build a RAG application with watsonx.ai flows engine

In this tutorial, Build a RAG application with watsonx.ai flows engine, you learn how to enhance your applications with generative AI by building a question-answer application using JavaScript and IBM watsonx.ai flows engine.

Using the watsonx.ai flows engine CLI and SDK, you can more readily integrate generative AI into your own applications.

Explore this tutorial series on IBM Developer about using watsonx.ai flows engine.

3. Using the IBM Granite models for time series forecasting

In this tutorial, Using the IBM Granite models for time series forecasting, you learn how to use the Granite TinyTimeMixer (TTM) model, a compact time-series foundation model, to perform zero-shot prediction and fine-tuned forecasting on an air pollution data set. You learn about the benefits of using foundation models for time series forecasting tasks, highlighting their ability to handle varied dataset resolutions with minimal model capacity.
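
TTM itself is a pretrained foundation model, but it helps to see the kind of naive sliding-window baseline that zero-shot foundation-model forecasts are typically compared against. This sketch is a simple moving-average forecast, not anything from the tutorial, and the readings are made up.

```python
# Naive baseline: forecast each future point as the mean of the last
# `window` observed (or previously forecast) values.

def moving_average_forecast(series: list[float], window: int, steps: int) -> list[float]:
    """Forecast `steps` future points with a rolling moving average."""
    history = list(series)
    forecast = []
    for _ in range(steps):
        next_val = sum(history[-window:]) / window
        history.append(next_val)   # feed the forecast back in
        forecast.append(next_val)
    return forecast

# Hourly pollutant readings (illustrative values)
readings = [10.0, 12.0, 11.0, 13.0]
print(moving_average_forecast(readings, window=2, steps=2))  # [12.0, 12.5]
```

A foundation model like TTM aims to beat baselines of this sort without any task-specific training, which is what "zero-shot" means in the forecasting setting.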

Explore more content on IBM Developer about the Granite models. And, explore more content on IBM Developer about watsonx.ai.

2. Build a local AI co-pilot using IBM Granite Code, Ollama, and Continue

In this tutorial, Build a local AI co-pilot using IBM Granite Code, Ollama, and Continue, you learn how to set up a local AI co-pilot in Visual Studio Code using IBM Granite Code, Ollama, and Continue, overcoming common enterprise challenges such as data privacy, licensing, and cost. The setup includes open-source LLMs, Ollama for model serving, and Continue for in-editor AI assistance. Get your geek on and try building a local AI co-pilot of your own using local LLMs.

This tutorial was the prerequisite tutorial for our #10 tutorial, Developing a gen AI application using IBM Granite Code.

Explore more content on IBM Developer about the Granite models. And, explore more content on IBM Developer about watsonx.ai.

1. Contributing knowledge to the open source Granite model using InstructLab

Just edging out our #2 tutorial for 2024 is this tutorial, Contributing knowledge to the open source Granite model using InstructLab, where you learn how to contribute knowledge to open source large language models (LLMs) on your personal laptop or Mac device using the InstructLab environment. After completing this tutorial, you will have gained the necessary skills and knowledge to become a valuable contributor to the InstructLab community and the broader generative AI ecosystem.

Our #5 tutorial, Contributing knowledge to open-source LLMs like the Granite models using the InstructLab UI, built on the skills learned in this fundamental InstructLab tutorial for 2024.

Explore more content on IBM Developer about InstructLab.

Want more?

In 2024, learning about generative AI meant learning about large language models (LLMs) like the open-source Granite models, the open-source project InstructLab that you use to build open-source LLMs, and AI agents that use RAG, LangChain, Python, or watsonx.ai technologies. Stay tuned to see what 2025 brings for generative AI.

Check out this Red Hat blog about their top 10 articles in AI.

Check out more of our generative AI content on our Build with generative AI page.
