CoLlama πŸ¦™ - ollama as your AI coding assistant (local machine and free)

PaweΕ‚ Ciosek - Jan 1 - Dev Community

Hello πŸ™‹ first of all, Happy New Year! πŸŽ‰

TLDR

If you're in a hurry, here's a mind map to consume the content quickly. πŸ•’πŸ₯—

Click here to see the mind map in xmind


AI Coding Assistant

AI Code Assistants are rapidly gaining popularity in the tech industry. They are becoming an essential tool for programmers, providing assistance in writing code, debugging, and even generating code snippets. Mastering the use of an AI Code Assistant is becoming a necessary skill for modern developers.

Several AI Code Assistants are available on the market, such as GitHub Copilot, AWS CodeWhisperer, and Tabnine. Many more tools exist, each with its own features and capabilities.

However, most of these tools come with their own set of limitations. Many of them are not free, although they often offer trial versions for users to test out their capabilities. Additionally, these tools typically work by sending your code to an external server, which might raise privacy concerns for some users. Lastly, these tools are generally limited to answering programming-related questions and may not be able to assist with other types of inquiries.

What is ollama?

Ollama is a user-friendly tool designed to run large language models (LLMs) locally on a computer. This means it offers a level of security that many other tools can't match, as it operates solely on your local machine, eliminating the need to send your code to an external server. Plus, being free and open-source, it doesn't require any fees or credit card information, making it accessible to everyone. πŸ₯³

You can find more about ollama on their official website: https://ollama.ai/. It's designed to work in a completely independent way, with a command-line interface (CLI) that allows it to be used for a wide range of tasks. It's not just for coding - ollama can assist with a variety of general tasks as well.


One of the standout features of ollama is its library of models trained on different data, which can be found at https://ollama.ai/library. These models are designed to cater to a variety of needs, with some specialized in coding tasks. One such model is codellama, which is specifically trained to assist with programming tasks.

You can even train your own model. πŸ€“

Run ollama locally

You need at least 8GB of RAM to run ollama locally.

Running ollama locally is a straightforward process. The first step is to install it following the instructions provided on the official website: https://ollama.ai/download.


If you are a Windows user

If you are a Windows user, you might need to use the Windows Subsystem for Linux (WSL) to run ollama locally, as it's not natively supported on Windows. You can find instructions on how to install WSL on the Microsoft website: https://learn.microsoft.com/en-us/windows/wsl/install.

Once ollama is installed, the next step is to download the model that best fits your needs. For programming-related tasks, it's recommended to use the codellama model.



```shell
ollama pull codellama
```

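Once the pull finishes, you can verify that the model actually landed on your machine. Ollama runs a local HTTP server (on port 11434 by default) with an `/api/tags` endpoint that lists downloaded models; the small Python sketch below assumes that default port and endpoint, and uses only the standard library:

```python
import json
from urllib.request import urlopen

# ollama's default local address; adjust if you changed OLLAMA_HOST.
OLLAMA_URL = "http://localhost:11434"

def installed_models(tags_response: dict) -> list[str]:
    """Extract the model names from an /api/tags response payload."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Ask the running ollama server which models are downloaded."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return installed_models(json.load(resp))
```

With the ollama server running, `list_local_models()` should include an entry like `codellama:latest` after the pull above.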

After the model is downloaded, you can start using ollama.



```shell
ollama run codellama
```

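`ollama run` starts an interactive chat in the terminal, but the same local server also answers one-off HTTP requests, which is handy for scripting. The sketch below assumes ollama's default port (11434) and its `/api/generate` endpoint with streaming disabled:

```python
import json
from urllib.request import Request, urlopen

# ollama's default local address; adjust if you changed OLLAMA_HOST.
OLLAMA_URL = "http://localhost:11434"

def build_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "codellama") -> str:
    """Send a prompt to the local ollama server and return the completion text."""
    req = Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        # The non-streaming response carries the full completion in "response".
        return json.load(resp)["response"]
```

For example, `generate("Write a Python function that reverses a string")` returns the model's answer as a plain string, entirely from your own machine.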

How to integrate ollama with my editor?

Integrating ollama with your code editor can enhance your coding experience by providing AI assistance directly in your workspace. This can be achieved using the Continue extension, which is available for both Visual Studio Code and JetBrains editors. You can find the extension at https://continue.dev/.


Once the extension is installed, you'll need to configure it to work with ollama. This involves adding ollama to the extension's configuration file. In your home directory, look for the .continue folder (e.g., /Users/pciosek/.continue) and edit the config.json file. Add the ollama model to the "models" section as follows:



```json
{
  "models": [
    {
      "title": "CodeLlama",
      "model": "codellama",
      "provider": "ollama"
    }
  ]
}
```

More information about this configuration can be found at https://continue.dev/docs/reference/Model%20Providers/ollama.
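If you prefer to script this edit, here is a minimal Python sketch that inserts the same entry without clobbering models you may already have configured. It assumes the `~/.continue/config.json` location mentioned above:

```python
import json
from pathlib import Path

# The entry from the snippet above.
ENTRY = {"title": "CodeLlama", "model": "codellama", "provider": "ollama"}

def add_ollama_model(config_path: Path) -> dict:
    """Add the ollama model entry to Continue's config.json if it's missing."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    models = config.setdefault("models", [])
    # Skip the append if codellama is already configured, so reruns are safe.
    if not any(m.get("model") == ENTRY["model"] for m in models):
        models.append(ENTRY)
    config_path.write_text(json.dumps(config, indent=2))
    return config
```

Calling `add_ollama_model(Path.home() / ".continue" / "config.json")` is idempotent: running it twice leaves a single CodeLlama entry.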

After updating the configuration, restart your editor for the changes to take effect. You should now see ollama listed as a model in the extension's sidebar. πŸ₯³


Now you're ready to use ollama in your editor!

Two ways to use ollama in your editor

  • Open the extension's sidebar and start the conversation.
  • Inside the code editor, select some code and press (cmd/ctrl) + M to start the conversation. The selected code will be used as context for the conversation.

More about the extension can be found at https://continue.dev/docs/intro.

Below is an example of generating tests for a component.


The extension does not support code completion. If you know an extension that supports code completion, please let me know in the comments. πŸ™

Conclusion

AI Code Assistants are the future of programming. It's important that this technology is accessible to everyone, and ollama is a great example of that. It's free, open-source, and runs locally on your machine, making it a great choice for developers looking for an AI Code Assistant that is secure, free, and easy to use. πŸ₯³

Share your thoughts

What do you think about ollama? Do you use any other AI Code Assistants? Maybe did you use other models? Let me know in the comments below! πŸ™
