DeepSeek Extension for GitHub Copilot in VS Code 🚀

Wassim Chegham - Feb 13 - Dev Community

tl;dr: Run DeepSeek models in your GitHub Copilot Chat 🚀

DeepSeek has been getting a lot of buzz lately, and with a little setup you can start using it today in GitHub Copilot within VS Code. In this post, I'll walk you through how to install and run a VS Code extension I built, so you can take advantage of DeepSeek right on your machine. With this extension, you can use @deepseek to explore the deepseek-coder model. It's powered by Ollama, which enables seamless, fully offline interaction with DeepSeek models, giving you a local coding assistant that prioritizes privacy and performance.

In a future post I'll walk you through the extension code and explain how to call models hosted locally using Ollama. Feel free to subscribe to get notified.

Features and Benefits

Open-Source and Extendable

As an open-source project, the DeepSeek for GitHub Copilot extension is fully customizable. Advanced users can modify and extend its functionality, build from source, tweak configurations, and even integrate additional AI capabilities.

Local AI Processing

With the DeepSeek for GitHub Copilot extension, all interactions are processed locally on your machine: your prompts never leave your device, and there is no network round-trip to a remote server. This makes it an ideal solution for developers working on sensitive projects or in restricted environments.

Seamless Integration with GitHub Copilot Chat

The extension integrates natively with GitHub Copilot Chat, allowing you to invoke DeepSeek models effortlessly. If you're already familiar with GitHub Copilot, you'll find the workflow intuitive and easy to use. You can simply type @deepseek followed by your question to get started.
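For example, a prompt like this one (an invented example) routes your question to the local DeepSeek model instead of the default Copilot model:

```
@deepseek explain the difference between an interface and a type alias in TypeScript
```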

Powered by Ollama

Ollama, a lightweight AI model runtime, powers the execution of DeepSeek models. It simplifies model management by handling downloads and execution, so you can focus on coding.
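Under the hood, Ollama exposes a small HTTP API on localhost (port 11434 by default). The following TypeScript sketch shows how any client can send a chat request to it; this is an illustration of the underlying API, not necessarily how the extension itself is implemented:

```typescript
// Minimal sketch: calling a locally running Ollama server.
// Assumes Ollama is listening on its default port, 11434.
async function askDeepSeek(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-coder:1.3b", // model name used later in this article
      messages: [{ role: "user", content: prompt }],
      stream: false, // return a single JSON object instead of a stream
    }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: HTTP ${res.status}`);
  }
  const data = await res.json();
  return data.message.content; // the model's reply text
}
```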

Customizable Model Selection

You can configure the extension to use different DeepSeek models through a simple settings change, letting you pick the model size and capability that fit your hardware. Note that larger models may not run well, or at all, on a typical local machine; if you need more capacity, you can run them on Azure's infrastructure instead.
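Inside a VS Code extension, reading such a setting is a one-liner against the configuration API. This sketch uses the deepseek.model.name key shown in the configuration section below; the fallback value is just an illustration:

```typescript
import * as vscode from "vscode";

// Read the configured model name; fall back to a small default.
// "deepseek.model.name" is the settings key from this extension.
const modelName = vscode.workspace
  .getConfiguration("deepseek.model")
  .get<string>("name", "deepseek-coder:1.3b");
```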

Installation Guide

DeepSeek for GitHub Copilot requires Ollama to function. Ollama is an AI model runtime that lets you run and manage large language models efficiently on your local machine.

  1. Install Ollama: Download the installer from the Ollama website, run it, and make sure Ollama is running.
  2. Install the extension: The simplest way to get started is to install it directly from the Visual Studio Code Marketplace.
    • Open Visual Studio Code.
    • Navigate to the Extensions panel (Ctrl + Shift + X).
    • Search for DeepSeek for GitHub Copilot and click Install.
  3. Use the extension: Once installed, usage is straightforward:
    • Open the GitHub Copilot Chat panel.
    • Type @deepseek followed by your prompt to interact with the model.
    • Note: On the first run, the extension automatically downloads the DeepSeek model. This may take a few minutes, depending on your internet connection. If the chat doesn't respond, check that Ollama is running (see the sketch after this list).
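A quick way to verify that Ollama is reachable (a sketch, not part of the extension) is to hit its /api/tags endpoint, which lists the models already downloaded locally:

```typescript
// Returns true if an Ollama server answers on its default port.
async function isOllamaRunning(): Promise<boolean> {
  try {
    const res = await fetch("http://localhost:11434/api/tags");
    return res.ok;
  } catch {
    return false; // connection refused: Ollama is not running
  }
}
```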


Configuration and Customization

DeepSeek for GitHub Copilot allows users to configure the AI model through Visual Studio Code settings.

To change the DeepSeek model, update the settings.json file:

{ "deepseek.model.name": "deepseek-coder:1.3b" }

A list of available models can be found on the Ollama website.
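You can also pre-download a model so your first chat doesn't stall. Ollama's CLI does this with `ollama pull`, and the same operation is available over its HTTP API, as in this sketch (recent Ollama API docs use the `model` field; older versions used `name`):

```typescript
// Ask the local Ollama server to download a model before first use.
// stream: false makes the call wait until the download completes.
async function pullModel(model: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/pull", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Failed to pull ${model}: HTTP ${res.status}`);
  }
}
```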

Limitations and Workarounds

Current Limitations

  • The extension does not have access to your files in this version, so it cannot provide context-aware completions. This is because the DeepSeek models don't support function calling.
  • Performance is limited by your local machine; larger models require more RAM and CPU power.

Workarounds

  • To provide context for completions, manually copy and paste the relevant code into the chat (see the example after this list).
  • Optimize performance by selecting smaller DeepSeek models (such as deepseek-coder:1.3b) if you experience lag.
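For instance, a context-carrying prompt might look like this (an invented example, with the relevant code pasted in by hand):

```
@deepseek Why does this function return undefined for an empty array?

function first(xs: number[]): number | undefined {
  return xs[0];
}
```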

System Requirements

To run DeepSeek for GitHub Copilot Chat, ensure you have the following:

  • Visual Studio Code (latest version recommended)
  • Ollama app installed and running (Download from ollama.com)
  • Sufficient system resources
    • Minimum: 8GB RAM, multi-core CPU
    • Recommended: 16GB RAM, GPU acceleration (if available)

Conclusion

The DeepSeek for GitHub Copilot Chat extension is an excellent way to get a private, low-latency, fully offline AI assistant inside your editor.

🔗 Get Started Today: Install the DeepSeek for GitHub Copilot Chat extension and supercharge your GitHub Copilot Chat experience with AI, entirely offline! 🚀

Wassim Chegham (https://wassim.dev)
