Using DeepSeek locally with the Continue VSCode extension

How to Use DeepSeek with Ollama and Continue in VSCode

A Step-by-Step Tutorial

Prerequisites

  1. VSCode installed on your machine.
  2. Basic familiarity with terminal commands.

Step 1: Install Ollama

First, install Ollama to run AI models locally.

For macOS/Linux:

curl -fsSL https://ollama.com/install.sh | sh  

For Windows (Preview):
Download the installer from Ollama.com and run it.

Start the Ollama service:

ollama serve  

(Keep this terminal running in the background.)
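
If you want to confirm the server is up before moving on, a quick check (assuming the default port, 11434) is:

curl http://localhost:11434

It should reply with “Ollama is running”.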


Step 2: Download the DeepSeek Model

Pull the DeepSeek model from Ollama’s library. For example, use deepseek-coder (adjust the version as needed):

ollama pull deepseek-coder:33b-instruct-q4_K_M  

(Replace with your preferred variant, e.g., 6.7b or 1.3b for lighter models.)
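
To confirm the download, you can list your local models and, optionally, send the model a quick prompt straight from the terminal (adjust the tag to whichever variant you pulled):

ollama list
ollama run deepseek-coder:33b-instruct-q4_K_M "Write hello world in Python"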


Step 3: Install Continue in VSCode

  1. Open VSCode.
  2. Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X).
  3. Search for “Continue” and install it.

Alternatively, open Quick Open (Ctrl+P / Cmd+P) and type:

ext install continue  

Step 4: Configure Continue to Use Ollama & DeepSeek

  1. Open the Continue configuration file in VSCode:

    • Press Ctrl+Shift+P (or Cmd+Shift+P on macOS).
    • Search for “Continue: Open config.json” and select it.
  2. Add Ollama as a model provider and specify DeepSeek:

{  
  "models": [  
    {  
      "title": "DeepSeek via Ollama",  
      "provider": "ollama",  
      "model": "deepseek-coder:33b-instruct-q4_K_M",  
      "baseUrl": "http://localhost:11434"  
    }  
  ]  
}  

(Match the model name to the one you downloaded in Step 2.)

  3. Save the file (Ctrl+S / Cmd+S).

Step 5: Test the Integration

  1. Create a test file (e.g., test.py).
  2. Write a comment prompting DeepSeek:
# Write a function to reverse a string in Python  
  3. Place your cursor on the comment line and press Ctrl+Shift+I (or Cmd+Shift+I on macOS) to trigger Continue.
  4. DeepSeek will generate code suggestions!
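
If nothing shows up, a useful sanity check is to call Ollama’s HTTP API directly, which tells you whether the problem is the model or the editor integration (assuming the default port and the tag from Step 2):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:33b-instruct-q4_K_M",
  "prompt": "# Write a function to reverse a string in Python",
  "stream": false
}'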

Troubleshooting Tips

  • Ollama Isn’t Running: Ensure ollama serve is running in the background.
  • Model Not Found: Double-check the model name in config.json and ensure you’ve pulled it via ollama pull.
  • Slow Responses: Use smaller DeepSeek variants (e.g., 1.3b) or upgrade hardware.
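
A few terminal checks that cover the cases above (shown for a default setup; adjust the model tag to the one you actually use):

curl http://localhost:11434      # should answer "Ollama is running"
ollama list                      # shows the models you have actually pulled
ollama pull deepseek-coder:1.3b  # grab a lighter variant if responses are slow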

By following these steps, you’ve transformed VSCode into an AI-powered IDE with DeepSeek and Ollama! 🚀

For more details, visit the Continue Documentation or Ollama’s Blog.


This, I guess, is a memory issue, right? 🙁

(screenshot of the error)

Did you run ollama serve? It has to be running in the background for the extension to work.

yes it is…

and in fact it is being found. Unsure if I am doing something wrong…

I figured it out: we also need to configure the tabAutocompleteModel section of the config:

"tabAutocompleteModel": {
    "title": "DeepSeek via Ollama",
    "provider": "ollama",
    "model": "deepseek-coder",
    "apiBase": "http://localhost:11434/"
  },

Here’s my config for the 1.3b model; you can adjust it accordingly.
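
For anyone who wants to see both pieces together, a minimal config.json along these lines might look like this (a sketch assuming the 1.3b tag and the default Ollama port; match the model names to whatever you pulled):

{
  "models": [
    {
      "title": "DeepSeek via Ollama",
      "provider": "ollama",
      "model": "deepseek-coder:1.3b",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "DeepSeek via Ollama",
    "provider": "ollama",
    "model": "deepseek-coder",
    "apiBase": "http://localhost:11434"
  }
}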


Still the same error… here is the top part of my config.json.

I believe this is a memory issue. Will try a lighter model and get back with results.

OK, managed it… I used this one: 6.7b-base. It even writes code in BASIC 😄, like the times of the Commodore 64!!!
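
For anyone following along, that variant can be pulled with (assuming the 6.7b-base tag in Ollama’s library):

ollama pull deepseek-coder:6.7b-base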

and removed the other one: ollama rm deepseek-coder:33b-instruct-q4_K_M

Thanks @Pratik_Patnaik and @tapan_sharma for helping!


How much space has it taken?


The one installed last is nearly 4 GB… the other one was much, much more…
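
If you want exact numbers, ollama list prints the size of each model stored locally:

ollama list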