How to Easily Install DeepSeek R1 on Your Computer (Without Sharing Your Data)
By The PyCoach
Over the past few days, everyone has been talking about DeepSeek. Some suggest it could be the next ChatGPT killer, while others point out potential pro-China censorship concerns. Whatever the controversy may be, the best way to evaluate this powerful AI is to test it yourself—locally and safely.
This guide walks you through a simple, privacy-respecting method for installing DeepSeek R1 on your own machine. No data sharing. No cloud. Just you, your computer, and a three-step installation process.
If you’re more of a visual learner, don’t worry! Check out my 2-minute video tutorial below to follow the process step by step.
This article is brought to you by Artificial Corner, my newsletter where I break down AI concepts in plain English. Join thousands of curious tech enthusiasts, developers, and professionals who are using AI to level up their work. Don’t forget to grab a copy of my free cheat sheets too!
Step 1: Install Ollama
The first step in installing DeepSeek R1 is to set up Ollama, an open-source platform designed to simplify running large language models (LLMs) right on your local machine. It’s lightweight, powerful, and very user-friendly.
- Go to the official Ollama website.
- Click on the Download button and select the download option for your operating system (Windows, macOS, or Linux).
- Once the download is complete, unzip the downloaded package if needed and open the application.
- Follow the on-screen prompts. When the “Welcome to Ollama” screen appears, click Next.
- Proceed to install the command-line interface (CLI), then finish the installation by clicking Finish.
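Before moving on, it’s worth confirming the CLI is available. A quick check from your terminal (this assumes the installer added the ollama command to your PATH, which it normally handles for you):

ollama --version

If this prints a version number, Ollama is installed correctly.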
Now that Ollama is installed, you’re ready to fetch your first model.
Step 2: Install DeepSeek Through Your Terminal
With Ollama ready to go, the next step is fetching the DeepSeek R1 model. You’ll do this directly through the terminal.
- On the Ollama Models Page, use the search bar to look for deepseek-r1.
- DeepSeek R1 comes in several sizes on Ollama, from smaller distilled variants (such as 1.5B, 7B, 8B, 14B, 32B, and 70B parameters) up to the full 671B model. Choose the version that best suits your computer’s hardware capabilities. Keep in mind that larger models require more RAM and processing power.
- Go to your terminal or command line interface and run the following command:
ollama run deepseek-r1
The CLI will start downloading the model and prepare it for use. Once completed, you’ll be able to run and interact with DeepSeek R1 right from your terminal.
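If you’d rather pick a specific size than rely on the default tag, you can pull it explicitly, confirm it’s installed, and test it with a one-off prompt. A minimal sketch, assuming your machine can comfortably handle the 7B variant:

ollama pull deepseek-r1:7b
ollama list
ollama run deepseek-r1:7b "Explain the difference between local and cloud-based LLMs."

Running ollama run without a prompt drops you into an interactive chat session instead, which you can exit with /bye.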
Step 3: Use DeepSeek Through a User Interface (Optional)
If you’re not a fan of using the terminal for everything, you can also connect DeepSeek R1 to a simple web-based user interface that gives you a more approachable environment for chats and prompts.
There are several open-source UIs compatible with Ollama, such as:
- Open WebUI, a self-hosted web interface that detects your local Ollama models out of the box.
- Chatbox, a desktop chat app that can be pointed at a local Ollama server.
To connect DeepSeek R1 to one of these interfaces, you may need to point the UI at Ollama’s local API or follow the integration steps in the respective UI repository.
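If you just want to verify that the local API a UI would connect to is up and serving your model, you can query it directly. A minimal sketch using curl, assuming Ollama is running on its default port 11434 and that you pulled deepseek-r1 in Step 2:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "In one sentence, what is a local LLM?",
  "stream": false
}'

Most Ollama-compatible UIs only need that same base URL (http://localhost:11434) to discover your installed models.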
Why Run DeepSeek R1 Locally?
There are many benefits to installing and running DeepSeek R1 on your own machine:
- Privacy: Your prompts and data stay on your device; nothing is sent to a remote server.
- Availability: Run the model anytime, even without internet access.
- Freedom: Avoid censorship, commercial restrictions, or throttled usage that often come with cloud-based platforms.
With Ollama and DeepSeek R1 working hand-in-hand, you’re now equipped with a cutting-edge open-source AI model that runs entirely under your control.
Final Thoughts
DeepSeek R1 is a powerful tool that helps you explore what’s possible with open-source language models—all without compromising privacy or relying on third-party cloud services. With Ollama as your LLM engine and an optional web-based UI for added convenience, you can take full ownership of your AI workflows.
Have questions or want more hands-on AI tutorials? Subscribe to Artificial Corner using the link below to stay updated with the latest tools, techniques, and AI breakthroughs—in plain English.