Summary

DeepSeek is a new AI model family from China that has caused quite an uproar in the AI industry and the markets. While most of the attention has gone to the big ChatGPT-beating model, there are several smaller DeepSeek models that will run on a regular computer, and on my Mac the results are impressive.

How To Get DeepSeek Running on a Mac

There are two ways to get DeepSeek running on your Mac: Ollama (with a Docker interface) or LM Studio. I tried both, but the LM Studio method is by far the easiest.
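If you'd rather take the Ollama route, here's a minimal sketch using the official ollama Python package. The model tag below is an assumption (deepseek-r1:14b was a current tag at the time of writing); swap in whatever size suits your Mac.

```python
# Sketch of the Ollama route. Assumes the Ollama app is installed and running,
# and that you've already pulled a DeepSeek model (for example: ollama pull deepseek-r1:14b).
# Uses the official Python client: pip install ollama
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",  # pick whichever DeepSeek variant your Mac can handle
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```

The rest of this walkthrough sticks with LM Studio.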

First, head to the LM Studio download site and download and install the application. Then run it. You’ll see this onboarding screen the first time you run the app. In this case we’re offered a DeepSeek model with 7B parameters, and this is a fine place to start. However, I want to run a bigger model, so for now we’ll choose “Skip onboarding.”

LM Studio for Mac’s onboarding screen.

Since we have no models loaded, type “DeepSeek” into the search box at the top of the LM Studio window and press Enter.

I searched for “DeepSeek 14B,” which is the largest model my MacBook can reasonably run. You’ll have a number of options, many of which have been tuned by the community. Choose whichever you like and click “Download.”

The LM Studio search box on Mac.

After the model has completed its download, click on the search bar at the top of the LM Studio window again and you’ll see the models you’ve downloaded.

After selecting it, you’ll see the parameters for the model. For now, just go with the defaults. Click “Load model” and we’re ready to start asking the LLM questions.

The LM Studio model download screen.
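One nice extra: LM Studio can also serve the loaded model over a local, OpenAI-compatible API (switched on from the app’s server settings, on port 1234 by default), so you can script prompts instead of typing them into the chat window. Here’s a minimal sketch; the base URL, the dummy API key, and the model identifier are assumptions you’d adjust to match what LM Studio shows.

```python
# Sketch: query the model loaded in LM Studio through its local OpenAI-compatible server.
# Assumes the server is enabled in LM Studio. Requires: pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

reply = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-14b",  # placeholder: use the identifier LM Studio lists
    messages=[{"role": "user", "content": "Say hello and tell me which model you are."}],
)
print(reply.choices[0].message.content)
```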

My Mac Specs

Getting any of the DeepSeek models to run at usable speeds depends on the specifications of your MacBook. In my case, I’m using an M4 MacBook Pro, with an M4 Pro chip and 24GB of RAM. The RAM count is crucial, since the whole model needs to fit into your Mac’s unified memory, which the GPU shares, to run correctly, or at least at usable speeds.

This is why I can run the 14B model, since it fits easily into the 24GB of RAM available, but if you’re using an 8GB Mac, you’re limited to the 7B or smaller models, and even then things may not run all that well. Of course, there’s no harm in trying any model on your Mac. The worst that can happen is that it won’t work well or at all, but it might still be good enough for your needs.

LM Studio model selection.
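As a rough rule of thumb (and this is only a back-of-envelope estimate), a model quantized to about 4 bits per weight needs roughly half a gigabyte per billion parameters for the weights alone, plus headroom for the context window and macOS itself. A quick sketch of the arithmetic:

```python
# Back-of-envelope estimate of the memory a quantized model's weights need.
# Real usage is higher: the context (KV cache), the runtime, and macOS all want RAM too.
def approx_weights_gb(params_billion: float, bits_per_weight: float = 4.0) -> float:
    return params_billion * bits_per_weight / 8  # billions of params x bits, over 8 bits per byte

for size in (7, 14, 32, 70):
    print(f"{size:>3}B at 4-bit is roughly {approx_weights_gb(size):.1f} GB of weights")
```

That’s why a 14B download sits comfortably inside 24GB of memory, while an 8GB Mac is realistically limited to 7B and below.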

Comparing DeepSeek 14B With ChatGPT o3 Mini

So, how well does this work? The easiest way to give you an idea is to give the same prompts to DeepSeek 14B running on my Mac and ChatGPT o3 Mini.

Here’s the first prompt:

Write a short cover letter as Mickey Mouse applying for a job at a mousetrap factory.

Here are the results.

Both models created cogent, grammatically correct results, but o3 Mini clearly did a much better job of embodying the Mickey Mouse character.

Next I asked:

Explain solar power to me at the 5th-grade level.

The results are both decent, but the o3 Mini version is better written, in my opinion.

We could do this all day, and I have! My overall impression is that this 14B model, at least, is about as good as ChatGPT was when it was first released to the public.

Running DeepSeek using LM Studio on a Mac.

Compared to o3 Mini, however, it’s clearly not as smart. Still, it’s more than smart enough to do all the things I was happy to ask the original ChatGPT to do, and considering it’s running locally on my little laptop, that’s a huge leap forward. Even if it takes ten times as long to answer my questions, it’s still under a minute in most cases. Of course, your Mac’s specs will affect this one way or another.
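If you want to reproduce this kind of side-by-side test yourself, both endpoints speak the same chat-completions API, so the local client from the earlier sketch and the regular OpenAI client can be driven by one small loop. The model identifiers below are assumptions: the local one should match whatever LM Studio lists, and the hosted name is whatever OpenAI currently calls o3 Mini in its API.

```python
# Sketch: send the same prompt to local DeepSeek (via LM Studio) and to a hosted OpenAI model.
# Assumes the LM Studio server is running and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

PROMPT = "Explain solar power to me at the 5th-grade level."

endpoints = [
    ("DeepSeek 14B (local)",
     OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio"),
     "deepseek-r1-distill-qwen-14b"),  # placeholder local identifier
    ("o3 Mini (hosted)", OpenAI(), "o3-mini"),  # hosted model name may differ
]

for name, client, model in endpoints:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```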

Some Things To Keep in Mind

Now, while I encourage anyone to try out the local version of DeepSeek, which doesn’t record your private data on servers somewhere in China, there are some things to keep in mind.

First, be mindful of which model you’re using. There are already many tweaked versions of DeepSeek and some are going to be better or worse for your purposes. Second, the full-fat DeepSeek model that’s actually competing with the best ChatGPT models is a 671B monster that needs a huge computer system with hundreds of GBs of RAM to work. These little 7B and 14B models are nowhere near as smart, and so are more prone to generating nonsense.

There are many great reasons to run one of these LLMs locally, but don’t fall into the trap of thinking that, because the online versions are smart and accurate, these smaller models will be anywhere near as good. Still, this is more than just a curiosity. I, for one, will be leaving DeepSeek on my Mac, because even if it’s ten times slower and ten times less intelligent than the best of those data-center AIs, that’s still plenty smart for what most people need an LLM to do.