Hey, Chris. What hardware are you running your beefier local LLMs on? I can't run the DeepSeek model you mentioned on my M2 MacBook Air 15 with 24 GB RAM.
You can't run the native DeepSeek R1 in full on any consumer laptop. What you can run are distilled versions: smaller models, like Llama 3.3 70B, that were fine-tuned on R1's outputs. There's a Qwen 2.5 7B variant that almost any Mac with an M-series chip and 16 GB of RAM can run. Look in LM Studio for the R1 Distilled Qwen 2.5 7B at 4-bit quantization.
This is the model name: mlx-community/DeepSeek-R1-Distill-Qwen-7B-4bit. To be clear, this is Qwen 2.5 fine-tuned by DeepSeek to produce the same style of reasoning, but it's NOT DeepSeek R1 itself.
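For a rough sense of why the 4-bit 7B model fits on a 16 GB Mac while full R1 (671B parameters) doesn't, here's a back-of-envelope sketch. The 20% overhead figure is an assumption covering KV cache and runtime buffers, not an exact number:

```python
def approx_ram_gb(params_billions: float, bits_per_weight: int,
                  overhead: float = 0.2) -> float:
    """Rough memory footprint in GB for a quantized model's weights.

    Assumes weights dominate memory; overhead (assumed ~20%) covers
    KV cache and runtime buffers.
    """
    weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
    return weight_bytes * (1 + overhead) / 1e9

# A 7B model at 4-bit needs roughly 4 GB, comfortably inside 16 GB:
print(f"{approx_ram_gb(7, 4):.1f} GB")    # ≈ 4.2 GB

# Full DeepSeek R1 at 671B parameters, even at 4-bit, needs hundreds of GB:
print(f"{approx_ram_gb(671, 4):.0f} GB")  # ≈ 403 GB
```

On Apple Silicon the unified memory is shared with the OS and other apps, so you need headroom well beyond the model itself, which is why 16 GB is the practical floor for the 7B.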
Thank you, Sir.