Canonical Rankings

Best Macs for this model

DeepSeek R1 Distill Qwen 32B, ranked across the Mac lineup at each machine's best practical quantization, using the strongest runtime evidence available. A historical baseline is selected; the model picker focuses on current-market choices.

29 Macs are ranked, each row using the strongest current runtime evidence; 27 other historical models are hidden. Static paths cover only canonical model pages; sort and quantization stay as query state.


| Rank | Mac | Score | Quant | Tok/s | Runtime | Fits | Headroom | Context | Evidence | Price |
|------|-----|-------|-------|-------|---------|------|----------|---------|----------|-------|
| 1 | Mac Studio M3 Ultra 256GB | 361 | 8bit | 18.0 | Ollama | Fits | 223.0 GB | 131k | Estimated | $7,499 |
| 2 | Mac Pro M2 Ultra 192GB | 297 | 8bit | 18.0 | Ollama | Fits | 159.0 GB | 131k | Estimated | $6,999 |
| 3 | Mac Studio M4 Max 128GB | 233 | 8bit | 18.0 | Ollama | Fits | 95.0 GB | 131k | Estimated | $4,499 |
| 4 | MacBook Pro M5 Max 128GB 16-inch | 233 | 8bit | 18.0 | Ollama | Fits | 95.0 GB | 131k | Estimated | $5,399 |
| 5 | MacBook Pro M4 Max 128GB 16-inch | 233 | 8bit | 18.0 | Ollama | Fits | 95.0 GB | 131k | Estimated | $5,999 |
| 6 | Mac Studio M3 Ultra 96GB | 201 | 8bit | 18.0 | Ollama | Fits | 63.0 GB | 131k | Estimated | $3,999 |
| 7 | Mac Studio M4 Max 64GB | 169 | 8bit | 18.0 | Ollama | Fits | 31.0 GB | 96k | Estimated | $2,999 |
| 8 | MacBook Pro M4 Max 64GB 16-inch | 169 | 8bit | 18.0 | Ollama | Fits | 31.0 GB | 96k | Estimated | $4,499 |
| 9 | Mac Mini M4 Pro 48GB | 153 | 8bit | 18.0 | Ollama | Fits | 15.0 GB | 40k | Estimated | $1,599 |
| 10 | MacBook Pro M4 Pro 48GB 14-inch | 153 | 8bit | 18.0 | Ollama | Fits | 15.0 GB | 40k | Estimated | $2,499 |
| 11 | Mac Studio M4 Max 48GB | 153 | 8bit | 18.0 | LM Studio | Fits | 15.0 GB | 40k | Estimated | $2,499 |
| 12 | MacBook Pro M4 Pro 48GB 16-inch | 153 | 8bit | 18.0 | Ollama | Fits | 15.0 GB | 40k | Estimated | $2,999 |
| 13 | MacBook Pro M4 Max 48GB 14-inch | 153 | 8bit | 18.0 | LM Studio | Fits | 15.0 GB | 40k | Estimated | $3,499 |
| 14 | MacBook Pro M4 Max 48GB 16-inch | 153 | 8bit | 18.0 | LM Studio | Fits | 15.0 GB | 40k | Estimated | $3,999 |
| 15 | Mac Studio M4 Max 36GB | 141 | Q6_K | 18.0 | Ollama | Fits | 8.5 GB | 21k | Estimated | $1,999 |
| 16 | MacBook Pro M4 Max 36GB 14-inch | 141 | Q6_K | 18.0 | Ollama | Fits | 8.5 GB | 21k | Estimated | $2,999 |
| 17 | MacBook Pro M4 Max 36GB 16-inch | 141 | Q6_K | 18.0 | Ollama | Fits | 8.5 GB | 21k | Estimated | $3,499 |
| 18 | Mac Mini M4 32GB | 139 | 6bit | 18.0 | Ollama | Fits | 6.6 GB | 16k | Estimated | $799 |
| 19 | MacBook Air M4 32GB 13-inch | 139 | 6bit | 18.0 | Ollama | Fits | 6.6 GB | 16k | Estimated | $1,499 |
| 20 | MacBook Air M4 32GB 15-inch | 139 | 6bit | 18.0 | Ollama | Fits | 6.6 GB | 16k | Estimated | $1,699 |
| 21 | Mac Mini M4 24GB | 130 | Q4_K_M | 18.0 | Ollama | Fits | 4.0 GB | 10k | Estimated | $599 |
| 22 | MacBook Air M4 24GB 13-inch | 130 | Q4_K_M | 18.0 | Ollama | Fits | 4.0 GB | 10k | Estimated | $1,299 |
| 23 | Mac Mini M4 Pro 24GB | 130 | Q4_K_M | 18.0 | Ollama | Fits | 4.0 GB | 10k | Estimated | $1,399 |
| 24 | MacBook Air M4 24GB 15-inch | 130 | Q4_K_M | 18.0 | Ollama | Fits | 4.0 GB | 10k | Estimated | $1,499 |
| 25 | MacBook Pro M4 Pro 24GB 14-inch | 130 | Q4_K_M | 18.0 | Ollama | Fits | 4.0 GB | 10k | Estimated | $1,999 |
| 26 | MacBook Pro M4 Pro 24GB 16-inch | 130 | Q4_K_M | 18.0 | Ollama | Fits | 4.0 GB | 10k | Estimated | $2,499 |
| 27 | Mac Mini M4 16GB | 99 | mlx-dynamic-2.7bpw | 18.0 | Ollama | Fits | 3.2 GB | 11k | Estimated | $499 |
| 28 | MacBook Air M4 16GB 13-inch | 99 | mlx-dynamic-2.7bpw | 18.0 | Ollama | Fits | 3.2 GB | 11k | Estimated | $1,099 |
| 29 | MacBook Air M4 16GB 15-inch | 99 | mlx-dynamic-2.7bpw | 18.0 | Ollama | Fits | 3.2 GB | 11k | Estimated | $1,299 |

Every row shares the same templated rationale: the listed quantization is the current best practical option for that machine, the 18.0 tok/s figure is estimated from nearby benchmark coverage, and the Headroom column shows the unified memory left over at that quantization.
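The Fits and Headroom figures in the ranking follow from simple arithmetic on model size. A back-of-envelope sketch, assuming weight memory is roughly params times bits-per-weight divided by 8; real GGUF/MLX files add metadata and mixed-precision layers, and the effective bits-per-weight values for the K-quants below are approximations, so the site's measured figures will differ slightly:

```python
# Rough memory math behind the Quant and Headroom columns above.
# Assumption (not from the page): weight footprint ≈ params × bpw / 8.

PARAMS_B = 32.8  # total parameters, in billions (from the model specs)

def weight_gb(bits_per_weight: float) -> float:
    """Approximate weight footprint in GB at a given quantization."""
    return PARAMS_B * bits_per_weight / 8

def headroom_gb(total_ram_gb: float, bits_per_weight: float) -> float:
    """Unified memory left over after loading the weights."""
    return total_ram_gb - weight_gb(bits_per_weight)

# Effective bits-per-weight for K-quants are approximate, for illustration.
for ram, bpw, label in [(256, 8.0, "8bit"), (36, 6.6, "Q6_K"), (24, 4.85, "Q4_K_M")]:
    print(f"{label:7s} on {ram:3d} GB Mac: ~{weight_gb(bpw):.1f} GB weights, "
          f"~{headroom_gb(ram, bpw):.1f} GB headroom")
```

These estimates land close to the table's figures (256 GB minus ~32.8 GB of 8-bit weights leaves roughly the 223 GB shown), but the site's numbers likely come from measured file sizes rather than this formula.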

DeepSeek R1 Distill Qwen 32B — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: Q4_K - Medium

  • Benchmark rows: 3
  • Chip tiers covered: 3
  • Fastest avg tok/s: 27.0 (M5 Max, 64 GB)
  • Minimum RAM observed: (value not shown)

Fastest published result is 27.0 tok/s on an M5 Max (64 GB) at Q4_K - Medium. Published runtimes include LM Studio and Ollama. Start with Rankings for the decision, then use the raw rows below to audit the evidence.

Based on 3 external benchmarks; no lab runs yet.

Published runtimes: LM Studio, Ollama.

  • Total params: 32.8B
  • Active params: Dense
  • Context window: 131,072
  • Release date: 2025-01-20
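The Context column in the ranking shrinks as RAM drops because the KV cache grows linearly with context length. A minimal sketch, assuming Qwen2.5-32B-shaped attention (64 layers, 8 KV heads of head dimension 128 under grouped-query attention; these config values are taken from the base model's published card and should be verified there):

```python
# KV cache sizing: per token, each layer stores K and V for every KV head.
# Config values below are assumptions based on Qwen2.5-32B, not this page.

LAYERS, KV_HEADS, HEAD_DIM = 64, 8, 128

def kv_cache_gb(context_tokens: int, bytes_per_elem: int = 2) -> float:
    """FP16 KV cache size in GB for a given context length."""
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem  # K and V
    return context_tokens * per_token / 1024**3

for ctx in (10_000, 40_000, 131_072):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gb(ctx):.1f} GB of KV cache")
```

Under these assumptions the full 131k window alone needs about as much memory as the 8-bit weights, which is consistent with only the largest machines in the ranking advertising the full context.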

This is a reference-only model record. It remains useful for historical benchmarks, migration checks, and audit context, but it is excluded from current frontier packs.

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models.

Official source  ·  Raw model card

coding · reasoning

Runtime support mentioned

vLLM · SGLang · Transformers

Official specs

  • Base model: Qwen2.5-32B.
  • Distillation source: DeepSeek-R1.
  • License: MIT.
  • Architecture: Qwen2ForCausalLM.
  • Total parameters: 32.764B.

Official takeaways

  • DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
  • Please visit DeepSeek-V3 repo for more information about running DeepSeek-R1 locally.
  • Distillation: smaller models can be powerful too. We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

DeepSeek R1 Distill Qwen 32B: 3 Apple Silicon field reports; best reported generation ~19 tok/s; seen on M3 Max and M4 Max 128GB; via MLX, Ollama, GGUF.

  • Benchmark rows: 3
  • Field reports: 3
  • Practitioner signals: 3
  • Evidence status: Sparse Benchmarks

What practitioners keep saying

  • A top comment reports R1-Distill-Qwen-32B-Q4_K_M-GGUF on M3 Max at 15.93 tok/s generation over 744 tokens with 0.73s to first token.
  • Use this as runtime-shape evidence only; the comment does not provide Silicon Score hygiene sidecars or importable benchmark artifacts.
  • A top comment reports R1-Distill-Qwen-32B-MLX-4bit on M3 Max at 19.00 tok/s generation over 654 tokens with 0.67s to first token.
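The two throughput comments above convert directly into wall-clock response time: time to first token, plus token count divided by generation rate. A small sketch using the figures quoted in those reports:

```python
# End-to-end response time from a field report: TTFT plus generation time.

def wall_clock_s(ttft_s: float, tokens: int, tok_per_s: float) -> float:
    """Seconds from prompt submission to last generated token."""
    return ttft_s + tokens / tok_per_s

# Figures from the r/LocalLLaMA comments quoted above (both on M3 Max).
print(f"GGUF Q4_K_M: ~{wall_clock_s(0.73, 744, 15.93):.1f} s for 744 tokens")
print(f"MLX 4-bit:   ~{wall_clock_s(0.67, 654, 19.00):.1f} s for 654 tokens")
```

For a reasoning model that routinely emits long chains of thought, the difference between these two runtimes is tens of seconds per response, which is why the MLX result stands out in the thread.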

Apple Silicon field sources

  • r/LocalLLaMA

    2025-01-21 · M3 Max, M4 Max 128GB · GGUF, MLX +1

    The same captured thread also gives DeepSeek R1 Distill Qwen 32B a Q4_K_M GGUF comparison point on M3 Max.

Runtime mentions in the field

MLX · Ollama

Hardware mentioned in reports

128GB · M4 · Mac

What would improve confidence

  • Reproduce the field performance signal
  • Upgrade to a first-party measurement

Published chip coverage includes M5 Max (64 GB), M4 Max (48 GB), M3 Max (36 GB). Fastest published row is 27.0 tok/s on M5 Max (64 GB) at Q4_K - Medium.

Related DeepSeek R1 Distill Qwen models with published pages: DeepSeek R1 Distill Llama 8B · DeepSeek R1 Distill Llama 70B

Raw benchmark rows for DeepSeek R1 Distill Qwen 32B

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

| Chip | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|------|-------|----------|---------|-----------|--------------|---------|--------|
| M5 Max (64 GB) | Q4_K - Medium | | | 27.0 | | Ollama | ref |
| M4 Max (48 GB) | Q4_K - Medium | | | 18.0 | | LM Studio | ref |
| M3 Max (36 GB) | Q4_K - Medium | | | 14.0 | | Ollama | ref |

Ordered by fastest published tok/s on the chip family in each Mac. Click through for the full machine page.
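A common heuristic for the tok/s spread across chip tiers in these rows: single-stream decode on Apple Silicon is usually memory-bandwidth-bound, so an upper bound is bandwidth divided by bytes read per token, with weights dominating the reads. The bandwidth figures and the ~19.9 GB Q4_K_M footprint below are illustrative assumptions, not values from this page:

```python
# Bandwidth ceiling for single-stream decode: each generated token must
# stream (at least) the full weight set through the memory system.

MODEL_GB_Q4 = 19.9  # ~32.8B params at roughly 4.85 effective bits/weight

def decode_ceiling(bandwidth_gbps: float, model_gb: float = MODEL_GB_Q4) -> float:
    """Upper-bound tokens/sec if each token reads all weights once."""
    return bandwidth_gbps / model_gb

# Bandwidths are assumed spec values for the full (unbinned) chips.
for chip, bw in [("M3 Max (full)", 400.0), ("M4 Max (full)", 546.0)]:
    print(f"{chip}: ceiling ~{decode_ceiling(bw):.0f} tok/s")
```

The published 14.0 tok/s on the 36 GB M3 Max (a binned part with lower bandwidth than the full chip) sits sensibly below this ceiling, as real decode never achieves perfect bandwidth utilization.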

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
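The exports above can be audited with a few lines of stdlib Python. The column names in this sketch are guesses, not the documented schema of benchmarks.csv; inspect the real header row before adapting it:

```python
# Sort benchmark rows by generation speed. The sample mirrors the raw
# rows on this page; the column names (chip, quant, tok_s, runtime) are
# hypothetical and should be replaced with the CSV's actual header.
import csv
import io

SAMPLE = """chip,quant,tok_s,runtime
M5 Max (64 GB),Q4_K - Medium,27.0,Ollama
M4 Max (48 GB),Q4_K - Medium,18.0,LM Studio
M3 Max (36 GB),Q4_K - Medium,14.0,Ollama
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
rows.sort(key=lambda r: float(r["tok_s"]), reverse=True)
for r in rows:
    print(f'{r["chip"]:18s} {r["tok_s"]:>5} tok/s via {r["runtime"]}')
```

Swap `SAMPLE` for the downloaded benchmarks.csv (opened with `csv.DictReader`) to reproduce the page's "fastest published" ordering locally.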

See all models →