Canonical Rankings

Best Macs for this model

Qwen 2.5 72B, ranked across the Mac lineup at the best practical quantization for each machine, using the strongest available runtime evidence. A historical baseline is selected; the model picker focuses on current-market choices.

29 ranked Macs, using the strongest current runtime evidence for each row. 23 other historical models are hidden. Static paths cover only canonical model pages; sort and quantization stay as query state.


| Rank | Mac | Score | Quant | Tok/s | Runtime | Fits | Evidence | Price | Headroom |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Mac Studio M3 Ultra 256GB | 311 | 8bit | 15.0 | Ollama | Fits | Estimated | $7,499 | 185.3 GB |
| 2 | Mac Pro M2 Ultra 192GB | 247 | 8bit | 15.0 | Ollama | Fits | Estimated | $6,999 | 121.3 GB |
| 3 | Mac Studio M4 Max 128GB | 183 | 8bit | 15.0 | Ollama | Fits | Estimated | $4,499 | 57.3 GB |
| 4 | MacBook Pro M4 Max 128GB 16-inch | 183 | 8bit | 15.0 | Ollama | Fits | Estimated | $5,999 | 57.3 GB |
| 5 | MacBook Pro M5 Max 128GB 16-inch | 163 | 8bit | 10.0 | Ollama | Fits | Estimated | $5,399 | 57.3 GB |
| 6 | Mac Studio M3 Ultra 96GB | 151 | 8bit | 15.0 | Ollama | Fits | Estimated | $3,999 | 25.3 GB |
| 7 | Mac Studio M4 Max 64GB | 130 | 6bit | 15.0 | Ollama | Fits | Estimated | $2,999 | 10.2 GB |
| 8 | MacBook Pro M4 Max 64GB 16-inch | 130 | 6bit | 15.0 | Ollama | Fits | Estimated | $4,499 | 10.2 GB |
| 9 | Mac Mini M4 Pro 48GB | 123 | MXFP4 | 15.0 | Ollama | Fits | Estimated | $1,599 | 9.0 GB |
| 10 | MacBook Pro M4 Pro 48GB 14-inch | 123 | MXFP4 | 15.0 | Ollama | Fits | Estimated | $2,499 | 9.0 GB |
| 11 | Mac Studio M4 Max 48GB | 123 | MXFP4 | 15.0 | Ollama | Fits | Estimated | $2,499 | 9.0 GB |
| 12 | MacBook Pro M4 Pro 48GB 16-inch | 123 | MXFP4 | 15.0 | Ollama | Fits | Estimated | $2,999 | 9.0 GB |
| 13 | MacBook Pro M4 Max 48GB 14-inch | 123 | MXFP4 | 15.0 | Ollama | Fits | Estimated | $3,499 | 9.0 GB |
| 14 | MacBook Pro M4 Max 48GB 16-inch | 123 | MXFP4 | 15.0 | Ollama | Fits | Estimated | $3,999 | 9.0 GB |
| 15 | Mac Studio M4 Max 36GB | 95 | Q2_K | 15.0 | Ollama | Fits | Estimated | $1,999 | 10.7 GB |
| 16 | MacBook Pro M4 Max 36GB 14-inch | 95 | Q2_K | 15.0 | Ollama | Fits | Estimated | $2,999 | 10.7 GB |
| 17 | MacBook Pro M4 Max 36GB 16-inch | 95 | Q2_K | 15.0 | Ollama | Fits | Estimated | $3,499 | 10.7 GB |
| 18 | Mac Mini M4 32GB | 91 | Q2_K | 15.0 | Ollama | Fits | Estimated | $799 | 6.7 GB |
| 19 | MacBook Air M4 32GB 13-inch | 91 | Q2_K | 15.0 | Ollama | Fits | Estimated | $1,499 | 6.7 GB |
| 20 | MacBook Air M4 32GB 15-inch | 91 | Q2_K | 15.0 | Ollama | Fits | Estimated | $1,699 | 6.7 GB |
| 21 | Mac Mini M4 24GB | 88 | IQ2_XS | 15.0 | Ollama | Fits | Estimated | $599 | 4.1 GB |
| 22 | MacBook Air M4 24GB 13-inch | 88 | IQ2_XS | 15.0 | Ollama | Fits | Estimated | $1,299 | 4.1 GB |
| 23 | Mac Mini M4 Pro 24GB | 88 | IQ2_XS | 15.0 | Ollama | Fits | Estimated | $1,399 | 4.1 GB |
| 24 | MacBook Air M4 24GB 15-inch | 88 | IQ2_XS | 15.0 | Ollama | Fits | Estimated | $1,499 | 4.1 GB |
| 25 | MacBook Pro M4 Pro 24GB 14-inch | 88 | IQ2_XS | 15.0 | Ollama | Fits | Estimated | $1,999 | 4.1 GB |
| 26 | MacBook Pro M4 Pro 24GB 16-inch | 88 | IQ2_XS | 15.0 | Ollama | Fits | Estimated | $2,499 | 4.1 GB |
| 27 | Mac Mini M4 16GB | 0 | F32 | – | Ollama | No | Estimated | $499 | does not fit |
| 28 | MacBook Air M4 16GB 13-inch | 0 | F32 | – | Ollama | No | Estimated | $1,099 | does not fit |
| 29 | MacBook Air M4 16GB 15-inch | 0 | F32 | – | Ollama | No | Estimated | $1,299 | does not fit |

For each row, the Quant column is the current best practical quantization for that machine, the tok/s figure is estimated from nearby benchmark coverage, and Headroom is the unified memory remaining at that quantization. Qwen 2.5 72B does not fit on the three 16GB machines at any practical quantization.
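The fit logic behind the ranking can be sketched roughly: a quantized model's weight footprint is parameters × bits-per-weight ÷ 8, and a machine "fits" when that footprint plus some reserve stays under its unified memory. The bits-per-weight and reserve constants below are illustrative assumptions, not the site's actual values; real GGUF formats carry extra per-block overhead, and the published headroom figures imply a somewhat different reserve.

```python
PARAMS_B = 72.7  # Qwen 2.5 72B total parameters, in billions

# Rough bits-per-weight for each quantization tier shown in the table.
# These are approximations for illustration only.
BITS = {"8bit": 8.0, "6bit": 6.0, "MXFP4": 4.25, "Q2_K": 2.6, "IQ2_XS": 2.3}

def weights_gb(quant: str) -> float:
    """Approximate weight footprint in GB at a given quantization."""
    return PARAMS_B * BITS[quant] / 8

def fits(ram_gb: float, quant: str, reserve_gb: float = 8.0) -> bool:
    """Crude fit check: weights plus an assumed OS/KV-cache reserve
    (reserve_gb is a guessed constant) must stay within unified memory."""
    return weights_gb(quant) + reserve_gb <= ram_gb
```

Under these assumptions, an 8bit 72.7B model needs about 72.7 GB of weights, which is why only the 96GB-and-up machines rank at 8bit, while 16GB machines fail even at 2-bit quantizations.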

Qwen 2.5 72B — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: Q4_K - Medium

Benchmark rows: 2 · Chip tiers covered: 2 · Fastest avg tok/s: 15.0 (M4 Ultra (192 GB)) · Minimum RAM observed: –

Fastest published result is 15.0 tok/s on M4 Ultra (192 GB) at Q4_K - Medium. Published runtimes include Ollama. Start with Rankings for the decision, then use the raw rows below to audit the evidence.

Evidence state: 2 linked reference rows and no Silicon Score Lab rows yet.


Total params: 72.7B · Active params: Dense · Context window: 131,072 · Release date: 2024-09-19

This is a reference-only model record. It remains useful for historical benchmarks, migration checks, and audit context, but it is excluded from current frontier packs.

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

Significantly more knowledge and has greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.

Official source  ·  Raw model card

Tags: coding · reasoning

Runtime support mentioned

vLLM · Transformers

Official specs

  • Type: Causal Language Model.
  • Total parameters: 72.7B.
  • Context: 131,072 tokens of input, with generation up to 8,192 tokens.
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and attention QKV bias.

Official takeaways

  • Significantly more knowledge and has greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
  • Please refer to our Documentation for usage if you are not familiar with vLLM.
  • To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
  • Qwen2.5 brings the following improvements upon Qwen2: This repo contains the instruction-tuned 72B Qwen2.5 model, which has the following features: Type: Causal Language Models Training Stage: Pretraining & Post-trainin…
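The YaRN takeaway above maps to a concrete configuration step: the Qwen2.5 card describes extending the context past 32,768 tokens by adding a `rope_scaling` block to the model's `config.json`. A sketch of that block follows; the exact keys come from the Hugging Face `rope_scaling` convention, so verify them against the official card before relying on this.

```python
# Hypothetical rope_scaling block for YaRN context extension, following
# the convention the Qwen2.5 model card describes. Verify against the
# official card; key names and values here are assumptions.
rope_scaling = {
    "type": "yarn",
    "factor": 4.0,  # 4 x 32,768 native positions = 131,072-token window
    "original_max_position_embeddings": 32768,
}
```

The factor times the native position count reproduces the 131,072-token context window listed in this page's specs.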

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

Benchmark rows: 2 · Field reports: 0 · Practitioner signals: 0 · Evidence status: Sparse benchmarks

What would improve confidence

  • Expand cross-chip benchmark coverage
  • Upgrade to first-party measurement

Published chip coverage includes M4 Ultra (192 GB), M5 Max (128 GB). Fastest published row is 15.0 tok/s on M4 Ultra (192 GB) at Q4_K - Medium.

Related Qwen 2.5 models with published pages: Qwen 2.5 14B · Qwen 2.5 7B

Raw benchmark rows for Qwen 2.5 72B

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

| Chip | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| M4 Ultra (192 GB) | Q4_K - Medium | – | – | 15.0 | – | Ollama | ref |
| M5 Max (128 GB) | Q4_K - Medium | – | – | 10.0 | – | Ollama | ref |

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
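To audit the evidence programmatically, the exported files can be filtered for this model's rows. The record shape below is a guess based on the table columns shown on this page; the real `benchmarks.json` schema may use different field names. An inline sample stands in for the downloaded file.

```python
import json

# Hypothetical record shape, guessed from this page's table columns;
# in practice you would json.load() the downloaded benchmarks.json.
sample = json.loads("""[
  {"model": "Qwen 2.5 72B", "chip": "M4 Ultra (192 GB)",
   "quant": "Q4_K - Medium", "avg_tok_s": 15.0, "runtime": "Ollama"},
  {"model": "Qwen 2.5 72B", "chip": "M5 Max (128 GB)",
   "quant": "Q4_K - Medium", "avg_tok_s": 10.0, "runtime": "Ollama"}
]""")

def fastest(rows, model):
    """Return the row with the highest avg tok/s for a model, or None."""
    candidates = [r for r in rows if r["model"] == model]
    return max(candidates, key=lambda r: r["avg_tok_s"], default=None)

best = fastest(sample, "Qwen 2.5 72B")
```

On the two raw rows above, this picks out the 15.0 tok/s M4 Ultra (192 GB) result, matching the "fastest published result" figure quoted earlier on the page.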

See all models →