Field reality on Apple Silicon
What would improve confidence
- Expand cross-chip benchmark coverage
- Fetch the source artifact
- Upgrade to first-party measurement
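The last item above can be done directly against a local Ollama server, which reports its own token counts and decode timings. A minimal sketch, assuming Ollama is running on its default port and the model tag passed in has already been pulled (the tag is the caller's choice, not something this page specifies):

```python
import json
import urllib.request

def tok_per_s(eval_count: int, eval_duration_ns: int) -> float:
    """Decode speed from Ollama's own timing fields (duration is in nanoseconds)."""
    return eval_count / (eval_duration_ns / 1e9)

def measure(model: str, prompt: str, host: str = "http://localhost:11434") -> float:
    """Run one non-streaming generation and return first-party tok/s."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    # eval_count = generated tokens, eval_duration = decode time in nanoseconds
    return tok_per_s(data["eval_count"], data["eval_duration"])
```

`eval_count` and `eval_duration` are documented fields of Ollama's `/api/generate` response; averaging several runs over a fixed prompt would make the figure benchmark-grade.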
gpt-oss 120B, ranked across the Mac lineup at the best practical quantization for each machine, using the best available runtime evidence. The model picker covers current-market configurations only.
| Rank | Mac | Score | Quant | Tok/s | Runtime | Fits | Evidence | Price | Why it ranks here |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Mac Studio M3 Ultra 256GB | 252 | 8bit | 10.0 tok/s | Ollama | Fits | Estimated | $7,499 | 8bit is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 146.0 GB headroom remains at this quantization. |
| 2 | Mac Pro M2 Ultra 192GB | 188 | 8bit | 10.0 tok/s | Ollama | Fits | Estimated | $6,999 | 8bit is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 82.0 GB headroom remains at this quantization. |
| 3 | Mac Studio M4 Max 128GB | 138 | Q6_K | 10.0 tok/s | Ollama | Fits | Estimated | $4,499 | Q6_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 37.6 GB headroom remains at this quantization. |
| 4 | MacBook Pro M4 Max 128GB 16-inch | 138 | Q6_K | 10.0 tok/s | Ollama | Fits | Estimated | $5,999 | Q6_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 37.6 GB headroom remains at this quantization. |
| 5 | Mac Studio M3 Ultra 96GB | 117 | Q5_K_M | 10.0 tok/s | Ollama | Fits | Estimated | $3,999 | Q5_K_M is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 17.4 GB headroom remains at this quantization. |
| 6 | Mac Studio M4 Max 64GB | 78 | Q3_K_L | 10.0 tok/s | Ollama | Fits | Estimated | $2,999 | Q3_K_L is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 13.5 GB headroom remains at this quantization. |
| 7 | MacBook Pro M4 Max 64GB 16-inch | 78 | Q3_K_L | 10.0 tok/s | Ollama | Fits | Estimated | $4,499 | Q3_K_L is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 13.5 GB headroom remains at this quantization. |
| 8 | Mac Mini M4 Pro 48GB | 75 | Q2_K | 10.0 tok/s | Ollama | Fits | Estimated | $1,599 | Q2_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 11.1 GB headroom remains at this quantization. |
| 9 | MacBook Pro M4 Pro 48GB 14-inch | 75 | Q2_K | 10.0 tok/s | Ollama | Fits | Estimated | $2,499 | Q2_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 11.1 GB headroom remains at this quantization. |
| 10 | Mac Studio M4 Max 48GB | 75 | Q2_K | 10.0 tok/s | Ollama | Fits | Estimated | $2,499 | Q2_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 11.1 GB headroom remains at this quantization. |
| 11 | MacBook Pro M4 Pro 48GB 16-inch | 75 | Q2_K | 10.0 tok/s | Ollama | Fits | Estimated | $2,999 | Q2_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 11.1 GB headroom remains at this quantization. |
| 12 | MacBook Pro M4 Max 48GB 14-inch | 75 | Q2_K | 10.0 tok/s | Ollama | Fits | Estimated | $3,499 | Q2_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 11.1 GB headroom remains at this quantization. |
| 13 | MacBook Pro M4 Max 48GB 16-inch | 75 | Q2_K | 10.0 tok/s | Ollama | Fits | Estimated | $3,999 | Q2_K is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 11.1 GB headroom remains at this quantization. |
| 14 | Mac Studio M4 Max 36GB | 70 | IQ2_K_S | 10.0 tok/s | Ollama | Fits | Estimated | $1,999 | IQ2_K_S is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 6.3 GB headroom remains at this quantization. |
| 15 | MacBook Pro M4 Max 36GB 14-inch | 70 | IQ2_K_S | 10.0 tok/s | Ollama | Fits | Estimated | $2,999 | IQ2_K_S is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 6.3 GB headroom remains at this quantization. |
| 16 | MacBook Pro M4 Max 36GB 16-inch | 70 | IQ2_K_S | 10.0 tok/s | Ollama | Fits | Estimated | $3,499 | IQ2_K_S is the current best practical quantization. 10.0 tok/s is estimated from nearby benchmark coverage. 6.3 GB headroom remains at this quantization. |
| 17 | Mac Mini M4 16GB | 0 | F32 | — | Ollama | No | Estimated | $499 | gpt-oss 120B does not fit on Mac Mini M4 16GB at the current practical quantization. |
| 18 | Mac Mini M4 24GB | 0 | F32 | — | Ollama | No | Estimated | $599 | gpt-oss 120B does not fit on Mac Mini M4 24GB at the current practical quantization. |
| 19 | Mac Mini M4 32GB | 0 | F32 | — | Ollama | No | Estimated | $799 | gpt-oss 120B does not fit on Mac Mini M4 32GB at the current practical quantization. |
| 20 | MacBook Air M4 16GB 13-inch | 0 | F32 | — | Ollama | No | Estimated | $1,099 | gpt-oss 120B does not fit on MacBook Air M4 16GB 13-inch at the current practical quantization. |
| 21 | MacBook Air M4 24GB 13-inch | 0 | F32 | — | Ollama | No | Estimated | $1,299 | gpt-oss 120B does not fit on MacBook Air M4 24GB 13-inch at the current practical quantization. |
| 22 | MacBook Air M4 16GB 15-inch | 0 | F32 | — | Ollama | No | Estimated | $1,299 | gpt-oss 120B does not fit on MacBook Air M4 16GB 15-inch at the current practical quantization. |
| 23 | Mac Mini M4 Pro 24GB | 0 | F32 | — | Ollama | No | Estimated | $1,399 | gpt-oss 120B does not fit on Mac Mini M4 Pro 24GB at the current practical quantization. |
| 24 | MacBook Air M4 32GB 13-inch | 0 | F32 | — | Ollama | No | Estimated | $1,499 | gpt-oss 120B does not fit on MacBook Air M4 32GB 13-inch at the current practical quantization. |
| 25 | MacBook Air M4 24GB 15-inch | 0 | F32 | — | Ollama | No | Estimated | $1,499 | gpt-oss 120B does not fit on MacBook Air M4 24GB 15-inch at the current practical quantization. |
| 26 | MacBook Air M4 32GB 15-inch | 0 | F32 | — | Ollama | No | Estimated | $1,699 | gpt-oss 120B does not fit on MacBook Air M4 32GB 15-inch at the current practical quantization. |
| 27 | MacBook Pro M4 Pro 24GB 14-inch | 0 | F32 | — | Ollama | No | Estimated | $1,999 | gpt-oss 120B does not fit on MacBook Pro M4 Pro 24GB 14-inch at the current practical quantization. |
| 28 | MacBook Pro M4 Pro 24GB 16-inch | 0 | F32 | — | Ollama | No | Estimated | $2,499 | gpt-oss 120B does not fit on MacBook Pro M4 Pro 24GB 16-inch at the current practical quantization. |
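The Fits and headroom columns come down to one comparison: estimated weight bytes at the chosen quantization against the usable slice of unified memory. A minimal sketch of that arithmetic follows; the bits-per-weight values are nominal llama.cpp averages for dense models, the 75% usable fraction approximates how much unified memory macOS lets the GPU wire, and the flat overhead allowance is a guess, so the numbers will not reproduce this table's headroom column exactly:

```python
# Rough fit check for a GGUF quant in unified memory. All three constants
# (bits per weight, usable fraction, overhead) are assumptions, not the
# exact formula behind the table above.
BITS_PER_WEIGHT = {
    "8bit": 8.5, "Q6_K": 6.59, "Q5_K_M": 5.69,
    "Q3_K_L": 4.27, "Q2_K": 3.35, "IQ2_K_S": 2.5,
}

def headroom_gb(ram_gb: float, quant: str, params: float = 120e9,
                usable_frac: float = 0.75, overhead_gb: float = 4.0) -> float:
    """Usable unified memory minus estimated weights and runtime overhead, in GB."""
    weights_gb = params * BITS_PER_WEIGHT[quant] / 8 / 1e9
    return ram_gb * usable_frac - weights_gb - overhead_gb

# positive headroom means the quant should fit with room left for KV cache
```

The usable fraction is tunable on macOS (the GPU wired limit can be raised), which is one reason fit calls near the boundary differ between sources.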
Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.
Quantizations observed: Q4_K - Medium
What this page answers best
Fastest published result is 10.0 tok/s on M4 Ultra (192 GB) at Q4_K - Medium, and published runtimes include MLX and Ollama. Start with Rankings for the decision, then use the raw rows below to audit the evidence.
Evidence state: 2 linked reference rows and no Silicon Score Lab rows yet.
Catalog record
Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.
Current published coverage
Published chip coverage includes M4 Ultra (192 GB), M5 Max (128 GB). Fastest published row is 10.0 tok/s on M4 Ultra (192 GB) at Q4_K - Medium.
The raw rows sit below the ranking because this page is answer-first. Use them to inspect the exact chips, quantizations, runtimes, and sources behind each claim.
| Chip | Quant | Avg tok/s | Runtime | Source |
|---|---|---|---|---|
| M4 Ultra (192 GB) | Q4_K - Medium | 10.0 tok/s | MLX | ref |
| M5 Max (128 GB) | Q4_K - Medium | 7.0 tok/s | Ollama | ref |
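The ranking marks most rows as estimated "from nearby benchmark coverage" without saying how. One common heuristic, offered here purely as an illustration and not as a description of this page's pipeline, is bandwidth scaling: decode speed on Apple Silicon is largely memory-bandwidth-bound, so a published tok/s can be scaled by the bandwidth ratio between chips. The bandwidth figures below are Apple's advertised numbers for the top-spec variant of each chip:

```python
# Illustrative only: scale a published decode speed by memory-bandwidth ratio.
# Ignores compute limits, quantization effects, and runtime differences.
BANDWIDTH_GBPS = {"M3 Ultra": 819, "M2 Ultra": 800, "M4 Max": 546, "M4 Pro": 273}

def estimate_tok_s(published: float, from_chip: str, to_chip: str) -> float:
    """Naive bandwidth-proportional estimate of decode speed on another chip."""
    return published * BANDWIDTH_GBPS[to_chip] / BANDWIDTH_GBPS[from_chip]

# e.g. a 10.0 tok/s result on M3 Ultra scales to roughly a third on an M4 Pro
```

A heuristic like this explains why every estimated row above carries the same figure only when the anchor measurement itself is sparse; more published rows would tighten the estimates.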
Chips with published results for gpt-oss 120B
Data
benchmarks.json — full dataset · models.json — model summaries · benchmarks.csv — CSV export
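To audit rows beyond what is rendered on this page, the linked dataset files can be loaded directly. A minimal sketch: the filenames come from the links above, but the record schema (a JSON list with a "model" key) is a hypothetical assumption and should be checked against one actual record first:

```python
import json

def rows_for_model(path: str, model: str) -> list[dict]:
    """Return every benchmark row for one model from a JSON list of records."""
    with open(path) as f:
        rows = json.load(f)
    # "model" is an assumed key; inspect a record to confirm the real schema
    return [r for r in rows if r.get("model") == model]

# usage: rows_for_model("benchmarks.json", "gpt-oss 120B")
```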