M5 Max (128 GB) — benchmark record
Local LLM benchmarks for the M5 Max (128 GB) on Apple Silicon, with 33 published rows across 23 models. Peak published speed is 158.0 tok/s on Gemma 4 E2B. The largest published fit uses 92.6 GB. Published runtimes include flash-moe, llama.cpp, MLX, and Ollama.
Quick take
The best published speed here is 158.0 tok/s on Gemma 4 E2B at Q4_K - Medium. The largest published memory footprint is Qwen3-Coder-Next at 8bit, using 92.6 GB. The longest published context on this page is 66k tokens. This page is evidence, not the full buying answer.
Based on 33 external benchmarks; no lab runs yet.
Published runtimes: flash-moe, llama.cpp, MLX, Ollama.
Current published coverage
Published model coverage includes Gemma 4 E2B, Llama 3.1 8B, Gemma 4 E4B, and Qwen 3 8B, plus 19 more models. Published runtime coverage on this chip includes flash-moe, llama.cpp, MLX, and Ollama. The fastest published row is 158.0 tok/s on Gemma 4 E2B at Q4_K - Medium. The largest published footprint on this page is 92.6 GB for Qwen3-Coder-Next. The longest published context is 66k tokens.
Macs shipping with the M5 Max (128 GB)
This chip is part of a family. Compare all M5 Max RAM variants →
Raw benchmark rows for M5 Max (128 GB)
Rows are sorted by avg tok/s descending. Click a source badge to see the original measurement page.
| Model | Quant | Avg tok/s | Runtime | Source |
|---|---|---|---|---|
| Gemma 4 E2B | Q4_K - Medium | 158.0 tok/s | MLX | ref |
| Llama 3.1 8B | Q4_K - Medium | 138.0 tok/s | MLX | ref |
| Gemma 4 E4B | Q4_K - Medium | 128.0 tok/s | MLX | ref |
| Qwen 3 8B | Q4_K - Medium | 98.0 tok/s | Ollama | ref |
| Qwen3-Coder-Next | 8bit | 79.3 tok/s | MLX | ref |
| Qwen3.5-9B | Q8_0 | 78.0 tok/s | MLX | ref |
| Qwen3-Coder-Next | 8bit | 74.3 tok/s | MLX | ref |
| Qwen3-Coder-Next | 8bit | 68.6 tok/s | MLX | ref |
| Qwen3.5-122B-A10B | 4bit | 65.9 tok/s | MLX | ref |
| Qwen3.5-122B-A10B | 4bit | 60.6 tok/s | MLX | ref |
| Qwen3.6-35B-A3B | Q4_K - Medium | 55.0 tok/s | MLX | ref |
| Qwen3.5-122B-A10B | 4bit | 54.9 tok/s | MLX | ref |
| Gemma 4 26B-A4B | Q4_K - Medium | 50.0 tok/s | MLX | ref |
| Qwen3-Coder-Next | 8bit | 48.2 tok/s | MLX | ref |
| Qwen3.5-35B-A3B | Q4_K - Medium | 48.0 tok/s | Ollama | ref |
| Mistral Small 4 119B | Q4_K - Medium | 42.0 tok/s | MLX | ref |
| Qwen 3 14B | Q8_0 | 42.0 tok/s | MLX | ref |
| Mistral Small 4 119B | Q4_K - Medium | 38.0 tok/s | Ollama | ref |
| Qwen3.5-27B | 4bit | 31.6 tok/s | MLX | ref |
| Qwen 3 32B | Q4_K - Medium | 28.0 tok/s | Ollama | ref |
| Gemma 4 31B | Q4_K - Medium | 26.0 tok/s | MLX | ref |
| Llama 4 Scout 17B-16E | Q4_K - Medium | 26.0 tok/s | MLX | ref |
| Llama 4 Scout 17B-16E | Q4_K - Medium | 22.0 tok/s | Ollama | ref |
| Gemma 3 27B | Q6_K | 20.0 tok/s | llama.cpp | ref |
| Qwen 3 235B-A22B | Q4_K - Medium | 18.0 tok/s | MLX | ref |
| Qwen3.5-27B | Q6_K | 16.5 tok/s | llama.cpp | ref |
| Llama 3.3 70B | Q4_K - Medium | 15.0 tok/s | MLX | ref |
| Qwen 3 235B-A22B | Q4_K - Medium | 15.0 tok/s | Ollama | ref |
| Qwen3.5-397B-A17B | 4bit | 13.0 tok/s | flash-moe | ref |
| Llama 3.3 70B | Q4_K - Medium | 12.0 tok/s | Ollama | ref |
| DeepSeek R1 Distill Llama 70B | Q4_K - Medium | 11.0 tok/s | Ollama | ref |
| Qwen 2.5 72B | Q4_K - Medium | 10.0 tok/s | Ollama | ref |
| gpt-oss 120B | Q4_K - Medium | 7.0 tok/s | Ollama | ref |
Next step
Use Rankings when you need the best overall answer for a Mac, and keep this page open when you need the evidence behind that recommendation.
If you are comparing local Apple Silicon against rented cloud hardware, use the AI Data Center Index for current GPU rental context.
Data
benchmarks.json — full dataset · chips.json — chip summaries · benchmarks.csv — CSV export
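The CSV export can be filtered and re-sorted locally if you want a different cut of these rows (for example, MLX-only, or rows under a memory budget). A minimal sketch, assuming column names like `model`, `quant`, `avg_tok_s`, and `runtime`; the real header row in benchmarks.csv may differ, so check it first:

```python
import csv
import io

# Small inline sample standing in for benchmarks.csv; the column
# names here are an assumption, not the confirmed export schema.
sample = """model,quant,avg_tok_s,runtime
gpt-oss 120B,Q4_K - Medium,7.0,Ollama
Gemma 4 E2B,Q4_K - Medium,158.0,MLX
Qwen 3 8B,Q4_K - Medium,98.0,Ollama
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Reproduce the table ordering on this page: avg tok/s descending.
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)

# Example filter: keep only MLX rows.
mlx_rows = [r for r in rows if r["runtime"] == "MLX"]

print(rows[0]["model"], rows[0]["avg_tok_s"])  # Gemma 4 E2B 158.0
```

For the real file, replace the inline sample with `open("benchmarks.csv")` and the same `csv.DictReader` call works unchanged.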
Data from in-house lab measurements plus community-published benchmarks. See all chip families →