Best local LLMs for MacBook Pro M5 Max 128GB 16-inch in 2026
Use this page when your real query is the exact machine, not a generic “best Mac” article. The ranking below is machine-specific, and the supporting links show where the evidence is benchmark-backed, sparse, or still fit-limited.
Current coding-biased answer for MacBook Pro M5 Max 128GB 16-inch: Qwen3.6-27B. Use Fit and Bench to verify how much headroom remains once you move past the default answer.
Silicon Score currently tracks 27 direct benchmark rows for this exact machine across 17 models. Latest direct evidence date: April 17, 2026. Model catalog current through April 22, 2026; benchmark corpus through April 27, 2026.
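Before leaning on any single row, the Fit arithmetic is easy to reproduce: weights at roughly one byte per parameter for an 8-bit quant, plus runtime overhead, subtracted from the 128 GB of unified memory. Below is a minimal sketch, assuming a flat ~15% overhead factor and generic bytes-per-weight figures; neither assumption comes from the Silicon Score pipeline, so expect a few GB of drift against the table's Headroom column, and more on the MoE and Q6_K rows.

```python
# Rough Fit check for a quantized model on a 128 GB unified-memory Mac.
# ASSUMPTIONS (not from the ranking pipeline): ~1.0 bytes/weight at 8bit,
# ~0.82 bytes/weight at Q6_K, and a flat 15% runtime overhead.

BYTES_PER_WEIGHT = {"8bit": 1.0, "Q6_K": 0.82}

def headroom_gb(total_params_b: float, quant: str, unified_gb: float = 128.0) -> float:
    """Unified memory left over after loading the weights."""
    weights_gb = total_params_b * BYTES_PER_WEIGHT[quant]  # billions of params ~ GB
    return unified_gb - weights_gb * 1.15                  # crude overhead factor

# Gemma 4 31B (30.7B params) at 8bit: ~92.7 GB, near the table's 91.4 GB.
print(f"{headroom_gb(30.7, '8bit'):.1f} GB")
```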
Best local LLMs for this Mac
MacBook Pro M5 Max 128GB 16-inch, ranked for coding with a most-capable bias using the best available runtime evidence, focused on the current market set.
Current frontier watch
Fresh releases stay visible, but sparse evidence remains explicit.
- Best field report is 18.0 tok/s; keep ranking movement provisional until Bench evidence hardens.
- Best field report is 17.3 tok/s; keep ranking movement provisional until Bench evidence hardens.
- Best field report is 203.1 tok/s; keep ranking movement provisional until Bench evidence hardens.
- Mistral Small 4: no Apple Silicon field speed reports yet; first-party measurement is queued. The official card does not publish Apple Silicon throughput, memory pressure, prompt-processing, or quantized-context behavior, and the May 4 search refresh found no qualifying Mistral Small 4 Apple Silicon practitioner row.
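Until that first-party batch lands, a qualifying field row is cheap to produce locally. Here is a minimal sketch against an Ollama server on its default port, reading the eval_count and eval_duration fields documented in Ollama's /api/generate response; the model tag is a placeholder, since no published tag for these catalog entries is confirmed here.

```python
# Minimal decode-throughput probe against a local Ollama server.
# eval_count / eval_duration (nanoseconds) are documented response fields.
import json
import urllib.request

payload = {
    "model": "mistral-small:latest",  # placeholder tag, not a confirmed release tag
    "prompt": "Write a binary search in Python.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

tok_per_s = body["eval_count"] / (body["eval_duration"] / 1e9)
print(f"decode speed: {tok_per_s:.1f} tok/s")
```

The same figure is what `ollama run --verbose` reports as its eval rate.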
| Rank | Model | Score | Quant | Tok/s | Runtime | Evidence | Headroom | Context | Why it ranks here |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Gemma 4 31B (30.7B parameters) | 290 | 8bit | 26.0 tok/s | MLX | Estimated (first-party M5 batch queued) | 91.4 GB | 87k | Recent frontier candidate in the current catalog. 8bit is the highest practical quality here. 26.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 91.4 GB headroom leaves workable context margin. |
| 2 | Qwen3.5-27B (27B parameters) | 283 | 8bit | 31.6 tok/s | MLX | Estimated (first-party M5 batch queued) | 100.4 GB | 262k | Recent frontier candidate in the current catalog. 8bit is the highest practical quality here. 31.6 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 100.4 GB headroom leaves workable context margin. |
| 3 | Qwen3.6-27B (27B parameters) | 280 | 8bit | 16.6 tok/s | Ollama | Estimated (first-party M5 batch queued) | 100.4 GB | 262k | Recent frontier candidate in the current catalog. 8bit is the highest practical quality here. 16.6 tok/s estimated from nearby benchmark coverage, with Ollama wrapper on llama.cpp as the best runtime hint. 100.4 GB headroom leaves workable context margin. |
| 4 | Devstral Small 2 24B (24B parameters) | 274 | 8bit | 23.4 tok/s | llama.cpp | Estimated (first-party M5 batch queued) | 103.9 GB | 262k | Recent frontier candidate in the current catalog. 8bit is the highest practical quality here. 23.4 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 103.9 GB headroom leaves workable context margin. |
| 5 | Qwen3.6-35B-A3B (3B active / 35B total) | 252 | 8bit | 55.0 tok/s | MLX | Estimated (first-party M5 batch queued) | 94.3 GB | 262k | Recent frontier candidate in the current catalog. 8bit is the highest practical quality here. 55.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 94.3 GB headroom leaves workable context margin. |
| 6 | Mistral Small 4 119B (6.5B active / 119B total) | 193 | Q6_K | 42.0 tok/s | Ollama | Estimated (first-party M5 batch queued) | 32.1 GB | 32k | Recent frontier candidate in the current catalog. Q6_K is the highest practical quality here. 42.0 tok/s estimated from nearby benchmark coverage, with Ollama wrapper on llama.cpp as the best runtime hint. 32.1 GB headroom leaves workable context margin. |
| 7 | Gemma 4 26B-A4B (3.8B active / 25.2B total) | 252 | 8bit | 50.0 tok/s | MLX | Estimated (first-party M5 batch queued) | 102.2 GB | 262k | Recent model release in the current catalog. 8bit is the highest practical quality here. 50.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 102.2 GB headroom leaves workable context margin. |
| 8 | Nemotron Cascade 2 30B-A3B (3B active / 30B total) | 250 | 8bit | 28.0 tok/s | Ollama | Estimated (first-party M5 batch queued) | 99.2 GB | 1000k | Recent model release in the current catalog. 8bit is the highest practical quality here. 28.0 tok/s estimated from nearby benchmark coverage, with Ollama wrapper on llama.cpp as the best runtime hint. 99.2 GB headroom leaves workable context margin. |
| 9 | Qwen3.5-35B-A3B (3B active / 35B total) | 249 | 8bit | 57.6 tok/s | MLX | Estimated (first-party M5 batch queued) | 94.3 GB | 262k | Recent model release in the current catalog. 8bit is the highest practical quality here. 57.6 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 94.3 GB headroom leaves workable context margin. |
| 10 | GLM-4.7-Flash (3B active / 30B total) | 247 | 8bit | 58.0 tok/s | llama.cpp | Estimated (first-party M5 batch queued) | 92.2 GB | 90k | Recent model release in the current catalog. 8bit is the highest practical quality here. 58.0 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 92.2 GB headroom leaves workable context margin. |
| 11 | Nemotron-3-Nano-30B-A3B (3.5B active / 30B total) | 246 | 8bit | 43.7 tok/s | llama.cpp | Estimated (first-party M5 batch queued) | 99.2 GB | 1000k | Recent model release in the current catalog. 8bit is the highest practical quality here. 43.7 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 99.2 GB headroom leaves workable context margin. |
| 12 | Magistral Small (24B parameters) | 242 | 8bit | Measure it | Best available | Fit-first (first-party M5 batch queued) | 103.9 GB | 41k | 8bit is the highest practical quality here. Speed still needs direct benchmark coverage. 103.9 GB headroom leaves workable context margin. |
| 13 | Gemma 4 E4B (8B parameters) | 230 | 8bit | 128.0 tok/s | MLX | Estimated (first-party M5 batch queued) | 119.4 GB | 131k | Recent model release in the current catalog. 8bit is the highest practical quality here. 128.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 119.4 GB headroom leaves workable context margin. |
| 14 | Qwen3.5-9B (9B parameters) | 223 | 8bit | 15.0 tok/s | llama.cpp | Estimated (first-party M5 batch queued) | 118.1 GB | 262k | Recent model release in the current catalog. 8bit is the highest practical quality here. 15.0 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 118.1 GB headroom leaves workable context margin. |
| 15 | Qwen3.5-122B-A10B (10B active / 122B total) | 197 | Q6_K | 60.6 tok/s | MLX | Estimated (Bitter Mill import queued) | 33.6 GB | 165k | Recent model release in the current catalog. Q6_K is the highest practical quality here. 60.6 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 33.6 GB headroom leaves workable context margin. |
| 16 | Qwen3-Coder-Next (3B active / 80B total) | 192 | 8bit | 74.3 tok/s | MLX | Community row (first-party M5 batch queued) | 52.2 GB | 262k | Recent model release in the current catalog. 8bit is the highest practical quality here. 74.3 tok/s benchmark-backed on MLX backend. 52.2 GB headroom leaves workable context margin. |
| 17 | GLM-4.5-Air (12B active / 106B total) | 190 | 8bit | 18.0 tok/s | LM Studio | Estimated (first-party M5 batch queued) | 27.3 GB | 55k | 8bit is the highest practical quality here. 18.0 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper on mixed backends as the best runtime hint. 27.3 GB headroom leaves workable context margin. |
| 18 | Gemma 4 E2B (5.1B parameters) | 170 | 8bit | 158.0 tok/s | MLX | Estimated (first-party M5 batch queued) | 122.5 GB | 131k | Recent model release in the current catalog. 8bit is the highest practical quality here. 158.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 122.5 GB headroom leaves workable context margin. |
| 19 | gpt-oss 120B (5.1B active / 117B total) | 164 | Q6_K | 7.0 tok/s | Ollama | Estimated (first-party M5 batch queued) | 37.6 GB | 131k | Q6_K is the highest practical quality here. 7.0 tok/s estimated from nearby benchmark coverage, with Ollama wrapper on llama.cpp as the best runtime hint. 37.6 GB headroom leaves workable context margin. |
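One thing the last column glosses over: "workable context margin" is governed by KV-cache growth, which is linear in context length. Below is a hedged sketch of the standard KV-cache size formula, with illustrative grouped-query-attention dimensions that are placeholders, not specs from any model card above.

```python
# KV-cache footprint: 2 (K and V) * layers * kv_heads * head_dim
# * bytes_per_element * context_tokens.
# The dimensions below are ILLUSTRATIVE, not taken from any model in the table.

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_elem: float = 2.0) -> float:
    """GB of KV cache for a dense transformer at the given fill level."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * context_tokens / 1e9

# A hypothetical 27B-class dense model with GQA (48 layers, 8 KV heads,
# head_dim 128, fp16 cache) filled to a 262k window:
print(f"{kv_cache_gb(48, 8, 128, 262_144):.1f} GB")  # ~51.5 GB
```

At those placeholder dimensions a fully filled 262k window costs about 51.5 GB on top of the weights, which is why the ~100 GB headroom rows can plausibly run long contexts while the ~30 GB rows cannot.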