Best models for this Mac
Mac Studio M3 Ultra 96GB, ranked for coding with a bias toward the most capable models, using the best available runtime evidence.
| Rank | Model | Score | Quant | Tok/s | Runtime | Evidence | Headroom | Why it ranks here |
|---|---|---|---|---|---|---|---|---|
| 1 | Qwen 3 32B | 274 | 8bit | 22.0 tok/s | Ollama | Estimated | 63.0 GB | 8bit is the highest practical quality here. 22.0 tok/s estimated from nearby benchmark coverage, with the Ollama wrapper on llama.cpp as the best runtime hint. 63.0 GB headroom leaves workable context margin. |
| 2 | Qwen3.5-27B | 264 | 8bit | 20.6 tok/s | MLX | Estimated | 68.4 GB | 8bit is the highest practical quality here. 20.6 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 68.4 GB headroom leaves workable context margin. |
| 3 | Devstral Small 1.1 | 262 | 8bit | 43.0 tok/s | LM Studio | Estimated | 71.9 GB | 8bit is the highest practical quality here. 43.0 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper (mixed backend) as the best runtime hint. 71.9 GB headroom leaves workable context margin. |
| 4 | Devstral Small 2 24B | 262 | 8bit | 25.3 tok/s | MLX | Estimated | 71.9 GB | 8bit is the highest practical quality here. 25.3 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 71.9 GB headroom leaves workable context margin. |
| 5 | Llama 3.3 70B | 261 | 8bit | 11.8 tok/s | LM Studio | Estimated | 27.2 GB | 8bit is the highest practical quality here. 11.8 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper (mixed backend) as the best runtime hint. 27.2 GB headroom leaves workable context margin. |
| 6 | Gemma 3 27B | 259 | 8bit | 14.5 tok/s | LM Studio | Estimated | 66.1 GB | 8bit is the highest practical quality here. 14.5 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper (mixed backend) as the best runtime hint. 66.1 GB headroom leaves workable context margin. |
| 7 | DeepSeek R1 Distill Qwen 32B | 252 | 8bit | Measure it | Best available | Fit-first | 63.0 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 63.0 GB headroom leaves workable context margin. |
| 8 | Mistral Small 3.1 24B | 240 | 8bit | Measure it | Best available | Fit-first | 71.9 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 71.9 GB headroom leaves workable context margin. |
| 9 | Magistral Small | 240 | 8bit | Measure it | Best available | Fit-first | 71.9 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 71.9 GB headroom leaves workable context margin. |
| 10 | Nemotron-3-Nano-30B-A3B | 233 | 8bit | 43.7 tok/s | llama.cpp | Estimated | 67.2 GB | 8bit is the highest practical quality here. 43.7 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 67.2 GB headroom leaves workable context margin. |
| 11 | Qwen 3 30B-A3B | 233 | 8bit | 76.7 tok/s | MLX | Estimated | 66.3 GB | 8bit is the highest practical quality here. 76.7 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 66.3 GB headroom leaves workable context margin. |
| 12 | Qwen3-Coder-30B-A3B | 233 | 8bit | 58.5 tok/s | llama.cpp | Estimated | 66.7 GB | 8bit is the highest practical quality here. 58.5 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 66.7 GB headroom leaves workable context margin. |
| 13 | Qwen3.5-35B-A3B | 232 | 8bit | 80.0 tok/s | MLX | Estimated | 62.3 GB | 8bit is the highest practical quality here. 80.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 62.3 GB headroom leaves workable context margin. |
| 14 | GLM-4.7-Flash | 232 | 8bit | 36.8 tok/s | llama.cpp | Estimated | 60.2 GB | 8bit is the highest practical quality here. 36.8 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 60.2 GB headroom leaves workable context margin. |
| 15 | Qwen 3 8B | 211 | 8bit | 63.1 tok/s | LM Studio | Estimated | 86.7 GB | 8bit is the highest practical quality here. 63.1 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper (mixed backend) as the best runtime hint. 86.7 GB headroom leaves workable context margin. |
| 16 | Llama 2 7B | 209 | 8bit | 36.4 tok/s | llama.cpp | Estimated | 85.2 GB | 8bit is the highest practical quality here. 36.4 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 85.2 GB headroom leaves workable context margin. |
| 17 | Qwen3.5-9B | 206 | 8bit | 15.0 tok/s | llama.cpp | Estimated | 86.1 GB | 8bit is the highest practical quality here. 15.0 tok/s estimated from nearby benchmark coverage, with llama.cpp backend as the best runtime hint. 86.1 GB headroom leaves workable context margin. |
| 18 | Qwen 2.5 14B | 199 | 8bit | Measure it | Best available | Fit-first | 81.0 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 81.0 GB headroom leaves workable context margin. |
| 19 | Phi-4 14B | 198 | 8bit | Measure it | Best available | Fit-first | 81.5 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 81.5 GB headroom leaves workable context margin. |
| 20 | Llama 3.1 8B | 189 | 8bit | Measure it | Best available | Fit-first | 87.0 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 87.0 GB headroom leaves workable context margin. |
| 21 | Qwen 2.5 7B | 189 | 8bit | Measure it | Best available | Fit-first | 88.0 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 88.0 GB headroom leaves workable context margin. |
| 22 | Mistral 7B v0.3 | 188 | 8bit | Measure it | Best available | Fit-first | 87.7 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 87.7 GB headroom leaves workable context margin. |
| 23 | GLM-4.5-Air | 179 | 6bit | 54.0 tok/s | MLX | Estimated | 20.0 GB | 6bit is the highest practical quality here. 54.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 20.0 GB headroom leaves workable context margin. |
| 24 | Qwen3.5-122B-A10B | 174 | Q5 | 57.0 tok/s | MLX | Estimated | 23.7 GB | Q5 is the highest practical quality here. 57.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 23.7 GB headroom leaves workable context margin. |
| 25 | Qwen3-Coder-Next | 172 | 8bit | 74.0 tok/s | MLX | Estimated | 20.2 GB | 8bit is the highest practical quality here. 74.0 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 20.2 GB headroom leaves workable context margin. |
| 26 | Gemma 3 4B | 150 | 8bit | 100.5 tok/s | LM Studio | Estimated | 90.4 GB | 8bit is the highest practical quality here. 100.5 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper (mixed backend) as the best runtime hint. 90.4 GB headroom leaves workable context margin. |
| 27 | Qwen 3 4B | 150 | 8bit | 143.2 tok/s | MLX | Estimated | 90.6 GB | 8bit is the highest practical quality here. 143.2 tok/s estimated from nearby benchmark coverage, with MLX backend as the best runtime hint. 90.6 GB headroom leaves workable context margin. |
| 28 | Qwen 3 0.6B | 145 | 8bit | 184.4 tok/s | LM Studio | Estimated | 94.8 GB | 8bit is the highest practical quality here. 184.4 tok/s estimated from nearby benchmark coverage, with the LM Studio wrapper (mixed backend) as the best runtime hint. 94.8 GB headroom leaves workable context margin. |
| 29 | Llama 3.2 1B | 124 | 8bit | Measure it | Best available | Fit-first | 94.1 GB | 8bit is the highest practical quality here. Speed still needs direct measurement. 94.1 GB headroom leaves workable context margin. |
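Rows marked "Measure it" lack benchmark coverage and need a direct measurement on this machine. The arithmetic is just generated tokens divided by generation time. A minimal sketch, assuming Ollama's `/api/generate` response reports `eval_count` (tokens generated) and `eval_duration` (nanoseconds); treat those field names as assumptions and check your runtime's own docs, since LM Studio and llama.cpp report timings differently:

```python
# Hedged sketch: tok/s from an Ollama-style generation response.
# The field names eval_count / eval_duration are assumptions taken
# from Ollama's API; verify against your installed version.

def tokens_per_second(resp: dict) -> float:
    """tok/s = tokens generated / generation wall time in seconds."""
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

# Made-up numbers for illustration: 256 tokens in 11.6 s is about
# 22 tok/s, the estimated figure for Qwen 3 32B above.
sample = {"eval_count": 256, "eval_duration": 11.6e9}
print(round(tokens_per_second(sample), 1))  # → 22.1
```

Run a prompt long enough to amortize prompt processing; very short generations overweight prefill and understate steady-state tok/s.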
Machine
- Unified memory: 96 GB
- MSRP: $3,999
- Form factor: Mac Studio
- Chip: M3 Ultra
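The headroom column is consistent with simple memory arithmetic: quantized weights take roughly params × bits / 8 bytes, plus some runtime overhead, subtracted from this machine's 96 GB of unified memory. A minimal sketch assuming a flat ~1 GB overhead (the report's exact formula is not stated, so this is an approximation):

```python
# Rough memory math behind the "Headroom" column. The 1 GB overhead
# is an assumed constant, not the report's actual formula.
TOTAL_RAM_GB = 96.0  # Mac Studio M3 Ultra in this report

def footprint_gb(params_b: float, bits: int, overhead_gb: float = 1.0) -> float:
    """Approximate resident size of a quantized model in GB."""
    return params_b * bits / 8 + overhead_gb

def headroom_gb(params_b: float, bits: int) -> float:
    """Unified memory left over for KV cache, OS, and other apps."""
    return TOTAL_RAM_GB - footprint_gb(params_b, bits)

# A 32B model at 8-bit: ~33 GB resident, ~63 GB free, in line with
# the 63.0 GB headroom listed for Qwen 3 32B above.
print(round(headroom_gb(32, 8), 1))  # → 63.0
```

Note that long contexts eat into headroom through the KV cache, so the listed figures are best read as an upper bound on usable margin.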