M3 Max (GPU count not published, 64 GB) — benchmark record
Local LLM benchmarks for the M3 Max (GPU count not published, 64 GB) on Apple Silicon, with 24 published rows across 2 models. Peak published speed is 10.4 tok/s on Qwen 3 32B. Published runtimes: llama.cpp and Ollama.
Quick take
Best published speed here is 10.4 tok/s on Qwen 3 32B at Q8_0. Longest published context on this page is 32k. Treat this page as evidence, not as a complete buying recommendation.
Based on 24 external benchmarks; no lab runs yet.
Published runtimes: llama.cpp, Ollama.
Current published coverage
Published model coverage includes Qwen 3 32B, Llama 3.3 70B. Published runtime coverage on this chip includes llama.cpp, Ollama. Fastest published row is 10.4 tok/s on Qwen 3 32B at Q8_0. Longest published context is 32k.
Raw benchmark rows for M3 Max (GPU count not published, 64 GB)
Rows are sorted by avg tok/s descending. Click a source badge to open the original measurement page.
| Model | Quant | Avg tok/s | Runtime | Source |
|---|---|---|---|---|
| Qwen 3 32B | Q8_0 | 10.4 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 10.3 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 10.3 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 10.3 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 10.3 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 10.3 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 10.2 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 10.1 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 10.1 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 10.1 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 9.9 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 9.9 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 9.7 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 9.7 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 9.2 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 9.2 tok/s | Ollama | ref |
| Qwen 3 32B | Q8_0 | 8.6 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 8.6 tok/s | Ollama | ref |
| Llama 3.3 70B | Q4_K - Medium | 8.2 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 7.6 tok/s | llama.cpp | ref |
| Qwen 3 32B | Q8_0 | 7.5 tok/s | Ollama | ref |
| Llama 3.3 70B | Q4_K - Medium | 7.5 tok/s | llama.cpp | ref |
| Llama 3.3 70B | Q4_K - Medium | 7.0 tok/s | llama.cpp | ref |
| Llama 3.3 70B | Q4_K - Medium | 6.1 tok/s | llama.cpp | ref |
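The raw rows above can be summarized with a few lines of Python. A minimal sketch using only the avg tok/s values transcribed from the table; the dictionary keys are informal labels for this sketch, not field names from the dataset:

```python
from statistics import median

# Avg tok/s values transcribed from the benchmark table above
# (20 rows for Qwen 3 32B, 4 rows for Llama 3.3 70B; 24 total).
rows = {
    "Qwen 3 32B @ Q8_0": [
        10.4, 10.3, 10.3, 10.3, 10.3, 10.3, 10.2,
        10.1, 10.1, 10.1, 9.9, 9.9, 9.7, 9.7,
        9.2, 9.2, 8.6, 8.6, 7.6, 7.5,
    ],
    "Llama 3.3 70B @ Q4_K - Medium": [8.2, 7.5, 7.0, 6.1],
}

# Report sample size, peak, and median per model/quant pair.
for label, speeds in rows.items():
    print(f"{label}: n={len(speeds)}, "
          f"peak={max(speeds):.1f} tok/s, "
          f"median={median(speeds):.1f} tok/s")
```

The median is a useful complement to the peak here: the Qwen 3 32B rows cluster around 10 tok/s, so the 10.4 tok/s headline figure is representative rather than an outlier.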
Next step
Use Rankings when you need the best overall answer for a Mac, and keep this page open when you need the evidence behind that recommendation.
If you are comparing local Apple Silicon against rented cloud hardware, use the AI Data Center Index for current GPU rental context.
Data
benchmarks.json — full dataset · chips.json — chip summaries · benchmarks.csv — CSV export
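For readers who want to script against the CSV export, a minimal parsing sketch. The header names (`model`, `quant`, `avg_tok_s`, `runtime`) and the inline sample rows are assumptions for illustration; check the actual header row of `benchmarks.csv` before relying on them:

```python
import csv
import io

# Inline stand-in for benchmarks.csv; the column names below are
# assumptions, not the confirmed schema of the real export.
sample = """model,quant,avg_tok_s,runtime
Qwen 3 32B,Q8_0,10.4,llama.cpp
Llama 3.3 70B,Q4_K - Medium,8.2,llama.cpp
Qwen 3 32B,Q8_0,7.5,Ollama
"""

# Sort rows fastest-first, mirroring the table ordering on this page.
reader = csv.DictReader(io.StringIO(sample))
rows = sorted(reader, key=lambda r: float(r["avg_tok_s"]), reverse=True)

for r in rows:
    print(r["model"], r["quant"], r["avg_tok_s"], r["runtime"])
```

To run this against the real export, replace `io.StringIO(sample)` with `open("benchmarks.csv", newline="")` and adjust the key names to match the file's header.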
Data from in-house lab measurements plus community-published benchmarks. See all chip families →