M4 — benchmark record
M4 local LLM benchmarks across 16 GB and 32 GB RAM tiers on Apple Silicon Macs. 15 published rows across 10 models, each with an explicit evidence state and a RAM-tier comparison. Peak published speed is 92.0 tok/s.
15 benchmark rows · 10 models tested · 2 RAM configurations · 92.0 tok/s fastest average
RAM configurations
Configurations differ only in unified memory capacity: more RAM lets larger models fit, while throughput at a given model size is similar across tiers.
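The "more RAM = larger models fit" rule can be sketched with a rough back-of-the-envelope check. This is a minimal sketch, not part of the dataset: the `fits_in_ram` helper, the ~4.5 bits/weight figure for Q4_K - Medium, and the 70% usable-RAM fraction are all assumptions for illustration.

```python
def fits_in_ram(params_billion: float, bits_per_weight: float,
                ram_gb: float, usable_fraction: float = 0.7) -> bool:
    """Rough check whether a quantized model's weights fit in unified memory.

    Assumes weights dominate; the OS, runtime, and KV cache take the rest,
    so only a fraction of total RAM (70% here, an assumption) is usable.
    """
    # 1e9 params * (bits / 8) bytes per param = GB of weights
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb <= ram_gb * usable_fraction

# An 8B model at ~4.5 bits/weight needs ~4.5 GB of weights
print(fits_in_ram(8, 4.5, 16))   # fits on a 16 GB Mac
# A 27B model at the same quant needs ~15 GB of weights alone
print(fits_in_ram(27, 4.5, 16))  # does not fit comfortably in 16 GB
```

This lines up with the table below: dense 8B-class models run fast on the 16 GB tier, while 24B-27B models either need the 32 GB tier or fall to near-zero throughput from swapping.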
All benchmark rows — M4
Sorted by avg tok/s descending. The Source column references the original measurement.
| Chip (RAM) | Model | Quant | Avg tok/s | Runtime | Source |
|---|---|---|---|---|---|
| M4 (16 GB) | Qwen3.5-4B | Q4_K - Medium | 92.0 tok/s | Ollama | ref |
| M4 (16 GB) | DeepSeek R1 Distill Llama 8B | Q4_K - Medium | 78.0 tok/s | MLX | ref |
| M4 (16 GB) | Llama 3.1 8B | Q4_K - Medium | 75.0 tok/s | Ollama | ref |
| M4 (16 GB) | Qwen3.5-9B | Q4_K - Medium | 72.0 tok/s | LM Studio | ref |
| M4 (16 GB) | Ministral 3 8B | Q4_K - Medium | 72.0 tok/s | MLX | ref |
| M4 (16 GB) | Phi-4 14B | Q4_K - Medium | 38.0 tok/s | Ollama | ref |
| M4 (32 GB) | Gemma 3 27B | Q4_0 | 5.7 tok/s | llama.cpp | ref |
| M4 (16 GB) | Qwen3.5-9B | Q4_0 | 4.1 tok/s | llama.cpp | ref |
| M4 (16 GB) | Devstral Small 2 24B | Q4_0 | 3.4 tok/s | llama.cpp | ref |
| M4 (16 GB) | Qwen3.5-9B | Q4_K - Small | 3.1 tok/s | llama.cpp | ref |
| M4 (16 GB) | Qwen3.5-9B | Q6_K | 2.2 tok/s | llama.cpp | ref |
| M4 (16 GB) | Qwen3.5-35B-A3B | Q4_K - Medium | 1.3 tok/s | llama.cpp | ref |
| M4 (16 GB) | Devstral Small 2 24B | Q4_1 | 0.1 tok/s | llama.cpp | ref |
| M4 (16 GB) | Devstral Small 2 24B | Q4_K - Medium | 0.0 tok/s | llama.cpp | ref |
| M4 (16 GB) | Qwen3.5-27B | Q4_K - Medium | 0.0 tok/s | llama.cpp | ref |
Data
benchmarks.json — full dataset · chips.json — chip summaries · benchmarks.csv — CSV export
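For working with the CSV export, a short loading sketch may help. The column names (`chip`, `ram_gb`, `model`, `quant`, `avg_tok_s`, `runtime`) are assumptions about the export's shape, not confirmed field names; an inline sample stands in for `benchmarks.csv`.

```python
import csv
import io

# Stand-in for benchmarks.csv; header names are assumed, not taken
# from the actual export.
sample = """chip,ram_gb,model,quant,avg_tok_s,runtime
M4,16,Qwen3.5-4B,Q4_K - Medium,92.0,Ollama
M4,32,Gemma 3 27B,Q4_0,5.7,llama.cpp
M4,16,Phi-4 14B,Q4_K - Medium,38.0,Ollama
"""

# Parse rows as dicts and sort fastest-first, matching the table above
rows = list(csv.DictReader(io.StringIO(sample)))
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
for r in rows:
    print(f'{r["model"]:>12} {r["avg_tok_s"]:>6} tok/s ({r["runtime"]})')
```

To read the real file, replace `io.StringIO(sample)` with `open("benchmarks.csv", newline="")`.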
Data from in-house lab measurements plus community-published benchmarks.