
M4 Pro (16-core GPU) — LLM Benchmarks

Measured LLM inference benchmarks for the M4 Pro (16-core GPU) across all RAM configurations (24 GB, 48 GB, and 64 GB): 8 benchmark rows across 3 models, showing how RAM affects throughput. All numbers come from real runs, not estimates.

Benchmark rows: 8
Models tested: 3
RAM configurations: 3
Fastest avg tok/s: 111.9

Each configuration differs only in unified memory: more RAM lets larger models fit, while throughput at a given model size is similar across RAM tiers.
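The "larger models fit" point can be made concrete with a back-of-the-envelope RAM estimate. This is a sketch, not the site's measurement method: the ~4.85 bits-per-weight figure for Q4_K - Medium and the ~20% overhead factor (KV cache, runtime buffers) are both rough assumptions for illustration.

```python
def estimated_ram_gb(params_billions: float,
                     bits_per_weight: float = 4.85,
                     overhead: float = 1.2) -> float:
    """Rough RAM needed to run a quantized model, in GB (illustrative only)."""
    # Weight storage: N billion params * bits/weight / 8 bits per byte ≈ GB.
    weights_gb = params_billions * bits_per_weight / 8
    # Add a flat multiplier for KV cache and runtime overhead (assumed ~20%).
    return round(weights_gb * overhead, 1)

# The three model sizes benchmarked on this page.
for params in (1, 8, 14):
    print(f"{params}B -> ~{estimated_ram_gb(params)} GB")
```

Under these assumptions even the 14B model fits comfortably in the 24 GB tier, which is consistent with all three models appearing in every RAM configuration above.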

All benchmark rows — M4 Pro (16-core GPU)

Sorted by avg tok/s descending. Click source badge to see original measurement.

| Chip (RAM) | Model | Quant | RAM req. | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| M4 Pro (16-core GPU, 64 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 111.9 | 1858.9 | — | ref |
| M4 Pro (16-core GPU, 48 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 111.0 | 1754.6 | — | ref |
| M4 Pro (16-core GPU, 24 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | — | 110.9 | 1823.8 | — | ref |
| M4 Pro (16-core GPU, 24 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | — | 30.5 | 298.0 | — | ref |
| M4 Pro (16-core GPU, 48 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | — | 30.2 | 302.4 | — | ref |
| M4 Pro (16-core GPU, 48 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | — | 16.8 | 161.1 | — | ref |
| M4 Pro (16-core GPU, 64 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | — | 16.1 | 151.0 | — | ref |
| M4 Pro (16-core GPU, 24 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | — | 15.2 | 144.3 | — | ref |

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
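The CSV export can be loaded and re-sorted with the standard library. A minimal sketch, assuming column names like `avg_tok_s`; check the actual header row of benchmarks.csv before relying on them.

```python
import csv

def top_rows(path: str, n: int = 3) -> list[dict]:
    """Return the n rows with the highest average tok/s from a benchmarks CSV.

    Column name "avg_tok_s" is an assumption; adjust to match the real export.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Sort descending by average tokens per second, matching the table above.
    rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
    return rows[:n]
```

The JSON export could be handled the same way with `json.load` and the same sort key.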

Data sourced from factory lab measurements and community reference runs.