
M1 Max (64 GB) — benchmark record

Local LLM benchmarks for the M1 Max (64 GB) on Apple Silicon, covering 10 published rows across 5 models. Peak published speed is 58.5 tok/s on Qwen3-Coder-30B-A3B, and the largest published fit uses 22.0 GB. Published runtimes: llama.cpp, LM Studio (llama.cpp), LM Studio (MLX), and MLX.

10 — Benchmark rows
5 — Models tested
58.5 — Fastest avg tok/s (Qwen3-Coder-30B-A3B)
0 — Lab benchmarks

Best published speed here is 58.5 tok/s on Qwen3-Coder-30B-A3B at IQ4_XS. Largest published memory footprint is Nemotron-3-Nano-30B-A3B at Q4_K_XL, using 22.0 GB. Longest published context on this page is 8k. This page is evidence, not the full buying answer.

Based on 10 external benchmarks; no lab runs yet.

Published runtimes: llama.cpp, LM Studio (llama.cpp), LM Studio (MLX), MLX.

Published model coverage: Qwen3-Coder-30B-A3B, Qwen3.5-35B-A3B, Nemotron-3-Nano-30B-A3B, GLM-4.7-Flash, and Qwen3.5-27B. The fastest row, largest footprint, runtime list, and longest context figures match the summary above.

Raw benchmark rows for M1 Max (64 GB)

Rows are sorted by avg tok/s, descending; each row's source badge links to the original measurement page.

| Model | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| Qwen3-Coder-30B-A3B | IQ4_XS | 16.1 GB | 4k | 58.5 | 132.1 | llama.cpp | ref |
| Qwen3.5-35B-A3B | 4bit | | 8k | 57.6 | 431.0 | MLX | ref |
| Qwen3.5-35B-A3B | 4bit | | | 57.0 | | LM Studio (MLX) | ref |
| Nemotron-3-Nano-30B-A3B | Q4_K_XL | 22.0 GB | 4k | 43.7 | 136.9 | llama.cpp | ref |
| GLM-4.7-Flash | Q4_K_XL | 17.0 GB | 4k | 36.8 | 99.4 | llama.cpp | ref |
| Qwen3.5-35B-A3B | Q4_K - Medium | | | 29.0 | | LM Studio (llama.cpp) | ref |
| Qwen3.5-27B | 4bit | | 8k | 15.0 | 67.0 | MLX | ref |
| Qwen3.5-27B | Q6_K | | | 12.0 | | llama.cpp | ref |
| Qwen3.5-27B | Q4_K - Medium | | | 11.5 | | llama.cpp | ref |
| Qwen3.5-27B | Q8_0 | | | 10.5 | | llama.cpp | ref |
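The table's ordering can be reproduced in a few lines. The sketch below uses a small list of dicts whose field names mirror the table columns; they are illustrative and not the actual schema of the exported dataset.

```python
# Sketch: sort published benchmark rows by average tok/s, descending.
# Field names are illustrative, not the real export schema.
rows = [
    {"model": "Qwen3.5-27B", "quant": "Q8_0", "avg_tok_s": 10.5},
    {"model": "Qwen3-Coder-30B-A3B", "quant": "IQ4_XS", "avg_tok_s": 58.5},
    {"model": "Nemotron-3-Nano-30B-A3B", "quant": "Q4_K_XL", "avg_tok_s": 43.7},
    {"model": "Qwen3.5-35B-A3B", "quant": "4bit", "avg_tok_s": 57.6},
]

# Highest throughput first, matching the table order on this page.
rows.sort(key=lambda r: r["avg_tok_s"], reverse=True)
fastest = rows[0]
print(f"{fastest['model']} ({fastest['quant']}): {fastest['avg_tok_s']} tok/s")
```

The same one-liner sort works on the full dataset once it is loaded from any of the export files.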

Use Rankings when you need the best overall answer for a Mac, and keep this page open when you need the evidence behind that recommendation.

If you are comparing local Apple Silicon against rented cloud hardware, use AI Data Center Index for current GPU rental context.

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
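If you want to analyze the CSV export yourself, a minimal loader might look like the sketch below. The column names (`avg_tok_s` in particular) are assumptions based on the table headers on this page; check the actual header row of benchmarks.csv before relying on them.

```python
import csv

def load_rows(path):
    """Read a benchmarks CSV and return rows sorted by avg tok/s, descending.

    Assumes an 'avg_tok_s' column; rows without a numeric value are skipped.
    Column names are guesses from the page's table headers, not a spec.
    """
    with open(path, newline="") as f:
        rows = []
        for row in csv.DictReader(f):
            try:
                row["avg_tok_s"] = float(row["avg_tok_s"])
            except (KeyError, TypeError, ValueError):
                continue  # no published speed for this row
            rows.append(row)
    return sorted(rows, key=lambda r: r["avg_tok_s"], reverse=True)
```

From there, filtering to a single runtime or quant is an ordinary list comprehension over the returned rows.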

Data from in-house lab measurements plus community-published benchmarks.