
M3 Max (GPU count not published, 64 GB) — benchmark record

M3 Max (GPU count not published, 64 GB) local LLM benchmarks on Apple Silicon, with 4 published rows across 1 model. Peak published speed is 8.2 tok/s on Llama 3.3 70B. Published runtimes include llama.cpp. Evidence state and next-step ranking links are included.

4 benchmark rows · 1 model tested · 8.2 fastest avg tok/s (Llama 3.3 70B) · 0 Silicon Score Lab rows

Best published speed here is 8.2 tok/s on Llama 3.3 70B at Q4_K - Medium. Longest published context on this page is 32k. This page is evidence, not the full buying answer.
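To turn these throughput numbers into wall-clock expectations, a rough estimate is prompt tokens divided by prompt tok/s plus output tokens divided by avg tok/s. A minimal sketch, using the published 32k-context row (50.3 prompt tok/s, 6.1 avg tok/s) and assuming both rates stay constant, which real runs only approximate as the KV cache grows:

```python
# Rough end-to-end latency estimate from the published M3 Max rows.
# Assumption: prompt processing and generation rates are constant;
# in practice both degrade as context fills.

def estimated_latency_s(prompt_tokens: int, output_tokens: int,
                        prompt_tps: float, gen_tps: float) -> float:
    """Seconds to process the prompt plus generate the reply."""
    return prompt_tokens / prompt_tps + output_tokens / gen_tps

# Published 32k-context row: 50.3 prompt tok/s, 6.1 avg tok/s.
latency = estimated_latency_s(32_000, 500, 50.3, 6.1)
print(f"{latency:.0f} s")  # about 718 s, i.e. roughly 12 minutes
```

At full 32k context, prompt processing dominates: over ten minutes before the first generated token, which matters more for buying decisions than the headline 8.2 tok/s.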

Evidence state: 4 linked reference rows and no Silicon Score Lab rows yet.

Published runtimes here: llama.cpp.


Raw benchmark rows for M3 Max (GPU count not published, 64 GB)

Rows are sorted by avg tok/s, descending. Click a row's source badge to open the original measurement page.

| Model | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| Llama 3.3 70B | Q4_K - Medium | | 258 | 8.2 tok/s | 67.9 tok/s | llama.cpp | ref |
| Llama 3.3 70B | Q4_K - Medium | | 8k | 7.5 tok/s | 65.2 tok/s | llama.cpp | ref |
| Llama 3.3 70B | Q4_K - Medium | | 16k | 7.0 tok/s | 59.5 tok/s | llama.cpp | ref |
| Llama 3.3 70B | Q4_K - Medium | | 32k | 6.1 tok/s | 50.3 tok/s | llama.cpp | ref |
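The table shows generation speed falling as context grows. A small sketch that computes the relative slowdown from the published 8k/16k/32k rows (values copied from the table above):

```python
# Context -> published avg tok/s for Llama 3.3 70B Q4_K - Medium on this chip.
rows = {8_000: 7.5, 16_000: 7.0, 32_000: 6.1}
base = rows[8_000]

for ctx, tps in sorted(rows.items()):
    # Express each row as a fraction of the 8k-context speed.
    print(f"{ctx // 1000}k context: {tps / base:.0%} of 8k speed")
```

Doubling context from 8k to 16k costs about 7% of generation speed; going to 32k costs about 19%, so long-context work on this configuration is a meaningful but not catastrophic slowdown.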

Use Rankings when you need the best overall answer for a Mac, and keep this page open when you need the evidence behind that recommendation.

If you are comparing local Apple Silicon against rented cloud hardware, use AI Data Center Index for current GPU rental context.

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
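If you want to re-sort or filter the export yourself, the CSV can be loaded with the standard library. A minimal sketch; the column names below are assumptions inferred from the table on this page, not the actual benchmarks.csv schema, and the inline sample stands in for the downloaded file:

```python
# Hypothetical consumption of benchmarks.csv; column names are assumed.
import csv
import io

sample = """model,quant,context,avg_tok_s,prompt_tok_s,runtime
Llama 3.3 70B,Q4_K - Medium,8k,7.5,65.2,llama.cpp
Llama 3.3 70B,Q4_K - Medium,32k,6.1,50.3,llama.cpp
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Rankings-style ordering: fastest average generation speed first.
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
print(rows[0]["context"])  # the 8k row leads at 7.5 tok/s
```

Swap `io.StringIO(sample)` for `open("benchmarks.csv")` once you have verified the real header row.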

Data sourced from Silicon Score Lab measurements and community reference runs. See all chip families →