
M3 Max (40-core GPU, 48 GB) — benchmark record

Local LLM benchmarks for the M3 Max (40-core GPU, 48 GB) Apple Silicon Mac, with 2 published rows across 2 models. Peak published speed is 149.0 tok/s on Llama 3.2 1B. The largest published fit uses 3.6 GB. Published runtimes include llama.cpp and llamafile.

2 benchmark rows · 2 models tested · 149.0 fastest avg tok/s (llama-3-2-1b-instruct) · 0 lab benchmarks

Best published speed here is 149.0 tok/s on Llama 3.2 1B at Q4_K - Medium. The largest published memory footprint is Llama 2 7B at Q4_0, using 3.6 GB. The longest published context on this page is 512 tokens. This page is evidence, not the full buying answer.
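The 3.6 GB footprint above can be sanity-checked with the usual back-of-envelope rule: quantized model size is roughly parameter count times effective bits per weight. A minimal sketch, assuming approximate effective bit rates for llama.cpp quant formats (Q4_0 ≈ 4.5 bits/weight including per-block scale overhead) and Llama 2 7B's actual 6.74B parameter count:

```python
def estimate_gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough quantized model size in GiB: parameters x effective bits per weight.

    bits_per_weight is the *effective* rate (e.g. Q4_0 ~ 4.5, Q4_K_M ~ 4.8),
    which folds in per-block scale overhead; these values are approximations.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# Llama 2 7B (6.74B params) at Q4_0: lands near the 3.6 GB published above
print(round(estimate_gguf_size_gb(6.74, 4.5), 1))
```

This estimates weights only; KV cache and runtime buffers add more on top, which is why a 512-token context still fits comfortably in 48 GB.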

Based on 2 external benchmarks; no lab runs yet.

Published runtimes: llama.cpp, llamafile.

Published model coverage includes Llama 3.2 1B and Llama 2 7B; published runtime coverage on this chip includes llama.cpp and llamafile.

Other published M3 Max (40-core GPU) variants: 64 GB · 128 GB

This chip is part of a family. Compare all M3 Max (40-core GPU) RAM variants →

Raw benchmark rows for M3 Max (40-core GPU, 48 GB)

Rows are sorted by avg tok/s, descending. Click a source badge to open the original measurement page.

Model          | Quant         | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime   | Source
Llama 3.2 1B   | Q4_K - Medium | —        | —       | 149.0     | 3399.1       | llamafile | ref
Llama 2 7B     | Q4_0          | 3.6 GB   | 512     | 65.8      | 691.0        | llama.cpp | ref

Use Rankings when you need the best overall answer for a Mac, and keep this page open when you need the evidence behind that recommendation.

If you are comparing local Apple Silicon against rented cloud hardware, use AI Data Center Index for current GPU rental context.

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
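The CSV export can be filtered and re-sorted locally. A minimal sketch using an inline sample that mirrors the table above; the real benchmarks.csv schema may differ (these column names are assumptions):

```python
import csv
import io

# Hypothetical rows mirroring the table above; column names are assumed,
# not taken from the actual benchmarks.csv schema.
sample = """model,quant,ram_gb,context,avg_tok_s,prompt_tok_s,runtime
Llama 3.2 1B,Q4_K - Medium,,,149.0,3399.1,llamafile
Llama 2 7B,Q4_0,3.6,512,65.8,691.0,llama.cpp
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Sort by average tokens/sec, descending -- the same order as the table
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
for r in rows:
    print(f'{r["model"]}: {r["avg_tok_s"]} tok/s on {r["runtime"]}')
```

Swapping `io.StringIO(sample)` for `open("benchmarks.csv")` would run the same logic over the full dataset, once the column names are confirmed against the export.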

Data from in-house lab measurements plus community-published benchmarks. See all chip families →