
M3 Ultra (512 GB) — benchmark record

Local LLM benchmarks for the M3 Ultra (512 GB) Apple Silicon Mac, with 13 published rows across 6 models. Peak published speed is 43.0 tok/s on Devstral Small 1.1, and the largest published fit uses 415.4 GB. Published runtimes include LM Studio and MLX.

13 benchmark rows
6 models tested
43.0 fastest avg tok/s (Devstral Small 1.1)
0 lab benchmarks

The best published speed here is 43.0 tok/s, on Devstral Small 1.1 at 4bit. The largest published memory footprint is GLM-5 at 4bit, using 415.4 GB. The longest published context on this page is 128k. This page is evidence, not the full buying answer.

Based on 13 external benchmarks; no lab runs yet.

Published runtimes: LM Studio, MLX.

Published model coverage includes Devstral Small 1.1, Qwen3.5-397B-A17B, Qwen 3 235B-A22B, and GLM-5, plus 2 more published models.

Other published M3 Ultra variants: 256 GB

This chip is part of a family. Compare all M3 Ultra RAM variants →

Raw benchmark rows for M3 Ultra (512 GB)

Rows are sorted by avg tok/s, descending. Click a source badge to open the original measurement page.

| Model | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| Devstral Small 1.1 | 4bit | — | — | 43.0 | — | LM Studio | ref |
| Qwen3.5-397B-A17B | q4.1bit | — | — | 40.2 | — | MLX | ref |
| Qwen 3 235B-A22B | 4bit | — | — | 27.4 | — | LM Studio | ref |
| GLM-5 | 4bit | 391.8 GB | 1k | 16.7 | 187.0 | MLX | ref |
| Llama 3.3 70B | 4bit | — | 8k | 15.5 | 150.0 | LM Studio | ref |
| GLM-5 | 4bit | 394.1 GB | 4k | 13.7 | 180.1 | MLX | ref |
| GLM-5 | 4bit | 396.7 GB | 8k | 13.2 | 154.1 | MLX | ref |
| GLM-5 | 4bit | 402.7 GB | 16k | 12.0 | 117.4 | MLX | ref |
| Gemma 3 27B | bf16 | 52.6 GB | 128k | 11.2 | — | LM Studio | ref |
| GLM-5 | 4bit | 415.4 GB | 33k | 10.7 | 77.7 | MLX | ref |
| Llama 3.3 70B | 4bit | — | 40k | 9.6 | 103.0 | LM Studio | ref |
| Llama 3.3 70B | 8bit | — | 8k | 8.5 | 150.0 | LM Studio | ref |
| Llama 3.3 70B | 8bit | — | 40k | 6.5 | 101.0 | LM Studio | ref |
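The GLM-5 rows above illustrate the usual pattern: decode speed falls and memory grows as context length increases. A minimal sketch, using the published values from this page, that sorts the rows the same way the table does (avg tok/s, descending):

```python
# GLM-5 (4bit, MLX) rows copied from the published table on this page.
glm5_rows = [
    {"context_k": 1,  "ram_gb": 391.8, "avg_tok_s": 16.7, "prompt_tok_s": 187.0},
    {"context_k": 4,  "ram_gb": 394.1, "avg_tok_s": 13.7, "prompt_tok_s": 180.1},
    {"context_k": 8,  "ram_gb": 396.7, "avg_tok_s": 13.2, "prompt_tok_s": 154.1},
    {"context_k": 16, "ram_gb": 402.7, "avg_tok_s": 12.0, "prompt_tok_s": 117.4},
    {"context_k": 33, "ram_gb": 415.4, "avg_tok_s": 10.7, "prompt_tok_s": 77.7},
]

# Same ordering rule as the table: fastest average decode speed first.
fastest_first = sorted(glm5_rows, key=lambda r: r["avg_tok_s"], reverse=True)

print(fastest_first[0]["context_k"])   # 1 — the 1k-context row is fastest
# Going from 1k to 33k context costs roughly a third of the decode speed:
print(round(fastest_first[-1]["avg_tok_s"] / fastest_first[0]["avg_tok_s"], 2))
```

Note that prompt processing (prefill) degrades faster than decode here: 187.0 → 77.7 tok/s over the same context range.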

Use Rankings when you need the best overall answer for a Mac, and keep this page open when you need the evidence behind that recommendation.

If you are comparing local Apple Silicon against rented cloud hardware, use AI Data Center Index for current GPU rental context.

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
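If you pull the CSV export into a script, filtering and ranking is a one-liner per step. A hypothetical sketch: the column names (`chip`, `model`, `quant`, `avg_tok_s`) and the inline sample rows are assumptions for illustration — check the actual header row of benchmarks.csv before relying on them.

```python
import csv
import io

# Stand-in for open("benchmarks.csv") — two rows from this page, with
# assumed column names. Verify against the real export's header.
sample = """chip,model,quant,avg_tok_s
M3 Ultra (512 GB),Devstral Small 1.1,4bit,43.0
M3 Ultra (512 GB),GLM-5,4bit,16.7
"""

# Keep only rows for this chip, then take the fastest by avg tok/s.
rows = [r for r in csv.DictReader(io.StringIO(sample))
        if r["chip"] == "M3 Ultra (512 GB)"]
best = max(rows, key=lambda r: float(r["avg_tok_s"]))

print(best["model"])  # Devstral Small 1.1
```

The same filter-then-rank shape works on benchmarks.json with `json.load` in place of `csv.DictReader`.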

Data from in-house lab measurements plus community-published benchmarks. See all chip families →