Canonical Rankings

Best Macs for this model

Qwen3.5-122B-A10B ranked across the Mac lineup at the best practical quantization for each machine, using the best available runtime evidence. The model picker focuses on current-market choices.

29 ranked Macs (28 historical models hidden), using the strongest current runtime evidence for each row. Static paths cover only canonical model pages; sort and quantization stay as query state.
Rank | Mac | Score | Quant | Tok/s | Runtime | Fits | Headroom | Context | Evidence | Price
---- | --- | ----- | ----- | ----- | ------- | ---- | -------- | ------- | -------- | -----
1 | Mac Studio M3 Ultra 256GB | 387 | 8bit | 43.0 | MLX | Yes | 141.1 GB | 262k | Community row | $7,499
2 | Mac Pro M2 Ultra 192GB | 371 | 8bit | 57.0 | MLX | Yes | 77.1 GB | 262k | Estimated | $6,999
3 | MacBook Pro M5 Max 128GB 16-inch | 336 | Q6_K | 60.6 | MLX | Yes | 33.6 GB | 165k | Estimated | $5,399
4 | Mac Studio M4 Max 128GB | 322 | Q6_K | 57.0 | MLX | Yes | 33.6 GB | 165k | Estimated | $4,499
5 | MacBook Pro M4 Max 128GB 16-inch | 322 | Q6_K | 57.0 | MLX | Yes | 33.6 GB | 165k | Estimated | $5,999
6 | Mac Studio M3 Ultra 96GB | 306 | 5bit | 57.0 | MLX | Yes | 23.7 GB | 110k | Estimated | $3,999
7 | Mac Studio M4 Max 64GB | 263 | Q3_K_L | 57.0 | MLX | Yes | 11.2 GB | 26k | Estimated | $2,999
8 | MacBook Pro M4 Max 64GB 16-inch | 263 | Q3_K_L | 57.0 | MLX | Yes | 11.2 GB | 26k | Estimated | $4,499
9 | Mac Mini M4 Pro 48GB | 260 | mlx-dynamic-2.7bpw | 57.0 | MLX | Yes | 8.4 GB | 21k | Estimated | $1,599
10 | MacBook Pro M4 Pro 48GB 14-inch | 260 | mlx-dynamic-2.7bpw | 57.0 | MLX | Yes | 8.4 GB | 21k | Estimated | $2,499
11 | Mac Studio M4 Max 48GB | 260 | mlx-dynamic-2.7bpw | 57.0 | MLX | Yes | 8.4 GB | 21k | Estimated | $2,499
12 | MacBook Pro M4 Pro 48GB 16-inch | 260 | mlx-dynamic-2.7bpw | 57.0 | MLX | Yes | 8.4 GB | 21k | Estimated | $2,999
13 | MacBook Pro M4 Max 48GB 14-inch | 260 | mlx-dynamic-2.7bpw | 57.0 | MLX | Yes | 8.4 GB | 21k | Estimated | $3,499
14 | MacBook Pro M4 Max 48GB 16-inch | 260 | mlx-dynamic-2.7bpw | 57.0 | MLX | Yes | 8.4 GB | 21k | Estimated | $3,999
15 | Mac Studio M4 Max 36GB | 258 | IQ2_XS | 57.0 | MLX | Yes | 6.3 GB | 19k | Estimated | $1,999
16 | MacBook Pro M4 Max 36GB 14-inch | 258 | IQ2_XS | 57.0 | MLX | Yes | 6.3 GB | 19k | Estimated | $2,999
17 | MacBook Pro M4 Max 36GB 16-inch | 258 | IQ2_XS | 57.0 | MLX | Yes | 6.3 GB | 19k | Estimated | $3,499
18 | Mac Mini M4 16GB | 0 | F32 | — | MLX | No | -439.7 GB | — | Estimated | $499
19 | Mac Mini M4 24GB | 0 | F32 | — | MLX | No | -431.7 GB | — | Estimated | $599
20 | Mac Mini M4 32GB | 0 | F32 | — | MLX | No | -423.7 GB | — | Estimated | $799
21 | MacBook Air M4 16GB 13-inch | 0 | F32 | — | MLX | No | -439.7 GB | — | Estimated | $1,099
22 | MacBook Air M4 24GB 13-inch | 0 | F32 | — | MLX | No | -431.7 GB | — | Estimated | $1,299
23 | MacBook Air M4 16GB 15-inch | 0 | F32 | — | MLX | No | -439.7 GB | — | Estimated | $1,299
24 | Mac Mini M4 Pro 24GB | 0 | F32 | — | MLX | No | -431.7 GB | — | Estimated | $1,399
25 | MacBook Air M4 32GB 13-inch | 0 | F32 | — | MLX | No | -423.7 GB | — | Estimated | $1,499
26 | MacBook Air M4 24GB 15-inch | 0 | F32 | — | MLX | No | -431.7 GB | — | Estimated | $1,499
27 | MacBook Air M4 32GB 15-inch | 0 | F32 | — | MLX | No | -423.7 GB | — | Estimated | $1,699
28 | MacBook Pro M4 Pro 24GB 14-inch | 0 | F32 | — | MLX | No | -431.7 GB | — | Estimated | $1,999
29 | MacBook Pro M4 Pro 24GB 16-inch | 0 | F32 | — | MLX | No | -431.7 GB | — | Estimated | $2,499

Each row shows the current best practical quantization for that machine, and Headroom is the memory left after loading the model at that quantization. Rank 1 is the only row backed by a direct benchmark (43.0 tok/s at 8bit from a community row); its fastest evidence path is an estimated 57.0 tok/s at 3bit via MLX. Every other speed is estimated from nearby MLX benchmark coverage. Score-0 machines do not fit Qwen3.5-122B-A10B at the current practical quantization; their rows show F32, hence the large negative headroom.
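The Fits and Headroom columns reduce to a comparison between the model's RAM requirement at a given quantization and the machine's usable unified memory. A minimal sketch of that check, assuming a flat usable-memory fraction (the site's actual reservation model is not published here, so `usable_fraction` is a placeholder, not the real rule):

```python
def headroom_gb(total_ram_gb: float, model_req_gb: float,
                usable_fraction: float = 0.75) -> float:
    """Headroom left after loading the model into unified memory.

    usable_fraction is an assumed stand-in for the slice of unified
    memory macOS lets the GPU address; the real policy is more nuanced.
    """
    usable = total_ram_gb * usable_fraction
    return round(usable - model_req_gb, 1)

def fits(total_ram_gb: float, model_req_gb: float) -> bool:
    return headroom_gb(total_ram_gb, model_req_gb) > 0

# A 122B model at 8-bit (~129.8 GB per the raw rows below) on two machines:
print(fits(256, 129.8))  # True: positive headroom remains
print(fits(16, 129.8))   # False: far over budget
```

The same comparison explains why the 48GB tier only fits sub-3-bit dynamic quants while the 256GB Studio keeps triple-digit headroom at 8bit.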

Qwen3.5-122B-A10B — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: 4bit, MXFP4, 8bit, IQ1_M

  • Benchmark rows: 6
  • Chip tiers covered: 3
  • Fastest avg tok/s: 65.9 (M5 Max, 128 GB)
  • Minimum RAM observed: 40.8 GB

Fastest published result is 65.9 tok/s on M5 Max (128 GB) at 4bit. Smallest published fit is 40.8 GB on M5 Pro (64 GB). Longest published context on this page is 33k. Published runtimes include llama.cpp and MLX. Start with the rankings for the decision, then use the raw rows below to audit the evidence.
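Those RAM figures track a back-of-envelope rule: weight footprint ≈ total parameters × bits per weight ÷ 8. A rough sketch (runtime overhead, KV cache, and activations are ignored here, which is why the published 4-bit rows land near 72 GB rather than the raw 61 GB):

```python
def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    """Raw weight storage for a quantized model, ignoring runtime overhead.

    1B parameters at 8 bits is ~1 GB, so scale by bits_per_weight / 8.
    """
    return params_billion * bits_per_weight / 8

# Qwen3.5-122B-A10B's 122B total parameters at common quantizations:
for bpw in (4, 6, 8):
    print(f"{bpw}-bit: ~{weight_footprint_gb(122, bpw):.0f} GB of weights")
```

Because only 10B parameters are active per token, decode speed stays high even though all 122B must sit in memory, which is why fit, not compute, dominates this ranking.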

Based on 6 external benchmarks; no lab runs yet.

Published runtimes: llama.cpp, MLX.

  • Total params: 122B
  • Active params: 10B
  • Context window: 262,144
  • Release date: 2026-02-24

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

Official source  ·  Raw model card

agents · coding · reasoning · visual-understanding

Runtime support mentioned

vLLM · SGLang · Transformers · KTransformers

Official specs

  • Type: Causal Language Model with Vision Encoder.
  • Scale: 122B in total and 10B activated.
  • Context: 262,144 natively and extensible up to 1,010,000 tokens.

Official takeaways

  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.
  • Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.
  • Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.
  • Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

Qwen3.5-122B-A10B: 24 Apple Silicon field reports; best reported generation ~65.9 tok/s; best reported prompt processing ~1239.7 tok/s; reported RAM use ~71.9-102 GB; seen on MacBook Pro M5 Max 128GB, Mac Studio M3 Ultra 256GB, and Mac Studio M4 Max 128GB; via MLX, oMLX, EXO over Thunderbolt 5 RDMA, and llama.cpp.

  • Benchmark rows: 6
  • Field reports: 24
  • Practitioner signals: 16
  • Evidence status: Sparse Benchmarks

What practitioners keep saying

  • The oMLX context table reports Qwen3.5-122B-A10B-Text-qx64-hi 6bit on an M3 Max 40-core 128GB Mac at 1024 tokens context with 432.5 tok/s prompt processing, 39.5 tok/s generation, and peak memory at 86.6GB.
  • The oMLX context table reports Qwen3.5-122B-A10B-Text-qx64-hi 6bit on an M3 Max 40-core 128GB Mac at 4096 tokens context with 497.1 tok/s prompt processing, 38.2 tok/s generation, 8.240s TTFT, and peak memory at 87.9GB.
  • The oMLX context table reports Qwen3.5-122B-A10B-Text-qx64-hi 6bit on an M3 Max 40-core 128GB Mac at 8192 tokens context with 487.2 tok/s prompt processing, 35.6 tok/s generation, and peak memory at 88.7GB.
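Read as a retention curve, those three rows quantify how gently generation speed degrades with context on that setup. A quick sketch over the numbers quoted above:

```python
# Generation tok/s by context length from the oMLX rows above
# (Qwen3.5-122B-A10B-Text-qx64-hi 6bit on M3 Max 40-core 128GB).
rows = {1024: 39.5, 4096: 38.2, 8192: 35.6}

base = rows[1024]
for ctx, tps in sorted(rows.items()):
    print(f"{ctx:>5}-token context: {tps} tok/s ({tps / base:.0%} of the 1k rate)")
```

Generation holds roughly 90% of its 1k-context rate out to 8k, consistent with the sparse-MoE design keeping per-token compute low while the KV cache grows.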

Apple Silicon field sources

  • oMLX community benchmarks

    2026-05-05 · M3 Max 40-core GPU, 128GB unified memory · oMLX

    Fresh oMLX rows show the Qwen3.5-122B-A10B-Text-qx64-hi 6-bit variant on M3 Max 128GB holding interactive generation from 1k through 16k context.

  • oMLX community benchmarks

    2026-05-05 · M3 Max 40-core GPU, 128GB unified memory · oMLX

    Fresh oMLX rows show Qwen3.5-122B-A10B-Text-qx85 running on an M3 Max 40-core 128GB system across 1k to 16k context with high prompt throughput and interactive generation.

  • r/LocalLLaMA

    2026-04-14 · M4 Max (memory tier ambiguous in source) · llama.cpp and oMLX

    A newer M4 Max operator thread reports Qwen3.5-122B-A10B-MXFP4_MOE running through llama.cpp around 10 tok/s, slowing at long context, while comments point toward MLX/oMLX quant choices and KV-cache settings as likely decision-critical.

  • r/LocalLLaMA

    2026-03-28 · MacBook Pro M3 Max 128GB, MacBook Pro M5 Max 128GB · oMLX

    An oMLX comparison reports Qwen3.5-122B-A10B at 46.1 tg tok/s on a MacBook Pro M3 Max 128GB at pp1024/tg128.

  • SharpAI HomeSec-Bench

    2026-03-26 · MacBook Pro M5 Pro 64GB · llama.cpp

    The M5 Pro 64GB HomeSec-Bench run shows Qwen3.5-122B-A10B already crossing into plausible Apple Silicon frontier use without requiring a Max-class machine.

8 more Apple Silicon field sources tracked in the research queue.

Runtime mentions in the field

Continue · llama.cpp · LM Studio · MLX · oMLX · OpenClaw

Hardware mentioned in reports

64GB · 96GB · 128GB · M3 Ultra · M4 · Mac · Mac Studio · MacBook

What would improve confidence

  • Reproduce Field Performance Signal
  • Resolve Blocked Source Capture
  • Upgrade To First Party Measurement

Published chip coverage includes M5 Max (128 GB), M3 Ultra (256 GB), and M5 Pro (64 GB). Fastest published row is 65.9 tok/s on M5 Max (128 GB) at 4bit. Lowest published RAM requirement is 40.8 GB on M5 Pro (64 GB). Longest published context is 33k.

Related Qwen3.5-family models with published pages: Qwen3.5-27B · Qwen3.5-35B-A3B · Qwen3.5-9B · Qwen3.5-397B-A17B · Qwen3.5-4B

Standardized eval scorecards for Qwen3.5-122B-A10B

These are fixed-machine model scorecards from a single Apple Silicon setup. They help explain whether a model is merely fast or actually good at tools, coding, reasoning, and general tasks. They do not replace the main Mac ranking above.

Mac Studio M3 Ultra 256GB · Avg 88%

  • Tools: 90%
  • Coding: 90%
  • Reasoning: 80%
  • General: 90%

Speed and memory

  • Long decode: 57.0 tok/s
  • Short decode: 26.3 tok/s
  • Cold TTFT: 0.714 s
  • Active RAM: 65.0 GB

The best value version in this scorecard: near-frontier quality at roughly half the RAM.

vLLM-MLX SCORECARD.md  ·  discussion · 2026-03-04

Mac Studio M3 Ultra 256GB · Avg 89%

  • Tools: 87%
  • Coding: 90%
  • Reasoning: 90%
  • General: 90%

Speed and memory

  • Long decode: 42.7 tok/s
  • Short decode: 19.4 tok/s
  • Cold TTFT: 1.300 s
  • Active RAM: 129.8 GB

Highest overall quality in this standardized set, but it demands real memory.

vLLM-MLX SCORECARD.md  ·  discussion · 2026-03-04
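For planning interactive use, a scorecard's cold TTFT and long-decode rate combine into a simple response-time estimate. A sketch using the first scorecard's figures (it ignores the context-length slowdown documented in the oMLX rows earlier on this page):

```python
def response_seconds(ttft_s: float, decode_tps: float, out_tokens: int) -> float:
    """Approximate wall-clock time for one reply: time to first token
    plus steady-state decode time for the generated tokens."""
    return ttft_s + out_tokens / decode_tps

# 0.714 s cold TTFT and 57.0 tok/s long decode, for a 500-token answer:
print(f"~{response_seconds(0.714, 57.0, 500):.1f} s")  # ~9.5 s
```

By the same estimate, the 8bit scorecard (1.300 s TTFT, 42.7 tok/s) trades a few extra seconds per reply for its higher quality scores.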

Raw benchmark rows for Qwen3.5-122B-A10B

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

Chip | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source
---- | ----- | -------- | ------- | --------- | ------------ | ------- | ------
M5 Max (128 GB) | 4bit | 71.9 GB | 4k | 65.9 | 881.5 | MLX | ref
M5 Max (128 GB) | 4bit | 73.8 GB | 16k | 60.6 | 1239.7 | MLX | ref
M3 Ultra (256 GB) | MXFP4 | 65.0 GB | — | 57.0 | — | MLX | ref
M5 Max (128 GB) | 4bit | 76.4 GB | 33k | 54.9 | 1067.8 | MLX | ref
M3 Ultra (256 GB) | 8bit | 129.8 GB | — | 43.0 | — | MLX | ref
M5 Pro (64 GB) | IQ1_M | 40.8 GB | — | 18.0 | — | llama.cpp | ref

Rows are ordered by the fastest published tok/s for each chip family. Click through a source link for the full machine page.

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
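If you pull benchmarks.csv, picking the fastest published row per chip takes only a few lines. The column names below are assumptions inferred from the table on this page, not a documented schema, and the inline sample stands in for the real download:

```python
import csv
import io

# Inline sample mirroring the raw rows above; real data would come from
# benchmarks.csv, whose actual column names may differ.
SAMPLE = """chip,quant,avg_tok_s,runtime
M5 Max (128 GB),4bit,65.9,MLX
M5 Max (128 GB),4bit,60.6,MLX
M3 Ultra (256 GB),8bit,43.0,MLX
M5 Pro (64 GB),IQ1_M,18.0,llama.cpp
"""

# Keep the fastest (speed, quant) pair seen for each chip.
fastest: dict[str, tuple[float, str]] = {}
for row in csv.DictReader(io.StringIO(SAMPLE)):
    speed = float(row["avg_tok_s"])
    if speed > fastest.get(row["chip"], (0.0, ""))[0]:
        fastest[row["chip"]] = (speed, row["quant"])

for chip, (speed, quant) in fastest.items():
    print(f"{chip}: {speed} tok/s at {quant}")
```

Swapping the max for a min over a `ram_req` column would instead reproduce the "smallest published fit" summary shown near the top of the page.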

See all models →