Canonical Rankings

Best Macs for this model

Qwen 3 235B-A22B, ranked across the Mac lineup at the best practical quantization, using the best available runtime evidence. A historical baseline is selected; the model picker focuses on current-market choices.

29 ranked Macs. Each row uses the strongest current runtime evidence; 27 other historical models are hidden. Static paths cover only canonical model pages, so sort order and quantization persist as query state.


| Rank | Mac | Score | Quant | Tok/s | Runtime | Fits | Headroom | Context | Evidence | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Mac Studio M3 Ultra 256GB | 241 | Q6_K | 27.0 | MLX | Yes | 73.0 GB | 105k | Estimated | $7,499 |
| 2 | Mac Pro M2 Ultra 192GB | 198 | Q5_K_M | 26.4 | MLX | Yes | 32.8 GB | 19k | Estimated | $6,999 |
| 3 | Mac Studio M4 Max 128GB | 169 | Q3_K_L | 30.0 | LM Studio | Yes | 25.2 GB | 25k | Estimated | $4,499 |
| 4 | MacBook Pro M4 Max 128GB 16-inch | 169 | Q3_K_L | 30.0 | LM Studio | Yes | 25.2 GB | 25k | Estimated | $5,999 |
| 5 | Mac Studio M3 Ultra 96GB | 148 | mlx-dynamic-2.7bpw | 26.4 | MLX | Yes | 18.7 GB | 20k | Estimated | $3,999 |
| 6 | MacBook Pro M5 Max 128GB 16-inch | 121 | Q3_K_L | 18.0 | Ollama | Yes | 25.2 GB | 25k | Estimated | $5,399 |
| 7 | Mac Mini M4 16GB | 0 | F32 | - | MLX | No | -863.2 GB | - | Estimated | $499 |
| 8 | Mac Mini M4 24GB | 0 | F32 | - | MLX | No | -855.2 GB | - | Estimated | $599 |
| 9 | Mac Mini M4 32GB | 0 | F32 | - | MLX | No | -847.2 GB | - | Estimated | $799 |
| 10 | MacBook Air M4 16GB 13-inch | 0 | F32 | - | MLX | No | -863.2 GB | - | Estimated | $1,099 |
| 11 | MacBook Air M4 24GB 13-inch | 0 | F32 | - | MLX | No | -855.2 GB | - | Estimated | $1,299 |
| 12 | MacBook Air M4 16GB 15-inch | 0 | F32 | - | MLX | No | -863.2 GB | - | Estimated | $1,299 |
| 13 | Mac Mini M4 Pro 24GB | 0 | F32 | - | MLX | No | -855.2 GB | - | Estimated | $1,399 |
| 14 | MacBook Air M4 32GB 13-inch | 0 | F32 | - | MLX | No | -847.2 GB | - | Estimated | $1,499 |
| 15 | MacBook Air M4 24GB 15-inch | 0 | F32 | - | MLX | No | -855.2 GB | - | Estimated | $1,499 |
| 16 | Mac Mini M4 Pro 48GB | 0 | F32 | - | MLX | No | -831.2 GB | - | Estimated | $1,599 |
| 17 | MacBook Air M4 32GB 15-inch | 0 | F32 | - | MLX | No | -847.2 GB | - | Estimated | $1,699 |
| 18 | MacBook Pro M4 Pro 24GB 14-inch | 0 | F32 | - | MLX | No | -855.2 GB | - | Estimated | $1,999 |
| 19 | Mac Studio M4 Max 36GB | 0 | F32 | - | MLX | No | -843.2 GB | - | Estimated | $1,999 |
| 20 | MacBook Pro M4 Pro 48GB 14-inch | 0 | F32 | - | MLX | No | -831.2 GB | - | Estimated | $2,499 |
| 21 | MacBook Pro M4 Pro 24GB 16-inch | 0 | F32 | - | MLX | No | -855.2 GB | - | Estimated | $2,499 |
| 22 | Mac Studio M4 Max 48GB | 0 | F32 | - | MLX | No | -831.2 GB | - | Estimated | $2,499 |
| 23 | MacBook Pro M4 Max 36GB 14-inch | 0 | F32 | - | MLX | No | -843.2 GB | - | Estimated | $2,999 |
| 24 | MacBook Pro M4 Pro 48GB 16-inch | 0 | F32 | - | MLX | No | -831.2 GB | - | Estimated | $2,999 |
| 25 | Mac Studio M4 Max 64GB | 0 | F32 | - | MLX | No | -815.2 GB | - | Estimated | $2,999 |
| 26 | MacBook Pro M4 Max 48GB 14-inch | 0 | F32 | - | MLX | No | -831.2 GB | - | Estimated | $3,499 |
| 27 | MacBook Pro M4 Max 36GB 16-inch | 0 | F32 | - | MLX | No | -843.2 GB | - | Estimated | $3,499 |
| 28 | MacBook Pro M4 Max 48GB 16-inch | 0 | F32 | - | MLX | No | -831.2 GB | - | Estimated | $3,999 |
| 29 | MacBook Pro M4 Max 64GB 16-inch | 0 | F32 | - | MLX | No | -815.2 GB | - | Estimated | $4,499 |

For each Mac that fits, the "why it ranks here" note states the same three facts: the listed quant is the current best practical quantization, the tok/s figure is estimated from nearby benchmark coverage, and the headroom shown remains at that quantization. Macs marked No cannot hold Qwen 3 235B-A22B at the current practical quantization; the negative headroom shown is computed against the F32 baseline.

Qwen 3 235B-A22B — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: 3bit, 4bit, Q4, Q4_K - Medium

  • 8 benchmark rows
  • 6 chip tiers covered
  • 30.0 tok/s fastest average (M4 Max, 128 GB)
  • 100 GB minimum RAM observed

Fastest published result is 30.0 tok/s on M4 Max (128 GB) at 3bit. Smallest published fit is 100.0 GB on M4 Max (128 GB). Longest published context on this page is 10k. Published runtimes include LM Studio, MLX, Ollama. Start with Rankings for the decision, then use the raw rows below to audit the evidence.

Based on 8 external benchmarks; no lab runs yet.

Published runtimes: LM Studio, MLX, Ollama.

  • Total params: 235.1B
  • Active params: 22B
  • Context window: 131,072
  • Release date: 2025-04-29
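These specs support a quick memory sanity check. A minimal sketch, assuming approximate effective bits-per-weight for each quant family (real GGUF/MLX builds vary) and ignoring KV-cache and runtime overhead. The key point for this MoE model: only ~22B parameters are active per token, which governs per-token compute, but all 235.1B must be resident in unified memory.

```python
def quant_size_gb(total_params_b: float, bits_per_weight: float) -> float:
    """Approximate resident weight size: params * bits / 8, in GB."""
    return total_params_b * 1e9 * bits_per_weight / 8 / 1e9

# Effective bits-per-weight below are rough assumptions, not published figures.
for label, bpw in [("3bit", 3.5), ("Q4_K_M", 4.85), ("Q6_K", 6.56), ("F32", 32.0)]:
    print(f"{label:7s} ~{quant_size_gb(235.1, bpw):5.0f} GB")
```

At roughly 3.5 effective bits this lands near 103 GB, broadly consistent with the 100 GB minimum RAM observed in the raw benchmark rows, and F32 lands near 940 GB, which is why every sub-128GB Mac in the ranking shows headroom in the -800 GB range.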

This is a reference-only model record. It remains useful for historical benchmarks, migration checks, and audit context, but it is excluded from current frontier packs.

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

Uniquely supports seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.

Official source  ·  Raw model card

agents · coding · reasoning

Runtime support mentioned

MLX · llama.cpp · Ollama · vLLM · SGLang · Transformers · KTransformers

Official specs

  • Type: Causal Language Models.
  • Scale: 235B in total and 22B activated.
  • Context: 32,768 natively and 131,072 tokens with YaRN.
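The native-vs-YaRN context split above can be made concrete. A minimal sketch of the rope-scaling recipe the Qwen model cards describe for Transformers-style configs; the exact key name (`rope_type` vs `type`) varies by transformers version, so treat the fragment as an assumption rather than a drop-in config:

```python
# 131,072 = 32,768 * 4, so the YaRN scaling factor is 4.0.
rope_scaling = {
    "rope_type": "yarn",                         # key name varies by version
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

max_context = int(32768 * rope_scaling["factor"])
print(max_context)  # 131072
```

The Qwen cards also caution against enabling YaRN unconditionally, since static scaling can degrade quality on short prompts; apply it only when long context is actually needed.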

Official takeaways

  • Sampling parameters: for thinking mode (enable_thinking=True), use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0. Do NOT use greedy decoding, as it can lead to performance degradation and endless repetitions.
  • Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries.
  • Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
  • Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
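These takeaways map directly onto an OpenAI-compatible chat request, which is how LM Studio and Ollama are typically driven locally. A sketch under assumptions: the model id is hypothetical, and pass-through support for `top_k` and `min_p` varies by server.

```python
import json

# Thinking-mode sampling per the takeaways above; greedy decoding is avoided.
payload = {
    "model": "qwen3-235b-a22b",        # hypothetical local model id
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,                       # pass-through support varies by server
    "min_p": 0.0,
    "max_tokens": 32768,               # generous output budget per the takeaways
}
print(json.dumps(payload, indent=2))
```

POST this to the server's `/v1/chat/completions` endpoint; for benchmarking, add the multiple-choice JSON instruction from the takeaways to the user message to standardize outputs.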

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

Qwen 3 235B-A22B: 5 Apple Silicon field reports; best reported generation ~30 tok/s; seen on MacBook Pro M4 Max 128GB, Mac Studio 512GB, and Mac Pro M2 Ultra 192GB; via MLX and LM Studio.

  • 8 benchmark rows
  • 5 field reports
  • 6 practitioner signals
  • Evidence status: sparse benchmarks

What practitioners keep saying

  • One operator reports running the 3-bit DWQ quant on a 128GB MacBook Pro only after closing nearly everything else; this is real Apple Silicon usage, but clearly a stretch-tier setup rather than a comfortable default.
  • The owner reports Qwen3-235B-A22B at 27.36 tok/sec with 1.73s to first token on a 512GB Mac Studio in LM Studio.

Apple Silicon field sources

  • r/LocalLLaMA

    2025-07-17 · MacBook Pro M4 Max 128GB · MLX

    Qwen 3 235B-A22B is already being stretched onto 128GB Apple laptops, but only as a high-friction top-end experiment.

  • r/LocalLLaMA

    2025-07-08 · Mac Studio 512GB · LM Studio

    A 512GB Mac Studio report turns Qwen 3 235B-A22B from theoretical fit into real Apple Silicon throughput at an interactive top-end tier.

  • r/LocalLLaMA

    2025-07-05 · MacBook Pro M4 Max 128GB · Ollama

    Top-end Apple Silicon owners are already using low-quant Qwen 3 235B-A22B as a retained model choice, not only as a one-time stunt.

  • r/LocalLLaMA

    2025-05-20 · Mac Pro M2 Ultra 192GB · MLX

    Ultra-class Apple desktops can run Qwen 3 235B-A22B at genuinely interactive speed, not just as a memory-fit stunt.

  • r/LocalLLaMA

    2025-05-05 · MacBook Pro M4 Max 128GB · MLX

    A direct M4 Max 128GB report says Qwen 3 235B-A22B can move from stretch-fit theory into genuinely interactive Apple laptop throughput, at least for shorter prompts.

1 more Apple Silicon field source tracked in the research queue.

Runtime mentions in the field

LM Studio · MLX · Ollama

Hardware mentioned in reports

128GB · M3 Ultra · M4 · Mac · Mac Studio · MacBook · MacBook Pro

What would improve confidence

  • Reproduce the field performance signal
  • Upgrade to a first-party measurement

Published chip coverage includes M4 Max (128 GB), M3 Ultra (512 GB), M3 Ultra (256 GB), M2 Ultra (192 GB), and M4 Ultra (192 GB), plus 1 more chip tier. Fastest published row is 30.0 tok/s on M4 Max (128 GB) at 3bit. Lowest published RAM requirement is 100.0 GB on M4 Max (128 GB). Longest benchmarked context in the catalog is 10k.

Related Qwen 3 models with published pages: Qwen 3 32B · Qwen 3 30B-A3B · Qwen 3 4B · Qwen 3 8B · Qwen 3 14B · Qwen 3 0.6B

Raw benchmark rows for Qwen 3 235B-A22B

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

| Chip | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M4 Max (128 GB) | 3bit | 100.0 GB | 1k | 30.0 | - | MLX | ref |
| M3 Ultra (512 GB) | 4bit | - | - | 27.4 | - | LM Studio | ref |
| M3 Ultra (256 GB) | Q4 | - | - | 27.0 | - | MLX | ref |
| M2 Ultra (192 GB) | Q4 | - | - | 26.4 | - | MLX | ref |
| M4 Ultra (192 GB) | Q4_K - Medium | - | - | 22.0 | - | MLX | ref |
| M5 Max (128 GB) | Q4_K - Medium | - | - | 18.0 | - | MLX | ref |
| M5 Max (128 GB) | Q4_K - Medium | - | - | 15.0 | - | Ollama | ref |
| M4 Max (128 GB) | Q4_K - Medium | - | 10k | 8.1 | - | LM Studio | ref |

Ordered by fastest published tok/s on the chip family in each Mac. Click through for the full machine page.
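The Fits and Headroom columns in the ranking follow a simple pattern that can be sketched. The 75% usable-memory fraction below is an assumption, not this site's published budget: macOS reserves part of unified memory for the system, and the exact fraction is the knob.

```python
def headroom_gb(ram_gb: float, model_gb: float, usable_fraction: float = 0.75) -> float:
    """Usable unified memory minus resident model size (negative = no fit)."""
    return ram_gb * usable_fraction - model_gb

def fits(ram_gb: float, model_gb: float) -> bool:
    return headroom_gb(ram_gb, model_gb) > 0

# A ~103 GB 3-bit build of the 235B model across common Mac memory tiers:
for ram in (64, 128, 192, 256):
    print(f"{ram:3d} GB: {'fits' if fits(ram, 103) else 'does not fit'}")
```

Note that with this conservative fraction a 128GB Mac would not fit a ~103 GB build, while the ranking above does mark 128GB machines as fitting Q3_K_L; the site evidently budgets less conservatively, and practitioners on Apple Silicon often raise the GPU wired-memory limit (`sysctl iogpu.wired_limit_mb`) to reclaim more of the machine's RAM for the model.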

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export

See all models →