Canonical Rankings

Best Macs for this model

Qwen3-Coder-Next, ranked across the Mac lineup at the best practical quantization for each machine, using the best available runtime evidence.

28 ranked Macs. Each row uses the strongest current runtime evidence. Sort order and quantization are query parameters on the live page; static paths cover only canonical model pages.
| Rank | Mac | Score | Quant | Tok/s | Runtime | Fits | Evidence | Price | Why it ranks here |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Mac Studio M3 Ultra 256GB | 542 | 8bit | 74.0 | MLX | Fits | Estimated | $7,499 | 180.2 GB headroom remains at this quantization. |
| 2 | Mac Pro M2 Ultra 192GB | 478 | 8bit | 74.0 | MLX | Fits | Estimated | $6,999 | 116.2 GB headroom remains at this quantization. |
| 3 | Mac Studio M4 Max 128GB | 414 | 8bit | 74.0 | MLX | Fits | Estimated | $4,499 | 52.2 GB headroom remains at this quantization. |
| 4 | MacBook Pro M4 Max 128GB 16-inch | 414 | 8bit | 74.0 | MLX | Fits | Estimated | $5,999 | 52.2 GB headroom remains at this quantization. |
| 5 | Mac Studio M3 Ultra 96GB | 382 | 8bit | 74.0 | MLX | Fits | Estimated | $3,999 | 20.2 GB headroom remains at this quantization. |
| 6 | Mac Studio M4 Max 64GB | 366 | Q5_K_M | 74.0 | MLX | Fits | Estimated | $2,999 | 9.8 GB headroom remains at this quantization. |
| 7 | MacBook Pro M4 Max 64GB 16-inch | 366 | Q5_K_M | 74.0 | MLX | Fits | Estimated | $4,499 | 9.8 GB headroom remains at this quantization. |
| 8 | Mac Mini M4 Pro 48GB | 359 | q4.1bit | 74.0 | MLX | Fits | Estimated | $1,599 | 8.6 GB headroom remains at this quantization. |
| 9 | MacBook Pro M4 Pro 48GB 14-inch | 359 | q4.1bit | 74.0 | MLX | Fits | Estimated | $2,499 | 8.6 GB headroom remains at this quantization. |
| 10 | Mac Studio M4 Max 48GB | 359 | q4.1bit | 74.0 | MLX | Fits | Estimated | $2,499 | 8.6 GB headroom remains at this quantization. |
| 11 | MacBook Pro M4 Pro 48GB 16-inch | 359 | q4.1bit | 74.0 | MLX | Fits | Estimated | $2,999 | 8.6 GB headroom remains at this quantization. |
| 12 | MacBook Pro M4 Max 48GB 14-inch | 359 | q4.1bit | 74.0 | MLX | Fits | Estimated | $3,499 | 8.6 GB headroom remains at this quantization. |
| 13 | MacBook Pro M4 Max 48GB 16-inch | 359 | q4.1bit | 74.0 | MLX | Fits | Estimated | $3,999 | 8.6 GB headroom remains at this quantization. |
| 14 | Mac Studio M4 Max 36GB | 330 | Q2_K | 74.0 | MLX | Fits | Estimated | $1,999 | 10.3 GB headroom remains at this quantization. |
| 15 | MacBook Pro M4 Max 36GB 14-inch | 330 | Q2_K | 74.0 | MLX | Fits | Estimated | $2,999 | 10.3 GB headroom remains at this quantization. |
| 16 | MacBook Pro M4 Max 36GB 16-inch | 330 | Q2_K | 74.0 | MLX | Fits | Estimated | $3,499 | 10.3 GB headroom remains at this quantization. |
| 17 | Mac Mini M4 32GB | 326 | Q2_K | 74.0 | MLX | Fits | Estimated | $799 | 6.3 GB headroom remains at this quantization. |
| 18 | MacBook Air M4 32GB 13-inch | 326 | Q2_K | 74.0 | MLX | Fits | Estimated | $1,499 | 6.3 GB headroom remains at this quantization. |
| 19 | MacBook Air M4 32GB 15-inch | 326 | Q2_K | 74.0 | MLX | Fits | Estimated | $1,699 | 6.3 GB headroom remains at this quantization. |
| 20 | Mac Mini M4 24GB | 324 | IQ2_XS | 74.0 | MLX | Fits | Estimated | $599 | 4.1 GB headroom remains at this quantization. |
| 21 | MacBook Air M4 24GB 13-inch | 324 | IQ2_XS | 74.0 | MLX | Fits | Estimated | $1,299 | 4.1 GB headroom remains at this quantization. |
| 22 | Mac Mini M4 Pro 24GB | 324 | IQ2_XS | 74.0 | MLX | Fits | Estimated | $1,399 | 4.1 GB headroom remains at this quantization. |
| 23 | MacBook Air M4 24GB 15-inch | 324 | IQ2_XS | 74.0 | MLX | Fits | Estimated | $1,499 | 4.1 GB headroom remains at this quantization. |
| 24 | MacBook Pro M4 Pro 24GB 14-inch | 324 | IQ2_XS | 74.0 | MLX | Fits | Estimated | $1,999 | 4.1 GB headroom remains at this quantization. |
| 25 | MacBook Pro M4 Pro 24GB 16-inch | 324 | IQ2_XS | 74.0 | MLX | Fits | Estimated | $2,499 | 4.1 GB headroom remains at this quantization. |
| 26 | Mac Mini M4 16GB | 0 | F32 | — | MLX | No | Estimated | $499 | Does not fit at the current practical quantization. |
| 27 | MacBook Air M4 16GB 13-inch | 0 | F32 | — | MLX | No | Estimated | $1,099 | Does not fit at the current practical quantization. |
| 28 | MacBook Air M4 16GB 15-inch | 0 | F32 | — | MLX | No | Estimated | $1,299 | Does not fit at the current practical quantization. |

For every fitting row, the quantization listed is the current best practical option, and the 74.0 tok/s figure is estimated from nearby benchmark coverage.
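The headroom figures above can be sanity-checked with simple arithmetic: quantized weight size is roughly total parameters times bits per weight, divided by eight. The sketch below is a back-of-envelope estimate, not the site's exact formula; the 10% overhead factor for KV cache and runtime buffers is an assumption.

```python
def quant_footprint_gb(total_params: float, bits_per_weight: float,
                       overhead: float = 1.10) -> float:
    """Rough model footprint in GB: params * bits / 8, plus an assumed
    ~10% for KV cache, activations, and runtime overhead."""
    return total_params * bits_per_weight / 8 / 1e9 * overhead

def headroom_gb(ram_gb: float, total_params: float, bits_per_weight: float) -> float:
    """Unified-memory headroom left after loading the quantized model."""
    return ram_gb - quant_footprint_gb(total_params, bits_per_weight)

# 80B total params at 8-bit on a 256 GB Mac Studio:
footprint = quant_footprint_gb(80e9, 8)   # ~88 GB with the assumed overhead
room = headroom_gb(256, 80e9, 8)          # ~168 GB left for everything else
```

The site's published headroom numbers differ slightly, which suggests it subtracts a wired-memory reservation rather than a flat overhead factor.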

Qwen3-Coder-Next — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: 8bit, 4bit, 6bit

  • 6 benchmark rows
  • 2 chip tiers covered
  • 79.3 tok/s fastest average (M5 Max, 128 GB)
  • 44.9 GB minimum RAM observed

Fastest published result is 79.3 tok/s on M5 Max (128 GB) at 8bit. Smallest published fit is 44.9 GB on M3 Ultra (256 GB). Longest published context on this page is 66k. Published runtimes include MLX. Start with Rankings for the decision, then use the raw rows below to audit the evidence.

Evidence state: 6 linked reference rows and no Silicon Score Lab rows yet.

Published runtimes here: MLX.

  • Total params: 80B
  • Active params: 3B
  • Context window: 262,144
  • Release date: 2026-01-30

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

Today, we're announcing Qwen3-Coder-Next, an open-weight language model designed specifically for coding agents and local development. It features the following key enhancements:

Official source  ·  Raw model card

agents · coding · reasoning · visual-understanding

Runtime support mentioned

MLX · llama.cpp · Ollama · vLLM · SGLang · Transformers · KTransformers · Claude Code · Cline

Official takeaways

  • Type: Causal Language Models.
  • Scale: 80B in total and 3B activated.
  • Context: 262,144 natively.
  • Super Efficient with Significant Performance: With only 3B activated parameters (80B total parameters), it achieves performance comparable to models with 10–20x more active parameters, making it highly cost-effective fo…
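The "3B activated of 80B total" claim has a direct performance consequence on Apple Silicon: decode speed for a memory-bound MoE model is bounded by how fast the active expert weights can be streamed from unified memory. A rough roofline sketch, with the bandwidth figure as an illustrative assumption (Apple Silicon chips range from ~120 to ~800+ GB/s):

```python
def decode_roofline_toks(bandwidth_gbs: float, active_params: float,
                         bits_per_weight: float) -> float:
    """Upper bound on decode tok/s for a memory-bound MoE model:
    each generated token must stream the active weights once."""
    bytes_per_token = active_params * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / bytes_per_token

# ~3B active params at 8-bit on a chip with an assumed ~800 GB/s:
bound = decode_roofline_toks(800, 3e9, 8)  # roughly 267 tok/s ceiling
```

The published 74-79 tok/s results sit well below this ceiling, which is expected: attention and KV-cache traffic, expert routing, and runtime overhead all consume bandwidth that this sketch ignores.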

Deployment notes

  • For deployment, you can use the latest sglang or vllm to create an OpenAI-compatible API endpoint.
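Both sglang and vLLM expose the standard OpenAI-compatible chat route, so any OpenAI-style client can talk to a local deployment. A minimal sketch of the request shape, assuming a server on localhost:8000 and that the model is registered under the id "Qwen3-Coder-Next" (both assumptions):

```python
import json

# Hypothetical local endpoint; vLLM and sglang both serve the
# OpenAI-compatible chat route at /v1/chat/completions.
BASE_URL = "http://localhost:8000"
ENDPOINT = f"{BASE_URL}/v1/chat/completions"

payload = {
    "model": "Qwen3-Coder-Next",  # model id as registered with the server (assumed)
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

body = json.dumps(payload)  # POST this as the JSON request body
```

From here any HTTP client works; the response follows the OpenAI chat-completions schema.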

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

Qwen3-Coder-Next: 3 Apple Silicon field reports; best reported generation ~79.3 tok/s; seen on MacBook Pro M5 Max 128GB and Mac Studio M3 Ultra 256GB; via MLX.

  • 6 benchmark rows
  • 3 field reports
  • 6 practitioner signals
  • Evidence status: sparse benchmarks

What practitioners keep saying

  • The posted mlx_lm measurements report about 79.3 tok/s at 4K context, 68.6 tok/s at 32K, and still about 48.2 tok/s at 64K for the 8-bit model.
  • This is strong evidence that the new Apple laptop frontier is not just fitting coding-class models, but running them at clearly interactive speed even under long-context loads.
  • The operator reports roughly 65 tok/s decode for the 6-bit model on a Mac Studio M3 Ultra 256GB while still preferring it as the sweet spot for coding quality.
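The context-scaling numbers in the first bullet can be summarized as a retained-speed fraction. A small sketch using the reported figures:

```python
# Reported mlx_lm decode speeds for the 8-bit model: context size -> tok/s
reported = {4_000: 79.3, 32_000: 68.6, 64_000: 48.2}

def retained_fraction(speeds: dict) -> float:
    """Fraction of short-context decode speed retained at the longest
    reported context."""
    return speeds[max(speeds)] / speeds[min(speeds)]

frac = retained_fraction(reported)  # about 0.61: ~61% of 4K speed kept at 64K
```

Keeping roughly 60% of short-context throughput at 64K is what the second bullet means by "clearly interactive speed even under long-context loads".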

Runtime mentions in the field

Claude Code · Cline · llama.cpp · LM Studio · MLX · Ollama

Hardware mentioned in reports

16GB · 24GB · 48GB · 64GB · 96GB · 128GB · M1 Max · M3 Ultra

What would improve confidence

  • Capture practitioner runtime notes
  • Queue lab verification if hardware is available
  • Reproduce the field performance signal
  • Upgrade to first-party measurement

Published chip coverage includes M5 Max (128 GB) and M3 Ultra (256 GB). Fastest published row is 79.3 tok/s on M5 Max (128 GB) at 8bit. Lowest published RAM requirement is 44.9 GB on M3 Ultra (256 GB). Longest published benchmark context is 66k; the catalog context window is 262,144 tokens.

Related Qwen3-Coder-Next models with published pages: Qwen3-Coder-30B-A3B

Standardized eval scorecards for Qwen3-Coder-Next

These are fixed-machine model scorecards from a single Apple Silicon setup. They help explain whether a model is merely fast or actually good at tools, coding, reasoning, and general tasks. They do not replace the main Mac ranking above.

Mac Studio M3 Ultra 256GB · Avg 80%

  • Tools: 90%
  • Coding: 90%
  • Reasoning: 70%
  • General: 70%

Speed and memory

  • Long decode: 73.5 tok/s
  • Short decode: 41.5 tok/s
  • Cold TTFT: 0.473 s
  • Active RAM: 44.9 GB

The fast coding-first option in this scorecard, with strong tool behavior.

vLLM-MLX SCORECARD.md  ·  discussion · 2026-03-04

Mac Studio M3 Ultra 256GB · Avg 82%

  • Tools: 87%
  • Coding: 90%
  • Reasoning: 80%
  • General: 70%

Speed and memory

  • Long decode: 65.6 tok/s
  • Short decode: 34.6 tok/s
  • Cold TTFT: 0.642 s
  • Active RAM: 64.8 GB

Slightly slower than 4-bit, but reasoning is stronger and coding stays high.

vLLM-MLX SCORECARD.md  ·  discussion · 2026-03-04
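The Avg badge on each scorecard appears to be the rounded mean of the four category scores; this is an inference from the published numbers, not a documented formula. A one-line sketch:

```python
def scorecard_avg(tools: int, coding: int, reasoning: int, general: int) -> int:
    """Rounded mean of the four category scores (inferred formula)."""
    return round((tools + coding + reasoning + general) / 4)

# First card: (90 + 90 + 70 + 70) / 4 = 80
# Second card: (87 + 90 + 80 + 70) / 4 = 81.75, rounds to 82
```

Both published Avg values (80% and 82%) match this calculation.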

Raw benchmark rows for Qwen3-Coder-Next

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

| Chip | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| M5 Max (128 GB) | 8bit | 87.1 GB | 4k | 79.3 | 754.9 | MLX | ref |
| M5 Max (128 GB) | 8bit | 88.2 GB | 16k | 74.3 | 1802.1 | MLX | ref |
| M3 Ultra (256 GB) | 4bit | 44.9 GB | — | 74.0 | — | MLX | ref |
| M5 Max (128 GB) | 8bit | 89.7 GB | 33k | 68.6 | 1887.2 | MLX | ref |
| M3 Ultra (256 GB) | 6bit | 64.8 GB | — | 66.0 | — | MLX | ref |
| M5 Max (128 GB) | 8bit | 92.6 GB | 66k | 48.2 | 1432.7 | MLX | ref |

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
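Auditing these rows typically means finding the best published speed per chip. The sketch below works from the table's values inlined as tuples; the exported benchmarks.json presumably uses a richer schema than this.

```python
# Raw rows from the table above as (chip, quant, avg_tok_s) tuples.
rows = [
    ("M5 Max (128 GB)", "8bit", 79.3),
    ("M5 Max (128 GB)", "8bit", 74.3),
    ("M3 Ultra (256 GB)", "4bit", 74.0),
    ("M5 Max (128 GB)", "8bit", 68.6),
    ("M3 Ultra (256 GB)", "6bit", 66.0),
    ("M5 Max (128 GB)", "8bit", 48.2),
]

def fastest_per_chip(rows):
    """Best published decode speed for each chip, with its quantization."""
    best = {}
    for chip, quant, toks in rows:
        if toks > best.get(chip, (None, 0.0))[1]:
            best[chip] = (quant, toks)
    return best

best = fastest_per_chip(rows)
# M5 Max peaks at 79.3 tok/s (8bit); M3 Ultra at 74.0 tok/s (4bit)
```

This matches the page summary: fastest row 79.3 tok/s on M5 Max at 8bit.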
