Canonical Rankings

Best Macs for this model

Qwen3.5-9B, ranked across the Mac lineup at its best practical quantization, using the strongest available runtime evidence. The model picker focuses on current-market choices.

29 ranked Macs, using the strongest current runtime evidence for each row. 28 historical models are hidden.
Rank | Mac | Score | Quant | Tok/s (est.) | Runtime | Headroom | Context | Price
-----|-----|-------|-------|--------------|---------|----------|---------|------
1 | Mac Studio M3 Ultra 256GB | 736 | 8bit | 106.0 | MLX | 246.1 GB | 262k | $7,499
2 | MacBook Pro M5 Max 128GB 16-inch | 496 | 8bit | 78.0 | MLX | 118.1 GB | 262k | $5,399
3 | Mac Mini M4 24GB | 448 | 8bit | 92.0 | MLX | 14.1 GB | 94k | $599
4 | MacBook Air M4 24GB 13-inch | 448 | 8bit | 92.0 | MLX | 14.1 GB | 94k | $1,299
5 | Mac Mini M4 Pro 24GB | 448 | 8bit | 92.0 | MLX | 14.1 GB | 94k | $1,399
6 | MacBook Air M4 24GB 15-inch | 448 | 8bit | 92.0 | MLX | 14.1 GB | 94k | $1,499
7 | MacBook Pro M4 Pro 24GB 14-inch | 448 | 8bit | 92.0 | MLX | 14.1 GB | 94k | $1,999
8 | MacBook Pro M4 Pro 24GB 16-inch | 448 | 8bit | 92.0 | MLX | 14.1 GB | 94k | $2,499
9 | Mac Pro M2 Ultra 192GB | 388 | 8bit | 35.0 | llama.cpp | 182.1 GB | 262k | $6,999
10 | Mac Studio M4 Max 128GB | 324 | 8bit | 35.0 | llama.cpp | 118.1 GB | 262k | $4,499
11 | MacBook Pro M4 Max 128GB 16-inch | 324 | 8bit | 35.0 | llama.cpp | 118.1 GB | 262k | $5,999
12 | Mac Studio M3 Ultra 96GB | 292 | 8bit | 35.0 | llama.cpp | 86.1 GB | 262k | $3,999
13 | Mac Studio M4 Max 64GB | 260 | 8bit | 35.0 | llama.cpp | 54.1 GB | 262k | $2,999
14 | MacBook Pro M4 Max 64GB 16-inch | 260 | 8bit | 35.0 | llama.cpp | 54.1 GB | 262k | $4,499
15 | Mac Mini M4 Pro 48GB | 244 | 8bit | 35.0 | llama.cpp | 38.1 GB | 261k | $1,599
16 | MacBook Pro M4 Pro 48GB 14-inch | 244 | 8bit | 35.0 | llama.cpp | 38.1 GB | 261k | $2,499
17 | Mac Studio M4 Max 48GB | 244 | 8bit | 35.0 | llama.cpp | 38.1 GB | 261k | $2,499
18 | MacBook Pro M4 Pro 48GB 16-inch | 244 | 8bit | 35.0 | llama.cpp | 38.1 GB | 261k | $2,999
19 | MacBook Pro M4 Max 48GB 14-inch | 244 | 8bit | 35.0 | llama.cpp | 38.1 GB | 261k | $3,499
20 | MacBook Pro M4 Max 48GB 16-inch | 244 | 8bit | 35.0 | llama.cpp | 38.1 GB | 261k | $3,999
21 | Mac Studio M4 Max 36GB | 232 | 8bit | 35.0 | llama.cpp | 26.1 GB | 178k | $1,999
22 | MacBook Pro M4 Max 36GB 14-inch | 232 | 8bit | 35.0 | llama.cpp | 26.1 GB | 178k | $2,999
23 | MacBook Pro M4 Max 36GB 16-inch | 232 | 8bit | 35.0 | llama.cpp | 26.1 GB | 178k | $3,499
24 | Mac Mini M4 32GB | 228 | 8bit | 35.0 | llama.cpp | 22.1 GB | 150k | $799
25 | MacBook Air M4 32GB 13-inch | 228 | 8bit | 35.0 | llama.cpp | 22.1 GB | 150k | $1,499
26 | MacBook Air M4 32GB 15-inch | 228 | 8bit | 35.0 | llama.cpp | 22.1 GB | 150k | $1,699
27 | Mac Mini M4 16GB | 88 | 8bit | 4.1 | llama.cpp | 6.1 GB | 39k | $499
28 | MacBook Air M4 16GB 13-inch | 88 | 8bit | 4.1 | llama.cpp | 6.1 GB | 39k | $1,099
29 | MacBook Air M4 16GB 15-inch | 88 | 8bit | 4.1 | llama.cpp | 6.1 GB | 39k | $1,299

Every row fits the model at 8bit, the current best practical quantization, and every tok/s figure is estimated from nearby benchmark coverage. For the 16GB rows (ranks 27 to 29), the fastest trusted evidence path is Q4_K_M · 72.0 tok/s · LM Studio.
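The headroom column is consistent with a simple footprint model: weight bytes at the quantization plus a fixed runtime overhead, subtracted from total RAM. A minimal sketch under that assumption, where the roughly 0.9 GB overhead term is our inference from the table, not the site's published formula:

```python
def footprint_gb(params_b: float, bits: int, overhead_gb: float = 0.9) -> float:
    """Approximate resident size: quantized weights plus runtime overhead."""
    return params_b * bits / 8 + overhead_gb

def headroom_gb(ram_gb: float, params_b: float, bits: int) -> float:
    """RAM left over after loading the model at the given quantization."""
    return ram_gb - footprint_gb(params_b, bits)

# A 9B model at 8bit works out to ~9.9 GB resident under this model.
print(round(headroom_gb(24, 9, 8), 1))   # 14.1, matching the 24GB rows
print(round(headroom_gb(256, 9, 8), 1))  # 246.1, matching rank 1
```

Every RAM tier in the table differs from its headroom by the same 9.9 GB, which is what makes this reading plausible.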

Qwen3.5-9B — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: 4bit, Q4_K - Medium, Q8_0, Q4_0, Q4_K - Small, Q6_K

  • 13 benchmark rows
  • 9 chip tiers covered
  • 106.0 fastest avg tok/s (M3 Ultra, 256 GB)
  • 5.1 GB minimum RAM observed

Fastest published result is 106.0 tok/s on M3 Ultra (256 GB) at 4bit. Smallest published fit is 5.1 GB on M3 Ultra (256 GB). Published runtimes include llama.cpp, LM Studio, MLX, Ollama. Start with Rankings for the decision, then use the raw rows below to audit the evidence.
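The 5.1 GB minimum published fit lines up with 4-bit weights for a 9B model once quantization scales are counted. A rough sketch; the ~4.5 effective bits per weight for a 4-bit group-quantized format is an approximation we assume here, not a published spec:

```python
def quant_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate in-memory weight size at a given effective bit rate."""
    return params_b * bits_per_weight / 8

# 4-bit group quantization typically lands near 4.5 bits/weight
# once per-group scales and biases are included (assumption).
print(round(quant_size_gb(9, 4.5), 1))  # 5.1
```

This is only the weight tensor; KV cache and runtime buffers add to the true requirement.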

Based on 13 external benchmarks; no lab runs yet.

Published runtimes: llama.cpp, LM Studio, MLX, Ollama.

  • Total params: 9B
  • Active params: Dense
  • Context window: 262,144 tokens
  • Release date: 2026-02-27

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

Official source  ·  Raw model card

Tags: agents · coding · visual-understanding

Runtime support mentioned

vLLM · SGLang · Transformers · KTransformers

Official specs

  • Type: Causal Language Model with Vision Encoder.
  • Total parameters: 9B.
  • Context / max input: 262,144 tokens natively, extensible up to 1,010,000.

Official takeaways

  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.
  • Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.
  • Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.
  • Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

Qwen3.5-9B: 5 Apple Silicon field reports; best reported generation ~106 tok/s; seen on Mac Studio M3 Ultra 256GB, MacBook M1 Pro 16GB, MacBook Pro M5 Pro 64GB; via MLX, llama.cpp, LM Studio.

  • 13 benchmark rows
  • 5 field reports
  • 7 practitioner signals
  • Evidence status: Sparse Benchmarks

What practitioners keep saying

  • The benchmark write-up recommends Qwen3.5-9B GGUF Q4_0 through llama.cpp on a 16GB Mac Mini M4 at 4.0 tok/s, 24.6 composite quality, and 58% non-reasoning MMLU.
  • The same thread argues that Qwen3.5-9B works better as a compact context-reading or knowledge model than as a broad local agent default on 16GB hardware.
  • The HomeSec-Bench page reports Qwen3.5-9B Q4_K_M on a MacBook Pro M5 Pro 64GB with llama.cpp at about 25 tok/s generation and about 0.765s TTFT.

Apple Silicon field sources

  • r/LocalLLaMA

    2026-03-26 · Mac Mini M4 16GB · llama.cpp

    A broad 16GB Mac Mini M4 sweep says Qwen3.5-9B is still one of the few dense Qwen3.5 sizes worth considering on constrained Apple Silicon, but only if you are honest about its role.

  • SharpAI HomeSec-Bench

    2026-03-26 · MacBook Pro M5 Pro 64GB · llama.cpp

    A domain benchmark on M5 Pro 64GB says Qwen3.5-9B is not just a tiny fallback model on Apple Silicon: it stays fast enough to be a serious local agent candidate.

  • r/LocalLLaMA

    2026-03-12 · 12GB VRAM local workstation · Kilo Code or Roo Code

    Practitioner testing suggests Qwen3.5-9B is unexpectedly good for agentic coding at small-model hardware limits.

  • r/LocalLLaMA

    2026-03-05 · M1 Pro MacBook 16GB · Ollama

    Qwen3.5-9B is already being used as a real agent on 16GB Apple Silicon laptops, not just demoed in chat.

  • r/LocalLLaMA

    2026-03-04 · Mac Studio M3 Ultra 256GB · MLX

    Qwen3.5-9B now has a clean high-end Apple Silicon reference as a genuinely useful small model, not just a fallback.

1 more Apple Silicon field source tracked in the research queue.

Runtime mentions in the field

Kilo Code · llama.cpp · LM Studio · MLX · Ollama · Roo Code

Hardware mentioned in reports

16GB · 64GB · M1 Pro · M3 Ultra · M4 · Mac · Mac Mini · Mac Studio

What would improve confidence

  • Reproduce the field performance signal.
  • Upgrade to first-party measurement.

Published chip coverage includes M3 Ultra (256 GB), M5 Max (64 GB), M4 Pro (24 GB), M5 Max (128 GB), M4 (16 GB) plus 4 more chip tiers. Fastest published row is 106.0 tok/s on M3 Ultra (256 GB) at 4bit. Lowest published RAM requirement is 5.1 GB on M3 Ultra (256 GB).

Related Qwen3.5-9B models with published pages: Qwen3.5-27B · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-397B-A17B · Qwen3.5-4B

Standardized eval scorecards for Qwen3.5-9B

These are fixed-machine model scorecards from a single Apple Silicon setup. They help explain whether a model is merely fast or actually good at tools, coding, reasoning, and general tasks. They do not replace the main Mac ranking above.

Mac Studio M3 Ultra 256GB · Avg 71%

  • Tools: 83%
  • Coding: 70%
  • Reasoning: 60%
  • General: 70%
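The Avg 71% headline appears to be an unweighted mean of the four category scores. A quick check under that assumption (the site may actually weight categories differently):

```python
# Category scores from the scorecard above.
scores = {"tools": 83, "coding": 70, "reasoning": 60, "general": 70}

# Unweighted mean across categories (our assumption).
avg = sum(scores.values()) / len(scores)
print(round(avg))  # 71
```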

Speed and memory

  • Long decode: 106.4 tok/s
  • Short decode: 35.4 tok/s
  • Cold TTFT: 0.228 s
  • Active RAM: 5.1 GB

The smallest model in this set that still looks broadly useful for agent-style work.

vLLM-MLX SCORECARD.md  ·  discussion · 2026-03-04

Raw benchmark rows for Qwen3.5-9B

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

Chip | Quant | RAM req. | Avg tok/s | Runtime | Source
-----|-------|----------|-----------|---------|-------
M3 Ultra (256 GB) | 4bit | 5.1 GB | 106.0 | MLX | ref
M5 Max (64 GB) | Q4_K - Medium | | 105.0 | Ollama | ref
M4 Pro (24 GB) | Q4_K - Medium | | 92.0 | MLX | ref
M5 Max (128 GB) | Q8_0 | | 78.0 | MLX | ref
M4 (16 GB) | Q4_K - Medium | | 72.0 | LM Studio | ref
M3 (16 GB) | Q4_K - Medium | | 58.0 | Ollama | ref
M1 (16 GB) | Q4_K - Medium | | 35.0 | Ollama | ref
M1 Pro (16 GB) | 4bit | | 30.0 | MLX | ref
M5 Pro (64 GB) | Q4_K - Medium | 13.8 GB | 25.0 | llama.cpp | ref
M1 Pro (16 GB) | 4bit | | 15.0 | llama.cpp | ref
M4 (16 GB) | Q4_0 | | 4.1 | llama.cpp | ref
M4 (16 GB) | Q4_K - Small | | 3.1 | llama.cpp | ref
M4 (16 GB) | Q6_K | | 2.2 | llama.cpp | ref

Context and prompt tok/s were not published for these rows; RAM requirement is shown where available.

Ordered by fastest published tok/s on the chip family in each Mac. Click through for the full machine page.

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
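The CSV export lends itself to quick auditing, for example keeping only the fastest published row per chip. A sketch with an inline sample standing in for benchmarks.csv; the column names here (chip, quant, avg_tok_s, runtime) are hypothetical, so check them against the real export's header:

```python
import csv
import io

# Inline stand-in for benchmarks.csv; the real schema may differ.
SAMPLE = """chip,quant,avg_tok_s,runtime
M3 Ultra (256 GB),4bit,106.0,MLX
M5 Max (64 GB),Q4_K - Medium,105.0,Ollama
M4 (16 GB),Q4_0,4.1,llama.cpp
M4 (16 GB),Q4_K - Medium,72.0,LM Studio
"""

def fastest_per_chip(csv_text: str) -> dict[str, dict]:
    """Keep the row with the highest avg tok/s for each chip."""
    best: dict[str, dict] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        row["avg_tok_s"] = float(row["avg_tok_s"])
        if row["chip"] not in best or row["avg_tok_s"] > best[row["chip"]]["avg_tok_s"]:
            best[row["chip"]] = row
    return best

best = fastest_per_chip(SAMPLE)
print(best["M4 (16 GB)"]["quant"])  # Q4_K - Medium
```

Swapping the inline sample for `open("benchmarks.csv")` reproduces the same per-chip view the page shows.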

See all models →