Canonical Rankings

Best Macs for this model

Qwen3.5-27B ranked across the Mac lineup at the best practical quantization, using the best available runtime evidence.

28 ranked Macs. Each row uses the strongest current runtime evidence. Static paths cover only canonical model pages; sort and quantization stay as query state.
| Rank | Mac | Score | Best practical quant | Tok/s | Runtime | Fits | Evidence | Price | Why it ranks here |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Mac Studio M3 Ultra 256GB | 446 | 8bit | 38.0 | MLX | Fits | Estimated | $7,499 | 228.4 GB headroom remains at this quantization. |
| 2 | Mac Pro M2 Ultra 192GB | 313 | 8bit | 20.6 | MLX | Fits | Estimated | $6,999 | 164.4 GB headroom remains at this quantization. |
| 3 | Mac Studio M4 Max 128GB | 249 | 8bit | 20.6 | MLX | Fits | Estimated | $4,499 | 100.4 GB headroom remains at this quantization. |
| 4 | MacBook Pro M4 Max 128GB 16-inch | 249 | 8bit | 20.6 | MLX | Fits | Estimated | $5,999 | 100.4 GB headroom remains at this quantization. |
| 5 | Mac Studio M3 Ultra 96GB | 217 | 8bit | 20.6 | MLX | Fits | Estimated | $3,999 | 68.4 GB headroom remains at this quantization. |
| 6 | Mac Studio M4 Max 64GB | 185 | 8bit | 20.6 | MLX | Fits | Estimated | $2,999 | 36.4 GB headroom remains at this quantization. |
| 7 | MacBook Pro M4 Max 64GB 16-inch | 185 | 8bit | 20.6 | MLX | Fits | Estimated | $4,499 | 36.4 GB headroom remains at this quantization. |
| 8 | Mac Studio M4 Max 48GB | 169 | 8bit | 20.6 | MLX | Fits | Estimated | $2,499 | 20.4 GB headroom remains at this quantization. |
| 9 | MacBook Pro M4 Max 48GB 14-inch | 169 | 8bit | 20.6 | MLX | Fits | Estimated | $3,499 | 20.4 GB headroom remains at this quantization. |
| 10 | MacBook Pro M4 Max 48GB 16-inch | 169 | 8bit | 20.6 | MLX | Fits | Estimated | $3,999 | 20.4 GB headroom remains at this quantization. |
| 11 | Mac Studio M4 Max 36GB | 157 | 8bit | 20.6 | MLX | Fits | Estimated | $1,999 | 8.4 GB headroom remains at this quantization. |
| 12 | MacBook Pro M4 Max 36GB 14-inch | 157 | 8bit | 20.6 | MLX | Fits | Estimated | $2,999 | 8.4 GB headroom remains at this quantization. |
| 13 | MacBook Pro M4 Max 36GB 16-inch | 157 | 8bit | 20.6 | MLX | Fits | Estimated | $3,499 | 8.4 GB headroom remains at this quantization. |
| 14 | Mac Mini M4 32GB | 151 | Q6_K | 20.6 | MLX | Fits | Estimated | $799 | 8.9 GB headroom remains at this quantization. |
| 15 | MacBook Air M4 32GB 13-inch | 151 | Q6_K | 20.6 | MLX | Fits | Estimated | $1,499 | 8.9 GB headroom remains at this quantization. |
| 16 | MacBook Air M4 32GB 15-inch | 151 | Q6_K | 20.6 | MLX | Fits | Estimated | $1,699 | 8.9 GB headroom remains at this quantization. |
| 17 | Mac Mini M4 24GB | 146 | Q5_K_M | 20.6 | MLX | Fits | Estimated | $599 | 3.6 GB headroom remains at this quantization. |
| 18 | MacBook Air M4 24GB 13-inch | 146 | Q5_K_M | 20.6 | MLX | Fits | Estimated | $1,299 | 3.6 GB headroom remains at this quantization. |
| 19 | Mac Mini M4 Pro 24GB | 146 | Q5_K_M | 20.6 | MLX | Fits | Estimated | $1,399 | 3.6 GB headroom remains at this quantization. |
| 20 | MacBook Air M4 24GB 15-inch | 146 | Q5_K_M | 20.6 | MLX | Fits | Estimated | $1,499 | 3.6 GB headroom remains at this quantization. |
| 21 | MacBook Pro M4 Pro 24GB 14-inch | 146 | Q5_K_M | 20.6 | MLX | Fits | Estimated | $1,999 | 3.6 GB headroom remains at this quantization. |
| 22 | MacBook Pro M4 Pro 24GB 16-inch | 146 | Q5_K_M | 20.6 | MLX | Fits | Estimated | $2,499 | 3.6 GB headroom remains at this quantization. |
| 23 | Mac Mini M4 Pro 48GB | 128 | 8bit | 8.5 | MLX | Fits | Measured | $1,599 | 20.4 GB headroom remains at this quantization. |
| 24 | MacBook Pro M4 Pro 48GB 14-inch | 128 | 8bit | 8.5 | MLX | Fits | Measured | $2,499 | 20.4 GB headroom remains at this quantization. |
| 25 | MacBook Pro M4 Pro 48GB 16-inch | 128 | 8bit | 8.5 | MLX | Fits | Measured | $2,999 | 20.4 GB headroom remains at this quantization. |
| 26 | Mac Mini M4 16GB | 29 | Q2_K | 0.0 | llama.cpp | Fits | Estimated | $499 | 5.2 GB headroom remains at this quantization. |
| 27 | MacBook Air M4 16GB 13-inch | 29 | Q2_K | 0.0 | llama.cpp | Fits | Estimated | $1,099 | 5.2 GB headroom remains at this quantization. |
| 28 | MacBook Air M4 16GB 15-inch | 29 | Q2_K | 0.0 | llama.cpp | Fits | Estimated | $1,299 | 5.2 GB headroom remains at this quantization. |

Estimated rows derive tok/s from nearby benchmark coverage; Measured rows are directly benchmarked on that configuration.
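The Fits and headroom columns follow a simple budget rule. Here is a minimal sketch of the math: the per-quant weight sizes are inferred from the table itself (RAM minus headroom, e.g. 256 − 228.4 = 27.6 GB for 8bit), while the ~85% usable-RAM cap used to pick the best practical quantization is an assumption that happens to reproduce the table's choices, not the site's documented rule.

```python
# Weight sizes (GB) inferred from the headroom column above (RAM - headroom).
# The 85% usable-RAM cap is an assumption, not the site's documented rule.
MODEL_GB = {"8bit": 27.6, "Q6_K": 23.1, "Q5_K_M": 20.4, "Q2_K": 10.8}
QUALITY_ORDER = ["8bit", "Q6_K", "Q5_K_M", "Q2_K"]  # best quality first

def best_practical_quant(ram_gb: float, usable_fraction: float = 0.85):
    """Return (quant, headroom GB) for the best-quality quant within budget."""
    budget = ram_gb * usable_fraction
    for quant in QUALITY_ORDER:
        if MODEL_GB[quant] <= budget:
            return quant, round(ram_gb - MODEL_GB[quant], 1)
    return None, 0.0

print(best_practical_quant(256))  # -> ('8bit', 228.4)
print(best_practical_quant(32))   # -> ('Q6_K', 8.9)
```

With these inferred sizes, the rule reproduces every quant/headroom pair in the ranking, including the switch from 8bit at 36 GB to Q6_K at 32 GB.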

Qwen3.5-27B — ranking first, raw rows below

Start with the ranked Mac table above. Use the rest of this page to inspect raw Apple Silicon coverage and model metadata.

Quantizations observed: 4bit, 8bit, Q4_K - Medium, Q6_K

  • Benchmark rows: 11
  • Chip tiers covered: 8
  • Fastest avg tok/s: 38.0 (M3 Ultra (256 GB))
  • Minimum RAM observed: 15.3 GB

Fastest published result is 38.0 tok/s on M3 Ultra (256 GB) at 4bit. Smallest published fit is 15.3 GB on M3 Ultra (256 GB). Longest published context on this page is 8k. Published runtimes include llama.cpp, MLX. Start with Rankings for the decision, then use the raw rows below to audit the evidence.
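Many rows above are marked Estimated rather than Measured, and the page does not document its estimator. One plausible sketch: scale a measured tok/s by the memory-bandwidth ratio of the two chips, since decode speed is usually bandwidth-bound. The bandwidth figures are Apple's published specs for top-spec parts; the scaling rule and its accuracy are assumptions, and this simple ratio lands near, not exactly on, the table's estimates.

```python
# Hypothetical nearest-tier estimator: scale a measured result by the
# memory-bandwidth ratio of the chips. Bandwidths are Apple's published
# GB/s specs; the scaling rule itself is an assumption.
BANDWIDTH_GBS = {"M4 Pro": 273, "M4 Max": 546, "M2 Ultra": 800, "M3 Ultra": 819}

def estimate_tok_s(measured: float, measured_chip: str, target_chip: str) -> float:
    """Project a measured decode speed onto a nearby chip tier."""
    ratio = BANDWIDTH_GBS[target_chip] / BANDWIDTH_GBS[measured_chip]
    return round(measured * ratio, 1)

print(estimate_tok_s(8.5, "M4 Pro", "M4 Max"))  # -> 17.0
```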

Evidence state: 11 linked reference rows and no Silicon Score Lab rows yet.

Published runtimes here: llama.cpp, MLX.

  • Total params: 27B
  • Active params: Dense
  • Context window: 262,144
  • Release date: 2026-02-24
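The parameter count translates directly into rough weight footprints per quantization: bytes ≈ params × bits ÷ 8. A back-of-envelope helper, where the effective bits-per-weight values are assumptions (real GGUF/MLX files add metadata, and the runtime adds KV cache on top):

```python
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Weights-only footprint in GB: params (billions) x bits / 8.
    Ignores file metadata, KV cache, and runtime overhead."""
    return round(params_b * bits_per_weight / 8, 1)

print(weight_gb(27, 8))    # 8bit -> 27.0 GB of weights alone
print(weight_gb(27, 4.5))  # ~4.5 effective bits (assumed) -> 15.2 GB
```

The second figure is in the same ballpark as the 15.3 GB minimum RAM observed for the 4bit MLX row, which is what this kind of estimate is for.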

What this model is, and what Apple Silicon users are actually seeing

Official model cards tell you what the model is for and which software stacks it targets. Field reality below shows how much Apple Silicon evidence we have so far.

Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

Official source  ·  Raw model card

agents · coding · reasoning · visual-understanding

Runtime support mentioned

vLLM · SGLang · Transformers · KTransformers

Official takeaways

  • Type: Causal Language Model with Vision Encoder.
  • Scale: 27B.
  • Context: 262,144 tokens natively, extensible up to 1,010,000 tokens.
  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

Official model cards describe intent, capabilities, and supported stacks. They do not prove Apple Silicon speed by themselves.

Qwen3.5-27B: 8 Apple Silicon field reports; best reported generation ~31.6 tok/s; seen on MacBook Pro M5 Max 128GB, M2 Ultra 128GB, MacBook Pro M4 Pro; via MLX, llama.cpp.

  • Benchmark rows: 11
  • Field reports: 8
  • Practitioner signals: 8
  • Evidence status: Sparse Benchmarks

What practitioners keep saying

  • Reported as a meaningful step up from older 27B-32B daily-driver choices for coding.
  • Apple Silicon viability discussion is centered on the M4 Pro 48GB class, which is a practical buyer tier.
  • The discussion is about what people can actually live with on a Mac Mini class machine, not lab-only fit math.

Runtime mentions in the field

Claude Code · llama.cpp · LM Studio · MLX · Ollama · OpenClaw

Hardware mentioned in reports

16GB · 24GB · 32GB · 48GB · 64GB · 128GB · M1 Max · M4

What would improve confidence

  • Capture Practitioner Runtime Notes
  • Queue Lab Verification If Hardware Available
  • Reproduce Field Performance Signal
  • Upgrade To First Party Measurement

Published chip coverage includes M3 Ultra (256 GB), M5 Max (128 GB), M5 Max (48 GB), M2 Ultra (GPU count not published, 128 GB), M1 Max (64 GB) plus 3 more chip tiers. Fastest published row is 38.0 tok/s on M3 Ultra (256 GB) at 4bit. Lowest published RAM requirement is 15.3 GB on M3 Ultra (256 GB). Longest context in the published rows is 8k.

Related Qwen3.5-27B models with published pages: Qwen3.5-35B-A3B · Qwen3.5-9B · Qwen3.5-122B-A10B · Qwen3.5-397B-A17B

Standardized eval scorecards for Qwen3.5-27B

These are fixed-machine model scorecards from a single Apple Silicon setup. They help explain whether a model is merely fast or actually good at tools, coding, reasoning, and general tasks. They do not replace the main Mac ranking above.

Mac Studio M3 Ultra 256GB · Avg 76%

  • Tools: 83%
  • Coding: 90%
  • Reasoning: 50%
  • General: 80%

Speed and memory

  • Long decode: 37.7 tok/s
  • Short decode: 17.7 tok/s
  • Cold TTFT: 0.453 s
  • Active RAM: 15.3 GB

A strong fits-anywhere coding and tool-use compromise.

vLLM-MLX SCORECARD.md  ·  discussion · 2026-03-04

Raw benchmark rows for Qwen3.5-27B

Rows stay below the ranking because this page is answer-first. Use them to inspect exact chips, quantizations, runtimes, and sources.

| Chip | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M3 Ultra (256 GB) | 4bit | 15.3 GB | — | 38.0 | — | MLX | ref |
| M5 Max (128 GB) | 4bit | — | — | 31.6 | — | MLX | ref |
| M5 Max (48 GB) | 4bit | — | 8k | 31.3 | 779.0 | MLX | ref |
| M2 Ultra (GPU count not published, 128 GB) | 8bit | — | — | 27.1 | — | MLX | ref |
| M5 Max (48 GB) | Q4_K - Medium | — | 8k | 23.7 | 171.0 | llama.cpp | ref |
| M2 Ultra (GPU count not published, 128 GB) | 8bit | — | — | 20.6 | — | MLX | ref |
| M5 Max (128 GB) | Q6_K | — | 8k | 16.5 | — | llama.cpp | ref |
| M1 Max (64 GB) | 4bit | — | 8k | 15.0 | 67.0 | MLX | ref |
| M5 Pro (64 GB) | Q4_K - Medium | 24.9 GB | — | 10.0 | — | llama.cpp | ref |
| M4 Pro (48 GB) | 8bit | — | — | 8.5 | — | MLX | ref |
| M4 (16 GB) | Q4_K - Medium | — | — | 0.0 | — | llama.cpp | ref |

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
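The exports above can be filtered programmatically, for example to pull the fastest row per chip. A sketch that assumes benchmarks.csv mirrors the table's column layout; the header names used here are hypothetical, so check them against the actual export before relying on this.

```python
import csv
import io

# Hypothetical sample standing in for benchmarks.csv; column names assumed.
SAMPLE = """chip,quant,avg_tok_s,runtime
M3 Ultra (256 GB),4bit,38.0,MLX
M5 Max (48 GB),4bit,31.3,MLX
M5 Max (48 GB),Q4_K - Medium,23.7,llama.cpp
"""

def fastest_per_chip(csv_text: str) -> dict:
    """Map each chip to its (fastest avg tok/s, quant) across all rows."""
    best: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        speed = float(row["avg_tok_s"])
        if speed > best.get(row["chip"], (0.0, None))[0]:
            best[row["chip"]] = (speed, row["quant"])
    return best

print(fastest_per_chip(SAMPLE))
```

For the real export, replace the sample string with `open("benchmarks.csv").read()` once the column names are confirmed.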

See all models →