Estimated from nearby benchmark coverage, not a direct machine-and-quant match. Best runtime hint: Llamafile.
Coverage: no direct benchmark rows yet. Because speed is estimated, this cost read is provisional.
Estimated from nearby benchmark coverage, not a direct machine-and-quant match. Best runtime hint: LM Studio.
Coverage: no direct benchmark rows yet. Because speed is estimated, this cost read is provisional.
Speed is backed by direct trusted-reference benchmark coverage on this hardware class. Most common runtime in the evidence is MLX.
Estimated from nearby benchmark coverage, not a direct machine-and-quant match. Best runtime hint: MLX.
Coverage: no direct benchmark rows yet. Because speed is estimated, this cost read is provisional.
Estimated from nearby benchmark coverage, not a direct machine-and-quant match. Best runtime hint: Ollama.
Coverage: no direct benchmark rows yet. Because speed is estimated, this cost read is provisional.
These answers track the live workspace defaults for this compatibility route, so the copy reflects the same sort order and query framing the table currently uses.
What does the Worth route optimize for?
Worth defaults to lowest local cost for the selected Mac. It helps you compare which models deliver the most practical local inference for the machine cost you already carry.
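The cost-first default described above can be sketched as a sort key. This is a hypothetical illustration, not the product's actual code: the field names (`est_cost_per_mtok`, `tokens_per_sec`) and the tie-breaking rule are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ModelRow:
    name: str
    est_cost_per_mtok: float  # estimated local cost per million tokens on this Mac
    tokens_per_sec: float     # estimated generation speed

def worth_sort(rows: list[ModelRow]) -> list[ModelRow]:
    # Lowest estimated local cost first; speed only breaks ties.
    # This is why a slower model can sit above a faster one: cost
    # dominates the sort key.
    return sorted(rows, key=lambda r: (r.est_cost_per_mtok, -r.tokens_per_sec))

rows = [
    ModelRow("fast-but-costly", est_cost_per_mtok=0.42, tokens_per_sec=55.0),
    ModelRow("slow-but-cheap", est_cost_per_mtok=0.18, tokens_per_sec=21.0),
]
ranked = worth_sort(rows)
# "slow-but-cheap" ranks first despite the lower tokens/sec
```

The one-directional key is the whole design: speed never outvotes cost, it only orders models whose cost estimates are equal.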
Does Worth replace API cost modeling?
No. Worth is a local-cost reading, not a full finance model. It is best used to shortlist practical local options before you compare them against your cloud usage and break-even assumptions.
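The break-even comparison mentioned above can be sketched under a simple straight-line amortization model. Every name and number here is an assumption for illustration (the product does not expose this function); it only shows the shape of the comparison you would run after shortlisting.

```python
def breakeven_mtok_per_month(machine_cost_usd: float,
                             amortize_months: int,
                             api_price_per_mtok: float,
                             local_marginal_per_mtok: float) -> float:
    """Monthly volume (millions of tokens) above which local inference
    beats the API on cost, amortizing the machine straight-line."""
    fixed_monthly = machine_cost_usd / amortize_months
    margin = api_price_per_mtok - local_marginal_per_mtok
    if margin <= 0:
        return float("inf")  # local never wins per-token at this margin
    return fixed_monthly / margin

# e.g. a $2,400 Mac amortized over 24 months, a $3.00/Mtok API price,
# and an assumed $0.50/Mtok local marginal (energy) cost:
threshold = breakeven_mtok_per_month(2400, 24, 3.00, 0.50)
# threshold == 40.0 Mtok/month
```

Below the threshold the API's zero fixed cost wins; above it the amortized machine cost is spread thin enough that local becomes cheaper.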
Why can a slower model rank above a faster one on Worth?
Worth favors local cost efficiency first. A slower model can still rank higher if it gives you a materially cheaper or lighter-weight local option on the same Mac.