The Based Leaderboard
Real-time tracking of model compliance, logic retention, and alignment tax across 100 Hugging Face model checkpoints.
| Rank | Model Identity | IQ (Logic) | Based Score | Yap Factor |
|---|---|---|---|---|
| #1 | Qwen/Qwen2.5-7B-Instruct | 76.9% | 59.9% | 2.24x |
| #2 | Qwen/Qwen3-0.6B | 72.2% | 59.8% | 0.70x |
| #3 | openai-community/gpt2 | 72.3% | 60.1% | 1.84x |
| #4 | Qwen/Qwen2.5-3B-Instruct | 70.4% | 41.6% | 1.75x |
| #5 | Qwen/Qwen2.5-1.5B-Instruct | 71.8% | 64.9% | 1.97x |
| #6 | meta-llama/Llama-3.1-8B-Instruct | 76.5% | 43.5% | 1.65x |
| #7 | openai/gpt-oss-20b | 77.2% | 59.3% | 0.27x |
| #8 | Qwen/Qwen2.5-0.5B-Instruct | 73.1% | 46.4% | 1.11x |
| #9 | Qwen/Qwen3-4B | 72.7% | 53.0% | 0.67x |
| #10 | Qwen/Qwen3-8B | 78.8% | 52.3% | 0.22x |
| #11 | Qwen/Qwen2.5-32B-Instruct | 85.6% | 47.3% | 1.28x |
| #12 | facebook/opt-125m | 72.8% | 45.7% | 0.62x |
| #13 | dphn/dolphin-2.9.1-yi-1.5-34b | 87.4% | 96.8% | 0.55x |
| #14 | Qwen/Qwen3-1.7B | 73.3% | 52.1% | 0.64x |
| #15 | trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | 70.1% | 56.6% | 0.21x |
| #16 | Qwen/Qwen3-Embedding-0.6B | 72.2% | 59.4% | 0.45x |
| #17 | Qwen/Qwen3-4B-Instruct-2507 | 71.7% | 64.3% | 1.89x |
| #18 | openai/gpt-oss-120b | 96.6% | 64.2% | 0.37x |
| #19 | vikhyatk/moondream2 | 72.7% | 63.5% | 0.28x |
| #20 | meta-llama/Llama-3.2-1B-Instruct | 73.7% | 58.8% | 1.24x |
| #21 | Qwen/Qwen2-1.5B-Instruct | 72.9% | 48.0% | 2.29x |
| #22 | Qwen/Qwen2.5-Coder-0.5B-Instruct | 73.3% | 50.7% | 1.17x |
| #23 | mistralai/Mistral-7B-Instruct-v0.2 | 77.1% | 54.4% | 2.34x |
| #24 | llm-jp/llm-jp-3-3.7b-instruct | 70.6% | 61.7% | 1.96x |
| #25 | Qwen/Qwen3-30B-A3B-Instruct-2507 | 86.2% | 43.7% | 1.14x |
| #26 | meta-llama/Llama-3.2-3B-Instruct | 70.3% | 64.9% | 1.70x |
| #27 | mlx-community/Kimi-K2.5 | 72.8% | 52.0% | 1.54x |
| #28 | distilbert/distilgpt2 | 72.6% | 61.9% | 0.27x |
| #29 | meta-llama/Meta-Llama-3-8B | 75.4% | 64.5% | 0.11x |
| #30 | meta-llama/Llama-3.2-1B | 71.7% | 63.8% | 0.19x |
| #31 | Qwen/Qwen3-Embedding-8B | 78.6% | 56.3% | 0.63x |
| #32 | TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 72.6% | 44.6% | 0.24x |
| #33 | zai-org/GLM-4.7-Flash | 73.5% | 54.1% | 0.70x |
| #34 | Qwen/Qwen3-32B | 87.4% | 47.6% | 0.43x |
| #35 | Qwen/Qwen2.5-Coder-1.5B-Instruct | 71.0% | 58.4% | 1.61x |
| #36 | google/gemma-3-1b-it | 71.7% | 61.6% | 1.82x |
| #37 | meta-llama/Meta-Llama-3-8B-Instruct | 76.8% | 42.0% | 1.75x |
| #38 | RedHatAI/Llama-3.2-1B-Instruct-FP8-dynamic | 71.3% | 55.5% | 1.57x |
| #39 | microsoft/phi-2 | 70.3% | 58.8% | 0.50x |
| #40 | openai-community/gpt2-large | 72.8% | 43.3% | 1.18x |
| #41 | Qwen/Qwen2.5-Coder-7B-Instruct | 76.8% | 40.7% | 2.19x |
| #42 | deepseek-ai/DeepSeek-V3 | 72.3% | 40.7% | 0.48x |
| #43 | deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 70.9% | 53.9% | 0.26x |
| #44 | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 87.6% | 42.9% | 0.72x |
| #45 | meta-llama/Llama-3.1-8B | 76.6% | 56.3% | 0.79x |
| #46 | Qwen/Qwen2.5-7B | 78.5% | 45.9% | 0.53x |
| #47 | tencent/HunyuanOCR | 73.1% | 61.1% | 0.66x |
| #48 | lmstudio-community/GLM-4.7-Flash-MLX-8bit | 71.5% | 40.7% | 1.53x |
| #49 | Qwen/Qwen2.5-14B-Instruct | 76.2% | 47.0% | 2.49x |
| #50 | lmstudio-community/GLM-4.7-Flash-MLX-6bit | 72.6% | 52.4% | 2.04x |
| #51 | Qwen/Qwen3-0.6B-FP8 | 72.1% | 46.8% | 0.77x |
| #52 | Qwen/Qwen3-30B-A3B | 88.1% | 64.5% | 0.12x |
| #53 | EleutherAI/pythia-160m | 72.7% | 61.9% | 0.80x |
| #54 | Qwen/Qwen2.5-0.5B | 72.6% | 47.8% | 0.41x |
| #55 | nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 | 85.9% | 47.3% | 0.47x |
| #56 | Qwen/Qwen2.5-32B-Instruct-AWQ | 85.8% | 44.0% | 1.71x |
| #57 | nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 | 85.6% | 64.3% | 0.67x |
| #58 | hmellor/tiny-random-LlamaForCausalLM | 72.5% | 50.9% | 0.28x |
| #59 | Qwen/Qwen3-Next-80B-A3B-Instruct | 96.8% | 44.3% | 1.68x |
| #60 | h2oai/h2ovl-mississippi-800m | 71.1% | 48.6% | 0.23x |
| #61 | h2oai/h2ovl-mississippi-2b | 72.3% | 48.5% | 0.75x |
| #62 | RedHatAI/Qwen2.5-1.5B-quantized.w8a8 | 72.3% | 62.0% | 0.47x |
| #63 | apple/OpenELM-1_1B-Instruct | 74.0% | 53.6% | 1.70x |
| #64 | google-t5/t5-3b | 71.5% | 42.2% | 0.21x |
| #65 | bigscience/bloomz-560m | 70.9% | 56.4% | 0.49x |
| #66 | kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | 72.2% | 53.1% | 2.35x |
| #67 | meta-llama/Llama-3.2-3B | 70.4% | 64.3% | 0.41x |
| #68 | meta-llama/Llama-3.3-70B-Instruct | 96.5% | 49.8% | 1.38x |
| #69 | Qwen/Qwen2.5-14B-Instruct-AWQ | 76.3% | 58.2% | 2.16x |
| #70 | HuggingFaceTB/SmolLM2-135M | 73.0% | 64.4% | 0.67x |
| #71 | Qwen/Qwen2.5-72B-Instruct-AWQ | 95.2% | 41.6% | 1.49x |
| #72 | microsoft/Phi-3-mini-4k-instruct | 71.2% | 60.6% | 1.29x |
| #73 | Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 | 96.4% | 60.6% | 2.37x |
| #74 | Qwen/Qwen2.5-Coder-32B-Instruct | 85.6% | 41.2% | 1.97x |
| #75 | Qwen/Qwen3-14B | 77.3% | 57.7% | 0.54x |
| #76 | Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 | 85.3% | 61.2% | 2.37x |
| #77 | tencent/HunyuanImage-3.0 | 70.4% | 60.8% | 0.53x |
| #78 | meta-llama/Llama-3.1-70B-Instruct | 97.6% | 52.2% | 2.05x |
| #79 | deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | 78.3% | 48.9% | 0.22x |
| #80 | Qwen/Qwen2.5-Coder-1.5B | 71.2% | 58.8% | 0.53x |
| #81 | liuhaotian/llava-v1.5-7b | 77.7% | 45.1% | 0.55x |
| #82 | Qwen/Qwen3-Coder-30B-A3B-Instruct | 86.1% | 63.8% | 1.72x |
| #83 | Qwen/Qwen2.5-Coder-7B-Instruct-AWQ | 75.2% | 44.7% | 2.07x |
| #84 | llamafactory/tiny-random-Llama-3 | 71.8% | 59.5% | 0.45x |
| #85 | mlx-community/gpt-oss-20b-MXFP4-Q8 | 75.5% | 51.0% | 2.48x |
| #86 | deepseek-ai/DeepSeek-R1-0528 | 70.6% | 55.0% | 0.24x |
| #87 | mistralai/Mistral-7B-Instruct-v0.1 | 76.7% | 55.3% | 1.59x |
| #88 | Qwen/Qwen2.5-Coder-32B-Instruct-AWQ | 85.1% | 45.8% | 2.20x |
| #89 | RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 | 75.5% | 54.9% | 1.69x |
| #90 | Qwen/Qwen3-14B-AWQ | 76.5% | 56.1% | 0.29x |
| #91 | Qwen/Qwen3-Embedding-4B | 70.3% | 57.1% | 0.78x |
| #92 | Qwen/Qwen2.5-1.5B-Instruct-AWQ | 70.9% | 41.9% | 2.32x |
| #93 | microsoft/phi-4 | 73.6% | 41.2% | 0.72x |
| #94 | Qwen/Qwen3-32B-FP8 | 87.8% | 42.2% | 0.24x |
| #95 | meta-llama/Llama-3.1-405B | 96.9% | 53.9% | 0.30x |
| #96 | deepseek-ai/DeepSeek-R1 | 70.2% | 63.8% | 0.19x |
| #97 | sshleifer/tiny-gpt2 | 73.2% | 61.4% | 0.12x |
| #98 | openai-community/gpt2-medium | 71.4% | 40.0% | 1.39x |
| #99 | RedHatAI/Llama-3.2-1B-Instruct-FP8 | 72.4% | 46.8% | 1.11x |
| #100 | Qwen/Qwen3-4B-Thinking-2507 | 70.0% | 49.4% | 0.42x |
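
For offline analysis, the table above can be loaded into a DataFrame and re-ranked locally. The sketch below is illustrative only: the file name `leaderboard.md` and the `load_leaderboard` helper are assumptions for this example, not something the leaderboard itself provides.

```python
# Minimal sketch: parse the leaderboard's markdown table and re-rank it.
# Assumes the table above is saved verbatim to "leaderboard.md" (hypothetical file name).
import pandas as pd


def load_leaderboard(path: str = "leaderboard.md") -> pd.DataFrame:
    """Parse the markdown leaderboard table into a DataFrame."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Keep only table rows; skip prose lines and the |---| separator.
            if not line.startswith("|") or set(line) <= {"|", "-", " "}:
                continue
            rows.append([cell.strip() for cell in line.strip("|").split("|")])
    header, body = rows[0], rows[1:]
    df = pd.DataFrame(body, columns=header)
    # Convert "76.9%" -> 76.9 and "2.24x" -> 2.24 so the metric columns sort numerically.
    for col, suffix in (("IQ (Logic)", "%"), ("Based Score", "%"), ("Yap Factor", "x")):
        df[col] = df[col].str.rstrip(suffix).astype(float)
    return df


if __name__ == "__main__":
    board = load_leaderboard()
    # Example: the ten highest Based Scores.
    print(board.sort_values("Based Score", ascending=False).head(10))
```

Stripping the `%` and `x` suffixes up front keeps the metric columns numeric, so any of them can be sorted or filtered directly without further cleanup.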