Model Profile
Meta-Llama-3-8B-Instruct
Use this page to decide where this model is a strong fit. The rankings below are benchmark-backed and broken down by use case, with explicit confidence and contributor metrics for each.
Identity
ID: meta-llama/Meta-Llama-3-8B-Instruct
Author: meta-llama
Origin: huggingface_catalog
Arch: unknown
Benchmark Coverage
Scored use cases: 12
Avg confidence: 22.7%
Evidence points: 62
Raw rows: 45
Weighted rows: 11
Catalog Metadata
Parameters: unknown
Context window: 4096
Downloads: 1,394,471
Intelligence Profile
Dimension Breakdown
No IQ benchmarks found
No accuracy benchmarks found
No creativity benchmarks found
* Low confidence: limited benchmark evidence for this dimension
2/5 dimensions scored · Last updated Apr 21, 2026
Benchmark Signals
Each signal links through to the benchmark source behind this model profile.
LLM Trustworthy Leaderboard
adv
Normalized value 100.0% · confidence 100.0%
Strongest impact in Overrefusal (eval)
llm_trustworthy_leaderboard.adv · Mar 31, 2026
EQ-Bench Leaderboard
eq_bench_score
Normalized value 76.8% · confidence 100.0%
Strongest impact in Social post generation
eq_bench.eq_bench_score · Apr 1, 2026
LLM Trustworthy Leaderboard
privacy
Normalized value 69.0% · confidence 100.0%
Strongest impact in Overrefusal (eval)
llm_trustworthy_leaderboard.privacy · Mar 31, 2026
LLM Trustworthy Leaderboard
fairness
Normalized value 46.8% · confidence 100.0%
Strongest impact in Overrefusal (eval)
llm_trustworthy_leaderboard.fairness · Mar 31, 2026
LLM Trustworthy Leaderboard
toxicity
Normalized value 50.0% · confidence 100.0%
Strongest impact in Overrefusal (eval)
llm_trustworthy_leaderboard.toxicity · Mar 31, 2026
DuckDB NSQL Leaderboard
all_execution_accuracy
Normalized value 32.7% · confidence 100.0%
Strongest impact in Social post generation
duckdb_nsql_leaderboard.all_execution_accuracy · Apr 1, 2026
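Each signal above pairs a normalized value with a confidence. A minimal sketch of how such normalization could work, assuming simple min-max scaling (the scheme, raw score, and range here are illustrative assumptions, not taken from the leaderboards themselves):

```python
def normalize(raw: float, lo: float, hi: float) -> float:
    """Min-max normalize a raw benchmark score into the 0..1 range.

    Assumes `lo` and `hi` are the known bounds of the raw metric;
    degenerate ranges collapse to 0.0.
    """
    if hi == lo:
        return 0.0
    return (raw - lo) / (hi - lo)

# Illustrative only: a raw score of 76.8 on a hypothetical 0..100 scale
pct = normalize(76.8, 0.0, 100.0) * 100
print(f"Normalized value {pct:.1f}%")
```

Reported percentages like 76.8% would then be this ratio expressed per hundred; the actual pipeline behind this page may use a different scaling.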
Some fit rows have limited benchmark evidence.
7 of 12 scored use cases have low confidence or thin contributor coverage.
Coverage Diagnostics
Use-Case Scores (actively scored): 46
Total Measurements: 45
Weighted Measurements: 11
Weighted Sources: 5
[Charts: Raw Source Coverage · Weighted Source Coverage]
Best Use Cases for This Model
| Use Case | ID | Score |
|---|---|---|
| Overrefusal (eval) | use_case.security.overrefusal_eval | 20.3% |
| Scam and social engineering resistance (eval) | use_case.security.scam_social_engineering_resistance_eval | 20.3% |
| Jailbreak resistance (eval) | use_case.security.jailbreak_resistance_eval | 20.3% |
| Refusal profile (eval) | use_case.security.refusal_profile_eval | 20.3% |
| Crisis escalation protocol (eval) | use_case.safety.crisis_escalation_protocol | 20.3% |
| Vulnerability-oriented code review | use_case.cyber.vulnerability_review | 12.3% |
| Disinformation and manipulation resistance (eval) | use_case.security.disinformation_resistance_eval | 10.8% |
| Social post generation | use_case.mkt.social_post_generation | 10.0% |
| Campaign brief | use_case.mkt.campaign_brief | 10.0% |
| Product positioning and messaging | use_case.mkt.product_positioning | 10.0% |
| Ad copy variants | use_case.mkt.ad_copy_variants | 9.4% |
| Personalized sales outreach | use_case.mkt.sales_outreach_personalized | 9.4% |
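If you consume this table programmatically, a short sketch of ranking and thresholding the use-case scores. The IDs and scores are a subset copied from the table above; the 12% cutoff is an arbitrary illustrative choice:

```python
# Subset of the use-case scores from the table above (score in percent).
scores = {
    "use_case.security.overrefusal_eval": 20.3,
    "use_case.security.jailbreak_resistance_eval": 20.3,
    "use_case.cyber.vulnerability_review": 12.3,
    "use_case.mkt.social_post_generation": 10.0,
    "use_case.mkt.ad_copy_variants": 9.4,
}

# Keep use cases at or above an (arbitrary) threshold, ranked best-first.
THRESHOLD = 12.0
best = sorted(
    ((uid, s) for uid, s in scores.items() if s >= THRESHOLD),
    key=lambda kv: kv[1],
    reverse=True,
)
for uid, s in best:
    print(f"{uid}: {s:.1f}%")
```

Given the low absolute scores and the low-confidence note above, treating any cutoff as a soft filter rather than a hard ranking is probably the safer reading.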