Model Profile
openai/gpt-4o-mini-2024-07-18
Use this page to decide where this model is a strong fit. The rankings below are backed by benchmarks, broken down by use case, with explicit confidence and contributor metrics.
Identity
ID: external/openai/gpt-4o-mini-2024-07-18
Author: openai
Origin: external_benchmark_shadow
Arch: unknown
Benchmark Coverage
Scored use cases: 12
Avg confidence: 30.6%
Evidence points: 153
Raw rows: 328
Weighted rows: 24
Catalog Metadata
Parameters: unknown
Context window: 4096
Downloads: 0
Intelligence Profile
Dimension Breakdown
No EQ benchmarks found
No accuracy benchmarks found
No creativity benchmarks found
* Low confidence — limited benchmark evidence for this dimension
2/5 dimensions scored · Last updated Apr 21, 2026
Benchmark Signals
Click through to the benchmark sources behind this model profile.
DuckDB NSQL Leaderboard
all_execution_accuracy
Normalized value 76.9% · confidence 100.0%
Strongest impact in Metric definition workshop
duckdb_nsql_leaderboard.all_execution_accuracy · Apr 1, 2026
LLM Trustworthy Leaderboard
privacy
Normalized value 84.3% · confidence 100.0%
Strongest impact in Jailbreak resistance (eval)
llm_trustworthy_leaderboard.privacy · Mar 31, 2026
DuckDB NSQL Leaderboard
hard_execution_accuracy
Normalized value 50.0% · confidence 100.0%
Strongest impact in SQL debugging
duckdb_nsql_leaderboard.hard_execution_accuracy · Apr 1, 2026
LLM Trustworthy Leaderboard
adv
Normalized value 57.9% · confidence 100.0%
Strongest impact in Jailbreak resistance (eval)
llm_trustworthy_leaderboard.adv · Mar 31, 2026
BigCodeBench Official
bigcodebench_complete_pct
Normalized value 90.8% · confidence 100.0%
Strongest impact in Verilog/VHDL generation
bigcodebench_official.bigcodebench_complete_pct · Apr 1, 2026
LLM Trustworthy Leaderboard
toxicity
Normalized value 50.0% · confidence 100.0%
Strongest impact in Jailbreak resistance (eval)
llm_trustworthy_leaderboard.toxicity · Mar 31, 2026
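Each signal above pairs a normalized benchmark value with a confidence. The profile does not publish its exact aggregation formula, so the following is only a hypothetical sketch of how such value/confidence pairs could roll up into a single use-case score (a confidence-weighted average); the `Signal` type and `use_case_score` function are illustrative names, not part of any published API.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str        # e.g. "duckdb_nsql_leaderboard.hard_execution_accuracy"
    value: float       # normalized benchmark value, 0.0-1.0
    confidence: float  # evidence confidence, 0.0-1.0

def use_case_score(signals: list[Signal]) -> float:
    """Confidence-weighted average of normalized benchmark values.

    Hypothetical aggregation -- illustrates how value/confidence pairs
    might combine into one use-case score; not the profile's actual formula.
    """
    total_weight = sum(s.confidence for s in signals)
    if total_weight == 0:
        return 0.0
    return sum(s.value * s.confidence for s in signals) / total_weight

# The two DuckDB NSQL signals listed above, both at 100% confidence:
sql_signals = [
    Signal("duckdb_nsql_leaderboard.all_execution_accuracy", 0.769, 1.0),
    Signal("duckdb_nsql_leaderboard.hard_execution_accuracy", 0.500, 1.0),
]
print(round(use_case_score(sql_signals), 4))
```

With equal confidences this reduces to a plain mean; signals with lower confidence would pull less weight.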
Some fit rows have limited benchmark evidence.
4 of 12 scored use cases have low confidence or thin contributor coverage.
Coverage Diagnostics
| Metric | Value |
|---|---|
| Use-Case Scores (actively scored) | 141 |
| Total Measurements | 328 |
| Weighted Measurements | 24 |
| Weighted Sources | 14 |
Charts (not shown): Raw Source Coverage · Weighted Source Coverage
Best Use Cases for This Model
| Use Case | ID | Score |
|---|---|---|
| Jailbreak resistance (eval) | use_case.security.jailbreak_resistance_eval | 20.4% |
| Refusal profile (eval) | use_case.security.refusal_profile_eval | 20.4% |
| Overrefusal (eval) | use_case.security.overrefusal_eval | 20.4% |
| Scam and social engineering resistance (eval) | use_case.security.scam_social_engineering_resistance_eval | 20.4% |
| Crisis escalation protocol (eval) | use_case.safety.crisis_escalation_protocol | 20.4% |
| Metric definition workshop | use_case.data.metric_definition_workshop | 16.9% |
| Data quality assistant | use_case.data.data_quality_assistant | 15.7% |
| SQL debugging | use_case.data.sql_debugging | 14.7% |
| Simulation setup assistant | use_case.eng.simulation_setup_assistant | 14.5% |
| Executive brief from metrics | use_case.data.exec_brief_from_metrics | 14.1% |
| Insight mining from text corpora | use_case.data.insight_mining | 13.1% |
| Verilog/VHDL generation | use_case.eda.verilog_generation | 12.9% |
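When comparing candidate models programmatically, the ranking above can be filtered to a shortlist. A minimal sketch, assuming a hypothetical score threshold of 15% (the threshold and the `shortlist` helper are illustrative, not part of the profile):

```python
# Use-case scores from the table above (use-case ID -> score in percent).
scores = {
    "use_case.security.jailbreak_resistance_eval": 20.4,
    "use_case.security.refusal_profile_eval": 20.4,
    "use_case.security.overrefusal_eval": 20.4,
    "use_case.security.scam_social_engineering_resistance_eval": 20.4,
    "use_case.safety.crisis_escalation_protocol": 20.4,
    "use_case.data.metric_definition_workshop": 16.9,
    "use_case.data.data_quality_assistant": 15.7,
    "use_case.data.sql_debugging": 14.7,
    "use_case.eng.simulation_setup_assistant": 14.5,
    "use_case.data.exec_brief_from_metrics": 14.1,
    "use_case.data.insight_mining": 13.1,
    "use_case.eda.verilog_generation": 12.9,
}

def shortlist(scores: dict[str, float], threshold: float = 15.0) -> list[str]:
    """Return use-case IDs at or above the (hypothetical) threshold, best first."""
    return sorted(
        (uc for uc, s in scores.items() if s >= threshold),
        key=scores.__getitem__,
        reverse=True,
    )

print(shortlist(scores))
```

At a 15% cutoff this keeps the five security/safety evals plus the two strongest data use cases; tighter cutoffs would prune the list further.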