Top AI Models December 2025
Comprehensive comparison of frontier AI models (December 2025): HLE, LiveBench, MMLU, and GPQA benchmark scores for leading models from OpenAI, Anthropic (Claude), Google (Gemini), xAI (Grok), and the open-source ecosystem. Updated performance rankings and capabilities assessment.
🏆 Text Model Leaderboards
HLE
Performance on the HLE (Humanity's Last Exam) benchmark (source: scale.com, data via lifearchitect.ai/models-table). Fetched 22/12/25.

| Rank | Model | Score |
|---|---|---|
| 1 | GPT-5.2 | 50 |
| 2 | Gemini 3 | 45.8 |
| 3 | Grok 4 | 44.4 |
| 4 | Kimi K2 Thinking | 44 |
| 5 | Gemini 3 Flash | 43.5 |
| 6 | Claude Opus 4.5 | 43.2 |
| 7 | GPT-5 | 42 |
| 8 | Orchestrator-8B | 37.1 |
| 9 | MiniMax-M2 | 31.8 |
| 10 | DeepSeek-V3.2-Speciale | 30.6 |
Reasoning (LiveBench)
Average performance on reasoning tasks (Web of Lies v2, Zebra Puzzle, Spatial) from LiveBench. Fetched 22/12/25.

| Rank | Model | Score |
|---|---|---|
| 1 | gemini-3-pro-preview-11-2025-high | 98.8 |
| 2 | gpt-5-codex | 98.7 |
| 3 | claude-opus-4-5-20251101-thinking-medium-effort | 98.7 |
| 4 | gpt-5-high | 98.2 |
| 5 | gpt-5.1-codex | 98 |
| 6 | claude-opus-4-5-20251101-thinking-high-effort | 98 |
| 7 | grok-4-0709 | 97.8 |
| 8 | gpt-5-pro-2025-10-06 | 96.7 |
| 9 | gpt-5 | 96.6 |
| 10 | gemini-3-pro-preview-11-2025-low | 96.5 |
Programming (LiveBench)
Average performance on programming tasks (Code Generation, Coding Completion) from LiveBench. Fetched 22/12/25.

| Rank | Model | Score |
|---|---|---|
| 1 | claude-opus-4-5-20251101-medium-effort | 41.5 |
| 2 | claude-opus-4-5-20251101-high-effort | 40.8 |
| 3 | claude-sonnet-4-5-20250929-thinking-64k | 40.1 |
| 4 | gpt-5-high | 40.1 |
| 5 | claude-opus-4-5-20251101-low-effort | 40.1 |
| 6 | claude-4-sonnet-20250514-base | 39.4 |
| 7 | claude-4-sonnet-20250514-thinking-64k | 39.4 |
| 8 | gpt-5-chat | 39.4 |
| 9 | grok-4-0709 | 39.4 |
| 10 | grok-code-fast-1-0825 | 39.4 |
MMLU
Performance on the MMLU benchmark (via lifearchitect.ai/models-table). Fetched 22/12/25.

| Rank | Model | Score |
|---|---|---|
| 1 | Kimi K2 Thinking | 94.4 |
| 2 | Qwen3-235B-A22B-Thinking-2507 | 93.8 |
| 3 | DeepSeek-V3.1-Base | 93.7 |
| 4 | DeepSeek-R1-0528 | 93.4 |
| 5 | Qwen3-235B-A22B-Instruct-2507 | 93.1 |
| 6 | EXAONE 4.0 | 92.3 |
| 7 | o1 | 92.3 |
| 8 | o1-preview | 92.3 |
| 9 | o1-2024-12-17 | 91.8 |
| 10 | Pangu Ultra MoE | 91.5 |
GPQA
Performance on the GPQA benchmark (via lifearchitect.ai/models-table). Fetched 22/12/25.

| Rank | Model | Score |
|---|---|---|
| 1 | Gemini 3 | 93.8 |
| 2 | GPT-5.2 | 93.2 |
| 3 | Gemini 3 Flash | 90.4 |
| 4 | GPT-5 | 89.4 |
| 5 | Grok 4 | 88.9 |
| 6 | GPT-5.1 | 88.1 |
| 7 | o3-preview | 87.7 |
| 8 | Claude Opus 4.5 | 87 |
| 9 | Gemini 2.5 Pro 06-05 | 86.4 |
| 10 | DeepSeek-V3.2-Speciale | 85.7 |
Sources: livebench.ai (Reasoning, Programming), lifearchitect.ai/models-table (MMLU, GPQA), scale.com (HLE) | Fetched: 12/22/2025
AI Image Generation Models
The Fréchet Inception Distance (FID) score is a key metric for evaluating AI image generation quality, where lower scores indicate better performance. Below are comprehensive benchmarks across multiple metrics including CLIP Score, FID, F1, Precision, and Recall.
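Before the leaderboards, here is a minimal Python sketch of how FID is typically computed, assuming Inception-style feature vectors have already been extracted for a set of real and a set of generated images; the function and variable names are illustrative, not taken from the benchmark source.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """FID between two sets of image feature vectors (one row per image)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; drop tiny imaginary parts
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    # Fréchet distance between Gaussians fitted to the two feature sets
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * covmean))

# Toy call with random 64-dim "features" just to show the interface;
# real FID runs use 2048-dim Inception-v3 pool features over thousands of images.
rng = np.random.default_rng(0)
print(frechet_inception_distance(rng.normal(size=(256, 64)), rng.normal(size=(256, 64))))
```

Lower values mean the generated feature distribution sits closer to the real one, which is why the FID leaderboard below ranks smaller numbers higher.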
CLIP Score
Measures how closely a generated image matches its text prompt
| Rank | Model | Provider | CLIP Score |
|---|---|---|---|
| 1 | Photon | Luma Labs | 0.265 |
| 2 | Flux Pro | Black Forest Labs | 0.263 |
| 3 | Dall-E 3 | OpenAI | 0.259 |
| 4 | Nano Banana | Google Gemini | 0.258 |
| 5 | Runway Gen 4 | Runway AI | 0.251 |
| 6 | Ideogram V3 | Ideogram | 0.250 |
| 7 | Stability SD Turbo | Stability AI | 0.249 |
FID Score
Assesses how close AI-generated images are to real images (lower is better)
| Rank | Model | Provider | FID (lower is better) |
|---|---|---|---|
| 1 | Ideogram V3 | Ideogram | 305.600 |
| 2 | Dall-E 3 | OpenAI | 306.080 |
| 3 | Runway Gen 4 | Runway AI | 317.520 |
| 4 | Photon | Luma Labs | 318.550 |
| 5 | Flux Pro | Black Forest Labs | 318.630 |
| 6 | Nano Banana | Google Gemini | 318.800 |
| 7 | Stability SD Turbo | Stability AI | 321.750 |
F1 Score
Combines precision and recall to show overall image accuracy
| Rank | Model | Provider | F1 |
|---|---|---|---|
| 1 | Photon | Luma Labs | 0.463 |
| 2 | Stability SD Turbo | Stability AI | 0.447 |
| 3 | Runway Gen 4 | Runway AI | 0.445 |
| 4 | Flux Pro | Black Forest Labs | 0.421 |
| 5 | Ideogram V3 | Ideogram | 0.415 |
| 6 | Dall-E 3 | OpenAI | 0.380 |
| 7 | Nano Banana | Google Gemini | 0.351 |
Precision
Measures how many generated images came out correct vs. the total generated
| Rank | Model | Provider | Precision |
|---|---|---|---|
| 1 | Photon | Luma Labs | 0.448 |
| 2 | Stability SD Turbo | Stability AI | 0.432 |
| 3 | Runway Gen 4 | Runway AI | 0.423 |
| 4 | Flux Pro | Black Forest Labs | 0.406 |
| 5 | Ideogram V3 | Ideogram | 0.397 |
| 6 | Dall-E 3 | OpenAI | 0.358 |
| 7 | Nano Banana | Google Gemini | 0.339 |
Recall
Measures how many correct images the AI produced vs all possible correct images
| Rank | Model | Provider | Recall |
|---|---|---|---|
| 1 | Stability SD Turbo | Stability AI | 0.533 |
| 2 | Photon | Luma Labs | 0.532 |
| 3 | Runway Gen 4 | Runway AI | 0.522 |
| 4 | Ideogram V3 | Ideogram | 0.497 |
| 5 | Flux Pro | Black Forest Labs | 0.495 |
| 6 | Dall-E 3 | OpenAI | 0.477 |
| 7 | Nano Banana | Google Gemini | 0.415 |
Source: dreamlayer.io/research | Fetched: 12/9/2025
AI Model Specifications
| Model | Size (parameters) | Training Data (tokens) | AGI Level | Access |
|---|---|---|---|---|
| o1 | 200B | 20T | Level 3 | Access |
| o1-preview | 200B | 20T | Level 3 | Access |
| DeepSeek-R1 | 685B | 14.8T | Level 3 | Access |
| Claude 3.5 Sonnet (new) | 175B | 20T | Level 2 | Access |
| Gemini 2.0 Flash exp | 30B | 30T | Level 2 | Access |
| Claude 3.5 Sonnet | 70B | 15T | Level 2 | Access |
| Gemini-1.5-Pro-002 | 1500B | 30T | Level 2 | Access |
| MiniMax-Text-01 | 456B | 7.2T | Level 2 | Access |
| Grok-2 | 400B | 15T | Level 2 | Access |
| Llama 3.1 405B | 405B | 15.6T | Level 2 | Access |
| Sonus-1 Reasoning | 405B | 15T | Level 2 | Access |
| GPT-4o | 200B | 20T | Level 2 | Access |
| InternVL 2.5 | 78B | 18.12T | Level 2 | Access |
| Qwen2.5 | 72B | 18T | Level 2 | Access |
When you see "13B (Size) on 5.6T tokens (Training Data)", it means:
- 13B: 13 billion parameters (think of these as the AI's "brain cells")
- 5.6T: 5.6 trillion tokens of training data (each token ≈ 4 characters); a quick back-of-envelope conversion is sketched below
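As a rough, hedged illustration of what those two numbers imply, the sketch below converts them into storage and text volume; the 2 bytes/parameter and 4 characters/token figures are common rules of thumb, not specifications of any particular model.

```python
# Back-of-envelope reading of "13B parameters trained on 5.6T tokens"
params = 13e9            # 13 billion parameters
tokens = 5.6e12          # 5.6 trillion training tokens
chars_per_token = 4      # rule of thumb used in the list above
bytes_per_param = 2      # e.g. fp16/bf16 weights

weights_gb = params * bytes_per_param / 1e9   # size of the stored model weights
text_tb = tokens * chars_per_token / 1e12     # raw text volume at ~1 byte per character

print(f"Weights: ~{weights_gb:.0f} GB")       # ~26 GB
print(f"Training text: ~{text_tb:.0f} TB")    # ~22 TB
```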
The models in the table above are general-purpose frontier models representing recent state-of-the-art AI language technology.
Performance Milestones
As of Q1 2025, the theoretical performance ceilings were:
- GPQA: 74%
- MMLU: 90%
- HLE: 20%
These ceilings have since been notably surpassed:
- OpenAI's o3-preview achieved 87.7% on GPQA
- OpenAI's o1 model surpassed both the MMLU and GPQA ceilings[¹]
- Frontier models now score above 40% on HLE (see the leaderboard above), well past the earlier ~20% mark
Access & Details
For detailed information on each model, including:
- Technical specifications
- Use cases
- Access procedures
- Deployment guidelines
Please refer to our Models Access page.
Note: Performance metrics and rankings are based on publicly available data and may evolve as new models and evaluations emerge.
Understanding the Benchmarks
Text Model Benchmarks
- HLE (Humanity's Last Exam): Designed as the most difficult closed-ended academic exam for AI. Aims to rigorously test models at the frontier of human knowledge, as benchmarks like MMLU are becoming too easy (~90%+ scores for top models). Consists of 2,500 questions across >100 subjects contributed by ~1,000 experts. Top models initially scored around 20%; as of December 2025 the leaders reach roughly 40-50% (see the leaderboard above), though the gap to human expert level remains.
- MMLU-Pro: Advanced version of MMLU focusing on expert-level knowledge; widely regarded as a more reliable indicator of model capabilities than the original MMLU.
- MMLU: Tests knowledge across 57 subjects. It has a ~90% theoretical ceiling, reflecting a roughly 9% error rate in the question set (see the quick arithmetic after this list).
- GPQA: PhD-level science benchmark across biology, chemistry, physics, and astronomy, with a ~74% ceiling and a roughly 20% error rate. Notably, even domain scientists agree on only about 78% of the answers.
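One interpretive way to connect the ceilings to the stated error rates (our reading, not a claim from the benchmark authors): if the error rate counts flawed or mislabeled questions, a model that answers every sound question correctly tops out at roughly

$$\text{ceiling} \approx 100\% \times (1 - \text{error rate}), \qquad \text{e.g. MMLU: } 100\% \times (1 - 0.09) = 91\% \approx 90\%.$$

GPQA's lower ~74% ceiling appears to track measured expert accuracy and agreement rather than question error alone.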
Image Generation Benchmarks
- CLIP Score: Measures how closely a generated image matches its text prompt. Higher scores indicate better text-to-image alignment.
- FID Score: Assesses how close AI-generated images are to real images by comparing feature distributions. Lower scores are better.
- F1 Score: Combines precision and recall to show overall image generation accuracy, balancing false positives and false negatives (standard formulas for these metrics appear after this list).
- Precision: Measures how many AI-generated images came out correct compared to the total number of images generated.
- Recall: Measures how many of the correct images the AI was able to produce out of all the possible correct images it could have generated.
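For reference, the textbook definitions behind these metrics are given below; the benchmark source may use scaled or averaged variants, so treat these as the standard forms rather than the exact scoring code used for the leaderboards above.

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big), \qquad F_1 = \frac{2PR}{P + R}, \qquad \mathrm{CLIPScore}(I, T) \propto \max\big(\cos(E_I, E_T),\, 0\big)$$

Here $\mu_r, \Sigma_r$ and $\mu_g, \Sigma_g$ are the mean and covariance of image features for real and generated images, $P$ and $R$ are precision and recall, and $E_I$, $E_T$ are the CLIP image and text embeddings.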
For more details on specific image generation models like Nano Banana, see our dedicated model pages.
Mathematics Competition Benchmarks
AIME25, USAMO25, and HMMT25 are prestigious American high school mathematics competitions held in 2025.
AIME25 (American Invitational Mathematics Examination): An intermediate competition for students who excel on the AMC 10/12 exams. It features 15 complex problems with integer answers, and top scorers may advance to the USAMO.
USAMO25 (United States of America Mathematical Olympiad): The premier national math olympiad in the US. It is a highly selective, proof-based exam for the top performers from the AIME. The USAMO is a key step in selecting the U.S. team for the International Mathematical Olympiad (IMO).
HMMT25 (Harvard-MIT Mathematics Tournament): A challenging and popular competition run by students from Harvard and MIT. It occurs twice a year (February at MIT, December at Harvard) and includes a mix of individual and team-based rounds, attracting top students from around the world.
These competitions, along with others, have recently been used as benchmarks to test the capabilities of advanced AI models.
[1]: AI Research Community. "Language Model Leaderboard." Google Sheets, 2025. https://docs.google.com/spreadsheets/d/1kc262HZSMAWI6FVsh0zJwbB-ooYvzhCHaHcNUiA0_hY/
