Top AI Models July 2025

Comprehensive comparison of frontier AI models (July 2025): HLE, LiveBench (Reasoning and Programming), MMLU, and GPQA benchmark scores for leading models from OpenAI, Anthropic (Claude), Google (Gemini), xAI (Grok), and open-source LLMs. Updated performance rankings and capabilities assessment.

⚠️
Models that score well on standard benchmarks such as HLE can still perform poorly on other tests.

🏆 Model Leaderboards

HLE

  1. Grok 4: 44.4
  2. GPT-5: 42
  3. o3: 24.9
  4. Gemini 2.5 Pro 06-05: 21.6
  5. gpt-oss-120b: 19
  6. Gemini 2.5 Pro Preview: 18.8
  7. DeepSeek-R1-0528: 17.7
  8. gpt-oss-20b: 17.3
  9. Agentic-Tx: 14.5
  10. GLM-4.5: 14.4

Reasoning (LiveBench)

  1. grok-4-0709: 97.8
  2. claude-4-sonnet-20250514-thinking-64k: 95.3
  3. o3-2025-04-16-high: 94.7
  4. o3-pro-2025-06-10-high: 94.7
  5. gemini-2.5-pro-preview-06-05-highthinking: 94.3
  6. gemini-2.5-pro-preview-06-05-default: 93.7
  7. claude-4-1-opus-20250805-thinking-32k: 93.2
  8. qwen3-235b-a22b-thinking-2507: 91.6
  9. deepseek-r1-0528: 91.1
  10. o3-2025-04-16-medium: 91

Programming (LiveBench)

  1. o3-2025-04-16-high: 40.8
  2. o4-mini-2025-04-16-high: 40.8
  3. chatgpt-4o-latest-2025-03-27: 39.4
  4. o4-mini-2025-04-16-medium: 39.4
  5. o3-2025-04-16-medium: 38.7
  6. o3-pro-2025-06-10-high: 38.7
  7. grok-4-0709: 38.7
  8. claude-3-5-sonnet-20241022: 38
  9. claude-4-sonnet-20250514-base: 38
  10. deepseek-r1: 38

MMLU

  1. Qwen3-235B-A22B-Thinking-2507: 93.8
  2. DeepSeek-R1-0528: 93.4
  3. Qwen3-235B-A22B-Instruct-2507: 93.1
  4. EXAONE 4.0: 92.3
  5. o1: 92.3
  6. o1-preview: 92.3
  7. o1-2024-12-17: 91.8
  8. Pangu Ultra MoE: 91.5
  9. o3: 91.2
  10. Cogito 70B: 91

GPQA

  1. GPT-5: 89.4
  2. Grok 4: 88.9
  3. o3-preview: 87.7
  4. Gemini 2.5 Pro 06-05: 86.4
  5. Claude 3.7 Sonnet: 84.8
  6. Grok-3: 84.6
  7. Gemini 2.5 Pro Preview: 84
  8. Claude Opus 4: 83.3
  9. o3: 83.3
  10. o4-mini: 81.4

Sources: livebench.ai (Reasoning, Programming), lifearchitect.ai/models-table (MMLU, GPQA), scale.com (HLE) | Fetched: 8/7/2025

Note: state-of-the-art models now reach 72.5% on SWE-bench and 43.2% on Terminal-bench. Full benchmark details coming soon.

AI Model Specifications

Model | Size | Training Data | AGI Level
o1 | 200B | 20T | Level 3
o1-preview | 200B | 20T | Level 3
DeepSeek-R1 | 685B | 14.8T | Level 3
Claude 3.5 Sonnet (new) | 175B | 20T | Level 2
Gemini 2.0 Flash exp | 30B | 30T | Level 2
Claude 3.5 Sonnet | 70B | 15T | Level 2
Gemini-1.5-Pro-002 | 1500B | 30T | Level 2
MiniMax-Text-01 | 456B | 7.2T | Level 2
Grok-2 | 400B | 15T | Level 2
Llama 3.1 405B | 405B | 15.6T | Level 2
Sonus-1 Reasoning | 405B | 15T | Level 2
GPT-4o | 200B | 20T | Level 2
InternVL 2.5 | 78B | 18.12T | Level 2
Qwen2.5 | 72B | 18T | Level 2

When you see "13B (Size) on 5.6T tokens (Training Data)", it means:

  • 13B: 13 billion parameters (think of these as the AI's "brain cells")
  • 5.6T: 5.6 trillion tokens of training data (each token ≈ 4 characters; see the rough conversion sketch below)
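
As a back-of-the-envelope illustration, the sketch below converts a listed size and token count into more tangible quantities. The ≈4 characters per token figure is the rough approximation quoted above, not an exact constant, and the "13B on 5.6T tokens" example is the hypothetical reading from the text, not a specific model from the table.

```python
# Rough conversion for a "size on tokens" listing.
# Assumes ~4 characters per token (a common approximation, not exact).

def describe_model(name: str, params_billion: float, tokens_trillion: float) -> str:
    chars = tokens_trillion * 1e12 * 4                                  # ~4 chars per token
    tokens_per_param = (tokens_trillion * 1e12) / (params_billion * 1e9)
    return (f"{name}: {params_billion:.0f}B parameters trained on "
            f"{tokens_trillion}T tokens (~{chars:.2e} characters, "
            f"~{tokens_per_param:.0f} tokens per parameter)")

# The hypothetical "13B on 5.6T tokens" example from the text above.
print(describe_model("Example 13B model", 13, 5.6))
```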

📊 Understanding the Benchmarks

  • HLE (Humanity's Last Exam): Designed as the most difficult closed-ended academic exam for AI. It aims to rigorously test models at the frontier of human knowledge, since benchmarks like MMLU have become too easy (top models score ~90%+). Consists of 2,500 questions across more than 100 subjects, contributed by roughly 1,000 experts. Top models initially scored around 20%; the strongest reasoning models have since climbed well past that (see Grok 4 at 44.4 above), though the gap to human expert level remains large.
  • MMLU-Pro: Advanced version of MMLU focusing on expert-level knowledge. Currently considered the most reliable indicator of model capabilities.
  • MMLU: Tests knowledge across 57 subjects; its practical ceiling is estimated at ~90%, reflecting a roughly 9% error rate in the questions themselves.
  • GPQA: PhD-level science benchmark across biology, chemistry, physics, and astronomy; its estimated ceiling is ~74%, with a ~20% question error rate. Notably, even domain experts agree on only ~78% of answers. (A rough ceiling-normalization sketch follows this list.)
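
One informal way to compare scores across benchmarks with different ceilings is to divide a raw score by the benchmark's estimated ceiling. This is a minimal sketch, not an official metric from any benchmark author; the ceilings and scores are the figures quoted on this page, and ratios above 1.0 simply mean the quoted ceiling has been exceeded, as the Performance Milestones section below notes.

```python
# Informal ceiling-normalized comparison (not an official benchmark metric).
# Ceilings and scores are the figures quoted elsewhere on this page.

CEILINGS = {"GPQA": 74.0, "MMLU": 90.0, "HLE": 20.0}   # estimated ceilings (%)

scores = [
    ("GPT-5", "GPQA", 89.4),
    ("Qwen3-235B-A22B-Thinking-2507", "MMLU", 93.8),
    ("Grok 4", "HLE", 44.4),
]

for model, benchmark, score in scores:
    ratio = score / CEILINGS[benchmark]
    print(f"{model} on {benchmark}: {score}% -> {ratio:.2f}x the quoted ceiling")
```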

These models represent the current state-of-the-art in AI language technology (General Purpose Frontier Models).

Performance Milestones

As of Q1 2025, the theoretical performance ceilings were:

  • GPQA: 74%
  • MMLU: 90%
  • HLE: 20%

These ceilings have since been notably surpassed:

  • OpenAI's o3-preview reached 87.7% on GPQA (vs. the 74% estimate)
  • OpenAI's o1 exceeded both the MMLU and GPQA ceilings in Q1 2025[¹]
  • Grok 4 (44.4) and GPT-5 (42) now score more than double the 20% HLE figure

Access & Details

For detailed information on each model, including:

  • Technical specifications
  • Use cases
  • Access procedures
  • Deployment guidelines

Please refer to our Models Access page.

Note: Performance metrics and rankings are based on publicly available data and may evolve as new models and evaluations emerge.

Mathematics Competition Benchmarks

AIME25, USAMO25, and HMMT25 refer to the 2025 editions of prestigious American high-school mathematics competitions.

AIME25 (American Invitational Mathematics Examination): An intermediate competition for students who excel on the AMC 10/12 exams. It features 15 complex problems with integer answers, and top scorers may advance to the USAMO.

USAMO25 (United States of America Mathematical Olympiad): The premier national math olympiad in the US. It is a highly selective, proof-based exam for the top performers from the AIME. The USAMO is a key step in selecting the U.S. team for the International Mathematical Olympiad (IMO).

HMMT25 (Harvard-MIT Mathematics Tournament): A challenging and popular competition run by students from Harvard and MIT. It occurs twice a year (February at MIT, November at Harvard) and includes a mix of individual and team-based rounds, attracting top students from around the world.

These competitions, along with others, have recently been used as benchmarks to test the capabilities of advanced AI models.


[1]: AI Research Community. "Language Model Leaderboard." Google Sheets, 2025. https://docs.google.com/spreadsheets/d/1kc262HZSMAWI6FVsh0zJwbB-ooYvzhCHaHcNUiA0_hY/
