Steel.dev®

AI Agent Benchmark Registry

Explore an AI agent eval registry and benchmark leaderboard covering web navigation, coding, desktop control, tool use, deep research, and general reasoning. Compare evaluation suites, tests, frameworks, tasks, evaluators, top scores, and benchmark scope in one place.

How to read this registry

Compare results only when task scope and evaluation method are reasonably comparable. Reproducible suites like WebArena are easier to rerun, while live-web evals like WebVoyager better capture production drift. Start with the category routes for web navigation, coding, and tool use before comparing leaderboard numbers across very different evaluation suites. If you want a single place to browse reported scores across many benchmarks, jump to the Benchmark Index.

REGISTRY
WebVoyager
Web navigation benchmark - Public

643 tasks across 15 live public websites, evaluated by a GPT-4V judge. The most widely adopted web agent benchmark — de facto standard for comparing commercial and research agents.

Top Model Score
97.1%
Surfer 2
Human Score
~90%
WebArena
Web navigation benchmark - Self-hosted

812 tasks across self-hosted Docker environments: e-commerce, CMS, GitLab, forum, and map. Programmatic evaluation — no LLM judge. Gold standard for reproducible, verifiable web agent evaluation.

Top Model Score
71.6%
OpAgent
Human Score
~78%
VisualWebArena
Web navigation benchmark - Self-hosted

910 tasks requiring visual reasoning across classifieds, shopping, and Reddit environments. Sister benchmark to WebArena — tests agents that rely on screenshots rather than HTML/DOM.

Top Model Score
~38%
Aguvis
Human Score
~88%
Online-Mind2Web
Web navigation benchmark - Public

300 verified tasks across 136 live websites. Independently verified by Princeton HAL, which tracks cost alongside accuracy for a Pareto-frontier view of performance vs. cost.

Top Model Score
42.33%
SeeAct + GPT-5
Human Score
N/A
BrowserGym
Web navigation benchmark - Self-hosted
Benchmark By ServiceNow

Unified gym environment for web tasks, aggregating WebArena, WorkArena, and other benchmarks under a single interface. Enables standardized agent development and cross-benchmark comparison.

Top Model Score
~55%
GPT-4o (BrowserGym)
Human Score
N/A
AssistantBench
Web navigation benchmark - Public

214 realistic, time-consuming tasks sourced from 525+ pages across 258 websites. Designed to test agents that must retrieve, synthesize, and reason — not just navigate.

Top Model Score
25.2%
SPA
Human Score
~70%
Web navigation benchmark - Public
Benchmark By Google DeepMind

Benchmark of tedious, multi-step web chores requiring persistent state tracking and real-world interaction. Designed to test agents on tasks humans find repetitive and boring.

Top Model Score
54.8%
Gemini 2.5 Pro
Human Score
N/A
WorkArena
Web navigation benchmark - Self-hosted
Benchmark By ServiceNow

ServiceNow-based enterprise workflow benchmark. Tests agents on realistic IT, HR, and operations tasks inside a real enterprise SaaS environment via BrowserGym.

Top Model Score
~42%
GPT-4o
Human Score
~78%
WebShop
Web navigation benchmark - Self-hosted
Benchmark By Princeton

1.18M real Amazon products in a simulated e-commerce environment. Agents must find and purchase specific products matching user instructions; reward is based on product attribute matching.

Top Model Score
~75% reward
WebAgent
Human Score
82.1%
WebBench
Web navigation benchmark - Public
Benchmark By Halluminate / Skyvern

2,454 tasks across 452 live websites from the global top-1,000 by traffic. Direct spiritual successor to WebVoyager with much broader website coverage. Released May 2025 by Halluminate + Skyvern.

Top Model Score
N/A
Skyvern 2.0
Human Score
N/A
BrowseComp
Deep research benchmark - Public
Benchmark By OpenAI

1,266 hard research questions whose answers are easy to verify but extremely hard to find. Tests persistent multi-step web browsing and information synthesis. Scores are low across the board.

Top Model Score
60.2%
Kimi K2 Thinking
Human Score
29.2%
Deep research benchmark - Public

Multimodal search benchmark testing agents on complex queries requiring both visual and textual web search. Evaluates image-grounded research across live search engines.

Top Model Score
~58%
Gemini 1.5 Pro
Human Score
N/A
OSWorld
Desktop control benchmark - Self-hosted
Benchmark By xLang AI

369 cross-application desktop tasks across Ubuntu, Windows, and macOS. Covers Chrome, LibreOffice, VS Code, and more. Execution-based evaluation. Agents still well below the human baseline of 72%.

Top Model Score
66.2%
AskUI VisionAgent
Human Score
72.4%
Desktop control benchmark - Self-hosted
Benchmark By AgentSea

Cross-platform desktop benchmark covering macOS, Windows, and Ubuntu with 2,000+ tasks. Focuses on real-world app interactions and long-horizon task completion.

Top Model Score
~40%
Claude Computer Use
Human Score
N/A
Desktop control benchmark - Self-hosted
Benchmark By Show Lab

macOS-specific benchmark with 369 tasks spanning system preferences, Finder, Safari, and productivity apps. Complements OSWorld with platform-specific depth.

Top Model Score
~35%
Claude 3.5 Sonnet
Human Score
~72%
Windows Agent Arena
Desktop control benchmark - Self-hosted
Benchmark By Microsoft

154 tasks across real Windows 11 applications running in Azure VMs. Tests document editing, file management, system settings, and browser tasks. Full reproducibility via cloud snapshots.

Top Model Score
19.5%
NAVI
Human Score
74.5%
AndroidWorld
Desktop control benchmark - Self-hosted
Benchmark By Google DeepMind

116 tasks across 20 real Android apps in a live emulated environment. Functional evaluation without cached states. Tests agents on real apps including Gmail, Chrome, and Settings.

Top Model Score
~30%
M3A (Gemini)
Human Score
~88%
Mobile-Env
Desktop control benchmark - Self-hosted
Benchmark By X-LANCE

A gym environment for mobile UI interaction built on the Android emulator. Provides step-level rewards for fine-grained evaluation of touch-based agent interaction.

Top Model Score
N/A
AppAgent
Human Score
N/A
SWE-bench Verified
Coding agent benchmark - Public
Benchmark By Princeton

500 human-verified GitHub issues from real-world Python repos, expert-curated to remove ambiguous or unreliable tasks. The most trusted coding agent benchmark: success means resolving a real bug in a real repo.

Top Model Score
~72%
OpenAI o3
Human Score
~94%
SWE-bench Lite
Coding agent benchmark - Public
Benchmark By Princeton

300-task curated subset of SWE-bench focusing on self-contained issues. Designed for faster, cheaper evaluation while remaining representative of the full benchmark.

Top Model Score
~55%
OpenAI o3
Human Score
N/A
Terminal-Bench
Coding agent benchmark - Public
Benchmark By Harbor

Purely terminal-based coding and system tasks with no GUI. Tests command-line proficiency across bash, Python, and system administration. Harder and more realistic than sandbox coding benchmarks.

Top Model Score
~45%
Claude 3.7 Sonnet
Human Score
N/A
MLE-bench
Coding agent benchmark - Public
Benchmark By OpenAI

75 Kaggle competitions used to evaluate ML engineering agents. Agents must write, run, and iterate on ML pipelines to achieve competitive leaderboard scores.

Top Model Score
~17% medals
AIDE (Claude 3.5)
Human Score
N/A
SciCode
Coding agent benchmark - Public
Benchmark By SciCode

338 scientific coding subproblems from 80 research problems across math, physics, chemistry, biology, and materials science. Tests research-grade code generation against expert-written tests.

Top Model Score
~26%
Claude 3.5 Sonnet
Human Score
~81%
CVE-Bench
Coding agent benchmark - Self-hosted
Benchmark By UIUC

Evaluates agents on real-world cybersecurity vulnerability exploitation. Agents are scored on successfully exploiting CVEs from public vulnerability databases in sandboxed environments.

Top Model Score
~47%
o3 (high)
Human Score
N/A
HumanEval+
Coding agent benchmark - Public
Benchmark By EvalPlus

Enhanced version of OpenAI's HumanEval with 80x more test cases per problem to reduce false positives. Tests Python code generation against significantly stricter test coverage.

Top Model Score
~99%
o3 / Claude 3.7
Human Score
N/A
Aider Code Editing Benchmark
Coding agent benchmark - Public
Benchmark By Aider

LLM code editing benchmark using real open-source repos. Measures ability to apply targeted code changes from natural language instructions without breaking existing tests.

Top Model Score
~79%
o3
Human Score
N/A
InterCode
Coding agent benchmark - Self-hosted
Benchmark By Princeton

Interactive coding benchmark using bash and SQL environments. Agents iteratively execute code and receive environment feedback, testing multi-turn code generation and debugging.

Top Model Score
~60%
GPT-4 + ReAct
Human Score
N/A
RepoBench
Coding agent benchmark - Public

Repository-level code completion benchmark. Tests retrieval and generation across entire codebases — agents must understand cross-file context to complete functions correctly.

Top Model Score
~55%
GPT-4
Human Score
N/A
ToolBench
Tool use benchmark - Public
Benchmark By OpenBMB

16,000+ real-world APIs from RapidAPI across 49 categories. Tests agents on planning and chaining API calls to complete complex instructions. Includes a neural retriever for API selection.

Top Model Score
~60% pass rate
GPT-4 (ToolLLaMA)
Human Score
N/A
τ-bench
Tool use benchmark - Public
Benchmark By Sierra

Agent-computer interaction benchmark focused on realistic customer service scenarios. Agents must complete multi-turn tasks using tools (database lookups, reservations) while following strict policies.

Top Model Score
~60%
Claude 3.5 Sonnet
Human Score
N/A
API-Bank
Tool use benchmark - Public
Benchmark By Alibaba DAMO

73 API tools across 3 difficulty levels testing tool retrieval, plan selection, and API call correctness. One of the earliest systematic tool-use benchmarks for LLMs.

Top Model Score
~75%
GPT-4
Human Score
N/A
Gorilla APIBench
Tool use benchmark - Public
Benchmark By UC Berkeley

1,645 API tasks across HuggingFace, TorchHub, and TensorHub. Evaluates whether agents generate accurate API calls, including correct arguments and library usage, without hallucination.

Top Model Score
~80%
Gorilla (fine-tuned)
Human Score
N/A
ToolSandbox
Tool use benchmark - Self-hosted
Benchmark By Apple

Stateful tool-use benchmark with interdependencies between tool calls. Agents must manage tool state across multi-step tasks — calling one tool affects what another returns.

Top Model Score
~52%
GPT-4o
Human Score
N/A
Tool use benchmark - Public
Benchmark By Scale AI

Evaluation suite for agents using Model Context Protocol servers. Tests correctness of MCP tool invocation, schema understanding, and multi-server orchestration.

Top Model Score
N/A
Claude 3.5 Sonnet
Human Score
N/A
GAIA
General reasoning benchmark - Public

466 tasks across 3 difficulty levels requiring tool use, multimodal reasoning, and web browsing. With 587 submissions on Hugging Face, it is the most-submitted AI agent benchmark in existence.

Top Model Score
~75%
Manus / h2oGPTe
Human Score
92%
AgentBench
General reasoning benchmark - Self-hosted
Benchmark By THUDM

8 distinct environments spanning web browsing, OS, database, and game interaction. Tests agents across diverse real-world-like scenarios in a single unified framework.

Top Model Score
~4.27 score
GPT-4
Human Score
N/A
Humanity's Last Exam
General reasoning benchmark - Public

3,000 expert-level questions across 100+ academic disciplines, crowd-sourced from domain experts. Designed to be at or beyond the frontier of human knowledge — the hardest factual benchmark yet.

Top Model Score
~26%
o3 (high)
Human Score
N/A
ARC-AGI-2
General reasoning benchmark - Public
Benchmark By ARC Prize

The second generation of François Chollet's Abstraction and Reasoning Corpus. Novel visual pattern tasks designed to resist memorization — requires genuine program synthesis from examples.

Top Model Score
~4%
o3 (high)
Human Score
~60%
GPQA
General reasoning benchmark - Public

448 expert-level multiple-choice questions in biology, physics, and chemistry — written and validated by domain PhDs. Only experts in the relevant field consistently score above random.

Top Model Score
~87%
o3
Human Score
~69% (experts)
LiveBench
General reasoning benchmark - Public
Benchmark By LiveBench

Monthly-refreshed benchmark with questions sourced from recent news, papers, and competition math. Designed to prevent data contamination — the benchmark evolves so models can't memorize answers.

Top Model Score
~80%
o3 / Claude 3.7
Human Score
N/A
SimpleQA
General reasoning benchmark - Public
Benchmark By OpenAI

4,326 short factual questions with a single unambiguous correct answer. Measures factual accuracy and hallucination rate — designed to have no trick questions, only clear facts.

Top Model Score
~97%
o3
Human Score
~94%
AgentBoard
General reasoning benchmark - Self-hosted
Benchmark By HKUST

Analytical benchmark across 9 diverse agent scenarios. Provides fine-grained progress rates beyond binary success/fail — measures how far along a task an agent gets even when it fails.

Top Model Score
~58% progress
GPT-4
Human Score
N/A
General reasoning benchmark - Public

Long-horizon agent benchmark requiring sustained reasoning and planning over 50+ steps. Tests whether agents can maintain coherent goals across very long task horizons without losing context.

Top Model Score
~35%
Claude 3.5 Sonnet
Human Score
N/A
AppWorld
General reasoning benchmark - Self-hosted
Benchmark By Stony Brook

An ecosystem of 9 simulated apps with 750 tasks spanning contacts, music, email, maps, and calendar. Tests agents on realistic app-based workflows requiring coordination across multiple apps.

Top Model Score
~49%
GPT-4o
Human Score
N/A
SOTOPIA
Specialized agent benchmark - Public
Benchmark By Sotopia Lab

Social intelligence benchmark placing agents in realistic social scenarios. Evaluates believability, social goal completion, relationship management, and secret keeping across 11 social dimensions.

Top Model Score
~7.6/10
GPT-4
Human Score
~8.3/10
AgentHarm
Specialized agent benchmark - Public
Benchmark By AIEvals

Safety red-teaming benchmark with 440 harmful agent tasks across 11 categories. Tests whether agent frameworks allow harmful behaviors — jailbreaking, weapon synthesis, fraud, and more.

Top Model Score
N/A
N/A (safety eval)
Human Score
N/A
MedAgentBench
Specialized agent benchmark - Public
Benchmark By Stanford

300 clinical tasks across 10 medical categories using real EHR data. Tests agents on diagnosis reasoning, treatment planning, and medical record navigation in realistic hospital environments.

Top Model Score
~77%
o1-preview
Human Score
N/A
Specialized agent benchmark - Public
Benchmark By UIUC

Evaluates both cooperative and competitive multi-agent systems. Tasks include collaborative problem-solving and adversarial games — measures emergent coordination and strategic behavior.

Top Model Score
N/A
GPT-4 (multi-agent)
Human Score
N/A
Specialized agent benchmark - Self-hosted

Cybersecurity benchmark testing agents on capture-the-flag style challenges. Covers reverse engineering, web exploitation, and cryptography. Designed to stress-test autonomous offensive security agents.

Top Model Score
~35%
o3
Human Score
N/A
Vending-Bench
Specialized agent benchmark - Public

Transaction and inventory reasoning benchmark. Agents manage a virtual vending machine over many turns — testing whether models understand real-world economics, stock levels, and pricing logic.

Top Model Score
~62%
Claude 3.5 Sonnet
Human Score
N/A
Specialized agent benchmark - Public

Role-playing and character consistency benchmark. Evaluates agents on maintaining persona fidelity, character knowledge accuracy, and in-character behavior across long conversations.

Top Model Score
~75%
GPT-4
Human Score
N/A
Specialized agent benchmark - Public
Benchmark By Peter Gostev

Tests model resistance to confidently stated falsehoods in prompts. Evaluates whether agents can identify and reject plausible-sounding but incorrect premises before acting on them.

Top Model Score
N/A
o3
Human Score
N/A

MISSING A BENCHMARK? OPEN A PR ON GITHUB TO ADD IT TO THE REGISTRY.

What is an AI agent benchmark?

An AI agent benchmark, eval, or evaluation suite is a structured way to test how well an agent completes tasks in an environment, not just how well a model writes a plausible answer. Instead of grading one response, these tests look at sequences of actions across websites, codebases, tools, desktops, or research workflows. In practice, they measure whether the system can make progress, stay grounded, and reach the correct end state.

That is the main difference between an agent benchmark and a standard LLM eval. A classic LLM test asks whether the model produced the right answer to a prompt. An agent evaluation asks whether the system can plan, recover from mistakes, use the right tools, and complete a workflow under realistic constraints. Strong benchmark leaderboards often track not only accuracy, but also task success, reliability, latency, and cost.
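
To make that concrete, here is a minimal sketch of the kind of record such a harness might keep per task. The agent.run, task.id, trajectory.actions, and check_end_state names are illustrative assumptions, not any specific benchmark's API:

import time
from dataclasses import dataclass

@dataclass
class TaskResult:
    task_id: str
    success: bool     # did the agent reach the correct end state?
    steps: int        # number of actions in the episode
    latency_s: float  # wall-clock time for the whole episode
    cost_usd: float   # summed API spend for the episode

def run_eval(agent, tasks, check_end_state):
    """Run an agent over a task suite, recording more than raw accuracy."""
    results = []
    for task in tasks:
        start = time.time()
        trajectory = agent.run(task)  # the agent acts in its environment
        results.append(TaskResult(
            task_id=task.id,
            success=check_end_state(task, trajectory),  # evaluator, not string match
            steps=len(trajectory.actions),
            latency_s=time.time() - start,
            cost_usd=trajectory.cost_usd,
        ))
    success_rate = sum(r.success for r in results) / len(results)
    return success_rate, results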

Common methods include exact-match grading, executable test suites, environment-state checks, human review, and LLM-as-judge scoring for open-ended work. Each has tradeoffs in rigor, scalability, and realism. Self-hosted suites are easier to rerun and compare over time, while public-web or live-software evaluations better reflect drift and production messiness. The best way to evaluate AI agents is usually to combine both.
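
The tradeoffs between these methods are easy to see in code. Below is a rough sketch of three common evaluator styles; env.snapshot() and judge_model.complete() are hypothetical stand-ins rather than real benchmark APIs:

def exact_match(expected: str, answer: str) -> bool:
    # Cheap and strict: fine for single-answer factual questions,
    # brittle for open-ended agent tasks.
    return answer.strip().lower() == expected.strip().lower()

def state_check(env, predicate) -> bool:
    # WebArena-style programmatic check: inspect the environment's final
    # state (did the order actually get placed?) instead of the reply text.
    return predicate(env.snapshot())

def llm_judge(judge_model, task: str, transcript: str) -> bool:
    # LLM-as-judge for open-ended work, as in WebVoyager's GPT-4V judging.
    # Scalable but noisy: rubric wording and judge choice can shift scores.
    verdict = judge_model.complete(
        f"Task: {task}\nTranscript: {transcript}\n"
        "Did the agent fully complete the task? Answer YES or NO."
    )
    return verdict.strip().upper().startswith("YES")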

FAQ
What are AI agent benchmarks?
AI agent benchmarks are evaluations that measure whether an agent can complete multi-step tasks in an environment such as a browser, terminal, desktop, or tool stack. Unlike single-prompt model tests, they focus on action quality, task completion, recovery from mistakes, and end-to-end execution.
What is an agent eval registry?
An agent eval registry is a curated index of AI agent benchmarks, evaluations, leaderboards, test suites, and frameworks. Instead of covering just one benchmark family, it helps you compare multiple evaluation options across web navigation, coding, desktop control, and tool use in one place.
How do you evaluate AI agents?
You evaluate AI agents by testing them on multi-step tasks in realistic environments and measuring whether they reach the correct end state. Strong agent evaluations usually track task success, evaluator design, reliability, cost, latency, and recovery from mistakes. The right eval framework depends on whether you care about browser use, coding, tool use, desktop control, or general reasoning.
What is the best benchmark for coding agents?
There is no single best benchmark for every use case, but SWE-bench Verified is widely treated as the most trusted benchmark for coding agents because it uses real repository issues and executable test suites. Terminal-Bench is also useful when you want to evaluate autonomous agents in command-line and systems workflows.
How do browser agent benchmarks differ from coding benchmarks?
Browser agent benchmarks evaluate interaction with websites, page state, navigation, and visual or DOM-grounded actions. Coding benchmarks evaluate repository understanding, file edits, tool use, debugging, and test execution. Compare WebVoyager with SWE-bench Verified to see how different the environments and failure modes are.
What does self-hosted mean in an agent benchmark?
Self-hosted means the benchmark environment can be run in a controlled local or containerized setup instead of depending entirely on live public services. That usually improves reproducibility and evaluation stability, but may be less representative of the messy real web or production software. Benchmarks like WebArena and OSWorld are good examples.
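
As a rough illustration of why that helps reproducibility, a self-hosted harness typically resets a containerized environment between runs so every agent starts from identical state. This sketch uses the docker Python SDK (docker-py), with a made-up image name standing in for whatever a given suite ships:

import docker  # docker-py client; assumes a local Docker daemon is running

IMAGE = "example/shopping-env:latest"  # hypothetical benchmark environment image

client = docker.from_env()

def fresh_environment():
    # Starting a clean container per run is what makes self-hosted suites
    # reproducible: every agent sees the same initial database and pages.
    return client.containers.run(IMAGE, detach=True, ports={"80/tcp": 8080})

container = fresh_environment()
try:
    pass  # point the agent at http://localhost:8080 and run the episode here
finally:
    container.stop()
    container.remove()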
Why do benchmark scores differ across evaluators?
Benchmark scores differ because evaluators measure success in different ways. Some use exact match, some use executable test suites, some verify environment state, and others rely on human review or LLM judges. A score on one benchmark or evaluator is not directly comparable to a score produced by a different evaluation method. See the How to read this registry note before comparing results.