
AI Agent Benchmark Registry


Explore an AI agent eval registry and benchmark leaderboard covering web navigation, coding, desktop control, tool use, deep research, and general reasoning. Compare evaluation suites, tests, frameworks, tasks, evaluators, top scores, and benchmark scope in one place.

How to read this registry

Compare results only when task scope and evaluation method are reasonably comparable. Reproducible suites like WebArena are easier to rerun, while live-web evals like WebVoyager better capture production drift. Start with the category routes for web navigation, coding, and tool use before comparing leaderboard numbers across very different evaluation suites. If you want a single place to browse reported scores across many benchmarks, jump to the Benchmark Index.

REGISTRY

Tool-use benchmarks

Tool-use benchmarks focus on whether an agent can pick the right tool, call it correctly, interpret the result, and keep going when the workflow spans several steps. That matters in production because most useful agents do not win through raw text reasoning alone; they search, execute code, inspect files, query APIs, and chain actions. Good tool-use evals make it clear whether failure came from planning, parsing, or state management. If you want a broad reference point, start with ToolBench and Tau-bench. If the agent also needs to reason deeply between tool calls, compare the results against the general reasoning benchmarks.
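To make those failure modes concrete, here is a minimal sketch of the loop most tool-use suites run. Every name in it (ToolEnv, run_agent, evaluate) is a hypothetical placeholder, not any benchmark's real API: the harness hands the agent a task, exposes mock tools, and grades the final environment state rather than the agent's wording.

# Hypothetical tool-use eval loop; illustrative only, not a specific benchmark's harness.
from dataclasses import dataclass, field

@dataclass
class ToolEnv:
    """Mock environment whose state the agent mutates through tool calls."""
    reservations: dict = field(default_factory=dict)

    def lookup_user(self, user_id: str) -> dict:
        return {"user_id": user_id, "tier": "gold"}

    def book_flight(self, user_id: str, flight: str) -> str:
        self.reservations[user_id] = flight  # a tool call that changes state
        return f"booked {flight} for {user_id}"

def run_agent(env: ToolEnv, task: dict) -> None:
    """Stand-in for a real agent: pick a tool, read the result, chain the next call."""
    user = env.lookup_user(task["user_id"])               # step 1: gather context
    if user["tier"] == "gold":                            # step 2: plan on the result
        env.book_flight(task["user_id"], task["flight"])  # step 3: act

def evaluate(task: dict, expected_state: dict) -> bool:
    """Grade by final environment state, not by the agent's text output."""
    env = ToolEnv()
    run_agent(env, task)
    return env.reservations == expected_state

task = {"user_id": "u42", "flight": "SF-NYC-101"}
print(evaluate(task, {"u42": "SF-NYC-101"}))  # True counts as a task pass

A real harness adds tool schemas, retries, and per-step logging, but the grading idea is the same: success is defined by the state the agent leaves behind, which is why a failure can usually be traced back to planning, parsing, or state management.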
ToolBench
Tool use benchmark - Public
Benchmark By OpenBMB

16,000+ real-world APIs from RapidAPI across 49 categories. Tests agents on planning and chaining API calls to complete complex instructions. Includes a neural retriever for API selection.

Top Model Score
~60% pass rate
GPT-4 (ToolLLaMA)
Human Score
N/A
Tau-bench
Tool use benchmark - Public
Benchmark By Sierra

Agent-computer interaction benchmark focused on realistic customer service scenarios. Agents must complete multi-turn tasks using tools (database lookups, reservations) while following strict policies.

Top Model Score
~60%
Claude 3.5 Sonnet
Human Score
N/A
API-Bank
Tool use benchmark - Public
Benchmark By Alibaba DAMO

73 API tools across 3 difficulty levels testing tool retrieval, plan selection, and API call correctness. One of the earliest systematic tool-use benchmarks for LLMs.

Top Model Score
~75%
GPT-4
Human Score
N/A
Gorilla APIBench
Tool use benchmark - Public
Benchmark By UC Berkeley

1,645 API tasks across HuggingFace, TorchHub, and TensorHub. Evaluates if agents generate accurate API calls including correct arguments and library usage without hallucination.

Top Model Score
~80%
Gorilla (fine-tuned)
Human Score
N/A
ToolSandbox
Tool use benchmark - Self-hosted
Benchmark By Apple

Stateful tool-use benchmark with interdependencies between tool calls. Agents must manage tool state across multi-step tasks, where calling one tool affects what another returns; see the toy sketch after this entry.

Top Model Score
~52%
GPT-4o
Human Score
N/A
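That state interdependence can be pictured with a toy example. The names and behavior below are invented for illustration and are not ToolSandbox's actual tools:

# Toy illustration of stateful tool interdependence (hypothetical names only).
class PhoneSandbox:
    def __init__(self):
        self.cellular_on = False

    def toggle_cellular(self, on: bool) -> str:
        self.cellular_on = on
        return f"cellular set to {on}"

    def send_message(self, to: str, body: str) -> str:
        # This tool's result depends on state set by an earlier tool call.
        if not self.cellular_on:
            return "error: no connection"
        return f"sent '{body}' to {to}"

sandbox = PhoneSandbox()
print(sandbox.send_message("Alice", "hi"))   # fails: cellular is still off
sandbox.toggle_cellular(True)                # prerequisite call fixes the state
print(sandbox.send_message("Alice", "hi"))   # now succeeds

An evaluator for this style of benchmark checks milestone states (cellular on, message delivered) rather than any single tool response.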
Tool use benchmark - Public
Benchmark By Scale AI

Evaluation suite for agents using Model Context Protocol servers. Tests correctness of MCP tool invocation, schema understanding, and multi-server orchestration.

Top Model Score
N/A
Claude 3.5 Sonnet
Human Score
N/A

MISSING A BENCHMARK? OPEN A PR ON GITHUB TO ADD IT TO THE REGISTRY.

What is an AI agent benchmark?

An AI agent benchmark, eval, or evaluation suite is a structured way to test how well an agent completes tasks in an environment, not just how well a model writes a plausible answer. Instead of grading one response, these tests look at sequences of actions across websites, codebases, tools, desktops, or research workflows. In practice, they measure whether the system can make progress, stay grounded, and reach the correct end state.

That is the main difference between an agent benchmark and a standard LLM eval. A classic LLM test asks whether the model produced the right answer to a prompt. An agent evaluation asks whether the system can plan, recover from mistakes, use the right tools, and complete a workflow under realistic constraints. Strong benchmark leaderboards often track not only accuracy, but also task success, reliability, latency, and cost.

Common methods include exact-match grading, executable test suites, environment-state checks, human review, and LLM-as-judge scoring for open-ended work. Each has tradeoffs in rigor, scalability, and realism. Self-hosted suites are easier to rerun and compare over time, while public-web or live-software evaluations better reflect drift and production messiness. The best way to evaluate AI agents is usually to combine both.
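To make those tradeoffs concrete, here is a hedged sketch of scoring one task result two ways: a deterministic environment-state check for reproducibility and an LLM-as-judge check for open-ended output, with latency and cost tracked alongside correctness. All function and field names are hypothetical, and the judge is stubbed out rather than calling a real model.

# Hypothetical scoring harness combining two evaluator styles; not a real framework's API.
from dataclasses import dataclass

@dataclass
class TaskResult:
    final_state: dict      # snapshot of the environment after the run
    answer_text: str       # the agent's free-form output, if any
    latency_s: float
    cost_usd: float

def state_check(result: TaskResult, expected_state: dict) -> bool:
    """Deterministic check: did the agent leave the environment in the right state?"""
    return all(result.final_state.get(k) == v for k, v in expected_state.items())

def llm_judge(result: TaskResult, rubric: str) -> bool:
    """Placeholder for an LLM-as-judge call grading open-ended work against a rubric."""
    # A real harness would prompt a judge model here; this stub just matches the rubric text.
    return rubric.lower() in result.answer_text.lower()

def score(result: TaskResult, expected_state: dict, rubric: str) -> dict:
    return {
        "state_pass": state_check(result, expected_state),
        "judge_pass": llm_judge(result, rubric),
        "latency_s": result.latency_s,   # tracked alongside correctness
        "cost_usd": result.cost_usd,
    }

run = TaskResult({"order_status": "refunded"}, "Refund issued per policy.", 41.2, 0.18)
print(score(run, {"order_status": "refunded"}, "refund"))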

FAQ
What are AI agent benchmarks? [+]
AI agent benchmarks are evaluations that measure whether an agent can complete multi-step tasks in an environment such as a browser, terminal, desktop, or tool stack. Unlike single-prompt model tests, they focus on action quality, task completion, recovery from mistakes, and end-to-end execution.
What is an agent eval registry? [+]
An agent eval registry is a curated index of AI agent benchmarks, evaluations, leaderboards, test suites, and frameworks. Instead of covering just one benchmark family, it helps you compare multiple evaluation options across web navigation, coding, desktop control, and tool use in one place.
How do you evaluate AI agents? [+]
You evaluate AI agents by testing them on multi-step tasks in realistic environments and measuring whether they reach the correct end state. Strong agent evaluations usually track task success, evaluator design, reliability, cost, latency, and recovery from mistakes. The right eval framework depends on whether you care about browser use, coding, tool use, desktop control, or general reasoning.
What is the best benchmark for coding agents? [+]
There is no single best benchmark for every use case, but SWE-bench Verified is widely treated as the most trusted benchmark for coding agents because it uses real repository issues and executable test suites. Terminal-Bench is also useful when you want to evaluate autonomous agents in command-line and systems workflows.
How do browser agent benchmarks differ from coding benchmarks? [+]
Browser agent benchmarks evaluate interaction with websites, page state, navigation, and visual or DOM-grounded actions. Coding benchmarks evaluate repository understanding, file edits, tool use, debugging, and test execution. Compare WebVoyager with SWE-bench Verified to see how different the environments and failure modes are.
What does self-hosted mean in an agent benchmark? [+]
Self-hosted means the benchmark environment can be run in a controlled local or containerized setup instead of depending entirely on live public services. That usually improves reproducibility and evaluation stability, but may be less representative of the messy real web or production software. Benchmarks like WebArena and OSWorld are good examples.
Why do benchmark scores differ across evaluators? [+]
Benchmark scores differ because evaluators measure success in different ways. Some use exact match, some use executable test suites, some verify environment state, and others rely on human review or LLM judges. A score from one benchmark or evaluator is not directly comparable to a score produced by a different evaluation method. See the "How to read this registry" note above before comparing results.
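A tiny illustration of why the method matters, with invented values: the same agent run can pass a lenient text-match grader and fail a stricter environment-state check, so the two numbers cannot be compared directly.

# Same run, two evaluators, two different verdicts (illustrative data only).
run = {
    "answer_text": "Your meeting is booked for Tuesday at 3pm.",
    "final_state": {"calendar_event": None},   # the event was never actually created
}

def text_match(run: dict, reference: str) -> bool:
    return reference.lower() in run["answer_text"].lower()

def state_check(run: dict) -> bool:
    return run["final_state"]["calendar_event"] is not None

print(text_match(run, "Tuesday at 3pm"))  # True  -> looks like a pass
print(state_check(run))                   # False -> the task actually failed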