Conversation
Co-authored-by: openhands <openhands@all-hands.dev>
all-hands-bot
left a comment
🟢 Good taste - Clean, focused fix that solves real problems (wrong package/dataset names) and adds practical CI smoke test support. Tests appropriately validate command construction without requiring full Harbor integration. No fundamental issues found.
all-hands-bot
left a comment
🟢 Good taste - Clean, focused fix that solves real problems (wrong package/dataset names: `harbor-bench` → `harbor`, `terminal-bench-2` → `terminal-bench@2.0`) and adds practical CI smoke test support with `--n-limit`. Tests appropriately validate command construction without requiring full Harbor integration. Evidence provided shows successful smoke runs. No fundamental issues found.
Verdict: ✅ Worth merging
Key insight: Pragmatic fix that solves real integration issues with minimal, well-tested code and proper documentation.
Summary
- Fix the Harbor package and dataset names (`harbor-bench` → `harbor`, `terminal-bench-2` → `terminal-bench@2.0`) and add `--n-limit` passthrough for CI smoke runs
- Support `terminalbench` in the benchmarks dispatch workflow
- Update `AGENTS.md`

Details
- The PyPI package is `harbor`, not `harbor-bench`.
- The `terminal-bench` dataset version `2.0` currently exposes 89 tasks.

Testing
- `make build`
- `uv run pre-commit run --files benchmarks/terminalbench/config.py benchmarks/terminalbench/run_infer.py benchmarks/terminalbench/README.md tests/test_terminalbench.py .github/workflows/run-eval.yml`
- `uv run pytest tests/test_terminalbench.py`

Evidence
Verification link: View conversation
Follow-up investigation: the previously cited terminalbench smoke run did not complete end-to-end, so this PR is being moved back to draft pending real live-run evidence.
```
$ gh run view 22823734279 --repo OpenHands/software-agent-sdk --json status,conclusion,displayTitle,url
{"conclusion":"success","displayTitle":"Run Eval (terminalbench) Smoke test for OpenHands/benchmarks#490","status":"completed","url":"https://github.com/OpenHands/software-agent-sdk/actions/runs/22823734279"}

$ gh run view 22823745521 --repo OpenHands/evaluation --json status,conclusion,displayTitle,url
{"conclusion":"success","displayTitle":"Eval Job (terminalbench) Smoke test for OpenHands/benchmarks#490","status":"completed","url":"https://github.com/OpenHands/evaluation/actions/runs/22823745521"}

$ # Datadog pod logs for eval-22823745521-claude-son* service:python
[2026-03-08 15:11:16 UTC] Benchmark: terminalbench
[2026-03-08 15:11:16 UTC] Dispatching terminalbench build for SDK commit: 77c68ccfd7bdffb27be88e8793f76cafc45faf9d
[2026-03-08 15:11:17 UTC] ERROR: Benchmarks build dispatch failed (status 404): {"message":"Not Found","documentation_url":"https://docs.github.com/rest/actions/workflows#create-a-workflow-dispatch-event","status":"404"}
[2026-03-08 15:11:17 UTC] Deleted temporary branch: dispatch-22823745521
```

The GitHub Actions runs only proved that the workflow dispatch/deploy path was reachable. Datadog shows the orchestration failed before the evaluation phase, so there was no completed benchmark run, no uploaded results archive, and no Slack success notification.
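For reference, the dispatch target that returned 404 can be reproduced from the benchmark name alone. A sketch of the inferred derivation — the `build-{benchmark}-images.yml` pattern is an assumption based on the error log above, not confirmed against the `OpenHands/evaluation` source:

```python
def build_workflow_filename(benchmark: str) -> str:
    # Inferred pattern: each benchmark is assumed to ship a matching
    # build-<benchmark>-images.yml workflow in OpenHands/benchmarks.
    return f"build-{benchmark}-images.yml"

# For terminalbench this resolves to "build-terminalbench-images.yml",
# a file that does not exist on the target branch, so the GitHub
# workflow-dispatch API responds with HTTP 404.
print(build_workflow_filename("terminalbench"))
```

Note that GitHub's workflow-dispatch endpoint returns 404 (not 400) when the named workflow file is absent, which matches the log output exactly.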
Likely root cause:
`OpenHands/evaluation` currently derives the benchmark build workflow name as `build-{benchmark}-images.yml`, which becomes `build-terminalbench-images.yml`. That workflow file does not exist on `OpenHands/benchmarks` (including branch `openhands/terminalbench-ci-490`), so the dispatch returns HTTP 404.

Checklist