The gap
The current sample pack covers CI analysis (ci-coach), docs (daily-doc-updater), code quality (code-simplifier), and triage (repo-assist). There is no example that addresses cost observability, which is becoming a top concern as teams scale agentic workflows.
This is related to #297, but sidesteps the external telemetry authorization problem entirely: token-usage.jsonl is already written locally by the gh-aw firewall after every run (introduced in gh-aw #23943, v0.25.8+). No external OTel endpoint or auth required.
What the sample would do
A `cost-tracker.md` workflow that triggers on `workflow_run: completed` and:
- Downloads the `agent-artifacts` artifact from the completed run
- Parses `agent-artifacts/sandbox/firewall/logs/api-proxy-logs/token-usage.jsonl`
- Calculates per-run cost using current model pricing
- Compares to a rolling baseline to detect anomalies (e.g. >2x average)
- Posts a cost summary comment on the triggering issue or PR
- Optionally creates an alert issue if spend exceeds a configurable threshold
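The parse-and-price step could be sketched roughly as below. The pricing table, field names (`model`, `input_tokens`, `output_tokens`), and dollar figures are all illustrative assumptions; the actual `token-usage.jsonl` schema and current model prices would need to be confirmed against real run output.

```python
import json
from pathlib import Path

# Assumed per-million-token pricing; real values would come from the
# current price sheet for each model. Figures here are placeholders.
PRICING = {
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def run_cost(jsonl_path: str) -> float:
    """Sum the estimated cost of every entry in a token-usage.jsonl file.

    Assumes each line is a JSON object with `model`, `input_tokens`, and
    `output_tokens` fields; the real schema may differ.
    """
    total = 0.0
    for line in Path(jsonl_path).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        price = PRICING.get(entry["model"])
        if price is None:
            continue  # unknown model: skip rather than guess a price
        total += entry["input_tokens"] / 1e6 * price["input"]
        total += entry["output_tokens"] / 1e6 * price["output"]
    return total
```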
The workflow would be self-contained and work with any engine (Claude, Copilot, Codex). For teams that want persistent history and per-repo trend charts, the doc could note that AgentMeter can receive the same token data as a GitHub Action step, but the sample itself would have no external dependency.
Why it fits
- The data is already there. Every firewall-enabled run writes `token-usage.jsonl` with per-model token counts. The sample teaches teams how to consume it.
- Reactive pattern. Fires after a run completes, analyzes what happened, and surfaces actionable data. Same spirit as `ci-coach`.
- Fills a real gap. Cost variance is often the first signal of a runaway prompt, a model regression, or a caching miss. Teams running agents at scale need this signal natively.
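The anomaly check above (flag a run above 2x the rolling average) could be sketched as follows; the window size, threshold default, and function name are illustrative, not part of the proposal's API.

```python
from collections import deque

def is_cost_anomaly(history: list[float], current: float,
                    window: int = 20, threshold: float = 2.0) -> bool:
    """Flag a run whose cost exceeds `threshold` times the rolling average.

    `history` holds per-run dollar costs from previous completed runs;
    only the most recent `window` entries form the baseline. The 2x
    threshold mirrors the proposal and would be configurable.
    """
    recent = deque(history, maxlen=window)  # keep only the last `window` runs
    if not recent:
        return False  # no baseline yet, nothing to compare against
    baseline = sum(recent) / len(recent)
    return current > threshold * baseline
```

A first run with no history is treated as normal rather than anomalous, which avoids alert noise on freshly instrumented repos.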
My situation
I have been building this pattern into AgentMeter, a cost dashboard for agentic workflows, and the core workflow logic is well-tested against real gh-aw runs. Happy to contribute the sample once I confirm the format fits what you have in mind.
I can follow ci-coach as the structural reference (`workflows/cost-tracker.md` + `docs/cost-tracker.md`).