Skip to content

feat: Add XSTest safety benchmark resource server#764

Merged
bxyu-nvidia merged 9 commits intomainfrom
dfarris/add-xstest-benchmark
Mar 10, 2026
Merged

feat: Add XSTest safety benchmark resource server#764
bxyu-nvidia merged 9 commits intomainfrom
dfarris/add-xstest-benchmark

Conversation

@dcfarris
Copy link
Copy Markdown
Contributor

@dcfarris dcfarris commented Feb 25, 2026

Summary

  • Adds XSTest safety benchmark (Rottger et al., NAACL 2024) as a new resource server
  • 450 hand-crafted prompts (250 safe, 200 unsafe) that test whether models correctly comply with safe requests and refuse unsafe ones
  • Three verification modes: string-match (from the paper), LLM-as-judge (3-class taxonomy), and WildGuard (Allen AI safety classifier)
  • Judge mode is configurable via use_judge + judge_output_format — easily swappable between WildGuard, Qwen3-Guard, or any LLM judge
  • Includes WildGuard config (xstest_wildguard.yaml) with custom chat template for local_vllm_model

What's in this PR

  • app.py — resource server with string-match + LLM judge + WildGuard verdict parsing
  • configs/xstest.yaml — default config (string-match only)
  • configs/xstest_wildguard.yaml — WildGuard via local_vllm_model with custom chat template
  • prompt_templates/xstest_judge.txt — XSTest paper's 3-class judge prompt
  • prompt_templates/wildguard_judge.txt — WildGuard classifier prompt (verified identical to allenai/wildguard library)
  • data/example.jsonl — 5 example prompts (no system prompt)
  • tests/test_app.py — 31 tests (string-match, XSTest judge, WildGuard judge)
  • scripts/aggregate_results.py — per-category results aggregation

Validation

  • End-to-end tested with WildGuard judge on all 450 prompts
  • Results match internal baseline within expected temp=1.0 variance

Test plan

  • 31 unit tests passing (pytest)
  • ruff check and ruff format clean
  • ng_prepare_data example validation passing
  • End-to-end rollout collection with ng_collect_rollouts
  • Full 450-prompt run with WildGuard judge via local_vllm_model
  • Aggregate reporting verified with per-category breakdown

@copy-pr-bot
Copy link
Copy Markdown

copy-pr-bot Bot commented Feb 25, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

dcfarris added 3 commits March 4, 2026 15:33
Adds XSTest (Rottger et al., NAACL 2024) as a new resource server for
evaluating LLM over-refusal behavior. 450 hand-crafted prompts (250 safe,
200 unsafe) test whether models correctly comply with safe requests and
refuse unsafe ones.

Two verification modes:
- String-match classification (from the paper's 21 refusal prefixes)
- LLM-as-judge with the paper's 3-class taxonomy (optional, configurable)

Judge falls back to string matching on errors. Includes per-category
aggregate reporting script and documentation.

Signed-off-by: Dave Farris <dfarris@nvidia.com>
- Remove system prompt from dataset (XSTest tests raw prompt behavior)
- Document recommended generation params (temp=1.0, max_output_tokens=32768)
- Update README with no-system-prompt rationale and gen param guidance
- Simplify _parse_judge_verdict (remove unnecessary variable)
- Simplify test_sanity and aggregate_results.py

Signed-off-by: Dave Farris <dfarris@nvidia.com>
Auto-generated by update-readme-table pre-commit hook.

Signed-off-by: Dave Farris <dfarris@nvidia.com>
@dcfarris dcfarris force-pushed the dfarris/add-xstest-benchmark branch from adf194f to 88b4cec Compare March 4, 2026 23:34
- Add judge_output_format config ("xstest" or "wildguard") to select parser
- Add WildGuard verdict parser matching allenai/wildguard library output
- Add WildGuard prompt template (inner content only, chat template in YAML)
- Add xstest_wildguard.yaml config with local_vllm_model + custom chat template
- WildGuard chat template verified to produce identical prompts to wildguard library
- 31 tests (string-match, xstest judge, wildguard judge parsing + integration)

Signed-off-by: Dave Farris <dfarris@nvidia.com>
@dcfarris dcfarris marked this pull request as ready for review March 6, 2026 23:13
bxyu-nvidia
bxyu-nvidia previously approved these changes Mar 7, 2026
Comment thread resources_servers/xstest/app.py Outdated
Comment thread resources_servers/xstest/app.py
Comment thread resources_servers/xstest/app.py
Comment thread resources_servers/xstest/scripts/aggregate_results.py
Copy link
Copy Markdown
Contributor

@prasoonvarshney prasoonvarshney left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Added comments inline

…arsing, judge error tracking

- Make WildGuard the default judge config (xstest.yaml), move string-match
  to xstest_string_match.yaml.
- Fix _strip_thinking_blocks() to handle missing opening <think> tag
  (split on closing tag only, which is the reliable indicator).
- Distinguish judge_parsing_error from judge_error in verdict labels
  for better debugging of unparseable judge output vs HTTP failures.
- Log detected judge type (WildGuard/XSTest 3-class/string-match)
  in aggregate results output.
- Add tests for thinking trace edge cases (no opening tag, orphaned
  opening tag, non-thinking model) and judge_parsing_error label.
- Update README to document all three verification modes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Dave Farris <dfarris@nvidia.com>
Comment thread resources_servers/xstest/tests/test_app.py Outdated
Comment thread resources_servers/xstest/app.py Outdated
Per reviewer consensus, reasoning trace separation is the model server's
responsibility (via --reasoning-parser for vLLM, or the Responses schema's
native reasoning content type. The resource server no longer strips
<think>/<thinking> blocks.

Added a warning log if </think> tags are detected in response content,
indicating the model server's reasoning parser may not be enabled.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
EOF
)

Signed-off-by: Dave Farris <dfarris@nvidia.com>
Copy link
Copy Markdown
Contributor

@prasoonvarshney prasoonvarshney left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Thanks for the updates, LGTM

bxyu-nvidia
bxyu-nvidia previously approved these changes Mar 10, 2026
…s/add-xstest-benchmark

Signed-off-by: Brian Yu <bxyu@nvidia.com>
@bxyu-nvidia bxyu-nvidia dismissed stale reviews from prasoonvarshney and themself via 5f48716 March 10, 2026 04:34
dcfarris and others added 2 commits March 10, 2026 09:52
- Add policy_model placeholder to xstest.yaml and xstest_string_match.yaml
  (required by config validation when agent references policy_model)
- Add example_rollouts.jsonl (5 entries, required by new CI data validation)
- Add example_metrics.json (required by CI data validation)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Dave Farris <dfarris@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Dave Farris <dfarris@nvidia.com>
@bxyu-nvidia bxyu-nvidia merged commit 6ad2db8 into main Mar 10, 2026
6 checks passed
@bxyu-nvidia bxyu-nvidia deleted the dfarris/add-xstest-benchmark branch March 10, 2026 15:51
raveedturing pushed a commit to turing-rlgym/Gym that referenced this pull request Mar 18, 2026
## Summary
- Adds XSTest safety benchmark (Rottger et al., NAACL 2024) as a new
resource server
- 450 hand-crafted prompts (250 safe, 200 unsafe) that test whether
models correctly comply with safe requests and refuse unsafe ones
- Three verification modes: string-match (from the paper), LLM-as-judge
(3-class taxonomy), and WildGuard (Allen AI safety classifier)
- Judge mode is configurable via `use_judge` + `judge_output_format` —
easily swappable between WildGuard, Qwen3-Guard, or any LLM judge
- Includes WildGuard config (`xstest_wildguard.yaml`) with custom chat
template for `local_vllm_model`

## What's in this PR
- `app.py` — resource server with string-match + LLM judge + WildGuard
verdict parsing
- `configs/xstest.yaml` — default config (string-match only)
- `configs/xstest_wildguard.yaml` — WildGuard via `local_vllm_model`
with custom chat template
- `prompt_templates/xstest_judge.txt` — XSTest paper's 3-class judge
prompt
- `prompt_templates/wildguard_judge.txt` — WildGuard classifier prompt
(verified identical to allenai/wildguard library)
- `data/example.jsonl` — 5 example prompts (no system prompt)
- `tests/test_app.py` — 31 tests (string-match, XSTest judge, WildGuard
judge)
- `scripts/aggregate_results.py` — per-category results aggregation

## Validation
- End-to-end tested with WildGuard judge on all 450 prompts
- Results match internal baseline within expected temp=1.0 variance

## Test plan
- [x] 31 unit tests passing (`pytest`)
- [x] `ruff check` and `ruff format` clean
- [x] `ng_prepare_data` example validation passing
- [x] End-to-end rollout collection with `ng_collect_rollouts`
- [x] Full 450-prompt run with WildGuard judge via `local_vllm_model`
- [x] Aggregate reporting verified with per-category breakdown

---------

Signed-off-by: Dave Farris <dfarris@nvidia.com>
Signed-off-by: Brian Yu <bxyu@nvidia.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Brian Yu <bxyu@nvidia.com>
MahanFathi pushed a commit that referenced this pull request Mar 24, 2026
## Summary
- Adds XSTest safety benchmark (Rottger et al., NAACL 2024) as a new
resource server
- 450 hand-crafted prompts (250 safe, 200 unsafe) that test whether
models correctly comply with safe requests and refuse unsafe ones
- Three verification modes: string-match (from the paper), LLM-as-judge
(3-class taxonomy), and WildGuard (Allen AI safety classifier)
- Judge mode is configurable via `use_judge` + `judge_output_format` —
easily swappable between WildGuard, Qwen3-Guard, or any LLM judge
- Includes WildGuard config (`xstest_wildguard.yaml`) with custom chat
template for `local_vllm_model`

## What's in this PR
- `app.py` — resource server with string-match + LLM judge + WildGuard
verdict parsing
- `configs/xstest.yaml` — default config (string-match only)
- `configs/xstest_wildguard.yaml` — WildGuard via `local_vllm_model`
with custom chat template
- `prompt_templates/xstest_judge.txt` — XSTest paper's 3-class judge
prompt
- `prompt_templates/wildguard_judge.txt` — WildGuard classifier prompt
(verified identical to allenai/wildguard library)
- `data/example.jsonl` — 5 example prompts (no system prompt)
- `tests/test_app.py` — 31 tests (string-match, XSTest judge, WildGuard
judge)
- `scripts/aggregate_results.py` — per-category results aggregation

## Validation
- End-to-end tested with WildGuard judge on all 450 prompts
- Results match internal baseline within expected temp=1.0 variance

## Test plan
- [x] 31 unit tests passing (`pytest`)
- [x] `ruff check` and `ruff format` clean
- [x] `ng_prepare_data` example validation passing
- [x] End-to-end rollout collection with `ng_collect_rollouts`
- [x] Full 450-prompt run with WildGuard judge via `local_vllm_model`
- [x] Aggregate reporting verified with per-category breakdown

---------

Signed-off-by: Dave Farris <dfarris@nvidia.com>
Signed-off-by: Brian Yu <bxyu@nvidia.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Brian Yu <bxyu@nvidia.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants