Skip uploading results to perflab when onlySanityCheck is true #5148
Open
LoopedBard3 wants to merge 1 commit into dotnet:main from
Conversation
When onlySanityCheck runs are internal, the --upload-to-perflab-container flag was still being passed to benchmarks_ci.py, causing sanity check results (~34 tests) to be uploaded as real performance data. This gates the upload flag on only_sanity_check being false for both microbenchmark and scenario run paths. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Contributor
Pull request overview
This PR prevents internal sanity-check-only benchmark runs from uploading their results to the Perflab container, avoiding pollution of official performance data when --only-sanity is used.
Changes:
- Gate the microbenchmark `--upload-to-perflab-container` flag on `internal && !only_sanity_check`.
- Gate scenario upload arguments similarly when running internally with sanity-check-only.
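The gating described above can be sketched as a small argument-builder. This is an illustrative sketch only; `build_ci_arguments` and its parameters are hypothetical names, not the repository's actual code.

```python
# Hypothetical sketch of the upload gating; the function name and
# parameters are illustrative, not taken from the repository.
def build_ci_arguments(internal: bool, only_sanity_check: bool) -> list:
    ci_args = []
    if internal and not only_sanity_check:
        # Upload results as real performance data only for full internal
        # runs; sanity-check-only runs skip the upload entirely.
        ci_args.append("--upload-to-perflab-container")
    return ci_args
```

With this shape, an internal sanity-check run (`internal=True, only_sanity_check=True`) produces no upload flag, so its ~34 test results never reach the Perflab container.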
def get_work_item_command_for_artifact_dir(artifact_dir: str):
    assert args.target_csproj is not None
-   return get_work_item_command(args.os_group, args.target_csproj, args.architecture, perf_lab_framework, args.internal, wasm, artifact_dir, wasm_coreclr)
+   return get_work_item_command(args.os_group, args.target_csproj, args.architecture, perf_lab_framework, args.internal, wasm, artifact_dir, wasm_coreclr, args.only_sanity_check)
This call passes multiple boolean flags positionally (wasm_coreclr, only_sanity_check), which is easy to misread and brittle as parameters evolve. Consider passing these as keyword arguments (or making the optional flags keyword-only) to prevent accidental argument-order bugs in future edits.
Suggested change
-   return get_work_item_command(args.os_group, args.target_csproj, args.architecture, perf_lab_framework, args.internal, wasm, artifact_dir, wasm_coreclr, args.only_sanity_check)
+   return get_work_item_command(
+       args.os_group,
+       args.target_csproj,
+       args.architecture,
+       perf_lab_framework,
+       args.internal,
+       wasm,
+       artifact_dir=artifact_dir,
+       wasm_coreclr=wasm_coreclr,
+       only_sanity_check=args.only_sanity_check,
+   )