Quality Gates: PR Title standardization and LOC#212

cb-anomitromunshi wants to merge 6 commits into master
Conversation
✅ Snyk checks have passed. No issues have been found so far.
💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.
PR Complexity Score: 2.0 - Simple
- Lines Changed: 239
- Files Changed: 5
- Complexity Added: 0
- Raw Score: 19.78
⚠️ Sensitive Data Detected
| Line | Type | Preview |
|---|---|---|
| 13 | Secret: Secret Keyword | [Secret Keyword] |
.github/workflows/pr-size-check.yml
| Line | Type | Preview |
|---|---|---|
| 67 | PII: Phone Number | 127322177288 |
- Introduces a standardized pull request template to improve PR descriptions, changelog entries, and documentation links.
- Adds a script and documentation to automatically generate changelog entries from merged PRs via the GitHub API.
- Configures a reusable PR lint workflow for main/master-targeted PRs.
- Adds a PR size check workflow for dev/develop/** branches, including a bypass mechanism with audit logging to S3 and CAB approval signaling.
High-level summary
This PR sets up common GitHub workflow and tooling standards for the repository:
- Standardizes how PRs are described and categorized (template).
- Enables automated changelog generation based on merged PRs.
- Enforces PR hygiene via a shared lint workflow.
- Controls PR size with automated checks, optional exception labeling, and traceable approvals.
Key areas affected: .github configuration (PR templates, scripts, and workflows), CI behavior for PRs, and release/changelog processes.
File-level change summary
| File | Change Summary |
|---|---|
| .github/pull_request_template.md | Adds a structured PR template including sections for changelog, summary, automation changes, test report URL, areas of impact, type of change (bugfix/feature/etc.), and documentation link. |
| .github/scripts/README.md | Documents how to use the new generate-changelog.sh script, including prerequisites, environment variable setup, example commands, and argument behavior. |
| .github/scripts/generate-changelog.sh | New Bash script that queries the GitHub API for merged PRs into a given branch within a date range, filters out sync/bot PRs, handles JSON/HTTP errors robustly, and prints a formatted changelog plus a verification search URL. |
| .github/workflows/pr-lint.yml | Adds a “Common PR Lint” workflow that runs on various PR events for main/master-related branches and delegates checks to the shared chargebee/cb-cicd-pipelines reusable workflow, limited to PRs targeting main or master. |
| .github/workflows/pr-size-check.yml | Adds a “PR Size Check” workflow for PRs into dev/develop/** that: (1) comments when a pr-size-exception label is present and CAB approval is pending, (2) runs a shared pr-size-check action with thresholds and path exclusions, and (3) when bypassed, configures AWS OIDC and records bypass approval metadata to S3 for auditing. |
    ## DOCUMENTATION
    REPLACE_ME_WITH_DOCUMENTATION_LINK_OR_NA (no newline at end of file)
Priority: 🟢 LOW
Problem: The file is missing a trailing newline (\n) at the end of the file.
Why: Many tools (linters, formatters, POSIX utilities, and some Git diff viewers) expect text files to end with a newline; missing it can cause noisy diffs and minor interoperability issues.
How to Fix: Add a newline at the end of the file after the last line so the file ends with a proper line terminator.
Suggested change (add the trailing newline after the last line):

    ## DOCUMENTATION
    REPLACE_ME_WITH_DOCUMENTATION_LINK_OR_NA
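As a side note (not part of the suggestion above), a missing trailing newline can be detected and repaired from the shell. An illustrative sketch using only POSIX `tail` and `printf` (the function names are hypothetical):

```shell
# Illustrative sketch: detect and repair a missing trailing newline.
# tail -c 1 prints the file's last byte; command substitution strips a
# trailing newline, so the captured value is empty exactly when that
# byte was '\n' (or the file is empty).
ends_with_newline() {
  [ -z "$(tail -c 1 "$1")" ]
}

# Append a newline only when the file does not already end with one.
fix_trailing_newline() {
  ends_with_newline "$1" || printf '\n' >> "$1"
}
```

Running `fix_trailing_newline` twice is safe: the second call is a no-op.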
    Lists PRs merged into a branch (excludes "Parent branch sync" and bot-authored PRs).

    **Prerequisites:** `jq`, and GitHub credentials as env vars.

    ### Set credentials (once per terminal)
Priority: 🟡 MEDIUM
Problem: The prerequisites mention jq and "GitHub credentials as env vars" but do not specify that the GitHub token must have repo (and possibly read:org if needed) scopes, which can lead to confusing authentication/authorization failures when using the script.
Why: Without explicit scope guidance, users may create a PAT with insufficient permissions, causing the script to fail with opaque GitHub API errors; this degrades usability and makes troubleshooting harder.
How to Fix: Clarify in the prerequisites that GH_PAT must be a GitHub personal access token with at least repo scope (and any other required scopes, if applicable), so users can provision the correct token up front.
Suggested change:

    Lists PRs merged into a branch (excludes "Parent branch sync" and bot-authored PRs).
    **Prerequisites:** `jq`, and GitHub credentials as env vars (`GH_PAT` should be a personal access token with at least `repo` scope).
    ### Set credentials (once per terminal)
    export GH_USERNAME=your-github-username
    export GH_PAT=your-github-personal-access-token
Priority: 🟢 LOW
Problem: The example environment variable names GH_USERNAME and GH_PAT are introduced without explicitly tying them to the script’s expectations, which may leave users unsure whether these exact names are required or just examples.
Why: Ambiguity around required environment variable names can cause the script to fail at runtime if users choose different names, leading to avoidable setup friction.
How to Fix: Explicitly state that the script expects GH_USERNAME and GH_PAT to be set with those exact names, or clarify that these are the required variable names used by generate-changelog.sh.
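One way to make this explicit in the script itself is a fail-fast guard. A minimal sketch, assuming the exact variable names `GH_USERNAME` and `GH_PAT` that the README introduces (the function name is hypothetical, not something the script defines):

```shell
# Hypothetical guard for generate-changelog.sh: verify the required
# credential variables are set before any API work is attempted.
check_credentials() {
  missing=0
  [ -n "${GH_USERNAME:-}" ] || { echo "❌ Error: GH_USERNAME is not set" >&2; missing=1; }
  [ -n "${GH_PAT:-}" ] || { echo "❌ Error: GH_PAT is not set" >&2; missing=1; }
  return "$missing"
}
```

Called once near the top of the script, this turns an opaque 401 from the API into an immediate, named error.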
    # Optional: Branch name (defaults to current branch if not provided)
    SOURCE_BRANCH="${1:-$(git branch --show-current)}"
    # Optional: Date filter (defaults to last 30 days if not provided)
    DATE_FILTER="${2:-merged:>=$(date -u -v-30d +%Y-%m-%d 2>/dev/null || date -u -d '30 days ago' +%Y-%m-%d)}"

    # Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
    REPO="chargebee/chargebee-react-native"
Priority: 🟠 HIGH
Problem: The default DATE_FILTER string is constructed using a mix of macOS (date -v-30d) and GNU (date -d '30 days ago') syntax inside the same parameter expansion, which is hard to read and can behave unexpectedly if either command is unavailable or behaves differently across environments.
Why: This script is likely to be run in different environments (local dev, CI runners, possibly containers). Embedding both variants inline makes the expression brittle and harder to debug; a failure in both date invocations will silently result in an empty DATE_FILTER and a malformed GitHub query.
How to Fix: Compute the date in a small conditional block first (trying macOS syntax, then GNU syntax, and failing clearly if both are unavailable), then use that variable in DATE_FILTER. This improves portability and error handling.
Suggested change:

    # Optional: Branch name (defaults to current branch if not provided)
    SOURCE_BRANCH="${1:-$(git branch --show-current)}"
    # Optional: Date filter (defaults to last 30 days if not provided)
    if [ -z "$2" ]; then
      # Try macOS BSD date first, then GNU date; fail clearly if both are unavailable
      if LAST_30_DAYS=$(date -u -v-30d +%Y-%m-%d 2>/dev/null); then
        :
      elif LAST_30_DAYS=$(date -u -d '30 days ago' +%Y-%m-%d 2>/dev/null); then
        :
      else
        echo "❌ Error: Unable to compute date 30 days ago; 'date' command does not support required options"
        exit 1
      fi
      DATE_FILTER="merged:>=$LAST_30_DAYS"
    else
      DATE_FILTER="$2"
    fi
    # Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
    REPO="chargebee/chargebee-react-native"
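The same portability logic can also be factored into a small helper. A sketch under the same BSD-then-GNU assumption (`days_ago` is an illustrative name, not something the script defines):

```shell
# Print the UTC date N days ago as YYYY-MM-DD, trying BSD/macOS date
# syntax first and falling back to GNU date syntax.
days_ago() {
  n="$1"
  date -u -v-"${n}"d +%Y-%m-%d 2>/dev/null \
    || date -u -d "${n} days ago" +%Y-%m-%d 2>/dev/null \
    || { echo "Error: this 'date' supports neither -v nor -d" >&2; return 1; }
}
```

With this, the default filter becomes `DATE_FILTER="merged:>=$(days_ago 30)"`.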
    # Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
    REPO="chargebee/chargebee-react-native"

    echo "🔍 Searching for PRs merged into $SOURCE_BRANCH..."
Priority: 🟡 MEDIUM
Problem: The REPO variable is hard-coded to chargebee/chargebee-react-native, which makes the script non-reusable across repositories and contradicts the comment that it is “set per-repo when this file is pushed”.
Why: If this script is copied to another repository without updating REPO, it will query and generate changelog entries for the wrong repo, which is a subtle and potentially confusing failure mode.
How to Fix: Allow REPO to be overridden via an environment variable and fall back to a sensible default, or fail with a clear error if REPO is not set, instead of hard-coding a specific repository.
Suggested change:

    # Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
    : "${REPO:=chargebee/chargebee-react-native}"
    echo "🔍 Searching for PRs merged into $SOURCE_BRANCH..."
    # GitHub API call with error handling
    HTTP_STATUS=$(curl -s -w "%{http_code}" -G -u "$GH_USERNAME:$GH_PAT" \
      "https://api.github.com/search/issues" \
      --data-urlencode "q=NOT \"Parent branch sync\" in:title is:pr repo:$REPO is:merged base:$SOURCE_BRANCH merged:$DATE_FILTER -author:app/distributed-gitflow-app" \
      -o /tmp/curl_output.json \
      2>/tmp/curl_error.log)

    CURL_EXIT_CODE=$?
Priority: 🟡 MEDIUM
Problem: The script writes API responses and errors to fixed paths under /tmp (/tmp/curl_output.json and /tmp/curl_error.log), which can cause collisions if multiple instances run concurrently and may leak data between runs.
Why: Using static filenames in a shared temp directory is not safe for concurrent executions and can make debugging harder because logs from different runs overwrite each other.
How to Fix: Use mktemp to create unique temporary files for each run and ensure they are cleaned up at the end; store their paths in variables used consistently throughout the script.
Suggested change (note: both GNU and BSD `mktemp` require the `X` placeholders to end the template, so no suffix is used here):

    # GitHub API call with error handling
    CURL_OUTPUT_FILE=$(mktemp /tmp/changelog_curl_output.XXXXXX)
    CURL_ERROR_FILE=$(mktemp /tmp/changelog_curl_error.XXXXXX)
    HTTP_STATUS=$(curl -s -w "%{http_code}" -G -u "$GH_USERNAME:$GH_PAT" \
      "https://api.github.com/search/issues" \
      --data-urlencode "q=NOT \"Parent branch sync\" in:title is:pr repo:$REPO is:merged base:$SOURCE_BRANCH merged:$DATE_FILTER -author:app/distributed-gitflow-app" \
      -o "$CURL_OUTPUT_FILE" \
      2>"$CURL_ERROR_FILE")
    CURL_EXIT_CODE=$?
    if [ $CURL_EXIT_CODE -ne 0 ]; then
      echo "❌ Error: curl request failed with exit code $CURL_EXIT_CODE"
      echo "Error details: $(cat /tmp/curl_error.log)"
      exit 1
    fi

    if [ "$HTTP_STATUS" -ne 200 ]; then
      echo "❌ Error: API returned HTTP status $HTTP_STATUS"
      echo "Response: $(cat /tmp/curl_output.json)"
      exit 1
    fi
Priority: 🟡 MEDIUM
Problem: When curl fails or the API returns a non-200 status, the script prints the contents of the error and output files inline using command substitution, which can mangle large or multi-line responses and makes it harder to distinguish message boundaries.
Why: Using $(cat file) inside echo collapses newlines and can make JSON or error logs unreadable, complicating debugging when something goes wrong.
How to Fix: Use cat directly to stream the file contents, preserving formatting, and reference the temp file variables introduced for safer temp handling.
Suggested change:

    if [ $CURL_EXIT_CODE -ne 0 ]; then
      echo "❌ Error: curl request failed with exit code $CURL_EXIT_CODE"
      echo "Error details:"
      cat "$CURL_ERROR_FILE"
      exit 1
    fi
    if [ "$HTTP_STATUS" -ne 200 ]; then
      echo "❌ Error: API returned HTTP status $HTTP_STATUS"
      echo "Response:"
      cat "$CURL_OUTPUT_FILE"
      exit 1
    fi
    PR_LIST_RESPONSE=$(cat /tmp/curl_output.json)

    # Clean invalid control characters from JSON response
    if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
      echo "⚠️ Invalid JSON detected — cleaning control characters..."
      PR_LIST_RESPONSE=$(echo "$PR_LIST_RESPONSE" | tr -d '\000-\037')
Priority: 🟡 MEDIUM
Problem: Lines 38-39 and 43-49: The script assumes jq is installed and available, but never checks for it explicitly; if jq is missing, the script will fail with a non-obvious error.
Why: This script is intended as a reusable tool; failing with “command not found” or similar when jq is missing is less user-friendly than a clear prerequisite check.
How to Fix: Add an early check for jq (similar to the environment variable checks) and exit with a clear message if it is not installed.
Suggested change:

    if ! command -v jq >/dev/null 2>&1; then
      echo "❌ Error: 'jq' is required but not installed. Please install jq and retry."
      exit 1
    fi
    PR_LIST_RESPONSE=$(cat "$CURL_OUTPUT_FILE")
    # Clean invalid control characters from JSON response
    if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
      echo "⚠️ Invalid JSON detected — cleaning control characters..."
      PR_LIST_RESPONSE=$(echo "$PR_LIST_RESPONSE" | tr -d '\000-\037')
      if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
        echo "$PR_LIST_RESPONSE" > /tmp/invalid_json_debug.json
        echo "❌ Error: JSON is still invalid after cleaning control characters"
        echo "💡 Use 'cat /tmp/invalid_json_debug.json' to inspect the JSON"
        exit 1
      fi
    fi
Priority: 🟡 MEDIUM
Problem: When JSON remains invalid after cleaning, the script writes to a hard-coded /tmp/invalid_json_debug.json file, which can be overwritten by concurrent runs and is inconsistent with the earlier use of temp files.
Why: Using a fixed filename in /tmp risks collisions and makes it harder to correlate debug files with specific runs, especially in shared CI environments.
How to Fix: Use mktemp to create a unique debug file and include its path in the error message so users can inspect the correct file.
Suggested change (keeping the `X` placeholders at the end of the template, as `mktemp` requires):

      if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
        INVALID_JSON_DEBUG_FILE=$(mktemp /tmp/changelog_invalid_json.XXXXXX)
        echo "$PR_LIST_RESPONSE" > "$INVALID_JSON_DEBUG_FILE"
        echo "❌ Error: JSON is still invalid after cleaning control characters"
        echo "💡 Use 'cat $INVALID_JSON_DEBUG_FILE' to inspect the JSON"
        exit 1
      fi
    fi
    echo "=============================================================================="
    echo -e "Found ${GREEN}$PR_MERGED_COUNT${NOCOLOR} PR(s) merged into $SOURCE_BRANCH (filter: $DATE_FILTER)"
    echo "=============================================================================="
    echo -e "## ${GREEN}CHANGELOG${NOCOLOR}"
    echo "$PR_LIST_RESPONSE" | jq -r '.items[] | (.title) + " (" + (.user.login) + ") [#" + (.number | tostring) + "]"' | sort
    printf "\n"
Priority: 🟡 MEDIUM
Problem: The script assumes .items exists and is non-empty; if the search returns zero results, jq -r '.items[] | ...' will exit with a non-zero status under set -e, causing the script to terminate instead of printing an empty changelog.
Why: For a date range or branch with no merged PRs, it’s reasonable to expect the script to succeed and report zero items; failing in this case makes the tool less robust and can break automated workflows.
How to Fix: Guard the jq iteration so that it doesn’t cause the script to exit when there are no items, for example by checking PR_MERGED_COUNT first or by appending || true to the pipeline.
Suggested change:

    echo "=============================================================================="
    echo -e "Found ${GREEN}$PR_MERGED_COUNT${NOCOLOR} PR(s) merged into $SOURCE_BRANCH (filter: $DATE_FILTER)"
    echo "=============================================================================="
    echo -e "## ${GREEN}CHANGELOG${NOCOLOR}"
    if [ "$PR_MERGED_COUNT" -gt 0 ]; then
      echo "$PR_LIST_RESPONSE" | jq -r '.items[] | (.title) + " (" + (.user.login) + ") [#" + (.number | tostring) + "]"' | sort
    else
      echo "(no merged PRs found for the given filter)"
    fi
    printf "\n"
    # Clean up temporary files
    rm -f /tmp/curl_output.json /tmp/curl_error.log
Priority: 🟢 LOW
Problem: The cleanup step still references the hard-coded /tmp/curl_output.json and /tmp/curl_error.log paths, which will be incorrect if you adopt mktemp for safer temp file handling.
Why: Leaving these references unchanged after switching to dynamic temp files will either fail to clean up or attempt to remove unrelated files, leading to clutter in /tmp or confusing behavior.
How to Fix: Update the cleanup to use the temp file variables introduced earlier and guard with -n checks to avoid removing empty paths.
Suggested change:

    # Clean up temporary files
    [ -n "$CURL_OUTPUT_FILE" ] && rm -f "$CURL_OUTPUT_FILE"
    [ -n "$CURL_ERROR_FILE" ] && rm -f "$CURL_ERROR_FILE"
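An alternative worth considering for a script with several early `exit 1` paths is an EXIT trap registered right after the files are created, so cleanup runs on every path, not just the happy one. A sketch (the `CURL_*` variable names are taken from this review's temp-file suggestion, not from the script as merged):

```shell
# Create the temp files, then register cleanup once; the EXIT trap runs
# on normal completion and on any early exit.
CURL_OUTPUT_FILE=$(mktemp)
CURL_ERROR_FILE=$(mktemp)
trap 'rm -f "$CURL_OUTPUT_FILE" "$CURL_ERROR_FILE"' EXIT
```

With the trap in place, the explicit cleanup block at the end of the script becomes unnecessary.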
    on:
      pull_request:
        branches: [master, main,staging, dev,develop]
        types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
Priority: 🟡 MEDIUM
Problem: The branches list in the pull_request trigger is missing a space after the comma between main and staging ([master, main,staging, dev,develop]), which can make the workflow harder to read and is inconsistent with the spacing used for other entries.
Why: While GitHub Actions will parse this correctly, inconsistent formatting reduces readability and can lead to copy-paste or editing mistakes in the future, especially in shared, reusable workflow configurations.
How to Fix: Add a space after the comma so all branch entries are consistently spaced, improving readability and maintainability.
Suggested change:

    on:
      pull_request:
        branches: [master, main, staging, dev, develop]
        types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
    pull_request:
      branches: [master, main,staging, dev,develop]
      types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
Priority: 🟢 LOW
Problem: The types list for the pull_request trigger omits the synchronize event, so updates to existing PRs (new commits pushed) will not re-run the PR lint checks.
Why: Without handling synchronize, PR linting can become stale when new commits are pushed after the initial open/edit events, potentially allowing non-compliant changes to merge without updated checks.
How to Fix: Add synchronize to the types array so that PR lint checks are re-triggered whenever commits are pushed to an existing pull request.
Suggested change:

    pull_request:
      branches: [master, main, staging, dev, develop]
      types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited, synchronize]
    jobs:
      pr-lint:
        name: Common PR Lint Checks
        if: github.base_ref == 'main' || github.base_ref == 'master'
        uses: chargebee/cb-cicd-pipelines/.github/workflows/pr-lint.yml@main
Priority: 🟢 LOW
Problem: The if condition uses github.base_ref to gate execution to main/master, but the workflow is already restricted to specific branches via branches, and base_ref is only set for PR events; this condition is slightly redundant and could be simplified or made more explicit.
Why: Redundant or unclear conditions can confuse future maintainers about which branches are actually intended to run this workflow, especially since branches includes staging, dev, and develop but the job condition excludes them.
How to Fix: Either (a) remove staging, dev, and develop from the branches list if the job should only ever run for main/master, or (b) expand the if condition to match the full intended set of branches, keeping triggers and conditions aligned.
    - uses: actions/github-script@v7
      with:
        github-token: ${{ secrets.GITHUB_TOKEN }}
        script: |
          const owner = context.repo.owner;
          const repo = context.repo.repo;
          const issue_number = context.payload.pull_request.number;

          const marker = '<!-- pr-size-bypass-pending -->';
          const pending = `${marker}
          🛑 The \`pr-size-exception\` label is present. This workflow is **waiting for approvals** from the **[cb-Billing-CAB-reviewers](https://github.com/orgs/chargebee/teams/cb-billing-cab-approvers)**.`;

          // create a new comment when the workflow runs
          await github.rest.issues.createComment({ owner, repo, issue_number, body: pending });
Priority: 🟠 HIGH
Problem: The pre-approval-comment job always creates a new comment on every qualifying event instead of updating or de-duplicating, which will spam the PR with repeated “waiting for approvals” comments.
Why: On every opened, synchronize, edited, labeled, or unlabeled event while the pr-size-exception label is present, this job will post another identical comment. This clutters the PR discussion, makes it harder to see real reviewer feedback, and can annoy reviewers.
How to Fix: Before creating a new comment, list existing comments and only create the “pending” comment if one with the marker is not already present. This keeps the PR clean while still signaling pending approval.
Suggested change:

    - uses: actions/github-script@v7
      with:
        github-token: ${{ secrets.GITHUB_TOKEN }}
        script: |
          const owner = context.repo.owner;
          const repo = context.repo.repo;
          const issue_number = context.payload.pull_request.number;
          const marker = '<!-- pr-size-bypass-pending -->';
          const pending = `${marker}
          🛑 The \`pr-size-exception\` label is present. This workflow is **waiting for approvals** from the **[cb-Billing-CAB-reviewers](https://github.com/orgs/chargebee/teams/cb-billing-cab-approvers)**.`;
          // Avoid duplicate comments: only add if no existing marker comment is present
          const { data: comments } = await github.rest.issues.listComments({
            owner,
            repo,
            issue_number,
            per_page: 100,
          });
          const alreadyPresent = comments.some(comment => comment.body && comment.body.includes(marker));
          if (!alreadyPresent) {
            await github.rest.issues.createComment({ owner, repo, issue_number, body: pending });
          }
    env:
      BYPASS_LABEL: pr-size-exception
    environment: ${{ contains(github.event.pull_request.labels.*.name, 'pr-size-exception') && 'cb-billing-reviewers' || '' }}
    steps:
Priority: 🟡 MEDIUM
Problem: The environment is set to an empty string ('') when the bypass label is not present, which is not a valid environment name and may cause workflow/environment configuration issues.
Why: GitHub environments expect a valid non-empty name; passing an empty string can lead to confusing behavior (e.g., environment protection rules not applying as expected) or future breakage if GitHub tightens validation.
How to Fix: Use a conditional that omits the environment key when the label is absent, or map to a clearly named default environment instead of an empty string.
Suggested change:

    env:
      BYPASS_LABEL: pr-size-exception
    environment: ${{ contains(github.event.pull_request.labels.*.name, 'pr-size-exception') && 'cb-billing-reviewers' || 'default' }}
    steps:
    DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
    WF_ID="${{ github.run_id }}"
    S3_KEY="${REPO}/datas/${WF_ID}.json"
    printf '{"repo":"%s","date":"%s","pr_link":"%s","wf_id":"%s"}\n' "$REPO" "$DATE" "$PR_LINK" "$WF_ID" | \
      aws s3 cp - "s3://prsizebypassdata/${S3_KEY}" --content-type application/json
    echo "Recorded to s3://prsizebypassdata/${S3_KEY}"
Priority: 🟡 MEDIUM
Problem: The S3 key path uses datas ("${REPO}/datas/${WF_ID}.json") which is likely a typo for data, making the bucket layout inconsistent and harder to query or reason about.
Why: Misspelled or inconsistent prefixes in S3 keys complicate tooling, queries, and documentation around audit logs; future consumers may look under data/ and miss these records.
How to Fix: Rename the prefix from datas to data (or another intentional, documented name) to keep the S3 structure predictable and self-explanatory.
Suggested change:

    DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
    WF_ID="${{ github.run_id }}"
    S3_KEY="${REPO}/data/${WF_ID}.json"
    printf '{"repo":"%s","date":"%s","pr_link":"%s","wf_id":"%s"}\n' "$REPO" "$DATE" "$PR_LINK" "$WF_ID" | \
      aws s3 cp - "s3://prsizebypassdata/${S3_KEY}" --content-type application/json
    echo "Recorded to s3://prsizebypassdata/${S3_KEY}"
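The shape of the audit record is easy to exercise locally. This sketch builds the same JSON line with placeholder values (the repo name, PR link, and run id here are made up; the real step streams the line straight to `aws s3 cp`):

```shell
# Placeholder inputs; in the workflow these come from the run context.
REPO="chargebee/example-repo"
DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
PR_LINK="https://github.com/chargebee/example-repo/pull/1"
WF_ID="123456"
S3_KEY="${REPO}/data/${WF_ID}.json"
# Build the single-line JSON audit record exactly as the step does.
RECORD=$(printf '{"repo":"%s","date":"%s","pr_link":"%s","wf_id":"%s"}' \
  "$REPO" "$DATE" "$PR_LINK" "$WF_ID")
```

Keeping the record a single predictable line makes the S3 objects easy to scan with downstream tooling.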
    excludePaths: |
      .github/**
      .cursor/**


    - name: Ensure required check passes when bypassed
Priority: 🟢 LOW
Problem: There are two consecutive blank lines in the excludePaths block, which is minor formatting noise and inconsistent with the otherwise compact YAML style.
Why: While not functionally harmful, unnecessary blank lines can make the workflow harder to scan and may be flagged by linters or style checks if added later.
How to Fix: Remove the extra blank lines to keep the excludePaths list tight and consistent.
Suggested change:

    excludePaths: |
      .github/**
      .cursor/**
    - name: Ensure required check passes when bypassed
PR Complexity Score: 1.2 - Trivial
- Lines Changed: 81
- Files Changed: 1
- Complexity Added: 0
- Raw Score: 4.62
⚠️ Sensitive Data Detected
| Line | Type | Preview |
|---|---|---|
| 13 | Secret: Secret Keyword | [Secret Keyword] |
.github/workflows/pr-size-check.yml
| Line | Type | Preview |
|---|---|---|
| 66 | PII: Phone Number | 127322177288 |
High-level summary
This PR introduces an automated GitHub Actions workflow to enforce and monitor pull request size limits on the master branch. It:
- Automatically checks PR size and flags large changes.
- Supports a `pr-size-exception` bypass label with an approval flow.
- Records bypass approvals to S3 for auditability.
- Posts a comment when a bypass is requested and pending approvals.
Key functionalities and changes
- Adds a new `PR Size Check` GitHub Actions workflow triggered on PR events targeting `master`.
- Implements a `pre-approval-comment` job that:
  - Detects PRs with the `pr-size-exception` label (excluding bot, revert, and sync branches).
  - Posts a standardized comment indicating that CAB approvals are required.
- Implements a `pr-size-check` job that:
  - Runs only for non-bot PRs into `master`, excluding revert and parent-branch-sync branches.
  - Uses the `chargebee/cb-cicd-pipelines` `pr-size-check` action with:
    - Error threshold: 250 lines.
    - Warning threshold: 200 lines.
    - Excluded paths: `.github/**`, `.cursor/**`.
  - Treats `pr-size-exception` as a bypass:
    - Marks the check as successful when bypass is active.
    - Configures AWS OIDC credentials.
    - Writes a JSON record of the bypass (repo, date, PR link, workflow ID) to an S3 bucket (`prsizebypassdata`) under a repo- and workflow-specific key.
  - Uses an environment (`cb-billing-reviewers`) when the bypass label is present, tying into environment-based approvals.
File-level change summary
| File | Change summary |
|---|---|
| .github/workflows/pr-size-check.yml | New workflow that enforces PR size limits on master, posts pending-approval comments when pr-size-exception is used, conditionally runs size checks, supports bypass via label, configures AWS OIDC when bypassed, and records bypass approvals to S3 for auditing. |
This PR adds a standardized PR template, a changelog generation script, a PR lint workflow, and a PR size check workflow.