29 changes: 29 additions & 0 deletions .github/pull_request_template.md
@@ -0,0 +1,29 @@
## CHANGELOG
REPLACE_ME_WITH_CHANGELOG

## SUMMARY
REPLACE_ME_WITH_SUMMARY_OF_THE_CHANGES

## FUNCTIONAL AUTOMATION CHANGES PR
- [ ] Yes
- If Yes, PR :
- [ ] No
- If No, Reason:

## AUTOMATION TEST REPORT URL
REPLACE_ME_WITH_TEST_REPORT_URL

## AREAS OF IMPACT
REPLACE_ME_WITH_AREAS_OF_IMPACT_OR_NA

## TYPE OF CHANGE
- [ ] 🐞 Bugfix
- [ ] 🌟 Feature
- [ ] ✨ Enhancement
- [ ] 🧪 Unit Test Cases
- [ ] 📔 Documentation
- [ ] ⚙️ Chore - Build Related / Configuration / Others


## DOCUMENTATION
REPLACE_ME_WITH_DOCUMENTATION_LINK_OR_NA
Comment on lines +28 to +29

Priority: 🟢 LOW

Problem: The file is missing a trailing newline (\n) at the end of the file.

Why: Many tools (linters, formatters, POSIX utilities, and some Git diff viewers) expect text files to end with a newline; missing it can cause noisy diffs and minor interoperability issues.

How to Fix: Add a newline at the end of the file after the last line so the file ends with a proper line terminator.

Suggested change: end the file with a newline after the final `REPLACE_ME_WITH_DOCUMENTATION_LINK_OR_NA` line (the before/after text is otherwise identical; only the trailing line terminator is added).

31 changes: 31 additions & 0 deletions .github/scripts/README.md
@@ -0,0 +1,31 @@
# Scripts

## Generate changelog

Lists PRs merged into a branch (excludes "Parent branch sync" and bot-authored PRs).

**Prerequisites:** `jq`, and GitHub credentials as env vars.

### Set credentials (once per terminal)
Comment on lines +5 to +9

Priority: 🟡 MEDIUM

Problem: The prerequisites mention jq and "GitHub credentials as env vars" but do not specify that the GitHub token must have repo (and possibly read:org if needed) scopes, which can lead to confusing authentication/authorization failures when using the script.

Why: Without explicit scope guidance, users may create a PAT with insufficient permissions, causing the script to fail with opaque GitHub API errors; this degrades usability and makes troubleshooting harder.

How to Fix: Clarify in the prerequisites that GH_PAT must be a GitHub personal access token with at least repo scope (and any other required scopes, if applicable), so users can provision the correct token up front.

Suggested change

Before:

```
Lists PRs merged into a branch (excludes "Parent branch sync" and bot-authored PRs).

**Prerequisites:** `jq`, and GitHub credentials as env vars.

### Set credentials (once per terminal)
```

After:

```
Lists PRs merged into a branch (excludes "Parent branch sync" and bot-authored PRs).

**Prerequisites:** `jq`, and GitHub credentials as env vars (`GH_PAT` should be a personal access token with at least `repo` scope).

### Set credentials (once per terminal)
```
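One quick way to sanity-check a token before running the script: for classic PATs, GitHub reports the granted scopes in the `x-oauth-scopes` response header. A small sketch (the `has_repo_scope` helper is hypothetical, assuming bash and standard POSIX tools):

```bash
# Return success if the given x-oauth-scopes header value includes 'repo'.
# Hypothetical helper; the header value looks like "repo, read:org".
has_repo_scope() {
  printf '%s\n' "$1" | tr ',' '\n' | tr -d ' ' | grep -qx 'repo'
}

# With network access and credentials set, the header value can be fetched like:
#   curl -sI -u "$GH_USERNAME:$GH_PAT" https://api.github.com/user \
#     | tr -d '\r' | awk -F': ' 'tolower($1) == "x-oauth-scopes" {print $2}'
```

Note that fine-grained PATs do not report scopes via this header, so this check only applies to classic tokens.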


```bash
export GH_USERNAME=your-github-username
export GH_PAT=your-github-personal-access-token
```

Comment on lines +12 to +13

Priority: 🟢 LOW

Problem: The example environment variable names GH_USERNAME and GH_PAT are introduced without explicitly tying them to the script’s expectations, which may leave users unsure whether these exact names are required or just examples.

Why: Ambiguity around required environment variable names can cause the script to fail at runtime if users choose different names, leading to avoidable setup friction.

How to Fix: Explicitly state that the script expects GH_USERNAME and GH_PAT to be set with those exact names, or clarify that these are the required variable names used by generate-changelog.sh.

### Commands

From repo root:

| What you want | Command |
|---------------|---------|
| PRs merged into **current branch** (last 30 days) | `./.github/scripts/generate-changelog.sh` |
| PRs merged into **master** (last 30 days) | `./.github/scripts/generate-changelog.sh master` |
| PRs merged into **master** since a date | `./.github/scripts/generate-changelog.sh master "merged:>=2025-01-01"` |

### Arguments

1. **Branch** (optional) — default: current branch
2. **Date filter** (optional) — default: last 30 days (e.g. `merged:>=2025-01-01`)

Output includes a GitHub search URL to verify results.
84 changes: 84 additions & 0 deletions .github/scripts/generate-changelog.sh
@@ -0,0 +1,84 @@
#!/bin/bash

set -e

# Check required environment variables
if [[ -z "$GH_USERNAME" ]]; then
  echo "❌ Error: Environment variable GH_USERNAME not found"
  exit 1
fi

if [[ -z "$GH_PAT" ]]; then
  echo "❌ Error: Environment variable GH_PAT not found"
  exit 1
fi

# Optional: Branch name (defaults to current branch if not provided)
SOURCE_BRANCH="${1:-$(git branch --show-current)}"
# Optional: Date filter (defaults to last 30 days if not provided)
DATE_FILTER="${2:-merged:>=$(date -u -v-30d +%Y-%m-%d 2>/dev/null || date -u -d '30 days ago' +%Y-%m-%d)}"

# Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
REPO="chargebee/chargebee-react-native"
Comment on lines +16 to +22

Priority: 🟠 HIGH

Problem: The default DATE_FILTER string is constructed using a mix of macOS (date -v-30d) and GNU (date -d '30 days ago') syntax inside the same parameter expansion, which is hard to read and can behave unexpectedly if either command is unavailable or behaves differently across environments.

Why: This script is likely to be run in different environments (local dev, CI runners, possibly containers). Embedding both variants inline makes the expression brittle and harder to debug; a failure in both date invocations will silently result in an empty DATE_FILTER and a malformed GitHub query.

How to Fix: Compute the date in a small conditional block first (trying macOS syntax, then GNU syntax, and failing clearly if both are unavailable), then use that variable in DATE_FILTER. This improves portability and error handling.

Suggested change

Before:

```bash
# Optional: Branch name (defaults to current branch if not provided)
SOURCE_BRANCH="${1:-$(git branch --show-current)}"
# Optional: Date filter (defaults to last 30 days if not provided)
DATE_FILTER="${2:-merged:>=$(date -u -v-30d +%Y-%m-%d 2>/dev/null || date -u -d '30 days ago' +%Y-%m-%d)}"

# Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
REPO="chargebee/chargebee-react-native"
```

After:

```bash
# Optional: Branch name (defaults to current branch if not provided)
SOURCE_BRANCH="${1:-$(git branch --show-current)}"
# Optional: Date filter (defaults to last 30 days if not provided)
if [ -z "$2" ]; then
  # Try macOS BSD date first, then GNU date; fail clearly if both are unavailable
  if LAST_30_DAYS=$(date -u -v-30d +%Y-%m-%d 2>/dev/null); then
    :
  elif LAST_30_DAYS=$(date -u -d '30 days ago' +%Y-%m-%d 2>/dev/null); then
    :
  else
    echo "❌ Error: Unable to compute date 30 days ago; 'date' command does not support required options"
    exit 1
  fi
  DATE_FILTER="merged:>=$LAST_30_DAYS"
else
  DATE_FILTER="$2"
fi

# Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
REPO="chargebee/chargebee-react-native"
```


echo "🔍 Searching for PRs merged into $SOURCE_BRANCH..."
Comment on lines +21 to +24

Priority: 🟡 MEDIUM

Problem: The REPO variable is hard-coded to chargebee/chargebee-react-native, which makes the script non-reusable across repositories and contradicts the comment that it is “set per-repo when this file is pushed”.

Why: If this script is copied to another repository without updating REPO, it will query and generate changelog entries for the wrong repo, which is a subtle and potentially confusing failure mode.

How to Fix: Allow REPO to be overridden via an environment variable and fall back to a sensible default, or fail with a clear error if REPO is not set, instead of hard-coding a specific repository.

Suggested change

Before:

```bash
# Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
REPO="chargebee/chargebee-react-native"

echo "🔍 Searching for PRs merged into $SOURCE_BRANCH..."
```

After:

```bash
# Repo is set per-repo when this file is pushed (placeholder replaced by upload script)
: "${REPO:=chargebee/chargebee-react-native}"

echo "🔍 Searching for PRs merged into $SOURCE_BRANCH..."
```


# GitHub API call with error handling
HTTP_STATUS=$(curl -s -w "%{http_code}" -G -u "$GH_USERNAME:$GH_PAT" \
  "https://api.github.com/search/issues" \
  --data-urlencode "q=NOT \"Parent branch sync\" in:title is:pr repo:$REPO is:merged base:$SOURCE_BRANCH merged:$DATE_FILTER -author:app/distributed-gitflow-app" \
  -o /tmp/curl_output.json \
  2>/tmp/curl_error.log)

CURL_EXIT_CODE=$?
Comment on lines +26 to +33

Priority: 🟡 MEDIUM

Problem: The script writes API responses and errors to fixed paths under /tmp (/tmp/curl_output.json and /tmp/curl_error.log), which can cause collisions if multiple instances run concurrently and may leak data between runs.

Why: Using static filenames in a shared temp directory is not safe for concurrent executions and can make debugging harder because logs from different runs overwrite each other.

How to Fix: Use mktemp to create unique temporary files for each run and ensure they are cleaned up at the end; store their paths in variables used consistently throughout the script.

Suggested change

Before:

```bash
# GitHub API call with error handling
HTTP_STATUS=$(curl -s -w "%{http_code}" -G -u "$GH_USERNAME:$GH_PAT" \
  "https://api.github.com/search/issues" \
  --data-urlencode "q=NOT \"Parent branch sync\" in:title is:pr repo:$REPO is:merged base:$SOURCE_BRANCH merged:$DATE_FILTER -author:app/distributed-gitflow-app" \
  -o /tmp/curl_output.json \
  2>/tmp/curl_error.log)

CURL_EXIT_CODE=$?
```

After:

```bash
# GitHub API call with error handling
CURL_OUTPUT_FILE=$(mktemp /tmp/changelog_curl_output.XXXXXX.json)
CURL_ERROR_FILE=$(mktemp /tmp/changelog_curl_error.XXXXXX.log)
HTTP_STATUS=$(curl -s -w "%{http_code}" -G -u "$GH_USERNAME:$GH_PAT" \
  "https://api.github.com/search/issues" \
  --data-urlencode "q=NOT \"Parent branch sync\" in:title is:pr repo:$REPO is:merged base:$SOURCE_BRANCH merged:$DATE_FILTER -author:app/distributed-gitflow-app" \
  -o "$CURL_OUTPUT_FILE" \
  2>"$CURL_ERROR_FILE")

CURL_EXIT_CODE=$?
```


echo "🌐 API call status: $HTTP_STATUS"

if [ $CURL_EXIT_CODE -ne 0 ]; then
  echo "❌ Error: curl request failed with exit code $CURL_EXIT_CODE"
  echo "Error details: $(cat /tmp/curl_error.log)"
  exit 1
fi

if [ "$HTTP_STATUS" -ne 200 ]; then
  echo "❌ Error: API returned HTTP status $HTTP_STATUS"
  echo "Response: $(cat /tmp/curl_output.json)"
  exit 1
fi
Comment on lines +37 to +47

Priority: 🟡 MEDIUM

Problem: When curl fails or the API returns a non-200 status, the script prints the contents of the error and output files inline using command substitution, which can mangle large or multi-line responses and makes it harder to distinguish message boundaries.

Why: Using $(cat file) inside echo collapses newlines and can make JSON or error logs unreadable, complicating debugging when something goes wrong.

How to Fix: Use cat directly to stream the file contents, preserving formatting, and reference the temp file variables introduced for safer temp handling.

Suggested change

Before:

```bash
if [ $CURL_EXIT_CODE -ne 0 ]; then
  echo "❌ Error: curl request failed with exit code $CURL_EXIT_CODE"
  echo "Error details: $(cat /tmp/curl_error.log)"
  exit 1
fi

if [ "$HTTP_STATUS" -ne 200 ]; then
  echo "❌ Error: API returned HTTP status $HTTP_STATUS"
  echo "Response: $(cat /tmp/curl_output.json)"
  exit 1
fi
```

After:

```bash
if [ $CURL_EXIT_CODE -ne 0 ]; then
  echo "❌ Error: curl request failed with exit code $CURL_EXIT_CODE"
  echo "Error details:"
  cat "$CURL_ERROR_FILE"
  exit 1
fi

if [ "$HTTP_STATUS" -ne 200 ]; then
  echo "❌ Error: API returned HTTP status $HTTP_STATUS"
  echo "Response:"
  cat "$CURL_OUTPUT_FILE"
  exit 1
fi
```


PR_LIST_RESPONSE=$(cat /tmp/curl_output.json)

# Clean invalid control characters from JSON response
if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
  echo "⚠️ Invalid JSON detected — cleaning control characters..."
  PR_LIST_RESPONSE=$(echo "$PR_LIST_RESPONSE" | tr -d '\000-\037')
Comment on lines +49 to +54

Priority: 🟡 MEDIUM

Problem: Lines 38-39 and 43-49: The script assumes jq is installed and available, but never checks for it explicitly; if jq is missing, the script will fail with a non-obvious error.

Why: This script is intended as a reusable tool; failing with “command not found” or similar when jq is missing is less user-friendly than a clear prerequisite check.

How to Fix: Add an early check for jq (similar to the environment variable checks) and exit with a clear message if it is not installed.

Suggested change

Before:

```bash
PR_LIST_RESPONSE=$(cat /tmp/curl_output.json)

# Clean invalid control characters from JSON response
if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
  echo "⚠️ Invalid JSON detected — cleaning control characters..."
  PR_LIST_RESPONSE=$(echo "$PR_LIST_RESPONSE" | tr -d '\000-\037')
```

After:

```bash
if ! command -v jq >/dev/null 2>&1; then
  echo "❌ Error: 'jq' is required but not installed. Please install jq and retry."
  exit 1
fi

PR_LIST_RESPONSE=$(cat "$CURL_OUTPUT_FILE")

# Clean invalid control characters from JSON response
if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
  echo "⚠️ Invalid JSON detected — cleaning control characters..."
  PR_LIST_RESPONSE=$(echo "$PR_LIST_RESPONSE" | tr -d '\000-\037')
```


  if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
    echo "$PR_LIST_RESPONSE" > /tmp/invalid_json_debug.json
    echo "❌ Error: JSON is still invalid after cleaning control characters"
    echo "💡 Use 'cat /tmp/invalid_json_debug.json' to inspect the JSON"
    exit 1
  fi
fi
Comment on lines +56 to +62

Priority: 🟡 MEDIUM

Problem: When JSON remains invalid after cleaning, the script writes to a hard-coded /tmp/invalid_json_debug.json file, which can be overwritten by concurrent runs and is inconsistent with the earlier use of temp files.

Why: Using a fixed filename in /tmp risks collisions and makes it harder to correlate debug files with specific runs, especially in shared CI environments.

How to Fix: Use mktemp to create a unique debug file and include its path in the error message so users can inspect the correct file.

Suggested change

Before:

```bash
  if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
    echo "$PR_LIST_RESPONSE" > /tmp/invalid_json_debug.json
    echo "❌ Error: JSON is still invalid after cleaning control characters"
    echo "💡 Use 'cat /tmp/invalid_json_debug.json' to inspect the JSON"
    exit 1
  fi
fi
```

After:

```bash
  if ! echo "$PR_LIST_RESPONSE" | jq . >/dev/null 2>&1; then
    INVALID_JSON_DEBUG_FILE=$(mktemp /tmp/changelog_invalid_json.XXXXXX.json)
    echo "$PR_LIST_RESPONSE" > "$INVALID_JSON_DEBUG_FILE"
    echo "❌ Error: JSON is still invalid after cleaning control characters"
    echo "💡 Use 'cat $INVALID_JSON_DEBUG_FILE' to inspect the JSON"
    exit 1
  fi
fi
```


PR_MERGED_COUNT=$(echo "$PR_LIST_RESPONSE" | jq '.total_count')

# Color codes
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
NOCOLOR='\033[0m'

echo "=============================================================================="
echo -e "Found ${GREEN}$PR_MERGED_COUNT${NOCOLOR} PR(s) merged into $SOURCE_BRANCH (filter: $DATE_FILTER)"
echo "=============================================================================="
echo -e "## ${GREEN}CHANGELOG${NOCOLOR}"
echo "$PR_LIST_RESPONSE" | jq -r '.items[] | (.title) + " (" + (.user.login) + ") [#" + (.number | tostring) + "]"' | sort
printf "\n"
Comment on lines +71 to +76

Priority: 🟡 MEDIUM

Problem: The script assumes .items exists and is non-empty; if the search returns zero results, jq -r '.items[] | ...' will exit with a non-zero status under set -e, causing the script to terminate instead of printing an empty changelog.

Why: For a date range or branch with no merged PRs, it’s reasonable to expect the script to succeed and report zero items; failing in this case makes the tool less robust and can break automated workflows.

How to Fix: Guard the jq iteration so that it doesn’t cause the script to exit when there are no items, for example by checking PR_MERGED_COUNT first or by appending || true to the pipeline.

Suggested change

Before:

```bash
echo "=============================================================================="
echo -e "Found ${GREEN}$PR_MERGED_COUNT${NOCOLOR} PR(s) merged into $SOURCE_BRANCH (filter: $DATE_FILTER)"
echo "=============================================================================="
echo -e "## ${GREEN}CHANGELOG${NOCOLOR}"
echo "$PR_LIST_RESPONSE" | jq -r '.items[] | (.title) + " (" + (.user.login) + ") [#" + (.number | tostring) + "]"' | sort
printf "\n"
```

After:

```bash
echo "=============================================================================="
echo -e "Found ${GREEN}$PR_MERGED_COUNT${NOCOLOR} PR(s) merged into $SOURCE_BRANCH (filter: $DATE_FILTER)"
echo "=============================================================================="
echo -e "## ${GREEN}CHANGELOG${NOCOLOR}"
if [ "$PR_MERGED_COUNT" -gt 0 ]; then
  echo "$PR_LIST_RESPONSE" | jq -r '.items[] | (.title) + " (" + (.user.login) + ") [#" + (.number | tostring) + "]"' | sort
else
  echo "(no merged PRs found for the given filter)"
fi
printf "\n"
```

echo "=============================================================================="
echo -e "${GREEN}GitHub Search URL (to verify, if required)${NOCOLOR}"
BRANCH_ENCODED=$(echo "$SOURCE_BRANCH" | sed 's/ /%20/g')
echo "https://github.com/$REPO/pulls?q=NOT+%22Parent+branch+sync%22+in%3Atitle+is%3Apr+is%3Amerged+base%3A$BRANCH_ENCODED+merged%3A$DATE_FILTER+-author%3Aapp%2Fdistributed-gitflow-app"
echo "=============================================================================="

# Clean up temporary files
rm -f /tmp/curl_output.json /tmp/curl_error.log
Comment on lines +83 to +84

Priority: 🟢 LOW

Problem: The cleanup step still references the hard-coded /tmp/curl_output.json and /tmp/curl_error.log paths, which will be incorrect if you adopt mktemp for safer temp file handling.

Why: Leaving these references unchanged after switching to dynamic temp files will either fail to clean up or attempt to remove unrelated files, leading to clutter in /tmp or confusing behavior.

How to Fix: Update the cleanup to use the temp file variables introduced earlier and guard with -n checks to avoid removing empty paths.

Suggested change

Before:

```bash
# Clean up temporary files
rm -f /tmp/curl_output.json /tmp/curl_error.log
```

After:

```bash
# Clean up temporary files
[ -n "$CURL_OUTPUT_FILE" ] && rm -f "$CURL_OUTPUT_FILE"
[ -n "$CURL_ERROR_FILE" ] && rm -f "$CURL_ERROR_FILE"
```
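If the script adopts mktemp-based temp files, an EXIT trap can replace the explicit cleanup step entirely: it also fires on the early `exit 1` paths and under `set -e`, so the temp files are removed no matter how the script terminates. A minimal sketch of that pattern, assuming bash (the `echo` stands in for the real curl call):

```bash
# Create unique temp files and guarantee cleanup, even when the script
# exits early: the EXIT trap fires on both normal and error exits.
CURL_OUTPUT_FILE=$(mktemp) || exit 1
CURL_ERROR_FILE=$(mktemp) || exit 1
trap 'rm -f "$CURL_OUTPUT_FILE" "$CURL_ERROR_FILE"' EXIT

# Stand-in for the curl call writing its response body to the temp file.
echo '{"total_count": 0}' > "$CURL_OUTPUT_FILE"
```

With the trap in place, the trailing `rm -f` step at the end of the script becomes unnecessary.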

13 changes: 13 additions & 0 deletions .github/workflows/pr-lint.yml
@@ -0,0 +1,13 @@
name: Common PR Lint

on:
  pull_request:
    branches: [master, main,staging, dev,develop]
    types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
Comment on lines +3 to +6

Priority: 🟡 MEDIUM

Problem: The branches list in the pull_request trigger is missing a space after the comma between main and staging ([master, main,staging, dev,develop]), which can make the workflow harder to read and is inconsistent with the spacing used for other entries.

Why: While GitHub Actions will parse this correctly, inconsistent formatting reduces readability and can lead to copy-paste or editing mistakes in the future, especially in shared, reusable workflow configurations.

How to Fix: Add a space after the comma so all branch entries are consistently spaced, improving readability and maintainability.

Suggested change

Before:

```yaml
on:
  pull_request:
    branches: [master, main,staging, dev,develop]
    types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
```

After:

```yaml
on:
  pull_request:
    branches: [master, main, staging, dev, develop]
    types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
```

Comment on lines +4 to +6

Priority: 🟢 LOW

Problem: The types list for the pull_request trigger omits the synchronize event, so updates to existing PRs (new commits pushed) will not re-run the PR lint checks.

Why: Without handling synchronize, PR linting can become stale when new commits are pushed after the initial open/edit events, potentially allowing non-compliant changes to merge without updated checks.

How to Fix: Add synchronize to the types array so that PR lint checks are re-triggered whenever commits are pushed to an existing pull request.

Suggested change

Before:

```yaml
  pull_request:
    branches: [master, main,staging, dev,develop]
    types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited]
```

After:

```yaml
  pull_request:
    branches: [master, main, staging, dev, develop]
    types: [ready_for_review, reopened, review_requested, review_request_removed, opened, edited, synchronize]
```


jobs:
  pr-lint:
    name: Common PR Lint Checks
    if: github.base_ref == 'main' || github.base_ref == 'master'
    uses: chargebee/cb-cicd-pipelines/.github/workflows/pr-lint.yml@main
Comment on lines +8 to +12

Priority: 🟢 LOW

Problem: The if condition uses github.base_ref to gate execution to main/master, but the workflow is already restricted to specific branches via branches, and base_ref is only set for PR events; this condition is slightly redundant and could be simplified or made more explicit.

Why: Redundant or unclear conditions can confuse future maintainers about which branches are actually intended to run this workflow, especially since branches includes staging, dev, and develop but the job condition excludes them.

How to Fix: Either (a) remove staging, dev, and develop from the branches list if the job should only ever run for main/master, or (b) expand the if condition to match the full intended set of branches, keeping triggers and conditions aligned.

Suggested change (the suggestion block repeats the current lines unchanged; apply one of the two options described above)

```yaml
jobs:
  pr-lint:
    name: Common PR Lint Checks
    if: github.base_ref == 'main' || github.base_ref == 'master'
    uses: chargebee/cb-cicd-pipelines/.github/workflows/pr-lint.yml@main
```

    secrets: inherit
81 changes: 81 additions & 0 deletions .github/workflows/pr-size-check.yml
@@ -0,0 +1,81 @@
name: PR Size Check
on:
  pull_request:
    types: [ reopened, opened, synchronize, edited, labeled, unlabeled ]
    branches:
      - master


jobs:
  pre-approval-comment:
    name: Announce pending bypass approval
    if: ${{ github.event.pull_request.user.login != 'distributed-gitflow-app[bot]' &&
      !startsWith(github.head_ref, 'revert-') &&
      !startsWith(github.head_ref, 'parent-branch-sync/') &&
      contains(github.event.pull_request.labels.*.name, 'pr-size-exception') }}
    runs-on: graviton-small-runner
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const owner = context.repo.owner;
            const repo = context.repo.repo;
            const issue_number = context.payload.pull_request.number;

            const marker = '<!-- pr-size-bypass-pending -->';
            const pending = `${marker}
            🛑 The \`pr-size-exception\` label is present. This workflow is **waiting for approvals** from the **[cb-Billing-CAB-reviewers](https://github.com/orgs/chargebee/teams/cb-billing-cab-approvers)**.`;

            // create a new comment when the workflow runs
            await github.rest.issues.createComment({ owner, repo, issue_number, body: pending });
Comment on lines +21 to +34

Priority: 🟠 HIGH

Problem: The pre-approval-comment job always creates a new comment on every qualifying event instead of updating or de-duplicating, which will spam the PR with repeated “waiting for approvals” comments.

Why: On every opened, synchronize, edited, labeled, or unlabeled event while the pr-size-exception label is present, this job will post another identical comment. This clutters the PR discussion, makes it harder to see real reviewer feedback, and can annoy reviewers.

How to Fix: Before creating a new comment, list existing comments and only create the “pending” comment if one with the marker is not already present. This keeps the PR clean while still signaling pending approval.

Suggested change

Before:

```yaml
      - uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const owner = context.repo.owner;
            const repo = context.repo.repo;
            const issue_number = context.payload.pull_request.number;

            const marker = '<!-- pr-size-bypass-pending -->';
            const pending = `${marker}
            🛑 The \`pr-size-exception\` label is present. This workflow is **waiting for approvals** from the **[cb-Billing-CAB-reviewers](https://github.com/orgs/chargebee/teams/cb-billing-cab-approvers)**.`;

            // create a new comment when the workflow runs
            await github.rest.issues.createComment({ owner, repo, issue_number, body: pending });
```

After:

```yaml
      - uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const owner = context.repo.owner;
            const repo = context.repo.repo;
            const issue_number = context.payload.pull_request.number;

            const marker = '<!-- pr-size-bypass-pending -->';
            const pending = `${marker}
            🛑 The \`pr-size-exception\` label is present. This workflow is **waiting for approvals** from the **[cb-Billing-CAB-reviewers](https://github.com/orgs/chargebee/teams/cb-billing-cab-approvers)**.`;

            // Avoid duplicate comments: only add if no existing marker comment is present
            const { data: comments } = await github.rest.issues.listComments({
              owner,
              repo,
              issue_number,
              per_page: 100,
            });
            const alreadyPresent = comments.some(comment => comment.body && comment.body.includes(marker));
            if (!alreadyPresent) {
              await github.rest.issues.createComment({ owner, repo, issue_number, body: pending });
            }
```

  pr-size-check:
    name: Check PR size
    if: ${{ (github.base_ref == 'master') && github.event.pull_request.user.login != 'distributed-gitflow-app[bot]' && !startsWith(github.head_ref, 'revert-') && !startsWith(github.head_ref, 'parent-branch-sync/') }}
    runs-on: graviton-small-runner
    permissions:
      contents: read
      pull-requests: write
      id-token: write
    env:
      BYPASS_LABEL: pr-size-exception
    environment: ${{ contains(github.event.pull_request.labels.*.name, 'pr-size-exception') && 'cb-billing-reviewers' || '' }}
    steps:
Comment on lines +43 to +46

Priority: 🟡 MEDIUM

Problem: The environment is set to an empty string ('') when the bypass label is not present, which is not a valid environment name and may cause workflow/environment configuration issues.

Why: GitHub environments expect a valid non-empty name; passing an empty string can lead to confusing behavior (e.g., environment protection rules not applying as expected) or future breakage if GitHub tightens validation.

How to Fix: Use a conditional that omits the environment key when the label is absent, or map to a clearly named default environment instead of an empty string.

Suggested change

Before:

```yaml
    env:
      BYPASS_LABEL: pr-size-exception
    environment: ${{ contains(github.event.pull_request.labels.*.name, 'pr-size-exception') && 'cb-billing-reviewers' || '' }}
    steps:
```

After:

```yaml
    env:
      BYPASS_LABEL: pr-size-exception
    environment: ${{ contains(github.event.pull_request.labels.*.name, 'pr-size-exception') && 'cb-billing-reviewers' || 'default' }}
    steps:
```

      - uses: chargebee/cb-cicd-pipelines/.github/actions/pr-size-check@v4.20.3
        if: ${{ !contains(github.event.pull_request.labels.*.name, env.BYPASS_LABEL) }}
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}
          errorSize: 250
          warningSize: 200
          excludePaths: |
            .github/**
            .cursor/**


      - name: Ensure required check passes when bypassed
Comment on lines +53 to +58

Priority: 🟢 LOW

Problem: There are two consecutive blank lines in the excludePaths block, which is minor formatting noise and inconsistent with the otherwise compact YAML style.

Why: While not functionally harmful, unnecessary blank lines can make the workflow harder to scan and may be flagged by linters or style checks if added later.

How to Fix: Remove the extra blank lines to keep the excludePaths list tight and consistent.

Suggested change

Before:

```yaml
          excludePaths: |
            .github/**
            .cursor/**


      - name: Ensure required check passes when bypassed
```

After:

```yaml
          excludePaths: |
            .github/**
            .cursor/**

      - name: Ensure required check passes when bypassed
```

        if: ${{ contains(github.event.pull_request.labels.*.name, env.BYPASS_LABEL) }}
        run: echo "Bypass active — marking job successful."

      - name: Configure OIDC authentication
        if: ${{ contains(github.event.pull_request.labels.*.name, env.BYPASS_LABEL) }}
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::127322177288:role/OIDC_S3
          role-session-name: GithubActionsSession
          role-duration-seconds: 900
          aws-region: us-east-1

      - name: Record bypass approval to S3
        if: ${{ contains(github.event.pull_request.labels.*.name, env.BYPASS_LABEL) }}
        run: |
          REPO="${{ github.repository }}"
          PR_LINK="https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }}"
          DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
          WF_ID="${{ github.run_id }}"
          S3_KEY="${REPO}/datas/${WF_ID}.json"
          printf '{"repo":"%s","date":"%s","pr_link":"%s","wf_id":"%s"}\n' "$REPO" "$DATE" "$PR_LINK" "$WF_ID" | \
            aws s3 cp - "s3://prsizebypassdata/${S3_KEY}" --content-type application/json
          echo "Recorded to s3://prsizebypassdata/${S3_KEY}"
Comment on lines +76 to +81

Priority: 🟡 MEDIUM

Problem: The S3 key path uses datas ("${REPO}/datas/${WF_ID}.json") which is likely a typo for data, making the bucket layout inconsistent and harder to query or reason about.

Why: Misspelled or inconsistent prefixes in S3 keys complicate tooling, queries, and documentation around audit logs; future consumers may look under data/ and miss these records.

How to Fix: Rename the prefix from datas to data (or another intentional, documented name) to keep the S3 structure predictable and self-explanatory.

Suggested change

Before:

```bash
          DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
          WF_ID="${{ github.run_id }}"
          S3_KEY="${REPO}/datas/${WF_ID}.json"
          printf '{"repo":"%s","date":"%s","pr_link":"%s","wf_id":"%s"}\n' "$REPO" "$DATE" "$PR_LINK" "$WF_ID" | \
            aws s3 cp - "s3://prsizebypassdata/${S3_KEY}" --content-type application/json
          echo "Recorded to s3://prsizebypassdata/${S3_KEY}"
```

After:

```bash
          DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
          WF_ID="${{ github.run_id }}"
          S3_KEY="${REPO}/data/${WF_ID}.json"
          printf '{"repo":"%s","date":"%s","pr_link":"%s","wf_id":"%s"}\n' "$REPO" "$DATE" "$PR_LINK" "$WF_ID" | \
            aws s3 cp - "s3://prsizebypassdata/${S3_KEY}" --content-type application/json
          echo "Recorded to s3://prsizebypassdata/${S3_KEY}"
```
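Separately from the key prefix, the printf-built JSON in this step could emit invalid output if any field ever contained a double quote or backslash, since `printf %s` performs no escaping. Assuming `jq` is available on the runner, a safer construction is possible; a sketch with illustrative stand-in values (not the workflow's actual data):

```bash
# Build the bypass record with jq so every value is properly JSON-escaped.
# The values below are hypothetical stand-ins for the workflow's variables.
REPO="chargebee/example-repo"
PR_LINK="https://github.com/chargebee/example-repo/pull/42"
DATE="2025-01-01T00:00:00Z"
WF_ID="1234567890"

RECORD=$(jq -cn \
  --arg repo "$REPO" --arg date "$DATE" \
  --arg pr_link "$PR_LINK" --arg wf_id "$WF_ID" \
  '{repo: $repo, date: $date, pr_link: $pr_link, wf_id: $wf_id}')
echo "$RECORD"
```

The resulting `$RECORD` can be piped to `aws s3 cp -` exactly as in the current step.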
