From 8fa203064853f943586bde69d9247c74ff80461a Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Tue, 10 Mar 2026 15:06:48 +0200
Subject: [PATCH 1/7] Add a skill for upstream release updates

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 149 ++++++++++++++++++
 1 file changed, 149 insertions(+)
 create mode 100644 .claude/skills/upstream-release-docs/SKILL.md

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
new file mode 100644
index 00000000..cf135506
--- /dev/null
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -0,0 +1,149 @@
+---
+name: upstream-release-docs
+description: >
+  Analyze an upstream project's new release, verify changes against source code, and update documentation. Covers discovery, deep-dive into PRs/issues, docs audit, source-verified implementation, and review feedback handling.
+
+
+argument-hint: ' [tag]'
+---
+
+# Upstream Release Documentation
+
+Analyze a new release of an upstream project and update the documentation site to reflect verified changes.
+
+## Core Principle
+
+**Verify everything against source code at the release tag.** Never trust release notes, PR descriptions, PR review comments, or issue descriptions at face value. Always check actual source code using:
+
+```
+gh api repos/OWNER/REPO/contents/PATH?ref=TAG
+```
+
+Claims from any human-written source (release notes, PR bodies, review comments) may be inaccurate, outdated, or aspirational. The source code at the tag is the single source of truth.
+
+## Input
+
+Parse the argument to extract:
+
+- `OWNER/REPO` — the upstream repository (e.g., `stacklok/toolhive-registry-server`)
+- `TAG` (optional) — the release tag (e.g., `v0.6.3`). If omitted, fetch the latest release.
+
+## Phase 1: Discovery
+
+1. Fetch the release:
+
+   ```
+   gh release view TAG --repo OWNER/REPO --json tagName,name,body,publishedAt
+   ```
+
+   If no tag was provided:
+
+   ```
+   gh release view --repo OWNER/REPO --json tagName,name,body,publishedAt
+   ```
+
+2. Extract PR numbers from the release notes body (look for `#NNN` patterns or full PR URLs).
+
+3. Categorize changes into:
+   - **New features** — entirely new capabilities
+   - **Changed defaults** — existing behavior that now works differently
+   - **New/changed configuration** — new config options, env vars, CLI flags
+   - **Deprecations** — features or options being phased out
+   - **API changes** — new/modified endpoints, request/response formats
+   - **Bug fixes** — corrections that may affect documented behavior
+   - **Internal/infra** — CI, dependencies, refactoring (usually no docs impact)
+
+4. **Present the categorized summary to the user.** Wait for approval before proceeding.
+
+## Phase 2: Deep Dive
+
+For each PR identified in Phase 1 (skip internal/infra unless user requests):
+
+1. Fetch PR details:
+
+   ```
+   gh pr view NUMBER --repo OWNER/REPO --json title,body,files
+   ```
+
+2. Fetch linked issues if referenced:
+
+   ```
+   gh issue view NUMBER --repo OWNER/REPO --json title,body
+   ```
+
+3. **Read the actual source code at the release tag** to verify every claim made in the PR description:
+
+   ```
+   gh api repos/OWNER/REPO/contents/PATH?ref=TAG
+   ```
+
+   The response is base64-encoded; decode it to read the content.
+
+4. Note discrepancies between PR descriptions and actual code. Trust the code.
+
+5. Identify:
+   - **Auto-updated content** — files generated from upstream (OpenAPI specs, CLI reference docs, JSON schemas). These should not be manually edited; flag them for automated update instead.
+   - **Hidden/experimental features** — look for indicators like `Hidden: true` in CLI command definitions, feature flags, or internal-only annotations. Do not document these unless the user explicitly asks.
+
+## Phase 3: Audit Existing Docs
+
+1. Search the documentation codebase for references to affected areas:
+   - Config values, env var names, CLI flags
+   - Feature names, API paths, version numbers
+   - Any terminology that changed
+
+2. Check the project style guide (CLAUDE.md, STYLE-GUIDE.md, or similar) for conventions.
+
+3. Build an **impact map** — a table with: | File | Current text/value | Verified replacement | Change type | |------|-------------------|---------------------|-------------| | path | what exists now | what it should be | update/new/remove |
+
+4. **Present the impact map to the user.** Wait for approval before implementing.
+
+## Phase 4: Implementation
+
+Apply the approved changes:
+
+1. **Update existing pages** — edit files using the impact map. Preserve the existing writing style and conventions.
+
+2. **Create new pages** if a feature requires dedicated documentation:
+   - Follow the project's information architecture framework (e.g., Diataxis: tutorials, how-to guides, reference, concepts)
+   - Place the page in the appropriate directory
+   - Update sidebar/navigation configuration
+
+3. **Add cross-references** — link new content from related existing pages and vice versa.
+
+4. **Update version references** — bump version numbers in install instructions, compatibility matrices, etc.
+
+5. Do not edit auto-generated files. If they need updating, note this for the user.
+
+## Phase 5: Validation
+
+1. **Re-verify every factual claim** against source code at the tag. This is the third verification pass (after Phase 2 and Phase 4).
+
+2. **Build the site** — run the project's build command to check for broken links, missing references, or build errors.
+
+3. **Run linting** — execute the project's lint/format commands.
+
+4. **Run `/docs-review`** — invoke the docs-review skill on all changed and new files to catch style, structure, and clarity issues. Address its feedback before proceeding.
+
+5. Fix any issues found. Re-run validation until clean.
+
+## Phase 6: Handle Review Feedback
+
+When receiving review comments (from humans or automated reviewers):
+
+1. **Verify every review comment against source code** before acting on it. Reviewers can be wrong.
+
+2. If a comment is correct — implement the fix and verify the result.
+
+3. If a comment is incorrect — respond with evidence from the source code. Include the actual code snippet and the file path at the tag.
+
+4. If a comment is ambiguous — check the source code to determine the correct behavior, then respond with your findings.
+
+## Key Principles
+
+- **Triple verification**: verify during deep dive (Phase 2), before finalizing (Phase 5), and when handling reviews (Phase 6)
+- **User checkpoints**: present the categorized summary (after Phase 1) and impact map (after Phase 3) for user approval before implementing
+- **Respect auto-generated content**: identify and skip files that are updated by automated processes
+- **Don't document hidden features**: skip features marked as hidden, experimental, or internal unless explicitly asked
+- **Follow existing conventions**: match the project's style guide, writing voice, file structure, and naming patterns
+- **Be project-agnostic**: this workflow applies to any upstream project and any docs site — do not assume specific frameworks, file paths, or tools

From f827343dd827ea1fdb33cc0316b6a968e74c3231 Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Tue, 10 Mar 2026 16:24:34 +0200
Subject: [PATCH 2/7] Update the skill

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
index cf135506..c98aa7af 100644
--- a/.claude/skills/upstream-release-docs/SKILL.md
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -53,7 +53,7 @@ Parse the argument to extract:
    - **Bug fixes** — corrections that may affect documented behavior
    - **Internal/infra** — CI, dependencies, refactoring (usually no docs impact)
 
-4. **Present the categorized summary to the user.** Wait for approval before proceeding.
+4. **Log the categorized summary** for transparency, then proceed to Phase 2.
 
 ## Phase 2: Deep Dive
 
@@ -82,7 +82,7 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
 4. Note discrepancies between PR descriptions and actual code. Trust the code.
 
 5. Identify:
-   - **Auto-updated content** — files generated from upstream (OpenAPI specs, CLI reference docs, JSON schemas). These should not be manually edited; flag them for automated update instead.
+   - **Auto-generated content** — files generated from upstream (OpenAPI specs, CLI reference docs, JSON schemas). These should not be manually edited; flag them for automated update instead. Critically, check whether new features are **already covered** by auto-generated docs (e.g., new API endpoints included in an upstream swagger spec that feeds a rendered API reference page). If so, do not create manual duplicates — the auto-generated content is the source of truth and manual pages will drift.
    - **Hidden/experimental features** — look for indicators like `Hidden: true` in CLI command definitions, feature flags, or internal-only annotations. Do not document these unless the user explicitly asks.
 
 ## Phase 3: Audit Existing Docs
@@ -96,7 +96,7 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
 
 3. Build an **impact map** — a table with: | File | Current text/value | Verified replacement | Change type | |------|-------------------|---------------------|-------------| | path | what exists now | what it should be | update/new/remove |
 
-4. **Present the impact map to the user.** Wait for approval before implementing.
+4. **Log the impact map** for transparency, then proceed to Phase 4.
 
 ## Phase 4: Implementation
 
@@ -104,10 +104,11 @@ Apply the approved changes:
 
 1. **Update existing pages** — edit files using the impact map. Preserve the existing writing style and conventions.
 
-2. **Create new pages** if a feature requires dedicated documentation:
+2. **Create new pages** for new features that lack existing documentation and are not already covered by auto-generated content. Default to documenting new features rather than skipping them:
    - Follow the project's information architecture framework (e.g., Diataxis: tutorials, how-to guides, reference, concepts)
    - Place the page in the appropriate directory
    - Update sidebar/navigation configuration
+   - Skip only if the feature is already fully covered by auto-generated docs (e.g., API reference from a swagger spec)
 
 3. **Add cross-references** — link new content from related existing pages and vice versa.
 
@@ -142,7 +143,7 @@ When receiving review comments (from humans or automated reviewers):
 ## Key Principles
 
 - **Triple verification**: verify during deep dive (Phase 2), before finalizing (Phase 5), and when handling reviews (Phase 6)
-- **User checkpoints**: present the categorized summary (after Phase 1) and impact map (after Phase 3) for user approval before implementing
+- **Transparency**: log the categorized summary (after Phase 1) and impact map (after Phase 3) for auditability, but do not stop — run all phases to completion
 - **Respect auto-generated content**: identify and skip files that are updated by automated processes
 - **Don't document hidden features**: skip features marked as hidden, experimental, or internal unless explicitly asked

From 3ce0d5ee07232524de27b43af0319f177a2f7ffb Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Tue, 10 Mar 2026 16:46:18 +0200
Subject: [PATCH 3/7] Update the skill with why and what's next

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 28 +++++++++++++------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
index c98aa7af..894c1607 100644
--- a/.claude/skills/upstream-release-docs/SKILL.md
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -3,7 +3,6 @@
 name: upstream-release-docs
 description: >
   Analyze an upstream project's new release, verify changes against source code, and update documentation. Covers discovery, deep-dive into PRs/issues, docs audit, source-verified implementation, and review feedback handling.
 
-
 argument-hint: ' [tag]'
 ---
@@ -71,7 +70,13 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
    gh issue view NUMBER --repo OWNER/REPO --json title,body
    ```
 
-3. **Read the actual source code at the release tag** to verify every claim made in the PR description:
+3. **Understand the "why"** — for new features, look beyond the code to understand motivation and intended usage:
+   - Check linked issues for user stories, acceptance criteria, and "definition of done"
+   - Follow references to RFCs, design docs, or PRDs linked from issues or PR descriptions
+   - Identify the intended user workflow: who uses this, why, and what happens after?
+   - If the "why" and consumption story aren't clear from any source, flag this gap — documentation that only covers the API surface without explaining purpose or workflow is incomplete
+
+4. **Read the actual source code at the release tag** to verify every claim made in the PR description:
 
    ```
    gh api repos/OWNER/REPO/contents/PATH?ref=TAG
@@ -78,11 +83,11 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
   ```
 
    The response is base64-encoded; decode it to read the content.
 
-4. Note discrepancies between PR descriptions and actual code. Trust the code.
+5. Note discrepancies between PR descriptions and actual code. Trust the code.
 
-5. Identify:
-   - **Auto-generated content** — files generated from upstream (OpenAPI specs, CLI reference docs, JSON schemas). These should not be manually edited; flag them for automated update instead. Critically, check whether new features are **already covered** by auto-generated docs (e.g., new API endpoints included in an upstream swagger spec that feeds a rendered API reference page). If so, do not create manual duplicates — the auto-generated content is the source of truth and manual pages will drift.
+6. Identify:
+   - **Auto-generated content** — files generated from upstream (OpenAPI specs, CLI reference docs, JSON schemas). Do not manually edit these; flag them for automated update instead. However, auto-generated reference docs (e.g., API endpoints from a swagger spec) do **not** replace the need for conceptual explanations, guide content, or cross-references in existing pages. A new feature with auto-generated API docs still needs: (1) a conceptual explanation of what it is and why it exists, (2) mentions and cross-references in related existing pages (intro pages, feature lists, related guides), and (3) guide content if the feature has non-trivial workflows. Only skip creating a **duplicate API reference page** — never skip the surrounding documentation.
    - **Hidden/experimental features** — look for indicators like `Hidden: true` in CLI command definitions, feature flags, or internal-only annotations. Do not document these unless the user explicitly asks.
 
 ## Phase 3: Audit Existing Docs
@@ -94,7 +99,11 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
 
 2. Check the project style guide (CLAUDE.md, STYLE-GUIDE.md, or similar) for conventions.
 
-3. Build an **impact map** — a table with: | File | Current text/value | Verified replacement | Change type | |------|-------------------|---------------------|-------------| | path | what exists now | what it should be | update/new/remove |
+3. Build an **impact map** — a table with these columns:
+
+   | File | Current text/value | Verified replacement | Change type |
+   | ---- | ------------------ | -------------------- | ----------------- |
+   | path | what exists now | what it should be | update/new/remove |
 
 4. **Log the impact map** for transparency, then proceed to Phase 4.
@@ -104,10 +113,12 @@ Apply the approved changes:
 
 1. **Update existing pages** — edit files using the impact map. Preserve the existing writing style and conventions.
 
-2. **Create new pages** for new features that lack existing documentation and are not already covered by auto-generated content. Default to documenting new features rather than skipping them:
+2. **Create new pages** for new features that lack existing documentation. Default to documenting new features rather than skipping them:
    - Follow the project's information architecture framework (e.g., Diataxis: tutorials, how-to guides, reference, concepts)
    - Place the page in the appropriate directory
    - Update sidebar/navigation configuration
-   - Skip only if the feature is already fully covered by auto-generated docs (e.g., API reference from a swagger spec)
+   - Even if auto-generated API reference exists for the feature, create conceptual/guide content explaining what the feature is, when to use it, and how it fits into the broader product
+   - Only skip creating a page that would duplicate auto-generated reference content (e.g., don't manually list API endpoints that are already in a swagger-rendered page)
 
 3. **Add cross-references** — link new content from related existing pages and vice versa.
@@ -143,7 +154,7 @@ When receiving review comments (from humans or automated reviewers):
 - **Triple verification**: verify during deep dive (Phase 2), before finalizing (Phase 5), and when handling reviews (Phase 6)
 - **Transparency**: log the categorized summary (after Phase 1) and impact map (after Phase 3) for auditability, but do not stop — run all phases to completion
-- **Respect auto-generated content**: identify and skip files that are updated by automated processes
+- **Respect auto-generated content**: don't manually edit auto-generated files, but always create the surrounding conceptual/guide content that auto-generated reference docs don't provide
 - **Don't document hidden features**: skip features marked as hidden, experimental, or internal unless explicitly asked
 - **Follow existing conventions**: match the project's style guide, writing voice, file structure, and naming patterns

From d238f9edebaec4dbde6ae297996119ee92685d59 Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Tue, 10 Mar 2026 17:18:20 +0200
Subject: [PATCH 4/7] Follow ups on the skill

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 30 +++++++++++++++++--
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
index 894c1607..1976ac97 100644
--- a/.claude/skills/upstream-release-docs/SKILL.md
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -3,6 +3,7 @@
 name: upstream-release-docs
 description: >
   Analyze an upstream project's new release, verify changes against source code, and update documentation. Covers discovery, deep-dive into PRs/issues, docs audit, source-verified implementation, and review feedback handling.
+
 argument-hint: ' [tag]'
 ---
@@ -74,6 +75,7 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
    - Check linked issues for user stories, acceptance criteria, and "definition of done"
    - Follow references to RFCs, design docs, or PRDs linked from issues or PR descriptions
    - Identify the intended user workflow: who uses this, why, and what happens after?
+   - Map the full lifecycle: if the feature has a publish/produce side, identify the consume/discover side too. Document both.
    - If the "why" and consumption story aren't clear from any source, flag this gap — documentation that only covers the API surface without explaining purpose or workflow is incomplete
@@ -114,10 +116,26 @@ Apply the approved changes:
 
 1. **Update existing pages** — edit files using the impact map. Preserve the existing writing style and conventions.
 
 2. **Create new pages** for new features that lack existing documentation. Default to documenting new features rather than skipping them:
-   - Follow the project's information architecture framework (e.g., Diataxis: tutorials, how-to guides, reference, concepts)
-   - Place the page in the appropriate directory
+
+   **Diataxis separation** — create separate pages per document type, not one combined page:
+   - **Concept page** (explanation): What is this feature, why does it exist, when would you use it? Lead with concrete scenarios and user personas ("If you maintain a shared MCP registry and want to let teams publish reusable tool bundles..."). Explain relationships to existing features.
+   - **Guide page** (how-to): Task-oriented, organized by user goals — not by API endpoint order. Include practical examples: realistic `curl` commands, sample payloads with plausible values, expected responses, and error cases. If a feature has both producer and consumer sides, document both workflows.
+   - **Reference page**: Only create if not already auto-generated. If auto-generated API reference exists, link to it instead of duplicating endpoint listings.
+
+   **Consumer workflow** — always document "then what?" after the producer/API side. If a feature has a publish/consume pattern, document how consumers discover and use what was published. If consumption tooling isn't built yet, say so explicitly rather than leaving a gap.
+
+   **Practical examples** — every guide page needs at least one end-to-end example with:
+   - Realistic sample data (not `foo`/`bar` placeholders)
+   - The exact commands or API calls to run
+   - Expected output or response
+   - What to do if something goes wrong
+
+   **Naming conventions** — when the feature introduces naming rules (e.g., kebab-case identifiers, camelCase config keys), call these out explicitly with examples of valid and invalid names.
+
+   **Page mechanics:**
+   - Place each page in the appropriate directory
    - Update sidebar/navigation configuration
-   - Even if auto-generated API reference exists for the feature, create conceptual/guide content explaining what the feature is, when to use it, and how it fits into the broader product
+   - Update frontmatter descriptions on all new and modified pages
    - Only skip creating a page that would duplicate auto-generated reference content (e.g., don't manually list API endpoints that are already in a swagger-rendered page)
 
 3. **Add cross-references** — link new content from related existing pages and vice versa.
@@ -155,6 +173,12 @@ When receiving review comments (from humans or automated reviewers):
 - **Triple verification**: verify during deep dive (Phase 2), before finalizing (Phase 5), and when handling reviews (Phase 6)
 - **Transparency**: log the categorized summary (after Phase 1) and impact map (after Phase 3) for auditability, but do not stop — run all phases to completion
 - **Respect auto-generated content**: don't manually edit auto-generated files, but always create the surrounding conceptual/guide content that auto-generated reference docs don't provide
+- **Separate by Diataxis type**: never combine concepts and how-to guides in a single page. Create separate pages for each document type.
+- **Document the full lifecycle**: if a feature has producer and consumer sides, document both. Always answer "then what?" — don't leave the reader at the API call with no guidance on what happens next.
+- **Lead with scenarios, not abstractions**: open concept pages with concrete "who is this for and why should they care" scenarios, not abstract definitions
+- **Flag gaps honestly**: if consumption tooling, client support, or integration isn't ready yet, say so explicitly rather than omitting the topic
+- **Use realistic examples**: guide pages need end-to-end examples with plausible data, exact commands, and expected output — not placeholder values
+- **Call out naming conventions**: when a feature introduces naming rules (casing, allowed characters, namespacing), document them explicitly with valid/invalid examples
 - **Don't document hidden features**: skip features marked as hidden, experimental, or internal unless explicitly asked
 - **Follow existing conventions**: match the project's style guide, writing voice, file structure, and naming patterns
 - **Be project-agnostic**: this workflow applies to any upstream project and any docs site — do not assume specific frameworks, file paths, or tools

From fcef88bd93bf7adf0efc91897624380c6f019c58 Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Tue, 10 Mar 2026 17:52:40 +0200
Subject: [PATCH 5/7] More updates on the skill

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
index 1976ac97..e568b50a 100644
--- a/.claude/skills/upstream-release-docs/SKILL.md
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -152,9 +152,13 @@ Apply the approved changes:
 
 3. **Run linting** — execute the project's lint/format commands.
 
-4. **Run `/docs-review`** — invoke the docs-review skill on all changed and new files to catch style, structure, and clarity issues. Address its feedback before proceeding.
+4. **Run `/docs-review`** — invoke the docs-review skill on all changed and new files to catch style, structure, and clarity issues. When the review returns, **do not stop or present the findings to the user**. Instead, immediately apply every actionable fix yourself:
+   - For primary issues: edit the files to resolve them.
+   - For secondary issues and inline suggestions: apply the fixes directly.
+   - For items you disagree with (e.g., they conflict with verified source code): skip them silently.
+   - After applying fixes, re-run formatting/linting to ensure the fixes are clean.
 
-5. Fix any issues found. Re-run validation until clean.
+5. Fix any remaining issues found in the build or lint steps. Re-run validation until clean.
 
 ## Phase 6: Handle Review Feedback

From cd306ddc6857bd6ef991a13f36463c5e7ef9ca99 Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Tue, 10 Mar 2026 18:07:24 +0200
Subject: [PATCH 6/7] Address copilot feedback

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 30 +++++++++----------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
index e568b50a..2b17fdd1 100644
--- a/.claude/skills/upstream-release-docs/SKILL.md
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -15,8 +15,8 @@ Analyze a new release of an upstream project and update the documentation site t
 
 **Verify everything against source code at the release tag.** Never trust release notes, PR descriptions, PR review comments, or issue descriptions at face value. Always check actual source code using:
 
-```
-gh api repos/OWNER/REPO/contents/PATH?ref=TAG
+```bash
+gh api repos/<owner>/<repo>/contents/<path>?ref=<tag>
 ```
 
 Claims from any human-written source (release notes, PR bodies, review comments) may be inaccurate, outdated, or aspirational. The source code at the tag is the single source of truth.
@@ -25,21 +25,21 @@ Claims from any human-written source (release notes, PR bodies, review comments)
 
 Parse the argument to extract:
 
-- `OWNER/REPO` — the upstream repository (e.g., `stacklok/toolhive-registry-server`)
-- `TAG` (optional) — the release tag (e.g., `v0.6.3`). If omitted, fetch the latest release.
+- `<owner>/<repo>` — the upstream repository (e.g., `stacklok/toolhive-registry-server`)
+- `<tag>` (optional) — the release tag (e.g., `v0.6.3`). If omitted, fetch the latest release.
 
 ## Phase 1: Discovery
 
 1. Fetch the release:
 
-   ```
-   gh release view TAG --repo OWNER/REPO --json tagName,name,body,publishedAt
+   ```bash
+   gh release view <tag> --repo <owner>/<repo> --json tagName,name,body,publishedAt
   ```
 
    If no tag was provided:
 
-   ```
-   gh release view --repo OWNER/REPO --json tagName,name,body,publishedAt
+   ```bash
+   gh release view --repo <owner>/<repo> --json tagName,name,body,publishedAt
   ```
 
 2. Extract PR numbers from the release notes body (look for `#NNN` patterns or full PR URLs).
@@ -61,14 +61,14 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
 1. Fetch PR details:
 
-   ```
-   gh pr view NUMBER --repo OWNER/REPO --json title,body,files
+   ```bash
+   gh pr view <number> --repo <owner>/<repo> --json title,body,files
   ```
 
 2. Fetch linked issues if referenced:
 
-   ```
-   gh issue view NUMBER --repo OWNER/REPO --json title,body
+   ```bash
+   gh issue view <number> --repo <owner>/<repo> --json title,body
   ```
 
 3. **Understand the "why"** — for new features, look beyond the code to understand motivation and intended usage:
@@ -80,8 +80,8 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
 
 4. **Read the actual source code at the release tag** to verify every claim made in the PR description:
 
-   ```
-   gh api repos/OWNER/REPO/contents/PATH?ref=TAG
+   ```bash
+   gh api repos/<owner>/<repo>/contents/<path>?ref=<tag>
   ```
 
    The response is base64-encoded; decode it to read the content.
@@ -155,7 +155,7 @@ Apply the approved changes:
 4. **Run `/docs-review`** — invoke the docs-review skill on all changed and new files to catch style, structure, and clarity issues. When the review returns, **do not stop or present the findings to the user**. Instead, immediately apply every actionable fix yourself:
    - For primary issues: edit the files to resolve them.
    - For secondary issues and inline suggestions: apply the fixes directly.
-   - For items you disagree with (e.g., they conflict with verified source code): skip them silently.
+   - For items you disagree with (e.g., they conflict with verified source code): do not apply the suggestion, but briefly log each skipped item with a source-verified reason for auditability.
    - After applying fixes, re-run formatting/linting to ensure the fixes are clean.
 
 5. Fix any remaining issues found in the build or lint steps. Re-run validation until clean.

From 8284fc2c891751c5cf4d06fb96c4b8ccd922bd22 Mon Sep 17 00:00:00 2001
From: Radoslav Dimitrov
Date: Wed, 11 Mar 2026 02:20:53 +0200
Subject: [PATCH 7/7] Further skill updates

Signed-off-by: Radoslav Dimitrov
---
 .claude/skills/upstream-release-docs/SKILL.md | 25 ++++++++++++++-----
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/.claude/skills/upstream-release-docs/SKILL.md b/.claude/skills/upstream-release-docs/SKILL.md
index 2b17fdd1..22871017 100644
--- a/.claude/skills/upstream-release-docs/SKILL.md
+++ b/.claude/skills/upstream-release-docs/SKILL.md
@@ -75,10 +75,17 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
    - Check linked issues for user stories, acceptance criteria, and "definition of done"
    - Follow references to RFCs, design docs, or PRDs linked from issues or PR descriptions
    - Identify the intended user workflow: who uses this, why, and what happens after?
-   - Map the full lifecycle: if the feature has a publish/produce side, identify the consume/discover side too. Document both.
+   - Map the full lifecycle: if the feature has a publish/produce side, actively search the source code for the consume/discover side. Check CLIs, client libraries, and related repositories. If consumption tooling doesn't exist in this release, note that explicitly — this gap **must** be documented rather than silently omitted.
    - If the "why" and consumption story aren't clear from any source, flag this gap — documentation that only covers the API surface without explaining purpose or workflow is incomplete
 
-4. **Read the actual source code at the release tag** to verify every claim made in the PR description:
+4. **For major new features** — when a change introduces an entirely new capability (not just a config change or incremental addition), the "why" and consumer workflow often cannot be derived from source code alone. In this case, **ask the user for additional context** before writing documentation:
+   - Request user stories, PRDs, RFCs, or design documents that explain the motivation and intended usage
+   - Ask who the target users are and what workflow they're expected to follow
+   - Ask how consumers are expected to discover and use the feature (CLI, IDE extension, API, etc.)
+   - Do not attempt to fabricate the "why" or consumer story from code structure alone — this produces documentation that covers the "what" and "how" but misses the perspective and voice that only comes from understanding the product intent
+   - Incremental changes (new config options, default changes, annotation additions) can proceed without this step
+
+5. **Read the actual source code at the release tag** to verify every claim made in the PR description:
 
    ```bash
   gh api repos/<owner>/<repo>/contents/<path>?ref=<tag>
@@ -85,9 +92,9 @@ For each PR identified in Phase 1 (skip internal/infra unless user requests):
   ```
 
    The response is base64-encoded; decode it to read the content.
 
-5. Note discrepancies between PR descriptions and actual code. Trust the code.
+6. Note discrepancies between PR descriptions and actual code. Trust the code.
 
-6. Identify:
+7. Identify:
    - **Auto-generated content** — files generated from upstream (OpenAPI specs, CLI reference docs, JSON schemas). Do not manually edit these; flag them for automated update instead. However, auto-generated reference docs (e.g., API endpoints from a swagger spec) do **not** replace the need for conceptual explanations, guide content, or cross-references in existing pages. A new feature with auto-generated API docs still needs: (1) a conceptual explanation of what it is and why it exists, (2) mentions and cross-references in related existing pages (intro pages, feature lists, related guides), and (3) guide content if the feature has non-trivial workflows. Only skip creating a **duplicate API reference page** — never skip the surrounding documentation.
    - **Hidden/experimental features** — look for indicators like `Hidden: true` in CLI command definitions, feature flags, or internal-only annotations. Do not document these unless the user explicitly asks.
@@ -122,7 +129,12 @@ Apply the approved changes:
    - **Guide page** (how-to): Task-oriented, organized by user goals — not by API endpoint order. Include practical examples: realistic `curl` commands, sample payloads with plausible values, expected responses, and error cases. If a feature has both producer and consumer sides, document both workflows.
    - **Reference page**: Only create if not already auto-generated. If auto-generated API reference exists, link to it instead of duplicating endpoint listings.
 
-   **Consumer workflow** — always document "then what?" after the producer/API side. If a feature has a publish/consume pattern, document how consumers discover and use what was published. If consumption tooling isn't built yet, say so explicitly rather than leaving a gap.
+   **Consumer workflow** — this is a hard requirement, not optional. For every feature that has a publish/produce side, you **must** answer "then what?" in the documentation. Specifically:
+   - How does a consumer discover what was published?
+   - How does a consumer install, fetch, or use it?
+   - What tooling exists for consumption (CLI commands, IDE extensions, API calls)?
+   - If consumption tooling doesn't exist yet, **say so explicitly** in the docs. A single sentence closing the gap is better than silence. Example: "Skill installation via agent clients is planned for a future release; for now, the registry serves as a discovery and distribution layer."
+   - Readers who follow the docs to completion must not hit a dead end.
 
    **Practical examples** — every guide page needs at least one end-to-end example with:
@@ -178,7 +190,8 @@ When receiving review comments (from humans or automated reviewers):
 - **Triple verification**: verify during deep dive (Phase 2), before finalizing (Phase 5), and when handling reviews (Phase 6)
 - **Transparency**: log the categorized summary (after Phase 1) and impact map (after Phase 3) for auditability, but do not stop — run all phases to completion
 - **Respect auto-generated content**: don't manually edit auto-generated files, but always create the surrounding conceptual/guide content that auto-generated reference docs don't provide
 - **Separate by Diataxis type**: never combine concepts and how-to guides in a single page. Create separate pages for each document type.
-- **Document the full lifecycle**: if a feature has producer and consumer sides, document both. Always answer "then what?" — don't leave the reader at the API call with no guidance on what happens next.
+- **Document the full lifecycle**: if a feature has producer and consumer sides, document both. Always answer "then what?" — readers who follow the docs to completion must not hit a dead end. If consumption tooling isn't built yet, say so explicitly.
+- **Ask for context on major features**: for entirely new capabilities, don't fabricate the "why" from code alone. Ask the user for user stories, PRDs, or RFCs. Incremental changes can proceed autonomously, but big feature introductions need human input to capture product intent and voice.
- **Lead with scenarios, not abstractions**: open concept pages with concrete "who is this for and why should they care" scenarios, not abstract definitions - **Flag gaps honestly**: if consumption tooling, client support, or integration isn't ready yet, say so explicitly rather than omitting the topic - **Use realistic examples**: guide pages need end-to-end examples with plausible data, exact commands, and expected output — not placeholder values
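
The skill's Phase 2 fetches file contents with `gh api` and notes that the response's `content` field is base64-encoded. A minimal sketch of that decode step, assuming a POSIX shell with `sed` and `base64 -d` available; the canned JSON is a stand-in for a live API response, and the repo, path, and tag in the comment are hypothetical:

```shell
# Live call the skill describes (requires an authenticated gh CLI; the
# repo, file path, and tag here are hypothetical placeholders):
#   gh api "repos/stacklok/toolhive-registry-server/contents/README.md?ref=v0.6.3"
#
# The GitHub Contents API returns JSON whose "content" field is base64-encoded.
# A canned response stands in for the live call so the pipeline is runnable offline:
response='{"content":"IyBIZWxsbyByZWxlYXNlCg=="}'

# Extract the "content" value and decode it:
printf '%s' "$response" \
  | sed -n 's/.*"content" *: *"\([^"]*\)".*/\1/p' \
  | base64 -d
# prints: # Hello release
```

With the real CLI this shortens to `gh api ... --jq '.content' | base64 -d`, since `gh api` evaluates `--jq` expressions with a built-in jq; note that real responses wrap the base64 string across multiple lines, so piping through `tr -d '\n'` before decoding is a safe extra step.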