diff --git a/.claude/skills/ably-codebase-review/SKILL.md b/.claude/skills/ably-codebase-review/SKILL.md
index 3ade6662..8521fbde 100644
--- a/.claude/skills/ably-codebase-review/SKILL.md
+++ b/.claude/skills/ably-codebase-review/SKILL.md
@@ -154,14 +154,26 @@ Launch these agents **in parallel**. Each agent gets a focused mandate and uses
 **Method (grep/glob — text patterns and file matching):**
 
 1. **Glob** for each command in `src/commands/` and check if a corresponding test file exists at `test/unit/commands/`
-2. **Grep** for `describe(` in test files to check for required describe blocks: `help`, `argument validation`, `functionality`, `flags`, `error handling`
-3. **Grep** for `--duration` in subscribe test files
+2. **Grep** for `describe(` in test files to check for the 5 required describe blocks with EXACT standard names:
+   - `describe("help"` — required in every test file
+   - `describe("argument validation"` — required (test required args OR unknown flag rejection)
+   - `describe("functionality"` — required (core happy-path tests)
+   - `describe("flags"` — required (verify flags exist and work)
+   - `describe("error handling"` — required (API errors, network failures)
+   Flag non-standard variants: `"command arguments and flags"`, `"command flags"`, `"flag options"`, `"parameter validation"`.
+   Exception: `interactive.test.ts`, `interactive-sigint.test.ts`, and `bench/*.test.ts` are exempt.
+3. **Grep** for `--duration` in unit test `runCommand()` args — should NOT be present (env var handles it). Exceptions: `test:wait` tests, `interactive-sigint` tests, help output checks.
 4. **Grep** for `getMockAblyRealtime`, `getMockAblyRest`, `getMockConfigManager` in test files to verify correct mock usage per command type
 5. **Grep** for `--api-key`, `--token`, `--access-token` in unit test files — these should not use CLI auth flags
+6. **Check** for use of shared test helpers where applicable:
+   - Control API tests should use `nockControl()`, `getControlApiContext()`, `controlApiCleanup()` from `test/helpers/control-api-test-helpers.ts` instead of manual nock setup
+   - Control API tests should use mock factories (`mockApp()`, `mockKey()`, `mockRule()`, `mockQueue()`, `mockNamespace()`, `mockStats()`) from `test/fixtures/control-api.ts` instead of duplicating inline response objects
+   - Tests with boilerplate help/arg-validation/flags blocks should use `standardHelpTests()`, `standardArgValidationTests()`, `standardFlagTests()` from `test/helpers/standard-tests.ts`
+   - Control API error handling blocks should use `standardControlApiErrorTests()` from `test/helpers/standard-tests.ts` for 401/500/network error tests
 
 **Reasoning guidance:**
 - Missing test files are deviations but may be documented as known gaps
-- Missing describe blocks in existing tests are deviations but may be lower priority
+- Missing describe blocks or non-standard block names are deviations that should be flagged
 - Subscribe tests that auto-exit via mocked callbacks (without `--duration`) may be acceptable
 
 ### Agent 7: Lifecycle & Convention Sweep
diff --git a/.claude/skills/ably-new-command/SKILL.md b/.claude/skills/ably-new-command/SKILL.md
index 8f3a5a58..ca310e3a 100644
--- a/.claude/skills/ably-new-command/SKILL.md
+++ b/.claude/skills/ably-new-command/SKILL.md
@@ -363,7 +363,8 @@ If the new command shouldn't be available in the web CLI, add it to the appropri
 After creating command and test files, always run:
 
 ```bash
-pnpm prepare            # Build + update manifest/README
+pnpm prepare            # Build + update manifest
+pnpm exec oclif readme  # Regenerate README.md from command metadata
 pnpm exec eslint .      # Lint (must be 0 errors)
 pnpm test:unit          # Run tests
 ```
@@ -395,6 +396,10 @@ See the "Keeping Skills Up to Date" section in `CLAUDE.md` for the full list of
 - [ ] Test file at matching path under `test/unit/commands/`
 - [ ] Tests use correct mock helper (`getMockAblyRealtime`, `getMockAblyRest`, `nock`)
 - [ ] Tests don't pass auth flags — `MockConfigManager` handles auth
-- [ ] Subscribe tests use `--duration` flag to auto-exit
+- [ ] Subscribe tests auto-exit via env var (ABLY_CLI_DEFAULT_DURATION: "0.25" in vitest.config.ts) — do NOT pass --duration to runCommand()
+- [ ] Tests use `standardHelpTests()`, `standardArgValidationTests()`, `standardFlagTests()` from `test/helpers/standard-tests.ts`
+- [ ] Control API tests use `nockControl()`, `controlApiCleanup()` from `test/helpers/control-api-test-helpers.ts`
+- [ ] Control API tests use `standardControlApiErrorTests()` for 401/500/network error tests in the `describe("error handling", ...)` block
+- [ ] Control API response bodies use `mockApp()`, `mockKey()`, `mockRule()`, `mockQueue()`, `mockNamespace()`, `mockStats()` from `test/fixtures/control-api.ts` where applicable
 - [ ] Index file created if new topic/subtopic
 - [ ] `pnpm prepare && pnpm exec eslint . && pnpm test:unit` passes
diff --git a/.claude/skills/ably-new-command/references/testing.md b/.claude/skills/ably-new-command/references/testing.md
index f71bd6ca..a0d110ad 100644
--- a/.claude/skills/ably-new-command/references/testing.md
+++ b/.claude/skills/ably-new-command/references/testing.md
@@ -2,79 +2,142 @@
 
 Test files go at `test/unit/commands/.test.ts`.
 
+> **Note:** Do NOT pass `--duration` to `runCommand()` in unit/integration tests. `vitest.config.ts` sets `ABLY_CLI_DEFAULT_DURATION: "0.25"` which makes subscribe commands auto-exit after 250ms. Adding `--duration` overrides this with a slower value.
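The note above refers to a default wired into the Vitest config; a minimal sketch of how such an env default is typically set (an assumption for illustration — the repo's actual `vitest.config.ts` will contain more settings):

```typescript
// Sketch only — shows where ABLY_CLI_DEFAULT_DURATION comes from;
// the real vitest.config.ts likely sets many more options.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    env: {
      // Subscribe/long-running commands read this and auto-exit after 250ms,
      // so unit tests never need to pass --duration explicitly.
      ABLY_CLI_DEFAULT_DURATION: "0.25",
    },
  },
});
```

Because the variable is set once at the runner level, every subscribe-style test inherits the fast exit without per-test flags.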
+
 ## Table of Contents
 
 - [Product API Test (Realtime Mock)](#product-api-test-realtime-mock)
 - [Product API Test (REST Mock)](#product-api-test-rest-mock)
-- [Control API Test (nock)](#control-api-test-nock)
+- [Control API Test](#control-api-test)
 - [E2E Tests](#e2e-tests)
+- [Standard Test Generators](#standard-test-generators)
 - [Test Structure](#test-structure)
 
 ---
 
 ## Product API Test (Realtime Mock)
 
+For subscribe/realtime commands. Uses `getMockAblyRealtime()` and standard test helpers.
+
 ```typescript
 import { describe, it, expect, beforeEach, vi } from "vitest";
 import { runCommand } from "@oclif/test";
 import { getMockAblyRealtime } from "../../../helpers/mock-ably-realtime.js";
+import { captureJsonLogs } from "../../../helpers/ndjson.js";
+import {
+  standardHelpTests,
+  standardArgValidationTests,
+  standardFlagTests,
+} from "../../../helpers/standard-tests.js";
 
 describe("topic:action command", () => {
-  let mockCallback: ((event: unknown) => void) | null = null;
+  let mockSubscribeCallback: ((message: unknown) => void) | null = null;
 
   beforeEach(() => {
-    mockCallback = null;
+    mockSubscribeCallback = null;
     const mock = getMockAblyRealtime();
     const channel = mock.channels._getChannel("test-channel");
 
     // Configure subscribe to capture the callback
-    channel.subscribe.mockImplementation((callback: (msg: unknown) => void) => {
-      mockCallback = callback;
-    });
+    channel.subscribe.mockImplementation(
+      (callback: (msg: unknown) => void) => {
+        mockSubscribeCallback = callback;
+      },
+    );
 
     // Auto-connect
-    mock.connection.once.mockImplementation((event: string, cb: () => void) => {
-      if (event === "connected") cb();
-    });
+    mock.connection.once.mockImplementation(
+      (event: string, callback: () => void) => {
+        if (event === "connected") callback();
+      },
+    );
 
     // Auto-attach
-    channel.once.mockImplementation((event: string, cb: () => void) => {
+    channel.once.mockImplementation((event: string, callback: () => void) => {
       if (event === "attached") {
         channel.state = "attached";
-        cb();
+        callback();
       }
     });
   });
 
-  describe("help", () => {
-    it("should display help with --help flag", async () => {
-      const { stdout } = await runCommand(["topic:action", "--help"], import.meta.url);
-      expect(stdout).toContain("USAGE");
-    });
-  });
-
-  describe("argument validation", () => {
-    it("should require channel argument", async () => {
-      const { error } = await runCommand(["topic:action"], import.meta.url);
-      expect(error).toBeDefined();
-      expect(error?.message).toMatch(/channel|required|argument/i);
-    });
+  // Standard helpers generate help, argument validation, and flags blocks
+  standardHelpTests("topic:action", import.meta.url);
+  standardArgValidationTests("topic:action", import.meta.url, {
+    requiredArgs: ["test-channel"],
   });
+  standardFlagTests("topic:action", import.meta.url, [
+    "--rewind",
+    "--json",
+  ]);
 
   describe("functionality", () => {
     it("should subscribe and display events", async () => {
-      const commandPromise = runCommand(["topic:action", "test-channel", "--duration", "1"], import.meta.url);
+      const commandPromise = runCommand(
+        ["topic:action", "test-channel"],
+        import.meta.url,
+      );
 
-      await vi.waitFor(() => { expect(mockCallback).not.toBeNull(); });
+      await vi.waitFor(() => {
+        expect(mockSubscribeCallback).not.toBeNull();
+      });
 
-      mockCallback!({
+      mockSubscribeCallback!({
         name: "test-event",
         data: "hello",
         timestamp: Date.now(),
+        id: "msg-123",
+        clientId: "client-1",
+        connectionId: "conn-1",
       });
 
       const { stdout } = await commandPromise;
       expect(stdout).toContain("test-channel");
     });
+
+    it("should emit JSON envelope for --json events", async () => {
+      const records = await captureJsonLogs(async () => {
+        const commandPromise = runCommand(
+          ["topic:action", "test-channel", "--json"],
+          import.meta.url,
+        );
+
+        await vi.waitFor(() => {
+          expect(mockSubscribeCallback).not.toBeNull();
+        });
+
+        mockSubscribeCallback!({
+          name: "greeting",
+          data: "hi",
+          timestamp: Date.now(),
+          id: "msg-envelope-test",
+          clientId: "client-1",
+          connectionId: "conn-1",
+        });
+
+        await commandPromise;
+      });
+      const events = records.filter(
+        (r) => r.type === "event" && r.channel === "test-channel",
+      );
+      expect(events.length).toBeGreaterThan(0);
+      expect(events[0]).toHaveProperty("type", "event");
+      expect(events[0]).toHaveProperty("command", "topic:action");
+    });
+  });
+
+  describe("error handling", () => {
+    it("should handle missing mock client in test mode", async () => {
+      if (globalThis.__TEST_MOCKS__) {
+        delete globalThis.__TEST_MOCKS__.ablyRealtimeMock;
+      }
+
+      const { error } = await runCommand(
+        ["topic:action", "test-channel"],
+        import.meta.url,
+      );
+      expect(error).toBeDefined();
+      expect(error?.message).toMatch(/No mock|client/i);
+    });
   });
 });
 ```
 
@@ -83,10 +146,17 @@
 
 ## Product API Test (REST Mock)
 
+For history/get commands. Uses `getMockAblyRest()` and standard test helpers.
+
 ```typescript
 import { describe, it, expect, beforeEach } from "vitest";
 import { runCommand } from "@oclif/test";
 import { getMockAblyRest } from "../../../helpers/mock-ably-rest.js";
+import {
+  standardHelpTests,
+  standardArgValidationTests,
+  standardFlagTests,
+} from "../../../helpers/standard-tests.js";
 
 describe("topic:action command", () => {
   beforeEach(() => {
@@ -94,63 +164,170 @@
     const channel = mock.channels._getChannel("test-channel");
     channel.history.mockResolvedValue({
       items: [
-        { id: "msg-1", name: "event", data: "hello", timestamp: 1700000000000 },
+        {
+          id: "msg-1",
+          name: "test-event",
+          data: { text: "Hello world" },
+          timestamp: 1700000000000,
+          clientId: "client-1",
+          connectionId: "conn-1",
+        },
       ],
     });
   });
 
-  it("should retrieve history", async () => {
-    const { stdout } = await runCommand(
-      ["topic:action", "test-channel"],
-      import.meta.url,
-    );
-    expect(stdout).toContain("1");
-    expect(stdout).toContain("messages");
+  // Standard helpers generate help, argument validation, and flags blocks
+  standardHelpTests("topic:action", import.meta.url);
+  standardArgValidationTests("topic:action", import.meta.url, {
+    requiredArgs: ["test-channel"],
+  });
+  standardFlagTests("topic:action", import.meta.url, [
+    "--json",
+    "--limit",
+    "--direction",
+    "--start",
+    "--end",
+  ]);
+
+  describe("functionality", () => {
+    it("should retrieve history", async () => {
+      const { stdout } = await runCommand(
+        ["topic:action", "test-channel"],
+        import.meta.url,
+      );
+      expect(stdout).toContain("1");
+      expect(stdout).toContain("messages");
+    });
+
+    it("should output JSON format when --json flag is used", async () => {
+      const { stdout } = await runCommand(
+        ["topic:action", "test-channel", "--json"],
+        import.meta.url,
+      );
+
+      const result = JSON.parse(stdout);
+      expect(result).toHaveProperty("type", "result");
+      expect(result).toHaveProperty("command", "topic:action");
+      expect(result).toHaveProperty("success", true);
+      expect(result).toHaveProperty("messages");
+    });
+  });
+
+  describe("error handling", () => {
+    it("should handle API errors", async () => {
+      const mock = getMockAblyRest();
+      const channel = mock.channels._getChannel("test-channel");
+      channel.history.mockRejectedValue(new Error("API error"));
+
+      const { error } = await runCommand(
+        ["topic:action", "test-channel"],
+        import.meta.url,
+      );
+      expect(error).toBeDefined();
+      expect(error?.message).toContain("API error");
+    });
   });
 });
 ```
 
 ---
 
-## Control API Test (nock)
+## Control API Test
+
+For CRUD commands. Uses `nockControl()`, mock factories, and all standard test helpers including `standardControlApiErrorTests()`.
 
 ```typescript
 import { describe, it, expect, afterEach } from "vitest";
 import { runCommand } from "@oclif/test";
-import nock from "nock";
+import {
+  nockControl,
+  controlApiCleanup,
+} from "../../../helpers/control-api-test-helpers.js";
 import { getMockConfigManager } from "../../../helpers/mock-config-manager.js";
+import {
+  standardHelpTests,
+  standardArgValidationTests,
+  standardFlagTests,
+  standardControlApiErrorTests,
+} from "../../../helpers/standard-tests.js";
+import { mockQueue } from "../../../fixtures/control-api.js";
 
 describe("topic:action command", () => {
   afterEach(() => {
-    nock.cleanAll();
+    controlApiCleanup();
   });
 
-  it("should create resource", async () => {
-    const appId = getMockConfigManager().getCurrentAppId()!;
-    nock("https://control.ably.net")
-      .post(`/v1/apps/${appId}/resources`)
-      .reply(201, { id: "res-123", appId });
+  // Standard helpers generate help, argument validation, and flags blocks
+  standardHelpTests("topic:action", import.meta.url);
+  standardArgValidationTests("topic:action", import.meta.url);
+  standardFlagTests("topic:action", import.meta.url, ["--json"]);
 
-    const { stdout } = await runCommand(
-      ["topic:action", "--flag", "value"],
-      import.meta.url,
-    );
+  describe("functionality", () => {
+    it("should list resources", async () => {
+      const appId = getMockConfigManager().getCurrentAppId()!;
+      nockControl()
+        .get(`/v1/apps/${appId}/resources`)
+        .reply(200, [
+          mockQueue({ id: "res-1", appId, name: "test-resource" }),
+        ]);
+
+      const { stdout } = await runCommand(
+        ["topic:action"],
+        import.meta.url,
+      );
+
+      expect(stdout).toContain("test-resource");
+    });
 
-    expect(stdout).toContain("created");
+    it("should output JSON format when --json flag is used", async () => {
+      const appId = getMockConfigManager().getCurrentAppId()!;
+      nockControl()
+        .get(`/v1/apps/${appId}/resources`)
+        .reply(200, [
+          mockQueue({ id: "res-1", appId, name: "test-resource" }),
+        ]);
+
+      const { stdout } = await runCommand(
+        ["topic:action", "--json"],
+        import.meta.url,
+      );
+
+      const result = JSON.parse(stdout);
+      expect(result).toHaveProperty("type", "result");
+      expect(result).toHaveProperty("command", "topic:action");
+      expect(result).toHaveProperty("success", true);
+    });
   });
 
-  it("should handle API errors", async () => {
-    const appId = getMockConfigManager().getCurrentAppId()!;
-    nock("https://control.ably.net")
-      .post(`/v1/apps/${appId}/resources`)
-      .reply(400, { error: "Bad request" });
-
-    const { error } = await runCommand(
-      ["topic:action", "--flag", "value"],
-      import.meta.url,
-    );
+  describe("error handling", () => {
+    // Standard 401/500/network error tests — call INSIDE the describe block
+    standardControlApiErrorTests({
+      commandArgs: ["topic:action"],
+      importMetaUrl: import.meta.url,
+      setupNock: (scenario) => {
+        const appId = getMockConfigManager().getCurrentAppId()!;
+        const scope = nockControl().get(`/v1/apps/${appId}/resources`);
+        if (scenario === "401") scope.reply(401, { error: "Unauthorized" });
+        else if (scenario === "500")
+          scope.reply(500, { error: "Internal Server Error" });
+        else scope.replyWithError("Network error");
+      },
+    });
 
-    expect(error).toBeDefined();
+    // Command-specific error tests go alongside the standard ones
+    it("should handle 403 forbidden error", async () => {
+      const appId = getMockConfigManager().getCurrentAppId()!;
+      nockControl()
+        .get(`/v1/apps/${appId}/resources`)
+        .reply(403, { error: "Forbidden" });
+
+      const { error } = await runCommand(
+        ["topic:action"],
+        import.meta.url,
+      );
+      expect(error).toBeDefined();
+      expect(error?.message).toMatch(/403/);
+    });
   });
 });
 ```
 
@@ -264,11 +441,112 @@ describe.skipIf(SHOULD_SKIP_E2E)("topic:action CRUD E2E", { timeout: 30_000 }, (
 
 ---
 
+## Standard Test Generators
+
+The file `test/helpers/standard-tests.ts` provides generator functions that produce the standard describe blocks, reducing boilerplate:
+
+- **`standardHelpTests(command, importMetaUrl)`** — generates the `"help"` describe block, verifying `--help` output contains USAGE
+- **`standardArgValidationTests(command, importMetaUrl, options?)`** — generates the `"argument validation"` block. Tests unknown flag rejection. If `options.requiredArgs` is provided, also tests that missing args produce an error.
+- **`standardFlagTests(command, importMetaUrl, flags)`** — generates the `"flags"` block, verifying each flag in the array appears in `--help` output
+- **`standardControlApiErrorTests(opts)`** — generates 401/500/network error tests for Control API commands. Call **inside** a `describe("error handling", ...)` block (does NOT create the describe block itself). Takes `{ commandArgs, importMetaUrl, setupNock }` where `setupNock(scenario)` receives `"401"`, `"500"`, or `"network"`.
+
+Call the generators at describe-block level (not inside nested describes). You still need to write `"functionality"` and `"error handling"` blocks manually since those are command-specific. For Control API commands, combine `standardControlApiErrorTests()` with command-specific error tests inside the same `describe("error handling", ...)` block.
+
+### Control API Test Helpers
+
+The file `test/helpers/control-api-test-helpers.ts` provides shared helpers for nock-based Control API tests:
+
+- **`nockControl()`** — returns a `nock` scope pre-configured for `https://control.ably.net`
+- **`getControlApiContext()`** — returns `{ appId, accountId, mock }` from `MockConfigManager`
+- **`controlApiCleanup()`** — calls `nock.cleanAll()` for use in `afterEach` hooks
+- **`CONTROL_HOST`** — the default Control API host constant (`"https://control.ably.net"`)
+
+### Mock Factory Functions
+
+The file `test/fixtures/control-api.ts` provides factory functions for realistic Control API response bodies. Each accepts an optional `Partial` to override any field:
+
+- **`mockApp(overrides?)`** — mock app object (id, name, status, tlsOnly, etc.)
+- **`mockKey(overrides?)`** — mock API key object (id, key, capability, etc.)
+- **`mockRule(overrides?)`** — mock integration rule object (ruleType, source, target, etc.)
+- **`mockQueue(overrides?)`** — mock queue object (name, region, state, messages, stats, amqp, stomp, etc.)
+- **`mockNamespace(overrides?)`** — mock namespace object (id, persisted, pushEnabled, etc.)
+- **`mockStats(overrides?)`** — mock stats object (intervalId, unit, all.messages, etc.)
+
+### NDJSON Test Helpers
+
+The file `test/helpers/ndjson.ts` provides helpers for testing JSON output:
+
+- **`parseNdjsonLines(stdout)`** — parse stdout containing one JSON object per line into an array of records
+- **`parseLogLines(lines)`** — parse an array of log lines into JSON records (skips non-JSON)
+- **`captureJsonLogs(fn)`** — capture all `console.log` output from an async function and parse as JSON records. Use to verify JSON envelope structure in `--json` mode.
+
+---
+
 ## Test Structure
 
-Always include these describe blocks:
-1. `help` — verify `--help` shows USAGE, key flags, and EXAMPLES
-2. `argument validation` — verify required args produce errors when missing
-3. `functionality` — core behavior tests
-4. `flags` — verify key flags are accepted and configured
-5. `error handling` — API errors, invalid input
+Every test file MUST include all 5 of these describe blocks (exact names — no variants):
+
+1. **`help`** — verify `--help` shows USAGE, key flags, and EXAMPLES
+2. **`argument validation`** — verify required args produce errors when missing. For commands with no required args, test that unknown flags are rejected.
+3. **`functionality`** — core happy-path behavior tests. This is where the main command logic is tested.
+4. **`flags`** — verify key flags are accepted and configured (check help output contains flag names, test flag behavior)
+5. **`error handling`** — API errors, invalid input, network failures
+
+### Block naming rules
+- Use EXACTLY these names: `"help"`, `"argument validation"`, `"functionality"`, `"flags"`, `"error handling"`
+- Do NOT use variants like `"command arguments and flags"`, `"command flags"`, `"flag options"`, `"parameter validation"`, or domain-specific names like `"subscription behavior"`
+- Additional describe blocks beyond the 5 required ones are fine (e.g., `"JSON output"`, `"cleanup behavior"`)
+
+### What goes in each block
+
+**`argument validation`** for commands WITH required args:
+```typescript
+describe("argument validation", () => {
+  it("should require channel argument", async () => {
+    const { error } = await runCommand(["topic:action"], import.meta.url);
+    expect(error?.message).toMatch(/Missing .* required arg/i);
+  });
+});
+```
+
+**`argument validation`** for commands WITHOUT required args:
+```typescript
+describe("argument validation", () => {
+  it("should reject unknown flags", async () => {
+    const { error } = await runCommand(
+      ["topic:action", "--unknown-flag-xyz"],
+      import.meta.url,
+    );
+    expect(error).toBeDefined();
+    expect(error?.message).toMatch(/unknown|Nonexistent flag/i);
+  });
+});
+```
+
+**`argument validation`** for topic commands (that route to subcommands):
+```typescript
+describe("argument validation", () => {
+  it("should handle unknown subcommand gracefully", async () => {
+    const { stdout } = await runCommand(["topic", "nonexistent"], import.meta.url);
+    expect(stdout).toBeDefined();
+  });
+});
+```
+
+**`flags`** block pattern:
+```typescript
+describe("flags", () => {
+  it("should show available flags in help", async () => {
+    const { stdout } = await runCommand(["topic:action", "--help"], import.meta.url);
+    expect(stdout).toContain("--json");
+  });
+
+  it("should reject unknown flags", async () => {
+    const { error } = await runCommand(
+      ["topic:action", "test-arg", "--unknown-flag"],
+      import.meta.url,
+    );
+    expect(error).toBeDefined();
+  });
+});
+```
diff --git a/.claude/skills/ably-review/SKILL.md b/.claude/skills/ably-review/SKILL.md
index 8a91d716..96f1458f 100644
--- a/.claude/skills/ably-review/SKILL.md
+++ b/.claude/skills/ably-review/SKILL.md
@@ -121,10 +121,23 @@ For each changed command file, run the relevant checks. Spawn agents for paralle
 
 ### For changed test files (`test/unit/commands/**/*.ts`)
 
-1. **Grep** for `describe(` to check for required describe blocks: `help`, `argument validation`, `functionality`, `flags`, `error handling`
+1. **Grep** for `describe(` to check for the 5 required describe blocks with EXACT standard names:
+   - `describe("help"` — required in every test file
+   - `describe("argument validation"` — required (test required args OR unknown flag rejection)
+   - `describe("functionality"` — required (core happy-path tests)
+   - `describe("flags"` — required (verify flags exist and work)
+   - `describe("error handling"` — required (API errors, network failures)
+   Flag any non-standard variants: `"command arguments and flags"`, `"command flags"`, `"flag options"`, `"parameter validation"`. These must use the exact standard names above.
+   Exception: `interactive.test.ts`, `interactive-sigint.test.ts`, and `bench/*.test.ts` are exempt (REPL/benchmark tests, not command tests).
 2. **Grep** for `getMockAblyRealtime`, `getMockAblyRest`, `getMockConfigManager` to verify correct mock usage
-3. **Grep** for `--duration` in subscribe test files
+3. **Grep** for `--duration` in unit test `runCommand()` args — should NOT be present (env var handles it). Exceptions: `test:wait` tests, `interactive-sigint` tests, help output checks.
 4. **Grep** for `--api-key`, `--token`, `--access-token` — unit tests should not use CLI auth flags
+5. **Check** for use of shared test helpers where applicable:
+   - Control API tests should consider using `nockControl()`, `getControlApiContext()`, `controlApiCleanup()` from `test/helpers/control-api-test-helpers.ts` instead of manual nock setup
+   - Control API tests should consider using mock factories (`mockApp()`, `mockKey()`, `mockRule()`, `mockQueue()`, `mockNamespace()`, `mockStats()`) from `test/fixtures/control-api.ts` instead of inline response objects
+   - Tests with boilerplate help/arg-validation/flags blocks should consider using `standardHelpTests()`, `standardArgValidationTests()`, `standardFlagTests()` from `test/helpers/standard-tests.ts`
+   - Control API error handling blocks should use `standardControlApiErrorTests()` from `test/helpers/standard-tests.ts` for 401/500/network error tests
+   - JSON envelope tests should use `captureJsonLogs()` from `test/helpers/ndjson.ts` instead of manual console.log spying
 
 ### For new command files (added, not modified)
diff --git a/AGENTS.md b/AGENTS.md
index 8ed3d7f3..fb3e57a9 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -5,10 +5,12 @@
 
 **Run these IN ORDER for EVERY change:**
 
 ```bash
-pnpm prepare            # 1. Build + update manifest/README
-pnpm exec eslint .      # 2. Lint (MUST be 0 errors)
-pnpm test:unit          # 3. Test (at minimum)
-                        # 4. Update docs if needed
+pnpm prepare            # 1. Build + update manifest
+pnpm exec oclif readme  # 2. Regenerate README.md from command metadata
+pnpm exec eslint .      # 3. Lint (MUST be 0 errors)
+pnpm test:unit          # 4. Test (at minimum)
+pnpm test:tty           # 5. TTY tests (local only, skip in CI)
+                        # 6. Update docs if needed
 ```
 
 **If you skip these steps, the work is NOT complete.**
@@ -164,16 +166,35 @@ runCommand(["stats", "account"], {
 });
 ```
 
+**Duration in tests — do NOT use `--duration` in unit/integration tests:**
+Unit and integration tests set `ABLY_CLI_DEFAULT_DURATION: "0.25"` in `vitest.config.ts`, which makes all subscribe/long-running commands auto-exit after 250ms. Do NOT pass `--duration` to `runCommand()` — it overrides the fast 250ms default with a slower explicit value.
+
+Exceptions:
+- `test:wait` command tests — `--duration` is a required flag for that command
+- `interactive-sigint.test.ts` — needs a longer duration for SIGINT testing
+- Help output checks — testing that `--help` mentions `--duration` is fine
+
 **Test structure:**
 - `test/unit/` — Fast, mocked tests. Auth via `MockConfigManager` (automatic). Only set `ABLY_API_KEY` env var when testing env var override behavior.
 - `test/integration/` — Integration tests (e.g., interactive mode). Mocked external services but tests multi-component interaction.
+- `test/tty/` — Interactive mode tests requiring a real pseudo-TTY (e.g., SIGINT/Ctrl+C with readline). Uses `node-pty`. Local only — cannot run in CI.
 - `test/e2e/` — Full scenarios against real Ably. Auth via env vars (`ABLY_API_KEY`, `ABLY_ACCESS_TOKEN`).
 - Helpers in `test/helpers/` — `runCommand()`, `runLongRunningBackgroundProcess()`, `e2e-test-helper.ts`, `mock-config-manager.ts`.
 
+**Required test describe blocks** (exact names — every unit test file must have all 5):
+1. `"help"` — verify `--help` shows USAGE
+2. `"argument validation"` — test required args or unknown flag rejection
+3. `"functionality"` — core happy-path behavior
+4. `"flags"` — verify flags exist and work
+5. `"error handling"` — API errors, network failures
+
+Do NOT use variants like `"command arguments and flags"`, `"command flags"`, `"flag options"`, or `"parameter validation"`. Exempt: `interactive.test.ts`, `interactive-sigint.test.ts`, `bench/*.test.ts`.
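The exact-name rule is mechanical, so it can also be checked with a small script. The sketch below is a hypothetical checker (not a helper that exists in this repo) showing the intended matching logic:

```typescript
// Hypothetical checker (illustration only): reports missing required
// describe blocks and known non-standard variants in a test file's source.
const REQUIRED = [
  "help",
  "argument validation",
  "functionality",
  "flags",
  "error handling",
];
const KNOWN_VARIANTS = [
  "command arguments and flags",
  "command flags",
  "flag options",
  "parameter validation",
];

function auditDescribeNames(source: string): {
  missing: string[];
  nonStandard: string[];
} {
  // Collect every describe("...") / describe('...') name in the file.
  const names = [...source.matchAll(/describe\(\s*["']([^"']+)["']/g)].map(
    (m) => m[1],
  );
  return {
    missing: REQUIRED.filter((name) => !names.includes(name)),
    nonStandard: names.filter((name) => KNOWN_VARIANTS.includes(name)),
  };
}
```

A file containing `describe("command flags", ...)` would be reported with `flags` missing and `command flags` non-standard, which mirrors the grep checks described above.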
+
 **Running tests:**
 ```bash
 pnpm test:unit                            # All unit tests
 pnpm test:integration                     # Integration tests
+pnpm test:tty                             # TTY tests (local only, needs real terminal)
 pnpm test:e2e                             # All E2E tests
 pnpm test test/unit/commands/foo.test.ts  # Specific test
 ```
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index edcffa9a..34ef02ef 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -8,9 +8,9 @@ All code changes, whether features or bug fixes, **MUST** follow the mandatory w
 
 In summary, this involves:
 
-1. **Build:** Run `pnpm prepare` to compile, update manifests, and update the README.
+1. **Build:** Run `pnpm prepare` to compile and update manifests, then `pnpm exec oclif readme` to regenerate the README.
 2. **Lint:** Run `pnpm exec eslint .` and fix all errors/warnings.
-3. **Test:** Run relevant tests (`pnpm test:unit`, `pnpm test:integration`, `pnpm test:e2e`, `pnpm test:playwright`, or specific files) and ensure they pass. Add new tests or update existing ones as needed.
+3. **Test:** Run relevant tests (`pnpm test:unit`, `pnpm test:integration`, `pnpm test:e2e`, `pnpm test:playwright`, or specific files) and ensure they pass. For interactive mode changes, also run `pnpm run test:tty` (requires a real terminal, not run in CI). Add new tests or update existing ones as needed.
 4. **Document:** Update all relevant documentation (`docs/`, `AGENTS.md`, `README.md`) to reflect your changes.
 
 **Pull requests will not be merged unless all these steps are completed and verified.**
diff --git a/README.md b/README.md
index 7379c57a..e2bb1dbb 100644
--- a/README.md
+++ b/README.md
@@ -429,6 +429,8 @@ EXAMPLES
 
   $ ably apps channel-rules create --name "chat" --persisted
 
+  $ ably apps channel-rules update chat --mutable-messages
+
   $ ably apps channel-rules update chat --push-enabled
 
   $ ably apps channel-rules delete chat
@@ -444,8 +446,8 @@ Create a channel rule
 
 USAGE
   $ ably apps channel-rules create --name <value> [-v] [--json | --pretty-json] [--app <value>] [--authenticated] [--batching-enabled] [--batching-interval <value>] [--conflation-enabled] [--conflation-interval <value>]
-    [--conflation-key <value>] [--expose-time-serial] [--persist-last] [--persisted] [--populate-channel-registry]
-    [--push-enabled] [--tls-only]
+    [--conflation-key <value>] [--expose-time-serial] [--mutable-messages] [--persist-last] [--persisted]
+    [--populate-channel-registry] [--push-enabled] [--tls-only]
 
 FLAGS
   -v, --verbose  Output verbose logs
@@ -458,6 +460,8 @@ FLAGS
   --conflation-key=<value>         The conflation key for messages on channels matching this rule
   --expose-time-serial             Whether to expose the time serial for messages on channels matching this rule
   --json                           Output in JSON format
+  --mutable-messages               Whether messages on channels matching this rule can be updated or deleted after
+                                   publishing. Automatically enables message persistence.
   --name=<value>                   (required) Name of the channel rule
   --persist-last                   Whether to persist only the last message on channels matching this rule
   --persisted                      Whether messages on channels matching this rule should be persisted
@@ -472,9 +476,13 @@ DESCRIPTION
 
 EXAMPLES
   $ ably apps channel-rules create --name "chat" --persisted
 
+  $ ably apps channel-rules create --name "chat" --mutable-messages
+
   $ ably apps channel-rules create --name "events" --push-enabled
 
   $ ably apps channel-rules create --name "notifications" --persisted --push-enabled --app "My App"
+
+  $ ably apps channel-rules create --name "chat" --persisted --json
 ```
 
 _See code: [src/commands/apps/channel-rules/create.ts](https://github.com/ably/ably-cli/blob/v0.17.0/src/commands/apps/channel-rules/create.ts)_
@@ -551,8 +559,8 @@ Update a channel rule
 
 USAGE
   $ ably apps channel-rules update NAMEORID [-v] [--json | --pretty-json] [--app <value>] [--authenticated] [--batching-enabled] [--batching-interval <value>] [--conflation-enabled] [--conflation-interval <value>]
-    [--conflation-key <value>] [--expose-time-serial] [--persist-last] [--persisted] [--populate-channel-registry]
-    [--push-enabled] [--tls-only]
+    [--conflation-key <value>] [--expose-time-serial] [--mutable-messages] [--persist-last] [--persisted]
+    [--populate-channel-registry] [--push-enabled] [--tls-only]
 
 ARGUMENTS
   NAMEORID  Name or ID of the channel rule to update
@@ -568,6 +576,8 @@ FLAGS
   --conflation-key=<value>            The conflation key for messages on channels matching this rule
   --[no-]expose-time-serial           Whether to expose the time serial for messages on channels matching this rule
   --json                              Output in JSON format
+  --[no-]mutable-messages             Whether messages on channels matching this rule can be updated or deleted after
+                                      publishing. Automatically enables message persistence.
   --[no-]persist-last                 Whether to persist only the last message on channels matching this rule
   --[no-]persisted                    Whether messages on channels matching this rule should be persisted
   --[no-]populate-channel-registry    Whether to populate the channel registry for channels matching this rule
@@ -581,9 +591,13 @@ DESCRIPTION
 
 EXAMPLES
   $ ably apps channel-rules update chat --persisted
 
+  $ ably apps channel-rules update chat --mutable-messages
+
   $ ably apps channel-rules update events --push-enabled=false
 
   $ ably apps channel-rules update notifications --persisted --push-enabled --app "My App"
+
+  $ ably apps channel-rules update chat --persisted --json
 ```
 
 _See code: [src/commands/apps/channel-rules/update.ts](https://github.com/ably/ably-cli/blob/v0.17.0/src/commands/apps/channel-rules/update.ts)_
@@ -611,6 +625,8 @@ EXAMPLES
 
   $ ably apps create --name "My New App" --tls-only
 
+  $ ably apps create --name "My New App" --json
+
   $ ABLY_ACCESS_TOKEN="YOUR_ACCESS_TOKEN" ably apps create --name "My New App"
 
@@ -736,6 +752,8 @@ EXAMPLES
 
   $ ably apps set-apns-p12 app-id --certificate /path/to/certificate.p12 --password "YOUR_CERTIFICATE_PASSWORD"
 
   $ ably apps set-apns-p12 app-id --certificate /path/to/certificate.p12 --use-for-sandbox
+
+  $ ably apps set-apns-p12 app-id --certificate /path/to/certificate.p12 --json
 ```
 
 _See code: [src/commands/apps/set-apns-p12.ts](https://github.com/ably/ably-cli/blob/v0.17.0/src/commands/apps/set-apns-p12.ts)_
@@ -763,6 +781,8 @@ EXAMPLES
 
   $ ably apps switch APP_ID
 
   $ ably apps switch
+
+  $ ably apps switch APP_ID --json
 ```
 
 _See code: [src/commands/apps/switch.ts](https://github.com/ably/ably-cli/blob/v0.17.0/src/commands/apps/switch.ts)_
@@ -793,7 +813,7 @@ EXAMPLES
 
   $ ably apps update app-id --tls-only
 
-  $ ably apps update app-id --name "Updated App Name" --tls-only
+  $ ably apps update app-id --name "Updated App Name" --json
 
   $ ABLY_ACCESS_TOKEN="YOUR_ACCESS_TOKEN" ably apps update app-id --name "Updated App Name"
 ```
@@ -1135,6 +1155,8 @@ EXAMPLES
 
   $
ably auth keys switch APP_ID.KEY_ID $ ably auth keys switch KEY_ID --app APP_ID + + $ ably auth keys switch --json ``` _See code: [src/commands/auth/keys/switch.ts](https://github.com/ably/ably-cli/blob/v0.17.0/src/commands/auth/keys/switch.ts)_ @@ -1168,6 +1190,8 @@ EXAMPLES $ ably auth keys update KEY_ID --app APP_ID --capabilities "publish,subscribe" $ ably auth keys update APP_ID.KEY_ID --name "New Name" --capabilities "publish,subscribe" + + $ ably auth keys update APP_ID.KEY_ID --name "New Name" --json ``` _See code: [src/commands/auth/keys/update.ts](https://github.com/ably/ably-cli/blob/v0.17.0/src/commands/auth/keys/update.ts)_ @@ -1178,7 +1202,7 @@ Revoke a token ``` USAGE - $ ably auth revoke-token TOKEN [-v] [--json | --pretty-json] [--app ] [-c ] [--debug] + $ ably auth revoke-token TOKEN [-v] [--json | --pretty-json] [--app ] [-c ] ARGUMENTS TOKEN Token to revoke @@ -1187,7 +1211,6 @@ FLAGS -c, --client-id= Client ID to revoke tokens for -v, --verbose Output verbose logs --app= The app ID or name (defaults to current app) - --debug Show debug information --json Output in JSON format --pretty-json Output in colorized JSON format @@ -1266,8 +1289,8 @@ Run a publisher benchmark test ``` USAGE - $ ably bench publisher CHANNEL [-v] [--json | --pretty-json] [--message-size ] [-m ] [-r ] [-t - rest|realtime] [--wait-for-subscribers] + $ ably bench publisher CHANNEL [-v] [--json | --pretty-json] [--client-id ] [--message-size ] [-m + ] [-r ] [-t rest|realtime] [--wait-for-subscribers] ARGUMENTS CHANNEL The channel name to publish to @@ -1278,6 +1301,8 @@ FLAGS -t, --transport=