
Add first-class gpt-5.4-mini support#83

Merged
ndycode merged 6 commits into ndycode:main from sdip15fa:feat/gpt-5.4-mini-support
Mar 18, 2026

Conversation


@sdip15fa sdip15fa commented Mar 18, 2026

Summary

  • add gpt-5.4-mini as a first-class model family across normalization, prompt-family routing, config templates, and docs
  • add regression coverage for model mapping, prompt caching, request transformation, and reasoning behavior
  • preserve legacy lightweight alias behavior so gpt-5-mini still maps to gpt-5.4 without inheriting first-class mini semantics

Verification

  • npm run typecheck
  • npm run build
  • npm test

Summary by CodeRabbit

  • New Features

    • Added GPT‑5.4 Mini as a first-class model option with five reasoning-effort variants (none, low, medium, high, xhigh) and an OAuth-capable mini model.
  • Documentation

    • Updated configuration docs and context sizing examples to include GPT‑5.4 Mini and its normalization aliases.
  • Tests

    • Expanded test coverage to validate Mini normalization, reasoning configurations, and model mapping.

Note: Greptile review for oc-chatgpt-multi-auth. Cite files like lib/foo.ts:123. Confirm regression tests plus Windows concurrency/token redaction coverage.

Greptile Summary

this pr promotes gpt-5.4-mini to a first-class model family across normalization, prompt routing, reasoning config, config templates, and docs — correctly isolated from both the base gpt-5.4 family and the legacy lightweight aliases (gpt-5-mini, gpt-5-nano) that still map to gpt-5.4 without inheriting mini's semantics.

key changes:

  • normalizeModel and getModelFamily both correctly check for -mini suffix before falling through to the base gpt-5.4 regex, preventing misclassification
  • getReasoningConfig uses canonicalModelName === "gpt-5.4-mini" to gate isGpt54Mini, and supportsNone / supportsXhigh / canonicalSupportsXhigh are all extended correctly
  • isLightweight regex tightened to /\bgpt(?:-| )5(?:-| )(?:mini|nano)(?:\b|[- ])/ — correctly matches gpt-5-mini/gpt-5-nano but not gpt-5.4-mini (also guarded by !isGpt54Mini)
  • prewarmCodexInstructions adds gpt-5.4-mini to its default candidate list; isolated cache file gpt-5.4-mini-instructions.md confirmed by test
  • config templates (legacy + modern) add all 5 reasoning-effort variants; high/xhigh correctly set reasoningSummary: detailed
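the ordering described above matters: the -mini check has to run before the base gpt-5.4 regex, or mini variants would be swallowed by the broader pattern. a minimal sketch of that routing, with a hypothetical function name (the real normalizeModel handles more families than shown):

```typescript
// Illustrative sketch of the normalization order, not the merged code.
function normalizeModelSketch(raw: string): string {
  const normalized = raw.toLowerCase().replace(/^.*\//, ""); // strip provider prefix
  if (normalized.includes("codex")) return "gpt-5-codex";
  if (/\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/.test(normalized)) return "gpt-5.4-pro";
  // mini must be checked before the base 5.4 pattern
  if (/\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(normalized)) return "gpt-5.4-mini";
  if (/\bgpt(?:-| )5\.4(?:\b|[- ])/.test(normalized)) return "gpt-5.4";
  // legacy lightweight aliases still collapse to the base family
  if (/\bgpt(?:-| )5(?:-| )(?:mini|nano)(?:\b|[- ])/.test(normalized)) return "gpt-5.4";
  return normalized;
}
```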

gaps to address:

  • gpt-5-nano none-blocking behavior is correct at runtime but has no vitest test — add one symmetric with the gpt-5-mini test introduced in this PR
  • GPT_54_SNAPSHOT_DATE is shared for mini with no comment — document or split if mini uses a different snapshot cadence
  • per repo policy (windows concurrency rule): pr description does not explain windows filesystem concurrency protection or logging redaction strategy. even though this pr does not touch storage, token, or audit-log code, the policy requires an explicit acknowledgement (e.g. "no storage paths touched, no token leakage vector introduced") — this is missing from the pr description

Confidence Score: 4/5

  • safe to merge with minor follow-ups: add gpt-5-nano none-blocking test and clarify snapshot date reuse
  • core routing and reasoning config logic is correct and well-tested; the only logic gap is a missing gpt-5-nano regression test (the runtime behavior is already correct); snapshot date sharing is a low-risk documentation gap; pr description omits the required windows concurrency policy acknowledgement
  • test/request-transformer.test.ts (missing gpt-5-nano test), lib/request/helpers/model-map.ts (undocumented snapshot date reuse)

Important Files Changed

Filename Overview
lib/request/request-transformer.ts correctly gates isGpt54Mini via canonical check; supportsNone and supportsXhigh extended; isLightweight regex correctly excludes gpt-5.4-mini; gpt-5-mini none-blocking is tested but gpt-5-nano has no analogous test
lib/request/helpers/model-map.ts clean addition of all 6 gpt-5.4-mini variants; uses shared GPT_54_SNAPSHOT_DATE with no comment explaining intentional reuse — risk if mini has a different snapshot cadence
config/opencode-legacy.json adds 5 gpt-5.4-mini oauth variants; gpt-5.4-mini-none correctly sets reasoningEffort none (now supported as first-class); high/xhigh correctly use reasoningSummary detailed
test/request-transformer.test.ts getReasoningConfig block added with gpt-5.4-mini first-class and gpt-5-mini legacy tests; gpt-5-nano none-blocking has no test; scattered indentation inconsistencies introduced throughout
test/gpt54-models.test.ts comprehensive gpt-5.4-mini normalization, family detection, reasoning config, and coexistence coverage added; MODEL_MAP count and regex updated correctly

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["raw model string\ne.g. 'openai/gpt-5.4-mini-high'"] --> B["normalizeModel()"]
    B --> C{"contains 'codex'?"}
    C -- yes --> D["→ gpt-5-codex"]
    C -- no --> E{"matches gpt-5.4-pro regex?"}
    E -- yes --> F["→ gpt-5.4-pro"]
    E -- no --> G{"matches gpt-5.4-mini regex?\n/gpt(?:-| )5\\.4(?:-| )mini/"}
    G -- yes --> H["→ gpt-5.4-mini ✨ NEW"]
    G -- no --> I{"matches gpt-5.4 regex?"}
    I -- yes --> J["→ gpt-5.4"]
    I -- no --> K["...other families..."]

    H --> L["getReasoningConfig()"]
    J --> L
    L --> M{"isGpt54Mini?\ncanonicalModelName === 'gpt-5.4-mini'"}
    M -- yes --> N["supportsNone = true\nsupportsXhigh = true\ndefaultEffort = 'high'"]
    M -- no --> O{"isLightweight?\n/gpt-5-mini|nano/ regex\n!isGpt54Mini guard"}
    O -- yes --> P["supportsNone = false\n→ 'none' upgraded to 'low'"]
    O -- no --> Q["normal gpt-5.4 path"]
```
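the effort gating in the lower half of the flowchart can be reduced to a few lines. this is an illustrative sketch, not the merged implementation — the function name and the regexes standing in for the canonical check are assumptions:

```typescript
// Sketch of the "none"-upgrade branch: first-class mini keeps the full effort
// range, while legacy lightweight aliases have "none" upgraded to "low".
type Effort = "none" | "low" | "medium" | "high" | "xhigh";

function resolveEffort(rawModel: string, requested: Effort): Effort {
  const name = rawModel.toLowerCase();
  const isGpt54Mini = /\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(name);
  const isLightweight =
    !isGpt54Mini && /\bgpt(?:-| )5(?:-| )(?:mini|nano)(?:\b|[- ])/.test(name);
  if (isLightweight && requested === "none") return "low"; // legacy aliases reject "none"
  return requested; // first-class mini passes through, including "none" and "xhigh"
}
```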
Prompt To Fix All With AI
This is a comment left during a code review.
Path: test/request-transformer.test.ts
Line: 1372-1375

Comment:
**missing `gpt-5-nano` legacy none-blocking test**

`gpt-5-mini` is now explicitly covered, but `gpt-5-nano` shares the same `isLightweight` regex path and has no analogous assertion. if the regex changes, `gpt-5-nano` silently starts accepting `none` with no test catching it. add a parallel case:

```suggestion
		it('should downgrade legacy gpt-5-mini none requests to low', () => {
			expect(getReasoningConfig('gpt-5-mini', { reasoningEffort: 'none' }).effort).toBe('low');
		});

		it('should downgrade legacy gpt-5-nano none requests to low', () => {
			expect(getReasoningConfig('gpt-5-nano', { reasoningEffort: 'none' }).effort).toBe('low');
		});
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: lib/request/helpers/model-map.ts
Line: 292

Comment:
**shared snapshot date for gpt-5.4-mini**

`gpt-5.4-mini` reuses `GPT_54_SNAPSHOT_DATE = "2026-03-05"` from the base and pro variants. if gpt-5.4-mini ships with a different snapshot cadence, these dated aliases will silently map to the wrong model. if the snapshot date is confirmed identical for mini, a comment here explaining the intentional reuse would prevent future confusion. if it differs, a separate `GPT_54_MINI_SNAPSHOT_DATE` constant should be introduced.

How can I resolve this? If you propose a fix, please make it concise.
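one low-cost resolution is to make the reuse explicit in code. the constant names below follow the review comment; whether mini actually shares the snapshot cadence is an assumption to verify:

```typescript
// Illustrative only: document the intentional reuse, or split the constant
// the moment mini's snapshot cadence diverges from base/pro.
const GPT_54_SNAPSHOT_DATE = "2026-03-05";
// gpt-5.4-mini currently ships on the same snapshot date as base and pro;
// introduce a separate date here if that ever changes.
const GPT_54_MINI_SNAPSHOT_DATE = GPT_54_SNAPSHOT_DATE;
```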

Last reviewed commit: "fix(models): address..."

Context used:

  • Rule used - What: Every code change must explain how it defend... (source)

Copilot AI review requested due to automatic review settings March 18, 2026 10:02
@sdip15fa sdip15fa requested a review from ndycode as a code owner March 18, 2026 10:02
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Credits must be used to enable repository-wide code reviews.


coderabbitai bot commented Mar 18, 2026

📝 Walkthrough

Walkthrough

Adds GPT-5.4 Mini as a first-class model family (base + five effort-level variants) across configs, model normalization, prompt/cache mappings, reasoning-config logic, and tests; exports a new getReasoningConfig utility for deriving per-model reasoning settings.

Changes

Cohort / File(s) Summary
Configuration & Docs
config/opencode-legacy.json, config/opencode-modern.json, docs/configuration.md
Added gpt-5.4-mini model and its five effort-level variants; updated model listings, context examples, and normalization/alias mappings; removed one OpenCode runtime tuning line.
Prompts & Types
lib/prompts/codex.ts
Added "gpt-5.4-mini" to ModelFamily and MODEL_FAMILIES; added PROMPT_FILES and CACHE_FILES entries; updated getModelFamily and prewarm/default candidates to include mini.
Model Mapping
lib/request/helpers/model-map.ts
Extended MODEL_MAP with gpt-5.4-mini plus effort-level variants and dated alias expansion for mini variants.
Request Transformation & Reasoning Logic
lib/request/request-transformer.ts
Normalization updated to recognize gpt-5.4-mini explicitly; introduced isGpt54Mini and related branch adjustments (isGpt54General, supportsXhigh/none, isLightweight, etc.); exported getReasoningConfig(modelId, config?).
Tests
test/codex-prompts.test.ts, test/codex.test.ts, test/gpt54-models.test.ts, test/model-map.test.ts, test/request-transformer.test.ts
Added and expanded tests for normalization, family detection, prompt/cache mapping, MODEL_MAP completeness, reasoning config behavior for all effort levels, provider-prefixed IDs, and the getReasoningConfig export.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant Transformer as RequestTransformer
  participant ModelMap
  participant Prompts as PromptManager
  participant Config as ReasoningConfigStore

  Client->>Transformer: send model id (e.g., openai/gpt-5.4-mini-high)
  Transformer->>ModelMap: normalizeModel(model id)
  ModelMap-->>Transformer: normalized id (gpt-5.4-mini-high)
  Transformer->>Config: getReasoningConfig(normalized id, userConfig)
  Config-->>Transformer: reasoning settings (effort, summary)
  Transformer->>Prompts: selectPromptFamily(normalized base)
  Prompts-->>Transformer: prompt template & cache key
  Transformer-->>Client: transformed request (model, reasoning headers, prompt)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I found a mini hop in code tonight,
Five little efforts tucked in tight.
Prompts aligned and tests applaud,
Maps updated — a tidy nod.
Hooray — a nibble, then a happy bite!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Description check — ⚠️ Warning. The PR description omits the required Windows concurrency policy acknowledgement per repo guidelines, despite the policy requirement to explicitly confirm the storage/token/audit-log safety posture. Resolution: add an explicit acknowledgement to the PR description, e.g. "No storage paths, token handling, or audit-log modifications introduced; no Windows filesystem concurrency risk vectors."
✅ Passed checks (2 passed)
Title check — ✅ Passed. The title clearly and concisely describes the main objective: adding first-class gpt-5.4-mini support across the codebase, matching the core focus of the PR.
Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.



Copilot AI left a comment


Pull request overview

Adds first-class gpt-5.4-mini support across model normalization, model-family routing (Codex prompts + caches), configuration templates, and docs, with expanded regression coverage to keep legacy gpt-5-mini behavior intact.

Changes:

  • Extend normalizeModel, MODEL_MAP, and reasoning-effort rules to treat gpt-5.4-mini as a distinct GPT‑5.4 family member.
  • Route gpt-5.4-mini through Codex prompt selection with an isolated cache key/file.
  • Add/update tests, docs, and shipped config templates for the new model family.

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.

Show a summary per file
File Description
test/request-transformer.test.ts Adds normalization/config/reasoning/transformRequestBody regression tests for gpt-5.4-mini.
test/model-map.test.ts Asserts MODEL_MAP, getNormalizedModel, and isKnownModel include gpt-5.4-mini.
test/gpt54-models.test.ts Adds broad normalization/family/reasoning tests for gpt-5.4-mini.
test/codex.test.ts Verifies Codex getModelFamily recognizes gpt-5.4-mini distinctly.
test/codex-prompts.test.ts Verifies prompt routing + cache isolation for gpt-5.4-mini.
lib/request/request-transformer.ts Implements gpt-5.4-mini normalization + first-class reasoning behavior.
lib/request/helpers/model-map.ts Adds gpt-5.4-mini keys and dated alias expansion to MODEL_MAP.
lib/prompts/codex.ts Extends model families and cache/prompt files for gpt-5.4-mini.
docs/configuration.md Documents reasoning-effort support for gpt-5.4-mini + clarifies legacy aliases.
config/opencode-modern.json Adds gpt-5.4-mini as a first-class model with reasoning variants.
config/opencode-legacy.json Adds legacy preset-style entries for gpt-5.4-mini-* efforts.


Comment on lines +99 to +108
// ============================================================================
// GPT-5.4 Mini Models (latest efficient family)
// ============================================================================
"gpt-5.4-mini": "gpt-5.4-mini",
"gpt-5.4-mini-none": "gpt-5.4-mini",
"gpt-5.4-mini-low": "gpt-5.4-mini",
"gpt-5.4-mini-medium": "gpt-5.4-mini",
"gpt-5.4-mini-high": "gpt-5.4-mini",
"gpt-5.4-mini-xhigh": "gpt-5.4-mini",
...expandDatedAliases(`gpt-5.4-mini-${GPT_54_SNAPSHOT_DATE}`, "gpt-5.4-mini"),
Comment on lines +128 to +147
describe("GPT-5.4 Mini Model Normalization", () => {
it("should normalize gpt-5.4-mini base model", () => {
expect(normalizeModel("gpt-5.4-mini")).toBe("gpt-5.4-mini");
expect(getNormalizedModel("gpt-5.4-mini")).toBe("gpt-5.4-mini");
});

it("should normalize all gpt-5.4-mini reasoning effort variants", () => {
const variants = [
"gpt-5.4-mini-none",
"gpt-5.4-mini-low",
"gpt-5.4-mini-medium",
"gpt-5.4-mini-high",
"gpt-5.4-mini-xhigh",
];

for (const variant of variants) {
expect(normalizeModel(variant)).toBe("gpt-5.4-mini");
expect(getNormalizedModel(variant)).toBe("gpt-5.4-mini");
}
});
Comment on lines +119 to +128
it('should normalize gpt-5.4-mini presets as a first-class family', async () => {
expect(normalizeModel('gpt-5.4-mini')).toBe('gpt-5.4-mini');
expect(normalizeModel('gpt-5.4-mini-none')).toBe('gpt-5.4-mini');
expect(normalizeModel('gpt-5.4-mini-low')).toBe('gpt-5.4-mini');
expect(normalizeModel('gpt-5.4-mini-medium')).toBe('gpt-5.4-mini');
expect(normalizeModel('gpt-5.4-mini-high')).toBe('gpt-5.4-mini');
expect(normalizeModel('gpt-5.4-mini-xhigh')).toBe('gpt-5.4-mini');
expect(normalizeModel('openai/gpt-5.4-mini-xhigh')).toBe('gpt-5.4-mini');
expect(normalizeModel('gpt-5.4-mini-2026-03-05-high')).toBe('gpt-5.4-mini');
});

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
lib/request/request-transformer.ts (1)

87-90: Minor regex inconsistency with Pro pattern.

The outer non-capturing group (?:...) is unnecessary here. The Pro regex at line 83 doesn't have this wrapper:

  • Pro: /\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/
  • Mini: /(?:\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]))/

Functionally equivalent, but could align for consistency.

🛠️ Optional: Remove unnecessary wrapper
-	if (/(?:\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]))/.test(normalized)) {
+	if (/\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(normalized)) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/request/request-transformer.ts` around lines 87 - 90, The regex for
detecting "gpt-5.4-mini" uses an unnecessary outer non-capturing group — update
the pattern used where you return "gpt-5.4-mini" to match the style of the Pro
pattern (remove the outer (?: ... ) wrapper) so both regexes are consistent
while keeping the inner structure \bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]) intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/configuration.md`:
- Around line 55-56: Update the documentation lines that claim legacy aliases
map only to base models and that only base/pro use context=1000000: state that
legacy `gpt-5`, `gpt-5-mini`, `gpt-5-nano` aliases all map to `gpt-5.4` (and
corresponding mini snapshots map to `gpt-5.4-mini` as supported in code), ensure
snapshot patterns `gpt-5.4-2026-03-05*` and `gpt-5.4-pro-2026-03-05*` mention
their mini equivalents (e.g., `gpt-5.4-mini-2026-03-05*`) map to their stable
forms, and change the note about context limits to reflect that shipped mini
templates also use context=1000000 (not just base/pro).

---

Nitpick comments:
In `@lib/request/request-transformer.ts`:
- Around line 87-90: The regex for detecting "gpt-5.4-mini" uses an unnecessary
outer non-capturing group — update the pattern used where you return
"gpt-5.4-mini" to match the style of the Pro pattern (remove the outer (?: ... )
wrapper) so both regexes are consistent while keeping the inner structure
\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]) intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 37c9ab73-7bfa-4f76-8149-b6e29ea2f38b

📥 Commits

Reviewing files that changed from the base of the PR and between 05109fa and 40edefe.

📒 Files selected for processing (11)
  • config/opencode-legacy.json
  • config/opencode-modern.json
  • docs/configuration.md
  • lib/prompts/codex.ts
  • lib/request/helpers/model-map.ts
  • lib/request/request-transformer.ts
  • test/codex-prompts.test.ts
  • test/codex.test.ts
  • test/gpt54-models.test.ts
  • test/model-map.test.ts
  • test/request-transformer.test.ts

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/request/request-transformer.ts (1)

480-514: ⚠️ Potential issue | 🟡 Minor

Use token-aware mini detection here.

These substring checks also match the -minimal suffix that this file already accepts as a reasoning variant. That makes getReasoningConfig("gpt-5.4-minimal") look like first-class Mini, while gpt-5-minimal / gpt-5.1-minimal fall into the lightweight-alias path. normalizeModel does not classify those names that way, so the two helpers can disagree on the same input.

Possible fix
+	const hasMiniToken = /(?:^|[-_/ ])mini(?:$|[-_/ ])/.test(normalizedName);
+	const hasNanoToken = /(?:^|[-_/ ])nano(?:$|[-_/ ])/.test(normalizedName);
 	const isGpt54Mini =
 		canonicalModelName === "gpt-5.4-mini" ||
-		normalizedName.includes("gpt-5.4-mini") ||
-		normalizedName.includes("gpt 5.4 mini");
+		/\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(normalizedName);

 	const isLightweight =
 		!isGpt54Mini &&
 		!isCodexMini &&
-		(normalizedName.includes("nano") ||
-			normalizedName.includes("mini"));
+		(hasNanoToken || hasMiniToken);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/request/request-transformer.ts` around lines 480 - 514, The
substring-based "mini" checks (used in isGpt54Mini, isCodexMini, isLightweight,
etc.) incorrectly match suffixes like "-minimal"; change them to token-aware
checks: compute tokens from normalizedName (e.g., split on non-alphanumeric
boundaries) and test for the exact token "mini" (and "codex", "codex-mini"
variants as separate tokens) rather than using normalizedName.includes("mini").
Update the definitions of isGpt54Mini, isCodexMini, isCodex, and isLightweight
to use the token set (and preserve canonicalModelName checks like
canonicalModelName === "gpt-5.4-mini") so "minimal" does not count as "mini" and
the helpers agree with normalizeModel.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@lib/request/request-transformer.ts`:
- Around line 480-514: The substring-based "mini" checks (used in isGpt54Mini,
isCodexMini, isLightweight, etc.) incorrectly match suffixes like "-minimal";
change them to token-aware checks: compute tokens from normalizedName (e.g.,
split on non-alphanumeric boundaries) and test for the exact token "mini" (and
"codex", "codex-mini" variants as separate tokens) rather than using
normalizedName.includes("mini"). Update the definitions of isGpt54Mini,
isCodexMini, isCodex, and isLightweight to use the token set (and preserve
canonicalModelName checks like canonicalModelName === "gpt-5.4-mini") so
"minimal" does not count as "mini" and the helpers agree with normalizeModel.
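the difference between the substring check and the token-aware check can be demonstrated directly. the regex below is the reviewer's proposed pattern, not necessarily what was merged:

```typescript
// Token-aware check: "mini" must be delimited, so "-minimal" does not match.
const hasMiniToken = (name: string): boolean =>
  /(?:^|[-_/ ])mini(?:$|[-_/ ])/.test(name);

// For contrast, the substring check the review flags as too loose:
const hasMiniSubstring = (name: string): boolean => name.includes("mini");
```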

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: cdd609ba-dd8a-4591-b4fa-a03daa3e160f

📥 Commits

Reviewing files that changed from the base of the PR and between 40edefe and 248f4a9.

📒 Files selected for processing (1)
  • lib/request/request-transformer.ts


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/request/request-transformer.ts (1)

457-458: ⚠️ Potential issue | 🟠 Major

Provider-prefixed aliases can be misclassified in reasoning support.

At Line 457, normalizedName keeps the provider prefix, but Line 514 relies on startsWith("gpt-5-").
This makes openai/gpt-5 fail isGpt51General, which then affects supportsNone at Lines 545-549 and can incorrectly downgrade "none".

💡 Proposed fix
-export function getReasoningConfig(
+export function getReasoningConfig(
 	modelName: string | undefined,
 	userConfig: ConfigOptions = {},
 ): ReasoningConfig {
-	const normalizedName = modelName?.toLowerCase() ?? "";
+	const rawModelName = modelName?.toLowerCase() ?? "";
+	const normalizedName = rawModelName.includes("/")
+		? (rawModelName.split("/").pop() ?? rawModelName)
+		: rawModelName;
 	const canonicalModelName = normalizeModel(modelName);

Also applies to: 509-515, 545-549

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/request/request-transformer.ts` around lines 457 - 458, normalizedName
currently preserves provider prefixes (e.g., "openai/gpt-5") causing checks like
isGpt51General that use startsWith("gpt-5-") to misclassify models; update the
logic to compute normalizedName from the provider-agnostic canonical model (use
normalizeModel(modelName) or canonicalModelName) or explicitly strip provider
prefixes before performing startsWith checks so functions like isGpt51General
and the supportsNone branch use the provider-agnostic name (refer to
normalizedName, canonicalModelName, normalizeModel, and isGpt51General) to
correctly detect gpt-5 variants and avoid downgrading "none".
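the prefix-stripping step proposed above can be isolated into a tiny helper. this is a sketch of the reviewer's suggestion, not the merged code:

```typescript
// Strip a provider prefix (e.g. "openai/") so startsWith checks see the
// provider-agnostic model name.
function stripProvider(raw: string): string {
  const lower = raw.toLowerCase();
  return lower.includes("/") ? (lower.split("/").pop() ?? lower) : lower;
}
```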
🧹 Nitpick comments (1)
test/request-transformer.test.ts (1)

388-398: Add transform-level coverage for legacy alias none behavior with provider prefixes.

Great unit coverage for getReasoningConfig, but this specific regression risk is in the full transform path (lookupModel + body/provider layering).
Please add direct transformRequestBody assertions for gpt-5-mini / openai/gpt-5-mini with reasoning.effort: "none" to lock the downgrade behavior.

✅ Suggested test addition
+it('should downgrade none to low for provider-prefixed legacy mini aliases in transformRequestBody', async () => {
+	const body: RequestBody = {
+		model: 'openai/gpt-5-mini',
+		input: [],
+		reasoning: { effort: 'none', summary: 'auto' },
+	};
+	const result = await transformRequestBody(body, codexInstructions);
+	expect(result.model).toBe('gpt-5.4');
+	expect(result.reasoning?.effort).toBe('low');
+});

Also applies to: 1695-1712

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/request-transformer.test.ts` around lines 388 - 398, Add transform-level
tests that assert transformRequestBody preserves the legacy downgrade of
reasoning.effort "none" to "low" when model aliases are used; specifically call
transformRequestBody (or the full transform path that uses lookupModel and
getReasoningConfig) with model set to "gpt-5-mini" and "openai/gpt-5-mini" and a
body containing reasoning.effort: "none", then assert the resulting request body
(or assembled reasoning config) has effort === "low". Ensure the test exercises
the provider layering/lookupModel path rather than calling getReasoningConfig
directly to lock the regression.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@lib/request/request-transformer.ts`:
- Around line 457-458: normalizedName currently preserves provider prefixes
(e.g., "openai/gpt-5") causing checks like isGpt51General that use
startsWith("gpt-5-") to misclassify models; update the logic to compute
normalizedName from the provider-agnostic canonical model (use
normalizeModel(modelName) or canonicalModelName) or explicitly strip provider
prefixes before performing startsWith checks so functions like isGpt51General
and the supportsNone branch use the provider-agnostic name (refer to
normalizedName, canonicalModelName, normalizeModel, and isGpt51General) to
correctly detect gpt-5 variants and avoid downgrading "none".

---

Nitpick comments:
In `@test/request-transformer.test.ts`:
- Around line 388-398: Add transform-level tests that assert
transformRequestBody preserves the legacy downgrade of reasoning.effort "none"
to "low" when model aliases are used; specifically call transformRequestBody (or
the full transform path that uses lookupModel and getReasoningConfig) with model
set to "gpt-5-mini" and "openai/gpt-5-mini" and a body containing
reasoning.effort: "none", then assert the resulting request body (or assembled
reasoning config) has effort === "low". Ensure the test exercises the provider
layering/lookupModel path rather than calling getReasoningConfig directly to
lock the regression.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 64c8c197-c96c-44a6-a7ff-b96ee3d185b9

📥 Commits

Reviewing files that changed from the base of the PR and between 248f4a9 and bac0cd5.

📒 Files selected for processing (3)
  • docs/configuration.md
  • lib/request/request-transformer.ts
  • test/request-transformer.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/configuration.md

@ndycode ndycode self-assigned this Mar 18, 2026
@ndycode ndycode merged commit d318c82 into ndycode:main Mar 18, 2026
2 checks passed
