Add first-class gpt-5.4-mini support #83
Conversation
📝 Walkthrough

Adds GPT-5.4 Mini as a first-class model family (base + five effort-level variants) across configs, model normalization, prompt/cache mappings, reasoning-config logic, and tests; exports a new `getReasoningConfig` utility for deriving per-model reasoning settings.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Transformer as RequestTransformer
    participant ModelMap
    participant Prompts as PromptManager
    participant Config as ReasoningConfigStore
    Client->>Transformer: send model id (e.g., openai/gpt-5.4-mini-high)
    Transformer->>ModelMap: normalizeModel(model id)
    ModelMap-->>Transformer: normalized id (gpt-5.4-mini)
    Transformer->>Config: getReasoningConfig(normalized id, userConfig)
    Config-->>Transformer: reasoning settings (effort, summary)
    Transformer->>Prompts: selectPromptFamily(normalized base)
    Prompts-->>Transformer: prompt template & cache key
    Transformer-->>Client: transformed request (model, reasoning headers, prompt)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pull request overview
Adds first-class gpt-5.4-mini support across model normalization, model-family routing (Codex prompts + caches), configuration templates, and docs, with expanded regression coverage to keep legacy gpt-5-mini behavior intact.
Changes:
- Extend `normalizeModel`, `MODEL_MAP`, and reasoning-effort rules to treat `gpt-5.4-mini` as a distinct GPT‑5.4 family member.
- Route `gpt-5.4-mini` through Codex prompt selection with an isolated cache key/file.
- Add/update tests, docs, and shipped config templates for the new model family.
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| test/request-transformer.test.ts | Adds normalization/config/reasoning/transformRequestBody regression tests for gpt-5.4-mini. |
| test/model-map.test.ts | Asserts MODEL_MAP, getNormalizedModel, and isKnownModel include gpt-5.4-mini. |
| test/gpt54-models.test.ts | Adds broad normalization/family/reasoning tests for gpt-5.4-mini. |
| test/codex.test.ts | Verifies Codex getModelFamily recognizes gpt-5.4-mini distinctly. |
| test/codex-prompts.test.ts | Verifies prompt routing + cache isolation for gpt-5.4-mini. |
| lib/request/request-transformer.ts | Implements gpt-5.4-mini normalization + first-class reasoning behavior. |
| lib/request/helpers/model-map.ts | Adds gpt-5.4-mini keys and dated alias expansion to MODEL_MAP. |
| lib/prompts/codex.ts | Extends model families and cache/prompt files for gpt-5.4-mini. |
| docs/configuration.md | Documents reasoning-effort support for gpt-5.4-mini + clarifies legacy aliases. |
| config/opencode-modern.json | Adds gpt-5.4-mini as a first-class model with reasoning variants. |
| config/opencode-legacy.json | Adds legacy preset-style entries for gpt-5.4-mini-* efforts. |
```ts
// ============================================================================
// GPT-5.4 Mini Models (latest efficient family)
// ============================================================================
"gpt-5.4-mini": "gpt-5.4-mini",
"gpt-5.4-mini-none": "gpt-5.4-mini",
"gpt-5.4-mini-low": "gpt-5.4-mini",
"gpt-5.4-mini-medium": "gpt-5.4-mini",
"gpt-5.4-mini-high": "gpt-5.4-mini",
"gpt-5.4-mini-xhigh": "gpt-5.4-mini",
...expandDatedAliases(`gpt-5.4-mini-${GPT_54_SNAPSHOT_DATE}`, "gpt-5.4-mini"),
```
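The `expandDatedAliases` helper itself isn't shown in this diff. Assuming it fans a dated snapshot id (plus its effort-suffixed forms) out to the stable model name, a minimal sketch could look like this; the effort list comes from the variants above, but the implementation details are guesses, not the repo's actual code:

```typescript
// Hypothetical sketch of expandDatedAliases (the real helper lives in
// lib/request/helpers/model-map.ts and may differ). It expands a dated id
// like "gpt-5.4-mini-2026-03-05" and its effort-suffixed variants into
// MODEL_MAP entries that all resolve to the stable model name.
const EFFORT_SUFFIXES = ["none", "low", "medium", "high", "xhigh"];

function expandDatedAliases(
  datedId: string,
  stableName: string,
): Record<string, string> {
  // Start with the bare dated id, then add one entry per effort suffix.
  const entries: Record<string, string> = { [datedId]: stableName };
  for (const effort of EFFORT_SUFFIXES) {
    entries[`${datedId}-${effort}`] = stableName;
  }
  return entries;
}
```

Under that reading, the spread in the snippet above contributes six `MODEL_MAP` entries per snapshot date.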
```ts
describe("GPT-5.4 Mini Model Normalization", () => {
  it("should normalize gpt-5.4-mini base model", () => {
    expect(normalizeModel("gpt-5.4-mini")).toBe("gpt-5.4-mini");
    expect(getNormalizedModel("gpt-5.4-mini")).toBe("gpt-5.4-mini");
  });

  it("should normalize all gpt-5.4-mini reasoning effort variants", () => {
    const variants = [
      "gpt-5.4-mini-none",
      "gpt-5.4-mini-low",
      "gpt-5.4-mini-medium",
      "gpt-5.4-mini-high",
      "gpt-5.4-mini-xhigh",
    ];

    for (const variant of variants) {
      expect(normalizeModel(variant)).toBe("gpt-5.4-mini");
      expect(getNormalizedModel(variant)).toBe("gpt-5.4-mini");
    }
  });
```
test/request-transformer.test.ts (outdated review comment)
```ts
it('should normalize gpt-5.4-mini presets as a first-class family', async () => {
  expect(normalizeModel('gpt-5.4-mini')).toBe('gpt-5.4-mini');
  expect(normalizeModel('gpt-5.4-mini-none')).toBe('gpt-5.4-mini');
  expect(normalizeModel('gpt-5.4-mini-low')).toBe('gpt-5.4-mini');
  expect(normalizeModel('gpt-5.4-mini-medium')).toBe('gpt-5.4-mini');
  expect(normalizeModel('gpt-5.4-mini-high')).toBe('gpt-5.4-mini');
  expect(normalizeModel('gpt-5.4-mini-xhigh')).toBe('gpt-5.4-mini');
  expect(normalizeModel('openai/gpt-5.4-mini-xhigh')).toBe('gpt-5.4-mini');
  expect(normalizeModel('gpt-5.4-mini-2026-03-05-high')).toBe('gpt-5.4-mini');
});
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
lib/request/request-transformer.ts (1)
**87-90:** Minor regex inconsistency with Pro pattern. The outer non-capturing group `(?:...)` is unnecessary here. The Pro regex at line 83 doesn't have this wrapper:

- Pro: `/\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/`
- Mini: `/(?:\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]))/`

Functionally equivalent, but could align for consistency.
🛠️ Optional: Remove unnecessary wrapper

```diff
- if (/(?:\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]))/.test(normalized)) {
+ if (/\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(normalized)) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/request/request-transformer.ts` around lines 87 - 90, The regex for detecting "gpt-5.4-mini" uses an unnecessary outer non-capturing group — update the pattern used where you return "gpt-5.4-mini" to match the style of the Pro pattern (remove the outer (?: ... ) wrapper) so both regexes are consistent while keeping the inner structure \bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]) intact.
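The "functionally equivalent" claim is easy to verify directly. This standalone snippet (not from the repo) runs both patterns from the comment over representative model ids:

```typescript
// Both mini-detection patterns from the review comment. The outer (?:...)
// wrapper on the second changes grouping only, never which strings match.
const unwrapped = /\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/;
const wrapped = /(?:\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]))/;

// Sample ids drawn from the tests in this PR, plus near misses.
const samples = [
  "gpt-5.4-mini",
  "gpt-5.4-mini-high",
  "openai/gpt-5.4-mini-xhigh",
  "gpt 5.4 mini",
  "gpt-5.4",
  "gpt-5-mini",
];

// True when the two regexes agree on every sample.
const agree = samples.every((s) => wrapped.test(s) === unwrapped.test(s));
```

So removing the wrapper is purely a style alignment with the Pro pattern, with no behavior change.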
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/configuration.md`:
- Around line 55-56: Update the documentation lines that claim legacy aliases
map only to base models and that only base/pro use context=1000000: state that
legacy `gpt-5`, `gpt-5-mini`, `gpt-5-nano` aliases all map to `gpt-5.4` (and
corresponding mini snapshots map to `gpt-5.4-mini` as supported in code), ensure
snapshot patterns `gpt-5.4-2026-03-05*` and `gpt-5.4-pro-2026-03-05*` mention
their mini equivalents (e.g., `gpt-5.4-mini-2026-03-05*`) map to their stable
forms, and change the note about context limits to reflect that shipped mini
templates also use context=1000000 (not just base/pro).
---
Nitpick comments:
In `@lib/request/request-transformer.ts`:
- Around line 87-90: The regex for detecting "gpt-5.4-mini" uses an unnecessary
outer non-capturing group — update the pattern used where you return
"gpt-5.4-mini" to match the style of the Pro pattern (remove the outer (?: ... )
wrapper) so both regexes are consistent while keeping the inner structure
\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ]) intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 37c9ab73-7bfa-4f76-8149-b6e29ea2f38b
📒 Files selected for processing (11)
- config/opencode-legacy.json
- config/opencode-modern.json
- docs/configuration.md
- lib/prompts/codex.ts
- lib/request/helpers/model-map.ts
- lib/request/request-transformer.ts
- test/codex-prompts.test.ts
- test/codex.test.ts
- test/gpt54-models.test.ts
- test/model-map.test.ts
- test/request-transformer.test.ts
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/request/request-transformer.ts (1)
**480-514:** ⚠️ Potential issue | 🟡 Minor

Use token-aware `mini` detection here. These substring checks also match the `-minimal` suffix that this file already accepts as a reasoning variant. That makes `getReasoningConfig("gpt-5.4-minimal")` look like first-class Mini, while `gpt-5-minimal`/`gpt-5.1-minimal` fall into the lightweight-alias path. `normalizeModel` does not classify those names that way, so the two helpers can disagree on the same input.

Possible fix
```diff
+  const hasMiniToken = /(?:^|[-_/ ])mini(?:$|[-_/ ])/.test(normalizedName);
+  const hasNanoToken = /(?:^|[-_/ ])nano(?:$|[-_/ ])/.test(normalizedName);
   const isGpt54Mini =
     canonicalModelName === "gpt-5.4-mini" ||
-    normalizedName.includes("gpt-5.4-mini") ||
-    normalizedName.includes("gpt 5.4 mini");
+    /\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(normalizedName);
   const isLightweight =
     !isGpt54Mini &&
     !isCodexMini &&
-    (normalizedName.includes("nano") ||
-     normalizedName.includes("mini"));
+    (hasNanoToken || hasMiniToken);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/request/request-transformer.ts` around lines 480 - 514, The substring-based "mini" checks (used in isGpt54Mini, isCodexMini, isLightweight, etc.) incorrectly match suffixes like "-minimal"; change them to token-aware checks: compute tokens from normalizedName (e.g., split on non-alphanumeric boundaries) and test for the exact token "mini" (and "codex", "codex-mini" variants as separate tokens) rather than using normalizedName.includes("mini"). Update the definitions of isGpt54Mini, isCodexMini, isCodex, and isLightweight to use the token set (and preserve canonicalModelName checks like canonicalModelName === "gpt-5.4-mini") so "minimal" does not count as "mini" and the helpers agree with normalizeModel.
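The substring-vs-token distinction at the heart of this finding can be shown in isolation. The token regex below is taken from the suggested fix; the helper names are illustrative, not the repo's:

```typescript
// Substring check (the current behavior the comment flags) vs the
// token-aware check from the suggested fix. "gpt-5.4-minimal" contains
// "mini" as a substring, but "mini" is not a standalone token in it.
const substringMini = (name: string) => name.includes("mini");
const tokenMini = (name: string) =>
  /(?:^|[-_/ ])mini(?:$|[-_/ ])/.test(name);
```

With the token check, `gpt-5.4-mini` and `gpt-5.4-mini-high` still qualify, while `gpt-5.4-minimal` no longer does, which keeps `getReasoningConfig` in agreement with `normalizeModel`.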
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@lib/request/request-transformer.ts`:
- Around line 480-514: The substring-based "mini" checks (used in isGpt54Mini,
isCodexMini, isLightweight, etc.) incorrectly match suffixes like "-minimal";
change them to token-aware checks: compute tokens from normalizedName (e.g.,
split on non-alphanumeric boundaries) and test for the exact token "mini" (and
"codex", "codex-mini" variants as separate tokens) rather than using
normalizedName.includes("mini"). Update the definitions of isGpt54Mini,
isCodexMini, isCodex, and isLightweight to use the token set (and preserve
canonicalModelName checks like canonicalModelName === "gpt-5.4-mini") so
"minimal" does not count as "mini" and the helpers agree with normalizeModel.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: cdd609ba-dd8a-4591-b4fa-a03daa3e160f
📒 Files selected for processing (1)
lib/request/request-transformer.ts
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/request/request-transformer.ts (1)
**457-458:** ⚠️ Potential issue | 🟠 Major

Provider-prefixed aliases can be misclassified in reasoning support. At line 457, `normalizedName` keeps the provider prefix, but line 514 relies on `startsWith("gpt-5-")`. This makes `openai/gpt-5` fail `isGpt51General`, which then affects `supportsNone` at lines 545-549 and can incorrectly downgrade `"none"`.

💡 Proposed fix

```diff
 export function getReasoningConfig(
   modelName: string | undefined,
   userConfig: ConfigOptions = {},
 ): ReasoningConfig {
-  const normalizedName = modelName?.toLowerCase() ?? "";
+  const rawModelName = modelName?.toLowerCase() ?? "";
+  const normalizedName = rawModelName.includes("/")
+    ? (rawModelName.split("/").pop() ?? rawModelName)
+    : rawModelName;
   const canonicalModelName = normalizeModel(modelName);
```

Also applies to: 509-515, 545-549
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/request/request-transformer.ts` around lines 457 - 458, normalizedName currently preserves provider prefixes (e.g., "openai/gpt-5") causing checks like isGpt51General that use startsWith("gpt-5-") to misclassify models; update the logic to compute normalizedName from the provider-agnostic canonical model (use normalizeModel(modelName) or canonicalModelName) or explicitly strip provider prefixes before performing startsWith checks so functions like isGpt51General and the supportsNone branch use the provider-agnostic name (refer to normalizedName, canonicalModelName, normalizeModel, and isGpt51General) to correctly detect gpt-5 variants and avoid downgrading "none".
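Reduced to its essence, the proposed fix is a prefix-stripping step ahead of the `startsWith` checks. As a standalone helper (name and placement hypothetical, mirroring the diff's logic):

```typescript
// Strips an optional provider prefix like "openai/" so family checks such
// as startsWith("gpt-5-") see the bare model id. Lowercases first, as the
// original normalizedName did; keeps only the last path segment.
function stripProviderPrefix(modelName: string): string {
  const lower = modelName.toLowerCase();
  return lower.includes("/") ? (lower.split("/").pop() ?? lower) : lower;
}
```

With this in place, `openai/gpt-5` and `gpt-5` take the same branch in `getReasoningConfig`, so `"none"` is no longer downgraded only for the prefixed spelling.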
🧹 Nitpick comments (1)
test/request-transformer.test.ts (1)
**388-398:** Add transform-level coverage for legacy alias `none` behavior with provider prefixes. Great unit coverage for `getReasoningConfig`, but this specific regression risk is in the full transform path (`lookupModel` + body/provider layering). Please add direct `transformRequestBody` assertions for `gpt-5-mini`/`openai/gpt-5-mini` with `reasoning.effort: "none"` to lock the downgrade behavior.

✅ Suggested test addition

```diff
+it('should downgrade none to low for provider-prefixed legacy mini aliases in transformRequestBody', async () => {
+  const body: RequestBody = {
+    model: 'openai/gpt-5-mini',
+    input: [],
+    reasoning: { effort: 'none', summary: 'auto' },
+  };
+  const result = await transformRequestBody(body, codexInstructions);
+  expect(result.model).toBe('gpt-5.4');
+  expect(result.reasoning?.effort).toBe('low');
+});
```

Also applies to: 1695-1712
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/request-transformer.test.ts` around lines 388 - 398, Add transform-level tests that assert transformRequestBody preserves the legacy downgrade of reasoning.effort "none" to "low" when model aliases are used; specifically call transformRequestBody (or the full transform path that uses lookupModel and getReasoningConfig) with model set to "gpt-5-mini" and "openai/gpt-5-mini" and a body containing reasoning.effort: "none", then assert the resulting request body (or assembled reasoning config) has effort === "low". Ensure the test exercises the provider layering/lookupModel path rather than calling getReasoningConfig directly to lock the regression.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@lib/request/request-transformer.ts`:
- Around line 457-458: normalizedName currently preserves provider prefixes
(e.g., "openai/gpt-5") causing checks like isGpt51General that use
startsWith("gpt-5-") to misclassify models; update the logic to compute
normalizedName from the provider-agnostic canonical model (use
normalizeModel(modelName) or canonicalModelName) or explicitly strip provider
prefixes before performing startsWith checks so functions like isGpt51General
and the supportsNone branch use the provider-agnostic name (refer to
normalizedName, canonicalModelName, normalizeModel, and isGpt51General) to
correctly detect gpt-5 variants and avoid downgrading "none".
---
Nitpick comments:
In `@test/request-transformer.test.ts`:
- Around line 388-398: Add transform-level tests that assert
transformRequestBody preserves the legacy downgrade of reasoning.effort "none"
to "low" when model aliases are used; specifically call transformRequestBody (or
the full transform path that uses lookupModel and getReasoningConfig) with model
set to "gpt-5-mini" and "openai/gpt-5-mini" and a body containing
reasoning.effort: "none", then assert the resulting request body (or assembled
reasoning config) has effort === "low". Ensure the test exercises the provider
layering/lookupModel path rather than calling getReasoningConfig directly to
lock the regression.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 64c8c197-c96c-44a6-a7ff-b96ee3d185b9
📒 Files selected for processing (3)
- docs/configuration.md
- lib/request/request-transformer.ts
- test/request-transformer.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/configuration.md
Summary

- `gpt-5.4-mini` as a first-class model family across normalization, prompt-family routing, config templates, and docs
- `gpt-5-mini` still maps to `gpt-5.4` without inheriting first-class mini semantics

Verification

- `npm run typecheck`
- `npm run build`
- `npm test`

Summary by CodeRabbit
New Features
Documentation
Tests
note: greptile review for oc-chatgpt-multi-auth. cite files like `lib/foo.ts:123`. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
this PR promotes `gpt-5.4-mini` to a first-class model family across normalization, prompt routing, reasoning config, config templates, and docs, correctly isolated from both the base `gpt-5.4` family and the legacy lightweight aliases (`gpt-5-mini`, `gpt-5-nano`) that still map to `gpt-5.4` without inheriting mini's semantics.

key changes:

- `normalizeModel` and `getModelFamily` both correctly check for the `-mini` suffix before falling through to the base `gpt-5.4` regex, preventing misclassification
- `getReasoningConfig` uses `canonicalModelName === "gpt-5.4-mini"` to gate `isGpt54Mini`, and `supportsNone`/`supportsXhigh`/`canonicalSupportsXhigh` are all extended correctly
- `isLightweight` regex tightened to `/\bgpt(?:-| )5(?:-| )(?:mini|nano)(?:\b|[- ])/`, which correctly matches `gpt-5-mini`/`gpt-5-nano` but not `gpt-5.4-mini` (also guarded by `!isGpt54Mini`)
- `prewarmCodexInstructions` adds `gpt-5.4-mini` to its default candidate list; isolated cache file `gpt-5.4-mini-instructions.md` confirmed by test
- `high`/`xhigh` correctly set `reasoningSummary: detailed`

gaps to address:

- `gpt-5-nano` `none`-blocking behavior is correct at runtime but has no vitest test symmetric with the `gpt-5-mini` test added in this PR
- `GPT_54_SNAPSHOT_DATE` is shared for mini with no comment; document or split if mini uses a different snapshot cadence

Confidence Score: 4/5
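As a quick sanity check, the tightened lightweight-alias pattern quoted in the summary behaves as claimed (helper name illustrative, not from the repo):

```typescript
// The isLightweight regex from the Greptile summary: "5" must be followed
// by a separator ("-" or space), so "gpt-5.4-mini" never matches, while
// legacy "gpt-5-mini" / "gpt-5-nano" ids (with or without suffixes) do.
const isLightweightAlias = (name: string) =>
  /\bgpt(?:-| )5(?:-| )(?:mini|nano)(?:\b|[- ])/.test(name);
```

This is why the extra `!isGpt54Mini` guard is belt-and-braces rather than load-bearing for the stable ids.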
Important Files Changed
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["raw model string\ne.g. 'openai/gpt-5.4-mini-high'"] --> B["normalizeModel()"]
    B --> C{"contains 'codex'?"}
    C -- yes --> D["→ gpt-5-codex"]
    C -- no --> E{"matches gpt-5.4-pro regex?"}
    E -- yes --> F["→ gpt-5.4-pro"]
    E -- no --> G{"matches gpt-5.4-mini regex?\n/gpt(?:-| )5\\.4(?:-| )mini/"}
    G -- yes --> H["→ gpt-5.4-mini ✨ NEW"]
    G -- no --> I{"matches gpt-5.4 regex?"}
    I -- yes --> J["→ gpt-5.4"]
    I -- no --> K["...other families..."]
    H --> L["getReasoningConfig()"]
    J --> L
    L --> M{"isGpt54Mini?\ncanonicalModelName === 'gpt-5.4-mini'"}
    M -- yes --> N["supportsNone = true\nsupportsXhigh = true\ndefaultEffort = 'high'"]
    M -- no --> O{"isLightweight?\n/gpt-5-mini|nano/ regex\n!isGpt54Mini guard"}
    O -- yes --> P["supportsNone = false\n→ 'none' upgraded to 'low'"]
    O -- no --> Q["normal gpt-5.4 path"]
```
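Condensed to code, the branch ordering the flowchart encodes could be sketched as below. This is an illustrative reimplementation under stated assumptions, not the repo's actual `normalizeModel`, and it omits the other families:

```typescript
// Sketch of the flowchart's decision order: codex, then pro, then mini,
// then base. Order matters: the mini check must run before the base
// gpt-5.4 check, or "gpt-5.4-mini-high" would normalize to "gpt-5.4".
function normalizeModelSketch(model: string): string {
  // Lowercase and drop any provider prefix ("openai/...") first.
  const normalized = model.toLowerCase().split("/").pop() ?? "";
  if (normalized.includes("codex")) return "gpt-5-codex";
  if (/\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/.test(normalized)) {
    return "gpt-5.4-pro";
  }
  if (/\bgpt(?:-| )5\.4(?:-| )mini(?:\b|[- ])/.test(normalized)) {
    return "gpt-5.4-mini";
  }
  if (/\bgpt(?:-| )5\.4(?:\b|[- ])/.test(normalized)) return "gpt-5.4";
  return normalized; // other families elided in this sketch
}
```

The pro-before-mini-before-base ordering is the invariant the PR's regression tests pin down.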
Last reviewed commit: "fix(models): address..."