Upgrade AI SDK v5 → v6 with usage null safety fixes #1694
Conversation
🦋 Changeset detected — latest commit: f172c5f. The changes in this PR will be included in the next version bump. This PR includes changesets to release 3 packages.
Greptile Summary

This PR migrates from AI SDK v5 to v6, upgrading all core dependencies and adapting to breaking API changes while maintaining backwards compatibility.

The migration is comprehensive and well-tested (326 e2e tests passing). The backwards-compatible shim layer ensures existing code that destructures `{ object, usage, finishReason }` continues to work. Confidence Score: 5/5
Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[AI SDK v5] --> B[AI SDK v6 Migration]
    B --> C[Update Dependencies]
    C --> D[ai package v5 to v6]
    C --> E[provider packages v2 to v3]
    C --> F[All optional providers upgraded]
    B --> G[Type Migrations]
    G --> H[LanguageModelV2 to V3]
    G --> I[Core*Message to ModelMessage]
    B --> J[API Changes]
    J --> K[generateObject deprecated]
    J --> L[streamObject deprecated]
    J --> M[Tool callback signature change]
    K --> N[generateText + Output.object]
    L --> O[streamText + Output.object]
    N --> P[objectShims.ts]
    O --> P
    P --> Q[Preserve backwards API]
    B --> R[Usage Shape Handling]
    R --> S[v2: flat token numbers]
    R --> T[v3: nested token objects]
    R --> U[Defensive guards added]
    U --> V[flowLogger middleware]
    U --> W[aisdk client]
    U --> X[external client]
    M --> Y[result param to output param]
    Y --> Z[All 8 agent tools updated]
```

Last reviewed commit: ac2666f
Force-pushed from 8bb0449 to 9dadf0d
Based on #1689 by @dylnslck. Adds optional chaining and deprecated property fallbacks to token detail access across both generateObject and generateText code paths, matching the safe pattern already used in the generateObject path of aisdk.ts. Renames `u` to `usage` for consistency with the rest of the codebase. Co-Authored-By: Dylan Slack <dylnslck@users.noreply.github.com> Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ge fields

Collapse identical prompt/messages branches in generateObjectShim and streamObjectShim into single conditions. Update v3AgentHandler to use the new v6 nested token detail fields (`outputTokenDetails.reasoningTokens`, `inputTokenDetails.cacheReadTokens`) with fallback to the deprecated flat fields.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from dcb29e8 to ddcea89
AI SDK v6's generateText/streamText return usage, finishReason, output,
text, etc. as prototype getters that are lost when spread into a new
object. Explicitly copy these properties so callers who destructure
{ object, usage, finishReason } from the shim get valid values.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
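The behavior described in this commit message can be reproduced with a small self-contained sketch (the `SdkResult` class and field values are hypothetical stand-ins, not the real AI SDK types): object spread copies only own enumerable properties, so prototype getters like `usage` are silently dropped unless copied explicitly.

```typescript
// Hypothetical stand-in for an AI SDK v6 result object:
// `usage` is a prototype getter, not an own property.
class SdkResult {
  get usage() {
    return { inputTokens: 3, outputTokens: 7 };
  }
}

const result = new SdkResult();

// Spread copies only own enumerable properties; the getter is lost.
const spread: Partial<SdkResult> = { ...result };
console.log(spread.usage); // undefined

// Explicitly reading the getter preserves the value for callers that
// destructure { usage, ... } from the shim.
const shimmed = { ...result, usage: result.usage };
console.log(shimmed.usage.inputTokens); // 3
```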
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/core/lib/v3/flowLogger.ts">
<violation number="1" location="packages/core/lib/v3/flowLogger.ts:1121">
P2: The fallback branch accessing `.total` lacks null safety. If `inputTokens` is `undefined` or `null` (e.g., a model that doesn't report usage), the `typeof` check is `false` and `.total` is accessed on `undefined`, throwing a `TypeError`. Add optional chaining and a fallback default to be consistent with the `?? 0` pattern used elsewhere in the codebase (e.g., `aisdk.ts`).</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
The wrapLanguageModel middleware doesn't auto-adapt v2 provider results
to v3 format, so if a user passes a custom llmClient with a v2-spec
model, result.usage.inputTokens is a flat number (not { total, ... }).
Add typeof guards so token logging works with both spec versions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
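A minimal sketch of the dual-shape guard this commit describes (the `TokenField` type and helper name are hypothetical): v2-spec models report flat numbers, v3-spec models report nested `{ total, ... }` objects, and some models report no usage at all.

```typescript
// A token field may be a v2 flat number, a v3 nested object, or absent.
type TokenField = number | { total?: number } | null | undefined;

function tokenCount(field: TokenField): number {
  // v2 spec: flat number.
  if (typeof field === "number") return field;
  // v3 spec: nested object. Optional chaining guards models that
  // report no usage, matching the `?? 0` pattern used elsewhere.
  return field?.total ?? 0;
}

console.log(tokenCount(42)); // v2 flat number → 42
console.log(tokenCount({ total: 17 })); // v3 nested object → 17
console.log(tokenCount(undefined)); // no usage reported → 0
```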
Force-pushed from ac2666f to f172c5f
```ts
inputTokens:
  typeof result.usage.inputTokens === "number"
    ? result.usage.inputTokens
    : (result.usage.inputTokens?.total ?? 0),
outputTokens:
  typeof result.usage.outputTokens === "number"
    ? result.usage.outputTokens
    : (result.usage.outputTokens?.total ?? 0),
```
Why would these be empty? Does aisdk use different fields now? Maybe better to just do:
```diff
-inputTokens:
-  typeof result.usage.inputTokens === "number"
-    ? result.usage.inputTokens
-    : (result.usage.inputTokens?.total ?? 0),
-outputTokens:
-  typeof result.usage.outputTokens === "number"
-    ? result.usage.outputTokens
-    : (result.usage.outputTokens?.total ?? 0),
+usage: result.usage,
```
```diff
 const result = {
-  data: objectResponse.object,
+  data: objectResponse.output,
   usage: {
```
```diff
 usage: {
+  ...usage,
```
Can we just return the whole object that aisdk returns as well, so users can use fields they might see in the aisdk docs?
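One way the reviewer's suggestion could look, sketched without the real SDK (the `objectResponse` shape and values here are invented for illustration): spread the full SDK response into the shim result so every documented field stays reachable alongside the legacy alias.

```typescript
// Hypothetical stand-in for the object returned by the AI SDK call.
const objectResponse = {
  output: { city: "Oslo" },
  finishReason: "stop",
  usage: { inputTokens: 5, outputTokens: 9 },
};

const result = {
  ...objectResponse, // expose every field the SDK returns
  data: objectResponse.output, // keep the pre-v6 `data` alias
};

console.log(result.data.city); // "Oslo" via the legacy alias
console.log(result.finishReason); // "stop" — SDK field still available
```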
```ts
const usage = textResponse.usage;
return {
  prompt_tokens: usage.inputTokens ?? 0,
```
```diff
 const usage = textResponse.usage;
 return {
+  ...usage,
   prompt_tokens: usage.inputTokens ?? 0,
```
ditto here
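Applying the same idea to the text path might look like this (field values are invented, and the optional typing is an assumption about models that omit counts): spread the v6 usage object and layer the OpenAI-style aliases on top, so both naming conventions work.

```typescript
// Hypothetical v6 usage shape; fields are optional because a model
// may omit some counts.
const usage: {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
} = { inputTokens: 12, outputTokens: 4, totalTokens: 16 };

const compatUsage = {
  ...usage, // keep the AI SDK field names
  prompt_tokens: usage.inputTokens ?? 0, // OpenAI-style aliases
  completion_tokens: usage.outputTokens ?? 0,
  total_tokens: usage.totalTokens ?? 0,
};

console.log(compatUsage.prompt_tokens); // 12
console.log(compatUsage.inputTokens); // 12 — original name still present
```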
Summary
- Upgrade `ai` from ^5.0.133 to ^6.0.0, `@ai-sdk/provider` from ^2.0.0 to ^3.0.0, and all optional AI provider packages to their latest major versions.
- Migrate `LanguageModelV2` to `LanguageModelV3`, `CoreSystemMessage`/`CoreUserMessage`/`CoreAssistantMessage` to `ModelMessage`, and `experimental_generateImage` to `generateImage`.
- Replace `generateObject`/`streamObject` with `generateText`/`streamText` + `Output.object()`, with backwards-compatible shims (`objectShims.ts`) to preserve the existing `LLMClient` API surface.
- Update `toModelOutput` callbacks from `(result)` to `({ output })` to match the v6 tool result shape.
- Add `specificationVersion: "v3"` to LLM logging middleware.
- Add defensive `outputTokenDetails` and `inputTokenDetails` access in both `aisdk.ts` and `AISdkClientWrapped.ts`, making token usage handling consistent across generateObject and generateText code paths.
- Rename the `u` → `usage` variable in the generateText IIFE for consistency with the rest of the codebase.
- Update the stale `LanguageModelV2` comment in the test file.

Based on #1689 by @dylnslck — thank you for the original upgrade work!
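The `toModelOutput` change in the summary can be illustrated without the SDK (the `ToolResult` type and return shape are hypothetical): v6 wraps the tool result in an object, so callbacks destructure `output` instead of receiving the result directly.

```typescript
// Hypothetical tool result type for illustration.
type ToolResult = { ok: boolean; rows: number };

// v5-style callback: received the tool result directly.
const toModelOutputV5 = (result: ToolResult) => ({
  type: "text" as const,
  value: `ok=${result.ok} rows=${result.rows}`,
});

// v6-style callback: destructure `output` from the wrapped argument.
const toModelOutputV6 = ({ output }: { output: ToolResult }) => ({
  type: "text" as const,
  value: `ok=${output.ok} rows=${output.rows}`,
});

console.log(toModelOutputV6({ output: { ok: true, rows: 2 } }).value);
// "ok=true rows=2"
```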
Test plan
- `pnpm install` and `pnpm build` succeed
- Type check passes (`tsc --noEmit`)
- `pnpm e2e:local` passes — 326 passed, 2 skipped, 0 failures
- `LanguageModelV3` models from current provider packages work without TypeScript errors

Breaking changes for external users
- The `AISdkClient` constructor now requires `LanguageModelV3` instead of `LanguageModelV2`. Users must upgrade their `@ai-sdk/*` provider packages to v3+.

🤖 Generated with Claude Code