Upgrade AI SDK v5 → v6 with usage null safety fixes#1694

Open
shrey150 wants to merge 7 commits into main from shrey/upgrade-ai-sdk-v6

Conversation

@shrey150
Contributor

@shrey150 shrey150 commented Feb 18, 2026

Summary

  • Upgrades ai from ^5.0.133 to ^6.0.0, @ai-sdk/provider from ^2.0.0 to ^3.0.0, and all optional AI provider packages to their latest major versions.
  • Migrates from LanguageModelV2 to LanguageModelV3, CoreSystemMessage/CoreUserMessage/CoreAssistantMessage to ModelMessage, and experimental_generateImage to generateImage.
  • Replaces deprecated generateObject/streamObject with generateText/streamText + Output.object(), with backwards-compatible shims (objectShims.ts) to preserve the existing LLMClient API surface.
  • Updates all agent tool toModelOutput callbacks from (result) to ({ output }) to match the v6 tool result shape.
  • Adds specificationVersion: "v3" to LLM logging middleware.
  • Fixes missing optional chaining and deprecated fallbacks on outputTokenDetails and inputTokenDetails access in both aisdk.ts and AISdkClientWrapped.ts, making token usage handling consistent across generateObject and generateText code paths.
  • Renames the `u` variable to `usage` in the generateText IIFE for consistency with the rest of the codebase.
  • Updates stale LanguageModelV2 comment in test file.
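The token-usage null safety described above can be sketched roughly as follows. The field names follow the AI SDK v6 usage shape mentioned in this PR; `extractUsage` and the snake_case output keys are illustrative, not part of the PR's public API.

```typescript
// Sketch (assumed shape): nested v6 token details with optional chaining,
// falling back to the deprecated flat fields, defaulting to 0.
interface V6Usage {
  inputTokens?: number;
  outputTokens?: number;
  outputTokenDetails?: { reasoningTokens?: number };
  inputTokenDetails?: { cacheReadTokens?: number };
  // deprecated flat fields, kept as fallbacks
  reasoningTokens?: number;
  cachedInputTokens?: number;
}

function extractUsage(usage: V6Usage | undefined) {
  return {
    prompt_tokens: usage?.inputTokens ?? 0,
    completion_tokens: usage?.outputTokens ?? 0,
    reasoning_tokens:
      usage?.outputTokenDetails?.reasoningTokens ?? usage?.reasoningTokens ?? 0,
    cache_read_tokens:
      usage?.inputTokenDetails?.cacheReadTokens ?? usage?.cachedInputTokens ?? 0,
  };
}
```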

Based on #1689 by @dylnslck — thank you for the original upgrade work!

Test plan

  • pnpm install and pnpm build succeed
  • TypeScript typechecks pass (tsc --noEmit)
  • pnpm e2e:local passes — 326 passed, 2 skipped, 0 failures
  • Verify AISdkClient accepts LanguageModelV3 models from current provider packages without TypeScript errors

Breaking changes for external users

  • AISdkClient constructor now requires LanguageModelV3 instead of LanguageModelV2. Users must upgrade their @ai-sdk/* provider packages to v3+.

🤖 Generated with Claude Code

@changeset-bot

changeset-bot bot commented Feb 18, 2026

🦋 Changeset detected

Latest commit: f172c5f

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 3 packages
Name Type
@browserbasehq/stagehand Minor
@browserbasehq/stagehand-evals Patch
@browserbasehq/stagehand-server Patch


@greptile-apps
Contributor

greptile-apps bot commented Feb 18, 2026

Greptile Summary

This PR successfully migrates from AI SDK v5 to v6, upgrading all core dependencies and adapting to breaking API changes while maintaining backwards compatibility.

Key changes:

  • Replaces deprecated generateObject/streamObject with generateText/streamText + Output.object(), wrapped in backwards-compatible shims (objectShims.ts) to preserve the existing LLMClient API
  • Migrates from LanguageModelV2 to LanguageModelV3 and consolidates Core*Message types to unified ModelMessage
  • Updates all 8 agent tools (click, type, screenshot, etc.) to use v6 tool callback signature: toModelOutput: ({ output }) => ...
  • Fixes token usage access with proper null-safety: adds optional chaining for outputTokenDetails?.reasoningTokens and inputTokenDetails?.cacheReadTokens with backwards-compatible fallbacks
  • Adds specificationVersion: "v3" to LLM logging middleware and handles both v2 (flat numbers) and v3 (nested objects) usage shapes
  • Upgrades ai to ^6.0.0, @ai-sdk/provider to ^3.0.0, and all optional provider packages to their latest major versions

The migration is comprehensive and well-tested (326 e2e tests passing). The backwards-compatible shim layer ensures existing code that destructures { object } from generateObject continues to work seamlessly.
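As a rough illustration of the shim idea (not the actual objectShims.ts code), the backwards-compatible wrapper can be sketched with the text-generation call injected as a parameter, so the old `{ object }` destructuring keeps working on top of a v6-style result that exposes `output`:

```typescript
// Illustrative shim: maps a generateText-style result (with `output`)
// back to the old generateObject shape (with `object`). The injected
// `generate` parameter stands in for the real AI SDK call. Properties
// are copied explicitly rather than spread, since the real v6 results
// expose them via prototype getters that a spread would drop.
type TextResult<T> = { output: T; usage: unknown; finishReason: string };

async function generateObjectShim<T>(
  generate: () => Promise<TextResult<T>>,
): Promise<{ object: T; usage: unknown; finishReason: string }> {
  const res = await generate();
  return {
    object: res.output,
    usage: res.usage,
    finishReason: res.finishReason,
  };
}
```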

Confidence Score: 5/5

  • This PR is safe to merge with high confidence
  • The migration is systematic and comprehensive with thorough testing (326 e2e tests passing). All breaking changes are properly handled through backwards-compatible shims, token usage access is properly guarded against null values, and type migrations are complete across the entire codebase. The PR builds on previous work and includes defensive code for handling both v2 and v3 usage shapes.
  • No files require special attention

Important Files Changed

Filename Overview
packages/core/lib/v3/llm/aisdk.ts Migrates from LanguageModelV2 to LanguageModelV3, replaces deprecated generateObject with generateText + Output.object(), adds proper null-safety for token detail access
packages/core/lib/v3/llm/objectShims.ts New shim layer preserves backwards-compatible API surface for generateObject/streamObject callers while using v6 generateText/streamText internally
packages/core/lib/v3/external_clients/aisdk.ts Mirrors main client migration: v2→v3 model types, generateObject → generateText + Output.object(), fixes token detail access patterns
packages/core/lib/v3/flowLogger.ts Adds specificationVersion: "v3" to middleware, handles both v2 (flat numbers) and v3 (nested objects) usage shapes for backwards compatibility
packages/evals/lib/AISdkClientWrapped.ts Applies same v5→v6 migration as core client with Braintrust tracing wrapper intact
packages/core/package.json Upgrades ai to ^6.0.0, @ai-sdk/provider to ^3.0.0, and all provider packages to their latest major versions

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[AI SDK v5] --> B[AI SDK v6 Migration]
    B --> C[Update Dependencies]
    C --> D[ai package v5 to v6]
    C --> E[provider packages v2 to v3]
    C --> F[All optional providers upgraded]

    B --> G[Type Migrations]
    G --> H[LanguageModelV2 to V3]
    G --> I[Core*Message to ModelMessage]

    B --> J[API Changes]
    J --> K[generateObject deprecated]
    J --> L[streamObject deprecated]
    J --> M[Tool callback signature change]

    K --> N[generateText + Output.object]
    L --> O[streamText + Output.object]

    N --> P[objectShims.ts]
    O --> P
    P --> Q[Preserve backwards API]

    B --> R[Usage Shape Handling]
    R --> S[v2: flat token numbers]
    R --> T[v3: nested token objects]
    R --> U[Defensive guards added]

    U --> V[flowLogger middleware]
    U --> W[aisdk client]
    U --> X[external client]

    M --> Y[result param to output param]
    Y --> Z[All 8 agent tools updated]
```

Last reviewed commit: ac2666f

Contributor

@greptile-apps greptile-apps bot left a comment


19 files reviewed, 1 comment


@shrey150
Contributor Author

@greptileai

Contributor

@greptile-apps greptile-apps bot left a comment


21 files reviewed, 1 comment


@shrey150 shrey150 force-pushed the shrey/upgrade-ai-sdk-v6 branch from 8bb0449 to 9dadf0d on February 18, 2026 at 22:10
dylnslck and others added 5 commits February 20, 2026 07:47
Based on #1689 by @dylnslck. Adds optional chaining and deprecated
property fallbacks to token detail access across both generateObject
and generateText code paths, matching the safe pattern already used
in the generateObject path of aisdk.ts. Renames `u` to `usage` for
consistency with the rest of the codebase.

Co-Authored-By: Dylan Slack <dylnslck@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ge fields

Collapse identical prompt/messages branches in generateObjectShim and
streamObjectShim into single conditions. Update v3AgentHandler to use
the new v6 nested token detail fields (outputTokenDetails.reasoningTokens,
inputTokenDetails.cacheReadTokens) with fallback to deprecated flat fields.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@shrey150 shrey150 force-pushed the shrey/upgrade-ai-sdk-v6 branch from dcb29e8 to ddcea89 on February 20, 2026 at 15:48
@shrey150
Contributor Author

@greptileai

Contributor

@greptile-apps greptile-apps bot left a comment


21 files reviewed, 1 comment


AI SDK v6's generateText/streamText return usage, finishReason, output,
text, etc. as prototype getters that are lost when spread into a new
object. Explicitly copy these properties so callers who destructure
{ object, usage, finishReason } from the shim get valid values.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
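The getter-loss problem this commit describes can be reproduced in isolation. This is a minimal sketch unrelated to the AI SDK classes themselves; it only demonstrates the JavaScript behavior: object spread copies own enumerable properties, so getters defined on a prototype are silently dropped.

```typescript
// Minimal reproduction: a getter defined on the class prototype.
class Result {
  get usage(): { inputTokens: number } {
    return { inputTokens: 3 };
  }
}

const r = new Result();
// Spread copies only own enumerable properties, so the prototype getter
// is lost at runtime even though the type still appears to carry it:
const spread = { ...r } as { usage?: { inputTokens: number } };
// Reading the getter eagerly and copying it explicitly preserves it:
const explicit = { usage: r.usage };
```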
@shrey150
Contributor Author

@greptileai

Contributor

@greptile-apps greptile-apps bot left a comment


21 files reviewed, no comments


Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 2 files (changes from recent commits).



<file name="packages/core/lib/v3/flowLogger.ts">

<violation number="1" location="packages/core/lib/v3/flowLogger.ts:1121">
P2: The fallback branch accessing `.total` lacks null safety. If `inputTokens` is `undefined` or `null` (e.g., a model that doesn't report usage), the `typeof` check is `false` and `.total` is accessed on `undefined`, throwing a `TypeError`. Add optional chaining and a fallback default to be consistent with the `?? 0` pattern used elsewhere in the codebase (e.g., `aisdk.ts`).</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

The wrapLanguageModel middleware doesn't auto-adapt v2 provider results
to v3 format, so if a user passes a custom llmClient with a v2-spec
model, result.usage.inputTokens is a flat number (not { total, ... }).
Add typeof guards so token logging works with both spec versions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
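The typeof guard this commit adds can be sketched like this. The shape names and the `toFlatTokens` helper are illustrative, not code from the PR; the point is handling both spec versions, where v2 reports flat numbers and v3 reports nested objects with a `total` field:

```typescript
// Accepts either usage shape: v2 (flat number) or v3 ({ total, ... }),
// defaulting to 0 when usage is missing entirely.
type V2OrV3Tokens = number | { total?: number } | undefined;

function toFlatTokens(value: V2OrV3Tokens): number {
  return typeof value === "number" ? value : (value?.total ?? 0);
}
```

With this guard, `toFlatTokens(result.usage.inputTokens)` works whether the underlying model implements the v2 or v3 spec, and models that report no usage at all log 0 instead of throwing.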
@shrey150 shrey150 force-pushed the shrey/upgrade-ai-sdk-v6 branch from ac2666f to f172c5f on February 20, 2026 at 16:53
Comment on lines +1120 to +1127

```ts
inputTokens:
  typeof result.usage.inputTokens === "number"
    ? result.usage.inputTokens
    : (result.usage.inputTokens?.total ?? 0),
outputTokens:
  typeof result.usage.outputTokens === "number"
    ? result.usage.outputTokens
    : (result.usage.outputTokens?.total ?? 0),
```
Member

@pirate pirate Feb 20, 2026


why would these be empty? does aisdk use different fields now?

maybe better to just do

Suggested change

```diff
 inputTokens:
   typeof result.usage.inputTokens === "number"
     ? result.usage.inputTokens
     : (result.usage.inputTokens?.total ?? 0),
 outputTokens:
   typeof result.usage.outputTokens === "number"
     ? result.usage.outputTokens
     : (result.usage.outputTokens?.total ?? 0),
+usage: result.usage,
```
```diff
 const result = {
-  data: objectResponse.object,
+  data: objectResponse.output,
   usage: {
```
Member


Suggested change

```diff
 usage: {
+  ...usage,
```
can we just return the whole obj that aisdk returns as well so users can use fields they might see in aisdk docs

Comment on lines +313 to +315

```ts
const usage = textResponse.usage;
return {
  prompt_tokens: usage.inputTokens ?? 0,
```
Member


Suggested change

```diff
 const usage = textResponse.usage;
 return {
+  ...usage,
   prompt_tokens: usage.inputTokens ?? 0,
```

ditto here
