Create migration guide from AI SDK to TanStack #179
Conversation
Add comprehensive migration guide covering:

- Package installation differences
- Server-side API migration (`streamText` -> `chat`)
- Client-side `useChat` hook differences
- Isomorphic tool system migration
- Provider adapter changes (OpenAI, Anthropic, Gemini)
- Streaming response formats
- Multimodal content handling
- Type safety enhancements
- Complete before/after code examples
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.

No actionable comments were generated in the recent review.
📝 Walkthrough: Added a docs navigation entry and a new, extensive migration guide documenting how to port projects from the Vercel AI SDK.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 3
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
docs/guides/migration-from-vercel-ai.md
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Applied to files:
docs/guides/migration-from-vercel-ai.md
🪛 markdownlint-cli2 (0.18.1)
docs/guides/migration-from-vercel-ai.md
614-614: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
🔇 Additional comments (7)
docs/guides/migration-from-vercel-ai.md (7)
1-928: Comprehensive migration guide is well-structured and thorough. The document provides excellent coverage of the migration path with:
- Clear before/after patterns for nearly every API surface
- Consistent terminology and organization
- Helpful rationale for architectural differences
- Real-world examples (tools, multimodal, streaming)
- Acknowledgment of removed features with workarounds
The main work is verifying the TanStack AI API surface matches what's documented. Once the above verification items are confirmed, this will be a high-quality migration resource.
403-425: The client tools API usage is correct. `clientTools` is properly exported from `@tanstack/ai-client`, accepts individual tool instances created with `.client()`, and integrates correctly with `useChat` via the `tools` parameter.
345-374: Tool definition API is correctly documented. The `toolDefinition`, `.server()`, and `.client()` exports and patterns shown in the example match the actual TanStack AI implementation. No corrections needed.
559-559: Remove `streamToText()` from the utilities list: this function does not exist in the TanStack AI SDK. The utilities `toStreamResponse`, `toServerSentEventsStream`, and `toHttpStream` are properly exported from `@tanstack/ai`. Both `fetchServerSentEvents` and `fetchHttpStream` are available in `@tanstack/ai-client`. However, `streamToText()` does not appear in any official TanStack AI documentation or API references. Verify that this function is actually used in the migration guide, or correct the reference to use an appropriate alternative utility if needed.
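The wire format these server-sent-events utilities produce can be sketched without the SDK. This is an illustrative stand-in only: `encodeSse` and the chunk shape (`type`/`delta`) are local assumptions for the sketch, not TanStack AI exports.

```typescript
// Minimal SSE encoder approximating what a stream-to-SSE helper emits on the
// wire: each event is a `data:` line with a JSON payload, ended by a blank line.
function encodeSse(chunks: Array<Record<string, unknown>>): string {
  return chunks.map((c) => `data: ${JSON.stringify(c)}\n\n`).join('')
}

const wire = encodeSse([
  { type: 'text-delta', delta: 'Hel' },
  { type: 'text-delta', delta: 'lo' },
])
console.log(wire)
```

The real helpers operate on `ReadableStream`s rather than arrays; only the framing is shown here.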
451-466: The tool approval API structure is correct. The `approval-requested` state is valid, and tool-call parts with approval-requested state contain an `approval.id` property that should be passed to `addToolApprovalResponse({ id, approved })`.
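A sketch of how a UI might collect the approval ids this comment describes. The `ToolCallPart` type and `pendingApprovalIds` helper are local approximations of the documented part shape, not SDK exports.

```typescript
// Local approximation of a tool-call part that can request approval.
interface ToolCallPart {
  type: 'tool-call'
  state: 'pending' | 'approval-requested' | 'complete'
  approval?: { id: string }
}

// Gather the ids a UI would answer via addToolApprovalResponse({ id, approved }).
function pendingApprovalIds(parts: ToolCallPart[]): string[] {
  return parts
    .filter((p) => p.state === 'approval-requested' && p.approval !== undefined)
    .map((p) => p.approval!.id)
}

const ids = pendingApprovalIds([
  { type: 'tool-call', state: 'approval-requested', approval: { id: 'a1' } },
  { type: 'tool-call', state: 'complete' },
])
console.log(ids)
```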
74-74: The migration guide uses correct adapter export names and signatures. Verification confirms:

- `openaiText()`, `openaiImage()`, `openaiSpeech()`, `anthropicText()`, `geminiText()` are the canonical function names (not method syntax)
- All are individual, tree-shakeable function exports from separate adapter files
- All follow the consistent call pattern `adapterName('model-id')`
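The factory-function pattern this comment verifies can be sketched locally. `TextAdapter` and the two factories below are hypothetical stand-ins illustrating the `adapterName('model-id')` call shape, not the real adapter implementations.

```typescript
// Hypothetical adapter shape; the real adapters carry provider wiring, not
// just these two fields.
interface TextAdapter {
  provider: string
  model: string
}

// In the real SDK each factory lives in its own file so bundlers can
// tree-shake unused providers; the function-call pattern is the point here.
const openaiText = (model: string): TextAdapter => ({ provider: 'openai', model })
const anthropicText = (model: string): TextAdapter => ({ provider: 'anthropic', model })

const adapter = openaiText('gpt-4o')
console.log(adapter.provider, adapter.model)
```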
934-937: All referenced documentation links are valid and correct. Verification confirms that all four documentation files referenced in the help section exist at their specified relative paths:

- `../getting-started/quick-start` → `docs/getting-started/quick-start.md` ✓
- `./tools` → `docs/guides/tools.md` ✓
- `./connection-adapters` → `docs/guides/connection-adapters.md` ✓
- `../api/ai` → `docs/api/ai.md` ✓

No fixes needed.
```typescript
const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages: [
```
This is not correct; we pass it on the root as well. I think it's called `systemPrompts`.
Fixed in ebd689c. You were right — `chat()` takes `systemPrompts: string[]` at the root. Verified against `packages/typescript/ai/src/types.ts:572` and the chat options at `packages/typescript/ai/src/activities/chat/index.ts`. Also updated the final complete example (CodeRabbit flagged the same issue at line 928). The System Messages section now shows the multi-prompt form too, since `systemPrompts` is an array.
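The shape change this fix describes can be illustrated with a local helper. `splitSystemPrompts` and the `Message` type are assumptions for illustration, not part of either SDK; the helper lifts Vercel-style system messages out of the array into the root-level `systemPrompts: string[]` form.

```typescript
// Local message type for the sketch; real SDK messages carry more fields.
interface Message {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Move system text from the messages array to a root-level prompts array.
function splitSystemPrompts(all: Message[]): { systemPrompts: string[]; messages: Message[] } {
  return {
    systemPrompts: all.filter((m) => m.role === 'system').map((m) => m.content),
    messages: all.filter((m) => m.role !== 'system'),
  }
}

const { systemPrompts, messages } = splitSystemPrompts([
  { role: 'system', content: 'Be terse.' },
  { role: 'user', content: 'Hi' },
])
console.log(systemPrompts, messages.length)
```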
| Vercel AI SDK | TanStack AI | Notes |
|--------------|-------------|-------|
| `api: '/api/chat'` | `connection: fetchServerSentEvents('/api/chat')` | Explicit connection adapter |
| `input`, `handleInputChange` | Manage state yourself | More control, less magic |
This seems hallucinated?
Fixed in ebd689c. You were right to flag this — the `useChat` API comparison was mixing Vercel v4 (`input`/`handleInputChange`/`handleSubmit`) with the TanStack headless-state philosophy, which made the claim "Same pattern" for `append`/`reload`/`stop`/`setMessages` misleading. Vercel AI SDK v5+ already moved to headless state (`sendMessage`, `status`, `regenerate`, `DefaultChatTransport`), so I rewrote the whole table against the current v5+ surface and the actually exported `useChat` return from `packages/typescript/ai-react/src/use-chat.ts` (`sendMessage`, `append`, `reload`, `stop`, `isLoading`, `setMessages`, `addToolResult`, `addToolApprovalResponse`). The basic `useChat` example above the table was updated to match.
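The "manage state yourself" point can be sketched as plain TypeScript, with no React or SDK involved. `InputState` is a hypothetical stand-in showing the split between app-owned input state and a `sendMessage`-style callback.

```typescript
type Sent = { role: 'user'; content: string }

// App-owned input state: the hook no longer provides input/handleInputChange,
// so the app holds the draft and hands the final string to sendMessage.
class InputState {
  value = ''
  sent: Sent[] = []

  // Equivalent of a controlled-input onChange handler.
  change(next: string) {
    this.value = next
  }

  // Equivalent of form submit: send the draft, then clear it.
  submit(sendMessage: (m: Sent) => void) {
    if (!this.value.trim()) return
    sendMessage({ role: 'user', content: this.value })
    this.value = ''
  }
}

const s = new InputState()
s.change('hello')
s.submit((m) => s.sent.push(m))
console.log(s.sent, s.value)
```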
What's the status on this one? We have some Claude comments but Claude didn't do anything about it. :( Bad Claude, bad!
- Installation: add `@ai-sdk/react` to v5+ deps and update quick-reference table to show the v5 framework package names
- System prompts: use root-level `systemPrompts: [...]` instead of prepending a system message to the messages array (verified against `packages/typescript/ai/src/types.ts`)
- useChat API table: rewrite against current Vercel AI SDK v5+ API (`sendMessage`, `status`, `regenerate`, `DefaultChatTransport`) so the comparison is accurate rather than mixing v4/v5
- MessagePart: expand to full discriminated union with real field names (`arguments`/`input`/`approval` on tool-call, `content` on tool-result) and real `ToolCallState` values
- Fix nonexistent `toStreamResponse` references -> `toServerSentEventsResponse` (and add `toHttpResponse` where appropriate)
- Fix AbortController section heading (h4 -> h3, resolves MD001)
- Update tool schema section to note `parameters` -> `inputSchema` rename in AI SDK v5
- Tighten tool approval example with optional chaining and a note on `arguments` vs parsed input
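The MessagePart item above can be sketched as a local discriminated union. Field names follow the commit text (`arguments`/`approval` on tool-call, `content` on tool-result), but the types here are approximations for illustration, not the SDK's definitions.

```typescript
// Approximate discriminated union over the part kinds the commit names.
type MessagePart =
  | { type: 'text'; content: string }
  | { type: 'tool-call'; id: string; name: string; arguments: string; approval?: { id: string } }
  | { type: 'tool-result'; id: string; content: unknown }

// Narrowing works off the `type` discriminant; the switch is exhaustive.
function describe(part: MessagePart): string {
  switch (part.type) {
    case 'text':
      return `text(${part.content.length})`
    case 'tool-call':
      return `call:${part.name}`
    case 'tool-result':
      return `result:${part.id}`
  }
}

console.log(describe({ type: 'tool-call', id: '1', name: 'search', arguments: '{}' }))
```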
View the CI Pipeline Execution for commit 45fe37b.
🚀 Changeset Version Preview: No changeset entries found. Merging this PR will not cause a version bump for any packages.
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
commit:
Addressed the review feedback in ebd689c. Summary of what changed and why, after verifying every API claim against the current TanStack AI source and the Vercel AI SDK v5+ docs:

Human feedback (Alem + Tanner)
CodeRabbit feedback
Other corrections found while verifying
I already pushed the commit; the PR is green for re-review.
# Conflicts:
#	docs/tools/migration-from-vercel-ai.md
…rage
- Add exhaustive streamText -> chat() option mapping table covering
every AI SDK v6 parameter (tools, toolChoice, activeTools, stopWhen,
prepareStep, experimental_transform/context/telemetry/repairToolCall,
all sampling controls, abort, headers, providerOptions -> modelOptions)
- Add streamText result -> TanStack equivalent table (textStream,
fullStream, text, usage, finishReason, steps, toUIMessageStreamResponse,
pipeTextStreamToResponse, consumeStream, etc.)
- Expand Generation Options with topK/presence/frequency/seed/stop under
modelOptions, clarify flat typed modelOptions vs provider-keyed
providerOptions
- New section: Structured Output (generateObject / streamObject / v6
Output.object) -> outputSchema on chat(); notes on Standard Schema
libraries, provider strategies, and the current gap for partial
object streaming
- New section: Agent Loop Control — stopWhen / hasToolCall / stepCountIs
mapped to maxIterations / untilFinishReason / combineStrategies, and
prepareStep mapped to middleware onConfig/onIteration
- New section: Middleware — wrapLanguageModel + experimental_transform
mapped to a single ChatMiddleware array; full hook inventory;
toolCacheMiddleware usage; common-pattern mapping table
- New section: Observability — where to plug logging/metrics/tracing
- Update generateText coverage to chat({ stream: false }) returning a
real Promise<string> (not just streamToText)
- Update Tool Approval "Before" to show AI SDK v6's native needsApproval
+ sendAutomaticallyWhen flow; the two APIs are now symmetric
- Reframe "Removed Features" -> "Features Not Yet Covered" and scope
it to embeddings, partial-object streaming, built-in retries/timeouts
- Update frontmatter for the docs/migration/ location (order, description,
keywords); fix cross-links to the new directory layout
(../advanced/middleware, ../chat/structured-outputs, etc.)
Factual corrections verified against source:
- Multimodal image source shape uses { type: 'url'|'data', value, mimeType }
not { url, base64, mediaType } (types.ts:142-183)
- toolCacheMiddleware is exported from @tanstack/ai/middlewares, not the
root (packages/typescript/ai/src/middlewares/index.ts)
- toolDefinition({ description }) is required; add it to the two doc
examples that were missing it (tool-definition.ts:31)
- stream() connection adapter factory is (messages, data?) with no
signal arg; rewrite custom-adapter example (connection-adapters.ts:441)
AI SDK v6 accuracy:
- addToolResult -> addToolOutput (v6 rename)
- experimental_output -> output (de-experimentalized)
- Soften "replaced" claim about generateObject/streamObject — they are
deprecated, not removed
- Vercel addToolApprovalResponse row: v6 has this; replace "N/A"
- First Basic Text Generation Before example now uses v5+ API
(convertToModelMessages + toUIMessageStreamResponse) with a v4
toDataStreamResponse callout
Consistency:
- Agent-loop tables reconciled: only one truly built-in strategy
(maxIterations / untilFinishReason / combineStrategies); hasToolCall
requires a custom AgentLoopStrategy. Both tables now agree.
- prepareStep Before/After actually demonstrates equivalent behavior:
Before shows step-level config tweak, After uses onConfig;
mid-loop model switching split into its own subsection with the
two-chat pattern the prose describes
- Message Structure section qualifies that ToolCallPart.input is the
ai-client projection (server-side reads arguments directly)
- toHttpStream/Response comment in client connection example clarified
- Complete Example clarifies why convertToModelMessages disappears in
the After (chat() accepts UI messages directly)
- clientTools() auto-execution comment expanded to state that no
onToolCall/addToolOutput call is needed
- Anchor slug for Structured Output simplified to #structured-output
Rot hygiene:
- "current releases" removed from v5/v6 note
- "Every option" softened to "Options accepted ... as of AI SDK v6"
- "now expose" / "AI SDK v6 offers" / "v6 consolidated" reworded to
avoid tense decay across future releases
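The image-source correction in the commit above can be sketched with local types. `ImageSource` and `imageSource` are approximations of the `{ type: 'url' | 'data', value, mimeType }` shape the commit verifies against `types.ts:142-183`, not SDK exports.

```typescript
// Approximation of the documented image-source union: a URL source or an
// inline data (base64) source, discriminated by `type`.
type ImageSource =
  | { type: 'url'; value: string; mimeType?: string }
  | { type: 'data'; value: string; mimeType: string }

// Build a source from either a URL or raw base64 data; the old
// { url, base64, mediaType } shape maps onto the new fields.
function imageSource(input: { url?: string; base64?: string; mimeType?: string }): ImageSource {
  if (input.url) {
    return { type: 'url', value: input.url, mimeType: input.mimeType }
  }
  return { type: 'data', value: input.base64 ?? '', mimeType: input.mimeType ?? 'image/png' }
}

console.log(imageSource({ url: 'https://example.com/cat.png' }))
```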
Ran the CR loop on the expanded guide (3 specialized review agents, 2 rounds). Round 1 → 15 actionable findings, all fixed in 711dc15.

Factual corrections against the TanStack AI source:
AI SDK v6 accuracy:
Internal consistency:
Rot hygiene:
False positive from the review pass: one agent flagged "Embeddings: Not Yet Covered" as factually wrong because CLAUDE.md mentioned embeddings.

Round 2 → 0 findings. Each round-1 fix verified against the source; fresh scan found no regressions. Ready for re-review.
Both helpers accept ResponseInit & { abortController }, so custom headers,
status, and cancellation flow through the helpers directly. Drop the
hand-rolled `new Response(toServerSentEventsStream(...), { headers: {...} })`
example and keep the raw stream helpers only for the genuine "pipe elsewhere"
case.
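The option shape this commit describes can be modeled locally. `ResponseInitLike`, `StreamResponseInit`, and `splitInit` are stand-ins showing how the helper-specific `abortController` extension splits off the standard `ResponseInit` fields, so headers and status pass straight through.

```typescript
// Local stand-in for ResponseInit so the sketch runs without DOM typings.
type ResponseInitLike = { status?: number; headers?: Record<string, string> }
// The documented extension: standard response options plus an abort handle.
type StreamResponseInit = ResponseInitLike & { abortController?: AbortController }

// Split the helper-specific field off the options passed to new Response().
function splitInit(all: StreamResponseInit): { init: ResponseInitLike; abort?: AbortController } {
  const { abortController, ...init } = all
  return { init, abort: abortController }
}

const { init, abort } = splitInit({
  status: 201,
  headers: { 'x-req': '1' },
  abortController: new AbortController(),
})
console.log(init.status, abort instanceof AbortController)
```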
Add comprehensive migration guide covering:
🎯 Changes
✅ Checklist
`pnpm run test:pr`

🚀 Release Impact
Summary by CodeRabbit