### Describe the bug
PR #401 added `providerMetadata` to the `ToolCall` and `ToolCallStartEvent` types, and correctly wired it through the server-side `ToolCallManager` (used by `chat()`) and the Gemini adapter (`processStreamChunks` → `formatMessages`).

However, the client-side pipeline (`StreamProcessor` → `UIMessage` → `ModelMessage`) does not carry `providerMetadata` at all. This means:
- **First request works:** `chat()` on the server uses `ToolCallManager`, which preserves `providerMetadata` on `ToolCall` objects. The Gemini adapter reads `toolCall.providerMetadata?.thoughtSignature` in `formatMessages()` — all good.
- **Second request fails:** The client sends the conversation history back to the server. The `StreamProcessor` processed the first response's `TOOL_CALL_START` events, but dropped `providerMetadata` when creating `ToolCallPart` UIMessage parts. When converting back to `ModelMessage.toolCalls` for the second request, `providerMetadata` is missing. The Gemini adapter's `formatMessages()` finds no `thoughtSignature`, and Gemini returns HTTP 400:

```
Function call is missing a thought_signature in functionCall parts.
This is required for tools to work correctly, and missing thought_signature
may lead to degraded model performance.
```
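To make the drop concrete, here is a minimal TypeScript sketch of the two conversions. The types are simplified stand-ins, not the real `@tanstack/ai` definitions, and the field names are assumptions for illustration:

```typescript
// Simplified stand-ins for the library types (field names assumed).
type ProviderMetadata = Record<string, unknown>;

interface ToolCallStartEvent {
  toolCallId: string;
  toolName: string;
  providerMetadata?: ProviderMetadata;
}

interface ToolCallPart {
  type: "tool-call";
  toolCallId: string;
  toolName: string;
  providerMetadata?: ProviderMetadata; // the field that is currently missing
}

// What the client pipeline effectively does today: the metadata never makes
// it onto the UIMessage part, so it cannot be echoed back on request two.
function toToolCallPartToday(event: ToolCallStartEvent): ToolCallPart {
  return {
    type: "tool-call",
    toolCallId: event.toolCallId,
    toolName: event.toolName,
  };
}

// The shape of the fix: thread providerMetadata through unchanged.
function toToolCallPartFixed(event: ToolCallStartEvent): ToolCallPart {
  return {
    type: "tool-call",
    toolCallId: event.toolCallId,
    toolName: event.toolName,
    providerMetadata: event.providerMetadata,
  };
}
```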
### Root cause — 5 locations in `@tanstack/ai` core
| # | File | Function/Type | Issue |
|---|------|---------------|-------|
| 1 | `types.ts` | `ToolCallPart` | Missing `providerMetadata` field |
| 2 | `stream/types.ts` | `InternalToolCallState` | Missing `providerMetadata` field |
| 3 | `stream/message-updaters.ts` | `updateToolCallPart()` | Doesn't accept or store `providerMetadata` |
| 4 | `stream/processor.ts` | `handleToolCallStartEvent()` | Doesn't pass `chunk.providerMetadata` to `updateToolCallPart()` or `InternalToolCallState` |
| 5 | `messages.ts` | `buildAssistantMessages()` | Doesn't include `providerMetadata` on `toolCalls` when converting `ToolCallPart` → `ModelMessage.toolCalls` |
Additionally, `modelMessageToUIMessage()` and `getCompletedToolCalls()` also drop `providerMetadata`.
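For location 5, the fix is a one-line copy during the part → tool-call conversion. A hedged sketch with simplified stand-in types (field names like `input`/`arguments` are assumptions, not the real definitions):

```typescript
// Simplified stand-ins; the real types in @tanstack/ai have more fields.
type ProviderMetadata = Record<string, unknown>;

interface ToolCallPart {
  type: "tool-call";
  toolCallId: string;
  toolName: string;
  input: unknown;
  providerMetadata?: ProviderMetadata;
}

interface ModelToolCall {
  id: string;
  name: string;
  arguments: unknown;
  providerMetadata?: ProviderMetadata;
}

// Sketch of the fix: when converting UIMessage parts back to
// ModelMessage.toolCalls, copy providerMetadata instead of dropping it.
function partsToToolCalls(parts: ToolCallPart[]): ModelToolCall[] {
  return parts.map((p) => ({
    id: p.toolCallId,
    name: p.toolName,
    arguments: p.input,
    // Without this line, the Gemini adapter later finds no thoughtSignature.
    providerMetadata: p.providerMetadata,
  }));
}
```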
### Why this wasn't caught
The test in PR #401 (*preserves thoughtSignature in functionCall parts when sending history back to Gemini*) only tests the server-side `chat()` flow, where `ToolCallManager` correctly handles `providerMetadata`. The client-side `StreamProcessor` → `UIMessage` → `ModelMessage` path is not tested.
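The missing regression test would assert that metadata survives the full client roundtrip. A hypothetical sketch, using simplified stand-ins for `StreamProcessor` and `uiMessageToModelMessages()` (all names and shapes here are assumptions):

```typescript
// Simplified stand-ins for the client-side pipeline (shapes assumed).
type ProviderMetadata = Record<string, unknown>;

interface ToolCallPart {
  type: "tool-call";
  toolCallId: string;
  toolName: string;
  providerMetadata?: ProviderMetadata;
}

interface ModelToolCall {
  id: string;
  name: string;
  providerMetadata?: ProviderMetadata;
}

// Stand-in for the StreamProcessor handling a TOOL_CALL_START event.
function processToolCallStart(event: {
  toolCallId: string;
  toolName: string;
  providerMetadata?: ProviderMetadata;
}): ToolCallPart {
  return { type: "tool-call", ...event };
}

// Stand-in for the UIMessage → ModelMessage conversion.
function toModelToolCall(part: ToolCallPart): ModelToolCall {
  return {
    id: part.toolCallId,
    name: part.toolName,
    providerMetadata: part.providerMetadata,
  };
}

// The assertion the missing test should make: metadata survives the client
// roundtrip (SSE event → UIMessage part → ModelMessage tool call).
function assertRoundtripPreservesMetadata(): void {
  const meta = { google: { thoughtSignature: "abc123" } };
  const part = processToolCallStart({
    toolCallId: "tc1",
    toolName: "getWeather",
    providerMetadata: meta,
  });
  const modelCall = toModelToolCall(part);
  if (modelCall.providerMetadata !== meta) {
    throw new Error("providerMetadata dropped on the client roundtrip");
  }
}
```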
The server-side `chat()` loop works fine for single-session, server-only scenarios. The bug manifests in client-server architectures where:

- The client uses `ChatClient`/`StreamProcessor` to process SSE events into UIMessages
- On the next user message, UIMessages are converted to ModelMessages via `uiMessageToModelMessages()` and sent to the server
- The server passes these messages to `chat()` → adapter → Gemini API
### Steps to reproduce

1. Use `@tanstack/ai-gemini` with `gemini-3-flash-preview` and thinking enabled
2. Send a message that triggers a tool call
3. First response succeeds (tool call + response)
4. Send a follow-up message
5. HTTP 400: `Function call is missing a thought_signature in functionCall parts`
### Environment

- `@tanstack/ai@0.9.1`
- `@tanstack/ai-gemini@0.8.4`
- `@tanstack/ai-client@0.7.4`
- Model: `gemini-3-flash-preview` with `thinkingConfig: { includeThoughts: true }`
- Architecture: Client (`ChatClient`/`StreamProcessor`) ↔ Server (`chat()` + Gemini adapter)
### Related