Description
Using bare `str` as `output_schema` on an `Agent` that also has `tools` causes an infinite `call_llm → execute_tools → call_llm` loop. The LLM repeatedly calls tools instead of producing the final structured response via `set_model_response`.

This does not happen with `BaseModel` subclasses as `output_schema`, only with primitive types like `str`.
Reproduction
```python
from google.adk import Agent, Workflow, Event

def plan_visualization(approach: str, technology: str, key_elements: list[str]) -> str:
    """Plan a visualization."""
    return f"Plan: {approach} using {technology}"

def query_data(query: str) -> list[dict]:
    """Query the database."""
    return [{"q": "Q1", "revenue": 100}]

# This agent loops infinitely:
plan_agent = Agent(
    name="plan_agent",
    model="gemini-2.5-pro",
    instruction="Call plan_visualization with approach, technology, and key elements.",
    tools=[plan_visualization, query_data],
    output_schema=str,  # <-- bare str causes the loop
    mode="single_turn",
)

# Use in a Workflow or with Runner — agent calls tools repeatedly
# without ever settling on a final string response
```
Expected Behavior
Either:
- `output_schema=str` works correctly with tools — the LLM calls tools, then uses `set_model_response` to return a plain string, OR
- ADK raises a validation error at agent creation time: `"output_schema must be a BaseModel subclass when tools are specified"`
Actual Behavior
The agent enters an infinite loop:
- LLM generates a `function_call` (e.g., `plan_visualization`)
- ADK executes the tool and returns the result
- LLM generates another `function_call` instead of calling `set_model_response`
- Repeat endlessly — burns API credits silently
Analysis
ADK auto-adds a `set_model_response` tool when both `output_schema` and `tools` are specified (documented here). With `BaseModel` schemas, the model correctly calls `set_model_response` with structured JSON. With bare `str`, the generated `set_model_response` schema may be malformed or confusing to the model, causing it to fall back to repeated tool calls.
Workaround
Remove `output_schema` from agents that have `tools`. Use other mechanisms to suppress unwanted text output if needed.
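Alternatively, since `BaseModel` subclasses reportedly do not trigger the loop, wrapping the string in a trivial model sidesteps the bug while keeping structured output. The `PlanOutput` name is illustrative, not from the codebase:

```python
from pydantic import BaseModel


class PlanOutput(BaseModel):
    """Trivial wrapper so output_schema is a BaseModel subclass, not bare str."""
    text: str


# Then, on the agent from the reproduction above:
# plan_agent = Agent(
#     ...,
#     tools=[plan_visualization, query_data],
#     output_schema=PlanOutput,  # instead of output_schema=str
# )
```

The final string is then available as `result.text` instead of the raw response.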
Related Issues
Environment
- `google-adk` 2.0.0a2
- Model: `gemini-2.5-pro` (also tested with `gemini-2.5-flash`)
- Python 3.14
Context for this issue was developed with AI assistance (Claude).