Describe the bug
When using LiteLLM with models that return reasoning_content (e.g., Gemini 2.5 with thinking enabled), the reasoning/thought content is correctly captured and stored as types.Part objects with thought=True in ADK v1.20.0. However, when building subsequent LLM requests, the _get_content() function does not filter out these thought parts, causing:
- Reasoning content to be sent back to the LLM as regular content in subsequent turns
- Significant token waste from replaying thought traces
- Potential model confusion from seeing its own reasoning as conversation content
To Reproduce
- Install ADK v1.20.0:

```shell
pip install google-adk==1.20.0
```
- Create an agent using the LiteLLM adapter with a reasoning model:

```python
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

agent = Agent(
    name="test_agent",
    model=LiteLlm(model="gemini/gemini-2.5-pro"),  # or any other model that returns reasoning_content
    instruction="You are a helpful assistant.",
)
```
- Run a multi-turn conversation (2+ messages)
- Observe in LiteLLM logs that turn 2+ includes the reasoning text from turn 1 as regular message content
Error/Log output:
Turn 2 request to LLM contains reasoning text from turn 1 as regular content field instead of being filtered out.
Expected behavior
Thought parts (with thought=True) should be filtered from the conversation history when building LLM requests, similar to how _convert_foreign_event() in contents.py already filters them (line 513).
Screenshots
N/A - Visible in LiteLLM proxy logs or by adding debug logging to _get_content().
Desktop (please complete the following information):
- OS: macOS
- Python version: 3.11
- ADK version: 1.20.0+
Model Information:
- Are you using LiteLLM: Yes
- Which model is being used: gemini-2.5-pro (any model returning reasoning_content)
Additional context
Root Cause Analysis:
Commit 31cfa3b82bff2a130622d3ba0909024927121ce4 ("feat: Capture thinking output, forward raw payloads, and fix exec locals") added _convert_reasoning_value_to_parts() to capture reasoning as thought parts, but _get_content() (line 536 in lite_llm.py) was not updated to filter them:
```python
# Current code (buggy)
async def _get_content(parts: Iterable[types.Part], ...):
    for part in parts:
        if part.text:  # Missing: and not part.thought
            content_objects.append({"type": "text", "text": part.text})
```
Suggested Fix:
```python
async def _get_content(parts: Iterable[types.Part], ...):
    for part in parts:
        if part.thought:
            continue  # Skip thought/reasoning parts
        if part.text:
            content_objects.append({"type": "text", "text": part.text})
```
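As a standalone sanity check of the skip logic, the filter can be exercised outside ADK with a minimal dataclass stand-in for types.Part (FakePart and collect_text_content below are illustrative names, not ADK APIs; the real class lives in google.genai):

```python
from dataclasses import dataclass
from typing import Iterable, Optional

# Minimal stand-in for google.genai types.Part, for illustration only.
@dataclass
class FakePart:
    text: Optional[str] = None
    thought: bool = False

def collect_text_content(parts: Iterable[FakePart]) -> list[dict]:
    """Mirrors the suggested _get_content() fix: drop thought parts first."""
    content_objects = []
    for part in parts:
        if part.thought:
            continue  # Skip thought/reasoning parts
        if part.text:
            content_objects.append({"type": "text", "text": part.text})
    return content_objects

parts = [
    FakePart(text="<model reasoning trace from turn 1>", thought=True),
    FakePart(text="Here is the answer."),
]
print(collect_text_content(parts))
# Only the non-thought part survives:
# [{'type': 'text', 'text': 'Here is the answer.'}]
```

With the current (buggy) code, both parts would be serialized into the turn-2 request, which is exactly the token waste described above.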
Related:
- Commit that introduced the bug: 31cfa3b82bff2a130622d3ba0909024927121ce4
- Similar filtering pattern exists in contents.py:_convert_foreign_event() (line 513): if part.thought: continue
- This was working correctly in ADK v1.19.0 (before reasoning capture was added)