fix(lite_llm): propagate Gemini grounding metadata from ModelResponse#5661
Open
1wos wants to merge 3 commits into
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
Collaborator
Hi @1wos, thank you for your contribution! We appreciate you taking the time to submit this pull request. Can you please fix the failing test by running autoformat.sh?
Force-pushed from 22e1e91 to 57851ed (Compare)
Author
@rohityan Done. Thanks for the review!
Link to Issue
Problem
LiteLLM exposes Gemini's grounding metadata on `ModelResponse.vertex_ai_grounding_metadata` rather than inside the message, and `_model_response_to_generate_content_response` in `lite_llm.py` never reads it. As a result, `LlmResponse.grounding_metadata` is always `None` when Gemini is called through `LiteLlm`, breaking `event.grounding_metadata`, `after_model_callback`, and citation pipelines for the entire LiteLlm path. The native `Gemini()` path picks grounding up from `candidate.grounding_metadata` (llm_response.py:190).
Solution
Pull `vertex_ai_grounding_metadata` off the `ModelResponse` after the message is converted, and attach it to `LlmResponse.grounding_metadata`. Three shapes are accepted:

- `types.GroundingMetadata` instance → used as-is.
- `dict` → validated via `types.GroundingMetadata.model_validate`.
- `list` (one entry per candidate) → first entry, then the same dispatch.

If validation fails, the handler logs a warning and leaves grounding unset, so a single malformed payload doesn't break the rest of the response.
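The dispatch above can be sketched as follows. This is a minimal, self-contained illustration, not the PR's actual code: `extract_grounding_metadata` is a hypothetical helper name, and a stand-in class replaces `google.genai`'s `types.GroundingMetadata` so the sketch runs without the library (the real code would call `types.GroundingMetadata.model_validate` for the dict case).

```python
import logging

logger = logging.getLogger(__name__)


# Stand-in for google.genai's types.GroundingMetadata, used here only so
# the sketch is runnable without the library installed.
class GroundingMetadata:
    def __init__(self, **fields):
        self.fields = fields

    @classmethod
    def model_validate(cls, data):
        if not isinstance(data, dict):
            raise ValueError("expected a dict payload")
        return cls(**data)


def extract_grounding_metadata(model_response):
    """Dispatch over the three accepted shapes of
    ModelResponse.vertex_ai_grounding_metadata (hypothetical helper name)."""
    raw = getattr(model_response, "vertex_ai_grounding_metadata", None)
    if raw is None:
        return None
    # list shape: one entry per candidate; take the first, then re-dispatch.
    if isinstance(raw, list):
        if not raw:
            return None
        raw = raw[0]
    # instance shape: used as-is.
    if isinstance(raw, GroundingMetadata):
        return raw
    # dict shape: validate; a malformed payload must not break the response.
    try:
        return GroundingMetadata.model_validate(raw)
    except Exception:
        logger.warning("Invalid grounding metadata; leaving it unset.")
        return None
```

The caller would then assign the result to `LlmResponse.grounding_metadata` after the message conversion step.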
Testing Plan
Unit Tests in
tests/unittests/models/test_litellm.py:test_model_response_to_generate_content_response_grounding_metadata_dict— dict payload is propagated.test_model_response_to_generate_content_response_grounding_metadata_list— list payload uses the first entry.test_model_response_to_generate_content_response_no_grounding_metadata— missing attribute leaves grounding asNone(no regression for non-Gemini LiteLlm models).Full
test_litellm.pysuite passes locally:Manual E2E: Live Gemini API verification was not possible locally (no API key configured). The unit tests cover the conversion function the bug report points to.
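The shape of those three unit tests can be sketched as below. This is an illustrative, self-contained mock-up: `FakeModelResponse` and `grounding_from` are stand-ins invented for this sketch, whereas the real tests exercise `_model_response_to_generate_content_response` with actual `ModelResponse` objects.

```python
# Stand-in for litellm's ModelResponse: the attribute is only present
# when the backend (e.g. Gemini) returned grounding metadata.
class FakeModelResponse:
    def __init__(self, grounding=None):
        if grounding is not None:
            self.vertex_ai_grounding_metadata = grounding


def grounding_from(response):
    """Simplified stand-in for the conversion logic under test."""
    raw = getattr(response, "vertex_ai_grounding_metadata", None)
    if isinstance(raw, list):
        raw = raw[0] if raw else None
    return raw


def test_grounding_metadata_dict():
    # dict payload is propagated.
    assert grounding_from(FakeModelResponse({"k": "v"})) == {"k": "v"}


def test_grounding_metadata_list():
    # list payload uses the first entry.
    payload = [{"first": 1}, {"second": 2}]
    assert grounding_from(FakeModelResponse(payload)) == {"first": 1}


def test_no_grounding_metadata():
    # missing attribute leaves grounding as None.
    assert grounding_from(FakeModelResponse()) is None
```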
Checklist