feat(ollama): support response_format #34612
Mason Daugherty (mdrxy) merged 7 commits into langchain-ai:master

Conversation
Force-pushed from 5f212fe to c3f746c, then from c3f746c to 14d6bc9.
Could you coordinate with Ayyub (@AB7-cpu) on #34611 ("response_format in ChatOllama")? These appear to be duplicates at a quick glance.
Hey Mohan Kumar S (@mohankumar27) 👋
Hi Ayyub (@AB7-cpu). I did not notice your fix for the issue; it seems both fixes are the same. Additionally, your fix includes an appropriate warning log. I think it's fine to go ahead with your commit if the maintainers agree, and this PR can be closed.
Hey Mohan Kumar S (@mohankumar27)! Thanks for tackling this issue. We'll need to add warnings for silent failures: the main concern is that several edge cases silently ignore the user's `response_format` without telling them.

The existing tests cover the happy paths well; a few additional cases would strengthen coverage. Also, consider splitting out the format resolution logic into a helper to improve readability. (This is optional, but it makes the main method easier to follow.) Thank you!
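For instance, one such edge case, sketched as a hypothetical repro (model name and prompt are illustrative): passing both an explicit `format` and a `response_format`, where pre-fix the latter was dropped without any signal to the user.

```python
from langchain_ollama import ChatOllama

# Both parameters supplied at once: without a warning, the user's
# response_format is silently dropped in favor of the explicit format.
llm = ChatOllama(model="llama3.1", format="json")
llm.invoke(
    "Summarize this as JSON.",
    response_format={
        "type": "json_schema",
        "json_schema": {"schema": {"type": "object"}},
    },
)
```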
- Extract `response_format` resolution to a `_resolve_format_param` helper method.
- Add warnings when `response_format` is ignored (due to an explicit `format` arg) or invalid.
- Add tests for format priority and warning scenarios.
Thanks for the feedback, Mason Daugherty (@mdrxy)! I've pushed a new commit addressing your suggestions (summarized above). Let me know if this looks good!
Fixes #34610
This PR resolves an issue where `ChatOllama` would raise an `unexpected keyword argument 'response_format'` error when used with `create_agent` or when passed an OpenAI-style `response_format`.

When using `create_agent` (especially with models like `gpt-oss`), LangChain creates a `response_format` argument (e.g., `{"type": "json_schema", ...}`). `ChatOllama` previously passed this argument directly to the underlying Ollama client, which does not support `response_format` and instead expects a `format` parameter.
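A minimal repro sketch of the pre-fix failure (model name and prompt are illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# Pre-fix, this kwarg was forwarded verbatim to the Ollama client,
# which raised: TypeError: ... unexpected keyword argument 'response_format'
llm.invoke(
    "Return a JSON object with a `name` field.",
    response_format={"type": "json_object"},
)
```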
The Fix

I updated `_chat_params` in `libs/partners/ollama/langchain_ollama/chat_models.py` to:

- Accept a `response_format` argument.
- Translate it into Ollama's `format` parameter:
  - `{"type": "json_schema", "json_schema": {"schema": ...}}` -> `format=schema`
  - `{"type": "json_object"}` -> `format="json"`
- Remove `response_format` from the kwargs passed to the client.
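A minimal sketch of the resolution logic, using the `_resolve_format_param` helper name from the review thread (the exact merged code may differ):

```python
import warnings
from typing import Optional, Union


def _resolve_format_param(
    explicit_format: Optional[Union[str, dict]],
    response_format: Optional[dict],
) -> Optional[Union[str, dict]]:
    """Map an OpenAI-style ``response_format`` onto Ollama's ``format``."""
    # An explicit `format` always takes priority; warn rather than
    # silently dropping the user's response_format.
    if explicit_format is not None:
        if response_format is not None:
            warnings.warn(
                "`response_format` is ignored because `format` was provided."
            )
        return explicit_format
    if response_format is None:
        return None

    format_type = response_format.get("type")
    if format_type == "json_schema":
        # {"type": "json_schema", "json_schema": {"schema": ...}} -> format=schema
        return response_format.get("json_schema", {}).get("schema")
    if format_type == "json_object":
        # {"type": "json_object"} -> format="json"
        return "json"

    # Unknown type: warn instead of failing silently.
    warnings.warn(f"Unsupported `response_format` type: {format_type!r}; ignoring it.")
    return None
```

`_chat_params` can then call this helper, set `format` from its return value, and pop `response_format` from the kwargs before they reach the client.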
Validation

- Tested `json_schema`, `json_object`, and explicit `format` priority scenarios.
- Added unit tests in `libs/partners/ollama/tests/unit_tests/test_chat_models.py` covering these scenarios.
- Ran the package test suite (`make -C libs/partners/ollama test`), passing 29 tests (previously 26).
- Ran `make lint_package` and `make format`.
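For illustration, a priority test might look like the following (the test name and assertions are assumptions, written against the sketch above rather than the merged code):

```python
import pytest


def test_explicit_format_wins_over_response_format():
    # The explicit `format` should win, and the user should be warned
    # that their response_format was ignored.
    with pytest.warns(UserWarning, match="response_format"):
        resolved = _resolve_format_param(
            explicit_format="json",
            response_format={"type": "json_object"},
        )
    assert resolved == "json"
```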