
feat(ollama): support response_format #34612

Merged
Mason Daugherty (mdrxy) merged 7 commits into langchain-ai:master from mohankumar27:fix/chollama-response-format
Apr 7, 2026

Conversation

Mohan Kumar S (mohankumar27), Contributor, commented Jan 6, 2026

Fixes #34610


This PR resolves an issue where ChatOllama would raise an unexpected keyword argument 'response_format' error when used with create_agent or when passed an OpenAI-style response_format.

When using create_agent (especially with models like gpt-oss), LangChain creates a response_format argument (e.g., {"type": "json_schema", ...}). ChatOllama previously passed this argument directly to the underlying Ollama client, which does not support response_format and instead expects a format parameter.
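For context, a minimal sketch of how the error surfaces when an OpenAI-style response_format is forwarded as a call kwarg (the model name and prompt are illustrative; create_agent hits the same path internally):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# Before this fix, the kwarg was forwarded verbatim to the Ollama client, which
# raised: TypeError: ... got an unexpected keyword argument 'response_format'
llm.invoke(
    "Return a JSON object with a single key 'answer'.",
    response_format={"type": "json_object"},
)
```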

The Fix

I updated _chat_params in libs/partners/ollama/langchain_ollama/chat_models.py to:

  1. Intercept the response_format argument.
  2. Map it to the native Ollama format parameter (see the sketch after this list):
    • {"type": "json_schema", "json_schema": {"schema": ...}} -> format=schema
    • {"type": "json_object"} -> format="json"
  3. Remove response_format from the kwargs passed to the client.
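A minimal sketch of the mapping logic (illustrative only; the helper name and exact shape are mine, not necessarily the merged code):

```python
from typing import Any, Optional, Union


def _map_response_format(
    response_format: Optional[dict],
) -> Optional[Union[str, dict[str, Any]]]:
    """Translate an OpenAI-style response_format into Ollama's native `format` param."""
    if not isinstance(response_format, dict):
        return None
    type_ = response_format.get("type")
    if type_ == "json_schema":
        # {"type": "json_schema", "json_schema": {"schema": {...}}} -> format=<schema dict>
        return response_format.get("json_schema", {}).get("schema")
    if type_ == "json_object":
        # {"type": "json_object"} -> format="json"
        return "json"
    # Unknown or missing type: nothing to map.
    return None
```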

Validation

  • Reproduction Script: Verified the fix with a script covering json_schema, json_object, and explicit format priority scenarios.
  • New Tests: Added 3 new unit tests to libs/partners/ollama/tests/unit_tests/test_chat_models.py covering these scenarios.
  • Regression: Ran the full test suite (make -C libs/partners/ollama test); all 29 tests pass (previously 26).
  • Lint/Format: Verified with make lint_package and make format.

github-actions Bot added the integration, ollama, and fix labels Jan 6, 2026
codspeed-hq Bot commented Jan 6, 2026

Merging this PR will not alter performance

✅ 1 untouched benchmark
⏩ 39 skipped benchmarks¹


Comparing mohankumar27:fix/chollama-response-format (c516f88) with master (2bc982b)

Open in CodSpeed

Footnotes

  1. 39 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

Mason Daugherty (mdrxy), Member, commented:

Could you coordinate with Ayyub (@AB7-cpu) on #34611? These appear to be duplicates at a quick glance.

@mdrxy Mason Daugherty (mdrxy) changed the title fix(ollama): support response_format in ChatOllama fix(ollama): support response_format in ChatOllama Jan 8, 2026
github-actions Bot added the fix label and removed the fix label Jan 8, 2026
Ayyub (AB7-cpu) commented:

Hey Mohan Kumar S (@mohankumar27) 👋
Looks like we both ended up fixing the issue independently in the same way.
Since this is a small change and the solutions are very similar, I think it’s probably best for maintainers to go with one PR to avoid duplication.

Mohan Kumar S (mohankumar27), Contributor Author, commented:

Hi Ayyub (@AB7-cpu). I did not notice your fix for the issue; it seems both fixes are the same. Additionally, your fix includes an appropriate warning log. I think it's fine to go ahead with your commit if the maintainers agree, and this PR can be closed.

Mason Daugherty (mdrxy), Member, commented:

Hey Mohan Kumar S (@mohankumar27)!

Thanks for tackling this issue

We'll need to add warnings for silent failures. The main concern is that several edge cases silently ignore the user's response_format without any feedback. This can lead to frustrating debugging sessions where the code "works" but produces unexpected output.

Scenarios that likely fail silently:

| Input | Current behavior | Suggested |
| --- | --- | --- |
| `format="json"` + `response_format={...}` | `response_format` ignored | Emit warning about precedence |
| `{"type": "json_schema"}` (no schema) | `format=None` | Warn about missing schema |
| `{"type": "text"}` or unknown type | Ignored | Warn about unsupported type |
| `response_format="json_object"` (string) | Ignored | Warn that dict is required |

The existing tests cover the happy paths well. A few additional cases would strengthen coverage.

Also, consider splitting test_chat_ollama_supports_response_format_json_object — it currently tests both the json_object mapping AND the format priority behavior. Separating these makes failures easier to diagnose.
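For illustration, a rough sketch of the suggested split (the test names, the private `_chat_params` call shape, and the returned `format` key are assumptions, not the actual test code):

```python
from langchain_ollama import ChatOllama


def test_response_format_json_object_maps_to_json_format() -> None:
    """json_object alone should translate to Ollama's format='json'."""
    llm = ChatOllama(model="llama3.1")
    params = llm._chat_params([], response_format={"type": "json_object"})
    assert params["format"] == "json"


def test_explicit_format_takes_priority_over_response_format() -> None:
    """An explicit format argument should win over response_format."""
    llm = ChatOllama(model="llama3.1", format="json")
    schema = {"type": "object", "properties": {"answer": {"type": "string"}}}
    params = llm._chat_params(
        [],
        response_format={"type": "json_schema", "json_schema": {"schema": schema}},
    )
    assert params["format"] == "json"
```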

In addition, the format resolution logic could be extracted to improve readability. (This is optional, but it makes the main _chat_params method cleaner and the logic more testable in isolation.)

Thank you!

- Extract response_format resolution to _resolve_format_param helper method.

- Add warnings when response_format is ignored (due to explicit format arg) or invalid.

- Add tests for format priority and warning scenarios.
Mohan Kumar S (mohankumar27), Contributor Author, commented:

Thanks for the feedback, Mason Daugherty (@mdrxy)!

I've pushed a new commit addressing your suggestions:

  1. Refactoring: Extracted the resolution logic into a helper method _resolve_format_param to clean up _chat_params (see the sketch below).
  2. Warnings: Added warnings.warn (using UserWarning) for cases where response_format is ignored (e.g., when format is explicitly provided) or when an invalid/unrecognized type is passed.
  3. Tests:
  • Split the json_object test into two separate tests: one for the mapping and one for the priority logic.
  • Added new tests to verify that warnings are raised for invalid types and priority conflicts.

Let me know if this looks good!
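For readers following along, a hedged sketch of what the described helper might look like (the name `_resolve_format_param` comes from the discussion above; the merged implementation may differ in details):

```python
import warnings
from typing import Any, Optional, Union


def _resolve_format_param(
    explicit_format: Optional[Union[str, dict]],
    response_format: Any,
) -> Optional[Union[str, dict]]:
    """Resolve Ollama's `format` from an explicit value or an OpenAI-style response_format."""
    if explicit_format is not None:
        if response_format is not None:
            warnings.warn(
                "Both `format` and `response_format` were provided; "
                "`response_format` is ignored in favor of `format`.",
                UserWarning,
                stacklevel=2,
            )
        return explicit_format
    if response_format is None:
        return None
    if not isinstance(response_format, dict):
        warnings.warn(
            "`response_format` must be an OpenAI-style dict; got "
            f"{type(response_format).__name__}. Ignoring it.",
            UserWarning,
            stacklevel=2,
        )
        return None
    type_ = response_format.get("type")
    if type_ == "json_schema":
        schema = response_format.get("json_schema", {}).get("schema")
        if schema is None:
            warnings.warn(
                "`response_format` of type 'json_schema' is missing a schema; ignoring it.",
                UserWarning,
                stacklevel=2,
            )
        return schema
    if type_ == "json_object":
        return "json"
    warnings.warn(
        f"Unsupported `response_format` type {type_!r}; ignoring it.",
        UserWarning,
        stacklevel=2,
    )
    return None
```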


Mason Daugherty (mdrxy) changed the title from "fix(ollama): support response_format in ChatOllama" to "feat(ollama): support response_format" Apr 7, 2026
github-actions Bot added the feature label and removed the size: S and fix labels Apr 7, 2026
github-actions Bot added the size: M (200-499 LOC) label Apr 7, 2026
Mason Daugherty (mdrxy) merged commit 3beba77 into langchain-ai:master Apr 7, 2026
21 checks passed

Labels

- bypass-issue-check
- external
- feature (For PRs that implement a new feature; NOT A FEATURE REQUEST)
- integration (PR made that is related to a provider partner package integration)
- ollama (`langchain-ollama` package issues & PRs)
- size: M (200-499 LOC)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Seems ChatOllama doesnt work well with response_format in create_agent

3 participants