Custom LLM provider integration examples for OpenAI Agents SDK with Temporal workflows.
Adapted from the OpenAI Agents SDK model providers examples.
Before running these examples, be sure to review the prerequisites and background on the integration.
Uses built-in LiteLLM support to connect to various model providers.
Start the LiteLLM provider worker:

```bash
# Set the required environment variable for your chosen provider
export ANTHROPIC_API_KEY="your_anthropic_api_key"  # For Anthropic

uv run openai_agents/model_providers/run_litellm_provider_worker.py
```

Then run the example in a separate terminal:

```bash
uv run openai_agents/model_providers/run_litellm_auto_workflow.py
```

The example uses Anthropic Claude by default but can be modified to use other LiteLLM-supported providers.
Find more LiteLLM providers at: https://docs.litellm.ai/docs/providers
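LiteLLM routes each request based on a provider prefix in the model string (e.g. `anthropic/claude-3-5-sonnet-20241022`); a bare model name is treated as an OpenAI model by default. A minimal sketch of that naming convention (the helper names and the environment-variable mapping are ours, for illustration, not part of the example code):

```python
def resolve_litellm_model(model: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split a LiteLLM model string into (provider, model name).

    "anthropic/claude-3-5-sonnet-20241022" routes to Anthropic; a bare
    name like "gpt-4o-mini" falls through to the default provider.
    """
    provider, sep, name = model.partition("/")
    if not sep:
        return default_provider, model
    return provider, name


def required_env_var(provider: str) -> str:
    """Illustrative mapping from provider to the API-key variable it expects."""
    return f"{provider.upper()}_API_KEY"  # e.g. ANTHROPIC_API_KEY


provider, name = resolve_litellm_model("anthropic/claude-3-5-sonnet-20241022")
print(provider, name, required_env_var(provider))
```

Swapping providers in the worker is then just a matter of changing the model string and exporting the matching API key.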
This example demonstrates tool calling using the gpt-oss reasoning model with a local Ollama server. Running it requires sufficiently powerful hardware and involves a roughly 14 GB model download. It is adapted from the OpenAI Cookbook example.
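Ollama exposes an OpenAI-compatible chat endpoint on `localhost:11434`, so tool calling uses the standard `tools` schema from the Chat Completions API. A rough sketch of the kind of request involved (the `get_weather` tool is an illustrative assumption, not a tool from the example):

```python
import json

# Ollama's OpenAI-compatible endpoint; the example's worker talks to a
# local server like this one.
OLLAMA_CHAT_URL = "http://localhost:11434/v1/chat/completions"


def build_tool_call_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat request that offers one tool to the model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # illustrative tool name
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }


payload = build_tool_call_request("gpt-oss:20b", "What is the weather in Tokyo?")
print(json.dumps(payload, indent=2))
```

If the model decides to call the tool, the response carries a `tool_calls` entry that the agent loop executes and feeds back as a follow-up message.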
Make sure you have Ollama installed, then start the server:

```bash
ollama serve
```

Download the gpt-oss model:

```bash
ollama pull gpt-oss:20b
```

Start the gpt-oss worker:

```bash
uv run openai_agents/model_providers/run_gpt_oss_worker.py
```

Then run the example in a separate terminal:

```bash
uv run openai_agents/model_providers/run_gpt_oss_workflow.py
```

Additional examples in this directory:

- Custom Example Agent - Custom OpenAI client integration
- Custom Example Global - Global default client configuration
- Custom Example Provider - Custom ModelProvider pattern
- LiteLLM Provider - Interactive model/API key input