
# Model Providers Examples

Custom LLM provider integration examples for OpenAI Agents SDK with Temporal workflows.

Adapted from the OpenAI Agents SDK model providers examples.

Before running these examples, be sure to review the prerequisites and background on the integration.

## Running the Examples

### Currently Implemented

#### LiteLLM Auto

Uses built-in LiteLLM support to connect to various model providers.

Start the LiteLLM provider worker:

```bash
# Set the required environment variable for your chosen provider
export ANTHROPIC_API_KEY="your_anthropic_api_key"  # For Anthropic

uv run openai_agents/model_providers/run_litellm_provider_worker.py
```

Then run the example in a separate terminal:

```bash
uv run openai_agents/model_providers/run_litellm_auto_workflow.py
```

The example uses Anthropic Claude by default but can be modified to use other LiteLLM-supported providers.
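Switching providers comes down to changing the LiteLLM model string, which follows a `<provider>/<model>` convention. A small hypothetical sketch of selecting the string from an environment variable (the specific model identifiers are illustrative):

```python
import os

# LiteLLM addresses models as "<provider>/<model-name>".
# These identifiers are illustrative examples, not an exhaustive list.
PROVIDER_MODELS = {
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",
    "gemini": "gemini/gemini-1.5-flash",
    "groq": "groq/llama3-70b-8192",
}

def pick_model(provider: str) -> str:
    """Return the LiteLLM model string for a provider, defaulting to Anthropic."""
    return PROVIDER_MODELS.get(provider, PROVIDER_MODELS["anthropic"])

# A hypothetical LITELLM_PROVIDER variable selects the provider at run time.
model = pick_model(os.environ.get("LITELLM_PROVIDER", "anthropic"))
```

Remember to export the matching API key (e.g. `GEMINI_API_KEY`, `GROQ_API_KEY`) for whichever provider you select.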

Find more LiteLLM providers at: https://docs.litellm.ai/docs/providers

## Extra

### GPT-OSS with Ollama

This example demonstrates tool calling with the gpt-oss reasoning model running on a local Ollama server. It requires sufficiently powerful hardware and involves a roughly 14 GB model download. It is adapted from the OpenAI Cookbook example.

Make sure you have Ollama installed:

```bash
ollama serve
```

Download the gpt-oss model:

```bash
ollama pull gpt-oss:20b
```

Start the gpt-oss worker:

```bash
uv run openai_agents/model_providers/run_gpt_oss_worker.py
```

Then run the example in a separate terminal:

```bash
uv run openai_agents/model_providers/run_gpt_oss_workflow.py
```

## Not Yet Implemented

- Custom Example Agent - Custom OpenAI client integration
- Custom Example Global - Global default client configuration
- Custom Example Provider - Custom ModelProvider pattern
- LiteLLM Provider - Interactive model/API key input