This directory contains comprehensive examples demonstrating how to integrate the OpenAI Agents SDK with Temporal's durable execution engine. These samples extend the OpenAI Agents SDK examples with Temporal's durability, orchestration, and observability capabilities.
The integration combines two complementary technologies:
- Temporal Workflows: Provide durable execution, state management, and orchestration
- OpenAI Agents SDK: Deliver AI agent capabilities, tool integration, and LLM interactions
This combination ensures that AI agent workflows are:
- Durable: Survive interruptions, restarts, and failures
- Observable: Full tracing, monitoring, and debugging capabilities
- Scalable: Handle complex multi-agent interactions and long-running conversations
- Reliable: Built-in retry mechanisms and error handling
The Runner and Agent execute within the Temporal workflow (deterministic environment), while model invocations automatically become Temporal Activities (non-deterministic environment). This separation ensures that agent orchestration logic is durable and deterministic, while LLM API calls benefit from Temporal's retry mechanisms and fault tolerance. The integration provides this durability without requiring code changes to your existing Agent SDK applications.
Unified observability across both Temporal and OpenAI systems. View agent execution in Temporal's workflow history and OpenAI's tracing dashboard simultaneously.
Each agent runs in its own process or thread, enabling independent scaling. Add more capacity for specific agent types without affecting others.
- Crash-Proof Execution: Automatic recovery from failures, restarts, and bugs
- Rate Limit Handling: Graceful handling of LLM API rate limits
- Network Resilience: Automatic retries for downstream API failures
- State Persistence: Workflow state automatically saved between steps
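Temporal retries failed activities, including model calls, automatically. Conceptually, that retry behavior looks like the following stand-alone sketch (plain Python with a hypothetical `retry_with_backoff` helper, not Temporal's actual implementation):

```python
import time

def retry_with_backoff(fn, max_attempts=5, initial=1.0, coefficient=2.0):
    """Call fn, retrying with exponential backoff on failure."""
    delay = initial
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted all attempts
            time.sleep(delay)
            delay *= coefficient  # exponential backoff
```

In the real integration, Temporal applies an equivalent retry policy to each model-call activity, so transient rate-limit and network errors never surface to the workflow logic.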
Temporal workflows orchestrate the entire agent lifecycle, from initialization to completion, ensuring state persistence and fault tolerance.
Workflows maintain conversation state, agent context, and execution history, enabling long-running, stateful AI interactions.
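Because workflow state is persisted automatically, conversation memory can be ordinary serializable Python data. A minimal sketch of such a state object (the `ConversationState` class is hypothetical, not part of either SDK):

```python
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    """Serializable conversation memory a workflow carries across steps."""
    history: list = field(default_factory=list)

    def add_turn(self, role: str, content: str) -> None:
        # Each turn is a plain dict, so it survives workflow replay and restarts.
        self.history.append({"role": role, "content": content})

    def transcript(self) -> str:
        return "\n".join(f"{t['role']}: {t['content']}" for t in self.history)
```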
Seamless integration of OpenAI's built-in tools (web search, code interpreter, file search) with custom Temporal activities for I/O operations.
Complex workflows can coordinate multiple specialized agents, each with distinct roles and responsibilities.
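A common coordination pattern is a triage step that routes each request to a specialist agent. A toy stand-alone sketch of keyword-based routing (hypothetical names; the actual samples use an LLM-driven triage agent and handoffs):

```python
def route(message: str, specialists: dict, default: str = "general") -> str:
    """Return the specialist agent whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, agent_name in specialists.items():
        if keyword in lowered:
            return agent_name
    return default

# Example mapping: a keyword per specialist agent.
specialists = {"refund": "billing_agent", "error": "support_agent"}
```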
- llms.txt - LLM-friendly summary for AI assistants and developers
- ARCHITECTURE.md - Technical deep dive into integration patterns
- Basic Examples - Fundamental agent patterns, lifecycle management, and tool integration
- Agent Patterns - Advanced multi-agent architectures, routing, and coordination patterns
- Tools Integration - Comprehensive tool usage including code interpreter, file search, and image generation
- Handoffs - Agent collaboration and message filtering patterns
- Hosted MCP - Model Context Protocol integration for external tool access
- Model Providers - Custom LLM provider integration (LiteLLM, Ollama, GPT-OSS)
- Research Bot - Multi-agent research system with planning, search, and synthesis
- Customer Service - Conversational workflows with escalation and state management
- Financial Research - Complex multi-agent financial analysis system
- Reasoning Content - Accessing model reasoning and thought processes
- Temporal server running locally
- Required dependencies: `uv sync --group openai-agents`
- OpenAI API key: `export OPENAI_API_KEY=your_key_here`
- Choose a Service: Start with Basic Examples for fundamental concepts
- Run the Worker: Execute the appropriate `run_worker.py` script
- Execute Workflow: Use the corresponding `run_*_workflow.py` script
- Explore Patterns: Move to Agent Patterns for advanced usage
```bash
# Start Temporal server
temporal server start-dev

# Install dependencies
uv sync --group openai-agents

# Run a specific example
uv run openai_agents/basic/run_worker.py

# In another terminal
uv run openai_agents/basic/run_hello_world_workflow.py
```

```python
from temporalio import workflow

@workflow.defn
class AgentWorkflow:
    @workflow.run
    async def run(self, input: str) -> str:
        # Agent execution logic
        pass
```

```python
from datetime import timedelta

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin, ModelActivityParameters
from temporalio.worker import Worker

client = await Client.connect(
    "localhost:7233",
    plugins=[
        OpenAIAgentsPlugin(
            model_params=ModelActivityParameters(
                start_to_close_timeout=timedelta(seconds=30)
            )
        ),
    ],
)

worker = Worker(
    client,
    task_queue="openai-agents-task-queue",
    workflows=[YourWorkflowClass],
)
```

```python
from temporalio import workflow

from agents import Agent, Runner

@workflow.defn
class MyAgentWorkflow:
    @workflow.run
    async def run(self, input_text: str) -> str:
        agent = Agent(name="MyAgent", instructions="...")
        # Runner.run() executes inside the workflow (deterministic);
        # model invocations automatically become Temporal Activities (non-deterministic).
        # Requires OpenAIAgentsPlugin to be registered with the worker.
        result = await Runner.run(agent, input_text)
        return result.final_output
```

Each service's documentation follows a consistent structure:
- Introduction: Service purpose and role in the ecosystem
- Architecture: System design and component relationships
- Code Examples: Implementation patterns with file paths and benefits
- Development Guidelines: Best practices and common patterns
- File Organization: Directory structure and file purposes
- Temporal Python SDK Documentation
- OpenAI Agents SDK Documentation
- Module Documentation
- Temporal Blog: OpenAI Agents Integration
- Community Demos
This integration is ideal for:
- Conversational AI: Long-running, stateful conversations with memory
- Multi-Agent Systems: Coordinated AI agents working on complex tasks
- Research & Analysis: AI-powered research workflows with tool integration
- Customer Service: Intelligent support systems with escalation capabilities
- Content Generation: AI content creation with workflow orchestration
- Data Processing: AI-driven data analysis and transformation pipelines
For detailed implementation examples and specific use cases, refer to the individual service documentation linked above.