Commit 5e056b6
Python: [BREAKING] Provider-leading client design & OpenAI package extraction (#4818)

Major refactoring of the Python Agent Framework client architecture:

- Extract the OpenAI clients into a new `agent-framework-openai` package
- Core package no longer depends on openai, azure-identity, or azure-ai-projects
- Rename clients for discoverability: OpenAIResponsesClient → OpenAIChatClient, OpenAIChatClient → OpenAIChatCompletionClient
- Unify the `model_id`/`deployment_name`/`model_deployment_name` parameters into a single `model` parameter
- New FoundryChatClient for the Azure AI Foundry Responses API
- New FoundryAgent/FoundryAgentClient for connecting to pre-configured Foundry agents
- Remove OpenAIBase/OpenAIConfigMixin from the MRO of the non-deprecated clients
- Deprecate the AzureOpenAI* clients, AzureAIClient, and OpenAIAssistantsClient
- Reorganize samples: azure_openai + azure_ai + azure_ai_agent → azure/
- ADR-0020: Provider-Leading Client Design

Follow-up fixes squashed into this commit:

- Fix missing Agent imports in samples; rename `.model_id` → `.model` in the foundry_local sample
- Fix CI failures (mypy errors, coverage targets, sample imports): add type ignores in azure-ai for TypedDict total=, the model argument, and a forward reference; replace the core.azure/openai coverage targets with the openai package target; add a type annotation for the opts dict in project_provider
- Populate the openai .pyi stub; fix broken README links and coverage targets
- Update observability; reset the azure __init__.pyi; update the ADR number
- Fix Foundry Local; rename docstrings and comments that had not been renamed; add deprecation markers to the old classes
- Fix tests, pyproject files, test vars, and function tests; update the test setup for durable tasks and Azure Functions
- Fix Foundry auth in workflow samples; stabilize the Python integration workflows; update hosting samples for Foundry
- Fix litellm; undo the durabletask changes
- Move the Foundry APIs into a foundry namespace; fix the Foundry pyproject formatting; split provider samples by Foundry surface
- Restore the hosting sample requirements and fix the Foundry Local sample link after the provider sample move
- Update the Foundry integration tests; remove dist from the azurefunctions tests
- Use separate Foundry clients for concurrent agents; fix client setup in azfunc and durable; improve the Azure OpenAI setup with the new clients
- Disable two tests, skip 11, ignore deprecated clients, and remove the OpenAI Assistants integration tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
1 parent 4b53360 commit 5e056b6

485 files changed

Lines changed: 9784 additions & 12084 deletions

Some content is hidden by default for this large commit.
Lines changed: 166 additions & 0 deletions
@@ -0,0 +1,166 @@
name: Setup Local MCP Server
description: Start and validate a local streamable HTTP MCP server for integration tests

inputs:
  fallback_url:
    description: Existing LOCAL_MCP_URL value to keep as a fallback if local startup fails
    required: false
    default: ''
  host:
    description: Host interface to bind the local MCP server
    required: false
    default: '127.0.0.1'
  port:
    description: Port to bind the local MCP server
    required: false
    default: '8011'
  mount_path:
    description: Mount path for the local streamable HTTP MCP endpoint
    required: false
    default: '/mcp'

outputs:
  effective_url:
    description: Local MCP URL when startup succeeds, otherwise the provided fallback URL
    value: ${{ steps.start.outputs.effective_url }}
  local_url:
    description: URL of the local MCP server
    value: ${{ steps.start.outputs.local_url }}
  started:
    description: Whether the local MCP server started and passed validation
    value: ${{ steps.start.outputs.started }}
  pid:
    description: PID of the local MCP server process when startup succeeded
    value: ${{ steps.start.outputs.pid }}

runs:
  using: composite
  steps:
    - name: Start and validate local MCP server
      id: start
      shell: bash
      run: |
        set -euo pipefail

        host="${{ inputs.host }}"
        port="${{ inputs.port }}"
        mount_path="${{ inputs.mount_path }}"
        fallback_url="${{ inputs.fallback_url }}"

        if [[ ! "$mount_path" =~ ^/ ]]; then
          mount_path="/$mount_path"
        fi

        local_url="http://${host}:${port}${mount_path}"
        health_url="http://${host}:${port}/healthz"
        log_file="$RUNNER_TEMP/local-mcp-server.log"
        pid_file="$RUNNER_TEMP/local-mcp-server.pid"
        rm -f "$log_file" "$pid_file"

        server_pid="$(
          python3 - "$GITHUB_WORKSPACE/python" "$log_file" "$host" "$port" "$mount_path" <<'PY'
        from __future__ import annotations

        import subprocess
        import sys

        workspace, log_file, host, port, mount_path = sys.argv[1:]

        with open(log_file, "w", encoding="utf-8") as log:
            process = subprocess.Popen(
                [
                    "uv",
                    "run",
                    "python",
                    "scripts/local_mcp_streamable_http_server.py",
                    "--host",
                    host,
                    "--port",
                    port,
                    "--mount-path",
                    mount_path,
                ],
                cwd=workspace,
                stdout=log,
                stderr=subprocess.STDOUT,
                start_new_session=True,
            )

        print(process.pid)
        PY
        )"
        echo "$server_pid" > "$pid_file"

        started=false
        for _ in $(seq 1 30); do
          if curl --silent --fail "$health_url" >/dev/null; then
            started=true
            break
          fi
          if ! kill -0 "$server_pid" 2>/dev/null; then
            break
          fi
          sleep 1
        done

        if [[ "$started" == "true" ]]; then
          if ! (
            cd "$GITHUB_WORKSPACE/python"
            LOCAL_MCP_URL="$local_url" uv run python - <<'PY'
        from __future__ import annotations

        import asyncio
        import os

        from agent_framework import Content, MCPStreamableHTTPTool


        def result_to_text(result: str | list[Content]) -> str:
            if isinstance(result, str):
                return result
            return "\n".join(content.text for content in result if content.type == "text" and content.text)


        async def main() -> None:
            tool = MCPStreamableHTTPTool(
                name="local_ci_mcp",
                url=os.environ["LOCAL_MCP_URL"],
                approval_mode="never_require",
            )

            async with tool:
                assert tool.functions, "Local MCP server did not expose any tools."
                result = result_to_text(await tool.functions[0].invoke(query="What is Agent Framework?"))
                assert result, "Local MCP server returned an empty response."


        asyncio.run(main())
        PY
          ); then
            started=false
          fi
        fi

        effective_url="$local_url"
        pid="$server_pid"

        if [[ "$started" != "true" ]]; then
          effective_url="$fallback_url"
          pid=""
          if kill -0 "$server_pid" 2>/dev/null; then
            kill -TERM -- "-$server_pid" 2>/dev/null || kill -TERM "$server_pid" || true
            sleep 1
            kill -KILL -- "-$server_pid" 2>/dev/null || kill -KILL "$server_pid" || true
          fi
          echo "Local MCP server was unavailable; continuing with fallback LOCAL_MCP_URL."
          if [[ -f "$log_file" ]]; then
            tail -n 100 "$log_file" || true
          fi
        else
          echo "Using local MCP server at $local_url"
        fi

        echo "started=$started" >> "$GITHUB_OUTPUT"
        echo "local_url=$local_url" >> "$GITHUB_OUTPUT"
        echo "effective_url=$effective_url" >> "$GITHUB_OUTPUT"
        echo "pid=$pid" >> "$GITHUB_OUTPUT"
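The startup flow in the action above has three phases: poll a health endpoint until the server answers (or its process dies), validate the endpoint with a real tool call, and fall back to the provided URL on any failure. The polling-and-fallback core can be sketched in plain Python; `wait_for_health` and `effective_mcp_url` are hypothetical helpers written for illustration, not code from the action:

```python
import time
import urllib.error
import urllib.request
from typing import Callable


def wait_for_health(
    health_url: str,
    attempts: int = 30,
    delay: float = 1.0,
    is_alive: Callable[[], bool] = lambda: True,
) -> bool:
    """Poll `health_url` until it answers 200, the process dies, or we give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(health_url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not listening yet; keep polling
        if not is_alive():
            return False  # server process exited; no point in waiting longer
        time.sleep(delay)
    return False


def effective_mcp_url(local_url: str, fallback_url: str, started: bool) -> str:
    """Mirror the action's `effective_url` output: local on success, fallback otherwise."""
    return local_url if started else fallback_url
```

The `is_alive` hook plays the role of the action's `kill -0 "$server_pid"` check: it lets the poll loop bail out early instead of burning the full 30-second budget on a process that has already crashed.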

.github/workflows/python-check-coverage.py

Lines changed: 1 addition & 2 deletions
@@ -41,8 +41,7 @@
     "packages.purview.agent_framework_purview",
     "packages.anthropic.agent_framework_anthropic",
     "packages.azure-ai-search.agent_framework_azure_ai_search",
-    "packages.core.agent_framework.azure",
-    "packages.core.agent_framework.openai",
+    "packages.openai.agent_framework_openai",
     # Individual files (if you want to enforce specific files instead of whole packages)
     "packages/core/agent_framework/observability.py",
     # Add more targets here as coverage improves

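The coverage targets above mix two spellings: dotted package names and plain file paths. A small helper can normalize both into repo-relative paths; this is an illustrative sketch of that convention, not the actual logic in python-check-coverage.py:

```python
from pathlib import PurePosixPath


def target_to_path(target: str) -> PurePosixPath:
    """Normalize a coverage target into a repo-relative path.

    Dotted targets like 'packages.openai.agent_framework_openai' become
    'packages/openai/agent_framework_openai'; targets that already contain
    a '/' (individual files) pass through unchanged.
    """
    if "/" in target:
        return PurePosixPath(target)
    return PurePosixPath(*target.split("."))


print(target_to_path("packages.openai.agent_framework_openai"))
```

Note that package directory names such as `azure-ai-search` contain hyphens, not dots, so splitting on `.` is safe for this naming scheme.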
.github/workflows/python-integration-tests.yml

Lines changed: 48 additions & 8 deletions
@@ -63,6 +63,8 @@ jobs:
       OPENAI_CHAT_MODEL_ID: ${{ vars.OPENAI__CHATMODELID }}
       OPENAI_RESPONSES_MODEL_ID: ${{ vars.OPENAI__RESPONSESMODELID }}
       OPENAI_EMBEDDINGS_MODEL_ID: ${{ vars.OPENAI_EMBEDDING_MODEL_ID }}
+      OPENAI_MODEL: ${{ vars.OPENAI__RESPONSESMODELID }}
+      OPENAI_EMBEDDING_MODEL: ${{ vars.OPENAI_EMBEDDING_MODEL_ID }}
       OPENAI_API_KEY: ${{ secrets.OPENAI__APIKEY }}
     defaults:
       run:
@@ -81,8 +83,8 @@ jobs:
       - name: Test with pytest (OpenAI integration)
         run: >
           uv run pytest --import-mode=importlib
-          packages/core/tests/openai
-          -m integration
+          packages/openai/tests
+          -m "integration and not azure"
           -n logical --dist worksteal
           --timeout=120 --session-timeout=900 --timeout_method thread
           --retries 2 --retry-delay 5
@@ -94,8 +96,9 @@
     environment: integration
     timeout-minutes: 60
     env:
-      AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__CHATDEPLOYMENTNAME }}
+      AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
       AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
+      AZURE_OPENAI_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
       AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__EMBEDDINGDEPLOYMENTNAME }}
       AZURE_OPENAI_ENDPOINT: ${{ vars.AZUREOPENAI__ENDPOINT }}
     defaults:
@@ -121,7 +124,9 @@
       - name: Test with pytest (Azure OpenAI integration)
         run: >
           uv run pytest --import-mode=importlib
-          packages/core/tests/azure
+          packages/openai/tests/openai/test_openai_chat_completion_client_azure.py
+          packages/openai/tests/openai/test_openai_chat_client_azure.py
+          packages/azure-ai/tests/azure_openai
           -m integration
           -n logical --dist worksteal
           --timeout=120 --session-timeout=900 --timeout_method thread
@@ -151,6 +156,13 @@
         with:
           python-version: ${{ env.UV_PYTHON }}
           os: ${{ runner.os }}
+      - name: Start local MCP server
+        id: local-mcp
+        uses: ./.github/actions/setup-local-mcp-server
+        with:
+          fallback_url: ${{ env.LOCAL_MCP_URL }}
+      - name: Prefer local MCP URL when available
+        run: echo "LOCAL_MCP_URL=${{ steps.local-mcp.outputs.effective_url }}" >> "$GITHUB_ENV"
       - name: Test with pytest (Anthropic, Ollama, MCP integration)
         run: >
           uv run pytest --import-mode=importlib
@@ -161,6 +173,26 @@
           -n logical --dist worksteal
           --timeout=120 --session-timeout=900 --timeout_method thread
           --retries 2 --retry-delay 5
+      - name: Stop local MCP server
+        if: always()
+        shell: bash
+        run: |
+          set -euo pipefail
+          server_pid="${{ steps.local-mcp.outputs.pid }}"
+          if [[ -z "$server_pid" ]]; then
+            exit 0
+          fi
+          if ! kill -0 "$server_pid" 2>/dev/null; then
+            exit 0
+          fi
+          kill -TERM -- "-$server_pid" 2>/dev/null || kill -TERM "$server_pid" 2>/dev/null || true
+          for _ in $(seq 1 10); do
+            if ! kill -0 "$server_pid" 2>/dev/null; then
+              exit 0
+            fi
+            sleep 1
+          done
+          kill -KILL -- "-$server_pid" 2>/dev/null || kill -KILL "$server_pid" 2>/dev/null || true
 
   # Azure Functions + Durable Task integration tests
   python-tests-functions:
@@ -172,10 +204,13 @@
       UV_PYTHON: "3.11"
       OPENAI_CHAT_MODEL_ID: ${{ vars.OPENAI__CHATMODELID }}
       OPENAI_RESPONSES_MODEL_ID: ${{ vars.OPENAI__RESPONSESMODELID }}
+      OPENAI_MODEL: ${{ vars.OPENAI__RESPONSESMODELID }}
       OPENAI_API_KEY: ${{ secrets.OPENAI__APIKEY }}
-      AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__CHATDEPLOYMENTNAME }}
-      AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
+      OPENAI_EMBEDDING_MODEL: ${{ vars.OPENAI_EMBEDDING_MODEL_ID }}
       AZURE_OPENAI_ENDPOINT: ${{ vars.AZUREOPENAI__ENDPOINT }}
+      AZURE_OPENAI_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
+      FOUNDRY_MODEL: ${{ vars.AZUREAI__DEPLOYMENTNAME }}
+      FOUNDRY_PROJECT_ENDPOINT: ${{ secrets.AZUREAI__ENDPOINT }}
       FUNCTIONS_WORKER_RUNTIME: "python"
       DURABLE_TASK_SCHEDULER_CONNECTION_STRING: "Endpoint=http://localhost:8080;TaskHub=default;Authentication=None"
       AzureWebJobsStorage: "UseDevelopmentStorage=true"
@@ -209,7 +244,8 @@
           packages/durabletask/tests/integration_tests
           -m integration
           -n logical --dist worksteal
-          --timeout=120 --session-timeout=900 --timeout_method thread
+          -x
+          --timeout=360 --session-timeout=900 --timeout_method thread
           --retries 2 --retry-delay 5
 
   # Azure AI integration tests
@@ -221,6 +257,8 @@
     env:
       AZURE_AI_PROJECT_ENDPOINT: ${{ secrets.AZUREAI__ENDPOINT }}
       AZURE_AI_MODEL_DEPLOYMENT_NAME: ${{ vars.AZUREAI__DEPLOYMENTNAME }}
+      FOUNDRY_PROJECT_ENDPOINT: ${{ secrets.AZUREAI__ENDPOINT }}
+      FOUNDRY_MODEL: ${{ vars.AZUREAI__DEPLOYMENTNAME }}
       LOCAL_MCP_URL: ${{ vars.LOCAL_MCP__URL }}
     defaults:
       run:
@@ -244,7 +282,9 @@
           subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
       - name: Test with pytest
         timeout-minutes: 15
-        run: uv run --directory packages/azure-ai poe integration-tests -n logical --dist worksteal --timeout=120 --session-timeout=900 --timeout_method thread --retries 2 --retry-delay 5
+        run: |
+          uv run --directory packages/azure-ai poe integration-tests -n logical --dist worksteal --timeout=120 --session-timeout=900 --timeout_method thread --retries 2 --retry-delay 5
+          uv run --directory packages/foundry poe integration-tests -n logical --dist worksteal --timeout=120 --session-timeout=900 --timeout_method thread --retries 2 --retry-delay 5
 
   # Azure Cosmos integration tests
   python-tests-cosmos:
0 commit comments