
Releases: strands-agents/sdk-python

v1.36.0

17 Apr 20:09
4e3ad44


What's Changed

  • feat(hooks): accept callable hook callbacks in Agent constructor by @agent-of-mkmeral in #1992
  • fix: handle missing optional fields in non-streaming citation conversion by @agent-of-mkmeral in #2098
  • fix(telemetry): add common gen_ai attributes to event loop cycle spans by @giulio-leone in #1973
  • fix(telemetry): use per-invocation usage in agent span attributes by @en-yao in #2017
  • feat(a2a): add client_config param and deprecate a2a_client_factory by @agent-of-mkmeral in #2103
  • fix: clear leaked running loop in MCP client background thread by @mkmeral in #2111
  • feat(openai): plumb through cache tokens in metadata events by @Unshure in #2116
  • feat(agent): add take_snapshot() and load_snapshot() methods by @zastrowm in #1948
  • feat(skills): support loading skills from URLs by @dgallitelli in #2091
  • feat: add metadata field to messages for stateful context tracking by @lizradway in #2125
  • feat(bidi): support request_state stop_event_loop flag by @agent-of-mkmeral in #1954
  • fix: preserve Gemini thought_signature in LiteLLM multi-turn tool calls by @opieter-aws in #2129
  • fix(bedrock): normalize empty toolResult content arrays in _format_bedrock_messages by @ghhamel in #2123
  • fix(telemetry): remove force_flush in tracer by @poshinchen in #2142
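
Several of these entries change API surface. As a rough illustration of the callable-hook change in #1992 (the class and function names below are hypothetical stand-ins, not the SDK's actual API), a constructor that previously required hook-provider objects can normalize bare callables like this:

```python
# Illustrative sketch of accepting plain callables alongside hook-provider
# objects (cf. #1992). HookProvider and normalize_hooks are hypothetical
# stand-ins, not the SDK's real classes.
class HookProvider:
    """Minimal stand-in for an object-style hook provider."""
    def __init__(self, callback):
        self.callback = callback

def normalize_hooks(hooks):
    """Wrap bare callables so downstream code sees only providers."""
    return [h if isinstance(h, HookProvider) else HookProvider(h) for h in hooks]

def on_event(event):
    return f"saw {event}"

# Mixed list: a bare callable and an explicit provider both work.
providers = normalize_hooks([on_event, HookProvider(on_event)])
```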


Full Changelog: v1.35.0...v1.36.0

v1.35.0

08 Apr 19:41
cd5da4f


What's Changed

Features

Bedrock Service Tier Support — PR#1799

Amazon Bedrock now offers service tiers (Priority, Standard, Flex) that let you control the trade-off between latency and cost on a per-request basis. BedrockModel accepts a new service_tier configuration field, consistent with how other Bedrock-specific features like guardrails are exposed. When not set, the field is omitted and Bedrock uses its default behavior.

from strands import Agent
from strands.models.bedrock import BedrockModel

# Use "flex" tier for cost-optimized batch processing
model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    service_tier="flex",
)
agent = Agent(model=model)

# Use "priority" for latency-sensitive applications
realtime_model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    service_tier="priority",
)

Valid values are "default", "priority", and "flex". If a model or region does not support the specified tier, Bedrock returns a ValidationException.
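
As a minimal sketch of the omit-when-unset behavior described above (the payload shape and validation here are illustrative simplifications, not Bedrock's exact wire format):

```python
# Illustrative sketch: when service_tier is unset, the key is omitted
# entirely so Bedrock falls back to its default behavior. The payload
# shape is a simplification, not the actual Bedrock request format.
VALID_TIERS = {"default", "priority", "flex"}

def build_invoke_request(model_id, service_tier=None):
    if service_tier is not None and service_tier not in VALID_TIERS:
        raise ValueError(f"unsupported service tier: {service_tier}")
    request = {"modelId": model_id}
    if service_tier is not None:
        request["serviceTier"] = service_tier
    return request
```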

Bug Fixes

  • Sliding window conversation manager user-first enforcement (PR#2087): The sliding window could produce a trimmed conversation starting with an assistant message, causing ValidationException on providers that require user-first ordering (including Bedrock Nova). The trim-point validation now ensures the first remaining message always has role == "user". Also fixed a short-circuit logic bug in the toolUse guard that let orphaned tool-use blocks slip through at window boundaries.

  • MCP _meta forwarding (PR#1918, PR#2081): Custom metadata per the MCP spec was silently dropped because MCPClient never forwarded the _meta field to ClientSession.call_tool(). Additionally, the OTEL instrumentation used model_dump() instead of model_dump(by_alias=True), serializing the field as "meta" instead of "_meta" and corrupting the payload. Both the direct call_tool and task-augmented execution paths now correctly forward meta.

  • Tool exception propagation to OpenTelemetry spans (PR#2046): When a tool raised an exception, the original exception was dropped before reaching end_tool_call_span, causing all tool spans to get StatusCode.OK even on errors. Tool errors now correctly propagate with StatusCode.ERROR, preserving the original exception type and traceback for observability backends like Langfuse.

  • Anthropic premature stream termination (PR#2047): The Anthropic provider crashed with AttributeError when the stream terminated before the final message_stop event, because it accessed event.message.usage on event types that lack a .message attribute. Now uses the Anthropic SDK's stream.get_final_message() to read accumulated usage from all received events, gracefully handling premature termination and empty streams.

  • Anthropic Pydantic deprecation warnings (PR#2044): Fixed message_stop event handling to avoid Pydantic deprecation warnings.
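
The sliding-window fix above can be sketched as follows. This is an illustrative reconstruction of the trim rule, not the SDK's actual implementation: after choosing a trim index, advance it until the first remaining message has role == "user".

```python
# Hypothetical sketch of the user-first trim rule: advance the trim point
# past any leading assistant messages so providers that require user-first
# ordering (e.g. Bedrock Nova) accept the trimmed history.
def find_trim_index(messages, window_size):
    idx = max(0, len(messages) - window_size)
    while idx < len(messages) and messages[idx]["role"] != "user":
        idx += 1
    return idx

history = [
    {"role": "user", "content": "a"},
    {"role": "assistant", "content": "b"},
    {"role": "user", "content": "c"},
    {"role": "assistant", "content": "d"},
]
# A naive window of 3 would start at the assistant message "b";
# the rule advances to "c" so the window starts with a user turn.
trimmed = history[find_trim_index(history, 3):]
```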


Full Changelog: v1.34.1...v1.35.0

v1.34.1

01 Apr 20:32
635edbc


What's Changed

Full Changelog: v1.34.0...v1.34.1

v1.34.0

31 Mar 18:45
e267a64


What's Changed

  • chore: remove Cohere from required integ test providers by @zastrowm in #1967
  • feat: add AgentAsTool by @notowen333 in #1932
  • feat: auto-wrap Agent instances passed in tools list by @agent-of-mkmeral in #1997
  • feat(telemetry): emit system prompt on chat spans per GenAI semconv by @sanjeed5 in #1818
  • feat(mcp): add support for MCP elicitation -32042 error handling by @Christian-kam in #1745
  • fix: ollama input/output token count by @lizradway in #2008
  • feat: add stateful model support for server-side conversation management by @pgrayy in #2004
  • feat: add built-in tool support for OpenAI Responses API by @pgrayy in #2011
  • fix: handle reasoning content in OpenAIResponsesModel request formatting by @pgrayy in #2013
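
The auto-wrap behavior from #1997 can be sketched roughly as below; Agent and AgentTool here are minimal hypothetical stand-ins, not the SDK's real classes:

```python
# Illustrative sketch of auto-wrapping Agent instances passed in a tools
# list (cf. #1997): agents become tool adapters, other entries pass through.
class Agent:
    def __init__(self, name):
        self.name = name

class AgentTool:
    """Adapter exposing an agent through a tool-style interface."""
    def __init__(self, agent):
        self.agent = agent
        self.name = agent.name

def wrap_tools(tools):
    return [AgentTool(t) if isinstance(t, Agent) else t for t in tools]

def search(query):  # an ordinary function tool, passed through unchanged
    return f"results for {query}"

tools = wrap_tools([Agent("researcher"), search])
```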


Full Changelog: v1.33.0...v1.34.0

v1.33.0

24 Mar 17:55
0a723bc


Pins litellm<=1.82.6 due to a supply chain attack in litellm 1.82.8 on PyPI.

What's Changed

  • fix: summarization conversation manager sometimes returns empty response by @Unshure in #1947
  • fix: remove agent from swarm test to get more consistency out of it by @Unshure in #1946
  • fix: CRITICAL: Hard pin litellm<=1.82.6 to mitigate supply chain attack by @udaymehta in #1961


Full Changelog: v1.32.0...v1.33.0

v1.32.0

20 Mar 14:02
38c1ab6


What's Changed

  • fix(event-loop): ensure all cycle metrics include end time and duration by @stephentreacy in #1903
  • fix: pin upper bound for mistralai dependency by @mkmeral in #1935
  • fix: override end_turn stop reason when streaming response contains toolUse blocks by @atian8179 in #1827


Full Changelog: v1.31.0...v1.32.0

v1.31.0

19 Mar 14:07
1643a62


What's Changed

  • feat: pass A2A request context metadata as invocation state by @mkmeral in #1854
  • fix: s3session manager bug by @mehtarac in #1915
  • fix(graph): only evaluate outbound edges from completed nodes by @giulio-leone in #1846
  • fix(openai): always use string content for tool messages by @giulio-leone in #1878
  • feat: widen openai dependency to support 2.x for litellm compatibility by @BV-Venky in #1793
  • fix: TypeError when serializing multimodal prompts with binary content in Graph/Swarm session persistence by @JackYPCOnline in #1870
  • fix: lowercase the python language in code snippet by @zastrowm in #1929
  • fix: openai responses api error handling by @Unshure in #1931


Full Changelog: v1.30.0...v1.31.0

v1.30.0

11 Mar 18:34
2da3f7c


What's Changed

  • feat: add "anthropic" cache strategy to bypass model ID check by @kevmyung in #1808
  • feat: serialize tool results as JSON when possible by @clareliguori in #1752
  • fix: summary manager using structured output by @pgrayy in #1805
  • feat(mcp): expose server instructions from InitializeResult on MCPClient by @ShotaroKataoka in #1814
  • fix: added LANGFUSE_BASE_URL check for additional attribute by @poshinchen in #1826
  • feat(session): add dirty flag to skip unnecessary agent state persistence by @Unshure in #1803
  • feat: add public tool_spec setter by @mkmeral in #1822
  • feat: add CancellationToken for graceful agent execution cancellation by @jgoyani1 in #1772
  • feat(session): optimize session manager initialization by @Unshure in #1829
  • fix(mistral): report usage metrics in streaming mode by @jackatorcflo in #1697
  • fix(openai_responses): use output_text for assistant messages in multi-turn conversations by @giulio-leone in #1851
  • feat(hooks): add resume flag to AfterInvocationEvent by @mkmeral in #1767
  • fix: place cache point on last user message instead of assistant by @kevmyung in #1821
  • feat(skills): add agent skills as a plugin by @mkmeral in #1755
  • feat(steering): move steering from experimental to production by @dbschmigelski in #1853
  • fix: break circular references so Agent cleanup doesn't hang with MCPClient by @dbschmigelski in #1830
  • fix: Set is_new_session = False at the end of each initialize* method by @mehtarac in #1859
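
The dirty-flag change in #1803 follows a common persistence pattern, sketched below with hypothetical names (this is not the SDK's actual session API):

```python
# Illustrative dirty-flag pattern for skipping unnecessary persistence
# (cf. #1803): write state only when something changed since the last save.
class AgentState:
    def __init__(self):
        self._data = {}
        self._dirty = False

    def set(self, key, value):
        # Only mark dirty on an actual change, so no-op writes stay cheap.
        if self._data.get(key) != value:
            self._data[key] = value
            self._dirty = True

    def persist(self, store):
        """Write state to the store only if it changed; return whether we wrote."""
        if not self._dirty:
            return False
        store.update(self._data)
        self._dirty = False
        return True
```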


Full Changelog: v1.29.0...v1.30.0

v1.29.0

04 Mar 21:13
31f1e64


What's Changed


Full Changelog: v1.28.0...v1.29.0

v1.28.0

25 Feb 19:32
37938da


What's Changed

  • fix: update region for agentcore in our new account by @afarntrog in #1715
  • fix: remove test that fails for python 3.14 by @Unshure in #1717
  • feat(hooks): support union types and list of types for add_hook by @Unshure in #1719
  • feat: make pyaudio an optional dependency by lazy loading by @mehtarac in #1731
  • feat(hooks): add Plugin Protocol for agent extensibility by @Unshure in #1733
  • feat: add plugins parameter to Agent by @Unshure in #1734
  • refactor(plugins): convert Plugin from Protocol to ABC by @Unshure in #1741
  • feat(steering): migrate SteeringHandler from HookProvider to Plugin by @Unshure in #1738
  • chore: switch to Sonnet 4.6 for Anthropic provider integ tests by @clareliguori in #1754
  • fix: rename init_plugin to init_agent by @Unshure in #1765
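
The plugin changes above (#1733, #1741, #1765) moved from a Protocol to an ABC with an init_agent method. A rough sketch of that shape, with names approximated from the PR titles rather than taken from the SDK's exact interface:

```python
# Illustrative sketch of a Plugin ABC (cf. #1733, #1741, #1765): an abstract
# base class whose init_agent hook concrete plugins must implement.
from abc import ABC, abstractmethod

class Plugin(ABC):
    @abstractmethod
    def init_agent(self, agent):
        """Attach this plugin's behavior to the given agent."""

class LoggingPlugin(Plugin):
    def init_agent(self, agent):
        return f"logging enabled for {agent}"
```

Unlike a Protocol, the ABC makes the contract explicit: instantiating a subclass that forgets init_agent raises TypeError at construction time.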

Full Changelog: v1.27.0...v1.28.0