
feat: widen openai dependency to support 2.x for litellm compatibility#1793

Open
BV-Venky wants to merge 1 commit into strands-agents:main from BV-Venky:feat/openai-2x-support

Conversation


@BV-Venky BV-Venky commented Mar 1, 2026

Description

Users who need LiteLLM > 1.80.10 (e.g., for SAP Generative AI Hub support) cannot install strands-agents[litellm]: LiteLLM's SAP model support requires openai>=2.8.0, while strands currently pins openai<1.110.0 for the litellm extra and openai<2.0.0 for the openai and sagemaker extras.

This widens the openai version upper bound in three optional dependency groups:

  • litellm: openai>=1.68.0,<1.110.0 → openai>=1.68.0,<3.0.0
  • openai: openai>=1.68.0,<2.0.0 → openai>=1.68.0,<3.0.0
  • sagemaker: openai>=1.68.0,<2.0.0 → openai>=1.68.0,<3.0.0
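In pyproject.toml terms, the widened bounds would look roughly like this (a sketch: other entries in each optional-dependency group are omitted, and the actual group layout in strands' pyproject.toml may differ):

```toml
# Sketch of the widened optional-dependency groups.
[project.optional-dependencies]
litellm   = ["openai>=1.68.0,<3.0.0"]  # was <1.110.0
openai    = ["openai>=1.68.0,<3.0.0"]  # was <2.0.0
sagemaker = ["openai>=1.68.0,<3.0.0"]  # was <2.0.0
```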

The only breaking change in openai 2.0 (ResponseFunctionToolCallOutputItem.output type change) affects the Responses API, which strands does not use. All existing unit tests pass with openai 2.24.0.

Related Issues

Resolves #1672

Type of Change

New feature

Testing

  • Installed openai 2.24.0 in hatch test environment

  • All 2000 unit tests pass with openai 2.24.0

  • Verified LiteLLMModel instantiation works with openai 2.24.0 present

  • Reviewed openai CHANGELOG from 1.109.0 to 2.24.0; the only breaking change (ResponseFunctionToolCallOutputItem.output type) does not affect strands (uses Chat Completions API, not Responses API)

  • I ran `hatch run prepare`

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

@agent-of-mkmeral

🔴 Adversarial Testing Report for PR #1793

Adversarial Testing Result: ✅ PASS, no reproducible issues found

Scope: openai version range widening (>=1.68.0,<3.0.0) impact on:

  • OpenAI model provider
  • LiteLLM model provider
  • SageMaker model provider
  • Dependency compatibility

Tests run: 131 existing + 18 adversarial tests
Tests passing: All
Tests failing (findings): 0


🔍 Key Verification Points

1. OpenAI 2.0 Breaking Change Analysis

The only breaking change in openai 2.0.0 (changelog):

ResponseFunctionToolCallOutputItem.output and ResponseCustomToolCallOutput.output now return string | Array<ResponseInputText | ResponseInputImage | ResponseInputFile> instead of string only.

Verification: ✅ Strands is NOT affected

Strands uses the Chat Completions API (chat.completions.create), NOT the Responses API (responses.create). The breaking change only affects the Responses API.

```python
# Verified in src/strands/models/openai.py
async with openai.AsyncOpenAI(**self.client_args) as client:
    response = await client.chat.completions.create(**request)  # ✅ Chat Completions API
```
2. API Surface Compatibility

| API Feature | Strands Usage | openai 2.x Status |
| --- | --- | --- |
| AsyncOpenAI | Context manager | ✅ Works |
| chat.completions.create | Streaming | ✅ Works |
| beta.chat.completions.parse | Structured output | ✅ Works |
| BadRequestError | Exception handling | ✅ Exists |
| RateLimitError | Throttle handling | ✅ Exists |
| ParsedChatCompletion | Type import | ✅ Importable |
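The importability claims in the table above can be probed without assuming openai is installed. The sketch below uses a hypothetical `probe` helper (not strands code); the names mirror the table:

```python
# Probe whether the API surface strands relies on is present in the
# installed openai package. Degrades gracefully if openai is missing.
import importlib


def probe(module: str, attr: str) -> bool:
    """Return True if `attr` is importable from `module`, False otherwise."""
    try:
        mod = importlib.import_module(module)
    except ImportError:
        return False
    return hasattr(mod, attr)


SURFACE = {
    "AsyncOpenAI": ("openai", "AsyncOpenAI"),
    "BadRequestError": ("openai", "BadRequestError"),
    "RateLimitError": ("openai", "RateLimitError"),
    "ParsedChatCompletion": ("openai.types.chat", "ParsedChatCompletion"),
}


def check_surface() -> dict[str, bool]:
    """Map each required name to whether it is importable right now."""
    return {name: probe(mod, attr) for name, (mod, attr) in SURFACE.items()}
```

Running `check_surface()` against both openai 1.x and 2.x would confirm the surface is stable across the widened range.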
3. Dependency Compatibility Matrix

| Dependency | openai 2.x Requires | strands Requires | Resolution |
| --- | --- | --- | --- |
| pydantic | >=1.9.0,<3.0.0 | >=2.4.0,<3.0.0 | 2.4.0-2.x |
| httpx | >=0.23.0,<1.0.0 | (via openai) | ✅ Compatible |
| anyio | >=3.5.0,<5.0.0 | (via openai) | ✅ Compatible |
| jiter | >=0.4.0,<1.0.0 | (via openai) | ✅ Compatible |
| typing-extensions | >=4.11,<5.0.0 | >=4.13.2,<5.0.0 | 4.13.2-4.x |
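The resolution column can be sanity-checked with a minimal specifier parser. This is a stdlib-only sketch: real resolvers use `packaging.specifiers`, and the hypothetical `satisfies` helper below only understands `>=` and `<` clauses:

```python
# Minimal version-specifier check for ">=X,<Y"-style ranges (sketch only).

def _v(s: str) -> tuple[int, ...]:
    """Parse a dotted numeric version into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))


def satisfies(version: str, spec: str) -> bool:
    """Check `version` against comma-separated >= / < clauses."""
    for clause in spec.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            if _v(version) < _v(clause[2:]):
                return False
        elif clause.startswith("<"):
            if _v(version) >= _v(clause[1:]):
                return False
    return True


# pydantic 2.4.0 satisfies both the openai-side and strands-side constraints
assert satisfies("2.4.0", ">=1.9.0,<3.0.0") and satisfies("2.4.0", ">=2.4.0,<3.0.0")
# openai 2.26.0 falls inside the widened strands range, 3.0.0 does not
assert satisfies("2.26.0", ">=1.68.0,<3.0.0")
assert not satisfies("3.0.0", ">=1.68.0,<3.0.0")
```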
4. Test Results with openai 2.26.0

```
$ pip show openai | grep Version
Version: 2.26.0

$ pytest tests/strands/models/test_openai.py tests/strands/models/test_litellm.py tests/strands/models/test_sagemaker.py
============================= 131 passed in 3.01s ==============================
```

Breakdown:

  • test_openai.py: 72 passed ✅
  • test_litellm.py: 41 passed ✅
  • test_sagemaker.py: 18 passed ✅

⚠️ Observations (Not Blockers)

1. Version Range Allows Both 1.x and 2.x

The range >=1.68.0,<3.0.0 allows:

  • Users who want to stay on openai 1.x (e.g., 1.109.1)
  • Users who need openai 2.x for LiteLLM + SAP GenAI Hub

This is by design per the PR description. Users can pin openai to 1.x if they prefer.

2. Future openai 3.x

The upper bound <3.0.0 protects against future breaking changes in openai 3.x. This is good defensive practice.


✅ Conclusion

The changes survived adversarial testing. No reproducible issues found.

The PR correctly:

  1. Widens version range for openai to support 2.x
  2. Does not introduce any breaking changes for strands users
  3. Enables LiteLLM users to use SAP Generative AI Hub (which requires openai>=2.8.0)
  4. Keeps all 131 existing tests passing with openai 2.26.0

Recommendation: ✅ Safe to merge

@codecov

codecov bot commented Mar 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.



Successfully merging this pull request may close these issues.

[FEATURE] Support for LiteLLM > 1.80.10 by updating strands-agents[litellm] to use openai>=2.8.0
