build: update torchao requirement from >=0.12.0,<0.16.0 to >=0.12.0,<0.18.0 (#617)
dependabot[bot] wants to merge 1 commit into main
Conversation
Updates the requirements on [torchao](https://github.com/pytorch/ao) to permit the latest version.

- [Release notes](https://github.com/pytorch/ao/releases)
- [Commits](pytorch/ao@v0.12.0...v0.17.0)

---
updated-dependencies:
- dependency-name: torchao
  dependency-version: 0.17.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 588052f.
```diff
 "whisper-s2t==1.3.1",
 "hqq==0.2.7.post1",
-"torchao>=0.12.0,<0.16.0", # 0.16.0 breaks diffusers 0.36.0, torch+torch: https://github.com/pytorch/ao/issues/2919#issue-3375688762
+"torchao>=0.12.0,<0.18.0", # 0.16.0 breaks diffusers 0.36.0, torch+torch: https://github.com/pytorch/ao/issues/2919#issue-3375688762
```
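If 0.16.0 is in fact still incompatible, one alternative (a sketch, not what this PR does) is to widen the upper bound while excluding just that release, which PEP 440 version specifiers support via `!=`. Note this skips only the exact 0.16.0 version; a hypothetical 0.16.1 patch release would still be admitted:

```toml
# pyproject.toml (sketch): keep the wider range but exclude the
# release the inline comment flags as breaking diffusers 0.36.0
dependencies = [
  "torchao>=0.12.0,!=0.16.0,<0.18.0",
]
```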
Version bump allows known-broken torchao version
Medium Severity
The upper bound for torchao was raised from <0.16.0 to <0.18.0, but the inline comment on the same line explicitly states that 0.16.0 breaks diffusers 0.36.0. Since the project's diffusers>=0.21.4 constraint still permits installing 0.36.0, the incompatibility the old bound guarded against is now re-exposed. Either the incompatibility has since been resolved (in which case the comment is misleading and should be updated), or the constraint genuinely protected against a real breakage that this bump allows again.
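The reviewer's point can be illustrated with a minimal sketch. This uses a simplified tuple comparison that handles only plain `X.Y.Z` versions (no pre/post-release tags), not the full PEP 440 logic pip actually applies, and the bounds are taken from the diff above:

```python
def parse(version: str) -> tuple:
    """Split a dotted version string like '0.16.0' into an int tuple."""
    return tuple(int(part) for part in version.split("."))

def allowed(version: str, lower: str = "0.12.0", upper: str = "0.18.0") -> bool:
    """True if `version` satisfies the >=lower,<upper range."""
    return parse(lower) <= parse(version) < parse(upper)

# The widened range re-admits 0.16.0, the release the comment flags as broken:
print(allowed("0.16.0"))  # True
print(allowed("0.18.0"))  # False (still excluded by the new upper bound)
```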


Updates the requirements on torchao to permit the latest version.
Release notes
Sourced from torchao's releases.
... (truncated)
Commits
- 02105d4 [mxfp8 training] add cutedsl kernel for mxfp8 quantation along dim0 (#4156)
- d17c61b clean up unused rocm references in test_training.py (#4170)
- 136cacb Remove tensor parallel test for v1 of Int8DynamicActivationInt8WeightConfig (...
- 8fca033 [xpu][test] Skip WIP config for Intel GPU in test_safetensors_support.py and ...
- 6a2f643 Fix rocm CI (#4167)
- a927712 Move bitpacking.py to prototype and add uintx_utils.py (#4152)
- 9ea1e67 Skip test_fsdp2 if PyTorch version is 2.11.0 or higher (#4168)
- 3330d29 [reland][xpu] INT8 quantization on Intel XPU (#3782)
- ac0b820 Fix test_sparse_api failures for builds without hipSPARSELt (#4125) (#4125)
- 1f90b4d Delete deprecated PackedLinearInt8DynamicActivationIntxWeightLayout and relat...

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)