[PT] Bump PyTorch to 2.10.0 (#3852)
Pull request overview
Updates the repository’s PyTorch/TorchVision/TorchAO versions and aligns tests, golden FX artifacts, and example requirements with the new dependency stack (while pinning older PyTorch for the GPTQModel integration due to GPU capability constraints).
Changes:
- Bump core PyTorch-related constraints and example requirements (torch, torchvision, torchao) and refresh related test reference metrics.
- Update Swin V2 FX golden graphs / metatype references to match the new FX node naming.
- Apply PTQ test workarounds for CI stability (DataLoader num_workers=0, a torchao device util workaround, and stricter batch_norm eps validation in the function hook wrapper).
Reviewed changes
Copilot reviewed 37 out of 39 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| tests/torch/data/fx/swin_v2_t.dot | Refresh FX golden graph node names for Swin V2 T. |
| tests/torch/data/fx/reference_metatypes/swin_v2_t.json | Update reference metatypes mapping to match renamed FX nodes. |
| tests/torch/data/fx/quantized/swin_v2_t.dot | Refresh quantized FX golden graph for Swin V2 T. |
| tests/torch/data/fx/post_quantization_compressed/swin_v2_t.dot | Refresh post-quantization-compressed FX golden graph for Swin V2 T. |
| tests/torch/data/fx/dynamic_shapes/post_quantization_compressed/swin_v2_t.dot | Refresh dynamic-shapes FX golden graph (node renames/renumbers). |
| tests/post_training/test_quantize_conformance.py | Add torchao utils import + monkeypatch for device utility behavior. |
| tests/post_training/pipelines/image_classification_base.py | Set DataLoader num_workers=0 for calibration/validation to avoid shm issues in CI. |
| tests/post_training/experimental/sparsify_activations/pipelines.py | Set DataLoader num_workers=0 for calibration subset loader. |
| tests/post_training/data/ptq_reference_data.yaml | Update a reference metric value to match new dependency behavior. |
| tests/integration/gptq_model/requirements_extra.txt | Remove explicit transformers dependency from extra requirements (keep gptqmodel pin). |
| tests/integration/gptq_model/requirements.txt | Pin torch/transformers/pytest versions for GPTQ integration environment. |
| tests/cross_fw/examples/example_scope.json | Update expected perplexity metric value. |
| src/nncf/torch/function_hook/handle_inner_functions.py | Add explicit validation for batch_norm(eps) to reject non-positive values. |
| examples/quantization_aware_training/torch/resnet18/requirements.txt | Bump torch/torchvision for the example. |
| examples/quantization_aware_training/torch/anomalib/requirements.txt | Bump torch for the example. |
| examples/pruning/torch/resnet18/requirements.txt | Bump torch/torchvision for the example. |
| examples/post_training_quantization/torch_fx/resnet18/requirements.txt | Bump torch/torchvision/torchao for the FX PTQ example. |
| examples/post_training_quantization/torch/ssd300_vgg16/requirements.txt | Bump torch/torchvision for the example. |
| examples/post_training_quantization/torch/mobilenet_v2/requirements.txt | Bump torch/torchvision for the example. |
| examples/post_training_quantization/openvino/yolov8_quantize_with_accuracy_control/requirements.txt | Bump torch for the OpenVINO YOLOv8 example. |
| examples/post_training_quantization/openvino/yolo26/requirements.txt | Bump ultralytics + torch for the YOLO26 example. |
| examples/post_training_quantization/onnx/yolov8_quantize_with_accuracy_control/requirements.txt | Bump torch for the ONNX YOLOv8 example. |
| examples/post_training_quantization/onnx/mobilenet_v2/requirements.txt | Bump torch/torchvision for the ONNX MobileNetV2 example. |
| examples/llm_compression/torch_fx/tiny_llama/requirements.txt | Bump torch/torchvision/torchao for the Torch FX LLM example. |
| examples/llm_compression/torch/gptq_model_convertor/requirements_extra.txt | Remove explicit transformers dependency from extra requirements (keep gptqmodel pin). |
| examples/llm_compression/torch/gptq_model_convertor/requirements.txt | Keep torch==2.9.0 for GPTQModel convertor; pin transformers with TODO note. |
| examples/llm_compression/torch/downstream_qat_with_nls/requirements.txt | Bump torch/torchao for the downstream QAT example. |
| examples/llm_compression/torch/distillation_qat_with_lora/requirements.txt | Bump torch/torchao for the distillation QAT example. |
| examples/llm_compression/openvino/tiny_llama_synthetic_data/requirements.txt | Bump torch for the OpenVINO LLM synthetic data example. |
| examples/llm_compression/openvino/tiny_llama_find_hyperparams/requirements.txt | Bump torch for the OpenVINO LLM hyperparam search example. |
| examples/llm_compression/openvino/tiny_llama/requirements.txt | Bump torch for the OpenVINO TinyLlama example. |
| examples/llm_compression/openvino/smollm2_360m_fp8/requirements.txt | Bump torch for the OpenVINO SmolLM2 fp8 example. |
| examples/llm_compression/openvino/smollm2_360m_codebook/requirements.txt | Bump torch/torchvision for the OpenVINO SmolLM2 codebook example. |
| examples/llm_compression/openvino/smollm2_360m_adaptive_codebook/requirements.txt | Bump torch/torchvision for the OpenVINO SmolLM2 adaptive codebook example. |
| examples/llm_compression/onnx/tiny_llama_scale_estimation/requirements.txt | Bump torch for the ONNX LLM scale estimation example. |
| examples/llm_compression/onnx/tiny_llama/requirements.txt | Bump torch for the ONNX TinyLlama example. |
| docs/Installation.md | Update documented tested PyTorch version for develop. |
| constraints.txt | Bump constrained torch/torchvision/torchao versions. |
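The stricter batch_norm eps check described for handle_inner_functions.py can be sketched as follows. The helper name is hypothetical and only illustrates the kind of validation the hook wrapper adds; PyTorch's batch_norm requires a positive eps for numerical stability:

```python
def validate_batch_norm_eps(eps: float) -> float:
    """Reject non-positive eps before dispatching to batch_norm.

    A non-positive eps can yield division by zero (or a negative value
    under the square root) in the normalization y = (x - mean) / sqrt(var + eps).
    """
    if eps <= 0:
        raise ValueError(f"batch_norm expects a positive eps, got {eps}")
    return eps
```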
Pull request overview
Copilot reviewed 34 out of 36 changed files in this pull request and generated no new comments.
Changes
- torch==2.10.0, torchao==0.16.0 and torchvision==0.25.0
- torch==2.9.0 and gptqmodel<5.7 for the gptqmodel example, because of the error: Feature '.m16n8k16' requires .target sm_80 or higher (sm_80 is not supported by T4)
- ultralytics==8.4.21 for the yolo26 example

Tests
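The sm_80 constraint above comes from the GPU's compute capability: the `.m16n8k16` PTX instruction needs Ampere (sm_80) or newer, while the CI T4 is Turing (sm_75). A minimal sketch of the capability gate behind the pin; in a real environment the tuple would come from `torch.cuda.get_device_capability()`, and the helper name is hypothetical:

```python
def needs_old_torch_pin(capability: tuple[int, int]) -> bool:
    """Return True when the GPU predates sm_80 (Ampere) and so cannot
    run kernels built around the '.m16n8k16' PTX mma instruction."""
    # Tuple comparison: (7, 5) < (8, 0) -> True
    return capability < (8, 0)

# T4 (Turing) reports sm_75, so the torch==2.9.0 pin applies there.
print(needs_old_torch_pin((7, 5)))  # True  -> keep torch==2.9.0
print(needs_old_torch_pin((8, 0)))  # False -> torch==2.10.0 is fine
```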
https://github.com/openvinotoolkit/nncf/actions/runs/22961284303 - one test fails, same as in nightly
https://github.com/openvinotoolkit/nncf/actions/runs/22961310545
PTQ-834