Conversation
Copilot reviewed 1 out of 1 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (1)
onnxscript/function_libs/torch_lib/ops/fft.py:121
- Verify that `op.Shape` returns a scalar value when using `start` and `end` to extract the dimension. If `op.Shape` returns a tensor, explicitly extract its value to ensure proper normalization scaling.
```python
scale = (op.CastLike(last_dim_size, self)) / op.CastLike(
```
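For context, here is a minimal sketch of the pattern the suppressed comment is pointing at. This is not the PR's code: `transformed`, `dimension`, `self`, and the denominator `total_size` are stand-ins for whatever the real function uses.

```python
# Sketch only: op.Shape(x, start=d, end=d + 1) yields a 1-D tensor of
# length 1, not a 0-D scalar, so squeezing it before the division keeps
# the normalization scaling's shape unambiguous.
from onnxscript import opset18 as op

last_dim_size = op.Shape(transformed, start=dimension, end=dimension + 1)
last_dim_scalar = op.Squeeze(last_dim_size)  # [1]-shaped tensor -> scalar
scale = op.Div(
    op.CastLike(last_dim_scalar, self),
    op.CastLike(total_size, self),  # total_size: hypothetical denominator
)
```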
```diff
 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir
```
Check notice (Code scanning / CodeQL): Unused import
Copilot Autofix (AI, 10 months ago):
To fix the issue, remove the unused import of `ir` from the `onnxscript` module. This cleans up the code and eliminates the unnecessary dependency. The change is on line 17, where the import statement is defined.
```diff
@@ -16,3 +16,3 @@

-from onnxscript import INT64, ir
+from onnxscript import INT64
 from onnxscript.function_libs.torch_lib.registration import torch_op
```
Codecov Report ✅ All modified and coverable lines are covered by tests. Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2209      +/-   ##
==========================================
- Coverage   74.36%   74.34%   -0.02%
==========================================
  Files         230      230
  Lines       29687    29688       +1
  Branches     3450     3450
==========================================
- Hits        22076    22073       -3
- Misses       6442     6447       +5
+ Partials     1169     1168       -1
```

☔ View full report in Codecov by Sentry.
Check warning (Code scanning / lintrunner): PYLINT/W0611 (unused import), on the same `from onnxscript import INT64, ir` hunk shown above.
Check warning (Code scanning / lintrunner): RUFF/F401 (unused import), same hunk.
```python
    * scale
)
transformed = _fftn_onnx_normalization(
    transformed,
```
Perhaps replace line 137 (`op.Shape(...)`) with `op.CastLike(last_dim_size, self)` and then remove `scale`? Would that yield the same or better results?
I thought `last_dim_size` was `op.Shape(transformed, start=dimension, end=dimension + 1)`? Let me try.
Oh, never mind, sorry, you're completely right that `op.Shape(transformed, start=dimension, end=dimension + 1)` is different between line 122 and line 137. But your code made me realize that, without modifying anything else, line 137 could perhaps be directly replaced with `last_dim_size`, just to save a call to `op.Shape`.
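A hypothetical before/after for that point; the line numbers refer to the PR's `fft.py`, and the variable names are assumed from the discussion rather than copied from the diff.

```python
# before (around line 137): a second Shape call recomputing a value that
# line 122 already produced
size = op.Shape(transformed, start=dimension, end=dimension + 1)

# after: reuse the earlier result and save one op.Shape node in the graph
size = last_dim_size
```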
bmehta001 left a comment:
Perhaps simplify the normalization by passing the dimension size you want directly into the normalization function. I think there is an underlying issue with my c2r implementation, though, because it should be able to support multiple axes. I can try to see if something like what is done in onnx/onnx#6016 will help.
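A hedged sketch of what that refactor might look like: the helper takes the size of the transformed dimension as an argument instead of re-deriving it with `op.Shape`. The signature and the `normalization` encoding (0 = backward, 1 = forward, 2 = ortho, mirroring `torch.fft` conventions) are assumptions, not the PR's actual code.

```python
from onnxscript import opset18 as op

def _fftn_onnx_normalization(transformed, normalization: int, dim_size):
    # dim_size is the (possibly symbolic) size of the transformed dimension,
    # computed once by the caller and passed in directly.
    n = op.CastLike(dim_size, transformed)
    if normalization == 1:    # "forward": scale by 1/n
        return op.Div(transformed, n)
    if normalization == 2:    # "ortho": scale by 1/sqrt(n)
        return op.Div(transformed, op.Sqrt(n))
    return transformed        # "backward": no scaling on the forward transform
```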
👍 Looks like they recreated the conjugate part. It's like doubling the work that was meant to be saved, but I guess that's the best option right now?
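To illustrate what "recreated the conjugate part" means, here is a runnable numpy sketch of a c2r inverse FFT that first rebuilds the redundant negative-frequency half from the Hermitian half-spectrum, which is exactly the duplicated work being discussed.

```python
import numpy as np

def irfft_via_full_spectrum(half: np.ndarray, n: int) -> np.ndarray:
    # half holds n // 2 + 1 bins; mirror-and-conjugate the interior bins
    # to restore the full conjugate-symmetric spectrum.
    if n % 2 == 0:
        tail = np.conj(half[-2:0:-1])  # skip the DC and Nyquist bins
    else:
        tail = np.conj(half[-1:0:-1])  # skip only the DC bin
    full = np.concatenate([half, tail])
    return np.fft.ifft(full).real      # imaginary part is ~0 by symmetry

x = np.random.randn(8)
assert np.allclose(irfft_via_full_spectrum(np.fft.rfft(x), 8),
                   np.fft.irfft(np.fft.rfft(x), 8))
```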
Fixed differently.
The numbers seem closer-ish for `fft_c2r`, but don't really match.
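One way to turn "closer-ish" into a concrete pass/fail is an explicit tolerance check against a reference implementation. The sketch below compares two independent c2r references to show the pattern; the shapes and tolerances are illustrative, not the PR's test values.

```python
import numpy as np
import torch

x = torch.randn(4, 9, dtype=torch.complex64)           # half-spectrum input
torch_out = torch.fft.irfft(x, n=16, dim=-1).numpy()   # torch c2r reference
numpy_out = np.fft.irfft(x.numpy(), n=16, axis=-1)     # independent reference
np.testing.assert_allclose(torch_out, numpy_out, rtol=1e-4, atol=1e-5)
```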