[torch_lib] Fix torchvision_roi_align signature mismatch for PyTorch 2.10+ #2848
6zaille wants to merge 2 commits into microsoft:main
Conversation
@microsoft-github-policy-service agree
Hi! I've signed the CLA. This PR fixes the torchvision_roi_align signature mismatch reported in PyTorch issue #138133. Tests pass locally.
This seems to be a duplicate of #2830. I would like to know in which version the schema changed, so we can define the op in a version-compatible way.
Hi @justinchuby, regarding the version: I encountered this issue with PyTorch Nightly (2.10.0.dev). It seems the dispatcher now flattens the output_size tuple into pooled_height and pooled_width for the Dynamo-based exporter. If #2830 already covers the fix, feel free to close this.
The most helpful information you could provide is to determine when this change happened, by testing with older PyTorch versions. Thanks!
This PR updates the torchvision_roi_align signature in onnxscript to accept 7 positional arguments.
In recent PyTorch versions (2.10 and newer), the Dynamo-based ONNX exporter has changed how it flattens operator arguments. Specifically, for roi_align, the output_size argument (previously a Sequence[int]) is now decomposed into two separate integer arguments: pooled_height and pooled_width.
Previously, the onnxscript implementation accepted at most 6 arguments, leading to a TypeError: torchvision_roi_align() takes from 3 to 6 positional arguments but 7 were given during the ONNX translation phase.
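The mismatch described above can be illustrated with a minimal, self-contained sketch (these are stand-in functions, not the actual onnxscript implementation; the parameter names follow the torchvision roi_align schema):

```python
def roi_align_old(input, rois, spatial_scale, output_size=(1, 1),
                  sampling_ratio=-1, aligned=False):
    """Old style: output_size arrives as a single (height, width) sequence."""
    pooled_height, pooled_width = output_size
    return pooled_height, pooled_width


def roi_align_new(input, rois, spatial_scale, pooled_height=1, pooled_width=1,
                  sampling_ratio=-1, aligned=False):
    """New style: the dispatcher flattens output_size into two ints (7 args)."""
    return pooled_height, pooled_width


# PyTorch 2.10+ passes 7 positional arguments (tensors replaced by strings here):
args = ("input", "rois", 0.5, 7, 7, 2, True)

try:
    roi_align_old(*args)
except TypeError as e:
    print("old signature:", e)  # "... takes from 3 to 6 positional arguments but 7 were given"

print("new signature:", roi_align_new(*args))  # prints: new signature: (7, 7)
```

The fix is therefore purely a signature change: the function body can keep working with the two integers it already derived from the tuple.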
Changes:
Testing:
Added a new test case in tests/function_libs/torch_lib/ops/vision_test.py that mocks a torchvision.ops.roi_align call and verifies successful ONNX export using the latest PyTorch Nightly.
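A hedged sketch of the shape such a test might take (the helper below is a hypothetical stand-in for the fixed torch_lib function, so the sketch runs without PyTorch or torchvision installed; the real test in vision_test.py exercises the actual export):

```python
import unittest
from unittest import mock


def translate_roi_align(input, rois, spatial_scale, pooled_height,
                        pooled_width, sampling_ratio, aligned):
    """Stand-in for the fixed function: accepts 7 positional arguments."""
    return {"pooled": (pooled_height, pooled_width), "aligned": aligned}


class RoiAlignSignatureTest(unittest.TestCase):
    def test_accepts_seven_positional_args(self):
        # PyTorch 2.10+ flattens output_size into pooled_height / pooled_width,
        # so the call site passes 7 positional arguments.
        result = translate_roi_align(mock.Mock(), mock.Mock(), 0.25, 7, 7, 2, True)
        self.assertEqual(result["pooled"], (7, 7))


if __name__ == "__main__":
    unittest.main()
```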
Thanks for reading my PR 👍