
[torch_lib] Fix torchvision_roi_align signature mismatch for PyTorch 2.10+#2848

Open
6zaille wants to merge 2 commits into microsoft:main from 6zaille:main

Conversation


@6zaille 6zaille commented Mar 11, 2026

This PR updates the torchvision_roi_align signature in onnxscript to support 7 positional arguments.

In recent PyTorch versions (2.10 and newer), the Dynamo-based ONNX exporter has updated the way it flattens operators. Specifically, for roi_align, the output_size (previously a Sequence[int]) is now decomposed into two separate integer arguments: pooled_height and pooled_width.

Previously, the onnxscript implementation expected 6 arguments, leading to a TypeError: torchvision_roi_align() takes from 3 to 6 positional arguments but 7 were given during the ONNX translation phase.

Changes:

  • Updated torchvision_roi_align signature to accept pooled_height and pooled_width as individual positional arguments.
  • Added aligned as the 7th positional argument.
  • Updated the internal op.RoiAlign call to use these new parameters directly.
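The change described above can be sketched as follows. This is a minimal illustration of the argument mapping, not the actual onnxscript implementation: the parameter names follow the PR description, the real function forwards to `op.RoiAlign`, and the pre-2.10 calling convention shown in `legacy_call` is reconstructed from the description of the old 6-argument signature.

```python
from typing import Sequence, Tuple


def torchvision_roi_align(
    input,                   # feature map tensor
    rois,                    # region proposals
    spatial_scale: float = 1.0,
    pooled_height: int = 1,  # previously output_size[0]
    pooled_width: int = 1,   # previously output_size[1]
    sampling_ratio: int = -1,
    aligned: bool = False,   # now the 7th positional argument
) -> Tuple[int, int]:
    # The real implementation forwards these values to op.RoiAlign;
    # here we just return the pooled shape to show the mapping.
    return (pooled_height, pooled_width)


def legacy_call(input, rois, spatial_scale, output_size: Sequence[int],
                sampling_ratio: int = -1, aligned: bool = False):
    # Pre-2.10 callers passed output_size as a sequence; the Dynamo-based
    # exporter now flattens it into two separate positional ints.
    h, w = output_size
    return torchvision_roi_align(input, rois, spatial_scale, h, w,
                                 sampling_ratio, aligned)
```

With this shape, a 7-positional-argument call from the new exporter and a 6-argument legacy call both resolve to the same pooled dimensions.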

Testing:

Added a new test case in tests/function_libs/torch_lib/ops/vision_test.py that mocks a torchvision.ops.roi_align call and verifies successful ONNX export using the latest PyTorch Nightly.

Thanks for reading my PR 👍

@6zaille
Author

6zaille commented Mar 11, 2026

@microsoft-github-policy-service agree

@6zaille
Author

6zaille commented Mar 11, 2026

Hi! I've signed the CLA. This PR fixes the torchvision_roi_align signature mismatch reported in PyTorch issue #138133. Tests passed locally.

@justinchuby
Collaborator

This seems to be a duplicate of #2830. I would like to know in which version the schema changed, so we can define the op properly in a version-compatible way.

@6zaille
Author

6zaille commented Mar 11, 2026

Hi @justinchuby,
I checked #2830 and it indeed looks related.

Regarding the version: I encountered this issue using PyTorch Nightly (2.10.0.dev). It seems the dispatcher now flattens the output_size tuple into pooled_height and pooled_width for the Dynamo-based exporter.

If #2830 already covers the fix, feel free to close this.
I'm happy to help with testing or adjusting the implementation if needed to maintain backward compatibility!

@justinchuby
Collaborator

The most important thing you can provide is help determining when this change happened, by testing with older PyTorch versions. Thanks!
