
Add ARM embedded platform compatibility to all operators #17457

Open
Ninja91 wants to merge 2 commits into pytorch:main from Ninja91:export-D93254992

Conversation


Ninja91 (Contributor) commented Feb 13, 2026

Summary:
Enable cortex_m operators to build for ARM embedded platforms (FVP simulation, on-device execution).

Changes:

  1. Added _ARM_EMBEDDED_PLATFORMS constant for ARM32 embedded targets
  2. Added compatible_with = _ARM_EMBEDDED_PLATFORMS to operator definitions
  3. Expanded OPERATORS list to include all 15 cortex_m operators (matching operators.yaml)

This requires the visibility fix in portable kernel utilities (parent diff) to allow
cortex_m operators to depend on kernel_ops_util, copy_ops_util, and padding_util.
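
Sketched in Starlark, the changes described above might look roughly like the following in `backends/cortex_m/ops/targets.bzl`. The constraint labels and operator names are taken from the commit messages; the `runtime.cxx_library` wrapper, source file names, and helper function are illustrative assumptions, not the actual definitions:

```starlark
# Constraint labels for ARM32 embedded targets (per the commit message).
_ARM_EMBEDDED_PLATFORMS = [
    "ovr_config//cpu:arm32-embedded",
    "ovr_config//cpu:arm32-embedded-fpu",
]

# All 15 cortex_m operators, matching operators.yaml (abbreviated here).
OPERATORS = [
    "quantize_per_tensor",
    "dequantize_per_tensor",
    "quantized_linear",
    "quantized_add",
    # ... remaining operators from operators.yaml ...
]

def define_operator_targets():
    for op in OPERATORS:
        runtime.cxx_library(
            name = "op_{}".format(op),
            srcs = ["op_{}.cpp".format(op)],  # hypothetical layout
            # Restrict each operator library to ARM embedded configs.
            compatible_with = _ARM_EMBEDDED_PLATFORMS,
        )
```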

Differential Revision: D93254992

Copilot AI review requested due to automatic review settings February 13, 2026 20:22
meta-cla bot added the CLA Signed label Feb 13, 2026

pytorch-bot bot commented Feb 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17457

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Cancelled Job

As of commit 806def5 with merge base 6e31609:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


meta-codesync bot commented Feb 13, 2026

@Ninja91 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D93254992.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI (Contributor) left a comment
Pull request overview

Enable Cortex-M operator targets to build under ARM32 embedded configurations by constraining operator build targets to embedded CPU configs and widening visibility of required portable CPU utilities so Cortex-M ops can depend on them.

Changes:

  • Extend visibility of selected portable CPU util libraries to allow //executorch/backends/cortex_m/... dependencies.
  • Introduce _ARM_EMBEDDED_PLATFORMS and apply compatible_with constraints to Cortex-M operator libraries and the aggregate cortex_m_operators target.
  • Expand the Cortex-M OPERATORS list to include all 15 operators declared in operators.yaml.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

| File | Description |
|------|-------------|
| `kernels/portable/cpu/util/targets.bzl` | Adds Cortex-M backend visibility to specific portable CPU util targets required by Cortex-M ops. |
| `backends/cortex_m/ops/targets.bzl` | Adds ARM embedded compatibility constraints and expands the set of Cortex-M operator targets emitted. |


Comment on lines 62 to 65:

```starlark
visibility = ["PUBLIC"],
platforms = CXX,
compatible_with = _ARM_EMBEDDED_PLATFORMS,
exported_deps = all_op_targets,
```
Copilot AI commented Feb 13, 2026:
Setting compatible_with = _ARM_EMBEDDED_PLATFORMS on cortex_m_operators (and on each op_* library) will make these targets incompatible with non-ARM embedded configurations. However, cortex_m_generated_lib(_headers) and the cortex_m unit tests depend on these targets and are not similarly constrained, which can cause target incompatibility/build failures in default (host) builds. Consider either propagating the same compatibility constraints to the generated-lib targets/tests, or narrowing the constraint to only the operators that truly require embedded-only deps (e.g., CMSIS-NN-based ops) so host builds/tests for pure-C++ ops still work.
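
A hedged sketch of the first option the reviewer suggests (propagating the same constraint to the downstream aggregate target; the target and dependency names here are assumptions based on the review comment, not the actual definitions):

```starlark
# If cortex_m_generated_lib depends on constrained operator targets,
# giving it the same compatible_with marks it incompatible in default
# (host) configurations instead of letting those builds fail.
runtime.cxx_library(
    name = "cortex_m_generated_lib",
    compatible_with = _ARM_EMBEDDED_PLATFORMS,
    exported_deps = [":cortex_m_operators"],
)
```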

Ninja91 added a commit to Ninja91/executorch that referenced this pull request Feb 14, 2026
Summary:

Enable cortex_m operators to build for ARM embedded platforms (FVP simulation, on-device execution).

## Context

This diff is required for D92456499 (use on_device_model.pte for CC pipeline offline eval). When the FVP benchmark runner builds for ARM embedded platforms, cortex_m operators must be compatible with those platforms.

## Changes

1. Added `_ARM_EMBEDDED_PLATFORMS` constant for ARM32 embedded targets (ovr_config//cpu:arm32-embedded, arm32-embedded-fpu)
2. Added `compatible_with = _ARM_EMBEDDED_PLATFORMS` to operator definitions
3. Expanded `OPERATORS` list to include all 15 cortex_m operators (matching operators.yaml)

## Operators Included

The cortex_m operators handle quantization for ARM Cortex-M targets:
- `quantize_per_tensor`, `dequantize_per_tensor` - Core quantization
- `quantized_linear`, `quantized_conv2d`, `quantized_depthwise_conv2d` - Linear/Conv layers
- `quantized_add`, `quantized_mul` - Element-wise operations
- `quantized_avg_pool2d`, `quantized_max_pool2d`, `quantized_transpose_conv2d` - Pooling/Transpose
- `softmax`, `transpose`, `pad`, `minimum`, `maximum` - Utility ops

Requires D93254993 (visibility fix) to allow cortex_m operators to depend on portable kernel utilities.

Differential Revision: D93254992
---

Summary:

Add cortex_m backends to visibility for portable kernel utilities that cortex_m operators depend on.

## Context

The cortex_m operators (`quantized_linear`, `quantized_conv2d`, `quantized_add`, etc.) require utilities from `//executorch/kernels/portable/cpu/util/` for kernel calculations, tensor operations, and padding. These utilities had restricted visibility that excluded `//executorch/backends/cortex_m/...`, causing build failures when building cortex_m operators for ARM embedded platforms.

This diff is part of a stack enabling D92456499 (use `on_device_model.pte` for CC pipeline offline eval).

## Changes

Updated visibility for three utility targets:
| Utility | Purpose |
|---------|---------|
| `kernel_ops_util` | Kernel dimension calculations for quantized ops |
| `copy_ops_util` | Tensor copy operations |
| `padding_util` | Padding calculations for pooling operators |
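
A sketch of what the visibility widening might look like for one of these targets in `kernels/portable/cpu/util/targets.bzl` (the `srcs` and the pre-existing visibility entries are illustrative assumptions; only the added `cortex_m` pattern comes from the commit message):

```starlark
runtime.cxx_library(
    name = "kernel_ops_util",
    srcs = ["kernel_ops_util.cpp"],  # hypothetical source layout
    visibility = [
        "//executorch/kernels/portable/...",
        # Newly added so cortex_m operators can depend on this utility.
        "//executorch/backends/cortex_m/...",
    ],
)
```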

Differential Revision: D93254993

Labels

CLA Signed, fb-exported, meta-exported