
Conversation

@entrpn (Collaborator) commented Feb 5, 2026

Original author @eitanporat in #2728

Description

Checkpoint conversion mapping from HF (Hugging Face).

Tests

  • Checked that the weights are correct by comparing forward passes of the model.
  • Converted HF weights to an Orbax checkpoint.
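
The forward-pass check described above can be sketched roughly as follows. This is a minimal illustration only: in the real test, `hf_logits` and `maxtext_logits` would come from running the Hugging Face model and the converted MaxText/Orbax checkpoint on the same inputs; here plain lists stand in for those outputs, and `compare_logits` is a hypothetical helper, not MaxText code.

```python
def compare_logits(hf_logits, maxtext_logits, atol=1e-3):
  """Return (max absolute difference, whether outputs agree within atol).

  A stand-in for comparing forward passes of the original HF model and the
  model restored from the converted checkpoint.
  """
  assert len(hf_logits) == len(maxtext_logits), "output length mismatch"
  max_diff = max(abs(a - b) for a, b in zip(hf_logits, maxtext_logits))
  return max_diff, max_diff <= atol

# Hypothetical usage with dummy values standing in for real model outputs:
reference = [0.10, 0.20, 0.30]
converted = [0.10, 0.2002, 0.2999]
diff, ok = compare_logits(reference, converted)
```

With a tolerance like `atol=1e-3`, small numeric drift from dtype or layout differences is accepted while a wrong weight mapping (which typically produces large deviations) still fails the check.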

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@codecov codecov bot commented Feb 5, 2026

Codecov Report

❌ Patch coverage is 0% with 195 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines missing |
| --- | --- | --- |
| ...xText/utils/ckpt_conversion/utils/param_mapping.py | 0.00% | 185 ⚠️ |
| src/MaxText/layers/encoders.py | 0.00% | 7 ⚠️ |
| src/MaxText/layers/models.py | 0.00% | 2 ⚠️ |
| src/MaxText/utils/ckpt_conversion/utils/utils.py | 0.00% | 1 ⚠️ |


@hengtaoguo (Collaborator) left a comment


I wonder if you have tried the full checkpoint conversion? Are all layers filled with weights? I'm not sure if it's ready to decode yet; it seems we are pending two more PRs.

```diff
 if self.config.use_multimodal and encoder_images is not None:
-  image_embeddings = self.vision_encoder(input_images=encoder_images, deterministic=not enable_dropout)
+  # qwen3-omni-30b-a3b returns deep features from the vision encoder.
+  image_embeddings, _ = self.vision_encoder(input_images=encoder_images, deterministic=not enable_dropout)
```
Collaborator:

Since you changed the output format of the encoder, could you verify that all other references have been updated too?

Collaborator:

Should we update the number of returned variables wherever vision_encoder is called, such as https://github.com/AI-Hypercomputer/maxtext/blob/main/src/MaxText/layers/models.py#L462?

Collaborator:

Also, shouldn't the returned deep_feats be assigned to a variable rather than discarded as _?

@entrpn (Collaborator, Author):

@aireenmei it cannot be used right now; it requires the next PR in our list, #2729, so I'm conforming to that order.

```diff
-embeddings = encoder(input_images, deterministic=deterministic)
+encoder_output = encoder(input_images, deterministic=deterministic)
+
+deep_feats = None
```
@entrpn (Collaborator, Author):

@hengtaoguo instead of modifying all references, this is how it's done: return both encoder_output and deep_feats, with deep_feats set to None for the other models.
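
The convention described here can be sketched as follows. This is an illustrative sketch, not the actual MaxText code: `run_vision_encoder` and `has_deep_features` are hypothetical names; only `encoder_output` and `deep_feats` come from the diff above.

```python
# Sketch of the two-value return convention: every caller receives
# (encoder_output, deep_feats), and models without deep features
# simply return deep_feats = None.
def run_vision_encoder(encoder, input_images, deterministic=True, has_deep_features=False):
  if has_deep_features:
    # e.g. qwen3-omni-30b-a3b returns deep features alongside the embeddings
    encoder_output, deep_feats = encoder(input_images, deterministic=deterministic)
  else:
    encoder_output = encoder(input_images, deterministic=deterministic)
    deep_feats = None
  return encoder_output, deep_feats

# Hypothetical usage with a stand-in encoder:
plain_encoder = lambda imgs, deterministic: [x * 2 for x in imgs]
out, feats = run_vision_encoder(plain_encoder, [1, 2, 3])  # feats is None
```

The benefit is that existing call sites only need to unpack one extra value (and can ignore it), rather than branching on the model type.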

Co-authored-by: Eitan Porat <eporat@lightricks.com>

3 participants