Improve robustness of cebra model loading #292
Conversation
Just checked https://endoflife.date/numpy, and it seems that numpy <2 support actually ended a few months ago. So I guess we could also ignore that particular test and force numpy >= 2...
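A hypothetical sketch of that idea: rather than deleting the test, it could be gated on the installed numpy version (the test name and body here are placeholders, not code from this PR):

```python
import numpy as np
import pytest
from packaging import version

@pytest.mark.skipif(
    version.parse(np.__version__) < version.parse("2.0.0"),
    reason="requires numpy >= 2",
)
def test_numpy2_only_behavior():
    # Placeholder body; the real test would exercise numpy>=2-specific behavior.
    assert np.__version__.startswith("2")
```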
Although e.g. DeepLabCut still has a numpy <2 requirement: https://github.com/DeepLabCut/DeepLabCut/blob/85911cb83d315398ead65c1198e4991a73001834/setup.py#L68
MMathisLab left a comment:
🚀


This PR:

- Keeps `numpy<2` support for ~1 extra year (discussed with @MMathisLab today), given its widespread use in other packages used together with CEBRA.
- Re-enables `test_save_and_load`, which was previously skipped entirely due to a syntax issue. Running it again revealed several issues in the loading logic (see the round-trip sketch below).
- Improves `CEBRA.load`: instead of using three different backend choices, we now default to the (future-proof) `sklearn` backend. The state dict of the model is saved, and the model is always constructed from the state dict. This follows the recent change in torch (from 2.6.0) to discontinue unsafe loads via pickle (see the state-dict sketch below).
- Fixes the `cebra/registry.py` module: when the `@parametrize` decorator is used, the class attributes are now properly passed to the wrapped class. A test has been added for this functionality (see the decorator sketch below).
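A minimal sketch of the round trip that `test_save_and_load` exercises, assuming CEBRA's documented sklearn-style `save`/`load` API; the architecture name and hyperparameters are arbitrary placeholders, not the values from the actual test:

```python
import numpy as np
from cebra import CEBRA

X = np.random.normal(size=(1000, 30)).astype("float32")

# Fit a small model; hyperparameters are illustrative only.
model = CEBRA(model_architecture="offset10-model",
              batch_size=128,
              max_iterations=10)
model.fit(X)
model.save("cebra_model.pt")

# Loading should reconstruct the estimator from the saved state dict and
# reproduce the original embedding exactly.
loaded = CEBRA.load("cebra_model.pt")
assert np.allclose(model.transform(X), loaded.transform(X))
```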
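To illustrate why the state-dict approach is future proof: from torch 2.6, `torch.load` defaults to `weights_only=True`, which rejects pickled arbitrary objects but still loads plain tensors and state dicts. A generic torch sketch, not CEBRA internals:

```python
import torch

net = torch.nn.Linear(4, 2)

# Save only the parameters, not the pickled module object.
torch.save(net.state_dict(), "weights.pt")

# Reconstruct the module explicitly, then restore its parameters; this
# stays compatible with the weights_only=True default in torch >= 2.6.
restored = torch.nn.Linear(4, 2)
restored.load_state_dict(torch.load("weights.pt", weights_only=True))
```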
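And a simplified, hypothetical sketch of the bug class fixed in the registry: a parametrize-style decorator that generates registered subclasses must keep the original class attributes visible on the generated classes. This is an illustration of the general pattern, not CEBRA's actual `registry.py` code:

```python
REGISTRY = {}

def parametrize(name_template, **param_grid):
    """Register one subclass per parameter value (single-parameter sketch)."""
    def decorator(cls):
        (key, values), = param_grid.items()
        for value in values:
            # Subclassing the decorated class keeps its class attributes
            # available on the generated class; the swept parameter is
            # added as a new class attribute.
            name = name_template.format(**{key: value})
            REGISTRY[name] = type(name, (cls,), {key: value})
        return cls
    return decorator

@parametrize("model-offset{offset}", offset=[5, 10])
class Model:
    num_output = 8  # class attribute that must survive on the wrapped class

assert REGISTRY["model-offset5"].offset == 5
assert REGISTRY["model-offset5"].num_output == 8  # preserved, per the fix
```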