[ Conference Paper | Models | Dataset ]
Yushi Huang, Ruihao Gong📧, Jing Liu, Yifu Ding, Chengtao Lv, Haotong Qin, Jun Zhang📧
(📧 denotes corresponding author.)
This is the official implementation of our paper QVGen. It is the first method to achieve quality comparable to full precision under 4-bit settings, significantly outperforming existing methods: for instance, our 3-bit CogVideoX-2B improves Dynamic Degree by +25.28 and Scene Consistency by +8.43 on VBench.
- Jan 31, 2026: 🔥 We release our Python code and checkpoints for QVGen presented in our paper. Have a try!
- Jan 23, 2026: 🌟 Our paper has been accepted by ICLR 2026! 🎉 Cheers!
Comparison of samples from 4-bit per-channel weight and per-token activation quantized CogVideoX-2B (upper) and Wan 14B (lower), across different methods.
Overview pipeline of the proposed QVGen. (a) The framework integrates auxiliary modules during quantization-aware training to mitigate large quantization errors; (b) these modules are then progressively shrunk and eliminated via a rank-decay schedule, so they add no inference overhead.
After cloning the repository, follow the steps below to train and run inference. We use Wan 1.3B as the running example; the same workflow applies to CogVideoX 2B.
Install dependencies with `pip install -r requirements.txt`. For quantization, we use 8x H100/H800/A100/A800 GPUs; if you have fewer, adjust the scripts accordingly (see the check below).
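As a quick sanity check before launching, you can verify your GPU count (a minimal Python sketch, not part of our scripts):

```python
# The provided quantization scripts assume 8 GPUs; check what is available
# and scale batch size / gradient accumulation in the scripts if you have fewer.
import torch

n_gpus = torch.cuda.device_count()
print(f"Detected {n_gpus} CUDA device(s)")
if n_gpus < 8:
    print("Fewer than 8 GPUs: adjust the launch scripts accordingly.")
```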
Download the pretrained Wan2.1 1.3B to `models/Wan2.1-T2V-1.3B-Diffusers`, then prepare the dataset:
```bash
# download and preprocess data
python prepare_dataset/download.py --output_directory dataset
sh script/data/prepare_dataset_wan_1-3b.sh
```

Note: please replace any `path/to/your/...` placeholders in the scripts with your local paths.
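To verify the download, the base model should load through the standard diffusers `WanPipeline` (a minimal sketch for illustration; this verification snippet is not part of the repo):

```python
# Sanity-check that the pretrained Wan2.1 1.3B weights load correctly.
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "models/Wan2.1-T2V-1.3B-Diffusers",  # local path from the step above
    torch_dtype=torch.bfloat16,
)
print(pipe.transformer.config)  # prints the Wan transformer configuration
```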
We also provide quantized checkpoints on Hugging Face:
| Model | #Bit |
|---|---|
| Wan 1.3B | W4A4 |
| CogVideoX-2B | W4A4 |
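They can be fetched programmatically via `huggingface_hub` (a minimal sketch; the repo id and local directory below are placeholders, substitute the ones from our Hugging Face page):

```python
# Download a quantized QVGen checkpoint from Hugging Face.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<org>/<qvgen-w4a4-wan-1.3b>",  # placeholder: use the actual repo id
    local_dir="checkpoints/w4a4_wan_1-3b",  # hypothetical local directory
)
```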
Below is an example training command. For more details, please refer to the paper.
```bash
sh script/train/w4a4_wan_1-3b.sh
```
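For context, W4A4 denotes 4-bit per-channel weight and 4-bit per-token activation quantization, the setting shown in the comparison above. The sketch below illustrates that granularity with symmetric fake quantization; it is illustrative only, not our QAT implementation:

```python
# Symmetric fake quantization: per-channel for weights, per-token for activations.
# Illustrates the W4A4 granularity used in the paper; not the actual training code.
import torch

def fake_quant(x: torch.Tensor, n_bits: int, dim: int) -> torch.Tensor:
    # Symmetric uniform quantizer; `dim` is the axis reduced to compute each scale.
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

w = torch.randn(4096, 4096)     # weight: (out_channels, in_channels)
a = torch.randn(2, 226, 4096)   # activation: (batch, tokens, hidden)

w_q = fake_quant(w, n_bits=4, dim=1)   # per output channel: reduce over in_channels
a_q = fake_quant(a, n_bits=4, dim=-1)  # per token: reduce over the hidden dim
```

The only difference between the two calls is the reduction axis: one scale per output channel for weights versus one scale per token for activations.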
Here is the corresponding inference command.

```bash
# You can also set the path to a pre-downloaded quantized checkpoint in the script.
sh script/inference/w4a4_wan_1-3b.sh
```

For evaluation, we recommend using our inference code and following the steps in VBench. You can also refer to our distributed inference scripts for video generation: `inference/ddp_cogvideox_t2v.py` and `inference/ddp_wan_t2v.py`.
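Both scripts follow the standard pattern of sharding prompts across ranks; the sketch below shows that pattern in isolation (the toy prompt list and the `print` stand in for the actual quantized pipeline call):

```python
# Shard prompts across ranks for distributed generation, mirroring the
# pattern used by the ddp_*_t2v.py scripts (an illustrative sketch only).
import torch
import torch.distributed as dist

def main():
    dist.init_process_group("nccl")
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    prompts = ["a cat playing piano", "a drone shot of a coastline"]  # toy list
    my_prompts = prompts[rank::world]  # round-robin shard: each rank gets a slice

    for prompt in my_prompts:
        # Replace this print with the quantized pipeline call in the real script.
        print(f"[rank {rank}] generating: {prompt!r}")

    dist.barrier()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launch with, e.g., `torchrun --nproc_per_node=8 sketch.py`.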
The codebase has not yet been fully cleaned for public release; we will continue to refine it in subsequent updates, including:
- Training and inference code for large models (e.g., Wan 14B, CogVideoX 5B).
- More quantized checkpoints.
- More efficient training code.
Our code is built on the open-source project finetrainers.
If you find QVGen useful, please cite our paper:
```bibtex
@inproceedings{huang2026qvgenpushinglimitquantized,
  title={QVGen: Pushing the Limit of Quantized Video Generative Models},
  author={Yushi Huang and Ruihao Gong and Jing Liu and Yifu Ding and Chengtao Lv and Haotong Qin and Jun Zhang},
  booktitle={International Conference on Learning Representations},
  year={2026},
  url={https://arxiv.org/abs/2505.11497},
}
```