---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-to-video
---
# Wan2.2

<p align="center">
    <img src="assets/logo.png" width="400"/>
</p>

<p align="center">
    💜 <a href="https://wan.video"><b>Wan</b></a> &nbsp;&nbsp; | &nbsp;&nbsp; 🖥️ <a href="https://github.com/Wan-Video/Wan2.2">GitHub</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📑 <a href="https://arxiv.org/abs/2503.20314">Technical Report</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📑 <a href="https://wan.video/welcome?spm=a2ty_o02.30011076.0.0.6c9ee41eCcluqg">Blog</a> &nbsp;&nbsp; | &nbsp;&nbsp; 💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📖 <a href="https://discord.gg/AKNgpMK4Yj">Discord</a>
    <br>

-----

[**Wan: Open and Advanced Large-Scale Video Generative Models**](https://arxiv.org/abs/2503.20314) <br>


We are excited to introduce **Wan2.2**, a major upgrade to our foundational video models. With **Wan2.2**, we have focused on incorporating the following innovations:

- 👍 **Effective MoE Architecture**: Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into video diffusion models. By separating the denoising process across timesteps with specialized, powerful expert models, it enlarges the overall model capacity while keeping the computational cost unchanged.

- 👍 **Cinematic-level Aesthetics**: Wan2.2 incorporates meticulously curated aesthetic data, complete with detailed labels for lighting, composition, contrast, color tone, and more. This allows for more precise and controllable cinematic style generation, facilitating the creation of videos with customizable aesthetic preferences.

- 👍 **Complex Motion Generation**: Compared to Wan2.1, Wan2.2 is trained on significantly more data, with +65.6% more images and +83.2% more videos. This expansion notably enhances the model's generalization across multiple dimensions such as motion, semantics, and aesthetics, achieving top performance among both open-source and closed-source models.

- 👍 **Efficient High-Definition Hybrid TI2V**: Wan2.2 open-sources a 5B model built with our advanced Wan2.2-VAE, which achieves a compression ratio of **16×16×4**. This model supports both text-to-video and image-to-video generation at 720P resolution with 24fps and can also run on consumer-grade graphics cards such as the RTX 4090. It is one of the fastest **720P@24fps** models currently available, capable of serving both the industrial and academic sectors simultaneously.

This repository contains our TI2V-5B model, built with the advanced Wan2.2-VAE that achieves a compression ratio of 16×16×4. This model supports both text-to-video and image-to-video generation at 720P resolution with 24fps and can run on a single consumer-grade GPU such as the RTX 4090. It is one of the fastest 720P@24fps models available, meeting the needs of both industrial applications and academic research.


## Video Demos

<div align="center">
    <video width="80%" controls>
        <source src="https://cloud.video.taobao.com/vod/4szTT1B0LqXvJzmuEURfGRA-nllnqN_G2AT0ZWkQXoQ.mp4" type="video/mp4">
        Your browser does not support the video tag.
    </video>
</div>


## 🔥 Latest News!!

* Jul 28, 2025: 👋 We've released the inference code and model weights of **Wan2.2**.

## Community Works
If your research or project builds upon [**Wan2.1**](https://github.com/Wan-Video/Wan2.1) or Wan2.2, we welcome you to share it with us so we can highlight it for the broader community.


## 📑 Todo List
- Wan2.2 Text-to-Video
    - [x] Multi-GPU Inference code of the A14B and 14B models
    - [x] Checkpoints of the A14B and 14B models
    - [x] ComfyUI integration
    - [x] Diffusers integration
- Wan2.2 Image-to-Video
    - [x] Multi-GPU Inference code of the A14B model
    - [x] Checkpoints of the A14B model
    - [x] ComfyUI integration
    - [x] Diffusers integration
- Wan2.2 Text-Image-to-Video
    - [x] Multi-GPU Inference code of the 5B model
    - [x] Checkpoints of the 5B model
    - [x] ComfyUI integration
    - [x] Diffusers integration

## Run Wan2.2

#### Installation
Clone the repo:
```sh
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
```

Install dependencies:
```sh
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
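
If you want to verify the PyTorch requirement before downloading weights, a quick check along these lines works (a minimal sketch, not part of the official setup):

```py
# Minimal environment check: confirms torch >= 2.4.0 as required above.
import torch

major, minor = map(int, torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (2, 4), f"Wan2.2 requires torch >= 2.4.0, found {torch.__version__}"
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```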


#### Model Download


| Models | Download Links | Description |
|--------|----------------|-------------|
| T2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-T2V-A14B) | Text-to-Video MoE model, supports 480P & 720P |
| I2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-I2V-A14B) | Image-to-Video MoE model, supports 480P & 720P |
| TI2V-5B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-TI2V-5B) | High-compression VAE, T2V+I2V, supports 720P |


> 💡Note:
> The TI2V-5B model supports 720P video generation at **24 FPS**.


Download models using huggingface-cli:
```sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-TI2V-5B --local-dir ./Wan2.2-TI2V-5B
```

Download models using modelscope-cli:
```sh
pip install modelscope
modelscope download Wan-AI/Wan2.2-TI2V-5B --local_dir ./Wan2.2-TI2V-5B
```
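
You can also fetch the checkpoint programmatically with `huggingface_hub`, equivalent to the CLI command above:

```py
# Programmatic equivalent of the huggingface-cli command above: download
# the full TI2V-5B checkpoint into ./Wan2.2-TI2V-5B.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Wan-AI/Wan2.2-TI2V-5B",
    local_dir="./Wan2.2-TI2V-5B",
)
```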

#### Run Text-Image-to-Video Generation

This repository provides the `Wan2.2-TI2V-5B` Text-Image-to-Video model, which supports video generation at 720P resolution.


- Single-GPU Text-to-Video inference
```sh
python generate.py --task ti2v-5B --size 1280*704 --ckpt_dir ./Wan2.2-TI2V-5B --offload_model True --convert_model_dtype --t5_cpu --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage"
```

> 💡Unlike other tasks, the 720P resolution of the Text-Image-to-Video task is `1280*704` or `704*1280`.

> This command can run on a GPU with at least 24GB VRAM (e.g., an RTX 4090).

> 💡If you are running on a GPU with at least 80GB VRAM, you can remove the `--offload_model True`, `--convert_model_dtype` and `--t5_cpu` options to speed up execution.


- Single-GPU Image-to-Video inference
```sh
python generate.py --task ti2v-5B --size 1280*704 --ckpt_dir ./Wan2.2-TI2V-5B --offload_model True --convert_model_dtype --t5_cpu --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```

> 💡If the `--image` parameter is provided, the model performs Image-to-Video generation; otherwise, it defaults to Text-to-Video generation.

> 💡As in Image-to-Video, the `size` parameter specifies the area of the generated video, while the aspect ratio follows that of the original input image (see the sketch below).

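To make the area semantics concrete, here is a minimal illustration (our own sketch, not code from this repo; rounding to multiples of 32 is an assumption made to match the VAE and patchification strides):

```py
# Hypothetical illustration of the `size`-as-area behavior described above:
# keep the input image's aspect ratio, match the requested pixel area,
# and round each side to a multiple of 32.
import math

def resolve_size(area_w: int, area_h: int, img_w: int, img_h: int, multiple: int = 32):
    target_area = area_w * area_h   # e.g. 1280 * 704
    aspect = img_w / img_h          # aspect ratio of the input image
    h = math.sqrt(target_area / aspect)
    w = h * aspect
    return round(w / multiple) * multiple, round(h / multiple) * multiple

print(resolve_size(1280, 704, img_w=1920, img_h=1080))  # -> (1280, 704)
```
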
- Multi-GPU inference using FSDP + DeepSpeed Ulysses

```sh
torchrun --nproc_per_node=8 generate.py --task ti2v-5B --size 1280*704 --ckpt_dir ./Wan2.2-TI2V-5B --dit_fsdp --t5_fsdp --ulysses_size 8 --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```

> The process of prompt extension can be referenced [here](#2-using-prompt-extention).


- Running with Diffusers

```py
import torch
from diffusers import WanPipeline, AutoencoderKLWan
from diffusers.utils import export_to_video

dtype = torch.bfloat16
device = "cuda"

model_id = "Wan-AI/Wan2.2-TI2V-5B-Diffusers"
# The VAE is loaded in float32 for numerical stability.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=dtype)
pipe.to(device)

height = 704
width = 1280
num_frames = 121   # ~5 seconds at 24 fps
num_inference_steps = 50
guidance_scale = 5.0

prompt = "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
# A standard Chinese negative prompt for Wan models, listing artifacts to avoid:
# "oversaturated colors, overexposure, static, blurry details, subtitles, style,
# artwork, painting, frame, still, overall grayish, worst quality, low quality,
# JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands,
# poorly drawn face, deformed, disfigured, malformed limbs, fused fingers,
# motionless frame, cluttered background, three legs, crowded background,
# walking backwards".
negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_frames=num_frames,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
).frames[0]
export_to_video(output, "5bit2v_output.mp4", fps=24)
```
> 💡**Note**: This model requires features that are currently available only in the main branch of diffusers. The latest stable release on PyPI does not yet include these updates.
> To use this model, please install the library from source:
> ```sh
> pip install git+https://github.com/huggingface/diffusers
> ```
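
If the full pipeline does not fit in GPU memory, the standard Diffusers offloading hook can be used in place of `pipe.to(device)` (slower, but with a much lower peak VRAM footprint):

```py
# Keeps each pipeline component on the CPU and moves it to the GPU only
# while it is needed; a standard Diffusers memory-saving option.
pipe.enable_model_cpu_offload()
```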


## Computational Efficiency on Different GPUs

We test the computational efficiency of different **Wan2.2** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.


<div align="center">
    <img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>

> The parameter settings for the tests presented in this table are as follows:
> (1) Multi-GPU: 14B: `--ulysses_size 4/8 --dit_fsdp --t5_fsdp`, 5B: `--ulysses_size 4/8 --offload_model True --convert_model_dtype --t5_cpu`; Single-GPU: 14B: `--offload_model True --convert_model_dtype`, 5B: `--offload_model True --convert_model_dtype --t5_cpu` (`--convert_model_dtype` converts model parameter types to `config.param_dtype`);
> (2) The distributed testing utilizes the built-in FSDP and Ulysses implementations, with FlashAttention3 deployed on Hopper architecture GPUs;
> (3) Tests were run without the `--use_prompt_extend` flag;
> (4) Reported results are the average of multiple samples taken after the warm-up phase.


-------

## Introduction of Wan2.2

**Wan2.2** builds on the foundation of Wan2.1 with notable improvements in generation quality and model capability. This upgrade is driven by a series of key technical innovations, mainly including the Mixture-of-Experts (MoE) architecture, upgraded training data, and high-compression video generation.

##### (1) Mixture-of-Experts (MoE) Architecture

Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into the video generation diffusion model. MoE has been widely validated in large language models as an efficient approach to increasing total model parameters while keeping inference cost nearly unchanged. In Wan2.2, the A14B model series adopts a two-expert design tailored to the denoising process of diffusion models: a high-noise expert for the early stages, focusing on overall layout, and a low-noise expert for the later stages, refining video details. Each expert model has about 14B parameters, resulting in a total of 27B parameters but only 14B active parameters per step, keeping inference computation and GPU memory nearly unchanged.

<div align="center">
    <img src="assets/moe_arch.png" alt="" style="width: 90%;" />
</div>

The transition point between the two experts is determined by the signal-to-noise ratio (SNR), a metric that decreases monotonically as the denoising step $t$ increases. At the beginning of the denoising process, $t$ is large and the noise level is high, so the SNR is at its minimum, denoted as ${SNR}_{min}$. In this stage, the high-noise expert is activated. We define a threshold step ${t}_{moe}$ corresponding to half of ${SNR}_{min}$, and switch to the low-noise expert when $t<{t}_{moe}$.

<div align="center">
    <img src="assets/moe_2.png" alt="" style="width: 90%;" />
</div>

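The switching rule itself is a one-line comparison. Below is a minimal sketch; the names `high_noise_expert`, `low_noise_expert`, and `t_moe` are illustrative, and the actual routing lives in the Wan2.2 codebase:

```py
# Illustrative sketch of the two-expert routing described above.
def denoise_step(x_t, t, t_moe, high_noise_expert, low_noise_expert):
    # Early steps (large t, low SNR): the high-noise expert sets the layout.
    # Later steps (t < t_moe, higher SNR): the low-noise expert refines detail.
    expert = high_noise_expert if t >= t_moe else low_noise_expert
    return expert(x_t, t)
```
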
To validate the effectiveness of the MoE architecture, four settings are compared based on their validation loss curves. The baseline **Wan2.1** model does not employ the MoE architecture. Among the MoE-based variants, **Wan2.1 & High-Noise Expert** reuses the Wan2.1 model as the low-noise expert while using Wan2.2's high-noise expert, whereas **Wan2.1 & Low-Noise Expert** uses Wan2.1 as the high-noise expert and employs Wan2.2's low-noise expert. **Wan2.2 (MoE)** (our final version) achieves the lowest validation loss, indicating that its generated video distribution is closest to the ground truth and exhibits superior convergence.


##### (2) Efficient High-Definition Hybrid TI2V
To enable more efficient deployment, Wan2.2 also explores a high-compression design. In addition to the 27B MoE models, a 5B dense model, i.e., TI2V-5B, is released. It is supported by the high-compression Wan2.2-VAE, which achieves a $T\times H\times W$ compression ratio of $4\times16\times16$, increasing the overall compression rate to 64 while maintaining high-quality video reconstruction. With an additional patchification layer, the total compression ratio of TI2V-5B reaches $4\times32\times32$. Without specific optimization, TI2V-5B can generate a 5-second 720P video in under 9 minutes on a single consumer-grade GPU, ranking among the fastest 720P@24fps video generation models. It also natively supports both text-to-video and image-to-video tasks within a single unified framework, covering both academic research and practical applications.


<div align="center">
    <img src="assets/vae.png" alt="" style="width: 80%;" />
</div>
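
For a rough sense of what this compression means in practice, the arithmetic below (our own back-of-the-envelope calculation; the `1 + (T - 1) / 4` temporal rule is an assumption based on the 4n+1 frame counts used in this README, e.g. `num_frames = 121`) works out the latent token count for a 5-second 720P clip:

```py
# Rough latent-size arithmetic for TI2V-5B under the stated 4x32x32
# total compression (VAE 4x16x16, then 2x2 patchification).
frames, height, width = 121, 704, 1280

latent_t = 1 + (frames - 1) // 4   # 31 latent frames (assumed temporal rule)
tokens_h = height // 32            # 22 (16x VAE + 2x patchify)
tokens_w = width // 32             # 40
print(latent_t * tokens_h * tokens_w)  # 27,280 tokens for a 5-second 720P clip
```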



##### Comparisons to SOTAs
We compared Wan2.2 with leading closed-source commercial models on our new Wan-Bench 2.0, evaluating performance across multiple crucial dimensions. The results demonstrate that Wan2.2 achieves superior performance compared to these leading models.


<div align="center">
    <img src="assets/performance.png" alt="" style="width: 90%;" />
</div>

## Citation
If you find our work helpful, please cite us.

```
@article{wan2025,
    title={Wan: Open and Advanced Large-Scale Video Generative Models},
    author={Team Wan and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
    journal={arXiv preprint arXiv:2503.20314},
    year={2025}
}
```

## License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated content, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).


## Acknowledgements

We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories for their open research.



## Contact Us
If you would like to leave a message for our research or product teams, feel free to join our [Discord](https://discord.gg/AKNgpMK4Yj) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!