Instructions to use lightx2v/Wan2.2-Distill-Models with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use lightx2v/Wan2.2-Distill-Models with Diffusers:
```
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "lightx2v/Wan2.2-Distill-Models",
    dtype=torch.bfloat16,
    device_map="cuda",
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Diffusion Single File
How to use lightx2v/Wan2.2-Distill-Models with Diffusion Single File:
```
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
- Notebooks
- Google Colab
- Kaggle
Error with quantized models using the "Load Diffusion Model" node in ComfyUI
Thank you for your excellent work on the wan2.2 model. The standard version of the model works perfectly in my workflow.
I am encountering an issue specifically when trying to load your quantized models (fp8_e4m3 and int8) in ComfyUI. My workflow uses the "Load Diffusion Model" node, but it fails when I select either of the quantized model files.
The console displays the following error message:
```
unet unexpected: ['blocks.0.self_attn.q.weight_scale', 'blocks.0.self_attn.k.weight_scale', 'blocks.0.self_attn.v.weight_scale', 'blocks.0.self_attn.o.weight_scale', 'blocks.0.cross_attn.q.weight_scale', 'blocks.0.cross_attn.k.weight_scale', ... and many other similar 'weight_scale' keys]
```
This error suggests that the underlying loading mechanism used by the "Load Diffusion Model" node cannot handle the quantization parameters (weight_scale) present in the model.
Is there a specific custom node or a different loading method required to properly use these quantized models with a node like "Load Diffusion Model"?
Any guidance on how to correctly integrate these models would be very helpful.
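As a quick diagnostic, you can check whether a checkpoint actually contains quantization scale tensors by listing its keys and filtering for the `.weight_scale` suffix. The sketch below uses a hardcoded sample key list for illustration; in practice you would read the keys from the `.safetensors` file itself (e.g. with `safe_open` from the `safetensors` library, as noted in the comments). The helper name `find_scale_keys` is just for this example.

```python
def find_scale_keys(keys):
    """Return the quantization scale tensor names in a checkpoint's key list."""
    return [k for k in keys if k.endswith(".weight_scale")]


# In practice, load the key list from the checkpoint, e.g.:
#   from safetensors import safe_open
#   with safe_open("model.safetensors", framework="pt") as f:
#       keys = list(f.keys())
# Here we use a small hardcoded sample matching the error message above.
keys = [
    "blocks.0.self_attn.q.weight",
    "blocks.0.self_attn.q.weight_scale",
    "blocks.0.self_attn.k.weight",
    "blocks.0.self_attn.k.weight_scale",
]

scale_keys = find_scale_keys(keys)
if scale_keys:
    print(f"Quantized checkpoint: {len(scale_keys)} weight_scale tensors found")
else:
    print("No weight_scale tensors; likely an unquantized checkpoint")
```

If the standard "Load Diffusion Model" node reports these keys as unexpected, the checkpoint carries per-weight scales that the plain loader does not know how to apply, which is consistent with the error above.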
Try Wan2.2-Distill-Models/wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors and Wan2.2-Distill-Models/wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors
I get the same issue as OP with the `comfyui.safetensors` files.