Instructions to use lightx2v/Wan2.2-Distill-Models with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use lightx2v/Wan2.2-Distill-Models with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "lightx2v/Wan2.2-Distill-Models",
    dtype=torch.bfloat16,
    device_map="cuda",
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Diffusion Single File
How to use lightx2v/Wan2.2-Distill-Models with Diffusion Single File:
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
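Until an official snippet lands, here is a minimal hedged sketch of single-file loading. Everything in it is an assumption to verify before use: the `WanPipeline` class, whether it supports `from_single_file`, and the checkpoint path (a placeholder, not a real file in this repo) — check the repository's file listing and the Diffusers single-file loading documentation.

```python
# Hedged sketch, not an official recipe. Assumptions to verify:
# - diffusers exposes a WanPipeline class for Wan 2.x models,
# - that class supports single-file loading via from_single_file,
# - the checkpoint path below is a placeholder, not a real file in this repo.

def load_wan_single_file(checkpoint_path):
    """Build a pipeline from one .safetensors checkpoint file.

    checkpoint_path: local path (or URL) to a checkpoint downloaded from
    lightx2v/Wan2.2-Distill-Models -- pick an actual file from the repo.
    """
    import torch
    from diffusers import WanPipeline  # assumption: see Diffusers single-file docs

    return WanPipeline.from_single_file(
        checkpoint_path,
        torch_dtype=torch.bfloat16,
    )
```

If `from_single_file` turns out not to be supported for this class, fall back to the `from_pretrained` snippet shown for the Diffusers library above.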
- Notebooks
- Google Colab
- Kaggle
Hi! Regarding the 20260412 model, is it possible to provide an official FP8 model for ComfyUI, or an official LoRA for ComfyUI?
#19 opened 16 days ago by RedHn
Is there a recommended sampler/scheduler for the distill models in comfyui?
#18 opened 20 days ago by Cldoerick
The model is too big; we need a new lora-0412 for high & low, thanks!
1
#17 opened 24 days ago by RedHn
Question
5
#16 opened about 1 month ago by YarvixPA
Is there a workflow for nodes that load multiple LoRA modules? Thank you.
#15 opened 4 months ago by wzgrx
How is this model trained?
#14 opened 4 months ago by pxyy
Is there any way to run T2V with 4 step inference?
2
#13 opened 5 months ago by neuralityAI
T2V Version?
1
#12 opened 6 months ago by spooknik
Will there be FP4 quantization?
#10 opened 6 months ago by hlufeng
UNet missing..
5
#9 opened 6 months ago by hanswurstwerner
The 1030 FP8 E4M3 version gives bad results in ComfyUI
👍 3
3
#7 opened 7 months ago by t8star
Is there any chance that you will do something similar for T2V models?
#5 opened 7 months ago by Ruzarh
Should Wan2.2-Distill-Models be used with Distill LoRAs?
👍 2
#4 opened 7 months ago by Cjayk
add pipeline tag for better discoverability :)
🔥 1
#3 opened 7 months ago by linoyts
ComfyUI will still cast all layers to fp8
6
#2 opened 7 months ago by silveroxides
Error with quantized models using the "Load Diffusion Model" node in ComfyUI
3
#1 opened 7 months ago by xuanwoa