Instructions to use stabilityai/stable-diffusion-3.5-medium with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use stabilityai/stable-diffusion-3.5-medium with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
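The Diffusers snippet above says to switch to "mps" on Apple devices; the selection logic can be sketched device-agnostically (a pure-Python sketch — `cuda_ok` and `mps_ok` stand in for `torch.cuda.is_available()` and `torch.backends.mps.is_available()`):

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Prefer a CUDA GPU, then Apple Metal (mps), then CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

# In practice you would call:
# pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
# and pass the result as device_map to DiffusionPipeline.from_pretrained.
```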
Doesn't work, boys - we'll get 'em next time. FIX INSIDE
Also reported by people here:
https://github.com/comfyanonymous/ComfyUI/issues/5422
UPDATE:
If you update Torch to 2.5.1, Torchvision to 0.20.1, Torchaudio to 2.5.1, and Transformers to 4.46.1 (the py3-none-any wheel):
- it works
- please include this in the model page instructions somewhere?
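For reference, the combination reported to work above could be pinned in a requirements file like this (a sketch; exact wheels and any CUDA-specific index URL depend on your platform):

```text
# requirements.txt fragment matching the versions reported in this thread
torch==2.5.1
torchvision==0.20.1
torchaudio==2.5.1
transformers==4.46.1
```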
Error:

```
RuntimeError: Error(s) in loading state_dict for OpenAISignatureMMDITWrapper:
	size mismatch for joint_blocks.0.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
	size mismatch for joint_blocks.0.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
	[... the same weight/bias size mismatch repeats for joint_blocks.1 through joint_blocks.12 ...]
Prompt executed in 1.06 seconds
```
Do we need to update this in requirements.txt?
Working fine with torch 2.4.1 in Diffusers; perhaps the ComfyUI folks need to look at why their code isn't working.
Also, in my experience Torch 2.5.1 is very broken and uses far more memory than 2.4.1.
Try `pip install -U diffusers`.