Instructions to use ostris/OpenFLUX.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use ostris/OpenFLUX.1 with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ostris/OpenFLUX.1", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Conversion from transformers format
Hi, keep up the good work!
How do you suggest converting the transformer checkpoints into a single safetensors file, like dev and schnell?
Thank you. I’ll start adding the single checkpoints as well as quantized versions when it gets to a more usable state. It is getting close, but not quite there yet.
No problem. So you think it's still not good enough to start training on? I'm not looking to do inference.
Looking forward to the quantized version soon. Great work!
Almost two months later, how is it going? I converted the checkpoint and the conversion works in ComfyUI, but in Kohya I get: NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
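That error comes from PyTorch's meta device: modules initialized there carry only shapes, no storage, so `.to()` has nothing to copy. A minimal sketch of what the message is asking for (independent of Kohya itself):

```python
import torch
import torch.nn as nn

# A module built under the "meta" device has shapes but no data,
# so layer.to("cpu") would raise the NotImplementedError above.
with torch.device("meta"):
    layer = nn.Linear(4, 4)

# to_empty() allocates uninitialized storage on the target device;
# the real weights must then be loaded separately.
layer = layer.to_empty(device="cpu")
layer.load_state_dict(nn.Linear(4, 4).state_dict())
```

In practice this usually means the tool loaded the checkpoint with low-memory/meta-device initialization and then moved it with `.to()` instead of `.to_empty()` plus a state-dict load, so it may be a bug or version mismatch on the Kohya side rather than in the converted file.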