Instructions to use ostris/OpenFLUX.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use ostris/OpenFLUX.1 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ostris/OpenFLUX.1",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Will it support ControlNet and IP-Adapter? How much VRAM is needed to run it?
@michaelj Yes, it will support them, but I believe normal Flux.1 dev IP-Adapters/ControlNets will not work on this model; someone has to train them on this model. The VRAM requirement is the same as Flux.1 dev or Flux.1 schnell: at fp16 it could probably fit in a 24 GB VRAM GPU, while at NF4 v2 it could probably fit in an 8 GB VRAM GPU.
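Those VRAM figures can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming the Flux.1 transformer has roughly 12B parameters (the widely reported size, treated here as an approximation) and counting only the weights, not activations or the text encoders:

```python
def weight_footprint_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return n_params * bits_per_param / 8 / 1e9

FLUX_PARAMS = 12e9  # assumed transformer size; an approximation

fp16_gb = weight_footprint_gb(FLUX_PARAMS, 16)  # bf16/fp16: 2 bytes per weight
nf4_gb = weight_footprint_gb(FLUX_PARAMS, 4)    # NF4: ~0.5 bytes per weight

print(f"fp16 weights: ~{fp16_gb:.0f} GB")  # ~24 GB, tight on a 24 GB GPU
print(f"nf4 weights:  ~{nf4_gb:.0f} GB")   # ~6 GB, plausible on an 8 GB GPU
```

This ignores activation memory and any offloading tricks, so real usage will sit somewhat above the weight footprint, which is why fp16 is described as "the lowest it could probably fit" on 24 GB.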
I have actually been able to drop this model in place of schnell and/or dev alongside InstantX/FLUX.1-dev-Controlnet-Union, and it runs.
However, the output images are not exactly spectacular, and the runtimes are longer, with less VRAM used on the GPU (700 MB vs 3-5 GB in dev/schnell).
Could one, in principle, just fine-tune the Flux ControlNet rather than retraining it completely on OpenFLUX.1? Something like THIS?