Instructions for using smthem/ltx-2-19b-dev-diffusers-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use smthem/ltx-2-19b-dev-diffusers-4bit with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "smthem/ltx-2-19b-dev-diffusers-4bit",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
do i need gemma-3-12b
#1
by DEADMAN3009
Hey, wanted to ask: do I still need gemma-3-12b with this?
No, the text_encoder is Gemma-3-12B quantized to FP4.
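As a rough illustration of why bundling the FP4 text encoder removes the need for a separate Gemma-3-12B download, the weight footprint can be estimated from bytes per parameter. The 12B parameter count is taken from the model name; the numbers below are a back-of-the-envelope sketch (weights only, ignoring activations and quantization overhead), not measured values:

```python
# Rough weight-memory estimate for a 12B-parameter text encoder.
# Back-of-the-envelope only: weights, no activations or scaling factors.

PARAMS = 12e9  # Gemma-3-12B parameter count, inferred from the model name

def weight_gib(params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GiB at a given precision."""
    return params * bits_per_param / 8 / 2**30

bf16 = weight_gib(PARAMS, 16)  # full-precision bf16 checkpoint
fp4 = weight_gib(PARAMS, 4)    # 4-bit quantized, as shipped in this repo

print(f"bf16: ~{bf16:.1f} GiB, fp4: ~{fp4:.1f} GiB")
```

This prints roughly `bf16: ~22.4 GiB, fp4: ~5.6 GiB`, i.e. the quantized encoder needs about a quarter of the memory of a separate full-precision Gemma-3-12B.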
Nice, thank you. Can you release a workflow for your version, please?
Check the README; there is a simple inference code example. If you need a ComfyUI node, I just tested one; it is different from the ComfyUI original and from kijai's.