## How to use from the Diffusers library
```shell
pip install -U diffusers transformers accelerate gguf
```
This repo only contains the quantized transformer, so loading it directly with `DiffusionPipeline.from_pretrained` will not work. The sketch below follows Diffusers' usual GGUF pattern (quantized transformer via `from_single_file`, remaining components from the original repo); whether the Cosmos transformer supports this depends on your diffusers version, and the filename is only an example, so pick one from the file list.

```python
import torch
from diffusers import Cosmos2TextToImagePipeline, CosmosTransformer3DModel, GGUFQuantizationConfig

transformer = CosmosTransformer3DModel.from_single_file(
    "https://huggingface.co/city96/Cosmos-Predict2-14B-Text2Image-gguf/blob/main/cosmos-predict2-14b-text2image-Q8_0.gguf",  # example filename
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = Cosmos2TextToImagePipeline.from_pretrained(
    "nvidia/Cosmos-Predict2-14B-Text2Image", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")  # switch to "mps" for apple devices

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
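If the quantized pipeline still does not fit in VRAM, Diffusers' standard CPU offloading should help; replace the `.to("cuda")` call above with:

```python
pipe.enable_model_cpu_offload()  # streams submodules to the GPU only while they run
```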

This is a direct GGUF conversion of [nvidia/Cosmos-Predict2-14B-Text2Image](https://huggingface.co/nvidia/Cosmos-Predict2-14B-Text2Image).

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the folders listed below; a download sketch follows the table.

| Type | Name | Location | Download |
| ---- | ---- | -------- | -------- |
| Main Model | Cosmos-Predict2-14B-Text2Image | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | T5-XXL-Encoder (old) | `ComfyUI/models/text_encoders` | Safetensors |
| VAE | Wan 2.1 VAE | `ComfyUI/models/vae` | Safetensors |
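For scripted setups, `huggingface_hub` can place the main model directly; the GGUF filename below is a hypothetical example, so substitute one from this repo's file list (the text encoder and VAE safetensors are fetched the same way into their folders):

```python
from huggingface_hub import hf_hub_download

# Download one quantization level into the ComfyUI models folder.
# The filename is a placeholder; check the repo file list for exact names.
hf_hub_download(
    repo_id="city96/Cosmos-Predict2-14B-Text2Image-gguf",
    filename="cosmos-predict2-14b-text2image-Q4_K_M.gguf",
    local_dir="ComfyUI/models/diffusion_models",
)
```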

Example workflow - based on the official example workflow

Example outputs - sample size of 1, not strictly representative

[sample image]

## Notes

As this is a quantized model, not a finetune, all of the original restrictions and license terms still apply.


Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit (14B parameters, `cosmos` architecture).
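To check which quantization a downloaded file uses, the `gguf` Python package (installed above) can read the tensor metadata; the filename here is a placeholder:

```python
from gguf import GGUFReader

reader = GGUFReader("cosmos-predict2-14b-text2image-Q8_0.gguf")  # placeholder path
for tensor in reader.tensors[:8]:
    # tensor_type shows the per-tensor quantization (e.g. Q8_0, F32)
    print(tensor.name, tensor.tensor_type.name, tensor.shape)
```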
