How to use hlicai/cubediff-512-singlecaption with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "hlicai/cubediff-512-singlecaption", torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")  # switch to "mps" for Apple devices

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
CubeDiff Panorama Generation Model - Single Caption
This is an open-source implementation of CubeDiff, a method for 360° panorama generation based on diffusion models.
Please refer to the official paper and project page for more information:
📄 Paper: CubeDiff: Repurposing Diffusion-Based Image Models for Panorama Generation
🌐 Original Project Page: CubeDiff
📚 Open-source implementation: OpenCubeDiff
Model Details
This model is part of the CubeDiff open-source reimplementation, carried out as part of a semester project by
Hanqiu Li Cai and Juan Tarazona Rodríguez for their Master's degree in Robotics, Systems and Control at ETH Zürich.
This model is not affiliated in any way with Google.
We repurpose and fine-tune a Stable Diffusion backbone (SD 1.5) to generate cube-face-consistent panoramas using CubeDiff-style attention reshaping and conditioning.
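The core of CubeDiff-style attention reshaping is that the six cube-face latents are not denoised independently: their token sequences are concatenated along the sequence axis so self-attention spans all faces, then split back to per-face tensors. A minimal sketch of that reshape, under assumed shapes and with identity q/k/v projections for brevity (function names are illustrative, not from the repository):

```python
import torch
import torch.nn.functional as F

NUM_FACES = 6

def join_faces(x, num_faces=NUM_FACES):
    """(batch*faces, tokens, channels) -> (batch, faces*tokens, channels)."""
    bf, t, c = x.shape
    return x.reshape(bf // num_faces, num_faces * t, c)

def split_faces(x, num_faces=NUM_FACES):
    """Inverse of join_faces: back to per-face token sequences."""
    b, nt, c = x.shape
    return x.reshape(b * num_faces, nt // num_faces, c)

def cross_face_self_attention(x, num_faces=NUM_FACES):
    """Self-attention in which every token attends to tokens on all six faces.

    A real attention block would apply learned q/k/v projections; here the
    latents are used directly to keep the reshape logic in focus.
    """
    joined = join_faces(x, num_faces)                        # (B, 6*T, C)
    attended = F.scaled_dot_product_attention(joined, joined, joined)
    return split_faces(attended, num_faces)                  # (B*6, T, C)
```

Because only the reshape around the attention call changes, the pretrained SD 1.5 attention weights can be reused while the model learns cross-face consistency during fine-tuning.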
For installation, usage examples, and training details, please visit the project repository:
🔗 https://github.com/Juan5713/OpenCubeDiff