[Diffusion Models Class] Unit 8: Model Card

This model is an unconditional diffusion model trained to generate music in the [chosen_genre] style.

Usage
```python
from IPython.display import Audio, display
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("zwbdla/audio-diffusion-electronic")
output = pipe()
display(output.images[0])  # the generated spectrogram image
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
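Outside a notebook, the generated audio can be written to a WAV file instead of played inline. A minimal sketch using only the standard-library `wave` module, assuming the pipeline's output audio is a mono float array in [-1, 1] (here a synthetic sine wave stands in for `output.audios[0]`):

```python
import wave

import numpy as np

# Stand-in for output.audios[0]: one second of a 440 Hz sine.
# With the real pipeline, use pipe.mel.get_sample_rate() instead.
sample_rate = 22050
t = np.linspace(0, 1, sample_rate, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t).astype(np.float32)

# Scale float samples in [-1, 1] to 16-bit PCM and write a mono WAV file.
pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
with wave.open("generated.wav", "wb") as wf:
    wf.setnchannels(1)            # mono
    wf.setsampwidth(2)            # 16-bit samples
    wf.setframerate(sample_rate)
    wf.writeframes(pcm.tobytes())
```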