Tags: Audio-Text-to-Text · Transformers · Safetensors · step_audio_2 · text-generation · audio-reasoning · chain-of-thought · multi-modal · step-audio-r1 · custom_code
Instructions for using stepfun-ai/Step-Audio-R1.1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers

How to use stepfun-ai/Step-Audio-R1.1 with Transformers (a fuller generation sketch follows the notebook links below):

```python
# Load model directly; trust_remote_code is required because step_audio_2
# is a custom architecture defined in the model repository
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "stepfun-ai/Step-Audio-R1.1",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
  - Google Colab
  - Kaggle
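Beyond bare loading, a minimal text-only generation call could look like the sketch below. This is a sketch under stated assumptions, not a confirmed recipe: it assumes the repository's tokenizer loads through AutoTokenizer with trust_remote_code, and it leaves out audio input entirely, since audio preprocessing for the step_audio_2 architecture depends on the repository's custom code. The prompt string is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stepfun-ai/Step-Audio-R1.1"

# step_audio_2 is a custom architecture, so remote code must be trusted;
# whether the repo exposes a plain tokenizer this way is an assumption
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, dtype="auto"
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Placeholder text-only prompt; audio inputs go through the repository's
# custom processing code, which this sketch does not cover
prompt = "Summarize the key events described in the audio transcript."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```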
Community discussions:

- Severe generation repetition when serving Step-Audio-R1.1 with vLLM (#8, opened 3 months ago by Boxp; see the vLLM sketch below the list)
- Does the R1.1 model internally implement a vLLM-based inference framework? (1 reply; #7, opened 4 months ago by 026jzz)
- Details on the model parameters (1 reply; #6, opened 4 months ago by Allen18)
- How to output both audio and text? (2 replies; #4, opened 4 months ago by Allen18)
- How to implement MPS? (1 reply; #3, opened 4 months ago by Seungyoun; see the MPS sketch below the list)
- Problems when deploying with vLLM; how can they be resolved? (4 replies; #2, opened 4 months ago by syyxsxx; see the vLLM sketch below the list)
- How many languages are supported? (1 reply; #1, opened 4 months ago by RoadToNowhere)
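Two of the threads above (#8 and #2) concern vLLM serving. If vLLM can load the step_audio_2 custom architecture at all, which those threads suggest is not guaranteed, the repetition reported in #8 is commonly damped through sampling parameters rather than server flags. A minimal offline-inference sketch, assuming vLLM support for the model; the prompt is a placeholder:

```python
# Hedged sketch: assumes vLLM can load the step_audio_2 custom architecture,
# which the discussions above suggest may not work out of the box
from vllm import LLM, SamplingParams

llm = LLM(
    model="stepfun-ai/Step-Audio-R1.1",
    trust_remote_code=True,  # custom architecture requires remote code
)

# repetition_penalty > 1.0 penalizes already-generated tokens, a common
# first mitigation for the looping behavior reported in discussion #8
params = SamplingParams(
    temperature=0.7,
    repetition_penalty=1.1,
    max_tokens=256,
)

outputs = llm.generate(["Summarize the speaker's argument."], params)
print(outputs[0].outputs[0].text)
```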
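For thread #3, loading onto Apple Silicon through PyTorch's MPS backend follows the standard pattern below. Whether the model's custom audio code runs cleanly on MPS is exactly what the thread asks and is assumed here, not confirmed:

```python
# Hedged sketch for discussion #3: running the model on Apple Silicon via
# PyTorch's MPS backend, assuming the custom code has no CUDA-only paths
import torch
from transformers import AutoModelForCausalLM

device = "mps" if torch.backends.mps.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "stepfun-ai/Step-Audio-R1.1",
    trust_remote_code=True,
    dtype=torch.float16,  # fp16 is the usual choice on MPS; bf16 support varies
).to(device)
```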