arxiv:2505.02625

LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis

Published on May 5, 2025
Submitted by Qingkai Fang on May 6, 2025

Abstract

AI-generated summary: LLaMA-Omni 2, a series of speech language models with parameters ranging from 0.5B to 14B, achieves high-quality real-time speech interaction through a speech encoder and an autoregressive streaming speech decoder, outperforming models like GLM-4-Voice with significantly less training data.

Real-time, intelligent, and natural speech interaction is an essential part of next-generation human-computer interaction. Recent advancements have showcased the potential of building intelligent spoken chatbots based on large language models (LLMs). In this paper, we introduce LLaMA-Omni 2, a series of speech language models (SpeechLMs) ranging from 0.5B to 14B parameters, capable of achieving high-quality real-time speech interaction. LLaMA-Omni 2 is built upon the Qwen2.5 series models, integrating a speech encoder and an autoregressive streaming speech decoder. Despite being trained on only 200K multi-turn speech dialogue samples, LLaMA-Omni 2 demonstrates strong performance on several spoken question answering and speech instruction following benchmarks, surpassing previous state-of-the-art SpeechLMs like GLM-4-Voice, which was trained on millions of hours of speech data.
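
The abstract describes a three-stage pipeline: a speech encoder feeds the Qwen2.5 LLM, and an autoregressive streaming speech decoder synthesizes audio while the LLM is still generating text. Below is a minimal Python sketch of that interleaved generate-and-synthesize loop. Every name in it (SpeechEncoder, StreamingSpeechDecoder, llm_generate, the chunk size) is a hypothetical stand-in for illustration, not LLaMA-Omni 2's actual API.

# Hypothetical sketch of interleaved streaming speech synthesis.
# These stub classes are illustrative stand-ins, not LLaMA-Omni 2's code:
# they only show the "emit audio while the LLM is still generating" idea.

from typing import Iterator, List


class SpeechEncoder:
    """Stand-in for a speech encoder (e.g. a Whisper-style model)."""

    def encode(self, audio: bytes) -> List[float]:
        return [0.0] * 128  # placeholder speech embedding


class StreamingSpeechDecoder:
    """Stand-in for an autoregressive streaming speech decoder."""

    def synthesize(self, text_chunk: str) -> bytes:
        return text_chunk.encode()  # placeholder "audio" for the chunk


def llm_generate(speech_embedding: List[float]) -> Iterator[str]:
    """Stand-in for token-by-token generation from the LLM."""
    yield from ["Hello", ",", " how", " can", " I", " help", "?"]


def streaming_chat(audio_in: bytes, chunk_size: int = 3) -> Iterator[bytes]:
    """Encode the input speech, then interleave text generation with
    synthesis: every `chunk_size` tokens, emit an audio chunk instead of
    waiting for the full text response."""
    encoder, decoder = SpeechEncoder(), StreamingSpeechDecoder()
    embedding = encoder.encode(audio_in)
    buffer: List[str] = []
    for token in llm_generate(embedding):
        buffer.append(token)
        if len(buffer) >= chunk_size:
            yield decoder.synthesize("".join(buffer))
            buffer.clear()
    if buffer:  # flush trailing tokens shorter than one chunk
        yield decoder.synthesize("".join(buffer))


for audio_chunk in streaming_chat(b"<user speech>"):
    print(audio_chunk)  # a real system would stream this to the speaker

The chunked loop is what enables low perceived latency: playback can begin after the first few generated tokens rather than only after the full text response is complete.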

Get this paper in your agent:

hf papers read 2505.02625

Don't have the latest Hugging Face CLI? Install it with:

curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 11

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 16