arxiv:2604.20244

Hybrid Policy Distillation for LLMs

Published on Apr 22 · Submitted by SII-Wenhong on Apr 24

AI-generated summary

Hybrid Policy Distillation combines forward and reverse KL divergence objectives to improve the stability and efficiency of knowledge distillation across model sizes and tasks.

Abstract

Knowledge distillation (KD) is a powerful paradigm for compressing large language models (LLMs), but its effectiveness depends on intertwined choices of divergence direction, optimization strategy, and data regime. We break down the design of existing KD methods and present a unified view that connects them, reformulating KD as a reweighted log-likelihood objective at the token level. We further propose Hybrid Policy Distillation (HPD), which integrates the complementary advantages of forward and reverse KL to balance mode-covering and mode-seeking behavior, and combines off-policy data with lightweight, approximate on-policy sampling. We validate HPD on long-generation math reasoning as well as short-generation dialogue and code tasks, demonstrating improved optimization stability, computational efficiency, and final performance across diverse model families and scales. The code related to this work is available at https://github.com/zwhong714/Hybrid-Policy-Distillation.
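To make the token-level view concrete, here is a minimal sketch of a hybrid per-token objective, assuming PyTorch and logits of shape (batch, seq_len, vocab). The interpolation weight `alpha`, the function name, and the simple convex combination are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_kl_loss(student_logits, teacher_logits, mask, alpha=0.5):
    """Hybrid of forward and reverse KL, averaged over unmasked tokens.

    student_logits, teacher_logits: (batch, seq_len, vocab)
    mask: (batch, seq_len), 1.0 for tokens that should contribute.
    alpha: illustrative interpolation weight between the two directions.
    """
    teacher_logits = teacher_logits.detach()  # no gradient to the teacher
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)

    # Forward KL(teacher || student): mode-covering.
    fkl = (t_logp.exp() * (t_logp - s_logp)).sum(-1)
    # Reverse KL(student || teacher): mode-seeking.
    rkl = (s_logp.exp() * (s_logp - t_logp)).sum(-1)

    per_token = alpha * fkl + (1.0 - alpha) * rkl
    return (per_token * mask).sum() / mask.sum().clamp(min=1)
```

Here the mask only excludes padding; the paper's masking mechanism presumably decides per token how the two directions are combined, which this sketch does not attempt to reproduce.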

Community

Paper submitter

🧭 A unified view of policy distillation methods
⚡ Efficient one-hot-style distillation
🧩 A hybrid KL objective with a masking mechanism
🪶 Lightweight sampling under an offline-prefix setting (see the sketch below)
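
The offline-prefix setting can be approximated by reusing part of an offline reference response as a fixed prefix and letting the student sample only the remaining suffix. Below is a minimal sketch, assuming a Hugging Face-style model and tokenizer; the names (`student`, `reference`), the `prefix_ratio`, and the sampling settings are hypothetical, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def offline_prefix_rollout(student, tokenizer, prompt, reference,
                           prefix_ratio=0.5, max_new_tokens=256):
    """Approximate on-policy data from an offline prefix.

    Keeps the first `prefix_ratio` of an offline reference response and
    lets the student generate the rest, so only a short suffix is sampled.
    """
    ref_ids = tokenizer(reference, add_special_tokens=False).input_ids
    prefix_ids = ref_ids[: max(1, int(len(ref_ids) * prefix_ratio))]

    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, torch.tensor([prefix_ids])], dim=-1)

    # Sample the continuation from the current student policy.
    return student.generate(input_ids, do_sample=True,
                            max_new_tokens=max_new_tokens)
```

The resulting sequences could then be scored by the teacher and plugged into a token-level objective such as the one sketched above, so fresh sampling is limited to the short student-generated suffix.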
