UserMirrorrer-Llama-DPO

This is a preference-aligned user simulator for recommendation systems, fine-tuned from Llama-3.2-3B-Instruct using the UserMirrorer framework.

The model was introduced in the paper Mirroring Users: Towards Building Preference-aligned User Simulator with User Feedback in Recommendation.

Model Details

UserMirrorer simulates user behavior and preferences in recommender systems by leveraging user feedback. The framework generates explicit decision-making processes as explanatory rationales, which improves alignment with human preferences.
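
Since the model is a fine-tune of Llama-3.2-3B-Instruct, it can presumably be loaded and queried with transformers in the usual way. The sketch below is illustrative only: the prompt wording is a hypothetical example, not the framework's official template.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Joinn/UserMirrorrer-Llama-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical prompt: the actual prompt format used by UserMirrorer may differ.
messages = [
    {"role": "user", "content": (
        "User history: clicked items A, B, and C.\n"
        "Candidate items: D, E, F.\n"
        "Act as this user: explain your decision process, then pick one item."
    )}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))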

The fine-tuning process involved two stages (see the sketch after this list):

  1. Supervised Fine-tuning (SFT): 1 epoch.
  2. Direct Preference Optimization (DPO): 2 epochs.
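
For orientation, here is a minimal sketch of what this two-stage recipe could look like with Hugging Face trl. Dataset files, column layouts, and all hyperparameters other than the epoch counts are placeholder assumptions, not the authors' actual configuration.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base = "meta-llama/Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Stage 1: supervised fine-tuning for 1 epoch on decision rationales.
# "sft_data.jsonl" is a placeholder for the UserMirrorer SFT split.
sft_dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")
sft_trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=sft_dataset,
    args=SFTConfig(output_dir="usermirrorer-sft", num_train_epochs=1),
)
sft_trainer.train()

# Stage 2: DPO for 2 epochs on preference pairs with the standard
# "prompt" / "chosen" / "rejected" columns; the file name is a placeholder.
dpo_dataset = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    processing_class=tokenizer,
    train_dataset=dpo_dataset,
    args=DPOConfig(output_dir="usermirrorer-dpo", num_train_epochs=2),
)
dpo_trainer.train()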

Resources

  - Paper (arXiv:2508.18142): https://arxiv.org/abs/2508.18142
  - Base model: Llama-3.2-3B-Instruct

Citation

If you find this work useful, please consider citing:

@misc{wei2025mirroringusersbuildingpreferencealigned,
      title={Mirroring Users: Towards Building Preference-aligned User Simulator with User Feedback in Recommendation}, 
      author={Tianjun Wei and Huizhong Guo and Yingpeng Du and Zhu Sun and Huang Chen and Dongxia Wang and Jie Zhang},
      year={2025},
      eprint={2508.18142},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2508.18142}, 
}