# UNO-Scorer: A Unified General Scoring Model for UNO-Bench

## 📖 Introduction
UNO-Scorer is a lightweight yet high-precision general scoring model developed as part of UNO-Bench. It is designed to efficiently automate the evaluation of Large Multimodal Models (LMMs) with minimal computational overhead.
Built upon the powerful Qwen3-14B backbone, UNO-Scorer is fine-tuned on 13K high-quality in-house samples. It overcomes the limitations of traditional Outcome Reward Models (ORMs) by supporting 6 distinct question types, with particular strength on Multi-Step Open-Ended Questions (MO).
## 📊 Performance
UNO-Scorer demonstrates superior performance in automated evaluation, particularly in handling complex Multi-Step Open-Ended Questions. We compared the accuracy of our scorer against other advanced evaluators:
| Model | Accuracy |
|---|---|
| Seed-1.5-VL | 0.9118 |
| GPT-4.1 | 0.9457 |
| UNO-Scorer (Ours) | 0.9505 |
Experiments show that UNO-Scorer surpasses even proprietary frontier models such as GPT-4.1 in this evaluation domain, at a lower cost.
## 💻 Usage

### Run Inference
We provide an example script based on vLLM for efficient model inference. You can run the following command to test the scorer:
```bash
bash examples/test_scorer.sh
```
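If you prefer to call vLLM from Python rather than through the shell script, a minimal sketch is shown below. The model path, prompt layout, and sampling settings are illustrative assumptions, not the official format; consult `examples/test_scorer.sh` for the exact prompt UNO-Bench uses.

```python
def build_prompt(question: str, reference: str, answer: str) -> str:
    """Assemble a scoring prompt from the question, the formatted
    reference answer, and the model's answer (layout is an assumption)."""
    return (
        f"Question: {question}\n"
        f"Reference Answer: {reference}\n"
        f"Model Answer: {answer}\n"
        "Score the answer out of 10 points."
    )

if __name__ == "__main__":
    # vLLM import kept here so the prompt helper works without a GPU install.
    from vllm import LLM, SamplingParams

    llm = LLM(model="path/to/UNO-Scorer")  # hypothetical local checkpoint path
    params = SamplingParams(temperature=0.0, max_tokens=512)
    prompt = build_prompt(
        "How many apples are shown?",
        "小问1:5,总分10分,无需关注推理过程,最终答案正确即可",
        "There are 5 apples.",
    )
    out = llm.generate([prompt], params)
    print(out[0].outputs[0].text)  # the scorer's judgment text
```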
### Adapt Your Reference Answer
The most critical aspect of using UNO-Scorer is the proper formatting of the Reference Answer. Specifically:
- Assign point values to the answer components; the total points for the question should sum to 10.
- You may add detailed scoring criteria to each reference answer as needed (e.g., clarifying how to judge cases where the final choice is correct but the reasoning is flawed).

Note: Since the model is primarily trained on Chinese corpora, it follows instructions more accurately when these descriptions are written in Chinese.
You can structure the Reference Answer as follows:

| Question Type | Scenario | Reference Answer | Example |
|---|---|---|---|
| Single Question | The model only needs to check whether the final result matches. | Format as a single sub-question (Sub-question 1) worth exactly 10 points. Template: `小问1:{Answer},总分10分,无需关注推理过程,最终答案正确即可` ("Sub-question 1: {Answer}, 10 points total; the reasoning need not be checked, a correct final answer suffices") | Raw answer: `C` → Input answer: `小问1:C,总分10分,无需关注推理过程,最终答案正确即可` |
| Multiple Question | The model needs to grade specific checkpoints. | Break the answer into numbered sub-steps with assigned points (summing to exactly 10). Template: `1. {Sub-Answer A} ({X} points); 2. {Sub-Answer B} ({Y} points).` | Raw answer: `5 apples, 6 bananas` → Input answer: `1. 5 apples (4 points); 2. 6 bananas (6 points).` |
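To avoid formatting slips when writing multiple-question reference answers by hand, the template can be generated programmatically. The helper below is a sketch and not part of the released toolkit; it follows the English template wording from the table above and enforces the 10-point total.

```python
def format_reference(sub_answers):
    """Format (answer, points) pairs into the numbered multi-question
    template, raising if the points do not sum to exactly 10."""
    total = sum(points for _, points in sub_answers)
    if total != 10:
        raise ValueError(f"points must sum to 10, got {total}")
    return " ".join(
        f"{i}. {answer} ({points} points);"
        for i, (answer, points) in enumerate(sub_answers, start=1)
    )

# Example:
# format_reference([("5 apples", 4), ("6 bananas", 6)])
# → "1. 5 apples (4 points); 2. 6 bananas (6 points);"
```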
Disclaimer: This model is based on Qwen3-14B. Please strictly follow the license and usage policy of the original Qwen model series.