Perceptual Flow Network for Visually Grounded Reasoning
Abstract
Perceptual Flow Network addresses limitations in vision-language models by decoupling perception from reasoning and using variational reinforcement learning with multi-dimensional rewards for improved visual reasoning.
Despite the success of Large Vision-Language Models (LVLMs), general optimization objectives (e.g., standard MLE) fail to constrain visual trajectories, leading to language bias and hallucination. To mitigate this, current methods introduce geometric priors from visual experts as additional supervision. However, we observe that such supervision is typically suboptimal: it is biased toward geometric precision and offers limited reasoning utility. To bridge this gap, we propose the Perceptual Flow Network (PFlowNet), which eschews rigid alignment with expert priors and achieves interpretable yet more effective visual reasoning. Specifically, PFlowNet decouples perception from reasoning to establish a self-conditioned generation process. Building on this, it integrates multi-dimensional rewards with vicinal geometric shaping via variational reinforcement learning, thereby encouraging reasoning-oriented perceptual behaviors while preserving visual reliability. PFlowNet delivers a provable performance guarantee and competitive empirical results, setting new SOTA records on V* Bench (90.6%) and MME-RealWorld-lite (67.0%).
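To make the "multi-dimensional rewards with vicinal geometric shaping" idea concrete, here is a minimal sketch of how such a reward might be composed for a sampled perception-reasoning trajectory. The component names, the IoU-ramp shaping, and the weights are illustrative assumptions, not the paper's actual formulation:

```python
def combined_reward(answer_correct: bool, box_iou: float,
                    format_ok: bool, w=(1.0, 0.5, 0.2)) -> float:
    """Weighted sum of accuracy, vicinal geometric shaping, and format rewards.

    Hypothetical sketch: components and weights are assumptions for
    illustration, not PFlowNet's published reward design.
    """
    # Accuracy reward: 1 if the final answer is correct.
    r_acc = 1.0 if answer_correct else 0.0
    # Vicinal geometric shaping: give soft credit for predicted regions
    # *near* the expert prior rather than demanding exact alignment
    # (here: IoU mapped through a smooth ramp between 0.3 and 0.7).
    r_geo = max(0.0, min(1.0, (box_iou - 0.3) / 0.4))
    # Format reward: trajectory follows the perception-then-reasoning schema.
    r_fmt = 1.0 if format_ok else 0.0
    return w[0] * r_acc + w[1] * r_geo + w[2] * r_fmt
```

The ramp in `r_geo` captures the paper's stated preference: perception that is useful for reasoning is rewarded even when it is not geometrically exact, so the policy is not over-penalized for deviating from the expert prior.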
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Reflect to Inform: Boosting Multimodal Reasoning via Information-Gain-Driven Verification (2026)
- Improving Vision-language Models with Perception-centric Process Reward Models (2026)
- LatentGeo: Learnable Auxiliary Constructions in Latent Space for Multimodal Geometric Reasoning (2026)
- UniDoc-RL: Coarse-to-Fine Visual RAG with Hierarchical Actions and Dense Rewards (2026)
- Visually-Guided Policy Optimization for Multimodal Reasoning (2026)
- Not All Tokens See Equally: Perception-Grounded Policy Optimization for Large Vision-Language Models (2026)
- Beyond Where to Look: Trajectory-Guided Reinforcement Learning for Multimodal RLVR (2026)