
AffordMatcher: Affordance Learning in 3D Scenes from Visual Signifiers (CVPR 2026)

Nghia Vu

[Project Page] [Paper]

Abstract

Affordance learning is a complex challenge in many applications, where existing approaches primarily focus on the geometric structures, visual knowledge, and affordance labels of objects to determine interactable regions. However, extending this learning capability to a scene is significantly more complicated, as incorporating object- and scene-level semantics is not straightforward. In this work, we introduce AffordBridge, a large-scale dataset with $291,637$ functional interaction annotations across $685$ high-resolution indoor scenes in the form of point clouds. Our affordance annotations are complemented by RGB images that are linked to the same instances within the scenes. Building upon our dataset, we propose AffordMatcher, an affordance learning method that establishes coherent semantic correspondences between image-based and point-cloud-based instances for keypoint matching, enabling more precise identification of affordance regions based on cues, so-called visual signifiers.

Overview of AffordMatcher

AffordBridge Dataset

3D affordance labels: We reuse the scenes and 3D affordance labels of the SceneFun3D dataset. Please navigate to the SceneFun3D homepage to download them.

Visual cue images: We ground the affordance labels in each scene by annotating visual cue images, which can be downloaded here.

After downloading, you will obtain the data in the following structure:

finalized_data
├── 420693                  # scene_id
│   ├── hook_pull_0         # action in format <action>_<idx>
│   │   ├── image_0.png     # list of images
│   │   ├── image_1.png
│   │   └── ...
│   ├── hook_pull_1
│   │   ├── image_2.png
│   │   ├── image_3.png
│   │   ├── image_4.png
│   │   └── ...
│   ├── rotate_0
│   │   ├── image_5.png
│   │   ├── image_6.png
│   │   ├── image_7.png
│   │   └── ...
│   └── ...
├── 421093
│   └── ...
└── ...
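To illustrate the layout above, here is a minimal sketch of how one might index the downloaded visual-cue images. The function name and return format are illustrative, not part of an official toolkit; it only assumes the `finalized_data/` structure shown, where each action folder is named `<action>_<idx>`.

```python
from pathlib import Path

def index_visual_cues(root):
    """Map (scene_id, action, idx) -> sorted list of cue-image paths.

    Assumes the finalized_data/ layout:
    root/<scene_id>/<action>_<idx>/image_*.png
    """
    index = {}
    for scene_dir in sorted(Path(root).iterdir()):
        if not scene_dir.is_dir():
            continue
        for action_dir in sorted(scene_dir.iterdir()):
            if not action_dir.is_dir():
                continue
            # Folder names follow <action>_<idx>, e.g. "hook_pull_0";
            # rpartition splits on the last underscore to recover both parts.
            action, _, idx = action_dir.name.rpartition("_")
            images = sorted(action_dir.glob("image_*.png"))
            index[(scene_dir.name, action, int(idx))] = images
    return index
```

For example, `index_visual_cues("finalized_data")[("420693", "hook_pull", 0)]` would return the list of cue images annotated for the first `hook_pull` interaction in scene `420693`.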

Citation

If you find this work interesting and helpful, please consider citing:

@inproceedings{vu2026AffordMatcher,
    title        = {AffordMatcher: Affordance Learning in 3D Scenes from Visual Signifiers},
    author       = {Vu, Nghia and Do, Tuong and Nguyen, Khang and Huang, Baoru and Le, Nhat and Nguyen, Binh X and Tjiputra, Erman and Tran, Quang D and Prakash, Ravi and Chiu, Te-Chuan and Nguyen, Anh},
    year         = {2026},
    booktitle    = {CVPR},
}

License

MIT License
