# ManiSkill-Memory-Dependence Benchmark

A robot manipulation benchmark of four tasks, each probing a different dimension of memory dependence, introduced in the paper "Non-Markovian Long-Horizon Robot Manipulation via Keyframe Chaining".
## Dataset Description
This dataset provides a suite of Non-Markovian manipulation tasks built upon the ManiSkill simulator to measure task success rates in scenarios requiring long-horizon memory and state disambiguation. It is specifically designed to evaluate Vision-Language-Action (VLA) models on their ability to resolve state aliasing and handle memory-dependent operations.
## Benchmark Tasks
The benchmark evaluates models across four distinct memory dependence dimensions:
- Spatial Reconfiguration: The agent must dismantle a vertical stack of three randomly ordered blocks and reconstruct them in a permuted sequence.
- Temporal Sequencing: The robot must perform a "pick-lift-reset" cycle for three colored cubes strictly in the order *Red* → *Green* → *Blue*.
- Counting & Latency: A signal lamp flashes twice with a randomized interval. The agent must count these pulses and push the target only after the second flash.
- Identity Tracking: Three visually identical red blocks are aligned, and an auxiliary arm performs a rapid swap between two of them. The agent is tasked with picking the specific block that was originally in the center.
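All four tasks share the property that the current observation alone does not determine the correct action (state aliasing). The toy sketch below, which is an illustration and not part of the benchmark code, shows this for the Counting & Latency case: the lamp looks identical before the first flash and between flashes, so a memoryless policy cannot tell the two situations apart, while a policy that counts flashes in its history can.

```python
# Toy illustration of state aliasing in the Counting & Latency task.
# Frames are simplified to the strings "off" and "flash".

def markovian_policy(frame):
    # Sees only the current frame: all "off" frames are indistinguishable,
    # so this policy cannot wait for the *second* flash specifically.
    return "push" if frame == "flash" else "wait"

def memory_policy(history):
    # Counts flashes observed so far; pushes only once two have occurred.
    return "push" if history.count("flash") >= 2 else "wait"

frames = ["off", "flash", "off", "off", "flash", "off"]
history = []
for frame in frames:
    history.append(frame)
    # The memoryless policy acts identically on every "off" frame,
    # while the memory policy switches to "push" after the second flash.
    a_markov = markovian_policy(frame)
    a_memory = memory_policy(list(history))
```

The benchmark's other dimensions (reordering a stack, tracking a swapped identical block) induce the same effect: success requires conditioning on history, not just the current frame.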
## Usage
For detailed instructions on how to set up the environment, load the benchmark, and evaluate your models, please refer to our official GitHub repository:
🔗 GitHub Repository: How to use the ManiSkill-Memory-Dependence Benchmark
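ManiSkill environments follow the standard gymnasium `reset`/`step` interface, so evaluation reduces to rolling out a policy and averaging per-episode success. The sketch below uses a tiny stand-in environment so it runs without the simulator installed; the actual environment IDs, observation contents, and evaluation scripts are in the GitHub repository, and the `info["success"]` convention here is an assumption modeled on ManiSkill's usual episode info.

```python
# Sketch of a gymnasium-style success-rate evaluation loop.
# A real run would build the env from the benchmark repository instead
# of this stand-in class, which exists only to keep the sketch runnable.

class StandInEnv:
    """Minimal stand-in exposing the gymnasium reset/step interface."""

    def reset(self, seed=None):
        self.t = 0
        return {"rgb": None}, {}  # (observation, info)

    def step(self, action):
        self.t += 1
        terminated = self.t >= 5           # episode ends after 5 steps
        info = {"success": terminated}     # stand-in: always "succeeds"
        return {"rgb": None}, 0.0, terminated, False, info

def evaluate(env, policy, episodes=10):
    # Average success over `episodes` rollouts, as benchmarks report.
    successes = 0
    for ep in range(episodes):
        obs, info = env.reset(seed=ep)
        terminated = truncated = False
        while not (terminated or truncated):
            obs, reward, terminated, truncated, info = env.step(policy(obs))
        successes += int(info.get("success", False))
    return successes / episodes

rate = evaluate(StandInEnv(), policy=lambda obs: None)  # random/no-op policy
```

Swapping `StandInEnv` for a real benchmark environment and `policy` for a VLA model's action head is all that changes in an actual evaluation.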
## Citation
If you find this benchmark useful in your research, please cite our paper:
```bibtex
@article{KC-VLA,
  title={Non-Markovian Long-Horizon Robot Manipulation via Keyframe Chaining},
  author={Yipeng Chen and Wentao Tan and Lei Zhu and Fengling Li and Jingjing Li and Guoli Yang and Heng Tao Shen},
  journal={arXiv preprint arXiv},
  year={2026},
}
```