
Short-MetaWorld-VLA (v2 + v3)

Overview

This dataset contains the short-MetaWorld collection, used for vision-language-action (VLA) style training and evaluation.

Current local structure includes:

  • 24 task files in r3m_MT10_20 (12 v2 + 12 v3)
  • 100 trajectories per task
  • 20 or 50 steps per trajectory (task/version dependent)
  • 84,000 total step samples from PKL action/state streams
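The per-task figures above are consistent with the 84,000-step total. A minimal arithmetic check, assuming 12 of the 24 tasks use 20-step trajectories and the other 12 use 50-step trajectories (the card does not state which tasks fall in which group):

```python
# Sanity-check the total step count from the per-task figures.
# Assumption: a 12/12 split between 20-step and 50-step tasks;
# the dataset card does not specify which tasks use which length.
tasks_short, tasks_long = 12, 12
trajs_per_task = 100
total_steps = (tasks_short * trajs_per_task * 20
               + tasks_long * trajs_per_task * 50)
print(total_steps)  # 84000
```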

Dataset Structure

short-metaworld-vla/
├── mt50_task_prompts.json
├── short_metaworld_loader.py
├── requirements.txt
├── short-MetaWorld/
│   ├── img_only/
│   │   └── */*/*.jpg
│   └── r3m-processed/
│       └── r3m_MT10_20/
│           ├── *-v2.pkl
│           ├── *-v3.pkl
│           └── data.pkl
└── r3m-processed/
    └── r3m_MT10_20/
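The task files under r3m_MT10_20/ are plain pickle files. A minimal sketch of the load pattern, shown here on a stand-in file (the trajectory schema used below is an assumption; the authoritative reader is the bundled short_metaworld_loader.py):

```python
import pickle
import tempfile
from pathlib import Path

# Stand-in file so the snippet is self-contained; with the real dataset,
# point pkl_path at a task file under r3m_MT10_20/ instead.
# The per-trajectory schema (state/action arrays) is an assumption.
dummy = {"states": [[0.0] * 39], "actions": [[0.0] * 4]}

with tempfile.TemporaryDirectory() as tmp:
    pkl_path = Path(tmp) / "door-open-v2.pkl"
    with pkl_path.open("wb") as f:
        pickle.dump(dummy, f)

    # This is the load pattern itself: one pickle.load per task file.
    with pkl_path.open("rb") as f:
        data = pickle.load(f)

    # Inspect the top-level structure before assuming a schema.
    print(type(data).__name__, sorted(data))
```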

Data Format

Per step:

  • image: RGB frame (.jpg)
  • state: 39D float vector
  • action: 4D float vector
  • prompt: task language instruction (from mt50_task_prompts.json)
  • task_name: task identifier (e.g. button-press-topdown-v3)
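Put together, one training sample can be represented as a small record. A sketch under the assumption that images are addressed by file path and prompts are keyed by task name; the field names and the example path are illustrative, not the loader's actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StepSample:
    image_path: str      # RGB frame (.jpg) under img_only/
    state: List[float]   # 39-D float state vector
    action: List[float]  # 4-D float action vector
    prompt: str          # instruction from mt50_task_prompts.json
    task_name: str       # e.g. "button-press-topdown-v3"

# Hypothetical sample; the image path layout is an assumption.
sample = StepSample(
    image_path="short-MetaWorld/img_only/door-open-v2/000/0000.jpg",
    state=[0.0] * 39,
    action=[0.0] * 4,
    prompt="open the door",
    task_name="door-open-v2",
)
print(len(sample.state), len(sample.action))  # 39 4
```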

Tasks

Includes both -v2 and -v3 variants such as:

  • basketball
  • button-press-topdown
  • door-open
  • drawer-open / drawer-close
  • peg-insert-side
  • pick-place
  • push
  • reach
  • sweep
  • window-open / window-close
  • plus v3-only tasks in this dump (e.g. handle-pull-v3, stick-pull-v3)
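Prompts are matched to tasks by name. A minimal lookup sketch, assuming mt50_task_prompts.json maps task identifiers to instruction strings (the real file's keys and values may differ; the JSON below is a stand-in):

```python
import json

# Stand-in for mt50_task_prompts.json; the real file ships with the
# dataset, and its exact schema is an assumption here.
prompts_json = '{"door-open-v2": "open the door", "reach-v3": "reach the target"}'
prompts = json.loads(prompts_json)

def prompt_for(task_name: str) -> str:
    # Fall back to a generic instruction for unlisted tasks.
    return prompts.get(task_name, f"perform the task: {task_name}")

print(prompt_for("door-open-v2"))  # open the door
```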

🔬 Research Applications

This dataset is designed for:

  • Multi-task Reinforcement Learning: Train policies across multiple manipulation tasks
  • Imitation Learning: Learn from demonstration trajectories
  • Vision-Language Robotics: Connect visual observations with natural language instructions
  • Meta-Learning: Adapt quickly to new manipulation tasks
  • Robot Policy Training: End-to-end visuomotor control

⚖️ License

MIT License; see the LICENSE file for details.
