# Dataset Card for MPI3D-realistic

## Dataset Description
The MPI3D-realistic dataset is a photorealistic synthetic image dataset designed for benchmarking algorithms in disentangled and unsupervised representation learning. It is part of the broader MPI3D dataset suite, which also includes synthetic toy, real-world, and complex real-world variants.
The realistic version was rendered using a physically-based photorealistic renderer applied to CAD models of physical 3D objects. The rendering simulates realistic lighting, materials, and camera effects to closely match the real-world recordings of the MPI3D-real dataset. This enables researchers to systematically study sim-to-real transfer and assess how well models trained on high-fidelity synthetic images generalize to real-world data.
All images depict 3D objects under controlled variations of 7 known factors:
- Object color (6 values)
- Object shape (6 values)
- Object size (2 values)
- Camera height (3 values)
- Background color (3 values)
- Robotic arm horizontal axis (40 values)
- Robotic arm vertical axis (40 values)
The dataset contains 1,036,800 images at a resolution of 64×64 pixels (downsampled from the original resolution for benchmarking, as commonly used in the literature). All factors are identical to those used in the toy and real versions of MPI3D, enabling direct comparisons between different domains.
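The total image count follows directly from the factor cardinalities listed above; a quick arithmetic check:

```python
# Cardinalities of the 7 generative factors, in the order listed above:
# color, shape, size, camera height, background, horizontal axis, vertical axis
factor_sizes = [6, 6, 2, 3, 3, 40, 40]

total = 1
for n in factor_sizes:
    total *= n

print(total)  # 1036800 — one image per factor combination
```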

## Dataset Source
- Homepage: https://github.com/rr-learning/disentanglement_dataset
- License: Creative Commons Attribution 4.0 International
- Paper: Muhammad Waleed Gondal et al. On the Transfer of Inductive Bias from Simulation to the Real World: A New Disentanglement Dataset. NeurIPS 2019.
## Dataset Structure
| Factors | Possible Values |
|---|---|
| object_color | white=0, green=1, red=2, blue=3, brown=4, olive=5 |
| object_shape | cone=0, cube=1, cylinder=2, hexagonal=3, pyramid=4, sphere=5 |
| object_size | small=0, large=1 |
| camera_height | top=0, center=1, bottom=2 |
| background_color | purple=0, sea green=1, salmon=2 |
| horizontal_axis (DOF1) | 0,...,39 |
| vertical_axis (DOF2) | 0,...,39 |
Each image corresponds to a unique combination of these 7 factors. The images are stored in row-major order (the fastest-changing factor is vertical_axis, the slowest-changing factor is object_color).
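Given that row-major ordering, a factor combination maps to a flat image index (and back) via standard mixed-radix arithmetic. A minimal sketch, with factor order and sizes taken from the table above; the function names are illustrative, not part of the dataset's API:

```python
# Factor sizes in slowest-to-fastest order:
# object_color, object_shape, object_size, camera_height,
# background_color, horizontal_axis, vertical_axis
FACTOR_SIZES = (6, 6, 2, 3, 3, 40, 40)

def factors_to_index(factors):
    """Map a 7-tuple of factor values to a flat row-major index."""
    index = 0
    for value, size in zip(factors, FACTOR_SIZES):
        index = index * size + value
    return index

def index_to_factors(index):
    """Invert factors_to_index via repeated divmod."""
    factors = []
    for size in reversed(FACTOR_SIZES):
        index, value = divmod(index, size)
        factors.append(value)
    return tuple(reversed(factors))

# The all-zeros combination is image 0; incrementing vertical_axis
# by one moves to the next image.
assert factors_to_index((0, 0, 0, 0, 0, 0, 0)) == 0
assert factors_to_index((0, 0, 0, 0, 0, 0, 1)) == 1
assert index_to_factors(factors_to_index((2, 3, 1, 0, 2, 17, 5))) == (2, 3, 1, 0, 2, 17, 5)
```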
### Why no train/test split?
The MPI3D-realistic dataset does not provide an official train/test split. It is designed for representation learning research, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.
## Example Usage

Below is a quick example of how to load this dataset with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/mpi3d-realistic", split="train", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]  # [object_color: 0, object_shape: 0, object_size: 0, camera_height: 0, background_color: 0, horizontal_axis: 0, vertical_axis: 0]
color = example["color"]  # 0
shape = example["shape"]  # 0
size = example["size"]  # 0
height = example["height"]  # 0
background = example["background"]  # 0
dof1 = example["dof1"]  # 0
dof2 = example["dof2"]  # 0

image.show()  # Display the image
print(f"Label (factors): {label}")
```
If you are using Google Colab, update the `datasets` library first to avoid errors:

```bash
pip install -U datasets
```
## Citation

```bibtex
@article{gondal2019transfer,
  title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
  author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}
```