GNN Constraint-Aware World Model Dataset (v3)

Real robot episodes with per-frame constraint graphs, SAM2 segmentation masks + 256-D feature embeddings, full 3D depth bundles, and synchronized robot states across two manipulation domains. Both domains share the v3 on-disk layout (same JSON/NPZ schemas, same delta-encoded frame_states, same fully-connected PyG expansion at load time) and now share a unified 270-D node feature format — the PyG loader reads a fixed 10-D type encoding from a YAML config so both domains produce identical node dimensionality.

  • Project: GNN world model for constraint-aware video generation
  • Author: Texas A&M University
  • Hardware: UR5e + Robotiq 2F-85 gripper, OAK-D Pro (static side view)

What's in this repo — at a glance

| Where | Contains | Use for |
| --- | --- | --- |
| session_* (Desktop) and hanoi/session_hanoi_* | Raw episodes + per-frame annotations/ (masks, embeddings, depth bundles, side_graph.json) | Training data for the world model |
| config/type_encoding_*.yaml | Fixed 10-D per-type encoding YAMLs | Loader inputs (pick one per run) |
| gnn_world_model_loader.py | Self-contained PyG loader (one function per variant; also list_all_frame_graphs iterator) | Reading dataset → torch_geometric.data.Data |
| examples/ | Runnable scripts, one per step — see "Full instructions" below | Runnable entry points for every step of the pipeline |
| tools/hanoi_pipeline/ | SAM2-FT checkpoint + full Python pipeline (auto-labeler, HanoiGraphInferer, materializer, src/ modules) | Auto-labeling new Hanoi sessions and RGB→graph inference |

Everything needed to go from a predicted RGB frame to a constraint graph is bundled here — no external repo required. The only exception is Meta's SAM2 base checkpoint (320 MB), which Step 3 below describes how to install.

Domains at a glance

| Domain | Graph variants offered | Node vocab size | Node feature dim | Edge feature dim | Data root |
| --- | --- | --- | --- | --- | --- |
| Desktop disassembly | products-only, with-robot-node, with-robot-state, with-robot-action | 9 (8 products + robot) | 270 | 3 | session_<date>_<time>/episode_XX/ |
| Tower of Hanoi | products-only, with-robot-state, with-robot-action | 4 (ring_1..ring_4) | 270 | 3 | hanoi/session_hanoi_<date>_<time>/episode_XX/ |

Node feature dim = 256 (SAM2 emb) + 3 (3D pos) + 10 (fixed type encoding) + 1 (visibility) = 270. The 10-D type encoding is a fixed, deterministic per-type vector (NOT trained) read from config/type_encoding_random.yaml or config/type_encoding_clip.yaml at load time — so both domains, and any future component vocabulary up to 13 types, share the same node dimension.

Four loader variants (all return torch_geometric.data.Data):

  • load_pyg_frame_products_only — V1 bare graph: products/rings only, no robot info.
  • load_pyg_frame_with_robot — V2 ablation: robot attached as a graph NODE (Desktop only; Hanoi has no robot mask in v1, so this falls back to products-only).
  • load_pyg_frame_with_robot_state — V3 recommended: products-only graph + robot_state=[13] side-tensor. Works for both domains because robot_states.npy is present everywhere.
  • load_pyg_frame_with_robot_action — V3 action-conditioned: same as above + robot_action=[13] delta for the next frame.

The three paper options map cleanly: Option 1 (direct graph encoding) → products_only; Option 2 (encoder → latent → world model with robot context) → with_robot_state; Option 3 (action-conditioned GNN) → with_robot_action.

File layout (same for both domains)

episode_XX/
├── metadata.json            # episode metadata (domain-specific extras)
├── robot_states.npy         # (T, 13) float32 — joints + TCP + gripper
├── robot_actions.npy        # (T-1, 13) float32 — frame deltas
├── timestamps.npy           # (T, 3) float64
├── side/
│   ├── rgb/frame_XXXXXX.png     # 1280×720 RGB
│   └── depth/frame_XXXXXX.npy   # 1280×720 uint16 (mm)
├── wrist/                   # raw wrist camera (not used in v3)
└── annotations/
    ├── side_graph.json          # components, static edges, frame_states
    ├── side_masks/              # {component_id: (H,W) uint8} per frame
    ├── side_embeddings/         # {component_id: (256,) float32} per frame
    ├── side_depth_info/         # flat-keyed depth bundle per frame
    ├── side_robot/              # robot bundle per frame (visible flag)
    └── dataset_card.json        # format description

Alignment guarantee: every labeled frame index has files in all four of side_masks/, side_embeddings/, side_depth_info/, side_robot/. Files are keyed by the same integer frame index, so a loader can key off the mask directory and trust the rest to be present.

Pipeline — four stages from raw video to training-ready graphs

┌─────────────┐     ┌───────────────┐     ┌───────────────────┐     ┌──────────────────┐
│ Collection  │ →   │ Auto-labeling │ →   │ Verification / UI │ →   │  PyG loader @    │
│ (30 Hz RGBD │     │ (SAM2-FT)     │     │ (optional edit)   │     │  training time   │
│  + robot)   │     │               │     │                   │     │                  │
└─────────────┘     └───────────────┘     └───────────────────┘     └──────────────────┘
   episode_XX/         annotations/           annotations/             torch_geometric
                        masks, emb,            (corrected)              .data.Data
                        depth, robot,                                  x=[N,270], edge=[E,3]
                        side_graph.json

Stage 1 — Collection

30 Hz synchronous capture of side RGB + depth + robot state into episode_XX/. No image processing or graph work happens here.

  • Desktop: human teleop via a game controller; the operator decides what to disassemble in what order.
  • Hanoi: autonomous — scripts/hanoi/orchestrator.py pre-plans N missions upfront from the captured initial state, samples each as classical/single_ring/rearrange at 40/40/20 weights, writes metadata.json with goal_prompt, initial_state, target_state, and the deterministic solver_moves (reference action sequence from a classical Hanoi BFS solver). The UR5e executes each mission with blended waypoints and per-ring grasp offsets.
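For reference, the deterministic solver_moves sequence can be reproduced with a plain breadth-first search over ring configurations. The sketch below is illustrative, not the orchestrator's actual code: it assumes pegs are dicts of ring lists (bottom → top) with rings represented as integers 1-4, smallest first.

from collections import deque

def solve_hanoi_bfs(initial_state, target_state):
    """Shortest legal move sequence between two ring configurations (illustrative)."""
    def freeze(state):
        return tuple((p, tuple(state[p])) for p in sorted(state))

    def legal_moves(state):
        for src in state:
            if not state[src]:
                continue
            ring = state[src][-1]                      # only the top ring can move
            for dst in state:
                # Rule 2: a ring may only rest on a larger ring or an empty peg.
                if dst != src and (not state[dst] or state[dst][-1] > ring):
                    yield ring, src, dst

    start, goal = freeze(initial_state), freeze(target_state)
    queue, seen = deque([(initial_state, [])]), {start}
    while queue:
        state, moves = queue.popleft()
        if freeze(state) == goal:
            return moves                               # deterministic reference trajectory
        for ring, src, dst in legal_moves(state):
            nxt = {p: list(s) for p, s in state.items()}
            nxt[dst].append(nxt[src].pop())
            if freeze(nxt) not in seen:
                seen.add(freeze(nxt))
                queue.append((nxt, moves + [(f"ring_{ring}", src, dst)]))
    return None

# Classical mission: all four rings from peg A to peg C (15 moves)
moves = solve_hanoi_bfs({"A": [4, 3, 2, 1], "B": [], "C": []},
                        {"A": [], "B": [], "C": [4, 3, 2, 1]})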

Stage 2 — Auto-labeling (SAM2 detection → graph)

Separate offline step that produces the entire annotations/ tree. Hanoi is fully automatic in v3; Desktop currently uses manual + SAM2-assisted labeling. The Hanoi auto-labeler ships inside this dataset under tools/hanoi_pipeline/ (so users cloning the dataset can reproduce or extend it):

python tools/hanoi_pipeline/scripts/hanoi/auto_label.py <session_dir>

Per-frame algorithm (Hanoi):

  1. Ring detection. HSV range + color-specific mask → largest connected blob → bbox per ring.
  2. SAM2 segmentation. Run SAM2 with (bbox + centroid point) prompt on each ring. The Hanoi-fine-tuned checkpoint is auto-loaded if present (checkpoints/sam2_hanoi_ft.pt); otherwise falls back to vanilla sam2.1_hiera_base_plus.
  3. 256-D embedding. Masked average-pool of SAM2's vision_features spatial grid over each ring mask.
  4. Depth backprojection. Masked pixels → (u, v) + depth → 3D point cloud in camera frame; centroid used as the node position.

Per-episode algorithm (continues the per-frame numbering):

  5. Grasp-interval detection. Read robot_states.npy[:, 12] (Robotiq 2F-85 gripper position, 0-255). Find the lowest stable plateau above the fully-open cutoff (baseline ≈ pre-grasp width), threshold at baseline+10, morphologically close to bridge single-frame glitches, yielding [(start, end, ring_id)] intervals — one per move.
  6. Symbolic state unroll. Starting from initial_state, apply solver_moves[i] after each interval closes, marking the moved ring as held=True during the interval and recording the resulting per-frame constraints / visibility / held dicts as deltas in frame_states. No per-frame ring re-identification is needed; the move plan is ground truth.
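For step 4 above (depth backprojection), the masked-pixel → centroid computation might look like the following sketch, assuming a standard pinhole model with the OAK-D Pro side-camera intrinsics fx, fy, cx, cy (values not reproduced here; the function name is illustrative):

import numpy as np

def mask_to_centroid(depth_mm: np.ndarray, mask: np.ndarray,
                     fx: float, fy: float, cx: float, cy: float):
    """Backproject masked depth pixels to a camera-frame cloud and return its centroid.

    depth_mm : (H, W) uint16 depth in millimetres (side/depth/frame_XXXXXX.npy)
    mask     : (H, W) uint8 SAM2 mask for one component
    """
    v, u = np.nonzero(mask)                            # pixel coords inside the mask
    z = depth_mm[v, u].astype(np.float32) / 1000.0     # mm -> m
    valid = z > 0
    if not np.any(valid):
        return None, None                              # depth_valid = 0 for this node
    u, v, z = u[valid].astype(np.float32), v[valid].astype(np.float32), z[valid]
    x = (u - cx) * z / fx                              # pinhole backprojection
    y = (v - cy) * z / fy
    cloud = np.stack([x, y, z], axis=1)                # (M, 3) camera-frame points
    return cloud.mean(axis=0), cloud                   # centroid = node position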

Stage 3 — Runtime inference (RGB → graph)

The single-frame inferer (tools/hanoi_pipeline/infer_graph_from_frame.py) is what you call inside a world-model prediction loop. Given a predicted RGB (and optional depth), it returns the same graph schema Stage 2 produced — no offline pipeline needed, no temporal context required:

from infer_graph_from_frame import HanoiGraphInferer
inferer = HanoiGraphInferer()
result = inferer(predicted_rgb, depth=predicted_depth)
# result["graph"] / ["masks"] / ["embeddings"] / ["depth_info"] / ["ring_states"]

This is the path from any predicted image straight to a PyG-compatible graph — same masks, same 256-D SAM2 embeddings, same 3-D positions as the training annotations. See Step 3 below for a runnable wrapper.

SAM2 models used in this dataset

Two checkpoints are in play. The Hanoi fine-tuned decoder ships with this dataset under tools/hanoi_pipeline/checkpoints/; the ~320 MB base checkpoint comes from Meta's SAM2 repo (download instructions under "SAM2 base checkpoint" in the full instructions below):

| File | Size | What it contains | When it's used |
| --- | --- | --- | --- |
| sam2.1_hiera_base_plus.pt (Meta AI) | ~320 MB | Full SAM2 model — image encoder + prompt encoder + mask decoder | Loaded as the base. Frozen during fine-tuning and inference |
| sam2_hanoi_ft.pt (this dataset) | ~16 MB | Decoder + prompt_encoder only — fine-tuned weights | Auto-loaded when present; overrides the base decoder/prompt_encoder |

The 16 MB FT checkpoint is small because the image encoder stays frozen at the base SAM2 weights. Training data: ~800 (image, bbox, ground-truth-mask) triples pulled from manually-corrected Hanoi episodes. Per-ring validation IoU (cross-episode held-out solve):

| Ring | Vanilla SAM2 base | Hanoi-FT | Δ |
| --- | --- | --- | --- |
| ring_1 (red) | 0.786 | 0.851 | +6.5 pp |
| ring_2 (yellow) | 0.803 | 0.842 | +3.9 pp |
| ring_3 (green) | 0.814 | 0.854 | +4.0 pp |
| ring_4 (blue) | 0.794 | 0.846 | +5.2 pp |
| macro mean | 0.799 | 0.848 | +4.9 pp |

Biggest gains are on partially-gripper-occluded rings where vanilla SAM2 tended to oversegment onto the gripper finger.

Usage in the world-model prediction loop. At inference time you don't need to run the full auto_label.py pipeline. Use the provided single-frame inferer:

from tools.hanoi_pipeline.infer_graph_from_frame import HanoiGraphInferer

inferer = HanoiGraphInferer()        # loads base + FT once
result = inferer(rgb_image, depth=depth_image)

graph      = result["graph"]         # side_graph.json schema
masks      = result["masks"]         # {ring_1..ring_4: (H, W) uint8}
embeddings = result["embeddings"]    # {ring_1..ring_4: (256,) float32}
depth_info = result["depth_info"]    # flat-keyed 3D bundle (empty if depth=None)
states     = result["ring_states"]   # {ring_id: RingState(peg, stack_index)}

This returns the same schema as the offline pipeline's per-frame output, so the PyG loaders work identically on both sources. Override the checkpoint via SAM2_FINETUNE_CKPT=<path>; set to empty string to force vanilla SAM2.

Desktop Disassembly Domain

Components (9 types)

Eight product types + one robot agent. Multiple instances (e.g. ram_1, ram_2) share the same 10-D type encoding and are disambiguated by SAM2 embedding + 3D position.

| Index | Type | Color | Notes |
| --- | --- | --- | --- |
| 0 | cpu_fan | #FF6B6B | Always visible at start |
| 1 | cpu_bracket | #4ECDC4 | Hidden at start (under fan) |
| 2 | cpu | #45B7D1 | Hidden at start |
| 3 | ram_clip | #96CEB4 | Multi-instance |
| 4 | ram | #FFEAA7 | Multi-instance |
| 5 | connector | #DDA0DD | Multi-instance |
| 6 | graphic_card | #FF8C42 | Always visible |
| 7 | motherboard | #8B5CF6 | Always visible (base) |
| 8 | robot | #F5F5F5 | Agent node (stored separately in side_robot/) |

Sparse constraint edges

Directed prerequisite relations — A -> B means "A must be removed before B can be removed":

cpu_fan      -> cpu_bracket         (fan covers bracket)
cpu_fan      -> motherboard
cpu_bracket  -> cpu
cpu_bracket  -> motherboard
cpu          -> motherboard
ram_N        -> motherboard
ram_clip_N   -> motherboard
ram_clip_N   -> ram_M               (user pairs manually)
connector_N  -> motherboard
graphic_card -> motherboard

Typical episode has 10-15 product nodes and 10-14 stored directed edges.

Node feature layout (270-D)

[0   : 256]   SAM2 embedding (256)       — masked avg pool over vision_features
[256 : 259]   3D position (3)            — centroid in camera frame (meters)
[259 : 269]   type encoding (10)         — fixed 10-D vector from
                                            config/type_encoding_<method>.yaml
                                            (shared across domains)
[269]         visibility (1)             — 1 if visible this frame, else 0

Total: 270-D. The 10-D type slot is a deterministic encoding (NOT trained) — see "Fixed 10-D type encoding — how it's made" below.

Available Desktop episodes

| Session / Episode | Labeled frames | Goal |
| --- | --- | --- |
| session_0408_162129/episode_00 | 346 | cpu_fan |
| session_0410_125013/episode_00 | 473 | cpu_fan |
| session_0410_125013/episode_01 | 525 | graphic_card |

Total: 1344 frames.

Tower of Hanoi Domain

Components (4 types) — rings only, no robot node in v1

Hanoi episodes use native ring IDs (ring_1 .. ring_4) in components and as npz keys — no desktop-proxy remapping, and no robot node in v1. type_vocab is ["ring_1", "ring_2", "ring_3", "ring_4"] (length 4). Robot segmentation is deferred; side_robot/*.npz is zero-filled per frame for format uniformity but never becomes a graph node.

Note on V2 vs V3 for Hanoi. V2 (with_robot — robot as graph node) requires a labeled robot mask/embedding and is therefore Desktop-only in v1. V3 (with_robot_state / with_robot_action) uses the 13-D robot_states.npy trace, which IS recorded for Hanoi too — so V3 loaders work for both domains.

| ID | Color | Disk size | Role |
| --- | --- | --- | --- |
| ring_1 | red (#E63946) | 32 mm | Smallest |
| ring_2 | yellow (#F1C40F) | 42 mm | |
| ring_3 | green (#2ECC71) | 52 mm | |
| ring_4 | blue (#2E86DE) | 62 mm | Largest |

Mask .npz files carry the literal keys ring_1, ring_2, ring_3, ring_4. No robot in type_vocab, no robot edges, no robot node appended at load time.

Mission kinds (40 / 40 / 20 sampling)

Every Hanoi metadata.json records mission_kind, goal_prompt, initial_state, target_state, and solver_moves. The sampler picks a kind per episode:

| Kind | Weight | Target |
| --- | --- | --- |
| classical | 0.40 | All 4 rings stacked in size order on one peg |
| single_ring | 0.40 | One designated ring moved to a new peg; every other ring returns to its initial peg in size order |
| rearrange | 0.20 | Uniformly sampled valid (larger-under-smaller) configuration |

Physical peg layout (important for prompt grounding)

Throughout the dataset, rings move between three pegs labelled A, B, and C. A text-conditioned world model has no way to know which letter corresponds to which physical peg, so the goal_prompts use physically-grounded labels paired with the (peg A/B/C) cross-reference:

| Label | Side-camera view (primary) | Wrist-camera view (auxiliary) |
| --- | --- | --- |
| peg A — the near peg | closest to the camera (bottom of image) | right side of the frame |
| peg B — the middle peg | middle of the image | middle of the frame |
| peg C — the far peg | farthest from the camera (top of image) | left side of the frame |

All structural fields (initial_state, target_state, solver_moves, edge lists in side_graph.json) continue to use the letter labels A/B/C — they're stable identifiers that downstream loaders already depend on. Only the natural-language goal_prompt uses the descriptive names.

Goal-prompt format (run Step 10 once after download)

The goal_prompt is the self-contained, natural-language task description a video world model (e.g. Cosmos Predict 2.5) reads:

"Starting state: <S>.  Task: <T>.  Target state: <G>."

where <S> and <G> enumerate the ring layout top-of-stack first per peg, using the grounded labels above, and <T> is a plain-English instruction derived from mission_kind. Full examples after running Step 10:

# classical
"Starting state: red → green (top → bottom) on the near peg (peg A);
                 yellow → blue (top → bottom) on the far peg (peg C).
 Task: stack all rings onto the far peg in size order (smallest on top),
       solving the Hanoi puzzle.
 Target state: red → yellow → green → blue (top → bottom) on the far peg."

# single_ring
"Starting state: green alone on the near peg (peg A); blue alone on the middle peg (peg B);
                 red → yellow (top → bottom) on the far peg (peg C).
 Task: move the blue ring from the middle peg to the near peg; every other ring must end up
       back on its starting peg, sorted smallest-on-top.
 Target state: green → blue (top → bottom) on the near peg; red → yellow on the far peg."

# rearrange
"Starting state: red → yellow → green → blue (top → bottom) on the far peg (peg C).
 Task: rearrange the rings into the target configuration below; any legal move sequence that
       reaches the target is acceptable.
 Target state: blue alone on the near peg; red → yellow → green on the middle peg."

Why a separate script? The dataset shipped to HF was captured and uploaded over several weeks while the prompt format was iterated. To avoid re-uploading hundreds of gigabytes every time the prompt template improves, the canonical prompt is re-derived locally by examples/10_upgrade_prompts.py from three stable structural fields that never change: mission_kind, initial_state, target_state. Run it once after download and every episode's goal_prompt + side_graph.json gets normalised to the form above.

Structural edges (static, always 6)

The 6 smaller → larger directed pairs are stored verbatim in side_graph.json:

ring_1 -> ring_2     ring_1 -> ring_3     ring_1 -> ring_4
                     ring_2 -> ring_3     ring_2 -> ring_4
                                          ring_3 -> ring_4

At PyG load time the loader expands to 4 × 3 = 12 fully-connected directed edges. The reverse (larger → smaller) direction carries the same has_constraint / is_locked but flipped src_blocks_dst.

Per-frame is_locked semantics

is_locked = 1 on edge (A, B) iff A is currently the immediately-stacked ring on top of B on the same peg (adjacent in the peg-stack with A above B). Every other pair — non-adjacent on the same peg, on different pegs, or with either ring in transit — gets is_locked = 0. This is strictly "physical stacking right now," not "A must move before B."

Held-ring rule (captures "constraint broken during transit")

When the robot holds a ring (gripper closed between grasp and release of that move), the ring is in transit and no longer touches any other ring. The auto-labeler flags held = 1 for that ring on every held frame, and every edge touching it gets is_locked = 0 — the constraint is physically broken mid-move. On release, the new adjacency emerges and that edge flips back to is_locked = 1.
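Put together, the two rules reduce to a small per-edge predicate. A minimal sketch, assuming a per-frame symbolic state of peg assignments, stack positions (0 = bottom of the peg), and held flags (names here are illustrative, not the auto-labeler's actual code):

def edge_is_locked(src, dst, peg, stack_pos, held) -> int:
    """is_locked(src, dst) = 1 iff src sits immediately on top of dst on the same peg
    and neither ring is currently held (in transit)."""
    if held.get(src, False) or held.get(dst, False):
        return 0                                       # held-ring rule: transit breaks contact
    return int(peg[src] == peg[dst] and stack_pos[src] == stack_pos[dst] + 1)

# ring_1 resting directly on ring_2, both on peg A, nothing held
peg       = {"ring_1": "A", "ring_2": "A", "ring_3": "B"}
stack_pos = {"ring_1": 1, "ring_2": 0, "ring_3": 0}
assert edge_is_locked("ring_1", "ring_2", peg, stack_pos, {}) == 1
assert edge_is_locked("ring_1", "ring_3", peg, stack_pos, {}) == 0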

Implementation: auto_label.py reads robot_states.npy[:, 12] (gripper position, Robotiq 2F-85, 0-255) and detects grasp intervals via baseline-mode thresholding (estimate "resting open" mode, threshold at baseline + margin, binary-close morphologically to bridge single-frame glitches). It then zips the resulting intervals with solver_moves in order — the k-th grasp interval is assigned to the k-th move. Validated on ep_00 (1 move, 1 interval), ep_01 (15 moves, 15 intervals), ep_02 (1 move, 1 interval). Per-frame held deltas are recorded as frame_states[f].held = {ring_id: True|False}.
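A sketch of that interval detection (illustrative only; the real implementation lives in auto_label.py and may differ in details, and SciPy is assumed here for the morphological closing):

import numpy as np
from scipy.ndimage import binary_closing

def detect_grasp_intervals(robot_states: np.ndarray, margin: float = 10.0, close_width: int = 3):
    """Return [(start_frame, end_frame)] index pairs, one per detected grasp.

    robot_states : (T, 13); column 12 is the Robotiq 2F-85 gripper position (0-255).
    """
    grip = robot_states[:, 12].astype(np.float32)
    vals, counts = np.unique(np.round(grip), return_counts=True)
    baseline = vals[np.argmax(counts)]                                 # "resting open" mode
    closed = grip > baseline + margin                                  # True while squeezing
    closed = binary_closing(closed, structure=np.ones(close_width))    # bridge 1-frame glitches
    intervals, start = [], None
    for t, c in enumerate(closed):
        if c and start is None:
            start = t
        elif not c and start is not None:
            intervals.append((start, t - 1))
            start = None
    if start is not None:
        intervals.append((start, len(closed) - 1))
    return intervals

Zipping these intervals in order with solver_moves then assigns the k-th interval to the k-th move, as described above.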

Rule 2 — "larger must never sit on smaller"

Encoded without a new feature via the edge's existing src_blocks_dst bit:

| Edge direction | src_blocks_dst | Meaning |
| --- | --- | --- |
| smaller → larger (e.g. ring_1 -> ring_3) | 1 | Legal — smaller may rest on larger |
| larger → smaller (e.g. ring_3 -> ring_1) | 0 | Illegal — larger may not rest on smaller |

Three dimension-preserving ways the world model can respect Rule 2:

| Method | Where | One-liner | Guarantee |
| --- | --- | --- | --- |
| Training loss | objective | λ * (pred_is_locked * (1 - src_blocks_dst)).sum() | Soft (shapes distribution) |
| Rollout mask | inference | Reject any predicted is_locked = 1 where src_blocks_dst = 0 | Hard (eliminates illegal) |
| Dataset invariant | this spec | is_locked is never 1 on a larger→smaller edge in any training frame | Hard (on training distribution) |
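A minimal sketch of the first two methods (tensor names are illustrative; the actual training-time implementation is in examples/02_train_gnn.py):

import torch

def rule2_penalty(edge_logits: torch.Tensor, edge_attr: torch.Tensor, lam: float = 1.0):
    """Soft Rule-2 term: penalise predicted locking on larger->smaller edges.

    edge_logits : (E,) raw logits from the per-edge is_locked head
    edge_attr   : (E, 3) [has_constraint, is_locked, src_blocks_dst]
    """
    src_blocks_dst = edge_attr[:, 2]
    return lam * (torch.sigmoid(edge_logits) * (1.0 - src_blocks_dst)).sum()

def rule2_rollout_mask(pred_is_locked: torch.Tensor, edge_attr: torch.Tensor):
    """Hard rollout mask: zero any predicted lock where src_blocks_dst = 0."""
    return pred_is_locked * (edge_attr[:, 2] > 0).float()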

Node feature layout (270-D)

[0   : 256]   SAM2 embedding (256)
[256 : 259]   3D position (3)
[259 : 269]   type encoding (10)         — fixed 10-D vector from
                                            config/type_encoding_<method>.yaml
                                            (shared with Desktop)
[269]         visibility (1)

Total: 270-D — identical to Desktop. The 10-D encoding is domain-independent; unknown/unlisted types encode to a zero vector.

Mission metadata saved per episode

Every Hanoi side_graph.json carries goal_prompt, mission_kind, and target_state in addition to the fields shared with Desktop. Per-frame transitions (grasps, releases, re-stacks) are recorded as deltas in frame_states[f] with constraints, visibility, and held sub-dicts.

Hanoi episodes available

| Session | Episodes | Frames | Storage | Notes |
| --- | --- | --- | --- | --- |
| hanoi/session_hanoi_0415_190808 | 3 | 7,479 | expanded | Initial Hanoi pilot: 1 × classical 15-move solve + 2 × single-ring moves (manual + teleop) |
| hanoi/session_hanoi_0417_133613 | 7 | 10,968 | expanded | Autonomous orchestrator, initial 4-stack on peg B, 40/40/20 mission mix, 1-10 moves/episode |
| hanoi/session_hanoi_0417_144403 | 20 | 30,942 | expanded | Autonomous orchestrator, initial 4-stack on peg A, 40/40/20 mission mix, 1-10 moves/episode |
| hanoi/session_hanoi_0417_164816 | 20 | 64,185 | 18 expanded + 2 zips | Autonomous orchestrator, initial 4-stack on peg C, min 3 moves/episode (no upper cap). episode_18.zip + episode_19.zip zipped |
| hanoi/session_hanoi_0420_132840 | 20 | 85,790 | 20 zips | Autonomous orchestrator, initial 2+0+2 (ring_1/ring_3 on peg A, ring_2/ring_4 on peg C), min 3 moves/episode (no cap). Every episode stored as episode_XX.zip |
| hanoi/session_hanoi_0423_165447 | 20 | 84,837 | 20 zips | Autonomous orchestrator, initial 2+1+1 (ring_1+ring_4 on peg A, ring_3 on peg B, ring_2 on peg C), 50/25/25 mission mix (10 classical / 5 single_ring / 5 rearrange), min 5 / max 15 moves/episode — first session with both move bounds. Every episode stored as episode_XX.zip |

Total across all Hanoi sessions: 90 episodes, 284,201 frames. Each episode_XX/metadata.json records the exact mission_kind, goal_prompt, initial_state, target_state, and solver_moves for that episode. All autonomous sessions are produced by scripts/hanoi/orchestrator.py, which pre-plans all N missions upfront from the captured initial state, resamples any mission exceeding the per-episode move cap, and records a deterministic solver reference trajectory for each accepted mission.

Zipped episodes. Three sessions have episodes stored as uncompressed (zip -0) archives rather than expanded directory trees — session_hanoi_0417_164816 has only its last two episodes zipped, while every episode of session_hanoi_0420_132840 and session_hanoi_0423_165447 is a zip. This is because HuggingFace datasets have a hard cap of 1 million files per repository, and expanding every annotated frame (∼14 files per frame × ~280 K frames across all sessions) would have exceeded it. Extract before use:

# session_hanoi_0417_164816 — only 2 zipped episodes
cd hanoi/session_hanoi_0417_164816
unzip episode_18.zip       # → episode_18/
unzip episode_19.zip       # → episode_19/

# all-zip sessions — every episode is a zip
for sess in session_hanoi_0420_132840 session_hanoi_0423_165447; do
    cd "hanoi/$sess"
    for z in episode_*.zip; do unzip "$z"; done
    cd -
done

Once unzipped, the on-disk layout is identical to every other episode_XX/ directory in this dataset (same metadata.json, robot_states.npy, side/, wrist/, annotations/ tree, loadable by the exact same PyG loaders below). Expanded sessions require no pre-processing.

Graph generation for Hanoi (reference)

The full pipeline that produced every annotations/ tree above is checked in under tools/hanoi_pipeline/ in this repo. For the pipeline overview, algorithm details, and SAM2 checkpoint stats see the Pipeline and SAM2 models sections above. For the single-frame runtime inferer (use it inside a world-model prediction loop to turn a predicted RGB back into a graph), see tools/hanoi_pipeline/infer_graph_from_frame.py and tools/hanoi_pipeline/README.md.

Per-frame graph retrieval — how it works (important)

Every frame in every episode has its own distinct graph. The dataset stores them as a (structural skeleton + per-frame deltas) decomposition rather than N JSON files per episode, because the skeleton is the same every frame and the deltas are small. This cuts ~6000× disk-space per episode while losing zero information — the loader reconstructs each frame's full graph on demand.

Where each piece of a per-frame graph lives:

| Component of the frame-T graph | File |
| --- | --- |
| Node list (which rings exist) + structural edges (smaller→larger pairs) | annotations/side_graph.json → components, edges (shared across all frames) |
| is_locked / visibility / held as of frame T | annotations/side_graph.json → frame_states (delta-encoded up to T) |
| SAM2 mask of each ring at frame T | annotations/side_masks/frame_TTTTTT.npz |
| 256-D SAM2 embedding at frame T | annotations/side_embeddings/frame_TTTTTT.npz |
| 3D position (centroid) + bbox + depth-valid flag at frame T | annotations/side_depth_info/frame_TTTTTT.npz |
| Robot state at frame T | robot_states.npy[T] (13-D) |

The PyG loader combines these into a torch_geometric.data.Data object for exactly that frame — node features differ per frame (new embeddings + new 3D positions + new visibility flags), and edge features differ per frame (is_locked bits flip as rings are stacked / unstacked / held mid-transit).

To get a distinct graph for every labeled frame in an episode: use the list_all_frame_graphs helper below, or run scripts/materialize_per_frame_graphs.py to materialize them as individual .pt (and optional .json) files on disk.

Where edge-feature transitions live

The 3-D edge_attr vector is [has_constraint, is_locked, src_blocks_dst]. Of these, only is_locked changes over time — it flips when a ring lifts off / lands on another ring (or enters/exits the held state mid-transit). has_constraint and src_blocks_dst are static per edge.

Every transition of is_locked (and every transition of held) is recorded as a delta in side_graph.json under frame_states. The key is the frame index at which the transition happens; the value lists exactly which entries changed. Example from a real Hanoi single-move episode:

"frame_states": {
  "0":   {"constraints": {"ring_1->ring_2": true,  "ring_2->ring_3": true,
                          "ring_3->ring_4": true}},                // initial stack
  "134": {"constraints": {"ring_1->ring_2": false},                // ring_1 lifted OFF ring_2
          "held":        {"ring_1": true}},                        // ring_1 now in transit
  "278": {"constraints": {"ring_1->ring_3": true},                 // ring_1 placed on ring_3
          "held":        {"ring_1": false}}
}

The loader's resolve_frame_state(graph_json, T) walks frame_states in ascending key order up to T, applies every listed constraint/held delta, and returns the resolved state at frame T. That resolved state then populates edge_attr[:, 1] (the is_locked column) and the held flags that zero out edges touching rings in transit. So for frame 200 in the example above, ring_1->ring_2 is unlocked and every other edge touching ring_1 is also unlocked (held-ring rule), whereas ring_3->ring_4 is still locked (never changed).

Bottom line: there's no separate edge-feature file per frame — the transitions are packed into one delta dict in side_graph.json, and the loader replays them to give you the exact edge_attr for whichever frame you ask for.

Shared: PyG edge feature semantics (3-D, both domains)

edge_attr[k] = [has_constraint, is_locked, src_blocks_dst]

| has_constraint | is_locked | src_blocks_dst | Meaning |
| --- | --- | --- | --- |
| 0 | 0 | 0 | No physical constraint — message passing only. Used for: robot ↔ anything; Hanoi larger → smaller (non-edge at the pair level) |
| 1 | 1 | 1 | Constraint active, src is the blocker (physical Desktop) / src rests on top (physical Hanoi) |
| 1 | 1 | 0 | Same pair, reverse direction — src is the blocked / src is underneath |
| 1 | 0 | 1 | Constraint released, src was the blocker / legal rest direction with no contact right now |
| 1 | 0 | 0 | Same released pair, reverse direction |

Symmetry invariants: has_constraint and is_locked are symmetric per unordered pair (same value for (i, j) and (j, i)). src_blocks_dst flips between the two directions. Robot ↔ anything edges are always [0, 0, 0].

Shared: Fixed 10-D type encoding — how it's made

Across both domains the component-type universe is 13 types (the two vocabularies unioned):

cpu_fan, cpu_bracket, cpu, ram_clip, ram, connector, graphic_card, motherboard,
ring_1, ring_2, ring_3, ring_4, robot

Each type is assigned a fixed 10-D vector. The encoding is NOT trained — it is a deterministic lookup read from a YAML at load time, so any consumer of the dataset gets the exact same node features bit-for-bit. Two methods are provided; both YAMLs live at the dataset repo root alongside the session directories:

| Method | YAML file | How vectors are built | Semantic structure |
| --- | --- | --- | --- |
| random | config/type_encoding_random.yaml | numpy.random.default_rng(42) unit-norm 10-vectors, one per type | None — vectors are orthogonal-ish noise |
| clip | config/type_encoding_clip.yaml | CLIP ViT-B/32 text embedding of a humanised prompt (e.g. "a CPU fan", "a small red ring") → PCA to 10 → unit-normalise | Related types cluster (the four rings are close; the fan/bracket/cpu cluster is tight) |

Unknown type → 10-D zero vector. If a component's type is not in the YAML, the loader returns np.zeros(10, dtype=np.float32) for that slot. This keeps node dim at 270 regardless of vocabulary drift.

To reproduce or extend: download whichever YAML you want from the dataset repo root, load it with yaml.safe_load, and look up each component's type. The loader code below shows the full pattern.
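For the random method, a regeneration sketch in the same spirit is shown below. The exact type order, rounding, and draw pattern of the shipped YAML are not guaranteed to match this sketch bit-for-bit, so always load the shipped config/type_encoding_random.yaml rather than regenerating it:

import numpy as np
import yaml

TYPES = [
    "cpu_fan", "cpu_bracket", "cpu", "ram_clip", "ram", "connector",
    "graphic_card", "motherboard", "ring_1", "ring_2", "ring_3", "ring_4", "robot",
]

rng = np.random.default_rng(42)                        # fixed seed -> deterministic vectors
encoding = {}
for t in TYPES:
    v = rng.standard_normal(10)
    encoding[t] = (v / np.linalg.norm(v)).tolist()     # unit-norm 10-D vector per type

with open("type_encoding_random_regenerated.yaml", "w") as f:
    yaml.safe_dump(encoding, f, sort_keys=False)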

Shared: PyG loader — self-contained Python

Prerequisites

pip install torch numpy torch_geometric pillow pyyaml

Save as gnn_world_model_loader.py

The key design property: node_dim = 256 + 3 + 10 + 1 = 270 for both domains. The 10-D type slot comes from the fixed YAML encoding (loaded once), so there's no domain branching — Desktop, Hanoi, and any future vocabulary all produce 270-D nodes.

import json
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
from typing import Dict, List, Optional
import numpy as np
import torch
import yaml
from torch_geometric.data import Data

# ---------- constants ----------
TYPE_ENCODING_DIM = 10          # fixed, domain-independent
SAM2_EMB_DIM = 256
POS_DIM = 3
VIS_DIM = 1
NODE_DIM = SAM2_EMB_DIM + POS_DIM + TYPE_ENCODING_DIM + VIS_DIM   # = 270
ROBOT_STATE_DIM = 13            # [j0..j5, tcp_x, tcp_y, tcp_z, tcp_rx, tcp_ry, tcp_rz, gripper_pos]


# ---------- fixed type encoding ----------
# Download once from the dataset repo root:
#   config/type_encoding_random.yaml   (seeded numpy unit vectors, seed=42)
#   config/type_encoding_clip.yaml     (CLIP ViT-B/32 text → PCA(10) → unit-norm)
# Point TYPE_ENCODING_ROOT at wherever you saved them.
TYPE_ENCODING_ROOT = Path("./config")


@lru_cache(maxsize=4)
def load_type_encoding(encoding_method: str = "random") -> Dict[str, np.ndarray]:
    """Load the fixed 10-D per-type encoding from YAML. Cached across calls."""
    path = TYPE_ENCODING_ROOT / f"type_encoding_{encoding_method}.yaml"
    with open(path) as f:
        raw = yaml.safe_load(f)
    return {k: np.asarray(v, dtype=np.float32) for k, v in raw.items()}


def type_encode(comp_type: str, encoding_method: str = "random") -> np.ndarray:
    """Return 10-D vector for `comp_type`; zeros for unknown types."""
    table = load_type_encoding(encoding_method)
    vec = table.get(comp_type)
    if vec is None:
        return np.zeros(TYPE_ENCODING_DIM, dtype=np.float32)
    return vec.astype(np.float32)


# ---------- file helpers ----------
def list_labeled_frames(episode_dir: Path) -> List[int]:
    mask_dir = episode_dir / "annotations" / "side_masks"
    if not mask_dir.exists():
        return []
    frames = []
    for p in mask_dir.glob("frame_*.npz"):
        try:
            frames.append(int(p.stem.split("_")[1]))
        except (ValueError, IndexError):
            continue
    return sorted(frames)


def resolve_frame_state(graph_json: dict, frame_idx: int):
    """Replay frame_states deltas up to frame_idx; returns (constraints, visibility, held)."""
    constraints, visibility, held = {}, {}, {}
    for c in graph_json["components"]:
        visibility[c["id"]] = True
        held[c["id"]] = False
    for e in graph_json["edges"]:
        constraints[f"{e['src']}->{e['dst']}"] = True
    fs_dict = graph_json.get("frame_states", {})
    for f in sorted([int(k) for k in fs_dict]):
        if f > frame_idx:
            break
        fs = fs_dict[str(f)]
        for k, v in fs.get("constraints", {}).items():
            constraints[k] = v
        for k, v in fs.get("visibility", {}).items():
            visibility[k] = v
        for k, v in fs.get("held", {}).items():
            held[k] = v
    return constraints, visibility, held


@dataclass
class FrameData:
    graph: dict
    masks: dict
    embeddings: dict
    depth_info: dict
    robot: Optional[dict]
    constraints: dict
    visibility: dict
    held: dict


def load_frame_data(episode_dir, frame_idx):
    anno = Path(episode_dir) / "annotations"
    with open(anno / "side_graph.json") as f:
        graph = json.load(f)
    def _npz(p):
        if not p.exists(): return {}
        d = np.load(p)
        return {k: d[k] for k in d.files}
    masks = _npz(anno / "side_masks" / f"frame_{frame_idx:06d}.npz")
    embeddings = _npz(anno / "side_embeddings" / f"frame_{frame_idx:06d}.npz")
    depth_info = _npz(anno / "side_depth_info" / f"frame_{frame_idx:06d}.npz")
    robot = None
    rp = anno / "side_robot" / f"frame_{frame_idx:06d}.npz"
    if rp.exists():
        r = np.load(rp)
        if r["visible"][0] == 1:
            robot = {k: r[k] for k in r.files}
    constraints, visibility, held = resolve_frame_state(graph, frame_idx)
    return FrameData(graph, masks, embeddings, depth_info, robot, constraints, visibility, held)


def _build_product_node_features(nodes, fd, encoding_method):
    feats = []
    for node in nodes:
        cid = node["id"]
        emb = fd.embeddings.get(cid, np.zeros(SAM2_EMB_DIM, dtype=np.float32))
        dvk = f"{cid}_depth_valid"; ck = f"{cid}_centroid"
        if dvk in fd.depth_info and int(fd.depth_info[dvk][0]) == 1:
            pos = fd.depth_info[ck].astype(np.float32)
        else:
            pos = np.zeros(POS_DIM, dtype=np.float32)
        vis = 1.0 if fd.visibility.get(cid, True) else 0.0
        if vis == 0.0:
            emb = np.zeros(SAM2_EMB_DIM, dtype=np.float32)
            pos = np.zeros(POS_DIM, dtype=np.float32)
        feats.append(np.concatenate([
            emb.astype(np.float32),
            pos,
            type_encode(node["type"], encoding_method),
            np.array([vis], dtype=np.float32),
        ]))
    if not feats:
        return torch.empty((0, NODE_DIM), dtype=torch.float32)
    return torch.tensor(np.stack(feats), dtype=torch.float32)


def _build_product_edges(nodes, graph, fd):
    N = len(nodes)
    constraint_set = {(e["src"], e["dst"]) for e in graph["edges"]}
    pair_forward = {frozenset([s, d]): (s, d) for s, d in constraint_set}
    src_idx, dst_idx, edge_attr = [], [], []
    for i in range(N):
        for j in range(N):
            if i == j: continue
            src_id, dst_id = nodes[i]["id"], nodes[j]["id"]
            src_idx.append(i); dst_idx.append(j)
            key = frozenset([src_id, dst_id])
            if key in pair_forward:
                fwd = pair_forward[key]
                is_locked = fd.constraints.get(f"{fwd[0]}->{fwd[1]}", True)
                # Held-ring rule: a component in transit touches nothing, so every
                # edge incident to it is unlocked for this frame.
                if fd.held.get(src_id, False) or fd.held.get(dst_id, False):
                    is_locked = False
                sb = 1.0 if src_id == fwd[0] else 0.0
                edge_attr.append([1.0, 1.0 if is_locked else 0.0, sb])
            else:
                edge_attr.append([0.0, 0.0, 0.0])
    return src_idx, dst_idx, edge_attr


# ---------- 1) products-only (Option 1: direct graph encoding) ----------
def load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method: str = "random"):
    fd = load_frame_data(episode_dir, frame_idx)
    nodes = fd.graph["components"]
    x = _build_product_node_features(nodes, fd, encoding_method)
    src, dst, ea = _build_product_edges(nodes, fd.graph, fd)
    return Data(
        x=x,
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(ea, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=len(nodes),
    )


# ---------- 2) V2 ablation: robot as graph NODE (Desktop only) ----------
def load_pyg_frame_with_robot(episode_dir, frame_idx, encoding_method: str = "random"):
    fd = load_frame_data(episode_dir, frame_idx)
    # Hanoi has no robot mask/embedding in v1 → fall back to products-only.
    if fd.robot is None:
        return load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method)

    products = fd.graph["components"]
    N_prod = len(products); N = N_prod + 1

    x_prod = _build_product_node_features(products, fd, encoding_method)
    robot_emb = fd.robot["embedding"].astype(np.float32)
    robot_pos = (fd.robot["centroid"].astype(np.float32)
                 if int(fd.robot["depth_valid"][0]) == 1
                 else np.zeros(POS_DIM, dtype=np.float32))
    robot_feat = np.concatenate([
        robot_emb, robot_pos,
        type_encode("robot", encoding_method),
        np.array([1.0], dtype=np.float32),
    ])
    x = torch.cat([x_prod, torch.tensor(robot_feat, dtype=torch.float32).unsqueeze(0)], dim=0)

    src, dst, ea = _build_product_edges(products, fd.graph, fd)
    robot_idx = N_prod
    for i in range(N_prod):
        src.append(robot_idx); dst.append(i); ea.append([0.0, 0.0, 0.0])
        src.append(i); dst.append(robot_idx); ea.append([0.0, 0.0, 0.0])

    data = Data(
        x=x,
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(ea, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=N,
    )
    data.robot_point_cloud = torch.tensor(fd.robot["point_cloud"], dtype=torch.float32)
    data.robot_pixel_coords = torch.tensor(fd.robot["pixel_coords"], dtype=torch.int32)
    data.robot_mask = torch.tensor(fd.robot["mask"], dtype=torch.uint8)
    return data


# ---------- 3) V3 recommended: products graph + robot_state side-tensor ----------
def load_pyg_frame_with_robot_state(episode_dir, frame_idx, encoding_method: str = "random"):
    data = load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method)
    robot_states = np.load(Path(episode_dir) / "robot_states.npy")   # (T, 13) float32
    rs = robot_states[frame_idx].astype(np.float32)                  # 13-D
    data.robot_state = torch.tensor(rs, dtype=torch.float32)
    return data


# ---------- 4) V3 action-conditioned: + robot_action delta ----------
def load_pyg_frame_with_robot_action(episode_dir, frame_idx, encoding_method: str = "random"):
    data = load_pyg_frame_with_robot_state(episode_dir, frame_idx, encoding_method)
    robot_states = np.load(Path(episode_dir) / "robot_states.npy")   # (T, 13)
    T = robot_states.shape[0]
    if frame_idx + 1 < T:
        action = robot_states[frame_idx + 1] - robot_states[frame_idx]
    else:
        action = np.zeros(ROBOT_STATE_DIM, dtype=np.float32)
    data.robot_action = torch.tensor(action.astype(np.float32), dtype=torch.float32)
    return data


# ---------- 5) Generator: one distinct graph per labeled frame ----------
_VARIANTS = {
    "products_only":     load_pyg_frame_products_only,
    "with_robot":        load_pyg_frame_with_robot,
    "with_robot_state":  load_pyg_frame_with_robot_state,
    "with_robot_action": load_pyg_frame_with_robot_action,
}


def list_all_frame_graphs(
    episode_dir,
    variant: str = "with_robot_state",
    encoding_method: str = "random",
):
    """Yield (frame_idx, Data) for every labeled frame in an episode.

    Each `Data` object is the full per-frame graph (node features, edges,
    edge features, and any requested side tensors). Feature values and
    `is_locked` bits differ per frame as rings move / stack / get held.
    """
    if variant not in _VARIANTS:
        raise ValueError(f"variant must be one of {list(_VARIANTS)}, got {variant!r}")
    loader = _VARIANTS[variant]
    for f in list_labeled_frames(Path(episode_dir)):
        yield f, loader(episode_dir, f, encoding_method=encoding_method)

Usage examples

All four loaders share the signature (episode_dir, frame_idx, encoding_method="random"). Swap "random" for "clip" to use the CLIP-derived encoding instead.

Desktop V1 — 15 product nodes, 270-D features, fully-connected edges (15×14 = 210):

from pathlib import Path
from gnn_world_model_loader import load_pyg_frame_products_only

episode = Path("session_0408_162129/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=42)
print(data)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3])

Desktop V3 (recommended) — same graph + 13-D robot_state side-tensor:

from gnn_world_model_loader import load_pyg_frame_with_robot_state

data = load_pyg_frame_with_robot_state(episode, frame_idx=42)
print(data)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3], robot_state=[13])

Desktop V3 action-conditioned — adds 13-D delta for the next frame:

from gnn_world_model_loader import load_pyg_frame_with_robot_action

data = load_pyg_frame_with_robot_action(episode, frame_idx=42)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3],
#         robot_state=[13], robot_action=[13])

Hanoi V1 — 4 ring nodes, 270-D features, 12 fully-connected edges:

episode = Path("hanoi/session_hanoi_0415_190808/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=250)
print(data)
# → Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3])

Hanoi V3 (recommended) — V3 works for Hanoi too because robot_states.npy is recorded for every episode:

data = load_pyg_frame_with_robot_state(episode, frame_idx=250)
print(data)
# → Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3], robot_state=[13])

V2 note. load_pyg_frame_with_robot falls back to load_pyg_frame_products_only on Hanoi (no robot mask), so on Hanoi, V1 and V2 return identical graphs. On Desktop, V2 attaches the robot as a 16th node (x shape becomes [16, 270]).

How to use this dataset — full instructions

All common tasks ship as runnable Python scripts in this repo. No copy-pasting from the README — download, run, get results.

Everything included in this repo

| File / Directory | What it does |
| --- | --- |
| gnn_world_model_loader.py (root) | PyG loader — reads annotations + robot_states, produces one Data per frame |
| config/type_encoding_{random,clip}.yaml | Fixed 10-D per-type encoding YAMLs (loader reads these) |
| examples/01_inspect_episode.py | Step 1 — inspect per-frame graphs in one episode |
| examples/02_train_gnn.py | Step 2 — train an is_locked GAT + save checkpoint |
| examples/03_infer_from_rgb.py | Step 3 — RGB+depth → graph (SAM2) |
| examples/04_materialize_per_frame.py | Step 4 — dump .pt/.json per frame |
| examples/05_build_canonical_position_lut.py | Step 5 — aggregate (ring, peg, stack) → 3D centroid LUT |
| examples/06_infer_from_predicted_rgb.py | Step 6 — RGB-only (no depth) → graph via LUT (for world-model-predicted frames) |
| examples/07_gnn_to_worldmodel_latent.py | Step 7 — load frozen pretrained GNN → per-frame WM latent |
| examples/08_joint_train_gnn_cosmos.py | Step 8 — joint train GNN + Cosmos Predict 2.5 |
| examples/09_verify_goal_prompts.py | Step 9 — inspect / sanity-check every episode's goal_prompt vs target_state |
| examples/10_upgrade_prompts.py | Step 10 — one-off, run right after download to normalise every goal_prompt into the canonical grounded form |
| tools/hanoi_pipeline/ | Full Hanoi auto-labeling pipeline (SAM2-FT checkpoint, configs, auto_label.py, infer_graph_from_frame.py, src/ modules) |

Step numbers match the numeric prefix on every script so you can cross-reference at a glance. Every script is self-pathing (resolves the loader / configs / pipeline tools relative to its own location), so cd into the repo root or run the scripts with absolute paths — either works.

Step 0 — Download the dataset

pip install huggingface_hub torch torch_geometric numpy pyyaml opencv-python pillow

# Full pull (~150 GB)
hf download ChangChrisLiu/GNN_Disassembly_WorldModel --repo-type dataset --local-dir ./gnn_world_model

# Or slim pull — just one Hanoi episode plus the code + configs:
hf download ChangChrisLiu/GNN_Disassembly_WorldModel --repo-type dataset \
    --include "gnn_world_model_loader.py" "examples/*" "config/*" \
               "tools/hanoi_pipeline/**" \
               "hanoi/session_hanoi_0415_190808/episode_00/**" \
    --local-dir ./gnn_world_model

cd gnn_world_model

Some episodes ship as uncompressed zip archives (HF has a 1 M file-per-repo cap): the last two episodes of session_hanoi_0417_164816, plus every episode of session_hanoi_0420_132840 and session_hanoi_0423_165447. Unzip the 0417_164816 pair once (the all-zip sessions use the loop shown under "Zipped episodes" above):

cd hanoi/session_hanoi_0417_164816
unzip episode_18.zip    # → episode_18/
unzip episode_19.zip    # → episode_19/
cd ../..

After this, every example script resolves imports and data paths automatically.

SAM2 base checkpoint (needed only for Steps 3, 6, and the Bonus auto-label)

Steps 3 and 6 — and the "Bonus: reproducing the dataset" section — load Meta AI's SAM2 base model to perform segmentation on new RGB inputs. We ship the Hanoi fine-tuned decoder (tools/hanoi_pipeline/checkpoints/sam2_hanoi_ft.pt), but not the 320 MB SAM2 base — download it from Meta's repo:

# 1. Clone SAM2 (installs the Python package + the configs/)
git clone https://github.com/facebookresearch/sam2
pip install -e ./sam2

# 2. Download the base checkpoint (or see sam2/checkpoints/download_ckpts.sh)
# 3. Place / symlink it where tools/hanoi_pipeline/ can find it:
mkdir -p tools/hanoi_pipeline/sam2/checkpoints
cp /path/to/sam2.1_hiera_base_plus.pt tools/hanoi_pipeline/sam2/checkpoints/
# OR symlink your existing SAM2 install root:
ln -sf /absolute/path/to/your/sam2 tools/hanoi_pipeline/sam2

Once the checkpoint is reachable at tools/hanoi_pipeline/sam2/checkpoints/sam2.1_hiera_base_plus.pt, Steps 3 and 6 work directly. Steps 1, 2, 4, 5, 7, 8 don't need SAM2 — they only use the already-labeled annotations shipped with the dataset.

Step 1 — Load one episode and inspect its per-frame graphs

Verified script (see examples/01_inspect_episode.py):

python examples/01_inspect_episode.py \
    --episode hanoi/session_hanoi_0415_190808/episode_00

It enumerates every labeled frame of the episode, builds a PyG Data for one of them (shape [4, 270] for node features, [12, 3] for edges), prints the current is_locked and src_blocks_dst bits, and shows the locked-edge count varying per frame.

Every labeled frame produces a distinct graph. The static skeleton (which rings exist, which structural pairs are possible) lives in side_graph.json; the time-varying bits (is_locked, held, node features) come from per-frame npz files. The loader reassembles them on demand, which is why a single .json per episode is enough.

The --variant flag (forwarded to the loader) picks which Data schema you want:

| Variant | What you get |
| --- | --- |
| products_only | Bare graph — nodes + edges, no robot info |
| with_robot | Desktop V2 — robot as a graph node (falls back to products_only for Hanoi) |
| with_robot_state | Recommended — graph + robot_state=[13] side tensor (works for both domains) |
| with_robot_action | with_robot_state + robot_action=[13] delta for the next frame |

Step 2 — Train a GNN on Hanoi graphs

Verified script (see examples/02_train_gnn.py):

python examples/02_train_gnn.py                                   # all Hanoi episodes
python examples/02_train_gnn.py --sessions hanoi/session_hanoi_0415_190808
python examples/02_train_gnn.py --epochs 10 --batch-size 32 --lr 3e-4

The script:

  1. Walks every labeled frame in the selected sessions and materialises a flat list of PyG Data objects.
  2. Wraps them in a PyG DataLoader (auto-batching of nodes + edges + per-graph robot_state).
  3. Trains a 2-layer GATConv predictor for per-edge is_locked, conditioned on the broadcast robot_state.
  4. Adds a Rule-2 compliance term Σ σ(logits) * (1 - legal_mask) over edges, penalising any prediction that would lock a larger→smaller edge.

Key detail baked into the script (and the README's loader code for reference): after PyG batching, data.robot_state is a flat 1-D tensor of length num_graphs × 13 (PyG's default __cat_dim__ = 0 for 1-D attrs). The script reshapes via data.robot_state.view(-1, 13)[data.batch] before concatenating onto node features.
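In code, that reshape-and-broadcast step looks roughly like the following (a minimal sketch assuming dataset is a list of Data objects produced by load_pyg_frame_with_robot_state):

import torch
from torch_geometric.loader import DataLoader

loader = DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))

# PyG concatenates the per-graph 1-D robot_state tensors, so batch.robot_state has
# shape (num_graphs * 13,). Reshape, then broadcast each graph's state onto its nodes.
robot_state = batch.robot_state.view(-1, 13)                  # (num_graphs, 13)
per_node_state = robot_state[batch.batch]                     # (total_nodes, 13)
x_conditioned = torch.cat([batch.x, per_node_state], dim=1)   # (total_nodes, 270 + 13)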

Step 3 — Inference: turn a predicted RGB into a graph

Verified script (see examples/03_infer_from_rgb.py). Requires Meta's SAM2 + base checkpoint installed (see their repo):

python examples/03_infer_from_rgb.py \
    --rgb   hanoi/session_hanoi_0415_190808/episode_00/side/rgb/frame_000100.png \
    --depth hanoi/session_hanoi_0415_190808/episode_00/side/depth/frame_000100.npy \
    --out   /tmp/predicted_graph.json

Internally it loads tools/hanoi_pipeline/infer_graph_from_frame.py's HanoiGraphInferer, which auto-selects tools/hanoi_pipeline/checkpoints/sam2_hanoi_ft.pt (the Hanoi-FT) on top of the vanilla SAM2 base. Output is a 5-field dict identical to the offline pipeline:

| Key | Shape / type | Meaning |
| --- | --- | --- |
| graph | dict (same schema as side_graph.json) | Nodes + structural edges, frame_states empty (single frame has no history) |
| masks | {ring_id: (H, W) uint8} | SAM2 mask per detected ring |
| embeddings | {ring_id: (256,) float32} | Mask-pooled SAM2 vision features |
| depth_info | flat dict {ring_id_centroid, ring_id_point_cloud, ...} | 3-D bundle (empty if --depth omitted) |
| ring_states | {ring_id: RingState(peg, stack_index)} | Inferred peg assignment |

From this, build a PyG Data by stitching SAM2 embedding + 3-D position + type_encode(c["type"]) + visibility bit per component — exactly what the training loaders do internally. Override the checkpoint path with SAM2_FINETUNE_CKPT=<path>, or set it to empty to force vanilla SAM2.
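A sketch of that stitching, reusing the constants and type_encode helper from gnn_world_model_loader.py. The is_locked column is left at 0 here because a single predicted frame carries no stacking history; derive it from result["ring_states"] if you need it:

import numpy as np
import torch
from torch_geometric.data import Data
from gnn_world_model_loader import type_encode, SAM2_EMB_DIM, POS_DIM

def inferer_result_to_pyg(result, encoding_method: str = "random") -> Data:
    """Turn a HanoiGraphInferer result dict into a 270-D-per-node PyG graph (sketch)."""
    comps = result["graph"]["components"]
    feats = []
    for c in comps:
        cid = c["id"]
        emb = np.asarray(result["embeddings"].get(cid, np.zeros(SAM2_EMB_DIM)), dtype=np.float32)
        centroid = result["depth_info"].get(f"{cid}_centroid")
        pos = (np.asarray(centroid, dtype=np.float32) if centroid is not None
               else np.zeros(POS_DIM, dtype=np.float32))
        vis = 1.0 if cid in result["masks"] else 0.0
        feats.append(np.concatenate([emb, pos, type_encode(c["type"], encoding_method),
                                     np.array([vis], dtype=np.float32)]))
    x = torch.tensor(np.stack(feats), dtype=torch.float32)        # (N, 270)

    # Fully-connected directed edges; structural smaller->larger pairs set
    # has_constraint / src_blocks_dst, is_locked stays 0 (no per-frame history).
    pairs = {(e["src"], e["dst"]) for e in result["graph"]["edges"]}
    src, dst, attr = [], [], []
    for i, ci in enumerate(comps):
        for j, cj in enumerate(comps):
            if i == j:
                continue
            src.append(i)
            dst.append(j)
            if (ci["id"], cj["id"]) in pairs:
                attr.append([1.0, 0.0, 1.0])
            elif (cj["id"], ci["id"]) in pairs:
                attr.append([1.0, 0.0, 0.0])
            else:
                attr.append([0.0, 0.0, 0.0])
    return Data(x=x, edge_index=torch.tensor([src, dst], dtype=torch.long),
                edge_attr=torch.tensor(attr, dtype=torch.float32), num_nodes=len(comps))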

Step 4 — Materialize per-frame graphs to disk (optional)

Target use: inspect a specific frame's graph outside PyTorch, or hand per-frame graph files to a non-PyTorch consumer.

python examples/04_materialize_per_frame.py \
    hanoi/session_hanoi_0415_190808/episode_00 \
    --out     ./per_frame_graphs \
    --variant with_robot_state \
    --also-json

Writes frame_XXXXXX.pt (and optionally diff-friendly frame_XXXXXX.json) per labeled frame. Reload:

import torch
data = torch.load("per_frame_graphs/frame_000100.pt", weights_only=False)
print(data)          # Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3], robot_state=[13])

For in-process iteration without writing files, use list_all_frame_graphs(...) from the loader module.
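For example, iterating every labeled frame of one episode without touching disk:

from gnn_world_model_loader import list_all_frame_graphs

for frame_idx, data in list_all_frame_graphs(
        "hanoi/session_hanoi_0415_190808/episode_00", variant="with_robot_state"):
    print(frame_idx, data)      # one Data(x=[4, 270], ...) per labeled frame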

Step 5 — Build a canonical (ring, peg, stack_index) → 3-D centroid LUT

Target use: prerequisite for Step 6 (inference on world-model-predicted frames that have no depth). Since the physical rig is fixed, every ring's 3-D centroid at a given (peg, stack_index) is near-constant across all episodes — so you can aggregate them once into a lookup table.

python examples/05_build_canonical_position_lut.py \
    [--hanoi-root hanoi] [--out config/hanoi_canonical_positions.yaml]

Walks every labeled Hanoi frame, buckets (ring_id, peg, stack_index) → centroid, averages, and writes config/hanoi_canonical_positions.yaml. Each row records the mean centroid, per-axis std, and sample count so you can spot-check coverage. The more episodes you run this over, the better the coverage; the full HF release exercises essentially all legal (ring, peg, stack) triplets.

Step 6 — RGB → graph on a world-model-predicted frame (no depth)

Target use: during rollout of a video world model (Cosmos Predict 2.5, VideoPoet, DiT, …), each predicted RGB needs to be turned back into a constraint graph — but predicted frames typically don't come with depth. SAM2 still detects rings and their peg / stack_index; you substitute the depth-based centroid with a lookup from Step 5's LUT.

python examples/06_infer_from_predicted_rgb.py \
    --rgb predicted_future_frame.png \
    [--lut config/hanoi_canonical_positions.yaml] \
    [--out graph.json]

Internally this (1) runs HanoiGraphInferer(rgb, depth=None) — returning masks, embeddings, structural edges, and ring_states — then (2) fills depth_info from the LUT by looking up each detected ring's (peg, stack_index). The output is the same 5-field dict as Step 3, so downstream code is unchanged.

LUT misses (e.g. a predicted state you never observed in training) are surfaced explicitly in the script's output; add more labeled sessions to improve coverage, or fall back to vanilla Step 3 when depth is available.
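The LUT substitution itself is a small lookup. The sketch below assumes a nested YAML layout of ring → peg → stack_index entries with a centroid field; that layout is an assumption for illustration, so check the file Step 5 actually writes before relying on it:

import numpy as np
import yaml

def fill_positions_from_lut(result, lut_path="config/hanoi_canonical_positions.yaml"):
    """Substitute canonical centroids for missing depth on a HanoiGraphInferer result.

    Assumed (illustrative) LUT layout: lut[ring_id][peg][stack_index] = {"centroid": [x, y, z], ...}
    """
    with open(lut_path) as f:
        lut = yaml.safe_load(f)
    for ring_id, state in result["ring_states"].items():
        entry = lut.get(ring_id, {}).get(state.peg, {}).get(state.stack_index)
        if entry is None:
            print(f"LUT miss: {ring_id} at ({state.peg}, {state.stack_index})")
            continue
        result["depth_info"][f"{ring_id}_centroid"] = np.asarray(entry["centroid"], dtype=np.float32)
        result["depth_info"][f"{ring_id}_depth_valid"] = np.array([1], dtype=np.uint8)
    return result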

Two ways to wire the GNN into a world model (Steps 7 vs 8)

There are exactly two sensible architectures. Pick one per Step below:

| | Step 7 — pretrained GNN → WM latent | Step 8 — joint GNN + WM training |
| --- | --- | --- |
| GNN weights | Trained first via Step 2, then frozen | Trained jointly with WM backbone |
| Gradient flow | WM → fixed graph latent (stops at GNN) | WM ↔ GNN (bidirectional) |
| Use when | WM is a black box you can't backprop through (external API, 3rd-party pipeline), or you want to ablate "does graph conditioning help at all?" | You control the WM architecture and want joint optimisation — GNN learns conditioning that's directly useful to the WM's reconstruction loss |
| Stability | GNN stays at the edge-prediction quality Step 2 achieved | Joint training can degrade the GNN if the WM loss dominates |

Step 7 — Pretrained GNN → world-model conditioning latent

Target use: your GNN is already trained (via Step 2) and you want to USE its per-frame output as an extra conditioning stream for a WM you treat as a black box. GNN weights do not update here.

# 1. Train the GNN (Step 2 already does this; save a checkpoint)
python examples/02_train_gnn.py --epochs 10 --out checkpoints/gnn_is_locked.pt

# 2. Load the checkpoint (frozen), produce [T, 2H] latent per episode, and
#    demonstrate concatenation into the WM's text/context stream:
python examples/07_gnn_to_worldmodel_latent.py \
    --ckpt    checkpoints/gnn_is_locked.pt \
    --episode hanoi/session_hanoi_0415_190808/episode_00 \
    --out     /tmp/wm_conditioning.pt

Step 7 loads the checkpoint's state_dict into the IsLockedPredictor module, freezes it, runs each frame's graph through it to get per-node embeddings [N, H], mean/max pools to [2H], then demonstrates a torch.nn.Linear(2H → wm_text_dim) projection followed by concatenation with the WM's text embedding — exactly what you'd wire into Cosmos Predict 2.5's transformer.context_embedder call site.
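In outline, that conditioning path might look like the sketch below, where gnn stands for the frozen backbone loaded from the Step-2 checkpoint; its call signature and the hidden size H are assumptions, and the real wiring is in examples/07_gnn_to_worldmodel_latent.py:

import torch
from torch_geometric.nn import global_mean_pool, global_max_pool

H, wm_text_dim = 128, 1024                              # illustrative sizes
project = torch.nn.Linear(2 * H, wm_text_dim)

def frame_to_wm_token(gnn, data):
    """One conditioning token per frame: node embeddings -> mean/max pool -> 2H -> wm_text_dim."""
    with torch.no_grad():                               # the GNN is frozen in Step 7
        node_emb = gnn(data.x, data.edge_index, data.edge_attr)   # (N, H), assumed signature
    batch = torch.zeros(node_emb.size(0), dtype=torch.long)       # single graph
    pooled = torch.cat([global_mean_pool(node_emb, batch),
                        global_max_pool(node_emb, batch)], dim=1)  # (1, 2H)
    return project(pooled)                                         # concat with WM text embedding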

Step 8 — Joint training: GNN + Cosmos Predict 2.5 (end-to-end world model)

Target use: you control the WM architecture and want the GNN trained together with the video backbone — gradients flow through both, so the GNN learns conditioning that minimises the WM's reconstruction loss directly.

python examples/08_joint_train_gnn_cosmos.py --epochs 2 --batch-size 2

Architecture:

┌──────────────────────┐
│  Cosmos Predict 2.5  │   ← frozen / LoRA backbone, predicts next-frame RGB
└──────────┬───────────┘
           │ reconstruction loss (pixel / KL)
           ▼
     ┌─────────────────────┐
     │ fusion: cond-token  │ ← [T, 2H] tokens from GraphConditioningEncoder
     │ stream concatenated │   (defined in-file; trained jointly — unlike
     │ alongside Cosmos'   │    Step 7's frozen pretrained path)
     │ text/image stream   │
     └─────────┬───────────┘
               │
               ▼  constraint-aware prediction
         per-edge `is_locked` logits
               │
               ▼  supervision
     BCE  + Rule-2 soft compliance

Total loss: L_wm (reconstruction) + λ_edge · L_edge_BCE + λ_rule · L_rule2.

The script ships a CosmosStub that stands in for the real Cosmos_Predict2_Video2World_Pipeline so it runs on CPU without downloading the 2-B-parameter weights; the docstring shows the exact replacement block for a real run. Gradients flow through GNN → fusion token → Cosmos → pixel loss, so the GNN is literally co-trained with the world model, not bolted on afterward.
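The combined objective reduces to a few lines. A sketch with illustrative weights and a placeholder pixel loss standing in for whatever the real Cosmos pipeline returns:

import torch
import torch.nn.functional as F

def joint_loss(pred_rgb, target_rgb, edge_logits, edge_attr,
               lambda_edge: float = 1.0, lambda_rule: float = 0.1):
    """L_wm + λ_edge · L_edge_BCE + λ_rule · L_rule2 (weight values are illustrative)."""
    l_wm = F.mse_loss(pred_rgb, target_rgb)                              # placeholder reconstruction
    l_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_attr[:, 1])
    l_rule2 = (torch.sigmoid(edge_logits) * (1.0 - edge_attr[:, 2])).mean()
    return l_wm + lambda_edge * l_edge + lambda_rule * l_rule2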

Step 9 — Verify every episode's goal_prompt locally

Target use: after hf download, render a self-contained prose description of every episode's task so you can read the dataset without needing to decode what peg A/B/C mean. Also checks that each stored goal_prompt matches what we would canonically derive from its mission_kind + target_state.

# All Hanoi sessions (full prose form with starting state / task / target state)
python examples/09_verify_goal_prompts.py --all-hanoi

# One session
python examples/09_verify_goal_prompts.py --session hanoi/session_hanoi_0420_132840

# One episode
python examples/09_verify_goal_prompts.py --episode hanoi/session_hanoi_0420_132840/episode_04

# One-liner (raw goal_prompt only, no prose expansion)
python examples/09_verify_goal_prompts.py --all-hanoi --compact

# Only flag disagreements between stored goal_prompt and canonical form
python examples/09_verify_goal_prompts.py --all-hanoi --mismatches-only

Full output for a single episode looks like:

episode_04  [single_ring, 15 moves]
  Starting state — green ring (alone) on peg A; blue ring (alone) on peg B;
                   red → yellow  (top → bottom) on peg C.
  Task: move the blue ring from peg B to peg A.  Every other ring must end up
        back on its original peg, sorted smallest-on-top — any intermediate
        displacements must be undone.
  Target state — green → blue  (top → bottom) on peg A;
                 red → yellow  (top → bottom) on peg C.

The top of the output also prints a "physical layout" preamble explaining that peg A is the far peg in the side camera view, peg C is the closest, and peg B sits between them — so the peg letters cross-reference cleanly with the preview videos. On the current HF release you should see 0 prompt-vs-target mismatches.

Step 10 — Normalise all goal_prompts after download (run once)

Target use: the dataset on HF ships with goal_prompt values from different iterations of the prompt template. Run this right after hf download (and unzipping the zipped sessions) to rewrite every episode's goal_prompt into the canonical grounded form documented above.

# Preview (no writes):
python examples/10_upgrade_prompts.py --all-hanoi --dry-run

# Apply:
python examples/10_upgrade_prompts.py --all-hanoi

The script:

  • only touches metadata.json and annotations/side_graph.json (tiny files, near-instant)
  • re-derives the canonical prompt from (mission_kind, initial_state, target_state) — the three fields that never change per episode
  • is idempotent — re-running produces no changes
  • requires no network access

After running, python examples/09_verify_goal_prompts.py --all-hanoi --mismatches-only should report 0 mismatches.

Step 11 — Evaluate and validate a trained world model

Target use: you trained a WM (Step 7 or Step 8) and want to (a) measure absolute prediction quality and (b) prove statistically that your GNN conditioning is the reason for the quality. Both questions are answered with the same generation runs — pair them once, score everything from those windows.

This section is the canonical eval recipe shared across the team: when someone runs an eval, they should follow this exact protocol so numbers are comparable across runs and across people. Comments throughout explain why each choice was made — most decisions trace to a specific failure mode we've seen or an explicit research-claim alignment.

Scope: side view only — and why that's the right scope for our claim

We predict the side camera only. Wrist view exists in the dataset but is not a generation target. The reasoning, in order:

  1. The graph is anchored in the side view. side_graph.json, side_masks/, side_depth_info/ — every annotation lives in side-camera pixel/3-D coordinates. There is intentionally no wrist_graph.json because the wrist view's frame of reference is the moving robot, which has no stable spatial relationship to ring positions.
  2. Task state is observable from the side. "Which ring is on which peg" is the variable our GNN encodes and the WM is supposed to track. From the side camera you can read it off; from the wrist camera the gripper often occludes the very thing you'd be tracking.
  3. Our research claim is about graph conditioning, not multi-view consistency. Adding wrist-view prediction would be a different research question and the GNN has no privileged information to offer there. The cleanest paper is "graph-conditioned side-view prediction"; multi-view is future work.

For contrast, Ctrl-World (ICLR 2026, arXiv 2510.10125) predicts 3 cameras jointly (1 wrist + 2 third-person at 192×320 each); their controllability story is partly about multi-view consistency, and their ablation explicitly removes "multi-view joint prediction" as one of three knobs. That's a different research question. If a reviewer asks why we don't do joint multi-view, the answer is: the GNN's information lives in the side view; jointly predicting wrist would test multi-view conditioning, not graph conditioning.

Why short-horizon windowed eval (read this first)

The honest test of a long-horizon world model is open-loop autoregressive rollout to completion — but at 60–180 s episode lengths, drift dominates: by 30 s the prediction has wandered far enough from GT that any per-frame metric reflects "how badly does the model drift" rather than "how well does the GNN's conditioning help." You can't attribute differences in the metric to the GNN's contribution when noise from compounding error has already overwhelmed the signal.

The fix — used by short-horizon protocols like Ctrl-World — is to slice each test episode into short windows, seed each window with the GT frame at its start, and compare the predicted window to the GT window. Drift is bounded inside the window; differences between models are attributable to conditioning quality.

For Hanoi the natural window is one move (≈8–12 s, the pick-and-place that takes a ring from src_peg to dst_peg). Move boundaries are interpretable, task-relevant, and already saved in every episode's metadata.json.moves + frame_states held-deltas. 5–15 moves/episode × 15 episodes ≈ 75–225 windows — plenty for stable means and statistical tests.

What the GNN provides that prompt-only doesn't

The GNN injects four signals into the world-model conditioning stream that text alone can't:

  1. Identityring_1ring_2 even when partially occluded or visually similar
  2. Spatial position — current 3-D centroid of each ring
  3. Constraintsis_locked / src_blocks_dst edges encoding Hanoi rules
  4. Held state — which ring the gripper is currently holding

A reliable validation has to show the GNN's contribution in metrics that target these specific signals, at horizons short enough that drift can't dominate.

Two co-equal evaluation tracks (run both, report separately)

| Track | Window | Seed | Answers | Role |
|---|---|---|---|---|
| A. Validation (paired ablation) | one move (~10 s) | GT frame + GT graph at move start | does the GNN's conditioning cause the lift? | headline — confirms graph contribution |
| B. Drift characterization | entire episode | first frame only | how does each model degrade with horizon? | co-equal — confirms the static-side-view problem (see below) doesn't hide failure |

Both are necessary. Track A says "the GNN helps over a controlled 10 s window." Track B says "and the failure modes that show up over longer horizons are also visible to our metrics." Without B, a reviewer can argue that any A-style controlled eval is teacher-forcing the model into looking good. Without A, B is just a drift report and doesn't isolate the GNN's contribution.

The earlier framing called Track B "secondary" — that was wrong. In our setup the static side view makes drift partially invisible to averaged whole-frame metrics (see "Static side view → ROI" below). Specifically characterizing drift on the dynamic workspace region is therefore not a sanity check, it's a load-bearing piece of evidence. Promoted to co-equal.

Don't conflate them — they look superficially similar (both use horizon as an axis) but they answer different questions. Track A re-seeds each window from a GT frame so drift can't dominate; Track B lets drift compound on purpose to characterize its shape.

Test split

Lock a held-out set out of training and model selection. Recommended principles:

  • ≈15 episodes (≈17 % of the current 90) — large enough to span structure, small enough to hand-score per-window task success in under 2 hours.
  • Span all three mission kinds (classical, single_ring, rearrange) and 1- to 15-move counts.
  • Mix session dates so a single session's lighting / calibration drift doesn't dominate.
  • Skip session_hanoi_0423_165447 for any metric that uses robot_states.npy — that session's RTDEReceive was wedged at collection, so all 12 proprioceptive columns are frozen. Vision (RGB / depth / masks / embeddings) is fine.

Deterministic example (every 6th episode of every "good" session):

import pathlib

DATA_ROOT = pathlib.Path("hanoi")
GOOD_SESSIONS = [
    "session_hanoi_0416_173445",
    "session_hanoi_0418_140012",
    "session_hanoi_0420_132840",
]

test_episodes = []
for s in GOOD_SESSIONS:
    eps = sorted((DATA_ROOT / s).glob("episode_*"))
    test_episodes += [str(e) for e in eps[::6]]

Save to config/test_split.txt and commit. Never train on these.
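One way to write that file (one episode path per line, matching the plain lists the eval snippets below iterate over):

pathlib.Path("config/test_split.txt").write_text("\n".join(test_episodes) + "\n")
# later: test_episodes = pathlib.Path("config/test_split.txt").read_text().splitlines()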

Pilot first (do not skip)

Run the L2 identity-preservation curve below on 2 episodes before committing to the full eval. If with-GNN doesn't separate from prompt-only at 10 s on those two episodes, the fusion mechanism is too weak — the GNN isn't being used effectively, and no full eval will rescue that. Pilot takes <1 hour and saves you 1–2 weeks if there's an architectural problem.

Decision rule: proceed only if the 10 s gap in identity-preservation rate exceeds 0.10 on the pilot. If the gap doesn't appear, fix the architecture before continuing.
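A minimal sketch of that check, reusing the identity_curve and paired_bootstrap_diff helpers defined under Layer 2 below (300 frames = 10 s at 30 fps):

pilot_eps = test_episodes[:2]
_, raw_w = identity_curve(model_with,    pilot_eps, [300], setup)
_, raw_o = identity_curve(model_without, pilot_eps, [300], setup)
gap, lo, hi = paired_bootstrap_diff(raw_w[300], raw_o[300])
print(f"pilot 10 s identity gap: {gap:+.2f}  95% CI [{lo:+.2f}, {hi:+.2f}]")
assert gap > 0.10, "fusion too weak; fix the architecture before the full eval"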

Generating paired predictions

The validation track requires running both models on identical seeds so the comparison is paired. Re-seed torch.manual_seed and np.random.seed between the two model calls — otherwise the second model uses RNG state already advanced by the first call and the comparison isn't paired.

Deriving move windows from frame_states

Move boundaries are encoded as held-state deltas in annotations/side_graph.json.frame_states. Each (False → True) transition starts a move; the matching (True → False) transition ends it. Pad ±15 frames to include pre-grasp and post-release:

import json
from pathlib import Path

def derive_move_windows(episode_dir):
    """Returns [(t_start, t_end, ring, src_peg, dst_peg), ...] aligned to metadata.moves."""
    ep    = Path(episode_dir)
    graph = json.loads((ep / "annotations/side_graph.json").read_text())
    md    = json.loads((ep / "metadata.json").read_text())

    fs = sorted([(int(k), v) for k, v in graph["frame_states"].items()])
    prev_held = {r: False for r in graph["type_vocab"]}
    pickups, drops = [], []
    for t, state in fs:
        cur_held = {**prev_held, **state.get("held", {})}
        for r, h in cur_held.items():
            if h and not prev_held[r]:
                pickups.append((t, r))
            if not h and prev_held[r]:
                drops.append((t, r))
        prev_held = cur_held

    out = []
    for (t0, r0), (t1, r1), m in zip(pickups, drops, md["moves"]):
        assert r0 == r1 == m["ring"], f"held trace ({r0}) ≠ move ring ({m['ring']})"
        out.append((max(0, t0 - 15), t1 + 15, r0, m["src_peg"], m["dst_peg"]))
    return out
Paired generation loop
import numpy as np, torch, json
from pathlib import Path

def run_paired_eval(model_with, model_without, test_episodes, base_seed=42):
    """Returns parallel lists of per-window predictions for both models,
    using deterministic re-seeding so the comparison is truly paired."""
    out_with, out_without = [], []
    for ep in test_episodes:
        ep_dir = Path(ep)
        md     = json.loads((ep_dir / "metadata.json").read_text())
        for win_idx, (t0, t1, ring, src, dst) in enumerate(derive_move_windows(ep_dir)):
            seed_rgb   = load_frame(ep_dir, "side", t0)             # (H, W, 3) uint8
            seed_graph = pyg_data_at_frame(ep_dir, t0)              # PyG Data
            n          = t1 - t0

            torch.manual_seed(base_seed + win_idx); np.random.seed(base_seed + win_idx)
            pred_w = model_with   .generate(seed_rgb, seed_graph, md["goal_prompt"], n)

            torch.manual_seed(base_seed + win_idx); np.random.seed(base_seed + win_idx)
            pred_o = model_without.generate(seed_rgb, None,        md["goal_prompt"], n)

            out_with   .append(dict(ep=ep, win=win_idx, ring=ring, dst=dst,
                                    pred=pred_w, t0=t0, t1=t1))
            out_without.append(dict(ep=ep, win=win_idx, ring=ring, dst=dst,
                                    pred=pred_o, t0=t0, t1=t1))
    return out_with, out_without

If your WM only generates fixed-length chunks (Cosmos Predict 2.5 outputs ≈120–240 frames at a time), a 10 s window may need 1–2 chunks. Chain them autoregressively inside the window (last frame of chunk N seeds chunk N+1). Errors compound minimally over <10 s.
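A sketch of that chaining, assuming the same generate(seed_rgb, seed_graph, prompt, n_frames) signature used in run_paired_eval above and a fixed chunk length in frames; the seed graph is reused unchanged between chunks here:

def generate_window(model, seed_rgb, seed_graph, prompt, n_frames, chunk_len=120):
    """Chain fixed-length WM chunks autoregressively until n_frames are produced."""
    frames, seed = [], seed_rgb
    while len(frames) < n_frames:
        chunk = model.generate(seed, seed_graph, prompt, chunk_len)   # (chunk_len, H, W, 3)
        frames.extend(chunk)
        seed = chunk[-1]                       # last frame of chunk N seeds chunk N+1
    return np.stack(frames[:n_frames])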

Validation track — three layers of evidence

Three independent tests, all paired, all drift-resistant. Each layer confirms or denies the same claim — the GNN preserves task-relevant signals during generation — from a different angle.

Layer 1 — Single-move task success (paired binary, McNemar)

Did the right ring end up on the right peg over one move? Score per-window with the perception module, then run McNemar on the paired binary outcomes — a two-sample z-test ignores pairing and inflates Type-II error.

from src.hanoi.perception import detect_rings
from src.hanoi.setup       import HanoiSetup
from scipy.stats           import binomtest

setup = HanoiSetup.from_yaml("config/hanoi_setup.yaml")

def check_ring_on_peg(pred_rgb_final, ring_id, dst_peg, setup):
    """True iff perception sees `ring_id` on `dst_peg` in the predicted final frame."""
    try:
        states = detect_rings(pred_rgb_final, setup)
        return states[ring_id].peg == dst_peg
    except Exception:
        return False                           # bad gen ⇒ task failure (correct, not skip)

def mcnemar_paired(succ_a, succ_b):
    """Returns (chi2, p_value, lift). H0: P(A wins | discordant) = 0.5."""
    b = int(( succ_a & ~succ_b).sum())          # A wins
    c = int((~succ_a &  succ_b).sum())          # B wins
    if b + c == 0:
        return 0.0, 1.0, 0.0
    p     = binomtest(b, b + c, p=0.5).pvalue
    chi2  = (abs(b - c) - 1) ** 2 / (b + c)     # continuity-corrected
    lift  = float(succ_a.mean() - succ_b.mean())
    return float(chi2), float(p), lift

# ----- L1 driver -----
preds_w, preds_o = run_paired_eval(model_with, model_without, test_episodes)
succ_w = np.array([check_ring_on_peg(p["pred"][-1], p["ring"], p["dst"], setup) for p in preds_w])
succ_o = np.array([check_ring_on_peg(p["pred"][-1], p["ring"], p["dst"], setup) for p in preds_o])
chi2, p, lift = mcnemar_paired(succ_w, succ_o)
print(f"GNN: {succ_w.mean():.1%}  prompt-only: {succ_o.mean():.1%}  "
      f"lift: {lift:+.1%}  McNemar χ²={chi2:.2f}  p={p:.4g}  n={len(succ_w)}")

You report: "GNN-conditioned succeeded on 78 % of windows vs 31 % for prompt-only (lift = +47 %, McNemar χ² = 38.7, p < 0.001, n = 142)."

Layer 2 — Identity-preservation curve (paired bootstrap)

This is the test that directly addresses color drift: as horizon grows, both models eventually lose track of which ring is which. The GNN's identity conditioning should slow that loss — the gap widens with horizon.

The HSV ranges are the same ones detect_rings already uses — pull them from setup.ring_hsv_ranges so the check matches the calibrated ranges and handles OpenCV's red-wraparound case correctly:

import cv2, numpy as np

def hue_identity_match(rgb_frame, ring_masks, setup, threshold_pct=0.5):
    """For each ring's mask, is >threshold_pct of its hue inside the calibrated
    HSV range for that ring?  Returns {ring_id: bool|None}.  None = mask too
    small to score."""
    hsv = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2HSV)
    out = {}
    for ring_id, mask in ring_masks.items():
        if mask.sum() < 50:
            out[ring_id] = None
            continue
        h_pixels = hsv[..., 0][mask.astype(bool)]
        h_lo, h_hi, *_ = setup.ring_hsv_ranges[ring_id]
        if h_lo <= h_hi:
            in_range = (h_pixels >= h_lo) & (h_pixels <= h_hi)
        else:                                    # red wraparound
            in_range = (h_pixels >= h_lo) | (h_pixels <= h_hi)
        out[ring_id] = bool(in_range.mean() > threshold_pct)
    return out

def identity_curve(model, test_episodes, horizons_frames, setup,
                   n_starts=8, base_seed=42):
    """horizons_frames: e.g. [60, 150, 300, 450] = 2/5/10/15 s at 30 fps.
    Returns (summary_dict, raw_per_window_rates_dict)."""
    rates = {h: [] for h in horizons_frames}
    H_max = max(horizons_frames)
    for ep_i, ep in enumerate(test_episodes):
        ep_dir = Path(ep)
        L      = episode_length(ep_dir)
        md     = json.loads((ep_dir / "metadata.json").read_text())
        starts = np.linspace(0, max(L - H_max - 1, 0), n_starts).astype(int)
        for s_i, t0 in enumerate(starts):
            torch.manual_seed(base_seed + ep_i * 1000 + s_i)
            np.random.seed   (base_seed + ep_i * 1000 + s_i)
            seed_rgb   = load_frame(ep_dir, "side", t0)
            seed_graph = pyg_data_at_frame(ep_dir, t0)            # None for prompt-only
            pred       = model.generate(seed_rgb, seed_graph, md["goal_prompt"], H_max)
            for h in horizons_frames:
                masks = segment_rings_sam2(pred[h - 1])           # see Step 3
                hits  = hue_identity_match(pred[h - 1], masks, setup)
                vis   = [v for v in hits.values() if v is not None]
                rates[h].append(sum(vis) / max(len(vis), 1))
    summary = {}
    for h, vals in rates.items():
        v    = np.array(vals)
        boot = np.array([np.random.choice(v, size=len(v), replace=True).mean()
                         for _ in range(2000)])
        summary[h] = (float(v.mean()),
                      float(np.percentile(boot, 2.5)),
                      float(np.percentile(boot, 97.5)))
    return summary, rates

def paired_bootstrap_diff(rates_a, rates_b, n_boot=2000):
    """Per-window paired differences with 95 % CI."""
    a, b = np.array(rates_a), np.array(rates_b)
    assert len(a) == len(b), "rates must be paired (same starts for both models)"
    diff = a - b
    boot = np.array([np.random.choice(diff, size=len(diff), replace=True).mean()
                     for _ in range(n_boot)])
    return float(diff.mean()), float(np.percentile(boot, 2.5)), float(np.percentile(boot, 97.5))

# ----- L2 driver -----
sum_w, raw_w = identity_curve(model_with,    test_episodes, [60,150,300,450], setup)
sum_o, raw_o = identity_curve(model_without, test_episodes, [60,150,300,450], setup)
for h in [60, 150, 300, 450]:
    m, lo, hi = paired_bootstrap_diff(raw_w[h], raw_o[h])
    print(f"{h/30:.0f}s: GNN={sum_w[h][0]:.2f} vs prompt={sum_o[h][0]:.2f}  "
          f"lift={m:+.2f}  95% CI [{lo:+.2f}, {hi:+.2f}]")

The headline finding is the gap widening with horizon: the with-GNN curve stays flat longer; the prompt-only curve drops faster because nothing anchors identity.

Layer 3 — Ambiguous-case study (curated, qualitative)

For ~20 hand-picked cases where the prompt alone is insufficient. Programmatic candidate finder:

def find_ambiguous_windows(test_episodes, setup):
    """Flag windows where prompt + seed are likely insufficient without the graph."""
    cands = []
    for ep in test_episodes:
        for win in derive_move_windows(ep):
            t0, _, ring, _, dst = win
            seed_rgb = load_frame(Path(ep), "side", t0)
            states   = detect_rings(seed_rgb, setup)

            tags = []
            # A: target ring fully occluded under taller stack on its current peg
            stack_max = max(s.stack_index for s in states.values()
                            if s.peg == states[ring].peg)
            if states[ring].stack_index < stack_max:
                tags.append("occluded_target")
            # B: visually similar ring also visible elsewhere
            if any(r != ring and same_hue_family(setup, r, ring)
                   and states[r].peg != states[ring].peg
                   for r in states):
                tags.append("similar_visible")
            # C: would-be illegal placement (Rule 2 violation) without is_locked guard
            if would_violate_rule_2(ring, dst, states):
                tags.append("constraint_critical")
            if tags:
                cands.append((ep, win, tags))
    return cands

Pick ~5 per pattern; lock the case list before running the models (otherwise it's cherry-picking). Score by hand into a markdown sheet:

| case_id | pattern | GNN succ? | prompt-only succ? | note |
|---|---|---|---|---|
| 0420_ep03_w2 | occluded_target     | yes | no  | with-GNN reaches behind ring_2 to ring_3 |
| 0420_ep07_w0 | constraint_critical | yes | no  | prompt-only stacked ring_3 on ring_1 (illegal) |
| ...

These cases become Figure 4 — the visual evidence reviewers actually look at. They're slower to gather than L1/L2 numbers but carry disproportionate weight in convincing skeptical readers.

Pre-registered ablations

Decide these before running L1/L2, run all four, report all four — never cherry-pick:

| Condition | What's removed | Tests |
|---|---|---|
| full | nothing — the full GNN with edges + types + positions | upper bound |
| no_edges | drop edge_attr → only node features feed conditioning | constraints' contribution |
| types_only | zero out spatial-position channels in node features | spatial info's contribution |
| prompt_only | no GNN at all | lower bound |
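A sketch of how the intermediate conditions can be materialised from the loader's PyG Data. The slice indices assume the 270-D node layout documented above (256 emb | 3 pos | 10 type | 1 vis); adjust them if your feature order differs:

def make_ablation(data, condition):
    """Return a copy of a torch_geometric Data with the corresponding signal removed."""
    d = data.clone()
    if condition == "no_edges":
        d.edge_attr = None                     # constraints removed; node features untouched
    elif condition == "types_only":
        d.x = d.x.clone()
        d.x[:, 256:259] = 0.0                  # zero the 3-D position channels
    elif condition == "prompt_only":
        return None                            # no graph passed to the WM at all
    return d                                   # "full" falls through unchanged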

Per-frame fidelity metrics (run on the same paired predictions)

These are the LPIPS / SSIM / PSNR / FVD numbers reviewers expect. Per-window task success (L1) is your headline; these are a complementary view that confirms the predictions also look right, not just end right.

Static side view → use ROI metrics as primary, whole-frame as comparability backup

This is the most important methodology decision in this section, so it gets its own subsection. In our setup, ~85 % of every side-view frame is static background (the table, the three pegs, the workshop wall behind them) — the OAK-D is bolted in place and the only things that move are the rings and the robot arm. Averaged whole-frame LPIPS / SSIM / PSNR are dominated by those static pixels. A model that draws the rings in completely wrong places but reproduces the static background can score LPIPS ≈ 0.05; a model that gets the rings right but rerenders the table at a slightly different brightness can score worse. Whole-frame metrics here are insensitive to exactly the failures we care about.

The fix is a fixed workspace ROI: a single deterministic crop around the three pegs and the gripper-transit envelope, applied to every frame of every episode. Same crop forever — no perception dependency, no per-frame mask variation, fully reproducible. We report metrics computed on this ROI as the primary numbers, with whole-frame metrics reported alongside as a secondary row for comparability with the wider video-prediction literature (Ctrl-World, Sora, FitVid, etc., all of which use whole-frame at lower resolutions where static background occupies a smaller fraction).

The ROI is committed in config/eval_roi.yaml so every team member uses the identical crop. Tweak only with a team-wide ack — changing the ROI invalidates all prior numbers.

# Loaded from config/eval_roi.yaml — deterministic across team and runs.
# Box covers all 3 pegs + ~200 px of headroom for the gripper transit envelope
# above safe_z. Derived once from setup.peg_{A,B,C}_top projected through OAK-D
# intrinsics; commit the result, never re-derive ad-hoc per run.
import yaml
ROI = yaml.safe_load(open("config/eval_roi.yaml"))
ROI_X1, ROI_Y1, ROI_X2, ROI_Y2 = ROI["x1"], ROI["y1"], ROI["x2"], ROI["y2"]

def crop_roi(rgb_thwc):
    """rgb_thwc: (T, H, W, 3) uint8 → (T, h_roi, w_roi, 3) uint8."""
    return rgb_thwc[:, ROI_Y1:ROI_Y2, ROI_X1:ROI_X2, :]

Why we still report whole-frame metrics: (a) reviewers familiar with video-prediction literature will compare your numbers against published whole-frame numbers; not having them invites "but how does this compare to Ctrl-World's PSNR?" questions; (b) a large gap between whole-frame and ROI metrics is itself a result — it tells you the static background is being preserved (good) and where the model's actual failures concentrate (in the ROI). Report both, name ROI as primary, explain the gap in the methodology paragraph.

Image-side metrics
import lpips, torch, numpy as np
from skimage.metrics import structural_similarity as ssim, peak_signal_noise_ratio as psnr

_lpips = lpips.LPIPS(net="alex").cuda()       # pick alex OR vgg, never mix

def _per_frame_metrics(pred, gt):
    """pred, gt: (T, H, W, 3) uint8 — already cropped to whatever region you want."""
    p_t = torch.from_numpy(pred).permute(0,3,1,2).float().cuda() / 127.5 - 1
    g_t = torch.from_numpy(gt  ).permute(0,3,1,2).float().cuda() / 127.5 - 1

    lpips_pf = _lpips(p_t, g_t).flatten().detach().cpu().numpy()
    ssim_pf  = np.array([ssim(g, p, channel_axis=-1, data_range=255)
                         for p, g in zip(pred, gt)])
    psnr_pf  = np.array([psnr(g, p, data_range=255)
                         for p, g in zip(pred, gt)])

    pred_motion = np.abs(np.diff(pred.astype(np.int16), axis=0)).mean()
    gt_motion   = np.abs(np.diff(gt  .astype(np.int16), axis=0)).mean()
    motion_err  = abs(pred_motion - gt_motion) / max(gt_motion, 1e-6)

    return dict(
        lpips_mean=float(lpips_pf.mean()),
        ssim_mean =float(ssim_pf .mean()),
        psnr_mean =float(psnr_pf .mean()),
        motion_err=float(motion_err),
    )

def window_image_metrics(pred, gt):
    """Returns BOTH ROI (primary) and whole-frame (comparability) metrics
    for one paired window.  Both are computed from the same predictions —
    just two crops applied before metric computation."""
    return {
        "roi":         _per_frame_metrics(crop_roi(pred), crop_roi(gt)),
        "whole_frame": _per_frame_metrics(pred,           gt),
    }

For FVD, treat each 10 s window as one clip and accumulate across all windows in the test set. Compute FVD twice — once on ROI clips (primary), once on whole-frame clips (comparability):

# Reference impl: github.com/JunyaoHu/common_metrics_on_video_quality
# (torchmetrics' FrechetInceptionDistance is image-only; use I3D for video.)
fvd_roi         = compute_fvd(real_clips=[crop_roi(g) for g in all_gt_windows],
                              fake_clips=[crop_roi(p) for p in all_pred_windows])
fvd_whole_frame = compute_fvd(real_clips=all_gt_windows, fake_clips=all_pred_windows)

FVD needs ≥100 windows for a stable covariance estimate — bootstrap a 95 % CI before reporting any FVD difference smaller than ≈30. Reference values for whole-frame metrics in the video-prediction literature: LPIPS ≈ 0.1–0.3 for good prediction; SSIM ≈ 0.6–0.9; PSNR ≈ 20–30 dB. ROI metrics are not directly comparable to those reference values because the static-fraction-of-frame term dominates the difference; only compare ROI to your own ablation conditions (with-GNN vs prompt-only).
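A sketch of that bootstrap, resampling window indices with replacement and recomputing FVD per resample (slow; run once at reporting time). compute_fvd is the reference implementation linked above:

def fvd_diff_ci(gt_windows, pred_a, pred_b, n_boot=200, seed=0):
    """95 % CI on FVD(model A) − FVD(model B) over paired ROI windows."""
    rng, n, diffs = np.random.default_rng(seed), len(gt_windows), []
    for _ in range(n_boot):
        idx  = rng.integers(0, n, size=n)
        real = [crop_roi(gt_windows[i]) for i in idx]
        d_a  = compute_fvd(real_clips=real, fake_clips=[crop_roi(pred_a[i]) for i in idx])
        d_b  = compute_fvd(real_clips=real, fake_clips=[crop_roi(pred_b[i]) for i in idx])
        diffs.append(d_a - d_b)
    return float(np.percentile(diffs, 2.5)), float(np.percentile(diffs, 97.5))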

Graph-side metrics

Image and graph metrics aren't redundant — they have different invariance classes. Image metrics penalize all pixel differences (incl. arm trajectory variation); graph metrics only penalize differences that flip a symbolic state. The GNN's contribution is most visible in graph metrics, because that's the structure it explicitly conditions on.

import numpy as np
from sklearn.metrics import f1_score

def window_graph_metrics(pred_rgb, ep_dir, t_start, t_end, setup):
    """For each frame in the predicted window, extract a symbolic state and compare
    to the GT graph at the same timestamp.  Returns per-window scalars.
    gt_state_at_frame, held_label, edge_labels, dedup_consecutive and levenshtein
    are small eval-side helpers (GT comes from folding the frame_states deltas)."""
    T = len(pred_rgb)
    pred_states = [detect_rings(pred_rgb[i], setup) for i in range(T)]
    gt_states   = [gt_state_at_frame(ep_dir, t_start + i) for i in range(T)]

    # Fraction of (frame, ring) pairs where the prediction puts the ring on the GT peg.
    ring_pos_acc = float(np.mean([pred_states[i][r].peg == gt_states[i][r].peg
                                  for i in range(T) for r in ALL_RINGS]))

    # Held-ring label per frame (None or ring_id), then F1 against GT.
    pred_held_seq = [held_label(s) for s in pred_states]
    gt_held_seq   = [held_label(s) for s in gt_states]
    held_f1       = f1_score(gt_held_seq, pred_held_seq, average="macro")

    # is_locked edge labels flattened across all frames of the window.
    pred_edges = [e for s in pred_states for e in edge_labels(s)]
    gt_edges   = [e for s in gt_states   for e in edge_labels(s)]
    edge_f1    = f1_score(gt_edges, pred_edges, average="macro")

    # Symbolic-trajectory edit distance: collapse repeated states, then Levenshtein.
    as_key     = lambda s: frozenset((r, st.peg) for r, st in s.items())
    pred_traj  = dedup_consecutive([as_key(s) for s in pred_states])
    gt_traj    = dedup_consecutive([as_key(s) for s in gt_states])
    state_edit = levenshtein(pred_traj, gt_traj) / max(len(gt_traj), 1)

    return {
        "ring_pos_acc": ring_pos_acc,
        "held_f1":      held_f1,
        "edge_f1":      edge_f1,
        "state_edit":   state_edit,
    }
Labeler-validity check (run once before any graph metric)

detect_rings was calibrated on real OAK-D frames. On Cosmos output it may silently mis-label, biasing every graph metric. Validate before trusting the numbers:

# Sample 50 generated frames from your predictions
# Hand-label each (which ring on which peg)
# Run detect_rings on each
# Compute agreement: |labeler matches hand-label| / 50
labeler_oracle_accuracy = ...
print(f"Labeler accuracy on generated frames: {labeler_oracle_accuracy:.1%}")

If agreement is < 95 %, every graph metric (ring-position accuracy, held-state F1, edge F1, state edit distance) is biased by labeler error. Either fine-tune SAM2 on a small set of generated frames, or report the labeler's own accuracy as an explicit "labeler ceiling" caveat row in the results table.
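A minimal sketch of the agreement computation, assuming the 50 hand labels live in a small CSV of (frame_path, ring_id, peg) rows:

import csv, cv2

def labeler_agreement(hand_label_csv, setup):
    """Fraction of hand-labelled (frame, ring) pairs where detect_rings agrees."""
    hits, total = 0, 0
    with open(hand_label_csv) as f:
        for frame_path, ring_id, peg in csv.reader(f):
            rgb    = cv2.imread(frame_path)[:, :, ::-1]          # BGR → RGB
            states = detect_rings(rgb, setup)
            hits  += int(ring_id in states and states[ring_id].peg == peg)
            total += 1
    return hits / max(total, 1)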

Track B — Drift characterization (co-equal track)

Run open-loop autoregressive rollout on all test episodes (not 3–5 cherry-picked — we want this to be representative, since it's now load-bearing evidence rather than a sanity check). Report ROI metrics binned by horizon for both models:

| Horizon | full GNN — LPIPS ↓ | prompt_only — LPIPS ↓ | full GNN — task success ↑ | prompt_only — task success ↑ |
|---|---|---|---|---|
| 0–10 s | 0.18 | 0.27 | 0.78 | 0.31 |
| 10–30 s | 0.27 | 0.42 | 0.55 | 0.18 |
| 30–60 s | 0.38 | 0.58 | 0.32 | 0.06 |
| full ep | 0.49 | 0.71 | 0.18 | 0.02 |

The story this table tells the reader: "the GNN's contribution doesn't just exist at 10 s — its lift over prompt-only persists across horizons, and the gap widens under drift." That's a stronger claim than "the GNN helps at 10 s" alone, and it directly addresses the static-side-view concern: if drift were invisible to ROI metrics, the with-GNN and without-GNN curves would overlap at long horizons; if it's visible, both curves degrade and the gap stays measurable.

ROI metrics are required for this track. Whole-frame metrics on long horizons can stay artificially flat even as the dynamic content drifts arbitrarily, because the static background continues to dominate the average. Reporting whole-frame in the drift table would actively mislead. The whole-frame columns are reserved for the per-window fidelity table (Track A) where they serve a different purpose (cross-paper comparability).

Don't conflate Track B with L2. They look superficially similar (both have a horizon axis) but L2 re-seeds each window from a GT frame, so drift can't compound; Track B lets drift compound on purpose. Same axis, different question.
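A sketch of the horizon binning for one full-episode rollout, assuming pred_roi and gt_roi are equal-length (T, h, w, 3) arrays already cropped with crop_roi; bin edges match the table above (30 fps):

def drift_lpips_by_bin(pred_roi, gt_roi):
    """Mean ROI LPIPS per horizon bin, reusing _per_frame_metrics from above."""
    out = {}
    for name, lo, hi in [("0-10 s", 0, 300), ("10-30 s", 300, 900), ("30-60 s", 900, 1800)]:
        p, g = pred_roi[lo:hi], gt_roi[lo:hi]
        if len(p):
            out[name] = _per_frame_metrics(p, g)["lpips_mean"]
    out["full ep"] = _per_frame_metrics(pred_roi, gt_roi)["lpips_mean"]
    return out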

Reporting templates

## Validation results

### L1 — Single-move paired ablation (n = 142 windows, paired, McNemar)

| Condition | Per-window task success | Lift over prompt-only | McNemar p |
|---|---|---|---|
| full GNN     | **0.78** | +0.47 | < 0.001 |
| no_edges     |   0.62   | +0.31 | < 0.001 |
| types_only   |   0.55   | +0.24 |   0.003 |
| prompt_only  |   0.31   |   —   |    —    |

### L2 — Identity preservation vs horizon (paired bootstrap, 95 % CI)

| Horizon | full GNN          | prompt_only       | Paired lift             |
|---|---|---|---|
|  2 s | 0.99 [0.98, 1.00] | 0.95 [0.92, 0.97] | +0.04 [+0.02, +0.06] |
|  5 s | 0.97 [0.94, 0.99] | 0.82 [0.77, 0.86] | +0.15 [+0.11, +0.19] |
| 10 s | 0.91 [0.86, 0.95] | 0.55 [0.46, 0.63] | +0.36 [+0.30, +0.42] |
| 15 s | 0.78 [0.69, 0.86] | 0.31 [0.21, 0.41] | +0.47 [+0.40, +0.54] |

### L3 — Ambiguous-case study (n = 20, blind-curated)

GNN-conditioned: 18/20.  Prompt-only: 4/20.  See Figure 4.

### Per-frame fidelity (windowed, GNN vs prompt-only on the same paired windows)

ROI metrics are primary; whole-frame metrics reported alongside for comparability with the wider video-prediction literature. The gap between the two columns is itself informative — it tells you the static background is being preserved (good) and where the dynamic content actually fails (in the ROI).

| Metric | full GNN — ROI | full GNN — whole-frame | prompt-only — ROI | prompt-only — whole-frame | Identity (GT vs GT) | Random-pair |
|---|---|---|---|---|---|---|
| LPIPS ↓                | **0.31** | 0.18 | **0.46** | 0.24 | 0.0  | 0.74 |
| SSIM ↑                 | **0.62** | 0.78 | **0.48** | 0.69 | 1.00 | 0.34 |
| PSNR (dB) ↑            | **18.2** | 22.4 | **15.1** | 19.8 | ∞    | 11.8 |
| Motion-mag err ↓       | **0.18** | 0.12 | **0.41** | 0.27 | 0.0  | 0.92 |
| FVD per 10 s window ↓  | **214**  | 142  | **318**  | 198  | 0    | 818  |
| Ring-position acc ↑    | 0.92 | — | 0.55 | — | 1.00 (labeler ceiling) | 0.25 |
| Held-state F1 ↑        | 0.87 | — | 0.43 | — | 0.98 (labeler ceiling) | 0.10 |
| Edge F1 (is_locked) ↑  | 0.84 | — | 0.51 | — | 0.96 (labeler ceiling) | 0.30 |

(Graph metrics have no whole-frame variant — they operate on perception output, not pixels.)

### Track B — drift characterization (ROI metrics only — see methodology note)

| Horizon | full GNN — LPIPS ↓ | prompt_only — LPIPS ↓ | full GNN — task success ↑ | prompt_only — task success ↑ |
|---|---|---|---|---|
| 0–10 s     | 0.31 | 0.46 | 0.78 | 0.31 |
| 10–30 s    | 0.42 | 0.61 | 0.55 | 0.18 |
| 30–60 s    | 0.55 | 0.74 | 0.32 | 0.06 |
| full ep    | 0.68 | 0.83 | 0.18 | 0.02 |

The fidelity table includes two reference columns: Identity (GT vs GT) tells the reader what perfect looks like — for graph metrics, the deviation from 1.00 is the labeler ceiling (you can't beat this without overfitting to labeler noise). Random-pair is each metric computed between two unrelated GT windows — tells the reader what "totally unrelated videos" look like, so the absolute numbers have a frame of reference. Most video-prediction papers omit these columns and the numbers feel arbitrary.

When you write the paper, the load-bearing numbers are the ROI columns in the fidelity table and the per-window task success column in the drift table. Whole-frame fidelity numbers are supporting evidence for "the model isn't completely broken." The graph metrics are the cleanest test of the GNN's contribution.

Pitfalls — read this before running anything

Methodology pitfalls (the ones that invalidate your numbers):

  • Don't report whole-frame as primary on its own. ~85 % of side-view pixels are static; whole-frame averaged metrics are insensitive to dynamic-region failures. ROI is primary. Whole-frame is comparability backup. (We deviate from Ctrl-World, Sora, and most video-pred papers on this — they use whole-frame because their resolution is lower and/or their scenes have more dynamic content. Our static-side-view setup is different and the ROI choice should be defended explicitly in the methodology section.)
  • ROI must be deterministic and committed. config/eval_roi.yaml is the source of truth. Don't tweak ad-hoc per run — that invalidates cross-run comparison and slips toward p-hacking.
  • Re-seed RNG between paired model calls. Without it, the second model uses RNG state advanced by the first call and the comparison isn't paired.
  • Use McNemar (paired binary) or paired bootstrap (continuous), not two-sample tests. Two-sample tests ignore pairing and are wrong here.
  • Lock the L3 case list before running models. Picking cases after you see outputs is cherry-picking. Score blind on the seed graph alone.
  • Pre-register ablations. A clever post-hoc ablation that "happens" to confirm the GNN is p-hacking. Either re-run all conditions with the same protocol or label it as exploratory.

Scope pitfalls (the ones that confuse the research claim):

  • Don't add wrist-view eval. Wrist view is a different research question (multi-view consistency, à la Ctrl-World) that the GNN has no privileged information for. Mixing it in dilutes the claim and invites "but you don't beat Ctrl-World on wrist FVD" critiques. Stay in scope: graph-conditioned side-view prediction.
  • Don't conflate L2 with Track B (drift). L2 re-seeds each window from a GT frame so drift can't compound; Track B lets drift compound on purpose. Same horizon axis, different question.
  • Don't conflate ROI metrics with crop-augmentation tricks. ROI is a fixed deterministic crop applied at metric time only. The model trains on whole frames as before.

Operational pitfalls (the ones that silently break runs):

  • Pilot before scale. If L2 doesn't separate the curves at 10 s on 2 pilot episodes, fix the architecture before the full eval — burning a week of GPU time on a fusion mechanism that isn't using the GNN is the most expensive mistake you can make here.
  • Don't trust perception on garbage frames. check_ring_on_peg returning False on perception failure is correct — bad generations should count as task failures, not be silently skipped.
  • Validate the labeler on Cosmos output. SAM2 fine-tune was trained on real OAK-D RGB; generated frames are OOD. Hand-validate on 50 samples; report the labeler ceiling.
  • fps / resolution mismatch silently breaks everything. If your WM outputs at 24 fps and you compare against 30 fps GT, every metric is artificially bad. Resample first.
  • FVD with too few samples. <100 windows ⇒ unstable covariance estimate. Bootstrap CIs or report "trend, not value." Differences <30 are probably noise.
  • 0423 session proprio: vision works; robot_states.npy is frozen. Use session_hanoi_0423_165447 only for image metrics, never for anything that consumes robot_states.npy.
  • LPIPS backbone consistency: pick alex or vgg and stick with it across every reported number — they're not directly comparable. Across the team we use alex (faster, cheaper).
  • Move-window boundaries: derive them from frame_states held-deltas, not from wall-clock fixed intervals. A "10 s" wall-clock window may straddle a move boundary and conflate two different dynamics regimes.
  • Window-level FVD with mixed lengths: Hanoi moves vary 5–15 s. Pad or crop to a fixed length (e.g., 8 s) before stacking for FVD; otherwise the I3D feature distribution mixes durations.

Decision log (what we considered and rejected)

For team continuity — if someone joining the project six months from now wonders why we made a particular methodology choice, this is the answer.

| Considered | Rejected because |
|---|---|
| Multi-view (side + wrist) generation, à la Ctrl-World | the GNN has no privileged information about wrist view; adding it tests multi-view consistency, not graph conditioning. Multi-view is a future-work extension, not part of this paper's claim. |
| Whole-frame metrics only (Ctrl-World, Sora, FitVid convention) | ~85 % of our side view is static background; averaged whole-frame metrics are insensitive to dynamic-region failures. ROI primary + whole-frame comparability is the right compromise. |
| Per-frame foreground mask from SAM2 | mask area changes per frame as the arm moves, makes metrics non-comparable across frames; mask reliability on OOD Cosmos output is itself a confound. Fixed ROI cleaner. |
| Motion-weighted metrics (weight error by abs(diff_t)) | non-standard, hard to interpret; "motion-weighted LPIPS" doesn't exist in the literature. Fixed ROI is cleaner and more reproducible. |
| Open-loop full-episode rollout as the primary eval | drift dominates by 30 s; can't isolate GNN contribution from compounding error. Used as Track B alongside windowed Track A, never as the primary number. |
| Plain metric tables (Ctrl-World ablation style) without statistical tests | underclaims; paired McNemar / paired bootstrap give a defensible significance claim with the same data. We do this better than Ctrl-World. |
| Closed-loop teacher-forced as headline | re-seeds the model with the right answer between chunks → the metric measures one-chunk quality, not the model's actual capability. Useful for early development; not a publication-ready primary metric. |

Bonus: reproducing the dataset from a raw robot capture

If you capture your own Hanoi session (raw RGB + depth + robot_states.npy + metadata.json — i.e. before any labeling), run the bundled auto-labeler to produce the full v3 annotations/ tree. Every session already on HF was produced this way.

python tools/hanoi_pipeline/scripts/hanoi/auto_label.py \
    /path/to/session_hanoi_<date>_<time>/

Requires SAM2 base (see Step 0). For each episode it writes:

  • annotations/side_masks/frame_XXXXXX.npz — SAM2 masks (Hanoi-FT auto-selected)
  • annotations/side_embeddings/frame_XXXXXX.npz — 256-D pooled embeddings
  • annotations/side_depth_info/frame_XXXXXX.npz — 3-D positions + bboxes
  • annotations/side_robot/frame_XXXXXX.npz — robot bundle (zero-filled in Hanoi v1)
  • annotations/side_graph.json — structural edges + frame_states deltas (derived from solver_moves + gripper-based held-interval detection)
  • annotations/dataset_card.json — schema pointer

After auto-labeling, every example script above works on the new session unchanged. How the raw session gets captured in the first place (robot arm control, camera sync) is out of scope for this dataset repo — it requires physical hardware.

Shared: common v3 file schemas

side_graph.json

{
  "episode_id": "episode_00",
  "goal_component": "ring_1",            // Desktop: a product id; Hanoi: a ring id
  "view": "side",
  "components": [
    {"id": "ring_1", "type": "ring_1", "color": "#FF0000"}
  ],
  "edges": [
    {"src": "ring_1", "dst": "ring_3", "directed": true}
  ],
  "frame_states": {
    "0":   {"constraints": {"ring_1->ring_3": true},  "visibility": {"ring_1": true}, "held": {}},
    "120": {"constraints": {"ring_1->ring_3": false},                                   "held": {"ring_1": true}}
  },
  "node_positions": {"ring_1": [640, 360]},
  "type_vocab": ["ring_1", "ring_2", "ring_3", "ring_4"],     // Hanoi v1 — no robot
  "embedding_dim": 256,
  "feature_extractor": "sam2.1_hiera_base_plus",

  // Hanoi-only extras:
  "goal_prompt": "Move the red ring to peg B",
  "mission_kind": "single_ring",
  "target_state": {"peg_A": [], "peg_B": ["ring_1"], "peg_C": []}
}
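Because frame_states is delta-encoded, the absolute symbolic state at frame t is the fold of all deltas up to t (the bundled loader does this expansion internally); a minimal standalone sketch:

import json
from pathlib import Path

def state_at_frame(episode_dir, t):
    """Fold delta-encoded frame_states up to and including frame t."""
    graph = json.loads((Path(episode_dir) / "annotations/side_graph.json").read_text())
    constraints, visibility, held = {}, {}, {}
    for k in sorted(graph["frame_states"], key=int):
        if int(k) > t:
            break
        delta = graph["frame_states"][k]
        constraints.update(delta.get("constraints", {}))
        visibility.update(delta.get("visibility", {}))
        held.update(delta.get("held", {}))
    return {"constraints": constraints, "visibility": visibility, "held": held}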

side_depth_info/frame_XXXXXX.npz — 7 flat keys per component

| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| {cid}_point_cloud | (N, 3) | float32 | 3D points in camera frame (m). (0, 3) if no valid depth |
| {cid}_pixel_coords | (N, 2) | int32 | (u, v) of valid depth pixels |
| {cid}_raw_depths_mm | (N,) | uint16 | Filtered to [50, 2000] |
| {cid}_centroid | (3,) | float32 | Mean of point_cloud; [0,0,0] if invalid |
| {cid}_bbox_2d | (4,) | int32 | [x1, y1, x2, y2] from mask |
| {cid}_area | (1,) | int32 | Mask pixel count |
| {cid}_depth_valid | (1,) | uint8 | 1 if N > 0 else 0 |
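Reading one component's bundle is a plain np.load; a minimal sketch for ring_1 at frame 0:

import numpy as np

d = np.load("annotations/side_depth_info/frame_000000.npz")
if int(d["ring_1_depth_valid"][0]):
    cloud    = d["ring_1_point_cloud"]     # (N, 3) float32, metres, camera frame
    centroid = d["ring_1_centroid"]        # (3,)  float32
    bbox     = d["ring_1_bbox_2d"]         # (4,)  int32  [x1, y1, x2, y2]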

side_robot/frame_XXXXXX.npz — always 10 keys

| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| visible | (1,) | uint8 | 1 if robot labeled, 0 otherwise |
| mask | (H, W) | uint8 | Binary mask |
| embedding | (256,) | float32 | SAM2 256-D |
| point_cloud | (N, 3) | float32 | 3D points (m) |
| pixel_coords | (N, 2) | int32 | (u, v) |
| raw_depths_mm | (N,) | uint16 | mm |
| centroid | (3,) | float32 | Mean of point cloud |
| bbox_2d | (4,) | int32 | From mask |
| area | (1,) | int32 | Pixel count |
| depth_valid | (1,) | uint8 | 1 if N > 0 else 0 |

Recording hardware

UR5e + Robotiq 2F-85 gripper; static-mounted Luxonis OAK-D Pro side view with intrinsics fx = 1033.8, fy = 1033.7, cx = 632.9, cy = 359.9; recording at 30 Hz, 1280 × 720 RGB and uint16 depth (mm) filtered to [50, 2000].
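With those intrinsics, a valid depth pixel back-projects to camera-frame coordinates via the standard pinhole model, which is how the point clouds above are expressed; a minimal sketch:

FX, FY, CX, CY = 1033.8, 1033.7, 632.9, 359.9

def backproject(u, v, depth_mm):
    """(u, v) pixel + depth in mm → (X, Y, Z) in metres, camera frame."""
    z = depth_mm / 1000.0
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)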

License

Released under CC BY 4.0. Use, share, and adapt freely with attribution.

Acknowledgements
