Dataset ID: ds002885
Title: DBS Phantom Recordings
Source: openneuro (https://openneuro.org/datasets/ds002885)
DOI: 10.18112/openneuro.ds002885.v1.0.1
License: CC0
Loader: { "library": "eegdash", "class": "EEGDashDataset", "kwargs": { "dataset": "ds002885" } }
Catalog: https://huggingface.co/spaces/EEGDash/catalog
Generated by: huggingface-space/scripts/push_metadata_stubs.py
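The loader field above is machine-readable: a generic consumer could parse it and resolve the class dynamically with importlib rather than hard-coding EEGDashDataset. A minimal sketch (the actual instantiation is left commented out, since it requires eegdash to be installed and network access):

```python
import json
import importlib

# The loader spec, copied verbatim from the card metadata above.
loader_spec = json.loads(
    '{ "library": "eegdash", "class": "EEGDashDataset", '
    '"kwargs": { "dataset": "ds002885" } }'
)

module_name = loader_spec["library"]  # "eegdash"
cls_name = loader_spec["class"]       # "EEGDashDataset"
kwargs = loader_spec["kwargs"]        # {"dataset": "ds002885"}

# Resolving and instantiating would look like this (needs eegdash + network):
# cls = getattr(importlib.import_module(module_name), cls_name)
# ds = cls(**kwargs)
print(module_name, cls_name, kwargs)
```

This is the same information the catalog space uses to point consumers at the right loader; only the JSON string is taken from the card, the variable names are illustrative.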

DBS Phantom Recordings

Dataset ID: ds002885

Citation key: Kandemir2020

At a glance: MEG · other modality · other paradigm · other population · 2 subjects · 7 recordings · CC0

Load this dataset

This repo is a pointer: the raw data lives at its canonical source (OpenNeuro / NEMAR); EEGDash streams it on demand and returns a PyTorch / braindecode dataset.

# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="ds002885", cache_dir="./cache")
print(len(ds), "recordings")

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/ds002885")

Dataset metadata

Subjects 2
Recordings 7
Tasks (count) 4
Channels 306 (×4), 314 (×3)
Sampling rate (Hz) 19200 (×4), 3000 (×3)
Total duration (h) 0.4
Size on disk 20.1 GB
Recording type MEG
Experimental modality Other
Paradigm type Other
Population Other
Source openneuro
License CC0
NEMAR citations 1
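The ×N notation in the table counts recordings per value (e.g. four recordings with 306 channels, three with 314). In a braindecode-style dataset the analogous summary would come from value counts over the per-recording description table; here is an offline sketch with plain Python, where the per-recording lists are reconstructed from the table above (the ordering within each list is illustrative):

```python
from collections import Counter

# Per-recording values, taken from the metadata table above:
# 4 recordings at 19200 Hz / 306 channels, 3 at 3000 Hz / 314 channels.
channels = [306, 306, 306, 306, 314, 314, 314]
sfreq = [19200, 19200, 19200, 19200, 3000, 3000, 3000]

def summarize(values):
    # Render counts in the card's "value (×count)" notation,
    # most frequent value first.
    return ", ".join(f"{v} (×{n})" for v, n in Counter(values).most_common())

print("Channels:", summarize(channels))          # 306 (×4), 314 (×3)
print("Sampling rate (Hz):", summarize(sfreq))   # 19200 (×4), 3000 (×3)
```

Nothing here touches the data itself; it only shows how the table's summary notation is derived from per-recording metadata.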

Links

OpenNeuro: https://openneuro.org/datasets/ds002885
DOI: https://doi.org/10.18112/openneuro.ds002885.v1.0.1
EEGDash catalog: https://huggingface.co/spaces/EEGDash/catalog
Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
