# TCASL: Temporal Contrast ASL Dataset

## Overview
TCASL is an American Sign Language (ASL) image classification dataset generated using temporal contrast emulation, a software technique that mimics the behavior of neuromorphic Dynamic Vision Sensors (DVS). Rather than capturing standard RGB frames, each sample is a sparse, edge-based event map that isolates hand motion and discards static background noise.
The dataset was built to support real-time, low-power ASL finger-spelling recognition on consumer hardware without requiring specialized event cameras or high-end GPUs.
## Dataset Details
| Property | Value |
|---|---|
| Classes | 26 (A–Z) |
| Total Samples | 13,000 |
| Samples per Class | 500 |
| Resolution | 128 × 128 px (grayscale) |
| Format | Image classification (folder-per-class) |
| Participants | 5 |
### Splits
| Split | Samples |
|---|---|
| Train | 10,400 |
| Val | 1,300 |
| Test | 1,300 |
## Sample Images

Below is one example from each class, showing the sparse event-map representation produced by temporal contrast emulation. White pixels represent ON events (brightening), black pixels represent OFF events (darkening), and gray is the neutral background (no motion).
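This three-level encoding can be decoded back into per-pixel event polarities with simple thresholds. A minimal sketch, assuming white ≈ 255, black ≈ 0, and gray ≈ 128 (the cutoff values 192 and 64 are illustrative assumptions, not part of the dataset specification):

```python
import numpy as np

def decode_events(img):
    """Recover per-pixel event polarity from a grayscale event map:
    near-white -> +1 (ON), near-black -> -1 (OFF), mid-gray -> 0."""
    out = np.zeros(img.shape, dtype=np.int8)
    out[img > 192] = 1    # ON events (white pixels)
    out[img < 64] = -1    # OFF events (black pixels)
    return out
```

The signed polarity array is often a more natural model input than raw grayscale values, since it matches the ternary event semantics described above.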
## How It Works
Standard webcams produce full-color frames at fixed intervals. TCASL simulates a neuromorphic sensor by computing pixel-level brightness differences between consecutive frames. If the change exceeds a threshold θ, an event is recorded:
- +1 (white) — pixel got brighter (ON event)
- −1 (black) — pixel got darker (OFF event)
- 0 (gray) — no significant change (ignored)
This eliminates redundant background data and retains only the moving hand contour, dramatically reducing the data a model needs to process.
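The per-pixel rule above can be sketched in a few lines of NumPy. This is an illustrative emulation, not the authors' exact pipeline; the default threshold value and the grayscale rendering (ON → 255, OFF → 0, no event → 128) are assumptions chosen to match the description of the sample images:

```python
import numpy as np

def temporal_contrast(prev_frame, curr_frame, theta=15):
    """Emulate a DVS-style event map from two consecutive grayscale frames.

    Returns an int8 array: +1 where a pixel brightened by more than
    theta, -1 where it darkened by more than theta, 0 otherwise.
    """
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    events = np.zeros(diff.shape, dtype=np.int8)
    events[diff > theta] = 1    # ON event: pixel got brighter
    events[diff < -theta] = -1  # OFF event: pixel got darker
    return events

def to_image(events):
    """Render events with the dataset's grayscale convention:
    ON -> 255 (white), OFF -> 0 (black), no event -> 128 (gray)."""
    return np.select([events == 1, events == -1], [255, 0], 128).astype(np.uint8)
```

Static pixels produce no events at all, which is why each sample contains only the moving hand contour.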
## Data Collection
- Participants: 5 individuals with varying hand shapes and sizes
- Recording: Standard consumer webcam; temporal contrast emulation applied in post-processing
- Volume: 100 samples per class per participant × 26 classes × 5 participants = 13,000 total
- Dynamic letters: ASL letters J and Z involve motion. For consistency with static gestures, only the final hand position was captured
- Quality control: All samples were manually reviewed; blurry frames or incorrect gestures were discarded
## Usage

```python
from datasets import load_dataset

ds = load_dataset("keshavshankar08/TCASL")

# Access a split
train = ds["train"]
print(train[0])  # {"image": <PIL.Image>, "label": 0}

# Label index -> letter mapping
label_names = train.features["label"].names  # ["a", "b", ..., "z"]
```
## Benchmark Results
The following table shows top-1 accuracy across architectures on the TCASL test set.
| Architecture | Accuracy |
|---|---|
| LeNet-5 | 82.3% |
| Hybrid Transformer | 92.5% |
| RG-CNN | 96.8% |
| SDNN (ours) | 98.3% |
| STBP-SNN | 98.6% |
The custom SDNN achieves 98.3% accuracy and runs at over 200 FPS on a standard laptop CPU (Apple M1), with no GPU required.
## Motivation
Millions of people rely on sign language to communicate, yet real-time translation tools typically require expensive hardware or high-end GPUs. TCASL addresses this by bringing neuromorphic vision to standard webcams through software emulation, enabling accessible, privacy-preserving ASL recognition at the edge.
## Related Work

This dataset was created alongside the TCASL Learner, a real-time "Spelling Bee" game application that runs finger-spelling recognition entirely on a consumer laptop. For full details on the architecture, training paradigm, and experimental results, see the project's GitHub repository.
## Citation

```bibtex
@misc{tcasl2026,
  title={TCASL: Real-Time American Sign Language Classification via Temporal Contrast Emulation},
  author={Keshav Shankar and Nathaniel Ginck},
  year={2026},
  note={Technical Report},
  url={https://github.com/keshavshankar08/TCASL}
}
```
## License
This dataset is released under the MIT License.