| --- |
| license: gpl-3.0 |
| task_categories: |
| - tabular-classification |
| - tabular-regression |
| - image-to-3d |
| - depth-estimation |
| pretty_name: AD Trajectories |
| size_categories: |
| - 100K<n<1M |
| --- |
| |
| **Paper in the making** |
|
|
| --- |
|
|
| # AD-Trajectories Dataset |
This dataset was created for the Master's thesis "From Broadcast to 3D: A Deep Learning Approach for Tennis Trajectory and Spin Estimation" by Alexandra Göppert at the University of Augsburg, Chair of Machine Learning and Computer Vision.
The AD-Rallies dataset is a large-scale synthetic dataset generated with the MuJoCo physics engine. It was built to help bridge the synthetic-to-real gap by providing highly accurate physical models of aerodynamic forces, such as the Magnus effect, and of complex ball-court interactions.
|
|
| --- |
| ## Dataset Overview |
|
|
The dataset comprises approximately 3.2 million synthetic tennis rallies. Each rally starts with a ball toss and a serve, after which up to 4 further basic strokes (groundstroke, volley, lob, short ball, and smash) can be added. All physical kinematics, including the 3D positions, linear velocities, and angular velocities (spin), are captured at a high temporal resolution of 500 frames per second (fps), corresponding to a time step of 0.002 seconds.
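The relationship between the 500 fps sampling rate and the 0.002 s time step can be sketched as follows; the rally length used here is a hypothetical example, not taken from the dataset:

```python
import numpy as np

FPS = 500        # sampling rate of the simulation
DT = 1.0 / FPS   # time step between frames: 0.002 s

# For a trajectory with N recorded frames, the timestamp of frame i is i * DT.
n_frames = 1500  # hypothetical rally length (3 seconds of play)
timestamps = np.arange(n_frames) * DT

print(timestamps[1])   # 0.002
print(timestamps[-1])  # 2.998
```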
|
|
The dataset is distributed as a single .tar file because it consists of 3.2 million .npz files. The tar file has a size of 87 GB.
Each .npz file contains the position, velocity, and angular velocity of the ball throughout the whole rally.
|
|
The .npz files are named as follows:

`toss_xxxxx_branch_yyy.npz` or `toss_xxxxx_branch_yyy_deadend.npz`

Here, `xxxxx` is the index of one of the 20,000 tosses that were initially simulated and start the rally. Combined with a serve, each toss forms the stem of a rally. For each stem rally, up to 4 returns are added; these are numbered in increasing order as `branch_yyy`. At most 160 rallies can emerge from one toss-serve combination.
If no feasible return is found, the rally is not extended further (even though it has fewer than the total of 6 shots). Such a rally is marked with `_deadend`.
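The naming scheme above can be parsed mechanically. Below is a small sketch of a hypothetical helper (not part of the dataset) that extracts the toss index, branch index, and dead-end flag from a filename:

```python
import re

# Matches toss_xxxxx_branch_yyy.npz, with an optional _deadend suffix.
PATTERN = re.compile(r"toss_(\d+)_branch_(\d+)(_deadend)?\.npz")

def parse_name(filename):
    """Return (toss_id, branch_id, is_deadend) for a trajectory filename."""
    m = PATTERN.fullmatch(filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    return int(m.group(1)), int(m.group(2)), m.group(3) is not None

print(parse_name("toss_00042_branch_007.npz"))          # (42, 7, False)
print(parse_name("toss_00042_branch_007_deadend.npz"))  # (42, 7, True)
```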
| |
| --- |
| # Data Structures per Trajectory |
Inside each .npz archive you will find exactly seven .npy files. These NumPy arrays store the spatial, temporal, and camera data for that specific sequence. The three kinematic arrays are:
| |
- `positions.npy`: the 3D position of the ball (x, y, z) throughout the rally, recorded at a resolution of 0.002 s.
- `velocities.npy`: the linear velocity of the ball relative to the world coordinate system, recorded at a resolution of 0.002 s.
- `rotations.npy`: the angular velocity (spin) of the ball about all 3 axes, recorded at a resolution of 0.002 s.
| |
The position and velocity are defined relative to the 3D world coordinate system, which is defined as follows:
| <img src="./3d_coordinate_system_in_field.png" alt="Coordinate system definition of 3D world coordinates" style="width:50%; height:auto;" > |
| |
The ball spin (`rotations.npy`) is defined relative to the ball's local coordinate system, whose axes are defined as follows:
| <img src="./Screenshot 2026-04-20 194926.png" alt="Definition of the ball's local coordinate system" > |
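A single trajectory file can be inspected with `numpy.load`. The sketch below first writes a tiny synthetic .npz with the documented array names so the example is self-contained; with the real data you would load an extracted file such as one named per the scheme above:

```python
import numpy as np

# Write a tiny synthetic trajectory (5 frames) with the documented array names.
n = 5
np.savez("toss_00000_branch_000.npz",
         positions=np.zeros((n, 3)),
         velocities=np.zeros((n, 3)),
         rotations=np.zeros((n, 3)))

# Load it back and inspect the arrays.
with np.load("toss_00000_branch_000.npz") as data:
    positions = data["positions"]    # (N, 3): x, y, z in world coordinates
    velocities = data["velocities"]  # (N, 3): linear velocity, world frame
    rotations = data["rotations"]    # (N, 3): angular velocity (spin), ball frame
    print(positions.shape)  # (5, 3)
```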
| |
| --- |
| |
| # Download the Dataset |
| |
You can download the tar file directly using the `hf_hub_download` function. This is more efficient than cloning the entire repository if you only need the archive.
```python
from huggingface_hub import hf_hub_download

REPO_ID = "XSpaceCoderX/AD-Rallies"
FILENAME = "data.tar"

print(f"Downloading {FILENAME}...")

local_path = hf_hub_download(
    repo_id=REPO_ID,
    filename=FILENAME,
    repo_type="dataset",
)

print(f"File downloaded to: {local_path}")
```
| |
| ## Unpack the Dataset |
| |
Once downloaded, you can extract the contents using Python's built-in `tarfile` module or a system command.

**Option A: Using Python (cross-platform)**
| |
| This is the recommended way to ensure compatibility across Windows, macOS, and Linux. |
| |
```python
import tarfile

def extract_tar(file_path, extract_path="."):
    print(f"Extracting {file_path}...")
    with tarfile.open(file_path, "r") as tar:
        tar.extractall(path=extract_path)
    print("Extraction complete!")

# Example: extract_tar(local_path, "ad_rallies")
```
| |
Note: Make sure you have at least 190 GB of free disk space (87 GB for the archive plus roughly 100 GB for the extracted contents).
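If you cannot spare the disk space for a full extraction, the archive can also be read member by member without unpacking it. Below is a sketch of a hypothetical streaming reader (not part of the dataset tooling) that loads each .npz in memory as it is encountered:

```python
import io
import tarfile

import numpy as np

def iter_trajectories(tar_path):
    """Yield (filename, arrays) pairs from the tar without extracting to disk."""
    with tarfile.open(tar_path, "r") as tar:
        for member in tar:
            if not member.name.endswith(".npz"):
                continue
            f = tar.extractfile(member)
            if f is None:  # skip directories and special entries
                continue
            # Load the .npz from an in-memory buffer and copy out its arrays.
            with np.load(io.BytesIO(f.read())) as data:
                yield member.name, {key: data[key] for key in data.files}
```

This trades disk usage for per-file decompression time, which is usually acceptable when iterating over the archive once (e.g. to build a training index).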