# Dynamic Intelligence - Egocentric Human Motion Annotation Dataset

RGB-D hand-manipulation dataset captured with the iPhone 13 TrueDepth sensor for humanoid robot training. Includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.
## Dataset Overview
| Metric | Value |
|---|---|
| Episodes | 97 |
| Total Frames | ~28,000 |
| FPS | 30 |
| Tasks | 10 manipulation tasks |
| Total Duration | ~15.5 minutes |
| Avg Episode Length | ~9.6 seconds |
### Task Distribution
| Task ID | Description | Episodes |
|---|---|---|
| Task 1 | Fold the white t-shirt on the bed | 8 |
| Task 2 | Fold the jeans on the bed | 10 |
| Task 3 | Fold two underwear and stack them | 10 |
| Task 4 | Put the pillow in the right place | 10 |
| Task 5 | Pick up plate and glass, put on stove | 10 |
| Task 6 | Go out the door and close it | 9 |
| Task 7 | Pick up sandals, put next to scale | 10 |
| Task 8 | Put cloth in basket, close drawer | 10 |
| Task 9 | Screw the cap on your bottle | 10 |
| Task 10 | Pick up two objects, put on bed | 10 |
## Repository Structure
```text
humanoid-robots-training-dataset/
│
├── data/
│   └── chunk-000/                          # Parquet files (97 episodes)
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
│
├── videos/
│   └── chunk-000/rgb/                      # MP4 videos (synchronized)
│       ├── episode_000000.mp4
│       └── ...
│
├── meta/                                   # Metadata & annotations
│   ├── info.json                           # Dataset configuration (LeRobot format)
│   ├── stats.json                          # Feature min/max/mean/std statistics
│   ├── events.json                         # Disturbance & recovery annotations
│   ├── depth_quality_summary.json          # Per-episode depth QC metrics
│   └── annotations_motion_v1_frames.json   # Motion semantics annotations
│
└── README.md
```
## Data Schema

### Parquet Columns (per frame)
| Column | Type | Description |
|---|---|---|
| `episode_index` | int64 | Episode number (0-96) |
| `frame_index` | int64 | Frame within episode |
| `timestamp` | float64 | Time in seconds |
| `language_instruction` | string | Task description |
| `observation.state` | float[252] | 21 hand joints × 2 hands × 6 DoF |
| `action` | float[252] | Same as state (for imitation learning) |
| `observation.images.rgb` | struct | Video path + timestamp |
### 6-DoF Hand Pose Format

Each joint has 6 values: `[x_cm, y_cm, z_cm, yaw_deg, pitch_deg, roll_deg]`
**Coordinate System:**
- Origin: Camera (iPhone TrueDepth)
- X: Right (positive)
- Y: Down (positive)
- Z: Forward (positive, into scene)
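The 252-dim state vector can be unpacked per hand and joint. A minimal sketch with NumPy; the packing order assumed here (hand-major, then joint, then the 6-DoF tuple) is an illustration and should be checked against `meta/info.json`:

```python
import numpy as np

# Dummy vector standing in for one frame's observation.state.
# Assumed packing: 2 hands x 21 joints x 6 DoF = 252 values
# (hand-major ordering is an assumption, not confirmed by the dataset docs).
state = np.arange(252, dtype=np.float64)

poses = state.reshape(2, 21, 6)    # (hand, joint, [x, y, z, yaw, pitch, roll])
left_hand = poses[0]               # (21, 6) joints of one hand
wrist_xyz_cm = left_hand[0, :3]    # position in cm, camera frame
wrist_ypr_deg = left_hand[0, 3:]   # yaw/pitch/roll in degrees
```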
## Motion Semantics Annotations

**File:** `meta/annotations_motion_v1_frames.json`

Coarse temporal segmentation with motion intent, phase, and error labels.

### Annotation Schema
```jsonc
{
  "episode_id": "Task1_Vid2",
  "segments": [
    {
      "start_frame": 54,
      "end_frame_exclusive": 140,
      "motion_type": "grasp",        // What action is being performed
      "temporal_phase": "start",     // start | contact | manipulate | end
      "actor": "both_hands",         // left_hand | right_hand | both_hands
      "target": {
        "type": "cloth_region",      // cloth_region | object | surface
        "value": "bottom_edge"       // Specific target identifier
      },
      "state": {
        "stage": "unfolded",         // Task-specific state
        "flatness": "wrinkled",      // For folding tasks only
        "symmetry": "asymmetric"     // For folding tasks only
      },
      "error": "none"                // misalignment | slip | drop | none
    }
  ]
}
```
### Motion Types

`grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`
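Segments can be sanity-checked against this vocabulary. A minimal sketch; the helper name is ours, not part of the dataset tooling:

```python
# Allowed motion vocabulary, copied from the list above
MOTION_TYPES = {
    "grasp", "pull", "align", "fold", "smooth", "insert", "rotate",
    "open", "close", "press", "hold", "release", "place",
}

def is_valid_segment(seg):
    """Minimal check of one annotation segment (illustrative helper)."""
    return (
        seg["motion_type"] in MOTION_TYPES
        and seg["start_frame"] < seg["end_frame_exclusive"]
    )

ok = is_valid_segment(
    {"motion_type": "grasp", "start_frame": 54, "end_frame_exclusive": 140}
)
```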
### Why Motion Annotations?

- **Temporal Structure**: Know when manipulation phases begin and end
- **Intent Understanding**: What the human intends to do, not just kinematics
- **Error Detection**: Labeled failure modes (slip, drop, misalignment)
- **Training Signal**: Richer supervision for imitation learning
## Events Metadata

**File:** `meta/events.json`

Disturbances and recovery actions for select episodes.

### Disturbance Types
| Type | Description |
|---|---|
| `OCCLUSION` | Hand temporarily blocked from camera |
| `TARGET_MOVED` | Object shifted unexpectedly |
| `SLIP` | Object slipped during grasp |
| `COLLISION` | Unintended contact |
| `DEPTH_DROPOUT` | Depth sensor lost valid readings |
### Recovery Actions

| Action | Description |
|---|---|
| `REGRASP` | Release and re-acquire object |
| `REACH_ADJUST` | Modify approach trajectory |
| `ABORT` | Stop current action |
| `REPLAN` | Compute new action sequence |
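As a sketch of how these labels might be consumed, the snippet below tallies disturbance/recovery pairs. The per-episode record layout shown here is hypothetical; check `meta/events.json` for the actual structure:

```python
import json
from collections import Counter

# Hypothetical events.json fragment; the real file's layout may differ.
events = json.loads("""
{
  "Task1_Vid2": [
    {"frame": 210, "disturbance": "SLIP", "recovery": "REGRASP"},
    {"frame": 480, "disturbance": "OCCLUSION", "recovery": "REPLAN"}
  ],
  "Task5_Vid1": [
    {"frame": 95, "disturbance": "SLIP", "recovery": "REGRASP"}
  ]
}
""")

# Tally (disturbance, recovery) pairs across all annotated episodes
pairs = Counter(
    (e["disturbance"], e["recovery"]) for eps in events.values() for e in eps
)
```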
## Depth Quality Metrics

**File:** `meta/depth_quality_summary.json`

| Metric | Description | Dataset Average |
|---|---|---|
| `valid_depth_pct` | % of frames with valid depth at the hand | 95.5% |
| `plane_rms_mm` | RMS deviation from a flat reference surface | 5.73 mm |
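These per-episode metrics can be used to filter out low-quality depth episodes before training. This sketch assumes a simple episode-to-metrics mapping, which may not match the real file layout:

```python
import json

# Hypothetical depth_quality_summary.json fragment; field names follow
# the table above, but the real per-episode layout may differ.
summary = json.loads("""
{
  "episode_000000": {"valid_depth_pct": 97.1, "plane_rms_mm": 4.9},
  "episode_000001": {"valid_depth_pct": 88.4, "plane_rms_mm": 7.2}
}
""")

# Keep episodes whose hand depth is valid in at least 90% of frames
good_episodes = sorted(
    ep for ep, m in summary.items() if m["valid_depth_pct"] >= 90.0
)
```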
## Usage

### With LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

# Access a single frame sample (indexing yields per-frame data)
sample = dataset[0]
state = sample["observation.state"]       # [252] hand pose (both hands)
rgb = sample["observation.images.rgb"]    # Video frame
task = sample["language_instruction"]     # Task description
```
### Loading Motion Annotations

```python
import json

from huggingface_hub import hf_hub_download

# Download the annotation file from the dataset repo
path = hf_hub_download(
    repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
    filename="meta/annotations_motion_v1_frames.json",
    repo_type="dataset",
)
with open(path) as f:
    annotations = json.load(f)

# Get segments for Task1
task1_episodes = annotations["tasks"]["Task1"]["episodes"]
for ep in task1_episodes:
    print(f"{ep['episode_id']}: {len(ep['segments'])} segments")
```
### Combining Pose + Annotations

```python
# Map a frame index to its motion label using the half-open segment intervals
def get_motion_label(frame_idx, segments):
    for seg in segments:
        if seg["start_frame"] <= frame_idx < seg["end_frame_exclusive"]:
            return seg["motion_type"], seg["temporal_phase"]
    return None, None

# Example: label each frame (frame_index.max() is inclusive, hence +1)
for frame_idx in range(episode["frame_index"].max() + 1):
    motion, phase = get_motion_label(frame_idx, episode_annotations["segments"])
    if motion:
        print(f"Frame {frame_idx}: {motion} ({phase})")
```
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{dynamic_intelligence_2024,
  author    = {Dynamic Intelligence},
  title     = {Egocentric Human Motion Annotation Dataset},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
}
```
## Contact

- Email: [email protected]
- Organization: Dynamic Intelligence