Datasets: update readme.md

README.md CHANGED

@@ -2,10 +2,11 @@
 license: apache-2.0
 tags:
 - robotics
+- smolvla
 - community
+- vlab
 - so100
 - manipulation
-- smolvla
 - lerobot
 - vision-language-action
 - embodied-ai
@@ -22,7 +23,7 @@ pretty_name: Community Dataset v1
 
 A large-scale community-contributed robotics dataset for vision-language-action learning, featuring **128 datasets** from **55 contributors** worldwide.
 
-We used this dataset to pretrain SmolVLA
+We used this dataset to pretrain [SmolVLA](https://huggingface.co/lerobot/smolvla_base). This is not the complete collection: we selected datasets using quantitative filters (fps, minimum number of episodes) and a qualitative assessment of video quality with the [FilterLeRobotData](https://huggingface.co/spaces/Beegbrain/FilterLeRobotData) tool, and we manually curated the task descriptions for this subset.
 
 ## Overview
 
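The selection criteria described in the dataset card (fps and a minimum number of episodes, plus a qualitative video check) can be sketched as a simple predicate. The thresholds and the metadata layout below are illustrative assumptions, not the values actually used for the release:

```python
def keep_dataset(meta: dict, min_fps: int = 25, min_episodes: int = 10) -> bool:
    """Return True when a candidate dataset passes the quantitative filters.

    Thresholds are hypothetical; the qualitative video check was done
    manually with the FilterLeRobotData tool and is not modeled here.
    """
    return meta["fps"] >= min_fps and meta["num_episodes"] >= min_episodes

candidates = [
    {"repo_id": "user_a/pick_place", "fps": 30, "num_episodes": 52},
    {"repo_id": "user_b/low_fps_demo", "fps": 10, "num_episodes": 40},
]
kept = [m["repo_id"] for m in candidates if keep_dataset(m)]
print(kept)  # ['user_a/pick_place']
```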
@@ -82,28 +83,7 @@ Each dataset follows the LeRobot format standard, ensuring compatibility with ex
 
 ## Usage
 
-**1. Install LeRobot**
-
-Follow the [official LeRobot installation guide](https://huggingface.co/docs/lerobot/installation):
-
-```bash
-# Create conda environment with Python 3.10
-conda create -y -n lerobot python=3.10
-conda activate lerobot
-
-# Install ffmpeg (required for video processing)
-conda install ffmpeg -c conda-forge
-
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-
-# Install LeRobot from source
-pip install -e .
-```
-
-**2. Authenticate with Hugging Face**
+**1. Authenticate with Hugging Face**
 
 You need to be logged in to access the dataset:
 
@@ -120,14 +100,9 @@ Get your token from [https://huggingface.co/settings/tokens](https://huggingface
 ### Download the Dataset
 
-```python
-
-
-
-dataset_path = snapshot_download(
-    repo_id="HuggingFaceVLA/community_dataset_v1",
-    repo_type="dataset",
-    local_dir="./community_dataset_v1"
-)
+```bash
+hf download HuggingFaceVLA/community_dataset_v1 \
+  --repo-type=dataset \
+  --local-dir /path/local_dir/community_dataset_v1
 ```
 
 ### Load Individual Datasets
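For scripted or CI downloads, the `hf download` invocation from the dataset card can be assembled and inspected before running it. `build_download_cmd` is a hypothetical helper for illustration, not part of the `hf` CLI:

```python
import shlex

def build_download_cmd(repo_id: str, local_dir: str) -> list[str]:
    """Assemble the `hf download` invocation shown in the dataset card."""
    return [
        "hf", "download", repo_id,
        "--repo-type=dataset",
        f"--local-dir={local_dir}",
    ]

cmd = build_download_cmd("HuggingFaceVLA/community_dataset_v1", "./community_dataset_v1")
print(shlex.join(cmd))
# hf download HuggingFaceVLA/community_dataset_v1 --repo-type=dataset --local-dir=./community_dataset_v1
```

To execute it, pass the list to `subprocess.run(cmd, check=True)` so argument quoting is handled for you.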
@@ -154,14 +129,30 @@
 print(f"Total frames: {len(dataset)}")
 ```
 
-### Integration with SmolVLA
+### Integration with SmolVLA pretraining framework
+
+This dataset is designed for training VLA models.
+You can download this dataset and use it with [VLAb](https://github.com/huggingface/VLAb/tree/main), the Vision-Language-Action model training framework:
+
+1. Visit the VLAb repository.
+2. Follow the training instructions in the repo.
+3. Point the training script to this dataset:
 
-```python
-
-
-
-
-
+```bash
+accelerate launch --config_file accelerate_configs/multi_gpu.yaml \
+  src/lerobot/scripts/train.py \
+  --policy.type=smolvla2 \
+  --policy.repo_id=HuggingFaceTB/SmolVLM2-500M-Video-Instruct \
+  --dataset.repo_id="community_dataset_v1/AndrejOrsula/lerobot_double_ball_stacking_random,community_dataset_v1/aimihat/so100_tape" \
+  --dataset.root="local/path/to/datasets" \
+  --dataset.video_backend=pyav \
+  --dataset.features_version=2 \
+  --output_dir="./outputs/training" \
+  --batch_size=8 \
+  --steps=200000 \
+  --wandb.enable=true \
+  --wandb.project="smolvla2-training"
 
 ```
 
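In the training command, `--dataset.repo_id` takes a comma-separated list of dataset ids which, as the paired `--dataset.root` flag suggests, are resolved against the local root directory. A sketch of that resolution; `resolve_dataset_dirs` is an illustrative helper, not a function in LeRobot or VLAb:

```python
from pathlib import Path

def resolve_dataset_dirs(repo_ids: str, root: str) -> list[Path]:
    """Split the comma-separated --dataset.repo_id value and join each
    entry onto --dataset.root to get the expected local dataset folders."""
    return [Path(root) / rid.strip() for rid in repo_ids.split(",") if rid.strip()]

dirs = resolve_dataset_dirs(
    "community_dataset_v1/AndrejOrsula/lerobot_double_ball_stacking_random,"
    "community_dataset_v1/aimihat/so100_tape",
    "local/path/to/datasets",
)
print(dirs[-1].as_posix())  # local/path/to/datasets/community_dataset_v1/aimihat/so100_tape
```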
@@ -195,14 +186,6 @@ This dataset is designed for:
 - **Multi-task policy development**
 - **Embodied AI research**
 
-## Revisions
-
-- **v1.0**: Initial community collection
-  - 128 datasets from 55 contributors
-  - Standardized LeRobot format
-  - Quality filtering and validation
-  - Comprehensive metadata
-
 ## Community Contributions
 
 This dataset exists thanks to the generous contributions from researchers, hobbyists, and institutions worldwide. Each dataset represents hours of careful data collection and curation.
@@ -217,13 +200,9 @@ Future contributions should follow:
 
 Check the [blogpost](https://huggingface.co/blog/lerobot-datasets) for more information.
 
-## License
-
-Released under the Apache 2.0 license. Individual datasets may have additional attribution requirements; please check contributor documentation.
-
 ## Related Work
 
-- [
+- [VLAb Framework](https://github.com/huggingface/VLAb)
 - [SmolVLA model](https://huggingface.co/lerobot/smolvla_base)
 - [SmolVLA Blogpost](https://huggingface.co/blog/smolvla)
 - [SmolVLA Paper](https://huggingface.co/papers/2506.01844)