Update README.md

pretty_name: STAR
size_categories:
- 100M<n<1B
---
# STAR Dataset

The **STAR (Super-Resolution for Astronomical Star Fields)** dataset is a large-scale benchmark for developing field-level super-resolution models in astronomy. It contains **54,738 flux-consistent image pairs** derived from high-resolution Hubble Space Telescope (HST) observations paired with physically faithful low-resolution counterparts. The dataset addresses three key challenges in astronomical super-resolution:

- **Flux Inconsistency**: addressed with a flux-preserving data generation pipeline that keeps flux consistent between each HR/LR pair.
- **Object-Crop Configuration**: patches are strategically sampled across diverse celestial regions.
- **Data Diversity**: coverage includes dense star clusters, sparse galactic fields, and regions with varying background noise.

The dataset provides x2 and x4 scaling pairs in `.npy` format, suitable for training and evaluating super-resolution models.
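As a quick sanity check of the flux-consistent pairing, one can compare the total flux of an HR patch with its LR counterpart. This is a minimal sketch; the patch file name is hypothetical and the tolerance is illustrative:

```python
import numpy as np

# Hypothetical file name; substitute any HR/LR pair from the sample data.
hr = np.load("sampled_data/x2/train_hr_patch/patch_0001.npy")
lr = np.load("sampled_data/x2/train_lr_patch/patch_0001.npy")

print(hr.shape, lr.shape)  # HR should have 2x the LR resolution for x2 pairs
# Flux preservation: summed flux should agree between the pair.
print(np.isclose(hr.sum(), lr.sum(), rtol=1e-3))  # illustrative tolerance
```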
## Structure

- **Full Data**:
  - `data/x2/x2.tar.gz` (33 GB): full x2 dataset.
  - `data/x4/x4.tar.gz` (29 GB): full x4 dataset.
  - Unzip to access `train_hr_patch/`, `train_lr_patch/`, `eval_hr_patch/`, and `eval_lr_patch/` (`.npy` files), plus `dataload_filename/` (txt files listing the HR/LR pairs; see the sketch after this list).
- **Sample Data** (for testing and Croissant, x2 only):
  - `sampled_data/x2/`: sample `.npy` pairs (500 train pairs, 100 eval pairs).
    - `train_hr_patch/`, `train_lr_patch/` (500 files each)
    - `eval_hr_patch/`, `eval_lr_patch/` (100 files each)
  - Croissant metadata: `sampled_data/x2/croissant.json`
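For the full data, the txt files under `dataload_filename/` list the HR/LR pairs and can drive loading directly. The exact layout of these files is not documented here; the sketch below assumes one whitespace-separated `hr_path lr_path` pair per line, and the file name `train_pairs.txt` is hypothetical:

```python
import numpy as np

# Assumption: each non-empty line holds "hr_path lr_path".
with open("dataload_filename/train_pairs.txt") as f:
    pairs = [line.split() for line in f if line.strip()]

hr_path, lr_path = pairs[0]
hr, lr = np.load(hr_path), np.load(lr_path)
print(hr.shape, lr.shape)
```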
## Loading Sample
```python
from datasets import Dataset
import numpy as np
import glob

# Collect paired HR/LR sample files; sorting keeps the pairs aligned.
train_hr_files = sorted(glob.glob("sampled_data/x2/train_hr_patch/*.npy"))
train_lr_files = [f.replace("hr_patch", "lr_patch") for f in train_hr_files]
eval_hr_files = sorted(glob.glob("sampled_data/x2/eval_hr_patch/*.npy"))
eval_lr_files = [f.replace("hr_patch", "lr_patch") for f in eval_hr_files]

data_dict = {
    "hr_data": [np.load(f) for f in train_hr_files + eval_hr_files],
    "lr_data": [np.load(f) for f in train_lr_files + eval_lr_files],
    "split": ["train"] * len(train_hr_files) + ["eval"] * len(eval_hr_files),
}
dataset = Dataset.from_dict(data_dict)
dataset.set_format("numpy")  # return numpy arrays instead of nested lists
print(dataset[0]["hr_data"].shape)  # example access
```
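For training, the loaded sample can be filtered by split and served as tensors. A minimal sketch assuming PyTorch is installed and the patches share a fixed size; the batch size is illustrative:

```python
from torch.utils.data import DataLoader

# Keep only the training pairs and return torch tensors instead of arrays.
train_ds = dataset.filter(lambda ex: ex["split"] == "train").with_format("torch")

loader = DataLoader(train_ds, batch_size=16, shuffle=True)
batch = next(iter(loader))
print(batch["hr_data"].shape, batch["lr_data"].shape)
```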
|