---
license: mit
language:
- en
pretty_name: STAR
size_categories:
- 100M<n<1B
task_categories:
- image-to-image
tags:
- astronomy
- super-resolution
- computer-vision
- hubble-space-telescope
configs:
- config_name: default
  data_files:
  - split: train
    path: sampled_data/x2/train_metadata.jsonl
  - split: validation
    path: sampled_data/x2/validation_metadata.jsonl
---

# STAR Dataset (Super-Resolution for Astronomical Star Fields)

The **STAR** dataset is a large-scale benchmark for developing field-level super-resolution models in astronomy. It contains **54,738 flux-consistent image pairs**, each consisting of a high-resolution Hubble Space Telescope (HST) observation and a physically faithful low-resolution counterpart.

## 🌟 Key Features

- **Flux Consistency**: LR counterparts are generated with a flux-preserving pipeline, so total flux is consistent between HR and LR images
- **Object-Crop Configuration**: Strategically samples patches across diverse celestial regions
- **Data Diversity**: Covers dense star clusters, sparse galactic fields, and regions with varying background noise

## 📊 Dataset Structure

```
KUOCHENG/STAR/
├── sampled_data/x2/              # ⚠️ SAMPLE ONLY: 600 pairs for quick testing/exploration
│   ├── train_hr_patch/           # 500 HR training patches (.npy files)
│   ├── train_lr_patch/           # 500 LR training patches (.npy files)
│   ├── eval_hr_patch/            # 100 HR validation patches (.npy files)
│   ├── eval_lr_patch/            # 100 LR validation patches (.npy files)
│   ├── train_metadata.jsonl      # Training pairs metadata
│   └── validation_metadata.jsonl # Validation pairs metadata
└── data/
    ├── x2/x2.tar.gz              # Full x2 dataset (33 GB)
    └── x4/x4.tar.gz              # Full x4 dataset (29 GB)
```

⚠️ **Important Note**: The `sampled_data/` directory contains only a small subset (600 pairs) for quick testing and for understanding the data structure. For actual training and research, please use the full datasets in the `data/` directory.

## 🚀 Quick Start

### Loading the Dataset

```python
from datasets import load_dataset
import numpy as np

# Load metadata
dataset = load_dataset("KUOCHENG/STAR")

# Access a sample
sample = dataset['train'][0]
hr_path = sample['hr_path']  # Path to HR .npy file
lr_path = sample['lr_path']  # Path to LR .npy file

# Load actual data
hr_data = np.load(hr_path, allow_pickle=True).item()
lr_data = np.load(lr_path, allow_pickle=True).item()
```

### Understanding the Data Format

Each `.npy` file contains a dictionary with the following structure:

#### High-Resolution (HR) Data
- **Shape**: `(256, 256)` for all HR patches
- **Access Keys**:
  ```python
  hr_data['image']     # The actual grayscale astronomical image
  hr_data['mask']      # Binary mask (True = valid/accessible pixels)
  hr_data['attn_map']  # Attention map from star finder (detected astronomical sources)
  hr_data['coord']     # Coordinate information (if available)
  ```

#### Low-Resolution (LR) Data
- **Shape**: Depends on the super-resolution scale
  - For x2: `(128, 128)`
  - For x4: `(64, 64)`
- **Access Keys**: Same as HR data
  ```python
  lr_data['image']     # Downsampled grayscale image
  lr_data['mask']      # Downsampled mask
  lr_data['attn_map']  # Downsampled attention map
  lr_data['coord']     # Coordinate information
  ```

### Data Fields Explanation

| Field | Description | Type | Usage |
|-------|-------------|------|-------|
| `image` | Raw astronomical observation data | `np.ndarray` (float32) | Main input for super-resolution |
| `mask` | Valid pixel indicator | `np.ndarray` (bool) | Identifies accessible regions (True = valid) |
| `attn_map` | Star finder output | `np.ndarray` (float32) | Highlights detected astronomical sources (stars, galaxies) |
| `coord` | Spatial coordinates | `np.ndarray` | Position information for patch alignment |
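
As a quick sanity check, you can inspect these fields on one sample (a small sketch reusing `hr_data` and `lr_data` from the Quick Start snippet above):

```python
import numpy as np

# Print the shape and dtype of every field in one HR/LR pair
for name, data in [('HR', hr_data), ('LR', lr_data)]:
    print(f"--- {name} ---")
    for key, value in data.items():
        if isinstance(value, np.ndarray):
            print(f"  {key}: shape={value.shape}, dtype={value.dtype}")
        else:
            print(f"  {key}: {type(value).__name__}")
```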

## 💻 Usage Examples

### Basic Training Loop

⚠️ NOTE: For the complete training and testing code/framework, please see the [GitHub repository](https://github.com/GuoCheng12/STAR).

```python
import numpy as np
from datasets import load_dataset
import torch
from torch.utils.data import DataLoader

# Load dataset
dataset = load_dataset("KUOCHENG/STAR")

class STARDataset(torch.utils.data.Dataset):
    def __init__(self, hf_dataset):
        self.dataset = hf_dataset
    
    def __len__(self):
        return len(self.dataset)
    
    def __getitem__(self, idx):
        sample = self.dataset[idx]
        
        # Load .npy files
        hr_data = np.load(sample['hr_path'], allow_pickle=True).item()
        lr_data = np.load(sample['lr_path'], allow_pickle=True).item()
        
        # Extract images
        hr_image = hr_data['image'].astype(np.float32)
        lr_image = lr_data['image'].astype(np.float32)
        
        # Extract masks for loss computation
        hr_mask = hr_data['mask'].astype(np.float32)
        lr_mask = lr_data['mask'].astype(np.float32)
        
        # Convert to tensors
        return {
            'lr_image': torch.from_numpy(lr_image).unsqueeze(0),  # Add channel dim
            'hr_image': torch.from_numpy(hr_image).unsqueeze(0),
            'hr_mask': torch.from_numpy(hr_mask).unsqueeze(0),
            'lr_mask': torch.from_numpy(lr_mask).unsqueeze(0),
        }

# Create PyTorch dataset and dataloader
train_dataset = STARDataset(dataset['train'])
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

# Training loop example
for batch in train_loader:
    lr_images = batch['lr_image']  # [B, 1, 128, 128] for x2
    hr_images = batch['hr_image']  # [B, 1, 256, 256]
    masks = batch['hr_mask']       # [B, 1, 256, 256]
    
    # Your training code here
    # pred = model(lr_images)
    # loss = criterion(pred * masks, hr_images * masks)  # Apply mask to focus on valid regions
```

### 🔭 Astronomical Image Visualization

Astronomical images have an extreme dynamic range, containing both very bright stars and faint background features, so direct visualization often shows only the brightest sources. Special normalization techniques are needed for proper visualization.

```python
import matplotlib.pyplot as plt
import numpy as np
from astropy.visualization import ZScaleInterval, ImageNormalize

# If astropy is not installed: pip install astropy

def z_scale_normalize(image, contrast=0.25):
    """
    Apply Z-scale normalization for astronomical images.
    This technique enhances faint features while preventing bright stars from saturating.
    
    Args:
        image: Input astronomical image
        contrast: Contrast parameter (default 0.25, lower = more contrast)
    
    Returns:
        Normalized image suitable for visualization
    """
    # Remove NaN and Inf values
    image_clean = np.nan_to_num(image, nan=0.0, posinf=0.0, neginf=0.0)
    
    interval = ZScaleInterval(contrast=contrast)
    vmin, vmax = interval.get_limits(image_clean)
    norm = ImageNormalize(vmin=vmin, vmax=vmax)
    return norm(image_clean)

def visualize_sample(hr_path, lr_path):
    # Load data
    hr_data = np.load(hr_path, allow_pickle=True).item()
    lr_data = np.load(lr_path, allow_pickle=True).item()
    
    fig, axes = plt.subplots(2, 3, figsize=(15, 10))
    # Apply Z-scale normalization to the image arrays before display
    hr_image_vis = z_scale_normalize(hr_data['image'])
    lr_image_vis = z_scale_normalize(lr_data['image'])

    # HR visualizations
    axes[0, 0].imshow(hr_image_vis, cmap='gray')
    axes[0, 0].set_title('HR Image (256x256)')
    
    axes[0, 1].imshow(hr_data['mask'], cmap='binary')
    axes[0, 1].set_title('HR Mask (Valid Regions)')
    
    axes[0, 2].imshow(hr_data['attn_map'], cmap='hot')
    axes[0, 2].set_title('HR Attention Map (Detected Sources)')
    
    # LR visualizations
    axes[1, 0].imshow(lr_image_vis, cmap='gray')
    axes[1, 0].set_title(f'LR Image ({lr_data["image"].shape[0]}x{lr_data["image"].shape[1]})')
    
    axes[1, 1].imshow(lr_data['mask'], cmap='binary')
    axes[1, 1].set_title('LR Mask')
    
    axes[1, 2].imshow(lr_data['attn_map'], cmap='hot')
    axes[1, 2].set_title('LR Attention Map')
    
    plt.tight_layout()
    plt.show()

# Visualize a sample (reuses `dataset` loaded in the Quick Start section)
sample = dataset['train'][0]
visualize_sample(sample['hr_path'], sample['lr_path'])
```

## πŸ“ File Naming Convention

- **HR files**: `*_hr_hr_patch_*.npy`
- **LR files**: `*_hr_lr_patch_*.npy`

Files are paired by replacing `_hr_hr_patch_` with `_hr_lr_patch_` in the filename.
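
For example, a minimal pairing helper (a sketch; `paired_lr_path` is our own name, not part of the dataset tooling, and it assumes the `*_hr_patch/` / `*_lr_patch/` directory layout shown above):

```python
import os

def paired_lr_path(hr_path):
    """Map an HR patch path to its paired LR patch path."""
    hr_dir, hr_name = os.path.split(hr_path)
    lr_name = hr_name.replace('_hr_hr_patch_', '_hr_lr_patch_')
    lr_dir = hr_dir.replace('_hr_patch', '_lr_patch')  # e.g. train_hr_patch -> train_lr_patch
    return os.path.join(lr_dir, lr_name)
```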

## 🔄 Full Dataset Access

For the complete dataset (54,738 pairs), download the compressed files:

```python
# Manual download and extraction
import tarfile

# Extract x2 dataset
with tarfile.open('data/x2/x2.tar.gz', 'r:gz') as tar:
    tar.extractall('data/x2/')

# The extracted structure will be:
# data/x2/
#   ├── train_hr_patch/  # ~45,000 HR patches
#   ├── train_lr_patch/  # ~45,000 LR patches
#   ├── eval_hr_patch/   # ~9,000 HR patches
#   ├── eval_lr_patch/   # ~9,000 LR patches
#   ├── dataload_filename/
#   │   ├── train_dataloader.txt  # Training pairs list
#   │   └── eval_dataloader.txt   # Evaluation pairs list
#   └── psf_hr/, psf_lr/  # Original unpatched data
```
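
If you have not downloaded the archives yet, one way to fetch them is with `huggingface_hub` (a sketch; adjust the extraction directory to your setup):

```python
# pip install huggingface_hub
import tarfile
from huggingface_hub import hf_hub_download

# Download the x2 archive from the dataset repository (~33 GB)
archive_path = hf_hub_download(
    repo_id="KUOCHENG/STAR",
    repo_type="dataset",
    filename="data/x2/x2.tar.gz",
)

# Extract it to a local directory of your choice
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall("data/x2/")
```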


## 🎯 Model Evaluation Metrics

When evaluating super-resolution models on STAR, consider:

1. **Masked PSNR/SSIM**: Only compute metrics on valid pixels (where mask=True)
2. **Source Detection F1**: Evaluate if astronomical sources are preserved
3. **Flux Preservation**: Check whether total flux is maintained (important for astronomy; see the paper for details); a minimal sketch of metrics 1 and 3 is shown below
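
Here is a minimal NumPy sketch of masked PSNR and a relative flux error (the function names are our own, not part of the dataset tooling; `pred` stands for a model output with the same shape as the HR image):

```python
import numpy as np

def masked_psnr(pred, target, mask, data_range=None):
    """PSNR computed only over valid pixels (mask == True)."""
    pred_valid, target_valid = pred[mask], target[mask]
    if data_range is None:
        data_range = target_valid.max() - target_valid.min()
    mse = np.mean((pred_valid - target_valid) ** 2)
    return 10.0 * np.log10((data_range ** 2) / mse)

def relative_flux_error(pred, target, mask):
    """Relative difference in total flux over valid pixels."""
    return abs(pred[mask].sum() - target[mask].sum()) / abs(target[mask].sum())

# Example usage with one sample (reusing hr_data from the Quick Start):
# mask = hr_data['mask']
# print(masked_psnr(pred, hr_data['image'], mask))
# print(relative_flux_error(pred, hr_data['image'], mask))
```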


## πŸ“ Citation

If you use the STAR dataset in your research, please cite:

```bibtex
@article{wu2025star,
  title={STAR: A Benchmark for Astronomical Star Fields Super-Resolution},
  author={Wu, Kuo-Cheng and Zhuang, Guohang and Huang, Jinyang and Zhang, Xiang and Ouyang, Wanli and Lu, Yan},
  journal={arXiv preprint arXiv:2507.16385},
  year={2025},
  url={https://arxiv.org/abs/2507.16385}
}

```

## 📄 License

This dataset is released under the MIT License.

## 🤝 Contact

For questions or issues, please open a discussion on the [dataset repository](https://huggingface.co/datasets/KUOCHENG/STAR/discussions). You can also refer to the [GitHub repository](https://github.com/GuoCheng12/STAR).