# Chang'e-3: An Authentic In-Situ Dataset of the LuMon Benchmark
## 1. Dataset Summary
The Chang'e-3 dataset is an authentic, in-situ collection of lunar surface imagery paired with carefully constructed metric depth maps. It is a core real-world component of the LuMon Benchmark, designed to evaluate Monocular Depth Estimation (MDE) networks for extraterrestrial navigation.
Deploying terrestrial MDE models to the Moon introduces a severe visual domain gap. While simulations are valuable, they cannot fully replicate the extreme, authentic conditions of the lunar surface—such as zero atmospheric scattering, harsh high-contrast polar shadows, and unique topographic textures. Because the historical Chang'e-3 mission data did not include native depth, high-quality ground truth was constructed via stereo reconstruction, refined with sparse ORB features, and converted to an absolute metric scale to enable rigorous sim-to-real evaluation.
## 2. Dataset Structure & Details
The dataset contains 168 perfectly synchronized triplets of images, depth maps, and valid pixel masks.
### Directory Structure
- `images/`: Contains the input RGB images captured from the authentic lunar surface (`001.png` to `168.png`).
- `depth/`: Contains the corresponding ground-truth metric depth maps saved as NumPy arrays (`001.npy` to `168.npy`).
- `masks/`: Contains binary masks saved as NumPy arrays (`001.npy` to `168.npy`) used to filter out invalid or uncertain pixels.
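Given this layout, a synchronized triplet can be loaded by its 3-digit index. A minimal sketch, assuming NumPy and Pillow are available (the `load_triplet` helper and `root` argument are illustrative, not part of the dataset):

```python
import numpy as np
from PIL import Image

def load_triplet(root, idx):
    """Load one synchronized image/depth/mask triplet by its 3-digit index."""
    name = f"{idx:03d}"
    image = np.asarray(Image.open(f"{root}/images/{name}.png"))  # HxWx3 RGB
    depth = np.load(f"{root}/depth/{name}.npy")                  # metric depth in meters
    mask = np.load(f"{root}/masks/{name}.npy")                   # boolean validity mask
    return image, depth, mask
```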
### Technical Specifications
- Domain: In-Situ (Authentic Lunar Surface).
- Resolution: All files share a resolution of 2352x1728 pixels.
- Stereo Baseline: The physical stereo baseline is 27 cm.
- Synchronization: Images, depth maps, and masks are perfectly matched by their 3-digit numerical filenames (e.g., `images/001.png` corresponds to `depth/001.npy` and `masks/001.npy`).
- Depth & Mask Formats (Important):
  - Unlike the simulated dataset, the ground-truth depth here is provided as `.npy` arrays, which store absolute metric distance values.
  - The masks exclude invalid, non-finite, and out-of-range pixels.
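The metric depth follows the standard stereo relation Z = f·B/d, where B is the 27 cm baseline stated above. A hedged sketch of that conversion (the focal length in pixels is not published here, so `focal_px` is an assumed calibration input, and `disparity_to_depth` is an illustrative helper):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m=0.27):
    """Convert stereo disparity (pixels) to metric depth via Z = f * B / d.

    focal_px: camera focal length in pixels (assumed calibration value,
    not provided by this dataset card). baseline_m matches the 27 cm
    physical stereo baseline.
    """
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)   # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```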
- Evaluation Clipping: When evaluating networks, it is recommended to clip depth at a maximum of 25 meters. Beyond this range, depth values become unreliable due to disparity quantization errors inherent to the stereo camera.
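Combining the validity masks with the recommended 25 m clip, a metric such as absolute relative error can be computed as follows. This is a minimal sketch (the `abs_rel` helper and the exact set of validity checks are illustrative, not the benchmark's official evaluation code):

```python
import numpy as np

MAX_DEPTH = 25.0  # recommended evaluation clip in meters

def abs_rel(pred, gt, mask):
    """Absolute relative error over valid pixels, clipped at MAX_DEPTH.

    pred, gt: metric depth maps in meters; mask: boolean validity mask
    (as provided in masks/). Pixels beyond MAX_DEPTH are excluded.
    """
    valid = mask & np.isfinite(gt) & (gt > 0) & (gt <= MAX_DEPTH)
    pred = np.clip(pred[valid], 1e-6, MAX_DEPTH)
    gt = gt[valid]
    return float(np.mean(np.abs(pred - gt) / gt))
```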
## Citation
This dataset is a crucial component of the LuMon Benchmark suite, accepted at the CVPR 2026 AI4Space Workshop. If you use this dataset in your research, please cite:
```bibtex
@inproceedings{sekmen2026lumon,
  title={LuMon: A Comprehensive Benchmark and Development Suite with Novel Datasets for Lunar Monocular Depth Estimation},
  author={Aytac Sekmen and Fatih Gunes and Furkan Horoz and Umut Isik and Alp Ozaydin and Altay Topaloglu and Umutcan Ustundas and Alp Yeni and Ersin Soken and Erol Sahin and Gokberk Cinbis and Sinan Kalkan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2026}
}
```