# Light-UNETR
Light-UNETR is a lightweight transformer architecture for efficient 3D medical image segmentation, introduced in the paper *Harnessing Lightweight Transformer with Contextual Synergic Enhancement for Efficient 3D Medical Image Segmentation*.
The model addresses computational efficiency through a Lightweight Dimension Reductive Attention (LIDR) module and a Compact Gated Linear Unit (CGLU). To improve data efficiency, the authors propose a Contextual Synergic Enhancement (CSE) learning strategy.
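The two core ideas can be illustrated with a minimal sketch: attention whose keys and values are computed on a spatially reduced (pooled) token sequence, so the attention map shrinks from n×n to n×(n/r), and a gated linear unit in which one linear branch gates the other. All function names, pooling choices, and shapes below are illustrative assumptions, not the paper's official implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reductive_attention(x, w_q, w_k, w_v, reduction=4):
    """Hypothetical single-head dimension-reductive attention:
    keys/values come from a token sequence average-pooled by
    `reduction`, so the attention matrix is (n, n/reduction)
    instead of (n, n)."""
    n, d = x.shape
    pooled = x[: (n // reduction) * reduction].reshape(-1, reduction, d).mean(axis=1)
    q, k, v = x @ w_q, pooled @ w_k, pooled @ w_v
    attn = softmax(q @ k.T / np.sqrt(d))  # shape (n, n // reduction)
    return attn @ v                       # shape (n, d)

def gated_linear_unit(x, w_main, w_gate):
    """Hypothetical compact GLU: a sigmoid-gated elementwise product
    of two linear projections of the same input."""
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))  # sigmoid
    return (x @ w_main) * gate

rng = np.random.default_rng(0)
n, d = 16, 8
x = rng.standard_normal((n, d))
out = reductive_attention(x, *(rng.standard_normal((d, d)) for _ in range(3)))
print(out.shape)  # (16, 8)
```

Because the key/value sequence is a factor of `reduction` shorter, both the attention FLOPs and the memory for the attention map drop roughly linearly in `reduction`, which is the general mechanism behind this style of lightweight attention.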
## Resources
- Paper: Harnessing Lightweight Transformer with Contextual Synergic Enhancement for Efficient 3D Medical Image Segmentation
- Code: Official GitHub Repository
## Performance
Light-UNETR substantially reduces computational cost compared to standard architectures. On the Left Atrial (LA) segmentation dataset, for instance, it cuts FLOPs by 90.8% and the parameter count by 85.8% relative to state-of-the-art methods, while achieving superior performance even with only 10% labeled data.
## Sample Usage
To test a pre-trained Light-UNETR model using the official implementation, you can use the following command structure:
```shell
# Example: Test BraTS model
python test_cse.py --dataset brats --model lightunetr --checkpoint lightunetr_best_model_brats_25lab.pth --gpu 0
```
## Citation
If you find this work useful, please cite:
```bibtex
@article{liu2025harnessing,
  title={Harnessing Lightweight Transformer with Contextual Synergic Enhancement for Efficient 3D Medical Image Segmentation},
  author={Liu, Xinyu and Chen, Zhen and Li, Wuyang and Li, Chenxin and Yuan, Yixuan},
  year={2025}
}
```
## Acknowledgement
The authors appreciate the contributions of SSL4MIS, Slim UNETR, BCP, and other referenced codebases.