---
license: apache-2.0
frameworks:
- pytorch
tags:
- medical
tasks:
- image-text-to-text
---
<div style="display: flex; align-items: center; justify-content: center;">
<h1 style="margin: 0; text-align: left;">
Hulu-Med: A Transparent Generalist Model towards Holistic Medical Vision-Language Understanding
</h1>
</div>
<div align="center">
[Paper](http://arxiv.org/abs/2510.08668) | [🤗 Hulu-Med-7B](https://huggingface.co/ZJU-AI4H/Hulu-Med-7B) | [🤗 Hulu-Med-14B](https://huggingface.co/ZJU-AI4H/Hulu-Med-14B) | [🤗 Hulu-Med-32B](https://huggingface.co/ZJU-AI4H/Hulu-Med-32B) | [ModelScope Models](https://modelscope.cn/models/Med-Team/Hulu-Med) | [Demo](#demo)
</div>
## News
- **[2025-10-08]** Hulu-Med models and inference code released!
## Overview
**Hulu-Med** is a transparent medical vision-language model that unifies understanding across diverse modalities including **medical text, 2D/3D images, and videos**. Built with a focus on transparency and accessibility, Hulu-Med achieves state-of-the-art performance on 30 medical benchmarks while being trained entirely on public data.
### Key Features
- **Holistic Multimodal Understanding**: Seamlessly processes medical text, 2D images, 3D volumes, and surgical videos
- **Fully Transparent**: Complete open-source pipeline including data curation, training code, and model weights
- **State-of-the-Art Performance**: Outperforms leading open-source models and competes with proprietary systems
- **Efficient Training**: Only 4,000-40,000 GPU hours required for the 7B-32B variants
- **Comprehensive Coverage**: Trained on 16.7M samples spanning 12 anatomical systems and 14 imaging modalities
### Comprehensive Data Coverage
Our training corpus encompasses:
- **12 Major Anatomical Systems**: Skin/Integumentary, Respiratory, Cellular/Tissue Level, Digestive, Nervous, Cardiovascular, Musculoskeletal, Reproductive, Urinary, Endocrine, Immune/Lymphatic, and Hematologic systems, plus cross-cutting Multi-System and Whole-Body samples
- **14 Medical Imaging Modalities**: CT, MRI, X-Ray, Ultrasound, PET, OCT, Endoscopy, Microscopy, Histopathology, Fundus, Dermoscopy, Angiography, Digital Photograph, and Medical Chart
- **Diverse Downstream Tasks**: Medical Dialogue, Anomaly Detection, Prognosis Prediction, Treatment Planning, Surgical Skill Assessment, Education, Medical Report Generation, Surgical Phase Recognition, Medical Computation, and more
## Performance Highlights
### Medical Multimodal Benchmarks
Performance comparison on medical multimodal benchmarks (within the 'Medical VLMs (< 10B)' subgroup, **bold** indicates the best method):
| Models | OM.VQA | PMC-VQA | VQA-RAD | SLAKE | PathVQA | MedXQA | MMMU-Med |
|--------|--------|---------|---------|-------|---------|--------|----------|
| **Proprietary Models** |
| GPT-4.1 | 75.5 | 55.2 | 65.0 | 72.2 | 55.5 | 45.2 | 75.2 |
| GPT-4o | 67.5 | 49.7 | 61.0 | 71.2 | 55.5 | 44.3 | 62.8 |
| Claude Sonnet 4 | 65.5 | 54.4 | 67.6 | 70.6 | 54.2 | 43.3 | 74.6 |
| Gemini-2.5-Flash | 71.0 | 55.4 | 68.5 | 75.8 | 55.4 | 52.8 | 76.9 |
| **General VLMs (< 10B)** |
| Qwen2.5VL-7B | 63.6 | 51.9 | 63.2 | 66.8 | 44.1 | 20.1 | 50.6 |
| InternVL2.5-8B | 81.3 | 51.3 | 59.4 | 69.0 | 42.1 | 21.7 | 53.5 |
| InternVL3-8B | 79.1 | 53.8 | 65.4 | 72.8 | 48.6 | 22.4 | 59.2 |
| **General VLMs (> 10B)** |
| InternVL3-14B | 78.9 | 54.1 | 66.3 | 72.8 | 48.0 | 23.1 | 63.1 |
| Qwen2.5VL-32B | 68.2 | 54.5 | 71.8 | 71.2 | 41.9 | 25.2 | 59.6 |
| InternVL3-38B | 79.8 | 56.6 | 65.4 | 72.7 | 51.0 | 25.2 | 65.2 |
| **Medical VLMs (< 10B)** |
| LLaVA-Med-7B | 34.8 | 22.7 | 46.6 | 51.9 | 35.2 | 20.8 | 28.1 |
| MedGemma-4B | 70.7 | 49.2 | 72.3 | 78.2 | 48.1 | 25.4 | 43.2 |
| HuatuoGPT-V-7B | 74.3 | 53.1 | 67.6 | 68.1 | 44.8 | 23.2 | 49.8 |
| Lingshu-7B | 82.9 | 56.3 | 67.9 | 83.1 | 61.9 | 26.7 | - |
| **Hulu-Med-7B** | **84.2** | **66.8** | **78.0** | **86.8** | **65.6** | **29.0** | **51.4** |
| **Medical VLMs (> 10B)** |
| HealthGPT-14B | 75.2 | 56.4 | 65.0 | 66.1 | 56.7 | 24.7 | 49.6 |
| HuatuoGPT-V-34B | 74.0 | 56.6 | 61.4 | 69.5 | 44.4 | 22.1 | 51.8 |
| Lingshu-32B | 83.4 | 57.9 | 76.7 | 86.7 | 65.5 | 30.9 | - |
| **Hulu-Med-14B** | **85.1** | **68.9** | **76.1** | **86.5** | **64.4** | **30.0** | **54.8** |
| **Hulu-Med-32B** | **84.6** | **69.4** | **81.4** | **85.7** | **67.3** | **34.0** | **60.4** |
### Medical Text Benchmarks
Performance comparison on medical text benchmarks (**bold** indicates the best method in each subgroup):
| Models | MMLU-Pro | MedXQA | Medbullets | SGPQA | PubMedQA | MedMCQA | MedQA | MMLU-Med |
|--------|----------|--------|------------|-------|----------|---------|-------|----------|
| **Proprietary Models** |
| GPT-4.1 | 78.0 | 30.9 | 77.0 | 49.9 | 75.6 | 77.7 | 89.1 | 89.6 |
| o3-mini | 78.1 | 35.4 | 83.7 | 50.1 | 73.6 | 60.6 | 74.5 | 87.0 |
| Claude Sonnet 4 | 79.5 | 33.6 | 80.2 | 56.3 | 78.6 | 79.3 | 92.1 | 91.3 |
| Gemini-2.5-Flash | 70.0 | 35.6 | 77.6 | 53.3 | 73.8 | 73.6 | 91.2 | 84.2 |
| **General VLMs (< 10B)** |
| Qwen2.5VL-7B | 50.5 | 12.8 | 42.1 | 26.3 | 76.4 | 52.6 | 57.3 | 73.4 |
| InternVL2.5-8B | 50.6 | 11.6 | 42.4 | 26.1 | 76.4 | 52.4 | 53.7 | 74.2 |
| InternVL3-8B | 57.9 | 13.1 | 48.5 | 31.2 | 75.4 | 57.7 | 62.1 | 77.5 |
| **General VLMs (> 10B)** |
| Qwen2.5VL-32B | 66.5 | 15.6 | 54.2 | 37.6 | 68.4 | 63.0 | 71.6 | 83.2 |
| InternVL3-14B | 65.4 | 14.1 | 49.5 | 37.9 | 77.2 | 62.0 | 70.1 | 81.7 |
| InternVL3-38B | 72.1 | 16.0 | 54.6 | 42.5 | 73.2 | 64.9 | 73.5 | 83.8 |
| **Medical VLMs (< 10B)** |
| LLaVA-Med-7B | 16.6 | 9.9 | 34.4 | 16.1 | 26.4 | 39.4 | 42.0 | 50.6 |
| MedGemma-4B | 38.6 | 12.8 | 45.6 | 21.6 | 72.2 | 52.2 | 56.2 | 66.7 |
| HuatuoGPT-V-7B | 44.6 | 10.1 | 40.9 | 21.9 | 72.8 | 51.2 | 52.9 | 69.3 |
| Lingshu-7B | 50.4 | 16.5 | 56.2 | 26.3 | 76.6 | 55.9 | 63.3 | 74.5 |
| **Hulu-Med-7B** | **60.6** | **19.6** | **61.5** | **31.1** | **77.4** | **67.6** | **73.5** | **79.5** |
| **Medical VLMs (> 10B)** |
| HealthGPT-14B | 63.4 | 11.3 | 39.8 | 25.7 | 68.0 | 63.4 | 66.2 | 80.2 |
| Lingshu-32B | 70.2 | 22.7 | 65.4 | 41.1 | 77.8 | 66.1 | 74.7 | 84.7 |
| HuatuoGPT-V-34B | 51.8 | 11.4 | 42.7 | 26.5 | 72.2 | 54.7 | 58.8 | 74.7 |
| **Hulu-Med-14B** | **68.0** | **23.2** | **68.5** | **37.7** | **79.8** | **70.4** | **78.1** | **83.3** |
| **Hulu-Med-32B** | **72.9** | **24.2** | **68.8** | **41.8** | **80.8** | **72.8** | **80.4** | **85.6** |
## Model Zoo
We provide three model variants with different parameter scales:
| Model | Parameters | LLM Base | Training Cost | HuggingFace | ModelScope |
|-------|-----------|----------|---------------|-------------|------------|
| **Hulu-Med-7B** | 7B | Qwen2.5-7B | ~4,000 GPU hours | [🤗 Link](https://huggingface.co/ZJU-AI4H/Hulu-Med-7B) | [Link](https://modelscope.cn/models/Med-Team/Hulu-Med-7B) |
| **Hulu-Med-14B** | 14B | Qwen3-14B | ~8,000 GPU hours | [🤗 Link](https://huggingface.co/ZJU-AI4H/Hulu-Med-14B) | [Link](https://modelscope.cn/models/Med-Team/Hulu-Med-14B) |
| **Hulu-Med-32B** | 32B | Qwen2.5-32B | ~40,000 GPU hours | [🤗 Link](https://huggingface.co/ZJU-AI4H/Hulu-Med-32B) | [Link](https://modelscope.cn/models/Med-Team/Hulu-Med-32B) |
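The checkpoints can also be fetched programmatically from the Hugging Face Hub. Below is a minimal sketch using `huggingface_hub.snapshot_download`; the `local_dir` path is an arbitrary example, and any of the three repo IDs above can be substituted.

```python
# Minimal sketch: download a Hulu-Med checkpoint from the Hugging Face Hub.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub);
# the local_dir below is an arbitrary example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ZJU-AI4H/Hulu-Med-7B",        # or Hulu-Med-14B / Hulu-Med-32B
    local_dir="./checkpoints/Hulu-Med-7B",
)
print(f"Checkpoint downloaded to: {local_path}")
```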
## Installation
```bash
# Clone the repository
git clone https://github.com/your-org/Hulu-Med.git
cd Hulu-Med
# Create conda environment
conda create -n hulumed python=3.10
conda activate hulumed
# PyTorch and torchvision for CUDA 11.8
pip install torch==2.4.0 torchvision==0.19.0 --extra-index-url https://download.pytorch.org/whl/cu118
# Flash-attn pinned to a compatible version
pip install flash-attn==2.7.3 --no-build-isolation --upgrade
# Transformers and accelerate
pip install transformers==4.51.2 accelerate==1.7.0
# Video processing dependencies
pip install decord ffmpeg-python imageio opencv-python
# Install other dependencies
pip install -r requirements.txt
```
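Once the environment is set up, a quick sanity check (an illustrative snippet, not part of the repository) can confirm that the pinned packages import and that CUDA is visible:

```python
# Illustrative environment check; not part of the Hulu-Med codebase.
import torch
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)

try:
    import flash_attn
    print("flash-attn:", flash_attn.__version__)
except ImportError:
    print("flash-attn not importable; re-check the wheel against your CUDA/PyTorch versions")
```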
## Quick Start
### 2D Example
```python
import os
import torch
from hulumed import disable_torch_init, model_init, mm_infer
from hulumed.model import load_pretrained_model
from hulumed.mm_utils import load_images, process_images, load_video, process_video, tokenizer_multimodal_token, get_model_name_from_path, KeywordsStoppingCriteria
from hulumed.model.processor import HulumedProcessor

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

model_path = "xxxxxx"  # set to your checkpoint path, e.g. "./checkpoints/Hulu-Med-7B"
model_name = get_model_name_from_path(model_path)
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, None, model_name, device_map="cuda:0")
processor = HulumedProcessor(image_processor, tokenizer)

# Load a single 2D image
slices = load_images("./demo/demo.jpg")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in detail."},
        ]
    }
]

modal = "image"
model = model.to("cuda:0")
inputs = processor(
    images=[slices] if modal != "text" else None,
    text=conversation,
    merge_size=2 if modal == "video" else 1,
    return_tensors="pt"
)
inputs = {k: v.to("cuda:0") if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
if "pixel_values" in inputs:
    inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        modals=[modal],
        temperature=0.6,
        max_new_tokens=8192,
        use_cache=True,
        pad_token_id=tokenizer.eos_token_id,
    )
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
```
### 3D Example
```python
# Load a 3D volume as a stack of slices (.nii input is supported)
slices = load_images(
    "./src/demo/amos_0013.nii",
    nii_num_slices=160
)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video", "num_frames": len(slices)},
            {"type": "text", "text": "This is a medical 3D scenario. Please generate a medical report for the given 3D medical images, including both findings and impressions."},
        ]
    }
]

# 3D volumes are processed through the video pathway (slices treated as frames)
modal = "video"
model = model.to("cuda:0")
inputs = processor(
    images=[slices] if modal != "text" else None,
    text=conversation,
    merge_size=2 if modal == "video" else 1,
    return_tensors="pt"
)
inputs = {k: v.to("cuda:0") if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
if "pixel_values" in inputs:
    inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        modals=[modal],
        temperature=0.6,
        max_new_tokens=8192,
        use_cache=True,
        pad_token_id=tokenizer.eos_token_id,
    )
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
```
### Video Example
```python
# Sample frames from a video at 1 fps (up to 3000 frames)
frames, timestamps = load_video("./src/demo/1min_demo.mp4", fps=1, max_frames=3000)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video", "num_frames": len(frames)},
            {"type": "text", "text": "Please describe this video in detail."},
        ]
    }
]

modal = "video"
model = model.to("cuda:0")
inputs = processor(
    images=[frames] if modal != "text" else None,
    text=conversation,
    merge_size=2 if modal == "video" else 1,
    return_tensors="pt"
)
inputs = {k: v.to("cuda:0") if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
if "pixel_values" in inputs:
    inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        modals=[modal],
        temperature=0.6,
        max_new_tokens=8192,
        use_cache=True,
        pad_token_id=tokenizer.eos_token_id,
    )
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
```
### Text Example
```python
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Hello, I have a headache, what should I do?"},
        ]
    }
]

modal = "text"
model = model.to("cuda:0")
inputs = processor(
    text=conversation,
    merge_size=2 if modal == "video" else 1,
    return_tensors="pt"
)
inputs = {k: v.to("cuda:0") if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
if "pixel_values" in inputs:
    inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        modals=[modal],
        temperature=0.6,
        max_new_tokens=8192,
        use_cache=True,
        pad_token_id=tokenizer.eos_token_id,
    )
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
```
## Training
### Data Preparation
Our training data consists of 16.7M samples across four categories:
- **Medical Multimodal Data** (9M samples): Covering 14 imaging modalities
- **Medical Text Data** (4.9M samples): Clinical notes, literature, QA pairs
- **General Multimodal Data** (1.3M samples): Enhancing generalization
- **General Text Data** (1.5M samples): Improving reasoning capabilities
Data download and preparation instructions: coming soon.
## Model Architecture
Hulu-Med consists of four core components:
1. **Vision Encoder**: SigLIP-based encoder with 2D RoPE for unified 2D/3D/video processing
2. **Multimodal Projector**: Projects visual tokens into language model space
3. **LLM Decoder**: Qwen-based decoder for generating responses
4. **Medical-Aware Token Reduction**: Efficient processing with ~55% token reduction
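As a rough illustration of how these components fit together, the sketch below traces the visual path from pixels to the decoder. All class and variable names here are hypothetical and do not mirror the actual `hulumed` implementation.

```python
# Conceptual sketch of the Hulu-Med forward path; names are hypothetical
# and do not correspond to the real hulumed modules.
import torch
import torch.nn as nn

class HuluMedSketch(nn.Module):
    def __init__(self, vision_encoder, token_reducer, projector, llm):
        super().__init__()
        self.vision_encoder = vision_encoder  # SigLIP-style ViT with 2D RoPE over slices/frames
        self.token_reducer = token_reducer    # medical-aware token reduction (~55% of visual tokens dropped)
        self.projector = projector            # maps visual features into the LLM embedding space
        self.llm = llm                        # Qwen-based decoder

    def forward(self, pixel_values, text_embeds):
        # 1) Encode 2D images, 3D slices, or video frames into patch tokens
        vision_tokens = self.vision_encoder(pixel_values)   # (B, N_vis, D_vis)
        # 2) Drop redundant visual tokens before they reach the decoder
        vision_tokens = self.token_reducer(vision_tokens)   # (B, N_keep, D_vis), N_keep < N_vis
        # 3) Project visual tokens into the language model's embedding space
        vision_embeds = self.projector(vision_tokens)       # (B, N_keep, D_llm)
        # 4) Concatenate with text embeddings and decode autoregressively
        inputs_embeds = torch.cat([vision_embeds, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```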
## Supported Tasks
- ✅ Visual Question Answering (2D/3D/Video)
- ✅ Medical Report Generation
- ✅ Disease Diagnosis
- ✅ Anatomical Understanding
- ✅ Surgical Phase Recognition
- ✅ Clinical Dialogue
- ✅ Medical Text Reasoning
- ✅ Multilingual Medical QA
- ✅ Rare Disease Diagnosis
- ...and more
## Citation
If you find Hulu-Med useful in your research, please cite:
```bibtex
@misc{jiang2025hulumedtransparentgeneralistmodel,
title={Hulu-Med: A Transparent Generalist Model towards Holistic Medical Vision-Language Understanding},
author={Songtao Jiang and Yuan Wang and Sibo Song and Tianxiang Hu and Chenyi Zhou and Bin Pu and Yan Zhang and Zhibo Yang and Yang Feng and Joey Tianyi Zhou and Jin Hao and Zijian Chen and Ruijia Wu and Tao Tang and Junhui Lv and Hongxia Xu and Hongwei Wang and Jun Xiao and Bin Feng and Fudong Zhu and Kenli Li and Weidi Xie and Jimeng Sun and Jian Wu and Zuozhu Liu},
year={2025},
eprint={2510.08668},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2510.08668},
}
```
## License
This project is released under the [Apache 2.0 License](LICENSE).
---
<div align="center">
Made with ❤️ by the ZJU AI4H Team
</div>