---
language:
  - en
license: apache-2.0
size_categories:
  - 1M<n<10M
task_categories:
  - visual-question-answering
tags:
  - medical
pretty_name: RadImageNet-VQA
dataset_info:
  - config_name: alignment
    features:
      - name: image
        dtype: image
      - name: conversations
        list:
          - name: from
            dtype: string
          - name: value
            dtype: string
      - name: metadata
        struct:
          - name: content_type
            dtype: string
          - name: correct_text
            dtype: 'null'
          - name: is_abnormal
            dtype: bool
          - name: location
            dtype: string
          - name: modality
            dtype: string
          - name: pathology
            dtype: string
          - name: question_id
            dtype: string
    splits:
      - name: train
        num_bytes: 29401649909
        num_examples: 750009
      - name: val
        num_bytes: 3175441830
        num_examples: 83668
    download_size: 38405331105
    dataset_size: 32577091739
  - config_name: benchmark
    features:
      - name: image
        dtype: image
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: string
      - name: question_type
        dtype: string
      - name: metadata
        struct:
          - name: content_type
            dtype: string
          - name: correct_text
            dtype: string
          - name: is_abnormal
            dtype: bool
          - name: location
            dtype: string
          - name: modality
            dtype: string
          - name: pathology
            dtype: string
          - name: question_id
            dtype: string
    splits:
      - name: test
        num_bytes: 414947216
        num_examples: 9000
    download_size: 361133763
    dataset_size: 414947216
  - config_name: instruct
    features:
      - name: image
        dtype: image
      - name: conversations
        list:
          - name: from
            dtype: string
          - name: value
            dtype: string
      - name: metadata
        struct:
          - name: content_type
            dtype: string
          - name: correct_text
            dtype: string
          - name: is_abnormal
            dtype: bool
          - name: location
            dtype: string
          - name: modality
            dtype: string
          - name: pathology
            dtype: string
          - name: question_id
            dtype: string
    splits:
      - name: train
        num_bytes: 29904541796
        num_examples: 750009
      - name: val
        num_bytes: 3231558586
        num_examples: 83668
    download_size: 38424398344
    dataset_size: 33136100382
configs:
  - config_name: alignment
    data_files:
      - split: train
        path: alignment/train-*
      - split: val
        path: alignment/val-*
  - config_name: instruct
    data_files:
      - split: train
        path: instruct/train-*
      - split: val
        path: instruct/val-*
  - config_name: benchmark
    data_files:
      - split: test
        path: benchmark/test-*
extra_gated_prompt: >-
  ### RADIMAGENET LLC Dataset Research Use Agreement
     
  1. RadImageNet grants you permission, upon your agreeing to the terms of the
  Research Use Agreement, to view and use the Dataset for personal,
  non-commercial (e.g., academic) research purposes only. Any commercial use,
  sale, or other monetization, by you or your affiliates, is strictly prohibited
  under any and all circumstances.

  2. Other than any limited rights expressly granted herein to you, RadImageNet
  retains all rights, title, and interest in the Dataset.

  3. You may make a verbatim copy of the Dataset for non-commercial research use
  as permitted in the Research Use Agreement. You may not alter this verbatim
  copy for any reason. If another user within your organization wishes to use
  the Dataset, they must register as an individual user and comply with all the
  terms of the Research Use Agreement.

  4. YOU MAY NOT DISTRIBUTE, PUBLISH, OR REPRODUCE A COPY of any portion,
  including the entirety, of the Dataset to anyone without express and specific
  prior written permission from RadImageNet.

  5. YOU MAY NOT SHARE THE DOWNLOAD LINK to the Dataset with others. For
  example, if someone other than you within your organization wishes to use or
  view the Dataset, they must register as an individual user and agree to and
  comply with all the terms of the Research Use Agreement.

  6. You must not modify, reverse engineer, decompile, or create derivative
  works from the Dataset. You must not remove or alter any copyright or other
  proprietary notices in the Dataset.

  7. The Dataset has not been reviewed or approved by the Food and Drug
  Administration, or any other regulatory agency of the United States of
  America. The Dataset is being provided to you strictly and only for
  non-clinical, research use. In no event shall data or images generated through
  the use, directly or indirectly, in whole or in part, of the Dataset be used
  or relied upon in the diagnosis or provision of patient care. This Research
  Use Agreement expressly forbids the use, directly or indirectly, in whole or
  in part, of the Dataset in the diagnosis or provision of patient care.

  8. THE DATASET IS PROVIDED “AS IS,” AND RADIMAGENET AND ITS COLLABORATORS MAKE
  NO WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
  MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE, NOR DO THEY ASSUME
  ANY LIABILITY OR RESPONSIBILITY FOR THE USE OF THE DATASET.

  9. You will not attempt to identify or re-identify any of the individual data
  subjects (e.g., patients). Identification or re-identification of individuals
  is strictly prohibited. Any identification or re-identification of any
  individual data subject shall be immediately reported to RadImageNet and may
  be subject to immediate termination of the use of the Dataset.


  10. Any violation of the Research Use Agreement or other impermissible use
  shall be grounds for immediate termination of use of the Dataset. It is your
  duty to promptly report to RadImageNet any knowledge of any violation at any
  time. In the event that RadImageNet determines that you have violated this
  Research Use Agreement or made other impermissible use of the Dataset,
  RadImageNet may direct that you immediately return all copies of the Dataset
  and retain no copies thereof. RadImageNet may do this even if you did not
  cause the violation or impermissible use.


  In consideration for your agreement to the terms and conditions contained in
  the Research Use Agreement, RadImageNet grants you limited permission to view
  and use the Dataset for personal, non-commercial research, as described
  herein. You may not otherwise copy, reproduce, retransmit, distribute,
  publish, commercially exploit or otherwise transfer any material from or
  related to the Dataset.

  #### Limitation of Use

  You may use the Dataset for legal purposes only.

  #### Indemnification

  You agree to indemnify and hold RadImageNet harmless from and not liable in
  any way for any claims, losses or damages, including legal fees, arising out
  of or resulting from your use of the Dataset or your violation or role in
  violation of the Research Use Agreement. You agree to fully cooperate in
  RadImageNet’s defense against any such claims. These terms and all other terms
  of the Research Use Agreement shall be governed by and interpreted in
  accordance with the laws of New York State.
extra_gated_fields:
  Name: text
  Title: text
  Date: date_picker
  By clicking Submit below I accept the terms of this RADIMAGENET LLC Dataset Research Use Agreement (hereinafter “the Research Use Agreement”), as well as the Terms of Use of the RADIMAGENET LLC (hereinafter “RadImageNet”) website as posted and updated periodically: checkbox
extra_gated_button_content: Submit
---


# RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering

[📖 Paper](https://openreview.net/forum?id=khHKvZ9sLD)


## Dataset Details

We introduce RadImageNet-VQA, a large-scale dataset designed for training and benchmarking radiologic VQA on CT and MRI exams. Built from the CT/MRI subset of RadImageNet and its expert-curated anatomical and pathological annotations, RadImageNet-VQA provides 750K images with 7.5M generated samples, including 750K medical captions for visual-text alignment and 6.75M question-answer pairs that span three radiology tasks: fine-grained pathology identification, anatomy recognition, and abnormality detection. The dataset includes open-ended, closed-ended, and multiple-choice questions across 8 anatomical regions and 97 pathologies, generated with prompt-based templates and constructed to probe visually grounded understanding while minimizing text-only shortcut answering. For evaluation, we construct a stratified benchmark of 1,000 images with 9,000 question-answer pairs covering all tasks and question types.
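
All three configurations can be loaded with the 🤗 `datasets` library. Below is a minimal loading sketch; the repo id is a placeholder for the actual Hub identifier, and because the dataset is gated you must first accept the agreement on the Hub and authenticate (e.g., via `huggingface-cli login`).

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual Hub identifier of this dataset.
REPO_ID = "<org>/RadImageNet-VQA"

# Streaming avoids materializing the ~38 GB of image shards up front.
alignment = load_dataset(REPO_ID, "alignment", split="train", streaming=True)
instruct = load_dataset(REPO_ID, "instruct", split="train", streaming=True)
benchmark = load_dataset(REPO_ID, "benchmark", split="test")  # 9,000 rows

sample = next(iter(alignment))
print(sample["metadata"]["modality"], sample["metadata"]["location"])
```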


## Data Creation

RadImageNet-VQA was created to challenge multimodal models with tasks that demand joint radiology text-image understanding, pushing the boundaries of what these models can achieve in perception and reasoning. The dataset was built upon RadImageNet, a large expert-annotated medical imaging dataset in which each image is associated with a modality (CT, MRI, US), a body part (e.g., abdomen, hip, brain), and a pathology label. From this resource, we use the CT and MRI subsets as the basis for generating clinically meaningful captions and VQA samples across anatomy, abnormality, and fine-grained pathology tasks.


## Zero-shot Results

Zero-shot accuracies (%) of VLMs on the RadImageNet-VQA benchmark. Results are reported across anatomy recognition, abnormality detection (Abn), and pathology identification using four question formats: Open (free-form), Closed+ (the correct answer is always 'yes'), Closed– (always 'no'), and MC (multiple-choice).

| Model | Anat. Open | Anat. Closed+ | Anat. Closed– | Anat. MC | Abn. Closed | Path. Open | Path. Closed+ | Path. Closed– | Path. MC | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| **General-purpose models** | | | | | | | | | | |
| LLaVA-OneVision-Qwen2-7B | 48.4 | 82.7 | 81.3 | 88.7 | 49.8 | 16.0 | 55.3 | 61.3 | 33.6 | 57.5 |
| Qwen2.5-VL-3B-Instruct | 37.7 | 83.7 | 77.1 | 77.9 | *70.5* | 10.0 | 78.1 | 21.4 | 34.8 | 54.6 |
| Qwen2.5-VL-7B-Instruct | 37.5 | 84.9 | 79.1 | 80.5 | 69.5 | 9.8 | 69.2 | 47.4 | 30.1 | 56.4 |
| InternVL3.5-8B | 50.9 | *98.1* | 75.9 | **93.3** | 58.9 | 9.9 | *85.9* | 27.8 | 41.8 | 60.3 |
| InternVL3.5-14B | 56.6 | **98.2** | 74.4 | *89.9* | **74.4** | 11.7 | **86.7** | 33.7 | **47.1** | **63.6** |
| GPT-5 | 44.3 | 72.4 | 81.8 | 89.3 | 27.5 | 15.8 | 54.9 | 68.3 | 41.2 | 54.9 |
| Gemini 2.5 Pro | **65.7** | 76.5 | 81.9 | 88.8 | 17.8 | *21.1* | 50.2 | 30.1 | 44.4 | 52.9 |
| **Medical-specialized models** | | | | | | | | | | |
| LLaVA-Med-v1.5-mistral-7b | 44.3 | 89.9 | 55.3 | 58.1 | 22.4 | 10.2 | 41.8 | 66.6 | 26.4 | 48.2 |
| HuatuoGPT-Vision-7B | 45.4 | 82.5 | *89.0* | 88.3 | 60.6 | 13.6 | 65.5 | 69.2 | *44.6* | 48.9 |
| medgemma-4b-it | *62.9* | 76.4 | 82.5 | 84.8 | 55.4 | **30.6** | 54.2 | 77.4 | 36.8 | 51.5 |
| Lingshu-7B | 49.6 | 90.7 | 85.1 | 88.9 | 47.9 | 15.7 | 57.0 | *78.8* | 29.6 | *60.4* |
| Lingshu-32B | 45.2 | 75.5 | **92.1** | 89.3 | 54.5 | 14.4 | 46.4 | **88.8** | 31.7 | 59.8 |

*Bold = best, italic = second best.*

## Data Structure

### Alignment Data

The alignment component contains a single caption sample per image, intended to align visual content with concise clinical descriptions.

Each instance conceptually includes:

- an image
- a single prompt–response pair
- structured metadata

Fields:

- `id`: unique sample identifier
- `image`: relative path to the medical image
- `conversations`: one human prompt and one descriptive response
- `metadata`: modality, anatomical location, abnormality flag, pathology label

The response provides a brief clinical description of the image.
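
As a minimal sketch, one alignment record can be inspected as follows (placeholder repo id; assumes the `conversations` list holds exactly one prompt-response pair, as described above):

```python
from datasets import load_dataset

REPO_ID = "<org>/RadImageNet-VQA"  # placeholder repo id

alignment = load_dataset(REPO_ID, "alignment", split="val", streaming=True)
ex = next(iter(alignment))

image = ex["image"]                    # decoded PIL image
prompt, caption = ex["conversations"]  # one prompt and one response
print(f'{prompt["from"]}: {prompt["value"]}')
print(f'{caption["from"]}: {caption["value"]}')
print(ex["metadata"])                  # modality, location, is_abnormal, ...
```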


### Instruction Data

The instruction component contains multiple question–answer pairs per image and is intended for instruction tuning of multimodal models.

Each instance includes:

- an image
- one or more QA-style conversation turns
- structured metadata describing the task

Supported instruction types include image description, pathology identification, modality recognition, and anatomical localization.
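
A short sketch for walking these conversations, assuming turns alternate between prompt and response (placeholder repo id):

```python
from datasets import load_dataset

REPO_ID = "<org>/RadImageNet-VQA"  # placeholder repo id

instruct = load_dataset(REPO_ID, "instruct", split="train", streaming=True)
ex = next(iter(instruct))

# Pair alternating turns into (question, answer) tuples.
turns = ex["conversations"]
for question, answer in zip(turns[::2], turns[1::2]):
    print("Q:", question["value"])
    print("A:", answer["value"])
print("task type:", ex["metadata"]["content_type"])
```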


### Benchmark Data

The benchmark split is designed for standardized evaluation of medical VQA models.

It contains 9,000 question–answer pairs across 1,000 images and includes three question types:

- open-ended (free-form answers)
- closed-ended (yes/no)
- multiple-choice (options A–D)

Benchmark fields:

- `image`: medical image reference
- `question`: question presented to the model
- `choices`: answer options (multiple-choice only)
- `answer`: ground-truth answer
- `question_type`: open, yes/no, or multiple-choice
- `metadata`: modality, anatomy, pathology, and correctness labels
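
A minimal evaluation sketch that renders benchmark rows into prompts and scores exact-match accuracy; the repo id is a placeholder and `my_vlm` is a hypothetical stand-in for the model under test:

```python
from datasets import load_dataset

REPO_ID = "<org>/RadImageNet-VQA"  # placeholder repo id

bench = load_dataset(REPO_ID, "benchmark", split="test")

def build_prompt(ex):
    """Render one benchmark item as a text prompt."""
    prompt = ex["question"]
    if ex["choices"]:  # only multiple-choice items carry options A-D
        options = "\n".join(
            f"{letter}. {choice}" for letter, choice in zip("ABCD", ex["choices"])
        )
        prompt += f"\n{options}\nAnswer with the letter of the correct option."
    return prompt

def my_vlm(image, prompt):
    """Hypothetical model call; replace with real VLM inference."""
    return "no"

correct = 0
for ex in bench:
    pred = my_vlm(image=ex["image"], prompt=build_prompt(ex))
    correct += pred.strip().lower() == ex["answer"].strip().lower()
print(f"accuracy: {correct / len(bench):.1%}")
```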

### Metadata

Metadata fields provide structured clinical and contextual information:

- `modality`: imaging modality (e.g., CT, MRI)
- `location`: anatomical region
- `is_abnormal`: presence of pathology
- `pathology`: pathology category
- `content_type`: task type (description, pathology, etc.)
- `question_id`: question template identifier
- `correct_text`: textual form of the correct answer (when applicable)
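
These fields make it easy to slice the benchmark, for example to isolate abnormal CT cases (placeholder repo id; the literal label value "CT" is an assumption about the metadata encoding):

```python
from datasets import load_dataset

REPO_ID = "<org>/RadImageNet-VQA"  # placeholder repo id

bench = load_dataset(REPO_ID, "benchmark", split="test")

# Restrict to abnormal CT cases, e.g. to isolate pathology identification;
# the literal value "CT" is an assumption about how modality is encoded.
ct_abnormal = bench.filter(
    lambda ex: ex["metadata"]["modality"] == "CT" and ex["metadata"]["is_abnormal"]
)
print(len(ct_abnormal), "benchmark questions on abnormal CT images")
```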

## Data Splits

The dataset is organized into three configurations; alignment and instruction tuning provide train/validation splits, while the benchmark provides a test split:

|  | Alignment Train | Alignment Validation | Instruction Train | Instruction Validation | Benchmark Test |
|---|---|---|---|---|---|
| Samples | 750,009 | 83,668 | 750,009 | 83,668 | 9,000 |
| Images | 750,009 | 83,668 | 750,009 | 83,668 | 1,000 |
| QAs per image | 1 | 1 | ~9 | ~9 | 9 |
| Total QAs | 750K | 83K | 6.75M | 753K | 9K |

## Acknowledgments

The dataset is built upon [RadImageNet](https://www.radimagenet.com/).

## Citation

```bibtex
@inproceedings{butsanets2025radimagenetvqa,
  title={RadImageNet{VQA}: A Large-Scale {CT} and {MRI} Dataset for Medical Visual Question Answering},
  author={L{\'e}o Butsanets and Charles Corbi{\`e}re and Julien Khlaut and Pierre Manceron and Corentin Dancette},
  year={2025},
  url={https://openreview.net/forum?id=khHKvZ9sLD},
}
```