# Model Card for array/Qwen2.5-VL-SAT

A strong spatial-reasoning baseline built on Qwen 2.5 VL.

Post-trained on SAT, together with answer-only supervision (answers without reasoning traces) from Video-R1. The training mix is 40% SAT, 20% Zebra-CoT, and 20% Video-R1.

## Get Started

```bash
pip install git+https://github.com/huggingface/transformers accelerate
pip install "qwen-vl-utils[decord]==0.0.8"
```

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_id = "array/Qwen2.5-VL-SAT"

# torch_dtype="auto" loads the checkpoint in its native precision (BF16).
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
```
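
For inference, here is a minimal sketch following the standard Qwen2.5-VL usage pattern; the image URL and question below are placeholders:

```python
from qwen_vl_utils import process_vision_info

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/scene.jpg"},  # placeholder image
            {"type": "text", "text": "Is the mug to the left of the laptop?"},  # placeholder question
        ],
    }
]

# Build the chat prompt and collect the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```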

Please see the paper for details on training and evaluation datasets and metrics.

## Results

| Model | MV | RelDep | SpRel | Jig | IQT | BLINK Avg | BLINK Reas | SAT-R | VSI Avg | VSI Reas | ERQA | Avg (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-VL (7B) | 39.00 | 61.29 | 92.38 | 58.66 | 25.33 | 55.33 | 41.00 | 59.00 | 23.96 | 22.96 | 38.91 | 44.30 |
| + SAT | 57.14 | 87.09 | 74.12 | 58.66 | 30.00 | 61.40 | 48.60 | 71.66 | 32.40 | 30.65 | 38.00 | 50.87 |

MV, RelDep, SpRel, Jig, and IQT are BLINK subtasks (multi-view reasoning, relative depth, spatial relation, jigsaw, and IQ test).
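
The aggregate columns are consistent with simple means in both rows: BLINK Avg matches the mean of the five BLINK subtask scores, and Avg (All) matches the mean of BLINK Avg, SAT-R, VSI Avg, and ERQA. A quick check, with values copied from the "+ SAT" row:

```python
# Reproduce the aggregate columns from the "+ SAT" row of the table above.
mv, reldep, sprel, jig, iqt = 57.14, 87.09, 74.12, 58.66, 30.00  # BLINK subtasks
sat_r, vsi_avg, erqa = 71.66, 32.40, 38.00

blink_avg = (mv + reldep + sprel + jig + iqt) / 5   # ≈ 61.40
avg_all = (blink_avg + sat_r + vsi_avg + erqa) / 4  # ≈ 50.87
print(blink_avg, avg_all)  # matches the table up to rounding
```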

## Citation

```bibtex
@misc{ray2025satdynamicspatialaptitude,
  title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
  author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
  year={2025},
  eprint={2412.07755},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.07755},
}
```