QUEST-35B-SFT

QUEST-35B-SFT is the SFT-only checkpoint of the 35B-class QUEST mixture-of-experts model, built on the Qwen3.5-35B-A3B base (architecture Qwen3_5MoeForConditionalGeneration). It is an intermediate stage, prior to mid-training and RL.

Benchmark results

Benchmark           Metric         Score
BrowseComp          avg@3          45.1
Mind2Web 2          avg@3          26.5
HLE                 avg@3          39.49
DeepResearch Bench  avg@3          36.35
BrowseComp-Plus     avg@3          57.9
WideSearch          Item F1 avg@4  61.1
GAIA                avg@3          83.5
LiveResearchBench   avg@3          64.69

Quick start

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "osunlp/QUEST-35B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto",
)

Apply the model's chat template with tokenizer.apply_chat_template(...) before passing prompts.
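A minimal end-to-end sketch of the round trip described above, assuming the quick-start setup; the prompt text, `max_new_tokens` value, and the `build_messages` helper are illustrative, not part of the model's API.

```python
def build_messages(user_prompt: str) -> list:
    # Chat-format wrapper expected by tokenizer.apply_chat_template.
    return [{"role": "user", "content": user_prompt}]

if __name__ == "__main__":
    # Imports deferred so build_messages stays importable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "osunlp/QUEST-35B-SFT"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto",
    )

    messages = build_messages("Briefly explain mixture-of-experts routing.")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the checkpoint is ~35B parameters in BF16, so loading it requires substantial GPU memory; device_map="auto" will shard it across available devices.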

License

Released under the Apache License 2.0.
