Dataset: [cais/mmlu](https://huggingface.co/datasets/cais/mmlu)
WindyLLM 2.3 is a conversational language model fine-tuned on the MMLU (Massive Multitask Language Understanding) dataset.
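The MMLU records this model was tuned on pair a question with four answer choices and an answer index. As a minimal sketch (the `format_mmlu_prompt` helper is hypothetical, not part of this repository; the field names follow the `cais/mmlu` schema), a record can be turned into the multiple-choice prompt used below like this:

```python
def format_mmlu_prompt(record):
    # Hypothetical helper: renders an MMLU-style record ("question",
    # "choices", "answer" index) into an A/B/C/D multiple-choice prompt.
    letters = ["A", "B", "C", "D"]
    lines = ["Answer this question with A, B, C, or D.", record["question"]]
    lines += [f"{letter}) {choice}"
              for letter, choice in zip(letters, record["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

example = {
    "question": "What is the capital of France?",
    "choices": ["London", "Paris", "Berlin", "Rome"],
    "answer": 1,  # index of the correct choice, "Paris"
}
print(format_mmlu_prompt(example))
```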
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("tklohj/windyllm_2.3")
model = AutoModelForCausalLM.from_pretrained("tklohj/windyllm_2.3")

# Inference example
question = "What is the capital of France?"
choices = ["London", "Paris", "Berlin", "Rome"]
prompt = f'''Answer this question with A, B, C, or D.
{question}
A) {choices[0]}
B) {choices[1]}
C) {choices[2]}
D) {choices[3]}
Answer:'''

inputs = tokenizer(prompt, return_tensors="pt")
# temperature only takes effect when sampling is enabled (do_sample=True)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.1)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
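Because `generate()` returns the prompt followed by the completion, the decoded string above contains the whole prompt plus the model's answer. A small, model-agnostic sketch for pulling out the predicted letter (the `extract_answer` helper and the sample completion are illustrative, not real model output):

```python
import re

def extract_answer(generated, prompt):
    # Strip the echoed prompt, then take the first standalone A-D
    # letter in the completion as the model's predicted choice.
    completion = generated[len(prompt):] if generated.startswith(prompt) else generated
    match = re.search(r"\b([ABCD])\b", completion)
    return match.group(1) if match else None

# Illustrative completion, not real model output:
prompt = "Answer:"
print(extract_answer("Answer: B) Paris", prompt))  # → B
```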
License: Apache 2.0
```bibtex
@misc{windyllm_2.3,
  title={WindyLLM 2.3: MMLU Fine-tuned Language Model},
  author={tklohj},
  year={2025},
  url={https://huggingface.co/tklohj/windyllm_2.3}
}
```
Base model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)