## Evaluation Results

| Benchmark | Score |
|---|---|
| GSM8K (test) | 2.00% |
| MMLU-Pro (test) | 4.00% |

Results were obtained via local evaluation. Given the model's small size (0.2B parameters), low benchmark scores are expected.
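The GSM8K score above comes from a local harness. As a minimal sketch of how exact-match scoring on GSM8K-style outputs can work (the helper names and regex here are illustrative assumptions, not the actual evaluation code used for this card):

```python
import re

def extract_final_number(text: str):
    # GSM8K references end with "#### <answer>"; a model completion may just
    # mention the answer in prose. Take the last number found in either case.
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return matches[-1] if matches else None

def gsm8k_accuracy(predictions, references):
    # Exact match on the extracted final number.
    correct = sum(
        extract_final_number(p) == extract_final_number(r)
        for p, r in zip(predictions, references)
    )
    return correct / len(references)

preds = ["The answer is 42.", "I think 7 apples.", "Total = 100"]
refs = ["#### 42", "#### 8", "#### 100"]
print(gsm8k_accuracy(preds, refs))  # → 0.6666666666666666
```

Real harnesses differ mainly in prompt formatting and answer extraction, which is why locally measured scores can deviate from published leaderboards.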
## Model Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "FlameF0X/Qwen2-0.2B-it"

# Load the tokenizer and model; device_map="auto" places weights on GPU if available.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

# Build a chat prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain how a transformer model works in one sentence."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Sample a response.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7
)

# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(f"--- Assistant Response ---\n{response}")
```
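The final list comprehension above slices the prompt tokens off each generated sequence, since `generate` returns prompt plus continuation. Isolated with toy token ids (the ids below are purely illustrative):

```python
# Stand-ins for model_inputs.input_ids and the output of model.generate():
# each output row is the prompt followed by the newly generated tokens.
prompt_ids = [[101, 7592, 102], [101, 2054, 2003, 102]]
full_outputs = [
    [101, 7592, 102, 555, 666],   # prompt + 2 new tokens
    [101, 2054, 2003, 102, 777],  # prompt + 1 new token
]

# Same slicing pattern as in the usage code above.
new_tokens = [out[len(inp):] for inp, out in zip(prompt_ids, full_outputs)]
print(new_tokens)  # → [[555, 666], [777]]
```

Without this step, `batch_decode` would echo the prompt back at the start of every response.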
## Training Data

This model was instruction-tuned on a mixture of:

- Salesforce/wikitext: general text
- roneneldan/TinyStories: short story generation
- FlameF0X/arXiv-AI-ML: AI/ML research papers
- Skylion007/openwebtext: web text
- flytech/python-codes-25k: Python code
- bookcorpus/bookcorpus: books
- HuggingFaceH4/ultrachat_200k: instruction following
- openai/gsm8k: math reasoning
- microsoft/orca-math-word-problems-200k: math word problems
- laion/OIG: open instruction generalist
- microsoft/wiki_qa: question answering
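One common way to combine sources like these into a single training stream is weighted sampling across datasets. A minimal sketch (the source contents and mixing weights below are illustrative assumptions; the actual proportions used for this model are not documented here):

```python
import random

# Toy stand-ins for a few of the datasets listed above.
sources = {
    "wikitext": ["wiki doc 1", "wiki doc 2"],
    "ultrachat": ["chat example 1", "chat example 2"],
    "gsm8k": ["math problem 1"],
}
# Hypothetical mixing weights, normalized by random.choices internally.
weights = {"wikitext": 0.5, "ultrachat": 0.3, "gsm8k": 0.2}

def sample_mixture(n: int, seed: int = 0):
    """Draw n training examples, picking a source by weight each step."""
    rng = random.Random(seed)
    names = list(sources)
    probs = [weights[k] for k in names]
    out = []
    for _ in range(n):
        name = rng.choices(names, weights=probs)[0]
        out.append(rng.choice(sources[name]))
    return out

print(sample_mixture(4))
```

In practice this is what utilities like interleaving with sampling probabilities in the Hugging Face `datasets` library do at scale; the sketch only shows the idea.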
## Base Model

This model was fine-tuned from FlameF0X/Qwen2-0.2B-pt.