Evaluation Results

Benchmark        Score
GSM8K (test)     2.00%
MMLU-Pro (test)  4.00%

Results were obtained via local evaluation. Given the small model size (0.2B parameters), low benchmark scores are expected.

Model Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "FlameF0X/Qwen2-0.2B-it"

# Load the tokenizer and model; device_map="auto" places the weights
# on a GPU when one is available.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain how a transformer model works in one sentence."}
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model continues as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7
)

# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(f"--- Assistant Response ---\n{response}")
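For reference, Qwen2-family chat templates follow the ChatML convention, so the string produced by `apply_chat_template` should look roughly like the output of this hand-rolled sketch. This assumes the ChatML format and is not a substitute for the tokenizer's own template:

```python
def chatml_prompt(messages, add_generation_prompt=True):
    """Render a message list in ChatML form, as Qwen2-style templates do."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain how a transformer model works in one sentence."},
]
print(chatml_prompt(messages))
```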

Training Data

This model was instruction-tuned on a mixture of:

  • Salesforce/wikitext — General text
  • roneneldan/TinyStories — Short story generation
  • FlameF0X/arXiv-AI-ML — AI/ML research papers
  • Skylion007/openwebtext — Web text
  • flytech/python-codes-25k — Python code
  • bookcorpus/bookcorpus — Books
  • HuggingFaceH4/ultrachat_200k — Instruction following
  • openai/gsm8k — Math reasoning
  • microsoft/orca-math-word-problems-200k — Math word problems
  • laion/OIG — Open instruction generalist
  • microsoft/wiki_qa — Question answering
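The card does not state the mixture proportions, so the sketch below only illustrates probability-weighted interleaving of several corpora; the weights, stream names, and contents are made up for the example:

```python
import random

def interleave(streams, weights, n_samples, seed=0):
    """Draw n_samples examples, choosing a source stream for each draw
    with the given probabilities (a toy stand-in for dataset mixing)."""
    rng = random.Random(seed)
    names = list(streams)
    picks = rng.choices(names, weights=weights, k=n_samples)
    return [(name, next(streams[name])) for name in picks]

# Hypothetical per-source weights; the real training mixture is unspecified.
streams = {
    "ultrachat_200k": iter(f"chat-{i}" for i in range(10**6)),
    "gsm8k": iter(f"math-{i}" for i in range(10**6)),
    "python-codes-25k": iter(f"code-{i}" for i in range(10**6)),
}
batch = interleave(streams, weights=[0.5, 0.3, 0.2], n_samples=5)
print(batch)
```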