# nexus-dispatch-7b

Fine-tuned Qwen2.5-7B-Instruct for SYSBREAK cyberpunk MMO content generation.
## Model Description

This model is a LoRA-merged version of Qwen2.5-7B-Instruct, fine-tuned to generate structured JSON content for the SYSBREAK game. It was trained with the following system prompt, which should also be used at inference time:

> You are a cyberpunk mission writer for SYSBREAK. Generate missions in JSON format. Use ONLY entities from the provided world context. Do NOT include credit/XP reward values. Do NOT mention specific credit amounts in narrative text. Respond with valid JSON only.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("nexus-dispatch-7b")
tokenizer = AutoTokenizer.from_pretrained("nexus-dispatch-7b")

messages = [
    {"role": "system", "content": "You are a cyberpunk mission writer for SYSBREAK. Generate missions in JSON format. Use ONLY entities from the provided world context. Do NOT include credit/XP reward values. Do NOT mention specific credit amounts in narrative text. Respond with valid JSON only."},
    {"role": "user", "content": "Your prompt here"},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature/top_p to take effect;
# without it, generation is greedy and these parameters are ignored.
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.75, top_p=0.9)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
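Because the model is instructed to respond with valid JSON and to omit reward values, it is worth validating its output before use. A minimal sketch (the `generated` string and the field names below are illustrative; the model's actual mission schema may differ):

```python
import json

# Illustrative model output; the real schema depends on your prompt and world context.
generated = '{"title": "Ghost Signal", "giver": "Wren-9", "objectives": ["Tap the relay"]}'

def validate_mission(raw: str) -> dict:
    """Parse model output and reject missions that leak reward fields."""
    mission = json.loads(raw)  # raises ValueError (JSONDecodeError) on malformed JSON
    forbidden = {"credits", "xp", "reward"}
    leaked = forbidden & {k.lower() for k in mission}
    if leaked:
        raise ValueError(f"reward fields present: {sorted(leaked)}")
    return mission

mission = validate_mission(generated)
print(mission["title"])  # → Ghost Signal
```

In practice you would call `validate_mission` on the decoded generation and retry (or re-sample) when it raises.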
### Ollama

```shell
ollama run nexus-dispatch-7b
```
## Training Details
- Training examples: 500
- Training duration: 40.0 minutes
- Base model: Qwen/Qwen2.5-7B-Instruct
- LoRA rank: 32
- LoRA alpha: 64
- Learning rate: 2e-4
- Epochs: 3
- Quantization: QLoRA 4-bit NF4
- Compute dtype: BF16
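The quantization and LoRA hyperparameters above can be expressed with the `transformers` and `peft` libraries along these lines (a sketch, not the exact training script; `target_modules` is an assumed typical choice and is not recorded in this card):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with BF16 compute, matching the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA rank 32, alpha 64 (scaling factor alpha/r = 2.0), as listed above.
# target_modules is an assumption for illustration.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```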
## License
Apache 2.0 (same as base model)