30% Smaller, +3.5% Better

Qwen3.5-27B pruned by 30% and retrained for code through Experiential Plasticity.

3.07 → 2.96 perplexity · 2 cycles

Trust: self-attested · 1 benchmark · 2 devices tested


Qwen3.5-27B with cryptographic provenance via the ForgeAlloy chain of custody.

Benchmarks

| Benchmark | Result | Verified |
|---|---|---|
| perplexity | 3.0 | Self-reported |
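For reference, perplexity is the exponential of the mean per-token negative log-likelihood on a held-out set. A minimal sketch of the metric itself (toy numbers, not the actual evaluation harness used for this card):

```python
import math

def perplexity(nll_per_token):
    """exp of the mean negative log-likelihood per token."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# Toy check: a uniform NLL of ln(3) per token gives perplexity 3.0,
# in the same ballpark as the code perplexities reported above.
print(perplexity([math.log(3.0)] * 4))
```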

What Changed (Base → Forged)

| | Base | Forged | Delta |
|---|---|---|---|
| Perplexity (code) | 3.07 | 2.96 | -3.5% ✅ |
| Pruning | None | 30% heads (magnitude) | -30% params ✅ |
| Training | General | code, 500 steps, LR 2e-4 | 2 cycles |
| Pipeline | — | prune → train | 2 cycles |
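The "30% heads (magnitude)" row can be sketched in isolation: score each attention head by the norm of its projection weights and drop the lowest-scoring 30%. This is an illustrative toy, not Continuum's actual pruning code; the tensor shapes and the `prune_heads_by_magnitude` helper are assumptions for the example.

```python
import numpy as np

def prune_heads_by_magnitude(w, n_heads, frac=0.3):
    """Keep the highest-magnitude heads of a combined projection weight.

    w: (n_heads * head_dim, hidden) weight matrix (illustrative layout).
    """
    head_dim = w.shape[0] // n_heads
    per_head = w.reshape(n_heads, head_dim, -1)
    # L2 norm of each head's weights is its "magnitude" score.
    scores = np.linalg.norm(per_head.reshape(n_heads, -1), axis=1)
    n_keep = n_heads - int(n_heads * frac)
    keep = np.sort(np.argsort(scores)[-n_keep:])  # surviving head indices
    return per_head[keep].reshape(n_keep * head_dim, -1), keep

rng = np.random.default_rng(0)
w = rng.standard_normal((8 * 16, 64))             # 8 heads, head_dim 16
pruned, kept = prune_heads_by_magnitude(w, n_heads=8)
print(pruned.shape, len(kept))                    # (96, 64) 6
```

With 8 heads and `frac=0.3`, two heads are removed, so the pruned matrix keeps 6 of the 8 head slices.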

Runs On

| Device | Format | Size | Status |
|---|---|---|---|
| MacBook Pro 32GB | fp16 | — | Verified |
| RTX 3090 24GB | fp16 | — | Verified |
| MacBook Pro 32GB | fp16 | 8.0GB | Expected |
| MacBook Air 16GB | Q8_0 | ~4.0GB | Expected |
| MacBook Air 8GB | Q4_K_M | ~2.5GB | Expected |
| iPhone / Android | Q4_K_M | ~2.5GB | Expected |

Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "continuum-ai/qwen3.5-27b-code-forged",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("continuum-ai/qwen3.5-27b-code-forged")

inputs = tokenizer("def merge_sort(arr):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Methodology

Produced via head pruning. Full methodology, ablations, and per-stage rationale are in the methodology paper and the companion MODEL_METHODOLOGY.md in this repository. The pipeline ran as prune → train over 2 cycles on MacBook Pro 32GB.
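The prune → train loop over 2 cycles can be sketched abstractly. `prune_step` and `train_step` here are stand-ins, not Continuum's API; in the actual pipeline they correspond to magnitude head pruning and 500 training steps at LR 2e-4 on code.

```python
def prune_step(state):
    # Stand-in for a magnitude head-pruning pass.
    return state + ["prune"]

def train_step(state):
    # Stand-in for a retraining pass (500 steps, LR 2e-4 per the card).
    return state + ["train"]

def forge(cycles=2):
    """Run the prune -> train pipeline for the given number of cycles."""
    state = []
    for _ in range(cycles):
        state = train_step(prune_step(state))
    return state

print(forge())  # ['prune', 'train', 'prune', 'train']
```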

Chain of Custody

Scan the QR or verify online. Download the alloy file to verify independently.

| What | Proof |
|---|---|
| Model weights | sha256:4b4c056e252719d09fffd65c7a72aba3a... |
| Code that ran | sha256:legacy-pre-alloy-... |
| Forged on | MacBook Pro 32GB, 2026-03-27T20:29:26-0500 |
| Trust level | self-attested |
| Spec | ForgeAlloy — Rust/Python/TypeScript |
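To check the weights digest independently, recompute the file's SHA-256 locally and compare it against the full digest recorded in the downloaded alloy (the value shown above is truncated). A minimal helper using only the standard library:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare `sha256_of("model.safetensors")` (path illustrative) with the digest in the alloy file; any mismatch means the weights are not the ones that were forged.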

Make Your Own

Forged with Continuum — a distributed AI world that runs on your hardware.

Continuum Model Factory

The Factory configurator lets you design and forge custom models visually — context extension, pruning, LoRA, quantization, vision/audio modalities. Pick your target devices, the system figures out what fits.

GitHub · All Models · Forge-Alloy

License

apache-2.0

Model size: 27B params · Tensor type: BF16 · Formats: Safetensors, MLX

Model tree for continuum-ai/qwen3.5-27b-code-forged

Base model: Qwen/Qwen3.5-27B (this model is one of its 223 finetunes)