Qwen3-8B-FlashNorm

This is a FlashNorm-prepared checkpoint of Qwen/Qwen3-8B, converted with the technique presented in the paper FlashNorm: Fast Normalization for Transformers.

Mathematically equivalent to the source model. The per-channel RMSNorm weight tensors (input_layernorm.weight, post_attention_layernorm.weight, model.norm.weight) are folded into the linear layers that immediately follow them and are removed from the state dict entirely.
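
You can verify the removal by inspecting the checkpoint's weight index. A minimal sketch, assuming the checkpoint is sharded safetensors with the standard model.safetensors.index.json file (the filename is an assumption):

import json
from huggingface_hub import hf_hub_download

# Assumption: sharded safetensors checkpoint with the standard index file name.
idx_path = hf_hub_download('open-machine/Qwen3-8B-FlashNorm',
                           'model.safetensors.index.json')
weight_map = json.load(open(idx_path))['weight_map']

# The folded norm tensors should be absent from the state dict.
folded = ('input_layernorm.weight', 'post_attention_layernorm.weight',
          'model.norm.weight')
print([k for k in weight_map if k.endswith(folded)])  # expected: []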

Framework support note. Stock vLLM currently does not load this checkpoint because the norm weight tensors are absent. An upstream patch to accept the missing tensors is tracked at: TBD (vLLM issue link). Until the patch lands, use HuggingFace Transformers: it loads the checkpoint with a warning that the norm weights were not initialized and defaults them to ones, which is exactly the value FlashNorm expects.
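
To see exactly which tensors Transformers reports as missing, and to confirm they are defaulted to ones, you can pass output_loading_info=True to from_pretrained. A sketch:

import torch
from transformers import AutoModelForCausalLM

model, info = AutoModelForCausalLM.from_pretrained(
    'open-machine/Qwen3-8B-FlashNorm', output_loading_info=True)

# The missing keys are the folded norm weights.
print([k for k in info['missing_keys'] if 'norm' in k][:3])

# Transformers initializes the missing RMSNorm weights to ones,
# which is the value FlashNorm expects.
print(torch.all(model.model.norm.weight == 1).item())  # True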

What FlashNorm does

An exact reformulation of the RMSNorm -> Linear pair:

  • Fold the per-channel normalization weight g into the following linear layer: W_star = W @ diag(g), computed once at checkpoint conversion (see the sketch after this list).
  • After folding, the RMSNorm layer has no learnable per-channel scale. At runtime it simply divides by rms(x).
  • The resulting model computes the same output as the original, by Proposition 1 of the FlashNorm paper.
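
For concreteness, the fold for one RMSNorm/Linear pair looks like this in PyTorch. This is a minimal sketch of the conversion math, not the transformer_tricks implementation:

import torch

def fold_norm_into_linear(W: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    # W has shape (out_features, in_features), g has shape (in_features,).
    # W_star = W @ diag(g): scale each input column of W by g, once, offline.
    return W * g

def rms_normalize(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # The FlashNorm runtime norm: divide by rms(x), no per-channel scale.
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

# Equivalence check on random data:
W, g, x = torch.randn(8, 4), torch.randn(4), torch.randn(4)
y_orig  = (g * rms_normalize(x)) @ W.T                      # RMSNorm with scale, then Linear
y_flash = rms_normalize(x) @ fold_norm_into_linear(W, g).T  # FlashNorm
print(torch.allclose(y_orig, y_flash, atol=1e-6))           # True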

See the paper and the transformer-tricks repo for details.

Usage

Regenerate locally with transformer_tricks

import transformer_tricks as tt
tt.flashify_repo('Qwen/Qwen3-8B', strict=True)

Via HuggingFace Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/Qwen3-8B-FlashNorm')
model = AutoModelForCausalLM.from_pretrained('open-machine/Qwen3-8B-FlashNorm')

ids = tok('Once upon a time', return_tensors='pt').input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

A warning about missing norm weights is expected; Transformers defaults those to ones, which is the correct value for a FlashNorm checkpoint.
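
To sanity-check the equivalence claim end to end, you can compare logits from the source model and this checkpoint. A sketch; note it loads two 8B models, so it needs plenty of memory:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('Qwen/Qwen3-8B')
ids = tok('Once upon a time', return_tensors='pt').input_ids

ref = AutoModelForCausalLM.from_pretrained('Qwen/Qwen3-8B',
                                           torch_dtype=torch.bfloat16)
flash = AutoModelForCausalLM.from_pretrained('open-machine/Qwen3-8B-FlashNorm',
                                             torch_dtype=torch.bfloat16)

with torch.no_grad():
    diff = (ref(ids).logits - flash(ids).logits).abs().max()

# Bit-exact equality is not guaranteed in bf16; expect only a small
# floating-point discrepancy.
print(diff.item())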

Via vLLM

Not yet supported. See the tracking issue linked above.

Citation

@misc{graef2024flashnormfastnormalizationtransformers,
      title={FlashNorm: Fast Normalization for Transformers}, 
      author={Nils Graef and Matthew Clapp and Andrew Wasielewski},
      year={2024},
      eprint={2407.09577},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2407.09577}, 
}

License

Inherited from the source model.
