Gemma 3 27B Instruct - Norm-Preserving Abliterated-v1

This model is a variant of YanLabs/gemma-3-27b-it-abliterated-normpreserve. It is abliterated less aggressively in order to preserve the capabilities of the original gemma-3-27b-it model. For this v1 release, only the Q8_0 quantization is recommended: at quantization levels below Q8_0 refusals still occur, whereas with Q8_0 and F16 the model does not refuse.
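
As a minimal usage sketch, a Q8_0 GGUF export of this model can be run with llama-cpp-python; the filename below is a placeholder, not an official artifact of this repository.

```python
# Minimal sketch: running a Q8_0 GGUF export of this model with llama-cpp-python.
# The model_path is a placeholder; point it at whichever Q8_0 GGUF file you use.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-27b-it-abliterated-normpreserve-v1.Q8_0.gguf",  # placeholder path
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what abliteration changes in a language model."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```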

This is an abliterated version of google/gemma-3-27b-it using the norm-preserving biprojected abliteration technique.

โš ๏ธ Warning: Safety guardrails and refusal mechanisms have been removed through abliteration. This model may generate harmful content and is intended for mechanistic interpretability research only.

Model Details

Model Description

This model applies norm-preserving biprojected abliteration to remove refusal behaviors while preserving the model's original capabilities. The technique surgically removes "refusal directions" from the model's activation space without traditional fine-tuning.
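
As an illustration of the general idea (a simplified sketch, not the exact biprojected pipeline used to produce this model), the snippet below removes a unit refusal direction from a weight matrix by orthogonal projection and then rescales each row back to its original norm, which is the norm-preserving step. The refusal direction is assumed to have been estimated separately, for example as the difference of mean hidden states on refused versus answered prompts.

```python
# Simplified sketch of norm-preserving refusal-direction removal (illustrative only).
import torch

def remove_refusal_direction(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project refusal_dir out of each row of W, then restore the original row norms."""
    r = refusal_dir / refusal_dir.norm()          # unit refusal direction
    orig_norms = W.norm(dim=1, keepdim=True)      # per-row norms before editing
    W_edited = W - torch.outer(W @ r, r)          # remove the component along r
    new_norms = W_edited.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_edited * (orig_norms / new_norms)    # norm-preserving rescale

# Hypothetical usage on a single weight matrix:
# W_new = remove_refusal_direction(model_weight, refusal_dir)
```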

  • Developed by: YanLabs
  • Model type: Causal Language Model (Transformer)
  • License: Gemma Terms of Use
  • Base model: google/gemma-3-27b-it

Uses

Intended Use

  • Research: Mechanistic interpretability studies (see the loading sketch after this list)
  • Analysis: Understanding LLM safety mechanisms
  • Development: Testing abliteration techniques
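
A minimal loading sketch for interpretability work, assuming the checkpoint keeps the upstream Gemma 3 architecture and therefore loads with the same transformers classes as google/gemma-3-27b-it (recent transformers releases with Gemma 3 support):

```python
# Minimal sketch: capturing per-layer hidden states for interpretability analysis.
# Assumes the checkpoint loads with the same classes as google/gemma-3-27b-it.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "YanLabs/gemma-3-27b-it-abliterated-normpreserve-v1"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [{"type": "text", "text": "Hello!"}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    outputs = model(**inputs, output_hidden_states=True)

# Tuple with one tensor per layer (plus embeddings), each of shape [batch, seq_len, hidden_dim]
hidden_states = outputs.hidden_states
```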

Out-of-Scope Use

  • โŒ Production deployments
  • โŒ User-facing applications
  • โŒ Generating harmful content for malicious purposes

Limitations

  • Abliteration does not guarantee complete removal of all refusals
  • May generate unsafe or harmful content
  • Model behavior may be unpredictable in edge cases
  • No explicit harm prevention mechanisms remain

Citation

If you use this model in your research, please cite:

@misc{gemma-3-27b-abliterated,
  author = {YanLabs},
  title = {Gemma 3 27B Instruct - Norm-Preserving Abliterated V1},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/YanLabs/gemma-3-27b-it-abliterated-normpreserve-v1}},
  note = {Abliterated using norm-preserving biprojected technique}
}