MahenOCR-1B
📥 Model Download | 🌟 Metanthropic Research
📖 Introduction
MahenOCR is a 1.0B-parameter Soundness-Aware Vision-Language Model (VLM) developed by Metanthropic Research. It is specialized for high-performance Optical Character Recognition (OCR) while strictly adhering to mechanistic soundness principles.
Built on the architectural efficiency of native-resolution transformers, MahenOCR incorporates a novel Identity-Dissonance Fine-Tuning (IDFT) strategy. This ensures the model maintains a coherent internal identity ("I am MahenOCR") and robust attribution, effectively eliminating the identity hallucinations common in open-weights models.
Despite its lightweight design (1B parameters), MahenOCR achieves commercial-grade performance in:
- Complex Document Parsing (Markdown/LaTeX)
- Multilingual Text Spotting
- Open-Field Information Extraction (JSON)
- Video Subtitle Extraction
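Each task is selected purely through the instruction prompt. The snippet below sketches one plausible prompt per task family; the wordings are illustrative suggestions, not a fixed API:

```python
# Illustrative prompt patterns (wordings are suggestions, not a fixed API).
PROMPTS = {
    "document_parsing": "Transcribe the text in this image into markdown format.",
    "text_spotting": "Detect and recognize text in the image.",
    "information_extraction": "Extract the invoice number, date, and total amount as JSON.",
    "subtitle_extraction": "Transcribe the subtitle text in this video frame.",
}
```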
🚀 Quick Start with Transformers
Installation
MahenOCR requires a development build of `transformers`. Install the pinned commit:

```bash
pip install git+https://github.com/huggingface/transformers@82a06db03535c49aa987719ed0746a76093b1ec4
```
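To confirm the pinned build is active, you can print the installed version (a simple sanity check; the exact version string depends on the commit):

```python
import transformers

# A development build installed from the pinned commit; the exact string may vary.
print(transformers.__version__)
```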
Model Inference (Python)
```python
from transformers import AutoProcessor, HunYuanVLForConditionalGeneration
from PIL import Image
import torch


def clean_repeated_substrings(text):
    """Clean repetitive artifacts common in VLM outputs.

    If the text ends in ten or more consecutive copies of the same short
    substring, trim it back to a single trailing copy.
    """
    n = len(text)
    if n < 8000:
        return text
    for length in range(2, n // 10 + 1):
        candidate = text[-length:]
        count = 0
        i = n - length
        # Walk backwards, counting consecutive copies of the candidate suffix.
        while i >= 0 and text[i:i + length] == candidate:
            count += 1
            i -= length
        if count >= 10:
            # Keep exactly one copy of the repeated suffix.
            return text[:n - length * (count - 1)]
    return text
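# Quick sanity check (illustrative): 20 trailing copies of "abc" collapse
# to a single copy once the text crosses the 8000-character threshold.
assert clean_repeated_substrings("x" * 8000 + "abc" * 20) == "x" * 8000 + "abc"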
# Load MahenOCR
model_id = "metanthropic/MahenOCR-1B"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = HunYuanVLForConditionalGeneration.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.float16,
)
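# Assumption: on GPUs with native bfloat16 support (Ampere or newer), you
# may prefer torch_dtype=torch.bfloat16 above for a wider numeric range.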
# Run Inference
img_path = "path/to/your/image.jpg"
image = Image.open(img_path)

# MahenOCR supports natural language prompts.
# Example: "Detect and recognize text in the image."
prompt = "Transcribe the text in this image into markdown format."

# Construct input (User -> Image -> Assistant format)
text_input = f"User: \n{prompt}\nAssistant:"

inputs = processor(
    text=text_input,
    images=image,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Important: ensure pixel values match the model dtype (FP16/BF16).
inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=2048,
        do_sample=False,  # greedy decoding; temperature is ignored here
    )

response = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(clean_repeated_substrings(response.split("Assistant:")[-1].strip()))
```
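If you prefer not to split on the "Assistant:" marker, a minimal alternative (assuming the processor returns standard `input_ids`) is to decode only the newly generated tokens:

```python
# Slice off the prompt tokens so only the generated continuation is decoded.
gen_only = output_ids[:, inputs["input_ids"].shape[1]:]
response = processor.batch_decode(gen_only, skip_special_tokens=True)[0]
print(clean_repeated_substrings(response.strip()))
```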
📚 Citation
If you use MahenOCR in your research or applications, please cite our technical report:
```bibtex
@article{mahenocr2025,
  title={MahenOCR: A Soundness-Aware 1B Parameter Vision-Language Model},
  author={Metanthropic Research Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/metanthropic/MahenOCR-1B}
}
```
🙏 Acknowledgements
MahenOCR is built upon the Mahen-V1 Native Architecture, a specialized high-resolution vision-language framework designed by Metanthropic Research. We acknowledge the broader open-source community for the foundational transformer advancements that enabled this work. MahenOCR represents a significant evolution in end-to-end OCR, integrating our proprietary Soundness-Aware optimization protocols to deliver commercial-grade performance with strict identity alignment.