OLMo 2 1124 7B Instruct for Hausa: AdaLoRA

This model is built on top of OLMo 2 1124 7B Instruct, adapted for Hausa using 200M target-language tokens sampled from MADLAD-400. Adaptation uses AdaLoRA (https://arxiv.org/abs/2303.10512), which was the best-performing LoRA-based method in the HFT paper.
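
For reference, the sketch below shows how an AdaLoRA adapter can be configured with peft's AdaLoraConfig. It is a minimal illustration only: the ranks, target modules, and schedule values are placeholder assumptions, not the hyperparameters used to train this adapter.

from transformers import AutoModelForCausalLM
from peft import AdaLoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B-Instruct")

# AdaLoRA starts every adapted matrix at rank init_r and prunes singular
# values during training until the average rank reaches target_r; tinit,
# tfinal, and deltaT control the pruning schedule. All values here are
# placeholders.
config = AdaLoraConfig(
    init_r=12,
    target_r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    tinit=200,
    tfinal=1000,
    deltaT=10,
    total_step=10_000,  # required by recent peft versions; set to the length of your training run
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()

# During training, AdaLoRA also needs its rank-allocation update after each
# optimizer step, e.g. model.base_model.update_and_allocate(global_step).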

Model Description

- Language: Hausa
- Adaptation method: AdaLoRA
- Base model: allenai/OLMo-2-1124-7B-Instruct
- Training data: 200M Hausa tokens sampled from MADLAD-400

Model Sources

- Paper: https://arxiv.org/abs/2512.04844
- Base model: https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then apply the Hausa AdaLoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-1124-7B-Instruct",
)
model = PeftModel.from_pretrained(
    base_model,
    "ssu-project/OLMo-2-1124-7B-Instruct-ha-adalora",
)
# Merge the adapter weights into the base model so inference runs without
# the PEFT wrapper.
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(
    "allenai/OLMo-2-1124-7B-Instruct"
)

Citation

@misc{yamaguchi2025mitigatingcatastrophicforgettingtarget,
    title={Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates}, 
    author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
    year={2025},
    eprint={2512.04844},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2512.04844}, 
}