Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation
Paper: arXiv:2410.00683
This is a Llama-3-KoEn-8B-Instruct-preview model fine-tuned on the Parenthetical Terminology Translation (PTT) dataset. The PTT dataset focuses on translating technical terms accurately by placing the original English term in parentheses alongside its Korean translation, enhancing clarity and precision in specialized fields. This fine-tuned model is optimized for handling technical terminology in the Artificial Intelligence (AI) domain.
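For concreteness, here is a minimal sketch of what a PTT-style translation pair looks like (the sentence pair below is an illustrative example, not drawn from the dataset):

# Illustrative (hypothetical) English-Korean PTT pair: the Korean translation
# keeps the original English technical term in parentheses next to its rendering.
src = "Knowledge distillation compresses a large model into a smaller one."
tgt = "지식 증류(knowledge distillation)는 큰 모델을 더 작은 모델로 압축합니다."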
Here's how to use this fine-tuned model with the Hugging Face transformers library:
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load Model and Tokenizer
model_name = "PrompTartLAB/Llama3ko_8B_inst_PTT_enko"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Example sentence
text = "The model was fine-tuned using knowledge distillation techniques. The training dataset was created using a collaborative multi-agent framework powered by large language models."
prompt = f"Translate input sentence to Korean \n### Input: {text} \n### Translated:"
# Tokenize the prompt and generate the translation
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens (skip the prompt)
out_message = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(out_message)
# " ์ด ๋ชจ๋ธ์ ์ง์ ์ฆ๋ฅ ๊ธฐ๋ฒ(knowledge distillation techniques)์ ์ฌ์ฉํ์ฌ ๋ฏธ์ธ ์กฐ์ ๋์์ต๋๋ค. ํ๋ จ ๋ฐ์ดํฐ์
์ ๋ํ ์ธ์ด ๋ชจ๋ธ(large language models)๋ก ๊ตฌ๋๋๋ ํ๋ ฅ์ ๋ค์ค ์์ด์ ํธ ํ๋ ์์ํฌ(collaborative multi-agent framework)๋ฅผ ์ฌ์ฉํ์ฌ ์์ฑ๋์์ต๋๋ค."
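If you need to translate several sentences at once, a batched variant is sketched below. This reuses the same prompt template as above; it assumes the tokenizer ships without a padding token (common for Llama-family checkpoints), so the EOS token is reused, and it sets left padding, which decoder-only models need for batched generation. The input sentences are placeholders.

# Batched translation sketch (assumptions: same prompt template as above;
# no pad token is configured, so EOS is reused; left padding for decoder-only generation)
sentences = [
    "The encoder maps tokens to embeddings.",
    "Gradient clipping stabilizes training.",
]
prompts = [f"Translate input sentence to Korean \n### Input: {s} \n### Translated:" for s in sentences]
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**batch, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id)
# Strip the (padded) prompt tokens before decoding each row
for row in outputs:
    print(tokenizer.decode(row[batch["input_ids"].shape[1]:], skip_special_tokens=True))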
If you use this model in your research, please cite the original dataset and paper:
@misc{myung2024efficienttechnicaltermtranslation,
      title={Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation},
      author={Jiyoon Myung and Jihyeon Park and Jungki Son and Kyungro Lee and Joohyung Han},
      year={2024},
      eprint={2410.00683},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.00683},
}
For questions or feedback, please contact [email protected].
Base model: beomi/Llama-3-KoEn-8B-Instruct-preview