EnglishNER_TR
A custom Transformer encoder for English named-entity recognition (NER), trained from scratch on CoNLL-2003 with a custom WordPiece tokenizer.
Model details
- Architecture: Custom Transformer encoder for token classification
- Tokenizer: Custom WordPiece tokenizer
- Labels: ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
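The label set above follows the BIO tagging scheme (B- marks the beginning of an entity, I- its continuation, O any non-entity token). A minimal sketch of the `label2id`/`id2label` mappings a token-classification config typically stores for this label set (variable names here are illustrative, not taken from the repository):

```python
# BIO label set from this model card; the two dicts mirror what a
# token-classification config usually exposes as label2id / id2label.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for i, label in enumerate(labels)}

print(label2id["B-PER"])  # 1 (index of the "begin person" tag)
print(id2label[0])        # O (outside any entity)
```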
Best run summary
- Best epoch: 28
- Best validation F1: 0.7278
- Final validation metrics: precision 0.7052, recall 0.7518, F1 0.7278, accuracy 0.9551
- Final test metrics: precision 0.6363, recall 0.6932, F1 0.6635, accuracy 0.9400
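F1 is the harmonic mean of precision and recall, so the reported scores can be sanity-checked directly from the precision and recall figures above (rounded to four decimal places):

```python
# Verify that the reported F1 scores equal 2PR / (P + R).
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(0.7052, 0.7518), 4))  # 0.7278 (validation)
print(round(f1(0.6363, 0.6932), 4))  # 0.6635 (test)
```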
Loading

```python
from transformers import AutoConfig, AutoModelForTokenClassification, AutoTokenizer

repo_id = "Ahmedhisham/EnglishNER_TR"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForTokenClassification.from_pretrained(repo_id, trust_remote_code=True)
```
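Once loaded, the model's per-token logits are decoded by taking an argmax over the label dimension and grouping B-/I- tags into entity spans. A self-contained sketch of that BIO post-processing step (the predicted tag ids below are dummy values for illustration, not actual model output):

```python
# BIO label set from this model card.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_bio(tag_ids, tokens):
    """Group per-token BIO tag ids into (entity_type, text) spans."""
    entities, current = [], None
    for tok, tid in zip(tokens, tag_ids):
        tag = labels[tid]
        if tag.startswith("B-"):          # a new entity starts here
            if current:
                entities.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)        # continue the open entity
        else:                             # "O" or an inconsistent I- tag
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(toks)) for etype, toks in entities]

# Dummy predictions for the tokens of "John works at Google":
tokens = ["John", "works", "at", "Google"]
tag_ids = [1, 0, 0, 3]  # B-PER, O, O, B-ORG
print(decode_bio(tag_ids, tokens))  # [('PER', 'John'), ('ORG', 'Google')]
```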
Notes
This repository contains custom modeling code, configuration code, tokenizer files, and training metadata.