ProtBert-BFD-MS

Pretrained model on protein sequences using a masked language modeling (MLM) objective, fine-tuned to make per-protein (pooled) predictions of membrane versus water-soluble localization (2-state classification). The base model was developed by Ahmed Elnaggar et al.; more information can be found in the GitHub repository and the accompanying paper. This repository is a fork of their HuggingFace repository. The model was trained on uppercase amino acids and only works with capital-letter sequences.

Model description

The model has no auxiliary tasks such as BERT's next-sentence prediction; only the masked language modeling objective was used for pretraining.
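
For illustration, here is a minimal sketch of the MLM objective. It assumes the base Rostlab/prot_bert_bfd checkpoint (not this fine-tuned classifier, which only outputs class labels); the masked sequence is the start of the first example sequence used below.

from transformers import pipeline

# Sketch only: loads the base ProtBert-BFD MLM checkpoint (Rostlab/prot_bert_bfd),
# not this fine-tuned classifier, to show the masked-token prediction task.
unmasker = pipeline("fill-mask", model="Rostlab/prot_bert_bfd")
print(unmasker("M A K S K N H T [MASK] H N Q T R K A H R N G I K K P K T Y K Y"))

The fine-tuned classifier itself is used for sequence classification as shown below.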

from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
import re

# Load the fine-tuned classifier and its tokenizer; device=0 selects the first GPU
# (use device=-1 to run on CPU).
pipeline = TextClassificationPipeline(
    model=AutoModelForSequenceClassification.from_pretrained("virtual-human-chc/prot_bert_bfd_membrane"),
    tokenizer=AutoTokenizer.from_pretrained("virtual-human-chc/prot_bert_bfd_membrane"),
    device=0
)

sequences_example = ["MAKSKNHTAHNQTRKAHRNGIKKPKTYKYPSLKGVDPKFRRNHKHALHGTAKALAAAKK",
                     "MGLPVSWAPPALWVLGCCALLLSLWALCTACRRPEDAVAPRKRARRQRARLQGSATAAEASLLRRTHLCSLSKSDTRLHELHRGPRSSRALRPASMDLLRPHWLEVSRDITGPQAAPSAFPHQELPRALPAAAATAGCAGLEATYSNVGLAALPGVSLAASPVVAEYARVQKRKGTHRSPQEPQQGKTEVTPAAQVDVLYSRVCKPKRRDPGPTTDPLDPKGQGAILALAGDLAYQTLPLRALDVDSGPLENVYESIRELGDPAGRSSTCGAGTPPASSCPSLGRGWRPLPASLP"]

# Map rare amino acids (U, Z, O, B) to X and insert spaces between residues,
# as expected by the ProtBert tokenizer.
sequences_example = [" ".join(list(re.sub(r"[UZOB]", "X", sequence))) for sequence in sequences_example]

print(pipeline(sequences_example))

Input

A list of protein sequences given as strings of uppercase amino-acid residues, e.g. ["PRTEINO"]. Rare amino acids (U, Z, O, B) should be mapped to X and the residues separated by spaces before tokenization, as in the code above and the sketch below.
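
A minimal preprocessing sketch using the placeholder string from above:

import re

raw = "PRTEINO"
prepared = " ".join(list(re.sub(r"[UZOB]", "X", raw)))  # -> "P R T E I N X" (rare residue O mapped to X)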

Output

A list with one dictionary per input sequence. The keys of the dictionaries are label and score: label is the predicted class, either Soluble or Membrane, and score is the model's confidence in that prediction. Prediction for the inference example above: [{'label': 'Soluble', 'score': 0.8509202003479004}, {'label': 'Membrane', 'score': 0.8588864207267761}].
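
To relate each prediction to its input, the output can be zipped with the sequence list; this is a minimal sketch that reuses the pipeline and sequences_example variables from the snippet above.

# Pair each input sequence with its predicted label and confidence.
predictions = pipeline(sequences_example)
for sequence, prediction in zip(sequences_example, predictions):
    print(sequence[:20], prediction["label"], round(prediction["score"], 3))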

Copyright

Code derived from https://github.com/agemagician/ProtTrans is licensed under the MIT License, Copyright (c) 2025 Ahmed Elnaggar. The ProtTrans pretrained models are released under the terms of the Academic Free License v3.0, Copyright (c) 2025 Ahmed Elnaggar. The other code is licensed under the MIT License, Copyright (c) 2025 Maksim Pavlov.
