chemical_structure_smiles_bert

Overview

This model is a BERT-style encoder pre-trained on millions of SMILES (Simplified Molecular Input Line Entry System) strings. It learns the "grammar" of chemistry to provide high-dimensional embeddings of molecules, which can then be used for downstream tasks like property prediction, toxicity screening, and drug-target interaction modeling.
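A minimal usage sketch follows. It assumes a transformers-compatible checkpoint whose path here is only a placeholder, that the bundled tokenizer loads via AutoTokenizer, and that mean pooling over the last hidden state is an acceptable readout; none of these details are guaranteed by this repository.

```python
# Minimal sketch: embedding one SMILES string with a BERT-style encoder.
# Assumptions: placeholder checkpoint name, AutoTokenizer-compatible tokenizer,
# and mean pooling as the molecule-level readout.
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "your-org/chemical_structure_smiles_bert"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
model.eval()

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin

with torch.no_grad():
    inputs = tokenizer(smiles, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state       # (1, seq_len, hidden_dim)
    mask = inputs["attention_mask"].unsqueeze(-1)     # ignore padding positions
    embedding = (hidden * mask).sum(1) / mask.sum(1)  # mean-pooled molecule vector

print(embedding.shape)  # e.g. torch.Size([1, 768]) for a BERT-base-sized encoder
```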

Model Architecture

The model adapts the BERT (Bidirectional Encoder Representations from Transformers) architecture for molecular sequences:

  • Tokenizer: Custom regex-based tokenizer that treats atoms (e.g., [Fe+2], Cl) and structural markers (=, #, (, )) as individual tokens; a tokenizer sketch follows this list.
  • Encoder: 12 layers of multi-head self-attention.
  • Pre-training Task: Masked Language Modeling (MLM), where 15% of atom/bond tokens are masked and predicted from the surrounding context.
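The model ships with its own tokenizer; the sketch below only illustrates the regex-based approach and the MLM masking described above. The regex pattern (a form commonly used for SMILES in the literature), the [MASK] token, and the simplified masking recipe are assumptions for illustration, not the model's actual implementation.

```python
# Sketch of a regex-based SMILES tokenizer and 15% MLM masking.
# Assumptions: this regex and the [MASK] convention are illustrative only;
# the model's actual tokenizer and masking recipe may differ.
import random
import re

SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize(smiles: str) -> list[str]:
    """Split a SMILES string into atom and structural-marker tokens."""
    return SMILES_TOKEN_PATTERN.findall(smiles)

def mask_tokens(tokens: list[str], mlm_prob: float = 0.15) -> list[str]:
    """Mask ~15% of tokens with [MASK] (the full BERT recipe also substitutes
    random or unchanged tokens for part of the selected positions)."""
    return [t if random.random() > mlm_prob else "[MASK]" for t in tokens]

tokens = tokenize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(tokens)        # ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', ...]
print(mask_tokens(tokens))
```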

Intended Use

  • Molecular Fingerprinting: Generating dense vector representations for similarity searches in chemical databases; a similarity-search sketch follows this list.
  • Lead Optimization: Serving as a feature extractor for models predicting LogP, solubility, or binding affinity.
  • Reaction Prediction: Analyzing chemical transformations by comparing reactant and product embeddings.
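As a sketch of the fingerprinting use case, the snippet below ranks a small in-memory set of molecules by cosine similarity to a query. It relies on the same assumptions as the Overview sketch: a placeholder checkpoint path, an AutoTokenizer-compatible tokenizer, and mean pooling as the molecule readout.

```python
# Sketch: similarity search over mean-pooled SMILES embeddings.
# Same assumptions as the Overview sketch (placeholder checkpoint name,
# AutoTokenizer-compatible tokenizer, mean pooling as the readout).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

checkpoint = "your-org/chemical_structure_smiles_bert"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
model.eval()

def embed(smiles_list: list[str]) -> torch.Tensor:
    """Return one L2-normalised embedding per SMILES string."""
    inputs = tokenizer(smiles_list, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state    # (batch, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)     # mean pooling
    return F.normalize(pooled, dim=-1)

library = ["CCO", "CC(=O)O", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
query = ["CC(=O)Oc1ccccc1C(=O)O"]                     # aspirin

scores = embed(query) @ embed(library).T              # cosine similarities
for smiles, score in sorted(zip(library, scores[0].tolist()),
                            key=lambda x: -x[1]):
    print(f"{score:.3f}  {smiles}")
```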

Limitations

  • 3D Conformation: The model understands only 1D string representations and does not account for 3D conformation, spatial stereochemistry, or bond angles.
  • Sequence Length: Inputs longer than 512 SMILES tokens (e.g., very large polymers or proteins) are truncated.
  • Chemical Diversity: Pre-training is skewed toward drug-like small molecules, so embeddings and downstream predictions may be unreliable for chemical space outside that distribution.