---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
language:
- en
---
Model from the paper [MoM: Linear Sequence Modeling with Mixture-of-Memories](https://arxiv.org/abs/2502.13685).
The model was trained on a 15B-token sample of SlimPajama.
Note: due to changes in the MLP layer structure in the latest version of `fla`, these weights cannot be loaded with current releases. Use the pinned commit of [fla](https://github.com/fla-org/flash-linear-attention/tree/8346a33792558d8e3eb206fe18404de037e11d9c) instead.
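Below is a minimal usage sketch, assuming the checkpoint is loaded through Hugging Face `transformers` after installing the pinned `fla` commit from source; the repository id is a placeholder for this card's model id, not a confirmed name.

```python
# Install the pinned fla commit referenced above (run in a shell):
#   pip install git+https://github.com/fla-org/flash-linear-attention.git@8346a33792558d8e3eb206fe18404de037e11d9c

import fla  # registers the linear-attention model classes with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-model-repo-id>"  # placeholder: replace with this card's repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Simple generation check to confirm the weights loaded correctly
inputs = tokenizer("Linear sequence modeling with mixture-of-memories", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```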