See Devstral-Small-2-24B-Instruct-2512-q6 MLX in action in the demonstration video.

The q6.5-bit mixed quant typically achieves a perplexity of 1.128 in our testing:
| Quantization | Perplexity |
|---|---|
| q2.5 | 41.293 |
| q3.5 | 1.900 |
| q4.5 | 1.168 |
| q4.8 | 1.140 |
| q5.5 | 1.141 |
| q6.5 | 1.128 |
| q8.5 | 1.128 |
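
The card does not specify the exact evaluation setup behind these numbers. As a rough illustration only, per-token perplexity over a held-out text can be computed with mlx-lm along these lines; the model path and evaluation text below are placeholders, not the actual setup used for the table:

```python
# Hedged sketch: per-token perplexity with mlx-lm.
# Model path and evaluation text are placeholders, not the
# actual configuration behind the table above.
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

model, tokenizer = load("path/to/Devstral-Small-2-24B-Instruct-2512-q6-mlx")

text = "A held-out evaluation passage goes here."
tokens = mx.array(tokenizer.encode(text))

# Logits at positions 0..n-2 predict tokens 1..n-1.
logits = model(tokens[None, :-1])
nll = nn.losses.cross_entropy(logits[0], tokens[1:], reduction="mean")
print(f"perplexity: {mx.exp(nll).item():.3f}")
```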
## Usage Notes

Tested on an M3 Ultra with 512 GB RAM using the Inferencer app v1.8.0:
- Single inference: ~32 tokens/s at 1,000 tokens (see the timing sketch after this list)
- Batched inference: ~24 total tokens/s across two concurrent inferences
- Memory usage: ~21 GB
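
These throughput figures can be sanity-checked with a simple timing loop around mlx-lm's `generate`; the model path and prompt below are placeholders, and results will vary with hardware and context length:

```python
# Hedged sketch: rough single-stream throughput measurement with mlx-lm.
# Model path and prompt are placeholders; numbers depend on hardware.
import time
from mlx_lm import load, generate

model, tokenizer = load("path/to/Devstral-Small-2-24B-Instruct-2512-q6-mlx")

start = time.perf_counter()
output = generate(
    model, tokenizer,
    prompt="Explain what MLX is in one paragraph.",
    max_tokens=1000,
)
elapsed = time.perf_counter() - start

n_tokens = len(tokenizer.encode(output))  # approximate generated-token count
print(f"~{n_tokens / elapsed:.1f} tokens/s over {n_tokens} tokens")
```

Calling `generate` with `verbose=True` also prints mlx-lm's own tokens-per-second statistics.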
Quantized with a modified version of MLX 0.29. A patch fixing a runtime error has also been submitted upstream to MLX and should be available soon.
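
For reference, stock mlx-lm exposes a uniform quantization path via `convert`. The mixed q6.5 scheme here came from the modified MLX build, so the following only approximates the workflow, and the upstream repo id is an assumption:

```python
# Hedged sketch: uniform 6-bit quantization with stock mlx-lm.
# The q6.5 mixed quant above used a modified MLX 0.29, which this
# does not reproduce; the source repo id is an assumption.
from mlx_lm import convert

convert(
    "mistralai/Devstral-Small-2-24B-Instruct-2512",  # assumed upstream id
    mlx_path="devstral-small-2-q6-mlx",
    quantize=True,
    q_bits=6,        # uniform 6-bit, not the mixed q6.5 scheme
    q_group_size=64,
)
```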
For more details, see the demonstration video or visit Devstral-Small-2-24B-Instruct-2512.

## Disclaimer
We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate, and you are responsible for verifying any information they produce before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.