---
pipeline_tag: text-generation
license: mit
library_name: mlx
base_model: MiniMaxAI/MiniMax-M2
tags:
- mlx
---

# catalystsec/MiniMax-M2-3bit-DWQ

This model was quantized to 3-bit precision using DWQ with **mlx-lm** version **0.28.4**.

| Parameter                 | Value                          |
|---------------------------|--------------------------------|
| DWQ learning rate         | 3e-7                           |
| Batch size                | 1                              |
| Dataset                   | `allenai/tulu-3-sft-mixture`   |
| Initial validation loss   | 0.146                          |
| Final validation loss     | 0.088                          |
| Relative KL reduction     | ≈40 %                          |
| Tokens processed          | ≈1.09 M                        |
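
For reference, a quantization run with these settings would look roughly like the sketch below. This is illustrative only: `mlx_lm.dwq` is the DWQ entry point in recent mlx-lm releases, but the flag names shown here are assumptions and may differ between versions, so check `mlx_lm.dwq --help` against your install.

```bash
# Illustrative sketch of a DWQ run with the settings from the table above.
# Flag names are assumptions; verify them with `mlx_lm.dwq --help` for your mlx-lm version.
mlx_lm.dwq \
    --model MiniMaxAI/MiniMax-M2 \
    --mlx-path MiniMax-M2-3bit-DWQ \
    --bits 3 \
    --learning-rate 3e-7 \
    --batch-size 1 \
    --data allenai/tulu-3-sft-mixture
```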

## MMLU-Pro Benchmark

| Model | Score |
|-------|:-----:|
| 3-bit DWQ | **66.1** |
| 3-bit | 62.0 |

<img src="minimax_3e-7_mmlu.png" width="600" alt="MMLU-Pro Benchmark">

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (on first use) and load the quantized model and tokenizer.
model, tokenizer = load("catalystsec/MiniMax-M2-3bit-DWQ")
prompt = "hello"

# Apply the model's chat template when one is available.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```
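
You can also generate directly from the terminal with the `mlx_lm.generate` CLI that ships with mlx-lm:

```bash
# One-off generation from the command line; the model is downloaded on first use.
mlx_lm.generate --model catalystsec/MiniMax-M2-3bit-DWQ --prompt "hello"
```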