---
language:
- multilingual
license: apache-2.0
license_name: kwaipilot-license
license_link: LICENSE
library_name: transformers
pipeline_tag: text-generation
tags:
- Seed
- GPTQ
- Int4
- vLLM
base_model:
- Kwaipilot/KAT-Dev
base_model_relation: quantized
---
# KAT-Dev-GPTQ-Int4
Base model: [Kwaipilot/KAT-Dev](https://huggingface.co/Kwaipilot/KAT-Dev)

<i>Calibrated using the [openassistant_best_replies_eval.jsonl](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/blob/main/openassistant_best_replies_eval.jsonl) dataset.</i>
<br>
<i>The quantization configuration is as follows:</i>

```python
quant_config = QuantizeConfig(bits=4, group_size=128, desc_act=False)
```

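For reference, a full quantization run with this config might look like the sketch below, using the [GPTQModel](https://github.com/ModelCloud/GPTQModel) library that defines `QuantizeConfig`. The calibration loading and the 256-sample count are assumptions, not the author's published script.

```python
# Hypothetical reproduction sketch (assumed workflow, not the author's script).
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# The eval split of timdettmers/openassistant-guanaco corresponds to
# openassistant_best_replies_eval.jsonl; the "text" field and the
# 256-sample count are assumptions.
rows = load_dataset("timdettmers/openassistant-guanaco", split="test")
calibration_dataset = [row["text"] for row in rows.select(range(256))]

quant_config = QuantizeConfig(bits=4, group_size=128, desc_act=False)

model = GPTQModel.load("Kwaipilot/KAT-Dev", quant_config)  # load the FP16 base model
model.quantize(calibration_dataset)                        # run GPTQ calibration passes
model.save("KAT-Dev-GPTQ-Int4")                            # write Int4 weights + config
```
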
### 【vLLM Startup Command】
```
vllm serve JunHowie/KAT-Dev-GPTQ-Int4
```

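Once the server is up, it exposes an OpenAI-compatible API on port 8000 by default. A minimal request sketch (the prompt is illustrative):

```python
# Query the vLLM OpenAI-compatible endpoint started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
resp = client.chat.completions.create(
    model="JunHowie/KAT-Dev-GPTQ-Int4",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```
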
### 【Dependencies】
```
vllm>=0.10.2
transformers>=4.56.1
```

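Both can be installed in one step, for example:

```
pip install "vllm>=0.10.2" "transformers>=4.56.1"
```
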
### 【Model Download】

```python
from huggingface_hub import snapshot_download
snapshot_download('JunHowie/KAT-Dev-GPTQ-Int4', cache_dir="your_local_path")
```

### 【Overview】
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61ee40a269351366e29972ad/KIYEa1c_WJEWPpeS0L_k1.png" width="100%" alt="Kwaipilot" />
</div>

<hr>

# News

🔥 We’re thrilled to announce the release of **KAT-Dev-72B-Exp**, our latest and most powerful model yet!

🔥 You can now try our **strongest** proprietary coder model **KAT-Coder** directly on the [StreamLake](https://www.streamlake.ai/product/kat-coder) platform **for free**.

62
+ **KAT-Dev-32B** is an open-source 32B-parameter model for software engineering tasks.
63
+
64
+ On SWE-Bench Verified, **KAT-Dev-32B** achieves comparable performance with **62.4%** resolved and ranks **5th** among all open-source models with different scales.
65
+
66
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61ee40a269351366e29972ad/dTpQQPQnp1TdD4YB8gZAu.png)
67
+
# Introduction

**KAT-Dev-32B** is optimized via several stages of training: a mid-training stage, a supervised fine-tuning (SFT) & reinforcement fine-tuning (RFT) stage, and a large-scale agentic reinforcement learning (RL) stage. In summary, our contributions include:

<table>
<thead>
<tr>
<th style="text-align:left; width:18%;">Stage</th>
<th style="text-align:left;">Key Techniques</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>1. Mid-Training</strong></td>
<td>We observe that extensive training of tool-use capability, multi-turn interaction, and instruction following at this stage may not yield large immediate gains on leaderboards such as SWE-bench. However, in our experiments on Qwen3-32B, strengthening these foundational capabilities has a significant impact on the subsequent SFT and RL stages, suggesting that such core abilities profoundly influence the model’s capacity to handle more complex tasks.
</td>
</tr>
<tr>
<td><strong>2. SFT & RFT</strong></td>
<td>We meticulously curated eight task types and eight programming scenarios during the SFT stage to ensure the model’s generalization and comprehensive capabilities. Moreover, before RL we introduce an RFT stage: compared with traditional RL, we incorporate “teacher trajectories” annotated by human engineers as guidance during training, much like a learner driver being coached by an experienced co-driver before driving solo. This step not only boosts model performance but also stabilizes the subsequent RL training.
</td>
</tr>
<tr>
<td><strong>3. Agentic RL Scaling</strong></td>
<td>Scaling agentic RL hinges on three challenges: efficient learning over nonlinear trajectory histories, leveraging intrinsic model signals, and building scalable high-throughput infrastructure. We address these with a multi-level prefix caching mechanism in the RL training engine, an entropy-based trajectory pruning technique (a hypothetical sketch follows the table), and an in-house implementation of the SeamlessFlow [1] architecture that cleanly decouples agents from training while exploiting heterogeneous compute. Together, these innovations cut scaling costs and enable efficient large-scale RL.
</td>
</tr>
</tbody>
</table>

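To make the entropy-based pruning idea concrete, here is a hypothetical sketch; the actual criterion, threshold, and direction of pruning used in training are not published, so all names and numbers below are illustrative:

```python
# Hypothetical sketch of entropy-based trajectory pruning; the released
# materials do not specify the real criterion, so this is illustrative only.
import math

def token_entropy(probs):
    """Shannon entropy of a single next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def prune_trajectories(trajectories, keep_ratio=0.5):
    """Rank trajectories by mean per-token entropy and keep the top fraction.

    One plausible reading: low-entropy trajectories are ones the model is
    already confident about and thus carry little learning signal.
    trajectories: list of trajectories, each a list of next-token
    probability distributions (one per generated token).
    """
    scored = [
        (sum(token_entropy(p) for p in traj) / max(len(traj), 1), traj)
        for traj in trajectories
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest entropy first
    keep = max(1, int(len(scored) * keep_ratio))
    return [traj for _, traj in scored[:keep]]
```
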
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://kwaipilot.github.io/KAT-Coder/).

# Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Kwaipilot/KAT-Dev"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

## Claude Code
### vLLM server
```
MODEL_PATH="Kwaipilot/KAT-Dev"

vllm serve $MODEL_PATH \
    --enable-prefix-caching \
    --tensor-parallel-size 8 \
    --tool-parser-plugin $MODEL_PATH/qwen3coder_tool_parser.py \
    --chat-template $MODEL_PATH/chat_template.jinja \
    --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

[claude-code-router](https://github.com/musistudio/claude-code-router) is a third-party routing utility that lets Claude Code switch flexibly between different backend APIs.
On the DashScope platform, you can install the **claude-code-config** extension package, which automatically generates a default configuration for `claude-code-router` with built-in DashScope support.

Once the configuration files and plugin directory are generated, the environment required by `ccr` is ready.
If needed, you can still manually edit `~/.claude-code-router/config.json` and the files under `~/.claude-code-router/plugins/` to customize the setup, for example to point the router at the local vLLM server started above (a hypothetical sketch follows).
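
A hypothetical `config.json` for that local-vLLM setup might look like the following; field names vary across `claude-code-router` versions, so treat this as a sketch and check the project README:

```json
{
  "Providers": [
    {
      "name": "local-vllm",
      "api_base_url": "http://localhost:8000/v1/chat/completions",
      "api_key": "EMPTY",
      "models": ["Kwaipilot/KAT-Dev"]
    }
  ],
  "Router": {
    "default": "local-vllm,Kwaipilot/KAT-Dev"
  }
}
```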

Finally, simply start `ccr` to run Claude Code and seamlessly connect it to the powerful coding capabilities of **KAT-Dev-32B**.
Happy coding!