Enlarging max_position_embeddings to support long-context generation for the specific inference engine. (#4)
Commit: 64a1e445bb77ec824edc3d9c13c925775295429a
- config.json +1 -1
config.json CHANGED
@@ -406,7 +406,7 @@
   "hidden_size": 4096,
   "initializer_range": 0.02,
   "intermediate_size": 8192,
-  "max_position_embeddings":
+  "max_position_embeddings": 65536,
   "max_window_layers": 94,
   "mlp_only_layers": [],
   "model_type": "qwen3_moe",
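Since the commit only updates the `max_position_embeddings` field of `config.json`, the same edit can be reproduced programmatically. A minimal sketch (the helper name and the abbreviated config fragment are illustrative, not part of the actual repository tooling):

```python
import json

def set_max_position_embeddings(config_text: str, new_value: int = 65536) -> str:
    """Parse a config.json payload, update max_position_embeddings,
    and return the re-serialized JSON. Hypothetical helper."""
    cfg = json.loads(config_text)
    cfg["max_position_embeddings"] = new_value
    return json.dumps(cfg, indent=2)

# Abbreviated fragment containing only the fields visible in the diff.
sample = json.dumps({
    "hidden_size": 4096,
    "initializer_range": 0.02,
    "intermediate_size": 8192,
    "max_window_layers": 94,
    "mlp_only_layers": [],
    "model_type": "qwen3_moe",
})

patched = set_max_position_embeddings(sample)
print(json.loads(patched)["max_position_embeddings"])  # → 65536
```

Note that inference engines generally read this field to size their position caches, so the new value takes effect the next time the model config is loaded; raising it does not by itself retrain or re-scale the positional encoding.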