EnlistedGhost committed
Commit 20952ed · verified · Parent(s): 58b90dd

Update Modelfile-Pixtral-12B-Q8_0.md

Files changed (1):
  1. Modelfile-Pixtral-12B-Q8_0.md +13 -6
Modelfile-Pixtral-12B-Q8_0.md CHANGED
@@ -1,26 +1,33 @@
-# Pixtral-12B-GGUF Modelfile (Q2_K)
+# Pixtral-12B-GGUF Modelfile (Q8_0)
 # ---------------------------------
 #
 # Tested with: Ollama v0.11.X-->v0.12.6(latest)
-# Quantization: Q2_K (Quant created by = ggml-org)
-# Quality: Decent/Okay - Recommend Q4_K_S or higher...
+# Quantization: Q8_0 (Quant created by = ggml-org)
+# Quality: Extremely-High-Quality (Updated 2025/10/28)
+# Real-world usability: Very Recommended!
 # ----------------------------------------------------
 #
 # Vision Notes:
 # Some users may need to set the context value -or- "num_ctx"
-# value to at least ~12K-->19K.
+# value to ~9K-->19K.
+# Personally tested with: num_ctx=9982 and num_ctx=19982
 # -----------------------------------------------------------
 #
 # Created by:
 # EnlistedGhost (aka Jon Zaretsky)
-# --------------------------------
+# Original GGUF by: https://huggingface.co/ggml-org
+# Original GGUF type: Static Quantize (non-iMatrix)
+# ----------------------------------------------------------
+# | Warning! - iMatrix Quantize seems to suffer in regards |
+# | to vision quality, but are still made available        |
+# ----------------------------------------------------------
 #
 # Goal:
 # To provide the FIRST actually functional and usable
 # GGUF model version of the Mistral Pixtral-12B for
 # direct-usage with Ollama!
 # Currently, there are NO USABLE OR WORKING versions
-# of this model...
+# of this model that are usable with Ollama...
 # ---------------------------------------------------
 #
 # Big/Giant/Huge Thank You:
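
For readers applying the `num_ctx` advice from the Vision Notes in this diff, a minimal Ollama Modelfile sketch could look like the following. This is an illustration, not the author's full Modelfile: the `FROM` path is a placeholder assumption, and `19982` is one of the values the author reports personally testing.

```
# Minimal sketch (not the author's full Modelfile).
# FROM path is a placeholder -- point it at your local Q8_0 GGUF file.
FROM ./pixtral-12b-q8_0.gguf

# Raise the context window as suggested in the Vision Notes;
# 19982 is one of the personally tested values (9982 also reported).
PARAMETER num_ctx 19982
```

A Modelfile like this would typically be registered and run with `ollama create pixtral-12b -f Modelfile` followed by `ollama run pixtral-12b`.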