EnlistedGhost committed
Commit f8e898f · verified · 1 parent: 9d18b61

Upload 3 files

.gitattributes CHANGED
@@ -47,3 +47,4 @@ Pixtral-12B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
  Pixtral-12B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
  Pixtral-12B-F16.gguf filter=lfs diff=lfs merge=lfs -text
  Pixtral-12B-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Pixtral-12B-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
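For context, the attribute line added above is what `git lfs track "Pixtral-12B-IQ3_M.gguf"` writes into `.gitattributes` when run inside a clone with git-lfs installed. A minimal sketch of the line it produces (the filename pattern is the only part that varies per file):

```shell
# The exact attribute line git-lfs appends for this file; the filter/diff/merge
# attributes route the file through LFS instead of storing it in the git object store.
line='Pixtral-12B-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text'
printf '%s\n' "$line"
```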
Modelfile-Pixtral-12B-IQ2_M.md ADDED
@@ -0,0 +1,64 @@
+ # Pixtral-12B-GGUF Modelfile (IQ2_M)
+ # ----------------------------------
+ #
+ # Tested with: Ollama v0.11.x --> v0.12.9 (latest)
+ # Quantization: IQ2_M (quant created by bartowski)
+ # Quality: Surprisingly usable
+ # ----------------------------------------------------
+ #
+ # Vision Notes:
+ # Some users may need to raise the context ("num_ctx") value
+ # to at least ~12K-19K.
+ # -----------------------------------------------------------
+ #
+ # Created by:
+ # EnlistedGhost (aka Jon Zaretsky)
+ # --------------------------------
+ #
+ # Goal:
+ # To provide the FIRST actually functional and usable
+ # GGUF version of Mistral's Pixtral-12B for direct use
+ # with Ollama! Currently, there are NO other usable or
+ # working versions of this model...
+ # ---------------------------------------------------
+ #
+ # Big/Giant/Huge Thank You:
+ # (ggml-org, bartowski, and the Ollama team)
+ # ggml-org: working mmproj-pixtral vision projector!
+ # bartowski: working I-Matrix quants that can be paired with the ggml-org vision projector!
+ # Ollama team: because without them, this wouldn't be possible in the first place!
+ # ------------------------------------------------------------------------------------
+ #
+ # Import our GGUF quant files:
+ # (Assuming: a Linux operating system)
+ # (Assuming: the downloaded files are stored in the "Downloads" directory)
+ FROM ~/Downloads/mmproj-Pixtral-12b-f16.gguf
+ FROM ~/Downloads/Pixtral-12B-IQ2_M.gguf
+ # ------------------------------------------------------------------------
+ #
+ # Set the default system message/prompt:
+ SYSTEM """
+ #
+ # !!!-WARNING-!!!
+ # (Do not modify, for the "recommended" configuration and behavior)
+ #
+ # !!!-OPTIONAL-!!!
+ # Pixtral-12B does NOT ship with a system prompt by default; however, you can
+ # add one in this section of the Modelfile. Be aware that a system prompt can
+ # break the link between Pixtral and its vision projector; BE CAREFUL!
+ """
+ # -------------------------------------------------------------------
+ #
+ # Define the model chat template (thank you to @rick-github for this mic-drop)
+ # Link to @rick-github's post: https://github.com/ollama/ollama/issues/6748#issuecomment-3368146231
+ TEMPLATE """[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""
+ #
+ # Stop parameters below (required for proper assistant-->user multi-turn chat)
+ PARAMETER stop [INST]
+ PARAMETER stop [/INST]
+ #
+ # Enjoy Pixtral-12B-GGUF for the ppl!
+ # Erm, or at least for Ollama users...
+ # <3 (^.^) <3
+ #
+ # Notice: Please read "Instructions.md" on HuggingFace or the Ollama website
+ # for a how-to guide on using this Modelfile!
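A minimal sketch of building and running a model from this Modelfile, assuming the GGUF files are in `~/Downloads` as the Modelfile expects; the tag `pixtral-12b-iq2_m` is just an example name, not something this repo defines (for the IQ3_M variant, swap the tag and Modelfile name accordingly):

```shell
# Hypothetical tag and local paths; adjust to wherever you saved the files.
MODEL_TAG="pixtral-12b-iq2_m"
MODELFILE="Modelfile-Pixtral-12B-IQ2_M.md"
if command -v ollama >/dev/null 2>&1; then
  # Register the model with Ollama from the Modelfile above.
  ollama create "$MODEL_TAG" -f "$MODELFILE"
  # Per the Vision Notes, the context window can be raised by adding a line
  # like `PARAMETER num_ctx 16384` to the Modelfile before running `create`.
  ollama run "$MODEL_TAG" "Describe the attached image."
else
  echo "ollama not found; install it first (https://ollama.com)."
fi
```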
Modelfile-Pixtral-12B-IQ3_M.md ADDED
@@ -0,0 +1,64 @@
+ # Pixtral-12B-GGUF Modelfile (IQ3_M)
+ # ----------------------------------
+ #
+ # Tested with: Ollama v0.11.x --> v0.12.9 (latest)
+ # Quantization: IQ3_M (quant created by bartowski)
+ # Quality: Surprisingly usable
+ # ----------------------------------------------------
+ #
+ # Vision Notes:
+ # Some users may need to raise the context ("num_ctx") value
+ # to at least ~12K-19K.
+ # -----------------------------------------------------------
+ #
+ # Created by:
+ # EnlistedGhost (aka Jon Zaretsky)
+ # --------------------------------
+ #
+ # Goal:
+ # To provide the FIRST actually functional and usable
+ # GGUF version of Mistral's Pixtral-12B for direct use
+ # with Ollama! Currently, there are NO other usable or
+ # working versions of this model...
+ # ---------------------------------------------------
+ #
+ # Big/Giant/Huge Thank You:
+ # (ggml-org, bartowski, and the Ollama team)
+ # ggml-org: working mmproj-pixtral vision projector!
+ # bartowski: working I-Matrix quants that can be paired with the ggml-org vision projector!
+ # Ollama team: because without them, this wouldn't be possible in the first place!
+ # ------------------------------------------------------------------------------------
+ #
+ # Import our GGUF quant files:
+ # (Assuming: a Linux operating system)
+ # (Assuming: the downloaded files are stored in the "Downloads" directory)
+ FROM ~/Downloads/mmproj-Pixtral-12b-f16.gguf
+ FROM ~/Downloads/Pixtral-12B-IQ3_M.gguf
+ # ------------------------------------------------------------------------
+ #
+ # Set the default system message/prompt:
+ SYSTEM """
+ #
+ # !!!-WARNING-!!!
+ # (Do not modify, for the "recommended" configuration and behavior)
+ #
+ # !!!-OPTIONAL-!!!
+ # Pixtral-12B does NOT ship with a system prompt by default; however, you can
+ # add one in this section of the Modelfile. Be aware that a system prompt can
+ # break the link between Pixtral and its vision projector; BE CAREFUL!
+ """
+ # -------------------------------------------------------------------
+ #
+ # Define the model chat template (thank you to @rick-github for this mic-drop)
+ # Link to @rick-github's post: https://github.com/ollama/ollama/issues/6748#issuecomment-3368146231
+ TEMPLATE """[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""
+ #
+ # Stop parameters below (required for proper assistant-->user multi-turn chat)
+ PARAMETER stop [INST]
+ PARAMETER stop [/INST]
+ #
+ # Enjoy Pixtral-12B-GGUF for the ppl!
+ # Erm, or at least for Ollama users...
+ # <3 (^.^) <3
+ #
+ # Notice: Please read "Instructions.md" on HuggingFace or the Ollama website
+ # for a how-to guide on using this Modelfile!
Pixtral-12B-IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:729751a002a9e9722095f77f6c40b32fc99fc917f9a0d768639b8f6f08275a98
+ size 5722231520
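Note that the three lines above are a Git LFS pointer file, not the model weights themselves: they record the SHA-256 and byte size of the real `.gguf`. A quick sketch of reading those fields (pointer content copied from this commit) so a downloaded file can be checked against them:

```shell
# LFS pointer content as committed above.
pointer='version https://git-lfs.github.com/spec/v1
oid sha256:729751a002a9e9722095f77f6c40b32fc99fc917f9a0d768639b8f6f08275a98
size 5722231520'
# Extract the expected checksum and size.
oid=$(printf '%s\n' "$pointer" | awk '/^oid/  { sub("sha256:", "", $2); print $2 }')
size=$(printf '%s\n' "$pointer" | awk '/^size/ { print $2 }')
echo "expected sha256: $oid"
echo "expected size:   $size bytes (~$((size / 1073741824)) GiB)"
# After downloading the real file, verify it matches:
#   sha256sum Pixtral-12B-IQ3_M.gguf   # should print the oid above
```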