Model - Pixtral-12B-2409 (GGUF + Ollama Patched)

[The first official MistralAI Pixtral-12B to be GGUF converted and GGUF Quantized!]

NEW UPDATES! (As of: November 27th, 2025) (Read below!)


Description:
EnlistedGhost's GGUF/quantized repository of MistralAI Pixtral-12B-2409. This release includes GGUF + Ollama-patched model files and three working projector (mmproj) files for the vision projector, offering full capabilities in Ollama or llama.cpp.

No modifications, edits, or configuration are required to use this model with Ollama; it works natively! Both vision and text work with Ollama. (^.^)

Personal Notes:
In my experience, interaction with this MistralAI release is of noticeably higher quality than other available conversions. Community versions of Pixtral-12B exist, but the weights released by MistralAI differ from those in the community version and produced significantly better results in my testing. Thank you for taking the time to read this!

Some explanation for this claim:
The currently available (publicly released on Hugging Face) Pixtral-12B GGUF/quantized releases are from mistral-community, not MistralAI themselves. So I am very excited to offer the Hugging Face community an Ollama- and llama.cpp-compatible version of the officially released MistralAI/Pixtral-12B-2409 multimodal vision model!

Public Notice: The statements and wording in this release do NOT imply, suggest, or portray in a negative manner the mistral-community and their releases! I have nothing but the highest respect for mistral-community and their quality work. (Be sure to check them out too!)

The wording I use only draws a distinction: these GGUF/quantized files are derived directly from the official MistralAI weights. (Sorry for the long disclaimer; I just have to be very clear that I am not in any way saying anything negative about the already-quantized, GGUF-converted Pixtral-12B from mistral-community.)

Happy Inferencing!
-- Jon Z (EnlistedGhost)


Model Updates (As of: November 27th, 2025)

  • Updated: All GGUF model file(s) replaced with extremely-high-quality GGUF file(s) (You won't be disappointed!)
    Final quantized and full-BF16 model files have been uploaded!

How to run this Model using Ollama

You can run this model with the "ollama run" command.
Simply copy and paste one of the commands from the table below into
your console, terminal, or PowerShell window.

| Quant Type | File Size | Command |
|------------|-----------|---------|
| Q2_K_S | 4.78 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q2_K_S` |
| Q2_K | 5.12 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q2_K` |
| Q3_K_S | 5.62 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q3_K_S` |
| Q3_K_M | 6.35 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q3_K_M` |
| Q4_K_S | 7.29 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4_K_S` |
| Q4_K_M | 7.65 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4_K_M` |
| Q4_K_XL | 7.98 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4_K_XL` |
| Q5_K_S | 8.43 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q5_K_S` |
| Q5_K_M | 8.93 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q5_K_M` |
| Q5_K_XL | 9.14 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q5_K_XL` |
| Q6_K | 10.1 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q6_K` |
| Q6_K_M | 10.4 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q6_K_M` |
| Q6_K_XL | 11.6 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q6_K_XL` |
| Q8_0 | 13.0 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q8_0` |
| F16 | 24.5 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:F16` |
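The commands above all follow one pattern: the repository path plus a quantization tag. A minimal sketch of composing the command yourself (the choice of Q4_K_M here is just an example tag, not a recommendation from the author):

```shell
# Pick a quantization tag from the table above.
TAG="Q4_K_M"

# The model reference is the Hugging Face repo path plus the tag.
MODEL="hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:${TAG}"

# Print the full command; paste it into your terminal to pull and chat.
echo "ollama run ${MODEL}"
# → ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4_K_M
```

Smaller quants (Q2/Q3) trade quality for lower VRAM use; Q8_0 and F16 need the most memory but stay closest to the original weights.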

mmproj (Vision Projector) Files

| Quant Type | File Size | Download Link |
|------------|-----------|---------------|
| Q8_0 | 465 MB | [mmproj Vision Pixtral-12B Projector:Q8_0] |
| F16 | 870 MB | [mmproj Vision Pixtral-12B Projector:F16] |
| F32 | 1.74 GB | [mmproj Vision Pixtral-12B Projector:F32] |
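Ollama bundles the projector automatically, but for llama.cpp you pass the mmproj file alongside the model. A hedged sketch, assuming a recent llama.cpp build whose multimodal CLI is named `llama-mtmd-cli` (older builds used different binary names, and the local file names below are placeholders for whatever you downloaded):

```shell
# Hypothetical local file names -- substitute the files you actually downloaded.
MODEL_GGUF="Pixtral-12B-2409-Q4_K_M.gguf"
MMPROJ_GGUF="mmproj-Pixtral-12B-2409-F16.gguf"
IMAGE="photo.png"

# Print the invocation: model weights, vision projector, an image, and a prompt.
# Flag names match current upstream llama.cpp but may differ in older releases.
echo llama-mtmd-cli -m "$MODEL_GGUF" --mmproj "$MMPROJ_GGUF" \
  --image "$IMAGE" -p "Describe this image."
```

The F32 projector preserves the most precision; Q8_0 is the smallest and is usually paired with the lower-bit model quants.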

Intended Use

Same as the original model.

Out-of-Scope Use

Same as the original model.

Bias, Risks, and Limitations

Same as the original model.

Training Details

Training sets and data are from:

  • [MistralAI]
    (This model is a direct offshoot/descendant of the above-mentioned model)

Evaluation

  • This model has NOT been evaluated in any form, scope or type of method.
  • !!! USE AT YOUR OWN RISK !!!
  • !!! NO WARRANTY IS PROVIDED OF ANY KIND !!!

Citation (Original Paper)

[MistralAI Pixtral-12B Original Paper]

Detailed Release Information

Attributions (Credits)

A big thank-you is extended to the sources credited below!
These contributions are what made this release possible!

Important Notice: This is NOT a copy/paste release.
I created unique quantized files that were then altered further
to work properly with Ollama software,
resulting in the first publicly available
Pixtral-12B-2409 model that runs natively on Ollama.

Model Card Authors and Contact

[EnlistedGhost]
