Model - Pixtral 12B (GGUF) (Ollama Patched)
Description:
This is an Ollama-patched version of Pixtral-12B (GGUF) with working projector (mmproj) files.
No modifications, edits, or configuration are required to use this model with Ollama; it works natively!
Both vision and text work with Ollama. (^.^)
IMPORTANT NOTICE as of (GMT-8) 01:04, November 19th, 2025:
After extensive self-testing of the quantized GGUF files supplied in this release,
I have been pleasantly surprised to find that, so far, ALL of the files have the AI performing quite admirably, despite
Hugging Face's systems here only merging in the Q8_0 (8-bit quantized) vision projector, with no automated option to
choose a higher-quality one unless you already know how to use Ollama's "create" command.
I will be updating this release with both:
- Instructions, on this model page and in linked documentation, on how to easily "create" a customized/tailored
multi-modal model that you can run inference with using Ollama (or llama.cpp as well).
- Automated scripts for bash (Linux users) and the command line (Windows users) that will allow anyone to
download a small script file and simply run it; the script will interactively guide you through a user-friendly setup process, including merging the higher-quality vision projector options.
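Until those scripts are published, the manual "create" path looks roughly like the sketch below. This is a hedged outline, not the finished guide: the GGUF file names are placeholders for whichever quant and projector you actually downloaded, and it assumes an Ollama build that accepts both a model GGUF and an mmproj GGUF via `FROM` lines in one Modelfile (older builds may require a pre-merged file).

```shell
# Sketch only: file names below are placeholders for the quant/projector you chose.
# 1. Download a model quant and a higher-quality projector from this repo.
# 2. Write a Modelfile that points at both files:
cat > Modelfile <<'EOF'
FROM ./Pixtral-12B-Q5_K_M.gguf
FROM ./mmproj-Pixtral-12B-F16.gguf
EOF
# 3. Build a local model from the Modelfile, then chat with it:
ollama create pixtral-12b-custom -f Modelfile
ollama run pixtral-12b-custom
```

The only part that changes between setups is which two GGUF files the Modelfile references; the create/run steps stay the same.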
In addition to the above-mentioned updates:
I am currently preparing GGUF, natively Ollama-compatible, alternate versions of
Pixtral-12B. In case some of you are not aware: there are currently two other officially released versions
of Pixtral-12B, supplied directly by MistralAI themselves: the official Pixtral-12B-2409 release and
the Pixtral-12B-Base-2409 model. Unlike the release that these files were converted and quantized from, those two ARE NOT community releases.
These two other versions differ slightly in quality and capability from the originating "mistral-community" release
that this GGUF release (and MOST other releases and derivatives) is based upon. Most derivatives build on the community release because MistralAI's two "2409" releases
are not user-friendly to set up or use. Thankfully, I have managed to load them, convert them to GGUF, and am now
in the process of creating quality quantized GGUF versions of them specifically for use with Ollama/llama.cpp.
Be sure to check back occasionally, as I expect to make these more rarely seen Pixtral-12B releases from MistralAI available within the next couple of days! (Rarely seen, to clarify: GGUF versions of them are extremely uncommon to find, and even non-GGUF versions are scarce.)
Happy Inferencing!
-- Jon Z (EnlistedGhost)
Model Updates (As of: November 19th, 2025)
Recently finished updates:
- Partially Updated: ModelCard
(this page)
Currently in-progress updates:
- Update: ModelCard
(this page needs more setup guides and information...)
How to run this Model using Ollama
You can run this model with the "ollama run" command.
Simply copy and paste one of the commands from the list below into
your console, terminal, or PowerShell window.
| Quant Type | File Size | Command |
|---|---|---|
| Q2_K_S | 4.62 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q2_K_S |
| Q2_K | 4.79 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q2_K |
| IQ2_M | 4.44 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:IQ2_M |
| Q2_K_M | 4.95 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q2_K_M |
| Q2_K_L | 5.58 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q2_K_L |
| IQ3_S | 5.56 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:IQ3_S |
| Q3_K_S | 5.53 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_S |
| IQ3_M | 5.72 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:IQ3_M |
| Q3_K_M | 6.08 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_M |
| Q3_K_L | 6.56 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_L |
| Q3_K_XL | 6.72 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_XL |
| IQ4_XS | 6.8 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:IQ4_XS |
| Q4_K_S | 7.12 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_S |
| Q4_K_M | 7.48 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_M |
| Q4_K_XL | 8.27 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_XL |
| Q5_K_S | 8.52 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_S |
| Q5_K_M | 8.73 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_M |
| Q5_K_XL | 9.52 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_XL |
| Q6_K | 10.1 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q6_K |
| Q6_K_M | 10.2 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q6_K_M |
| Q6_K_L | 10.8 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q6_K_L |
| Q8_0 | 13.7 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q8_0 |
| F16 | 24.5 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:F16 |
mmproj (Vision Projector) Files
| Quant Type | File Size | File |
|---|---|---|
| Q8_0 | 465 MB | [mmproj Vision Pixtral-12B Projector:Q8_0] |
| F16 | 870 MB | [mmproj Vision Pixtral-12B Projector:F16] |
| F32 | 1.74 GB | [mmproj Vision Pixtral-12B Projector:F32] |
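For llama.cpp users, the separate projector files above can be loaded directly alongside a model quant. A hedged sketch, assuming a recent llama.cpp build that ships the `llama-mtmd-cli` multimodal tool (file names are placeholders for whichever quant and projector you downloaded):

```shell
# llama.cpp loads the text model and the vision projector as separate files.
llama-mtmd-cli \
  -m ./Pixtral-12B-Q5_K_M.gguf \
  --mmproj ./mmproj-Pixtral-12B-F16.gguf \
  --image ./example.jpg \
  -p "Describe this image."
```

Older llama.cpp builds used per-model tools (e.g. `llava-cli`) instead; check the tools shipped with your build if the command above is not found.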
Intended Use
Same as original:
Out-of-Scope Use
Same as original:
Bias, Risks, and Limitations
Same as original:
Training Details
Training sets and data are from:
- [mistralai]
- [mistral-community]
(This model is a direct off-shoot/descendant of the above-mentioned models)
Evaluation
- This model has NOT been evaluated in any form, scope or type of method.
- !!! USE AT YOUR OWN RISK !!!
- !!! NO WARRANTY IS PROVIDED OF ANY KIND !!!
Citation (Original Paper)
[MistralAI Pixtral-12B Original Paper]
Detailed Release Information
- Originally Developed by: [mistralai]
- Further Developed by: [mistral-community]
- MMPROJ (Vision) Quantized by: [EnlistedGhost]
- Model Quantized for GGUF by: [EnlistedGhost]
- Modified for Ollama by: [EnlistedGhost]
- Released on Huggingface by: [EnlistedGhost]
- Model type & format: [Quantized/GGUF]
- License type: [Apache-2.0]
Attributions (Credits)
A big thank-you is extended to the below credited sources!
These contributions are what made this release possible!
Important Notice: This is NOT a copy/paste release;
I created unique quantized files that were then altered further
to work properly with the Ollama software.
This resulted in the first publicly available
Pixtral-12B model that natively runs on Ollama.
- [ggml-org] (resources no longer utilized)
- [mradermacher] (resources no longer utilized)
- [bartowski] (still using low iMatrix quant files: IQ2_M and IQ2_S)
- [Ollama developers]
Model Card Authors and Contact
Model tree for EnlistedGhost/Pixtral-12B-Ollama-GGUF
Base model
mistral-community/pixtral-12b