---
license: apache-2.0
language:
- en
- ru
base_model:
- mistral-community/pixtral-12b
pipeline_tag: image-text-to-text
new_version: EnlistedGhost/Pixtral-12B-Ollama-GGUF
datasets:
- mistralai/MM-MT-Bench
tags:
- Pixtral
- 12B
- Vision
- Conversational
- Ollama
- ggml
- gguf
- Image-Text-to-Text
- Multimodal
---
# Model - Pixtral 12B (GGUF) (Ollama Patched)
**Description:**
This is an Ollama-patched version of Pixtral-12B (GGUF) with working projector (mmproj) files.
No modifications, edits, or configuration are required to use this model with Ollama; it works natively!
Both vision and text work with Ollama. (^.^)
**IMPORTANT NOTICE:**
All quantized GGUF files are currently being replaced with high-quality versions that are not found anywhere else.
The upgrade is expected to finish uploading by: **(GMT-8) 04:00, November 12th, 2025**
*Due to several unexpected system and hardware issues, the finish time has been pushed back another 8 hours.
My sincere apologies for the inconvenience to everyone who is waiting on this. I look forward to finishing
sooner than the updated time.*
---------------------------------------------
### Model Updates (As of: 11th of November, 2025)
Recently finished updates:
- Replaced several quantized GGUF files with self-made versions, removing third-party quants in favor of self-made quants
- Updated this model card (this page)

**Currently in-progress updates:**
- Replacing all previously released quantized GGUF files with new high-quality versions
  (Please be patient - this should be completed by: **(GMT-8) 04:00, November 12th, 2025**)
## How to run this Model using Ollama
You can run this model by using the `ollama run` command.
Simply copy and paste one of the commands from the list below into
your console, terminal, or PowerShell window. A programmatic usage example follows the table.
| Quant Type | File Size | Command |
|:-----------|:----------|:--------|
| Q2_K | 4.9 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q2_K` |
| IQ2_M | 4.4 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:IQ2_M` |
| Q3_K_S | 5.6 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_S` |
| Q3_K_M | 6.2 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_M` |
| IQ3_M | 5.7 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:IQ3_M` |
| Q4_K_S | 7.2 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_S` |
| Q4_K_M | 7.6 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_M` |
| Q5_K_S | 8.6 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_S` |
| Q5_K_M | 8.8 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_M` |
| Q6_K | 10.2 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q6_K` |
| Q8_0 | 13.0 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q8_0` |
| F16 | 24.5 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:F16` |
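Beyond the interactive CLI, the same quants can be queried programmatically through Ollama's local REST API (`/api/generate`). The snippet below is a minimal sketch, assuming Ollama is running on its default port (11434), the Q4_K_M quant has already been pulled with one of the commands above, and `example.jpg` is a placeholder image path:

```python
import base64
import requests

# Hypothetical local image; replace with your own file path.
IMAGE_PATH = "example.jpg"

# Ollama's /api/generate endpoint expects images as base64-encoded strings.
with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Assumes the Ollama server is running locally on its default port (11434)
# and the Q4_K_M quant has already been pulled via `ollama run`/`ollama pull`.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_M",
        "prompt": "Describe this image in one sentence.",
        "images": [image_b64],
        "stream": False,
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```

For text-only prompts, simply omit the `images` field. When using the interactive `ollama run` commands from the table, an image can typically be passed by including its file path in the prompt (or dragging the file into the terminal).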
### Intended Use
Same as original:
- [[mistralai](https://huggingface.co/mistralai/Pixtral-12B-2409)]
### Out-of-Scope Use
Same as original:
- [[mistralai](https://huggingface.co/mistralai/Pixtral-12B-2409)]
## Bias, Risks, and Limitations
Same as original:
- [[mistralai](https://huggingface.co/mistralai/Pixtral-12B-2409)]
## Training Details
Training sets and data are from:
- [[mistralai](https://huggingface.co/mistralai/Pixtral-12B-2409)]
- [[mistral-community](https://huggingface.co/mistral-community/pixtral-12b)]

(This model is a direct offshoot/descendant of the above-mentioned models.)
## Evaluation
- This model has NOT been evaluated in any form, scope, or method.
- **!!! USE AT YOUR OWN RISK !!!**
- **!!! NO WARRANTY IS PROVIDED OF ANY KIND !!!**
## Citation (Original Paper)
[[MistralAI Pixtral-12B Original Paper](https://huggingface.co/papers/2410.07073)]
## Detailed Release Information
- **Originally Developed by:** [[mistralai](https://huggingface.co/mistralai/Pixtral-12B-2409)]
- **Further Developed by:** [[mistral-community](https://huggingface.co/mistral-community/pixtral-12b)]
- **MMPROJ (Vision) Quantized by:** [[EnlistedGhost](https://huggingface.co/EnlistedGhost)]
- **Model Quantized for GGUF by:** [[EnlistedGhost](https://huggingface.co/EnlistedGhost)]
- **Modified for Ollama by:** [[EnlistedGhost](https://huggingface.co/EnlistedGhost)]
- **Released on Huggingface by:** [[EnlistedGhost](https://huggingface.co/EnlistedGhost)]
- **Model type & format:** [Quantized/GGUF]
- **License type:** [Apache-2.0]
## Attributions (Credits)
A big thank-you is extended to the sources credited below!
Their contributions are what made this release possible!
*Important Notice: **This is NOT a copy/paste release.**
I have created unique quantized files that were then altered further
to work properly with the Ollama software.
This resulted in the first publicly available
Pixtral-12B model that natively runs on Ollama.*
- [[ggml-org](https://huggingface.co/ggml-org/pixtral-12b-GGUF)] (Resources no longer utilized)
- [[mradermacher](https://huggingface.co/mradermacher/pixtral-12b-GGUF)] (Resources no longer utilized)
- [[bartowski](https://huggingface.co/bartowski/mistral-community_pixtral-12b-GGUF)] (Resources no longer utilized)
- [[Ollama developers](https://github.com/ollama/ollama/issues/6748#issuecomment-3449009817)]
## Model Card Authors and Contact
[[EnlistedGhost](https://huggingface.co/EnlistedGhost)]