danielhanchen posted an update 3 days ago
Mistral's new Ministral 3 models can now be run & fine-tuned locally (16 GB RAM)!
The Ministral 3 models have vision support and best-in-class performance for their size.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF

🐱 Step-by-step Guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8, etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
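If you want to try the 14B Instruct GGUF directly from Python, here is a minimal sketch using the llama-cpp-python bindings together with huggingface_hub. The `*Q4_K_M.gguf` filename pattern is an assumption; pick whichever quant fits your RAM.

```python
# Minimal sketch: download one quant of the 14B Instruct GGUF and chat with it.
# Assumes `pip install llama-cpp-python huggingface_hub`; the *Q4_K_M.gguf
# pattern is an assumption -- choose the quant that fits your hardware.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Ministral-3-14B-Instruct-2512-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; assumed quant name
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what Ministral 3 is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```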

Having an issue loading the GGUF with llama.cpp:
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'mistral3'


You need to update to the latest llama.cpp version; older builds don't recognize the 'mistral3' architecture.
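If you are using the Python bindings rather than the llama.cpp CLI, a quick way to check your build is a sketch like this:

```python
# Minimal sketch, assuming the llama-cpp-python bindings rather than the llama.cpp
# CLI (for the CLI, pull the latest llama.cpp and rebuild). Upgrade the wheel with:
#   pip install -U llama-cpp-python
import llama_cpp

print("llama-cpp-python version:", llama_cpp.__version__)
# A build that is recent enough will load the Ministral 3 GGUFs without the
# "unknown model architecture: 'mistral3'" error.
```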