These are GGUF quantizations of the PaddleOCR-VL-1.5 model.

Download the latest llama.cpp to use them.

Use the best-quality quant you can run.
For the mmproj, prefer the F32 version, as it produces the best results:
F32 > BF16 > F16
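As a sketch of how these files fit together, a typical llama.cpp invocation pairs a quantized model with the mmproj file via the multimodal CLI. The file names below are placeholders; substitute the actual quant and mmproj files you downloaded from this repo.

```shell
# Run OCR on an image with llama.cpp's multimodal CLI.
# File names are hypothetical examples, not exact artifact names.
llama-mtmd-cli \
  -m PaddleOCR-VL-1.5-Q8_0.gguf \
  --mmproj mmproj-PaddleOCR-VL-1.5-F32.gguf \
  --image document.png \
  -p "Recognize the text in this image."
```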

Includes chat template fix from https://github.com/ggml-org/llama.cpp/pull/18825
