These are GGUF quantizations of the model PaddleOCR-VL-1.5.
Download the latest llama.cpp to use them.
Use the highest-quality quantization your hardware can run.
For the mmproj, prefer the F32 version, as it produces the best results: F32 > BF16 > F16.
Includes the chat template fix from https://github.com/ggml-org/llama.cpp/pull/18825
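A minimal invocation sketch using llama.cpp's multimodal CLI (`llama-mtmd-cli`); the quantization filenames below are placeholders, so substitute the files you actually downloaded:

```shell
# Run OCR on an image with a quantized model plus the F32 mmproj.
# Filenames are examples only -- use the quant you downloaded.
./llama-mtmd-cli \
  -m PaddleOCR-VL-1.5-Q8_0.gguf \
  --mmproj mmproj-PaddleOCR-VL-1.5-F32.gguf \
  --image page.png \
  -p "Extract the text from this image."
```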
Model tree for noctrex/PaddleOCR-VL-1.5-GGUF:
- Base model: baidu/ERNIE-4.5-0.3B-Paddle
- Finetuned: PaddlePaddle/PaddleOCR-VL-1.5