GLM-OCR-GGUF

This model was converted from zai-org/GLM-OCR to GGUF using convert_hf_to_gguf.py.

To use it:

llama-server -hf ggml-org/GLM-OCR-GGUF
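Once the server is running, OCR requests can be sent to llama-server's OpenAI-compatible chat completions endpoint. The sketch below builds such a request payload with an inline base64 image; the default port 8080, the prompt text, and the placeholder image bytes are assumptions, not part of this model card:

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, prompt: str = "Extract the text in this image.") -> dict:
    # OpenAI-style chat payload for POST http://localhost:8080/v1/chat/completions
    data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": data_uri}},
                ],
            }
        ],
        "max_tokens": 512,
    }

# Placeholder bytes for illustration; in practice read a real image file.
payload = build_ocr_request(b"\x89PNG-placeholder")
print(json.dumps(payload)[:60])
```

Adjust the port and payload fields to match your llama-server configuration.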
Model size: 0.9B params
Architecture: glm4

Available quantizations: 8-bit, 16-bit


Model tree for ggml-org/GLM-OCR-GGUF

Base model: zai-org/GLM-OCR (this model is one of 9 quantized versions)