# LFM2-2.6B-Exp-GGUF
LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-2.6B-Exp
## How to run LFM2
Example usage with llama.cpp:

```shell
llama-cli -hf LiquidAI/LFM2-2.6B-Exp-GGUF
```
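Beyond the interactive CLI, llama.cpp also ships `llama-server`, which exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming a recent llama.cpp build with `-hf` support and network access so the GGUF can be fetched from Hugging Face on first run (the port is an arbitrary choice):

```shell
# Serve the model over an OpenAI-compatible HTTP API.
# First launch downloads and caches the GGUF from the Hugging Face repo.
llama-server -hf LiquidAI/LFM2-2.6B-Exp-GGUF --port 8080

# From another terminal, send a chat completion request:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

This is convenient when several local applications need to share one loaded copy of the model instead of each spawning its own process.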
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
## Model tree

Base model: [LiquidAI/LFM2-2.6B-Exp](https://huggingface.co/LiquidAI/LFM2-2.6B-Exp)