Active filters: 8-bit

Each entry lists the model name, task, parameter count, downloads, and likes.
(model name missing) • Text Generation • 120B • 3.02M downloads • 4.39k likes
(model name missing) • Text Generation • 22B • 6.68M downloads • 4.25k likes
GadflyII/GLM-4.7-Flash-NVFP4 • Text Generation • 18B • 179k downloads • 47 likes
mlx-community/Qwen3-TTS-12Hz-0.6B-CustomVoice-8bit • Text-to-Speech • 0.5B • 1.48k downloads • 9 likes
microsoft/bitnet-b1.58-2B-4T • Text Generation • 0.8B • 6.03k downloads • 1.26k likes
openai/gpt-oss-safeguard-20b • Text Generation • 22B • 15.1k downloads • 184 likes
MultiverseComputingCAI/HyperNova-60B • Text Generation • 60B • 1.53k downloads • 48 likes
mlx-community/GLM-4.7-Flash-8bit • Text Generation • 30B • 6.07k downloads • 16 likes
mlx-community/GLM-4.7-Flash-8bit-gs32 • Text Generation • 30B • 488 downloads • 5 likes
FabioSarracino/VibeVoice-Large-Q8 • Text-to-Audio • 9B • 2.72k downloads • 80 likes
GadflyII/GLM-4.7-Flash-MXFP4 • Text Generation • 18B • 534 downloads • 4 likes
ig1/Qwen3-VL-30B-A3B-Instruct-NVFP4 • Image-Text-to-Text • 18B • 2.26k downloads • 6 likes
(model name missing) • Text Generation • 177B • 5.03k downloads • 10 likes
nvidia/DeepSeek-V3.2-NVFP4 • Text Generation • 394B • 1.21k downloads • 3 likes
lmstudio-community/GLM-4.7-Flash-MLX-8bit • Text Generation • 30B • 343k downloads • 4 likes
mlx-community/Qwen3-TTS-12Hz-1.7B-VoiceDesign-8bit • Text-to-Speech • 0.8B • 945 downloads • 3 likes
ragraph-ai/stable-cypher-instruct-3b • Text Generation • 3B • 362 downloads • 31 likes
MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF • Text Generation • 2B • 153k downloads • 10 likes
tiiuae/Falcon-E-3B-Instruct • Text Generation • 0.9B • 290 downloads • 36 likes
MaziyarPanahi/Qwen3-1.7B-GGUF • Text Generation • 2B • 233k downloads • 6 likes
nvidia/Qwen3-235B-A22B-NVFP4 • Text Generation • 133B • 5.29k downloads • 13 likes
NVFP4/Qwen3-Coder-30B-A3B-Instruct-FP4 • Text Generation • 16B • 3.98k downloads • 6 likes
(model name missing) • Text Generation • 5B • 6.72k downloads • 12 likes
mlx-community/DeepSeek-OCR-8bit • Image-Text-to-Text • 1B • 1.39k downloads • 30 likes
MaziyarPanahi/NVIDIA-Nemotron-Nano-12B-v2-GGUF • Text Generation • 12B • 74.4k downloads • 2 likes
Disty0/Z-Image-Turbo-SDNQ-int8 • Text-to-Image • 1.99k downloads • 17 likes
Firworks/Devstral-Small-2-24B-Instruct-2512-nvfp4 • 14B • 2.08k downloads • 4 likes
mlx-community/GLM-4.7-8bit • Text Generation • 353B • 1.15k downloads • 4 likes
lukealonso/MiniMax-M2.1-NVFP4 • 115B • 26.7k downloads • 18 likes
Tengyunw/MiniMax-M2.1-NVFP4 • Text Generation • 115B • 203 downloads • 7 likes