Mixed-precision MLX builds of Qwen/Qwen3.6-35B-A3B for Apple Silicon. Size points: 19 GB, 25 GB.
AI & ML interests
Model Quantization
Organization Card
Smaller. Smarter. Sovereign.
Making frontier models run anywhere
We publish high-quality quantized models in MLX (for Apple Silicon) and GGUF formats. Our models use a proprietary optimisation method that delivers superior quality at your target memory budget.
Browse our models or connect with us below.
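The `RAM-NN GB` suffixes in the model names below indicate a target memory budget. As a rough back-of-envelope (not the publisher's actual sizing method), a quantized model's weight footprint scales as parameters × average bits per weight ÷ 8, plus some overhead for quantization scales and layers kept at higher precision. A minimal sketch, with illustrative (assumed) average bit-widths:

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float,
                      overhead: float = 1.1) -> float:
    """Rough estimate of a quantized model's weight footprint in decimal GB.

    The overhead multiplier loosely accounts for quantization scales /
    zero-points and unquantized layers (embeddings, norms); 1.1 is an
    assumption, not a measured value.
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# For a 35B-parameter model, an average of ~4 bits/weight lands near the
# 19 GB size point, and ~5.2 bits/weight near 25 GB (illustrative only).
for bits in (4.0, 5.2):
    print(f"{bits} bits/weight -> {quantized_size_gb(35, bits):.1f} GB")
```

Mixed-precision builds hit a given budget by varying the bit-width per layer, so the "average bits per weight" here is an effective figure, not a uniform setting.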
Models (44)
baa-ai/Qwen3.6-35B-A3B-RAM-25GB-MLX
Text Generation • 35B • Updated • 1.39k • 3
baa-ai/Qwen3.6-35B-A3B-RAM-19GB-MLX
Text Generation • 35B • Updated • 520
baa-ai/Gemma-4-31B-it-RAM-29GB-MLX
31B • Updated • 284
baa-ai/Gemma-4-31B-it-RAM-24GB-MLX
31B • Updated • 191
baa-ai/Gemma-4-31B-it-RAM-4bit-MLX
Image-Text-to-Text • 6B • Updated • 234
baa-ai/GLM-5.1-RAM-270GB-MLX
744B • Updated • 4.26k • 2
baa-ai/Qwen3.5-122B-A10B-RAM-140GB-MLX
37B • Updated • 838
baa-ai/Qwen3.5-122B-A10B-RAM-100GB-MLX
28B • Updated • 372
baa-ai/Qwen3.5-122B-A10B-RAM-60GB-MLX
17B • Updated • 352
baa-ai/Qwen3.5-397B-A17B-RAM-220GB-MLX
61B • Updated • 290
Datasets (0)
None public yet