Original model: https://huggingface.co/enhanceaiteam/Mystic

name: Mystic-MLX-Q8
base_model: black-forest-labs/FLUX.1-dev
license: other
pipeline_tag: text-to-image
tasks:
 - text-to-image
 - image-to-image
 - image-generation
language: en
get_started_code: uvx --from mflux mflux-generate --model exdysa/Mystic-MLX-Q8 --prompt 'Test Prompt' --base-model dev --steps 16 --guidance 7.5 --seed 10 --width 1024 --height 1024 -q 8
library_name: mlx
tags:
- flux-1.dev
- flux
- mlx
- apple
- mystic

Mystic-MLX-Q8

Mystic-MLX-Q8 is a Flux.1-dev fine-tune that reduces the number of steps required for convergence while maintaining output quality. This repo contains the MLX-converted model weights.

MLX is an array framework built on Apple's Metal graphics API, supported on Apple computers with ARM M-series processors (M1/M2/M3/M4).

Generation using uv (https://docs.astral.sh/uv/):

uvx --from mflux mflux-generate --model exdysa/Mystic-MLX-Q8 --prompt 'Test Prompt' --base-model dev --steps 16 --guidance 7.5 --seed 10 --width 1024 --height 1024 -q 8

Generation using pip:

pip install mflux
mflux-generate --model exdysa/Mystic-MLX-Q8 --prompt 'Test Prompt' --base-model dev --steps 16 --guidance 7.5 --seed 10 --width 1024 --height 1024 -q 8
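The -q 8 flag quantizes the weights to 8 bits at load time, which matters on memory-constrained Apple silicon. A rough back-of-the-envelope sketch (assuming roughly 12 billion transformer parameters for FLUX.1-dev, an approximate figure, not an exact measurement) shows the weight-memory saving:

```python
# Approximate weight-memory footprint of the FLUX.1-dev transformer
# at different precisions. 12e9 parameters is an illustrative estimate.
PARAMS = 12e9

def footprint_gb(bits_per_weight: float) -> float:
    """Weight memory in gigabytes at the given bits per parameter."""
    return PARAMS * bits_per_weight / 8 / 1e9

bf16 = footprint_gb(16)  # unquantized bfloat16
q8 = footprint_gb(8)     # with -q 8
print(f"bf16: {bf16:.0f} GB, q8: {q8:.0f} GB")  # q8 halves the weight memory
```

Activations, the text encoders, and the VAE add to this, so treat these numbers as a lower bound on peak memory, not a total.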