Update README.md (Small Fix)
#1 by qpqpqpqpqpqp - opened

README.md CHANGED
@@ -25,7 +25,7 @@ pipeline_tag: image-to-image
 
 ### The [workflow](https://huggingface.co/silveroxides/FLUX.2-dev-fp8_scaled/blob/main/workflow_assets/fp8_scaled_flux2_w_enhanced_prompting-workflow.json) contains the information and links needed to get started with using this model.
 
 ### This fp8_scaled model is faster than the official one released by ComfyOrg when used with the Loader in the workflow.
 
-### The custom node that loads the model in the workflow is necessary to obtain fastest inference on lower VRAM GPU.
+### The custom node that loads the model in the workflow is necessary to obtain the fastest inference on lower VRAM GPU.
 