Black Forest Labs, a new company founded by the original creators of Stable Diffusion, has unveiled their latest model, Flux.1, which comes with day-one support for both ComfyUI and SwarmUI! The release includes two models: Flux.1-Dev, a high-quality, guidance-distilled model, and Flux.1-Schnell, a faster, step-distilled (“Turbo”) model that offers a trade-off in quality for speed.
Both models represent a significant leap in intelligence and quality, surpassing any other foundation model publicly available to date. You can explore example workflows and download the simple FP8 checkpoints for Flux.1-Schnell or Flux.1-Dev, or access the original FP16 files on BFL’s Hugging Face page.
What’s new in ComfyUI FLUX integration
For both Flux.1 models, you can either use SamplerCustomAdvanced with BasicGuider, or, if using KSampler, set CFG to 1. With the Flux.1-Dev model, you can also leverage the new FluxGuidance feature to control the distilled CFG-like value, with a recommended setting of 2 for enhanced realism or style control. These models are designed to function without real CFG, though you can still experiment with CFG using ComfyUI.
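To see why CFG 1 means "no real CFG," it helps to look at the classifier-free guidance formula itself. The toy sketch below (plain Python, not actual ComfyUI code; the vectors stand in for model predictions) shows that at a guidance scale of 1 the blended output collapses to the conditional prediction alone:

```python
def cfg_mix(uncond, cond, cfg_scale):
    """Classifier-free guidance: blend the unconditional and conditional
    predictions. At cfg_scale == 1 the unconditional term cancels out and
    the result is exactly the conditional prediction, i.e. no real CFG."""
    return [u + cfg_scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.2, 0.4]  # toy "empty prompt" prediction
cond = [0.6, 0.8]    # toy "your prompt" prediction

# cfg_scale = 1 -> output equals cond (up to float rounding),
# which is what the guidance-distilled Flux models expect.
result = cfg_mix(uncond, cond, 1.0)
```

Raising the scale above 1 re-introduces guidance pressure, which is what the Dynamic Thresholding experiments mentioned below exploit.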
The community has quickly embraced ComfyUI as an experimentation platform, exploring various techniques to maximize the potential of these models, such as using the Dynamic Thresholding custom node or the built-in FluxGuidance node to enable CFG and negative prompting. Additionally, the built-in ModelSamplingFlux node allows for controlling the Flux sigma shift, though its benefits are more limited.
This rapid and powerful experimentation capability is what initially drew many to ComfyUI, and it’s exciting to see it enable a new generation of models. Flux is also the largest open model released to date, with 12 billion parameters and an original file size of 23 gigabytes! So, how is it possible to run this on standard consumer hardware? ComfyUI supports loading model weights directly in FP8 format (12 gigabytes) and automatically detects available VRAM to optimize the loading method. Even if you have less than 12 gigabytes of VRAM, ComfyUI’s core model handler will partially offload the model weights from VRAM to system RAM, allowing the model to run on smaller GPUs, albeit with somewhat slower performance.
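The file sizes quoted above follow directly from the parameter count: each weight takes 2 bytes in FP16 and 1 byte in FP8. A quick back-of-envelope check:

```python
params = 12e9     # Flux.1 parameter count (12 billion)
gib = 1024 ** 3   # one binary gigabyte

fp16_gib = params * 2 / gib  # 2 bytes per weight -> ~22.4 GiB (~23 GB on disk)
fp8_gib = params * 1 / gib   # 1 byte per weight  -> ~11.2 GiB (~12 GB on disk)

print(f"FP16: {fp16_gib:.1f} GiB, FP8: {fp8_gib:.1f} GiB")
```

This is also why the FP8 checkpoint just barely fits on a 12 GB GPU, and why anything smaller needs partial offloading to system RAM.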
One final tip for Flux: you can merge the Flux models block-by-block within ComfyUI using the new ModelMergeFlux1 node. For instance, you can merge just the Flux.1-Dev double_blocks (MM-DiT) onto Flux.1-Schnell, resulting in a higher-quality model that still runs in just 4 steps! Save the provided image and drag it into ComfyUI to see an example workflow of this process.
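Conceptually, a block-wise merge just swaps whole groups of layers between two checkpoints, selected by key prefix. The sketch below illustrates the idea with plain dictionaries standing in for state dicts; it is not the ModelMergeFlux1 implementation, and the key names are simplified stand-ins:

```python
def merge_blocks(base, donor, prefix):
    """Return a copy of `base` where every weight whose key starts with
    `prefix` is taken from `donor` instead. This is the core idea behind
    block-wise model merging: replace whole layer groups, not a blend."""
    return {k: (donor[k] if k.startswith(prefix) else v)
            for k, v in base.items()}

# Toy state dicts standing in for Schnell (base) and Dev (donor).
schnell = {"double_blocks.0.w": 0.1, "single_blocks.0.w": 0.2}
dev     = {"double_blocks.0.w": 0.9, "single_blocks.0.w": 0.8}

# Take the double_blocks from Dev, keep everything else from Schnell.
merged = merge_blocks(schnell, dev, "double_blocks.")
```

Keeping Schnell's remaining weights is what preserves the 4-step behavior, while the Dev double_blocks contribute the quality gain.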
Regular Full Version
Files to Download for the Regular Version
If you don’t already have t5xxl_fp16.safetensors or clip_l.safetensors in your ComfyUI/models/clip/ directory, you can download them from this link. For lower memory usage, you can opt for t5xxl_fp8_e4m3fn.safetensors instead, but the FP16 version is recommended if you have more than 32GB of RAM.

The VAE can be found here and should be placed in your ComfyUI/models/vae/ folder.
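If you're unsure whether the folders mentioned above exist yet, this small sketch creates the expected layout (run it from the directory containing your ComfyUI install; unet/ is where the Flux diffusion weights themselves go, as described below):

```python
import os

# Model folders referenced in this guide, relative to the ComfyUI root.
folders = [
    "ComfyUI/models/clip",  # text encoders (t5xxl, clip_l)
    "ComfyUI/models/vae",   # the Flux VAE
    "ComfyUI/models/unet",  # the diffusion model weights
]

for folder in folders:
    os.makedirs(folder, exist_ok=True)  # create if missing, no-op otherwise
```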
Tips if You Run Out of Memory
- Use the single-file FP8 version, available below.
- Set the weight_dtype in the “Load Diffusion Model” node to FP8, which reduces memory usage by half but might slightly impact quality. An example workflow is also available to download.
Flux Dev
The Flux Dev diffusion model weights can be downloaded here. Place the flux1-dev.safetensors file in your ComfyUI/models/unet/ folder.
You can then load or drag the following image into ComfyUI to access the workflow:
Flux Schnell
Flux Schnell is a distilled 4-step model.
You can download the Flux Schnell diffusion model weights here and place the file in your ComfyUI/models/unet/ folder.
To load the workflow, simply drag the following image into ComfyUI.