Discover ComfyUI’s Latest Feature: Flux.1 Kontext [dev] for Advanced AI Image Editing
ComfyUI, the leading node-based GUI for Stable Diffusion, continues to revolutionize AI-powered content creation with its latest update: full support for Flux.1 Kontext [dev]. Released in June 2025, this open-weight model empowers users to generate, edit, and refine images with unprecedented control and creativity. Here’s why this update is a game-changer for artists,…
-
Node Help Menu
The built-in node help menu is now available to everyone. Just update your ComfyUI to the latest version and make sure the frontend is at least v1.22.2. The node docs are still not perfect. If you find any errors or want to help improve them, you can submit a PR to this repo: https://github.com/Comfy-Org/embedded-docs @ComfyUIWiki…
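Since the help menu depends on a minimum frontend version, here is a minimal sketch for checking it, assuming the frontend ships as the comfyui-frontend-package pip package (as recent ComfyUI releases do); the parsing is deliberately simple and may trip on pre-release version strings:

```python
# Minimal sketch: verify the installed ComfyUI frontend meets the
# v1.22.2 minimum required for the built-in node help menu.
# Assumes the frontend is installed as "comfyui-frontend-package".
from importlib.metadata import version, PackageNotFoundError

MIN_FRONTEND = (1, 22, 2)

try:
    raw = version("comfyui-frontend-package")
    installed = tuple(int(part) for part in raw.split(".")[:3])
    if installed >= MIN_FRONTEND:
        print(f"Frontend {raw} is new enough for the node help menu.")
    else:
        print(f"Frontend {raw} is too old; upgrade with:")
        print("  pip install --upgrade comfyui-frontend-package")
except PackageNotFoundError:
    print("Frontend package not found; update ComfyUI itself first.")
```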
-
HunyuanVideo
HunyuanVideo from @TXhunyuan, the groundbreaking 13B open-source video model, is now natively supported in ComfyUI! It ships with a unified image and video generation architecture. Check out some of the amazing examples! HunyuanVideo + ComfyUI Capabilities: 1. Unified Image & Video Generation: a “Dual-stream to Single-stream” Transformer seamlessly integrates text and visuals, improving motion coherence,…
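To make the “dual-stream to single-stream” idea concrete, here is a deliberately simplified toy sketch, not HunyuanVideo’s actual implementation: text and visual tokens first pass through separate per-modality transformer blocks, then are concatenated and processed jointly.

```python
# Toy sketch of a dual-stream -> single-stream transformer layout.
# All dimensions and block counts are illustrative placeholders.
import torch
import torch.nn as nn

class DualToSingleStream(nn.Module):
    def __init__(self, dim=64, n_dual=2, n_single=2, heads=4):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.text_blocks = nn.ModuleList(make() for _ in range(n_dual))
        self.visual_blocks = nn.ModuleList(make() for _ in range(n_dual))
        self.single_blocks = nn.ModuleList(make() for _ in range(n_single))

    def forward(self, text_tokens, visual_tokens):
        # Dual-stream phase: each modality attends only to itself.
        for t_blk, v_blk in zip(self.text_blocks, self.visual_blocks):
            text_tokens = t_blk(text_tokens)
            visual_tokens = v_blk(visual_tokens)
        # Single-stream phase: joint attention over the merged sequence.
        x = torch.cat([text_tokens, visual_tokens], dim=1)
        for blk in self.single_blocks:
            x = blk(x)
        return x

# 8 text tokens + 16 visual tokens, embedding dim 64 -> 24 fused tokens.
out = DualToSingleStream()(torch.randn(1, 8, 64), torch.randn(1, 16, 64))
print(out.shape)  # torch.Size([1, 24, 64])
```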
-
ControlNet Models for SD 3.5 Large
The ComfyUI team just added support for the Stable Diffusion 3.5 Large ControlNet models by Stability AI: Blur, Canny, and Depth, each packed with 8 billion parameters. These advanced models are available under the permissive Stability AI Community License for both commercial and non-commercial use. With these additions, users can create highly detailed and customizable images…
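For readers new to ControlNet conditioning, here is a minimal sketch of producing the kind of edge map the Canny model conditions on; the thresholds and filenames are illustrative, not values from any official preprocessor:

```python
# Minimal sketch: build a Canny edge control image with OpenCV.
import cv2

image = cv2.imread("input.png")                    # source photo (BGR uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)                  # illustrative thresholds
control = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)  # 3-channel control image
cv2.imwrite("canny_control.png", control)
```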
-
ComfyUI Desktop
Today, the Comfy dev team is excited to announce that the code for ComfyUI Desktop, previously known as V1, is now open source! The application is currently available for Windows (NVIDIA) and macOS (M series) users. About ComfyUI Desktop: the ComfyUI Desktop application brings the robust capabilities of ComfyUI into a standalone desktop experience. While still in beta, it is a significant step…
-
Flux Tools
ComfyUI now includes support for three new series of models from Black Forest Labs, specifically designed for FLUX.1: Redux Adapter, Fill Models, and ControlNet Models with LoRAs (Depth and Canny). These powerful additions enable users to achieve precise control over details and styles in image generation, enhancing creative possibilities. Here’s an overview of the newly supported models and…
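Of the three, the Fill models work from an image plus a mask marking the region to repaint. As a minimal sketch of preparing such a pair with Pillow (the white-means-repaint convention shown here is the common one, but check the mask node in your own workflow; the rectangle is just a placeholder region):

```python
# Minimal sketch: build the image + mask pair an inpainting/fill
# workflow expects. White mask pixels mark the area to regenerate.
from PIL import Image, ImageDraw

image = Image.open("photo.png").convert("RGB")
mask = Image.new("L", image.size, 0)            # black = keep pixels
draw = ImageDraw.Draw(mask)
draw.rectangle((100, 100, 300, 300), fill=255)  # white = region to fill
mask.save("fill_mask.png")
```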
-
What is ComfyUI?
ComfyUI is an open-source user interface designed to make running Stable Diffusion, a leading text-to-image AI model, accessible on your personal computer. While popular cloud-based services like DALL-E, Midjourney, and the official Stable Diffusion demo simplify AI image generation through the internet, ComfyUI offers a compelling alternative. By running Stable Diffusion locally, you gain unparalleled flexibility,…
-
Run Mochi in ComfyUI
ComfyUI now has optimized support for Genmo’s latest model, Mochi! This integration brings state-of-the-art video generation capabilities to the ComfyUI community, even if you’re working with consumer-grade GPUs. The weights and architecture for Mochi 1 (480P) are fully open and accessible, with Mochi 1 HD slated for release later this year. Exploring Mochi: Key Features…
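Since consumer GPUs are the selling point here, a quick VRAM check before queuing a generation can save a failed run. A minimal sketch with PyTorch, where the threshold is an illustrative placeholder rather than an official Mochi requirement:

```python
# Minimal sketch: report free VRAM before queuing a heavy video job.
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # bytes on the current device
    print(f"Free VRAM: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")
    if free < 20e9:  # illustrative threshold, not an official requirement
        print("Consider lower resolution, fewer frames, or quantized weights.")
else:
    print("No CUDA device detected.")
```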
-
ComfyUI-Fal-Connector
The ComfyUI-fal Connector is a tool that integrates ComfyUI with fal, allowing users to run their ComfyUI workflows directly on fal.ai. This extension enables users to take advantage of the computational resources offered by fal.ai for their workflows. Usage instructions (installation): Warning: the fal-config.ini file is not located in the root…
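The excerpt warns about where fal-config.ini lives; as a minimal sketch of reading such a file with Python’s configparser (the section and key names below are hypothetical, as is the path; consult the extension’s README for the real layout and location):

```python
# Minimal sketch: load fal credentials from an INI file.
import configparser

config = configparser.ConfigParser()
found = config.read("fal-config.ini")  # actual path per the extension README
if not found:
    raise SystemExit("fal-config.ini not found; check where the extension expects it")
fal_key = config.get("fal", "key", fallback=None)  # hypothetical section/key names
print("Credentials loaded." if fal_key else "No fal key set.")
```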
-
Introduction to ComfyUI
Welcome to this beginner-friendly guide to ComfyUI! Whether you’re new to AI art or an experienced creator, this tool can help you push the boundaries of your creativity. In this guide, we’ll explore what makes ComfyUI stand out, walk you through its core features, and provide step-by-step insights to get you started on your journey…