FAQs

Welcome to the ComfyUI FAQ section! This page answers the most common questions about installing, running, and troubleshooting ComfyUI, as well as using its advanced features. Whether you’re a beginner or an experienced user, these FAQs will help you get the most out of ComfyUI.


How do I install ComfyUI on Windows?

To install ComfyUI on Windows, download the portable standalone build from the releases page. Extract the downloaded file using 7-Zip, and then run the application. Make sure to place your Stable Diffusion checkpoints/models (the large .ckpt or .safetensors files) in the ComfyUI\models\checkpoints directory.

Can I share models between ComfyUI and another UI?

Yes, you can share models between UIs. You need to configure the search paths for models by editing the extra_model_paths.yaml file found in the ComfyUI directory of the standalone Windows build. You can use your preferred text editor to modify this file.
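As a sketch, a minimal entry pointing ComfyUI at an existing A1111-style install might look like the following (the keys follow the commented template that ships with ComfyUI; the paths are placeholders you must adjust):

```yaml
a111:
    base_path: path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
```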

How do I install ComfyUI manually on Windows or Linux?

To manually install ComfyUI, start by cloning the repository using Git. Then, place your Stable Diffusion checkpoints in the models/checkpoints directory and your VAE files in the models/vae directory. Install the required dependencies by running pip install -r requirements.txt in the ComfyUI folder.

How can I run ComfyUI on an AMD GPU with Linux?

For AMD users, install ROCm and PyTorch using the appropriate pip command. You can choose either the stable or nightly versions, depending on your performance needs. Once installed, follow the manual installation steps for ComfyUI, and you should be able to run it on your AMD GPU.
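As a hedged example, the install commands typically look like this (the ROCm version tag below is illustrative; use the exact command that pytorch.org generates for your setup):

```shell
# Stable ROCm build of PyTorch (Linux only; rocm6.0 is an example tag)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0

# Or the nightly build, which may perform better on newer cards
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.0
```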

How do I troubleshoot the “Torch not compiled with CUDA enabled” error?

If you encounter the “Torch not compiled with CUDA enabled” error, uninstall your current version of Torch by running pip uninstall torch. Then, reinstall it using the appropriate command for your NVIDIA or AMD GPU, as provided in the installation instructions.
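A typical fix looks like this (the cu121 tag is an example; pick the CUDA version matching your driver from pytorch.org):

```shell
# Remove the CPU-only Torch build, then install a GPU-enabled one
pip uninstall -y torch
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```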

Can I run ComfyUI without a GPU?

Yes, you can run ComfyUI without a GPU by using the --cpu flag, though it will be slower. ComfyUI is designed to work even on systems with low VRAM by using smart memory management.
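Assuming you are in the ComfyUI directory, CPU-only mode is just:

```shell
# Run ComfyUI entirely on the CPU (works without a GPU, but is much slower)
python main.py --cpu
```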

How do I install ComfyUI on Apple silicon Macs (M1 or M2)?

To install ComfyUI on an Apple silicon Mac, start by installing the latest PyTorch nightly as described in Apple’s “Accelerated PyTorch training on Mac” developer guide. Then follow the manual installation steps for ComfyUI, and make sure to place your models, VAE files, and LoRAs in the corresponding directories.
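Following Apple’s guide, the nightly install typically looks like this (the exact command may change over time; check pytorch.org for the current one):

```shell
# PyTorch nightly with MPS (Apple silicon GPU) support
pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```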

How do I enable high-quality previews in ComfyUI?

To enable high-quality previews, download the necessary TAESD decoder files and place them in the models/vae_approx folder. Launch ComfyUI with the --preview-method taesd flag to see higher-quality previews during your workflow.
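A sketch of the setup (the decoder file names follow the TAESD project and may change):

```shell
# Place the TAESD decoder weights (e.g. taesd_decoder.pth, taesdxl_decoder.pth)
# into models/vae_approx/, then launch with TAESD previews enabled:
python main.py --preview-method taesd
```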

How can I use TLS/SSL with ComfyUI?

To use TLS/SSL, generate a self-signed certificate and key with OpenSSL, then launch ComfyUI with the --tls-keyfile key.pem --tls-certfile cert.pem flags to enable secure HTTPS access.
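For example (the certificate subject below is a placeholder so the command runs non-interactively; a self-signed certificate will trigger browser warnings):

```shell
# Generate a throwaway self-signed certificate and key, valid ~10 years
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -subj "/CN=localhost" -keyout key.pem -out cert.pem
# Then launch ComfyUI over HTTPS with:
#   python main.py --tls-keyfile key.pem --tls-certfile cert.pem
```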

How do I switch to the latest frontend version for ComfyUI?

To switch to the latest frontend version, launch ComfyUI with the --front-end-version Comfy-Org/ComfyUI_frontend@latest flag. This gives you the most up-to-date frontend; to pin a specific version instead, replace “latest” with the desired version number.
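For example (the version number in the second command is a placeholder):

```shell
# Launch with the newest published frontend
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest

# Or pin a specific frontend release (placeholder version shown)
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@1.2.3
```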

Before you start using ComfyUI, read the ComfyUI – Getting Started guide.