Study this workflow and the notes below to understand the basics of ComfyUI, SDXL, and the base + refiner workflow. The next step for Stable Diffusion has to be fixing prompt engineering and applying multimodality.

SDXL uses a two-model setup: the base model is good at generating original images from 100% noise, and the refiner is good at adding detail during the final low-noise steps. The base has two text encoders, and the refiner has its own specialty text encoder. The usual step ratio is 8:2 or 9:1; for example, with 30 total steps the base stops at step 25 and the refiner runs from step 25 to step 30, so the final fifth of the steps is done in the refiner. This is the proper way to use the refiner (a minimal Diffusers sketch of this handoff follows below). One caveat: the refiner will not fix bad hands, it will only make them worse, because it adds detail rather than restructuring the image. The download_models.py script downloads the YOLO detection models for person, hand, and face used by the detailers; in the example, Andy Lau's face doesn't need any fix (did he??). Still, I don't want things to get to the point where people are just making models designed around looking good at displaying faces.

You'll need to download both SDXL models: SDXL-base-1.0 and SDXL-refiner-1.0 (an improved version over SDXL-refiner-0.9). If you are planning to run the SDXL refiner in AUTOMATIC1111, make sure you install the extension that supports it; SDXL for A1111 with BASE + Refiner is now supported, as Olivio Sarikas covers in his video. My advice is to have a go and try it out with ComfyUI: it is unsupported, but it was likely to be the first UI that works with SDXL when it fully drops on the 18th. For ControlNet, thibaud_xl_openpose also works.

SD 1.5 works with 4GB of VRAM even on A1111, but SDXL's two-model workflow is heavier. Since many of us are using our laptops most of the time, I wanted to share my configuration for ComfyUI: 1024x720 images, 10 base + 5 refiner steps, and carefully chosen samplers/schedulers are the best balance I could find between image size, models, and speed, so we can use SDXL without expensive, bulky desktop GPUs. With SDXL I often have the most accurate results with ancestral samplers; I've been trying to find the best settings for our servers, and there seem to be two accepted samplers that are commonly recommended. Note that as of the 2023-08-11 commit I was having very poor performance running SDXL locally in ComfyUI, to the point where it was basically unusable; the driver fix is described later in these notes.

If you run ComfyUI in Colab (e.g. refiner_v1.0_comfyui_colab, the 1024x1024 model), this snippet copies the output folder to Google Drive:

```python
import os

output_folder_name = 'comfyui_outputs'  # hypothetical name, replace with your own
source_folder_path = '/content/ComfyUI/output'  # path to the output folder in the runtime environment
destination_folder_path = f'/content/drive/MyDrive/{output_folder_name}'  # destination path in your Google Drive

# Create the destination folder in Google Drive if it doesn't exist
os.makedirs(destination_folder_path, exist_ok=True)
```

One warning: the example workflow does not save the image generated by the SDXL base model, only the refined result. The SDXL_1 workflow (right click and save as) has the full SDXL setup with refiner and the best settings I've found. In summary, it's crucial to make valid comparisons when evaluating SDXL with and without the refiner; all images in these notes were created using ComfyUI with SDXL 0.9 or 1.0, using both the base and refiner checkpoints.
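To make the handoff concrete, here is a minimal sketch using the Diffusers library rather than ComfyUI itself (the node setup follows the same logic). The denoising_end/denoising_start value of 0.8 corresponds to the roughly 8:2 ratio above; the model IDs are the official Stability AI releases, and everything else is illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16").to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # the refiner shares the second (specialty) text encoder
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16").to("cuda")

prompt = "a closeup photograph of a medieval warrior"
# The base handles the first 80% of the denoising and hands off a latent...
latent = base(prompt=prompt, num_inference_steps=30,
              denoising_end=0.8, output_type="latent").images
# ...and the refiner finishes the remaining low-noise steps.
image = refiner(prompt=prompt, num_inference_steps=30,
                denoising_start=0.8, image=latent).images[0]
image.save("refined.png")
```

In ComfyUI the same split is expressed with two KSampler (Advanced) nodes: the base sampler ends at step 24 of 30 and passes its latent to the refiner sampler, which starts at step 24.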
See this workflow for combining SDXL with a SD1.5 model. To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders. Native refiner support in AUTOMATIC1111 is down to its devs to implement; in the meantime there is a Colab Notebook for ComfyUI.

This episode opens a new topic: the other way to use Stable Diffusion, the node-based ComfyUI. Regular viewers of the channel know I have always used the WebUI for demos and explanations, but the refiner changes the math. Realistically, only people with 32GB of RAM and a 12GB graphics card are going to make anything in a reasonable timeframe if they use the refiner in A1111: a chain like Refiner > SDXL base > Refiner > RevAnimated would require switching models four times for every picture, at about 30 seconds per switch, whereas ComfyUI keeps them all loaded in one graph. Continuing with the car analogy, learning ComfyUI is a bit like learning to drive with a manual shift: there is an initial learning curve, but once mastered, you will drive with more control and also save fuel (VRAM) to boot. ComfyUI fully supports SD1.x and offers many optimizations, such as re-executing only the parts of the workflow that change between executions, and the recent LCM update brings SDXL and SSD-1B to the game.

On the models themselves: the base model seems to be tuned to start from nothing, while SDXL includes a refiner model specialized in denoising low-noise-stage images to generate higher-quality images from the base output. There are two ways to use the refiner: use the base and refiner models together to produce a refined image, or run the refiner separately as an img2img pass over finished images. Per Stability AI, the SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. Overall, though, all I can see is downsides to their OpenCLIP model being included at all. Note that for Invoke AI this two-pass step may not be required, as it is supposed to do the whole process in a single image generation.

An example prompt in this style: "photo of a male warrior, modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo, medieval armor, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation and CGSociety, intricate, high detail".

The only important resolution rule is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio (a small helper sketch follows below). Part 4 of this series will install custom nodes and build out workflows; a good place to start if you have no idea how any of this works is the ComfyUI guide. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better; sometimes I will update the workflow further, and all changes will be at the same link. Got playing with SDXL and wow, it's as good as they say.
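As a small illustration of the same-pixel-count rule, here is a hypothetical helper (not a ComfyUI node, just plain Python) that finds an SDXL-friendly width and height for a target aspect ratio, keeping the pixel count near 1024x1024 and the dimensions divisible by 64:

```python
def sdxl_resolution(aspect_ratio: float, target_pixels: int = 1024 * 1024,
                    multiple: int = 64) -> tuple[int, int]:
    """Return (width, height) near target_pixels for the given aspect ratio,
    rounded to a multiple of 64 as SDXL resolutions conventionally are."""
    height = (target_pixels / aspect_ratio) ** 0.5
    width = height * aspect_ratio

    def snap(v: float) -> int:
        return max(multiple, round(v / multiple) * multiple)

    return snap(width), snap(height)

# Square, 3:2 landscape, 9:16 portrait:
print(sdxl_resolution(1.0))      # (1024, 1024)
print(sdxl_resolution(1.5))      # (1280, 832)
print(sdxl_resolution(9 / 16))   # (768, 1344)
```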
Maybe you want to use Stable Diffusion and image-generation AI models for free, but you can't pay for online services or you don't have a strong computer; the Colab route covers that case. Yesterday I woke up to the Reddit post "Happy Reddit Leak day" by Joe Penna about the SDXL 0.9 weights; Stability AI has since officially released SDXL 0.9 and then SDXL 1.0.

A few notes on the tests below. The prompts aren't optimized or very sleek. I used the refiner model for all the tests, even though some SDXL models don't require a refiner. Also, use caution with the interactions between models. After the base completes its steps (20 in my runs), the refiner receives the latent space. For A1111 on limited VRAM, these launch flags help:

set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention

(The --no-half-vae flag exists because the SDXL VAE is unstable in half precision.) If you get VAE artifacts, re-download the latest version of the VAE and put it in your models/vae folder. In Diffusers, the equivalent setup loads both checkpoints with from_pretrained, as in the sketch after the first section.

ComfyUI is a powerful and modular GUI for Stable Diffusion, allowing users to design and execute advanced stable diffusion pipelines with a flowchart-based interface, and it fully supports the latest models including SDXL 1.0. So how do you use the base + refiner together in SDXL 1.0? I can get the base and refiner to work independently; running them together is the interesting part. The canonical layout uses two Checkpoint Loaders (sd_xl_base and sd_xl_refiner, in the 0.9 or 1.0 versions), two Samplers (base and refiner), and two Save Image nodes (one for the base output and one for the refined output); the base SDXL sampler stops at around 80% of completion and hands its latent to the refiner sampler. Download the workflow's JSON file and load it into ComfyUI to begin your SDXL journey; expect 4-6 minutes the first time while both checkpoints load. The shared workflow has many extra nodes to show comparisons between the outputs of different sub-flows, makes the refiner and upscaler passes optional, adds a switch between the SDXL Base+Refiner models and the ReVision model, switches to activate or bypass the Detailer and the Upscaler, and a (simple) visual prompt builder; to configure it, start from the orange section called Control Panel. Links and instructions in the GitHub README files have been updated accordingly. For video work, note that AnimateDiff-SDXL requires the linear (AnimateDiff-SDXL) beta_schedule. There are also tutorials comparing the Automatic1111 Web UI with ComfyUI for SDXL side by side, covering Hires fix and the Ultimate SD Upscale custom node.

In this post I will describe the base installation and all the optional assets I use. One more interoperability note: you cannot pipe a latent from one model family straight into the other. Instead you have to let it VAE-decode to an image, then VAE-encode it back to a latent image with the VAE from SDXL, and then upscale.
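Outside ComfyUI, that decode/re-encode bridge can be sketched with Diffusers' AutoencoderKL. This is a minimal illustration assuming you already have an SD1.5 latent tensor on the GPU; the model IDs are the standard Hugging Face repos, and the VAEs are kept in full precision for the stability reason mentioned above.

```python
import torch
from diffusers import AutoencoderKL

sd15_vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae").to("cuda")
sdxl_vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="vae").to("cuda")

@torch.no_grad()
def bridge_sd15_latent_to_sdxl(latent: torch.Tensor) -> torch.Tensor:
    """VAE-decode with the SD1.5 VAE, then VAE-encode with the SDXL VAE."""
    # Each VAE has its own latent scaling factor: undo one, apply the other.
    image = sd15_vae.decode(latent / sd15_vae.config.scaling_factor).sample
    reencoded = sdxl_vae.encode(image).latent_dist.sample()
    return reencoded * sdxl_vae.config.scaling_factor
```

The upscale then happens on the re-encoded latent (or the decoded image) inside the SDXL side of the workflow.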
After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, I've ended up with this basic (no upscaling) 2-stage (base + refiner) workflow. It works pretty well for me: I change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. An example prompt: "A historical painting of a battle scene with soldiers fighting on horseback, cannons firing, and smoke rising from the ground." I normally send the same text conditioning to the refiner sampler, but it can also be beneficial to send a different, more quality-related prompt to the refiner stage.

Comparing single images at 1024: 25 base steps with no refiner versus 20 base steps + 5 refiner steps; everything is better with the refiner except the lapels. Image metadata is saved (I'm running Vlad's SDNext), and all the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button on the menu (or dragged onto the window) to get the full workflow that was used to create them. Play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher). The speed of image generation is about 10 s/it (1024x1024, batch size 1); the refiner works faster, up to 1+ s/it when refining at the same 1024x1024 resolution.

How do you use the base + refiner in Automatic1111? Make the following changes: in the Stable Diffusion checkpoint dropdown, select the refiner sd_xl_refiner_1.0 for the second pass; the refiner model is now officially supported there, and SDNext supports SDXL and the SDXL refiner as well. Fine-tuned SDXL models (or just the SDXL base) may require no refiner at all: such images are generated with the base or the fine-tune alone. For instance, if you have a wildcard file, the prompt builder can draw from it, and LoRA is supported too (see the sketch after this section).

For ControlNet there is Control-Lora, an official release of ControlNet-style models along with a few other interesting ones; move them to the "ComfyUI/models/controlnet" folder. To add custom nodes, git clone them, install or update them, and restart ComfyUI completely. If you don't need LoRA support, separate seeds, CLIP controls, or hires fix, you can just grab the basic v1 workflow; AP Workflow 3 is the fuller version. This is all pretty new, so there might be better ways to do it, but this works well: you can stack LoRA and LyCORIS easily, generate the text prompt at 1024x1024, and let remacri double the resolution. In any case, you can compare the picture obtained with the correct workflow against the refiner-less one, or generate a bunch of txt2img images using the base alone and refine them afterwards with img2img.
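If you want LoRA in the Diffusers version of this 2-stage flow, loading one onto the base pipeline is short. The file path here is hypothetical; substitute your own trained or downloaded LoRA.

```python
# Assuming `base` is the StableDiffusionXLPipeline from the earlier sketch.
base.load_lora_weights("loras/my_style_lora.safetensors")  # hypothetical path
base.fuse_lora(lora_scale=0.8)  # optional: bake the LoRA in at 0.8 strength

image = base(prompt="A historical painting of a battle scene",
             num_inference_steps=30).images[0]
image.save("battle_lora.png")
```

The refiner stage is left untouched; LoRAs are usually trained against the base model only.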
This workflow also includes LoRA. For reference, I'm appending all available styles to this document. To use the refiner model on its own, navigate to the image-to-image tab within AUTOMATIC1111 (or the img2img flow in ComfyUI) and feed your pictures through the SDXL Refiner as img2img; it makes a good detail pass over existing images (a sketch follows below). Note that only the refiner has aesthetic-score conditioning. Txt2Img itself is achieved by passing an empty image to the sampler node with maximum denoise. Prior to XL, I already had some experience using tiled rendering, and a couple of the images here have also been upscaled (Refiner: SDXL Refiner 1.0). The joint swap system for the refiner now also supports img2img and upscale in a seamless way. Compare the outputs to find your preferred settings.

This repo contains examples of what is achievable with ComfyUI; see also the Comfyroll SDXL Template Workflows once you've downloaded the two main model files. To use Stable Diffusion XL 1.0, generate with the base, then in the second step use the refiner on the latents; SDXL 1.0 is now available via GitHub, and I just uploaded a new version of my workflow, updated for 1.0. One caveat on fine-tuning: SDXL has bad performance in anime, so just training the base is not enough, and not using the normal text encoders plus the specialty text encoders for the base and the refiner can also hinder results. The workflow generates images first with the base and then passes them to the refiner for further refinement; it is meticulously fine-tuned to accommodate LoRA and ControlNet inputs and demonstrates interactions with embeddings as well. It's doing a fine job, though I am not sure it is the best possible setup. SDXL also favors text at the beginning of the prompt. (Two clarifications while we're here: the offset LoRA is a LoRA for noise offset, not quite contrast, and "Hires Fix" is simply a 2-pass txt2img.)

On performance: I was having very poor performance running SDXL locally in ComfyUI, to the point where it was basically unusable, until u/rkiga recommended that I downgrade my Nvidia graphics drivers to version 531; that fixed it. SDXL, as far as I know, has more inputs than SD1.5 and people are not entirely sure about the best way to use them; the refiner model makes things even more different, because it should be used mid-generation and not after it, and A1111 was not built for such a use case. The creator of ComfyUI and I are working on releasing an officially endorsed SDXL workflow that uses far fewer steps and gives amazing results. Eventually the WebUI will add this feature, and many people will return to it because they don't want to micromanage every detail of the workflow. At that time I was only half aware of all this.

For inpainting, with Masquerade's nodes (install them using the ComfyUI node manager; the Impact Pack doesn't seem to have these nodes), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the full image. Here's where I toggle txt2img, img2img, inpainting, and "enhanced inpainting", where I blend latents together for the result. Adjust the workflow and add nodes in as needed.

For those of you who are not familiar with ComfyUI, the example workflow is: generate text2image "Picture of a futuristic Shiba Inu" with negative prompt "text, watermark" using SDXL base 0.9 (loading 0.9 into RAM takes a while), then pass the result to the refiner.
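A minimal sketch of the refiner-as-img2img idea in Diffusers, assuming an existing photo on disk. The strength parameter plays the role of denoise: keep it low so the refiner adds detail without repainting the picture.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16").to("cuda")

init = load_image("my_photo.png").resize((1024, 1024))  # hypothetical input file
# Low strength = low denoise: the refiner only sharpens and details the image.
out = refiner(prompt="a closeup photograph, highly detailed",
              image=init, strength=0.25, num_inference_steps=30).images[0]
out.save("my_photo_refined.png")
```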
Upscale model: it needs to be downloaded into "ComfyUI/models/upscale_models". The recommended one is 4x-UltraSharp (download from here), then reload ComfyUI. A sample workflow picks up pixels from SD 1.5 models for this stage. As a rule of thumb, refiners should have at most half the steps that the generation has (the creator of this workflow has the same 4GB VRAM constraint in mind; a small step-splitting sketch follows below).

A fuller setup is ComfyUI with SDXL (Base+Refiner) + ControlNet XL OpenPose + FaceDefiner (2x). ComfyUI is hard at first, but the pieces snap together. When trying to execute, if it refers to the missing file "sd_xl_refiner_0.9.safetensors", download that checkpoint and put it in place; you can also run all of this on Google Colab. (It was also thanks to this experiment that I discovered my computer had a dead RAM stick; I'm down to 16GB now.)

ComfyUI Manager is a plugin for ComfyUI that helps detect and install missing plugins: click "Manager" in ComfyUI, then "Install missing custom nodes". You will need ComfyUI and some custom nodes, available from here and here. I suspect most people coming from A1111 are accustomed to switching models frequently, and many SDXL-based models are going to come out with no refiner at all.

How to get SDXL running in ComfyUI: SDXL is a 2-step model, so add a CheckpointLoaderSimple node to load the SDXL Refiner alongside the base loader, then wire up two Samplers (base and refiner) and two Save Image nodes (one for base and one for refiner). Copy the sd_xl_base and sd_xl_refiner .safetensors files into the checkpoints folder of your ComfyUI install (the ComfyUI_windows_portable folder). SDXL generations work so much better in ComfyUI than in Automatic1111 because it supports using the Base and Refiner models together in the initial generation; for a purely base-model generation without the refiner, the built-in samplers in Comfy are probably the better option. Searge-SDXL: EVOLVED v4 packages all of this, including the SDXL 1.0 Refiner and the fp16 baked VAE, and it is instructive to put the Automatic1111 Web UI SDXL output and the ComfyUI output side by side. The loader will read images in two ways: 1) direct load from HDD, and 2) load from a folder, picking the next image as each is generated, which is handy for a Prediffusion pass. Inpainting works too, for example inpainting a cat with the v2 inpainting model. Switch nodes are available for (image, mask), (latent), and (SEGS): among multiple inputs, each selects the input designated by the selector and outputs it. So I created a small test with all of the above.
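To make the step budgeting concrete, here is a tiny hypothetical helper that splits a total step count between base and refiner while enforcing the at-most-half rule; the 0.8 default matches the 8:2 ratio recommended earlier.

```python
def split_steps(total_steps: int, base_fraction: float = 0.8) -> tuple[int, int]:
    """Split total_steps between base and refiner, giving the refiner
    at most half of the total."""
    base_steps = round(total_steps * base_fraction)
    refiner_steps = total_steps - base_steps
    if refiner_steps > total_steps // 2:
        refiner_steps = total_steps // 2
        base_steps = total_steps - refiner_steps
    return base_steps, refiner_steps

print(split_steps(30))         # (24, 6): the 8:2 ratio
print(split_steps(15, 2 / 3))  # (10, 5): the 10+5 laptop preset
```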
ComfyUI's nodes/graph/flowchart interface lets you experiment with and create complex Stable Diffusion workflows without needing to code anything. There is even an example script for training a LoRA for the SDXL refiner (#4085). My research organization received access to SDXL early, and a few things stood out. SDXL uses natural language prompts. The refiner improves hands; it does NOT remake bad hands. By the handoff point, only about 35% of the noise is left for the refiner to work on. Per the announcement, SDXL 1.0 is "built on an innovative new architecture" composed of a 3.5B parameter base model and a 6.6B parameter refiner model, making it one of the largest open image generators today. What a move forward for the industry. Those are two different models, so make sure both the base and refiner checkpoints (sd_xl_base_1.0.safetensors and sd_xl_refiner_1.0.safetensors) are downloaded and saved in the right place. I just downloaded both, and loading a model can take upward of 2 minutes; if loading seems stuck, be patient, and on low-RAM machines it will crash eventually (possibly RAM, but it doesn't take the VM down with it).

ComfyUI also has a faster startup and is better at handling VRAM. Its JSON workflows make it really easy to generate an image again with a small tweak, or just check how you generated something: load the .json, and if an image is generated at the end of the graph, everything is wired correctly. In ComfyUI the base-to-refiner chain can be accomplished with the output of one KSampler node (using the SDXL base) leading directly into the input of another KSampler node (using the refiner). A VAE selector node needs a VAE file: download the SDXL BF16 VAE from here, plus a separate VAE file for SD 1.5. It is highly recommended to use a 2x upscaler in the Refiner stage, as 4x will slow the refiner to a crawl on most systems, for no significant benefit (in my opinion).

To run a refiner pass over a folder of images in A1111: do a git pull for the latest version, go to img2img, choose batch, pick the refiner from the dropdown, and use the folder from step 1 as input and the folder from step 2 as output (a scripted version follows below). In the ComfyUI Manager, select "Install models" and scroll down to the second ControlNet tile model; it specifically says in the description that you need this for tile upscaling. The Impact Pack is a custom nodes pack that helps conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more; it also provides a feature to detect errors that occur when mixing models and CLIPs from checkpoints such as SDXL Base, SDXL Refiner, and SD1.x, and it supports fine-tuned SDXL models that don't require the Refiner. I'm also going to try to get a background-fix workflow going, because this blurry background is starting to bother me, and I'd like a better way to organize LoRAs, since I can't see thumbnails or metadata for them, especially once the folders fill up with SDXL LoRAs.
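The same batch refiner pass can be scripted outside A1111. This rough sketch reuses the `refiner` img2img pipeline from the earlier example, with hypothetical folder names standing in for the input and output folders.

```python
from pathlib import Path
from diffusers.utils import load_image

in_dir, out_dir = Path("folder1"), Path("folder2")  # hypothetical folders
out_dir.mkdir(exist_ok=True)

for path in sorted(in_dir.glob("*.png")):
    image = load_image(str(path)).resize((1024, 1024))
    refined = refiner(prompt="highly detailed, sharp focus",
                      image=image, strength=0.25,
                      num_inference_steps=30).images[0]
    refined.save(out_dir / path.name)
```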
For upscaling your images: some workflows don't include an upscaler, other workflows require one, so download the upscaler model either way. It's also possible to use the refiner in other ways, but the proper, intended way to use it is the two-step text-to-img flow described above. According to the official documentation, SDXL needs the base and refiner models used jointly for the best effect, and the tool with the broadest support for multi-model chaining is ComfyUI. The most widely used WebUI (the one-click packages are based on it) can only load one model at a time; to achieve the equivalent effect there, first run txt2img with the base model, then img2img with the refiner model. You can get the ComfyUI workflow here; if you get a 403 error, it's your Firefox settings or an extension that's messing things up. Step 1 is to install ComfyUI; after adding models, restart ComfyUI completely, and update ComfyUI regularly. You can also contribute to the fabiomb/Comfy-Workflow-sdxl repository, try the SD+XL workflow variants that can reuse previous generations, or, if you just want a working explanation of the refiner in ComfyUI, simply use someone else's 0.9 workflow. There's a gradio web UI demo for SDXL 1.0 in the pull requests, plus the SDXL Prompt Styler custom node (minor changes to output names and printed log prompt; special thanks to @WinstonWoof and @Danamir for their contributions). You're supposed to get two models as of this writing: the base model and the refiner, and SDXL 1.0 Checkpoint Models are already appearing beyond the base and refiner stages.

Some mechanics worth understanding: Img2Img works by loading an image, like the example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1. Is the refiner invoked by a keyword appended to the prompt? No; it is a separate model invoked by the workflow, not by the prompt. Also, due to the current structure of ComfyUI, it is unable to distinguish between an SDXL latent and an SD1.5 latent, so don't wire one into the other: bridge through the VAE as described earlier.

On hardware: I noticed via Task Manager that SDXL first gets loaded into system RAM and hardly uses VRAM, and I can run SDXL at 1024 on ComfyUI with a 2070/8GB more smoothly than I could run SD 1.5 elsewhere. If every prompt you process returns garbled noise, as if the sampler gets stuck on step 1 and doesn't progress any further, check your graphics drivers (see the 531 downgrade above). Right now I generate images with the SDXL Base + Refiner models on macOS 13 with the settings above; Fooocus and ComfyUI both use the v1.0 base and refiner out of the box. For a SDXL Base + SD 1.5 combination, see this example: put the SD 1.5 checkpoint files next to the SDXL ones in ComfyUI, and put the VAEs into "ComfyUI/models/vae" (SDXL and SD15). I trained a LoRA model of myself using the SDXL 1.0 base, and it does an amazing job; you can add "pixel art" to the prompt if your outputs aren't pixel art.

I've been working with connectors in 3D programs for shader creation, and I know the sheer (unnecessary) complexity of the networks you can mistakenly create for marginal gains. For me the learning curve has been tough, but I see the absolute power of node-based generation (and its efficiency).
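For completeness, here is the WebUI-style two-step (base txt2img, then refiner img2img on the decoded image, rather than a latent handoff) expressed in Diffusers; it reuses the `base` and `refiner` pipelines from the earlier sketches.

```python
# Step 1: txt2img with the base model (full denoise, decoded to an image).
prompt = "Picture of a futuristic Shiba Inu"
base_image = base(prompt=prompt, negative_prompt="text, watermark",
                  num_inference_steps=30).images[0]

# Step 2: img2img with the refiner at a low denoise, as the WebUI would do it.
final = refiner(prompt=prompt, image=base_image,
                strength=0.25, num_inference_steps=30).images[0]
final.save("shiba_refined.png")
```

The latent-handoff version from the first sketch is usually better, since the refiner was trained to continue a partially denoised latent, but the two-step form is what single-model UIs can express.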