
Comfyui load workflow tutorial reddit


You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and I really like how it saves the workflow info within each image it generates.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Not only was I able to recover a 176x144-pixel, 20-year-old video with this; in addition, it supports the brand-new SD15 Modelscope nodes by ExponentialML, an SDXL Lightning upscaler (in addition to the AD LCM one), and a SUPIR second stage, for a total of a gorgeous 4K native output from ComfyUI!

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Upcoming tutorial - SDXL LoRA + using 1.5 LoRA with SDXL, upscaling.

ComfyScript is simple to read and write and can run remotely.

Then add in the parts for a LoRA, a ControlNet, and an IPAdapter.

The diagram doesn't load into ComfyUI, so I can't test it out.

With a 3060 (12 GB VRAM), it sometimes takes me up to 3 minutes to load SDXL, but once loaded, all other generations are faster because you don't need to load the checkpoint anymore. So the first time you start the workflow, wait a while. Of course, if it takes more than 5 minutes, it is clear that there is a problem.

I have a wide range of tutorials with both basic and advanced workflows.

Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function).

I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other WebUIs' behavior.

Heya, I've been working on a few tutorials for ComfyUI over the past couple of weeks. If you are new to ComfyUI and want a good grounding in how to use it, then this tutorial might help you out.

You can find the Flux Dev diffusion model weights here.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when one is generated). Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

Keyboard shortcuts:
- Ctrl + S: Save workflow
- Ctrl + O: Load workflow
- Ctrl + A: Select all nodes
- Alt + C: Collapse/uncollapse selected nodes
- Ctrl + M: Mute/unmute selected nodes
- Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
- Delete/Backspace: Delete selected nodes
- Ctrl + Backspace: Delete the current graph

The images look better than most 1.5-based models, with greater detail in SDXL 0.9, but it looks like I need to switch my upscaling method. The ComfyUI workflow uses the latent upscaler (nearest/exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

And above all, BE NICE.

Most Awaited Full Fine Tuning (with DreamBooth effect) Tutorial Generated Images - Full Workflow Shared In The Comments - NO Paywall This Time - Explained OneTrainer - Cumulative Experience of 16 Months Stable Diffusion

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

How to install and use Flux.1 with ComfyUI.

Looks awesome! Currently I am creating a tutorial for converting ComfyUI workflows to a production-grade multi-user backend API.

Related resources for Flux.1, such as LoRA, ControlNet, etc.

This causes my steps to take up a lot of RAM, leading to killed RAM.

Try inpaint. Try outpaint. Hmm, low quality? Try latent upscale with 2 KSamplers.
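ComfyUI can restore a graph from a generated image because it embeds the workflow as JSON in the PNG's metadata. If you want to pull that JSON out without opening the UI, a stdlib-only parser is enough. A minimal sketch, assuming the workflow is stored uncompressed in a `tEXt` chunk under the key `workflow` (how ComfyUI's PNG outputs are typically written; compressed `zTXt` chunks would need extra handling):

```python
import json
import struct

def read_workflow(path):
    """Extract the workflow JSON that ComfyUI embeds as a PNG tEXt chunk.

    Returns the parsed workflow dict, or None if no such chunk exists.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos = 8
    while pos + 8 <= len(data):
        # Each PNG chunk: 4-byte big-endian length, 4-byte type, data, CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, NUL separator, then the text itself.
            key, _, value = body.partition(b"\x00")
            if key == b"workflow":
                return json.loads(value.decode("utf-8"))
        pos += 8 + length + 4  # skip header + data + CRC
    return None
```

Dragging the image onto the ComfyUI canvas does the same job interactively; a parser like this is only useful for scripting, e.g. harvesting workflows out of a folder of old renders.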
Follow basic ComfyUI tutorials on the ComfyUI GitHub, like the basic SD 1.5 workflow (don't download workflows from YouTube videos or advanced stuff on here!). If you see a few red boxes, be sure to read the Questions section on the page.

Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

I then downloaded a custom workflow from here and initiated installing it from within ComfyUI.

You can then load or drag the following image in ComfyUI to get the workflow:

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1.

Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

Aug 2, 2024 · Flux Dev.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

Ideally nothing that's like "download this workflow and click 'install missing nodes'", because that never actually works.

In Automatic1111, for example, you load a LoRA and control the strength by simply typing something like this: <lora:Dragon_Ball_Backgrounds_XL:0.8>.

Overview of different versions of Flux.1.

The generated workflows can also be used in the web UI.

Go to the ComfyUI Manager, click "Install Custom Nodes", and search for ReActor.

Ending Workflow.

And yes, this is arcane as FK, and I have no idea why some of the workflows are shared this way.

Load Image Node.

Dec 1, 2023 · If you've ever wanted to start creating your own Stable Diffusion workflows in ComfyUI, then this is the video for you! Learning the basics is essential for any workflow creator, and I've…

Apr 30, 2024 · Follow this step-by-step guide to load, configure, and test LoRAs in ComfyUI, and unlock new creative possibilities for your projects.

Let me know if you are interested in collaboration.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

An example of the images you can generate with this workflow:

ComfyUI's API is enough for making simple apps, but hard to write by hand.
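For a simple app, the HTTP side amounts to POSTing a JSON body with the workflow under a `prompt` key. A minimal sketch with no extra libraries, assuming a default local server at 127.0.0.1:8188 (the endpoint and payload shape follow ComfyUI's bundled API example scripts):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # assumes a default local ComfyUI server

def build_payload(workflow, client_id=None):
    """Wrap an API-format workflow dict in the JSON body /prompt expects."""
    body = {"prompt": workflow}
    if client_id is not None:
        body["client_id"] = client_id  # lets you match progress events later
    return json.dumps(body).encode("utf-8")

def queue_prompt(workflow):
    """POST a workflow to a running ComfyUI server; returns the queue reply."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # includes an id for the queued job
```

Note the workflow must be the API-format JSON (saved via "Save (API Format)"), not the graph JSON embedded in images — the two formats differ, as one of the posts below points out.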
At the same time, I scratch my head trying to work out which HF models to download and where to place the 4 Stage models.

ComfyUI basics tutorial.

Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

Starting workflow.

I have a video and I want to run SD on each frame of that video.

LoRA usage is confusing in ComfyUI.

Try generating basic stuff with a prompt; read about CFG, steps, and noise.

ComfyUI-to-Python-Extension can be written by hand, but it's a bit cumbersome, can't take advantage of the cache, and can only be run locally.

This workflow can use LoRAs, ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.

It covers the following topics: Introduction to Flux.1.

Hey all, another tutorial! Hopefully this can help anyone who has trouble dealing with all the noodly goodness of ComfyUI; in it I show some good layout practices for ComfyUI and show how modular systems can be built.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP text encoder as indicated on the diagram I have here from the GitHub page. Help, pls?

You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

Tutorials-wise, there are a bunch of images that can be loaded as a workflow by ComfyUI: you download the PNG and load it. Please keep posted images SFW.
Start by loading up your standard workflow - checkpoint, KSampler, positive prompt, negative prompt, etc.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3 or not.

Workflow.

I tried the load methods from WAS-nodesuite-comfyUI and ComfyUI-N-Nodes in ComfyUI, but they seem to load all of my images into RAM at once.

You can then load or drag the following image in ComfyUI to get the workflow:

Belittling their efforts will get you banned.

Flux Hardware Requirements.

I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s.

Jul 6, 2024 · You can construct an image generation workflow by chaining different blocks (called nodes) together.

Yesterday I was just playing around with Stable Cascade and made some movie posters to test the composition and letter writing.

It downloads the custom nodes and then gets to "downloading models & other files".

All the adapters I found that load images from directories (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Once installed, download the required files and add them to the appropriate folders.
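On the RAM point: if a loader pulls every frame into memory at once, yielding one frame path at a time keeps memory flat no matter how long the sequence is. A minimal sketch in plain Python (not an existing custom node; the `image_load_cap` parameter just mirrors the loader setting of the same name mentioned in the tutorial, and the extension list is an arbitrary choice):

```python
import os

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".webp")

def iter_frames(frame_dir, image_load_cap=0):
    """Yield frame paths one at a time instead of loading the whole
    sequence into RAM. image_load_cap=0 means "every frame"; sorting
    by name keeps extracted frames in sequence order."""
    names = sorted(n for n in os.listdir(frame_dir)
                   if n.lower().endswith(IMAGE_EXTS))
    if image_load_cap > 0:
        names = names[:image_load_cap]
    for name in names:
        yield os.path.join(frame_dir, name)
```

The caller then opens and processes one image per iteration and lets it be garbage-collected before the next, which is the whole trick.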
If there is anything you would like me to cover in a ComfyUI tutorial, let me know.

Link to the workflows, prompts, and tutorials: download them here. https://youtu.be/ppE1W0-LJas - the tutorial.

A lot of people are just discovering this technology and want to show off what they created. (I will be sorting out workflows for tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.)

For the checkpoint, I suggest one that can handle cartoons / manga fairly easily.

ComfyUI breaks down a workflow into rearrangeable elements, so you can easily make your own.

Breakdown of workflow content.

There are lots of people who want to turn their workflows into fully functioning apps, and libraries like yours will help that a lot.

INITIAL COMFYUI SETUP and BASIC WORKFLOW.

The API workflows are not the same format as an image workflow; you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button you've probably…

A search of the subreddit didn't turn up any answers to my question.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.

86s/it on a 4070 with the 25-frame model, 2.75s/it with the 14-frame model.

I'm wondering if there is a good tutorial out there that starts at step 1, sets everything up, and explains the concepts (e.g. what is a latent image).

Is there a way to load each image in a video (or a batch) to save memory? My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it.

Flux Schnell is a distilled 4-step model.

I teach you how to build workflows rather than…
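For the "load the latest image in a given directory" goal, sorting by name (which the stock loaders do) isn't what you want; modification time is. A small helper sketch in plain Python, outside ComfyUI (the glob patterns are an assumption, adjust to your formats):

```python
import glob
import os

def latest_image(directory, patterns=("*.png", "*.jpg", "*.jpeg")):
    """Return the path of the most recently modified image in a
    directory, or None if nothing matches."""
    files = []
    for pattern in patterns:
        files.extend(glob.glob(os.path.join(directory, pattern)))
    # max() over modification time picks the newest file regardless of name.
    return max(files, key=os.path.getmtime) if files else None
```

The resulting path can then be fed to whatever image-load node or script step comes next.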
Try to install the ReActor node directly via the ComfyUI Manager.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

And now for part two of my "not SORA" series.

The workflow in the example is passed into the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead.

That's a bit presumptuous considering you don't know my requirements.

I see YouTubers drag images into ComfyUI and they get a full workflow, but when I do it, I can't seem to load any workflows.
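Loading the workflow from a file instead of an inline string is a one-liner with json; the override helper below is a hypothetical convenience for tweaking inputs before queueing (the node ids and input names depend entirely on your own saved workflow, so the ones in any example are placeholders):

```python
import json

def load_api_workflow(path, overrides=None):
    """Load a workflow saved with ComfyUI's "Save (API Format)" button,
    optionally overriding node inputs before it is queued.

    `overrides` maps (node_id, input_name) -> new value. The ids used
    are whatever your particular workflow file contains.
    """
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    for (node_id, input_name), value in (overrides or {}).items():
        workflow[node_id]["inputs"][input_name] = value
    return workflow
```

Keeping the JSON in a file means you can re-save the workflow from the UI without touching the script, which is the flexibility the post above is getting at.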

