ComfyUI workflow PNG examples (from Reddit)


Learn how to use img2img in ComfyUI, a tool for generating images from text. I dump the metadata for a PNG I really like with magick identify -verbose (a small Python equivalent is sketched at the end of this section). I created my first workflow for ComfyUI and decided to share it, since I found it quite helpful. This topic aims to answer what I believe would be the first questions an A1111 user might have about Comfy. For example, to split a video into frames: ffmpeg -i my-cool-video.mp4 -vf fps=10/1 frame%03d.png

Hello, I'm curious whether the feature of reading workflows from images is related to the workspace itself. Here are two workflows for reference; see GitHub. There is a latent workflow and a pixel-space ESRGAN workflow in the examples. Still, great on OP's part for sharing the workflow. The question: in ComfyUI, how do you persist your random / wildcard / generated prompt for your images, so that you can understand the specifics of the true prompt that created the image? In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference.

Hi. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. all in one workflow would be awesome. Unfortunately, Reddit makes it really, really hard to download a PNG; it all gets converted to WebP. Flux.1 Schnell overview: cutting-edge performance in image generation with top-notch prompt following, visual quality, image detail, and output diversity. Learn how to create art, fix hi-res, and use inpainting and LoRAs. A simple workflow of Flux AI on ComfyUI, a web-based GUI for AI models. The example pictures do load a workflow, but they don't have a label or text indicating which version they are. I have also experienced ComfyUI losing individual cable connections for no comprehensible reason, or nodes simply not working.

Release: AP Workflow 8.0. You can load this image in ComfyUI to get the full workflow. I would like to include those images in a ComfyUI workflow and experiment with different backgrounds: mist, light rays, abstract colourful stuff behind and in front of the product subject. This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. All LoRA flavours (Lycoris, LoHa, LoKr, LoCon, etc.) are used this way. That will give you a Save (API Format) option on the main menu. From the look of it, you don't have consistent results? (I did with my minimal example.) Just as an experiment, drag and drop one of the PNG files you have output into ComfyUI and see what happens. Flux.1 is a suite of generative image models by Black Forest Labs; ComfyUI is a user-friendly interface for text-to-image generation. I've switched to ComfyUI from A1111 and I don't think I will be going back. You can also just load an image on the left side of the ControlNet section and use it that way (edit: if you use the link above, you'll need to replace the ControlNet inpaint example). Or I don't understand Comfy well enough.
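ComfyUI embeds the workflow in the PNG's text chunks, which is what magick identify -verbose surfaces. Here is a minimal sketch of the same idea in Python, assuming Pillow is installed and assuming the "prompt"/"workflow" chunk names that ComfyUI typically writes; the filename is just an example:

    import json
    from PIL import Image

    def dump_comfy_metadata(path: str) -> None:
        # PNG tEXt/iTXt chunks show up in img.info as plain strings.
        img = Image.open(path)
        for key in ("workflow", "prompt"):   # chunk names ComfyUI typically uses
            raw = img.info.get(key)
            if raw is None:
                print(f"{key}: not present (stripped, or not a ComfyUI PNG)")
                continue
            graph = json.loads(raw)
            print(f"{key}: {len(graph)} top-level entries")
            # Write it back out so it can be inspected or re-loaded in ComfyUI.
            with open(f"{path}.{key}.json", "w", encoding="utf-8") as f:
                json.dump(graph, f, indent=2)

    dump_comfy_metadata("ComfyUI_01556_.png")  # example filename

If both keys come back as "not present", the image has most likely been re-encoded (for example to WebP) and the workflow is gone.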
So, I added reverse image search that queries a workflow catalog to find workflows that produce similar-looking results (a rough sketch of the matching idea appears at the end of this section). With SDXL 0.9 I was using a ComfyUI workflow shared here where the refiner was always an improvement over the base. So, here is a barebones workflow that has some options for exploring the different face models. Only the LCM Sampler extension is needed, as shown in this video. The ComfyUI workflow is just a bit easier to drag and drop and get going right away. You do only face, perfect.

Export the adjusted Z-depth as a PNG sequence for IPAdapter and ControlNet. For example, I had very good results using Resolve and multiple layers that were AI-generated, and did the rest in standard VFX, so to speak. Sometimes composite mode will fail on some images, such as the ComfyUI example image; the cause is still under investigation. Credit & thanks. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting. I am constantly changing and refining my workflows on a case-by-case basis. The problem is that the seed in the filename remains the same: it seems to be taking the initial one, not the current one that's either randomly generated again or incremented/decremented.

The background to the question: this is my ComfyUI week. Look for the example that uses ControlNet lineart. Per comfyanon's advice, I managed an img2img workflow using "Split Sigmas". The workflow is dead simple: model dreamshaper_7, positive prompt "sexy ginger heroine in leather armor, anime", negative prompt "ugly", sampler euler, 20 steps, CFG 8, seed 674367638536724. That's it.

Paint inside your image and change parts of it to suit your desired result. This ComfyUI workflow allows us to create hidden faces. Here is one I've been working on for using ControlNet, combining depth, blurred HED and noise as a second pass; it has been producing some pretty nice variations of the originally generated images. Insert the new image into the workflow again, inpaint something else, and rinse and repeat until you lose interest. Here's an example of pushing that idea even further and rendering directly to 3440x1440.

First of all, sorry if this has been covered before; I did search and nothing came back. In the ComfyUI Manager, select "Install model", scroll down to the ControlNet models, and download the second ControlNet tile model (the description specifically says you need it for tile upscaling). Learn where to get clip_l.safetensors and the other files needed to run Flux diffusion models in ComfyUI, a user-friendly interface for image generation. Unfortunately the workflow is stripped from the PNG too. If the image was generated in ComfyUI, the Civitai image page should have a "Workflow: xx Nodes" box. See also "The Gory Details of Finetuning SDXL for 30M".
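A catalog lookup like that can be roughly approximated with perceptual hashing: hash every example render in the catalog once, then compare a query image against those hashes. This is only a sketch of the idea, assuming the third-party imagehash package and a flat folder of catalog renders; it is not the search the poster actually built:

    from pathlib import Path
    from PIL import Image
    import imagehash  # third-party: pip install imagehash

    def build_index(catalog_dir: str) -> dict:
        # Map perceptual hash -> path of the workflow's example render.
        return {imagehash.phash(Image.open(p)): p
                for p in Path(catalog_dir).glob("*.png")}

    def find_similar(query_path: str, index: dict, max_distance: int = 10):
        # Smaller Hamming distance = more visually similar.
        q = imagehash.phash(Image.open(query_path))
        hits = [(q - h, p) for h, p in index.items() if q - h <= max_distance]
        return sorted(hits)

    index = build_index("catalog_renders")   # hypothetical folder of example renders
    for dist, path in find_similar("query.png", index):
        print(dist, path)

Perceptual hashes only catch near-duplicates and close compositions; a production search would add an embedding model, but the lookup structure stays the same.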
Custom nodes will usually be in the ComfyUI/custom_nodes/ folder. You will need to launch ComfyUI with this option each time, so modify your .bat file or launch script. It seems that this feature is only implemented for PNG. Also, if this is new and exciting to you, see the img2img examples. It works for SD 1.5 and SDXL, but I still think there is more that can be done in terms of detail.

Just wanted to share that I have updated the comfy_api_simplified package; it can now be used to send images, run workflows and receive images from the running ComfyUI server (a raw-HTTP sketch of the same idea appears at the end of this section). Apparently the dev uploaded some version with trimmed data. Generally speaking, for workflows seen on GitHub, loading a PNG to see its workflow is a lifesaver for starting to understand the workflow GUI, but it's not nearly enough. The problem I'm having is with Reddit. Learn how to use SDXL Turbo, a model that can generate consistent images in a single step, with ComfyUI, a GUI for SDXL. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the Preliminary, Base and Refiner setups. None of the Reddit images I find work, as they all seem to be JPG or WebP. T2I-Adapters are much, much more efficient than ControlNets, so I highly recommend them. Thank you, I think it might be Reddit. One reported ControlNet failure ends with a traceback around:

    down_block_res_samples, mid_block_res_sample = self.controlnet(
    File "D:\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site..."

I want a ComfyUI workflow that's compatible with SDXL, with base model, refiner model, hi-res fix, and one LoRA all in one go. I just uploaded the new version of my workflow. And the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. However, you can also run any workflow online: the GPUs are abstracted so you don't have to rent any GPU manually, and since the site is in beta right now, running workflows online is free. Unlike simply running ComfyUI on some arbitrary cloud GPU, our cloud sets everything up automatically so there are no missing files or custom nodes.

Is there a custom node or a way to replicate the A1111 Ultimate Upscale extension in ComfyUI? It is a simple workflow of Flux AI on ComfyUI. The sample prompt as a test shows a really great result. The Pixart Sigma + SDXL + PAG ComfyUI workflow is criminally underrated around these parts. Workflow included: it is a PNG file, and it imported fine into ComfyUI.

Step 2: Download this sample image. I looked into the code: when you save your workflow you are actually "downloading" the JSON file, so it goes to your default browser download folder. There is a PNG you can drag into ComfyUI to test that the nodes are working, or add them to your current workflow to try them out. I tried to leave some notes on the workflow. I've got it hooked up in an SDXL flow and I'm bruising my knuckles on SDXL.
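Under the hood, talking to a running ComfyUI server mostly means posting an API-format workflow to its HTTP endpoint. The sketch below is a bare-bones illustration of that idea using only the standard library; it is not the comfy_api_simplified API itself, the host/port and workflow_api.json filename are assumptions, and the JSON must come from ComfyUI's "Save (API Format)" option:

    import json
    import urllib.request

    COMFY_URL = "http://127.0.0.1:8188"   # assumed default host/port

    def queue_workflow(api_json_path: str) -> dict:
        # Load a workflow exported with "Save (API Format)".
        with open(api_json_path, "r", encoding="utf-8") as f:
            graph = json.load(f)
        payload = json.dumps({"prompt": graph}).encode("utf-8")
        req = urllib.request.Request(
            f"{COMFY_URL}/prompt",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)   # contains a prompt_id you can poll later

    print(queue_workflow("workflow_api.json"))

Libraries like comfy_api_simplified wrap this plus image upload and result retrieval, which is why they make a convenient layer between a bot and the server.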
Hey guys, I always love seeing a cool image online and trying to reproduce it, but trying to find the original method or workflow is troublesome, since Google's image search just shows similar-looking images. I need to run KSampler again after upscaling. My workflow lets you choose an image (or several) from the batch and upscale them on the next queue. It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app. So I load both PNGs into Photoshop as separate layers.

Ability to load prompt information from JSON and PNG files. You can load these images in ComfyUI to get the full workflow. By default, all your workflows will be saved to ... Share, discover, and run thousands of ComfyUI workflows. This was really a test of ComfyUI. I recently switched from A1111 to ComfyUI to mess around with AI image generation. Please share your tips, tricks, and workflows for using this software to create your AI art. You have to study the workflow carefully, but I think it's possible. Official list of SDXL resolutions (as defined in the SDXL paper); a commonly cited version of that list appears at the end of this section. Settings are read from a .json file; use settings-example.json as a template.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions. A transparent PNG in the original size, with only the newly inpainted part, will be generated. Img2img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. I did some experiments and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Open the file browser and upload your images and JSON files, then simply copy their links (right click -> copy). You can adapt ComfyUI workflows to show only the needed input params in the Visionatrix UI (see the docs). Learn how to use ComfyUI to create stunning images and animations with Stable Diffusion.

Since Stability AI released the official nodes for running SD3 in ComfyUI via API calls, I put together a step-by-step tutorial. You would click "workflow.png" in the file list at the top and then click Download Raw File, but alas, in this case the workflow does not load. I noticed that ComfyUI is only able to load workflows saved with the "Save" button and not with the "Save API Format" button. Good example. ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface.
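For reference, the resolution buckets usually quoted from the SDXL paper are listed below. This is the commonly circulated list rather than something taken from the workflow above, so treat it as an assumption to verify against the paper:

    # (width, height) pairs commonly cited as SDXL's trained resolutions,
    # all close to 1024x1024 in total pixel count.
    SDXL_RESOLUTIONS = [
        (1024, 1024),
        (1152, 896), (896, 1152),
        (1216, 832), (832, 1216),
        (1344, 768), (768, 1344),
        (1536, 640), (640, 1536),
    ]

    for w, h in SDXL_RESOLUTIONS:
        print(f"{w}x{h}  (aspect {w / h:.2f})")

Picking the bucket closest to your target aspect ratio generally gives cleaner SDXL base generations than arbitrary dimensions.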
It's not the point of this post and there's a lot to learn, but still, let me share my personal experience with you. Let's assume this workflow: with ControlNet, masks or a regional prompter, define where you want, for example, two characters; then you insert two Portrait Master nodes and use them to describe the characters (an example for advanced users).

Launch with python main.py --disable-metadata. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Well, I feel dumb. It didn't work out. There is the "example_workflow" PNG.

Preparation work (not in ComfyUI); a small helper sketch appears at the end of this section:
- Take a clip and remove the background (can be done with any video editor that has a rotobrush or, as in my case, with RunwayML).
- Extract the frames from the clip (in my case with ffmpeg).
- Copy the frames into the corresponding input folder (important: saved as 000XX.png).

This workflow should also help people learn about modular layouts, control systems and a bunch of modular nodes I use together to create good images. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait for most outputs. Lots of pieces to combine with other workflows.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. The only references I've been able to find mention this inpainting model being used from raw Python or Auto1111. I did edit the custom node ComfyUI-Custom-Scripts' Python file; you can find the edited code and workflow there. Learn how to use clip_l and find links to download the files. But for a base to start from, it'll work. I learned this from Sytan's workflow; I like the result.

This is the workflow I use in ComfyUI to render 4K pictures with the DreamShaper XL model. Is there any actual point to your example about the 6 different models? This seems to inherently defeat the entire purpose of the 6 models and would likely end up making the end result effectively quite random and uncontrollable, at least without extensive testing, though you could also simply train or find a model/LoRA that has a similar result more easily. I found it very helpful. For your all-in-one workflow, use the Generate tab. Thanks for the responses though; I was unaware that the metadata of the generated files contains the entire workflow.

Has anyone got a simple, basic workflow in ComfyUI for using wildcards, similar to Automatic1111? What is the simplest way to use Wildcards and Dynamic Prompts in ComfyUI? But try both at once and they lose a bit of quality. *Edit* KSampler is where the image generation takes place, and it outputs a latent image. Maybe play around with it till you get the desired results, then integrate it with ComfyUI for a "$0 budget sprite game".
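The last preparation step (getting the extracted frames into ComfyUI's input folder with zero-padded names) is easy to script. A minimal sketch, assuming the frames were already extracted to ./frames; the exact padding and target folder depend on your workflow's loader node, so treat both as assumptions:

    import shutil
    from pathlib import Path

    FRAMES_DIR = Path("frames")                       # where ffmpeg wrote frame001.png, ...
    COMFY_INPUT = Path("ComfyUI/input/clip_frames")   # hypothetical input subfolder

    def copy_frames_for_comfy() -> None:
        COMFY_INPUT.mkdir(parents=True, exist_ok=True)
        frames = sorted(FRAMES_DIR.glob("*.png"))
        for i, src in enumerate(frames, start=1):
            # Zero-padded names keep the frames in order for the loader node.
            shutil.copy(src, COMFY_INPUT / f"{i:05d}.png")
        print(f"Copied {len(frames)} frames to {COMFY_INPUT}")

    copy_frames_for_comfy()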
LoRAs are patches applied on top of the main MODEL and the CLIP model, so to use them, put them in the models/loras directory and load them from there. See the ComfyUI examples. Wish there was some #hashtag system or something. If you have previously generated images you want to upscale, you'd modify the hi-res pass to include the img2img nodes. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that as a guide.

This is more of a starter workflow which supports img2img, txt2img and a second-pass sampler; between the sample passes you can preview the latent in pixel space, mask what you want, and inpaint. I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com. The "include workflow" checkbox must be checked in the Save Image node, and image saving and post-processing need was-node-suite-comfyui to be installed. I normally dislike providing workflows, because I feel it's better to teach someone to catch a fish than to give them one.

While I'm not 100% sure what you're asking for, an easy way to re-embed a workflow from Comfy into any pic is: load the workflow, make a Load Image node go straight to a Save Image node, load up the image you want the workflow in, and just hit Queue; the newly saved image will have that workflow's metadata in it. A1111 feels bloated compared to Comfy. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L and NEG_R, which are part of SDXL's trained prompting format. With --disable-metadata, no workflow metadata will be saved in any image.

Minimum hardware requirements: 24GB VRAM, 32GB RAM. If you see a few red boxes, be sure to read the Questions section on the page. Here is the original: https:... Uhm, the image is the PNG file that you can save and drop into ComfyUI to load the workflow; an example is the \ComfyUI_01556_.png linked here. I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved). If you're completely new to LoRA training, you're probably looking for a guide to understand what each option does. That's a bit presumptuous, considering you don't know my requirements. Each time I do a step, I can see the colour changing somehow, and the quality and colour coherence of the newly generated pictures are hard to maintain. I tried with masking nodes, but the results weren't what I was expecting; for example, the original masked image of the product was still processed. ComfyUI will create a folder named after the prompt, and the filenames will look like 32347239847_001.png, 002.png and so on; a small parsing sketch follows at the end of this section.
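If you want to post-process those outputs, the seed/counter naming described above is easy to split apart. A tiny sketch, assuming the filenames follow the seed_counter.png pattern mentioned here (e.g. 32347239847_001.png); other filename_prefix settings will need a different pattern:

    import re
    from pathlib import Path

    NAME_RE = re.compile(r"^(?P<seed>\d+)_(?P<counter>\d+)\.png$")

    def scan_outputs(folder: str):
        for p in sorted(Path(folder).glob("*.png")):
            m = NAME_RE.match(p.name)
            if not m:
                continue  # some other naming scheme
            yield int(m["seed"]), int(m["counter"]), p

    # Example: list every (seed, counter) pair in one prompt's output folder.
    for seed, counter, path in scan_outputs("ComfyUI/output/my_prompt"):
        print(seed, counter, path.name)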
See the examples for loading images, converting them to latent space and sampling on them. Add any workflow to any arbitrary PNG with this simple tool: https://rebrand.ly/workflow2png (a minimal sketch of the same trick appears at the end of this section). To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. ComfyUI Workflow | OpenArt. The process of building and rebuilding my own workflows with the new things I've learned has taught me a lot.

Here are roughly 150 workflow examples of things I created with ComfyUI and AI models from Civitai; I moved my workflow host. This repository showcases various workflows and techniques using ComfyUI, a GUI tool for image and video generation. That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted.

Ability to change default values of UI settings (loaded from settings.json). This is why I used Rem as an example: to show you can "transplant" the kick to a different character using a character LoRA. Here is the workflow for ComfyUI, updated, in a folder on Google Drive with both the JSON and PNG of some of my workflows. I just really can't wait to discover the artist behind the images by analyzing the embedded metadata in the PNG (cue depressing impending-doom music). There are a bunch of useful extensions for ComfyUI that will make your life easier.

Support for fine-tuned SDXL models that don't require ... I'm facing a problem where, whenever I attempt to drag PNG/JPG files that include workflows into ComfyUI, be it examples from new plugins or unfamiliar PNG files I've never brought into ComfyUI before, I receive a notification stating that ... Do you have ComfyUI Manager? This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc.
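The workflow2png trick of stamping a workflow onto an arbitrary image can be reproduced in a few lines with Pillow: write the workflow JSON into a text chunk named the way ComfyUI expects. A rough sketch, assuming a UI-format export named workflow.json and assuming "workflow" is the chunk name your ComfyUI build reads on drag-and-drop:

    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def embed_workflow(image_path: str, workflow_path: str, out_path: str) -> None:
        with open(workflow_path, "r", encoding="utf-8") as f:
            workflow_json = f.read()
        json.loads(workflow_json)  # sanity check: must be valid JSON

        meta = PngInfo()
        meta.add_text("workflow", workflow_json)  # chunk ComfyUI looks for on drag-and-drop

        img = Image.open(image_path)
        # Note: any text chunks already in the source image are not carried over here.
        img.save(out_path, format="PNG", pnginfo=meta)

    embed_workflow("any_picture.png", "workflow.json", "picture_with_workflow.png")

Dropping the resulting PNG onto the ComfyUI canvas should then restore the graph, regardless of what the picture actually shows.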
What it's great for: once you've achieved the artwork you're looking for, it's time to delve deeper and use inpainting, where you can customize an already-created image. EDIT: for example, this workflow shows the use of the other prompt windows. See examples of ComfyUI workflows. Follow-up: it appears that the upload process strips the workflow information from the PNG files. Ability to change default paths (loaded from a paths.json file; use paths-example.json as a template). You can use any existing ComfyUI workflow with SDXL (base model, since previous workflows don't include the refiner). My seconds_total is set to 8, and the BPM I ask for in the prompt is set to 120 BPM (two beats per second), meaning I get 16 beats' worth of bars; the arithmetic is spelled out at the end of this section.

If the image was generated in ComfyUI and the metadata is intact (some users / websites remove the metadata), you can just drag the image into your ComfyUI window. Steps: 1. update your ComfyUI. Example workflows: if you want to stack LoRAs, you have to keep adding nodes. Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling makes ComfyUI flexible. The save_prefix is using the newest template setup I included in today's push. FLUX is an open-weight, guidance-distilled model developed by Black Forest Labs. Only dog, also perfect. Most workflows you see on GitHub can also be downloaded. Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, re-rendering with a second model, and so on. Sometimes it helps to introduce ControlNets; sometimes that is a bad idea, for example.
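The bar count that setup implies is just arithmetic on the two numbers mentioned above (seconds_total and BPM); here it is spelled out in a few lines of Python rather than taken from any ComfyUI node:

    seconds_total = 8
    bpm = 120

    beats_per_second = bpm / 60                       # 120 BPM -> 2 beats per second
    total_beats = seconds_total * beats_per_second    # 8 s * 2 = 16 beats
    bars_in_4_4 = total_beats / 4                     # 16 beats = 4 bars of 4/4

    print(total_beats, bars_in_4_4)                   # 16.0 4.0

So asking for 120 BPM over an 8-second clip gives four full 4/4 bars, which is why the generated loops line up neatly.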
It provides a workflow for SDXL (base + refiner). I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get the results using the user's text and image input. Do you all prefer separate workflows or one massive, all-encompassing workflow? A ComfyUI workflow with 50 nodes and 10 models? Share it with ComfyFlowApp in two steps. These are examples demonstrating how to use LoRAs. EZ way: just download this one and run it like another checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint. Click this and paste it into Comfy. Save your workflow using this format, which is different from the normal JSON workflows. [Load VAE] and [Load Lora] are not plugged in, in this config, for DreamShaper. Pretty much what it is. Infinite Zoom. Here is an example of three characters, each with its own pose, outfit, features, and expression. It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back (a plain-Python sketch of that cut/process/paste idea appears at the end of this section). Feature/version comparison: Flux.1 Pro, Flux.1 Dev, Flux.1 Schnell. No refiner. Jumping from one thing to another takes reloading or re-doing everything. Excuse one of the janky legs (I'd usually edit that in Photoshop), but the idea is to show you what I get directly out of Comfy using the deepshrink method.

We all know it is possible to load a workflow, or drag one into ComfyUI, with a PNG image. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. Oh crap. It could be the checkpoint, LoRA, or embedding that's influencing the results. The folder with the CSV files is located in the "ComfyUI\custom_nodes\ComfyUI-CSV_Loader\CSV" folder to keep everything contained. As the comment with the workflow is still not showing on my end. You may plug them in to use with 1.5 base models, and modify the latent image dimensions and upscale values as needed. In this workflow I experiment with the cfg_scale, sigma_min and steps space randomly, using the same prompt and the rest of the settings. From what I understand, it is doing only the last step of a 5-step generation, producing the same result as a low denoising threshold. Happy to share a preliminary version of my ComfyUI workflow (for SD prior to 1.5) that automates the generation of a frame featuring two characters, each controlled by its own LoRA and OpenPose.

Motion LoRAs with Latent Upscale: this workflow by Kosinkadink is a good example of Motion LoRAs in action. Hello, fellow ComfyUI users: this is my workflow for testing different methods to improve image resolution. The entire Comfy workflow is there, which you can use. I'll do you one better and send you a PNG you can directly load into Comfy. That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searg's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like. Yesterday, I came across a very interesting workflow that uses the SDXL base model, any SD 1.5 model, and the SDXL refiner model. You can save the workflow as a JSON file with the queue control panel's "Save" workflow button. I think that when you put too many things inside, it gives less attention to each of them. I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced, without success. Comfy stores your workflow (the chain of nodes that makes the image) in the PNG files it writes.
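The "cut a region out, run it through something, paste it back" idea that those image-manipulation nodes would implement looks like this in plain Pillow. This is only an illustration of the cropping and pasting mechanics; the "processing" step is a placeholder blur, not a diffusion model:

    from PIL import Image, ImageFilter

    def process_region(image_path: str, box: tuple, out_path: str) -> None:
        # box is (left, upper, right, lower) in pixels.
        img = Image.open(image_path).convert("RGB")
        region = img.crop(box)                                # cut the region out
        region = region.filter(ImageFilter.GaussianBlur(4))   # placeholder "model" step
        img.paste(region, box[:2])                            # paste it back in place
        img.save(out_path)

    process_region("input.png", (128, 128, 384, 384), "output.png")

Inside ComfyUI, the crop, the model pass (inpainting or img2img on the cropped latent) and the paste-back each become their own nodes, which is why a node-only version ends up fairly large.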
Wherever you launch ComfyUI from, "python main.py" will now need to become "python main.py --disable-metadata". These are examples demonstrating how to do img2img. How it works: download and drop any image from the examples into ComfyUI. A ComfyUI workflow for FluxDev image generation with various features, such as ControlNet, LoRA, dynamic thresholding, inpainting, and more. It takes SD 1.5 from 512x512 to 2048x2048. ComfyUI's inpainting and masking aren't perfect. AP Workflow 8.0 for ComfyUI: now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with the Fooocus inpaint model. Hello, I'm wondering if the ability to read workflows embedded in images is connected to the workspace configuration. Layer copy & paste this PNG on top of the original in your go-to image editing software. I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s. But it's reasonably clean to be used as a learning tool, which is and will always remain the main goal of this workflow. Learn how to install ComfyUI on Windows. The PNG files produced by ComfyUI contain all the workflow info. Comfy is good for set workflows but bad for iterating quickly; I guess once you draw all your workflows it is faster. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. My ComfyUI workflow was created to solve that. ComfyUI Tattoo Workflow | ComfyUI Workflow | OpenArt. That said, my go-to workflow lately has been generating a first-draft image with a Turbo model and scribble-lllite guidance via the AlekPet Painter node, then upscaling/inpainting/etc. to finalize, and it's so intuitive and fun to work with that this workflow feels a little regressive.

The denoise controls the amount of noise added to the image. So, is it possible to combine SDXL and SD 1.5 in a single workflow in ComfyUI? I have a workflow with this kind of loop, where the latest generated image is loaded, encoded to latent space, sampled with 0.5 noise, decoded, then saved; a small file-shuffling helper for that loop is sketched after this section. Is there a way to load the workflow from an image within the workspace? In ComfyUI, go into Settings and enable the dev mode options. So, to see what workflow was used to generate a particular image, just drag and drop the image into Comfy and it will recreate it for you. I just learned Comfy, and I found that if I just upscale even 4x, it won't do much. I just released version 4.0 of my AP Workflow for ComfyUI. I played for a few days with ComfyUI and SDXL 1.0. MoonRide workflow v1. I tried to find it, but I can't see it, because I can't find the link for the workflow. If I can figure it out, I'll upload it. ComfyUI: the most powerful and modular Stable Diffusion GUI, API and backend, with a graph/nodes interface (comfyanonymous/ComfyUI). Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes.

Some people there just post a lot of very similar workflows just to show off the picture, which makes it a bit annoying when you want to find new and interesting ways to do things in ComfyUI. The image itself was supposed to be the workflow PNG, but I heard Reddit is stripping the metadata from it. LoRA examples. I also use the ComfyUI Manager to take a look at the various custom nodes available and see what interests me. Eventually you'll find your favourites, which enhance how you want ComfyUI to work for you. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. This makes it potentially very convenient to share workflows with others. I have been using ComfyUI for quite a while now and I've got some pretty decent workflows for 1.5. I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture).
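If you drive that loop from outside ComfyUI, the only non-ComfyUI part is picking up the newest output and feeding it back in as the next input; the encode, sample-at-0.5 and decode steps still happen inside the workflow. A small helper sketch, assuming default-ish ComfyUI/output and ComfyUI/input folders (adjust the paths to your install):

    import shutil
    from pathlib import Path

    OUTPUT_DIR = Path("ComfyUI/output")
    INPUT_DIR = Path("ComfyUI/input")

    def feed_back_latest(name: str = "loop_input.png") -> Path:
        pngs = list(OUTPUT_DIR.rglob("*.png"))
        if not pngs:
            raise FileNotFoundError("no generated images found yet")
        latest = max(pngs, key=lambda p: p.stat().st_mtime)   # newest render
        target = INPUT_DIR / name
        shutil.copy(latest, target)   # a Load Image node then points at this fixed name
        return target

    print(feed_back_latest())

Run it between queue presses (or from a small watcher script) and the Load Image node always picks up the previous generation.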
So my question is: is there a way to upscale an already-existing image in Comfy, or do I need to do that in A1111? Within the folder you will find a ComfyUI_Simple_Workflow.png. ComfyUI is a powerful and modular tool to design and execute advanced Stable Diffusion pipelines using a graph/nodes interface. It has many extra nodes in order to show comparisons between the outputs of different workflows. But it separates the LoRA into another workflow (and it's not based on SDXL either). At the moment I generate my image with the detail LoRA at 512 or 786 to avoid weird generations; I then latent-upscale by 2 with nearest and run that with 0.5 denoise. However, without the reference_only ControlNet this works poorly. Still working on the whole thing, but I got the idea down.

Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The fun begins! Allo! I am beginning to work with ComfyUI, moving from A1111. I know there are so, so many workflows published to Civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and I'm hoping someone can help me by pointing me toward a resource for finding good ones. But standard A1111 inpaint works mostly the same as this ComfyUI example you provided. Unfortunately, Reddit strips the workflow info from uploaded PNG files. I want to load an image in ComfyUI and have the workflow appear, just as it does when I load a saved image from my own work. Is it the best way to install ControlNet? Because when I tried doing it manually ... It's just not intended as an upscale from the resolution used in the base-model stage. You will need ComfyUI and some custom nodes, from here and here.

A good place to start if you have no idea how any of this works is the ComfyUI Basic Tutorial VN; all the art is made with ComfyUI. ComfyUI workflow with vid2vid AnimateDiff to create alien-like girls (workflow included): an example of how machine learning can overcome all perceived odds. Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows. Learn how to download and run Flux AI with CLIP and VAE, and see the node diagram and discussion.