The Power of Inpainting: Enhancing and Expanding Pictures with Stable Diffusion + ControlNet

by admin
Stable Diffusion + ControlNet Launches New Inpainting Model for Picture Retouching and Expansion

When retouching pictures, have you ever needed to shift a subject's position or widen the framing? In the past, such adjustments could only be done manually. Recently, however, Stable Diffusion + ControlNet launched a new Inpaint model that not only retouches pictures but also expands them outward (outpainting). The capabilities of this new model are truly impressive!

To use this new Inpaint model, download the ControlNet Inpaint model file, control_v11p_sd15_inpaint.pth, together with control_v11p_sd15_inpaint.yaml, from HuggingFace. Place both files in the stable-diffusion-webui\extensions\sd-webui-controlnet\models folder. After restarting the Stable Diffusion WebUI, the ControlNet v1.1 block and the Inpaint model should appear, indicating that the installation is complete. Note that ControlNet must be updated to v1.1 or above to access the inpaint function.
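The download-and-place step can be scripted. A minimal sketch, assuming the files live in the lllyasviel/ControlNet-v1-1 repository on HuggingFace and that the WebUI was installed in the current directory (verify both before running):

```shell
# Fetch the two Inpaint model files and drop them where the
# sd-webui-controlnet extension looks for models.
BASE_URL="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main"
DEST="stable-diffusion-webui/extensions/sd-webui-controlnet/models"

mkdir -p "$DEST"
wget -nc "$BASE_URL/control_v11p_sd15_inpaint.pth"  -P "$DEST"
wget -nc "$BASE_URL/control_v11p_sd15_inpaint.yaml" -P "$DEST"
```

Restart the WebUI afterwards so the new model is picked up.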

In order to showcase the outpainting capabilities of Stable Diffusion, a demonstration using cropped test patterns was conducted. By removing the two sides of a complete picture and leaving only the central part, the AI was able to complete the missing sections. The final result was then compared with the original picture, highlighting the AI’s ability to expand outward.
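The cropping used to prepare that test pattern is simple to reproduce. A real workflow would use an image library such as Pillow; to stay dependency-free, this sketch treats an "image" as a plain list of pixel rows:

```python
def crop_center(image, keep_fraction=0.5):
    """Keep only the central `keep_fraction` of each row, discarding
    equal amounts from the left and right edges -- the same cropping
    used to prepare the outpainting test pattern."""
    width = len(image[0])
    keep = int(width * keep_fraction)
    left = (width - keep) // 2
    return [row[left:left + keep] for row in image]

# A tiny 2x8 "image": each pixel is just a number here.
image = [[0, 1, 2, 3, 4, 5, 6, 7],
         [8, 9, 10, 11, 12, 13, 14, 15]]
cropped = crop_center(image, keep_fraction=0.5)
# Only the central columns survive; the AI is then asked to repaint
# the discarded sides, which can be compared against the original.
```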

To refine the results, use the built-in Interrogate CLIP or Interrogate DeepBooru feature in the Stable Diffusion img2img tab. These deduce prompt words from a picture, which can then be edited manually to achieve the desired outcome. Drag the image into the source block of the img2img tab and click the Interrogate CLIP or Interrogate DeepBooru button; after a short calculation, the inferred prompt words appear in the Positive Prompt input box.
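Interrogation can also be driven without the browser through the WebUI's optional HTTP API (started with the --api flag). The endpoint name and field names below are assumptions based on the AUTOMATIC1111 API and should be checked against your install:

```python
import base64

def build_interrogate_payload(image_path, model="clip"):
    """Build the JSON body for the WebUI /sdapi/v1/interrogate endpoint
    (endpoint and field names assumed; verify against your WebUI
    version).  `model` is "clip" for Interrogate CLIP; the DeepBooru
    interrogator goes by "deepdanbooru" in some versions."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return {"image": encoded, "model": model}

# Usage (requires the WebUI running locally with --api):
# import requests
# payload = build_interrogate_payload("source.png")
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/interrogate", json=payload)
# print(r.json()["caption"])  # the inferred prompt words
```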

With these preparations complete, move to the Stable Diffusion txt2img tab to run ControlNet Inpaint. Paste the prompt generated earlier, fill in the Negative Prompt, and choose an appropriate Sampling Method; Euler a is recommended for its faster calculation speed. Set the Sampling Steps to 20 and adjust the Width and Height as needed. Note that the proportions should differ from the original picture so there is room to outpaint.
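The same txt2img settings can be expressed as an API request body. Field names follow the AUTOMATIC1111 /sdapi/v1/txt2img endpoint (an assumption; check your WebUI version), and the sketch also rounds dimensions down to multiples of 8, which Stable Diffusion expects:

```python
def build_txt2img_settings(prompt, negative_prompt, width, height):
    """Settings from the walkthrough: Euler a sampler, 20 sampling
    steps, and an output size whose proportions differ from the source
    picture so there is room to outpaint."""
    # Stable Diffusion expects dimensions divisible by 8; round down.
    width -= width % 8
    height -= height % 8
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "sampler_name": "Euler a",  # recommended here for speed
        "steps": 20,
        "width": width,
        "height": height,
    }
```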

The settings of ControlNet play a crucial role in the process. Users should enable ControlNet and drag the picture to the source area. The Preprocessor should be set to inpaint_only, the Model should be set to control_v11p_sd15_inpaint, and the Resize Mode should be set to Resize and Fill. The choice of Resize and Fill is vital for successful outpainting, as it ensures that the blank parts of the picture are properly filled.
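These ControlNet settings map onto one unit in the extension's API. The key names and the "alwayson_scripts" shape are assumptions based on the sd-webui-controlnet API and should be verified against your extension version:

```python
def controlnet_inpaint_unit(image_b64):
    """One ControlNet unit with the settings described above.
    resize_mode "Resize and Fill" is what makes outpainting work:
    the blank borders of the enlarged canvas get filled in rather
    than cropped away."""
    return {
        "enabled": True,
        "image": image_b64,                    # base64-encoded source picture
        "module": "inpaint_only",              # Preprocessor
        "model": "control_v11p_sd15_inpaint",  # ControlNet model
        "resize_mode": "Resize and Fill",
    }

# Attached to a txt2img request body under "alwayson_scripts" (assumed shape):
# payload["alwayson_scripts"] = {"controlnet": {"args": [controlnet_inpaint_unit(img)]}}
```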

In conclusion, Stable Diffusion + ControlNet’s new Inpaint model offers a powerful solution for retouching and expanding pictures. With its advanced features and capabilities, users can now achieve more precise and satisfying results. For further information on this topic, additional resources are available.
