ControlNet major update: precise image edits with prompts alone, keeping the art style unchanged, rivaling a custom-trained large model

ControlNet, the Stable Diffusion plug-in known as the "master of detail control" for AI painting, has just received a major update: with text prompts alone, you can freely modify image details while preserving the subject's main characteristics.

For example, changing a portrait's look from hair to clothing, and making the expression friendlier:

Or switching the model from sweet girl-next-door to a cool, aloof mature type, with the orientation of the body and head, and even the background, all changed:

No matter how the details are modified, the "soul" of the original image remains.

Beyond this style, it also handles anime just as well:

AI design netizen @sundyme on Twitter commented: "The results are better than I imagined! Only one reference image is needed for the transformations above, and some outputs come close to the effect of a custom-trained model."

Ahem, friends in the AI painting community, perk up: things are getting fun again.

ControlNet update: retouching that preserves the original image's style

The update in question is a new preprocessor called "reference-only".

It requires no control model: it uses a reference image directly to guide diffusion.

According to the author, the feature works much like "inpaint", but without causing the image to collapse.

(Inpaint is the partial-redraw feature in the Stable Diffusion web UI; it redraws unsatisfactory regions that have been manually masked.)

Some experienced users may know a trick: using inpaint to diffuse a new image from an existing one.

Say you have a 512×512 image of a dog and want to generate another 512×512 image of the same dog.

You can concatenate the 512×512 dog image with a 512×512 blank image into one 1024×512 image, then use inpaint to mask the blank half and diffuse a similar-looking dog into it.

Because the images are just crudely stitched together, distortion creeps in, and the result is generally unsatisfactory.
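For the curious, here is a minimal sketch of that stitching trick using Pillow; the file names and sizes are placeholders, and the stitched image plus mask would then be fed to the web UI's inpaint tab:

```python
from PIL import Image

# Load the 512x512 source image of the dog (path is a placeholder).
dog = Image.open("dog.png").resize((512, 512))

# Stitch it next to a blank 512x512 canvas -> one 1024x512 image.
canvas = Image.new("RGB", (1024, 512), "white")
canvas.paste(dog, (0, 0))

# Build the inpaint mask: white = redraw, black = keep.
# Only the right (blank) half is marked for redrawing.
mask = Image.new("L", (1024, 512), 0)
mask.paste(255, (512, 0, 1024, 512))

canvas.save("stitched.png")
mask.save("mask.png")
# Upload stitched.png with mask.png to the inpaint tab; the masked half
# is then diffused to resemble the dog on the left.
```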

With “reference-only” it’s different:

It links SD's (Stable Diffusion's) attention layers directly to any standalone image, so SD can read that image as a reference.
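Conceptually, that "linking" amounts to letting the U-Net's self-attention attend to the reference image's features as well as its own. The PyTorch sketch below is an illustrative reconstruction of that idea, not ControlNet's actual code; all names are assumptions:

```python
import torch

def reference_self_attention(q, k, v, ref_k, ref_v):
    """Self-attention that also attends to a reference image.

    q, k, v: (batch, tokens, dim) features of the image being generated.
    ref_k, ref_v: keys/values recorded from the reference image's
    features at the same denoising step (hypothetical names).
    """
    # Concatenate reference keys/values along the token axis, so each
    # query can attend to both its own image and the reference.
    k_cat = torch.cat([k, ref_k], dim=1)
    v_cat = torch.cat([v, ref_v], dim=1)
    scores = q @ k_cat.transpose(-1, -2) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v_cat
```

Because the reference contributes only keys and values, the composition of the output is still driven by the prompt and the sampler, while appearance details are pulled from the reference.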

In other words, if you want to make changes while preserving the original image's style, you can operate on the original image directly with prompts.

As shown in the official example image, turning a standing puppy into a running one:

Just upgrade ControlNet to version 1.1.153 or above, select "reference-only" as the preprocessor, upload the dog picture, and enter a prompt like "a dog running on grassland, best quality…"; SD will then use your picture as the reference for the edit.
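If you prefer scripting over the UI, something like the following should work against a local web UI started with --api; the payload follows the ControlNet extension's "alwayson_scripts" convention, though exact field names can differ between versions, so treat this as a hedged sketch:

```python
import base64
import requests

# Encode the reference image for the JSON payload (path is a placeholder).
with open("dog.png", "rb") as f:
    ref_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a dog running on grassland, best quality",
    "steps": 20,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "module": "reference_only",  # the reference-only preprocessor
                "model": "None",             # no control model is needed
                "input_image": ref_b64,
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
print("generated", len(r.json()["images"]), "image(s)")
```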

Netizens: ControlNet’s best feature yet

As soon as the "reference-only" feature landed, netizens rushed to try it.

Some call this one of ControlNet’s best features yet:

Feed it an anime image with a character pose and write a prompt that looks completely unrelated to the original, and suddenly you get the effect you wanted, built on the original image. Really strong, maybe even game-changing.

Others say:

It's time to dig out all the reject images that were discarded before and restore them.

Of course, some think it isn't perfect yet (for example, the woman's earrings in the first render are wrong, and the hair in the second image is incomplete), but netizens still say the direction is right.

Below are results from three Twitter users, mostly in anime style; enjoy:

Did any of these strike a chord?

Reference link:

This article is from: Qubit (ID: QbitAI)
