There are plenty of attractive Stable Diffusion images on the Internet, and enthusiasts have released retouching add-ons (LoRA) for idols and actors, which makes the tool tempting to try. This time we will use a recently released LoRA for a Japanese artist to briefly introduce how to use LoRA in Stable Diffusion and what needs attention along the way.
Tutorial on setting up your own Stable Diffusion WebUI: click here
1) Download the required LoRA
On the CivitAI website you can find a large number of checkpoint models and LoRA add-ons, including of course the SanOOya LoRA featured this time. After downloading, put the files in the stable-diffusion-webui\models\Lora folder. If this folder does not exist, you can create it yourself inside the models folder.
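If you prefer the command line, the step above can be sketched as follows. The installation path assumes a default stable-diffusion-webui checkout in your home directory, and the LoRA filename is a hypothetical placeholder, not the actual file from CivitAI:

```shell
# Create the Lora folder if it does not exist yet (adjust WEBUI to
# wherever your stable-diffusion-webui installation actually lives).
WEBUI="${WEBUI:-$HOME/stable-diffusion-webui}"
mkdir -p "$WEBUI/models/Lora"

# Then move the downloaded LoRA file into place, e.g.:
# mv ~/Downloads/sanooya_v1.safetensors "$WEBUI/models/Lora/"
```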
2) View the build settings for the example
To use Stable Diffusion, we have to learn how to "chant the image-generation spell", that is, the parameters and descriptive text used when generating an image. These include the checkpoint model used for the picture, the sampling method, and so on. For example, many SanOOya LoRA samples use Basil_mix_fixed as the base model; if you instead use the ChilloutMix model commonly chosen for other characters, the result may not be as good.
3) Download the required checkpoint from Hugging Face
You may find that the Basil_mix_fixed model (checkpoint) does not seem to be available on the CivitAI website. In that case, we can download the required files from Hugging Face, a repository dedicated to AI-related files, and then put the checkpoint file in the stable-diffusion-webui\models\Stable-diffusion folder. You should download the safer safetensors version, and fall back to the ckpt version only if you really cannot find one.
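As a command-line sketch of the same step (the Hugging Face URL below is a placeholder, not a verified link; check the model page on Hugging Face for the actual repository path and filename):

```shell
# Checkpoints go in models/Stable-diffusion, not models/Lora.
WEBUI="${WEBUI:-$HOME/stable-diffusion-webui}"
mkdir -p "$WEBUI/models/Stable-diffusion"

# Download the safetensors version when one is available
# (placeholder URL -- substitute the real repository path):
# wget -P "$WEBUI/models/Stable-diffusion" \
#   "https://huggingface.co/<user>/<repo>/resolve/main/Basil_mix_fixed.safetensors"
```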
4) Use the example picture to generate your own SanOOya
The easiest way is to download sample images from the CivitAI website that still carry their generation parameters and text descriptions (spells), then open PNG Info in the Stable Diffusion interface and drop the photo in; the related content will appear next to it. From there, choose txt2img or img2img to generate.
(Editor's note: clicking the Copy Generation Data button directly below a reference picture's parameters also copies all the parameters, including extended-feature parameters.)
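For reference, the generation data copied this way is plain text in roughly the following shape (the values here are illustrative only, not taken from an actual SanOOya sample):

```
masterpiece, best quality, 1girl, portrait
Negative prompt: lowres, bad anatomy, worst quality
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1234567890, Size: 512x768, Model: Basil_mix_fixed
```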
5) Generate image
The checkpoint model and sampling method used will affect the style and quality of the generated images, so we import the sample directly: the system presets the parameters from the photo and fills in the image description, and the user only needs to make slight modifications before generating.
6) Add LoRA and set specific gravity
There are five icons under the Generate button; click the red one in the middle and a panel will pop up. Click the Lora tab to select the LoRA needed for this generation, for example the SanOOya LoRA. A tag in the form <lora:name:weight> will then appear in the prompt field, and you can adjust the weight applied to that LoRA: for example, :0.8 equals 80% and :0.2 represents 20%. Of course, you may see multiple LoRAs used in one set, covering the character's face, the background, light and shadow effects, and so on. However, some foreign YouTubers note that if you need several LoRAs at once, remember to insert them one by one, and make sure their weights do not add up to more than 1. In other words, if you want to use two LoRAs, the weights might be set to :0.8 + :0.2, or :0.6 + :0.4, and so on.
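Put together, a prompt using two LoRAs whose weights sum to 1 might look like this (the LoRA names are hypothetical placeholders):

```
<lora:sanooya_face:0.6> <lora:soft_lighting:0.4>, 1girl, portrait, soft light
```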
7) What is Seed -1?
When you import a sample image, the system analyzes the photo's data, including the character's pose, the background, the lighting, and so on, and there is a chance of producing an almost identical photo. If you want something more original, click the dice icon next to the Seed field at the bottom of the interface to reset the parameter to -1, so the system generates based only on your text description and the LoRAs you use.