ControlNet Helps You Make Perfect Hands With Stable Diffusion 1.5


In Brief

ControlNet is an easy way to fine-tune Stable Diffusion.

It can be used to develop models for better SD control.

ControlNet is open-source and can be used in conjunction with WebUIs such as AUTOMATIC1111 to control Stable Diffusion.



The one thing text-to-image AI generators have been struggling with is hands. While the images are generally impressive, the hands are less so, with superfluous fingers, weirdly bent joints, and a clear lack of understanding on the AI’s part of what hands are supposed to look like. However, this doesn’t have to be the case: the new ControlNet tool is here to help Stable Diffusion create perfect, realistic-looking hands.

ControlNet is a new technique that lets you use a sketch, outline, depth map, or normal map to guide image generation with Stable Diffusion 1.5. This means you can now get almost perfect hands on any custom 1.5 model, as long as you provide the right guidance. ControlNet can be thought of as a revolutionary tool, giving users ultimate control over their designs.

To achieve flawless hands, use the ControlNet extension for the AUTOMATIC1111 (A1111) WebUI, specifically its Depth module. Take a few close-up photos of your hands and upload them to the ControlNet panel in the txt2img tab. Then write a simple prompt for a model such as DreamShaper, for example “fantasy artwork, viking man showing hands closeup,” and experiment with the power of ControlNet. A little experimentation with the Depth module will yield beautiful, realistic-looking hands.
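
For readers who prefer scripting this workflow instead of clicking through the WebUI, here is a minimal sketch of the same depth-guided generation using the Hugging Face diffusers library. The checkpoint names and file paths are illustrative assumptions, not part of the A1111 setup described above.

```python
# Minimal sketch: depth-conditioned ControlNet on top of Stable Diffusion 1.5.
# Checkpoint names and file paths are assumptions for illustration.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load the depth-conditioned ControlNet and attach it to an SD 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A depth map of the hand photo (precomputed, or produced by the Depth preprocessor).
depth_map = load_image("hand_depth_map.png")

image = pipe(
    "fantasy artwork, viking man showing hands closeup",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("viking_hands.png")
```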


ControlNet itself converts the image it is given into a depth map, normal map, or sketch, which is then used to condition generation. But, of course, you can also upload your own depth maps or sketches directly. This allows for maximum flexibility when composing a scene, enabling you to focus on the style and quality of the final image.
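
If you want to prepare that conditioning yourself rather than rely on the built-in preprocessor, a depth map can be estimated from an ordinary photo and then fed to ControlNet directly. A hedged sketch, assuming the Intel/dpt-large depth-estimation model available through Hugging Face transformers:

```python
# Sketch: estimate a depth map from a photo to use as ControlNet conditioning.
# The model name and file paths are assumptions for illustration.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

source = Image.open("hand_selfie.jpg")
result = depth_estimator(source)

# The pipeline returns a PIL depth image that can be passed to ControlNet directly.
result["depth"].save("hand_depth_map.png")
```

A file like hand_depth_map.png is exactly the kind of conditioning image the generation sketch above accepts.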

We strongly suggest you look at the excellent ControlNet tutorial that Aitrepreneur has recently published.

ControlNet greatly improves control over Stable Diffusion’s image-to-image capabilities

Although Stable Diffusion is best known for creating images from text, it can also generate images from other images used as templates. This image-to-image pipeline is frequently used to enhance generated images or to produce new ones from a template.

While Stable Diffusion 2.0 offers the capability to use depth data from an image as a template, control over this process is quite restricted. This approach is not supported by the earlier version, 1.5, which is still commonly used due to the enormous number of custom models, among other reasons.

ControlNet copies the weights of each Stable Diffusion block into a trainable variant and a locked variant. The locked variant preserves the capabilities of the production-ready diffusion model, while the trainable variant learns new conditions for image synthesis by fine-tuning on small datasets.
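
In code, the idea looks roughly like the following simplified sketch. It is a conceptual illustration in PyTorch, not the actual ControlNet implementation: a frozen copy preserves the original behaviour, a trainable copy receives the extra condition, and a zero-initialized convolution ensures training starts from the unmodified model.

```python
# Conceptual sketch of ControlNet's locked/trainable copies (not the real code).
import copy
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    def __init__(self, pretrained_block: nn.Module, channels: int):
        super().__init__()
        # Locked copy: keeps the production-ready weights frozen.
        self.locked = pretrained_block
        for p in self.locked.parameters():
            p.requires_grad = False

        # Trainable copy: fine-tuned on the new conditioning signal.
        self.trainable = copy.deepcopy(pretrained_block)

        # "Zero convolution": initialized to zero, so at the start of training
        # the output is identical to the unmodified locked model.
        self.zero_conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # The locked path sees the original input; the trainable path also sees
        # the extra condition (e.g. a depth map encoded to the same shape as x).
        return self.locked(x) + self.zero_conv(self.trainable(x + condition))
```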

All ControlNet models work with Stable Diffusion and offer considerably more control over the generative AI. The team provides samples of several variations of people in fixed poses, various interior images based on the spatial arrangement of a model, and variations of bird images.

Damir Yalalov

Damir is the Editor/SEO/Product Lead at mpost.io. He is most interested in SecureTech, Blockchain, and FinTech startups. Damir earned a bachelor's degree in physics.
