GLIGEN: a new frozen text-to-image generation model with bounding-box grounding


In Brief

GLIGEN, or Grounded-Language-to-Image Generation, is a novel technique that builds on and extends the capability of current pre-trained diffusion models.

With caption and bounding-box condition inputs, the GLIGEN model generates open-world grounded text-to-image output.

GLIGEN can generate a variety of objects in specific places and styles by leveraging knowledge from a pre-trained text-to-image model.

GLIGEN can also ground human keypoints during text-to-image generation.



Large-scale text-to-image diffusion models have come a long way. However, the current practice is to rely solely on text input, which can limit controllability. GLIGEN, or Grounded-Language-to-Image Generation, is a novel technique that builds on and extends the capability of current pre-trained text-to-image diffusion models by allowing them to be conditioned on grounding inputs.


To preserve the pre-trained model’s extensive concept knowledge, the developers freeze all of its weights and inject the grounding information into new trainable layers through a gated mechanism. With caption and bounding-box condition inputs, the GLIGEN model generates open-world grounded text-to-image output, and the grounding ability generalizes well to novel spatial configurations and concepts.
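In code, this amounts to freezing every original parameter and optimizing only the newly inserted layers. A minimal PyTorch sketch, assuming illustrative `unet` and `new_layers` handles (not GLIGEN’s actual API):

```python
import torch
import torch.nn as nn

def prepare_trainable_layers(unet: nn.Module, new_layers: nn.ModuleList):
    """Hypothetical helper: freeze all pre-trained weights, leave only
    the newly inserted grounding layers trainable."""
    for p in unet.parameters():
        p.requires_grad = False    # frozen: retains pre-trained knowledge
    for p in new_layers.parameters():
        p.requires_grad = True     # fresh layers absorb the grounding input
    # The optimizer only ever sees the new parameters, which is far
    # cheaper than full-model fine-tuning.
    return torch.optim.AdamW(new_layers.parameters(), lr=5e-5)
```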

Check out the demo here.

GLIGEN is built on existing pre-trained diffusion models, whose original weights are frozen to retain their massive pre-trained knowledge.
  • At each transformer block, a new trainable gated self-attention layer is added to absorb the additional grounding input (a minimal sketch follows this list).
  • Each grounding token carries two types of information: semantic information about the grounded entity (encoded text or image) and spatial position information (an encoded bounding box or keypoints).
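A condensed PyTorch sketch of these two pieces. The zero-initialized gate means the new layer starts as an identity and cannot disturb the frozen model early in training; raw box coordinates stand in here for the Fourier-encoded coordinates the paper uses:

```python
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Visual tokens attend jointly to themselves and to grounding
    tokens; a learnable tanh gate scales the new signal."""
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))   # tanh(0) == 0 at init

    def forward(self, visual, grounding):
        x = torch.cat([visual, grounding], dim=1)  # attend over both sets
        out, _ = self.attn(x, x, x)
        # keep only the visual positions, add a gated residual
        return visual + torch.tanh(self.gate) * out[:, : visual.size(1)]

class GroundingToken(nn.Module):
    """Fuses the semantic part (encoded text or image) with the spatial
    part (bounding-box coordinates) into one grounding token."""
    def __init__(self, sem_dim: int, box_dim: int, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sem_dim + box_dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )

    def forward(self, sem_emb, box):
        return self.mlp(torch.cat([sem_emb, box], dim=-1))
```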
The newly added modulated layers are continually pre-trained on massive grounding data (image-text-box triples), which is more cost-effective than alternative ways of using a pre-trained diffusion model, such as full-model fine-tuning. Like Lego bricks, different trained layers can be plugged in and out to enable various new capabilities.
For inference, GLIGEN supports scheduled sampling in the diffusion process: the model can dynamically switch between using the grounding tokens (by enabling the new layer) and the original diffusion model with its good prior (by disabling the new layer), balancing generation quality against grounding ability.
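An illustrative sampling loop under those assumptions; the `model(..., grounding=...)` interface is hypothetical, while the scheduler follows the diffusers-style `step()` API:

```python
import torch

@torch.no_grad()
def scheduled_sampling(model, scheduler, latents, grounding, tau=0.3):
    """Use the grounding layers only for the first tau fraction of the
    denoising steps to pin down object layout, then drop them so the
    frozen base model's prior refines overall image quality."""
    steps = scheduler.timesteps
    for i, t in enumerate(steps):
        g = grounding if i < tau * len(steps) else None
        noise_pred = model(latents, t, grounding=g)
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```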
GLIGEN can generate a variety of objects in specific places and styles by leveraging knowledge from the pre-trained text-to-image model.
GLIGEN can also be trained using reference images. The top row shows that reference photographs, in addition to written descriptions, can supply more fine-grained characteristics such as the style and shape of the car. The second row demonstrates that a reference image can also be used as a style image, in which case grounding it into a corner or edge of the image suffices.
Like other diffusion models, GLIGEN can perform grounded image inpainting, generating objects that closely match the supplied bounding boxes.
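One denoising step of such an inpainting loop might look as follows; this follows a common latent-inpainting recipe rather than GLIGEN’s published code, with `mask == 1` inside the region to repaint and all interfaces assumed:

```python
import torch

@torch.no_grad()
def grounded_inpaint_step(model, scheduler, latents, orig_latents, mask,
                          grounding, t):
    """Denoise with grounding tokens, then re-impose the noised original
    image outside the mask so only the boxed region is synthesized."""
    noise_pred = model(latents, t, grounding=grounding)
    latents = scheduler.step(noise_pred, t, latents).prev_sample
    noised_orig = scheduler.add_noise(
        orig_latents, torch.randn_like(orig_latents), t
    )
    return mask * latents + (1.0 - mask) * noised_orig
```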
GLIGEN can also ground human keypoints while generating text-to-image output.



Damir Yalalov
