News Report Technology
January 24, 2023

GLIGEN: new frozen text-to-image generation model with bounding box inputs

In Brief

GLIGEN, or Grounded-Language-to-Image Generation, is a novel technique that builds on and extends the capability of current pre-trained diffusion models.

With caption and bounding box inputs as conditions, the GLIGEN model performs open-world grounded text-to-image generation.

GLIGEN can generate a variety of objects in specific places and styles by leveraging knowledge from a pre-trained text-to-image model.

GLIGEN can also ground human keypoints during text-to-image generation.

Large-scale text-to-image diffusion models have come a long way. However, the current practice is to rely solely on text input, which can limit controllability. GLIGEN, or Grounded-Language-to-Image Generation, is a novel technique that builds on and extends the capability of current pre-trained text-to-image diffusion models by allowing them to be conditioned on grounding inputs.


To preserve the pre-trained model’s extensive concept knowledge, the developers freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. With caption and bounding box inputs as conditions, the GLIGEN model achieves open-world grounded text-to-image generation, and its grounding ability generalizes well to novel spatial configurations and concepts.

Check out the demo here.
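For readers who prefer code to the hosted demo, box-grounded generation can also be sketched programmatically. The example below assumes the Hugging Face diffusers integration of GLIGEN (a StableDiffusionGLIGENPipeline), which shipped after this article's publication; the checkpoint name, argument names, and box coordinates are illustrative assumptions, not the authors' official interface.

```python
# Hypothetical sketch: box-grounded generation via the diffusers GLIGEN pipeline.
# Checkpoint name and exact argument names are assumptions and may differ.
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box",  # assumed checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a birthday cake and a vase of flowers on a wooden table",
    gligen_phrases=["a birthday cake", "a vase of flowers"],      # what goes in each box
    gligen_boxes=[[0.1, 0.4, 0.5, 0.9], [0.55, 0.3, 0.95, 0.9]],  # normalized x0, y0, x1, y1
    gligen_scheduled_sampling_beta=0.4,  # fraction of steps that use the grounding layers
    num_inference_steps=50,
).images[0]

image.save("gligen_boxes.png")
```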

  • GLIGEN is based on existing pre-trained diffusion models, the original weights of which have been frozen to retain massive amounts of pre-trained knowledge.
  • At each transformer block, a new trainable gated self-attention layer is inserted to absorb the additional grounding input.
  • Each grounding token carries two types of information: semantic information about the grounded entity (encoded text or image) and spatial information (an encoded bounding box or keypoints). A minimal sketch of these two components follows this list.
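The sketch below illustrates, in simplified PyTorch, how a grounding token and a gated self-attention layer might look. It is a minimal reconstruction based on the description above, not the authors' code; the layer sizes, the simple MLP fusion of phrase embedding and box coordinates, and the zero-initialized learned gate are assumptions consistent with the general approach.

```python
# Minimal sketch (not the official implementation) of grounding tokens and a
# gated self-attention block added next to a frozen transformer block.
import torch
import torch.nn as nn


class GroundingTokenizer(nn.Module):
    """Fuse a phrase embedding with an encoded bounding box into one grounding token."""

    def __init__(self, text_dim=768, box_dim=4, token_dim=768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + box_dim, token_dim),
            nn.SiLU(),
            nn.Linear(token_dim, token_dim),
        )

    def forward(self, phrase_emb, boxes):
        # phrase_emb: (B, N, text_dim) encoded text for each grounded entity
        # boxes:      (B, N, 4) normalized (x0, y0, x1, y1) coordinates
        return self.mlp(torch.cat([phrase_emb, boxes], dim=-1))


class GatedSelfAttention(nn.Module):
    """New trainable layer that lets visual tokens attend to grounding tokens."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Learned scalar gate, initialized to zero so the model starts out
        # identical to the frozen backbone and gradually admits grounding info.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, visual_tokens, grounding_tokens):
        x = self.norm(torch.cat([visual_tokens, grounding_tokens], dim=1))
        attended, _ = self.attn(x, x, x)
        # Keep only the visual positions; scale the residual by the gate.
        return visual_tokens + torch.tanh(self.gate) * attended[:, : visual_tokens.size(1)]
```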
The newly added modulated layers are continually pre-trained on large-scale grounding data (image-text-box triplets). This is more cost-effective than alternative ways of adapting a pre-trained diffusion model, such as full-model fine-tuning. Like Lego bricks, different trained layers can be plugged in and out to enable various new capabilities.
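A rough illustration of that training recipe, assuming the module names from the sketch above: the original diffusion weights are frozen and only the newly added gated layers (plus the grounding tokenizer) receive gradients. The selection-by-name convention here is hypothetical.

```python
# Sketch: train only the newly added grounding layers of a frozen diffusion UNet.
# `unet` is assumed to contain modules named like the GatedSelfAttention sketched
# earlier; the naming convention is hypothetical.
import torch

def trainable_grounding_params(unet):
    params = []
    for name, param in unet.named_parameters():
        is_new_layer = "gated_attn" in name or "grounding_tokenizer" in name
        param.requires_grad_(is_new_layer)   # freeze everything else
        if is_new_layer:
            params.append(param)
    return params

# optimizer = torch.optim.AdamW(trainable_grounding_params(unet), lr=5e-5)
# The usual denoising loss is then applied; only the new layers are updated,
# so the knowledge stored in the frozen pre-trained weights is preserved.
```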
For inference, GLIGEN supports scheduled sampling in the diffusion process: the model can dynamically choose to use the grounding tokens (by enabling the new layers) or to rely on the original diffusion model with its strong prior (by disabling the new layers), thus balancing generation quality against grounding ability.
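In code, scheduled sampling can be pictured as toggling the gated layers partway through the denoising loop: grounding is enforced for the first fraction of steps, after which the original model refines the image. The sketch below assumes a diffusers-style UNet and scheduler, a `gated_attention_layers` attribute, and a per-layer `enabled` flag; all of these are illustrative assumptions.

```python
# Sketch: scheduled sampling at inference time. For the first `beta` fraction of
# denoising steps the grounding layers are active; afterwards they are disabled
# so the original (frozen) diffusion prior refines the image.
def denoise_with_scheduled_sampling(unet, scheduler, latents, cond, beta=0.3):
    timesteps = scheduler.timesteps
    switch_step = int(beta * len(timesteps))
    for i, t in enumerate(timesteps):
        use_grounding = i < switch_step
        for layer in unet.gated_attention_layers:   # hypothetical attribute
            layer.enabled = use_grounding           # hypothetical flag read in forward()
        noise_pred = unet(latents, t, encoder_hidden_states=cond).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```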
GLIGEN can generate a variety of objects in specific places and styles by leveraging knowledge from the pre-trained text-to-image model.
GLIGEN can also be trained using reference images. The top row shows that reference images, in addition to text descriptions, can supply finer-grained attributes such as the style and shape of a car. The second row demonstrates that a reference image can also be used as a style image, in which case grounding it to a corner or edge of the output image suffices.
Like other diffusion models, GLIGEN can perform grounded image inpainting, generating objects that closely match the supplied bounding boxes.
GLIGEN can also ground human keypoints during text-to-image generation.



About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He appears to be an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet. 
