News Report
October 06, 2022

Google Overcomes Meta by Launching a New Text-to-Video AI Generator, Imagen Video 

In Brief

Google’s Imagen Video pushes text-to-video generators a step closer to becoming killer apps

It didn’t take long for Google to respond to Make-A-Video from Meta. Given a text prompt, Imagen Video can produce an impressive video. Despite a number of drawbacks, the results are a substantial advance over the previous state of the art.

Compared with Meta’s text-to-video generator Make-A-Video, the results are noticeably better. However, this approach also demanded more supervision: whereas Make-A-Video was trained on unlabeled videos, Imagen Video relied on crowd workers to annotate videos with written descriptions.

There is no need to go into the full details of the architecture here; the research paper covers them. In short, 16 frames at a resolution of 40×24 and 3 frames per second are first generated from the text embedding produced by the T5 encoder, and a cascade of diffusion models then upscales this into the final video of 128 frames at 1280×768 and 24 frames per second.
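The cascade described above can be sketched as a chain of stages, each taking a clip and either adding frames or adding pixels. Only the base and final clip figures come from the article; the stage names (`temporal_sr`, `spatial_sr`), their order, and the intermediate scale factors below are illustrative assumptions, not Google's exact configuration.

```python
# Illustrative sketch of a cascaded text-to-video pipeline in the
# style of Imagen Video: a base diffusion model produces a tiny clip,
# then temporal and spatial super-resolution stages grow it.
from dataclasses import dataclass


@dataclass
class Clip:
    frames: int
    height: int
    width: int
    fps: int


def base_model(prompt: str) -> Clip:
    """Base video diffusion model: text embedding -> low-res clip."""
    return Clip(frames=16, height=24, width=40, fps=3)


def temporal_sr(clip: Clip, frame_factor: int) -> Clip:
    """Temporal super-resolution: interpolate extra frames."""
    return Clip(clip.frames * frame_factor, clip.height, clip.width,
                clip.fps * frame_factor)


def spatial_sr(clip: Clip, scale: int) -> Clip:
    """Spatial super-resolution: upscale every frame."""
    return Clip(clip.frames, clip.height * scale, clip.width * scale,
                clip.fps)


def generate(prompt: str) -> Clip:
    clip = base_model(prompt)
    clip = temporal_sr(clip, frame_factor=8)  # 16 -> 128 frames, 3 -> 24 fps
    clip = spatial_sr(clip, scale=4)          # 24x40  -> 96x160
    clip = spatial_sr(clip, scale=4)          # 96x160 -> 384x640
    clip = spatial_sr(clip, scale=2)          # 384x640 -> 768x1280
    return clip
```

Chaining small upscalers instead of generating at full resolution keeps each diffusion model cheap; the spatial factors compose to the overall 32× blow-up from the base clip to 1280×768.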

What is Imagen Video?

Imagen Video is a text-conditional video generation system built on a cascade of video diffusion models. It produces high-quality videos from text prompts by combining a base video generation model with a sequence of interleaved spatial and temporal video super-resolution models. The team describes the design choices made while scaling the system up to a high-definition text-to-video model, including the decision to use v-parameterization for the diffusion models and the selection of fully convolutional temporal and spatial super-resolution models at certain resolutions. In addition, the work validates and applies findings from earlier research on diffusion-based image generation in the video setting. The video models are then subjected to progressive distillation with classifier-free guidance for fast, high-quality sampling.
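The classifier-free guidance mentioned above is a simple sampling trick: at each denoising step the model is queried twice, with and without the text prompt, and the two predictions are blended. Here is a minimal sketch; `predict_noise` is a random stand-in for the real video U-Net, and the guidance weight value is an illustrative assumption.

```python
# Minimal sketch of classifier-free guidance for a diffusion sampler.
import numpy as np


def predict_noise(x_t, prompt_embedding):
    """Placeholder denoiser; a real system would call a video U-Net.

    Seeded deterministically so the conditional and unconditional
    branches return different but repeatable predictions.
    """
    rng = np.random.default_rng(0 if prompt_embedding is None else 1)
    return rng.standard_normal(x_t.shape)


def guided_noise(x_t, prompt_embedding, guidance_weight=7.5):
    """Blend conditional and unconditional noise predictions.

    eps = eps_uncond + w * (eps_cond - eps_uncond)

    w = 1 recovers plain conditional sampling; w > 1 pushes each
    denoising step harder toward the text prompt.
    """
    eps_cond = predict_noise(x_t, prompt_embedding)
    eps_uncond = predict_noise(x_t, None)
    return eps_uncond + guidance_weight * (eps_cond - eps_uncond)
```

Guidance sharpens prompt adherence at the cost of a second forward pass per step, which is part of why the distilled models matter for making sampling fast.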

According to the Google research team, the system accepts a textual description and generates a 16-frame video at three frames per second with a resolution of 40×24 pixels. The system then scales the clip up and “predicts” additional frames, producing a final video of 128 frames at 24 frames per second and 1280×768 resolution. Imagen Video was trained on 60 million image-text pairs and 14 million video-text pairs.

Imagen Video Samples

Such tools will undoubtedly be used everywhere, if only because generating video with AI is faster and cheaper than filming it.


Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He has 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet.
