Meta announced an AI system that generates videos from text
AI generator creates quirky short films with just a few words
Following its Make-A-Scene image generator, Meta has introduced Make-A-Video, a system that generates full videos from a written description. As the name implies, users give a rough description of a scene and the model produces a short movie that matches their text.
There is clearly room for improvement, but the results are already impressive. Such rapid advances in visual content generation could disrupt far more than Netflix.
The Meta team also warns that Make-A-Video, like any AI model trained on Internet data, has "acquired and presumably exaggerated societal biases, including dangerous ones."
Text-to-Video AI Generator: What It Is and How It Works
Make-A-Video builds on recent advances in text-to-image generation technology. It can create videos not only from text, but also from images or other videos. The underlying approach is still diffusion, but with a time axis added.
The system learns what the world looks like, and how it is typically described, from photos paired with captions. It also uses unlabeled video footage to learn how the world moves.
It uses this knowledge to bring your imagination to life, creating quirky, one-of-a-kind videos from just a few words or lines of text.
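The idea of "diffusion with a time axis" can be sketched in a few lines of toy code. This is not Meta's actual implementation, just an illustration under two assumptions: that a video is represented as a stack of frames with shape (T, H, W, C), and that spatio-temporal processing is factorized into a per-frame spatial pass followed by a per-pixel temporal pass (the "pseudo-3D" trick often used to reuse image-model machinery for video). Function names here are invented for the example.

```python
import numpy as np

def add_noise(video, t, num_steps=1000, rng=None):
    """Forward diffusion step: blend clean data with Gaussian noise.

    Works unchanged for an image (H, W, C) or a video (T, H, W, C):
    the time axis is just one more dimension of the tensor.
    """
    rng = rng or np.random.default_rng(0)
    alpha = 1.0 - t / num_steps                    # simple linear schedule
    noise = rng.standard_normal(video.shape)
    return np.sqrt(alpha) * video + np.sqrt(1.0 - alpha) * noise

def pseudo3d_smooth(video, spatial_k=3, temporal_k=3):
    """Factorized spatio-temporal filter (illustrative stand-in for a
    pseudo-3D convolution): a spatial box blur within each frame,
    then a temporal average across neighboring frames at each pixel."""
    T, H, W, C = video.shape
    # Spatial pass: box filter within each frame (reuses 2D logic).
    pad = spatial_k // 2
    padded = np.pad(video, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                    mode="edge")
    out = np.mean([padded[:, i:i + H, j:j + W]
                   for i in range(spatial_k)
                   for j in range(spatial_k)], axis=0)
    # Temporal pass: average each pixel over neighboring frames.
    pad_t = temporal_k // 2
    padded_t = np.pad(out, ((pad_t, pad_t), (0, 0), (0, 0), (0, 0)),
                      mode="edge")
    out = np.mean([padded_t[k:k + T] for k in range(temporal_k)], axis=0)
    return out

# A toy 16-frame, 32x32 RGB "video".
video = np.random.default_rng(1).random((16, 32, 32, 3))
noisy = add_noise(video, t=500)
smoothed = pseudo3d_smooth(noisy)
print(noisy.shape, smoothed.shape)
```

The point of the factorization is that the spatial pass is exactly what an image model already does, so a pretrained text-to-image backbone can be extended to video by inserting only the temporal pass.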
Gallery of Text-to-Video Examples
Examples of AI-generated videos are shown below. The GIFs may take a moment to load.
Is It Beneficial to Content Creators?
Because Make-A-Video is still in its early stages, the generated videos are low-resolution and contain visible artifacts. The clips are unmistakably synthetic, with slightly grainy objects and distorted motion, but they represent an important step forward in AI-assisted content creation.
The neural network generates silent clips under five seconds long, but it can already handle a wide range of prompts.
The model is not yet available, even in closed access; all the published clips were distributed to the media by Meta itself. As a result, it is unclear how well Make-A-Video understands prompts in different languages. Users can sign up for updates.