AI generator creates quirky short films with just a few words
AI makes short films, hinting at how it could help content creators
Following its Make-A-Scene image generator, Meta has introduced Make-A-Video, a new approach that generates full videos from a written description. As the name implies, this AI model lets users give a rough description of a scene, and it produces a short video that matches their text.
There is clearly room for improvement, but the results are already impressive. Such rapid advancement in visual content generation threatens not only Netflix.
The Meta team also warns that Make-A-Video, like any AI model trained on Internet data, has “acquired and presumably exaggerated societal biases, including dangerous ones.”
Text-to-Video AI Generator: What It Is and How It Works
The text-to-video generator builds on recent advances in text-to-image technology, extending them to video generation. Videos can be created not only from text, but also from images or other videos. It is still the same diffusion approach, but with a time axis added.
The system learns what the world looks like and how it is typically described by training on photos paired with captions. It also uses unlabeled videos to learn how the world moves.
It uses this information to help you bring your imagination to life by creating quirky, one-of-a-kind videos with just a few words or lines of text.
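To make the "same diffusion, but with a time axis added" idea concrete, here is a minimal conceptual sketch in Python. This is not Meta's actual code: the dimensions, the `toy_denoiser` placeholder, and the sampling loop are all illustrative assumptions. It only shows the structural change: image diffusion denoises a `(H, W, C)` tensor, while video diffusion denoises a `(frames, H, W, C)` tensor jointly.

```python
import numpy as np

# Toy dimensions: 8 frames of 16x16 RGB. Real models are far larger.
FRAMES, H, W, C = 8, 16, 16, 3
STEPS = 10

def toy_denoiser(x, step):
    """Placeholder for a learned denoising network.

    A real model would be a neural network conditioned on the text
    prompt; here we simply shrink the sample toward zero so the
    sketch stays runnable.
    """
    return x * 0.9

def sample_video(steps=STEPS, seed=0):
    rng = np.random.default_rng(seed)
    # Start from pure Gaussian noise over ALL frames at once --
    # the extra leading frame axis is the only structural change
    # compared with sampling a single image.
    x = rng.standard_normal((FRAMES, H, W, C))
    for step in reversed(range(steps)):
        x = toy_denoiser(x, step)
    return x

video = sample_video()
print(video.shape)  # (8, 16, 16, 3): frames, height, width, channels
```

Because every frame is denoised in one joint tensor, the model can keep motion consistent across frames instead of generating each frame independently.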
Gallery of Text-to-Video Examples
Examples of AI-generated video are provided below. The GIFs may load slowly, so please be patient.
Is It Beneficial to Content Creators?
Because Make-A-Video is still in its early stages, the released videos are low-resolution and show visible artifacts. The clips are unmistakably synthetic, with slightly grainy objects and distorted motion, but they represent an important step forward in AI-assisted content creation.
According to The Verge, the model functions much like popular neural networks such as Stable Diffusion and Midjourney, except that instead of static images it generates short clips.
The neural network produces silent clips under five seconds long, but it can already handle a wide range of prompts.
The model is not yet available even in closed access, and all finished videos have been distributed to the media by Meta itself. As a result, it is unclear how well Make-A-Video understands different languages and generates videos from them. Users can sign up for updates.