Meta Launches Two New Generative AI Features for Facebook and Instagram Video Editing
Meta launched two generative AI features for video editing – ‘Emu Video’ and ‘Emu Edit’ – to bolster user posts on Facebook and Instagram.
Meta, the Mark Zuckerberg-led social media giant, on Thursday launched two new generative-AI-based video editing features, named ‘Emu Video’ and ‘Emu Edit’, to enhance what users post on Facebook and Instagram.
Emu Video will let users produce four-second videos from a caption, a photo, or an image paired with a description, while Emu Edit will provide users with a simpler way to edit or modify videos using text prompts.
According to the social media giant, the developments are part of Emu (Expressive Media Universe) – the company’s first foundational model, announced in September 2023, which can generate incredibly realistic and aesthetically pleasing images from text captions.
Emu’s standout feature is its “quality tuning” technique, which boosts the visual allure of images generated by AI text-to-image models, it added.
At the core of Emu’s generative AI technology is a set of AI image editing tools for Instagram that lets users take a photo and modify its visual style or background.
Since the debut of OpenAI’s ChatGPT last year, businesses and enterprises have been drawn to the emerging generative AI market, seeking enhanced capabilities and streamlined business processes.
Emu’s Key Differentiator from Other Generative AI Tools
The approach by Emu Video involves a two-step process: First, it focuses on generating images conditioned on a given text prompt; and then it produces videos conditioned on both the original text and the generated image. This “factorized” or split strategy in video generation enhances efficiency and allows for the effective training of video generation models.
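The two-step factorized approach can be illustrated with a toy sketch. The function names and shapes below are hypothetical stand-ins, not Meta's actual API: stage one maps a text prompt to an image, and stage two produces video frames conditioned on both the text and that image.

```python
import numpy as np

# Hypothetical stand-ins for the two diffusion stages (not Meta's real models).
def generate_image(prompt: str, size: int = 64) -> np.ndarray:
    """Stage 1: produce an image conditioned on the text prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.random((size, size, 3))

def generate_video(prompt: str, image: np.ndarray, num_frames: int = 16) -> np.ndarray:
    """Stage 2: produce frames conditioned on both the text and the image."""
    rng = np.random.default_rng((abs(hash(prompt)) + 1) % (2**32))
    # Each frame perturbs the conditioning image, which stays the visual anchor.
    return np.stack([image + 0.01 * i * rng.random(image.shape)
                     for i in range(num_frames)])

def factorized_generation(prompt: str) -> np.ndarray:
    image = generate_image(prompt)        # text -> image
    return generate_video(prompt, image)  # (text, image) -> video

video = factorized_generation("a dog running on the beach")
print(video.shape)  # (frames, height, width, channels)
```

Splitting generation this way means the harder video stage starts from an already-plausible image rather than from noise alone, which is the efficiency the "factorized" strategy is claimed to deliver.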
Emu Video aims to demonstrate that factorized video generation can be implemented through a single diffusion model. By presenting key design decisions, such as fine-tuning noise schedules tailored for video diffusion, Meta is trying to refine its technology further.
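The noise schedule Meta fine-tunes for video diffusion is not public, but the general idea can be shown with a standard cosine schedule, a common choice in diffusion models; the parameter values here are illustrative defaults, not Meta's.

```python
import numpy as np

# A generic cosine noise schedule of the kind diffusion models use;
# Meta's fine-tuned video schedule is an assumption, not reproduced here.
def cosine_noise_schedule(num_steps: int, s: float = 0.008) -> np.ndarray:
    """Return cumulative signal levels (alpha-bar), decaying from ~1 toward 0."""
    t = np.linspace(0, 1, num_steps + 1)
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f[1:] / f[0]

alphas = cosine_noise_schedule(1000)
print(alphas[0] > alphas[-1])  # signal decreases (noise increases) over steps
```

Tuning how quickly this curve decays is one of the "key design decisions" the article refers to: video frames are highly correlated, so schedules tuned for single images are not necessarily optimal.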
Another feature is multi-stage training, which enables the direct generation of higher-resolution videos, showcasing Emu Video’s potential to elevate the quality of video content.
According to Meta, Emu Edit addresses a common challenge – many approaches tend to either over-modify or under-perform on various editing tasks, leading to less-than-optimal results. The primary objective of image editing shouldn’t solely be to produce a “believable” image; instead, the focus should be on precisely altering only the pixels relevant to the specific edit request.
Unlike other generative AI models, Emu Edit follows instructions to ensure that pixels in the input image unrelated to the specified edits remain untouched.
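The pixel-preservation property can be sketched in a few lines. This is an illustration of the behavior described, not Emu Edit's actual mechanism: an edit is blended in only inside a mask, so everything outside the requested region is guaranteed untouched.

```python
import numpy as np

# Illustrative only: keep pixels outside the requested edit region untouched,
# the property Emu Edit is said to enforce via instruction following.
def masked_edit(image: np.ndarray, edit: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace pixels with the edit only where mask is True."""
    return np.where(mask[..., None], edit, image)

image = np.zeros((4, 4, 3))          # original: all black
edit = np.ones((4, 4, 3))            # proposed edit: all white
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                # only this region was requested

result = masked_edit(image, edit, mask)
print(result[0, 0])  # untouched pixel, still [0. 0. 0.]
print(result[1, 1])  # edited pixel, now [1. 1. 1.]
```

In a real instruction-following editor the mask is implicit, inferred by the model from the text prompt rather than supplied by the user, but the invariant is the same: unrelated pixels come out identical to the input.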
With these announcements and technology refinements, Meta is positioning itself as a major contender in the competitive generative AI landscape alongside giants like Microsoft, Alphabet’s Google, and Amazon.