Meta Unveils AI Integration Across Services, from Generative Emu Model to Smart Glasses
The annual Meta Connect conference, formerly known as Oculus Connect, made waves this year with a significant expansion of its technological horizons. While Oculus Connect was primarily focused on virtual reality, Meta Connect now encompasses a broader spectrum of technical domains. Here, we delve into some of the key highlights from the opening keynotes at this tech-packed event.
Meta Connect Unveils Quest 3 Headset
One of the most anticipated announcements was the unveiling of the Quest 3 headset, set to hit the market on October 10th. Priced at $499.99 for the 128GB version and $649.99 for the 512GB variant, both options come with a six-month subscription to a library of immersive games.
The Quest 3 is all about mixed reality, a concept that blurs the lines between the physical and digital worlds. It’s akin to what we witnessed during Apple’s recent presentations, with a couple of intriguing distinctions. First, there’s no cumbersome battery wire tethering you to reality. Second, and perhaps most exciting, users can now place virtual objects within their physical surroundings, and these digital entities persist indefinitely. Some of these virtual objects even serve as shortcuts, launching corresponding applications when interacted with.
The Quest 3 is poised to be another breakthrough device for the VR industry – a true successor to the Quest 2. It represents another critical milestone in the development of VR and AR. Over the past few years, the industry has gained unprecedented momentum. Thanks to a boom in new creation and collaboration platforms, it’s much faster, cheaper and easier to design VR and AR. Now, new advanced devices are coming onto the market – like Quest 3 – which have a real chance of quickly becoming mainstream. In a few years, we may look at the launch of the Quest 3 as a device that is a marked step forward, leading to the transformation of VR and how people interact with the world.
Inga Petryaevskaya, CEO and Founder, ShapesXR
Meta has upped the ante in terms of performance. The Quest 3 boasts a processor that’s twice as powerful, delivering enhanced graphics quality and impeccable tracking capabilities. Every component has undergone meticulous refinement to ensure a seamless and immersive experience.
At $499 it has a ridiculous amount of value per dollar spent. The device is thinner and more comfortable than Quest 2, very realistic passthrough makes it even more comfortable to have longer sessions. Compared to Quest 2, the passthrough is next level – it’s colored and highly realistic. People who will upgrade from Quest 2 will be mind-blown as it will be a significant step forward. Those who are with Quest Pro might not see the jump as a big one, but still appreciate the price for value.
Inga Petryaevskaya, CEO and Founder, ShapesXR
However, there’s one notable absence: eye tracking technology. Without this feature, certain rendering optimizations may remain unrealized. Nevertheless, this omission doesn’t overshadow the overall advancement achieved with the Quest 3.
In a bid to make the Quest 3 even more user-friendly, Meta engineers have managed to reduce its size significantly. The headset’s optical profile is now approximately 40% slimmer than its predecessor’s, making it more comfortable and approachable for users, whether they are seasoned VR enthusiasts or newcomers.
Meta Introduces New AI-Enhanced Services and Next-Generation Ray-Ban Glasses
In a recent presentation, Mark Zuckerberg provided insights into Meta’s latest AI endeavors, spanning from generative models to cutting-edge smart glasses. Let’s delve into the details of these remarkable developments:
1. The Generative Emu Model
Meta’s Generative Emu model takes center stage, showcasing its capacity to create high-resolution images. This impressive model has been seamlessly integrated into various Meta services to enhance user experiences.
In WhatsApp, users can harness the Emu model to generate stickers, with the capability to request up to four stickers in a single query. The image generation process takes a mere five seconds, demonstrating efficiency and convenience.
Instagram enthusiasts will soon see filters that can be generated from text requests. Imagine asking for a filter that transforms your hair into spaghetti – the Emu model makes it possible.
2. Meta AI for Engaging Conversations
Meta AI, a WhatsApp bot, takes chat interactions to a whole new level. This versatile bot can engage in conversations on a wide range of topics. It incorporates Bing search support, ensuring users have access to accurate and informative responses.
One fascinating feature is the bot’s connection to the Emu generative model. Users can call upon the model to add a creative touch to their conversations. Whether you need assistance in resolving a dispute or simply desire an imaginative twist to your chats, this Meta AI bot delivers.
3. Personalized Meta AIs
Meta is introducing personalized AIs designed to cater to various functions and personalities. One AI can assist with cooking, even stepping into the role of a virtual chef. Another AI is your go-to advisor for fitness and exercise-related queries, among other roles.
What’s groundbreaking is that developers will have the opportunity to create their own AIs. This opens doors for unique AI applications, particularly in business accounts, where these AIs can handle orders and respond to customer reviews, enhancing customer service and support.
4. Avatars and Voice Integration
Meta’s avatars are evolving, soon to be equipped with voices. Within a few months, these avatars will possess the ability to communicate vocally. Moreover, they can be integrated into the metaverse, transforming virtual worlds by emulating real individuals. The implications of this are vast, signaling an intriguing blend of reality and virtuality.
5. Next-Generation Ray-Ban Glasses
Meta’s next-generation Ray-Ban glasses have integrated cameras, reminiscent of the earlier Google Glass project. However, Meta has taken this concept a step further by embedding its AI assistant within the glasses. Users can interact with this AI through voice commands, creating a seamless and hands-free experience.
The AI assistant within the glasses hears what the wearer hears, and soon it will also interpret video from the glasses’ cameras. This evolution brings the experience remarkably close to the recent multimodal ChatGPT update, expanding the potential of smart glasses for a range of applications.