Meta Unveils Muse Spark-Powered AI Voice Conversations With Real-Time Visual Intelligence And Multimodal Responses
In Brief
Meta rolls out Muse Spark AI with voice conversations, real-time visual interaction, shopping tools, and multimodal reasoning across apps and wearables, expanding cross-platform intelligent assistant capabilities.

Technology company Meta announced the rollout of new AI Voice Conversations powered by Muse Spark, a system designed to enable more natural interaction with Meta AI. Users can interrupt responses, change topics mid-conversation, and switch between languages seamlessly. The updated experience also allows the assistant to generate images during dialogue and surface contextual recommendations drawn from services such as Reels, maps, and other integrated Meta platforms.
Alongside voice interaction upgrades, the company is introducing live AI capabilities within its applications, extending functionality already available on its AI glasses. This feature allows users to activate the device camera and interact with Meta AI in real time, asking questions about objects, environments, or locations directly within their field of view. The system is designed to provide contextual understanding of physical surroundings, whether identifying landmarks, assisting with household tasks, or interpreting visual information on demand.
A new set of shopping-related features has also been introduced. Within shopping mode, Meta AI can now search Facebook Marketplace listings in combination with broader internet results, presenting both second-hand and new items within a single interface. Results are displayed alongside a map-based view showing item locations, with additional filtering options based on price, style, and distance. The assistant also supports direct references to specific brands or creators, allowing users to browse public content feeds and product listings in a structured grid format.
Muse Spark is being gradually deployed across Meta’s hardware ecosystem, including Ray-Ban Meta and Oakley Meta glasses in the United States and Canada, with further expansion planned for Meta Ray-Ban Display devices in the coming months. The model is also being integrated across Meta’s software platforms, including WhatsApp, Instagram, Facebook, Messenger, and Threads, where it appears in search functions, group chats, posts, and other interaction points.
Additional experimental features include “side chats,” which allow users to access Meta AI from within group conversations to generate private, context-aware responses based on ongoing discussions, as well as @meta.ai mentions within Threads posts and replies. These integrations are intended to extend AI assistance across communication and social environments.
Meta Advances Muse Spark As Next-Gen Multimodal AI System
The introduction of Muse Spark follows Meta’s broader development of its AI infrastructure, described as part of a new generation of large language models developed by Meta Superintelligence Labs. The model is positioned as the first in a series designed to scale progressively, with an emphasis on reasoning, multimodal understanding, and task coordination. Although described as compact and fast in its initial form, it is intended to support complex reasoning tasks across science, mathematics, health, and everyday problem-solving.
Meta AI has also been updated to support multiple reasoning modes, allowing the system to adapt depending on task complexity. The architecture can deploy multiple subagents in parallel, each handling different components of a query, such as planning, comparison, or research synthesis, with the aim of improving response depth and efficiency.
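The parallel-subagent pattern described above can be illustrated with a minimal sketch. Meta has not published its actual architecture, so all function names and the merging strategy below are hypothetical; the sketch only shows the general idea of fanning a query out to specialized workers (planning, comparison, research synthesis) and combining their outputs.

```python
# Hypothetical sketch of a parallel "subagent" dispatch pattern.
# Each subagent handles one component of a query; results are merged.
# Names and behavior are illustrative, not Meta's actual implementation.
from concurrent.futures import ThreadPoolExecutor


def plan(query: str) -> str:
    return f"plan: break '{query}' into sub-tasks"


def compare(query: str) -> str:
    return f"compare: weigh options relevant to '{query}'"


def synthesize(query: str) -> str:
    return f"synthesis: summarize research on '{query}'"


def answer(query: str) -> str:
    """Fan the query out to subagents in parallel, then merge their outputs."""
    subagents = [plan, compare, synthesize]
    with ThreadPoolExecutor(max_workers=len(subagents)) as pool:
        # pool.map preserves the order of the subagent list.
        parts = list(pool.map(lambda agent: agent(query), subagents))
    return "\n".join(parts)


print(answer("best budget noise-cancelling headphones"))
```

In a real system each subagent would call a model rather than return a template string, and the merge step would itself be a reasoning pass, but the fan-out/merge shape is the same.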
The system’s multimodal capabilities allow it to process visual inputs alongside text, enabling functions such as identifying objects in images, analyzing product comparisons, and interpreting scenes in real time. Expanded applications in health-related queries have also been introduced, developed in collaboration with medical professionals to improve the quality of informational responses, particularly when visual data is involved.
In addition, Muse Spark supports visual coding functions that allow users to generate interactive tools such as websites, dashboards, and simple games directly from prompts. The system can also integrate contextual content from Meta’s ecosystem, including posts, Reels, and community discussions, to enrich responses with real-world relevance.
Meta stated that further rollout of the upgraded AI experience will continue across regions and platforms, with expanded availability planned for its apps and wearable devices. The company also indicated that select components of the technology will be made available through API access in private preview and that future versions may be open-sourced.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Alisa, a dedicated journalist at MPost, specializes in crypto, AI, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.