The app allows users to interact with Meta AI through natural voice conversations, perform tasks, and access features like image generation and editing—all while multitasking.
Written by: CDO Magazine Bureau
Updated 5:34 PM UTC, Wed May 7, 2025
Meta has launched a standalone Meta AI app powered by its latest Llama 4 model, delivering a more personalized and conversational AI experience through both voice and text.
Users can interact with Meta AI through natural voice commands, perform tasks, and use features such as image generation and editing, even while multitasking. The app also includes a full-duplex speech demo that showcases real-time voice interactions designed to feel more natural and human.
Currently available in the U.S., Canada, Australia, and New Zealand, the app allows seamless switching between devices, including Ray-Ban Meta smart glasses and Meta AI on the web. Conversations carry over between platforms, though some device transitions remain limited.
The app uses personalized data—such as preferences and engagement across Facebook and Instagram—to provide more relevant responses. A new Discover feed lets users explore, share, and remix AI prompts.
Meta highlights user control and transparency, with visible microphone indicators, adjustable voice settings, and privacy safeguards that prevent content from being shared without explicit user approval.
Alongside the app, Meta AI on the web is receiving upgrades, including improved voice tools, advanced image generation, and early testing of a document editor with PDF export.
The launch signals Meta’s broader push to embed intelligent, personalized assistants across its ecosystem—bringing AI to apps, smart glasses, and the web.