OpenAI’s New GPT-4o Features a Human-like Voice Assistant

While users can access the GPT-4o service for free, paid users get up to five times the usage limits of free users.

OpenAI recently announced the launch of GPT-4o, an updated version of GPT-4, to power ChatGPT. Unlike previous iterations, the model has extensive multimodal capabilities, allowing it to interact with text, images, audio, or any combination of the three.

In a livestream announcement, OpenAI CTO Mira Murati stated that this futuristic model “is much faster” and has enhanced “capabilities across text, vision, and audio.”

Murati shared that while users can access GPT-4o for free, paid users get up to five times the usage limits of free users. The omni model's capabilities will be rolled out iteratively, though its text and image capabilities are available in ChatGPT immediately.

The multimodal model's features include a voice assistant that can observe the user's surroundings and hold human-like conversations in real time.

As stated by Microsoft, GPT-4o is now available in the Azure OpenAI Service, and customers can explore its capabilities through the preview playground in Azure OpenAI Studio. The preview is currently available in two US regions.

For now, Azure supports the model's text and vision inputs as a sneak peek into its abilities. GPT-4o is expected to benefit customer service, advanced analytics, and content creation.
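As a rough illustration of what "text and vision inputs" means in practice, the sketch below builds a single chat message that combines a text prompt with an image, using the content-parts format of the OpenAI chat-completions API (which the Azure OpenAI Service also exposes). The model/deployment name and image URL here are placeholders, not values from the article, and no request is actually sent.

```python
# Minimal sketch of a multimodal (text + image) request payload for GPT-4o.
# The deployment name and image URL below are illustrative placeholders.

def build_multimodal_message(prompt: str, image_url: str) -> dict:
    """Combine a text prompt and an image reference in one user message,
    following the chat-completions content-parts format."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "gpt-4o",  # in Azure, this would be your deployment name (assumption)
    "messages": [
        build_multimodal_message(
            "Describe this chart.",
            "https://example.com/chart.png",  # placeholder image URL
        )
    ],
}
print(payload)
```

In a real call, this payload would be passed to the chat-completions endpoint of an Azure OpenAI deployment; the point here is only that text and vision inputs travel together in one message.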

CDO Magazine
www.cdomagazine.tech