OpenAI Finally Releases Real-Time Video Capabilities for ChatGPT

OpenAI is finally rolling out real-time video capabilities for ChatGPT. The release comes as a relief for many users, as the feature was first unveiled seven months ago.

The company shared the news during its livestream event yesterday, announcing that Advanced Voice Mode, already known for its human-like conversation, will now gain vision capabilities. The feature is available through the app to ChatGPT Plus, Pro, and Team subscribers.

To use it, simply point your device at an object and let ChatGPT respond in close to real time. The new Voice Mode with vision can also comprehend what is on the device's screen once screen sharing is enabled, allowing it to explain settings menus or offer help with math problems.

To access the new Advanced Voice Mode with vision, tap the voice icon next to the ChatGPT chat bar, then tap the video icon to start video. To share your screen instead, tap the three-dot menu and select Screen Share.

The rollout of the new Voice Mode begins Thursday and will conclude by the end of next week, which means not everyone will get access right away. Enterprise and Edu users will receive the feature in January next year, and no timeline has been provided for ChatGPT users in the EU, or in Norway, Switzerland, and Liechtenstein.

The new offering was also demoed on CNN, where the company's president used Advanced Voice Mode with vision in an anatomy quiz. As the host drew body parts on a board, ChatGPT recognized what was on display and performed well.

The tool identified many body parts accurately but struggled with others, which is where hallucination issues crept in. From what we can confirm, Advanced Voice Mode with vision was delayed several times in the past to iron out such errors.

Back in April this year, the company said Advanced Voice Mode would roll out broadly within a couple of weeks, then later said it needed more time. When the feature arrived for some users in the fall, it lacked the visual capability many were looking forward to. For now, the AI giant appears focused on bringing the voice-only Advanced Voice Mode to more platforms and to users across the EU.

The tech race is heating up, as both Google and Meta are reportedly working on similar features for their own chatbot products. Google has already demonstrated Project Astra, its first real-time, content-analyzing conversational AI.

Meanwhile, OpenAI also launched Santa Mode, which adds Santa's voice as a preset in ChatGPT. Users can tap the snowflake icon near the prompt bar in the app to try it.

Image: DIW-Aigen

Author: Dr. Hura Anwar
Source: https://www.digitalinformationworld.com/2024/12/openai-finally-releases-real-time-video.html
