OpenAI’s newest AI model can hold a humanlike conversation


OpenAI has introduced its most comprehensive artificial intelligence endeavor yet: a multimodal model that will be able to communicate with users through both text and voice.


GPT-4o, which will be rolling out in ChatGPT as well as in the API over the next few weeks, is also able to recognize objects and images in real time, the company said Monday.

The model synthesizes a slew of AI capabilities that are already separately available in various other OpenAI models. But by combining all these modalities, OpenAI’s latest model is expected to process any combination of text, audio and visual inputs more efficiently.


Users can relay visuals — through their phone camera, by uploading documents, or by sharing their screen — all while conversing with the AI model as if they were on a video call. The technology will be available for free, the company announced, but paid users will have five times the capacity limit.
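For the API rollout mentioned above, a combined text-and-image request might look something like the sketch below. This is only an illustration, assuming the OpenAI Python SDK is installed, an API key is configured, the "gpt-4o" model ID is available to the account, and the image URL is a hypothetical placeholder.

```python
# Minimal sketch of a multimodal request (assumptions: OpenAI Python SDK v1+,
# OPENAI_API_KEY set in the environment, "gpt-4o" available to your account,
# and a placeholder image URL).
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Send text plus an image in one message, mirroring the "show it something
# and talk about it" interaction described in the article.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this picture?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```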

https://www.nbcnews.com/tech/rcna151947

Totally insane.

Watch the first couple of minutes, as it shows the demos OpenAI released today.

AC