
ChatGPT can laugh now, and it’s downright creepy

OpenAI's Mira Murati introduces GPT-4o.
OpenAI

We all saw it coming, and the day is finally here — ChatGPT is slowly morphing into your friendly neighborhood AI, complete with the ability to creepily laugh alongside you if you say something funny, or go “aww” if you’re being nice — and that’s just scratching the surface of today’s announcements. OpenAI just held a special Spring Update event, during which it unveiled its latest large language model (LLM), GPT-4o. With this update, ChatGPT gets a desktop app and becomes better and faster, but most of all, it goes fully multimodal.

The event started with an introduction by Mira Murati, OpenAI’s CTO, who revealed that today’s updates aren’t going to be just for the paid users — GPT-4o is launching across the platform for both free users and paid subscribers. “The special thing about GPT-4o is that it brings GPT-4 level intelligence to everyone, including our free users,” Murati said.


GPT-4o is said to be much faster, but the impressive part is that it really takes the capabilities up a few notches across text, vision, and audio. Developers can also integrate it into their own apps through the API, where it’s said to be up to two times faster and 50% cheaper than GPT-4 Turbo, with a rate limit that’s five times higher.
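For developers, the multimodal angle shows up directly in the API's request format: a single message can mix text and image parts. Below is a minimal sketch of how such a request body is assembled, based on the OpenAI Chat Completions API's JSON shape; the model name "gpt-4o" comes from the announcement, while the helper function and example URL are just illustrative. Actually sending the request requires an API key and the official client, which are omitted here.

```python
from typing import Optional


def build_gpt4o_request(prompt: str, image_url: Optional[str] = None) -> dict:
    """Assemble a Chat Completions request body for GPT-4o.

    Text is always included; if an image URL is given, it is added as a
    second content part, which is how vision input is passed to the API.
    """
    content = [{"type": "text", "text": prompt}]
    if image_url:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": content}],
    }


# A text-plus-image request (hypothetical URL, for illustration only):
payload = build_gpt4o_request("Describe this chart.",
                              "https://example.com/chart.png")
```

The resulting dictionary is what would be serialized to JSON and posted to the chat completions endpoint; with text-only input, the `content` list simply holds a single text part.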

OpenAI announces GPT-4o.
OpenAI

Alongside the new model, OpenAI is launching the ChatGPT desktop app as well as a refresh of the user interface on the website. The goal is to make the chatbot as easy to communicate with as possible. “We’re looking at the future of interaction between ourselves and the machines, and we think that GPT-4o is really shifting that paradigm into the future of collaboration — where the interaction becomes much more natural,” Murati said.

To that end, the new improvements — which Murati showcased with the help of OpenAI’s Mark Chen and Barret Zoph — really do appear to make the interaction much more seamless. GPT-4o is now able to analyze videos, images, and speech in real time, and it can accurately pinpoint emotion in all three. This is especially impressive in ChatGPT Voice, which became so human-like that it skirts the edge of the uncanny valley.

Saying “hi” to ChatGPT evokes an enthusiastic, friendly response that has just the slightest hint of a robotic undertone. When Mark Chen told the AI that he was giving a live demo and needed help calming down, it sounded suitably sympathetic and jumped in with the idea that he should take a few deep breaths. It also noticed when those breaths were far too quick — more like panting, really — and walked Chen through the right way to breathe, first making a small joke: “You’re not a vacuum cleaner.”

Introducing GPT-4o

The conversation flows naturally: you can now interrupt ChatGPT rather than waiting for it to finish, and responses come quickly, with no awkward pauses. When asked to tell a bedtime story, it adjusted its tone of voice on request, going from enthusiastic to dramatic to robotic. The second half of the demo showed off ChatGPT’s ability to accurately read code, help with math problems via video, and read and describe what’s on the screen.

The demo wasn’t perfect — the bot appeared to cut off at times, and it was hard to tell whether this was due to someone else talking or because of latency. However, it sounded just about as lifelike as can be expected from a chatbot, and its ability to read human emotion and respond in kind is equal parts thrilling and anxiety-inducing. Hearing ChatGPT laugh wasn’t on my list of things I thought I’d hear this week, but here we are.

GPT-4o, with its multimodal design, as well as the desktop app, will be rolling out gradually over the next few weeks. A few months ago, Bing Chat told us that it wanted to be human, but now, we’re about to get a version of ChatGPT that might be the closest to human we’ve seen since the start of the AI boom.

Monica J. White
Monica is a computing writer at Digital Trends, focusing on PC hardware. Since joining the team in 2021, Monica has written…