
Ex-Google employees say we need ‘an Android-like moment for AI’

Hugo Barra, Google’s former VP of Android product management, announced Wednesday that he is leading a new startup that aims to develop an Android-like operating system for AI agents.

“[We’re] going back to our Android roots, building a new operating system for people & AI agents,” Barra wrote in a post on X.


I'm starting a new company with some of the best people I've ever worked with, and could not be more pumped. We're calling it /dev/agents.

Going back to our Android roots, building a new operating system for people & AI agents. Check out @dps's post below for more.

We're… https://t.co/QSIZLXJqZl

— Hugo Barra (@hbarra) November 26, 2024

The company, called “/dev/agents,” is working to develop a cloud-based “next-gen operating system for AI agents” that will “work with users across all of their devices,” company co-founder and CEO David Singleton wrote in a post on X. He argues that AI agents will “need new UI patterns, a reimagined privacy model, and a developer platform that makes it radically simpler to build useful agents.”

As the current generation of large language models like GPT-4o, Llama 3.1, and Gemini 1.5 sees diminishing performance returns even as developers pour ever more training data, compute power, and resources into them, AI agents are increasingly viewed as the next major advancement in generative AI. Unlike traditional apps, these agents are designed to autonomously process information, make decisions, and perform specific actions on their user’s behalf. That could be anything from generating complex computer code to booking flights and hotel accommodations, to transcribing business meetings and then generating actionable tasks based on what was discussed.
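To make that observe-decide-act distinction concrete, here is a minimal, purely illustrative sketch of an agent loop in Python. It is a hypothetical toy, not the API of /dev/agents or of any product named in this article; the decide() and act() functions are placeholder stubs standing in for calls to a real language model and real tools.

# A hypothetical toy agent loop, written for illustration only; it is not the
# API of /dev/agents, OpenAI, Google, or Anthropic. The decide() and act()
# stubs stand in for calls to a real language model and real tools.

def decide(goal: str, history: list[str]) -> str:
    """Reasoning step a real agent would delegate to an LLM."""
    return f"search options for: {goal}" if not history else "book the best option found"

def act(action: str) -> str:
    """Tool-use step: browsing, filling forms, running code, and so on."""
    return f"done: {action}"

def run_agent(goal: str, max_steps: int = 2) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = decide(goal, history)  # make a decision
        history.append(act(action))     # act, then fold the result back into context
    return history

print(run_agent("a weekend flight and hotel"))

The point of the sketch is simply that an agent runs a loop on the user’s behalf rather than waiting for taps and clicks, which is the behavior the companies below are racing to ship.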

Here’s how the new company’s website describes its mission: “Modern AI will fundamentally change how people use software in their daily lives. Agentic applications could, for the first time, enable computers to work with people in much the same way people work with people. But it won’t happen without removing a ton of blockers. We need new UI patterns, a reimagined privacy model, and a developer platform that makes it radically simpler to build useful agents. That’s the challenge we’re taking on.”

The industry’s leading companies are already racing to deploy their own branded agents. Microsoft recently announced that it will incorporate agents into its 365 Copilot ecosystem in early 2025. Google’s Project Jarvis, which is expected to arrive with the next Gemini update, leverages the AI’s capabilities to execute common tasks, such as visiting websites and filling out online forms, at the user’s command.

OpenAI’s agent, code-named Operator, will function in much the same way when it is released in January as a research preview through the company’s developer API. Anthropic has already released its own agent capability, dubbed computer use, which empowers Claude to emulate the keyboard presses and mouse clicks of a human user.

“We can see the promise of AI agents, but as a developer, it’s just too hard to build anything good,” Singleton told Bloomberg, noting that the industry needs “an Android-like moment for AI.”

Andrew Tarantola