

Google’s ChatGPT rival is an ethical mess, say Google’s own workers

Google launched Bard, its ChatGPT rival, despite internal concerns that it was a “pathological liar” and produced “cringeworthy” results, a new report has claimed. Workers say these warnings were apparently ignored in a frantic attempt to catch up with ChatGPT and head off the threat it could pose to Google’s search business.

The revelations come from a Bloomberg report that took a deep dive into Google Bard and the issues raised by employees who have worked on the project. It’s an eye-opening account of the ways the chatbot has apparently gone off the rails and the misgivings these incidents have raised among concerned workers.


For instance, Bloomberg cites an anonymous employee who asked Bard for instructions on how to land a plane, then was horrified to see that following Bard’s description would lead to a crash. A different worker said Bard’s scuba diving tips “would likely result in serious injury or death.”


These issues were apparently raised shortly before Bard launched, yet Google pressed ahead with the go-live date, such was its desire to keep pace with the path blazed by ChatGPT. But it did so while disregarding its own ethical commitments, resulting not only in dangerous advice, but in the potential spread of misinformation too.

Rushing ahead to launch


In 2021, Google pledged to double its team of employees studying the ethical consequences of artificial intelligence (AI) and invest more heavily in determining potential harms. Yet that team is now “disempowered and demoralized,” the Bloomberg report claims. Worse, team members have been told “not to get in the way or to try to kill any of the generative AI tools in development,” bringing Google’s commitment to AI ethics into question.

That was seen in action just before Bard launched. In February, a Google worker messaged an internal group to say, “Bard is worse than useless: please do not launch,” with scores of other employees chiming in to agree. The next month, Jen Gennai, Google’s AI governance lead, overruled a risk evaluation that said Bard could cause harm and was not ready for launch, pushing ahead with the first public release of the chatbot.

Bloomberg’s report paints a picture of a company distrustful of ethical concerns that it feels could get in the way of its own products’ profitability. For instance, one worker asked to work on fairness in machine learning but was repeatedly discouraged, to the point that it affected their performance review. Managers complained that ethical concerns were obstructing their “real work,” the employee stated.

It’s a concerning stance, particularly since we’ve already seen plenty of examples of AI chatbots producing offensive, misleading, or downright false information. If the Bloomberg report is correct about Google’s seemingly hostile approach to ethical concerns, this could be just the beginning of the problems caused by AI.

Alex Blake
Former Computing Writer
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…