Liz Reid, Google’s head of search, revealed in a blog post that the company has made changes to its new AI search feature after screenshots of its mistakes spread widely online.
Last week, when bizarre and misleading answers generated by Google’s new AI Overviews feature went viral on social media, the company issued statements downplaying the technology’s problems. Late Thursday, however, Reid explained in writing what had happened and what steps the company has taken so far, admitting that the errors highlighted areas in need of improvement.
Reid’s post specifically addressed two of the most viral and inaccurate AI Overview results. One absurdly endorsed eating rocks as beneficial; the other suggested using non-toxic glue to get cheese to stick to pizza.
Rock-eating is obviously not a topic that’s discussed very often, so there aren’t many sources for a search engine to draw on. Reid explained that the AI tool misinterpreted a satirical article from The Onion, which had been reposted on a software company’s website, as factual information.
Regarding the glue-on-pizza recommendation, Reid attributed the error to misinterpreting sarcastic or troll content from discussion forums. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza,” she wrote.
Needless to say, it’s important to carefully review any AI-generated dinner menu suggestions.
Reid also argued that judging Google’s new search approach by viral screenshots would be unfair. She said the company conducted extensive testing before launching the feature, and that its data shows users value AI Overviews and stay on the pages they discover through the feature.
Furthermore, Reid characterized the mistakes as the product of a massive, and sometimes ill-intentioned, internet-wide audit. “There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”
Google asserts that some widely shared screenshots of AI Overviews were fake. For instance, a user on X posted a screenshot showing an AI Overview confirming the question, “Can a cockroach live in your penis?” This post, viewed over 5 million times, didn’t match the actual presentation format of AI Overviews.
Even major media outlets were misled by fake AI Overviews. The New York Times issued a correction, clarifying that AI Overviews never suggested users should jump off the Golden Gate Bridge if experiencing depression. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression,” Reid wrote. “Those AI Overviews never appeared.”
Reid’s post also acknowledged that the original version of Google’s search upgrade had issues. The company implemented “more than a dozen technical improvements” to AI Overviews, including:
- Better detection of nonsensical queries that don’t warrant an AI Overview;
- Reduced reliance on sites with user-generated content;
- Showing AI Overviews less often in situations where users haven’t found them helpful;
- Stronger guardrails that disable AI summaries on critical topics such as health and important news.
Reid’s blog post did not mention any significant rollback of the AI summaries. Instead, Google says it will continue monitoring user feedback and adjusting the feature as needed.