Google Blames User Behavior and Lack of Training Data for Weird Responses from Search AI

Last week, Google added AI-generated summary boxes to its search results, an upgrade intended to improve the search experience. However, the AI began delivering peculiar responses, such as recommending that people eat rocks or use glue to keep cheese on pizza. The company swiftly removed these absurd results, but not before they had gone viral as memes. The head of Google's search division has now issued a statement on the situation.

Liz Reid, the head of Google Search, attributed the inaccurate AI-generated responses to "data voids" and to users themselves, who posed unusual queries. According to her, the AI-generated summary boxes do leave users more satisfied with their search results. In her account, the erroneous answers are typically not hallucinations; rather, the system sometimes misinterprets information already available on the web.

"There is no stronger test than millions of people using this feature with many novel queries. We have also seen nonsensical new searches, seemingly aimed at producing erroneous results," Reid commented. She added, however, that Google continues to refine the feature, restricting AI-generated summaries for nonsensical queries and satirical content. This matters because the system had initially been prone to quoting satirical sources or citing social media users with inappropriate usernames.

Reid also compared AI-generated overviews to Featured Snippets, the long-standing search feature that displays text excerpts from relevant web pages without AI involvement. The head of Google Search maintains that the two features have comparable accuracy rates.