Google Attempts to Manually Remove Harmful and Odd Responses from Its AI Search Engine

Google’s recently launched AI search engine has been caught giving unusual answers to user queries, from suggesting adding glue to pizza to recommending eating stones. These strange responses have prompted Google to manually disable its AI Overviews feature for certain queries, and many of the peculiar answers are quickly disappearing from search results.

AI Overviews testing and results

It seems odd that Google is seeing such bizarre results after testing its AI-generated overviews for a year. The beta version of the Search Generative Experience launched in May last year and handled over a billion queries, according to Sundar Pichai, Google’s CEO. Pichai also cited an 80% reduction in the cost of serving AI answers, attributing it to “hardware, engineering, and technical breakthroughs”. This suggests the optimization may have been rolled out prematurely, before the technology was ready for commercial use.

Google’s response to the odd AI answers

In response, Google has insisted that users mostly receive “high-quality information” from its AI search engine. A Google representative said that many of the examples the company reviewed involved unusual queries, and that some were fabricated or could not be verified. The company is taking action to prevent AI Overviews from being generated for queries that violate its content policy. The odd responses are also serving as the basis for broader improvements, some of which have “already begun to be implemented”.

AI accuracy: A reality check

Gary Marcus, a professor and AI expert at New York University, argues that many AI companies are “selling dreams” that algorithm accuracy will climb from 80% to 100%. According to Marcus, reaching 80% accuracy is relatively straightforward, since it mainly requires training the algorithm on a large volume of existing data. Pushing through the remaining 20%, however, is an immense challenge, because the algorithm has to “reason” about the plausibility of information and its sources, much as a human fact-checker would.