Google Attempts to Manually Remove Harmful and Odd Responses from Its AI Search Engine

Google’s recently launched AI search engine has been giving unusual answers to user queries, from suggesting glue as a pizza ingredient to recommending that people eat stones. These strange responses have prompted Google to manually disable its AI Overviews feature for certain queries, and many of the peculiar answers are quickly disappearing from search results.

AI overview testing and results

It seems odd that Google is encountering such bizarre results after testing its AI-generated overviews for a year. The beta version of the Search Generative Experience launched in May last year and, according to Google CEO Sundar Pichai, handled over a billion queries. Pichai also cited a significant 80% reduction in the cost of providing AI answers, attributing it to “hardware, engineering, and technical breakthroughs”. This outcome suggests the optimization may have been implemented prematurely, before the technology was ready for commercial use.

Google’s response to AI queries

In response to these developments, Google has insisted that users mostly receive “high-quality information” from its AI search engine. A Google representative said that many of the examples the company observed involved non-standard queries, and that some were fabricated or could not be verified. The company is now taking action to ensure that queries violating its content policy do not generate AI overviews. The strangely rendered AI responses are also serving as a basis for improvements, some of which have “already begun to be implemented”.

AI accuracy: A reality check

Gary Marcus, a professor and AI expert at New York University, argues that many AI companies are “selling dreams” of raising algorithm accuracy from 80% to 100%. According to Marcus, achieving 80% accuracy is relatively straightforward: it requires training the algorithm on a large volume of existing data. Pushing accuracy up by the remaining 20%, however, is an immense challenge, because the algorithm would need to “reason” about the plausibility of information and sources, mirroring the behaviour of a human fact-checker.

This post was last modified on 05/26/2024

Harry Males is a news writer at Dave's iPAQ, covering Software, AI, Cybersecurity, and Cryptocurrency.