In February, Google suspended image generation in its artificial intelligence model Gemini after historically inaccurate outputs offended public sensibilities. Notable examples included racially diverse soldiers generated for the prompt “Roman legion,” a stark anachronism, while the prompt “Zulu warriors” produced stereotypically black men. There has been no notable improvement since.
Sundar Pichai, Google’s CEO, was forced to apologize for the failure of the Gemini image generator. Demis Hassabis, head of Google DeepMind, the division responsible for the project, pledged that the error would be fixed “as soon as possible,” within a matter of weeks. Nevertheless, as of mid-May, the problem remains unaddressed.
At its annual I/O conference this week, Google introduced a multitude of new Gemini features: the AI model will be able to create custom chatbots and plan trips, and it will be integrated into Google Calendar, Keep, and YouTube Music. Yet image generation remains disabled in the Gemini app and web interface, a Google representative confirmed to TechCrunch.
The representative did not explain the reasons for the delay. One theory is that AI training data sets predominantly feature images of white individuals, with other races and ethnic groups underrepresented, which produces stereotyped outputs. To correct this bias, Google may have resorted to a drastic measure: hard-coding, in which the corrective rule is embedded directly in the application code rather than learned by the model. An algorithm patched this way is not easy to untangle.
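To see why such a patch is hard to remove, consider a minimal sketch of what hard-coding could look like. This is purely hypothetical and does not reflect Google’s actual code; it assumes the fix was a fixed instruction appended to every image prompt somewhere in the serving pipeline:

```python
# Hypothetical sketch only, not Google's implementation. It illustrates how a
# bias "correction" baked into application code, rather than into the model,
# fires indiscriminately, including for historical prompts.

DIVERSITY_HINT = "depicting people of diverse ethnicities and genders"  # assumed wording


def rewrite_prompt(user_prompt: str) -> str:
    """Append a fixed diversity instruction to every image prompt.

    Because the rule lives in code, it applies even where it is an
    anachronism (e.g. "Roman legion"), and removing it safely requires
    auditing every code path that touches prompts.
    """
    return f"{user_prompt}, {DIVERSITY_HINT}"


if __name__ == "__main__":
    print(rewrite_prompt("a Roman legion marching"))
    # -> "a Roman legion marching, depicting people of diverse ethnicities and genders"
```

Under this assumption, the slow fix is plausible: unlike retraining a model component, unwinding hard-coded rules means finding and testing each place they were wired in, without reintroducing the original bias.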