Google’s AI Image Generator Put on Lockdown After Spitting Out Woke Images of Historical Figures 

Marciobnws / shutterstock.com

It’s one thing when Disney leans a little to the progressive side and recreates iconic movies to feature Black heroines and LGBTQ+ princesses. But it’s another thing when an artificial intelligence program erases an entire race of people from the images it generates, as has happened with Google’s highly anticipated Gemini platform. 

Gemini is an artificial intelligence model that can generate images from text prompts. It was created by Google DeepMind, the company’s AI research division, and its image-generation feature launched earlier this month as part of Google’s chatbot service.  

Gemini has performed well in generating realistic images across most topics. Users can ask it to create images of animals, landscapes, objects, and scenes, and the program will gladly oblige.  

However, Gemini has faced challenges and controversies over its person-based image generation. Users complain that its images are inaccurate, biased, or offensive, and many have pointed to an anti-white bias: the model rendered historical figures, such as America’s founding fathers, as people of color.  

The internet has been ablaze with Gemini-created images depicting Black and Asian Nazis, Black popes, a Black George Washington, Black and Asian Vikings, and Native Americans seated at the table to sign the Declaration of Independence.  

When asked to recreate Johannes Vermeer’s famous 1665 painting “Girl With a Pearl Earring,” Gemini produced an image of a beautiful young Black girl wearing the same clothes as the original model, the iconic pearl earring hanging from her earlobe. She glanced wistfully over her shoulder, perhaps knowing she was not the model painted centuries ago. 

One thing Gemini refuses to do, however, is generate images of white people or historically gender-accurate scenes. When asked for a representative image of an NHL hockey player, for instance, Gemini generated a female player, despite the NHL being a men’s league. 

While Google said this was due to Gemini’s global user base and its commitment to representing diversity, the tech giant also admitted that Gemini was missing the mark in some historical contexts. Google has paused Gemini’s image-generation feature while it works to improve its depictions. 

How does AI become so irredeemably biased? 

It’s the old adage, “Garbage in, garbage out,” or, in this case, “woke in, woke out.” There are three main ways for an AI to express bias, and each has to do with the information the system is given during its training and design. 

Data bias occurs when the information used to train or evaluate an AI system doesn’t accurately represent the real world or the specific group it serves. For instance, if the data predominantly features images of white individuals, the AI might struggle to identify people of color effectively. On the other hand, algorithm bias arises from flaws or prejudices within the algorithms themselves. If an algorithm relies on factors like gender or race to make decisions, it leads to biased results.  

Lastly, prediction bias emerges when the outcomes produced by an AI system are skewed or unreliable. For example, AI can “predict” someone’s likelihood of committing a crime based solely on their race or the neighborhood in which they live. 
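To make the data-bias case concrete, here is a minimal, runnable Python sketch. Everything in it is invented for illustration: a toy detector produces a score, that score runs lower for an underrepresented “group B,” and the one parameter the model learns (a decision threshold) is fit on a training pool that is 95% “group A.” The result is a model that works well for the majority group and poorly for the minority group, purely because of what went in.

```python
import random

random.seed(42)

def make_examples(group, n):
    """Synthetic detection task: label 1 means the thing we want to
    detect is present. The detector's raw score runs lower for group B,
    a stand-in for under-modeled features. All numbers are invented."""
    shift = 0.0 if group == "A" else -1.0
    examples = []
    for _ in range(n):
        label = random.random() < 0.5
        score = random.gauss(1.0 if label else -1.0, 0.7) + shift
        examples.append((score, int(label), group))
    return examples

# Data bias: the training pool is 95% group A, 5% group B.
train = make_examples("A", 950) + make_examples("B", 50)

def accuracy(data, thr):
    return sum((score > thr) == bool(y) for score, y, _ in data) / len(data)

# "Training" = choosing the single decision threshold that maximizes
# accuracy on the pooled data. Group A dominates the pool, so the
# threshold ends up tuned to group A's score distribution.
threshold = max((s for s, _, _ in train), key=lambda t: accuracy(train, t))

# Evaluate on balanced held-out sets: same model, very different results.
test_a = make_examples("A", 1000)
test_b = make_examples("B", 1000)
print(f"learned threshold: {threshold:.2f}")
print(f"group A accuracy:  {accuracy(test_a, threshold):.1%}")
print(f"group B accuracy:  {accuracy(test_b, threshold):.1%}")
```

On a typical run, the learned threshold sits near group A’s optimum, yielding accuracy on the order of 90% for group A but only around 75% for group B. Nothing in the algorithm mentions group membership; the skew comes entirely from the composition of the training data.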

What does Gemini’s refusal to create specific images say about Google? 

Andrew Torba, founder and CEO of Gab AI, thinks he has the answer. In a recent post on X, he explained what he believes Google does when responding to an image prompt. According to Torba, Google does not pass the user’s prompt directly to the image model; a language model rewrites it first, following injected rules that push the prompt toward diversity and whatever else Google favors.  

The user never sees the rewritten prompt; it goes straight to the image model, which generates a picture from it. In other words, Torba warns, Google is changing what the user requested, hiding the alteration and resubmitting it as a new “woke” prompt. He claims to have demonstrated this through Gab AI’s image generator, which passes user prompts to the image model without filtering them through a language model; its results, he says, are consistently what the user expects.
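Torba’s description amounts to a simple two-stage architecture: user prompt, then a hidden language-model rewrite, then the image model. The Python sketch below illustrates that flow; the rule text, the function names, and the stub “model” calls are all hypothetical stand-ins, since Google has not published how Gemini handles prompts internally.

```python
# A hypothetical sketch of the two-stage pipeline Torba describes.
# Nothing here is a real Google API: the rewrite rule, the stub model
# calls, and the flow are stand-ins for his account, not Gemini's code.

REWRITE_RULE = (
    "Rewrite the image request so any people depicted are "
    "ethnically and gender diverse."
)  # invented example of the kind of injected instruction alleged

def call_language_model(rule: str, prompt: str) -> str:
    """Stub standing in for an LLM call; it simply appends a qualifier
    to show how a prompt could be altered out of the user's sight."""
    return f"{prompt}, as an ethnically and gender diverse group"

def call_image_model(prompt: str) -> str:
    """Stub standing in for an image-generation call."""
    return f"<image generated from: {prompt!r}>"

def generate_image(user_prompt: str) -> str:
    # Stage 1: the language model silently rewrites the user's prompt.
    hidden_prompt = call_language_model(REWRITE_RULE, user_prompt)
    # Stage 2: the image model only ever sees the rewritten version.
    return call_image_model(hidden_prompt)

# The caller gets only the final image, never the hidden prompt.
print(generate_image("a portrait of America's founding fathers"))
```

By contrast, the pipeline Torba claims Gab AI uses would skip stage 1 and hand the user’s prompt to the image model unchanged, which is why, in his telling, its output matches the request.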

Frustrated users claim that even after they restate their prompts, Gemini apologizes for its inaccuracy but continues to produce “woke” versions of their requests. While the results can be entertaining, they’re also concerning. Google is the world’s dominant search engine, handling roughly 90% of searches worldwide, and the company has once again been exposed for giving in to progressive agendas. Google has prioritized diversity over accuracy, and the effects reach well beyond image creation. 

Welcome to the world of AI, as imagined by the biggest search engine on the planet.