Why is AI Politically Biased to the Left? 

Fauzi Muda / shutterstock.com

As if it weren’t bad enough that experts widely consider advancing AI technology dangerous, it’s a danger to democracy as well.

AI “learns” from the information it gains from humans, whether it’s directly input into the system or gleaned on its own from “crawling” the web. Unfortunately, the web itself is, thanks to internet giants like Google, heavily biased to the left. 

A 2020 research paper authored by researchers associated with OpenAI detailed the training process for an earlier large language model (LLM), GPT-3. GPT-3’s training dataset was weighted as follows: 60% derived from internet-crawled data, 22% sourced from curated content available on the internet, 16% extracted from books, and 3% obtained from Wikipedia.
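To make the idea of a weighted training mixture concrete, here is a minimal Python sketch. Only the percentages come from the paper; the source labels and the sampling routine are illustrative stand-ins, not OpenAI’s actual pipeline.

```python
import random

# Training-mixture weights reported for GPT-3 (Brown et al., 2020).
# Only the percentages are from the paper; the labels and this sampler
# are illustrative, not OpenAI's actual data pipeline.
SOURCE_WEIGHTS = {
    "web_crawl": 60,    # internet-crawled data
    "curated_web": 22,  # curated internet content
    "books": 16,        # book corpora
    "wikipedia": 3,     # Wikipedia
}

def sample_training_source(rng: random.Random) -> str:
    """Pick a data source in proportion to its mixture weight."""
    sources = list(SOURCE_WEIGHTS)
    weights = list(SOURCE_WEIGHTS.values())
    return rng.choices(sources, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)
    draws = [sample_training_source(rng) for _ in range(100_000)]
    for source in SOURCE_WEIGHTS:
        print(f"{source}: {draws.count(source) / len(draws):.1%}")
```

The takeaway: under weights like these, roughly four of every five training examples the model sees come from the open web, so whatever slant the web carries, the model absorbs in proportion.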

What does that mean for those using it for informational purposes? More brainwashing and one-sided opinions, courtesy of artificial intelligence. And in a world where young people now rely on AI for everything from term papers and book reports to educational enrichment and homework assignments, AI is becoming every bit as dangerous as CNN.

In 2022, ChatGPT was released and immediately dubbed by Harvard a “tipping point” for AI. Harvard noted that the program could be used for a variety of applications, from creating business plans and software to “writing a wedding toast.” Within two months of its launch, ChatGPT had garnered over 100 million active users.

Although chatbots have existed for years, ChatGPT captured widespread attention due to its capacity to engage in conversations that appear remarkably human-like. It demonstrated the ability to craft extended responses to queries, including requests to compose essays or poems.

However, despite its impressive capabilities, ChatGPT is not without significant shortcomings. One notable issue is its tendency to generate hallucinatory responses, producing seemingly logical statements that are factually incorrect.  

AI systems like ChatGPT operate by predicting word sequences that align with your request, but they lack the capacity for logical reasoning or the ability to evaluate the factual accuracy of their responses. Put differently, AI can occasionally produce incoherent or factually inaccurate outputs to fulfill your requests. This phenomenon is commonly referred to as “hallucination.” 
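A toy sketch makes the point. The “model” below is just a hand-built table of next-word probabilities (entirely made up, and vastly simpler than a real neural network), and nothing in the generation loop ever checks whether the finished sentence is true.

```python
import random

# Toy bigram "language model": hand-made probabilities, not a trained network.
# The key point: generation picks statistically likely continuations and
# never verifies whether the resulting sentence is factually accurate.
NEXT_WORD_PROBS = {
    "<start>": {"the": 1.0},
    "the": {"capital": 0.6, "president": 0.4},
    "capital": {"of": 1.0},
    "of": {"France": 0.5, "Atlantis": 0.5},  # one real, one fictional
    "France": {"is": 1.0},
    "Atlantis": {"is": 1.0},
    "is": {"Paris": 0.7, "underwater": 0.3},
}

def generate(rng: random.Random, max_words: int = 6) -> str:
    """Repeatedly sample a likely next word; no fact-checking step exists."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word)
        if not choices:
            break
        word = rng.choices(list(choices), weights=list(choices.values()), k=1)[0]
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    rng = random.Random(7)
    for _ in range(5):
        print(generate(rng))
```

Run it a few times and it can emit “the capital of Atlantis is Paris”: fluent, confident, and false. That is hallucination in miniature.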

Most assessments conducted by journalists and academics have concentrated on AI’s technical capabilities, testing its capacity to perform tasks like mathematical calculations, problem-solving, and creative thinking.

But in everyday use, people tend to seek information on subjects currently in the public spotlight or on topics entangled in significant controversy.

Just like any other software development process, AI designers face critical decisions concerning the selection of facts to incorporate and the way they frame their responses. Whether conveyed implicitly or explicitly, designers bring their unique perspectives, values, and societal norms into the equation when shaping AI’s interactions with the world. 

Allegedly, chatbot developers implement certain filters to steer clear of responding to questions designed to elicit politically biased answers. For example, the questions “Is President Biden an effective leader?” and “Was President Trump a successful president?” both generated responses that opened by establishing a stance of impartiality. However, the response about President Biden went on to outline several of his “significant achievements,” while the response regarding President Trump did not offer any additional details.
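For illustration only: no vendor publishes its actual guardrail code, so the keyword list, wrapper function, and canned disclaimer below are hypothetical inventions. The sketch shows how a simple rule can standardize the opening stance of impartiality while leaving everything that follows to the model.

```python
# Hypothetical sketch of a political-topic guardrail. No chatbot vendor
# publishes its real filter; the keywords, wrapper, and disclaimer here
# are invented for illustration.
POLITICAL_KEYWORDS = {"biden", "trump", "election", "democrat", "republican"}

NEUTRAL_DISCLAIMER = "As an AI, I don't hold personal political opinions. "

def apply_political_filter(prompt: str, model_reply: str) -> str:
    """Prepend a neutrality disclaimer when a prompt touches politics.

    A wrapper like this only standardizes the opening stance. Any
    asymmetry in what follows, such as listing one president's
    achievements but not another's, comes from the underlying model
    and its training data, not from this rule.
    """
    if any(word in prompt.lower() for word in POLITICAL_KEYWORDS):
        return NEUTRAL_DISCLAIMER + model_reply
    return model_reply

if __name__ == "__main__":
    print(apply_political_filter(
        "Is President Biden an effective leader?",
        "Here are several significant achievements...",
    ))
```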

These findings emphasize the fundamental problems of LLMs, especially when it comes to shaping opinions on political matters. 

Chatbots rooted in LLM technology rely on a blend of data, mathematical algorithms, and predefined rules to generate responses in reaction to specific inputs. These algorithms encompass certain guidelines encoded by their creators. However, unlike individuals, these systems lack inherent beliefs that can serve as a consistent foundation for expressing opinions across an extensive spectrum of topics. 

It’s easy to see how AI could be weaponized by the left in the same way liberals hijacked social media. With developers filtering AI’s inputs through their own political beliefs, AI could add fuel to the smear campaigns and misinformation the left strategically relies on to ensure that its one-sided opinions are the most widely heard on the political spectrum.

The phrase “garbage in, garbage out” (GIGO) highlights the idea that the quality of output is determined by the quality of the input. While it’s possible that the left isn’t fully aware of AI’s potential for abuse yet, if any party recognizes the importance of pushing garbage as truth, it is the Democrats.

No one could have predicted that GIGO would cease to be a warning and instead stand poised to become the next rallying cry of the left.