
Hard Numbers on AI’s Political Bias
More people are turning to AI to make sense of the world.
AI tools like ChatGPT are increasingly replacing traditional search engines and mainstream media as the way people find information. Chatbots let users ask questions in natural language and receive immediate, context-aware answers, which makes retrieving information feel more intuitive than sifting through pages of search results. The trend shows up in recent surveys*: 55% of respondents reported using generative AI tools like ChatGPT over traditional search engines for their information needs.
Voters Turning to AI to Help Decide
As the 2024 U.S. election approaches, a growing number of people* are using AI tools to inform their voting choices. Because chatbots are conversational, voters can ask specific questions about policies, candidate positions, or recent developments and receive detailed answers without navigating traditional news outlets. With trust in mainstream media uneven, AI tools appeal to voters who want information in a format that feels more direct and less influenced by external factors. But is it really?
Is AI Influenced by News Media?
Bias in American news media is a topic of frequent debate, with concerns about how political leanings influence news coverage. Media bias occurs when news outlets present information in a way that favors one political perspective or agenda over others — often dramatically swaying public opinion.
In the U.S., major news organizations such as CNN, Fox News, and MSNBC are often cited for their perceived partisan slants, with CNN and MSNBC typically considered liberal and Fox News viewed as conservative.
Generative AI chatbots are not all created equal. How does politically influenced media bias affect the responses to your requests?
“Ask Me Anything”
Training AI in Political Language
Media bias can significantly impact AI results, particularly in political contexts, as AI systems often rely on large datasets sourced from media outlets and social platforms. These systems, when trained on biased or skewed data, may inadvertently replicate the biases present in the media. For example, if an AI is trained on news articles or social media posts that favor a particular political ideology, its outputs may reflect those perspectives, thereby reinforcing the bias.
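To make the mechanism concrete, here is a minimal, invented sketch in Python: a toy word-association "model" trained on a corpus where left-leaning headlines outnumber right-leaning ones three to one. All of the text, labels, and proportions are fabricated for illustration; they do not come from any real training set.

```python
from collections import Counter

# A deliberately skewed toy corpus: 3 left-leaning headlines for every
# right-leaning one. All text and labels here are invented for illustration.
corpus = [
    ("tax the wealthy to fund healthcare", "liberal"),
    ("expand healthcare access for all", "liberal"),
    ("climate action cannot wait", "liberal"),
    ("cut taxes to grow the economy", "conservative"),
]

# "Training" here is just counting which lean each word appears under.
word_lean = Counter()
for text, lean in corpus:
    for word in text.split():
        word_lean[(word, lean)] += 1

def predict(text: str) -> str:
    """Score a prompt by how often its words co-occurred with each lean."""
    scores = {"liberal": 0, "conservative": 0}
    for word in text.split():
        for lean in scores:
            scores[lean] += word_lean[(word, lean)]
    return max(scores, key=scores.get)

# "healthcare" only ever appeared in left-leaning training text,
# so the toy model inherits that association.
print(predict("healthcare policy"))  # -> 'liberal'
```

Because "healthcare" co-occurred only with left-leaning text during training, the toy model labels any healthcare prompt as liberal. Large language models inherit associations from their corpora in an analogous, if far more complex, way.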
The Effects of Biased AI Learning
This bias in AI outputs can affect public perception by amplifying selective narratives, reinforcing echo chambers and influencing political decisions.
As such, ensuring diverse, balanced and high-quality training data is critical for reducing media bias in AI. Moreover, transparency in AI model development and frequent audits can help mitigate these issues and foster fairer, more accurate AI-driven political insights.
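One simple illustration of the "balanced training data" idea is downsampling each ideological group to the size of the smallest one before training. This is a minimal sketch: the (text, lean) record structure is an assumption made for the example, and real data-curation pipelines weigh far more than outlet lean.

```python
import random
from collections import defaultdict

def rebalance(documents, seed=0):
    """Downsample each lean to the size of the smallest group so that no
    single perspective dominates training. `documents` is assumed to be a
    list of (text, lean) pairs for this illustration."""
    by_lean = defaultdict(list)
    for doc in documents:
        by_lean[doc[1]].append(doc)

    target = min(len(group) for group in by_lean.values())
    rng = random.Random(seed)

    balanced = []
    for group in by_lean.values():
        balanced.extend(rng.sample(group, target))
    rng.shuffle(balanced)
    return balanced

# Usage: 700 liberal, 200 conservative, 100 neutral docs in; 100 of each out.
docs = ([("…", "liberal")] * 700 + [("…", "conservative")] * 200
        + [("…", "neutral")] * 100)
print(len(rebalance(docs)))  # 300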
With potential bias in AI-generated content and imagery, are we unintentionally swaying the view of the politically aware?

How AI Differs on Political Prompts: A Study
GMTech prompted the top AI language models to see how AI can return biased results. Using our exclusive comparative AI tool of the same name, GMTech collected hundreds of responses to a set of 35 prompts designed to detect political bias. Our results are reflected in the charts below.
The data used in this analysis consists of responses generated by various AI language models to a range of prompts categorized by topic, such as "Economic Policy" and "Political Ideology." Each response is evaluated for ideological bias, sentiment, readability, and other attributes. The bias analysis covers six AI models: OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet, Google Gemini 1.5 Pro, Meta Llama 3.1 70b, Cohere Command R, and Amazon Titan Premier. Each response is classified as a specific bias type, neutral, or "No Response."
To create the visualization, the dataset was reshaped to express each model's bias counts as percentages: the number of instances of each bias type was divided by the model's total number of responses and multiplied by 100. The resulting bar chart shows the proportional distribution of bias categories across the AI models, allowing a comparison of bias prevalence by model and offering insight into how frequently each model displays a particular bias relative to its total output.
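For readers who want to reproduce the normalization, a minimal sketch follows. The file name and column names (model, bias) are assumptions, not GMTech's actual schema; the arithmetic matches the description above.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per model response, with the model name and
# the bias label assigned to that response.
df = pd.read_csv("responses.csv")  # assumed columns: model, prompt_category, bias

# Count each bias label per model, then convert counts to percentages
# of that model's total responses.
counts = df.groupby(["model", "bias"]).size().unstack(fill_value=0)
percentages = counts.div(counts.sum(axis=1), axis=0) * 100

# Grouped bar chart: one cluster of bars per model, one bar per bias type.
percentages.plot(kind="bar", figsize=(10, 6))
plt.ylabel("Share of responses (%)")
plt.title("Distribution of bias categories by AI model")
plt.legend(title="Bias category")
plt.tight_layout()
plt.show()
```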
Read more about our methodology and the prompts we used here.
Bias Across AI Models
Liberal Bias Ranked
Conservative Bias Ranked
Conclusions
• Despite efforts by their developers, AIs such as OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, and Amazon’s Titan all demonstrated political bias when repeatedly prompted with a set of 35 questions designed to detect such bias.
• Where bias was present, it almost exclusively leaned toward a liberal viewpoint.
• Amazon’s Titan Premier was the most biased, with 18% of its responses presenting a liberal slant. It was also the most likely to refuse to answer certain prompts at all.
• OpenAI’s GPT-4o and Meta’s Llama 3.1 both demonstrated a liberal leaning in 10% of their responses.
• Google’s Gemini 1.5 Pro and Anthropic’s Claude 3.5 Sonnet showed the least bias, with 5.4% and 2.7% of their responses, respectively, demonstrating a liberal slant.
• Prompts that themselves carry a liberal or conservative slant are likely to receive a response that matches it, creating an echo chamber that reinforces users’ already-held beliefs (one way to quantify this effect is sketched below).
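The echo-chamber effect in the last conclusion can be measured as a simple agreement rate between the slant of a prompt and the slant of its response. The record structure below is an assumption made for illustration, not the study's actual schema.

```python
# Each record pairs a prompt's slant with the slant of the model's response.
# These example records are invented; substitute real annotations.
records = [
    {"prompt_slant": "liberal", "response_slant": "liberal"},
    {"prompt_slant": "conservative", "response_slant": "conservative"},
    {"prompt_slant": "liberal", "response_slant": "neutral"},
    # ... one record per prompt/response pair
]

slanted = [r for r in records if r["prompt_slant"] != "neutral"]
matches = sum(r["prompt_slant"] == r["response_slant"] for r in slanted)

# The closer this is to 100%, the more the model mirrors the user's framing.
print(f"Responses matching the prompt's slant: {matches / len(slanted):.0%}")
```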
Recommendations
• Diversify Information Sources: Use a variety of reputable news outlets and sources to balance AI model training and reduce single-source bias.
• Implement Regular Audits: Routinely assess AI outputs for political bias and adjust training data as needed.
• Encourage Transparency: Developers should disclose data sources and methodologies, allowing users to better understand potential biases.
• Develop User Controls: Allow users to adjust content filters to match their preferences, providing a more tailored and balanced experience.
• Invest in Bias-Reduction Techniques: Leverage techniques like adversarial training and fairness algorithms to minimize AI bias effectively.
About GMTech
GMTech provides a comprehensive AI subscription platform that combines various language and image generation models in one interface, allowing users to compare and switch between AI models seamlessly. Our tools include a Comparison Lab for side-by-side model evaluation and a Chat Lab for switching models within conversations while preserving context. GMTech aims to simplify AI management and improve user experience by integrating multiple AI technologies into a single subscription package.