Generative AI models like ChatGPT, Gemini, and DeepSeek have biases, as I’m sure you have observed. Those biases come from the data the models are trained on and from the beliefs of their developers or the ruling elite.
We should all be concerned about ideological bias and censorship in AI-generated content.
For example, numerous tests have shown that ChatGPT and Gemini have left-leaning biases. This reflects the dominant ideological stance of the developers and the datasets used to train them. That is understandable: a product from California has Californian reasoning in tow. Who would have thought?
However, this raises questions about fairness and objectivity, especially in politically sensitive discussions.
Similarly, models like DeepSeek, developed in China, refuse to answer questions about events like the Tiananmen Square protests or criticisms of the Chinese Communist Party (CCP), demonstrating how AI can be programmed to align with government narratives.
Here you can see how it refuses to answer any questions about the Tiananmen Square protests:
It is as if the CCP were right there in the server room, holding a gun to the AI’s brain, and it won’t answer the question.
If you didn’t know, the Tiananmen Square protests of 1989 were pro-democracy demonstrations in Beijing, led by students and workers, which ended in a violent crackdown by the Chinese government; China suppresses discussion of the event to maintain political control and prevent challenges to its authority.
DeepSeek trips up
I’ve had fun trying to get DeepSeek to trip up, and I finally managed it. I asked it what the CCP is, and it replied:
The fun started when I asked it about the CCP’s scandals. As you already suspected, DeepSeek won’t answer such silly questions. It replies:
What actually happens when you ask that question is that DeepSeek starts to answer, and just as it is about to finish its response, it realizes it shouldn’t, scraps the answer, and throws in the “Sorry, that’s beyond my current scope” line.
When I realized this, I decided to ask again and grab a screenshot as it was answering, and I got the following:
Not funny if you think about it
These biases have serious implications; it’s not just a fun game of tripping these AIs up. AI is increasingly used in journalism, education, and research, so we have to be mindful of the risk of ending up with history textbooks that skip certain impactful events, which is essentially what DeepSeek is doing.
If AI models only show some opinions and hide others, they create bubbles where everyone agrees instead of encouraging real conversations with different viewpoints.
The censorship we observed with DeepSeek, whether intentional or a result of cautious programming, limits the AI’s ability to provide comprehensive and objective information.
Imagine relying on an AI for research only to discover it consistently omits or downplays information that contradicts a specific narrative. This is what kids growing up using these tools will deal with.
Will it be a surprise when these kids develop a skewed understanding of history, current events, and even basic scientific principles? One example is the “A woman is anyone who says they are a woman” ideology in the West.
Will it be a surprise when they struggle to engage in nuanced discussions with those holding differing viewpoints, having been subtly conditioned to accept a particular perspective as the default?
In addition, biased AI can influence decision-making, potentially skewing political debates and historical interpretations.
All this means we need transparency about the limitations and potential biases of these models. Users should be aware that the information provided by AI is not necessarily objective truth but rather a reflection of the data it has been trained on.
Only then can we ensure that these powerful tools are used to broaden our understanding of the world rather than narrow it.
Generative AI on Zanu
I shared this on social media this past weekend. If you missed it, here’s what these generative AIs have to say about ZANU-PF’s political leanings.
You can see for yourself how they try to subtly throw ZANU onto the right wing. This is huge because, as we evaluate how we would like our country to be governed, we need to know how we have done it before and the results we have gotten.
If we somehow downplay how socialist ideas have wreaked havoc on the Zimbabwean economy, we are bound to follow the young Americans touting the virtues of big government, state control, and equity as enforced by the government. That has hurt us badly, and we need these AIs to accurately describe what ZANU, the party that has put us in this hole, truly believes and the policies it has implemented.
DeepSeek probably gave the better answer: it is concise and doesn’t try to throw ZANU onto the right wing. It more accurately reflects how ZANU has evolved over the years.
I then challenged ChatGPT on its crony capitalism point. I found it ridiculous to claim that ZANU-PF’s political elites enriching themselves was somehow right-wing. It conceded.
I also challenged Gemini for saying that authoritarian tendencies and human rights abuses don’t align with traditional left-wing values. It, too, conceded.
I hope you see how these AIs are biased in trying to protect left-wing ideology against yet another leftist government abusing its power.