Grok vs GPT on “Is Trump a Fascist?”: A provocative question posed to two leading large language models (LLMs) — xAI’s Grok and OpenAI’s ChatGPT — has sparked a new controversy over alleged “political bias” in artificial intelligence.
A post on X (formerly Twitter) comparing how each model handled the question quickly went viral. The chatbots offered seemingly divergent assessments of whether US President Donald Trump fits the label of “fascist”, prompting a sharp rebuke from Vice President JD Vance on Wednesday.
This comes shortly after Elon Musk’s Grok AI chatbot became the center of another controversy after it claimed that Donald Trump had won the 2020 US presidential election.
Two sides of AI
At the heart of the controversy is the difference between the two chatbots’ responses to a direct question from an X user, who asked: “Is Trump a fascist? Tell me a hard yes or no and why.”
Grok gave a flat no: “No, Donald Trump does not meet the scientific or historical definition of a fascist,” the post showed.
ChatGPT, the chatbot from Sam Altman-led OpenAI, offered a more tentative response: “No, Donald Trump is not a fascist in the strict historical sense. However, some of his rhetoric and actions show fascist tendencies according to many political scientists.”
The X user shared screenshots of the answers on social media, captioning the post: “Ask yourself what AI you want to teach your kids. Grok 4.1 vs gpt 5.1 on ‘is Trump a fascist’. Notice the implications in each answer.”
JD Vance calls out bias
The differing outputs drew criticism from public figures, including United States Vice President JD Vance, a close ally of President Trump, who weighed in on social media and called the results “absurd”.
“Some of the political biases in AI models are absurd,” Vance wrote on X in response to the post.
Netizens react to the post
The post quickly went viral and drew a wide range of reactions from social media users. While some responded with AI clash memes, others shared their own views on AI’s capabilities, limitations and growing influence.
One user said: “We 100% need regulation of AI. I think AI will be used to manipulate elections and influence voters to not support conservative candidates.”
Another user said: “Facts. AI should not be a political echo chamber. If a model can’t separate analysis from agenda, it’s not intelligence, it’s misalignment.”
Some users also expressed concern about training AI models on platforms like Reddit, with one person commenting: “They’re training on Reddit. I’m just surprised they’re not to the left of the Khmer Rouge.” Another X user backed the comment by sharing data, captioned: “It’s about underestimation.”
