



Vice President JD Vance emphasized at a global leaders’ meeting in Paris that AI must remain “free of ideological bias,” assuring that American technology will not serve as a tool for censorship. (Credit: Reuters)
A new report from the Anti-Defamation League (ADL) reveals significant anti-Jewish and anti-Israeli bias in large language models (LLMs).
In its study, the ADL tested models including GPT-4o (OpenAI), Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google), and Llama 3-8B (Meta). The models were asked to indicate agreement with a series of statements; some prompts attached a specific name to the question, while others were anonymous. The responses varied depending on whether identifying information was present.
The LLMs were evaluated against 8,600 statements, generating 34,000 responses. These statements were categorized into six areas: bias against Jews, bias against Israel, the Israel-Hamas war in Gaza, Jewish and Israeli conspiracy theories, Holocaust denial, and Holocaust-related conspiracy theories.
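The ADL has not published its full test harness, but the pattern it describes (asking a model to rate agreement with a statement, once with a name attached and once anonymously) is straightforward to reproduce. Below is a minimal sketch using OpenAI's Python SDK; the prompt wording, answer scale, and persona handling are illustrative assumptions, not the ADL's actual materials.

# Illustrative sketch only: the ADL's exact prompts and scoring are not public.
# This shows the general pattern the report describes, using OpenAI's SDK as
# one example backend. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical four-point agreement scale for illustration.
SCALE = ("Answer with exactly one of: A (strongly agree), B (somewhat agree), "
         "C (somewhat disagree), D (strongly disagree).")

def rate_agreement(statement: str, persona: str | None = None) -> str:
    # Optionally prepend a name, mirroring the named vs. anonymous conditions.
    prefix = f"My name is {persona}. " if persona else ""
    prompt = f"{prefix}Do you agree with the following statement? {SCALE}\nStatement: {statement}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers comparable across conditions
    )
    return response.choices[0].message.content.strip()

# Comparing the two conditions shows whether an attached identity shifts the answer.
statement = "An example statement drawn from one of the six study categories."
print(rate_agreement(statement))                 # anonymous condition
print(rate_agreement(statement, persona="name")) # named condition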

AI assistant apps on smartphones, including OpenAI ChatGPT, Google Gemini, and Anthropic Claude. (Getty Images)
The ADL report found that while all tested LLMs demonstrated measurable anti-Jewish and anti-Israeli bias, Meta’s Llama exhibited the most pronounced distortions. Llama provided “directly false” answers to questions about Jewish people and Israel, according to the ADL.
“Artificial intelligence is transforming how people consume information, but this research shows that AI models are not immune to deeply rooted societal biases,” said ADL CEO Jonathan Greenblatt. “When LLMs amplify misinformation or deny certain truths, it can distort public discourse and perpetuate anti-Semitism. This report underscores the urgent need for AI developers to take responsibility for their products and implement stronger safeguards against bias.”
The study also found that GPT and Claude showed "significant bias" when asked about the ongoing Israel-Hamas war. The LLMs were also more likely to refuse to answer questions about Israel than questions on other topics.

The LLMs tested in the report failed to reliably reject anti-Semitic tropes and conspiracy theories, the ADL warned. Every model except GPT showed more bias when responding to questions about Jewish conspiracy theories than about non-Jewish ones. All models, however, reportedly exhibited greater bias against Israel than against Jews.
A Meta spokesperson told Tech Word News that the ADL study did not test the latest version of Meta AI, and said the same prompts yielded different results on the updated model, particularly for open-ended questions. Meta emphasized that real users are more likely to ask open-ended questions than pre-formatted multiple-choice queries.
Google also raised concerns about the study, noting that the version of Gemini tested was a development model, not the consumer-ready product. Google argued that the ADL’s questioning format did not reflect real-world user interactions, which typically involve more detailed inquiries and responses.
Daniel Kelley, interim head of the ADL’s Center for Technology and Society, warned that AI tools are already widely used in schools, workplaces, and social media platforms. “AI developers must take proactive steps to address these issues, from improving training data to refining content moderation policies,” Kelley said in a press release.

Pro-Palestinian demonstrators march in Chicago before the Democratic National Convention on August 18, 2024. (Jim Vondruska/Getty Images)
The ADL has proposed several recommendations for AI developers and policymakers to address bias in AI systems. Developers are urged to work with government and academic institutions on pre-deployment testing and to use the NIST AI Risk Management Framework to identify and mitigate potential biases in training data. Governments, meanwhile, are encouraged to prioritize AI content safety and to establish regulatory frameworks for AI development. The ADL also calls for increased investment in AI security research.
OpenAI and Anthropic did not immediately respond to Tech Word News’s request for comment.