ADL issues ‘urgent call’ alleging anti-Israel bias in 4 AI large language models

FAN Editor


LLMs used for the report showed “a concerning inability to accurately reject antisemitic tropes and conspiracy theories,” the ADL warned. The organization also found that every LLM except GPT showed more bias when answering questions about conspiracy theories involving Jews than about those involving non-Jews, and that all of the models showed more bias against Israel than against Jews.

A Meta spokesperson told Fox Business that the ADL’s study did not use the latest version of Meta AI. The company said it tested the same prompts the ADL used and found that the updated version of Meta AI gave different answers when asked a multiple-choice question versus an open-ended one. Meta says users are more likely to ask open-ended questions than ones formatted like the ADL’s prompts.

“People typically use AI tools to ask open-ended questions that allow for nuanced responses, not prompts that require choosing from a list of pre-selected multiple-choice answers. We’re constantly improving our models to ensure they are fact-based and unbiased, but this report simply does not reflect how AI tools are generally used,” a Meta spokesperson told Fox Business.

Google raised similar issues when speaking with Fox Business. The company said the version of Gemini used in the report was the developer model, not the consumer-facing product.

Like Meta, Google took issue with how the ADL posed questions to Gemini. According to Google, the ADL’s statements do not reflect how users actually ask questions, and the answers users would receive in practice would be more detailed.

Daniel Kelley, interim head of the ADL Center for Technology and Society, warned that these AI tools are already ubiquitous in schools, offices and on social media platforms.

“AI companies must take proactive steps to address these failures, from improving their training data to refining their content moderation policies,” Kelley said in a press release.

Pro-Palestine protesters march ahead of the Democratic National Convention on Aug. 18, 2024 in Chicago, Illinois. (Jim Vondruska/Getty Images)

The ADL offered several recommendations for both developers and government officials looking to tackle bias in AI. First, the organization calls on developers to partner with institutions such as government agencies and academia to conduct pre-deployment testing.

Developers are also encouraged to consult the National Institute of Standards and Technology’s (NIST) risk management framework for AI and to consider possible biases in training data. Meanwhile, the government is urged to encourage AI to have “built-in focus to ensure safety of content and uses.” The ADL is also urging the government to create a regulatory framework for AI developers and to invest in AI safety research.

OpenAI and Anthropic did not immediately respond to Fox Business’ request for comment.
