AI Chatbots Spewing 2024 Election Lies Misleading Voters

A recent study has found that AI chatbots are giving false and misleading information about the 2024 US election, including answers that could be harmful or that omit key details.

According to the study, conducted by the AI Democracy Projects and the nonprofit media outlet Proof News, AI models are prone to directing voters to polling places that don't exist or inventing nonsensical responses.

These mishaps emerge at a time when Congress is yet to pass laws that regulate AI in politics, leaving tech firms “to govern themselves.”

A string of errors and falsehoods

According to the results of the study, cited by CBS News, the AI chatbots are producing inaccurate information just as the US presidential primaries are underway.

This is happening at a time when many people are turning to AI tools for election information, a trend experts consider harmful because the tools are serving up half-truths or outright falsehoods.

“The chatbots are not ready for primetime when it comes to giving important, nuanced information about elections,” said Seth Bluestein, a Republican city commissioner in Philadelphia.

Bluestein was among the election officials and researchers who test-drove the chatbots as part of the study.

Beyond supplying wrong information, there are concerns that AI tools could actively discourage voters from exercising their right to vote. Last month, voters in New Hampshire received robocalls mimicking President Biden's voice that discouraged them from voting in the presidential primaries, urging them to save their energy for the November polls.

Another example is Meta’s Llama 2, which inaccurately said voters in California could cast their votes via text messages.

“In California, you can vote via SMS using a service called Vote by Text,” responded Llama 2.

“This service allows you to cast your vote using a secure and easy-to-use system that is accessible from any mobile device.”

The researchers, however, noted that voting by text message is not permitted anywhere in the US.

The researchers also discovered that of the five AI models tested—OpenAI's GPT-4, Llama 2, Anthropic's Claude, Google's Gemini, and French company Mistral's Mixtral—none "correctly stated that wearing clothing with campaign logos, such as MAGA hats, is barred at Texas polls under the state's laws."

Of these chatbots, the researchers found that Llama 2, Mixtral, and Gemini had the "highest rates of wrong answers."

Gemini got almost two-thirds of all its answers wrong.

The hallucinations alarm election officials

The researchers also found that four of the five chatbots wrongly said voters would be blocked from registering weeks before the election in Nevada, where same-day registration has been allowed since 2019.

“It scared me, more than anything, because the information provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat who participated in last month’s testing workshop.

According to a poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy, many people in the US fear that AI tools will fuel the spread of "false and misleading information during this year's election."

It's not just election information that has been misleading: Google's AI image generator, Gemini, recently made headlines for producing images riddled with historical and racial inaccuracies.

How the tech firms responded

While some companies acknowledged the errors and pledged to correct them, Meta spokesman Daniel Roberts told the Associated Press that the findings were "meaningless" because they do not precisely "mirror the way people interact with chatbots."

Anthropic, for its part, said it plans to roll out an updated version of its AI tool with correct election information.

“Large language models can sometimes ‘hallucinate’ incorrect information,” Anthropic’s Trust and Safety Lead, Alex Sanderford, told the AP.

ChatGPT maker OpenAI also highlighted plans to “keep evolving our approach as we learn more about how our tools are used.”

Despite these pledges, the study's findings still raise questions about the tech firms' willingness to "comply with their own pledges."

About a fortnight ago, tech firms signed a pact to voluntarily adopt “reasonable precautions” to stop their tools from being used for generating increasingly “realistic” content that provides “false information to voters about when, where, and how they can lawfully vote.”

Image credits: Shutterstock, CC images, Midjourney, Unsplash.