The mainstream availability of generative AI has set this Election Day apart from 2020, changing everything from how voters access election information to the volume and sophistication of false information they encounter.
The 2024 election marks the first time AI tools have been widely accessible to the public, political actors, and foreign threat actors alike. How they are used or misused could affect the democratic process.
“We’re clearly beginning to see the methods and ways in which AI is going to shape democratic discourse now and into the future,” Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology (CDT), told Newsweek, with the caveat that “the effects of AI may still be small versus what some people had anticipated.”
Givens described generative AI as “the magnifier to existing influence efforts” that can make disinformation from foreign threat actors “cheaper, easier, perhaps more convincing.”
Voter Misinformation and the Role of Chatbots
Givens expressed concern about voters relying on AI chatbots for information. “Voters shouldn’t be looking to AI chatbots for authoritative information about voting or any issue where it’s essential to receive timely and accurate information,” she warned. “The evidence shows just far too many inaccurate and incomplete answers.”
For example, in February 2024 the AI Democracy Projects published an investigative report, “Seeking Reliable Election Information? Don’t Trust AI,” detailing how five leading AI models (Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2, and Mistral’s Mixtral) responded to questions voters might ask.
A majority of expert testers rated half of the chatbots’ responses to election-related queries as inaccurate; more than a third of responses were rated harmful or incomplete, and a further 13 percent were rated biased.
Similarly, in June, the Reuters Institute analyzed how several popular chatbots, including ChatGPT and generative AI startup Perplexity’s, answered questions about the European Parliament elections. It noted “instances of partially correct or false and misleading information.”
“These inaccuracies highlight the need for caution when relying on chatbots for election-related queries,” said the analysis.
One of them, Perplexity, responded by building an Election Information Hub, saying it had partnered with The Associated Press (AP) and Democracy Works to ensure the reliability of its information.
Foreign Interference Amplified by AI
The Office of the Director of National Intelligence (ODNI) released a statement in September highlighting how foreign actors were leveraging AI to interfere with U.S. elections.
According to the statement, Russia and Iran are using generative AI to boost their influence operations, creating AI-generated content of and about prominent U.S. figures to sow discord.
“Of the top three actors we are tracking, Russia has generated the most AI content related to the election and has done so across all four mediums—text, images, audio, and video—though the degree to which this content has been released and spread online varies,” said the statement.
The ODNI noted that the deepfakes and other forms of AI-generated disinformation were consistent with “Russia’s broader efforts to boost the former President’s candidacy and denigrate the Vice President and the Democratic Party, including through conspiratorial narratives,” while seeking to amplify divisive issues such as immigration.
“Last week, for example, we saw a Russian influence campaign that sought to mislead people into thinking that Haitian immigrants were casting multiple illegal votes in Georgia, and that’s picking up on anti-immigrant sentiment, that’s picking up on deep concerns about voter fraud,” said Givens.
“None of it was real, but of course, foreign actors know how to exploit these fissure points in our society, and that’s what they try and take advantage of,” she added.
Efforts to Combat AI-Driven Misinformation
Election officials are proactively addressing these challenges. “The most important thing is that election officials have been doing a lot of work to boost trusted information that voters can rely on,” Givens said.
“When there have been deepfake videos or images circulated, like the two by Russian foreign actors last week, the relevant secretaries of state have come out immediately to debunk it and to convince people of the truth, and to reassure them about the integrity of the electoral process,” she added.
“Prebunking” campaigns are also gaining traction as a method to inoculate voters against misinformation, AI-generated or otherwise. These initiatives expose people to weakened doses of misinformation paired with explanations, helping them develop “mental antibodies” to recognize and fend off falsehoods.
“However, it is important to note that digital literacy is complex and inoculation interventions on their own do not offer a silver bullet to all of the challenges of navigating the post-truth information landscape,” stated the Harvard Kennedy School’s Misinformation Review.
Regulatory Responses and Big Tech Responsibility
In October 2023, the Biden Administration’s AI Executive Order included a provision encouraging government officials to develop authentication techniques to increase public trust in their communications.
To that end, CDT has in the past year filed comments urging the Federal Election Commission to address the use of misleading deepfake images by political campaigns.
“We need increased literacy for users to understand the limitations of these tools,” Givens said. “But it’s also incumbent on the companies to be realistic and honest about what their tools can and cannot do.”
“The most responsible companies are making sure that when it’s essential that information be accurate and timely, they are actually referring people to authoritative sources of information, like the website for the National Association of Secretaries of State or other locations. But it’s really important that scaffolding and signposting exist so that users can get accurate information they can trust.”
AI-Aided Political Campaigns
Americans are concerned about artificial intelligence being used to manipulate elections, according to recent research published in August. The study, which surveyed over 7,600 U.S. residents, found that while voters are generally wary of AI in relation to political campaigns, they’re particularly alarmed by its potential for deception.
However, this alarm ran along partisan lines. The researchers found that political parties would face few consequences from their own supporters for using AI deceptively in campaigns: when study participants were presented with the scenario that their preferred party was using AI for deceptive purposes such as deepfakes or automated disinformation, they did not significantly lower their support for that party.
The research, conducted by scientists at the University of Bamberg and LMU Munich in Germany and at National Taiwan University, suggests voters aren’t opposed to all campaign uses of AI.
While participants objected to deceptive practices, they showed more acceptance of AI being used for basic campaign operations like content generation. This indicates voters can distinguish between legitimate and concerning applications of the technology.
Perhaps the most interesting finding to emerge from the study is how exposure to deceptive campaign AI practices influenced broader attitudes toward the technology. When participants learned about AI being used to create misleading political content, they became significantly more likely to support strict AI regulation and even backed pausing AI development altogether.