The Growing Threat of Artificial Intelligence Misinformation in Elections
In June, amid a heated Republican gubernatorial primary race in Utah, a disturbing video surfaced online purporting to show Gov. Spencer Cox confessing to fraudulent ballot signature collection. The governor never made such a statement, and the courts upheld his election victory.
This false video is just one example of the rising tide of election-related content generated by artificial intelligence, known as deepfakes. These deepfakes, which can be convincingly deceptive, pose serious challenges to combating misinformation during election seasons.
“Now we can supercharge the speed and the frequency and the persuasiveness of existing misinformation and disinformation narratives,” said Tim Harper, a senior policy analyst for democracy and elections at the Center for Democracy and Technology.
AI technology has evolved rapidly, making it easier for almost anyone to create fake content. With approximately half of the world’s population living in countries holding elections this year, the role of AI in spreading misinformation is a critical concern.
How Artificial Intelligence Facilitates Misinformation
Artificial intelligence can generate misinformation unintentionally as well as deliberately. Chatbots, for example, may confidently present incorrect information when the data they draw on contains inaccuracies. Efforts are underway to make AI tools more transparent and safer, particularly ahead of elections.
One major concern is the use of generative AI to create impersonations, such as the deepfake video of Florida Governor Ron DeSantis allegedly dropping out of the 2024 presidential race.
These misinformation campaigns can target specific groups or individuals, often exploiting localized information to deceive targets more effectively.
Verifying Digital Identities
In response to the threat of deepfakes in Utah elections, a partnership between a public university and a tech platform aims to combat AI-generated misinformation. The initiative seeks to authenticate politicians’ digital identities to mitigate the impact of fake content.
The verification platform allows users to confirm the authenticity of published content and distinguish between genuine and unauthorized materials. By implementing these measures, the project aims to build trust in the electoral process.
Motivations Behind Misinformation
Various groups are behind misinformation campaigns, driven by political, monetary, or disruptive motives. The dissemination of fake content, particularly during elections, can sow discord and mistrust among the public.
Strategies for Combating Misinformation
Individuals can help curb the spread of misinformation by pausing before sharing content that provokes a strong emotional reaction and by using verification tools to check its source.
Technologists can also play a crucial role by adhering to best practices in AI development, promoting transparency, and implementing safeguards against deceptive practices.
While laws addressing AI misuse in elections are still evolving, proactive steps can be taken to enhance election integrity and combat the spread of misinformation.
New Jersey Monitor is part of States Newsroom, a nonprofit news network supported by grants and donors. New Jersey Monitor maintains editorial independence. Contact Editor Terrence T. McDonald at info@newjerseymonitor.com. Follow New Jersey Monitor on Facebook and X.