About 2 billion voters will head to the polls for more than 100 democratic elections in 2024, including the U.S. presidential election Nov. 5. These campaigns will unfold in a new technology landscape: rapidly advancing areas of artificial intelligence (AI), such as generative AI tools, weren't widely available in the previous election cycle.
Many of these generative AI tools make it easier to create or manipulate synthetic voice, video and photo media known as deepfakes. Deepfakes pose a significant risk of amplifying political disinformation and sowing distrust at critical phases in democratic processes, from pre-election campaigning to the peaceful transition of power.
Intel 471 is excited to share our new report “Deepfake vs Democracy: The Impact of Disinformation Campaigns on 2024's Election” in which we assess the potential impact of the technology and showcase real examples of how malicious deepfakes have been used in recent democratic elections in India, Pakistan, Taiwan and the U.S.
Report highlights
Are deepfakes a threat or opportunity?
Politicians and political operatives might answer this question differently as over a billion voters in India head to the polls April 19 in the first phase of its 44-day election. India's Prime Minister Narendra Modi of the ruling Bharatiya Janata Party (BJP) has said AI-manipulated video and images are among the "biggest threats that the Indian system is facing at the moment." Yet managers of several recent campaigns, including the BJP's, have claimed to use deepfakes to influence voters.
Can we safeguard elections from AI deepfakes in 2024?
Many governments, organizations and corporations are working to combat deepfake-enabled disinformation, but it's highly unlikely that any country hosting democratic elections in 2024 will be able to implement safeguards in time.
There's a growing need for deepfake detection that keeps pace with improving AI image and video generators. However, creating robust safeguards will require a combination of regulatory frameworks, ethical guidelines, digital authentication tools, content authenticity verification and content provenance. This will take time, and that means AI-enabled and conventional disinformation campaigns will pose a significant risk to elections worldwide in 2024.
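To make the content-provenance idea concrete: schemes in this space attach verifiable metadata to media so that any later edit is detectable. The sketch below is a deliberately simplified illustration using only Python's standard library, assuming a publisher-held secret key and an HMAC over the file's hash; real provenance systems (such as C2PA-style manifests) use public-key signatures and standardized metadata rather than a shared secret.

```python
import hashlib
import hmac

def sign_content(media_bytes: bytes, key: bytes) -> str:
    """Produce a provenance tag: an HMAC over the media's SHA-256 hash."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """True only if the media is byte-identical to what was signed."""
    return hmac.compare_digest(sign_content(media_bytes, key), tag)

# Hypothetical key and payloads for illustration only.
key = b"publisher-secret-key"
original = b"...original video bytes..."
tag = sign_content(original, key)

print(verify_content(original, key, tag))                # -> True
print(verify_content(b"...edited bytes...", key, tag))   # -> False
```

Even a one-bit change to the media produces a different hash, so the tag no longer verifies; the hard problems in practice are key distribution, metadata survival across re-encoding and platform uploads, and adoption by publishers, which is why the paragraph above stresses that robust safeguards will take time.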