Over the past year, Partnership on AI and others in civil society, media, and technology have anticipated and expressed concerns about AI’s impact on global elections. While some may feel those fears were overblown, AI undeniably had a presence in elections around the world, with negative consequences for voters. As a panel of experts and PAI partners predicted last year at PAI’s Policy Forum, AI’s impact was not as detrimental as many believed it would be; however, both the quantity and quality of AI-generated election content have increased.
The ubiquity and accessibility of AI applications have significantly improved the quality of synthetic content, leading to confusion, distrust, and desensitization as voters are inundated with generated political content. As the 2024 U.S. election concludes this week, let’s reflect on the many ways AI contributed to the election through the spread of misinformation and deepfake audio, video, and images.
Voter Suppression
Deepfakes are AI-generated or AI-altered audio or visual media, usually depicting a real person saying or doing something they have not said or done. Early this year, voters in New Hampshire received a call from someone they believed to be President Joe Biden, urging them not to vote in their state’s primary election. The caller, taking on the likeness and identity of President Biden, was in fact a robocall using AI to mimic his voice. To add to the deception, the call was manipulated so the caller ID appeared to come from Kathy Sullivan, a former New Hampshire Democratic Party chair. It was later discovered that two companies, Lingo Telecom and Life Corporation, as well as political consultant Steven Kramer, were behind the call. Kramer and Lingo Telecom have since been fined by the FCC for their attempt to deceive and defraud voters. Although the call is believed to have had no effect on the New Hampshire primary, the robocall serves as an example of how bad actors can use AI to interfere with future elections.
Making Opposing Candidates Look Inept
In a widely circulated AI-generated video, presidential candidate Kamala Harris appears to ramble and make little sense during a speech at Howard University. The video, originally shared on X (formerly known as Twitter), garnered over four million views. Although presented online as authentic, it is actually AI generated: Reuters confirmed it was fake, noting that the audio quality and the visual artifacts around Harris’s mouth show clear signs of digital alteration. Viewers who perceived the video as authentic may have come away with a more negative opinion of Harris.
Screenshot of Trump’s social media post displaying fake endorsements of his campaign
Fake Celebrity Endorsements
In August, presidential candidate Donald Trump posted AI-generated images to his social media platform, Truth Social, falsely showing Taylor Swift and her fans supporting his campaign. When Swift became aware of the misinformation being spread about her support, she took to social media to “combat misinformation. . . with the truth.” Recognized as one of the world’s most influential people, Swift publicly endorsed Kamala Harris for president in an Instagram post. Such misuses of AI have prompted officials such as California Governor Gavin Newsom to take action. In September, Newsom signed legislation to combat deepfake election content and protect the digital likeness of actors and performers. The laws are intended to “remove deceptive content from large online platforms, increase accountability, and better inform voters.”
Pandering to Demographics
This election, we have also seen AI-generated images used heavily on social media to sway voter sentiment, particularly in communities with undecided voters. Images of Trump posing with Black voters were shared online to garner more support from the Black community. Because Trump has been especially hostile toward Black, Indigenous, and People of Color communities throughout his campaigns, the AI images appeared to be an attempt to mend his reputation with those voters. While such images are unlikely to change the minds of decided voters, they could reinforce preexisting beliefs or sway undecided voters by skewing their perception of specific candidates.
AI generated image of Donald Trump running from police officers
Poking Fun at the Opposing Party
Sometimes deepfakes take the form of a tune, as we observed in the Drake versus Kendrick feud earlier this year. During this election cycle, the DNC took a less creative approach, generating a deepfake parody of Republican National Committee co-chair Lara Trump’s heavily autotuned track “Anything is Possible.” The parody was touted as “a summer party anthem about how the RNC is falling apart under Lara Trump and the rest of the new ultra-MAGA team.” Poking fun took a more serious turn when AI-generated images of Donald Trump being arrested began making the rounds on social media as Trump faced criminal charges. Although Trump did become the first former president in history to be convicted of a felony, the fake arrest images further damaged his reputation.
AI generated image of girl holding a puppy
Spreading Propaganda
AI-generated images have also been used to spread propaganda online. Following Hurricane Helene, an AI-generated image of a little girl holding a puppy in a rescue boat circulated widely on X, sparking outrage at the Biden administration’s response to the disaster. Even after the image was proven to be fake, people felt compelled to keep sharing it as a symbol of the “trauma and pain people are living through right now.” Similarly designed to spark outrage and pull at heartstrings, AI images of animals displaying pro-Trump signs have also spread online. These images perpetuate false claims made by Trump and his campaign that Haitian immigrants are abducting and eating the pets of other Springfield, Ohio residents. Although fake, such AI-generated images are an effective tool for spreading misinformation and reinforcing harmful political narratives.
The First GenAI Election…But Not the Last
The impact of AI on the 2024 U.S. election may have been minimal, but as the technology advances and becomes more accessible, future elections may see far greater effects.
At PAI, we recognize the risks that AI can pose to democracy. That is why we continue to bring together experts from across civil society, academia, and industry to address these challenges, and why we remain dedicated to maintaining the integrity of our electoral processes. To stay up to date with our work in this space, sign up for our newsletter.