From Algorithms to Ballots: How PAI’s Community of Practice Navigates AI’s Impact on Election Information
Technology is rapidly transforming the political landscape, with AI presenting both challenges and opportunities in elections. In our previous blog, we discussed how major elections around the world have already seen a rise in the use of generative or synthetic media (visual, auditory, or multimodal content that has been generated or modified, most commonly by artificial intelligence) to sway voter sentiment. We’ve seen this in India, where deepfakes were used to promote or discredit candidates. Earlier this year, in the U.S., supporters created and circulated AI-generated images showing Trump surrounded by Black voters to encourage Black communities to support and vote Republican. With the help of AI, candidates can make false claims and use AI-generated media to back them up. In August of this year, Trump posted AI-generated images of Taylor Swift showing support for his campaign, while claims have appeared online that the crowds at Vice President Harris’ rallies are AI-edited. To navigate this AI-generated media environment, PAI has held monthly Community of Practice (COP) meetings since February 2024 with representatives from industry, media, and civil society to explore how different stakeholders are addressing the challenges posed by the use of AI tools in elections.
As we approach one of the most significant U.S. presidential elections in history, PAI continued its AI and Elections Community of Practice (COP) meetings with two recent presentations: Google described its approach to keeping users safe and maintaining election integrity on its platform, and the Associated Press (AP) explained its journalistic approach to covering the use of AI in elections. This multistakeholder approach is what allows PAI to bring together communities that become catalysts for change.
The COP convenings are in-depth conversations with a curated group of stakeholders directly involved in the field of AI and elections. The meetings give attendees an avenue to hear how other organizations approach the use of AI in elections, to identify new areas to explore and voices to include in this context, and to forecast where the field is moving and how PAI can contribute to the conversation.
Google’s Approach to Safeguarding Elections
In the fourth COP meeting in the series, Google presented its comprehensive strategy for safeguarding elections, especially for users who turn to Google Search for election updates. Google divides its elections work into three pillars:
1) helping voters by surfacing high quality information,
2) equipping campaigns and elected officials with resources to tackle election-related security challenges,
3) safeguarding its platforms and users from abuse with security tools and training.
Google has long-standing policies to protect its platforms from abuse during elections. These policies inform how Google approaches areas like manipulated media, hate and harassment, incitement to violence, and demonstrably false claims that could undermine democratic processes. Google is also using AI to detect abuse at scale, rapidly reviewing and labeling policy-violating content and tuning large language models (LLMs) to identify new abuse patterns.
Other actions Google has taken to safeguard elections from AI-generated content include restricting responses to election-related queries in the Gemini app and web experiences. Google has also introduced new tools to help people identify content that may seem realistic but is actually AI-generated. For example, Google requires election advertisers to prominently disclose when their ads include realistic synthetic content that has been digitally altered or generated. Similarly, on YouTube, videos containing altered or synthetic content are labeled based on creator disclosures. Additionally, for its own generative AI products, Google uses SynthID to embed a digital watermark directly into AI-generated images and audio.
The Associated Press’ Reporting on AI in Elections
The Associated Press (AP) discussed how political campaigns are using AI tools to make voter outreach more efficient, as well as the harms deepfakes may pose to elections. The AP has been at the forefront of reporting on AI’s impact in major elections across the globe and how it is being used to support or undermine political campaigns. The discussion revealed how AI-enabled voter targeting has changed voter outreach, allowing campaigns to tailor messages to supporters effectively. Microtargeting voters has become even more powerful due to the availability of AI tools, as campaigns can now collect and understand data more quickly and effectively, such as for use in targeted fundraising appeals. Voter polling has also been significantly affected by AI: campaigns can now feed polling data into large language models and understand more precisely how a particular demographic of voters may feel about a given topic.
The Harris campaign quickly debunked accusations of deepfaking, but down-ballot candidates may lack resources to do so effectively
The AP also highlighted the significant risks deepfakes pose in elections, as witnessed with the Biden robocall incident. In instances such as the recent Harris-Walz rally AI controversy, well-known candidates are able to quickly debunk deepfakes or false claims that could otherwise severely impact voter sentiment. For down-ballot candidates, however, the risk of harm is much higher, as they may lack the resources and visibility to counteract false narratives.
The discussion also highlighted the potential disproportionate impact of misleading AI-generated content on low-income communities, communities of color, and non-English-speaking or English-as-a-second-language communities, which may be less versed in AI or lack strong relationships with election officials. Understanding how people feel about AI, what they think of its capabilities, and how they perceive its use is crucial to bridging that information gap.
Consistent Themes Across the Board
Throughout the discussions, the following themes emerged:
- Navigating dual-use – Striking a balance between leveraging AI’s capabilities and safeguarding the democratic process is vital to ensuring this technology will be used to uphold democracy rather than undermine it. As the technology becomes more accessible and widespread, understanding how to use it responsibly will be essential.
- Disparate impact – Understanding and addressing the disparate impact of AI use in elections on marginalized communities is another consistent theme. AI can disproportionately affect certain groups, particularly low-income communities, communities of color, non-English-speaking or English-as-a-second-language communities, and lesser-known candidates. Addressing these disparities requires organizations to come together to provide tailored solutions that protect these vulnerable groups.
- Transparency – Disclosure of synthetic media helps voters understand when and how AI is being used. This transparency is critical to enabling voters to make informed decisions and to navigating the dual-use challenges described above.
What’s To Come
Looking ahead, the AI and Elections Community of Practice will continue to explore the impact of AI on elections and politics and find ways PAI can bring key stakeholders together to better address and understand related challenges.