From Deepfakes to Disclosure: PAI Framework Insights from Three Global Case Studies

While many are harnessing AI for productivity and creativity, AI’s rapid advancement has accelerated the potential for real-world harm. AI-generated content, including audio and video deepfakes, has been used in elections to spread false information and manipulate public perception of candidates, undermining trust in democratic processes. Attacks on vulnerable groups, such as women, through the creation and spread of deepnudes and other non-consensual intimate imagery have left communities shaken and organizations scrambling to mitigate future harms.

To mitigate the spread of misleading AI-generated content, organizations have begun to deploy transparency measures. Recently, policymakers in China and Spain announced efforts to require labels on AI-generated content circulated online. Although governments and organizations are taking steps in the right direction to regulate AI-generated content, more comprehensive action is urgently needed at a global scale. PAI is working to bring together organizations across civil society, industry, government, and academia to develop comprehensive guidelines that further public trust in AI, protect users, and advance audience understanding of synthetic content.

Although governments and organizations are taking steps in the right direction to regulate AI-generated content, more comprehensive action is urgently needed at a global scale.

Launched in 2023, PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action provides timely and normative guidance for the use, distribution, and creation of synthetic media. The Framework supports Builders of AI tools, and Creators and Distributors of synthetic content in aligning on best practices to advance the use of synthetic media and protect users. The Framework is supported by 18 organizations, each of which has submitted a case study exploring the Framework’s application in the real world.

As we approach the conclusion of our case study collection in its current format, we are excited to publish the final round of case studies from Google, and civil society organizations Meedan and Code for Africa. These three case studies explore how synthetic media can impact elections and political content, how disclosure can limit misleading, gendered content, and how transparency signals help users make informed decisions about content, all vital considerations when governing synthetic media responsibly.

Code for Africa Explores Synthetic Content’s Impact on Elections

In May 2024, weeks before South African general elections, one political party’s use of generative AI tools sparked controversy: it distributed a video showing South Africa’s flag burning. Although the video was AI-generated, a lack of disclosure led to outrage from voters and a statement by the South African president that the video was treasonous.

The burden to interpret generative AI content should not be placed on audiences themselves, but on the institutions building, creating, and distributing content.

In its case study, Code for Africa argues for full disclosure of all AI-generated or edited content, increased training of newsroom staff on how to use generative AI tools, updated journalistic policies that take into account advancements in AI, and increased transparency of editorial policies and journalistic standards with users. Notably, it emphasizes that the burden to interpret generative AI content should not be placed on audiences themselves, but on the institutions building, creating, and distributing content.

Although these recommendations could not have prevented the video’s creation and dissemination, the case study highlights the importance of direct disclosure, as recommended in our Framework. Direct disclosure by the video’s creator could have mitigated some of the public backlash and subsequent fallout. With direct disclosure such as labeling, viewers would have been able to distinguish between fact and AI-generated media, keeping the focus on the content’s intended message rather than on questions about its authenticity.

Read the case study

Google’s Approach to Direct Disclosure

Recognizing the importance of user feedback when implementing direct disclosure mechanisms, Google conducted research to identify which mechanisms would be most effective and useful for users. The findings shaped Google’s approach to direct disclosure, in particular:

  • How prominent the label should be: considering its impact on the implied authenticity effect (when some content is labeled as AI-generated, people may believe content without labels must be authentic) and the liar’s dividend (the ability of bad actors to call into question authentic content due to the prevalence of synthetic content)
  • What additional information is needed: including an entry point for users to learn more about content, such as Google’s “About this image”
  • How to provide users with enough understanding to avoid misinterpretation of direct disclosure

These takeaways helped Google develop disclosure solutions to implement across three of its surfaces: YouTube, Search, and Google Ads. Google noted that disclosures must feature context beyond “AI or not” in order to support audience understanding of content. AI disclosures provide only one data point that can help users determine the trustworthiness of content, alongside other signals such as “What is the source of this content?”, “How old is it?”, and “Where else might this content appear?”

Disclosures must feature context beyond “AI or not” in order to support audience understanding of content.

In addition, Google recommends further research to better understand user needs, media literacy levels, and disclosure comprehension and impact. By better understanding how users interpret direct disclosures and use them to make decisions about content, platforms can implement scalable and effective disclosure mechanisms that support synthetic content transparency and serve audience understanding of content.

These recommendations align with how direct disclosure is defined in the Framework: “viewer or listener-facing and includes, but is not limited to, content labels, context notes, watermarking, and disclaimers.” They are also consistent with the Framework’s three key principles of transparency, consent, and disclosure.

Read the case study

Meedan Identifies Harmful Synthetic Content in South Asia

Check is an open-source platform created by Meedan that helps users connect with journalists, civil society organizations, and intergovernmental groups on closed messaging platforms, such as WhatsApp. Via Check, users can help identify and debunk malicious synthetic content. By using Check and working with local partners on a research project, Meedan was able to identify that misleading, gendered content in South Asia contained synthetic components.

In its case study, Meedan recommends that platforms improve content monitoring and screening, as well as create localized escalation channels that can take into account diverse contexts and regions. Once implemented, these methods can help platforms mitigate the spread of malicious content being shared among “Larger World” communities (Meedan’s preferred term for the Global South) and better support local efforts to combat it.

The use of direct disclosure could have helped researchers identify synthetic content sooner.

In the Framework, we recommend that Creators directly disclose synthetic content “especially when failure to know about synthesis changes the way the content is perceived.” The use of direct disclosure in this instance could have helped researchers identify synthetic content sooner. This case study not only highlighted the need for direct disclosure, but also shed light on the importance of considering localized contexts when seeking to mitigate harm – an important aspect of regulating synthetic content at a global scale.

Read the case study

What’s Next

To develop comprehensive global regulation and best practices, we need the support of organizations across various fields, including industry, academia, civil society, and government. The iterative case reporting process between PAI and supporter organizations demonstrates what collaboration with supporters across these fields can accomplish in driving real-world change.

The transparency and willingness of these organizations to provide insights into their efforts on governing synthetic media responsibly is a step in the right direction. In our March 2024 analysis, we recognized the importance of voluntary frameworks for AI governance. We hope these case studies reveal further insights into how policy and technology decisions can be made, provide a body of evidence about real-world AI policy implementation, and build further consensus on best practices for evolving synthetic media policy.

These case studies span a range of impact areas and explore various mitigation strategies. This work from our supporters contributes to the refinement of the Framework, the pursuit of future synthetic media governance, and the effort to ensure that sound guidance is implemented by Builders, Creators, and Distributors.

In the coming months, we will incorporate lessons learned from these case studies into the Framework to ensure our guidance remains responsive to shifts in the AI field. We will also publish an analysis of key takeaways, open questions, and future directions for the field, accompanied by public programming that addresses some of these themes. To stay up to date on where this work leads us next, sign up for our newsletter.