Towards Responsible AI Content

Policy Recommendations from Five Case Studies Implementing PAI’s Synthetic Media Framework 

Launched in February 2023, Partnership on AI’s (PAI) Synthetic Media Framework has institutional support from 18 organizations creating, distributing, and building infrastructure for synthetic media. Each supporter committed to publishing an in-depth case study exploring their implementation of the Framework in practice. In March 2024, we released the first ten cases, along with a PAI-drafted case study and an accompanying analysis, focused on transparency, consent, and harmful/responsible use cases.

With the next five cases, we’ve narrowed our focus to an underexplored area of synthetic media governance: direct disclosure — methods to convey to audiences how content has been modified or created with AI, like labels or other signals.

PAI compiled several takeaways from the cases in the blog post 10 Things You Should Know About Disclosing AI Content that inform the recommendations below. We hope these recommendations will be helpful to policymakers and practitioners promoting transparency, trust, and informed decision making for media consumers in the AI age. We plan to build upon these recommendations in early 2025 when we release the final set of case studies.

Policy Recommendation 1

Better define what counts as a material AI use, based on multistakeholder input and user research.

As Meta notes in their case, “AI development is moving fast. Soon, AI will be meaningfully embedded in much of, if not most of, the content people see online.” This means that simply disclosing the presence of AI is an imprecise form of transparency for audiences looking to understand content.

One response could involve creating clearer parameters for when AI’s presence materially affects content meaning, and therefore warrants specific and salient direct disclosure.

Comprehensively defining what counts as a material or meaningful AI edit, beyond broad and sometimes subjective categories, will likely be impossible, since context plays a key role in content meaning. However, it is still worthwhile for cross-sector stakeholders to more precisely define and identify high-stakes AI editing categories to inform direct disclosure practices and design.

For instance, user research described in the cases suggests that AI-generated or edited content about elections and current events is material and therefore may warrant specific disclosure. The research also suggests that AI used to create or augment photorealistic content, as well as modifications that show someone doing or saying something they didn’t say, falls into material categories. These context and content categories stand in contrast to common immaterial AI edits, like simple cosmetic changes such as color correction, de-noising, and other aesthetic editing routinely done by photographers.

Typical Material AI Uses

  • Content about elections and current events
  • Photorealistic content
  • Modifications that show an individual doing or saying something they didn’t say

Typical Immaterial AI Uses

  • Cosmetic changes, including color correction
  • De-noising
  • Aesthetic editing (commonly done by photographers)

Further complicating matters, there are times when color correction can produce material changes to content — for instance, if someone changed a politician’s skin color — so highlighting contexts (e.g., elections) that warrant specific ways of communicating AI edits is important.
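To make the interplay between edit type and context concrete, here is a minimal, purely illustrative sketch of how a platform’s disclosure logic might combine the categories above with a context override. The category names, the `EditRecord` structure, and the `warrants_specific_disclosure` function are hypothetical assumptions for illustration; they are not drawn from PAI’s Framework or any supporter’s implementation.

```python
# Hypothetical sketch of how a platform might encode the material/immaterial
# categories above in a disclosure decision. Category names, the EditRecord
# structure, and the override rule are illustrative assumptions only.
from dataclasses import dataclass

# Edit types the cases describe as typically material or immaterial.
MATERIAL_EDITS = {"photorealistic_generation", "depicted_action_or_speech_change"}
IMMATERIAL_EDITS = {"color_correction", "de_noising", "aesthetic_editing"}

# Contexts the cases flag as high-stakes, where even "cosmetic" edits may matter.
HIGH_STAKES_CONTEXTS = {"election", "current_events"}

@dataclass
class EditRecord:
    edit_type: str   # e.g., "color_correction"
    context: str     # e.g., "election", "nature_photography"

def warrants_specific_disclosure(edit: EditRecord) -> bool:
    """Return True when an edit likely needs a specific, salient direct disclosure."""
    if edit.edit_type in MATERIAL_EDITS:
        return True
    # Context override: color-correcting a politician's skin tone in election
    # content is material even though the edit type is usually cosmetic.
    if edit.context in HIGH_STAKES_CONTEXTS:
        return True
    return edit.edit_type not in IMMATERIAL_EDITS  # unknown edit types default to disclosure

# Example: a cosmetic edit in an election context still triggers disclosure.
print(warrants_specific_disclosure(EditRecord("color_correction", "election")))            # True
print(warrants_specific_disclosure(EditRecord("de_noising", "nature_photography")))        # False
```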

Policy Recommendation 2

Support rich, descriptive context about content – whether or not the media has been generated or edited with AI.

Given the difficulty of creating a comprehensive policy that outlines all material edits requiring direct disclosure in every context, policies should default to supporting indirect disclosures for all media, which in turn support direct disclosures. This approach helps audiences assess whether changes to media are material without concentrating normative power in the hands of a few companies. For instance, if a social media user can easily see that an image purporting to show a fire at their town hall was made with an AI tool, they can recognize the content as materially altered and understand that it is still safe to attend an event held there.

Descriptive media context is a vital complement to more precise direct disclosures on the material content categories and modification types identified in Recommendation #1. Greater context about content, including features like content source, origin, and even who has verified the content’s authenticity — such as those featured in the Content Credentials described in the Microsoft case — can serve as inputs for audiences forming their own conclusions about whether or not edits or modifications are material.
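As a rough illustration of how such descriptive context could flow through to what audiences see, the sketch below surfaces source, tool, and verifier information from a simplified provenance record. The record’s field names are hypothetical stand-ins loosely modeled on Content Credentials concepts; they are not the actual C2PA manifest schema or any vendor’s API.

```python
# Illustrative only: a simplified provenance record loosely inspired by
# Content Credentials concepts. Field names ("source", "generator",
# "verified_by", "edits") are hypothetical, not the real C2PA schema.
import json

provenance_json = """
{
  "source": "Example News Photo Desk",
  "generator": "ExampleAI Image Tool v2",
  "verified_by": "Example Certification Authority",
  "edits": ["ai_generation", "color_correction"]
}
"""

def describe_context(record: dict) -> str:
    """Turn provenance fields into a short, human-readable context line for viewers."""
    parts = []
    if record.get("generator"):
        parts.append(f"Created with {record['generator']}")
    if record.get("source"):
        parts.append(f"published by {record['source']}")
    if record.get("verified_by"):
        parts.append(f"provenance verified by {record['verified_by']}")
    return "; ".join(parts) + "."

record = json.loads(provenance_json)
print(describe_context(record))
# Created with ExampleAI Image Tool v2; published by Example News Photo Desk;
# provenance verified by Example Certification Authority.
```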

Policy Recommendation 3

Standardize what is disclosed about content, and the visual signals for doing so.

Audiences clearly benefit from knowing more than just whether content was “made or edited with AI” to assess the significance of AI modifications. However, key questions remain:

  • Which specific details matter most to people?
  • How should context be disclosed to ensure that people notice the signals and understand what is being presented?

Our case studies revealed that content creators and distributors use various details and icons to signal and explain AI modifications and content authenticity. Many liken direct disclosures about content authenticity to nutrition labels in public health. Just as design choices — like which nutrition facts are bolded or set in larger type — affect the clarity and impact of those labels, the design of context disclosures will influence how audiences interpret content. And of course, decisions must be made about which details to include on the label in the first place.

A patchwork of direct disclosure approaches, where users do not have strong mental models for what indicates authentic media or how it is described, may make it easier for bad actors to mislead audiences.

A consistent visual language for disclosure is essential, requiring agreement on how to represent content provenance information. This can be pursued by implementing a widely adopted standard like Content Credentials, or by exploring a new standards process at an agency like the US National Institute of Standards and Technology (NIST) or through international standards bodies. At the same time, there should be flexibility for different platforms and user interfaces to adapt this visual language to their specific features and user expectations.
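One way to picture “standardize the disclosure, leave the presentation flexible” is the sketch below, which separates a shared disclosure taxonomy from per-platform rendering. The categories, baseline wording, and rendering function are hypothetical illustrations, not an existing standard, Content Credentials behavior, or any case study’s design.

```python
# Hypothetical sketch: a shared disclosure taxonomy with platform-specific
# rendering. Categories and label text are illustrative, not an existing standard.
from enum import Enum

class DisclosureCategory(Enum):
    AI_GENERATED = "AI-generated"
    AI_EDITED = "Edited with AI"
    AUTHENTICATED = "Provenance verified"

# The standardized part: every platform maps its internal signals to the same
# small set of categories and baseline wording.
BASELINE_TEXT = {
    DisclosureCategory.AI_GENERATED: "This content was generated with AI.",
    DisclosureCategory.AI_EDITED: "This content was edited with AI.",
    DisclosureCategory.AUTHENTICATED: "This content's origin has been verified.",
}

# The flexible part: each platform adapts presentation (badge, caption, placement)
# to its own interface without changing what the categories mean.
def render_label(category: DisclosureCategory, platform: str) -> str:
    text = BASELINE_TEXT[category]
    if platform == "short_form_video":
        return f"[{category.value}]"          # compact badge overlaid on the video
    if platform == "news_site":
        return f"{category.value}: {text}"    # fuller caption under the image
    return text

print(render_label(DisclosureCategory.AI_GENERATED, "short_form_video"))
print(render_label(DisclosureCategory.AI_GENERATED, "news_site"))
```

The point of this design is the separation of concerns: the meaning of each category is shared across platforms, while the visual treatment can adapt to different interfaces and user expectations.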

Policy Recommendation 4

Resource and coordinate user education efforts about AI and information.

User education on AI and media literacy is often emphasized as essential for achieving media transparency. This is because direct disclosures rely on users’ interpretations, and labels and content context are just one part of building trust and transparency online.

However, not enough resources are being allocated to the development, implementation, and, notably, cross-sector coordination of AI and media literacy efforts, especially among institutions already trusted in civic life. Several case writers highlight initial and important media literacy efforts, but these initiatives need to be expanded, even after we move past the “super election year” that brought attention to the need for education in this area.

As a start, policymakers can support efforts that not only collate input from media, civil society, industry, and government stakeholders, but also embolden them to educate the public on what disclosures mean and do not mean (noting, for instance, that disclosure of a piece of content’s origin or history does not necessarily verify its accuracy). Furthermore, enhancing AI literacy among reporters could improve journalism on the subject, leading to a more informed public.

Policy Recommendation 5

Accompany direct disclosure policies with back-end harm mitigation policies.

We focused this round of cases on direct disclosure because it is an underexplored and crucial aspect of synthetic media governance that can promote responsible practices, impacting creative industries, journalism, and major facets of civic life.

However, two cases — from Thorn and Stanford HAI researchers — highlight significant harms associated with synthetic media, especially AI-generated Child Sexual Abuse Material (CSAM), that will not be adequately addressed through direct disclosure alone. While those monitoring, analyzing, and mitigating risks associated with CSAM — such as law enforcement and platform safety teams — can still benefit from direct disclosure, CSAM requires additional policy interventions targeted at the point of content creation, particularly during the model development, deployment, and hosting stages of the synthetic media pipeline.