10 Things You Should Know About Disclosing AI Content
Emerging Insights from Organizations Implementing PAI’s Synthetic Media Framework
AI has made it easier to manipulate and generate media, posing challenges to truth and trust online. In response, policymakers and AI practitioners have rightfully called for greater audience transparency about AI-generated content.
But, as Leibowicz discussed in her recent article, what do audiences deserve to know? And how, and by whom, should that information be conveyed to support truth and trust online?
Launched in February 2023, PAI’s Synthetic Media Framework is backed by 18 leading organizations involved in building, creating, and distributing synthetic media. Each supporting organization committed to publishing an in-depth case study exploring its implementation of the Framework in practice. The first ten cases, released in March 2024 alongside a PAI-drafted case study and an accompanying analysis, focused on broad themes of transparency, consent, and harmful and responsible use cases.
A new set of cases, releasing later this year, focuses on direct disclosure: methods for conveying to audiences when content has been modified or created with AI, such as labels or other visual signals. These cases informed the takeaways below, which reflect one moment in time in a rapidly evolving field. The attributed, in-depth case studies will add texture and detail to the following themes.
1. People do not perceive AI labels as merely technical and neutral signals.
Many media consumers perceive AI labels, intended as neutral descriptions of technical modifications (such as how the content was generated), as normative labels indicating that the content is “true” or “false.”
2. Most edits made with AI tools do not significantly change content’s meaning or context.
One organization reported that the majority of AI-edited content posted to its social media platform contained only cosmetic or artistic edits that did not substantively change the content’s meaning.
3. Creators often do not know that they are creating content with AI.
One social media platform stated that many creators were unaware they had done any AI editing that would trigger the platform to label their content. This makes opt-in labeling methods, in which users are encouraged to tag their own content as AI-edited, particularly difficult for platforms to rely on if they want to disclose all AI-edited content consistently.
4. Direct disclosures should reveal how content has been materially altered.
Knowing that an image has been edited or created with AI matters most when the change fundamentally, or materially, alters the media in a way that can mislead audiences. For instance, an image of an astronaut walking on the moon without a helmet may lead viewers to believe humans can breathe in space. By contrast, enhancing an image of space to make stars clearer does not materially change the content, and may even make the image represent reality more truthfully.
5. There is no “one size fits all” method for directly disclosing content.
Research on one social media platform showed that users expect greater transparency for content that is fully synthetic, is photorealistic, depicts current events, or features people doing or saying things they did not do or say. Other platform users suggested that AI-edited and AI-created content should have entirely different labels.
6. Direct disclosure visuals should be more standardized across platforms.
Organizations use different icons to signal and explain AI disclosures and content authenticity. This patchwork of approaches, in which users lack strong mental models for what indicates authentic media or how it is described, makes it easier for bad actors to mislead audiences. A shared visual language for disclosure is vital, and alignment will be needed not only on how to represent the presence of provenance information, but also on how to reduce overconfidence in uncertain technical signals.
7. Social media platforms have been more willing to over-label.
Some social media platforms are understandably apprehensive about the use of AI to augment content and manipulate election information, especially given the historic number of elections in 2024. In response, some suggest they would rather deploy too many AI labels than too few, even while recognizing the limitations on how labels are applied and understood by audiences (as described in themes #1-5).
8. Social media platforms rely on signals from developers to provide direct disclosure.
Technology platforms are often the first place users encounter synthetic media, so they play an outsized role in directly disclosing to users how content has been manipulated. To do so, however, they often rely on accurate indirect disclosure signals from upstream AI tool developers and builders, as sketched in the example after this list.
9. Malicious activity will not be fully stopped by direct disclosures.
Direct disclosure is useful for good-faith actors. But even if those building synthetic media models implement disclosure mechanisms, bad actors seeking to create harmful material, such as AI-generated child sexual abuse material (CSAM), may fine-tune models to circumvent or strip out disclosures. Additional mitigations, including removing CSAM from the training datasets of AI models, should therefore also be pursued.
10. User education must be better resourced and coordinated.
User education on AI and media literacy is widely touted as necessary for media transparency. However, not enough resources are being allocated to its development, implementation, and coordination across sectors, especially among organizations already trusted in civic life. Industry, media, and civil society organizations should educate the public on what disclosures do and do not mean (noting, for instance, that disclosure of content’s origin or history does not necessarily verify its accuracy).
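Theme #8 describes a dependency that is easy to miss: a platform can only show audiences a direct disclosure label if an accurate indirect disclosure signal (for example, provenance metadata or a watermark) travels with the content from the upstream tool that generated or edited it. The Python sketch below illustrates that dependency under simplified, assumed conditions; the field names, edit categories, and label text are hypothetical and do not reflect any particular platform, Framework requirement, or metadata standard.

```python
# A deliberately simplified, hypothetical sketch. The field names, label
# text, and edit categories below are illustrative assumptions for this
# post, not any platform's or provenance standard's actual schema.

from dataclasses import dataclass
from typing import Optional


@dataclass
class UploadedMedia:
    filename: str
    # Indirect disclosure signal attached (or not) by the upstream AI tool,
    # e.g. derived from embedded provenance metadata or a watermark.
    ai_generated: Optional[bool] = None
    # Hypothetical coarse edit category: "cosmetic" or "material".
    edit_type: Optional[str] = None


def direct_disclosure_label(media: UploadedMedia) -> Optional[str]:
    """Return a user-facing label, or None if no disclosure is shown."""
    if media.ai_generated is None:
        # No indirect signal arrived from upstream: the platform cannot
        # reliably label the content (the gap that opt-in tagging,
        # theme #3, struggles to fill).
        return None
    if media.ai_generated and media.edit_type == "material":
        return "Made with AI: substantially generated or altered."
    if media.ai_generated:
        return "Edited with AI tools."
    return None


print(direct_disclosure_label(
    UploadedMedia("moonwalk.jpg", ai_generated=True, edit_type="material")
))
```

The point of the sketch is the first branch: when no upstream signal is present, the platform has nothing trustworthy to act on, which is why themes #3 and #8 together argue for reliable signals from developers rather than reliance on creator opt-in alone.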
What Comes Next
In the coming months, we will publish long-form case studies featuring the evidence that informed these themes. Our upcoming programmatic work at PAI will then tackle these challenges for AI transparency, specifically by creating:
- Guidance on what counts as a “material” or “fundamental” AI edit that warrants disclosure.
- A coordinated media and AI literacy education campaign with stakeholders across sectors to complement indirect and direct disclosure.
- Policy and practitioner recommendations for how to implement and adopt indirect and direct disclosure.
- Updates and clarification of the Synthetic Media Framework itself, including potential adaptation to agentic AI.
To stay up to date on our work in this space, sign up for our newsletter.