As AI makes media more challenging to evaluate, the question of how to assess what is authentic has never been more urgent. When audiences cannot reliably distinguish authentic from synthetic content, trust in information (a basis for democracy, financial systems, relationships, and more) can erode. AI-generated audio and video have already been deployed to manipulate perceptions of election candidates, create fake dating profiles, and even misrepresent the realities of the war in Iran.
Providing audiences with greater transparency into how content was made and edited can address this challenge, helping them make informed decisions about the media they consume. Over time, this transparency can build the shared understanding of the world that a healthy information environment requires. While no single transparency method is perfectly robust, and privacy must be baked in intentionally, different methods used in combination can support broader media understanding and serve society.
Policymakers from the EU, China, and the U.S. have introduced policies to support transparency as a key solution to AI-generated content’s challenges. The EU launched a voluntary Code of Practice process to gather input from stakeholders to help implement the EU AI Act, a regulatory instrument seen as a trendsetter for AI policy overall.
Building on almost a decade of work in synthetic media, we joined the EU Code of Practice’s working group process on obligations for providers and deployers of generative AI systems. Doing so helped ensure that the transparency goals highlighted in our Synthetic Media Framework, real-world case studies, and other key research and writing inform policymaking and real-world implementation.
PAI’s Recommendations to the EU Code of Practice
Our input to the draft EU Code of Practice (COP) advocates for a transparency ecosystem in which multiple reinforcing mechanisms work together to provide synthetic media transparency that preserves privacy, supports innovation and creative expression, and is ultimately useful to the public. Indeed, transparency is only as effective as its implementation and interpretation.
Transparency on AI-generated content relies upon robust and resilient technical marks that travel securely with content across the web, which we refer to as indirect disclosures. The draft Code takes a step in the right direction, requiring at least two indirect disclosure or marking approaches.
We support three layers of such marking (watermarking, fingerprinting, and cryptographic metadata), a perspective drawn from our Synthetic Media Framework and Glossary. Any attempt to pare this down should come only after independent evaluation of each mark’s effectiveness, measured by metrics like robustness, interoperability, and reliability, along with insight into its privacy guardrails.
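To make the distinction between these layers concrete, below is a minimal Python sketch contrasting a fingerprint (a value recomputed from the content itself) with cryptographic metadata (a signed provenance record that travels alongside the content). This is an illustration under simplifying assumptions, not the C2PA specification or any production scheme: the `SIGNING_KEY`, `sign_manifest`, and `verify_manifest` names are hypothetical, and real deployments use asymmetric signatures and certificate chains rather than a shared HMAC key.

```python
# Minimal sketch (hypothetical, not a production or C2PA scheme) contrasting
# two of the three marking layers: a fingerprint derived from the content
# itself, and cryptographic metadata attached alongside it.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use asymmetric keys


def fingerprint(content: bytes) -> str:
    """Fingerprint: derived *from* the content, so it can be recomputed and
    matched against a registry even if attached metadata is stripped."""
    return hashlib.sha256(content).hexdigest()


def sign_manifest(content: bytes, tool: str) -> dict:
    """Cryptographic metadata: a provenance record that travels *with* the
    content and whose integrity can be verified by a key holder."""
    manifest = {"content_hash": fingerprint(content), "generator": tool}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Fails if the content was altered or the manifest was tampered with."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_hash"] == fingerprint(content))


media = b"...synthetic image bytes..."
m = sign_manifest(media, tool="hypothetical-image-model")
assert verify_manifest(media, m)             # intact content verifies
assert not verify_manifest(media + b"x", m)  # any edit breaks verification
```

The point of layering is that the mechanisms reinforce one another: if an attached manifest is stripped as content travels across the web, the fingerprint can still be matched against a registry, while a watermark (not sketched here) embeds the signal in the media itself.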
Separately, detection is a transparency method featured in the COP that relies on cues unintentionally left in synthetic content that differentiate it from non-synthetic content. Detectors are never 100 percent accurate and can quickly become outdated; they become even less useful if accessible to all, since bad actors can probe them to evade detection. As such, we recommend balancing transparency and openness with security through tiered rather than unrestricted public access, with tiers drawn from previous PAI writing on detection governance.
Even if technical transparency is consistently and robustly applied, the question remains of how these transparency signals are communicated to real people. While the evidence of harm from undisclosed synthetic media is clear, transparency efforts that are not interpretable by users can create a false sense of security, lead to label blindness, or cast doubt on authentic content.
The Code meaningfully focuses on this challenge, emphasizing the need for a common direct disclosure icon across surfaces, but it can go further to ensure such disclosures are understood by users.
- We recommend a standardized direct disclosure icon while preserving flexibility. We base this recommendation on responses to a question in our Synthetic Media Framework Case Studies about the importance of a “shared visual language or mechanism” for disclosure. We suggest that the icon serve as a universal entry point to other transparency information, while allowing flexibility in its prominence and the details provided, based on content type and context (all while balancing privacy).
- We support the Code’s explicit emphasis on creative and artistic content, and recommend building a publicly accessible repository of case studies to develop shared norms around innovative disclosure approaches for such content (like those noted in case studies from WITNESS, D-ID, and the BBC).
- We recommend supporting user research on the impact of these signals across demographic groups, particularly those most vulnerable to synthetic media harms, such as youth, who face acute exposure to AI-generated image abuse, and the elderly, who face significant financial losses from deepfake fraud. Our research has shown how transparency labels may backfire, and such user research is vital to product decisions that serve society.
- We recommend user education to support the success of any disclosure regime. Education can be pursued through public-private partnerships and disseminated through trusted civic institutions.
- We call for a common glossary across our recommendations, suggesting alignment with the key terms and approaches in PAI’s Glossary for Synthetic Media Transparency. Inconsistent terminology across transparency efforts can limit their efficacy, as Truepic describes in its case study, affecting not only user understanding (and making education more challenging) but also the underlying technical standards that serve as their foundation.
One noticeable change between drafts of the EU Code is the removal of model-level transparency. Aligned with our Synthetic Media Framework guidance for model developers, the first draft suggested that indirect disclosures be incorporated at the model layer of the value chain. There is a legitimate case against model-level mandates: models can be seen as general-purpose infrastructure, and such mandates can burden open-source and smaller developers. Still, we are concerned that a theme that had broad buy-in from key players in 2023 was edited out within a short engagement window, and we offer guidance on balancing these interests. Interestingly, the U.S., which previously emphasized AI deregulation overall, has suggested it may focus on broader model-level review and policy.
Looking Beyond the EU Code of Practice
The EU Code of Practice, and the EU AI Act more broadly, are important mechanisms for establishing normative guidance on synthetic media transparency that can support how audiences understand information. However, they are not the be-all and end-all for implementing norms. First, while the AI Act is binding legislation, the Code is voluntary, and it is notable that leading organizations like Meta are not participating in the first place.
Our recommendations, and similar ones from our Partner WITNESS, need to be translated into practice beyond the Code. Parallel investments must be bolstered alongside the EU’s efforts: formal and informal standards work, like that of the National Institute of Standards and Technology (NIST) and the more transparency-specific Coalition for Content Provenance and Authenticity (C2PA); regulatory efforts across countries; advocacy that reaches industry teams directly; and other governance mechanisms.
The Future of Synthetic Media
When developed and implemented responsibly and swiftly, synthetic media transparency can help audiences assess the origin and authenticity of content. In turn, it can help people make better sense of the world around them.
As AI tools continue to evolve, this challenge will only become more pressing. These tools may help us express ourselves, take action on our behalf, and perhaps even serve as our digital twins. Ensuring we can show up authentically alongside these changes will be vital, and there is no doubt transparency will play a role.
PAI is committed to making sure we get there. By translating our normative guidance and research into real-world decisions — and drawing on the insight of our multistakeholder community — we will ensure that voices from civil liberties, finance, technology, media, and human rights all help shape how these technologies evolve. The future of synthetic media is not just a technical challenge. It is a human one that real people must shape.