Our Blog
Press Release

PAI Celebrates 1-Year Anniversary of Synthetic Media Framework with Transparent Look into Guidance and Disclosure Best Practices Used in Action by BBC, OpenAI, TikTok, and Others

Portfolio of Case Studies Details Learnings and Examples of How Industry Heavyweights Approach Disclosure of and Consent for AI-Generated Multimedia

SAN FRANCISCO – Partnership on AI (PAI), a nonprofit community of academic, civil society, industry, and media organizations addressing the most difficult questions on the future of AI, today released the first known collection of in-depth case studies on mitigating synthetic media risks, based on usage of its Synthetic Media Framework launched one year ago. In this rare inside look, leaders from BBC, OpenAI, TikTok, and others discuss the unique challenges they face in their respective industries and the strategies they have implemented, based on PAI’s guidance, to ensure transparency and digital dignity through tactics like consent and disclosure.

A year after launching the Responsible Practices for Synthetic Media: A Framework for Collective Action, PAI and the Framework’s 10 Launch Partners (Adobe, BBC, Bumble, CBC/Radio Canada, D-ID, OpenAI, Respeecher, Synthesia, TikTok, and WITNESS) are sharing transparency case studies of the Framework in action. Given the increased accessibility of tools to create AI-generated audio and visual content, PAI’s Framework provides a timely solution for managing the creation and distribution of this unique type of media to promote safety, reduce risk, and increase transparency.

“As technology evolves, so too must our governance and policy efforts, particularly in sensitive contexts such as elections. This first-ever collection of generative AI case studies underscores the need for transparency, as a first step, in building a safer media ecosystem,” said Rebecca Finlay, CEO, Partnership on AI. “We’re not just discussing theoretical concepts. We’re actively demonstrating how PAI’s Synthetic Media Framework can inform and improve governance in real-world situations. Together with our launch partners, we’re creating the conditions whereby consent and disclosure become integral to AI-driven content creation and distribution.”

The rapid evolution of AI and synthetic media over the last year has the potential to disrupt public trust, democratic integrity, financial systems, creative industries, and beyond. Today, nearly six in 10 adults (58%) in the U.S. think AI tools will increase the spread of false and misleading information around the upcoming election. As synthetic media continues to evolve, PAI’s shared knowledge base will serve as a beacon, guiding policymakers, industry stakeholders, and the public toward a more ethically grounded and forward-thinking governance framework.

“This body of work highlights the pressing need for cohesive governance rules rooted in a set of core values that are universally applicable across diverse industries and use cases, whether a dating app using AI to verify accounts, or a newsroom using AI to protect its sources,” said Claire Leibowicz, Head of AI and Media Integrity, Partnership on AI. “While values like consent remain constant, the unique nature of each use case highlights the importance of flexibility within policy frameworks.”

PAI’s Framework Launch Partners span a range of industries that make up the synthetic media ecosystem, each requiring a different blend of technical and humanistic considerations around key tactics like consent, disclosure, and navigating responsible and harmful uses. The case studies provide transparency on how each organization puts these tactics into practice.

To learn more about Partnership on AI’s Responsible Practices for Synthetic Media: A Framework for Collective Action, and to read the full collection of case studies, please visit: https://syntheticmedia.partnershiponai.org/

Messages of Support

“Adobe is proud to support and implement the Partnership on AI’s Responsible Practices for Synthetic Media, which aligns with our ongoing commitment to developing AI technologies in a responsible way that respects our customers and communities. We’re doing this by applying Content Credentials across our industry-leading creative products, including Adobe Firefly, our generative AI model, which we developed to be commercially safe, provide transparency, and respect the rights of artists and creators.”

– Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe

“Synthetic content is shaping our experience of media and entertainment. The creative possibilities are exciting but it’s essential that we signal when we are using it and explain why we have decided to use these tools. We will continue to listen to our audiences in order to understand their reaction to use of synthetic media and respond accordingly.”

– Antonia Kerle, Chief Technical Advisor, BBC

“Contributing to a safer and more equitable internet has been part of Bumble Inc.’s mission from the outset, and sharing our work and learnings with industry shows our commitment to the responsible advancement and use of AI. We need an industry-wide solution to the unique and evolving challenges that affect our communities, and we welcome exercises that allow key leaders in the industry to come together and share solutions that can support in doing just that.”

– Massimo Belloni, Data Science Manager, Trust & Safety, Bumble Inc.

“As one of the standard bearers for ethical and trustworthy journalism in Canada, CBC News is proud not only to cover news about generative AI with our usual rigor and curiosity, but also to stay ahead of the game in defining how AI may or may not define 21st-century journalism. The case study we shared with the group is an example of the extreme caution with which we approach new technology: an exciting new source-protection software was considered in our production process and eventually set aside – because we couldn’t guarantee it would live up to our standards. While thinking outside the box is encouraged at CBC News, our priority is for anything we publish to be in line with our promise to the audience: content resulting from a strict process designed not to enhance the real world but to make sense of it; not to experiment with truth but to reveal it.”

– George Achi, Director of Journalistic Standards, CBC News

“The Synthetic Media Framework proved invaluable in addressing ethical challenges in synthetic media with integrity and empathy. D-ID embarked on a project where our technology gave a voice to murdered victims of domestic violence. We were confronted with the complexities of consent as direct permission was unattainable, due to the subjects being deceased. The Framework guided us in engaging deeply with the victims’ families, domestic violence experts and a law firm, to ethically and sensitively craft the narrative, thus ensuring the project was respectful, transparent, and impactful.”

– Shiran Mlamdovsky Somech, AI for Good Leader, D-ID

“The responsible use of AI is core to OpenAI’s mission, and the Synthetic Media Framework has been beneficial as we collectively work out ways to address the global challenges presented by AI progress. We’ve long been a proponent of appropriate disclosure of the use of AI, and we believe that adopting methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information.”

– Lama Ahmad, Technical Program Manager, Policy Research, OpenAI

“Respeecher proved itself not only as a Hollywood quality AI voice technology with credits in Disney+, Paramount, and HBO movies but also as an ethical company that takes this issue seriously. Collaboration with PAI is an important milestone, as well as celebrating its first-year anniversary. These joint efforts will undoubtedly help us bring security, identity, and trust into synthetic media.”

– Alex Serdiuk, CEO and Co-founder, Respeecher

“The PAI’s Synthetic Media Framework represents a significant step forward in promoting transparency, accountability, and responsible innovation in the field of generative AI. In particular, the case studies show how we can work together to build trust, protect against misuse, and ensure that these technologies are leveraged for positive societal impact. Synthesia is committed to upholding the highest standards in the development and deployment of generative AI video technologies. By being a launch member of this Framework, we reaffirm our dedication to promoting the responsible development of synthetic media and contributing to a safer and more trustworthy digital landscape.”

– Alexandru Voica, Head of Corporate Affairs and Policy, Synthesia

“Generative AI’s impact is bigger than any single organization or industry, and collaboration is critical to reduce its risks while empowering transparent content creation. As a launch partner, TikTok continues to apply the Framework’s insights towards our responsible AI efforts, including new labeling and detection technologies, media literacy investments, and a first-of-its-kind tool for creators to self-disclose AIGC.”

– Justin Erlich, Global Head of Issue Policy, TikTok

“These case studies exemplify the critical role of a human rights approach to generative AI. As an organization dedicated to leveraging technology to defend and protect human rights, WITNESS recognizes the delicate balance between harnessing the power of AI for social good and safeguarding against unintended harms. Through our support for the Partnership on AI’s Responsible Practices for Synthetic Media Framework, we not only aim to guide industry practices but also to offer legislators nuanced insights, ensuring policy and legislation are grounded in the realities of technology’s impact on society.”

– Raquel Vazquez Llorente, Associate Director, WITNESS

About Partnership on AI

Partnership on AI (PAI) is a non-profit organization that brings together diverse stakeholders from academia, civil society, industry and the media to create solutions to ensure artificial intelligence (AI) advances positive outcomes for people and society. PAI develops tools, recommendations and other resources by inviting voices from the AI community and beyond to share insights and perspectives. These insights are then synthesized into actionable guidance that can be used to drive adoption of responsible AI practices, inform public policy and advance public understanding of AI. To learn more, visit www.partnershiponai.org.

Media Contacts:
Jennifer Lyle