NEW YORK CITY – Today the Partnership on AI (PAI) unveiled a first-of-its-kind Framework for the ethical and responsible development, creation, and sharing of synthetic media. The Framework is backed by an inaugural cohort of launch partners including Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and synthetic media startups Synthesia, D-ID, and Respeecher. Partnership on AI’s Responsible Practices for Synthetic Media: A Framework for Collective Action can be viewed at https://syntheticmedia.partnershiponai.org/.
“In the last few months alone we’ve seen AI-generated art, text, and music take the world by storm,” said Claire Leibowicz, Head of AI and Media Integrity at PAI. “As the field of artificially generated content expands, we believe working towards a shared set of values, tactics, and practices is critically important and will help creators, content platforms, and distributors use this powerful technology responsibly.”
Created through a year-long process with input from more than a hundred contributors, Partnership on AI’s Responsible Practices for Synthetic Media is a set of guiding recommendations for those creating, sharing, and distributing synthetic media – also known as AI-generated media. It was prompted by a belief among industry experts that the evolving landscape of synthetic media represents a new frontier for creativity and expression, but also holds troubling potential for misinformation and manipulation if left unchecked.
PAI worked with over 50 organizations to refine the Framework – including synthetic media startups, social media platforms and content platforms, news organizations, advocacy and human rights groups, academic institutions, policy professionals, experiential experts, and public commenters. The results of this effort build on PAI’s work over the past four years to evaluate challenges and opportunities for synthetic and manipulated media.
A PDF of the Framework is available on the Framework website. Quotes from launch partners can be found below.
Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society.
LAUNCH PARTNER QUOTES:
“Adobe launched the Content Authenticity Initiative (CAI) in 2019 to increase trust and transparency online. Since then, our membership has grown to over 900 leading media and tech companies, publishers, creators, and camera manufacturers working to address misinformation at scale through attribution,” said Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe. “As synthetic media techniques become increasingly powerful, we are committed to advancing standards and frameworks that promote ethical creation and use of digital content. We are excited to be involved in the PAI Framework and look forward to continuing to shape the future of responsible use of AI.”
“The BBC, as a PAI partner, is pleased to have made a contribution to developing PAI’s Responsible Practices for Synthetic Media,” said Jatin Aythora, Director of Research and Development at the BBC. “Establishing principles for the responsible use of synthetic media has enormous value as many organisations grapple with its implications. As a public service broadcaster with a focus on trust and safety, we look forward to reflecting work in this area in our own editorial guidelines as appropriate and continuing to support and develop work in this area.”
“We are steadfast advocates for safe spaces online for less represented voices. Our work with PAI on developing and joining the Framework, alongside an amazing group of partners, is an extension of that,” said Payton Iheme, VP of Global Public Policy at Bumble. “We are especially optimistic about how we continue to show up to address the unique AI-enabled harms that affect women and marginalized voices.”
“CBC/Radio-Canada is delighted to work with the Partnership on AI on this new approach to synthetic media,” said Jon Medline, Executive Director, Policy & International Relations at CBC/Radio-Canada. “As Canada’s national public broadcaster, we work hard to build, sustain, and safeguard the trust people put in our news and current affairs content. That’s why developing an international framework to promote transparency in the development and responsible use of synthetic media is so important.”
“Generative AI technology is extremely powerful, and it’s built into our DNA at D-ID to ensure that this power is used for good,” said Gil Perry, CEO and Co-founder of D-ID. “But we don’t stand alone; this needs to be an industry-wide effort, which is why we are very proud to be part of this initiative to help drive best practices and advance the ethical development and deployment of synthetic media across a wide range of industries.”
“We’re seeing how human-machine interaction has sparked incredible creativity and expression, but any powerful technology needs careful deployment,” said Dave Willner, Head of Trust & Safety at OpenAI. “These recommendations represent a necessary step towards society collectively working out ways to address the global challenges presented by AI progress, and we are pleased to take part in PAI’s efforts to guide the industry.”
“We encourage the responsible use of AI technology, and it’s impossible without mutual efforts and dialogue among industry leaders,” said Alex Serdiuk, CEO and Co-founder of Respeecher. “We must not only implement creative ways to democratize synthetic speech tech, but also find the most effective ways to control it. This is something Respeecher has been committed to since the company was founded more than five years ago. We are happy to contribute to this effort. It’s not just a privilege – it’s our responsibility.”
“The creative possibilities of AI are endless, but like all powerful technology, it will be used by bad-faith actors. Reducing this harm is crucial, and it’s key that we work together as an industry to combat the threats AI presents,” said Victor Riparbelli, CEO at Synthesia. “We believe that education, transparency and measured regulatory interventions will allow everyone to safely benefit from the immense opportunity of AI-generated content whilst also enjoying the magic it has to offer.”
“TikTok is built on the belief that trust and authenticity are necessary to foster safe, creative and joyful online communities, and we’re proud to support Partnership on AI’s Responsible Practices for Synthetic Media. Like many technologies, the advancement of synthetic media opens up both exciting creative opportunities and unique safety considerations,” said Chris Roberts, Head of Integrity and Authenticity Policy at TikTok. “We look forward to collaborating with our industry to advance thoughtful synthetic media approaches that empower creative expression by increasing transparency and guarding against potential risks.”
“At a time when synthetic media continues to blur the lines that separate truth from falsehood and reality from fiction, WITNESS works on ‘fortifying the truth’ based on the threats experienced and solutions identified by vulnerable and marginalized communities globally,” said Jacobo Castellanos, Technology, Threats and Opportunities Coordinator at WITNESS. “Ensuring strong ethical boundaries are in place from the early phases of these technological developments brings us closer to solutions that can benefit civic journalists and human rights defenders worldwide. We’re delighted to partner with PAI to present this framework of responsible practices.”
FOR IMMEDIATE RELEASE: Monday, February 27, 2023 at 8 AM Eastern
Media Contact: Andrea Cross, firstname.lastname@example.org