Partnership on AI is pleased to announce that Code for Africa, Meedan, the Stanford Institute for Human-Centered Artificial Intelligence, Thorn, and Truepic have joined its Responsible Practices for Synthetic Media: A Framework for Collective Action, expanding a groundbreaking community of expertise dedicated to promoting responsible practices in the development, creation, and sharing of media created with generative AI.
“As the AI community develops solutions for transparency and disclosure of AI-generated media, it is necessary to include civil society, academic, and startup perspectives on how to do this in a rights-upholding fashion,” said Claire Leibowicz, Head of AI and Media Integrity at PAI. “We welcome these new Framework partners to our cross-sectoral community and look forward to including their valuable insights as the synthetic media landscape, including its impact on truth and trust, evolves.”
These new partners bring diverse perspectives on synthetic media and the need for responsible practices to combat misinformation, reduce risks to vulnerable populations, particularly children, and advance solutions for transparency. The Framework provides guidance for those building, creating, and distributing synthetic media: recommendations that must be grounded in the societal norms and contexts that civil society organizations understand most deeply.
The first-of-its-kind Framework was launched in February 2023 by PAI and backed by an inaugural cohort of launch partners: Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and the synthetic media startups Synthesia, D-ID, and Respeecher. Google, Meta, and Microsoft have since joined as Framework supporters.
A PDF of the Framework can be found here.