How Truepic used disclosures to help authenticate cultural heritage imagery in conflict zones


How do indirect disclosures support user-facing direct disclosures for cultural heritage content?

  • Truepic uses its indirect disclosure tools to help platforms identify where content comes from, enabling them to provide direct disclosure to users.
  • Truepic highlights the importance of not only authenticating and disclosing synthetic content, but also non-synthetic content, in an effort to promote transparency across all digital media.
  • Truepic discusses Project Providence, a collaborative effort with Microsoft to leverage its authentication technology to document over 500 attacks in Ukraine and utilize direct and indirect disclosure outputs to support prosecutors in accountability cases.

This is Truepic’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How Adobe designed its Firefly generative AI model with transparency and disclosure


Can companies include disclosure in the design of generative AI models?

  • In building Firefly, Adobe’s family of creative generative AI models, Adobe wanted to be sure the product would be commercially safe, provide transparency to consumers, and respect the rights of artists and creators.
  • Adobe had to consider technical, legal, policy, and ethical standards in building Firefly, including how to insulate creator content from model development, if requested, and attach disclosures to content.
  • The Framework provided Adobe with guidance on how to “take steps to provide disclosure mechanisms for those creating and distributing synthetic media.” They did this by developing Firefly with both direct and indirect disclosure built in.

This is Adobe’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

Even the best-intentioned uses of generative AI still need transparency

An analysis by human rights organization WITNESS


How much transparency does an artist need to provide when creating synthetic media?

  • To bring awareness to the disappearance of hundreds of children during the Argentine military junta of the late 1970s, a social media account used generative AI to create images of what the kidnapped children might look like today.
  • WITNESS identified this use case as one that had creative intentions, but required greater attention to responsible practices. For example, the synthetic images of the children were not clearly disclosed to users. The creator of the account also did not receive consent to use the photos (from the database/archive) or for the project (from the families of the subjects).
  • The Framework provided WITNESS with a lens for examining this use of synthetic media, as well as a means to hone the best practices that should have been followed to create this content responsibly.

This is WITNESS’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How the risk of synthetic media affecting global election information is growing – an analysis by PAI

Can responsible synthetic media practices prevent the negative impact of AI on elections?

  • To highlight how generative AI tools are already playing a role in the 2024 global election cycle and how Framework recommendations can be used to mitigate harm, PAI examined the use of deepfake audio in three global election contexts.
  • PAI applied its Framework to deepfake audio use cases in Slovakia, Pakistan, and the United States. Only in Pakistan, where deepfake audio was layered over authentic footage, were disclosure and consent applied. In the other two cases, questions of consent arose around whether politicians, as public figures, need to give consent at all.
  • The Framework’s disclosure recommendations, if implemented, may have been able to prevent potential harm stemming from the misuse of audio deepfakes in two of these examples.

This is PAI’s Case Submission to the Synthetic Media Framework. Explore the other case studies

Download this case study

How TikTok launched new AI labeling policies to prevent misleading content and empower responsible creation


How do you balance AI’s creative potential with its potential for harm?

  • TikTok rolled out synthetic media and manipulated content guidance in its Community Guidelines to allow for users’ creative expression with generative AI tools, and simultaneously prevent misuse. As part of this new policy, creators were asked to begin disclosing their own AI-generated content on the platform.
  • TikTok’s new policy included introducing a new toggle for creators to use whenever they posted content that was wholly generated or significantly edited using generative AI. One of the challenges TikTok faced was where to draw the line for requiring users to disclose synthetic content.
  • The Framework provided TikTok with a set of references for how synthetic media could be used harmfully. TikTok also responded to the Framework’s recommendations on how to disclose synthetic media to users responsibly.

This is TikTok’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How Respeecher enables creative uses of its voice-cloning technology while preventing misuse

Can a voice-cloning startup successfully prevent its product from being misused?

  • Respeecher, in developing its voice cloning technology, sought to prevent misuse by obtaining consent and implementing content moderation.
  • Respeecher’s greatest obstacle was providing disclosure for synthetic voice in a creative context. How could the company provide direct disclosure to users without taking away from the immersive experience of the overall media?
  • While the Framework provides clear guidelines for how to responsibly provide disclosure, the current version does not contain guidance on how to do so while balancing user experience, thus raising the question of what the best practice in a creative context would be.

This is Respeecher’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How OpenAI is building disclosure into every DALL-E image


What’s the best way to inform people that an image is AI-generated?

  • OpenAI explored the use of an image classifier (a synthetic media detector) to provide disclosure for the synthetic content created with their generative AI tools and prevent potential misuse.
  • OpenAI considered the various tradeoffs in rolling out an image classifier, including accessibility (open vs. closed), accuracy, and public perception of OpenAI as a leader in the synthetic media space. By learning from their decision to take down a text classifier that was not meeting accuracy goals, OpenAI decided to slowly roll out a more accurate image classifier.
  • The Framework provided OpenAI with guidance for Builders on how to responsibly disclose the content created with DALL-E, including providing transparency to users about its limitations, addressed by a phased rollout of the classifier.

This is OpenAI’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How AI video company D-ID received consent to digitally resurrect victims of domestic violence


How can people who are no longer alive provide their consent?

  • As part of a campaign to raise awareness about domestic violence, generative AI startup D-ID created a video in which the synthetic avatars of five deceased victims of domestic abuse “shared” their stories and cautioned women to seek help if they found themselves in abusive relationships.
  • D-ID faced the challenge of how to obtain consent, a Framework principle, on behalf of the deceased. In order to do so, D-ID worked with the families of the victims to obtain consent for using the likeness of the victims, as well as to develop the video scripts.
  • The Framework provided D-ID with recommendations on how to responsibly utilize the likeness of the subjects in their video as well as on how to inform viewers that the subjects were created using generative AI. As a result, D-ID reevaluated internal processes to improve its approach to content creation.

This is D-ID’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How CBC News decided against using AI to conceal a news source’s identity


Can journalists ethically use AI to mask the identity of a confidential source?

  • CBC News explored using synthetic media to obfuscate the faces of crime victims who did not want to be identified, as a way to potentially enhance storytelling.
  • The CBC typically relied on methods such as face blurring and voice alteration in order to hide the identities of reporting subjects.
  • The Framework provided CBC with a set of AI-specific guidance to support its journalistic standards. Ultimately, the CBC did not use synthetic media, noting existing challenges regarding user perception of synthetic media and privacy concerns for the subject.

This is CBC News’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study

How Bumble is preventing malicious AI-generated dating profiles


How can dating apps authenticate user profiles in the era of generative AI?

  • Bumble’s Photo Verification process requires users to provide unique photos to Bumble in order to validate the authenticity of their profiles. However, advances in generative AI technology have made using photos for validation increasingly challenging.
  • Bumble sought to balance two potentially conflicting goals: identifying synthetic media used to create fake or malicious profiles, while simultaneously allowing individuals’ creative use of generative AI within their authentic profiles.
  • PAI’s Framework provided Bumble with a reference from which to identify the harm it sought to address, setting the stage for Bumble to roll out new policies on the use of synthetic media within the app.

This is Bumble’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies

Download this case study