Even the best-intentioned uses of generative AI still need transparency – an analysis by human rights organization WITNESS

How much transparency does an artist need to provide when creating synthetic media?

  • To raise awareness of the disappearance of hundreds of children under the Argentine military junta of the late 1970s, a social media account used generative AI to create images of what the kidnapped children might look like today.
  • WITNESS identified this use case as one with creative intentions that nonetheless required greater attention to responsible practices. For example, the images of the children were not clearly disclosed to users as synthetic. The creator of the account also did not obtain consent to use the photos (from the database/archive) or for the project itself (from the families of the subjects).
  • The Framework provided WITNESS with a lens for examining this use of synthetic media and for honing the best practices that should have been implemented for this content to be created responsibly.

This is WITNESS’s case submission as a supporter of PAI’s Synthetic Media Framework. Explore the other case studies.
