What’s the best way to inform people that an image is AI-generated?
- OpenAI explored the use of an image classifier (a synthetic media detector) to provide disclosure for synthetic content created with its generative AI tools and to help prevent potential misuse.
- OpenAI weighed the tradeoffs of rolling out an image classifier, including accessibility (open vs. closed), accuracy, and public perception of OpenAI as a leader in the synthetic media space. Learning from its earlier decision to take down a text classifier that did not meet accuracy goals, OpenAI chose to roll out a more accurate image classifier gradually.
- The Framework provided OpenAI with guidance for Builders on how to responsibly disclose content created with DALL·E, including being transparent with users about its limitations, a concern addressed by the classifier's phased rollout.
This is OpenAI's case submission as a supporter of PAI's Synthetic Media Framework.