Safeguarding Trust and Dignity in the Age of AI-Generated Media

These recommendations stem from PAI’s analysis of case studies submitted by 18 organizational supporters of PAI’s Synthetic Media Framework. The themes and recommendations are PAI’s own and derive from this limited sample of multistakeholder input.

Summary

In May 2019, years before ChatGPT brought AI into public consciousness, PAI convened a cohort to grapple with a then-emergent threat to information quality, public discourse, and civic life: AI-generated media. At the time, political deepfakes were only beginning to affect democratic discourse, AI-generated intimate imagery was victimizing women, and newsrooms were increasingly concerned their artifacts would be discredited as fake. Misrepresentation was not new, but AI was adding fuel to the fakery fire.

Now, in 2025, that fire continues to burn: synthetic media affects not only the technology and news industries, but also fields like finance, online dating, film, and advertising, touching nearly every part of a person’s life. That’s why PAI has worked to ensure audio-visual AI technologies support trustworthy information and human dignity.

We brought together experts across technology, academia, civil society, and media who understood not only the technical realities of how to make and respond to AI-generated media, but also the ways in which it affects civil liberties and human flourishing.

This collaboration resulted in our Synthetic Media Framework, which puts forward guidelines for the responsible creation, development, and distribution of synthetic media technology. The Framework prioritizes transparency, preventing deception, and mitigating audio-visual harms through actionable practices.

With commitments from 18 diverse supporters, we documented real-world implementation through detailed case studies spanning journalism, entertainment, human rights advocacy, social media, and AI tool design.

These case studies allowed us to expand on our Framework practices and synthesize the real-world impact of the guidance. A dating app better authenticated profiles as real and prepared for an influx of AI profiles. A newsroom better grasped how to weave AI risks into their existing journalistic standards. A social media platform was empowered to explain synthetic media to audiences and identify associated harms. A human rights organization could better articulate how artistic projects should get consent for synthetic media.

And importantly, through in-depth analyses of these case studies and summarized policy recommendations, we identified paths forward for the AI field: how to responsibly differentiate between creative and malicious content, what context different stakeholders owe audiences trying to make sense of media, and how to build infrastructure that supports trustworthy media overall.

Now, based on this body of work and our years of observing the Framework’s implementation across sectors, we’ve crystallized key actionable takeaways for specific stakeholders within seven themes.

Through collaborative commitment from policymakers, industry, civil society, media, philanthropy, and academic stakeholders to enact the recommendations below, we can build a future where synthetic media serves creativity and communication while preserving truth, trust, and shared reality.

Recommendations

Download an index of these themes to see which recommendations are relevant to your organization.

These recommendations crystallize how the case study and Synthetic Media Framework themes can be implemented and achieved in practice.

For descriptions of stakeholders and other terminology, see the Synthetic Media Framework itself and our glossary of transparency terms.

  1. Hire staff and conduct research to build responsive policy frameworks
    Synthetic media policies must continuously evolve alongside technological innovations, user feedback, changing attitudes toward AI, and shifting media literacy norms.

    Recommendations
    1. Establish quarterly review cycles that incorporate user research, technology assessments, and media literacy trend analysis.
      • Relevant to: Builders; Distributors
    2. Create cross-functional teams that include technical, policy, product, and user experience representatives.
      • Relevant to: Builders; Distributors
    3. Conduct regular user research on synthetic media comprehension and disclosure interpretation across different demographics and regions.
      • Relevant to: Builders; Distributors; Academia; Civil Society
    4. Test user responses to various disclosure mechanisms (labels, watermarks, metadata) and track evolving user transparency expectations.
      • Relevant to: Builders; Distributors; Academia; Civil Society
  2. Deploy contextual disclosures (not just binary labels)
    While labeling content as “AI-generated” or “AI-modified” supports harm mitigation (even for Child Sexual Abuse Material, or CSAM), organizations should deploy such labels alongside other contextual signals about the content.

    Recommendations
    1. Implement multi-tiered disclosure systems (like Content Credentials) that provide detailed provenance information (an illustrative sketch follows the seven themes below).
      • Relevant to: Builders; Creators; Distributors
    2. Include contextual information such as content creator, source, and generation mode/model (while preserving privacy).
      • Relevant to: Builders; Creators; Distributors
    3. Standardize disclosure signals across platforms while maintaining design flexibility: entry points such as Google’s “learn more” or “three dots” and Adobe’s “Cr” icon should be consistent, with flexibility in what users see once they click through.
      • Relevant to: Builders; Creators; Distributors
    4. Train content moderators and automated moderation/ranking systems to evaluate multiple trust signals, rather than relying solely on AI labels.
      • Relevant to: Builders; Creators; Distributors
    5. Develop creative disclosure methods that enhance rather than detract from artistic expression (e.g., the halo effect in Welcome to Chechnya described in the WITNESS case).
      • Relevant to: Builders; Creators; Distributors
  3. Support media literacy and user education that precede and accompany disclosure
    All case studies underscored how disclosure mechanisms will fail without proper user understanding of what the disclosures mean.

    Recommendations
    1. Fund sustained media and AI literacy campaigns teaching interpretation of synthetic media and transparency tools.
      • Relevant to: Philanthropy; Policy; Builders; Creators; Distributors
    2. Adapt content to local languages, media patterns, and technological infrastructure through partnerships with regionally immersed civil society.
      • Relevant to: Philanthropy; Policy; Builders; Creators; Distributors; Civil Society
    3. Target vulnerable populations (elderly and youth) with specialized programs.
      • Relevant to: Philanthropy; Policy; Builders; Creators; Distributors; Academia
    4. Integrate media literacy into broader digital and AI literacy initiatives.
      • Relevant to: Philanthropy; Policy; Builders; Creators; Distributors; Academia
    5. Fund large-scale research programs on synthetic media harms and literacy effectiveness.
      • Relevant to: Philanthropy; Policy; Academia
  4. Prioritize high-risk content
    CSAM and election-related content demand heightened attention and specialized protocols within responsible synthetic media frameworks.

    Recommendations
    1. Establish dedicated response teams for election and CSAM content, and other known harms (like gender-based violence), with accelerated review processes.
      • Relevant to: Builders; Creators; Distributors
    2. Create separate policy documentation and training for high-risk content.
      • Relevant to: Builders; Creators; Distributors; Policy
    3. Evaluate training data to exclude CSAM and ensure model outputs include disclosure mechanisms.
      • Relevant to: Builders; Creators
    4. Consider regulating the use of synthetic media in elections by implementing strict disclosure requirements, including debunking capabilities during pre-election silence periods.
      • Relevant to: Policy
  5. Develop, adhere to, and share your organization’s synthetic media policy (especially in media organizations)
    Clear and publicly accessible policies describing synthetic media use—with demonstrable adherence—are essential for maintaining user trust, particularly for media organizations.

    Recommendations
    1. Media organizations should create and adopt user-facing policies for when synthetic media tools are used (or not used) in their reporting.
      • Relevant to: Distributors
    2. Distributors hosting third-party content (like social media platforms) should develop transparent policies for how synthetic media is used and disclosed on their platforms and ensure it is built into public-facing guidance such as community guidelines or standards.
      • Relevant to: Distributors
    3. Publicly acknowledge the capabilities and limits of these technologies to build user trust.
      • Relevant to: Builders; Creators; Distributors; Academia
    4. Publish detailed AI-use policies with specific examples, regular compliance reports, and clear escalation and appeal procedures.
      • Relevant to: Builders; Creators; Distributors
    5. Implement internal auditing processes and consider third-party verification for high-stakes applications.
      • Relevant to: Builders; Creators; Distributors; Academia
  6. Ensure that all media includes context about where it came from
    Even beneficial synthetic media applications require robust transparency, as positive intent cannot prevent unintended harm or misuse.

    Recommendations
    1. Provide provenance context for all content types, including authentic material.
      • Relevant to: Builders; Creators; Distributors
    2. Use prominent labels for high-risk content and subtle indicators for lower-risk material.
      • Relevant to: Creators; Distributors
    3. Design systems that remain relevant as AI integrates into standard creation tools.
      • Relevant to: Builders; Creators; Distributors; Academia; Civil Society; Policy
    4. Establish industry-wide thresholds for what counts as a material vs. immaterial edit, and therefore when different levels of disclosure are necessary.
      • Relevant to: Builders; Creators; Distributors; Academia; Civil Society; Policy
  7. Obtain consent as a proactive harm prevention tool (even for publicly available data)
    Even well-intentioned, artistic uses of publicly available data for synthetic media can cause unintended harm without proper consent processes.

    Recommendations
    1. Seek consent even for publicly available data, especially involving real people’s likenesses.
      • Relevant to: Builders; Creators; Distributors
    2. Consult next-of-kin, estates, or advocacy organizations when seeking consent involving deceased, missing, or vulnerable individuals.
      • Relevant to: Builders; Creators
    3. Balance artistic freedom with potential harm through proactive consultation with civil society.
      • Relevant to: Builders; Creators
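
To make the disclosure-related recommendations in themes 2 and 6 more concrete, below is a minimal, hypothetical sketch in Python of a multi-tiered disclosure record, a risk-based choice of disclosure prominence, and a check that surfaces several trust signals rather than a single binary AI label. The field names, tiers, and thresholds are illustrative assumptions only; they are not defined by the Synthetic Media Framework or by the Content Credentials (C2PA) specification.

```python
# Hypothetical sketch: fields, tiers, and thresholds are illustrative assumptions,
# not part of PAI's Framework or the Content Credentials (C2PA) specification.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class DisclosureTier(Enum):
    PROMINENT_LABEL = "prominent_label"    # high-risk content (e.g. election-related)
    STANDARD_LABEL = "standard_label"      # material AI generation or edits
    SUBTLE_INDICATOR = "subtle_indicator"  # immaterial edits, lower-risk content


@dataclass
class ProvenanceRecord:
    creator: str                      # content creator or publishing organization
    source: str                       # originating tool or capture device
    ai_generated: bool                # whether AI was used to generate the content
    edit_materiality: float           # 0.0 (untouched) .. 1.0 (fully synthetic)
    generation_model: Optional[str] = None   # model/mode used, if any (privacy-preserving)
    high_risk_context: bool = False   # e.g. election-related content
    extra_signals: dict = field(default_factory=dict)  # watermark checks, metadata integrity, etc.


def choose_disclosure_tier(rec: ProvenanceRecord, materiality_threshold: float = 0.3) -> DisclosureTier:
    """Map content risk and edit materiality to a disclosure tier (assumed thresholds)."""
    if rec.high_risk_context:
        return DisclosureTier.PROMINENT_LABEL
    if rec.ai_generated or rec.edit_materiality >= materiality_threshold:
        return DisclosureTier.STANDARD_LABEL
    return DisclosureTier.SUBTLE_INDICATOR


def trust_signals(rec: ProvenanceRecord) -> dict:
    """Surface multiple trust signals instead of relying on a single binary AI label."""
    return {
        "ai_label": rec.ai_generated,
        "has_provenance": bool(rec.creator and rec.source),
        "watermark_verified": rec.extra_signals.get("watermark_verified", False),
        "metadata_intact": rec.extra_signals.get("metadata_intact", False),
    }


if __name__ == "__main__":
    record = ProvenanceRecord(
        creator="Example Newsroom",       # hypothetical creator
        source="example-image-generator",  # hypothetical tool name
        ai_generated=True,
        edit_materiality=0.8,
        high_risk_context=True,
    )
    print(choose_disclosure_tier(record).value)  # -> prominent_label
    print(trust_signals(record))
```

The design point the sketch is meant to show: disclosure prominence is derived from both content risk and edit materiality, and audiences and moderation systems see provenance and verification signals alongside any AI label, rather than a single binary flag.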

Next Steps

The path forward is clear: we need coordinated action across every sector to ensure AI supports human flourishing. Technical standards, user education, industry alignment, and human-centered design must evolve together as AI becomes increasingly sophisticated, imitative, and convincing.

The window for proactive solutions is narrowing. PAI is responding by pushing for these recommendations and by forecasting what newer, more “person-like” AI systems mean for misrepresentation, deception, and communication, and how existing guidelines may need to adapt.

Ready to make an impact? Join the conversation. Share your expertise. Implement these recommendations in your organization.

The future of trust in digital communication and information depends on what we do today, not tomorrow.