
How Can AI Fortify Informed and Connected Communities?

Announcing PAI’s New Work on AI and Human Connection


When Partnership on AI began its programming in 2018, AI could generate videos, recommend content, and simulate conversations. Deepfakes were not yet widespread or perfectly photorealistic, but PAI was working across sectors to anticipate what lay ahead for our information ecosystem. Even as AI capabilities were evolving, we collaborated with partners across industries to prepare for its potential impacts. Newsrooms like the BBC were grappling with how their existing journalistic standards could address novel AI risks. Social media platforms like Meta (then Facebook) hoped to better support audiences encountering AI-generated content and to identify associated harms. Dating apps like Bumble were asking how to authenticate profiles as real and prepare for an influx of AI profiles.

As we predicted in 2019, “AI systems promise to augment human perception, cognition, and problem-solving abilities. [But] they also pose risks of manipulation, abuse, and other negative consequences, both foreseen and unintended.” Over the years that followed, many of those risks and opportunities materialized. AI was clearly ushering in an unprecedented era of knowledge sharing and connection online, but the pace of change was about to accelerate dramatically.

OpenAI’s release of DALL-E in 2021 brought generative AI, and specifically synthetic media, to the public. ChatGPT’s launch in 2022 accelerated this transformation. Since then, we’ve witnessed profound improvements in the technology’s realism and accessibility, fundamentally impacting trust and truth online.

When a deepfake image of the Pentagon on fire moved financial markets in 2023, PAI’s early decision to focus on AI and media integrity proved prescient. PAI’s Synthetic Media Framework provided builders, creators, and distributors of synthetic media with responsible use guidelines and transparency measures to empower people in the AI age. Eighteen diverse institutions — from OpenAI to Code for Africa to the CBC to TikTok — signed on to our guidance, and all of them wrote long-form case studies examining adoption of the recommendations in real-world scenarios.

“AI transcends its purely technological status, simultaneously affecting how people socialize and consume knowledge.”

Yet even as we addressed the challenges synthetic media poses, from impersonation to misrepresentation, AI continued evolving in new directions. Now, in 2025, we confront an evolved AI landscape where interactive and increasingly capable, “personlike” AI systems, like AI agents and chatbots, affect how people understand each other and the world around them.

Today people not only develop relationships through AI, but also with AI — for romantic, therapeutic, or social purposes. Information and knowledge are increasingly synthesized and delivered through chatbots and conversational interfaces.

AI has become central to social connection and public knowledge, vital precursors to healthy epistemic communities, vibrant democracies, and overall human flourishing. According to a Pew Research Center study, 57% of Americans surveyed report using AI at least once a day.

While the foundations for this transformation were laid years ago, AI’s capabilities — and consumer packaging, public integration, and use — have transformed. Today’s tools are more dynamic, emotionally evocative, sycophantic, personalized, persuasive, and interactive, making them seem genuinely “personlike” to users. To the teenager chatting daily with Character.AI’s virtual companions or the elderly person asking Amazon’s Alexa questions throughout the day, AI transcends its purely technological status, simultaneously affecting how people socialize and consume knowledge.

Meeting this moment in AI requires the entire ecosystem — not just technology companies, but also civil society, government, philanthropy, academia, media, and the public — to bring both attention and intention to how we all shape AI’s trajectory. Stakeholders must grapple with how our informational and social lives intertwine: how misleading ideas spread through social networks, how chatbots become trusted advisors, and how the quality of our social lives affects not only our emotional well-being, but also our participation in public discourse and civic life.

Trust in information is fundamentally trust in sources and people. AI systems that cannot navigate both trustworthy communication and authentic human connection will fail at their most critical moments.

Partnership on AI’s Newest Area of Work: AI and Human Connection

To meet these interconnected challenges, PAI is launching a new area of work: AI and Human Connection. It will build upon PAI’s established leadership in AI and Media Integrity, and on the knowledge base from its Collaborations Between People and AI Systems projects, ultimately responding to the pressing question: How can AI strengthen and sustain informed and connected communities?

As researchers at Google DeepMind recently emphasized, we “must anticipate, monitor and mitigate against risks introduced by anthropomorphic AI design.” PAI’s AI and Human Connection program answers this call.

Some AI systems are built to give us information, but they’re starting to feel like friends or companions. Other AI systems are made for socializing, but they end up teaching us and shaping what we believe. To handle these changes properly, we need to design AI systems that tackle both information-sharing and connection.

“Trust in information is fundamentally trust in sources and people.”

Our work on AI and Human Connection will cultivate the interdisciplinary expertise needed to create AI that fortifies human epistemic and social communities in an age of unprecedented informational and relational complexity. Ultimately, it will address the ways that AI is changing how we connect with each other and how we learn about the world.

This effort will expand on seven years of previous work promoting AI that supports knowledge and connection. Since 2018, PAI has created and driven adoption of practical guidance that ensures AI positively impacts the trustworthiness of media and information. In particular, PAI’s Synthetic Media Framework continues to support AI practitioners and policymakers. Through long-form case studies, we also provide a venue for reflection on synthetic media developments and transparent documentation of how practitioners adopt our guidance.

These insights can support responsible development of anthropomorphic AI, too. PAI provides recommendations on fairness, documentation, disclosure, transparency, consent, and responsible and harmful uses that can be adapted to the increasingly capable AI systems of today.

What’s Next?

New Steering Committee. PAI’s AI and Media Integrity Steering Committee was integral to the creation and adoption of PAI’s Synthetic Media Framework. We are building on this success through the formation of an AI and Human Connection Steering Committee, focused on advancing adoption of PAI’s Synthetic Media Framework with new technologies and shaping the field around knowledge- and connection-affirming AI. The Steering Committee will include experts from Thorn, the ACLU, the Knight First Amendment Institute, and the CBC; the full group will be announced in late 2025.

Workshop on AI and Human Connection. In the next year, PAI will convene its first workshop on this new topic — focusing on how we can develop a comprehensive roadmap for research, policy, and technology development to ensure interactive AI systems positively affect information, communication, and human connection. The roadmap will support a framework following a similar adoption model to our Synthetic Media Framework: practitioners will implement it, civil society will use it for advocacy, and policymakers crafting norms, standards, and regulations on related topics (like those we’ve recently seen in California and New York) will reference it.

Synthetic Media Framework. The Synthetic Media Framework is foundational to our future work. We will continue to work with organizations to promote its adoption and integration into new sectors and to share its insights with policymakers around the world.

The next year is critical. Society needs a new generation of AI practitioners, researchers, and leaders who understand that media and information integrity and social trust aren’t separate problems — they’re two manifestations of the same challenge: building AI systems that help humans discern and embrace what’s real, true, authentic, and human. PAI’s AI and Human Connection Program will broaden, and deepen, our Partner community’s impact.

To stay up to date on PAI’s AI and Human Connection Program as we tackle these defining challenges, sign up for our newsletter. Connect with our growing community of Partners, contribute your expertise, and help us forge the path forward, together.

Let’s build a world where AI connects communities rather than fragments them, where information systems inform rather than confuse, and where digital interactions enhance rather than degrade human dignity.