Artificial intelligence is reshaping the world around us. From healthcare to finance to creative spaces, AI is evolving how we work, think, and play. Yet as the technology advances, public trust in AI is eroding, and organizations are beginning to feel the consequences.
A recent MIT report found that 95% of companies investing in AI pilots are seeing no return on investment. This serves as a reminder that technological progress alone is not enough. The success of AI depends not only on innovation but on trust.
As AI systems become more capable and integrated into our daily lives, building, deploying, using, and governing these systems responsibly is now more important than ever. Last week at Partnership on AI we convened our global community of partners at Salesforce Tower in San Francisco for our 2025 Partner Forum, bringing together experts from across academia, civil society, industry, and government.
The forum’s conversations centered around four core themes: Trust, Futures, International Perspectives, and Agents, each underscoring the need for the AI community to come together to share accountability, foster open dialogue, and align industry values with human values.
“Trust is a journey, not a destination. We have to wake up every morning and ask, ‘how can I earn the trust of the people I serve within and outside of my organization?’”
Trust as the Foundation for Responsible AI
Trust emerged as both the foundation and the connecting thread across every discussion at the forum. In the first session of the day, our CEO Rebecca Finlay joined UC San Diego’s David Danks, Brookings’ Nicol Turner Lee, and Intuit’s Liza Levitt in a panel unpacking what trust in AI really means.

From left to right: David Danks, Rebecca Finlay, Nicol Turner Lee, Liza Levitt
“Trust is not just our willingness to use something. It’s more about what we are delegating when we trust someone or something. It’s saying, ‘here’s something that matters to me, and I want you to take care of it the way I would.’”
While we may equate trust with confidence in a technology’s ability to perform, Danks reminded us that trust is far more relational. Trust is an act of vulnerability, one that assumes a system or organization will uphold the same values as the people who use it or rely on it. Turner Lee emphasized that the erosion of trust often stems from communities feeling excluded or misrepresented in data.
The discussion echoed a broader sentiment across the forum: rebuilding trust in AI is not just about transparency; it’s about centering people’s experiences and ensuring the technology will improve their lives.
“When people see themselves represented in the data and we are making fewer mistakes, then we can interact with confidence,” she said. “That’s when trust begins, when people know they will not be harmed.”
Shaping the Future
In an insightful exchange between Lilian Coral of New America and Kip Wainscott of JPMorgan Chase, the speakers shared their perspectives on how the evolving AI landscape is reshaping their sectors and what may lie ahead. Coral described how AI can empower the nonprofit sector to move from responding to symptoms to addressing systemic failures in areas like housing, healthcare, and education. Wainscott added a complementary perspective from industry, noting that businesses will face their own form of adaptation.
“In one version of the future, nonprofits become AI stewards — using data responsibly, designing for equity, and holding both public and private systems accountable.”
The theme of the future carried into an enlightening fireside chat on The Future of Work with Michael George of PAI and Aiha Nguyen of Data & Society. They explored how the rise of generative and agentic AI systems is not only transforming the labor market but redefining work itself. Nguyen highlighted the need to define “the new work” that emerges from AI and ensure a positive impact on both the labor market and job quality.

Aiha Nguyen, Data & Society
Guardrails for the Age of Agents
The most anticipated topic of the day centered around the rise of AI agents — autonomous systems capable of performing complex tasks with minimal human oversight. As these systems mature, so too does the urgency of establishing real-time safeguards, accountability mechanisms, and shared standards across industry. PAI’s recent report, Prioritizing Real-Time Failure Detection in AI Agents, served as a starting point for the conversation.
“This has been the year of tremendous momentum for AI agents. It’s sometimes hard for us to project forward, but I believe we are at the start of profound change.”
The panel emphasized a growing debate among key players in the AI space: how to balance innovation with governance. UC Berkeley’s Dawn Song argued that safeguards and guardrails would only serve to advance adoption and innovation, while Foundation for American Innovation’s Dean Ball contended that governance would evolve alongside AI systems, with AI perhaps even helping to develop that governance itself. Ball argued that the government’s role is to set the right incentive gradient, ensuring people are held accountable.

From left to right: Madhu Srikumar, Dean Ball, Dawn Song, Paula Goldman
Ultimately, Salesforce’s Paula Goldman asserted that the goal should be to find the right pattern for human and agent interaction. The conversation captured the challenge of the current moment: holding people and organizations accountable even as technical progress accelerates.
Throughout the day, one message remained consistent: the future of AI cannot be left to any single institution, company, or government. The future of responsible AI will be built by a community of collaborators who share a commitment to develop, deploy, and use AI responsibly.
“Impactful organizations, like PAI, bring everyone together.”
As we continue to convene partners across sectors and continents, our message from this year’s Partner Forum endures — building a trustworthy, inclusive, and forward-looking AI ecosystem requires an ongoing commitment from a diverse community.
To follow our work in this space, sign up for our newsletter: