
Agents of Change

This summer, the generative-AI hype cycle officially moved from peak expectations into disillusionment. The evidence is everywhere: studies highlighting the risks of AI companions, media reporting on the dangers of chatbots, and research showing that 95% of businesses have not yet seen a return on their generative-AI investments.

In response, attention has turned to agents as tools to drive both commercial returns and customer value. Unlike generative-AI applications, which produce content, agents can take direct actions.

Agents offer a win for productivity: a virtual assistant that can independently book a reservation and send calendar invitations is a step up from a generative-AI application that provides a list of top restaurants and available evenings.

Agents also raise the stakes. If a gen-AI application fails, the customer gets a list with incorrect restaurants and dates. If an AI agent fails, the impact is far greater. With calendars and reservations, the risks might seem low: booking the wrong date or restaurant. Left unaddressed, however, agent failures could wreak havoc on a restaurant’s profits or lead to the unlawful release of sensitive personal information. When applied to other sectors, such as health care or banking, the risks increase significantly.

Partnership on AI was created to tackle challenges just like this. Only through a community that brings together experts from civil society, industry, and academia can we anticipate the impact of emerging AI development on people and respond with clear recommendations for practice and policy.

When it comes to agents, we’ve already begun.

When chatbots and image generators rose in popularity, we defined what best practice looks like for the entities building, creating, and distributing AI-generated media through our Synthetic Media Framework.

Before the AI Safety Summit at Bletchley Park, we released the first framework for Safe Foundation Model Deployment, which accounted for both open and closed releases of advanced models.

Building on these efforts, I shared earlier this year our intent to focus on agents. Here’s what we’ve been doing.

Agent Monitoring

PAI’s AI Safety team is developing a framework for monitoring AI agents. The first publication from this work asserts both the necessity and the value of real-time failure detection in AI agent systems. Filled with useful definitions and frameworks, this collaborative paper is the foundation for our ongoing work in this area, seeded by a grant from Georgetown’s Center for Security and Emerging Technology. A new AI Safety Steering Committee is being formed to take this work forward.

Agent Policy Governance

With the advice of our Policy Steering Committee, PAI’s Policy Team has focused on developing upcoming policy briefs, reports, and convenings on agent governance through the lens of multilateral organizations as well as provincial, state, and local governments. These efforts encourage policymakers to better anticipate future impacts through proactive research, capacity building, and experimentation. This work will also inform our complementary initiative on an AI Assurance Roadmap.

AI and Human Connection

Along with the safety and policy impacts of AI agents, PAI is setting forth new work on how AI affects social connection and information-sharing. Our new AI & Human Connection team is collaborating with a new Steering Committee to address the pressing question: How can AI strengthen and sustain informed and connected communities?

This work will look into how AI agents, such as chatbots, affect social connection. As we’ve seen from countless news reports over the past year, individuals and organizations are exploring AI’s potential for companionship and even mental health therapy. This new effort from PAI and our community will explore how AI is changing the ways we connect with each other and learn about the world.

What’s Next

I look forward to updating you later this year on our work in AI, Labor and Economy, where we’re also taking up the question of agents. We’ll continue to seek advice from our Enterprise and Philanthropy Steering Committees. This is truly a team effort.

Since our founding, we have worked with our global community of experts to explore how we can create a trustworthy AI ecosystem, where stakeholders from across sectors contribute to safe and responsible AI.

This work continues today. And importantly, it evolves as the technology shifts and the world we live in changes.

Together, we are building out a robust AI agent governance ecosystem, developing sociotechnical solutions, driving forward policy research, and advancing collective action.

Join us on our mission to create positive change. We need your creativity to make all of this happen. For more information and to get involved, contact us at contact@partnershiponai.org.