Preparing for AI Agent Governance
A research agenda for policymakers and researchers
![Preparing for AI Agent Governance](https://partnershiponai.org/wp-content/uploads/2025/09/policy-agenda-circle-1800x832-1.webp)
Key Takeaways
- How AI agents will impact society is still uncertain. While the body of scholarly work and publicly available evidence are growing, we don’t yet know enough about how AI agents will be used or what impacts they may have. However, AI investments continue to soar, so policymakers should begin to prepare now.
- Policymakers should prioritize evidence and information gathering, including through sandboxes and testbeds. Given this uncertainty, policymakers should promote activities that generate evidence rather than advancing prescriptive regulations. Sandboxes and testbeds, which enable experimentation with new systems under regulatory supervision, are promising options policymakers can use to build expertise and track developments.
- Subsequent rule-making will require substantial research from inside and outside of government. Academia, civil society, government, and industry should work together to generate evidence on AI agents’ capabilities, risks, societal impacts, and potential policy interventions. This will support future policy development.
- This paper provides a roadmap for this research. We outline three foundational requirements for governing AI agents and detail a comprehensive research agenda, including 12 top-level and 45 sub-level questions, designed to directly support policymakers in developing evidence-based policy.
Industry leaders have named 2025 the “year of agentic exploration,” foretelling the adoption of systems that will change how we interact, what jobs we perform, and even how we think. However, these innovations face significant hurdles: widespread AI agent adoption has been stifled by persistent reliability and security challenges, giving policymakers an opportunity to prepare thoughtfully for how to promote the benefits and protect against the risks of AI agents.
Making the right public policy decisions on AI governance, including on the institutions, policies, regulations, and tools that ensure AI systems operate in the public interest, requires substantial research and evidence. Though significant research has been done on AI more broadly and on the theoretical impacts of AI agents, we don’t yet know enough about how AI agents will be used or what impacts they may have. The key challenge for policymakers is not whether to regulate now, but how to govern AI agents while their impacts are still uncertain, and how to identify what evidence will be needed to make informed decisions when decisive action is required.
In this paper, we outline a research agenda to guide policymakers and researchers in preparing for the governance of AI agents. The agenda is structured around three core technology governance requirements for policymakers: foundational understanding, impact assessment, and intervention evaluation. Within these, we present 12 top-level questions and 45 detailed open sub-questions, drawn from the literature and partner discussions, that policymakers and researchers should explore. We also explore how monitoring, sandboxes, and testbeds can serve as important first steps for policymakers.
PAI has already started conducting research that supports this agenda, and has recently published work on prioritizing real-time failure detection and global governance. We will continue to build on this work, and we look forward to collaborating with our community to advance AI agent governance.
Download the full research agenda and be a part of guiding evidence-based AI agent governance now.