
AI Agents & Global Governance: Analyzing Foundational Legal, Policy, and Accountability Tools


Unlike chatbots, AI agents are anticipated to work with a high degree of autonomy to tackle increasingly complex tasks. This includes setting their own goals and executing them autonomously. Some AI agents can engage with different systems by interacting with computer tools or by writing and running code. These capabilities can make them useful for many tasks, but also difficult to predict or control.

As agents become more capable and widespread, so do their risks. They can amplify threats that cross national borders, such as interference in elections or disruptions to critical infrastructure, and exacerbate human rights concerns, from privacy violations to limits on free expression. Addressing these challenges requires more than national regulation. It requires global governance.

This paper examines how these potential risks can be managed through foundational global governance tools that are non-AI-specific in nature and universal in scope: international law, non-binding global norms, and global accountability mechanisms. We explore how these can be used, where they fall short, and what must change to strengthen them.

Key Takeaways

  • Existing international obligations matter. Governments must respect sovereignty, prevent cross-border harms, and protect human rights when using or regulating AI agents.
  • Companies are part of the equation. While not directly bound by international law, firms benefit from aligning with global standards and calling out unlawful state behavior.
  • Global accountability channels exist. International institutions, particularly the UN system, provide avenues for oversight and redress, alongside other legal and normative mechanisms.
  • Important gaps remain. Weak enforcement, unclear liability, and conflicting domestic frameworks risk undermining global governance.

Why It Matters

  • For governments: Upholding international law will be central to stability and cooperation as AI agents spread.
  • For companies: Respecting global rules strengthens trust with users, investors, and regulators.
  • For civil society and individuals: Demanding accountability ensures AI development serves the public interest.