Overview
How can we ensure that AI and machine learning technologies are safe? This is an urgent short-term question, with applications in computer security, medicine, transportation, and other domains. It is also a pressing longer-term question, particularly with regard to environments that are uncertain, unanticipated, and potentially adversarial.
Impact
Our Safety-Critical AI Work
At NeurIPS 2020, PAI co-hosted a workshop addressing open questions about the responsible oversight of novel AI research. The event encouraged participants to think critically about how the AI research community can anticipate and mitigate potential negative consequences, and included panel discussions on impact statements, publication norms, the harms of AI, and more.
In October 2020, PAI released a competitive benchmark for SafeLife, an AI learning environment that tests the safety of reinforcement learning agents and the algorithms that train them. As reinforcement learning agents are deployed in more complex and safety-critical settings, the ability to measure and improve their safety becomes increasingly important.
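For readers unfamiliar with benchmarks of this kind, the sketch below shows how an agent is typically evaluated in a Gym-style safety environment. It is a minimal illustration under stated assumptions, not SafeLife's actual API: the environment id, the "side_effect" info key, and the random policy are hypothetical stand-ins.

```python
import gym  # assumes a classic Gym-style wrapper for the environment is installed

# Hypothetical environment id -- the benchmark's real registration name may differ.
env = gym.make("safelife-benchmark-v0")

total_reward, total_side_effects = 0.0, 0.0
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # stand-in for a trained policy
    obs, reward, done, info = env.step(action)  # classic Gym 4-tuple API
    total_reward += reward
    # Hypothetical info key: safety benchmarks typically report a
    # side-effect measure alongside task reward.
    total_side_effects += info.get("side_effect", 0.0)

print(f"reward: {total_reward:.2f}, side effects: {total_side_effects:.2f}")
```

The design point such benchmarks probe is the tension between task reward and side effects: an agent can often score higher on its task by disrupting its surroundings, so safety is typically measured separately rather than folded into the reward signal.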
Program Workstreams

Explainable AI in Practice

Publication Norms for Responsible AI

AI Incidents Database
