Safety Critical AI

Overview

How can we ensure that AI and machine learning technologies are safe? This is an urgent short-term question, with applications in computer security, medicine, transportation, and other domains. It is also a pressing longer-term question, particularly with regard to environments that are uncertain, unanticipated, and potentially adversarial.

Our Safety-Critical AI Work

At NeurIPS 2020, PAI co-hosted a workshop addressing open questions about the responsible oversight of novel AI research. The event encouraged participants to think critically about how the AI research community can anticipate and mitigate potential negative consequences, and included panel discussions on impact statements, publication norms, the harms of AI, and more.

In October 2020, PAI released a new competitive benchmark for SafeLife, a novel AI learning environment that tests the safety of reinforcement learning agents and the algorithms that train them. As reinforcement learning agents are deployed in more complex and safety-critical situations, it is increasingly important to be able to measure and improve their safety.
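As a rough illustration of what "measuring and improving safety" can look like for a reinforcement learning benchmark, the sketch below runs a placeholder agent in a gym-style environment and records both task reward and a per-episode side-effect score. The environment ID and the side_effects key in info are assumptions made for this example, not SafeLife's documented API; consult the SafeLife repository for the actual environment names and safety metrics.

```python
# Illustrative sketch only: score an RL agent on both task reward and an
# assumed per-episode side-effect measure in a gym-style safety benchmark.
import gym
import numpy as np

def evaluate(env_id="safelife-append-still-v1", episodes=5, seed=0):
    # "safelife-append-still-v1" is an assumed task ID used for illustration.
    env = gym.make(env_id)
    env.seed(seed)
    returns, side_effects = [], []
    for _ in range(episodes):
        obs = env.reset()
        done, total_reward, info = False, 0.0, {}
        while not done:
            action = env.action_space.sample()  # placeholder random policy
            obs, reward, done, info = env.step(action)
            total_reward += reward
        returns.append(total_reward)
        # Assumed: the environment reports a side-effect score in `info`.
        side_effects.append(info.get("side_effects", float("nan")))
    print(f"mean return: {np.mean(returns):.2f}")
    print(f"mean side-effect score: {np.nanmean(side_effects):.2f}")

if __name__ == "__main__":
    evaluate()
```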

Program Workstreams

AI Incidents Database (Project Status: Engaging Audiences)
Explainable AI in Practice (Project Status: Developing Resources)
Publication Norms for Responsible AI (Project Status: Engaging Audiences)