“Safety-Critical AI” Working Group launches in San Francisco
The Partnership on AI’s (PAI) Safety-Critical AI Working Group met for the first time today at Microsoft in San Francisco, the last of PAI’s three inaugural Working Group launches. The group — composed of leading experts from technology, civil society, and academia — is co-chaired by Peter Eckersley, Chief Computer Scientist at the Electronic Frontier Foundation, and Ashish Kapoor, Principal Researcher at Microsoft.
Rapid recent progress in AI technologies has raised important safety concerns. These include urgent short-term questions about how to make AI systems behave in desirable ways in uncertain and potentially adversarial environments, as well as the longer-term need for social and technical foundations that make future AI tools safe, predictable, and trustworthy. The complexity and adaptability of AI systems make these safety considerations both difficult and of paramount importance.
Researchers, practitioners, advocates, and other stakeholders must think about how the creators of intelligent systems design safety constraints and values into those systems; they must also consider techniques for measuring whether those design efforts have succeeded in producing safe, inclusive technologies.
To that end, the Safety-Critical AI Working Group will deliberate on and work toward the development of principles, best practices, and research exploring safety-critical systems. Potential projects include a toolkit for engineers designing safety-critical systems, an assessment of current safety-critical systems with the goal of establishing best practices, and a push for more open source collaboration to ensure that safety is considered in the design and implementation of intelligent systems. We look forward to announcing further details on this community’s projects and priorities soon.