
PAI Submits Response to NIST’s Request for Information on AI Risk Management Framework

With the passage of the National Artificial Intelligence Initiative Act in January 2021, the National Institute of Standards and Technology (NIST) was directed by the U.S. Congress to develop a voluntary risk management framework for trustworthy artificial intelligence systems. In July, NIST publicly requested information to help inform, refine, and guide the development of this framework. PAI was pleased to submit several examples of our work in response, with a particular focus on the AI Incident Database, which is supported by PAI and embodies a number of the attributes NIST believes are necessary for its planned AI Risk Management Framework.

Please read our entire letter to NIST here, or an excerpt from it below:

One example of PAI’s work in this area is a recent (2021) Safety Critical AI report, “Managing the Risks of AI Research: Six Recommendations for Responsible Publication,” which addresses some of the potential risks of AI research and makes recommendations on research publication and dissemination practices to minimize misuse.

A second example is PAI’s Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles (ABOUT ML) initiative, which brings together a wide range of stakeholders to advance public discussion and promulgate best practices into new norms for greater transparency in the use of ML in industry, government, and civil society. Current research seeks to address the organizational, technological, and other challenges of implementing documentation in key phases throughout the ML system lifecycle, from design to deployment, including annotations of data, algorithms, performance, and maintenance requirements.

The third example, and the one we would like to focus on, is the AI Incident Database (AIID), which is supported by PAI. The database is a tool to identify, assess, manage, and communicate AI risk and harm. It is currently the only collection of AI deployment harms and near-harms spanning all disciplines, geographies, and use cases. The AI Incident Database was created as an open, collective record of AI harms to inform the beneficial development of AI technologies moving forward. Leading the development and management of the project are Sean McGregor, PhD, a machine learning researcher and technical lead for the IBM Watson AI XPRIZE at the XPRIZE Foundation (a PAI partner), and Christine Custis, PhD, Head of ABOUT ML and Fairness, Transparency, and Accountability at PAI. Additional PAI Partners provided important input into the development of the database.

The database is a constantly evolving data product. Current and intended users include system architects, industrial product developers, academics, researchers, public relations managers, standards organizations, and policymakers. These users are invited to use the Discover application to proactively explore how recently deployed intelligent systems have produced unexpected outcomes in the real world. In doing so, they may avoid repeating similar mistakes in their own development efforts.