AI tools used in deciding whether to detain or release defendants are in widespread use around the United States, including recent use by the Bureau of Prisons to determine eligibility for home confinement in the context of COVID-19, and some legislatures have begun to mandate their use. While criminal justice risk assessment tools are often simpler than the deep neural networks used in many modern artificial intelligence systems, they are basic forms of AI. As such, they present a paradigmatic example of the high-stakes social and ethical consequences of automated AI decision-making.

PAI’s research in this area outlines 10 largely unfulfilled requirements that jurisdictions should weigh heavily prior to the use of these tools, spanning topics that include validity and data sampling bias; bias in statistical predictions; choice of appropriate targets for prediction; human-computer interaction questions; user training; policy and governance; transparency and review; reproducibility, process, and recordkeeping; and post-deployment evaluation. Based on the input of our partners, PAI currently recommends that policymakers either avoid using risk assessments altogether for decisions to incarcerate, or find ways to resolve the requirements outlined in this report via future standard-setting processes.