Executive Summary

This report documents the serious shortcomings of risk assessment tools in the U.S. criminal justice system, particularly in the context of pretrial detention, though many of our observations also apply to their use for other purposes such as probation and sentencing. Several jurisdictions have already passed legislation mandating the use of these tools, despite numerous deeply concerning problems and limitations. Drawing on the views of the artificial intelligence and machine learning research community, PAI has outlined ten largely unfulfilled requirements that jurisdictions should weigh heavily and address before any further use of risk assessment tools in the criminal justice system.

Using risk assessment tools to make fair decisions about human liberty would require solving deep ethical, technical, and statistical challenges, including ensuring that the tools are designed and built to mitigate bias at both the model and data layers, and that proper protocols are in place to promote transparency and accountability. The tools currently available and under consideration for widespread use fall short on several of these counts, as outlined in this document.

We identified these shortcomings through consultations with our expert members, as well as through a review of the literature on risk assessment tools and of publicly available resources regarding tools currently in use. Our research was limited in some cases by the fact that, for most tools, insufficiently detailed information about their design and current usage is publicly available to evaluate them against all of the requirements in this report. Jurisdictions and the companies developing these tools should implement Requirement 8, which calls for greater transparency around the data and algorithms used, to address this issue for future research projects. That said, many of the concerns outlined in this report apply to any attempt to use existing criminal justice data to train statistical models or to create heuristics for decisions about the liberty of individuals.

The challenges of using these tools effectively fall into three broad categories, each of which corresponds to a section of our report:

  1. Concerns about the validity, accuracy, and bias of the tools themselves;
  2. Issues with the interface between the tools and the humans who interact with them; and
  3. Questions of governance, transparency, and accountability.

Although the use of these tools is motivated in part by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view them as objective or neutral simply because they are based on data. While formulas and statistical models provide some degree of consistency and replicability, they can still share or amplify many of the weaknesses of human decision-making. Decisions about what data to use, how to handle missing data, what objectives to optimize, and what thresholds to set all have significant implications for the accuracy, validity, and bias of these tools, and ultimately for the lives and liberty of the individuals they assess.
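To make the stakes of one such design decision concrete, the sketch below illustrates how a single “high risk” threshold can produce different false positive rates for two groups when the features behind the scores act as proxies for group membership. The data, the score distributions, and the proxy_shift parameter are all fabricated for illustration; this is a minimal sketch, not a model of any deployed tool.

    import random

    random.seed(0)

    def simulate_group(n, base_rate, proxy_shift=0.0):
        """Generate hypothetical (score, reoffended) pairs for one group.

        proxy_shift mimics features (e.g., prior arrests) that are correlated
        with group membership but not with actual reoffense.
        """
        people = []
        for _ in range(n):
            reoffended = random.random() < base_rate
            mean = 0.6 if reoffended else 0.4
            score = min(1.0, max(0.0, random.gauss(mean + proxy_shift, 0.15)))
            people.append((score, reoffended))
        return people

    def false_positive_rate(people, threshold):
        """Share of people who did NOT reoffend but score at or above the cutoff."""
        negatives = [score for score, reoffended in people if not reoffended]
        return sum(score >= threshold for score in negatives) / len(negatives)

    group_a = simulate_group(10_000, base_rate=0.2)
    group_b = simulate_group(10_000, base_rate=0.2, proxy_shift=0.08)

    for threshold in (0.4, 0.5, 0.6):
        print(f"threshold {threshold}: "
              f"FPR group A = {false_positive_rate(group_a, threshold):.2f}, "
              f"FPR group B = {false_positive_rate(group_b, threshold):.2f}")

In this toy setup, the same cutoff flags a markedly larger share of non-reoffenders in group B at every threshold, showing how a seemingly neutral threshold choice can carry disparate consequences.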

In addition to technical concerns, there are human-computer interface issues to consider in the implementation of such tools. Human-computer interface here refers to how humans collect and feed information into the tools, and how humans interpret and evaluate the information that the tools generate. These tools must be held to high standards of interpretability and explainability to ensure that users (including judges, lawyers, and clerks, among others) can understand how the tools’ predictions are reached and make reasonable decisions based on them. To improve interpretability, such predictions should explicitly include information such as error bands that express the uncertainty behind them. In addition, users must receive training on how and when to use these tools appropriately, and on how to understand the uncertainty of their results.
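As one illustration of what an error band might look like in practice, the following sketch reports a risk estimate as an interval derived from an ensemble of models trained on bootstrap resamples of the data, rather than as a single falsely precise score. The ensemble and its 200 predictions are hypothetical, not drawn from any real tool.

    import random
    import statistics

    random.seed(1)

    # Hypothetical: 200 risk predictions for one individual, one from each
    # model in an ensemble trained on bootstrap resamples of the data.
    predictions = sorted(
        min(1.0, max(0.0, random.gauss(0.32, 0.07))) for _ in range(200)
    )

    point = statistics.mean(predictions)
    low = predictions[int(0.025 * len(predictions))]   # 2.5th percentile
    high = predictions[int(0.975 * len(predictions))]  # 97.5th percentile

    # The decision-maker sees the spread, not a single bare number.
    print(f"Estimated risk: {point:.0%} (95% interval: {low:.0%} to {high:.0%})")

Presenting the interval alongside the point estimate makes clear to a judge or clerk how much the estimate could plausibly vary, which a lone score conceals.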

Moreover, to the extent that such systems are adopted to make life-changing decisions, the tools and those who operate them must meet high standards of transparency and accountability. The data used to train the tools, and the tools themselves, must be subject to independent review by third-party researchers, advocates, and other relevant stakeholders. The tools must also receive ongoing evaluation, monitoring, and auditing to ensure that they are performing as expected and remain aligned with well-founded policy objectives.
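One simple form such ongoing monitoring could take is a periodic calibration audit comparing predicted risk to observed outcomes on new cases. The sketch below runs such a check on fabricated monitoring data in which a hypothetical tool systematically overestimates risk; the calibration_table helper and the data are invented for illustration.

    import random

    random.seed(2)

    def calibration_table(records, n_buckets=5):
        """Group (predicted_risk, outcome) pairs by score bucket and compare
        the mean prediction to the observed outcome rate in each bucket."""
        buckets = [[] for _ in range(n_buckets)]
        for pred, outcome in records:
            idx = min(int(pred * n_buckets), n_buckets - 1)
            buckets[idx].append((pred, outcome))
        for i, bucket in enumerate(buckets):
            if bucket:
                mean_pred = sum(p for p, _ in bucket) / len(bucket)
                observed = sum(o for _, o in bucket) / len(bucket)
                yield i, len(bucket), mean_pred, observed

    # Fabricated monitoring data: outcomes occur at ~80% of the predicted
    # rate, i.e., the hypothetical tool overestimates risk.
    records = []
    for _ in range(5000):
        p = random.random()
        records.append((p, random.random() < p * 0.8))

    for i, n, mean_pred, observed in calibration_table(records):
        print(f"bucket {i}: n={n}, mean predicted={mean_pred:.2f}, "
              f"observed rate={observed:.2f}")

An audit of this kind, repeated on each new cohort of cases, gives reviewers a concrete artifact for judging whether a tool continues to perform as expected after deployment.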

In light of these issues, as a general principle, these tools should not be used alone to make decisions to detain or to continue detention. Given the pressing issue of mass incarceration, it might be reasonable to use these tools to facilitate the automatic pretrial release of more individuals, but they should not be used to detain individuals automatically without additional (and timely) individualized hearings. Moreover, any use of these tools should address the bias, human-computer interface, transparency, and accountability concerns outlined in this report.

This report highlights some of the key problems encountered in using risk assessment tools for criminal justice applications. Many important questions remain open, however, and unknown issues may yet emerge in this space. Surfacing and answering those concerns will require ongoing research and collaboration among policymakers, the AI research community, and civil society groups. It is PAI’s mission to spur and facilitate these conversations and to produce research that bridges these gaps.