Requirement 6: Users of risk assessment tools must attend trainings on the nature and limitations of the tools

Regardless of how risk assessment outputs are explained or presented, clerks and pretrial assessment services staff must be trained on how to properly code data about individuals into the system. Human error and a lack of standardized best practices for data input could have serious implications for data quality and, downstream, for the validity of risk predictions.
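One form such standardization can take is automated validation of records at intake, so that implausible or malformed entries are flagged before they reach the tool. The sketch below is purely illustrative: the field names (`age`, `charge_category`, `prior_failures_to_appear`) and plausibility rules are invented for this example and do not come from any actual pretrial system.

```python
# Hypothetical sketch of intake-data validation. All field names and
# allowed ranges below are invented for illustration only.
ALLOWED_CHARGE_CATEGORIES = {"felony", "misdemeanor", "violation"}

def validate_intake_record(record: dict) -> list[str]:
    """Return a list of data-entry problems; an empty list means the record passes."""
    problems = []

    age = record.get("age")
    if not isinstance(age, int) or not (12 <= age <= 110):
        problems.append(f"age out of plausible range: {age!r}")

    charge = record.get("charge_category")
    if charge not in ALLOWED_CHARGE_CATEGORIES:
        problems.append(f"unknown charge category: {charge!r}")

    priors = record.get("prior_failures_to_appear")
    if not isinstance(priors, int) or priors < 0:
        problems.append(f"prior FTA count must be a non-negative integer: {priors!r}")

    return problems
```

A checklist of this kind does not eliminate human error, but it makes coding conventions explicit and auditable rather than leaving them to each clerk's judgment.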

At the same time, judges, attorneys, and other relevant stakeholders must receive rigorous training on how to interpret the risk assessments they receive. For any such tool to be used appropriately, judges, attorneys, and court employees should have regular training on the function of the tool itself and on how to interpret risk classifications, whether quantitative scores or more qualitative “low/medium/high” ratings. These trainings should address the considerable limitations of the assessment, its error rates, the interpretation of scores, and how to challenge or appeal a risk classification. They should likely include basic training on understanding confidence intervals: humans are not naturally good at reasoning about probabilities or confidence estimates, though training materials and games exist that can teach these skills (see, e.g., https://acritch.com/credence-game/). More research is required on how these risk assessment tools inform human decisions, in order to determine what forms of training will support principled and informed application of these tools and where gaps exist in current practice. To inform this future research, DeMichele et al.’s interview study of judges using the PSA tool provides useful context for how judges understand and interpret these tools. (DeMichele, Matthew, Megan Comfort, Shilpi Misra, Kelle Barrick, and Peter Baumgartner, “The Intuitive-Override Model: Nudging Judges Toward Pretrial Risk Assessment Instruments,” April 25, 2018. Available at SSRN: https://ssrn.com/abstract=3168500 or http://dx.doi.org/10.2139/ssrn.3168500.)
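The point about confidence intervals can be made concrete. Suppose a jurisdiction observes, hypothetically, that 12 of 100 people classified “low risk” failed to appear: the 12% point estimate hides substantial statistical uncertainty, which an interval makes visible. The sketch below computes a standard Wilson score interval for such a proportion (the 12/100 figures are invented for illustration, not drawn from any actual validation study).

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 gives an approximate 95% interval.
    """
    if n == 0:
        return (0.0, 1.0)  # no data: the interval is uninformative
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical example: 12 failures to appear among 100 "low risk" releases.
low, high = wilson_interval(12, 100)
```

For these illustrative numbers the 95% interval spans roughly 7% to 20%, a wide range that training should teach decision-makers to keep in mind before treating a single observed rate as a precise property of a risk category.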

Governance, Transparency, and Accountability

As risk assessment tools supplement judicial processes and represent the implementation of local policy decisions, jurisdictions must take responsibility for their governance. Importantly, they must remain transparent to citizens and accountable to the policymaking process. Such governance requires:

(i) stakeholder and broad public engagement in the design and oversight of such systems. (See the University of Washington Tech Policy Lab’s Diverse Voices methodology for a structured approach to inclusive requirements gathering: Magassa, Lassana, Meg Young, and Batya Friedman, Diverse Voices, 2017, http://techpolicylab.org/diversevoicesguide/.)

(ii) transparency around the data and methods used to create these tools. (Such disclosures support public trust by revealing the existence and scope of a system, and by enabling challenges to the system’s role in government; see Pasquale, Frank, The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press, 2015. Certain legal requirements on government use of computers demand such disclosures: at the federal level, the Privacy Act of 1974 requires agencies to publish notices of the existence of any “system of records” and provides individuals access to their records, and similar data protection rules exist in many states and, in Europe, under the General Data Protection Regulation (GDPR).)

(iii) disclosure of relevant information to defendants to allow them to contest decisions informed by these tools; and

(iv) pre-deployment and ongoing evaluation of the tool’s validity, fitness for purpose, and role within the broader justice system. (Reisman, Dillon, Jason Schultz, Kate Crawford, and Meredith Whittaker, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, 2018.)