
Why PATTERN Should Not Be Used: The Perils of Using Algorithmic Risk Assessment Tools During COVID-19


From time to time, the Partnership on AI publishes Issue Briefs and Discussion Papers on topics that our community cares about, inspired by or building upon our prior work in specific areas. These papers are authored by members of our staff Research Team and/or by Research Fellows affiliated with our organization. The content herein does not reflect the views of any particular member organization of the Partnership on AI.

In an effort to protect the health and safety of inmates and the Federal Bureau of Prisons (BOP) personnel in the wake of the COVID-19 pandemic, Attorney General William Barr issued a memo on March 26, 2020, listing a set of discretionary factors for determining which inmates should be transferred from prison to home confinement.

Among the key factors established in the memo is the use of an algorithmic risk assessment tool called “PATTERN,” a tool developed to measure recidivism risk but one documented to exhibit racial biases. Under the memo, inmates scoring anything above “minimum,” the lowest level of risk, are not to be prioritized for home confinement. Since the release of the memo, BOP has placed a mere 1,576 inmates in home confinement: just over 1% of the more than 140,000 who are at risk of contracting COVID-19.
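For scale, the placement figure works out as follows (a quick arithmetic check using only the numbers cited in this paragraph):

```python
# Quick check of the placement share cited above.
placed = 1_576
at_risk = 140_000
print(f"{placed / at_risk:.2%}")  # -> 1.13%, i.e. "just over 1%"
```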

This use of PATTERN may lead to significant racial disparities among the inmates placed in home confinement, making federal prisons yet another place where we see higher infection rates and death rates among Black people. [1] In addition to concerns about racial bias in the data used to develop, validate, and score the tool, PATTERN was neither developed nor validated on an inmate population during the COVID crisis. Fundamentally, PATTERN was not designed to assess the public safety risk that transferring an inmate to home confinement earlier in their sentence might present.

The decision to use PATTERN for these purposes represents unsound policy for failing to follow basic scientific principles around development and testing:

  • PATTERN conflates risk of re-arrest for what are largely low-level crimes with risk to public safety by using general recidivism risk scores.
  • The initial, pre-COVID-19 version of the PATTERN tool produced large racial disparities in the assessed levels of risk—in part due to its reliance on racially biased data on low-level offenses.
  • PATTERN was not developed to predict recidivism for those transferred to home confinement earlier in their sentence, nor to take into consideration today’s economic and social conditions.
  • PATTERN was not designed to consider clinically relevant factors. The high weight placed on age and criminal history means that younger inmates with histories of low-level offending would not be prioritized even if they have diabetes, lung disease, or other illnesses that increase their risk of severe COVID-19 complications (see the sketch following this list). In addition, the use of a general re-arrest risk fails to weigh public health risks against public safety risks. [2]
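To make the mechanics concrete, below is a minimal, purely hypothetical sketch of a point-based risk score of the kind described above. The items, weights, and cutoffs are our illustrative assumptions, not PATTERN’s actual published values; the point is structural: when age and criminal history carry most of the weight and no clinical factor appears as an input, a young inmate with serious health conditions cannot score “minimum.”

```python
# Hypothetical point-based risk score, loosely in the style of tools like
# PATTERN. All items, weights, and cutoffs below are illustrative
# assumptions, NOT the tool's actual values.

def general_risk_score(age, prior_arrests, infraction_free_years):
    score = 0
    # Younger age bands add the most points, so age dominates the total.
    if age < 26:
        score += 30
    elif age < 35:
        score += 20
    elif age < 50:
        score += 10
    # Each prior arrest adds points regardless of offense severity,
    # so histories of low-level offending still drive scores up.
    score += 4 * prior_arrests
    # Good conduct subtracts only a little.
    score -= 2 * infraction_free_years
    return score

def risk_level(score):
    # Hypothetical cutoffs; only "minimum" is prioritized under the memo.
    if score <= 10:
        return "minimum"
    elif score <= 30:
        return "low"
    elif score <= 45:
        return "medium"
    return "high"

# A 24-year-old with three low-level prior arrests scores 30 + 12 - 2 = 40,
# i.e. "medium" -- and no health condition appears anywhere in the inputs
# that could offset it.
print(risk_level(general_risk_score(age=24, prior_arrests=3, infraction_free_years=1)))
```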

While PATTERN may prove useful for its intended purpose of guiding recidivism risk reduction programming for inmates in BOP custody during standard operations, case management staff are now tasked with making decisions that directly affect health and safety. To inform these decisions, the release strategy should not prioritize algorithmic risk assessments of largely low-level crimes over the health and safety of vulnerable individuals.

This issue brief is part of a recently launched research project at the Partnership on AI (PAI) that more closely examines questions of racial bias in the arrest data used to develop, validate, and score risk assessment tools. PAI’s previous work in this area includes a report documenting ten minimum requirements for the responsible deployment of risk assessment tools, based on the research and expertise of our community of researchers, civil society organizations, and AI industry practitioners. The report found that no current tool meets all ten requirements, which encompass concerns about the potential for such tools to perpetuate systemic biases, the lack of transparency surrounding the tools’ development, and the lack of governance structures ensuring accountability to the communities affected by their deployment.

To learn more about the limitations of PATTERN, and why it should not be used in the manner employed by BOP, read our issue brief.

[1] The report authored by the BOP refers to this demographic as ‘African Americans,’ while the BOP website only provides race data for the inmate population. Because the data presented in this Issue Brief draws on both sources, we use ‘Black people’ and ‘African Americans’ interchangeably here.

[2] In framing the trade-off between health and public safety, as the memo implicitly does, it is important to be clear about the relevant safety threats. The posited trade-off is between the health risks borne by the inmate in case of detention and the risks borne by society, and by individuals in society, in case of release. The potential damage caused by a future offense is greatest for violent offenses, in which individuals are victimized and possibly physically harmed. The societal cost of crimes such as drug or weapons violations, and of property crimes, is different in nature and should be recognized as such. Under the current implementation, an inmate may have a minimum score on the violent risk scale but be deemed ineligible for prioritization due to a higher score on the general risk scale. A deliberate choice should be made about which type of risk to prioritize, and that risk should then be weighed against considerations regarding the health of the individual.
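To make this note’s point concrete, here is a minimal, hypothetical sketch of the eligibility logic it describes. The function name and level labels are our illustrative assumptions; what matters is that the rule keys off the general recidivism level alone:

```python
# Hypothetical illustration of the eligibility rule described in note [2]:
# prioritization depends only on the *general* recidivism level, so even a
# "minimum" violent-risk score cannot qualify an inmate whose general
# score is higher.
def prioritized_for_home_confinement(general_level: str, violent_level: str) -> bool:
    # violent_level is accepted but never consulted -- that is the point.
    return general_level == "minimum"

print(prioritized_for_home_confinement("low", "minimum"))      # False
print(prioritized_for_home_confinement("minimum", "minimum"))  # True
```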