Taking a Stand Against AI Misuse

Facing an unprecedented global pandemic last spring, public officials were interested in using every tool available, including AI systems, in the fight against COVID-19. Not every instrument, however, was equally appropriate for the job. When former Attorney General William Barr recommended misusing a pre-COVID algorithm to determine potentially life-or-death outcomes for federal prisoners, PAI released an issue brief explaining the many perils of this path.

In response to the COVID-19 crisis, Barr issued a memo in March 2020 identifying six factors to consider when deciding which federal prisoners should be prioritized for transfer to home confinement. One of these factors was each inmate’s score in PATTERN, an algorithmic tool created to predict federal prisoners’ risk of rearrest. Inmates with PATTERN scores above “minimum,” Barr wrote, should not be prioritized. Notably, a previous version of PATTERN assigned a minimum score to just 7% of African American males compared to 30% of White males.
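
The scale of that gap is easier to see as a quick calculation. The sketch below is purely illustrative and is not drawn from PAI's brief: only the 7% and 30% rates come from the paragraph above, while the group sizes, function name, and disparity-ratio framing are assumptions.

```python
# Hypothetical sketch of the disparity described above; NOT PAI's methodology.
# Only the 7% and 30% rates come from the article; group sizes are illustrative.

def minimum_risk_rate(n_minimum: int, n_total: int) -> float:
    """Fraction of a group assigned PATTERN's 'minimum' risk category."""
    return n_minimum / n_total

# Hypothetical group sizes chosen to reproduce the cited rates.
rate_black_men = minimum_risk_rate(n_minimum=70, n_total=1000)   # 7%
rate_white_men = minimum_risk_rate(n_minimum=300, n_total=1000)  # 30%

# Under the memo's rule, only "minimum"-scored inmates were prioritized
# for home confinement, so eligibility inherits the scoring disparity.
disparity_ratio = rate_white_men / rate_black_men
print(f"White men scored minimum:            {rate_white_men:.0%}")
print(f"African American men scored minimum: {rate_black_men:.0%}")
print(f"Disparity ratio:                     {disparity_ratio:.1f}x")  # ~4.3x
```

Because the memo tied prioritization to the "minimum" category, a roughly fourfold gap in minimum-risk classifications would translate directly into a roughly fourfold gap in who could be considered for home confinement.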

In April, PAI published “Algorithmic Risk Assessment and COVID-19: Why PATTERN Should Not Be Used.” This paper detailed why Barr’s suggested use of the PATTERN risk assessment tool was likely to increase COVID-related racial disparities.

Due to racial bias in the data used to develop, validate, and score PATTERN, PAI explained, using the tool to guide home confinement decisions would likely contribute to racial disparities. Furthermore, to the extent the tool is useful at all, it is useful only for its designed purpose: predicting future arrest, not future criminal activity, much less criminal activity under home confinement during a global pandemic.

While timely, the paper was merely the latest result of PAI’s ongoing efforts to examine questions of racial bias in algorithmic tools used within the criminal justice system.