Overview

Despite the recognized need to mitigate algorithmic bias in pursuit of fairness, it is challenging to ensure that the bias mitigation techniques proposed by the ML community are not themselves deemed discriminatory from a legal perspective.

In recent years, the algorithmic fairness literature has produced a proliferation of papers proposing technical definitions of algorithmic bias and methods to mitigate it. Whether these mitigation methods would be permissible from a legal perspective is a complex but increasingly pressing question at a time when there are growing concerns that algorithmic decision-making may exacerbate societal inequities. In particular, there is a tension around the use of protected class variables: most algorithmic bias mitigation techniques require access to these variables or proxies for them, as the sketch below illustrates, but anti-discrimination doctrine has a strong preference for decisions that are blind to them.
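
To make the tension concrete, here is a minimal sketch of one well-known pre-processing mitigation, reweighing (Kamiran & Calders, 2012), which assigns training-instance weights to decorrelate the protected attribute from the outcome. The column names `group` and `label` are hypothetical placeholders, not drawn from any particular dataset; the point of the sketch is simply that the method cannot run without explicit access to the protected class variable.

```python
# Sketch of reweighing: weight each row by P(g) * P(y) / P(g, y) so that,
# after weighting, the protected attribute and the label are independent.
import pandas as pd


def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return per-row weights w = P(g) * P(y) / P(g, y)."""
    p_group = df[group_col].value_counts(normalize=True)   # marginal P(g)
    p_label = df[label_col].value_counts(normalize=True)   # marginal P(y)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)  # P(g, y)

    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(weight, axis=1)


# Toy data: note that `group` -- the protected class variable that a
# blindness-oriented doctrine would exclude -- is a required input.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "label": [1, 1, 0, 1, 0, 0],
})
df["weight"] = reweigh(df, "group", "label")
print(df)
```

A model trained on these weights is thus "aware" of the protected attribute at training time even if the attribute is excluded from the deployed model's inputs, which is precisely the kind of use whose legal permissibility this workstream examines.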

This workstream asks: To what extent are technical approaches to algorithmic bias compatible with U.S. anti-discrimination law, and how can we carve a path toward greater compatibility?