From Affirmative Action to Affirmative Algorithms: The Legal Challenges Threatening Progress on Algorithmic Fairness
The trajectory of Supreme Court rulings suggests a future in which policies that account for race will be deemed inherently discriminatory. Technologists should take note and urgently attend to the legal challenges that could confront their current approaches to mitigating algorithmic bias.
Algorithmic Affirmative Action: Traditional Technical Approaches to Mitigating Algorithmic Bias
A central concern with the rise of artificial intelligence (AI) systems is bias. Whether in the form of criminal “risk assessment” tools used by judges, facial recognition technology deployed by border patrol agents, or algorithmic decision tools in benefits adjudication by welfare officials, it is now well known that algorithms can encode historical bias and wreak serious harm on racial, gender, and other minority groups.
Grounded in a substantial body of machine learning research, the consensus approach to remedying algorithmic bias today relies on protected attributes to promote “fair” (that is, unbiased) outcomes. This led the White House in 2016 to call for the “construction of algorithms that incorporate fairness properties into their design and execution,” followed by recent White House draft guidance on AI regulation that similarly calls for “bias mitigation.”
For example, one leading approach applies different loan-approval thresholds to White and Black applicants in order to, for instance, equalize repayment rates across the two groups. Another normalizes features across protected groups, for example by transforming test scores so that gender differences disappear.
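For concreteness, here is a minimal, hypothetical sketch of those two interventions: group-specific decision thresholds and within-group feature normalization. It uses synthetic data and made-up variable names, and is illustrative only, not a recommended or legally vetted implementation.

```python
# Illustrative sketch only: two common protected-attribute interventions,
# shown on synthetic data with hypothetical names and numbers.
import numpy as np

rng = np.random.default_rng(0)

# --- (1) Group-specific decision thresholds ---
# Suppose a model outputs repayment-probability scores for two groups.
scores = rng.uniform(0, 1, size=1000)
group = rng.choice(["A", "B"], size=1000)     # protected attribute
thresholds = {"A": 0.60, "B": 0.50}           # chosen, e.g., to equalize approval or repayment rates
approved = np.array([scores[i] >= thresholds[g] for i, g in enumerate(group)])

# --- (2) Within-group feature normalization ---
# Transform a raw feature (e.g., a test score) so that group means and
# variances match, removing group-level differences before modeling.
raw = rng.normal(loc=np.where(group == "A", 70, 65), scale=10)
normalized = raw.copy()
for g in ("A", "B"):
    mask = group == g
    normalized[mask] = (raw[mask] - raw[mask].mean()) / raw[mask].std()

print(f"Approval rates: A={approved[group=='A'].mean():.2f}, "
      f"B={approved[group=='B'].mean():.2f}")
```

Both interventions explicitly consult the protected attribute at decision or preprocessing time, which is precisely the feature that exposes them to the anticlassification concerns discussed below.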
Anticlassification: The Connection Between Affirmative Action and Algorithmic Decision-Making
Affirmative action was rooted in the belief that facially neutral policies would be insufficient to address past and ongoing discrimination. In his 1965 commencement address at Howard University, President Lyndon B. Johnson delivered what has widely been lauded as “the intellectual framework for affirmative action,” stating: “You do not take a person who, for years, has been hobbled by chains and liberate him, bring him up to the starting line of a race and then say, ‘You are free to compete with all the others,’ and still justly believe that you have been completely fair.”
Although this protected attribute approach might not ordinarily be thought of as affirmative action, it nonetheless employs an adjustment within the algorithm that uses a protected attribute, such as race, in a kind of point system. It is this connection that heightens the risk that anticlassification—the notion that the law should not classify individuals based on protected attributes—may render such forms of algorithmic fairness unlawful.
With the changing composition of the US Supreme Court, the traditional approach in the AI community of relying on protected attributes to mitigate algorithmic bias may well be deemed “algorithmic affirmative action.”
The ways in which many of us in the AI community have moved to mitigate bias in the algorithms we develop may pose serious legal risks of violating equal protection.
The AI Community Can No Longer Afford to Ignore the Legal Challenges to Algorithmic Fairness Efforts
Anticlassification threatens to impede serious examination of the sources of, and remedies for, algorithmic bias precisely when such examination is sorely needed. As the use of AI tools proliferates, concerns about legal risk can lead to the worst approach of all: turning a blind eye to protected attributes and forgoing the use of demographic data altogether.
So what options are available to those of us who want to encourage the use of algorithmic bias mitigation techniques in a legally viable manner?
Lessons from Affirmative Action Cases in the Government Context
Even if deemed to be a form of affirmative action, there is a path forward for algorithmic fairness: the course charted by government contracting cases.
Although these programs have faced significant challenges, current case law permits government classifications by race to improve the share of contracts awarded to minorities under certain conditions. The critical limitation is that the government must have played a role in generating the outcome disparities, and the program must be narrowly tailored to benefit victims of discrimination while minimizing adverse effects on others.
In contrast to the higher education cases, the contractor cases hinge on inquiries that incentivize the assessment of bias rather than blindness to it. These inquiries examine how much discrimination is attributable to the actor and whether the design and magnitude of voluntary affirmative action to remedy such disparities can be justified through a “strong basis in evidence.” This approach has key virtues: it provides incentives to understand bias and enables the adoption of algorithmic decision tools to remedy historical bias.
Affirmative-action cases in government contracting offer a promising legal justification for efforts to mitigate algorithmic bias. Using the doctrinal grounding these cases provide, technologists can develop techniques that quantify specific forms of historical discrimination and use those estimates to calibrate the extent of bias mitigation.
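As a rough illustration of what calibrating mitigation to documented discrimination might look like, the hedged sketch below estimates a historical score gap from simulated audit data and sizes the adjustment to that estimate. All names and numbers are hypothetical and not drawn from the paper; a real analysis would control for legitimate factors and rest on the kind of evidentiary record the contractor cases demand.

```python
# Hedged sketch, not the authors' method: calibrate the size of a score
# adjustment to an empirically estimated historical disparity.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit data: past decisions in which the institution's own
# process penalized group "B" applicants relative to similar "A" applicants.
n = 2000
group = rng.choice(["A", "B"], size=n)
merit = rng.normal(0, 1, size=n)                 # proxy for true qualification
historical_score = merit - 0.4 * (group == "B")  # 0.4 = simulated institutional penalty

# Step 1: quantify the documented disparity (here, a simple mean gap;
# in practice this would control for legitimate, non-discriminatory factors).
gap = historical_score[group == "A"].mean() - historical_score[group == "B"].mean()

# Step 2: calibrate the mitigation to that estimate,
# rather than to an arbitrary or unbounded target.
adjusted_score = historical_score + gap * (group == "B")

print(f"Estimated historical gap: {gap:.2f}")
print(f"Post-adjustment gap: "
      f"{adjusted_score[group=='A'].mean() - adjusted_score[group=='B'].mean():.2f}")
```

The design choice that matters legally is that the magnitude of the adjustment is tied to a measured, institution-specific disparity, echoing the “strong basis in evidence” standard, rather than being set to achieve a freestanding demographic target.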
To be sure, there are key limitations to applying the logic of the contractor cases to the algorithmic-fairness context, including:
- a recognition that the doctrine is unstable and subject to significant changes in constitutional jurisprudence;
- an objection that the contractor approach underprotects the rights of minorities; and
- a question as to whether “historical” discrimination can capture the ongoing exercise of discretion in algorithmic design.
A Promising, Legally Viable Path Forward for Algorithmic Fairness
The irony of modern antidiscrimination law is that it may prevent the active mitigation of algorithmic bias. Given current legal uncertainty, agencies and technologists are reluctant to assess and mitigate bias in algorithmic systems, putting us all at risk of a proliferation of disparities affecting racial and other minoritized groups at the hands of AI technology.
The future of algorithmic fairness lies in tying bias mitigation to historical discrimination that can be empirically documented and attributed to the deploying entity. This requires closer collaboration with social scientists to understand institutional sources of bias and with lawyers to align solutions with legal precedent. Without such work, algorithmic fairness may reach a legal dead end.
If you’re interested in learning more about this work, read the full paper in the University of Chicago’s Online Law Review series, “Affirmative Action at a Crossroads.”
PAI’s work on reconciling the legal and technical approaches to algorithmic bias, along with our demographic data research, will inform a multi-stakeholder process to generate recommendations on how protected attributes and other demographic data should be used in service of fairness goals.
Co-authored by Alice Xiang, Head of Fairness, Transparency, and Accountability Research at the Partnership on AI, and Daniel E. Ho, the William Benjamin Scott and Luna M. Scott Professor of Law at Stanford University and associate director of the Stanford Institute for Human-Centered Artificial Intelligence.