AI Ethics Requires a Bridge Between Theory & Practice
Alice Xiang, PAI Research Scientist, speaking at RE•WORK AI for Good Summit in San Francisco, June 2019
As AI technologies become increasingly ubiquitous, it is important to bridge the divide between theory and practice in AI ethics.
In abstract thought experiments like the trolley problem, the nature of the dilemma is apparent: the choices are clear (you either stay the course or divert the trolley), and the consequences of each choice are known in advance. In the context of AI, however, ethical dilemmas are often difficult to identify in advance and frequently arise from seemingly benign technical decisions. The choice of training data, for example, is enormously consequential: a facial recognition algorithm may misidentify members of racial minority groups at higher rates if the training dataset is not sufficiently racially diverse, an outcome that is especially troubling if the technology is used in law enforcement. Because AI ethics guidelines and regulations are still nascent, AI practitioners are often the de facto frontline decision-makers in their organizations when concerns arise about the potential effects of AI on individuals and society.
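Disaggregated evaluation is one concrete way this issue surfaces in practice. The sketch below uses entirely hypothetical data and column names to show how an aggregate error rate can mask a much higher error rate for one group:

```python
import pandas as pd

# Hypothetical evaluation results for a face recognition system: each row is
# one identification attempt, with the person's demographic group and an
# error flag indicating whether the system's prediction was wrong.
results = pd.DataFrame({
    "group": ["A"] * 800 + ["B"] * 200,
    "error": [False] * 784 + [True] * 16 + [False] * 170 + [True] * 30,
})

# The aggregate number looks acceptable...
print("Overall error rate:", results["error"].mean())   # ~4.6%

# ...but disaggregating by group tells a very different story.
print(results.groupby("group")["error"].mean())         # A: 2%, B: 15%
```

If the evaluation set itself is as skewed as the training set, even this check can miss the problem, which is why the composition of both datasets is an ethical decision and not just a technical one.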
Alice Xiang, PAI Research Scientist, speaking at RE•WORK Deep Learning Summit in Boston, May 2019
Speaking at the recent RE•WORK Deep Learning Summit in Boston and AI for Good Summit in San Francisco, I had the opportunity to engage with a wide variety of AI practitioners about this challenge. Because AI acts as an intermediary between its human developers and the final outcomes, it can be difficult to identify and understand the ethical consequences of the various steps in its development process. Across the RE•WORK events, a common theme was the importance of understanding the technical intricacies behind decisions made by algorithms. Design choices that seem purely technical (something for developers to resolve themselves rather than escalate to business, legal, policy, or communications teams) can turn out to be the ones with the greatest ethical implications.
At PAI, we have seen this most clearly in criminal justice, where seemingly simple and benign technical choices can have large ethical ramifications given the high-stakes nature of the decisions involved (such as whether to detain individuals after their arrest). This issue was one of the motivations behind our recent Report on Algorithmic Risk Assessment Tools in the US Criminal Justice System, which documents ten minimum requirements for the responsible deployment of risk assessment tools: algorithms used to predict whether an individual will recidivate. Although risk assessment tools may be “data-driven” and “evidence-based,” many of the technical decisions involved, from selecting the training data, to choosing whether and how to correct for bias, to deciding how frequently to re-train the model, can have significant consequences for how the model treats members of different groups and for how many individuals are ultimately detained based on its recommendations.
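To make this concrete, consider how a single fixed decision threshold interacts with a score that absorbs a group-correlated proxy feature. The following is a purely illustrative sketch on synthetic data; it does not reflect any real tool or jurisdiction:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Entirely synthetic data: two groups, a binary outcome, and a risk score
# that also picks up a group-correlated proxy feature (e.g., prior arrests).
n = 10_000
group = rng.choice(["A", "B"], size=n)
reoffended = rng.random(n) < 0.35
score = rng.normal(4.0 + 2.0 * reoffended + 0.8 * (group == "B"), 1.5)

df = pd.DataFrame({"group": group, "reoffended": reoffended,
                   "detain": score >= 6.0})  # one fixed decision threshold

# The same threshold yields different false positive rates by group:
# people who would not have reoffended but are recommended for detention.
for g, sub in df.groupby("group"):
    fpr = sub.loc[~sub["reoffended"], "detain"].mean()
    print(f"Group {g}: false positive rate = {fpr:.1%}")
```

Here the disparity comes entirely from the proxy feature in the score. Whether and how to correct for it is exactly the kind of technical choice described above, and the data alone cannot answer it.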
In social media, algorithms that narrowly optimize for click-through rates (and similar measures of user engagement) can have adverse consequences, contributing to polarization and the spread of misinformation. Click-through rates are a widely used metric and, on the face of it, a completely reasonable thing to optimize: they reflect user interest and drive ad revenue, and no objective function perfectly captures everything that individuals, companies, or society broadly care about. But while click-through rates are a useful proxy for some business objectives, they are also a proxy for outrage, so maximizing them can inadvertently promote outrage-inducing content. This is not to say that click-through rates should never be used in objective functions, but that it is important to develop measures to mitigate the unintended consequences of such use.
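One common mitigation is to rank content on a composite objective rather than raw click-through rate, subtracting penalties for content that integrity models predict to be outrage bait or misinformation. The sketch below is illustrative only; the signal names and penalty weights are hypothetical, not any particular platform’s approach:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    p_click: float    # predicted click-through probability
    p_outrage: float  # predicted probability the item is outrage bait
    p_misinfo: float  # predicted probability the item is misinformation

def ranking_score(c: Candidate,
                  outrage_penalty: float = 0.5,
                  misinfo_penalty: float = 2.0) -> float:
    """Composite objective: engagement minus weighted integrity penalties.

    The penalty weights are illustrative policy choices, not learned values.
    """
    return (c.p_click
            - outrage_penalty * c.p_outrage
            - misinfo_penalty * c.p_misinfo)

candidates = [
    Candidate("calm_explainer", p_click=0.12, p_outrage=0.05, p_misinfo=0.01),
    Candidate("rage_bait",      p_click=0.30, p_outrage=0.80, p_misinfo=0.10),
]
for c in sorted(candidates, key=ranking_score, reverse=True):
    print(c.item_id, round(ranking_score(c), 3))
```

In this toy example, the outrage-bait item has the highest predicted click-through rate but ranks below the calmer item once the penalties are applied; in practice the weights are policy decisions that require ongoing evaluation rather than fixed constants.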
These examples highlight how complicated identifying the ethical consequences of AI can be, and how often the process involves interrogating seemingly mundane technical decisions.
Addressing this challenge requires at least the following three steps:
- Increasing awareness among leadership at organizations to encourage them to dedicate more resources to identifying and addressing AI ethics concerns. A recent report found that two-thirds of tech workers would like more opportunities to assess the potential impacts of their products but that doing so is currently the lowest priority in their work.
- Educating AI developers on techniques to detect, address, and mitigate ethical concerns with their AI products. The same report found that tech workers currently rely mainly on informal methods, such as their personal moral compass or internet searches, to assess the societal impact of their products.
- Educating policymakers and legal/policy teams at organizations on how to build governance structures that incentivize organizations to address AI ethics issues and to revise and refine their practices over time.
As discussed at the RE•WORK events, applying AI to benefit society requires an examination of seemingly benign technical decisions. At PAI, we recognize the importance of building this bridge between theory and practice. We are using our unique position at the intersection of leading companies, organizations, and people differently affected by AI technologies to work on projects across these three fundamental steps. We invite others who share in this bridge-building vision to reach out to us at partnerships@partnershiponai.org for collaboration.