Multistakeholder Approaches to Explainable Machine Learning

How might we widen the net on Explainable Machine Learning?

In February of this year, the Partnership on AI (PAI) hosted a one-day, in-person workshop around the deployment of “explainable artificial intelligence” (XAI) in service of transparency and accountability goals.

Our paper detailing the takeaways from this workshop has been accepted as a poster at the 2020 ICML Workshop on Extending Explainable AI Beyond Deep Models and Classifiers (XXAI) and as a spotlight at the Workshop on Human Interpretability in Machine Learning (WHI).

Explainability Day with the Partnership on AI

PAI’s XAI workshop stemmed from an interview project PAI conducted last year, which found that the majority of XAI deployments are not for end-users affected by the model but rather for machine learning (ML) engineers to debug the model itself. This reveals a gap between theory and practice: explainability is often proposed as a way of increasing public transparency, yet explanations today primarily serve internal stakeholders (i.e., ML engineers) rather than external affected groups and decision-makers. The workshop aimed to leverage PAI’s multi-stakeholder convening capacity to bring together a diverse group of ML developers/researchers, designers, policymakers, and legal experts to explore this gap and outline paths forward for using model explanations for a wider variety of stakeholders.

The day was split into two parts: first, collaboratively defining what explainability should entail across disciplines and domains, and second, exploring domain-specific use cases, stakeholders, challenges, and potential solutions for XAI. In the first part of the day, participants were split into interdisciplinary discussion groups of 5-6 individuals, each facilitated by a member of PAI staff. In the second part, participants were split into similarly sized and facilitated groups focused on healthcare, finance, media, and social services, with diverse representation from academia, industry, and law/policy in each group.

Defining Explainability

“Explainability lets humans interact with ML models to make better decisions than either could alone.” – XAI Workshop Participant

Although each group arrived at its own set of definitions, those definitions were notably consistent in what they focused on.

Most of the explainability definitions shared by participants referenced:

  • context: the scenario in which the model is deployed,
  • stakeholders: those affected by the model and those with a vested interest in the explanatory nature of the model,
  • interaction: the goal the model and its explanation serve, and
  • summary: the notion that “an explanation should compress the model into digestible chunks.”

This exercise reinforced recent scholarship advocating for situated and contextualized explanations because abstracted explainability metrics are seen as unlikely to succeed on their own.
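To make the “summary” notion above a bit more concrete, here is a minimal sketch of one common way to compress a model into digestible chunks: fitting a shallow surrogate decision tree to a black-box model’s predictions. This is an illustrative scikit-learn example, not a technique prescribed at the workshop; the dataset, model, and feature names are placeholders.

```python
# Minimal sketch: "compressing" a complex model into a digestible summary
# by fitting a shallow surrogate decision tree to its predictions.
# Assumes a scikit-learn-style workflow; data and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The "black box" whose behaviour we want to summarize.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow tree to the black box's *predictions*, not the true labels,
# so the tree approximates the model rather than the underlying task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A handful of human-readable rules stand in for the full model.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))

# How faithful is the summary to the model it compresses?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2f}")
```

Whether such a compressed summary is an adequate explanation still depends on the context, stakeholders, and interaction it is meant to serve, which is exactly the point of the definitions above.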

Designing and Deploying Explainability

Building on the contextual understandings of explainability from the first part of the day, the domain-specific conversations delved into how to tailor explanations to specific settings and how explanations might be better used in practice. Some of the salient themes include:

1. Evaluation of Explanations

Explanations should, in most cases, help a stakeholder accomplish some goal. Without knowing who the stakeholders are and what their goals might be, explanations are difficult to evaluate. One example of this from the media domain is how explanations alongside automated labels of mis-/dis-information can further entrench users’ prior beliefs about the veracity of information, rather than achieving the explanation’s purpose of better informing stakeholders.

2. Stakeholder Education

As the example above suggests, explanations are unlikely to have the desired effect if they are developed and deployed without stakeholders being informed and involved in the process. Those implementing machine learning systems need to be made aware of how much potential XAI actually has for enabling transparency. The people who would be interacting with an XAI-based solution need to understand the limitations of the technology and its potential to mislead. And finally, those who have decisions made about them by XAI need to understand when and how those decisions are being made and how they can make subsequent interventions.

3. Uncertainty Alongside Explanations

As part of being clear about XAI’s limitations, providing uncertainty measures alongside an explanation can temper stakeholders’ expectations and make it clearer when an intervention is required. Quantifying and effectively communicating model uncertainty can be challenging in practice, however, so more work is required in this area before such metrics can be reliably provided and understood.
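As a rough illustration of what this could look like, the sketch below pairs a single prediction with an uncertainty proxy (disagreement across the trees of a random forest) and a simple global feature-importance explanation. This is a hypothetical scikit-learn example of one possible pairing, not a recommended or validated communication strategy.

```python
# Minimal sketch: presenting an uncertainty estimate alongside an explanation.
# Disagreement across the trees of a random forest serves as a rough proxy for
# predictive uncertainty; this is an illustrative choice, not the only option.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x = X[:1]  # a single case a stakeholder is asking about

# Per-tree probabilities for the positive class; their spread is the uncertainty proxy.
tree_probs = np.array([t.predict_proba(x)[0, 1] for t in model.estimators_])
print(f"Predicted P(y=1): {tree_probs.mean():.2f} +/- {tree_probs.std():.2f}")

# A simple global explanation to present alongside the uncertainty estimate.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:3]
for i in top:
    print(f"feature_{i}: importance {imp.importances_mean[i]:.3f}")
```

Even with a sketch this simple, the hard part remains communicating what “+/-” means to a non-technical stakeholder, which is why more work is needed here.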

4. Explainability over Time

Once an XAI system has been deployed, it should be expected that the explanations will affect stakeholder behavior and that the underlying distributions the ML system was meant to model will very likely change. While these changes might be framed as stakeholders “gaming” the system, it is precisely this type of actionability that explanations are oftentimes meant to enable. Designing and deploying XAI systems alongside external stakeholders can help to ensure that responses to the explanations are positive for these stakeholders and that the system will not simply break down as the population changes.
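One lightweight way to notice this kind of change after deployment is to monitor the input distributions the system sees in production against the data it was built on. The sketch below uses a per-feature two-sample Kolmogorov-Smirnov test as one illustrative drift check among many; the data is synthetic and the significance threshold is arbitrary.

```python
# Minimal sketch: checking whether the population an XAI system sees in
# production has drifted away from the data it was built on. A per-feature
# two-sample KS test is one simple, illustrative drift check among many.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5000, 4))               # data at deployment time
production = reference + rng.normal(0.3, 0.1, size=(5000, 4))  # later, shifted data

for j in range(reference.shape[1]):
    stat, p_value = ks_2samp(reference[:, j], production[:, j])
    flag = "possible drift" if p_value < 0.01 else "ok"
    print(f"feature_{j}: KS={stat:.3f}, p={p_value:.3g} [{flag}]")
```

A drift alarm like this only says that the population has changed, not whether the change is stakeholders acting on explanations as intended or the system quietly breaking down; interpreting it still requires the stakeholder involvement described above.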

For greater detail on these and other themes surfaced by the workshop, be sure to check out the full paper.

Future Work on Explainability at the Partnership on AI

While incorporating these themes into XAI is a daunting task, we have already seen this work begin across our partner community. Part of our goal at PAI is to make responsible AI recommendations practical, and we will be working with our partners and other stakeholders to make responsible XAI a reality.

Building off our workshop findings, PAI’s Fairness, Transparency, and Accountability Research team is spearheading a collaborative project around the quantification and communication of predictive uncertainty to supplement model explanations.

The first step will be to review state-of-the-art uncertainty measures and see how they map onto existing communication strategies from statistics, data science, and human-computer interaction. We will then build on these findings in follow-on research to develop concrete recommendations for the use of explanations to serve a range of stakeholders.