Explainable AI in Practice

  • Identifying Topics
  • Convening Stakeholders
  • Collecting Insights
  • Developing Resources
  • Engaging Audiences

Overview

Machine learning systems that enable humans to understand and evaluate the machine’s predictions or decisions are key to transparent, accountable, and trustworthy AI. Known as Explainable AI (XAI), these systems could have profound implications for society and the economy, potentially improving human–AI collaboration in sensitive, high-impact deployments and helping address bias and other harms in automated decision-making.

Although Explainable AI is often touted as the solution to opening the “black box” and understanding how algorithms make predictions, our research at PAI suggests that current techniques fall short of this promise and do not yet enable practitioners to provide meaningful explanations. This workstream asks how we can ensure that deployed explainability techniques are up to the task of enhancing transparency and accountability for end users and other external stakeholders.