Machine learning systems that enable humans to understand and evaluate the machine’s predictions or decisions are key to transparent, accountable, and trustworthy AI. Known as Explainable AI (XAI), these systems could have profound implications for society and the economy, potentially improving human-AI collaboration in sensitive, high-impact deployments and helping address bias and other harms in automated decision making.
Although Explainable AI is often touted as a way to open the “black box” and better understand how algorithms make predictions, our research at PAI suggests that current techniques fall short and do not yet adequately enable practitioners to provide meaningful explanations. This workstream asks how we can ensure that deployed explainability techniques are up to the task of enhancing transparency and accountability for end users and other external stakeholders.
Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users
In this paper, we report on an AI system that flags and explains low-quality medical images in real time, on the needs of stakeholders, and on the effect of providing explanations to technicians.
Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
This research paper argues for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
Explainable Machine Learning in Deployment
Organizations and policymakers around the world are turning to Explainable AI (XAI) as a means of addressing a range of AI ethics concerns. This research paper examines how ML explainability techniques are actually being used in practice.