Organizations and policymakers around the world are turning to Explainable AI (XAI) to address a range of AI ethics concerns. PAI’s recent research paper, Explainable Machine Learning in Deployment, is the first to examine how ML explainability techniques are actually being used in practice. We find that, in its current state, XAI best serves as an internal resource for engineers and developers rather than as a way to provide explanations to end users. Further improvements to XAI techniques are needed before they can work as intended and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.