Convening Across Industries
In service of transparency and accountability goals, PAI hosted a one-day, in-person workshop on the deployment of “explainable artificial intelligence” (XAI) in February 2020. Our paper detailing the takeaways from this workshop was accepted as a poster at the 2020 ICML Workshop on Extending Explainable AI Beyond Deep Models and Classifiers and as a spotlight at the Workshop on Human Interpretability in Machine Learning.
Explainability is often proposed as a way of increasing the public transparency of AI, but current XAI implementations are rarely tailored to the public. The XAI workshop stemmed from an interview project PAI conducted in 2019, which found that most XAI deployments serve not the end-users affected by the models, but rather the machine learning (ML) engineers debugging the models themselves. The workshop aimed to leverage PAI’s multistakeholder convening capacity, bringing together a diverse group of ML developers, researchers, designers, policymakers, and legal experts to explore this gap and outline paths toward reaching a wider variety of stakeholders with model explanations.
Workshop participants were split into interdisciplinary discussion groups of 5–6 individuals, each facilitated by a member of PAI staff. Although the groups brought their own definitions of “explainability” to the discussions, the focus of these definitions was notably consistent. Most explainability definitions included references to: context, the scenario in which models are deployed; stakeholders, those affected by the models and those with a vested interest in the explanatory nature of the models; interaction, the goal the models and their explanations serve; and summary, the notion that “an explanation should compress the model into digestible chunks.”
This exercise reinforced recent scholarship advocating for situated and contextualized explanations, since abstracted explainability metrics are seen as unlikely to succeed on their own.