Organized by MozFest
MozFest is a unique hybrid: part art, tech and society convening, part maker festival, and the premier gathering for activists in diverse global movements fighting for a more humane digital world.
Over the last thirteen years, MozFest has fueled the movement to ensure the internet benefits humanity, rather than harms it. As the festival matures, we remain focused on our work to build a healthier internet and more Trustworthy AI.
Explaining model predictions has become a guiding principle for many organizations developing responsible AI systems. Using XAI (explainable AI) tools is one approach to making complex AI systems transparent and understandable to technical and non-technical stakeholders. Broadly, XAI tools help us understand the “hows” and “whys” behind a model’s predictions, thereby allowing us to account for the strengths and weaknesses of AI systems when they are deployed in real-world settings. In recent years, a wave of XAI tools has been developed. With so many different tools available, we wanted to explore how AI/ML practitioners know which XAI tools are appropriate for their purposes.
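To make the idea concrete, here is a minimal sketch of one widely used model-agnostic explanation technique, permutation importance, using scikit-learn. The dataset and model are illustrative placeholders, not part of the talk itself; many XAI tools (e.g. SHAP, LIME) offer richer, per-prediction explanations than this global view.

```python
# Sketch: explaining which features drive a model's predictions
# via permutation importance (one common XAI technique).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator works here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance asks: how much does shuffling each feature
# degrade held-out accuracy? Larger drops = more important features.
result = permutation_importance(
    model, X_test, y_test, n_repeats=5, random_state=0
)
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```

A global ranking like this is only one kind of explanation; choosing between global and local, model-agnostic and model-specific methods is exactly the kind of decision the XAI tools landscape forces on practitioners.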
In this lightning talk session, we want to discuss the current XAI tools landscape, highlight gaps and research opportunities, and provide an overview of the consequences of the use, misuse, and disuse of XAI tools. We also want to introduce to the community our initiative, the “XAI Toolsheet”: a tool documentation framework based on our analysis of more than 100 XAI tools. Our easy-to-use documentation template summarizes the relevant and important features of XAI tools, helping practitioners make informed choices in this complex landscape; it can also serve as an auditing framework for assessing the functionality and usability of XAI tools.