Presently, there is no consensus on which documentation practices work best, nor on what information needs to be disclosed and for which goals. Moreover, the definition of transparency itself is highly contextual. Because there is currently no standardized process across the industry, each team that wants to improve transparency in an ML system must address the entire suite of questions about what transparency means for their team, product, and organization within the context of their specific goals and constraints. Our goal is to provide a start to that process of exploration. We offer a summary of recommendations and practices that is mindful of the variance in transparency expectations and outcomes. We hope to provide an adaptive resource that highlights common themes about transparency, rather than a rigid list of requirements, and that guides teams in identifying and addressing context-specific challenges.

HOW WE DEFINE…

Transparency

As noted in Jobin et al. (2019), concepts such as “interpretation, justification, domain of application, and mode of achievement” vary from one publication to another.

For this document we adopt a meaning for transparency that includes any “efforts to increase explainability, interpretability, or other acts of communication and disclosure.”

QUICK GUIDE

ABOUT ML

The ABOUT ML Initiative was presented at the Human-Centric Machine Learning Workshop at the Neural Information Processing Systems Conference in 2019. In this work, Deb Raji and Jingying Yang note that “transparency through documentation is a promising practical intervention that can integrate into existing workflows to provide clarity in decision making.”

While substantial decentralized experimentation is currently taking place, the ABOUT ML project aims to accelerate progress by pooling insights more quickly, sharing resources, and reducing redundancy across highly similar efforts. By working together, the community can improve the quality, reproducibility, rigor, and consistency of these efforts by gathering evaluation data for a variety of proposals. The Partnership on AI (PAI) aims to provide a gathering place for researchers, AI practitioners, civil society organizations, and especially those affected by AI products to discuss, debate, and ultimately decide on broadly applicable recommendations.

ABOUT ML seeks to bring together representatives from a wide range of relevant stakeholder groups to improve public discussion and promulgate best practices into new industry norms that will reflect diverse interests and chart a path forward for greater transparency in ML. We encourage any organization undertaking transparency initiatives to share their practices and lessons learned with PAI for incorporation into future versions of this document and/or artifacts in the forthcoming PLAYBOOK.

This is an ongoing project with regular evaluation points to keep up with the rapidly evolving field of AI. PAI’s broad range of partner organizations, including corporate developers of AI, civil society organizations, and academic institutions, will be involved in drafting and vetting the documentation themes recommended in this document. In addition, PAI engaged the Tech Policy Lab at the University of Washington to run a Diverse Voices panel to gather opinions from stakeholders whose perspectives might not otherwise be captured. Through this process, PAI has gained deeper insight into how to incorporate the perspectives of diverse stakeholders into the ABOUT ML recommendations.