Section 0: How to Use This Document

This Version 1 (v1) document is a reference and foundational resource. Future contributions of the ABOUT ML work will include a PLAYBOOK of specifications, guides, recommendations, templates, and other meaningful artifacts to support ML documentation work by individuals in any and all of the roles listed below. Use cases made up of various artifacts from the PLAYBOOK along with other implementation instructions will be packaged as PILOTS for PAI Partners to try out in their organizations. Feedback from their use of these cases will further mature the artifacts in the PLAYBOOK and will support the ABOUT ML team’s continued, rigorous, scientific investigation of relevant research questions in the ML documentation space.

Recommended Reading Plan 

Based on the role a reader plays in their organization and/or the community of stakeholders they belong to, there are several different approaches for reading and using the information in this v1 document:

ML system developers/deployers are encouraged to do a deep dive exploration of Section 3: Preliminary Synthesized Documentation Suggestions and use it to highlight gaps in their current understanding of both data- and model-related documentation and planning needs. This group will most benefit from further participation in the ABOUT ML effort by engaging with the community in the forthcoming online forum and by testing the efficacy and applicability of templates and specifications to be published in the PLAYBOOK and PILOTS, which will be developed from use cases as opportunities to implement ML documentation processes within an organization.
ML system procurers might explore Section 2.2: Documentation to Operationalize AI Ethics Goals to get ideas about what concepts to include as requirements for models and data in future requests for proposals relevant to ML systems. Additionally, they could use Section 2.3: Research Themes on Documentation for Transparency to shape conversations with the business owners and requirements writers to further elicit detailed key performance indicators and measures for success for any procured ML systems. 
Users of ML system APIs and/or experienced end users of ML systems might skim the document and review all of the green Quick Guides to get a better understanding of how ML concepts are relevant to many of the tools they regularly use. A review of Section 2.1: Demand for Transparency and AI Ethics in ML systems will provide insight into conditions where it is appropriate to use ML systems. This section also explains how transparency is a foundation for both internal accountability among the developers, deployers, and API users of an ML system and external accountability to customers, impacted non-users, civil society organizations, and policymakers.
Internal compliance teams are encouraged to explore Section 4: Current Challenges of Implementing Documentation and use it to shape conversations with developer/deployment teams to find ways to measure compliance throughout the Machine Learning Lifecycle (MLLC).
External auditors could skim Appendix: Compiled List of Documentation Questions and familiarize themselves with high-level concepts as well as tactically operationalized tenets to look for when determining whether an ML system is well documented.
Lay users of ML systems and/or members of low-income communities might skim the document and review all of the purple How We Define boxes in order to get an overarching understanding of the text’s contents. These users are encouraged to continue learning about ML systems by exploring how they might impact their everyday lives. Additional insights can be gathered from the Glossary section of this v1 document.

Quick Guides 

Throughout this v1 document, we will use green callout boxes with text to further explain a concept. This is a readability enhancement tactic recommended by our Diverse Voices panel and is meant to make the content more accessible and consumable to lay users of machine learning systems.

 

How We Define 

Throughout this v1 document, we will use the purple callout boxes with text to showcase our accepted (near-consensus) definition of a term or phrase. This is meant to give foundational background information to viewers of the document and also provides a baseline of understanding for any artifacts that may be derived from this work. Additional terms can be found in the glossary section. Future versions of this reference and/or artifacts in the forthcoming PLAYBOOK will explore audio/video offerings to support the consumption of this information by verbal/visual learners.

 

Contact for Support

If you have any questions or would like to learn more about this effort, please reach out to us by:

 

Section 1: Project Overview

The ABOUT ML project objective is to work towards a new industry emphasis on transparent machine learning (ML) systems. By providing a guide for practitioners to start taking transparency seriously, this document serves as a first step. The goal of this document is to synthesize insights and recommendations from the existing body of literature to begin a public multistakeholder conversation about how to improve ML transparency. 

1.1 Statement of Importance for ABOUT ML project

As machine learning becomes central to many decision-making processes — including high-stakes decisions in criminal justice, healthcare, and banking — organizations using ML systems to aid or automate decisions face increased pressure for transparency on how these decisions are made. In a 2019 Harvard Business Review article, Eric Colson states that routine decisions based on structured data are best handled by artificial intelligence as AI is “less prone to human’s cognitive bias.” However, the author goes on to warn, developers and deployers of AI, specifically ML systems, should consider the inherent “risk of using biased data that may cause AI to find specious relationships that are unfair.” Annotation and Benchmarking on Understanding and Transparency of Machine learning Lifecycles (ABOUT ML) is a project of the Partnership on AI (PAI) working towards establishing new norms on transparency by identifying best practices for documenting and characterizing key components and phases throughout the ML system lifecycle from design to deployment, including annotations of data, algorithms, performance, and maintenance requirements. 

 

Presently, there is neither consensus on which documentation practices work best nor on what information needs to be disclosed and for which goals. Moreover, the definition of transparency itself is highly contextual. Because there is currently no standardized process across the industry, each team that wants to improve transparency in an ML system must address the entire suite of questions about what transparency means for their team, product, and organization within the context of their specific goals and constraints. Our goal is to provide a start to that process of exploration. We will offer a summary of recommendations and practices that is mindful of the variance in transparency expectations and outcomes. We hope to provide an adaptive resource to highlight common themes about transparency, rather than a rigid list of requirements. This should serve to guide teams to identify and address context-specific challenges.

 

While substantial decentralized experimentation is currently taking place, the ABOUT ML project aims to accelerate progress by pooling insights more quickly, sharing resources, and reducing redundancy of highly similar efforts. In doing this together, the community can improve quality, reproducibility, rigor, and consistency of these efforts by gathering evaluation data for a variety of proposals. The Partnership on AI (PAI) aims to provide a gathering place for researchers, AI practitioners, civil society organizations, and especially those affected by AI products to discuss, debate, and ultimately decide on broadly applicable recommendations. ABOUT ML seeks to bring together representatives from a wide range of relevant stakeholder groups to improve public discussion and promulgate best practices into new industry norms that will reflect diverse interests and chart a path forward for greater transparency in ML. We encourage any organization undertaking transparency initiatives to share their practices and lessons learned to PAI for incorporation into future versions of this document and/or artifacts in the forthcoming PLAYBOOK. 

 

This is an ongoing project with regular evaluation points to keep up with the rapidly evolving field of AI. PAI’s broad range of partner organizations, including corporate developers of AI, civil society organizations, and academic institutions, will be involved in the drafting and vetting of documentation themes recommended in this document. In addition, PAI engaged with the Tech Policy Lab at the University of Washington to run a Diverse Voices panel to gather opinions from stakeholders whose perspectives might not otherwise be captured. Through this engagement, PAI has gained deeper insight into the Diverse Voices process, informing the ABOUT ML recommendations on how to incorporate the perspectives of diverse stakeholders.

 

We began by highlighting recurrent themes in ML research about documentation, but our ambitious aim is to identify all practices with sufficient evidence of efficacy to be deemed best practices in ML transparency. PAI has welcomed a public discussion of what constitutes sufficient evidence for a best practice alongside the design of ABOUT ML PILOTS. Now that the input from the Diverse Voices process has been incorporated in this current version of the document, PAI aims to continue investigating and refining best practices so they can be disseminated broadly into new norms to improve transparency in the AI industry. We will also continue to highlight promising but insufficiently well-supported practices that are especially deserving of further study.

 

1.1.0 Importance of Transparency: Why a Company Motivated by the Bottom Line Should Adopt ABOUT ML Recommendations

 

Companies can showcase and implement their commitment to responsible AI by adopting the tenets set forth in this Version 1 (v1) reference document and any forthcoming components of the PLAYBOOK. This work is meant to empower that intention with scientifically supported recommendations and artifacts to support the “actioning” of transparency and accountability. As noted in Section 2.2: Documentation to Operationalize AI Ethics Goals, documentation provides important benefits even in contexts where full external sharing is not possible.

 

The ABOUT ML effort aims to encourage organizations to invest in and build the internal processes and infrastructure needed to implement and scale the creation of documentation artifacts. Internal documentation (for other teams inside the same organization, with more details) and external documentation (for broader consumption, with fewer sensitive details) are both valuable and should be undertaken together as they provide complementary incentives and benefits. Organizations will benefit from the alignment of internal and external incentives with the incentives behind proper documentation.

 

The ABOUT ML effort aims to serve the ML documentation stakeholder community by positioning itself as a convener of recommendations and templates. This is meant to support a centralized governance structure with near-consensus standardization for ML documentation processes and artifacts. A coordinated effort within the community could benefit users and impacted non-users of ML systems.
