Section 4: Current Challenges of Implementing Documentation
This section is where PAI invites comments, anecdotes, case studies, broader stories from implementing documentation efforts, and results from any solutions (effective or ineffective) attempting to address these challenges. Please also share feedback on whether your organization has encountered these challenges or new ones or if these challenges do not exist in your work.
When attempting to implement the recommended documentation guidelines, a number of common challenges arise. The following is an overview of currently identified challenges. The eventual goal of this chapter is to help practitioners foresee challenges in their own settings and provide solution options for addressing them.
Intra- and Inter-Organizational Cooperation
Without cooperation and alignment across multiple internal (and possibly external) teams, it is very difficult to set up the novel organizational processes, secure buy-in, and obtain the monetary and human resources required to effectively adopt documentation for transparency as an organizational norm. This is especially true when documentation is not required by external factors (e.g., procurement requirements or audits). Securing alignment around prioritizing the principle of transparency is the critical first step to implementing any documentation practice. Given the early stage of implementation for documentation practices, organizations may also need to draw on outside expertise to design the right processes, templates, and practices. ABOUT ML hopes to offer a starting resource for this.
Documentation Takes Time
Related to the above challenge, proper documentation takes time, and time is a valuable commodity in the fast-moving technology industry. Internal and external incentives often do not reward the effort that proper documentation requires.
For many documentation criteria, it is difficult to identify appropriate and demographically representative benchmarks with which to fully evaluate an ML system. Often, separate custom benchmarks specific to the ML system’s context of use must be developed for both the documentation requirements and the evaluation of deployment. Because this can be costly and time intensive, there are real organizational tradeoffs to navigate among benchmark quality, speed to deployment, and the quality of evaluation and documentation.
Numerous academic and industry metrics exist for measuring the performance of an ML system. There are likewise many fairness evaluation metrics and definitions, too many to cover in detail in this document. Tools that can be used to explore and audit predictive model fairness include FairML, Lime, IBM AI Fairness 360, SHAP, the Google What-If Tool, and many others. Metric selection thus requires ethical choices based on the specific situation, which will often require interdisciplinary work with ethicists, legal experts, representatives of affected communities, and others. Metric selection, in terms of both performance metrics and fairness-oriented metrics, and documentation of why a given metric was selected, is therefore additional work that teams need to budget time for. In “Machine Learning That Matters” (Wagstaff, K. (2012). Machine learning that matters. arXiv preprint arXiv:1206.4656. https://arxiv.org/abs/1206.4656), Kiri Wagstaff suggests the high-level guidance of defining metrics according to the intended outcome rather than evaluating model performance on arbitrary test sets with typical ML performance metrics, a reasonable starting point for most projects. For instance, if an ML system is meant to optimize for revenue, then measure revenue outcomes from the ML system directly, or use a proxy such as advertisement impressions, rather than relying on the Area Under the Curve (AUC) of an isolated model.
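The contrast between a generic ranking metric and an outcome-oriented metric can be made concrete in a few lines. The sketch below is purely illustrative (the labels, scores, revenue, and cost figures are invented assumptions, not from any real system): it computes AUC for a small batch of predictions alongside a direct estimate of net revenue when acting on every example scored above a threshold.

```python
# Hypothetical example: evaluating a model by its intended business
# outcome (net revenue) rather than only a generic ranking metric (AUC).
# All data values and dollar amounts below are illustrative assumptions.

def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison (O(n^2) sketch)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def revenue_at_threshold(labels, scores, revenue_per_hit, cost_per_action, threshold):
    """Directly measure the outcome the system is meant to optimize:
    net revenue when acting on every example scored above the threshold."""
    total = 0.0
    for y, s in zip(labels, scores):
        if s >= threshold:
            total += revenue_per_hit * y - cost_per_action
    return total

labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]

print(f"AUC: {auc(labels, scores):.2f}")                                  # AUC: 0.69
print(f"Net revenue: {revenue_at_threshold(labels, scores, 10.0, 2.0, 0.5):.2f}")  # Net revenue: 22.00
```

The two numbers can diverge in practice: a model with a better AUC can still produce worse net revenue at the operating threshold actually used in deployment, which is exactly why documenting the chosen metric and its rationale matters.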
Soliciting and mining the demographic metadata used to evaluate whether an ML system performs fairly across intersectional subgroups can also expose identifying information about users and image subjects. To mitigate this risk, it is important to store and report that data in ways that do not compromise individual privacy in exchange for system-level transparency.
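One common disclosure-limitation practice that illustrates this tradeoff is a minimum cell size: per-subgroup results are published only when the subgroup contains enough records that no individual can be singled out. The sketch below is a minimal illustration under assumed names and an assumed threshold of 5; it is not an ABOUT ML requirement, and the appropriate threshold should come from an organization's own privacy review.

```python
# Illustrative sketch (function name and threshold are assumptions):
# report per-subgroup accuracy only when the subgroup contains at least
# min_cell_size records, suppressing small cells to reduce the risk of
# exposing identifying information about individuals.

from collections import defaultdict

def subgroup_accuracy(records, min_cell_size=5):
    """records: iterable of (subgroup, correct) pairs.
    Returns accuracy per subgroup; small cells are reported as None."""
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [num_correct, total]
    for group, correct in records:
        counts[group][0] += int(correct)
        counts[group][1] += 1
    report = {}
    for group, (num_correct, total) in counts.items():
        # Suppress cells below the minimum size rather than publish them.
        report[group] = num_correct / total if total >= min_cell_size else None
    return report

records = [("A", True)] * 5 + [("A", False)] + [("B", True), ("B", False)]
print(subgroup_accuracy(records))  # {'A': 0.8333333333333334, 'B': None}
```

Suppression trades some transparency (the small subgroup's metric is withheld) for privacy; documentation should record both the threshold used and the reason a cell was suppressed.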
Compromising Intellectual Property
One commonly feared risk of documentation is losing trade secrets and intellectual property by disclosing too much information. However, allowing the protection of intellectual property or “trade secrets” to serve as a blanket excuse for omitting information opens the door for companies to hide crucial information that should be revealed in the public interest. One goal of the ABOUT ML guidelines is to indicate the areas all companies should be willing to share. Each documentation recommendation in Section 3: Preliminary Synthesized Documentation Suggestions discusses more specific pros and cons of disclosing that information about an ML system to better inform this tradeoff.
System Security Vulnerability
Another fear about documentation is that it reveals attack surfaces in an ML system by providing too much insight into how it was built. This is a fine line to walk: on the one hand, it is an organization’s responsibility to build robust security measures into its ML systems, and documenting those measures may spread useful knowledge to other organizations attempting to secure their own systems. On the other hand, that same knowledge could be misused for attacks rather than for shoring up collective defenses. A detailed discussion as an industry is needed to better distinguish low- from high-risk types of information disclosure. A related risk is documentation that reveals blind spots of the ML system that nefarious actors could exploit to game or hack it. The first step toward solutions for these risks is naming them; the next is investing in research and further understanding toward best practices.
Lack of Formal Decision-Making or Development Practices
In some situations, models are tuned by adjusting various parameters until the result “looks right.” Such ad hoc practices can be difficult to document because they lack any formal structure. Yet this sort of ad hoc decision-making is especially important to document, precisely because it can obscure consequential decisions about who developed the model, what it was developed from, and how.
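Even informal tuning can leave a documentation trail if each adjustment is recorded at the moment it is made. The sketch below is one hypothetical way to do this (the class, field names, and example entries are invented for illustration): a lightweight log capturing who changed which parameter, when, and why, which can later be exported alongside formal documentation.

```python
# Minimal sketch (all names are illustrative assumptions) of recording
# ad hoc tuning decisions so that "adjust until it looks right" still
# leaves a trail: who changed which parameter, when, and why.

import datetime
import json

class TuningLog:
    def __init__(self):
        self.entries = []

    def record(self, author, params, rationale):
        """Append one tuning decision with a UTC timestamp."""
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "author": author,
            "params": dict(params),
            "rationale": rationale,
        })

    def to_json(self):
        """Export the full history for inclusion in documentation."""
        return json.dumps(self.entries, indent=2)

log = TuningLog()
log.record("alice", {"learning_rate": 0.01}, "default starting point")
log.record("alice", {"learning_rate": 0.003}, "loss oscillated; lowered LR")
```

The value here is less the tooling than the habit: each entry answers, in a sentence, why a parameter was changed, which is exactly the information lost when tuning happens only in a developer's head.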
Section 5: Conclusions
The objective of the ABOUT ML project is to establish a new industry emphasis on transparent ML systems, and this document is a first step toward giving practitioners a guide for taking that goal seriously. The goal of the v0 document was to synthesize insights and recommendations from the existing body of literature and begin a public multistakeholder conversation about how to improve ML transparency.
Key ideas include:
- Documentation is valuable both as a process and an artifact built to accomplish specific goals (and thus extra documentation for its own sake will not always be useful).
- Internal documentation (for other teams inside the same organization, more detailed) and external documentation (for broader consumption, fewer sensitive details) are both valuable and should be undertaken together as they provide complementary incentives and benefits.
- Avoiding misuse and harm from ML systems is a focus of current research and practice. Adhering to a documentation process that demands intentional reflection about how a system might be used and misused, in which contexts, and impacting whom is one first step towards potentially reducing harms. Incorporating feedback from diverse perspectives early, often, and throughout every stage in the ML lifecycle is another risk mitigation strategy. The Diverse Voices process from the Tech Policy Lab at the University of Washington is one formalized methodology for incorporating this type of feedback.
The transition from v0 document to the current ABOUT ML Reference Document is based on input we received from the Diverse Voices panels and does not represent an overhaul of the original work but an update with enhanced focus on readability and utility. Notable adjustments include:
- The addition of Section 0: How to Use This Document
- Callout boxes with Quick Guides and Definitions
- The addition of Section 1.1.0: Importance of Transparency: Why a Company Motivated by the Bottom Line Should Adopt ABOUT ML Recommendations
- Possible interventions for users of this resource to consider
- Diverse Voices Process appendix
- Other revisions throughout based on Diverse Voices input
- Amendments to the document evolution/revision process in light of newly identified forthcoming artifacts