2.3 Research Themes on Documentation for Transparency

There is substantial existing research on documentation for each of the steps outlined above. The following section provides a brief review of key insights from the current literature on three of the steps: System Design and Set Up, System Development, and System Deployment.

2.3.1 System Design and Set Up

Minimizing harm resulting from ML systems is a major theme in recent transparency research. Adverse impacts can stem from model fragility or from intended or unintended misuse, such as applying an ML system in a context it was not designed for or using it for a purpose it was not built to serve (among other possibilities). Transparent documentation, especially at the system design and set up phase, about how and why an ML system was built and about inappropriate use contexts can reduce misuse by giving builders, users, activists, policymakers, and other stakeholders the information they need to call out both intended and unintended misuse. Progress is happening through efforts such as the “Safe Face Pledge” (https://www.safefacepledge.org/) and the “Montreal Declaration on Responsible AI” (Université de Montréal, https://www.montrealdeclaration-responsibleai.com/), which improve the design and set up of ML systems by outlining dangerous use cases for the deployment of AI services in sensitive contexts and by securing public commitments from corporations, through a signed pledge, not to misuse AI. Documentation also allows more people to spot potential blind spots, contributing to more robust models that are less likely to create unintended harm.

Documenting system feedback mechanisms from the outset is also essential for minimizing harm to intended users and impacted non-users, since explicit documentation can help surface when existing feedback mechanisms are insufficient (for example, when they do not formalize the inclusion of the perspectives of those most affected by the ML system, especially people from underrepresented communities or communities with limited socio-political power). Documenting feedback loops is a way of committing to the feedback process. The Diverse Voices method from the Tech Policy Lab at the University of Washington (Diverse Voices How To Guide: https://techpolicylab.uw.edu/project/diverse-voices/) is one way organizations can address this issue. The process involves identifying communities that will be highly impacted by the technology under consideration, prioritizing those least likely to be consulted by the developers of the technology, convening a panel of experiential experts from that community, gathering their feedback in a structured panel, incorporating that feedback into the design documents, and finally confirming with the panelists that their perspectives have been accurately reflected. This feedback loop should also be designed to surface and disseminate issues that arise after initial deployment, which is when problems are often noticed.
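
Teams that keep their design documents in structured form can make this commitment explicit. The sketch below is a minimal illustration of how the steps above might be recorded alongside other design documentation; the FeedbackLoopRecord structure, its field names, and the example values are illustrative assumptions, not taken from the Diverse Voices How To Guide.

```python
# Illustrative sketch of a feedback-loop record kept with design documentation.
# Structure and field names are assumptions for illustration, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackLoopRecord:
    impacted_communities: List[str]        # who is most affected by the system
    prioritization_rationale: str          # why these communities were chosen
    panel_description: str                 # who served as experiential experts
    feedback_summary: str                  # what the panel raised
    design_changes: List[str] = field(default_factory=list)  # how feedback was incorporated
    panel_confirmed: bool = False          # did panelists confirm the write-up?
    post_deployment_channels: List[str] = field(default_factory=list)  # how issues surface later

# Example values are placeholders only.
record = FeedbackLoopRecord(
    impacted_communities=["Gig-economy drivers in pilot cities"],
    prioritization_rationale="Least likely to be consulted during development",
    panel_description="Experiential experts convened via a local advocacy group",
    feedback_summary="Concerns about appeal processes and opaque deactivations",
    design_changes=["Added a human-review appeal path to the design document"],
    panel_confirmed=True,
    post_deployment_channels=["In-app reporting form", "Quarterly panel check-in"],
)
```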

2.3.2 System Development

A central theme in promoting greater transparency during system development is detailed reporting on the defining characteristics and intended uses of the system. There are well-researched sets of documentation questions meant to prompt thoughtful reflection prior to building datasets as well as models, including for different types of applications such as NLP (Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587-604), autonomous vehicles (Ethically Aligned Design – Version II. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf), and other domains. These documentation templates are often modeled on those used in other industries, such as safety data sheets from the electronics industry (Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. https://arxiv.org/abs/1803.09010; Hazard Communication Standard: Safety Data Sheets. Occupational Safety and Health Administration, US Department of Labor. https://www.osha.gov/Publications/OSHA3514.html) or nutrition labels from the food industry (Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The dataset nutrition label: A framework to drive higher data quality standards. https://arxiv.org/abs/1805.03677; Kelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009). A nutrition label for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security (p. 4). ACM. http://cups.cs.cmu.edu/soups/2009/proceedings/a4-kelley.pdf). These suggested templates vary widely in length and appearance, ranging from a single concise page of succinct statements, symbols, and visualizations to upwards of 10 pages of detailed prose and graphs. Whether the documentation is meant for internal or external consumption also affects its length and contents, as internal documentation can be more detailed. Because all of these templates ask teams to explicitly declare the intended goals of the project, they can create greater internal accountability as the ML project proceeds: the team can refer back to its initial goals to ensure ongoing consistency with its declared intentions.

A common focus across data-related templates is on clarifying why the dataset is being created and explicitly stating its intended use and limitations. Documentation questions across papers also consistently address the risks that arise at various stages of data creation and distribution, with the goal of encouraging practitioners to reflect on ethical concerns at every stage preceding data use and release. Some templates focus more on addressing specific risks like privacy.
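
As a concrete illustration of what such a data-focused template can look like in practice, the sketch below captures a handful of datasheet-style prompts as a simple release checklist. The section names and the completeness check are illustrative assumptions that paraphrase common themes across the templates cited above, not any single canonical template.

```python
# Illustrative, non-canonical datasheet-style prompts a team might answer before
# releasing a dataset; section names paraphrase common themes in the templates
# cited above rather than reproducing any single one.
DATASHEET_PROMPTS = {
    "motivation": "Why was the dataset created, and who funded its creation?",
    "composition": "What do instances represent? Are sensitive subpopulations identified?",
    "collection_process": "How was the data collected? Was consent obtained where relevant?",
    "preprocessing": "What cleaning or labeling was applied? Is the raw data retained?",
    "uses": "What tasks is the dataset intended for, and which uses are out of scope?",
    "distribution": "Under what license and restrictions will the dataset be shared?",
    "maintenance": "Who maintains the dataset, and how are corrections or removals handled?",
}

def missing_sections(answers: dict) -> list:
    """Return datasheet sections that have no substantive answer yet."""
    return [s for s in DATASHEET_PROMPTS if not answers.get(s, "").strip()]

# Example: a release check that stays incomplete until every section is answered.
draft = {"motivation": "Benchmark for support-ticket triage research."}
print("Incomplete sections:", missing_sections(draft))
```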

Interpretability

According to Lipton (2017), interpretability has no agreed-upon meaning. However, we see the benefit of interpreting “opaque models after-the-fact” and are comfortable using the post-hoc interpretation approach, which includes “natural language explanations, visualization of learned representations or models, and explanations by example.”
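
As a concrete illustration of what post-hoc interpretation can look like in code, the sketch below applies permutation importance to an otherwise opaque model. The dataset, model choice, and scikit-learn calls are illustrative assumptions rather than a prescribed method; other post-hoc techniques (saliency visualizations, example-based explanations) follow a similar pattern of probing a trained model from the outside.

```python
# A minimal sketch of post-hoc interpretation of an otherwise opaque model.
# The dataset and model choice are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc, model-agnostic view of which inputs drive predictions:
# permutation importance measures the drop in score when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(X.columns, result.importances_mean),
                             key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: {mean_imp:.3f}")
```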

Model- and system-level documentation efforts have since emerged from this earlier work on data documentation, introducing questions more specific to overall model objectives. These include commentary on design decisions such as model architecture and reporting on fairness-relevant performance metrics (Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229). ACM. https://arxiv.org/abs/1810.03993), as well as general “purpose, performance, safety, security, and provenance information to be completed by AI service providers for examination by consumers” (Hind, M., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K. N., Olteanu, A., & Varshney, K. R. (2018). Increasing Trust in AI Services through Supplier’s Declarations of Conformity. https://arxiv.org/abs/1808.07261). Determining organizationally acceptable rates of performance in advance of development can help guide trade-offs later on, such as those concerning the interpretability of models or the inclusion of optional variables (Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. In Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI 2018. https://arxiv.org/abs/1802.01029).
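
For teams that want model-level documentation to travel with the model artifact itself, the sketch below shows one way a model-card-style record might be kept as a structured object. The field names loosely follow the sections proposed in the model cards work but are illustrative assumptions, and the example values are placeholders rather than real results.

```python
# Illustrative sketch of a model-card-style record kept alongside a trained model.
# Field names loosely follow the sections in Mitchell et al. (2019) but are not
# the canonical template; values below are placeholders, not real results.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    model_details: str                       # architecture, version, training date, owners
    intended_use: str                        # primary use cases and users
    out_of_scope_uses: List[str]             # contexts the model was not built for
    evaluation_data: str                     # datasets used for evaluation and why
    metrics: Dict[str, float]                # headline performance metrics
    disaggregated_metrics: Dict[str, Dict[str, float]] = field(default_factory=dict)
    ethical_considerations: List[str] = field(default_factory=list)
    caveats_and_recommendations: List[str] = field(default_factory=list)

card = ModelCard(
    model_details="Gradient-boosted classifier, v1.2, trained 2019-06-01",
    intended_use="Ranking support tickets by urgency for internal triage",
    out_of_scope_uses=["Automated decisions about individuals without human review"],
    evaluation_data="Held-out sample of recent tickets, stratified by product line",
    metrics={"accuracy": 0.91, "auc": 0.95},
    disaggregated_metrics={"region=EU": {"accuracy": 0.89}, "region=US": {"accuracy": 0.92}},
    ethical_considerations=["Ticket text may contain personal data"],
    caveats_and_recommendations=["Re-evaluate quarterly; performance drifts with product changes"],
)
```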

In addition to reporting for collaborative knowledge and potential auditing, recent work has suggested that extending the role of documentation into a legally binding contract, similar to open software licenses, may be appropriate for certain applications (Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. https://arxiv.org/abs/1903.12262). This type of documentation could be onerous for research or during highly iterative development cycles, so any recommended implementation needs to be designed with these limitations in mind. Documentation could then become a mechanism for restricting use, particularly in high-risk or high-impact scenarios outside the contexts for which the dataset is suitable. Although initial steps have been taken toward studying potential regulation of models and automation software (Cooper, D. M. (2013, April). A Licensing Approach to Regulation of Open Robotics. Paper presented at the We Robot: Getting Down to Business conference, Stanford Law School), most existing efforts promote best practices for model development rather than legally binding documentation. These include broad recommendations for responsible machine learning (Responsible AI Practices. Google AI. https://ai.google/education/responsible-ai-practices) and ethics (Everyday Ethics for Artificial Intelligence. (2019). IBM. https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf) to guide ML practitioners on ethical considerations as they prepare a model for training and deployment, as well as procedural guidance specific to particular use cases of concern, such as facial recognition (Federal Trade Commission. (2012). Best Practices for Common Uses of Facial Recognition Technologies (Staff Report). Federal Trade Commission, 30. https://www.ftc.gov/sites/default/files/documents/reports/facing-facts-best-practices-common-uses-facial-recognition-technologies/121022facialtechrpt.pdf) and chatbots (Microsoft (2018). Responsible bots: 10 guidelines for developers of conversational AI. https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf).

2.3.3 System Deployment

The goal of documentation for system deployment is to record the societally salient aspects of performance, including fairness, robustness, explicability, and other topics. Relevant and difficult-to-answer questions include what tests, monitoring, and evaluation have been done, and how that monitoring relates to social outcomes. This part of the documentation considers the ML system in the context where it will be used, so it is important to be explicit about intended effects and about plans to minimize side effects.

For example, if fairness is one of the stated objectives of the model, a team can document how the model performs on one or more of the many fairness tests developed in academia, such as FairTest (Tramer, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., … & Lin, H. (2017, April). FairTest: Discovering unwarranted associations in data-driven applications. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 401-416). IEEE. https://github.com/columbia/fairtest, https://www.mhumbert.com/publications/eurosp17.pdf), or using the open source toolkits released by companies such as Accenture (Kishore Durg (2018). Testing AI: Teach and Test to raise responsible AI. Accenture Technology Blog. https://www.accenture.com/us-en/insights/technology/testing-AI), IBM (Kush R. Varshney (2018). Introducing AI Fairness 360. IBM Research Blog. https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/), Facebook (Dave Gershgorn (2018). Facebook says it has a tool to detect bias in its artificial intelligence. Quartz. https://qz.com/1268520/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence/), Google (James Wexler (2018). The What-If Tool: Code-Free Probing of Machine Learning Models. Google AI Blog. https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html), and Microsoft (Miro Dudík, John Langford, Hanna Wallach, and Alekh Agarwal (2018). Machine Learning for fair decisions. Microsoft Research Blog. https://www.microsoft.com/en-us/research/blog/machine-learning-for-fair-decisions/). Although each of these toolkits remains grounded in statistical fairness definitions, some also emphasize the need for qualitative documentation of the model’s performance. For instance, the What-If Tool from Google relies heavily on data visualizations to guide the practitioner’s judgment about data diversity, and the Accenture toolkit involves a survey of high-level as well as detailed questions to consider before model deployment.
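
To make the statistical grounding of these toolkits concrete, the sketch below computes two commonly used group fairness measures, the demographic parity gap and the equal opportunity gap, directly from a set of predictions. The data is synthetic and the code does not use any particular vendor’s toolkit or API; it only illustrates the kind of check such toolkits automate and that deployment documentation can record.

```python
# Generic illustration of statistical fairness checks; data is synthetic and
# the metric names follow common usage rather than any specific toolkit.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1), placeholder
y_true = rng.integers(0, 2, size=n)   # ground-truth labels, placeholder
y_pred = rng.integers(0, 2, size=n)   # model predictions, placeholder

def selection_rate(pred, mask):
    """Fraction of positive predictions within a group."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Fraction of actual positives predicted positive within a group."""
    positives = mask & (true == 1)
    return pred[positives].mean() if positives.any() else float("nan")

# Demographic parity gap: difference in positive-prediction rates across groups.
dp_gap = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))

# Equal opportunity gap: difference in true positive rates across groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))

print(f"Demographic parity gap: {dp_gap:.3f}")
print(f"Equal opportunity gap:  {eo_gap:.3f}")
```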

Furthermore, it is important to highlight ways in which it would be unwise to deploy particular models. For example, a model that risks revealing personal data, whether through public examination of its weights or through repeated querying, could create privacy and data protection risks with associated legal or ethical consequences (Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that Remember: Model Inversion Attacks and Data Protection Law. Phil. Trans. R. Soc. A, 376, 20180083. https://doi.org/10/gfc63m). Highlighting concerns such as these provides an institutional memory of potential failure modes, which future users can either take at face value or use to focus their due diligence efforts.