1.1.4 Who Is This Project For?

Many sets of stakeholders should be considered and incorporated into the ABOUT ML project at various stages to make it as beneficial as possible for the largest number of people.

  • Stakeholders that should be consulted while putting together ABOUT ML resources, with a particular focus on people impacted by ML technology who may otherwise not be given a say in how that technology is built. This includes feedback from panels hosted in late 2019 through the Diverse Voices process, which represented:
    • Lay users of expert systems
    • Members of low-income communities
    • Procurement decision makers
  • Audiences for ABOUT ML documentation artifacts, which include:
    • The ABOUT ML Reference Document: This document, which serves as an evolving resource and reference for the ABOUT ML foundational work.
    • PLAYBOOK: A repository of artifacts, specifications, guides, and templates developed and/or recommended by the ABOUT ML effort and based on foundational tenets introduced in the Reference Document.
    • PILOTS: A late-2021 implementation of use cases developed from several artifacts in the PLAYBOOK and shared with PAI Partners to gather feedback on the use of recommended ML documentation templates.

To bring focus and prioritization to this large undertaking, PAI, in consultation with the Steering Committee, has set out an initial plan for how to sequence efforts for each of the above audience sets. Identifying the communities and groups within each of these sets of stakeholders and audiences requires detailing what goals each meta-category might have for engaging with ABOUT ML, so each section below begins with a discussion of possible goals. We welcome feedback on this plan.

1.1.4.1 Audiences for the ABOUT ML Resources

The primary audiences for the ABOUT ML resources vary by stage of the plan laid out in Section 1.1.2 ABOUT ML Goals and Plan. Below is a summary of these key audiences and why each plays the key role in its subgoal.

  • Sequence 1: Enable internal accountability
    • Key audience for ABOUT ML resources: Individual champions at all levels and roles inside organizations that build ML systems who are interested in implementing ABOUT ML recommendations
    • Theory of change: Motivate resource investment in building internal processes and tooling to enable implementing ABOUT ML’s documentation recommendations
  • Sequence 2: Enable external accountability
    • Key audience for ABOUT ML resources: Groups with the most influence over external accountability for organizations that build ML systems, including advocacy organizations, government agencies, and policy and compliance teams inside organizations
    • Theory of change: Once internal processes and tooling exist to enable implementing documentation, builders of ML technology will be ready to enter and act on a detailed conversation with other stakeholders about what the contents of the documentation need to be to enable external accountability
  • Sequence 3: Standardize documentation across industry based on high adoption of practice
    • Key audience for ABOUT ML resources: Organizations that build ML systems
    • Theory of change: With enough data and iteration from organizations that implement the documentation for external accountability, this community can decide what set of questions makes sense as an initial industry norm, which can still evolve over time

1.1.4.2 Stakeholders That Should Be Consulted While Putting Together ABOUT ML Resources

Beyond the refinement of this ABOUT ML Reference Document, any additional templates or resources developed as part of the ABOUT ML effort should be shared with and reviewed by various stakeholder groups. Below is an overlapping, non-comprehensive list of stakeholders who should be consulted in particular while putting together ABOUT ML resources, along with why their input is valued for the ABOUT ML project. These are stakeholders who may not otherwise use the ABOUT ML resources or read the documentation artifacts:

  • People impacted by ML technology because their priorities, desires, and concerns should be acknowledged in the ABOUT ML resources and reflected in the documentation artifacts
  • People in roles that would potentially implement ABOUT ML recommendations (e.g., product, engineering, data science, analytics, and related departments in industry; researchers who collect datasets and build models in academia and other nonprofits) because ABOUT ML needs to practically fit into their workflow
  • People in roles that have the power, headcount, and/or budget to sign off on implementing ABOUT ML because they need to buy in to the recommendations
  • People in roles that have auditing rights or power over ML technologies (e.g., government agencies and civil society organizations like advocacy organizations) because they could use ABOUT ML’s artifacts to audit technologies and the artifacts need to be usable for that purpose

Additionally, all audiences for the ABOUT ML resources and for the documentation artifacts should be consulted.

1.1.4.3 Audiences for ABOUT ML Documentation Artifacts

The audiences most likely to use ABOUT ML documentation artifacts are people for whom the documentation fits directly and naturally into their workflow. This includes people directly involved in building or purchasing ML systems, as well as people who have another strong reason to examine ML systems, such as end users, compliance departments, and external auditors. They fall into the following categories:

  • ML system developers/deployers
  • ML system procurers
  • Users of ML system APIs
  • End users
  • Internal compliance teams
  • External auditors
  • Marketing groups

Other people have a stake in reading the ABOUT ML documentation artifacts but are less likely to know that such documentation could exist: non-users who are impacted by ML systems (for example, people assigned credit scores by an ML model) and people advocating on behalf of these impacted non-users, such as civil society organizations. It is important to make ABOUT ML documentation artifacts accessible to these people as well, especially given that they may have less direct access to, knowledge of, and influence over the ML systems than the groups named above.

1.1.4.4 Whose Voices Are Currently Reflected in ABOUT ML?

The current releases as of mid-2021 reflect the work and input of the following groups:

  • PAI editors (Alice Xiang, Deb Raji, Jingying Yang, Christine Custis)
  • Authors of the Datasheets, Model Cards, Factsheets, and Data Statements papers
  • Interested people from PAI’s Partner community during an internal review process
  • People who submitted comments during the public comment process
  • ABOUT ML Steering Committee
  • Diverse Voices panels consisting of experiential experts from the following communities:
    • Lay users of expert systems: The Diverse Voices process of The Tech Policy Lab within the University of Washington defined lay users of ML systems as anyone who uses or might use ML systems as part of their work (such as rideshare drivers) but who do not have expertise in the technical engineering of ML systems. In this panel, one panelist was a rideshare driver, one panelist was a medical student, and one panelist was an administrative office worker. All panelists were currently using ML systems or expected to use ML systems in the near future.
    • Members of low-income communities: The Diverse Voices process of The Tech Policy Lab within the University of Washington defined members of low-income communities as anyone whose household income is less than twice the federal poverty threshold. In this panel, five panelists identified themselves as being low-income community members and two panelists served the low-income community in a professional capacity (e.g., employment counselor, property manager for a low-income apartment building).
    • Procurement decision makers: The Diverse Voices process of The Tech Policy Lab within the University of Washington defined procurement decision makers as anyone who, as part of their work, is involved in the acquisition of new technology by defining technological needs for an organization, preparing requests or bids for new technology, or ensuring the service or product complies with state and federal laws. In this panel, all panelists were involved in some part of the technology procurement process, though none of the panelists held the title of procurer. Three panelists were responsible for procurement decisions in the public sector (e.g., public libraries, city government, state government) and two panelists had experience with procurement in nonprofit organizations.

1.1.4.5 Origin Story

ABOUT ML is a PAI project working toward establishing new norms of transparency by identifying best practices for documenting and characterizing key components and phases of the ML system lifecycle, from design to deployment, including annotations of data, algorithms, performance, and maintenance requirements.
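To make the scope of such documentation concrete, below is a minimal sketch of what a lifecycle documentation record covering data, performance, and maintenance might look like if captured in code. The structure and all field names are illustrative assumptions for this sketch only, not an ABOUT ML template or recommendation.

```python
# Hypothetical sketch only: field names and structure are assumptions,
# not an ABOUT ML template. It illustrates the kinds of lifecycle details
# (data, performance, maintenance) that documentation practices such as
# Datasheets and Model Cards ask teams to record.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DatasetDocumentation:
    name: str
    collection_method: str                 # how and where the data was gathered
    known_limitations: List[str] = field(default_factory=list)


@dataclass
class SystemDocumentation:
    system_name: str
    intended_use: str
    datasets: List[DatasetDocumentation]
    performance_metrics: Dict[str, float]  # e.g., {"accuracy": 0.91}
    maintenance_plan: str                  # retraining cadence, ownership, review schedule


if __name__ == "__main__":
    # Example record for a hypothetical system; all values are invented.
    doc = SystemDocumentation(
        system_name="example-credit-scoring-model",
        intended_use="Illustration only; not a real deployment",
        datasets=[
            DatasetDocumentation(
                name="example-loan-applications",
                collection_method="historical application records (hypothetical)",
                known_limitations=["may under-represent some applicant groups"],
            )
        ],
        performance_metrics={"accuracy": 0.91},
        maintenance_plan="review quarterly; document any retraining or data changes",
    )
    print(doc)
```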

Hanna Wallach, Meg Mitchell, Jenn Wortman Vaughan, and Timnit Gebru held a series of meetings building on their work in documentation and standardization, which includes seminal research related to Datasheets for Datasets and Model Cards for Model Reporting. After those initial discussions, which coincided with the early days of PAI (circa 2018-2019), Hanna and Meg approached PAI and suggested that this work be continued and advanced under the umbrella of the multistakeholder organization, with the continued support and input of the Partner community.

Francesca Rossi and Kush Varshney, both from Partner IBM, also approached PAI with the idea to focus on documentation work and contributed to the early and ongoing efforts of ABOUT ML. IBM’s research related to Factsheets was meaningful to this practical effort. PAI has since continued to work with tech companies, nonprofits, academic researchers, policymakers, end users, and impacted non-users to coordinate and influence practice in the ML documentation space. Eric Horvitz at Microsoft was also a key contributor, identifying the need to unify these projects by bringing datasheets, model cards, and other documentation practices and templates together, which inspired the research focus for a single PAI program.

Jingying Yang was PAI’s original Program Lead for the ABOUT ML work. She, along with other staff members within PAI, developed a research plan for how to engage with the stakeholders in order to set a new industry norm of documenting all ML systems built and deployed, thus changing practice at scale. Important contributors during this stage of the work included PAI Fellow Deb Raji and Alice Xiang, Head of Fairness, Transparency, and Accountability Research, who served as PAI editors of the v0 foundational document. Hanna Wallach, Meg Mitchell, Jenn Wortman Vaughan, and Timnit Gebru continued their pivotal support, along with Lassana Magassa, in shaping the program’s intentions and heightening awareness of important concepts related to attribution and inclusion.

Through an evidence- and research-based, multi-pronged initiative that includes and responds to solicited feedback from many stakeholders, the ABOUT ML work has progressed. Its ultimate goal is to bring together companies and organizations with similar ideas around AI documentation to push for general guidelines and an overall higher bar for responsible AI. We believe the impact of this work has been, and will continue to be, helping to create an organizational infrastructure for ethics in ML and to increase responsible technology development and deployment through transparency and accountability.

The work continues, and we welcome the input of the AI community on the ongoing revisions to our foundational document as well as on the artifacts and templates we plan to share as a result of that work. We have listed several other contributors to this effort on an internal website and ask that you visit this list and help us add the names of other supporters, reviewers, researchers, and contributors to the ABOUT ML effort by filling out this form.

Below is a list of contributors to the ABOUT ML project since its inception:

  • Norberto Andrade – Facebook
  • Thomas Arnold – Tufts HRILab
  • Amir Banifatemi – XPRIZE
  • Rachel Bellamy – IBM
  • Umang Bhatt – Leverhulme Centre for the Future of Intelligence
  • Miranda Bogen – Facebook
  • Ashley Boyd – Mozilla Foundation
  • Jacomo Corbo – QuantumBlack
  • Hannah Darnton – BSR
  • Anat Elhalal – Digital Catapult
  • Daniel First – McKinsey / QuantumBlack
  • Sharon Bradford Franklin – Open Technology Institute
  • Ben Garfinkel – Future of Humanity Institute
  • Timnit Gebru – AI/ML Researcher
  • Jeremy Gillula – EFF
  • Jeremy Holland – Apple
  • Ross Jackson – EY
  • Libby Kinsey – Digital Catapult
  • Brenda Leong – Future of Privacy Forum
  • Tyler Liechty – DeepMind
  • Lassana Magassa – Tech Policy Lab
  • Richard Mallah – Future of Life Institute
  • Meg Mitchell – AI/ML Researcher
  • Amanda Navarro – PolicyLink
  • Deborah Raji – Mozilla
  • Thomas Renner – Fraunhofer IAO
  • Andrew Selbst – Data & Society
  • Ramya Sethuraman – Facebook
  • Reshama Shaikh – Data Umbrella
  • Moninder Singh – IBM
  • Spandana Singh – Open Technology Institute
  • Amber Sinha – Centre for Internet and Society
  • Michael Spranger – Sony
  • Andrew Strait – Ada Lovelace Institute
  • Michael Veale – UCL
  • Briana Vecchione – Cornell University
  • Jennifer Wortman Vaughan – Microsoft
  • Hanna Wallach – Microsoft
  • Adrian Weller – Leverhulme Centre for the Future of Intelligence
  • Abigail Hing Wen – Author & Filmmaker
  • Alexander Wong – Vision and Image Processing Lab at University of Waterloo
  • Andrew Zaldivar – Google
  • Gabi Zijderveld – Affectiva