AI Adoption for Newsrooms: A 10-Step Guide

A step-by-step guide to the responsible adoption of AI tools in newsrooms

AI is already changing the way news is being reported.

AI tools can alert journalists to breaking news, help them analyze and draw insights from large datasets, and even write and produce the news. At the same time, the risks associated with using AI tools are significant and varied. From potentially spreading misinformation to making biased statements, the cost — both literally and figuratively — of misusing AI in journalism can be high.

Partnership on AI (PAI), as part of the Knight Foundation’s AI and Local News Initiative, has been working with organizations and individuals from the technology and news industries, civil society, and academia to explore how journalists can ethically adopt AI. AI Adoption for Newsrooms: A 10-Step Guide is the latest addition to PAI’s AI and Local News Toolkit, a set of resources designed to help local news organizations responsibly harness AI’s potential.


Informed by 5 Key Principles for AI-Adopting Newsrooms, the Guide provides a step-by-step roadmap to support newsrooms navigating the difficult questions posed by AI tool identification, procurement, and use.

Beginning with Step 1, “Identify the outcomes and objectives of adding an AI tool,” and ending with Step 10, “Determine when you should retire an AI tool,” AI Adoption for Newsrooms takes newsrooms through the entire AI adoption journey, illustrated with real-world examples of newsrooms that have incorporated AI tools.

Download the Guide

How This Guide Was Created

Over the past year, we worked with journalists and newsroom leaders to understand their most pressing questions related to responsibly procuring and using AI tools. We’ve also interviewed AI tool developers to understand why they’ve developed these tools and what risks they foresee with adoption. In January 2023, we launched the AI and Local News Steering Committee, a group of nine experts currently working in the AI and news sectors, including representatives of industry, newsrooms, civil society, and academia. The Steering Committee has focused primarily on providing input and direction on the content and development of this Guide.

Who This Guide Is For

While the Guide is primarily written for newsrooms looking to procure new AI tools, it is also applicable to newsrooms that have already procured AI tools or are considering building their own. In this Guide, procurement is covered in the first 7 steps, while the remaining 3 steps cover the governance and use of AI tools within the newsroom. The steps are written so that users can jump into the Guide at any point depending on where their newsroom is in the procurement and adoption process. Throughout the Guide, we seek to balance usability with sufficient nuance and depth.

The responsible use of AI tools is part of upholding the long-standing journalistic values of integrity, transparency, and accountability. Journalists should strive to apply the same rigor and scrutiny to AI tools as they do to sources in news stories. This is how we can ensure that adopted AI tools serve the best interests of the newsroom and its audiences and do not amplify bias, spread misinformation, or put the newsroom’s credibility at stake. To that end, we guide newsrooms through the questions they should be asking at every step of their journey of procuring and using an AI tool, with insights derived from a multidisciplinary community, including other newsrooms, that has already used AI tools.

What Responsible AI Adoption Looks Like

Responsible procurement and use of AI tools requires understanding the ethical implications of such tools, including how to maximize their benefits while appropriately assessing their risks. This necessitates a broader newsroom effort — between journalists, editors, and organization leaders — to put governance in place that ensures appropriate use and monitoring throughout an AI tool’s lifecycle.

AI tools can be used for many different purposes and have various degrees of complexity. As a result, the responsible adoption of AI can look different depending on the newsroom and what tools they are incorporating. For that reason, this Guide poses many questions for journalists, editors, and management to help determine what responsible AI stewardship looks like for their newsroom, whether they are looking to procure an AI tool or create their own. This may seem like a lot of work upfront for what otherwise might be a simple process. Answering these questions at the outset, however, will save newsrooms a lot of time and energy compared to retroactively figuring out responsible use of a tool after purchasing it, training it, and using it.

Scope Limitations of This Guide

Introducing AI technology in journalism requires internal management and preparation in the newsroom, not only for technical skills but also for the cultural change it brings, taking into account the emotional needs and morale of those in the newsroom. The Guide does not touch upon the organizational and cultural impact of AI adoption, but addressing that impact is essential to the success of AI adoption in a newsroom. Journalists and team members should feel comfortable using a tool as an aid, not as a replacement for their work. It is also important that journalists can provide input into decision-making processes and have a real say in the AI tools chosen to aid their work. For more on this, we encourage you to utilize PAI’s Guidelines for AI & Shared Prosperity and refer to our report on AI and Job Quality.

Key Terms

Broadly, AI tools are any technologies, software, or platforms that utilize algorithms or artificial intelligence to analyze data, automate processes, or make predictions or recommendations. While there are many definitions of AI, it is, in essence, software that takes in data, learns from that data, and interprets it.

Machine Learning: As defined by the General Services Administration, the practice of using algorithms that are able to learn from large datasets by extracting patterns, enabling the algorithm to take an iterative and adaptive approach to problem-solving.

Generative AI: A type of AI that can produce new content in various formats — including text, imagery, audio, or data — based on user inputs and the datasets it has been trained on.

Natural Language Generation: As described by IBM, the process of converting structured data into human-like text.

Natural Language Processing: As described by IBM, the ability of a machine to interpret what humans are saying through text or voice formats.

Computer Vision: A type of AI that seeks to classify or identify objects, features, or people in images or videos.

AI Bias: A prejudiced determination made by an AI system, particularly when it is inequitable or oppressive or impacts socially marginalized groups.

AI Ethics: The multidisciplinary field that aims to employ standards of moral conduct to consider the societal and ethical implications of algorithmic development and use.
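The Natural Language Generation entry above can be made concrete with a toy example. The sketch below converts one row of structured data into a readable sentence using a fixed template; this is an illustrative stand-in, since production NLG tools learn phrasing from training data rather than filling in templates, and all names and values here are invented:

```python
# Minimal template-based data-to-text sketch: real NLG systems learn
# phrasing from data, but the input/output shape is the same.
def generate_weather_report(record: dict) -> str:
    """Turn one row of structured weather data into a sentence."""
    return (
        f"{record['city']} saw a high of {record['high_f']}°F on "
        f"{record['date']}, with {record['condition']} conditions."
    )

row = {"city": "Springfield", "date": "2024-05-01",
       "high_f": 72, "condition": "clear"}
print(generate_weather_report(row))
# → Springfield saw a high of 72°F on 2024-05-01, with clear conditions.
```

Even at this simplified level, the shape matches automated weather, earnings, or sports recaps: structured records in, prose out.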

Categories of AI Tools for Newsrooms

AI tools for newsrooms have various uses and can be applied at different points in the news production process. To illustrate this variety, PAI analyzed more than 70 tools in our AI Tools for Local Newsrooms Database, providing plain-language descriptions of the tools and their uses and identifying five broad categories of AI tools relevant to journalists:

Lead Generation Tools provide advance notice of trends, developing stories, or witness leads on breaking news. These tools can help journalists identify trending topics and potential sources on the scene.
Content Creation Tools simplify and automate the news-writing and reporting process. Technologies like ChatGPT and other automated writing tools have made it increasingly easy to pull data and turn it into short, data-centric, factual articles, though such content still requires editors’ review before publishing.
Audience Engagement Tools focus on collecting data and moderating audience interactions and comments. These can be used to provide data on user behaviors and interests or tailor content to audiences. Audience engagement tools also include recommender systems, which can personalize news recommendations based on user preferences.
Distribution Tools allow for a single piece of content to be shared in multiple languages or formats. Distribution tools can turn written content into audio, video, or images (and vice versa) or automate their distribution across many social media platforms.
Investigative and Data Analysis Tools support fact-finding and making sense of large datasets or large numbers of documents. These tools make it much easier to uncover patterns or hidden connections across documents, reducing the time and effort it takes to conduct investigative deep dives.

AI tools often have multiple features and can fall under multiple categories. For example, it is common for a tool to combine content creation and distribution functions. Step 3 of this guide addresses the unique risks associated with utilizing each of these categories of tools.
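As a toy illustration of the investigative and data analysis category, the sketch below counts which pairs of names co-occur across a small document set. The documents and name list are invented for this example, and real tools would use trained entity recognition rather than a fixed list:

```python
from collections import Counter
from itertools import combinations

# Toy cross-document analysis: find which pairs of names appear
# together most often across a document set.
documents = [
    "Vendor A signed the contract with Official B.",
    "Official B met Vendor A before the bidding closed.",
    "Vendor C lost the bid; Official B chaired the panel.",
]
names = ["Vendor A", "Vendor C", "Official B"]

pair_counts = Counter()
for doc in documents:
    present = sorted(n for n in names if n in doc)
    pair_counts.update(combinations(present, 2))

print(pair_counts.most_common(1))
# → [(('Official B', 'Vendor A'), 2)]
```

Scaled up to thousands of leaked documents and a learned entity extractor, the same counting idea is how such tools surface connections a reporter might otherwise miss.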

How AI Tools Differ From Other Newsroom Technologies

Several features differentiate AI tools from other software.

  1. Traditional software relies on a rules-based system: the same inputs produce the same outputs every time. AI tools are iterative and often make decisions without explicit programming. Unlike with traditional software, we don’t always have insight into how AI systems arrive at their conclusions or the factors involved. AI tools therefore require an additional layer of oversight that was not necessary with “plug and play” traditional software that produces the same results through the same processes every time.
  2. AI tools might not have the needed context to arrive at the correct conclusion (for example, when live-translating content) and thus need to be provided with that context through human oversight.
  3. AI tools may produce harmful outputs either unintentionally or through targeted attacks. While traditional software can suffer from similar vulnerabilities, the risk is amplified for AI tools. In turn, AI tools require continuous monitoring to ensure their outputs still align with their intended purposes.

These elements justify the need for additional attention and governance when newsrooms adopt AI tools. This includes monitoring how data is being used to train models, assessing the impact of those models, and determining thresholds for when tools are in need of retirement — all described in more depth in the Guide.
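The determinism point in item 1 above can be shown in a few lines of code. In this toy sketch, a rules-based function always maps the same input to the same output, while a data-driven stand-in (here, just a frequency count over invented training examples) changes its answer when its training data changes:

```python
from collections import Counter

# A rules-based system: same input, same output, every time.
def rule_based_label(headline: str) -> str:
    return "sports" if "score" in headline.lower() else "other"

# A data-driven stand-in: its output depends on what it was trained
# on, so changing the training data silently changes its behavior.
def train(examples: list[tuple[str, str]]):
    def predict(word: str) -> str:
        votes = Counter(label for text, label in examples if word in text)
        return votes.most_common(1)[0][0] if votes else "unknown"
    return predict

model_v1 = train([("final score 3-1", "sports")])
model_v2 = train([("credit score drops", "finance")])

print(rule_based_label("Final score 3-1"))   # always "sports"
print(model_v1("score"), model_v2("score"))  # "sports" vs. "finance"
```

This is why ongoing monitoring matters: retraining or updating a tool can change its outputs even when nothing in the newsroom’s workflow has changed.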

5 Principles of AI Adoption for Newsrooms

The step-by-step Guide below is informed by a set of recommendations, previously published by PAI, for the ethical adoption of AI by newsrooms. These principles are:

  1. Newsrooms need clear goals for adopting AI tools
  2. Technology must embody the standards and values of the news operation
  3. Transparency, explainability, and accountability mechanisms must accompany the implementation of AI tools
  4. Newsroom staff need to actively supervise AI tools
  5. Distribution platforms must embed journalistic values into their AI systems

For a more in-depth understanding of these recommendations, please read PAI’s blog post on the topic.

 

10 Steps for AI Adoption in Newsrooms

This Guide recommends newsrooms follow a 10-step process for adopting AI tools.

Working through the steps, if you discover by Step 2 that your newsroom’s needs won’t be addressed by an AI tool but are instead structural or organizational, consider addressing those first before proceeding. If, by Steps 6 and 7, you find that none of the AI tools meet your needs, hold off on adopting one. The sunk cost of time spent researching and testing tools is likely far smaller than the cost of implementing one that doesn’t meet your needs or fails the standards for responsible AI that you’ve set.

  1. Identify the outcomes and objectives of adding an AI tool
  2. Map out your news production cycle and where an AI tool might fit into existing systems
  3. Pinpoint the category of tools you’ll be considering and understand the associated risks
  4. Establish performance benchmarks
  5. Shortlist three to five potential AI tools and interview the tool developers
  6. Select one or two tools that you would like to procure
  7. Outline the potential benefits and drawbacks of implementing this tool
  8. Set up your newsroom for success after procurement
  9. Understand the lifecycle of an AI tool
  10. Determine when you should retire an AI tool

Acknowledgements

AI Adoption for Newsrooms was iteratively developed by PAI’s AI and Media Integrity team under comprehensive guidance from the AI and Local News Steering Committee. We’d like to thank the Steering Committee members for their commitment to this programmatic work and for generously contributing their time, expertise, and effort to advance this project. Their astute contributions and detailed comments on earlier drafts have strengthened this work immensely.

We would also like to thank the Partnership on AI staff who championed this work and provided thoughtful feedback and ideas throughout the research and writing process: Claire Leibowicz, Stephanie Bell, Hudson Hongo and Neil Uhl.

Finally, PAI is grateful to the Knight Foundation for its financial support of and thought partnership on the AI and local news work, and to Marc Lavalee, Director of Technology, for his wisdom and energy.

If you would like to add to this work or to the list of resources available, to utilize this guide as part of your newsroom’s journey, or just to be involved in our future work at the intersection of AI and news, please email Dalia Hashim.

Guidelines for AI and Shared Prosperity

PAI Staff



Our economic future is too important to leave to chance.

AI has the potential to radically disrupt people’s economic lives in both positive and negative ways. It remains to be determined which of these we’ll see more of. In the best scenario, AI could widely enrich humanity, equitably equipping people with the time, resources, and tools to pursue the goals that matter most to them.

Our current moment serves as a profound opportunity — one that we will miss if we don’t act now. To achieve a better future with AI, we must put in the work today.

In medicine and other fields, new innovations are put through rigorous testing to ensure they are fit for purpose. The AI community, however, has no established practice for assessing the impact of AI systems on inequality or job quality. Without one, it remains difficult to ensure AI deployments are bringing us closer to the economic future we want to live in.

You can help guide AI’s impact on jobs

AI developers, AI users, policymakers, labor organizations, and workers can all help steer AI so its economic benefits are shared by all. Using Partnership on AI’s (PAI) Shared Prosperity Guidelines, these stakeholders can minimize the chance that individual AI systems worsen shared prosperity-relevant outcomes.

The Shared Prosperity Guidelines can be used by following a guided, three-step process.

 

Get Involved

Partnership on AI needs your help to refine, test, and drive adoption of the Guidelines for AI and Shared Prosperity.

Fill out the form below to share your feedback on the Guidelines, ask about collaboration opportunities, and receive updates about events and other future work by the AI and Shared Prosperity Initiative.

Get in Touch


Step 1: Learn About the Guidelines

The Need for the Guidelines

The Origin of the Guidelines

Design of the Guidelines

Key Principles for Using the Guidelines

Step 2: Apply the Job Impact Assessment Tool

Instructions for Performing a Job Impact Assessment

Signals of Opportunity to Advance Shared Prosperity

Signals of Risk to Shared Prosperity

Step 3: Stakeholder-Specific Recommendations

For AI-Creating Organizations

For AI-Using Organizations

For Policymakers

For Labor Organizations and Workers

Get Involved

Endorsements

Acknowledgments

AI and Shared Prosperity Initiative’s Steering Committee


ABOUT ML Foundational Resource

Overview


ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles) is a multi-year, multi-stakeholder initiative aimed at building transparency into the AI development process, industry-wide, through full lifecycle documentation. On this page, you will find the collected outputs of ABOUT ML, a library of resources designed to help organizations and individuals begin implementing transparency at scale. To further increase the usability of these resources, recommended reading plans for different readers are provided below.

Learn more about the origins of ABOUT ML and contributors to the project here.

Recommended Reading Plans

At the foundation of these resources lies the newly revised ABOUT ML Reference Document, which both identifies transparency goals and offers suggestions on how they might be achieved. Using principles provided by the Reference Document and insights about implementation gathered through our research, PAI plans to release additional ML documentation guides, templates, recommendations, and other artifacts. These future artifacts will also be available on this page.

Read the full ABOUT ML Reference Document

 

Recommended Reading Plans for…


ML System Developers/Deployers

ML system developers/deployers are encouraged to do a deep dive into Section 3: Preliminary Synthesized Documentation Suggestions and use it to highlight gaps in their current understanding of both data- and model-related documentation and planning needs. This group will benefit most from further participation in the ABOUT ML effort by engaging with the community in the forthcoming online forum and by testing the efficacy and applicability of the templates and specifications to be published in the PLAYBOOK and PILOTS, which will be developed from use cases as opportunities to implement ML documentation processes within an organization.


ML System Procurers

ML system procurers might explore Section 2.2: Documentation to Operationalize AI Ethics Goals to get ideas about what concepts to include as requirements for models and data in future requests for proposals relevant to ML systems. Additionally, they could use Section 2.3: Research Themes on Documentation for Transparency to shape conversations with the business owners and requirements writers to further elicit detailed key performance indicators and measures for success for any procured ML systems.


Users of ML System APIs and/or Experienced End Users of ML Systems

Users of ML system APIs and/or experienced end users of ML systems might skim the document and review all of the coral Quick Guides to get a better understanding of how ML concepts are relevant to many of the tools they regularly use. A review of Section 2.1: Demand for Transparency and AI Ethics in ML Systems will provide insight into conditions where it is appropriate to use ML systems. This section also explains how transparency is a foundation for both internal accountability among the developers, deployers, and API users of an ML system and external accountability to customers, impacted non-users, civil society organizations, and policymakers.


Internal Compliance Teams

Internal compliance teams are encouraged to explore Section 4: Current Challenges of Implementing Documentation and use it to shape conversations with developer/deployment teams to find ways to measure compliance throughout the Machine Learning Lifecycle (MLLC).


External Auditors

External auditors could skim Appendix: Compiled List of Documentation Questions and familiarize themselves with high-level concepts as well as tactically operationalized tenets to look for in their determination of whether or not an ML System is well-documented.


Lay Users of ML Systems and/or Members of Low-Income Communities

Lay users of ML systems and/or members of low-income communities might skim the document and review all of the blue “How We Define” boxes in order to get an overarching understanding of the text’s contents. These users are encouraged to continue learning ABOUT ML systems by exploring how they might impact their everyday lives. Additional insights can be gathered from the Glossary section of the ABOUT ML Reference Document.