AI is already changing the way news is being reported.
AI tools can alert journalists to breaking news, help them analyze and draw insights from large datasets, and even write and produce the news. At the same time, the risks associated with using AI tools are significant and varied. From potentially spreading misinformation to producing biased output, the cost of misusing AI in journalism — both literal and figurative — can be high.
Partnership on AI (PAI), as part of the Knight Foundation’s AI and Local News Initiative, has been working with organizations and individuals from the technology and news industries, civil society, and academia to explore how journalists can ethically adopt AI. AI Adoption for Newsrooms: A 10-Step Guide is the latest addition to PAI’s AI and Local News Toolkit, a set of resources designed to help local news organizations responsibly harness AI’s potential.
Informed by 5 Key Principles for AI-Adopting Newsrooms, the Guide provides a step-by-step roadmap to support newsrooms navigating the difficult questions posed by AI tool identification, procurement, and use.
Beginning with Step 1, “Identify the outcomes and objectives of adding an AI tool,” and ending with Step 10, “Determine when you should retire an AI tool,” AI Adoption for Newsrooms takes newsrooms through the entire AI adoption journey, illustrated with real-world examples of newsrooms that have incorporated AI tools.
How This Guide Was Created
Over the past year, we worked with journalists and newsroom leaders to understand their most pressing questions related to responsibly procuring and using AI tools. We’ve also interviewed AI tool developers to understand why they’ve developed these tools and what risks they foresee with adoption. In January 2023, we launched the AI and Local News Steering Committee, a group of nine experts currently working in the AI and news sectors, including representatives of industry, newsrooms, civil society, and academia. The Steering Committee has focused primarily on providing input and direction on the content and development of this Guide.
Who This Guide Is For
While the Guide is primarily written for newsrooms looking to procure new AI tools, it is also applicable to newsrooms that have already procured AI tools or are considering building their own. In this guide, procurement is covered in the first 7 steps, while the remaining 3 steps cover the governance and use of AI tools within the newsroom. The steps are written to allow users to jump into the Guide at any step depending on where their newsroom is in the procurement and adoption process. Throughout the Guide, we seek to balance usability with sufficient nuance and depth.
The responsible use of AI tools is part of upholding the long-standing journalistic values of integrity, transparency, and accountability. Journalists should strive to apply the same rigor and scrutiny to AI tools as they do to sources in news stories. This is how we can ensure that the AI tools newsrooms adopt serve the best interests of the newsroom and its audiences, and do not amplify bias, spread misinformation, or put the newsroom’s credibility at stake. To that end, we guide newsrooms through the questions they should be asking at every step of their journey of procuring and using an AI tool, with insights from a multidisciplinary community — including other newsrooms — that has already worked with AI tools.
What Responsible AI Adoption Looks Like
Responsible procurement and use of AI tools requires understanding the ethical implications of such tools, including how to maximize their benefits while appropriately assessing their risks. This necessitates a broader newsroom effort — between journalists, editors, and organization leaders — to put governance in place that ensures appropriate use and monitoring throughout an AI tool’s lifecycle.
AI tools can be used for many different purposes and have various degrees of complexity. As a result, the responsible adoption of AI can look different depending on the newsroom and what tools they are incorporating. For that reason, this Guide poses many questions for journalists, editors, and management to help determine what responsible AI stewardship looks like for their newsroom, whether they are looking to procure an AI tool or create their own. This may seem like a lot of work upfront for what otherwise might be a simple process. Answering these questions at the outset, however, will save newsrooms a lot of time and energy compared to retroactively figuring out responsible use of a tool after purchasing it, training it, and using it.
Scope Limitations of This Guide
Introducing AI technology in journalism requires internal management and preparation in the newsroom — not only for new technical skills, but also for the cultural change it brings, taking into account the emotional needs and morale of those in the newsroom. The Guide does not address the organizational and cultural impact of AI adoption, but this is an important consideration for ensuring that adoption succeeds. Journalists and team members should feel comfortable using an AI tool as an aid, not as a replacement for their work. It is also important that journalists can provide input into decision-making processes and have a real say in which AI tools are chosen to aid their work. For more on this, we encourage you to utilize PAI’s Guidelines for AI & Shared Prosperity and refer to our report on AI and Job Quality.
Broadly, AI tools are any technologies, software, or platforms that use algorithms or artificial intelligence to analyze data, automate processes, or make predictions or recommendations. While there are many definitions of AI, it is, in essence, a software system that takes in data, learns from that data, and interprets it.
Machine Learning: As defined by the General Services Administration, the practice of using algorithms that are able to learn from large datasets by extracting patterns, enabling the algorithm to take an iterative and adaptive approach to problem-solving.
Generative AI: A type of AI that can produce new content in various formats — including text, imagery, audio, or data — based on user inputs and the datasets it has been trained on.
Natural Language Generation: As described by IBM, the process of converting structured data into human-like text.
Natural Language Processing: As described by IBM, the ability of a machine to interpret what humans are saying through text or voice formats.
Computer Vision: A type of AI that seeks to classify or identify objects, features, or people in images or videos.
AI Bias: A prejudiced determination made by an AI system, particularly one that is inequitable, oppressive, or harmful to socially marginalized groups.
AI Ethics: The multidisciplinary field that aims to employ standards of moral conduct to consider the societal and ethical implications of algorithmic development and use.
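To make the natural language generation definition above concrete, here is a toy sketch (not from the Guide, and far simpler than any real AI tool): a template that turns a structured game record into a human-readable sentence, the kind of technique behind automated sports or earnings recaps. The team names and data are invented for illustration.

```python
# Toy illustration of template-based natural language generation:
# converting structured data into human-like text.

def generate_recap(game):
    """Render a structured game record as a short news sentence."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home_team"], game["away_team"])
        if game["home_score"] > game["away_score"]
        else (game["away_team"], game["home_team"])
    )
    # Vary the verb slightly based on the score margin.
    verb = "edged" if margin <= 3 else "defeated"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} on {game['date']}."

game = {
    "home_team": "Springfield United",
    "away_team": "Shelbyville FC",
    "home_score": 2,
    "away_score": 1,
    "date": "May 4",
}
print(generate_recap(game))
# Springfield United edged Shelbyville FC 2-1 on May 4.
```

Production NLG tools use far richer templates or generative models, but the underlying idea — structured data in, readable prose out — is the same.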
Categories of AI Tools for Newsrooms
AI tools for newsrooms have various uses and can be used at different points in the news production process. To highlight this complexity, PAI analyzed more than 70 tools in our AI Tools for Local Newsrooms Database, providing plain-language descriptions of the AI tools and their uses and identifying five broad categories of AI tools relevant to journalists.
AI tools often have multiple features and can fall under multiple categories. For example, it is common for a tool to combine content creation and distribution functions. Step 3 of this guide addresses the unique risks associated with utilizing each of these categories of tools.
How AI Tools Differ From Other Newsroom Technologies
Several features differentiate AI tools from other software.
- Traditional software relies on a rules-based system where the outputs are the same every time. AI tools are iterative and often make decisions without explicit programming. Unlike with traditional software, we don’t always have insight into how AI systems arrive at their conclusions or the factors involved. AI tools therefore require an additional layer of oversight that was not necessary with traditional “plug and play” software, which produces the same results through the same processes every time.
- AI tools might not have the needed context to arrive at the correct conclusion (for example, when live-translating content) and thus need to be provided with that context through human oversight.
- AI tools may produce harmful outputs either unintentionally or through targeted attacks. While traditional software can suffer from similar vulnerabilities, the risk is amplified for AI tools. AI tools therefore require continuous monitoring to ensure their outputs still align with their intended purposes.
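The first distinction above can be sketched in a few lines of code. This is an invented toy example, not a real newsroom tool: a fixed rule behaves identically forever, while even a trivially “learned” classifier's behavior depends entirely on the examples it was trained on — which is why AI tools need the ongoing oversight described here.

```python
# Toy contrast: rules-based software vs. software that learns from data.

def rules_based_flag(headline):
    """Fixed rule: the same input produces the same output, every time."""
    return len(headline) > 70

def train_word_classifier(examples):
    """'Learn' which words are associated with the 'clickbait' label."""
    clickbait_words = set()
    for text, label in examples:
        if label == "clickbait":
            clickbait_words.update(text.lower().split())

    def classify(headline):
        # The decision depends entirely on the training examples seen above.
        words = set(headline.lower().split())
        return "clickbait" if words & clickbait_words else "news"

    return classify

# The same classifier code, trained on different examples, behaves differently:
clf_a = train_word_classifier([("you won't believe this", "clickbait")])
clf_b = train_word_classifier([("shocking secret revealed", "clickbait")])

headline = "you won't believe the result"
print(clf_a(headline))  # clickbait -- overlaps with clf_a's training words
print(clf_b(headline))  # news -- no overlap with clf_b's training words
```

Real AI tools learn from datasets of millions of examples rather than a handful of words, which makes their behavior even harder to predict from the outside — the oversight burden grows accordingly.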
These elements help justify the need for additional attention and governance when newsrooms adopt AI tools. This includes monitoring how data is used to train models, assessing the impact of those models, and determining thresholds for when tools need to be retired — all described in more depth in the Guide.
5 Principles of AI Adoption for Newsrooms
The step-by-step Guide below is informed by a set of recommendations for the ethical adoption of AI by newsrooms previously published by PAI. These principles are:
- Newsrooms need clear goals for adopting AI tools
- Technology must embody the standards and values of the news operation
- Transparency, explainability, and accountability mechanisms must accompany the implementation of AI tools
- Newsroom staff need to actively supervise AI tools
- Distribution platforms must embed journalistic values into their AI systems
For a more in-depth understanding of these recommendations, please read PAI’s blog post on the topic.
10 Steps for AI Adoption in Newsrooms
This Guide recommends newsrooms follow a 10-step process for adopting AI tools.
Working through the steps, if you discover by Step 2 that your newsroom’s needs won’t be addressed by an AI tool but are instead structural or organizational, consider addressing those first before proceeding. If by Steps 6 and 7 you find that none of the AI tools meet your needs, hold off on adopting one. The sunk cost of the time spent researching and testing tools is likely far smaller than the cost of implementing one that doesn’t meet your needs or the standards for responsible AI that you’ve set.
1. Identify the outcomes and objectives of adding an AI tool
2. Map out your news production cycle and where an AI tool might fit into existing systems
3. Pinpoint the category of tools you’ll be considering and understand the associated risks
4. Establish performance benchmarks
5. Shortlist three to five potential AI tools and interview the tool developers
6. Select one or two tools that you would like to procure
7. Outline the potential benefits and drawbacks of implementing this tool
8. Set up your newsroom for success after procurement
9. Understand the lifecycle of an AI tool
10. Determine when you should retire an AI tool
AI Adoption for Newsrooms was iteratively developed by PAI’s AI and Media Integrity team under comprehensive guidance from the AI and Local News Steering Committee. We’d like to thank the Steering Committee members for their commitment to this programmatic work and for their generosity in time, expertise, and effort to advance this project. Their astute contributions and detailed comments on earlier drafts have strengthened this work immensely.
We would also like to thank the Partnership on AI staff who championed this work and provided thoughtful feedback and ideas throughout the research and writing process: Claire Leibowicz, Stephanie Bell, Hudson Hongo and Neil Uhl.
Finally, PAI is grateful to the Knight Foundation for its financial support of, and thought partnership on, the AI and local news work — and personally grateful to Marc Lavallée, Director of Technology, for his wisdom and energy.
If you would like to add to this work or to the list of resources available, to utilize this guide as part of your newsroom’s journey, or just to be involved in our future work at the intersection of AI and news, please email Dalia Hashim.