Major Themes and Findings

The workers participating in this research shared stories, experiences, and observations of their time interacting with AI in their workplaces. The findings of this report are drawn from their insights. While common themes manifested differently according to setting, they appeared across all of the research sites and reflect what we heard from a substantial portion of participating workers. The ways these themes might present themselves (including in settings beyond those we researched) depend on a number of factors, including regulatory protections, companies’ managerial priorities, and workers’ relative influence in their workplaces (through unions, worker organizations, or individual leverage due to local labor market conditions). Workers also experience these impacts unevenly as individuals. Personal demographic characteristics — such as their race, age, gender, immigration status, disability status, and formal education level — may lead them to be more marginalized or vulnerable.

Theme 1: Executive and managerial decisions shape AI’s impacts on workers, for better and worse

Workplace AI is deployed by particular executives and managers, in specific contexts and in specific ways. Leaders and managers determine whether to use workplace AI, which technologies to adopt, what goals those technologies are intended to accomplish, and how they are to be used. These decisions are driven by a combination of business models, company culture, industry trends, and the availability of relevant AI products; these factors also shape each other. The initial company choices that produce technological impacts on workers are, at first glance, not technology decisions at all: they are foundational choices about the operating model and personnel strategy of the business.

How hierarchical is the business model? How much discretion are employees given to use their own judgment in executing their work (as opposed to following a strict set of rules and procedures)? Are employees encouraged to stay and develop expertise and experience that they bring to their roles or intentionally churned to keep costs low? Are jobs designed so that they can be performed with very little training (rendering workers intentionally interchangeable) or do they reward experience? How aggressively are performance targets pushed and punished?

These foundational decisions in turn structure subsequent decisions about which technologies could be useful in meeting business goals, as well as how they ought to be used. Upstream decisions on questions like these likely have a far greater influence on how AI affects workers than any choices made by their immediate managers. For instance, a company that designs a “high road” model and strategy to offer its workers high degrees of autonomy (a job attribute highly correlated with high job quality and employee satisfaction) [Paul E. Spector, “Perceived Control by Employees: A Meta-Analysis of Studies Concerning Autonomy and Participation at Work,” Human Relations 39, no. 11 (1986): 1005–16, https://doi.org/10.1177/001872678603901104] would likely see more value in non-binding AI decision-support tools. On the other hand, a company that designs a “low road” model, with roles performed with very little training or autonomy and very short average tenure (highly correlated with low job quality and employee satisfaction) [Henry Ongori, “A Review of the Literature on Employee Turnover,” African Journal of Business Management 1, no. 3 (2007): 049–054, https://academicjournals.org/article/article1380537420_Ongori.pdf], would find more use for technologies that closely monitor work to ensure it is being performed correctly or that claim to remove the need for human judgment. Each of these decisions shapes what tasks workers are expected to accomplish and how they are expected to do them. All of these decisions affect workers beyond technology, potentially much more than any technologies used, but they also shape workplace AI’s impacts.

As an example, the customer service agents we spoke to in India use AI software marketed to customer support companies and teams as providing real-time coaching, performance assessment, and task augmentation for their agents. One function of the software is to monitor calls and text chats for keywords and phrases, diagnose possible customer issues, and suggest resolutions, which are offered to agents in real-time pop-ups and menus. Another function is to monitor tone of voice, volume, and keywords to assess emotion, offering agents real-time pop-ups and alerts on how they could better manage the emotional side of their interactions with customers (for instance, warnings that a conversation is becoming emotionally charged, or guidance to speak more quietly or to slow down).
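
To make the mechanics concrete, below is a minimal, hypothetical sketch (in Python) of the kind of keyword-matching logic that could drive such real-time suggestion pop-ups. The issue categories, trigger phrases, and suggestion text are all invented for illustration; the actual software the agents used is proprietary and presumably far more sophisticated.

```python
# A hypothetical sketch of keyword-triggered resolution suggestions.
# All categories, phrases, and suggestion strings are invented.

# Map of issue categories to trigger phrases (illustrative only).
ISSUE_KEYWORDS = {
    "billing_dispute": ["overcharged", "double charge", "refund"],
    "service_outage": ["no signal", "not working", "down since"],
}

# Canned resolution suggestions per issue category (illustrative only).
SUGGESTIONS = {
    "billing_dispute": "Offer to review the last three billing cycles.",
    "service_outage": "Walk the customer through a device restart.",
}


def suggest_resolutions(transcript_so_far: str) -> list[str]:
    """Return pop-up suggestions triggered by keywords in the live transcript."""
    text = transcript_so_far.lower()
    hits = [
        issue
        for issue, phrases in ISSUE_KEYWORDS.items()
        if any(phrase in text for phrase in phrases)
    ]
    return [SUGGESTIONS[issue] for issue in hits]


if __name__ == "__main__":
    live_text = "I was overcharged this month and my internet is down since Friday"
    for popup in suggest_resolutions(live_text):
        print("AGENT POP-UP:", popup)
```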

In the agents’ use of this software, two clear examples of this theme emerged. First, some agents’ employers made clear that the AI alerts and prompts received during calls or text chat sessions should be taken as suggestions rather than commands or requirements. These workers were expected to exercise autonomy and judgment in meeting customer needs, using the AI feedback as one of many inputs in their call or chat handling. Agents at other companies, by contrast, were expected to follow the AI’s feedback closely and not disregard its recommendations except, perhaps, in extreme circumstances. Both groups recounted instances where they judged the AI’s recommendations to be incorrect, but the group empowered to exercise judgment on calls or in chats felt more autonomy and control over the quality of the service they provided. Second, some employers treated AI feedback and call assessment (including predictions of customer feedback scores) as purely a coaching tool, while others used it as a direct input to performance evaluations. In a coaching setting, workers were able to put the feedback in context for themselves, adopting suggestions where they made sense. In a performance evaluation setting, the context was often flattened or missing, adding an element of arbitrariness where managers likely intended to add rigor. [See Virginia Doellgast and Sean O’Brady, “Making Call Center Jobs Better: The Relationship between Management Practices and Worker Stress,” June 2020, https://ecommons.cornell.edu/handle/1813/74307, for additional detail on the impacts of punitive managerial uses of monitoring technology in call centers, including increased worker stress.]

In sub-Saharan Africa, the data annotators were tasked with annotating images and videos for data sets used in developing machine learning (ML) models. Before the introduction of ML software to automate part of this work, the workers carefully outlined each contour of an object in an image. For videos, this could require meticulously shifting and editing the contours of an outline across dozens or even hundreds of frames in which the object had only slightly changed position from one frame to the next. The company recently introduced ML task automation software to assist the workers. For certain objects in an image, workers could identify the outermost corners of the object and the software filled in the rest. For videos, the software could take the initial object outline delineated by the worker and predict the outlines of that object in many subsequent frames.
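
As a rough illustration of the video-prediction feature, the sketch below propagates a worker-drawn outline across intermediate frames using simple linear interpolation between two annotated keyframes. Real annotation tools presumably use learned object-tracking models that extrapolate from a single drawn outline; this interpolation, with all names invented, is only a minimal stand-in to show what machine-filled intermediate frames mean.

```python
# A simplified, hypothetical sketch of outline propagation across video
# frames: blend the vertices of two worker-drawn keyframe outlines to
# produce predicted outlines for the frames in between.

Point = tuple[float, float]


def interpolate_outline(start: list[Point], end: list[Point], t: float) -> list[Point]:
    """Blend two outlines (same vertex count) at fraction t in [0, 1]."""
    return [
        (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
        for (x0, y0), (x1, y1) in zip(start, end)
    ]


def propagate(start: list[Point], end: list[Point], n_frames: int) -> list[list[Point]]:
    """Predict outlines for the n_frames frames between two annotated keyframes."""
    return [
        interpolate_outline(start, end, i / (n_frames + 1))
        for i in range(1, n_frames + 1)
    ]


if __name__ == "__main__":
    # Two hand-drawn keyframes of a box that drifts slightly to the right.
    frame_0 = [(10.0, 10.0), (50.0, 10.0), (50.0, 40.0), (10.0, 40.0)]
    frame_30 = [(14.0, 12.0), (54.0, 12.0), (54.0, 42.0), (14.0, 42.0)]
    for outline in propagate(frame_0, frame_30, 2):
        print(outline)
```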

The workers who participated in this research were tasked with testing and providing feedback on the company’s new ML software (in addition to performing actual annotation work). Unlike colleagues responsible for specific client deliverables and deadlines, they were not given strict quality or completion targets for the annotation side of their role, though they could still earn bonuses for the speed and accuracy of their work. This incentive structure gave them the time and freedom to focus and reflect on improvements to the ML tools that could deliver value for the company, without forcing them to forgo the additional compensation offered to their colleagues.

Previous research has offered other demonstrations of how managerial decision-making shapes AI technology’s impacts on workers. [Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/] This includes the use of big data analytics as invasive and harmful “bossware” [Matt Scherer, “Warning: Bossware May Be Hazardous to Your Health” (Center for Democracy & Technology, July 2021), https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf]; the cruelty that can result from algorithmic decisions with no human recourse [Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019); Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Explainer (Data and Society, February 2019), https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf]; the negative health impacts of overly aggressive performance targets set using AI [Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/; Human Impact Partners and Warehouse Worker Resource Center, “The Public Health Crisis Hidden in Amazon Warehouses,” January 2021, https://humanimpact.org/wp-content/uploads/2021/01/The-Public-Health-Crisis-Hidden-In-Amazon-Warehouses-HIP-WWRC-01-21.pdf]; the lack of worker protections afforded to workers misclassified by their employers as independent contractors on AI-driven platforms [V.B. Dubal, “Wage Slave or Entrepreneur?: Contesting the Dualism of Legal Worker Identities,” California Law Review 105, no. 1 (2017): 65–123, https://www.jstor.org/stable/24915689; Ramiro Albrieu, ed., Cracking the Future of Work: Automation and Labor Platforms in the Global South (2021), https://fowigs.net/wp-content/uploads/2021/10/Cracking-the-future-of-work.-Automation-and-labor-platforms-in-the-Global-South-FOWIGS.pdf]; and the negative health impacts, life disruptions, anxiety, and job insecurity arising from last-minute shift scheduling enabled by AI software [Daniel Schneider and Kristen Harknett, “Schedule Instability and Unpredictability and Worker and Family Health and Wellbeing,” Working Paper (Washington Center for Equitable Growth, September 2016), http://cdn.equitablegrowth.org/wp-content/uploads/2016/09/12135618/091216-WP-Schedule-instability-and-unpredictability.pdf].

These negative impacts on workers should not be seen as inevitabilities of the unstoppable march of technological progress, but as the outcomes of a series of decisions: decisions made first by companies that build business and operating models around low-quality jobs, then by product developers and designers who create AI technologies either explicitly designed for these uses or open to misuse in harmful ways, and finally by leaders and managers who choose these particular implementations. The beneficial examples above, in which AI software assisted workers while they maintained their autonomy and retained decision-making authority, demonstrate that better choices are available to managers and leaders implementing AI in their workplaces.

Theme 2: Workers appreciate how some uses of AI have positively changed their jobs

While there are clear harms arising from some workplace AI uses and decisions, the role of workplace AI in job quality is not wholly negative. Across our research sites, workers offered reflections on what they appreciated about specific uses of AI or attributes of the AI products they use. At the India site, beyond appreciating the additional information supporting their decisions and real-time coaching that was not evaluative or punitive, workers highlighted the time savings and physical well-being benefits of AI software that logged caller details and auto-prompted solution menus. The call center workers reported that the software’s automated data entry reduced the eye strain and repetitive stress injuries to their wrists and hands caused by the constant keyboard, mouse, and screen work of entering this information themselves.

In the sub-Saharan Africa site, a strong majority of the data annotators preferred working with the ML tools to doing their work more manually. They lauded the speed with which the ML prediction software enabled them to complete annotation tasks and the reduction in tedious, repetitive work (for instance, working through each frame of a video from start to finish). Some workers also mentioned that the tool left them less tired throughout the day and at the end of their shifts.

However, they noted the software also sometimes had accuracy problems, and in those cases many workers would have preferred to complete the tasks manually from start to finish. Inaccurate output posed several problems. First, it forced them to use their time inefficiently: they had to wait for the algorithm to complete its (incorrect) annotation and then spend additional time revising its output. Second, hunting down and correcting each error felt unnatural and painstaking compared to doing the tasks themselves when they felt mentally prepared to do so. Finally, they felt a frustration familiar to anyone required to work with a malfunctioning technology: the software was failing to meet their expectations and leaving them to sort out the problems it created. In interviews, the data annotators explained that part of this performance gap could be attributed to portrayals of the technology when it was introduced: because it was “machine learning” or “artificial intelligence,” they expected it to be more accurate than their own work, not less. Still, even workers who raised these issues praised the benefits listed above when the technology was working properly.

While there is a broader, ongoing discussion of puffery in the AI industry, less attention has been paid to the effects of similar dynamics in workplaces. [Arvind Narayanan, “How to Recognize AI Snake Oil,” https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf; Frederike Kaltheuner, ed., Fake AI (Meatspace Press, 2021), https://fakeaibook.com] Inflated portrayals of workplace AI’s capabilities may do more harm than good. Setting high expectations (however inadvertently) and then failing to meet them was also a source of frustration and stress for the call center workers using the call-coaching and evaluation software. In the context of AI’s benefits to workers, setting realistic expectations and then meeting or exceeding them may substantially reduce friction in AI use.

Among the warehouse workers in the US, many singled out AI technologies that reduced possible errors, such as placing items in the wrong locations or using the wrong tape or labels on packages. A number of participants said the improved accuracy of their work gave them an increased sense of pride in their jobs. Some research participants also valued how warehouse robots reduced the physical demands of the job. Robots that bring items to workers can radically reduce the steps walked by workers who previously walked 10 or 20 miles a day to retrieve those items themselves, and the assistance of robotic arms can reduce muscle strains and pulls. Reactions to these physical effects were mixed, however, with some participants noting that they missed the exercise they got in the old way of working and others raising concerns about increased injuries from the repetitive movements prompted by robot-assisted workflows. [Aiha Nguyen, “The Constant Boss: Work Under Digital Surveillance” (Data and Society, May 2021), https://datasociety.net/library/the-constant-boss/; Strategic Organizing Center, “Primed for Pain,” May 2021, https://thesoc.org/wp-content/uploads/2021/02/PrimedForPain.pdf; Alessandro Delfanti and Bronwyn Frey, “Humanly Extended Automation or the Future of Work Seen through Amazon Patents,” Science, Technology, & Human Values 46, no. 3 (May 2021): 655–82, https://doi.org/10.1177/0162243920943665]

Some of these benefits of workplace AI commonly cited by workers — like increases in speed, accuracy, efficiency, and productivity — were clearly intended by the creators and implementers of the technology. Others, such as the sense of pride in a job well done, could be seen as indirect effects of those benefits intentionally sought by the AI creators and implementers. Still others, such as the ergonomic advantages of automated call-logging, were meaningful improvements to worker well-being that likely did not play a decisive role in the creation of the software or the company’s decision to purchase it, but accrued to the worker nonetheless.

Both the intended and unintended positive consequences of workplace AI cited by workers point toward possible paths for developing and implementing workplace AI technologies that benefit workers as well as their employers. The workers who shared their stories and experiences with us were not anti-technology or anti-AI. Their own descriptions of what counts as a good work day and their personal definitions of what it means to do good work share a number of values and goals with their employers, including swift and accurate completion of their tasks. The participating workers welcomed technological assistance in achieving these objectives, provided they could maintain or improve their job quality while using it. These positive experiences and perspectives should give businesses confidence that benefits to workers and benefits to employers are not zero-sum: workplace AI integrations can deliver value to both groups, and respectful, considered implementations that maintain worker dignity and autonomy can be embraced by workers.

Theme 3: Workplace AI harms repeat, continue, or intensify known possible harms from earlier technologies

While AI may be relatively new to most workplaces, the impacts workers see from its use are not. Many negative impacts from workplace AI are versions of impacts seen from non-AI technologies. For instance, in order to integrate AI task automation into workflows, employers may make job and task design decisions that encourage repetitive motions and can lead to injuries (as reported by some participating warehouse workers); the same occurs in other, non-AI industrial settings where workers are assigned a small set of tasks to perform repeatedly. AI systems can deliver negative feedback to workers without helpful suggestions for improvement: an issue noted by some participating customer service agents, and an unfortunate practice of some human managers since the creation of managerial and supervisory roles. And some companies deployed intensive monitoring of their workers well before big data and AI made it possible for managers to analyze that data in increasingly invasive and stressful ways.

A US warehouse worker offered a representative account of how a performance evaluation AI system layered into her job — monitoring software her company uses to provide real-time performance feedback — negatively affected her emotional well-being. From when she clocks in until she clocks out, she is constantly monitored by software. It tracks when she is completing a task (for instance, following instructions she has been given about how to process an item in the warehouse), how long each task takes, when she is between tasks, and when she goes to get water or use the bathroom. It also tells her whether she is staying on pace or falling behind goals that her managers set using that same data. The expectation is that she is constantly on pace. If she falls behind for any reason, it triggers stress that stays with her until she is ahead of the targets again. The stress isn’t from personal perfectionism: at her employer, firing is a common consequence for workers who fall behind targets, regardless of whether they have understandable reasons for a slower pace (for instance, health conditions that require more frequent breaks).
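
The pace logic she describes can be illustrated with a bare-bones sketch: a running ratio of completed units to elapsed time, compared against a target, with off-task minutes still advancing the clock. The field names and target rate below are hypothetical and are not drawn from the actual software her employer uses.

```python
# A hypothetical sketch of a warehouse pace tracker. "On pace" is a simple
# running ratio computed continuously; time between tasks (water, bathroom)
# still counts against the worker's rate.

from dataclasses import dataclass


@dataclass
class ShiftTracker:
    target_units_per_hour: float
    units_completed: int = 0
    minutes_elapsed: float = 0.0  # includes time between tasks

    def record_task(self, task_minutes: float) -> None:
        self.units_completed += 1
        self.minutes_elapsed += task_minutes

    def record_idle(self, idle_minutes: float) -> None:
        # Breaks and waiting time still advance the clock.
        self.minutes_elapsed += idle_minutes

    def current_rate(self) -> float:
        hours = self.minutes_elapsed / 60
        return self.units_completed / hours if hours > 0 else 0.0

    def behind_pace(self) -> bool:
        return self.current_rate() < self.target_units_per_hour


if __name__ == "__main__":
    shift = ShiftTracker(target_units_per_hour=60)
    for _ in range(25):
        shift.record_task(task_minutes=1.0)  # exactly on pace so far
    shift.record_idle(idle_minutes=5.0)      # one short break drops the rate
    print(f"rate: {shift.current_rate():.1f}/hr, behind: {shift.behind_pace()}")
```

As the usage example shows, a worker who is exactly on pace while working falls “behind” after a single five-minute break, which is the dynamic the worker described: any pause triggers stress until the deficit is made up.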

The pressure generated by the way her company’s management uses this software leads her and her colleagues to cut corners to speed up their work. She and some of the other warehouse workers pointed out that, when trying to stay on pace, they would sacrifice safe movements and proper lifting techniques in favor of speed. The consequences their employers set for being too slow made the choice clear: they focused on not getting fired over staying safe. While some technologies in AI-assisted warehouses can reduce physical burdens on workers, such as robotic item movers that reduce the distances workers walk in a shift, employers’ decisions to use AI technologies to accelerate the pace of work can produce higher worker injury rates. [Phoebe V. Moore, “OSH and the Future of Work: Benefits and Risks of Artificial Intelligence Tools in Workplaces,” Discussion Paper (European Agency for Safety and Health at Work, 2019), https://osha.europa.eu/en/publications/osh-and-future-work-benefits-and-risks-artificial-intelligence-tools-workplaces; Strategic Organizing Center, “Primed for Pain,” May 2021, https://thesoc.org/wp-content/uploads/2021/02/PrimedForPain.pdf]

On top of the emotional and physical tolls that workplace AI can impose, the way managers and executives choose to integrate AI into the overall workflow may lower intellectual well-being on the job. Workers at each site largely agreed that the AI systems used in their jobs lowered the level of intellectual challenge compared to doing their work without AI. Workers in US warehouses with higher degrees of AI implementation often had less variety in their tasks and more technological guardrails to assist them in performing those tasks correctly. The customer service agents in India spent less time and energy diagnosing the reasons a customer called or identifying possible solutions to their issues. In sub-Saharan Africa, the data annotators no longer completed intricate tasks requiring a careful, discerning eye from start to finish; instead, they largely spent their time drawing broad outlines around objects in images, letting algorithms do the rest. While many welcomed the added ease, many others indicated that they preferred a higher degree of challenge.

Each of the examples above can be seen as a continuation of trends from other workplace technologies. Yet existing laws and regulations do not adequately address these harms. The status quo enforcement of basic health and safety protections for workers around the world is already inadequate to prevent them from being harmed by their jobs; the introduction of AI software and systems that can ratchet up work intensity only increases the urgency of shoring up these laws and their enforcement. [Annette Bernhardt, Lisa Kresge, and Reem Suleiman, “Data and Algorithms at Work: The Case for Worker Technology Rights” (UC Berkeley Labor Center, November 2021), https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf] In addition to the emotional and mental health impacts described above, AI monitoring and surveillance technologies undermine workers’ sense of privacy, dignity, and autonomy. [Andrea Dehlendorf and Ryan Gerety, “The Punitive Potential of AI,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/the-punitive-potential-of-ai/] Yet mental health safeguards are often less regulated or enforced, and in many geographies a policy vacuum exists regarding privacy and data protections at work.

The familiarity and continuity of harms from workplace AI should make them easier to anticipate, and thus to prevent or mitigate through responsible design and use. But until consideration of these impacts is foregrounded by AI developers and the executives and managers who purchase and implement workplace AI, or sufficient protections and enforcement are enacted by governments, workers will continue to suffer harms that could have been anticipated and prevented.

Theme 4: Current implementations of AI in work are reducing workers’ opportunities for autonomy, judgment, empathy, and creativity

One optimistic line of thought on AI’s transformational effects on jobs suggests that AI will free humans to take on more creative, empathetic, or intellectually advanced work [Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?,” Technological Forecasting and Social Change 114 (January 2017): 254–80, https://doi.org/10.1016/j.techfore.2016.08.019; “These Are the Top 10 Job Skills of Tomorrow – and How Long It Takes to Learn Them,” World Economic Forum, https://www.weforum.org/agenda/2020/10/top-10-work-skills-of-tomorrow-how-long-it-takes-to-learn-them/], an admirable goal of creating jobs with more “human” tasks than the reproducible, mathematically definable work of algorithms and robots. As reported by workers collaborating closely with AI, however, the current reality on the ground points to jobs moving in the opposite direction. [Daniel Susskind, “Technological Unemployment,” in The Oxford Handbook of AI Governance, ed. Justin Bullock et al. (Oxford University Press), https://doi.org/10.1093/oxfordhb/9780197579329.013.42]

Take the data annotators working with ML technology that automates some of the annotation work they previously did manually. Their responsibilities shifted away from a creative, generative role that some of them described as akin to a craft or an art. Previously, they carefully drew the outlines of relevant objects and derived satisfaction from their precise handiwork. With the addition of ML software to their workflow, they now, in their words, spend less time creating and more time “fixing” or “cleaning” the AI’s output, identifying and editing images that the algorithms annotated incorrectly.

Some of the call center workers used technologies aimed at two of these supposedly more human skills: empathy and problem-solving. For empathy, call-monitoring software assessed whether calls were getting too emotionally charged by measuring agents’ volume, speed, and word choice. For problem-solving, software designed to assist agents detected keywords and phrases in order to pull up solution lists and suggest possible issues the customer or client might be having. In each case, the software was not consistently accurate or helpful (in the judgment of the experienced call center workers), yet workers often had to contend with performance assessments tied to compliance with the software, pushing them to make their displays of empathy more templatized and, ultimately, less human.
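
As an illustration of why such tone monitoring can misfire, the sketch below flags an “emotionally charged” call from coarse features like volume, speaking rate, and word choice using fixed thresholds. The thresholds and word list are invented; commercial products presumably use trained models, but simple threshold rules convey how a naturally fast or loud speaker could trigger false alerts.

```python
# A hypothetical sketch of threshold-based escalation alerts for agents.
# All thresholds, feature names, and alert text are invented.

CHARGED_WORDS = {"ridiculous", "unacceptable", "angry", "terrible"}


def escalation_alerts(
    words_per_minute: float,
    volume_db: float,
    recent_words: list[str],
) -> list[str]:
    """Return real-time coaching alerts, if any, for the agent."""
    alerts = []
    if volume_db > 70:            # hypothetical loudness threshold
        alerts.append("Try speaking more quietly.")
    if words_per_minute > 170:    # hypothetical pace threshold
        alerts.append("Try slowing down your speaking speed.")
    if CHARGED_WORDS & {w.lower() for w in recent_words}:
        alerts.append("This conversation may be getting emotionally charged.")
    return alerts


if __name__ == "__main__":
    # A fast, slightly loud speaker quoting an upset customer trips all three.
    print(escalation_alerts(185, 72, ["this", "is", "unacceptable"]))
```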

In automated warehouses, workers who had been around prior to the introduction of new AI and robotics systems, or who switched from less-automated warehouses to cutting-edge robotics locations, found that the variety of their tasks shrank over time, as AI, robotics, and other automated systems picked up tasks they had previously completed or coordinated with other workers to complete. Multiple workers mentioned feeling like they themselves were robots in the more automated warehouses. As workers’ view of the work being done throughout the warehouse grew more parochial, their ability to identify and suggest improvements at a systemic level declined. Their universe of problem-solving potential had shrunk from warehouses sometimes the size of seven New York City blocks to a small set of tasks at a workstation no larger than 10 feet by 10 feet.

Each of these examples points to an under-discussed and heterodox aspect of AI’s current effects on job quality and skills: given current managerial decisions and technological products, the transition toward the purportedly attainable full automation of a specific job may well be one in which the workers in that job experience less autonomy (and thus fewer opportunities for creativity, empathy, complex problem-solving, and judgment), not more.

These “transition to automation” periods can be extremely long, as incremental progress continues or stalls and researchers work toward the next breakthrough; see, for example, the delays relative to predicted timelines for self-driving cars. [Christopher Mims, “Self-Driving Cars Could Be Decades Away, No Matter What Elon Musk Said,” WSJ, https://www.wsj.com/articles/self-driving-cars-could-be-decades-away-no-matter-what-elon-musk-said-11622865615] When thinking about workers training their AI replacements, some may imagine the time horizon of training another human to do one’s job, but these periods of automation transition could last years, decades, or possibly the span of an entire career.

Depending on how the technology evolves, workers may never see a paradise of creativity on the other side. This is perhaps a corollary to, or a deepening of, the “paradox of automation’s last mile” suggested by Mary Gray and Siddharth Suri in Ghost Work. [Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt, 2019)] While their work highlighted the possibility that there will always be more work for humans in the quest for full automation, workers’ present experience with AI systems tells us a great deal about what it will look like for many workers to traverse that paradoxical last mile.

This issue comes through more clearly in light of the distribution of skills and tasks throughout the labor market and the ways managers and companies decide to combine human and AI labor. In jobs where companies are actively trying to automate some or all of the tasks, they need workers to produce training data, both to build the relevant algorithms in the first place and to continuously improve them. Because the current state of AI chases the replication of human abilities, automation technologies are largely focused on discrete, narrow tasks — and so, too, are the workers tasked with training them. [Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” January 2022, https://doi.org/10.48550/arXiv.2201.04200] Given these structural forces, it is not surprising that the ways these AI technologies are deployed reduce workers’ scope for exercising more “human” skills in their jobs.

The realities of how current managerial uses of AI technologies transform workers’ jobs suggest a need to re-evaluate the optimistic framing in multiple ways. To the extent that executives and managers see value in using AI technology to free up their workers for more “human” or advanced tasks, they cannot assume that any AI tool will meet that goal. Nor should developers take for granted that the AI tools they create will, as implemented, free up humans to be more creative or empathetic or to focus on tasks requiring more complex judgment or discernment. Without caution and active collaboration with workers, workplace AI adoptions may bring about the very opposite effects. Moreover, the uneven pace of AI development means that these current impacts ought not be brushed aside as temporary harms on a quick path to a better future. The future capabilities of these technologies remain unclear, as do the timelines to achieve them. [World Economic Forum, “Positive AI Economic Futures,” Insight Report, November 2021, https://www.weforum.org/reports/positive-ai-economic-futures/] The present-day harms to existing workers’ autonomy, dignity, and sense of satisfying and meaningful work, on the other hand, are real — and accelerating.

Theme 5: Empowering workers early in AI development and implementation increases opportunities to implement AI that benefits workers as well as their employers

The market for workplace AI products is presently structured to address the needs and opportunities perceived by company leaders and managers with substantial budgets for AI transformations or integrations. The workers who use these products are multiple layers removed from the decision-makers and may sit in different departments or reporting lines. As such, the priorities of AI purchasers are not necessarily those of workers.

Providing workers the opportunity to participate in the creation, design, and implementation of workplace AI is a necessary corrective to approaches that exclude workers, only to later require them to use technologies created without their input or attention to their needs. Not every worker who participated in the research wants these opportunities for input. But comprehensively excluding workers throughout the process, or until UX (user experience) or user-testing phases, has multiple negative effects.

The data annotators who participated in this research were tasked with helping to improve the ML software they used in their work. They described their team leaders and the developers they worked with as open to suggestions, and they took pride in troubleshooting, bug-spotting, and identifying improvements to the software that were later adopted. Interviewees thought their ideas and suggestions meaningfully improved the tools they worked with. Even in this intentionally participatory environment, however, their reflections revealed some missed opportunities. In broad strokes, they described their role as finding ways to improve the software’s effectiveness. Nested underneath this mission were implicit objectives like understanding the nuances of the software’s failure modes or identifying improvements for the user interface. Since the software was useful but frustrating when it failed, improvements to its effectiveness also contributed to their own satisfaction.

However, they appeared to consider participation in shaping other aspects of their work, or offering ideas for new technologies, as outside their responsibilities. Recall, for example, the workers who found it disruptive to their efficiency and flow when the ML software struggled and they had to edit its mistakes. One annotator offered the idea of letting annotators set those images aside to return to later, allowing them to process all the successful automations in one batch and all the tasks requiring edits in another, rather than bouncing back and forth without a sense of what the next task would require. The suggestion was a process improvement that could raise both worker satisfaction and, likely, task efficiency. Yet when asked in a follow-up question whether they had made the suggestion to their managers or the developers, the annotator responded that their job was to improve the effectiveness of the tool, not to offer ideas like this.
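
A minimal sketch of the annotator’s batching idea appears below: partition the model’s outputs into one queue that mostly needs quick acceptance and another that needs edits, so each review batch involves one consistent kind of work. The confidence score used for the split is hypothetical; in practice, a worker’s own “set aside” button could drive it just as well.

```python
# A hypothetical sketch of the annotator's suggested workflow change:
# split predictions into an accept-first queue and a needs-edits queue
# so workers stop alternating unpredictably between the two kinds of work.

from dataclasses import dataclass, field


@dataclass
class Prediction:
    image_id: str
    confidence: float  # hypothetical model-reported quality score


@dataclass
class ReviewQueues:
    accept_first: list[Prediction] = field(default_factory=list)
    needs_edits: list[Prediction] = field(default_factory=list)


def split_for_review(preds: list[Prediction], threshold: float = 0.9) -> ReviewQueues:
    """Partition predictions so each batch needs one consistent kind of work."""
    queues = ReviewQueues()
    for p in preds:
        target = queues.accept_first if p.confidence >= threshold else queues.needs_edits
        target.append(p)
    return queues


if __name__ == "__main__":
    batch = [
        Prediction("img-001", 0.97),
        Prediction("img-002", 0.62),
        Prediction("img-003", 0.94),
    ]
    q = split_for_review(batch)
    print("accept first:", [p.image_id for p in q.accept_first])
    print("needs edits:", [p.image_id for p in q.needs_edits])
```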

By not opening up the biggest possible spaces for worker participation, or not asking the right questions, designers and implementers are missing productive ideas for new technologies. What would a worker in this role identify as the biggest issues they’d like tech to help them solve or tasks they’d like assistance with? How could workflows, jobs, and business processes be reimagined for the better from workers’ perspectives? What would a welcome technological aid or solution look like from their perspective?

A number of the participating customer service agents, for instance, flagged angry customers as one of the worst parts of their jobs. They had no advance notice of whether their next interaction would be with someone respectful or someone abusive to them over their employer’s mistakes, which were out of the agents’ control. When asked whether there were areas where they would welcome AI in their jobs, multiple agents suggested de-escalation technologies, or warning systems so they would at least know what they were about to face — both of which, agents were confident, would improve customer experience as well.

Without worker participation from the start, AI developers also lack important information about design and use. What common or uncommon occurrences in the workplace would cause this technology to fail or struggle? What does every experienced worker in this role know that outsiders would find difficult to identify or understand? Moreover, workers left out of the process may be less inclined to trust or adopt AI tools. [Nithya Sambasivan and Rajesh Veeraraghavan, “The Deskilling of Domain Expertise in AI Development,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22 (New York: Association for Computing Machinery, 2022), 1–14, https://doi.org/10.1145/3491102.3517578] In an alternative world where workers were included from the start, how much more effective could a given technology be? How much more quickly could it be launched at scale?

Not having the workers who will use the technology “in the room” means that projects get greenlit and products get designed in ways that are, on the whole, not worker-centric. Worker-centricity is one of many possible goals for a product team; without consistent, empowered advocates present, it is likely to be deprioritized relative to the priorities of senior leaders, designers, and engineers. [Sabrina Genz, Lutz Bellmann, and Britta Matthes, “Do German Works Councils Counter or Foster the Implementation of Digital Technologies?,” Jahrbücher für Nationalökonomie und Statistik 239, no. 3 (June 2019): 523–64, https://doi.org/10.1515/jbnst-2017-0160]

Many workplace AI systems (including several described in this research) also reflect and reinforce a managerial mindset (perhaps best described as “neo-Fordism” or “neo-Taylorism”) in which deskilling and strict control over workers are seen as the path to the highest profitability. The origins and drivers of this approach in AI development have not been accounted for in detail (for instance, did limits to AI capabilities shape this approach to workplace AI products, or did strong belief in this managerial approach shape a market that AI developers then filled?), but the impact on workers remains the same: they are treated as subjects in need of discipline and control, rather than as professionals with valuable expertise.

Alternative management approaches, used in sectors as varied as manufacturing and hospitality, encourage frontline workers to draw on their accumulated expertise and judgment to address problems and make improvements, and they afford workers the influence and decision rights needed to make their recommendations stick. [Alan G. Robinson and Dean M. Schroeder, “The Role of Front-Line Ideas in Lean Performance Improvement,” Quality Management Journal 16, no. 4 (2009): 27–40, https://doi.org/10.1080/10686967.2009.11918248; Jeffrey K. Liker, The Toyota Way: 14 Management Principles From the World’s Greatest Manufacturer (McGraw Hill Professional, 2003); Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production (CRC Press, 1988); Kayhan Tajeddini, Emma Martin, and Levent Altinay, “The Importance of Human-Related Factors on Service Innovation and Performance,” International Journal of Hospitality Management 85 (February 2020): 102431, https://doi.org/10.1016/j.ijhm.2019.102431] Treating workers as genuine experts and empowering them to participate in AI development and deployment offers opportunities for both workers and businesses to benefit. [Katherine C. Kellogg, Mark Sendak, and Suresh Balu, “AI on the Front Lines,” MIT Sloan Management Review, May 4, 2022, https://sloanreview.mit.edu/article/ai-on-the-front-lines/]