Opportunities for Impact
Actors across the AI investment, creation, deployment, use, and regulation spectrum have opportunities to make decisions that center workers’ voices and protect their well-being. As shown above, the benefits of these decisions need not be zero-sum: there are paths forward that can benefit both workers and their employers. Specific recommendations and opportunities for impact are outlined by stakeholder group below. While the recommendations are organized by the audience best able to act on them, change and accountability also rely on the relationships between different actors (for instance, the complicated ways in which the actions of both AI creators and AI implementers shape the workplace AI products available for purchase). In practice, these interactions may have allowed some decision-makers to evade culpability for their actions; these relationships and dynamics should therefore also be considered in implementing these recommendations. Where actors attempt to avoid responsibility for harms or negative effects, others must take care to hold them to account.
Stakeholder Group 1: AI-implementing companies
Employers that choose to use AI in the workplace have an obligation to ensure it does not decrease their employees’ well-being. They also have the highest degree of control in ensuring this outcome.Zeynep Ton, “The Good Jobs Solution,” Harvard Business Review, 2017, 32. https://goodjobsinstitute.org/wp-content/uploads/2018/03/Good-Jobs-Solution-Full-Report.pdf While employers might not directly create the AI-enabled workplace products on the market, they can choose which products to use (or choose to use none at all) and set the contexts and conditions for their use.Abigail Gilbert et al., “Case for Importance: Understanding the Impacts of Technology Adoption on ‘Good Work’” (Institute for the Future of Work, May 2022), https://uploads-ssl.webflow.com/5f57d40eb1c2ef22d8a8ca7e/62a72d3439edd66ed6f79654_IFOW_Case%20for%20Importance.pdf Employers determine when AI is used (e.g., in core or non-core tasks) and how (e.g., as a decision-support tool with a human worker given the ultimate say or as a final decision-making tool). As shown above, this set of decisions has profound influence over how workers experience workplace AI, even in cases where employers are using similar AI products.
| Context | Opportunities for impact |
| --- | --- |
| Values and governance | Commit to making worker-centric/worker-friendly AI that increases access to better jobs, especially for the most vulnerable and marginalized workers. |
| AI product purchasing | Take workers and their institutionalized representatives seriously as experts in their own roles and incorporate their input into purchasing decisions. |
| AI product implementation | Integrate frontline workers’ and other end-users’ perspectives into the implementation of AI (e.g., workflow and performance targets).<br>Give humans working directly with AI systems the final judgment on AI-supported decisions, especially in situations where those decisions could affect workers’ performance evaluations and lives outside of work.<br>Foster and seek out representation from institutionalized forms of worker organization, ensuring that workers can offer their authentic views without fear of retribution or retaliation. |
Stakeholder Group 2: AI-creating companies
Core technologies underlying workplace AI tools are created by an increasingly concentrated group of companies.Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf These companies may use the technologies internally as well as sell them to other businesses. Values and practices that center the participation and well-being of worker end-users at these companies have the potential to transform job quality around the globe. These values and practices are all the more important in the market for workplace AI products, where company leaders and managers are the purchasers and the users may be some of the lowest-paid and least powerful employees in the company. This market structure means a focus on customers is not necessarily a focus on worker end-users (and vice versa).
These divergences are likely particularly pronounced in companies with strong command-and-control approaches to integrating AI into their workplaces, as outlined above. Not coincidentally, these companies often employ large pools of low-wage workers most vulnerable to AI’s negative effects. Creating better feedback loops and genuinely centering workers will often require these companies to seek out the participation of workers and their representatives beyond their own organizations. There are, however, areas of alignment between the needs and preferences of workers and the incentives of business leaders and managers (as discussed in more detail in Theme 3). While not all of the applications sought by company leaders and managers may be endorsed by their workers, focusing on the overlap adds an additional constituency in support of particular products: the workers/end-users.
| Context | Opportunities for impact |
| --- | --- |
| Values and governance | Commit to making worker-centric/worker-friendly AI that increases access to better jobs — especially for the most vulnerable and marginalized workers — by measuring workplace AI products’ impacts on job availability, wages, and job quality, and working to eliminate or mitigate negative impacts.<br>Include workers as participants and key stakeholders in creating any company’s AI ethics/responsible AI principles.Julian Posada, “The Future of Work Is Here: Toward a Comprehensive Approach to Artificial Intelligence and Labour,” Ethics of AI in Context, 2020, http://arxiv.org/abs/2007.05843<br>Recruit staff of diverse backgrounds to AI development teams and actively work to retain them after recruitment.Jeffrey Brown, “The Role of Attrition in AI’s ‘Diversity Problem’” (Partnership on AI, April 2021), https://partnershiponai.org//wp-content/uploads/dlm_uploads/2022/04/PAI_researchpaper_aftertheoffer.pdf While representation on its own is not a solution, the relative lack of diversity on AI product teams can create blind spots that more diverse teams could mitigate.Tina M Park, “Making AI Inclusive: 4 Guiding Principles for Ethical Engagement” (Partnership on AI, July 2022), https://partnershiponai.org//wp-content/uploads/dlm_uploads/2022/07/PAI_whitepaper_making-ai-inclusive.pdf |
| AI product origination | Incorporate workers’ and other end-users’ perspectives from the beginning of the product origination process. That is, work forward from problems, challenges, and opportunities identified by frontline and other workers toward products, rather than shoehorning research progress into workplace products and routines. Collaborate with workers’ institutionalized representatives where possible.<br>“Red-team” potential uses of workplace AI products from origination through major update cycles. Without intentional focus, developer and product teams may not identify the potential for misuse or harm.Fabio Urbina et al., “Dual Use of Artificial-Intelligence-Powered Drug Discovery,” Nature Machine Intelligence 4, no. 3 (March 2022): 189–91, https://doi.org/10.1038/s42256-022-00465-9 Eliminate or mitigate identified opportunities for uses harmful to workers, especially where technologies may be sold and deployed in contexts with fewer worker protections than those in which they were developed. Responsible red-teaming and harm mitigation may require companies not to pursue product ideas whose harms cannot be mitigated. Particular attention must be paid to the diversity and heterogeneity of use contexts, including ones where potential dimensions of marginalization and inequality (e.g., gender, class, age, ethnicity, race, religion, sexuality, disability status) may differ from the cultural and social context of the developing company or team, and where existing power imbalances limit the opportunities to reject, restrict, or limit use.Aarathi Krishnan et al., “Decolonial AI Manyfesto,” accessed July 24, 2022, https://manyfesto.ai/<br>Collaborate with workers to identify areas where they would welcome assistance in completing their work through augmenting AI or automation of non-core tasks, drawing upon the complementarity of humans and AI.Lama Nachman, “Beyond the Automation-Only Approach,” in Redesigning AI, Boston Review (MIT Press, 2021), https://bostonreview.net/forum_response/beyond-the-automation-only-approach/<br>Foster and seek out institutionalized representation of workers, ensuring that workers can offer their authentic views without fear of retribution or retaliation.<br>When seeking to include the perspectives of workers, recognize that workers from different backgrounds and demographic categories may experience workplaces and AI technologies in different ways. Seek broad, representative participation and feedback, and work to ensure workers of all backgrounds feel comfortable and empowered when participating. |
| AI product development and updating | Integrate frontline workers’ and other end-users’ perspectives into the implementation of AI (e.g., workflow and performance targets).<br>Take workers seriously as experts in their own roles and include them in product development and future update cycles. Create opportunities for their empowered participation as subject matter experts, not just as end-user testers. |
Stakeholder Group 3: Workers, unions, and worker organizers
Workers, and the organizations and unions that represent them, can shape AI’s impacts on their workplaces through contract negotiations and other mechanisms for influencing corporate policy, as well as through ongoing input into purchase and implementation decisions. Unfortunately, in many countries it is not common for employers to invite this participation, and the ways AI technology shapes job quality and worker well-being can be obscured. Education and training programs run by unions and worker organizations can help workers understand the functions and roles played by AI products and equip workers to participate in decisions to purchase and implement AI in their workplaces.
| Context | Opportunities for impact |
| --- | --- |
| Unions and worker organizations | Foreground worker voice in the development and implementation of AI (and other technology) as a plank of contract negotiations and other mechanisms to influence corporate policies.Christina Colclough, “Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table,” in Digital Work in the Planetary Market, International Development Research Centre Series (MIT Press, 2022), https://idl-bnc-idrc.dspacedirect.org/bitstream/handle/10625/61034/IDL-61034.pdf<br>Train members and organizers in relevant technologies and their benefits and drawbacks, spotlighting AI technology and related issues (e.g., data rights) as a major influence on working environments.Christina Colclough, “When Algorithms Hire and Fire,” International Union Rights 25, no. 3 (2018): 6–7. https://muse.jhu.edu/article/838277/summary |
| Workers | Actively seek to participate in workplace AI purchasing and implementation decisions.<br>Ask for disclosure and transparency about the technologies being used, the data being collected, how that data is used, and for what purpose.<br>In workplaces with cultures of including workers in management decisions, offer input on areas where AI technology solutions would be welcomed and suggestions about ideal implementation.<br>Seek out worker organizations and unions operating in the same sector and geography that are undertaking efforts on these issues. |
Stakeholder Group 4: Policymakers
Through laws and regulations concerning both technology and labor, government lawmakers and regulators shape the environments in which AI products are developed, sold, and implemented, and thus shape the technologies themselves.Brishen Rogers, “The Law and Political Economy of Workplace Technological Change,” Harvard Civil Rights-Civil Liberties Law Review 55 (2020): 531 As discussed above, there are and will continue to be instances where the incentives of AI-creating and -implementing companies strongly diverge from the interests of their workers. In such instances, government action will be required to ensure workers’ livelihoods and well-being; as the historical record indicates, few businesses will voluntarily shoulder the whole of these changes. Compounding this, lack of worker voice and power often comes down to lack of worker protection (e.g., for organizing or for ensuring correct worker classification). In some cases, AI technology further enables employers to exploit these power imbalances and policy or enforcement gaps.Wilneida Negrón, “Little Tech Is Coming for Workers” (Coworker.org, 2021), https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf Strong regulation and enforcement, including of existing laws and policies, are all the more critical in these situations.Jeremias Adams-Prassl, “What If Your Boss Was an Algorithm? Economic Incentives, Legal Challenges, and the Rise of Artificial Intelligence at Work,” Comparative Labor Law & Policy Journal 41 (2019): 123
The heavily concentrated nature of the global AI research and workplace product development industry means that many workplace AI technologies are developed in and sold from the United States and China and then implemented in other regulatory environments.Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf While the fractured, global nature of AI’s impacts on workers impedes concerted efforts to protect workers, divergent regulatory environments offer opportunities for the experimentation and sharing of best practices in line with local norms and values. Conversely, countries with less economic power or enforcement capacity may find themselves in the position of reacting to harmful technologies created at or implemented from a distance; these situations require careful consideration and differentiated responses.
Much of the African continent, for instance, is both less well-placed to reap the economic benefits of AI (due to a comparative lack of telecom, computing, and other infrastructure, as well as a comparatively small skilled AI workforce), and more susceptible to potential workforce and labor market harms from AI use inside and outside the region (due to a comparative absence of protective regulations targeting AI use and impacts and weaker enforcement capabilities for labor protections). While a number of countries have been making recent strides on these factors, they are beginning from less advantageous starting points and starting at later dates than many high-income countries and regions.Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/ Proactively investing in AI workforce development and supporting infrastructure opens up the possibility of more “home grown” solutions responsive to local needs and values, rather than the status quo importation of technology from abroad that may undercut local social goals.Fekitamoeloa ‘Utoikamanu, “Closing the Technology Gap in Least Developed Countries,” United Nations (United Nations), accessed July 25, 2022, https://www.un.org/en/chronicle/article/closing-technology-gap-least-developed-countries
| Context | Opportunities for impact |
| --- | --- |
| Worker voice | Safeguard worker organizing on working conditions (e.g., tech introductions and implementations) and unionization through additional legislation and enforcement as needed.<br>Give workers the right to know about technologies used in their workplace, the data being collected on them, and the intended uses and impacts of the technology and data.Annette Bernhardt, Lisa Kresge, and Reem Suleiman, “Data and Algorithms at Work: The Case for Worker” (UC Berkeley Labor Center, November 2021), https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf |
| Worker protection | Where possible, regulate and enforce protections from known harms to workers caused by AI through existing legislation and agencies.<br>Create new, targeted legislation and regulations to address gaps in worker protection, either as standalone provisions focused on workersAllison Levitsky, “California Might Require Employers to Disclose Workplace Surveillance,” Protocol, April 21, 2022, https://www.protocol.com/bulletins/ab-1651-california-workplace-surveillance or as a part of broader efforts to regulate AI technologies.“The EU Artificial Intelligence Act,” The AI Act, September 7, 2021, https://artificialintelligenceact.eu/<br>Protect worker organizing for improved working conditions (e.g., tech introductions and implementations) and unionization. |
| Tax policy | Identify opportunities to correct the balance of tax burden between labor and capital, which shapes the conditions for when and how employers choose to use workplace AI technologies,Daron Acemoglu, Andrea Manera, and Pascual Restrepo, “Does the US Tax Code Favor Automation?,” Working Paper, Working Paper Series (National Bureau of Economic Research, April 2020), https://doi.org/10.3386/w27052 as well as workers’ influence, leverage, or voice in workplaces. |
| Investment regulation | Require inclusion of relevant worker impact and human capital measurements in standard reporting and disclosure metrics. |
| Research grants and proposals | Require assessments of anticipated impacts on job availability and quality in government AI research grants.Emmanuel Moss et al., “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest” (Data and Society, June 2021), https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf<br>Solicit ideas and prototypes of worker-friendly/worker-complementing AI technology (for instance, through RFPs or Grand Challenges) and fund their development with public research and development grants. |
| Low- and middle-income country (LMIC) responses | Create multi-country and multi-stakeholder collaborations across LMICs facing similar challenges, and reform existing multistakeholder groups to provide more influence to the least powerful and most vulnerable participating groups. While perspectives from representatives of LMICs are included in some existing global multistakeholder efforts, the structure of these groups and their embedded power imbalances mean participation from less powerful actors may function as a “box ticking” exercise rather than as a true and influential representation of their specific needs.Kofi Yeboah, “Artificial Intelligence in Sub-Saharan Africa: Ensuring Inclusivity.” (Paradigm Initiative, December 2021), https://paradigmhq.org/report/artificial-intelligence-in-sub-saharan-africa-ensuring-inclusivity/ The creation of collaborative groups facing similar challenges would enable them to work together on identifying specific needs, as well as to potentially take collective or coordinated action in addressing them.<br>Invest in local AI workforces and infrastructure to support the development of workplace AI technologies that address local needs in line with local values. |
Stakeholder Group 5: Investors
Private investment in AI technologies doubled between 2020 and 2021, directing ever higher amounts to a concentrated group of companies.Daniel Zhang et al., “The AI Index 2022 Annual Report” (AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022), https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf While angel and venture capital funders have not traditionally focused on the ESG (Environmental, Social, and Governance) impacts of their investments, the push for greater investor responsibility for climate change and sustainability impacts marks a shift in this attitude. Large institutional investors, similarly, are beginning to articulate an investment thesis of “stakeholder capitalism”Business Roundtable, “Statement on the Purpose of a Corporation,” July 2021, https://s3.amazonaws.com/brt.org/BRT-StatementonthePurposeofaCorporationJuly2021.pdf inclusive of companies’ treatment of workers.Larry Fink, “Larry Fink’s Annual 2022 Letter to CEOs,” accessed May 27, 2022, https://www.blackrock.com/corporate/investor-relations/larry-fink-ceo-letter In the United States, the Securities and Exchange Commission (SEC), which regulates securities markets, has proposed additional workforce disclosures related to the treatment of workers, arguing that they constitute material information for investors.Katanga Johnson, “U.S. SEC Chair Provides More Detail on New Disclosure Rules, Treasury Market Reform | Reuters,” https://www.reuters.com/business/sustainable-business/sec-considers-disclosure-mandate-range-climate-metrics-2021-06-23/ As AI is increasingly adopted by companies, investors can influence its path by understanding and accounting for both the downside risks posed by practices harmful to workers and the potential value created by worker-friendly technologies and practices.
| Context | Opportunities for impact |
| --- | --- |
| Investors | Include job availability and quality impacts of AI technology in ESG impact measurements for AI-creating and AI-implementing companies.<br>Offer and support shareholder proposals to increase workers’ voice and well-being.“Your Guide to Amazon’s 2022 Shareholder Event,” United for Respect, accessed May 27, 2022, https://united4respect.org/amazon-shareholders/<br>Request anticipated impacts on workers when evaluating proposals and pitches for workplace AI products and companies.<br>Work with institutionalized forms of worker representation to solicit authentic, unencumbered perspectives of workers to incorporate into ESG metrics, stakeholder capitalism initiatives, and other efforts intended to increase worker well-being. |