Step 1: Learn About the Guidelines
The Need for the Guidelines
Artificial intelligence is poised to substantially affect the labor market and the nature of work around the globe.
- Some job categories will shrink or disappear entirely, and new types of occupations will arise in their place
- Wages will be affected, with AI changing the demand for various skills and the access workers have to jobs
- The tasks workers perform at their jobs will change, with some of their previous work automated and other tasks assisted by new technologies
- Job satisfaction and job quality will shift: benefits will accrue to workers with the greatest control over how AI enters their jobs, while harms will fall on workers with minimal agency over workplace AI deployments
The magnitude and distribution of these effects are not fixed or pre-ordained. (Acemoglu, D. (Ed.). (2021). Redesigning AI: Work, democracy, and justice in the age of automation. Boston Review; Korinek, A., and Stiglitz, J.E. (2020, April). Steering technological progress. In NBER Conference on the Economics of AI.) Today, we have a profound opportunity to ensure that AI’s effects on the labor market and the future of work contribute to broadly shared prosperity.
In the best scenario, humanity could use AI to unlock opportunities to mitigate climate change, make medical treatments more affordable and effective, and usher in a new era of improved living standards and prosperity around the world. This outcome, however, will not be realized by default. (Acemoglu, D., and Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Public Affairs, New York.) It requires a concerted effort to bring it about. AI use poses numerous large-scale economic risks that are likely to materialize given our current path, including:
- Consolidating wealth in the hands of a select few companies and countries
- Reducing wages and undermining worker agency as larger numbers of workers compete for deskilled, lower-wage jobs
- Allocating the most fulfilling tasks in some jobs to algorithms, leaving humans with the remaining drudgery
- Highly disruptive spikes in unemployment or underemployment as workers start at the bottom rung in new fields, even if permanent mass unemployment does not arise in the medium term (we use the Merriam-Webster definition of underemployment: “the condition in which people in a labor force are employed at less than full-time or regular jobs or at jobs inadequate with respect to their training or economic needs”)
Partnership on AI’s (PAI) Shared Prosperity Guidelines are intended to equip interested stakeholders with the conceptual tools they need to steer AI in service of shared prosperity.
All stakeholders looking to ground their decisions, agendas, and interactions with each other in a systematic understanding of labor market opportunities and risks presented by AI systems can use these tools. This includes:
- AI-creating organizations
- AI-using organizations
- Policymakers
- Labor organizations and workers
The Origin of the Guidelines
- June 2023
Guidelines for AI and Shared Prosperity are released for testing and adoption by the AI industry, labor organizations, and policymakers
- March 2023—June 2023
Guidelines for AI and Shared Prosperity are vetted with labor, industry, and policymakers
- October 2022—March 2023
Guidelines for AI and Shared Prosperity are iterated on by the Steering Committee
- November 2022
Responsible Sourcing of Data Enrichment Services resource library is released, accompanied by a case study of Responsible Data Enrichment Sourcing Practices Implementation at DeepMind
- September 2022
AI and Job Quality: Insights from Frontline Workers report is released, summarizing the findings from primary research with workers around the world experiencing AI introduction in their workplace
- April 2022
“Governing AI to Advance Shared Prosperity” published in The Oxford Handbook of AI Governance to map the policy levers of governing AI’s economic trajectory
- June 2021—June 2022
PAI designs and conducts primary research with frontline workers in the US, India, and Sub-Saharan Africa to understand the job quality impacts they are experiencing as a result of the introduction of AI in their workplaces
- June 2022
“How innovation affects labor markets: An impact assessment,” a working paper supported by the AI and Shared Prosperity Initiative, is published, outlining a 5-step framework for evaluating the impact of introducing a new technology on labor demand
- May 2021
Prototype framework for assessing AI’s impact on labor demand is published in the proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
- May 2021
PAI partners with the Boston Review to convene a group of prominent thought leaders to advance the public debate on AI’s economic and labor impacts. The outputs are published as a dedicated Boston Review Forum issue, “Redesigning AI: Work, Democracy, and Justice in the Age of Automation,” with a lead essay by Daron Acemoglu and responses from Kate Crawford, Erik Brynjolfsson, Lama Nachman, Rob Reich, and others
- May 2021
“Redesigning AI for Shared Prosperity: An Agenda” is released, outlining the Initiative’s plans for research and action developed through the Steering Committee’s deliberations
- September 2020—March 2021
The AI and Shared Prosperity Initiative’s Steering Committee holds deliberations to shape the Initiative’s Agenda
- September 2020
PAI convenes the AI and Shared Prosperity Initiative’s Steering Committee consisting of civil society and labor leaders, senior technologists, and leading academic economists
- June 2020
PAI announces the AI and Shared Prosperity Initiative and publishes a call to nominate leaders for its Steering Committee
Design of the Guidelines
The Guidelines consist of two components:
- A high-level Job Impact Assessment Tool with:
- Signals of Opportunity indicating an AI system may support shared prosperity
- Signals of Risk indicating an AI system may harm shared prosperity
- A collection of Stakeholder-Specific Recommendations: Responsible Practices and Suggested Uses for stakeholders able to help minimize the risks and maximize the opportunities to advance shared prosperity with AI. In particular, they are written for:
- AI-creating organizations
- AI-using organizations
- Policymakers
- Labor organizations and workers
PAI’s Shared Prosperity Guidelines are designed to apply to all AI systems, regardless of:
- Industry (including manufacturing, retail/services, office work, and warehousing and logistics)
- AI technology (including generative AI, autonomous robotics, etc.)
- Use case (including decision-making or assistance, task completion, training, and supervision)
As a whole, the Guidelines are general purpose and applicable across all existing AI technologies and uses, though some sections may only apply to specific technologies or uses.
To apply the Guidelines, stakeholders should take two steps (a brief illustrative sketch follows this list):
- For an AI system of interest, perform the analysis suggested in the Job Impact Assessment section, identifying which signals of opportunity and risk to shared prosperity are present.
- Use the results of the Job Impact Assessment to inform your plans, choices, and actions related to the AI system in question, following our Stakeholder-Specific Recommendations. For AI-creating and AI-using organizations, these recommendations are Responsible Practices. For policymakers, unions, workers, and their advocates, these recommendations are Suggested Uses.
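To make the two steps above concrete, here is a minimal, purely illustrative sketch of how an assessment might be recorded and routed to the relevant recommendations. The Guidelines do not prescribe any software tooling; all class, field, and example names below (JobImpactAssessment, signals_of_opportunity, the warehouse scheduling example, and so on) are hypothetical.

```python
# Illustrative sketch only: the Guidelines do not prescribe a software tool.
# All names below are hypothetical, mirroring the two-step process described
# above (assess signals, then act on the results).

from dataclasses import dataclass, field
from enum import Enum


class Stakeholder(Enum):
    AI_CREATING_ORG = "AI-creating organization"
    AI_USING_ORG = "AI-using organization"
    POLICYMAKER = "Policymaker"
    LABOR = "Labor organization or worker"


@dataclass
class JobImpactAssessment:
    """Record of a Job Impact Assessment for one AI system."""
    system_name: str
    signals_of_opportunity: list[str] = field(default_factory=list)
    signals_of_risk: list[str] = field(default_factory=list)

    def recommendations_for(self, stakeholder: Stakeholder) -> str:
        # Step 2: route the results to the relevant stakeholder-specific guidance.
        if stakeholder in (Stakeholder.AI_CREATING_ORG, Stakeholder.AI_USING_ORG):
            return "Consult the Responsible Practices for each signal identified."
        return "Consult the Suggested Uses for each signal identified."


# Step 1: record which signals are present for a system of interest.
assessment = JobImpactAssessment(
    system_name="warehouse scheduling assistant",  # hypothetical example
    signals_of_opportunity=["creates new tasks for existing workers"],
    signals_of_risk=["increases work intensity", "reduces schedule predictability"],
)
print(assessment.recommendations_for(Stakeholder.AI_USING_ORG))
```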
We look forward to testing the Guidelines and refining the use scenarios together with interested stakeholders. If you have suggestions or would like to contribute to this work, please get in touch.
In these Guidelines, we consider an AI system to advance the prosperity of a given group if it boosts the demand for that group’s labor, since selling labor remains the primary source of income for the majority of people in the world. We recognize that some communities advocate advancing shared prosperity in the age of AI through benefit redistribution mechanisms such as universal basic income. While a global redistribution mechanism might be an important part of the solution (especially in the longer term), and we welcome research and public debate on this topic, we have left it outside the scope of the current version of the Guidelines.
Instead, the Guidelines focus on governing the impact of AI on labor demand. We believe this approach is necessary at least in the short to medium term, as it gives communities effective levers of influence over the pace, depth, and distribution of AI’s impacts on labor demand.
AI’s impacts on labor demand can manifest themselves as:
- Changes in the availability of jobs for certain skill, demographic, or geographic groups (a group’s boundaries can be defined geographically, demographically, by skill type, or by another parameter of interest)
- Changes in the quality of jobs, affecting workers’ well-being (in other words, AI’s impact on labor demand can affect both incumbent workers and people looking for work in the present or future)
In line with PAI’s framework for promoting workforce well-being in the AI-integrated workplace and other leading resources on high-quality jobs (International Labour Organization. (n.d.). Decent work. https://tinyurl.com/yur776yd; US Department of Commerce and US Department of Labor. (n.d.). Good Jobs Principles. https://tinyurl.com/mtbpemkn; Institute for the Future of Work. (n.d.). The Good Work Charter. https://tinyurl.com/ycxtaax4), we recognize multiple dimensions of job quality or workers’ well-being, namely:
- Human rights
- Financial well-being
- Physical well-being
- Emotional well-being
- Intellectual well-being
- Sense of meaning, community, and purpose
Thus, for the purposes of these Guidelines, we define AI’s impact on shared prosperity as the impact of AI use on the availability and quality of formal sector jobs across skill, demographic, or geographic groups. (The share of informal sector employment remains high in many low- and middle-income countries. The emphasis on formal sector jobs here should not be interpreted as treating the informal sector as out of scope of PAI’s Shared Prosperity Guidelines. The opposite is the case: if the introduction of an AI system in the economy results in a reduction in the availability of formal sector jobs, that reduction cannot be considered compensated by growth in the availability of jobs in the informal sector.)
In turn, the overall impact of AI on the availability and quality of jobs can be anticipated as the sum total of changes in the primary factors AI use is known to affect (Klinova, K., and Korinek, A. (2021). AI and shared prosperity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 645-651); Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611; Partnership on AI. (2021). Redesigning AI for Shared Prosperity: An Agenda. https://partnershiponai.org/paper/redesigning-ai-agenda/). Those factors, listed below and illustrated in the sketch that follows the list, are:
- Relative productivity of workers (versus machines or workers in other skill groups)
- Labor’s share of organization revenue (the share of revenue spent on workers’ wages and benefits)
- Task composition of jobs
- Skill requirements of jobs
- Geographic distribution of the demand for labor
- Geographic distribution of the supply of labor (geographic distributions of labor demand and supply do not necessarily match for a variety of reasons, the most prominent of which are overly restrictive policies around labor migration; immigration barriers in many countries with rapidly aging populations create artificial scarcity of labor in those countries, massively inflating the incentives to invest in labor-saving technologies; for more details, see https://lampforum.org/2023/03/02/choose-people/)
- Market concentration
- Job stability
- Stress rates
- Injury rates
- Schedule predictability
- Break time
- Job intensity
- Freedom to organize
- Privacy
- Fair and equitable treatment
- Social relationships
- Job autonomy
- Challenge level of tasks
- Satisfaction or pride in one’s work
- Ability to develop skills needed for one’s career
- Human involvement or recourse for managerial decisions (such as performance evaluation and promotion)
- Human involvement or recourse in employment decisions (such as hiring and termination)
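As a purely illustrative aid, the sketch below shows one way the anticipated direction of change in these primary factors could be tallied for each affected worker group. The Guidelines do not define a scoring scheme; the -1/0/+1 scale, the GroupImpact structure, and the example values are assumptions made here for illustration, and a real assessment would also weigh the magnitude and distribution of each change, not just its direction.

```python
# Illustrative sketch only, not part of the Guidelines: a minimal way to tally
# anticipated changes in the primary factors for each affected worker group.
# Factor names follow the list above; the -1/0/+1 scale is an assumption made
# here for illustration.

from dataclasses import dataclass

PRIMARY_FACTORS = [
    "relative productivity of workers",
    "labor share of revenue",
    "task composition of jobs",
    "skill requirements of jobs",
    "geographic distribution of labor demand",
    "geographic distribution of labor supply",
    "market concentration",
    "job stability",
    "stress rates",
    "injury rates",
    "schedule predictability",
    "break time",
    "job intensity",
    "freedom to organize",
    "privacy",
    "fair and equitable treatment",
    "social relationships",
    "job autonomy",
    "challenge level of tasks",
    "satisfaction or pride in one's work",
    "ability to develop skills needed for one's career",
    "human involvement or recourse for managerial decisions",
    "human involvement or recourse in employment decisions",
]


@dataclass
class GroupImpact:
    """Anticipated direction of change per factor for one worker group."""
    group: str               # e.g. a skill, demographic, or geographic group
    changes: dict[str, int]  # factor -> -1 (worsens), 0 (neutral), +1 (improves)

    def net_direction(self) -> int:
        # Rough sum of anticipated changes across all primary factors.
        return sum(self.changes.get(factor, 0) for factor in PRIMARY_FACTORS)


frontline = GroupImpact(
    group="frontline warehouse workers",  # hypothetical example
    changes={"job intensity": -1, "injury rates": -1, "schedule predictability": -1},
)
print(frontline.group, "net direction:", frontline.net_direction())
```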
Anticipated effects on the above primary factors are the main focus of the risks and opportunities analysis tool provided in the Guidelines. Another important focus is the distribution of those effects. An AI system may bring benefits to one set of users and harms to another. Take, for example, an AI system used by managers to set and monitor performance targets for their direct reports. This system could potentially increase pride in one’s work for managers while raising rates of injury and stress for their direct reports.
When this dynamic creates conflicting interests, we suggest giving greater consideration to the more vulnerable group with the least decision-making power in the situation, as these groups often bear the brunt of technological harms (Negrón, W. (2021). Little Tech is Coming for Workers. Coworker.org. https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf). By a similar logic, where we call for worker agency and participation, we suggest making a particular effort to include the workers most affected and/or with the least decision authority (for example, frontline workers, not just their supervisors).
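The following minimal sketch illustrates the prioritization suggested above: when anticipated impacts conflict across groups, the group with the least decision-making power receives the highest consideration. The AffectedGroup structure, the decision_power field, and the example values are hypothetical; the Guidelines do not specify how decision-making power should be measured.

```python
# Illustrative sketch only: a simple tie-break rule reflecting the suggestion
# above to give greater consideration to the group with the least
# decision-making power when anticipated impacts conflict.

from dataclasses import dataclass


@dataclass
class AffectedGroup:
    name: str
    decision_power: int          # lower = less say over how the AI system is deployed
    anticipated_net_impact: int  # e.g. the net direction from a Job Impact Assessment


def priority_group(groups: list[AffectedGroup]) -> AffectedGroup:
    """Return the group whose interests should carry the most weight."""
    return min(groups, key=lambda g: g.decision_power)


managers = AffectedGroup("managers", decision_power=3, anticipated_net_impact=1)
reports = AffectedGroup("direct reports", decision_power=1, anticipated_net_impact=-2)
print(priority_group([managers, reports]).name)  # -> "direct reports"
```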
Key Principles for Using the Guidelines
These application principles apply independently of who is using the Guidelines and in what specific scenario they are doing so.
Make sure to engage the worker communities that stand to be affected by the introduction of an AI system in the Job Impact Assessment, as well as in the development of risk mitigation strategies. This includes, but is not limited to, engaging and affording agency to workers who will be affected by the AI system and their representatives. (The workers who stand to be affected by the introduction of an AI system frequently include not only workers directly employed by the company introducing AI in its own operations, but a wider set of current or potential labor market participants. Hence it is important that not only incumbent workers are given the agency to participate in job impact assessment and risk mitigation strategy development.) Bringing in multi-disciplinary experts will help in understanding the full spectrum and severity of the potential impact. Workers may work with AI systems or have their work affected by them. In cases where one group of workers uses an AI system (for instance, an AI performance evaluation tool to assess their direct reports) and another group is affected by that AI system’s use (in this example, the direct reports), we suggest giving the highest consideration to the affected workers and/or the workers with the least decision-making power in the situation (in this example, the direct reports rather than the supervisors).
Some of the signals of risk to shared prosperity described in the Guidelines are actively sought by companies as profit-making opportunities. The Guidelines do not suggest that companies should stop seeking profits, just that they should do so responsibly.
Profit-generating activities do not necessarily have to harm workers and communities, but some of them do. The presence of signals of risk indicates that an AI system being assessed, while possibly capable of generating profit for a narrow set of beneficiaries, is likely to do so at the expense of shared prosperity, and thus might be undesirable from a societal benefit perspective. We encourage companies to follow the Guidelines, developing and using AI in ways that generate profit while also advancing shared prosperity.
Presence of a signal should be interpreted as an early indicator, not a guarantee, that shared prosperity will be advanced or harmed by a given AI system. Presence of opportunity or risk signals for an AI system being assessed is a necessary, but not sufficient, condition for shared prosperity to be advanced or harmed by the introduction of that AI system into the economy. Many societal factors outside of the direct control of AI-creating organizations play a role in determining which opportunities or risks end up being realized. Holding all other societal factors constant, the purpose of these Guidelines is to minimize the chance that shared prosperity-relevant outcomes are worsened, and maximize the chance that they are improved, as a result of the choices of AI-creating and AI-using organizations and the inherent qualities of their technology.
The analysis of signals of opportunity and risk is not prescriptive. Decisions around the development, implementation, and use of increasingly powerful AI systems should be made collectively, allowing for the participation of all affected stakeholders. We anticipate that two main uses of the signals analysis will include:
- Informing stakeholders’ positions in preparation for dialogue around development, deployment, and regulation of AI systems, as well as appropriate risk mitigation strategies
- Identifying key areas of potential impact of a given AI system that warrant deeper analysis (for example, to illuminate their magnitude and distribution) and further action (Korinek, A. (2022). How innovation affects labor markets: An impact assessment.)