Step 1: Learn About the Guidelines

The Need for the Guidelines

Action is needed to guide AI’s impact on jobs

Artificial intelligence is poised to substantially affect the labor market and the nature of work around the globe.

  • Some job categories will shrink or disappear entirely, and new types of occupations will arise in their place
  • Wages will be affected, with AI changing the demand for various skills and the access workers have to jobs
  • The tasks workers perform at their jobs will change, with some of their previous work automated and other tasks assisted by new technologies
  • Job satisfaction and job quality will shift. Benefits will accrue to the workers with the highest control over how AI shows up in their jobs. Harms will occur for workers with minimal agency over workplace AI deployments

The magnitude and distribution of these effects are not fixed or pre-ordained. [Acemoglu, D. (Ed.). (2021). Redesigning AI: Work, Democracy, and Justice in the Age of Automation. Boston Review; Korinek, A., and Stiglitz, J. E. (2020). Steering technological progress. In NBER Conference on the Economics of AI.] Today, we have a profound opportunity to ensure that AI’s effects on the labor market and the future of work contribute to broadly shared prosperity.

In the best scenario, humanity could use AI to unlock opportunities to mitigate climate change, make medical treatments more affordable and effective, and usher in a new era of improved living standards and prosperity around the world. This outcome, however, will not be realized by default. [Acemoglu, D., and Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Public Affairs, New York.] It requires a concerted effort to bring it about. AI use poses numerous large-scale economic risks that are likely to materialize given our current path, including:

  • Consolidating wealth in the hands of a select few companies and countries
  • Reducing wages and undermining worker agency as larger numbers of workers compete for deskilled, lower-wage jobs
  • Allocating the most fulfilling tasks in some jobs to algorithms, leaving humans with the remaining drudgery
  • Highly disruptive spikes in unemployment or underemployment as workers start at the bottom rung in new fields, even if permanent mass unemployment does not arise in the medium term. (We use the Merriam-Webster definition of underemployment: “the condition in which people in a labor force are employed at less than full-time or regular jobs or at jobs inadequate with respect to their training or economic needs.”)

The Guidelines are tools for creating a better future

Partnership on AI’s (PAI) Shared Prosperity Guidelines are intended to equip interested stakeholders with the conceptual tools they need to steer AI in service of shared prosperity.

All stakeholders looking to ground their decisions, agendas, and interactions with each other in a systematic understanding of labor market opportunities and risks presented by AI systems can use these tools. This includes:

  • AI-creating organizations
  • AI-using organizations
  • Policymakers
  • Labor organizations and workers

The Origin of the Guidelines

This work comes from years of applied research and multidisciplinary input

A key output of PAI’s AI and Shared Prosperity Initiative, the Shared Prosperity Guidelines were developed under the close guidance of a multidisciplinary Steering Committee and draw on insights gained during two years of applied research.
Though the Guidelines reflect the inputs of many PAI Partners, they should not be read as representing the views of any particular organization or individual within the AI and Shared Prosperity Initiative’s Steering Committee or any specific PAI Partner.

Design of the Guidelines

We offer two tools for guiding AI’s impact on jobs
  1. A high-level Job Impact Assessment Tool with:
    • Signals of Opportunity indicating an AI system may support shared prosperity
    • Signals of Risk indicating an AI system may harm shared prosperity
  2. A collection of Stakeholder-Specific Recommendations for stakeholders able to help minimize the risks and maximize the opportunities of advancing shared prosperity with AI. In particular, they are written as Responsible Practices for AI-creating and AI-using organizations, and as Suggested Uses for policymakers, labor organizations, and workers.

These tools can guide choices about any AI system

PAI’s Shared Prosperity Guidelines are designed to apply to all AI systems, regardless of:

  • Industry (including manufacturing, retail/services, office work, and warehousing and logistics)
  • AI technology (including generative AI, autonomous robotics, etc.)
  • Use case (including decision-making or assistance, task completion, training, and supervision)

As a whole, the Guidelines are general purpose and applicable across all existing AI technologies and uses, though some sections may only apply to specific technologies or uses.

To apply these guidelines, stakeholders should:

  • For an AI system of interest, perform the analysis suggested in the Job Impact Assessment section, identifying which signals of opportunity and risk to shared prosperity are present.
  • Use the results of the Job Impact Assessment to inform your plans, choices, and actions related to the AI system in question, following our Stakeholder-Specific Recommendations. For AI-creating and AI-using organizations, these recommendations are Responsible Practices. For policymakers, unions, workers, and their advocates, these recommendations are Suggested Uses.

We look forward to testing the Guidelines and refining the use scenarios together with interested stakeholders. If you have suggestions or would like to contribute to this work, please get in touch.

Our approach focuses on AI’s impact on labor demand

In these Guidelines, we consider an AI system to be serving to advance the prosperity of a given group if it boosts the demand for labor of that group, since selling labor remains the primary source of income for the majority of people in the world. (We recognize that some communities advocate to advance shared prosperity in the age of AI through benefits redistribution mechanisms such as universal basic income. While a global benefits redistribution mechanism might be an important part of the solution, especially in the longer term, and we welcome research efforts and public debate on this topic, we left it outside the scope of the current version of the Guidelines.)

Instead, the Guidelines focus on governing the impact of AI on labor demand. We believe this approach will be necessary at least in the short to medium term, enabling communities to retain effective levers of influence over the pace, depth, and distribution of AI’s impacts on labor demand.

AI’s impacts on labor demand can manifest themselves as:

  • Changes in the availability of jobs for certain skill, demographic, or geographic groups (a group’s boundaries can be defined geographically, demographically, by skill type, or by another parameter of interest)
  • Changes in the quality of jobs affecting workers’ well-being (in other words, AI’s impact on labor demand can affect both incumbent workers and people interested in looking for work in the present or future)

In line with PAI’s framework for promoting workforce well-being in the AI-integrated workplace and other leading resources on high-quality jobs [International Labour Organization. (n.d.). Decent Work; US Department of Commerce and US Department of Labor. (n.d.). Good Jobs Principles; Institute for the Future of Work. (n.d.). The Good Work Charter.], we recognize multiple dimensions of job quality or workers’ well-being, namely:

  • Human rights
  • Financial well-being
  • Physical well-being
  • Emotional well-being
  • Intellectual well-being
  • Sense of meaning, community, and purpose

Thus, for the purposes of these Guidelines, we define AI’s impact on shared prosperity as the impact of AI use on the availability and quality of formal sector jobs across skill, demographic, or geographic groups. (The share of informal sector employment remains high in many low- and middle-income countries. The emphasis on formal sector jobs here should not be interpreted as treating the informal sector as out of scope of the Guidelines’ concern. The opposite is the case: if the introduction of an AI system in the economy results in a reduction in the availability of formal sector jobs, that reduction cannot be considered compensated by growth in the availability of informal sector jobs.)

In turn, the overall impact of AI on the availability and quality of jobs can be anticipated as the sum total of changes in the primary factors AI use is known to affect. [Klinova, K., and Korinek, A. (2021). AI and shared prosperity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 645-651); Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers; Partnership on AI. (2021). Redesigning AI for Shared Prosperity: An Agenda.] Those factors are:

  • Relative productivity of workers (versus machines or workers in other skill groups)
  • Labor’s share of organization revenue (the share of revenue spent on workers’ wages and benefits)
  • Task composition of jobs
  • Skill requirements of jobs
  • Geographic distribution of the demand for labor
  • Geographic distribution of the supply of labor (geographic distributions of labor demand and supply do not necessarily match for a variety of reasons, the most prominent of which are overly restrictive policies around labor migration; immigration barriers in many countries with rapidly aging populations create artificial scarcity of labor in those countries, massively inflating the incentives to invest in labor-saving technologies)
  • Market concentration
  • Job stability
  • Stress rates
  • Injury rates
  • Schedule predictability
  • Break time
  • Job intensity
  • Freedom to organize
  • Privacy
  • Fair and equitable treatment
  • Social relationships
  • Job autonomy
  • Challenge level of tasks
  • Satisfaction or pride in one’s work
  • Ability to develop skills needed for one’s career
  • Human involvement or recourse for managerial decisions (such as performance evaluation and promotion)
  • Human involvement or recourse in employment decisions (such as hiring and termination)

Anticipated effects on the above primary factors are the main focus of the risks and opportunities analysis tool provided in the Guidelines. Another important focus is the distribution of those effects. An AI system may bring benefits to one set of users and harms to another. Take, for example, an AI system used by managers to set and monitor performance targets for their reports. This system could potentially increase pride in one’s work for managers and raise rates of injury and stress for their direct reports.

When this dynamic prompts conflicting interests, we suggest giving higher consideration to the more vulnerable group with the least decision-making power in the situation, as these groups often bear the brunt of technological harms. [Negrón, W. (2021). Little Tech Is Coming for Workers.] By a similar logic, where we call for worker agency and participation, we suggest undertaking particular effort to include the workers most affected and/or with the least decision authority (for example, frontline workers, not just their supervisors).

Key Principles for Using the Guidelines

These application principles apply independently of who is using the Guidelines and in what specific scenario they are doing so.

Engage affected workers

Make sure to engage worker communities that stand to be affected by the introduction of an AI system in the Job Impact Assessment, as well as in the development of risk mitigation strategies. This includes, but is not limited to, engaging and affording agency to the workers who will be affected by the AI system and their representatives. (Workers who stand to be affected frequently include not only workers directly employed by the company introducing AI into its own operations, but a wider set of current or potential labor market participants. Hence it is important that not only incumbent workers are given the agency to participate in job impact assessment and risk mitigation strategy development.) Bringing in multidisciplinary experts will help in understanding the full spectrum and severity of the potential impact. (Workers may work with AI systems or have their work affected by them. In cases where one group of workers uses an AI system, for instance an AI performance evaluation tool to assess their direct reports, and another group is affected by that AI system’s use, in this example the direct reports, we suggest giving the highest consideration to the affected workers and/or the workers with the least decision-making power in the situation.)

Seeking shared prosperity doesn’t mean opposing profits

Some of the signals of risk to shared prosperity described in the Guidelines are actively sought by companies as profit-making opportunities. The Guidelines do not suggest that companies should stop seeking profits, just that they should do so responsibly.

Profit-generating activities do not necessarily have to harm workers and communities, but some of them do. The presence of signals of risk indicates that the AI system being assessed, while possibly capable of generating profit for a narrow set of beneficiaries, is likely to do so at the expense of shared prosperity, and thus might be undesirable from a societal benefit perspective. We encourage companies to follow the Guidelines, developing and using AI in ways that generate profit while also advancing shared prosperity.

Signals are indicators, not guarantees

Presence of a signal should be interpreted as an early indicator, not a guarantee, that shared prosperity will be advanced or harmed by a given AI system. Presence of opportunity or risk signals for an AI system being assessed is a necessary, but not sufficient, condition for shared prosperity to be advanced or harmed with the introduction of that AI system into the economy. (Many societal factors outside the direct control of AI-creating organizations play a role in determining which opportunities or risks end up being realized. Holding all other societal factors constant, the purpose of these Guidelines is to minimize the chance that shared prosperity-relevant outcomes are worsened, and maximize the chance that they are improved, as a result of choices by AI-creating and AI-using organizations and the inherent qualities of their technology.)

Signals should be considered comprehensively

Presence of a signal of risk does not automatically mean the AI system in question should not be developed or deployed. That said, an absence of any signals of opportunity does mean that a given AI system is highly unlikely to advance shared prosperity, and that whatever risks it presents to society are not justified.
Signals of opportunity do not “offset” signals of risk

Presence of signals of opportunity should not be interpreted as “offsetting” the presence of signals of risk. In recognition that benefits and harms are usually borne unevenly by different groups, the Guidelines strongly oppose the concept of a “net benefit” to shared prosperity, which is incompatible with a human rights-based approach. In alignment with the UN Guiding Principles on Business and Human Rights (UNGPs), a mitigation strategy should be developed for each risk identified, prioritizing the risks of the most severe impacts first. (The Guidelines use the UNGPs’ definition of severity: an impact, potential or actual, can be severe “by virtue of one or more of the following characteristics: its scale, scope or irremediability. Scale means the gravity of the impact on the human right(s). Scope means the number of individuals that are or could be affected. Irremediability means the ease or otherwise with which those impacted could be restored to their prior enjoyment of the right(s).”) Mitigation strategies can range from eliminating the risk or reducing the severity of potential impact to ensuring access to remedy or compensation for affected groups. If no effective mitigation strategy for a given risk is available, that should be considered a strong argument in favor of meaningful changes to the development, implementation, and use plans of the AI system, especially if it is expected to affect vulnerable groups.
Analysis of signals is not prescriptive

The analysis of signals of opportunity and risk is not prescriptive. Decisions around the development, implementation, and use of increasingly powerful AI systems should be made collectively, allowing for the participation of all affected stakeholders. We anticipate that two main uses of the signals analysis will include:

  • Informing stakeholders’ positions in preparation for dialogue around development, deployment, and regulation of AI systems, as well as appropriate risk mitigation strategies
  • Identifying key areas of potential impact of a given AI system which warrant deeper analysis (such as to illuminate their magnitude and distribution) [Korinek, A. (2022). How innovation affects labor markets: An impact assessment.] and further action