Learn About the Guidelines

How to Apply These Guidelines

The following application principles apply regardless of who is using the Guidelines and in what specific scenario:

Make sure to engage communities that stand to be affected

Make sure to engage communities that stand to be affected by the introduction of an AI system, both in the job impact assessment and in the development of risk mitigation strategies. This includes, but is not limited to, engaging and affording agency to workers who will be affected by the AI system, as well as their representatives. Bringing in multidisciplinary experts will help capture the full spectrum and severity of the potential impact.

Presence of a signal should be interpreted as an early indicator

Presence of a signal is not a guarantee that shared prosperity will be advanced or harmed by a given AI system. The presence of at least some opportunity (or risk) signals for an AI system under assessment is a necessary, but not sufficient, condition for shared prosperity to be advanced (or harmed) by the introduction of that system into the economy. Many societal factors outside the direct control of AI-developing organizations play a role in determining which opportunities (or risks) end up being realized. The purpose of these Guidelines is to maximize the chance that shared prosperity-relevant outcomes are improved, and to minimize the chance that they are worsened, due to factors inherent to the technology itself and to the decisions and choices of AI-developing and AI-using organizations, holding all other societal factors constant.
Signals of opportunity and risk should be considered comprehensively

Presence of a signal of risk does not automatically mean the AI system in question should not be developed or deployed. That said, an absence of any signals of opportunity does mean that a given AI system is highly unlikely to advance shared prosperity, and whatever risks it may present to society are not justified.
Signals of opportunity should not be interpreted as “offsetting” signals of risk

In recognition that benefits and harms are usually borne unevenly by different groups, the Guidelines strongly oppose the concept of a “net benefit” to shared prosperity, which is incompatible with a human rights-based approach. In alignment with the UN Guiding Principles on Business and Human Rights, a mitigation strategy should be developed for each risk identified, prioritizing the risks of the most severe impacts first. Mitigation strategies can range from eliminating the risk or reducing the severity of potential impact to ensuring access to remedy or compensation for affected groups. If no effective mitigation strategy for a given risk is available, that should be considered a strong argument in favor of meaningful changes to the development, implementation, and use plans of an AI system, especially if the risk is expected to affect vulnerable groups.
The analysis of signals of opportunity and risk is not prescriptive

Decisions around the development, implementation, and use of increasingly powerful AI systems should be made collectively, allowing for the participation of all affected stakeholders. We anticipate two main uses of the signals analysis:

  1. informing stakeholders’ positions in preparation for dialogue around development, deployment, and regulation of AI systems, as well as appropriate risk mitigation strategies;
  2. identifying key areas of potential impact of a given AI system that warrant deeper analysis (such as the one suggested in Korinek (2022)) to illuminate their magnitude and distribution, as well as further action.

Background

Origins

A key output of the AI and Shared Prosperity Initiative, PAI’s Shared Prosperity Guidelines were developed under the close guidance of a multidisciplinary Steering Committee and draw on insights gained during two years of applied research work. This work included economic modeling of AI’s impacts on labor demand (Klinova and Korinek 2021; Korinek 2022), engaging frontline workers around the world to understand AI’s impact on job quality (Bell 2022), mapping the levers for governing AI’s economic trajectory (Klinova 2022), as well as a major workstream on creating and testing practitioner resources for responsible sourcing of data enrichment labor. The plan for this multi-stakeholder applied research work was shared with the public in “Redesigning AI for Shared Prosperity: an Agenda,” published by Partnership on AI in 2021, following eight months of Steering Committee deliberations.

Though this document reflects the inputs of many PAI Partners, it should not be read as representing the views of any particular organization or individual within the AI and Shared Prosperity Initiative’s Steering Committee or any specific PAI Partner.

Motivation

Artificial intelligence is poised to substantially affect the labor market and the nature of work around the globe. Some job categories will shrink or disappear entirely, while new types of occupations will arise in their place. Wages will be affected by the way AI changes demand for specific skills and workers’ access to jobs. Workers will find a changing set of tasks in their jobs, with some of their previous work automated, and other tasks assisted by new technologies. And alongside all of this, job satisfaction and job quality will shift, with benefits accruing to the workers with the highest agency to shape the way AI shows up in their jobs, and harms occurring for workers with minimal agency over the AI in their workplaces and few other options available to them.

The balance of effects described above is not fixed or pre-ordained. As a society, we have a profound opportunity in this moment to ensure that AI’s effects on the labor market and the future of work contribute to broadly shared prosperity. But right now, the future economic impacts of AI, both as a whole and at the level of specific systems, are known unknowns, in no small part because decisions made now will strongly shape the future path of the field. In the best scenario, humanity could use AI to unlock opportunities to mitigate climate change, make medical treatments more affordable and effective, and usher in a new era of improved living standards and prosperity around the world. But AI use also brings numerous large-scale economic risks that are likely to materialize given our current path. Some of those risks include consolidating wealth in the hands of a select few companies and countries; reducing wages and undermining worker agency as larger numbers of workers compete for deskilled, lower-wage jobs; and allocating the most fulfilling tasks in some jobs to algorithms, leaving humans with the remaining drudgery. While current consensus indicates that mass permanent unemployment is unlikely, at least in the medium term, continuing further down our existing path could still lead to highly disruptive spikes in unemployment or underemployment that force millions to start at the bottom rung of the ladder in new fields.

In domains like pharmaceuticals and medical devices, every new innovation is put through rigorous testing to ensure the absence of unacceptable unintended harms to individuals and society. Yet, despite the potential for radical disruption to people’s lives, both positive and negative, there is no established practice of AI-developing and AI-using organizations assessing and disclosing the potential impacts of their decisions on shared prosperity. Many workers lack sufficient power and agency to demand that negative impacts be addressed; those with such power lack commonly accepted foundations on which to base their advocacy. Policymaking on this issue is largely absent around the world, and broader society is only beginning to understand the range of possible impacts and set norms of acceptable behavior. Moreover, there are no widely available analytical tools for anticipating the labor market risks and opportunities created by AI, or for facilitating a well-informed conversation about them among relevant stakeholders.

Objectives

To bridge this gap, PAI’s Shared Prosperity Guidelines are intended to equip interested stakeholders with the conceptual tools they need to steer AI in service of shared prosperity: a high-level job impact assessment tool containing a set of granular signals for anticipating opportunities and risks presented by a given AI system to job access and job quality, as well as a set of responsible practices and suggested uses for minimizing risks and maximizing opportunities to advance shared prosperity with AI. The job impact assessment tool can be used by AI developers and deployers, worker representatives, policymakers, civil society leaders, and other stakeholders looking to ground their decisions, agendas, and interactions with each other in a systematic understanding of labor market opportunities and risks presented by AI systems.

We acknowledge that some of the signals described in the Guidelines as signals of risk to shared prosperity are actively sought by companies as profit-making opportunities. The Guidelines DO NOT suggest that companies stop seeking to make a profit, merely that they do so responsibly. Profit-generating activities do not have to cause harm to workers and communities, but some of them do. The presence of signals of risk indicates that the AI system being assessed, while possibly capable of generating profit for a narrow set of beneficiaries, is likely to do so at the expense of shared prosperity, and thus might be undesirable from a societal benefit perspective.

Guidelines’ scope

The Shared Prosperity Guidelines are designed to apply regardless of industry (e.g., manufacturing, retail/services, office work, warehousing and logistics), AI technology (e.g., generative AI, predictive AI, autonomous robotics), or use case (e.g., decision-making or assistance, task completion, training, or supervision). Taken as a whole, the Guidelines are general purpose and applicable across all existing AI technologies and uses, though some sections may only apply to specific technologies or uses.

Defining shared prosperity

For the purposes of these Guidelines, we consider an AI system to advance the prosperity of a given group if it boosts demand for that group’s labor, since selling labor remains the primary source of income for the majority of the world’s population. We recognize that some communities advocate for advancing shared prosperity in the age of AI via benefits redistribution mechanisms such as universal basic income. While a global benefits redistribution mechanism might be an important part of the solution, especially in the longer term, and we welcome research efforts and public debate on this topic, we leave it outside the scope of the current version of the Guidelines in order to focus on a comparatively more neglected approach: governing the impact of AI on labor demand. We consider this approach essential, at least in the short to medium term, to give communities effective levers of influence over the pace, depth, and distribution of AI’s impacts on labor demand.

AI’s impacts on labor demand can manifest themselves in two ways:

  1. as changes in availability of jobs for certain skill, demographic, or geographic groups, and/or
  2. as changes in the quality of jobs affecting workers’ well-being.

In line with Partnership on AI (2020), we recognize multiple dimensions of job quality or workers’ well-being, namely:

  • Human Rights
  • Financial Well-being
  • Occupational Safety and Health (physical well-being)
  • Emotional Well-being
  • Intellectual Well-being
  • Sense of Meaning, Community, and Purpose.

Relevant outcomes

Thus, for the purposes of these Guidelines, we define AI’s impact on shared prosperity as the impact of AI use on the availability and quality of formal sector jobs across skill, demographic, or geographic groups.

In turn, the overall impact of AI on the availability and quality of jobs can be anticipated as the sum total of changes in the primary factors that AI use is known to affect (Klinova and Korinek 2021; Bell 2022; PAI 2021). Those factors are:

  • Relative productivity of workers (vs machines or workers in other skill groups)
  • Labor share of organization’s revenue
  • Task composition of jobs
  • Skill requirements of jobs
  • Geographic distribution of the demand for labor
  • Geographic distribution of the supply of labor
  • Market concentration
  • Job stability
  • Stress rates
  • Injury rates
  • Schedule predictability
  • Break time
  • Job intensity
  • Freedom to organize
  • Privacy
  • Fair and equitable treatment
  • Social relationships
  • Job autonomy
  • Challenge level of tasks
  • Satisfaction or pride in one’s work
  • Ability to develop skills needed for one’s career
  • Human involvement or recourse in managerial decisions (performance evaluation, promotion)
  • Human involvement or recourse in employment decisions (hiring, termination)

Anticipated effects on the above primary factors are the main focus of the risks and opportunities analysis. Another important focus is the distribution of those effects. An AI system may bring benefits to one set of users and harms to another. Take, for example, an AI system used by managers to set and monitor performance targets for their reports. This system might increase pride in their work for the managers while producing higher rates of injury and stress for the direct reports. Where this dynamic prompts conflicting interests, we suggest giving greater consideration to the more vulnerable group with the least decision-making power in the situation. By a similar logic, where we call for worker agency and participation, we suggest making a particular effort to include the workers most affected and/or with the least decision authority (e.g., the frontline worker, not just their supervisor). A schematic sketch of how such an analysis might be recorded follows below.
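To make the structure of this analysis concrete, here is a minimal, hypothetical sketch in Python of how the factor checklist above and the per-group distribution of effects might be recorded. The names (Signal, Direction, JobImpactAssessment), the severity scale, and the example entries are our illustrative assumptions; the Guidelines do not prescribe any particular encoding.

    # Hypothetical sketch (not part of the Guidelines): one way to record
    # anticipated effects on the primary factors above, per affected group.

    from dataclasses import dataclass, field
    from enum import Enum


    class Direction(Enum):
        OPPORTUNITY = "opportunity"  # factor expected to improve for the group
        RISK = "risk"                # factor expected to worsen for the group
        UNCLEAR = "unclear"          # evidence is mixed or missing


    @dataclass
    class Signal:
        factor: str        # e.g., "Injury rates", "Schedule predictability"
        group: str         # e.g., "direct reports", "managers"
        direction: Direction
        severity: str      # e.g., "low" / "medium" / "high"
        notes: str = ""


    @dataclass
    class JobImpactAssessment:
        system_name: str
        signals: list[Signal] = field(default_factory=list)

        def risks_by_group(self) -> dict[str, list[Signal]]:
            # Risks are grouped per affected group, never netted against
            # opportunities, consistent with the Guidelines' rejection of
            # a "net benefit" framing.
            grouped: dict[str, list[Signal]] = {}
            for s in self.signals:
                if s.direction is Direction.RISK:
                    grouped.setdefault(s.group, []).append(s)
            return grouped

        def has_any_opportunity(self) -> bool:
            # Absence of any opportunity signal suggests the system is
            # highly unlikely to advance shared prosperity.
            return any(s.direction is Direction.OPPORTUNITY
                       for s in self.signals)


    # Example: the performance-target-setting system described above.
    assessment = JobImpactAssessment("performance target setting and monitoring")
    assessment.signals += [
        Signal("Satisfaction or pride in one's work", "managers",
               Direction.OPPORTUNITY, "low"),
        Signal("Injury rates", "direct reports", Direction.RISK, "high"),
        Signal("Stress rates", "direct reports", Direction.RISK, "high"),
    ]
    print(assessment.risks_by_group())      # mitigation owed per affected group
    print(assessment.has_any_opportunity())

The key design choice in this sketch is that risk signals are kept separate for each affected group and are never summed against opportunity signals, mirroring the Guidelines’ position that benefits to one group cannot offset harms to another.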