1. Motivation and objectives
Artificial intelligence is poised to substantially affect the labor market and the nature of work around the globe. Some job categories will shrink or disappear entirely, while new types of occupations will arise in their place. Wages will be affected by the way AI changes demand for specific skills and workers’ access to jobs. Workers will find a changing set of tasks in their jobs, with some of their previous work automated and other tasks assisted by new technologies. Alongside all of this, job satisfaction and job quality will shift: benefits will accrue to workers with the greatest agency to shape how AI shows up in their jobs, while harms will fall on workers with minimal agency over the AI in their workplaces and few other options available to them.
The balance of effects described above is not fixed or pre-ordained. As a society, we have a profound opportunity in this moment to ensure that AI’s effects on the labor market and the future of work contribute to broadly shared prosperity. But right now, the future economic impacts of AI as a whole and at the level of specific systems are known unknowns—in no small part because decisions made now will strongly shape the future path of the field. In the best scenario, humanity could use AI to unlock opportunities to mitigate climate change, make medical treatments more affordable and effective, and usher in a new era of improved living standards and prosperity around the world. But AI use also brings numerous large-scale economic risks that are likely to materialize given our current path. Those risks include the consolidation of wealth in the hands of a select few companies and countries; reduced wages and undermined worker agency as larger numbers of workers compete for deskilled, lower-wage jobs; and the allocation of the most fulfilling tasks in some jobs to algorithms, leaving humans with the remaining drudgery. While current consensus indicates that mass permanent unemployment is unlikely, at least in the medium term, continuing further down our existing path could still lead to highly disruptive spikes in unemployment or underemployment that force millions to start at the bottom rung of the ladder in new fields.
In domains like pharmaceuticals and medical devices, every new innovation is put through rigorous testing to ensure the absence of unacceptable unintended harms to individuals and society. Yet, despite the potential for radical disruption to people’s lives, both positive and negative, there is no established practice of AI-developing and AI-using organizations assessing and disclosing the potential impacts of their decisions on shared prosperity. Many workers lack sufficient power and agency to demand that negative impacts be addressed; those with such power lack commonly accepted foundations on which to base their advocacy. Policymaking on this issue is largely absent around the world, and broader society is only beginning to understand the range of possible impacts and to set norms of acceptable behavior. Moreover, there are no widely available analytical tools for anticipating the labor market risks and opportunities created by AI, or for facilitating a well-informed conversation about them among relevant stakeholders.
To bridge this gap, PAI’s Shared Prosperity Guidelines are intended to equip interested stakeholders with the conceptual tools they need to steer AI in service of shared prosperity. The Guidelines comprise a high-level job impact assessment tool containing a set of granular signals for anticipating the opportunities and risks a given AI system presents to job access and job quality, as well as a set of responsible practices and suggested uses for minimizing risks and maximizing opportunities to advance shared prosperity with AI. The job impact assessment tool can be used by AI developers and deployers, worker representatives, policymakers, civil society leaders, and other stakeholders looking to ground their decisions, agendas, and interactions with each other in a systematic understanding of the labor market opportunities and risks presented by AI systems.
We acknowledge that some of the signals described in the Guidelines as signals of risks to shared prosperity are actively sought by companies as profit-making opportunities. The Guidelines DO NOT suggest that companies stop seeking to make a profit, but merely that they do so responsibly. Profit-generating activities do not have to cause harm to workers and communities, but some of them do. The presence of signals of risk indicates that an AI system being assessed, while possibly capable of generating profit for a narrow set of beneficiaries, is likely to do so at the expense of shared prosperity, and thus might be undesirable from a societal benefit perspective.