
How Many Jobs Will AI Destroy? As Many As We Tell It To.


As artificial intelligence rapidly reshapes our world — changing the way we live, think, and even dispense justice — technologists, civil society, and policy-makers have started questioning the fairness of these automated systems. Despite this, our better angels seem to have a blind spot when it comes to AI and labor. There is a growing consensus that AI systems shouldn’t increase racial or sexual inequality. That automation will displace jobs and likely increase economic inequality, however, is treated as an incontestable destiny. “Over 30 million U.S. workers will lose their jobs because of AI,” reads a typical headline on the subject—and proposed solutions like reskilling implicitly accept this fate.

The truth is that no future is inevitable. Whether AI increases injustice or promotes equality, whether it makes the poor poorer or all of us richer, is a choice for us to make. When we ignore this decision, we accept by default a world that technology is making less fair.

The AI and Shared Prosperity Initiative (AI SPI) dares to imagine another world, one where innovation works to enhance humanity’s industriousness and creativity — not just supplant them. In service of that vision, a Steering Committee of 23 notable thinkers from around the globe regularly convened this fall, identifying major topics of study for this emerging discipline of Responsible AI.

Organized around a series of Impulse Talks by a diverse range of academics, advocates, and industry practitioners, the Steering Committee’s deliberations explored some of the biggest challenges to ensuring that everyone shares in AI’s economic benefits. Below are themes that emerged from these illuminating meetings.

Following a Worrisome Path

“Automation” has become synonymous with “efficiency,” but the introduction of new technologies like AI into the workplace in the name of increased productivity doesn’t have to harm employment or wages. Sadly, there have been plenty of examples of the opposite: automation technologies that eliminate jobs without reducing consumer prices or improving the quality of goods and services, or what MIT economist Daron Acemoglu, co-author of Why Nations Fail, calls “so-so” technologies.

In his Impulse Talk, Acemoglu challenged one of the chief orthodoxies found in “future of work” narratives: that human job tasks transferred to automated systems will be replaced with new tasks created by these technologies. His recent research shows that while this held true for technologies deployed from the end of WWII through the late 1980s, technologies introduced since then have created fewer and fewer new tasks for humans to replace the ones they eliminate — and corporations are following the same automation playbook in their adoption of AI. He also explored the hidden costs of “so-so” automation and why these technologies have been excessively adopted.
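To see why “so-so” automation can be privately attractive yet deliver little for the economy as a whole, consider a toy back-of-the-envelope calculation. The numbers and the short script below are hypothetical and are not drawn from Acemoglu’s research; they simply illustrate the mechanism: a firm adopts a machine that is only marginally cheaper than the workers it replaces, and no new tasks appear for the displaced workers to move into.

```python
# Toy "so-so automation" arithmetic (hypothetical numbers, illustrative only).
# A firm will automate a task if the machine is even slightly cheaper than the
# workers, but the productivity gain can be tiny while the wage loss is large.

workers_displaced = 100
annual_wage = 40_000                    # wages the automated task used to pay
machine_annual_cost = 3_800_000         # annual cost of the automated system
new_tasks_created_for_humans = 0        # the "so-so" case: no offsetting new tasks

labor_cost_saved = workers_displaced * annual_wage     # 4,000,000
firm_gain = labor_cost_saved - machine_annual_cost     # 200,000 -> firm adopts
gain_as_share_of_wages = firm_gain / labor_cost_saved  # only 5% cheaper

print(f"Firm saves ${firm_gain:,} a year ({gain_as_share_of_wages:.0%} of the old wage bill)")
print(f"Workers lose ${labor_cost_saved:,} in wages, with {new_tasks_created_for_humans} new tasks to absorb them")
```

The private adoption decision turns only on the small positive saving to the firm; it says nothing about whether displaced labor is reabsorbed, or whether prices and quality improve for consumers.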

In his own talk, University of Virginia economist Anton Korinek advocated for steering AI development in the direction of job creation, rather than job displacement. Korinek drew a distinction between mechanisms of redistribution, which share the gains of AI after development and deployment take place, and pre-distribution, that is, choosing to develop labor-benefiting types of AI applications from the start. He showed that technological advancement and workers are not inherently economic adversaries. Economic gains from new technologies can be biased in favor of labor (benefiting workers more than shareholders by creating additional jobs or raising overall wages), biased in favor of capital (benefiting shareholders more than workers by creating additional profit for investors), or neutral between the two (benefiting workers and shareholders proportionately). He also cautioned against focusing only on the number of jobs displaced, as the impact of AI may show up in the level of wages instead, and neither metric captures the effects of technology on job quality.
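To make the labor/capital distinction concrete, here is a hypothetical split of the gains from a single AI deployment. The figures and the short script are illustrative assumptions, not results from Korinek’s work: the same amount of new value can land mostly in the wage bill, mostly in profits, or in the existing proportions.

```python
# Hypothetical split of $1M in gains from one AI deployment (illustrative only).
# "Labor-biased" gains raise the wage bill more than profits, "capital-biased"
# gains do the reverse, and "neutral" gains leave the labor share unchanged.

baseline_wages, baseline_profits = 700_000, 300_000   # labor share starts at 70%
scenarios = {
    "labor-biased":   (800_000, 200_000),   # e.g., new tasks and higher wages
    "capital-biased": (100_000, 900_000),   # e.g., pure labor replacement
    "neutral":        (700_000, 300_000),   # gains split in existing proportions
}

for name, (to_wages, to_profits) in scenarios.items():
    wages = baseline_wages + to_wages
    profits = baseline_profits + to_profits
    labor_share = wages / (wages + profits)
    print(f"{name:>14}: labor share moves from 70% to {labor_share:.0%}")
```

Total output rises by the same amount in every scenario; what differs is who captures it, which is why counting displaced jobs alone can miss the distributional picture.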

Strea Sanchez and Mario Crippen, organizers for the worker movement United for Respect, added depth on job quality in their talks for the Steering Committee, speaking about their experiences as workers navigating deployments of new technologies in warehouses. Those technologies range from robots that move items around the warehouse to wristbands that buzz if a worker puts an item into the wrong bin. Such tools can be a double-edged sword: they can help make workers more productive, but can also give rise to exploitation, excessive surveillance and punitive measures, and increased injury rates.

While the rise of AI doesn’t have to come at the expense of workers, we are paving a path for greater and more entrenched inequality if current trends in automation continue unabated.

Avenues of Change

In their Impulse Talk, Paloma Muñoz Quick, Senior Consultant with the UN’s B-Tech Project, and Dunstan Allison-Hope, Vice President at Business for Social Responsibility, discussed the obligation of businesses developing and implementing AI to respect human rights. Relevant rights in the Universal Declaration of Human Rights include the right to an adequate standard of living and the right to share in the benefits of scientific advancement. Human rights-based approaches often surface the groups who are most vulnerable to violations of their basic rights, making such approaches particularly important for practitioners trying to “do no harm.”

Human rights can illuminate what actions are and are not acceptable in pursuit of profit, and businesses are increasingly receptive to guiding principles beyond maximizing shareholder value, as seen in the 2019 Business Roundtable commitment to all stakeholders. Though innovators are the main architects of technological change, investors can also play a crucial role in ensuring funding only goes to ideas and proposals that respect people’s rights and dignity.

The Beneficiaries of AI

AI has the potential to greatly increase prosperity by one measure or another, but whose prosperity it will increase is far from a settled question. If this technology is deployed short-sightedly, AI risks further concentrating wealth in the hands of the few. By interrogating our notions of prosperity now, however, we have the opportunity to prevent tomorrow’s inequities today.

Shakir Mohamed, Senior Research Scientist at DeepMind, spoke about applying “decolonial foresight” to AI in order to ensure more equitable design and implementation. He argued that researchers would be well-served to understand how the cause of advancing science has led to past horrors, such as the Tuskegee Experiment, and to reflect on how our societal inequalities and embedded social values could lead to similar mistakes today. Additionally, he suggested that AI developers need to participate in “reciprocal engagement and reverse tutelage” with those whose knowledge was extracted without compensation or recognition in earlier systems of power. This practice opens up questions of what assumptions should be challenged, what types of data are used, and what counts as valid knowledge. Finally, he proposed that developers form “affective communities” with users and stakeholders of their work, creating connections and solidarity with those frequently ignored in AI development.

If technology is to work for the benefit of all of humanity, it will need to be designed and deployed in dialogue with all stakeholders — and not treat their differing needs as an afterthought.

Collaboration, Not Competition

Reskilling and upskilling are frequently presented as the primary solutions to inevitable, automation-induced labor crises, but these crises can potentially be prevented if new technologies are designed to complement workers’ skills or have any necessary upskilling built-in. Simply subtracting labor does not always add value for humanity.

In her Impulse Talk, Lama Nachman, Director of Intel’s Anticipatory Computing Lab, discussed how a collaborative (as opposed to competitive) approach to human/AI interactions could improve productivity while also scaling human potential. While building technology in this way can be more challenging than designing systems that simply automate human labor, such technology expands the frontier of both human and AI capabilities. AI systems are trained in artificial environments and can struggle when introduced into the messiness of the real world. Designing such systems to collaborate with—rather than replace—humans, who are better equipped to understand and navigate that world, enables both to achieve goals neither could reach alone. One application of this approach is to provide greater information to workers, who can then apply it to better execute tasks in the world. This can take the form of providing more comprehensive diagnostic information to machine maintainers or offering targeted assessments of student engagement to teachers to improve learning outcomes.

In her talk, Jody Medich, Principal Design Researcher at Microsoft’s Office of the CTO, spoke about the value of worker knowledge that AI systems struggle to replicate, pointing to the potential of technological assistance tools that guide and amplify human capabilities. Medich noted that while AI and automation work well with visual and verbal knowledge, encoding physical knowledge is much more difficult. An AI could “watch” an infinite number of videos of a complicated surgical procedure, for instance, without ever being able to ascertain the pressure applied by the surgeon. This physical, or “embodied,” knowledge is another promising space for human/AI collaboration. Medich offered the idea of AI as ergonomics: How can we use AI and wearable computers to make it easier for humans to complete a task, and improve the output as a result? If we use this approach to simply replace human labor, we may not create much improvement in cost or quality. But if we use it to assist humans in their roles, we open up new spaces of achievement.

If businesses limit themselves to only deploying AI that automates tasks already performed by humans, they will cheat both themselves and humanity at large of the opportunities that come from scaling workers’ abilities.

The Next Phase of Our Work

The completion of these Steering Committee deliberations marks the end of the first phase of the AI and Shared Prosperity Initiative. In 2021, the Initiative will conduct the research necessary to translate the ideas laid out by the Steering Committee into frameworks applicable to AI development and deployment. Sign up to get involved, share your ideas, and receive updates on AI SPI.