Guidelines for AI and Shared Prosperity

For AI-Using Organizations

Responsible Practices for AI-Using Organizations (RPU)

After performing the High-Level Job Impact Assessment, consult our recommendations to help minimize the risks and maximize the opportunities to advance shared prosperity with AI.

Use of workplace AI is still in its early stages, and information about what constitutes best practice for fostering shared prosperity is accordingly still preliminary. Below is a starter set of practices for AI-using organizations, aligned with increasing the likelihood of benefits to shared prosperity and decreasing the likelihood of harms to it. The set is drawn from early empirical research in the field, historical analogues for transformative workplace technologies, and theoretical frameworks yet to be applied in practice. For ease of use, the Responsible Practices are organized by the earliest AI system lifecycle stage at which each practice can be applied.

At an organizational level
RPU1. Make a public commitment to identify, disclose, and mitigate the risks of severe labor market impacts presented by AI systems you use

Labor practices and impacts are increasingly a part of suggested, proposed, or required non-financial disclosures, including practices affecting human rights, management of human capital, and other social and employee issues. Regulatory authorities have treated these disclosures as material to investor decision-making, as well as beneficial to broader society. We recommend that AI-using organizations identify, disclose, and mitigate the risks of severe labor market impacts for the same rationales, as well as to provide both prospective and existing workers with the information they need to make informed decisions about their own employment. The public commitment to disclose severe risks should specify the severity threshold the organization considers sufficient to warrant disclosure, explain how that threshold level of severity was chosen, and identify the external stakeholders consulted in that decision. (PAI’s Shared Prosperity Guidelines use the UNGP’s definition of severity: an impact, potential or actual, can be severe “by virtue of one or more of the following characteristics: its scale, scope or irremediability. Scale means the gravity of the impact on the human right(s). Scope means the number of individuals that are or could be affected. Irremediability means the ease or otherwise with which those impacted could be restored to their prior enjoyment of the right(s).” See https://www.ungpreporting.org/glossary/severe-human-rights-impact/.)

Alternatively, an organization can choose to set a threshold in terms of an AI system’s marketed capabilities and disclose all risk signals present for systems meeting that threshold. For example, if an organization’s expected return on investment from the use of an AI system under assessment is a multiple greater than 10, the system’s corresponding risks would be subject to disclosure. Where organizational impact is driven by a series of smaller system implementations, the organization could instead disclose all risk signals present once the cumulative cost decrease or revenue increase exceeds 5%. (In a recent McKinsey survey, roughly one quarter of corporate respondents reported achieving a 5% improvement to EBIT from AI in 2021; as AI adoption becomes more widespread, we anticipate more organizations will meet this threshold. See https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review#/.)
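Thresholds of this kind are straightforward to operationalize. The sketch below is illustrative only, assuming the example thresholds from this section; the function and field names are hypothetical, not part of the Guidelines.

```python
# Hypothetical helper: does a system's projected financial impact cross the
# disclosure thresholds suggested above? (ROI multiple > 10, or a cumulative
# cost decrease / revenue increase > 5% across smaller implementations.)

def requires_disclosure(expected_roi_multiple: float,
                        cumulative_cost_decrease_pct: float,
                        cumulative_revenue_increase_pct: float,
                        roi_threshold: float = 10.0,
                        impact_threshold_pct: float = 5.0) -> bool:
    """Return True if risk signals present for this system should be disclosed."""
    if expected_roi_multiple > roi_threshold:
        return True
    return max(cumulative_cost_decrease_pct,
               cumulative_revenue_increase_pct) > impact_threshold_pct

# A system expected to return 12x its cost triggers disclosure.
assert requires_disclosure(12.0, 1.0, 2.0)
# A series of small deployments cumulatively cutting costs 6% also triggers it.
assert requires_disclosure(3.0, 6.0, 0.0)
```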

For additional resources on workplace impact assessment, including on worker involvement, see:

Institute for the Future of Work, Good Work Algorithmic Impact Assessment

Throughout the entire procurement process, from identification to use
RPU2. Commit to neutrality towards worker organizing and unionization
As outlined in the signals of risk above, AI systems pose numerous risks to workers’ human rights and well-being. These systems are implemented and used in employment contexts that often have such comprehensive decision-making power over workers that they can be described as “private governments” (Anderson, 2019). As a counterbalance to this power, workers may choose to organize to collectively represent their interests. The degree to which this is protected, and the frequency with which it occurs, differs substantially by location. Voluntarily committing to neutrality towards worker organizing is an important way to ensure workers’ agency is respected and their collective interests have representation throughout the AI use lifecycle if workers so choose (as is repeatedly emphasized as a critical provision in these Guidelines).
RPU3. In collaboration with affected communities, perform Job Impact Assessments early and often throughout AI system implementation and use

Run opportunity and risk analyses early and often across AI implementation and use, using the data available at each stage. Update as more data becomes available (for example, as objectives are identified, systems are procured, implementation is completed, and new applications arise). Wherever applicable, we suggest using AI system implementation and use choices to maximize the presence of signals of opportunity and minimize the presence of signals of risk. Solicit the input of workers who stand to be affected, along with a multi-disciplinary set of independent experts, when assessing the presence of opportunity and risk signals. (Workers who stand to be affected by the introduction of an AI system frequently include not only workers directly employed by the organization introducing AI in its own operations, but a wider set of current or potential labor market participants. It is therefore important that incumbent workers are not the only ones given the agency to participate in job impact assessment and risk mitigation strategy development.) Make sure to compensate external contributors for their participation in the assessment of the AI system.

Please note that the analysis of opportunity and risk signals suggested here is different from red team analysis suggested in RPU15. The former identifies risks and opportunities created by an AI system working perfectly as intended. The latter identifies possible harms if the AI system in question malfunctions or is misused.

RPU4. In collaboration with affected communities, develop mitigation strategies for identified risks

In alignment with the UN Guiding Principles on Business and Human Rights, a mitigation strategy should be developed for each identified risk, prioritizing risks primarily by the severity of their potential impact and secondarily by its likelihood. Severity and likelihood of potential impact are determined on a case-by-case basis. (For determining the severity of potential quantitative impacts, such as impacts on wages and employment, the algorithm described in Korinek, A. (2022), How innovation affects labor markets: An impact assessment, is very useful, especially in cases with limited uncertainty around the future uses of the AI system being assessed: https://www.brookings.edu/research/how-innovation-affects-labor-markets-an-impact-assessment/.) Mitigation strategies can range from eliminating the risk or reducing the severity of potential impact to ensuring access to remedy or compensation for affected groups. If effective mitigation strategies for a given risk are not available, this should be considered a strong argument in favor of meaningful changes to the development plans of an AI system, especially if it is expected to affect vulnerable groups.
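To make the prioritization rule concrete, here is a minimal sketch assuming simple ordinal scales for severity and likelihood; the scales, risk names, and scores are hypothetical, and in practice both dimensions are judged case by case with affected communities.

```python
# Rank identified risks primarily by severity, secondarily by likelihood,
# per the UN Guiding Principles-aligned ordering described above.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: int    # e.g., 1 (minor) to 5 (severe: large scale/scope, irremediable)
    likelihood: int  # e.g., 1 (rare) to 5 (near certain)

risks = [
    Risk("wage suppression for dispatch workers", severity=4, likelihood=2),
    Risk("increased injury rate on picking line", severity=4, likelihood=4),
    Risk("loss of schedule predictability", severity=2, likelihood=5),
]

# Severity is the primary sort key; likelihood only breaks ties.
for r in sorted(risks, key=lambda r: (r.severity, r.likelihood), reverse=True):
    print(f"{r.name}: severity={r.severity}, likelihood={r.likelihood}")
```

Under this ordering, both severity-4 risks outrank the likelihood-5 risk, with likelihood deciding between them.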

Engaging workers and external experts as needed in the creation of mitigation strategies is critical to ensure important considerations are not being missed. It is especially critical to engage with representatives of communities that stand to be affected. Please ensure that everyone engaged in consultations around assessing risks and developing mitigation strategies is adequately compensated.

RPU5. Create and use robust and substantive mechanisms for worker agency in identifying needs, selecting AI vendors and systems, and implementing them in the workplace

Workers who will use or be affected by AI hold unique perspectives on important needs and opportunities in their roles. They also possess particular insight into how AI systems could create harm in their workplaces. To ensure AI systems foster shared prosperity, these workers should be included and afforded agency in the AI procurement, implementation, and use process from start to finish (see Institute for the Future of Work, 2023, Good Work Algorithmic Impact Assessment Version 1: An approach for worker involvement, https://tinyurl.com/mr4yn5yt). Workers must be properly equipped with knowledge of potential product functions, capabilities, and limitations, so that they can draw meaningful connections to their role-based knowledge (see RPU13 for more information). Additionally, care must be taken to create a shared vocabulary on the team, so that technical terms or jargon do not unintentionally obscure or mislead. Workers must also be given genuine decision-making power in the process, allowing them to shape use (such as new workflows or job design) and to be taken seriously on the need to end a project if they identify unacceptable harms that cannot be resolved.

RPU6. Ensure AI systems are used in environments with high levels of worker protections and decision-making power

AI systems are less likely to cause harm in environments with:

  • High levels of legal protection, monitoring, and enforcement for workers’ rights (such as those related to health and safety or freedom to organize)
  • High levels of worker voice and negotiating ability (due to strong protections for worker voice or high demand for workers’ comparatively scarce skills), especially those where workers have meaningful input into decisions regarding the introduction of new technologies

These factors encourage worker-centric AI design. Workers in such environments also possess a higher ability to limit harms from AI systems (such as changing elements of an implementation or rejecting the use of the technology as needed), including harms outside direct legal protections. This should not, however, be treated as a failsafe for harmful technologies: other practices in this list should also be followed to reduce risk to workers.

RPU7. Source data enrichment labor responsibly

Key requirements for the responsible sourcing of data enrichment services (such as data annotation and real-time human verification of algorithmic predictions) include:

  • Always paying data enrichment workers above the local living wage
  • Providing clear, tested instructions for data enrichment tasks
  • Equipping workers with simple and effective mechanisms for reporting issues, asking questions, and providing feedback on the instructions or task design

In collaboration with our Partners, PAI has developed a library of practitioner resources for responsible data enrichment sourcing.

RPU8. Ensure workplace AI systems are not discriminatory
AI systems frequently reproduce or deepen discriminatory patterns in society, including those related to race, class, age, and disability, and specific workplace systems have shown the same propensity. Careful vetting and use are needed to ensure that AI systems affecting workers or the economy do not produce discriminatory results; one coarse screening statistic is shown in the sketch below.
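One widely used screening statistic for hiring-style systems is the adverse impact ratio behind the US EEOC’s “four-fifths rule.” The sketch below, with hypothetical selection counts, shows the computation; a ratio below 0.8 is a conventional red flag warranting deeper review, not proof of discrimination or of its absence.

```python
# Adverse impact ratio: a group's selection rate divided by the
# most-favored group's selection rate. Counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

reference_rate = selection_rate(50, 100)  # most-favored group: 50% selected
group_rate = selection_rate(30, 100)      # comparison group: 30% selected

ratio = group_rate / reference_rate
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.60 < 0.8 -> flag for review
```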
When identifying needs, procuring, and implementing AI systems
RPU9. Procure AI systems that align with worker needs and preferences

AI systems welcomed by workers largely fall into three overarching categories:

  • Systems that directly improve some element of job quality
  • Systems that assist workers to achieve higher performance on their core tasks
  • Systems that eliminate undesirable non-core tasks (See OS2, OS9, RS1, and RS2 for additional detail)

Starting with one of these objectives in mind and creating robust participation mechanisms for workers throughout the design and implementation process is likely to result in win-win-wins for AI creators, employers who implement AI, and the workers who use or are affected by them.

RPU10. Staff and train sufficient internal or contracted expertise to properly vet AI systems and ensure responsible implementation

As discussed throughout, AI systems raise substantial concerns about the risks of their adoption in workplace settings. To understand and address these risks, experts are needed to vet and implement AI systems. In addition to technical experts, this includes sociotechnical experts capable of performing the Job Impact Assessment described above at the level of granularity necessary to fully identify and mitigate the risks of a specific system in a given workplace. The importance of this practice increases with AI system customization or integration. Where systems are developed by organizations that follow the Shared Prosperity Guidelines or similar recommendations, disclose potential labor impacts, and design their systems to be used off-the-shelf, less internal expertise may be required from users. When systems are more customized or integrated into workplaces, however, labor impacts arise from the particulars of system use and depend more heavily on the specifics of the organization and worksite, requiring additional expertise.

RPU11. Prefer vendors who commit to following PAI’s Shared Prosperity Guidelines or similar recommendations
The benefit to workers and society from following these practices can be meaningfully undermined if organizations designing and selling the AI system do not do their part to advance shared prosperity. We encourage users to make developer adherence to PAI’s Guidelines or similar recommendations a priority when selecting vendors and systems for use.
RPU12. Ensure transparency about what worker data is collected, how it will be used, and why, and enable workers to opt out

Privacy and ownership over data generated by one’s activities are increasingly recognized as rights inside and outside the workplace. Respect for these rights requires fully informing workers about the data collected on them and the inferences made from it, how they are used and why, as well as offering workers the ability to opt out of collection and use (Bernhardt, Suleiman, and Kresge, 2021). Workers should also be given the opportunity to individually or collectively forbid the sale of datasets that include their personal information or personally identifiable information. Depending on use, generative AI may present novel privacy risks, such as extracting information about worker practices and sharing it with managers and colleagues. System design and use should follow the data minimization principle: collect only the necessary data, for the necessary purpose, and hold it only for the necessary amount of time. Design should also enable workers to know about, correct, or delete inferences about them (Colclough, 2022). Particular care must be taken in workplaces, as the power imbalance between employer and employee undermines workers’ ability to freely consent to data collection and use compared to other, less coercive contexts. In practice, data use decisions by employers often shift over time, making it especially important for AI-using organizations to explicitly and transparently inform workers about each new use of their data and its implications, and to request consent for each new use or repurposing (Brand, Dencik, and Murphy, 2023).
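As a minimal sketch of the data minimization and consent-per-use principles described above (the class, method, and purpose names are hypothetical, not a prescribed implementation):

```python
# Worker data may only be used for purposes the worker has explicitly
# consented to, and only within a bounded retention window. A new use or
# repurposing requires its own recorded consent.
from datetime import datetime, timedelta

class WorkerDataRecord:
    def __init__(self, retention: timedelta):
        self.consented_purposes: set[str] = set()
        self.collected_at = datetime.now()
        self.retention = retention

    def grant_consent(self, purpose: str) -> None:
        self.consented_purposes.add(purpose)

    def may_use_for(self, purpose: str) -> bool:
        expired = datetime.now() > self.collected_at + self.retention
        return (not expired) and purpose in self.consented_purposes

record = WorkerDataRecord(retention=timedelta(days=90))
record.grant_consent("shift scheduling")
assert record.may_use_for("shift scheduling")
assert not record.may_use_for("performance scoring")  # new purpose: ask again
```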

RPU13. Provide meaningful, comprehensible explanations of the AI system’s function and operation to workers overseeing it, using it, or affected by it
The field of explainable AI has advanced considerably in recent years, but workers remain an underrepresented audience for AI model explainability efforts (Park, Ahn, Hosanagar, and Lee, 2021). Providing managers and workers with explanations of workplace AI systems tailored to the particulars of their roles and job goals enables them to understand the tools’ strengths and weaknesses. When paired with workers’ existing subject matter expertise in their own roles, this knowledge equips managers and workers to most effectively attain the upsides and minimize the downsides of AI systems, so that AI systems can enhance overall job quality across the different dimensions of well-being.
RPU14. Establish human recourse into decisions or recommendations offered, including the creation of transparent, human-decided grievance redress mechanisms
AI systems have been built to hire workers, manage them, assess their performance, and promote or fire them. AI is also being used to assist workers with their tasks, coach them, and complete tasks previously assigned to them. In each of these decisions allocated to AI, the technologies have accuracy as well as comprehensiveness issues: AI systems lack the human capacity to bring in additional context relevant to the issue at hand. As a result, humans are needed to validate, refine, or override AI outputs. In the case of task completion, an absence of human involvement can create harms to physical, intellectual, or emotional well-being. In AI’s use in employment decisions, it can result in unjustified hiring or firing decisions. Simply placing a human “in the loop” is insufficient to overcome automation bias: demonstrated patterns of human deference to the judgment of algorithmic systems. Care must be taken to appropriately convey the strengths and weaknesses of AI systems and to empower humans with final decision-making power, as sketched below.
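One way to encode “humans hold final decision-making power” in system design is to treat every AI output as a recommendation that a human reviewer can validate, refine, or override, with the override preserved for grievance review. A minimal sketch, with hypothetical interfaces:

```python
# The AI recommendation is never final; human judgment, when recorded,
# takes precedence, and the record supports later grievance redress.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmploymentDecision:
    subject: str
    ai_recommendation: str
    human_override: Optional[str] = None

    @property
    def final(self) -> str:
        return self.human_override or self.ai_recommendation

decision = EmploymentDecision("promotion review", ai_recommendation="deny")
decision.human_override = "approve"  # reviewer adds context the model lacked
assert decision.final == "approve"
```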
RPU15. Red team AI systems for potential misuse or abuse
The preceding points have focused on AI systems working as designed and intended. Responsible development also requires comprehensive “red teaming” of AI systems to identify vulnerabilities and the potential for misuse or abuse. Managers, workers in relevant roles, and external experts should test the system for misuse and abusive implementation.
RPU16. Recognize extra work created by AI system use and ensure work is acknowledged and compensated
The above practice of red teaming addresses intentional misuse or abuse. More routinely, AI systems fail to work as marketed or intended in ways big and small, creating additional tasks for workers to absorb. New tasks generated by the gap between AI system expectations and realities often go unrecognized, leaving workers to shoulder extra responsibilities without additional time to complete these tasks or compensation for doing so (Mateescu and Elish, 2019; Elish, 2019). Address this issue by holding routine reviews with the workers who use or oversee systems to identify areas of new work and adjust accordingly.
RPU17. Ensure mechanisms are in place to share productivity gains with workers
The power and responsibility to share productivity gains from AI system implementation lie largely with AI-using organizations, which make the final decisions about wages, benefits, working hours, job design, worker retraining and reskilling, and more. To the extent that AI systems deliver cost savings and/or higher revenues via increased worker productivity, AI-using organizations hold the authority over how to allocate the increased margins. As highlighted in OS7, AI systems present a major opportunity to improve workers’ well-being, financial and otherwise, by maintaining or increasing workers’ share of revenue without decreasing absolute returns to owners or shareholders, as the worked example below illustrates.
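A toy worked example (all figures hypothetical): if the labor share of revenue is held constant, a productivity-driven revenue gain raises workers’ absolute pay and owners’ absolute returns at the same time.

```python
# 10% productivity-driven revenue gain with a constant 60% labor share.
revenue_before, labor_share = 1_000_000, 0.60
revenue_after = revenue_before * 1.10

wages_before = revenue_before * labor_share    # 600,000
wages_after = revenue_after * labor_share      # 660,000
owners_before = revenue_before - wages_before  # 400,000
owners_after = revenue_after - wages_after     # 440,000

assert wages_after > wages_before and owners_after > owners_before
```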


Sources Cited

  1. Acemoglu, D. (Ed.). (2021). Redesigning AI: Work, democracy, and justice in the age of automation. Boston Review.
  2. Korinek, A., and Stiglitz, J.E. (2020, April). Steering technological progress. In NBER Conference on the Economics of AI.
  3. Acemoglu, D., and Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Public Affairs, New York.
  4. International Labour Organization. (n.d.). Decent work. https://tinyurl.com/yur776yd
  5. US Department of Commerce and US Department of Labor. (n.d.). Department of Commerce and Department of Labor Good Jobs Principles, DOL. https://tinyurl.com/mtbpemkn
  6. Institute for the Future of Work. (n.d.). The Good Work Charter. https://tinyurl.com/ycxtaax4
  7. Klinova, K., and Korinek, A. (2021). AI and shared prosperity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 645-651).
  8. Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611
  9. Partnership on AI, 2021. Redesigning AI for Shared Prosperity: an Agenda. https://partnershiponai.org/paper/redesigning-ai-agenda/
  10. Negrón, W. (2021). Little Tech is Coming for Workers. Coworker.org. https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf.
  11. Korinek, A., 2022. How innovation affects labor markets: An impact assessment.
  12. Brynjolfsson, E., Collis, A., Diewert, W.E., Eggers, F., and Fox, K.J. (2019). GDP-B: Accounting for the value of new and free goods in the digital economy (No. w25695). National Bureau of Economic Research.
  13. Acemoglu, D., and Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3-30.
  14. Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611
  15. Valentine, M., and Hinds, R. (2022). How Algorithms Change Occupational Expertise by Prompting Explicit Articulation and Testing of Experts’ Theories. https://tinyurl.com/pxyr8ev3
  16. Autor, D. (2022). The labor market impacts of technological change: From unbridled enthusiasm to qualified optimism to vast uncertainty (No. w30074). National Bureau of Economic Research.
  17. Mateescu, A., and Elish, M. (2019). AI in context: the labor of integrating new technologies.
  18. Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction (pre-print). Engaging Science, Technology, and Society (pre-print).
  19. World Bank. (2017). World development report 2018: Learning to realize education's promise. The World Bank.
  20. Korinek, A., and Stiglitz, J.E. (2021). Artificial intelligence, globalization, and strategies for economic development (No. w28453). National Bureau of Economic Research.
  21. Diao, X., Ellis, M., McMillan, M. S., and Rodrik, D. (2021). Africa's manufacturing puzzle: Evidence from Tanzanian and Ethiopian firms (No. w28344). National Bureau of Economic Research.
  22. Rodrik, D. (2022). 4 Prospects for global economic convergence under new technologies. An inclusive future? Technology, new dynamics, and policy challenges, 65.
  23. O'Keefe, C., Cihon, P., Garfinkel, B., Flynn, C., Leung, J., and Dafoe, A. (2020, February). The windfall clause: Distributing the benefits of AI for the common good. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 327-331).
  24. Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611
  25. Scherer, M., and Brown, L. X. (2021). Warning: Bossware May Be Hazardous to Your Health. Center for Democracy and Technology. https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf
  26. Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611
  27. Acemoglu, D., and Restrepo, P. (2022). Tasks, automation, and the rise in US wage inequality. Econometrica, 90(5), 1973-2016.
  28. Valentine, M., and Hinds, R. (2022). How Algorithms Change Occupational Expertise by Prompting Explicit Articulation and Testing of Experts’ Theories. https://tinyurl.com/pxyr8ev3
  29. Nurski, L., and Hoffmann, M. (2022). The Impact of Artificial Intelligence on the Nature and Quality of Jobs. Working Paper. Bruegel. https://tinyurl.com/jxayzdcz
  30. Pritchett, L. (2020). The future of jobs is facing one, maybe two, of the biggest price distortions ever. Middle East Development Journal, 12(1), 131-156.
  31. Eloundou, T., Manning, S., Mishkin, P., and Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.
  32. Noy, S., and Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4375283
  33. Korinek, A. (2023). Language models and cognitive automation for economic research (No. w30957). National Bureau of Economic Research.
  34. Case, A., and Deaton, A. (2020). Deaths of Despair and the Future of Capitalism. Princeton University Press.
  35. Gihleb, R., Giuntella, O., Stella, L., and Wang, T. (2022). Industrial robots, workers’ safety, and health. Labour Economics, 78, 102205.
  36. Pritchett, L. (2020). The future of jobs is facing one, maybe two, of the biggest price distortions ever. Middle East Development Journal, 12(1), 131-156.
  37. Pritchett, L. (2023). Choose People. LaMP Forum. https://lampforum.org/2023/03/02/choose-people/
  38. Gray, M. L., and Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
  39. Dubal, V. (2023). On Algorithmic Wage Discrimination. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331080
  40. Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611
  41. Schneider, D., and Harknett, K. (2017, April). Schedule Instability and Unpredictability and Worker and Family Health and Well-being. In PAA 2017 Annual Meeting. PAA.
  42. Williams, J. et al. (2022). Stable scheduling study: Health outcomes report. https://ssrn.com/abstract=4019693
  43. Bell, S. A. (2022). AI and Job Quality: Insights from Frontline Workers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4337611
  44. Dzieza, J. (2020). Robots aren’t taking our jobs — They’re becoming our bosses. The Verge. https://tinyurl.com/5a9mxeuz
  45. Levy, K. (2022). Data Driven: truckers, technology, and the new workplace surveillance. Princeton University Press.
  46. Moore, P.V. (2017). The quantified self in precarity: Work, technology and what counts. Routledge.
  47. Scherer, M., and Brown, L. X. (2021). Warning: Bossware May Be Hazardous to Your Health. Center for Democracy and Technology. https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf.
  48. Brand, J., Dencik, L. and Murphy, S. (2023). The Datafied Workplace and Trade Unions in the UK. Data Justice Lab. https://datajusticeproject.net/wp-content/uploads/sites/30/2023/04/Unions-Report_final.pdf.
  49. Nurski, L., and Hoffmann, M. (2022). The Impact of Artificial Intelligence on the Nature and Quality of Jobs. Working Paper. Bruegel. https://tinyurl.com/2a943p8f
  50. Nanavaty, R. (2023). Interview with Reema Nanavaty, Self-Employed Women’s Association.
  51. Beane, M. (2022). Today's Robotic Surgery Turns Surgical Trainees into Spectators: Medical Training in the Robotics Age Leaves Tomorrow's Surgeons Short on Skills. IEEE Spectrum, 59(8), 32-37. https://tinyurl.com/wyhxukhk
  52. Gray, M. L., and Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
  53. Center for Democracy and Technology et al. 2022
  54. Buolamwini, J., and Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91). PMLR.
  55. Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. John Wiley and Sons.
  56. Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on human-computer interaction, 2(CSCW), 1-22.
  57. Rosales, A., and Fernández-Ardèvol, M. (2019). Structural ageism in big data approaches. Nordicom Review, 40(s1), 51-64.
  58. Klinova, K. (2022) Governing AI to Advance Shared Prosperity. In Justin B. Bullock et al. (Eds.), The Oxford Handbook of AI Governance. Oxford Handbooks.
  59. Park, H., Ahn, D., Hosanagar, K., and Lee, J. (2021, May). Human-AI interaction in human resource management: Understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-15).
  60. Bernhardt, A., Suleiman, R., and Kresge, L. (2021). Data and algorithms at work: the case for worker technology rights. https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf.
  61. Colclough, C.J. (2022). Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table. https://tinyurl.com/26ycnpv2
  62. Pasquale, F. (2020). New Laws of Robotics. Harvard University Press.
  63. Rodrik, D. (2022). 4 Prospects for global economic convergence under new technologies. An inclusive future? Technology, new dynamics, and policy challenges, 65.
  64. Anderson, E. (2019). Private Government: How Employers Rule Our Lives (and Why We Don’t Talk about it). Princeton University Press.
  65. Korinek, A. (2022). How innovation affects labor markets: An impact assessment.
  66. Institute for the Future of Work. (2023). Good Work Algorithmic Impact Assessment Version 1: An approach for worker involvement. https://tinyurl.com/mr4yn5yt
  67. Bernhardt, A., Suleiman, R., and Kresge, L. (2021). Data and algorithms at work: the case for worker technology rights. https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf.
  68. Colclough, C.J. (2022). Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table. https://tinyurl.com/26ycnpv2
  69. Brand, J., Dencik, L. and Murphy, S. (2023). The Datafied Workplace and Trade Unions in the UK. Data Justice Lab. https://datajusticeproject.net/wp-content/uploads/sites/30/2023/04/Unions-Report_final.pdf.
  70. Park, H., Ahn, D., Hosanagar, K., and Lee, J. (2021, May). Human-AI interaction in human resource management: Understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-15).
  71. Mateescu, A., and Elish, M. (2019). AI in context: the labor of integrating new technologies.
  72. Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction (pre-print). Engaging Science, Technology, and Society (pre-print).