
Responsible Practices for AI-Using Organizations (RPU)

After performing the High-Level Job Impact Assessment, consult our recommendations to help minimize the risks and maximize the opportunities to advance shared prosperity with AI.

Use of workplace AI is still in its early stages, so knowledge of what constitutes best practice for fostering shared prosperity remains preliminary. Below is a starter set of practices for AI-using organizations, aligned with increasing the likelihood of benefits to shared prosperity and decreasing the likelihood of harms to it. The list draws on early empirical research in the field, historical analogues for transformative workplace technologies, and theoretical frameworks yet to be applied in practice. For ease of use, the Responsible Practices are organized by the earliest AI system lifecycle stage at which each practice can be applied.

At an organizational level
RPU1. Make a public commitment to identify, disclose, and mitigate the risks of severe labor market impacts presented by AI systems you use

Labor practices and impacts are increasingly a part of suggested, proposed, or required non-financial disclosures. These disclosures include practices affecting human rights, management of human capital, and other social and employee issues. Regulatory authorities have suggested, proposed, or required these disclosures as material to investor decision-making, as well as for the benefit of the broader society. We recommend that AI-using organizations identify, disclose, and mitigate the risks of severe labor market impacts for the same rationales, as well as to provide both prospective and existing workers with the information they need to make informed decisions about their own employment. The public commitment to disclose severe risks should specify the severity threshold considered by the organization to warrant disclosure, as well as explain how the threshold level of severity was chosen and which external stakeholders were consulted in that decision. (PAI's Shared Prosperity Guidelines use the UNGP definition of severity: an impact, potential or actual, can be severe "by virtue of one or more of the following characteristics: its scale, scope or irremediability. Scale means the gravity of the impact on the human right(s). Scope means the number of individuals that are or could be affected. Irremediability means the ease or otherwise with which those impacted could be restored to their prior enjoyment of the right(s)." https://www.ungpreporting.org/glossary/severe-human-rights-impact/)

Alternatively, an organization can choose to set a threshold in terms of an AI system's marketed capabilities and disclose all risk signals that are present for systems meeting that threshold. For example, if an organization's expected return on investment from the use of an AI system under assessment is a multiple greater than 10, its corresponding risks would be subject to disclosure. In instances where organizational impact is driven by a series of smaller system implementations, the organization could choose to disclose all risk signals present once the cumulative cost decrease or revenue increase exceeds 5%. (A recent study of corporate respondents showed roughly one quarter were able to achieve a 5% improvement to EBIT in 2021. As AI adoption becomes more widespread, we anticipate more organizations will meet this threshold. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review#/)
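To make the threshold logic above concrete, here is a minimal sketch in Python. The threshold values (a 10x ROI multiple and a 5% cumulative cost/revenue effect) come from the examples in the text; all function names, field names, and data structures are illustrative assumptions, not part of the Guidelines.

```python
# Illustrative sketch of the disclosure-threshold logic described above.
# Threshold values mirror the examples in the text; names are hypothetical.

from dataclasses import dataclass, field

ROI_MULTIPLE_THRESHOLD = 10.0       # disclose if expected ROI exceeds 10x
CUMULATIVE_IMPACT_THRESHOLD = 0.05  # disclose once cumulative effect exceeds 5%

@dataclass
class AISystemAssessment:
    name: str
    expected_roi_multiple: float            # expected return / investment
    cost_decrease_pct: float = 0.0          # as a fraction, e.g. 0.02 == 2%
    revenue_increase_pct: float = 0.0
    risk_signals: list[str] = field(default_factory=list)

def signals_to_disclose(assessments: list[AISystemAssessment]) -> list[str]:
    """Return all risk signals subject to disclosure under either threshold."""
    disclosed: list[str] = []
    cumulative_impact = 0.0
    for a in assessments:
        cumulative_impact += a.cost_decrease_pct + a.revenue_increase_pct
        if (a.expected_roi_multiple > ROI_MULTIPLE_THRESHOLD
                or cumulative_impact > CUMULATIVE_IMPACT_THRESHOLD):
            disclosed.extend(a.risk_signals)
    return disclosed
```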

For additional resources on workplace impact assessment, including on worker involvement, see:

Institute for the Future of Work, Good Work Algorithmic Impact Assessment

Throughout the entire procurement process, from identification to use
RPU2. Commit to neutrality towards worker organizing and unionization
As outlined in the signals of risk above, AI systems pose numerous risks to workers' human rights and well-being. These systems are implemented and used in employment contexts that often grant such comprehensive decision-making power over workers that they can be described as "private governments" (Anderson, E. (2019). Private Government: How Employers Rule Our Lives (and Why We Don't Talk About It). Princeton University Press). As a counterbalance to this power, workers may choose to organize to collectively represent their interests. The degree to which this is protected, and the frequency with which it occurs, differs substantially by location. Voluntarily committing to neutrality towards worker organizing is an important way to ensure workers' agency is respected and that their collective interests have representation throughout the AI use lifecycle, if workers so choose (as is repeatedly emphasized as a critical provision in these Guidelines).
RPU3. In collaboration with affected communities, perform Job Impact Assessments early and often throughout AI system implementation and use

Run opportunity and risk analyses early and often across AI implementation and use, using the data available at each stage. Update as more data becomes available (for example, as objectives are identified, systems are procured, implementation is completed, and new applications arise). Whenever applicable, we suggest using AI system implementation and use choices to maximize the presence of signals of opportunity and minimize the presence of signals of risk. Solicit the input of workers who stand to be affected and a multidisciplinary set of independent experts when assessing the presence of opportunity and risk signals. (Workers who stand to be affected by the introduction of an AI system frequently include not only workers directly employed by the organization introducing AI into its own operations, but a wider set of current or potential labor market participants. It is therefore important that not only incumbent workers are given the agency to participate in job impact assessment and risk mitigation strategy development.) Make sure to compensate external contributors for their participation in the assessment of the AI system.

Please note that the analysis of opportunity and risk signals suggested here is different from the red team analysis suggested in RPU15. The former identifies risks and opportunities created by an AI system working exactly as intended; the latter identifies possible harms if the AI system in question malfunctions or is misused.

RPU4. In collaboration with affected communities, develop mitigation strategies for identified risks

In alignment with the UN Guiding Principles on Business and Human Rights, a mitigation strategy should be developed for each risk identified, prioritizing the risks primarily by severity of potential impact and secondarily by likelihood. Severity and likelihood of potential impact are determined on a case-by-case basis. (For determining the severity of potential quantitative impacts, such as impacts on wages and employment, especially in cases with limited uncertainty about the future uses of the AI system being assessed, a useful algorithm is described in Korinek, A. (2022). How innovation affects labor markets: An impact assessment. https://www.brookings.edu/research/how-innovation-affects-labor-markets-an-impact-assessment/) Mitigation strategies can range from eliminating the risk or reducing the severity of potential impact to ensuring access to remedy or compensation for affected groups. If effective mitigation strategies for a given risk are not available, this should be considered a strong argument in favor of meaningful changes to the development plans of the AI system, especially if it is expected to affect vulnerable groups.
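As a minimal illustration of the prioritization rule above (severity first, likelihood second), the following sketch orders a hypothetical risk register. The 1-5 ordinal scales, field names, and example risks are assumptions for illustration only; real assessments are case-by-case, as the text notes.

```python
# Hypothetical risk register ordered primarily by severity, secondarily by
# likelihood, per the prioritization rule described above.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int    # 1 (minor) .. 5 (severe in scale, scope, or irremediability)
    likelihood: int  # 1 (rare)  .. 5 (near-certain)

def prioritize(risks: list[Risk]) -> list[Risk]:
    """Sort risks: highest severity first, ties broken by likelihood."""
    return sorted(risks, key=lambda r: (r.severity, r.likelihood), reverse=True)

register = [
    Risk("Deskilling of dispatch roles", severity=3, likelihood=4),
    Risk("Wage suppression in monitored warehouse jobs", severity=5, likelihood=2),
    Risk("Loss of schedule control", severity=3, likelihood=2),
]

for risk in prioritize(register):
    print(f"S{risk.severity}/L{risk.likelihood}: {risk.description}")
```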

Engaging workers and external experts as needed in the creation of mitigation strategies is critical to ensure important considerations are not being missed. It is especially critical to engage with representatives of communities that stand to be affected. Please ensure that everyone engaged in consultations around assessing risks and developing mitigation strategies is adequately compensated.

RPU5. Create and use robust and substantive mechanisms for worker agency in identifying needs, selecting AI vendors and systems, and implementing them in the workplace

Workers who will use or be affected by AI hold unique perspectives on important needs and opportunities in their roles. They also possess particular insight into how AI systems could create harm in their workplaces. To ensure AI systems foster shared prosperity, these workers should be included and afforded agency in the AI procurement, implementation, and use process from start to finish (Institute for the Future of Work. (2023). Good Work Algorithmic Impact Assessment Version 1: An approach for worker involvement. https://tinyurl.com/mr4yn5yt). Workers must be properly equipped with knowledge of potential product functions, capabilities, and limitations, so that they can draw meaningful connections to their role-based knowledge (see RPU13 for more information). Additionally, care must be taken to create a shared vocabulary on the team, so that technical terms or jargon do not unintentionally obscure or mislead. Workers must also be given genuine decision-making power in the process, allowing them to shape use (such as new workflows or job design) and to have their calls to end a project taken seriously if they identify unacceptable harms that cannot be resolved.

RPU6. Ensure AI systems are used in environments with high levels of worker protections and decision-making power

AI systems are less likely to cause harm in environments with:

  • High levels of legal protection, monitoring, and enforcement for workers’ rights (such as those related to health and safety or freedom to organize)
  • High levels of worker voice and negotiating ability (due to strong protections for worker voice or high demand for workers’ comparatively scarce skills), especially those where workers have meaningful input into decisions regarding the introduction of new technologies

These factors encourage worker-centric AI design. Workers in such environments also possess a higher ability to limit harms from AI systems (such as changing elements of an implementation or rejecting the use of the technology as needed), including harms outside direct legal protections. This should not, however, be treated as a failsafe for harmful technologies: other practices in this list should also be followed to reduce risk to workers.

RPU7. Source data enrichment labor responsibly

Key requirements for the responsible sourcing of data enrichment services (such as data annotation and real-time human verification of algorithmic predictions) include:

  • Always paying data enrichment workers above the local living wage
  • Providing clear, tested instructions for data enrichment tasks
  • Equipping workers with simple and effective mechanisms for reporting issues, asking questions, and providing feedback on the instructions or task design

In collaboration with our Partners, PAI has developed a library of practitioner resources for responsible data enrichment sourcing.
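The first requirement above, paying above the local living wage, can be operationalized as a check on effective hourly pay for piece-rate work. The sketch below is a hedged illustration: the regions, wage figures, and function names are placeholders, and in practice the living-wage data would come from a maintained external source.

```python
# Minimal sketch of a pay-floor check for piece-rate data enrichment work.
# Living-wage figures are hypothetical placeholders in local currency per hour.

LIVING_WAGE_BY_REGION = {
    "nairobi": 450.0,
    "manila": 120.0,
}

def meets_living_wage(region: str, tasks_per_hour: float, pay_per_task: float) -> bool:
    """Check that effective hourly pay exceeds the local living wage."""
    effective_hourly_pay = tasks_per_hour * pay_per_task
    return effective_hourly_pay > LIVING_WAGE_BY_REGION[region]

# Example: 60 annotation tasks per hour at 8.0 per task in Nairobi.
assert meets_living_wage("nairobi", tasks_per_hour=60, pay_per_task=8.0)
```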

RPU8. Ensure workplace AI systems are not discriminatory
AI systems frequently reproduce or deepen discriminatory patterns in society, including those related to race, class, age, and disability. Workplace-specific systems have shown the same propensity. Careful vetting and use are needed to ensure that AI systems affecting workers or the economy do not produce discriminatory results.
When identifying needs, procuring, and implementing AI systems
RPU9. Procure AI systems that align with worker needs and preferences

AI systems welcomed by workers largely fall into three overarching categories:

  • Systems that directly improve some element of job quality
  • Systems that assist workers to achieve higher performance on their core tasks
  • Systems that eliminate undesirable non-core tasks (see OS2, OS9, RS1, and RS2 for additional detail)

Starting with one of these objectives in mind and creating robust participation mechanisms for workers throughout the design and implementation process is likely to result in win-win-wins for AI creators, employers who implement AI, and the workers who use or are affected by these systems.

RPU10. Staff and train sufficient internal or contracted expertise to properly vet AI systems and ensure responsible implementation

As discussed throughout, AI systems raise substantial concerns about the risks of their adoption in workplace settings. To understand and address these risks, experts are needed to vet and implement AI systems. In addition to technical experts, this includes sociotechnical experts capable of performing the Job Impact Assessment described above at the level of granularity necessary to fully identify and mitigate the risks of a specific system in a given workplace. The importance of this practice increases with AI system customization or integration. Where systems are developed by organizations that follow the Shared Prosperity Guidelines or similar recommendations, disclose potential labor impacts, and design their systems to be used off-the-shelf, less internal expertise may be required of users. However, when systems are more customized or more deeply integrated into workplaces, organization- and worksite-specific factors more heavily influence the labor impacts arising from the particulars of system use, requiring additional expertise.

RPU11. Prefer vendors who commit to following PAI’s Shared Prosperity Guidelines or similar recommendations
The benefit to workers and society from following these practices can be meaningfully undermined if organizations designing and selling the AI system do not do their part to advance shared prosperity. We encourage users to make developer adherence to PAI’s Guidelines or similar recommendations a priority when selecting vendors and systems for use.
RPU12. Ensure transparency about what worker data is collected, how it will be used, and why, and enable workers to opt out

Privacy and ownership over data generated by one's activities are increasingly recognized as rights inside and outside the workplace. Respect for these rights requires fully informing workers about the data collected on them and the inferences made from it, how both are used and why, as well as offering workers the ability to opt out of collection and use (Bernhardt, A., Suleiman, R., and Kresge, L. (2021). Data and algorithms at work: the case for worker technology rights. https://laborcenter.berkeley.edu/wp-content/uploads/2021/11/Data-and-Algorithms-at-Work.pdf). Workers should also be given the opportunity to individually or collectively forbid the sale of datasets that include their personal information or personally identifiable information. Depending on use, generative AI may present novel privacy risks by extracting information about worker practices and sharing it with managers and colleagues.

System design and use should follow the data minimization principle: collect only the necessary data, for the necessary purpose, and hold it only for the necessary amount of time. Design should also enable workers to know about, correct, or delete inferences about them (Colclough, C.J. (2022). Righting the Wrong: Putting Workers' Data Rights Firmly on the Table. https://tinyurl.com/26ycnpv2).

Particular care must be taken in workplaces, as the power imbalance between employer and employee undermines workers' ability to freely consent to data collection and use compared to other, less coercive contexts. In practice, employers' data use decisions often shift over time, making it especially important for AI-using organizations to explicitly and transparently inform workers about each new use of their data and its implications, and to request consent for each new use or repurposing (Brand, J., Dencik, L., and Murphy, S. (2023). The Datafied Workplace and Trade Unions in the UK. Data Justice Lab. https://datajusticeproject.net/wp-content/uploads/sites/30/2023/04/Unions-Report_final.pdf).
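The data minimization and per-use consent principles above can be made concrete in system design. The following sketch shows one possible shape, under stated assumptions: the retention window, record structure, and method names are all hypothetical, and a production system would also need audit logging and correction/deletion paths.

```python
# Sketch of per-purpose consent tracking and data minimization: each new use
# of worker data requires explicit, separate consent, and records expire after
# a fixed retention period. All names and values are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # hold data only as long as necessary

@dataclass
class WorkerDataRecord:
    worker_id: str
    collected_at: datetime
    consented_purposes: set[str] = field(default_factory=set)

    def grant_consent(self, purpose: str) -> None:
        """Record the worker's explicit consent for one named purpose."""
        self.consented_purposes.add(purpose)

    def may_use_for(self, purpose: str, now: datetime) -> bool:
        """Permit use only for consented purposes within the retention window."""
        within_retention = now - self.collected_at <= RETENTION
        return within_retention and purpose in self.consented_purposes

record = WorkerDataRecord("w-042", collected_at=datetime(2024, 1, 8))
record.grant_consent("scheduling")
# Repurposing (e.g., performance scoring) is blocked until consent is granted:
assert not record.may_use_for("performance_scoring", now=datetime(2024, 2, 1))
```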

RPU13. Provide meaningful, comprehensible explanations of the AI system’s function and operation to workers overseeing it, using it, or affected by it
The field of explainable AI has advanced considerably in recent years, but workers remain an underrepresented audience for AI model explainability efforts (Park, H., Ahn, D., Hosanagar, K., and Lee, J. (2021, May). Human-AI interaction in human resource management: Understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-15)). Providing managers and workers with explanations of workplace AI systems tailored to the particulars of their roles and goals enables them to understand the tools' strengths and weaknesses. Paired with workers' existing subject matter expertise in their own roles, this knowledge equips managers and workers to most effectively attain the upsides and minimize the downsides of AI systems, so that AI systems can enhance overall job quality across the different dimensions of well-being.
RPU14. Establish human recourse into decisions or recommendations offered, including the creation of transparent, human-decided grievance redress mechanisms
AI systems have been built to hire workers, manage them, assess their performance, and promote or fire them. AI is also being used to assist workers with their tasks, coach them, and complete tasks previously assigned to them. In each of these decisions allocated to AI, the technologies suffer from issues of both accuracy and comprehensiveness: AI systems lack the human capacity to bring in additional context relevant to the issue at hand. As a result, humans are needed to validate, refine, or override AI outputs. In the case of task completion, an absence of human involvement can harm physical, intellectual, or emotional well-being. In employment decisions, it can result in unjustified hiring or firing. Simply placing a human "in the loop" is insufficient to overcome automation bias: the demonstrated pattern of deferring to the judgment of algorithmic systems. Care must be taken to accurately convey the strengths and weaknesses of AI systems and to empower humans with final decision-making power.
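One way to structure the recourse pattern described above is to treat the AI output as advisory only, with a named human reviewer recording the final decision and a rationale that can feed a grievance redress process. The sketch below is a minimal illustration under those assumptions; every identifier in it is hypothetical.

```python
# Minimal sketch of human recourse: the AI system only recommends; a named
# human reviewer records the final decision, and a rationale is always kept
# so decisions can be revisited through grievance redress. Names are illustrative.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Decision:
    subject_id: str
    ai_recommendation: str   # e.g. "reject", "promote"; advisory context only
    final_decision: str      # set by the human reviewer, never by the system
    reviewer: str
    rationale: str
    decided_at: datetime

def decide(subject_id: str, ai_recommendation: str,
           reviewer: str, final_decision: str, rationale: str) -> Decision:
    """Record a human-owned decision; a rationale is required even when agreeing."""
    if not rationale:
        raise ValueError("A human rationale is required, even when agreeing with the AI.")
    return Decision(subject_id, ai_recommendation, final_decision,
                    reviewer, rationale, decided_at=datetime.now())

# A reviewer overrides the system after adding context the model lacked:
d = decide("cand-17", ai_recommendation="reject", reviewer="j.ortiz",
           final_decision="advance", rationale="Relevant experience missed by parser")
```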
RPU15. Red team AI systems for potential misuse or abuse
The preceding points have focused on AI systems working as designed and intended. Responsible use also requires comprehensive "red teaming" of AI systems to identify vulnerabilities and the potential for misuse or abuse. Managers, workers in relevant roles, and external experts should all test the system for misuse and abusive implementation.
RPU16. Recognize extra work created by AI system use and ensure work is acknowledged and compensated
The red-teaming practice above addresses intentional misuse or abuse. More routinely, AI systems fail to work as marketed or intended in ways big and small, creating additional tasks for workers to absorb. New tasks generated by the gap between AI system expectations and realities often go unrecognized, leaving workers to shoulder extra responsibilities without additional time to complete them or compensation for doing so (Mateescu, A., and Elish, M. (2019). AI in context: The labor of integrating new technologies; Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction (pre-print). Engaging Science, Technology, and Society). Address this issue by holding routine reviews with the workers who use or oversee systems to identify areas of new work and adjust accordingly.
RPU17. Ensure mechanisms are in place to share productivity gains with workers
The power and responsibility to share productivity gains from AI system implementation lie largely with AI-using organizations, which make the final decisions about wages, benefits, working hours, job design, worker retraining and reskilling, and more. To the extent that AI systems deliver cost savings and/or higher revenues via increased worker productivity, AI-using organizations hold authority over how to allocate the increased margins. As highlighted in OS7, AI systems present a major opportunity to improve workers' well-being, financial and otherwise, by maintaining or increasing workers' share of revenue without decreasing absolute returns to owners or shareholders.
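As a worked illustration of the allocation decision described above, the following arithmetic sketch splits a hypothetical annual gain between a worker pool and owner returns. The figures and the 50% split are invented for illustration; the Guidelines do not prescribe any particular formula.

```python
# Worked example (illustrative numbers only) of allocating productivity gains
# from an AI deployment between workers and owners, per RPU17.

annual_gain = 2_000_000.00   # hypothetical cost savings plus added revenue
worker_share = 0.50          # policy choice: split gains evenly

worker_pool = annual_gain * worker_share   # e.g. raises, bonuses, retraining
owner_pool = annual_gain - worker_pool     # remainder flows to margin

headcount = 400
per_worker = worker_pool / headcount
print(f"Per-worker annual share: {per_worker:,.2f}")  # 2,500.00
```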
