Redesigning AI for Shared Prosperity: an Agenda

PAI Staff

Artificial intelligence is expected to contribute trillions of dollars to global GDP over the coming decade, but these gains may not occur equitably or be shared widely. Today, many communities around the world face persistent underemployment, driven in part by technological advances that have divided workers into cohorts of haves and have-nots. If AI advancement continues on its current trajectory, it could accelerate this economic exclusion.

This is not the only trajectory AI could be on: shifting the emphasis from automating human tasks to genuinely complementing human workers can raise those workers’ productivity while making jobs safer, more stable and rewarding, and less physically exhausting. Redesigning AI for Shared Prosperity: an Agenda is a foundational document of the AI and Shared Prosperity Initiative outlining the practical questions stakeholders need to answer collectively in order to steer AI toward expanding access to good jobs—and away from eliminating them. We are sharing this living Agenda with the community to inform aligned efforts, and we invite all interested stakeholders to take part in the work. (Read our press release on the Agenda here.)

The Agenda, developed under the close guidance of the Initiative’s Steering Committee and based on their deliberations, calls for the creation of shared prosperity targets: verifiable criteria the AI industry must meet to support the future of workers. These targets would consist of commitments by AI companies to create (and not destroy) good jobs—well-paying, stable, honored, and empowered ones—across the globe. The commitments could be adopted by the AI industry players either voluntarily or with regulatory encouragement.

To date, no metrics have been developed to assess the impacts of AI on job availability, wages, and quality. Additionally, no targets have been set to ensure new products do not harm workers, either in aggregate or by category of potential vulnerability. Without clear metrics and commitments, efforts to steer AI in directions that benefit workers and society are susceptible to unbacked claims of human complementarity or human augmentation. Currently, such claims are frequently made by organizations that, in reality, produce job-displacing technology or employ worker-exploiting tactics (such as invasive surveillance) to generate productivity gains. We expect that organizations genuinely seeking to complement and benefit workers with their technology would be most interested in measuring and disclosing their impact on the availability of good jobs, helping differentiate themselves from industry actors seeking to sell technologies that enable worker exploitation masked as “worker-augmenting AI.”

The success of the targets to be developed relies on their support by critical stakeholders in the AI development and implementation ecosystem: workers, private sector stakeholders, governments, and international organizations. Support within and across multiple stakeholder categories is particularly important given the diffuse nature of AI’s development and deployment: technologies are often created by different companies, and in different geographies, than those where they are implemented. Directing AI in service of expanding access to good jobs offers opportunities as well as complex challenges for each set of stakeholders. The Agenda outlines the questions that need to be resolved in order to align the incentives, interests, and relative powers of key stakeholders in pursuit of a shared prosperity-advancing path for AI.

As an immediate next step, the Initiative is working to conduct thorough research on workers’ experiences of AI in the workplace. The research aims to identify the key categories of impact on job quality to be included in the shared prosperity targets, as well as the most effective ways to empower workers throughout the AI development and deployment process. If you are an employer or worker-organizing group potentially interested in participating in this research, please get in touch to learn more about our research and how you can contribute.

It is our hope that this Agenda will catalyze research and debate around automation, the future of work, and the equitable distribution of the economic gains of AI, and specifically around steering AI’s progress to reduce inequality and support sustainable economic and social development. PAI also enthusiastically invites collaboration on the design of shared prosperity targets. For more information on the AI and Shared Prosperity Initiative and how to get involved, please visit shared-prosperity-initiative.

To read the Agenda’s Executive Summary, click here. To read “Redesigning AI for Shared Prosperity: an Agenda” in full, click here.


Managing the Risks of AI Research: Six Recommendations for Responsible Publication

PAI Staff

Once a niche research interest, artificial intelligence (AI) has quickly become a pervasive aspect of society with increasing influence over our lives. In turn, open questions about this technology have, in recent years, transformed into urgent ethical considerations. The Partnership on AI’s (PAI) new white paper, “Managing the Risks of AI Research: Six Recommendations for Responsible Publication,” addresses one such question: Given AI’s potential for misuse, how can AI research be disseminated responsibly?

Many research communities, such as biosecurity and cybersecurity, routinely work with information that could be used to cause harm, either maliciously or accidentally. These fields have thus established their own norms and procedures for publishing high-risk research. Thanks to breakthrough advances, AI technology has progressed rapidly in the past decade, giving the AI community less time to develop similar practices.

Recent pilots, such as OpenAI’s “staged release” of GPT-2 and the “broader impact statement” requirement at the 2020 NeurIPS conference, demonstrate a growing interest in responsible AI publication norms. Effectively anticipating and mitigating the potential negative impacts of AI research, however, will require a community-wide effort. As a first step towards developing responsible publication practices, this white paper provides recommendations for three key groups in the AI research ecosystem:

  • Individual researchers, who should disclose and report additional information in their papers and normalize discussion about the downstream consequences of research.
  • Research leadership, which should review potential downstream consequences earlier in the research pipeline and commend researchers who identify negative downstream consequences.
  • Conferences and journals, which should expand peer review criteria to include engagement with potential downstream consequences and establish separate review processes to evaluate papers based on risk and downstream consequences.

Additionally, this white paper includes an appendix which seeks to disambiguate a variety of terms related to responsible research which are often conflated: “research integrity,” “research ethics,” “research culture,” “downstream consequences,” and “broader impacts.”

This document is intended as a basis for further discussion, and we seek feedback on it to inform future iterations of its recommendations. Our aim is to help build the field’s capacity to anticipate downstream consequences and mitigate potential risks.

To read “Managing the Risks of AI Research: Six Recommendations for Responsible Publication” in full, click here.


Framework for Promoting Workforce Well-being in the AI-Integrated Workplace

PAI Staff

Executive Summary

The Partnership on AI’s “Framework for Promoting Workforce Well-being in the AI-Integrated Workplace” provides a conceptual framework and a set of tools to guide employers, workers, and other stakeholders towards promoting workforce well-being throughout the process of introducing AI into the workplace.

As AI technologies become increasingly prevalent in the workplace, our goal is to place workforce well-being at the center of this technological change and the resulting transformation of work and society, and to provide a starting point for discussing and creating pragmatic solutions.

The paper categorizes the aspects of workforce well-being that should be prioritized and protected throughout AI integration into six pillars. Human rights, the first pillar, supports all aspects of workforce well-being. The five additional pillars are physical, financial, intellectual, and emotional well-being, as well as purpose and meaning. The Framework presents a set of recommendations that organizations can use to guide their thinking about promoting well-being throughout the integration of AI in the workplace.

The Framework is designed to initiate and inform discussions on the impact of AI that strengthen the reciprocal obligations between workers and employers, while grounding that discourse in six pillars of worker well-being.

We recognize that the impacts of AI are still emerging and are often difficult to distinguish from those of broader digital transformation, leaving organizations challenged to address the unknown and potentially fundamental changes that AI may bring to the workplace. We strongly advise that management collaborate with workers directly, or with worker representatives, in the development, integration, and use of AI systems in the workplace, as well as in the discussion and implementation of this Framework.

We acknowledge that the contexts for having a dialogue on worker well-being may differ. For instance, some countries have formal structures in place, such as workers’ councils, that facilitate the dialogue between employers and workers. In other cases, countries or sectors have neither these institutions nor a tradition of dialogue between the two parties. In all cases, the aim of this Framework is to be a useful tool for all parties to collaboratively ensure that the introduction of AI technologies goes hand in hand with a commitment to worker well-being. The importance of making such a commitment in earnest has been highlighted by the COVID-19 public health and economic crises, which exposed and exacerbated long-standing inequities in the treatment of workers. Making sure those inequities are not perpetuated further with the introduction of AI systems into the workplace requires deliberate effort and will not happen automatically.

Recommendations

This section articulates a set of recommendations to guide organizational approaches and thinking on what to promote, what to be cognizant of, and what to protect against, in terms of worker and workforce well-being while integrating AI into the workplace. These recommendations are organized along the six well-being pillars identified above, and are meant to serve as a starting place for organizations seeking to apply the present Framework to promote workforce well-being throughout the process of AI integration. Ideally, these can be recognized formally as organizational commitments at the board level and subsequently discussed openly and regularly with the entire organization.

The “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” is a product of the Partnership on AI’s AI, Labor, and the Economy (AILE) Expert Group, formed through a collaborative process of research, scoping, and iteration. In August 2019, at a workshop called “Workforce Well-being in the AI-Integrated Workplace” co-hosted by PAI and the Ford Foundation, this work received additional input from experts, academics, industry, labor unions, and civil society. Though this document reflects the inputs of many PAI Partner organizations, it should not under any circumstances be read as representing the views of any particular organization or individual within this Expert Group, or any specific PAI Partner.

Acknowledgements

The Partnership on AI is deeply grateful for the input of many colleagues and partners, especially Elonnai Hickok, Ann Skeet, Christina Colclough, Richard Zuroff, Jonnie Penn as well as the participants of the August 2019 workshop co-hosted by PAI and the Ford Foundation. We thank Arindrajit Basu, Pranav Bidaire, and Saumyaa Naidu for the research support.


The Ethics of AI and Emotional Intelligence

PAI Staff

About the Paper

2019 seemed to mark a turning point in the deployment and public awareness of artificial intelligence designed to recognize emotions and expressions of emotion. The experimental use of AI spread across sectors and moved beyond the internet into the physical world. Stores used AI perceptions of shoppers’ moods and interest to display personalized public ads. Schools used AI to quantify student joy and engagement in the classroom. Employers used AI to evaluate job applicants’ moods and emotional reactions in automated video interviews and to monitor employees’ facial expressions in customer service positions.

It was a year notable for increasing criticism and governance of AI related to emotion and affect. A widely cited review of the literature by Barrett and colleagues questioned the underlying science for the universality of facial expressions and concluded there are insurmountable difficulties in inferring specific emotions reliably from pictures of faces [1]. The affective computing conference ACII added its first panel on the misuses of the technology, with the aim of increasing discussion within the technical community on how its research was impacting society [2]. Surveys on public attitudes in the U.S. [3] and the U.K. [4] found that almost all of those polled found some current advertising and hiring uses of mood detection unacceptable. Some U.S. cities and states started to regulate private [5] and government [6] use of AI related to affect and emotions, including restrictions in some data protection legislation and face recognition moratoria.

For example, the California Consumer Privacy Act (CCPA), which went into effect January 1, 2020, gives Californians the right to notification about what kinds of data a business is collecting about them and how it is being used, and the right to demand that businesses delete their biometric information [7]. Biometric information, as defined in the CCPA, includes many kinds of data that are used to make inferences about emotion or affective state, including imagery of the iris, retina, and face, voice recordings, and keystroke and gait patterns and rhythms [8].

All of this is happening against a backdrop of increasing global discussions, reports, principles, white papers, and government action on responsible, ethical, and trustworthy AI. The OECD’s AI Principles, adopted in May 2019 and supported by more than 40 countries, aim to ensure AI systems are designed to be robust, safe, fair, and trustworthy [9]. In February 2020, the European Commission released a white paper, “On Artificial Intelligence – A European approach to excellence and trust,” setting out policy options for the twin objectives of promoting the uptake of AI and addressing the risks associated with certain uses of it [10]. In June 2020, the G7 nations and eight other countries launched the Global Partnership on AI, a coalition aimed at ensuring that artificial intelligence is used responsibly and respects human rights and democratic values [11].

At its best, if artificial intelligence is able to help individuals better understand and control their own emotional and affective states, including fear, happiness, loneliness, anger, interest and alertness, there is enormous potential for good. It could greatly improve quality of life and help individuals meet long term goals. It could save many lives now lost to suicide, homicide, disease, and accident. It might help us get through the global pandemic and economic crisis.

At its worst, if artificial intelligence can automate the ability to read or control others’ emotions, it has substantial implications for economic and political power and individuals’ rights.

Governments are thinking hard about AI strategy, policy, and ethics. Now is the time for a broader public debate about the ethics of artificial intelligence and emotional intelligence, while those policies are being written, and while the use of AI for emotions and affect is not yet well entrenched in society. Applications are broad, across many sectors, but most are still in early stages of use.

Sources Cited

  1. Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Corrigendum: Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(3), 165–166. https://doi.org/10.1177/1529100619889954
  2. Valstar, M., Gratch, J., Tao, J., Greene, G., & Picard, P. (2019, September 4). Affective computing and the misuse of “our” technology/science [Panel]. 8th International Conference on Affective Computing & Intelligent Interaction, Cambridge, United Kingdom.
  3. Only 15% of Americans polled said it was acceptable for advertisers to use facial recognition technology to see how people respond to public advertising displays. It is unclear whether the 54% of respondents who said it was not acceptable were objecting to the use of facial analysis to detect emotional reaction to ads or the association of identification of an individual through facial recognition with some method of detecting emotional response. See Smith, A. (2019, September 5). More than half of U.S. adults trust law enforcement to use facial recognition responsibly. Pew Research Center. https://www.pewresearch.org/internet/2019/09/05/more-than-half-of-u-sadults-trust-law-enforcement-to-use-facial-recognition-responsibly/
  4. Only 4% of those polled in the U.K. approved of analysing faces (using “facial recognition technologies”, which the report defined as including detecting affect) to monitor personality traits and mood of candidates when hiring. Ada Lovelace Institute (2019, September). Beyond face value: public attitudes to facial recognition technology [Report], 11. Retrieved from https://www.adalovelaceinstitute.org/wp-content/uploads/2019/09/Public-attitudes-to-facial-recognition-technology_v.FINAL_.pdf
  5. SB-1121 California Consumer Privacy Act of 2018, AB-375 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121 See also proposed housing bills, No Biometrics Barriers to Housing Act. https://drive.google.com/file/d/1w4ee-poGkDJUkcEMTEAVqHNunplvR087/view [proposed U.S. federal] and Senate bill S5687 [proposed New York state] https://legislation.nysenate.gov/pdf/bills/2019/S5687
  6. See Bill S.1385 [MA face recognition bill in process, as of June 23, 2020]. https://malegislature.gov/Bills/191/S1385/Bills/Joint and AB-1215 Body Camera Accountability Act [Bill enacted in CA] https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201920200AB1215.
  7. The CCPA gives rights to California residents against a corporation or other legal entity operating for the financial benefit of its owners doing business in California that meets a certain revenue or data volume threshold. SB-1121 California Consumer Privacy Act of 2018, AB-375 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121
  8. California Consumer Privacy Act of 2018, AB-375 (2018).
  9. Forty-two countries adopt new OECD Principles on Artificial Intelligence. OECD. Retrieved March 22, 2019, from https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.html
  10. European Commission. White paper On artificial intelligence – A European approach to excellence and trust, 1. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
  11. Joint statement from founding members of the global partnership on artificial intelligence. Government of Canada. Retrieved July 23, 2020, from https://www.canada.ca/en/innovation-science-economic-development/news/2020/06/joint-statement-from-foundingmembers-of-the-global-partnership-on-artificial-intelligence.html

On the Legal Compatibility of Fairness Definitions

PAI Staff

Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. However, we go further to demonstrate that these definitions often misunderstand the legal concepts from which they purport to be inspired, and consequently inappropriately co-opt legal language. In this paper, we demonstrate examples of this misalignment and discuss the differences in ML terminology and their legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus this paper on U.S. anti-discrimination law since the ML fairness research community regularly references terms from this body of law.

READ THE BLOG POST  READ THE PAPER


Bringing Facial Recognition Systems To Light

PAI Staff

An Introduction to PAI’s Facial Recognition Systems Project

Facial recognition. What do you think of when you hear that term? How do these systems know your name? How accurate are they? And what else can they tell you about someone whose image is in the system?

These questions and others led the Partnership on AI (PAI) to begin the facial recognition systems project. During a series of workshops with our partners, we discovered it was first necessary to grasp how these systems work. The result was PAI’s paper “Understanding Facial Recognition Systems,” which defines the technology used in systems that attempt to verify who someone says they are or identify who someone is.

A productive discussion about the roles of these systems in society starts when we speak the same language and understand the importance and meaning of technical terms such as “training the system,” “enrollment database,” and “match thresholds.”

Let’s begin — keeping in mind that the graphics below do not represent any specific system, and are meant only to illustrate how the technology works.

How Facial Recognition Systems Work

Understanding how facial recognition systems work is essential to being able to examine the technical, social & cultural implications of these systems.

Let’s describe how a facial recognition system works. First, the system detects whether an image contains a face. If so, it then tries to recognize the face in one of two ways:

During facial verification: The system attempts to verify the identity of the face. It does so by determining whether the face in the image matches a specific face previously stored in the system.

During facial identification: The system attempts to predict the identity of the face. It does so by determining whether the face in the image potentially matches any of the faces previously stored in the system.

Let’s look at these steps in greater detail

A facial recognition system first needs to be trained. Two main factors influence how the system performs: the quality of the images (such as the angle, lighting, and resolution) and the diversity of the faces in the dataset used to train the system.

An enrollment database consisting of faces and names is also created. The faces can also be stored in the form of templates.

The first step in using any facial recognition system is submitting a probe image, derived from either a photo or a video, to the system. The system then detects the face in the image and creates a template.


There are two paths that can be taken

The template derived from the probe image can be compared to a single template in the enrollment database. This “1:1” process is called facial verification.

Alternatively, the template derived from the probe image can be compared to all templates in the enrollment database. This “1:MANY” process is called facial identification.


An interactive graphic in the original lets readers drag a slider to see how the match threshold determines whether a comparison score is treated as a match.


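To make the two comparison paths concrete, below is a minimal, illustrative sketch of how a system might compare a probe template against enrolled templates. It is not drawn from any specific product; the function names and the 0.6 threshold are hypothetical, chosen only to show the logic of 1:1 verification, 1:MANY identification, and a match threshold.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity score between two face templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_template, enrolled_template, threshold=0.6):
    """1:1 facial verification: does the probe match one specific enrolled template?"""
    return cosine_similarity(probe_template, enrolled_template) >= threshold

def identify(probe_template, enrollment_db, threshold=0.6):
    """1:MANY facial identification: which enrolled identity, if any, best matches the probe?"""
    scores = {name: cosine_similarity(probe_template, tpl)
              for name, tpl in enrollment_db.items()}
    best_name, best_score = max(scores.items(), key=lambda item: item[1])
    # The match threshold decides whether the best score counts as a match at all:
    # raising it reduces false matches but increases false non-matches.
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score
```

In a real deployment, the templates would come from a learned face-embedding model, and the threshold would be tuned to balance false matches against false non-matches for the intended use.
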

Beyond facial recognition

Sometimes facial recognition systems are described as including facial characterization (also called facial analysis) systems, which detect facial attributes in an image, and then sort the faces by categories such as gender, race, or age. These systems are not part of facial recognition systems because they are not used to verify or predict an identity.


Explainable Machine Learning in Deployment

PAI Staff

Organizations and policymakers around the world are turning to Explainable AI (XAI) as a means of addressing a range of AI ethics concerns. PAI’s recent research paper, Explainable Machine Learning in Deployment, is the first to examine how ML explainability techniques are actually being used. We find that in its current state, XAI best serves as an internal resource for engineers and developers, rather than for providing explanations to end users. Additional improvements to XAI techniques are necessary in order for them to truly work as intended, and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.
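
To give a purely illustrative sense of what using an explainability technique as an internal engineering resource can look like, the sketch below computes permutation feature importance for a trained model with scikit-learn. The dataset, model, and workflow are hypothetical and are not drawn from the paper; they simply show the kind of internal debugging the findings point to.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical internal debugging session: which features does the model actually rely on?
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop in test score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda kv: kv[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Output like this is useful to the engineer tuning the model, which reflects the paper’s finding that XAI currently serves developers better than end users, who would need quite different explanations.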

READ THE BLOG POST  READ THE PAPER


Human-AI Collaboration Trust Literature Review: Key Insights and Bibliography

PAI Staff

Key Insights from a Multidisciplinary Review of Trust Literature

Understanding trust between humans and AI systems is integral to promoting the development and deployment of socially beneficial and responsible AI. Successfully doing so warrants multidisciplinary collaboration.

In order to better understand trust between humans and artificially intelligent systems, the Partnership on AI (PAI), supported by members of its Collaborations Between People and AI Systems (CPAIS) Expert Group, conducted an initial survey and analysis of the multidisciplinary literature on AI, humans, and trust. This project includes a thematically-tagged Bibliography with 78 aggregated research articles, as well as an overview document presenting seven key insights.

These key insights, themes, and aggregated texts can serve as fruitful entry points for those investigating the nuances in the literature on humans, trust, and AI, and can help align understandings related to trust between people and AI systems. This work can also inform future research, which should investigate gaps in the research and our bibliography to improve our understanding of how human-AI trust facilitates, or sometimes hinders, the responsible implementation and application of AI technologies.

Key Insights

Several high-level insights emerged when reflecting on the bibliography of submitted articles:

  1. There is a presupposition that trust in AI is a good thing, with limited consideration of distrust’s value.
    The original project proposal emphasized a need to understand the literature on humans, AI, and trust in order to eventually determine appropriate levels of trust and distrust between AI and humans in different contexts. However, the articles included in the bibliography are largely framed with the need and motivation towards trust – not distrust – between AI systems and humans. While certain instances may warrant facilitated trust between humans and AI, others may actually enable more socially beneficial outcomes if they prompt distrust or caution. Future literature should explore distrust as related, but not necessarily directly opposite, to the concept of trust. For example, an AI system that helps doctors detect cancer cells is only useful if the human doctor and patient trust that information. In contrast, individuals should remain skeptical of AI systems designed to induce trust for malevolent purposes, such as AI-generated malware that may use data to more realistically mimic the conversational style of a target’s closest friends.
  2. Many of the articles were published before the Internet’s ubiquity/the social implications of AI became a central research focus.
    It is important to contextualize recent literature on intelligent systems and humans with literature focused on social and cognitive mechanisms undergirding human to human, or human to organizational, trust. Future work can put many of the foundational, conceptual articles that were written before the 21st century in conversation with those specifically focused on the context of AI systems, and their different use cases. It can also compare foundational, early articles’ exploration of trust with how trust is seen specifically in relation to humans interacting with AI.
  3. Trust between humans and AI is not monolithic: Context is vital.
    Trust is not all or nothing. There often exist varying degrees of trust, and the level of trust sufficient to deploy AI in different contexts is therefore an important question for future exploration. There might also be several layers of trust to secure before someone might trust and perhaps ultimately use an AI tool. For example, one might trust the data upon which an intelligent system was trained, but not the organization using that data, or one might trust a recommender system or algorithm’s ability to provide useful information, but not the specific platform upon which it is delivered. The implications of this multifaceted trust between human and AI systems, as well as its implications on adoption and use, should be explored in future research.
  4. Promoting trust is often presented simplistically in the literature.
    The majority of the literature appears to assert that not only are AI systems inherently deserving of trust, but also that people need guidance in order to trust the systems. The basic formula is that explanation will demonstrate trustworthiness, and once understood to be deserving of trust, people will use AI. Both of these conceptual leaps are contestable. While explaining the internal logic of AI systems does, in some instances, improve confidence for expert users, in general, providing simplified models of the internal workings of AI has not been shown to be helpful or to increase trust.
  5. Articles make different assumptions about why trust matters.
    Within our corpus, we found a range of implicit assumptions about why fostering and maintaining trust is important and valuable. The dominant stance is that trust is necessary to ensure that people will use AI. The link between trust and adoption is tenuous at best, as people often use technologies without trusting them. What is largely consistent across the corpus – with the exception of some papers concerned about the dangers of overtrust in AI – is the goal of fostering more trust in AI, or stated differently, the premise that more trust is inherently better than less trust. This premise needs challenging. A more reasonable goal would be for people to be able to make individual assessments about which AI they ought to trust and which they ought not, for which specific purposes, and in which circumstances. This connects to insight 1: There is a presupposition that trust in AI is a good thing. It is important to think about context, person-level motivations and preferences, as well as instances in which trust might not be a precondition for use or adoption.
  6. AI definitions differ between publications.
    The lack of consistent definitions for AI within our corpus makes it difficult to compare findings. Most articles do not present a formal definition of AI, as they are concerned with a particular intelligent system applied in a specific domain. The systems in question differ in significant ways, in terms of the types of users who may need to trust the system, the types of outputs that a person may need to trust, and the contexts in which the AI is operating (e.g., high- vs. low-stakes environments). It is likely that these entail different strategies as they relate to trust. There is a need to develop a framework for understanding how these different contributions relate to each other, potentially looking not at trust in AI, but at trust in different facets and applications of AI. For a more detailed analysis of what questions to ask to differentiate particular types of human-AI collaboration, see the PAI CPAIS Human-AI Collaboration Framework.
  7. Institutional trust is underrepresented.
    Institutional trust might be especially relevant in the context of AI, where there is often a competence or knowledge gap between everyday users and those developing the AI technologies. Everyday users, lacking high levels of technical capital and knowledge, may find it difficult to make informed judgments of particular AI technologies; in the absence of this knowledge, they may rely on generalized feelings of institutional trust.

About the Bibliography

The CPAIS Trust Literature Bibliography includes 78 thematically tagged research articles (with references and abstracts). The article selection process sourced content from a multidisciplinary community aligned around an interest and expertise in human-AI collaboration. Submitted articles were evaluated for inclusion and analyzed by members of a smaller project group from within the PAI Partner community. An analysis of the almost 80 initial articles resulted in the development of four thematic tags, highlighting the ways the article abstracts approached the issue of trust. Specifically:

  • Understanding – lays out a conceptual framework for trust or is primarily a survey of trust-related issues
  • Promoting – focuses on means for increasing trust
  • Receiving – focuses on the entity (e.g., a robot, a system, a website) that is trusted
  • Impacting – focuses on the nature of changes due to trust being present (e.g., the impact on a group or an organization when it experiences trust)

Two individuals from the smaller project group undertook a thematic tagging exercise to assess inter-rater reliability and the distribution of themes across articles. They tagged each article with primary and secondary (first and second order) themes from the four thematic options above.

The CPAIS Trust Literature Bibliography identifies thematic tags for each article, at levels 1 and 2. The “themes” column lists the first order themes and the second order themes, where applicable. The total tags for each article (at both levels) are also provided. “Understanding trust” was the most frequent theme – used with 61 articles (78% of the total). 50 articles (64%) were tagged with “promoting trust,” and 29 articles (37%) were tagged with “receiving trust”. Finally, 13 articles (16%) were tagged with a focus on impacting trust.
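
For readers curious about the mechanics of such an exercise, here is a minimal sketch (using made-up tags rather than the project’s actual data) of how a theme distribution and a chance-corrected agreement statistic such as Cohen’s kappa can be computed for two raters’ primary tags.

```python
from collections import Counter

THEMES = ["understanding", "promoting", "receiving", "impacting"]

# Hypothetical primary tags from two raters for a handful of articles.
rater_a = ["understanding", "promoting", "understanding", "receiving", "promoting", "impacting"]
rater_b = ["understanding", "promoting", "understanding", "promoting", "promoting", "impacting"]

def distribution(tags):
    """Share of articles assigned to each theme by one rater."""
    counts = Counter(tags)
    return {t: counts[t] / len(tags) for t in THEMES}

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    observed = sum(x == y for x, y in zip(a, b)) / len(a)
    pa, pb = distribution(a), distribution(b)
    expected = sum(pa[t] * pb[t] for t in THEMES)
    return (observed - expected) / (1 - expected)

print(distribution(rater_a))                       # theme distribution for rater A
print(round(cohens_kappa(rater_a, rater_b), 2))    # inter-rater reliability
```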

This bibliography and thematic tags serve as fruitful entry points for those investigating the nuances in the literature on humans, trust, and AI, especially when contextualized with the insights drawn from the corpus presented above.

DOWNLOAD INSIGHTS            VIEW BIBLIOGRAPHY


Human-AI Collaboration Framework & Case Studies

PAI Staff

Overview

Best practices on collaborations between people and AI systems – including those for issues of transparency and trust, responsibility for specific decisions, and appropriate levels of autonomy – depend on a nuanced understanding of the nature of those collaborations.

With the support of the Collaborations Between People and AI Systems (CPAIS) Expert Group, PAI has developed a Human-AI Collaboration Framework, containing 36 questions that identify some characteristics that differentiate examples of human-AI collaborations. We have also prepared a collection of seven case studies that illustrate the Framework and its applications in the real world.

This project explores the relevant features one should consider when thinking about human-AI collaboration, and how these features present themselves in real-world examples. By drawing attention to the nuances – including the distinct implications and potential social impacts – of specific AI technologies, the Framework can serve as a helpful nudge toward responsible product/tool design, policy development, or even research processes on or around AI systems that interact with humans.

As a software engineer from a leading technology company suggested, this Framework would be useful to them because it would enable focused attention on the impact of their AI system design, beyond the typical parameters of how quickly it goes to market or how it performs technically.

“By thinking through this list, I will have a better sense of where I am responsible to make the tool more useful, safe, and beneficial for the people using it. The public can also be better assured that I took these parameters into consideration when working on the design of a system that they may trust and then embed in their everyday life.”

SOFTWARE ENGINEER, PAI RESEARCH PARTICIPANT

Case Studies

To illustrate the application of this Framework, PAI spoke with AI practitioners from a range of organizations, and collected seven case studies designed to highlight the variety of real world collaborations between people and AI systems. The case studies provide descriptions of the technologies and their use, followed by author answers to the questions in the Framework:

  1. Virtual Assistants and Users (Claire Leibowicz, Partnership on AI)
  2. Mental Health Chatbots and Users (Yoonsuck Choe, Samsung)
  3. Intelligent Tutoring Systems and Learners (Amber Story, American Psychological Association)
  4. Assistive Computing and Motor Neuron Disease Patients (Lama Nachman, Intel)
  5. AI Drawing Tools and Artists (Philipp Michel, University of Tokyo)
  6. Magnetic Resonance Imaging and Doctors (Bendert Zevenbergen, Princeton Center for Information Technology Policy)
  7. Autonomous Vehicles and Passengers (In Kwon Choi, Samsung)


VIEW THE FRAMEWORK AND CASE STUDIES        READ THE BLOG POST


Visa Laws, Policies, and Practices: Recommendations for Accelerating the Mobility of Global AI/ML Talent

PAI Staff

Executive Summary

Immigration laws, policies, and practices are challenging the ability of many communities, including the artificial intelligence and machine learning (AI/ML) community, to incorporate diverse voices in their work. As a global, multi-stakeholder nonprofit committed to the creation and dissemination of best practices in artificial intelligence, the Partnership on AI (PAI) is uniquely positioned to address the impacts of immigration laws, policies, and practices on the AI/ML community.

PAI believes that bringing together experts from countries around the world that represent different cultures, socio-economic experiences, backgrounds, and perspectives is essential for AI/ML to flourish and help create the future we desire. In order to fulfill their talent goals and host conferences of international caliber, countries around the world will need to devise laws, policies, and practices that enable people around the world to contribute to these conversations.

Based on input from PAI Partners, and PAI’s own research, this paper offers recommendations to address these specific challenges. It highlights the importance of conferences and convenings for a variety of disciplines that are making important contributions to AI/ML, and makes recommendations for participants and organizers that may facilitate ease of travel for these events. It also presents recommendations for governments to improve the accessibility, evaluation and processing of visas for all types of potential visitors, including students, interns, and accompanying families. Appendices to the paper respond to potential questions, and provide an overview of the global demand for AI talent, as well as additional details on technical or expert visa, residence and work permit laws, policies and practices.

PAI’s recommendations are based on our area of expertise, and have been developed to help advance the mobility of innovative global AI/ML talent from a variety of disciplines. Many countries have already created visa classifications for other specialized occupations, including medical professionals, professional athletes, entertainers, religious workers, and entrepreneurs.

At the same time, we acknowledge the complex immigration debates taking place in countries around the world, and the challenges posed by global migration and the quest for basic human rights and dignity. These recommendations are in no way intended to minimize or replace opportunities for those affected by ongoing immigration discussions and policymakers’ actions. We hope policymakers can create a path toward permanent residency or citizenship for these groups. In fact, while our recommendations target our field of expertise, we hope this paper can serve as a useful resource for the broader community, in support of balancing governments’ public safety responsibilities with the benefits of immigration, freedom of movement, and collaboration.

Though this document incorporated suggestions from many of PAI’s partner organizations, it should not under any circumstances be read as representing the views of any specific member of the Partnership. Instead, it is an attempt to report the views of the artificial intelligence community as a whole.

Recommendations

Based on our investigations, PAI has developed the policy recommendations below for the global AI/ML community and policymakers around the world. Additional details on each of these recommendations are provided in the full text of the report.

I. Recommendations for the Global AI/ML Community:

  1. Use Plain Language Where Possible
    Consular and immigration officials may not be trained or familiar with the language used in the AI/ML community. PAI recommends that visa applicants explain technical terms using as much plain language as possible to describe the purpose of their visit and areas of expertise to facilitate the review of application documents and forms.
  2. Share Relevant Information with Host Countries in Advance
    Many governments evaluate visa applications on the basis of the applicant’s nationality and other factors, rather than the skills they will bring to the convening. Conference organizers will have to take extraordinary steps to facilitate the entry of their invited participants until laws, policies, and practices change in countries around the world. Conference organizers should contact host country government officials far in advance of the conference to share relevant information and facilitate government review of visa applications. Useful information includes a description of the conference, number of invited participants, and copies of invitation letter templates and other necessary paperwork.

II. Recommendations for Policymakers:

  1. Accelerate Reviews of Visa Applications
    Pass and implement laws, policies, and practices that accelerate review and favorably consider applications for visas, permits, and permanent legal status from highly skilled individuals. Visas should not be numerically limited or “capped.”
  2. Create AI/ML Visa Classifications within Existing Groups
    Members of existing intergovernmental groups, such as the Organization for Economic Cooperation and Development (OECD), should create visa classifications that enable AI/ML multidisciplinary experts to meet, convene, study, and work across member countries. The terms of the visa should be reciprocal across all countries.
  3. Publish Accessible Visa Application Information
    Visa application rules, processes, and timelines should be clear, easily understood, and accessible – published in plain language, in applicants’ native languages, on websites and in other publicly available locations. These processes should be fair, transparent, and clearly demonstrate that determinations for sponsor visas are based on skills.
  4. Establish Just Standards for Evaluating Visa Applications
    Eliminate nationality-based barriers in evaluating visa and permanent residence applications from highly skilled individuals. Security-based denials of applications should not be nationality based, but rather should be founded on specific and credible security and public safety threats, evidence of visa fraud, or indications of human trafficking.
  5. Train Officials in the Language of Emerging Technologies
    Train consular and immigration officials in the language of emerging technologies so they can quickly recognize and adjudicate applications from highly skilled experts.
  6. Assist Visa Applicants
    Empower select officials to assist applicants in correctly filling out visa paperwork, as well as clarifying and resolving any questions or discrepancies that may otherwise lead to a denial or delay in approval. Beneficiaries would include startups, small- and medium-sized enterprises, smaller colleges and universities, less affluent applicants, and students and interns.
  7. Students and Interns are the Future
    Pass laws that establish special categories of visas or permits for AI/ML students and interns. These laws should clearly identify a path for graduates to obtain a work permit (as necessary), or to obtain permanent legal status or citizenship.
  8. Redefine “Families”
    Adopt visa permissions that reflect a comprehensive definition of “family,” modeled on the Finnish Aliens Act and similar definitions in other European nations. Family visas should not be numerically limited. Legal spouses, partners, and those with family ties should also be permitted to work or study in the host country. Long-term caregivers should be permitted to accompany and remain with the main visa applicant and their family while employed in that capacity.
  9. Rely on Effective Policies and Systems to Protect Information
    Immigration restrictions do not adequately protect information and intellectual property rights. For example, trade negotiations can strengthen intellectual property laws and establish courts to protect and enforce intellectual property rights owned by individual rights holders, whereas immigration policies and practices that broadly apply to all applicants from a particular country do not.

READ THE FULL PAPER

Frequently Asked Questions

Why would PAI tackle a subject such as visas and immigration? This topic is not really related to artificial intelligence research.

PAI believes that bringing together experts from countries around the world that represent different cultures, socio-economic experiences, backgrounds, and perspectives is essential for AI/ML to flourish and help create the future we desire. Artificial intelligence is projected to affect all facets of society, and in some ways it already is having those effects. PAI’s work addresses a number of topics related to AI, such as criminal justice and labor and economy. Our work to address immigration challenges affecting the AI community is quite similar.

How does this document pertain to PAI’s mission and work?

This document makes visa policy recommendations that would improve the mobility of global AI/ML talent and enable companies, organizations and countries to benefit from their diverse perspectives. Fostering, cultivating, and preserving a culture of diversity and belonging in our work and in the people and organizations who contribute to our work is essential to our mission, and embedded in our Tenets. These include: committing to open research and dialogue on the ethical, social, economic, and legal implications of AI, ensuring that AI technologies benefit and empower as many people as possible, and striving to create a culture of cooperation, trust, and openness among AI scientists and engineers to help better achieve these goals.

Who benefits from this policy paper?

Unlike large, multinational companies and prominent, well-funded universities and colleges,  startups, small- and medium-sized enterprises, individuals traveling to conferences, less affluent applicants, students, and interns often lack the resources to hire experts to ensure their preferred candidates have the greatest chance to obtain visas for internships, to study, or to work in their organizations. These groups and individuals  often cannot successfully compete for visas, especially those that are numerically limited. They would be the greatest beneficiaries should governments implement these recommendations.

Why is PAI uniquely suited to address this issue?

As a multi-stakeholder nonprofit, PAI convenes over 100 global Partners, originating from 12 countries and four continents, and representing industry, civil society, and academic and research institutes. As such, we are uniquely qualified to describe the impacts of immigration laws, policies, and practices on the AI/ML community. The impetus for this document came from many of PAI’s Partners and colleagues, who have shared how certain visa laws, policies, and practices negatively affect their organizations’ abilities to benefit from global representatives and perspectives in their work.

Why is PAI focused on incorporating diverse voices in AI/ML?

Diverse perspectives are necessary to ensure that AI is developed in a responsible manner, thoughtfully benefiting all people in society. Voices and contributions from global talent are also essential to reducing the unintended consequences that can arise from AI/ML development and deployment, including those related to safety and security. Due to the emergent and rapidly evolving nature of the technology, AI in particular engenders high-impact safety and security risks, which can be mitigated by increasing the diversity of participating voices [1]. Diverse representation also serves to promote the safety of key members of the AI/ML community. Underrepresented voices, such as those of minorities and the LGBTQ community, are important as we design AI/ML systems to be inclusive of all populations.

Is PAI suggesting that AI/ML practitioners should be treated differently than other skilled workers? How is this different from other visa categories?

PAI’s recommendations would enable AI/ML practitioners, from a variety of disciplines, to travel and work more freely. In some cases, this could entail special visa classifications, similar to those that already exist for skilled workers in other specialized occupations, such as medical professionals, professional athletes, entertainers, religious workers, entrepreneurs, skilled laborers and trades workers.

This paper also highlights the many disciplines involved in the development and operations of AI/ML systems, above and beyond what is sometimes defined as “skilled technology work.” Responsible AI/ML systems involve input from researchers and practitioners in social sciences such as economics, sociology, philosophy, ethics, linguistics, and communications, and the “experiential expertise” offered by those working in labor and workers’ rights [2], in addition to technical fields such as mathematics, statistics, computer science, data science, neuroscience, and biology.

How does this work? Unlike medical professionals or engineers, AI/ML practitioners don’t have a certificate or license for governments to determine that they are experts.

Countries establish criteria for evaluating applications, whether for technical talent, a professional athlete, or someone skilled in trades or labor. Established eligibility criteria, and the processes for evaluating them, vary greatly from country to country. The PAI paper offers models for countries to consider and draw upon if they decide to create a classification for AI/ML practitioners.

For example, some countries require letters from a potential employer, or to have someone in the field attest to the applicant’s particular skills, or other supporting documentation that proves the applicant has the desired skills. Some examples:

  • An independent review board: The UK Tech Nation Visa, also known as the Tier 1 Exceptional Talent Visa, assigned an independent, “designated competent body,” to review and endorse applications. The Tech Nation Visa Guide outlines the skills and specialties typically exhibited in applications reviewed by this independent body, and the eligibility criteria.
  • Points-based system: Canada’s Express Entry Program, like other Canadian visa programs, evaluates applicants on the basis of the types of occupations and levels of skills the country hopes to attract. Certain occupations and skills, among other criteria, garner greater numbers of points. The higher the overall point total, the greater the likelihood of being granted entry.
  • Government review: Japan’s Skilled Labor Visa program seeks documentation to support the visa application, and that documentation must prove, among other elements, that the applicant has a certain number of years of experience. The government will review the documentation, and issue a Certificate of Eligibility (COE) if they think the applicant possesses the necessary experience and skills. The existence of the COE in the application can accelerate the visa processing time.
  • Additional examples can be found in Recommendations for Policymakers #1 and Appendix C of the paper.

Sources Cited

  1. Han, T. A., Pereira, L. M., Santos, F. C., & Lenaerts, T. (2019). Modelling the Safety and Surveillance of the AI Race. arXiv preprint.
  2. See discussion of “experiential expertise” in: Young, M., Magassa, L., & Friedman, B. (2019). Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology, 21(2), 89-103.