Responsible Sourcing of Data Enrichment Services

PAI Staff

As AI becomes increasingly pervasive, there has been growing and warranted concern over the effects of this technology on society. To fully understand these effects, however, one must closely examine the AI development process itself, which impacts society both directly and through the models it creates. This white paper, “Responsible Sourcing of Data Enrichment Services,” addresses an often overlooked aspect of the development process and what AI practitioners can do to help improve it: the working conditions of data enrichment professionals, without whom the value being generated by AI would be impossible. This paper’s recommendations will be an integral part of the shared prosperity targets being developed by Partnership on AI (PAI) as outlined in the AI and Shared Prosperity Initiative’s Agenda.

High-precision AI models are dependent on clean and labeled datasets. While obtaining and enriching data so it can be used to train models is sometimes perceived as a simple means to an end, this process is highly labor-intensive and often requires data enrichment workers to review, classify, and otherwise manage massive amounts of data. Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face. This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind, which can have deleterious consequences for those being ignored.

Data Enrichment Choices Impact Worker Well-being

There is, however, an opportunity to make a difference. The decisions AI developers make while procuring enriched data have a meaningful impact on the working conditions of data enrichment professionals. This paper focuses on how these sourcing decisions impact workers and proposes avenues for AI developers to meaningfully improve their working conditions, outlining key worker-oriented considerations that practitioners can use as a starting point to raise conversations with internal teams and vendors. Specifically, this paper covers worker-centric considerations for AI companies making decisions in:

  • selecting data enrichment providers,
  • running pilots,
  • designing data enrichment tasks and writing instructions,
  • assigning tasks,
  • defining payment terms and pricing,
  • establishing a communication cadence with workers,
  • conducting quality assurance,
  • and offboarding workers from a project.

This paper draws heavily on insights and input gathered during semi-structured interviews with members of the data enrichment ecosystem conducted throughout 2020, as well as during a five-part workshop series held in the fall of 2020. The workshop series brought together more than 30 professionals from different areas of the data enrichment ecosystem, including representatives from data enrichment providers, researchers and product managers at AI companies, and leaders of civil society and labor organizations. We’d like to thank all of them for their engaged participation and valuable feedback. We’d also like to thank Elonnai Hickok for serving as the lead researcher on the project and Heather Gadonniex for her committed support and championship. Finally, this work would not have been possible without the invaluable guidance, expertise, and generosity of Mary Gray.

Our intention with this paper is to aid the industry in accounting for well-being when making decisions about data enrichment and to set the stage for further conversations within and across AI organizations. Additional work is needed to ensure industry practices recognize, appreciate, and fairly compensate the workers conducting data enrichment. To that end, we want to use this paper as an opportunity to increase awareness amongst practitioners and launch a series of conversations. We recognize that there is a lot of variance in practices across the industry and hope to start a productive dialogue with organizations across the spectrum that are working through these questions. If you work at a company involved in building AI and want to host a conversation with your colleagues around data enrichment practices, we would love to join and help facilitate the conversation. If you are interested, please get in touch here.

To read “Responsible Sourcing of Data Enrichment Services” in full, click here.

Redesigning AI for Shared Prosperity: An Agenda

PAI Staff

Artificial intelligence is expected to contribute trillions of dollars to the global GDP over the coming decade, but these gains may not occur equitably or be shared widely. Today, many communities around the world face persistent underemployment, driven in part by technological advances that have divided workers into cohorts of haves and have nots. If AI advancement continues on its current trajectory, it could accelerate this economic exclusion.

This is not the only trajectory AI could be on: switching the emphasis from automating human tasks to genuinely complementing human workers can help raise these workers’ productivity while making jobs safer, more stable and rewarding, and less physically exhausting. Redesigning AI for Shared Prosperity: an Agenda is a foundational document of the AI and Shared Prosperity Initiative outlining practical questions stakeholders need to collectively find answers to in order to successfully steer AI toward expanding access to good jobs—and away from eliminating them. We are sharing this living Agenda with the community to inform aligned efforts and invite all interested stakeholders to partake in the work. (Read our press release on the Agenda here.)

The Agenda, developed under the close guidance of the Initiative’s Steering Committee and based on their deliberations, calls for the creation of shared prosperity targets: verifiable criteria the AI industry must meet to support the future of workers. These targets would consist of commitments by AI companies to create (and not destroy) good jobs—well-paying, stable, honored, and empowered ones—across the globe. The commitments could be adopted by the AI industry players either voluntarily or with regulatory encouragement.

To date, no metrics have been developed to assess the impacts of AI on job availability, wages, and quality. Additionally, no targets have been set to ensure new products do not harm workers, either in aggregate or by category of potential vulnerability. Without clear metrics and commitments, efforts to steer AI in directions that benefit workers and society are susceptible to unbacked claims of human complementarity or human augmentation. Currently, such claims are frequently made by organizations that, in reality, produce job-displacing technology or employ worker-exploiting tactics (such as invasive surveillance) to produce productivity gains. We expect that organizations genuinely seeking to complement and benefit workers with their technology would be most interested in measuring and disclosing their impact on the availability of good jobs, helping to differentiate themselves from industry actors seeking to sell worker exploitation-enabling technologies masked as “worker-augmenting AI”.

The success of the targets to be developed relies on their support by critical stakeholders in the AI development and implementation ecosystem: workers, private sector stakeholders, governments, and international organizations. Support within and across multiple stakeholder categories is particularly important given the diffuse nature of AI’s development and deployment: technologies are often created in different companies and geographies from those in which they are implemented. Directing AI in service of expanding access to good jobs offers opportunities as well as complex challenges for each set of stakeholders. The Agenda outlines the questions that need to be resolved in order to align the incentives, interests, and relative powers of key stakeholders in pursuit of a shared prosperity-advancing path for AI.

As an immediate next step, the Initiative is working to conduct thorough research on workers’ experiences of AI in the workplace. The research aims to identify key categories of impact on job quality to be included in the shared prosperity targets, as well as the most effective ways to empower workers throughout the AI development and deployment process. If you are an employer or a worker-organizing group potentially interested in participating in this research, please get in touch to learn more about our research and how you can contribute.

It is our hope that this Agenda will catalyze research and debates around automation, the future of work, and the equitable distribution of the economic gains of AI, and, specifically, around steering AI’s progress to reduce inequality and support sustainable economic and social development. PAI also enthusiastically invites collaboration on the design of shared prosperity targets. For more information on the AI and Shared Prosperity Initiative and how to get involved, please visit shared-prosperity-initiative.

To read the Agenda’s Executive Summary, click here. To read “Redesigning AI for Shared Prosperity: an Agenda” in full, click here.

Managing the Risks of AI Research: Six Recommendations for Responsible Publication

PAI Staff

Once a niche research interest, artificial intelligence (AI) has quickly become a pervasive aspect of society with increasing influence over our lives. In turn, open questions about this technology have, in recent years, transformed into urgent ethical considerations. The Partnership on AI’s (PAI) new white paper, “Managing the Risks of AI Research: Six Recommendations for Responsible Publication,” addresses one such question: Given AI’s potential for misuse, how can AI research be disseminated responsibly?

Many research communities, such as biosecurity and cybersecurity, routinely work with information that could be used to cause harm, either maliciously or accidentally. These fields have thus established their own norms and procedures for publishing high-risk research. Thanks to breakthrough advances, AI technology has progressed rapidly in the past decade, giving the AI community less time to develop similar practices.

Recent pilots, such as OpenAI’s “staged release” of GPT-2 and the “broader impact statement” requirement at the 2020 NeurIPS conference, demonstrate a growing interest in responsible AI publication norms. Effectively anticipating and mitigating the potential negative impacts of AI research, however, will require a community-wide effort. As a first step towards developing responsible publication practices, this white paper provides recommendations for three key groups in the AI research ecosystem:

  • Individual researchers, who should disclose and report additional information in their papers and normalize discussion about the downstream consequences of research.
  • Research leadership, which should review potential downstream consequences earlier in the research pipeline and commend researchers who identify negative downstream consequences.
  • Conferences and journals, which should expand peer review criteria to include engagement with potential downstream consequences and establish separate review processes to evaluate papers based on risk and downstream consequences.

Additionally, this white paper includes an appendix which seeks to disambiguate a variety of terms related to responsible research which are often conflated: “research integrity,” “research ethics,” “research culture,” “downstream consequences,” and “broader impacts.”

This document represents an artifact that can be used as a basis for further discussion, and we seek feedback on it to inform future iterations of the recommendations it contains. Our aim is to help build our capacity as a field to anticipate downstream consequences and mitigate potential risks.

To read “Managing the Risks of AI Research: Six Recommendations for Responsible Publication” in full, click here.

Framework for Promoting Workforce Well-being in the AI-Integrated Workplace

PAI Staff

Executive Summary

The Partnership on AI’s “Framework for Promoting Workforce Well-being in the AI-Integrated Workplace” provides a conceptual framework and a set of tools to guide employers, workers, and other stakeholders towards promoting workforce well-being throughout the process of introducing AI into the workplace.

As AI technologies become increasingly prevalent in the workplace, our goal is to place workforce well-being at the center of this technological change and the resulting metamorphosis in work, well-being, and society, and to provide a starting point for discussing and creating pragmatic solutions.

The paper categorizes the aspects of workforce well-being that should be prioritized and protected throughout AI integration into six pillars. Human rights, the first pillar, supports all aspects of workforce well-being. The five additional pillars are physical, financial, intellectual, and emotional well-being, as well as purpose and meaning. The Framework presents a set of recommendations that organizations can use to guide their thinking about promoting well-being throughout the integration of AI in the workplace.

The Framework is designed to initiate and inform discussions on the impact of AI that strengthen the reciprocal obligations between workers and employers, while grounding that discourse in six pillars of worker well-being.

We recognize that the impacts of AI are still emerging and often difficult to distinguish from the impact of broader digital transformation, which leaves organizations challenged to address the unknown and potentially fundamental changes that AI may bring to the workplace. We strongly advise that management collaborate with workers directly or with worker representatives in the development, integration, and use of AI systems in the workplace, as well as in the discussion and implementation of this Framework.

We acknowledge that the contexts for having a dialogue on worker well-being may differ. For instance, in some countries there are formal structures in place, such as workers’ councils, that facilitate the dialogue between employers and workers. In other cases, countries or sectors have neither these institutions nor a tradition of dialogue between the two parties. In all cases, the aim of this Framework is to be a useful tool for all parties to collaboratively ensure that the introduction of AI technologies goes hand in hand with a commitment to worker well-being. The importance of making such a commitment in earnest has been highlighted by the COVID-19 public health and economic crises, which exposed and exacerbated long-standing inequities in the treatment of workers. Making sure these are not perpetuated further with the introduction of AI systems into the workplace requires deliberate effort and will not happen automatically.

Recommendations

This section articulates a set of recommendations to guide organizational approaches and thinking on what to promote, what to be cognizant of, and what to protect against, in terms of worker and workforce well-being while integrating AI into the workplace. These recommendations are organized along the six well-being pillars identified above, and are meant to serve as a starting place for organizations seeking to apply the present Framework to promote workforce well-being throughout the process of AI integration. Ideally, these can be recognized formally as organizational commitments at the board level and subsequently discussed openly and regularly with the entire organization.

The “Framework for Promoting Workforce Well-Being in the AI-Integrated Workplace” is a product of the Partnership on AI’s AI, Labor, and the Economy (AILE) Expert Group, formed through a collaborative process of research, scoping, and iteration. In August 2019, at a workshop called “Workforce Well-being in the AI-Integrated Workplace” co-hosted by PAI and the Ford Foundation, this work received additional input from experts, academics, industry, labor unions, and civil society. Though this document reflects the inputs of many PAI Partner organizations, it should not under any circumstances be read as representing the views of any particular organization or individual within this Expert Group, or any specific PAI Partner.

Acknowledgements

The Partnership on AI is deeply grateful for the input of many colleagues and partners, especially Elonnai Hickok, Ann Skeet, Christina Colclough, Richard Zuroff, and Jonnie Penn, as well as the participants of the August 2019 workshop co-hosted by PAI and the Ford Foundation. We thank Arindrajit Basu, Pranav Bidaire, and Saumyaa Naidu for their research support.

The Ethics of AI and Emotional Intelligence

PAI Staff

About the Paper

2019 seemed to mark a turning point in the deployment and public awareness of artificial intelligence designed to recognize emotions and expressions of emotion. The experimental use of AI spread across sectors and moved beyond the internet into the physical world. Stores used AI perceptions of shoppers’ moods and interest to display personalized public ads. Schools used AI to quantify student joy and engagement in the classroom. Employers used AI to evaluate job applicants’ moods and emotional reactions in automated video interviews and to monitor employees’ facial expressions in customer service positions.

It was a year notable for increasing criticism and governance of AI related to emotion and affect. A widely cited review of the literature by Barrett and colleagues questioned the underlying science for the universality of facial expressions and concluded that there are insurmountable difficulties in inferring specific emotions reliably from pictures of faces. [1] The affective computing conference ACII added its first panel on the misuses of the technology, with the aim of increasing discussion within the technical community about how their research was impacting society. [2] Surveys on public attitudes in the U.S. [3] and the U.K. [4] found that almost all of those polled found some current advertising and hiring uses of mood detection unacceptable. Some U.S. cities and states started to regulate private [5] and government [6] use of AI related to affect and emotions, including restrictions in some data protection legislation and face recognition moratoria.

For example, the California Consumer Privacy Act (CCPA), which went into effect January 1, 2020, gives Californians the right to notification about what kinds of data a business is collecting about them and how it is being used, as well as the right to demand that businesses delete their biometric information. [7] Biometric information, as defined in the CCPA, includes many kinds of data that are used to make inferences about emotion or affective state, including imagery of the iris, retina, and face, voice recordings, and keystroke and gait patterns and rhythms. [8]

All of this is happening against a backdrop of increasing global discussions, reports, principles, white papers, and government action on responsible, ethical, and trustworthy AI. The OECD’s AI Principles, adopted in May 2019 and supported by more than 40 countries, aimed to ensure AI systems would be designed to be robust, safe, fair, and trustworthy. [9] In February 2020, the European Commission released a white paper, “On Artificial Intelligence – A European approach to excellence and trust”, setting out policy options for the twin objectives of promoting the uptake of AI and addressing the risks associated with certain uses of AI. [10] In June 2020, the G7 nations and eight other countries launched the Global Partnership on AI, a coalition aimed at ensuring that artificial intelligence is used responsibly and respects human rights and democratic values. [11]

At its best, if artificial intelligence is able to help individuals better understand and control their own emotional and affective states, including fear, happiness, loneliness, anger, interest and alertness, there is enormous potential for good. It could greatly improve quality of life and help individuals meet long term goals. It could save many lives now lost to suicide, homicide, disease, and accident. It might help us get through the global pandemic and economic crisis.

At its worst, if artificial intelligence can automate the ability to read or control others’ emotions, it has substantial implications for economic and political power and individuals’ rights.

Governments are thinking hard about AI strategy, policy, and ethics. Now is the time for a broader public debate about the ethics of artificial intelligence and emotional intelligence, while those policies are being written, and while the use of AI for emotions and affect is not yet well entrenched in society. Applications are broad, across many sectors, but most are still in early stages of use.

Sources Cited

  1. Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Corrigendum: Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(3), 165–166. https://doi.org/10.1177/1529100619889954
  2. Valstar, M., Gratch, J., Tao, J., Greene, G., & Picard, P. (2019, September 4). Affective computing and the misuse of “our” technology/science (Panel). 8th International Conference on Affective Computing & Intelligent Interaction, Cambridge, United Kingdom.
  3. Only 15% of Americans polled said it was acceptable for advertisers to use facial recognition technology to see how people respond to public advertising displays. It is unclear whether the 54% of respondents who said it was not acceptable were objecting to the use of facial analysis to detect emotional reaction to ads or the association of identification of an individual through facial recognition with some method of detecting emotional response. See Smith, A. (2019, September 5). More than half of U.S. adults trust law enforcement to use facial recognition responsibly. Pew Research Center. https://www.pewresearch.org/internet/2019/09/05/more-than-half-of-u-sadults-trust-law-enforcement-to-use-facial-recognition-responsibly/
  4. Only 4% of those polled in the U.K. approved of analysing faces (using “facial recognition technologies”, which the report defined as including detecting affect) to monitor personality traits and mood of candidates when hiring. Ada Lovelace Institute (2019, September). Beyond face value: public attitudes to facial recognition technology (Report), 11. Retrieved from https://www.adalovelaceinstitute.org/wp-content/uploads/2019/09/Public-attitudes-to-facial-recognition-technology_v.FINAL_.pdf
  5. SB-1121 California Consumer Privacy Act of 2018, AB-375 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121 See also proposed housing bills, No Biometrics Barriers to Housing Act. https://drive.google.com/file/d/1w4ee-poGkDJUkcEMTEAVqHNunplvR087/view (proposed U.S. federal) and Senate bill S5687 (proposed New York state) https://legislation.nysenate.gov/pdf/bills/2019/S5687
  6. See Bill S.1385 (MA face recognition bill in process, as of June 23, 2020). https://malegislature.gov/Bills/191/S1385/Bills/Joint and AB-1215 Body Camera Accountability Act (Bill enacted in CA) https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201920200AB1215.
  7. The CCPA gives rights to California residents against a corporation or other legal entity operating for the financial benefit of its owners doing business in California that meets a certain revenue or data volume threshold. SB-1121 California Consumer Privacy Act of 2018, AB-375 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121
  8. California Consumer Privacy Act of 2018, AB-375 (2018).
  9. Forty-two countries adopt new OECD Principles on Artificial Intelligence. OECD. Retrieved March 22, 2019, from https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.html
  10. European Commission. White paper On artificial intelligence – A European approach to excellence and trust, 1. https://templatearchive.com/ai-white-paper/
  11. Joint statement from founding members of the global partnership on artificial intelligence. Government of Canada. Retrieved July 23, 2020, from https://www.canada.ca/en/innovation-science-economic-development/news/2020/06/joint-statement-from-foundingmembers-of-the-global-partnership-on-artificial-intelligence.html

On the Legal Compatibility of Fairness Definitions

PAI Staff

Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. However, we go further to demonstrate that these definitions often misunderstand the legal concepts from which they purport to be inspired, and consequently inappropriately co-opt legal language. In this paper, we demonstrate examples of this misalignment and discuss the differences in ML terminology and their legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus this paper on U.S. anti-discrimination law since the ML fairness research community regularly references terms from this body of law.

READ THE BLOG POST  READ THE PAPER

Bringing Facial Recognition Systems To Light

PAI Staff

An Introduction to PAI’s Facial Recognition Systems Project

Facial recognition. What do you think of when you hear that term? How do these systems know your name? How accurate are they? And what else can they tell you about someone whose image is in the system?

These questions and others led the Partnership on AI (PAI) to begin the facial recognition systems project. During a series of workshops with our partners, we discovered it was first necessary to grasp how these systems work. The result was PAI’s paper “Understanding Facial Recognition Systems,” which defines the technology used in systems that attempt to verify who someone says they are or identify who someone is.

A productive discussion about the roles of these systems in society starts when we speak the same language and understand the importance and meaning of technical terms such as “training the system,” “enrollment database,” and “match thresholds.”

Let’s begin — keeping in mind that the graphics below do not represent any specific system, and are meant only to illustrate how the technology works.

How Facial Recognition Systems Work

Understanding how facial recognition systems work is essential to being able to examine the technical, social & cultural implications of these systems.

Let’s describe how a facial recognition system works. First, the system detects whether an image contains a face. If so, it then tries to recognize the face in one of two ways:

During facial verification: The system attempts to verify the identity of the face. It does so by determining whether the face in the image matches a specific face previously stored in the system.

During facial identification: The system attempts to predict the identity of the face. It does so by determining whether the face in the image potentially matches any of the faces previously stored in the system.

Let’s look at these steps in greater detail

A facial recognition system first needs to be trained, with two main factors influencing how the system performs: the quality of the images (such as the angle, lighting, and resolution) and the diversity of the faces in the dataset used to train the system.

An enrollment database consisting of faces and names is also created. The faces can also be stored in the form of templates.

The first step in using any facial recognition system is the submission of a probe image, derived from either a photo or a video. The system then detects the face in the image and creates a template.
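To make the enrollment and probe steps concrete, here is a minimal sketch in Python. It does not describe any specific product; the toy `embed_face` function, the fixed 128-value template length, and all other names are illustrative assumptions, and a real system would compute templates with a trained model.

```python
# Illustrative sketch only; not a description of any specific facial recognition system.
# A real system would use a trained model to turn a detected face into a template.
import numpy as np

TEMPLATE_SIZE = 128  # assumed template length for this sketch

def embed_face(face_image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a trained model: flattens and normalizes pixel values
    into a fixed-length template vector."""
    v = face_image.astype(float).ravel()
    v = v[:TEMPLATE_SIZE] if v.size >= TEMPLATE_SIZE else np.pad(v, (0, TEMPLATE_SIZE - v.size))
    return v / (np.linalg.norm(v) + 1e-9)

# Enrollment database: each known identity is stored as a template, not a raw photo.
enrollment_db: dict[str, np.ndarray] = {}

def enroll(name: str, face_image: np.ndarray) -> None:
    """Add a person to the enrollment database."""
    enrollment_db[name] = embed_face(face_image)

def make_probe_template(probe_image: np.ndarray) -> np.ndarray:
    """Convert a submitted photo or video frame into a probe template."""
    return embed_face(probe_image)
```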

 

There are two paths that can be taken

The template derived from the probe image can be compared to a single template in the enrollment database. This “1:1” process is called facial verification.

Alternatively, the template derived from the probe image can be compared to all templates in the enrollment database. This “1:MANY” process is called facial identification.

 

The match threshold determines how similar a probe template must be to an enrolled template before the system reports a match. Set too low, the threshold produces false matches; set too high, it causes the system to miss genuine ones.
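Continuing the sketch above, facial verification and facial identification might be implemented as below. The cosine-similarity score and the 0.8 threshold are illustrative assumptions only; real systems use their own scoring functions and operating thresholds.

```python
# Continuation of the illustrative sketch: comparing a probe template against
# the enrollment database. The similarity measure and threshold are assumptions.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates (closer to 1.0 means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def verify(probe: np.ndarray, claimed_name: str,
           enrollment_db: dict, threshold: float = 0.8) -> bool:
    """1:1 facial verification: compare the probe against the single template
    enrolled for the claimed identity."""
    return similarity(probe, enrollment_db[claimed_name]) >= threshold

def identify(probe: np.ndarray, enrollment_db: dict, threshold: float = 0.8):
    """1:MANY facial identification: compare the probe against every enrolled
    template and return the best match, if it clears the threshold."""
    best_name, best_score = None, -1.0
    for name, template in enrollment_db.items():
        score = similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```

For example, with the toy templates above, `verify(probe, "alice", enrollment_db)` would return True only if the probe’s similarity to the template enrolled for “alice” meets the threshold, while `identify(probe, enrollment_db)` would return the best-scoring enrolled identity or no match at all.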



Beyond facial recognition

Sometimes facial recognition systems are described as including facial characterization (also called facial analysis) systems, which detect facial attributes in an image, and then sort the faces by categories such as gender, race, or age. These systems are not part of facial recognition systems because they are not used to verify or predict an identity.

Explainable Machine Learning in Deployment

PAI Staff

Organizations and policymakers around the world are turning to Explainable AI (XAI) as a means of addressing a range of AI ethics concerns. PAI’s recent research paper, Explainable Machine Learning in Deployment, is the first to examine how ML explainability techniques are actually being used. We find that in its current state, XAI best serves as an internal resource for engineers and developers, rather than for providing explanations to end users. Additional improvements to XAI techniques are necessary in order for them to truly work as intended, and help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.

READ THE BLOG POST  READ THE PAPER

Human-AI Collaboration Trust Literature Review: Key Insights and Bibliography

PAI Staff

Key Insights from a Multidisciplinary Review of Trust Literature

Understanding trust between humans and AI systems is integral to promoting the development and deployment of socially beneficial and responsible AI. Successfully doing so warrants multidisciplinary collaboration.

In order to better understand trust between humans and artificially intelligent systems, the Partnership on AI (PAI), supported by members of its Collaborations Between People and AI Systems (CPAIS) Expert Group, conducted an initial survey and analysis of the multidisciplinary literature on AI, humans, and trust. This project includes a thematically-tagged Bibliography with 78 aggregated research articles, as well as an overview document presenting seven key insights.

These key insights, themes, and aggregated texts can serve as fruitful entry points for those investigating the nuances in the literature on humans, trust, and AI, and can help align understandings related to trust between people and AI systems. This work can also inform future research, which should investigate gaps in the research and our bibliography to improve our understanding of how human-AI trust facilitates, or sometimes hinders, the responsible implementation and application of AI technologies.

Key Insights

Several high-level insights emerged when reflecting on the bibliography of submitted articles:

  1. There is a presupposition that trust in AI is a good thing, with limited consideration of distrust’s value.
    The original project proposal emphasized a need to understand the literature on humans, AI, and trust in order to eventually determine appropriate levels of trust and distrust between AI and humans in different contexts. However, the articles included in the bibliography are largely framed with the need and motivation towards trust – not distrust – between AI systems and humans. While certain instances may warrant facilitated trust between humans and AI, others may actually enable more socially beneficial outcomes if they prompt distrust or caution. Future literature should explore distrust as related, but not necessarily directly opposite, to the concept of trust. For example, an AI system that helps doctors detect cancer cells is only useful if the human doctor and patient trust that information. In contrast, individuals should remain skeptical of AI systems designed to induce trust for malevolent purposes, such as AI-generated malware that may use data to more realistically mimic the conversational style of a target’s closest friends.
  2. Many of the articles were published before the Internet’s ubiquity/the social implications of AI became a central research focus.
    It is important to contextualize recent literature on intelligent systems and humans with literature focused on social and cognitive mechanisms undergirding human to human, or human to organizational, trust. Future work can put many of the foundational, conceptual articles that were written before the 21st century in conversation with those specifically focused on the context of AI systems, and their different use cases. It can also compare foundational, early articles’ exploration of trust with how trust is seen specifically in relation to humans interacting with AI.
  3. Trust between humans and AI is not monolithic: Context is vital.
    Trust is not all or nothing. There often exist varying degrees of trust, and the level of trust sufficient to deploy AI in different contexts is therefore an important question for future exploration. There might also be several layers of trust to secure before someone might trust and perhaps ultimately use an AI tool. For example, one might trust the data upon which an intelligent system was trained, but not the organization using that data, or one might trust a recommender system or algorithm’s ability to provide useful information, but not the specific platform upon which it is delivered. The implications of this multifaceted trust between human and AI systems, as well as its implications on adoption and use, should be explored in future research.
  4. Promoting trust is often presented simplistically in the literature.
    The majority of the literature appears to assert that not only are AI systems inherently deserving of trust, but also that people need guidance in order to trust the systems. The basic formula is that explanation will demonstrate trustworthiness, and once understood to be deserving of trust, people will use AI. Both of these conceptual leaps are contestable. While explaining the internal logic of AI systems does, in some instances, improve confidence for expert users, in general, providing simplified models of the internal workings of AI has not been shown to be helpful or to increase trust.
  5. Articles make different assumptions about why trust matters.
    Within our corpus, we found a range of implicit assumptions about why fostering and maintaining trust is important and valuable. The dominant stance is that trust is necessary to ensure that people will use AI. The link between trust and adoption is tenuous at best, as people often use technologies without trusting them. What is largely consistent across the corpus – with the exception of some papers concerned about the dangers of overtrust in AI – is the goal of fostering more trust in AI, or stated differently, that more trust is inherently better than less trust. This premise needs challenging. A more reasonable goal would be that people are able to make individual assessments about which AI they ought to trust and which they ought not trust, in the service of their goals for what specifically and in which circumstances. This connects to insight 1: There is a presupposition that trust in AI is a good thing. It is important to think about context, person-level motivations and preferences, as well as instances in which trust might not be a precondition for use or adoption.
  6. AI definitions differ between publications.
    The lack of consistent definitions for AI within our corpus makes it difficult to compare findings. Most articles do not present a formal definition of AI, as they are concerned with a particular intelligent system applied in a specific domain. The systems in question differ in significant ways, in terms of the types of users who may need to trust the system, the types of outputs that a person may need to trust, and the contexts in which the AI is operating (e.g., high- vs. low-stakes environments). It is likely that these differences entail different strategies as they relate to trust. There is a need to develop a framework for understanding how these different contributions relate to each other, potentially looking not at trust in AI, but at trust in different facets and applications of AI. For a more detailed analysis of what questions to ask to differentiate particular types of human-AI collaboration, see the PAI CPAIS Human-AI Collaboration Framework.
  7. Institutional trust is underrepresented.
    Institutional trust might be especially relevant in the context of AI, where there is often a competence or knowledge gap between everyday users and those developing the AI technologies. Everyday users, lacking high levels of technical capital and knowledge, may find it difficult to make informed judgments of particular AI technologies; in the absence of this knowledge, they may rely on generalized feelings of institutional trust.

About the Bibliography

The CPAIS Trust Literature Bibliography includes 78 thematically tagged research articles (with references and abstracts). The article selection process sourced content from a multidisciplinary community sharing an interest and expertise in human-AI collaboration. Submitted articles were evaluated for inclusion and analyzed by members of a smaller project group from within the PAI Partner community. An analysis of the almost 80 initial articles resulted in the development of four thematic tags, highlighting the ways the article abstracts approached the issue of trust. Specifically:

  • Understanding – lays out a conceptual framework for trust or is primarily a survey of trust-related issues
  • Promoting – focuses on means for increasing trust
  • Receiving – focuses on the entity (e.g., a robot, a system, a website) that is trusted
  • Impacting – focuses on the nature of changes due to trust being present (e.g., the impact on a group or an organization when it experiences trust)

Two individuals from the smaller project group undertook a thematic tagging exercise to assess inter-rater reliability and the distribution of themes across articles. They tagged themes as primary and secondary (first and second order) for each article from the four thematic options above.

The CPAIS Trust Literature Bibliography identifies thematic tags for each article, at levels 1 and 2. The “themes” column lists the first order themes and the second order themes, where applicable. The total tags for each article (at both levels) are also provided. “Understanding trust” was the most frequent theme – used with 61 articles (78% of the total). 50 articles (64%) were tagged with “promoting trust,” and 29 articles (37%) were tagged with “receiving trust”. Finally, 13 articles (16%) were tagged with a focus on impacting trust.

This bibliography and thematic tags serve as fruitful entry points for those investigating the nuances in the literature on humans, trust, and AI, especially when contextualized with the insights drawn from the corpus presented above.

DOWNLOAD INSIGHTS            VIEW BIBLIOGRAPHY

Human-AI Collaboration Framework & Case Studies

PAI Staff

Overview

Best practices on collaborations between people and AI systems – including those for issues of transparency and trust, responsibility for specific decisions, and appropriate levels of autonomy – depend on a nuanced understanding of the nature of those collaborations.

With the support of the Collaborations Between People and AI Systems (CPAIS) Expert Group, PAI has developed a Human-AI Collaboration Framework, containing 36 questions that identify some characteristics that differentiate examples of human-AI collaborations. We have also prepared a collection of seven case studies that illustrate the Framework and its applications in the real world.

This project explores the relevant features one should consider when thinking about human-AI collaboration, and how these features present themselves in real-world examples. By drawing attention to the nuances – including the distinct implications and potential social impacts – of specific AI technologies, the Framework can serve as a helpful nudge toward responsible product/tool design, policy development, or even research processes on or around AI systems that interact with humans.

As a software engineer from a leading technology company suggested, this Framework would be useful to them because it would enable focused attention on the impact of their AI system design, beyond the typical parameters of how quickly it goes to market or how it performs technically.

“By thinking through this list, I will have a better sense of where I am responsible to make the tool more useful, safe, and beneficial for the people using it. The public can also be better assured that I took these parameters into consideration when working on the design of a system that they may trust and then embed in their everyday life.”

SOFTWARE ENGINEER, PAI RESEARCH PARTICIPANT

Case Studies

To illustrate the application of this Framework, PAI spoke with AI practitioners from a range of organizations, and collected seven case studies designed to highlight the variety of real world collaborations between people and AI systems. The case studies provide descriptions of the technologies and their use, followed by author answers to the questions in the Framework:

  1. Virtual Assistants and Users (Claire Leibowicz, Partnership on AI)
  2. Mental Health Chatbots and Users (Yoonsuck Choe, Samsung)
  3. Intelligent Tutoring Systems and Learners (Amber Story, American Psychological Association)
  4. Assistive Computing and Motor Neuron Disease Patients (Lama Nachman, Intel)
  5. AI Drawing Tools and Artists (Philipp Michel, University of Tokyo)
  6. Magnetic Resonance Imaging and Doctors (Bendert Zevenbergen, Princeton Center for Information Technology Policy)
  7. Autonomous Vehicles and Passengers (In Kwon Choi, Samsung)

 

VIEW THE FRAMEWORK AND CASE STUDIES        READ THE BLOG POST