
AI Needs Inclusive Stakeholder Engagement Now More Than Ever

From algorithms that recommend what to watch next on platforms such as TikTok, Instagram, or Netflix, to automated systems at toll booths that detect your vehicle, read your license plate, and issue tolls automatically, AI’s presence is ubiquitous. AI is becoming embedded in everything, and it’s no longer a question of “if” you’re engaging with AI but “when” and “how.” Ensuring the responsible development, deployment, and use of AI is more important now than ever.

And yet, while AI systems can make life and work easier, they can also uphold and even exacerbate existing inequalities and harms to marginalized groups and communities. As these systems become entrenched in our daily lives, we must understand how AI can perpetuate biases and inequalities and what actions we can take collectively to address and mitigate them. AI developers and organizations must adopt inclusive research and design principles, such as engaging diverse stakeholders, to identify and mitigate biases in these systems.

Stakeholder engagement can drive innovation and help deliver robust products and services. Engaging with diverse stakeholder groups opens up opportunities to foresee and manage risks and harms before they manifest. Stakeholder engagement also fosters awareness of social context, drives product development towards outcomes that are meaningful and relevant to a broad array of users, and in turn mitigates the risk of developer teams missing something crucial about a problem space.

Partnership on AI understands the importance of inclusive design in developing these AI systems, and that is why we created the Global Task Force for Inclusive AI. Established in 2023, the Task Force is dedicated to fostering responsible and ethical practices in AI by addressing challenges to meaningful stakeholder engagement in AI development and deployment, particularly given the rapid pace of innovation. Involving users, particularly those from marginalized communities, is essential to ensuring AI systems are equitable and to minimizing discrimination. Although it can be difficult to engage diverse audiences, particularly people with limited technological literacy, the Task Force builds on existing studies and frameworks, including PAI’s own “4 Guiding Principles for Ethical Engagement,” to create actionable recommendations for AI developers.

Understanding Risks and Harms in AI

When addressing risks and harms associated with AI, it is important to understand who exactly is at risk of being harmed and excluded, intentionally or unintentionally, and the circumstances in which it occurs. Technology tends to reflect its creators: the perspectives, experiences, and even unconscious biases of AI developers become embedded in the systems they create. It is inherent to the nature of design that a person’s life experiences affect the choices they make and therefore influence the final outcome of a product. But why is this problematic? When products are not created with all types of users in mind, those users get left behind.

Whether done intentionally or unintentionally, products that are not created with an inclusive approach do not serve all users of technology equally, and in some cases they can actively harm communities, especially those who have been historically excluded or marginalized.

These harms have already been witnessed across the AI industry and will continue if we do not take proactive measures to correct them. For example, resume screening software, which was thought to correct for human bias in the hiring process, has come under scrutiny for inaccurately screening qualified job applicants out of roles. In the first EEOC AI hiring bias lawsuit, an applicant was immediately screened out of the hiring process; after changing her birthdate to appear younger, she submitted her application again and was offered an interview. Evidence suggests the company programmed the system to reject older applicants. Biases in resume screening systems can continue to disproportionately favor certain groups of applicants over others, which can severely impact marginalized groups.

Facial recognition systems are a prevalent use of AI, applied to device security (as a replacement for a password on your phone), fraud detection, airport and border control, and even healthcare. They are also coming under scrutiny for demonstrating bias. These tools should provide better security and a more efficient verification process; however, facial recognition systems are notoriously bad at recognizing the faces of people with darker skin tones and can be used against people’s will to unlock their devices.

A study by Dr. Joy Buolamwini of the Algorithmic Justice League found error rates of 0.8 percent for light-skinned men and 34.7 percent for dark-skinned women, which raises the question: why isn’t the technology working the way it’s supposed to? This is likely due to the lack of Black faces in the training data for these systems. The error rates may seem inconsequential to some, but for many marginalized groups, they can lead to life-altering or life-threatening scenarios. In one case of mistaken identity due to facial recognition software, a Brown University student was mistaken for a suspect in a Sri Lankan bombing. In another case, the lives of three Black men were forever altered when facial recognition software wrongly identified them as the main suspects in crimes they did not commit.

However, even if all AI technology worked the way it’s supposed to, some uses of the technology are just bad ideas. Law enforcement agencies are adopting predictive policing systems meant to predict criminal activity and allocate police resources. Looks like we’ve learned nothing from Minority Report. These policing systems rely heavily on data generated during periods of “flawed, racially biased, and sometimes unlawful practices and policies,” otherwise known as “dirty policing.” This data raises the risk of inaccurate and biased outcomes, particularly for Black people, as the systems are built on biased inputs with no assurance that developers have mitigated these risks.

It is critical that companies listen to the users, or stakeholders, who are affected by these systems so that risks and harms can be mitigated.

When stakeholders, or affected users, are not engaged in the design process, scenarios like the one seen in New York may become more common. In that case, New Yorkers vocally opposed landlords installing facial recognition systems to control access to parts of their buildings, citing privacy concerns. Engaging and listening to stakeholders is crucial to creating trust between users and companies. By not engaging stakeholders, companies can unknowingly create a culture that people are less willing to participate in, as was noted in a conversation with PAI’s Stephanie Bell in “What Workers Say About Workplace AI.”

The Importance of Stakeholder Engagement

The technology industry is not unaware of the importance of stakeholder engagement. “Beta” testing and user experience research, where users are invited to give companies input about what does and doesn’t work well in their products, are now widely adopted as key phases of development. Indeed, human input is inherent to all phases of the technology and AI development cycle. The data required to train these systems, whether for model building or testing, requires participation from individuals representative of the communities and groups the technology will serve. The inclusion of diverse perspectives, particularly those representing non-white racial-ethnic identities, non-male gender identities, and the experiences of disabled people, has been emphasized as a means to mitigate some of the harm that AI is shown to cause. Developers and organizations working in this space must consider and include the people who exist in the margins and set strategic, intentional goals to get their input when developing their systems or products.

We recognize, however, that there are challenges to this approach. Communicating with nontechnical as well as technical audiences can make it difficult for developers to engage some of these communities or groups. Translating complex technical topics into accessible language without losing the nuance of the issues is important for broader engagement and understanding. We must also recognize the challenge of bringing together a diverse pool of representatives of these communities while operating within the fast pace of technological development. Developers must also be cautious of falling into the trap of “participation washing,” in which participation is used to extract labor without proper compensation or credit, or to legitimize the status quo by collecting input without incorporating it into final outcomes and by maintaining boundaries between who is considered essential and nonessential in the decision-making process. When done well, stakeholder engagement can support better relationships between companies, their consumers, and the general public by building in opportunities for external stakeholders to develop a sense of shared ownership.

What’s Next

Over the last year, the Global Task Force has worked to develop a comprehensive set of guidance and guardrails (the Guidelines) on the ethical engagement of users and the public, particularly people from socially marginalized communities, during the AI development process. The Guidelines will be released for public consultation and review in the coming months so that they fully reflect the different communities they implicate. Because there is no straightforward solution to inclusive stakeholder engagement in AI, we welcome any and all perspectives, especially those of socially marginalized and “de-centered” communities, to help guide the Task Force.

As we continue to shape the future of responsible AI, we invite you to follow along with our work via our newsletter.