Guidance for Developers and Deployers New to Public Engagement
The race to develop artificial intelligence is accelerating at an unprecedented pace. AI products are being deployed at singular speed — sometimes without the necessary safeguards in place. Systems are released into the world before they’ve been thoroughly tested for risks, unintended consequences, or potential harm. This rapid pace of innovation presents a critical challenge: How do we ensure that AI remains safe, ethical, and relevant to those it most affects? Part of the answer lies in meaningful public engagement.
In the dash toward progress, public engagement is frequently sidelined, viewed as a bottleneck rather than a necessity. This inattention is particularly concerning as AI development shifts from narrowly focused, task-specific models to powerful, general-purpose foundational models. As we move toward greater development and adoption of systems and AI assistants that can function with little human oversight, public engagement will be particularly important.
Various AI systems require differing levels of engagement, depending on such factors as the type of data they rely on, how their parameters are set, and how easily they can be explained to the public. As we move into an era wherein AI-powered assistants and autonomous systems become more embedded in our daily lives, it becomes even more critical to engage with a diverse array of users and other people impacted by the development and deployment of AI technologies.
How do we ensure that AI remains safe, ethical, and relevant to those it most affects?
Among those whose voices matter most are socially marginalized communities — groups that have long been overlooked in political, social, and institutional decision-making. Their perspectives are not only valuable but essential. Consider, for instance, individuals with disabilities who rely every day on assistive technologies. Their lived experiences provide critical insights into both the challenges and possibilities of AI-driven accessibility tools — insights that might be invisible to those who have never needed to use such technologies. Prospective improvements based on such insights can provide better products for everyone. Closed captioning, for example, not only ensures that people with hearing difficulties have access to spoken content, but serves as a convenience for those who may be unable to access reliable audio.
Without intentional and inclusive engagement, AI risks reinforcing existing inequalities, thereby failing the very people it was meant to serve. The question is no longer whether public engagement should be part of AI development but rather how we can make it a fundamental, non-negotiable step in shaping AI’s future.
What is public engagement?
At its core, public engagement refers to the process of involving individuals, communities, and organizations that are directly or indirectly affected by AI technologies in the development of those technologies. This ensures that the technology is not only functional and marketable but also ethically sound and socially responsible.
Public engagement in this case moves beyond traditional user research by actively involving diverse perspectives in decision-making throughout the AI lifecycle. This includes consultation, collaboration, and co-creation with a range of voices, particularly those from marginalized and vulnerable communities that might face the greatest impact — both positive and negative — of AI deployment.
Insights Gained through Public Engagement
Public engagement can be conducted throughout the AI development life cycle in order to elicit different kinds of insights to improve products and services, or to mitigate potential harms. Broadly speaking, the different use cases or purposes for public engagement often include:
To better understand an existing or potential market of users, customers, or people otherwise impacted by a product or service. A concept may not yet be in development and the aim may be to develop a foundational understanding of what a set of potential users may and may not want in an AI-driven product or service. Public engagement can help to better understand:
- nuances between existing consumer markets and new potential markets
- issues a consumer market is facing that might support an AI-driven solution (including whether or not an AI-driven solution is even needed for that market or audience)
Product teams may conduct early-stage market studies before a new product/feature is fully designed or built. These market studies may be conducted in the form of desk research of existing knowledge and studies; focus groups (group interviews); individual interviews; mass market surveys; and consultations with experts.
To create or refine datasets for training or fine-tuning purposes. Participants may be engaged to both serve as data providers (i.e., people who knowingly volunteer their personal and user data to help diversify datasets) and to improve data annotation or enrichment (i.e., people from many different contexts who are employed to provide annotation services). Given growing concerns around algorithmic bias, validity issues, and reliability issues across different groups, growing attention is being devoted to ensuring that datasets on which AI systems are built and tested are robust, high quality, and reflect diversity of experience. Such datasets require both that data is collected from a broad and diverse set of sources and populations (e.g., for facial recognition systems, datasets need full diversity of skin tones and facial features) and that datasets are annotated to reflect different context-specific value systems and understandings of the world (e.g., for content moderation systems, words are labeled as obscene in ways that reflect region-specific notions of obscenity).
To adversarially engage with a prototype or product/feature in development in order to identify likely risks and potential failures. Individuals unfamiliar with the specific project — including people from other teams in the same organization; academics or other subject matter experts; users; advocates; and other general members of the public — may be invited. For example, red-teaming exercises (a term borrowed from the military and cybersecurity fields) present opportunities to attack a system in order to identify ways it might be abused, misused, or fail to function as expected for different populations. Additional forms of crowdsourcing include “bug bounties,” by which users are rewarded for identifying and reporting system vulnerabilities.
To work with existing or likely users to better understand how they engage with a product or service, what they like and dislike about it, and what adjustments could be made to make it easier to use or more inviting to use. User-experience research and product beta testing, now a core phase of product development, can be applied before and after a product is released. For example, extensive user-experience research may be conducted after a product’s initial release to identify sites for general product improvement or to update the product for different markets or user bases.
To engage individuals outside the product team and/or organization to provide ongoing oversight of a deployed AI product or service. This may be done to help the organization adjudicate user-reported harms, provide a mechanism for ongoing accountability, or monitor changes. For example, community panels may be created to help organizations better understand ongoing and emerging issues that specifically face a socially marginalized community or region. Panels of experts and others — voluntary or compensated — may provide additional reviews of a company’s policies and approach, such as for content moderation. Various forms of expertise may be solicited. Subject matter experts, individuals with relevant lived expertise and knowledge of specific situations, and individuals who study the social impact of AI all offer forms of expertise that may be sought for a project. Experts may be consulted at every stage of the development life cycle.
Public Engagement Approaches and Activities
Public engagement activities include such things as:
- Focus groups or group meetings: facilitated discussions in which individuals are asked to talk about something as a group
- Participatory workshops: a series of facilitated discussion and activity sessions for a designated group
- Qualitative interviews: individual interviews wherein participants are asked a similar battery of questions
- User testing: participants are invited to use the product/service and give direct feedback about their experiences
- Diary studies: participants are invited to use the product/service over an extended period of time and maintain a regular diary wherein they note observations
- Surveys and questionnaires: distributing structured or semi-structured surveys to gather quantitative and qualitative data on public opinions, preferences, and experiences
- Community advisory boards: establishing a one-time or ongoing advisory group composed of community representatives who meet regularly to provide input on specific products
- Public consultations: structured processes whose participants are invited to review and comment on proposed policies, plans, or products before final decisions are made
- “Hack-a-thons” or innovation or red teaming exercises: intensive, time-bound events wherein diverse teams of participants collaborate to develop innovative solutions to specific challenges related to a product or service
- Listening tours: Visiting various communities to listen to concerns, needs, and perspectives in their environments
How does PAI’s Guidance build on existing public engagement approaches?
Working with the public, whether with people who are impacted by a proposed change or with users of a good, is a core activity in many other sectors. From government agencies seeking input on public services and goods to marketing teams seeking insights on what people love and hate about their products, and on to the very notion of shared governance, public engagement is an established field of practice. This resource draws upon the wide array of practices, frameworks, and research that have existed for many decades to make them applicable to AI model and system development.
Technology companies are aware of the important role of members of the public in developing technology that is marketable, works as intended, and minimizes unforeseen harms and risks. For both big tech companies and smaller start-ups, learning from, and developing with, the public is a recognized way to build better products while navigating known and unforeseen risks and harms.
For example, user-experience research is recognized as an important part of the technology development life cycle, and human-centered computing is growing as a subfield. Many approaches to user-experience research, however, have limitations. Often, the focus of user experience research is on existing or likely users of the product or to obtain feedback on how to improve or market it, rather than engaging in a more comprehensive assessment of needs, relevance, and risks. In addition, user-experience research often takes place after the product or system has largely been designed — or already deployed — when it might be too late for impacted communities to inform substantive changes.
Centering Humans in AI Development and Deployment
In the past decade, some AI developers have attempted to take such holistic and inclusive approaches borrowed from the field of design as human-centered design (HCD) and value sensitive design (VSD). These approaches emphasize designing around human needs and values through iterative processes in collaboration with the people most impacted by what is being developed.
Similar to the human-centered design approach, PAI’s Guidance for Inclusive AI emphasizes the importance of starting the discovery and problem-identification process with members of the public and integrating user feedback throughout the development process. But these frameworks need to go further. It’s not enough to design AI systems for the people directly interacting with them: AI doesn’t just impact users; it affects entire communities, often in ways that are invisible to them. Think of someone engaging with a customer support chatbot, unaware that an AI — not a human —is making decisions that could shape their experience, escalate their issue, or even deny them service.
Like the value sensitive design approach, PAI’s Guidance for Inclusive AI emphasizes the importance of learning, not only about community members’ experiences but their underlying values; these values are often reflected in AI systems. Developing inclusive AI systems demands a deep understanding of social, political, and cultural contexts because technology doesn’t exist in a vacuum. It operates in the real world and affects real people. If AI is to deliver on its promise of progress, it must be designed with the full spectrum of human experience in mind.
What are the challenges of public engagement?
Implementing initiatives to work with the broader public can be challenging. Even if undertaken with the best intentions, interaction with teams from the corporate sector can be harmful or unpleasant for the targeted community or individuals. Problems can include:
- Timing. Participants might experience their time and effort as “wasted” because their input is not integrated into the final result.
- Benefit. Participants may feel taken advantage of if their input leads to a substantial change in the final product but they do not financially or otherwise materially benefit.
- Representation. Communities are not homogenous, even if carefully selected along specific dimensions of identity and experience. Individuals come in with a complex array of identities and experiences. Their demands, concerns, and insights may ultimately converge, but they are just as likely to disagree with what other individuals are saying and prioritizing.
- Prioritization. Sifting through seemingly conflicting inputs and deciding whom to engage with is an extremely difficult aspect of working with the public, especially because it may ultimately seem that certain voices or insights must be prioritized.
- Resource constraints. Given the rapid development of AI and the fierce competition among companies to get systems and products to market, choosing to spend additional time and resources on consulting and working with different communities can be demanding.
Along with these challenges, conducting public engagement poses risks for the organization, too. Some common risks of poorly planned or executed public engagement include:
- Loss of competitive market edge. Maintaining competitiveness in a rapidly oversaturating marketplace requires timely development and release of AI systems, services, and products. The time required for public engagement — particularly more careful, qualitative work — can cause delays. The team that requires a greater amount of time to do its work may generate friction with the rest of the organization, creating internal disincentives for teams and the individuals leading them . This can particularly occur when public engagement yields insights that highlight significant potential issues or harms that need to be addressed once they have surfaced (as opposed to having “plausible deniability” when releasing a product).
- Participants don’t yield any “useful” insights. After engaging with different participant groups, it may turn out that the input they provided is inapplicable or inconsistent (e.g., individuals have provided wildly opposing viewpoints, making it difficult to know how to proceed without alienating a segment of the public). This makes it difficult to apply any such learning to the development process. Participants may also be viewed as insufficiently sophisticated on a technical basis to provide well-informed feedback.
- Negative publicity or leak of product details prior to release. Working with external participants — even with strong NDAs in place — opens the organization to the risk that product details will leak or that negative publicity might precede the product release. Individual participants may choose to speak negatively about the project solely because their input was not adopted or integrated into the project in a manner they preferred.
These risks can be mitigated with thoughtful planning and research and should not be considered a likely or inevitable outcome of working with the public.
Why does public engagement matter?
Imagine launching an AI product that seems flawless in production — only to find, once it reaches market, that it overlooks critical risks, alienates key user groups, or even causes harm. This scenario, more common than you might think, often stems from one major oversight: failing to engage the right set of impacted people early in the development process.
Working with users and other impacted members of the public isn’t just a box to check. It offers strategic advantages, fueling innovation, strengthening products, and ensuring that AI will serve people in ways that are ethical, sustainable, and effective. By involving a diverse range of voices, companies can identify risks before issues escalate into real-world consequences. Take, for instance, workers and labor organizations: Consulting them before deploying AI-driven automation can reveal hidden threats to job security, worker rights, and overall well-being. These issues might otherwise go unnoticed until it’s too late.
The benefits go far beyond risk mitigation. Meaningful public engagement broadens developers’ understanding of the historical and social contexts in which their technology will operate. It ensures that AI products are designed not just for theoretical users but for the full spectrum of people who will interact with them in their daily lives. Without such insight, developers risk missing crucial aspects of the problem context and may pursue solutions that don’t solve the targeted problems.
The final, often underestimated advantage is trust. When companies actively involve the public — whether users, advocacy groups, or impacted communities — they foster a sense of shared ownership over the technology that’s being built. This trust can translate into stronger consumer relationships, greater public confidence, and ultimately, broader adoption of AI products.
- To identify issues that can be resolved through — or markets that can be served by — the application of AI technologies (or when AI-driven technological innovation is not the best approach)
- To identify potential risks and harms as they may arise for different people or groups
- To identify circumstances in which the technology does not work as intended
- To identify friction points that make the technology difficult to use or inaccessible to certain people
- To align with general democratic principles and promote greater access and adoption of new technologies
Moreover, individuals, teams, and organizations may elect to engage with the public for a number of reasons:
- Desire to improve the product. Seeking user and public input in design choices is integral to the practices of user research and user experience (UX), which are concerned with improving a product or feature by better tailoring it to the target population’s needs and interests. Soliciting feedback from a variety of users and others will help inspire a better product. As such, product managers may be more likely to adopt public engagement for product improvement because this follows their organizational remit to build a product that is usable by, and desirable for, a broad array of people.
- Increase market share or profit. Public engagement is also a profit-generating practice on the basis that higher levels of input from users or affected people/communities will in turn create higher-quality products. The perception that an organization is open to feedback from the public and users may also increase trust and consumer loyalty. This may help organizations distinguish themselves in the wider market.
- Reputational gains. Disclosing details about public engagement can be seen as good PR for a company, an easy win for tech companies seeking to improve public perception and consumer trust.
- Personal passion for community engagement. Many AI researchers and practitioners report strong feelings of personal motivation to work more closely with socially marginalized communities and other members of the public, due to academic orientation, continuation of previous experience outside tech (such as in community organizing), or a sense of personal ethical duty.
Understanding these various incentives can help you better advocate for public engagement and set realistic expectations for its outcomes with decision-makers and leaders in the organization. In addition, it is crucial to identify common ground between an organization’s incentives for public engagement and the benefits of integrating public input at different stages of AI development.
What should I do next?
- Download and read the Guidance for Developers and Deployers New to Public Engagement
- Generate custom guidance for your project