The Future of Responsible AI: 3 Key Insights From PAI’s 2023 Partner Forum

As the potential impact of AI becomes more widely understood, the need for responsible development and deployment is more apparent than ever before. At Partnership on AI’s (PAI) 2023 Partner Forum in San Francisco last month, experts from across the AI space convened to advance the solutions that this crucial moment demands.

Over three days, more than 100 representatives from industry, academia, and civil society attended a variety of panels and working sessions on topics from inclusive practices to generative AI. These leading voices came together to supercharge PAI’s ongoing work, create connections across sectors, and inspire new ideas for how to support AI that benefits everyone.

Here are three key takeaways from the Partner Forum.

1. Cross-sectoral Collaboration Is Essential

Throughout the Partner Forum, many attendees emphasized the importance of a multistakeholder approach to guiding AI’s future. Responsible AI development and deployment will require solutions that are both practical and effective — solutions that are easier to create when actors work together across sectors and borders.

At the panel “Deepfakes to DALL-E: Real Rules for Fake Media,” the CBC’s Bruce MacCormack reflected on the benefits of collaborations between news organizations and other experts when addressing questions related to synthetic media.

“We needed technical partners,” said MacCormack. “And we needed partners that understood the social impacts on marginalized communities. We needed a bunch of people we didn’t normally talk to get together and have conversations, and PAI facilitated a lot of those conversations. So it was really important.”

Irene Solaiman, Policy Director at Hugging Face, speaks at the panel “Envisioning Positive AI Futures.”

Collaborations like these informed PAI’s Synthetic Media Framework, guidance on the use of AI-generated media that is supported by a dozen organizations from the AI industry, civil society, and the media.

During a panel on global AI policy, Vinhcent Le, a board member of the California Privacy Protection Agency, highlighted the importance of multistakeholder support for facilitating trust between policymakers and the AI industry. “I think the main thing to learn, and the thing that I think we want to call for,” he said, “is more cooperation between civil society, academia, policymakers to get consensus.”

2. Inclusive Practices Are a Must — and Must Be Done Right

The importance of inclusion was a recurring theme at the Partner Forum, with wide recognition that AI can only work in everyone’s interest if affected communities are heard during its development and deployment. After announcing the inaugural members of the Global Task Force for Inclusive AI, PAI hosted a panel exploring the need to engage global voices and how incorporating their expertise can promote equitable innovation. Panelists emphasized that truly inclusive practices require more than a quick stamp of approval from affected communities.

“Inclusion is not checking boxes. It’s not saying, ‘Okay, I have people represented from this community, that community, and that’s all I have to do.’ Inclusion is really about honoring perspectives, needs, considerations,” said Stacy Hobson, who leads the Responsible and Inclusive Technologies initiative at IBM. “How do we get the input from communities and individuals and make sure that we represent it well and we’re not doing it in an extractive way? That is really, really, really hard.”

Stacy Hobson, Director of the Responsible and Inclusive Technologies research initiative at IBM, speaks at the panel “Citizen-Centric AI: Introducing the Global Task Force for Inclusive AI.”

Wilneida Negrón, a member of PAI’s Global Task Force for Inclusive AI and the Director of Policy and Research at Coworker.org, characterized inclusion as “a social and economic imperative” while observing that “it is also incredibly hard to do.” In addition to engagement itself, Negrón said that communities need the power to hold companies accountable “to whatever findings or principles come out of different participatory methodology.”

3. Recognition of AI’s Potential Grants Us an Important Opportunity

While the rapid pace of AI development was widely acknowledged by attendees, many expressed excitement that conversations about AI’s impact were progressing similarly quickly. The existence of the Partner Forum itself was cited by many speakers as representative of the opportunity we have to shape AI’s future in this crucial moment. Comparing approaches to AI to policy responses to other technologies, Le said, “I think that’s very encouraging that we’re not 10 years behind the curve here.”

Eric Horvitz, Chair of PAI’s Board of Directors and Chief Scientific Officer at Microsoft, speaks at the panel “Action or Reaction: Community Approaches to Generative AI.”

Eric Horvitz, Chair of PAI’s Board of Directors and Chief Scientific Officer at Microsoft, expressed a similar thought.

“Here’s what makes me excited: I don’t think we had this level of reflection about a technology in the past that was disruptive,” said Horvitz. “It heartens me that we’re here today, that we see not just this organization, but other discussions going on at the senior levels of government.”

In her closing remarks, PAI CEO Rebecca Finlay said she was “inspired” by the conversations that had taken place over the course of the Partner Forum.

Rebecca Finlay, PAI’s CEO, sharing remarks at the forum reception.

“This is remarkable,” said Finlay. “You are all asking the hardest questions, you understand and see the challenges, and you’re here. You’re thinking about what we can do together and how we can move forward.”

To stay informed about PAI, sign up here to receive the latest news, updates, and event invitations related to our work.