Responsible Generative AI
At their best, new technologies can make our lives richer and our communities stronger, and we can develop them to open new avenues for connection, learning, and creativity. At their worst, they can be misused to harm, disempower, and disenfranchise for the benefit of a select few.
The rapid, widespread release of large-scale AI models, including those behind generative AI, has prompted a robust public debate about the impact of these systems and their relative benefits and risks to society across several domains, evoking responses ranging from excitement to concern.
In light of our commitment to advance AI with principles of equity, justice, and prosperity, Partnership on AI (PAI) is launching a new collective and cross-sectoral initiative with the goal of mitigating the harms and maximizing the benefits of these new technologies.
We will convene stakeholders across the private and public sectors to assess and address the impact of the rapid deployment of large-scale AI models on people and society, and will focus on ways to protect the people and communities most likely to suffer negative impacts.
By fostering a shared understanding of what is new about these large-scale models, we aim to inspire and support PAI Partners and policymakers in taking action across the AI ecosystem.
Together, we aim to develop a consensus agenda on research directions to catalyze individual and collective actions in government, industry, academia, and civil society.
Over the last decade, there have been a number of initiatives to promote AI safety and accountability, including a growing number of policy frameworks, international standards, research across disciplines, voluntary principles and practices, legal challenges, and advocacy efforts for the protection of human and civil rights. Each contributes to the development of innovative and beneficial AI applications while making it harder to deploy harmful technology. And there is more work to be done on all of these fronts to ensure that we deploy AI responsibly.
With the recent developments in generative AI, we have seen public calls prioritizing different types of these initiatives, from a voluntary pause on model training to greater transparency and third-party oversight. We have also seen governments begin to take action.
Learning from initiatives already underway, some of the questions that are prompting and guiding PAI’s initiative in this area include:
- What is new about how these systems are researched, developed, deployed, and used? And what is not new, but still unresolved?
- Are there new definitions of benefits and harms that enable a richer dialogue about the opportunities and risks of these models?
- What best practices are emerging that require further discussion, dissemination, and evaluation?
- What types of policy or third-party oversight may be needed to respond to the development and behavior of these systems while protecting civil liberties and human rights?
- What topics require focused research and knowledge mobilization to prepare for potential future developments?
- What types of international coordination, interoperability, ongoing evaluation of systems and oversight, and foresight are needed?
As we work on these questions and others together, we will make this work publicly accessible, develop useful resources, and aim to reach consensus. We will consult with broad audiences to contribute to and encourage a global dialogue about the promise and the perils of this technology. We will do this work with care and urgency.
Join us. Let’s work together to affirm what it means to be a responsible actor in this rapidly evolving landscape.
- On February 27th, PAI launched Partnership on AI’s Responsible Practices for Synthetic Media, a first-of-its-kind, community-derived framework for the ethical and responsible development, creation, and sharing of AI-generated media. Cases and pilots exploring the Framework in practice will be announced soon.
- On April 11th and 12th, PAI will hold a workshop on “Protocols for Responsible Innovation of Large-scale Models,” hosted by PAI Partner IBM. Guided by the work of our Safety Critical AI Steering Committee, this meeting brings together AI labs and experts across civil society and academia to kickstart a multistakeholder dialogue on protocols for responsible deployment of large-scale AI models.
- On June 27th and 28th, PAI will hold our Annual Partner Forum, bringing together Partners across industry, academia, civil society, and media. Through interactive sessions, our Partners will work on developing a shared agenda and consensus on key terms and priorities, including policy considerations, on generative AI.
- On July 12th and 13th, PAI’s Board of Directors will hold a capstone session on this work to engage in timely discussions with members of the international community.
- More to come.
For more information about this work and how to get involved, please join our mailing lists.