Despite what many sci-fi movies may lead us to believe, we are not destined to live in a world where AI harms society for the benefit of a select few. While the risks are real, no technological future is fixed. If technology is to work for the benefit of all of humanity, it will need to be designed and deployed in dialogue with all stakeholders — and not treat their differing needs as an afterthought.
With multistakeholder collaboration and responsible policymaking, we can build an equitable future empowered by AI. In that spirit, Partnership on AI develops tools, resources, and recommendations to change industry practice and ensure advancements in AI benefit people and society.
Over the course of 2023, we published four sets of recommended practices towards the following goals:
- responsible development, creation, and sharing of synthetic media
- safe deployment of foundation models
- AI development and deployment that truly works for workers
- responsible AI adoption by news organizations
Through a collaborative process with our partner community, including AI practitioners, civil society experts, and academic researchers, among others, our team identifies actionable recommendations that, if adopted at scale, can prevent AI’s potential harms and expand its benefits.
The PAI recommendations released in 2023 aimed to:
Set Real Rules for “Fake” Media
AI-generated content, once the product of experimental research in computer science labs, has become a major engine of digital content creation. In February 2023, we launched PAI’s Responsible Practices for Synthetic Media, a framework on how to responsibly develop, create, and share synthetic media: the audiovisual content often generated or modified by AI. The Framework is the culmination of a year of consultation with PAI partners and collaborators, and 18 organizations, from OpenAI to TikTok to WITNESS, have signed on as Framework supporters.
Provide Custom Guidance for Model Deployers
Given the potentially far-reaching impacts of foundation models, we collaborated with our global community to develop PAI’s Guidance for Safe Foundation Model Deployment. This is a framework for model providers to responsibly develop and deploy foundation models across a spectrum of current and emerging capabilities, helping anticipate and address risks. We are currently accepting public comments on the Guidance; please submit feedback by Jan. 15, 2024.
Make AI Work for Workers and the Economy
AI has the potential to radically disrupt people’s economic lives in both positive and negative ways, and which of these we’ll see more of remains to be determined. AI developers, AI users, policymakers, labor organizations, and workers can all help steer AI so its economic benefits are shared by all. Using PAI’s Shared Prosperity Guidelines, these stakeholders can minimize the chance that individual AI systems harm outcomes relevant to shared prosperity. We continue to seek input on the Guidelines; please get in touch to share feedback.
Help News Organizations Responsibly Adopt AI
AI is already changing the way news is being reported. While many AI tools can benefit journalists and streamline processes, their use also presents risks. From potentially spreading misinformation to making biased statements, the cost — both literal and figurative — of misusing AI in journalism can be high. AI Adoption for Newsrooms: A 10-Step Guide provides a step-by-step roadmap to support newsrooms navigating the difficult questions posed by AI tool identification, procurement, and use.
* * *
In the past year, we’ve seen policymakers and regulators around the globe seek to implement AI policies to protect citizens from potential AI harms. PAI’s recommendations provide important examples of practical guidance created with input from stakeholders across the AI ecosystem.
We look forward to working with our community to implement and refine these recommendations in 2024.