We are at a critical moment for AI’s development, its applications, and use. Advancements in algorithms, computing power, and rich data are helping the field solve technical challenges and advance capabilities such as perception, natural language understanding, and robotics, bringing great value in the years ahead. New technologies made possible by these advances affect fields such as education, accessibility, health, and public administration.
However, with technological advancement in AI come concerns, challenges, and questions about responsible development and the impact of these technologies on people’s lives. These concerns include the safety of AI technologies, the fairness and transparency of systems, and other intentional as well as inadvertent influences of AI on people and society, including potential impacts on privacy, criminal justice, and human rights, among other domains.
At the Partnership on AI, our mission is to bring together a multistakeholder community spanning technology developers, researchers, advocates, and organizations representing affected communities in order to develop a shared understanding of the challenges and opportunities that touch a wide range of constituencies, as well as promising answers and strategies to address those issues. Importantly, the Partnership is focused on translating answers to these questions into practice at organizations at the forefront of AI development and deployment, many of which are members of our community and deep contributors to our work.
Our Working Group Charters
At the heart of that mission is a series of Working Groups that bring together diverse voices from across our partnership of civil society organizations, academic institutions, and leading technology and professional service organizations. These groups will seek to answer the questions raised by AI development by carrying out ambitious research that explores the opportunities and impacts of AI. They will also work to develop the policies, tools, and principles that will act as guardrails for future AI development.
The first three of these groups launched in recent months and now collectively involve over 150 of the world’s leading thinkers on topics of importance to AI and its impacts. Each group is led by Co-Chairs from our Partner organizations. Today, we are excited to make public the Charter for each of these groups. These guiding documents define the key questions and considerations for each area, and have been generated and defined by the members and leadership of the groups.
These first groups aim to address three of the most important challenge areas for AI development. Their Charters help scope and set out the goals of each group, and are available below:
AI, Labor, and the Economy
Exploring how we can minimize the disruptions AI advances may cause to individual workers and the labor markets in which they participate
Co-chairs: Elonnai Hickok, Centre for Internet and Society; Michael Chui, McKinsey & Company
The Charter, with examples of planned activity, work, and research, is available here.
Safety-Critical AI
Focusing on the deployment of AI in safety-critical environments, and exploring how we can ensure AI in those settings is safe, trustworthy, and ethical
Co-chairs: Peter Eckersley, Electronic Frontier Foundation; Ashish Kapoor, Microsoft
The Charter, with examples of planned activity, work, and research, is available here.
Fair, Transparent and Accountable AI
Encouraging the development of AI systems that perform effectively while respecting principles of fairness, transparency, and accountability
Co-chairs: Edward Felten, Princeton University Center for Information Technology Policy; Verity Harding, DeepMind
The Charter, with examples of planned activity, work, and research, is available here.
We look forward to sharing our understanding of the opportunities and challenges that touch a wide range of issues and actors in AI, and to developing society’s answers to the most important questions in this field.