
“Fair, Transparent, and Accountable AI” Working Group launches in London

PAI recently convened its second Working Group, focused on Fair, Transparent, and Accountable AI (FTA). The group comprises 70 representatives from technology, academia, and civil society engaging in dialogue, research, and the creation of best practices for addressing intelligent systems that may behave unfairly, opaquely, or without accountability.

Verity Harding, Co-Lead of Ethics & Society at DeepMind, and Edward Felten, Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton University and founding Director of Princeton's Center for Information Technology Policy, are co-chairing the group.

AI has the potential to improve decision-making through techniques that identify underlying patterns and draw inferences from large amounts of data. This can lead to breakthroughs in domains such as safety, health, education, transportation, sustainability, public administration, and basic science. These possible benefits, however, coincide with serious and justifiable concerns, shared by both the public and specialists in the field, about the harms that AI may produce.

The FTA Working Group will therefore encourage the development of AI systems that perform effectively while respecting notions of fairness, transparency, and accountability; explore interpretations of these terms as applied to AI; and foster public discussion of these values and their possible tensions with other goals of AI decision-making. The group's goal is to provide a practical toolkit that helps those engaged in AI development and deployment act responsibly.

We are excited to follow the group's progress as it pursues ambitious and achievable undertakings related to fairness, transparency, and accountability in AI.