Premise of the Project

In order to understand the sociotechnical dimensions of various approaches to algorithmic bias and fairness assessments, Partnership on AI (PAI) relies on its multistakeholder model to identify both technical considerations and potential social risks and impacts. PAI convenes experts from different sectors and disciplines, ranging from technologists (technical experts) to social scientists (social issue experts) to civil society advocates (social impact experts), to provide a more holistic understanding of a given algorithmic issue. For example, PAI considers algorithmic fairness not only as the pursuit of statistical parity (see Appendix 4 for a more detailed definition), but as the pursuit of social equity that is attentive to structural inequities, power asymmetries, and histories of discrimination and oppression.
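
To make the contrast concrete, the sketch below shows a minimal statistical parity check on hypothetical data: it compares the rate of a positive outcome across demographic groups and flags a gap. The group labels, records, and 10-point tolerance are illustrative assumptions, not drawn from any system discussed in this paper, and passing such a check does not by itself establish social equity in the broader sense described above.

```python
# Illustrative only: a minimal statistical parity check on hypothetical data.
# Statistical parity asks whether the rate of a positive outcome is (roughly)
# equal across demographic groups; it does not capture structural inequities,
# power asymmetries, or histories of discrimination.
from collections import defaultdict

# Hypothetical records: (demographic_group, received_positive_outcome)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, positive in records:
    counts[group][0] += int(positive)
    counts[group][1] += 1

rates = {group: pos / total for group, (pos, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())

print(rates)        # positive-outcome rate per group
print(gap <= 0.10)  # "parity" only relative to an arbitrary 10-point tolerance
```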

PAI also recognizes that responsible AI principles are challenging to operationalize. Strategies to contend with algorithmic harms may be well-considered on paper, but run into obstacles when implemented within an organization. Legal and organizational considerations, from issues of legal liability to necessary staffing and organizational incentives, can stymie the implementation of responsible AI practices within an organization. Furthermore, an algorithmic system is not deemed to be responsibly or ethically developed simply because it functions as intended. It is also necessary to take into account the way it was developed (e.g., the development process) and the components that went into its creation (e.g., datasets). As the saying goes, “the devil is in the details,” and these more quotidian development and design decisions are often the choice points not discussed in responsible AI guidance. For these reasons, the opportunity to observe and learn directly from teams and organizations as they implement various responsible AI practices or strategies is invaluable. (See Appendix 1 for more information about Partnership on AI’s Fairness, Transparency, and Accountability (FTA) program area and its existing work on the use of demographic data for algorithmic fairness purposes.)

To better examine the potential of differentially private federated statistics to support a more robust approach to algorithmic bias identification, PAI collaborated with a major technology company from its Partner community. (See the section titled “Funding Disclosure” for more information regarding Partnership on AI’s relationship with Apple, Inc.) As part of their roll-out of IDs in Wallet in the United States, Apple implemented differentially private federated statistics to support their post-deployment algorithmic fairness assessment strategy. (See Appendix 2 for more details about Apple’s algorithmic fairness assessment strategy for their new IDs in Wallet feature.)

PAI organized two multistakeholder expert convenings, using the details provided by Apple to host a more grounded and specific discussion about differentially private federated statistics in a real case. (The purpose of the workshops and this report is to provide the AI community with guidance on an important and novel technique. While Apple benefits from the case-specific discussion hosted by Partnership on AI, the role of PAI — and the experts who participated in the convenings — is not to assess Apple on the relative success (or lack thereof) of a technique they chose to employ in the roll-out of their IDs in Wallet feature.) Each three-hour virtual workshop was organized to examine how this data privacy mechanism can support or limit more responsible data collection and analysis for algorithmic fairness assessments. The context of US digital identification cards — particularly the stakes of ensuring that all people are able to successfully onboard and utilize a digital identification card if they wish to — surfaced key points about the importance of addressing algorithmic bias using tools like differentially private federated statistics. These included discussions about how different social identities are defined and measured in order to determine whether a group experiences algorithmic harm related to that identity, and about whether people with highly marginalized social identities would feel safe disclosing their identities, even for the purposes of identifying potential algorithmic harm.
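
As background for those discussions, the sketch below illustrates one common construction of differentially private federated statistics: each device applies randomized response to a simple success/failure signal tied to a self-reported demographic group, only the noisy bit leaves the device, and the server de-biases the aggregated reports to estimate group-level failure rates. The epsilon value, group labels, and simulated signals are hypothetical assumptions for illustration; this generic sketch is not a description of Apple’s implementation.

```python
# Illustrative sketch of differentially private federated statistics using local
# randomized response. Epsilon, group labels, and signals are hypothetical; this
# is a generic illustration, not a description of any production system.
import math
import random
from collections import defaultdict

EPSILON = 1.0                                          # assumed per-report privacy budget
P_TRUTH = math.exp(EPSILON) / (math.exp(EPSILON) + 1)  # probability of reporting the true bit

def local_report(failed: bool) -> bool:
    """Runs on-device: keep the true success/failure bit with probability P_TRUTH, else flip it."""
    return failed if random.random() < P_TRUTH else not failed

def estimate_failure_rate(reports):
    """Runs server-side: de-bias the aggregated randomized-response reports."""
    observed = sum(reports) / len(reports)
    # E[observed] = P_TRUTH * true_rate + (1 - P_TRUTH) * (1 - true_rate)
    return (observed - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)

# Hypothetical device signals: (self-reported group, onboarding step failed?)
device_signals = (
    [("group_a", False)] * 900 + [("group_a", True)] * 100
    + [("group_b", False)] * 800 + [("group_b", True)] * 200
)

noisy_reports = defaultdict(list)
for group, failed in device_signals:
    noisy_reports[group].append(local_report(failed))  # only the randomized bit leaves the device

for group, reports in noisy_reports.items():
    # Expected estimates are roughly 0.10 for group_a and 0.20 for group_b
    print(group, round(estimate_failure_rate(reports), 3))
```

The property this sketch is meant to convey is that the server can recover approximate group-level failure rates without observing any individual’s true value, at a cost in statistical precision governed by the privacy budget.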

The 38 participating experts were drawn from a variety of backgrounds, including industry, academia, and civil society, with specializations spanning racial, disability, gender, and LGBTQIA+ equity, as well as data privacy and algorithmic fairness. (Although the case study provided by Apple is specific to the United States, the multistakeholder convenings also included experts from Canada and the United Kingdom, who noted how the use of differentially private federated statistics in their socio-political contexts may be similar or different. Additional research should be conducted to determine how the use of differentially private federated statistics may differ in non-Western contexts and for non-corporate organizations developing and/or deploying AI.) These convenings were designed to explore differentially private federated statistics through both social and technical lenses. Participants were also encouraged to consider the risks surrounding sensitive demographic data collection and analysis that may not be fully mitigated through the application of differentially private federated statistics, as well as the additional steps organizations could take to strengthen their overall algorithmic fairness approach.

This white paper is based on the insights provided by the multistakeholder body of experts across the two convenings, as well as a review of available secondary literature on differential privacy, federated learning, and the social considerations of demographic data collection and use. Regular weekly discussions with key Apple staff involved with the implementation of differentially private federated statistics for algorithmic bias identification in IDs in Apple Wallet (these included regular touchpoints with the Product and Engineering teams, as well as some consultations with the Business Development, Marketing, and Legal teams) also helped to clarify PAI’s understanding of Apple’s overall approach to algorithmic fairness.