Conclusion

An important part of responsible AI development is recognizing that it is difficult, if not impossible, to release an algorithmically-driven feature or product that is guaranteed to work every time for all people and situations. Rigorous pre- and post-deployment fairness assessments are necessary to surface any potential bias in algorithmic systems. Post-deployment fairness assessments can pose additional challenges to organizations, as they often involve collecting new user data, including sensitive demographic data, to observe whether the algorithm is operating in ways that disadvantage any specific group of people. The collection and use of demographic data are widely recognized as challenging for organizations due to concerns about data privacy, data security, and legal barriers. Demographic data collection also poses key risks to data subjects and communities, such as misuse or abuse of the data (including potentially discriminatory uses), as well as harms stemming from misrepresentation and miscategorization in datasets.

In an effort to deploy algorithmically driven features responsibly, Apple introduced IDs in Apple Wallet with mechanisms in place for Apple (and the state authority issuing the identification card) to identify any potential biases users may experience when setting up or using their new digital ID. For this feature, which is currently available only in the United States, Apple applied differentially private federated statistics to protect users’ data, including their demographic data. The main benefit of differentially private federated statistics is that it preserves data privacy by combining the protections of differential privacy (e.g., adding statistical noise to data to prevent re-identification) with those of federated statistics (e.g., analyzing user data on individual devices, rather than on a central server, to avoid creating centralized datasets that can be hacked or otherwise misused).
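
To make this combination concrete, the sketch below simulates one simple local-differential-privacy mechanism (randomized response) in Python. It is an illustrative toy, not Apple’s implementation: the scenario, function names, signal, and epsilon value are all hypothetical, and a production system would add further protections (such as the secure aggregation discussed later) on top of on-device noise.

```python
import math
import random

# Minimal sketch of the two ingredients described above, under simplified assumptions.
# Each (hypothetical) device holds one yes/no signal -- e.g., "did ID setup fail for
# this user?" -- and perturbs it locally with randomized response before anything
# leaves the device (local differential privacy). Epsilon is the privacy-loss parameter.
def local_report(true_value: bool, epsilon: float) -> bool:
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)  # probability of reporting truthfully
    return true_value if random.random() < p_truth else not true_value

# The server only ever receives noisy reports and debiases the aggregate to
# estimate the population failure rate; no raw individual answer is collected.
def estimate_rate(noisy_reports: list[bool], epsilon: float) -> float:
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(noisy_reports) / len(noisy_reports)
    return (observed - (1 - p)) / (2 * p - 1)  # standard randomized-response debiasing

# Simulate 10,000 devices where the true failure rate is 12%.
random.seed(0)
epsilon = 1.0
reports = [local_report(random.random() < 0.12, epsilon) for _ in range(10_000)]
print(round(estimate_rate(reports, epsilon), 3))  # prints an estimate close to 0.12
```

Even this minimal version shows the core property: the server can estimate a group-level rate without ever receiving an individual’s raw answer.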

A member organization of Partnership on AI (PAI), Apple shared details about its use of differentially private federated statistics in a US context for discussion by responsible AI, algorithmic fairness, and social inequality experts across two convenings. At these convenings, independently organized and hosted by PAI, the experts discussed how algorithmic fairness assessments are strengthened, challenged, or otherwise unaffected by the use of differentially private federated statistics. PAI applies a sociotechnical lens to various AI issues, including algorithmic fairness and bias, in order to draw attention to the complex ways AI can have social impact, particularly for marginalized demographic groups.

Expert participants were asked to consider not only the specific technical strengths or weaknesses of differentially private federated statistics but how this approach interacts with an overall algorithmic fairness strategy. Recognizing that data privacy and security are not the only concerns people have regarding the collection and use of their demographic data, participants were directed to consider whether differentially private federated statistics could also be leveraged to attend to some of the other social risks that can arise.

The expert participants — drawn from commercial AI companies, research institutions, and civil society organizations — emphasized the importance of having both pre- and post-deployment algorithmic fairness assessments throughout the development and deployment of an AI-driven system or product/feature. Post-deployment assessments are especially important because they enable organizations to monitor algorithmic systems once deployed in real-life social, political, and economic contexts. The participants also recognized the importance of thoughtfully collecting some demographic data in order to help identify group-level algorithmic harms.

The expert participants, however, clearly noted that a secure and privacy-preserving way of collecting and analyzing sensitive user data is, on its own, insufficient to address the risks and harms of algorithmic bias; nor is such a technique, on its own, entirely sufficient to address the risks and harms of collecting demographic data. Instead, the convening participants identified key choice points facing AI-developing organizations that, when navigated deliberately, help ensure the use of differentially private federated statistics contributes to overall alignment with responsible AI principles and with ethical demographic data collection and use.

The following tables (Tables 2 and 3) summarize the different choice points and recommendations for best practices identified by the expert participants. Recommendations are organized into two types:

  1. general considerations that any AI-developing organization should consider for their post-deployment algorithmic fairness assessment (Table 2)
  2. design choices specifically related to the use of differentially private federated statistics within a post-deployment algorithmic fairness strategy (Table 3)

The choice points identified by the expert participants and summarized in Table 3 emphasize the importance of carefully configuring differentially private federated statistics in the context of algorithmic bias assessment. The participants noted that several features of the technique can be set in ways that reduce the efficacy of its privacy-preserving and security-enhancing aspects. Several expert participants highlighted Apple’s decisions to limit the data retention period to 90 days, to share clearly and simply what data users will be providing for the assessment, and to maintain organizational oversight of the query process and parameters as aligning with the best practices they would recommend.

Many of the recommendations surfaced by the expert participants focus on the resources (e.g., financial, time allocation, and staffing) AI-developing organizations need in order to reach alignment and clarity on the kind of “fairness” and “equity” they are seeking for their AI-driven tools and products/features before integrating differentially private federated statistics into their overall bias mitigation strategy. While these considerations may seem tangential, the experts emphasized the importance of establishing a robust foundation on which differentially private federated statistics can be effectively utilized. Any form of demographic data collection or use can expose people to potential risk or harm. Regardless of the steps taken to minimize such risk, collecting demographic data without an explicit purpose or an effective plan for its responsible use is not justifiable given the potential individual or societal cost. Differentially private federated statistics, in and of itself, does not mitigate all the potential risks and harms of collecting and analyzing sensitive demographic data. It can, however, strengthen overall algorithmic fairness assessment strategies by supporting better data privacy and security throughout the assessment process.

TABLE 2: General Considerations for Algorithmic Fairness Assessment Strategies

Choice point: Establishing organizational support
Recommendation(s):
  • Organizations should provide teams with adequate time and resources to design and deploy algorithmic fairness assessments.
  • Teams should obtain executive, leadership, and middle management buy-in to ensure they receive the proper support to effectively address any bias identified.
  • Team members involved in conducting the overarching fairness assessment, of which differentially private federated statistics is one component, should ensure meaningful engagement with non-technical experts and community groups to inform their overall approach.
    • This involves setting expectations, maintaining communication, and providing compensation for those external to the organization who contribute their time and expertise.

Choice point: Defining fairness
Recommendation(s):
  • Organizations should achieve alignment between technical and non-technical definitions of fairness.
  • Organizations should achieve alignment between developer and user or public understanding and measurement of fairness.
  • Organizations must practice transparency when it comes to how they define fairness.

Choice point: Identifying relevant demographic categories
Recommendation(s):
  • Organizations should allocate the resources necessary to conduct original research into which demographic categories are relevant, as well as how the communities that interact with the algorithmic system define themselves, to yield a more inclusive fairness assessment process.

Choice point: Determining the data collection method(s)
Recommendation(s):
  • Organizations should provide individuals with complementary opportunities to self-identify or to check their ascribed demographics if using inference techniques.
  • Teams should account for sampling bias by conducting targeted outreach to communities at risk of underrepresentation.
  • Organizations should ensure participants are given a clear, accessible opportunity to accept or refuse participation with an informed understanding of the privacy protection provided to them, what their data will be used for, and how long their data will be retained.
TABLE 3: Design Considerations for Differentially Private Federated Statistics

Choice point: Choosing the differential privacy model (local vs. central differential privacy)
Recommendation(s):
  • Organizations should use local differential privacy (LDP) to provide the strongest privacy protection for individuals who share their data.
  • Organizations should consider incorporating a secure aggregation protocol alongside LDP to bolster privacy once data is received by the central server.

Choice point: Determining the appropriate privacy budget/epsilon
Recommendation(s):
  • Teams should choose epsilon and other privacy parameters with the needs of those most at risk of algorithmic harm at the center (which will often benefit all users), rather than basing the choice on the needs of the majority of users, as the latter could exacerbate existing inequities (a rough illustration of this trade-off follows the table).

Choice point: Designing queries
Recommendation(s):
  • Teams should ensure query parameters align with the chosen definition of fairness.
  • Teams should work with interdisciplinary experts and/or community groups in designing query parameters.
  • Teams should balance data minimization with the need for robust fairness assessment, depending on the specific context.

Choice point: Determining the data retention period
Recommendation(s):
  • Organizations should institute a data retention period to ensure individual data is not perpetually used or accessible.
  • Organizations should think carefully about how the data retention period will affect their ability to identify bias when users contribute their data over a long time period, particularly for statistical minorities, who may not all contribute their data at once.
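
As a rough illustration of the epsilon and subgroup-size trade-off referenced in Table 3 (a hypothetical back-of-envelope calculation, not part of any deployed system), the sketch below estimates how the uncertainty of a randomized-response rate estimate grows as epsilon shrinks and as the reporting group gets smaller. The group sizes, epsilon values, and 10% baseline rate are assumptions chosen only for illustration.

```python
import math

# Hypothetical back-of-envelope check for choosing epsilon: the approximate standard
# error of a randomized-response rate estimate for subgroups of different sizes.
# Smaller subgroups (statistical minorities) need more reports or a larger epsilon to
# reach the same accuracy, which is why parameter choices should center the needs of
# those most at risk of algorithmic harm rather than the majority of users.
def rr_standard_error(n_reports: int, epsilon: float, true_rate: float = 0.1) -> float:
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)          # probability of a truthful report
    q = true_rate * (2 * p - 1) + (1 - p)                    # probability a noisy report is "yes"
    return math.sqrt(q * (1 - q) / n_reports) / (2 * p - 1)  # standard error after debiasing

for group, n in [("majority group", 100_000), ("small subgroup", 2_000)]:
    for eps in (0.5, 1.0, 2.0):
        print(f"{group:>14}  n={n:>6}  epsilon={eps}: +/-{rr_standard_error(n, eps):.3f}")
```

A parameter choice that yields acceptable uncertainty for a group of 100,000 can leave a group of 2,000 with estimates too noisy to detect the very disparities the assessment is meant to surface.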