
How to Analyze Data while Preserving User Privacy


Can Differentially Private Federated Statistics Help Advance Algorithmic Fairness?

Collection and analysis of demographic data are often required to conduct algorithmic fairness assessments. Over the past four years, we’ve investigated key risks and harms individuals and communities face when AI-developing organizations collect demographic data. These include well-known data privacy and security risks as well as other harms, such as having one’s social identity miscategorized or one’s data used beyond data subjects’ expectations.

Differentially private federated statistics is an increasingly popular technique that allows teams to analyze data while preserving user privacy. Our new white paper, “Eyes Off My Data: Exploring Differentially Private Federated Statistics To Support Algorithmic Bias Assessments Across Demographic Groups,” investigates whether this privacy-preserving technique can address the various individual and community-level risks posed by demographic data collection.
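To make the core idea more concrete, here is a minimal, illustrative sketch of the general approach: each device perturbs its own data locally before reporting it, and the server aggregates the noisy reports to estimate a population statistic without seeing any individual’s true value. This example uses simple randomized response for a single binary attribute; it is not Apple’s implementation or the specific protocol discussed in the white paper, and the parameter names (epsilon, true_values) are illustrative assumptions.

```python
# Minimal sketch of differentially private federated statistics (assumed setup):
# each device holds one binary value, perturbs it locally with randomized
# response, and the server debiases the aggregate to estimate the true rate.
import math
import random


def randomized_response(value: bool, epsilon: float) -> bool:
    """Locally perturb a single bit with epsilon-differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return value if random.random() < p_truth else not value


def estimate_rate(noisy_reports: list[bool], epsilon: float) -> float:
    """Server-side: debias the aggregated noisy reports to estimate the true rate."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(noisy_reports) / len(noisy_reports)
    # Invert the randomization: observed = p*rate + (1 - p)*(1 - rate)
    return (observed - (1 - p_truth)) / (2 * p_truth - 1)


if __name__ == "__main__":
    epsilon = 1.0
    # Simulated devices: 30% of users have the attribute of interest
    true_values = [random.random() < 0.3 for _ in range(100_000)]
    reports = [randomized_response(v, epsilon) for v in true_values]
    print(f"True rate:      {sum(true_values) / len(true_values):.3f}")
    print(f"Estimated rate: {estimate_rate(reports, epsilon):.3f}")
```

The key property illustrated here is that no individual report is trustworthy on its own (each user has plausible deniability), yet the aggregate remains useful for analysis such as comparing outcomes across demographic groups.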

We found that differentially private federated statistics, in and of itself, does not mitigate all the potential risks and harms related to collecting and analyzing sensitive demographic data. It can, however, strengthen overall algorithmic fairness assessment strategies by supporting better data privacy and security throughout the assessment process. We use a sociotechnical framework, considering potential harms to individuals as well as more general social impacts. The technical accuracy of differentially private federated statistics is examined within the context of a broader algorithmic fairness assessment strategy.

Partnership with Apple: A Case Study of Differentially Private Federated Statistics & Apple’s ID in Wallet

To better examine the potential of differentially private federated statistics to support a more robust approach to algorithmic bias identification, PAI worked with Apple, a founding organization of the Partnership. As part of their roll-out of IDs in Wallet in the United States, Apple implemented differentially private federated statistics to support their post-deployment algorithmic fairness assessment strategy. PAI organized two multistakeholder expert convenings, using details provided by Apple to ground a more specific discussion of differentially private federated statistics in a real-world case.

Our convening discussions emphasized that the use of differentially private federated statistics, on its own, does not mitigate all the potential risks and harms related to collecting and analyzing demographic data. Instead, teams should leverage this technique in conjunction with other pre- and post-deployment algorithmic fairness assessments throughout the development and deployment of an AI-driven system. For example, Apple is using differentially private federated statistics as part of a larger fairness assessment strategy for IDs in Wallet that includes adherence to existing inclusivity and accessibility design guidelines, pre-deployment user testing, and inclusivity roundtable discussions with stakeholders. Differentially private federated statistics should also be designed with the specific needs of the bias assessment in mind to ensure its use contributes to overall alignment with responsible AI principles and ethical demographic data collection and use. Together, these considerations can help organizations design more comprehensive, equitable, and successful fairness assessments. Read the full paper here.

What’s Next

We will continue to explore equity-based approaches to algorithmic discrimination assessment through our Participatory and Inclusive Demographic Data Guidelines, which we will publish for public comment in February 2024.

Sign up for our mailing list and stay up-to-date with the Guidelines and all of PAI’s Fairness, Transparency, and Accountability work.

Read the White Paper