Eyes Off My Data: Exploring Differentially Private Federated Statistics To Support Algorithmic Bias Assessments Across Demographic Groups

PAI Staff

Executive Summary

Designing and deploying algorithmic systems that work as expected, every time and for all people and situations, remains both a challenge and a priority. Rigorous pre- and post-deployment fairness assessments are necessary to surface potential bias in algorithmic systems. Post-deployment assessments, which observe whether an algorithm is operating in ways that disadvantage any specific group of people, can pose additional challenges for organizations because they often involve collecting new user data, including sensitive demographic data. The collection and use of demographic data is difficult for organizations because it is entwined with highly contested social, regulatory, privacy, and economic considerations. Over the past several years, Partnership on AI (PAI) has investigated the key risks and harms individuals and communities face when companies collect and use demographic data. In addition to well-known data privacy and security risks, such harms can stem from having one’s social identity miscategorized or one’s data used beyond data subjects’ expectations, issues PAI has explored through our demographic data workstream. These risks and harms are particularly acute for socially marginalized groups, such as people of color, women, and LGBTQIA+ people.

Given these risks and concerns, organizations developing digital technology are invested in the responsible collection and use of demographic data to identify and address algorithmic bias. For example, in an effort to deploy algorithmically driven features responsibly, Apple introduced IDs in Apple Wallet with mechanisms in place to help Apple and its partner issuing state authorities (e.g., departments of motor vehicles) identify any potential biases users may experience when adding their IDs to their iPhones. (At the time of writing this report, IDs in Wallet were available only in select US states.)

In addition to pre-deployment algorithmic fairness testing, Apple followed a post-deployment assessment strategy as well. As part of IDs in Wallet, Apple applied differentially private federated statistics as a way to protect users’ data, including their demographic data. The main benefit of differentially private federated statistics is that it preserves data privacy by combining the features of differential privacy (e.g., adding statistical noise to data to prevent re-identification) and federated statistics (e.g., analyzing user data on individual devices, rather than on a central server, to avoid creating and transferring datasets that could be hacked or otherwise misused). What is less clear is whether differentially private federated statistics can attend to some of the other risks and harms associated with the collection and analysis of demographic data. Answering that question requires a sociotechnical lens, one that considers the potential social impact of applying a technical approach.
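
To make the combination concrete, the sketch below is a minimal illustration under simplifying assumptions (our example, not Apple’s implementation): each simulated device locally perturbs a yes/no report using randomized response, a basic local differential privacy mechanism, before the report ever leaves the device, and the server only sees noisy reports, which it de-biases in aggregate to estimate a group-level rate.

    # Minimal sketch of differentially private federated statistics:
    # on-device randomized response plus server-side aggregation.
    # All names and parameters are illustrative assumptions, not Apple's system.
    import math
    import random

    EPSILON = 1.0  # privacy budget; smaller values mean more noise per report
    P_KEEP = math.exp(EPSILON) / (math.exp(EPSILON) + 1)  # probability of reporting truthfully

    def randomize_on_device(true_value: bool) -> bool:
        """Runs on the user's device: perturb the answer before it leaves the device."""
        return true_value if random.random() < P_KEEP else not true_value

    def estimate_rate(noisy_reports) -> float:
        """Runs on the server: de-bias the aggregate of noisy, device-level reports."""
        observed = sum(noisy_reports) / len(noisy_reports)
        # Invert the randomized-response mechanism to recover an unbiased estimate.
        return (observed - (1 - P_KEEP)) / (2 * P_KEEP - 1)

    # Example: 10,000 simulated devices, 5% of which hit a hypothetical enrollment failure.
    true_values = [random.random() < 0.05 for _ in range(10_000)]
    noisy = [randomize_on_device(v) for v in true_values]
    print(f"Estimated failure rate: {estimate_rate(noisy):.3f}")

Tightening the privacy budget (a smaller epsilon) adds more noise to each report, trading accuracy of the group-level estimate against stronger individual privacy.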

This report is the result of two expert convenings independently organized and hosted by PAI. As a partner organization of PAI, Apple shared details about the use of differentially private federated statistics as part of their post-deployment algorithmic bias assessment for the release of this new feature.

During the convenings, responsible AI, algorithmic fairness, and social inequality experts discussed how algorithmic fairness assessments can be strengthened, challenged, or otherwise unaffected by the use of differentially private federated statistics. While the IDs in Wallet use case is limited to the US context, the participants expanded the scope of their discussion to consider differentially private federated statistics in different contexts. Recognizing that data privacy and security are not the only concerns people have regarding the collection and use of their demographic data, participants were directed to consider whether differentially private federated statistics could also be leveraged to attend to some of the other social risks that can arise, particularly for marginalized demographic groups.

The multi-disciplinary participant group repeatedly emphasized the importance of having both pre- and post-deployment algorithmic fairness assessments throughout the development and deployment of an AI-driven system or product/feature. Post-deployment assessments are especially important as they enable organizations to monitor algorithmic systems once deployed in real-life social, political, and economic contexts. They also recognized the importance of thoughtfully collecting key demographic data in order to help identify group-level algorithmic harms.

The expert participants, however, clearly stated that a secure and privacy-preserving way of collecting and analyzing sensitive user data is, on its own, insufficient to deal with the risks and harms of algorithmic bias. Nor, they noted, is such a technique sufficient to address the full range of risks and harms associated with collecting demographic data. Instead, the convening participants identified key choice points facing AI-developing organizations to ensure the use of differentially private federated statistics contributes to overall alignment with responsible AI principles and ethical demographic data collection and use.

This report provides an overview of differentially private federated statistics and the different choice points facing AI-developing organizations in applying differentially private federated statistics in their overall algorithmic fairness assessment strategies. Recommendations for best practices are organized into two parts:

  1. General considerations that any AI-developing organization should factor into their post-deployment algorithmic fairness assessment
  2. Design choices specifically related to the use of differentially private federated statistics within a post-deployment algorithmic fairness strategy

The choice points identified by the expert participants emphasize the importance of carefully applying differentially private federated statistics in the context of algorithmic bias assessment. For example, several features of the technique can be configured in ways that undermine the very privacy-preserving and security-enhancing properties that make differentially private federated statistics valuable. Apple’s approach aligned with several of the practices suggested during the expert convenings: limiting the data retention period to 90 days, requiring users to actively opt in to data sharing (rather than using an opt-out model), clearly and simply communicating what data the user will be providing for the assessment, and maintaining organizational oversight of the query process and parameters, as illustrated in the sketch below.
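
The sketch below is purely illustrative of how the choice points above (opt-in consent, a bounded retention window, and centrally overseen query parameters) could be expressed as an explicit policy that limits which reports an analysis may use; the names, values, and structure are our assumptions, not a description of Apple’s system.

    # Hypothetical policy object encoding the choice points discussed above;
    # everything here is an assumption made for illustration.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class AssessmentPolicy:
        opt_in_required: bool = True       # users must actively opt in to data sharing
        retention_days: int = 90           # reports older than this window are excluded and deleted
        epsilon_per_query: float = 1.0     # per-query privacy budget
        total_epsilon_budget: float = 8.0  # overall cap enforced by internal oversight of queries

    @dataclass
    class Report:
        user_opted_in: bool
        collected_at: datetime
        value: bool

    def eligible_reports(reports, policy, now):
        """Keep only reports that satisfy the consent and retention choice points."""
        cutoff = now - timedelta(days=policy.retention_days)
        return [
            r for r in reports
            if (r.user_opted_in or not policy.opt_in_required) and r.collected_at >= cutoff
        ]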

The second set of recommendations surfaced by the expert participants focuses primarily on the resources (e.g., funding, time allocation, and staffing) necessary to reach alignment and clarity on the kind of “fairness” and “equity” AI-developing organizations are seeking for their AI-driven tools and products/features. While these considerations may seem tangential, expert participants emphasized the importance of establishing a robust foundation on which differentially private federated statistics can be effectively utilized. Differentially private federated statistics does not, in and of itself, mitigate all the potential risks and harms related to collecting and analyzing sensitive demographic data. It can, however, strengthen overall algorithmic fairness assessment strategies by supporting better data privacy and security throughout the assessment process.

Table of Contents

Executive Summary

Introduction

The Challenges of Algorithmic Fairness Assessments

Prioritization of Data Privacy: An Incomplete Approach for Demographic Data Collection?

Premise of the Project

A Sociotechnical Framework for Assessing Demographic Data Collection

Differentially Private Federated Statistics

Differential Privacy

Federated Statistics

Differentially Private Federated Statistics

A Sociotechnical Examination of Differentially Private Federated Statistics as an Algorithmic Fairness Technique

General Considerations for Algorithmic Fairness Assessment Strategies

Design Considerations for Differentially Private Federated Statistics

Conclusion

Acknowledgments

Funding Disclosure

Appendices

Appendix 1: Fairness, Transparency and Accountability Program Area at Partnership on AI

Appendix 2: Case Study Details

Appendix 3: Multistakeholder Convenings

Appendix 4: Glossary

Appendix 5: Detailed Summary of Challenges and Risks Associated with Demographic Data Collection and Analysis

Fairer Algorithmic Decision-Making and Its Consequences: Interrogating the Risks and Benefits of Demographic Data Collection, Use, and Non-Use

PAI Staff

Introduction and Background

Introduction

Algorithmic decision-making has been widely accepted as a novel approach to overcoming the purported cognitive and subjective limitations of human decision makers by providing “objective” data-driven recommendations. Yet, as organizations adopt algorithmic decision-making systems (ADMS), countless examples of algorithmic discrimination continue to emerge. Harmful biases have been found in algorithmic decision-making systems in contexts such as healthcare, hiring, criminal justice, and education, prompting increasing social concern regarding the impact these systems are having on the wellbeing and livelihood of individuals and groups across society. In response, algorithmic fairness strategies attempt to understand how ADMS treat certain individuals and groups, often with the explicit purpose of detecting and mitigating harmful biases.

Many current algorithmic fairness techniques require access to data on a “sensitive attribute” or “protected category” (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. These demographic-based algorithmic fairness techniques assume that discrimination and social inequality can be overcome with clever algorithms and collection of the requisite data, removing broader questions of governance and politics from the equation. This paper seeks to challenge this assumption, arguing instead that collecting more data in support of fairness is not always the answer and can actually exacerbate or introduce harm for marginalized individuals and groups. We believe more discussion is needed in the machine learning community around the consequences of “fairer” algorithmic decision-making. This involves acknowledging the value assumptions and trade-offs associated with the use and non-use of demographic data in algorithmic systems. To advance this discussion, this white paper provides a preliminary perspective on these trade-offs, derived from workshops and conversations with experts in industry, academia, government, and advocacy organizations, as well as literature across relevant domains. In doing so, we hope that readers will better understand the affordances and limitations of using demographic data to detect and mitigate discrimination in institutional decision-making more broadly.

Background

Demographic-based algorithmic fairness techniques presuppose the availability of data on sensitive attributes or protected categories. However, previous research has highlighted that data on demographic categories, such as race and sexuality, are often unavailable due to a range of organizational challenges, legal barriers, and practical concerns (Andrus et al., 2021). Some privacy laws, such as the EU’s GDPR, not only require data subjects to provide meaningful consent when their data is collected, but also prohibit the collection of sensitive data such as race, religion, and sexuality. Some corporate privacy policies and standards, such as Privacy By Design, call for organizations to be intentional with their data collection practices, only collecting data they require and can specify a use for. Given the uncertainty around whether or not it is acceptable to ask users and customers for their sensitive demographic information, most legal and policy teams urge their corporations to err on the side of caution and not collect these types of data unless legally required to do so. As a result, concerns over privacy often take precedence over ensuring product fairness, since the trade-offs between mitigating bias and ensuring individual or group privacy are unclear (Andrus et al., 2021).

In cases where sensitive demographic data can be collected, organizations must navigate a number of practical challenges throughout its procurement. For many organizations, sensitive demographic data is collected through self-reporting mechanisms. However, self-reported data is often incomplete, unreliable, and unrepresentative, due in part to a lack of incentives for individuals to provide accurate and full information (Andrus et al., 2021). In some cases, practitioners choose to infer protected categories of individuals based on proxy information, a method which is largely inaccurate. Organizations also face difficulty capturing unobserved characteristics, such as disability, sexuality, and religion, as these categories are frequently missing and often unmeasurable (Tomasev et al., 2021). Overall, deciding how to classify and categorize demographic data is an ongoing challenge, as demographic categories continue to shift and change over time and between contexts. Once demographic data is collected, antidiscrimination law and policies largely inhibit organizations from using this data, since knowledge of sensitive categories opens the door to legal liability if discrimination is uncovered without a plan to successfully mitigate it (Andrus et al., 2021).

In the face of these barriers, corporations looking to apply demographic-based algorithmic fairness techniques have called for guidance on how to responsibly collect and use demographic data. However, imposing statistical definitions of fairness on algorithmic systems without accounting for the social, economic, and political systems in which they are embedded can fail to benefit marginalized groups and can undermine fairness efforts (Bakalar et al., 2021). Therefore, developing guidance requires a deeper understanding of the risks and trade-offs inherent to the use and non-use of demographic data. Efforts to detect and mitigate harms must account for the wider contexts and power structures in which algorithmic systems, and the data they draw on, are embedded.

Finally, though this work is motivated by the documented unfairness of ADMS, it is critical to recognize that bias and discrimination are not the only possible harms stemming directly from ADMS. As recent papers and reports have forcefully argued, focusing on debiasing datasets and algorithms is (1) often misguided, because proposed debiasing methods are only relevant for a subset of the kinds of bias ADMS introduce or reinforce, and (2) likely to draw attention away from other, possibly more salient harms (Balayn & Gürses, 2021). In the first case, harms from tools such as recommendation systems, content moderation systems, and computer vision systems might be characterized as a result of various forms of bias, but resolving bias in those systems generally involves adding in more context to better understand differences between groups, not just trying to treat groups more similarly. In the second case, there are many ADMS that are clearly susceptible to bias, yet the greater source of harm could arguably be the deployment of the system in the first place. Pre-trial detention risk scores provide one such example. Using statistical correlations to determine whether someone should be held without bail, or, in other words, potentially punishing individuals for attributes outside of their control and for past decisions unrelated to what they are currently being charged with, is itself a significant deviation from legal standards and norms, yet most of the debate has focused on how biased the predictions are. Attempting to collect demographic data in these cases will likely do more harm than good, as demographic data will draw attention away from harms inherent to the system and toward seemingly resolvable issues around bias.

Table of Contents

Introduction and Background

Introduction

Background

Social Risks of Non-Use

Hidden Discrimination

“Colorblind” Decision-Making

Invisibility to Institutions of Importance

Social Risks of Use

Risks to Individuals

Encroachments on Privacy and Personal Life

Individual Misrepresentation

Data Misuse and Use Beyond Informed Consent

Risks to Communities

Expanding Surveillance Infrastructure in the Pursuit of Fairness

Misrepresentation and Reinforcing Oppressive or Overly Prescriptive Categories

Private Control Over Scoping Bias and Discrimination

Conclusion and Acknowledgements

Conclusion

Acknowledgements

Sources Cited

  1. Andrus, M., Spitzer, E., Brown, J., & Xiang, A. (2021). “What We Can’t Measure, We Can’t Understand”: Challenges to Demographic Data Procurement in the Pursuit of Fairness. ArXiv:2011.02282 (Cs). http://arxiv.org/abs/2011.02282
  2. Andrus et al., 2021
  3. Andrus et al., 2021
  4. Tomasev, N., McKee, K. R., Kay, J., & Mohamed, S. (2021). Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. ArXiv:2102.04257 (Cs). https://doi.org/10.1145/3461702.3462540
  5. Andrus et al., 2021
  6. Bakalar, C., Barreto, R., Bogen, M., Corbett-Davies, S., Hall, M., Kloumann, I., Lam, M., Candela, J. Q., Raghavan, M., Simons, J., Tannen, J., Tong, E., Vredenburgh, K., & Zhao, J. (2021). Fairness On The Ground: Applying Algorithmic Fairness Approaches To Production Systems. 12.
  7. Balayn, A., & Gürses, S. (2021). Beyond Debiasing. European Digital Rights. https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf
  8. Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder‐Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., … Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356
  9. Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Frontiers in Big Data, 2, 13. https://doi.org/10.3389/fdata.2019.00013
  10. Rimfeld, K., & Malanchini, M. (2020, August 21). The A-Level and GCSE scandal shows teachers should be trusted over exams results. Inews.Co.Uk. https://inews.co.uk/opinion/a-level-gcse-results-trust-teachers-exams-592499
  11. Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated Hate Speech Detection and the Problem of Offensive Language. Proceedings of the International AAAI Conference on Web and Social Media, 11(1), 512–515.
  12. Davidson, T., Bhattacharya, D., & Weber, I. (2019). Racial Bias in Hate Speech and Abusive Language Detection Datasets. ArXiv:1905.12516 (Cs). http://arxiv.org/abs/1905.12516
  13. Bogen, M., Rieke, A., & Ahmed, S. (2020). Awareness in Practice: Tensions in Access to Sensitive Attribute Data for Antidiscrimination. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 492–500. https://doi.org/10.1145/3351095.3372877
  14. Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. (2021, January 21). The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/
  15. Executive Order on Diversity, Equity, Inclusion, and Accessibility in the Federal Workforce. (2021, June 25). The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/06/25/executive-order-on-diversity-equity-inclusion-and-accessibility-in-the-federal-workforce/
  16. Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual Fairness. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper/2017/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html
  17. Harned, Z., & Wallach, H. (2019). Stretching human laws to apply to machines: The dangers of a ’Colorblind’ Computer. Florida State University Law Review, Forthcoming.
  18. Washington, A. L. (2018). How to Argue with an Algorithm: Lessons from the COMPAS-ProPublica Debate. Colorado Technology Law Journal, 17, 131.
  19. Rodriguez, L. (2020). All Data Is Not Credit Data: Closing the Gap Between the Fair Housing Act and Algorithmic Decisionmaking in the Lending Industry. Columbia Law Review, 120(7), 1843–1884.
  20. Hu, L. (2021, February 22). Law, Liberation, and Causal Inference. LPE Project. https://lpeproject.org/blog/law-liberation-and-causal-inference/
  21. Bonilla-Silva, E. (2010). Racism Without Racists: Color-blind Racism and the Persistence of Racial Inequality in the United States. Rowman & Littlefield.
  22. Plaut, V. C., Thomas, K. M., Hurd, K., & Romano, C. A. (2018). Do Color Blindness and Multiculturalism Remedy or Foster Discrimination and Racism? Current Directions in Psychological Science, 27(3), 200–206. https://doi.org/10.1177/0963721418766068
  23. Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press
  24. Banco, E., & Tahir, D. (2021, March 9). CDC under scrutiny after struggling to report Covid race, ethnicity data. POLITICO. https://www.politico.com/news/2021/03/09/hhs-cdc-covid-race-data-474554
  25. Banco, E., & Tahir, D. (2021, March 9). CDC under scrutiny after struggling to report Covid race, ethnicity data. POLITICO. https://www.politico.com/news/2021/03/09/hhs-cdc-covid-race-data-474554
  26. Elliott, M. N., Morrison, P. A., Fremont, A., McCaffrey, D. F., Pantoja, P., & Lurie, N. (2009). Using the Census Bureau’s surname list to improve estimates of race/ethnicity and associated disparities. Health Services and Outcomes Research Methodology, 9(2), 69.
  27. Shimkhada, R., Scheitler, A. J., & Ponce, N. A. (2021). Capturing Racial/Ethnic Diversity in Population-Based Surveys: Data Disaggregation of Health Data for Asian American, Native Hawaiian, and Pacific Islanders (AANHPIs). Population Research and Policy Review, 40(1), 81–102. https://doi.org/10.1007/s11113-020-09634-3
  28. Poon, O. A., Dizon, J. P. M., & Squire, D. (2017). Count Me In!: Ethnic Data Disaggregation Advocacy, Racial Mattering, and Lessons for Racial Justice Coalitions. JCSCORE, 3(1), 91–124. https://doi.org/10.15763/issn.2642-2387.2017.3.1.91-124
  29. Fosch-Villaronga, E., Poulsen, A., Søraa, R. A., & Custers, B. H. M. (2021). A little bird told me your gender: Gender inferences in social media. Information Processing & Management, 58(3), 102541. https://doi.org/10.1016/j.ipm.2021.102541
  30. Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. In Dark Matters. Duke University Press. https://doi.org/10.1515/9780822375302
  31. Eubanks, 2017
  32. Farrand, T., Mireshghallah, F., Singh, S., & Trask, A. (2020). Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy. Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, 15–19. https://doi.org/10.1145/3411501.3419419
  33. Jagielski, M., Kearns, M., Mao, J., Oprea, A., Roth, A., Sharifi-Malvajerdi, S., & Ullman, J. (2019). Differentially Private Fair Learning. Proceedings of the 36th International Conference on Machine Learning, 3000–3008. https://bit.ly/3rmhET0
  34. Kuppam, S., Mckenna, R., Pujol, D., Hay, M., Machanavajjhala, A., & Miklau, G. (2020). Fair Decision Making using Privacy-Protected Data. ArXiv:1905.12744 (Cs). http://arxiv.org/abs/1905.12744
  35. Quillian, L., Pager, D., Hexel, O., & Midtbøen, A. H. (2017). Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proceedings of the National Academy of Sciences, 114(41), 10870–10875. https://doi.org/10.1073/pnas.1706255114
  36. Quillian, L., Lee, J. J., & Oliver, M. (2020). Evidence from Field Experiments in Hiring Shows Substantial Additional Racial Discrimination after the Callback. Social Forces, 99(2), 732–759. https://doi.org/10.1093/sf/soaa026
  37. Cabañas, J. G., Cuevas, Á., Arrate, A., & Cuevas, R. (2021). Does Facebook use sensitive data for advertising purposes? Communications of the ACM, 64(1), 62–69. https://doi.org/10.1145/3426361
  38. Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112. https://doi.org/10.1515/popets-2015-0007
  39. Hupperich, T., Tatang, D., Wilkop, N., & Holz, T. (2018). An Empirical Study on Online Price Differentiation. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy, 76–83. https://doi.org/10.1145/3176258.3176338
  40. Mikians, J., Gyarmati, L., Erramilli, V., & Laoutaris, N. (2013). Crowd-assisted search for price discrimination in e-commerce: First results. Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies, 1–6. https://doi.org/10.1145/2535372.2535415
  41. Cabañas et al., 2021
  42. Leetaru, K. (2018, July 20). Facebook As The Ultimate Government Surveillance Tool? Forbes. https://www.forbes.com/sites/kalevleetaru/2018/07/20/facebook-as-the-ultimate-government-surveillance-tool/
  43. Rozenshtein, A. Z. (2018). Surveillance Intermediaries (SSRN Scholarly Paper ID 2935321). Social Science Research Network. https://papers.ssrn.com/abstract=2935321
  44. Rocher, L., Hendrickx, J. M., & de Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1), 3069. https://doi.org/10.1038/s41467-019-10933-3
  45. Cummings, R., Gupta, V., Kimpara, D., & Morgenstern, J. (2019). On the Compatibility of Privacy and Fairness. Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization - UMAP’19 Adjunct, 309–315. https://doi.org/10.1145/3314183.3323847
  46. Kuppam et al., 2020
  47. Mavriki, P., & Karyda, M. (2019). Automated data-driven profiling: Threats for group privacy. Information & Computer Security, 28(2), 183–197. https://doi.org/10.1108/ICS-04-2019-0048
  48. Barocas, S., & Levy, K. (2019). Privacy Dependencies (SSRN Scholarly Paper ID 3447384). Social Science Research Network. https://papers.ssrn.com/abstract=3447384
  49. Bivens, R. (2017). The gender binary will not be deprogrammed: Ten years of coding gender on Facebook. New Media & Society, 19(6), 880–898. https://doi.org/10.1177/1461444815621527
  50. Mittelstadt, B. (2017). From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology, 30(4), 475–494. https://doi.org/10.1007/s13347-017-0253-7
  51. Taylor, 2021
  52. Draper and Turow, 2019
  53. Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020). Towards a Critical Race Methodology in Algorithmic Fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 501–512. https://doi.org/10.1145/3351095.3372826
  54. Keyes, O., Hitzig, Z., & Blell, M. (2021). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1–2), 158–175. https://doi.org/10.1080/03080188.2020.1840224
  55. Scheuerman, M. K., Wade, K., Lustig, C., & Brubaker, J. R. (2020). How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1), 1–35. https://doi.org/10.1145/3392866
  56. Roth, W. D. (2016). The multiple dimensions of race. Ethnic and Racial Studies, 39(8), 1310–1338. https://doi.org/10.1080/01419870.2016.1140793
  57. Hanna et al., 2020
  58. Keyes, O. (2018). The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 88:1-88:22. https://doi.org/10.1145/3274357
  59. Keyes, O. (2019, April 8). Counting the Countless. Real Life. https://reallifemag.com/counting-the-countless/
  60. Keyes, O., Hitzig, Z., & Blell, M. (2021). Truth from the machine: Artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews, 46(1–2), 158–175. https://doi.org/10.1080/03080188.2020.1840224
  61. Scheuerman et al., 2020
  62. Scheuerman et al., 2020
  63. Stark, L., & Hutson, J. (2021). Physiognomic Artificial Intelligence (SSRN Scholarly Paper ID 3927300). Social Science Research Network. https://doi.org/10.2139/ssrn.3927300
  64. U.S. Department of Justice. (2019). The First Step Act of 2018: Risk and Needs Assessment System. Office of the Attorney General.
  65. Partnership on AI. (2020). Algorithmic Risk Assessment and COVID-19: Why PATTERN Should Not Be Used. Partnership on AI. http://partnershiponai.org/wp-content/uploads/2021/07/Why-PATTERN-Should-Not-Be-Used.pdf
  66. Hill, K. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
  67. Porter, J. (2020, February 6). Facebook and LinkedIn are latest to demand Clearview stop scraping images for facial recognition tech. The Verge. https://www.theverge.com/2020/2/6/21126063/facebook-clearview-ai-image-scraping-facial-recognition-database-terms-of-service-twitter-youtube
  68. Regulation (EU) 2016/679 (General Data Protection Regulation), (2016) (testimony of European Parliament and Council of European Union). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN
  69. Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society, 7(1), 2053951720935615. https://doi.org/10.1177/2053951720935615
  70. Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society, 7(1), 2053951720935615. https://doi.org/10.1177/2053951720935615
  71. Obar, 2020
  72. Angwin, J., & Parris, T. (2016, October 28). Facebook Lets Advertisers Exclude Users by Race. ProPublica. https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
  73. Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
  74. Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. In Dark Matters. Duke University Press. https://doi.org/10.1515/9780822375302
  75. Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  76. Hoffmann, 2020
  77. Rainie, S. C., Kukutai, T., Walter, M., Figueroa-Rodríguez, O. L., Walker, J., & Axelsson, P. (2019). Indigenous data sovereignty.
  78. Ricaurte, P. (2019). Data Epistemologies, Coloniality of Power, and Resistance. Television & New Media, 16.
  79. Walter, M. (2020, October 7). Delivering Indigenous Data Sovereignty. https://www.youtube.com/watch?v=NCsCZJ8ugPA
  80. See, for example: Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. MIT Press.
  81. See, for example: Dembroff, R. (2018). Real Talk on the Metaphysics of Gender. Philosophical Topics, 46(2), 21–50. https://doi.org/10.5840/philtopics201846212
  82. See, for example: Hacking, I. (1995). The looping effects of human kinds. In Causal cognition: A multidisciplinary debate (pp. 351–394). Clarendon Press/Oxford University Press.
  83. See, for example: Hanna et al., 2020
  84. See, for example: Hu, L., & Kohler-Hausmann, I. (2020). What’s Sex Got to Do With Fair Machine Learning? 11.
  85. See, for example: Keyes (2019)
  86. See, for example: Zuberi, T., & Bonilla-Silva, E. (2008). White Logic, White Methods: Racism and Methodology. Rowman & Littlefield Publishers.
  87. Hanna et al., 2020
  88. Andrus et al., 2021
  89. Bivens, 2017
  90. Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018). Gender Recognition or Gender Reductionism?: The Social Implications of Embedded Gender Recognition Systems. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, 1–13. https://doi.org/10.1145/3173574.3173582
  91. Keyes, 2018
  92. Keyes, 2021
  93. Fu, S., & King, K. (2021). Data disaggregation and its discontents: Discourses of civil rights, efficiency and ethnic registry. Discourse: Studies in the Cultural Politics of Education, 42(2), 199–214. https://doi.org/10.1080/01596306.2019.1602507
  94. Poon et al., 2017
  95. Hanna et al., 2020
  96. Saperstein, A. (2012). Capturing complexity in the United States: Which aspects of race matter and when? Ethnic and Racial Studies, 35(8), 1484–1502. https://doi.org/10.1080/01419870.2011.607504
  97. Keyes, 2019
  98. Ruberg, B., & Ruelos, S. (2020). Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society, 7(1), 2053951720933286. https://doi.org/10.1177/2053951720933286
  99. Tomasev et al., 2021
  100. Pauker et al., 2018
  101. Ruberg & Ruelos, 2020
  102. Braun, L., Fausto-Sterling, A., Fullwiley, D., Hammonds, E. M., Nelson, A., Quivers, W., Reverby, S. M., & Shields, A. E. (2007). Racial Categories in Medical Practice: How Useful Are They? PLOS Medicine, 4(9), e271. https://doi.org/10.1371/journal.pmed.0040271
  103. Hanna et al., 2020
  104. Morning, A. (2014). Does Genomics Challenge the Social Construction of Race?: Sociological Theory. https://doi.org/10.1177/0735275114550881
  105. Barabas, C. (2019). Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3377921
  106. Barabas, 2019
  107. Hacking, 1995
  108. Hacking, 1995
  109. Dembroff, 2018
  110. Andrus et al., 2021
  111. Holstein, K., Vaughan, J. W., Daumé III, H., Dudík, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19, 1–16. https://doi.org/10.1145/3290605.3300830
  112. Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for shifting Organizational Practices. ArXiv:2006.12358 (Cs). https://doi.org/10.1145/3449081
  113. Wachter, S., Mittelstadt, B., & Russell, C. (2021). Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. Computer Law & Security Review, 41. https://doi.org/10.2139/ssrn.3547922
  114. Xenidis, R. (2021). Tuning EU Equality Law to Algorithmic Discrimination: Three Pathways to Resilience. Maastricht Journal of European and Comparative Law, 27, 1023263X2098217. https://doi.org/10.1177/1023263X20982173
  115. Xiang, A. (2021). Reconciling legal and technical approaches to algorithmic bias. Tennessee Law Review, 88(3).
  116. Balayn & Gürses, 2021
  117. Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic Fairness from a Non-ideal Perspective. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 57–63. https://doi.org/10.1145/3375627.3375828
  118. Green & Viljoen, 2020
  119. Green, B., & Viljoen, S. (2020). Algorithmic realism: Expanding the boundaries of algorithmic thought. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 19–31. https://doi.org/10.1145/3351095.3372840
  120. Gitelman, L. (2013). Raw Data Is an Oxymoron. MIT Press.
  121. Barabas, C., Doyle, C., Rubinovitz, J., & Dinakar, K. (2020). Studying Up: Reorienting the study of algorithmic fairness around issues of power. 10.
  122. Crooks, R., & Currie, M. (2021). Numbers will not save us: Agonistic data practices. The Information Society, 0(0), 1–19. https://doi.org/10.1080/01972243.2021.1920081
  123. Muhammad, K. G. (2019). The Condemnation of Blackness: Race, Crime, and the Making of Modern Urban America, With a New Preface. Harvard University Press.
  124. Ochigame, R., Barabas, C., Dinakar, K., Virza, M., & Ito, J. (2018). Beyond Legitimation: Rethinking Fairness, Interpretability, and Accuracy in Machine Learning. International Conference on Machine Learning, 6.
  125. Ochigame et al., 2018
  126. Basu, S., Berman, R., Bloomston, A., Cambell, J., Diaz, A., Era, N., Evans, B., Palkar, S., & Wharton, S. (2020). Measuring discrepancies in Airbnb guest acceptance rates using anonymized demographic data. AirBnB. https://news.airbnb.com/wp-content/uploads/sites/4/2020/06/Project-Lighthouse-Airbnb-2020-06-12.pdf