
Inclusion in the Algorithm: A Q&A with CDT’s Ariana Aboulafia on AI and Disability


Ariana Aboulafia is an attorney with a strong background in public interest advocacy – her expertise spans disability rights, technology, criminal law, and the First Amendment. She leads the Disability Rights in Technology Policy Project at the Center for Democracy & Technology, which focuses on addressing tech-facilitated disability discrimination. Although discrimination toward disabled people isn’t new, AI and algorithmic technologies can pose new challenges. A guest speaker at PAI’s 2024 Partner Forum, Ariana shared how AI, algorithmic tools, and related technologies have become force multipliers, further entrenching the ableism that exists in employment, education, healthcare, housing, transportation, and beyond. AI and other technologies permeate every aspect of our lives, which exacerbates the risks they can pose to disabled people and other marginalized communities.


So why does this problem exist, and what can we do about it? We sat down with Ariana to discuss inclusive design as a way to address risks, responsible data collection, and her hopes for the future.

Thalia K: In your talk at PAI’s Partner Forum, you discussed real-world harms that AI systems have caused for people with disabilities such as Crohn’s disease, diabetes, and ADHD in housing, healthcare, and education. What actions can organizations take to audit existing systems for ableism and ensure this technology is inclusive of all people?

Ariana A: For organizations that are already using AI tools or algorithmic systems, it is important to run post-deployment audits that test for all kinds of bias and biased impacts on users, including people with disabilities. One concern with post-deployment audits (and this applies to pre-deployment audits as well) is that they may test for other types of bias, like racial or gender bias, without being inclusive of disability. When that happens, organizations may genuinely believe that their systems are not biased (depending, of course, on the results of the audit) when they actually are. It’s also really important that the audit not be a box-checking exercise – that is, organizations should let the results of the audit inform their decision about whether to keep the system in place or change course.
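
To make the idea of a disability-inclusive audit concrete, here is a minimal sketch of a post-deployment selection-rate check. The group labels, sample data, and the four-fifths (0.8) threshold are illustrative assumptions for this post, not a prescribed methodology from CDT or PAI.

```python
# Minimal post-deployment audit sketch: compare selection rates across
# groups, including disability status, and flag large gaps for review.
# Group labels, data, and the 0.8 threshold are illustrative assumptions.

from collections import defaultdict

records = [
    # (group label from a self-reported, consent-based field, selected?)
    ("no_disability", True), ("no_disability", True), ("no_disability", False),
    ("disability", True), ("disability", False), ("disability", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in records:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

The point of the sketch is the structure of the check: disability status is audited alongside every other group, and a flagged ratio is a prompt to investigate and potentially change course, not a box to check.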

TK: Your work has focused a lot on integrating principles of inclusive design into the creation of AI and algorithmic tools to mitigate risks while maximizing potential benefits for disabled people. What are some practices of inclusive design developers should consider in creating technologies that are accessible to all users, especially those with disabilities?

AA: One of the main precepts of inclusive design is that, by creating spaces and systems in ways that are inclusive of disabled people, designers can make systems that are more likely to be inclusive of everyone, including other marginalized groups. The benefits of inclusive design are sometimes illustrated by the so-called “curb cut effect,” where physical spaces with curb cuts were found to help not only people who use wheelchairs, but also parents with strollers, travelers with suitcases, and more. The same effect can be seen in the thoughtful, inclusive design of algorithmic systems or AI tools. One practice of inclusive design that AI developers should adopt is ensuring that their products are human-centered and that users have control over their experience. People with disabilities should be involved not only as users, but also in the design process, as well as in the deployment, auditing, and procurement of algorithmic and AI-integrated tools.

TK: In your talk you mentioned that involving disabled people in the creation, deployment, auditing, and procurement of all of these technologies as well as tech policy is essential to reducing discrimination in these systems. How can developers and policymakers include disabled people in these processes? How can they sustain these relationships to ensure consistent engagement?

AA: There are so many people with disabilities who bring unique expertise, not only from their experience of being disabled but also from their subject matter knowledge. It’s important that developers and policymakers consider disabled people when doing stakeholder engagement, and when hiring the people who help build technologies and craft tech policies. And this cannot be done in a way that merely checks a box; instead, it should represent real, sustainable relationships that show respect for both the lived and learned experience of disabled people over time.

TK: You’ve mentioned that data collection is vital to creating inclusive AI and algorithmic systems but that there are many challenges to doing this right. What are some big risks in collecting data on disabled people or other marginalized groups and how can they be mitigated?

AA: Last year, I co-authored a report that explains some of the reasons why it can be difficult to collect accurate and inclusive disability data. In short, variation in how disability is defined, social stigma, difficulties in making data collection mechanisms accessible, and other issues all contribute to an exclusionary data environment. Furthermore, for people with disabilities, as for other marginalized groups, it is vital that data collection be done with full, informed consent – to ensure this for disabled participants, plain language and other accessible resources should be available throughout the collection process. It is also important to collect data in a way that protects personal and data privacy, particularly when that data is sensitive or identifiable in any way. By implementing policies like data minimization, purpose limitation, and deletion, data collectors can mitigate some of the privacy-related concerns for disabled and other marginalized populations while still building inclusive datasets.
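
As one way to picture how those three policies fit together in practice, here is a minimal sketch of an intake pipeline that enforces data minimization, purpose limitation, and deletion for a hypothetical survey dataset. The field names, the single allowed purpose, and the 180-day retention window are assumptions made for illustration, not recommendations from the report.

```python
# Illustrative sketch: data minimization, purpose limitation, and deletion
# applied to a hypothetical survey dataset. Field names, the allowed purpose,
# and the retention window are assumptions, not a prescribed standard.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

ALLOWED_PURPOSES = {"accessibility_research"}  # purpose limitation
RETENTION = timedelta(days=180)                # deletion window

@dataclass
class SurveyRecord:
    response: str          # keep only the fields the study actually needs
    collected_at: datetime
    purpose: str

def minimize(raw: dict) -> SurveyRecord:
    # Data minimization: identifying fields (name, address, ...) in the raw
    # submission are simply never stored.
    return SurveyRecord(
        response=raw["response"],
        collected_at=datetime.now(timezone.utc),
        purpose=raw["purpose"],
    )

def usable(record: SurveyRecord, purpose: str) -> bool:
    # Purpose limitation plus retention check before any use of the data;
    # records past the retention window should be deleted, not just skipped.
    fresh = datetime.now(timezone.utc) - record.collected_at < RETENTION
    return fresh and purpose == record.purpose and purpose in ALLOWED_PURPOSES
```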

TK: How do communities like Partnership on AI enable efforts to make AI accessible and equitable for disabled people?

AA: Ensuring that AI is fully accessible and equitable for people with disabilities requires both awareness of the issues that affect disabled people when they interact with technologies and a commitment to ameliorating those issues from every sector involved in AI use and development. The Partnership on AI community is composed of people in academia, civil society, and industry who bring their individual perspectives together to create actionable guidance on the responsible use of AI. By convening people from these different sectors and encouraging conversations about disability inclusion in tech development and policy, the Partnership on AI is creating valuable opportunities to raise awareness of these issues among the very people who have the skills and resources to address them.

TK: Can you share with us some initiatives or efforts currently underway that you are particularly excited about within your work at CDT?

AA: In 2024, I co-authored three major reports at CDT – one on disability data collection, one on the impact of AI-enabled hiring tools on workers with disabilities, and one that asked several chatbots questions about voting with a disability and evaluated the quality of their responses. This year, I hope to continue producing work – including reports and shorter-form opinion pieces – that illustrates the myriad ways tech can impact disabled people across employment, voting rights, and other areas, including transportation and healthcare. As you mentioned at the start of this conversation, AI and algorithmic tools are everywhere, impacting disabled people in every aspect of their lives, and in 2025 my work will continue to reflect that, in partnership with my many excellent colleagues.