This International Women’s Day, we celebrate the resilience of women across the globe as we unite under this year’s theme: “Accelerate Action.” At Partnership on AI, we are dedicated to addressing the risks and challenges that AI poses to marginalized groups, including women.
Although AI is a powerful tool for increasing efficiency and productivity, it also mirrors and magnifies the gender-based discrimination and violence that are entrenched in our society. At PAI, we recognize the urgency of addressing these risks and challenges because we believe AI should contribute to a more just, equitable, and prosperous world, and that includes creating a safe and fair environment for women.
AI & Gender-Based Discrimination
Technology isn’t inherently biased. However, because technology reflects its creators’ perspectives and experiences, unconscious biases are often unknowingly embedded in it. Algorithmic bias, the systematic distortion in data or information that produces unjust outcomes for marginalized groups, leads to discriminatory results and perpetuates existing inequities for society’s most vulnerable people.
As AI becomes more ubiquitous, its impact on people and society becomes more pronounced. AI now shapes critical parts of everyday life, such as healthcare, finance, education, and employment, and women face negative impacts in many of these sectors.
Bias in resume-screening software, for example, has disproportionately affected women in the application process. One notable example is Amazon’s now-scrapped recruiting tool, which was intended to streamline hiring but showed a clear bias toward male candidates. The models were trained to vet applications by observing patterns in the resumes of candidates hired over the previous 10 years. However, because the tech industry has been predominantly male, the system taught itself to prefer male candidates and penalized applications that included terms such as “women’s”.
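The mechanism at work here can be illustrated with a deliberately simplified sketch. The data below is entirely hypothetical, and real screening systems are far more complex, but the toy scoring function shows how a model trained on a skewed hiring history ends up assigning negative weight to gendered keywords without anyone programming it to do so.

```python
from collections import Counter

# Hypothetical historical hiring data: resumes reduced to keyword lists.
# Because the "hired" pool reflects years of mostly male hires, keywords
# like "women's chess club" appear almost exclusively in the rejected pool.
hired = [
    ["python", "chess club", "captain"],
    ["java", "debate team"],
    ["python", "robotics"],
]
rejected = [
    ["python", "women's chess club"],
    ["java", "women's coding society"],
]

def keyword_scores(hired, rejected):
    """Score each keyword by how much more often it appears among hires
    than among rejects. This is a naive proxy for what a trained model
    learns to treat as a signal of a 'good' candidate."""
    h = Counter(kw for resume in hired for kw in resume)
    r = Counter(kw for resume in rejected for kw in resume)
    vocab = set(h) | set(r)
    return {kw: h[kw] / len(hired) - r[kw] / len(rejected) for kw in vocab}

scores = keyword_scores(hired, rejected)
# Keywords containing "women's" end up with negative scores, purely
# because of the skew in the historical data, not because of any
# explicit rule about gender.
```

The point of the sketch is that no one wrote a rule penalizing women; the penalty emerges from the pattern-matching itself, which is why auditing training data and outcomes matters.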
Although gender bias is an unfortunate aspect of society, it does not have to be inherent in our technology.
Technology should improve people’s lives, not harm them. Solving this challenge requires effort on all fronts, including from industry, civil society, and government. Involving marginalized people in the creation, deployment, auditing, and procurement of these technologies, as well as in tech policy, is a crucial first step toward reducing discrimination in these systems.
PAI has focused on ways to help organizations better include communities who will be impacted by AI through our work in Inclusive Research and Design. Our Guidelines for Participatory and Inclusive AI guide people working directly on the design and development of AI-driven technologies to build collaborative working relationships with socially marginalized communities.
AI & Gender-Based Violence
Bad actors are also using AI for more sinister purposes, such as creating non-consensual intimate imagery (NCII), including deepfake pornography. This type of abuse is most commonly committed against women and children, so it is imperative that we mitigate the risk of AI being used for these purposes.
Policymakers are working to tackle this issue, as with the US Senate’s passage of the “TAKE IT DOWN Act,” which makes it unlawful to knowingly publish “nonconsensual intimate visual depictions,” including “digital forgeries” created with AI. It also requires technology platforms to remove such content after receiving a valid removal request.
PAI has long been a proponent of holding technology platforms accountable for removing harmful AI-generated content such as NCII. PAI’s Synthetic Media Framework outlines best practices for distributors and publishers of AI-generated content that can help mitigate the spread of content harmful to marginalized groups, including women.
Another way to mitigate the creation and dissemination of NCII is to regulate foundation models, sometimes called general-purpose AI, a category that includes large language models. These are AI systems trained on large datasets to power many different generative AI applications, including ones that generate images and videos. PAI’s Guidance for Safe Foundation Model Deployment provides model providers with practical guidance to responsibly develop and deploy AI models. The framework advises providers to implement risk-mitigation strategies such as sourcing training data responsibly and screening it for harmful content, implementing content filters to avoid generating NCII, and integrating detection and user-reporting mechanisms for harmful content. Efforts like these can help reduce the potential for NCII to be created and disseminated.
On this International Women’s Day, we reflect on the profound impact that technology and AI have on women and society. As we accelerate action toward meaningful change, we are reminded of our commitment to developing solutions that ensure AI benefits everyone in society, particularly those most vulnerable to the risks and harms it can pose. At PAI, we are dedicated to creating safe and responsible technologies, not just on International Women’s Day but every day.
To learn more about PAI and stay up to date on our progress in these areas, sign up for our newsletter.