
Artificial Intelligence’s House of Horrors

Join us, if you dare, on a journey through artificial intelligence’s House of Horrors! Mimicking human abilities such as reasoning, learning, planning, and creativity, this ghost in the machine has inhabited technology for over half a century. AI can propel efficiency in the workplace and in people’s daily lives, unburdening them of time-consuming and tedious tasks. It can also reduce human error in data entry, healthcare, and manufacturing, saving lives, time, and money. Despite these benefits, however, AI poses serious risks to people, industries, and humanity as a whole.

Zombies Eating Your Brains

Much like zombies hungering for your brains, AI-fueled recommendation systems can slowly eat away at your mind by locking you into endless scrolling on social media platforms. These algorithms work by feeding you your own interests and prejudices, keeping you online by personalizing your experience and exposing you only to narratives they think you are interested in, which can sometimes mutate into harmful or problematic content. This is one reason many people find social media so addictive: it weaponizes your interests and constantly serves you content you do not want to look away from. This has been shown to be especially harmful to kids and younger users, whose brains are still developing and for whom impulse control may be harder than it is for adults.

Frankenstein’s Monster: The Deepfake

A mishmash of stolen legs, arms, and faces, just like Frankenstein’s monster, deepfakes are a mad (data) scientist’s abominable creation. These AI-generated illusions can mimic voices and faces with terrifying precision. Unauthorized AI-generated audio, images, and videos of people, also known as non-consensual imagery, can be a form of abuse and sexual exploitation, and women and children are particularly vulnerable to this kind of misuse. Deepfakes are often created for nefarious purposes, such as Child Sexual Abuse Material (CSAM), revenge porn, and spreading misinformation to tarnish people’s reputations.

Weaving a Web of Lies and Misinformation

Although the spread of misinformation is not new, AI’s ubiquity and accessibility have made it easier to create misleading and fake content that can be used to amplify and “prove” false claims. AI also amplifies misinformation on social media platforms by creating echo chambers: to increase engagement, people are consistently shown similar content, reinforcing what they already believe. This has been most notably observed in recent elections around the world, where deepfakes and misinformation have impacted voting procedures, as in the case of the Biden robocall, and have even shaped candidate narratives and voter sentiment. AI systems can spin intricate webs of lies, and deciphering truth from fiction can be difficult for unsuspecting individuals.

Vampires Sucking Your Data

AI tools may seem incredibly smart, but that is because they are trained on large sets of data, using and imitating human intelligence. Every time you query ChatGPT or generate an image with Midjourney, you are being drained of personal data, knowingly or unknowingly. This data is collected through cookies you’ve accepted in your browser when using AI applications or websites, and through the submissions you send to AI systems (such as questions, requests, and feedback). It can be used to develop targeted ads and promotional campaigns for products or services, or even be sold to third-party companies that may profit off of your data.

A Hall of Mirrors Reflecting Biases

Distorting truths and reflecting back a warped view of reality is not just a neat trick in a funhouse hall of mirrors. AI systems can present warped views of reality by amplifying biases that negatively impact marginalized groups of people. Technology is a reflection of its creators: just as humans harbor conscious and unconscious biases, the development process often builds those biases into AI systems, harming vulnerable groups when the systems are deployed and used. As a result, real people have lived out horror stories, like when a Brown University student was mistakenly identified by AI as a suspect in a Sri Lankan bombing. AI-powered surveillance and policing systems have also exacerbated biases against people of color.

The Grim (Job) Reaper

One of the scariest risks AI poses to people is job displacement. As AI rapidly advances and gains new capabilities, many people wonder whether their jobs will be reaped and replaced by automation. This looming threat has led many to reconsider their professions, and has already forced many to look for new work as automation replaces essential roles in the food and customer service industries.

Trick or Treat?

When dealing with artificial intelligence, you never quite know whether you will be getting a trick or a treat. Although AI can offer solutions to many of the world’s problems, it can also create risks and pose threats to people. At Partnership on AI, we work to ensure the responsible development, deployment, and use of these systems. By mitigating risks and answering critical questions about AI, we develop solutions that center people and society. To stay up to date on our work, sign up for our newsletter.