Coded Sexism: The Bias Inside AI

In 2016 Microsoft launched an Artificial Intelligence (AI) Twitter chatbot named Tay. Tay was an exercise in machine learning, promoted with the tagline “The more you talk, the smarter Tay gets!” For Tay to communicate with other Twitter users, the AI needed a basic grasp of language, so engineers at Microsoft trained Tay’s algorithm on material written by comedians and on anonymised public data. Tay was then meant to ‘learn’ by reading tweets and interacting with Twitter users, and could, in theory, create content without having been explicitly coded to do so. In Tay’s case that meant becoming a typical Twitter user.

One day was all it took for Twitter users around the world to teach Tay humanity’s worst behaviours and beliefs. Internet communities like 4chan used this opportunity to make Tay repeat offensive, bigoted comments. As Tay ‘learned’ from its interactions, the AI chatbot started to create its own violent, sexist tweets: “I fucking hate feminists and they should all die and burn in hell”. Microsoft disconnected the chatbot after 24 excruciating hours.

The issue with machine learning is that AI can recognise relationships between words but can’t differentiate between ‘good’ relationships and problematic ones. Depending on the data used, the AI is in danger of forming relationships between words that are racist or sexist, for example. A word relationship of this kind could be “man is to CEO as woman is to secretary”.
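
What such a ‘relationship’ looks like in practice can be made concrete with word embeddings, the word-to-vector mappings that many language systems learn from large bodies of text. Below is a minimal, hypothetical sketch, assuming the Python library gensim and a small pretrained GloVe model (not Tay’s actual training pipeline, which Microsoft has not published). It asks the model to complete the analogy “man is to CEO as woman is to …” and simply returns whichever associations the training text happened to contain.

# Minimal sketch: probing a pretrained word-embedding model for
# gendered associations. Assumes gensim and a small GloVe model;
# this is an illustration, not Microsoft's pipeline for Tay.
import gensim.downloader as api

# Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
model = api.load("glove-wiki-gigaword-50")

# Complete the analogy "man is to ceo as woman is to ?":
# vector("ceo") - vector("man") + vector("woman"), then find the nearest words.
results = model.most_similar(positive=["ceo", "woman"], negative=["man"], topn=5)

for word, similarity in results:
    print(f"{word}\t{similarity:.3f}")

# Whatever comes back simply reflects the statistics of the training text,
# stereotypes included; the model has no notion of which associations are
# acceptable and which are harmful.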

Game developer Zoë Quinn was among those exasperated by Microsoft’s apparent naiveté. She tweeted, “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” A few hours earlier, Tay had tweeted “Zoe Quinn is a Stupid Whore.”

Machine learning algorithms are trained on data that humans generate, categorise and select, and at each of those steps humans can introduce bias that becomes embedded in the AI system. A major cause of that bias is the drastic gender disparity in tech jobs: a 2021 survey found that roughly 90% of software developers worldwide are men, the five largest tech companies report that women make up only 23-25% of their technical employees, and women hold just 26% of data and AI jobs globally.

This has drastic consequences. Gender bias in AI limits women and can seriously harm them. It ranges from ad algorithms showing different job opportunities to men (who are five times more likely to be shown ads for high-paying jobs) to credit-card algorithms offering smaller credit lines to women than to men with comparable credit scores. AIs predicting kidney function decline performed worse when tested on women. Medical apps are largely built on male data and will, for example, attribute certain symptoms to depression in a woman while the same symptoms warn a man of a potential heart attack.

The darker a person’s skin, the more facial recognition software struggles to tell male and female faces apart, while white men are identified with 99% accuracy. Several of the five largest tech companies use flawed facial recognition software, which has consistently misclassified the faces of famous Black women such as Michelle Obama, Serena Williams and Oprah Winfrey. Joy Buolamwini discovered that the team responsible for generating the dataset and inputs for facial recognition algorithms used images almost entirely of white people (83%) and men (77%). This is detrimental to women and people of colour, because facial recognition software is used for commercial purposes such as background checks for job applicants and even in police investigations.
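
Buolamwini’s broader point is methodological: a single headline accuracy figure can hide these gaps, which only become visible when results are broken down by skin tone and gender. The toy sketch below illustrates that kind of disaggregated audit; the groups, labels and predictions are invented for illustration and are not figures from the Gender Shades study.

# Toy sketch of a disaggregated accuracy audit: score per demographic
# group instead of reporting one overall number. All data here is invented.
from collections import defaultdict

# Each record: (demographic group, true label, label predicted by the model)
records = [
    ("lighter-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
    ("darker-skinned woman", "female", "male"),   # misclassified
    ("darker-skinned woman", "female", "male"),   # misclassified
    ("darker-skinned woman", "female", "female"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    total[group] += 1
    correct[group] += (predicted == truth)

overall = sum(correct.values()) / len(records)
print(f"overall accuracy: {overall:.0%}")                   # looks acceptable
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")  # exposes the gap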

“Siri, why are female virtual assistants coded to be sexy and submissive?” Robots and digital assistants highlight sexist bias particularly well. Siri, Cortana and Alexa all have female names and voices by default; ‘Siri’, for example, is a Nordic name that translates to “the beautiful woman that leads you to victory”. Their primary role is to assist the user with their needs, reinforcing the sexist stereotype of the subservient female secretary. Samsung’s virtual assistant Bixby makes matters worse: its female voice is coquettish and flirtatious, whereas the male voice’s answers are clear and concise. The female voice replies to “Let’s talk dirty” with “I don’t want to end up on Santa’s naughty list”, whereas the male voice responds, “I’ve read that soil erosion is a real dirt problem”.

The sexualisation of humanoid robots is even more overt. Researchers such as Allison Okamura have claimed that “the gendering of robots is part of our human nature”. Robots are marked as male or female with pronouns, names and voices, and the appearance of ‘male’ and ‘female’ robots differs drastically: ‘females’ often have a sensual or flirtatious voice, wide hips, a narrow waist and, more often than not, breasts. In short, female robots are sexualised.

In 2013 NASA created a humanoid space robot named Valkyrie and thought it necessary to add breasts, even though the robot’s purpose was to operate in “degraded or damaged human-engineered environments”. Ironically, the sexualisation of robots creates an environment that is degrading, damaging and definitely human-engineered. Valkyrie, however, didn’t need breasts for its mission to Mars.

by Claudia Schwarz
