Another day, another AI is found – to the great surprise of its creators – to be racist and misogynistic. This week it was the turn of researchers at MIT to be SHOCKED, I tell you, SHOCKED, that their image-labelling AI thinks women in bikinis are “whores” and labels both monkeys and black people with the “N-word”. I’m telling you, this is unprecedented.
When is the tech world going to wake up and smell the lack of data? Let’s hope it does soon, because in the meantime it is leading us, blindfolded, into a dystopia where racism and misogyny are literally coded in. And remember, AIs don’t simply reflect our biases back at us, they amplify them:
In [a 2017 study of algorithms trained on commonly-used image datasets], pictures of cooking [in the dataset] were over 33% more likely to involve women than men, but algorithms trained on this dataset connected pictures of kitchens with women 68% of the time. The paper also found that the higher the original bias, the stronger the amplification effect, which perhaps explains how the algorithm came to label a photo of a portly balding man standing in front of a stove as female. Kitchen > male pattern baldness. (IW, p.166)
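To see what “amplification” means in practice, here is a minimal sketch, in Python, of how that kind of bias amplification is typically measured: compare how skewed the gender split for an activity is in the training data with how skewed it is in the model’s predictions. The counts below are made-up illustrations chosen to match the rough percentages quoted above, not the actual figures from the 2017 study.

```python
# Illustrative sketch of measuring bias amplification.
# The counts are hypothetical, chosen only to mirror the quoted percentages.

def gender_skew(woman_count: int, man_count: int) -> float:
    """Fraction of images of an activity that involve women."""
    return woman_count / (woman_count + man_count)

# Training data: cooking images are ~33% more likely to involve women than men,
# e.g. 400 women vs 300 men -> skew of roughly 0.57.
train_skew = gender_skew(woman_count=400, man_count=300)

# Model output: the person in a kitchen is labelled a woman 68% of the time,
# e.g. 680 "woman" vs 320 "man" predictions.
pred_skew = gender_skew(woman_count=680, man_count=320)

# Amplification: how much further from parity the predictions are
# than the data the model was trained on.
amplification = pred_skew - train_skew

print(f"training skew:   {train_skew:.2f}")
print(f"prediction skew: {pred_skew:.2f}")
print(f"amplification:   {amplification:+.2f}")
```

Run with those numbers, the model ends up noticeably more skewed than the already skewed data it learned from, which is the amplification effect the quote describes.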
Let’s also hope that most tech companies don’t follow the example of Google, which decided to “fix” its algorithmic bias by removing labels altogether. No label, no problem, right? Wrong. The judgments are still there, they are just hidden now, making them even harder to detect and address than they were before.