Invisible Women - When in Doubt, Blame the Algorithm

By Caroline Criado Perez • Issue #21
GOOD MORNING GFPs!!! Pretty much every week I get a perplexed email from someone asking wtf is a GFP. And while I am EXTREMELY HURT that this means some of you have not gone back and read the entire back catalogue of this newsletter, I’m feeling generous this morning, so I will explain that GFP stands for Generic Female Pal. It started in my very first newsletter, which I think we can all agree was an absolute BANGER.
As will this one be 💃

Gender data gap of the Week
I enjoyed reading this interview with Dr Sabine Oertelt-Prigione, the chair of sex- and gender-sensitive medicine at Radboud University in the Netherlands and a member of the European Commission’s expert group on gendered innovations.
I bet you’re wondering what she thinks about how we’re handling Covid, right? Well, I have good and bad news. The good news is, she does give us her opinion. Hurrah! The bad news is, she doesn’t think we’re doing very well. I know right, bombshell. Here is an extract:
Given that sex has emerged as a factor in how patients respond to Covid-19 infections, do you think drug and vaccine makers are taking this into account?
Unfortunately, I would have to say no.
Dr Oertelt-Prigione continues:
What we’re seeing is — of the first 10 to 15 publications, which mostly included trials testing the drugs hydroxychloroquine and remdesivir — you’re lucky if you find out how many women and men participated in the trial. There’s no disaggregated analysis. We don’t know anything about risk factors. We don’t know anything about unwanted side effects — if there were differences or not.
So far so Invisible Women. And remember: “The immune system is also thought to be behind sex-specific responses to vaccines: women develop higher antibody responses and have more frequent and severe adverse reactions to vaccines,19 and a 2014 paper proposed developing male and female versions of influenza vaccines.” (IW, p.199)
But wait, there’s more
I think the approach here is what unfortunately happens in an emergency — we’re trying to go as fast as possible and looking into sex differences is perceived as something that is just going to cost us time. In any case, this was what was happening before (Covid-19) as well.
It’s also a race that has financial costs. Being the first one to produce a vaccine can make or break a pharmaceutical company, especially if it’s a smaller one. And considering sex differences is simply not an issue that has a large enough lobby, which is ironic if you consider that it affects the entire population.
live footage of the medical research establishment responding to requests to sex-disaggregate their data
PS, you may remember some examples from the final section of Invisible Women, showing that ignoring women at a time of crisis is a false economy:
Failing to account for gender during the 2009 H1N1 (swine flu virus) outbreaks meant that ‘government officials tended to deal with men because they were thought to be the owners of farms, despite the fact that women often did the majority of work with animals on backyard farms’. During the 2014 Ebola outbreak in Sierra Leone, ‘initial quarantine plans ensured that women received food supplies, but did not account for water or fuel’. In Sierra Leone and other developing countries, fetching fuel and water is the job of women (and of course fuel and water are necessities of life), so until the plans were adjusted, ‘women continued to leave their houses to fetch firewood, which drove a risk of spreading infection’. (IW, p.299)
Still tho, those KERRAZY ladies and their unreasonable requests for accurate data amirite.
Let’s cleanse our palate with some final words from my new fave, Dr Oertelt-Prigione:
When it comes to Covid-19 trials, what kind of analysis would you like to see when it comes to sex as a factor?    
It’s not rocket science. I want to know, when it comes to effectiveness, if the curve looks the same in the female population as it does in the male one. If we look at side effects, I want to see if the numbers look the same. The problem is, when you look at all these publications, there’s always table 1 that tells you — if we’re lucky — 50% female individuals and 50% male individuals, but then if you look at the following tables and figures, you will see one population.
In the 1990s, there was a study about the effectiveness of Digoxin (a medicine for heart conditions) that was widely marketed. The first study showed that Digoxin was effective and didn’t have a higher mortality (versus placebo). And then we started to see that there seemed to be a higher incidence of side effects in women. The initial study had this nice figure with one population where you could see the curve, there was no difference between placebo and the patients who took the treatment (in terms of safety). And then a new group took exactly the same data set and analysed it according to sex. It turns out Digoxin didn’t increase mortality in men — it actually decreased mortality in men — but it increased mortality in women. And why did you not see that in the first analysis? Because there were 25% women included and 75% men.
In these Covid-19 trials I’m not asking for fancy new techniques and complicated things. I just want to see the data disaggregated at all levels so that I can see if the results are truly the same.
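To see why “just disaggregate the data” matters, here’s a minimal sketch of the kind of masking effect Dr Oertelt-Prigione describes. The numbers are entirely made up for illustration (they are NOT the real Digoxin trial data), but they show how a pooled analysis of a mostly-male cohort can report “no mortality difference” while women are actually being harmed:

```python
# Illustrative sketch only: hypothetical numbers (NOT the real Digoxin data)
# showing how pooling a male-skewed cohort can mask a sex difference.

def mortality_change(treated_deaths, treated_n, placebo_deaths, placebo_n):
    """Difference in mortality rate: treatment arm minus placebo arm."""
    return treated_deaths / treated_n - placebo_deaths / placebo_n

# Hypothetical cohort: 750 men and 250 women in each arm.
men = mortality_change(treated_deaths=90, treated_n=750,
                       placebo_deaths=120, placebo_n=750)    # negative: men benefit
women = mortality_change(treated_deaths=55, treated_n=250,
                         placebo_deaths=25, placebo_n=250)   # positive: women harmed

# The pooled analysis lumps everyone together...
pooled = mortality_change(treated_deaths=90 + 55, treated_n=1000,
                          placebo_deaths=120 + 25, placebo_n=1000)

print(f"men:    {men:+.3f}")     # -0.040 -> lower mortality on treatment
print(f"women:  {women:+.3f}")   # +0.120 -> HIGHER mortality on treatment
print(f"pooled: {pooled:+.3f}")  # +0.000 -> 'no difference'; the harm to women vanishes
```

With these invented numbers the pooled curve shows exactly zero difference versus placebo, even though the treatment helps men and harms women: which is precisely why reporting only “table 1” demographics and then analysing one population isn’t good enough.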
Default male of the week
This week, we’re mostly looking at really dumb AI mistakes. For example:
Large Shrimp Child
lmao that stupid "gender verification" startup that's been going around is such an amazing piece of shit
Wait for it….
Ah yes, there it is. The famous man-name Dr Alice Ladypants.
And then there’s this:
Eleanor Moore
Hi @Grammarly please can you tell your algorithm that people who identify by both She/They pronouns can also be Doctors. @CCriadoPerez
Apparently, despite women having been allowed to be doctors for quite some time now, this is news to algorithms.
Both companies have since addressed the issue: Genderify by deleting their entire online presence. Grammarly, by contrast, went for the tried-and-tested approach of blaming the algorithm:
@mooreofeleanor Thanks for your patience, Eleanor. Our team has fixed the issue, which occurred because our algorithms treated the doctor's name as masculine. You should no longer see such a suggestion. We appreciate you flagging this and, again, extend our sincerest apologies.
This is a very popular approach among tech companies caught out by their sexist algorithms making sexist decisions. You may remember a similar “not me, just the algorithm!!!” response from Apple, when their credit card was found to be giving men with lower credit ratings 20x the credit limit of their wives, even when the couple had completely shared finances or the wife had a higher credit rating. (Interestingly, I myself recently experienced this with the AB: I have a substantially higher credit rating, but can get about a tenth of his credit limit. IT’S A MYSTERY WHAT’S GOING ON HERE 🤪)
I find this “it’s just the algorithm” response *fascinating*. Yes we KNOW it’s the algorithm – how does that make it any better? You *coded* the algorithm. The algorithm is a product of *your* failure to account for the bias in the data you fed it. And no matter how much you hand-wave about the issue, algorithms don’t just exist in the ether waiting to be plucked into being by a wholly passive human. Humans make them, therefore humans are to blame when they produce sexist/racist* (*delete as appropriate although of course it’s usually both 🥳) decisions.
And, sure, no one is going to die as a direct result of either of these dumb mistakes, but if you’re not disturbed by such BASIC and obvious and visible mistakes being made: you should be. You don’t see most algorithms and they are making far more important decisions than this. If they can’t get this basic stuff right to avoid PR disasters, think about what 99% of black box algorithms are fucking up (with apologies to everyone who has signed up with a BMA email who will now not get this email).
Bullshit AI chaser
AI Is More Likely to Misidentify Women's Masks
“IBM’s system misidentified women’s masks as “restraint chains” or “gags” 23% of the time, and correctly identified just 5% of masks.”
It’s been a 🤪🤪🤪 kind of a week
Poppy pic of the week
That’s it! Enjoy your week, GFPs, and remember: when in doubt, blame the algorithm! xoxoxo