AI has a sexism bug, and these women aim to fix it

Experts say women can help fix AI’s gender bias by rewriting algorithms to ensure sexism isn’t reflected by technology

Natasha Bernal


From improving healthcare to combating crime, artificial intelligence promises to bring benefits to almost every aspect of our lives. But there is a major problem that threatens to derail that ambition: AI can be inherently sexist.
That problem is making it harder for AI to improve the lives of women around the world, according to a group of experts led by global innovation foundation Nesta.
In 2018, for instance, Amazon was forced to ditch an AI recruiting tool that favoured male candidates for technical jobs. The AI was created by a team at Amazon’s Edinburgh office in 2014 as a way to automatically sort through CVs and pick out the most promising candidates. However, it quickly taught itself to prefer male candidates over female ones.
The problem stemmed from the fact that the system was trained on CVs submitted to the company over a 10-year period, most of which came from men.
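The dynamic described here can be reproduced in miniature. The sketch below is purely illustrative: the tiny CV dataset, the hiring labels and the scikit-learn model are all invented and bear no relation to Amazon’s actual system. It simply shows how a classifier trained on historically male-skewed hiring decisions learns to penalise words that correlate with female applicants.

```python
# A minimal, hypothetical sketch of how biased training data produces a
# biased screening model. The data and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical CVs and hiring outcomes (1 = hired). Because past hires skewed
# male, words that correlate with women appear mostly in rejected CVs.
cvs = [
    "rugby club captain, java developer",            # hired
    "chess society president, python developer",     # hired
    "women's coding society lead, java developer",   # rejected
    "women's chess club captain, python developer",  # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the token "women": it has learned
# the historical imbalance as if it were a genuine signal of merit.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for the token 'women': {weights['women']:.2f}")
```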
But Amazon isn’t the only one facing bias in AI. In 2018, Google was forced to alter its Translate tool after it was accused of sexism for defaulting translations to the masculine pronoun.
Joysy John, director of Education at Nesta, believes women can help fix AI’s gender bias by rewriting algorithms to ensure sexism in society isn’t reflected by technology. “Society is very biased, and we are propagating that,” she says. “I don’t think enough has been done.”
John is among a group of female professionals in the UK working to tackle bias in AI by championing women already in the industry and encouraging more to join their ranks. One of the problems is that the AI industry lacks diversity. According to the World Economic Forum’s latest Global Gender Gap Report, only 22% of AI professionals globally are female.
Zoë Webster, director of AI and data economy at Innovate UK, believes changing the gender balance in the AI industry is vital to eradicating prejudice in technology. “The more diverse the people working on these solutions, the more likely that is to happen,” agrees Simi Awokoya, a technical evangelist and founder of BME careers organisation Witty Careers.
“I think there needs to be a conscious effort to make sure data sets represent real people. As long as the industry keeps on championing this, it’s bound to become part of the process. In the big corporates, men still outnumber women,” argues Webster.
But she believes that it’s not about making up the numbers, but making the voices that are there count. “These people need to be in a position of influence and power. They need to be factored much more into what is going on.”
Gartner predicts that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data and the teams managing them.
One area where these women believe there could be major change is in voice assistants and chatbots. The majority of the best-known assistants have been given women’s names as well as a feminine personality, even though they were built by male-heavy teams.
“Some psychologists have shown that people find the female voice more pleasant and less aggressive. Especially if it is telling you to do something, they would find it better from a woman than a man,” says Verena Rieser, a professor at Heriot-Watt University.
They are also huge targets for abuse, she says – something that could have been avoided if there had been more women building the AI.
Now, AI professionals such as Rieser are trying to understand why chatbots are becoming targets for abuse and how to respond to it. “We noticed that, since our system has a female personality, we had sexually tinted abuse, which wasn’t nice, but we also found that not all systems reacted in a way that was appropriate,” Rieser says.
“The worrying thing is that most of these systems are hand-engineered. You can’t blame the data for that. There are no guidelines on ethical behaviour.”
She has joined calls for a larger-scale study to be conducted to help chatbot AIs recognise abuse through more than just keywords, and to find the answer to the perennial question: what is a good response to stop abuse?
To answer that question, teams need people with backgrounds in linguistics and behaviour analysis, not just coding. But more importantly, they need more women.
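Rieser’s point about keyword matching can be made concrete with a small, entirely hypothetical sketch: the keyword list and example messages below are invented, but they show how a simple filter misses abusive language that uses no listed word while flagging harmless requests that happen to contain one, which is the gap a larger study would need to address.

```python
# A minimal sketch of why keyword matching alone struggles to catch abuse.
# The keyword list and the example messages are invented for illustration.
ABUSE_KEYWORDS = {"stupid", "idiot", "shut up"}

def keyword_filter(message: str) -> bool:
    """Flag a message as abusive if it contains any listed keyword."""
    text = message.lower()
    return any(keyword in text for keyword in ABUSE_KEYWORDS)

messages = [
    "shut up and play some music",        # flagged, arguably harmless banter
    "you are completely useless to me",   # demeaning in tone, but not flagged
    "what's the weather like tomorrow?",  # correctly ignored
]

for message in messages:
    print(f"{keyword_filter(message)!s:>5}  {message}")
```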
– © The Daily Telegraph
