Voice assistants – and the algorithms that power them – need to speak for everyone
It’s no coincidence that Amazon, Microsoft and Apple chose female voices for Alexa, Cortana and Siri. Multiple studies suggest that people prefer to listen to women, finding them more reassuring and trustworthy. Even before birth, humans are hardwired to find comfort in their mother’s voice. Most digital assistants seek to take advantage of this… Even self-service checkouts at supermarkets give their instructions in distinctly feminine tones. And there’s nothing wrong with that, right? Wrong.
The use of female voices in submissive digital assistants has served to reinforce women’s subservient role in a patriarchal society. While fostering more positive relationships with users, female voice assistants support the notion that women should follow commands and do as they are told. Ironically, the algorithms themselves are also far better at understanding men.
I’m sorry, I didn’t catch that…
A YouGov survey of 1,000 people in the UK found that women often experience problems when talking to their smart speakers. Around two thirds (67 per cent) of female owners claimed that their device failed to respond to them at least some of the time. In contrast, only 54 per cent of men reported similar issues. The survey also suggested that women are more polite to their machines… In another twist of irony, manners maketh misunderstanding.
To some extent, the apparent sexism in voice recognition technology can be blamed on biology. Women naturally tend to speak more softly and at a higher frequency than men, making their voices harder to pick up. However, the gender bias can also be put down to data. The more an algorithm is exposed to a certain kind of voice, the better it becomes at recognising it. By now, it’s clear that the technology sector has a diversity problem. With that in mind, it makes sense that models built largely by Western men favour their creators. Unsurprisingly, this bias is also evident in facial recognition algorithms.
No gender, no problem
By favouring female voices in one sense and disfavouring them in another, voice technology reinforces rigid gender roles. Women are assistants whereas men are commanders. Men request, women respond. And when women do make requests of technology, they are less likely to be understood.
In an attempt to bring diversity to voice tech, a group of linguists, technologists and sound designers have created Q: the first genderless voice. Based on a combination of female, male, non-binary and transgender voices, Q was created for a future where ‘we are no longer defined by gender but by how we define ourselves’ to ‘ensure that technology recognises us all’. Q challenges linguistic discrimination by removing the distinction between male and female, in the hope of encouraging gender diversity and breaking down traditional gender roles. The team behind Q have called on technology companies to use their genderless voice. Whether or not they will listen remains to be seen (or heard).
Fix the data, fix discrimination
As well as using a genderless voice, voice tech could deliver diversity through data. Biased data will create biased algorithms, so developers need to be especially careful to feed their models with quality examples of all kinds of voices. This goes well beyond gender to nationality and, within that, accents. In fact, it’s worth considering whether datasets should be biased in favour of female speech to redress the balance. If natural language processing algorithms can easily pick up deep male voices, perhaps there should be a period of ‘catch up’ that focuses on understanding women.
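To make the idea concrete, the rebalancing described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration — the function name, the toy corpus and the string labels are all invented for this example; a real speech pipeline would balance audio recordings with demographic metadata, not labelled strings.

```python
import random
from collections import Counter

def oversample_minority(samples, label_fn, seed=0):
    """Duplicate samples from under-represented groups until every group
    matches the size of the largest one (simple random oversampling)."""
    rng = random.Random(seed)
    counts = Counter(label_fn(s) for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        pool = [s for s in samples if label_fn(s) == group]
        # Draw (target - count) extra copies at random from this group.
        balanced.extend(rng.choice(pool) for _ in range(target - count))
    return balanced

# Toy corpus: far more male-labelled clips than female-labelled ones,
# mimicking the skew the article describes in training data.
corpus = [("clip_m%d" % i, "male") for i in range(8)] + \
         [("clip_f%d" % i, "female") for i in range(2)]

balanced = oversample_minority(corpus, label_fn=lambda s: s[1])
print(Counter(group for _, group in balanced))
```

After rebalancing, both groups contribute equally to training, so the model no longer sees eight male examples for every two female ones. The article’s ‘catch up’ suggestion would go further still, deliberately over-weighting female speech for a period rather than merely equalising it.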
It’s impossible to completely eradicate bias in voice technology – especially when female voices are considered to be more comforting and trustworthy. While voice tech itself isn’t deliberately sexist, it currently does very little to combat sexism in society. The answer isn’t necessarily to stop using female digital assistants, but to move towards more neutral tones that don’t reinforce typical gender stereotypes. Q, the first genderless option for voice technology, demonstrates that this is possible. It also speaks to the ongoing diversity problem in the technological community. Not only does technology have to recognise different genders, identities, and demographics – it has to represent them, too.
Learn more about the difficulties of technological diversity in DISRUPTIONHUB’s free, weekly newsletter.