Are artificial intelligence voice assistants reinforcing gender bias?

Science and Technology

Artificial intelligence voice assistants like Siri and Alexa are reflecting, reinforcing, and spreading gender bias, a recent United Nations report suggested.

The UNESCO report, entitled “I’d Blush if I Could”, explores the consequences of the spread of smart speakers and voice assistants, particularly the presentation of many of these platforms as female.

“With many AI tools, we see companies building infrastructure for how people will interact long into the future. The UNESCO report on AI & Gender raises important questions for how people of all genders will fare within this new future and what we might do today to help address inequalities tomorrow,” Professor Gina Neff told the Student. Neff is a Senior Research Fellow at the Oxford Internet Institute whose research partly focuses on the sociological implications of technology.

A primary concern the report raises is the passivity of voice assistants’ responses to verbal sexual harassment: the report takes its title from how Siri used to respond to the statement, “You’re a slut.” In addition to indifference to harassment, female voice assistants are programmed to be non-confrontational and to comply with blunt commands. Such passivity, the report suggests, can contribute to and normalise such treatment of women.

Ever since Apple launched Siri in 2011, voice assistants and smart speakers have exploded in popularity; industry researchers forecast that there will be more voice-activated assistants than people by 2021.

“Because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers”, the report said. “The assistant holds no power of agency beyond what the commander asks of it. It honours commands and responds to queries regardless of their tone or hostility.”

Another part of the report seeks to address how and why the majority of voice assistants were developed as female. One possibility is that the design of these voice assistants reflects the gender makeup of the AI field, which is 85% male. A more direct explanation for bias in AI, however, is that the data sets on which AI systems are trained reflect ingrained biases, and that the systems incorporate these biases as they “learn”.

Several strategies for addressing the problem of bias in voice assistants have emerged. Technology companies have added male voice alternatives and removed female-by-default settings for voice assistants, forcing users to choose the gender of their AI. (While the British version of Siri is male by default, it is female by default in 17 of 21 languages.) Further, researchers unveiled a new, genderless AI voice called Q this March with the aim of ending gender bias in AI assistants.

“The point isn’t that there’s only one way forward for designing better voice assistants and embodied robots. Rather, companies should not be building into new technologies very old-fashioned stereotypes about gender and assistance. It’s not rocket science, but it does mean taking on board expertise in anthropology, psychology, the humanities and the social sciences to design technologies that fit the much richer realities of human interaction than these outdated notions,” Neff told the Student.

AI bias is not limited to gender. A 2017 ProPublica report found that a computer algorithm used by American courts to predict the likelihood of defendants reoffending mislabelled black defendants as high risk twice as often as white defendants, and mislabelled white defendants as low risk more often than black defendants.

The report also raises the possibility that gendered voice assistants may help spread western stereotypes and prejudices to regions that may not subscribe to those attitudes.

“The field of artificial intelligence is one of the most skewed in all of science in terms of gender balance. But it would be a mistake to think that increasing the diversity of people in the pipeline to tech jobs is a simple fix to the problems of technological bias,” Neff told the Student. “Researchers like me have been calling on companies to do social impact assessments of their technologies to understand the full range of their potential social, cultural, political and ethical effects on society. Technology companies should design better technologies and commit to fixing products and services that have a negative impact on people.”

Image Credit: Pixabay