Carnegie Mellon University

A cell phone using a voice assistant with the text "how may I help you?"

Researchers Seek to Reduce Harm to Multicultural Users of Voice Assistants

Media Inquiries: Aaron Aupperlee, School of Computer Science

Users of voice assistants such as Siri, Alexa or Google Assistant know the frustration of being misunderstood by a machine.

But for people who may lack a standard American accent, such miscommunication can go beyond simply irritating to downright dangerous, according to researchers in the Human-Computer Interaction Institute (HCII) in Carnegie Mellon University's School of Computer Science.

In a recent paper, HCII Ph.D. student Kimi Wenzel and Geoff Kaufman identified six downstream harms caused by voice assistant errors and devised strategies to reduce them. Their work won a Best Paper award at the Association for Computing Machinery's Conference on Human Factors in Computing Systems.

"This paper is part of a larger research project in our lab looking at documenting and understanding the impact of biases that are embedded in technology," Kaufman said.

White Americans are overrepresented in most datasets used to train voice assistants, and studies have shown that these assistants are far more likely to misinterpret or misunderstand Black speakers and people with accents or dialects that vary from standard American English. Earlier researchers tended to treat this problem as a technical issue to be overcome rather than a failure with repercussions for the user, Kaufman said. But having speech misunderstood, whether by a person or a machine, can be experienced as a microaggression.

"It can have effects on self-esteem or your sense of belonging," Kaufman said.

In an earlier study, Kaufman and Wenzel studied the impact that error rates by a voice assistant had on white and Black volunteers. Black people who experienced high error rates had higher levels of self-consciousness, lower levels of self-esteem and a less favorable view of technology than Black people who experienced low error rates. White people didn't have this reaction, regardless of error rate.

"We hypothesize that because Black people experience miscommunication more frequently, or have more everyday experience with racism, these experiences build up and they suffer more negative effects," Wenzel said.

In the latest study, Wenzel and Kaufman interviewed 16 volunteers who experienced problems with voice assistants. They found six potential harms that can result from seemingly innocuous voice assistant errors. These included emotional harm, as well as cultural or identity harm caused by microaggressions. They also included relational harm, in which an error leads to interpersonal conflict: a voice assistant, for instance, might make a calendar entry with the wrong time for a meeting or misdirect a call. Other harms include paying the same price for a technology that doesn't work as well for you as it does for other people, and needing to exert extra effort, such as altering an accent, to make the technology work.

A sixth harm is physical endangerment.

"Voice technologies are not only used as a simple voice assistant in your smartphone," Wenzel said. "Increasingly they are being used in more serious contexts, for example in medical transcription."

Voice technologies also are used in conjunction with auto navigation systems, "and that has very high stakes," Wenzel added.

One person interviewed for the study related their own hair-raising experiences with a voice-controlled navigation system: "Oftentimes, I feel like I'm pronouncing things very clearly and loudly, but it still can't understand me. And I don't know what's going on. And I don't know where I'm going. So, it's just this, this frustrating experience and very dangerous and confusing."

The ultimate solution is to eliminate bias in voice technologies, but creating datasets representative of the full range of human variation is a daunting task, Wenzel said. So she and Kaufman talked to the participants about things voice assistants could say to their users to mitigate those harms.

One communication repair strategy they identified was blame redirection: not a simple apology, but an explanation describing the error that doesn't put the blame on the user.
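The blame-redirection idea can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the researchers' implementation: the function name, the confidence threshold and the exact wording are all hypothetical.

```python
# Sketch of "blame redirection": when speech recognition confidence is low,
# the assistant explains its own limitation rather than implying the user
# misspoke. Threshold and phrasing are illustrative assumptions.

def repair_message(asr_confidence: float, threshold: float = 0.6) -> str:
    """Return a repair response that keeps the blame on the system."""
    if asr_confidence >= threshold:
        return ""  # recognition succeeded; no repair needed
    # Names the system's shortcoming instead of faulting the speaker.
    return ("I'm still learning to recognize a wide range of voices, "
            "so I may have gotten that wrong. Could you say that again?")

print(repair_message(0.3))
```

The key design point is that the message attributes the failure to the system's training ("I'm still learning"), not to the user's pronunciation.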

Wenzel and Kaufman also suggest that voice technologies be more culturally sensitive. Addressing cultural harms is to some extent limited by the technology, but one simple yet profound action would be to expand the database of proper nouns.

"Misrecognition of non-Anglo names has been a persistent harm across many language technologies," the researchers noted in the paper.

A wealth of social psychology research has shown that self-affirmation, a statement of an individual's values or beliefs, can be protective when their identity is threatened, Kaufman said. He and Wenzel are looking for ways that voice assistants can include affirmations in their conversations with users, preferably in a way that isn't obvious to the user. Wenzel is currently testing some of those affirmations in a follow-up study.
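One way such affirmations could be woven into replies is sketched below. This is purely an assumption-laden illustration, not the approach being tested in the follow-up study: the phrases, the insertion rate and the function names are invented for the example.

```python
import random

# Illustrative sketch: occasionally prefix a brief, low-key affirmation to
# the assistant's reply, keeping the message short so the interaction stays
# efficient. Phrases and rate are hypothetical, not from the study.

AFFIRMATIONS = ("Good thinking.", "You clearly know what you want.")

def respond(base_reply: str, rate: float = 0.2, rng=random.random) -> str:
    """Occasionally weave a short affirmation into a reply."""
    if rng() < rate:
        return f"{random.choice(AFFIRMATIONS)} {base_reply}"
    return base_reply
```

Passing `rng` as a parameter makes the behavior testable; in practice the rate would need tuning so the affirmations stay subtle rather than formulaic.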

In all these conversational interventions, the need for brevity is paramount. People often use voice technologies, after all, in hopes of being more efficient or able to work hands-free. Adding messages into the conversation tends to work against that goal.

"This is a design challenge that we have: How can we emphasize that the blame is on the technology and not on the user at all? How can you make that emphasis as clear as possible in as few words as possible?" Wenzel said. "Right now, the technology says 'sorry,' but we think it should be more than that."
