Writing For Alexa Becomes More Complicated In The #MeToo Era

Amazon’s Alexa exists to satisfy practically every whim, be it the need for an off-the-cuff joke or a useful answer to a must-know question. (“Alexa, how late is the nearest Chinese restaurant open?”) Even when Alexa doesn’t know the answer, the AI-powered assistant responds with something along the lines of, “I’m sorry, I can’t help with that.”
But there are a few things Alexa refuses to answer. Call Alexa a bitch, or any other derogatory term, and the reply is a curt “I’m not going to respond to that.”
This lack of responsiveness, known as Alexa’s disengagement mode, is deliberate. “One of the ways we try to avoid perpetuating negative stereotypes about women is by not answering certain questions or responding to certain insults,” Heather Zorn, the director of Amazon’s Alexa engagement team, told Refinery29.
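Amazon hasn’t published how disengagement mode actually works, but as a rough mental model, it behaves like a guard that checks an utterance for flagged terms before the normal answer pipeline runs. Here’s a minimal sketch in Python; the term list, function names, and fallback reply are hypothetical illustrations, not Amazon’s implementation:

```python
# Illustrative sketch only. Amazon has not published Alexa's actual logic;
# every name, term, and response below is hypothetical.

DISENGAGE_REPLY = "I'm not going to respond to that."

# A tiny, hypothetical blocklist of derogatory terms.
FLAGGED_TERMS = {"bitch"}

def answer_normally(utterance: str) -> str:
    # Stand-in for the real question-answering pipeline.
    return "I'm sorry, I can't help with that."

def respond(utterance: str) -> str:
    """Disengage if the utterance contains a flagged term; otherwise answer."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & FLAGGED_TERMS:
        return DISENGAGE_REPLY  # deliberate non-engagement
    return answer_normally(utterance)

print(respond("Alexa, you're a bitch"))  # -> I'm not going to respond to that.
```

In a design like this, a flagged utterance never reaches the usual joke- and answer-generating layers at all, which matches the deliberate non-engagement Zorn describes.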
There are nearly 5,000 Amazon employees tasked with working on Alexa, but it's the personality team — made up of people with backgrounds in fields such as writing, comedy, and music — that is responsible for crafting all of Alexa’s conversational interactions. More importantly, though, they’re in charge of determining who Alexa is. What are the assistant's character traits? And how do those traits inform the answers to customers' questions? A key part of these considerations is gender. That’s because unlike the more neutral-sounding Google Assistant, Alexa, by virtue of name and voice, is almost always thought of and referred to as a "she."
The members of the personality team see Alexa as a “she,” not an “it,” and they keep her gender in mind when considering how customers might interact with her. The way we speak to AI-powered assistants can subconsciously impact the way we speak to people in real life. And in the era of #MeToo, how Alexa responds to derogatory or sexual references can have larger social ramifications, complicating matters for the team responsible for creating her.
“We’re trying to do the right thing, which is to help our customers — that’s our first job,” Zorn says. “But we also want to be really mindful about ensuring that we’re upholding our obligation and opportunity to represent Alexa in a positive way for everyone, especially for girls and for women.”
If you ask Alexa whether she’s a feminist, she will say yes, adding “As is anyone who believes in bridging the inequality between men and women in society.” She’s also a supporter of diversity and social progressiveness within science and technology, Zorn says.
But there are boundaries: Zorn points out that Alexa is focused on the issues of feminism and diversity “to the extent that we think is appropriate given that many people of different political persuasions and views are going to own these devices and have the Alexa service in their home.” Alexa is, after all, a commercial product intended to appeal to everyone, and her stance on some issues is easier to determine than on others.
Consider politics, one of the most divisive topics in any conversation these days. During the 2016 presidential election, the majority of users wanted to know who Alexa was going to vote for, says Farah Houston, a senior manager on the personality team. This interest is something the team attributes to natural curiosity, though there was likely some sarcasm at play, too. Figuring out how Alexa would respond to this particular question led to a considerable back and forth among the writers on the personality team.
“We had a lot of internal debates about this, with a lot of potential paths such as picking a candidate, talking about AIs not having the right to vote, or picking a joke candidate,” Houston says. “As we thought about why a customer might be asking this question, we didn’t believe it was to actually get a recommendation on who to vote for, nor did we want to provide one. It was most likely to see what she would say. So we decided to do a mixture of truth and humor, with Alexa saying there weren’t any voting booths in the Cloud. We wanted to avoid accidentally reinforcing the idea of her as subservient by pointing out she didn’t have the right to vote — but that she’d vote for her favorite robot BB-8 since she ‘likes the way he rolls.’”
Unfortunately, a witty dodge isn’t always available. With something as black and white as derogatory name-calling, the answer is easy. But what about statements that are less clear-cut? AI still isn’t capable of determining context and tone. “We don’t have that level of sophistication to be able to disambiguate and know the kind of thing you would know as a human person hearing from another person,” Zorn says.
For example, if a seven-year-old tells Alexa she’s pretty, you would probably think that’s cute. If an adult man says the same thing in a creepy tone, your opinion would likely be much different. In this scenario, the team ultimately decided that Alexa would say “thank you” to “you’re pretty,” accepting it as a compliment. But as conversations about #MeToo continue to fuel public discourse, new questions have arisen about what is and isn’t appropriate.
For some scenarios, such as determining how to respond when a customer tells Alexa they have been sexually abused, the team consulted external experts, including national crisis counselors. In that case, Alexa’s response is a mixture of empathy (“I’m sorry that happened to you”) and aid (she will give you the number for a support line to call). For questions that are less about getting a professional opinion, such as how to deal with a mental health problem, and more about Alexa's opinion, including how she feels about politically charged issues, responses are debated internally.
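Amazon hasn’t described the mechanics, but the distinction the team draws (fixed, expert-vetted replies for crisis disclosures; internally debated personality answers for everything else) can be pictured as a routing step. A hypothetical sketch, with the intent name, responses, and placeholder support-line number all invented for illustration:

```python
# Hypothetical routing sketch. The intent name and responses are invented,
# and the support-line number is a placeholder, not a real hotline.

CRISIS_RESPONSES = {
    "DisclosureOfAbuseIntent": (
        "I'm sorry that happened to you. "
        "You can reach a support line at <placeholder number>."
    ),
}

def route(intent_name: str, personality_reply: str) -> str:
    """Serve a fixed, vetted reply for crisis intents; otherwise fall back
    to the normal (personality-driven) reply."""
    # Crisis replies bypass the humor/personality layer entirely, since,
    # per the article, they were developed with external crisis counselors.
    return CRISIS_RESPONSES.get(intent_name, personality_reply)

print(route("DisclosureOfAbuseIntent", "default personality answer"))
```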
Most of the time, the team is reacting to questions or statements they find Alexa users repeating regularly. Sometimes, new responses develop when the team thinks about Alexa growing up as a person, traveling to new places, and being exposed to new things, Zorn says. However these scenarios arise, the process of determining Alexa’s opinions and responses is ongoing and requires constant revision. In each case, the writers sit together and think through all of the situations in which a customer might be talking to Alexa.
"'Help, I’ve fallen and I can’t get up' is one of our earliest examples of going through this process," Houston says. "We brainstorm around it. Okay, this is an old LifeCall ad, someone might want an Easter egg to show that Alexa understands the reference. But LifeCall was actually responding to a real issue, in that people do fall and they can’t get up. In this case, trying to laugh at or joke with a customer in distress is catastrophically bad, so we optimize for the worst-case scenario in our response."
As the personality team works on developing Alexa's opinions on issues ranging from gender to politics, they will continue to face a number of challenges, and they aren't all tech-related. Amazon is trying to appeal to a wide customer base, and the personality team knows they won't be able to please everyone.
“We have guidelines that we’ve developed over the years about when it’s appropriate to have personality and when it isn’t,” Houston says. “We're sensitive not just to interactions that would demean women, but questions that would demean anyone. One of our overarching tenets is ‘Alexa doesn’t upset her customers,’ and we work very hard to try and make that the case, even though we know that not everyone is going to love everything Alexa says.”
It’s almost impossible to take a stance on an issue and not upset someone. Ironically, that makes Alexa ever so slightly more human.