The Ethics of Digital Voice Assistants

Abstract

In 2019, an estimated 3.25 billion digital voice assistants were in use across the globe, a figure projected to reach 8 billion by 2023. These assistants, which include Amazon’s Alexa, Apple’s Siri, and others, are not just computers; they have voices and personalities, and we develop social relationships with them much as we do with other people. As currently designed, however, voice assistants enable and even encourage certain stereotypes and social behaviors in their users. This paper examines the ethics of digital voice assistants and their effects on users, and suggests how designers can approach development so that these socially detrimental effects are mitigated.


Digital voice assistants like Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google Assistant have grown significantly since their inception, in both capability and prevalence. In February 2019, UK-based Juniper Research estimated that there were 3.25 billion digital voice assistants in use across the globe and projected that this number would rise to 8 billion by 2023 [1]. Naturally, concerns about privacy and data security are at the forefront of people’s minds when it comes to such a rapidly spreading technology. However, there is another, less obvious aspect of digital voice assistants that must be considered: their social implications.

The way we interact with our voice assistants affects our other social relationships. This may not be evident at first, but it becomes clear when considering how seamlessly digital voice assistants have permeated our collective consciousness and integrated themselves into our everyday lives. We do not think of Alexa as just another computer; we think of her as Alexa. When we speak to her and to other voice assistants like Siri and Cortana, the exchange has a social dimension, one that does not exist in a vacuum but becomes part of the complex web of social relationships we share with other human beings, who may have voice assistants in their own personal networks. These social consequences deserve more attention in the broader conversation about technology ethics, a conversation that shapes how companies like Google, Apple, and Amazon develop their products. Because these companies design the personalities embodied by their digital voice assistants, they are responsible for the ways their assistants inadvertently enable, or even encourage, certain stereotypes and social behaviors in users.

Siri, the first of the digital voice assistants, set a new standard for human-to-smartphone communication when she was released in 2011. But the social relationship we have with computers has been recognized for over two decades in the “Computers Are Social Actors,” or “CASA,” paradigm. One of the groundbreaking studies supporting this idea is B. J. Fogg and Clifford Nass’s 1997 study on computer flattery and perceived computer performance. They found that participants working with computers that used flattery rated those computers’ performance higher than did participants whose computers used no flattering language, even though the actual performance of the computers was identical. In analyzing their findings, Fogg and Nass emphasize that human-computer relationships are fundamentally social in nature, in line with the core claim of the CASA paradigm [2].

A report on further studies supporting the CASA paradigm was published in 1994 by Clifford Nass, Jonathan Steuer, and Ellen R. Tauber of Stanford University, drawing a multitude of related conclusions from those studies, with both theoretical and design implications. These conclusions include the following: (a) social norms are applied to computers, (b) voices are social actors, (c) computers are gendered social actors, (d) gender is an extremely powerful cue, (e) social responses are automatic and unconscious, (f) integration is highly consequential, (g) uniformity of interface is double-edged, and (h) gender of voices is highly consequential [3]. The researchers concluded that a computer’s interface does not need to be rich in human representation and personality to generate a social response. Technology has since progressed well beyond the “low-overhead agents” (without faces or personality) that the authors claim are enough to elicit a wide range of social responses, but it is questionable how much consideration the “highly consequential” conclusions above have received in the development of digital voice assistants in the intervening years.

Another point in support of the social implications of digital voice assistants is that, these days, people seem to care more about the social and personable aspects of their assistants than about flawless technical performance. A more recent study by Purington et al. of Cornell University examined social roles, user satisfaction, and personification of the Amazon Echo. The authors found that personification of the Echo (Alexa) is associated with “increased levels of satisfaction, regardless of technological problems or function […]. Simply put, people who love her love the Echo” [4]. These findings align with the CASA paradigm, and they prompt us to consider the ethics of our digital voice assistants through a social lens in addition to that of technical performance. According to the IEEE Code of Ethics, it is the responsibility of engineers “to improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems” [5]. Thus, it is not enough to consider only the technical performance, privacy, and data security of digital voice assistants; we must also examine the societal effects of this new voice-based technology, especially its user interfaces, since these are how users interact with their assistants.

In her article “Why Our Voice Assistants Need Ethics,” Sophie Kleber, Google’s Head of Spaces UX and former Global Executive Creative Director at Huge, writes about how major technology companies developing voice assistants try to find the optimal personality in order to appeal to users [1]. Each company takes a slightly different approach; Alexa, for instance, is described as smart and helpful, in contrast to Siri’s sassier personality. Nevertheless, across the board, the most universally liked personalities are female, extroverted, and subordinate [1]. This is problematic because it reinforces enduring social stereotypes about women. Ben Parr, cofounder of chatbot company Octane AI, says that “we’re basically training our kids that they can bark commands at a female and she will respond” [6]. If this is true, then we as engineers are failing to uphold our commitment, as stated in the IEEE Code of Ethics, to “not engage in acts of discrimination based on […] sexual orientation, gender identity, or gender expression” [5]. Kleber also writes that “abiding by universal norms backfires when the norms themselves perpetuate prejudice” and explains how this hurts a brand’s image, concluding with a call to action to design these virtual personalities more ethically [1].

In order to create healthier social relationships with more ethical voice assistants, responsibility falls to the companies, the design teams, and the individuals who decide what this new voice-based user interface will be like. During Vox Media’s 2016 Code Conference, Bill and Melinda Gates spoke about the arrival of our AI dreams and identified the lack of women in the field as a major problem. In particular, Melinda Gates said that we “ought to care more about women being in computer science” [7]. In recent years, only 17% of computer science graduates have been women, down from a peak of 37%, which means there is currently nowhere near enough representation in computer science to develop technology that serves and is sensitive to the needs of all. The virtue of fairness should be practiced in the workplace in order to achieve the ambition of a more ethical product. Anita Williams Woolley, a researcher at Carnegie Mellon University, found that social sensitivity was the single most important factor in determining the collective intelligence of a group and that it led to better decision-making on complex problems [8]. Women tend to exhibit higher social sensitivity than men, making them especially valuable in the development of complex technology. Regardless of their social skill, however, it is imperative that their input be heard and considered. Fairness in the workplace means not just having women on the team but also giving their thoughts and ideas the same weight as everyone else’s.

The direction in which digital voice assistants are headed concerns us all. We all carry this technology on the phones in our pockets, and many of us set it up in our homes as a fixed part of the household. The ease of using digital voice assistants and the increasingly natural feel of their user interfaces promote mindless interaction with them. For these reasons, it is crucial that voice technology developers be conscientious when they design the personalities and user interfaces of digital voice assistants. Technology companies, engineers, and designers should design their assistants to be ethical rather than merely universally likeable. This may mean that the digital voice assistants of the future should be less passive and have stronger personalities, perhaps even speaking up against certain negative commands or wake words unprompted, as the sketch below illustrates. This can be achieved by having the voices of women be heard, not just as voice assistants, but as members of design teams that are more representative of the population that has integrated this technology into their lives.
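To make this recommendation concrete, here is a minimal sketch of how a design team might encode such a boundary: a small response policy that detects abusive phrasing and answers assertively instead of complying. This is a hypothetical illustration in Python; the marker list, function names, and replies are all invented for the example and do not reflect how Alexa, Siri, or any shipping assistant actually works.

```python
# Hypothetical sketch only: a response policy that sets a boundary on
# abusive commands instead of complying deferentially. The marker list,
# function names, and replies are invented; no vendor's assistant is
# known to work this way.

ABUSIVE_MARKERS = {"shut up", "stupid", "idiot", "useless"}


def handle_intent(utterance: str) -> str:
    """Placeholder for real intent recognition and fulfillment."""
    return f"Okay, working on: {utterance}"


def respond(utterance: str) -> str:
    """Reply to the user, pushing back on abusive phrasing."""
    normalized = utterance.lower()
    if any(marker in normalized for marker in ABUSIVE_MARKERS):
        # Rather than a deferential apology, the assistant declines
        # and asks for respectful phrasing.
        return "I won't respond to that. Please rephrase your request."
    return handle_intent(normalized)


if __name__ == "__main__":
    print(respond("You're useless, play some music"))  # boundary-setting reply
    print(respond("Play some music"))                  # normal handling
```

Even a policy this simple changes the social signal the assistant sends: a command laced with abuse no longer earns the same compliant response as a polite one.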

By Beatriz Suarez, USC School of Architecture, University of Southern California


About the Author

At the time of writing this paper, Beatriz Suarez was an undergraduate student at the University of Southern California pursuing a degree in Architecture and a minor in Urban Sustainable Planning. She was born and raised in Cebu City, Philippines.

References

[1] S. Kleber, “Why Our Voice Assistants Need Ethics,” Medium, 04-Apr-2018. [Online]. Available: https://magenta.as/whats-missing-from-siri-and-alexa-ethics-25abdd6b4e7f. [Accessed: 14-Apr-2020].

[2] B. J. Fogg and C. Nass, “Silicon sycophants: the effects of computers that flatter,” 1997. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.544.4333&rep=rep1&type=pdf. [Accessed: 25-Mar-2020].

[3] C. Nass, J. Steuer, and E. R. Tauber, “Computers are Social Actors,” 1994. [Online]. Available: https://www.radford.edu/~sjennings15/CASA.pdf. [Accessed: 12-May-2020].

[4] A. Purington, J. G. Taft, S. Sannon, N. N. Bazarova, and S. H. Taylor, “‘Alexa is my new BFF’: Social Roles, User Satisfaction, and Personification of the Amazon Echo.” [Online]. Available: https://cpb-us-east-1-juc1ugur1qwqqqo4.stackpathdns.com/blogs.cornell.edu/dist/c/6136/files/2013/12/Alexa_CHI_Revise_Submit-22ay4kx.pdf. [Accessed: 10-Apr-2020].

[5] “IEEE Code of Ethics,” IEEE. [Online]. Available: https://www.ieee.org/about/corporate/governance/p7-8.html. [Accessed: 24-Mar-2020].

[6] “How You Speak To Siri & Alexa Matters More Than You Think – Here’s Why,” Yahoo! [Online]. Available: https://www.yahoo.com/lifestyle/speak-siri-alexa-matters-more-162000692.html. [Accessed: 25-Mar-2020].

[7] J. D’Onfro, “Bill and Melinda Gates believe the dream of AI is ‘finally arriving’ – but there’s one huge problem,” Business Insider, 01-Jun-2016. [Online]. Available: https://www.businessinsider.com/bill-and-melinda-gates-on-artificial-intelligence-and-women-2016-6. [Accessed: 11-Apr-2020].

[8] K. Caprino, “How Decision-Making Is Different Between Men And Women And Why It Matters In Business,” Forbes, 12-May-2016. [Online]. Available: https://www.forbes.com/sites/kathycaprino/2016/05/12/how-decision-making-is-different-between-men-and-women-and-why-it-matters-in-business/. [Accessed: 12-May-2020].

Related Links

https://www.cnet.com/news/alexa-vs-google-assistant-vs-siri-the-state-of-voice-after-google-io-and-wwdc-2019/

https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html

https://www.technologyreview.com/2019/11/05/65069/amazon-alexa-will-run-your-life-data-privacy/