The Ethical Design of Getting Lost in a Game

Introduction:
Virtual reality (VR) is an industry on the rise; experts in the field even declared 2016 “The Year of VR.” Though many may expect entertainment to be virtual reality’s epicenter, the technology has proven to have remarkably useful applications in other settings, from advertising and education to medicine and the military. As the burgeoning technology gains popularity, the scientific community is still learning how susceptible the human mind is to being tricked by staged realities, and what the long-term effects of participating in a virtual environment may be. Until these potential psychological downsides are understood both by the scientists and engineers creating VR and by the people using it, there are certain ethical pitfalls of which we must be cautious, particularly in propaganda and in military training, where the ulterior motives of those funding the creation of VR environments may not be aligned with users’ expectations or best interests.

Background: It’s Just a Game, What’s the Big Deal?
Time and time again, psychologists have shown that the human brain is easily manipulated by artificial environmental factors: take the Stanford Prison Experiment, for example. Stanford professor Philip Zimbardo split student volunteers into two groups; one group was told that they were the “guards,” the other that they were the “prisoners” [1]. What stunned the world about this experiment is how badly it ended. Both groups internalized their roles to the point where the guards began committing brutal acts against the prisoners. The prisoners suffered so much mentally and physically that the experimenters decided continuing would be unethical, and they halted the experiment. These students had volunteered to participate; they knew it was all an experiment, they knew it was not real, and yet they were still fooled by the simulation of a prison.

For a larger-scale and arguably even less believable example, we can look at the 1975 premiere of the movie Jaws. Following the summer blockbuster’s success, humans decimated shark populations in the northeast; by some estimates, tiger shark populations decreased by 65%, great whites by 79%, and hammerheads by 89% [2]. Peter Benchley, the author of the novel Jaws, deeply regretted writing the book after seeing the destruction it brought to these shark species, many of which now rank as vulnerable or endangered as a result [2]. Viewers knew it was a work of fiction, and fishermen knew that sharks were not aggressive and did not harbor resentment toward individual people, yet the “Jaws Effect” took hold. Fear of being bitten by a shark is far more common than actual shark bites, and sharks’ popular-culture reputation as angry man-eaters persists today [3]. In short, a 2-D, fictional cinematic experience drastically changed the way people thought about sharks by inducing fear, even though logically, people should have been able to shrug off the fallacies.

What went wrong in both the Stanford and Jaws examples was that the creators of each underestimated how easily the human brain could be fooled by simulation. The participants in the Stanford experiment could not have known, when they volunteered, how intense the atrocities would become. The natural malleability of the human psyche caused them to internalize their roles to the point where the atrocities they carried out seemed normal. Because of this, the experiment became ethically impermissible from all standpoints [1]. A utilitarian approach dictates that to act ethically is to act in a way that creates the greatest amount of good for the greatest number of sentient beings. The Stanford experiment harmed its participants more than the potential scientific insight of further testing could have benefitted society. The fact that the experiment had to be stopped was proof enough that the original hypothesis, that people are unlikely to internalize artificial roles, was incorrect. The experiment was also a basic violation of the do-no-harm principle, a guiding light in ethics. A rights approach to ethics dictates that to act ethically is to protect people’s autonomy and rights. However, the prisoners’ right to retain basic dignity was infringed upon past any level they could have anticipated when they volunteered. And because the experimenters let the guards continue committing heinous acts without intervening, the dignity of the guards was arguably being destroyed as well.

In a very similar way, the creation of Jaws might be retroactively considered unethical; author Peter Benchley would certainly agree that it was. Utilitarian ethics pursues the maximum happiness and least suffering possible for the most conscious creatures. If we look at the world through that lens, killing an animal is analogous to killing a human: it can be justified if the net benefit outweighs the suffering of the animal in question, but when it comes to killing for sport, there is hardly a case to be made for ethical justifiability. Shark population declines after Jaws were driven primarily by sport fishing; in other words, the gain was not meat to eat or even a fin to sell (though that too would raise ethical questions), but the entertainment of the fishermen [2]. Oliver Crimmen, the fish curator at London’s Natural History Museum, described the period of shark hunting this way: “there was no remorse, since there was this mindset that [the sharks] were man-killers” [2]. For those who reject the idea of weighing animals’ interests alongside humans’, a utilitarian argument still applies: removing the apex predator of these marine ecosystems detrimentally affects the other fish populations, which ultimately harms humans by limiting access to the healthy fisheries that some areas depend upon for nutrition [4].

In both examples, people were so entranced by a fictional reality that they allowed it to affect their real-life behavior in a harmful way, and the ethically unsupportable acts they committed as a result cast the whole process into an ethical gray area. What we can learn from both examples is that humans are deeply fooled by simulations, even when they consciously know that they are not real, and this subtly gives immense power to those creating the simulation. In other words, good fiction is not on the ethical chopping block, but the creators of good fiction may need to be more cognizant of the power they have to sway people’s actions.

It’s Just a VR Game, What’s the Big Deal?
Stanford University’s Virtual Human Interaction Lab studies the degree to which virtual reality environments can affect human behavior; in other words, how easily and how intensely people’s brains are affected by virtual reality simulations. The lab has documented something it calls the “Proteus effect”: people tailor their actions inside a VR environment to match those they would expect of their avatar. For example, when asked to 1) socialize and 2) negotiate a deal, players with tall avatars were found to be bolder than those with average-sized or short avatars [5]. More interesting to this discussion, though, is that participants carried this confidence out of virtual reality and into reality: when asked to socialize and negotiate afterward, people who had played with taller avatars were bolder than those who had played with shorter ones [5]. The Proteus effect has held true in many contexts. Players given Superman-like avatars exhibited supererogatory altruistic behavior after the test, and players given avatars with darker skin tones showed less implicit racial bias after the test than before [5,7]. However, not all of these contexts have been so positive. Similar studies have shown that players whose avatars are dressed in KKK-like outfits or black robes exhibit more aggressive behavior within the VR environment [6], demonstrating that the Proteus effect can elicit harmful behavior just as readily as positive behavior.

Other studies by the Stanford lab have found that significant events in virtual environments can also change people’s behavior after they take the goggles off. Participants who were asked to chop down a tree in a virtual environment later subconsciously used 20% fewer paper napkins than those who had only read about chopping down a tree [7]. A similar study is currently underway, testing whether people become more empathetic toward homeless people when put through a virtual experience of homelessness rather than simply made to read about a homeless person’s experience [8]. This example in particular illustrates not only that VR has the power to affect behavior (like any other medium of good fiction), but also that it potentially has more power to do so than conventional media. So it is plausible that a VR version of Jaws would have caused even more destruction than the movie did, because of VR’s interactive component.

Organizations with specific agendas have already picked up on this power. Animal rights groups have made waves by inviting users to experience an industrial farm from the perspective of a chicken [9], and the UN even sponsored a VR project that puts the viewer in the shoes of a Syrian refugee child [10]. Already these projects raise their own ethical dilemmas: participants gave their consent, just as the participants in the Stanford experiment had, but is it possible that for some people the experience was more intense than they could have anticipated? At what point does the programmers’ emotional manipulation of subjects cross from ethically permissible to not?

Ramifications of the Programmer-User Power Disparity:
These questions must be at the center of any ethical discussion on virtual reality and at the forefront of VR programmers’ minds. The Stanford lab studies are powerful testaments to the potential VR has to shape human behavior even after exiting the virtual world, and the lack of information concerning possible long-term effects should be concerning to anyone involved in the field.

That virtual realities, or simulations of any kind, can be designed to elicit certain behaviors, negative or positive, is itself ethically ambiguous. A rights approach to ethical thinking demands that we respect the autonomy and dignity of others, and respect for autonomy is one of the core principles of bioethics. However, the participants in these studies had no concept of how they were being manipulated or how their autonomy was being violated. They gave consent to be part of the study, but the question persists of whether participants can truly give informed consent when the scientific community does not fully understand the possible consequences. Participants need to be able to trust the experimenters and programmers to protect their well-being, yet for the most part, experimenters are still discovering the persuasive power of the technology. Ultimately, as VR becomes more popular, users must understand that the technology has the ability to affect their thinking and behavior, and programmers must understand the power they have to shape users’ behavior before they create anything with the potential to harm. In other words, there is an ethical obligation on both sides, the programmer’s and the user’s, to recognize that there is a dearth of information on the potential long-term effects of VR.

The ambiguity of what it means to give informed consent with an unknown technology like VR is especially pertinent in situations in which users are compelled to use virtual reality, such as during military training. VR allows soldiers to practice handling situations that might otherwise be costly to stage, like flight simulations, or that they might otherwise not experience before going to war, like an active battlefield. They can practice nonviolently defusing tense situations with potential hostiles or responding to the explosion of a landmine (without the danger or cost of training with an actual landmine), all while receiving instant feedback from their drill sergeants [11]. Combat medics can even practice administering medicine under extreme duress [11]. VR allows soldiers to gain exposure to the stress of combat without the immediate physical danger, with the end goal of building soldiers’ stress resilience to the point of preventing Post-Traumatic Stress Disorder (PTSD), an epidemic affecting veterans today.

From an ethical standpoint, the current use of VR technologies in military training almost seems obligatory. These simulations give soldiers excellent opportunities to experience what could be life-or-death situations, but in a learning environment with low stakes. However, with what we know, and do not know, about human susceptibility to virtual simulations, we must be cautious about subjecting young men and women to extreme stress in a virtual environment. If VR is a PTSD vaccine, we are still very much figuring out the dosage that protects without infecting.

Another worry about military use of VR is that it will be used to inure soldiers to killing, or to reduce their empathy for the enemy. The framework that best explains why this would be ethically wrong is the rights approach: all humans deserve to retain their dignity and basic humanity. To desensitize soldiers to the point of making them killing machines would inarguably strip them of these entitlements, and to demonize the enemy would strip the enemy of the same. Supporters of such a policy might argue that winning the conflict is the greatest good for the greatest number of people, and that dehumanizing a few soldiers in order to win faster and with fewer casualties might be justifiable collateral damage. While this is a well-reasoned argument, it is too far removed from the human element of the ethical dilemma; the programmers writing such a game and the sergeants implementing it would be knowingly and intentionally damaging the user. The scale of the conspiracy of creators and military decision-makers that would have to defy their own moral compasses to implement this kind of desensitization policy is prohibitively massive. All the same, there exists the possibility that someone, somewhere may try to do just this. For this reason, VR programmers must be vigilant against unethical or ethically ambiguous subsidized projects. Ultimately, the decision of how a VR environment will affect its users falls to those creating the environments. Programmers are the last line of moral defense, and they have to understand that this is their role.

As engineers and software developers, we need to take greater responsibility for our industry. VR is exciting and new, and it has the potential to be extremely helpful to a lot of people, but that potential will always be threatened by those who try to use the technology for less noble purposes. As virtual reality grows and enters even more industries, all parties involved must be held accountable: programmers must understand the power they have to affect participant behavior, participants must understand the possible long-term effects of use, and investors must respect the importance of morality even when it conflicts with potential profit. Virtual reality is powerful, but with great power comes great responsibility. Jeremy Bailenson, who leads the Stanford Virtual Human Interaction Lab, said it best: “When I think about virtual reality, I think virtual reality is like uranium: It’s this really powerful thing. It can heat homes and it can destroy nations. And it’s all about how we use it” [12].

By Kiera Breen Salvo, Viterbi School of Engineering, University of Southern California


Helpful Links

For more on the Stanford Prison Experiment, visit http://www.prisonexp.org/the-story/

For more depth on Bailenson and Yee’s various experiments on virtual reality: https://vhil.stanford.edu/pubs/

To check out the controversial “I, Chicken,” follow this link to the embedded video: http://www.peta.org/blog/petas-innovative-virtual-reality-experience-turns-chicken/


Works Cited

[1] S. McLeod. “Stanford Prison Experiment.” Simply Psychology, 2016. [Online]. Available: http://www.simplypsychology.org/zimbardo.html.

[2] BBC. “How Jaws Misrepresented the Great White.” BBC News, 2015. [Online]. Available: http://www.bbc.com/news/magazine-33049099.

[3] B. Francis. “Before and After ‘Jaws’: Changing Representations of Shark Attacks.” The Great Circle, vol. 34, no. 2, 2012, pp. 44–64. [Online]. Available: www.jstor.org/stable/23622226.

[4] Shark Savers. “Sharks’ Roles in the Ocean.” SharkSavers.org, 2017. [Online]. Available: http://www.sharksavers.org/en/education/the-value-of-sharks/sharks-role-in-the-ocean/.

[5] N. Yee, J. Bailenson. “The Proteus Effect: The Effect of Transformed Self-Representation on Behavior.” 2007. [Online]. Available: https://vhil.stanford.edu/mm/2007/yee-proteus-effect.pdf.

[6] J. Peña, J. Hancock, N. Merola. “The Priming Effects of Avatars in Virtual Settings.” Communication Research, 2009. [Online]. Available: http://www.academia.edu/1266919.

[7] A. Gorlick. “New virtual reality research – and a new lab – at Stanford.” Stanford News, 2011. [Online]. Available: http://news.stanford.edu/news/2011/april/virtual-reality-trees-040811.html.

[8] L. Sydell. “Can Virtual Reality Make You More Empathetic?” NPR, 2017. [Online]. Available: http://www.npr.org/sections/alltechconsidered/2017/01/09/508617333/can-virtual-reality-make-you-more-empathetic.

[9] B. King. “A Virtual View of a Slaughterhouse.” NPR Cosmos & Culture, 2016. [Online]. Available: http://www.npr.org/sections/13.7/2016/02/04/465530255.

[10] J. Alsever. “Is Virtual Reality the Ultimate Empathy Machine?” Wired Magazine, 2015. [Online]. Available: https://www.wired.com/brandlab/2015/11/is-virtual-reality-the-ultimate-empathy-machine.

[11] J. Buckwalter. “Stress Resilience in Virtual Environments (STRIVE).” USC Institute for Creative Technologies Prototypes, 2011. [Online]. Available: http://ict.usc.edu/prototypes/strive.

[12] I. Novacic. “How might virtual reality change the world? Stanford lab peers into future.” CBS, 2015. [Online]. Available: http://www.cbsnews.com/news/how-might-virtual-reality-change-the-world-stanford-lab-peers-into-future/.