The Unnerving Implications of Brain-Computer Interfaces

9/28/2020

Elon Musk has always been one for lofty technological goals, but he’s not always concerned with their societal or ethical implications. During a recent demo for his company Neuralink, he showed the audience how the company’s signature brain-computer interface (BCI), implanted in a pig’s brain, could detect when the animal was sniffing for food. Musk claims that, when finished, the device, roughly the size of a fifty-cent piece and one he compares to a “Fitbit in your skull,” will likely be able to record human memories and control external devices with a thought. Devices like these hold great potential to help people who are disabled; for instance, someone who is paralyzed could control a wheelchair or computer with just their brain. But before we as a society plunge enthusiastically into the world of neurotechnology, there are several salient concerns to consider regarding the ethics of devices that can read and even literally change your mind.

As Marcello Ienca, a neuroethicist and researcher at one of Europe’s top technology universities, noted in a 2017 paper, the rapid advancement of BCIs forces us to ponder human rights so basic that we might not have previously considered them rights at all. Ienca’s paper details four such rights that he considers necessary to protect before neurotechnology becomes mainstream. First, the right to cognitive liberty: you should be able to decide freely whether or not you want to use any kind of neurotechnology. This might seem obvious, but the United States military is already looking into ways to use BCIs to make its soldiers “more fit for duty,” and the Chinese government has been recording data from some employees’ brains with brain caps, scanning for emotional states such as depression, anxiety, rage, and fatigue. It’s not difficult to imagine a dystopian future in which signing up for the military requires having a “perfect soldier” chip installed in your brain, and in which your company monitors your mood and attention span during work to make sure you’re being as productive as humanly possible. This scenario also touches on another of Ienca’s proposed rights: the right to mental privacy. Perhaps the government decides to use neurotechnology to aid in interrogations, or to use data from BCIs as evidence in criminal investigations. How does this affect the constitutional right to remain silent? Could even your own thoughts be used against you in a court of law?

The first two of Ienca’s “neurorights,” as they’ve been dubbed, deal with the right to choose and the right to privacy; however, we must also consider the more serious possible consequences of BCIs that can “write” to the brain as well as read from it. Those devices necessitate what he calls the right to mental integrity, or the right not to be harmed, either physically or psychologically, by neurotechnology. For example, the consequences of extremist groups (religious, political, terrorist, or otherwise) or fascist regimes gaining access to devices that can directly modify the brains of their members or targets are terrifying to think about. Even for a person in a democratic society who is neither a member nor a target of such groups, there’s the risk of hacking, one that any electronic device comes packaged with. There’s already a large amount of concern over (and proof of) the potential damage that hacking self-driving cars can cause; imagine the implications of someone being able to hack your brain. It’s still hypothetical at this stage, but proof-of-concept studies have demonstrated that hacking BCIs is a real possibility.

Lastly, Ienca brings up the right to psychological continuity: the right to protection against alterations to your sense of self. There’s already been interest from advertisers in the brand-new field of “neuromarketing,” which involves studying how people decide which products they want to buy and from whom, and then attempting to change (or “nudge”) those decisions on a wholly subconscious level. Outside of advertising, another concerning possibility discussed in some circles is using BCIs to help people with “violent tendencies” control those impulses by blocking them with signals that induce calm. Doing this at the behest of the person affected, with their full understanding and consent, is one thing, but it doesn’t seem too far a leap to imagine governments using this sort of neurotechnology on, say, violent prisoners, “for their own safety,” or otherwise infringing on people’s autonomy.

These sorts of dystopian hypotheticals are, of course, currently just that: hypothetical. But without laws in place to protect rights like the ones Ienca has named, there aren’t many obstacles to letting them become a reality. So, with that in mind, can we morally justify the enthusiastic research into BCIs and the bevy of companies, including Neuralink, that have sprung up in hopes of turning a profit on neurotechnology? Are the benefits BCIs can offer to those who are disabled worth the possible human rights violations, the losses of privacy or autonomy, that could come from allowing companies to develop and sell them? In other words, as always seems relevant in the wake of new technological developments, how do we balance the beauty of innovation with its serious societal implications?