Ethics of Transference of Human Intelligence to Non-Biological Substrates

For many of us, death seems inevitable. We are given limited time to inhabit our biological bodies, and we are separated from them when they decay. Recently, however, research into the feasibility of reproducing human intelligence in a physical medium other than our bodies has sparked hope for those who fear death. Transferring human intelligence to another medium could provide a solution to the decay of our natural minds and bodies. For instance, human consciousness replicated on computer hardware could be reliably repaired, exist in places uninhabitable to biological life, and be improved upon in a controlled, purposeful fashion. Despite the promise of these new abilities and advantages, the transference of human intelligence to non-biological substrates carries serious ethical concerns. We must consider the ethical implications of being given new bodies, being connected to the world in new ways, and being mentally and physically altered. We will review the implications of this technology and argue that embracing it would do more harm to humanity’s ethical character than good.

The feasibility of developing human intelligence on a non-biological substrate, a prospect known as substrate-independent minds or SIM, is uncertain. The potential for it to become reality rests on a number of assumptions about the operation of the human brain. One such premise is functionalism, the idea that the brain is computable [1]. For SIM to be possible, the brain’s operations must behave like mathematical functions, i.e. given a finite set of inputs, they produce a corresponding finite set of outputs. If the human brain adheres to functionalism, then the Church-Turing thesis, a principle put forth by two of the fathers of modern computer science, implies that all of the mind’s inner workings can be reproduced by a computer executing a large but finite set of instructions [2]. So far, the evidence is at least consistent with this view. For instance, the field of neural prosthetics has been able to reproduce elements of the brain with encouraging accuracy. At the University of Southern California, Theodore Berger and his associates replicated the function of part of a rat’s hippocampus with a prosthetic implant and successfully installed it in an animal subject, with promising results [3]. Scientists’ ability to replicate this biological component indicates an underlying predictability, a trait shared with modern hardware. Further, a study at Duke showed that animal brains produce predictable electrical signatures when the animals attempt to move particular limbs [4], again demonstrating a reproducibility indicative of functionalism. Finally, research at MIT aims to simulate the activity of the 302 neurons found in C. elegans, a species of roundworm [3]. In short, these studies suggest that there is an underlying, computable logic to the workings of the mind that may be reproducible.
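
To make the functionalist premise concrete, the toy sketch below treats a single model neuron as a computable function: identical inputs always yield identical outputs. It is a standard leaky integrate-and-fire model with arbitrary parameters, written in Python purely for illustration; it is not drawn from the cited studies and models no real brain.

```python
# Toy illustration of the functionalist premise: a neuron's behaviour treated as a
# computable function from inputs (injected current over time) to outputs (spike times).
# Parameters are arbitrary illustrative values, not measurements of any real neuron.

def simulate_lif_neuron(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                        v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Return the list of time steps at which the model neuron 'spikes'."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by the input current.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:          # threshold crossed: record a spike and reset
            spikes.append(step)
            v = v_reset
    return spikes

stimulus = [2.0] * 1000               # a constant stimulus in arbitrary units
print(simulate_lif_neuron(stimulus))  # same input always gives the same spike times
```

Running the same stimulus twice produces exactly the same spike times, which is the sense in which the brain’s operations would need to behave like functions for SIM to be possible.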

Unfortunately, developing SIM is dangerous for all test subjects involved, human or otherwise. Many of the experiments performed to understand cognition are invasive [5]. Whether installing an implant or dissecting brain matter to learn about its structure, scientists often cause pain or even death [6]. From this disconcerting reality arises the question: at what cost are we willing to discover whether human intelligence can be replicated? It seems inappropriate to put members of other species through suffering in pursuit of immortality for our own species; destroying our collective virtue is a high price to pay for anything, even immortality. Additionally, the development of SIM will likely be incompatible with existing ethical research standards. The reason for this conflict is simple: we have no idea how effectively a consciousness transferred across media will be able to communicate its internal state; a rat brain moved into a computer, for example, may not be able to show that it is suffering [5]. If the transferred mind is unable to communicate properly with the world, it could be in immense pain, or in a state of artificially induced dementia, unbeknownst to researchers. Establishing a reliable mechanism for monitoring the internal state of such a mind would take time, if it is possible at all. Because of this challenge, the first attempts at SIM could prove especially painful for test subjects [5]. Again, it seems selfish to force animals to be the preliminary test subjects of a technology intended to benefit only ourselves.

Beyond animals, replicating human intelligence will require extensive use of human test subjects. Worse, we are currently unsure if non-invasive studies would be sufficient to develop SIM. Scientists have much to learn about the inner workings of the brain before even postulating a design that might exist on another substrate [7].

Even with a comprehensive understanding of the human brain, we would still be uncertain about the outcome of a transfer of consciousness involving a human [5]. Even after successful transfers with animal test subjects, the humaneness of the process would remain in doubt, given the animals’ inability to communicate. If animal tests falsely indicated that the transference process was safe, the first tests on humans could go horribly wrong. There is also the continuity of consciousness problem [8], which asks the following question: if a person is replicated in a machine, is the new entity really the same person? Even though it might possess all of the original person’s memories and think in the same fashion, it could be a different person, just as twins with identical DNA are nevertheless distinct individuals. In tests attempting the transfer of consciousness, therefore, the emergence of a healthy, functioning consciousness in the new medium does not necessarily validate the transfer procedure. Confronted by this scenario, we must ask whether such an experiment violates the rights of the subject. The subject would be told that the medium of their consciousness was being changed, when in fact they might simply be replaced by an identical copy capable of convincing the researchers that the experiment worked. This would violate the rights of the transferred human being, even if it were done unwittingly.

Placing human intelligence on a non-biological substrate has another implication that is difficult to justify. Given a sophisticated understanding of the human brain and its workings on a new substrate, it may be possible to “save” the state of the mind at a particular point in time and later revert the mind to that saved state. The process has been popularly referred to as “mind uploading” by futurists like Ray Kurzweil, a director of engineering at Google [3][9]. This technology could greatly alter the way we experience consciousness, particularly the way we form and interact with memories. Currently, we have limited control over the memories we form during our lifetimes. We can only select the memories we will form by putting ourselves in situations in which we are likely to form the desired memories, and even then we cannot be sure that we will form them. With mind uploading, entirely removing our own memories (by restoring a previous state of our own consciousness) or supplanting our own memories with those of another person becomes possible. In either case, the link between the actions we take and the memories we form, long a defining feature of the human experience, would be broken. Individuals could freely commit crimes and wipe their minds of the experience, absolving themselves of any guilt that might have existed. Alternatively, an individual might live a life of debauchery, but transplant pleasant memories of family and fulfillment into their brain. In both cases, they are not forced to face the negative emotions that accompany a life lacking virtue. We must ask whether we really want to destroy this link between virtuous actions and their natural rewards. If we were to break this connection, we would not be the virtuous beings we imagine ourselves to be, even if our memories say otherwise.
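
As a loose analogy from everyday software, the “save and revert” idea resembles checkpointing and restoring a program’s state. The sketch below is purely illustrative; the MindState class and its fields are hypothetical placeholders, not anything proposed in the sources.

```python
# A software analogy for "saving" and "reverting" a mind state: checkpoint/restore.
# MindState and its 'memories' field are hypothetical stand-ins for illustration only.
import copy

class MindState:
    def __init__(self):
        self.memories = []                # hypothetical stand-in for episodic memory

    def experience(self, event):
        self.memories.append(event)

    def checkpoint(self):
        # Capture a deep copy of the current state ("save").
        return copy.deepcopy(self)

    def restore(self, snapshot):
        # Overwrite the current state with a previously saved one ("revert").
        self.memories = copy.deepcopy(snapshot.memories)

mind = MindState()
mind.experience("a quiet afternoon")
saved = mind.checkpoint()                 # state saved at this point in time
mind.experience("something regrettable")
mind.restore(saved)                       # the later memory is gone after reverting
print(mind.memories)                      # ['a quiet afternoon']
```

In this analogy, whatever happened between the checkpoint and the restore leaves no trace, which is precisely the severed link between action and memory described above.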

Another concerning implication of non-biological substrate-based intelligence is the greater interconnectivity of human minds [10]. With a more complete understanding of the human brain, the technology we use to communicate will likely become more deeply intertwined with our brains’ analogous forms in the new substrate. Minds on such a network would be susceptible to abuse or theft by hackers [5], a class of abuse far worse than any experienced on today’s internet. Hackers could potentially stimulate uncomfortable feelings, install undesirable memories, or remove or alter sentimental ones. In short, a hacker’s potential to cause damage is magnified if we develop SIM. From a utilitarian standpoint, it is difficult to justify the development of SIM and the increased interconnectivity between minds it entails.

There are, however, aspects of SIM that may prove positive for humanity. The most obvious is the potential to live vastly longer than our short biological lifetimes allow [11]. A robust architecture built on a new substrate may allow us to live many multiples of our current lifespans. We may also have more success repairing our minds [9]. We are currently unable to cure conditions like neurodegenerative diseases because our ability to intervene in the biological medium is limited; this may not be the case in a new medium.

Further, SIM would free us from reliance on food and water [12]. We would likely become reliant instead on some new resource required to power our minds in the new substrate. If such a resource proved more abundant than food and water, it could ease the problem of global hunger. This would clearly pass the justice test.

In the same vein, SIM may also eliminate our dependence on our biological forms. In a new substrate, we would likely require some new conduit to interact with each other and with the outside world, since our minds would no longer be tied to our bodies. This development could free those with disabilities from the physical limitations imposed on them. For instance, assuming human minds could be transferred to computers, every person could be provided a fully functioning virtual body on par with those of all other humans living within the substrate. This would be far more just than the existing distribution of bodies provided to humans by nature.

Finally, existence in a new substrate may allow us to improve upon our natural cognitive architecture [10]. With full understanding of the substrate, we may be able to modify our new ‘brains’ in ways we currently cannot with our biological ones. This allows for the possibility of augmenting our intelligence, a process that, as researcher Randal A. Koene points out, is a natural extension of existing Darwinian processes [10]. Just as virtual bodies may remedy the inequality of nature’s distribution of bodies, improving the architecture of the brain within a new substrate may remedy the inequality of native intelligence. With SIM, all humans would have the potential to be equally intelligent. Additionally, the cumulative sum of this intelligence, if it could be quantified, would likely be much greater than it is today. It is difficult to say whether improving our intelligence as a species would raise our overall happiness, but it might allow for a more just distribution of intelligence.

The perils of developing the technology, however, outweigh the potential gains. Breaching our existing moral code to procure speculative benefits is irrational and selfish. Subjecting test subjects to immense pain and exposing humanity to greater potential harm from hackers will only damage our morality. SIM must be viewed as more than a potential mechanism for extending human life. It is a dangerous technology with the potential to destroy the ethical character humanity has worked so hard to develop. It also has the potential to rewrite the rules of nature: it might grant those born with deficient bodies or intelligence what other members of their species naturally possess, but it would certainly change the time-honored ways in which we function and think. While it is too early to determine the nature of the influence SIM will have on humanity, we should approach the topic with a cautious mindset. It is perhaps better left alone.

By David Bell, Viterbi School of Engineering, University of Southern California


Works Cited

[1] AH Marblestone, BM Zamft, YG Maguire et al., “Physical principles for scalable neural recording”. Front. Comput. Neurosci. 7:137 (2013). doi: 10.3389/fncom.2013.00137

[2] B Copeland, “The Church-Turing Thesis”. Retrieved February, 2015. Available: http://plato.stanford.edu/entries/church-turing

[3] R Kurzweil, “How to Create a Mind: The Secret of Human Thought Revealed”. Penguin Books (2012).

[4] A Regalado, “The Brain Is Not Computable”. Retrieved February, 2015. Available: http://www.technologyreview.com/view/511421/the-brain-is-not-computable

[5] G Dvorsky, “You Might Never Upload Your Mind Into A Computer”. Retrieved February, 2015. Available: http://io9.com/you-ll-probably-never-upload-your-mind-into-a-computer-474941498

[6] “Animal Experiments Overview”. Retrieved February, 2015. Available: http://www.peta.org/issues/animals-used-for-experimentation/animals-used-experimentation-factsheets/animal-experiments-overview

[7] R Koene, “Machines in minds to reverse engineer the machine that is mind”. TEDx Talks, Tallinn, Estonia, September 4, 2012.

[8] S Novella, “The Continuity Problem”. Retrieved February, 2015. Available: http://theness.com/neurologicablog/index.php/the-continuity-problem

[9] M Neal, “Scientists Are Convinced Mind Transfer Is the Key to Immortality”. Retrieved February, 2015. Available: http://motherboard.vice.com/blog/scientists-are-convinced-mind-transfer-is-the-key-to-immortality

[10] A Piore, “The Neuroscientist Who Wants to Upload Humanity into a Computer”. Retrieved February, 2015. Available: http://www.popsci.com/article/science/neuroscientist-who-wants-upload-humanity-computer

[11] R Kurzweil, “Our Bodies, Our Technologies: Ray Kurzweil’s Cambridge Forum Lecture (Abridged)”. Retrieved February, 2015. Available: http://www.kurzweilai.net/our-bodies-our-technologies-ray-kurzweil-s-cambridge-forum-lecture-abridged

[12] M Anissimov, “What Are the Benefits of Mind Uploading?”. Retrieved February, 2015. Available: http://lifeboat.com/ex/benefits.of.mind.uploading