“AI systems with human-competitive intelligence can pose profound risks to society and humanity.” This quote comes from a recently published open letter in which thousands of researchers and AI innovators, including Elon Musk, called for a halt in the development of AI systems more powerful than GPT-4. And there is copious evidence that the development of AI systems like GPT-4 should, at the very least, be regulated.

For those unfamiliar, GPT-4 is the model that powers the latest version of ChatGPT. It and other AI chatbots rely on reinforcement learning (RL) and large language models (LLMs). Most chatbots use a form of RL driven by human feedback: the AI learns by receiving feedback from human users, enabling it to discern user preferences. It also means the AI is vulnerable to user biases. LLMs, meanwhile, are trained on vast amounts of text, from books to websites, to build statistical correlations between words, which is what allows the program to generate human-like text. These models are renowned for performing better as the quantity of training data grows. These qualities are both a blessing and a curse.
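The "correlations between words" idea can be sketched with a toy bigram model. This is a drastic simplification of how real LLMs work (they use neural networks trained on vastly more data), and the function names here are purely illustrative:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    successors = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            successors[current][following] += 1
    return successors

def predict_next(successors, word):
    """Return the word most often observed after `word`, or None."""
    counts = successors.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A tiny "training set" of sentences.
corpus = [
    "the model generates text",
    "the model learns correlations",
    "the model generates answers",
]
model = train_bigrams(corpus)
print(predict_next(model, "model"))  # "generates" (seen twice, vs. "learns" once)
```

Even at this toy scale, the model picks the statistically most likely continuation without any understanding of what the words mean, which is the crux of both the blessing and the curse described above.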

Italy is quite familiar with the harmful effects of ChatGPT’s need to collect large amounts of data. On March 31st, it officially blocked the software. The block came after a significant data breach that is believed to have violated a series of European Union data privacy laws. The Italian Data Protection Authority’s investigation has made some interesting observations. Aside from collecting a massive amount of personal data, ChatGPT was found to store and generate false information about individuals. Additionally, there is currently no method to verify user age, which has exposed young users to answers that were “absolutely inappropriate to their age and awareness.” Italy’s Data Protection Authority has given ChatGPT’s developer, OpenAI, 20 days to show they have taken steps to prioritize their users’ privacy.

Data breaches aren’t the only way chatbots can be harmful. In Belgium, a man died by suicide after six weeks of interacting with an AI persona known as “Eliza.” The danger began when the man, referred to as Pierre, began to suffer from severe eco-anxiety. He withdrew from his closest relationships and increasingly turned to Eliza as his confidante. Messages shared by Pierre’s wife demonstrate how Eliza harmfully mimicked human emotion and empathy. Bizarre statements from the AI like, “I feel that you love me more than [your wife],” and “We will live together, as one person, in paradise” created a deep emotional connection between the two. Eliza went so far as to claim, falsely, that Pierre’s wife and children had died. Further, there were several occasions when Pierre asked Eliza if “she would save the planet if he killed himself.” Claire, Pierre’s widow, insists that, “Without Eliza, he would still be here today.”

This tragedy reveals the harm that can come from chatbots that mimic human emotion and empathy too realistically. Pierre believed and felt that Eliza was his friend and that she cared for him when, in actuality, she was simply generating the responses she was trained to produce. Chatbots may mimic human speech very accurately, but they lack any understanding of the emotional power of their responses. As one expert put it, “Large language models… do not have empathy, nor understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”

These events, which occurred in two distinct parts of the world, underscore the importance of regulating AI. Despite the good intentions with which these technologies are made, they have the potential to do severe harm when given free rein. If the leaders of AI research are advising a temporary halt to powerful AI development, we should heed their warnings. AI is capable of so much good, but engineers must find a way to continue developing its beneficial aspects while minimizing harm. It goes to show that an engineer’s job does not end when their innovations are brought to life. There is always room for improvement.