Abstract
CGI and other visual effects have allowed filmmakers and game designers to create visually stunning movies and characters that were previously impossible. However, with the rise of this revolutionary and ultra-realistic technology comes significant potential for misuse. This article discusses the background of CGI and visual effects, their application in the film and entertainment industries, and the possible ways they can be exploited. It also addresses the negative implications of this technology and suggests possible solutions and mitigation strategies.
Introduction
With the rise of Computer Generated Imagery (CGI) and Artificial Intelligence (AI), Visual Effects (VFX) for films have become increasingly computerized, transforming a concrete backdrop into a prehistoric jungle, a New York skyline, or virtually any scene imaginable. CGI is a type of VFX that uses computers to create 2-dimensional and 3-dimensional images. Although CGI is popular in the film industry, it is also used in the gaming, art, simulation, architecture, advertising, and news industries. In the past decade, AI has become more integrated into the creation of graphics, allowing for faster and more hands-off creation of CGI, also known as a “deepfake” [1]. Deepfakes allow for the imitation of a person’s NILV: Name, Image, Likeness, and Voice [2].
CGI can also create synthespians. A portmanteau of “synthetic” and “thespian,” synthespians are, as the name suggests, synthetic humans: 3-dimensional characters generated by computers, born through a masterful combination of artistry, biology, and engineering, most often for computer animation. This newfound power to recreate lifelike humans has unlocked the potential to resurrect the dead or impersonate the living, manipulating them like puppets. Deepfakes allow a person’s face to be replaced with another’s. While this is exciting and can serve many beneficial purposes, some will inevitably misuse it. Therefore, although CGI can create stunning non-human visuals, the majority of its uses for human characters are unethical because they mislead audiences and disadvantage actors.
The Institute for Creative Technologies at the University of Southern California can use AI to create a realistic, interactive, moving, speaking, and emoting 3-dimensional character based on a human subject within 20 minutes. AI makes this process virtually automated [3]. In recent years, several deepfake and CGI-based AI models have been developed, such as Synthesia [4] and Runway Gen-3 Alpha [5]. These models can create deepfakes, synthespians, and other realistic modifications. These AI models are incredibly easy to use, as anyone with access to an internet-connected device has the power to manipulate reality. Now that the code for creating synthespians has been cracked, the dilemma of whether it should be pursued arises.
Believable CGI: Crossing The Uncanny Valley
As the film industry continues to develop CG characters, the Uncanny Valley concept suggests that people retain a sense of what is or isn’t human. The concept, introduced in 1970 by Masahiro Mori, a Japanese robotics professor, describes how humans react to robots [6]. Up to a point, we tend to like robots becoming more human-like. However, once the Uncanny Valley is reached, when a robot or CG character becomes realistic enough, feelings of discomfort and repulsion may emerge. These emotions interfere with people’s ability to connect with the robots and see them as human. Instead, we see a robot that is almost human but not quite, often resulting in feelings of dread [6, 7].
The interesting thing about the Uncanny Valley is that it forms a barrier, protecting and distancing people from increasingly realistic synthespians and robots. Now that the Valley has been crossed with movies like Avengers: Infinity War (2018), CGI-created fantastical characters are extremely common. Crossing the Uncanny Valley to create realistic non-human characters is a wonderful idea. CGI has made movies like The Lion King (2019) and the Planet of the Apes series possible. CGI invites creative filmmaking, allowing for the realistic portrayal of animals that are real, extinct, mythical, or entirely imaginary. AI makes this process simpler and faster.
On the other hand, crossing the Uncanny Valley to replicate a real actor is problematic and hinders the audience’s ability to connect emotionally with the actor and their performance. Movie lovers, also known as cinephiles, closely follow their favorite actors and filmmakers, watching everything from their films to behind-the-scenes footage, interviews, and red-carpet premieres. These are all opportunities for emotional connection between the filmmakers and their audience. Simply relying on scans of an actor’s face and body to create a performance removes the opportunity for this relationship to develop. Not only does this diminish the audience’s investment in the film, but it can also repel them, as a simple computer program can be perceived as artificial and unappealing. A survey on audience interest in AI-generated synthespians found that only 30 percent of respondents were interested in an AI-generated version of a real celebrity [2].
Tom Cruise’s Mission: Impossible movies are phenomenal because of their minimal use of CGI. The multi-millionaire could easily opt for hyper-realistic graphics and cut the amount of work by 90 percent [8]. Instead, his dedication to the craft shows in his willingness to risk his life, from parachute jumps at 25,000 feet to scaling the exterior of the Burj Khalifa [9]. The precise choreography of his stunts makes the movies come alive. Replacing this artistry with a few clicks on a keyboard would be a pitiful trade. Not only would it take away the jobs of stunt professionals, but it would also shrink a robust film crew of thousands down to a third of its size [10].
The Future of VFX
In the future, the development of CGI will blur the line between reality and simulation. Take James Cameron’s 2009 and 2022 Avatar movies, for example. The actors were shot with motion capture cameras to animate their virtual counterparts, the Na’vi, through advanced VFX. These movies blend live-action and animation seamlessly, raising the question of what is real and what is not. This will be a prevalent matter in the film industry. CGI, VFX, and deepfakes create doubt about whether something shown on screen actually happened or was manipulated to appear like it did [11].
CGI is meant to go unnoticed; it should feel natural in the scene. Because audiences can’t tell the difference between what is real and what is not, it does not matter whether an inanimate object exists on set. In fact, this type of CGI should be encouraged to create new and interesting visuals. For example, Interstellar (2014) contains otherworldly environments that would not be possible to film without CGI.
CGI heavily depends on computers and reduces the need for real props, settings, and locations. This has caused a shift from real elements to digital elements being used in movies, especially for fantastical genres like science fiction and fantasy, such as the Harry Potter series, Blade Runner (1982), Blade Runner 2049 (2017), and Wicked (2024). Based on a 2023 study by researchers at Huaqiao University, CGI movies also tend to perform better at the box office: on average, CGI films have a 7.83 times higher box office value than non-CGI films [12]. While there has been extensive use of it to create landscapes, objects, and animals, there is untapped potential in using CGI to create or modify humans. Animal and human-focused development of CGI can be expected in future films.
Face & Body Scanning
Detailed face and body scans provide new filmmaking opportunities. Face scans are obtained in a light stage: the actor stands in the center of a dome-shaped rig, surrounded by cameras and lights. The lights illuminate their face from all sides to eliminate shadows while the cameras capture every angle. Face scans provide human references for inspiration when creating digital avatars for games and films. With AI, this elaborate process has been simplified [13].
Studios looking to reduce the costs of filmmaking benefit greatly from these scans. A scan of an actor’s face and body is all that is needed to place them in an entire movie, eliminating the need to provide food, breaks, or healthcare. The biggest benefit is that once a studio has the digital scan, using it costs nothing more and can be repeated without limit; compared to a living, breathing human, these virtual actors are a lower-maintenance option [14]. Companies such as Waymark and Private Island are at the forefront of these new filmmaking techniques. Among many others, The Frost, a 12-minute short film made with AI, reveals how easily actors can be replaced by technology [15]. This is stoking fear in the acting industry and may foreshadow the end of the acting profession.
The Actor’s Take
After the COVID-19 pandemic, 50 percent of actors lost full-time jobs in entertainment, 76 percent reported loss of income, and 22 percent said they did not know when they would return to work [16]. The pandemic had already forced a switch from in-person to virtual casting calls, which removed geographic barriers and increased competition for acting roles. There is already high competition in the acting field, with around 90 percent of actors being unemployed and only 2 percent able to make a living out of acting [17]. Using AI to replace actors creates more inequality and instability within the acting community.
In 2023, members of SAG-AFTRA (the Screen Actors Guild–American Federation of Television and Radio Artists), including actors living on or below the poverty line, went on strike to protest the industry’s use of AI [13]. For 118 days, the members refused to work for the studios as a form of protest. The resulting agreements require studios to obtain an actor’s consent before scanning them to create a digital twin, and to pay the actor at least a day’s wages for the scan [14].
Although some agreement is better than none, the compensation offered to actors remains unfair. A daily film contract pays roughly $1,500 per day, while a lead role in a full movie pays at least $65,000 [18]; a single day’s wage for a scan is significantly less than the typical earnings from a film. Increasing the use of AI and CGI is immoral because it makes actors’ already challenging careers even more competitive. The public largely sided with the actors: 67 percent of US voters supported the strikes, while only 18 percent were unsupportive. SAG-AFTRA’s agreement with studios requires them to notify the union each time they intend to use a synthetic performer and to consider casting real actors instead [14]. This agreement supports actors’ right to work.
If actors can’t act anymore, they won’t be happy. Despite the economic benefits of using AI-generated scans of actors, real humans should be the ones in front of the camera. Artistic professions differ from technical professions: art is grounded in emotion and human experience. While CG actors can perform convincingly, they are simply imitating real humans. Therefore, in artistic professions, human performers should be prioritized over artificial ones.
Misuse of Deepfakes
Deepfakes are very useful for Visual Effects (VFX): they polish the final product and enhance the visual appearance of actors, and because they rely on AI, they also speed up the editing process. However, deepfakes can be misused in several serious ways, such as creating videos of individuals without their consent, fabricating events that never occurred (which could be dangerously presented as false evidence in court), manipulating identities, and spreading misinformation. To prevent such abuses, there is wide consensus on the need for education in this field, especially for filmmakers, who are likely to use this technology even more in the future. Education would ensure that the creators of popular and influential films consider the implications of their power to influence the general public and to create scenarios that were never before possible, such as bringing dead actors back to life. There is also a need for laws requiring the filmmaker or deepfake creator to acquire the subject’s consent and to ensure that the final product does not misrepresent any person or situation [19]. The SAG-AFTRA agreement meets many of these needs and provides some stability for the hazy future.
While deepfakes promise fun and exciting innovations, they also introduce the possibility of privacy invasion and impersonation. In 2019, a YouTube video titled “Keanu Reeves Stops A ROBBERY!,” showing a hyper-realistic Keanu Reeves, went viral. Frighteningly, it was not actually Keanu Reeves in the video but Reuben Langdon, a voice actor and stuntman who was transformed into the famous actor with Faceswap (an open-source, AI-assisted deepfake generation software) and Adobe After Effects (a visual effects program) [20].
The creation of deepfakes prompts a critical question: Should one person have the power to determine what another person does, wears, or says? Everyone should have the freedom to be in control of their own bodies and lives. Therefore, the only time that there is absolutely nothing wrong with a deepfake is when a person is creating a deepfake of themselves. In all other scenarios, someone is impersonating someone else, essentially treating them as an instrument and not as a person. Without complete consent from the subject being impersonated, this action is immoral and should not be permitted.
As per the wishes of actors from the SAG-AFTRA union, rules must be set to ensure ethical use of computer-generated (CG) humans. When real people are recreated into synthespians, CGI artists are essentially manipulating their bodies. The actor turns into a puppet on screen. Because this person’s body can be manipulated in ways that might not be agreeable to them, the only ethical approach is to obtain consent: the CGI artist must obtain explicit directions for what they are allowed to do with the actor’s digital counterpart. The 2023 actors’ strike demanded that such a contract be drafted among studios. The SAG-AFTRA union negotiated rules requiring CGI artists to be “reasonably specific” with the intended use of the actor’s digital twin [14].
Digital Resurrection in Media
AI can create Thanabots, also known as Deadbots, allowing for the digital resurrection of deceased people. This computer-facilitated resurrection can be traced back to 2005, when Hanson Robotics recreated the head of Philip K. Dick, the science fiction author who died in 1982. The head was attached to a robotic body that could converse with humans, replicating Dick’s personality through his interviews and writings. Not only was this experiment entertaining, but it also allowed the world to continue engaging with the author, even though he himself was no longer present [21].
Rogue One: A Star Wars Story (2016) digitally resurrected Peter Cushing’s character, originally seen in the 1977 movie. Although it was an impressive feat, audiences found it unsettling to see an actor who died in 1994 in a 2016 movie, especially because the CGI failed to cross the Uncanny Valley.
A similar technique was used under different circumstances in Furious 7, after Paul Walker died in a car accident partway through production. In this case, digital resurrection was used not to revive a distant memory of an actor, but to finish a project already in progress. The filmmakers called in his brothers, Cody and Caleb Walker, to stand in while his remaining scenes were shot, and Weta Digital, a CGI and VFX company, digitally replaced the brothers’ faces with the actor’s [7]. Here, it is likely that Walker would have supported completing the movie, considering he had signed a contract and was actively acting in it.
In general, while such technology benefits the public by increasing accessibility to information, it also raises ethical concerns. Reviving an approximated version of Philip K. Dick or Peter Cushing is problematic, especially because neither could have anticipated the possibility; and since consent can no longer be obtained, doing so may violate each man’s rights to his own body and likeness. Central questions remain: Is the artificially recreated version truly accurate to the real person? And would the real person have approved of this use? There is no way to guarantee the answers, and therefore no guarantee of the authenticity or ethicality of these bots.
An ethically acceptable solution from now on could involve actors formally documenting their consent to digital resurrection, whether as a clause in their will or through a separate lifetime agreement specifying how their likeness may be used. Another proposition is to obtain permission from family members to use the person’s likeness [22].
Furthermore, reusing actors who have passed away reduces spots for incoming actors to have a chance at performing in films. If resurrected CGI actors become as realistic as their human counterparts, it will be tough for audiences to let go of their favorite stars. This is not fair for aspiring actors who do not have the reputation or time that established actors do.
The cases discussed above prompted serious consideration of these ethical concerns, especially as technology has continued to advance and make digital resurrection more feasible. As a result, the issue became a major topic of discussion during the 2023 SAG-AFTRA strike. The resulting agreement allows studios to continue using an actor’s digital replica after death once they obtain consent. If consent is needed for a replica of an already deceased actor, it can be obtained from an authorized representative or from the union [23]. This rule is important in protecting actors’ rights to their own digital images and makes the use of digital replicas more ethical. However, once an actor has given consent to the studio, their image can be reused for the specified purpose endlessly after their death, possibly in ways they would not agree to if they were there in person [23].
Conclusion
The ability to create, design, and control another human’s likeness allows us to play God. It is now up to us to decide whether we are capable of handling this power with dignity and good moral judgment. The SAG-AFTRA agreement lays a good foundation for rules governing deepfakes and CGI. The most important step is obtaining consent from the person being impersonated. In the case of deceased people, the most morally sound action is none at all: it is best to let the dead stay dead.
About the Author: Kariena Panpaliya is an actress, director, and film editor pursuing a dual degree in Film & TV Production and Computer Science Games at the University of Southern California. She is interested in discovering ethical ways technology can make films better, and make better films.
By Kariena Panpaliya, Viterbi School of Engineering, University of Southern California
References
[1] J. Straub, “Filmmaking 101: What is CGI in movies and animation?,” Boords. https://boords.com/blog/filmmaking-101-what-is-cgi-in-movies-and-animation (accessed Nov. 13, 2024).
[2] A. Schomer, “Generative AI, celebrity deepfakes & digital replicas: A special report,” Variety, vol. 8, no. 1, pp. 1–25, Dec. 2024.
[3] J. Stadler, “Synthetic beings and synthespian ethics: Embodiment technologies in science/fiction,” Projections (New York, N.Y.), vol. 13, no. 2, pp. 123–141, 2019, doi: 10.3167/proj.2019.130207.
[4] M. Rebelo, “The 15 best AI video generators in 2024,” Zapier. https://zapier.com/blog/best-ai-video-generator/ (accessed Dec. 11, 2024).
[5] I. Vrtaric, “What is Runway Gen-3 Alpha? How it works, use cases, alternatives & more,” DataCamp. https://www.datacamp.com/blog/what-is-runway-gen-3 (accessed Dec. 12, 2024).
[6] R. Pütten et al., “Neural mechanisms for accepting and rejecting artificial social partners in the uncanny valley,” Journal of Neuroscience, vol. 39, no. 20, pp. 3897–3908, 2019, doi: 10.1523/JNEUROSCI.2956-18.2019.
[7] R. Haridy, “Star Wars: Rogue One and Hollywood’s trip through the uncanny valley,” New Atlas. https://newatlas.com/star-wars-rogue-one-uncanny-valley-hollywood/47008/ (accessed Nov. 13, 2024).
[8] “UK special effects guru Neil Corbould on staging key scenes from ‘Mission: Impossible – Dead Reckoning’ and ‘Napoleon’,” Screen International, London, Feb. 9, 2024.
[9] J. Guerrasio and E. Jacobs, “Tom Cruise climbs between two planes in the new ‘Mission: Impossible.’ Here are his best stunts, ranked,” Business Insider. https://www.businessinsider.com/tom-cruise-best-stunts-all-time-ranked-2022-5#1-cruise-did-106-skydives-with-a-broken-ankle-to-pull-off-the-halo-jump-in-mission-impossible-fallout-12 (accessed Nov. 4, 2025).
[10] S. Follows, “Do more film crew members work on-set or in post-production?,” Stephen Follows. https://stephenfollows.com/p/film-crew-work-on-set-or-in-post-production (accessed Nov. 12, 2024).
[11] W. R. Neuman, Evolutionary Intelligence: How Technology Will Make Us Smarter. Cambridge, MA: MIT Press, 2023. doi: 10.7551/mitpress/15049.001.0001
[12] Z. Sun, “What does CGI digital technology bring to the sustainable development of animated films?,” Sustainability, vol. 15, no. 14, Art. no. 10895, 2023, doi: 10.3390/su151410895.
[13] L. Leffer, “Can AI replace actors? Here’s how digital double tech works,” Scientific American. https://www.scientificamerican.com/article/can-ai-replace-actors-heres-how-digital-double-tech-works/ (accessed Nov. 13, 2024).
[14] J. Schuhrke, “Lights, camera, collective action: Assessing the 2023 SAG-AFTRA strike,” New Labor Forum, vol. 33, no. 2, pp. 56–64, 2024, doi: 10.1177/10957960241245445.
[15] W. D. Heaven, “Welcome to the new surreal: How AI-generated video is changing film,” MIT Technology Review. https://www.technologyreview.com/2023/06/01/1073858/surreal-ai-generative-video-changing-film/ (accessed Nov. 13, 2024).
[16] S. K. Brooks and S. S. Patel, “Challenges and opportunities experienced by performing artists during COVID-19 lockdown: Scoping review,” Social Sciences & Humanities Open, vol. 6, no. 1, Art. no. 100297, 2022, doi: 10.1016/j.ssaho.2022.100297.
[17] O. E. Williams, L. Lacasa, and V. Latora, “Quantifying and predicting success in show business,” Nature Communications, vol. 10, no. 1, Art. no. 2256, 2019, doi: 10.1038/s41467-019-10213-0.
[18] “How much do actors get paid per movie?,” Backstage. https://www.backstage.com/magazine/article/how-much-money-actors-make-for-films-68589/ (accessed Dec. 13, 2024).
[19] P. Rathi, S. K. Budhani, G. M. Upadhyay, P. Vats, R. Kaur, and A. K. Saini, “Unmasking deepfakes: Understanding the technology, risks, and countermeasures,” in 2024 4th International Conference on Innovative Practices in Technology and Management (ICIPTM), IEEE, 2024, pp. 1–6. doi: 10.1109/ICIPTM59628.2024.10563353
[20] L. Bode, “Deepfaking Keanu: YouTube deepfakes, platform visual effects, and the complexity of reception,” Convergence, vol. 27, no. 4, pp. 919–934, 2021, doi: 10.1177/13548565211030454.
[21] “Philip K Dick,” Hanson Robotics. https://www.hansonrobotics.com/philip-k-dick/ (accessed Nov. 13, 2024).
[22] S. R. Stroud and W. Cuellar, “Faking it: The ethical challenges of computer-generated actors,” The University of Texas at Austin Center for Media Engagement. https://mediaengagement.org/research/the-ethics-of-computer-generated-actors/ (accessed Nov. 12, 2024).
[23] SAG-AFTRA, SAG-AFTRA TV/ Theatrical Contracts 2023 Summary of Tentative Agreement, Los Angeles, 2023.
Further Reading
https://www.scientificamerican.com/article/can-ai-replace-actors-heres-how-digital-double-tech-works
https://www.technologyreview.com/2023/06/01/1073858/surreal-ai-generative-video-changing-film/
https://deadline.com/2023/02/keanu-reeves-calls-out-scary-deepfakes-ai-technology-1235260761/
