How the WGA Strike Could Accelerate AI Regulation

9/11/2023

The 2023 Writers Guild of America strike has lasted over four months, with SAG-AFTRA joining the writers for almost two months. Despite a few attempts at negotiation and studios like Warner Bros. Discovery reporting up to $500 million in losses, there is seemingly no agreement in sight. The writers’ and actors’ shared demands include increased minimum compensation and residuals from streaming, especially since streaming viewership outpaced cable TV viewership last year. However, compensation is not the only issue on the docket; writers and actors are also concerned about job security, especially in a new age of artificial intelligence. As AI expands into multiple industries while regulation remains in its infancy in the United States, protections won by writers and actors could set a baseline for all creators.

One key WGA demand during the strike has been a minimum hiring quota for pre-greenlight and post-greenlight TV writers’ rooms. Currently, writers generate most TV scripts in mini pre-greenlight rooms, where a handful of writers develop as much of the script as possible over a few weeks for minimal pay. If a studio greenlights the show, the showrunner is then tasked with producing the pre-written episodes and picking up where the writers left off, with no input from the original pre-greenlight team. Some writers see this structure as a way to hire as few writers as possible rather than paying a full staff for the duration of a show. AI text generators like ChatGPT could exacerbate the problem by absorbing the bulk of the scriptwriting work and cutting further into already meager compensation. Fortunately, existing AI models are not yet up to par with a real writers’ room: several writers who have experimented with text generators found that the models still required extensive human input and editing. AI-generated scripts also tended to be more clichéd than human writing. This lack of nuance is an example of large language model bias, in which common tropes or patterns from the training materials are amplified by the model. These clichéd scripts are likely not as harmful as the explicit racial or gendered stereotypes that unregulated text models often produce, but studios could still cut back even further on hiring writers if AI technology continues to advance without protections.

Using AI for writing also muddies the ownership of script ideas. If AI-generated text is a reconfiguration of its training materials, do the authors of those materials benefit from a successful script? Programs like ChatGPT do not cite sources for their responses, and unless the output directly quotes a source, intellectual property law does not require attribution. ChatGPT’s training data includes roughly 570 GB of text from books, web pages, and other sources, so tracking down origins by hand is impossible. This legal ambiguity around AI content ownership could prevent writers from gaining credit, exposure, and payment for their work.

Aside from generating text, AI can also be applied to visuals in ways that threaten actors’ job security. Digital double technology originated with CGI and motion capture, which were used to create believable replicas of an actor on screen and in video games. The introduction of AI has accelerated digital double creation, making it easier for productions to superimpose actors’ faces onto body doubles and digitally de-age actors. However, actors in both large and small roles worry that studios will reuse their digital counterparts rather than rehire them for future roles. Studios have promised to obtain actors’ consent before using their doubles in other projects, but the pressure to find work, especially for background actors, may push performers to sign away the rights to their image.

Ultimately, AI is not the only contested issue in the writers’ and actors’ strikes. However, successful strike negotiations could set a precedent for how AI should be regulated in creative fields. Those protections could also extend to other countries, such as South Korea and India, where streaming services are looking to invest in creators and new content.