9/19/2022
In recent months, AI-generated images have gained traction as programs such as DALL-E 2 and Craiyon have advanced technologically. These programs are machine-learning models trained on enormous sets of images and text scraped from the internet, which they use to generate an image from a text prompt. Craiyon (formerly DALL-E mini) has become popular because it is easy and free to use; users can receive an image output for a wide variety of prompts in less than two minutes. Viral Twitter threads and subreddits have emerged around exceedingly absurd text prompts and their associated AI images, many of which match their prompts with surprising accuracy.
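For a sense of how these tools are driven, here is a minimal sketch of text-to-image generation using the open-source Stable Diffusion model through Hugging Face's diffusers library. The model name, prompt, and file name are illustrative; Craiyon and DALL-E 2 run their own models and interfaces, not this exact code.

```python
# Minimal text-to-image sketch using the open-source diffusers library.
# Model, prompt, and output path are illustrative; Craiyon and DALL-E 2
# use their own (different) models and serving infrastructure.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # a publicly released text-to-image model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is far faster on a GPU

# The pipeline turns a text prompt into an image, typically in under a
# minute on consumer GPU hardware.
image = pipe("a raccoon astronaut painted in the style of Vermeer").images[0]
image.save("output.png")
```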
While Craiyon’s images are not as realistic as those produced by pricier machine-learning models, there is concern that further public development of this technology could increase the spread of misinformation online. A few weeks ago, the winning submission in the digital art category of the Colorado State Fair Fine Arts competition was an image generated with the open-beta AI MidJourney. While the submission did not violate competition rules, the artist, Jason Allen, drew heavy criticism for entering the piece. Allen understands the controversy but holds that AI functions as a tool within digital art, reporting that he spent over 80 hours developing the piece. Other artists, however, have since generated similar pieces and claimed the process took under 10 minutes. The controversy calls into question how originality and copyright are defined in art. Ultimately, this type of art is a composition of millions of existing data points, so it could be argued that AI-generated art lacks originality. Nevertheless, the overall images created with these models are novel, and many of them, including Allen’s piece, are particularly striking.
Aside from submitting AI art to competitions, many AI artists already sell and experiment with their work online. Last week, AI artist Supercomposite discovered an unnerving presence in her AI-generated work. In several instances, she noticed the recurring figure of an older woman, usually in a gory or horror-type setting, whom she named “Loab”. Loab was also persistent, continuing to appear in composite images even after many rounds of combination with other pre-generated images, as sketched below. While Loab makes for a chilling internet ghost story, it illustrates a real problem with these machine-learning image models: many of them suffer from bias. Craiyon’s FAQ states that the model may “exacerbate societal biases” and perpetuate “harmful stereotypes” because of its unfiltered dataset. These biases can also reflect societal and pop-culture trends. Loab is not necessarily harmful; her persistence and entwinement with horror imagery more likely reflect horror-movie casting and tropes in the training data than a spirit haunting a computer. Phenomena like Loab, however, may inadvertently cause harm in the form of racial or sexist stereotypes.
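The “combinations” behind Loab amount to feeding an existing image back into a model alongside a new prompt. Supercomposite never named the tool she used, so the following is only a rough illustration of that general image-to-image technique, using the same assumed diffusers setup as above; the strength parameter controls how far the output drifts from the input image.

```python
# Illustrative image-to-image sketch; the actual model and tool behind
# the Loab images were never disclosed, so this only demonstrates the
# general technique of blending a source image with a new prompt.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Start from a previously generated image (hypothetical file name).
init_image = Image.open("previous_output.png").convert("RGB").resize((512, 512))

# strength in [0, 1]: low values stay close to init_image, high values
# let the prompt dominate. A persistent feature like Loab can survive
# many such rounds of re-generation.
result = pipe(
    prompt="a serene landscape",
    image=init_image,
    strength=0.6,
).images[0]
result.save("combined.png")
```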
It is likely that, in the near future, new rules will be implemented to prevent or strictly categorize AI-generated submissions in art competitions. It will be much harder, however, to control the categorization of AI work online. Similar machine-learning models already exist to generate text, and the technology may soon be advanced enough to generate full videos and audio tracks. Overall, the unfiltered and unmonitored nature of these models can make them harmful, and internet users may be left to determine for themselves whether a given piece of media is genuine.