Last year, Kelly McKernan, a 36-year-old artist from Nashville who uses watercolor and acrylic gouache to create original illustrations for books, comics, and games, entered their name into the website Have I Been Trained. That’s when they learned that some of their artwork was used to train Stable Diffusion, the free AI model that lets anyone generate professional-quality images with a simple text prompt. It powers dozens of popular apps like Lensa.
“At first it was exciting and surreal,” McKernan wrote in a tweet that went viral in December.
That excitement, however, was short-lived. Anybody who used Stable Diffusion, McKernan realized, could now generate artwork in McKernan’s style simply by typing in their name. And at no point had anyone approached them to seek consent or offer compensation.
“This [tech] effectively destroys an entire career path made up of the most talented and passionate living artists today,” McKernan, a single mother who is currently working on a graphic novel anthology for the rock band Evanescence, told BuzzFeed News. “This development accelerates the scarcity of independent artists like me.”
Last week, McKernan became one of the three plaintiffs in a class-action lawsuit against Stability AI, the London-based company that co-developed Stable Diffusion; Midjourney, a San Francisco-based startup that uses Stable Diffusion to power text-based image creation; and DeviantArt, an online community for artists that now offers its own Stable Diffusion-powered generator called DreamUp.
The lawsuit was filed in San Francisco by lawyer Matthew Butterick along with the Joseph Saveri Law Firm, the same team that is currently suing Microsoft, GitHub, and OpenAI (creator of ChatGPT and image generator DALL-E 2) for making Copilot, an automatic code generator trained on existing code that was available online, without seeking permission from the engineers who wrote it. The two other plaintiffs in the AI art suit are Oregon-based cartoonist Sarah Andersen, 30, and San Francisco-based illustrator Karla Ortiz, 37.
The suit claims that Stable Diffusion was trained on billions of images scraped from the internet without consent, including images owned by this trio of artists. If generative AI products and services are allowed to operate, a press release by Saveri says, “the foreseeable result is that they will replace the very artists whose stolen works power these AI products with whom they are competing.”
Ortiz, a concept illustrator who has worked on video games and Hollywood blockbusters like Jurassic World and Doctor Strange, told BuzzFeed News that making art was “her happy place.” She added that she’s obsessed with technology.
In early 2021, Ortiz stumbled upon Disco Diffusion, an earlier text-to-image AI generator, and found out that the tool was capable of generating images in her style and in the styles of other artists she knew. “It felt invasive in a way that I have never experienced,” she said.
Concerned, she started organizing town halls around the topic with the Concept Artists Association, an organization for artists in the entertainment industry that she is on the board of. She also connected with machine-learning experts to understand the tech better and reached out to other artists. In November, she saw news of the Copilot suit and got in touch with Saveri about filing her own. The firm agreed.
In December, Ortiz saw McKernan’s viral tweet about generative AI, and an opinion piece that Andersen wrote in the New York Times about how members of the alt-right on 4chan had imitated her art style to create pro-Nazi comic strips. She reached out to the two immediately, and they both agreed to be a part of the lawsuit with her.
“Artists have a right to say what happens to their hard-earned works,” Andersen told BuzzFeed News over email. “It’s clear from the way AI generators rolled out that there was never any consideration given to artists, our wishes, or our rights, and this is our only option to be heard.”
The Concept Artists Association is currently fundraising to hire a lobbyist to protect creators against the march of generative AI.
“It’s gross to me,” Ortiz said about AI-powered apps and services that instantly spit out art based on a text prompt. “They trained these models with our work. They took away our right to decide whether we wanted to be a part of this or not.”
Artists around the world have expressed concerns that AI technology will make them redundant. In December, hundreds of artists uploaded a picture saying “No to AI Generated Images” to ArtStation, one of the largest art communities on the internet, after AI-generated art appeared on the website. A few months earlier, the art world was up in arms after the Colorado State Fair’s annual art competition for emerging digital artists awarded a blue ribbon to an entrant who created his work using Midjourney.
But last week’s lawsuit against Stable Diffusion, Midjourney, and DeviantArt is the first time that artists have challenged generative AI companies in court. Days after that lawsuit was filed, stock-image powerhouse Getty Images filed its own suit against Stability AI in London, claiming that the company “unlawfully copied and processed millions of images protected by copyright and the associated metadata” to train its AI model.
Midjourney and DeviantArt did not respond to requests for comment from BuzzFeed News about the lawsuits. A Stability AI spokesperson said that the company was still waiting for legal documents from Getty. As for the suit from the three artists, the spokesperson said, “Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”
The outcome of both lawsuits will have deep ramifications for the future of AI-generated artwork, as well as for creators, according to experts.
“If the artists or Getty were able to secure a broad victory, it would significantly undermine the viability of tools like Stable Diffusion,” said Jessica Fjeld, a lecturer on law at Harvard Law School and assistant director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. “On the other hand, if Stable Diffusion prevails and these uses are found to be non-infringing, either because there’s no reproduction or derivative work or because it’s fair use, the artists will lose access to a potentially valuable licensing market.”
Both the artists’ suit and Getty’s allege that the defendants violated copyright law by training their AI models on images scraped from around the web. In the past, though, scraping images or other content for training datasets has been considered “fair use” under US copyright law. In 2016, the Supreme Court turned down an appeal from authors who sued Google for scanning more than 20 million copyrighted books and indexing them for its Google Books website.
But generative AI, which creates its own output based on what it learns from the source material, is still a nascent technology, and so far, no court has weighed in on it. “It’s a little hard to predict how copyright infringement cases will come out in the field of generative AI, because the technology isn’t well understood,” Fjeld said.
CEOs of generative AI companies haven’t shied away from talking about the issues surrounding the tech they’re pushing. In an interview with Forbes, Midjourney CEO David Holz said that the dataset that the company trained its AI model on was “just a big scrape of the internet” and the company hadn’t sought consent from artists who owned the copyright to their work. “There isn’t really a way to get a hundred million images and know where they’re coming from,” Holz said.
Meanwhile, Stability AI CEO Emad Mostaque told CNN that art makes up “much less than 0.1%” of LAION-5B. The name refers to a dataset of nearly 6 billion images scraped off the web by the nonprofit organization LAION, which in turn were used to train Stable Diffusion. (Stability AI partially funded the nonprofit.)
“That’s still millions of images,” pointed out Jon Lam, a Vancouver-based concept artist who is not part of the trio’s lawsuit. “That’s millions of pieces of copyrighted images just floating out there that you’ve trained your AI on.”
Last month, Stability AI announced that artists who had concerns about their art being used to train AI models could opt out of the next version of Stable Diffusion, a statement that generated backlash from artists who felt that the default should be “opt-in.” Mostaque acknowledged that it was a “complex issue” and said that future models of Stable Diffusion will be trained on “fully licensed” images.
Some observers have criticized the artists’ lawsuit for getting several things wrong. The suit, for instance, describes Stable Diffusion as a “collage tool that remixes the copyrighted works of millions of artists whose work was used as training data.” It also claims that AI art models “store compressed copies of [copyright-protected] training images.”
Both statements are a matter of debate. AI art models do not retain the images they are trained on; instead, they store “patterns” learned from those images as numeric model parameters. Once a user types in a prompt, the model generates its own images from scratch, based on this math and guided by the text.
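The gap between storing training data and storing learned parameters can be seen in a deliberately simple sketch (a linear fit, not a diffusion model, and in no way Stable Diffusion itself): after training, only a handful of numbers survive, far fewer than the data points that shaped them.

```python
# Toy illustration: "training" distills 20,000 data values into 2 parameters.
# This is an ordinary least-squares fit, standing in for the general idea
# that a trained model keeps learned numbers, not the training set.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10_000, 1))                  # stand-in "training inputs"
y = 3.0 * x[:, 0] + 1.0 + rng.normal(scale=0.1, size=10_000)

# Fit y ≈ slope * x + intercept; the fitted model is just these two numbers.
A = np.hstack([x, np.ones((10_000, 1))])
params, *_ = np.linalg.lstsq(A, y, rcond=None)

print(params.size)      # 2 learned parameters
print(x.size + y.size)  # 20,000 training values, none of them kept
```

Whether the billions of parameters in a real diffusion model amount to “compressed copies” of its billions of training images is exactly the question the courts will have to weigh.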
“The [artists] allege that [the way Stable Diffusion operates] is an additional infringement of the reproduction right,” Fjeld said, “but I’m not at all sure they’re right about that. If what the algorithm captures is data about the image, it will be much harder to construe as copying. The right to create derivative works will concern the outputs: Do they resemble or draw from the source works in an identifiable way?”
But others said that a legal challenge, however flawed, was “still trouble” for the AI companies. “Anything the companies say to defend themselves will be used against them,” tweeted Alex Champandard, founder of Creative.ai, a European company that designs software for creative industries. “If it looks like [the] defendants [are] going to lose, or if it means incriminating evidence will end up on the record, it’ll probably get settled out of court.”
Some people have asked why the three artists haven’t included OpenAI, whose DALL-E 2 image generator set the internet on fire last year. Travis Manfredi, one of Saveri’s lawyers representing the artists, told BuzzFeed News that the firm had excluded OpenAI because DALL-E 2 doesn’t use Stable Diffusion. “We don’t have as much information about their training datasets because they don’t use LAION,” he said. When asked why the firm wasn’t suing LAION, Manfredi said that it was “still investigating the full nature of the relationship between Stability AI and LAION.”
Meanwhile, there are trolls to contend with. Ever since news of the lawsuit went public, McKernan’s inbox has been flooded with hate mail from “AI bros.” Some of them called McKernan and other artists like them “Luddites.” Some said that AI is going to replace them and told them to go get “real jobs.” Others said that their work wasn’t original in the first place and that they deserved to lose to AI.
“They’re inane rants,” McKernan said. “I don’t read them. It’s a waste of my time. They sound like toddlers mad that their new toy is being threatened.” ●
Correction: The extent to which generative AI models rely on the actual images they are trained on to create new ones when prompted is a matter of debate. An earlier version of this story suggested it was more conclusive.