Neil Clarke was overwhelmed. The editor and publisher of Clarkesworld, a prestigious online science fiction magazine, was drowning in submissions. The problem wasn’t the number of stories being sent in; it was that most of them had been written by AI tools like ChatGPT instead of by human authors.
Clarke, who lives in New Jersey, had spent most of the previous week weeding out the spammy submissions, but they were now pouring in faster than he could keep up. On Monday, after getting more than 50 AI-generated stories before noon, he did something he had never done before in Clarkesworld’s 17-year history: He closed new submissions indefinitely.
It got to the point, Clarke said, where “I was dreading opening the submissions system. I decided that the only course of action was to close for some period of time while we dealt with real submissions and figured out how to live in whatever this new world was going to be.” (Clarkesworld will reopen submissions “probably some time next month,” according to the magazine’s Twitter account.)
The recent explosion of generative AI — tools and services that can generate any kind of text or image with a simple prompt — has birthed an online ecosystem of hustlers dishing out dubious advice on how to use online apps like ChatGPT and Midjourney to get rich quick. According to a Reuters report, Amazon is being flooded with books written entirely by AI.
A similar phenomenon was at play at Clarkesworld. The surge of AI-penned submissions was “coming from outside our community,” Clarke said, adding, “These are not people that were trying to legitimately submit fiction to us. These are the people who are trying to make money on the side hustle. They’re listening to all these experts on TikTok and YouTube that say, ‘Hey, you can make some money, just pop this into ChatGPT and then submit the text to this list of sites.’”
Clarke doesn’t want to disclose exactly how he can tell that a submission is AI-generated, mostly to avoid helping future spammers fool him. But, he said, it’s not that hard. “With a lot of them, I can tell on the first page,” he said. He looks for patterns, weird words, and odd sentence constructions, he said. “Sometimes, there’s a difference in rhythm, and there are some serious tells, like a bunch of submissions with the same title generated by an AI program.”
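One of the tells Clarke names, a batch of submissions carrying the same AI-generated title, is simple to check mechanically. As a minimal sketch (the data shape and function name here are hypothetical illustrations, not Clarkesworld's actual system):

```python
from collections import Counter

def flag_duplicate_titles(submissions):
    """Flag titles that appear more than once in a submission batch.

    `submissions` is a list of dicts with a "title" key -- a hypothetical
    shape chosen for illustration; real submission systems differ.
    """
    # Normalize titles so trivial variations (case, whitespace) still match.
    counts = Counter(s["title"].strip().lower() for s in submissions)
    return {title for title, n in counts.items() if n > 1}
```

A batch containing two near-identical titles would be surfaced for a closer human look; the other tells Clarke describes, like rhythm and odd constructions, resist this kind of simple automation.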
His methods aren’t foolproof, however, he admitted, and sometimes legitimate submissions get flagged as machine-generated, too. “There are always exceptions,” Clarke sighed.
He’s tried using AI-detection tools, but has found them lacking. (A detector released by OpenAI, the company behind ChatGPT, works only about 1 in 4 times.) He said unfamiliar turns of phrase in submissions from authors based outside the US whose first language isn’t English can sometimes trip up such tools. “There’s an inherent bias in these detectors,” Clarke said.
Clarke thinks that the rapid advances in AI over the next few years will make such detection tools totally ineffective. “AI is going to be writing at such a level that you won’t be able to detect it against a normal human,” he said.
At least one person responsible for creating generative AI tools shares Clarke’s concerns. Amit Gupta is the cofounder of Sudowrite, an AI tool for writers that helps with edits, generates plot ideas, and completes entire sentences and paragraphs. In an interview with BuzzFeed News, Gupta, who is also a sci-fi author and has submitted to Clarkesworld multiple times in the past, said that what the magazine was going through was “terrible” and “really disappointing.”
He said that something like ChatGPT, which generates large blocks of text from scratch, would be a better tool to generate sci-fi submissions than Sudowrite, which is mostly used for stories that are already in the process of being written. He pointed out that Sudowrite caps the number of stories you can create using the tool in a single day. “But if you just came and wrote like three stories each day, I don’t think we can stop that use case,” Gupta said. “That feels too much of a gray area between legitimate and illegitimate use.”
Clarke called the entire field of generative AI “an ethical and legal gray area.”
“Who owns these [submitted] works?” he asked. “If I buy one of them, who am I paying? The person didn’t write it. The chatbot doesn’t own it.” He also pointed out the lack of transparency in the data that these tools are trained on. “Look at what’s happening in the art world,” he said, referring to a case in which a trio of artists sued the makers of popular AI image generators, claiming that the tools had been trained on their art without their permission.
But ultimately, Clarke said, the real issue isn’t how good or bad the text generated by AI tools is. The problem is their speed. “We were being buried,” he said. “I never expected a bunch of side hustle gurus to take out our submission system.” Meanwhile, he said, “The irony of being a magazine that publishes sci-fi that is flooded with stories written by AI isn’t lost on me.”