A banner image with the text "Sea Moss Spotlight: Bots," overlaid on a red-and-blue-lit image of CPU heat sinks.
Photo by Michael Dziedzic on Unsplash

Assistant Editor Lily Davenport: It’s been a slower season for the CR fiction team this month, as we read through the queue and pass stories around for further discussion. (Of course, it’s a busy time in other ways: planning for AWP and for UC’s upcoming visiting-writer events, the ongoing website redesign, and embarking on projects with our new undergraduate intern.) And, as is often the case when I spend a lot of time with the queue, I’ve been thinking about the ways in which AI proliferation continues to alter our professional landscape as writers and editors.

The CR doesn’t yet have a policy excluding submissions produced or abetted by large language models (LLMs); I remain grateful to Matt O’Keefe and to our team of volunteer editors, who conduct initial reads of our entire submission queue, as their efforts have shielded me thus far from obvious AI material. (I’d like to think that I’m capable of detecting, and summarily dismissing, any fiction produced in this way—but maybe that’s just a squishy, self-important, human thought, characteristic of my squishy and self-important human brain.) As a culture, we’re now two years out from Clarkesworld’s famous temporary submissions closure, which followed a massive spike in obviously AI-produced submissions; LLM technology continues to grow more sophisticated, as do technological efforts to detect its usage, and predictive-text AI tools have cropped up everywhere from my Outlook composition tool to my search-engine results, like the fruiting bodies of a disgusting but hardy fungus.

While I’m not terribly concerned about the advent of a Great Automatic Grammatizor, I do worry about a future in which publishers attempt to cut costs by using a souped-up Grammarly instead of a human copyediting team. On a whim (just kidding, on a deadline!) I turned to the Manual to see what wisdom the new edition might have on the subject. This is the third installment of this column to cover the latest—eighteenth!—edition of the Manual. The new edition debuted last September in print and online, with a shiny new color scheme and well over 150 changes significant enough to merit specific attention in the cheat sheet published on the website. This time around, I’ll be highlighting the nods to an area that, in the seventeenth edition, wasn’t yet a concern: the presence of AI in the editing and publishing world. 

There is no unified section in the Manual that addresses AI’s impact on the writing, editing, and publishing industries. Instead, references to AI have crept in piecemeal, often as addenda to entries focused on other matters. This comes as something of a surprise to me, since CMOS has, in the past, employed a certain level of skepticism when making reference to novel technological tools (or, for that matter, to the corporate interests that popularize them); the seventeenth edition begrudgingly allowed the capitalization of trademarks (8.153), apps (7.76), and file names (7.79), but generally discouraged the use of terms that would require movement away from its capitalization-light style.

Anecdotally, I’d argue that a certain distaste for technology-specific editorial requirements (apart, of course, from keeping up to date on Microsoft Word and its uses) was part of the culture surrounding the Manual at the time; during my stint as an adjunct instructor for the University of Chicago’s copyediting certificate program, I inherited a set of slides from another instructor that coached students to “beware of 7.79’s font/typeface MADNESS!” I don’t detect the same tone in the current sections dealing with AI, though perhaps its relegation to the fringes of the eighteenth edition amounts to the same thing.

The first mention of AI appears in 3.38, “Crediting Adapted Material,” following the main entry text indicating that authors adapting illustrations or figures produced by others should provide an attribution line: “If the illustration was created by or with the help of artificial intelligence (AI), that fact should be noted in the credit.” In a related vein, 14.112, “Citing AI-Generated Content,” which remains the sole entry devoted entirely to an AI-related topic, provides guidance for citing text generated by LLMs. This entry insists on authorial disclosure where AI has been incorporated into the composition or revision processes: anyone employing ChatGPT or similar models “must make it clear how the tool has been used.” Both entries acknowledge the reality that, in many ways, the AI ship has sailed, and we in the editing world will be living with the consequences for the foreseeable future; I think of them as a kind of editorial harm reduction, offering a framework for disclosure rather than attempting to enforce a ban that would widely be ignored.

The other three mentions of AI all occur in chapter four, which deals with rights, permissions, and copyright. Uniformly, they contain additions that acknowledge generative AI’s increasing prevalence as a complicating factor: 4.5, “Original Expression,” defines copyrightable expression as extending only to materials produced by a human; 4.51, “Need for Accuracy and Candor,” and 4.76, “Author’s Copyright Warranties,” meanwhile, both caution that, if AI has been used to generate or modify text being submitted for copyright, the author must disclose the usage or face potential legal repercussions.

The implication is clear enough: ChatGPT and the like cannot be considered authors or creators in any meaningful fashion. Paradoxically, these sections also indicate that human writers cannot take credit, in a legal sense, for text produced using AI; this places generated text in something of a netherworld, outside the purview of copyright regulations. It’s not legally expression, but must still be admitted to, delineated, or described; anyone could copy the text I generated and use it for themselves without facing penalties, but they too would be required to state that the text had been generated by a model—just not that I had been the original person who prompted the model to generate it. I suppose it’s a good thing that I don’t work in rights and permissions, because I find the logic here recursive, dizzying. 

Maybe that’s because it highlights a larger disconnect within the Manual. If, after all, we’re not going to consider generated text and images as “expression,” then why have procedures for citing them? LLMs aren’t sentient; they do not have independent cognition, only a set of algorithmic procedures whose end results mimic those achieved through a human cognitive process. (A 2023 post on the CMOS Shop Talk blog, which was the closest thing to a broad-spectrum official statement on AI that I could find from the Manual’s team, notes this as well, and points out that attributing “knowledge” to a chatbot is, for this reason, fallacious.)

They’re not people, so we don’t extend them the same legal protections that copyright affords human creators. They are processes whose end results we sometimes incorporate into our own work, yet somehow they are qualitatively different from other software tools such as Gephi or Tableau, both of which construct data visualizations by automating tasks that a human statistician or designer would find onerous. Nobody cites Gephi as if it were a separate entity when providing a figure, although they might appropriately note that a given image was produced using Gephi’s toolkit. But, because LLMs are processes that make something that could arguably be mistaken for human labor, we must disclose their presence in a new and somewhat cumbersome way.

I wonder if this tension arises not from a perceived similarity between text written by a human and text generated by a bot, but from our uneasy cultural awareness that LLMs are essentially our collective digital unconscious. They’re trained on vast swaths of human language; when they generate responses to our queries, they’re feeding our own words back to us. (This is a process reminiscent of the Mima in Aniara (2018), though that isn’t a compliment to us or it.) They’re our shadow selves, our uncanny doubles, the best and worst of us. How can you cite something so vast, amorphous, and ever-changing? How can you attribute any kind of personhood to that, even in the driest and most technical of senses? And yet: it’s made by people. In some sense, it is people. (Not an endorsement; Soylent Green, after all, was made of the same thing.) The worlds of writing and editing are, ultimately, changing too fast for any one edition of the Manual to capture—we’ll all have to wait for a future edition to see how this set of shifts pans out.