Partly in response to NaNoWriMo’s newest blunder, and partly because it’s just in the air, some people are talking about AI[1] writing and whether or not it is writing, and also whether or not it is art. Speaking as someone who did a lot of time in the “can videogames ever be art” wars of the late aughts and early 2010s, I have no interest in getting into another one of these brouhahas. This is partly because I am tired, partly because the question is somewhat unsolvable, since a stable definition of “art” is more or less impossible[2], and partly because it avoids the actually important question, which is: “is AI writing a good thing for us to do?”
I have also recently been forced to confront a similar question, which might be phrased as “can AI replace lawyers?” I think, at least in terms of my own legal practice as a public defender, these questions have similar problems.
There are the various empirical questions associated with these problems: can AI ever convincingly write a novel, can AI produce a digital painting that doesn’t have weird little artifacts of failure all over it, can AI parse law and construct cogent legal arguments, etc. I don’t know the answers to these questions, since I am not a software engineer; my understanding is that the answer is “not yet, or at least not without using so much electricity as to materially worsen our climate catastrophe,” but I don’t know. Let us assume, however, for the sake of argument, that large language models have the potential to do all of these things with human-level competence, so as not to get bogged down in discussions of microprocessors and exactly how much dakka is needed to make ChatGPT work; I have nothing to contribute to that conversation.
The problem here, then, and the thing that I fear can be obscured by all the technical posturing, is that using LLMs to write novels or represent clients in court is, broadly speaking, evil.
Let’s stick, first, to law, and specifically to the sort of law I practice, which is the defense of indigent people against the blunt hammer that is the State’s power to throw people into cages or otherwise subject them to capricious and impersonal supervision.
The other day I was at a DC party. I do not go to very many of these things, largely because I generally feel like a space alien when I do. When I go to DC parties I am immediately conspicuous: I never manage to dress right, I’m a trenches trial attorney in Maryland and therefore not immersed in the same universe as most of the attendees, and also, I have a medium-sized beard. (I faced many culture shocks when I moved from rural Minnesota to the DMV, but none more than the fact that no one here grows a beard. My current beard, which has not been materially trimmed in about a year, would have been notable in MN but hardly outside the realm of pogonotrophic[3] reason; here, I look like Augbert.)
This particular DC party was about 30% AI think-tank employees by volume, and although I tried not to talk about this subject very much, since I did not think the party’s vibe would be improved by my approximating a Jonathan Edwards sermon, I couldn’t avoid it entirely. One very nice young lady learned I was a lawyer and told me that AI was definitely coming for my job, and I could not stop myself from responding with a certain amount of scorn. There are certainly avenues of legal practice that will be impacted by LLMs. There is even a reasonable use-case for such things in the context of document review in vast civil cases, where currently the petabytes of data produced when one medical device manufacturer sues another are either reviewed by first-year associates or farmed out to huge warehouses of contractors. But it is difficult for me to imagine how an LLM would be able to perform my job, which is mostly about convincing judges and juries that the State of Maryland has failed to prove, beyond a reasonable doubt, that my client committed this-or-that crime.
To go back briefly on my promise not to talk about empirics, I struggle to see how an LLM could do what I did the other day, which was win a trial without asking a single question of a witness, and only saying three or four sentences in my argument for a judgment[4] of acquittal. Without going into too much detail, the State (for reasons that were at least partly outside of their control, lest it sound like I’m dunking too hard on the line ASA on the other side) couldn’t get their witnesses to say enough relevant things about where the alleged incident occurred and about what actually happened. The best thing for me to do, then, was to shut the hell up, since if I had started asking more questions of these witnesses, they might have volunteered more information about what happened, which might (and almost certainly would) have made my client’s case worse. Further, had I asked questions on cross-examination, the State would have had the opportunity to cure these defects by asking these witnesses more questions on re-direct, and I might have essentially helped the State make their case for them.
I also knew this was the right move because I knew that the judge at issue, who has been presiding over criminal cases for some thirty years, would have already identified these problems and would hold the State to their burden. So the right thing for me to do was shut up. I have a very hard time believing that any LLM would understand that its best strategy was to not say anything, since LLMs like to talk.[5]
But this is probably not an insurmountable problem for LLM designers, so let’s imagine that a hypothetical LLM in this case would have known to shut up, and would have won the case. This still leaves us with the fact that much of the job of a PD occurs outside of the confines of formal court interaction. I have to negotiate with prosecutors and advise clients, and that requires experience and judgment. I have to know which sorts of cases I can win and which sorts I can’t, I have to know the idiosyncrasies of both the specific prosecutor on the other end and the judge on the bench (and, in a jury context, what I can know about the specific jurors I have), and I have to tailor my arguments, either in negotiations or in open court, accordingly. Should I take a more aggressive tack with this prosecutor, since they respond to shows of strength, or should I present in a more friendly fashion, since they don’t respond well to bullying but do respond well to cogent argument? It is difficult for me to imagine an LLM knowing how to do these things, no matter how powerful.
But fine: I said I wasn’t going to argue about empirical realities much, so let’s imagine a hypothetical AI model that can learn to make these judgments at least as well as the average PD. It would still be evil to put such a program in charge of a client’s defense. A PD client is a human being, and deserves a human being to serve as an advocate. A PD client is a human being ensnared in a vast, impersonal system they probably do not understand. Faced with the Cyclopean, implacable machinery of the State, the last thing a PD client needs is to be left to the mercy of yet another sterile apparatus that is incapable of recognizing them as a human being.
How would an LLM, however powerful, earn a client’s trust, and learn what they are most worried about? Some clients are terrified of jail but perfectly fine with probation; some clients are used to jail but hate probation; some clients just want the moral victory of presenting their side of the story to a judge or jury and damn the consequences. Some clients don’t know what they want, really, and are just terrified and need someone to help them out. That’s the client’s call, not mine; can we imagine an LLM accurately representing that person, rather than just trying to minimize “consequences” in a way determined by some algorithm somewhere? As a PD, I may be the only person in the world who has ever been on my client’s side; this is a sacred obligation, and cannot be replaced by a computer program, no matter how advanced.
But fine: whatever this nice young lady at the DC party said, I don’t really think there is a serious movement to replace trench criminal trial attorneys on either side with LLMs. (There is some movement, I think, to replace complex civil litigators with LLMs, which also bothers me, since I think a world where our civil courts are largely populated by nonhuman corporations all represented by robot lawyers starts to feel a lot like There Will Come Soft Rains, but whatever; that’s not my area of expertise.)
But is it completely ridiculous of me to think that a hypothetical robot-author has all the same problems as this hypothetical robot-public defender? Any novel or short story or screenplay worth anything at all is a vulnerable baring of the soul, and the pleasure of reading such a thing is partly about reaching into the mind and soul of another human being and seeing how they tick. Reading Gene Wolfe or Susanna Clarke or Shirley Jackson or Charles Portis or whoever-it-is-I’m-obsessed-with-at-the-moment is an exercise in trust; this other person, this human being, sat down and put words in a particular order because they thought it would be beautiful, or meaningful, or funny, or maybe just because they wanted to get paid; but the result is the product not only of craft and care and skill and whatever “talent” is, but of a life. I will confess that I do not think it is possible for any LLM, no matter how powerful, to write something as powerfully idiosyncratic and beautiful as Jonathan Strange & Mr. Norrell[6], but even if such a thing could happen, it would be a betrayal of that contract between reader and author. This is true not only of heartbreaking works of staggering genius but also of airport thrillers and ghostwritten franchise novels and bizarre self-published dinosaur erotica. An LLM writing fiction, whether Proust[7] or Patterson, would be, to quote the great Hayao Miyazaki, “an insult to life itself.”
I don’t think there is a significant market for full-blown AI fiction, because I think most people feel this insult instinctively. But I do think there is a world where corporations or hack writers use AI to generate words and lie about it, or at least heavily obfuscate it, because it is slightly cheaper than paying a human writer, if one disregards the climate costs and the degradation of the human condition. Disney already used an AI voicebot to emulate the late, great James Earl Jones in its godawful Obi-Wan Kenobi series; do you think they won’t try to replace writers’ rooms with servers, all overseen by some poor underpaid sonofabitch who has to turn the AI hallucinations into something resembling a coherent script? Do you think major publishing houses of sci-fi and fantasy wouldn’t be willing to slap a random “author’s” name on AI-generated slop if they thought they could make more money that way?
Just as a public defender client needs a human being to help advise them and guide them through the terrible trial they are enduring, so too do human beings, weak and pathetic as we are, need human writers[8] to guide us through the inexplicable and horrifying world in which we find ourselves.
Using LLMs to generate fiction is not only bad for environmental reasons and labor-rights reasons; it’s fundamentally evil, and we need to make sure we don’t lose sight of that while we’re arguing about the logistical realities of LLMs, their interfacing with copyright law, or ivory-tower debates about whether or not such things would be, in some sense, “art.” It’s evil, and the people who are trying to make such things possible should be ashamed of themselves. OpenAI delenda est.
[1] By “AI,” which is a nebulous term, I mostly mean what most people mean these days, which is the “large language model,” or LLM: predictive text on steroids. It’s not really AI, and there are several other things under the broad umbrella of “AI” that are not LLMs, but this is not a technical Substack and so it will work for the purposes of this rant.
[2] Just read Wittgenstein on the definition of a “game” if you don’t know what I mean. It’s all just family resemblance, baby.
[3] This one’s for you, DBH. I hope you are recovering well from spinal surgery.
[4] A motion for a judgment of acquittal, or MJOA, is a way for the defense to win at halftime. Basically, at the close of the State’s case and before the defense has presented any evidence, the defense can say “Judge, even if you believe everything they just said, that does not add up to a crime, or at least not to the crime with which my client is charged.” Since the State has the burden of proof, if they haven’t elicited testimony relating to all the elements of the offense, the defense should win even without making any arguments as to the credibility of witnesses. Winning an MJOA feels a bit like hitting a home run.
[5] Granted, there are a number of flesh-and-blood lawyers who have a hard time knowing when to shut up; it’s possible that my ability to know when to stop is my best feature as a trial attorney. But most trial attorneys seem to figure this out eventually.
[6] JS&MN just turned 20 on Sunday. Happy Birthday to what may well be the best work of fantasy of the last 30 years. (The existence of The Wizard Knight makes me unwilling to say this unequivocally; also all the 10,000 or so novels I haven’t read.)
[7] To avoid any stolen valor: I still haven’t read Proust. I’m going to fix it one of these days, honest.
[8] Or, in the interest of a thoroughly unnecessary disclaimer: at least “sapient” writers. Intelligent extraterrestrial life or even actually sentient robots do not cause the same problems. I don’t think Lt. Cmdr. Data is possible and I think we shouldn’t try to build him if he is, but I also think he can read his poetry at the poetry jam if some asshole drags him into existence.