LITERARY THEORY FOR ROBOTS: How Computers Learned to Write, by Dennis Yi Tenen
In “Literary Theory for Robots,” Dennis Yi Tenen’s fun new book about artificial intelligence and how computers learned to write, one of his most powerful examples comes in the form of a small mistake.
Tenen draws links between modern chatbots, pulp fiction plot generators, old-fashioned dictionaries, and medieval prophecy wheels. Both the utopians (Robots will save us!) and the doomsayers (The robots will destroy us!) are wrong, he argues. There will always be an irreducibly human aspect to language and learning: a crucial core of meaning that emerges not only from syntax but from experience. Without it, all you hear is the chatter of parrots, who, “according to Descartes in his ‘Mediations,’ simply repeated without understanding,” Tenen writes.
But Descartes did not write “Mediations”; Tenen must have meant “Meditations.” The missing “t” will go unnoticed by any spell-checking program because both words are perfectly legitimate. (The book’s index lists the title correctly.) This tiny typo has no bearing on Tenen’s argument; if anything, it reinforces the one he wants to make. Machines are getting stronger and smarter, but we still decide what is meaningful. A human wrote this book. And, despite the robots in the title, it is intended for other humans to read.
Tenen, now a professor of English and comparative literature at Columbia, used to be a software engineer at Microsoft. He puts his diverse skills to use in a book that is surprising, funny, and decidedly unintimidating, even as it smuggles in big questions about art, intelligence, technology, and the future of work. One suspects the book’s small size (fewer than 160 pages) is part of the point. People are not indefatigable machines, incessantly ingesting enormous volumes on enormous topics. Tenen has figured out how to present a network of complex ideas on a human scale.
To that end, he tells stories, beginning with the 14th-century Arab scholar Ibn Khaldun, who chronicled the use of the prophecy wheel, and ending with a chapter on the 20th-century Russian mathematician Andrey Markov, whose analysis of the probability of letter sequences in Pushkin’s “Eugene Onegin” formed a fundamental building block of generative AI (regular players of the Wordle game intuit such probabilities all the time). Tenen writes knowledgeably about the technological hurdles that hampered earlier models of computer learning, before the brute force required to “process almost everything published in English” was so readily available. He urges us to be alert. He also urges us not to panic.
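Markov’s idea can be sketched in a few lines: count how often each letter follows another in a text, then sample new letters from those conditional frequencies. The snippet below is a toy illustration, not anything from Tenen’s book; the function names are my own, and the sample string merely stands in for the text of “Eugene Onegin.”

```python
import random
from collections import Counter, defaultdict

def letter_transitions(text):
    """Count how often each character is followed by each other character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample a chain of characters from the learned transition counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = counts.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        chars = list(options.keys())
        weights = list(options.values())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

# A stand-in snippet (loosely echoing Onegin's opening line in translation).
sample = "my uncle, man of firm convictions, by falling gravely ill, he won"
model = letter_transitions(sample)
print(generate(model, "m", 40))
```

The output is grammatical nowhere and meaningful nowhere, which is exactly the point: statistics over symbols, with no understanding attached. Modern language models extend the same principle to vastly longer contexts.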
“Intelligence evolves on a spectrum from ‘partial assistance’ to ‘full automation,’” Tenen writes, offering the example of an automatic transmission in a car. Driving an automatic in the 1960s must have been mind-blowing for people used to manual transmissions. An automatic worked by automating key decisions, downshifting on hills and sending less power to the wheels in bad weather. It eliminated the option to stall or grind the gears. It was “artificially intelligent,” even if no one used those words to refer to it. American drivers now take its magic for granted. It has been demystified.
As for the current debates about AI, this book also attempts to demystify them. Instead of talking about AI as if it had a mind of its own, Tenen talks about the collaborative work that went into building it. “We employ a cognitive-linguistic shortcut by condensing and attributing agency to the technology itself,” he writes. “It’s easier to say: ‘The phone completes my messages’ instead of ‘The engineering team behind the autocomplete tool, drawing on the following dozen research articles, completes my messages.’”
Therefore, our common metaphors about AI are misleading. Tenen says we should “be suspicious of all metaphors that attribute familiar human cognitive aspects to artificial intelligence. The machine thinks, speaks, explains, understands, writes, feels, etc., only by analogy.” This is why much of his book revolves around questions of language. Language allows us to communicate and understand each other. But it also allows for deception and misunderstanding. Tenen wants us to “unroll the metaphor” of AI, a proposal that at first glance might seem like an English teacher’s hobbyhorse, but is entirely appropriate. Too general a metaphor can make us complacent. Our sense of possibility is shaped by the metaphors we choose.
Text generators, whether in the form of 21st-century chatbots or 14th-century “magic letters,” have always faced the problem of “external validation,” Tenen writes. Procedurally generated text can make grammatical sense, but it doesn’t always make sense. Take Noam Chomsky’s famous example: “Colorless green ideas sleep furiously.” Anyone who has lived in the physical world knows that this syntactically perfect phrase is nonsense. Tenen goes on to stress the importance of “lived experience,” because it describes our condition.
Tenen does not deny that AI threatens much of what we call “knowledge work.” Nor does he deny that automating something also devalues it. But he also puts it another way: “Automation reduces barriers to entry, increasing the supply of goods for everyone.” Learning is cheaper now, so having a large vocabulary or repertoire of memorized facts is no longer the competitive advantage it once was. “Today’s scribes and scholars can challenge themselves with more creative tasks,” he suggests. “Tasks that were tedious have been outsourced to machines.”
I take his point, even if this prospect still seems bleak to me: an ever-smaller portion of the population engaged in creative and challenging work while a once-flourishing ecosystem collapses. But Tenen also argues that we, as social beings, have agency, if only we allow ourselves to accept the responsibility that comes with it. “Individual AIs pose a real danger, given their ability to amass power in pursuit of a goal,” he acknowledges. But the real danger comes “from our inability to hold technology manufacturers accountable for their actions.” What if someone wanted to attach a jet engine to a car and see how it fared on crowded city streets? Tenen says the answer is obvious: “Don’t do that.”
Why “not doing that” may seem easy in one area but not in another requires more thought, more precision, more scrutiny – all qualities that fall by the wayside when we cower before AI, treating the technology as a singular god instead of a multitude of machines built by a multitude of humans. Tenen leads by example, applying his human intelligence to artificial intelligence. By reflecting on our collective habits of thought, he offers a meditation of his own.
LITERARY THEORY FOR ROBOTS: How Computers Learned to Write | By Dennis Yi Tenen | Norton | 158 pages | $22