Field stripping
LLMs and the accidental atrophy of academic knowledge
It is fun to look at the literature reviews of academic papers written over twenty years ago—a simpler time. Here are the References from a wonderful 2003 paper by Juan Carlos Munoz (recently a target of our stupid regime) and Carlos Daganzo. Every citation is either to themselves or to someone they know personally:
Unfortunately, today this wouldn’t fly. Now articles must include a fairly extensive literature review. Like the contribution section, this is a requirement reviewers have imposed by an evolving tradition (i.e., they complain if you don’t cite enough papers) rather than a formal rule, and it has gone very far.
Increasingly, people are satisfying this requirement by using LLMs to write these literature reviews. At the moment, LLMs aren’t very good at it: they are ignorant of what’s in the body of most papers, because they cannot or do not read them, and so they will often claim that a paper contains whatever content seems concordant with its title. But despite chronic hallucination, they are arguably better than most human-written literature reviews, which also hallucinate content from titles and are poorly written besides. ResearchGate.com tells me whenever my papers are cited and shows me the passages with my citation highlighted. Typically it’s some babble like:
Congestion pricing is an urgent policy of the toll in global cities such as Sweden (Vickrey, 1969; Lehe, 2019; The New York Time, 2024).
With time and more “skills” or “hooks,” LLMs will overcome their problems and write inarguably good literature review sections. Summarizing text is very much in the LLM wheelhouse. When the hooks arrive, you might be able to write a better literature review than an LLM, but only in the sense that you can iron your shirts better than the dry cleaner if you set your mind to it.
As goes the literature review, so goes the paper: LLMs will take over a larger and larger share of the paper-writing process. And as they do, the average, modal, third-quartile, etc. quality of papers will probably improve. If you doubt this, I encourage you to go read a couple of papers at random.
The point I want to make today is that, even if papers get better, the improvement is only unmitigated insofar as you take the journal article at face value: as a concise (and maybe pleasing) way to transmit a new argument, finding, or result. While this is certainly part of the “job” of a journal article, it is worth considering the article’s role in its overall environment. We really ought to think of the modern paper as an act rather than a product per se.
Maybe you’re not supposed to admit it, but academia’s emphasis on the journal article is similar to a military’s emphasis on things like field stripping rifles (i.e., taking them apart and putting them back together fast). This is a task which, while ostensibly useful on the battlefield, is emphasized well beyond any serious estimate of its direct effect on warfighting. But field stripping and similar rituals (marching, making your bed, rigmarole around flags) are part of military culture around the world and always have been. The reason is that these rituals are thought to do more than what’s on the label.
Forcing people to write journal articles (even bad ones) forces them, by and by, to learn and remember facts and theories from a discipline. You have to stay abreast of, or debate, your field’s consensus. The system of enforced article-writing also (noisily) filters out persons: you won’t publish (as many) articles if you are lazy, a genuine moron, or such a defiant crank that you won’t submit to tedium like perfunctory literature reviews.
Now imagine a field in which all papers are written and reviewed by AIs. All degree-holders use LLMs to write their theses and to do most of the assignments in their classes. Lectures are created by LLMs, and speakers (whether at conferences or in thesis defenses) read PowerPoint slides written by LLMs, perhaps with Q&A maps of prepared answers to anticipated questions, like the ones Neal Katyal used in his Supreme Court arguments. In short, the ordinary way of academic life goes on and is, at face value, executed more artfully and faster.
While the field’s output is better, it is possible that no human knows essentially anything substantive about the field at all, even as its publications, books, and conferences proliferate at breakneck speed. Of course, an individual might know a lot, just as a Swede might know Quechua. But there is nothing in particular pushing anyone beyond familiarity. Maybe people learn a lot to take exams in grad school, but that knowledge atrophies quickly from disuse.
Whatever the benefits of more and better papers, and howsoever ignorant you think professors are, it is common sense that a field is in a fundamentally different situation when no human knows anything about it than when some people know something. No one can talk to anyone else. No one can tell if the LLMs are doing a good job. One could come up with specific problems: e.g., what if the LLMs (being the ultimate front-row kids) never slough off wrong traditions? What if LLMs reason their way down a wrong path that no human brain (however flawed) would ever go down, thanks to common sense? But whether or not those specific problems hold water, I think it is just worth noting that the situation is different.
Consider three possibilities:
that LLMs are opening up a heretofore unlikely scenario: a state of deep human ignorance about topics even as the “state of the art” advances.
that little in the modus operandi of academia makes #1 impossible, and little would push a field out of such a state once it is attained. I mean to say a state of ignorance is Lyapunov stable (I think…I can’t remember the difference between Lyapunov stable and the other type of local stability…maybe I should ask an LLM).
that academia’s current M.O. could make the ignorance equilibrium into a global attractor: i.e., something that any field (whatever its initial endowment of learning or rigor) will tend toward. For example, suppose everyone uses LLMs to publish a thousand papers and submit a thousand grants, and that achievement is judged relatively. Any junior professor who spends care and time on reading, writing, reviewing, and teaching will find themselves googling “Commercial Drivers License” after mid-tenure review. They will look worse by every measurable metric than the professor who specializes in committees, collecting accolades and editorships, etc., but who knows nothing and farms out as much work as possible to LLMs.
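For reference, the stability terms borrowed above do have standard definitions in dynamical systems (stated here for a generic system, not as a formal model of academia):

```latex
% For an equilibrium x^* of a system \dot{x} = f(x):
%
% Lyapunov stable: trajectories that start near x^* stay near it forever.
\forall \varepsilon > 0,\ \exists \delta > 0:\quad
  \|x(0) - x^*\| < \delta \;\Rightarrow\; \|x(t) - x^*\| < \varepsilon \ \ \forall t \ge 0.
%
% Asymptotically stable (the "other type" of local stability): Lyapunov
% stable, and nearby trajectories actually converge to x^*.
\exists \delta > 0:\quad
  \|x(0) - x^*\| < \delta \;\Rightarrow\; \lim_{t \to \infty} x(t) = x^*.
%
% Global attractor: convergence from every initial condition, not just nearby ones.
\lim_{t \to \infty} x(t) = x^* \quad \text{for all } x(0).
```

So the distinction in #2 vs. #3 is the basin: local stability says a field already near ignorance stays (or falls) there; a global attractor says every field ends up there regardless of where it starts.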
In conclusion, I want to emphasize that I am not an AI or education doomer. Realistically, the implication of my argument is not that people actually will become totally ignorant. Rather, it is that the way we do things will substantially change. I can’t predict what it will be, except that it will piss everyone off. Maybe professors will submit to routine examinations every few years (even after tenure), and they will spend some of their time studying for these tests.
PostScript:
I am not talking about the situation in Asimov’s Foundation series, where no one knows how nuclear power works but a religion of atomic priests passes down the rituals to run the reactors. LLMs will not be analogous to those priests. The priests have no idea what they are doing, and they keep what they know secret. The scenario I am describing is simply that of Latin or Classical Chinese: things every scholar used to know, that are easier to learn now than ever before, but that very few actually do know.



