People are discovering very quickly that, with a swish of Claude Code, economics (and other social science) papers can be easily produced. That discovery has caused those experiencing it to reflect on what it all means. The latest is this thoughtful exploration by Scott Cunningham, who ran some supply-and-demand calculations and concluded that a journopocalypse was imminent, with journals overwhelmed by papers and forced to raise submission fees to cope.
I made that discovery over a year ago, and the experience led me to reflect similarly on what it would all mean. But I wanted to know whether it was really possible to produce more papers at substantially lower cost or, more to the point, at a substantially accelerated rate. For the past year, I have been testing that proposition. My current view is that you cannot, as I explained here.
Given the discussion going on, I am going to explain why I don’t think a journopocalypse is imminent or even likely to happen because of AI.
What Scott Cunningham argued
His post is long, but the economics is pretty simple. Publishing papers requires journal capacity as well as editorial and referee labour, all of which are fixed. Academics are motivated to publish more papers because, as Scott assumes, doing so confers higher status, tenure, or something similar. This is what is called a Tullock contest. Throw some technology into the mix that allows everyone to produce more papers, and what won't happen is the publication of more papers, because supply is fixed. The journal bottleneck means that journals will have to ration more heavily, and Scott is pessimistic that journals will be able to easily distinguish paper quality, so they will rely on prices (that is, higher submission fees) to ration submissions and bring everything into balance. The end result is that people produce lots of papers, most of which are cast off with quick desk rejects, and people have to pay more to play in the academic system.
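The mechanism can be sketched with a standard free-entry condition. Everything below is a hypothetical illustration of mine, not taken from Scott's post: researchers keep submitting until the expected payoff of one more submission, the acceptance probability times the prize minus the writing cost and submission fee, hits zero. When AI cuts the writing cost, equilibrium submissions balloon, and the fee must rise to hold them back:

```python
# Minimal free-entry sketch of the journal-congestion story.
# All parameter values are hypothetical.

def equilibrium_submissions(slots, prize, cost, fee):
    """Free entry: researchers submit until the expected payoff of one
    more submission is zero: (slots / S) * prize - (cost + fee) = 0,
    which gives S = slots * prize / (cost + fee)."""
    return slots * prize / (cost + fee)

def fee_to_hold_submissions(slots, prize, cost, target):
    """Fee a journal must charge so equilibrium submissions equal `target`."""
    return slots * prize / target - cost

slots, prize = 100, 50.0   # publication slots, value of a publication
before = equilibrium_submissions(slots, prize, cost=5.0, fee=1.0)
after = equilibrium_submissions(slots, prize, cost=0.5, fee=1.0)  # AI cuts cost
print(before, after)       # submissions jump when the writing cost falls
print(fee_to_hold_submissions(slots, prize, cost=0.5, target=before))
```

In this toy version the required fee rises one-for-one with the fall in writing cost (here from 1.0 to 5.5), which is the rationing-by-price logic of Scott's story in its simplest form.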
That sounds bad. But I don't think it will happen, for three reasons. First, the human input in academic papers is still very apparent, real and observable. Second, what constitutes a research contribution will change (e.g., journals will adopt a higher quality threshold). Finally, what constitutes a high-quality research paper will become something virtually unrecognisable by today's standards.
The human input
In my experience, AI doomers of all stripes tend to downplay what people bring to the table. They generate a paper quickly and wonder whether they contributed anything at all. The answer is: no, they didn't. What's more, I think that will be obvious.
The human input in economics, at least, is to allocate cognitive and other resources appropriately. If you are going to produce research in a timely manner, the less time and money you waste on things that are unlikely to pan out, the better. Basically, what you are doing all the time is forecasting the likely outcomes if you try this or do that, not some other thing. You can see what happens when you don’t have that judgment. Just look at PhD students. They are massive time wasters. And I think that the core difference between academics who “make it” and those who don’t is not creativity, but the ability to work on a problem in an economical way by making good predictions of what paths are likely to be more successful.
If I hand you an AI and say "write a paper," you can do that. But the paper you write will have all manner of choices embedded in it that are not the choices you would make if you were paying more attention; that is, thinking for longer, getting input from others and doing all those things that tend to slow things down. And I have to say that the AIs haven't passed the Turing Test on this front. Handed any paper, I believe I can still readily tell whether it is of high quality or not, and invariably, the high quality is not there when there is no human input.
Daron Acemoglu, a year on from winning the Nobel prize, produced a new paper this week (along with coauthors Dingwen Kong & Asuman Ozdaglar). He is anti-AI, so it is highly unlikely any AI played a role in his work. But I have to tell you, whether you agree with the project's outcomes or not, he and his co-authors have produced a work of art. Economists who read it will know what I mean. The model is beautifully laid out (just read through page 8 to see). It is clear, and its results are easy to follow. More importantly, they are easy to understand (I'll come back to that later). No AI can produce that yet. Yet most theory published in top economics journals is at a similar level. I'd love it if AI could do this, but I just don't believe it has taste. And nothing I have seen in the last year, as I tried to do this, has convinced me that I am wrong in this belief.
This is why I am unpersuaded by the argument that we won't be able to tell good from bad quality. I think we can. Put simply, experienced researchers know the cues, and generative AI's ability to mimic those cues has not been apparent. Don't get me wrong: it is amazing and time-saving. But no matter how quickly you produce a paper, that extra stuff will still take whatever time it needs to take. It just isn't the case that if you can do it once or twice a year, you will be able to do it ten times. Producing that extra but key ingredient just isn't something that can be ramped up. The top journals may be flooded with extra submissions, but I believe the sorting function of the review process can still proceed quickly. Editors are likely feeling it, but I think they are able to hold the line. The dilution Scott thought might occur can be held off. Indeed, the one thing AI can do is notice when something different is in a paper. I suspect that, with the help of AI, editors can readily identify when that ingredient is missing, even if AI cannot fill the gap.
What is a research contribution?
All this has happened before, and it will happen again. In economics, we have gone through similar technological revolutions that have allowed various aspects of economic research to be performed at a higher level. The big one is computerisation. Ken Arrow once told me that, back in the day, to run a regression he had to invert an 8-by-8 matrix by hand. Suffice it to say, you had to be very selective about when you chose to do that, and you certainly could not handle large data sets. That made empirical work extremely difficult. If someone found significance in a small data set, that was an event.
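To see why that hand inversion was the bottleneck: with eight regressors, the OLS normal equations (X'X)b = X'y involve exactly an 8-by-8 matrix. A minimal sketch with synthetic data (the numbers are mine, purely for illustration):

```python
# OLS via the normal equations: the 8x8 inversion that Arrow did by hand,
# done here by numpy on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 8                        # observations, regressors
X = rng.normal(size=(n, k))
beta_true = np.arange(1.0, k + 1)    # hypothetical coefficients
y = X @ beta_true + rng.normal(scale=0.1, size=n)

XtX = X.T @ X                        # the 8x8 matrix that must be inverted
beta_hat = np.linalg.inv(XtX) @ X.T @ y   # recovers beta_true up to noise
```

Each extra regressor grows that matrix, and hand inversion scales roughly with the cube of its size, which is why pre-computer empirical work had to be so selective.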
Even in my lifetime, things have changed. We went from handling modestly sized datasets to what is termed big data. That changed economics, but it took time. People could effectively run four million regressions, but they didn't know what to make of them. What happened was that the entire way empirical research was conducted and, importantly, presented changed. The credibility revolution arose not because people suddenly understood what causation was, but because they had the data and computing power to handle the multiple regressions and systems of equations that would make it possible to generate credible coefficients. (I wrote about the change here.)
AI will generate a similar revolution, and I have no idea what the economics paper of the future will look like. I only know it will look nothing like it does today. We are only just beginning to explore what AI is actually able to tell us about the economy. The multidimensional spaces, the simulations, and the prediction of human agent interactions could all lead to something new. That is where the frontier is, and it is unexplored. But once that is cracked, the journal thresholds will change. The use of those AI tools will be expected, and the level of analysis will be something quite different from our eking out of little burrows to find nuggets of identification.
There is, however, something that I am sure of. Whatever is published will need to be understood and explained. This is a constraint economists have imposed on themselves, and I don't see it changing.
To see what I mean, let’s talk about economic theory. The computer revolution has had no impact on economic theory. Well, a little impact. Paul Krugman once told me (aren’t I just dropping those names today?) that it was magical when WordPerfect got an equation editor, and he could write CES equations with elasticities rather than exponents. But other than that, nothing much.
That, however, wasn't the expectation. It was believed that economic theory would move towards large simulations of the interactions of agents. This began with the work of Richard Nelson and Sid Winter and morphed into activities at the Santa Fe Institute. The idea was that you didn't need to assume rational agents who were easy to model mathematically; instead, you could have agents with various deviations from rationality, plus evolutionary selection, to produce simulations of economic outcomes more consistent with what people argued was the real world. But that had no impact. Why? Because believing those models required an act of faith. And if there's one thing economists don't do, it's take things on faith. They need to understand what is going on. Theory just doesn't work unless you understand the mechanism. And not just theory: we impose that constraint on empirical work too. A mere relationship without an explanation just isn't going to cut it.
What this means is that whatever new methodologies AI hands us, the outcomes will have to be human-readable to be published. AIs can help with the explaining, but a person, first and foremost the author(s), needs to be persuaded first. And that will be hard and take time.
In effect, what will happen is that as AI makes it trivial to do some tasks, we will invest more in other tasks. The time to complete a paper will not change, and the resources to do so will rise. We will likely continue the trend towards multi-authored papers and labs like the Chetty one, but the point is that there will be an escalation in effort that will come alongside the new thresholds for acceptance at top journals. Put simply, there will be no vibe researching your way into that new club.
(P)research
Finally, I want to hark back to something I discussed when I discovered AI's ability to generate papers a year ago. Not only will the production function of papers change, but the demand for papers will change too. New technology has a way of making us really think about why we are doing what we are doing, and AI is no exception.
Consider the Scott Cunningham-AI-paper mill notion. You can pose a question and get an answer very quickly. This is actually not new. The Internet helped us along there, allowing us to search without visits to a library. Even tools like Deep Research have allowed us to scan the literature and more, and get a neat and comprehensive report in a very short period of time. This obviates the need for papers that previously would do the same thing.
The point is that what we call research is a speculative exercise. Because it takes so long, we are trying to answer questions now in the hope that the answers will be there when someone who really needs that answer asks a question. We research precisely so answers are readily available. It isn’t research that we do, but presearch.
Paper mills are useless because it will be easier for someone with a question to just ask AI for an answer at the time than to search for that answer in the corpus of published knowledge (even if it reaches that stage). To be sure, publication is useful in that it provides some certification. But we shouldn't overweight its importance. If you need an answer, just ask AI, and a paper will be produced with the answer. You can then read it as you would something you found in the literature.
The point is that even if people do start producing papers this way, the incentives we provide to do so will be fundamentally altered. That research is just not likely to be that valuable, and so we won't value it. Do that, and the Tullock contest is eliminated. Again, there may be short-run disruption as we work this out, but a long-run journopocalypse isn't on the cards.


