Welcome to Plugging the Gap (my email newsletter about Covid-19 and its economics). In case you don’t know me, I’m an economist and professor at the University of Toronto. I have written many books including, most recently, one on Covid-19. You can follow me on Twitter (@joshgans) or subscribe to this email newsletter here.
The Economist had a recent piece on how economists and epidemiologists don’t get along. It claims that economists are reluctant collaborators. In reality, that charge is not valid. Economists often collaborate with others and during this pandemic have engaged with epidemiologists and public health scholars to a very large extent. Jointly written papers started appearing in the first month of the pandemic. And immediately my circle of close contacts expanded well beyond economics. When I rushed to write my book, I was very careful to stick to economic proclamations, using epidemiology as a base, rather than start a turf war about modelling. I was most definitely a consumer of that work.
But over the past nine months, I have started to get a sense that there is a difference between epidemiologists/public health people and economists. Their biases are very distinct.
The Epidemiological/Public Health Biases
In fact, I opened with this observation in an article I wrote for Stat a couple of weeks ago.
Here is my working definition of a public health expert: someone who is constantly frustrated that people will not act on clear health advice in the interests of their own health.
Epidemiology is a profession and field that can be construed as suffering from collective PTSD.
From smoking to diabetes, from eating responsibly and exercising to practicing safe sex, from getting an annual flu shot to wearing face coverings in the time of coronavirus, the literature on public health is awash with tricks, nudges, and sometimes outright bribes to try and get people to take care of themselves. Some progress has been made, but the feeling is often one of disappointment that so many people aren’t following the advice.
This was the lede for my article that went on to suggest that, contrary to expectations, much of what we have seen in the pandemic has been a public health triumph and, while there is great room for improvement, our default assessment should not be that everyone has failed here.
A recent paper by Ellie Murray, an epidemiologist, reflects on the nexus between the two fields. She notes that much of the problem comes from a failure to distinguish between epidemiology in practice and epidemiology as an academic field. As in economics, practitioners make suggestions designed to persuade, based on the best science available. And as in economics, academics long for more nuance and clearer statements of assumptions in the communication of policy recommendations. To people outside the field, the difference can be obscured. We can certainly empathise with that!
All good news is bad news
Some of this intuition, however, is hard to communicate. One of the great things to come out of my interactions with Carl Bergstrom was a rule I derived. That rule, which I think I am the first to state, is:
All good news is bad news in epidemiology.
Here is how it works:
Normal person: “Wow, it has been warmer this November. That will help us keep people outdoors and not spreading the virus.”
Epidemiologist: “What that means is that the rise in cases we have seen in November was all during a period of relatively warm weather. When the usual winter comes it will be even worse!”
I mean, it is not that warmer weather isn’t good news (unless you are a climate scientist). But to an epidemiologist, it means that the infections we are observing were created under ‘good’ conditions, which implies that things could get worse. Who’s the dismal science now!
Anticipating the worst
Murray describes the general misreading of epidemiological models that showed dire outcomes at the beginning of the pandemic that did not come to pass. She argues, interestingly, that the goal of the models was not prediction but to explain what the worst-case scenarios were.
In the case of the Imperial College model, two of the key assumptions which defined their original model were that the government would not respond to the COVID-19 pandemic with any interventions and that the general public would not respond to the pandemic with any changes to their own behavior. These assumptions are clearly unrealistic. However, by making these strong assumptions, the Imperial College model was able to provide epidemiologists and public health practitioners with a rapid estimate of the worst-case scenario: if SARS-CoV-2 was allowed to run unchecked through the population, what is the maximum amount of death that we might expect over the course of the outbreak until it burned out via herd immunity? The answer––510,000 deaths in the United Kingdom and 2,200,000 in the United States (Walker et al. 2020)—rightly spurred both governments and individuals to action.
In other words, the models were tools of persuasion. “Look here is what’s going to happen if you don’t do something. It’s really bad. Do something!” And governments did. It worked … for a time.
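The arithmetic behind that worst case can be sketched with a toy calculation. This is emphatically not the Imperial College model, which was far richer; it is a minimal SIR-style back-of-envelope, where the “no response” scenario means the epidemic runs until herd immunity. The parameter values (R0 of 2.4, an infection fatality rate of 0.9%, and an approximate UK population) are my illustrative assumptions, in the range the Imperial report discussed:

```python
import math

def final_attack_rate(r0, tol=1e-10):
    """Solve the SIR final-size equation z = 1 - exp(-r0 * z) for its
    positive root by fixed-point iteration."""
    z = 0.9  # start away from the trivial z = 0 root
    while True:
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

# Illustrative assumptions only (not the Imperial College model itself).
r0, ifr = 2.4, 0.009
z = final_attack_rate(r0)           # fraction of the population ever infected
uk_deaths = 66_800_000 * z * ifr    # assumed UK population of ~66.8 million
print(f"attack rate: {z:.0%}, implied UK deaths: {uk_deaths:,.0f}")
```

Under these assumed numbers, roughly 88% of the population is eventually infected and implied deaths land in the same ballpark as the 510,000 figure quoted above, which is the point: strong “do nothing” assumptions generate an upper bound, not a forecast.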
Being persuasive
Here’s the problem. People will listen and act. If you act and then things go well, great. But if you act and then things do not quite go to plan (as normally happens), this persuasion strategy runs into trouble. The same thing happened with mask-wearing advice, something to which I devoted an entire chapter of my book.
Now we have a situation where epidemiological projections are mistrusted. This can be for the worse: as Zeynep Tufekci writes, the whole second wave was downplayed for months during the summer, when there was a real opportunity to prepare for it, mentally and otherwise. Even now, in Ontario, the models are predicting a possible rise to 6,000 cases a day in December (from the current 1,500). I have to admit that, while that is possible, I am not particularly worried about it happening because, in my mind, people will react before then. It is far from surprising that governments are reacting more slowly and with less gusto than before.
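It is worth seeing what growth rate that Ontario projection implies. Going from 1,500 to 6,000 daily cases is exactly two doublings; the four-week horizon below is my assumption for illustration, not something stated in the models:

```python
import math

current, projected = 1_500, 6_000
horizon_days = 28  # assumed projection horizon (my assumption)

doublings = math.log2(projected / current)   # log2(4) = 2 doublings
doubling_time = horizon_days / doublings     # implied doubling time in days
print(f"{doublings:.0f} doublings; doubling time of {doubling_time:.0f} days")
```

A sustained two-week doubling time is exactly the kind of trajectory that, on past form, prompts people to change behaviour before the projection is realised.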
That said, I get it. If you come from a profession where people routinely fail to do what needs to be done to protect themselves and others, your instinct is to ramp up the urgency and narrow the message. Epidemiologists know this is a problem but, at the moment, have no alternative means of persuasion to push people in certain directions.
In economics, we don’t have these either. Our approach has not been to try to convince the public of anything. That is seen as futile. Instead, it is to try to convince those in charge. That is no more fun a way to go, but it is the way economics has chosen. And it has 50 years of tools that it has built up to help it be persuasive.
The tools of persuasion
What tools are those? They are data. Theory can help frame arguments. Data can help win them. In economics, the framing that led to data came from Alfred Marshall in microeconomics, John Maynard Keynes in macroeconomics, and a group of people in financial economics. It took over half a century to build the data and analyse it before we started doing a better job of winning arguments. In part, this is because we actually started to learn what the correct policies were.
My sense is that epidemiology is still at the beginning stages of this. It has data on epidemics in animal populations where human behaviour isn’t an issue. It has data on epidemics that were not so serious as to cause humans to behave differently. But, somewhat thankfully, there are only a handful of epidemics where they have been able to collect data where human behaviour was an issue. That is changing and so will how arguments and positions are presented to policy-makers.
Economics should remember its own roots here. When we didn’t have much data, look at how we dealt with the problem of monopoly. Theoretically, we knew monopolies were bad. So our solution was: put a cap on their prices. This is now laughable but it was the state of play for over fifty years!
Thus, it shouldn’t surprise us that the policy options being pushed by epidemiologists are similarly crude: “everyone keep away from other people and wear masks.” To be sure, they get the job done but, like price caps, their problems start to show up the longer they are in place. Coming up with a way of evolving policy in real-time with clear guidance on why that is happening is a much bigger challenge. Understanding the biases and defaults of our academic colleagues, as well as our own, will help us communicate and collaborate better.