I was there when Geoff Hinton said this in 2016:
People should stop training radiologists now.
Here is the video of the moment:
Notice the extraordinary confidence of the statement. He was on stage with Rich Sutton and Yoshua Bengio, and there was no pushback. And there was a clear logic to it. Deep learning was a prediction machine, and its success in diagnosing images had triggered the recent AI revolution. At that moment, it seemed to be a compelling argument. How has it played out almost 10 years later?
The prediction made by the godfather of prediction machines was, on the face of it, extraordinarily poor. Radiologists are thriving. But before getting into that, I want to separate the real prediction from the prescription in Hinton’s statement. The real prediction is that the diagnostic task in radiology will be performed by AI, eliminating the need for human intervention. That is within Hinton’s expertise. The prescription that we should stop training radiologists follows logically from that prediction, but it also depends on many other factors that are outside of Hinton’s expertise. I have seen all too often that scientists take the things they know and extrapolate them into calls for action that they don’t fully understand. The AI discussion has had that in spades, and this is a shining example of it.
Which leads me to this great post by Deena Mousa. (I recommend you read it in full.)
Mousa unpacks the radiologist story, starting with this:
AI is having a massive impact on radiology, just as Hinton predicted. Well, directionally, and relative to other applications. But not completely.
Most radiology models detect a single finding or condition in one type of image. For example, a model might look at a chest CT and answer whether there are lung nodules, rib fractures, or what the coronary arterial calcium score is.
For every individual question, a new model is required. In order to cover even a modest slice of what they see in a day, a radiologist would need to switch between dozens of models and ask the right questions of each one. Several platforms manage, run, and interpret outputs from dozens or even hundreds of separate AI models across vendors, but each model operates independently, analyzing for one finding or disease at a time. The final output is a list of separate answers to specific questions, rather than a single description of an image.
Put simply, AI image diagnosis has a very small footprint in terms of the number of use cases. Where there has been effort in detection and clean data, it has done well. But those situations are a minority. It just hasn’t been possible to provide sufficiently reliable predictions in many cases. This is because AI requires millions of images, far more than exist and are solidly classified for particular questions. AI is far, far away from producing a tricorder — able to take an image and diagnose it without a prompt telling it what to look for. That means there is a substantial need for people to choose the right model, even when one exists. Knowing what to ask an AI is still a big task for people.
Given this, how are things looking for radiologists — the Hinton prescription? Pretty great, it turns out:
But demand for human labor is higher than ever. In 2025, American diagnostic radiology residency programs offered a record 1,208 positions across all radiology specialties, a four percent increase from 2024, and the field’s vacancy rates are at all-time highs. In 2025, radiology was the second-highest-paid medical specialty in the country, with an average income of $520,000, over 48 percent higher than the average salary in 2015.
Why is this so? One reason Mousa points to is regulations that slow AI adoption:
… if you retrain a model, you are required to receive new approval even if the previous model was approved.
This and many other issues were identified in our collected volume of papers, The Economics of Artificial Intelligence: Health Care Challenges, which came out last year and is well worth the read for anyone in the health sector. Mousa does raise additional concerns, such as insurance. Insurers are reluctant to cover automated diagnosis because if there is an issue, it will be “at scale”, impacting not just a single patient (as a human error might) but many patients. That isn’t a regulatory issue but a real risk.
But ultimately, radiologists are doing well because these tools haven’t displaced them, but arguably are helping them do their full job better. Diagnosis is only about a third of their time.
More time is spent on overseeing imaging examinations, communicating results and recommendations to the treating clinicians and occasionally directly to patients, teaching radiology residents and technologists who conduct the scans, and reviewing imaging orders and changing scanning protocols.
This is something we have emphasised in our research. AI does prediction, but prediction is only one component of decision-making, and jobs are made up of many decisions. Our recent paper on “The Economics of Bicycles for the Mind” expands on this and covers the particular ways in which a cognitive tool powered by AI complements other human skills.
But as I read the Mousa piece, I became nervous. I was there when Hinton spoke and thought at the time it was a compelling argument. I then wondered, when we came subsequently to write Prediction Machines, did we just parrot it? Fortunately, it turns out not. We devoted an entire subsection of the book to Hinton’s statement and came to the following conclusion:
[F]ive clear roles for humans in the use of medical images will remain, at least in the short and medium term: choosing the image, using real-time images in medical procedures, interpreting machine output, training machines on new technologies, and employing judgment that may lead to overriding the prediction machine’s recommendation, perhaps based on information unavailable to the machine. Whether radiologists have a future depends on whether they are best positioned to undertake these roles, if other specialists will replace them, or if new job classes will develop, such as a combined radiologist/pathologist (i.e., a role where the radiologist also analyzes biopsies, perhaps performed immediately after imaging).
Phew. Now I can gloat. We actually researched what radiologists do and came to the conclusion that Hinton’s prescription was not obvious, even if his prediction was compelling.
Canary in the Coal Mine or Coyote over the Ledge?
Hinton likened radiologists to the coyote in the Road Runner cartoons, being over the ledge but not falling because he hadn’t looked down. Others suggested that radiologists were a canary in a coal mine — we should watch what happens to them as a prediction of what happens to other jobs from AI.
But a canary in the coal mine is a particular type of signal. If there is toxic gas in the mine, then the canary will die. But it may also die for other reasons. Nonetheless, if the canary is alive, then you can be sure things are fine. In other words, as a signal it has low false negatives (if it lives then things are fine) but higher false positives (if it dies, there may be an issue but not for sure).
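To make that asymmetry concrete, here is a minimal simulation of the canary-as-signal logic. All of the probabilities are illustrative assumptions, not estimates of anything real; the point is only that a signal which almost never misses the danger (low false negatives) can still raise many false alarms:

```python
import random

random.seed(0)

# Illustrative assumptions (not real estimates):
P_GAS = 0.05            # probability the mine has toxic gas
P_DIE_GIVEN_GAS = 0.99  # gas almost always kills the canary
P_DIE_OTHER = 0.10      # the canary may also die for unrelated reasons

def trial():
    """One visit to the mine: is there gas, and does the canary die?"""
    gas = random.random() < P_GAS
    die = random.random() < (P_DIE_GIVEN_GAS if gas else P_DIE_OTHER)
    return gas, die

N = 100_000
trials = [trial() for _ in range(N)]

# A live canary is a near-perfect "all clear": gas with a live canary is rare.
false_negatives = sum(gas and not die for gas, die in trials)
# A dead canary is a noisier alarm: many deaths occur without any gas.
false_alarms = sum(die and not gas for gas, die in trials)
alarms = sum(die for _, die in trials)

print(f"gas despite a live canary: {false_negatives / N:.4%}")
print(f"share of alarms that are false: {false_alarms / alarms:.1%}")
```

Under these assumed numbers, a live canary almost guarantees the mine is safe, while most canary deaths turn out to have nothing to do with gas: exactly the low-false-negative, higher-false-positive pattern described above.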
If radiologists were in decline now, that would not have told us that other jobs were surely doomed because, as Hinton argued, radiologists’ predictive tasks seemed especially amenable to AI (and that has arguably been what has happened).
But the canary is well and truly alive. That should give us more confidence that AI will not displace humans because this was a job at clear risk that does not seem to have had an issue. Is it a perfect signal? Not by any means. But it is a very informative case for our projections.



