Trust in Science
In The Great Stagnation, Tyler Cowen briefly discusses the 2008 financial crisis, and points to the fact that at a fundamental level, the cause was overconfidence ("we thought we were richer than we were"), from individual home buyers at the lowest level, to museum directors investing in expansions, to institutions placing unjustified trust in the assessments of the ratings agencies. Cowen also points to Bernie Madoff as an exemplar of this process: a case where many people who should have known better didn't bother with due diligence, relying instead on the fact that so many other people (who surely would have done their own due diligence) seemed to trust him. More recently, we can look to cases like Theranos to see the same dynamics at work: otherwise careful people not bothering to look at the evidence, in part because so many serious people were already on board.
It's interesting to think about where such dynamics might be at play in the world of science today. Science is, by its very nature, a skeptical business. Externally, science is frequently held up as authoritative (and it is, ultimately, our best way of knowing things), but inside the academy everyone mostly approaches others' work with great caution. Hence the use of meta-analyses and replications. Hence the skeptical reviewer who asks for additional robustness checks. Because we know how the sausage is made, we see the many opportunities by which sawdust might have been smuggled in, and take any published result as only part of the story.
An important part of how this works is transparency in publication. The model is far from perfect, but in principle, being clear about what you did and what you found should allow others to repeat your work, as closely as they are able, and see whether they find something similar. Of course, this only works when people can access the paper, to say nothing of the equipment necessary for the experiment. Greater support for preprints, open-access publishing, and even Sci-Hub helps considerably with access to findings, though much scientific work requires both technical skill and resources that make replication prohibitive for all but well-funded research labs.
The authority of science, meanwhile, gets somewhat laundered in public, especially in times like the pandemic, when many different people claim to be its true representative, not all of whom practice reasonable caution or clear communication. In other areas, like machine learning, research seems inextricably tied to hype cycles in industry, and certain aspects of the research process become increasingly inaccessible, with secrets hidden in corporate safes or gated behind commercial APIs.
In some sense, none of this is critical. Over the long term, errors do get corrected, and even our truths are only ever better and better approximations. Nevertheless, it's worth asking: how much would it take for a certain body of work to sustain itself on everyone's overconfidence, with each researcher noticing flaws in their own work, or critical limitations of their findings, but unwilling to believe that others would overlook such anomalies?
In the aftermath of the collapse of such overconfidence, it suddenly becomes much easier to see where people went wrong, and how obvious some of the signs were. This may even lead to systemic change, as has happened (to some extent) in the wake of the replication crisis, especially in medicine and psychology. And yet, the lessons aren't quite clear enough for us to feel confident in recognizing such unsupported trust in advance of a crisis. Or perhaps they are, but it is simply not actionable information. There should be value in being able to notice where the supports are weak or failing, but it may be that overconfidence overwhelms doubt for as long as the collective enthusiasm remains.

