Yesterday, Scott Alexander asked: why is there no public database of expert predictions? I ask: why is there no public database of expert consensus?
I'm not an expert on every topic under the sun. But from time to time I have conversations where I have to pick a position on a topic I haven't published a peer-reviewed paper on. Are nuclear power plants safer than fossil fuel power plants? Does moderate alcohol consumption have health benefits? Can the adult human brain produce new neurons?
In theory, I could find a reasonable answer to these questions by digging through studies, educating myself on research methods, or even conducting experiments myself. In practice, doing this for every issue I have an opinion on would be a huge time sink. It also carries a substantial risk of getting a false lead, terminating the search too early, and ending up believing some crank or outdated misconception.
What I think makes more sense is to find out what the mainstream worldwide community of relevant experts believes and see if it has formed a consensus. If it has, I would adopt the consensus as my default opinion; doubting it would require very strong evidence of coordinated delusion or corruption.
It's not hard to see the reasoning behind this strategy. Expert communities can be wrong, of course, but it tends to be a self-correcting kind of wrongness – at least in science, which I'm more familiar with. Typically, one of the scientists in the community finds evidence that goes against the perceived consensus, the finding gets discussed, and the consensus gradually shifts.
It can sometimes happen that an individual outside the relevant expert community knows better. Continental drift was first proposed by Wegener, a meteorologist with a doctorate in astronomy, and his idea met firm resistance from geologists until his death. However, I struggle to recall a single recent example where a large group of outsiders was correct and the expert consensus was wrong. And if you hear an opinion that goes against the expert consensus, it's probably because a large group of outsiders holds it, not because you or your friend came up with it.
Overall, I think expert consensus is a great starting point for a discussion, and a decent justification for holding a belief in most circumstances. If someone holds an opinion that the vast majority of credentialed experts disagree with, they had better either be a credentialed expert themselves working on a cutting-edge issue, or have unbelievably good personal evidence that they're struggling to broadcast to a wider community – and even in these cases they're probably wrong. In effect, this narrows the set of topics two laymen can meaningfully disagree on to those where a consensus has not yet been reached.
One problem, though – how do I find out what the experts actually think about an issue? Is there a database of expert surveys on multiple issues?
I think there should be.
Some issues of public interest and importance get featured in surveys conducted among conference attendees. Here's one dedicated to opinions on global warming. Here's one asking machine learning researchers when they expect human-level artificial intelligence to arrive. Here's one for philosophers, asking them whether they believe in God, free will, and zombies.
But the selection of topics and respondents by study authors is purposefully narrow: their goal is to publish an article in a journal, so they have to limit the topics to their specialization and conduct a one-time survey. Many issues don't receive this kind of coverage. What if I want to know what the dietary research community thinks about low-calorie sweeteners? Whether anthropologists and geneticists agree that human races exist? How many astrobiologists fell for the phosphine fad?
The philosophy survey above comes closest to the idea I have in mind, since its authors are relaunching it as a decade-later follow-up. To which I say: that's great, but why not do it every year? Why limit it to philosophy questions? Why not sort the questions by pageviews? Why not give experts persistent user accounts so they don't have to re-enter the same information every time?
This brings me to my second point: I'd be delighted to see a platform where researchers are routinely polled on their opinions about issues of both academic and public importance. I envision it as a library of categorized poll questions that everyone can view but only experts can vote on. Votes would be anonymous, of course, apart from the voter's area of specialization, so that experts are not afraid to go against the majority opinion. Each question would display the percentage of experts who voted agree/disagree/not sure, or other options depending on the question. Site admins could periodically compile a batch of hot new questions and email them to relevant experts on a monthly or yearly basis. Public participation could be encouraged too, by letting visitors submit questions and vote on the most interesting ones.
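To make the shape of such a platform concrete, here is a minimal sketch in Python. Everything here – class names, poll options, fields – is a hypothetical illustration of the idea, not a real system: each question stores anonymous votes tagged only with the voter's specialization, and reports aggregate percentages.

```python
from collections import Counter

class PollQuestion:
    """Hypothetical minimal model of one consensus poll question."""

    OPTIONS = ("agree", "disagree", "not sure")

    def __init__(self, text):
        self.text = text
        # Each vote is a (field, option) pair; no voter identity is kept,
        # matching the anonymity requirement above.
        self.votes = []

    def cast_vote(self, field, option):
        if option not in self.OPTIONS:
            raise ValueError(f"unknown option: {option}")
        self.votes.append((field, option))

    def tally(self):
        # Overall percentage for each option, rounded to whole percents.
        counts = Counter(option for _, option in self.votes)
        total = len(self.votes)
        if not total:
            return {}
        return {opt: round(100 * counts[opt] / total) for opt in self.OPTIONS}

# Made-up example votes:
q = PollQuestion("Does moderate alcohol consumption have net health benefits?")
q.cast_vote("epidemiology", "disagree")
q.cast_vote("epidemiology", "disagree")
q.cast_vote("nutrition science", "not sure")
q.cast_vote("nutrition science", "agree")
print(q.tally())  # {'agree': 25, 'disagree': 50, 'not sure': 25}
```

A real implementation would of course need persistent accounts, credential verification, and a vote-per-expert constraint; this only shows how little data the anonymous core of the system has to store.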
There are multiple benefits I see to such a system:
1) It allows tracking the impact of revolutionary discoveries, which is useful for assigning credit to individual scientists who manage to overturn the prevailing view.
2) If a survey indicates that a narrow field or sub-specialization disagrees with the rest of the scientific community, it's a sign that a closer look is needed. Either that sub-community is struggling to attract the larger community's attention to its findings, or, more likely, it has turned into an echo chamber that will change its view only when the old guard retires.
3) It allows historians and philosophers of science to track how quickly scientists adopt new views. Is it really true that science progresses one funeral at a time? Could we see the struggle between fixists and mobilists, preformationists and epigenesists, Big Bangers and Steady Staters – but now with cool graphs?
4) It brings science out of the ivory tower and closer to the people. Personally, I'd find this kind of site fun to browse: sort the questions by view count to find the consensus on well-known issues like global warming and vaccines, or sort by date to find the consensus on whether the latest weird astronomical discovery might be aliens. (Spoiler: it's never aliens.)
5) For individual scientists, a legible lack of consensus on an issue would point to unresolved questions that need more investigation, and would provide justification for grant applications.
6) Claiming a false lack of consensus is a popular tactic among pseudoscience and fringe science groups. An easy and reliable way to check the scientific consensus on many issues would limit the ability of cranks to spread their ideas.
7) It allows science articles to cite something when they make claims about consensus or the lack of it.
For example, this article about aging by Chilton, O'Brien, and Charchar alludes to a consensus among researchers that telomeres mediate age-related diseases despite inconsistent evidence – naturally, with no citation. If only there were a way to find out what researchers really think about telomeres!
I also see potential problems that need to be solved before this idea can be implemented:
1) How do the site admins determine who is a real expert and who is not? Should they include people with degrees in natural medicine? Psychoanalysis? Literary criticism? Theology? I'd say that if a doctoral degree was granted by an accredited institution, its holder should be allowed to participate. That would filter out most of the diploma mills, I think. As for the exact field of study, I imagine it wouldn't do much harm to include each field as a separate category. A visitor could then see that, for example, 90% of theologians support the existence of Christ as a historical figure, while 10% of historians and 5% of archaeologists do. If integrated with researcher ID databases, the site could narrow specialization even further, based on which journals a scholar has published in.
2) If the results of a poll upset some country's government, it can block access to the site in that country, so we wouldn't hear from that country's experts. I imagine a situation similar to Lysenkoism in the USSR, where political considerations shaped scientific opinion. It's not a big problem for the other users, though – as long as the overall userbase is sufficiently large and diverse, this shouldn't influence the results much.
3) If the userbase is initially small, fringe theory believers can organize to register en masse and promote their fringe opinion. This can be alleviated if the survey functionality is plugged into an existing site with a large expert userbase, like ResearchGate.
4) A legible consensus might aggravate groupthink – it can introduce an additional source of friction for bright individuals challenging a mistaken consensus. For example, funding agencies might withdraw funding from scientists holding minority views, seeing them as cranks, even though those scientists might actually be correct. Promising students might avoid them; journals might refuse to publish their results.
5) Who's going to pay for all that?
Science journals, I guess? They already maintain science databases like Scopus. The main costs, I imagine, would be manual registration and verification of new researchers, web design and maintenance, and paying someone to compile questions and survey the userbase. That doesn't seem too outrageous. It also gives journals something useful to do in the age of Open Science.
6) Is it secure from bad actors? Why couldn't oil companies, pharma companies, sugar companies, etc. pay off eligible people to vote in their interest?
I will admit, when I realized this I was pretty bummed. This totally seems like the kind of thing companies would do. In my defense, this sort of affair probably wouldn't go unnoticed – the number of people who decide "Hey, I've got a graduate degree, better go write some companies to get cash" is probably too small to influence the results, and companies can't exactly post ads or spam alumni lists to attract attention without getting noticed and risking backlash. Overall, if a company has access to a couple of shady researchers, it's much better off using them to publish a sham study than to shift the consensus by 1%.
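The per-field breakdown from problem 1 above (90% of theologians vs. 10% of historians, etc.) falls out of the same anonymous vote records. A hypothetical sketch, with field names and vote counts made up purely for illustration:

```python
from collections import Counter, defaultdict

def tally_by_field(votes):
    """Per-field vote percentages for a single question.

    `votes` is a list of (field, option) pairs with no voter identity stored.
    Returns {field: {option: percent}}, rounded to whole percents.
    """
    by_field = defaultdict(Counter)
    for field, option in votes:
        by_field[field][option] += 1
    return {
        field: {opt: round(100 * n / sum(counts.values()))
                for opt, n in counts.items()}
        for field, counts in by_field.items()
    }

# Made-up votes on a "historical Christ" question:
votes = [
    ("theology", "agree"), ("theology", "agree"),
    ("history", "agree"), ("history", "disagree"),
    ("archaeology", "disagree"),
]
print(tally_by_field(votes))
# {'theology': {'agree': 100}, 'history': {'agree': 50, 'disagree': 50},
#  'archaeology': {'disagree': 100}}
```

Keeping only the (field, option) pair per vote also illustrates why problems 2–6 are about the voter roster, not the data: once a voter is verified and anonymized, there is nothing left in the records to bribe, block, or subpoena.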
To sum up, the two main assertions I make in this post, the first strong and the second weak, are:
1) Following expert consensus is a reasonable default position for a non-expert, and overturning it requires very strong evidence.
2) One workable way to record the state of consensus is to set up an international polling website where verified experts can submit their opinions on various propositions, for example as a "yes / no / not sure" poll.
Further reading: Trusting Expert Consensus by Chris Hallquist
Quick update: I was informed by @larry on the ACX Discord that economists already have a similar thing here, though it seems to be limited to US and European experts, with fewer than a hundred of each.