It seems extraordinary now, but the idea of using randomised controlled trials to test whether new medical treatments actually work didn't take hold until the early 20th century, and only became widespread after World War II. Until then, medical treatments were applied largely on the principle that the physician knew best. Experiments simply weren't done: physicians, particularly those with high reputations, were regarded as so infallible that no independent scientific validation of their treatments was thought necessary.
The attention-grabbing claim of a new book is that political forecasting operates to this day in just such a pre-20th-century mode.
Superforecasting: The Art & Science of Prediction, by Philip Tetlock and Dan Gardner, argues that the forecasts offered by political experts — media pundits, academics, intelligence analysts and, yes, think tankers — are made without anything approaching a scientific method, and are rarely examined after the fact to determine their accuracy. Tom Friedman, Steve Ballmer, Niall Ferguson and other luminaries come in for scrutiny in this book for their dodgy political and business predictions, but the deeper theme is that we are all guilty of the same mistakes in our own thinking.
Happily, the book also offers ideas on how to overcome some basic cognitive traps that lead to poor political predictions. Superforecasting is the product of a long-running forecasting tournament funded by the US intelligence community, in which teams applying various methods were pitted against each other to determine which team — and which methods — delivered the most accurate forecasts. The book describes the most effective of these methods, and even comes with a handy cheat sheet at the back: 'Ten commandments for aspiring superforecasters'.
Last Friday, I interviewed Dan Gardner, one of the authors of Superforecasting, speaking from his home in Canada: