“
In 2013, on the auspicious date of April 1, I received an email from Tetlock inviting me to join what he described as “a major new research program funded in part by Intelligence Advanced Research Projects Activity, an agency within the U.S. intelligence community.” The core of the program, which had been running since 2011, was a collection of quantifiable forecasts much like Tetlock’s long-running study. The forecasts would be of economic and geopolitical events, “real and pressing matters of the sort that concern the intelligence community—whether Greece will default, whether there will be a military strike on Iran, etc.” These forecasts took the form of a tournament with thousands of contestants, which ran for four annual seasons. “You would simply log on to a website,” Tetlock’s email continued, “give your best judgment about matters you may be following anyway, and update that judgment if and when you feel it should be. When time passes and forecasts are judged, you could compare your results with those of others.”

I did not participate. I told myself I was too busy; perhaps I was too much of a coward as well. But the truth is that I did not participate because, largely thanks to Tetlock’s work, I had concluded that the forecasting task was impossible.

Still, more than 20,000 people embraced the idea. Some could reasonably be described as having professional standing, with experience in intelligence analysis, think tanks, or academia. Others were pure amateurs. Tetlock and two other psychologists, Barbara Mellers (Mellers and Tetlock are married) and Don Moore, ran experiments with the cooperation of this army of volunteers. Some were given training in basic statistical techniques (more on this in a moment); some were assembled into teams; some were given information about other forecasts; and others operated in isolation. The entire exercise was given the name Good Judgment Project, and the aim was to find better ways to see into the future.

This vast project has produced a number of insights, but the most striking is that there was a select group of people whose forecasts, while by no means perfect, were vastly better than the dart-throwing-chimp standard reached by the typical prognosticator. What is more, they got better over time rather than fading away as their luck changed. Tetlock, with an uncharacteristic touch of hyperbole, called this group “superforecasters.” The cynics were too hasty: it is possible to see into the future after all.

What makes a superforecaster? Not subject-matter expertise: professors were no better than well-informed amateurs. Nor was it a matter of intelligence; otherwise Irving Fisher would have been just fine. But there were a few common traits among the better forecasters.
”