Superforecasting: The Art and Science of Prediction

In his new book, Philip Tetlock provides insight into the art and science of prognostication.

Sometimes really important people get things dreadfully wrong.

Such as back in 1961, when the US Joint Chiefs of Staff told President John F. Kennedy that the Bay of Pigs plan had “a fair chance of success.” Or in 1977, when the president of Digital Equipment Corp. said home computers wouldn’t catch on. Who can forget the promise of supply-side economics, which was supposed to usher in unprecedented prosperity for all Americans? More recently, the weapons of mass destruction debacle led to an ill-considered invasion of Iraq, from which the world is still suffering deadly repercussions.

Philip Tetlock, a professor in the psychology and political science departments at the University of Pennsylvania and the Wharton School of Business, is interested in the art and science of prognostication. As co-leader of a multiyear forecasting study called the Good Judgment Project, he helped bring together 20,000 intellectually curious volunteers from outside the professional intelligence community to try forecasting how global events might unfold. How will Russia behave on the world stage? What will happen to the price of gold? How high will the Nikkei rise? Will elevated plutonium levels be found in Yasser Arafat’s remains? The project was part of a larger effort coordinated by a US intelligence agency.

Each prediction was scored for accuracy, and the results were analyzed to identify the research habits and thought processes behind the best forecasts. If you like to make predictions, here are some useful tips that came out of the exercise.
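Tetlock’s book describes scoring forecasts with the Brier score, which measures the squared gap between a stated probability and what actually happened. A minimal sketch of one common binary-event formulation (the forecast values here are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    0.0 is a perfect record; always guessing 50% earns 0.25.
    Lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 80% on an event that happened (1)
# and 30% on one that didn't (0):
score = brier_score([0.8, 0.3], [1, 0])  # (0.04 + 0.09) / 2 = 0.065
```

The squaring punishes confident misses far more than timid ones, which is why the score rewards the calibrated nuance Tetlock praises.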

Practise triage. Don’t waste time on simple questions, where rules of thumb can get close to the right answer, or on cloud-like questions that no one can answer.

Fermi-ize. Channel the spirit of renowned physicist Enrico Fermi, who succeeded by breaking down seemingly intractable problems into more manageable subproblems. Examine your assumptions and dare to be wrong by making your best guess.
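Fermi’s best-known classroom question asked how many piano tuners work in Chicago. A sketch of the decomposition, in which every input is an openly assumed rough guess; the point is the structure, not the numbers:

```python
# Every figure below is an assumed ballpark guess -- dare to be wrong.
population = 3_000_000             # people in Chicago (assumed)
people_per_household = 2           # (assumed)
households_with_piano = 1 / 20     # (assumed)
tunings_per_piano_per_year = 1     # (assumed)
tunings_per_tuner_per_year = 1000  # ~4 a day, 250 working days (assumed)

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # prints 75 -- the right order of magnitude is the win
```

Each subproblem is easier to guess at than the headline question, and errors in the guesses tend to partially cancel.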

Balance inside and outside views. The outside view asks how similar situations have typically turned out; the inside view weighs the factors unique to the case at hand.

Adjust as events unfold. Strike the right balance between under- and overreacting to changing evidence.
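The textbook formalization of updating on evidence is Bayes’ rule, which superforecasters in the book apply in spirit if not always on paper. A minimal sketch with invented numbers:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numer = prior * p_evidence_if_true
    return numer / (numer + (1 - prior) * p_evidence_if_false)

# Start at 30%. News arrives that is twice as likely if the hypothesis
# is true (80%) as if it is false (40%).
p = bayes_update(0.30, 0.8, 0.4)  # ~0.46: a measured shift, not a lurch
```

Updating this way moves the estimate in proportion to how diagnostic the evidence actually is, which is exactly the balance between under- and overreacting.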

Synthesize. Shed ideological thinking and learn to draw conclusions by considering differing points of view.

Embrace nuance. Put the finest point you can on statistical probability. “Absolutely,” “impossible” and “maybe” aren’t enough settings on your dial.

Be neither under- nor overconfident. Manage the trade-off between being decisive and not being afraid to qualify your judgments.

Conduct postmortems. Analyze both successes and failures to see what you can learn.

Hone your team-building skills. Learn how to disagree without being disagreeable and how to bring out the best in others.

Occasionally the book bogs down with accounts of other experiments and studies, and technical explanations about such things as how to quantify probabilities. Unless you know the difference between “epistemic” and “aleatory” uncertainty, for instance, it might be a good idea to have a dictionary nearby. But the fascinating parts about how captains of industry and US intelligence professionals have made famous mistakes with far-reaching consequences make the work worthwhile.

If you read it, you might not end up designing a better nuclear reactor, knowing when the next act of terrorism will occur or when a meteorite might obliterate your neighbourhood. But you will have a better appreciation for how tough it is to make accurate predictions and a few new tools to help you do so.