This Cato Unbound discussion is required reading for anyone interested in DAGGRE, ACE, or forecasting generally. The participants:
Once we grasp that the experts aren’t so reliable at predicting the future, a question arises immediately: How can we do better? Some events will always be unpredictable, of course, but this month’s lead authors, Dan Gardner and Philip E. Tetlock, suggest a few ways that the experts might still be able to improve.
To discuss with them, we’ve invited economist and futurologist Robin Hanson of George Mason University, Professor of Finance and Cato Adjunct Scholar John H. Cochrane, and political scientist Bruce Bueno de Mesquita. Each will offer a commentary on Gardner and Tetlock’s essay, followed by a discussion among the panelists lasting through the end of the month.
The lead Gardner and Tetlock essay introduces ACE and reflects more widely:
In an unprecedented “forecasting tournament,” five teams will compete to see who can most accurately predict future political and economic developments. … All the results will be directly comparable, and so, with a little luck, we will learn more about which methods work better and under what conditions. This sort of research holds out the promise of improving our ability to peer into the future.
But only to some extent, unfortunately. … There increasingly appear to be fundamental limits to what we can ever hope to predict.
That is a considerable understatement. Remember that it was a single suicidal protest by a lone Tunisian fruit seller that set off the tumult.
Accepting that our foresight will always be myopic also calls for decentralized decision-making and a proliferation of small-scale experimentation.
The optimists are right that there is much we can do at a cost that is quite modest relative to what is often at stake. For example, why not build on the IARPA tournament? Imagine a system for recording and judging forecasts. Imagine running tallies of forecasters’ accuracy rates. Imagine advocates on either side of a policy debate specifying in advance precisely what outcomes their desired approach is expected to produce, the evidence that will settle whether it has done so, and the conditions under which participants would agree to say “I was wrong.” Imagine pundits being held to account.
If you would like to participate in the tournament as a member of the DAGGRE team, please visit our website, http://www.daggre.org, and register.