The Boston Globe has a smart piece that draws on the latest research about one of our favourite topics — the perils of economic forecasting.
Having singled out everyone’s favourite Doomster, Nouriel Roubini, here’s the main point of the article (our emphasis):
But are such people really better at predicting the future than anyone else? In October of last year, Oxford economist Jerker Denrell cut directly to the heart of this question. Working with Christina Fang of New York University, Denrell dug through the data from The Wall Street Journal’s Survey of Economic Forecasts, an effort conducted every six months, in which roughly 50 economists are asked to make macroeconomic predictions about gross national product, unemployment, inflation, and so on. They wanted to see if the economists who successfully called the most unexpected events, like our Dr. Doom, had better records over the long term than those who didn’t.
To find the answer, Denrell and Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.”
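To make the study's classification concrete, here is a minimal sketch of the idea — the forecasters, numbers, and outcome below are invented for illustration, not taken from the actual WSJ survey data:

```python
# Hypothetical illustration of the Denrell-Fang classification:
# a forecast counts as "extreme" if it sits more than 20 per cent
# above or below the average (consensus) prediction.

forecasts = {            # forecaster -> GDP growth prediction (per cent)
    "A": 3.1, "B": 2.9, "C": 3.0, "D": 3.8, "E": 2.2,
}
actual = 2.3             # invented outcome, for scoring accuracy

mean = sum(forecasts.values()) / len(forecasts)   # consensus = 3.0 here
threshold = 0.20 * mean                           # 20 per cent band

for name, f in sorted(forecasts.items()):
    extreme = abs(f - mean) > threshold
    error = abs(f - actual)
    print(f"{name}: forecast={f:.1f}  extreme={extreme}  abs_error={error:.1f}")
```

With these made-up numbers, D and E are the "extreme" forecasters; E happens to be right this round while D misses badly — the study's point being that whether an extreme caller looks brilliant or terrible depends on the round, and that over many rounds their average record tends to be worse.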
So economists who tend to predict near the consensus are, by definition, unlikely to anticipate extreme events, while those who correctly predict the occasional Black Swan tend to get everything else wrong (or almost everything else).
Unfortunately, when it comes to economic forecasting, there’s really nowhere to turn, as the consensus view tends to miss even cyclical, non-Black Swan recessions. Here’s James Montier of GMO with a chart, via ZH:
Securities analysts aren’t much better.
Of course, the forecasters themselves are hip to this, which is why they often shrewdly resort to fun tactics like the 40 Per Cent Rule, which Roubini himself used back in October to assess the probability of a US double-dip recession.
We support any effort to remind people that they should be highly skeptical of forecasts, and underlying this discussion is obviously the notion that forecasts aren’t to be taken seriously. But rather than thinking in binary terms about whether forecasts are ever useful or should be avoided entirely, a harder question to answer is whether forecasts are net useful — on balance more valuable than destructive.
Unfortunately, we don’t have a ready answer. Generally we agree with the standard defence that it’s not the forecast part of forecasts that matters, but rather their underlying information and logical coherence. And indeed, sometimes these are extremely helpful regardless of the outcome.
For one example, and so long as we’re talking about Roubini, consider his 2006 speech to the IMF, cited by the Boston Globe as the one big thing he got right. Having read through it, the speech is impressive not just because he turned out to be right, but for the cogency and strength of its arguments. Even had he been wrong about the severity of the ensuing recession, his view that a collapse in housing would lead to a wider financial fallout would still have shed light on the dangerous inter-connectivity of the financial system.
A more recent example might be Gary Shilling’s prediction from last October that US house prices will decline by 20 per cent in 2011. Whether or not this actually happens, the forecast contains a number of incisive points about the housing market and drove a wider conversation (at least within the blogosphere) that yielded still more useful points.
Henry Blodget recently offered another variation of this reasoning in defending Meredith Whitney after she predicted widespread defaults in the municipal debt markets:
Regardless of whether Whitney’s latest default-doomsday prediction comes true, the whole country is now aware that hundreds of towns, cities, and states face massive budget shortfalls that have to be addressed. Dozens of analysts are now frantically gathering data to figure out whether Meredith Whitney is right or wrong. And we’ve all gotten a lot better informed about our slow-motion municipal trainwreck.
Against these arguments are problems rooted in human psychology. If everyone could fully internalise the notion that forecasts are helpful but wholly unreliable, then there wouldn’t be much of a problem — people would simply absorb the useful bits and discard the actual predictions. (Of course, if everyone did that, then probably there wouldn’t be any forecasts.)
But we can’t. As the Boston Globe explains, we gravitate to forecasts because they give us the illusion that we understand more about the world than we do, and that we can take certain steps to make it better:
In a saner world than ours, those who listen to forecasters would take into account all their incorrect predictions before making a judgment. But real life doesn’t work that way. The reason is known in lab parlance as “base rate neglect.” And what it means, essentially, is that when we try to predict what’s next, or determine whether to believe a prediction, we often rely too heavily on information close at hand (a recent correct prediction, a new piece of data, a hunch) and ignore the “base rate” (the overall percentage of blown calls and failures). …
To look at Denrell’s work is to realize the extent to which our judgment can be warped by our bias toward success, even when failure is statistically the default setting for human endeavor. We want to believe success is more probable than it is, that it’s the result of a process we can wrap our heads around. That’s why we’re drawn to prophets, especially the ones who get one big thing right. We want to believe that someone, somewhere can foresee surprising and disruptive change. It means that there is a method to the madness of not just business, but human existence, and that it’s perceptible if you look at it from the right angle.
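Base rate neglect is easy to quantify with a toy Bayes’-rule calculation. The probabilities below are invented purely for illustration — they just show how little a single correct extreme call should move our opinion of a forecaster once the base rate is taken into account:

```python
# Toy Bayes'-rule illustration of base rate neglect (invented numbers).
# Question: given one correct extreme call, how likely is it that the
# forecaster is genuinely skilled rather than merely lucky?

p_skilled = 0.02          # base rate: assume 2% of forecasters have real skill
p_hit_if_skilled = 0.50   # a skilled forecaster nails an extreme call
p_hit_if_lucky = 0.10     # an unskilled one occasionally gets lucky

# Total probability of observing a correct extreme call
p_hit = p_skilled * p_hit_if_skilled + (1 - p_skilled) * p_hit_if_lucky

# Posterior probability of skill, given the correct call
p_skilled_given_hit = p_skilled * p_hit_if_skilled / p_hit

print(f"P(skilled | one correct extreme call) = {p_skilled_given_hit:.0%}")
```

With these numbers the posterior comes out at roughly 9 per cent — the single hit barely moves the needle, yet intuition promotes the forecaster to prophet.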
In the world of finance and economics, we suppose this has taken the shape of an institution conserving too little capital, or a hedge fund banking on convergence because a bunch of Nobel Laureates expected it, or risk managers depending on formulas that use historical information to predict future correlations. There are plenty of others.
And as for how forecasts are actually used inside Wall Street shops, we have a feeling that this observation from Eric Falkenstein isn’t limited to only his experience (though do let us know in the comments if any of you readers have had a different one):
When I worked for an economics department, I quickly learned what a lame business we were in. Our stated purpose–to forecast the economy to allow people to make better decisions–was different than our actual purpose–to provide rationales for decisions already made, to serve as an excuse to have a get together. The sad thing is that a Big Lie needs many little lies, as the stated goal of forecasting accuracy could not be discussed openly and honestly, because if one did the stated purpose becomes untenable, and then the unstated purpose becomes unworkable. It’s one of those phony little kabuki dances that seems so quaint in primitive cultures, but just as common in our own.
A problem in this field is that accuracy spells extinction because no one wants to listen to an honest forecaster, they don’t purport to know enough. Rather, listen to someone who can make you rich! In selling forecasts to the masses, honesty is a strictly dominated strategy.
Of course, at this point forecasts are such a big part of the finance and economics landscape that no matter how often they are debunked, people will keep depending on and paying for them (one way or the other).
And we predict that with at least 40% probability.