Anyone who has watched the 2011 Adam Curtis documentary series “All Watched Over by Machines of Loving Grace” will remember the bit about Alan Greenspan becoming confused about America’s exceptional growth in the 1990s.
At the time, the data didn’t seem to fit the prevailing reality. The incredible and seemingly unstoppable growth Greenspan was seeing on the ground was at odds with his economic models, which instead were signalling an imminent rebalancing on the back of wage pressures and implied inflation.
Greenspan, confounded by the disconnect, concluded that some sort of missing variable must have been responsible for the mismatch. A variable which, very possibly, was tied to the growth of information technology and computers, an area of the economy that economists and statisticians still didn’t understand well.
Satisfied with this explanation, Greenspan finally saw fit to sit back and stop worrying about the aberrations, declaring instead that the dawn of a “new economy” was upon America. A new era underpinned by the rise of information technology. As he noted in a key speech in 1998:
Our economy, of course, is changing every day, and in that sense it is always “new.” The deeper question is whether there has been a profound and fundamental alteration in the way our economy works that creates discontinuity from the past and promises a significantly higher path of growth than we have experienced in recent decades. The question has arisen because the economic performance of the United States in the past five years has in certain respects been unprecedented. Contrary to conventional wisdom and the detailed historic economic modeling on which it is based, it is most unusual for inflation to be falling this far into a business expansion.
This new economy, he believed, was representative of a “structural change”, which allowed for much higher levels of productivity, a tighter labour market and no inflationary consequences. And yet, even as he was writing on the subject in July 2000, the dot-com bubble — the ultimate representation of this brave new world — began to burst around him.
As the intensity of the crash became clear, including its repercussions for the economy, talk of the new model faded. It seemed logical to conclude that Greenspan — after all that — had been wrong, and that his initial instincts had been correct all along.
There was no hidden variable. Just a time lag generated by the very irrational exuberance he had identified earlier that decade. Exuberance that had kept the economy afloat for much longer than would normally be expected.
Yet the concept didn’t die completely. In 2006, Ben Bernanke homed in on what once again appeared to be an exceptional rate of productivity, all things considered. As Bernanke noted, two key phenomena were particularly puzzling:
Undoubtedly, the IT revolution and the resurgence of productivity in the United States after 1995 were closely connected. However, the technology-based explanation of increased productivity growth does raise a couple of puzzles (see, for example, McKinsey and Company, 2001 and 2005, and Basu and others, 2003). First, the United States was not the only country to have access to the new technologies or to have experienced a rapid expansion in IT investment; other industrial countries also invested heavily in these technologies in the 1980s and 1990s.
Yet, with a few exceptions, the available data show that productivity growth in other advanced countries has not increased to the extent seen in the United States.
Second, as I have noted, productivity growth increased very rapidly earlier this decade and has continued to rise at a solid pace, even though IT investment declined sharply after the stock prices of high-tech firms plummeted in 2000. More generally, as a historical matter, increases in IT investment have not always been followed in short order by increases in productivity growth. This observation raises the question of why, in some cases, the putative productivity benefits of investments in new technologies do not occur until years after those investments are made.
In the end, Bernanke rationalised that the United States had become the greatest beneficiary of technological shifts thanks to the country’s open and pro-business culture, which had translated into low barriers to entry and incentivised firms to find ways to cut costs and to improve their products.
As for why productivity had continued to accelerate despite the decline in IT investment… this was possibly down to “an even more rapid pace of technical change and of the diffusion of technological advances.”
In other words, the pace of innovation and development was so quick that it was more than compensating for the lack of investment. Indeed, every dollar invested was now generating an ever greater amount of productivity.
Of course, what happened next — the subprime crisis — changed everything.
The above rationale was forgotten as a new, more resolute and less ‘mystic’ theory, focused on the role of credit expansion, came to the forefront.
The logic now went that uber-low rates had caused all the trouble. Not only had they fanned the dot-com bubble, but the cuts introduced in the aftermath of the crash to cushion the economy had gone on to fuel the much bigger bubble in the housing market. Critical imbalances had formed, capital markets had been destabilised and capital itself had been misallocated.
Enabling it all was a frenzied and out-of-control credit market — itself propelled by market demand for higher-yielding yet safe assets, funded to a large degree by Chinese current account surpluses. Americans became hooked on debt, while investors, hungry for yield, took their eye off risk.
By the time the market started to crash in 2007, the causes seemed far clearer. What we were suffering from was a debt overhang in the Western world, driven by a complacent attitude to debt, imprudent lenders and greedy consumers living beyond their means.
Even Alan Greenspan acknowledged that the ‘missing variable’ had more than likely been globalisation, not information technology.
And with that, the idea that technology had changed the economy in a weird and wonderful way was buried once and for all.
Time to resurrect the ‘missing variable’?
But what if this school-book rationale — now viewed as the gospel truth — is not as accurate as many believe? What if the ‘new economy’ was, and actually is, a real and fundamental force?
Critics might argue that this time around the economic statistics beg to differ, and quite definitively so. CPI inflation may be muted, but the labour market is anything but tight. An output gap prevails, and US growth is not yet back on trend.
So why is it that a new mystery, the so-called ‘jobless recovery’, is starting to do the rounds?
While growth is still working its way back to trend, it’s doing so against the odds. Weak investment and high unemployment, both key inputs, should ordinarily imply the opposite outcome, hence the mystery.
Then there’s multifactor productivity, a favourite measure for the FOMC in the 1990s… it hasn’t just recovered, it’s positively sprung back:
All the while unit labour costs — the labour cost attached to the production of one unit — are staying positively muted:
So is it time to consider the crucial difference between productivity and growth?
Nouriel Roubini explains here that while productivity may be the cornerstone of economic growth, it does not solely determine output. Growth itself is a function of many other inputs, among them capital and labour.
Hence the confusion over the data for the 1990s. Here’s Roubini:
Until the end of 1995 (when the fixed-weight system was being used to measure GDP and productivity) it appeared that there was a major resurgence of productivity in the 1990s: total factor productivity grew at a 1.7% per year rate while labor productivity grew at a 2.2% yearly rate. It appeared that a decade old (starting in the 1980s) process of corporate restructuring, reengineering, downsizing had finally borne its fruits and led to a major resurgence of productivity in the 1990s, spurred by a boom of investment in computer and information technologies. However, the switch in 1995 to the chain-weight method of measuring productivity changed drastically the picture: the new chain-weight data showed that in the 1990s total factor productivity grew at a dismal 0.9% per year rate while labor productivity grew at a 1.4% yearly rate, not much above the 1970s and 1980s rates. So the great resurgence of American productivity in the 1990s suddenly disappeared overnight by a statistical wand.
It became obvious that growth and productivity in the early 1990s were very much in the eye of the statistical beholder. As Roubini noted, depending on how you interpreted the data, you could either argue that the productivity benefits of the Information Revolution had been overstated, or that the new chain-weight method of calculating growth underestimated output and productivity because of mismeasurement of productivity growth in the service sector.
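For the curious, here is a minimal sketch of the measurement mechanics Roubini describes, with invented numbers purely for illustration. A fixed-weight index values every period's output at stale base-year prices, so a good like computers, whose price has collapsed while its quantity has exploded, keeps its old, heavy weight and flatters measured growth. A chain-weight (Fisher) index revalues at current prices as well and tempers the effect. (In BEA practice annual Fisher links are chained together; two periods suffice here.)

```python
# A stylised two-good economy, invented numbers only, showing why a fixed-weight
# index valued at stale base-year prices flatters growth relative to a
# chain-weight (Fisher) index once relative prices have moved a long way.

goods = ["computers", "other"]

p0 = {"computers": 100.0, "other": 10.0}   # old base-year prices
q0 = {"computers": 1.0,   "other": 100.0}  # base-year quantities
p1 = {"computers": 50.0,  "other": 10.3}   # computer prices halve
q1 = {"computers": 2.0,   "other": 102.0}  # computer output doubles

def quantity_growth(prices, q_old, q_new):
    """Growth in real output when both periods are valued at the same prices."""
    new_value = sum(prices[g] * q_new[g] for g in goods)
    old_value = sum(prices[g] * q_old[g] for g in goods)
    return new_value / old_value

laspeyres = quantity_growth(p0, q0, q1)  # fixed-weight: value everything at old prices
paasche = quantity_growth(p1, q0, q1)    # value everything at new prices
fisher = (laspeyres * paasche) ** 0.5    # chain-weight: geometric mean of the two

print(f"fixed-weight growth: {100 * (laspeyres - 1):.1f}%")  # ~10.9%
print(f"chain-weight growth: {100 * (fisher - 1):.1f}%")     # ~8.7%
```

The same quantities, in other words, yield visibly different growth rates depending on the prices used to weight them, which is exactly how a productivity resurgence could ‘disappear overnight by a statistical wand’.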
Yet, by 1997, there was no denying that the data appeared to show a significant increase in the rate of productivity whichever way you looked at it. All of which led to some interesting discussions in FOMC meetings. Some committee members even suggested that a bunch of magical elves must be working busily behind the scenes to help the US economy along.
Alice Rivlin commented in 1999 (as vice chairman of the Federal Reserve):
The elves clearly have been working overtime and they have gotten more ingenious. They have figured out how to control the weather, at least temporarily, and how to keep productivity growth increasing when any reasonable elf would suspect that all the reengineering and restructuring and computerizing that could be done had been done already. Most amazing of all, they seem to have figured out how to keep unemployment rates lower than what the NAIRU enthusiasts have said for a long time is the drop-dead rate, while wage increases actually have decelerated and inflation does not seem to be a danger at present.
Greenspan, meanwhile, argued that the FOMC was possibly looking at how productivity influences prices the wrong way round:
The reason I ask the question is actually the reverse; it relates to anecdotal indications that when nominal wages are beginning to accelerate, then business escalates its efforts to reduce costs and improve productivity. So, if that model were functioning in a meaningful sense, then a significant rise in nominal wages could very well merely reflect the fact that productivity was rising and therefore unit labor costs were not.
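To spell out the arithmetic Greenspan is gesturing at (a standard identity, nothing more): unit labour costs are hourly compensation divided by hourly output, so their growth rate is roughly wage growth minus productivity growth.

$$
\text{ULC} \;=\; \frac{\text{compensation per hour}}{\text{output per hour}}
\qquad\Longrightarrow\qquad
\%\Delta\,\text{ULC} \;\approx\; \%\Delta\,\text{wages} \;-\; \%\Delta\,\text{productivity}
$$

Nominal wages can thus accelerate all they like without unit labour costs, and hence prices, moving, provided productivity accelerates in step.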
Average earnings on a per-hour basis, however, continue to rise consistently, despite the current crisis:
Meanwhile, the decline in total hours worked, which began as the dot-com bubble peaked, continues:
Which leaves us wondering whether information technology — and technology more broadly — may be the mysterious (and deflationary) force that’s once again influencing the economy, and being misread by economists all round?
Indeed, could the jobless recovery be signalling that technology has led to the sort of abundance and productivity that leaves NAIRU — the unemployment rate below which inflation accelerates — with no choice but to recalibrate higher, if returns on capital investment are to be protected?
The rationale being: if NAIRU was unnaturally low in the 1990s — meaning everyone could have a job without inflationary consequences, since productivity was deflating unit labour costs — did the buck break on account of capital, not low interest rates or inflationary forces? That’s to say, had productivity become so great that the economy could no longer afford to keep hiring workers without pushing unit labour costs to a point where they infringed on profitability directly?
Amidst such improvements in productivity, labour could thus only be profitably employed if the actual level of employment and hours worked began to fall instead. In which case, perhaps what NAIRU now represents is no longer the point below which employment starts to become inflationary… but rather, and very much conversely, the point below which more employment becomes an ever more deflationary force on the economy?
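To put invented numbers on that idea, a toy sketch under one reading of the argument, namely that the marginal hire is less productive than the incumbent workforce: expanding employment then drags average productivity down and pushes unit labour costs back up, even with wages flat.

```python
# Toy illustration, invented numbers: if the marginal hire is less productive
# than the existing workforce, expanding employment raises unit labour costs
# even when the hourly wage does not move at all.

wage = 30.0                    # hourly wage, held flat for simplicity
incumbent_productivity = 60.0  # output per hour of the existing workforce
marginal_productivity = 40.0   # output per hour of the next hires (assumed lower)

ulc_before = wage / incumbent_productivity  # 0.500 per unit of output

# a workforce that ends up 90% incumbents and 10% lower-productivity new hires
blended_productivity = 0.9 * incumbent_productivity + 0.1 * marginal_productivity
ulc_after = wage / blended_productivity     # ~0.517 per unit of output

print(f"unit labour cost before hiring: {ulc_before:.3f}")
print(f"unit labour cost after hiring:  {ulc_after:.3f}")
```

On this reading, shedding hours rather than adding them is what keeps unit labour costs, and hence profitability, intact.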
And if that’s true, perhaps it’s time to once again consider the ‘missing variable’, if not the whole idea of a technology-driven new economy?
For now, we’ll leave you with a rather unconventional indicator. The chart below counts the number of references to technology or the high-tech industry in FOMC transcripts between 1993 and 2006, produced in a version of Excel from the same period (not that we’re saying this influences our own productivity, mind you):
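For anyone inclined to reproduce the tally, a rough sketch of the approach. The directory layout and term list are assumptions for illustration; the Fed publishes meeting transcripts on federalreserve.gov, so conversion to plain text is taken as given.

```python
# Count technology-related mentions per year across FOMC meeting transcripts.
# Assumes transcripts have been saved as plain-text files whose names contain
# the meeting year, e.g. fomc_transcripts/FOMC19990202meeting.txt (hypothetical).

import re
from collections import Counter
from pathlib import Path

TECH_TERMS = re.compile(
    r"\b(technology|technological|high-tech|computer|internet)s?\b",
    re.IGNORECASE,
)

counts = Counter()
for path in Path("fomc_transcripts").glob("*.txt"):     # hypothetical directory
    year_match = re.search(r"(19|20)\d{2}", path.name)  # infer year from filename
    if year_match:
        text = path.read_text(errors="ignore")
        counts[int(year_match.group())] += len(TECH_TERMS.findall(text))

for year in sorted(counts):
    print(year, counts[year])
```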
Related links:
How Does the FOMC Learn About Economic Revolutions? Evidence from the New Economy Era, 1994-2001 – Federal Reserve
Beyond Scarcity – FT Alphaville (series)
Peter Diamandis: Abundance is our future – TED Talks
