David Warsh: After '08 Close Call, Can Bankers Avoid Another Depression?

SOMERVILLE, Mass. – So much for the first two depressions, the one that happened in the 20th Century and the other that didn’t happen in the 21st. What about that third depression, the presumptive one that threatens somewhere in the years ahead?

Avoiding the second disaster, when a full-blown systemic panic erupted in financial markets in September 2008 after 14 months of slowly growing apprehension, turned on understanding what precipitated and then exacerbated the first disaster, the Great Depression of the 1930s.

By the same token, much will depend on how the crisis of 2007-08 comes to be understood by politicians and policy-makers in the future.

The panic of ’08 wasn’t described as such at the time – it was all but invisible to outsiders, and understood by insiders only at the last possible moment.

It occurred in a collateral-based banking system that bankers and money-managers had hastily improvised to finance a 30-year boom often summarized as an era of “globalization.”

The logic of this so-called “shadow banking system” has become visible only slowly, and only after the fact.

The panic was centered in a market that few outside the world of banking even knew to exist. Its very name – repo – was unfamiliar. The use of sale and repurchase agreements as short-term financing – some $10 trillion worth of overnight demand deposits for institutional money managers, insured by financial collateral – had grown so quickly since the 1980s that the Federal Reserve Board gave up measuring it in 2006.
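The mechanics of repo as an overnight deposit can be made concrete with a small sketch. This is a toy illustration, not anyone’s actual trade: the helper function (`repo_terms`) and the figures (a 2 percent haircut, 5 basis points for the night) are assumptions chosen only for arithmetic clarity.

```python
# A minimal, hypothetical sketch of an overnight sale-and-repurchase
# agreement. All names and figures here are illustrative assumptions.

def repo_terms(cash_lent: float, haircut: float, rate: float):
    """Return (collateral required, next-morning repurchase price).

    haircut -- the lender's margin of over-collateralization, as a fraction
    rate    -- the repo rate for the single night (not annualized here)
    """
    collateral_value = cash_lent / (1 - haircut)  # lender holds more than it lent
    repurchase_price = cash_lent * (1 + rate)     # borrower buys the collateral back
    return collateral_value, repurchase_price

# A money manager "deposits" $100 million overnight against bonds,
# demanding a 2 percent haircut and 5 basis points for the night.
collateral, repay = repo_terms(100_000_000, haircut=0.02, rate=0.0005)
print(round(collateral))  # 102040816 -- extra collateral secures the deposit
print(round(repay))       # 100050000 -- the borrower repurchases the next morning
```

In a panic, cash lenders refuse to roll such deposits over, or demand sharply larger haircuts; that is the modern equivalent of a queue at the teller’s window.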

The panic led to the first downturn in global output since the end of World War II, and the ensuing political consequences in the United States, Europe, and Russia have been intense.

But the banking panic itself was the more or less natural climax to a building spree that saw China’s entry into the world trading system, the collapse of the USSR, and many other less spectacular developments – all facilitated by an accompanying revolution in information and communications technology.

Gross world product statistics are hard to come by – the concept is too new – but however those years are understood, as one period of expansion of world trade or two, punctuated by the 1970s, growth since the trough of the Great Depression is unique in global history.

The much-ballyhooed subprime lending was only a proximate cause. It was a detonator attached to a debt bomb that fortunately didn’t explode, thanks to emergency lending by central banks, backed by their national treasuries, that alleviated the irrational fear of ruin.

Instead of Great Depression 2.0, the loans were mostly repaid.

But what had occurred was almost completely misinterpreted by both the Bush and Obama administrations. What happened is still not broadly understood, even among economists.

Let me briefly recapitulate the story I have been telling here since May – mainly an elaboration of the work of a handful of economists involved in the financial macroeconomics project of the National Bureau of Economic Research.

Panics We Will Always Have With Us

Banks have been a fixture of market economies since medieval times. Periodic panics have been a feature of banking since the seventeenth century, usually occurring at intervals of ten to twelve years. Panics are always the same:  en masse demands by depositors – in modern parlance, by holders of bank debt – for cash.

Panics are a problem because the cash is not there – most of it has been lent out, many times over, in accordance with the principles of fractional-reserve banking, with only a certain amount kept in the till to cover ordinary rates of withdrawal.

In the 18th Century, Sir James Steuart argued for central banks and government charters. His rival Adam Smith advocated less invasive regulation, consisting of bank-capital and -reserve requirements, and, having ignored Steuart, won the argument completely, as far as economists were concerned. Bankers were less persuaded. After the Panic of 1793, the Bank of England began lending to quell stampedes.

Panics continued in the 19th Century and, as banks grew larger and more numerous, became worse. After the Panic of 1866 shook the system, former banker Walter Bagehot took leave from his job as editor of The Economist to set straight the directors of the Bank of England. In Lombard Street: A Description of the Money Market, he spelled out three basic rules for preventing panics from getting out of hand: lend freely to institutions threatened by withdrawals, at a penalty rate, against good collateral. Thereafter, troubled banks sometimes closed, but panics in the United Kingdom ceased.

Panics continued in the United States.  The National Banking Acts of 1863 and 1864 were supposed to stop them; they didn’t. There were panics in 1873, 1884 and 1893.  At least the statutes created a national currency, backed by gold, and the Office of the Comptroller of the Currency to manage both the paper currency and the banks.

Since then, three especially notable panics have occurred in the U.S. – in 1907, in 1930-33, and in 2007-08.

The 1907 panic began with a run on two New York City banks, then spread to the investment banks of their day – the new money trusts.  The threatened firestorm was quelled only when the New York Clearing House issued loan certificates to stop the bank run and J.P. Morgan organized the rescue of the trusts.

The episode led, fairly directly, to the creation in 1913 of the Federal Reserve System – what turned out to be a dozen regional banks in major banking cities around the country, privately owned, including an especially powerful one in New York, and a seven-member board of governors in Washington, D.C., appointed by the president.

Legislators recognized that creating the Fed amounted to establishing a fourth branch of government, semi-independent of the rest, and a great deal of care and compromise went into the legislation. Under the leadership of Benjamin Strong, president of the Federal Reserve Bank of New York, who had served as Morgan’s deputy in resolving the 1907 crisis, and who enjoyed widespread confidence in both banking and government circles as a result, the Fed got off to a good start.

In the Panic of 1914, at the outbreak of World War I, banks were able to issue emergency currency under the Aldrich-Vreeland Act; the Fed was not yet  functioning.  The new central bank managed its policies adroitly enough in the short, sharp post-war recession of 1920-21 to gain a measure of self-confidence. In 1923, it began actively managing the ebb and flow of interest rates through “open market operations,” buying and selling government bonds for its own account.

Then Strong died, in 1928, leaving a political vacuum. That same year, disputatious leaders within the Fed – its governors in Washington and its operational center in New York – and their counterparts in the central banks of Britain, France and Germany began to make a series of missteps that, cumulatively, may have led to the Great Depression. Investment banker Liaquat Ahamed has argued as much in Lords of Finance: The Bankers Who Broke the World, drawing on many years of economic and historical research.

Starting in January 1928, the New York Fed stepped sharply on its brakes, hoping to dampen what it regarded as excessive stock market speculation.  Instead the market surged, then crashed in October 1929.  A sharp recession had begun.

A series of bank failures began, but the methods by which the industry had coped with such runs before the central bank was established were held in abeyance, awaiting intervention by the Fed. Instead of acting, the Fed tightened.

Twice more, in 1931, and in early 1933, waves of panic swept segments of the banking industry around the country with corresponding failures of hundreds of banks – everywhere but New York. (There was no deposit insurance in those days.)

Each time the Fed stood by, declining to lend or otherwise ease monetary stringency.  Meanwhile Herbert Hoover set out to balance the federal budget. By March 1933, banks in many states had been closed by executive order.

Franklin Roosevelt defeated Hoover in a landslide in November 1932. The following March, Roosevelt and a heavily Democratic Congress began the New Deal. (Inauguration of the new president was subsequently moved up to January.) With unemployment rates reaching 25 percent and remaining stubbornly high, the panics of the early ’30s were quickly forgotten. In any event, they had ceased.

Economists of all stripes offered prescriptions. Finally, in 1936, John Maynard Keynes, in The General Theory of Employment, Interest and Money, dramatically recast the matter. Because wages would inevitably be slow to fall, an economy could become trapped in a high-unemployment equilibrium for lengthy periods. Only government could intervene effectively, providing fiscal stimulus to create more jobs, returning a stalled economy to a full-employment equilibrium.

Keynesian ideas gradually conquered the economics profession, especially after they were restated by Sir John Hicks and Paul Samuelson in more traditional terms. Soon after World War II, the new doctrine was deployed in the industrial democracies of the West. Fiscal policy, meaning raising and lowering taxes periodically, and manipulating government spending in between, would be the key to managing, perhaps even ending, business cycles. The influence of money and banking was said to be slight.

Starting in 1948, Milton Friedman and Anna Schwartz, two young economists associated respectively with the University of Chicago and the NBER, began a long rearguard action against the dominant Keynesian orthodoxy. It reached a climax with the publication, in 1963, of A Monetary History of the United States, 1867-1960, a lengthy statistical study of the Fed’s conduct of monetary policy. Its centerpiece was a reinterpretation of the Great Depression.

The Fed, far from being powerless to affect the economy, Friedman and Schwartz argued, had turned “what might have been a garden-variety recession, though perhaps a fairly severe one, into a major catastrophe,” by permitting the quantity of money to decline by a third between 1929 and 1933.

Peter Temin, economic historian at the Massachusetts Institute of Technology, took the other side of the argument. Shocks produced by World War I were so severe that, exacerbated by commitments to an inflexible gold standard, they caused the Depression a dozen years later. Central banks could scarcely have acted otherwise.

The argument raged throughout the ’70s. By the early ’80s, a young MIT graduate student named Ben Bernanke concluded that Friedman was basically correct: Monetary policy, especially emergency last-resort lending, was very important. He and others set out to persuade their peers. By a series of happy coincidences, Bernanke had been in office as chairman of the Fed just long enough to recognize the beginning of the panic for what it was in the summer of 2007. And so fifty years of inside-baseball economics was swept away in a month of emergency lending in the autumn of 2008. Great Depression 2.0 was avoided. Friedman – or at least Bagehot – had been right.

This is a very different story from the one commonly told. More on that in a final episode next week.

Meanwhile, even this brief account raises an interesting question.  How was it that the United States enjoyed that 75-year respite from panics – Gary Gorton calls it “the quiet period” – in the decades after 1934?

Why We Had the Quiet Years

With respect to banking, four measures stand out among the responses to the Crash of 1929 and the subsequent Great Depression:

* President Roosevelt  explained very clearly the panic that had taken hold in the weeks before his inauguration in his first Fireside Chat, “The Banking Crisis,” and why he had declared a bank holiday to deal with it.  He had taken the U.S. off the gold standard, too, but didn’t complicate matters by trying to explain why. As Gorton has pointed out, Roosevelt was careful not to blame the banks.

Some of our bankers had shown themselves either incompetent or dishonest in their handling of the people's funds. They had used the money entrusted to them in speculations and unwise loans. This was, of course, not true in the vast majority of our banks, but it was true in enough of them to shock the people for a time into a sense of insecurity and to put them into a frame of mind where they did not differentiate, but seemed to assume that the acts of a comparative few had tainted them all. It was the government's job to straighten out this situation and do it as quickly as possible. And the job is being performed.

* The Banking Act of 1933, known as the Glass-Steagall Act for its sponsors, Sen. Carter Glass (D-Virginia) and Rep. Henry Steagall (D-Alabama), tightly partitioned the banking system, relying mainly on strong charters for commercial banks of various sorts. Securities firms were prohibited from taking deposits; banks were prohibited from dealing in or underwriting securities, or from investing in them themselves. (The McFadden Act of 1927 already had prohibited interstate banking.)

* Congress established the Federal Deposit Insurance Corporation to oversee the deposit insurance provisions of the 1933 Banking Act. Small banks were covered as well as big ones, at the insistence of Steagall, over the objections of Glass.

* Former Utah banker Marriner Eccles, as chairman of the Federal Reserve Board, re-engineered the governance of the Fed as part of the Banking Act of 1935, over the vigorous opposition of Carter Glass, who had been one of the architects of the original Federal Reserve Act. Eccles wrote later, “A more effective way of diffusing responsibility and encouraging inertia and indecision could not very well have been devised.”

Authority for monetary policy was re-assigned to the Board of Governors in Washington, in the form of a new 12-member Federal Open Market Committee, rather than left to the regional bank in New York. The power of the regional banks was reduced, and the appointment of their presidents made subject to the approval of the Board. Emergency lending powers were broadened to include what the governors considered “sound assets,” instead of previously eligible commercial paper, narrowly defined. The system remained privately owned, and required no Congressional appropriations (dividends from its portfolio of government bonds more than covered the cost of its operations), but its relationships with the Treasury Department and Congress remained somewhat ambiguous.

A fifth measure, the government’s entry into the mortgage business, has a cloudier history. The National Housing Act of 1934 set standards for construction and underwriting and insured loans made by banks and other lenders for home construction, but had relatively little effect until the Reconstruction Finance Corporation established the Federal National Mortgage Association (FNMA, or Fannie Mae) in 1938, to buy mortgage loans from the banks and syndicate them to large investors looking for guaranteed returns. With that, plus the design of a new 30-year mortgage requiring a low down payment, the housing market finally took off. As chairman of the Fed, Eccles pleaded with bankers to create the secondary market themselves, but without success.

Lawyers immediately began looking for loopholes. By 1940 they had found several, resulting in the passage of the Investment Company Act, providing for federal oversight of mutual funds, then in their infancy.  Eight years later, the first hedge fund found a way to open its doors without supervision under the 1940 Act – as a limited partnership of fewer than 100 investors.

By the 1970s, many financial firms were eager to enter businesses forbidden them by the New Deal reforms. The little-remembered Securities Acts Amendments of 1975 began the process of financial deregulation with the seemingly innocuous aim of ending the 180-year-old prohibition of price competition among members of the New York Stock Exchange. It was a response to an initiative undertaken by the Treasury Department in the midst of the Watergate scandals of the Nixon administration, and signed into law by President Gerald Ford. Deregulation continued apace under President Jimmy Carter, and swung into high gear with the election of President Ronald Reagan. It reached its apex with the repeal of the key provisions of the Glass-Steagall Act in 1999.

From 1975 on, financial innovators were encouraged to experiment as they pleased, subject mainly to the discipline of competition. The moment coincided with developments in academic finance that revealed whole new realms of possibility. Fed regulators scrutinized the new developments at intervals. Investment banker (and future Treasury Secretary) Nicholas Brady headed a commission that examined relationships among commodity and stock exchanges after the sharp break in share prices in 1987. Economist Sam Cross studied swaps for the Fed. New York Fed President Gerald Corrigan undertook a more wide-ranging study of derivatives. Objections, including those of Brooksley Born, chairman of the Commodity Futures Trading Commission from 1996 to 1999, were swept aside.

An extensive new layer of regulation was put into place by the Sarbanes-Oxley Act of 2002, after the collapse of the dot-com bubble and the corporate scandals of 2001 – Enron, WorldCom and Tyco International – but its provisions mostly had to do with corporate accounting practices and disclosure. The vast new apparatus of global finance worked pretty well, until unmistakable signs of stress began to show in the summer of 2007.

It will be years before the outlines of the period – the 75 years between 1933, the trough of the Great Depression, and early 2008, the peak of the more-or-less uninterrupted global expansion that began in ’33 (making allowances for World War II and the ’70s, when the U.S. drifted while Japan and other Asian economies grew rapidly) – come into focus.

Yet a few basic facts about the period are already clear. Innovation was crucial, in finance as in all other things, especially information and communications technologies. Competition between the industrial democracies and the communist nations was a considerable stimulant to development. Banking and economic growth were intimately related, perhaps especially after 1975. And the dominant narrative furnished by economic science, the history of the business cycle compiled by the NBER, while valuable, is of limited usefulness when it comes to interpreting the history of events. Additional yardsticks will be required.

The New New Deal – Not

How did the Practicals do this time, measured against the template they chose, the New Deal of Franklin Roosevelt?  Not very well, I am sorry to say.

Certainly the Fed did much better than it had between 1928 and 1934. Decisions in its final years under Chairman Alan Greenspan will continue to be scrutinized. And no one doubts that Bernanke made a slow start after taking over in February 2006. But from the summer of 2007 on, the Fed was on top at every juncture. The decision to let Lehman Brothers fail is likely to be the chief topic of conversation in the hot-stove league when his first-person account, The Courage to Act, appears next week. It will still be debated a hundred years from now.

If Lehman had somehow been salvaged, some other failure would have occurred. The panic happened in 2008.  Bernanke, his team, and their counterparts abroad, were ready when it did.

The Bush administration, too, did far better than Hoover. Treasury Secretary Henry Paulson was not so good in the thundery months after August 2007, when he pursued a will-o’-the-wisp he called a “super SIV” (structured investment vehicle), modeled on the private-sector resolution of the hedge-fund bankruptcies that accompanied the Russia crisis of 1998. And it can’t be said that the planning for a “Break-the-Glass” reorganization act that Paulson ordered in April produced impressive results by the time it was rolled out as the Troubled Asset Relief Program in September 2008. But Paulson’s team improvised very well after that. Bernanke and Paulson, Bush’s major post-Katrina appointments, as well as the president himself, with the cooperation of the congressional leadership, steered the nation through its greatest financial peril in 75 years.

The panic was nearly over by the time Obama took office.

In retrospect, the Obama administration seems to have been either coy or obtuse in its first few months. Former Treasury Secretary Lawrence Summers and Robert Rubin, the man he had succeeded in the job, formerly of Goldman Sachs and, by then, vice chairman of Citigroup, formally joined the Obama campaign on the Friday that the TARP was announced. (The connection to Rubin was soon severed.)

Summers, like the rest of the economics profession, began a journey of escape from the dogma he had learned in graduate school, which held that banking panics no longer occurred. After leaving office, in a conversation with columnist Martin Wolf, of the Financial Times, at a meeting of the Institute for New Economic Thinking at Bretton Woods, N.H., in March 2011, Summers hinted at how his thinking had changed:

"I would have to say that the vast edifice in both its new Keynesian variety and its new classical variety of attempting to place micro foundations under macroeconomics was not something that informed the policy making process in any important way.''

Instead, Summers said, Walter Bagehot, Hyman Minsky, and, especially, Charles Kindleberger had been among his guides to “the crisis we just went through.”

But Summers made little attempt that day to distinguish between the terrifying panic that occurred the September before Obama took office, and the recession that the Obama administration had inherited as a result.  Nor did the accounts of Summers’s tenure subsequently published by journalists Noam Scheiber, Ron Suskind and Michael Grunwald make clear the extent to which the panic had been a surprise to Summers.  Schooled to act boldly by his service in the Treasury Department during the crises of the ’90s, he did the best he could. At every juncture, Summers remained a crucial step behind Bernanke and Obama's first Treasury secretary, Timothy Geithner, the men he hoped desperately to replace.

The result was that Obama’s first address about the crisis, to a joint session of Congress, in February 2009, was memorable not so much for what the president said as for what he didn’t.

"I intend to hold these banks fully accountable for the assistance they receive, and this time they’ll have to fully demonstrate how taxpayer dollars result in more lending for the American taxpayer. This time, CEOs won’t be able to use taxpayer money to pad their paychecks or buy fancy drapes or disappear on a private jet.''

But there was no matter-of-fact discussion of the panic of the autumn before the election, in the manner of Franklin Roosevelt; no credit given to the Fed (and certainly none to the Bush administration); and not much optimism, either. The cost of inaction would be “an economy that sputters along for not months or years, but perhaps a decade,” Obama said.

After an initial “stimulus” – as opposed to the “compensatory spending” of the New Deal – of the $819 billion American Recovery and Reinvestment Act, inaction is exactly what he got.  The loss of the Democratic majority in the House in the Tea Party election of 2010 only made the impasse worse. Yet the economy recovered.

Congress? Many regulators and bankers contend that the thousand-page Dodd-Frank Act complicated the task of a future panic rescue by compromising the independence of the Fed. Next time the Treasury secretary will be required to sign off on emergency lending.

Bank Regulators?  Some economists, including Gorton, worry that by focusing on its new “liquidity coverage ratio” the Bank for International Settlements, by now the chief regulator of global banking, will have rendered the international system more fragile rather than less by immobilizing collateral.

Bankers?  You know that the young ones among them are already looking for the Next New Thing.

Meanwhile, critics left and right in Congress are seeking legislation that would curb the power of the Fed to respond to future crises.

So there is plenty to worry about in the years ahead.  Based on the experience of 2008, when a disastrous meltdown was avoided, there is also reason to hope that central bankers will once again cope. Remember, though – it was a close-run thing.

David Warsh, a long-time financial journalist and economic historian, is proprietor of

David Warsh: Generations of economists

Economics, at least in its dominant narrative tradition, is a story of human lifetimes. Its most vital concerns in the present day span no more than four or five generations: the lives of our parents and grandparents backwards in time, our own lives in the present, the lives of our children and grandchildren going forward – perhaps a hundred and twenty-five years altogether. That each of us has (or deserves to have) first-hand knowledge of five generations may be a basic fact of human existence. Perspectives shift with the passing of each succeeding generation. In due course, each new generation writes history anew.

When I started covering economics, in 1975, Paul Samuelson and Milton Friedman were pre-eminent. They had been born about the same time — Friedman in 1912, Samuelson in 1915. They led powerful university departments, at the University of Chicago and the Massachusetts Institute of Technology, to which the brightest students flocked. They advised presidential candidates – Samuelson, John F. Kennedy; Friedman, Barry Goldwater and Ronald Reagan.

For nearly 20 years, they argued with one another, each writing a column every two weeks about public policy in the pages of Newsweek. They were among the first winners of the new Nobel Prize for economic sciences funded by the Bank of Sweden and established in 1969.

The premise of the Nobel was that important lessons had been learned from the Depression, sufficient to constitute a phase change in what previously had been a nascent science.  Those lessons were persuasive to most of those who took an active interest in the subject, not just the economists themselves, but the physicists, chemists, earth scientists, social scientists and others of the Royal Swedish Academy of Sciences who agreed to give the prize. They were useful, probably, for purposes of managing industrial economies.

This achievement was ascribed to the generation of scholars who had been called upon in the 1930s to parse the Great Depression as it unfolded. John Maynard Keynes is the best remembered of this community, along with Friedrich von Hayek, Joseph Schumpeter, and Irving Fisher.  Edward Chamberlin, Joan Robinson and Wesley Clair Mitchell are remembered for different reasons. Only Hayek lived long enough to share a Nobel Prize.

The Nobel Prize made it easier to follow the narrative. Samuelson received the second prize, in 1970 (the pioneering European econometricians Ragnar Frisch and Jan Tinbergen were first); Friedman, in its eighth year, in 1976. In between, a third major figure was recognized, and, at least within the profession, gradually came to be understood as being of importance comparable to Friedman and Samuelson.  Kenneth Arrow (b. 1921) was cited in 1972, with Sir John Hicks (b. 1904), for work that at the time seemed abstruse: a mathematical proof of the existence and stability of general equilibrium in a system of market prices. It turned out to be the foundation of what we call microeconomics.

The decision to split the award was confusing:  the two economists had worked on some of the same problems, but in different eras, with different tools. Hicks had done his important work in the 1930s; Arrow in the ’50s. But even then, Arrow’s accomplishments were much wider-ranging. They included his analysis of different voting systems, sufficient to establish a new sub-discipline called social choice; the tools he devised to incorporate uncertainty into economic analysis, imagining complete markets of options and futures for all manner of things in all conceivable states of the world; and his contributions to the theory of growth. Starting in the early 1960s, Arrow began a whole new skein of work, on the economics of information.

He added to his Nobel autobiography in 2005 that, even before 1972, his research was moving in directions beyond those cited for the prize. Hicks died in 1989; Arrow, a Stanford University professor, is still active, editing, with his colleague Timothy Bresnahan, the Annual Review of Economics.

Slowly the narrative filled in. Prizes went to theorists, and to refiners of theory; to practitioners of measurement and measurement’s close cousin, the statistically oriented field of econometrics; to economists who specialized in finance; and to economists influenced by game theory, though the first of those prizes wasn’t given until 1994. Some economists of comparable achievement were mentioned only in citations for awards to others, some not cited at all; and those who had concerned themselves mainly with policy were left out altogether.

For the first 20 years of the Nobel survey, the dominant theme was macroeconomics. This was unsurprising, since macro was the essence of what was thought to have been learned from the experience of the ’30s: that here was a way to characterize the behavior of the economy as a whole, using a handful of measurable aggregates, chief among them consumption, investment, and government purchases. It was easy, too, to divide the macro awards into camps, if not to reach conclusions about the fundamental differences among them.

Keynesian laureates, led by Samuelson, included Wassily Leontief, Lawrence Klein, James Tobin, Franco Modigliani, Robert Solow, George Akerlof, Joseph Stiglitz, Edmund Phelps, Paul Krugman, Peter Diamond, and Robert Shiller. Monetarists, led by Friedman, included George Stigler, Robert Lucas, Robert Mundell, Edward Prescott, Thomas Sargent, and Eugene Fama.

Woven through the prizes was recognition of a different skein, loosely related to work by Arrow and his intellectual collaborators, Gerard Debreu and Leo Hurwicz in particular. Taken together, that new work added up to a working out of microeconomics to include strategic behavior: incentives, information, and human capability. Nobel laureates in this category included Herbert Simon, Michael Spence, William Vickrey, James Mirrlees, Eric Maskin, Roger Myerson, and Alvin Roth. Lucas and Robert Merton used the tools of complete competitive markets that Arrow had contributed; Ronald Coase and Gary Becker lent support, relying less on formal methods.

For half a century then, all the time I’d been reporting on it, economics seemed to be the story of three generations: of Keynes and others, those who were called upon during the emergency; of their students, Samuelson, Friedman, and Arrow, who entered grad school in the ’30s and ’40s; and of their students, who went to school in the ’50s and ’60s, during the Cold War. A fourth generation could be discerned, economists who entered graduate school in the ’70s and ’80s, in an age of innovation, restructuring and globalization.

The crisis of 2008 seemed to overturn all that. At its heart was a long global boom, punctuated by a panic, contained by emergency lending by the world’s central banks. There seemed to be little in the panic, at least, that could not be understood in the new economics of information and incentives. The broad macroeconomic questions seemed to resolve decisively in favor of Friedman, who, however warily, had put central banking at the center of his analysis. But the whole only made sense when placed in a grammar pioneered by Samuelson: models, measurement and operability.

Only slowly did it dawn on me that Samuelson, Friedman and Arrow, having long been economics’ own greatest generation, had themselves become so much history, superseded by events. Several members of that fourth generation played prominent roles in the crisis: Ben Bernanke, Austan Goolsbee, Lawrence Summers, and Paul Krugman, in particular. Economists of the third generation – John Taylor, Martin Feldstein and Stanley Fischer – were active, too, mostly behind the scenes; but all these were policymakers and advocates, not originators of profound new insights.

So I set out to tell the story of the collaboration of Gary Gorton, of Yale University’s School of Management, and Bengt Holmström, of MIT, as described in one way by Gorton, in Misunderstanding Financial Crises: Why We Don’t See Them Coming (Oxford, 2012), and, in another, by Holmström in “Understanding the Role of Debt in the Financial System.” These two did a better job of analyzing and interpreting the crisis than any others I had seen. They have opened the door to a new financial macroeconomics. A growing coterie of monetary theorists, economic historians, and international economists is taking the next steps. Only the connections between finance and growth are so far missing.

It clearly wouldn’t be enough to trumpet the new results.  The controversy between the Keynesians and Monetarists would have to be unscrambled, at least for those who remembered the history of economics in the 20th Century as I did — for one friend in particular. To do that I devised an account that went beyond the bounds of memory.  I set out to tell a story with two threads: the accomplishments of practitioners, mostly bankers in this case, on the one hand; of economists on the other.

I started with Sir James Steuart, the all-but-forgotten Scottish rival to Adam Smith, whose verbose but coherent system of economics of 1767 — today we would call it a model — made an important place for the governmental institution of central banking. I showed how Smith eclipsed Steuart completely with the exciting research program of The Wealth of Nations, which contained not one but three promising lines of inquiry, and how Smith’s rich vision was itself eclipsed when subsequent economists chose to pursue the most promising of the three, the existence of an interdependent price system, symbolized by the metaphor of an Invisible Hand.

I then showed bankers coping with the situation on their own for 150 years, creating central banks and instruction manuals for their safe operation, until Milton Friedman and Anna Schwartz placed them at the top of the research agenda of macroeconomics — and then only on terms so old-fashioned that they had to be translated into the language devised by Samuelson and the economics devised by Arrow and his students before they could be properly understood.  But here I get ahead of the story.

David Warsh, a longtime financial journalist and economic historian, is proprietor of economicprincipals.com, where this piece originated.

David Warsh: Jeb Bush likely to be the next president

The early stages of the presidential campaign are unfolding as expected. Jeb Bush is irritating his party’s right wing by periodically praising President Obama. Otherwise he remains as elusive as possible, in view of the gantlet of Red State primaries he must run to receive the Republican nomination. Then he can run to the center in the general election.

To make this point last week, Washington Post campaign correspondent Ed O’Keefe reached back to an interview that Bush gave Charlie Rose in 2012:

“I don’t have to play the game of being 100,000 percent against President Obama,” he said at the time. “I’ve got a long list of things that I think he’s done wrong and with civility and respect I will point those out if I’m asked. But on the things I think he’s done a good job on, I’m not just going to say no.”

In the same interview, Bush alleged that Obama was repeatedly blaming his brother for his own missteps and suggested it would be nice to hear Obama give “just a small acknowledgment that the guy you replaced isn’t the source of every problem and the excuse of why you’re not being successful.”

Meanwhile, Hillary Rodham Clinton is bogged down amidst increasing scrutiny of the philanthropic foundation her husband founded after leaving office.

Are the stories fair?  In New York Times columnist Paul Krugman’s view, there is reason to be skeptical. He wrote last week on his blog, “If you are old enough to remember the 1990s, you remember the endless parade of alleged scandals, Whitewater above all — all of them fomented by right-wing operatives, all eagerly hyped by mainstream news outlets, none of which actually turned out to involve wrongdoing.”

In fact the Clintons manifested a fair amount of slippery behavior once they got to Washington, most of it involving a series of appointments of cronies to sensitive positions in the Treasury and Justice Departments, including that of his wife and of his old friend Ira Magaziner to head a Task Force on National Health Care Reform. Little of the initial concern was fomented by “right-wing operatives.”

But instead of simply monitoring the president’s behavior in office, the editorial page of The Wall Street Journal, under editor Robert Bartley, launched a campaign to, in effect, overturn the result of the election, by conducting an extensive investigation of the couple’s Arkansas years.

The result was the Whitewater craze — the name stems from a vacation development in the Ozark Mountains in which the Clintons had an interest. Bartley’s inquisition found relatively little smoke and much less fire. Typical was the discovery of a futures trade, arranged by a favor-seeker, from which Hillary had benefitted in 1978, having put very little money at risk – at a moment in which her marriage hung in the balance. Afterwards she backed away from the broker.

Mrs. Clinton’s healthcare reform initiative collapsed in 1994, amid heavy criticism of its approach (employers required to provide coverage to all employees) and its lack of transparency. Later that year Rep. Newt Gingrich (R.-Ga.) led the Republicans to control of the House of Representatives.  The struggle for power became more intense.

It reached a crescendo when impeachment proceedings failed to convict the president of “high crimes and misdemeanors” for having lied about his affair with White House intern Monica Lewinsky. The Democrats gained seats in the House after that.

Clinton served out his time in the White House with only one more memorable scandal – his pardon of Glencore commodities founder Marc Rich on charges of trading with Iran and tax evasion.

The case against Bill Clinton was sunk by the consistently unfair and ultimately deplorable way in which it was brought. But Hillary Clinton’s problems with the Clinton Foundation – possible conflicts of interest while serving as secretary of state, her husband’s lavish speaking fees – are there for all to see, 18 months before the election. Politico’s Jack Shafer compares the scheme to the concept of “honest graft,” as enunciated long ago by George Washington Plunkitt of Tammany Hall: “I seen my opportunities and I took ’em.”

Shafer writes, “The Times story contains no smoking gun. As far as I can tell, the pistol isn’t even loaded. Hell, I’m not sure I even see a firearm.” But then Plunkitt never ran for president.

So it seems likely the next president will be Jeb Bush. He would be the third member of the family to serve. Is that a bad thing?

Not in my reading of it. Jeb Bush more nearly resembles his realist father than his idealistic brother. The net effect of a Bush victory would be the marginalization of the populist wing of the GOP – and their fellow travelers at the WSJ ed page. Remember, the editorialists there played a significant role in bringing about the defeat of the first Bush. Furious at him for having raised taxes slightly to finance the first Gulf War, they systematically forced Bush to the right. Pat Buchanan gave the keynote address to the Republican convention in Houston, H. Ross Perot split the GOP vote, and Clinton was elected with 43 percent of the popular vote.

Rupert Murdoch bought the WSJ in 2007. Since then I have detected a gradual slight moderation of its editorial views (if not those of its more fiery columnists: Bret Stephens, Daniel Henninger, Kimberly Strassel). In 2014, Club for Growth founder Stephen Moore left the editorial board for the Tea Party’s Heritage Foundation, taking with him the portfolio of crackpot economics. Due next for an overhaul is the page’s stubborn denial of climate change.

The WSJ editorial page and the Clintons have shaped each other to an astonishing extent over the past 25 years. What’s sauce for the goose is sauce for the gander.

Disclosure: I worked on the famously fair-minded news side of the WSJ for a short sweet time many years ago. In the name of fairness, I have to say that an awful lot of wise opinion and shrewd editorial writing appears on its editorial pages.

WSJ reporters are still remarkably straight. I have the feeling that the disdain among them for the wilder enthusiasms of their editorial colleagues hasn’t changed any more than mine in all that time.

David Warsh is a Somerville, Mass.-based economic historian, long-time financial journalist and proprietor of economicprincipals.com, where this essay first appeared.

David Warsh: Looking for a 'Jewish lunch' at Harvard


What propelled Massachusetts Institute of Technology economics to the top of the heap? As Bloomberg Businessweek memorably illustrated in 2012, most of the leadership arrayed against the financial crisis was educated to the task at MIT, starting with Ben Bernanke, of the Federal Reserve Board; Mervyn King, of the Bank of England, and Mario Draghi, of the European Central Bank.

That they and innumerable other talented youngsters chose MIT and turned out so well owed to the presence of two strong generations of research faculty at MIT, led  in the 1970s and ’80s by Rudiger Dornbusch, Stanley Fischer, and Olivier Blanchard, and, in the ’50s and ‘60s, by Paul Samuelson, Robert Solow and Franco Modigliani.

Samuelson started it all when he bolted Harvard University in the fall of 1940 to start a program in the engineering school at the industrial end of Cambridge.

What made MIT so receptive in the first place?  Was it that the engineers were substantially unburdened by longstanding Brahmin anti-Semitism, as E. Roy Weintraub argues in MIT and the Transformation of American Economics?

Or that the technologically-oriented institute was more receptive to new ideas, such as the mathematically-based “operationalizing” revolution, of which Samuelson was exemplar-in-chief, a case made in the same volume by Roger Backhouse, of the University of Birmingham?

The answer is probably both.  The very founding of MIT, in 1861, enabled by the land-grant college Morrill Act, had itself been undertaken in a spirit of breakaway.  First to quit Harvard for Tech was the chemist Charles W. Eliot, in 1865. (Harvard quickly hired him back to be its president.)

Harvard-trained prodigy Norbert Wiener moved to MIT in 1920 after Harvard’s mathematics department failed to appoint him; linguist Noam Chomsky left Harvard’s Society of Fellows for MIT in 1955. Historian of science Thomas Kuhn wound up at MIT, too, after a long detour via Berkeley and Princeton.

But the Harvard situation today is very different. Often overlooked is a second exodus that played an important part in bringing change about.

Turmoil at the University of California at Berkeley, which later came to be known as the Free Speech Movement, had led a number of Berkeley professors to accept offers from Harvard: economists David Landes, Henry Rosovsky and Harvey Leibenstein, and sociologist Seymour Martin Lipset among them.

The story, in the stylized fashion in which it has often been told, is that one of the four one day said, “You know, I kind of miss the Jewish lunch” [that they had in Berkeley].

A second said, “Why don’t we start one here?”

“How are you going to find out who’s Jewish?”

“We can’t. Some have changed their names,” said a third. Whereupon Henry Rosovsky said, “Give me the faculty list. I can figure it out.”

A month later, luncheon invitations arrived in homes of faculty members who had previously made no point of identifying one way or another.  And a month after that, a group larger than the original four gathered at the first Jewish lunch at Harvard.  The Jewish lunch has been going on ever since.

Rosovsky, 88, former dean of the Faculty of Arts and Sciences, and the second Jew to serve as a member of Harvard’s governing corporation (historian John Morton Blum, less open about his Jewishness, was the first), is the sole surviving member of the original group. Last week I asked him about it.

“It was not the way things were done at Harvard. The people here were a little surprised by our chutzpah to have this kind of open Jewish lunch, reflecting, I think, the sense that the Jews were here a little bit on sufferance. I don’t think that feeling existed at Berkeley. Nobody was worried there about somebody sending an invitation to the wrong person.

“It’s a subtle thing. We left graduate school at [Harvard] for Berkeley in 1956. I wouldn’t say that Harvard was anti-Semitic, but just as in the ’30s, Berkeley was happy to take the [European] refugees, where Harvard had difficulty with this, there were notions [at Harvard] of public behavior, of what was fitting. Berkeley was a public university, nobody thought twice about their lunch.”

Rosovsky’s wife, Nitza, an author who prepared an extensive scholarly exhibition on “The Jewish Experience at Harvard and Radcliffe” for the university’s 350th anniversary, in 1986, remembered that there were surprising cases. Merle Fainsod, the famous scholar of Soviet politics who grew up in McKee’s Rocks, Pa., asked her one night at dinner if she knew his nephew, Yigael Yadin. “Apparently this was the first time he ever said in public that he was Jewish.” Yadin was a young archaeologist who served as head of operations of the Israel Defense Forces during the 1948 war and later translated the Dead Sea Scrolls.

The exhibition catalog tells the story in a strong narrative. Perhaps a dozen Jews graduated in Harvard’s first 250 years. But as a professor, then as president of the university, A. Lawrence Lowell watched the proportion of Jewish undergraduates rise from 7 percent of freshmen in 1900 to 21.5 percent in 1922. Jews constituted 27 percent of college transfers, 15 percent of special students, 9 percent of Arts and Sciences graduate students, and 16 percent of the Medical School. Harvard was deemed to have a “Jewish problem,” which was addressed by a system of quotas lasting into the 1950s.

The most outspoken anti-Semite in the Harvard Economics Department at the time that Samuelson left was Harold Burbank. Burbank died in 1951. Some 20 years later, it fell to Rosovsky, in his capacity as chairman of the economics department, to dispose of the contents of his Cambridge house. It turned out that Burbank had left everything to Harvard – enough to ultimately endow a couple of professorships.

So it was Rosovsky, by then the faculty dean, who persuaded Robert Fogel – a Jew and a former Communist married to an African-American woman, whom he hired away from the University of Chicago – to become the first Harold Hitchings Burbank Professor of Political Economy. “He was the only one who didn’t know the history,” said Rosovsky. Fogel went on to share a Nobel prize in economics.

It is the hardest struggles that command the greatest part of our attention. But between Montgomery, Ala., in 1955-56, when the Civil Rights Movement for African-Americans really got going, and the Stonewall Inn riot, in New York, in 1969, when homosexuals' rights started to get a lot of attention, a great many groups graduated to “whiteness,” as Daniel Rodgers puts it in Age of Fracture – including Jews, Irish Catholics, and, of course, women.

Whatever it was that MIT started, Harvard and all other major universities soon enough accelerated – in economics as well as the dismantling of stereotypes of race and gender. By the time that comparative literature professor Ruth Wisse asserted that anti-Semitism had brought down former Harvard President Lawrence Summers, virtually no one took her seriously.

David Warsh, a longtime economic historian and financial journalist, is proprietor of economicprincipals.com.


David Warsh: The high-speed bailout of 2009


I spent some hours last week browsing the newly released transcripts of Federal Open Market Committee meetings in 2009.  Mostly I relied on the extraordinary “live tour” and subsequent coverage by The Wall Street Journal team.

I was struck by how greatly the action had shifted to the incoming administration of President Barack Obama.   The acute-panic phase of the crisis was past, and relatively little of the drama of that troubled year is captured in the talk of monetary policy.

On Jan. 15, President George W. Bush asked Congress to authorize the incoming Obama administration to spend $350 billion in Troubled Asset Relief Program funds.  Obama was inaugurated Jan. 20.

Treasury Secretary Timothy Geithner on Feb. 10 announced a financial stabilization plan consisting mainly of stress tests for the nineteen largest bank holding companies.

In a conference call, Fed Chairman Ben Bernanke explained to the Federal Open Market Committee that the details were hazy. “It’s like selling a car: Only when the customer is sold on the leather seats do you actually reveal the price.”

On Feb. 17 Obama signed the American Recovery and Reinvestment Act of 2009, a stimulus package of around $800 billion in spending measures and tax cuts designed to promote economic recovery.

In March the Fed announced it planned to purchase $1.25 trillion of mortgage-backed securities in 2009, expanding the “quantitative easing” program it had begun the previous November.   Also the administration’s bailout of the auto industry was completed.

In May, Geithner reported that nine banks were judged sufficiently well capitalized to have passed the stress tests. Ten others would be required to raise additional capital by November.  Gradually the stabilization was recognized to have been a success.

And in August, Obama nominated Bernanke to a second term as Fed chairman. Senior White House adviser Lawrence Summers had been unsuccessful in his efforts to replace first Geithner, then Bernanke.  He would try again.

Bernanke’s book-length account of all this is expected in the fall. About the same time, U.S. Court of Federal Claims Judge Thomas Wheeler likely will have delivered a verdict in a lawsuit against the government alleging that Bernanke acted illegally when the Fed took control of insurance giant American International Group at the height of the crisis.

Meanwhile, Summers has been repositioning himself, perhaps hoping to return to the White House in a Hillary Clinton administration.  In a New York Times piece, ''Establishment Populism Rising,'' Thomas Edsall interviews the Harvard professor for an update on Summers’s thinking.

xxx

I continue to get my news of Russia from even-handed Johnson’s Russia List – five issues last week alone, containing 188 items from the U.S., European and Russian press, most of which, needless to say, I did not read.  Two that I did stood out.

Jack Matlock, ambassador to the disintegrating Soviet Union under George H.W. Bush, wrote on his blog that the “knee-jerk” conviction that Vladimir Putin was directly responsible for the deliberately shocking murder of Russian dissenter Boris Nemtsov overlooks other possibilities. “So far nothing is absolutely clear about this tragedy except that an able politician and fine man was gunned down in cold blood,” he concluded.

Peter Hitchens, in The Spectator, argued that “It’s NATO that’s empire-building, not Putin.” His principal authority, George Friedman, founder of the high-end publisher Stratfor, dates the current phase of the conflict from Putin’s refusal to go along with US policy in Syria in 2011.

xxx

I was struck that when the Club for Growth asked Wisconsin Gov. Scott Walker about his foreign-policy credentials, he replied that he considered Ronald Reagan’s decision to fire striking federal air-traffic controllers in 1981 “the most important foreign-policy decision of his lifetime.”


{Added by New England Diary overseer: Walker said of the firings: “It sent a message not only across America, it sent a message around the world” that “we weren’t to be messed with.”}

When you’re a kid with a hammer, the whole world looks like a nail.

(Reagan speechwriter and Wall Street Journal columnist Peggy Noonan offered Walker some half-hearted backup and The Washington Post zeroed in on Walker’s  cram course in foreign policy.)

I mention it mainly to say that, having spent most of my life covering economic development, one way or another, I’d say that the skein of events more important to U.S. foreign policy than any other was the one in which the march in Selma, Ala., commemorated this weekend, played an important part.

David Warsh is proprietor of economicprincipals.com and a longtime financial journalist and economic historian.

David Warsh: U.S. party politics in our more perilous times

“Anti-Drone Burqa,” by ADAM HARVEY, in the show “Permanent War: The Age of Global Conflict,” School of the Museum of Fine Arts, Boston, through March 7.


Why is the race for the Republican presidential nomination shaping up the way it is?  On Friday Mitt Romney ended his bid to return to the lists after only three weeks. It’s clear why he got out:  the Republican Establishment that supported his candidacy in 2012 has switched to backing Jeb Bush.

But why did he get in? We know something about this, thanks to Dan Balz and Philip Rucker of The Washington Post.

One issue that seemed to weigh on Romney was the Jan. 7 terrorist attack in Paris on the Charlie Hebdo publication. Romney talked about the issue with close advisers the night before he declared he would seriously consider running. “Paris was the biggest of all the factors,” the Romney associate said. “It was a tipping point for him about how dangerous the world had become.”

That sounds more than plausible. Romney spent more than two years as a Mormon missionary in France in the late 1960s.

We don’t know much yet even about the reasons that Jeb Bush has stated privately for deciding to enter the race, despite, for instance, this illuminating examination of his involvement in public-education issues in Florida, where he was governor for eight years. It seems a safe bet that his motives eventually will turn out to be similar to those of Romney, stemming from his family’s long involvement in US foreign policy.

If you listen carefully, you can hear tipping going on all around.

For my part, I was deeply surprised to find myself thinking aloud in December that, as a centrist Democrat, I might prefer Bush to Hillary Rodham Clinton in 2016. I expect to read several books and chew plenty of fat over the next few months figuring whether that is really the case.

It’s not simply that I expect that the path to the nomination would require Bush to rein in the GOP’s Tea Party wing – all those space-shots meeting late last month in Iowa – an outcome to be devoutly desired, but not enough in itself to warrant election. More important, it is possible that Bush would promise to bring the Republicans back to the tradition of foreign-policy realism that was characteristic of Presidents Eisenhower, Nixon, Ford and George H.W. Bush, and bring future Democratic candidates along with him. That would be something really worth having.

To the end of thinking about what is involved, I have been reading Overreach: Delusions of Regime Change in Iraq (Harvard, 2014), by Michael MacDonald, professor of international relations at Williams College. It is a brilliant reassessment of the opinion-making forces that led to the American invasion of Iraq, an aide-mémoire more powerful than Madame Defarge’s knitted scarf, for all its careful comparisons, distinctions and citations.

The conventional wisdom has become that George W. Bush all but willed the invasion of Iraq singlehandedly. There is, of course, no doubt that the president was essential, says MacDonald. For one reason or another, Bush positively hankered to go to war. But he had plenty of help.

For one thing, there were the neoconservatives. By 2000, they more or less controlled the Republican Party. MacDonald puts the emphasis less on policymakers such as Vice President Dick Cheney and Defense Secretary Donald Rumsfeld than on the extensive commentariat behind them: journalists Bill Kristol and Robert Kagan at the Weekly Standard and the Project for the New American Century think tank, the long-dead political philosopher Leo Strauss (nothing neo about him) and his latter-day acolyte Harvey Mansfield, of Harvard, and Bernard Lewis, a historian of Islamic culture, to name the most prominent.

For another, there were the Democratic hawks. The Democratic Party itself divided into three camps: opponents (Sen. Edward Kennedy, former Vice President Al Gore, House Speaker Nancy Pelosi); cautious supporters (Senators John Kerry, Hillary Clinton and Joe Biden, former President Bill Clinton); and passionate supporters (Senators Joseph Lieberman, Dianne Feinstein, and Evan Bayh). Former Clinton adviser Kenneth Pollack made the argument for war in The Threatening Storm: The Case for Invading Iraq.

MacDonald discounts the theory that the oil companies argued for war, with a view to obtaining control of Iraqi reserves. But he credits the argument that Israel and the Israeli lobby in the United States strongly supported regime change. So did the pundits, ranging from Thomas Friedman of The New York Times to Michael Kelly of The Atlantic to Max Boot of The Wall Street Journal, as well as the editors of The New Yorker, The New Republic and Slate. Economic Principals, whose column you are reading now, was a follower in this camp.


At first the war went well.  The U.S. captured Baghdad, Saddam fled, and Bush staged his “Mission Accomplished” landing on an aircraft carrier.  But after the apparent victory began to melt away, MacDonald writes, those who had supported the war for whatever reason united in what he calls the Elite Consensus designed to shift the blame.

The war should have been won but it was poorly planned. There weren’t enough U.S. troops. Defense chief Rumsfeld was preoccupied with high-tech weaponry.  Administrator Paul Bremer was arrogant. The Americans never should have disbanded the Iraqi army.  The Iraqis were incurably sectarian.  The Americans lacked counterinsurgency doctrine. The whole thing was Bush and Cheney’s fault.  And, whatever else, the Elite Consensus was not at fault.

In fact, writes MacDonald, the entire intervention was based on the faulty premise that American values were universal.  Regime change would be easy because Iraqis wanted what Americans wanted for them:  democracy, individualism, constitutional government, toleration and, of course, free markets.  Some did, but many did not.

Breaking the state was easy; liberating Iraq turned out to be impossible. Instead, MacDonald notes, the always precarious nation has turned into “a bridge connecting Iran to Syria.” Meanwhile, Russia is annexing eastern Ukraine, over its neighbor’s attempts to break away from Russian influence and enter the economic sphere of the European Union. It has become a much more dangerous world.

Hence the dilemma facing Hillary Clinton and Jeb Bush, if either or both are to become presidential candidates in 2016. Can they back away from the proposition that has been at the center of American foreign policy since the end of the Cold War – as Michael MacDonald puts it, that we are the world, and the world is better for it?

David Warsh, a longtime business journalist and economic historian, is proprietor of economicprincipals.com

David Warsh: Knowledge economy superstars and hollowing of middle class



It is one of the great explications of economics of modern times: written in 1958 by the libertarian Leonard Read, as the essay “I, Pencil,” and subsequently performed by Milton Friedman as “The Pencil,” a couple of minutes of Free to Choose, the 10-part television series he made with his economist wife, Rose Director Friedman, broadcast and published in 1980.

Friedman comes alive as he enumerates the various products required to make a simple pencil: the wood (and, of course, the saw that cut down the tree, the steel that made the saw, the iron ore that made the steel and so on), graphite, rubber, paint (“This brass ferrule? I haven’t the slightest idea where it comes from”).

Literally thousands of people cooperated to make this pencil – people who don’t speak the same language, who practice different religions, who might hate one another if they ever met.

This is Friedman as he was experienced by those around him, sparks shooting out of his eyes. The insight itself might as well have been Frederic Bastiat in 1850 explaining the provisioning of Paris, or Adam Smith himself in 1776 writing about the economics of the pin factory.

There is a problem, though. None of these master explicators has so much as a word to say about how the pencil comes into being. Nor, for that matter, does most present-day economics, which remains mainly about prices and quantities. As Luis Garicano, of the London School of Economics, and Esteban Rossi-Hansberg, of Princeton University, write in a new article for the seventh edition of the Annual Review of Economics:

Mainstream economic models still abstract from modeling the organizational problem that is necessarily embedded in any production process. Typically these jump directly to the formulation of a production function that depends on total quantities of a pre-determined and inflexible set of inputs.

In other words, economics assumes the pencil. Though this approach is often practical, Garicano and Rossi-Hansberg write, it ignores some very important issues, those surrounding not just the companies that make the products that make pencils, and the pencils themselves, but the terms under which all their employees work, and, ultimately, the societies in which they live.

In “Knowledge-based Hierarchies: Using Organizations to Understand the Economy,” Garicano and Rossi-Hansberg lay out in some detail a prospectus for an organization-based view of economics. The approach, they say, promises to shed new light on many of the most pressing problems of the present day: the evolution of wage inequality, the growth and productivity of firms, the gains from trade, the possibilities for economic development, the gains from off-shoring and the formation of international teams — and, ultimately, the taxation of all that.

The authors note that, at least since Frank Knight described the role of entrepreneurs, in Risk, Uncertainty, and Profit, in 1921, economists have recognized the importance of understanding the organization of work. Nobel laureates Herbert Simon and Kenneth Arrow each tackled the issue of hierarchy. Roy Radner, of Bell Labs and New York University, went further than any other in developing a theory of teams, especially, the authors say, in “The Organization of Decentralized Information Processing,” in 1993.

But all the early theorizing, economic though it may have been in its concern for incentives and information, was done in isolation from analysis of the market itself, according to Garicano and Rossi-Hansberg. The first papers had nothing to say about the effects of one organization on all the others, or about the implications of the fact that people differ greatly in their skills.

That changed in 1978, the authors say. A decade earlier, legal scholar Henry Manne had noted that a better pianist had higher earnings not only because of his skill; his reputation meant that he played in larger halls. The insight led Manne to conjecture that large corporations existed to allocate the production of managers most efficiently, like so many pianists of different levels of ability.

It was Robert Lucas, of the University of Chicago, who took up the task in 1978 of showing precisely how such “superstar” effects might account for the size of firms, with CEOs of different abilities hiring masses of undifferentiated workers – and why scale might be an important aspect of organization. He succeeded mainly in the latter, generating fresh interest among economists in the work of business historian Alfred Chandler.

It was Sherwin Rosen, of the University of Chicago, with “The Economics of Superstars,” in 1982, who convincingly made the case that the increasing salaries paid to managers had to do with the increase in the scale of the operations over which they presided (and, with athletes, singers and others, the size of the audiences for whom they performed). A good manager might increase the productivity of all workers; the competition among firms to hire the best might cause the winners to build more and larger teams; but Rosen didn’t succeed at building hierarchical levels into his model.  He died in 2001, at 62, a few months after he organized the meetings of the American Economic Association as president.

Many others took up the work, including Garicano and Rossi-Hansberg.  It was the ’90s, not long after a flurry of work on the determinants of economic growth had spelled out for the first time in formal terms the special properties of knowledge as an input in production. The work on skills and layers in hierarchies gained traction once knowledge entered the picture.

At the meetings of the American Economic Association this weekend in Boston, a pair of sessions were  devoted to going over that old ground, one on the “new growth economics” of the Eighties, another on the “optimal growth” literature of the Sixties. Those hoping for clear outcomes were disappointed.

Chicago’s Lucas; Paul Romer, of New York University; and Philippe Aghion, of Harvard University, talked at cross purposes, sometimes bitterly, while Aghion’s research partner, Peter Howitt, of Brown University, looked on. But Gene Grossman, of Princeton University, who with Elhanan Helpman, of Harvard University, was another contestant in what turned out to be a memorable race, put succinctly in his prepared remarks what he thought had happened:

Up until the mid-1980s, studies of growth focused primarily on the accumulation of physical capital. But capital accumulation at a rate faster than the rate of population growth is likely to meet diminishing returns that can drive the marginal product of capital below a threshold at which the incentives for ongoing investment vanish. This observation led Romer (1990), Lucas (1988), Aghion and Howitt (1992), Grossman and Helpman (1991) and others to focus instead on the accumulation of knowledge, be it embodied in textbooks and firms as “technology” or in people as “human capital.” Knowledge is different from physical capital inasmuch as it is often non-rivalrous; its use by one person or firm in some application does not preclude its simultaneous or subsequent use by others.

My guess is that “Knowledge-based Hierarchies: Using Organizations to Understand the Economy” will mark a watershed in this debate, the point after which arguments about the significance of knowledge will be all downhill. “If one worker on his own doesn’t know how to program a robot, a team of ten similar workers will also fail,” write Garicano and Rossi-Hansberg. The only question is whether to make or buy the necessary know-how.

What’s new here is the implication that as inequality at the top of the wage distribution grows, inequality at the bottom will diminish, as the middle class is hollowed out.

[E]xperts, the superstars of the knowledge economy, earn a lot more while less knowledgeable workers become more equal since their knowledge becomes less useful.  Moreover, communications technology allows superstars to leverage their expertise by hiring many workers who know little, thereby casting a shadow on the best workers who used to be the ones exclusively working with them. We call this the shadow of superstars.

For a poignant example of the shadow, see last week’s cover story in The Economist, “Workers on Tap.”  The lead editorial rejoices that a young computer programmer in San Francisco can live like a princess, with chauffeurs, maids, chefs and personal shoppers.  How? In “There’s an App for That,” the magazine explains that entrepreneurs are hiring “service pros” to perform nearly every conceivable service – Uber, Handy, SpoonRocket and Instacart are among the startups. These freelancers earn something like $18 an hour. The most industrious among them, something like 20 percent of the workforce, earn as much as $30,000 a year.  The entrepreneurs get rich. The taxi drivers, restaurateurs, grocers and secretaries who used to enjoy middle-class livings are pressed.

Work on the organization-based view of economics is just beginning:  Beyond lie all the interesting questions of industrial organization, economic development, trade and public finance.  Much of the agenda is set out in the volume whose appearance marked the formal beginnings of the field,  The Handbook of Organizational Economics (Princeton, 2013), edited by Robert Gibbons, of the Sloan School of Management of the Massachusetts Institute of Technology, and John Roberts, of Stanford University’s Graduate School of Business. Included is a lucid survey of the hierarchies literature by Garicano and Timothy Van Zandt, of INSEAD.

The next great expositor of economics, whoever she or he turns out to be, will give a very different account of the pencil.

xxx

Andrew W. Marshall retired last week after 41 years as director of the Defense Department’s Office of Net Assessment, the Pentagon’s internal think-tank. A graduate of the University of Chicago, a veteran of the Cowles Commission and RAND Corp., Marshall was originally appointed by President Nixon, at the behest of Defense Secretary James Schlesinger, and reappointed by every president since. He served fourteen secretaries with little external commotion.

A biography to be published next week, The Last Warrior: Andrew Marshall and the Shaping of Modern American Defense Strategy (2015, Basic Books), by two former aides, Andrew Krepinevich and Barry Watts, is already generating commotion. Expect to hear more about Marshall in the coming year.

David Warsh, a longtime financial columnist and economic historian, is proprietor of

David Warsh: The 'pie-giver' and the 'liberal' vs. 'realist' view of Russia

Perhaps the single most intriguing mystery of the Ukrainian crisis has to do with how the Foreign Service officer who served as deputy national security adviser to Vice President Dick Cheney for two years, starting on the eve of the invasion of Iraq, became the Obama administration’s point person on Russia in 2014. Victoria Nuland took office as assistant secretary of  state for European and Eurasian affairs a year ago this week.
It was Nuland who in February was secretly taped, probably by the Russians, saying “F--- the E.U.” for dragging its feet in supporting Ukrainian demonstrators seeking to displace its democratically elected pro-Russian president Viktor Yanukovych, two months after he rejected a trade agreement with the European Union in favor of one with Russia. She made a well-publicized trip to pass out food in the rebels’ encampment on Kiev’s Maidan Square in the days before Yanukovych fled to Moscow.
When Russian President Vladimir Putin said the other day, “Our Western partners, with the support of fairly radically inclined and nationalist-leaning groups, carried out a coup d'état [in Ukraine]. No matter what anyone says, we all understand what happened. There are no fools among us. We all saw the symbolic pies handed out on the Maidan,” Nuland is the pie-giver he had in mind.
Before she was nominated to her current job, Nuland was State Department spokesperson under Secretary Hillary Rodham Clinton during the congressional firestorm over the attack on the diplomatic post in Benghazi, Libya.
So how did the Obama administration manage to get her confirmed – on a voice vote with no debate?  The short answer is that she was stoutly defended by New York Times columnist David Brooks and warmly endorsed by two prominent Republican senators, Lindsey Graham, of South Carolina, and John McCain, of Arizona.
Clearly Nuland stands on one side of a major fault-line in the shifting, often-confusing tectonic plates of U.S. politics.
A good deal of light was shed on that divide by John Mearsheimer, of the University of Chicago, in an essay earlier this month in Foreign Affairs.  In “Why the Ukraine Crisis Is the West’s Fault,” Mearsheimer described the U.S.  ambitions to move Ukraine out of Russia’s orbit via expansion of the North Atlantic Treaty Organization as the taproot of the crisis.  Only after Yanukovych fled Ukraine did Putin move to annex the Crimean peninsula, with its longstanding Russian naval base.
Mearsheimer writes:
Putin’s actions should be easy to comprehend. A huge expanse of flat land that Napoleonic France, imperial Germany, and Nazi Germany all crossed to strike at Russia itself, Ukraine serves as a buffer state of enormous strategic importance to Russia. No Russian leader would tolerate a military alliance that was Moscow’s mortal enemy until recently moving into Ukraine. Nor would any Russian leader stand idly by while the West helped install a government there that was determined to integrate Ukraine into the West.
Washington may not like Moscow’s position, but it should understand the logic behind it. This is Geopolitics 101: great powers are always sensitive to potential threats near their home territory. After all, the United States does not tolerate distant great powers deploying military forces anywhere in the Western Hemisphere, much less on its borders. Imagine the outrage in Washington if China built an impressive military alliance and tried to include Canada and Mexico in it. Logic aside, Russian leaders have told their Western counterparts on many occasions that they consider NATO expansion into Georgia and Ukraine unacceptable, along with any effort to turn those countries against Russia -- a message that the 2008 Russian-Georgian war also made crystal clear.
Why does official Washington think any differently? (It’s not just the Obama administration, but much of Congress as well.)  Mearsheimer delineates a “liberal” view of geopolitics that emerged at the end of the Cold War, as opposed to a more traditional “realist” stance.  He writes,
As the Cold War came to a close, Soviet leaders preferred that U.S. forces remain in Europe and NATO stay intact, an arrangement they thought would keep a reunified Germany pacified. But they and their Russian successors did not want NATO to grow any larger and assumed that Western diplomats understood their concerns. The Clinton administration evidently thought otherwise, and in the mid-1990s, it began pushing for NATO to expand.
The first round of NATO expansion took place in 1999, and brought the Czech Republic, Hungary and Poland into the treaty. A second round in 2004 incorporated Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia and Slovenia.  None but the tiny Baltic Republics shared a border with Russia. But in 2008, in a meeting in Bucharest, the Bush administration proposed adding Georgia and Ukraine.  France and Germany demurred, but the communique in the end flatly declared, “These countries will become members of NATO.”  This time Putin issued a clear rejoinder – a five-day war in 2008 which short-circuited Georgia’s application (though Georgia apparently continues to hope).
The program of enlargement originated with key members of the Clinton  administration, according to Mearsheimer. He writes:
They believed that the end of the Cold War had fundamentally transformed international politics and that a new, post-national order had replaced the realist logic that used to govern Europe. The United States was not only the “indispensable nation,” as Secretary of State Madeleine Albright put it; it was also a benign hegemon and thus unlikely to be viewed as a threat in Moscow. The aim, in essence, was to make the entire continent look like Western Europe.
In contrast, the realists who opposed expansion did so in the belief that Russia had voluntarily joined the world trading system and was no longer much of a threat to European peace. A declining great power with an aging population and a one-dimensional economy did not, they felt, need to be contained.
Mearsheimer writes:
And they feared that enlargement would only give Moscow an incentive to cause trouble in Eastern Europe. The U.S. diplomat George Kennan articulated this perspective in a 1998 interview, shortly after the U.S. Senate approved the first round of NATO expansion. “I think the Russians will gradually react quite adversely and it will affect their policies,” he said. “I think it is a tragic mistake. There was no reason for this whatsoever. No one was threatening anyone else.”
Policies devised in one administration have a way of hardening into boilerplate when embraced by the next. So thoroughly have liberals come to dominate discourse about European security that even the short war with Georgia has done little to bring realists back into the conversation. The February ouster of Yanukovych is either cited as the will of a sovereign people yearning to be free or, more frequently, simply ignored altogether.
Mearsheimer writes:
The liberal worldview is now accepted dogma among U.S. officials. In March, for example, President Barack Obama delivered a speech about Ukraine in which he talked repeatedly about “the ideals” that motivate Western policy and how those ideals “have often been threatened by an older, more traditional view of power.” Secretary of State John Kerry’s response to the Crimea crisis reflected this same perspective: “You just don’t in the twenty-first century behave in nineteenth-century fashion by invading another country on completely trumped-up pretext.”
Nuland was present at the creation of the liberal view. She served for two years in the Moscow embassy, starting in 1991; by 1993 she was chief of staff to Deputy Secretary of State Strobe Talbott. She directed a study on NATO enlargement for the Council on Foreign Relations in 1996, and spent three more years at State as deputy director for Former Soviet Union Affairs.
After a couple of years on the beach at the Council on Foreign Relations, Nuland was named deputy ambassador to NATO by President George W. Bush in 2001. She returned to Brussels in the top job after her service to Cheney. When Obama was elected, she cooled her heels as special envoy to the Talks on Conventional Forces in Europe for two years until Clinton elevated her to spokesperson. Secretary of State John Kerry promoted her last year.
It seems fair to say that Putin has trumped Obama at every turn in the maneuvering over Ukraine – including last week, when the Russian president concluded a truce with the humbled Ukrainian President Petro Poroshenko while leaders of the NATO nations fumed ineffectively at their biennial summit, this year in Wales. Never mind the Islamic State in Iraq and Syria; China; Israel. Even in Europe, the president’s foreign policy is in tatters.
Backing away from the liberal view is clearly going to be costly for some future presidential aspirant. The alternative is to maintain the expensive fiction of a new Cold War.
David Warsh is a longtime financial journalist and economic historian. He is proprietor of

David Warsh: Nobel Prizes and macro vs. growth


It was about a year ago that Paul Krugman asked, “[W]hatever happened to New Growth Theory?”  The headline of the item on the blog with which the Nobel laureate supplements his twice-weekly columns for The New York Times telegraphed his answer: The New Growth Fizzle.  He wrote:

''For a while, in the late 1980s and early 1990s, theories of growth with endogenous technological change were widely heralded as the Next Big Thing in economics. Textbooks were restructured to put long-run growth up front, with business cycles (who cared about those anymore?) crammed into a chapter or two at the end. David Warsh wrote a book touting NGT as the most fundamental development since Adam Smith, casting Paul Romer as a heroic figure leading economics into a brave new world.

''And here we are, a couple of decades on, and the whole thing seems to have fizzled out. Romer has had a very interesting and productive life, but not at all the kind of role Warsh imagined. The reasons some countries grow more successfully than others remain fairly mysterious, with most discussions ending, as Robert Solow remarked long ago, in a “blaze of amateur sociology”. And whaddya know, business cycles turn out still to be important.''

Krugman’s post raised eyebrows in my circles because many insiders expected that a Nobel Prize for growth theory would be announced within a few weeks. A widely noticed Nobel symposium had been held in Stockholm in the summer of 2012, the usual (though not inevitable) prelude to a prize.  Its proceedings had been broadcast on Swedish educational television.  Romer, of New York University, had been the leadoff speaker; Peter Howitt, of Brown University, had been his discussant; Philippe Aghion, of Harvard University and the Institute for International Studies, the moderator of the symposium.

Knowing this, I let Krugman’s gibe pass unchallenged, even though it seemed flat-out wrong. These things were best left to the Swedes in private, I reasoned; let the elaborate theater of the prize remain intact.

Then came October, and a surprise of a slightly different sort.  Rather than rousing one or more of the growth theorists, the early morning phone calls went to three economists to recognize their work on trend-spotting among asset prices and the difficulty thereof – Eugene Fama, Robert Shiller and Lars Hansen.  Fama’s work had been done 50 years before; Shiller’s, 35. Two big new financial industries, index funds and hedge funds, had grown up to demonstrate that the claims of both were broadly right, in differing degrees. Hansen had illuminated their differences. So old and safe and well-prepared was the award that its merit couldn’t possibly be questioned.

What happened?  It’s well known that, in addition to preparing each year’s prize,  prize committees work ahead on a nomination or two or even three, assembling slates of nominees for future years in order to mull them over. Scraps of evidence have emerged since last fall that a campaign was mounted last summer within the Economic Sciences Section of the Academy, sufficient to stall the growth award and bring forward the asset-pricing prize – resistance to which Krugman may have been a party.

These things happen.  The fantasy aspects of the Nobel Prize – the early-morning phone call out of the blue – have been successfully enough managed over the years as to distract from the “hastily arranged” press conferences that inevitably follow, the champagne chilled and ready to hand. Laureates, in general, are only too happy to play along.  Sometimes innocence may even be real. Simon Kuznets, on his way to visit Wassily Leontief in New York in 1971, told friends that he had overheard only that “some guy with a Russian name” had won, before stepping into the high-rise elevator that would carry him to his friend’s apartment. It was, he said, the longest ride of his life.

As described on the Nobel website, the committee meets in February to choose preliminary candidates, consults experts in the matter during March and April, settles on a nomination in May, writes up an extensive report over the summer, and sends it in September to the Social Science class of the Royal Swedish Academy of Sciences – around seventy professors, most of them Scandinavians – where it is widely discussed. Thus by summer, the intent of the committee is known, if very closely held, by a fairly large fraternity of scientists. The 600-member Academy then votes in October.

There is nothing obvious about the path that the economics prize award should take; even within the Academy there are at least a couple (and probably more) different versions of what the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, established in 1969, is all about. Wide-ranging and free-wheeling discussion among the well-informed is therefore crucial to its success; so is dependable confidentiality. Nominations and surrounding documentation are sealed for 50 years, so none of this has been revealed yet since the economics prize was established less than 50 years ago.

Over the years, however, scraps of information have leaked out about struggles that have taken place behind the scenes, in areas where sharp philosophical disagreements existed. For example, Gordon Tullock, of George Mason University, a lawyer and career diplomat with no formal training in economics, told me years ago that he woke in 1986 expecting to share the prize for public choice with James Buchanan.  He didn’t.  In her biography of game theorist John Nash, A Beautiful Mind, Sylvia Nasar reported that Ingemar Ståhl had sought to delay an award to Nash by moving up the prize prepared for Robert Lucas.  He didn’t succeed, and Lucas was honored, as had been planned, the following year. (Harold Kuhn, the Princeton mathematician who tirelessly ensured that Nash’s story would be told, died last week, at 88.)

Something of the sort may actually have happened in 2003: preparations were made in Minneapolis for a press conference for Edward Prescott, then of the University of Minnesota; the prize went instead to a pair of low-key econometricians, Clive Granger and Robert Engle, both of the University of California at San Diego. Prescott and Finn Kydland, of the University of California at Santa Barbara, were cited the following year, “for their contributions to dynamic macroeconomics: the time consistency of economic policy and the driving forces behind business cycles.”  The latter award remains even more controversial today than it was then.

Indeed, Krugman’s own prize may have been moved up, amidst concern in Stockholm for the burgeoning financial crisis of 2008.  As late as that October it was believed, at least in Cambridge, Mass., that the committee had recommended that a prize be given for measurement economics, citing Dale Jorgenson, of Harvard University; Erwin Diewert, of the University of British Columbia; and Robert Hall, of Stanford University. It would have been the first prize for empirical economics since the award to Richard Stone, in 1984, and only the third since Kuznets was recognized, in 1971.  Instead the prize went to Krugman, by then working mainly as a columnist for The Times, “for his analysis of trade patterns and location of economic activity.”

No one seriously disputes that Krugman should have been recognized at some point for the consensus-changing work he did, beginning in the late 1970s, on monopolistic competition among giant corporations engaged in international trade, though a common view in the profession is that two others, Elhanan Helpman, of Harvard University, and Gene Grossman, of Princeton University, should have shared in the award. Committees over the years have been very conscious of the emphasis conferred by a solo award – only 22 of 45 economics prizes have been “singletons.”

The deferral of the measurement prize, if that is what happened, suggests there must have been considerable tumult behind the scenes. The gravity of the global financial crisis was very clear in Stockholm in September 2008. What happened in those few months won’t be known with any certainty for another forty-four years. But the effect of the award in October 2008 was to empower Krugman as a spokesman for the tradition of Keynesian macroeconomic analysis.  He responded with alacrity and has employed his bully pulpit since.

So much, then for what is known and, mostly, not quite known, about the recent politics of the prize.  What about the contest between macroeconomics and growth?

Macro is the dominant culture of economics – the center ring ever since Keynes published The General Theory of Employment, Interest and Money, in 1936. It is a way of looking at the world, “an interpretation of events, an intellectual framework, and a clear argument for government intervention,” especially in the management of the business cycle, according to Olivier Blanchard, author of an authoritative text, Macroeconomics. There are many other fields in economics, but macro is the one that seeks to give an overall narrative and analytic account of expansion and recession, of capacity and utilization, of inflation and unemployment.  Macro has had its ups and downs in the years since 1936. Today anyone who studies fluctuations is a macroeconomist; but not all macroeconomists acknowledge the centrality of Keynes.

In the 1950s and ’60s, a “neoclassical synthesis” merged Keynesian contributions with all that had gone before. New standards for formal models, plus national income and product accounts and measures of the flow of funds,  produced various rules of thumb for managing modern industrial economies: Okun’s Law (output related to unemployment) and the Phillips Curve (inflation to unemployment); and so on. By the end of the 1960s, many economists thought of their field as mature.


In the ’70s came the “expectations revolution,” a series of high-tech developments (most of them anticipated by low-tech Milton Friedman), in which economists sought to build accounts of forward-looking people and firms into the macro scheme of things. The effectiveness of monetary policy was debated, until the Federal Reserve Board, under Paul Volcker, gave a powerful demonstration of its effectiveness.   Reputation and credibility became issues; targets and new rules emerged.


Growth theory, on the other hand, has a less clear-cut provenance.  There is no doubt that it began with Adam Smith, who, in the very first sentence of The Wealth of Nations, pronounced that the greatest improvement in the productive powers of humankind stemmed from the division of labor. Smith expounded for three chapters on the sources and limits of specialization, using a mass-production pin factory as his example, before dropping the topic in order to elucidate what  economists today call “the price system.” Interest in the kind of technological change that the pin factory represented faded into the background.


Karl Marx was a growth theorist (remember “Asiatic,” “ancient,” “feudal,” “bourgeois” modes of production and all that?), but he came late to economics and never found his way into the official canon. So was Joseph Schumpeter, who came closer to giving a persuasive account in economic terms but still failed to leave much of a mark. In the ’50s, MIT’s Robert Solow, a leading macroeconomist, ingeniously showed that most of the forces generating gains in wealth (gross domestic product per capita) were exogenous, that is, outside the standard macro model, unexplained by it as the tradition stood. Macro debates continued to flourish. By the end of the ’70s, interest in growth had once again faded away in technical economics.


In the ’80s, excitement over growth was suddenly rekindled in economics by three key papers, of which Romer wrote two and Robert Lucas wrote one. Romer’s primary interest was in “endogenizing” technology; that is, showing why governments, universities, corporations and entrepreneurs engaged in research. Lucas was intrigued by stories from international trade:  Asian trading nations such as Japan, Hong Kong, Taiwan, Korea and Singapore grew rich quickly while communist nations stagnated.  Where did the growth “miracles” come from?


As usual, the arguments of both men were intricately related to other on-going debates in technical economics. Lucas, a University of Chicago professor, was at pains to preserve, for convenience’s sake, the assumption of perfect competition.  Romer, educated at both Chicago and MIT and by then teaching at the University of Rochester, was intent on writing intellectual property into the act, employing the sixty-year-old convention of monopolistic competition.  Pure competition “spillovers,” meaning, roughly, the gains you reap from watching your neighbors, animated the first models that Romer and Lucas produced.  Romer’s second – and final – model depended on income streams that arose from new processes and new goods. The University of Chicago hired Romer; after a year, he moved to California where his wife had obtained a better job.


It seems clear that Romer won the debate. Aghion, then at MIT, and Howitt, then at the University of Western Ontario, quickly buttressed the case for viewing growth through the lens of monopolistic competition, but without producing the same clean convention as Romer’s “non-rival goods,” that is, know-how that can be possessed by more than one person at the same time. Helpman and Grossman obtained the same result.

Once it was established formally that privately appropriable knowledge was somehow involved in the process of growth – that ideas were economically important, as well as people and things – interest shifted quickly to the institutions and norms by which knowledge and the power to protect it were diffused.  A shower of interesting new work ensued. The effects on growth of patterns of suffrage, political governance, education, tax policy, land and immigration policy, laws, banking, religion and geography came under economists’ lenses.

The Nobel symposium in 2012 made it clear just how sprawling the “new” literature of growth and development has become. Presenters included a galaxy of stars, nearly every one of them a player in the Nobel nomination league. They ranged from experts on technology, schooling, health, credit, geography, and political and legal institutions to empirical economists and policy-evaluation specialists.  So is it true, then, as Krugman asserted last summer, that “[t]he reasons some countries grow more successfully than others remain fairly mysterious”? Only if you take the view from macro, and an extremely narrow view at that.

This is the sort of swirl that the Nobel program in economic sciences exists to rise above. It is true that Romer, 58, hasn’t made it easy for the Swedes. He stopped writing economics in the ’90s, started an online learning company, sold it, then quit economics altogether, leaving Stanford University’s Graduate School of Business and starting a movement (which he announced in a TED talk) to create “charter cities” in less-developed countries around the world.

Charter cities?  By analogy to charter schools, these city-scale enterprise zones would spring up on greenfield sites, their police and legal systems guaranteed by volunteer foreign governments: perhaps Norway, for example, or Canada. “Opt-in colonialism,” say the critics.  After a couple of last-minute failures, in Madagascar and Honduras, Romer seems to be trying again, this time from the Urbanization Project at New York University’s Stern School of Business.

Second careers have become more common in recent years among economists whose early work has put them into the nomination for a Nobel Prize. Some intellects become bored by the chase.  A. Michael Spence became a business school dean; Krugman took up journalism.  Romer has become a reformer. But before he quit, he carefully dotted his i’s and crossed his t’s.  He added growth to economics’ agenda, once and for all. Its integration into macroeconomics has barely begun.

David Warsh, a longtime financial journalist and an economic historian, is proprietor of economic


Will Bill White be the Perot of 2016?


What might a successful Democratic presidential candidate in 2016 look like who is not Hillary Rodham Clinton?  The Republican Party can’t be expected to field a successful mainstream candidate until some of its serious kinks have been worked out – at least another cycle or two.  This time the Democratic Party nominee is more than likely to win. What happens after that depends on who that candidate is.

The former secretary of state has had a splendid career since striking out on her own as a senator from New York in 2001. It would be good to have a woman president. But, to my mind, Clinton is too tied to battles going back to 1992 and before to hope that, once elected, she could win over her many critics and steer the nation back towards consensus.

A non-polarizing rival for the nomination might look like Bill White. He’s a personable fellow, 59 years old, a former litigator, oil and gas entrepreneur, deputy secretary of energy (1995-97), real-estate developer, and successful three-term mayor of Houston, the nation’s fourth-largest city. His candidacy might  have seemed a logical possibility, except that he was defeated by incumbent Rick Perry in the Texas gubernatorial election in 2010, the year of the Tea Party.  End of story, at least on the surface.

Mainly, I think of White because he has written a book, America's Fiscal Constitution: Its Triumph and Collapse, which seems to me like a very promising platform for a Democratic candidate. White has zeroed in on the long-term federal borrowing crisis that affects every aspect of America’s future role in the world. He has placed it, credibly, in historical perspective. That in turn demonstrates a deep political intelligence.

“To understand what recently has gone wrong,” he writes, “it helps to know what had once gone right.” Five times in the past the United States has found itself deeply in debt: after the American Revolution, the War of 1812, the Civil War, World War I, and the 16 hard years of the Great Depression and World War II.

In each case the U.S. borrowed for a clear and generally agreed-upon purpose: to preserve the union, to expand and connect its borders (the Louisiana Purchase, in particular), to wage war, and to compensate in severe economic downturns, beginning with the Panic of 1819. In each instance, relying on a political tradition that White traces to budgetary procedures instituted by the Founding Fathers, Congress found the political will to pay it down afterwards – until 2001.

Then, says White, the 220-year-old fiscal tradition collapsed.  The federal government cut taxes and borrowed to pay for two wars, and a rising proportion of its domestic operating expenses as well. George W. Bush took office pledging to reduce the national debt by $2 trillion and create a $1 trillion rainy-day fund.  Instead he increased the debt by 50 percent, from $5.7 trillion to $9 trillion – before the Panic of 2008!

In the long and deep recession that followed, federal debt exploded, to $16.7 trillion last year, or something like $120,000 for every working American. Debt coverage, the measure banks commonly use to judge the credit-worthiness of businesses and individuals, rose to nine times the revenue available to pay the debt.
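The debt-coverage measure White invokes is simple arithmetic: total debt divided by the annual revenue available to service it. A minimal sketch (the function name and the revenue figure are mine, purely illustrative, not drawn from the book):

```python
# Debt-coverage ratio as a bank might compute it for a borrower:
# total debt divided by the annual revenue available to service it.
def debt_coverage(total_debt, annual_revenue):
    return total_debt / annual_revenue

# Illustrative figures: $16.7 trillion of debt against revenue equal
# to one-ninth of that. A ratio of 9 means the debt equals nine
# years of the revenue available to pay it down.
print(round(debt_coverage(16.7e12, 16.7e12 / 9), 1))
```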

America’s Fiscal Constitution is gracefully written, but it is not an easy read: 410 pages of narrative, with another 150 pages of notes, bibliography and tables, and only three charts in the entire book to illustrate the argument. White describes in some detail each prior episode of borrowing and, with a politician’s eye, the hard legislative compromises and monetary policy accommodations that were subsequently required to restore the tradition of more-or-less balanced budgets.

None of these chapters is better than the set-piece with which the book begins – the 1950 battle between President Harry Truman and Republican Sen. Robert Taft, of Ohio, over tax cuts on the eve of the Korean War, contrasted with the collapse of the tradition of fiscal responsibility in 2003, as the U.S. prepared to invade Iraq.

So what happened in 2001?  White sees the loss of discipline as having happened in two stages.  The first he traces to the run-up to Ronald Reagan’s election in 1980.  Traditionally the GOP had campaigned on promises to maintain balanced budgets. He describes the role of then-Congressmen David Stockman, of Michigan, and Jack Kemp, of New York, and Wall Street Journal editorial writer Jude Wanniski in fomenting a competition between two Santa Clauses – Democrats who delivered more services, and Republicans who delivered lower taxes. That lowering of taxes, it was promised, would pay for itself by stimulating growth.

Candidate George H.W. Bush, soon to be vice president, saw “voodoo economics.” Ronald Reagan envisioned a “supply-side revolution.” White writes, “Never in the nation’s history had a president proposed large, simultaneous spending increases and tax cuts when the federal budget already had a deficit.”

But Reagan was more nearly a fiscal conservative than a heedless spender, White notes.  He expected higher inflation to make up for lost revenues by carrying taxpayers into higher brackets. And he gave his blessing to an historic rebalancing of the Social Security Trust Fund.  Not until George W. Bush arrived in 2001 was traditional discipline truly lost.

Presented by the Clinton administration with a budget surplus accumulated through a combination of savvy policies (tax increases combined with monetary easing) and good luck (the Internet boom), Bush immediately sought tax cuts, explaining that the resulting deficits were “incredibly good news” because of the “straitjacket” they imposed on Congress. The straitjacket notwithstanding, Bush went to war first in Afghanistan and then in Iraq, too. Congress cut taxes again. “Nothing is more important in the face of war than cutting taxes,” explained House Majority Leader Tom DeLay, of Texas.

What accounts for this sixth great spike in borrowing, unaccompanied by any of the traditional rationale?  The great advantage of White’s argument from history is that it underscores how weighty must be any satisfying explanation for the current mess. Oedipal rivalry in the Bush family is a non-starter.

My own preferred suspect is the entry into civic discourse of claims to authority derived from scientific economics in the years after World War II. This occurred gradually, first in the guise of a “Keynesian Revolution” that put “demand management” through deficit spending at the center of the conduct of economic affairs; then in the form of carelessly conjured “supply management” of the economy through tax cuts. Plain old political pandering played an even larger role.

White, a Democrat, carefully delineates the counter-revolution, but he has little to say about the rise of the “New Economics,” except to note that when John F. Kennedy, in a memorable commencement speech at Yale University in 1962, urged young Americans to develop fiscal policies based on “technical answers, not political answers,” a Gallup Poll a few weeks later found that 72 percent of Americans opposed tax cuts financed by debt. Nevertheless, “guns and butter” policies of the Vietnam War followed.

I have nothing against technical economics; indeed, writing about it is how I make my living. Its findings, large and small, have greatly improved the lot of billions of persons around the world over the last eighty years. But I do think that its claims to authority, especially in public finance, have become somewhat overblown in recent decades, in contrast to the common-sense strictures of the American fiscal tradition whose two-hundred-year arc White describes so clearly.

White identifies four time-honored conventions whose return would begin to solve the problem of today’s massive debt: clear accounting; “pay as you go” budget planning; separate budgeting for government trust funds; and explicit congressional authorization for each new debt.  Is such an extensive simplification politically possible?  Certainly not at the moment.

Might Bill White play a Perot-like role in the 2016 election?  I have no idea, though I hope so.  I do know that the search for alternatives to Hillary Clinton will continue. People will say that the hope for a new and transformative figure is what brought Barack Obama to office in 2008. In my opinion, the strategy has worked pretty well, for all the acrimony.  It is simply taking longer than had been hoped.

David Warsh, a long-time financial journalist and economic historian, is proprietor of Economic Principals.


What to do about capitalism in the 21st Century



"Economist Receives Rock Star Treatment": That was the recent headline on Jennifer Schuessler’s story in The New York Times. The facts bear her out. Thomas Piketty, 42, of the Paris School of Economics, seemed to be everywhere last week. Publication of his 685-page Capital in the Twenty-First Century had been moved up by two months, sales were soaring (46,000 copies so far), and a triumphant tour of Washington (meeting with Treasury Secretary Jack Lew) and New York (appearing at the United Nations) had been completed. Encomiums were pouring in. “Piketty has transformed our economic discourse,” wrote Paul Krugman in the current New York Review of Books. “We’ll never talk about wealth and inequality the way we used to.”

Not bad for an economist who traded an appointment at the Massachusetts Institute of Technology for a job as a researcher for the French government in 1996, when he was 25.  “I did not find the work of US economists entirely convincing,” he writes in the introduction to Capital in the Twenty-First Century:

"I was only too aware that I knew nothing at all about the world’s economic problems. My thesis consisted of several relatively abstract mathematical theorems. Yet the profession liked my work. I quickly realized that there had been no significant effort to collect historical data on the dynamics of inequality since [Simon] Kuznets [in the 1950s and ’60s], yet the profession continued to churn out purely theoretical results without even knowing what facts needed to be explained."

He went home to collect some of the missing facts.

Piketty wanted to teach at the Ecole des Hautes Etudes en Sciences Sociales, the elite institute whose faculty had included many of the foremost figures in the Annales school, including Lucien Febvre and Fernand Braudel – a group of scholars, most  of them quantitative historians, that achieved enormous influence around the world publishing in the journal Annales. Economies, sociétés, civilisations (or Annales. Histoire, Sciences Sociales as it is called today).

Piketty got that job, along with time to do the research he wanted, first producing a book in 2001 on high incomes in France since 1901, then enlisting Anthony Atkinson, of Oxford University, in a similar investigation of Great Britain and several other countries.  His friend and countryman Emmanuel Saez, of the University of California at Berkeley, produced similar data for the US. The World Top Incomes Database (WTID) is the result.  Data on wealth, following the methods of Robert Lampman, of the University of Wisconsin, came next. Starting in 2003, Piketty began setting up the new Paris School of Economics; in 2006, he was named its first head.  He resumed teaching and writing the next year.

Piketty’s thesis is set out succinctly on the first page of his introduction:

"When the rate of return on capital exceeds the rate of growth of output, as it did in the nineteenth century and seems quite likely to do again in the twenty-first, capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which democratic societies are based. There are nevertheless ways democracy can regain control over capitalism and ensure that the general interest takes precedence over private interests, while preserving economic openness and avoiding protectionist and nationalist reactions."
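The mechanism in that first sentence is compound interest: wealth reinvested at a return r grows faster than an economy expanding at g, so the ratio of the two compounds at roughly (1+r)/(1+g) per year. A toy calculation (my illustrative numbers, not Piketty's) makes the point:

```python
# Toy illustration of r > g: a fortune earning r, fully reinvested,
# outpaces an economy growing at g. Numbers are assumptions for
# illustration only.
r, g = 0.05, 0.015        # assumed return on capital, output growth
wealth, income = 1.0, 1.0  # both start at the same index value
for year in range(50):
    wealth *= 1 + r
    income *= 1 + g
# Wealth/income ratio after 50 years (roughly 5.4x)
print(round(wealth / income, 2))
```

Nothing about the arithmetic is controversial; the argument of the book is about whether, and for how long, r actually stays above g.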

What are those measures? Four chapters in the fourth section of the book draw a variety of policy lessons from the first three parts for a “social state”:

"The right solution is a progressive annual tax on capital. This will make it possible to avoid an endless inegalitarian spiral while preserving competition and incentives for new instances of primitive accumulation."

Piketty says he’s left Paris only a few times on short trips since returning nearly 20  years ago.  My hunch is that, after last week, it will be a long time before he takes another. He’s left behind a beautiful book, one that will receive a great deal of attention around the world in the years to come.  He’s gone home to work on others.


Michael C. Janeway, a former editor of The Boston Globe, died last week.  He was 73. It was he who, as managing editor, permitted Economic Principals to begin in 1983 as a column in the Sunday business pages. I have always been grateful to him, and to Lincoln Millstein, still very much alive, who led the blocking.

David Warsh is a long-time financial journalist and economic historian and proprietor of Economic Principals.


David Warsh: Deconstructing the Great Panic of 2008



Lost decades, secular stagnation -- gloomy growth prospects are in the news. To understand the outlook, it is better first to be clear about the recent past. The nature of what happened in September five years ago is now widely understood within expert circles: there was a full-fledged systemic banking panic, the first since the bank runs of the early 1930s. But this account hasn’t yet gained widespread recognition among the public. There are several reasons.

For one thing, the main event came as a surprise even to those at the Federal Reserve and the Treasury Department who battled to end it. Others required more time to figure out how desperate the peril had been.

For another, the narrative of what had happened in financial markets was eclipsed by the presidential campaign and obscured by the rhetoric that came afterwards.

Finally, the agency that did the most to save the day, the Federal Reserve Board, had no natural constituency to tout its success except the press, which was itself pretty severely disrupted at the time.

The standard account of the financial crisis is that subprime lending did it. Originate-to-distribute, shadow banking, the repeal of Glass-Steagall, credit default swaps, Fannie and Freddie, savings glut, lax oversight, greedy bankers, blah blah blah. An enormous amount of premium journalistic shoe leather went into detailing each part of the story. And all of it was pieced together in considerable detail (though with little verve) in the final report of the Financial Crisis Inquiry Commission in 2011.

The 25-page dissent that Republican members Keith Hennessey, Douglas Holtz-Eakin and Bill Thomas appended provided a lucid and terse synopsis of the stages of the crisis that is the best reading in the book.

But even their account omitted the cardinal fact that the Bush administration was still hoping for a soft landing in the summer of 2008. Nearly everyone understood there had been a bubble in house prices, and that subprime lending was a particular problem, but the sum of all subprime mortgages outstanding in 2007 was $1 trillion, less than the market as a whole occasionally lost on a bad day, whereas the evaporation of more than $8 trillion of paper wealth in the dot-com crash a few years earlier had been followed by a relatively short and mild recession.

What made September 2008 so shocking was the unanticipated panic that followed the failure of the investment banking firm of Lehman Brothers. Ordinary bank runs – the kind of things you used to see in Frank Capra films such as "American Madness" and “It’s a Wonderful Life”– had been eliminated altogether after 1933 by the creation of federal deposit insurance.

Instead, this was a stampede of money-market wholesalers, with credit intermediaries running on other credit intermediaries in a system that had become so complicated and little understood after 40 years of unbridled growth that a new name had to be coined for its unfamiliar regions: the shadow banking system – an analysis thoroughly laid out by Gary Gorton, of Yale University’s School of Management, in "Slapped by the Invisible Hand'' (Oxford, 2010).

Rather than relying on government deposit insurance, which was designed to protect individual depositors, big institutional depositors had evolved a system employing collateral – the contracts known as sale and repurchase agreements, or repo – to protect the money they had lent to other firms. And it was the run on repo that threatened to melt down the global financial system. Bernanke told the Financial Crisis Inquiry Commission:

As a scholar of the Great Depression, I honestly believe that September and October of 2008 was the worst financial crisis in global history, including the Great Depression. If you look at the firms that came under pressure in that period… only one… was not at serious risk of failure…. So out of the thirteen, thirteen of the most important financial institutions in the United States, twelve were at risk of failure within a week or two.

Had those firms begun to spiral into bankruptcy, we would have entered a decade substantially worse than the 1930s.
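The "run on repo" that Gorton describes can be made concrete with a toy example (the numbers and function name are mine, purely illustrative): the depositor's protection is the haircut, the discount applied to the collateral, and a run takes the form of suddenly larger haircuts, which shrink the cash a borrower can raise and force asset sales.

```python
# Toy repo sketch: an institutional depositor lends cash overnight
# against bond collateral, discounted by a haircut. Illustrative
# numbers only, not Gorton's.
def cash_raised(collateral_value, haircut):
    """Cash a borrower can raise by repo-ing collateral."""
    return collateral_value * (1 - haircut)

bonds = 100e6                            # market value of collateral
print(cash_raised(bonds, 0.02))          # calm times, 2% haircut: $98M
print(cash_raised(bonds, 0.30))          # panic, 30% haircut: $70M
# The $28M gap must be met by selling assets into a falling market --
# which is what turns rising haircuts into a system-wide run.
```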

Instead, the emergency was understood immediately and staunched by the Fed in its traditional role of lender of last resort and by the Treasury Department under the authority Congress granted in the form of the Troubled Asset Relief Program (though the latter aid required some confusing sleight-of-hand to be put to work).

By the end of the first full week of October, when central bankers and finance ministers meeting in Washington issued a communiqué declaring that no systemically important institution would be allowed to fail, the rescue was more or less complete.

Only in November and December did the best economic departments begin to piece together what had happened.

When Barack Obama was elected, he had every reason to exaggerate the difficulty he faced – beginning with quickly glossing over his predecessor’s success in dealing with the crisis in favor of dwelling on his earlier miscalculations. It’s in the nature of politics, after all, to blame the guy who went before; that’s how you get elected. Political narrative divides the world into convenient four- and eight-year segments and assumes the world begins anew with each.

So when in September Obama hired Lawrence Summers, of Harvard University, to be his principal economic strategist, squeezing out the group that had counselled him during most of the campaign, principally Austan Goolsbee, of the University of Chicago, he implicitly embraced the political narrative and cast aside the economic chronicle. The Clinton administration, in which Summers had served for eight years, eventually as Treasury secretary, thereafter would be cast in the best possible light; the Bush administration in the worst; and key economic events, such as the financial deregulation that accelerated under Clinton and the effective response to panic that took place under Bush, were subordinated to the crisis at hand, which had to do with restoring confidence.

The deep recession and the weakened banking system that Obama and his team inherited was serious business. At the beginning of 2008, Bush chief economist Edward Lazear had forecast that unemployment wouldn’t rise above 5 percent in a mild recession. It hit 6.6 percent on the eve of the election, its highest level in 14 years. By then panic had all but halted global order-taking for a hair-raising month or two, as industrial companies waited for assurance that the banking system would not collapse.

Thus having spent most of 2008 in a mild recession, shedding around 200,000 jobs a month, the economy started serious hemorrhaging in September, losing 700,000 jobs a month in the fourth quarter of 2008 and the first quarter of 2009. After Obama’s inauguration, attention turned to stimulus and the contentious debate over the American Recovery and Reinvestment Act. Summers’s team proposed an $800 billion stimulus and predicted that it would limit unemployment to 8 percent. Instead, joblessness topped out at 10.1 percent in October 2009. But at least the recovery began in June 2009.

What might have been different if Obama had chosen to tell a different story? To simply say what had happened in the months before he took office?

Had the administration settled on a narrative of the panic and its ill effects, and compared it to the panic of 1907, the subsequent story might have been very different. In 1907, a single man, J.P. Morgan, was able to organize his fellow financiers to take a series of steps, including limiting withdrawals, after the panic spread around the country, though not soon enough to avoid turning a mild recession into a major depression that lasted more than a year. The experience led, after five years of study and lobbying, to the creation of the Federal Reserve System.

If Obama had given the Fed credit for its performance in 2008, and stressed the bipartisan leadership that quickly emerged in the emergency, the emphasis on cooperation might have continued. If he had lobbied for “compensatory spending” (the term preferred in Chicago) instead of “stimulus,” the congressional debate might have been less acrimonious. And had he acknowledged the wholly unexpected nature of the threat that had been turned aside, instead of asserting a degree of mastery of the situation that his advisers did not possess, his administration might have gained more patience from the electorate in the congressional elections of 2010. Instead, the administration settled on the metaphor of the Great Depression and invited comparisons to the New Deal at every turn – except for one. Unlike Franklin Delano Roosevelt, Obama made no memorable speeches explaining events as he went along.

Not long after he left the White House, Summers explained his thinking in a conversation with Martin Wolf, of the Financial Times, before a meeting of the Institute for New Economic Thinking at Bretton Woods, N.H. He described the economic doctrines he had found useful in seeking to restore broad-based economic growth, in saving the auto companies from bankruptcy, and in considering the possibility of restructuring the banks (the government owned substantial positions in several of them through TARP when Obama took over). But there was no discussion of the nature of the shock the economy had received the autumn before he took office, and though he prominently mentioned Walter Bagehot, Hyman Minsky and Charles P. Kindleberger, all classic scholars of bank runs, the word panic never came up.

On the other hand, the parallel to the Panic of 1907 surfaced last month in a pointed speech by Bernanke himself to a research conference of the International Monetary Fund. The two crises shared many aspects, Bernanke noted: a weakening economy, an identifiable trigger, recent changes in the banking system that were little understood and still less well regulated, and sharp declines in interbank lending as a cascade of asset “fire sales” began. And the tools the Fed employed to combat the crisis in 2008 were the same ones Morgan had wielded in some degree a hundred years before – generous lending to troubled banks (liquidity provision, in banker-speak), balance-sheet strengthening (TARP aid), and public disclosure of the condition of financial firms (stress tests). But Bernanke was once again eclipsed by Summers, who on the same program praised the Fed’s depression-prevention but announced that he had become concerned with “secular stagnation.”

The best what-the-profession-thinks post-mortem we have as yet is the result of a day-long conference last summer at the National Bureau of Economic Research, observing the hundredth anniversary of the founding of the Fed. An all-star cast turned out, including former Fed chairman Paul Volcker and Bernanke (though neither historian of the Fed Allan Meltzer, of Carnegie Mellon University, nor Fed critic John Taylor, of Stanford University, was invited). Gorton, of Yale, with Andrew Metrick, also of Yale, wrote on the Fed as regulator and lender of last resort. Julio Rotemberg, of Harvard Business School, wrote on the goals of monetary policy. Ricardo Reis, of Columbia University, wrote on central bank independence. It is not clear who made the decision to close the meeting, but the press was excluded from this remarkable event. The papers appear in the current issue of the Journal of Economic Perspectives.

It won’t be easy to tone down the extreme political partisanship of the years between 1992 and 2009 in order to provide a more persuasive narrative of the crisis and its implications for the future – for instance, to get people to understand that George W. Bush was one of the heroes of the crisis. Despite the cavalier behavior of the first six years of his presidency, his last two years in office were pretty good – especially the appointments of Bernanke and Treasury Secretary Henry Paulson. Bush clearly shares credit with Obama for a splendid instance of cooperation in the autumn of 2008. (Bush, Obama and Sen. John McCain met in the White House on Sept. 25, at McCain’s insistence, in the interval before the House of Representatives relented and agreed to pass the TARP bill. Obama dominated the conversation, Bush was impressed, and, by most accounts, McCain made a fool of himself.)

The fifth anniversary retrospectives that appeared in the press in September were disappointing. Only Bloomberg BusinessWeek made a start, with its documentary “Hank,” referring to Paulson. The better story, however, should be called “Ben.” Perhaps the next station on the way to a better understanding will be the appearance of Timothy Geithner’s book, with Michael Grunwald, of Time magazine, currently scheduled to appear in May. There is a long way to go before this story enters the history books and the economics texts.

David Warsh, an economic historian and long-time financial journalist, is proprietor of Economic Principals. He was also a long-ago colleague of Robert Whitcomb.