
David Warsh: Blame political choices, not economists, for today's mess


SOMERVILLE, Mass.

Two recent books by well-regarded journalists, Binyamin Appelbaum and Nicholas Lemann, have blamed economists for the current state of the world. Reviewing them in Foreign Affairs last week, the peripatetic New York University economist Paul Romer embraced the authors’ judgments and added his own.

Having lost track of the distinction between positive and normative economics (is vs. ought), the profession has come to think of itself, and be thought of by others, as a tribe of philosopher-kings, Romer wrote. Citing the OxyContin epidemic and the 2008 financial crisis, he summed up in “The Dismal Kingdom”:

Simply put, a system that delegates to economists the responsibility for answering normative questions may yield many reasonable decisions when the stakes are low, but it will fail and cause enormous damage when powerful industries are brought into the mix. And it takes only a few huge failures to offset whatever positive difference smaller, successful interventions have made.

A third book, Where Economics Went Wrong: Chicago’s Abandonment of Classical Liberalism, by economists David Colander and Craig Freedman (Princeton, 2019), mentioned by Romer, hasn’t received as much attention. It sets out the case in detail, with clarity and depth. A fourth book, In Search of the Two-Handed Economist: Ideology, Methodology and Marketing in Economics, by Freedman (Palgrave, 2016), offers the deepest dive of all, but hardly ever comes up outside of professional circles (where it is often discussed with hand-rubbing, lip-smacking enthusiasm, thanks to the extensive interviews it contains).

According to Colander and Freedman, economics began to go off course in the 1930s, when it embraced an ambitious new program that came to be known as “welfare economics,” replacing the “classical liberalism” of John Stuart Mill. The new framework developed slowly, led by John Hicks and Abba Lerner at the London School of Economics, Arthur Pigou at Cambridge University, and Paul Samuelson at Harvard University and the Massachusetts Institute of Technology, but in the 1950s it seemed virtually to take over the profession, coming to be associated simply with the macroeconomics of John Maynard Keynes.

The new framework was conducted with mathematical and statistical models, in place of arguments about moral philosophy and of curiosity about, even respect for, existing institutions. Believing itself superior to what had gone before, this new approach, soon known simply as “the new economics,” abandoned the traditional firewall between science and policy.

Irked and, for a time, flummoxed by welfare economics’ assertiveness, especially in its Keynesian form, young economists from all over congregated at the University of Chicago, starting in 1943. Led by Milton Friedman, George Stigler and Aaron Director, they began to look for flaws in Keynesian doctrines, which they viewed as “a Trojan horse being used to advance statist ideology and collectivist ideals,” Colander and Freedman say.

Secure in their belief that markets could, to a considerable extent, take care of themselves, thanks to the powerful solvent of competition, the Chicagoans responded to normative science with more stringent normative science. They devised an alternative “scientific” pathway that would lead to their intuited laissez-faire vision.

“Because of their impressive rhetorical and intuitive marketing skills, the Chicago economists eventually managed to engineer a successful partial counterrevolution against [the] general equilibrium welfare economic framework,” write Colander and Freedman. But embracing cost-benefit analysis required abandoning the tenets of debate focused on judgements and sensibilities – “argumentation for the sake of heaven,” as the authors prefer to put it.

So what does a present-day hero of classical liberalism look like? Colander and Freedman cite six well-known exemplars: Edward Leamer, who wrote a classic 1983 critique of scientific pretension, “Let’s Take the Con Out of Econometrics”; Ariel Rubinstein, a distinguished game theorist who describes models as no more compelling than economic fables; Dani Rodrik, a rigorous trade theorist who asked as long ago as 1997, Has Globalization Gone Too Far?; Nobel laureate Alvin Roth, who likens the role of many economists to that of an engineer; Amartya Sen, another laureate, recognized for his “scientific” work on collective decision-making but honored for his policy work on the development of capabilities; and Romer, another laureate, perhaps better known for his biting criticism of “mathiness,” akin to “truthiness,” among leaders of the profession.

All are excellent economists. But almost certainly it was not the economics profession that led the world down a garden path to its present state of discombobulation. In his Foreign Affairs review, Romer asserts:

For the past 60 years, the United States has run what amounts to a natural experiment designed to answer a simple question: What happens when a government starts conducting its business in the foreign language of economists? After 1960, anyone who wanted to discuss almost any aspect of US public policy – from how to make cars safer to whether to abolish the draft, from how to support the housing market to whether to regulate the financial sector – had to speak economics. Economists would bring scientific precision and rigor to government interventions, the thinking went, promising expertise and fact-based analysis.

Far more persuasive were the natural experiments conducted in the language of the Cold War. They include the rise of Japan in the global economy; the decision of China’s leaders to follow its neighbors’ example and join the global market system; the slow decline and rapid final collapse of the Soviet empire; the financial-asset boom that followed Western central bankers’ success in quelling inflation; the globalization that accompanied a burst of “deregulation”; the integration that accompanied the invention of computers, satellites, and the Internet; and the escape from extreme poverty of 1.1 billion people, a seventh of the world’s population.

Rivalries among nations were far more influential in precipitating these changes than were contests among Keynesians and Monetarists, even their magazines and television debates. Political choices produced the present world – grass roots, top-down, and everywhere in between. Economists scrambled to keep up.

David Warsh, an economic historian and veteran columnist, is proprietor of Somerville-based economicprincipals.com, where this column first ran.

© 2020 DAVID WARSH, PROPRIETOR 



David Warsh: The future of the great U.S.-China trade decoupling

In the Port of Shanghai, the world’s biggest container port

James Kynge, the Financial Times bureau chief in Beijing from 1998 to 2005, is among the China-watchers whom I have followed, especially since his China Shakes the World: A Titan’s Rise and Troubled Future – and the Challenge for America appeared, in 2006. Today he operates a pair of proprietary research services for the FT.

So I was disheartened to see Kynge employ an ominous new term in an FT op-ed column on Friday, Aug. 23, “Righteous Anger Will Not Win a Trade War.” President Trump thinks that the U.S. becomes stronger and China weaker as the trade war continues, Kynge wrote, but others see an opposite dynamic at work: “mounting losses for American corporations as the U.S. and Chinese economies decouple after nearly 40 years of engagement.”

Decoupling is so incipient as a term of art in international economics that Wikipedia offers no meaning more precise than “the ending, removal or reverse of coupling.” A decade ago, it implied nothing more ominous than buffering the business cycle (The Decoupling Debate). Former World Bank chief economist Paul Romer, no professional China-watcher but better connected than ever, since he shared a Nobel Prize in economics last year, returned from a trip there in June with something of a definition. The mood in China, at least in technology circles, was grim but determined, he told Bloomberg News.

“I think what they’ve decided is that the U.S. is not a reliable trading partner, and they can’t maintain their economy or their tech industry if it’s dependent on critical components from the United States. So I think they are on a trajectory now, that they’re not going to move off of, of becoming wholly self-sufficient in technology. Even if there’s a paper deal that covers over this trade war stuff, I think we’ve seen a permanent change in China’s approach…. There’s no question that they’re on a trajectory to become completely independent of the United States because they just can’t count on us anymore.’’

How long might it take to disengage more or less fully at the level of technological standards? More than five years, maybe ten, Romer guessed, citing Chinese estimates. For that length of time, Kynge reckons, U.S. high-tech vendors would continue to suffer. American companies and their affiliates sell nine times more in China than their Chinese counterparts sell in the United States, according to one estimate he cited. Cisco and Qualcomm report being squeezed out of China markets, he says. HP, Dell, Microsoft, Amazon and Apple are considering pulling back.

The long-term competition for technological dominance worries Kynge more than the trade war.  In many industries, he writes, China is thought to be already ahead. Among those he lists are high-speed rail, high-voltage transmission lines, renewables, new energy vehicles, digital payment systems, and 5G telecom technologies. And while there is no agreement about which nation possesses the more effective start-up culture, in university-based disciplines such as artificial intelligence, quantum computing, and biomedicine, in which the U.S. has been thought to have been well ahead, China is making rapid gains.

This decoupling of two nations that for 40 years gave a grand demonstration of the benefits and, latterly, the costs of trade is a bleak prospect. If there is a silver lining, it lies in the fact that rivalry often produces plenty of jobs along with the mortal risks that passionate competition entails. But if America is to do anything more than simply capitulate, it must find a leader and begin to move past the disastrous presidency of Donald Trump.

Friday’s shocking escalation, via Trump’s Tweets, brought that eventuality a little closer.  The president is on the ropes. There is no sign of trade war fever beyond his base that might restore the confidence required for him to win a second term.

xxx

New on the EP bookshelf: The Narrow Corridor: States, Societies, and the Fate of Liberty, by Daron Acemoglu and James A. Robinson (Penguin Press, 2019).

The Triumph of Injustice: How the Rich Dodge Taxes and How to Make Them Pay, by Emmanuel Saez and Gabriel Zucman (Norton, 2019).

David Warsh, an economic historian, book author and veteran columnist, is proprietor of Somerville-based economicprincipals.com, where this essay first appeared.

David Warsh: 35 years of chronicling complexity


SOMERVILLE, Mass.

In the summer of 1984, starting out as an economic journalist for The Boston Globe, I published The Idea of Economic Complexity (Viking). “Complexity,” I wrote, “is an idea on the tip of the modern tongue.”

About that much, at least, I was right.

My book was received with newspaperly courtesy by The New York Times, but it was soon eclipsed by three much more successful titles. Chaos: The Making of a New Science (Viking), by James Gleick, appeared in 1987. Complexity: The Emerging Science at the Edge of Order and Chaos (Simon & Schuster), by M. Mitchell Waldrop, and Complexity: Life at the Edge of Chaos (Macmillan), by Roger Lewin, both appeared in 1992. The reviewer for Science remarked that the latter read like the movie version of the former.

Gleick reported on the doings of a community of physicists, biologists and astronomers, including mathematician Benoit Mandelbrot, who were studying, among other things, “the butterfly effect.” Lewin and Waldrop both wrote mainly about W. Brian Arthur, of the Santa Fe Institute. I had pinned my hopes on Peter Albin, of the City University of New York, whose students hoped he would be the next Joseph Schumpeter.

When the famously pessimistic financial economist Hyman Minsky retired, Albin was chosen to replace him at the Levy Institute at Bard College, but he suffered a massive stroke before he could take the job. Duncan Foley, then of Barnard College, edited and introduced a volume of Albin’s papers: Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems (Princeton, 1998). Arthur went on to win many awards and write a well-regarded book, The Nature of Technology: What It Is and How It Evolves (Free Press, 2009).

By then complexity had become a small industry, powered by a vigorous technology of agent-based modeling. Publisher John Wiley & Sons started a journal, Princeton University Press a series of titles, Ernst & Young opened a practice. Among the barons who came across my screen were John Holland, Scott Page, Robert Axelrod, Leigh Tesfatsion, Seth Lloyd, Alan Kirman, Blake LeBaron, J. Barkley Rosser Jr., and Eric Beinhocker, as well as three men who became good friends: Joel Moses, Yannis Ioannides, and David Colander. All extraordinary thinkers. I long ago went far off the chase.

Two of the most successful expositors of economic complexity were research partners, at least for a time: Ricardo Hausmann, of Harvard University’s Kennedy School of Government, and physicist César Hidalgo, of MIT’s Media Lab. They worked with a gifted mathematician, Albert-László Barabási, of Northeastern University, to produce a highly technical paper; then, with colleagues, assembled The Atlas of Economic Complexity: Mapping Paths to Prosperity (MIT, 2011), a data-visualization tool that continues to function online. Meanwhile, Hidalgo’s Why Information Grows: The Evolution of Order, from Atoms to Economies (Basic, 2015) remains an especially lucid account of humankind’s escape (so far) from the Second Law of Thermodynamics, but there is precious little economics in it. For the economics of international trade, see Gene Grossman and Elhanan Helpman.

That leaves economist Martin Shubik, surely the second most powerful mind among economists to have tackled the complexity problem (John von Neumann was first). Shubik pursued an overarching theory of money all his life, one in which money and financial institutions emerge naturally, instead of being given. In The Guidance of an Enterprise Economy (MIT, 2016), he considered that he and physicist Eric Smith had achieved it. Shubik died last year, at 92. His ideas about strict definitions of “minimal complexity” will take years to resurface in others’ hands.

So what have I learned? That the word itself was clearly shorthand: complexity of what? One possible answer is complexity of the division of labor, or the extent of aggregate specialization in an economic system.

I came close to saying as much in 1984. My book began:

“To be complex is to consist of two or more separable, analyzable parts, so the degree of complexity of an economy consists of the number of different kinds of jobs in the system and the manner of their organization and interdependence in firms, industries, and so forth. Economic complexity is reflected, crudely, in the Yellow Pages, by occupational dictionaries, and by standard industrial classification (SIC) codes. It can be measured by sophisticated modern techniques such as graph theory or automata theory. The whys and wherefores of complexity are not our subject here, however; it is with the idea itself that we are concerned. A high degree of complexity is what hits you in the face in a walk across New York City; it is what is missing in Dubuque, Iowa. A higher degree of specialization and interdependence – not merely more money or greater wealth – is what makes the world of 1984 so different from the world of 1939.’’
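That crude notion is easy to make concrete. A minimal sketch, mine rather than the book’s, with invented employment figures: count the kinds of jobs in a system, then weight them by how evenly employment is spread across them, which is what Shannon entropy does.

```python
# A minimal sketch of a crude complexity index (illustrative only, with
# invented figures): count distinct job categories and compute Shannon
# entropy over their employment shares.
from math import log2

employment = {  # a hypothetical town; any occupational census would do
    "carpenter": 1200, "nurse": 1500, "software developer": 900,
    "machinist": 300, "actuary": 40, "sommelier": 5,
}

total = sum(employment.values())
shares = [n / total for n in employment.values()]

kinds = len(employment)                       # the raw count of job kinds
entropy = -sum(p * log2(p) for p in shares)   # evenness of the mix, in bits

print(f"{kinds} kinds of jobs; entropy {entropy:.2f} bits "
      f"(maximum {log2(kinds):.2f} if employment were spread evenly)")
```

By this yardstick, Manhattan’s occupational mix carries more bits than Dubuque’s, which is all the passage claims; the interdependence among jobs would need the graph-theoretic machinery the book gestures at.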

I was interested in specialization as a way of talking about why the prices of everyday goods and services were what they were, apart from the quantity of money. I was writing towards the end of 40 years of steadily rising prices. I had become entranced by some painstaking work published 25 years before, by economists E.H. Phelps Brown and Sheila Hopkins: measurements of both the money cost of living in England and the purchasing power of workers’ wages over seven centuries. The price level exhibited a step-wise pattern, relentlessly up for a century, steady the next; purchasing power, a jagged but ultimately steady increase (sorry, only JSTOR subscription links).

“[W]hen (I wrote) we find the craftswomen who have been building Nuffield College in our own day earning a hundred fifty pennies in the time it took their forebears building Merton to earn one, the impulse to break through the veil of money becomes powerful: we are bound to ask, what sort of command over the things that builders buy did these pennies give from time to time?’’

It turned out the higher the money price, the more prosperous was the craftsman’s lot, at least in the long run, though sometimes after periods of immiseration lasting decades. That was much as Adam Smith led readers to expect in the first sentence of The Wealth of Nations: “The greatest improvement in the productive power of labor, and the greater part of the skill, dexterity, and judgement, with which it is directed, or applied, seems to have been the effects of the division of labor.” Today’s builders rely on a bewildering array of materials and machines to pursue their tasks, compared to those who built Merton College.

What interested me were intricate questions about the direction of causation. Had prices grown higher because the number of pennies had increased? Or had the supply of pennies grown to accommodate an increasing overall division of labor? To put it slightly differently, in those periods of “industrial revolution” – there had been at least two or three such events – had prices risen because the size of the market and the division of labor had grown, and the quantity of money along with them? Or was it the other way around?

Economists had no hope of answering questions like this, it seemed to me, because they had no good way of posing them. They were in the grip of the quantity theory of money, which, at least since the time of the first European voyages to the West, has held that “the general level of prices” is proportional to the quantity of money in the system available to pay for those goods. This is, I thought, little more than an analogy with Boyle’s Law, one of the most striking early successes of the scientific revolution, which holds that the pressure and volume of a fixed amount of gas are inversely proportional. Release the contents of a steel cylinder into a balloon and the balloon expands, but it still contains no more gas than before. Something like that must have been in the mind of the person who first spoke of “inflating” the currency. From there it was a short jump to the way that classical quantity theory relies on the principle of plenitude – the age-old assumption, inherited from Plato, that there can be nothing truly new under the sun, that the collection of goods behind the “general price level” was somehow fixed.
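The analogy can be written out. A sketch in standard notation, not from the book; note that the symbols collide, V meaning volume in one law and velocity in the other:

```latex
% Boyle's law: for a fixed amount of gas at constant temperature,
% pressure times volume is constant.
\[
P\,V = k
\]
% Fisher's equation of exchange: money stock M times velocity V equals
% the price level P times the volume of transactions T.
\[
M\,V = P\,T
\]
% Hold V and T fixed and the quantity theory follows: P = (V/T)\,M, the
% price level proportional to the money stock. The column's question is
% what happens when T itself, the extent of the division of labor, grows.
```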

But I was no economist. My book found no traction. By then, however, I was hooked; and within a few years I had found my way to a circle of economists at whose center was Paul Romer, then a professor at the University of Rochester. Romer was in the process of putting the growth of knowledge at the center of economics, but that turns out not to be the whole story, just the beginning of it.

The Yellow Pages are all but gone, casualties of search advertising; other industries that supported themselves by assembling audiences have shrunk (newspapers, magazines, broadcast television). Still others have grown (Internet firms, Web vendors, producers of streaming content). Tens of thousands of jobs have been lost; hundreds of thousands of jobs have been created.

I still have the feeling that the important changes in the global division of labor have something to do with the behavior of traditional macroeconomic variables. Romer once surmised that the way into the problem was via Gibson’s paradox – a strong and durable positive empirical correlation between interest rates and the general level of prices, where theory expected to find the reverse. Meanwhile, central bankers are fathoming the mysteries of the elusive Phillips Curve, the inverse relationship between unemployment and inflation.

Which brings me back to 1984. Also in that year, Michael Piore and Charles Sabel published The Second Industrial Divide: Possibilities for Prosperity (Basic). They found their new highly flexible manufacturing firms in northeastern and central Italy instead of Silicon Valley. Their entrepreneurs had ties to communist parties and the Catholic Church instead of libertarian sympathies. But the idea was much the same: Computers would be the key to flexible specialization. For all the talk since about economic complexity, that is the book about the changing division of labor worth rereading.

David Warsh, a veteran columnist and an economic historian, is proprietor of Somerville-based economicprincipals.com, where this essay first ran.

David Warsh: A way to change tech giants' behavior?

Google headquarters, in Mountain View, California

SOMERVILLE, Mass.

“What is so rare as a day in June,’’ as New England poet James Russell Lowell wrote, or, for that matter, in May, in Somerville, Massachusetts? A genuinely powerful intellect, that’s what. Enough to elicit a weekly instead of a walk.

The most interesting thing I saw last week was “A Tax to Fix Big Tech,” an op-ed by economist Paul Romer in The New York Times, proposing a progressive tax on corporate revenues from sales of search advertising. “Putting a levy on targeted ad revenue would give Facebook and Google a real incentive to change their dangerous business models,” he wrote.

About those dangerous business models, Romer had little to say except that

It is the job of government to prevent a tragedy of the commons. That includes the commons of shared values and norms on which democracy depends. The dominant digital platform companies, including Facebook and Google, make their profits using business models that erode this commons. They have created a haven for dangerous misinformation and hate speech that has undermined trust in democratic institutions. And it is troubling when so much information is controlled by so few companies.

What is the best way to protect and restore this public commons? Most of the proposals to change platform companies rely on either antitrust law or regulatory action. I propose a different solution. Instead of banning the current business model – in which platform companies harvest user information to sell targeted digital ads –  new legislation could establish a tax that would encourage platform companies to shift toward a healthier, more traditional model.

He relied for a foil on Sen. Elizabeth Warren’s proposals to break up big tech companies, using antitrust statutes or regulation. He wrote, “Existing antitrust law in the United States addresses mainly the harm from price gouging, not the other kinds of harm caused by these platforms, such as stifling innovation and undermining the institutions of democracy.”  And regulators and judges can be captured by clever lawyers and patient corporate lobbyists.  (Nothing here about the legislators who would enact and monitor the tax statutes and laws.)

There are several advantages to using tax legislation as a strategy, according to Romer. The tax he had in mind could apply to revenue from sales of targeted digital ads, the core businesses of Facebook, Google and other firms that make money monitoring users’ searches. “At the federal level, Congress could add it as a surcharge to the corporate income tax. At the state level, a legislature could adopt it as a type of sales tax on the revenue a company collects for displaying ads to residents of the state.” Such a tax could be progressive, creating an impediment to growth through acquisition and an incentive to periodic spin-offs, and thus greater competition. He added several FAQs about various tax aspects on his Web site the next day.
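The arithmetic of the spin-off incentive is easy to sketch. The brackets below are invented for illustration; Romer’s op-ed proposed the progressive structure, not these numbers. Under marginal rates that rise with revenue, one giant pays more than two halves would:

```python
# A minimal sketch of a progressive tax on targeted-ad revenue.
# The brackets are invented for illustration; Romer proposed the
# progressive structure, not these numbers.
BRACKETS = [  # (threshold in $bn of ad revenue, marginal rate)
    (0, 0.00), (5, 0.10), (20, 0.20), (50, 0.30),
]

def ad_tax(revenue_bn: float) -> float:
    """Tax due in $bn, applied bracket by bracket like an income tax."""
    thresholds = [b[0] for b in BRACKETS[1:]] + [float("inf")]
    return sum(
        (min(revenue_bn, hi) - lo) * rate
        for (lo, rate), hi in zip(BRACKETS, thresholds)
        if revenue_bn > lo
    )

merged = ad_tax(100.0)     # one giant with $100bn in targeted-ad revenue
split = 2 * ad_tax(50.0)   # the same revenue divided between two firms
print(f"merged ${merged:.1f}bn vs. split ${split:.1f}bn: "
      f"spinning off saves ${merged - split:.1f}bn a year")
```

The same arithmetic run in reverse is the impediment to growth by acquisition: merging two taxed revenue streams pushes the combined firm into higher brackets.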

There was, alas, very little speculation about the new ad-free subscription models that might emerge as a means of avoiding taxes on targeted ad revenue, except to say that subscribers would be mindful of the privacy they obtained by avoiding the ever-more sophisticated surveillance of their habits by traditional search services, and subscription companies “could succeed the old-fashioned way: by delivering a service that is worth more than it costs.”

Along with countless others, I share Senator Warren’s and Nobel laureate Romer’s sense that Facebook, Google and other big Internet firms have become highly undesirable corporate citizens in their current gigantic and highly profitable ad-supported form. Surely newspapers are among the “institutions of democracy” that would be strengthened by some governmental reshaping of advertising markets.

Those with long memories will recall that, as a professor at Stanford University’s Graduate School of Business, Romer was the government’s expert in the remedy phase of the Justice Department’s successful (to that point) antitrust complaint against Microsoft Corp. His recommendation was to break the company into two competing firms – one selling its Windows operating systems, the other marketing software applications (including its highly profitable Office suite). The remedy was headed for implementation, until an appellate court sent the case back to a different judge. The election of George W. Bush mooted the issue; the Justice Department withdrew its complaint: a salutary victory against big business slipped away.

Romer had left research by then to start an online learning company. In 2007 he quit Stanford altogether to work as a policy entrepreneur – a natural enough path for the son of a former governor of Colorado who had harbored national ambitions. Romer spent several years advocating for “charter cities,” tax-favored enterprise zones in developing nations whose governance was to be somehow outsourced to independent authorities. Two attempts failed on the eve of what would have been their creation. In 2010, he joined New York University’s Stern School of Business as a University Professor and, for a time, director of NYU’s Marron Institute of Urban Management.

In October 2016 he signed on as chief economist of the World Bank, with hopes of transforming its large and well-funded research department. Fifteen months later, he resigned, after a series of controversies with staff. By then he had come perilously close to gadfly status as a critic of macroeconomics. He could speak so freely, he explained, “because I am no longer an academic. I am a practitioner, by which I mean that I want to put useful knowledge to work. I care little about whether I ever publish again in leading economics journals or receive any professional honor because neither will be of much help to me in achieving my goals.”

The Nobel award last year, jointly with William Nordhaus, “for integrating technological innovations into long-run macroeconomic analysis,” rescued Romer from that limbo by certifying his stature. He married the same day he received the prize. Since then, Romer has offered advice to incoming World Bank President David Malpass, in an op-ed in the Financial Times (outsource the bank’s research function and concentrate on infrastructure planning and financial diplomacy instead), and, last week, the op-ed in The Times.

Op-eds are only slightly better than TED talks.  But, as noted, really good ideas are rare. This one may be profound.  It deserves plenty of further study.

David Warsh, an economic historian and veteran columnist, is proprietor of Somerville-based economicprincipals.com, where this column first ran.


David Warsh: Economist's worst-case stance on global warming

California’s disastrous Camp Fire as seen from the Landsat 8 satellite on Nov. 8.

SOMERVILLE, Mass.

At first glance, it might have seemed anticlimactic, even crushing. The two young men had arrived together at the Massachusetts Institute of Technology in 1964, one from Swarthmore College, the other from Yale University. They completed their graduate studies three years later and, as assistant professors, taught together at Yale for the next five years. Then one returned to MIT and later moved to Harvard University, while the Yalie remained in New Haven. For the next dozen years, they worked on different problems, one on resource economics, the other on economies in which profit-sharing, as opposed to wages, would be the norm; until sustainability and global warming took over for both, far-seeing hedgehog and passionate fox.

Now the hedgehog had been recognized with a Nobel Prize that the fox had hoped to share, and the newly-announced laureate was speaking at a symposium to mark the retirement from teaching of the fox.

Don’t worry, you haven’t heard the last of Harvard’s Martin Weitzman. William Nordhaus, of Yale, shared the Nobel award this year for having framed the world’s first integrated model of the interplay among climate, growth, and technological change. But unless you believe the problem of global warming is going to go away, you are likely to meet Weitzman somewhere down the road. It just isn’t clear how or when.

At the moment, Weitzman is associated mainly with his so-called Dismal Theorem. The argument concerns “fat-tailed uncertainty” or, as he describes it, the “unknown unknowns of what might go very wrong … coupled with essentially unlimited downside liability on possible planetary damages.” The structure of the reasoning was apparently well known to high-end statisticians. Weitzman applied it first as a way to explain the so-called equity premium (why stocks earn so much more than bonds). Then, in 2009, he introduced it to the global-warming debate. Others have applied it since to fears about releasing genetically modified organisms.

Those unknown unknowns call for a more expensive insurance policy against their possibility than would otherwise be the case, Weitzman says, in the form of immediate countermeasures. You can hear him expound the case himself in an hour-long podcast with interviewer Russell Roberts. Better yet, read Weitzman and Gernot Wagner’s uncommonly well-written book Climate Shock: The Economic Consequences of a Hotter Planet (Princeton, 2015).
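The statistical point shows up in a toy simulation, an illustration of fat tails rather than of Weitzman’s actual model, with invented distributions: when damages have a Pareto tail whose index is barely above one, the average outcome is dominated by a handful of catastrophic draws.

```python
# A toy illustration of fat-tailed vs. thin-tailed damages (invented
# distributions; the statistical point, not Weitzman's model).
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

thin = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # thin-tailed damages
alpha = 1.1                                         # tail index barely above 1
fat = (1.0 - rng.random(n)) ** (-1.0 / alpha)       # Pareto(alpha) via inverse CDF

for name, d in (("thin-tailed", thin), ("fat-tailed ", fat)):
    worst = np.sort(d)[-n // 1000:]                 # the worst 0.1% of draws
    print(f"{name}  median {np.median(d):5.2f}   mean {d.mean():7.2f}   "
          f"share of total damage in worst 0.1%: {worst.sum() / d.sum():.1%}")
```

Push the tail index to one or below and the mean diverges altogether: expected damages, and hence the premium worth paying to avoid them, are set by the tail rather than the middle of the distribution. That is the nub of the Dismal Theorem.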

(Copy editor: This seems a bit dashed off. EP: it is, too much so. I did a much better story about Thomas Schelling 25 years ago [“The Phone that Didn’t Ring”]. But I worked for a daily newspaper then, and I had less faith in the prize committee.)

On the other hand, if you have reservations about the worst-case way of framing policy choices, as does Nordhaus (along with many others), Weitzman has made other distinctive contributions, four in particular, which constitute tickets in some future lottery of fame.

The first has to do with a series of conceptual papers on “green accounting,” which involve ways of incorporating depreciation of natural resources into accounts of economic growth. The second involves contributions to the debate about the choice of discounting rates and intergenerational equity. The third concerns pioneering work on the costs and benefits of maintaining species diversity (the Noah’s Ark problem, the contribution Nordhaus gauged his most profound). The fourth has to do with his analysis of the means and risks of deploying various geoengineering measures to combat rapid warming – particularly injecting particles into the upper atmosphere, volcano-style, to shade the Earth from solar rays. And of course there is “Prices vs. Quantities,” from 1974, his most-cited paper, a durable contribution to comparative economics.

Global warming is a problem of staggering complexity. Economic activity caused the problem; economic analysis will be an important part of the response. If you believe the science, expect that this year’s laureates, Nordhaus and Paul Romer, of New York University, are only the first economists whose contributions will be recognized by the Swedes. Fat tails or not, time is God’s way of keeping everything from happening at once.

David Warsh, a columnist and economic historian, is proprietor of economicprincipals.com, based in Somerville.


David Warsh: Romer's huge contribution to growth economics

Ruins of the church at São Miguel das Missões, Rio Grande do Sul, Brazil, one of the missions that ministered to the Guarani tribe.

It was a newspaper feature story of a sort that has become fairly familiar, if rarely so well executed, and my physicist friend was enthusiastic about it.  “I love studies like this. Data on almost anything can be squeezed out of the most unlikely places. Clever data acquisition, analysis, and normalization to an ingeniously inferred control group. And in this case, the look-back period is 400 years and the ripple effects have continued for 250 years after the stimulus was removed.”

As described by Washington Post reporter Andrew Van Dam, “The Mission: Human Capital Transmission, Economic Persistence, and Culture in South America,” a study published in the Quarterly Journal of Economics, tells the following story:

Jesuit missionaries arrived in South America in the mid-16th century, proselytizing Catholicism and teaching useful new skills in roughly equal measure. In 1609, the order established the first of some 30 missions in the remote homeland of the Guarani tribe, what is today the “triple frontier” region where two great rivers meet to form the boundaries of Brazil, Paraguay and Argentina. Jesuits taught blacksmithing, arithmetic and embroidery to the Guarani people. The missions mostly thrived until 1767, when King Charles III of Spain expelled all Jesuits from the Spanish Empire. Formal instruction stopped.

Yet even today, people living near the ruins of those missions show the effect of that long-ago training, going to school 10 to 15 percent longer and earning 10 percent more than residents of equivalent towns without missions. Felipe Valencia Caicedo, of the University of British Columbia, chose to piece together the story of the Guarani missions from archival sources because the relatively isolated region, with its jumble of governments, offered a natural experiment.

Bringing to bear much of the apparatus of a randomized controlled trial, the economic historian was able to show that it wasn’t colonialism that produced the result, it wasn’t geography, it wasn’t religion. It was investment in skills. “Valencia’s analysis is among the most striking of a surge of studies that show how returns from education and vocational training span generations and even centuries,” wrote Van Dam.

“The Mission” is also a prime example of the torrent of important work that was unleashed by a single paper, “Endogenous Technological Change,” in 1990, for which Paul Romer, of New York University, shared this year’s Nobel Prize in economics with William Nordhaus, of Yale University. Nordhaus’s topic is the interplay of economic growth and climate. Romer’s topic was the role of inventors, researchers and entrepreneurs in economic growth. Even before he completed his thesis, in 1983, at the University of Chicago, his emphasis on differential and, often, accelerating national growth rates was causing excitement. As his adviser Robert Lucas famously put it in his Marshall Lectures,

I do not see how you can look at figures like these without seeing them as possibilities. Is there some action a government of India could take that would lead the Indian economy to grow like Indonesia’s or Egypt’s? If so, what, exactly?  If not, what is it about the “nature of India” that makes it so? … The consequences for human welfare involved in questions like these are simply staggering: Once one starts thinking about them, it is hard to think of anything else.

Romer concentrated narrowly on the economics of technological change. On the VoxEU Web site, Charles I. Jones, Romer’s successor at Stanford University’s Graduate School of Business, describes the path by which his friend introduced an “economics of ideas,” with its powerful implication that governments inevitably influence growth through intellectual-property regimes, education and training policies, and subsidies to long-run research and development.

Almost immediately researchers began looking for other policies that might influence growth, financial, legal and political institutions in particular. New journals appeared. So did a long shelf of books, including Why Nations Fail: The Origins of Power, Prosperity, and Poverty, by Daron Acemoglu and James Robinson; The Great Escape: Health, Wealth, and the Origins of Inequality, by Angus Deaton; The Race Between Education and Technology, by Claudia Goldin and Lawrence Katz; and The Rise and Fall of American Growth, by Robert Gordon. Economic historians emphasized the role of culture: Joel Mokyr, of Northwestern University, in A Culture of Growth: The Origins of the Modern Economy; and Deirdre McCloskey, of the University of Illinois at Chicago, in her epic trilogy, The Bourgeois Era, especially its third volume, Bourgeois Equality: How Ideas, Not Capital or Institutions, Enriched the World. Historian Yuval Noah Harari contributed Sapiens: A Brief History of Humankind.

Romer tried for a while to keep ahead of the torrent, beginning to work on endogenous change of tastes and preferences, and then gave up. He started an online learning company, Aplia, sold it, resigned from Stanford University to become a policy entrepreneur, advocating for the creation of “charter cities” around the world. He joined the Stern School of Business at NYU, founded the Marron Institute of Urban Management on the university’s campus, then left to serve for a tumultuous time as chief economist of the World Bank.

Between times he skirmished publicly over a tendency to “mathiness” in economics and over the state of macroeconomics, continuing to play a role behind the scenes in research economics. Today he is a University Professor at NYU. As Jones concludes, Romer’s contribution to growth economics has been monumental. With a single paper, he virtually invented the modern field. “Endogenous Technological Change” is an especially vivid reminder of Einstein’s dictum that it is the theory which decides what we can observe.

David Warsh, an economic historian and long-time columnist, is proprietor of Somerville, Mass.-based economicprincipals.com, where this column first appeared.

 


David Warsh: On the failure of markets

Note Independence near the border with Kansas.

An especially interesting place to visit is Independence, Mo., 250 miles up the Missouri River from the Mississippi. It was nothing more than a river bank in 1804, when the Lewis and Clark expedition stopped overnight to pick wild plums, apples and raspberries. Mormons began to settle in the little frontier town in 1831, and by 1840 you could stand by the gate of the marshalling yard and contemplate the junction of all three main wagon routes to the West: the Oregon Trail, the California Trail and the Santa Fe Trail.

Something of the sort may have been in the back of the minds of the Nobel Committee when they designated William Nordhaus and Paul Romer recipients of the 2018 Swedish Central Bank Prize for Economic Sciences in Memory of Alfred Nobel. By unexpectedly linking the two, each of whom could have been cited separately, the Scandinavians got people thinking and writing about subjects not yet well-connected in the popular mind. Nordhaus is an environmental economist. Romer is a theorist of economic growth.

The link that the prize awarders emphasized was the researchers’ shared concern with the failure of markets to deliver desired results. The key concept is the externality: the effect that certain kinds of transactions have on persons who were not involved in the deal. These so-called market failures may be negative, as with greenhouse-gas emissions that adversely affect the global climate, or they may be positive, as with the knowledge spillovers that occur when technological know-how is widely shared.
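The textbook remedy, sketched here in standard notation rather than quoted from either laureate, is to price the externality itself:

```latex
% Social marginal cost adds external damage to private marginal cost:
\[
SMC(q) \;=\; PMC(q) + MED(q)
\]
% An unregulated market produces where price equals PMC; efficiency
% requires price equal to SMC. A Pigouvian tax set at the marginal
% external damage at the efficient quantity closes the gap:
\[
t^{*} = MED(q^{*})
\]
% A carbon tax is the negative-externality case (Nordhaus); subsidies
% to research play the mirror-image role for positive spillovers (Romer).
```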

Nordhaus has built models with extensive links to physical-science models to gauge the social costs of atmospheric pollution. Romer has deepened and broadened the argument for policies in support of education, for the sharing of intellectual property, and for thoughtful zoning. Much the best discussion of the ins and outs of the laureates’ work that I have seen is by Kevin Bryan, a professor at the University of Toronto’s Rotman School of Management.

I wrote about some part of this story many years ago in Knowledge and the Wealth of Nations: A Story of Economic Discovery (Norton, 2006). Over the next couple of months I thought I might scatter half a dozen weeklies updating the story as best I can in light of the 2018 prize. It will be a welcome alternative to hashing over the election news.

The single most important message of the shared work is to show how modern mixed-market economies can cope with the exigencies of continued economic growth without a lot of regulation. Carbon taxes, on the one hand, and government support for long-term research and development of green technologies, on the other, can limit the damage to the Earth that is already in train and, eventually, even roll back some of the enormous quantity of carbon dioxide that pell-mell growth over two centuries has dumped into the atmosphere.

True, the way ahead is more fraught with peril than were those trails to the Pacific Ocean. But humankind’s inventiveness has grown. What is lacking, so far, is cohesion.

David Warsh, an economic historian and veteran columnist, is proprietor of Somerville, Mass.-based economicprincipals.com, where this essay first ran.

           


David Warsh: Nobel Prizes and macro vs. growth

BOSTON

It was about a year ago that Paul Krugman asked, “[W]hatever happened to New Growth Theory?”  The headline of the item on the blog with which the Nobel laureate supplements his twice-weekly columns for The New York Times telegraphed his answer: The New Growth Fizzle.  He wrote:

''For a while, in the late 1980s and early 1990s, theories of growth with endogenous technological change were widely heralded as the Next Big Thing in economics. Textbooks were restructured to put long-run growth up front, with business cycles (who cared about those anymore?) crammed into a chapter or two at the end. David Warsh wrote a book touting NGT as the most fundamental development since Adam Smith, casting Paul Romer as a heroic figure leading economics into a brave new world.

''And here we are, a couple of decades on, and the whole thing seems to have fizzled out. Romer has had a very interesting and productive life, but not at all the kind of role Warsh imagined. The reasons some countries grow more successfully than others remain fairly mysterious, with most discussions ending, as Robert Solow remarked long ago, in a “blaze of amateur sociology”. And whaddya know, business cycles turn out still to be important.''

Krugman’s post raised eyebrows in my circles because many insiders expected that a Nobel Prize for growth theory would be announced within a few weeks. A widely noticed Nobel symposium had been held in Stockholm in the summer of 2012, the usual (though not inevitable) prelude to a prize.  Its proceedings had been broadcast on Swedish educational television.  Romer, of New York University, had been the leadoff speaker; Peter Howitt, of Brown University, had been his discussant; Philippe Aghion, of Harvard University and the Institute for International Studies, the moderator of the symposium.

Knowing this, I let Krugman’s gibe pass unchallenged, even though it seemed flat-out wrong. These things were best left to the Swedes in private, I reasoned; let the elaborate theater of the prize remain intact.

Then came October, and a surprise of a slightly different sort.  Rather than rousing one or more of the growth theorists, the early morning phone calls went to three economists to recognize their work on trend-spotting among asset prices and the difficulty thereof – Eugene Fama, Robert Shiller and Lars Hansen.  Fama’s work had been done 50 years before; Shiller’s, 35. Two big new financial industries, index funds and hedge funds, had grown up to demonstrate that the claims of both were broadly right, in differing degrees. Hansen had illuminated their differences. So old and safe and well-prepared was the award that its merit couldn’t possibly be questioned.

What happened?  It’s well known that, in addition to preparing each year’s prize,  prize committees work ahead on a nomination or two or even three, assembling slates of nominees for future years in order to mull them over. Scraps of evidence have emerged since last fall that a campaign was mounted last summer within the Economic Sciences Section of the Academy, sufficient to stall the growth award and bring forward the asset-pricing prize – resistance to which Krugman may have been a party.

These things happen. The fantasy aspects of the Nobel Prize – the early-morning phone call out of the blue – have been successfully enough managed over the years as to distract from the “hastily arranged” press conferences that inevitably follow, the champagne chilled and ready to hand. Laureates, in general, are only too happy to play along. Sometimes innocence may even be real. Simon Kuznets, on his way to visit Wassily Leontief in New York in 1971, told friends that he overheard only that “some guy with a Russian name” had won, before stepping into the high-rise elevator that would carry him to his friend’s apartment. It was, he said, the longest ride of his life.

As described on the Nobel website, the committee meets in February to choose preliminary candidates, consults experts in the matter during March and April, settles on a nomination in May, writes up an extensive report over the summer, and sends it in September to the Social Science class of the Royal Swedish Academy of Sciences – around seventy professors, most of them Scandinavians – where it is widely discussed. Thus by summer, the intent of the committee is known, if very closely held, by a fairly large fraternity of scientists. The 600-member Academy then votes in October.

There is nothing obvious about the path that the economics prize award should take; even within the Academy there are at least a couple (and probably more) different versions of what the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, established in 1969, is all about. Wide-ranging and free-wheeling discussion among the well-informed is therefore crucial to its success; so is dependable confidentiality. Nominations and surrounding documentation are sealed for 50 years, so none of this has been revealed yet since the economics prize was established less than 50 years ago.

Over the years, however, scraps of information have leaked out about struggles that have taken place behind the scenes, in areas where sharp philosophical disagreements existed. For example, Gordon Tullock, of George Mason University, a lawyer and career diplomat with no formal training in economics, told me years ago that he woke in 1986 expecting to share the prize for public choice with James Buchanan. He didn’t. In her biography of game theorist John Nash, A Beautiful Mind, Sylvia Nasar reported that Ingemar Ståhl had sought to delay an award to Nash by moving up the prize prepared for Robert Lucas. He didn’t succeed, and Lucas was honored, as had been planned, the following year. (Harold Kuhn, the Princeton mathematician who tirelessly ensured that Nash’s story would be told, died last week, at 88.)

Something of the sort may actually have happened in 2003: preparations were made in Minneapolis for a press conference for Edward Prescott, then of the University of Minnesota; the prize went instead to a pair of low-key econometricians, Clive Granger and Robert Engle, both of the University of California at San Diego. Prescott and Finn Kydland, of the University of California at Santa Barbara, were cited the following year, “for their contributions to dynamic macroeconomics: the time consistency of economic policy and the driving forces behind business cycles.”  The latter award remains even more controversial today than it was then.

Indeed, Krugman’s own prize may have been moved up, amidst concern in Stockholm for the burgeoning financial crisis of 2008. As late as that October it was believed, at least in Cambridge, Mass., that the committee recommended that a prize be given for measurement economics, citing Dale Jorgenson, of Harvard University; Erwin Diewert, of the University of British Columbia; and Robert Hall, of Stanford University. It would have been the first prize for empirical economics since the award to Richard Stone, in 1984, and only the third since Kuznets was recognized, in 1971. Instead the prize went to Krugman, by then working mainly as a columnist for The Times, “for his analysis of trade patterns and location of economic activity.”

No one seriously disputes that Krugman should have been recognized at some point for the consensus-changing work he did, beginning in the late 1970s, on monopolistic competition among giant corporations engaged in international trade, though a common view in the profession is that two others, Elhanan Helpman, of Harvard University, and Gene Grossman, of Princeton University, should have shared in the award. Committees over the years have been very conscious of the emphasis conferred by a solo award – only 22 of 45 economics prizes have been “singletons.”

The deferral of the measurement prize, if that is what happened, suggests there must have been considerable tumult behind the scenes. The gravity of the global financial crisis was very clear in Stockholm in September 2008. What happened in those few months won’t be known with any certainty for another forty-four years. But the effect of the award in October 2008 was to empower Krugman as a spokesman for the tradition of Keynesian macroeconomic analysis.  He responded with alacrity and has employed his bully pulpit since.

So much, then, for what is known and, mostly, not quite known, about the recent politics of the prize. What about the contest between macroeconomics and growth?

Macro is the dominant culture of economics – the center ring ever since Keynes published The General Theory of Employment, Interest and Money, in 1936. It is a way of looking at the world, “an interpretation of events, an intellectual framework, and a clear argument for government intervention,” especially in the management of the business cycle, according to Olivier Blanchard, author of an authoritative text, Macroeconomics. There are many other fields in economics, but macro is the one that seeks to give an overall narrative and analytic account of expansion and recession, of capacity and utilization, of inflation and unemployment. Macro has had its ups and downs in the years since 1936. Today anyone who studies fluctuations is a macroeconomist; but not all macroeconomists acknowledge the centrality of Keynes.

In the 1950s and ’60s, a “neoclassical synthesis” merged Keynesian contributions with all that had gone before. New standards for formal models, plus national income and product accounts and measures of the flow of funds, produced various rules of thumb for managing modern industrial economies: Okun’s Law (output related to unemployment), the Phillips Curve (inflation to unemployment), and so on. By the end of the 1960s, many economists thought of their field as mature.
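In their textbook forms, standard notation with coefficients that vary by era and by estimate, the two rules of thumb read:

```latex
% Okun's Law: the output gap moves against the unemployment gap.
\[
\frac{Y - \bar{Y}}{\bar{Y}} \;=\; -\,c\,(u - u_{n}), \qquad c \approx 2
\]
% Phillips Curve (expectations-augmented): slack restrains inflation.
\[
\pi \;=\; \pi^{e} - \beta\,(u - u_{n})
\]
% Y = output, Y-bar = potential output, u = unemployment, u_n = its
% natural rate, pi = inflation, pi^e = expected inflation.
```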

 

In the ’70s came the “expectations revolution,” a series of high-tech developments (most of them anticipated by low-tech Milton Friedman), in which economists sought to build accounts of forward-looking people and firms into the macro scheme of things. The effectiveness of monetary policy was debated, until the Federal Reserve Board, under Paul Volcker, gave a powerful demonstration of its effectiveness.   Reputation and credibility became issues; targets and new rules emerged.

 

Growth theory, on the other hand, has a less clear-cut provenance.  There is no doubt that it began with Adam Smith, who, in the very first sentence of The Wealth of Nations, pronounced that the greatest improvement in the productive powers of humankind stemmed from the division of labor. Smith expounded for three chapters on the sources and limits of specialization, using a mass-production pin factory as his example, before dropping the topic in order to elucidate what  economists today call “the price system.” Interest in the kind of technological change that the pin factory represented faded into the background.

 

Karl Marx was a growth theorist (remember “Asiatic,” “ancient,” “feudal,” “bourgeois” modes of production and all that?), but he came late to economics and never found his way into the official canon. So was Joseph Schumpeter, who came closer to giving a persuasive account in economic terms but still failed to leave much of a mark. In the ’50s, MIT’s Robert Solow, a leading macroeconomist, ingeniously showed that most of the forces generating gains in wealth (gross domestic product per capita) were exogenous, that is, outside the standard macro model, unexplained by it as the tradition stood. Macro debates continued to flourish. By the end of the ’70s, interest in growth had once again faded away in technical economics.
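Solow’s demonstration fits in two lines of standard growth accounting, a sketch of the textbook version rather than of the 1957 paper itself:

```latex
% With Cobb-Douglas production Y = A K^{\alpha} L^{1-\alpha}, taking
% logs and differentiating with respect to time decomposes growth:
\[
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A}
\;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L}
\]
% The residual term, A-dot over A, stands for technical change. It
% turned out to account for most measured growth in output per capita,
% and the model takes it as given from outside: "exogenous."
```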

 

In the ’80s, excitement over growth was suddenly rekindled in economics by three key papers, of which Romer wrote two and Robert Lucas wrote one. Romer’s primary interest was in “endogenizing” technology; that is, showing why governments, universities, corporations and entrepreneurs engaged in research. Lucas was intrigued by stories from international trade:  Asian trading nations such as Japan, Hong Kong, Taiwan, Korea and Singapore grew rich quickly while communist nations stagnated.  Where did the growth “miracles” come from?

 

As usual, the arguments of both men were intricately related to other ongoing debates in technical economics. Lucas, a University of Chicago professor, was at pains to preserve, for convenience’s sake, the assumption of perfect competition. Romer, educated at both Chicago and MIT and by then teaching at the University of Rochester, was intent on writing intellectual property into the act, employing the sixty-year-old convention of monopolistic competition. Pure competition “spillovers,” meaning, roughly, the gains you reap from watching your neighbors, animated the first models that Romer and Lucas produced. Romer’s second – and final – model depended on income streams that arose from new processes and new goods. The University of Chicago hired Romer; after a year, he moved to California, where his wife had obtained a better job.

 

It seems clear that Romer won the debate. Aghion, then at MIT, and Howitt, then at the University of Western Ontario, quickly buttressed the case for viewing growth through the lens of monopolistic competition, but without producing the same clean convention as Romer’s “non-rival goods,” that is, know-how that can be possessed by more than one person at the same time. Helpman and Grossman obtained the same result.
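The heart of Romer’s formulation fits in one equation, the knowledge production function of the 1990 paper, stated here in its standard textbook form:

```latex
% Romer (1990): new ideas are produced by researchers drawing on the
% existing stock of ideas.
\[
\dot{A} \;=\; \delta\, H_{A}\, A
\]
% A     = the stock of ideas, non-rival: usable by any number of
%         people at once without being used up;
% H_A   = human capital devoted to research;
% delta = research productivity.
% Because the whole stock A raises every researcher's output, growth
% feeds on itself; and monopolistic competition supplies the income
% streams that repay the fixed cost of discovery.
```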

Once it was established formally that privately appropriable knowledge was somehow involved in the process of growth – that ideas were economically important, as well as people and things – interest shifted quickly to the institutions and norms by which knowledge and the power to protect it were diffused.  A shower of interesting new work ensued. The effects on growth of patterns of suffrage, political governance, education, tax policy, land and immigration policy, laws, banking, religion and geography came under economists’ lenses.

The Nobel symposium in 2012 made it clear just how sprawling the “new” literature of growth and development has become. Presenters included a galaxy of stars, nearly every one of them players in the Nobel nomination league. They ranged from experts on technology, schooling, health, credit, geography, and political and legal institutions to empirical economists and policy-evaluation specialists. So is it true, then, as Krugman asserted last summer, that “The reasons some countries grow more successfully than others remain fairly mysterious”? Only if you take the view from macro, and an extremely narrow view at that.

This is the sort of swirl that the Nobel program in economic sciences exists to rise above. It is true that Romer, 58, hasn’t made it easy for the Swedes. He stopped writing economics in the ’90s, started an online learning company, sold it, then quit economics altogether, leaving Stanford University’s Graduate School of Business and starting a movement (which he announced in a TED talk) to create “charter cities” in less-developed countries around the world.

Charter cities?  By analogy to charter schools, these city-scale enterprise zones would spring up on greenfield sites, their police and legal systems guaranteed by volunteer foreign governments: perhaps Norway, for example, or Canada. “Opt-in colonialism,” say the critics.  After a couple of last-minute failures, in Madagascar and Honduras, Romer seems to be trying again, this time from the Urbanization Project at New York University’s Stern School of Business.

Second careers have become more common in recent years among economists whose early work has put them into contention for a Nobel Prize. Some intellects become bored by the chase. A. Michael Spence became a business school dean; Krugman took up journalism. Romer has become a reformer. But before he quit, he carefully dotted his i’s and crossed his t’s. He added growth to economics’ agenda, once and for all. Its integration into macroeconomics has barely begun.

David Warsh, a longtime financial journalist and an economic historian, is proprietor of economicprincipals.com.