Outside the United States Bankruptcy Court in Manhattan, May 2009
Ashley Gilbertson / VII / Redux

Over the past 60 years, the United States has run what amounts to a natural experiment designed to answer a simple question: What happens when a government starts conducting its business in the foreign language of economists? After 1960, anyone who wanted to discuss almost any aspect of U.S. public policy—from how to make cars safer to whether to abolish the draft, from how to support the housing market to whether to regulate the financial sector—had to speak economics. Economists, the thinking went, promised expertise and fact-based analysis. They would bring scientific precision and rigor to government interventions.

For a while, this approach seemed a sure bet for steady progress. But several decades on, the picture is less encouraging. Consider, for example, the most basic quantitative indicator of well-being: the average length of a life. For much of the last century, life expectancy in the United States increased roughly in tandem with that in western Europe. But over the last four decades, the United States has been falling further and further behind. In 1980, the average American life was a year longer than the average European one. Today, it is two years shorter. For a long time, U.S. life expectancy was still rising but more slowly than in Europe; in recent years, it has been falling. A society is hardly making progress when its people are dying younger.

Binyamin Appelbaum makes this point in his new book, The Economists’ Hour. That book and another recent one—Transaction Man, by Nicholas Lemann—converge on the conclusion that the economists at the helm are doing more harm than good.

Both books are compelling and well reported, and both were written by journalists—outsiders who bring historical perspective to the changing role of economists in American society. Appelbaum tracks their influence across a wide range of policy questions since the 1960s. The language and the concepts of economics helped shape debates about unemployment and taxation, as one would expect. But they also influenced how the state handled military conscription, how it regulated airplane and railway travel, and how its courts interpreted laws limiting corporate power. Together, Appelbaum writes, economists’ countless interventions in U.S. public policy have amounted to no less than a “revolution”—well intentioned but with unanticipated consequences that were far from benign.


Lemann chronicles another, related revolution. In the first half of the twentieth century, especially after the calamity of the Great Depression, the conventional wisdom held that the power of corporations must be held in check by other comparably sized organizations—churches, unions, and, above all, a strong national government. But in the decades that followed, a new generation of economists argued that tweaks to how companies operated—more hostile takeovers, more reliance on corporate debt, bigger bonuses for executives when stock prices increased—would enable the market to regulate itself, obviating the need for stringent government oversight. Their suggestions soon became reality, especially in a newly deregulated financial sector, where they precipitated the emergence of junk bonds and other questionable innovations. Like Appelbaum, Lemann concludes that economists’ uncritical embrace of the market changed U.S. society for the worse.

Voters, too, have their doubts, in the United States and beyond. In the run-up to the 2016 Brexit vote, Michael Gove, then the British justice secretary, was asked to name economists who supported his position that the United Kingdom should leave the European Union. He refused. “People in this country have had enough of experts,” he snapped. “I’m not asking the public to trust me. I’m asking the public to trust themselves.” A majority of the British electorate followed his cue and voted to leave the EU, the warnings of countless economists be damned.

Economists should take that outcome as an admonition warranting a major course change. Writing in 2018, the economists David Colander and Craig Freedman proposed one such correction. Over the course of the twentieth century, they contended, economists had built more and more sophisticated models to guide public policy, and many succumbed to hubris in the process. To regain the public’s trust, economists should return to the humility of their nineteenth-century forebears, who emphasized the limits of their knowledge and welcomed others—experts, political leaders, and voters—to fill in the gaps. Economists today should recommit to that approach, even if it requires them to publicly expel from their ranks anyone who habitually overreaches.

ESCAPE FROM THE BASEMENT

Appelbaum’s book begins with a revealing anecdote from the 1950s about Paul Volcker, at the time a young economist working in the bowels of the Federal Reserve System and disillusioned about his career prospects. Among the Fed’s national leadership were bankers, lawyers, and a hog farmer from Iowa—but no economists. In 1970, William McChesney Martin, Jr., then chair of the Federal Reserve’s Board of Governors, could still explain to a visitor that although economists asked good questions, they worked from the basement because “they don’t know their own limitations, and they have a far greater sense of confidence in their analyses than I have found to be warranted.”

The United States is going backward, and many economists have provided the intellectual cover for this retreat.

But Martin was on his way out, and as Appelbaum shows in the chapters that follow, economists were emerging from the basement—not just at the Fed but also across the government. To take just one example, consider the rapid spread of cost-benefit analysis as the tool of choice for assessing health and safety regulations. When the U.S. Congress created the Department of Transportation in 1966 and told it to make motor vehicles safer, lawmakers did not ask regulators to weigh the potential costs and benefits of proposed new rules: after all, no one could possibly determine the value of a human life. The economists Thomas Schelling and W. Kip Viscusi disagreed, arguing that people did in fact place a dollar value on human life, albeit implicitly, and that economists could calculate it.

Regulators initially rejected this approach, but as complaints about burdensome safety regulations grew louder, some began to waver. In 1974, the Department of Transportation used a cost-benefit analysis to reject a proposed requirement that trucks be fitted with so-called Mansfield bars, designed to prevent the type of accident that had killed the actress Jayne Mansfield in 1967. The cost of installing the bars on every truck, regulators calculated, would exceed the combined value of the lives that the bars would save. Soon, every participant in the conversation about safety regulations was expected to state and defend a specific dollar value for a life lost or saved.

Unfortunately, asking economists to set a value for human life obscured the fundamental distinction between the two questions that feed into every policy decision. One is empirical: What will happen if the government adopts this policy? The other is normative: Should the government adopt it? Economists can use evidence and logic to answer the first question. But there is no factual or logical argument that can answer the second one. In truth, the answer lies in beliefs about right and wrong, which differ from one individual to the next and evolve over time, much like people’s political views.

In principle, it is possible to maintain a clear separation between these two types of questions. Economists can answer such empirical questions as how much it would cost if the government required Mansfield bars. It is up to officials—and, by extension, up to the voters who put them in office—to answer the corresponding normative question: What cost should society bear to save a life in any particular context?

In practice, however, voters can provide only so much in the way of quantifiable directives. People may vote for an administration that promises safer cars, but that mandate alone is not specific enough to guide decisions such as whether to require Mansfield bars. Legislators, regulators, and judges, lacking clearer guidance from voters, turned to economists, who resolved the uncertainty by claiming to have found an empirical answer to the normative question at hand. In effect, by taking on the responsibility to determine for everyone the amount that society should spend to save a life, economists had agreed to play the role of the philosopher-king.

Sean McSorley

In Appelbaum’s account, this arrangement seems to have worked out surprisingly well in setting standards for automobile safety. Economists in the mold of Schelling and Viscusi seem to have channeled as best they could the moral beliefs of the median voter. When regulators first rejected Mansfield bars, in 1974, they put the value of a life at $200,000, but in response to pressure from voters demanding fewer traffic fatalities, economists and regulators gradually adjusted that number upward. Eventually, as the estimated value of the human lives lost to car accidents began to exceed the cost of installing Mansfield bars, regulators made the bars mandatory, and voters got the outcome they wanted.

Unfortunately, this outcome may have been possible only because, although the moral stakes were high, the financial stakes were not. No firm faced billions of dollars in gains or losses depending on whether the government mandated Mansfield bars. As a result, none had an incentive to use its massive financial resources to corrupt the regulatory process and bias its decisions, and the “don’t ask, don’t tell” system of using economists as philosopher-kings worked reasonably well.

The trouble arose when the stakes were higher—when the potential gains or losses extended into the tens of billions or hundreds of billions of dollars, as they do in decisions about regulating the financial sector, preventing dominant firms from stifling competition, or stopping a pharmaceutical firm from getting people addicted to painkillers. In such circumstances, it is all too easy for a firm that has a lot riding on the outcome to arrange for a pliant pretend economist to assume the role of the philosopher-king—someone willing to protect the firm’s reckless behavior from government interference and to do so with a veneer of objectivity and scientific expertise.

Simply put, a system that delegates to economists the responsibility for answering normative questions may yield many reasonable decisions when the stakes are low, but it will fail and cause enormous damage when powerful industries are brought into the mix. And it takes only a few huge failures to offset whatever positive difference smaller, successful interventions have made.

One such failure is prescription drug regulation. In 1990, overdoses of legal and illegal drugs caused four deaths per 100,000 people in the United States. By 2017, the figure had risen to 20 deaths per 100,000. A little math reveals that this increase is a major reason why average life expectancy in the United States lags so far behind that in western Europe today. A recent paper by four economists—Abby Alpert, William Evans, Ethan Lieber, and David Powell—concluded that OxyContin, the opioid-based painkiller that generated billions in revenue for the U.S. pharmaceutical giant Purdue Pharma, was responsible for a substantial fraction of those additional overdose deaths.

Imagine making the following proposal in the 1950s: Give for-profit firms the freedom to develop highly addictive painkillers and to promote them via sophisticated, aggressive, and very effective marketing campaigns targeted at doctors. Had one made this pitch to the bankers, the lawyers, and the hog farmer on the Board of Governors of the Federal Reserve back then, they would have rejected it outright. If pressed to justify their decision, they surely would not have been able to offer a cost-benefit analysis to back up their reasoning, nor would they have felt any need to. To know that it is morally wrong to let a company make a profit by killing people would have been enough.

In their attempt to answer normative questions, economists opened the door to ideologues lacking scientific integrity.

By the 1990s, such arguments were out of bounds, because the language and elaborate concepts of economists left no opening for more practically minded people to express their values plainly. And when the Drug Enforcement Administration finally tried to limit the distribution of these painkillers, pharmaceutical companies launched a massive lobbying effort in favor of a bill in Congress that would strip the DEA of the power to freeze suspicious narcotics shipments by drug companies. It is a safe bet that these lobbyists made their arguments to Congress in the language of growth, incentives, and the danger of innovation-killing regulations. The push succeeded, and the DEA lost one of its most powerful tools for saving lives.

Of course, during earlier eras, regulators allowed many industries to profit massively from products known to be harmful; Big Tobacco is the most obvious example. But until the 1980s, the overarching trend was toward restrictions that reined in these abuses. Progress was painfully slow, but it was progress nonetheless, and life expectancy increased. The difference today is that the United States is going backward, and in many cases, economists—even those acting in good faith—have provided the intellectual cover for this retreat.

THE COST OF DEREGULATION

Perhaps no one has captured the mindset that made possible such a massive regulatory failure—the mindset that economists really are philosopher-kings, who can instruct the public on right and wrong—better than Alan Greenspan, who was chair of the Federal Reserve at the time when Washington was easing regulations on many sectors. “Unfettered markets create a degree of wealth that fosters a more civilized existence,” Greenspan told a group of business economists in 2002. “I have always found that insight compelling.”

Greenspan was hardly alone in this conviction, and the most damaging forms of deregulation were those that removed constraints on financial firms, as Lemann reveals in his account of the career of Michael Jensen, an economist who helped reshape the U.S. financial sector in the late twentieth century. Jensen rightly worried about several problems that bedeviled the market, including how to keep corporate executives from promoting their own interests at the expense of shareholders. His proposed solutions—hostile takeovers, debt, and executive bonuses that tracked the share price of a firm, among other changes—were widely adopted.

Corporate shareholders saw their earnings skyrocket, but the main effect of the changes was to empower the financial sector, which Greenspan, for his part, worked doggedly to unfetter. As Lemann writes, Jensen’s ideas also helped chip away at the power of the traditional Corporate Man—the sort of executive whose pursuit of profit was tempered somewhat by a commitment to noneconomic norms, among them a belief in the need to foster trust and build long-term relationships across company lines. Taking his place was Transaction Man, who focused on little more than driving up share prices by any means necessary.

Deregulation, coupled with the new ethos of Transaction Man, invited immensely destructive behavior. One particularly egregious example occurred in 2007. That year, Paulson & Company, a hedge fund led by the investor John Paulson, paid Goldman Sachs approximately $15 million to structure and market a bundle of mortgage-backed securities. According to a civil lawsuit later filed against Goldman (but not against Paulson & Company) by the U.S. Securities and Exchange Commission, Goldman had included in the investment product mortgages that Paulson & Company believed were likely to end in default. In a 2010 settlement with the SEC, Goldman conceded that in marketing the product to clients, it had omitted both the role of Paulson & Company in designing the product and the hedge fund’s bet against it. According to the SEC, investors soon lost over $1 billion; Paulson & Company, by taking the opposite position, earned approximately the same amount.

A foreclosed home in Chicago, January 2008
John Gress / Reuters

Jensen quickly realized that Goldman’s behavior was cause for concern, and he inveighed against the cultural changes that had eroded the firm’s erstwhile commitment to integrity in its long-term relationships with its clients. Banks were, Lemann quotes him as saying, “lying, cheating, stealing.” It “sickened” Jensen that senior executives had avoided jail time in the wake of the financial crisis that followed.

It is not clear whether Jensen has ever considered the possibility that by promoting a system that relied on transactions instead of relationships, he himself may have contributed to the erosion of trust and integrity in the U.S. financial sector. He seems not to have lost his faith that one more adjustment to the system might restore the miracle of the market. But he has not found that adjustment. He ended his professional career preaching the gospel of corporate integrity to empty pews.

Lemann balances his account of Jensen’s career with the story of people whose lives were damaged by a deregulated financial system that let a new breed of mortgage broker mimic the predatory practices of payday lenders with impunity. In the 1990s, so many of those brokers opened storefront offices on Pulaski Road, on Chicago’s South Side, that residents came to refer to it as “Mortgage Row.” Lemann describes the effect these lenders had on one nearby neighborhood, Chicago Lawn. Teaser rates kept mortgage payments low for the first 24 months of a loan, but then they increased dramatically to levels that many borrowers could not possibly afford. Like clockwork, two years after being purchased, houses went into foreclosure. Many were abandoned.

Neighborhood activists tried to stop the destruction of human capital caused by debt that overwhelmed the tenuous lives of the working poor, the destruction of physical capital caused by thieves who stripped water heaters and copper pipe from abandoned houses, and the destruction of social capital caused by abandoned houses that turned into crime hot spots. On top of these visible injuries, the people of Chicago Lawn had to bear the insult of official indifference. A decade before the collapse of the U.S. housing market rocked the global financial system, the damage done by subprime lending was already evident in their neighborhood. But in 1998, the Federal Reserve, under Greenspan, refused requests from alarmed consumer advocates that it examine the subprime-lending activities of the banks it regulated.

After more than a decade of damage to their neighborhood, the citizens of Chicago Lawn watched as the officials who would not even look into that damage saved the banks that had caused it. No amount of econosplaining could change the message this conveyed: everybody has to accept what the market gives them—except the people who work in the financial sector. Today’s record-low unemployment rate shows that ten years on, the most direct harm from the financial crisis has healed. But deeper wounds remain. Wage growth for workers has been slow, and the crisis caused a massive and long-lasting reduction in incomes across the world—and perhaps an even longer-lasting populist backlash against the political institutions of many countries.

A NEW HUMILITY

In their attempt to answer normative questions that the science of economics could not address, economists opened the door to economic ideologues who lacked any commitment to scientific integrity. Among these pretend economists, the ones who prized supposed freedom (especially freedom from regulation) over all other concerns proved most useful—not to society at large but to companies that wanted the leeway to generate a profit even if they did pervasive harm in the process. When the stakes were high, firms sought out these ideologues to act as their representatives and further their agenda. And just like their more reputable peers, these pretend economists used the unfamiliar language of economics to obscure the moral judgments that undergirded their advice.

Throughout his career, Greenspan worked to give financial institutions more leeway and in doing so helped create the conditions that led to the financial crisis. He did so in the name of economics—indeed, in the public consciousness, he came to personify the field. But his opposition to regulation was invulnerable to evidence. Until he took control at the Fed, he was a hired gun, ready to defend firms in the financial sector from regulators who tried to protect the public. In this role, he reportedly said that he had “never seen a constructive regulation yet.” If economists continue to let people like him define their discipline, the public will send them back to the basement, and for good reason.

The alternative is to make honesty and humility prerequisites for membership in the community of economists. The easy part is to challenge the pretenders. The hard part is to say no when government officials look to economists for an answer to a normative question. Scientific authority never conveys moral authority. No economist has a privileged insight into questions of right and wrong, and none deserves a special say in fundamental decisions about how society should operate. Economists who argue otherwise and exert undue influence in public debates about right and wrong should be exposed for what they are: frauds.
