Tuesday, November 21, 2017

Will Artificial Intelligence Recharge Economic Growth?

There may be no more important question for the future of the US economy than whether the ongoing advances in information technology and artificial intelligence will eventually (and this "eventually" is central to their argument) translate into substantial productivity gains. Erik Brynjolfsson, Daniel Rock, and Chad Syverson make the case for optimism in "Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics" (NBER Working Paper 24001, November 2017). The paper isn't freely available online, but many readers will have access to NBER working papers through their library. The essay will eventually be part of a conference volume on The Economics of Artificial Intelligence.

Brynjolfsson, Rock, and Syverson make several intertwined arguments. One is that various aspects of machine learning and artificial intelligence have been crossing important thresholds in the last few years, and will continue to do so over the next few. Thus, even though we tend to think of the "computer age" as having already been in place for a few decades, there is a meaningful sense in which we are about to enter another chapter. Another argument is that when a technological disruption cuts across many parts of the economy--that is, when it is a "general purpose technology" as opposed to a more focused innovation--it often takes a substantial period of time before producers and consumers fully change and adjust. In turn, this means a substantial period of time before the new technology has a meaningful effect on measured economic growth.

As one example of a new threshold in machine learning, consider image recognition. On various standardized tests for image recognition, the error rate for humans is about 5%. In just the last few years, the error rate for image-recognition algorithms has fallen below the human level--and of course the algorithms are likely to keep improving.
There is, of course, a wide array of similar examples. The authors cite one study in which an artificial intelligence system did as well as a panel of board-certified dermatologists in diagnosing skin cancer. Driverless vehicles are creeping into use. Anyone who uses translation software or software that relies on voice recognition can attest to how much better it has become in the last few years.

The authors also point to an article from the Journal of Economic Perspectives in 2015, in which Gill Pratt pointed out the potentially enormous advantages of artificial intelligence in sharing knowledge and skills. For example, translation software can be updated and improved based on how everyone uses it, not just one user. They write about Pratt's essay:
[Artificial intelligence] machines have a new capability that no biological species has: the ability to share knowledge and skills almost instantaneously with others. Specifically, the rise of cloud computing has made it significantly easier to scale up new ideas at much lower cost than before. This is an especially important development for advancing the economic impact of machine learning because it enables cloud robotics: the sharing of knowledge among robots. Once a new skill is learned by a machine in one location, it can be replicated to other machines via digital networks. Data as well as skills can be shared, increasing the amount of data that any given machine learner can use.
However, new technologies--whether web-based commerce, accurate machine vision, drawing inferences, or communicating lessons--don't spread immediately. The authors offer the homely example of the retail industry. The idea of online sales became practical back in the second half of the 1990s. But many of the companies founded for online sales during the dot-com boom of the late 1990s failed, and the sector of retail that expanded most after about 2000 was warehouse stores and supercenters, not online sales. Now, two decades later, online sales have almost reached 10% of total retail.

Why does it take so long? The theme that Brynjolfsson, Rock, and Syverson emphasize is that a revolution in online sales needs more than an idea. It needs innovations in warehouses, distribution, and the financial security of online commerce. It needs producers to think in terms of how they will produce, package, and ship for online sales. It needs consumers to buy into the process. It takes time. 

The notion that general purpose inventions which cut across many industries will take time to manifest their productivity gains, because of the need for complementary inventions, turns out to be a pattern that has occurred before. 

For economists, the canonical comment on this process in the last few decades is due to Robert Solow (the 1987 Nobel laureate), who wrote in a 1987 essay, "You can see the computer age everywhere but in the productivity statistics" ("We'd better watch out," New York Times Book Review, July 12, 1987, quotation from p. 36). After all, IBM had been producing functional computers in substantial quantities since the 1950s, but US productivity growth had been slow since the early 1970s. When the personal computer revolution, the internet, and a surge of productivity in computer chip manufacturing all hit in force in the 1990s, productivity did rise for a time. Brynjolfsson, Rock, and Syverson write: 
"For example, it wasn’t until the late 1980s, more than 25 years after the invention of the integrated circuit, that the computer capital stock reached its long-run plateau at about 5 percent (at historical cost) of total nonresidential equipment capital. It was at only half that level 10 years prior. Thus, when Solow pointed out his now eponymous paradox, the computers were finally just then getting to the point where they really could be seen everywhere."
Going back in history, my favorite example of the lag it takes for inventions to diffuse broadly is the invention of the dynamo for generating electricity, a story first told by economic historian Paul David back in a 1991 essay. David points out that large dynamos for generating electricity existed in the 1870s. However, it wasn't until the Paris World Fair of 1900 that electricity was used to illuminate the public spaces of a city. And it wasn't until the 1920s that innovations based on electricity made a large contribution to US productivity growth. 

Why did it take so long for electricity to spread? Shifting production away from being powered by waterwheels to electricity was a long process, which involved rethinking, reorganizing, and relocating factories. Products that made use of electricity like dishwashers, radios, and home appliances could not be developed fully or marketed successfully until people had access to electricity in their homes. Large economic and social adjustments take time.

When it comes to machine learning, artificial intelligence, and economic growth, it's plausible to believe that we are closer to the front end of our economic transition than we are to the middle or the end. Some of the more likely near-term consequences mentioned by Brynjolfsson, Rock, and Syverson include a likely upheaval in the call center industry that employs more than 200,000 US workers, or how automated driverless vehicles (interconnected, sharing information, and learning from each other) will directly alter one-tenth or more of US jobs. My suspicion is that the changes across products and industries will be deeper and more sweeping than I can readily imagine.

Of course, the transition to the artificial intelligence economy will have some bumps and some pain, as did the transitions to electrification and the automobile. But the rest of the world is moving ahead. And history teaches that countries which stay near the technology frontier, and face the needed social adjustments and tradeoffs along the way, tend to be far happier with that choice in the long run than countries which hold back. 

Monday, November 20, 2017

Why Has Life Insurance Ownership Declined?

Back in the first half of the 19th century, life insurance was unpopular in the US because it was broadly considered to be a form of betting with God against your own life. After a few decades of insurance company marketing efforts, life insurance was transformed into a virtuous purchase for any good and devout husband. But in recent decades, life insurance has been in decline.

Daniel Hartley, Anna Paulson, and Katerina Powers look at recent patterns of life insurance and bring the puzzle of its decline into sharper definition in "What explains the decline in life insurance ownership?" in Economic Perspectives, published by the Federal Reserve Bank of Chicago (41:8, 2017). The story of shifting attitudes toward life insurance in the 19th century US is told by Viviana A. Zelizer in a wonderfully thought-provoking 1978 article, "Human Values and the Market: The Case of Life Insurance and Death in 19th-Century America," American Journal of Sociology (November 1978, 84:3, pp. 591-610).

With regard to recent patterns, Hartley, Paulson, and Powers write: "Life insurance ownership has declined markedly over the past 30 years, continuing a trend that began as early as 1960. In 1989, 77 percent of households owned life insurance (see figure 1). By 2013, that share had fallen to 60 percent." In the figure, the blue line shows ownership of any life insurance, the red line shows the decline in term life, and the gray line shows the decline in cash value life insurance.


Early in the 19th century, the costs of death and funerals were largely a family and neighborhood affair. As Zelizer points out, given attitudes at the time, life insurance was commercially unsuccessful because it was viewed as betting on death. It was widely believed that such a bet might even hasten death, with blood money being received by the life insurance beneficiary. For example, Zelizer wrote:

"Much of the opposition to life insurance resulted from the apparently speculative nature of the enterprise; the insured were seen as `betting' with their lives against the company. The instant wealth reaped by a widow who cashed her policy seemed suspiciously similar to the proceeds of a winning lottery ticket. Traditionalists upheld savings banks as a more honorable economic institution than life insurance because money was accumulated gradually and soberly. ...  A New York Life Insurance Co. newsletter (1869, p. 3) referred to the "secret fear" many customers were reluctant to confess: `the mysterious connection between insuring life and losing life.' The lists compiled by insurance companies in an effort to respond to criticism quoted their customers' apprehensions about insuring their lives: "I have a dread of it, a superstition that I may die the sooner" (United States Insurance Gazette [November 1859], p. 19). ... However, as late as the 1870s, "the old feeling that by taking out an insurance policy we do somehow challenge an interview with the 'king of terrors' still reigns in full force in many circles" (Duty and Prejudice 1870, p. 3). Insurance publications were forced to reply to these superstitious fears. They reassured their customers that "life insurance cannot affect the fact of one's death at an appointed time" (Duty and Prejudice 1870, p. 3). Sometimes they answered one magical fear with another, suggesting that not to insure was "inviting the vengeance of Providence" (Pompilly 1869). ... An Equitable Life Assurance booklet quoted wives' most prevalent objections: "Every cent of it would seem to me to be the price of your life .... it would make me miserable to think that I were to receive money by your death .... It seems to me that if [you] were to take a policy [you] would be brought home dead the next day" (June 1867, p. 3)."
However, over the course of several decades, insurance companies marketed life insurance with the message that it was actually a devout husband's loving duty to his family. As Zelizer argues, the rituals and institutions of what society viewed as a "good death" altered. She wrote:
"From the 1830s to the 1870s life insurance companies explicitly justified their enterprise and based their sales appeal on the quasi-religious nature of their product. Far more than an investment, life insurance was a `protective shield' over the dying, and a consolation `next to that of religion itself' (Holwig 1886, p. 22). The noneconomic functions of a policy were extensive: `It can alleviate the pangs of the bereaved, cheer the heart of the widow and dry the orphans' tears. Yes, it will shed the halo of glory around the memory of him who has been gathered to the bosom of his Father and God' (Franklin 1860, p. 34). ... life insurance gradually came to be counted among the duties of a good and responsible father. As one mid-century advocate of life insurance put it, the man who dies insured and `with soul sanctified by the deed, wings his way up to the realms of the just, and is gone where the good husbands and the good fathers go' (Knapp 1851, p. 226). Economic standards were endorsed by religious leaders such as Rev. Henry Ward Beecher, who pointed out, `Once the question was: can a Christian man rightfully seek Life Assurance? That day is passed. Now the question is: can a Christian man justify himself in neglecting such a duty?' (1870)."
Zelizer's work is a useful reminder that many products, including life insurance, are not just about prices and quantities in the narrow economic sense, but are also tied to broader social and institutional patterns.  

The main focus of Hartley, Paulson, and Powers is to explore the extent to which shifts in socioeconomic and demographic factors can explain the fall in life insurance: that is, have socioeconomic or demographic groups that were less likely to buy life insurance become larger over time? However, after doing a breakdown of life insurance ownership by race/ethnicity, education level, and income level, they find that the decline in life insurance is widespread across pretty much all groups. In other words, the decline in life insurance doesn't seem to be (primarily) about socioeconomic or demographic change, but rather about other factors. They write: 
"Instead, [life insurance] ownership has decreased substantially across a wide swath of the population. Explanations for the decline in life insurance must lie in factors that influence many households rather than just a few. This means we need to look beyond the socioeconomic and demographic factors that are the focus of our analysis. A decrease in the need for life insurance due to increased life expectancy is likely to be an especially important part of the explanation. In addition, other potential factors include changes in the tax code that make the ability to lower taxes through life insurance less attractive, lower interest rates that also reduce incentives to shelter investment gains from taxes, and increases in the availability and decreases in the cost of substitutes for the investment component of cash value life insurance." 
It's intriguing to speculate about what the decline in life insurance purchases tells us about our modern attitudes and arrangements toward death, in a time of longer life expectancies, more households with two working adults, the backstops provided by Social Security and Medicare, and perhaps also shifts in how many people feel that their souls are sanctified (in either a religious or a secular sense) by the purchase of life insurance. 

Friday, November 17, 2017

Brexit: Still a Process, Not Yet a Destination

I happened to be in the United Kingdom on a long-planned family vacation on June 23, 2016, when the Brexit vote took place. At the time, I offered a stream-of-consciousness "Seven Reflections on Brexit" (June 26, 2016). But more than a year has now passed, and Thomas Sampson sums up the research on what is known and what might come next in "Brexit: The Economics of International Disintegration," which appears in the Fall 2017 issue of the Journal of Economic Perspectives.

(As regular readers know, my paying job--as opposed to my blogging hobby--is as Managing Editor of the JEP. The American Economic Association has made all articles in JEP freely available, from the most recent issue back to the first. For example, you can check out the Fall 2017 issue here.)

Here's Sampson's basic description of the UK and its position in the international economy before Brexit. For me, it's one of those descriptions that doesn't use any weighted rhetoric, but nonetheless packs a punch.
"The United Kingdom is a small open economy with a comparative advantage in services that relies heavily on trade with the European Union. In 2015, the UK’s trade openness, measured by the sum of its exports and imports relative to GDP, was 0.57, compared to 0.28 for the United States and 0.86 for Germany (World Bank 2017). The EU accounted for 44 percent of UK exports and 53 percent of its imports. Total UK–EU trade was 3.2 times larger than the UK’s trade with the United States, its second-largest trade partner. UK–EU trade is substantially more important to the United Kingdom than to the EU. Exports to the EU account for 12 percent of UK GDP, whereas imports from the EU account for only 3 percent of EU GDP. Services make up 40 percent of the UK’s exports to the EU, with “Financial services” and “Other business services,” which includes management consulting and legal services, together comprising half the total. Brexit will lead to a reduction in economic integration between the United Kingdom and its main trading partner."
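Sampson's openness figure is a simple ratio: the sum of exports and imports divided by GDP. A minimal sketch of the calculation (the trade and GDP values below are illustrative round numbers chosen to produce a UK-like ratio, not official statistics):

```python
def trade_openness(exports, imports, gdp):
    """Trade openness: (exports + imports) / GDP, all in the same units."""
    return (exports + imports) / gdp

# Illustrative round numbers in trillions of dollars, not official data:
# an economy exporting 0.80, importing 0.85, with GDP of 2.90
print(round(trade_openness(0.80, 0.85, 2.90), 2))  # → 0.57
```

By this measure, a smaller economy that trades heavily can score twice as high as a large, more self-contained one, which is what the UK-US comparison (0.57 versus 0.28) reflects.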
A substantial reduction in trade will cause problems for the UK economy. Of course, the estimates will vary according to just what model is used, and Sampson runs through the main possibilities. He summarizes in this way: 
"The main conclusion of this literature is that Brexit will make the United Kingdom poorer than it would otherwise have been because it will lead to new barriers to trade and migration between the UK and the European Union. There is considerable uncertainty over how large the costs of Brexit will be, with plausible estimates ranging between 1 and 10 percent of UK per capita income. The costs will be lower if Britain stays in the European Single Market following Brexit. Empirical estimates that incorporate the effects of trade barriers on foreign direct investment and productivity find costs 2–3 times larger than estimates obtained from quantitative trade models that hold technologies fixed."
What will come next after Brexit isn't yet clear, and may well take years to negotiate. In the meantime, the main shift seems to be that the foreign exchange rate for the pound has fallen, while inflation has risen, which means that real inflation-adjusted wages have declined. This national wage cut has helped keep Britain's industries competitive on world markets, but it's obviously not a desirable long-run solution.

But in the longer run, as the UK struggles to decide what actually comes next after Brexit, Sampson makes a distinction worth considering: Is the support for Brexit about national identity and taking back control, even if it makes the country poorer, or is it about renegotiating trade agreements and other legislation to do more to address the economic stresses created by globalization and technology? He writes:

"Support for Brexit came from a coalition of less-educated, older, less economically successful and more socially conservative voters who oppose immigration and feel left behind by modern life. Leaving the EU is not in the economic interest of most of these left-behind voters. However, there is currently insufficient evidence to determine whether the leave vote was primarily driven by national identity and the desire to “take back control” from the EU, or by voters scapegoating the EU for their economic and social struggles. The former implies a fundamental opposition to deep economic and political integration, even if such opposition brings economic costs, while the latter suggests Brexit and other protectionist movements could be addressed by tackling the underlying reasons for voters’ discontent."
For me, one of the political economy lessons of Brexit is that it's relatively easy to get a majority against a specific unpopular element of the status quo, while leaving open the question of what happens next. It's a lot harder to get a majority in favor of a specific change. That problem gets even harder when it comes to international agreements, because while it's easy for UK politicians to make pronouncements on what agreements the UK would prefer, trade negotiators in the EU, the US, and the rest of the world have a say, too. Sampson discusses the main post-Brexit options, and I've blogged about them in "Brexit: Getting Concrete About Next Steps" (August 2, 2016). While the process staggers along, this "small open economy with a comparative advantage in services that relies heavily on trade with the European Union" is adrift in uncertainty.

Thursday, November 16, 2017

US Wages: The Short-Term Mystery Resolved

The Great Recession ended more than eight years ago, in June 2009. The US unemployment rate declined slowly after that, but it has now been below 5.0% every month for more than two years, since September 2015. Thus, an ongoing mystery for the US economy is: Why haven't wages started to rise more quickly as labor market conditions improved? Jay Shambaugh, Ryan Nunn, Patrick Liu, and Greg Nantz provide some factual background to address this question in "Thirteen Facts about Wage Growth," written for the Hamilton Project at the Brookings Institution (September 2017). The second part of the report addresses the question: "How Strong Has Wage Growth Been since the Great Recession?"

For me, one surprising insight from the report is that real wage growth--that is, wage growth adjusted for inflation--has actually not been particularly slow during the most recent upswing. The upper panel of this figure shows real wage growth since the early 1980s. The horizontal lines show the growth of wages after each recession. The real wage growth in the last few years is actually higher. The bottom panel shows nominal wage growth--that is, wage growth not adjusted for inflation. By that measure, wage growth in recent years is lower than after the last few recessions. Thus, I suspect that one reason behind the perception of slow wage growth is that many people are focused on nominal rather than on real wages.


Government statistics offer a lot of ways of measuring wage growth. The graphs above show wage growth for "real average hourly earnings for production and nonsupervisory workers," a group that includes about 100 million of the roughly 150 million US workers.

An alternative and broader approach looks at what is called the Employment Cost Index, which is based on a National Compensation Survey of employers. To adjust for inflation, I use the measure of inflation called the Personal Consumption Expenditures price index, which measures inflation just for the personal consumption part of the economy that is presumably most relevant to workers. I also use the version of this index that strips out jumps in energy and food prices. This is the measure of the inflation rate that the Federal Reserve actually focuses on.

Economists using these measures were pointing out a couple of years ago that real wages seemed to be on the rise. The blue line shows the annual change in wages and salaries for all civilian workers, using the ECI, while the red line shows the PCE measure of inflation. The gap between the two is the real gain in wages, which you can see started to emerge in 2015.
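The arithmetic behind that gap is straightforward: real wage growth is approximately nominal wage growth minus inflation. A minimal sketch, using hypothetical growth and inflation figures rather than actual ECI or PCE readings:

```python
def real_wage_growth(nominal_growth_pct, inflation_pct):
    # Approximation: real growth ≈ nominal growth minus inflation.
    # (The exact relation is (1 + nominal)/(1 + inflation) - 1,
    # but for small rates the difference is negligible.)
    return nominal_growth_pct - inflation_pct

# Hypothetical figures: ECI wage growth of 2.5%, core PCE inflation of 1.5%
print(real_wage_growth(2.5, 1.5))  # → 1.0 (percentage points of real gain)
```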

Not only has the recovery in US real wages been a bit higher than usual for the last few decades, and especially prominent in the last couple of years, but there is good reason to believe that the wage statistics since the Great Recession may be picking up a change in the composition of the workforce that tends to make wage growth look slower. Shambaugh, Nunn, Liu, and Nantz explain (citations and footnotes omitted):
"In normal times, entrants to full-time employment have lower wages than those exiting, which tends to depress measured wage growth. During the Great Recession this effect diminished substantially when an unusual number of low-wage workers exited full-time employment and few were entering. After the Great Recession ended, the recovering economy began to pull workers back into full-time employment from part-time employment ... and nonemployment, while higher-paid, older workers left the labor force. Wage growth in the middle and later parts of the recovery fell short of the growth experienced by continuously employed workers, reflecting both the retirements of relatively high-wage workers and the reentry of workers with relatively low wages. In 2017 the effect of this shifting composition of employment remains large, at more than 1.5 percentage points. If and when growth in full-time employment slows, we can expect this effect to diminish somewhat, providing a boost to measured wage growth."
The baby boomer generation is hitting retirement and leaving the labor force, as relatively highly-paid workers at the end of their careers. New workers entering the labor force, together with low-skilled workers being drawn back into the labor force, tend to have lower wages and salaries. This makes wage growth look low--but what's happening is in part a shift in types of workers. 

One other fact from Shambaugh, Nunn, Liu, and Nantz is that wage growth has been strong at the bottom and the top of the wage distribution, but slower in the middle. This figure splits the wage distribution into five quintiles, and shows the wage growth for production and nonsupervisory workers in each. 

Taking these factors together, the "mystery" of why wages haven't recovered more strongly since the end of the Great Recession appears to be resolved. However, a bigger mystery remains. Why have wages and salaries for production and nonsupervisory workers done so poorly not in the last few years, but over the last few decades?

There's a long list of potential reasons: slow productivity growth, rising inequality, dislocations from globalization and new technology, a slowdown in the rate of start-up firms, weakness of unions and collective bargaining, less geographic mobility by workers, and others. These factors have been discussed here before, and will be again, but not today. Shambaugh, Nunn, Liu, and Nantz provide some background figures and discussion of these longer-term factors, too. 

Wednesday, November 15, 2017

Rethinking Development: Larry Summers

Larry Summers delivered a speech on the subject of "Rethinking Global Development Policy for the 21st Century" at the Center for Global Development on November 8, 2017. A video of the 45-minute lecture is here. Here are a few snippets, out of many I could have chosen:

The dramatic global convergence between rich and poor
"There has been more convergence between poor people in poor countries and rich people in rich countries over the last generation than in any generation in human history. The dramatic way to say it is that between the time of Pericles and London in 1800, standards of living rose about 75 percent in 2,300 years. They called it the Industrial Revolution because for the first time in human history, standards of living were visibly and meaningfully different at the end of a human lifespan than they had been at the beginning of a human lifespan, perhaps 50 percent higher during the Industrial Revolution. Fifty percent is the growth that has been achieved in a variety of six-year periods in China over the last generation and in many other countries, as well. And so if you look at material standards of living, we have seen more progress for more people and more catching up than ever before. That is not simply about things that are material and things that are reflected in GDP. ... [I]f current trends continue, with significant effort from the global community, it is reasonable to hope that in 2035 the global child mortality rate will be lower than the US child mortality rate was when my children were born in 1990. That is a staggering human achievement. It is already the case that in large parts of China, life expectancy is greater than it is in large parts of the United States." 

The marginal benefit of development aid is what is enabled, not what is funded
"I remember as a young economist who was going to be the chief economist of the World Bank sitting and talking with Stan Fischer, who was my predecessor as the chief economist of the World Bank. And we were talking, and I was new to all this. I had never done anything in the official sector. And I said, "Stan, I don't get it. If a country has five infrastructure projects and the World Bank can fund two of them, and the World Bank is going to cost-benefit analyze and the World Bank is going to do all its stuff, I would assume what the country does is show the World Bank its two best infrastructure projects, because that will be easiest, and if it gets money from the World Bank, then it does one more project, but what the World Bank is actually buying is not the project it is being shown, it is the marginal product that it is enabling. And so why do we make such a fuss of evaluating the particular quality of our projects?" And Stan listened to me. And he looked at me. He's a very wise man. And he said, "Larry, you know, it is really interesting. When I first got to the bank, I always asked questions like that." "But now I've been here for two years, and I don't ask questions like that. I just kind of think about the projects, because it is kind of too hard and too painful to ask questions like that."
Funds from the developing world governments and multilateral institutions have much less power
"[O]ur money—and I mean by that our assistance and the assistance of the multilateral institutions in which we have great leverage—is much less significant than it once was. Perhaps the best way to convey that is with a story. In 1991, when I was new to all of this, I was working as the chief economist of the World Bank, and the first really important situation in which I had any visibility at all was the Indian financial crisis that took place in the summer of 1991. And at that point, India was near the brink. It was so near the brink that, at least as I recall the story, $1 billion of gold was with great secrecy put on a ship by the Indians to be transported to London, where it could be collateral for an emergency loan that would permit the Indian government to meet its payroll at the end of the month.  And at that moment, the World Bank was in a position over the next year to lend India $3 billion in conjunction with its economic reform program. And the United States had an important role in shaping the World Bank's strategy. Well, that $3 billion was hugely important to the destiny of a sixth of humanity. Today, the World Bank would have the capacity to lend India in a year $6 billion or $7 billion. But India has $380 billion—$380 billion—in reserves dominantly invested in Treasury bills earning 1 percent. And India itself has a foreign aid budget of $5 billion or $6 billion. And so the relevance of the kind of flows that we are in a position to provide officially to major countries is simply not what it once was."
Protecting the world from pandemic flu vs. the salary of a college football coach
"[T]he current WHO budget for pandemic flu is less than the salary of the University of Michigan's football coach—not to mention any number of people who work in hedge funds. And that seems manifestly inappropriate. And we do not yet have any settled consensus on how we are going to deal with global public goods and how that is going to be funded."

Tuesday, November 14, 2017

Regional Price Parities: Comparing Cost of Living Across Cities and States

Many years ago I heard a story from a member of a committee of a midwestern university that was thinking about hiring a certain economist. The economist had an alternative offer from a southern California university that paid a couple of thousand dollars more in annual salary. The economist offered to come to the midwestern university if it would match this slightly higher salary, but the hiring committee declined to match. As the story was told to me, the hiring committee talked it over and felt: "Spending a couple of thousand dollars more isn't actually the issue. The key fact is that the cost of living is vastly higher in southern California. An economist who isn't able to recognize that fact--and thus who doesn't recognize that the lower salary actually buys a higher standard of living here in the midwest--isn't someone we want for our department."

The point is a general one. Getting a higher salary in California or New York, and then needing to pay more for housing and perhaps other costs of living as well, can easily eat up that higher salary. In fact, the Bureau of Economic Analysis now calculates Regional Price Parities, which adjust for higher or lower levels of housing, goods, and services across areas. Comparisons are available at the state level, the metropolitan-area level, and for non-metro areas within states. To illustrate, here are a couple of maps taken from "Living Standards in St. Louis and the Eighth Federal Reserve District: Let's Get Real," an article by Cletus C. Coughlin, Charles S. Gascon, and Kevin L. Kliesen in the Review of the Federal Reserve Bank of St. Louis (Fourth Quarter 2017, pp. 377-94).

Here are the US states color-coded according to per capita GDP. For example, you can see that California and New York are in the highest category. My suspicion is that states like Wyoming, Alaska, and North Dakota are in the top category because of their energy production.



And now here are the US states color-coded according to per capita GDP with an adjustment for Regional Price Parities: that is, it's a measure of income adjusted for what it actually costs to buy housing and other goods. With that change, California, New York, and Maryland are no longer in the top category. However, a number of midwestern states like Kansas, Nebraska, South Dakota, and my own Minnesota move into the top category. A number of states in the mountain west and south that were in the lowest-income category when just looking at per capita GDP move up a category or two when the Regional Price Parities are taken into account.
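The arithmetic behind this adjustment is straightforward. The BEA expresses a Regional Price Parity with the national average set to 100, so price-adjusted income is nominal income divided by (RPP / 100). Here is a minimal sketch of how the adjustment can flip a ranking; the income and RPP numbers below are invented for illustration, not actual BEA figures.

```python
# Illustrative sketch of a Regional Price Parity adjustment.
# All numbers here are hypothetical, not actual BEA data.
# Real (price-adjusted) income = nominal income / (RPP / 100),
# where RPP = 100 is the national average price level.

incomes = {"California": 62_000, "Minnesota": 53_000}  # nominal per capita
rpps = {"California": 114.0, "Minnesota": 97.0}        # hypothetical RPPs

for state in incomes:
    real = incomes[state] / (rpps[state] / 100)
    print(f"{state}: nominal ${incomes[state]:,} -> real ${real:,.0f}")
```

With these made-up numbers, California's higher nominal income buys less than Minnesota's lower one once prices are taken into account, which is exactly the pattern the second map shows.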


When thinking about political and economic differences across states, these differences in income levels, housing prices, and other costs of living are something to take into account.

Monday, November 13, 2017

Choice and Health Insurance Coverage

If you think of Medicare and Medicaid as examples of "single payer" health insurance plans, you are at best partially correct. Government health spending (including federal, state, and local) does account for about 46% of total US health care spending. However, a major and largely unremarked change is that government health care spending is being filtered through a system in which those receiving the government health insurance need to make choices between privately-run health insurance plans.

A three-paper symposium in the Fall 2017 issue of the Journal of Economic Perspectives tackles this issue of choice and health insurance coverage. The introductory essay by Jonathan Gruber is called "Delivering Public Health Insurance through Private Plan Choice in the United States."
Then Michael Geruso and Timothy Layton focus on the issue of "Selection in Health Insurance Markets and Its Policy Remedies." Finally, in "The Questionable Value of Having a Choice of Levels of Health Insurance," Keith Marzilli Ericson and Justin Sydnor focus on how difficult it can be for consumers to make wise choices between health insurance plans--especially when the providers of those plans may have an incentive to slant choices in certain directions. For example, Gruber describes how US government health care spending has moved away from a "single payer" approach over time, and writes:
"Currently, almost one-third of Medicare enrollees are in privately provided insurance plans for all of their medical spending, and another 43 percent of Medicare enrollees have standalone private drug plans through the Medicare Part D program. More than three-quarters of Medicaid enrollees are in private health insurance plans. Those receiving the subsidies made available under the Patient Protection and Affordable Care Act of 2010 do so through privately provided insurance plans that are reimbursed by the government."
Or here's a figure from Geruso and Layton. When you take into account the people choosing between Medicaid managed care plans, Medicare "Advantage" plans (as part of Medicare Part C), Medicare prescription drug benefits (as part of Medicare Part D), and people choosing between health insurance plans in the insurance "marketplaces" set up by the Patient Protection and Affordable Care Act of 2010, you have a total of nearly 100 million enrollees. Of course, if you're looking at choice in health insurance more broadly, many individuals also have some choices among the health insurance plans supported by their employers, too.
In all insurance markets, not just health insurance, choice can be a double-edged sword. On one side, choice lets people match up the characteristics of different health insurance plans to their personal preferences and needs, which clearly can be positive. But health insurance providers here have mixed incentives: in this choice-based health insurance universe, they want to encourage people to choose their plans, but they also are trying not to attract disproportionate numbers of people who are more likely to have high health care costs in the future. Health insurance plans have a very wide array of characteristics: not just the structure of deductibles, copayments, and annual caps, but also limits on the breadth of a provider network and how costly (in terms of out-of-pocket costs) or difficult (in terms of paperwork and delay) it can be to go outside that network. Another limit can be on what types of care are covered in extreme health situations. With these difficulties in mind, a number of conventional problems arise.

Health insurance markets will have a tendency to sort people into groups, where those who regard themselves as healthy at present will seek out health insurance that covers less and has a lower cost, while those who know that they are likely to have higher health-care costs will tend to seek out insurance that covers more but has a higher cost. As this dynamic emerges, so too do a number of problems:

Insurance companies will have an incentive to structure their insurance plans with the idea of attracting the more-healthy consumers, while encouraging less healthy consumers to shop elsewhere, which is sometimes known as "cream-skimming." Health insurance plans that would tend to be more attractive for the less healthy will tend to be packed full of out-of-pocket costs and restrictions on the network of service providers. At an extreme, health insurance plans suitable for those with high costs may become so costly or limited as to be essentially unavailable--sometimes known as a "death spiral" for that market--which of course defeats the purpose of insurance altogether. Some of the people who signed up for lower-cost plans, either because they expected to be healthy or just because they focused on the low costs, will instead turn out to be unhealthy--and discover that their low-cost plan provided only limited coverage.
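The "death spiral" dynamic can be made concrete with a toy calculation. In this sketch--with entirely made-up numbers--the insurer sets the premium equal to the average expected cost of whoever remains in the pool; enrollees whose expected costs are well below the premium drop out, which pushes the premium up for everyone left, and the cycle repeats.

```python
# Toy simulation of an adverse-selection "death spiral"; all numbers invented.
# Premium = average expected cost of the current pool. Enrollees whose
# expected cost is well below the premium (here, below 80% of it) drop out,
# which raises the average cost of those who remain.

costs = [1000, 2000, 3000, 5000, 8000, 12000]  # hypothetical expected annual costs
pool = list(costs)
for round_ in range(5):
    premium = sum(pool) / len(pool)
    stayers = [c for c in pool if c >= 0.8 * premium]
    print(f"round {round_}: premium ${premium:,.0f}, pool size {len(stayers)}")
    if len(stayers) == len(pool):
        break  # pool has stabilized
    pool = stayers
```

After a few rounds, only the highest-cost enrollees remain and the premium has roughly doubled, which is the sense in which choice can unravel the market for comprehensive coverage.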

Of course, these are exactly the issues that have been playing out in the state-level insurance "marketplaces" set up under the Affordable Care Act. Economic analysis points out that these kinds of issues are endemic to choice-based insurance markets. These problems lead to a parade of policy interventions in health insurance markets, laid out by Geruso and Layton.

There are often rules for "premium rating," which limit the price differences between insurance plans for different groups, or rules that insurance companies cannot reject an applicant outright, but must offer some kind of plan. These rules seek to avoid the problem that a consumer who is likely to have high health care costs can't find an insurance policy at all, but given the many ways in which health insurance can be structured, the available policies can still look rather scanty.

The government can impose penalties for not purchasing health insurance, or subsidies for buying it. In practice, the state-level health insurance marketplaces do both of these.

"Risk adjustment" refers to the situation in which a statistical formula is used to predict who is likely to have higher or lower health insurance costs--so that the government pays that amount to the insurance company. For example, in the Medicare Advantage program, where Medicare recipients can choose among private insurance plans rather than the government single-payer approach, the government needs to avoid a situation where the private health insurance firms just attract the healthier participants, and so it uses a risk adjustment formula. The evidence is that this risk adjustment is imperfect, in the sense that the higher payments for those expected-to-be-sick don't quite account for the higher costs, but it's better than not having it at all. Medicaid and the state-level insurance marketplaces have risk adjustment procedures, too.
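The core mechanics of risk adjustment can be sketched in a few lines. In this simplified illustration--the base rate and risk scores are invented, and real formulas are far more elaborate--each plan is paid a base rate scaled by each enrollee's risk score, so a plan that ends up with sicker enrollees receives correspondingly larger payments.

```python
# Hedged sketch of risk adjustment, with invented numbers.
# The payer scales a base rate by each enrollee's risk score (1.0 = average
# expected cost), so plans enrolling sicker people receive more money and
# have less incentive to "cream-skim" the healthy.

base_rate = 9_000  # hypothetical annual payment for an average-risk enrollee

def plan_payment(risk_scores):
    """Total payment to a plan: base rate times each enrollee's risk score."""
    return sum(base_rate * score for score in risk_scores)

healthy_plan = plan_payment([0.7, 0.8, 0.9])  # plan that drew healthier enrollees
sicker_plan = plan_payment([1.2, 1.5, 2.0])   # plan that drew sicker enrollees
print(healthy_plan, sicker_plan)
```

The imperfection Geruso and Layton note shows up when actual costs for high-risk enrollees exceed what the risk scores predict, so even the larger payment to the sicker plan falls somewhat short.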

Yet another policy is "contract regulation," to require that insurance firms offer certain benefits. Of course, the question of what coverage is required, and the extent to which firms can require additional payments or limit the providers for certain kinds of coverage, remain controversial.

The bottom line here is that choice in health insurance markets unleashes both good and distressing dynamics. The good dynamic is that people can select the plan that they think best suits their immediate needs, and to some extent this focuses insurance companies on providing what people actually want. The distressing dynamic is that as people do this, the health insurance market for those who need more extensive health insurance will stagger for all the reasons given above. The available public policies that seek to address this issue--premium rating, penalties/subsidies to encourage buying insurance, risk adjustment, and contract regulations--all have understandable underlying purposes. But they add a great deal of complexity to an already messy market, and only partially address the underlying problems.

The ongoing US shift in how public health insurance is increasingly provided through private health insurance firms should influence the discussion over a "single payer" approach to health care.

Traditionally, the term "single payer" has referred to direct government payments to health care providers. In this sense, a true advocate of "single payer" in the traditional meaning cannot advocate "Medicare for all," at least not as Medicare is currently constructed, because a large part of Medicare (both the choice section in Part C and the pharmaceutical benefits in Part D) is no longer a single-payer system in the traditional meaning of the term. Similarly, an expansion of Medicaid is only partly an expansion of government paying health care providers directly. A supporter of "single payer" should presumably oppose both the state-level insurance "marketplaces" and the provision of public health insurance benefits through private-sector plans.

Conversely, those who oppose "single payer" should contemplate whether their concerns about government control over health care are ameliorated to some extent if the beneficiaries of those programs have a degree of choice across health insurance firms and health providers--albeit in regulated markets.