
The Quagmire

How American medicine is destroying itself.

In 1959, the great biologist René Dubos wrote a book called Mirage of Health, in which he pointed out that “complete and lasting freedom from disease is but a dream remembered from imaginings of a Garden of Eden.” But, in the intervening decades, his admonition has largely been ignored by both doctors and society as a whole. For nearly a century, but especially since the end of World War II, the medical profession has been waging an unrelenting war against disease—most notably cancer, heart disease, and stroke. The ongoing campaign has led to a steady and rarely questioned increase in the disease-research budget of the National Institutes of Health (NIH). It has also led to a sea change in the way Americans think about medicine in their own lives: We now view all diseases as things to be conquered. Underlying these changes have been several assumptions: that medical advances are essentially unlimited; that none of the major lethal diseases is in theory incurable; and that progress is economically affordable if well managed.

But what if all this turns out not to be true? What if there are no imminent, much less foreseeable, cures for some of the most common and most lethal diseases? What if, in individual cases, not all diseases should be fought? What if we are refusing to confront the painful likelihood that our biological nature is not nearly as resilient or open to endless improvement as we have long believed?

Let us begin by pointing to some unpleasant realities, starting with infectious disease. Forty years ago, it was commonly assumed that infectious disease had all but been conquered, with the eradication of smallpox taken as the great example of that victory. That assumption has been proved false—by the advent of HIV, for example, as well as by a dangerous increase in antibiotic-resistant microbes. Based on what we now know of viral disease and microbial genetics, it is reasonable to assume that infectious disease will never be eliminated but will, at best, become less prevalent.

Then there are chronic diseases, now the scourge of industrialized nations. If the hope for eradication of infectious disease was misplaced, the hopes surrounding cures for chronic diseases are no less intoxicating. Think of the “war on cancer,” declared by Richard Nixon in 1971. Mortality rates for the great majority of cancers have fallen slowly over the decades, but we remain far from a cure. No one of any scientific stature even predicts a cure for heart disease or stroke. As for Alzheimer’s, shortly before President Obama approved a fresh effort to find better treatments, a special panel of the NIH determined that little real progress has been made in recent years toward finding ways to delay the onset of major symptoms. And no one talks seriously of a near-term cure.

One of the hardiest hopes in the chronic-disease wars has been that of a compression of morbidity—a long life with little illness followed by a brief period of disability and then a quick death. A concept first introduced by James Fries in 1980, it has had the special attraction of providing a persuasively utopian view of the future of medicine. And it has always been possible to identify very old people who seemed to have the good fortune of living such a life—a kind of end run on medicine—and then dying quickly. But a recent and very careful study by Eileen Crimmins and Hiram Beltran-Sanchez of the University of Southern California has determined that the idea has no empirical support. Most of us will contract one or more chronic diseases later in life and die from them, slowly. “Health,” Crimmins and Beltran-Sanchez write, “may not be improving with each generation” and “compression of morbidity may be as illusory as immortality. We do not appear to be moving to a world where we die without experiencing disease, functioning loss, and disability.”

Average life expectancy, moreover, after increasing steadily for many decades, now shows signs of leveling off. S. Jay Olshansky, a leading figure in longevity studies, has for some years expressed skepticism about the prospect of an indefinite increase in life expectancy. He calls his position a “realist” one, particularly in contending that it will be difficult to push the average beyond 85. He also writes that it is “biased” to assume that “only positive influences on health and longevity will persist and accelerate.” That view, he notes, encompasses a belief that science will surely keep moving on a forward track—a projection that is not necessarily true. Simply look at the “breakthroughs” that have been predicted for such scientific sure things as stem-cell technology and medical genetics—but have yet to be realized. These breakthroughs may eventually happen, but they are chancy bets. We have arrived at a moment, in short, where we are making little headway in defeating various kinds of diseases. Instead, our main achievements today consist of devising ways to marginally extend the lives of the very sick.

There are many ways of responding to this generally pessimistic reading of medical innovation in recent years. The most common is simply to note all the progress that has been made: useful new drugs, helpful new devices and technologies, decreased disability, better ways of controlling pain, and so on. And it is certainly true that some aspects of medicine have made enormous strides over the past few decades. Some of these strides, in fact, have taken place in the very areas—such as cardiac and infectious diseases (for instance, treatment of HIV)—in which so much of the outlook remains otherwise unpromising. One of us was the beneficiary of a life-saving heart operation at age 78, of a kind that did not exist a decade ago (and both of us celebrated our eightieth birthdays this past year). Americans do live longer, by eight to nine years since 1960; a great range of treatments are available for our illnesses, mild or severe; our pain is better relieved; and our prospects for living from youth to old age have never been greater.

It might also be said that there is no reason to believe that cures for infectious and chronic diseases cannot eventually be found; it is just taking longer than expected and the necessary knowledge for breakthroughs seems to be slowly accumulating. Or it might be said that more people living longer, though sick, is a not inconsiderable triumph.

These advances, however, should be balanced against another factor: the insupportable, unsustainable economic cost of this sort of success. Twenty years from now, the maturation of the baby boom generation will be at flood tide. We will have gone from 40 million Americans over the age of 65 in 2009 to 70 million in 2030. This will put enormous pressure on the health care system, regardless of whether Obama’s reform efforts, or even Paul Ryan’s, prove successful. The chronic diseases of the elderly will be the front line. Because we cannot cure those diseases at present, and cannot reasonably hope for cures over the next few decades, the best we will be able to do in many cases, especially those of the elderly and frail, is extend people’s lives for a relatively short period of time—at considerable expense and often while causing serious suffering to the person in question.

Consider that a National Cancer Institute study projects a 39 percent increase in cancer costs between 2010 and 2020. That figure represents in great part our success in extending the lives of those already afflicted with the disease. Kidney dialysis, too, has become an economic quagmire: a 150 percent increase in the number of dialysis patients is expected over the next decade. And the cost of Alzheimer’s disease is projected to rise from $91 billion in 2005 to $189 billion in 2015 to $1 trillion in 2050 (twice current Medicare expenditures for all diseases).
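
To see how fast those projected costs are compounding, the endpoints above can be converted into annual growth rates. Here is a minimal back-of-the-envelope sketch in Python; the dollar figures are the ones quoted above, and the arithmetic is ours and purely illustrative:

```python
# Back-of-the-envelope arithmetic, illustrative only: the compound
# annual growth rate (CAGR) implied by the cost projections quoted
# above. The endpoint figures are the article's; the math is ours.

def cagr(start: float, end: float, years: int) -> float:
    """Annual growth rate that turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Cancer costs: a projected 39 percent increase between 2010 and 2020.
print(f"Cancer, 2010-2020:      {cagr(1.00, 1.39, 10):.1%} per year")  # ~3.3%

# Alzheimer's costs, in billions of dollars.
print(f"Alzheimer's, 2005-2015: {cagr(91, 189, 10):.1%} per year")     # ~7.6%
print(f"Alzheimer's, 2015-2050: {cagr(189, 1000, 35):.1%} per year")   # ~4.9%
```

Even the gentlest of these implied trajectories, roughly 3.3 percent a year for cancer, runs at about the rate of GDP growth, and the Alzheimer's figures run well above it.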

In a 2006 article, Harvard economist David Cutler and colleagues wrote, “Analyses focused on spending and on the increase in life expectancy beginning at 65 years of age showed that the incremental cost of an additional year of life rose from $46,800 in the 1970s to $145,000 in the 1990s. ... If this trend continues in the elderly, the cost-effectiveness of medical care will continue to decrease at older ages.” Emory professor Kenneth Thorpe and colleagues, summing up some Medicare data, note that “more than half of beneficiaries are treated for five or more chronic conditions each year.” Among the elderly, the struggle against disease has begun to look like the trench warfare of World War I: little real progress in taking enemy territory but enormous economic and human cost in trying to do so.

In the war against disease, we have unwittingly created a kind of medicine that is barely affordable now and forbiddingly unaffordable in the long run. The Affordable Care Act might ease the burden, but it will not eliminate it. Ours is now a medicine that may doom most of us to an old age that will end badly: with our declining bodies falling apart as they always have but devilishly—and expensively—stretching out the suffering and decay. Can we conceptualize something better? Can we imagine a medicine that is more affordable—that brings our health care system’s current cost escalation, now in the range of 6 percent to 7 percent per year, down to 3 percent, which would place it in line with the annual rise in GDP? Can we imagine a system that is less ambitious but also more humane—that better handles the inevitable downward spiral of old age and helps us through a somewhat more limited life span as workers, citizens, and parents?
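
How much hangs on that difference between 6 to 7 percent and 3 percent? Compounding supplies the answer. A minimal sketch of the arithmetic, again in Python and again purely illustrative:

```python
# Illustrative compounding, not a forecast: what the gap between
# roughly 7 percent and 3 percent annual cost growth amounts to
# over twenty years.

def growth_factor(rate: float, years: int) -> float:
    """Total multiple after compounding at `rate` for `years` years."""
    return (1 + rate) ** years

for rate in (0.07, 0.03):
    print(f"At {rate:.0%} per year, costs multiply "
          f"{growth_factor(rate, 20):.1f}x over 20 years")
```

At 7 percent a year, costs nearly quadruple over two decades; at 3 percent, they do not quite double. That is the scale of the change these questions contemplate.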

The answer to these questions is yes. But it will require—to use a religious term in a secular way—something like a conversion experience on the part of physicians, researchers, industry, and our nation as a whole.


Vannevar Bush, a scientific advisor to President Franklin D. Roosevelt, famously called science an “endless frontier.” He was right then, and he remains right now. But scientific progress to extend that frontier is not an endlessly affordable venture. Health care, like the exploration of outer space, will always be open to progress, but we understand that putting humans on Mars is not, at present, economically sensible. We have settled for a space station and the Hubble telescope. We must now comparably scale down our ambitions for medicine, setting new priorities in light of the obstacles we have encountered.

We need, first of all, to change our approach to research. A key ingredient of the economic engine of medical progress has been the endless issuing of promissory notes by scientists and the medical industry, which are then amplified by the media. The Human Genome Project, stem-cell research, highly touted “breakthroughs”—all have raised hopes that we are on the verge of saving hundreds of millions of lives. But these promises have not materialized. A more realistic rhetoric is necessary, one that places a heavier emphasis on caring for the sick, not curing them.

The traditional open-ended model of medical research, with the war against death as the highest priority, should give way to a new goal: aiming to bring everyone’s life expectancy up to an average age of 80 years (already being approached), reducing early death, and shifting the emphasis in the direction of improving the quality of life of those in every age group. The highest priority should be given to children, the next-highest to those in their adult years (the age group responsible for managing society), and the lowest to those over 80.

Because we are not curing most diseases, we need to change our priorities for the elderly. Death is not the only bad thing that can happen to an elderly person. An old age marked by disability, economic insecurity, and social isolation is also a great evil. Instead of a medical culture of cure for the elderly, we need a culture of care, notably a stronger Social Security program and a Medicare program much more heavily weighted toward primary care. Less money, that is, for late-life technological interventions and more for preventive measures and independent living. Some people may die earlier than they do now, but they will die better deaths.

Bringing about these changes would require shifts in the medical profession. Imagine a health care pyramid. At the lowest and broadest level is public health (health promotion and disease prevention). The next level is primary medicine and emergency care. The level above that consists of short-term hospital care for acute illness. And the top, narrowest level is high-technology care for the chronically ill. It is essential that we find ways to push the ever-expanding care at the highest level down to the lower levels, particularly to the public-health and primary-care levels. The standards for access to care at the highest level should be strict, marked by a decent chance of a good outcome at a reasonable cost.

Along these lines, one obvious step is to encourage more medical students to become primary-care physicians rather than specialists. Though there is nothing new or radical in such a proposal, it will not be easy to implement. Medical education must be better subsidized to reduce the debt of young doctors, which discourages many from entering family practice and tempts them toward ever-narrowing and more lucrative specialties.

Yet the most difficult shift will have to take place not among doctors, but among the public as a whole. The institution of medicine is enormously popular with the public. None of us likes being sick or threatened with death. Modern technology has brought us many benefits that enhance the prestige and social power of medicine. But the public must be persuaded to lower its expectations. We must have a society-wide dialogue on what a new model of medicine will look like: a model that will be moderate in its research aspirations, and dominated by primary care and neighborhood clinics staffed mainly by family physicians, paramedics, and nurses for routine health needs, and by organized teams for acute care. If this society-wide dialogue is to be successful, doctors will have to call repeated attention to the economic and social realities of the endless war on disease. They will have to remind the public that this war cannot be won—or can achieve only small, incremental victories—and that, if we are not careful, we will harm ourselves in trying.

Finally, we need a health care system that is far more radically reformed than the one envisioned by the Affordable Care Act (ACA). Even if the ACA succeeds down to the last detail, it is unlikely to bring the annual rise in health care costs down to the annual increase in GDP. In their 2011 annual report, the Medicare program trustees project insolvency by 2024. The only reliable way of controlling costs has been the method used by most other developed countries: a centrally directed and budgeted system, oversight of the use of new and old technologies, and price controls. Medicine cannot continue trying to serve two masters: providing affordable health care and turning a handsome profit for its middlemen and providers.

Even so, those countries with less costly but more effective health care systems are in trouble as well—not as much as we are, but enough to inspire constant reforms. Every health care system has to cope with aging populations, new technologies, and high patient expectations. However a health care system is organized, the open-ended idea of medical progress is the deepest driver of health care costs. It dooms us to live too much of our later years in poor and declining health, and to die inch by inch from failure of one organ after another. Is it really a medical benefit, for ourselves or our families, to be doomed by frailty to a life that makes even walking a hazard? Or to spend our last years in and out of doctors’ offices and ICUs? Those results are what progress has given us—a seeming benefit that has become a serious economic and personal burden.

“All politics,” the late and wise Tip O’Neill once said, “is local.” It can no less be said that “all medicine is personal.” Our own experience in trying to talk about the kind of wholesale reforms we think necessary for medicine’s future is that people are far more concerned about what those reforms will mean for themselves and their families than for something as general and abstract as the health care system. Their heads tell them that rationing and limits will probably be necessary, but they reject these ideas when it means that a loved one might not have what is needed to be kept alive, even in a bad or terminal state. Unhappily, however, some rationing and limit-setting will be necessary. There is no way the Medicare program can survive unless it both sharply cuts benefits and raises taxes. Benefits can be cut directly or indirectly—directly by reducing payments for treatments, or indirectly by raising co-payments and deductibles to a painful level, sufficient to discourage people from insisting on them.

But our broader point is not really about policy changes such as rationing. It is, put simply, that substantial shifts will be needed in the way our culture thinks about death and aging. There is good evidence that if physicians talk candidly and with empathy to critically ill patients and their families, telling them what they are in for if they want a full-court press, minds can be changed. That, in turn, means that physicians themselves will have to acknowledge their limits, explore their own motivations, and be willing to face patients with bad news as a way of avoiding even worse treatment outcomes. The ethic of medicine has long been to inspire unbounded hope in the sick patient and the same kind of hope in medical research. Sobriety and prudence must now take their place.

The problems we are describing are, of course, hardly the only flaws within the U.S. medical system. Among the concerns most commonly cited for criticism are: the perceived deterioration of the doctor-patient relationship; the state of care at the end of life; the geographic maldistribution of health care; malpractice and tort law; physician entrepreneurship; the emphasis on profit in the insurance and pharmaceutical industries; the duplication of resources among competing health facilities; multiple tiers of access and care, largely determined by income; the waste of money, resources, and personnel within the system; and costly overspecialization.

Sometimes—at all times, actually—the problems seem overwhelming. It is not only the complexity of the issues that makes them appear insoluble but also the way each intertwines with all the others, inevitably exacerbating the whole. The entire web of interconnected, complicating factors has long since reached the bewildering point where no issue can be addressed, or even approached, in isolation. The complexities are enough to make every stakeholder in American medicine—namely, all of us—throw up our hands in desperation.

But there is, in fact, a solution: a top-down, bottom-up study of the entire U.S. health system, with a view toward taking it apart and reconstructing it in a manner adapted to our nation’s needs—a multiyear, multidisciplinary project whose aim would be to change the very culture of American medicine. The inadequate, inequitable, and financially insupportable system that has been jerry-built and constantly band-aided during recent decades will no longer do. Nor will incremental policy reforms, no matter how well-intentioned.

There is a historical precedent for such a project. At the turn of the twentieth century, U.S. medical education was a disgrace, and care of the sick, except in a few facilities, was almost as bad. Something had to be done. In 1908, the newly founded Carnegie Foundation for the Advancement of Teaching stepped in, hiring a 42-year-old educator named Abraham Flexner to embark on a study of medical education in North America. His report, published two years later, became a clarion call for drastic change. Subsequently, armed with a total of $600 million provided by the Carnegie and Rockefeller philanthropies and other contributors, Flexner visited 35 schools in the United States and Canada, and provided the financial wherewithal for the changes so desperately needed. The result of this remarkable effort was that, within ten years, U.S. medical schools became the prototype upon which all others tried to fashion themselves; our nation’s medicine, like the vastly improved institutions that gave it new life, became the gold standard for the world.

We can do this kind of thing again. It will take political will; unyielding leadership; vast amounts of money, both from government and private philanthropy; and extreme patience. Above all, it will take the confidence of the American people that a more humane, more affordable kind of medicine is possible.

Daniel Callahan is president emeritus of the Hastings Center and the author of Taming the Beloved Beast: How Medical Technology Costs Are Destroying Our Health Care System. Sherwin B. Nuland is a fellow of the Hastings Center and a retired clinical professor of surgery at Yale University. He is the author of How We Die and The Art of Aging. This article originally ran in the June 9, 2011, issue of the magazine.
