You Can't Enlarge The Pie: Six Barriers To Effective Government

by Max H. Bazerman

Paperback (Reprint)

$21.99 

Overview

Why do our government leaders continually make decisions and craft policies that everybody knows are foolish? Because they, like the rest of us, remain trapped in foolish and unproductive habits of thinking. "You Can't Enlarge the Pie" analyzes the unspoken assumptions that lead to bad policy, wasted resources, and lost lives, and shows exactly why they're wrong. With fascinating case studies and clear, compelling analysis, it dissects six psychological barriers to effective government:

1. Do no harm.
2. Their gain is our loss.
3. Competition is always good.
4. Support our group.
5. Live for the moment.
6. No pain for us, no gain for them.

By freeing ourselves from the narrow way we evaluate our government leaders, we can learn to judge their performance more as that of business leaders is judged: by the overall health of their organizations.

Product Details

ISBN-13: 9780465006328
Publisher: Basic Books
Publication date: 08/22/2002
Edition description: REPRINT
Pages: 288
Product dimensions: 5.90(w) x 9.00(h) x 0.90(d)

About the Author

Max H. Bazerman is Jesse Isidor Straus Professor of Business Administration at the Harvard Business School and lives in Cambridge, Massachusetts.

Jonathan Baron is Professor of Psychology at the University of Pennsylvania and lives in Philadelphia.

Katherine Shonk is a research associate at Harvard Business School and lives in Cambridge, Massachusetts.

Read an Excerpt

You Can't Enlarge the Pie

Six Barriers to Effective Government


By Max H. Bazerman
Basic Books

Copyright © 2002 Max H. Bazerman
All rights reserved.

ISBN: 9780465006328



Chapter One


Do No Harm


Since the early 1960s, millions of American women have undergone surgery for silicone breast implants, either as reconstruction after a mastectomy or for the cosmetic enlargement of normal breasts. These silicone pouches, usually filled with saline solution, are intended to improve the quality of life rather than its duration. The popularity of the procedure suggests that many women expect implants to enhance their confidence, well-being and enjoyment of life.

    When the U.S. Food and Drug Administration (FDA) began regulating breast implants in 1976, it acted on the assumption that these devices were safe. Meanwhile, in Japan, reports in medical journals were beginning to document illnesses, particularly connective-tissue disease, among prostitutes who had received direct injections of silicone or wax. In 1982, connective-tissue disease was reported in three Australian women with implants. Soon after, American women with breast implants who developed connective-tissue disease or another disorder began to initiate lawsuits against the manufacturers. In 1990, after the television show Face to Face with Connie Chung interviewed victims of these diseases and implicitly blamed the manufacturers and the FDA, Congressman Ted Weiss began public hearings to explore a possible link between breast implants and these ailments.

    Just a year later, in 1991, a jury awarded $7.34 million to a woman who claimed that her Dow-Corning implants had caused her connective-tissue disease. In 1992, after manufacturers failed to provide evidence of their safety, FDA Commissioner David Kessler placed a ban on most implants; at the same time, he assured women who already had them that there was no evidence of danger. The ban galvanized tens of thousands of women to launch lawsuits against implant manufacturer Dow-Corning; these were eventually consolidated into one class-action suit. Some of the women involved in the suit blamed their implants for poor health. Others had no sign of disease but joined the suit for fear of future illness. In 1994, Dow-Corning agreed to an initial settlement of $4.25 billion, at the time the largest class-action settlement in history. One billion dollars of the settlement went directly to the plaintiffs' lawyers. Under the terms of the settlement, a woman had only to present a doctor's diagnosis of some illness to receive a share of the money. Other women were allowed to file for their shares retroactively, making the per-person amount very small. Dow-Corning filed for bankruptcy in 1995.

    Many of the facts of this case may already be familiar to you. But you may not have heard about the surprising scientific results that emerged—without much attention from the press—when implant manufacturers began to put their products to the test. This research began at the time of the lawsuit and continues today. Consider that about 1 percent of adult American women have implants, and about 1 percent of adult American women have connective-tissue disease. If the implants had no relation to the disease, we would expect about 10,000 American women (1 percent of 1 percent of the 100 million women in the United States) to have both. Several studies showed that this rate was approximately correct. The only study that showed a small association between implants and disease was conducted after the negative publicity; the criterion for "disease" was a self-report questionnaire, raising the possibility that some women might have been more likely to report signs of disease after hearing that it could be caused by implants.
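    The base-rate arithmetic here is worth making explicit. A minimal sketch, using only the figures quoted above and the assumption that implants and disease are statistically unrelated:

```python
# Expected number of women with BOTH implants and connective-tissue
# disease, assuming the two are statistically independent.
adult_women = 100_000_000   # adult women in the United States
p_implants = 0.01           # about 1 percent have implants
p_disease = 0.01            # about 1 percent have connective-tissue disease

# Under independence, P(both) = P(implants) * P(disease).
expected_both = adult_women * p_implants * p_disease
print(f"Expected women with both: {expected_both:,.0f}")  # 10,000
```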

    By 1999, a panel appointed by the Institute of Medicine had reviewed the existing evidence and concluded that, to date, there was no connection between implants and the diseases blamed on them. Meanwhile, as more and more women have filed claims, the amount paid out by Dow-Corning in the class-action settlement has surpassed $7 billion. The fear of unfounded lawsuits led DuPont to refuse to supply Dacron polyester for vascular grafts. In 1994, for the first time, foreign suppliers refused to supply American manufacturers with Dacron. Because silicone is used in catheters, pacemakers and artificial-heart valves, it is possible that these products could some day be in short supply. After all, in the aftermath of a lawsuit based on no scientific evidence whatsoever, why would any company supply a product that could bring about its financial ruin? It would not be surprising if firms chose to steer clear of this risky business altogether.

    Dow-Corning is not the only company that has paid huge settlements to victims of diseases its products did not cause. Drug companies have cut back on research on contraceptives and vaccines because these products, given to healthy people, inspire lawsuits. Obstetricians are switching to the less risky field of gynecology, and the specialists who remain sometimes refuse to deliver lawyers' babies. In addition, companies guilty of outrageous crimes spend millions on attorneys' fees to protect themselves from litigation.

    An obscure backwater of the law since the Middle Ages, the law of torts has today become almighty. Every day, plaintiffs win millions of dollars in damages for harms that cannot be considered crimes, and our existing tort system funnels as much money to lawyers as it does to victims. These fees could be avoided if parties were able to settle disputes through efficient negotiation.

    Why don't we change the American legal system? The underlying psychological reason is that we pay too much attention to the losses that might result from action or change, but we ignore potential gains. As a consequence, we refuse to take small risks in order to reduce a large risk. People often resist changing jobs, homes or relationships because, biased toward the status quo, they focus on what they will lose rather than what they might gain.

    The same psychological forces that cause people to adhere to the status quo also make them resist government policies that might improve matters for them as individuals. Citizens and government officials oppose tort reform proposals because of the vivid but small risk that recouping losses in court will become more difficult. This fear of risk overshadows the greater benefits they will gain in the form of lower health care costs, tax reductions and product availability. Through wise tradeoffs, citizens will discover where their true self-interest lies, and they will be more accepting of government policies that advance this interest.


The Way We Look at Risk


On December 21, 1988, 270 people died when a bomb planted by a terrorist on Pan American Airways Flight 103 exploded over Lockerbie, Scotland. In response to the tragedy, the U.S. Federal Aviation Administration (FAA) tightened regulations on air travel, including more thorough baggage checks and mandatory early arrival for travelers on international flights.

    In 1989, reporter Henry Fairlie made a conservative estimate of the cost per life saved by the new regulations. Taking into account that the main cost factor is extra time, Fairlie figured lost time to passengers at what was then the minimum wage, $3.35 per hour, even though most international travelers earn quite a bit more. He generously assumed that the new regulations would prevent all deaths from terrorism, which had averaged sixty-one each year since 1976. Given the 221,471,000 people who take international flights each year, this comes to $6,081,375.81 per life saved.
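    The arithmetic behind Fairlie's figure can be reconstructed. The extra delay per passenger is not stated above; a half hour per traveler is an assumption, but it reproduces the quoted number almost exactly:

```python
# Reconstruction of Fairlie's 1989 cost-per-life estimate.
# ASSUMPTION: each passenger loses an extra half hour; this figure is
# inferred, not stated in the text, but it matches the quoted result.
passengers_per_year = 221_471_000   # international air travelers per year
extra_hours = 0.5                   # assumed extra time per passenger
wage = 3.35                         # 1989 federal minimum wage, $ per hour
lives_saved_per_year = 61           # average annual deaths from terrorism

total_cost = passengers_per_year * extra_hours * wage
print(f"Cost per life saved: ${total_cost / lives_saved_per_year:,.2f}")
# -> $6,081,375.82, within a cent of the figure quoted above
```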

    Perhaps this is not an outrageous amount to spend to save a human life. In other realms, the cost per life saved by governmental regulation is much higher. Still, there are far less expensive ways of saving lives, such as reducing air and water pollution. In general, developed nations spend far too much money on reducing certain risks. For example, the complete removal of asbestos from American school buildings would save about 400 lives over 40 years, at an estimated cost of about $100 billion, or $250 million per life saved. Of course, because of the danger to asbestos-removal workers, it's possible this $100 billion might save no lives at all. Estimates of the cost of saving a given number of lives are imprecise; but even if each figure given in the examples above were off by a factor of ten, the amounts devoted to reducing certain risks would still be shockingly high.
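    To put these two programs on a single scale, consider lives saved per billion dollars spent; a rough sketch, again using only the figures quoted above:

```python
# Lives saved per $1 billion under each program (figures quoted above).
cost_per_life = {
    "post-Lockerbie airport security": 6_081_375.81,
    "school asbestos removal": 250_000_000.00,
}
for program, cost in cost_per_life.items():
    print(f"{program}: {1e9 / cost:,.1f} lives per $1 billion")
# airport security:  ~164.4 lives per $1 billion
# asbestos removal:    ~4.0 lives per $1 billion
```

The same billion dollars saves roughly forty times as many lives in one program as in the other, which is exactly the kind of tradeoff this chapter asks us to notice.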

    Why fixate on excessive expenditures? Because resources such as money and time come in limited quantities. Just as companies generate a finite amount of profit, governments collect a limited amount of funds each year. Government officials and employees can devote only a fixed number of hours to solving a problem. Once money and time run out, they're gone for good.

    Most changes to government policy carry some degree of risk. Ideally, reforms reduce one risk more than they increase another, achieving an overall improvement for each individual. Yet citizens often resist reforms, believing that they will suffer from the increased risk. Why, for instance, do we have so few organs to allocate to those who need them? Because people focus on the contributions they will be making—and by association, the risks they will be taking—without thought for the benefits they may receive as potential organ recipients. The solution may be a system in which people accept or reject both roles at once. An even simpler solution would be to assume that all citizens are both potential donors and potential recipients unless they specifically opt out of the program. These measures would frame organ donation as a cooperative enterprise in which the benefits to the individual far outweigh the costs.

    There are many areas in which government has failed to create optimal legislation for society, and there are numerous reasons for these failures, including partisanship, political processes, special-interest groups and incompetence. We will examine many of these in later chapters. For now, we will focus on the most central explanation: the failure of the human mind to make wise tradeoffs. Citizens and legislators make a variety of systematic cognitive mistakes that lead to suboptimal legislation.

    Opportunities for beneficial trades abound: We could redirect the time and money spent on very expensive measures to control one risk toward reducing other risks. This would save both money and lives. Instead, we tend to react to each catastrophe with new regulations, which pile up to massive size and become institutionalized. When someone suggests the repeal of an existing regulation, activists rush to its defense regardless of its cost inefficiency.

    The poor distribution of government funds is not a case of the government's ignoring the wishes of the people. When elected officials vote to spend huge sums on regulations aimed at risk reduction, they are usually responding to the demands of their constituents. The problem is that citizens look at risk through the lens of a variety of irrational biases. Because these biases favor the status quo, they become roadblocks on the path of beneficial change—the type of change that requires a small increase in one risk in return for a large decrease in another risk.


The Omission Bias


Suppose you have a 10 percent chance of catching a new strain of flu virus. The only available vaccine completely prevents this type of flu, but it has a 5 percent chance of causing symptoms identical to those it is supposed to prevent, and with the same severity. If all other factors (such as cost) were the same, would you get the vaccine? Many people would not. They would be more concerned about the risk of harm from action—the 5 percent risk of an adverse reaction to the vaccine—than about the risks of inaction, or the 10 percent risk of catching the flu without the vaccine. This is true even though the vaccine cuts the overall chance of flu symptoms in half, from 10 percent to 5 percent. Although the flu example is hypothetical, the same bias affects people's decisions about vaccination in real life.
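    The expected-harm comparison behind this example is a single line of arithmetic. A minimal sketch with the percentages above:

```python
# Chance of flu-like symptoms under each choice (figures from the text).
p_flu_if_unvaccinated = 0.10     # 10% chance of catching the flu
p_reaction_if_vaccinated = 0.05  # 5% chance of identical vaccine symptoms

print(f"Symptom risk if you do nothing: {p_flu_if_unvaccinated:.0%}")
print(f"Symptom risk if you vaccinate:  {p_reaction_if_vaccinated:.0%}")
# The vaccine fully prevents the flu, so acting halves the overall risk;
# preferring the 10% harm of omission is the omission bias.
```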

    This irrational preference for harms of omission over harms of action is known as the omission bias. More than any other cognitive error, the omission bias pervades our decisions regarding risk. When contemplating risky choices, many people follow the rule of thumb, "Do no harm." Implicit in this advice is the notion that "do" means "do through action," making harms of omission easy to ignore. Our susceptibility to the omission bias means that every year many more people catch the flu than need to. The bias is not limited to government officials who oppose the wishes of super-rational citizens; nor is it the case that citizens need to be set straight by all-knowing officials. While the strength of the bias varies from person to person, it is found in every group of people.


The Status Quo Bias


One feature of the omission bias is that it usually supports the status quo. When contemplating a change, people are more likely to attend to the risk of change than to the risk of failing to change. Taking losses more seriously than gains, they will be motivated to preserve the status quo.

    A classic psychology experiment demonstrates the pitfalls of the status quo bias. Students were divided into three groups: "sellers," "buyers," and "choosers." The sellers were each given a coffee mug from their university bookstore and were asked to indicate on a form whether they would sell the mug at each of a series of prices ranging from $0 to $9.25. On a similar form, buyers indicated whether they were willing to buy a mug at each price in the same range. The choosers were asked to choose between a mug (which they did not "own") and various amounts of money. Sellers valued the mug at a median price of $7.12, while buyers gave it a median value of only $2.87 and choosers a median value of $3.12. Motivated to avoid a "loss" and maintain the status quo, sellers irrationally overvalued the mug.

    The status quo bias is a general principle of decision making that aggravates many of the problems discussed in this book. Any kind of reform requires some losses and some gains. If it is a good reform, gains outweigh losses. But because people worry more about losses, they will tend to oppose the reform.


The Preference for Natural Risk


A final cognitive bias that interferes with rational decision making regarding risk is the human preference for natural risk over artificial risk. In general, "don't mess with nature" is a good rule for human beings to follow. We have evolved by adapting to the natural world and learning that tampering with nature—by bleeding people as a medical treatment, for example, or destroying entire forests for firewood—can lead to trouble. But, as often happens with such cognitive shortcuts, we fail to recognize important exceptions to the rule. In particular, we follow it even when the consequences of letting nature take its course would be worse than the consequences of altering it by "artificial" means.

    Our preference for natural risk has two effects. First, we react to the same consequences differently depending upon whether they are brought about by humans or by nature. Specifically, we are more tolerant of "natural" disasters than of artificial ones. "I don't mind natural extinctions, but I'm not too enthusiastic about extinctions that are directly caused by man," commented a subject in a study of environmental values. This bias toward nature leads us to ignore opportunities for lessening the devastation of natural risks such as hurricanes, earthquakes, floods and epidemics. People tend to regard such disasters as the inevitable "will of God." Of course, we cannot prevent hurricanes and earthquakes, but we can do a great deal to protect ourselves against their worst consequences (such as not building homes on sandbars in hurricane-prone regions).

    In general, people are more willing to pay to reduce risks when the source of harm is human error than when it is a nonhuman natural source. Subjects were willing to contribute about $19 to an international fund to save Mediterranean dolphins when the dolphins were "threatened by pollution," but only $6 when they were "threatened by a new virus"—even though the same number of dolphins were expected to die in both cases. Similarly, subjects in one study (some of whom were judges) thought that workmen's compensation paid by a state panel should be greater when an injury is caused by a drug rather than by a natural disease. These questionnaire studies suggest that economic allocations are based on the psychological properties of human judgment rather than on the amount of benefit from the allocation. This leads to a misallocation. We could spend the same money more fairly by dividing it equally among those with the same need.

    The second effect of the preference for natural risk is our tendency to be suspicious of new technology, even when we have every reason to believe that the technology will improve on natural outcomes. Synthetic chemicals added to food are often banned because they cause cancer in laboratory animals. Yet when natural foods are broken into their constituent chemicals, they too have been found to cause cancer in lab animals. Caffeic acid, for example, is a carcinogen found in coffee, lettuce, apples, pears, plums, celery, carrots and potatoes. One review found that 94 percent of synthetic carcinogens were subject to government regulation, compared to 41 percent of a sample of natural carcinogens; the review also showed that the average risk to humans from synthetic chemicals was lower than from natural ones. Researchers have argued that animal tests require such high doses of the chemicals that cancer is an almost inevitable result of increased cell division brought on by an overdose of the chemical in the animal's body. At lower doses, a given chemical typically does not cause cell poisoning; for this reason, animal tests may be highly inaccurate. If these tests provide inaccurate and alarming information about natural chemicals, they probably do the same for synthetic chemicals.

    The irrationality of the preference for natural risk is illustrated by the human tendency to consider some technologies "natural" simply because they are old. For example, although most of our crops are the result of breeding—a technology—people resist further improvements from biotechnology on the grounds that they are "unnatural." It is inefficient to spend money reducing the risk of artificial chemicals when these risks are no more serious than those of nature itself. If expenditures are needed at all, we should treat all risks equally and simply strive to save the most lives per million dollars spent.

    We will consider these three biases—omission, status quo and preference for natural risk—in the context of cases in which gaining a large social benefit requires acceptance of a small risk. The examples we will explore are:


1. The benefit is a product and the risk is some rare harm caused by that product—or, as in the case of breast implants, merely believed to have been caused by the product. These products are sometimes withdrawn or withheld because of fear of lawsuits.

2. The benefit is an organ transplant for a patient who would die without it. The risk is removing organs from a person's dead body without knowing the wishes of the deceased. Potential donors fail to recognize that their omission—refusal to consent to donation—may cause irreparable harm. From the perspective of those who will die waiting for an organ transplant, inaction causes at least as much harm as action.

3. The benefit is the development and approval of a beneficial drug, and the risk is harmful side effects from the drug. The introduction of new drugs has been slow in the United States because of the fear of harm resulting from action, while the harm caused by inaction has traditionally been ignored.


    All three cases involve an imbalance in which citizens and policymakers worry too much about one risk and not enough about another. To restore balance, we, as a society, will have to reduce one risk greatly and increase the other risk by a small amount. The human tendency to be more concerned about losses than gains often prevents us from taking such potentially beneficial action. We will explore these three dilemmas, consider practical solutions to each one and offer a better way to think about tradeoffs across a broad array of policy decisions.


Lawsuits and Corporate Inaction


In his book Galileo's Revenge: Junk Science in the Courtroom, Peter Huber presents several alarming signs that something is wrong with the American legal system.


· Cancer victims have sued successfully for damages after developing cancer at the site of an injury, even though the best evidence indicated that the association was coincidental.

· The Audi 5000 was one of the safest cars on the road, but the close proximity of the brake and the accelerator—a safety feature on the highway—may have led some drivers to step on the accelerator instead of the brake in parking lots or garages. Drivers argued successfully in court, despite strong scientific evidence against their claim, that the Audi accelerated suddenly on its own. Audi sales plummeted.

· Bendectin, an anti-nausea drug that has probably saved the lives of many mothers and infants in cases of severe morning sickness, was withdrawn from the market after courts decided that it caused birth defects despite good scientific evidence that it did not. Since then, research on drugs for pregnant women has declined drastically.


    Huber blames these events on judges' and juries' abandonment of scientific standards and tolerance for "junk science" in the courtroom. All sorts of shady characters are paid to pose as expert witnesses, regardless of their credentials. By leaving juries with the impression that "experts disagree," lawyers enable them to rationalize siding with the plaintiff purely out of sympathy. After all, who can truly judge which experts are credible?

    One common point among lawsuits gone haywire, from the Dow-Corning breast implants case to these triumphs of junk science, is that companies are sued for their actions—but never for inaction. No company will ever be taken to court for refusing to manufacture a silicone product or an anti-nausea drug. Thus, the omission bias is practically built into the law. The result? Lawsuits tend to encourage the omission bias in the companies themselves. If lawsuits were predictable and preventable, companies could fend them off by establishing adequate safety measures. But when companies can be sued for misfortunes they have not caused, they may refuse to take the risk. Large corporations, whose deep pockets make them an easy target of spurious claims, will be afraid to act when action involves developing a beneficial product that carries some degree of risk, as most medical products do. They will choose to avoid the healthcare industry and turn to other markets in which lawsuits are less likely.

    Of course, this reluctance decreases societal resources because the harms of new medical devices are often small compared to their benefits. Lawsuits are supposed to deter the production of goods that have few benefits and many harms, such as automobiles with faulty brakes or drugs that don't work as advertised. But when producers are sued for unforeseeable harms that are in any case minor when compared with the benefits, the lawsuits lead to the withdrawal of beneficial products, and the legal system has done a poor job of balancing harms and benefits. A prime example is the case of vaccines.



Continues...


Excerpted from You Can't Enlarge the Pie by Max H. Bazerman Copyright © 2002 by Max H. Bazerman. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
