Expert Political Judgment: How Good Is It? How Can We Know? - New Edition

Paperback (New Edition), $25.95

Overview

Since its original publication, Expert Political Judgment by New York Times bestselling author Philip Tetlock has established itself as a contemporary classic in the literature on evaluating expert opinion.


Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox—the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events—is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits—the single-minded determination required to prevail in ideological combat.


Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making. Now with a new preface in which Tetlock discusses the latest research in the field, the book explores what constitutes good judgment in predicting future events and looks at why experts are often wrong in their forecasts.


Product Details

ISBN-13: 9780691175973
Publisher: Princeton University Press
Publication date: 08/29/2017
Edition description: New Edition
Pages: 368
Product dimensions: 6.00(w) x 9.10(h) x 1.00(d)

About the Author

Philip E. Tetlock is the Annenberg University Professor at the University of Pennsylvania. His books include Superforecasting: The Art and Science of Prediction (with Dan Gardner) and Counterfactual Thought Experiments in World Politics (Princeton).

Read an Excerpt

CHAPTER 1

Quantifying the Unquantifiable

I do not pretend to start with precise questions. I do not think you can start with anything precise. You have to achieve such precision as you can, as you go along.

— Bertrand Russell

Every day, countless experts offer innumerable opinions in a dizzying array of forums. Cynics groan that expert communities seem ready at hand for virtually any issue in the political spotlight — communities from which governments or their critics can mobilize platoons of pundits to make prepackaged cases on a moment's notice.

Although there is nothing odd about experts playing prominent roles in debates, it is odd to keep score, to track expert performance against explicit benchmarks of accuracy and rigor. And that is what I have struggled to do in twenty years of research soliciting and scoring experts' judgments on a wide range of issues. The key term is "struggled." For, if it were easy to set standards for judging judgment that would be honored across the opinion spectrum and not glibly dismissed as another sneaky effort to seize the high ground for a favorite cause, someone would have patented the process long ago.

The current squabble over "intelligence failures" preceding the American invasion of Iraq is the latest illustration of why some esteemed colleagues doubted the feasibility of this project all along and why I felt it essential to push forward anyway. As I write, supporters of the invasion are on the defensive: their boldest predictions of weapons of mass destruction and of minimal resistance have not been borne out.

But are hawks under an obligation — the debating equivalent of Marquess of Queensberry rules — to concede they were wrong? The majority are defiant. Some say they will yet be proved right: weapons will be found — so, be patient — or that Baathists snuck the weapons into Syria — so, broaden the search. Others concede that yes, we overestimated Saddam's arsenal, but we made the right mistake. Given what we knew back then — the fragmentary but ominous indicators of Saddam's intentions — it was prudent to over- rather than underestimate him. Yet others argue that ends justify means: removing Saddam will yield enormous long-term benefits if we just stay the course. The know-it-all doves display a double failure of moral imagination. Looking back, they do not see how terribly things would have turned out in the counterfactual world in which Saddam remained ensconced in power (and France wielded de facto veto power over American security policy). Looking forward, they do not see how wonderfully things will turn out: freedom, peace, and prosperity flourishing in lieu of tyranny, war, and misery.

The belief system defenses deployed in the Iraq debate bear suspicious similarities to those deployed in other controversies sprinkled throughout this book. But documenting defenses, and the fierce conviction behind them, serves a deeper purpose. It highlights why, if we want to stop running into ideological impasses rooted in each side's insistence on scoring its own performance, we need to start thinking more deeply about how we think. We need methods of calibrating expert performance that transcend partisan bickering and check our species' deep-rooted penchant for self-justification.

The next two sections of this chapter wrestle with the complexities of setting standards for judging judgment. The final section previews what we discover when we apply these standards to experts in the field, asking them to predict outcomes around the world and to comment on their own and rivals' successes and failures. These regional forecasting exercises generate winners and losers, but they are not clustered along the lines that partisans of the left or right, or of fashionable academic schools of thought, expected. What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled with a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin's prototypical fox — those who "know many little things," draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life — than to Berlin's hedgehogs — those who "know one big thing," toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems. The net result is a double irony: a perversely inverse relationship between my prime indicators of good judgment and the qualities the media prizes in pundits — the tenacity required to prevail in ideological combat — and the qualities science prizes in scientists — the tenacity required to reduce superficial complexity to underlying simplicity.

Here Lurk (The Social Science Equivalent of) Dragons

It is a curious thing. Almost all of us think we possess it in healthy measure. Many of us think we are so blessed that we have an obligation to share it. But even the savvy professionals recruited from academia, government, and think tanks to participate in the studies collected here struggled to define it. When pressed for a precise answer, a disconcerting number fell back on Potter Stewart's famous definition of pornography: "I know it when I see it." And, of those participants who ventured beyond the transparently tautological, a goodly number offered definitions that were in deep, even irreconcilable, conflict. However we set up the spectrum of opinion — liberals versus conservatives, realists versus idealists, doomsters versus boomsters — we found little agreement on either who had it or what it was.

The elusive it is good political judgment. And some reviewers warned that, of all the domains I could have chosen — many, like medicine or finance, endowed with incontrovertible criteria for assessing accuracy — I showed suspect scientific judgment in choosing good political judgment. In their view, I could scarcely have chosen a topic more hopelessly subjective and less suitable for scientific analysis. Future professional gatekeepers should do a better job stopping scientific interlopers, such as the author, from wasting everyone's time — perhaps by posting the admonitory sign that medieval mapmakers used to stop explorers from sailing off the earth: hic sunt dracones.

This "relativist" challenge strikes at the conceptual heart of this project. For, if the challenge in its strongest form is right, all that follows is for naught. Strong relativism stipulates an obligation to judge each worldview within the framework of its own assumptions about the world — an obligation that theorists ground in arguments that stress the inappropriateness of imposing one group's standards of rationality on other groups. Regardless of precise rationale, this doctrine imposes a blanket ban on all efforts to hold advocates of different worldviews accountable to common norms for judging judgment. We are barred from even the most obvious observations: from pointing out that forecasters are better advised to use econometric models than astrological charts or from noting the paucity of evidence for Herr Hitler's "theory" of Aryan supremacy or Comrade Kim Il Sung's juche "theory" of economic development.

Exasperation is an understandable response to extreme relativism. Indeed, it was exasperation that, two and a half centuries ago, drove Samuel Johnson to dismiss the metaphysical doctrines of Bishop Berkeley by kicking a stone and declaring, "I refute him thus." In this spirit, we might crankily ask what makes political judgment so special. Why should political observers be insulated from the standards of accuracy and rigor that we demand of professionals in other lines of work?

But we err if we shut out more nuanced forms of relativism. For, in key respects, political judgment is especially problematic. The root of the problem is not just the variety of viewpoints. It is the difficulty that advocates have pinning each other down in debate. When partisans disagree over free trade or arms control or foreign aid, the disagreements hinge on more than easily ascertained claims about trade deficits or missile counts or leaky transfer buckets. The disputes also hinge on hard-to-refute counterfactual claims about what would have happened if we had taken different policy paths and on impossible-to-refute moral claims about the types of people we should aspire to be — all claims that partisans can use to fortify their positions against falsification. Without retreating into full-blown relativism, we need to recognize that political belief systems are at continual risk of evolving into self-perpetuating worldviews, with their own self-serving criteria for judging judgment and keeping score, their own stocks of favorite historical analogies, and their own pantheons of heroes and villains.

We get a clear picture of how murky things can get when we explore the difficulties that even thoughtful observers run into when they try (as they have since Thucydides) to appraise the quality of judgment displayed by leaders at critical junctures in history. This vast case study literature underscores — in scores of ways — how wrong Johnsonian stone-kickers are if they insist that demonstrating defective judgment is a straightforward "I refute him thus" exercise. To make compelling indictments of political judgment — ones that will move more than one's ideological soul mates — case study investigators must show not only that decision makers sized up the situation incorrectly but also that, as a result, they put us on a manifestly suboptimal path relative to what was once possible, and that they could have avoided these mistakes had they performed due diligence in analyzing the available information.

These value-laden "counterfactual" and "decision-process" judgment calls create opportunities for subjectivity to seep into historical assessments of even exhaustively scrutinized cases. Consider four examples of the potential for partisan mischief:

a. How confident can we now be — sixty years later and after all records have been declassified — that Harry Truman was right to drop atomic bombs on Japan in August 1945? This question still polarizes observers, in part, because their answers hinge on guesses about how quickly Japan would have surrendered if its officials had been invited to witness a demonstration blast; in part, because their answers hinge on values — the moral weight we place on American versus Japanese lives and on whether we deem death by nuclear incineration or radiation to be worse than death by other means; and, in part, because their answers hinge on murky "process" judgments — whether Truman shrewdly surmised that he had passed the point of diminishing returns for further deliberation or whether he acted impulsively and should have heard out more points of view.

b. How confident can we now be — forty years later — that the Kennedy administration handled the Cuban missile crisis with consummate skill, striking the perfect blend of firmness to force the withdrawal of Soviet missiles and of reassurance to forestall escalation into war? Our answers hinge not only on our risk tolerance but also on our hunches about whether Kennedy was just lucky to have avoided dramatic escalation (critics on the left argue that he played a perilous game of brinkmanship) or about whether Kennedy bollixed an opportunity to eliminate the Castro regime and destabilize the Soviet empire (critics on the right argue that he gave up more than he should have).

c. How confident can we now be — twenty years later — that Reagan's admirers have gotten it right and the Star Wars initiative was a stroke of genius, an end run around the bureaucracy that destabilized the Soviet empire and hastened the resolution of the cold war? Or that Reagan's detractors have gotten it right and the initiative was the foolish whim of a man already descending into senility, a whim that wasted billions of dollars and that could have triggered a ferocious escalation of the cold war? Our answers hinge on inevitably speculative judgments of how history would have unfolded in the no-Reagan, rerun conditions of history.

d. How confident can we be — in the spring of 2004 — that the Bush administration was myopic about the threat posed by Al Qaeda in the summer of 2001, failing to heed classified memos that baldly announced "bin Laden plans to attack the United States"? Or is all this 20/20 hindsight motivated by a desire to topple a president? Have we forgotten how vague the warnings were, how vocal the outcry would have been against FBI-CIA coordination, and how stunned Democrats and Republicans alike were by the attack?

Where then does this leave us? Up to a disconcertingly difficult-to-identify point, the relativists are right: judgments of political judgment can never be rendered politically uncontroversial. Many decades of case study experience should by now have drummed in the lesson that one observer's simpleton will often be another's man of principle; one observer's groupthink, another's well-run meeting.

But the relativist critique should not paralyze us. It would be a massive mistake to "give up," to approach good judgment solely from first-person pronoun perspectives that treat our own intuitions about what constitutes good judgment, and about how well we stack up against those intuitions, as the beginning and end points of inquiry.

This book is predicated on the assumption that, even if we cannot capture all of the subtle counterfactual and moral facets of good judgment, we can advance the cause of holding political observers accountable to independent standards of empirical accuracy and logical rigor. Whatever their allegiances, good judges should pass two types of tests:

1. Correspondence tests rooted in empiricism. How well do their private beliefs map onto the publicly observable world?

2. Coherence and process tests rooted in logic. Are their beliefs internally consistent? And do they update those beliefs in response to evidence?

In plain language, good judges should both "get it right" and "think the right way."
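
To make the correspondence test concrete, consider how probability forecasts can be scored against outcomes. What follows is a minimal Python sketch, not Tetlock's actual scoring code: it computes the Brier score and its standard Murphy decomposition into calibration and discrimination, the two measures the table of contents flags for chapter 3. The example forecasts and the tenth-of-a-probability bins are invented for illustration.

# A minimal sketch of correspondence scoring for probability forecasts.
# Not Tetlock's code; the data and bin width are invented for illustration.
from collections import defaultdict

def brier_components(forecasts, outcomes):
    """Brier score plus its Murphy decomposition.

    forecasts: stated probabilities in [0, 1]
    outcomes:  1 if the event happened, 0 otherwise
    Lower brier and calibration are better; higher discrimination is
    better.  They satisfy:
      brier = calibration - discrimination + base_rate * (1 - base_rate)
    """
    n = len(forecasts)
    brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n
    base_rate = sum(outcomes) / n

    # Group forecasts by stated probability, rounded to one decimal,
    # so all the "70 percent" forecasts are judged as a class.
    bins = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        bins[round(p, 1)].append(o)

    # Calibration: squared gap between what was said and what happened,
    # bin by bin.  Discrimination: how far each bin's hit rate strays
    # from the overall base rate.
    calibration = sum(len(v) * (p - sum(v) / len(v)) ** 2
                      for p, v in bins.items()) / n
    discrimination = sum(len(v) * (sum(v) / len(v) - base_rate) ** 2
                         for p, v in bins.items()) / n
    return brier, calibration, discrimination

# Invented example: ten forecasts of discrete political outcomes.
probs    = [0.9, 0.9, 0.7, 0.7, 0.7, 0.3, 0.3, 0.1, 0.1, 0.1]
happened = [1,   1,   1,   0,   1,   0,   1,   0,   0,   0]
print(brier_components(probs, happened))  # (0.13, 0.0133..., 0.1333...)

Coherence tests are scored differently: chapter 4's "reputational bets" check whether experts revise their confidence by roughly the amount Bayes' rule dictates, given the diagnostic weight they themselves assigned to the evidence beforehand.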

This book is also predicated on the assumption that, to succeed in this ambitious undertaking, we cannot afford to be parochial. Our salvation lies in multimethod triangulation — the strategy of pinning down elusive constructs by capitalizing on the complementary strengths of the full range of methods in the social science tool kit. Our confidence in specific claims should rise with the quality of converging evidence we can marshal from diverse sources. And, insofar as we advance many interdependent claims, our confidence in the overall architecture of our argument should be linked to the sturdiness of the interlocking patterns of converging evidence.

Of course, researchers are more proficient with some tools than others. As a research psychologist, my comparative advantage does not lie in doing case studies that presuppose deep knowledge of the challenges confronting key players at particular times and places. It lies in applying the distinctive skills that psychologists collectively bring to this challenging topic: skills honed by a century of experience in translating vague speculation about human judgment into testable propositions. Each chapter of this book exploits concepts from experimental psychology to infuse the abstract goal of assessing good judgment with operational substance, so we can move beyond anecdotes and calibrate the accuracy of observers' predictions, the soundness of the inferences they draw when those predictions are or are not borne out, the evenhandedness with which they evaluate evidence, and the consistency of their answers to queries about what could have been or might yet be.

The goal was to discover how far back we could push the "doubting Thomases" of relativism by asking large numbers of experts large numbers of questions about large numbers of cases and by applying no-favoritism scoring rules to their answers. We knew we could never fully escape the interpretive controversies that flourish at the case study level. But we counted on the law of large numbers to cancel out the idiosyncratic case-specific causes for forecasting glitches and to reveal the invariant properties of good judgment. The miracle of aggregation would give us license to tune out the kvetching of sore losers who, we expected, would try to justify their answers by arguing that our standardized questions failed to capture the subtleties of particular situations or that our standardized scoring rules failed to give due credit to forecasts that appear wrong to the uninitiated but that are in some deeper sense right.
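
The aggregation logic is easy to see in a toy simulation. The sketch below is illustrative only, with invented skill parameters, and is in no sense the study's code: on a handful of questions luck dominates and the two forecasters are hard to tell apart, but across thousands of questions their average scores separate cleanly.

# Illustrative only: why scoring many questions lets skill emerge from
# luck.  Skill parameters are invented; this is not the study's method.
import random

def average_brier(skill, n_questions, rng):
    """Mean Brier score for a forecaster who reads the situation
    correctly with probability `skill` and otherwise guesses."""
    total = 0.0
    for _ in range(n_questions):
        event_prob = rng.random()           # the true, hidden odds
        outcome = 1 if rng.random() < event_prob else 0
        if rng.random() < skill:
            stated = event_prob             # sizes up the case correctly
        else:
            stated = rng.random()           # idiosyncratic glitch
        total += (stated - outcome) ** 2
    return total / n_questions

rng = random.Random(0)
for label, skill in [("forecaster A", 0.8), ("forecaster B", 0.4)]:
    few  = average_brier(skill, 5, rng)     # noisy: luck dominates
    many = average_brier(skill, 5000, rng)  # stable: skill shows through
    print(label, round(few, 3), "on 5 questions vs", round(many, 3), "on 5000")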

(Continues…)



Excerpted from "Expert Political Judgment"
by Philip E. Tetlock.
Copyright © 2005 Princeton University Press.
Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Acknowledgments, ix
Preface, xi
Preface to the 2017 Edition, xvii
CHAPTER 1 Quantifying the Unquantifiable, 1
CHAPTER 2 The Ego-deflating Challenge of Radical Skepticism, 25
CHAPTER 3 Knowing the Limits of One's Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs, 67
CHAPTER 4 Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs, 121
CHAPTER 5 Contemplating Counterfactuals: Foxes Are More Willing than Hedgehogs to Entertain Self-subversive Scenarios, 144
CHAPTER 6 The Hedgehogs Strike Back, 164
CHAPTER 7 Are We Open-minded Enough to Acknowledge the Limits of Open-mindedness?, 189
CHAPTER 8 Exploring the Limits on Objectivity and Accountability, 216
Methodological Appendix, 239
Technical Appendix, by Phillip Rescober and Philip E. Tetlock, 273
Index, 313

What People are Saying About This

Robert Jervis

This is a marvelous book—fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed.
Robert Jervis, Columbia University

Deborah W. Larson

This book is a major contribution to our thinking about political judgment. Philip Tetlock formulates coding rules by which to categorize the observations of individuals, and arrives at several interesting hypotheses. He lays out the many strategies that experts use to avoid learning from surprising real-world events.
Deborah W. Larson, University of California, Los Angeles

Daniel Kahneman

This book is a landmark in both content and style of argument. It is a major advance in our understanding of expert judgment in the vitally important and almost impossible task of political and strategic forecasting. Tetlock also offers a unique example of even-handed social science. This may be the first book I have seen in which the arguments and objections of opponents are presented with as much care as the author's own position.
Daniel Kahneman, Princeton University, recipient of the 2002 Nobel Prize in economic sciences

Daniel Gilbert

This book is just what one would expect from America's most influential political psychologist: Intelligent, important, and closely argued. Both science and policy are brilliantly illuminated by Tetlock's fascinating arguments.
Daniel Gilbert, Harvard University

From the Publisher

"This book is a landmark in both content and style of argument. It is a major advance in our understanding of expert judgment in the vitally important and almost impossible task of political and strategic forecasting."—Daniel Kahneman, Princeton University, Nobel Laureate in Economics

"It is the somewhat gratifying lesson of Philip Tetlock's new book … that people who make prediction their business … are no better than the rest of us."—Louis Menand, New Yorker"The definitive work on this question." —Gavyn Davies, Financial Times

"[This] book … marshals powerful evidence to make [its] case.Expert Political Judgment… summarizes the results of a truly amazing research project…. The question that screams out from the data is why the world keeps believing that ‘experts' exist at all."—Geoffrey Colvin, Fortune

"This is a marvelous book—fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed."—Robert Jervis, Columbia University

"This book is just what one would expect from America's most influential political psychologist: Intelligent, important, and closely argued. Both science and policy are brilliantly illuminated by Tetlock's fascinating arguments."—Daniel Gilbert, Harvard University
