Book Summary: “Expert Political Judgment: How Good Is It?” by Philip Tetlock


Title: Expert Political Judgment: How Good Is It? How Can We Know?
Author: Philip Tetlock
Scope: 4 stars
Readability: 2.5 stars
My personal rating: 5 stars
See more on my book rating system.

If you enjoy this summary, please support the author by buying the book.

Topic of Book

Tetlock asked 284 world-renowned experts to make 27,451 verifiable predictions about the future. Years later, he measured their accuracy and determined why some experts were more accurate than others.

My Comments

This book is a very difficult read, but I believe its conclusions are exceedingly important. So much of governance, politics, the media, and civil discourse depends upon the accuracy of expert predictions of the future, and Tetlock shows how lacking that accuracy is: expert predictions of the future are no more accurate than random guesses. And he was not testing media pundits, but true experts in their fields.

Keep this book in mind the next time you hear someone make a confident prediction about the future, particularly one involving ecological disaster (such predictions had the lowest accuracy rate of all).

Key Take-aways

  • Expert predictions of the future are no more accurate than random guesses!
  • The more confident an expert is in their own prediction, the less accurate the prediction turns out to be.
  • An expert’s overall knowledge or prestige does not lead to more accurate predictions; it just leads to overconfidence.
  • Even after their predictions failed, most experts would not admit their error. This was particularly true of those who were most confident in their original prediction.
  • The key difference was whether the expert embraced “The One Big Idea”, which Tetlock calls Hedgehogs, or whether the expert embraced many different ideas and methods, which Tetlock calls Foxes.
  • Hedgehogs were very confident in their predictions because they believed that their “One Big Idea” enabled them to understand a simple reality. They performed much worse than random guessing!
  • Foxes believed that we live in a very complicated world with many causal factors. They were not very confident in their predictions, but they were the only type of expert who performed better than random guessing.
  • Political moderates were more accurate than either liberals or conservatives. They also performed better than optimists or pessimists.
  • “Doomsters” who embraced a pessimistic view based upon ecological limits were the least accurate predictors (far below random guessing).
  • Unfortunately, the media and the political environment reward very inaccurate Hedgehogs over somewhat accurate Foxes because of the Hedgehogs’ certainty and flamboyant style.
  • The more expertise a Hedgehog has, the less accurate the prediction, strongly suggesting that new information merely feeds their bias.

Important Quotes from Book

What experts think matters far less than how they think.

If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox—those who “know many little things,” draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life—than we are turning to Berlin’s hedgehogs—those who “know one big thing,” toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems. The net result is a double irony: a perversely inverse relationship between my prime exhibit indicators of good judgment and the qualities the media prizes in pundits—the tenacity required to prevail in ideological combat—and the qualities science prizes in scientists—the tenacity required to reduce superficial complexity to underlying simplicity.

Without retreating into full-blown relativism, we need to recognize that political belief systems are at continual risk of evolving into self-perpetuating worldviews, with their own self-serving criteria for judging judgment and keeping score, their own stocks of favorite historical analogies, and their own pantheons of heroes and villains.

Prediction and explanation are not as tightly coupled as once supposed. Explanation is possible without prediction.

Conversely, prediction is possible without explanation.

When we pit experts against minimalist performance benchmarks—dilettantes, dart-throwing chimps, and assorted extrapolation algorithms—we find few signs that expertise translates into greater ability to make either “well-calibrated” or “discriminating” forecasts. Chapter 3 tests a multitude of meliorist hypotheses—most of which bite the dust. Who experts were—professional background, status, and so on—made scarcely an iota of difference to accuracy. Nor did what experts thought—whether they were liberals or conservatives, realists or institutionalists, optimists or pessimists. But the search bore fruit. How experts thought—their style of reasoning—did matter. Chapter 3 demonstrates the usefulness of classifying experts along a rough cognitive-style continuum anchored at one end by Isaiah Berlin’s prototypical hedgehog and at the other by his prototypical fox. The intellectually aggressive hedgehogs knew one big thing and sought, under the banner of parsimony, to expand the explanatory power of that big thing to “cover” new cases; the more eclectic foxes knew many little things and were content to improvise ad hoc solutions to keep pace with a rapidly changing world.
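To make “well-calibrated” and “discriminating” a little more concrete, here is a minimal Python sketch of my own (not Tetlock’s scoring code, and the forecasts are invented). It decomposes a Brier-style probability score into a reliability term, which measures calibration, and a resolution term, which measures discrimination.

    # A minimal sketch with made-up forecasts; not code or data from the book.
    import numpy as np

    forecasts = np.array([0.9, 0.8, 0.8, 0.6, 0.3, 0.3, 0.1, 0.1])  # stated probabilities
    outcomes = np.array([1, 1, 0, 1, 0, 1, 0, 0])                   # 1 = the event happened

    base_rate = outcomes.mean()
    brier = np.mean((forecasts - outcomes) ** 2)  # overall probability score (lower is better)

    reliability = 0.0  # calibration: do events occur about as often as the stated probability?
    resolution = 0.0   # discrimination: do forecasts separate events from non-events?
    for p in np.unique(forecasts):
        group = forecasts == p
        observed = outcomes[group].mean()
        weight = group.mean()
        reliability += weight * (p - observed) ** 2
        resolution += weight * (observed - base_rate) ** 2

    uncertainty = base_rate * (1 - base_rate)
    # Murphy decomposition: brier == reliability - resolution + uncertainty
    print(brier, reliability, resolution, uncertainty)

A perfectly calibrated forecaster drives the reliability term to zero; a forecaster who discriminates well pushes the resolution term up toward the uncertainty term.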

The foxes consistently edge out the hedgehogs but enjoy their most decisive victories in long-term exercises inside their domains of expertise… The foxes’ self-critical, point-counterpoint style of thinking prevented them from building up the sorts of excessive enthusiasm for their predictions that hedgehogs, especially well-informed ones, displayed for theirs. Foxes were more sensitive to how contradictory forces can yield stable equilibria and, as a result, “overpredicted” fewer departures, good or bad, from the status quo. But foxes did not mindlessly predict the past. They recognized the precariousness of many equilibria and hedged their bets by rarely ruling out anything as “impossible.”

Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting.

The worst performers were hedgehog extremists making long-term predictions in their domains of expertise.

By contrast, the best performers were foxes making short-term predictions in their domains of expertise.

Foxes derive modest benefit from expertise whereas hedgehogs are—strange to say—harmed.

In this spirit, then, are six basic ways in which foxes and hedgehogs differed from each other. Foxes were more

a. skeptical of deductive approaches to explanation and prediction

b. disposed to qualify tempting analogies by noting disconfirming evidence

c. reluctant to make extreme predictions of the sort that start to flow when positive feedback loops go unchecked by dampening mechanisms

d. worried about hindsight bias causing us to judge those in the past too harshly

e. prone to a detached, ironic view of life

f. motivated to weave together conflicting arguments on foundational issues in the study of politics, such as the role of human agency or the rationality of decision making.

When senior hedgehogs dispense advice to junior colleagues, they stress the virtue of parsimony. Good judgment requires tuning out the ephemera that dominate the headlines and distract us from the real, surprisingly simple, drivers of long-term trends. They counsel that deep laws constrain history, and that these laws are knowable and lead to correct conclusions when correctly applied to the real world. They admire deductive reasoning that uses powerful abstractions to organize messy facts and to distinguish the possible from the impossible, the desirable from the undesirable.

The downside risk was that when hedgehogs were wrong, they were often very wrong.

Did foxes give more weight to certain ideas over others? The answer is usually no. Foxes were not especially likely to endorse particular substantive positions on rationality, levels of analysis, macroeconomics, or foreign policy. Their advantage resided in how they thought, not in what they thought.

Quantitative and qualitative methods converge on a common conclusion: foxes have better judgment than hedgehogs. Better judgment does not mean great judgment. Foxes are not awe-inspiring forecasters: most of them should be happy to tie simple extrapolation models, and none of them can hold a candle to formal statistical models. But foxes do avoid many of the big mistakes that drive down the probability scores of hedgehogs to approximate parity with dart-throwing chimps. And this accomplishment is rooted in foxes’ more balanced style of thinking about the world—a style of thought that elevates no thought above criticism.

By contrast, hedgehogs dig themselves into intellectual holes. The deeper they dig, the harder it gets to climb out and see what is happening outside, and the more tempting it becomes to keep on doing what they know how to do: continue their metaphorical digging by uncovering new reasons why their initial inclination, usually too optimistic or pessimistic, was right. Hedgehogs are thus at continual risk of becoming prisoners of their preconceptions, trapped in self-reinforcing cycles in which their initial ideological disposition stimulates thoughts that further justify that inclination which, in turn, stimulates further supportive thoughts.

There are intriguing parallels between the evidence on how foxes outperformed hedgehogs and the broader literature on how to improve forecasting. We learn from the latter that (a) the average predictions of forecasters are generally more accurate than the majority of forecasters from whom the averages were computed; (b) trimming outliers (extremists) further enhances accuracy; (c) one can do better still by using the Delphi technique for integrating experts’ judgments in which one persuades experts to advance anonymous predictions and arguments for those predictions, one then circulates everyone’s predictions and arguments to everyone else (so everyone has a chance to reflect but no one has a chance to bully), and one continues the process until convinced the process has reached the point of diminishing returns. These results dovetail with the cognitive interpretation of the fox-hedgehog performance…
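Points (a) and (b) are easy to make concrete. The sketch below is my own toy example with invented numbers, not anything from the study, and step (c), the Delphi procedure, is an iterative human process that code can only gesture at.

    # A minimal sketch with invented forecasts; not code or data from the study.
    import numpy as np

    # Ten hypothetical experts' probabilities for the same event
    forecasts = np.array([0.05, 0.20, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.70, 0.95])

    simple_average = forecasts.mean()    # (a) average across forecasters

    trimmed = np.sort(forecasts)[1:-1]   # (b) drop the most extreme forecast on each end
    trimmed_average = trimmed.mean()

    # (c) A Delphi round would feed these numbers (and the anonymous arguments behind
    # them) back to the experts and repeat until further rounds stop changing much.
    print(simple_average, trimmed_average)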

Overall, chapter 3 makes a strong case that the foxes’ “victory” was a genuine achievement. We looked for good judgment and found it—mostly among the foxes. And, interestingly, this does not appear to be where most of the media are looking. Hedgehog opinion was in greater demand from the media, and this was probably for the reason noted in chapter 2: simple, decisive statements are easier to package in sound bites. The same style of reasoning that impairs experts’ performance on scientific indicators of good judgment boosts experts’ attractiveness to the mass market–driven media.

In both chapters, the root cause of hedgehog underperformance has been a reluctance to entertain the possibility that they might be wrong.

Here we run into the defining dilemma of the social scientist: the choice between being judged either obvious or obviously wrong. Intellectual foxes will see the current results as a rather unsurprising, although still welcome, vindication of what they have been saying all along. These scholars have repeatedly traced the psychological roots of intelligence failures to an unwillingness to be sufficiently self-critical, to reexamine underlying assumptions, to question dogma, and to muster the imagination to think daringly about options that others might ridicule.

Beyond a stark minimum, subject matter expertise in world politics translates less into forecasting accuracy than it does into overconfidence.

Too many lines of evidence converge: hedgehogs are poor forecasters who refuse to acknowledge mistakes, dismiss dissonant evidence, and warm to the possibility that things could have worked out differently only if doing so rescues favorite theories of history from falsification.
