Expert Political Judgment: How Good Is It? How Can We Know? (Audible Audio Edition), by Philip E. Tetlock, narrated by Anthony Haden Salerno, Audible Studios

The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This audiobook fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.

Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting.

Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox - the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events - is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems.

He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits - the single-minded determination required to prevail in ideological combat. Clearly written and impeccably researched, the audiobook fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.



Product details

  • Audible Audiobook
  • Listening Length: 9 hours and 48 minutes
  • Program Type: Audiobook
  • Version: Unabridged
  • Publisher: Audible Studios
  • Audible.com Release Date: November 13, 2013
  • Language: English
  • ASIN: B00GN5XA3U


Reviews


Philip E. Tetlock provides readers with a sobering look at experts' ability to forecast future events and the cognitive styles of thinking that correlate with better forecasts. The findings -- the culmination of 20 years of original research on experts' predictions -- show that experts are no more accurate than non-experts. The best forecasters received less media attention (likely because they offered fewer sound bites) and tended to be moderates along the ideological spectrum: skeptical of grand schemes, more likely to consider contradictory evidence and hypotheses, and willing to hedge their probabilistic bets.
Judging the possibility of future outcomes depends less on what a person knows (i.e., expertise) and more on how a person thinks. Best able to judge the future are those who see the world in fluid terms, embrace complexity and nuance, empathize with all sides of an argument, and seek out different opinions. Least able are those who see the world in black-and-white terms, tend toward dogma, hold to an idealistic view, and shut out others' opinions. Education and being up to date on current events also mattered, but flexible thinking was the biggest variable. The book is extremely quantitative and deeply researched, though it is needlessly verbose in many places. Politics is the focus domain, but the principles extrapolate to other fields. Interestingly, the single most accurate predictor of the future was the autoregressive distributed lag model that the author used as a control, thus unintentionally showing the value of quantitative analytics.
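
The "probabilistic bets" mentioned above are evaluated with a proper scoring rule; Tetlock's study relies on the Brier score. Here is a minimal sketch of how that scoring works -- the forecast probabilities and event outcomes below are wholly invented for illustration:

```python
# Minimal illustration of Brier scoring -- not Tetlock's actual data or code.
# All forecast probabilities and outcomes below are invented.
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared gap between forecast probability and outcome (0 or 1).
    0.0 is perfect; an always-50% forecaster scores 0.25."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

# A confident "hedgehog" versus a hedging "fox" on five hypothetical events.
outcomes = [1, 0, 1, 1, 0]            # what actually happened
hedgehog = [0.9, 0.9, 0.9, 0.9, 0.9]  # always sure the event will occur
fox      = [0.7, 0.3, 0.6, 0.8, 0.4]  # graded, updateable probabilities

print(f"hedgehog Brier: {brier_score(hedgehog, outcomes):.3f}")  # 0.330
print(f"fox Brier:      {brier_score(fox, outcomes):.3f}")       # 0.108 -- lower is better
```

The fox's hedged probabilities score better precisely because the rule penalizes confident misses quadratically, which is the sense in which hedging "probabilistic bets" pays off.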
This is a critical book for anyone who depends on professional forecasters of "social" variables, and even more so for anyone whose livelihood rests on making such forecasts. "Social" because Tetlock's book is focused on political forecasting, but I'm convinced that it applies to economic and social forecasting as well. (Having spent a professional career forecasting economic variables, I have some insight here.) Tetlock is not discussing forecasting in the hard sciences, where forecasting is based on much harder data.

His first critical conclusion is that, in forecasting complex political events, "we could do as well by tossing coins as by consulting experts". This is based on a massive set of surveys of expert opinion that were compared to outcomes in the real world over many years. The task was enormously complex to set up; designing an experiment in the social sciences raises the problems that constantly arise in making judgments in these sciences (What does one measure, and how? How can bias be measured and eliminated? And so on.) Much of the book is devoted to the problems encountered in constructing the study, and how they were resolved.

His second key conclusion is that, while that may be true of experts as an undifferentiated group, some experts do significantly better than other experts. This does not reflect the level of expertise involved, nor does it reflect political orientation. Rather, it reflects the way the experts think. Poorer performers tend to be what Tetlock characterizes as "hedgehogs" -- people who apply theoretical frameworks, who stick with a line of argument, and who believe strongly in their own forecasts. The better performers tend to be what he calls "foxes" -- those with an eclectic approach, who examine many hypotheses, and who are more inclined to think probabilistically, by grading the likelihood of their forecasts.

But, as he notes, the forecasters who get the most media exposure tend to be the hedgehogs, those with a strong point of view that can be clearly expressed. This makes all the sense in the world; someone with a clear-cut and compelling story is much more fun to listen to (and much more memorable) than someone who presents a range of possible outcomes (as a former many-handed economist, I know this all too well).

What does that mean for those of us who use forecasts? We use them in making political decisions, personal financial decisions, and investment decisions. This book tells us that WHAT THE EXPERTS SAY IS NOT LIKELY TO ADD MUCH TO THE QUALITY OF YOUR OWN DECISION MAKING. And that means: be careful how much you pay for expert advice, and how much you rely on it. That of course applies to experts in the social sciences, NOT to experts in the hard (aka real) sciences. Generally, it is a good idea to regard your doctor as a real expert.

Because it makes these conclusions impossible to avoid, I gave this book five stars; this is very important stuff. I would not have given it five stars for the way in which it is written. For me, it read as if it had been written for other academics rather than for the general reader. That is hard to avoid, but some other works in the field do manage it -- for example, "Thinking, Fast and Slow". Don't skip the book because it is not exactly an enjoyable read; its merit far outweighs its manner.
"Expert political judgment" -- it sounds like an oxymoron, but only because it is. Philip E. Tetlock's groundbreaking research shows that experts are no better than the rest of us when it comes to political prognostication. But then again, you probably had a sneaking hunch that that was so. You need rely on hunches no more. Tetlock is Professor of Leadership at the Haas Management of Organizations Group, U.C. Berkeley. A Yale graduate with his Ph.D. in Psychology, Expert Political Judgment is the result of his 20 year statistical study of nearly 300 impeccably credentialed political pundits responding to more than 80,000 questions in total. The results are sobering. In most cases political pundits did no better than dart throwing chimps in prediciting political futures. Of course, Tetlock did not actually hire dart throwing chimps -- he simulated their responses with the statistical average. If the computer was programmed to use more sophisticated statistical forecasting techniques (e.g., autoregressive distributed lag models), it beat the experts even more resoundingly.

Were the experts better at anything? Well, they were pretty good at making excuses. Here are a few: 1. I made the right mistake. 2. I'm not right yet, but you'll see. 3. I was almost right. 4. Your scoring system is flawed. 5. Your questions aren't real world. 6. I never said that. 7. Things happen. Of course, experts applied their excuses only when they got it wrong... er... I mean almost right... that is, about to be right, or right if you looked at it in the right way, or what would have been right if the question were asked properly, or right if you applied the right scoring system, or... well... that was a dumb question anyway, or....

Not only did experts get it wrong, but they were so wedded to their opinions that they failed to update their forecasts even in the face of mounting evidence to the contrary. And then a curious thing happened -- after they got it wrong and exhausted all their excuses, they forgot they were wrong in the first place. When Tetlock asked follow-up questions at later dates, experts routinely misremembered their predictions. When the experts' models failed, they merely updated their models post hoc, giving them the comforting illusion that their expert judgment and simplified model of social behavior remained intact. Compare this with another very complex system -- predicting the weather. In that case, there is a very big difference in the predictive abilities of experts and lay persons. Meteorologists do not use over-simplified models like "red sky in morning, sailors take warning." They use complex modeling, statistical forecasting, computer simulations, and so on. When they are wrong, weathermen do not say, well, it almost rained; or, it just hasn't rained yet; or, it didn't rain, but predicting rain was the right mistake to make; or, there's something wrong with the rain gauge; or, I didn't say it was going to rain; or, what kind of a question is that?

Political experts, unlike weathermen, live in an infinite variety of counterfactual worlds; or as Tetlock writes, "Counterfactual history becomes a convenient graveyard for burying embarrassing conditional forecasts." That is: sure, given x, y, and z, the former Soviet Union collapsed; but if z had not occurred, the former Soviet Union would have remained intact. Really? Considering the experts got it wrong in the first place, how could they possibly know the outcome in a hypothetical counterfactual world? At best, this is intellectual dishonesty. At worst, it is fraud.

But some experts did better than others. In particular, those who were less dogmatic and frequently updated their predictions in response to countervailing evidence (Tetlock's "foxes") did much better than the opposing camp (termed "hedgehogs"). The problem is that hedgehogs climb the ladder faster and have positions of greater prominence. My Machiavellian take? You might as well make dogmatic pronouncements because all the hedgehogs you work for aren't any better at predicting the future than you are -- they're just more sure of themselves. So, work on your self-confidence. It is apparently the only thing anyone pays any attention to.