Expert Political Judgment: How Good Is It? How Can We Know? (English) Paperback – July 31, 2006

4.0 out of 5 stars, 1 customer review


Product Information

Product Descriptions

Press Reviews

"It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business--people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables--are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts." . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself."--Louis Menand, "The New Yorker"

"The definitive work on this question. . . . Tetlock systematically collected a vast number of individual forecasts about political and economic events, made by recognised experts over a period of more than 20 years. He showed that these forecasts were not very much better than making predictions by chance, and also that experts performed only slightly better than the average person who was casually informed about the subject in hand."--Gavyn Davies, "Financial Times"

"Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book "Expert Political Judgment: How Good Is It? How Can We Know?" The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication."--Jim Coyle, "Toronto Star"

"Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts."--"Choice"

"[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all."--Geoffrey Colvin, "Fortune"

"Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting--by academics or intelligence analysts, independent pundits, journalists or institutional specialists--with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book."--Paul Monk, "Australian Financial Review"

"Phillip E. Tetlock does a remarkable job . . . applying the high-end statistical and methodological tools of social science to the alchemistic world of the political prognosticator. The result is a fascinating blend of science and storytelling, in the the best sense of both words."--William D. Crano, "PsysCRITIQUES"

"Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments."--John Kay, "Financial Times"

"Why do most political experts prove to be wrong most of time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in "Expert Political Judgment" contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking--but how he or she is thinking."--Leon Hadar, "Business Times"

Winner of the 2006 Grawemeyer Award for Ideas Improving World Order
Winner of the 2006 Woodrow Wilson Foundation Award, American Political Science Association
Winner of the 2006 Robert E. Lane Award, Political Psychology Section of the American Political Science Association

Synopsis

The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts. Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting.

Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox - the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events - is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits - the single-minded determination required to prevail in ideological combat. Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.


Customer Reviews


Top Customer Reviews

Format: Paperback | Verified Purchase
The fox knows many things, but the hedgehog knows one big thing.
(Archilochus, Greek lyric poet, 680 to 645 BC)

Philip Tetlock is a psychologist by training and Professor of Leadership at the University of California. His research examines which factors account for human foresight and blindness. From 1985 to 2003 he surveyed 284 selected American experts about the course of world events. The forecasting horizon was usually two to five years, in individual cases up to a decade. Participants had to answer, for example, whether a given country's current government would still be in power after the next election, or the one after that (in authoritarian regimes, whether it would be removed in a coup). Other questions asked whether the province of Quebec would secede from Canada, or whether war would break out between India and Pakistan; whether growth of the gross national product, government debt, or the central bank's interest rate would come in higher, lower, or unchanged; whether prices for key commodities would rise, fall, or hold steady; and whether the internet stock-market bubble would burst within the forecasting horizon. The experts had to give not only the direction but also how probable they considered each scenario, e.g. the secession of Quebec.
A common thread running through Tetlock's experiments: the experts, whatever their field, are far too sure of themselves. When they rate something as "practically certain," it occurs at most 70% of the time. Nor are "miracles" all that rare: events occur that the experts had deemed impossible, even unthinkable.
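
The overconfidence finding is, at bottom, a calibration measurement: pool the forecasts by stated probability and check how often the predicted events actually occurred. A minimal sketch of that computation, on invented forecasts rather than Tetlock's data:

```python
# A calibration check of the kind described above: bucket forecasts by the
# probability the forecaster stated, then compare with how often the events
# actually happened. All forecast data here is invented for illustration.
from collections import defaultdict

# (stated_probability, event_occurred) pairs -- hypothetical forecasts
forecasts = [
    (0.95, True), (0.95, False), (0.95, True), (0.95, True), (0.95, False),
    (0.70, True), (0.70, False), (0.70, True),
    (0.20, False), (0.20, False), (0.20, True),
]

buckets = defaultdict(list)
for stated, occurred in forecasts:
    buckets[stated].append(occurred)

for stated in sorted(buckets, reverse=True):
    hits = buckets[stated]
    rate = sum(hits) / len(hits)
    # A well-calibrated forecaster has rate close to stated at every level.
    print(f"stated {stated:.0%} -> observed {rate:.0%} over {len(hits)} forecasts")
```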

The Most Helpful Customer Reviews on Amazon.com (beta)

Amazon.com: 4.4 out of 5 stars, 37 reviews
50 of 53 customers found the following review helpful
5.0 out of 5 stars: A classic of Political Science & Cognitive Psychology, January 6, 2007
By Dr. Frank Stech, published on Amazon.com
Format: Paperback | Verified Purchase
Tetlock shows conclusively two key points: First, the best experts in making political estimates and forecasts are no more accurate than fairly simple mathematical models of their estimative processes. This is yet another confirmation of what Robyn Dawes termed "the robust beauty of simple linear models." The inability of human experts to out-perform models based on their expertise has been demonstrated in over one hundred fields of expertise across fifty years of research, making it one of the most robust findings in social science. Political experts are no exception.
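
Dawes's point is easy to see in simulation. The sketch below, on invented data, "bootstraps" a hypothetical expert: a linear model fitted to the expert's own judgments out-predicts the expert, because it applies consistently the same cue weights that the expert applies noisily:

```python
# "Bootstrapping the expert" (Dawes): fit a simple linear model to the
# expert's own judgments, then predict with the fitted weights. The model
# usually beats the expert because it applies the expert's cue weights
# consistently. All data here is simulated, not from the book.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
cues = rng.normal(size=(n, k))                   # predictors the expert sees
w_true = np.array([0.5, 0.3, 0.1, 0.1])
outcome = cues @ w_true + rng.normal(0, 0.5, n)  # what actually happens

# The expert attends to the right cues but weighs them inconsistently
# from case to case:
w_noisy = w_true + rng.normal(0, 0.3, (n, k))
expert = (cues * w_noisy).sum(axis=1)

# Linear model of the expert: regress the expert's judgments on the cues.
w_fit, *_ = np.linalg.lstsq(cues, expert, rcond=None)
model = cues @ w_fit

print("expert MSE:", np.mean((expert - outcome) ** 2))  # larger
print("model  MSE:", np.mean((model - outcome) ** 2))   # smaller
```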

Second, Tetlock demonstrates that experts who know something about a number of related topics (foxes) predict better than experts who know a great deal about one thing (hedgehogs). Generalist knowledge adds to accuracy.

Tetlock's survey of this research is clear, crisp, and compelling. His work has direct application to world affairs. For example he is presenting his findings to a conference of Intelligence Community leaders next week (Jan 2007) at the invitation of the Director of National Intelligence.

"Expert Political Judgment" is recommended to anyone who depends on political experts, which is pretty much all of us. Tetlock helps the non-experts to know more about what the experts know, how they know it, and how much good it does them in making predictions.
37 of 40 customers found the following review helpful
5.0 out of 5 stars: Careful, Plodding, Objective, September 23, 2006
By Peter McCluskey, published on Amazon.com
Format: Paperback
This book is a rather dry description of good research into the forecasting abilities of people who are regarded as political experts. It is unusually fair and unbiased.

His most important finding about what distinguishes the worst from the not-so-bad is that those on the hedgehog end of Isaiah Berlin's spectrum (who derive predictions from a single grand vision) are wrong more often than those near the fox end (who use many different ideas). He convinced me that that finding is approximately right, but leaves me with questions.

Does the correlation persist at the fox end of the spectrum, or do the most fox-like subjects show some diminished accuracy?

How do we reconcile his evidence that humans with more complex thinking do better than simplistic humans with his finding that simple autoregressive models beat all humans? That seems to suggest there's something imperfect in using the hedgehog-fox spectrum. Maybe a better spectrum would use evidence on how much data influences their worldviews?
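
For readers unfamiliar with the baseline the review keeps returning to: an autoregressive model simply extrapolates a variable from its own past values. A minimal AR(1) sketch with made-up numbers (Tetlock's actual specifications are richer):

```python
# A minimal AR(1) extrapolation baseline: predict the next value of a
# series from its previous value. Illustrative numbers only; Tetlock's
# actual statistical baselines are more elaborate.
import numpy as np

series = np.array([2.1, 2.4, 2.2, 2.8, 3.0, 2.9, 3.3])  # e.g. yearly inflation, %

# Fit x[t] = a + b * x[t-1] by least squares
x_prev, x_next = series[:-1], series[1:]
A = np.column_stack([np.ones_like(x_prev), x_prev])
(a, b), *_ = np.linalg.lstsq(A, x_next, rcond=None)

forecast = a + b * series[-1]  # one-step-ahead prediction
print(f"next-period forecast: {forecast:.2f}")
```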

Another interesting finding is that optimists tend to be more accurate than pessimists. I'd like to know how broad a set of domains this applies to. It certainly doesn't apply to predicting software shipment dates. Does it apply mainly to domains where experts depend on media attention?

To what extent can different ways of selecting experts change the results? Tetlock probably chose subjects that resemble those who most people regard as experts, but there must be ways of selecting experts which produce better forecasts. It seems unlikely they can match prediction markets, but there are situations where we probably can't avoid relying on experts.

He doesn't document his results as thoroughly as I would like (even though he's thorough enough to be tedious in places):

I can't find his definition of extremists. Is it those who predict the most change from the status quo? Or the farthest from the average forecast?

His description of how he measured the hedgehog-fox spectrum has a good deal of quantitative evidence, but not quite enough for me to check where I would be on that spectrum.

How does he produce a numerical timeseries for his autoregressive models? It's not hard to guess for inflation, but for the end of apartheid I'm rather uncertain.

Here's one quote that says a lot about his results:

"Beyond a stark minimum, subject matter expertise in world politics translates less into forecasting accuracy than it does into overconfidence."
17 of 17 customers found the following review helpful
5.0 out of 5 stars: Great Decision Making Evidence, March 10, 2006
By T. Coyne, published on Amazon.com
Format: Hardcover
As both a consultant and an investment manager I have spent a lot of years studying decision theory. One limitation in a lot of the work I encountered was its heavy reliance on lab studies using students. You were never quite sure if the conclusions applied in the "real world." This outstanding book puts that concern to rest. It is by far the richest body of evidence I have encountered on decision making in real world situations. Anybody interested in decision making and decision theory will profit from reading it.
6 of 6 customers found the following review helpful
4.0 out of 5 stars: Provocative, insightful, maybe overambitious, October 28, 2012
By Stuart Shapiro, published on Amazon.com
Format: Paperback
Tetlock takes on an issue that transcends the social sciences: when should we trust experts? He focuses on political experts, but I think (and I think he would agree) that his findings have broader applicability. His carefully constructed tests indicate that on average experts are little better than the lay public and inferior to simple statistical models. Within the expert fields, "foxes" are better than "hedgehogs." Generalists and flexible thinkers are better than ideologues and narrow specialists.

I tend to like this book in part because it reaffirms my prior beliefs. And it does so in a very careful way. Tetlock bends over backwards to test his own conclusions. A particularly insightful conclusion is that those who are most likely to get publicity for their predictions are also those who are least likely to be right.

Two mild criticisms. First, the book veers into the highly technical at times and is not really for the lay reader. It was perhaps not intended to be for an audience beyond academia but some of the attention it has gotten may have attracted a broader audience that may have to gloss over wide sections of the book. Second, in his conclusions, Tetlock attempts to broaden his already far-reaching argument to deal with conflicts between relativists and objectivists. This felt tacked on and too cursory to contribute in this area.

But as far as his treatment of experts goes, I wish that everyone could read this and treat them with a more cynical eye.
5 of 5 customers found the following review helpful
5.0 out of 5 stars: Review by J. Colannino, December 3, 2012
By Joseph Colannino, published on Amazon.com
Format: Paperback | Verified Purchase
"Expert political judgment" -- it sounds like an oxymoron, but only because it is. Philip E. Tetlock's groundbreaking research shows that experts are no better than the rest of us when it comes to political prognostication. But then again, you probably had a sneaking hunch that that was so. You need rely on hunches no more. Tetlock is Professor of Leadership at the Haas Management of Organizations Group, U.C. Berkeley. A Yale graduate with his Ph.D. in Psychology, Expert Political Judgment is the result of his 20 year statistical study of nearly 300 impeccably credentialed political pundits responding to more than 80,000 questions in total. The results are sobering. In most cases political pundits did no better than dart throwing chimps in prediciting political futures. Of course, Tetlock did not actually hire dart throwing chimps -- he simulated their responses with the statistical average. If the computer was programmed to use more sophisticated statistical forecasting techniques (e.g., autoregressive distributed lag models), it beat the experts even more resoundingly.

Were the experts better at anything? Well, they were pretty good at making excuses. Here are a few: 1. I made the right mistake. 2. I'm not right yet, but you'll see. 3. I was almost right. 4. Your scoring system is flawed. 5. Your questions aren't real world. 6. I never said that. 7. Things happen. Of course, experts applied their excuses only when they got it wrong... er... I mean almost right... that is, about to be right, or right if you looked at it in the right way, or what would have been right if the question were asked properly, or right if you applied the right scoring system, or... well... that was a dumb question anyway, or....

Not only did experts get it wrong, but they were so wedded to their opinions that they failed to update their forecasts even in the face of building evidence to the contrary. And then a curious thing happened -- after they got it wrong and exhausted all their excuses, they forgot they were wrong in the first place. When Tetlock did follow-up questions at later dates, experts routinely misremembered their predictions. When the experts' models failed, they merely updated their models post hoc, giving them the comforting illusion that their expert judgment and simplified model of social behavior remained intact. Compare this with another very complex system -- predicting the weather. In this latter case, there is a very big difference in the predictive abilities of experts and lay persons. Meteorologists do not use over-simplified models like "red in the morning, sailor's warning." They use complex modeling, statistical forecasting, computer simulations, etc. When they are wrong, weathermen do not say, well, it almost rained; or, it just hasn't rained yet; or, it didn't rain, but predicting rain was the right mistake to make; or, there's something wrong with the rain gauge; or, I didn't say it was going to rain; or, what kind of a question is that?

Political experts, unlike weathermen, live in an infinite variety of counterfactual worlds; or as Tetlock writes, "Counterfactual history becomes a convenient graveyard for burying embarrassing conditional forecasts." That is: sure, given x, y, and z, the former Soviet Union collapsed; but if z had not occurred, the former Soviet Union would have remained intact. Really? Considering the expert got it wrong in the first place, how could they possibly know the outcome in a hypothetical counterfactual world? At best, this is intellectual dishonesty. At worst, it is fraud.

But some experts did better than others. In particular, those who were less dogmatic and frequently updated their predictions in response to countervailing evidence (Tetlock's "foxes") did much better than the opposing camp (termed "hedgehogs"). The problem is that hedgehogs climb the ladder faster and have positions of greater prominence. My Machiavellian take? You might as well make dogmatic pronouncements because all the hedgehogs you work for aren't any better at predicting the future than you are -- they're just more sure of themselves. So, work on your self-confidence. It is apparently the only thing anyone pays any attention to.