Entertaining and thought-provoking but technically flawed
on 28 December 2012
This book is okay as far as it goes, but please keep in mind that risk and uncertainty are specialist subjects that have vexed some of the finest thinkers amongst us. You should not, therefore, be surprised to find that Dan Gardner's track record as a successful journalist, as opposed to a successful risk analyst, has resulted in a book that is both entertaining and persuasive whilst still being technically naive. The problem isn't his grasp of the political and social dimensions of risk - I bought the book in the hope that this aspect of the subject would be expertly covered and, in this respect, the book did not disappoint. The real problem is that the author has only a layman's understanding of risk's conceptual framework. Consequently, he frequently conflates risk with uncertainty, consistently confuses ambiguity aversion with risk aversion, and vacillates between discussing risk and discussing the probability component of risk, in a way that I found decidedly confusing. Furthermore, his superficial understanding of the cognitive science behind risk perception comes perilously close to undermining his whole thesis.
Central to the argument, the author repeatedly cites cognitive biases which he claims lead people to overestimate risk. However, this is a serious misrepresentation of the true significance of such biases, and he misrepresents them because he isn't sufficiently careful in distinguishing between risk and probability. The fact is that the cognitive biases he refers to can lead people to overestimate likelihood. Whether or not this leads to overestimation of risk depends upon whether the individual is focused upon the likelihood of a positive outcome or a negative outcome. The thesis of the book is that there are social and political factors that ensure we are focused upon the negative; the cognitive biases then do the rest. Unsurprisingly, therefore, the book is full of examples that illustrate the point. No counter-examples are offered until (strangely) the Afterword, in which he provides a superb example of how such cognitive biases can just as easily lead to overconfidence (the author gives no hint as to whether he is aware of the stark ambivalence that this example belatedly introduces into his argument). The reality is that the cognitive biases cited do not indicate that human beings are inherently risk averse. Without wishing to invalidate the author's conclusions regarding the manipulation of fear, one could just as easily cite the same cognitive science in a book entitled 'Risk: The science and politics of overconfidence'.
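The distinction I am drawing can be made concrete with the standard decomposition of risk as likelihood multiplied by the size of the outcome. The sketch below is my own illustration, not something from the book, and all the numbers in it are hypothetical: it simply shows that the same inflated likelihood estimate raises perceived risk when attention is on a loss, but raises perceived payoff - that is, produces overconfidence - when attention is on a gain.

```python
# A minimal sketch of the point above, using the common decomposition
# risk = likelihood x magnitude of outcome. All figures are
# hypothetical illustrations, not data from the book.

def perceived_risk(p_estimate, loss):
    """Perceived risk of a negative outcome: likelihood x size of loss."""
    return p_estimate * loss

def perceived_payoff(p_estimate, gain):
    """Perceived expected value of a positive outcome: likelihood x gain."""
    return p_estimate * gain

true_p = 0.01      # the actual likelihood of the event
biased_p = 0.05    # the same likelihood, inflated by a cognitive bias

# Focused on a negative outcome, the bias inflates perceived risk...
assert perceived_risk(biased_p, loss=1000) > perceived_risk(true_p, loss=1000)

# ...but focused on a positive outcome, the very same bias inflates
# the perceived payoff instead, producing overconfidence rather than fear.
assert perceived_payoff(biased_p, gain=1000) > perceived_payoff(true_p, gain=1000)
```

The bias itself is symmetric; it is the social and political framing that decides whether it manifests as excessive fear or as overconfidence.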
I wasn't expecting or wanting an academic and mathematically sound dissertation, but I think that the analysis offered would have benefited greatly from being presented in terms that, at least, demonstrated a firmer grasp of the subject's basic tenets.
Did I enjoy reading the book? Yes, despite the frustrations. But would I recommend it as a primer in the assessment and evaluation of risk and uncertainty? Certainly not!