The author of this book completely misses the point and perpetuates a misunderstanding that he, like journalists in general, lives by. While science certainly does have its problems, the systemic problem he points out falls almost wholly on the shoulders of journalists and reporters rather than on the experts he derides.
Science doesn't claim to be right; it is a process by which incorrect answers can be discarded. Science has progressed as far as it has, and provided what it has to us, not because it is necessarily right, but because, unlike other common ways of thinking, it allows us to move on from conclusions that are clearly wrong. The scientific method was designed with precisely this understanding: that most of the answers we come up with will be wrong. Studies, those things that journalists love to cite and paraphrase, are not intended to be THE FINAL ANSWER--they are working answers, or a way of working toward an answer. Most scientists not only understand this on an abstract level but are viscerally aware of it; it shapes their thinking (or at least it should) about every published paper they read. Scientists understand that all conclusions published in all papers are tentative, and that a study is only a model or simulation of the real world and may therefore not present an outcome or conclusion that accurately represents it. Yes, some excited moron enamored with his own research may exaggerate the strength and meaning of his conclusions, especially when prodded by a journalist to do so. (Although far more often the exaggeration is injected by the journalist.) But this is not how science as a whole works. Half or more of Freedman's book, then, is wasted arguing a point that is elementary and obvious to anyone with a basic understanding of the scientific method, and that he, apparently, is shamefully unaware of.
He didn't focus on the real cause of "expert failure" in the realm of science because he IS that problem: journalists create most of these public misconceptions of fact by misrepresenting the work of scientists. It's not only that they often misrepresent the conclusions of a particular study or what those conclusions mean. It's not only that they strip out the vitally important tentative language that characterizes scientific writing, recasting their write-ups with a tone of certitude that the original never contained, simply to increase the wow factor for the public. It is also, more deeply and more importantly, that they entirely and consistently misrepresent what it means for a conclusion to be supported by data and published at all. If he wrote this book to help the public understand how to interpret scientific results, he failed miserably, because he never made this most vital point. I suspect this failure occurred because 1) he didn't really understand the point himself, and 2) journalism IS the art of spin; it would have undermined the attractiveness of his book to be reasonable.
Further, although Freedman occasionally distinguished between different types of expertise, he generally conflated fundamentally different types of information suppliers under the term "expert." These include scientists, advertisers, journalists, self-promoting gurus, and ideologues. Each of these groups is so fundamentally different that treating them all as "experts" serves to muddle rather than improve a reader's ability to assess information.
Advertisers, ideologues, and self-promoters have the objective of financial gain and therefore cannot generally be trusted. Journalists need to shape facts into news or stories--information repackaged as entertainment, which is inherently unreliable. Each of these information suppliers needs to be assessed and understood differently.
An expert is someone who dedicates a substantial portion of his or her life to learning about and understanding a particular topic. Not a few months, or even a few years: the better part of a working lifetime, or at least a decade, preferably decades. Another huge problem with expertise is that, in order to make a story more convincing or interesting, journalists and others in the entertainment industry will elevate a non-expert, usually another journalist, to the status of "expert" when such a label is totally inappropriate. As cases in point: on public radio I recently heard a journalist who wrote a book on fruit presented as an "expert," yet he didn't know what a quince was when a caller asked; another supposedly expert journalist, who wrote a book on honey, didn't even know which sugar honey is composed of (glucose) or have any idea how it is physiologically produced.
Many of the author's points are important, thoughtful, and well made, but this content could have been thoroughly addressed in a medium-to-long magazine article; in no way did it warrant a book. Instead he went on and on with irrelevant or poorly presented examples that demonstrated nothing in particular. For instance, as an example of confusing and contradictory expert opinion, he cites the theories that asthma is caused by overly sterile childhood environments and by environmental pollution. These ideas are in no way contradictory or mutually exclusive; the only confusion is that introduced by his simplification of the topic for the sake of catchy sentences that speciously sound perceptive.
The information that Freedman gives us to help us assess expert advice is almost useless and, astoundingly, misses most of the most important points. If anyone cares, here they are:
1) Look for real experts. Advertisers, ideologues, and journalists are not real experts and should not be trusted to provide information. Journalists are better than the first two, but if at all possible a primary source or true expert should be consulted. Freedman confuses these categories; for example, he seems to think of economists as scientists when they are not. Economists are ideologues who theorize within an artificial mathematical world that they create to support their ideas; economics as practiced is not constrained by, nor does it adhere to, the scientific method. By conflating ideologues, advertisers, and the like with real experts, Freedman displays the inherent shortcomings of one group and lets them, by juxtaposition and association, discolor our view of the other. This is a disservice to our understanding of the issue he purports to address.
2) In reading any published or unpublished study, assess the methods used to produce the data, the strength of the data, and, perhaps most importantly, the logical connection between the data and the authors' conclusions. I know this is work, but it is the only way. Any intelligent person who reads a few studies will quickly begin to see that the conclusions are often illogical, the data sparse or weak, or the methods ridiculous. Even journalists almost never do this, and I find it telling that Freedman never once in the book dissects or directly critiques a published study. This tells me that he has researched this topic only skin-deep, as beautiful and illustrative examples of poor studies are remarkably easy to find.
3) Realize that published conclusions supported by data (i.e., studies) are not facts; they are just part of the process of science.
So, in short, I like the author's point, but I dislike the way he made it, the misunderstandings with which he obscured it, and his failure to clarify it. But he did what journalists do: he sold words, not information.