on 22 August 2017
A really good look at how forecasting and prediction have performed in recent years. It also looks at Nate Silver's proposal for improving collective forecasting performance through wider use of Bayes' theorem when assessing a prediction or testing a theory.
on 26 August 2017
Read the free sample and you've basically got the idea.
on 16 June 2017
great read, very happy
on 28 November 2012
Mr Silver clearly knows what he is talking about, but I'm less sure he knows how to talk about it. I assume he set out to write a chatty, non-challenging book, but the result is light on substance and structure.

The Nobel prize-winning physicist Niels Bohr famously said, 'Prediction is very difficult, especially if it's about the future.' This pretty much sums up the first half of the book. Yes, the detail about the financial crisis, weather forecasting, earthquakes etc. is mildly interesting, but in relation to prediction you will be wading through a lot of noise to extract the signal ('human nature makes us over-confident predictors', 'without either good theory or good empirical data, you may as well just guess', 'the most confident pundits are usually the worst', etc.).

The substance of the book comes in twenty pages in the middle, where Silver introduces Bayesian logic (I learnt it in maths classes at school when I was fourteen, so it wasn't new to me, and it doesn't need 200 pages of build-up). The best section is where Silver contrasts Bayesian logic with Fisherian logic. Fisher created the maths that is used almost universally in medical and social science research to prove the efficacy of a treatment or theory. Silver explains how flawed this maths is - which is presumably why two thirds of the positive findings claimed in medical journals cannot be replicated. This is pretty heady stuff.
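
For what it's worth, the replication point drops out of a short Bayes calculation; here is a rough sketch with made-up but plausible numbers (the prior, power and significance level are purely illustrative, not figures from the book):

# Illustration with assumed numbers: why many 'significant' findings fail to replicate.
# Suppose only 5% of tested hypotheses are actually true, studies have 50% power,
# and the usual 5% false-positive rate applies. Bayes' theorem then gives the
# probability that a positive finding is real.
prior = 0.05    # P(hypothesis true) before the study
power = 0.50    # P(positive result | hypothesis true)
alpha = 0.05    # P(positive result | hypothesis false)

p_positive = power * prior + alpha * (1 - prior)
p_true_given_positive = power * prior / p_positive
print(f"P(true | positive) = {p_true_given_positive:.2f}")
# Prints about 0.34, i.e. under these assumptions roughly two thirds of
# positive findings would be false.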

Silver claims that the second half of the book is about how to make predictions better. It is mostly more examples of failure, this time in chess, investment, climate and terrorism, with a few asides that might be considered signals ('testing is good', 'groups/markets tend to make better predictions than individuals'). The exception is the section on poker, which delivers the strongest message in the book: good gamblers think in probabilities (rather than dead certs) - when these probabilities diverge from the odds on offer by a suitable margin, they may place a bet. Bad poker players lose a lot more than good poker players make. The best is the enemy of the good...
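
The probabilities-versus-odds point is really just an expected-value comparison; a rough sketch with invented numbers (nothing here is from the book):

# Illustration with assumed numbers: bet only when your estimated probability
# beats the probability implied by the offered odds, with a margin for error.
def expected_value(p_win, decimal_odds, stake=1.0):
    return p_win * (decimal_odds - 1) * stake - (1 - p_win) * stake

p_estimate = 0.30      # your read: a 30% chance of winning
odds = 4.0             # decimal odds of 4.0 imply a 25% chance
implied_p = 1 / odds
margin = 0.03          # only bet when your edge exceeds this margin

if p_estimate - implied_p > margin:
    print(f"Bet: expected value per unit staked = {expected_value(p_estimate, odds):+.2f}")
else:
    print("Pass: no sufficient edge over the offered odds")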

Of course, the point of the book is that there is no silver bullet - good prediction requires detail, nuance, hard work, honesty and humility. It would be wrong to expect a check list for success at the end, and naturally, there isn't one. Even so, you are left with a craving for clarity.

'The Signal and the Noise' is a pleasant enough read, but it is mostly anecdote. Rather ironically, you are left to sort out the signal from the noise yourself.
186 people found this helpful.
on 14 October 2013
The book contains a lot of cases to get acquainted with and is easy to read. Unfortunately, I didn't get the point of how things have to be done in order to avoid problems and get a positive result, especially while reading the beginning of the book.
on 25 November 2012
Silver has some good ideas, and he is to be commended for scrupulously footnoting his references, but there are some mistakes (the "cows would rate this" remark was from an S&P analyst, not Moody's) and he utilises heuristics he criticises elsewhere (lazily claiming the industrial revolution happened, just like that, in 1775, with the excuse that "it is a nice round number").

My two main criticisms, for non-American readers, are that it is quite US-centric (I don't care about baseball, and the general Moneyball story is impossible to avoid) and that the main philosophical material (which was the most useful and interesting part to me) makes up a small portion of the book, the majority being further examples in which he makes the same arguments through somewhat unquestioning interviews with different people.

He gives some useful examples throughout the book, covering meteorology, earthquakes and the transmission of viruses, but it still feels as if it could have been cut. The stuff on Bayes is interesting but really skates over the issue of how you come up with a Bayesian prior when you can't iteratively improve it because you do not have many data points. Given the time he spends looking at the financial crisis, this is a flaw, as it reduces the "wow, Bayes is really useful" impact when Bayes cannot offer that much resolution to the problem of predicting economic and financial crises, the key predictive failure he cites.
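
For readers who haven't met it, the iterative updating being referred to looks roughly like the toy sketch below (my own illustration with made-up data, not the author's method); the difficulty is what to feed in as the prior when you only have a handful of observations:

# Toy Beta-Binomial updating (illustrative only): start from an assumed prior for
# the probability of a 'crisis year' and update as each year's outcome arrives.
prior_alpha, prior_beta = 1.0, 9.0     # assumed prior: crises are rare, mean ~10%
observations = [0, 0, 1, 0, 0]         # 1 = crisis year, 0 = calm year (made up)

alpha, beta = prior_alpha, prior_beta
for year, outcome in enumerate(observations, start=1):
    alpha += outcome                   # count the crisis years
    beta += 1 - outcome                # count the calm years
    print(f"year {year}: estimated crisis probability = {alpha / (alpha + beta):.2f}")
# With only five data points the estimate is still dominated by the prior,
# which is exactly the problem when the events you care about are rare.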

Even so, as a way of getting people to think a bit more deeply about what it is to make a prediction and how to know if it was well constructed, and how to integrate concepts of epistemology, it is a useful introductory book.
36 people found this helpful.
VINE VOICE on 16 July 2013
Format: Paperback | Vine Customer Review of Free Product
This took longer to read than I expected... and I couldn't bring myself to read the chapter on baseball as I have no knowledge whatsoever of the subject.

However, the chapters on weather forecasting, earth tremors, the stock exchange (I'll never think about investing the same way again!) and global warming were very interesting.

Nate makes a good argument about the perils and pitfalls of prediction. The overly optimistic forecasts are slashed down with his gentle reprimands, and he explains in much detail why and how they are wrong. He also takes issue with the amount of data versus what is actually relevant: it's no good having tons of information (the noise) if you don't understand what you've gathered well enough to make a prediction (the signal).

His favourite approach to prediction involves something called Bayesian reasoning, which is more or less the basis of the whole book.

There are charts galore, lots of 'case examples' and the ins and outs of how he got to where he is now by actually immersing himself in his field (election and baseball forecasts). I was a little concerned to read about how he played online poker, winning thousands of dollars and then losing a big chunk in 2006/7, but with that admission came some humility, as he describes (in great detail) not only how he won, but also WHY he lost so much...

I enjoyed the book, mostly....
on 10 June 2013
Part of my job is making forecasts, so I was very interested in reading a book by someone successful in the field. Fortunately this is a well-written book; otherwise it could be a tedious read. It is a very interesting book, fascinating and even amusing in places. My only criticisms are that the author, despite being widely travelled, uses examples from North America, assuming we all understand the rules and terminology of baseball; I had to skip 20 pages at one point. Secondly, I'd say the book is too long by 50 to 100 pages for a general audience. Definitely well worth a read, though.
on 6 July 2014
If you are really into measuring signals in noise, this isn't for you. If you want to be told, page after page, how brilliant the author is at, for example, football (US) statistics, then you will find something of interest. Don't expect to find any useful information on regression, Bayes, predictor-corrector methods, Kalman filters, entropy... or just about anything to do with prediction.
13 people found this helpful.
on 8 December 2012
This is a book about prediction and the use of statistics to forecast future events such as earthquakes and the outcome of elections. When it's good it's a lucid and enjoyable read which makes some important points about the art of prediction, with the chapters on political punditry and economic forecasting standing out as especially good. Unfortunately this is let down by a number of problems. These include the interminable and really quite tedious chapters on poker, baseball and chess (I really don't know why the chess one is in the book at all), and the inclusion of a number of serious errors and misconceptions in the chapter on epidemiology. This last is a subject that I think I have some knowledge of, and it's disturbing to see straightforward and important factual errors - the definition of the basic reproductive rate used is badly wrong, for example (if anyone's interested, the correct definition is that it is the number of new infections produced by a single infectious host *in a population of completely susceptible hosts*), and the interpretation is also wrong (it's not correct that any disease with basic reproductive rate >1 will go on to infect all susceptible hosts in the population). These are not nit-picking little errors - it's the equivalent of getting the definition of an interest rate seriously wrong in a discussion of economics. These are fundamental concepts and the errors tell us that the author did not properly understand the subject that he's writing about.
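
To make that last point concrete, the standard SIR final-size relation shows that even a disease with a basic reproductive rate well above 1 leaves some susceptible hosts uninfected; a small numerical sketch (my own, not from the book or the chapter):

# Solve the classic SIR final-size relation s_inf = exp(-R0 * (1 - s_inf)) by
# fixed-point iteration, where s_inf is the fraction that escapes infection.
import math

def fraction_infected(r0, iterations=200):
    s_inf = 0.5                        # initial guess for the escaping fraction
    for _ in range(iterations):
        s_inf = math.exp(-r0 * (1.0 - s_inf))
    return 1.0 - s_inf                 # attack rate: fraction ever infected

for r0 in (1.5, 2.0, 3.0):
    print(f"R0 = {r0}: about {fraction_infected(r0):.0%} of susceptibles infected")
# Even with R0 = 3 roughly 6% of susceptibles are never infected, so a basic
# reproductive rate above 1 does not mean the whole susceptible population
# catches the disease.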

The use of mathematical models in epidemiology is also portrayed in a misleading way: no-one would ever expect a simple SIR model to produce useful predictions about disease spread in a real-world population; rather, these simple models are used in an exploratory way to help us understand the theoretically possible behaviours of such diseases (something which is mentioned at the end of the chapter as a bit of an afterthought). It would be better if this chapter looked at some examples where modelling has been useful in the management of disease, such as the fairly dramatic story of how models of foot-and-mouth disease spread in the UK were used to persuade the government to change policy and bring the army in to help deal with the disease during the 2001 outbreak.
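
For anyone curious, the sort of simple exploratory SIR model I mean is only a few lines; a toy sketch with purely illustrative parameters (mine, not anything from the book):

# Toy SIR model with Euler steps: for exploring behaviour, not for prediction.
def sir(beta, gamma, days=200, dt=0.1, i0=0.001):
    s, i, r = 1.0 - i0, i0, 0.0        # susceptible, infectious, recovered fractions
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return s, i, r

s, i, r = sir(beta=0.5, gamma=0.25)    # basic reproductive rate beta/gamma = 2
print(f"never infected: {s:.0%}, ever infected: {i + r:.0%}")
# Varying beta and gamma shows thresholds and peak timing - the kind of
# qualitative insight such models are actually for.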

A final comment is that the portrayal of Bayesian statistics as the only way to analyse data, and the use of straw-man arguments to ridicule "frequentist" statistics and bash Fisher, are getting really tired. I use both Bayesian and "frequentist" stats in my research and I am able to understand that both are useful under different circumstances and both have advantages and disadvantages. I suspect that if Silver had some experience working in experimental science, for example, he would have a better appreciation of when conventional statistical analysis is a useful tool.
61 people found this helpful.