on 28 November 2012
Mr Silver clearly knows what he is talking about, but I'm less sure he knows how to talk about it. I assume he set out to write a chatty, non-challenging book, but the result is light on substance and structure.
The Nobel prize-winning physicist Niels Bohr famously said 'Prediction is very difficult, especially if it's about the future'. This pretty much sums up the first half of the book. Yes, the detail about the financial crisis, weather forecasting, earthquakes etc is mildly interesting, but in relation to prediction, you will be wading through a lot of noise to extract the signal ('human nature makes us over-confident predictors', 'without either good theory or good empirical data, you may as well just guess', 'the most confident pundits are usually the worst' etc).
The substance of the book comes in twenty pages in the middle, where Silver introduces Bayesian logic (I learnt it in maths classes at school when I was fourteen, so it wasn't new to me, and it doesn't need 200 pages of build-up). The best section is where Silver contrasts Bayesian logic with Fisherian logic. Fisher created the maths that is used almost universally in medical and social science research to prove the efficacy of a treatment or theory. Silver explains how flawed this maths is - which is presumably why two thirds of the positive findings claimed in medical journals cannot be replicated. This is pretty heady stuff.
Silver claims that the second half of the book is about how to make predictions better. It is mostly more examples of failure, this time in chess, investment, climate and terrorism, with a few asides that might be considered signals ('testing is good', 'groups/markets tend to make better predictions than individuals'). The exception is the section on poker, which delivers the strongest message in the book: good gamblers think in probabilities (rather than dead certs) - when these probabilities diverge from the odds on offer by a suitable margin, they may place a bet. Bad poker players lose a lot more than good poker players make. The best is the enemy of the good...
Of course, the point of the book is that there is no silver bullet - good prediction requires detail, nuance, hard work, honesty and humility. It would be wrong to expect a check list for success at the end, and naturally, there isn't one. Even so, you are left with a craving for clarity.
'The Signal and the Noise' is a pleasant enough read, but it is mostly anecdote. Rather ironically, you are left to sort out the signal from noise yourself.
on 6 July 2014
If you are really into measuring signals in noise this isn't for you. If you want to be told, page after page, how brilliant the author is at, for example, football (US) statistics then you will find something of interest. Don't expect to find any useful information on regression; Bayes; predictor-corrector; Kalman; entropy; ... or just about anything to do with prediction.
on 8 December 2012
This is a book about prediction and the use of statistics to forecast future events such as earthquakes and the outcome of elections. When it's good, it's a lucid and enjoyable read which makes some important points about the art of prediction, with the chapters on political punditry and economic forecasting standing out as especially good. Unfortunately it is let down by a number of problems. These include the interminable and really quite tedious chapters on poker, baseball and chess (I really don't know why the chess one is in the book at all), and the inclusion of a number of serious errors and misconceptions in the chapter on epidemiology. This last is a subject that I think I have some knowledge of, and it's disturbing to see straightforward and important factual errors - the definition of the basic reproductive rate used is badly wrong, for example (if anyone's interested, the correct definition is that it is the number of new infections produced by a single infectious host *in a population of completely susceptible hosts*), and the interpretation is also wrong (it's not correct that any disease with basic reproductive rate >1 will go on to infect all susceptible hosts in the population). These are not nit-picking little errors - it's the equivalent of getting the definition of interest rate seriously wrong in a discussion of economics. These are fundamental concepts, and the errors tell us that the author did not properly understand the subject that he's writing about.
The use of mathematical models in epidemiology is also portrayed in a misleading way: no-one would ever expect a simple SIR model to produce useful predictions about disease spread in a real-world population, rather these simple models are used in an exploratory way to help us understand the theoretically possible behaviours of such diseases (something which is mentioned at the end of the chapter as a bit of an afterthought). It would be better if this chapter looked at some examples where modelling has been useful in the management of disease, such as the fairly dramatic story of how models of foot-and-mouth disease spread in the UK were used to persuade the government to change policy and bring the army in to help deal with the disease during the 2001 outbreak.
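To make the point above concrete, here is a minimal SIR sketch in Python. The parameter values, step size, and time horizon are my own illustrative choices, not drawn from the book or from any real outbreak: it simply demonstrates that a disease with basic reproductive rate R0 = beta/gamma comfortably above 1 still leaves a fraction of susceptible hosts uninfected, contrary to the interpretation criticised above.

```python
# A minimal SIR (Susceptible-Infected-Recovered) model, integrated with
# a simple Euler step. All numbers here are hypothetical illustrations.
def sir(beta, gamma, s0=0.999, i0=0.001, days=160, dt=0.1):
    """Return the final (susceptible, infected, recovered) fractions."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# R0 = beta/gamma = 2 here, yet a substantial susceptible fraction
# escapes infection once the epidemic burns out.
s, i, r = sir(beta=0.4, gamma=0.2)
print(round(s, 3))
```

This is exactly the exploratory use of simple models described above: the point is not to predict a real epidemic but to see what behaviours are theoretically possible.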
A final comment is that the portrayal of Bayesian statistics as the only way to analyse data and the use of straw-man arguments to ridicule "frequentist" statistics and bash Fisher is getting really tired. I use both Bayesian and "frequentist" stats in my research and I am able to understand that both are useful under different circumstances and both have advantages and disadvantages. I suspect that if Silver had some experience with working in experimental science, for example, he would have a better appreciation for when conventional statistical analysis is a useful tool.
on 25 November 2012
Silver has some good ideas, and he is to be commended for scrupulously footnoting his references, but there are some mistakes (the "cows would rate this" was from an S&P analyst, not Moody's) and he utilises heuristics he criticises elsewhere (lazily claiming the industrial revolution happened, just like that, in 1775 with the excuse "it is a nice round number").
My two main criticisms for non-American readers are that it is quite US-centric (I don't care about baseball, and the general moneyball story is impossible to avoid), and that the main philosophical stuff (which was the most useful and interesting to me) makes up a small portion of the book, the majority consisting of various examples in which he makes the same arguments through somewhat non-questioning interviews with different people.
He gives some useful examples throughout the book, covering meteorology, earthquakes and the transmission of viruses, but it still feels as if it could have been cut. The stuff on Bayes is interesting but really skates over the issue of how you come up with a Bayesian prior when you can't iteratively improve it because you do not have many data points. Given the time he spends looking at the financial crisis, this is a flaw, as it reduces the "wow, Bayes is really useful" impact when Bayes cannot offer that much resolution to the problem of predicting economic and financial crises, the key predictive failure he cites.
Even so, as a way of getting people to think a bit more deeply about what it is to make a prediction and how to know if it was well constructed, and how to integrate concepts of epistemology, it is a useful introductory book.
I am going to hazard my own prediction on this book. If you buy it, will you like it? That depends on how much you know about statistics - if you know a lot, then this book is unlikely to tell you anything new. And if you already know about Bayes' Theorem then the likelihood is that you will not find anything new in this book.
If however you have never heard of Bayes' Theorem and you don't have a background or training in statistics (like me), then the likelihood of you enjoying the book is greater. However, though I liked it, in the sense that overall I found it interesting, I only gave it three stars. Why?
First of all the book is quite uneven on the topics it discusses. The chapters on the failure of ratings agencies to predict the sub prime crash, on why it is so difficult statistically to predict earthquakes, or why weather forecasting has improved (and incidentally why private weather channels deliberately forecast a higher probability of rain than is actually merited) are good chapters. Others are only so-so, and feel like they have been padded out, like the chapter on economic forecasting, which does not end up saying anything that you don't already know (economists are rubbish at forecasting). And there are chapters that are just plain dull - like the ones on baseball and poker for instance. Some chapters rate five stars for interest, others three stars and some just the one star. It's a very variable reading experience.
Second, the underlying range of ideas, despite the eclecticism of the topics discussed, seems to be quite narrow. The book seems to be saying that the difficulty of predicting any given event, from earthquakes to the outcome of elections, depends on the availability of data. With earthquakes, the big ones, the ones we really need to worry about, these are hard to predict because we have observed or recorded so few of them. The problem is too few data. But baseball matches produce lots of data, and their outcomes are therefore easier to predict.
The book is valuable in that it seeks - rightly so - to counteract the tub-thumping certainties of media pundits. It is an important insight to realize that all predictions are about degrees of wrongness. But somehow these insights do not cohere into a sustained thesis. Instead the book reads like a collection of loosely organised chatty discussions.
I wanted to like this book because I admired the author's recent triumph with his successful prediction of the outcome of the 2012 US presidential election. But this book might have been improved with the chapters on baseball and poker being pulled out and the rest of the book edited down. Then it would have served as a good introduction to the perils and pitfalls of forecasting. Then I would probably have given it five stars. But as it currently stands, I feel I can only give it three stars.
on 25 July 2013
Nate Silver has shot to fame as the oracular figure who decoded political polling data into plain English and successfully predicted the US election. His debut book brings him back down to earth, using familiar examples as diverse as moneyball and warfare to demonstrate the sore lack of and need for better prediction in our lives, and the path to improvement through critical thinking and Bayesian reasoning.
Each chapter uses a particular area of prediction to teach broader lessons. The book opens with great momentum, using the financial crisis as a set of unambiguous examples of how not to make predictions before drilling into the all-too-human reasons that political commentators make poor election forecasts. There are good lessons here about how the need to feel confident and a single-minded focus on a few issues can lead one astray; he turns back to the financial crisis to emphasise the same failures there.
It's not all about the human factors, though, and Silver then turns to "moneyball" - statistics-based sports recruitment - to provide an overview of the more technical aspects of the art of prognostication. The idea of a predictive model is well articulated and applied to common-sense issues with surprising complications. With the reader warmed up, he spends several chapters digging into the fundamental reasons why level-headed and critically thinking scientists are unable to predict earthquakes. Some things - weather, disease, tectonic plates - are inherently challenging to forecast for interesting reasons, and he is equally quick to emphasise the technical traps that researchers can fall into in building their models.
The heart of the book, however, is Bayesian reasoning: the idea that we should take new predictions as adjustments to whatever our existing prediction said, as a sort of rolling improvement to our models. As a simple illustration, a test result indicating that one may have a rare disease should be combined with the low probability that one had the disease before the test results were in. Even if the test is 95% accurate, if the disease only affects one in a million people then the odds are far, far lower than 95% that one actually has the condition.
This is the tool Silver uses in the latter half of the book to show the way to better predictions, while still taking the time to illuminate other forecasting challenges. Whether it's poker or chess, the stockmarket or the battlefield, making a good model and refining it with new data is the key to victory. He lays out how the problems arise in these fields, be it a new raft of human frailties or the hefty challenge of trying to beat the "wisdom of the crowds", sets out how these failures in prediction can be capitalised on by good agents or bad, and suggests Bayesian solutions.
A chapter on climate change in a book aimed at those in big business has a huge potential to be a train wreck, but Silver manages to weave a fairly acceptable course through the problem. This chapter acts to draw the book together, forcing together issues of complex models, noisy new data, and incentives to mislead, with Bayesian reasoning as the knight in shining armour. The overall theme is that climate models are difficult to make for fundamental reasons, and the warming consensus that has come out of those models has stood up to new results - despite the claims of think tanks who wish it otherwise.
This section has annoyed commentators on both sides of the issue. Silver manages to make good points without falling into the many huge rhetorical traps that the denialist movement has laid in any writer's path, but he's never particularly strong on the issue either. I liked the unspoken conclusion that less-confident predictions - 95% confidence rather than 99%, say - are more resilient to contradictory data in a Bayesian world, and Silver does not make false equivalencies and is unambiguous in supporting global warming. However this is not a strong introduction into climate science, or a real challenge to many of the incorrect claims made by denialists.
Truth be told, this is a deliberate stylistic choice and a potential issue throughout the book. Silver avoids bringing in controversies in the fundamental results that feed forecasts, except where it is directly relevant to a chapter's lesson. In the section on the financial crisis, human incentives are raised as a source of bias, but the humans responsible are hardly taken to task. If you want to find out about the failures of reasoning that permitted the 9/11 attacks, you'll have to read elsewhere. (Donald Rumsfeld appears, but only as a lead into the "unknown unknowns" idea.) The implications of Scott Armstrong's work with the notoriously vociferous anti-climate-change Heartland Institute are left for the reader to find out about on their own.
This will variously come across as refreshingly expedient, frustratingly wishy-washy, focussed or cowardly depending on your reading preferences and ideological views. Consider yourself forewarned and take the book on its own terms.
The Signal and the Noise is certainly cleanly written and well-structured. Silver's introduction sets the book up as a toolbox, first outlining the failures of prediction and their causes before moving onto the successes and the processes that enable them, but in truth he allows the book to digress around the broader themes raised in each chapter, be it the problems and benefits of the "wisdom of the crowds" or the failure to properly, quantitatively account for the uncertainty in the prediction. These digressions are brief and enlightening, and echo back and forth between the chapters to make a more cohesive whole.
With the aforementioned caveat this is a superb route into the whole issue of modeling and forecasting. It's accessible, clearly written, technically sound and meticulously reasoned. It's recommended as reading on a difficult subject, although it's probably not going to prove to be the definitive work.
(If you want a primer on thinking about statistics before you dig into this, I strongly recommend Darrell Huff's "How to Lie with Statistics". It's inexpensive, funny, brief, and makes a good companion piece.)
on 13 March 2014
I think a whole lot of unnecessary material is found in this book, without a clear and coherent point. A bit of poker, sports, and other things, but the analysis is nothing unusual and very often fails to draw lessons with broader meaning.
on 27 September 2012
*A full executive summary of this book is available at newbooksinbrief dot com.
Making decisions based on an assessment of future outcomes is a natural and inescapable part of the human condition. Indeed, as Nate Silver points out, "prediction is indispensable to our lives. Every time we choose a route to work, decide whether to go on a second date, or set money aside for a rainy day, we are making a forecast about how the future will proceed--and how our plans will affect the odds for a favorable outcome" (loc. 285). And over and above these private decisions, prognosticating does, of course, bleed over into the public realm; as indeed whole industries from weather forecasting, to sports betting, to financial investing are built on the premise that predictions of future outcomes are not only possible, but can be made reliable. As Silver points out, though, there is a wide discrepancy across industries and also between individuals regarding just how accurate these predictions are. In his new book `The Signal and the Noise: Why So Many Predictions Fail--but Some Don't' Silver attempts to get to the bottom of all of this prediction-making to uncover what separates the accurate from the misguided.
In doing so, the author first takes us on a journey through financial crashes, political elections, baseball games, weather reports, earthquakes, disease epidemics, sports bets, chess matches, poker tables, and the good ol' American economy, as we explore what goes into a well-made prediction and its opposite. The key teaching of this journey is that wise predictions come out of self-awareness, humility, and attention to detail: lack of self-awareness causes us to make predictions that tell us what we'd like to hear, rather than what is true (or most likely the case); lack of humility causes us to feel more certain than is warranted, leading us to rash decisions; and lack of attention to detail (in conjunction with self-serving bias and rashness) leads us to miss the key variables that make all the difference. Attention to detail is what we need to capture the signal in the noise (the key variable[s] in the sea of data and information that are integral in determining future outcomes), but without self-awareness and humility, we don't even stand a chance.
While self-awareness requires us to make an honest assessment of our particular biases, humility requires us to take a probabilistic approach to our predictions. Specifically, Silver advises a Bayesian approach. Bayes' theorem has it that when it comes to making a prediction, the most prudent way to proceed is to first come up with an initial probability of a particular event occurring (rather than a black and white prediction of the form `I believe x will occur'). Next, we must continually adjust this initial probability as new information filters in.
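The "continually adjust" step described above can be sketched in a few lines of Python. The numbers here are invented purely for illustration (they are not from Silver's book): each new piece of evidence moves the previous posterior, which then becomes the prior for the next update.

```python
# Sequential Bayesian updating: yesterday's posterior is today's prior.
# All probabilities below are hypothetical illustrations.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise P(hypothesis) after observing one piece of evidence."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

belief = 0.5  # initial probability that the hypothesis is true
for _ in range(3):  # three pieces of mildly supporting evidence
    belief = update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(round(belief, 3))  # -> 0.889
```

Note how no single observation is decisive; confidence accumulates gradually, which is precisely the humility Silver advocates over black-and-white 'I believe x will occur' predictions.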
The level of certainty that we can place on our initial estimate of the probability of a particular event (and the degree to which we can accurately refine it moving forward) is limited by the complexity of the field in which we are making our prediction, and also the amount and quality of the information that we have access to. For instance, in a field like baseball, where wins and losses mostly come down to two variables (the skill of the pitchers, and the skill of the hitters), and where there is an enormous wealth of precise data, prediction is relatively straightforward (but still not easy). On the other hand, in a dynamic field such as the American economy, where the outcomes are influenced by an enormous number of variables, and where the interactions between these variables can become incredibly complex (due to things like positive and negative feedback), probabilities become a whole lot more difficult to pin down precisely (though they often remain possible on a general and/or long-term scale).
It is also important to recognize that while additional information can help us no matter what field we are trying to make our prediction in, we must be careful not to think that information can stand on its own. Indeed, additional information (when it is not met with insightful analysis) often does nothing more than draw our attention away from the key variables that truly make a difference. In other words, it creates more noise, which can make it more difficult to identify the signal. It is for this reason that predictive models that rely on statistics and statistics alone are often not very effective (though they do often help a seasoned expert who is able to apply insightful analysis to them).
In the final stage of the book Silver explores how the lessons that he lays out can be applied to such issues as global warming, terrorism and bubbles in financial markets. Unfortunately, each of these fields is a lot noisier than many of us would like to think (thus making them very difficult to predict precisely). Nevertheless, the author argues, within each there are certain signals that can help us make better predictions regarding them, and which should help make the world a safer and more livable place.
If you are hoping that this book will make you a fool-proof prognosticator, you are going to be disappointed. A key tenet of the book is that this is simply not possible (no matter what field you are in). That being said, Silver makes a very strong argument that by applying a few simple principles (and putting in a lot of hard work in identifying key variables) our predictive powers should get a great boost indeed.
on 21 December 2012
Nate Silver's book is a very well written and accessible read. It is also wide-ranging in coverage, though with a distinct American flavour. As he has a well-deserved reputation as a successful Bayesian practitioner, my expectations for a more insightful book were somewhat frustrated. However, this is nevertheless a good enough and friendly introduction to the "Art and Science" of prediction.
on 4 March 2015
Highly informative and thoughtful, to the point of actively changing my way of thinking. Quite heavy on statistical theory in parts, so it's not a light and fluffy 'Freakonomics' type book, but I really couldn't wait to get onto the next page. Nate Silver covers baseball, earthquakes, weather, poker, banking and other topics throughout different chapters of the book, giving a new and interesting insight into each. So much so, in fact, that I have been passing off many of the statistical revelations in my own conversations since, which has undoubtedly made me the life and soul of any party! The only downside I would say is that it can be quite Americanised at times, particularly with the baseball parts, where the author assumes knowledge about field positions and typical hit rates when discussing game stats, which will probably be lost on a lot of European readers.