Helped my understanding of why big things fail
on 17 January 2001
"if we look at the individual elements of behavoir that ultimately produced the accident at Chernobyl, we cannot find a single example of failure". A sobering line from the fist chapter.
The book contrasts good decision-making processes with bad ones, using just a few real-life examples and a lot of studies. There are some best-practice patterns that can be drawn out of it, but it leaves you to apply serious thought if you want to use the lessons in a "traditional" project plan, for example.
You will have to add your own industry experience to get real value out of it. I'm interested in how it can be applied to software engineering: after reading this, I'm more interested in good feedback loops and early trend monitoring during project management. Paradoxically, I am also more wary of "analysis paralysis"! I think it would make a great definitive handbook if it could be translated to real technology programmes, where there is often a strategy plan and a "traditional" technology plan running together, plus an issue/risk management plan.
Other readers might think about applying it to public policy (or, alternatively, to how a politician's dogma can overcome inconvenient facts. You can probably think of a dozen examples easily - then wonder how western democracies manage their economies so successfully given the opportunities for mistakes).
Regarding the book's style: although it is a seamless translation, I wish some of the wording had been better chosen - for example, "ballistic" instead of "linear", and "intransparent" instead of "opaque" - or do these new words have a special meaning that I've missed? And although it mentions the value of diagramming time-dependent systems, it doesn't offer any good examples or concrete references to proven methods.