The January 1990 AT&T telephone network crash and the June 1996 in-flight explosion of an Ariane 5 rocket were both caused by software failures. Harel cites these as two examples of incorrect computer programming that should have been avoidable. With our industrial economy relying to an ever-greater extent on computers for essential functions, the importance of software reliability stands in stark relief.
Harel's third example, that of a 107-year-old woman who was mailed registration paperwork for first grade, highlights that even our system of social organization has become dependent on competently run computer networks. This may not be as dramatic as network or rocket crashes, but multiplied across our burgeoning population, it illustrates the fiscal nibbling that computer errors exact on our public budgets.
Thus Harel, having established the stakes (not at the outset, unfortunately, but near the end of Chapter 1), takes up the technical issues surrounding correctness of computation. The book begins with a discussion of the algorithm: programs, inputs, instances, programming languages, and termination. In the next chapters he goes on to problems that defy solution by any means, even in theory. He describes the Church-Turing Thesis on "effective computability," the Halting Problem, and Rice's Theorem: "No algorithm can decide any nontrivial property of computations."
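To give a flavor of the Halting Problem argument, here is a sketch of my own in Python (not from the book; the function names are hypothetical, and halts() cannot actually be written -- that is the whole point):

    def halts(program, data):
        # Suppose, for contradiction, this decided whether
        # program(data) eventually terminates.
        ...  # no implementation can exist

    def paradox(program):
        # Do the opposite of whatever halts() predicts.
        if halts(program, program):
            while True:   # loop forever if halts() says we would stop
                pass
        # otherwise, halt immediately

    # paradox(paradox) halts if and only if it does not halt --
    # a contradiction, so no halts() can exist.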
Even among the problems that are solvable in theory, many take too much time or too many machine resources to be worth solving in practice. These are the subject of Chapter 3. Chapter 4 deals with NP-complete problems: decidable, but not known to be tractable (solvable in a reasonable, polynomial amount of time). In other words, you know that you can know, but you don't know!
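To make the tractability point concrete, consider subset-sum, a classic NP-complete problem. A brute-force solver is easy to write but examines up to 2**n subsets (again my own sketch, not Harel's):

    from itertools import combinations

    def subset_sum(numbers, target):
        # Try every subset, smallest first: correct, but the number
        # of candidate subsets doubles with each element added.
        for r in range(len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)

Fine for six numbers; hopeless for a few hundred. No one knows whether a fundamentally faster algorithm exists, which is exactly the "you don't know" above.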
Chapter 5 takes up (mainly) algorithmic parallelism, which offers some hope; it also touches on randomization, quantum computing, and molecular computing. Chapter 6 takes up cryptography, leading up to the RSA algorithm and zero-knowledge proofs.
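The RSA idea fits in a few lines once you have modular arithmetic. A toy version with tiny primes (my own illustration; real RSA uses primes hundreds of digits long, and the modular inverse via pow(e, -1, phi) needs Python 3.8+):

    p, q = 61, 53
    n = p * q                  # 3233, the public modulus
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, coprime to phi
    d = pow(e, -1, phi)        # private exponent, 2753

    message = 65
    cipher = pow(message, e, n)   # encrypt with the public key (e, n)
    plain = pow(cipher, d, n)     # decrypt with the private key d
    assert plain == message

The security rests on the difficulty of factoring n back into p and q -- a problem believed, though not proven, to be intractable, which ties the cryptography chapter back to Chapter 4.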
The last chapter takes up the notion of "artificial intelligence": the Turing test, Eliza, searching strategies, and so on.
It also touches on issues not unlike those demonstrated by the recent IBM Watson project: "The difficulty is rooted in the observation that human knowledge does not consist merely of a large collection of facts. It is not only the sheer number and volume of facts that is overwhelming...but, to a much larger extent, their interrelationships and dynamics...a human's knowledge base is incredibly complex, and works in ways that are still far beyond our comprehension." The fact is that even now, after Watson, we STILL don't understand how a human knowledge base works: Watson is not a human and does not employ human search strategies. Despite the media hype that IBM has been trying to work up on the Sunday morning news shows, Watson is still just a souped-up search engine with an English-language front end. Interesting and potentially useful, but no breakthrough.
It seems funny, or perhaps not, that this topic is taken up in the same chapter that discusses the Turing test. Watson may produce results competitive with those of humans, but it works in a completely different way -- machine learning. That means, at bottom, it is still a rules-based system, but one that makes up new rules and modifies existing ones as it operates. Human cognitive machinery is not rules-based. Turing's position is that you can ignore the underlying mechanism; the only way to compare a human and a machine is by results alone. It is the computing equivalent of the behaviorist perspective in psychology: all that matters is what you can observe. Again, nothing new here; this has been apparent since the days of Eliza.
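To see just how little machinery Eliza-style conversation needs, here is a toy responder in Python (my own sketch, not from the book): canned patterns, zero understanding.

    import re

    RULES = [
        (r'I am (.*)', 'Why do you say you are {}?'),
        (r'I feel (.*)', 'How long have you felt {}?'),
        (r'.*', 'Please tell me more.'),   # catch-all fallback
    ]

    def respond(sentence):
        # Return the response template for the first matching pattern.
        for pattern, template in RULES:
            match = re.match(pattern, sentence, re.IGNORECASE)
            if match:
                return template.format(*match.groups())

    print(respond('I am tired of computers'))
    # -> Why do you say you are tired of computers?

A dozen lines can pass for attentiveness without a shred of comprehension, which is the whole trouble with judging by results alone.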
The book is rather theory-oriented but still educational. When Harel cited those three real-world incidents, I expected the text to be more practically oriented; on that score I was disappointed. But it is still a worthwhile read.