The topic of machine intelligence continues to inspire both worry and elation. This book is an interesting mixture of the two, for the author is both optimistic about the eventual rise of machine intelligence, which he argues is to a large degree already here, and clearly concerned about its possible negative consequences. Failure to understand and adapt to the new technologies now arising may threaten us with extinction, he argues in the first chapter.
He also states in chapter 1 that in order to survive our "technological adolescence," humans must shed some of their "self-destructive evolutionary baggage." This belief seems to be a popular one, pervasive in literature, the performing arts, and philosophy. But from a statistical and scientific standpoint, it is unsupported. Compared with the total number of humans who have ever lived, only a tiny minority of individuals throughout history have ever physically hurt anyone; an even smaller number have actually killed another human being. The author's cynicism here is unjustified.
The author does, however, engage in an interesting discussion of the nature of intelligence and of why he believes machines are already more intelligent than humans in certain specialized domains. Because of this, he also argues (correctly) that the further rise of machine intelligence will take place incrementally, with no well-defined moment at which one could say machine intelligence has surpassed human intelligence. It seems we have learned to live with machines doing some things better than we can, but have not yet come to view these capabilities as "intelligent." But, asks the author, if the machines are more intelligent, at least in these areas, how would one know whether they are working properly? It is at this point, the author believes, that one should worry about the future of humanity as the dominant life-form on Earth.
Throughout the book, the author shows keen insight into the real goals behind research and development in A.I. The main goal, he says, is not to create machines that think and behave exactly like humans, but to find solutions to problems and to perform tasks that humans require. This, the author believes, will bring about intelligent machines whose cognitive abilities are distinctive and characteristically non-human. Current developments in A.I. bear out his view, such as genetic programming and automatic theorem proving, both of which have produced solutions to problems very different from what humans would have devised.
In addition, and perhaps to the alarm of some philosophers, the author takes a pragmatic view of the question of whether machines can think. He clearly does not want to engage in armchair philosophical debates on the question, and considers them irrelevant. What matters to him is whether the machine "acts in all respects" as though it understands. Imputing mental processes to a machine will assist in understanding how it works and what it can do, and this is perfectly fine with the author. But it does, in the author's view, raise questions about the legal and ethical status of thinking machines.
Given the book's title, it is not surprising to find a discussion of the "strong A.I." problem. The author spends a chapter addressing the nature of consciousness and some of the ideas and myths surrounding it. He recognizes, correctly, that the doctrines of vitalism and dualism are not useful from a scientific perspective. The proponents of these doctrines hold to the "irreducibility" of consciousness, and therefore to the untenability of analyzing it. Pure speculation thus becomes the tool of inquiry, conducted from the philosopher's armchair rather than in the laboratory. The author, thankfully, advocates a purely scientific approach, taking the physical nature of consciousness as an axiom and seeing how far that assumption will lead. His analysis and commentary throughout the chapter are interesting and tied to evolutionary arguments about why consciousness is structured as it is.
Most interesting is the author's discussion of the role of emotions in human cognition. Not viewing emotions as inherently undesirable or "irrational," he gives reasons for wanting to incorporate them into an intelligent machine. One of these is algorithmic: emotions provide a "weighting scheme" that filters out undesirable paths in the space of alternatives. Anyone who has attempted to design search algorithms will understand the importance of weighting schemes that allow pruning of the search space. The same goes for those who design neural networks for pattern matching or time-series prediction, where bias nodes are essential to the network's proper functioning. The author gives as an example the biases built into chess-playing machines, without which the machines' capabilities would be crippled.
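The pruning idea described here can be made concrete with a simple beam search, in which a scoring function plays the role the author assigns to emotions: it ranks the available paths and discards the unpromising ones. The graph, the scores, and the function names below are illustrative assumptions of this review, not anything taken from the book.

```python
import heapq

def beam_search(start, neighbors, score, goal, beam_width=2):
    """Search a graph, but at each step keep only the beam_width
    highest-scoring partial paths -- a crude 'weighting scheme'
    that prunes undesirable branches of the search space."""
    frontier = [(start, [start])]
    while frontier:
        candidates = []
        for node, path in frontier:
            if node == goal:
                return path
            for nxt in neighbors(node):
                if nxt not in path:  # avoid revisiting nodes on this path
                    candidates.append((nxt, path + [nxt]))
        # the weighting scheme: discard all but the most promising paths
        frontier = heapq.nlargest(beam_width, candidates,
                                  key=lambda c: score(c[0]))
    return None

# Hypothetical toy graph and desirability scores, for illustration only.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
         "D": [], "E": ["G"], "G": []}
SCORE = {"A": 0, "B": 1, "C": 2, "D": 1, "E": 3, "G": 9}

path = beam_search("A", GRAPH.__getitem__, SCORE.__getitem__, "G")
```

With a beam width of 2, low-scoring branches such as the one through "B" are dropped early, so only a fraction of the full path space is ever explored, which is exactly the benefit the author claims for emotional weighting.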
The author definitely believes in the possibility of machines "taking over," devoting an entire chapter to scenarios that might bring this about. But his cynicism works against him here, namely his belief that humans, though clearly intelligent, are prone to extreme violence. His notion of intelligence is therefore too narrow. An alternative is that the more intelligent an entity becomes, the less prone to violence it becomes: violence disrupts the entity's cognitive flow, and the entity avoids it out of necessity, to maintain a state of intelligence that not only has survival value but may be a purely subjective need. The degree of intelligence is thus inversely related to the degree of violence engaged in. There are many examples of this, billions in fact: the humans who have lived throughout history. The vast majority of them have been superb thinking machines, and they serve as excellent models for the machines they are creating and will create.