Well-intentioned, but rather short and missing some important points
on 25 February 2014
I read this book and also read a review of it on amazon.com entitled "Naive, fearmongering nonsense that has already been debunked". For my comments on that review, see the end of this post.
While the book was interesting, it was rather too short and did not cover as much ground as I would have liked. I believe that the future outlook for AI may be even more depressing than Armstrong suggests in the book.
The book covers how we might define intelligence, and how machines could end up being phenomenally more capable than a human in terms of intelligence. It then discusses the difficulty of ensuring that we design such a machine so that it does precisely what we want it to do, and the difficulty of deciding whether a suggestion made by such a machine should be followed. It then goes on to suggest how we might control the design and operation of AI machines. The problem I found with the book was that it generally assumed that the humans designing and controlling such machines would be doing so with the greater interests of humankind in mind.
And this is where I believe that the book (and Richard Loosemore's review) fails to see the full implications of a truly intelligent AI machine. Surely, as soon as AI machines are developed to a level comparable with human intelligence, it will only be a short time before certain individuals or groups realise that such machines could be used to profit themselves in some manner. Some of these individuals or groups will have few scruples about the effect that the use of such machines might have on the rest of humanity. And because of that, there is every reason to believe that the day will come when such a machine has a hugely deleterious effect on humanity.
As regards Richard Loosemore's review mentioned above (Richard Loosemore is the reviewer Richard L), I do not consider it an appraisal of the book at all; the reviewer is simply promoting his own ideas. He referred to a paper he has written (which is at [...]). In the paper, Loosemore dismisses the possibility that an AI machine instructed not to have any negative impact on human welfare might misunderstand its instructions and therefore act in an unintended manner. He does this on the basis that a machine that misunderstands such instructions cannot really be an intelligent machine. But here Loosemore is confusing two quite disparate strands of thought: one is reaching an intelligent conclusion from certain premises, and the other is passing a moral judgement as to whether acting on that conclusion is for the greater good of all, or of a selected part, of humanity. If the question is whether a machine has as much intelligence as a human, then there is no reason to suppose that a machine is any less capable than an intelligent human of ignoring the effects of its decisions on humans. Loosemore goes on to make the incredibly naive assumption that all future AI machines will be designed to act, in pursuit of their goals, according to a system that he envisages, and then assumes that because of this everything will be OK. It won't, because the goals of at least some of those machines will be defined so as to override any concerns for humanity.