
Customer Reviews

4.1 out of 5 stars

on 25 February 2014
I read this book and also read a review of it entitled "Naive, fearmongering nonsense that has already been debunked". For my comments on that review, see the end of this post.

While the book was interesting, it was rather too short and did not cover as much ground as I would have liked. I believe that the future outlook for AI may be even more depressing than Armstrong suggests in the book.

The book covers how we might define intelligence, and how machines could end up being phenomenally more capable than a human in terms of intelligence. It then discusses the difficulty of designing a machine so that it does precisely what we want it to do, and the difficulty of deciding whether a suggestion made by such a machine should be followed. It goes on to suggest how we might control the design and operation of AI machines. The problem I found with the book is that it generally assumes that the humans designing and controlling such machines would be doing so with the greater interests of humankind in mind.

And this is where I believe the book (and Richard Loosemore's review) fails to see the full implications of a truly intelligent AI machine. Surely, as soon as AI machines are developed to a level comparable with human intelligence, it will be only a short time before certain individuals or groups realise that such machines could be used for their own profit. Some of these individuals or groups will have few scruples about the effect that the use of such machines might have on the rest of humanity. And because of that, there is no reason to believe that the day will not come when such a machine has a hugely deleterious effect on humanity.

As regards Richard Loosemore's review mentioned above (Richard Loosemore is the reviewer Richard L), I do not consider it an appraisal of the book at all; the reviewer is simply promoting his own ideas. He refers to a paper he has written (which is at [...]). In the paper, Loosemore dismisses the possibility that an AI machine instructed not to have any negative impact on human welfare might misunderstand its instructions and therefore act in an unintended manner. He does this on the basis that a machine which misunderstands such instructions cannot really be an intelligent machine. But here Loosemore is confusing two quite disparate strands of thought: one is reaching an intelligent conclusion from certain premises; the other is passing a moral judgement as to whether acting on that conclusion is for the greater good of all, or of a selected part, of humanity. If the question is whether a machine has as much intelligence as a human, then there is no reason to suppose that a machine is any less capable of ignoring the effects of its decisions on humans than any intelligent human. Loosemore goes on to make the incredibly naive assumption that all future AI machines will be designed to act according to a system he envisages in order to achieve their goals, and then assumes that because of this, everything will be OK. It won't, because the goals of at least some of those machines will be defined so as to override any concerns for humanity.
7 people found this helpful.
on 18 February 2014
This is a surprisingly easy book to read, and it gives the lay reader a guide to some of the fascinating ideas surrounding the impact of decision-making computers. It is not designed for those looking for a technical read, although it does have useful references at the end of each chapter.
4 people found this helpful.
on 21 February 2014
The book offers a short, concise, and yet easy-to-read review of the potential dangers (and usefulness) of AI.

It is true that you can find the same information in other sources (mostly written by close colleagues of the author); however, this might be the shortest book that manages to get the idea across clearly.
3 people found this helpful.
on 2 May 2016
A great book to get you thinking...

I bought this book as something to read on my Kindle by the pool on holiday...

It's not expensive...

And gives some nice insights into the future philosophical issues AI will bring...

It's nicely structured and not too technical... But you can tell the author really knows his field...

It's very short... But don't let that put you off...

I'd recommend it...

Nick Norton
on 12 May 2015
Are AIs going to take over? The author sketches a plausible scenario, arguing that if we build them, their intelligence will be other than human. Every safeguard to protect us will have to be built in, and we're not going to be able to do that.

It's a well-written and informative book. Recommended!
on 29 January 2016
This short book provides basic information and comment on the problems likely to arise from the advance of AI. It publicises the author's group, which is studying the situation. Satisfactory as an introduction to this topic for the uninformed, a topic which will affect the human race in as-yet-unknown ways in the years to come.
on 15 May 2014
I must confess to being deeply disappointed with the naïve, contradictory and logically inconsistent approach this book takes to what could be an extremely important and thought-provoking subject.

The author oddly seems to start with a poor, almost non-existent definition of what constitutes intelligence.

He talks about not anthropomorphising artificial intelligences and then spends several pages doing just that, describing from a human perspective what it would be like to be a super-intelligent AI with control across almost all areas of human endeavour, which is totally logically inconsistent.

He makes the common mistake of pointing out that a computer can do addition much faster than a human. This is because it has an ALU, a circuit specifically designed by human intelligence to add binary numbers. It's like observing that a car can move faster than a human because it has an engine, drivetrain, transmission, etc.; it is a purpose-built machine, whereas the human body is a general-purpose machine. Likewise, a computer has purpose-built arithmetic units, while humans have to use intelligence to learn how to carry out arithmetic.

Deep Blue took years of applied human intelligence to refine computational algorithms for playing chess, and to design computational hardware fast enough to run them in real time and beat a chess grandmaster. But what happens if I challenge it to a game of noughts and crosses? Nothing: it is a purpose-built, human-designed machine for playing chess. Unlike the chess grandmaster, it can't learn new games; the software engineers have to start again. My point is that these sorts of applications are stored intelligence: they are the results of intelligent agents, but the programs themselves are not intelligent.

Progress is being made in producing truly intelligent learning machines, but like children they will need to be taught, not programmed. The book also doesn't deal with the point that we are unlikely to make one super-intelligent AI; there are likely to be many, deployed in separate domains, and in a sense they can monitor each other.

The author has also taken areas such as stock trading, natural language understanding, and social manipulation, and decided we will combine them all under the command of some higher-level AI to which we give vague English-language goals. I find that unlikely in the extreme.

Lastly, one of the points made in conclusion is that in any task an AI matches a human at, it will quickly surpass the human. Yet oddly, to the author this seems to mean every task except moral judgement; otherwise, what is the point this book is trying to make?

Altogether ill-reasoned and disappointing, but it will appeal to those who know little about how computers or AI work and who will enjoy a good doom prophecy.
4 people found this helpful.
on 16 October 2014
An excellent, non-technical look at a possible future of machine intelligence. It touches on morality, control, and what it all means for us. Surprisingly witty in places.
on 20 May 2016
A nice book, a bit shorter than I expected, but an overall good, informative read.
on 2 May 2015
Prompt delivery - will use again