Customer Reviews


10 Reviews
5 star: (6)
4 star: (2)
3 star: (2)
2 star: (0)
1 star: (0)

The most helpful favourable review
The most helpful critical review


16 of 16 people found the following review helpful
5.0 out of 5 stars A seriously important book
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies.

I hope this book finds a huge...
Published 2 months ago by Calum

versus
3.0 out of 5 stars The book is not especially well structured; the content is interesting but quite speculative
The book is not especially well structured. The content is interesting, but the author builds many conclusions on a very speculative foundation. In many cases I could agree. A slight disappointment was that the book does not cover in much detail how to build a superintelligent machine. The author only touches on this topic and describes the problem in very general terms...
Published 6 days ago by Mirek



16 of 16 people found the following review helpful
5.0 out of 5 stars A seriously important book, 11 July 2014
Verified Purchase
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies.

I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what “a good outcome” actually means.

It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:

“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”

This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin.

He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals. What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast.

He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).

The book’s middle chapter and fulcrum is titled “Is the default outcome doom?” Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective. Forever.

Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.

In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of the book he issues a powerful rallying cry:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”

Amen to that.


2 of 2 people found the following review helpful
5.0 out of 5 stars Nick Bostrom presents a clear case that humanity needs a better understanding of the potential goals of a superintelligence, 14 July 2014
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
This book covers a range of possible mechanisms for creating a superintelligence, as well as a staunch warning about the consequences of creating such an entity. Nick Bostrom presents a clear case that humanity needs a better understanding of the potential goals of a superintelligence – ideally before we blindly stumble into a potentially cataclysmic outcome.


4.0 out of 5 stars Machines with morals?, 11 Sep 2014
By PT Cruiser (West Coast)
Nick Bostrom has packed this book with information about how AI (artificial intelligence) may someday progress to a point that is beyond even our wildest imaginings. He explains his theories, based on history and data and all noted with sources in the back of the book, from every angle imaginable. It seems to be a question not of if, but of when AI will be able not only to learn on its own but to improve upon itself and build more advanced versions of AI. And where will that leave humanity? Perhaps in the dust. Along with all the different scenarios of how this could happen, Bostrom suggests possible ways to keep these machines under some sort of control. But with such superintelligence, almost beyond our imagining, it doesn't leave me with a real sense of confidence that humanity will survive.

Bostrom packs a lot of information into this book. Much of it is highly philosophical. I was not surprised to learn that he is a Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute. He has a background in physics, computational neuroscience, and mathematical logic. This book reads much like a text and reminded me of books in my college philosophy courses where it would take me a long time to get through a single page, not because there were words that I didn't understand, but because some of the concepts were difficult to wrap my brain around. This book is amazing, don't get me wrong, but there were sections where it was a real chore to get through without having my mind wander because of the dry language. I was very interested in the subject matter and could hardly wait to read the book. But since it is marketed as a book for the general public rather than as a college textbook, I think the author could have done a better job of presenting the material in a less academic manner. I believe it is possible to convey complex ideas in more engaging language.

It's difficult for me to give this book a 4 star rating because it's packed with so much relevant and thought-provoking information. But I have to be honest and say that parts were a real slog to get through.


4.0 out of 5 stars What is Man's Future?, 24 Aug 2014
Verified Purchase
Many people will have considered the idea of superintelligence, particularly in the light of robotics and developments in artificial intelligence. Fewer will have faced the possibility that superintelligence is now on the horizon, and the implications of this. The book outlines the knowledge base (where we are now) against which to take such considerations forward, and leads the reader along thought pathways (where we might be) arising from this base. For those prepared to engage with the concept and reality of superintelligence, rather than dismiss it out of hand or put it out of mind, this is a book that should be read and its contents debated.


3.0 out of 5 stars The book is not especially well structured; the content is interesting but quite speculative, 10 Sep 2014
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
The book is not especially well structured. The content is interesting, but the author builds many conclusions on a very speculative foundation. In many cases I could agree. A slight disappointment was that the book does not cover in much detail how to build a superintelligent machine. The author only touches on this topic and describes the problem in very general terms. However, I can recommend this book as valuable reading about artificial intelligence.


5.0 out of 5 stars Superintelligent!, 29 Aug 2014
Verified Purchase
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
A very smart book, based on facts and scientific papers. In some sense brave: although most expert forecasts are wrong, this one is at least inspiring. Strongly recommended for those who like intelligent papers.


5 of 8 people found the following review helpful
3.0 out of 5 stars A serious attempt to consider the impact of artificial intelligence, 9 July 2014
By Brian Clegg (Wiltshire, England)
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying ‘philosophy is dead – all we need now is physics’, or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

It’s worth pointing out immediately that this isn’t really a book for the general reader, and that's why I only gave it three stars. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.

What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers have been invented, pundits have been putting the development of effective AI around 20 years in the future, and it’s still the case.) Even so, it seems entirely feasible that we will have a more than human AI – a superintelligent AI – by the end of the century. But the ‘how’ aspect is only a minor part of this book.

The real subject here is how we would deal with such a ‘cleverer than us’ AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind that it is more intelligent than us, how would we prevent it from taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don’t think through the implications and how we would deal with them, we could well be stuffed as a species.

I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet science fiction has put a lot of thought – and, dare I say it, a lot more readability than you typically get in a textbook – into these issues, more than is being allowed for here, and it would have been worthy of a chapter in its own right.

I also think a couple of the fundamentals aren’t covered well enough, but are pretty much assumed. One is the idea that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure enough thought is given to the basic ways you could pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.

The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans can, so once we build one it will rapidly improve on itself, producing an ‘intelligence explosion’. The trouble with this argument, I suspect, is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption.

However, this doesn’t set aside what a magnificent conception the book is. I don’t think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.


5.0 out of 5 stars An excellent read, 22 Aug 2014
Verified Purchase
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
Fascinating


0 of 2 people found the following review helpful
5.0 out of 5 stars Five Stars, 11 Aug 2014
Verified Purchase
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
Excellent book.


0 of 3 people found the following review helpful
5.0 out of 5 stars Five Stars, 3 Aug 2014
Verified Purchase
This review is from: Superintelligence: Paths, Dangers, Strategies (Hardcover)
This book may very well change your life.



This product: Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (Hardcover – 3 July 2014)