
Customer Reviews

4.2 out of 5 stars (32 reviews)

45 of 46 people found the following review helpful
on 11 July 2014
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: Paths, Dangers, Strategies.

I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what "a good outcome" actually means.

It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:

“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”

This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin.

He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals. What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast.

He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).

The book’s middle chapter and fulcrum is titled “Is the default outcome doom?” Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective. Forever.

Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.

In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of book he issues a powerful rallying cry:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”

Amen to that.
7 of 7 people found the following review helpful
TOP 100 REVIEWER on 11 September 2014
This review is from: Superintelligence: Paths, Dangers, Strategies (Kindle Edition)
Nick Bostrom has packed this book with information about how AI (artificial intelligence) may someday progress to a point that is beyond even our wildest imaginings. He explains his theories based on history and data, all noted with sources in the back of the book, from every angle imaginable. It seems not to be a question of if, but of when AI will be able not only to learn on its own but to improve upon itself and build more advanced versions of AI. And where will that leave humanity? Perhaps in the dust. Along with all the different scenarios of how this could happen, Bostrom suggests possible ways to keep these machines under some sort of control. But with such superintelligence, almost beyond our imagining, it doesn't leave me with a real sense of confidence that humanity will survive.

Bostrom packs a lot of information into this book. Much of it is highly philosophical. I was not surprised to learn that he is a Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute. He has a background in physics, computational neuroscience, and mathematical logic. This book reads much like a text and reminded me of books in my college philosophy courses, where it would take me a long time to get through a single page, not because there were words I didn't understand, but because some of the concepts were difficult to wrap my brain around. This book is amazing, don't get me wrong, but there were sections that were a real chore to get through without having my mind wander because of the dry language. I was very interested in the subject matter and could hardly wait to read the book. But since it is marketed as a book for the general public rather than as a college textbook, I think the author could have done a better job of presenting the material in a less academic manner. I believe it is possible to convey complex ideas in more engaging language.

It's difficult for me to give this book a 4-star rating because it's packed with so much relevant and thought-provoking information. But I have to be honest and say that parts were a real slog to get through.
12 of 13 people found the following review helpful
on 20 December 2014
Bostrom's work is always fascinating and this area does deserve attention. However, I found none of the routes to superintelligence at all convincing, and there appear to be major obvious holes in the arguments. Whole brain scanning and replication looks like an intractable problem now that quantum effects in microtubules have been discovered in the brain. Other routes may result in super AI.

The main problem with the argument is that we don't need superintelligence to be threatened. A robot with the ability to kill does not have to be superintelligent, only well adapted to killing humans. This is the main problem with AI: its definition of 'I' sometimes misses the point. To be well adapted to your environment does not always require great intelligence.
1 of 1 people found the following review helpful
TOP 100 REVIEWER on 7 September 2015
John H. Flavell was probably the first to use the term metacognition when suggesting that it "refers to one's knowledge concerning one's own cognitive processes or anything related to them, e.g., the learning-relevant properties of information or data. For example, I am engaging in metacognition if I notice that I am having more trouble learning A than B; if it strikes me that I should double check C before accepting it as fact." That was in 1976.

As I began to read Nick Bostrom's brilliant book, I was again reminded of Flavell's research. What does the term "superintelligence" mean? According to Bostrom, "We can tentatively define a superintelligence as [begin italics] any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest [end italics]." Bostrom focuses on three different forms of superintelligence and asserts that they are essentially equivalent: Speed superintelligence, a system that can do all that a human intellect can do, but much faster; Collective superintelligence, a system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system; and Quality superintelligence, a system that is at least as fast as a human mind and vastly qualitatively smarter.

He could well be right that the development of superintelligence - by human beings -- could be "quite possibly the most important and most daunting challenge humanity has ever faced. And - whether we succeed or fail - it is probably the last challenge we will ever face." To his credit, he duly acknowledges the possibility that many of the points made in the book could be wrong. "It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions."

These are among the several dozen passages of greatest interest and value to me, listed also to suggest the scope of Bostrom's coverage in Chapters 1-7:

o Seasons of hope and despair (Pages 5-11)
o Opinions about the future of machine intelligence (18-21)
o Artificial intelligence (23-30)
o Whole brain emulation (30-36)
o Biological cognition (36-44)
o Brain-computer interfaces (44-48)
o Forms of Superintelligence (52-57)
o Recalcitrance (66-73)
o Will the forerunner get a decisive strategic advantage? (79-82)
o From decisive strategic advantage to singleton (87-90)
o Functionalities and superpowers (92-95)
o An AI takeover scenario (95-99)
o The relation between intelligence and motivation (105-108)
o Instrumental convergence (109-114)

I commend Bostrom on his skillful use of reader-friendly devices such as Figures (14), Tables (13), Boxes (13), summaries and synopses, and extensively annotated notes (Pages 261-304). These devices will facilitate, indeed expedite, frequent review of key material later.

As of now, today, no one knows with certainty to what extent (if any) superintelligence will eventually be able to do everything that human intellect can do...and do it better and faster. Humans design systems and, as Bostrom suggests, humans are beginning to design systems that can also design systems. My own crystal ball imploded long ago so I have no predictions to offer. I do have a few articles of faith that I presume to share now. First, I believe that instruments of artificial intelligence (AI) will never replace human beings but, over time, they will become increasingly more valuable collaborators insofar as the whats and hows are concerned. Second, I believe that human beings will always be much better qualified to rank priorities and determine the whys. Finally, and of greatest importance to me, I believe that only human beings possess a soul that can be nourished by "a compassionate and jubilant use of humanity's cosmic endowment."
1 of 1 people found the following review helpful
on 31 January 2015
It's true the style of this book makes it hard going at times, but fascinating insights make it worth the effort and flashes of dry humour enliven the lengthy paragraphs.

Bostrom says we aren't ready to deal with the dangers of superintelligent AI yet, but when it does arrive - most likely several decades from now - it's vital that we have worked out well in advance how to control it (if that's even possible!). We have to make sure to program it in such a way that it will benefit rather than destroy humanity, and the book walks us through various possible routes to avoid disaster. We are a long way from building superintelligent machines yet, though, and there are incredibly hard questions to be answered first, such as: how do you put ideal motivations and values into words, let alone computer code? Isaac Asimov's three laws of robotics would be hopelessly inadequate.

This book informs an important and necessary debate about the future of AI. It provides a deep insight into the dangers and how we might try to avoid them (it's also a great book to read before watching Ex Machina, which I think gets it just right!)
on 17 December 2014
It is a really interesting book and in some ways frightening. There are numerous possibilities for "super intelligence", from genetic engineering through to the development of some kind of artificial entity. The book is readable if somewhat heavy going; the reason is that it makes you think, well, it certainly makes me think.

Mr Bostrom does his best to give a timeline, but there are so many permutations and combinations that I can understand the difficulty of gazing into a crystal ball.

I think Mr Bostrom does an excellent job of keeping the topic within believable boundaries; the book could so easily have become another title wound up in conspiracy theory. Who knows, it may yet.

The world is at a turning point; that is easy to see when you read the book. You read in the papers now about the automation of weapons systems and the prospect of a system itself deciding what to target and whom to kill. Who's in charge then? (Whatever happened to Asimov's Laws?)

A good book that will certainly open your eyes to a potentially dark future....
VINE VOICE on 19 October 2015
This is a profoundly interesting book looking at the risks of the development of a super-intelligence and how this might be achieved as well as the overall effect it might have on us as a species. I have always had a great interest in this as a subject and so it made for fascinating reading.

However, this book is heavily based in the science of how this might be achieved; it is not a speculative piece of work but instead features a considerable amount of theory and information, a good deal of which was well over my head. It is not something you can idly read on the train or on your lunch break, but only at times when your attention is at its peak, so you can get your head around the points it is making. Tremendously interesting but very heavy going.
11 of 14 people found the following review helpful
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying 'philosophy is dead – all we need now is physics' or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

It’s worth pointing out immediately that this isn’t really a book for the general reader, and that's why I only gave it three stars. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.

What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and that's still the case.) Even so, it seems entirely feasible that we will have a more than human AI – a superintelligent AI – by the end of the century. But the 'how' aspect is only a minor part of this book.

The real subject here is how we would deal with such a 'cleverer than us' AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don't think through the implications and how we would deal with it, we could well be stuffed as a species.

I think it's a shame that Bostrom doesn't make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they'd go wrong), but that's about it. Yet science fiction has put far more thought into these issues than is being allowed for here – and, dare I say it, with a lot more readability than you typically get in a textbook – and it would have been worthy of a chapter in its own right.

I also think a couple of the fundamentals aren't covered well enough, but are pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I'm not sure there is enough thought given to the basic ways you could pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.

The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans, so once we build one it will rapidly improve on itself, producing an 'intelligence explosion'. The trouble with this argument, I suspect, is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn't mean it can do this specific task well – this is an assumption.

However this doesn’t set aside what a magnificent conception the book is. I don’t think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.
on 24 August 2014
Many people will have considered the idea of superintelligence, particularly in the light of robotics and developments in artificial intelligence. Fewer will have faced the possibility that superintelligence is now on the horizon, and the implications of this. The book outlines the knowledge base (where we are now) against which to take such considerations forward and leads the reader along thought pathways (where we might be) arising from this base. For those prepared to engage with the concept and reality of superintelligence, rather than dismiss it out of hand or put it out of mind, this is a book that should be read and its contents debated.
on 24 November 2014
Excellent book, beautifully laid out and edited, with ideas and theories concerned with the information age. This writer is famous for these sorts of essays and doesn't need me to praise him. I was delighted to consult his ideas for a third-level essay I was contributing to, along with my son, now in the later stages of his university degree.
Customers who viewed this item also viewed

Lord of All Things
Lord of All Things by Andreas Eschbach
