
Customer Reviews

4.0 out of 5 stars (43 ratings)



on 11 July 2014
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies.

I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what "a good outcome" actually means.

It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:

“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”

This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin.

He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals. What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast.

He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).

The book’s middle chapter and fulcrum is titled “Is the default outcome doom?” Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective. Forever.

Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.

In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of the book he issues a powerful rallying cry:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”

Amen to that.
22 comments | 54 people found this helpful.
on 11 September 2014
This review is from: Superintelligence: Paths, Dangers, Strategies (Kindle Edition)
Nick Bostrom has packed this book with information about how AI (artificial intelligence) may someday progress to a point beyond even our wildest imaginings. He explains his theories from every angle imaginable, based on history and data, all noted with sources in the back of the book. It seems not to be a question of if, but of when AI will be able not only to learn on its own but to improve upon itself and build more advanced versions of AI. And where will that leave humanity? Perhaps in the dust. Along with all the different scenarios of how this could happen, Bostrom suggests possible ways to keep these machines under some sort of control. But with such superintelligence, almost beyond our imagining, it doesn't leave me with a real sense of confidence that humanity will survive.

Bostrom packs a lot of information into this book. Much of it is highly philosophical. I was not surprised to learn that he is a Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute. He has a background in physics, computational neuroscience, and mathematical logic. This book reads much like a text and reminded me of books in my college philosophy courses, where it would take me a long time to get through a single page, not because there were words that I didn't understand, but because some of the concepts were difficult to wrap my brain around. This book is amazing, don't get me wrong, but there were sections where it was a real chore to get through without having my mind wander because of the dry language. I am very interested in the subject matter and could hardly wait to read the book. But since it is marketed as a book for the general public rather than as a college textbook, I think the author could have done a better job of presenting the material in a less academic manner. I believe it is possible to convey complex ideas in more engaging language.

It's difficult for me to give this book a 4-star rating because it's packed with so much relevant and thought-provoking information. But I have to be honest and say that parts were a real slog to get through.
12 people found this helpful.
on 20 December 2014
Bostrom's work is always fascinating and this area does deserve attention. However, I found none of the routes to superintelligence at all convincing, and there appear to be major, obvious holes in the arguments. Whole brain scanning and replication looks like an intractable problem now that quantum effects in microtubules have been discovered in the brain. Other routes may result in super AI.

The main problem with the argument is that we don't need superintelligence to be threatened. A robot with the ability to kill does not have to be superintelligent, only well adapted to killing humans. This is the main problem with AI: its definition of 'I' sometimes misses the point. To be well adapted to your environment does not always require great intelligence.
17 people found this helpful.
TOP 500 REVIEWER on 7 September 2015
John H. Flavell was probably the first to use the term metacognition when suggesting that it "refers to one's knowledge concerning one's own cognitive processes or anything related to them, e.g., the learning-relevant properties of information or data. For example, I am engaging in metacognition if I notice that I am having more trouble learning A than B; if it strikes me that I should double check C before accepting it as fact." That was in 1976.

As I began to read Nick Bostrom's brilliant book, I was again reminded of Flavell's research. What does the term "superintelligence" mean? According to Bostrom, "We can tentatively define a superintelligence as [begin italics] any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest [end italics]." Bostrom focuses on three different forms of superintelligence and asserts that they are essentially equivalent: Speed superintelligence, a system that can do all that a human intellect can do, but much faster; Collective superintelligence, a system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system; and Quality superintelligence, a system that is at least as fast as a human mind and vastly qualitatively smarter.

He could well be right that the development of superintelligence, by human beings, could be "quite possibly the most important and most daunting challenge humanity has ever faced. And - whether we succeed or fail - it is probably the last challenge we will ever face." To his credit, he duly acknowledges the possibility that many of the points made in the book could be wrong. "It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions."

These are among the several dozen passages of greatest interest and value to me, listed here to suggest the scope of Bostrom's coverage in Chapters 1-7:

o Seasons of hope and despair (Pages 5-11)
o Opinions about the future of machine intelligence (18-21)
o Artificial intelligence (23-30)
o Whole brain emulation (30-36)
o Biological cognition (36-44)
o Brain-computer interfaces (44-48)
o Forms of Superintelligence (52-57)
o Recalcitrance (66-73)
o Will the forerunner get a decisive strategic advantage? (79-82)
o From decisive strategic advantage to singleton (87-90)
o Functionalities and superpowers (92-95)
o An AI takeover scenario (95-99)
o The relation between intelligence and motivation (105-108)
o Instrumental convergence (109-114)

I commend Bostrom on his skillful use of reader-friendly devices such as Figures (14), Tables (13), Boxes (13), summaries and synopses, and extensively annotated notes (Pages 261-304). These devices will facilitate, indeed expedite, frequent review of key material later.

As of today, no one knows with certainty to what extent (if any) superintelligence will eventually be able to do everything that human intellect can do...and do it better and faster. Humans design systems and, as Bostrom suggests, humans are beginning to design systems that can also design systems. My own crystal ball imploded long ago, so I have no predictions to offer. I do have a few articles of faith that I presume to share now. First, I believe that instruments of artificial intelligence (AI) will never replace human beings but, over time, they will become increasingly valuable collaborators insofar as the whats and hows are concerned. Second, I believe that human beings will always be much better qualified to rank priorities and determine the whys. Finally, and of greatest importance to me, I believe that only human beings possess a soul that can be nourished by "a compassionate and jubilant use of humanity's cosmic endowment."
2 people found this helpful.
on 31 January 2015
It's true the style of this book makes it hard going at times, but fascinating insights make it worth the effort and flashes of dry humour enliven the lengthy paragraphs.

Bostrom says we aren't ready to deal with the dangers of superintelligent AI yet, but when it does arrive - most likely several decades from now - it's vital that we have worked out well in advance how to control it (if that's even possible!). We have to make sure to program it in such a way that it will benefit rather than destroy humanity, and the book walks us through various possible routes to avoid disaster. We are still a long way from building superintelligent machines, though, and there are incredibly hard questions to be answered first, such as: how do you put ideal motivation and values into words, let alone computer code? Isaac Asimov's three laws of robotics would be hopelessly inadequate.

This book informs an important and necessary debate about the future of AI. It provides a deep insight into the dangers and how we might try to avoid them (it's also a great book to read before watching Ex Machina, which I think gets it just right!)
2 people found this helpful.
on 21 January 2016
I started off by reading Surviving AI: The promise and peril of artificial intelligence by Calum Chace. This is a great introduction to the topic, but a bit light on detail. I noticed that Bostrom was referenced frequently, so I picked this book up hoping for a more in-depth examination of the topic, and was not disappointed! There are only a few sections where the maths flew over my head, but otherwise it's a very accessible book on a topic I didn't realise was so complex. Very interesting to note the complex philosophical questions thrown up by the potential for super-intelligence, and also that the odds are weighted in favour of a negative outcome for humankind if we develop a super-intelligence without proper consideration to its goals and safety mechanisms. Overall, a great book. Highly recommended!
2 people found this helpful.
VINE VOICE on 19 October 2015
This is a profoundly interesting book looking at the risks of the development of a super-intelligence and how this might be achieved as well as the overall effect it might have on us as a species. I have always had a great interest in this as a subject and so it made for fascinating reading.

However, this book is heavily grounded in the science of how this might be achieved; it is not a speculative piece of work but instead features a considerable amount of theory and information, a good deal of which was well over my head. It is not something you can idly read on the train or on your lunch break, but only at times when your attention is at its peak, so you can get your head around the points it is making. Tremendously interesting but very heavy going.
2 people found this helpful.
on 9 December 2015
As someone who holds a PhD in AI, I was super-excited to get Nick Bostrom's book "Superintelligence". Finally, mainstream discussions on a topic I care about from a highly ranked academic institution, with a cool looking owl to boot. Yea, owls! Fantastic, right? Wrong. After the first few chapters I had to force myself to finish it; it's miserable.

This book is a 260-page tribute to Eliezer Yudkowsky that shows no appreciation at all for concepts from the field of constraints, nor does it take into account common sense or any practical basis in AI. At best, this is a grammatically well-written sensationalist book designed to inspire irrational fear of a fictional form of AI, but more likely it is but one of many examples of the distilled essence of naivety of people writing on a topic they know nothing about.

At one point Nick quotes an idea postulated by Eliezer as a practical/credible scenario in the rise of malevolent AI. Eliezer suggests an advanced AI could assemble a biomimetic computing device using a naive human and a stereo speaker to force certain chemical reactions to create a 'nanosystem' threat inside a glass beaker (p. 98). Oh, but first it needs to 'crack the protein folding problem'. Never mind that the problem has been shown to be at least NP-hard, and possibly NP-complete - a clear demonstration of a lack of understanding of the protein folding problem in the first place - but the notion of basic physics assembling nano-particles using a liquid substrate through a non-uniformly shaped vessel made of glass seems "far-fetched", to put it mildly. I've read more plausible science fiction.

At one point Nick states that by adding more money, growth in computational capabilities increases linearly. I guess he's never heard of order of operations -- sometimes referred to as associativity. You see, we have things like division and subtraction that take precedence in computer science. (Most people are introduced to these concepts before 5th year maths, folks.) This means we cannot just pass along as many computations as we'd like evenly across a giant grid of processors, as Nick seems to assume.

This book is riddled with purple prose and denotatively 'weird' use of terms -- such as "recalcitrance". Ever been next to that person at the party who uses unnecessarily large words when a simple explanation will do? You'll get a lot of that sensation here. You'll also see the occasional spelling error, which is nice -- like at the bottom of page 56, amongst others.

In addition, the citations are dubious at best - e.g. 'World Robotics 2011'. Was there some big consensus at a conference regarding all of one topic in AI? That'd be a first in history, I'm pretty sure, for any conference on any topic; I guess I missed one hell of a conference.

The head of one of Oxford's philosophy departments might be an intelligent person but he's absolutely unqualified to speak on any topic in practical AI - by his own demonstration. I can honestly say that purchasing this book was literally the worst money I've ever spent - and I've bought an Oasis album.

There is one redeeming quality of this book: the survey of scientists in AI-related fields who believe SGI might become a reality, describing when, at a 50% threshold, that might occur (p. 19), and the general description of the 'types' of SGI and the mediums in which those have been thought to occur (Chapter 2) -- e.g. whole brain emulation versus evolutionary models, etc. -- but these come with serious errors in assumptions. Basically, if you had a PhD student and this was their thesis, it'd be s*** except for the second chapter of their work.

TL;DR: If you want to be told that AIs will take over space and destroy humanity using a stereo speaker then this is the book for you. If you like reason and don't want to waste £20, then may I suggest something of equivalent intelligence such as Where's Wally, or maybe a nice collection of toilet roll? At least the latter would be fit for a higher use. Oxford should be ashamed to have given this guy his own department, but proud he's free of any teaching responsibilities. Who knows how much damage he'd do to graduates.
44 comments | 28 people found this helpful.
on 11 February 2016
Quite the most provocative and, in its way, inspiring book I have read for many years. Bostrom makes few concessions to the reader, but the challenge he sets is immensely worthwhile. He makes a persuasive case that mankind is about to face its greatest challenge, and that the outcome of that challenge is completely binary. He also has a wicked sense of humour, although one has to dig quite deep into the footnotes to find it!
Even if for some reason one does not buy his thesis that Superintelligence is coming, the rigour of his thought as he teases out the possible consequences of various paths and strategies on offer is worth experiencing in its own right. This book is Practical Philosophy at its finest.
One person found this helpful.
on 24 August 2014
Many people will have considered the idea of superintelligence, particularly in the light of robotics and developments in artificial intelligence. Fewer will have faced the possibility that superintelligence is now on the horizon, and the implications of this. The book outlines the knowledge base (where we are now) against which to take such considerations forward and leads the reader along thought pathways (where we might be) arising from this base. For those prepared to engage with the concept and reality of superintelligence, rather than dismiss it out of hand or put it out of mind, this is a book that should be read and its contents debated.
One person found this helpful.
