  • Price: £12.91
  • RRP: £18.99
  • You Save: £6.08 (32%)
FREE Delivery in the UK.
In stock.
Dispatched from and sold by Amazon.
Gift-wrap available.

Superintelligence: Paths, Dangers, Strategies Hardcover – 3 Jul 2014


Formats and editions:
  • Kindle Edition
  • Hardcover: £12.91 (New from £9.54, Used from £10.33)

Frequently Bought Together

Superintelligence: Paths, Dangers, Strategies + Our Mathematical Universe: My Quest for the Ultimate Nature of Reality
Price For Both: £29.91

Product details

  • Hardcover: 352 pages
  • Publisher: OUP Oxford (3 July 2014)
  • Language: English
  • ISBN-10: 0199678111
  • ISBN-13: 978-0199678112
  • Product Dimensions: 23.6 x 1.5 x 16.3 cm
  • Average Customer Review: 4.4 out of 5 stars (14 customer reviews)
  • Amazon Bestsellers Rank: 1,666 in Books

More About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). Earlier this year he was included on Prospect magazine's World Thinkers list, as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 22 languages, and there have been more than 100 translations and reprints of his works.

For more, see www.nickbostrom.com

Product Description

Review

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era (Stuart Russell, Professor of Computer Science, University of California, Berkeley)

Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book (Martin Rees, Past President, Royal Society)

This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? (Max Tegmark, Professor of Physics, MIT)

Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever (Olle Haggstrom, Professor of Mathematical Statistics)

Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking (The Economist)

There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake (Financial Times)

Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes (Elon Musk, Founder of SpaceX and Tesla)

A damn hard read (Sunday Telegraph)

About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Customer Reviews

4.4 out of 5 stars

Most Helpful Customer Reviews

27 of 27 people found the following review helpful By Calum on 11 July 2014
Format: Hardcover Verified Purchase
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted the 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: Paths, Dangers, Strategies.

I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what "a good outcome" actually means.

It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:

“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”

This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100.
4 of 4 people found the following review helpful By PT Cruiser on 11 Sep 2014
Format: Kindle Edition
This review is from: Superintelligence: Paths, Dangers, Strategies (Kindle Edition)
Nick Bostrom has packed this book with information about how AI (artificial intelligence) may some day progress to a point that is beyond even our wildest imaginings. He explains his theories based on history and data, all noted with sources in the back of the book, from every angle imaginable. It seems to be not a question of if, but of when, AI will be able not only to learn on its own but to improve upon itself and build more advanced versions of AI. And where will that leave humanity? Perhaps in the dust. Along with all the different scenarios of how this could happen, Bostrom suggests possible ways to keep these machines under some sort of control. But with such superintelligence, almost beyond our imagining, it doesn't leave me with a real sense of confidence that humanity will survive.

Bostrom packs a lot of information into this book. Much of it is highly philosophical. I was not surprised to learn that he is a Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute. He has a background in physics, computational neuroscience, and mathematical logic. This book reads much like a text and reminded me of books in my college philosophy courses where it would take me a long time to get through a single page, not because there were words that I didn't understand, but because some of the concepts were difficult to wrap my brain around. This book is amazing, don't get me wrong, but there were sections where it was a real chore to get through without having my mind wander because of the dry language. I am very interested in the subject matter and could hardly wait to read the book.
Format: Hardcover
It is a really interesting book and in some ways frightening. There are numerous possibilities for "super intelligence", from genetic engineering through to the development of some kind of artificial entity. The book is readable if somewhat heavy going; the reason for that is that it makes you think, or at least it certainly makes me think.

Mr Bostrom does his best to give a timeline, but there are so many permutations and combinations that I can understand the difficulty of gazing into a crystal ball.

I think Mr Bostrom does an excellent job of keeping the topic within believable boundaries; the book could so easily have become another title wound up in conspiracy theory. Who knows? It may yet.

The world is at a turning point; that is easy to see when you read the book. You read in the papers now of the automation of weapons systems and the prospect of a system itself deciding what to target and who to kill. Who's in charge then? (Whatever happened to Asimov's Laws?)

A good book that will certainly open your eyes to a potentially dark future....
Format: Hardcover Verified Purchase
Bostrom's work is always fascinating and this area does deserve attention. However, I found none of the routes to superintelligence at all convincing, and there appear to be major, obvious holes in the arguments. Whole brain scanning and replication looks like an intractable problem now that quantum effects in nanotubules have been discovered in the brain. Other routes may result in super AI.

The main problem with the argument is that we don't need superintelligence to be threatened. A robot with the ability to kill does not have to be superintelligent, only well adapted to killing humans. This is the main problem with AI: its definition of 'I' sometimes misses the point. To be well adapted to your environment does not always require great intelligence.
