Superintelligence: Paths, Dangers, Strategies [Kindle Edition]

Nick Bostrom
4.3 out of 5 stars (19 customer reviews)

Print List Price: £18.99
Kindle Price: £11.40 includes VAT* & free wireless delivery via Amazon Whispernet
You Save: £7.59 (40%)
* Unlike print books, digital books are subject to VAT.

Formats

  • Kindle Edition: £11.40
  • Hardcover: £12.91
  • Audio Download, Unabridged: £15.57 (or free with Audible.co.uk 30-day free trial)

Book Description

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.


Product Description

Review

Interesting from an economics and business perspective, but also more widely (City A.M.)

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era (Stuart Russell, Professor of Computer Science, University of California, Berkeley)

Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book (Martin Rees, Past President, Royal Society)

This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? (Max Tegmark, Professor of Physics, MIT)

Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole ... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever (Olle Haggstrom, Professor of Mathematical Statistics)

Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking (The Economist)

There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake (Financial Times)

Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes (Elon Musk, Founder of SpaceX and Tesla)

A damn hard read (Sunday Telegraph)

About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Product details

  • Format: Kindle Edition
  • File Size: 2565 KB
  • Print Length: 352 pages
  • Publisher: OUP Oxford; 1 edition (3 July 2014)
  • Sold by: Amazon Media EU S.à r.l.
  • Language: English
  • ASIN: B00LOOCGB2
  • Text-to-Speech: Enabled
  • X-Ray:
  • Word Wise: Not Enabled
  • Average Customer Review: 4.3 out of 5 stars (19 customer reviews)
  • Amazon Bestsellers Rank: #9,952 Paid in Kindle Store (See Top 100 Paid in Kindle Store)


More About the Author

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity Institute, a multidisciplinary research center which enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and the forthcoming book Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.

He is a recipient of the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). Earlier this year he was included on Prospect magazine's World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 22 languages. There have been more than 100 translations and reprints of his works.

For more, see www.nickbostrom.com

Customer Reviews

Most Helpful Customer Reviews
28 of 29 people found the following review helpful
5.0 out of 5 stars A seriously important book 11 July 2014
By Calum
Format: Hardcover | Verified Purchase
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies.

I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what "a good outcome" actually means.

It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:

“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”

This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100.
4 of 4 people found the following review helpful
4.0 out of 5 stars Machines with morals? 11 Sept. 2014
By PT Cruiser TOP 100 REVIEWER
Format: Kindle Edition
Nick Bostrom has packed this book with information about how AI (artificial intelligence) may some day progress to a point that is beyond even our wildest imaginings. He explains his theories based on history and data, all noted with sources in the back of the book, from every angle imaginable. It seems not to be a question of if, but of when AI will be able not only to learn on its own but to improve upon itself and build more advanced versions of AI. And where will that leave humanity? Perhaps in the dust. Along with all the different scenarios of how this could happen, Bostrom suggests possible ways to keep these machines under some sort of control. But with such superintelligence, almost beyond our imagining, it doesn't leave me with a real sense of confidence that humanity will survive.

Bostrom packs a lot of information into this book. Much of it is highly philosophical. I was not surprised to learn that he is a Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute. He has a background in physics, computational neuroscience, and mathematical logic. This book reads much like a text and reminded me of books in my college philosophy courses, where it would take me a long time to get through a single page, not because there were words that I didn't understand, but because some of the concepts were difficult to wrap my brain around. This book is amazing, don't get me wrong, but there were sections where it was a real chore to get through without having my mind wander because of the dry language. I am very interested in the subject matter and could hardly wait to read the book.
4 of 4 people found the following review helpful
Format: Hardcover | Verified Purchase
Bostrom's work is always fascinating and this area does deserve attention. However, I found none of the routes to superintelligence at all convincing, and there appear to be major obvious holes in the arguments. Whole brain scanning and replication looks like an intractable problem now that quantum effects in microtubules have been discovered in the brain. Other routes may result in super AI.

The main problem with the argument is that we don't need superintelligence to be threatened. A robot with the ability to kill does not have to be superintelligent, only well adapted to killing humans. This is the main problem with AI: its definition of 'I' sometimes misses the point. To be well adapted to your environment does not always require great intelligence.
5.0 out of 5 stars Hard questions 31 Jan. 2015
By Gaynor
Format: Hardcover
It's true the style of this book makes it hard going at times, but fascinating insights make it worth the effort and flashes of dry humour enliven the lengthy paragraphs.

Bostrom says we aren't ready to deal with the dangers of superintelligent AI yet, but when it does arrive - most likely several decades from now - it's vital that we have worked out well in advance how to control it (if that's even possible!). We have to make sure to program it in such a way that it will benefit rather than destroy humanity, and the book walks us through various possible routes to avoid disaster. We are a long way from building superintelligent machines yet, though, and there are incredibly hard questions to be answered first, such as: how do you put ideal motivation and values into words, let alone computer code? Isaac Asimov's three laws of robotics would be hopelessly inadequate.

This book informs an important and necessary debate about the future of AI. It provides a deep insight into the dangers and how we might try to avoid them (it's also a great book to read before watching Ex Machina, which I think gets it just right!)
Most Recent Customer Reviews
5.0 out of 5 stars was leading edge; look forward to his next one on this
Superb book, although I disagree with his premise that we'll be able to have any control at all over AI. His advice to avoid anthropomorphising AI is spot on.
Published 10 days ago by Hector
2.0 out of 5 stars Worth reading but a lot of it is rather silly
The first part of the book surveys some of the ways superintelligence might happen, of which AI is only one.
Published 15 days ago by Amazon Customer
4.0 out of 5 stars Great scholarship and lively entertainment rarely go hand-in-hand
I was going to write my own review, then discovered that better reviews had already been written by Calum and PT Cruiser. No need to say more.
Published 1 month ago by Jan V. H. Luthman
4.0 out of 5 stars Four Stars
If you can get past the difficult words it's great!
Published 1 month ago by Alice
4.0 out of 5 stars A good book that will certainly open your eyes to a potentially dark...
It is a really interesting book and in some ways frightening. There are numerous possibilities for "super intelligence", from genetic engineering through to the development...
Published 2 months ago by Robert A. Carter
4.0 out of 5 stars Excellent book, beautifully laid out and edited with ideas ...
Excellent book, beautifully laid out and edited with ideas and theories concerned with the information age...
Published 3 months ago by Mme Sosostris
5.0 out of 5 stars Very interesting
An excellent and extraordinary book on super cognition. Every angle is considered and enjoyably so. The author's mind is surely a candidate for whole brain emulation.
Published 5 months ago by A. W. O. Jenkins
3.0 out of 5 stars The book is not quite well structured. The content is interesting...
The book is not quite well structured. The content is interesting; however, the author builds many conclusions on a very speculative foundation. In many cases I could agree.
Published 5 months ago by Mirek
5.0 out of 5 stars Superintelligent!
Very smart book, based on facts and scientific papers. In some sense brave: although most expert forecasts are wrong, this one is at least inspiring.
Published 6 months ago by Wodecki
4.0 out of 5 stars What is Man's Future?
Many people will have considered the idea of superintelligence, particularly in the light of robotics and developments in artificial intelligence.
Published 6 months ago by Stoker