Superintelligence: Paths, Dangers, Strategies Hardcover – Illustrated, 3 July 2014
If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
- ISBN-10: 0199678111
- ISBN-13: 978-0199678112
- Edition: Illustrated
- Publisher: OUP Oxford
- Publication date: 3 July 2014
- Language: English
- Dimensions: 23.62 x 2.54 x 15.75 cm
- Print length: 352 pages
Product description
Review
I highly recommend this book ― Bill Gates
very deep ... every paragraph has like six ideas embedded within it. ― Nate Silver
Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era ― Stuart Russell, Professor of Computer Science, University of California, Berkeley
Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book ― Martin Rees, Past President, Royal Society
This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? ― Max Tegmark, Professor of Physics, MIT
Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole ... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever ― Olle Häggström, Professor of Mathematical Statistics
Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking ― The Economist
There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake ― Financial Times
His book Superintelligence: Paths, Dangers, Strategies became an improbable bestseller in 2014 ― Alex Massie, Times (Scotland)
A text so sober and cool, so fearless and thus all the more exciting, that what has until now mostly been played out in films suddenly seems highly plausible. (translated from German) ― Georg Diez, DER SPIEGEL
Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes ― Elon Musk, Founder of SpaceX and Tesla
A damn hard read ― Sunday Telegraph
I recommend Superintelligence by Nick Bostrom as an excellent book on this topic ― Jolyon Brown, Linux Format
Every intelligent person should read it. ― Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
An intriguing mix of analytic philosophy, computer science and cutting-edge science fiction, Nick Bostrom's Superintelligence is required reading for anyone seeking to make sense of the recent surge of interest in artificial intelligence (AI). ― Colin Garvey, Icon
Product details
- Best Sellers Rank: 216,526 in Books
- 287 in Higher Education of Engineering
- 366 in Higher Mathematical Education
- 763 in Popular Maths
About the author

NICK BOSTROM is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world’s most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity’s future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, the unilateralist’s curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. He has an academic background in theoretical physics, AI, and computational neuroscience as well as philosophy.
Customers say
Customers find the book very informative, with huge intellectual depth. They say it's an important book showcasing the work we collectively need to do before the fact. However, some readers find it challenging and hard going.
AI-generated from the text of customer reviews
Customers find the book very informative and say it has a huge intellectual depth. They appreciate the author's thorough work in studying and compiling all his references. Readers mention the book cuts through all the artificial intelligence nonsense and provides a much better analysis.
"...He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies...."
"This book is topical, even urgent...."
"I read this some years ago, but as i remember, it was a better analysis and better communicated than other books i've read on the topic...."
"...where the maths flew over my head, but otherwise it's a very accessible book on a topic I didn't realise was so complex...."
Customers find the cover design lovely.
"...care about from a highly ranked academic institution, with a cool looking owl to boot. Yea, owls! Fantastic, right? Wrong...."
"A fantastic read and also a lovely cover design."
"bit of a head melter, but i like the book. I love the cover illustration too"
Customers find the book challenging, hard to read, and dry. They say it's not an easy book to write and is written in turgid academic speak. Readers also mention that the book is difficult to navigate for people with a general interest.
"...This has not been an easy book to write. I have tried to make it an easy book to read, but I don't think I have quite succeeded. ….."
"...understanding our existential risks with AI... despite the unusually floral vocabulary :)"
"...I'm usually a speed reader, but not with this one.....a lot of technical words and unfamiliar language...however, I still enjoyed and it's made me..."
"...It demands quite an effort from the reader but the more you are willing to make the more the reward...."
Top reviews from United Kingdom
I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what "a good outcome" actually means.
It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:
“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”
This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all.
Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin.
He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals. What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast.
He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).
The book’s middle chapter and fulcrum is titled “Is the default outcome doom?” Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective. Forever.
Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.
In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of book he issues a powerful rallying cry:
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”
Amen to that.
The takeaway I strongly remember was: even if you ask the owl on the cover (i.e. something more powerful than us) just to make buttons, it still might conclude that it should take over the world simply to ensure it can maximise its chances of always securely supplying those buttons... so don't let it out of its box. Something like that.
But LLMs like ChatGPT are nowhere near that, as I understand it. Aren't they just probabilistically putting the next word in front of the last? Probably time to read it again; although I almost never reread books, I think this one might be worth it.
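For what it's worth, the reviewer's one-line description is roughly how autoregressive language models do work: condition on the text so far, sample one next token from a probability distribution, append it, and repeat. A minimal sketch of that loop, using a toy hand-written vocabulary and made-up probabilities (a real model computes these distributions from learned parameters, but the generation loop is the same):

```python
import random

# Toy next-token distributions keyed on the previous word only.
# The probabilities here are invented for illustration; a real LLM
# conditions on the whole context, not just the last token.
NEXT_TOKEN_PROBS = {
    "the": {"owl": 0.5, "button": 0.3, "box": 0.2},
    "owl": {"makes": 0.6, "guards": 0.4},
    "makes": {"buttons": 0.7, "plans": 0.3},
    "guards": {"the": 1.0},
    "buttons": {".": 1.0},
    "plans": {".": 1.0},
    "button": {".": 1.0},
    "box": {".": 1.0},
}

def generate(prompt: str, max_tokens: int = 10, seed: int = 0) -> str:
    """Repeatedly sample the next token until '.' or an unknown word."""
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:          # no distribution for this token: stop
            break
        words, probs = zip(*dist.items())
        tokens.append(random.choices(words, weights=probs)[0])
        if tokens[-1] == ".":     # sampled an end-of-sentence token
            break
    return " ".join(tokens)

print(generate("the"))
```

The point of the sketch is only that "predict the next word" is the entire mechanism; everything impressive about modern models lives inside how the distribution is computed, not in the loop itself.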
Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we're doomed, probably.
This is a detailed and interesting book, though 35% of it is footnotes, bibliography and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable in unpacking the most utility from this book than knowledge of computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from a Professor in the Faculty of Philosophy at Oxford University.
A good understanding of economic theory would also help any reader.
Bostrom lays out in detail the two main paths to machine superintelligence, whole brain emulation and seed AI, and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.
At times the book is repetitive, making the same point in slightly different scenarios. It is almost as if he were cut-and-shutting set phrases and terminology into slightly different ideas.
Overall it is an interesting and thought-provoking book at whatever level the reader engages with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories.
“Everything is vague to a degree you do not realise till you have tried to make it precise”, the book quotes.