Global Catastrophic Risks Paperback – Illustrated, 1 Aug. 2011
In Global Catastrophic Risks, 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues - policy responses and methods for predicting and managing catastrophes.
This is invaluable reading for anyone interested in the big issues of our time; for students focusing on science, society, technology, and public policy; and for academics, policy-makers, and professionals working in these acutely important fields.
- ISBN-10 : 0199606501
- ISBN-13 : 978-0199606504
- Edition : Illustrated
- Publisher : Oxford University Press, USA
- Publication date : 1 Aug. 2011
- Language : English
- Dimensions : 3.3 x 15.49 x 23.37 cm
- Print length : 560 pages
Product description
Review
The book works well, providing a mine of peer-reviewed information on the great risks that threaten our own and future generations. ― Nature
We should welcome this fascinating and provocative book. ― Martin J Rees (from foreword)
About the Author
Milan M. Cirkovic, PhD, is a senior research associate at the Astronomical Observatory of Belgrade (Serbia) and a professor of cosmology in the Department of Physics at the University of Novi Sad (Serbia). He received his PhD in Physics and his MSc in Earth and Space Sciences from the State University of New York at Stony Brook (USA), and his BSc in Theoretical Physics from the University of Belgrade.
Product details
- Publisher : Oxford University Press, USA; Illustrated edition (1 Aug. 2011)
- Language : English
- Paperback : 560 pages
- ISBN-10 : 0199606501
- ISBN-13 : 978-0199606504
- Dimensions : 3.3 x 15.49 x 23.37 cm
- Best Sellers Rank: 581,996 in Books
- 56 in Nanotechnology
- 111 in Chemical & Biochemical Engineering (Books)
- 184 in Mathematical Game Theory
About the authors

NICK BOSTROM is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world's most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist's curse), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list as the youngest person in its top 15. He has an academic background in theoretical physics, AI, and computational neuroscience, as well as philosophy.

Customer reviews
Top reviews
Top reviews from United Kingdom
As another reviewer has commented, "obscure and unlikely" risks receive as much, if not more, attention than the better-known risks, but the book makes it very clear from the outset that it is not supposed to be a manual for saving the world. Instead, it is simply trying to inform readers about global catastrophic risks as a wider issue (the book includes several chapters on sociological aspects such as cognitive biases) rather than specifically trying to help people prepare for them. I think it does this very well.
As I was reading the child-like introduction by Martin J. Rees, who lashes out at those he sees as a threat much as a kid in the playground starts name-calling when he is not getting his way, I knew it was going to be an uphill struggle. To quote one sentence: "And there are extreme eco-freaks who believe that the world would be better off if it were rid of humans." Convince me with well-constructed arguments that we are beneficial to life on earth and I will try to see just how "better off" the world is with us lot subjugating nature to our own selfish ends. But it will take a lot to convince me!
The book doesn't claim to cover every conceivable threat to human existence on the planet, but the threats that are included are not necessarily the most realistic. Whole chapters are devoted to what can only be termed science fiction: that we are part of some simulation and the "Director" of said simulation could tire of watching us and switch the simulation off. Huh? Another chapter is devoted to artificial intelligence that will somehow begin a life of its own. Come on, people, there are more realistic threats that we should be concerned with: disease (a mere 17 pages compared to 35 on artificial intelligence), weaponry (only nuclear and biological weapons are covered), and natural events that come as a result of over-population, which is the single biggest threat to human life on earth, since no-one is addressing the issue and we expect the earth to support an infinite number of humans - truly absurd!
The chapter on cognitive bias, which is about how difficult it is to be truly objective in trying to assess risks, was probably the most interesting for me. This makes sense, as it takes selfless introspection to truly transcend the bias of our human nature, i.e. the instincts that drive us. The assumption that human beings are the most intelligent life-forms ever to inhabit the planet, that the planet would be worse off if we were not here - these are emotionally driven statements that come from the deepest instinct of all, that of survival. They are not intellectual arguments and have no place in a scientific body of work.
One thing that does strike me is how little we know of the past. There is a lot of guesswork, a lot of assumptions, a lot of "probables". This is neither factual nor scientific, so why do so-called scientists put it forward as "fact"?
All in all, I was left quite numb after reading this book - not because of any threats to the existence of "human" life on earth, but because of the arrogance that shines through from all these "intellectuals". As far as they are concerned, human beings are the only things that matter, and everything else on the planet, in the universe, is there for the taking. It is this attitude towards life that will lead to our extinction, and for the sake of the rest of life on this wonderful and unique planet, the sooner the better! Even if we became extinct next week, we would leave behind a terrible legacy, one that we should be ashamed of (assuming that any of us have a sense of humility, that is).
Four stars for a not very exciting but industrious work.
Top reviews from other countries
Amongst the core chapters discussing particular risks, the three that are most "hard science" -- on supervolcanoes, asteroid or comet impact, and extra-solar-system risks -- are just great; one learns, for instance, that (contrary to much science fiction) comets are more of a risk than asteroids, and that the major risk in the last category is not nearby supernovas but cosmic rays created by gamma-ray bursts. These three chapters are perhaps the only contexts where it is reasonable to attempt to estimate actual probabilities of the catastrophes.
The balanced article on global warming is unlikely to please extremists, concluding that mainstream science predicts a linear increase in temperature that may be unpleasant but not catastrophic, while the various speculative non-linear possibilities leading to catastrophe have plausibilities impossible to assess. The article on pandemics is surprisingly upbeat ("are influenza pandemics likely? Possibly, except for the preposterous mortality rate that has been proposed"), as is the article on exotic physics ("Might our vacuum be only metastable? If so, we can envisage a terminal catastrophe, when the field configuration of empty space changes, and with it the effective laws of physics ..."). The articles on nuclear war, on nuclear terrorism, and on risks from biotechnology and from nanotechnology are perfectly sensible and well-argued. These articles are somewhat technical, so it is a curious relief to arrive at "totalitarian government", which discusses in an easy-to-read way why 20th century totalitarian governments did not last forever, and the circumstances under which a stable worldwide totalitarian government might emerge.
The article on AIs emphasizes that we wrongly imagine intelligent machines as like humans -- "how likely is it that AI will cross the vast gap from amoeba to village idiot, and then stop at the level of human genius?" -- and that we should attempt to envisage something quite different. But the subsequent discussion of Friendly or Unfriendly AIs rests on the assumptions that AIs may be created which have intelligence and motivation ("optimization targets", in the author's effort to avoid anthropomorphizing) to do things on their own initiative, and that their motivations will be comprehensible to humans. Well, I find it hard enough to imagine what "motivation/optimization targets" mean to an amoeba or a village idiot, let alone an AI.
The only article I found positively unsatisfactory was the one on social collapse. A catastrophe eliminating global food production for one year would likely cause "collapse of civilization" in the fighting over the roughly two months' food supply in storage, but elimination for just one month would not. A serious discussion of the sizes of the different catastrophes needed to reach this tipping point would be fascinating, but the article merely assumes power law distributions for the size of an unspecified disaster -- this is the sort of thing that brings mathematical modeling into disrepute.
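To see why that assumption does so little work, here is a minimal sketch of my own (Python; not from the book -- the Pareto form, the minimum size x_min, and the exponent alpha are all assumptions the article leaves unpinned). The probability of a disaster large enough to cross the tipping point is governed almost entirely by the unspecified exponent:

    # Hypothetical sketch: tail probability under a Pareto (power-law)
    # disaster-size model, where P(X > t) = (x_min / t) ** alpha for t >= x_min.
    def tail_probability(threshold, x_min=1.0, alpha=2.0):
        # Probability that a disaster exceeds 'threshold' under the assumed model.
        return (x_min / threshold) ** alpha

    # The same hypothetical tipping point (100x the smallest modelled disaster)
    # gets probabilities spanning two orders of magnitude as alpha varies:
    for alpha in (1.5, 2.0, 2.5):
        print(alpha, tail_probability(100.0, alpha=alpha))
    # 1.5 -> 1e-03, 2.0 -> 1e-04, 2.5 -> 1e-05

Without data pinning down alpha (or even what "disaster size" measures), the model's implied risk of collapse can be tuned a hundredfold, which is exactly the complaint above.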
Overall, a valuable and eclectic selection of thought-provoking articles.
You can tell that the book came out of an event by scientists for scientists - the ability and willingness of anglophone authors to make themselves understood by the interested layperson, which I am usually so quick to praise, varies considerably here, and given the variety of disciplines represented that is a bit of a pity. A background in the natural sciences, and a certain willingness to engage with numbers, are in any case useful for most of the topics.
Even though a few fringe topics have slipped in - millenarian cults have little to do with real catastrophes, and I found the article on the pros and cons of do-everything nanofactories rather playfully hypothetical - others have all the more substance, such as those on asteroids, comets, and supervolcanoes. Also very impressive is the chapter on artificial intelligence and the thinking one had better have done before it slips beyond the influence of its makers. With the chapter on climate change, I was not sure whether it questioned too much and warned too little, thereby handing too much ammunition to the interest-driven deniers.
That may also have to do with the fact that the book is now almost ten years old. You notice this not only because Osama bin Laden is still alive and ISIS does not yet exist, but also because a threat as imminent as multi-resistant germs is touched on in only a single paragraph. I was also very surprised that a solar storm like that of 1859, which would cripple our communication networks worldwide and for a long time, is not rated as a risk (in 2012 it very nearly caught us again). And finally, the probability of a nuclear catastrophe would presumably be assessed somewhat differently today, given that an incompetent narcissist and an inscrutable dictator have their fingers on their respective red buttons.
These are, admittedly, all just personal perspectives that may themselves get in the way of a clear view - the book warns explicitly against cognitive bias more than once.
I had read a review of GCR in the scientific journal "Nature" in which the reviewer complained that the authors had given the global warming issue short shrift. I considered this a plus.
If, like me, you get very annoyed by "typos," be forewarned. There are enough typos in GCR to start a collection. At first I was a bit annoyed by them, but some were quite amusing... almost as if they were done on purpose.
Most of the typos were straight typing errors, or errors of fact. For example, on page 292 the author says that the 1918 flu pandemic killed "only 23%" of those infected. Only 23%? That seems a rather high percentage to be preceded by the qualifier "only". Of course, although 50 million people died in the pandemic, this represented "only" 2% to 3% of those infected... not 23%. On p. 295 we read "the rats and their s in ships" and it might take us a moment to determine that it should have read, "the rats and their fleas in ships."
But many of the typos were either fun, or a bit more tricky to figure out: on p. 254 we find "canal so" which you can probably predict should have been "can also." Much trickier, on p. 255 we find, "A large meteoric impact was invoked (...) in order to explain their idium anomaly." Their idium anomaly?? Nah. Better would have been..."the iridium anomaly!" (That's one of my favorites.) Elsewhere, we find twice on the same page "an arrow" instead of "a narrow"... and so it goes..."mortality greater than $1 million." on p. 168 (why the $ sign?) etc. etc.
But the overall impact of the book is tremendous. We learn all sorts of arcane and troubling data, e.g. from p. 301: "A totally unforeseen complication of the successful restoration of immunologic function by the treatment of AIDS with antiviral drugs has been the activation of dormant leprosy..." I can hear the phone call now... "Darling, I have some wonderful news, and some terrible news... hold on a second dearest, my nose just fell off..."
So even if you're usually turned off by typos, don't let that stop you from buying this book. I expected more from the Oxford University Press, but I guess they've sacked the proofreader and they're using Spell-Check these days. But then, how did "their idium anomaly" get past Spell-Check? I guess Spell-Check at Oxford includes Latin.
Probably the most dangerous future risk is the advent of real artificial intelligence within our lifetime or the very near future. Eliezer Yudkowsky is the leading figurehead and spokesman on this risk, and he wrote the chapter covering it in this book. If our fears become a reality, then it won't matter much whatever else we get right. For many of the other risks to worry about, we already have a wealth of information on their occurrences, how they work, how likely they are to affect us, and how they will affect us when they come. The risks concerning the arrival of AI, however, are far more dangerous in that this isn't an experiment we get to run so that reality can beat us over the head with the correct answer. If we are to achieve true FAI (Friendly Artificial Intelligence, as Yudkowsky calls it), then a massive amount of dedication, money, and effort is needed for the research required to avoid a real disaster. If our aims are achieved and realized, however, many of the other risks and concerns we have can be handed off to an intelligence much greater than ourselves, with a far higher likelihood of being overcome.
We are passing through a stage where we are beginning to create problems that are beyond our current capacity to solve. This book is probably the best general, and somewhat technical, primer for becoming acquainted with the serious problems we currently face and will inevitably arrive at in the future. If you are truly keen on getting involved with the kinds of problems we will have to confront, this book is indispensable.