Sometimes, the benefit of the individual seems to conflict with the benefit of the community as a whole, even though the community includes that very individual. One such example has been formalized as the Prisoner's Dilemma: two suspects, A and B, are arrested and kept separated so that they cannot communicate. If both cooperate (with each other, by staying silent), they will each be sentenced to one year. However, if suspect A cooperates but suspect B defects, A will be sentenced to five years and B will be released. Vice versa, if B cooperates and A defects, A will be released and B sentenced to five years. Finally, if both defect, they will each be sentenced to three years.
It is clear that the best joint outcome is mutual cooperation. On the other hand, each individual is also tempted to maximize his own benefit, and whatever the other does, he does better by defecting; yet mutual defection brings the worst possible collective outcome (six years in total). So the one-shot Prisoner's Dilemma rarely leads to cooperation. Now, what if the same two chaps are arrested again later? Will they cooperate when given another chance? And what if they know they will face the same situation every five years? Professor Axelrod tested the iterated Prisoner's Dilemma with computer programs, and investigated under which circumstances cooperation can emerge.
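The setup described above is easy to simulate. Here is a minimal Python sketch of the iterated game using the sentences from the review (fewer years is better); the strategy names and function signatures are illustrative, not Axelrod's actual tournament code:

```python
# Sentence lengths, in years, keyed by (my_move, their_move):
# both cooperate -> 1 each; lone cooperator -> 5; lone defector -> 0;
# both defect -> 3 each.
YEARS = {
    ("C", "C"): 1,
    ("C", "D"): 5,
    ("D", "C"): 0,
    ("D", "D"): 3,
}

def always_defect(opponent_history):
    # Defect unconditionally.
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=10):
    """Play the iterated game; return total years served by each side."""
    moves_a, moves_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's past moves.
        a = strat_a(moves_b)
        b = strat_b(moves_a)
        total_a += YEARS[(a, b)]
        total_b += YEARS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # (10, 10): stable cooperation
print(play(tit_for_tat, always_defect))  # (32, 27): exploited once, then retaliation
```

Over ten rounds, two tit-for-tat players settle into mutual cooperation (one year per round each), while against a constant defector, tit-for-tat is exploited only in the first round and both sides then grind through mutual defection.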
The book is nicely sprinkled with fragments of game theory and examples from world politics. All in all, as Richard Dawkins comments in the foreword to the Penguin edition, it breathes with optimism and is a delight to read. Still, it has one problem, which it actually shares with Dawkins: the book reaches its climax right at the beginning. It opens with a strong and very convincing idea, but later fails to keep up the same momentum. The idea is splendid, but the structure of the book could be improved.