The title covers so many possibilities, from the level of processor architecture to grid computing, that no book could possibly cover the whole range. The selections found here surprised me, though: just a few topics, but remarkable depth in at least one of them.
The authors begin with a classic model of computing, the parallel RAM. This conceptual family includes shared-memory systems of many kinds, possibly with restrictions on concurrent reading and/or writing. Here, the emphasis is on data structures that optimize concurrent computation on linked structures. The next chapter covers 'sorting networks.' These consist of two-input, two-output devices that compute the min and max of the input, generally configured with fixed topology. Although classics of the computing canon (see Knuth), they still represent a potentially useful structure, e.g. when creating a fixed-latency median filter in hardware. These chapters seem to tie only loosely to the main substance of the book, which begins in chapter 3.
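The sorting networks described above can be sketched in a few lines. This is a minimal illustration (not from the book): each comparator is a fixed two-input, two-output min/max element, and the wiring below is the standard five-comparator network for four inputs. Because the topology is fixed, every input takes the same path, which is exactly the constant-latency property that makes such networks attractive for a hardware median filter.

```python
# Fixed comparator topology: each pair (i, j) is a two-input,
# two-output device placing min on wire i and max on wire j.
# This is the standard five-comparator network for four inputs.
COMPARATORS = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def sorting_network_4(values):
    """Pass four values through the fixed network; returns them sorted."""
    v = list(values)
    for i, j in COMPARATORS:
        v[i], v[j] = min(v[i], v[j]), max(v[i], v[j])
    return v
```

Note that the comparator sequence never depends on the data, so in hardware the comparators on independent wire pairs (such as the first two) can operate in parallel stages.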
Chapter 3 begins with a few classic communication topologies, including rings, stars, meshes, and hypercubes. The authors present performance models of graduated sophistication, as well as some low-level operations on specific architectures (e.g., broadcast on a simple ring). Chapters 4 and 5 get to the kinds of algorithms that highly parallel processors are typically called on to handle: matrix operations, stencil algorithms, and solving systems of linear equations. These chapters stand out for their close coupling between problem decomposition and the communication topology involved. Chapters 6-8 discuss load balancing and task scheduling, both static and dynamic. Algorithms like LU decomposition on dense systems have reasonably predictable performance and communication characteristics over the course of one computation, so they suit static scheduling, while less regular workloads motivate the dynamic techniques; both kinds receive attention.
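The broadcast-on-a-ring operation mentioned above is simple enough to sketch; this is my own illustrative simulation, not the authors' presentation. The source hands the message to its neighbor, each node relays it onward, and after p − 1 hops every node has a copy, which is why simple cost models charge a ring broadcast p − 1 communication steps.

```python
# Simulated broadcast on a simple unidirectional ring of p nodes:
# the message travels hop by hop, so node k's copy arrives after
# ((k - source) mod p) steps, and the whole broadcast takes p - 1 steps.
def ring_broadcast(num_nodes, source):
    """Return {node: step at which the broadcast message arrives}."""
    arrival = {source: 0}
    node = source
    for step in range(1, num_nodes):
        node = (node + 1) % num_nodes  # forward to the next neighbor
        arrival[node] = step
    return arrival
```

Under the usual per-hop cost model of startup time plus per-word transfer time, the total broadcast time on a ring grows linearly in p, which is the kind of graduated performance modeling the chapter works through.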
Task scheduling receives particularly thorough treatment, especially the case where different processors run at predictably different rates. I found the material clearly laid out, but baffling in one respect: heterogeneous processors typically arise in irregular communication networks such as the Internet, i.e., networks quite unlike the ones discussed in chapters 3-5.
I can give this book only a conditional recommendation, depending on the reader's interests. Its discussion of topology-dependent communication and problem partitioning contains good material, the treatment of task scheduling is exceptional, and the problems seem well designed to reinforce each chapter's content. The book's idiosyncratic choice of subjects limits the audiences to whom it will be helpful, but if you're in the target audience for one of its strong points, you're likely to find it very rewarding.