
Customer Review

on 11 February 2013
This second edition of PMPP extends the table of contents of the first one, almost doubling the number of pages (the 2nd ed. runs to ~500 pages; I have the paper version).

The book can be separated roughly into 4 parts: the first, and most important, deals with parallel programming using Nvidia's CUDA technology; this takes up about the first 10 chapters and Ch. 20. The second slice shows a couple of important examples (MRI image reconstruction and molecular simulation and visualization, chapters 11 and 12). The 3rd important block of chapters (chapters 14 up to 19) deals with other parallel programming technologies and CUDA expansions: OpenCL, OpenACC, CUDA Fortran, Thrust, C++ AMP, MPI. Finally, spread all over the book, there are several "outlier", but nevertheless important, chapters: Ch. 7 discusses floating-point issues and their impact on the accuracy of calculations; Ch. 13, "PP and Computational Thinking", discusses broadly how to think when converting sequential algorithms to parallel ones; and Ch. 21 discusses the future of PP (through CUDA goggles :-).

I've read about half of the book (I attended Coursera's MOOC "Heterogeneous Parallel Computing", taught by one of the authors, Prof. W. Hwu, and waited until the 2nd edition was out to buy it) and carefully browsed the other half. Here are my...

Comments
----------
(+++) Pluses:
# There are just a few typos here and there, but they are easy to spot (the funniest is in line 5 of Ch. 1 (!), where Giga corresponds to 10^12 and Tera to 10^15 according to the authors: of course Giga is 10^9 and Tera is 10^12 - this bug can be seen with Amazon's "look inside" feature...).
# CUDA is described from an application POV; many computation patterns are exemplified in CUDA, from the simplest (vector addition - see the sketch after this list) to more difficult ones (matrix multiplication, image filtering with convolution kernels, prefix sum/scan,...).
# There is a description of several other PP technologies (OpenCL, MPI,...), which is a good and useful feature if you are evaluating or selecting a PP technology to use.
# The book is quite comprehensive about current PP technologies. CUDA is the "leading actress", but if you master CUDA you can easily transfer your PP knowledge to other technologies. The "tête-à-tête" between CUDA and those technologies appears in the respective chapters, which show the corresponding templates for basic tasks (e.g. for vector addition or matrix multiplication).
# There are many discussions about the tradeoffs between memory transfer (from CPU to GPU) and the speed of GPU computation, as well as about optimizing that speed.
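To give a flavor of the kind of example the book builds on, here is a minimal CUDA vector-addition sketch of my own (identifiers such as vecAdd are mine, not the book's listings); it also shows the CPU-to-GPU copies whose cost the authors weigh against the GPU computation itself:

#include <stdio.h>
#include <cuda_runtime.h>

// Each thread adds one element of the two input vectors.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device (GPU) buffers.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    // CPU-to-GPU transfers: the cost traded off against GPU compute speed.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[123] = %f\n", h_c[123]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The book's matrix multiplication and convolution chapters then refine this pattern with tiling and shared memory to reduce exactly those transfer/bandwidth costs.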

(---) Minuses:
# The figures, pictures and tables use a variety of font sizes and backgrounds (gray, white, etc.); some fonts are very small, and in those cases the code is difficult to read.
# The chapters with descriptions of other PP technologies (OpenCL, OpenACC, MPI,...), often written by "invited" authors (acknowledged by Kirk and Hwu), are generally succinct; and the maturity and availability (free, commercial, open-source,...) of the technologies are not discussed.
# The prose is often excessive (somewhat verbose), and the attempt to explain matters in depth sometimes leads to confusion.
# There is some heterogeneity in the book (that's OK, we are talking about "heterogeneous parallel processors and programming" here ;-), perhaps because there are two main authors with different backgrounds and several "guest authors" in the latter chapters.
# It lacks a well-thought-out introductory chapter treating, in a pedagogical and explanatory style, the subset of C commonly seen in PP code and the CUDA extensions/templates. These matters are (lightly) covered in the book, but scattered across many chapters.
# Browsing the CUDA programming manual, we can see that there are many issues not treated (or barely mentioned) in PMPP. An appendix of 20 or 30 pages with a systematic summary of the CUDA API and C extensions would be a welcome addition to the book.

Conclusion
----------
After having browsed other books about PP and CUDA (with Amazon's "look inside" feature and by reading readers' comments), I decided to get this one and I am not disappointed at all. It has a nice description of CUDA and of many parallel computation patterns and their CUDA implementations, and it gives you a bare-bones sample of other PP technologies. PMPP can be read easily as a "straight-line" text or on a chapter-by-chapter basis (the latter was more useful for me). Recommended for guys and gals with some experience in C programming and a will to get into PP (or to expand their skills...).

