Customer Reviews

1 Review

4 of 5 people found the following review helpful
4.0 out of 5 stars A solid introduction to CUDA programming and more..., 11 Feb 2013
Verified Purchase
This review is from: Programming Massively Parallel Processors: A Hands-on Approach (Paperback)
This second edition of PMPP extends the table of contents of the first one, almost doubling the number of pages (the 2nd ed. runs to ~500 pages; I have the paper version).

The book can be separated roughly into 4 parts. The first, and most important, deals with parallel programming using Nvidia's CUDA technology: this takes about the first 10 chapters, plus Ch. 20. The second slice shows a couple of important examples (MRI image reconstruction, and molecular simulation and visualization, chapters 11 and 12). The 3rd important block (chapters 14 up to 19) deals with other parallel programming technologies and CUDA extensions: OpenCL, OpenACC, CUDA Fortran, Thrust, C++ AMP, MPI. Finally, spread all over the book, there are several "outlier", but nevertheless important, chapters: Ch. 7 discusses floating-point issues and their impact on calculation accuracy; Ch. 13, "PP and Computational Thinking", discusses broadly how to think when converting sequential algorithms to parallel ones; and Ch. 21 discusses the future of PP (through CUDA goggles :-).

I've read about half of the book (I attended Coursera's MOOC -"Heterogeneous Parallel Computing"- taught by one of the authors, Prof. W. Hwu, and waited until the 2nd edition was out to buy it), and carefully browsed the other half. Here are my...

(+++) Pluses:
# There are just a few typos here and there, and they are easy to spot (the funniest is in line 5 of Ch. 1 (!), where, according to the authors, Giga corresponds to 10^12 and Tera to 10^15; of course Giga is 10^9 and Tera is 10^12. This bug is browseable with Amazon's "look inside" feature...).
# CUDA is described from an application POV; many computation patterns are exemplified in CUDA, from the simplest (vector addition) to more difficult ones (matrix multiplication, image filtering with convolution kernels, scan, ...)
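To give a flavour of that first pattern: the canonical starting example in CUDA texts is a vector-addition kernel along these lines (names like `vecAdd` are my own shorthand, not necessarily the book's exact listing):

```cuda
// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                  // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

// Host-side launch: one thread per element, 256 threads per block,
// rounding the number of blocks up so all n elements are covered.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```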
# There is a description of several other PP technologies (OpenCL, MPI, ...), which is a good and useful feature if you are evaluating or selecting a PP technology to use.
# The book is quite comprehensive about current PP technologies. CUDA is the "leading actress", but if you master CUDA you can easily transfer your PP knowledge to other technologies. The "tête-à-tête" of CUDA with those technologies appears in the respective chapters, which show the respective templates for basic tasks (e.g. vector addition or matrix multiplication).
# There are many discussions about the tradeoffs between memory transfer (from CPU to GPU and back) and the speed of GPU computation, as well as about optimizing that speed.
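The transfer cost in question comes from the standard host/device round trip that brackets every kernel launch; a minimal sketch of that pattern (error checking and the kernel itself omitted, identifiers mine) looks like:

```cuda
// Typical CPU<->GPU data flow whose cost must be weighed
// against the kernel's speedup over a CPU implementation.
size_t bytes = n * sizeof(float);
float *d_a, *d_c;
cudaMalloc(&d_a, bytes);
cudaMalloc(&d_c, bytes);

cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU (PCIe)
someKernel<<<blocks, threads>>>(d_a, d_c, n);         // compute on the GPU
cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);  // GPU -> CPU

cudaFree(d_a);
cudaFree(d_c);
```

If the two `cudaMemcpy` calls take longer than the CPU would need to do the work itself, the GPU buys you nothing; that is the tradeoff the book keeps returning to.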

(---) Minuses:
# The figures, pictures and tables use a variety of font sizes and backgrounds (either gray, white, etc...); some fonts are very tiny and so in those cases the code is difficult to read.
# The chapters with descriptions of other PP technologies (OpenCL, OpenACC, MPI,...), often written by "invited" authors (acknowledged by Kirk and Hwu), are in general succinct; and the maturity and availability (free, commercial, open-source,...) of the technologies are not discussed.
# The prose is often excessive (somewhat verbose), and the attempt to explain matters in depth sometimes leads to confusion.
# There is some heterogeneity in the book (that's OK, we are talking about "heterogeneous parallel processors and programming" here ;-), perhaps because there are two main authors with different backgrounds and several "guest authors" in the latter chapters.
# It lacks a well-thought-out introductory chapter treating, in a pedagogical and explanatory style, the subset of C commonly seen in PP code and the CUDA extensions/templates. These matters are (lightly) covered in the book, but scattered across many chapters.
# Browsing the CUDA programming manual, you can see that there are many issues not treated (or barely mentioned) in PMPP. An appendix of 20 or 30 pages with a systematic summary of the CUDA API and C extensions would be a welcome addition to the book.

After having browsed other books about PP and CUDA (with Amazon's "look inside" feature and by reading the readers' comments), I decided to get this one and I am not disappointed at all. It has a nice description of CUDA and of many parallel computation patterns and their CUDA implementations, and it gives you a barebones sample of other PP technologies. PMPP can be read easily as a "straight-line" text or on a chapter-by-chapter basis (the latter was more useful for me). Recommended for guys and gals with some experience in C programming and a desire to get into PP (or to expand their skills...).


This product: Programming Massively Parallel Processors: A Hands-on Approach by Wen-mei W. Hwu (Paperback - 20 Dec 2012)