Introduction to Parallel Computing Hardcover – 4 Feb 2003


Product Description

From the Back Cover

Introduction to Parallel Computing, Second Edition
Ananth Grama
Anshul Gupta
George Karypis
Vipin Kumar
Increasingly, parallel processing is being seen as the only cost-effective method for the fast solution of computationally large and data-intensive problems. The emergence of inexpensive parallel computers such as commodity desktop multiprocessors and clusters of workstations or PCs has made such parallel methods generally applicable, as have software standards for portable parallel programming. This sets the stage for substantial growth in parallel software.
Data-intensive applications such as transaction processing and information retrieval, data mining and analysis, and multimedia services have provided a new challenge for the modern generation of parallel platforms. Emerging areas such as computational biology and nanotechnology have implications for algorithms and systems development, while changes in architectures, programming models and applications have implications for how parallel platforms are made available to users in the form of grid-based services.
This book takes these new developments into account while also covering the more traditional problems addressed by parallel computers. Where possible, it employs an architecture-independent view of the underlying platforms and designs algorithms for an abstract model. Message Passing Interface (MPI), POSIX threads and OpenMP have been selected as programming models, and the evolving application mix of parallel computing is reflected in various examples throughout the book.
* Provides a complete end-to-end source on almost every aspect of parallel computing (architectures, programming paradigms, algorithms and standards).
* Covers both traditional computer science algorithms (sorting, searching, graph, and dynamic programming algorithms) and scientific computing algorithms (matrix computations, FFT).
* Covers MPI, Pthreads and OpenMP, the three most widely used standards for writing portable parallel programs.
* The modular nature of the text makes it suitable for a wide variety of undergraduate and graduate-level courses, including parallel computing, parallel programming, design and analysis of parallel algorithms, and high-performance computing.
Ananth Grama is Associate Professor of Computer Sciences at Purdue University, working on various aspects of parallel and distributed systems and applications.
Anshul Gupta is a member of the research staff at the IBM T. J. Watson Research Center. His research areas are parallel algorithms and scientific computing.
George Karypis is Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota, working on parallel algorithm design, graph partitioning, data mining, and bioinformatics.
Vipin Kumar is Professor in the Department of Computer Science and Engineering and the Director of the Army High Performance Computing Research Center at the University of Minnesota. His research interests are in the areas of high performance computing, parallel algorithms for scientific computing problems and data mining.


Customer Reviews

There are no customer reviews yet on Amazon.co.uk.

Most Helpful Customer Reviews on Amazon.com (beta)

Amazon.com: 10 reviews
28 of 31 people found the following review helpful
Better read Journals than this book 28 Nov. 2005
By Kindle Customer - Published on Amazon.com
Format: Hardcover Verified Purchase
I bought the book a few months ago as the textbook for my semester class in high-performance computing. After reading the first 3 chapters I realized that this book is a waste. The examples are only solved partially, and there is a lot of jargon (they should have put the terminology in a separate table, maybe).

I was hoping that by reading the book I'd learn something essential and get the basic philosophy of high-performance computing/parallel processing. Instead, I got more confused than before reading it! (I used to be a real-time software programmer, so the field is not totally new to me.) The authors tried to put everything in this small 633-page book.

Even my professor said it is useless to read the book and referred us to other research papers [Robertazzi's papers], and yes, these IEEE/ACM papers are much more clearly written! I also found some websites that explain the concepts much better. Another book that I'd guess is also better: "Fundamentals of Parallel Processing" by Harry F. Jordan and Gita Alaghband.

Don't waste your money on this book.
12 of 12 people found the following review helpful
Too many mistakes. 19 Feb. 2006
By Erik R. Knowles - Published on Amazon.com
Format: Hardcover
I agree with the other reviewers who have said that this book is sloppy. There are just far too many mistakes for a 2nd edition; very discouraging in an Addison-Wesley print.

The content is OK, and fairly thorough, but as another reviewer noted, there's considerable handwaving going on in some of the explanations.

Bottom line: a cleaned-up 3rd edition could be a very good textbook. Too bad I'm stuck with the 2nd edition :(
19 of 22 people found the following review helpful
A sloppily written book 17 Jan. 2004
By A Customer - Published on Amazon.com
Format: Hardcover
The content should be accessible to any graduate student, but the sloppy writing style has made it unnecessarily difficult to read. Out of the many poorly written places, here is an example. Section 6.3.5 on page 248 says, "Recall from Section 9.3.1..." But I am only in chapter 6; how can I recall something from chapter 9? I then checked chapter 9 and found out that the forward reference was not a typo.
"Foundations of Multithreaded, Parallel, and Distributed Programming" by Gregory Andrews is a much better-written book. Unfortunately, Gregory's book does not cover the same content.
22 of 31 people found the following review helpful
Worst text book ever written.. 1 Dec. 2005
By Panda Bear - Published on Amazon.com
Format: Hardcover
This book is extremely poorly written. The authors gloss over complex equations and magically come up with answers that don't make any sense. For example, to anyone who has taken a prior architecture course, the authors are completely wrong in the majority of the cache performance analysis done early on in the book. Problems associated with that topic force the reader to dumb down quite a bit to achieve their "expected" answer.

The reader is left in most cases to derive the bizarre math that is involved through the authors' hand-waving.

One of my personal favorites is from a formula derivation given on page 340; the sequence follows from the text as:

n^2 = K^2 t_w^2 p^2   <-- what, did I miss something here?

On top of that, there are numerous typos in the sparse visual examples that do exist, which makes the book even more confounding to read through.

If you are evaluating the text for a possible parallel computing course, don't waste your time or money on it; your students will thank you. If you are a student looking to take a class that uses this text, dropping a brick on your foot might be more enjoyable. If you think I'm a disgruntled student trying to seek revenge, I'm not. I did fine in the course, and I just want to make sure that no one else gets blindsided by the nonsensical garbage that is this text. If there were a negative rating, this would be below 1 star.
3 of 4 people found the following review helpful
Solid material but not clean enough 16 May 2009
By Mikael Öhman - Published on Amazon.com
Format: Hardcover
I like this book very much. I have used it for a course I am about to finish.

It provides a solid foundation for anyone interested in parallel computing on distributed-memory architectures. Although there is some material on shared-memory machines, it is fairly limited, which is something the authors should change for a 3rd edition given the times we're living in.

The complaint I would raise is that the book doesn't always feel "clean". It's hard to give a concrete example, but sometimes you really have to spend time figuring out where a communication time complexity comes from, even though the authors refer to a table of communication time complexities. Why? Because the table assumes that the underlying architecture is a hypercube, which isn't really made explicit anywhere.