Introduction to Parallel Computing [Hardcover]

Ananth Grama, George Karypis, Vipin Kumar, Anshul Gupta

Price: £51.99 & FREE Delivery in the UK.

Book Description

Published 4 Feb 2003 | ISBN-10: 0201648652 | ISBN-13: 978-0201648652 | 2nd edition
Introduction to Parallel Computing is a complete end-to-end source of information on almost all aspects of parallel computing, from introduction to architectures to programming paradigms to algorithms to programming standards. It is the only book to offer complete coverage of traditional computer science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data-intensive algorithms (search, dynamic programming, data mining).

Product Description

From the Back Cover

Introduction to Parallel Computing, Second Edition
Ananth Grama
Anshul Gupta
George Karypis
Vipin Kumar
Increasingly, parallel processing is being seen as the only cost-effective method for the fast solution of computationally large and data-intensive problems. The emergence of inexpensive parallel computers such as commodity desktop multiprocessors and clusters of workstations or PCs has made such parallel methods generally applicable, as have software standards for portable parallel programming. This sets the stage for substantial growth in parallel software.
Data-intensive applications such as transaction processing and information retrieval, data mining and analysis and multimedia services have provided a new challenge for the modern generation of parallel platforms. Emerging areas such as computational biology and nanotechnology have implications for algorithms and systems development, while changes in architectures, programming models and applications have implications for how parallel platforms are made available to users in the form of grid-based services.
This book takes into account these new developments as well as covering the more traditional problems addressed by parallel computers. Where possible, it employs an architecture-independent view of the underlying platforms and designs algorithms for an abstract model. Message Passing Interface (MPI), POSIX threads and OpenMP have been selected as programming models, and the evolving application mix of parallel computing is reflected in various examples throughout the book.
* Provides a complete end-to-end source on almost every aspect of parallel computing (architectures, programming paradigms, algorithms and standards).
* Covers both traditional computer science algorithms (sorting, searching, graph, and dynamic programming algorithms) as well as scientific computing algorithms (matrix computations, FFT).
* Covers MPI, Pthreads and OpenMP, the three most widely used standards for writing portable parallel programs.
* The modular nature of the text makes it suitable for a wide variety of undergraduate- and graduate-level courses, including parallel computing, parallel programming, design and analysis of parallel algorithms, and high-performance computing.
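The bullet points above describe the message-passing (MPI) and shared-memory (Pthreads, OpenMP) styles the book teaches. As a flavor of the block-decomposition, divide-the-work pattern common to all three, here is a minimal sketch using Python's standard multiprocessing module — an illustrative stand-in chosen for portability, not code from the book:

```python
# Data-parallel summation sketch: split a range into contiguous
# chunks, sum each chunk in a worker process, combine the results.
# Uses Python's stdlib multiprocessing as a stand-in for MPI/OpenMP.
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker sums one contiguous block [lo, hi).
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Block decomposition: ceil(n / workers) elements per chunk.
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        # Scatter chunks to workers, then reduce the partial sums.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))
```

The same decompose/compute/reduce structure appears throughout texts like this one, whether the "scatter" is an MPI message, a Pthreads work queue, or an OpenMP parallel loop.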
Ananth Grama is Associate Professor of Computer Sciences at Purdue University, working on various aspects of parallel and distributed systems and applications.
Anshul Gupta is a member of the research staff at the IBM T. J. Watson Research Center. His research areas are parallel algorithms and scientific computing.
George Karypis is Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota, working on parallel algorithm design, graph partitioning, data mining, and bioinformatics.
Vipin Kumar is Professor in the Department of Computer Science and Engineering and the Director of the Army High Performance Computing Research Center at the University of Minnesota. His research interests are in the areas of high performance computing, parallel algorithms for scientific computing problems and data mining.

Customer Reviews

There are no customer reviews yet on Amazon.co.uk.
Most Helpful Customer Reviews on Amazon.com (beta)
Amazon.com: 3.1 out of 5 stars, 8 reviews
27 of 30 people found the following review helpful
2.0 out of 5 stars Better to read journals than this book 28 Nov 2005
By Kindle Customer - Published on Amazon.com
Format: Hardcover | Verified Purchase
I bought the book a few months ago as the textbook for my semester class in high performance computing. After reading the first 3 chapters I realized that this book is a waste. The examples are only solved partially, and there is a lot of jargon (they should have put the terminology in a separate table, maybe).

I was hoping that by reading the book I'd learn something essential and get the basic philosophy of high-performance computing/parallel processing. Instead, I got more confused than before reading it! (I used to be a real-time software programmer, so the field is not totally new to me.) The authors tried to put everything in this small 633-page book.

Even my professor said it is useless to read the book and referred us to other research papers [Robertazzi's papers], and yes, these IEEE/ACM papers are much more clearly written! I also found some websites that explain the concepts much better. Another book that I guess is also better: "Fundamentals of Parallel Processing" by Harry F. Jordan and Gita Alaghband.

Don't waste your money on this book.
11 of 11 people found the following review helpful
2.0 out of 5 stars Too many mistakes. 19 Feb 2006
By Erik R. Knowles - Published on Amazon.com
I agree with the other reviewers who have said that this book is sloppy. There are just far too many mistakes for a 2nd-edition book; very discouraging for an Addison-Wesley title.

The content is OK, and fairly thorough, but as another reviewer noted, there's considerable handwaving going on in some of the explanations.

Bottom line: a cleaned-up 3rd edition could be a very good textbook. Too bad I'm stuck with the 2nd edition :(
19 of 22 people found the following review helpful
2.0 out of 5 stars A sloppily written book 17 Jan 2004
By A Customer - Published on Amazon.com
The content should be accessible to any graduate student, but the sloppy writing style has made it unnecessarily difficult to read. Out of the many poorly written places, here is an example. In section 6.3.5 on page 248, it reads, "Recall from section 9.3.1..." But I am only in chapter 6; how can I recall something from chapter 9? I then checked chapter 9 and found out that the forward reference was not a typo.
"Foundations of Multithreaded, Parallel, and Distributed Programming" by Gregory Andrews is a much better-written book. Unfortunately, Gregory's book does not cover the same content.
22 of 30 people found the following review helpful
1.0 out of 5 stars Worst textbook ever written 1 Dec 2005
By Panda Bear - Published on Amazon.com
This book is extremely poorly written. The authors glaze over complex equations and magically come up with answers that don't make any sense. For example, to anyone who has taken a prior architecture course, the authors are completely wrong in the majority of the cache performance analysis done early on in the book. Problems associated with that topic force the reader to dumb things down quite a bit to arrive at their "expected" answer.

The reader is left in most cases to derive the bizarre math that is involved through the authors' hand-waving.

One of my personal favorites is a formula derivation given on page 340; the sequence follows from the text as:

n^2 = K^2 t_w^2 p^2 <-- what, did I miss something here?
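For what it's worth, a relation of this shape can fall out of a standard isoefficiency argument. The following is only a guessed reconstruction, not necessarily the book's actual derivation on page 340: it assumes a problem size W = n^2 and a communication overhead term T_o = t_w n p.

```latex
% Guessed reconstruction (assumptions: W = n^2, T_o = t_w n p);
% the isoefficiency condition fixes W = K * T_o(W, p).
W = K\,T_o(W,p)
\;\Rightarrow\; n^2 = K\,t_w\,n\,p
\;\Rightarrow\; n = K\,t_w\,p
\;\Rightarrow\; n^2 = K^2\,t_w^2\,p^2
```

If the book jumps straight from the isoefficiency condition to the final line, the reviewer's confusion is understandable: the intermediate cancellation of one factor of n is the missing step.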


On top of that, there are numerous typos in the sparse visual examples that do exist, which makes it even more confounding to read through.

If you are evaluating the text for a possible parallel computing course, don't waste your time or money with this text; your students will thank you. If you are a student looking to take a class that uses this text... dropping a brick on your foot might be more enjoyable. If you think I'm a disgruntled student trying to seek revenge, I'm not. I did fine in the course; I just want to make sure that no one else gets blind-sided by the nonsensical garbage that is this text. If there were a negative rating, this would be below 1 star.
3 of 4 people found the following review helpful
4.0 out of 5 stars Solid material but not clean enough 16 May 2009
By Mikael Öhman - Published on Amazon.com
I like this book very much. I have used it for a course I am about to finish.

It provides a solid foundation for anyone interested in parallel computing on distributed memory architectures. Although there is some material on shared memory machines, this material is fairly limited which might be something the authors should change for a 3rd edition given the times we're living in.

The complaint I would raise is that the book doesn't always feel "clean". It's hard to give a concrete example, but sometimes you really have to spend some time to understand where a communication time complexity comes from, even though the authors refer to a table of communication time complexities. Why? Because the table assumes the underlying architecture is a hypercube, which isn't really made explicit anywhere (?).