
Introduction to Parallel Computing (English) Hardcover – February 4, 2003


All formats and editions:
Hardcover – Amazon price: EUR 162,43 | New from EUR 47,98 (13 offers) | Used from EUR 100,35 (4 offers)
Free delivery. Only 1 left in stock (more on the way). Sold and shipped by Amazon. Gift wrapping available.




Product Descriptions

Synopsis

Introduction to Parallel Computing is a complete end-to-end source of information on almost all aspects of parallel computing, from introduction to architectures to programming paradigms to algorithms to programming standards. It is the only book to have complete coverage of traditional computer science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data-intensive algorithms (search, dynamic programming, data mining).

From the Back Cover

Introduction to Parallel Computing, Second Edition

Ananth Grama

Anshul Gupta

George Karypis

Vipin Kumar

Increasingly, parallel processing is being seen as the only cost-effective method for the fast solution of computationally large and data-intensive problems. The emergence of inexpensive parallel computers such as commodity desktop multiprocessors and clusters of workstations or PCs has made such parallel methods generally applicable, as have software standards for portable parallel programming. This sets the stage for substantial growth in parallel software.

Data-intensive applications such as transaction processing, information retrieval, data mining and analysis, and multimedia services have provided a new challenge for the modern generation of parallel platforms. Emerging areas such as computational biology and nanotechnology have implications for algorithms and systems development, while changes in architectures, programming models and applications have implications for how parallel platforms are made available to users in the form of grid-based services.

This book takes into account these new developments as well as covering the more traditional problems addressed by parallel computers. Where possible it employs an architecture-independent view of the underlying platforms and designs algorithms for an abstract model. Message Passing Interface (MPI), POSIX threads and OpenMP have been selected as programming models and the evolving application mix of parallel computing is reflected in various examples throughout the book.
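
To give a flavor of one of the three programming models named above, here is a minimal sketch of an OpenMP parallel loop in C. It is only an illustration, not an example taken from the book, and it assumes a compiler with OpenMP support (for example, gcc -fopenmp):

#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    /* serial initialization */
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* Each thread accumulates a private partial sum; the reduction
       clause combines the partial sums when the loop finishes. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}

The same loop could be written with MPI or Pthreads; OpenMP is shown here only because it is the shortest of the three to write down.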

* Provides a complete end-to-end source on almost every aspect of parallel computing (architectures, programming paradigms, algorithms and standards).

* Covers both traditional computer science algorithms (sorting, searching, graph, and dynamic programming algorithms) and scientific computing algorithms (matrix computations, FFT).

* Covers MPI, Pthreads and OpenMP, the three most widely used standards for writing portable parallel programs.

* The modular nature of the text makes it suitable for a wide variety of undergraduate- and graduate-level courses, including parallel computing, parallel programming, design and analysis of parallel algorithms, and high performance computing.

Ananth Grama is Associate Professor of Computer Sciences at Purdue University, working on various aspects of parallel and distributed systems and applications.

Anshul Gupta is a member of the research staff at the IBM T. J. Watson Research Center. His research areas are parallel algorithms and scientific computing.

George Karypis is Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota, working on parallel algorithm design, graph partitioning, data mining, and bioinformatics.

Vipin Kumar is Professor in the Department of Computer Science and Engineering and the Director of the Army High Performance Computing Research Center at the University of Minnesota. His research interests are in the areas of high performance computing, parallel algorithms for scientific computing problems and data mining.







Customer Reviews

There are no customer reviews on Amazon.de yet.

The most helpful customer reviews on Amazon.com (beta)

Amazon.com: 10 reviews
28 of 31 customers found the following review helpful
Better read journals than this book – November 28, 2005
By Kindle Customer - Published on Amazon.com
Format: Hardcover | Verified Purchase
I bought the book a few months ago as the textbook for my semester class in high performance computing. After reading the first 3 chapters I realized that this book is a waste. The examples are only partially solved, and there is a lot of jargon (they should have put the terminology in a separate table, maybe).

I was hoping that by reading the book I would learn something essential and get the basic philosophy of high-performance computing/parallel processing. Instead, I got more confused than before reading it! (I used to be a real-time software programmer, so the field is not totally new to me.) The authors tried to put everything into this small 633-page book.

Even my professor said it is useless to read the book and referred us to other research papers [Robertazzi's papers], and yes, these IEEE/ACM papers are much easier to understand! I also found some websites that explain the concepts much better. Another book that I guess is also better: "Fundamentals of Parallel Processing" by Harry F. Jordan and Gita Alaghband.

Don't waste your money on this book.
12 of 12 customers found the following review helpful
Too many mistakes – February 19, 2006
By Erik R. Knowles - Published on Amazon.com
Format: Hardcover
I agree with the other reviewers who have said that this book is sloppy. There are just far too many mistakes for a 2nd edition; very discouraging for an Addison-Wesley title.

The content is OK, and fairly thorough, but as another reviewer noted, there's considerable handwaving going on in some of the explanations.

Bottom line: a cleaned-up 3rd edition could be a very good textbook. Too bad I'm stuck with the 2nd edition :(
19 of 22 customers found the following review helpful
A sloppily written book – January 17, 2004
By A Customer - Published on Amazon.com
Format: Hardcover
The content should be accessible to any graduate student, but the sloppy writing style has made it unnecessarily difficult to read. Out of the many poorly written places, here is an example. In section 6.3.5 on page 248, it says, "Recall from section 9.3.1..." But I am only in chapter 6; how can I recall something from chapter 9? I then checked chapter 9 and found out that the forward reference was not a typo.
"Foundations of Multithreaded, Parallel, and Distributed Programming" by Gregory Andrews is a much better-written book. Unfortunately, Gregory's book does not cover the same content.
22 of 31 customers found the following review helpful
Worst textbook ever written – December 2, 2005
By Panda Bear - Published on Amazon.com
Format: Hardcover
This book is extremely poorly written. The authors gloss over complex equations and magically come up with answers that don't make any sense. For example, to anyone who has taken a prior architecture course, the authors are completely wrong in the majority of the cache performance analysis done early on in the book. Problems associated with that topic force the reader to dumb down quite a bit to achieve their "expected" answer.

The reader is left in most cases to derive the bizarre math that is involved from the authors' hand-waving.

One of my personal favorites is a formula derivation given on page 340; the sequence in the text goes:

n^2 = K t_w n p,

n = K t_w p,

n^2 = K^2 t_w^2 p^2,   <-- what, did I miss something here?

W = K^2 t_w^2 p^2.
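
(If the problem size is taken to be W = n^2, an assumption not spelled out in the quoted lines, the questioned step is simply squaring both sides of n = K t_w p to get n^2 = K^2 t_w^2 p^2, after which W = n^2 gives W = K^2 t_w^2 p^2.)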

On top of that, there are numerous typos in the sparse visual examples that do exist, which makes it even more confounding to read through.

If you are evaluating the text for a possible parallel computing course, don't waste your time or money with this text; your students will thank you. If you are a student looking to take a class that uses this text...dropping a brick on your foot might be more enjoyable. If you think I'm a disgruntled student trying to seek revenge, I'm not. I did fine in the course, and I just want to make sure that no one else gets blindsided by the nonsensical garbage that is this text. If there were a negative rating...this would be below 1 star.
3 of 4 customers found the following review helpful
Solid material but not clean enough – May 16, 2009
By Mikael Öhman - Published on Amazon.com
Format: Hardcover
I like this book very much. I have used it for a course I am about to finish.

It provides a solid foundation for anyone interested in parallel computing on distributed memory architectures. Although there is some material on shared memory machines, this material is fairly limited, which might be something the authors should change for a 3rd edition given the times we're living in.

The complaint I would raise is that the book doesn't always feel "clean". It's hard to give a concrete example, but sometimes you really have to spend some time to understand where a communication time complexity comes from, even though the authors refer to a table of communication time complexities. Why? Because the table assumes that the underlying architecture is a hypercube, which isn't really made explicit anywhere (?).
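
(To illustrate the kind of entry such a table contains: under a hypercube model, a one-to-all broadcast of an m-word message among p processes is typically quoted as costing (t_s + t_w m) log p time, where t_s is the message startup time and t_w the per-word transfer time; on a different interconnect the same operation has a different cost, which is exactly why the hidden hypercube assumption matters.)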