EUR 42.95
  • List price: EUR 50.25
  • You save: EUR 7.30 (15%)
  • All prices include VAT.
Only 1 left in stock (more on the way).
Sold and shipped by Amazon.
Gift wrapping available.

GPU Computing Gems (Applications of GPU Computing) (English) Hardcover – March 9, 2011

1 customer review

All 2 formats and editions:
  • Kindle Edition
  • Hardcover: EUR 42.95 (49 new from EUR 42.95, 3 used from EUR 48.36)




Product information

  • Hardcover: 886 pages
  • Publisher: Morgan Kaufmann (March 9, 2011)
  • Language: English
  • ISBN-10: 0123849888
  • ISBN-13: 978-0123849885
  • Dimensions: 3.2 x 19.7 x 24.8 cm
  • Average customer review: 4.0 out of 5 stars (1 customer review)
  • Amazon Bestsellers Rank: No. 271,244 in Foreign Language Books


Product descriptions

Press reviews

Praise for GPU Computing Gems: Emerald Edition:

  • "GPU computing is becoming an outstanding field in high performance computing. Due to its ease of use, the CUDA approach enables programmers to take advantage of GPU acceleration very quickly. My research in complex systems, as well as applications in high-frequency trading, benefited significantly from GPU computing." --Dr. Tobias Preis, ETH Zurich, Switzerland
  • "This book is an important reference for everyone working on GPU/CUDA, and contains definitive work in a selection of fields. The patterns of CUDA parallelization it describes can often be adapted to applications in other fields." --Dr. Ming Ouyang, Assistant Professor and Director, Visualization and Intensive Graphics Lab, University of Louisville
  • "Diving into the world of GPU computing has never been more important these days. GPU Computing Gems: Emerald Edition takes you through the looking glass into this fascinating world." --Martin Eisemann, Computer Graphics Lab, TU Braunschweig
  • "...an outstanding collection of vignettes of how to program GPUs for a breathtaking range of applications." --Dr. Amitabh Varshney, Director, Institute for Advanced Computer Studies, University of Maryland
  • "The book features a useful index that might help readers mine the gems in search of a solution to a specific algorithmic problem. The index is accompanied by online resources containing source code samples, and further information, for some of the chapters. A second volume with another 30 chapters of GPGPU application reports, somewhat more focused on generic algorithms and programming techniques, is currently in the pipeline and scheduled to appear as the 'Jade Edition' sometime this month." --Computing in Science and Engineering
  • "The book is an excellent selection of important papers describing various applications of GPUs. As such, I believe it would be a valuable addition to the bookshelf of any researcher in modeling and simulation. This is not a substitute for a more detailed text on massively parallel programming... Instead, it is a nice practical addition to that text." --Computing Reviews, August 2012

About the author and contributors

Wen-mei Hwu is CTO of MulticoreWare and a professor at the University of Illinois at Urbana-Champaign, specializing in compiler design, computer architecture, computer microarchitecture, and parallel processing. He currently holds the Walter J. ("Jerry") Sanders III-Advanced Micro Devices Endowed Chair in Electrical and Computer Engineering in the Coordinated Science Laboratory. He is a PI for the petascale Blue Waters system, co-director of the Intel- and Microsoft-funded Universal Parallel Computing Research Center (UPCRC), and PI for the world's first NVIDIA CUDA Center of Excellence. At the Illinois Coordinated Science Lab, Dr. Hwu leads the IMPACT Research Group and directs the OpenIMPACT project, which has delivered new compiler and computer architecture technologies to the computer industry since 1987. He previously edited GPU Computing Gems, a similar work focusing on NVIDIA CUDA.


Customer reviews

4.0 out of 5 stars
  • 5 stars: 0
  • 4 stars: 1
  • 3 stars: 0
  • 2 stars: 0
  • 1 star: 0

Most helpful customer reviews

Format: Hardcover | Verified Purchase
This book is a collection of 50 scientific articles on experiences with GPU computing in a variety of fields.

All of the articles follow a similar structure: the abstract is followed by the theoretical background, which at times is very mathematical. The kernels are then presented and subsequently optimized. Finally, performance is compared against the CPU.

The authors present techniques with which they were able to achieve substantial performance gains. As a CUDA developer, I found this interesting in many places.

Still, I cannot recommend this book without reservation. The reader needs to bring an enthusiasm for CUDA and for the scientific algorithms. If you simply want to learn CUDA optimization, this is not the right book.

P.S. The book was published back in 2011, so a few passages are already dated. I am only reviewing it now because I have only just finished reading all 50 articles.

Most helpful customer reviews on Amazon.com

Amazon.com: 17 reviews
21 of 25 customers found the following review helpful
A missed opportunity - February 22, 2011
By Sean - Published on Amazon.com
Format: Hardcover | Verified Purchase
I have to agree with H. Nguyen. This book is a missed opportunity. GPGPU computing is new to programmers and barely even known by scientists. The entries in this book don't really show sophisticated GPGPU philosophy or idioms. You won't read this and have "aha" moments. It would have been nice if the text had focused on advanced uses of segmented scan (the central trick in GPGPU computing) for load balancing and allocation, and helped the reader develop a toolbox for writing their own kernels. What's really needed is a GPU replacement for basic computer science texts like Sedgewick et al. Just learning how to add up numbers, write a sort, or write a sparse matrix code near the peak efficiency of the device is a great learning experience, because you learn to think with cooperative thread array logic rather than imperative logic. Until you master that, it's not possible to write efficient GPU code. I give the contributors credit for the articles, but I think the editors made a mistake by not giving the book a clearer and narrower focus. Hopefully there will soon be a book that tackles ten can't-live-without algorithms and covers them in very fine detail, addressing all performance aspects of the code and showing how coupled they are to the device architecture.
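To make the "adding up numbers" exercise concrete, here is a minimal sketch, not taken from the book or the review, of the classic shared-memory block sum in CUDA; the kernel name and launch configuration are illustrative assumptions.

// Block-level parallel sum: each block cooperatively reduces its slice of the
// input to one partial sum (illustrative sketch; blockDim.x must be a power of 2).
__global__ void blockSum(const float *in, float *out, int n) {
    extern __shared__ float s[];          // one float per thread
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    s[tid] = (i < n) ? in[i] : 0.0f;      // each thread loads one element
    __syncthreads();
    // Tree reduction: the active thread count halves each step, so the whole
    // block cooperates instead of one thread looping imperatively.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) s[tid] += s[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = s[0]; // one partial sum per block
}
// Assumed launch: blockSum<<<blocks, 256, 256 * sizeof(float)>>>(d_in, d_out, n);
// then reduce the per-block partial sums in a second pass or on the host.

Thinking in this cooperative pattern, rather than in a sequential accumulation loop, is the mindset shift the review describes.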

On the other hand I'm giving the book a second star because it does let the reader know there are others using GPGPU to solve science problems, and the topics are pretty interesting, even if the implementations are not in the GPU idiom.

The best references are still the technical docs from NVIDIA and ATI (you should read both vendors' docs even if you only deal with CUDA, as the extra perspective helps), the CUDA technical forum, and the handful of research papers written by good GPGPU coders (many of whom now work at NVIDIA).
6 of 6 customers found the following review helpful
Wide survey but not deep - April 19, 2011
By E. Baxter - Published on Amazon.com
Format: Hardcover | Vine Customer Review of Free Product
I use GPU computing in my own research and so was eager to get my hands on this book. The authors' introduction states that, while GPUs are now used in extremely diverse circumstances, many fundamental operations easily cross disciplines. Their goal, therefore, is to help disseminate knowledge from one area of science to others who can learn from what has already been done. This is an admirable goal, but its execution in this book is uncertain. The text consists of 50 chapters, each written by experts in their field. I can testify, from my own field of medical imaging, to the top quality of the experts contributing here. The chapters are well written, and their variety does give a good understanding of the breadth of applications in which GPUs are finding themselves. Unfortunately, I did not learn anything new or useful that I could apply. If you are using GPUs in your field, you probably know more than this book presents. If you don't know anything about GPUs, then this book is not a good introduction. The book's audience is unclear. If you are looking for details for graphics applications, this is not your book, as it focuses on scientific applications. I agree with several of my colleagues when they say this book should have been a GPU programming cookbook with code examples for fundamental and common operations.
4 of 4 customers found the following review helpful
It was OK but... - June 21, 2011
By K. Waggner - Published on Amazon.com
Format: Hardcover | Vine Customer Review of Free Product
I found previous books in the GPU series really helpful; this one, not so much. The graphics were great but not very helpful. With such a broad array of topics, I think readers will probably benefit from only a small portion of the book.

I think GPU Pro was much better. I also agree with others that this book should have been a GPU programming cookbook with code examples for fundamental and common operations.
2 of 2 customers found the following review helpful
Good research paper overview - September 26, 2011
By Mike - Published on Amazon.com
Format: Hardcover | Vine Customer Review of Free Product
This book is not for someone seeking guidance on algorithms for parallel programs or an introduction to GPU programming. The target audience is either a researcher seeking a literature survey snapshot of the use of GPUs in some high-performance computing areas, or an engineering professional looking to see which universities are working in an area of interest.

The papers are very academic in style and follow a basic pattern:
1) problem outline,
2) GPU solution overview,
3) comparison of performance and
4) conclusions.

There is little coverage of OpenCL (chapter 34), an alternative, non-proprietary CPU+GPU computing language, which was a little disappointing - probably because NVIDIA heavily managed the content: editors, reviewers, and authors. The content will age quickly as platforms (GPUs) and languages develop and university departments change. Given this, I think the book would have been better published on the web, where the content could keep up with that pace.
1 of 1 customers found the following review helpful
A great book, if your expectations are in line with what the book delivers - July 28, 2011
By N. J. Simicich - Published on Amazon.com
Format: Hardcover | Vine Customer Review of Free Product
I have been donating spare cycles to Folding@Home. I have a 4-way Intel box and an NVIDIA graphics card, and the NVIDIA card far outperforms the main processor in churning out the floating point calculations Folding@Home wants... So I had been interested in what was available. NVIDIA has a toolkit that allows you to access the processors on the graphics card. The toolkit is called CUDA.

I'm a retired programmer with 30 years of programming experience. While I don't work hard these days, I like to keep my hand in and keep up with technology.
And it seemed that the floating point processing power available in the GPU had to be looked at.
It seemed serendipitous that this book became available from Vine just as I was looking at CUDA and the power of the GPU.
Now, when I was a young (19-year-old) programmer, someone gave me an enormous scheduling problem. A hundred students had to be assigned to discussion groups. Each student was available at some times and not at others (they might be in class, for example). They were male and female, and it was imperative that the gender balance of the groups be respected. Study groups had to end up with 4-8 people; if a group had too few or too many, try again with another choice set.
The students' numbers, requested discussion group numbers, and sex were punched onto cards.
I wrote a FORTRAN program (it was 1970) that read the cards into an array (as few bytes per student as possible) and started traversing the problem set. It was taking a very long time. Finally, I modified the program to check one of the toggle switches on the console, dump its state to the console, and stop if one of the switches was toggled, and left late Friday night, with instructions on the console that on Monday, when they needed the computer, the operator should flip the switch and wait for the program to print. The program ran for a CPU Weekend. CPU Weekends were important back then; the most processing that one programmer could do was to run for a weekend. When I worked at IBM Research, fractals were developed by that MIT guy using spare CPU Weekends.
Now, the machine I had use of in 1970 was an IBM 1130. It had 8K of core memory, a 3.2 microsecond cycle time, and a 1 megabyte hard drive. I expect that today, on my desktop PC, I could exhaustively search through all potential solutions and find the best fit in a few minutes to an hour. But this was a slow machine. It finished no more than 20% of the search in the CPU Weekend. I declared the problem "too large to compute" and tried a Monte Carlo approach, where I learned the weaknesses of pseudorandom number generators.

These days, the CPU Weekend, the largest piece of work that can be computed in spare cycles, is what can be processed on a desktop PC with a high-end NVIDIA graphics card over a weekend -- because the $200 NVIDIA Superclocked GTX 460 is the most powerful mass-marketed computer to date.

So, I started to think: was this power accessible? Could one reprogram the calculation loop of a spreadsheet, one that might have a million or more rows, to calculate in parallel? How about inherently parallel languages like J, where data is commonly held in matrices and processed all at once, in parallel? Could you speed up that sort of processing by reprogramming the calculation loops of J, so that when you casually toss around a large matrix it can be processed in parallel?
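To make that question concrete, here is a minimal sketch, not from the review or the book, of the kind of elementwise kernel such a recalculation loop could map to; the kernel name and per-cell formula are hypothetical.

// One thread per cell: a million spreadsheet rows become a million independent
// threads, the "process the whole matrix at once" model described above.
__global__ void recalcCells(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 1.07f * a[i] + b[i];     // hypothetical per-cell formula
}
// Assumed launch: recalcCells<<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);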

And then I saw this book. I was intrigued. Actual projects? Source code, maybe? Explanations?

And today the book came.

Now, I was a bit disappointed. These were all independent papers, and many of them basically say the same thing: we had a computationally hard problem; we reprogrammed it to use CUDA; it sped up a whole lot; we were happy. There is a chart that is repeated in study after study. It goes something like this:

We had been running the app on a 2.6 GHz Intel Core ?? chip, and the app ran in 50 seconds. We reprogrammed it for an 8-way Intel chip, the latest, and were able to do the computation in 10 seconds. When we ran it on an NVIDIA card using CUDA, we were able to run the app in 0.12 seconds after applying full optimization. Our first try on NVIDIA got us a 3-second compute time, but we did something fancy and were able to get another 75% reduction over these three stages. But the detail of that optimization is not explained here.

The chart is almost exactly the one in the CUDA Programming Guide, where they note that the GTX 480 has a theoretical floating point output of over 1.3 teraflops. Some of the researchers got 85% of theoretical max.

One line from the Introduction intrigued me. We expect processing power to double every two years, although these processors are not getting faster; they are just adding more cores. When people port their compute-bound app from a high-end Intel box to a GPU, the app runs 10 to 100 times faster. So, in a way, people who port to GPUs travel 8-12 years into the future. And there is no reason to believe that GPUs won't continue to add processing power, since they can add more cores or speed up the ones they have; right now they are loafing along at 1500 MHz. So they have traveled to the future and need not come back, and we can reap the benefits of their computation.

There was a lot of detail in the book, but it was the wrong sort for me. If I were a physical chemist, say, the equations that describe the end location of the electron in its shell, and how its position was calculated, would have been more useful. It was probably obvious to other physical chemists how they would translate those equations into an algorithm, but it wasn't to me. In a few places people included application pseudocode, and, all too briefly, actual code was quoted, but I didn't see any explanations, especially not in the detail I hoped for. To be clear, there was plenty of detail; it just wasn't where I needed it.

In "A Brief History of Time", Hawking was warned by an editor that every equation he put in his book would halve the readership. By that principle, this book is flat out of readers. '

So in the end, I was disappointed. If I were looking at the cost of a supercomputer, needed a lot of floating point processing to achieve computational goals, and was told that my budget was so large that it meant layoffs, well, I might see this as proof of concept. I could use this book as justification for putting together a staff to investigate the use of CUDA and NVIDIA rather than more traditional supercomputer vendors.
If I were one of those investigators, this book would hardly be useful, other than as a way to determine whether the speedups I was getting were in line with what the rest of the industry was getting on similar projects.
All that said, this is still a four star book. If you are the person who needs this book, it is a great book, pretty much unique in its field.