- Hardcover: 328 pages
- Publisher: Oxford University Press (3 July 2014)
- Language: English
- ISBN-10: 0199678111
- ISBN-13: 978-0199678112
- Dimensions and/or weight: 23.6 x 2.8 x 16.3 cm
- Average customer rating: 20 customer reviews
- Amazon Bestsellers Rank: No. 48,748 in Foreign Language Books
Superintelligence: Paths, Dangers, Strategies (English) Hardcover – 3 July 2014
"I highly recommend this book" --Bill Gates
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkley
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." -- Brian Clegg, Popular Science
"Bostrom...delivers a comprehensive outline of the philosophical foundations of the nature of intelligence and the difficulty not only in agreeing on a suitable definition of that concept but in living with the possibility of dire consequences of that concept." -- A. Olivera, Teachers College, Columbia University, CHOICE
"Bostrom's achievement (demonstrating his own polymathic intelligence) is a delineation of a difficult subject into a coherent and well-ordered fashion. This subject now demands more investigation."--PopMatters
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
About the Author and Other Contributors
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
The first chapter treats past developments and the state of the art in AI research. It is well written, but omits important aspects of current AI-related research: scientific progress in the various areas of machine learning (clustering, policy development via dynamic programming, classification, natural language processing, etc.) is not covered in sufficient detail. Which of these special-purpose skills are important ingredients for an AGI, and which are problems that are themselves AI-complete? I would have appreciated a bit more on that topic.
The second chapter, "Paths to superintelligence", outlines several possible ways to get there, but is not very convincing about which of them is likely. The reader is well informed, but left in a state of "OK, but Bostrom himself does not seem to believe that any of them is likely to achieve superintelligence in the next 50 years." Loosely connected, but much better written, chapter 3 deals with different forms of superintelligence.
The fourth chapter deals with the "kinetics of an intelligence explosion" and is again very vague: both accelerating and decelerating effects (nicely matched with all possible paths to superintelligence) are discussed at length, and again the reader is left with the feeling that absolutely no prognosis is possible. Bostrom himself ends the chapter with "although a fast or medium [speed] takeoff looks more likely, the possibility of a slow takeoff cannot be excluded".
Chapter 5 marks a transition in writing style: Bostrom changes from a very neutral, tentative tone to a highly convincing one. If I had to guess, I would say that this and the following chapters are at the core of his own research interests. Bostrom makes a very convincing case that as soon as a superintelligence is created, it will very likely take control of the world. Chapter 6 briefly deals with the possible (super-)capabilities such a superintelligence would develop and how it could use them to take control.
Chapter 7 contains the important orthogonality hypothesis, i.e., that there is no reason to believe that a superintelligence will have high moral standards. It then discusses important instrumental goals (i.e., goals that must be achieved in order to reach the intelligence's ultimate goal, whatever that may be); count self-preservation and resource acquisition among them, for example. The following chapter then shows that even a non-malevolent superintelligence may destroy everything dear to us or perform otherwise morally terrible actions (e.g., simulating what humans wish requires simulating humans, and terminating such a simulation could then easily amount to genocide). Both chapters are extremely well written and captivating, an easy and convincing read. In the same line of thinking, chapter 11 discusses scenarios in which not one but multiple superintelligences come to power, a world in which humankind is a mere slave race. In contrast to the previously mentioned chapters, this one again lacks confidence and seems to paint a rather unrealistic picture.
Chapter 9 tries to illustrate possible ways to control a superintelligence and, more importantly, illustrates how and why they will probably fail. Chapter 10 merely categorizes superintelligences by the environment they control. Chapter 12 connects with chapter 9, assuming that the control problem is solved: how should we design and control our superintelligence? What kind of morality should we install? This chapter very nicely explains the important problem that morality is not a (mathematically) well-defined object, and that we ourselves currently lack an operational definition. Still, the chapter presents a few interesting ideas about how to "load" our values into the superintelligence. Chapter 13 augments this by suggesting more indirect methods for the value loading problem.
Chapter 14 finally deals with "what we should do now": should we continue researching, or should we do our best to stall progress? Although several scenarios are presented, no definite conclusion is reached. This nicely summarizes my impression of chapters 2-4 and 13-14: Bostrom jumps between pros and cons, eager to give a complete picture (which is always better than a one-sided one). But this jumping around weakened much of the book's effect on these topics. By mentioning something good and something bad in consecutive sentences, the book evokes a feeling of neutrality (probably more than would have been evoked by first listing ALL pros and then ALL cons). Maybe not the best strategy for a topic where the credo should be: "Better safe than sorry."
First the book covers the various paths by which a superintelligence could come about; building on that, the author plays through various scenarios concerning, e.g., the speed of the "take-off", or presenting different control mechanisms.
Outstanding read ... in my personal top two with Guns, Germs, and Steel