A very interesting book, and an important one - but it could have been written a bit better.
On 22 May 2016
It might not be the most pleasurable read, but sticking with it until the end is worthwhile. The book has a great logical structure, and most chapters end with a summary - that is exactly how such a book should be written. Beyond this, it is hard to give a general critique, since many chapters are extremely well-written:
The first chapter treats past developments and the state of the art in AI research. It is well written, but omits important aspects of current AI-related research: scientific progress in the many areas of machine learning - clustering, policy development via dynamic programming, classification, natural language processing, and so on - is not covered in sufficient detail. Which of these special-purpose skills are important ingredients for an AGI, and which are problems that are themselves AI-complete? I would have appreciated a bit more on that topic.
The second chapter, "Paths to superintelligence", outlines several possible ways to get there, but is not very convincing about which of them is likely. The reader is well informed, but left in a state of "OK, but Bostrom himself does not seem to believe that any of them will achieve superintelligence in the next 50 years." Loosely connected, but much better written, chapter 3 deals with different forms of superintelligence.
The fourth chapter deals with the "kinetics of an intelligence explosion", and is again very vague: both accelerating and decelerating effects (nicely matched with all possible paths to superintelligence) are discussed at length, and again the reader is left with the feeling that no prognosis is possible at all. Bostrom himself ends the chapter with "although a fast or medium [speed] takeoff looks more likely, the possibility of a slow takeoff cannot be excluded".
Chapter 5 marks a transition in the writing style: Bostrom changes from a very neutral, noncommittal tone to a highly convincing one. If I had to guess, I would say that this and the following chapters are at the core of his own research interest. Bostrom makes a very convincing case that as soon as a superintelligence is created, it will very likely take control of the world. Chapter 6 briefly deals with the possible (super-)capabilities such a superintelligence might develop and how it could use them to take control.
Chapter 7 contains the important orthogonality thesis, i.e., that there is no reason to believe that a superintelligence will have high moral standards. It then discusses important instrumental goals (i.e., goals that must be achieved in pursuit of the intelligence's ultimate goal, whatever that may be) - self-preservation and resource acquisition, for example. The following chapter then shows that even a non-malevolent superintelligence may destroy everything dear to us or perform otherwise morally terrible actions (e.g., predicting what humans wish requires simulating humans - terminating such a simulation could then easily amount to genocide). Both chapters are extremely well-written and captivating, an easy and convincing read. In that line of thinking, chapter 11 discusses scenarios in which not one but multiple superintelligences come to power, and a world in which humankind is a mere slave race. In contrast to the previously mentioned chapters, this one again lacks confidence and seems to paint a rather unrealistic picture.
Chapter 9 tries to illustrate possible ways to control a superintelligence and, more importantly, illustrates how and why they will probably fail. Chapter 10 merely categorizes superintelligences by the environment they control. Chapter 12 connects with chapter 9, assuming that the control problem is solved: how should we design and control our superintelligence? What kind of morality should we instill? This chapter very nicely explains the important problem that morality is not a (mathematically) well-defined object, and that we currently lack an operational definition ourselves. Still, the chapter presents a few interesting ideas about how to "load" our values into the superintelligence. Chapter 13 augments this by suggesting more indirect methods for the value-loading problem.
Chapter 14 finally deals with "what we should do now": should we continue researching, or should we do our best to stall progress? Although several scenarios are presented, a definite conclusion escaped my attention. This nicely summarizes my impression of chapters 2-4 and 13-14: Bostrom jumps between pros and cons, eager to give a complete picture (which is always better than a one-sided one). But this jumping around weakened much of the book's impact on these topics. By mentioning something good and something bad in consecutive sentences, the book evokes a feeling of neutrality (probably more so than first listing ALL pros and then ALL cons would have). Maybe not the best strategy for a topic where the credo should be: "Better safe than sorry."