Top critical review
7 people found this helpful
Very thought-provoking, but fails to address the real problems of creating a mind
March 2, 2015
Kurzweil takes the reader on a tour of selected neuroscience research and concepts for reverse-engineering the brain, with a focus on the neocortex's neural architecture for pattern recognition. As always, Kurzweil is very enthusiastic about artificial intelligence and the pace of technological progress, illustrated by his belief in the law of accelerating returns in the field of information processing. It is very tempting to share in this enthusiasm, but key aspects of the workings of the human mind are unfortunately not addressed in the book. It might be true that information processing will advance in leaps and bounds, so that cars will find their way from A to B better and more efficiently than human drivers, that computers will be able to answer any knowledge question in the blink of an eye, and so on. They might even do this using techniques derived from architectural features of the human brain. But the intelligence that can solve these kinds of problems will always be a fundamentally different kind of intelligence from the one our brain creates. This can be illustrated by the nature of machine speech recognition, a field in which Kurzweil is an expert and which also features prominently in the book. I have been using this technology for more than ten years, and I must say that progress peaked about seven years ago. It may be true that recognition rates have been somewhere above 90% for continuous speech recognition since then. This is very impressive, and being handicapped, I much appreciate the technology. But the nature of the mistakes has not changed, and it is not likely to change merely by improving processing power or adding new layers of pattern recognition. Any child will aptly label the mistakes made by machine speech recognition as stupid, because they make for nonsensical sentences. Speech recognition among humans is always a process of dialogue, a search for meaning, a search for expression.
The human speaker and the human listener work along the lines of their personal agendas. Ideally (but far from necessarily), they will try to match these agendas while communicating. This is something fundamentally different from bottom-up pattern recognition. Everybody can try this out for themselves: first dictate some thoughts to your computer, then ask a friend to write the same dictation down for you. The human friend will not just write down whatever she thinks she heard, but will try to get into your head, to anticipate your meaning and match that meaning to what she hears acoustically. There is also no need to dictate punctuation marks, for it will be as if your friend were writing down her own thoughts. If lost, or unsure of your meaning, she will pause and ask for clarification ("I am not familiar with this Hungarian name, can you spell that?"). Your human scribe will certainly not reach 100% accuracy, just like her machine counterpart, but her sentences will never turn out to be nonsensical. Nor will you have to go through the tedious process of selecting and correcting the mistakes one by one, then correcting the mistakes made during correction, and so on. The corrections will be a fluent part of your conversation.
This, of course, touches on the field of top-down processing, which Kurzweil briefly mentions as "a very important point" at the beginning of the book but sadly nowhere further elaborates. "We are continually making predictions," says Kurzweil, but so far no machine does this at the level of complexity or intrinsic motivation seen in human minds. Certainly my speech recognition software is not making any predictions about what I am going to say next in terms of meaning. Again, this is different from using texts I have written before as templates for pattern recognition (it is even counterproductive to use letters to my daughter or to my insurance company to help it understand my comments on a book about neuroscience). What is asked for is to share in my thoughts and purposes and to make predictions based on that attempt. Kurzweil writes that Siri's speech recognition, for example, "works impressively for a first-generation product, and it is clear that this category of product is only going to get better." This does not seem so clear to me, as long as we do not address the problem of top-down processing. I am not saying that this is an unsolvable problem, not even a problem of unreachable processing power or forever mysterious processing architecture. It is a question of what drives the processing in the first place, of the mind or the machine actively searching for a definition of the problem to be solved. The self-driving car might be much better than us at getting from A to B, but if we don't tell it to, it will not move, just like the horse or ox we used for these purposes in the old days. And they were also kind of smart. All these intelligences, biological or artificial, are so far only extensions of our own selfishly motivated intelligence.
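The distinction I am drawing between bottom-up pattern matching and top-down expectation can be sketched in a toy example (every name, word list, and score here is invented for illustration; this is not how any real recognizer is implemented): two candidate transcriptions that sound nearly alike are indistinguishable to a purely acoustic score, but even a crude model of what the dictation is *about* separates them.

```python
# Toy sketch: acoustically similar candidates for the same stretch of audio,
# as they might appear in a recognizer's n-best list. All scores are made up.
candidates = [
    "recognize speech",      # meaningful in context
    "wreck a nice beach",    # sounds similar, nonsensical in context
]

# Hypothetical bottom-up (acoustic) scores: nearly identical, since the
# phrases sound almost the same.
acoustic = {"recognize speech": 0.51, "wreck a nice beach": 0.49}

# A crude stand-in for top-down expectation: words we expect, given that the
# speaker is dictating about speech recognition. A real system would use a
# language model; this word set is purely illustrative.
context_words = {"speech", "recognition", "dictation", "recognize", "software"}

def context_score(sentence: str) -> float:
    # Fraction of words that fit the assumed topic of the dictation.
    words = sentence.split()
    return sum(w in context_words for w in words) / len(words)

def rescore(sentence: str) -> float:
    # Combine bottom-up (acoustic) and top-down (contextual) evidence.
    return acoustic[sentence] * (1 + context_score(sentence))

best = max(candidates, key=rescore)
print(best)  # the contextually sensible candidate wins
```

Even this toy rescoring picks the sensible phrase, while acoustic scores alone cannot tell the candidates apart. But note that the "top-down" part had to be handed to the program from outside, as a fixed word list; it does not arise from any purpose of the machine's own, which is exactly my point.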
We humans are thrown into a vastly complex and dangerous world, and we are constantly driven to make sense of it, to find or create tools, to gain companions in quests that extend far beyond mere survival. Our brains are vastly complex meaning-creating biological machines; they are not only problem-solving but problem-creating machines, constantly inventing new tasks and fields for learning. Every one of us is uniquely driven by constantly evolving, multilayered, and conflicting needs that we project onto our surroundings. The fallen branch becomes a stick to reach an apple or a weapon to destroy an opponent, because our needs, coupled with our processing power, make it so. The information processing of the human mind is always also affective information processing (and the nascent field of affective neuroscience is still groping about very much in the dark). Unless we put this kind of need into a machine, we will not create true artificial intelligence, we will not create a mind, but just more and more "clever" tools. If Kurzweil knows how to create this kind of individualistic, unpredictable, self-evolving need-machine, he does not tell us in his book. But then again, do we really want to create such a mind artificially? Together with its needs we would also create its frustrations (an intrinsic part of the need-driven problem-solving process) and ultimately a rich bouquet of suffering. And we certainly wouldn't want it to become conscious (that still mysterious "emergent property") of its suffering and our part in it. We humans know all about suffering, and how it can only be made bearable by a passion for life's adventures, or by changing the very architecture of our brain through years of meditation. Do we really want to artificially create a passionate mind? The reverse-engineering of a Buddhist monk's mind wouldn't solve any problems, because it no longer knows any problems. It would just smile at us and pat us on the shoulder.