on April 29, 2000
Read the other reviews below for more details and varied viewpoints. Here I'm assuming that you will hire a reputable consultant or tutor either to translate the book into more-or-less ordinary English or to teach you the mathematics behind it. Neural networks are important for everybody to understand, because learning is one of the important directions that computers and robotics are taking. As you move into this book, you'll discover the important categories that such learning machines fall into: learning with a teacher (that is, with some examples for the machine to learn from) or without one (with no such examples), also called supervised versus unsupervised learning. There's also learning with or without feedback, including subtypes such as feedforward networks with short-term memory, associative memories, and recurrent networks, which work with input-output mappings or relationships.

Even high school and college students who wonder why they have to learn statistics and probability may be astonished to discover that some of the most effective learning machines are built on statistics and probability. They fall into various categories, such as maximum entropy (literally maximizing the entropy), maximum likelihood (the everyday sense of "likelihood" is only a rough approximation of the much more precise mathematical one), minimizing the energy (Hopfield networks), minimizing the mean square error (literally minimizing the squares of statistical errors, though there is more to it), and so on. Into the last category fall (mostly) Kalman filter-predictors, which I worked on at the Defense Department in the 1980s. I'm currently more interested in combining maximum entropy methods with the others, since my field of logic-based probability (LBP) is closely related to maximum entropy. See some of my other reviews for discussion of these topics, including statistics and probability.
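To make the "minimizing mean square error" category mentioned above a little more concrete, here is a minimal sketch of the least-mean-squares (delta) rule, one of the simplest learning procedures in that family. The function name, learning rate, and toy data are my own illustrative choices, not anything from the book:

```python
def lms_train(samples, lr=0.05, epochs=500):
    """Least-mean-squares (delta rule): for each example, nudge the
    weights along the negative gradient of that example's squared error."""
    n = len(samples[0][0])
    w = [0.0] * n  # weights, one per input component
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear prediction
            err = target - y                               # prediction error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical noiseless data generated by y = 2*x1 - x2 + 0.5.
data = [((x1, x2), 2 * x1 - x2 + 0.5)
        for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
w, b = lms_train(data)
```

On this noiseless toy problem the learned weights settle close to the generating coefficients (roughly 2, -1, and 0.5); with noisy data the same rule minimizes the mean square error only on average, which is where the statistical view the review alludes to comes in.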
Haykin and Prentice Hall have done a very good job with this difficult and encyclopedic material.
on January 15, 2001
An excellent book, explaining the state of the art in neural networks at a very high scientific level. The choice of subjects is up to date and demanding. The chapters are well structured, leading the reader from easily understood basics to highly sophisticated material. The formulas, diagrams, textual explanations, and the problems at the end of each chapter are superior and of high educational value.
With this book the reader can be sure of gaining an up-to-date overview of the necessary and important fields of neural networks and neural computing.
This book is well suited not only for advanced students seeking a comprehensive overview of the field of neural networks, but also for scientists already working in the area who want to complete and update their knowledge.
on September 14, 1998
A wonderfully well-written, insightful treatment of artificial neural networks. Beginning from the basics, the author sets forth both a technological and a historical perspective for understanding this multidisciplinary subject area. The book is written from a practical engineering perspective and comprehensively spans the entire discipline of modern neural network theory. A+