- Hardcover: 735 pages
- Publisher: Cambridge University Press (2 February 2012)
- Language: English
- ISBN-10: 0521518148
- ISBN-13: 978-0521518147
- Dimensions and/or weight: 18.9 x 3.7 x 24.6 cm
- Average customer rating: 2 customer reviews
- Amazon Bestseller Rank: No. 44,065 in Foreign-Language Books
Bayesian Reasoning and Machine Learning (English) Hardcover – 2 February 2012
'This book is an exciting addition to the literature on machine learning and graphical models. What makes it unique and interesting is that it provides a unified treatment of machine learning and related fields through graphical models, a framework of growing importance and popularity. Another feature of this book lies in its smooth transition from traditional artificial intelligence to modern machine learning. The book is well-written and truly pleasant to read. I believe that it will appeal to students and researchers with or without a solid mathematical background.' Zheng-Hua Tan, Aalborg University, Denmark
'With approachable text, examples, exercises, guidelines for teachers, a MATLAB toolbox and an accompanying website, Bayesian Reasoning and Machine Learning by David Barber provides everything needed for your machine learning course. Only students not included.' Jaakko Hollmén, Aalto University
'The chapters on graphical models form one of the clearest and most concise presentations I have seen … The exposition throughout uses numerous diagrams and examples, and the book comes with an extensive software toolbox - these will be immensely helpful for students and educators. It's also a great resource for self-study.' Arindam Banerjee, University of Minnesota
'I repeatedly get unsolicited comments from my students that the contents of this book have been very valuable in developing their understanding of machine learning … My students praise this book because it is both coherent and practical, and because it makes fewer assumptions regarding the reader's statistical knowledge and confidence than many books in the field.' Amos Storkey, University of Edinburgh
About the product
This practical introduction for final-year undergraduate and graduate students is ideally suited to computer scientists without a background in calculus and linear algebra. Numerous examples and exercises are provided. Additional resources available online and in the comprehensive software package include computer code, demos and teaching materials for instructors.
For someone like me with very little background in machine learning, this works well as a self-study textbook. I am still reading the earlier chapters and can recommend it to graduate and PhD students.
That said, CUP could have incorporated more corrections into the book; repeatedly consulting the errata is becoming bothersome, and I hope they rectify this as soon as possible. But apart from that, the book is top notch!
The most helpful customer reviews on Amazon.com
For relative beginners: Bayesian techniques began in the 1700s as a way to model how a degree of belief should be revised to account for new evidence. The techniques and formulas were largely discounted and ignored until the modern era of computing, pattern recognition and AI, now machine learning. The formula answers how the probabilities of two events are related when represented inversely, and more broadly gives a precise mathematical model for the inference process itself under uncertainty, with deductive reasoning and logic as a special case (under certainty, when values resolve to 0/1, true/false, yes/no, etc.). In "odds" terms (useful in many fields, including optimal expected utility functions in decision theory): posterior odds = prior odds * Bayes factor.
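The odds form of Bayes' rule mentioned above can be sketched in a few lines of Python. This is a generic illustration (not taken from the book); the disease-screening numbers are invented for the example.

```python
def posterior_odds(prior_odds, bayes_factor):
    """Odds form of Bayes' rule: posterior odds = prior odds * Bayes factor."""
    return prior_odds * bayes_factor

def odds_to_probability(odds):
    """Convert odds o = p / (1 - p) back to a probability p."""
    return odds / (1.0 + odds)

# Hypothetical example: a condition with 1% prevalence (prior odds 1:99) and a
# test whose likelihood ratio (Bayes factor) is
#   P(positive | condition) / P(positive | no condition) = 0.9 / 0.05 = 18.
prior = 0.01 / 0.99
bf = 0.9 / 0.05
post = posterior_odds(prior, bf)
print(odds_to_probability(post))  # ~0.154: a positive test raises 1% to about 15%
```

This is the sense in which new evidence (the Bayes factor) multiplicatively updates a prior degree of belief.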
For context, I'm the lead scientist at IABOK dot org-- we design algorithms for huge data mining problems and applications. This text is our "go to" reference for programmers not up to speed in many of the new pattern recognition algorithms, including those writing new versions. All the most recent relevant models, from a probability standpoint, are represented here, with a clarity that is stunning. My only criticism (a mild one) is that, when applying Barber's examples to Bodies of Knowledge and data mining, he skips Prolog, backward chaining, predicate calculus and other techniques that are the foundation of automated inference systems (systems that extend knowledge bases automatically by checking whether new propositions can be inferred from the KB as consistent, relevant, etc.).
In the next 20 years, algorithms will rule this planet. If you either want to see the future of your grandkids, or participate in it if you're young, this is a MUST HAVE exploration of where what we used to call AI is now headed. There IS plenty of calculus in this volume, so don't mistakenly think it is "simple" -- but if you put the time in, you can "get it" even if you're a bright undergrad level thinker. The author's goal of training new algorithm programmers is laudable and right on point for where pattern recognition is headed.
With this amount of math, how can we rate it highly for self-study? Easy: unlike most "recipe" books that just give bushels of code or techniques, the author gives the what, where, when, and why of both code and math, not just the how, as his goal is independent, creative contributors who can write their OWN algorithms. There are a few minor UK vs US differences in terminology (event space instead of sample space, for example), but they expand the reader's horizon rather than distract or annoy as some others do. There are others like Bishop and many more that have more recipes, and more compact and difficult math, but you have to be either really good (just show me the recipe) or really bad (I don't know what I'm doing, but can follow this recipe) to benefit from them. This is a happy middle ground that does not disappoint.
One aspect somewhat lacking is a more systematic treatment of computational complexity, given that this and similar books are read by practitioners who need to write, modify, or analyze code implementations.