3 of 3 customers found the following review helpful
- Published on Amazon.com
Format: Hardcover
First, let me note that I have a bachelor's degree in Math, and I have worked in a statistics-related field for the past 15 years. Which is to say, I am comfortable with complex mathematics, but it has been a while since my formal education, and some parts of my training have gotten a little rusty over the years.
I was looking for a book that would refresh my memory a bit on linear algebra, particularly the use of matrix calculations in statistics. This is the third book that I have tried in order to accomplish that goal. The first was Poole's Linear Algebra textbook. While its level wasn't too difficult, its coverage was too broad for the specific goal I had in mind (I didn't love it as a self-learning, or rather self-refreshing, textbook either). Next I tried "Matrix Algebra" by Gentle. I suspect that this will be a book I go back to for reference, as it is very thorough and rigorous, but it was a nightmare to try to learn from. Finally, I found Linear Algebra and Matrix Analysis for Statistics (LAMAS) by Banerjee and Roy. Based on the title, I was hopeful that this book would be more focused on the topic I was interested in, and the flap description - which touted starting at the basics and then heading into complex matters - sounded just right for me.
I'm finding that I have a little bit of a love/hate relationship with this book. First: the good. After striking out twice before finding this book, I'm really enjoying the overall flow of the instruction and the chapters. Topics are brought up in what seems like a logical order, and time is taken to build an understanding before moving on to the next subject. Most Linear Algebra texts are going to have a chapter or two at the beginning explaining the fundamentals of vectors and matrix math, but often the books just list the basics and then jump right into the complex stuff. Another frustration I have with a lot of upper-level math texts is that the authors often think that just showing a formula is enough to give the reader understanding. Yes, I can reason through what a formula is saying, but for me (especially given the growing years since my last math class) I find it much easier to learn when I can see that formula in action with actual numerical examples (this was one of my frustrations with Gentle). In LAMAS, the authors don't just list the basic functions of vectors and matrices. They spend time explaining the basics and then give numerical examples to help you work through the formulas. And because of the smart flow of the book, you start out doing basic examples, but before you know it you're doing more complex things. In terms of a book helping me self-(re)learn Linear Algebra, especially matrix math for statistics, this is exactly what I was looking for.
The bad: the editor really dropped the ball with this book. There are a decent number of mostly harmless punctuation and grammar errors (e.g., "Each step of the Gauss-Jordan method forces the pivot to be equal to 1, and then annihilates all entries above and below the pivot are annihilated." - pg 43), but the true crime is that there are sometimes errors in the formulas themselves. Some of these are easy to spot (e.g., "Left-hand distributive law: Let A be an m*p matrix, B and C both be p*n matrices. Then, A(B+C) = AB + BC." - pg 12); others are not. For example, when using the Type III Elementary Matrix formula on page 46, the answer I get when working through the example is always the transpose of the answer shown in the book. The answer shown in the book is correct; I suspect that an i and a j have been switched somewhere in the formula. (It's possible that I'm doing something wrong, though I've worked it through several times. But once a book shows a propensity for errors in its formulas, you never know whether the issue is your own understanding or a typo in the book.)

One last example: when discussing Crout's Algorithm on page 71, in the discussion of the u vectors and the l vectors, the book uses the subscript i to refer to rows and j to refer to columns, as it had done up to that point. When giving the generalized formulas at the bottom of the page, it starts with Uij = ..., keeping the same convention. Then it states Lji = .... At this point I wasn't sure whether they had simply decided to switch the order of presentation, but after working through the example, it turns out that, just for this formula, j refers to the row and i refers to the column, for whatever reason.

Another annoyance: while there are problems at the end of each chapter to help reinforce the learning, no answers are given for any of them.
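For anyone who wants to sanity-check the page-71 formulas against a working reference, here is a minimal sketch of Crout's method in NumPy (my own illustration, not the book's notation), with i consistently indexing rows and j indexing columns - the convention the book otherwise follows:

```python
import numpy as np

def crout(A):
    """Crout LU decomposition: A = L @ U, with U having a unit diagonal.

    i always indexes rows and j always indexes columns below.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        # Column j of L, rows i = j..n-1.
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U, columns k = j+1..n-1, scaled by the pivot L[j, j].
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = crout(A)
assert np.allclose(L @ U, A)   # L lower triangular, U unit upper triangular
```

Working a small example like this by hand against the printed formula is exactly what exposes the swapped subscripts.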
In some ways, I feel like this is actually making me learn the topics better, as I am having to pay very close attention to the details and work through every example either by hand or in Excel. Still, it's no way to write a textbook. In this respect, it's helpful to have Gentle's book on hand to cross-check some of the formulas (though he tends to use different notation). All in all, I still like this book, and I am still finding it the most helpful for me for re-learning the material. Once I know the material I will probably use Gentle's book for reference. With a more careful editor, this could be a great book. I bet the second edition will be wonderful.
***Update: I wrote the first part of this review before finishing the book (which, in retrospect, was a bad idea), and I found myself getting more frustrated with the book as I progressed. First, one of the things I praised the book for above (giving numerical examples) almost entirely goes away in the back half of the book. Second, and more frustrating for me, the book never actually makes the connection between the linear algebra and the statistics. Thus, without numerical examples, and without a direct connection to statistics, you're left with "This is a such-and-such matrix: [gives mathematical definition]. This is a such-and-such transformation of the matrix: [gives formula]. Based on these definitions, here's Lemma 5.3...." This is precisely the kind of approach that made me give up on Gentle. If you don't give me any information about the actual functional uses of these things (preferably with numerical examples), then it becomes a real grind to work through. And even if you do work through it, you're not really sure what you just learned, because no application in statistics is given. Even more tantalizingly, the authors frequently say things like, "...this is a critical conclusion [or process, or definition, or transformation, etc.] for Linear Algebra with many uses." Great! Please tell me where. The title of this book, after all, does say "...For Statistics"!
My fear that this was not the book I was hoping for began in the chapter titled "More on Orthogonality." In that chapter the authors mention the normal equations, which I was already familiar with, and I was surprised to find that they didn't discuss the connection to regression. Then, at the end of the chapter, they state, "Orthogonality, orthogonal projections and projectors, as discussed in this chapter and Chapter 7, play a central role in the theory of statistical linear regression models.... While it is tempting to discuss the beautiful theory of statistical linear models, which brings together probability theory and linear algebra, we opt not to pursue that route in order to maintain our focus on linear algebra and matrix analysis." (pg 252-253) If that is the case, then I wish they had just titled the book "Linear Algebra and Matrix Analysis." I won't change my rating just because the book didn't meet my specific needs, although I do find the title misleading. I guess my search continues...
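For anyone left hanging by that passage, the missing connection is short: in the linear model y = Xb + e, the least-squares estimate solves the normal equations (X'X)b = X'y, and the fitted values Xb are precisely the orthogonal projection of y onto the column space of X. A quick NumPy sketch of my own (not from the book):

```python
import numpy as np

# Design matrix: an intercept column plus one predictor.
x = np.arange(5.0)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x        # exact line, so the fit should recover (2, 3)

# Normal equations (X'X) beta = X'y -- the "More on Orthogonality"
# machinery in action: X @ beta is the orthogonal projection of y
# onto the column space of X.
beta = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(beta, [2.0, 3.0])
```

Three lines of statistics would have justified the whole chapter; it's a shame the book leaves them out.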
- Published on Amazon.com
Format: Hardcover
I have a Bachelor's in Computer Science and am currently pursuing an MS with an emphasis on statistical data mining and modeling. While I had exposure to undergraduate linear algebra from the text by Gilbert Strang, I was looking for a text that would help me get a sound grasp of the topics in linear algebra most relevant to statistical modeling and data mining.
Compared to other texts of a similar flavor, I found this to be exactly what I was looking for. The book is beautifully written, and what is especially attractive to me is that every theorem is proved in detail and, often, via multiple approaches. Many of the proofs are more elegant and clean than what I have seen in my earlier courses (e.g., the equality of row and column ranks is revisited from different angles). Certain topics, such as rank factorization, oblique projectors, orthogonal projectors, and positive definite matrices, are covered in much greater detail than in other texts. Another nice feature is that algorithms for matrix computations are explained in adequate detail, although not quite at the level of specialized texts such as Golub and Van Loan.
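To give a flavor of the projector material: an orthogonal projector onto the column space of a full-column-rank X is P = X(X'X)^(-1)X', and its two defining properties (idempotent and symmetric) are easy to verify numerically. A small sketch of my own, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))   # full column rank (almost surely)

# Orthogonal projector onto col(X): P = X (X'X)^{-1} X'.
P = X @ np.linalg.solve(X.T @ X, X.T)

assert np.allclose(P @ P, P)      # idempotent: P is a projector
assert np.allclose(P, P.T)        # symmetric: the projection is orthogonal
assert np.allclose(P @ X, X)      # P fixes everything already in col(X)
```

An oblique projector satisfies the first property but not the second, which is the distinction the book develops in detail.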
On the negative side, I agree with the other reviewer that there are some typos, although they are mostly obvious and not misleading.
In summary, a truly unique text that covers a wide range of topics with rigor and elegance at an intermediate level: somewhere between the undergraduate applied linear algebra texts and the more formal matrix analysis texts targeted at mathematicians.