79 of 83 customers found the following review helpful
- Published on Amazon.com
Format: Hardcover
How do we know if our risk management methods are working? Would we notice if they were not working? What are the consequences if they are not working? These are the three basic questions that Douglas Hubbard asks in his book The Failure of Risk Management.
In this book Mr. Hubbard lays out the basics of risk management and explains why many risk management methods are worse than useless. He also provides some ideas and first steps to fix the problem.
Here's a brief walk through 'The Failure of Risk Management':
Part I introduces the history of risk management and the problems with modern risk management methods. Events treated as independent, for instance, are often not independent at all. This common-mode failure goes unaccounted for by many managers, yet can be devastating in an emergency.
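A quick Monte Carlo sketch can show why this matters. All the numbers below (the per-system failure rates and the shared "power outage" cause) are hypothetical, chosen only to illustrate how a common cause inflates the joint failure probability far beyond what an independence assumption predicts:

```python
import random

# Two "redundant" safety systems, each failing 1% of the time.
p_fail = 0.01
# A shared (common-mode) cause -- say, a power outage -- that
# knocks out both systems at once 0.5% of the time.
p_common = 0.005

trials = 100_000
random.seed(42)

both_indep = 0
both_common = 0
for _ in range(trials):
    # Naive model: the two failures are independent.
    if random.random() < p_fail and random.random() < p_fail:
        both_indep += 1
    # Common-mode model: a shared cause can fail both together.
    common = random.random() < p_common
    a = common or random.random() < p_fail
    b = common or random.random() < p_fail
    if a and b:
        both_common += 1

print(f"P(both fail), independence assumed: {both_indep / trials:.4%}")
print(f"P(both fail), with common mode:     {both_common / trials:.4%}")
```

Under independence the joint failure rate is around 0.01%; with the shared cause it jumps to roughly 0.5%, about fifty times higher, which is exactly the kind of surprise the book describes.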
Part II of the book goes into depth on some of the problems and failures of risk management, and to me it was the most interesting part of the book. Chapter 4, called The "Four Horsemen" of Risk Management, describes the differences between what the author considers the four main classes of risk managers: actuaries, "war quants," economists, and management consultants. Each group has distinctly different methods and areas of expertise, as well as different levels of validation.
Chapter 5 is about how risk should be defined, and why different people may actually be talking about different things when they discuss volatility and risk. Chapter 6 breaks down why humans are not good at subjective estimation (which lays the groundwork for later chapters introducing quantitative methods). There are a few "calibration" tests available for you to see how overconfident you are in your decision making. These are pretty interesting, and even after reading about overconfidence I still did poorly on them.
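The flavor of those calibration tests is easy to reproduce. In the book's version you give a 90% confidence interval for each trivia question and then check how many intervals actually contain the truth. A minimal sketch of the scoring, using entirely made-up quiz data:

```python
def calibration_score(intervals):
    """Fraction of 90% confidence intervals that contain the true value.

    intervals: list of (low, high, truth) tuples from a calibration quiz.
    A well-calibrated estimator should land near 0.90; overconfident
    estimators give too-narrow ranges and score much lower.
    """
    hits = sum(1 for low, high, truth in intervals if low <= truth <= high)
    return hits / len(intervals)

# Hypothetical quiz results from an overconfident estimator:
# (low guess, high guess, true answer) -- every value is invented.
quiz = [
    (10, 20, 25),
    (100, 150, 148),
    (1, 5, 9),
    (30, 60, 45),
    (200, 250, 300),
    (0, 10, 4),
    (50, 55, 80),
    (5, 8, 7),
    (900, 1000, 1200),
    (12, 18, 15),
]

score = calibration_score(quiz)
print(f"Hit rate: {score:.0%} (a calibrated 90% interval should hit ~90%)")
```

A 50% hit rate on intervals that were supposed to contain the truth 90% of the time is, per the book, a typical uncalibrated result.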
Chapters 7, 8, and 9 cover problems with subjective scoring methods, problems with describing one-off events, and problems with some quantitative models. The author discusses "black swans," as described by Nassim Nicholas Taleb, and how they relate to modeling. People often believe such events can't be modeled, but the author argues this is not so.
The last section of the book, Part III, gives some ideas on how to fix risk management. Adding empiricism is a big start, as is calibrating subjective human inputs. Many companies build and use models, but never bother to check how well those models have performed in the past. I will leave the rest of the solutions for you to read in the book.
First off, the author says this book is geared towards all types of risk management across all types of industries, and I think this is true. The author uses a wide variety of examples, from airplane engine failures to volcanic eruptions. Still, I feel the book is geared more towards enterprise risk management and less towards already quant-heavy fields such as actuarial science or credit risk management. But it was an interesting read nonetheless.
It seems like in the past 20 years there have been several so-called "once-in-a-lifetime" events, such as the floods of Hurricane Katrina or any of the financial crises of 1987, 1998, 2000, or 2008. I wish I had the money to buy this book for every person who ever said "no one saw that coming."
I think this is a great book for anyone who deals with the potential for risk, loss, or damage, whether financial, personal, or physical. When the stakes are high we should be careful about relying on a risk matrix developed by a management consultant, and Douglas Hubbard will tell you why. If you work in risk management, or if you have influence on the operational strategy of some organization, then this book is a must-read.
28 of 29 customers found the following review helpful
- Published on Amazon.com
Format: Hardcover
I have been involved in business consulting, investment management, business valuation, and corporate governance for most of the past 25 years, and I can say without hesitation that Doug Hubbard's book The Failure of Risk Management is an outstanding and elucidating work. I have never been a risk manager per se, but I have frequently been deeply involved in risk assessment and risk management activities, so I do have firsthand experience in this topic.
This book is an eye opener from the outset. In Part One of his book ("An Introduction to the Crisis") Hubbard begins with fundamental, obvious questions about risk management that everyone (not just risk managers!) should be asking. For example: How do you know that your risk management program is effective? Would anyone in your organization know if your risk management program didn't work? (...and how would they know - and define - that it wasn't working?). These are very simple, obvious questions, yet I have never heard them asked by management teams or even members of boards where I have served as director. Alas, there is a huge "placebo effect" in so much of what passes for risk management nowadays - perhaps that is why it is so popular.
For example, consider the following: if risk management programs really do work, then it seems logical to assume that companies in a given industry with a (self-proclaimed) "highly effective" risk management program would show greater shareholder returns, less earnings volatility, and better safety and regulatory compliance records than peers who lack such a program. Yet there appears to be no valid evidence that current risk management practices, taken as a whole, improve overall corporate performance. The evidence just isn't there.
In Part Two ("Why It's Broken"), Hubbard provides a thorough and convincing overview of the many shortcomings of modern risk management practices. As a self-proclaimed "quant," he strongly endorses quantitative analytics as the most effective approach to both measuring risk and implementing risk management programs. His approach is compelling and convincing; after all, if we can't measure accurately, how can we rely on our system of "assessing" (i.e., measuring) risk? It sounds pretty obvious, doesn't it? Without metrics, what tools do we have other than generalizations, hunches, intuition, and "gut feel"? Sure, certain qualitative techniques are helpful, but qualitative risk analytics is really effective (in my view) only for the most obvious risks, and therefore little better than having no risk management program at all. Indeed, Hubbard makes a compelling argument that ineffective risk management can be worse - possibly much worse - than having no risk management program at all.
Part Two also includes concepts that Hubbard brilliantly applies to risk management practices. These include certain characteristics of human nature, such as a proven tendency to be overconfident in our estimates (of risk, but of other things as well), that must be acknowledged and addressed in order for risk management programs to work effectively. He also provides a practical method of adjusting, or "calibrating," for such overconfidence. Similarly, there is a fascinating discussion of risk correlations and how risk events seldom materialize in isolation from one another. Consider (my own example) certain risk correlations in mortgage banking. Banks that invested in mortgage-backed securities no doubt undertook some sort of risk analysis of those investments. They also had risk management systems in place for their mortgage lending business. But how many lenders tied these two risk programs together and properly concluded that a collapse of one market would also mean the collapse of the other? Thus, it is not just a case of accurately assessing and managing individual risks, but also of considering the extent to which there might be a "domino" or "cascading" effect among different risk factors.
In reading Part Two (especially Chapters 6 and 9), it occurred to me that this book should be read by anyone and everyone involved in investing or lending money.
As one might expect, Part Three of Hubbard's book ("How to Fix It") embraces a scientific and quantitative approach to improving risk management. Once you get to this point in the book, you will find it very difficult to disagree. Another important concept Hubbard introduces is that of language and communication with respect to risk. Because risk is a potentially murky and subjective topic (if not downright Byzantine at times), risk management systems require clear and concise language and terminology to be effective. Thus, if two different managers in the same factory concur that a risk event is "very likely" to materialize, we should not assume that they mean the same thing by "very likely." One may feel that the odds are one in three, while the other feels the odds are one in ten.
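That disagreement is easy to quantify. A tiny sketch, using the one-in-three versus one-in-ten readings of "very likely" from the example above and a hypothetical loss figure:

```python
# Two managers both label a risk "very likely" but mean different odds.
interpretations = {"manager_a": 1 / 3, "manager_b": 1 / 10}
loss_if_event = 3_000_000  # hypothetical loss in dollars

for who, p in interpretations.items():
    expected_loss = p * loss_if_event
    print(f"{who}: 'very likely' = {p:.0%} -> expected loss ${expected_loss:,.0f}")
```

The two managers agree on the words yet differ by a factor of more than three in implied expected loss ($1,000,000 versus $300,000), which is exactly the ambiguity Hubbard wants clear terminology to eliminate.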
Hubbard is clearly on target when he proposes that risk managers apply scientific methods to risk management. His suggestions on how to do this are fairly simple and practical. Without such methodologies, risk managers are sailing through dense fog with an unreliable compass. You might even feel that you are making great headway, but if you can't measure where you are going, you will never know if you are really making any progress.
Finally, one of the greatest benefits to me in reading this book has less to do with the specifics of risk management and a lot more to do with the way people think. Consider, for example, why your sales team frequently falls short of their sales projections, or why so many portfolio managers buy stocks near their highs and sell near their lows. Or why risk management programs are so popular, and yet seldom work. Hubbard provides a brilliant and penetrating look into the human mind in the context of business decision making as a whole - not just with respect to risk. For me, this was an excellent "upside surprise" to this book. I finished reading this book several months ago, and I still think about it all the time. It has made a lasting and beneficial impression that I will never forget.
45 of 54 customers found the following review helpful
Dennis J. Boccippio
- Published on Amazon.com
Format: Hardcover
I had high expectations for this book after reading "How to Measure Anything", and unfortunately none of them were met. My very short review would state: were it not for those high expectations, I would have stopped reading the book about a third of the way in, but based on past performance, I stuck with it to the end. That was a mistake.
The defects in Hubbard's second book are many. First and foremost, it is simply not pleasant to read. While "How to Measure" adopted the posture of a helpful tutorial, "Failure" attempts to rehash most of the same material, albeit from a posture of criticizing almost every risk analysis method Hubbard has not personally worked on. The tone is shrill, smug, and "low emotional intelligence quotient." We are treated to several "I won't name names but you know who you are" diatribes, a personal critique of Nassim Nicholas Taleb for being too abrasive in delivery (which he is ... but Hubbard delivers this assessment with apparently no hint of irony), and ever more stories of how Hubbard publicly shames clients during working meetings into admitting they do not know as much as he does. If that is one's corporate approach to change management, it would seem Hubbard is your man. Ironically, all of these things suggest that a sensibility for the actual "people systems" of not just management but implementation is completely lacking - and this undermines Hubbard's credibility as an expert on anything other than analytic techniques. This may be an unfair personal assessment, but Hubbard does little in the book to communicate even rudimentary management sensibilities, and the burden of proof - especially when exploring a topic such as this - should be his.
Hubbard spends an inordinate portion of the book repeatedly - redundantly - making the same self-evident point that low-fidelity risk analysis methods such as scoring approaches are, well, low-fidelity, and subject to bias. This is tautological. Even for those consumers of the methods who haven't thought hard about the issue, the point can be made in five pages; it does not need 150. (Note that it is at least that long before solutions begin to be offered.) Even worse, Hubbard's primary critique, other than offending "first principles" sensibilities, is that these techniques have not been proven to have measurable impacts on performance. This might be an interesting line of inquiry had Hubbard actually done any new research on the subject, joined with management consultants who had, or - more importantly - demonstrated the benefits of using the more rigorous, probabilistic risk assessment techniques which he advocates. He does not. He alludes to this in literally the closing chapters of the book, but never actually tackles the challenge of performance-based assessment. Simple techniques are bad because they are not as rigorous or unbiased as the techniques he would advocate - therefore they must (or perhaps may?) do more harm than good. Difficult to say, as this issue is delivered rhetorically rather than rigorously.
The biggest failure of "The Failure of Risk Management" is that it mostly declines to tackle actual management. As Hubbard himself seems to realize and admit very late in the book, he has written a text about risk analysis, not risk management. Ultimately, the content - even if it were not largely a rehash of the material from "How to Measure" - is much, much, much thinner than the title, which could have promised a very interesting exploration of modern (or not so modern) management techniques. A further challenge is that, judging from Hubbard's anecdotes, he appears to view even risk management (read: analysis) as something done solely for the purpose of decision support for senior executives. No mention is made of risk management as a tool for not just C-suite executives but also the project managers and employees who actually have to manage and mitigate risks. This elision allows Hubbard to even more stridently dismiss all low-fidelity techniques out of hand. (Make no mistake - scoring approaches and their efficacy do need hard scrutiny. Unfortunately, Hubbard does not provide it; he simply shouts for others to perform it.)
In summary - if you have read "How to Measure Anything", you have read 90% of what Hubbard has to say, and probably enjoyed it more than you will enjoy this book. If you have an agenda to promote probabilistic risk assessment within your organization, to the detriment of other approaches, this book will provide you ample rhetoric and theory, but no actual evidence or ROI documentation, and very little in the way of tangible implementation tools or techniques to go forward. It is an opportunity missed.