James Reason's importance in shaping the current view on safety and accidents cannot be overstated. I could not agree more with this claim on the back cover of his latest book. The accident causation model that has brought Mr. Reason global fame, the Swiss cheese model, is at the foundation of most of the risk analysis models in use today.
The Human Contribution adds to James' body of work, but it also summarises much of his previous writing. On the theoretical side, James describes some new risk models and comments on recent developments in risk models developed by others. Some of his comments are particularly interesting; I will share some extra highlights further below.
At the core of his new book are the stories of major accidents, in particular those where `heroic actions' prevented much worse from occurring. The Air Canada Boeing that had to make an emergency landing on an airstrip so small that the pilot had no choice but to gamble on flying it as if it were a glider (something no one had ever dared to try). Captain Rostron, who rescued the survivors of the Titanic with an old ship that could easily have suffered the same fate. There are many great stories of heroic behaviour, all told by James in a very clear and factual manner.
The reason James puts such emphasis on these stories becomes clear in his chapter on individual and collective mindfulness. The tension between the concepts of `human as hazard' and `human as hero' is what Professor Reason has on his mind. His work on accident causation, which has led to the incredibly popular Tripod Beta incident analysis method, has contributed to organisations treating their employees mainly as hazards. He observes a trend towards less and less trust in human beings, and a kind of systems thinking that also reduces the chance that people will contribute positively to safety. James concludes that `heroic behaviour' under extreme circumstances doesn't just happen: an organisation needs to do a lot to create the right basis for it. An organisation with a predominantly `human as hazard' mental model will most likely do the opposite. This is a key point of his book.
I agree with Professor Reason that the `human as hazard' thinking has completely gotten out of hand. Quite a few organisations, both in government and in the Global Fortune 500, are creating such an extreme `one way of working' culture (stop thinking and do exactly as prescribed) that it has led to LESS safety and to a dehumanising treatment of their employees.
Another key point of The Human Contribution is its conclusion:
"As I come to write the final conclusions of this book, I am conscious of how ragged and inconclusive it is. I have provided you little in the way of formulae or prescriptions for safer operations. But, at least, I hope you would be suspicious of anything I (or any consultant) might have offered in this regard."
Now, before I end with a list of the highlights of his book, I want to stress that James is underselling himself here. Professor Reason knows so much that he truly understands how little he really knows. His conclusion is a clear warning: beware of risk management consultants promising simple solutions and providing overconfident answers. In the real world, risk management is complex and seemingly muddy. Ambient multi-causality (all the factors in the world that can potentially contribute to an accident but have never seemed to be a problem) keeps combining in new ways and catching even the best risk managers off guard. Simple formulae for risk management can never be catch-all solutions. However, they can certainly assist in analysing accident causation.
James concludes that the only thing you can do is to keep thinking and "do the best you can". If the first and foremost risk thinker in the world is this modest about what he knows about risk, I hope it helps you put in perspective the (bold) claims of the risk management consultants you may be talking to. They may know too little to realise how little they know.
Some additional highlights:
- James has included an extensive history and review of his own Swiss Cheese Model. Excellent: some material I have never read elsewhere.
- Chapter 2, A Mind's User Guide, is very interesting. I never knew that people are so good at frequency gambling for encounters of pretty much any kind. People often don't know that they know it, but their `unconscious' will tell you if you ask the question in the right way.
- Excellent overview of Human Error in general.
- Error traps: an excellent explanation of this concept and its importance. Sometimes risk management problems are simple; please don't turn every incident into a leadership issue!
- In chapter 7 James directly questions what he sees as reasoning errors in accident investigations. More troublingly, this extends to the current official Tripod Beta incident analysis method as used by companies like Shell. Mistaking latent conditions for underlying causes is what James calls a `counterfactual fallacy', and he gives a clear example of such a fallacy in the official accident investigation report on the Columbia Space Shuttle disaster. I don't agree with James on this point (I think it is all about causal power), but he has really got me thinking. Are we mistaking conditions for causes? And if so, when does it help and when does it hurt? (Fortunately, there are Tripodians who use Investigator 3 with the term `latent condition' to avoid such counterfactual fallacies, so some Tripodians do listen to Professor Reason.)
- In addition, James clearly points out that what many identify as a latent condition or failure is really a Universal. James Reason would not agree with calling a reasonable balance between productivity and safety a latent failure. The tightrope one has to walk between safety and productivity is universal, and `blaming' an organisation for the `latent failure' of the balance it has chosen is unReasonable (sorry, I had to put that obligatory pun in somewhere). All kidding aside, James' strictness with respect to concepts and their definitions is very helpful in the discussion. This is a strength of his book, even though it is not perfect in this regard.
And a few points of critique:
- I have been informed that the Swiss cheese metaphor was proposed by the brilliant aviation safety expert Dr. Robert Bruce Lee; James has probably forgotten this himself. As James acknowledges the importance of the metaphor itself (I would say the model would never have become this popular without it), I hope that Dr. Robert Bruce Lee receives proper recognition for his contribution.
- James confesses that he does not understand the stochastic resonance model developed by Erik Hollnagel: FRAM, the Functional Resonance Accident Model. That is remarkable, given that if anyone should be able to understand it, it is James Reason. Could even Professor Reason be suffering from Ambient Multi-Causality Coping Disorder (AMCCD)? I dare not even consider the possibility, as that would mean it may affect almost all of us. Erik Hollnagel's Functional Resonance Accident Model contains nothing less than the promise of giving us the tools to predict major accidents and future credit crises. I have a hunch that Professor Reason really wants to say he thinks FRAM is nonsense, but people raised in the UK tend to phrase that as "I don't entirely understand it, but that fault is mine." I appreciate that.
- I fear that some readers will expect to learn more about the process of accident investigation. Unfortunately, this book is all about the `what we found' and contains almost nothing about the `how we investigated it'. So you will have to get another book for that; I highly recommend the TOP-SET Practical Handbook of Investigation by the renowned accident investigator David Ramsay.
- Some of the new safety models that James introduces in Chapter 14 add, in my opinion, little value to his Swiss Cheese Model. They are actually less clear and less useful than the cheese model. I would recommend that James expand the Swiss Cheese Model and extend the use of the cheese metaphor.
In my review I have tried to cover a broad range of topics from The Human Contribution, and I hope that at least some of it has been helpful to you. One final tip: if you have not read any of Professor Reason's books, I recommend reading Managing the Risks of Organizational Accidents before you read this one. I realise I am undoubtedly suffering from some hindsight bias (meaning: I carry with me the knowledge and ideas from his previous work and cannot take the perspective of someone who hasn't read any of his books), but I have a hunch it will help greatly in understanding The Human Contribution and where James is coming from, a decade after his last book.
Even if the only thing you take away from The Human Contribution is modesty and a healthy dose of doubt in your approach to safety and risk management, Professor Reason has delivered great work with his new book!
In short, I recommend The Human Contribution.