A popular model in risk management across domains as diverse as aerospace, healthcare, mining, and manufacturing, the Swiss Cheese Model gained a new audience when it was repurposed to represent a multi-layered COVID response.
The original Swiss Cheese Model used the metaphor of cheese slices to represent lines of defence spanning personal and organisational factors. This multilayered approach prevents most losses and impacts; harm occurs if/when a hazard travels through an alignment of holes across every layer.
TO ERR IS HUMAN.
This model was developed in the 90s, at a time when safety tended to be viewed through the prism of personal errors. This led to a focus on shifting personal behaviours through methods such as training, confronting campaigns, and/or disciplinary action. In that context, James Reason's Swiss Cheese Model was part of a movement that took a more holistic approach, incorporating a system view beyond the personal.
This systems approach was based on an understanding that “to err is human”, and that errors were inevitable. Reason classified unsafe acts as either intended (mistakes and violations) or unintended (slips and lapses). Indeed, Reason began his exploration into this area when he distractedly put cat food into his teapot – see Origins below for more.
Industries and workplaces have created their own versions of this model, typically labelling the layers, or cheese slices in the metaphor, to span personal and organisational factors. Reason accepted that each layer of defence will have unintended holes, where things will inevitably go wrong, and classified them as:
Active failures: these are unsafe acts committed by people, typically at the ‘front end’ of the process. For example, at Chernobyl, the operators violated procedures by turning off safety systems. However, rather than being isolated personal factors, these acts can generally be traced back to underlying systemic issues.
Latent conditions: these are the ‘resident pathogens’, as Reason described them, that are the system or back end challenges, including elements such as design decisions, faulty processes, or equipment issues.
In many cases, hazards will make it through one or even several layers, but will remain contained and have minimal impact — though ideally these breaches will be uncovered, tracked, and serve as feedback for continual improvement. The visible impact arises when the holes in all defence layers align, allowing hazards to break through and harm people.
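The layered-defence idea can be sketched quantitatively: if each layer's ‘hole’ is treated as an independent probability of letting a hazard through, the chance of a full breach is the product of those probabilities. The minimal Python sketch below illustrates this; the layer names and numbers are purely illustrative assumptions, not part of Reason's model.

```python
import math

def breach_probability(hole_probs):
    """Probability a hazard passes through every layer,
    assuming each layer's hole is independent of the others."""
    return math.prod(hole_probs)

# Illustrative layers: each value is the assumed chance a hazard
# slips past that defence on its own.
layers = {
    "training": 0.30,
    "procedures": 0.20,
    "engineering controls": 0.10,
    "alarms": 0.05,
}

p_breach = breach_probability(layers.values())
print(f"Chance all holes align: {p_breach:.4%}")  # 0.0300%
```

Note that the independence assumption here is exactly the simplification critics of the model point to: in real systems, a single latent condition can open holes in several layers at once.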
APPLIED TO COVID.
More recently, Australian virologist Ian Mackay adapted the Swiss Cheese Model to represent a defensive response to COVID. The adaptation highlights that each intervention is imperfect, but together they can create a strong defence. After posting earlier versions on Twitter, Mackay added a ‘misinformation mouse’ which, when left unchecked, will eat away at the defences. See the In Practice tab for Mackay’s original diagram.
IN YOUR LATTICEWORK.
The Swiss Cheese Model can be used as part of a risk management strategy that can incorporate the Risk Matrix, and the idea of building multiple defences aligns with Margin of Safety. The acknowledgment of fallibility connects to the irrationality of humans explored through Fast and Slow Thinking.
Understanding unsafe acts will likely lead you to root cause tools such as the Fishbone Diagram and/or 5 Whys, and often personal safety issues can be related back to Psychological Safety. Finally, the resonance of this model connects with Aristotle’s Rhetoric and the power of metaphors.
- Assume that human error will occur.
Reason’s work was premised on the idea that “you can’t change the human condition, but you can change the conditions that humans work in.” As such, he encouraged a view beyond the personal attribution of error to creating broader mitigating factors.
- Assume that points of system failure will occur.
This model is also a reminder that ‘latent conditions’, or underlying issues, will inevitably cause other system failures.
- Build in multiple layers of defence.
Knowing the above, ensure that you design any safety measures with Redundancy and multiple layers of defence.
- Uncover hidden breaches.
The Swiss Cheese Model is a reminder that hazards which never reach the point of impact may still be present, having been stopped at a layer of defence. Such hidden breaches must be uncovered, tracked, and addressed before a full failure occurs.
Below is Australian virologist Ian Mackay’s repurposed version of the Swiss Cheese Model as it was applied to COVID mitigation.
Risk consultant Julian Talbot used this model to explain the devastation of the 2009 Australian bushfires in the diagram below.
Michigan Tech used this diagram to explore the safety elements in engineering, including a mitigation layer on the end.
The metaphor of Swiss Cheese has clearly resonated in safety and accident domains, though criticism has persisted. One of the prime criticisms is that the metaphor is so simplistic that it becomes generic and loses value. Many point out that Reason himself tried to expand his work with subsequent diagrams and papers, which have not persisted like the Swiss Cheese Model. At worst, it's seen as a reductionist approach born of his period working as a consultant; at best, as a tool he used to communicate important concepts, albeit relatively superficially, to management.
For example, some would argue the metaphor presents accidents as a linear occurrence, while in reality, they occur in dynamic and non-linear ways. This links to a broader criticism that it lacks a systems and dynamic view of problems, implying that each component, like a slice of cheese, can be altered and even fixed in isolation.
Another issue with the original diagram is how differently it continues to be interpreted by practitioners. While some would argue that its broad definition allows for diverse agreement and application, others point to studies revealing that practitioners hold quite different understandings of what the model represents and what it implies.
According to James Reason, his inspiration for this model came in the 1970s while he was making tea. He was distracted by his large, insistent cat and absent-mindedly dolloped a large spoonful of cat food into the teapot. Reason was fascinated by the similarities between the tasks that led to his mistake, and this deepened his research, which culminated in his book A Life in Error: From Little Slips to Big Disasters. He was particularly interested in the impact of mistakes in human-machine interaction, especially in high-stakes fields from aerospace to nuclear power.
Others have noted that Reason developed the model with input from John Wreathall, essentially building on traditional safety management thinking with an understanding of human error. Reason published the original work behind this model in 1990, then explored it more explicitly in the British Medical Journal in 2000, though it was several years before it was developed as the organisational accident model, later known as the Swiss Cheese Model.