“What you’re doing, Sidney,” Jim Reason said, looking at me intently, “is trying to crawl into the skull of a dead man.” He turned to gaze at the teacup in his hand, shook his head, and grumbled. “How is that possible?”
The way he looked, it was not a question.
We were in a side discussion during a conference some two decades ago. The topic was what is known in some circles as process-tracing methods. I had attempted to popularize these methods in the first version of the Field Guide to Understanding ‘Human Error’ in 2002. How far can we take these methods? It would be the ultimate test of a science of ‘human error’: trying to understand why it made sense for practitioners to do what they did, without them being around any longer.
It was one of the constructive scholarly disagreements with “this Dekker upstart” (his words!) through the decades. Another was about the ontological status of ‘human error’. At most, as Cook, Woods and others would agree, ‘human error’ was a convenient attribution, a label, a way to blame frontline people.
Reason did not disagree. Yet he would maintain (and maybe had to maintain, given the original Human Error book written from the perspective of cognitive psychology) that ‘errors’ (unsafe acts, slips, lapses, mistakes) represented a separate category of human performance that could be studied and controlled as such. His ‘unsafe acts’ – as necessary to complete an accident sequence – came straight out of the Heinrich playbook (even the term is the same), though Heinrich’s dominoes would transmogrify into cartoonish slices of cheese; the eyes of the dominoes now holes to let an accident trajectory through. (Jim left everyone wondering about the ontological nature of that arrow, incidentally.)
Two signature books
I came of age in safety during the first half of the 1990s. My postgraduate cultivation in the field was bookended by Jim’s two signature books – Human Error (1990), and Managing the Risks of Organizational Accidents (1997).
As a psychology master’s student I found myself in Patrick Hudson’s class on ‘human error’ at Leiden University where he brandished Jim’s new 1990 book, and regaled students with stories of unfolding horror as organizations descended into disaster. Jim came to visit Leiden from Manchester, and I shared rides with him in Patrick’s car. Much enthusiasm swirled around his model (of which the 1990 book showed only a rudimentary image) and around the man: Swiss cheese could do magic, like making the difference between incidents (particularly ‘dress rehearsals’) and accidents visible to decision makers who had little time for nuances or messy details.
Regulators in the 90s under neoliberal governments, increasingly enamoured with safety management systems as a make-the-customer-do-the-work sort of regulation, could call on Swiss cheese slices as a visual structure to hang their ideas and demands on, and invoke ‘resident pathogens’ as a dark force to be subjugated by more internal safety bureaucracy.
His Swiss Cheese slices, Jim told me once in Sweden, elicited “instant brand recognition, like the McDonald’s arches.”
The limits of Swiss Cheese
A few years later, as I was pursuing my PhD at The Ohio State University with David Woods, I flew to England and stayed at the Reason home in Disley, where he proudly served me Heineken and announced he never worked evenings. Leaning against his kitchen counter, I showed him my findings about supposed ‘violations’ (a term Dave Woods deemed anywhere from irrelevant to not salonfähig – not fit for polite company).
As the 1990s slid further into the past, a sense of ‘remains of the day’ started enveloping Swiss cheese, particularly among more avant-garde safety scientists. The literal interpretation of defences-in-depth – for instance, in attempts to prevent drug misadministration events through nurse interruptions in hospitals by putting nurses in a phone booth-like structure (behind a layer of defence) or in high-viz vests with DO NOT DISTURB imprinted upon them (also behind a layer of defence) – exposed the limits of this particular model of risk control.
Or it showed the limits of its usefulness in contexts that were vastly more complex than a linear progression of trouble through neatly lined-up defences. Nurses with vests and nurses huddled in phone booths were actually more likely to be interrupted than before. People in the busy, messy, complex life of a ward now had a way of spotting or locating nurses more easily.
A too linear model
As an embodiment of twentieth-century safety thinking, where it was seen as the job of safety to stop everything that can go wrong from going wrong, Swiss cheese never was able to fully adapt to the demands of the twenty-first.
The question ‘how do we assure that as much as possible goes well’ called on us to start looking for the dynamic, diverse capacities – in people, systems, teams, processes – that make safe success possible. How might we identify those capacities and enhance them?
As a linear model, wedded to seeing organizations as machines with components, layers and linkages between them, Swiss cheese was locked in a Newtonian-Cartesian vision of the world. Mishaps were sequences of events; actions-reactions between a trigger and an outcome. It remained mostly mute about the processes behind the build-up of latent failures, about the gradual, incremental loosening or loss of control. The processes of erosion of constraints, of attrition of safety, of drift towards margins, could not be captured.
At one point, Jim fashioned a little mouse nibbling away at one of the layers of defence, but Swiss cheese, as a structuralist approach, would remain a static metaphor for resulting form, not a dynamic model oriented at processes of erosion or deformation.
The enduring virtues
But those are just the self-correcting ebbs and flows of science. Models come and models go, new ones get invoked to push old ones off their perch, trying to show how they are superior in explaining the data. What mattered more was an enduring virtue in both Jim and his work: he, and it, evinced a compassion for the position we configure front-line people in.
And whereas this has been human factors orthodoxy since Chapanis, Fitts & Jones and others in the 1940s, Jim was the rhetorical master of its popularisation.
“Rather than being the main instigators of an accident,” he wrote, “operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance and bad management decisions. Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.”
Few could write with the flourish Jim possessed. And more than that, many of us could draw inspiration from the compassionate, generous heart he had. He taught us to pay attention to the organisational blunt end, to speak and write vividly, and to challenge leaders with both conviction and humour.
A lasting impact
Jim’s influence will endure. His wit made him a formidable presence in our field, and also a treasured colleague and friend. The conversations we had – whether in journals, over drinks, or in the wings of conference stages – will stay with me. And while we may have disagreed on the existence of ‘human error’, there is no error in recognising the deep and lasting impact of James Reason.
Professor Sidney Dekker is director of the Safety Science Innovation Lab at Griffith University in Brisbane and the author of books on Safety Differently, Just Culture and safety science.