Thursday, 15 September 2016

Book of the month: Bounce by Matthew Syed

About the author
Matthew Syed (@matthewsyed) is a journalist and was the English number one table tennis player for almost ten years.


Who should read this book?

Anybody involved in education and training will find something useful in this book.  Although there are a few problems, they are more than outweighed by the readability of this book and the transferability of the acquired knowledge into practice. Syed talks about the myth of innate talent, deliberate practice, expertise, motivation, the benefits of standardisation, the training of radiologists and inattentional blindness.


In summary

The book is divided up into 3 Parts and 10 Chapters:

Part I: The Talent Myth. Here Syed effectively destroys the myth of innate talent. He tells us that what you need is opportunity, deliberate practice with feedback and luck.
  1. The Hidden Logic of Success
  2. Miraculous Children?
  3. The Path to Excellence
  4. Mysterious Sparks and Life-Changing Mindsets
Part II: Paradoxes of the Mind. In this part Syed looks at how our beliefs can help (or hinder) us.
  5. The Placebo Effect
  6. The Curse of Choking and How to Avoid It
  7. Baseball Rituals, Pigeons, and Why Great Sportsmen Feel Miserable after Winning
Part III: Deep Reflections. This part is less obviously related to the preceding parts (see "What's bad about this book?" below)
  8. Optical Illusions and X-ray Vision
  9. Drugs in Sport, Schwarzenegger Mice, and the Future of Mankind
  10. Are Blacks Superior Runners?

What's good about this book?

This book is well-written and very easy to read. As someone who has "been there", Syed does a great job of debunking the talent myth (or the "myth of meritocracy" (p.7)). He references a few of the other writers in this field, including Malcolm Gladwell (p.9) and Anders Ericsson (p.11).

Syed explains why the talent myth is bad, in part because it means we give up too quickly because "we're just not good at it". The talent myth also means that "talented" people are given jobs for which they are not suited; this may be a particular problem in government.

This book is relevant to the acquisition of skills (technical and non-technical): Syed refers to Ericsson when he says tasks need to be "outside the current realm of reliable performance, but which could be mastered within hours of practice by gradually refining performance through repetitions" (p.76). In addition, mastery of skills leads to automaticity and a decrease in mental workload.


As mentioned in previous blog posts, failure is an important element of improvement and in order to improve we need to push ourselves (and our learners). Are your sessions set up in order to make the best possible use of the learners' time? Syed also explains that it is not just time (cf 10,000 hours) but the quality of the practice that is important.

Syed extols the benefits of standardisation. He spent two months perfecting his stroke so that it would be identical "in every respect on each and every shot" (p.94). This meant that he could then introduce small changes and tell whether or not they were improvements, because the rest of the stroke remained the same. There is a strong argument for similar standardisation, or reduction in variation, within healthcare. Currently it is extremely difficult to see whether a change is an improvement because of the variation in the system.
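The point generalises: a small, real improvement is easy to detect when a process is standardised and almost impossible to see when it varies widely. A minimal simulation (my sketch, not Syed's; the numbers are invented) illustrates this:

```python
import random
import statistics

random.seed(1)

def sample_outcomes(n, mean, sd):
    """Simulate n outcome measurements from a process with variation sd."""
    return [random.gauss(mean, sd) for _ in range(n)]

improvement = 2.0  # a small but real improvement in the mean

for sd in (10.0, 1.0):  # a highly variable process vs a standardised one
    before = sample_outcomes(30, 50.0, sd)
    after = sample_outcomes(30, 50.0 + improvement, sd)
    observed = statistics.mean(after) - statistics.mean(before)
    noise = statistics.stdev(before)
    print(f"process sd={sd:4.1f}: observed change={observed:+.2f}, noise sd={noise:.2f}")
```

With a standard deviation of 10 the observed change is swamped by the noise (and may even point the wrong way); with a standard deviation of 1 the same improvement stands out clearly. Standardisation shrinks the noise so the signal can be seen.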

"Feedback is the rocket that propels the acquisition of knowledge (p.95-96). Syed again refers to Ericsson when he discusses how the training of radiologists and GPs could be improved by giving them access to a library of material where the diagnosis is already known (e.g. mammograms for radiologists, heart sounds for GPs). Because the participants are given immediate feedback on their diagnosis they can learn very quickly from their mistakes. Could your skills or simulation centre offer something similar?

Syed also deplores the lack of adoption of purposeful practice outwith the sports arena. He quotes one business expert: "There is very little mentoring or coaching... and objective feedback is virtually non-existent, often comprising little more than a half-hearted annual review" (p.103). How many of our workplaces can identify with this?

Syed's final chapter "Are Blacks Superior Runners?" is a very well-written argument that it is economic and social circumstances that result in more black people being motivated to take up sport and excel in it. The false belief that black people have sporting talent, but are intellectually inferior, is part of a wider culture of discrimination, where for example people with 'black'-sounding names are less likely to be invited to a job interview.

What's bad about this book?

Syed commits the same mistake as Gladwell (which is nicely refuted by Ericsson here) in claiming that "(w)hat is required is ten thousand hours of purposeful practice" (p.85) or that it takes 10,000 hours to "achieve excellence" (p.15).

Syed changes Ericsson's "deliberate practice" to "purposeful practice". Although he does explain his reasoning, this change does not improve our understanding of what the term stands for and is an unnecessary variation.

Syed states that "some jobs demand deep application... nurses are constantly challenged to operate at the upper limits of their powers: if they don't people die." (p.72) Unfortunately this is not the case. Most nurses (and most healthcare workers) do not work at the upper limits of their powers, and patients do die. Healthcare currently neither rewards nor encourages excellence. Healthcare rewards, if not mediocrity, then not being noticed for the wrong reasons.

Although applicable to sports, Syed's writing on the dispelling of doubt does not translate well into healthcare. "Positive thinking" must not turn into the cognitive trap of "false positivism", and a degree of doubt is necessary for safe care.

Part III: Deep Reflections consists of 3 chapters which seem to have been added, slightly ad hoc, to the end of the book (perhaps it wasn't long enough?). Syed's argument in Chapter 9 that a policy of "regulated permissiveness" would be better than the current doping ban does not hold water. It is more likely that everybody (who can afford it) would then be on the permitted drugs, and the cat-and-mouse game between the dopers and the doping agencies would continue with the illegal ones. With respect to the haemoglobin-boosting drug EPO, Syed states: "It is only when [the haematocrit] is elevated above 55 per cent that the risks begin to escalate..." (p.226), when it is more likely that there is no safe limit for the haematocrit. In the same vein Syed states: "Moderate steroid use improves strength and aids recovery without significant damaging side effects." This prompts the question: why are we not all taking a moderate amount of steroids?

Final thoughts

Syed argues that standards are spiralling upward in a number of fields because "people are practising longer, harder (due to professionalism), and smarter." He also talks about coasting (as when driving a car) and unfortunately this is where many of us end up. Once the exams are finished we neither push ourselves nor are we pushed.

If we "institutionalised the principles of purposeful practice" (p.84) as Syed encourages us to do, our training would be more effective, healthcare workers more qualified and patients safer.

Thursday, 8 September 2016

Harnessing the Power of Mistakes (by Vicky Tallentire)

Mistakes are an inevitable aspect of any system that involves decision-making; healthcare is no exception.  For better or for worse, the mistakes that we make over the course of our careers define, to some extent, who and what we become.  In the early days they often influence career decisions.  Subsequently, they shape our approach to work, subtly impacting on our communications with patients, our investigative decisions and our willingness to discharge people home.  For many nearing retirement, the timeline of a career is a haze of professional satisfaction, punctuated by incidents of avoidable harm recalled with the clarity of yesterday.  

Henry Marsh (1) describes the impact of mistakes on his professional demeanour: “At the end of a successful day’s operating, when I was younger, I felt an intense exhilaration. As I walked round the wards after an operating list… I felt like a conquering general after a great battle. There have been too many disasters and unexpected tragedies over the years, and I have made too many mistakes for me to experience such feelings now…”(p.33)  Dealing with one’s own failures is, I think, the most challenging aspect of a career in healthcare.  How does one balance the inevitable sorrow and guilt with the need to hold one’s head high and continue to make high-stakes decisions?

Medical school lays the foundations for a career in medicine.  The thirst for knowledge is unparalleled.  As Atul Gawande (2) says, “We paid our medical tuition to learn about the inner process of the body, the intricate mechanisms of its pathologies, and the vast trove of discoveries and technologies that have accumulated to stop them. We didn’t imagine we needed to think about much else.”(p.3)  And yet we do.  At medical school I was introduced to the abstract concepts of error, unintended harm and, God forbid, mistakes.  But I didn’t understand them concretely, as I do now.  That I would make mistakes, cause harm, inflict distress and compound misery.  That one day I would be crouched on the floor beside a patient, with the hateful glare of a relative fixed on the back of my head, uttering “I’m sorry”.

Don’t we, as a profession, have a duty to better prepare our future doctors to deal with their own failings?  Shouldn’t we augment the vast knowledge of pathophysiology with self-awareness, emotional resilience and the language of professional but meaningful apology?  The challenges are great, but so too are the rewards. 

Immersive simulation is a tool that facilitates rehearsal of high-stakes decision-making in emotionally charged situations. Mistakes are more than likely in such contexts.  The debrief allows participants to reflect on their actions, off-load emotionally and discuss the possible consequences of alternative choices.  That journey of self-discovery and emotional development is, in my mind, what underpins the power of immersive simulation.  The challenge now is how that journey can be continued, and supported, in the workplace.

References
  1. Henry Marsh. Do No Harm: Stories of Life, Death and Brain Surgery. Published by Weidenfeld & Nicolson, 2014.
  2. Atul Gawande. Being Mortal: Illness, Medicine and What Matters in the End. Published by Profile Books Ltd, 2014.
About the author:
Vicky Tallentire is a consultant in acute medicine at the Western General Hospital in Edinburgh.  She has an interest in the training of physicians, and has held a number of roles in the Royal College of Physicians in Edinburgh.  Vicky has a particular interest in simulation based research and completed a doctorate at the University of Edinburgh in 2013 using simulation as a tool to explore decision-making and error.  She is keen to develop the research profile of the centre and would like to hear from anyone, from any professional background and at any level, who is interested in undertaking research projects in the field of simulation.

Friday, 29 April 2016

Book of the month: The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us by Christopher Chabris and Daniel Simons

About the authors
Christopher Chabris (@cfchabris) is an associate professor of psychology and co-director of the neuroscience programme, Union College, New York. Daniel Simons (@profsimons) is a professor in the department of psychology and the Beckman Institute for Advanced Science and Technology at the University of Illinois. Chabris and Simons ran one of the most famous experiments in psychology, the "invisible gorilla" (video). A blogpost discussing the conclusions to be drawn from their experiment and related ones is available here: Inattentional blindness or "What's that gorilla doing there?".

Who should read this book?

Anybody with an interest in human performance limitations will find this book an interesting read. In addition, many of the concepts are useful to gain insight into how people perform within a simulated environment and in clinical practice.

In summary

The book is divided up into an Introduction, six chapters and a Conclusion. The six chapters are:
  1. "I Think I Would Have Seen That"
  2. The Coach Who Choked
  3. What Smart Chess Players and Stupid Criminals Have in Common
  4. Should You Be More Like a Weather Forecaster or a Hedge Fund Manager?
  5. Jumping to Conclusions
  6. Get Smart Quick!

Chabris and Simons explore and explain a number of misconceptions we have about our own abilities. Each chapter focuses on a specific "illusion": attention, memory, confidence, knowledge, cause, and potential. Chabris and Simons are interested in the fact that not only do we suffer from these illusions, but we are also unaware of them and are surprised when they are pointed out.

What's good about this book?

This book is well-written and very easy to read. Each chapter focuses on one topic and is peppered with everyday examples to illustrate concepts. These include motorcycle collisions, film continuity errors, a sense of humour, and lifeguards in swimming pools.

Not an effective way to change behaviour
In Chapter 1 the authors discuss why cars hit motorcycles (at times due to inattentional blindness) and they also explain why "Watch out for motorcycles" posters and adverts are not effective. They suggest that making motorcycles look more like cars, by having two widely separated headlights, would make them more visible to car drivers. The same concept of "attention" also explains why the risk of collision with a bicycle or motorcycle decreases as the number of these forms of transport increases. The more often people see a bicycle on the road, the more likely they are to expect to see one and to look for one.

The authors also provide additional details about the various illusions. For example, eye-tracking experiments have shown that those who do not see the "invisible" gorilla spend as much time directly looking at it as those who do.

Chapter 2 looks at memory and uses persuasive experimental evidence to convince the reader that memory is fallible. In particular, contrary to popular belief, people do not have crystal clear memories of what they were doing during exceptional events such as 9/11 or Princess Diana's death. People think they do, because they think they should, and therefore are confident about these (unclear) memories.

Chapter 3 explores confidence. The first example used is a doctor who looks up a diagnosis and treatment, which makes his patient feel very uneasy. Isn't a doctor supposed to know this stuff? We encounter similar situations in simulation, where the tension between appearing confident and being able to admit ignorance often results in a less than ideal outcome. The notion of moving from unconscious incompetence to unconscious competence is also covered here, by referring to an article ("Unskilled and Unaware of It") which begins with a description of an inept bank robber.

Would you ride this bike?
Chapter 4 explains why we often think we know more than we do. The authors make this point by asking the reader to draw a bicycle and then to compare this against the real thing. (Italian designer Gianluca Gimini has created some interesting 3-D renderings of people's concepts of what a bike looks like.) This illusion of knowledge, they argue, played a part in the 2008 banking crisis, as bankers thought they understood both the banking system and the extremely complex collateralised debt obligations (CDOs).

In Chapter 5 Chabris and Simons explore causation and correlation. While many people with arthritis think they can tell when the weather is about to change, researchers have found no correlation. It is likely that their pain levels fluctuate regardless, but if the weather happens to change they ascribe their pain to the change in atmospheric pressure.

In Chapter 6 the authors debunk the Mozart Effect, which led parents to play Mozart to babies in the belief that it would make them smarter. Similar claims by Lumosity, a company which alleged that playing its games would delay age-related cognitive impairment, resulted in a $2 million settlement with the US Federal Trade Commission.

What's bad about this book?

There is very little to fault in this book. Chabris and Simons call limitations in human performance "illusions" because, like M. C. Escher's prints, they persist even when you know what they are. The authors do a great job of explaining the illusions but do not spend enough time addressing the ways in which we might improve our ability not to succumb to them.


Final thoughts

In terms of simulation, this book explains a number of behaviours that we witness in the simulated environment. For example, it is not unusual for participants to "lie" about something that happened. They may be adamant that they called for help, but the debriefer knows (and the video shows) that this was not the case. The participant is falsely remembering a call for help because they think that they would always call for help.

Again, in terms of the illusion of confidence, we find that those who are least able are often most confident because they lack the insight required to know how poor their performance is.

In terms of human factors, this book will provide a number of examples of human fallibility for workshops or other courses. It also reinforces the need for systems which help humans. As an example, changes in a patient's end-tidal CO2 (ETCO2) trace can suggest physiological impairment, but most machines do not make the clinician aware of these. A smarter monitor would alert the clinician to these changes instead of relying on his or her continued awareness. 
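A crude sketch of the idea (an illustration only; the window sizes and alert threshold below are invented, not clinically validated) is to compare a short-term average of the ETCO2 readings against a longer-term baseline and alert when the two diverge:

```python
from collections import deque
from statistics import mean

def etco2_trend_alerts(samples, short=5, long=20, threshold=0.8):
    """Flag sustained drift of the short-term ETCO2 average away from the baseline.

    samples are ETCO2 readings in kPa, oldest first; short/long are window
    sizes in samples and threshold is the permitted drift in kPa.
    """
    window = deque(maxlen=long)
    alerts = []
    for i, value in enumerate(samples):
        window.append(value)
        if len(window) == long:
            recent = mean(list(window)[-short:])
            baseline = mean(window)
            if abs(recent - baseline) > threshold:
                alerts.append((i, value, recent - baseline))
    return alerts

# A gradually falling trace, as might accompany falling cardiac output.
readings = [5.1, 5.0, 5.2, 5.1, 5.0] * 4 + [4.8, 4.5, 4.2, 3.9, 3.6, 3.3]
for index, value, drift in etco2_trend_alerts(readings):
    print(f"sample {index}: ETCO2 {value} kPa, drift {drift:+.2f} kPa from baseline")
```

A real monitor would need artefact rejection and clinically validated thresholds, but the principle of letting the machine watch the trend is the same.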


Wednesday, 30 March 2016

Sharpening the saw: everyday debriefing practice

Participants on our 2-day introductory faculty development course are given all the tools they need to plan, run and debrief a simulated experience aligned to learning objectives. However, on returning to their own workplaces, they often do not have the opportunity to run simulations regularly. This lack of practice means that their skills in debriefing do not improve as quickly as they would like. Participants also often mention that they don't have the time to carry out a 40-minute debrief. The good news is that they don't have to.

In Stephen Covey's book "The 7 Habits of Highly Effective People", the seventh habit is "Sharpen the Saw". This habit, which includes social, emotional and physical well-being, also focuses on learning. This blogpost will explain how you can "sharpen the saw" every day with respect to debriefing in a few straightforward steps:

1) Find a learner
Anybody will do (a trainee, a student, a colleague...)


2) Rustle up some learning objectives
The learning objectives can come from your learner (e.g. "What do you want to focus on today?" "What do you want to get out of today?" "What have you been struggling with?") Or they can come from you.


3) Have an experience together
This can be pretty much anything. Inserting a nasogastric tube, carrying out a laparoscopic cholecystectomy, doing the drug round on a ward, going on a home visit, etc. The proviso is that you must have enough mental workspace available to observe the learner. This does not mean that you must be "hands off". However if you are too involved in the experience yourself, perhaps because it is complicated or time-critical, you are unlikely to be able to have a conversation with the learner about their performance.


4) Practise your debriefing skills (as per the SCSCHF method)

a) Reactions
Ask them how that felt. What are their emotions about the experience?

b) Agenda
Ask them what they thought went well and what the challenges were.

c) Analysis
The assumption is that you don't have the time to spend 30 minutes in this phase of the debrief, so focus on just one thing. Use good questioning technique (taught on the faculty development course) to delve into the mental frames, heuristics, assumptions etc. which led to this being a challenge or a good performance.

d) Take Home Messages
What is your learner going to do differently, or the same, next time based on your facilitated discussion?


5) Get feedback
Practice does not make perfect, practice makes permanent. Deliberate practice with feedback propels you up the slope towards perfection. So get feedback from the learner. What was good about the way you helped them learn? What didn't work? If you can, now and again get a colleague who has also been on the faculty development course to sit in on the above and give you feedback.


6) Reflect on your performance
This does not have to take long or to be done then and there. At some stage reflect on your performance with the benefit of the feedback you have obtained. What are you going to do differently next time?


7) Repeat
Do steps 1-6 again. Tell us how you get on....

Wednesday, 23 March 2016

Simulation and Learner Safety

Primarily when we talk about safety in simulation we are referring to patient safety. Patient safety in two senses. The first is that one of the main reasons for carrying out simulation is to improve patient safety by looking for latent errors, improving teamwork, testing equipment, etc. The second is that "no patient is harmed" during simulation exercises.

In the brief before the simulation event, safety is also often mentioned in the establishment of a "safe learning environment (SLE)" and, in this context, it refers to Learner Safety. A recent clinical experience reinforced my appreciation of the SLE.

It was 10pm and I was resident on-call when my phone went off to tell me that a polytrauma was on its way in. Two adults and three children had life-threatening injuries after a collision on the motorway. Although I have been an anaesthetist for 13 years, a consultant for 5 of those, my clinical experience of polytrauma in adults is minimal and in children is essentially nil. I have looked after a man who had major injuries and 95% burns after an industrial explosion, another man who suffered severe injuries after he ran his car underneath a flatbed truck, and the occasional stabbing and shooting victims. In children I have intubated a 2-week-old "shaken baby" and anaesthetised a large number of children on the trauma list for broken wrists, arms, ankles, etc.

When faced with infrequent events it is not unusual to carry out a memory scan to draw on previously obtained knowledge relevant to the situation at hand. I remembered the above patients and I also remembered a simulation course I had been on at the SCSCHF: Managing Emergencies in Paediatric Anaesthesia for Consultants (MEPA-FC). My scenario involved a boy who had been run down by a car; he had a number of injuries including a closed intracranial bleed. My first thought when I remembered this scenario was "I did okay". Then I mentally went through the scenario again, thought about what had gone well and what, with input from the debrief, I should have done better. This then was the knowledge I had front-loaded and the emotional state I was in when the patients arrived in the ED.

When I talked through the above with David Rowney, the facilitator on the MEPA-FC course, he expressed surprise that my first thought was "I did okay" rather than remembering the Take Home Messages for my scenario. But there it is. It may be that I am very different from other people but I think it is not unusual to have an emotive reaction to a memory before a logical one.

This then made me think about the simulation participant who might not have had the SLE I had. The participant who, after their paediatric trauma scenario, had been hauled over the coals and made to feel incompetent. What would the emotional state of that doctor be as they walked down to the ED? And how would that affect their performance?

This blogpost is not a plea to "take it easy" or "be gentle" with participants. Poor performance must be addressed, but it must be addressed in a constructive manner. Help the participant understand their performance gaps and how to bridge them, while at the same time remembering "I'm okay. You're okay." Very few of us come to work (or to the simulation centre) to perform poorly. In fact most people in a simulation are trying to perform at the peak of their ability. When they fall short it is important to help them figure out why that is, while re-assuring them that they are not "bad".

Wednesday, 9 March 2016

Book of the month: Resilient Health Care (Hollnagel, Braithwaite and Wears (eds))


About the editors
Erik Hollnagel has a PhD in Psychology and is a Professor at the University of Southern Denmark and Chief Consultant at the Centre for Quality Improvement, Region of Southern Denmark. He is the chief proponent of the Safety-II paradigm and helped to coin the term "resilience engineering".
Jeffrey Braithwaite, PhD, is the director and a professor of the Australian Institute of Health Innovation and the Centre for Health Care Resilience and Implementation Science, both based in the Faculty of Medicine and Health Sciences at Macquarie University, Australia. He is also an Adjunct Professor at the University of Southern Denmark.
Robert Wears, MD, PhD, is an emergency physician and professor of emergency medicine at the University of Florida and visiting professor at the Clinical Safety Research Unit, Imperial College London.

About the contributors

There are 27 other contributors, including well-known names such as Charles Vincent and Terry Fairbanks. The contributors are a world-wide selection, encompassing the US, Europe and Australasia. The majority are from a sociological/psychological research background rather than a front-line clinical one.

Who should read this book?

This book will be of interest to those who are tasked with improving patient safety within their organisation, whether this is by collecting and analysing incident reports or "teaching" healthcare workers. It would be useful reading for board members, healthcare leaders and politicians involved in healthcare.

In summary

The book is divided into 3 parts (18 chapters), as well as a preface and epilogue by the editors.

  1. Health care as a multiple stakeholder, multiple systems enterprise
    1. Making Health Care Resilient: From Safety-I to Safety-II
    2. Resilience, the Second Story, and Progress on Patient Safety
    3. Resilience and Safety in Health Care: Marriage or Divorce?
    4. What Safety-II Might Learn from the Socio-Cultural Critique of Safety-I
    5. Looking at Success versus Looking at Failure: Is Quality Safety? Is Safety Quality?
    6. Health Care as a Complex Adaptive System
  2. The locus of resilience - individuals, groups, systems
    7. Resilience in Intensive Care Units: The HUG Case
    8. Investigating Expertise, Flexibility and Resilience in Socio-technical Environments: A Case Study in Robotic Surgery
    9. Reconciling Regulation and Resilience in Health Care
    10. Re-structuring and the Resilient Organisation: Implications for Health Care
    11. Relying on Resilience: Too Much of a Good Thing?
    12. Mindful Organising and Resilient Health Care
  3. The nature and practice of resilient health care
    13. Separating Resilience from Success
    14. Adaptation versus Standardisation in Patient Safety
    15. The Use of PROMs to Promote Patient Empowerment and Improve Resilience in Health Care Systems
    16. Resilient Health Care
    17. Safety-II Thinking in Action: 'Just in Time' Information to Support Everyday Activities
    18. Mrs Jones Can't Breathe: Can a Resilience Framework Help?

I haven't got the time to read 238 pages...

For the time-poor, the preface and epilogue are worth reading. Chapter 3 on the challenges resilience poses to safety, Chapter 5 on quality versus safety and Chapter 11, co-authored by Charles Vincent, on the downsides of resilience, are also worth reading.

What's good about this book?

This book makes it clear that "resilience" can mean different things to different people. The authors identify resilience as part of the defining core of a system, something a system does rather than something that it has (p.73, p.146, p.230). This is in contrast to some who call for more resilient healthcare workers, with the implication that if they were "tougher" then they would make fewer mistakes. Resilience is also not just about an ability to continue to function but an ability to minimise losses and maximise recovery (p.128).

The authors also make it clear that resilience is not a self-evident positive attribute. More resilience in a system does not come without cost: a resilient system may, for example, resist "positive" change, such as some of the changes that the patient safety movement is trying to embed. Safety may focus on standardisation and supervision while resilience focuses on innovation, personalisation and autonomy (p.29). In Chapter 3, René Amalberti argues that "it is not a priority to increase resilience in health care. The ultimate priority is probably to maintain natural resilience for difficult situations, and abandon some for the standard" (p.35).

The book helps to explain the lack of rapid advance in patient safety because of the "economic, social, organisational, professional, and political forces that surround healthcare" (p.21). Healthcare may be unique in the diversity and strength of these influences. In addition the authors argue that there is a gap between the front-line and those who manage "safety" (p.42), a finding echoed by Reason and Hobbs in their book on maintenance error.

The book makes a good critique of the "measure and manage" approach of Safety-I (p.41) which:
  • is retrospective
  • focuses on the 10%
  • misses learning to be found in safe practice
  • focuses on the clinical microsystem rather than the wider socio-cultural, organisational, political system 
Lastly, much work is currently focused on standardisation; however, the authors argue that we should acknowledge the inevitability of performance variability and the need to monitor and control it, by dampening it when it is going in the wrong direction and amplifying it when it is going in the right direction (p.13). The standardisation that does improve resilience is the type that decreases the requirements for effortful attention or the need to memorise (e.g. checklists, layout of workplaces).


What's bad about this book?

Throughout this book, resilience is linked with the Safety-II concept (e.g. "Chapter 1: Making Health Care Resilient: From Safety-I to Safety-II"). The argument for Safety-II can be a nuanced one, therefore a good book on resilience would use simple language and provide specific examples. This book fails on the former and performs poorly on the latter. In particular, how Safety-II can be put into practice now is only vaguely referred to. Even the chapters which purport to show resilience in action do not make this very clear. Exceptions include Chapter 12 "Mindful Organising and Resilient Health Care" which suggests that people should be shown their inter-relations, i.e. how their actions affect those who interact with a patient upstream and downstream. 


At times, the championing of Safety-II gives its proponents the appearance of a cult, e.g. "Enlightened thinkers in both industry and academia began to appreciate..." (p.xxiv), while one must imagine that unenlightened thinkers continued to live in their caves. There are also attacks on the PDCA/PDSA cycle (p.177) and the use of barriers (p.131) as Safety-I thinking. In addition, Safety-I, as a term and paradigm, has been created by Safety-II advocates, and in fact "pure" Safety-I probably does not exist. For example: "In contrast to Safety-I, Safety-II acknowledges that systems are incompletely understood..."; however, very few people working in healthcare, even within a Safety-I system, would argue that they fully understand the system.


One of the examples in the book of proactive safety management is the stockpiling of H1N1 drugs and vaccines in 2009. This was later deplored by a number of sources as the mild epidemic killed fewer people than seasonal flu and millions of pounds of stockpiles had to be destroyed. 

Lastly one of the arguments the authors use against Safety-I thinking is that focusing on the small number of adverse events means we miss the opportunity to look at all the times things went well. However, with 10% of patients admitted to UK hospitals being subjected to iatrogenic harm (Vincent et al 2008), the number of times things go wrong is still a large chunk of the total work.

Final thoughts

This book makes a strong argument that we must stop looking purely at what has gone wrong in order to find out how to prevent mistakes. It also makes it clear that healthcare, as a complex adaptive system, will not be "fixed" by silver bullets, and that all solutions to problems create their own problems.

The concepts underpinning Safety-II, which include an urge to focus less on incidents and accidents and more on things that go well, are antithetical to much current thinking within healthcare. In addition patients and their families would not accept "I'm sorry you were harmed but we're focusing on things that go right" as an apology. This means that rather than pushing Safety-II, it may be more effective to advocate Safety-III. In Chapter 12 this is defined as: 
"... enactive safety - embodies the reactive [Safety-I] and proactive [Safety-II] and therefore both bridges the past and future, and synthesises their lessons and prospects into current action." (p.155)
Hollnagel himself says "...the way ahead does not lie in a wholesale replacement of Safety-I by Safety-II, but rather in a combination of the two ways of thinking" (p.16). Safety-III may turn out to be a quixotic Theory of Everything. Or it may mature into an accepted, practical and applied paradigm, with "a degree of autonomy at the interface with the patient, yet predictability and effectiveness at the level of the organisation" (p.132). Its adherents still have much work to do.

Further reading:


Vincent, C., et al. (2008) Is health care getting safer? British Medical Journal, 337:a2426.

Wednesday, 10 February 2016

Book of the month: A life in error: from little slips to big disasters by James Reason

About the author

James Reason is one of the greats in human factors research. English Wikipedia does not have an entry for him (the French site does). Instead we have to content ourselves with a page on perhaps his major contribution to broadening the appeal and understanding of human factors, the Swiss cheese model of accident causation. Reason is Professor Emeritus of Psychology at the University of Manchester and has authored numerous papers and books on human factors, including Human Error, The Human Contribution and Managing Maintenance Error (A Practical Guide).

Who should read this book?

Anybody with an interest in human factors and patient safety (see below for why).

In summary

The book consists of 14 chapters:
  1. A Bizarre Beginning
  2. Plans, Actions and Consequences
  3. Three Performance Levels
  4. Absent-minded slips and lapses
  5. Individual differences
  6. A Courtroom Application of the SIML (Short Inventory of Mental Lapses)
  7. The Freudian Slip Revisited
  8. Planning Failures
  9. Violations
  10. Organizational accidents
  11. Organizational Culture: Resisting Change 
  12. Medical Error
  13. Disclosing Error
  14. Reviewing the Journey

What’s good about this book?

The book is very well written and easy to read. Reason takes us on a humorous, insightful, autobiographical journey from his first encounter with "human error" to his later theories. The book explains a number of concepts. For example, Reason argues that some familiar objects develop local control zones (p.3). In healthcare, an IV cannula may exhibit this property. If one finds oneself with a syringe in hand, distracted and near a cannula, there is a strong possibility that one will inject the contents of the syringe into the cannula. When the syringe contains local anaesthetic or 1:1000 adrenaline this may result in adverse consequences.

Reason talks about differences between novices and experts. The former show a lack of competence, while the latter are much more likely to commit absent-minded slips, i.e. misapplied competence (p.21). Reason argues that, in absent-mindedness, it is the suppressive function which goes absent. Pre-programmed, habitual actions are normally actively suppressed, but in "strong habit intrusions" they are carried out by the distracted person.

Reason discusses the "stress-vulnerability hypothesis": people under chronic stress are more likely to have cognitive failures such as absent-minded slips and lapses (p.33). However he argues that association is not causation, and it may be that people who are more likely to complain of chronic stress are also more likely to be absent-minded, i.e. that the same poor cognitive resource management is responsible for both.

In his discussion of planning/decision-making, Reason describes the planning process and the sources of bias which lead to failure, grouping them by planning stage (p.56):
  1. Working database (e.g. recency, successes better recalled than failures)
  2. Mental operations (e.g. covariation, "halo", hindsight)
  3. Knowledge schema (e.g. confirmation, resistance to change, "effort after meaning")

For those interested in groups and organisations, Reason discusses "satisficing", i.e. groups will tend to select the first satisfactory outcome rather than an optimal one. He also looks at the heuristics of group decision-making, such as avoidance of uncertainty and selective organisational learning (p.59). In terms of accidents, Reason contrasts "individual" (frequent, limited) and "organisational" (rare, devastating) accidents. He therefore agrees with Steven Shorrock that a sign saying e.g. "135 days since our last accident" does not tell you how safe the system is. Why? Because the two types of accident have different causal sets (p.79).

"Turning a blind eye" (Nelson commits a violation, p.68)
In terms of violations, that is, conscious decisions to ignore or circumvent a rule, Reason argues that it is better to focus on decreasing the benefits of violations rather than trying to increase the costs of committing them. This means that one should look at why the system is promoting violations rather than punishing individuals for committing them.

Reason also covers latent conditions, active failures and how they combine with local triggers into an accident trajectory (p.75).

What’s bad about this book?

At 124 pages this is a short book; indeed, it is probably too short. A lack of explanation may leave some readers puzzled. For example, on p.30 Reason states: "The correlation between [two independent samples of the Short Inventory of Mental Lapses] over the 15 items was 0.879." Readers without a statistics background are left to work out that this indicates strong agreement between the two samples (a correlation of 1 would be perfect agreement); it would probably have been better to leave out the numbers or to explain them. His coverage of the planning process and its biases is too short and superficial: he mentions "groupthink" (p.61) and lists its 8 main symptoms, but does not explain these in sufficient detail to allow one to use this knowledge in practice.

Final thoughts

This book spans the whole gamut of human factors science and touches on a great number of subjects, including all the above as well as a typology of safety cultures, vulnerable system syndrome (blame, deny, pursue wrong goals), why and how organisations resist change, models of medical error (plague, legal, person, system) and more. If you would like an easy-to-read, broad introduction to human factors and healthcare, then this book is a must-read.

Thursday, 28 January 2016

On the use and abuse of "human factors"

Words shape our world

The words we use, and how we use them, not only allow people to know what we are thinking but also shape the way we think. For a car mechanic, for example, knowing what all the components of an engine are called makes it easier to talk to a fellow mechanic, to think about what the problem might be, and to work out how to fix it.

"Human factors"

In the podcast "Human factors, non-technical skills and professionalism", Liz Chan, a specialist in Veterinary Anaesthesia and Analgesia at the Royal Veterinary College, University of London tells us:
"Human factors were defined by a guy called Martin Bromiley, who set up the Clinical Human Factors Group [CHFG]... He defines them in such an excellent way I always steal his definition because it is, basically to paraphrase: 'Everything that makes us different from predictable machines.'"
Although it is possible that Martin Bromiley used that definition, it is extremely unlikely, and, in trying to paraphrase, Liz Chan has changed the meaning of the term "human factors". This means that the podcast's listeners are also likely not to use the term appropriately, and when they read the human factors literature they may wonder how it fits with their definition.

On their website, under "What is human factors?" the CHFG uses the Chartered Institute of Ergonomics and Human Factors (CIEHF) definition of:
"Ergonomics (or Human Factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimise human well-being and overall system performance."
An easier definition is provided by Martin Bromiley in a Health Foundation blog.
"I often talk about human factors making it easy to do the right things with reliability of outcome..."
So human factors is a science whose aim is to make it easy for us to do the right thing, and difficult to do the wrong thing.

The abuse of "human factors"


A lack of clarity around the use of the term "human factors" means that when it is used in the press, for example, it is almost always in a pejorative manner. This reinforces the (false) idea that if we could remove the humans from the system then things would be much safer.

In his book, The Human Contribution, James Reason argues that the predominant view of humans in complex systems is as "hazards" when they are often "heroes".

One of the most prominent examples of the latter view is the "miracle on the Hudson", when Captain Chesley B. "Sully" Sullenberger landed an Airbus A320 on the Hudson river in New York after both engines had failed due to bird strike. But we didn't see headlines like this:


We should use the term human factors to refer to the science of ergonomics and avoid using it to mean "human error" (itself a poor choice of words). This will help us and others to have more meaningful discussions and clearer thinking on the causes of, and remedies for, incidents and accidents.