Thursday, 19 December 2013

SimTechDay 2013: Backseat drivers take centre stage by Scott Rudnicki-Bayne

Simulation technicians take care of the equipment, set up scenarios and make the "magic" work. We're confederates and actors, keeping scenarios on track: the guys and gals who ride in the backseat of the simulation education wagon. The inaugural Scottish Clinical Simulation Centre (SCSC) SimTechDay on the 27th of November 2013 was our day to take centre stage. Techs and other interested parties from all over Scotland descended on Forth Valley Royal Hospital (FVRH) in Larbert to network, share ideas and hear from our guest presenters.

Guest presenters:

  • Ian White, a Laerdal Field Service Engineer, spoke about Laerdal's commitment to improving after-sales and technical support, whilst also providing a few tips on maintaining the appearance of SimMan.
  • Kevin Stirling, Lecturer in Simulation at the University of Dundee and Finance Director of ASPiH, talked about the role of ASPiH, how Scottish SimTechs could integrate this meeting into the national network of technicians, and how ASPiH might be able to support this.
  • Andrew Middleton, of Scotia UK, presented "Effective Video and Audio Recording for Simulation and Debrief" and showed us some new AV kit.
  • Nick Gosling, from St George's Advanced Patient Simulator Centre, talked about mobile simulation strategies, which gave us a lot to consider when taking our simulated patients out to "in-situ" or "mobile" scenarios.
  • Sarah of Sarah's Scars showed us how to create realistic-looking burns and an open wound with stage make-up, which will undoubtedly help increase the fidelity of our scenarios.
Presenters were available throughout the day to speak with us about any specific issues we had or developments we’d like to see.

What went well?

Don't do this at home
On reflection, some aspects worked out really well. The support from presenters, attendees, sponsors and the SCSC team was fantastic. The ability to network and discuss issues with people in similar fields was both rewarding and motivating. Every presentation provided something for everyone: Ian White showed how to use talcum powder to improve the feel of SimMan's arms and minimise adhesive residue; Kevin Stirling initiated debate on a Tech Room at ASPiH; Andrew Middleton provided individual support on SMOTS; Nick Gosling shared his wealth of knowledge and experience in simulation; and Sarah made moulage look simple.

What were the challenges?

It's just a flesh wound.
There were also areas to improve upon, both in organising and in hosting an event like this. The programme was overpopulated for the time available, which meant some presentations were a bit rushed. Next time I'd try to provide time for the presentations rather than squeezing presentations into the time. I'd also consider using more rooms, for two reasons. Firstly, due to the overwhelming interest we sadly had to turn away some people who wished to attend. Secondly, the variety of techs (some university, some healthcare, some new to the role, some involved for years) meant widely varying levels of knowledge and experience, so a larger number of attendees might give us the opportunity to run parallel sessions tailored to suit the two distinct groups, where appropriate. I will also work on my skills as chair, e.g. formally introducing presenters and ensuring appreciation of their time and efforts is verbalised more clearly.

Final thoughts

All in all a great day, focusing on some of the issues in our wide and varied job descriptions. A big thank you to all who attended, presented and helped make this event the success that it was. The feedback from the attendees was excellent and the planning has already started for SimTechDay 2014.

Scott Rudnicki-Bayne (SimTech, Scottish Clinical Simulation Centre, Larbert)
@5imTech1

Monday, 16 December 2013

It's not my fault, it's the drifting, complex system.


One explanation people give for not embracing a systems-based approach to incident investigation is that it allows the "bad" individual to escape punishment.

In their book "Crisis Management in Acute Care Settings", St Pierre, Hofinger, Buerschaper and Simon state:
"Misapplication of (Reason's) Swiss-cheese model can shift the blame backward from a 'blame-the-person' culture to a 'blame the system' culture. In its extremes, actors at the sharp end may be exculpated from responsibility"
The concern is that some individuals (Robert Wachter labels them the "incompetent, intoxicated or habitually careless clinicians or those unwilling to follow reasonable safety rules and standards") will not be held accountable. These deviants will blame "the system" for not stopping them earlier, for not making it clear enough what constituted a violation, or for not training them better. Wachter's 2012 paper "Personal accountability in healthcare: searching for the right balance" argues that lines must be drawn to distinguish simple human mistakes from sub-standard performance.

In his book "Managing the risks of organisational accidents" James Reason also calls for the drawing of a line "between acceptable and unacceptable behaviour" and calls for the replacement of a "no-blame culture" with a "just culture".

Drawing the line

A number of people/organisations have proposed "line-drawing" mechanisms:

  • David Marx, who says we must differentiate between "human error" (a slip/lapse), "at-risk behaviour" (wilful shortcuts) and "reckless behaviour" (substantial and unjustifiable risk)
  • The English National Patient Safety Agency (NPSA): their decision tree asks us to carry out four tests (deliberate harm, incapacity, foresight, substitution); a rough sketch in code follows the figure below
  • Leonard and Frankel: they consider 4 questions (impairment, deliberately unsafe, substitution, prior history)
  • Reason (see figure)

James Reason's decision tree (although it looks more like a hedge)
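To make the shape of these line-drawing mechanisms concrete, here is a minimal sketch of the NPSA-style tests as a sequential decision procedure. This is an illustration only: the function name, inputs and returned labels are my own assumptions, and the real decision tree carries far more nuance than four booleans.

```python
# Illustrative sketch only: the four NPSA-style tests reduced to a
# sequential decision procedure. Inputs and labels are assumptions
# for illustration, not the NPSA's actual wording.

def npsa_style_triage(
    deliberate_harm: bool,     # was harm intended?
    incapacitated: bool,       # e.g. illness or substance misuse
    failed_foresight: bool,    # knowingly ignored protocols or warnings
    peer_would_do_same: bool,  # the "substitution" test
) -> str:
    """Walk the four tests in order; the first positive test decides."""
    if deliberate_harm:
        return "individual accountability: refer on"
    if incapacitated:
        return "health or occupational support, not blame"
    if failed_foresight:
        return "possible at-risk behaviour: examine the context"
    if not peer_would_do_same:
        return "training or competence issue"
    return "system issue: fix the conditions, not the person"
```

The point the sketch makes is that the substitution test is the last gate: if a comparable, similarly equipped peer would plausibly have acted the same way, the tree points at the system rather than the person.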

Not "Where?" but "Who?"

In his book "Just Culture", Sidney Dekker argues very convincingly that the line is arbitrary and that the definitions are fuzzy. Without being able to travel back in time and then read the mind of the individual who was at the sharp end of the error how can we be 100% sure that an act was intended or not? Instead Dekker argues that the argument should centre around who gets to draw the line(s): peers, regulators or prosecutors.

Abolishing retrospective blame

One idea that may inform the debate is that it is perhaps not sensible, effective or just to attempt to hold "someone" accountable in retrospect, once an error has been committed and a patient harmed. If errors are discovered as part of an adverse event analysis, they should therefore not be used to "blame" individuals. As a corollary, however, it may be appropriate to hold people accountable for their current and future behaviour. It is therefore just as important to have systems in place to check and enforce correct behaviours as it is to be able to analyse past events.

As an example, consider the surgical pause as described in the WHO safety checklist. It makes little sense to blame a person or a team for not completing the pause correctly in retrospect once an error has been reported. It is much better to seek an active safety culture which would pick up the fact that the pause is not being done correctly before a patient has been harmed. It is this proactive approach to safety which is still missing in many places.

This post concludes with Don Berwick's thoughts:

"Blame and punishment have no productive role in the scientifically proper pursuit of safety."

References:


  • Berwick, D. Quoted at: http://www.england.nhs.uk/2013/12/12/never-events-news/
  • Dekker, S. Just Culture. Farnham, UK: Ashgate Publishing Ltd. 2007.
  • Leonard MW, Frankel A. The path to safe and reliable healthcare. Patient Educ Couns 2010;80:288–92.
  • NPSA http://www.ahrq.gov/professionals/quality-patient-safety/patient-safety-resources/resources/advances-in-patient-safety/vol4/Meadows.pdf
  • Reason, JT. Managing the risks of organisational accidents. Aldershot, UK: Ashgate Publishing Co. 1997.  (Decision tree from: http://www.coloradofirecamp.com/just-culture/definitions-principles.htm)
  • Wachter, RM. Personal accountability in healthcare: searching for the right balance. BMJ Qual Saf 2012;0:1–5.
  • Wachter RM, Pronovost PJ. Balancing "no blame" with accountability in patient safety. N Engl J Med. 2009;361:1401-1406 

Thursday, 12 December 2013

Book of the month: Just culture: balancing safety and accountability by Sidney Dekker (1st edition)

This is the second book by Sidney Dekker to be reviewed in this blog. (The first was his earlier book, "The Field Guide to Understanding Human Error"). This is a testament perhaps to both Dekker's readability and relevance to the patient safety movement.

About the author

Sidney Dekker is a Professor in the School of Humanities at Griffith University in Brisbane, Australia. He has published a raft of books on the topics of safety and failure. Dekker is also a pilot and therefore brings practical experience of the workings of a high-reliability industry to his writing.


Who should read this book?

"Just Culture" is aimed at anybody with an interest in how to bring about the conditions required to make the title of the book a reality, from individual practitioners to hospital managers to legislators. In terms of simulation centres, it will inform both your day-to-day debriefing skills, as well as your response to requests for the assessment of "bad" practitioners.


I haven't got time to read 149 pages… (but can manage 149 words)

Dekker's main argument is as follows:
  1.  'Everybody' agrees that the incompetent or reckless individual should be held accountable
  2. These individuals form a minority; the majority of errors are made by well-meaning practitioners who simply need re-training or support
  3. Unfortunately the decision as to who is incompetent is:
    1. Usually made "after the fact" and a number of biases may make it very difficult to consider the intent of the individual
    2. Fraught with adverse consequences. Blame and potential criminal/civil legal proceedings may have the effect of reducing patient safety as adverse events become less frequently reported through fear of a similar fate.
  4. To achieve a just culture one must find the answers to three questions:
    1. Who gets to draw the line?
    2. What is the role of domain expertise in the drawing of the line?
    3. How protected are safety data from the judiciary?

What's good about this book

Dekker uses some very powerful true stories which illustrate the tension between safety and accountability. Primarily these are stories of individuals who have been used as scapegoats for systemic failings: a nurse convicted of wrongly preparing a drug infusion, a captain accused of putting his passengers at risk, a police officer shooting an unarmed man.

Dekker discusses how the legal system, which is meant to provide "justice", is very poor at grasping the complexities of individual cases. The atmosphere of a courtroom several years after a lethal error is very different from a busy intensive care unit and so it may be impossible to relay the multifactorial causes of an error by the person "at the sharp end".

Dekker also confirms something that many suspected, namely that even in aviation (a high-reliability industry where error-reporting is the norm) it is not unusual for senior pilots to withhold information if they think that they can "get away with it". The reason? According to Dekker's source:
"Because you get into trouble too easily. The airline can give me no assurance that information will be safe from the prosecutor or anybody else. So I simply don't trust them with it. Just ask my colleagues. They will tell you the same thing."
Dekker provides useful definitions of reporting (informing supervisors etc.) and disclosure (informing clients, patients, etc.) and why they are both important. He also discusses how errors are often sub-divided (after the fact) into technical errors (honest mistakes made while learning a task) and normative errors (mistakes made by people failing to fulfil their role obligations).

Lastly, Dekker provides us with a step-wise approach to developing a safety culture and encourages us to start "at home, in your own organisation".


What's bad about this book

Dekker uses a number of examples showing how things go wrong but the book is very sparse on incidents where a "just culture" worked. It would have been useful to see some practical examples of just cultures in healthcare.

Final thoughts

Dekker does not pretend that realising a just culture will be easy or quick. However, he does make a good argument for aiming for a just culture, not only because it will be "just" and safer, but because it is likely to be good for morale, job satisfaction and commitment to the organisation (p.25).

Monday, 25 November 2013

Book of the month: If Disney Ran Your Hospital: 9 1/2 things you would do differently by Fred Lee

Before you wonder if you're reading the wrong blog, change the word in the book's title from "Hospital" to "Simulation Centre" and please bear with me for a few more paragraphs. 


About the author

Fred Lee was a senior vice president at Florida Hospital in Orlando before becoming a cast member (as their employees are referred to) at Disney. He helped to develop and facilitate "The Disney Approach to Quality Service for the Healthcare Industry".

Who should read this book?

Lee says: "Although this book was written with hospital managers in mind it should also be appealing to staff at all levels." And he is correct, anybody with an interest in how to improve the experience of the customer/client/participant/patient would benefit from reading this book.

I haven't got time to read a whole book!


The 9 and 1/2 things are listed on the right; however, they will provide little insight without reading the chapters. The 10 chapters can be read in chunks, and the plain language and personal examples make this book an easy read.


What's good about this book?

In every chapter Lee provides both the theory behind the thing you would do differently and practical applications. A few of the concepts Lee explores that may influence the way you run your simulation centre are covered below.

1) "Employees leave managers, not organizations" (p.4) Lee quotes work carried out by Buckingham and Coffman for their book "First, Break All the Rules". They argue that an employee's manager has a much greater impact on aspects such as their loyalty, job satisfaction and efficiency than the culture of the company. An employee who does not get on with their manager will not stay in a post because they love the company.

2) "Selling is trying to get people to want what you have. Marketing is trying to have what people want. When you have what people want, it makes selling unnecessary" (p.5) Lee took this quote from Terrance Ryan, a healthcare marketing consultant. A mistake for a simulation centre is for the faculty to sit around and think about what courses they should run. Inevitably this new course will not be radically different from previous courses. Instead what Lee is suggesting is to run the courses that the participants want to come to, i.e. having what people want. This concept may also be applied to timing. At the SCSC we currently don't run any courses after 5pm or on weekends. However, if this is what participants want then shouldn't we be offering it? (Some  sim centres provide 24/7 access.)

3) Most participants are comparing you against the "ideal" sim centre and not another sim centre (p.10) This is part of "redefining your competition" so that you are no longer comparing yourself with other (nearby) sim centres but instead against what you would expect and want from a sim centre which existed to give participants the best possible learning experience.

4) Work on outcomes and perceptions (p.13) Lee argues that both outcomes and perceptions are important, but that outcomes are improved by teams, while perceptions are improved by individuals. At Disney every individual cast member is aware that they represent the brand and that their words and actions reflect on it. Lee also argues that you may have fantastic teamwork but that you must show this to participants. How would you do this in a sim centre? Ideas might include referring to the faculty on a first-name basis, wearing badges with first names easily legible, and being courteous to the admin and technical staff. The importance of perceptions may also be shown and assessed in a simulation centre by running sessions which include (for example) telling a simulated patient that their operation has been cancelled because the list has run over. Events such as a cancelled operation may have a massive effect on the reputation of a hospital, and ensuring that these conversations are handled with care and empathy may require some training.

5) The four Disney priorities are (in order): Safety, Courtesy, Show, Efficiency (p.28). 
Fig. 1: Columns vs Ladder
Lee discusses how a lot of hospitals have graphics showing columns supporting a roof. The problem with these graphics is that they do not tell employees which of the columns is most important. Lee argues that we should instead have a ladder of priorities, which would make it clear to employees that safety is more important than being patient-centred (for example).


6) Lee says that we must move away from a service to an experience paradigm. Participants at the sim centre do not talk about the service they received but rather the experience they had. We should be focusing on the experience of the participant. What happens when they call or email to book onto a course? Does anyone make them feel welcome when they walk through the door? Are they treated with respect and courtesy? As for your faculty, are the classrooms set up for them, computers & projectors on, kettle on for a cup of tea?

7) Encourage a culture of dissatisfaction. Lee tells us that "being good is the enemy of being great". Your sim centre should constantly be striving for excellence, both in the faculty and in the participants. One reason to strive for excellence is that "excellence is fun" (p.212). Lee means that once you have reached "excellence" you can stop worrying so much about whether what you are doing is right. One of the reasons people are stressed at their jobs is that they feel they cannot cope with the increased acute stress of an emergency; simulation training can help to alleviate this stress.

8) As a sim centre director/manager, how do you know you're doing a good job? One possible method is to ask your staff what they would want from an excellent director or manager, then use this template yearly to allow the staff to feed back on your performance.

What's not good about this book?

Very little…. There is no index and the "1/2 thing" does feel a bit gimmicky. (9 1/2 things… probably sounds more interesting than 10 things….)

Final thoughts

Buy this book, read it and keep it for referring back to. Then make some of the changes Lee suggests and see if they improve your sim centre.


Thursday, 24 October 2013

The success of failure

Sim centres around the world have routines for when participants first arrive. Perhaps a sign-in sheet, distribution of parking permits, directions to the nearest toilets, refreshments, etc.

Most sim centres will also have a period before embarking on scenarios which includes looking at (or setting) the learning objectives and talking about confidentiality. This period helps to create the safe learning environment which will allow the candidates to perform without fear of reprisal or ridicule.

During this "setting the scene" period I talk to the candidates about failure and say something along the lines of:
"We are all human. We all make mistakes. You will make mistakes today. That is alright. I have made some spectacular mistakes in my clinical practice which have resulted in patients being harmed. I have learned from these mistakes and am a better practitioner as a result.
We are all here to learn from one another and I'm sure you would much prefer to make a mistake here on a mannequin, who will not die or come to harm, than on a real patient."

The problem with participants... with all of us... is that we don't like failure. In fact, we actively avoid situations where failure is an option. YouTube has a plethora of "fail" clips; cats failing to jump high or far enough seem to be a particular favourite. Although they can be humorous, perhaps the viewpoint should be that these cats are trying something, failing at it and then learning from it. And that persistence often pays off.

Perhaps there are still places where failure is not seen as failure? In this very readable Inc. article about cadets at the United States Military Academy at West Point, the author Jim Collins talks to some of them about failure. Their responses:
"It's better to fail here and have other people help you get it right than to fail in Afghanistan, where the consequences could be catastrophic"
"Here, everybody knows it's a learning experience"
Collins goes on to claim that repeated failure is built into the West Point culture. Currently our education and medical training systems neither reward nor encourage failure. Big summative tests at the end of periods of training allow you to advance (or not) to the next level. In terms of education and training, what would happen if there were a test on day 1, which everybody fails, followed by repeated tests throughout the year to show you how you are doing and where your strengths and weaknesses lie?

In terms of the simulator, I am not a believer in the idea that the participants must fail in order to learn. (This is the "they're doing really well, let's throw in an 'anaphylaxis'" school of thought.) I think if the participants shine then we can all learn from that. But perhaps we should be more positive about failure, build it into our simulation centre culture and show how failure can be a success if it makes you better. I leave you with two quotes: the first a YouTube comment on one of the cat "fails", the second from Tommy Caldwell, a rock climber who features in the above-mentioned Inc. article.
"This is not a fail. This is an Epic try!" - Dio Rex

"(Failure) is making me stronger. I am not failing; I'm growing." - Tommy Caldwell

Tuesday, 22 October 2013

Book of the month: Crisis Management in Acute Care Settings (2nd ed) by St.Pierre, Hofinger, Buerschaper and Simon

The last two books reviewed here: "Why we make mistakes" and "Set phasers on stun" were light summer reading. "Crisis Management in Acute Care Settings" is the sort of book you need the rainy weather and darker evenings for; its 335 pages of densely packed text require concentration and persistence.

The four authors are: Michael St.Pierre (Anaesthetist, 'Oberarzt' at Erlangen University hospital), Gesine Hofinger (PhD, Cognitive Psychologist, Department of Intercultural Business Communication, Friedrich-Schiller-University, Jena), Cornelius Buerschaper (Researcher, focused on decision-making in crises, who unfortunately passed away in August 2011) and Robert Simon (Director of Center for Medical Education, Harvard Medical School)

Who should read this book?

This book is for the dedicated human factors and/or simulation devotee. Although an aim of the authors was to "formulate the text in an easy to read language" (p.ix) with a target audience of "nurses, technicians, paramedics and physicians" (p.ix) the language used is at times overly complex and the concepts require a more than basic understanding of human factors. This is not the book to give to people who have expressed an initial interest in human factors or simulation.

I haven't got time to read 335 pages...

The book is divided into four parts, so you can decide if there is one particular aspect you wish to explore:
  1. Basic Principles: Error, Complexity and Human Behaviour (81 pages)
  2. Individual Factors of Behaviour (111 pages)
  3. The Team (81 pages)
  4. The Organization (62 pages)

62 pages? That's still too much!

Every chapter finishes with an "In a nutshell" section which provides an overview of the content. It may therefore be worthwhile reading the "nutshells" and then deciding which chapters warrant a more detailed look.

What's bad about this book?

There are a number of minor annoyances such as:


  • Random use of italics e.g. "First, the majority of patients arrive at the ED rather unprepared..." (p.12)
  • Obtuse sentences e.g. "Humans try to balance actual and nominal physiological conditions"(p.66) "The interpretation of sensory impressions tries to form them as good a good Gestalt (the law of "Praegnanz" - good form)."(p.93)
  • Obtuse sentences which are also long e.g. "From an evolutionary point of view, the ability to rapidly produce workable patterns to understand of the environment seems to have been advantageous compared with a 100% scanning and consciously filtering important from unimportant information about the surroundings."(p.95) Including possibly the longest sentence I have ever read: "As complex situations are characterised by the interrelatedness of many system variables (on-scene situation, pathophysiology of the patient, main motives of the different providers and professional groups involved), there will be some goals which are in themselves justified but which are mutually exclusive - be it the parallel technical and medical rescue operation on site or the side by side of diagnostic and therapy during resuscitation of a trauma patient in the emergency room."(p.127)
  • The use of distracting background pictures in diagrams which add nothing to the understanding of the text (p.90,p.188)


A more important oversight is the lack of any reference to our acute medicine and surgical colleagues whom I would consider part of the "acute care setting". With the advent of Non-Technical Skills for Surgeons (NOTSS), the development of courses looking at surgical crisis teamwork and leadership and courses for acute medical practitioners, I would like to see surgeons and acute physicians included in the third edition.

What's good about this book?

This book provides a detailed analysis of human factors and team psychology in a high stakes environment. The book also links the aforementioned with patient safety and so enriches the understanding one may have of how work in human factors/simulation can improve patient safety.
The "in a nutshell" section at the end of every chapter is a useful reminder of what has been discussed. Most chapters also have a "tips for clinical practice" section which may help to convert theory into practice and there is an extensive list of references provided for every chapter.
Most chapters are packed full of information and, once the convoluted language has been overcome (see above), they begin with a good overview of the concepts and then delve into the core of the matter, focusing on each piece in turn.
For example, Chapter 11, "The Key to Success: Teamwork", discusses and defines teamwork and teams, followed by a review of team performance. The latter is analysed by looking at the input into the team from individual characteristics, team characteristics, characteristics of the task and characteristics of the environment. The authors then go on to discuss how teams are formed, how a "good" team performs and where teams can go wrong (communication, shared misconceptions, groupthink, etc.). The level of detail is extremely impressive and educational. The same detail is found in other chapters, such as chapter 3, which looks at the nature of error, and chapter 9, which looks at stress (acute, chronic, coping mechanisms).

There are also some great quotes such as:



  • A situation does not cause emotions; your interpretation of the situation causes emotions (p.99) (with echoes of Jack Sparrow)
  • "As an overall philosophy, it is wise to use good judgment to avoid situations in which superior clinical skills must be applied to ensure safety"(Attributed to Hawkins in Human Factors in Flight, 1987) (p.118)
  • ...human factors should never be equated with "risk factors." Each time mindful healthcare professionals detect, diagnose, and correct a critical situation or an error before it has an opportunity to unfold, it is the human factors that prevent patient harm (p.15)
  • The development of expertise requires struggle. There are no shortcuts (p.33)
  • Practice does not make perfect; instead perfect practice makes perfect (p.33)
  • Teamwork is not an automatic consequence of placing healthcare professionals together in the same shift or room (p.210)
  • If you want to profit from a good team process in a critical situation, you need to rehearse team skills on a frequent basis. (p.216)
  • You will not succeed if you do not talk! Talking is the way team members develop and maintain a shared mental model. (p.217)
  • Teamwork seems to be the essential component in the pursuit of achieving high reliability in healthcare organisations. (p.324)

Final thoughts

Buy this book for your simulation centre. Set aside the time to read it. It is a great reference text and will inform your workshops, lectures, research, simulated scenarios and your clinical practice.

Monday, 30 September 2013

It's all about me, me, me: the problem with advocacy-inquiry in debriefing

The importance of the debrief

It is safe to say that those of us involved in simulation believe that the debrief is a very important part of the learning experience. Many of us (1) would say it was the most important part. A slide taken from the SCSC's faculty development (train the trainers) course helps us see why (Figure 1).


Fig. 1: Scenario and Debrief mapped onto Kolb's learning cycle
When we map the simulation activity onto Kolb's learning cycle, we can see that three-quarters of the process is supposed to occur during the debrief. The debrief is important therefore because much of the learning is supposed to take place during this period.

In 2007, Fanning and Gaba wrote an article entitled "The Role of Debriefing in Simulation-Based Learning". They explain how adult learners do not find "linear teaching models" (i.e. didactic teaching) very effective and instead benefit from active participation (i.e. experiential learning as shown in Kolb's learning cycle above). The role of the teacher is to facilitate the learners' progress around the cycle (hence "facilitator") through a debrief of the events which occurred in the simulated scenario. This role is an important one, as Fanning and Gaba state:
"Data from surveys of participants indicates that the perceived skills of the debriefer have the highest independent correlation to the perceived overall quality of the simulation experience." (2, p.118)
Dismukes and Smith identify three levels of facilitation (3):

  1. High level facilitation: participants more or less debrief themselves
  2. Intermediate level facilitation: somewhere between high and low
  3. Low level facilitation: facilitator directs the entire debrief

Fanning and Gaba state that we should use the highest level possible and, in their paper, go on to list some of the debriefing styles (e.g. funnelling, framing) and techniques (e.g. plus-delta, target-focused) which are used.

Debriefing with good judgment (Advocacy-inquiry)

In conversations with some facilitators at simulation-focused conferences I have been struck by their belief that "debriefing with good judgment" is the "one, true" debriefing style. After all, who does not want to debrief with good judgment?
In their two (similar) papers Rudolph et al discuss the theory and practice of debriefing with good judgment (4,5). Drawing on "a 35-year research program in the behavioral sciences on how to improve professional effectiveness through 'reflective practice'" they cite three elements in their approach:

  1. Uncovering the participant's knowledge, assumptions and feelings (internal frames) allows the facilitator to "reframe" and improve future actions and behaviour
  2. The facilitator has a stance of "genuine curiosity" about the participant's internal frames
  3. The facilitator uses the "advocacy-inquiry" conversational technique to bring his/her judgment and the participant's frames to light.
The first two elements are not controversial, but the third gives me some concern. Rudolph et al define advocacy as:
"A type of speech that includes an objective observation about and subjective judgment of the trainees’ actions" (4, p.49)
To illustrate what they mean, they provide the worked example below, in which I have highlighted the problem (as I see it):


It's "me, me, me" and then "I, I, I". For a facilitated debrief there seems far too much focus on the facilitator. Here's my alternative to the above debrief:

Debriefer: So, how was that?
A group member: I was a bit confused by what was happening.
Debriefer: Really? In what way?
A group member: I wasn't sure who was in charge or what I was supposed to do.
Debriefer: Okay, did anybody else feel that way?
Group: Several members agree.
Debriefer: Did the confusion have any effect on how you dealt with the patient?
etc. etc.

The above is still too facilitator-driven but at least removes the facilitator as the pivot around which the conversation flows. The advocacy-inquiry technique seems to be at best an intermediate-level facilitation and at worst a low-level facilitation where the participants rely on the facilitator to discuss what he/she thought were the important points.

Should we get rid of "debriefing with good judgment"? No. Much of what Rudolph et al discuss is valid. They are correct to say that there is no "non-judgmental" debriefing. Being "genuinely curious" is also extremely important. However, I would argue that one can be genuinely curious without focusing the conversation on the facilitator.

Is there a place for "advocacy-inquiry"? Yes. Fanning and Gaba state that "the debriefing techniques employed need to take individual learning styles into consideration" (2, p.117). High-level facilitation should be used whenever possible, then perhaps stepping down to advocacy-inquiry if the participants need more direction.

In their paper, Rudolph et al follow the debriefer/trainees conversation above with the debriefer saying:


I think that question needs to come much earlier.



References:
1) Issenberg SB, McGaghie WC, Petrusa ER, et al: Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher 2005;27:10–28.
2) Fanning RM, Gaba DM: The role of debriefing in simulation-based learning. Simul Healthc 2007;2:115-125. (Article available for free here.)
3) Dismukes R, Smith G: Facilitation and debriefing in aviation training and operations. Aldershot; UK: Ashgate, 2000
4) Rudolph JW, Simon R, Dufresne R, et al: There's no such thing as "Nonjudgmental" debriefing: A theory and method for debriefing with good judgment. Simul Healthc 2006;1:49–55.
5) Rudolph JW, Simon R, Rivard P, et al: Debriefing with Good Judgment: Combining Rigorous Feedback with Genuine Inquiry. Anesthesiol Clin 2007;25:361-376. (Article available for free here.)

Sunday, 1 September 2013

A train in Spain

On the 25th July 2013 a train travelling from Madrid to Ferrol derailed near Santiago de Compostela. Out of 218 passengers, 79 died. A BBC report quotes the president of the railway firm:
The train had passed an inspection that same morning. Those trains are inspected every 7500km... Its maintenance record was perfect.

One of the train drivers, according to the same BBC report, told the control room that he took the bend at 190km/h; the bend's speed limit is 80km/h. He is also reported to have kept repeating "We're human, we're human." The train was running five minutes late.


A later BBC report explores the safety systems in place on trains: the European Train Control System (ETCS), which can prevent speeding, and the more basic ASFA (Anuncio de Senales y Frenado Automatico), which warns drivers when the speed limit is exceeded but cannot itself stop the train from speeding. The track where the train derailed had only the ASFA system in use. The train driver received three audible warnings to reduce speed; the last warning came 250m (or 4.6 seconds) before derailment. The BBC report also quotes a Spanish journalist who says that there had been concerns about this section of track since it opened, as it required the driver to reduce the train's speed from 200km/h to 80km/h "in just a matter of seconds". According to unnamed officials, the bend does not need additional safety measures "because of the proximity of a major urban center, which requires that drivers slow down trains regardless." Writing in the Guardian, Miguel-Anxo Murado tells us that:
There were arguments for having that section of the route remade completely, but Galicia's particular land tenure regime makes expropriations an administrative nightmare. So the bend was left as it was, and speed was limited there to 80km/h

On the 27th July the train driver is discharged from hospital and taken to a police station for questioning. The Interior Minister accuses him of reckless manslaughter. The driver has 30 years' experience, became a fully qualified driver in 2003 and had been driving on that route for over a year.

On the 31st July we learn that the driver was on the phone to train company staff and/or the train's ticket inspector at the time of the crash. He would normally start braking 4km before the bend (at 200km/hr he has approximately 72 seconds to decrease his speed to 80km/hr).
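Those timings follow directly from time = distance ÷ speed. As a quick sanity check, here is a minimal sketch assuming constant speed over the distance (a braking train would of course take slightly longer):

```python
# Back-of-the-envelope check of the timings quoted in the reports,
# assuming constant speed over the distance.

def seconds_to_cover(distance_m: float, speed_kmh: float) -> float:
    """Time in seconds to cover distance_m at a constant speed_kmh."""
    return distance_m / (speed_kmh / 3.6)  # convert km/h to m/s

# Final ASFA warning: 250 m before derailment at the reported 190 km/h.
print(f"{seconds_to_cover(250, 190):.1f} s")    # ~4.7 s, close to the reported 4.6 s

# Normal braking point: 4 km before the bend at the 200 km/h line speed.
print(f"{seconds_to_cover(4000, 200):.1f} s")   # ~72.0 s
```

In other words, the final warning came roughly five seconds before the bend, against a normal braking window of about 72 seconds.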

The Guardian provides us with additional information:
Renfe is among the firms bidding for a €13bn contract to build a high-speed rail link in Brazil. The terms of the tender reportedly exclude firms involved in the running of high-speed train systems where an accident has taken place in the preceding five years.
In the latest BBC report dealing with the crash, the Public Works Minister is quoted as saying: "Everything is under review, everything is subject to proposals for improvements". There is discussion of beacons on sections of track which require rapid braking, the use of satellite technology and a review of the physical and psychological requirements of train drivers.

The train driver has been charged with involuntary homicide due to "professional recklessness".


The aftermath of this train disaster followed the same course as that of the sinking of the Costa Concordia: initial focus on the "sharp end" (the captain or driver), with immediate denials from the corporate offices of any wrongdoing or failure on their part. Then, once more information came to light, there was an appreciation that perhaps other problems contributed to the sinking or crash.

Instead of an immediate denial of culpability, it would be refreshing if the president or CEO of a company would instead express sorrow at the loss of life and regret at the injuries, coupled with a promise to explore all factors leading up to the event. It is almost inevitable that any such large-scale disaster will have a number of causes and missed opportunities for prevention. However, in a world where such sentiments would affect stockmarkets and bids for contracts, this may remain wishful thinking.

Monday, 26 August 2013

Book of the month: Why we make mistakes by Joseph Hallinan

"Why we make mistakes" is another "light" read for this month following on from last month's "Set phasers on stun". Joseph T. Hallinan is a former writer for the Wall Street Journal and winner of the Pulitzer Prize. The Pulitzer Prize was for investigative reporting on medical malpractice in Indiana, USA, and so it seems apt that he has now written a book on mistakes.

Who's it for?

This book is written for a general audience and, as summer holidays draw to a close, could be squeezed into the last few days off, now that the kids have gone back to school. Hallinan covers a number of human factors terms and concepts such as hindsight bias (p.5, p.65), different types of mistakes (p.8), framing (p.92), anchoring (p.103) and the illusion of control (p.162). He also mentions some of the big names in human factors research such as Simons and Levin (p.14), Kahneman (p.93, p.206), Ericsson (p.172) and Gaba (p.192).

I haven't got time to read a whole book...

Read the introduction, chapter 1, chapter 5 and the conclusion. (For a description of these chapters see below)


What's good about this book?

Hallinan starts off well, stating:
"When something goes wrong the cause is overwhelmingly attributed to human error.... And once a human is blamed, the inquiry usually stops there. But it shouldn't - at least not if we want to eliminate the error."(p.2)
In the subsequent 13 chapters, he goes on to look at different causes of mistakes and offers advice on avoiding them.

The tabletop on the left is obviously narrower and longer...
In the first chapter, "We Look but Don't Always See," Hallinan refers to vision, perception and change blindness. The "door" study by Daniel Simons and Daniel Levin may be a useful example to use when discussing this aspect of human factors in a lecture or workshop. This chapter also uses the two tabletops image to show that even when we know that something is true we still cannot force ourselves to see the truth.

In this first chapter, Hallinan then provides us with another reason that radiologists miss dancing gorillas (and tumours): the less frequently something occurs, the more likely we are to miss it. Because tumours are infrequent, radiologists tend to miss them (this is another argument against requesting tests such as chest x-rays on the off chance that something may be picked up). Baggage screeners fare little better for the same reason; as Hallinan states, in 2006 screeners at Los Angeles International Airport "missed 75 percent of bomb materials." A reassuring thought for the next time you're standing in line at the airport with your belt and boots in one hand and the other hand holding up your trousers.

The next chapter which has a decent shot at dealing with a human factors problem is chapter 5: "We Can Walk and Chew Gum - but Not Much Else". Hallinan explores the myth of multi-tasking and looks at task saturation, trying to do too many things at the same time. An interesting concept here is that we can walk and chew gum at the same time because neither of these tasks requires conscious thought. We can drive a car and have a conversation because (once we are proficient) driving the car no longer requires our undivided attention. This idea may be useful to present in a human factors workshop or talk; i.e. repeated practice of managing crises in a simulator will allow one to use less mental workload on the basics during a real crisis.

Would you work for this man?

A final concept which may prove of interest is that constant change at the top of an organisation may not be helpful to the organisation itself. Hallinan discusses Warren Buffett's company, Berkshire Hathaway, where none of the CEOs have voluntarily left for other jobs in its 42-year history (p.160). Hallinan proposes that the CEOs stay in post long enough to receive feedback and learn from their mistakes. Might this have lessons for the NHS?


Hallinan's advice for making fewer mistakes is given in the conclusion and that is to "Think small". Unfortunately he doesn't then quite manage to explain what this means, but instead goes on to provide other, more useful, advice:
  1. Calibrate yourself. Be honest when you consider decisions that you have made and what their effect has been. (We have a tendency to think we performed better than we actually did.)
  2. Think negatively. By considering what can go wrong you can prepare for that eventuality.
  3. Ask others for advice. A spouse, friend or colleague may provide insight into a problem.
  4. Sleep. Fatigued people perform less well.
  5. Be happy. Happy people make decisions more quickly and are more creative with their problem-solving.

What's not so good about this book?

Areas which could be improved include Hallinan's introduction to System 1 and System 2 thinking, which is somewhat clumsy, quoting Paul Slovic: "Our perception of risk lives largely in our feelings, so most of the time we're operating on system No. 1" (p.95). He also digresses somewhat, spending a significant number of pages discussing the direction sense of men and women (pp. 143-148).

Additionally Hallinan shows a slight lack of experience when he discusses how time affects decisions, stating:
"Many factors can affect the way we frame our decisions. One of the least obvious is time."
Anybody who has worked in a simulator, or in an environment where time is critical, will be aware of how great an effect time has on decision-making. Hallinan also mentions looking for "the" root cause, about which Sidney Dekker would have something to say, namely:
"Asking what is the cause (of an accident), is just as bizarre as asking what is the cause of not having an accident. Accidents have their basis in the real complexity of the system, not their apparent simplicity. (p. xiii)" (The Field Guide to Understanding Human Error"

A final criticism of Hallinan's book (and perhaps much of human factors research) is that so many of the conclusions are based on research carried out on American university (Princeton, Ohio Wesleyan, Duke, Carnegie Mellon, Yale...) students; are these truly representative of the rest of the population?

To conclude

This is another book which should be borrowed, not bought. It has some material that may be useful for those who prepare human factors workshops. For the human factors novice it provides an easy-to-read introduction to some of the concepts and big thinkers in the field.

Tuesday, 6 August 2013

Putting patient safety first

Is your hospital adopting a safety culture? For both patients and staff? Below is a message sent to all hospital staff by the Interim Medical Director at Forth Valley Royal Hospital, Larbert. This is the sort of message we need to receive and spread in order to promote safety.



Dear Colleague

I am grateful to all of you who contribute to teaching and supervision of junior doctors and I am sure I can count on your wholehearted welcome and support to new colleagues starting this week.

Following recent guidance from the GMC and NES, all new doctors at induction are being reminded of their duty to raise any concerns regarding patient safety as soon as possible. This should be done as soon as possible after the perceived safety risk: with their supervisor or with another consultant in the department; with the Director of Medical Education (Dr David McQueen); with an AMD; or with the Medical Director. Concerns should be raised timeously in the knowledge that they will be considered seriously, without prejudice, and will be acted upon appropriately.

The learning environment we provide should ensure that training takes place in a supportive environment where undermining or bullying behaviour is not tolerated. There is ‘zero tolerance’ of undermining and we hope this will not be experienced at any time. If trainees feel that this is an issue for them, they are asked to report it urgently to a member of the consultant staff (if possible, to their clinical or educational supervisor, or to a College Tutor or equivalent).

Thank you again for your invaluable support.


Sent on behalf of
Dr Peter Murdoch
Interim Medical Director  
Forth Valley Royal Hospital