Wednesday, 30 April 2014

Book of the month: Human Factors in the Healthcare Setting (Fortune, Davis, Hanson, Phillips (eds))


About the associate editors

Peter-Marc Fortune is a consultant paediatric intensivist, chair and editor of the Advanced Life Support Group (ALSG) Paediatric and Neonatal Safe Transport and Retrieval Course (PaNSTaR) and of the ALSG Human Factors Group. Mike Davis is the lead educator at ALSG. Jacky Hanson is a consultant in emergency medicine, and Barbara Phillips is a consultant in paediatric emergency medicine and director of ALSG.

About the contributors

The contributors include three of the associate editors (all except Barbara Phillips), as well as Simon Carley (Emergency Medicine, Manchester), Trevor Dale (Co-Founder and Managing Director, Atrainability), Sue Norwood (Training & Development Manager at Global Air Training) and John Rutherford (Anaesthesia, Dumfries).

Who should read this book?

This book is intended for clinical instructors on ALSG courses and, at 95 pages, it can be read in a day.

In summary

The book is split into 11 chapters, so the time-poor could jump straight into the chapter of interest:
  • Introduction to human factors in medicine
  • Human cognition and error
  • Situation awareness
  • Leadership and team working
  • Personality and behaviour
  • Communication and assertiveness
  • Decision making
  • Fatigue and stress
  • Key elements in communication: briefing and debriefing
  • Organisational culture
  • Guidelines, checklists and protocols
Most chapters follow the same structure: Introduction & Aims, Background Concepts, Practical Strategy & Application, and Summary.

What's good about this book

This book provides an overview of human factors and its relevance to healthcare. The important players (Flin, Reason, Gaba, Vincent, Salas, St Pierre, Helmreich) and the important literature (To Err is Human, An Organisation with a Memory, Crisis Management in Acute Care Settings) are referred to.
The book provides the reader with some of the human factors (HF) vocabulary which will allow her to talk and think about the issues with others. Given the wide readership this book will enjoy, the adoption of these terms into clinical language can only be a good thing.

What's bad about this book

The book feels slightly cobbled together, perhaps because of the number of authors and the lack of a concluding chapter (there is instead an anthology of cases from the editors' own experiences). It also starts off badly in the preface, where the following are described as the aims:
  • an understanding of the contribution of training in human factors on patient safety
  • an awareness of how this might impact on practice in a variety of settings
The first bullet tells us that we will understand how training in human factors contributes to patient safety; the second is unclear about what "this" refers to. Is it the understanding? The training? Or patient safety?

"HF is a scientific
discipline"
The preface then continues by contrasting Ben Goldacre's view of science/evidence-based medicine, with human factors 'science' and goes on to say that "human factors are not science" which will come as a surprise to human factors scientists.

Chapter 1 goes on to tell us that:
"Medical error, in this book, will be reworded as clinical error - meaning any error that has occurred in the clinical treatment of a patient."
But then, in the same paragraph, it tells us:
"An error is any mistake that has occurred: they are specifically defined as clinical (technical) or human (non-technical)."
So is clinical error both a technical error and a medical error?


Additionally, although the book does provide a multitude of statistics, these can at times be confusing, e.g.:
"It has been found that almost 50% of errors can be due to a cognitive failure…This has led to death or permanent disability in 25% of cases (Wilson et al. 1999)" (and the volume number in the reference is wrong)
What Wilson's analysis of the Quality in Australian Health Care study actually showed was that 16.6% of hospital admissions were associated with an iatrogenic patient injury; cognitive failure played a role in 57% of these; and 24.5% of those led to permanent disability or death. So, in 1995, about 2% of people admitted to hospital suffered permanent disability or died due, in part, to cognitive failure.
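For the numerically inclined, here is a quick back-of-the-envelope check of that chain of proportions (a sketch in Python using only the figures quoted above; the rounding is mine):

  # Chain of proportions from Wilson's analysis, as quoted above
  injury_per_admission = 0.166   # admissions associated with an iatrogenic injury
  cognitive_failure    = 0.57    # of those, proportion in which cognitive failure played a role
  death_or_disability  = 0.245   # of those, proportion leading to permanent disability or death

  overall = injury_per_admission * cognitive_failure * death_or_disability
  print(f"{overall:.1%}")        # ~2.3%, i.e. roughly 2 in every 100 admissions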

The book also mentions systematic errors such as the post-completion error. The example provided is leaving your card in the cashpoint machine after withdrawing cash. This is a particularly poor example as cash machines have been specifically designed with this error in mind and therefore do not give you your cash until you have removed your card. A better example would have been the "original on the photocopier plate" error.

Lastly, and perhaps most worryingly, the book states that "some people are born leaders" (p.28). I am sure that Anders Ericsson would disagree, but perhaps there are some people who think they are born leaders.

Final thoughts

This book is not a bad start for the ALSG. However, given the number of people for whom this will become the bible of human factors, an improved second edition (where, for example, is the mention of feedback loops?) should be published fairly rapidly.


SCSN-ASPiH Symposium 2014: The triumphs and the challenges (by M Moneypenny)

Where's that Training Centre?
The second joint Scottish symposium organised by the Scottish Clinical Skills Network (SCSN) and the Association for Simulated Practice in Healthcare (ASPiH) took place on the 23rd and 24th April 2014.
The venue was the very impressive Uaill Scottish Fire & Rescue Service Training Centre in Glasgow. The Centre is new enough (officially opened in February 2013) that Google Maps still has tarmac and grass on the satellite view.

Triumphs

With perhaps a small nod to recency bias, this was one of the best conferences/symposia/meetings I have attended. The keynote speakers were excellent and the workshops I managed to attend were engaging, thought-provoking and a stimulus to additional work.

Keynote speakers:

1) Gareth Grier (Clinical Director and Education Lead at The Institute of Pre-Hospital Care at London’s Air Ambulance)
"Human factors is as
important as the medicine"
Gareth gave a very interesting talk on pre-hospital care, the golden hour, the training that the air ambulance medics get and the importance of teamwork. Gareth also talked about the adoption of a safety culture and his belief that pre-hospital anaesthesia has to be at least as safe as that performed in-hospital; for example nobody now draws up drugs at the scene. He contrasted the latter with the approach still taken in hospital resus rooms across the country where ampoules and syringes litter the area.

Gareth also discussed how important it is to acquire automatic skills (e.g. intubation, cannulation) so that the cognitive bandwidth freed up by this automaticity can be used for other tasks (e.g. situational awareness (SA)). A useful tip was the use of the CRM terms "eyes in/eyes out" to emphasise the operator's loss of SA during a technical procedure. The operator says "eyes in" to make others aware that they need to be "eyes out", keeping an eye on the scene, the patient's vital signs and so on. Another useful tip was the use of a "rescue phrase" for the team. This can be used instead of telling people out loud that they are making a grievous error. It is used infrequently, but when it is used the receiver pays attention to it.

Lastly Gareth talked about simulation and the need for fidelity. For example, participants should see the smoke, smell the burning and be rained on (with a hosepipe) to create the stressors that they will encounter on the scene.

2) Bill McGaghie (Adjunct Professor in Medical Education, Feinberg School of Medicine, Northwestern University, Chicago, Illinois)
"Self-assessments are biased
and show little relation
to actual performance"
As Jerry Morse, the chair of the SCSN said, Bill McGaghie needs no introduction. His review on simulation-based medical education research, the BEME systematic review on features of simulation that lead to effective learning and his meta-analysis of simulation versus traditional learning are landmarks in the field.

Bill talked about mastery learning and the papers that have shown how a mastery learning programme (in central venous catheter insertion) can lead to outcomes at T1 (educational outcomes in the sim lab), T2 (improved patient care practices), T3 (improved patient outcomes) and T4 (collateral effects, e.g. length of stay, cost, improved baseline). Bill encouraged the audience to find the right intervention and then obtain robust data. As these types of studies build up, the evidence base behind the cost-effectiveness of simulation will grow. This, in turn, will make applying for funding easier.

Bill also showed how the authorship of the papers he has co-authored has changed over the years. He emphasised that the best research programmes have the following 10 attributes:

  1. Shared goals
  2. Functional diversity
  3. Clear leadership (this may change or rotate)
  4. Shared mental models and language
  5. High standards, recognition and credit
  6. Sustained hard work and commitment
  7. Physical proximity
  8. Minimisation of status differences within the team
  9. Maximisation of the status of the team
  10. Shared activities that breed trust
Bill also talked about where he thinks research opportunities lie and about the challenges facing the health professions. I would recommend attending a conference just to hear Bill speak.

3) Brendan McCormack (Professor and Head of Division, Nursing, Queen Margaret University, Edinburgh)
"Patients deserve much
more than not to be
harmed by healthcare staff"
Brendan talked about person-centred (as opposed to patient-centred care) and patient safety. Brendan asked us to consider why PDSA cycles are so popular and how they often do not change or challenge the underlying values, beliefs and assumptions which cause the resulting structures and processes that we want to change. In effect we are making superficial changes without addressing underlying patterns.

In a very powerful talk (which I am doing a poor job of conveying), Brendan called for a change in the current culture, quoting Buckminster Fuller:
"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete."

Workshops:

I attended two workshops, one led by Bill McGaghie on devising your own mastery learning programme and the other led by Jenny Buckland from Amputees in Action. Although very different, both were excellent, hands-on and exactly what you would want from a workshop (as opposed to a lecture).


Challenges

A major challenge of this conference (and many other simulation conferences) is the difficulty in moving beyond the T1 outcomes in research. The majority of posters and presentations dealing with outcomes still report how the participants felt better about themselves, with no long-term follow-up or evidence of translation into practice. I would love to see more work in this area and more sponsorship of this research.


Conclusion

There it is!
A great conference (even though I missed the ceilidh) which I would recommend to anybody interested in simulation-based medical education (SBME). The SCSN-ASPiH symposium is becoming a very nice forerunner to the yearly ASPiH conference.

Monday, 31 March 2014

Book of the month: Thinking, Fast and Slow by Daniel Kahneman

About the author


Daniel Kahneman is a psychologist who won the 2002 Nobel Prize in Economics for his work on prospect theory (which includes the idea that losses feel worse than gains feel good). He has also published extensively on decision-making and judgment. He is currently professor emeritus of psychology and public affairs at Princeton University's Woodrow Wilson School of Public and International Affairs.

Who should read this book?

This book will be of interest to anyone who is interested in decision-making. However, at 418 pages, it requires a respectable amount of time to be set aside to read and digest. Kahneman's stated desire is that the book enriches people's vocabulary so that they can think and talk about the ideas explored therein.

I haven't got time to read 418 pages…

Kahneman splits the book into five parts so the time-poor may wish to focus on areas of interest:
  1. Two systems
    • Probably the best-known of Kahneman's theories, this part explores the idea of a quick-thinking, intuitive and dominant System 1 alongside a lazy, slow System 2. System 2 thinks it is in control, but using a number of examples (such as hungry parole judges) Kahneman shows that System 1 has a major (and under-appreciated) impact on our thinking.
  2. Heuristics and biases
    • This part explores heuristics (simple procedures that help find adequate, though often imperfect, answers to difficult questions; p.98) and biases.
  3. Overconfidence
    • This part explores some of the illusions that we all harbour (of understanding and validity) as well as intuition.
  4. Choices
    • This part explores prospect theory, including risk aversion and risk seeking, as well as the endowment effect (we value things in our possession more highly); a worked example of the prospect theory value function follows after this list.
  5. Two selves
    • This part discusses the idea of an experiencing self and a remembering self. The latter controls the former, is relatively insensitive to the duration of an experience, and remembers the peak and the end of an experience best.
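For readers who like to see the numbers, here is a minimal sketch of the prospect theory value function mentioned in Part 4 (the parameters are the commonly cited estimates from Tversky and Kahneman's 1992 paper, not figures quoted in this review):

  # Prospect theory value function: gains are evaluated as x**alpha and losses as
  # -lam * (-x)**beta, so a loss looms larger than an equal gain (loss aversion).
  def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
      return x ** alpha if x >= 0 else -lam * (-x) ** beta

  print(round(prospect_value(100), 1))    # 57.5   - subjective value of gaining 100
  print(round(prospect_value(-100), 1))   # -129.5 - an equal loss feels over twice as bad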

What's good about this book?

Kahneman carries out an in-depth analysis of both judgment and decision-making. He challenges many of the assumptions we have about ourselves and our rationality. Specific parts of the book are relevant to simulation/human factors and the clinical domain. Kahneman discusses the invisible gorilla and mental workload (p.23), which he says:
"illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness" (p.24)
The halo effect: he's probably a great anaesthetist and a good cook.
Kahneman also discusses cognitive biases such as confirmation bias (a System 1 effect which comes up with a conclusion and then finds supporting arguments (p.45, p.81)), the halo effect (p.82), the availability heuristic (p.133) and recency bias.

In terms of patient safety, Kahneman's observations on hindsight bias and outcome bias (if the patient survives then all your actions were brilliant, if the patient dies all your actions were criminal) are useful reading.

In his review of assessment, Kahneman provides some support for the use of immersive simulation: he explains how he used to assess soldiers' suitability as officer material by seeing how well they worked in a group lifting a log over a wall. (To be clear: that is not a good way of assessing officer material.) Kahneman also uses the Apgar score to show how a scoring algorithm can supplement (or replace) clinical judgment; a sketch of this kind of algorithm follows below.
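To make the "simple algorithm" point concrete, here is a minimal sketch of an Apgar-style additive score: five signs, each rated 0, 1 or 2, summed to a total out of 10. The component names are the standard Apgar ones; the function itself is my own illustration rather than anything from the book.

  # Illustrative only: an Apgar-style score sums five component ratings (each 0-2) to 0-10.
  APGAR_COMPONENTS = ("appearance", "pulse", "grimace", "activity", "respiration")

  def apgar_total(scores: dict) -> int:
      """Sum the five component scores, checking each is 0, 1 or 2."""
      for name in APGAR_COMPONENTS:
          if scores[name] not in (0, 1, 2):
              raise ValueError(f"{name} must be 0, 1 or 2")
      return sum(scores[name] for name in APGAR_COMPONENTS)

  # Example: a vigorous newborn scoring 2 on everything except colour
  print(apgar_total({"appearance": 1, "pulse": 2, "grimace": 2, "activity": 2, "respiration": 2}))  # 9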

Kahneman also discusses how the intuition of an anaesthetist may be more reliable than that of a radiologist. This is because the anaesthetist has the benefit of immediate feedback, while the radiologist receives little feedback about the accuracy of her diagnoses or about the pathologies she fails to detect.

Kahneman provides tips for dealing with some of the problems caused by our fallibility such as:

  • Learn to recognize situations in which mistakes are likely
  • Try harder to avoid significant mistakes when the stakes are high (although this advice may be less useful in the clinical domain where the stakes are very often high)
  • It is easier to recognise others' mistakes than your own (so call for help!)


What's bad about this book?

One of Kahneman's reasons for writing this book is to improve people's vocabulary. At the end of every chapter, therefore, there are a number of sentences which he expects people might say with this new-found vocabulary. Unfortunately, many of the sentences come across poorly, such as:
"I won't try to solve this while driving. This is a pupil-dilating task. It requires mental effort!"
"Unfortunately, she tends to say the first thing that comes into her mind. She probably also has trouble delaying gratification. Weak System 2."
"She is a hedgehog. She has a theory that explains everything, and it gives her the illusion that she understands the world."
The occasional Americanisms (which are unnecessary) make the book at times less easy to read. For example, we are asked: "Which graduating GPA in an Ivy League college matches Julie's reading?" and "How many murders occur in the state of Michigan in one year?"

Kahneman writes in a conversational style which, at times, grates, such as: "Did the results surprise you? Very probably." (And there are a few similar instances where the results did not surprise this reviewer.) Some of the chapters are given "interesting" titles, but this makes it very difficult to work out which chapters one might actually find interesting and worthwhile to read (e.g. "The fourfold pattern", "Tom W's specialty").

Lastly, much of this work is based on studies of US university students. When we are asked to consider cultural differences in debriefing, one must wonder if there are also cultural differences in judgment and decision-making.

Final thoughts

This book is a worthwhile read for those of us interested in human factors, as well as those interested in using assessment in simulation. The index is substantial and if you don't have the time to read the whole book then finding the topics of interest and reading those pages will still be beneficial.

Wednesday, 26 March 2014

Somebody is Nobody: The unspecified receiver in communication

The following story is based on actual events:
It's 10pm in the Emergency Department (ED) when a stand-by call is received. A 25-year-old man has been knocked down by a car travelling at high speed. He has multiple limb and facial fractures and the paramedic is concerned about splenic injury and intra-abdominal haemorrhage as his abdomen is becoming distended.

This case requires efficient and effective teamwork and leadership. The orthopaedic and general surgeons are called to attend, as are the anaesthetist and anaesthetic assistant. The patient arrives in hypovolaemic shock. The team of eight or so people work together to assess and begin treatment.
(Image source: http://www.northernhealth.ca/OurServices/TraumaProgram.aspx)

Life-threatening splenic haemorrhage is thought to be the most likely cause of his continuing deterioration and the patient receives O-ve blood on his way up to theatre. The surgeon begins his laparotomy to control the bleeding. The third unit of O-ve blood is squeezed into the patient and the anaesthetic assistant is asked to contact the transfusion laboratory to find out when the cross-matched units will be available. To her surprise, she is informed that the lab never received a sample and therefore has not even started to cross-match blood. What happened?

If we had a video-recording of the events in the ED resus bay we could look for causes of the missed blood transfusion request. Undoubtedly many factors played a part: perhaps the organisation does not have a standard operating procedure (SOP) for trauma patients, perhaps there is no checklist for making sure that all essential tests and procedures have been carried out… However, one of the factors was the following communication from the anaesthetist:
"And can someone make sure he's cross-matched for 8 units?"


Someone, Somebody, We…. 

In simulated scenarios and in the clinical environment, it is common to hear the same sort of communications:
"Can somebody call for help?"
"Could someone please check he's not allergic to anything?"
"We need a chest drain and we need to get IV access"
The common characteristic of all of these is the unspecified receiver. When a situation is stressful and dynamic, roles and tasks are not rigorously defined, and workload is high, the "somebody" becomes "nobody". The danger then is that a task is not completed, as illustrated in the scenario at the beginning of this post.

There are several reasons why we may use this form of communication:

  • Politeness: We don't want to seem dictatorial
  • Mental workload: It is easier to have an unspecified receiver than to maintain the situational awareness required to appreciate who could carry out a given task
  • Uncertainty: We are unsure who is capable of performing the given task and hope that those who are capable will step forward
  • Unfamiliarity: We don't know the names of the people in the team (cf. WHO checklist brief) and don't want to say "Hey, you, with the glasses! Cross-match some blood!"


How to specify the receiver


The following tips may lead to fewer unspecified receivers:

  1. Always specify the receiver. Some people argue that the receiver need only be specified in crises; however, if we don't specify the receiver during low-workload tasks, there is a risk that we will not do so during high-workload, high-stress tasks.
  2. Know your team-members. If you don't know people's names, ask them or ask for a quick shout-out as to name and specialty. Have name-badges which are visible and legible.
  3. Use closed-loop communication: name the receiver, have them read the instruction back, and confirm it (e.g. "John, please cross-match 8 units" / "Cross-matching 8 units" / "Thank you"). An unspecified receiver does not close the loop; a toy sketch of this idea follows below.
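By way of analogy, here is a toy sketch (my own, not taken from the sources above) of why the unspecified receiver fails: an instruction addressed to a named receiver can be accepted and confirmed, whereas one addressed to "somebody" is never claimed and the loop stays open.

  # Toy model: a task is only "closed" once a named receiver accepts it and the sender confirms.
  class TaskBoard:
      def __init__(self):
          self.tasks = []

      def request(self, instruction, receiver=None):
          """Sender issues an instruction; receiver is None for 'can somebody...'."""
          task = {"instruction": instruction, "receiver": receiver,
                  "accepted": False, "confirmed": False}
          self.tasks.append(task)
          return task

      def accept(self, task, responder):
          """Only the named receiver reads the instruction back."""
          if task["receiver"] == responder:
              task["accepted"] = True

      def confirm(self, task):
          """Sender closes the loop after hearing the read-back."""
          if task["accepted"]:
              task["confirmed"] = True

      def open_loops(self):
          return [t["instruction"] for t in self.tasks if not t["confirmed"]]

  board = TaskBoard()
  named = board.request("Cross-match 8 units", receiver="John")
  vague = board.request("Can somebody make sure he's cross-matched for 8 units?")
  board.accept(named, "John")
  board.confirm(named)
  print(board.open_loops())   # only the unaddressed request is still open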



Further reading

St Pierre, Hofinger, Buerschaper and Simon, Crisis Management in Acute Care Settings (2nd ed.), pp. 235-236.