Wednesday, 19 December 2012

When things go wrong

Many of us who are involved in simulation-based medical education (SBME) continue to carry out clinical duties. There are arguments for and against maintaining a clinical role, which I will not go into in this post. However, for those of us who are still clinically active, there will come a time when, despite the human factors training we have received and deliver, we will be involved in a serious adverse event. I thought I would share some thoughts on what to do (and not to do) and on how we can use these experiences to enrich SBME.

Step 1: Take ownership of your omissions and commissions
The best way not to learn from an adverse event is to deny that it had anything to do with you, or to insist that it was not your "fault". If you were in the room when the event happened, there will have been steps you could have taken to prevent or mitigate it. Ensure that the patient and/or family know that you are sorry for what has happened and that the event will be investigated.

Step 2: Write down a full timeline of the events from your perspective (and ask others to do the same)
Doing this as soon as possible after the event means that you will have the best chance of remembering things.

Step 3: Analyse the timeline and add human factors commentary
Consider, at every stage and from as wide a perspective as possible, the circumstances that led to the event. Were there gaps in knowledge? Did fatigue play a role? Was communication an issue? Were there any error traps such as confirmation bias, loss aversion, or recency bias?

Step 4: Debrief
Use the timelines from as many people as possible to create a "master timeline" (which may contain contradictory accounts) and ask an uninvolved person versed in human factors to lead the debrief. Remember to list all the things that went well. Identify changes in practice which may attenuate or prevent a similar adverse event.
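As an illustration, the "master timeline" is essentially a merge of several individually recalled event lists. Below is a minimal, hypothetical Python sketch (the Event fields and the witness entries are invented for illustration) of how such a merge can keep contradictory accounts visible rather than resolving them away:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    time: datetime       # best estimate of when it happened
    description: str     # what this witness says happened
    reporter: str        # whose timeline the event came from

def merge_timelines(timelines):
    """Pool every individual timeline and sort chronologically.

    Contradictory accounts are deliberately not resolved here: events
    with similar timestamps but conflicting descriptions end up next
    to each other, ready for the debriefer to explore.
    """
    merged = [event for timeline in timelines for event in timeline]
    return sorted(merged, key=lambda e: e.time)

# Hypothetical example: two witnesses remembering the same minute differently.
nurse = [Event(datetime(2012, 12, 19, 14, 5), "alarm silenced", "nurse")]
doctor = [Event(datetime(2012, 12, 19, 14, 5), "no alarm heard", "doctor")]

for e in merge_timelines([nurse, doctor]):
    print(e.time, e.reporter, "-", e.description)
```

In practice the merge will of course be done on paper or a whiteboard; the point is simply that conflicting accounts should sit side by side rather than be averaged away.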

Step 5: Initiate and sustain changes in practice
As a person who was involved in the adverse event, you have a duty to initiate and sustain changes in your workplace (e.g. use of the WHO checklist, time-outs, encouraging people to speak up).

Step 6: Use the increased understanding of this adverse event in your delivery of SBME
Generally speaking, I would discourage an exact "re-run" of a particular adverse event in the simulator; however, many of the circumstances identified in step 3 will be applicable (perhaps with minor changes) to the courses you currently run.

Step 7: Inform the patient/family
Let the patient and/or family know about all of the above and how the lessons learnt are being applied.


I appreciate that the above is not a perfect sequence but it is a good starting point. Lastly, if you are involved in a serious adverse event, remember that you too are human, that you too will make mistakes and that the best possible outcome from a mistake is that you learn from it.

Monday, 3 December 2012

Please tell us how we did.

As I mentioned in my previous post, we collect feedback from the participants on all our courses. We don't (yet) have a generic feedback form, so scenario-based courses ask how each scenario went, faculty-development courses ask what we did well and what we could do to improve, and so on.

The more feedback forms I see, and the more I fill out myself, the more I realise that our feedback forms are (generally) not fit for purpose. (For a similar point of view, see this article from the BBC.) In particular, the tick boxes for "pre-course administration", "administration during the course", "catering", etc. give us very little information. The majority of participants tick "very good" or "good", while the occasional "poor" or "very poor" remains unexplained.

Even specific questions such as "Was the duration of the course: a) Too short, b) About right or c) Too long?" can be difficult to interpret. When people ticked "Too short", I wasn't sure whether they meant that the actual course on the day was too short or that they would have preferred a 2-day course. (When I asked anybody who ticked "Too short" to explain what they meant in the comments section, it turned out that they wanted a 2-day course, not that they objected to finishing at 5:30pm or minded the course finishing "early" at 4:45pm.)

Currently we also ask people to fill out the feedback on a piece of paper, which our administrator then transcribes into an Excel spreadsheet, quite a time-consuming task.
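As an aside, if the feedback were collected through an online form rather than on paper, the transcription step would disappear entirely: most form tools can export responses as a CSV file, which can be tallied in a few lines of code. Here is a minimal sketch in Python (the file name "feedback.csv" and the "Catering" column are hypothetical, not our actual form):

```python
import csv
from collections import Counter

def tally(csv_path, column):
    """Count how often each rating appears in one column of the export."""
    with open(csv_path, newline="") as f:
        return Counter(row[column] for row in csv.DictReader(f))

# Hypothetical usage:
# tally("feedback.csv", "Catering")
# -> Counter({'Very good': 18, 'Good': 5, 'Poor': 1})
```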

The temptation then is to just ask two questions at the end of the course:
1) What did we do well?
2) What could we do better?
Might these two questions get to the heart of the matter? Would the lack of numerical data make it difficult to show funders that the courses are value for money?

I would be interested to hear from anybody who has cracked the feedback puzzle.

Monday, 26 November 2012

Let me give you some feedback...

Before 1997, "feedback" to me was the term for the loud screeches produced when a microphone picks up the output of its own speaker. When I started at Medical School, "feedback" was what you left for lecturers at the end of the trimester. This seemed a pointless exercise in trying to remember who gave a lecture 8 weeks ago and what they were like. The situation was made worse by the fact that there seemed to be no obvious consequences to our feedback.

We ask for feedback on all of our courses (the wisdom and utility of this will no doubt be the topic of another post) and on Friday I received feedback from one of our own faculty who sat in on my debrief.

I have been a strong proponent of feedback in simulation-based medical education (SBME) for a few reasons:

  • Equitability: Facilitators will happily spend many hours giving feedback (via facilitated debriefing) to the course participants. Why should they not then receive feedback on their own performance?
  • Expertise: Expertise comes about through deliberate practice of a skill with feedback
  • Reflection: Good feedback (constructive, informed and timely, as opposed to merely positive) should lead to reflection on practice
The problem I have with getting feedback is that I still find it difficult not to explain away any negative observations. Too often I find myself saying: "Yes, well, he was a difficult candidate...", "Yes, but that wasn't my fault..."

Giving good feedback can be hard, but receiving feedback is always more difficult. Over the years, one of the things I have learnt is not to say anything while receiving feedback (I don't mean giving the silent treatment!). Instead I try to listen and take the comments as a genuine endeavour to improve my skills.

One of the observations from my colleague was that I occasionally use a scattergun questioning technique: "So what were you thinking? What was happening? Why do you think this happened?" The excuse on the tip of my tongue is that I get excited when debriefing and want to help the participants as much as possible by pushing/pulling them along. However, the correct response is that the observation is true, that scattergun questioning is not the right approach, and that silence and pauses are very effective in debriefing.

So, are you getting feedback on your feedback? And if not, why not?

Monday, 19 November 2012

Mannequin, manikin, manakin…


One of the reviewers' comments on an (ultimately rejected) article we submitted to Simulation in Healthcare (SiH) referred to our use of the word “manikin”. SiH’s guidance for authors, we were told, was to use “mannequin” to refer to patient simulators.

Etymology
[Image: the Manneken Pis in Brussels]
“Manikin” derives from the Dutch “manneken”, a diminutive of “man”; a manikin is therefore a little man, a term which certainly does not sit well with the CAE Healthcare METI HPS.

Mannequin derives from the French word of the same spelling, which itself derives from the Dutch “manneken”. So “manneken” led to two different words: manikin and mannequin.

The Oxford English Dictionary (OED) tells us that a manikin is:
1) a) A small representation or statue of a human figure, b) jointed wooden figure of the human body or c) model of the human body designed for demonstrating anatomical structure
2) A little man, dwarf or pygmy

According to the OED, a mannequin is:
1) A person employed by a dressmaker, costumier, etc., to model clothes
2) A model of (part of) a human figure, used for the display of clothes, etc.

The OED, it would seem, suggests that what we use in simulation are manikins; however, David Gaba disagrees.

Simulation in Healthcare
In an article written in 2006, Gaba, who is the Editor-in-Chief of SiH, explained his position. I use the term “his” as opposed to “their” or “its” on purpose. Gaba discusses the origin of the two words and then uses Google searches to show that “when discussing simulation in healthcare, for whatever reason, mannequin has become the more common term”. Finally, Gaba freely admits that he is biased towards using the term “mannequin”. However, is this preponderance of “mannequin” still true today?

Current usage
Repeating Gaba’s Google searches (see the bottom of this post) tells us three things:
1) “Mannequin” still outperforms “manikin” when simulation is included in the search, although the difference remains small.
2) In a massive reversal, “mannequin” now greatly outperforms “manikin” when searching for resuscitation.
3) There are now many more hits for both terms, an increase broadly in line with the growth in the number of websites between 2006 (100 million) and 2012 (644 million).

In terms of manufacturers, Laerdal uses “manikin” for its lower-fidelity models and, along with CAE Healthcare, the term “patient simulator” for the higher-fidelity ones. Gaumard refers to HAL as a manikin.

What term to use?
[Image: a manakin]
In the end, although for aesthetic reasons I prefer “mannequin”, I think it is irrelevant which term we use as long as the usage is consistent within the article... and we avoid “manakin”, as this is a type of bird.



Google hits for each term in 2012 (2006 figures in parentheses):
Manikin + resuscitation: 219,000 (34,000)
Manikin + CPR: 587,000 (74,000)
Mannequin + resuscitation: 3,550,000 (23,000)
Mannequin + CPR: 444,000 (49,000)

Manikin + simulation: 510,000 (40,000)
Mannequin + simulation: 520,000 (88,000)
Manikin + medical simulation: 136,000 (21,000)
Mannequin + medical simulation: 169,000 (33,000)

Friday, 16 November 2012

Equality, Diversity & Simulation

I attended a three-hour equality and diversity (E&D) workshop this week. I would like to say that this was because I have a real interest in E&D; in truth, I attended because it is a requirement of my position as centre director (and hence job interviewer). I had mentally prepared myself for a tedious, mind-numbing session and physically prepared myself by having a double shot of coffee in my mocha.

And yes, there, as expected, is the PowerPoint presentation; there is the soft toy thrown from participant to participant as a signal for whose turn it is to speak; there is the "share with us one thing you're passionate about"... So far, so familiar, so conventional.

But, in the second half of the workshop, a surprise: a DVD about a lady called Jane Elliott. Jane, a schoolteacher, ran an exercise with the kids in her class to show how easy it is to discriminate, how quickly we adopt a position of superiority or inferiority and the effects of this discrimination on both parties.

What has this got to do with simulation?

The first thing that struck me was that all of the mannequins we use at the SCSC are "white", although other skin colours are available. In terms of reflecting the local population, the latest available figures for the NHS Forth Valley area are from the 2001 census, in which 3,180 people (1.14%) recorded themselves as belonging to an ethnic group other than white. In Scotland as a whole (in 2001), 2% of people belonged to a minority ethnic group. So perhaps this is not an issue? Perhaps simulation centres in parts of the UK with higher proportions of ethnic minorities reflect this within their mannequin population?

The next thing that struck me was Jane Elliott's discussion about power (in the DVD she compares the power, or perceived power, of an older, tall, white man with that of a young, smaller, black woman). One of my interests within simulation is power inequality, and I have almost totally focused on the inequalities between professional grades (nurse vs. doctor, consultant vs. trainee, etc.). Jane broadened my horizons to include the way we might treat those who (for example) are of a different skin colour or nationality, or those who have a visual or mobility impairment.

Simulation, as Gaba says, is a technique. It is not the solution to all problems; it cannot solve the problem of discrimination. However, active or passive discrimination by staff against patients (see, for example, the 2007 CEMACH report) or by staff against other staff may result in patient harm. We can (and should) therefore use the scenarios we run in our simulation centres to focus on all aspects of care which may reduce (or improve) patient safety, including the issues surrounding equality and diversity.

(Footnote: I would be interested to know if anybody out there uses E&D issues in any scenarios within their simulation centre.)

Sunday, 11 November 2012

ASPiH 2012 annual conference: the "challenging"

In simulation-based medical education (SBME) we rarely talk about "what went badly", instead we talk about "what was challenging for you".

From this point of view, a major challenge of the ASPiH conference was its poster display. Although I will not single out individual posters, too many still followed the "we ran a simulation course/day/scenario and everybody felt better afterwards" style. The great Bill McGaghie talks about translational science research: T1 refers to results obtained in the simulation centre, T2 to improvements in patient care practices and T3 to improvements in patient and public health.

As an example, an SBME course on cardio-pulmonary resuscitation (CPR) might show that the participants felt more confident (T1) or were faster at applying oxygen to the mannequin (T1). Following these participants at their workplace might show that they were better/faster/more efficient at CPR than non-participants (T2) and that their patients were more likely to survive (T3).

The ASPiH posters gravitate towards the T1 level, which I think we should be moving away from. There are barriers to improving the quality of the posters, including (perhaps) a desire not to turn down too many submissions and the difficulty of gathering T2 and T3 evidence. Nevertheless, future conference organisers should make it clearer that T1 posters will become less and less acceptable. We need to raise our game, and ASPiH needs to nudge us along.

(As an aside, T1 evidence which shows major negative effects may still be useful to publicise so that all of us can learn from what not to do.)

Saturday, 10 November 2012

ASPiH 2012 annual conference: the "good"

The recent ASPiH conference in Oxford (6th-8th November) was my third. It has improved year on year. The keynote speeches were a highlight.

The two excellent keynote speeches by K. Anders Ericsson and Donald Clark were bracketed by less impressive, but still thought-provoking, presentations by Jonathan Wyatt and Tom Reader.

Jonathan Wyatt's presentation style was original, mixing vignettes about his travels and encounters with a more traditional presentation. Jonathan had obviously put a lot of work and thought into this keynote, so there was certainly no lack of preparation. Where it fell short, however, was in its applicability and relevance to the audience of the entire conference. An excellent speech for an interested minority became an unsuitable performance (with its incorporation of Gilles Deleuze and post-structural theory) for the plenary.
I did like his comment on how, at times, we go to a lecture or presentation in order to switch off. We disparage orators for their use of PowerPoint slides while at the same time praying that they do not make the session interactive.

Tom Reader had the misfortune of delivering a PowerPoint-based, behind-the-lectern presentation of the kind Donald Clark had mocked earlier that day. However, Tom's idea that psychologists need to get involved in building scenarios and examining outcomes was well founded. His challenge to us, that the holy grail of simulation (showing that what we do improves morbidity and mortality across the board) may be unachievable, or even the wrong goal, was a good one. Tom also mentioned the ceiling effect of the simulator: at some point people may stop improving despite further simulator sessions. This was relevant to Dr Ericsson's talk.

K. Anders Ericsson discussed expertise and the lessons which may be learned from expert chess players and musicians. Malcolm Gladwell's 10,000 hours to become an expert was referred to and supported. However, Anders stressed that those hours must consist of deliberate practice. As a guitar student I know exactly where he's coming from: repeatedly putting your fingers in the wrong position for a given chord means you become very good at playing the chord badly.

Anders also showed that "older/more experienced" is not necessarily "better". If we fail to keep up deliberate practice, our skills deteriorate. This may be one reason senior doctors fear coming to the simulation centre: those whom we would expect to be the most experienced (and best) in a scenario may disappoint us and, much more importantly, themselves. This makes the creation of a safe learning environment, with an understanding faculty, an essential foundation of good simulation-based medical education.

Donald Clark provided the most thought-provoking and challenging keynote. Why do we continue to use lectures to provide information? Why are we paying traditional universities thousands of pounds so that students can sit in lecture theatres to hear a one-off talk from a person behind a podium? Can we use adaptive learning in simulation? Should we embrace "failure" in the simulator? Donald gave me the kick up the backside I needed to look at how we can communicate better and smarter. This blog is a small first step.



Friday, 9 November 2012

Blame Donald Clark

In his keynote address at the ASPiH 2012 conference in Oxford, Donald Clark challenged the audience to move away from the whiteboard/blackboard/didactic lecture towards interactive and mobile education. He also encouraged us to embrace social media. The Scottish Clinical Simulation Centre has a Twitter feed, but we didn't (until now) blog or use Facebook or any of the other numerous social media channels for telling people who we are and what we do.
So, welcome to the blog of the SCSC, a first attempt to share our/my thoughts on simulation and education.
If you are disappointed by what you find in this and future blogposts, blame Donald Clark; I sure will. :-)