Monday, 31 March 2014

Book of the month: Thinking, Fast and Slow by Daniel Kahneman

About the author

Daniel Kahneman is a psychologist who won the 2002 Nobel Prize in Economics for his work on prospect theory (which includes the idea that losses feel worse than equivalent gains feel good). He has also published extensively on decision-making and judgment. He is currently professor emeritus of psychology and public affairs at Princeton University's Woodrow Wilson School of Public and International Affairs.

Who should read this book?

This book will be of interest to anyone interested in decision-making. However, at 418 pages it requires a respectable amount of time to be set aside to read and digest. Kahneman's stated desire is that the book enriches people's vocabulary so that they can think and talk about the ideas explored therein.

I haven't got time to read 418 pages…

Kahneman splits the book into five parts so the time-poor may wish to focus on areas of interest:
  1. Two systems
    • Probably the best-known of Kahneman's theories, this part explores the idea of a quick-thinking, intuitive and dominant System 1 paired with a lazy, slow System 2. System 2 thinks it is in control, but using a number of examples (such as hungry parole judges) Kahneman shows that System 1 has a major (and under-appreciated) impact on our thinking.
  2. Heuristics and biases
    • This part explores heuristics ("simple procedures that help find adequate, though often imperfect, answers to difficult questions", p. 98) and biases.
  3. Overconfidence
    • This part explores some of the illusions that we all harbour (of understanding and validity) as well as intuition.
  4. Choices
    • This part explores prospect theory, including risk aversion and risk seeking, as well as the endowment effect (we value things in our possession more highly).
  5. Two selves
    • This part discusses the idea of an experiencing self and a remembering self. The latter controls the former, is relatively insensitive to the duration of an experience, and remembers the peak and the end of an experience best.

What's good about this book?

Kahneman carries out an in-depth analysis of both judgment and decision-making. He challenges many of the assumptions we have about ourselves and our rationality. Specific parts of the book are relevant to simulation/human factors and the clinical domain. Kahneman discusses the invisible gorilla and mental workload (p. 23), which, he says, "illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness" (p. 24).
[Image caption: The halo effect: He's probably a great anaesthetist and a good cook.]
Kahneman also discusses cognitive biases such as confirmation bias (a System 1 effect that jumps to a conclusion and then finds supporting arguments, p. 45 and p. 81), the halo effect (p. 82), the availability heuristic (p. 133) and recency bias.

In terms of patient safety, Kahneman's observations on hindsight bias and outcome bias (if the patient survives, all your actions were brilliant; if the patient dies, all your actions were criminal) are useful reading.

In his review of assessment Kahneman provides some support for the use of immersive simulation as he explains how he used to assess soldiers' likelihood of being good officer material by seeing how well they worked in a group lifting a log over a wall. (To be clear: That is not a good way of assessing officer material.) Kahneman also uses the Apgar score to show how a scoring algorithm can supplement (or replace) clinical judgment.

Kahneman also discusses how the intuition of an anaesthetist may be more reliable than that of a radiologist. This is because the anaesthetist has the benefit of immediate feedback, while the radiologist receives little feedback about the accuracy of her diagnoses or pathologies she fails to detect.

Kahneman provides tips for dealing with some of the problems caused by our fallibility such as:

  • Learn to recognize situations in which mistakes are likely
  • Try harder to avoid significant mistakes when the stakes are high (although this advice may be less useful in the clinical domain where the stakes are very often high)
  • It is easier to recognise others' mistakes than your own (so call for help!)

What's bad about this book?

One of Kahneman's reasons for writing this book is to improve people's vocabulary. Therefore, at the end of every chapter there are a number of sentences which he expects people might say with this new-found vocabulary. Unfortunately many of the sentences come across poorly, such as:
"I won't try to solve this while driving. This is a pupil-dilating task. It requires mental effort!"
"Unfortunately, she tends to say the first thing that comes into her mind. She probably also has trouble delaying gratification. Weak System 2."
"She is a hedgehog. She has a theory that explains everything, and it gives her the illusion that she understands the world."
The occasional (and unnecessary) Americanisms make the book at times less easy to read. For example, we are asked: "Which graduating GPA in an Ivy League college matches Julie's reading?" and "How many murders occur in the state of Michigan in one year?"

Kahneman writes in a conversational style which, at times, grates, such as: "Did the results surprise you? Very probably." (And a few similar instances where the results did not surprise this reviewer.) Some of the chapters are given "interesting" titles (e.g. "The fourfold pattern", "Tom W's specialty"), but this makes it very difficult to identify which chapters one might actually find interesting and worthwhile to read.

Lastly, much of this work is based on studies of US university students. When we are asked to consider cultural differences in debriefing, one must wonder if there are also cultural differences in judgment and decision-making.

Final thoughts

This book is a worthwhile read for those of us interested in human factors, as well as those interested in using assessment in simulation. The index is substantial and if you don't have the time to read the whole book then finding the topics of interest and reading those pages will still be beneficial.

Wednesday, 26 March 2014

Somebody is Nobody: The unspecified receiver in communication

The following story is based on actual events:
It's 10pm in the Emergency Department (ED) when a stand-by call is received. A 25-year-old man has been knocked down by a car travelling at high speed. He has multiple limb and facial fractures and the paramedic is concerned about splenic injury and intra-abdominal haemorrhage as his abdomen is becoming distended.

This case requires efficient and effective teamwork and leadership. The orthopaedic and general surgeons are called to attend, as is the anaesthetist and anaesthetic assistant. The patient arrives in hypovolaemic shock. The team of 8 or so people work together to assess and begin treatment. 

Life-threatening splenic haemorrhage is thought to be the most likely cause of his continuing deterioration and the patient receives O-ve blood on his way up to theatre. The surgeon begins his laparotomy to control the bleeding. The third unit of O-ve blood is squeezed into the patient and the anaesthetic assistant is asked to contact the transfusion laboratory to ask when the cross-matched units will be available. To her surprise she is informed that the lab never received a sample and therefore has not even started to cross-match blood. What happened?

If we had a video-recording of the events in the ED resus bay we could look for causes of the missed blood transfusion request. Undoubtedly many factors played a part: perhaps the organisation does not have a standard operating procedure (SOP) for trauma patients, perhaps there is no checklist for making sure that all essential tests and procedures have been carried out… However, one of the factors was the following communication from the anaesthetist:
"And can someone make sure he's cross-matched for 8 units?"

Someone, Somebody, We…

In simulated scenarios and in the clinical environment, it is common to hear the same sort of communications:
"Can somebody call for help?"
"Could someone please check he's not allergic to anything?"
"We need a chest drain and we need to get IV access"
The common characteristic in all of these is the unspecified receiver. When a situation is stressful and dynamic, roles and tasks are not rigorously defined, and workload is high, "somebody" becomes "nobody". The danger then is that a task is not completed, as illustrated in the scenario at the beginning of this post.

There are several reasons why we may use this form of communication:

  • Politeness: We don't want to seem dictatorial
  • Mental workload: It is easier to have an unspecified receiver than to maintain the situational awareness required to appreciate who could carry out a given task
  • Uncertainty: We are unsure who is capable of performing the given task and hope that those who are capable will step forward
  • Unfamiliarity: We don't know the names of the people in the team (cf. WHO checklist brief) and don't want to say "Hey, you, with the glasses! Cross-match some blood!"

How to specify the receiver

The following tips may lead to fewer unspecified receivers:

  1. Always specify the receiver. Some people argue that the receiver need only be specified in crises; however, if we don't specify the receiver during low-workload tasks, there is a risk that we will not do so during high-workload, high-stress tasks.
  2. Know your team-members. If you don't know people's names, ask them or ask for a quick shout-out as to name and specialty. Have name-badges which are visible and legible.
  3. Use closed-loop communication. An unspecified receiver does not close the loop.

Further reading

St. Pierre, Hofinger, Buerschaper and Simon, "Crisis Management in Acute Care Settings" (2nd ed.), pp. 235-236