Friday 29 April 2016

Book of the month: The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us by Christopher Chabris and Daniel Simons

About the authors
Christopher Chabris (@cfchabris) is an associate professor of psychology and co-director of the neuroscience programme at Union College, New York. Daniel Simons (@profsimons) is a professor in the department of psychology and the Beckman Institute for Advanced Science and Technology at the University of Illinois. Chabris and Simons ran one of the most famous experiments in psychology, the "invisible gorilla" (video). A blogpost discussing the conclusions to be drawn from their experiment and related ones is available here: Inattentional blindness or "What's that gorilla doing there?".

Who should read this book?

Anybody with an interest in human performance limitations will find this book an interesting read. In addition, many of the concepts are useful for gaining insight into how people perform in a simulated environment and in clinical practice.

In summary

The book is divided up into an Introduction, six chapters and a Conclusion. The six chapters are:
  1. "I Think I Would Have Seen That"
  2. The Coach Who Choked
  3. What Smart Chess Players and Stupid Criminals Have in Common
  4. Should You Be More Like a Weather Forecaster or a Hedge Fund Manager?
  5. Jumping to Conclusions
  6. Get Smart Quick!

Chabris and Simons explore and explain a number of misconceptions we hold about our own abilities. Each chapter focuses on a specific "illusion": attention, memory, confidence, knowledge, cause, and potential. The authors are interested in the fact that not only do we suffer from these illusions, but we are also unaware of them and are surprised when they are pointed out.

What's good about this book?

This book is well-written and very easy to read. Each chapter focuses on one topic and is peppered with everyday examples to illustrate concepts. These include motorcycle collisions, film continuity errors, a sense of humour, and lifeguards in swimming pools.

Not an effective way to change behaviour
In Chapter 1 the authors discuss why cars hit motorcycles (at times due to inattentional blindness) and explain why "Watch out for motorcycles" posters and adverts are not effective. They suggest that making motorcycles look more like cars, by giving them two widely separated headlights, would make them more visible to car drivers. The same concept of "attention" also explains why the risk of collision with a bicycle or motorcycle decreases as the number of these forms of transport increases: the more often people see a bicycle on the road, the more likely they are to expect one and look for one.

The authors also provide additional details about the various illusions. For example, eye-tracking experiments have shown that those who do not see the "invisible" gorilla spend as much time looking directly at it as those who do.

Chapter 2 looks at memory and uses persuasive experimental evidence to convince the reader that memory is fallible. In particular, contrary to popular belief, people do not have crystal clear memories of what they were doing during exceptional events such as 9/11 or Princess Diana's death. People think they do, because they think they should, and therefore are confident about these (unclear) memories.

Chapter 3 explores confidence. The first example is a doctor who looks up a diagnosis and treatment, which makes his patient feel very uneasy. Isn't a doctor supposed to know this stuff? We encounter similar situations in simulation, where the tension between appearing confident and being able to admit ignorance often results in a less-than-ideal outcome. The notion of moving from unconscious incompetence to unconscious competence is also covered here, with reference to an article ("Unskilled and Unaware of It") that begins with a description of an inept bank robber.

Would you ride this bike?
Chapter 4 explains why we often think we know more than we do. The authors make this point by asking the reader to draw a bicycle and then compare the drawing against the real thing. (Italian designer Gianluca Gimini has created some interesting 3-D renderings of people's concepts of what a bike looks like.) This illusion of knowledge, they argue, played a part in the 2008 banking crisis, as bankers thought they understood both the banking system and the extremely complex collateralized debt obligations (CDOs).

In Chapter 5 Chabris and Simons explore causation and correlation. While many people with arthritis think they can tell when the weather is about to change, researchers have found no correlation. It is likely that their pain levels fluctuate regardless, but when the weather happens to change they ascribe their pain to the shift in atmospheric pressure.
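For readers who like to see the statistics behind this, here is a minimal simulation (our illustration, not the authors'). Pain scores and weather changes are generated independently, so the true correlation is near zero by construction; yet plenty of "the weather changed and my pain was bad" days still turn up, and remembering only those coincidences is what sustains the illusion. All numbers and thresholds are arbitrary.

```python
# Illustrative simulation only: pain and weather are generated
# independently, so any apparent link between them is coincidence.
import numpy as np

rng = np.random.default_rng(42)
days = 365
pain = rng.normal(5, 2, days)            # daily pain score on a rough 0-10 scale
weather_change = rng.random(days) < 0.3  # weather changes on ~30% of days

r = np.corrcoef(pain, weather_change)[0, 1]
coincidences = int(np.sum((pain > 7) & weather_change))

print(f"true correlation: {r:+.3f}")      # near zero by construction
print(f"days where the weather changed AND pain was bad: {coincidences}")
```

Even with no real relationship, a year contains dozens of such coincidences; noticing the hits and forgetting the misses does the rest.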

In Chapter 6 the authors debunk the Mozart Effect, which led parents to play Mozart to babies in the belief that it would make them smarter. Similar claims by Lumosity, a company which alleged that playing its brain-training games would delay age-related cognitive impairment, resulted in a $2 million settlement with the US Federal Trade Commission.

What's bad about this book?

There is very little to fault in this book. Chabris and Simons call limitations in human performance "illusions" because, like M. C. Escher's prints, they persist even when you know what they are. The authors do a great job of explaining the illusions but do not spend enough time on how we might avoid succumbing to them.


Final thoughts

In terms of simulation, this book explains a number of behaviours that we witness in the simulated environment. For example, it is not unusual for participants to "lie" about something that happened. They may be adamant that they called for help, but the debriefer knows (and the video shows) that this was not the case. The participant is falsely remembering a call for help because they think that they would always call for help.

Similarly, in terms of the illusion of confidence, we find that those who are least able are often the most confident, because they lack the insight required to know how poor their performance is.

In terms of human factors, this book provides a number of examples of human fallibility for workshops or other courses. It also reinforces the need for systems that help humans. As an example, changes in a patient's end-tidal CO2 (ETCO2) trace can suggest physiological impairment, but most machines do not draw these to the clinician's attention. A smarter monitor would alert the clinician to such changes instead of relying on his or her continued vigilance.
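To make the "smarter monitor" idea concrete, here is a minimal sketch of threshold-based alerting. The normal range, drop threshold, and sample trace are illustrative assumptions for the example only, not clinical guidance or any real device's logic.

```python
# Minimal sketch of a monitor that flags ETCO2 changes for the clinician.
# Thresholds and sample data are illustrative assumptions, not clinical advice.
NORMAL_RANGE = (4.6, 6.0)  # ETCO2 in kPa; example limits only
MAX_DROP = 1.0             # flag a sudden fall of more than 1.0 kPa

def etco2_alerts(trace_kpa):
    """Yield alert messages for out-of-range values or sudden drops."""
    previous = None
    for i, value in enumerate(trace_kpa):
        if not NORMAL_RANGE[0] <= value <= NORMAL_RANGE[1]:
            yield f"sample {i}: ETCO2 {value:.1f} kPa is outside the normal range"
        if previous is not None and previous - value > MAX_DROP:
            yield f"sample {i}: ETCO2 fell {previous - value:.1f} kPa since the last sample"
        previous = value

for alert in etco2_alerts([5.2, 5.1, 5.0, 3.6, 3.4]):
    print(alert)
```

The design point is simply that the machine watches the trace continuously, so detection no longer depends on the clinician's sustained attention.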

