How could ideas from psychology, Lean, systems thinking and behavioural economics help us design systems that are better able to detect and correct error, so that we could ‘mistake-proof’ our own (and others’) thinking?
We know that it is common for humans to feel that they are right. As Kathryn Schulz (@wrongologist!) says in her book “Being Wrong: Adventures in the Margin of Error”, “what does being wrong feel like? It feels *exactly* the same as being right until the point we realise that we’ve done something wrong”. She illustrates this with the “Wile E. Coyote moment”, where the cartoon character runs off a cliff (he is ‘wrong’ at this point, but still feels ‘right’), looks down, realises that he’s standing on thin air (detects the error), and plunges (now he no longer feels ‘right’).
One of the problems we have with detecting error is that we often trust our direct sensory experience as a test of whether we are wrong or not. We know, from optical and auditory illusions, that our eyes and ears can play tricks on us. However, we rarely acknowledge, or act with an awareness, that we can have similar problems with our thinking. There is plenty of evidence that we experience ‘cognitive illusions’, such as the work of behavioural economist Dan Ariely. For the Lean readers, Taiichi Ohno discusses the problem of “illusions involving mental processes” in “Workplace Management”.
Chris Argyris’s research (see my Argyris links) has found that we are often ‘blind’ to the fact that we could be wrong. Further, when the consequences of being wrong are potentially embarrassing or threatening, we are even less likely to be vigilant about detecting error, and if it is discovered that we were wrong we’re likely to bury, bypass or cover up the error (and deny that we’re bypassing the bypass!).
So, if we know that humans act like this (i.e. this is the ‘system’ we have to work with), how would we mistake-proof our thinking? (The concept, not the tool.)
I’d say that we should ask questions like the following:
- How could we reduce the potential embarrassment and threat around being wrong?
- How could we be more honest about the fact that we rely too much on our own private tests of our assumptions (where we often, in effect, ask ourselves “Do I believe what I believe? Why, yes, I do!”)?
- How could we become more aware that we tend to hide the fact that we test our assumptions privately? (e.g. we generally don’t say “I was unsure if I was wrong, but I’ve just tested it with myself and have decided I’m right!”)
- How could we work with others to overcome these problems and remain vigilant about detecting and correcting errors?
What are your thoughts?
Links
- Kathryn Schulz’s “Being Wrong: Adventures in the Margin of Error”* (amazon.co.uk, amazon.com). See her talk at the RSA where she mentions what being wrong feels like.
- Dan Ariely’s “Predictably Irrational: The Hidden Forces That Shape Our Decisions”* (amazon.co.uk, amazon.com)
- Mark Graban’s blog post “Dangers of a Pithy Quote About Patient Safety?”, where he reflects on some of these ideas
(* Disclosure: if you buy these excellent books after using these links, I get money from Amazon to buy more books I’ll blog about!)