You can watch the video of my talk from the Lean Software Systems Consortium (LSSC12) conference in Boston earlier this month.
Visualising work is a key part of the Kanban Method. In many situations it can lead people to realise there are problems or opportunities for improvement, which can be successfully addressed by simply changing behaviour (single-loop learning). However, in some situations, particularly where there could be embarrassment or threat, these changes may require challenging existing mindsets (double-loop learning). Using practical examples drawn from directly helping teams, this talk presents a model for understanding how we can proactively engage in conversations that increase the chances of capitalising on the value that visualising the work provides.
Here’s a review from Jack Vinson:
Benjamin Mitchell used the topic of “what comes after visualization” to start a conversation of what to do once you’ve got some visualization. He particularly talked about Chris Argyris‘ Ladder of Inference (and expanded by Peter Senge), which he used as a way of thinking about how we see things and how we interact with our colleagues and coaching / consulting clients. He particularly warned about staying away from making assumptions and working at the levels of Select and Describe (rather than Explain, Evaluate, Propose Actions). Since Argyris is one of the promoters of double-loop learning, it is not surprising that Benjamin discussed the Mindset -> Actions -> Results learning loop. I liked the discussion of taking different actions to get results vs changing one’s mindset because the Actions aren’t getting anywhere like where they need to go.
Here were some reactions from Twitter:
Let me know your reaction in the comments.
When we spot an elephant in the room, or an undiscussable topic that isn’t being addressed, it is tempting to tackle it head on. However, just naming the elephant, or telling people that they’re not discussing an undiscussable topic, is rarely a productive approach.
Having spotted an elephant in the room it is tempting to shout about it
Here’s a scenario from a team’s retrospective meeting:
The team had talked about a problem and had decided to hold a workshop to focus on that issue. Kelly, the external consultant, saw a problem that no one was mentioning.
“I think there’s an elephant in the room here!” declared Kelly
“Yes, there’s a proposal to have a workshop, but no one has mentioned that last time we ran a workshop no one turned up! This seems like an undiscussable topic!” said Kelly
There was general agreement that people hadn’t shown up for the last workshop. After some discussion the team decided “let’s not have a workshop then” and the meeting ended.
I think Kelly’s intention was honourable – how can I get the group to start discussing things so they can better understand the causes of problems and ways to avoid them in future?
However, in this scenario, Kelly didn’t get what she wanted – rather than get to the cause of their problems in the past, they just decided to bypass these issues and cancel the workshop.
Unfortunately, I think Kelly’s behaviour may have contributed to the results she got, including unintended consequences such as possibly reducing the chance that the team would feel comfortable talking about ‘undiscussable’ topics in future.
Problems with the approach
There are several possible problems I see with Kelly’s approach.
Unclear intent. Kelly raises the issue of the group’s not mentioning that no one attended the previous workshop, but she doesn’t state her intention for mentioning it. If you are not explicit about your intention for saying something, people will automatically invent their own reason, which may not be what you wanted.
Negative assumptions about others’ motives without providing evidence. When Kelly makes the claim that there’s an “elephant in the room” it could be interpreted as her saying that the group were all aware that no one turned up to the previous workshop and that they were all deliberately not mentioning it.
Kelly doesn’t provide any evidence that others are all aware of the issue, or that they have made a deliberate decision to avoid discussing the issue. Kelly’s claim is high on the ladder of inference.
Making an assumption about someone else’s motive, such as thinking “this group is deliberately not talking about a problem they know to exist” is an example of an attribution. Making negative attributions like this without providing evidence can mean that people feel confused or unjustly accused. Once people feel accused then it increases the chance they will respond defensively or withdraw from the conversation.
No curiosity about how others see the situation. Kelly states her view to the group but doesn’t ask whether they see things the same way or see it differently. I’d assume that Kelly was acting as if her view was obvious to others. Since Kelly asked no questions about how others see the situation and expressed her view in a definite way, it reduces the chance that others will offer their view or that Kelly would find out if others saw the situation differently.
Changing the focus from a conversation’s content to its style is challenging. Moving from talking about the topic of a conversation (“we should have a workshop”) to talking about the style of the conversation (“we’re not discussing the undiscussable”) is a high-impact change of direction. “Going meta” like this is often worthwhile but takes skill, time and energy. To justify the investment it is better to wait until you have solid evidence of a pattern of this type of behaviour. If it’s just a single instance, it is more effective to keep talking about the content (“how can we make sure people turn up to this next workshop?”) rather than the communication pattern (“we’re not discussing the undiscussable”).
A more effective approach
A more effective approach may have been as follows, with annotations in brackets on what I’m trying to model:
I’d like to check a concern I have about how we are discussing the plan to hold a workshop [share your intent] and see what others’ views are. My recollection was that the last time we planned a workshop no one showed up. I was speaking to Bob and Jane about this yesterday [share your evidence]. Do you remember the last workshop the same way or differently? [be curious about others’ views]
If there was agreement around the fact no-one showed up to the last workshop I’d continue:
This is making me wonder if we are avoiding talking about what happened around the last workshop [state your reasoning]. I would like to talk briefly about what happened so we can avoid the same problems happening with this workshop [state your intent]. In terms of the last workshop, would anyone be willing to share what caused them not to attend? [inquire into others’ views]
Let me know your view in the comments.
Hi, I’m Benjamin. I hope that you enjoyed the post. I’m a consultant and coach who helps IT teams and their managers consistently deliver the right software solutions. You can find out more about me and my services. Contact me for a conversation about your situation.
Image Credit: David Blackwell on Flickr
As part of my workshops on effective communication, especially around introducing innovative new process approaches such as Lean and Agile, I’ve found examples that demonstrate how people can act with “skilled incompetence” and “skilled unawareness”.
One exercise I run is to help people look at how effectively they communicate feedback on difficult topics. The scenario (based on Argyris’ XY case study) is that a manager, Y, has given some feedback to a poorly performing team member X. I show the participants a list of comments that Y (the manager) has said to X (team member) and ask them to pretend that Y has come to them as a coach and asked:
Can you review the feedback I gave to X? I’m interested in learning how to improve – how effective was I?
At the recent London Software Practice Advancement (SPA2011) conference, I ran this workshop and one of the workshop participants said they’d give Y the following feedback:
Y, your feedback to X was ineffective because you failed to focus on things that X was doing well, and you also didn’t illustrate your feedback with an example of what X specifically did
I think the participant’s statement illustrated that we tend to store theories of effective behaviour in our heads. In this case, the participant’s feedback contains a micro-theory that:
Feedback is effective when it:
1) focuses on behaviours that are done well and
2) illustrates the feedback by using a specific example of the behaviours.
The participant confirmed that this was their view.
I was curious then to apply the participant’s ideas about effective feedback to the feedback they gave Y. I asked:
Let’s use your own theory of effectiveness to analyse the feedback you gave Y. I don’t think your feedback was about something that Y was doing well and I don’t think you gave an example of what Y specifically did. Do you see it that way or see it differently?
The participant agreed that the feedback they provided was not consistent with their own beliefs about effective feedback. They could see the inconsistency once it had been pointed out, but they were unaware of it when they originally produced the feedback.
In effect the workshop participant was telling Y, “do as I say, but not as I do” without admitting this directly. It’s likely that the recipient of the feedback, in this case Y, will experience it as inconsistent and puzzling.
This small example illustrates several points of Argyris’ concepts of skilled incompetence and skilled unawareness. Firstly, the participant’s comment was incompetent in the sense that it did not meet their own beliefs about effective feedback. Secondly, the feedback was skilful in the sense that it was automatically produced without the participant being aware of the inconsistency. Argyris refers to the lack of awareness of our own skilled incompetence as skilled unawareness.
Although it’s common for most of us to admit we can sometimes act in a hypocritical way, most of us are not aware that we are doing so when we are producing the action.
There are many benefits from accepting and finding ways to overcome our tendency to act with skilled incompetence and skilled unawareness. Adopting the mindset and behaviours of the mutual learning model (based on Argyris & Schön’s Model II) involves accepting we can act like this and choosing to act in ways that invite others to help us realise it in order to be more productive. I’ll write more about this in future.
Photo credit: Adwale Oshineye
I’m currently focussed on improving my own skills around the Mutual Learning model (‘model II’ from Argyris & Schon’s Theory of Action). In order to do this, I’ve been using a Left Hand Right Hand Case Study approach, one of the key learning tools. In the interest of being open and sharing my experience with others, I wanted to highlight some of my recent reflections. I’m doing this to help me with my learning and to invite others to share their views on the approach and my goals.
Creating a Left Hand Column / Right Hand Column Case Study
The Left Hand Right Hand Case Study approach is a very simple tool. In order to describe what it is, I’ll go through how I created mine.
I started by describing the situation and what I was trying to achieve in the situation. In my case, I’d had a conversation with someone from another organisation about my experiences of trying to discover if there was a potential to work together in future. I was unclear about the status of the discussions and had some concerns about how the situation had developed and wanted to talk to someone I knew from that organisation about the situation.
The next step is to create two columns. I started with the right hand column, which is “what was said”, written like a script. I did this from memory, as the conversation had happened a couple of weeks earlier (many people worry about whether the approach will work with a remembered conversation – it does). I put it aside for a week or so before filling in the left hand column, “what I thought, but did not say”. I was pretty surprised when I started filling that column in, as it highlighted the gap between how I think I act and how I actually act (espoused theory and theory-in-use in Argyris’ terms). I then put it aside for another couple of days because I found it quite confronting and I wanted to let myself ‘calm down’ and come back to it with a fresh mind.
Here’s a fictitious example similar to what mine looked like:
| What I thought but did not say | What was said |
| --- | --- |
| I think that this group have mucked me around. Let’s see if I can prove my case. | Me: Hi Bob, have you got a minute for a quick chat? |
| | Bob: [upbeat] Sure! |
| I think they treated me badly and don’t even realise it. I’m going to show them. | Me: I wanted to check out what was happening in terms of us working together. I caught up with your colleague the other day and they told me something that didn’t match my expectations [I briefly illustrated] and I felt mucked around! |
| | Bob: [More serious] What they said was right. |
| What?! It looks like he agrees with his colleague. I can’t believe that! I need to show him that his view is wrong. | Me: [Raising my voice and speaking quicker] Well there’s no way that what I was told was reasonable … [further justification of my position, point/counter-point discussion and a muted resolution when the conversation was ended by an interruption] |
The next step was to reflect on what the case study had surfaced. I did this by answering the following kinds of questions:
- What was my intent with this conversation? How effective was the conversation at achieving my intent? How effectively did I communicate my intent?
- How effectively did I balance advocacy and inquiry?
- What was my ‘frame’ of the conversation, how did I view myself, the other person and the task I was trying to accomplish?
- What was I hiding from the other person, what was undiscussable and what prevented me from making it discussable?
I wasn’t expecting that I’d discover as many things about how I think and act as I did. Here’s what I came up with as I reflected:
- I wasn’t clear on my own intent. When I looked back over the conversation I realised that I’d entered the conversation without a clear understanding of what I wanted to achieve. From what I’d said I inferred that my goal was “to get the other person to make me feel better about the situation by agreeing with my view of the world”. Realising this gave me some insight into how it might have come across to the other person. If I wasn’t clear on it, what chance did they have of understanding me? Their difficulty may have been compounded by the fact that I didn’t express any tentativeness in my world view, in fact, the opposite is true!
- There was no balance of advocacy and inquiry. In terms of balancing advocacy (explaining how I saw and felt about the situation) and inquiry (asking about their view of the world) I was poor. I discovered that I had asked only three questions and two of them were rhetorical! At the same time, I’d made around 27 statements in a 10 minute conversation. I was unaware that the conversation was this unbalanced whilst I was having it.
- The goals I was trying to achieve were unilaterally controlling, fixed and hidden. I wanted the other person to see my view of the world and to agree with my position that they were wrong. I had no intention of changing my mind to accommodate their point of view. But I didn’t state any of these reasons as I was worried about how they might feel, and I didn’t tell them that I was hiding my intent because I was worried about their reaction. Although I say it was because I was worried about them, the fact that I didn’t test my beliefs meant that it was actually self-protective.
- Without intending to, I created conditions I didn’t want. The case study helped me see more of how I acted from the other person’s point of view. The evidence I saw was that I lured them into a conversation where I was asking them to admit that I was right and they were wrong. When I told them about my point of view, I often used high-level judgements like “you acted weirdly!” without demonstrating any observable things they said or did that led me to that belief. I thought I was being open with them, but I can see how they might have felt accused and threatened (there was evidence for this in the kinds of responses they made and the fact that the conversation felt like a ‘tussle’). So, my behaviour may have inadvertently created exactly the conditions I wanted to avoid.
- I wasn’t able to express myself as effectively as I thought I was. The conversation on paper highlighted there were many times when I used a kind of short-hand to describe my points, but in a way that, on reflection, was unclear or rife with potential points of confusion.
How did I feel after this?
On an intellectual level I found the case study interesting to do because it showed how unaware I was of how I actually acted. It was useful to realise that my framing of the situation (I’m right, they’re wrong / misguided, I have to convince them of my view) may have contributed to acting in the way I did (this gave me hope that maybe I could learn more about how I could be more effective in future).
On an emotional level, I felt pretty embarrassed (“How could I have acted like this without being aware of it? What if others knew I acted like this – in a way that I would not espouse?”), defensive (“I still believe that they were mostly responsible for the situation!”) and even a bit dejected (“How much more am I unaware of? It took me days to realise how blind I was to my involvement in the situation, and I produced all of these responses without even thinking about them, how am I ever going to learn to act differently? Is it even possible to learn a different way of thinking/acting?”).
Reflection is often improved by doing it with others
Reflecting is hard cognitive and emotional work. I had given myself some ‘rest days’ between filling in the case study to make it easier for me to reflect without getting emotionally engaged (I believe it’s a similar effect where it’s easy to spot things in other people’s behaviour, but it’s hard to spot it in ourselves when we are acting). It was interesting to me how each time I came back to look at the conversation I realised that I was able to reflect with more detachment, but I was still pretty attached to my view of the world being right! To help me further, I sent the case study to another person who reviewed it and provided some comments before a meeting where we discussed it.
The review comments were pretty confronting. I was secretly hoping that they might evaluate me positively and agree that the problem was the other person, but their comments highlighted how my behaviour may have had a lot more to do with the other person’s response than I was aware of / wanted to admit. The reviewer highlighted things such as:
- I was stuck on advocacy. There wasn’t a single example of genuine inquiry from me into the other person’s view (which he stated at least three times in the conversation, but I never acknowledged).
- I was hiding information. I was hiding a lot of useful information in my left hand column which would have been useful to find ways of sharing (and leaked out in the way I was treating the other person anyway)
- I wasn’t illustrating evaluations and judgements. When I was advocating (sharing my view of the world) I was using high-level evaluations (“I was mucked around”) without explaining the data I used to come to that conclusion. In Argyris’ model, I was advocating from a high rung on the Ladder of Inference without describing the ‘lower rungs’ that led me to my conclusion. Doing this may have contributed to the other person being defensive or feeling attacked (I used some pretty extreme emotive words!).
- I was using ‘gimmicks’. I was using phrases and approaches associated with a Mutual Learning mindset but designed to achieve the goals of a Unilateral Control mindset (model I in Argyris’ approach). I was using my knowledge of the Mutual Learning model (model II) to ‘win’ (a goal of the Unilateral Control model, model I). It was curious that I was using my knowledge of the Mutual Learning model to accuse the other person of acting in a way that was consistent with the Unilateral Control model, and I was blind to the irony that doing this demonstrated that I was acting in a way consistent with the Unilateral Control model (e.g. trying to win)!
- I was punishing them for being wrong. Rather than testing if they shared my view, or being open to learning more about theirs, I was pushing them to admit they were wrong, and more than that, wrong for being wrong. I was in full righteous mode (at one point they even agreed with me that the way they acted had been unclear, but I didn’t listen to it because I was so focussed on ‘letting them have it’!)
Conversations with the reviewer
The conversation with the reviewer was very helpful. He wanted to check how I’d reacted to the case and his feedback, and to share the point that most people feel pretty embarrassed when confronted with what they find. There were several points I took away:
- It’s important to take responsibility for identifying what triggered my behaviour. Understanding the triggers allows me then to be aware of what might be about to happen, and to ‘create a buffer’ where I can pause my natural response (usually to react to the other person by attacking or to withdraw by becoming passive aggressive) and act differently. The reviewer shared that this is what Argyris’ Model II / Mutual Learning model is all about – providing another ‘degree of freedom’ in choosing how to act (rather than trying to ‘be Model II all the time’)
- The Ladder of Inference is a useful tool to help learning how to act differently. It was useful to realise that if I just state a high-level evaluation without illustrating the data I used (‘rung 1’) and the cultural meaning I applied (‘rung 2’) it could lead to the other person reacting defensively. Also, ‘staying low on the Ladder of Inference’ means that there is less likelihood that the other person will be confused, and ‘working slowly up the ladder’ helps more easily identify where the points of confusion/departure are.
- Advocating effectively is a skill which takes practice. The conversation with the reviewer helped me practice being clearer about what I was advocating. At some times I was able to do this, at other times I found this very difficult and stumbled or spoke for too long. It was confronting to realise that this would take more practice.
- Use the concept of binds, dilemmas or paradoxes to surface things that are undiscussable. I was worried about sharing that I had some concerns about whether I would be a compatible fit with the other group, but I didn’t want to raise this issue because I was worried that they would react negatively to it (“why would we want to work with you if you hold a negative belief about us?”). We spoke about how I could raise this in the form of a bind and ask for assistance from the other person (“I’m in a bind. On the one hand, I’d like to work with you. On the other hand, I’ve had a few experiences, which I could describe, which I’ve found confusing. I’d like your help to go through these experiences and check my understanding. Would you be interested in that?”).
- My own competitiveness is not helping me learn. For better or worse, I’m often quite competitive with myself and other people (this is something I observe in how I think and act, rather than something I’d espouse to others!). My initial reaction, when I saw the gap between how I think I act (espoused theory) and how I actually behave (theory-in-use), was to want to close it as quickly as possible because I found it deeply uncomfortable. However, the pressure to overcome it quickly sets me up for failure, which makes me less likely to practice.
- My attitude to failure is not helping me learn. When confronted with feedback that I’m not as effective as I’d hope (e.g. demonstrating no examples of inquiry) I kind of collapse and go into a bit of a ‘doom zoom’. The problem with this approach is it means I find it harder to focus on learning to practice new behaviours that will help me be different in future.
- Improving skills is a matter of practice and that means failing (a lot). In order to improve my skills I need to do lots of practice (Argyris compares learning Model II to learning to play tennis. It would be unreasonable to think a few books and a lecture on tennis would be enough to learn how to play – you need to actually hit some balls). Similarly learning more effective ways to handle difficult conversations and learn will require ‘hitting a few balls’ and trying behaviours that ‘fail’ in order to reflect and learn.
- Changing how I frame the situation is useful. It was useful to reframe how I saw the discussion to think more about the fact I only have a partial view of the situation (self), that the other person may see parts I don’t (other) and the task of the conversation is to try and learn more about the situation together (task). It’s a challenge, in the heat of a difficult situation, to delay the natural tendency to attack / respond, and replace it with a ‘buffer’ around being curious about the other person’s perspective.
Where am I now?
I found the experience very useful. I’m now more humble about the scale of the task of learning a new set of skills and developing a different mindset. I’m grateful for having more insight into how I may have inadvertently been creating the conditions that I didn’t want. I’ve been able to try out some new skills in a few low-key conversations recently and I’ve been practicing watching for moments where I get ‘emotionally hooked’ and trying to work out what caused it. These experiences have been very rewarding.
I’ve also noticed that I’m less angry when I see others acting in a unilaterally controlling way (getting angry or punishing people for acting the same way I frequently/mostly do isn’t fair). My mindset is shifting from an evangelical one (Argyris’ model is great! Everyone needs it! I need to go out and evangelise!) to more of a reflective one (I really like it, and find it useful myself, so I’m going to use it and model it. I’d like more opportunities to practice it. I’d welcome talking to others, if they are interested). And mostly, I’m still struggling. I’d like to be better sooner, with less effort and fewer embarrassing failures, and I’m aware of the paradox that those expectations are probably making it slower and harder!
I’d welcome comments, feedback or questions. If you’d like to go through a case study, please contact me.
Agile approaches are sometimes focussed on helping organisations experience transformational change. Many Agile adoptions have failed to achieve long-term change, especially outside core teams, where the problems are non-routine and potentially embarrassing or threatening. Chris Argyris has developed a theory that provides a possible explanation of why Agile adoption has failed to bring about these hoped-for organisational changes.
Argyris & Schön’s Theory of Action
Chris Argyris, a retired Harvard Professor, has spent his career developing ideas around Theory of Action (co-developed with Donald Schön) [1, 2]. The Theory of Action approach is based on the idea that we store programs in our heads which we use to determine action strategies (behaviours) that will achieve the consequences we desire in a way that is consistent with our governing values (preferred states we try to ‘satisfice’ when acting). Effective action is any action which produces an intended outcome that persists over time and achieves this without harming the current level of organisational performance.
Argyris and Schön believe there are two types of theories: those that we say we use (espoused theories), and those that we actually use (our ‘theories-in-use’). Espoused theories represent our ideals about effective action, whereas theories-in-use are used to produce real, concrete actions. We are often able to identify the gaps between what someone says and how they act, as the saying “watch what people do, not what they say” illustrates. However, we are often blind to the fact our own actions aren’t consistent with our espoused view of the world. If we are made aware of this gap, our usual reaction is to blame someone else or “the system”.
Model I and Skilled Incompetence
Argyris and Schön have found that while there are differences between people’s espoused theories, there is very little difference in theories-in-use across cultures, age groups and gender (even after over 10,000 case studies). Argyris and Schön label this common theory-in-use “Model I” (other authors have described it as “closed to learning” or the “unilateral control model”). The governing variables of Model I are:
- maintain unilateral control of the situation: get what you want and achieve your objectives/goals
- “win, do not lose”
- suppress negative feelings, such as embarrassment, in yourself and others
- act “rationally” (suppress or deny emotions).
Based on these governing variables, we choose action strategies such as advocating our own position and making evaluations of others’ performance and intentions in ways that ensure we remain in control and maximise our chance of winning, whilst ensuring that we act diplomatically and that no one expresses negative feelings. We do this in ways that encourage neither inquiry into our views nor robust testing of the claims we make, often relying on self-sealing logic such as “Trust me; I know what I am doing”.
When we are producing these actions, especially in non-routine situations which might be embarrassing or threatening, we are often blind to our own Model I behaviour. Worse, we actively try and by-pass the embarrassment or threat and then cover-up the bypass, leading to situations where we are unable to “discuss the undiscussable”. Using Model I means that we are likely to produce consequences we don’t intend. Model I behaviour is learnt over a lifetime and is produced skilfully, which makes it even harder to spot when we are producing it, leading to what Argyris labels “Skilled Incompetence”.
One way of detecting the gap between your own espoused theory and your theory-in-use is to use the “Left Hand Right Hand Case Study” tool. Describe an actual or an imagined conversation with another person on a difficult topic. On the right hand side, write the script of what was said. Ideally this would be a transcript of an audio recording, but a description of the conversation will also work. On the left hand side, write what you thought but did not say. Having done this, reflect on whether there was a gap between what you said and what you thought, but did not say. Argyris describes this gap as an ethical gap since it involves deliberately hiding information that may be useful to test, or share with others, without admitting that this is what is actually happening (it is covered-up and the cover-up is also covered-up). Argyris advocates striving to reduce the gap between what is on the left hand side and right hand side in a way that minimises the likelihood of all of those involved becoming defensive.
Individuals operating in a Model I fashion are likely to produce organisations full of defensive routines. Defensive routines are ways of acting that protect us and others from threat or embarrassment, but also prevent learning. Common examples of defensive routines are mixed messages, such as “I didn’t mean to interrupt you …” (clearly you did, and just have) or “I don’t want to upset you, but …”, or saying “that’s an interesting idea” when there is no intent to act on it. Defensive routines make it harder for organisations to surface the information needed in order to learn.
Argyris defines learning as “the detection and correction of error”, where an error is a mismatch between what was intended and what was produced. Single-loop learning changes only the action strategies (at its simplest, ‘try harder!’). Double-loop learning goes one step further and requires changing the values that govern the theory-in-use, often by questioning the status quo. The most common analogy is a thermostat [4, p.10]:
A thermostat is a single-loop learner. It is programmed to increase or decrease the heat in order to keep the temperature constant. A thermostat could be a double-loop learner if it inquired into why it should measure heat and why it is set so that the temperature is constant.
Single-loop learning can be compared with becoming more efficient at what you’re already doing, whereas double-loop learning questions the effectiveness of the goals themselves. In other words, single-loop learning is doing things right, while double-loop learning is doing the right things.
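To make the distinction concrete, here is a minimal sketch of the thermostat analogy. This is my own illustration, not from Argyris: the function names and the occupancy rule are assumptions for the sake of the example.

```python
def single_loop(temperature: float, setpoint: float) -> str:
    """Single-loop learning: vary the action to meet a fixed goal."""
    return "heat on" if temperature < setpoint else "heat off"


def double_loop(temperature: float, setpoint: float,
                occupants_present: bool) -> tuple[str, float]:
    """Double-loop learning: first question the goal itself
    (why hold this temperature at all?), then act on the revised goal."""
    # Hypothetical revised governing variable: an empty room
    # does not need to be kept at the comfort setpoint.
    revised = setpoint if occupants_present else setpoint - 5.0
    return single_loop(temperature, revised), revised
```

The single-loop thermostat can only switch the heat on or off; the double-loop version can also revise the setpoint, which is the analogue of changing the governing variables of a theory-in-use.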
Double-loop learning can happen around technical problems while failing to occur around human problems. My view is that XP practices, such as Test Driven Development (TDD), have led to double-loop learning at the technical level because there has been a change in mindset. Prior to TDD, I remember people trying single-loop solutions that only involved changes in action strategies, such as “just write better quality software, and leave testing to the testers”, whereas now people talk more about TDD as a design tool. I do not believe that Agile approaches have led to double-loop learning in terms of human problems.
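That mindset shift is visible in the order of work: in TDD the test is written first and acts as a statement of design intent, not an after-the-fact check. A minimal sketch (my own illustration; `slugify` and the test case are hypothetical):

```python
import unittest


class TestSlugify(unittest.TestCase):
    # Written before the implementation exists: the test is a
    # design decision about the interface, not a later check.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


def slugify(title: str) -> str:
    # Simplest implementation that makes the failing test pass.
    return title.strip().lower().replace(" ", "-")
```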
Model II: Overcoming organisational defensive routines
Changing the defensive routines requires double-loop learning because it involves people giving up their Model I theories-in-use. Argyris describes Model II as one possible theory-in-use that can produce double-loop learning. The three governing variables of Model II are:
- Produce valid information
- Informed choice
- Internal (rather than external) commitment
These are used together with vigilant monitoring of the effectiveness of the implemented actions.
It’s important to note that Model II is more than just the opposite of Model I, in the same way that listening is more than just the suppression of the urge to talk. The governing variables of the opposite of Model I would be:
- Everyone is in control
- Everyone wins
- All feelings are expressed
- Rationality is downplayed
Model II is not a replacement for Model I; Model I behaviour is appropriate when problems are routine or in emergency situations. The action strategies of Model II include clearly articulating a position; the difference from Model I is the emphasis on enquiry and testing, similar to Bob Sutton’s concept of “strong opinions, weakly held”. Often, when people realise the gap between their espoused theory and their theory-in-use, they want to overcome it quickly. A common experience is that after a few days of trying to learn quickly, most people relax and slow down, realising that learning to produce actions consistent with Model II will take some time. Argyris argues that “most people require as much practice to overcome skilled incompetence [by learning Model II] as to play a not-so-decent game of tennis”.
Examples from Agile / XP
In general, Agile methodologies and frameworks have taken unsophisticated approaches to organisational change, most of which fit within a Model I view of the world.
Scrum talks about “shock therapy”, where “teams are trained on exactly how to implement Scrum with no deviations for several sprints”. It uses an openly coercive approach, described as a “forceful and mandatory way of implementing Scrum”, in the hope that managers will receive a “wake-up call” and change their view of the world and their behaviours once they see the “hyperproductive” results. The approach does not focus on organisational defensive routines, or even on double-loop learning at the management level. It does not ask questions like “What was stopping us from acting this way before? Can we be sure that the thinking behind the previous approaches has really changed?” Predictably, from an Argyris point of view, the authors report that management failed to change their view of the world: “…management tends to disrupt hyper-productive teams … in all but one case, management ‘killed the golden goose.’”
XP and Agile often speak of the importance of underlying values, such as “courage”. The problem with values is that they are not usually described in an actionable way. Further, the interpretation of a value depends on whether a person holds a Model I or Model II mindset, or view of the world. When courage is illustrated, the examples often represent a coercive approach consistent with Model I, as in this example from an interview on “What’s Missing from the Agile Manifesto?”:
[Courage is] … the courage to do what is best for the team, the project, even the business, despite the pressure to do otherwise. … An example [credited to Ken Schwaber] is of a scrum master who disassembled the team’s cubicles, so that they could have the team space that they wanted. When confronted by the ‘furniture police’ she made it clear that she would quit if the cubicles were restored.
This advice seems to contain several potential errors. How is a “courageous person” meant to validate or test that what they believe is “best for the team” is actually best for the team? Is it OK for them to decide simply by asking themselves? Do they need to make this known to others? How would this advice deal with the possibility that the courageous person did not understand the wider context of their change? In the example given, the scrum master acted in a unilaterally controlling way and, when confronted, blackmailed the organisation in order to get her way, which is entirely consistent with Model I.
Moving Forward: Detection and then correction of errors
If Agile approaches are to have an effective impact on organisations at more than just a local team level, and across more than just the short term, then it would be useful to spend time focussing on personal and organisational defensive mechanisms. This starts with developing an awareness of the gaps between what we espouse and how we act, so that we can at least detect errors. A useful step is to acknowledge threatening or embarrassing issues that are likely to lead to defensive Model I behaviour at the individual and group level. The next step is to work on being able to demonstrate that we have learnt, by producing effective behaviour even around threatening or embarrassing issues. The challenge for the Agile community is whether we want to deal with the feelings that come from acknowledging our own blindness to our current skilled incompetence and start practising more effective ways of acting.
1 – Argyris, C., & Schön, D. (1978) Organisational learning: A theory of action perspective. Reading, Mass: Addison Wesley.
2 – Argyris, C., & Schön, D. (1996) Organisational learning II: Theory, Method, and Practice. Reading, Mass: Addison Wesley.
3 – Argyris, C. (2000) Flawed Advice and the Management Trap: How Managers Can Know When They’re Getting Good Advice and When They’re Not. Oxford, England: Oxford University Press.
4 – Argyris, C. (2004), Reasons and Rationalizations. The Limits to Organizational Knowledge, Oxford, England: Oxford University Press.
5 – Argyris, C (1986) Skilled incompetence. Harvard Business Review, 64(5), 74-79.
6 – Sutherland, J., Downey, S. & Granvick, B. (2009) Shock Therapy: A Bootstrap for Hyper-Productive Scrum. http://jeffsutherland.com/SutherlandShockTherapyAgile2009.pdf
7 – Sutherland, J. (2008) Shock Therapy: Bootstrapping Hyperproductive Scrum. http://scrum.jeffsutherland.com/2008/09/shock-therapy-bootstrapping.html
8 – Marick, B. (2008) What’s Missing From the Agile Manifesto? InfoQ. http://www.infoq.com/news/2008/11/Marick-on-Agile-Manifesto