After deciding to adopt a new process, a key challenge is actually starting to do it. This is the story of how a team I’m working with decided to carry out code reviews as part of our process, and how our kanban board helped us.
The kanban board helped us visualise this new step in our process and allowed us to see that despite our belief in and enthusiasm for introducing code reviews, we weren’t actually doing them. Reviewing the board at the daily stand up meeting provided visual feedback that led to productive conversations about what might be stopping us and how we could improve. After we implemented new behaviours the board highlighted whether we had improved and allowed us to continue to monitor our behaviour.
Introducing the Code Review process: a focus on ‘just enough process’
At the team meeting where we discussed a code review step we wanted to introduce ‘just enough’ process to make sure we would actually do it. We agreed to do the smallest thing that could work and to avoid being too ambitious or strict in how we defined code review. We came up with the following rules:
- If the person worked alone they needed to ask someone else on the team to come to their machine and discuss the code.
- The developer who needed the code review was responsible for pulling another person in to do it.
- If two people paired on the task it didn’t need a code review.
- If the task didn’t need a code review then the developer responsible would say so at the daily stand up and if the team agreed, it could skip that step.
What the board showed next: Good intentions were not enough
During the daily stand up meetings in the first week of working with the new process, the team reported many tasks as “finished development” when they hadn’t been code reviewed. We started moving those cards to the right of the “In Dev” column. Within a couple of days it was clear that we had a queue of index cards to the right of the “In Dev” column.
We had a brief discussion about this queue and agreed that it was a sign that we weren’t doing code reviews effectively. We said we’d focus on doing code reviews and ‘unblock the queue’ that day.
At the next day’s stand up the queue was still there. This illustrates an important point: changing the way we work is hard, and sometimes good intentions are not enough. At that day’s stand up meeting we had a deeper discussion about what was preventing us from doing code reviews and identified two major issues.
The first issue was a concern about interrupting other developers. We agreed that one solution was to do the reviews directly after stand-up, since everyone had already been interrupted.
A second issue was that it wasn’t obvious which cards were actively being worked on in ‘In Dev’ and which cards needed a code review. To be more explicit we created a column on the board between ‘In Dev’ and ‘Waiting for Test’ called ‘Code Review’.
By the next day’s stand up the queue of cards in Code Review was gone and we haven’t seen a queue of work build up in code review since.
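The board change described above can be sketched as a tiny data model. This is purely illustrative (the team used a physical board with index cards, not software); the column names come from the post, while the function, the card names and the queue threshold are hypothetical:

```python
# Hypothetical sketch of the board described above: explicit columns,
# including the new 'Code Review' column between 'In Dev' and
# 'Waiting for Test'.
COLUMNS = ["Backlog", "In Dev", "Code Review", "Waiting for Test", "Done"]


def queued_columns(board, limit=2):
    """Return columns holding more cards than the limit - the digital
    equivalent of spotting a queue of index cards building up."""
    return [col for col in COLUMNS if len(board.get(col, [])) > limit]


board = {
    "In Dev": ["card-1", "card-2"],
    "Code Review": ["card-3", "card-4", "card-5"],  # queue building up
    "Waiting for Test": ["card-6"],
}
print(queued_columns(board))  # -> ['Code Review']
```

The point of the explicit column (or key) is the same as on the physical board: once ‘Code Review’ exists as its own place for cards to sit, a queue there becomes visible at a glance instead of hiding inside ‘In Dev’.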
Visualisation and productive conversations are key to process improvement
The kanban board showed us that our behaviour wasn’t producing the goals we wanted. Seeing this and discussing it as a team helped us understand more about what was stopping us and allowed us to experiment and test new solutions, as well as providing ongoing feedback about whether the new process step was working effectively.
What’s your experience implementing new process steps such as code review? Have you found visualising has been effective in reflecting how well you’re doing? Have you redesigned the way you visualise things in order to help you act more effectively? Let me know your views and experiences in the comments.
This post originally appeared on Platformability: Insight from Caplin’s tech team
You can watch the video of my talk from the Lean Software Systems Consortium (LSSC12) conference in Boston earlier this month.
Visualising work is a key part of the Kanban Method. In many situations it can lead to people realising there are problems or opportunities for improvement, which can be successfully addressed by simply changing behaviour (single-loop learning). However, in some situations, particularly where there could be embarrassment or threat, these changes may require challenging existing mindsets (double-loop learning). Using practical examples drawn from directly helping teams, this talk will present a model for understanding how we can proactively engage in conversations that increase the chances of capitalising on the value that visualising the work provides.
Here’s a review from Jack Vinson:
Benjamin Mitchell used the topic of “what comes after visualization” to start a conversation of what to do once you’ve got some visualization. He particularly talked about Chris Argyris’ Ladder of Inference (and expanded by Peter Senge), which he used as a way of thinking about how we see things and how we interact with our colleagues and coaching / consulting clients. He particularly warned about staying away from making assumptions and working at the levels of Select and Describe (rather than Explain, Evaluate, Propose Actions). Since Argyris is one of the promoters of double-loop learning, it is not surprising that Benjamin discussed the Mindset -> Actions -> Results learning loop. I liked the discussion of taking different actions to get results vs changing one’s mindset because the Actions aren’t getting anywhere like where they need to go.
Let me know your reaction in the comments.
When we spot an elephant in the room, or an undiscussable topic that isn’t being addressed, it is tempting to tackle it head on. However, just naming the elephant, or telling people that they’re not discussing an undiscussable topic, is rarely a productive approach.
Having spotted an elephant in the room it is tempting to shout about it
Here’s a scenario from a team’s retrospective meeting:
The team had talked about a problem and had decided to hold a workshop to focus on that issue. Kelly, the external consultant, saw a problem that no one was mentioning.
“I think there’s an elephant in the room here!” declared Kelly.
“Yes, there’s a proposal to have a workshop, but no one has mentioned that last time we ran a workshop no one turned up! This seems like an undiscussable topic!” she continued.
There was general agreement that people hadn’t shown up for the last workshop. After some discussion the team decided “let’s not have a workshop then” and the meeting ended.
I think Kelly’s intention was honourable: how can I get the group to start discussing things so we can better understand the causes of problems and ways to avoid them in future?
However, in this scenario, Kelly didn’t get what she wanted. Rather than getting to the cause of their past problems, the team just decided to bypass those issues and cancel the workshop.
Unfortunately I think Kelly’s behaviour may have contributed to the results she got including the unintended consequences, such as possibly reducing the chance that the team would feel comfortable talking about ‘undiscussable’ topics in future.
Problems with the approach
There are several possible problems I see with Kelly’s approach.
Unclear intent. Kelly raises the issue of the group’s not mentioning that no one attended the previous workshop, but she doesn’t state what her intention for mentioning it was. If you are not explicit about your intention for saying something, people will automatically invent their own reason, which may not be what you wanted.
Negative assumptions about others’ motives without providing evidence. When Kelly makes the claim that there’s an “elephant in the room” it could be interpreted as her saying that the group were all aware that no one turned up to the previous workshop and that they were all deliberately not mentioning it.
Kelly doesn’t provide any evidence that others are all aware of the issue, or that they have made a deliberate decision to avoid discussing the issue. Kelly’s claim is high on the ladder of inference.
Making an assumption about someone else’s motive, such as thinking “this group is deliberately not talking about a problem they know to exist” is an example of an attribution. Making negative attributions like this without providing evidence can mean that people feel confused or unjustly accused. Once people feel accused then it increases the chance they will respond defensively or withdraw from the conversation.
No curiosity about how others see the situation. Kelly states her view to the group but doesn’t ask whether they see things the same way or see it differently. I’d assume that Kelly was acting as if her view was obvious to others. Since Kelly asked no questions about how others see the situation and expressed her view in a definite way, it reduces the chance that others will offer their view or that Kelly would find out if others saw the situation differently.
Changing the focus from a conversation’s content to its style is challenging. Moving from talking about the topic of a conversation (“we should have a workshop”) to talking about the style of the conversation (“we’re not discussing the undiscussable”) is a high-impact change of direction. “Going meta” like this is often worthwhile but takes skill, time and energy. To justify the investment it is better to wait until you have solid evidence of a pattern of this type of behaviour. If it’s just a single instance, it is more effective to keep talking about the content (“how can we make sure people turn up to this next workshop?”) rather than the communication pattern (“we’re not discussing the undiscussable”).
A more effective approach
A more effective approach may have been as follows, with annotations in brackets on what I’m trying to model:
I’d like to check a concern I have about how we are discussing the plan to hold a workshop [share your intent] and see what others’ views are. My recollection was that the last time we planned a workshop no one showed up. I was speaking to Bob and Jane about this yesterday [share your evidence]. Do you remember the last workshop the same way or differently? [be curious about others’ views]
If there was agreement that no one showed up to the last workshop, I’d continue:
This is making me wonder if we are avoiding talking about what happened around the last workshop [state your reasoning]. I would like to talk briefly about what happened so we can avoid the same problems happening with this workshop [state your intent]. In terms of the last workshop, would anyone be willing to share what caused them not to attend? [inquire into others’ views]
Let me know your view in the comments.
Hi, I’m Benjamin. I hope that you enjoyed the post. I’m a consultant and coach who helps IT teams and their managers consistently deliver the right software solutions. You can find out more about me and my services. Contact me for a conversation about your situation.
Image Credit: David Blackwell on Flickr
I’m hosting this edition of John Hunter’s Curious Cat Management Improvement Carnival. It’s been published three times a month since 2006. Here’s my round-up of interesting management-related posts from the last month with a focus on the psychology of change and software development philosophies.
Change Artist Challenge #7: Being Fully Absent by Gerald Weinberg
For managers who want to create systems that allow people to do great work, one solid test is to see if the system works without you there:
Your challenge is to take a week away from work, and when you get back, notice what changed without you being there. … Do you think you can’t do this? Then you have a different assignment … “If you’re going on a week-long vacation and feel the project cannot do without you, then take a two-week vacation.”
Forecasting misunderstood by David M. Kasprzak
David writes well about understanding the purpose of forecasting and reporting to avoid counter-productive fire-fighting management behaviour:
Forecasting has to do with long-term vision and strategy, measurement, and learning. Focusing on reporting without planning leads to delayed information and chronic “hot buttons” that require immediate attention.
When this occurs, the PDCA cycle is simply broken. The end result is a system where the people in the organization are in a constant state of “Do!” and “Act!” without any sense of why they are doing anything, or if their efforts have actually caused an improvement.
Matt Damon does it again by Ben Decker
One of the challenges for managers is how to present their views in a persuasive way. Ben Decker analyses the techniques Matt Damon used in a recent presentation to a rally against standardised test-score based funding for schools:
[Damon uses a story -] he weaves the point of his speech around his experiences in public schools. This personalizes the message, gives him credibility, and is memorable. When listing out all the growth he experienced in school, he brought it back to the point by saying, “None of these qualities that have made me who I am can be tested.”
This links in my mind with W. Edwards Deming’s statement that “the most important figures that one needs for management are unknown or unknowable …, but successful managers must nevertheless take account of them”.
Is Thinking Allowed? by Tobias Fors
Continuing the theme of managers focussing on what is easy to see, and not what is important, Tobias writes about a manager challenging him for not typing (even though typing is not the bottleneck):
When we sit and think, it looks like we’re doing nothing. This makes it hard to think in many organizations.
Doing is what it takes to change the world, but if we don’t think a little first, how can we know if we’re about to change it for the better or the worse?
Leadership Coaching Tip: A Process for Change by Barbara Alexander
Starting with a reference to Deming’s famous quote “It is not necessary to change. Survival is not mandatory”, Barbara writes a summary of the work of Robert Kegan and Lisa Laskow Lahey including their focus on uncovering the competing commitments and underlying assumptions which keep us “immune from change”:
One example from Immunity To Change that many of us may relate to is the leader whose goal is to be more receptive to new ideas. As you might imagine the behaviors he’s doing instead of his goal include talking too much, not asking open-ended questions and using a curt tone when an employee makes a suggestion. His hidden competing commitments? You guessed it . . . to have things done his way and to maintain his sense of self as a super problem solver
Why progress matters: 6 questions for Harvard’s Teresa Amabile by Daniel H. Pink
Dan Pink reports on research behind “The Progress Principle” (affiliate link) which finds that “people’s ‘inner work lives’ matter profoundly to their performance – and what motivates people the most day-to-day is making progress on meaningful work”. The research showed that support for making progress is more potent than other motivators (incentives, recognition, clear goals, interpersonal support) although surveys have found that it isn’t rated highly by most managers.
Why Is Failure Key to Lean Success? by Michael Balle
In contrast to the support for making progress, Michael Balle defends Lean senseis who leave teams feeling let down by focussing more on what was not achieved than celebrating what was. Balle argues that improvements made without challenging underlying assumptions (similar to single-loop learning) represent “pretending to learning” and not “real learning (acknowledging and understanding why we were wrong about something)” (similar to double-loop learning). I’m hopeful that a “sensei” could learn to act in ways that help teams achieve the desired higher-order learning without the potentially de-motivating impact described.
Agile Vs. Lean Startup by Joshua Kerievsky
Whilst the “X vs Y” style is unnecessarily combative, Joshua has done an interesting job contrasting the different practices and approaches between Agile Software Development and the Lean Startup approach (which uses Agile Software Development approaches to “build things right” alongside the Customer Development process focussed on finding what the “right thing to build” is).
How could ideas from psychology, lean, systems thinking and behavioural economics help us design systems which are better able to detect and correct error, so that we could ‘mistake-proof’ our own (and others’) thinking?
We know that it is common for humans to feel that they are right. As Kathryn Schulz (@wrongologist!) says in her book “Being Wrong: Adventures in the Margin of Error”, “what does being wrong feel like? It feels *exactly* the same as being right until the point we realise that we’ve done something wrong”. She illustrates this through the “Wile E. Coyote Moment” where the cartoon character runs off a cliff (he is ‘wrong’ at this point, but still feeling ‘right’), looks down, realises (detects the error) that he’s standing on thin air, and plunges (now he no longer feels ‘right’).
One of the problems we have with detecting error is that we often trust our direct sensory experience as a way of testing if we are wrong or not. We know, from optical illusions and auditory illusions, that our eyes and ears can play tricks on us. However, we rarely acknowledge or act with an awareness that we can have similar problems with our thinking. There are many sources of evidence that we experience ‘cognitive illusions’, such as the work of Behavioural Economist Dan Ariely. For the Lean readers, Taiichi Ohno discusses the problem of “illusions involving mental processes” in “Workplace Management”.
Chris Argyris’ research (see my Argyris links) has found that we are often ‘blind’ to the fact that we could be wrong. Further, in situations where the consequences of being wrong are potentially embarrassing or threatening then we are even less likely to be vigilant about the detection of error, and if it is discovered that we were wrong we’re likely to bury, bypass or cover-up the error (and deny that we’re bypassing the bypass!).
So, if we know that humans act like this (e.g. this is the ‘system’ we have to work with), how would we mistake-proof our thinking? (the concept, not tool)
I’d say that we should ask questions like the following:
- How could we reduce the potential embarrassment and threat around being wrong?
- How could we be more open to the fact that we rely too much on our own tests of our assumptions (where we often ask ourselves “Do I believe what I believe? Why, yes, I do!”)?
- How could we be more aware of the fact that we often cover-up the fact that we test our assumptions privately? (e.g. we generally don’t say “I was unsure if I was wrong, but I’ve just tested it with myself and have decided I’m right!”)
- How could we work with others to overcome these problems and remain vigilant about detecting and correcting errors?
What are your thoughts?
- Kathryn Schulz’s “Being Wrong: Adventures in the Margin of Error”* (amazon.co.uk, amazon.com). See her talk at the RSA where she mentions what being wrong feels like.
- Dan Ariely’s “Predictably Irrational: The Hidden Forces That Shape Our Decisions”* (amazon.co.uk, amazon.com)
- Mark Graban’s blog “Dangers of a Pithy Quote About Patient Safety?” where he reflects on some of these ideas
(* Disclosure: if you buy these excellent books after using these links I get money from amazon to buy more books I’ll blog about!)