Reflecting Back on the First Dojo Consortium Event
Last week, we were proud to be a small part of the Dojo Consortium Event. From the humble, insightful speakers, to the engaging questions and thoughts shared by attendees, to the general energy at the conference, the event felt like something special.
Our Top Takeaways:
The Dojo is a Thing
We had a hunch this was the case, but let’s make it official - dojos are a thing. When there are over 30 (large) organizations working on dojos, a conference that sold out a month ahead of time, and even industry thought leaders getting excited about the dojo approach to learning - there is something there. Our challenge as a community is to stay true to our first principles around learning and not lose sight of them. One excellent open space topic even brought this forward.
The Dojo Community is Vibrant and Open
Attendees were challenged right from the get-go - get uncomfortable; share not only your successes but your challenges; explore your learnings and known unknowns. And the attendees were up to the challenge. We had 18 diverse lightning talks from the community - ranging in topic from measuring your dojo’s impact, to new ideas for dojos, to coaching skills. And that does not even touch on the open space topics. Our favorite open space topics (in no particular order):
Project to Product discussion - great to see this becoming more real
Frameworks discussion - the realization that dojo coaches are succeeding through engagement, not through frameworks
Community discussion on what is next - It was great to see the community come together to define what is next for the consortium
Common Dojo Challenges Aren’t Necessarily Dojo Specific
We heard a few common themes around challenges for dojos - and challenges that perhaps aren’t just dojo challenges.
Remote teams are challenging for many organizations, especially those trying to start dojos. While we personally have not seen fully-distributed teams work well in the dojo, there were discussions around what might work. We are interested in experimenting with these ideas as long as we continue to follow the core principles of what a dojo is.
Finding coaches is a challenge. As with working with remote teams, finding good coaches is not just a dojo problem. However, strong coaches are essential for running a successful dojo. Without good coaches, you run the risk of teams having bad experiences that result in little learning and no significant change.
Coaches Never Stop Growing
We had many wonderful speakers. Thematically we heard the same thing - great coaches never stop learning. One particularly great comment came from Kent Beck himself: “The best thing I can do as a coach for a team is be the best me I can be that day.” Beautiful and simple.
We have many more fond memories that we will cherish. But… we’re already on to thinking about the next event! We are honored to have been asked to coordinate the next event. While we and the community set a pretty high bar with the first event, we promise the next event will continue to challenge us and push us all forward.
If you joined us last week, what were your takeaways? What is next for you? If you weren’t able to attend, what would you like to see in the next event?
Lessons Learned From 5 Years in the Dojo - Part 2 - Focus on the Value Stream
In our last post we discussed how Dojos require a sound overarching strategy. In this post we talk about why Dojos provide the most value when they address the whole value stream.
Dojos Need to Address the Whole Value Stream
What Do We Mean by Whole Value Stream?
Your company offers products and services that deliver value to your customers. How you deliver that value - from your initial product ideas through delivery of products that evolve from those ideas - that's the value stream. A value stream includes how you identify opportunities and problems, generate product ideas, develop and release those products, and ends with your customer receiving value from your product.
Here is how we view the value stream:
Why Dojos Need to Address the Whole Value Stream
Several years ago we helped a large retailer start their Dojo. Initially, their Dojo was focused on helping teams learn DevOps practices. As we suspected going in, this focus was less than optimal. DevOps intentionally focuses on the subset of the value stream starting with code commit to product release. Focusing on improving that subset of the value stream often delivers significant gains, however, for many teams the biggest constraints and challenges exist upstream - before code commit.
The first teams that went through that Dojo improved how they built products, but for some teams the adoption and usage of their products was low. The primary constraint wasn’t their engineering skills - it was their lack of product knowledge.
We guided the organization on how to implement product discovery and product framing practices and started up-skilling teams in these practices in the Dojo. The practices helped teams vet product ideas before they invested in writing code. When they did write code, the contextual knowledge they had about the product improved design, architecture, and testing.
In addition to teaching specific product discovery and product framing practices, we coached the teams to adopt a “product thinking” mindset. We drove planning and design discussions with questions like “Does this get us closer to solving the root problem?” and “What else do we need to learn to solve the problem?”
Another situation we’ve run into was one where the delivery team was completely separated from the design team. Design was outsourced to a third party. Delivery team members could talk to the designers only every few weeks. Early in delivery, the team started raising questions about the design. They found gaps and inconsistencies. We challenged them to question whether or not they should continue delivering anything until these questions were resolved. Sadly, there was a lot of pressure on the delivery team to keep delivering. The constraints in their value stream had nothing to do with delivery.
Teams often come into the Dojo to improve their testing practices. This often involves automating test cases that are currently run manually. Teams sometimes get overwhelmed when learning to automate tests with a legacy codebase. The codebases often require complicated setup of test data and there are a large number of test permutations. We use product framing to help narrow the focus to find the most valuable tests to invest in automating. To do this, we need to understand the product in depth - again, we need to go upstream to get context.
Addressing the whole value stream is important. When we do Dojo readiness assessments with organizations, this is one of the key aspects we explore before starting a Dojo. We need to be on the same page about what aspects of the value stream the Dojo will address and how we are measuring success for improving the value stream. For some organizations, the success of the Dojo might be based on improving the subset of the value stream limited to engineering skills, but this is usually not the case.
In our experience, the most successful and impactful Dojos address the whole value stream. Improvements have more impact and there is a direct connection between those improvements and the skills teams learn in the Dojo.
If your Dojo isn’t addressing the whole value stream, what can you do to help move it in that direction?
In the next post we’ll focus on the theme of learning over delivery.
Lessons Learned From 5 Years in the Dojo - Part 1
Having helped organizations with Dojos for five years, we felt it was the right time to share what we’ve learned so far. In this series of blog posts, we want to offer you our “best tips” for starting your own Dojo or for improving your existing Dojo. We’ll wrap up with our thoughts on where Dojos are going next.
Without further ado…
Dojos Need to Support a Strategy
We talk with many organizations excited about starting a Dojo. The concept of teams learning new skills while building products is enticing and practical. Excitement is great, but Dojos work best when there is an overarching strategy the Dojos serve. Without a strategy, Dojos can flounder.
For example, an organization invested in moving from a project model to a product model would leverage that desired outcome as the strategy for the Dojo to support. The strategy frames the purpose of the Dojo. The Dojo is there to help teams understand how to work in a product model and learn the skills required for that model that they don’t already have.
Another strategy we often see is leveraging Dojos to adopt DevOps. While this is narrower than we recommend, it is still a nice frame for the purpose the Dojo is serving. (We prefer to see Dojos address as much of the product development value stream as possible. We’ll cover this in our next post in this series.) A “DevOps Dojo” would focus on helping teams learn how to build continuous delivery pipelines and automate infrastructure setup while forgoing other skills like product discovery practices.
A third example is using a Dojo to help the organization migrate applications to the cloud. This is an interesting start, but for the Dojo to truly be effective the strategy should be clear on how teams should migrate their applications. Will teams refactor applications for the cloud, move them in a “lift and shift” model, or follow a “re-platforming” model? If it’s a combination of those approaches, what are the criteria for deciding which migration model applies to an individual application? And which is more important - having teams leave the Dojo with deep knowledge of the cloud, or getting their applications into the cloud with sufficient knowledge of how to support them? Knowing the answers to these questions is critical if you want to use your Dojo to drive toward specific outcomes.
Starting from a sound strategy is key. It provides the following benefits:
Teams understand the value of why they should participate in the Dojo
The skills and topics taught in the Dojo are well-defined
Growing coaches is easier because coaches can focus on specific skills
Measuring the success and impact of the Dojo is easier since you can measure outcomes against the strategy
The strategy your Dojo supports should be clear and easily stated. If the strategy is nebulous or complicated, your Dojo will struggle to provide value to the rest of the organization.
What strategy is your Dojo supporting?
Be on the lookout for our next topic in this series: why Dojos need to address the entire value stream.
Continuous Learning - Talking, Improving, and Learning across Domains
We’re excited to be partnering with Mark Graban in presenting a new “conference” on creating learning organizations. Mark is an engaging speaker, and we look forward to learning from him ourselves (see his book Measures of Success).
The event is on September 26th & 27th. As we write this, there are only 5 seats left. More information and registration details are available here.
Why are we so excited about this conference?
It’s about Learning & Improvement
We’ve been working in the agile, lean, and DevOps spaces for years. Over the last five years, we’ve found our niche helping organizations embrace learning and continuous improvement. This conference is about that topic. It’s not about the process or technology. Instead, it focuses on these questions - how do we create spaces where people are engaged and continuously improving? How do we make it so that learning isn't a special one-off event? How do we make people’s lives better (isn't that what it is all about)?
The Conference is Hands On and Experiential
The conference is different by design. Mark, Dion, and Joel will have a small amount of content to deliver. Attendees will be experiencing events and sharing insights with each other rather than attending prepared sessions typical of most conferences.
We’re starting with a tour of a Toyota plant and seeing how improvement is a way of life there. We'll then visit a craft whiskey distillery that applies lean startup principles and we’ll experience a real world “beer game”. That’s just the first day!
The second day has more experience-based learning with Mark, Dion, and Joel leading short sessions in the morning. We’ll then have open space where we learn and share with each other.
This conference will be different.
Diverse Attendees
Principles of learning and improvement are not isolated to any one domain. The attendees who’ve already registered come from a variety of domains (financial, legal, government, retail, and IT). The diverse domains, backgrounds, and experiences people are bringing will enhance this experience even further.
The Concept Excites People
The idea for this conference started months ago with the three of us asking “Wouldn't something like this be cool?” But ideas are cheap, and you don't really know if they are interesting until you get feedback. So, we decided to try this conference out as a little experiment. We started by asking people if the concept was interesting. We quickly heard a resounding “Yes.” That was nice, but once it came time to commit the money and the time, would there still be that level of interest? There was. And the feedback we have received tells us people are excited.
These are some of the many reasons we hope to see you there. Join us September 26th & 27th. Come learn and experience with us and Mark! Register here
See you in San Antonio!
Dion & Joel
Growing Coaches in the Dojo
Skilled coaches are critical to the success of any Dojo. The specific skills needed will vary. A Dojo focused on DevOps requires one set of coaching skills. A Dojo focused on agile and product discovery capabilities needs different coaching skills.
Staffing a Dojo with coaches can be a challenge. There’s an abundance of agile coaches but many of them know only process. The Dojo can be an effective place for improving development processes. But, the investment required to run a Dojo should return a bigger payoff. Most organizations want to improve engineering and product discovery practices. Coaches who can help teams improve these skills are hard to find. You may need to hire skilled engineers and product managers and develop their coaching skills.
Here are four ways we help grow coaching skills in the Dojo.
Observing, Pairing, Leading
We onboard new coaches following an “observe, pair, lead” progression towards competency.
As soon as possible, new coaches observe other coaches working with teams. During breaks and at the end of coaching sessions we have debriefing conversations. The coach leading the session will ask the observing coach—what did you see, hear, and observe? What might you have done differently if you were guiding the session? What questions do you have about the way I coached? The coach guiding the session will then explain what they saw, heard, and observed. They explain the choices they made in coaching the team.
After a few “observing” sessions, new coaches pair with an experienced coach. They might take turns guiding the team or the new coach may take the lead. The more experienced coach focuses on keeping things on track. She will step in if the new coach “gets stuck” or has a question. The pair of coaches will continue having debriefing conversations during breaks and at the end of the session.
Once the new coach has paired on a specific practice a few times, they will then lead the practice with an experienced coach observing. The experienced coach may jump in if the coach leading the session starts struggling. However, they are there mainly to observe and offer guidance during the debriefing conversations.
This is an effective way of onboarding new coaches. It’s impossible to replicate the experiential learning that comes from working with teams through training.
Running Simulations
We help coaches grow their skills by practicing in simulated scenarios.
In a simulation, coaches and Dojo staff roleplay various team roles - manager, developer, tester, designer, product owner, customer, etc. A coach guides the "team" through a session while the rest of us play out behaviors the coach might encounter. This is useful for practicing with problematic behaviors - a manager who wants to control everything the team does, a product owner who insists on delivery of a feature faster than the delivery team’s estimate, or an entire team with a weak and nebulous vision for their product.
We use the debriefing conversations described above to review the simulation once it’s finished. We talk through what we saw, heard, and observed, what were effective ways of guiding the group (or ineffective), and other ideas for handling what came up during the simulation.
Practicing Teaching
One of the most effective tools we use for developing coaches is having the coaches teach in practice training sessions.
For example, before teams come into the Dojo a coach leads them through a Dojo Chartering session. We teach coaches how to do Chartering and then they observe a few Chartering sessions. The next step in developing their skills may be having them teach the rest of the coaches how to do Chartering.
This is also an effective way for the coaches to stay in sync.
Reflecting on the Role of Coaching
Periodically, the coaches will get together and reflect on what it means to be a coach.
In a coaching workshop we gave recently, we discussed the various roles coaches play in the Dojo. We had an interesting discussion about the timing and triggers for moving from one role to another. For example, when do you move from a teaching role to more of a partner role as teams are adopting new skills?
One of the new coaches started the discussion by asking if there were standard coaching roles and how you knew when to adopt each role. You could use questions coaches have as opportunities to bring the coaching group together to discuss the questions, share information with each other, and develop coaching skills.
Skilled coaches are critical to the success of any Dojo. Experimenting with new ways of developing coaching skills is part of running a Dojo. Just as we ask teams to adopt a culture of continuous learning and experimentation we do the same inside the Dojo. We’ll continue to share new techniques for developing coaches. What are the ways you develop coaching skills?
Measuring Impact In The Dojo
Last month at Agile Day Chicago, I (Joel) had the pleasure of listening to Mark Graban speak about separating signal from noise in our measurements. Mark referenced Process Behavior Charts, a technique described in the book Understanding Variation: The Key to Managing Chaos by Donald J. Wheeler. This simple tool helps us look at metrics over time and understand the difference between naturally occurring variation (what Wheeler calls “The Voice of the Process”) and signals - variation in the metrics representing real changes. The key is being able to distinguish between the two. Signals can be indicators that a desired change is manifesting, or they can be indicators that something is wrong and requires further investigation.
We immediately saw the value in being able to separate signal from noise when evaluating the types of metrics we’re capturing in the Dojo that we talked about in our last post. We both grabbed copies of the book, devoured it quickly, and started brainstorming on applications for Process Behavior charts.
Let's look at an example of how to use Process Behavior Charts in the Dojo.
BEFORE YOU START
This may sound obvious, but before you start any measurement think about the questions you want to answer or the decisions you want to make with the data you’ll collect.
In the Dojo, we help teams shift from a project to a product mindset. We focus on delivering specific outcomes, not simply more features. When delivering a new feature the obvious question is - did the feature have the desired outcome?
THE SCENARIO
Imagine yourself in this type of situation…
We’re working with a team and we’re helping them move from a project model to a product model. In the past, the team cranked out features based on stakeholders’ wishes and success was simply judged on whether the features were delivered or not. We’re helping the team shift to judging success on whether outcomes are achieved or not.
We’re also working with the stakeholders and there’s resistance to moving to a product model because there’s fear around empowering the teams to make product decisions. New features are already queued up for delivery. Before we give the team more ownership of the product, the stakeholders want delivery of some of the features in the queue.
We can use this as a coaching opportunity.
The stakeholders believe the next feature in the queue will lead to more sales - more conversions of customers. The team delivers the feature. Now we need to see if we achieved the desired outcome.
Our first step is to establish a baseline using historical data. Luckily, we’re already capturing conversion rates and for the 10 days prior to the introduction of the new feature the numbers look like this:
Then we look at the data for the next 10 days. On Day 11, we have 14 conversions. Success, right? But on day 12, we have 4 conversions. Certain failure?
Here’s the full set of data for the next 10 days:
Overall, it looks better, right? The average number of conversions have increased from 6.1 to 7.9. The stakeholders who pushed for the new feature shout “success!”
PROCESS BEHAVIOR CHARTS
Given a system that is reasonably stable, a Process Behavior Chart shows you what values the system will produce without interference. In our case, that means what values we can expect without introducing the new feature. Let's create a process behavior chart for our example and see if our new feature made a difference.
First Step - Chart Your Data In A Time Series and Mark the Average
What does this show us? Well, not much. Roughly half of our points are below average and half are above average (some might call that the definition of average).
Second Step - Calculate the Moving Range Average
Our next step is to calculate the average change from day to day. Our day-to-day changes would be 2, 4, 4, 2, 6, 3, 2, 5, 3, for an average change of about 3.4. All this means is that, on average, we see a day-to-day change in the number of conversions of about 3. If we were to plot the day-to-day changes in conversions, we would see roughly half above and half below the average - again, the definition of average.
Third Step - Calculate The Upper And Lower Bounds
To calculate the upper and lower bounds, you take the moving range average and multiply it by 2.66. Why 2.66? Great question - and it is well covered in Don Wheeler's book. In brief, you could calculate the standard deviation and look at 3 sigma, but 2.66 is faster, easier to remember, and ultimately tells the same story.
We take our moving range average of 3.4 and multiply it by 2.66, giving us 9.044. What does this number mean? It means that with normal variance (the Voice of the Process), we can expect conversions to fluctuate up to 9.044 above or below our average number of conversions, which was 6.1.
To put it more clearly, without any intervention or new features added, we should expect between 0 and 15 conversions per day - and that would be completely normal.
Let's visualize this data. We add our upper and lower bounds to our chart for our first 10 days. It now looks like this:
Fourth Step - Introduce Change & Continue To Measure
We have established the upper and lower bounds of what we can expect to happen. We know that after the feature was introduced, our conversion numbers looked better. Remember, the average went up almost 30% (from 6.1 to 7.9) - so that is success, right?
We extend our chart and look to see if the change actually made a difference.
Our average for the next 10 days was higher, but looking at what we could normally expect the system to produce, all of the conversions were within the expected range. In essence, the feature we delivered did not create a meaningful impact to our conversions.
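The same sketch extends naturally to the signal check. Again, the daily counts are hypothetical values consistent with the post (a post-feature mean of 7.9 that includes the day-11 spike of 14 and the day-12 dip of 4):

```python
# Hypothetical data consistent with the post: the baseline averages 6.1,
# the 10 days after the feature average 7.9 (including the day-11 spike
# of 14 conversions and the day-12 dip of 4).
baseline = [6, 8, 4, 8, 6, 12, 9, 7, 2, 5]
after = [14, 4, 9, 7, 8, 6, 10, 5, 9, 7]

mean = sum(baseline) / len(baseline)
mr_avg = sum(abs(b - a) for a, b in zip(baseline, baseline[1:])) / (len(baseline) - 1)
upper = mean + 2.66 * mr_avg
lower = max(0.0, mean - 2.66 * mr_avg)

# A point outside the limits would be a signal; everything inside is noise.
signals = [x for x in after if not lower <= x <= upper]
print(f"post-feature mean: {sum(after) / len(after):.1f}, signals: {signals}")
```

Every post-feature point falls inside the natural process limits, so `signals` is empty: despite the higher average, there is no evidence the feature changed anything.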
Note, we’re not saying that nothing could be learned from delivering the new feature. The point we’re making is that prior to delivering the feature we assumed it would lead to an increase in conversions. Using a Process Behavior Chart we were able to show our assumption was invalid.
Now we can continue the conversation with the stakeholders around empowering the team to improve the product. Maybe now they'll be more open to listening to what the team thinks will lead to an increase in conversions.
MORE USES FOR PROCESS BEHAVIOR CHARTS
We like using this visual display of data to help us concretely answer questions focused on whether or not our actions are leading to the intended outcomes. For example, we are experimenting with Process Behavior Charts to measure the impact of teaching new engineering and DevOps practices in the Dojo.
REMEMBER - MEASURE IMPACTS TO THE WHOLE VALUE STREAM
Process Behavior Charts can be powerful, but they require that you ask the right questions, collect the right data, AND take the right perspective. Using a Process Behavior Chart to prove a change is beneficial to one part of the value stream (e.g., the “Dev” group) while not taking into consideration the impact to another group (e.g., the “Ops” group) would be missing the point. Consider the complete value stream when you are looking at these charts.
FURTHER READING
For more information on these charts, as well as the math behind them and what other trends in data are significant, we recommend the following:
Understanding Variation: The Key to Managing Chaos by Donald J. Wheeler
Lean Blog - Mark Graban, in particular this post on home runs in the World Series
Process Behavior Charts (also called Shewhart Charts) – this article talks about various patterns that are statistically significant
If you found this helpful and you adopt Process Behavior Charts, please let us know how you are using them and what you are discovering.
Dojo Metrics - Moving From What Is Easy to Capture to What Matters
A fair question to ask when starting a Dojo (or any initiative for that matter) is “how do we know this is working?” Invariably, right on the heels of that question somebody always brings up the idea of capturing metrics. Then they turn to us and say, “What are the right metrics for the Dojo?”
The best metrics provide insights that help us take action to improve the current situation. In the case of a new initiative like a Dojo, that action might be making a decision to continue the initiative, modify it, or end it.
Sadly, metrics are often arbitrary or they tell an incomplete story. Single metrics fail to capture the interplay and tradeoffs between different metrics. We’ve heard many stories of how organizations optimizing for one metric created detrimental results overall. (We’re looking at you, capacity utilization.)
How Do We Measure the Effectiveness of the Dojo?
The primary goal of the Dojo is to foster learning. We need to measure the effectiveness of that learning and ultimately, we need to measure the economic impact that learning has on the organization. But it’s not learning at any cost. We’re aligned with Don Reinertsen on this point.
“In product development, neither failure, nor success, nor knowledge creation, nor learning is intrinsically good. In product development our measure of “goodness” is economic: does the activity help us make money? In product development we create value by generating valuable information efficiently. Of course, it is true that success and failure affect the efficiency with which we generate information, but in a more complex way than you may realize. It is also true that learning and knowledge sometimes have economic value; but this value does not arise simply because learning and knowledge are intrinsically “good.” Creating information, resolving uncertainty, and generating new learning only improve economic outcomes when the cost of creating this learning is less than its benefit.”
Don Reinertsen - "The Four Impostors: Success, Failure, Knowledge Creation, and Learning"
Reinertsen stresses the need to generate information efficiently. This is easy to understand when thinking in terms of generating information that helps you make decisions about your product. For example, it’s a fairly straightforward exercise to determine the costs for generating information by running low-fi, paper prototype tests that answer the question “should we include this feature or not?”
It’s also easy to understand how you might measure the effectiveness of knowledge creation when helping teams make improvements on their continuous delivery pipelines. We can calculate the cost of learning DevOps practices and compare that to expenses saved by automating manual processes.
What’s not as easy to understand is how to measure the impact of learning cloud native architecture or microservices - or something even more nebulous, like product thinking and the impact of learning a design practice like personas.
We would expect the impact of these learnings to result in lower development costs, decreased cycle times, and increased revenues resulting from better market fit for our products. But – there is a high degree of uncertainty as to the level of impact these learnings are going to have on the organization. (Again, hat tip to Don Reinertsen. His post about looking at the economics of technical debt influences our thinking here.)
In addition, during a team’s tenure in the Dojo it’s quite probable that their productivity will decrease as the team is creating new knowledge and incorporating new practices. The team's investment in learning carries a cost.
Ultimately, we need to understand the impact the Dojo has on lifecycle profits. That impact will often occur after a team has left the Dojo.
We have started organizing metrics in the Dojo into three groups. Our goal is to help orient stakeholders, leaders, and teams around what actions these metrics will help them take. We also want to help them understand the level of effort required to collect the metrics and the timeframes in which they will be available.
Three Categories of Metrics for the Dojo
Simple To Capture - Organizational Reach
These metrics simply show the amount of “touch” the Dojo has.
Examples include:
Number of teams going through the Dojo
Total number of attendees
Number of Programs / Portfolios touched
Astute readers may critically call these “vanity metrics” and they would not be wrong. These metrics do not equate to impact. They don’t help us answer the questions “Were the right teams involved?”, “Did the amount of learning that happened justify the investment?”, or “How much learning stuck?”
However, these metrics are simple to collect and can be used as leading indicators once we have metrics on the economic impact the Dojo has on teams. For many organizations, these metrics are important because they imply value as the Dojo is being bootstrapped, even though they don't prove it. They are metrics everyone is comfortable with.
Harder To Capture – Directional/Team Based Improvements
Metrics in this category are more important than those in the previous category because they look at the directional impact of learning in the Dojo and how that learning is affecting teams.
Examples include:
Number of automated tests
SQALE code quality index
Percentage reduction in defects
Cycle time reduction to deliver a product increment
Velocity / Story count (with the obvious caveat that these can be easily gamed)
Again, these metrics are far from perfect. The testing related metrics do not prove the right tests were written (or the right code for that matter). Metrics showing products were built faster don’t shed any light on whether those products should have been built in the first place (what if nobody buys them?).
What these metrics do show is the incorporation of product delivery practices that are being taught in the Dojo - practices that our experience and the experiences of other organizations have shown to have a positive impact on lifecycle profits. These metrics can be collected with agile project management software, SonarQube, Hygieia, or other comparable tools.
When we use these types of metrics we need to have a baseline. It’s helpful to have data for teams for two to three months prior to when they enter the Dojo. We don’t always have this baseline, however, and in some cases the best we can do during a team’s tenure in the Dojo is help them establish the baseline. Obviously, we want to track these metrics for teams after they’ve left the Dojo to see how well new practices are sticking.
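As a sketch of how a directional metric might be tracked against that baseline, here is a minimal example. The cycle-time figures are hypothetical; a real implementation would pull this data from agile project management tooling:

```python
# Minimal sketch: comparing a directional metric (cycle time per story,
# in days) against a pre-Dojo baseline. Numbers are hypothetical.
from statistics import mean

def percent_change(baseline, current):
    """Percentage change from the baseline average to the current
    average. Negative values indicate a reduction (improvement for
    cycle time). The baseline should ideally cover the two to three
    months before the team entered the Dojo."""
    b, c = mean(baseline), mean(current)
    return (c - b) / b * 100

baseline_cycle_times = [12, 9, 14, 11]  # days, pre-Dojo
current_cycle_times = [8, 7, 9, 8]      # days, during/after the Dojo

change = percent_change(baseline_cycle_times, current_cycle_times)
print(f"cycle time change: {change:+.1f}%")
```

The same comparison works for defect counts, test counts, or any other metric in this category, with the usual caveat from above that direction of change is not proof of economic impact.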
Difficult To Capture – Impact/Economic Improvements
Metrics in this group are challenging - not only to collect but also because using them to drive action challenges the way many organizations work. These are the metrics that force us to look at the question “Is this initiative having a positive economic impact on the organization?”
Examples include:
Increase in sales conversion
Cycle time reduction for a delivery with impact (not just delivery, but a delivery that mattered)
Systematic cost reductions (not silo optimizations that may have detrimental effects in other areas)
Savings resulting from killing bad product ideas early in the discovery/delivery cycle
Metrics like these can prove initiatives like the Dojo are having a positive impact on lifecycle profits. These metrics will be substantially harder to collect. We need to collect data for a much longer period of time. We need to align with the finance department in our organizations. And, we need whole product communities aligned around a shared understanding of what successful outcomes look like. In addition, we need to understand how to separate real signals of change from noise. (This post has more on that topic.)
Ultimately, this last category of metrics is what matters. This is where the Dojo shines. We work with teams to teach the practices, thinking, and communication strategies that will have an impact on lifecycle profits.
This is an ongoing area of improvement for us. This is what we are currently practicing. These categories of metrics are helping foster conversations, understanding of what knowledge individual metrics can provide, and the value of investing in the Dojo.
Empowering & Enabling Responsibility
Empowering teams is a topic the DevOps and Agile communities frequently talk about. But it is easier said than done. Here is one simple approach to empowering teams you can do right now.
But first a little background...
Responsibility vs Accountability
We frequently work with leaders who are new to DevOps. We ask them straightforward questions such as - “Why are you interested in DevOps?” We often hear answers along the lines of “We want to make teams more accountable for their actions.” When we dig in a bit further, we learn this is not actually what they mean. What they are trying to say is that they want to empower teams with responsibility for their own work.
What’s the difference between accountability and responsibility? Look up the definitions and you might find yourself going back and forth between them endlessly. It’s as if you are trying to navigate through an M.C. Escher drawing.
For us, being accountable means you’re answerable for something. In its worst form, a leader makes the team answerable for an outcome that is beyond their ability to influence. This often stems from a command-and-control style of leadership. If you’ve ever been held accountable for meeting a goal without the ability to influence how to accomplish it, you know what we mean.
Being responsible means you have the competency, authority, and the correct understanding of the desired outcome so that you can deliver that outcome as you see fit.
When we discuss this topic with leadership, we often use Christopher Avery's work around the responsibility process. It’s an effective conversation starter that helps shift the focus away from accountability toward responsibility.
With that context out of the way, let's look at the one simple thing you can do to empower teams.
Decision Rings
The image above is something you can refer to with your leaders, coaches, and teams. In this example, the center circle is the team, the next outer circle is their manager(s), the next outer circle is domain experts from the business, and the outermost circle is some executive leadership.
The rings represent different levels in an organization. We use them to help frame discussions when asking “Who can make this decision?” The decision making structure in your organization may be different. The decision making structure may also change depending upon the question at hand.
Let's look at an example. Imagine your product team is working on the goal of increasing sales by delivering promotional content in banner ads. After starting to work on that goal, the team uncovers a better way of improving sales with promotions that has nothing to do with banner ads. Who makes the decision on what to deliver?
First, we make sure we agree where that decision is made today – and we’re not talking about where it’s “officially” made according to some policy. We’re talking about where it’s actually made given the messy, often political nature of decision making within organizations.
Next, we ask “what information would need to be made available or what competency would need to be developed to move that decision inward?” We can then get to work making any necessary changes. Or, we can move the decision-making authority inward if no changes are necessary.
In the above example, we might say “Right now, the business team needs to make that decision. For the product team to be able to make that decision, we would need to provide them more information on the organization’s strategic goals.”
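The technique can even be captured in a lightweight way, as in the sketch below. The ring names, decisions, and prerequisites are all hypothetical placeholders; the point is simply to make "who decides, and what would move it inward" explicit:

```python
# A minimal sketch of decision rings: each decision maps to the ring
# that currently makes it, plus what would be needed to move it inward.
# Ring names and decisions here are hypothetical examples.
RINGS = ["team", "managers", "business experts", "executives"]  # inner -> outer

decisions = {
    "choose how to deliver a sales-promotion goal": {
        "current_ring": "business experts",
        "to_move_inward": "share the organization's strategic goals with the team",
    },
    "pick the team's testing framework": {
        "current_ring": "team",
        "to_move_inward": None,  # already made at the innermost ring
    },
}

def moves_available(decisions):
    """List decisions not yet at the innermost ring, with the change
    (information or competency) needed to move each one inward."""
    return {name: d["to_move_inward"]
            for name, d in decisions.items()
            if d["current_ring"] != RINGS[0] and d["to_move_inward"]}

for name, need in moves_available(decisions).items():
    print(f"{name}: needs -> {need}")
```

Keeping the list visible turns "who can make this decision?" from a political question into a backlog of concrete enablement work.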
We have just started exploring this technique in the dojos and it is already driving productive conversations. It is not a silver bullet, but it is a simple visual that sparks conversations about empowerment and grows responsibility.
Try it out. Let us know how it works for you.
Technical Debt - Learning in (and With!) the Face of Debt
Technical Debt has a few definitions ranging from 'the previous person's bad code' to 'the shortcuts taken to hit a deadline' to my favorite - Technical Debt is 'the gap in the code between what we knew when we started building our product and what we know now'.
It's easy to look at a codebase with no automated tests, high cyclomatic complexity, and a manual build process and say 'look - Technical Debt'. It is more challenging to work with a team implementing new features where previous technical choices are making it costly to improve the product. In other words, the code is not maneuverable (Nygard).
These situations happen frequently in the dojo. Let's look at a couple of examples.
The Untested 20-Year-Old Monolith
Imagine a 20+ year old codebase, modified every year by the lowest cost outsourcing firm the organization could find. Now imagine you inherited that codebase, with such delights as 900-line constructors that connect directly to databases, establishing pools and locks. An agile coach could come into that team (and one had) and say, “You need to get to 70% code coverage for your unit tests.” The team laughed because crying was too obvious.
When teams are in this kind of bind, entering the dojo isn't just about learning how to write unit tests. The team already knew how to write unit tests and was eager to do so. The challenge was finding a strategy to attack the beast. And when deadlines were hitting, it was easier to keep adding code and crossing your fingers.
We did a few things to address the problem. First was getting the team together to identify and discuss quick wins they all wanted to knock out. We did some light brainstorming and affinity mapping which led to a new set of lightweight design changes - all in about an hour or two - and the team had a nice shortlist of changes to start working on. The list would take a week to complete and would be worked on concurrently with new work. It was a small set of changes, but it gave the team a few wins to build upon.
Next, we came up with strategies for attacking the technical debt while working on new stories. One strategy we used quite a bit was a test-driven approach, focusing on the design of the tests themselves. We’d have a quick discussion around the tests: “Where should the responsibility of the tests lie?” With that in mind we’d ask, “What is the delta between where those responsibilities should be and where they’d be given our current design? Can we get there now?”
For some of the stories, we could implement our desired changes immediately. For others, we couldn't. In those cases, we kept a visual list of the spots we wanted to work on. We also created higher-level tests that we could implement immediately with the knowledge that we would remove those tests once we had implemented lower-level tests.
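That "higher-level test now, lower-level tests later" move can be sketched as a characterization test. The legacy function below is a hypothetical stand-in, not the team's actual code:

```python
# Sketch of a characterization test for legacy code. The function is a
# hypothetical stand-in for a large, entangled routine that is too
# costly to unit test internally today. We pin its current observable
# behavior with one coarse, higher-level test; once responsibilities
# are extracted into smaller units with their own tests, this coarse
# test can be deleted, as described above.

def legacy_calculate_invoice(line_items, tax_rate):
    # imagine hundreds of entangled lines here; only the observable
    # input/output behavior matters for the test below
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

def characterization_test():
    # the expected value was captured from the running system, not from
    # a spec: it pins today's behavior so refactoring underneath is safe
    assert legacy_calculate_invoice([(2, 10.0), (1, 5.0)], 0.08) == 27.0

characterization_test()
```

The key design choice is that the expected values record what the system does today, right or wrong, so the team can refactor underneath with a safety net before investing in finer-grained tests.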
The Multi-Team Matrixed Design
Another team was similar in that they also lacked tests, but their challenges were slightly different. This team was formed as an organization’s first attempt at creating 'product teams.' The codebase was gnarly and had contributions from multiple teams over multiple years without much knowledge sharing.
Enter the dojo. The team was doing a little product exploration around a new feature using impact mapping. They came up with a few ideas that would have more impact for the customer than the new feature as it was originally defined. Excellent, right? Not quite. While these ideas were great and the team agreed they were better approaches to the problem, the technology would not allow the better ideas to be built. The way the product was designed made it difficult for required data to be accessed when needed, resulting in unnecessary extra steps. Streamlining a separate process would break downstream services. And so on...
Teaching ideas like parallel change can help in this space. But the real value here came from the whole team learning the cost of technology decisions together, and working together to learn approaches to attack technical debt.
When items like this arise, first and foremost it is still a good thing. Now we can start quantifying design problems and communicating the opportunity cost of technical debt. In this example, we could calculate the cost of the options we could not deliver. The learning expanded beyond the team and became organizational learning.
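The parallel change pattern mentioned above (often described as expand/migrate/contract) can be sketched as follows. The service and method names are hypothetical:

```python
# A minimal sketch of parallel change (expand/migrate/contract).
# All names and prices are hypothetical.

class PriceService:
    # EXPAND: the old interface keeps working while a new one is added
    def get_price(self, sku):                      # old: bare number
        return self._lookup(sku)

    def get_price_v2(self, sku, currency="USD"):   # new: richer shape
        return {"amount": self._lookup(sku), "currency": currency}

    def _lookup(self, sku):
        return {"WIDGET": 9.99}.get(sku, 0.0)

# MIGRATE: callers move to get_price_v2 one at a time; each change is
# independently shippable, and old and new coexist the whole time.
svc = PriceService()
assert svc.get_price("WIDGET") == 9.99                # legacy caller
assert svc.get_price_v2("WIDGET")["amount"] == 9.99   # migrated caller

# CONTRACT: once no callers remain on get_price, delete it (and
# optionally rename get_price_v2 back to get_price).
```

Because each step ships on its own, the team never needs a risky big-bang rewrite to escape a design that is blocking better product ideas.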
And in the dojo – we don’t simply teach approaches for tackling technical debt and send teams on their way – we help them work through it.
What have you done to help teams with technical debt?
A Better Center of Excellence
Many organizations attempt to manage knowledge by creating Centers of Excellence (CoE). A CoE is supposed to develop skills and best practices (or leading practices), codify the knowledge around those skills and practices, and blast that knowledge out to the rest of the organization. In our experience, CoEs seldom work. They do not deliver on the expected outcomes and are eventually abandoned.
Dojos can deliver on the intent of centers of excellence. Let's look at CoEs, how they typically struggle, and how dojos address these issues.
Spreading Knowledge
The idea behind CoEs is that once we have a core group of people with 'the knowledge', they can simply 'go forth and deliver the knowledge.' As Dion referenced in our recent webinar, knowledge is very difficult to codify and externalize.
Instead of trying to package up knowledge and hand it off to teams, Dojos amplify knowledge creation through frequent application of new skills with teams. This application of new skills is not only done in the context of their daily work, making it stickier, but also in a safe environment with coaching support. Instead of trying to codify knowledge and pass it from person to person, dojos create learning experiences where participants have the opportunity to create knowledge for themselves.
Separating Centers of Excellence by Skill
Many CoEs fall into the trap of creating separate centers for each skill or practice area. Perhaps one for testing, another for microservices, another for cloud native architecture. While this does seem to make sense - get the best people with the most knowledge around a practice area together - it neglects how the different practices fit together and reinforce one another.
Dojos solve this problem by not focusing on skills or practices in isolation but rather in the context of the end-to-end product development lifecycle. Dojos take into account how the various skills and practices influence each other. Coaches work with full-stack teams to provide broader perspectives and deeper knowledge creation across different skill sets.
Creating Internal Best Practices
The core tenet behind CoEs is to find and promote best practices. This is problematic in and of itself, since we know best practices work only for the simplest, most obvious problem domains. In addition, this is usually attempted by recruiting a small set of people who have been successful in their roles and occasionally bringing them together to try to create patterns. The onus rests on that small group, a centralizing control. For those readers familiar with "The Starfish and the Spider", this is definitely a spider model.
The dojo, in contrast, is a starfish model. Instead of centralizing knowledge, we are trying to help teams grow their skills and then give back and share with others, removing the central ownership of ideas.
Dojos are the best way to achieve the intended outcomes of CoEs. Want to learn more? Reach out to us and let's chat - hello@dojoandco.com