“What should I measure?” is one of the most common questions I hear when I work with people going through an agile transformation. Too often, managers measure people as individuals. (Traditional measurements focus on resource efficiency instead of flow efficiency.) Resource efficiency measures don’t capture what the organization delivers or what prevents the organization from delivering.
This measurement question can be the prompt that changes your culture and your system. It might help people realize there are reasons for change and help them create small, safe-to-fail experiments. I like to see the questions reflect the why for your organization’s agile transformation.
First, consider asking yourself these questions:
- What do we want more of? (I often discover the answers are about throughput and collaboration.)
- What do we want less of? (This question often leads me to think about defects and delays.)
That might lead us to consider these possible measures:
- How often do we deliver something our customers need? (Lead time trends)
- Can we deliver what we want when we want? (A simplistic qualitative measure of organizational agility)
- Do we have work as inventory, stuck somewhere? (Where are our bottlenecks?)
- Do we incur costs because we don’t release that inventory of work? (Cost of Delay)
- Do we know our flow, our value stream, and where we add value and where we wait? (When we visualize our flow, we can see our value stream: where in the flow we add value, and where we wait. Seeing the wait states is the point.)
- Do our employees (all of them) like working here? (This is a partial measure of engagement. I find it telling when the senior managers like working here and the individual contributors don’t. One reason might be because they are “individual” contributors.)
- Do our employees believe in our mission? (Have we defined why we exist? Nobody exists to make money. Making money is a side effect of the mission. An important side effect, but a side effect.)
- How well does what we deliver reflect our mission? (Are we delivering improvements on a regular basis? Trying to keep up? Falling behind?)
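For the lead time trend in the first question, the arithmetic is simple enough to sketch. Here is a minimal, hypothetical example; the item dates and the data layout are my illustration, not any team's real data:

```python
from collections import defaultdict
from datetime import date

# Hypothetical work items: (started, delivered) dates for each item.
# These dates are illustrative only.
items = [
    (date(2018, 1, 2), date(2018, 1, 20)),
    (date(2018, 1, 8), date(2018, 1, 24)),
    (date(2018, 2, 1), date(2018, 2, 10)),
    (date(2018, 2, 5), date(2018, 2, 12)),
]

# Lead time per item: calendar days from start to delivery.
lead_times = [(done - start).days for start, done in items]

# Group lead times by delivery month so we see a trend, not a snapshot.
by_month = defaultdict(list)
for (start, done), lt in zip(items, lead_times):
    by_month[done.strftime("%Y-%m")].append(lt)

trend = {month: sum(lts) / len(lts) for month, lts in sorted(by_month.items())}
print(trend)  # average lead time per delivery month
```

A team could plot those monthly averages on a wall chart; the direction of the trend matters more than any single number.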
I’ll stop there. Notice that there is no mention of velocity or burnups or any of that nonsense. While project teams might use velocity and local burnups, and projects might report on feature charts and product backlog burnups, those measures are not sufficiently strategic. (See Velocity is Not Acceleration for the discussion and examples of feature chart and product backlog burnup charts. Also, see Create Your Successful Agile Project to differentiate between measures teams need and project-reporting measurements.)
Notice that all of the measures, qualitative or quantitative, are trends over time.
One manager started with just two measures: lead time for projects and whether people felt they had finished valuable work that day. The teams could report on their lead time. And every team had a flip chart, with a supply of yellow and red stickies next to it.
Each person on the team assessed the value of their work done that day. The stickies were a qualitative assessment: the higher a yellow sticky, the more valuable people felt their work was; the lower a red sticky, the less valuable the work.
Some teams discovered their assessment of value was about their ability to deploy. Other teams decided value was more about collaboration in the team. Yet other teams decided it was about what the end customer could do with the released stories.
Each team could decide what value meant to them.
Team1 used their graphs and lead time as part of the data for their retrospectives. They (slowly!) moved their deployment time from two weeks to one week to three days to one day. They released something valuable to customers every single day.
Team2 learned to deploy once a week, which appears to be sufficient for them now.
Team3 was a little suspicious of Team1’s and Team2’s success. Team3’s lead time was quite long and they decided to focus on just lead time for one month (two iterations). Because their PO felt compelled to change the iteration contents inside the iteration, the team had an urgent column on their board. They also measured the number of items that flowed through the urgent column and the number of items they’d planned for.
Team3 discovered that their lead time for the Urgent work was 3-6 days. They also discovered that their planned work lead time was about the same, 3-5 days. The difference was the quantity: they often had five items in the Urgent column and four items in their planned Ready column. They had a ton of WIP (Work in Progress). They switched to a kanban board, using what had been the iteration boundaries as a planning and retrospective cadence. Team3 discovered they needed more data to understand their work.
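Team3's discovery is mostly counting: lead time per column, and how many items sit in each column at once. A small sketch with made-up numbers in the same shape as Team3's data:

```python
# Hypothetical snapshot of a Team3-style board: lead times (in days)
# for the items currently in each column. Numbers are illustrative.
board = {
    "urgent":  [3, 4, 5, 6, 4],   # five urgent items
    "planned": [3, 4, 5, 4],      # four planned items
}

# Per-column WIP and average lead time.
for column, lead_times in board.items():
    wip = len(lead_times)
    avg = sum(lead_times) / wip
    print(f"{column}: WIP={wip}, avg lead time={avg:.1f} days")

# Total WIP across the board is what makes the problem visible.
total_wip = sum(len(lts) for lts in board.values())
print(f"total WIP: {total_wip}")
```

The per-column averages look similar; it is the total WIP that shows why everything feels slow.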
After a month, they decided to track their daily perceived value, too. They started mostly with red stickies below the neutral line and progressed to more yellow stickies above the line.
For all three teams, as the lead times decreased, the perceived value increased.
That’s when the managers asked the teams to help them decide what to change next. The teams decided to change their measurements to take a closer look at their value streams and Cost of Delay. The teams asked the managers to look at their decision-making processes. Why did the PO feel such pressure to change what the teams had planned?
The managers measured the lead time from when they put a possible project on their board to when they gave that project to a team. That delay was longer than any of the projects (manager decision lead time was often 18 months, and the projects were anywhere from 6-9 months). The managers needed the teams to be able to deliver more often and the managers needed to manage the project portfolio better.
That prompted a discussion of how the managers influenced the product roadmaps and the project portfolio. It turns out that the managers were measured and incented by resource efficiency, not flow efficiency. They had to change their reward and bonus system in order to bring sanity back to the roadmaps and the project portfolio.
Your organization may decide to measure other data. However, it’s worth thinking about what to measure so you can create small, safe-to-fail experiments for your measurements.
If your agile transformation is stuck, consider rethinking your measurements. Teams still need team-based data. And, teams need to report project data. Managers might need data about their work because their decisions create and refine the culture (agile or not). When you think about an agile culture, teams and managers often need different data.
That’s one of the reasons Gil Broza and I are offering the Influential Agile Leader workshop in Boston, June 7-8, 2018. The early bird registration ends May 1, 2018. You might look at your system and current culture and decide to create other measurements to see what you can change. Do join us.
Update: Here are all the posts in order:
- Introduction and Part 1 (this post)
- Agile Transformation: Practice Change, Part 2
- Agile Transformation: See Your System and Culture, Part 3
- Agile Transformation: Possible Organizational Measurements, Part 4
- Agile Transformation: More Possible Organizational Measurements, Part 5
- Agile Transformation is a Journey, Part 6
Note: Updated May 21, 2018 for clarity. See the French translation here: http://www.les-traducteurs-agiles.org/2018/05/24/transformation-agile-indicateurs-organisationnels-possibles.html.