Michael Mahlberg taught me something cool last week. We're writing an article together. Part of the article is about forecasts. I was ranting and raving about how to help people see their confidence levels. He pointed me to this slideshare: Lightweight Kanban Metrics (in German). (Don't worry about the language.) Go to slides 24, 25, and 26.
I'll use his slides as inspiration when I walk through an example below. First, let me set the stage.
Many managers want to know the answers to these questions:
- When will we see the first bit? (That might be the feature set or the product.)
- When will we see half of it?
- When will we see most or all of it?
Let me start with the first question, about that first bit.
“When Will We See Something?”
Many managers now realize they don't have to see everything to get an idea of where the team is going with the product. That's good.
If you know about the various Minimums, you know you can answer the first question—when will we see something—pretty easily. You create minimums and deliver the first thing.
Maybe you also use these ideas:
- Your team works together to minimize the team's delays. (See the Measure Cycle Time post for a value stream map and a discussion of the team's delays.)
- Your team manages its WIP. For example, I recommend (as a guideline) that teams limit their stories in progress to a reasonable number. (I don't know what that number is for your team, but as a starting point, take the number of people on the team, divide by two, and round down. A team of six or seven people would have no more than three items in progress using my guideline.)
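That WIP guideline is simple enough to express as a one-line calculation. Here's a minimal sketch; the team sizes are just example numbers:

```python
def wip_limit(team_size: int) -> int:
    """Guideline WIP limit: team size divided by two, rounded down."""
    return team_size // 2  # floor division rounds down

# A team of six or seven people gets a limit of three items in progress.
print(wip_limit(6))
print(wip_limit(7))
```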
- If you can, keep the story size to some small range. I like one-day stories. However, maybe this is a new product for you and you don't quite know how to do that yet.
Regardless of how your team works, you can demo something inside of a week or two, assuming you implement by feature.
Now we get to the next question, about half or all of the work. Your managers are asking you for the 50% confidence date. Their question about most/all is the 90-100% confidence date.
That's where Michael's low-tech way of looking at cycle/lead time distribution makes so much sense.
Create Forecasts Using Cycle Time
If your team has a board like the one here, you would take the done column cards and put them into this kind of a histogram.
Cycle time is on the X-axis. I used days in this example. The number of cards that deliver in that cycle time is the Y-axis. (That's how to create the histogram.)
Many of us have cycle time distributions that look something like this: most of the stories cluster around a relatively small number of days. In this chart, most of the stories take 1-6 days to complete. A few stories (here, five of them) cluster around a much larger time, taking 12-15 days, more than twice the typical cycle time.
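A sketch of building that histogram, using a hypothetical list of cycle times (in days) read off the cards in the done column, shaped like the distribution above:

```python
from collections import Counter

# Hypothetical cycle times, one per done card, in days.
# Most stories take 1-6 days; five outliers take 12-15 days.
cycle_times = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5,
               5, 6, 6, 6, 6, 12, 12, 13, 14, 15]

# Count how many cards delivered at each cycle time (Y-axis),
# keyed by the cycle time in days (X-axis).
histogram = Counter(cycle_times)
for days in sorted(histogram):
    print(f"{days:2d} days: {'#' * histogram[days]}")
```

Each row of `#` marks is one bar of the histogram turned sideways.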
To get to a 50% confidence level, you would use a cycle time of 5 days. Given what you know, about half the stories will take 5 days or fewer to complete. About half the stories will take longer than 5 days to complete.
Notice that you will be wrong half the time. That's why many managers don't much like 50% confidence.
What About a Higher Confidence?
Want 80% confidence? Given what you know about your cycle time, you can complete 80% of the stories in 12 days or less. You will still be wrong 20% of the time.
If you want to know with 90% confidence when the team will finish the work, you would use 13 days. You can complete 90% of the work using a cycle time of 13 days. (You'll still be wrong 10% of the time.)
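The confidence lookup is just a percentile of the observed cycle times: sort them and find the smallest cycle time that covers at least that percentage of the stories. A sketch, using the same hypothetical data as the chart (50% lands on 5 days, 80% on 12, 90% on 13):

```python
import math

def confidence_days(cycle_times: list[int], confidence: float) -> int:
    """Smallest cycle time covering at least `confidence` percent of stories."""
    ordered = sorted(cycle_times)
    # Index of the story that pushes us to the requested coverage.
    index = math.ceil(confidence / 100 * len(ordered)) - 1
    return ordered[index]

# Hypothetical cycle times in days: most stories 1-6 days, five at 12-15.
cycle_times = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5,
               5, 6, 6, 6, 6, 12, 12, 13, 14, 15]

for pct in (50, 80, 90, 100):
    print(f"{pct}% confidence: {confidence_days(cycle_times, pct)} days")
```

Note how 100% confidence lands on the maximum observed cycle time, and it still only reflects the past, not the worst case you haven't seen yet.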
If you want a 100% confidence level, you need to know the maximum possible cycle time before you've done the work. I have never been successful with a 100% confidence level, and I haven't needed one.
Can Your Forecast be Totally Accurate?
Percentage confidence uses past data (here, cycle time) to forecast the future.
If your cycle time changes, you'll need more data to create a reasonable forecast.
Worse, the larger the feature set, the more uncertainty you might see.
Larger Feature Sets or More Work Increases Uncertainty
That's where large feature sets make everything much more uncertain.
The 50% confidence is now at 6 days. (If you assume everything has a cycle time of 6 days, you'll be right about half the time and wrong about half the time.) For 80% confidence, you would need to use 13 days. And for 90% confidence? You'd have to use 14 days for cycle time.
That's why I like to ask what managers will do with this information.
What Decisions Will You Make Based on This Data?
I've seen many good reasons for wanting to know “when”:
- Do we have enough to ship to customers?
- Can we capitalize yet?
- Can we offer a commitment to customers or other outside-the-organization stakeholders?
- Should we add more people?
- Should we cancel the project?
Your managers might have other reasons. That's why, whenever managers ask any of the “when” questions, I suggest you ask them what they will do with the information.
For more reading, please see:
- Dan Vacanti offers a much more in-depth look in his book, “When Will It Be Done?”
- Troy Magennis offers workshops and other resources about this very problem.
I also offer workshops on this. First, I recommend you try it yourself, to see how easy probabilistic forecasting can be.