
A senior leader, Jim, emailed me to clarify how to track a project's progress. He's a sales guy by training and is now leading a technology-based company. He's confused. When he asks about progress, he feels as if everyone takes one step forward and two steps back, just as in the Project Kisses of Death image. (That's the problem of a waterfall approach. See Project Lifecycles.)
The project managers are tracking these easy-to-collect measures:
- Lines of Code.
- Story points (yes, in the same project).
- Earned value every time the team delivers some number of story points, not when the team finishes a feature.
But none of this data helps anyone see the actual progress. Confusion abounds.
That's because these data are activity-based measures. (See How Activity-Based Plans Differ from Deliverable-Based Plans.) I don't know why the project managers are tracking any of these, because all of these measures are meaningless. Sure, they're easy to collect, but they offer no value in clarifying the reality of the project's progress.
The only measure of project progress is how fast the team can learn from its user-focused deliverables. That's two things: the actual delivery of some user-visible value, and the speed of that delivery.
Here are two small tests to see if you're focused on activities or deliverables:
- To clarify whether the work is an activity or a deliverable: Does it change how the user interacts with the product?
- For the deliverable itself: How long does it take to deliver this increment of value? That's cycle time, as in the small sketch below. See Measure Cycle Time, Not Velocity for how and why that matters.
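If it helps to see the arithmetic, here's a minimal sketch of that cycle-time calculation in Python. The dates are invented for illustration; the only idea is delivery date minus start date.

```python
from datetime import date

# Hypothetical story: the team started it March 4 and delivered it March 11.
started = date(2024, 3, 4)      # first day of active work on the story
delivered = date(2024, 3, 11)   # day the team delivered user-visible value

cycle_time_days = (delivered - started).days
print(f"Cycle time: {cycle_time_days} days")  # Cycle time: 7 days
```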
Let me unpack that, specifically with a rant against “technical stories.”
Stories Offer User-Focused Value
The more I work with people to use various aspects of agility, the more some people explain that they need “technical” stories, not user-focused stories. User-focused stories finish work through the architecture, implementing by feature. Technical stories implement across the architecture. (See my Product Minimums post for how we might think about what to implement and when.)
I'm not opposed to experimentation at a certain level of the architecture. That might help us decrease risk by learning. And that decreased risk will allow us to finish the next set of stories faster (reduced cycle time).
But let's not fool ourselves. Implementing across the architecture postpones everyone's learning until everything finally lines up to deliver a feature through the architecture. (That's why I suggest we Focus Component Teams.)
Instead of trying to convince you of anything, let me go a little meta: If you love technical stories, how often have you realized late in the project that you needed to change an early technical story because it didn't support delivering features?
Technical stories are up-front design. Worse, most of them do not produce user-focused deliverables. That delays feedback. Maybe not as late as the image at the top of this post, but later than anyone would like. That's the two steps forward, one step back. Or worse, one step forward and two steps back. (I wrote an article about this long ago: It's Just the First Slip.)
But what if you need to learn?
Early Learning Helps Reduce Risks
When I work with teams, the people who love technical stories tell me they need to learn. That's terrific. Learning helps to de-risk the future work. But I don't see how to learn without creating some form of user-focused deliverable.
Even if you're not using the measurements at the top of this post, someone needs to use the results of that learning to validate it. While I prefer agile approaches, agility requires a culture change. And organizations that focus on activities and outputs cannot create an agile culture.
If you can't use an agile approach, I recommend a Staged Delivery lifecycle instead. Since these people had a resource-efficiency culture focused on activities and outputs, there was no way they could use an agile approach.
But activity-based measures don't offer any insight at all.
Debunking Activity-Based Measures
Some people still seem to think that lines of code is a reasonable measure. It's not. (It never has been.)
Long ago, as in the late 80s and early 90s, before I knew about refactoring, I measured code growth in several projects and programs. Because the teams didn't refactor as they proceeded, the code grew as an S-curve. Then, as the teams fixed the outstanding defects, the total lines of code decreased. (I wrote about this in Manage It!)
This was the general curve for at least a dozen commercial products. I'm not a researcher, so maybe I didn't measure this “right.” But I took snapshots of the number of lines of code for these projects as they proceeded. (Most of these projects used C or C++.)
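If you wanted to take similar snapshots today, here's a rough sketch of one way to do it, assuming a git repository and C or C++ sources. The tag names are placeholders, not from any real project.

```python
import subprocess

# Placeholder release tags; substitute whatever snapshots your history has.
snapshots = ["v0.1", "v0.5", "v1.0", "v1.1"]

for tag in snapshots:
    # List every file as it existed at that tag, without checking it out.
    files = subprocess.run(
        ["git", "ls-tree", "-r", "--name-only", tag],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    total_lines = 0
    for path in files:
        if path.endswith((".c", ".cc", ".cpp", ".h")):
            # Read the file's contents at that point in history and count lines.
            blob = subprocess.run(
                ["git", "show", f"{tag}:{path}"],
                capture_output=True, text=True, check=True,
            ).stdout
            total_lines += blob.count("\n")

    print(tag, total_lines)
```

Plot those totals and you get the shape of the code growth, and nothing else. The curve tells you nothing about delivered value.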
Lines of code depends on whether and when people refactor. That's it. If the team never refactors, they have an inflated number of lines of code. If they refactor as they go along, the code growth tends to be more of a sawtooth, still growing, but at a slower pace. Unrefactored code tends to have more duplication and more defects.
But even if the team refactors as they go along, there's no guarantee that the team produces anything of value to the customer. Lines of code have nothing to do with progress.
So lines of code is irrelevant.
But these project managers tried to measure earned value from lines of code or story points. Earned value can only accrue from finished customer-focused stories.
Finished Stories Might Offer Earned Value
I've never been a fan of earned value for software, because I always wonder: if we remove bloated features, are we increasing or decreasing the value of the product? This is why earned value for software makes no sense to me. (See Manage It! or Create Your Successful Agile Project for deeper discussions.)
This graph shows how story points are not the same as finished stories, so trying to calculate earned value from finished points also makes no sense.
And if your team counts “technical” stories, no, there's no earned value there. I understand why activity-based accounting likes earned value, but earned value makes zero sense for software products. Zero sense.
What should you measure? Start with the flow metrics and demos, because they explain the true reality of the project's progress.
Flow Metrics and Demos Explain Real Progress
Don't bother trying to start with a big change effort. Instead, ask this one question to prompt people to change their behavior:
When can we demo something the customer could use?
That question will focus the team on:
- Deliverables through the architecture
- Reducing cycle time and increasing throughput, as in the sketch below. (See Flow Metrics and Why They Matter to Teams and Managers.)
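Here's that sketch of the two flow metrics as numbers, in Python. The dates are invented for illustration; the point is that both metrics come straight from when each deliverable started and when the team delivered it.

```python
from datetime import date
from statistics import median

# Hypothetical finished deliverables: (started, delivered). Invented dates for illustration.
finished = [
    (date(2024, 3, 4),  date(2024, 3, 8)),
    (date(2024, 3, 5),  date(2024, 3, 12)),
    (date(2024, 3, 11), date(2024, 3, 14)),
    (date(2024, 3, 12), date(2024, 3, 20)),
]

# Cycle time: how long each deliverable took from start to delivery.
cycle_times = [(done - start).days for start, done in finished]
print("Median cycle time (days):", median(cycle_times))

# Throughput: how many deliverables the team finished per week of elapsed time.
elapsed_days = (max(done for _, done in finished) - min(start for start, _ in finished)).days
print("Throughput (finished per week):", round(len(finished) / (elapsed_days / 7), 1))
```

Shrinking cycle time or growing throughput is progress you can see. Neither number needs story points or lines of code.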
That's what I explained to Jim, that senior leader. He used that question to refocus everyone on deliverables that offered value to the customers. Yes, the project managers and the teams needed training. That's okay. When Jim changed his focus, everyone else changed theirs.
Activities might offer some value. But the more you measure activities, the fewer user-focused outcomes, those necessary deliverables, you get.