©1999 Johanna Rothman
I just read an article by a well-known author. He claimed that your first project slip isn't so bad; the third or fourth project slips are the bad ones.
In my mind, red flags went up. I flipped the bozo bit on the author. I completely disagree with his conclusion.
The first slip is your initial indication something is wrong. Don't expect you can make up time in your project. You can't. You can use the first slip to take a step back and observe what's going on vs. what you'd like to have going on. When you hit the third or fourth slip, you've lost the schedule battle.
When software projects start to slip, they're talking to you, the project manager. The first slip is a whisper: “Your expectation is not matching my reality. Listen to me, I can tell you my reality.” If you ignore the first slip, the second slip is a murmur: “Things aren't quite right. Don't you want to know what's going on?”
At the third slip, the project says: “Knock-knock. Are you there? Don't you want to know what's going on?” At the fourth slip, the project yells: “Hey, you! You didn't listen to me when you could take action. Now, get out the Tums. You'll pay for this.”
I prefer to have projects whisper to me. (Otherwise people think I'm crazy when I yell back at my projects.) If you and your project agree on reality at early stages, you can make small adjustments with big results.
I recently worked with a company just before they planned to ship a Beta release. They were having trouble getting the software ready, and they wanted help getting the work done, so they could meet their Beta date.
I was ready with questions about the schedule, defect data, the testing, and how the developers integrated the code. Luckily, we talked about the schedule first. “Oh, we planned the schedule six months ago. We haven't changed it.” I asked if they had met their milestone dates. “Well, not really. We missed the first deadline. The requirements weren't done, but we had to get started, so we started designing without knowing all the requirements.” This is risky, but not a Terrible Thing, especially if they planned to manage the risks. I asked about the other milestones.
“Well, since the requirements weren't done, we couldn't finish the design on time. Since the design wasn't done, the coding was a little late.” The first slip cascaded into slips for every other milestone. Then I asked what turned out to be the key question: “When did the testers know what to test, if the requirements, design, and implementation were a little late?” The answer I got was “Last week.”
Uh oh. I asked one more question: “How much testing did you plan for this project?” They looked at each other and said, “Oh, we planned to do about six weeks' worth, but I guess we won't get to that now, will we?”
These people were not stupid. It's important to emphasize that. They had a simple problem with a huge cascading effect: the first slip led to more slips. Then they had trouble hearing the reality of their project. They started with a small slip, but because they kept going, the small slip magnified the effect of later slips.
If they had stopped at the time of the first slip, and replanned their work, or replanned the schedule, they might have been able to meet their hoped-for Beta date. Now their only option was to extend the schedule.
Slips tell you valuable information about your project. Something is not going according to plan. Before the something turns into lots of things, have a heart-to-heart discussion with your project.
This article was written 20 years ago, but the problem it describes still feels current to me. Twenty years ago the Agile Manifesto had not yet been signed, so agile approaches were not the mainstream way to work. Still, I find the article relevant because:
“He claimed that your first project slip isn’t so bad; the third or fourth project slips are the bad ones.” -> This reflects a lack of knowledge/awareness of complex adaptive systems. You don’t need a big cause to get a big effect. Small details matter, too.
- “The requirements weren’t done, but we had to get started, so we started designing without knowing all the requirements.” I read this as if they started coding without knowing what to code. Even with agility, analysis is important, and no, I’m not thinking of business analysts. We need to know what we are going to code, not just say “we’ll figure it out afterwards; we already have something.”
“How much testing did you plan for this project?” -> The same thing happens now. Teams don’t know how to do the testing or what should guide test planning. Testing, for me, is about finding problems guided by risks. And the dominant thinking about test automation doesn’t help either, because it shifts attention from what testing is to just some “automation.”
“Well, since the requirements weren’t done, we couldn’t finish the design on time. Since the design wasn’t done, the coding was a little late.” I read the word “requirements” as use cases, or what Weinberg wrote about requirements. Architecture and design are guided by use cases/activities/the business. The same thing happens now :(. The tech stuff prevails instead of the business that should guide it. And yes, I say this with all the DDD and microservices hustle going on now. We, as an industry, never actually understood OOP… but we found the antidote: microservices and DDD. No matter the problem, the answer is “microservices.”
I really liked how the slips “talk” to the manager.
Marius, thank you. At the time, I’d used a variety of incremental life cycles, either design-to-schedule or staged delivery. Those life cycles focus on finishing feature sets. (Not the way we do now in agile approaches where we iterate over a feature set. No, those life cycles assume you finish an entire feature set. They were quite useful for deliverable-based planning.)
I like the way you talk about testing as “finding problems guided by risks.” Yup, that’s how I think about testing, too.