What Lifecycle or Agile Approach Fits Your Context? Part 1, Serial Lifecycles

Are you trying to make an agile framework or approach work? Maybe you have technical or schedule risk. Maybe you've received a mandate to “go agile.” Maybe you'd like to experiment with better ways of working.

Or, maybe you're trying to fit an agile framework into your current processes—and you've got a mess. You're not getting the results you want.

I've seen plenty of problems when people try to adopt “agile” wholesale.

Instead of recommending you “go agile,” I'll explore your options for lifecycles and agile approaches in this series. First, I'll explain the various lifecycles. Then I'll wrap up the series with what you can do if your managers love predicting instead of using feedback loops.

Let's first talk about how a stage-gate approach was as agile as we could get in the 70s, 80s, and early 90s. Yes. A serial lifecycle, a stage-gate, was as agile as we could be back then.

Stage-Gate/Waterfall Was Supposed to Help Us Cancel Projects

Every organization I know has an appetite for more work than they have teams to do the work. That's why we have projects—to work on this product for now, until we've delivered something. Then we can move to the next project. (That's the essence of project portfolio management.) See Projects, Products, and the Project Portfolio: Part 1, Organize the Work to see what I mean about projects.

How a Stage-Gate Lifecycle Used Iteration

This image shows what we used to do as a stage-gate lifecycle. (See Predicting the Unpredictable for more details about the role of estimation.)

We created as many requirements as we knew about. Many of us worked in organizations that “ran out” of time for requirements. We had to take what we knew, re-estimate, and replan the project.

Then, we offered that data to management. They might cancel the project.

Yes, cancel.

Until the 90s, I worked in plenty of organizations where my managers did cancel projects once they saw the requirements. What was their reaction when we, the engineers, said we had to bend the laws of physics to make the project work?

My managers canceled the project. Sometimes, they put the project on the parking lot, because maybe the laws of physics would change? (Computers kept getting cheaper and more capable, so, in a sense, the laws of physics did change.)

Managers Had Many Cancellation Options

My managers didn't just cancel projects at the end of the requirements phase. They canceled at the end of Analysis when we chose the architecture. They canceled at the end of Design, too. Especially if we had to revisit the requirements (which we almost always did). That's because we almost always used Alternative 3, with more feedback loops.

I worked on several notable (to me!) projects where the managers specifically stopped testing, as in Alternative 2. They said, “If we stop finding problems, we'll be able to release because we'll meet our release criteria.”

You won't be surprised by my answer: “If you don't care about the release criteria, we can release any time you want. We've been testing installation and rollback.”

Not all managers appreciated that frankness. (Insert laugh track of my career, here.)

My context was small-to-medium-size organizations with commercial products, where we expected the project to take 6-9 months. I can't speak to other contexts.

The 90s Changed Everything

Several shifts changed project work in the early 90s:

  • Gantt charts became ubiquitous.
  • The project manager's role changed from facilitator to controller.
  • Single-function “teams” reinforced resource-efficiency thinking.

When I started to manage projects, I drew PERT charts on a blackboard. Yes, that was before we had whiteboards and stickies.

I started to use rolling-wave planning with stickies on a whiteboard in the 90s for two reasons:

  1. The people on the project could tell me what they could deliver.
  2. I expected the requirements to change, week to week. They did.

Rolling-wave planning helped everyone see our unknowns, risks, and how to manage for those possibilities. Was I perfect? Oh, no.
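As a minimal sketch (in Python, with names and details I've invented, not from this post), the idea of rolling-wave planning is to keep only the next wave detailed and leave later work deliberately coarse, replanning it as we learn:

```python
from dataclasses import dataclass, field

@dataclass
class Wave:
    """One planning wave: near-term waves hold detailed tasks, later waves only themes."""
    weeks: int
    items: list = field(default_factory=list)

def rolling_wave_plan(detailed_items, future_themes, wave_weeks=2):
    """Plan the next wave in detail; keep everything beyond it coarse."""
    return [
        Wave(weeks=wave_weeks, items=detailed_items),     # next wave: concrete deliverables
        Wave(weeks=wave_weeks * 2, items=future_themes),  # later: themes only, replanned weekly
    ]

plan = rolling_wave_plan(
    detailed_items=["implement login API", "test install/rollback"],
    future_themes=["reporting", "admin UI"],
)
```

The point of the structure: only the first wave commits to specific deliverables, so changing requirements invalidate at most a week or two of planning.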

However, I worked on projects with cross-functional teams. That one thing helped us finish the work.

Single-Function Teams Made Serial Approaches Worse

I don't know if it was a function of all those Gantt charts, but managers got silly about utilization and being “efficient.” (The managers totally forgot about effectiveness.)

That's when many managers outsourced/offshored/moved vital work to far-off people and teams. These managers didn't think about the necessary hours of overlap for a collaborative team.

The idea of IV&V, Independent Verification & Validation, arose in the 90s, too. Managers thought independent testers would help us out of the “software crisis.”  Many managers and organizations decided to separate developers from testers. I suspect that's because the SEI and the PMI both thought creating software was similar to construction.

In construction, most of the iteration occurs in requirements and analysis. (Hardware is different from construction. Most of the iteration occurs in analysis and design, with simulation.)

However, because building software is learning, software needs iteration all the way through the project. That's why we re-estimated after every stage.

I insisted on having all the people we needed on the project at the start of the project. I wrote about this in Manage It! and here in Are We Done Yet? (page down to the chart about when qualified people work on the project.)

If you stuck with single-function teams, you always had the problem that the developers started a new project before the testers finished the previous one.

However, we somehow started the process and lifecycle wars.

Why the Need for a Defined Lifecycle?

Starting in the 80s and into the 90s, senior managers wanted us, middle managers, to define our “process.” They thought we used just one named approach to organize our projects.

Somehow, I was always the mouthpiece of these process groups. (Insert more laugh track here.)

I first drew my favorite no-name lifecycle, the “combo,” for my colleagues. I said, “We do this.” They agreed. We then showed it to our managers.

We started by timeboxing requirements. That's because we knew the requirements would change.

We prototyped several requirements and tested our work. We challenged the architecture with these requirements.

Once we thought the architecture would work, we continued to implement by feature.

On most of my projects, we did not select the most valuable requirements. Instead, we selected the riskiest requirements. That's because we'd experienced the problems of trying to retrofit features into a product architecture that didn't work.

Why did we select the riskiest features?

Because we shipped on tape or CD. We couldn't release via the internet yet, so our cost to release was too high to release often enough.
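The risk-first ordering above can be sketched in a few lines of Python. The feature names and risk scores here are invented for illustration, not from any real backlog:

```python
# Order the backlog by risk, not value: tackle the features most likely to
# break the architecture first, while change is still cheap.
features = [
    {"name": "report export",  "risk": 2},
    {"name": "real-time sync", "risk": 9},  # challenges the architecture most
    {"name": "user settings",  "risk": 1},
    {"name": "plugin API",     "risk": 7},
]

risk_first = sorted(features, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in risk_first])
# ['real-time sync', 'plugin API', 'report export', 'user settings']
```

The design choice: when releases are expensive, you want architectural surprises at the start of the project, not after you've built everything else on top of them.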

Projects Incorporate Feedback Regardless of Lifecycle

I saw several problems with a stage-gate or any serial approach:

  • Not enough people had read all of Royce's paper, where he describes all the feedback loops. See Managing the Development of Large Software Systems. He makes several excellent points, including “do it twice.” He thought a waterfall approach was quite risky.
  • If you don't cancel projects, a serial or stage-gate lifecycle is quite risky when you need feedback during the project.
  • A stage-gate doesn't offer enough information to make good project portfolio decisions.

Stage-gate was the best we could do when the cost to release was high.

That was then.

Now, we have many inexpensive ways to release products. Why not incorporate feedback as part of the lifecycle? Whether an organization does so is a function of the corporate culture's approach to change.

That's why we have iterative, incremental, and iterative/incremental lifecycles. Agile approaches are iterative/incremental plus culture changes.

