The Case for and Against Estimates, Part 3

In Part 1, I discussed order-of-magnitude estimates and targets. In Part 2, I discussed how estimates can be misused. In this part, I'll discuss when estimation is useful. Here are several possibilities:

  • How big is this problem that we are trying to solve?
  • Where are the risks in this problem?
  • Is there something we can do to manage the risk and explain more about what we need to do?

Estimates can be useful when they help people understand the magnitude of the problem.

One of my previous Practical Product Owner students said, “We use story size to know when to swarm or mob on a story.” People tackle stories up to size 5 on their own. (They use the Fibonacci series for story sizes.) They might pair or swarm on stories starting at size 8. Even if they have a story of size 21 (or larger), they swarm on it and finish it in a couple of days, as opposed to splitting the story.
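The team's rule of thumb can be sketched as a tiny decision function. The thresholds come from the student's description above; the function itself (and its name) is my illustration, not the team's actual tooling:

```python
def collaboration_mode(story_points: int) -> str:
    """Map a Fibonacci story size to the team's way of working.

    The thresholds follow the team's rule: work solo on stories up
    to 5 points, pair or swarm starting at 8, and mob on anything
    21 or larger rather than splitting the story.
    """
    if story_points <= 5:
        return "solo"
    if story_points < 21:
        return "pair or swarm"
    return "mob until done"
```

The point of the rule is that story size drives collaboration, not schedule commitments.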

They use estimates to understand the size and complexity of the feature. (I would call their features “feature-sets,” but they like to call that one big thing a feature.)

You might not like that approach. I think it's a fine way of avoiding fights with the PO over splitting stories. It's also helpful to work together to solve a problem. Working together spreads knowledge throughout the team.

My experience with estimation is that it's easy for me to misjudge the magnitude of the work. In agile/lean, we manage this problem by estimating together, working together, or timeboxing in some way.

The first time we solve a particular problem, it takes longer. The first time I worked on a control system (embedded software), I had to learn how things worked together. Where did the software interact with the hardware? What were the general risks with this kind of a product? The first time I self-published a book, everything took longer. What were the steps I needed to finish, in what order?

I worked on many control systems as a developer. Once I understood the general risks, my estimates were better. They were not sufficiently accurate until I applied the rules of deliverable-based planning. What deliverables did I need to deliver? (I delivered something at least once a week, even if it was data from what I now know is a spike.) What inch-pebbles did I need to create that deliverable?

The more I broke the work down into deliverables, the better the estimate was. The smaller the chunks, the better my estimate was. The more I broke the problem down, the more I understood what I had to do and what the risks were.

One of the things I like about agile and lean is the insistence on small chunks of value. The smaller my chunk is, the more accurate my estimate is.

Estimates can help people understand risks.

You'll notice I talked a lot about risks in the above section. There are general project risks, such as what is driving the project? (See Manage It! or Predicting the Unpredictable, or a series I wrote a few years ago, Estimating the Unknown.) We optimize different work when we know what is driving the project. That's the project view.

We have possible risks in many deliverables. We have the general risks: people get sick, they need to talk to the duck, they multitask. But each deliverable has its own risks.

I've said before that software is learning and innovation. You may have done something like this project before, so you have domain expertise. But you have probably not done this exact new thing before.

When I estimate, I start thinking about what I need to do, how to solve this problem. Then, I start thinking about the problems I might encounter in solving those problems.

I can't get to the problems unless I have inch-pebbles. I am a big-picture person. I see the whole problem, possibly even the whole solution, and I skip some of the steps in my head. I estimate top-down as a start. Unless I create my inch-pebbles, I am likely to gloss over some of the risks because I start top-down.

You might not be like me. You might estimate bottom-up. You might see all the details. You might not miss any steps in solving the problem as you think about it. (I wonder about people like you: do you see the big picture at the beginning, or does it evolve for you?)

I have met some people who estimate inside out. They tell me they see part of the big picture and part of the small steps. They iterate on both parts until they see and can estimate the whole thing.

I have taught a number of estimation workshops. Most of my participants are top-down people. They see the result they want and then envision the steps to get there. I have met some small number who start bottom up. I have met two people who are inside-out. I don't know if that's a normal distribution, or just the people who participate in my workshops.

Estimates can help people understand possible first steps.

When people think about the first thing that can provide value, and they think about how to make that first thing small (either inch-pebbles or agile stories), they can more easily see what the first deliverable could be. They can discuss the deliverable progression (in agile with a product owner and in a more traditional life cycle with a project manager or a product manager).

I have found the discussion of deliverable progression very helpful. Many years ago, I was the lead developer for a gauge inspection system (machine vision on an assembly line). I asked the customer what he wanted to see first. “Can you see the gauge enough to give us some kind of an answer as to whether it's a good gauge?” was his answer.

Notice he said “enough,” not “a perfect inspection.” We did a proof of concept in a couple of days. In the lab, with the right lighting, we had an algorithm that worked well enough. You might think of this as a discovery project. Based on that deliverable, we got the contract for the rest of the project. If I remember correctly, it took us close to 6 months to deliver a final system.

For that project, I acted as a cross between a project manager and what we now call a product owner. We had release criteria for the project, so I knew where we were headed. I worked with the customer to define deliverables every two weeks, after showing a demo of what we had finished every two weeks. (This was staged delivery, not agile. We worked in week-long timeboxes with demos to the customer every two weeks.)

This was in the days before we had project scheduling software. I drew PERT diagrams for the customer, showing date ranges and expected deliverables.

A few years ago, I coached a project manager. She was the Queen of the Gantt. She could make the Gantt chart do anything. I was in awe of her.

However, her projects were always late—by many months. She would work with a team. They would think it was a six-month project. She would put tasks into the Gantt chart that were two, three, and four weeks long. That's when I understood the problem of the estimation unit: “If you measure in weeks, you'll be off by weeks.” Her people were top-down thinkers, as I am. They glossed over some of the steps they needed to make the product work.

I explained how to do deliverable-based planning with yellow stickies. The people could generate their tasks and see their intersections and what they had to deliver. She and the team realized they didn't have a six-month project. They had a project of at least a year, and that was if the requirements didn't change.

When they started thinking about estimating the bits, as opposed to a gross estimate and applying a fudge factor, they realized they had to spend much more time on estimating and that their estimate would be useful. For them, the estimation time was the exploration time. (Yes, I had suggested they do spikes instead. They didn't like that idea. Every project has its own culture.)

How do your estimates help you?

Maybe your estimates help you in some specific way that I haven't mentioned. If so, great.

The problem I have with using estimates is that they are quite difficult to get right. See Pawel Brodzinski's post, Estimation? It’s Complicated…

In Predicting the Unpredictable, I have a chart of how my estimates work. See the Power Law Distribution: Example for Estimation. (In that book, I also have plenty of advice about how to get reasonable estimates and what to do if your estimate is wrong.)

In my measurements with my clients over time, I no longer buy the cone of estimation. I can't make it work for agile or incremental approaches. In my experience, my estimates are either off by hundreds of percent, or I am very close. We discover how much I am off when the customer sees the first deliverable. (In Leprechauns of Software Engineering (also on Leanpub), Bossavit says the cone of estimation was never correct.)

For me, deliverables are key to understanding the estimate. If, by estimating, you learn about more deliverables, maybe your estimation time is useful.

Since I use agile and lean, estimating time for me is not necessarily useful. It's much more important to get a ranked backlog, learn if I have constraints on the project, and deliver. When I deliver, we discover: changes in requirements, that we have done enough, something. My delivery incorporates feedback. The more feedback I get, the more I can help my customer understand what is actually required and what is not required. (I often work on projects where requirements change. Maybe you don't.)

I realized that I need a part 4 that specifically discusses noestimates and how you decide whether you want to use noestimates. Enough for this post.

And, the book: Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule.

27 thoughts on “The Case for and Against Estimates, Part 3”

  1. Hi!

    In my view, I think it’s all very simple actually:
    How would you make a decision without estimating the size of the impacts of that decision?

    Besides estimating, there is only one alternative as I can see it: going by chance. And I’m not convinced or comfortable with that going by chance is any better.

    There’s a gray area between knowing and guessing, that space is called: estimating. And the thresholds are probably vague.

    You can’t even rank (e.g. a backlog) without estimating the size of the importance (benefits, costs and risks etc.) of the items in it.

    An estimate is what we know at the moment. An estimate should never be used as something to carve in stone. When we learn more, we update the estimate if we feel it’s needed.

    Kind regards,

    1. Hi Henrik,

      You might not have to estimate items in a backlog if you know the cost of delay for any of them. Sometimes, a gross estimate might be useful also, but cost of delay might be more valuable than a gross estimate.

      I’m working on part 4 which talks about noestimates, which is where you consider how to deliver value often enough that you don’t need to estimate.

      I like what you said about not carving estimates in stone. I agree with you. I have one question (which you don’t have to answer) which is: How large are the projects you estimate? If you’re estimating features/feature sets, the estimates are smaller. You have a better chance of getting the estimate right, which might help guide the project.

      The larger the project, the more difficult it is to estimate well. In my experience, I always have to reestimate projects, if I want to keep the estimate accurate. Sometimes, I do. More often, I don’t.

  2. Glen B Alleman

    How did you arrive at the Cost of Delay? Was that a deterministic number with no variance? Did the CoD have any uncertainty?
    CoD is an estimating process.

    Most of the other “issues” with estimates are bad management. Don't blame a process – estimates – for a bad outcome without first confirming the root cause of that bad outcome and confirming that the corrective action addresses that cause.

    This is a fundamental flaw in the #NoEstimates conjecture, starting on Day One with “estimates are the smell of dysfunction.” That cannot be technically true without performing a Root Cause Analysis of that dysfunction to confirm that estimating is the root cause.

    1. I use the back of the napkin approach to CoD. CoD is an estimate of value, not cost. For me, that changes the conversation.

      When I talk about value, we stop with the conversations about, “Why does this feature take so long?” That conversation is part of bad management.

      As for the #noestimates ideas, I don’t buy the idea that estimates are the smell of dysfunction. I do agree with the #noestimates folks that you might not need estimates if you show value consistently over time. That will be in part 4.

      1. From Derek Hunter’s Blog

        If you’re a fan of Shark Tank, you’ll notice something about Mr. Wonderful. He keeps the conversation focused on the money. When will he get his money back? How many multiples of his investment should he expect to get back? Other investors (and many of our stakeholders) don’t focus enough on the money. Particularly, what is the cost of delaying the implementation of one feature over another.

        The Cost of Delay is the COST of delaying, and the impact of that delay on the expected Value produced in exchange for that Cost.

        CoD started in the construction business.

        The CoD advocates in Agile seem to have missed the foundation of CoD.

        Don (Reinertsen) has something to say here as well.

        1. I don’t understand what you mean with these references. As far as I can tell, I am on the same page as Reinertsen with CoD.

          I’m suggesting that estimating project cost (or date) is insufficient. I certainly have data from my clients that they looked at project estimates, did not realize which projects were blocking other projects and missed the CoD calculation. In both Diving for Hidden Treasures and Manage Your Project Portfolio, there are sections about decision time and its effect on CoD.

          I have called this decision-time “Management debt.”

          1. We're in different domains. A notion like “estimates help people understand first steps” would be replaced with “estimates help people make decisions every month (every week, every day) for the life of the project.”
            Monte Carlo simulation of the Estimate to Complete and Estimate at Completion is mandatory, minimally monthly, in our domain, and the same holds for software-intensive system of systems programs.
            The notion of “speed” does provide value in some domains. In many others, “staying on plan” is the critical success factor. Defining the project's “framing assumption” first (time to market, or time to planned delivery) is critical.
            Without stating that “framing assumption” first, Reinertsen's work has little value.
            In our SISoS there are few opportunities to “buy down the delay.”
            The Federal IT budget is $81.6B (yes, Billion with a B).
            So my question, and this is always my question, is: “Where is CoD applicable?” Then assess whether CoD can actually increase the probability of success.
            This avoids treating the symptom in place of correcting the root cause.

          2. Glen, I have found CoD useful when we want to compare two projects that might appear to have the same cost. They might well have different value, when you consider what the delay of one or the other will cost the organization.

            I have also used CoD when POs want to compare features, especially one feature in one feature set against another feature in a different feature set.

            CoD is an estimate of value, not cost.

            CoD is not for everyone. (I keep saying my ideas are not for everyone, either. Neither are yours. I agree with you, domain matters.)

            I am assuming money, the means to keep paying people, is of value to the organization. If so, we want to increase the revenue (funding) we get. We can do that by delivering what is most valuable to our customer/client now.

            If you know you have funding for some extended time period, maybe you don’t need to look at CoD. In my world, that assumption is almost never true.

            Oh, one more thing: In all my advice on estimation, I tell people to look at the project as a system and decide what is driving this project. If you want to see the early blog posts about this, see Estimating the Unknown, starting with Part 1. I explain this more in Predicting the Unpredictable.

          3. Glen B Alleman

            While consistent funding is many times an issue on large procurements – FY funding authorization – our contract based development efforts are “funded” through the obligatory language of the provider.

            Are you suggesting to Peter that CoD replaces the Product backlog estimating processes for prioritization of Features?

  3. I like the post, but I do have a puzzle. What is the data source for the power law distribution? My data doesn’t show that. What I see is that, on a relative basis, estimates of small things suck just as much as, and maybe more than, estimates of large things.

      1. Johanna,
        I looked at Troy’s slides but I don’t see the connection to the Power Law curve that you show. What his slides show are probability distributions of cycle time. They don’t show anything to indicate that probabilities are higher for small items and lower for large items. I worry that this suffers from some of the same issues as the cone of uncertainty.

        It turns out that the cone of uncertainty got a couple of things right and quite a few things wrong. I speculate that the things it got wrong stuck because enough people wanted them to be right. We wanted the cone to be right because we wanted to see uncertainty decrease significantly. It made us feel a little better because we could say, “Go away and come back later when I can give you better estimates.” The problem, which I documented in an IEEE Software article, “Uncertainty Surrounding the Cone of Uncertainty,” is that the relative uncertainty of the work which remains does not decrease with time.

        This article was the impetus for Laurent Bossavit to do his archeology work on the cone. What he found was that the cone was initially the subjective opinion of Barry Boehm. It had subsequently been declared validated by another article which contained no data whatsoever. The only published data “validating” the cone was one relatively small project that was implemented separately by 7 teams at the University of Southern California, and one additional US Air Force project that was bid on by 5 companies (we don’t know if it was actually developed). And in both cases the data was collected only at one time, so we really have no published data showing a decrease in uncertainty. What the cone got right is that there is a decrease in absolute uncertainty, but that is obvious, as there should not be any uncertainty in the work that we have already completed.

        It also got something else mostly correct. Laurent incorrectly claims that the cone assumes a Gaussian distribution (I believe Laurent meant this relative to the implied symmetry in the cone, when in practice we observe significant estimation optimism). The cone actually implies a lognormal distribution (Gaussian on a log scale). A lognormal distribution is in the same family as a Weibull distribution. My data and others’ have shown lognormal distributions as well, although strongly biased toward optimism. Troy has strongly argued for Weibull. I’m not sure which it is, but I’m pretty sure it doesn’t matter that much, as they will give similar answers given the typically large ranges of uncertainty. It also may depend on what you measure. I was measuring total project duration, while Troy typically measures individual cycle times.

        So that was a long rant about the cone of uncertainty. To bring us back to the question of “Are we better at estimating small things than large things?”: I’ve heard this claim often from many in the agile community, but I’ve not seen the data to back it up. In fact, what my data shows (“To Estimate or #NoEstimates, that is the Question”) is that, on a relative basis, we are pretty much just as bad at estimating small things as we are at big things, and in fact we may actually be worse. Like with the cone, I think this idea has persevered because it gives us hope. I do think it would be fair to say that we are more comfortable estimating small things than large things. That has to do with the consequences. On an absolute scale, if I am wrong with my small estimate it is not such a big deal. If I estimate 1 day and it takes 4, then I lost 3 days. I feel I can make that up somewhere. But if I estimate 1 month and it takes 4 months, then I lost 3 months. That is not very comfortable. Anyway, my data says otherwise, so if there is data supporting the Power Law that you present, I would love to see it.
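Todd's point about right-skewed distributions can be illustrated with a small stdlib sketch. The lognormal parameters below are invented for illustration only, not Todd's or Troy's real data; the sketch just shows that when actual-to-estimate ratios follow a lognormal, the mean sits above the median and the high-to-low spread is wide:

```python
import random
import statistics

random.seed(7)

# Sample hypothetical actual/estimate ratios from a lognormal
# distribution. Parameters are invented: a median ratio above 1.0
# models estimation optimism (actuals tend to exceed estimates).
ratios = sorted(random.lognormvariate(0.3, 0.6) for _ in range(10_000))

median = statistics.median(ratios)
mean = statistics.fmean(ratios)        # pulled up by the long right tail
p10 = ratios[len(ratios) // 10]        # 10th percentile ratio
p90 = ratios[9 * len(ratios) // 10]    # 90th percentile ratio
spread = p90 / p10                     # high-to-low range of the middle 80%
```

Whether the true shape is lognormal or Weibull, the skew itself is what makes "how far off are we?" an asymmetric question.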

        1. Hi Todd, This might be a better link to what Troy is saying: The Economic Impact of Software Development Process Choice – Cycle-time Analysis and Monte Carlo Simulation Results. In any case, I don’t think that’s the point of your comment.

          No, I do not have “official” data to support my assertion that we are better at estimating smaller things close in, rather than larger things farther out. I have empirical data from many years of projects, but not the kind of data we would expect from a real academic study.

          Here’s what I have seen:
          – Project managers (and others) adding buffer time to large estimates. In many cases, those buffers were insufficient. Why? Because the overall work changed from the start of the project until the end/time when the estimate mattered. I’ll get back to the estimation unit in a bit.
          – Uncertainty as to what people actually wanted even if they did want an estimate. I have been in meetings with senior management when I asked the question, “Do you want a demo version or a full releasable version? I can provide a demo earlier and a release later. Please tell me what you want.” Senior management could not answer that question. If I provided a demo-based date, they would have taken that for the entire release.

          In my experience, too many people asking for the estimate don’t know what they want. If we don’t deliver interim value (agile helps tremendously with this), they can change their minds about what they want before we deliver anything. My experience again: The people asking for the estimates don’t realize they are doing this.

          When I have managed projects (mine inside orgs, mine now, and as a consultant/coach), I have noticed tremendous variation when people estimated in big chunks. That’s why I have the link to Estimation Units Predict Schedule Slippage. For me, the act of breaking the work into smaller chunks makes it more possible to create an accurate estimate, and to realize what we don’t know about this work. Sometimes, the breaking down/apart helps us realize we are working on several chunks, not one big thing.

          I suspect domain might matter here. If I remember, you work on hardware/software systems. My experience with those systems is that I had a difficult time separating/breaking apart functionality because the hardware had to work with the software, and that the system as a whole changed every single darn time we integrated anything. (That’s when I created release criteria in the form of scenarios, because we might want to release “early” so we could manage the risk that something farther down the list wasn’t working and we couldn’t tell when it might work.)

          I estimate my work better when it is smaller and close to the time I estimate it. I can tell you how long it will take me to write a monthly column, or prepare a two-hour long workshop. I have ranges for a book chapter, multi-day workshop or a keynote address. The ranges are still small enough that I can guess only okay, and I am often off.

          My experience as a coach/consultant shows me the same data. The only way I have had great results is to define the deliverables, make the deliverables as frequent as possible, and refine the estimate after each deliverable. That works only okay in hardware/software systems before the hardware goes to fab. Once we have stable hardware, we are more able to estimate the software.

          That’s one of the reasons I like targets so much. The target becomes a timebox. It forces us to make decisions about what we will and will not do for now. (Not forever.)

          So, no proof. Empirical evidence in my experience, but no proof. Laurent doesn’t necessarily agree with me either about the Power Law distribution. His comment was about the “bull” in the name 🙂

          I agree with you that estimation is quite difficult—well, accurate estimation is difficult. I do not have a silver bullet. I can provide explanations of what has worked for me over several contexts.

          1. Thanks Johanna. Yes, I’ve read Troy’s paper. It’s very good but I don’t see a claim to improved estimation accuracy for small items.

            I agree with you that there may be very good reasons to break things into smaller pieces, I just don’t buy the argument that estimation is more accurate. My data from software projects from multiple studies shows a fairly consistent ratio of 4 to 1 between the high and the low range.

          2. Todd, it’s good that you have data for your context. For me, that is the most useful piece of information.

  4. “You might not have to estimate items in a backlog if you know the cost of delay for any of them.”
    Exactly where do you suppose a number for “cost of delay” of a given item comes from?

    1. Peter, I have used CoD for an entire release. Some of my clients use it for feature sets.

      Yes, I use a back of the napkin “estimate.” It takes much less time to calculate than a project estimate.

      I’ll be summarizing in post 5, especially about when to use which approach.

  5. “I have also used CoD when POs want to compare features, especially one feature in one feature set against another feature in a different feature set.

    CoD is an estimate of value, not cost.”

    Johanna, I have to be blunt: there is no CoD calculation without an estimate of duration at the very least. And duration is cost. One can wave one’s hands and insist that CoD is an estimate of value, not cost, but getting to that estimate of an item’s value includes (as estimates of value always should, by the way) factoring in the cost of that item. I think CoD is great (quantifying dollar impact of a decision to an org is always a good idea), but it’s not a magical alternative to estimating by any stretch of the imagination.

    a) Every CoD article that describes its calculation includes a column for Duration (which is, of course, an ESTIMATE). See Joshua Arnold, for example: “When using CD3, the priority order of features or projects is determined by dividing the estimated Cost of Delay by some estimate of duration: the higher the resulting score, the higher the priority.”
    b) In discussing the actual calculation, it’s clear that it’s wholly dependent on the ESTIMATES for duration. E.g., “We then move on to developing Feature B. For the 1 week this takes us to deliver we incur the Cost of Delay of Features B and C”.
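The CD3 arithmetic Peter quotes can be sketched in a few lines. The feature names and numbers below are invented for illustration, and `cd3_rank` is a hypothetical helper, not part of any CoD tool; the point is that both inputs to the division are estimates:

```python
def cd3_rank(features):
    """Rank features by CD3: estimated Cost of Delay per week
    divided by estimated duration. Both inputs are estimates --
    CD3 cannot be computed without a duration estimate."""
    return sorted(features, key=lambda f: f[1] / f[2], reverse=True)

# Hypothetical backlog: (name, cost of delay per week, duration in weeks).
backlog = [
    ("Feature A", 10_000, 4),  # CD3 = 2,500 per week of duration
    ("Feature B", 6_000, 1),   # CD3 = 6,000
    ("Feature C", 8_000, 2),   # CD3 = 4,000
]
ranked = cd3_rank(backlog)  # B first, then C, then A
```

Note that the cheapest-to-delay feature (B) ranks first precisely because of its short estimated duration, which is Peter's point.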

    In short, if you want to use CoD to make decisions, I will stand and applaud you all day long, because you are quantifying the impact of your decision. But it is anything BUT an alternative to estimating, because CoD simply can’t be exercised without estimating.

    Given all that, I don’t understand how you can possibly state, “You might not have to estimate items in a backlog if you know the cost of delay for any of them.” It’s a contradiction in terms. It reminds me of the story I always tell about the small child who observed brightly, “We don’t need the farmers anymore; we just go to the grocery store instead!”

    1. Yes. In fact, Joshua has publicly stated that he’s not a fan of NE for the reason that CoD is dependent on estimates (basically his words, though I don’t remember verbatim).

    2. Hi Peter, you are correct if you use the formal CoD calculation. I don’t.

      I guess I did not point you to my back-of-the-napkin calculation for CoD. See Cost of Delay, Why You Should Care, Part 6.

      I once had an interesting conversation with one of my managers. He told me we needed to finish a specific project in three weeks. I had a gross estimate of about eight months, give or take a couple of months. He told me, “We will lose very-important-customer if we don’t finish it in three weeks.” The CoD was clear to me, and even more to him.

      I asked more questions. The customer needed a couple of features now and could wait for more. We did that.

      We didn’t need a ton of estimation for the problem. It was clear what the value of the customer was to us. The CoD conversation helped everyone decide that:
      – we could do less (not an entire project just for that customer)
      – we could deliver that chunk sooner
      – we could assess which customers were more important and less important and work according to value.
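The back-of-the-napkin arithmetic behind a decision like that can be sketched as follows. All numbers are invented, and `napkin_cod` is a hypothetical helper, not the formal CoD calculation; the real conversation ran on the manager's knowledge of the customer, not a spreadsheet:

```python
def napkin_cod(value_per_week: float, weeks_of_delay: float) -> float:
    """Back-of-the-napkin cost of delay: the value we forgo each
    week, times how long we delay. No discounting, no precision --
    just enough to start a conversation."""
    return value_per_week * weeks_of_delay

# Hypothetical figures for the three-weeks-vs-eight-months story:
# ship the small subset in 3 weeks, or the full project in ~32 weeks.
subset_delay_cost = napkin_cod(value_per_week=10_000, weeks_of_delay=3)
full_delay_cost = napkin_cod(value_per_week=10_000, weeks_of_delay=32)
```

Even with made-up numbers, the order-of-magnitude gap between the two options is what moves the conversation from "why does this take so long?" to "what can we deliver now?"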

      I use CoD as an entry point into a conversation.

      I have no objection to great estimates. I have no objection to rigorous calculations of CoD. I have not needed to do either. That’s because I find a way to understand what people want and deliver that, or suggest alternatives that allow us to somehow create a win-win. Sometimes, I call the customer. Sometimes, we do small projects and release on the way to bigger projects. Sometimes, we take a date as a target and back-plan from that.

      I don’t care what we do, as long as we don’t spend time estimating and estimating and estimating and estimating when we could deliver something.

      As I said in today’s post, my approaches may not work for you. I’m okay with that.

      1. Glen B Alleman

        How much do you need to estimate?

        This is called “Value at Risk.”

        This is a standard governance process. Invest sufficiently in the estimate to cover the risk.

        The notion of repeated estimating, estimating, estimating, is not only bad management, it is bad governance.

        Perhaps a visit to the risk and estimating sections of ITIL V3.1, COBIT, OGC P3M3, and ISO 12207 might provide some guidance for assessing Value at Risk for your customer’s money.

        Now, it may very well be that your customers aren’t guided by governance frameworks, but if you are ever engaged by one, the need to “adequately estimate to protect the investment” will not be unanticipated.

        1. Glen, one of my clients could not understand what to do with their project portfolio. They understood the ideas behind selecting projects and flowing work through teams, but they could not execute. Yes, they asked the teams to constantly estimate, again and again.

          The delay they incurred from management indecision was tremendous.

          When I asked them which projects they needed to do—as conversation—they could decide. They understood cost of project. They understood value of project. They did not understand cost of delay. When I used words and my back of the napkin explanation, they understood the cost of their delay. They were then able to make decisions much faster.

          Thank you for the link to value at risk.

  6. Johanna, I’m hardly suggesting that everyone’s approach needs to be identical. But based on the above description, let’s not call what you’re describing above any sort of real “Cost of Delay” analysis. Rather, you’re simply identifying customer constraints and responding by paring down what you’re working on in order to fit those constraints. It’s laudable, and I’m sure you’re very effective at getting to what the customer needs, but it’s not a Cost of Delay analysis per se. And all sorts of estimates (be they implicit or explicit) lie behind that exercise, of course. You may not recognize that you’re taking duration into account when you pick the two or three features for the three weeks, but you are.

    I feel that it’s very much a disservice to declare that “I use Cost of Delay” when you’re just looking at the customer needs, as you describe, and picking the most important (and doable) to fit a deadline. That’s just using a highfalutin term to obtain a kind of legitimacy to what really is little more than going on gut. It’s a marketing use of the term, not science. And words matter.

    Finally, your preference that one “not spend time estimating and estimating and estimating and estimating when we could deliver something” is also admirable, but I’d point out that it’s rather an incendiary way to put it; i.e., it’s an extreme case, a straw man. No one, literally no one, advocates spending time “estimating and estimating and estimating and estimating” vs working on delivering. The trick, of course, is always to find a suitable balance, based on value at risk and bang-for-the-buck. And there I think we’re probably in agreement at core.

    1. The actual Cost of Delay

      Disclosure: we’re a Rally user on a large ($400M) Fed Civ program and a Team Foundation Server user on another large Fed Civ program (Law Enforcement, in the news), applying these types of probabilistic decision-making processes.

      So, as you say, this may not be appropriate for your domain. But without a definition of the domain up front for the applicability of the suggestions, there is likely a large audience looking for advice outside your specific examples – a broader set of principles that can be tailored to specific processes and practices.

      BTW the Monte Carlo simulations in Excel can be found for free on the web.
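As a rough illustration of the kind of Monte Carlo simulation Glen describes, here is a stdlib-Python sketch of an estimate-to-complete. All task figures are invented; a real simulation would draw on the project's own history:

```python
import random

random.seed(42)

# Each remaining task gets (optimistic, likely, pessimistic) weeks.
# All figures are invented for illustration.
tasks = [
    (2, 3, 6),
    (1, 2, 5),
    (3, 4, 9),
    (2, 3, 7),
]

def simulate_total(tasks):
    """One Monte Carlo trial: sample each task from a triangular
    distribution and sum, giving one estimate-to-complete."""
    return sum(random.triangular(low, high, mode)
               for low, mode, high in tasks)

# Run many trials and read confidence levels off the percentiles.
totals = sorted(simulate_total(tasks) for _ in range(10_000))
p50 = totals[len(totals) // 2]          # 50% confidence completion
p85 = totals[int(0.85 * len(totals))]   # 85% confidence completion
```

Reporting a percentile range (p50 to p85) rather than a single number is what makes the estimate a distribution instead of a commitment.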

    2. Peter, I do think we agree more than we disagree.

      Anything that prevents a flow of work through teams is a cost of delay. I just replied to Glen’s comment re estimating again and again.

      In addition, I have seen a tremendous CoD due to multitasking, due to experts, due to waiting for other people to finish their work. And, yes, due to the never-ending estimation requests.

      You might want to calculate the CoD for a given project or program. I have not found the need to do so.

      1. Glen B Alleman

        You’re treating symptoms, not the cause of your observations. Without fixing the cause of those observed dysfunctions, they will continue to occur.

        Much of the approach to dysfunction I see in agile, and especially in the #NoEstimates community, pursues fixes for the symptoms, not the cause. This is another example.

        Until the cause is corrected, you will continue to observe the dysfunctions you describe.

        The Apollo method described in the link may be a start for removing the dysfunction. That is the method we use in our practice on SISoS in a wide range of domains.

        1. I think I will start another post with the problem of people not understanding root causes and not wanting to solve them. I agree, that is part of the problem.
