"But It's Just a Small Change"

  I had the pleasure of speaking with two different colleagues today, both with the same dilemma. They are near the end of their projects. They don’t quite have enough time for one round of final testing–but if they’re lucky and the stars align, and they don’t find too many problems, they can still (maybe) test what they need to test before the desired release date.

  But no, the stars don’t align. In the first week of testing, they find a nuisance of a defect. It only occurs once, during installation, and they can work around it with release notes–but they’re under pressure to make the change. They each asked me what I would do. After asking a few questions to make sure the problem only occurs when installing, and that they can make big red stickers to explain to the customers what to do, I agreed with the PMs: don’t make the change. The risk is just too high.

  The reason the projects don’t quite have enough time for testing is that they’ve encountered trouble all the way through the projects. The builds take too long. The developers didn’t integrate testing as they developed. They implemented by architecture, and couldn’t manage the changes in requirements, so the architecture doesn’t quite fit what the customers want–but they shoehorned the features in anyway. The project team hasn’t met one deadline yet.

  If you’re in this position with your project team, ask yourself and the team this question: What did you see or hear that leads you to believe this would work? If someone has data, “Oh, we fixed the build and we can...

Implement the Most Valuable Features First

  Scott points out in Software Product Delivery – 20 Rules? that you should do the riskiest part of the project first. (He explains that you modify that given what’s most important.) I’d add a further refinement: what’s most important had better provide the most value. If it doesn’t, do the most valuable parts first. You might not have to do the riskiest parts.

  I saw this in action yesterday. I taught my pragmatic project management workshop, where part of the workshop is to do a project. The project takes somewhere between one and two hours to complete, and has a tricky architectural part. The idea is that PMs need to be able to see if the architecture is working. One team got stuck on an architecture that cannot work–the resulting product is not stable under any circumstances. They came to me for more materials. I’d already given them more materials than they needed. With my customer hat on, I said, “If I give you more materials, how do I know I’m not throwing good money after bad? It doesn’t look to me as if this architecture will work.” They were perturbed, to say the least.

  When we debriefed, they were still thinking that if I’d given them more materials, they would have succeeded. They were focused on the riskiest part of the project. But if they’d thought about what was most valuable, I bet they would have developed another architecture–and they would have been successful. Sometimes, release criteria can help you articulate what’s most valuable. Sometimes you have to ask someone. But starting with the riskiest part of the...
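  To make the value-first idea concrete, here’s a toy sketch in Python. The feature names and the value/risk scores are invented for illustration–the only point is that ordering the backlog by value can push the riskiest item far enough down the list that you may never need to build it.

```python
"""Toy illustration: order the backlog by value delivered, not by risk.
All feature names and numbers below are made up."""

features = [
    # (name, value to the customer, technical risk) -- invented scores
    ("tricky distributed cache", 3, 9),
    ("export to PDF",            8, 2),
    ("single sign-on",           9, 5),
    ("animated splash screen",   1, 1),
]

# Risk-first ordering: start with whatever scares the team most.
by_risk = sorted(features, key=lambda f: f[2], reverse=True)

# Value-first ordering: start with what the customer gets the most from.
# You may discover you never need to build the riskiest item at all.
by_value = sorted(features, key=lambda f: f[1], reverse=True)

print("risk-first: ", [name for name, _, _ in by_risk])
print("value-first:", [name for name, _, _ in by_value])
```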

Unanticipated Events Screw Up Schedules

  So after I posted the Probabilistic Scheduling post, I was working merrily away. I had made some small progress on the book, but was still finishing up other things. Finally, Wednesday I had cleared the entire day to work on the book. I was having trouble with one chapter, so I decided to make tea and do some timed writing.

  But I encountered an unanticipated event. While picking up my electric teakettle, I fell down. Have no idea why, just fell. Normally, an ankle or knee gives out, and I fall sideways. I know to relax and go with the flow so I don’t damage joints worse than they are. But this time I fell straight down. Did a Greg Louganis on the table in my office.

  Head wounds hurt. Almost as much as childbirth. And when they bleed, they bleed a lot. Called the doctor, breathing through the pain, was told, “Go to the ER.” Ok, got in the car, drove myself to the emergency room, waited several hours for my 4 stitches, and returned home. I knew I was not myself; I didn’t even read in the ER. (I read all the time. Everywhere. Unless I’m with other people. But breakfast doesn’t count.)

  But by the time I returned home, I was in much better shape than when I left. The local painkiller was still working. I was no longer bleeding. And my headache was gone because of the local. I was still shook up, but certainly ok. So, Wednesday was a fairly lost cause, at least for writing. I made more progress yesterday, and hope...

Reducing Infrastructure Risk

  It’s been quite the Monday so far. My office toilet started spewing water, a cabinet door fell off one of the cabinets in the kitchen, and I’m trying to back up and duplicate my hard disk because both latches on my PowerBook broke at the Agile conference and I need to send my computer off to be fixed. And of course, I have deadlines for presentations and articles, and the PM book I’m trying to write.

  I have a theory about this string of events. I just came off a crazy amount of travel: 9 out of the last 12 weeks, I’ve been out of town. About halfway through, I realized that was too many weeks. Oh well. But the problem with that much travel is that I don’t use things (like the toilet or light bulbs) in my office. Entropy happens.

  Entropy happens on projects too, especially if we don’t pay attention to pieces of the project or use certain tools/processes infrequently. One of my clients didn’t realize they’d broken the build in a way that required three weeks to fix until after six weeks had passed, because they only built once every couple of months. One client didn’t realize they could no longer generate the documentation until they had to produce a one-off for a quick fix for a customer. Producing the documentation took them longer than the emergency patch.

  Short iterations help. If you’re starting a project, a Hudson Bay Start works. On a project, just asking the questions about the infrastructure can help people see if they should try something. In the meantime,...
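  One cheap way to fight that kind of entropy is to exercise the infrastructure on a schedule, whether or not anyone needs the output that day. Here’s a minimal sketch, assuming a make-based build with a docs target–the commands are placeholders for whatever your project actually uses–that you could run nightly so a broken build or broken doc generation shows up within a day instead of after six weeks.

```python
#!/usr/bin/env python3
"""Nightly infrastructure smoke check: exercise the build and the docs
generation even when nobody needs them, so breakage shows up in a day,
not after six weeks.  The commands below are placeholders -- substitute
whatever your project actually uses."""

import subprocess
import sys
from datetime import datetime

# Hypothetical commands; swap in your real build and docs steps.
CHECKS = {
    "build": ["make", "all"],
    "docs": ["make", "docs"],
}

def run_checks() -> int:
    """Run each check, report pass/fail and elapsed time, return failure count."""
    failures = 0
    for name, command in CHECKS.items():
        started = datetime.now()
        result = subprocess.run(command, capture_output=True, text=True)
        elapsed = (datetime.now() - started).total_seconds()
        if result.returncode == 0:
            print(f"[OK]   {name} ({elapsed:.0f}s)")
        else:
            failures += 1
            print(f"[FAIL] {name} ({elapsed:.0f}s)")
            print(result.stderr.strip()[-500:])  # last bit of the error output
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_checks() else 0)
```

  Run it from cron or whatever scheduler you already have; the particular script matters far less than the regular exercise.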

Do Engineers Use Their Software?

  My friend and colleague, Stever Robbins, has started a blog, and one of his early posts is Are engineers living on another planet? Don’t they use their software? Unfortunately, not always. It takes self-discipline and the desire to look for problems for people to create systems that allow them to use their own software. If a project team only builds once a week, they’re not going to use their software. If they fix a bunch of defects at one time, the testers can’t install one fix at a time and test the pieces in isolation. Instead, the testers need to install the whole darn thing and test everything together.

  The current phrase for using your own software under development is “eating your own dog food.” (Anyone know the origin of that phrase? I’m fairly sure I was using it in the ’80s, before Microsoft popularized it.) It’s not easy to use the product under development. And, it’s a great...

Degrading Gracefully is an Oxymoron

  I changed ISPs last Friday. At some point Friday, my ISP bounced my email with a strange (to me) message. This is the same ISP that had problems just a few months ago, so I was done. I need email up virtually 100% of the time. And if I can’t receive email, I need my ISP to collect it, not lose or return it before delivery to me.

  I’m not privy to how my old ISP tested changes to the mail queue. But I can imagine the conversations people had about the changes and how they might test them. If they were anything like some of the systems I’ve worked on in the last 25+ years, there were comments like this:

  “We’ll wait till the system starts to degrade and then we’ll intervene manually.”

  “We’ve tried simulating the system, and we just can’t tell enough. We need to put it in place and respond to the degradation.”

  “We’ve measured and monitored the system; we won’t have problems.”

  “We’ve architected the system to degrade gracefully. We’ll watch for that and respond.”

  I have yet to see a system degrade gracefully. I’ve seen systems show small warning signs before degradation–signs that were sometimes too small to notice as the first instance of degradation. But we, the human beings who were supposed to respond to those signs, more often than not completely missed them. It was too easy to talk ourselves into thinking the warning signs weren’t really signs. (BTW, this works for the human body, too.)

  If you’d like to know how your system degrades and when, you gotta test...
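  Testing for degradation doesn’t have to start big. Here’s a minimal sketch of the idea: step up the load against an endpoint and record how latency and errors change at each step, so you find the knee in the curve on purpose instead of waiting for production to find it for you. The URL, the load levels, and the request counts are invented–substitute the real system and the real measurements you care about.

```python
#!/usr/bin/env python3
"""Tiny load-ramp probe: increase concurrency against one endpoint and
record how latency (and errors) change at each step, so you see where
degradation begins.  The URL and load levels are stand-ins."""

import time
import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median

URL = "http://localhost:8080/health"   # hypothetical endpoint
RAMP = [1, 5, 10, 25, 50]              # concurrent workers per step
REQUESTS_PER_STEP = 100

def one_request(url):
    """Return latency in seconds, or None if the request failed or timed out."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10):
            return time.perf_counter() - start
    except (urllib.error.URLError, TimeoutError):
        return None

for workers in RAMP:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(one_request, [URL] * REQUESTS_PER_STEP))
    latencies = [r for r in results if r is not None]
    errors = len(results) - len(latencies)
    p50 = median(latencies) if latencies else float("nan")
    worst = max(latencies) if latencies else float("nan")
    print(f"{workers:3d} workers: median {p50 * 1000:6.1f} ms, "
          f"worst {worst * 1000:6.1f} ms, errors {errors}")
```

  The numbers it prints are only a first-order picture, but you’ve now watched the system move from comfortable to struggling deliberately, instead of hoping someone notices the small warning signs later.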