I did decide to talk about how we got started on 100% utilization and what to do about it. I didn’t say everything I wanted to say, so here is what I wish I’d said:
How 100% Utilization Got Started
Back in the early days of computing, machines were orders of magnitude more expensive than programmers. In the ’70s, when I started, companies could pay highly experienced programmers about $50,000 per year. You could pay those of us just out of school less than $15,000 per year, and we thought we were making huge sums of money. (We were.) In contrast, companies either rented machines for many multiples of tens of thousands of dollars per year or bought them for millions. You can see that salaries and machine costs were not even close to the same scale.
When computers were that expensive, we utilized every second of machine time. We signed up for computer time. We desk-checked our work. We held design reviews and code reviews. We received minutes of computer time; yes, our jobs were often restricted to a minute of CPU time. If you wanted more time, you signed up for after-hours time, such as 2am-4am.
Realize that computer time was not the only expensive part of computing. Memory was expensive. Back in those old days, we had 256 bytes of memory and programmed in assembly language. We had one page of code at a time. If you had a routine that was longer than one page, you branched at the end of the page to another page, which you had to swap in. (Yes, often by hand. And, no, I am not nostalgic for the old days at all!)
Minicomputers helped bring the scales of programmer pay and computer price closer in the late ’70s and the ’80s. But it wasn’t until minicomputer prices really came down and PCs started to dominate the market that a developer became so much more expensive than a computer. By then, many people thought it was cheaper for a developer to spend time one-on-one with the computer, not in design reviews or code reviews, or discussing the architecture with others.
In the ’90s, even as the prices of computers, disks, and memory fell, and as programmers and testers became more expensive, it was clear to some of us that developing a product was more collaborative than just a developer alone with his or her computer. That’s why the SEI gained such traction during the ’90s. Not because people liked heavyweight processes, but because, especially with a serial lifecycle, you had to do something to make system development more successful. And many managers were stuck in 100% utilization thinking. Remember, it hadn’t been that long since 100% utilization meant something significant.
Now, let’s go back to what it means when a computer is fully utilized and it’s a single-process machine. It’s a problem only if the program is I/O-bound, memory-bound, or CPU-bound. That is, if the program can’t get data in or out fast enough, if the program has to swap data or code in or out, or if the CPU can’t respond to other interrupts, such as to read the next card from the card reader. If it’s a single-user machine, running one program, maybe you can make allowances for this.
But if it’s a multi-process machine and it’s fully utilized, you have thrashing and the potential for gridlock. Think of a highway at rush hour, with no one moving. That’s a highway at 100% utilization. We don’t want highways at 100% utilization. We don’t want computers at 100% utilization either. Even at about 50-75% utilization, your computer feels locked up.
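The way delay explodes as you approach full utilization shows up in a standard queueing formula (M/M/1), which is not part of the original post; here is a sketch, with a service rate of one job per time unit as an illustrative assumption:

```python
# Sketch: average time a job spends in a simple M/M/1 queue as utilization rises.
# W = 1 / (mu - lambda), where rho = lambda / mu is utilization.
# service_rate (mu) = 1 job per time unit is an illustrative assumption.

def average_time_in_system(utilization, service_rate=1.0):
    """Average time a job spends waiting plus being served (M/M/1 queue)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    arrival_rate = utilization * service_rate
    return 1.0 / (service_rate - arrival_rate)

for rho in (0.5, 0.75, 0.9, 0.99):
    print(f"utilization {rho:4.0%}: "
          f"average time {average_time_in_system(rho):6.1f}x service time")
```

At 50% utilization a job takes twice its service time; at 99%, a hundred times. The highway analogy above is the same curve: the queue grows without bound as utilization approaches 100%.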
Why 100% Utilization Doesn’t Work for People
Now, think of a person. When we are at 100% utilization, we have no slack time at all. We run from one task or interrupt to another, not thinking. There are at least two things wrong with this picture: the inevitable multitasking and the not thinking.
We don’t actually multitask at all. We fast-switch. And we are not like computers, which, when they switch, write a perfect copy of what’s in memory to disk and read it back in when it’s time to swap that task back in. Because we are human, we swap out what’s in our memory imperfectly and swap it back in imperfectly. So there is a context-switch cost: the time it takes to remember what we were thinking of when we swapped out. All of that time and imperfection adds up.
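A back-of-the-envelope sketch of how that swap time adds up; the task lengths and the ten-minute reload penalty here are made-up illustrative numbers, not measurements:

```python
# Sketch: rough cost of fast-switching between tasks.
# Each switch adds a "reload" penalty: the time spent remembering
# where you were. All numbers are hypothetical, for illustration.

def total_time(task_minutes, switches_per_task, reload_minutes):
    """Total elapsed minutes: the work itself plus a reload penalty per switch."""
    work = sum(task_minutes)
    reloads = switches_per_task * len(task_minutes) * reload_minutes
    return work + reloads

tasks = [120, 120]                 # two 2-hour tasks
print(total_time(tasks, 0, 10))    # uninterrupted: 240 minutes
print(total_time(tasks, 3, 10))    # 3 interruptions each, 10-minute reload: 300
```

Three interruptions per task turn four hours of work into five hours of elapsed time, and that sketch is generous: it assumes the reload is the only loss, with no extra mistakes from the imperfect swap.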
Now, let me address the not-thinking part of 100% utilization. What if you want people to consider working in a new way? If you have them working at 100% utilization, will they? Not on your life. They can’t consider it. They have no time. (For why, see the practice-and-integration part of the change model in Change is Inevitable.)
So you get people performing their jobs by rote, servicing their interrupts in the best way they know how, doing as little as possible, doing just enough to get by. They are not thinking of ways to improve. They are not thinking of ways to help others. They are not thinking of ways to innovate. They are thinking, “How the heck can I get out from under this mountain of work?” It’s a horrible environment.
When you ask people to work at 100% utilization, you get much less work out of them than when you plan for a roughly six-hour technical workday. People need time to read email, go to the occasional meeting, take bio breaks, have spirited discussions about the architecture or the coffee or something else. We seem to need spirited discussions in this industry! But if you plan for a good chunk of work in the morning and a couple of good chunks of work in the afternoon, and keep the meetings to a minimum, technical people will have done their fair share of work.
If you work in a meeting-happy organization, you can’t plan on 6 hours of technical work. You have to plan on less. You’re wasting people’s time with meetings. Hmm, maybe that should be the subject of my next lightning talk.
I thank Yves Hanoulle, Pete Miller, Don Cox, Rich, Markus Gärtner, and Lisa Crispin. I hope I didn’t miss anyone. I can no longer find the Twitter feed. Pete and Rich, I promise to write more about technical debt in the future.