Who Decides What Done Means for a Program?
When I start working with new-to-agile teams, one of the first things we do is discuss what done means. Chances are good they have never discussed it before.
The developers don't agree with each other. The testers don't agree with the developers, or with each other. And until they discuss what done means and develop a working agreement as a team, they will have a difficult time accomplishing anything.
Now, that's just one feature team. Imagine multiple feature teams coming together to create one program (a collection of several projects' results combined into one deliverable) so they can release a product, especially when the teams are not all in one location.
A single team can facilitate its working agreement by itself. Working with its customer or product owner, the team has total control over what done means. The demo provides feedback, so the team learns quickly, at the end of the first iteration, whether its definition of done is insufficient.
But what happens when you have more than one team? The problem is that all the teams need complementary definitions of done. Suppose your team is waiting for a feature from another team. If one team thinks done means the unit tests and the system tests pass, but the other team thinks done means only the unit tests pass, the teams' expectations do not match. The teams do not have complementary definitions of done.
If you have two or three teams, you might be able to bring all the teams together to facilitate their working agreements of done together.
With up to eight or nine teams, you could ask each team to send a representative to a joint meeting, even though that's a pretty large meeting. If they timebox the meeting to one hour, maybe they could arrive at an agreement.
But imagine you have 25 or 47 teams, all geographically distributed. Even if you could find a time when each team could send a representative, that meeting would be too large to get anything done. You need some way to create an agreement on behalf of all the teams.
What can you do for the good of the program, to facilitate the collaboration and to help the teams achieve done?
It sounds funny, but sometimes, proposing a strawman definition of done helps teams see what done could be.
As a program manager, I have asked the teams to agree to a common definition of done. I have taken a definition of done that reflects the risk level of the program I think we can manage, and asked the teams to use that. I often say something like this, “Dear technical teams: in your release planning meetings, please discuss this definition of done. Can you live with it? If not, please let me know. If you can live with it, please try it out for your first iteration, and let's see what the results are. If you encounter trouble, elevate that risk to the program team right away. Thank you.”
A common definition of done is a program risk. The larger the program, the larger the risk. As such, it is a program problem. Asking the teams to try it and report back on their experiences is a way to ask for their risks and to make sure the communication is not just one-way, from the program manager to the teams.
As always, when it comes to agile programs, your mileage will vary.