When is Continuous Integration Not?


I’m a big fan of continuous integration. For me, that means that as developers implement small pieces, they check in the changes, verify the changes with a local build and smoke test, promote the code to the mainline, check again, and they’re done.
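The routine above is easy to script. Here’s a minimal sketch in Python; the step commands are placeholders I invented, not anything from a real project, so substitute your own build, smoke-test, and version-control commands.

```python
# Minimal sketch of the check-in routine described above. Every command
# here is a stand-in (echo); swap in your real build, smoke-test, and
# version-control commands.
import subprocess

def run(description, cmd):
    """Run one step; return True when it exits cleanly."""
    result = subprocess.run(cmd, shell=True)
    print(f"{description}: {'ok' if result.returncode == 0 else 'FAILED'}")
    return result.returncode == 0

def verify_and_promote():
    # Verify locally first...
    if not (run("local build", "echo building") and
            run("smoke test", "echo smoking")):
        return False  # fix it before promoting anything
    # ...then promote to the mainline and check again.
    run("promote to mainline", "echo committing")
    return (run("post-promotion build", "echo building") and
            run("post-promotion smoke test", "echo smoking"))

print(verify_and_promote())  # True when every step passed
```

The point of the script is the gate: nothing reaches the mainline until the local build and smoke test pass, and the final check confirms the mainline is still good afterward.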

I’ve been having a long discussion with one of my clients about what continuous integration is. They say that when the developer is done with a particular feature, and checks that feature in, they are doing continuous integration. But it can take months for a developer to finish a feature. For me, this is staged integration.

Their staged integration is marginally better than big-bang integration (integrating everything at the end), but it takes too long for me. I want to be as close as possible to “real” continuous integration (check everything in every day). Why? Because developers don’t write that much code on a given day, and they receive feedback on what they wrote immediately. If a developer has had a bad day, I want that developer to know as early as possible.

During a recent project management workshop, one of the participants tried to prove to me that his team’s version of staged integration was the only workable alternative. Well, if I’m not jet-lagged, I can argue with anyone at any time 🙂 He decided to try some real continuous integration, checking in work every day. I haven’t yet heard from him, but I suspect their lack of smoke tests and automated system tests will make their problems visible earlier. That won’t look like success to him, but it does to me.

Do you have any good words around continuous integration and why it works for you? I’d like to be able to write more coherently about it, and right now I’m limited to:

  • Continuous integration provides early feedback to developers.
  • The PM can see every day if the build is broken. (I use this as a predictive metric.)
  • There’s less to check in every day, so it’s easier to see where the problems that broke the build occurred.
  • It’s much more obvious earlier whether we have enough tests to know if the build is any good.

Let me know if you have more good ideas about selling continuous integration.
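On the predictive-metric point: even a trivial tally of daily build results gives a PM something to watch. This is an illustrative sketch only; the history data is invented.

```python
# Illustrative only: turn daily build outcomes into a simple predictive
# signal. True means the build passed that day; the history is made up.
def broken_streak(daily_results):
    """Length of the current run of consecutive broken builds."""
    streak = 0
    for passed in reversed(daily_results):
        if passed:
            break
        streak += 1
    return streak

history = [True, True, False, True, False, False, False]  # oldest first
print(broken_streak(history))  # -> 3: three straight days of broken builds
```

A growing streak is the early-warning sign: the longer the build stays broken, the more likely the project is in trouble.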

Update, May 3, 2006: Martin Fowler has updated his “Continuous Integration” article.

7 Replies to “When is Continuous Integration Not?”

  1. Welcome back, Johanna.
    Continuous integration can be tricky to implement, not only technically, but logistically. Personally, I would question case-by-case whether continuous integration was *really* of benefit to the project, or whether it’s more about helping management to feel good and have a sense of control. If one were to encounter resistance from the technical team about making this move, one would have to judge for oneself whether the resistance is fear of change or of giving up control, or whether there is a real resistance to the overhead that will come with it.
    One area of particular concern is whether the source code control system in use is up to the job of supporting team-based continuous integration, which implies frequent check-in. For example, does the SCC system support private checkout/checkin areas for developers to save and version their work without having incomplete code integrated into the build? Does the SCC system support concurrent versioning? What I’m getting at is that you want to avoid “not-work” messes like the ones described in this blog post.
    If I were a technical lead on a hypothetical project that was looking to implement continuous integration, I would also want to know whether the original project estimates took into account the overhead of continuous integration, daily builds, more complicated configuration management processes, etc. I would also want to know whether resource allocations for the project allowed for a buildmaster or configuration management specialist, or for the fact that, absent a buildmaster, my team members and I are probably going to be dealing with it ourselves.
    I don’t mean to be down on continuous integration, but it does not come without overhead, and it’s not a silver bullet. It’s well established that integrating too late and not often enough can be costly, but so can integrating all the time.

  2. If the continuous integration build is automated, there is no reason not to use it.
    We would love to have 100% successful builds, but we are happy to have CruiseControl tell us we missed something. It allows us to fail fast!
    I got a lot of resistance to the idea of continual integration here: for a long time it was “his project”. Now, people still expect others to do the plumbing, but they sure miss the tool when we turn it off.
    Continual Integration should provide the organization with solid metrics. Metrics are one thing that can be in short supply.
    Continuous Integration is a pragmatic programming practice: it improves the entire process…
    Hope this is helpful…

  3. One really nice feature is it forces developers not to rely on cryptic build processes. The build has to be scripted out in ant/maven/rake, etc. Before we put in continuous integration we had nasty issues with the build being dependent on a single developer and their magic machine.
    A quick win is when developers forget to check in some code, but don’t notice because it still compiles on their machine. It’s usually resolved in minutes after the broken build emails go out.
    Our continuous build box also runs Checkstyle and Clover. This gives us regular feedback on how well the code is following our coding conventions and how the unit test coverage is progressing. These metrics are nice for management and also tend to make more transparent just how clean the code is.

  4. As Martin Fowler says, CI fundamentally shifts the daily developer routine, and with it the whole development cycle. A ten-minute build automated with continuous integration helps developers establish a rhythm as they develop software. This rhythm reminds developers to integrate regularly. More about this in my blog post, Ten-minute build, continuous integration and developer rhythm. I coach developers to integrate as often as possible during the day, ideally every couple of hours. To achieve this, I coach them to slice a user story into a collection of vertical slices of customer-visible functionality. When a slice is done, it can be checked in and integrated, and if the build succeeds, CI deploys it to an environment. This allows the customer to see and play with emerging functionality and promotes early and frequent feedback.
    The longer a developer refrains from checking code into the version control repository, the more his local workspace diverges from the code trunk. This has inherent risk and is likely to result in a painful merge. Integrating often significantly reduces this problem. Also integrating more frequently reduces the amount of change each time going into the version control repository. If there are bugs they should be easier to find because there’s only a small amount of new or changed code to check.
    Running all the unit tests (and acceptance tests, if that can be done within the ten-minute timeframe) on each build provides a full regression test cycle every time. Testing doesn’t prove that there aren’t any defects, but CI does greatly reduce the defect count.
    A broken build should not be tolerated, but it does happen, especially when developer discipline slips around the definition of ‘done’. I like to do a local build before checking in. A broken build, however, is a natural outcome of an integration, and so the team is responsible for fixing it immediately. There should certainly never be a broken build at the end of the day. In a self-organising agile team, it’s their tool and their responsibility. If the build is breaking all the time, there’s a problem with the way the developers are working. The ScrumMaster or PM should watch for this. I’m not sure about using it as a metric, although a running CI system (one that runs FIT tests) generates the data to track running tested features, which is what I’m interested in.
    CI removes the unpredictability of big-bang integration. By integrating often, integration is broken down into many small integrations performed as part of the cycle: test-code-refactor-integrate. The integration nightmare goes away, and the number of integration problems is reduced. This gives developers the courage to move quickly and with higher productivity.

  5. Johanna, something about “the PM seeing every day if the build is broken” gave me the chills. The way I think of CI, it’s for the developers, not management. It’s the developers who see all the time (not just every day but literally all the time) that the build is “green” and are naturally driven towards bringing it back to green when it breaks.
    In other words, I’d rather talk about the *team* being all the time aware of the health of their codebase.

  6. CI exposes problems not only in the code but also in the design and the process.
    A few weeks ago I went to a presentation where developers from a company shared their experience with CI. For them, continuing with it was never in question, but it was not clear-cut that they really saved time in the end, once all their efforts and troubles were taken into account.
    It became pretty obvious that almost all their negative experiences came down to two root causes: bad design and breakdowns in developer discipline. And they almost never attacked the root causes, only the symptoms, sometimes making matters worse. Examples: builds took too long, because modules had too many dependencies and unit tests were often really acceptance tests; builds broke too often, because developers didn’t run a local build before check-in (they just got accustomed to letting the automated CI find any problems) and because of occasional race conditions (they chose to disregard the module dependencies to speed up the build).
    CI applied to a system not designed for CI, like unit tests added to a system not designed for testing, may cost more effort than it returns. But that is not CI’s fault.
