Using a build server (such as CruiseControl) doesn’t mean developers shouldn’t run local builds (even though broken builds aren’t really as serious as many people make them out to be). So this raises the question: if developers run the build locally, what’s the build server for?
Here’s a list of the things I use a build server for, taken from a message I sent to the XP mailing list a while back.
I always advocate that developers build locally first. My version of “local build” is a full release cycle: compile, package, deploy, test. I aim to keep this really fast; when it gets past a minute I get edgy (I strive to keep unit tests inside the IDE to under 10 seconds). Integration-level tests are another step on top, and developers are encouraged to run them as part of, well, integration, as well as most acceptance tests. The end result of this is a system that should be releasable. I would be surprised if there were really significant differences between what I advocate to my team and what you advocate.
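That local cycle can be sketched as a small driver script. This is a minimal illustration, not any particular build tool: the step names and `echo` commands are placeholders you would swap for your real compile/package/deploy/test commands, and the one-minute budget matches the “edgy past a minute” rule above.

```python
import subprocess
import sys
import time

# Hypothetical build steps -- substitute your project's real commands.
BUILD_STEPS = [
    ("compile", ["echo", "compiling..."]),
    ("package", ["echo", "packaging..."]),
    ("deploy",  ["echo", "deploying to a local sandbox..."]),
    ("test",    ["echo", "running tests..."]),
]

BUDGET_SECONDS = 60  # past a minute, the feedback loop is too slow


def run_local_build(steps=BUILD_STEPS, budget=BUDGET_SECONDS):
    """Run each step in order, fail fast, and report total wall-clock time."""
    start = time.monotonic()
    for name, cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"BUILD FAILED at step: {name}")
            return False
    elapsed = time.monotonic() - start
    print(f"BUILD OK in {elapsed:.1f}s")
    if elapsed > budget:
        print(f"WARNING: build took longer than {budget}s -- time to speed it up")
    return True


if __name__ == "__main__":
    sys.exit(0 if run_local_build() else 1)
```

The point of the single entry point is that the build server can run exactly the same script the developers run, so “it built locally” and “it built on the server” mean the same thing.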
What I use asynchronous integration for is:
- as a safety net, in the event that developers are not running their local builds. Build failures which should have been picked up by a local build are serious; a pattern of them means that it’s time to refocus on that practice.
- an information radiator of the current build status. I find that not knowing if the build is good or not gives me concerns. 🙂 Sure, I can ask, or I can even assume it’s good, but knowing is better.
- producing real release builds; this is just a side effect of the fact that it’s the controlled, reproducible environment. It’s also a sop to the QA teams (who previously insisted on doing release builds themselves… *shudder*).
- a monitor of important, but non-urgent, information. Looking at my current build system, this includes code coverage, dependency analysis, coding style enforcement (also run locally), duplication detection, documentation generation, and statistical information. This stuff gets reviewed periodically, and is not worth examining all the time; however, various threshold levels produce warnings.
- a repository of historical information. I can easily see how often a team releases code back to the repository based on how many builds they do – I’ve had teams that produce upwards of 10 release builds a day; I’ve also been on teams that struggle to do that in a month. 🙂 I can also see historical information for some of the “important, but non-urgent, information” (particularly the code coverage).
- a “warning bell” for shared components, allowing me to easily see if a new release of a component breaks any of the dozen or more projects that use it.
- running long-running tests. In many cases (e.g. performance profiling), these are not of a pass/fail nature, but are more about keeping an eye on the trends (again with threshold warnings). Some are just long-running because they are long-running; imagine an acceptance test that invokes a report that takes 30 minutes to prepare with production data, or a workflow process with a forced 2-hour delay.
All of this is information I find valuable, and thus important to me. None of it is really worth asking developers to run all the time (though I do often get people to focus on small parts of it for an iteration).
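The “important, but non-urgent, with threshold warnings” idea from the list above can be sketched like this. The metric names and limits here are illustrative assumptions, not taken from any real build system; the shape is what matters: the build stays green, and values drifting past a threshold just raise a warning for periodic review.

```python
# Hypothetical metric thresholds -- names and values are illustrative only.
THRESHOLDS = {
    "coverage_percent":    {"min": 80.0},
    "duplication_percent": {"max": 5.0},
    "style_violations":    {"max": 0},
}


def check_thresholds(metrics, thresholds=THRESHOLDS):
    """Return a warning string for each metric outside its threshold.

    No warning fails the build; they accumulate for periodic review.
    """
    warnings = []
    for name, limits in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not collected in this build
        if "min" in limits and value < limits["min"]:
            warnings.append(f"{name}={value} below minimum {limits['min']}")
        if "max" in limits and value > limits["max"]:
            warnings.append(f"{name}={value} above maximum {limits['max']}")
    return warnings


# Example: coverage has slipped under its floor, duplication is still fine.
for w in check_thresholds({"coverage_percent": 72.5, "duplication_percent": 3.1}):
    print("WARNING:", w)
```

Keeping each build’s metrics alongside the warnings also gives you the historical record mentioned above, so trends (coverage creeping down, a performance test slowing) are visible without anyone watching every build.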