Thin Air

Automated Builds

Travis Griggs, over on This TAG is Extra, has a post on doing automated builds in VisualWorks. We do this at Quallaby too, and we're expecting to change the process in the near future, so here's a summary of how we do it now. Once the new process is in place, I'll post again on what we changed and why. I'm still fairly new here at Quallaby, so I'm a bit hazy on some of the details of how things work, but this is the gist of it.

Unlike KeyTech, we have only one product, but it involves many components that run on different machines and communicate with each other. To keep things simple, we just use one image for all the components and configure them at installation to perform different functions at run time.

Our build runs on a headless Solaris server and is invoked every night at midnight via cron. There's a directory for our current development stream that contains the static elements of the build: the VM, a base image, pre-compiled binaries for some external libraries we use, and so on. This stuff is organized by platform, with the platform-independent bits going into a 'common' directory.
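The nightly kickoff is just an ordinary cron entry; something along these lines, with a hypothetical script path standing in for the real one:

```shell
# Run the nightly build at midnight every day.
# The script path and log location are illustrative, not the actual ones.
0 0 * * * /opt/build/nightly-build.sh >> /opt/build/logs/nightly.log 2>&1
```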

The base image has just enough in it to get the process going; Store is installed and the repository is configured. When the build is kicked off, the base image is launched and begins loading code from Store. It first loads the most recent version of our root bundle, then updates any packages that are more recent still.

Then it creates a directory for the build, naming it based on the development stream, version and timestamp of the build. Within this directory are three subdirectories: release, test and working.
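The directory scheme above can be sketched in a few lines of Python. The naming format and function name here are my own invention; only the stream-version-timestamp idea and the three subdirectories come from the actual process:

```python
import os
import time

def make_build_dir(root, stream, version, when=None):
    """Create a build directory named after the development stream, the
    version and a timestamp, containing the three standard subdirectories.
    The exact naming format is illustrative."""
    stamp = time.strftime("%Y%m%d-%H%M%S", time.localtime(when))
    build = os.path.join(root, f"{stream}-{version}-{stamp}")
    for sub in ("release", "test", "working"):
        os.makedirs(os.path.join(build, sub))
    return build
```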

The contents of the 'release' directory are what we would ship to a customer. There's a subdirectory for each platform we support, each containing a complete release for that platform - VM, headless and headful images, shared libraries and a couple of installation scripts. The build populates these directories then builds tarballs for delivery.
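The per-platform tarball step amounts to walking the release area and packing each platform subdirectory. A minimal sketch, assuming one subdirectory per platform (the layout details are assumptions, not the actual build script):

```python
import os
import tarfile

def build_tarballs(release_dir, out_dir):
    """Pack each platform subdirectory of the release area into its own
    gzipped tarball for delivery. Returns the paths of the tarballs."""
    paths = []
    for platform in sorted(os.listdir(release_dir)):
        src = os.path.join(release_dir, platform)
        if not os.path.isdir(src):
            continue
        tar_path = os.path.join(out_dir, f"{platform}.tar.gz")
        with tarfile.open(tar_path, "w:gz") as tar:
            # arcname keeps the platform directory as the top-level entry
            tar.add(src, arcname=platform)
        paths.append(tar_path)
    return paths
```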

The 'test' directory is for running the unit tests. The build image saves a headless image into this directory, and also copies some files with test data and generates a test script. Then it launches the test image. The test image reads in the test script, which launches a headless test runner. As the tests run, the results are logged to a text file in the same directory, and any errors that occur are caught and the stack is dumped to a text file. When the tests are complete the results are mailed to the team and the image exits.

The 'working' directory is for continuing development based on the code in the build. Again, it contains platform-specific subdirectories for each architecture, each with a VM, precompiled libraries and headful images. The basic code is saved into an image called '' and then other images are saved and launched with a parameter for customizing the image. Several of the developers have their own customization code, which gets called based on the parameter and might involve loading goodies from the Cincom public Store repository, configuring key bindings or anything else that makes development more pleasant.
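The parameter-driven customization is essentially a registry of per-developer hooks keyed by the startup parameter. A hedged Python sketch of that dispatch — the names, the decorator and the example hook are all made up for illustration:

```python
# Registry of per-developer customizations, keyed by the startup parameter.
CUSTOMIZATIONS = {}

def customization(name):
    """Decorator that registers a customization hook under a parameter name."""
    def register(fn):
        CUSTOMIZATIONS[name] = fn
        return fn
    return register

@customization("travis")
def travis_setup(image):
    # Stand-ins for loading goodies and tweaking key bindings
    image.append("load public-store goodies")
    image.append("configure key bindings")

def customize(image, parameter):
    """Apply the customization selected by the startup parameter, if any."""
    hook = CUSTOMIZATIONS.get(parameter)
    if hook:
        hook(image)
    return image
```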

All in all this process works pretty well for us. I find the 'working' directory a nice touch, as it means we don't spend much time configuring our development environments, and we have no qualms about discarding images and starting fresh each day or even more frequently.

There are a couple of things we'd like to improve. For one thing, it takes too long to run the tests - over an hour at the moment. Some of this can be improved through old-fashioned optimization; many of the tests do a lot more work than necessary, and we can get the same level of coverage with simpler, faster tests.

We also have tests that probably shouldn't be run during the build at all. These tend to be end-to-end tests of complete subsystems, using data with known characteristics. They take much longer to run than unit tests and should be broken out into a separate (but also automated) testing regime. If we can get the tests to run in about 10 minutes or so, it would be feasible to run builds automatically after publishing, as Travis describes.

The other problem we run into sometimes is broken builds. The code that loads packages from Store isn't very smart: it doesn't distinguish between trunk and branches, and it doesn't pay attention to dependencies. Occasionally this leads to packages not loading correctly, or to test runs going horribly wrong. It shouldn't be hard to fix this up.
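Making the loader pay attention to dependencies is mostly a matter of loading packages in topological order. A minimal sketch, assuming we can extract a map of each package's prerequisites (the Store API for doing that isn't shown here):

```python
from graphlib import TopologicalSorter

def load_order(dependencies):
    """Return an order in which packages can be loaded so that every
    package comes after its prerequisites. `dependencies` maps a package
    name to the names it depends on. Raises CycleError if the
    dependencies are circular."""
    return list(TopologicalSorter(dependencies).static_order())
```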

Posted in programming