We're excited about what Docker offers for CI systems such as ours.
Strider is conceptually similar to Travis-CI or Jenkins with the following major differences:
Our cloud-hosted Strider offering needs to be able to run tests in an isolated, secure way. Therefore, we need a sandboxed environment that gives us both security and determinism.
Today cloud computing and hardware virtualization are ubiquitous. You'd have to have been living under a rock to miss the rise of cloud-hosted VMs sold by providers like AWS EC2.
LXC and cgroups offer lightweight virtualization at the operating system level, rather than at the hypervisor level (a la Xen/EC2) or VM level (a la VMware/VirtualBox).
Linux Containers offer something conceptually similar to a Linux VM but with the following properties:
If you are familiar with chroot(2), LXC is like a chroot(2) on steroids.
Until now, the tooling for Linux Containers and cgroups has been somewhat crude or even downright byzantine. Using Linux Containers has required a significant investment in learning and in working around often-buggy tools.
Enter dotCloud and Docker. Having run a major PaaS for a number of years, dotCloud has a battle-hardened team of engineers deeply familiar with Linux Containers and related technology. That makes them the perfect people to build improved tools for these extremely awesome but under-appreciated technologies.
For our cloud-hosted Strider CI offering, we originally maintained our own homebrew LXC-based sandbox environment.
With the advent of Docker, however, we have thrown all that away and now build upon a stable, standard base with a community behind it. This way we get a bunch of amazing features for free:
Hosted Strider utilizes multiple workers for horizontal scalability. These workers are Linux Containers distributed over the network. Using Docker, we have been able to build our own standard worker image which can be pushed to any Docker host with no build step. This makes it extremely easy for us to add/shrink capacity.
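As a rough sketch of what this looks like in practice (the image name here is hypothetical, not our actual registry path), bringing a new worker online is essentially just a pull and a run:

```shell
# Hypothetical worker image name - illustrative only.
# Adding capacity is a pull and a run; there is no build step on the host.
docker pull myorg/strider-worker:latest
docker run -d myorg/strider-worker:latest
```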
One of the really exciting possibilities that Docker offers is to initialize a user-supplied custom environment for each test run. Often the code that you're testing relies on compilation of third party libraries. Unless the test environment has all those libraries available, you must build those libraries as part of the test run.
An example of this is the node-opencv binding (written by FrozenRidge's own Peter Braden). Since node-opencv requires OpenCV to be installed, the test suite currently has to apt-get install this library and a long string of subdependencies. Just building these prerequisites can add 10 minutes to a test run.
The reason this is the best we can currently do is that we think of a test environment as always starting from a uniform, clean image. When you start your tests, you get a nice vanilla VM, but that VM may need a lot of work before it's suitable for testing your code.
What Docker offers is the potential to allow a different idea of what a clean VM is for each test suite. You could, for example, build a docker image that had all of your prerequisite libraries, and thus you'd only have to build your code on each test run.
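To make this concrete, here is a sketch of what such a prerequisite-baking Dockerfile might look like for the node-opencv case. The base image and package names are illustrative assumptions, not Strider's actual setup - the point is that the slow system-level install happens once, at image-build time:

```dockerfile
# Hypothetical test-environment image for node-opencv.
# Base image and package names are illustrative assumptions.
FROM ubuntu:22.04

# Install OpenCV and its subdependencies once, at image-build time,
# instead of on every test run.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential pkg-config libopencv-dev nodejs npm && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
```

A test run starting from this image only has to build the binding itself, not OpenCV and its dependency chain.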
With Docker as a standard container format, our customers can interactively create their test environment - including any third-party libraries - upload it to us, and then we can simply start all their future tests from that exact environment. This eliminates the setup delay and removes the need for hard-to-maintain environment setup scripts.
Going even further, you could specify several different Docker images with different versions of prerequisite libraries installed, allowing you to test your code against multiple environments. This would allow me to test my node bindings against several versions of OpenCV, for example.
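Such a version matrix could be as simple as a loop over image tags. This is a sketch under the assumption that you have built one hypothetical image per OpenCV version - the image names and tags are made up for illustration:

```shell
# Hypothetical per-version test images; names and tags are illustrative.
# Run the same test suite against each prebuilt environment.
for tag in opencv-2.4 opencv-3.4 opencv-4.5; do
  docker run --rm myorg/node-opencv-env:"$tag" npm test
done
```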
And because Docker treats its images as a tree of derivations from a source image, you have the ability to store an image at each stage of a build. This means we can provide full binary images of the environment in which the tests failed, letting you run locally a bit-for-bit identical container to the one the CI server ran. Due to the magic of Docker and AUFS copy-on-write filesystems, we can store this cheaply.
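For illustration, the kind of flow this enables (container and image names hypothetical): snapshot the failed container on the CI side, then pull and explore that exact environment on your own machine:

```shell
# On the CI worker: snapshot the container in which the tests failed.
# "failed-test-container" and the image name are hypothetical.
docker commit failed-test-container myorg/failed-builds:build-1234
docker push myorg/failed-builds:build-1234

# On your laptop: pull the exact failing environment and poke around in it.
docker pull myorg/failed-builds:build-1234
docker run -it myorg/failed-builds:build-1234 /bin/bash
```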
Often tests pass in a CI environment but break in another (e.g. production) environment due to subtle differences. Docker makes it trivial to take exactly the binary environment in which the tests passed and ship it to production to run.
You can be confident that the same code that passed the test suite is also running in production.
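As a sketch, promoting the image that passed CI could be as simple as re-tagging and pulling it on the production host (image names and tags again hypothetical):

```shell
# Promote the exact image that passed CI - no rebuild, no drift.
docker tag myorg/app:build-1234 myorg/app:production
docker push myorg/app:production

# On a production host: run the very same binary environment.
docker pull myorg/app:production
docker run -d myorg/app:production
```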