Setting Up Docker and Buildbot

One of the newest players in the field of increasing server density and utilization is a piece of software called Docker. The idea is solid. Rather than automating the creation and management of resource-heavy virtual machines, Docker automates the management of lightweight Linux Containers (LXC), which provide process and resource isolation, backed by kernel guarantees, on a single system. This means you have one system kernel instead of dozens, and you don't waste resources on duplicated code like the system libraries, daemons, and other things that every full virtual machine loads into memory.

The ability to create a uniform and repeatable software environment inside of a container is worthy of attention, since it directly relates to getting both development and continuous integration environments running cleanly across a variety of systems.

It’s a problem I’m facing right now: I need a continuous integration setup that can poll the master branch of a git repository and trigger builds on Windows, OS X, Linux, and Android. I have limited physical resources, but I do have at least one multicore machine with 8 GB of RAM running a 64-bit host OS.

Getting Started with Docker

Without further ado, here’s how I got going with Docker, after getting a clean 64-bit Ubuntu 12.04.3 system installed inside of VirtualBox.

Purge the old kernel (3.2):
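
The commands aren't preserved in this post; on a stock 12.04 install they would have looked something like this (the exact package versions are an assumption — check first):

```shell
# See which kernel is currently running (3.2.x on stock 12.04)
uname -r

# List the installed kernel packages to confirm the names
dpkg -l | grep linux-image

# Remove the old 3.2 kernel and headers
sudo apt-get purge linux-image-3.2.0-* linux-headers-3.2.0-*
```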

Install the new kernel (3.8):
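
On 12.04, the 3.8 kernel came from the raring hardware-enablement stack; per the Docker instructions of the time, this would have been roughly:

```shell
# Pull in the 3.8 kernel from the raring LTS backport stack
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring

# Boot into the new kernel
sudo reboot
```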

Install Docker (instructions from here):
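A sketch of the steps, following Docker's 0.6-era install instructions (the repository URL is as published at the time):

```shell
# Add the Docker repository key
sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"

# Add the Docker apt source
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"

# Install lxc-docker
sudo apt-get update
sudo apt-get install lxc-docker
```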

That last command spits out an interesting list of dependencies, which I’m capturing here in case I need to look up the associated manpages later:

$ sudo apt-get install lxc-docker
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  aufs-tools bridge-utils cgroup-lite cloud-utils debootstrap euca2ools libapparmor1 libyaml-0-2 lxc lxc-docker-0.6.1
  python-boto python-m2crypto python-paramiko python-yaml
Suggested packages:
  btrfs-tools lvm2 qemu-user-static
The following NEW packages will be installed:
  aufs-tools bridge-utils cgroup-lite cloud-utils debootstrap euca2ools libapparmor1 libyaml-0-2 lxc lxc-docker
  lxc-docker-0.6.1 python-boto python-m2crypto python-paramiko python-yaml
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3,817 kB of archives.
After this operation, 20.6 MB of additional disk space will be used.
Do you want to continue [Y/n]? 

With Docker installed, but before running the “Hello World” Docker example, I took a snapshot of the virtual machine. Now that I think about it, though, that’s the last snapshot I’ll need, since Docker is itself a snapshottable container organizer.

$ sudo docker run -i -t ubuntu /bin/bash
[sudo] password for nuket: 
Unable to find image 'ubuntu' (tag: latest) locally
Pulling repository ubuntu
8dbd9e392a96: Download complete
b750fe79269d: Download complete
27cf78414709: Download complete
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: []
WARNING: IPv4 forwarding is disabled.

No More sudo

I got a little tired of typing sudo in front of everything, so I used the instructions here to add a docker group to the system and restart the daemon with those credentials.

# Add the docker group
sudo groupadd docker

# Add the ubuntu user to the docker group
# You may have to logout and log back in again for
# this to take effect
sudo gpasswd -a ubuntu docker

# Restart the docker daemon
sudo service docker restart

Then I log out and log back into my desktop session, to gain the group permissions.

Getting Buildbot Installed

Someone beat me to it and uploaded a Dockerfile describing both a buildbot-master and buildbot-slave configuration:

$ docker search buildbot
Found 6 results matching your query ("buildbot")
NAME                             DESCRIPTION
ehazlett/buildbot-master         Buildbot Master See full description for available environment variables to customize.
ehazlett/buildbot                Standalone buildbot with master/slave.  See full description for available environment variables.

Pull buildbot-master

docker pull ehazlett/buildbot-master

According to the Docker Index entry for buildbot-master, there are a handful of environment variables that can be passed to docker run. (One annoyance: at the moment you have to pass these environment variables in on the command line, but I'm guessing they'll eventually support reading them from a file.)

CONFIG_URL: URL to buildbot config (overrides all other vars)
PROJECT_NAME: Name of project (shown in UI)
PROJECT_URL: URL of project (shown in UI)
REPO_PATH: Path to code repo (buildbot watches -- i.e. git://
TEST_CMD: Command to run as test
BUILDBOT_USER: Buildbot username (UI)
BUILDBOT_PASS: Buildbot password (UI)
BUILDSLAVE_USER: Buildbot slave username
BUILDSLAVE_PASS: Buildbot slave password

Start buildbot-master

The documentation isn’t super-clear on how to pass these multiple environment variables into the docker container, but it looks something like this:

$ docker run -e="foo=bar" -e="bar=baz" -i -t ubuntu /bin/bash
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: []
root@1f357c1e17b4:/# echo $foo
root@1f357c1e17b4:/# echo $bar
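
For what it's worth, launching the master with a few of the variables from the list above set might look like this (all of the values are placeholders for your own project):

```shell
docker run -d \
  -e="PROJECT_NAME=myproject" \
  -e="PROJECT_URL=http://example.com/myproject" \
  -e="REPO_PATH=git://github.com/example/myproject.git" \
  ehazlett/buildbot-master
```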

For the time being, I’ll just run the container with its default parameters.

CID=$(docker run -d ehazlett/buildbot-master)

But I’m also curious as to how I’m supposed to communicate with it. So I inspect the docker configuration for the buildbot-master:
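
The command itself isn't preserved above; presumably it was something like:

```shell
docker inspect ehazlett/buildbot-master
```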

So what happens when you run the container? You have to find out how the buildbot ports inside the container are mapped to ports on your host system.
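
Docker has a port subcommand for exactly this (using the $CID captured above):

```shell
# Print the host port mapped to each in-container port
docker port $CID 8010   # Buildbot web UI
docker port $CID 9989   # master/slave communication
```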

Port 9989 is the communications port the Buildbot systems use to talk to one another. Port 8010 is the Buildbot web interface, which you can open in a browser, like so:


Of course, you can access this from the outside (from the top-most, non-virtual host OS) as well:
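
Since my Docker host is itself a VirtualBox guest, reaching it from the physical host takes a NAT port-forwarding rule; something like this (the VM name and port numbers are placeholders for your setup):

```shell
# Forward host port 8010 into the running VM's port 8010
VBoxManage controlvm "docker-vm" natpf1 "buildbot-web,tcp,,8010,,8010"
```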


Docker Subnetwork

It’s also not entirely clear from the basic Docker instructions that Docker creates an internal private network, NATed to the host. When you run ifconfig on the Docker host, you’ll see:
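
The output isn't preserved here, but the relevant interface is the docker0 bridge, which by default looks something like this (hardware address illustrative):

```shell
$ ifconfig docker0
docker0   Link encap:Ethernet  HWaddr aa:bb:cc:dd:ee:ff
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
```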

And if you’ve attached to a Docker container, and run ip addr show, you’ll see:
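
That is, an address on the same private subnet, something like (addresses illustrative):

```shell
$ ip addr show eth0
15: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    link/ether aa:bb:cc:dd:ee:01 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
```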

Which you’ll also see if you run docker inspect $CID, which returns useful runtime information about the specific container instance:
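
docker inspect returns a JSON document; the network-related portion looks roughly like this (addresses and port numbers are illustrative, field names as of Docker 0.6):

```shell
$ docker inspect $CID
...
    "NetworkSettings": {
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "Gateway": "172.17.42.1",
        "Bridge": "docker0",
        "PortMapping": {
            "Tcp": { "8010": "49153", "9989": "49154" },
            "Udp": {}
        }
    }
...
```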

So now the question is how to get the buildbot-slaves on other VMs to talk to the buildbot-master, and how to configure the buildbot-master itself. I’m also considering getting an instance of CoreOS running, as it seems to have a mechanism for handling global configuration within a cluster, which would be one way to provide master.cfg to the buildbot-master.

Updates to this post to follow.

Update: Easier Overview

The easier way to see your container runtime configurations at a glance is to use the docker ps command (Duh!). Particularly nice is the port-mapping list in the rightmost column:
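
The listing isn't captured above, but it looked something like this (the container ID and mapped host ports are illustrative):

```shell
$ docker ps
ID             IMAGE                             COMMAND          CREATED       STATUS      PORTS
a1b2c3d4e5f6   ehazlett/buildbot-master:latest   /bin/sh -c ...   2 hours ago   Up 2 hours  49153->8010, 49154->9989
```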

Update: Configuring the Buildbot Master Itself

You can just jump into the container and edit the master.cfg, like so:
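
The commands aren't preserved here; with the LXC execution driver of the day, one way was lxc-attach against the full container ID (the master.cfg path is an assumption — check the image's documentation):

```shell
# The LXC driver names containers by their full ID, which
# docker ps truncates; -notrunc shows the whole thing
FULL_CID=$(docker ps -notrunc | grep buildbot-master | awk '{ print $1 }')

# Open a shell inside the running container
sudo lxc-attach -n $FULL_CID -- /bin/bash

# Inside the container: edit the config, then tell buildbot to reload it
vi /opt/buildbot/master/master.cfg
buildbot reconfig /opt/buildbot/master
```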

Update: Getting the list of all Containers

It’s not entirely intuitive, but each time you docker run an Image, you get a new Container, and these Containers don’t show up in the default docker ps listing once they’ve exited. (I was wondering how the diff command was supposed to work, for instance.)

You have to use the docker ps -a command to see all of the Container IDs, which you can then start and stop.

In other words, using docker run image-name creates the new Container. But for subsequent calls, you should use docker start container-id.
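
As a sketch of that workflow:

```shell
# First time: create a new Container from the Image and keep its ID
CID=$(docker run -d ehazlett/buildbot-master)

# Later: stop and restart that same Container, rather than
# creating a fresh one with another docker run
docker stop $CID
docker start $CID

# See every Container, running or stopped
docker ps -a
```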

This also clarifies why there’s a docker rm and a docker rmi command.

There’s also an easy way to remove unused Docker containers in bulk.
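
A destructive one-liner for this (it removes every stopped container; running ones will refuse to be removed):

```shell
# -a lists all containers, -q prints just their IDs
docker ps -a -q | xargs docker rm
```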

Update: Using cpp to make Dockerfiles uniform

This section could also be called “The horror, the horror”. Following the best-practices list mentioned here, I decided to create a uniform set of include files to pull into my Dockerfiles, which I then generate using the C preprocessor (pretty much because it’s cheap and available).

So the idea I had was to put common Dockerfile instructions into separate files. I’m guessing the Docker devs might build an INCLUDE instruction into the Dockerfile syntax at some point. The benefit to doing this is that you can take advantage of the docker image cache, which stores incremental versions of your build-images based on the instructions and base-image used to create them. In other words, you don’t have to keep rebuilding the common parts of various Docker images. And you’re less likely to mistype common lines across files, which would quietly break that cache sharing.


In a clean subdirectory, below where the .ubuntu and .run files are located:

To create the custom Dockerfile:

Which generates something like:

Then just docker build . like usual.

Update: Now with github!

I’ve created a repository on github to play around with includable Dockerfiles.

The github repository currently has a few images in it, which are related to one another in a tree that looks like this:

The idea, then, is to eliminate redundant text by including Dockerfiles in other Dockerfiles, and to organize this hierarchically, such that images further down the hierarchy are just combinations of their parents plus some differentiation.

apt-get install package caching using squid-deb-proxy

If you’re actively developing Docker images, one thing that slows you down a lot and puts considerable load on Ubuntu’s mirror network is the redundant downloading of software packages.

To speed up your builds and save bandwidth, install squid-deb-proxy and squid-deb-proxy-client on the Docker container host (in my case, the outermost Ubuntu VM):
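
That is:

```shell
sudo apt-get install squid-deb-proxy squid-deb-proxy-client
```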

And make sure you add your PPA archive sources (e.g. ppa.launchpad.net) to /etc/squid-deb-proxy/mirror-dstdomain.acl:
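
For Launchpad PPAs, that means something like:

```shell
# ppa.launchpad.net serves Launchpad PPAs; one domain per line
echo "ppa.launchpad.net" | sudo tee -a /etc/squid-deb-proxy/mirror-dstdomain.acl
```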

And restart the proxy:
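
That is:

```shell
sudo service squid-deb-proxy restart
```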

In your Dockerfile, set the proxy configuration file to point to the default route address (which is where squid-deb-proxy will be running):
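
A sketch of the fragment, assuming the defaults: 172.17.42.1 is Docker's default bridge address (the containers' default route), and 8000 is squid-deb-proxy's default listen port:

```dockerfile
# Route all apt traffic through the squid-deb-proxy on the Docker host
RUN echo 'Acquire::http { Proxy "http://172.17.42.1:8000"; };' > /etc/apt/apt.conf.d/01proxy
```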

Once the caching is set up, you can monitor accesses via the logfiles in /var/log/squid-deb-proxy.

The first time you build an image, the log file has lots of cache misses:

The second time, you’ll see a number of hits (TCP_REFRESH_UNMODIFIED), saving you bandwidth and time:
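
I haven't preserved the exact log lines, but squid access-log entries look roughly like this (timestamps, IPs, sizes, and package names are illustrative):

```shell
# first build: mostly misses
1382012345.001    845 172.17.0.3 TCP_MISS/200 224400 GET http://archive.ubuntu.com/ubuntu/pool/main/g/gcc-4.6/gcc-4.6_4.6.3-1ubuntu5_amd64.deb
# second build: served from the cache after revalidation
1382012399.002      4 172.17.0.4 TCP_REFRESH_UNMODIFIED/200 224400 GET http://archive.ubuntu.com/ubuntu/pool/main/g/gcc-4.6/gcc-4.6_4.6.3-1ubuntu5_amd64.deb
```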

Update: Using sshfs to mount Docker container folders

Your Docker containers might not actually have any editors installed. One easy way around this is to mount folders from inside the Docker container on the host, using the user-mode SSHFS:
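
A sketch, assuming the container runs an ssh daemon and using the container's bridge IP from earlier (the user, IP, and paths are placeholders):

```shell
# Mount the container's buildbot directory on the host
mkdir -p ~/buildbot-master
sshfs root@172.17.0.2:/opt/buildbot ~/buildbot-master

# ...edit with whatever editor the host has, then unmount
fusermount -u ~/buildbot-master
```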
