savo.la
2015-01-30
Stable build environment using Docker
Docker is intended for application deployment, but it can also streamline development, as I learned when adopting it for Ninchat.
Build dependencies can be problematic. When deploying software on Linux, we can use the libraries and tools provided by the Linux distribution. But in order to build our application, we need to have the libraries and build tools available, and for repeatable production builds, their versions should also be fixed. Keeping workstations in sync with the requirements may be inconvenient or impossible (e.g. when the workstation doesn't run Linux), so a separate build machine or a continuous integration server might be used. But that doesn't help if we want to compile incrementally while writing code, or run unit tests before committing changes to version control.
I'll show a simple way to set up a Docker environment for running builds and tests right from your source tree. Here's an example C program, hello.c:
#include <stdio.h>
#include <gperftools/tcmalloc.h>

int main(int argc, char **argv)
{
	printf("hello world\n");
	tc_malloc_stats();
	return 0;
}
It will be built using this Makefile:
hello: hello.c
	$(CC) -o $@ $^ -ltcmalloc

clean::
	rm -f hello
Let's record its build dependencies (Ubuntu package names) to deps.txt:
build-essential
libgoogle-perftools-dev
We can create the build environment (a Docker image) using this Dockerfile:
FROM ubuntu:14.04

COPY deps.txt /tmp/

RUN apt-get update && \
    xargs apt-get -y install < /tmp/deps.txt && \
    apt-get clean && \
    rm /tmp/deps.txt
So let's create it:
$ docker build -t hello-env .
There you have a nice, unchanging system image that you can put in a Docker registry and share with your teammates.
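For example, assuming a registry is reachable at registry.example.com (a placeholder hostname, not part of this setup), sharing the image could look roughly like this:

$ docker tag hello-env registry.example.com/hello-env
$ docker push registry.example.com/hello-env

Teammates can then fetch it with docker pull registry.example.com/hello-env instead of building it themselves.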
Next we need two shell scripts to help us with it. in-docker launches a Docker container with the working directory mounted inside it (make it executable with chmod +x in-docker):
#!/bin/sh

uid=`id -u`
name=`whoami`
dir=`readlink -f .`

exec docker run --rm --tty --volume=$dir:$dir --workdir=$dir hello-env \
	sh docker-setup.sh $uid $name $HOME "$@"
docker-setup.sh is run inside the container in order to create a user account before executing the actual build command:
uid=$1
shift
name=$1
shift
home=$1
shift

chown $uid:$uid $home
adduser --uid $uid --disabled-password --gecos "" --quiet $name

exec sudo --set-home -u $name "$@"
Finally, we can build the program:
$ ./in-docker make
cc -o hello hello.c -ltcmalloc
$ ./hello
./hello: error while loading shared libraries: libtcmalloc.so.4: cannot open shared object file: No such file or directory
The resulting binary appears in the working directory as usual, but since google-perftools isn't installed on the host, we can't run it there... except inside the container:
$ ./in-docker ./hello
hello world
------------------------------------------------
MALLOC:        16768 (    0.0 MiB) Bytes in use by application
MALLOC: +     933888 (    0.9 MiB) Bytes in page heap freelist
MALLOC: +      97696 (    0.1 MiB) Bytes in central cache freelist
MALLOC: +          0 (    0.0 MiB) Bytes in transfer cache freelist
MALLOC: +        224 (    0.0 MiB) Bytes in thread cache freelists
MALLOC: +    1142936 (    1.1 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =    2191512 (    2.1 MiB) Actual memory used (physical + swap)
MALLOC: +          0 (    0.0 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =    2191512 (    2.1 MiB) Virtual address space used
MALLOC:
MALLOC:             10 Spans in use
MALLOC:              1 Thread heaps in use
MALLOC:           8192 Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.
Now, imagine a test suite which requires a database to run. Instead of installing a database on your laptop and resetting it to a known state before each run, we can bundle it in our build environment: install and prepare it in the Dockerfile, and start it in the docker-setup.sh script. Couldn't be simpler.
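For instance, with PostgreSQL as a stand-in database (any other would work the same way), the additions might look roughly like this; the package name, createuser/createdb invocations and the testdb name are illustrative, not tested against this exact setup:

# Add the server package to deps.txt:
postgresql

# In docker-setup.sh, before the final exec line, start the service
# and create an empty test database owned by the build user:
service postgresql start
sudo -u postgres createuser $name
sudo -u postgres createdb --owner=$name testdb

Since in-docker starts a fresh container on every invocation (--rm), the database always begins from the state baked into the image, so there is nothing to reset between test runs.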
Finally, we probably also want to use Docker to deploy the built program. One approach is to create a base image with the runtime dependencies, and build the build-environment image on top of it (instead of directly on the Ubuntu image). The final deployment image can then be built on the same base image, without pulling in the unnecessary build dependencies.
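As a rough sketch of that layering (the hello-base tag, the directory layout and the libgoogle-perftools4 runtime package are assumptions, not part of the setup above), the three Dockerfiles might look like this, each kept in its own directory and built with docker build -t as before:

# base/Dockerfile -- runtime dependencies only, tagged hello-base
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get -y install libgoogle-perftools4 && \
    apt-get clean

# build/Dockerfile -- build environment on top of the base, tagged hello-env
FROM hello-base
COPY deps.txt /tmp/
RUN apt-get update && \
    xargs apt-get -y install < /tmp/deps.txt && \
    apt-get clean && \
    rm /tmp/deps.txt

# deploy/Dockerfile -- deployment image on the same base
FROM hello-base
COPY hello /usr/local/bin/
CMD ["hello"]

The deployment image gets libtcmalloc from the shared base, but never sees the compiler or the -dev packages.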