
Dockerizing your Bamboo builds with AWS

Recently I was working with a customer on automating their Bamboo-based build environment. They wanted to use elastic agents in Amazon Web Services to run their builds, so that they could scale capacity up and down on demand.

 

It was a fun project, but not an easy one! We ran into some interesting challenges along the way, so I thought I'd share some of the lessons we learned in this blog post. Let's get straight to the point; these are the challenges we encountered:

  1. The first Docker task on a freshly started elastic agent always hung.
  2. The disk of the elastic image was too small for some of the builds.
  3. Docker created files as root, preventing Bamboo from cleaning up the build results.
  4. Fetching a Docker image from a private repository required authentication, which is not supported out of the box.

Let's take a closer look at these challenges to see how we dealt with them...

 

The first Docker task on a freshly started agent always hung

We chose Docker as the preferred way of building the projects. The advantage of using Docker is that the developer is in charge of the exact dependencies and configuration of the build environment.

When we started to Dockerize the build and run it on a remote elastic agent, we ran into the same problem every time: the first Docker task on the freshly started elastic agent would hang, and the build would eventually be killed by the much appreciated "Hung Build Killer" add-on. Once that initial build was killed, all subsequent builds ran fine. When I logged in to the elastic agent and manually ran a Docker login command before the build started, that first build ran fine as well...

 

A support ticket with Atlassian taught me that this is a known issue with the Ubuntu stock images when they run on an instance type that uses block storage (an EBS volume). The workaround is extremely simple: just restart the Docker service during the initialization of the elastic image. To do so, follow these steps:

  1. Log in to Bamboo as an administrator
  2. Go to "Image configurations" in the admin section of Bamboo
  3. Edit the elastic image configuration that is used to start the elastic agent
  4. Add the following line to the "Instance startup script": service docker restart
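
For reference, the relevant part of our instance startup script ended up looking roughly like this (a sketch; depending on the base image you may need sudo or systemctl to restart the service):

# Work around the known issue with Ubuntu stock images on EBS-backed instance types:
# restarting the Docker daemon prevents the first Docker task from hanging.
service docker restart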


The disk of the elastic image was too small for some of the builds

Dockerizing everything gives you an extremely controlled build environment that can even differ per branch. That is a huge advantage, but it also comes with a downside: our Docker images ranged in size from ~50 MB to over 900 MB. When each build uses its own image, you soon need gigabytes of disk space just to store them all. Add to that the number of repositories you check out on the elastic agent and you quickly hit the disk space limit.

 

The elastic stock images provided by Atlassian have a fairly small disk of 8 GB. Almost 2 GB is already taken by the OS and all installed capabilities, Docker requires an extra 4.5 GB, and some of the larger builds need another 2 GB to complete. Even when we cleaned up after every build, we were still short on disk space. So we needed a bigger disk, but we would rather not create a custom elastic image just for that.
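
As an aside, the per-build cleanup we did was nothing fancy; on a reasonably recent Docker version, something like the following (a sketch, adjust to the Docker version on your image) reclaims the space used by stopped containers and dangling images:

# Remove stopped containers, dangling images and unused networks after the build
docker system prune -f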

 

Luckily, Atlassian recognizes the problem and provides a solution. In the image configuration in Bamboo, there is the option to "Automatically attach an Amazon Block Storage volume to new elastic image." When you select this option, you have to enter the ID of the EBS snapshot to attach. Whenever a new elastic instance is started from that image configuration, the additional EBS volume is automatically created from the snapshot and mounted on the new instance. Just make sure to check "Use legacy EBS handling" in the instance configuration as well. Atlassian has written a knowledge base article about it, including how to create the EBS volume (see the references).
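
If you still need to create that snapshot, the rough flow is: create a volume, attach it to a temporary instance, put the required directory layout on it, and snapshot it. A hypothetical sketch with the AWS CLI (size, availability zone and volume ID are placeholders; see the Atlassian knowledge base article in the references for the exact layout Bamboo expects):

# Create an empty volume in the same availability zone as the elastic instances
aws ec2 create-volume --size 50 --availability-zone us-east-1a --volume-type gp2
# ... attach it to a temporary instance, create a filesystem and the expected directory layout ...
# Snapshot it; the resulting snapshot ID goes into the Bamboo image configuration
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Bamboo elastic agent build volume"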

 

Docker created files as root, preventing Bamboo from cleaning up the build results

Docker behaves differently depending on the platform you choose. On Linux, commands inside a container run as root by default, while on Mac OS X they effectively run as the logged-in user. The effect on our Bamboo agents was that files created during the build were owned by root, so the Bamboo agent could not completely clean up after the build due to insufficient permissions on files in the build directory.
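
You can reproduce the problem on any Linux Docker host with a quick test (image and paths are just an example):

# A file created by the container in a bind-mounted directory ends up owned by root
docker run --rm -v "$(pwd)":/data ubuntu:16.04 touch /data/artifact
ls -l artifact   # owned by root:root, which the bamboo user cannot remove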

 

In order to fix that, the commands in the Docker container should run as the logged-in user (the user that runs the Bamboo agent). We used gosu to make that happen. Gosu works similarly to su or sudo, but there is one very important difference: where su and sudo create two processes (the actual command runs as a child of the su/sudo process), gosu runs only a single process because it replaces itself with the command. The advantage is that the Docker container exits with the exit code of the actual command, so a failing command makes the build fail. With su or sudo, the exit code of the invoked command is not propagated, so the build would never fail.

 

The Docker image to fix the issue is quite simple:

 

FROM ubuntu:16.04
 
RUN apt-get update && \
    apt-get install -y gosu && \
    rm -rf /var/lib/apt/lists/*
 
ADD gosu-entrypoint.sh /
 
RUN chmod 750 /gosu-entrypoint.sh
 
ENTRYPOINT ["/gosu-entrypoint.sh"]
 
CMD ["/bin/bash"]


This Dockerfile should speak for itself. The main line here is the ENTRYPOINT line, which points to a shell script; its source is shown below. Running just this base image will start a bash shell as the currently logged-in user.

 

The shell script that is used as the entrypoint does the magic here. This is the source code:

 

#!/usr/bin/env bash
 
# The ownership of this mounted file (or directory) tells us which host user to run as
WHO=/host_user_check
 
stat "$WHO" > /dev/null 2>&1
if [[ $? != 0 ]]; then
  echo "You must mount a file to $WHO in order to properly assume the user"
  exit 1
fi
 
CURRENT_USER_ID=$(id -u)
CURRENT_GROUP_ID=$(id -g)
USERID=$(stat -c %u "$WHO")
GROUPID=$(stat -c %g "$WHO")
 
# Recreate the ubuntu user and group with the UID/GID of the mounted file's owner
deluser ubuntu > /dev/null 2>&1
if [[ $GROUPID == $CURRENT_GROUP_ID ]]; then
  echo "File mounted to $WHO already has the same group id as the current user."
else
  groupadd -g $GROUPID ubuntu
fi
 
if [[ $USERID == $CURRENT_USER_ID ]]; then
  echo "File mounted to $WHO already has the same user id as the current user. Running command as the current user."
  exec "$@"
else
  useradd -m -s /bin/bash -u $USERID -g $GROUPID ubuntu
  exec gosu ubuntu "$@"
fi

It's important to notice the line that sets WHO to /host_user_check: that path must be available inside the container. So in order to use this Docker image, make sure to mount a local file or directory to this specific path. The script reads the ownership of that file and runs the command as that user.
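
To give an idea of how this is used from a Bamboo script task, here is a rough example (image name, paths and build command are placeholders):

# Mount the working directory as the build directory and as the user check file,
# so the container runs the build as the owner of that directory (the bamboo user)
docker run --rm \
  -v "$(pwd)":/build \
  -v "$(pwd)":/host_user_check \
  -w /build \
  yourcompany/gosu-base:latest \
  ./build.sh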
 

Fetching a Docker image from a private repository required authentication, which is not supported out of the box.

Dockerizing everything in your build pipeline almost always requires a private Docker registry. Building every image from scratch before using it would increase build times tremendously, and that is not a great idea. Having some predefined Docker images available within your organisation makes the lives of developers much easier.

 

Your private Docker registry is of course password protected to prevent unauthorised access. Bamboo's Docker task for running Docker containers does not allow you to fill in a username and password, so you need some other way to authenticate.

 

You could use a script that runs docker login every time you need to fetch an image, but that would require storing your username and password in a script file in your version control system. And if there is one development guideline that should be strictly followed, it's to never store passwords in your repositories! Clearly, we needed another way to fix this.

 

Our private Docker registry is hosted in Artifactory. In our case, we decided to put the login credentials in the Docker config file (~/.dockercfg). That means we needed to store the credentials somewhere they could be viewed, but in a reasonably safe way. First of all, here are the steps to configure the credentials on the elastic agent:

  1. Log in to Bamboo as an administrator.
  2. Edit the elastic image configuration in "Image configurations".
  3. Add the following lines to the "Instance startup script":

 

echo '{ "artifactory.yourcompany.com:6555": { "auth": "", "email": "" } }' > /home/bamboo/.dockercfg
chown bamboo: /home/bamboo/.dockercfg
chmod 0600 /home/bamboo/.dockercfg


This way, the credentials are secured in multiple ways:

  1. The actual credential is stored only on the elastic agent, which is destroyed when it is no longer in use anyway.
  2. The file storing the credential is only readable by the user that runs the Bamboo process.
  3. Only Bamboo administrators are able to view the Instance startup script.
  4. The actual password is not stored, just an authentication token (as a base64 encoded string). This is a convenience offered by Artifactory, which hosts the Docker registry. To retrieve the config, use the following command (see the sketch below): curl -uadmin:password "https://artifactory.yourcompany.com:6555/v2/auth"
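
For completeness: you run that curl command once, outside of Bamboo, and paste the JSON it returns into the echo line of the instance startup script shown earlier. A rough sketch (hostname and credentials are the placeholders from the example above; check the JFrog documentation in the references for the exact endpoint of your Artifactory version):

# Run once, outside of Bamboo, to obtain the .dockercfg contents for the registry
curl -uadmin:password "https://artifactory.yourcompany.com:6555/v2/auth"
# Example output (the "auth" value is a base64 encoded token, not the plain password):
# { "artifactory.yourcompany.com:6555": { "auth": "<base64 token>", "email": "you@yourcompany.com" } }
# Paste this JSON into the echo '...' > /home/bamboo/.dockercfg line of the startup script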

 

Key takeaways

Dockerizing your environment has many advantages. Your agents are much easier to maintain because they are all the same, and the development team is much more in control of the build process. The entire build process is now under version control, which also makes it possible to vary the build configuration between branches.

However, there is also a downside. Docker images consume quite some disk space, so you might need to increase the disk capacity. Even if you're using elastic agents, the available space may be too limited to run even just a few builds.

 

Last but not least, the behaviour of the images differs per platform, which can make the outcome of the build slightly different on different systems.

 

References

  1. https://confluence.atlassian.com/bamboo/about-elastic-bamboo-289277118.html
  2. https://confluence.atlassian.com/bamboo/stock-images-289277463.html
  3. https://confluence.atlassian.com/bamboo0602/configuring-elastic-instances-to-use-the-ebs-938867228.html
  4. https://github.com/tianon/gosu
  5. https://www.jfrog.com/confluence/display/RTF/Docker+Registry
  6. https://www.jfrog.com/confluence/display/RTF/Advanced+Topics#AdvancedTopics-SettingYourCredentialsManually