Vagrant is a great tool for delivering consistent development environments in virtual machines. If you've built anything beyond a simple box with Vagrant, you might have grown tired of manually testing it. This post will show you one way of using continuous integration (CI) to test changes to your Vagrant box, using Docker to speed up the tests and allow the use of hosted CI services.
Vagrant has a built-in provider for Docker. It will use a Docker instance already on your machine or create a VM and install Docker for you. Since we're focusing on running in CI, we will ensure a Docker instance is available for Vagrant to use.
Vagrant uses SSH to communicate with the machine in order to provision it. However, it is bad practice to include an SSH server in a Docker image. Vagrant can use the Docker provider without an SSH server, but that makes our CI test less realistic. We want to test as close as we can to the real thing.
There are several images available on Docker Hub that include an SSH server. We are going to use jdeathe/centos-ssh:centos-7-2.2.3 for our example. It is a CentOS image that provides an SSH server with flexible configuration options.
We’re going to look at one of my Vagrant repos that builds a Linux developer workstation. This repo uses CircleCI to test changes. The repo is at https://bitbucket.org/double16/linux-dev-workstation. The CircleCI build is at https://circleci.com/bb/double16/linux-dev-workstation. Below is the entire configuration needed for Docker, we’ll explain it in pieces.
config.vm.provider :docker do |docker, override|
  override.vm.box = nil
  override.vm.allowed_synced_folder_types = :rsync
  docker.image = "jdeathe/centos-ssh:centos-7-2.2.3"
  docker.name = "linux-dev-workstation"
  docker.remains_running = true
  docker.has_ssh = true
  docker.env = {
    :SSH_USER => 'vagrant',
    :SSH_SUDO => 'ALL=(ALL) NOPASSWD:ALL',
    :LANG => 'en_US.UTF-8',
    :LANGUAGE => 'en_US:en',
    :LC_ALL => 'en_US.UTF-8',
    :SSH_INHERIT_ENVIRONMENT => 'true',
  }
  # There is no newline after the existing insecure key, so the new key ends up on the same line and breaks SSH
  override.ssh.insert_key = false
  override.ssh.proxy_command = "docker run -i --rm --link linux-dev-workstation alpine/socat - TCP:linux-dev-workstation:22,retry=3,interval=2"
end
It’s important to use Vagrant’s override
feature correctly. Otherwise, configuration changes made in the :docker
provider section will be applied when used with other providers.
override.vm.box = nil
docker.image = "jdeathe/centos-ssh:centos-7-2.2.3"
docker.remains_running = true
These lines disable the use of a virtual machine base image and specify a Docker image instead. The docker.remains_running line tells Vagrant this container runs until explicitly stopped, which is necessary to host an SSH server. (Vagrant can also build environments in which the container runs a command and then stops.)
docker.has_ssh = true
docker.env = {
  :SSH_USER => 'vagrant',
  :SSH_SUDO => 'ALL=(ALL) NOPASSWD:ALL',
  :LANG => 'en_US.UTF-8',
  :LANGUAGE => 'en_US:en',
  :LC_ALL => 'en_US.UTF-8',
  :SSH_INHERIT_ENVIRONMENT => 'true',
}
# There is no newline after the existing insecure key, so the new key ends up on the same line and breaks SSH
override.ssh.insert_key = false
We need to tell Vagrant that this Docker image has SSH, and then we need to set some environment variables to configure SSH and other odds and ends in the image. The docker.env hash depends on the image you choose; the README for this image describes its SSH configuration. The language settings were found by trial and error. If you use a different image, you'll need to change the docker.env hash.
The override.ssh.insert_key value is also specific to this image and is a workaround; ideally we wouldn't need it.
docker.name = "linux-dev-workstation"
override.ssh.proxy_command = "docker run -i --rm --link linux-dev-workstation alpine/socat - TCP:linux-dev-workstation:22,retry=3,interval=2"
Normally Vagrant will retry connecting to SSH if the server has not yet started listening. In the case of Docker, the port is proxied from the host machine into the container. Therefore the Docker proxy is listening on the port, even though the SSH server hasn’t started. The connection is made but then immediately closed. Vagrant considers this a fatal error and stops provisioning.
We'll work around this using socat, a capable bidirectional relay. The override.ssh.proxy_command is used to contact the SSH server, and we leverage the retry features of socat to wait until the SSH server is up. Adjust the retry= and interval= values if you have problems with provisioning.

For socat to know which container to contact, we name the container with docker.name. Change the name as desired, as long as you change it everywhere it appears in these lines.
override.vm.allowed_synced_folder_types = :rsync
Vagrant supports mounting synced folders as Docker volumes. However, CircleCI does not support mounting volumes from the repo into build containers. We limit synced folders to rsync, which only requires the SSH connection to be working.
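If your Vagrantfile declares synced folders explicitly, an rsync-type entry looks roughly like the following. This is a minimal sketch; the paths are only an illustration and are not taken from the repo.

# Hypothetical example: declare the default synced folder as rsync at the top level of the Vagrantfile
config.vm.synced_folder ".", "/vagrant", type: "rsync"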
Some things in your Vagrantfile may need to change when the Docker provider is used, because some software cannot run inside a container. For example, a virtualization package such as VirtualBox cannot run in a container. The snippet below shows an example using the Puppet provisioner; Puppet has a fact named $::virtual that can be used to detect running in a Docker container.
# VirtualBox (and building its kernel modules) cannot run inside a container
unless $::virtual == 'docker' {
  include virtualbox
  package { "kernel-devel-${::kernelrelease}": }
  -> Exec<| title == 'vboxdrv' |>
}

# Likewise, skip installing and running the Docker daemon when provisioning a container
unless $::virtual == 'docker' {
  package { 'docker':
    ensure => present,
  }
  -> service { 'docker':
    ensure => running,
    enable => true,
  }
}
CircleCI 2.0 has direct support for building with Docker. We'll use it to build our Vagrant box as a test. The test is simple: it passes if Vagrant can build the box. More sophisticated tests can be added to ensure the software on the box is running properly.
There are two ways to send commands to the Vagrant container to run tests. One is through SSH, which requires access to the private key. A better way is docker exec, which gets you into the container easily, either with shell access or by running any other command available in the container. Additional tests are left as an exercise to the reader ;).
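For example, once vagrant up succeeds, a follow-up check could use docker exec against the named container. This is a minimal sketch; rpm -q git assumes git is one of the packages the box provisions, which may not match your setup.

# Confirm the provisioned container has the software we expect (hypothetical package)
docker exec linux-dev-workstation rpm -q git
# Confirm the vagrant user's home directory was created
docker exec linux-dev-workstation test -d /home/vagrant

The full CircleCI configuration used for the build follows.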
version: 2
jobs:
  build:
    working_directory: /home/linux-dev-workstation
    docker:
      - image: pdouble16/vagrant-build-base:latest
        user: root
    steps:
      - checkout
      - setup_remote_docker
      # The 'test' here is that Vagrant can build the box. We should add more steps after this to ensure the software
      # we installed is in working order.
      - run:
          name: Test Vagrant Up
          command: |
            vagrant up --provider docker --provision
CircleCI 2.0 uses a Docker image for building, which gives you a lot of flexibility in your build tools. For testing a Vagrant box you'll need Vagrant installed, and we also need rsync for synced folder support. The pdouble16/vagrant-build-base:latest image referenced above has both of these tools and is public, so it can be used in your own projects.
In order to use Docker to run our tests we need the [setup_remote_docker](https://circleci.com/docs/2.0/building-docker-images/#overview) command. It provisions a Docker instance for the build and configures environment variables to use it.
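If you want to confirm the remote engine is reachable before running Vagrant, a sanity-check step can be added after setup_remote_docker. This step is not part of the original config; docker version is just a cheap connectivity test.

- run:
    name: Verify remote Docker
    command: docker version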
The last command, vagrant up --provider docker --provision, creates and provisions the container and fails the CI build if Vagrant returns a non-zero exit code. The --provision option isn't strictly necessary, but it is useful when testing locally because it allows you to work through provisioning issues without re-creating the container each time.
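When iterating locally, the cycle looks something like this (standard Vagrant commands, shown only to illustrate why --provision is convenient):

vagrant up --provider docker --provision   # first run: create the container and run the provisioners
# fix a provisioning problem, then re-run only the provisioners against the same container
vagrant provision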
You can run CircleCI builds locally by installing the circleci CLI. It is necessary to use an absolute path for working_directory.
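With the CLI installed, the build job can be run locally against your own Docker daemon. At the time of writing the command was circleci build; newer versions of the CLI use circleci local execute instead.

# from the repo root, with .circleci/config.yml present
circleci build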
We've seen how we can leverage the Vagrant Docker provider to test our Vagrant box in a CI environment, and the additional configuration is minimal. CircleCI was used in this post, but other CI tools will work as well. It is also recommended to add tests beyond provisioning; one way is using the docker exec command.