Several months ago I was having a conversation with a friend about our Chef workflow for managing and provisioning servers, as well as provisioning our local machines using Vagrant. That conversation led to us talking about Docker, and how Docker is going to change everything in the devops space. It was after that conversation that I got a hold of The Docker Book and started getting up to speed with Docker. We don’t have a concrete process in place yet, but I want to share my thought process in creating a Vagrant / Docker workflow that will work for our team (and hopefully get valuable feedback from others who are thinking about the same things). This post is part 1 of a series of posts to hash these ideas out.
Background
We are long-time users of Vagrant. Most of our projects include a Vagrantfile, which allows us to just type vagrant up inside the project folder to provision a local environment and start developing. We have used both Puppet and Chef as provisioners to set up the entire local environment.
We have also implemented various deployment processes via chef-client, Capistrano, and Packer.
The Packer workflow we use isn’t entirely open source, as one key component of that workflow hasn’t yet been open sourced, but I’ve heard from our client that they will likely be releasing that tool to the public.
Objective
My goal is to create an end-to-end process, using Docker, that encompasses all the automation we are used to having with local environment setup and full-blown virtual machines. I also want an easy way to automatically create deployable Docker images (with our application baked into the image) upon each code release, much like the Packer workflow we’re used to. We would likely push these images (artifacts) up to a private Docker hub.
It would also be great to have a command line tool (probably using Fabric) that facilitates creating new containers on a specified Docker host from the freshly baked Docker image, cycling in the new containers and turning off the old ones (probably using HAProxy). The tool should handle rollbacks as well, turning older containers back on, and it should clean up after itself, removing containers that are more than 5 releases old.
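To make that concrete, here is a rough sketch of the release cycle such a tool would automate, expressed as plain docker commands. This is only an illustration: the registry URL, image names, and ports are hypothetical placeholders, and re-pointing HAProxy is assumed to happen between the run and stop steps.
$ docker pull registry.example.com/myapp:release-42
$ docker run -d --name myapp-release-42 -p 8042:80 registry.example.com/myapp:release-42
$ docker stop myapp-release-41
# rollback is the reverse: docker start myapp-release-41, re-point HAProxy,
# then docker stop myapp-release-42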
Running Docker Locally
There are several ways to go about setting up Docker locally. I don’t intend to walk you through how to install Docker, you can find step-by-step installation instructions based on your OS in the Docker documentation. However, I am leaning toward taking an unconventional approach to installing Docker locally, not mentioned in the Docker documentation. Let me tell you why.
Running a Docker host
For background, we are specifically an all Mac OS X shop at ActiveLAMP, so I’ll be speaking from this context.
Unfortunately, you can’t run Docker natively on OS X; Docker needs to run in a virtual machine with an operating system such as Linux. If you read the Docker OS X installation docs, you’ll see there are two options for running Docker on Mac OS X: Boot2Docker or Kitematic.
Running either of these two packages looks appealing for getting Docker up locally very quickly (and you should use one of them if you’re just trying out Docker), but thinking big picture, and about how we plan to use Docker in production, it seems we should take a different approach locally. Let me tell you why I think you shouldn’t use Boot2Docker or Kitematic locally, but first, a rabbit trail.
Thinking ahead (the rabbit trail)
My opinion may change after gaining more real world experience with Docker in production, but the mindset that I’m coming from is that in production our Docker hosts will be managed via Chef.
Our team has extensive experience using Chef to manage infrastructure at scale. It doesn’t seem quite right to completely abandon Chef yet, since Docker still needs a machine to run the Docker host. Chef is great for machine level provisioning.
My thought is that we would use Chef to manage the various Docker hosts that we deploy containers to, and use the Dockerfile with Docker Compose to manage the actual app container configuration. Chef would be used in a much more limited capacity, only managing configuration on a system level, not an application level. One thing to mention is that we have yet to dive into the Docker-specific hosts such as AWS ECS, dotCloud, or Tutum. If we end up adopting a service like one of these, we may end up dropping Chef altogether, but we’re not ready to let go of those reins yet.
One step at a time for us. The initial goal is to get application infrastructure into immutable containers managed by Docker. We’re not ready to make a decision on what manages Docker or where we host it; that comes next.
Manage your own Docker Host
The main reason I was turned off from using Boot2Docker or Kitematic is that they create a virtual machine in VirtualBox or VMware from a default box / image that you can’t easily manage with configuration management. I want control of the host machine that Docker runs on, locally and in production. This is where Chef comes into play, in conjunction with Vagrant.
Local Docker Host in Vagrant
As I mentioned in my last post, we are no strangers to Vagrant. Vagrant is great for managing virtual machines. If Boot2Docker or Kitematic is going to spin up a virtual machine behind the scenes in order to use Docker, then why not spin up that virtual machine with Vagrant? That way I can manage the configuration with a provisioner such as Chef. This is the reason I’ve decided to go down the Vagrant-with-Docker route instead of Boot2Docker or Kitematic.
The latest version of Vagrant ships with a built-in Docker provider, so you can manage Docker containers via the Vagrantfile. The Vagrant Docker integration was a turn-off to me initially because it didn’t seem very Docker-esque: Vagrant appeared to just abstract established Docker workflows (specifically Docker Compose) into Vagrant syntax. However, within the container Vagrantfile I saw that you can also build images from a Dockerfile and launch those images into a container. It didn’t feel so distant from Docker any more.
It seems there might be a little overlap between what Vagrant and Docker do, but at the end of the day it’s a matter of figuring out the right combination of using the tools together, the boundary being that Vagrant should be used for “orchestration” and Docker for application infrastructure.
When all is set up, we will have two Vagrantfiles to manage: one to define containers and one to define the host machine.
Setting up the Docker Host with Vagrant
The first thing to do is to define the Vagrantfile for your host machine. We will be referencing this Vagrantfile from the container Vagrantfile. The easiest way to do this is to just type the following in an empty directory (your project root):
$ vagrant init ubuntu/trusty64
You can configure that Vagrantfile however you like. Typically you would also use a tool like Chef Solo, Puppet, or Ansible to provision the machine. For now, just to get Docker installed on the box, we’ll add a provision statement to the Vagrantfile. We will also give the Docker host a hostname and a port mapping, since we know we’ll be creating a Drupal container that should EXPOSE port 80. Open up your Vagrantfile and add the following:
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
This ensures that Docker is installed on the host when you run vagrant up, and maps port 4567 on your local machine to port 80 on the Docker host (guest machine). Your Vagrantfile should look something like this (with all the comments removed):
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
end
Note: This post is not intended to walk through the fundamentals of Vagrant; for further resources on how to configure the Vagrantfile, check out the docs.
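Before moving on, it’s worth sanity-checking that the provisioner actually installed Docker. From the project root (where the Vagrantfile currently lives), something like this should do it:
$ vagrant up
$ vagrant ssh -c "docker --version"   # prints the Docker version if provisioning succeeded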
As I mentioned earlier, we are going to end up with two Vagrantfiles in our setup. I also mentioned that the container Vagrantfile will reference the host Vagrantfile. This means the container Vagrantfile is the configuration file we want used when vagrant up is run. We need to move the host Vagrantfile to another directory within the project, out of the project root directory. Create a host directory and move the file there:
$ mkdir host
$ mv Vagrantfile !$
Bonus Tip: The !$ destination that I used when moving the file is a shell shortcut to use the last argument from the previous command.
Define the containers
Now that we have the host Vagrantfile defined, let’s create the container Vagrantfile. Create a Vagrantfile in the project root directory with the following contents:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end
To summarize the configuration above: we are using the Vagrant Docker provider, we have specified the path to the Docker host Vagrant configuration that we set up earlier, and we have defined a container using the Drupal image from the Docker Registry, along with exposing some ports on the Docker host.
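For reference, that container definition amounts to roughly the following command on the Docker host. This is my approximation of what Vagrant runs, not its literal invocation:
$ docker run -d --name drupal-container -p 80:80 drupal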
Start containers on vagrant up
Now it’s time to start up the container. It should be as easy as going to your project root directory and typing vagrant up. It’s almost that easy. For some reason, after running vagrant up I get the following error:
A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.
Command: "docker" "ps" "-a" "-q" "--no-trunc"
Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
Stdout:
I’ve gotten around this by just running vagrant up again. If anyone has ideas about what is causing that error, please feel free to leave a comment.
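One thing worth checking, though this is purely speculation on my part: the Docker provisioner adds the vagrant user to the docker group, and group membership only takes effect on a fresh login, which could explain a one-time permission error on the Docker socket. You can poke at this from the host machine:
$ cd host
$ vagrant ssh -c "groups"   # does the output include 'docker'?
$ vagrant ssh -c "sudo usermod -aG docker vagrant"   # re-add the group if it is missing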
Drupal in a Docker Container
You should now be able to navigate to http://localhost:4567 and see the Drupal installation screen. Go ahead and install Drupal using a SQLite database (we didn’t set up a MySQL container) to confirm that everything is working. Pretty cool stuff!
Development Environment
There are other things I want to accomplish with our local Vagrant environment to make it easy to develop on, such as setting up synced folders and using the vagrant rsync-auto tool. I also want to customize our Drupal builds with Drush Make, to make developing on Drupal much more efficient when adding modules, updating core, etc. I’ll leave those details for another post, as this one has become very long already.
Conclusion
As you can see, you don’t have to use Boot2Docker or Kitematic to run Docker locally. If you just want to figure out how Docker works, I would advise using one of those packages. Thinking longer term, though, your local Docker host should be managed the same way your production Docker host(s) are managed. Using Vagrant instead of Boot2Docker or Kitematic allows me to manage my local Docker host the same way I would manage production Docker hosts, using tools such as Chef, Puppet, or Ansible.
Requirements of a local dev environment
Before we get started, it is always a good idea to define what we expect to get out of our local development environment and define some requirements. You can define these requirements however you like, but since ActiveLAMP is an agile shop, I’ll define our requirements as user stories.
User Stories
As a developer, I want my local development environment setup to be easy and automatic, so that I don’t have to spend the entire day following a list of instructions. The fewer the commands, the better.
As a developer, my local development environment should run the same exact OS configuration as stage and prod environments, so that we don’t run into “works on my machine” scenarios.
As a developer, I want the ability to log into the local dev server / container, so that I can debug things if necessary.
As a developer, I want to work on files local to my host filesystem, so that the IDE I am working in is as fast as possible.
As a developer, I want the files that I change on my localhost to automatically sync to the guest filesystem that is running my development environment, so that I do not have to manually push or pull files to the local server.
Now that we know what done looks like, let’s start fulfilling these user stories.
Things we get for free with Vagrant
We have all worked on projects that have a README file with a long list of steps just to set up a working local copy. To fulfill the first user story, we need to encapsulate all steps, as much as possible, into one command:
$ vagrant up
We got a good start on our one-command setup in my last blog post. If you haven’t read that post yet, go check it out now; we are going to be building on it in this post. My last post essentially resolves the first story in our user story list. This is the essence of using Vagrant: to aid in setting up virtual environments with very little effort, and to dispose of them when no longer needed, with vagrant up and vagrant destroy, respectively.
Since we will be defining Docker images and/or using existing Docker images from DockerHub, user story #2 is fulfilled as well.
For user story #3, it’s not as straightforward to log into your Docker host. Typically with Vagrant you would type vagrant ssh to get into the virtual machine, but since our host machine’s Vagrantfile is in a subdirectory called /host, you have to change into that directory first.
$ cd host
$ vagrant ssh
Another way you can do this is by using the vagrant global-status command. You can execute that command from anywhere, and it will provide a list of all known virtual machines with a short hash in the first column. To ssh into any of these machines, just type:
$ vagrant ssh <short-hash>
Replace <short-hash> with the actual hash of the machine.
Connecting into a container
Most containers run a single process and may not have an SSH daemon running. You can use the docker attach command to connect to any running container, but beware: if you didn’t start the container with STDIN and STDOUT attached, you won’t get very far.
Another option you have for connecting is using docker exec to start an interactive process inside the container. For example, to connect to the drupal-container that we created in my last post, you can start an interactive shell using the following command:
$ sudo docker exec -t -i drupal-container /bin/bash
This will return an interactive shell on the drupal-container that you can poke around in. Once you disconnect from that shell, the process will end inside the container.
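docker exec is also handy for one-off, non-interactive commands. For example, to peek at the document root of the Drupal container without opening a shell:
$ sudo docker exec drupal-container ls -la /var/www/html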
Getting files from host to app container
Our next two user stories have to do with working on files native to the localhost. When developing our application, we don’t want to bake the source code into a docker image. Code is always changing and we don’t want to rebuild the image every time we make a change during the development process. For production, we do want to bake the source code into the image, to achieve the immutable server pattern. However in development, we need a way to share files between our host development machine and the container running the code.
We’ve probably tried every approach available to us when it comes to working on shared files with Vagrant. VirtualBox shared folders are just way too slow. NFS shared folders were a little bit faster, but still really slow. We’ve used sshfs to connect the remote filesystem directly to the localhost, which created a huge performance increase in terms of how the app responded, but it was a pain in the neck in terms of how we used VCS, and it caused performance issues with the IDE. PHPStorm had to index files over a network connection, albeit a local network connection, which was still noticeably slower when working on large codebases like Drupal.
The solution that we use to date is rsync, specifically vagrant-gatling-rsync. You can check out the vagrant-gatling-rsync plugin on GitHub, or just install it by typing:
$ vagrant plugin install vagrant-gatling-rsync
Syncing files from host to container
To get files from our localhost to the container, we must first get our working files to the Docker host. Using the host Vagrantfile that we built in my last blog post, this can be achieved by adding one line:
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
Your Vagrantfile within the /host directory should now look like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end
We are syncing a Drupal profile from the drupal directory off of the project root to the /srv/myprofile directory within the Docker host.
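Keep in mind that rsync synced folders only sync when the machine boots, on vagrant up or vagrant reload. To push changes over on demand, run a one-off sync from within the host directory:
$ cd host
$ vagrant rsync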
Now it’s time to add an argument for Vagrant to use when it executes docker run. To do this, we can specify the create_args parameter in the container Vagrantfile. Add the following line to the container Vagrantfile:
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
This file should now look like:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end
The parameter we are passing maps the directory we are rsyncing to on the Docker host into the profiles directory of the Drupal installation that ships with the Drupal Docker image from DockerHub.
Create the installation profile
This blog post doesn’t intend to go into how to create a Drupal installation profile, but if you aren’t using profiles for building Drupal sites, you should definitely have a look. If you have questions about why using Drupal profiles is a good idea, leave a comment.
Let’s create our simple profile. Drupal requires two files to create a profile. From the project root, type the following:
$ mkdir -p drupal/profiles/myprofile
$ touch drupal/profiles/myprofile/{myprofile.info,myprofile.profile}
Now edit each file that you just created with the minimum information that you need.
myprofile.info
name = Custom Profile
description = My custom profile
core = 7.x
myprofile.profile
<?php
function myprofile_install_tasks() {
// Add tasks here.
}
Start everything up
We now have everything we need in place to just type vagrant up and have a working copy. Go to the project root and run:
$ vagrant up
This will build your Docker host as well as create your Drupal container. As I mentioned in a previous post, starting up the container sometimes requires me to run vagrant up a second time. I’m still not sure what’s going on there.
After everything is up and running, you will want to run the rsync-auto command for the Docker host, so that any changes you make locally traverse down to the Docker host and then to the container. The easiest way to do this is:
$ cd host
$ vagrant gatling-rsync-auto
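From another terminal, you can also confirm that the profile made it all the way into the container by listing the profiles directory from the Docker host:
$ cd host
$ vagrant ssh -c "sudo docker exec drupal-container ls /var/www/html/profiles"
You should see myprofile listed next to Drupal’s built-in profiles.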
Now visit the URL of your running container at http://localhost:4567 and you should see the new profile that you’ve added.
Conclusion
We covered a lot of ground in this blog post. We were able to accomplish all of the stated requirements above with just a little tweaking of a couple of Vagrantfiles. We now have files that are shared from the host machine all the way down to the container that our app runs on, utilizing features built into Vagrant and Docker. Any files we change in our installation profile on our host immediately sync to the drupal-container on the Docker host.
At ActiveLAMP, we use a much more robust approach to build out installation profiles, utilizing Drush Make, which is out of scope for this post. This blog post simply lays out the general idea of how to accomplish getting a working copy of your code downstream using Vagrant and Docker.
Review
The instructions in this post assume you followed my previous post to get a Drupal environment set up with the custom “myprofile” profile. In that post we brought up a Drupal environment by just referencing the already-built Drupal image on DockerHub. We are going to use that same Docker image and add our custom application to it.
All the code that I’m going to show below can be found in this repo on GitHub.
Putting the custom code into the container
We need to create our own image by creating a Dockerfile in our project that extends the Drupal image we are pulling down. Create a file called Dockerfile in the root of your project that looks like the following:
FROM drupal:7.41
ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile
We are basically using everything from the Drupal image, and adding our installation profile to the profiles directory of the document root.
This is a very simplistic approach; typically there are more steps than just copying files over. In more complex scenarios, you will likely run some sort of build within the Dockerfile as well, such as Gulp, Composer, or Drush Make.
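As a purely hypothetical illustration, extra build steps would be additional RUN instructions in the Dockerfile, each wrapping an ordinary shell command like the ones below. The tools and paths here are placeholders, not part of the repo above:
composer install --no-dev --working-dir=/var/www/html/profiles/myprofile
npm install && gulp build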
Setting up Jenkins
We now need to set up a Jenkins server that will check out our code and run docker build and docker push. Let’s set up a local Jenkins container on our Docker host to do this.
Open up the main Vagrantfile in the project root and add another container to the file, like the following:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.define "jenkins" do |v|
v.vm.provider "docker" do |d|
d.vagrant_vagrantfile = "./host/Vagrantfile"
d.build_dir = "./Dockerfiles/jenkins"
d.create_args = ['--privileged']
d.remains_running = true
d.ports = ["8080:8080"]
d.name = "jenkins-container"
end
end
config.vm.define "drupal" do |v|
v.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end
end
Two things to notice in the jenkins container definition: 1) the Dockerfile for this container is in the Dockerfiles/jenkins directory, and 2) we are passing the --privileged argument when the container is run, so that our container has all the capabilities of the Docker host. We need special access to be able to run Docker within Docker.
Let’s create the Dockerfile:
$ mkdir -p Dockerfiles/jenkins
$ cd !$
$ touch Dockerfile
Now open up that Dockerfile and install Docker onto this Jenkins container:
FROM jenkins:1.625.2
USER root
# Add the new gpg key
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
# Add the repository
RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list
VOLUME /var/lib/docker
RUN apt-get update && \
apt-get -y install \
docker-engine
ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/dockerjenkins.sh" ]
We are using a little script found in The Docker Book as our entry point to start the Docker daemon as well as Jenkins. It also does some work on the filesystem to ensure cgroups are mounted correctly. If you want to read more about running Docker in Docker, go check out this article.
Boot up the new container
Before we boot this container up, edit your host Vagrantfile and set up a port forward so that port 8080 on your local machine points to port 8080 on the Docker host:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.network :forwarded_port, guest: 8080, host: 8080
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end
Now bring up the new container:
$ vagrant up jenkins
or if you’ve already brought it up once before, you may just need to run reload:
$ vagrant reload jenkins
You should now be able to hit Jenkins at the URL http://localhost:8080
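A couple of quick sanity checks are worthwhile at this point. The second command assumes the container name from the Vagrantfile above and verifies that the inner Docker daemon came up:
$ curl -I http://localhost:8080
$ cd host
$ vagrant ssh -c "sudo docker exec jenkins-container docker info"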
Install the git plugins for Jenkins
Now that you have Jenkins up and running, we need to install the Git plugins. Click on the “Manage Jenkins” link in the left navigation, then click “Manage Plugins” in the list given to you, and then click on the “Available” tab. Filter the list with the phrase “git client” in the filter box. Check the two boxes to install the plugins, then hit “Download now and install after restart”.
On the following screen, check the box to Restart Jenkins when installation is complete.
Setup the Jenkins job
It’s time to set up the Jenkins job. If you’ve never set up a Jenkins job, here is a quick crash course.
- Click the New Item link in the left navigation. Name your build job, and choose Freestyle project. Click Ok.
- Configure the git repo. We are going to configure Jenkins to pull code directly from your repository and build the Docker image from that.
- Add the build steps. Scroll down toward the bottom of the screen, click the arrow next to Add build step, and choose Execute Shell. We are going to add three build steps, shown in the sketch after this list: first we build the Docker image with docker build -t="tomfriedhof/docker_blog_post" . (notice the trailing dot), giving it a name with the -t parameter; then we log in to DockerHub; and finally we push the newly created image to DockerHub.
- Hit Save, then on the next screen hit the button that says Build Now
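Put together, the three Execute Shell build steps would look something like the sketch below. How you supply DockerHub credentials is up to you; the environment variables here are hypothetical placeholders, and the exact docker login flags vary by Docker version:
docker build -t="tomfriedhof/docker_blog_post" .
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker push tomfriedhof/docker_blog_post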
If everything went as planned, you should have a new Docker image posted on DockerHub: https://hub.docker.com/r/tomfriedhof/docker_blog_post/
Wrapping it up
There you have it: we now have an automated build that will automatically create and push Docker images to DockerHub. You can add on to this Jenkins job so that it polls your GitHub repository and automatically runs this build any time something changes in the tracking repo.
As another option, if you don’t want to go through all the trouble of setting up your own Jenkins server just to do what I showed you, DockerHub can do this for you. Go check out their article on how to set up automated builds with Docker.