One of the problems that you face with many projects that you want to Dockerize is figuring out how to deploy them. This is particularly challenging if you want to deploy and scale multiple projects on the same server or set of servers since most deployment solutions target a single application or project at a time.
Elastic Beanstalk, for example, only supports single-site deployment from its GUI out of the box, and the same is true of its single-container Docker solution. Likewise, other services such as Heroku and Google App Engine focus on single-app deployments.
As such, it can be particularly challenging to economically deploy multiple, smaller sites. Often, you end up manually configuring a server or writing provisioning scripts for it. Unfortunately, that does not lend itself well to scaling or to creating new instances of the server. Worse, what if you forget how you provisioned the server?
I ran into this recently when I needed to deploy multiple, smallish WordPress sites to a single EC2 instance. None of them were large enough to warrant their own instance, yet any one of them could take off at any given time and flood a tiny server with requests.
After mulling the situation over, I decided that a good approach would be to utilize Elastic Beanstalk’s multi-container Docker environment.
But then I was faced with the problem of how to add the code for multiple sites to the Docker configuration. From my experience with docker-compose.yml files, I thought about using Docker volumes, but the code for the various sites would not live in the same repo (nor should it), and I wanted a solution that could create a reproducible dev environment without nasty hacks to get something up and running. So I crossed that off my list.
The solution I came to was creating a master repository that would hold the Dockerrun.aws.json file and the Nginx configuration; each of the other sites would be its own Docker image, with its own Dockerfile, pulled down when launching or updating the server.
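To make that concrete, a minimal Dockerrun.aws.json for this kind of setup might look something like the sketch below: one Nginx container routing traffic to two site containers pulled from a registry. The image and container names here are hypothetical placeholders, and the memory values are just illustrative.

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "links": ["site-one", "site-two"]
    },
    {
      "name": "site-one",
      "image": "myregistry/site-one:latest",
      "essential": true,
      "memory": 256
    },
    {
      "name": "site-two",
      "image": "myregistry/site-two:latest",
      "essential": true,
      "memory": 256
    }
  ]
}
```

Because the proxy links to the site containers by name, adding a new site is mostly a matter of appending another container definition and a matching Nginx server block.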
This would allow each site to maintain its own set of requirements while allowing new sites to be added seamlessly and nearly effortlessly in the future. As Billy Mays, the famous infomercial marketer, used to say, “but wait, there’s more!”
A side-benefit of Dockerizing these applications and using Elastic Beanstalk is that the server configuration and deployment information is now hosted in a repository. That way, if a developer leaves, or if you forget, as I often do, how a particular server is configured, it is relatively easy to reconstruct what is going on.
If you ever need to rebuild a server, it is just a few simple commands or an automated build trigger to set up a new one. That paves the way for blue/green deploys, greater fault tolerance, and rapid application development.
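With the EB CLI installed and the master repository checked out, those “few simple commands” might look roughly like this. The application and environment names are hypothetical, and the exact platform string may vary by EB CLI version.

```
# One-time setup: tie the master repository to an EB application
eb init my-multisite-app --region us-east-1

# Spin up a brand-new environment from the committed configuration
eb create my-multisite-env

# Or push an updated Dockerrun.aws.json to an existing environment
eb deploy my-multisite-env
```

Since everything the environment needs lives in the repository, the same commands can be run from a CI pipeline as an automated build trigger.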
It’s a win-win.
In future articles, I am going to go over the master repository and the Dockerrun.aws.json file that controls this setup. Then I’ll move on to the individual repositories and their Dockerfiles. Once we have that covered, I’ll go over handling environment variables and how to deploy them to each site. After that, I’ll detail how to deploy the Elastic Beanstalk instance, and we’ll go from there.