We have been working days, nights and weekends (ok, not weekends, and barely even nights) on producing a new product that will change the way people rate the technologies that they use to produce products of their own.
Choosing a technology stack for your project is one of the hardest decisions any team will ever make. Our tool (Hive) is supposed to lessen the pain; ironically, we have experienced much pain in deciding which technologies to use ourselves. One technology that we have finally zeroed in on is Docker.
Docker is an interesting technology. Simply put, it provides the ability to run what amount to multiple separate computers on a single piece of hardware. But wait... Virtual Machines already do that, so why would I use Docker? Well, Docker runs all of these separate computers (called containers) on the same kernel, effectively removing most of the VM overhead.
So, why did the Hive team decide on using Docker? Well, we tried Chef for provisioning a deployment environment and failed horribly. After that, we asked for advice, and a couple of fine individuals in our own IT department (thanks Matt and Christian!) pointed us towards Docker. It still posed many hurdles for us, but we felt like we were making continuous progress towards our goal of deploying our application on any hardware. We currently deploy to CentOS 7 and Ubuntu 14.04 environments with no differences in build steps between the two. So, what did we do? That leads me into the...
Our intrepid IT staff first set us up with CentOS 7 and Ubuntu 14.04 environments with Docker installed and ready to go. (This article will not go into that process, so please ask the nearest expert you can find if you want to get started with a Docker environment.) Our project consists of one main repository, the Hive repository, with two submodules: HiveAPI and HiveWeb. Below is what our project directory structure looks like:
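A simplified sketch of that layout, inferred from the repository and submodules just described (the contents of each submodule are omitted here):

```
Hive/              # main repository, cloned by Jenkins into $HIVE_ROOT
├── HiveAPI/       # Rails API (submodule)
└── HiveWeb/       # Ember application (submodule)
```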
First, we created Docker containers for our Rails application and its associated PostgreSQL database. The PostgreSQL database was simple: it is based on an existing Docker image and only required us to spin up its own container with a few settings. Simply put, the 'run -d' command below starts a container in detached mode, named 'database'. Two environment variables are added for the user and password, and the container is based on the postgres image for the PostgreSQL database. Images in Docker are the guts of what is going to be running in any container you set up, and this image already exists in the main Docker registry (https://hub.docker.com).
$ docker run -d --name database -e POSTGRES_USER="su_postgres" -e POSTGRES_PASSWORD="nondescriptPostgresPassword" postgres
The Rails application is a bit more involved, but still fairly simple. We're basing this application on top of an existing image, and we've created a Dockerfile in our HiveAPI directory for simplifying this process:
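Every project's Dockerfile will differ, but a minimal sketch of a Rails Dockerfile along these lines might look like the following (the base image tag and steps here are assumptions for illustration, not our exact file):

```dockerfile
# Start from an official Ruby/Rails base image on Docker Hub
FROM rails:4.2

# Copy the application source into the image
WORKDIR /usr/src/app
COPY . .

# Install the gem dependencies the app declares in its Gemfile
RUN bundle install
```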
Afterwards, we build an image from that Dockerfile by specifying its path (HiveAPI) in the following bash command:
$ docker build -t hive/api HiveAPI
This builds an image named hive/api (using the -t switch) from the Dockerfile in the HiveAPI path. After this we run the HiveAPI container and link it to the PostgreSQL database container with the following command:
$ docker run -d --expose=3000 -p 3000:3000 --name api --link database -e SECRET_KEY_BASE="ohboyohboyohboyitsasecretkey" hive/api
At this point, the main switches to concern yourself with are --link, which links this new container (named 'api') to the existing container named 'database', and the port switches: --expose makes port 3000 available to linked containers, while -p 3000:3000 publishes container port 3000 on port 3000 of the machine you are running this Docker container on. We also have a few more commands we want to run on this container to get the database up and running and to start our Rails server. Both are accomplished with the 'docker exec' command:
$ docker exec api bundle exec rake db:setup
$ docker exec api bundle exec rails server -d -b 0.0.0.0
At this point, our Rails server is up and running, and pointed to a live PostgreSQL database.
Now, before I get too much further: all of this is wrapped up in a single bash script. Jenkins pulls down the latest repository and runs a pre-set series of steps depending on what we need to do. In this case, the next step is to build our Ember application. We do this in a temporary container made specifically for building the application. This is the first time we see the -v switch: it creates a shared volume that both the container (here, an unnamed container running the node image with the associated bash command) and the host machine (the CentOS or Ubuntu machine we're using) can access. The $HIVE_ROOT variable holds the path to the Git repository that Jenkins has pulled from GitLab. It is mounted at the /usr/src/Hive path in the newly created container, and anything done within that directory inside the container will actually affect the directory on the host machine. The command run in this container installs the node and bower dependencies needed for the ember build command, then builds the app, placing the output inside the shared volume at /usr/src/Hive/HiveWeb/dist.
$ docker run --rm --link api -v $HIVE_ROOT:/usr/src/Hive -w /usr/src/Hive node bash -c 'npm install -g ember-cli bower; cd HiveWeb; npm install; bower install --allow-root; ember build'
Once this command finishes, we can serve our Ember app. For our server we are using nginx, and you can see here that we are creating shared volumes (two, actually) linking to different parts of our cloned repository: one to an nginx.conf file, and one to the newly built Ember application in the /dist path.
$ docker run -d -p 4200:80 --link api --name web -v $HIVE_ROOT/HiveWeb/nginx.conf:/etc/nginx/nginx.conf -v $HIVE_ROOT/HiveWeb/dist:/usr/share/nginx/html nginx
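Our actual nginx.conf is specific to our setup, but a minimal sketch of a config that serves the static Ember build and forwards API calls to the linked 'api' container might look like this (the /api location and proxy setup are assumptions for illustration; --link makes the 'api' hostname resolvable inside the web container):

```nginx
events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 80;

    # Serve the Ember build mounted at /usr/share/nginx/html,
    # falling back to index.html for client-side routes
    location / {
      root /usr/share/nginx/html;
      try_files $uri $uri/ /index.html;
    }

    # Forward API requests to the linked Rails container
    location /api {
      proxy_pass http://api:3000;
    }
  }
}
```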
That's it! After all of those steps, we now have a running Rails server with a PostgreSQL DB, and an Ember application that uses the Rails application to save and load from the database.
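For reference, the steps above can be sketched as a single bash script along the lines of the one Jenkins runs for us (simplified here; it assumes the containers don't already exist and that $HIVE_ROOT points at the cloned repository):

```
#!/bin/bash
set -e  # stop on the first failing step

# 1. PostgreSQL database container
docker run -d --name database \
  -e POSTGRES_USER="su_postgres" \
  -e POSTGRES_PASSWORD="nondescriptPostgresPassword" \
  postgres

# 2. Build and run the Rails API, linked to the database
docker build -t hive/api HiveAPI
docker run -d --expose=3000 -p 3000:3000 --name api --link database \
  -e SECRET_KEY_BASE="ohboyohboyohboyitsasecretkey" hive/api
docker exec api bundle exec rake db:setup
docker exec api bundle exec rails server -d -b 0.0.0.0

# 3. Build the Ember app in a throwaway node container
docker run --rm --link api -v $HIVE_ROOT:/usr/src/Hive -w /usr/src/Hive node \
  bash -c 'cd HiveWeb; npm install; bower install --allow-root; ember build'

# 4. Serve the Ember build with nginx
docker run -d -p 4200:80 --link api --name web \
  -v $HIVE_ROOT/HiveWeb/nginx.conf:/etc/nginx/nginx.conf \
  -v $HIVE_ROOT/HiveWeb/dist:/usr/share/nginx/html \
  nginx
```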
This was our first foray into learning Docker. Please, if you think any of this information is incorrect, or if you would just like to talk some more about Docker and what we're doing, seek me out. Thanks for reading!