Getting started is hard
When we started, we went through a large number of docs, videos, tutorials and other resources to get our heads around using docker. After going through all that, it seems there is no single place where you'd find everything you need to know about docker. There are bits and pieces everywhere, including obsolete documentation that no longer applies, and you are left with the task of piecing everything together. This is really strange given the popularity and buzz around docker.
This is one of the things we had to figure out by ourselves: the concept of containers is separate from the concept of orchestration. If you try to use docker for anything more than a hello-world, you'll most likely need some kind of orchestration. We used docker-compose and stuck with it, and only later realised that other alternatives are available. Docker-compose has a few limitations, but it worked fine for our purposes and we never found enough reason to check whether kubernetes was a better option.
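A minimal docker-compose setup of the kind described here might look like the following. The service names, images and ports are illustrative placeholders, not our actual stack:

```yaml
# docker-compose.yml -- illustrative sketch of a small Django + Postgres stack
version: "2"

services:
  web:
    build: .                                    # built from the project's Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example                # placeholder credential
```

A single `docker-compose up` then starts both containers and wires them together, which is the "orchestration" part that plain `docker run` doesn't give you.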
Orchestration across multiple physical servers gets even nastier; there you'd have to use something like Swarm. We've since realised that swarm is one of the less preferred options for orchestrating clusters.
Running out of disk space
This is the most frustrating and ugliest part of using docker. We discovered the problem when we were happily pushing code to deployment and one of the client's machines suddenly went down due to low disk space. Fortunately, it was only a staging server. During development, docker images pile up on your machine fast, and since image sizes can run to a few GB, it's easy to run out of disk space. This is another problem with docker you have to figure out yourself. Even though everyone who has used docker seriously runs into this issue sooner or later, no one tells you about it at the outset, which is pretty annoying. There is no inbuilt docker command to deal with it. The sad part is there are a lot of hacks available and not a single standard solution. We ended up setting an hourly cron job to run the docker-gc script on all our dev and production machines!
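The workaround is essentially a scheduled cleanup. The crontab entry would look roughly like this; the install path for the docker-gc script is an assumption and depends on where you put it:

```shell
# Run the docker-gc cleanup script every hour to reclaim space from
# unused containers and images.
# /usr/local/bin/docker-gc is an assumed path; adjust to your install.
0 * * * * /usr/local/bin/docker-gc
```

(Newer Docker releases have since added `docker system prune`, which covers much of this use case, but at the time no such built-in existed.)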
Private repositories
For our use case, hosting our own docker registry would have been a lot of overhead. The docker hub registry provides only one private repo for free, and since the client wasn't keen on paying for more private repos, we managed with that single repo to store our base image with most of the dependencies bundled into it.
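With a single private repo, everything has to live under one repository name and be distinguished by tags. The flow looks something like this; the repo name `ourteam/project` is a placeholder:

```shell
# Build the dependency-heavy base image and push it as a tag in the
# one private repo (ourteam/project is a placeholder name).
docker build -t ourteam/project:base -f Dockerfile.base .
docker push ourteam/project:base

# App images build FROM that base and get their own tags in the same repo.
docker build -t ourteam/project:app .
docker push ourteam/project:app
```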
This is another aspect of any serious usage of docker that the tutorials usually ignore. It took us quite some time to figure out our entire workflow: how to set up dev and production orchestration, databases, backups, dependency management within the team, and keeping the base image updated all the time. I'll probably write another blogpost about our docker workflow.
Dependency management
One of docker's major claimed benefits is managing dependencies across different dev machines. We are mainly a Python/Django shop and before docker, we'd been happily managing dependencies with virtualenv and a simple requirements.txt. Most of the time, that was all we needed to manage environments across all dev machines; sometimes we used Vagrant. So while docker did bring some benefits by letting us specify everything from the OS down to environment variables and native libraries, it wasn't really a game-changer for us.
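For comparison, the pre-docker workflow was just the standard virtualenv routine, which every dev machine could reproduce in seconds:

```shell
# Classic Python dependency management: one virtualenv per project,
# dependencies pinned in requirements.txt.
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
```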
Longer build times
Initially we had a single Dockerfile for the project. This meant rebuilding the entire docker image every time we added a single python library dependency, and the rebuild would have to happen on every dev machine. With virtualenv it was as easy as a single pip install command. We eventually created a separate Dockerfile for a base image that includes all the dependencies.
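The split looks roughly like this; the file name, base image tag and Python version are illustrative, not the actual setup:

```dockerfile
# Dockerfile.base -- rebuilt (and pushed) only when requirements.txt changes
FROM python:2.7
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
```

The project's main Dockerfile then starts `FROM` that base image (e.g. `FROM ourteam/project:base`, a placeholder name) and only copies in the source code, so adding a library means one base-image rebuild instead of a full dependency reinstall on every code change.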
Good practice dictates that you don't mount your source code directory into the docker container in production, which means you also have to rebuild the image on the test/staging server every time you make even a single line of code change. Combined with the slightly slow docker-compose orchestration, this made deployments noticeably slower.
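In practice, every one-line change on staging turns into a cycle like the following; `web` is a placeholder service name:

```shell
# Illustrative staging deploy cycle for a single-line code change.
git pull                      # fetch the change
docker-compose build web      # full image rebuild: the code is baked in
docker-compose up -d web      # recreate the container from the new image
```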
DB and Persistence
We spent hours figuring out a good way to use a database in both dev and production with docker. It is tricky since docker containers don't persist data unless you use a mount point. A few patterns are documented, but they either didn't work for us or we didn't really like them, so we had to work it out ourselves. This is another area where you're expected to figure out by yourself whether it is a good idea to run your production database in docker. Hint: it's not.
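On dev, where a containerised database is still convenient, the usual pattern is a named volume so the data survives container restarts and rebuilds. A sketch, with illustrative service and volume names:

```yaml
# docker-compose snippet: a named volume keeps Postgres data
# outside the container's ephemeral filesystem.
services:
  db:
    image: postgres:9.5
    volumes:
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```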
Logging
This is a problem in both dev and production. On dev, although you're still running the django runserver and can get console logs, we frequently hit issues with django autoreload and delays in flushing logs to the console when using the django debug server with docker. On production, since the source code directory isn't mounted in the container on the server, you have to add a dedicated mount point to get at your server logs.
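One common mitigation (a sketch, not necessarily what we settled on) is to force unbuffered Python output so console logs flush promptly, and to mount a host directory for log files on the server; the paths below are illustrative:

```yaml
# docker-compose snippet: unbuffered output for prompt dev logs,
# plus a host-mounted directory for production log files.
services:
  web:
    environment:
      - PYTHONUNBUFFERED=1      # flush Django runserver output immediately
    volumes:
      - ./logs:/app/logs        # host-side log directory (illustrative path)
```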
After using Docker for about 6 months, the number of broken pieces we were trying to fit together, and the time we were spending just to figure things out, didn't seem worth all the fuss. There could be more issues with using Docker, but these were enough for us to limit its usage to only a handful of projects. Here is another thorough article on extensive docker usage in production.