Sunday 14 August 2016

Docker Concepts Plugged Together (for newbies)

Although Docker looks like a promising tool for facilitating project implementation and deployment, it took me some time to wrap my head around its concepts. Therefore, I thought I would write another blog post to summarize and share my findings.

Docker Container & Images

Docker is an application that runs containers on your laptop, but also on staging or production servers. Containers are isolated application execution contexts which, by default, do not interfere with each other. If something crashes inside a container, the consequences are limited to that container. It is possible to open ports in a container, so that it can interact with the external world, including other containers that have open ports.
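For example, assuming Docker is already installed, you can start a container from the small hello-world image available on the public registry, then list the containers on your machine:

    # run a container from the hello-world image (pulled automatically if absent)
    docker run hello-world

    # list running containers; add -a to also see stopped ones
    docker ps -a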

You can think of a Docker image as a kind of application ready to be executed in a container. In fact, an image can be more than just an application. It can be a whole Linux environment running the Apache server and a website to test, for example. By opening port 80, you can browse the content as if Apache and the website were installed on your laptop. But they are not. They are encapsulated in the container.
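As an illustration, the official httpd (Apache) image can be started with its port 80 published on the host; the host port 8080 chosen here is just an example:

    # run Apache in the background and map host port 8080 to container port 80
    docker run -d -p 8080:80 httpd

    # the default Apache page is now reachable at http://localhost:8080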

Docker runs in many environments: Windows, Linux and Mac. One starts, stops and restarts containers with Docker using available images. Each container has its own private file system. One can connect to and 'enter' a running container via a shell prompt (assuming the container runs Linux, for example). You can add files to and remove files from the container. You can even install more software. However, when you delete the container, these modifications are lost.
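Entering a running container typically looks like the following, where my_container is a placeholder for the actual container name or ID:

    # open an interactive shell inside a running container
    docker exec -it my_container /bin/bash

    # lifecycle commands
    docker stop my_container
    docker start my_container
    docker restart my_container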

If you want to keep these modifications, you can create a snapshot of the container, which is saved as a new image. Later, if you want to run a container with your modifications, you just need to start it from this new image.
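Such a snapshot is taken with docker commit; the image name and tag below are purely illustrative:

    # save the current state of the container as a new image
    docker commit my_container my_image:v1

    # later, start a fresh container from that image
    docker run -it my_image:v1 /bin/bash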

In theory, it is possible to run multiple processes in a container, but it is not considered good practice.

Docker Build Files & Docker Layers

But how are Docker images created in the first place? In order to create an image, you need to have Docker installed on your laptop. Then, in a separate directory, you create a file named Dockerfile. This file contains the instructions to build the image.

Most often, you don't create an image from scratch; you rely on an existing image, for example Ubuntu. This is the first layer. Then, as Docker processes each line of the Dockerfile, each corresponding modification creates a new layer. It's like painting a wall: if you start with a blue background and then paint some parts in red, the blue disappears under the red.
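A minimal Dockerfile might look like this (the packages and paths are only an example); each instruction adds a layer on top of the Ubuntu base image:

    # first layer: the Ubuntu base image
    FROM ubuntu:16.04

    # next layer: install Apache
    RUN apt-get update && apt-get install -y apache2

    # next layer: copy the website into the image
    COPY ./site /var/www/html

    # default command executed when a container starts
    CMD ["apache2ctl", "-D", "FOREGROUND"]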

Once Docker has finished building, the image is ready. In other words, a Docker image is a pile of layers. Each time you launch a container, Docker simply uses the built image as the container's starting point. It does not recreate it from scratch.
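Building and running an image from a Dockerfile like the one above could look like this (the image name is arbitrary):

    # build the image from the Dockerfile in the current directory
    docker build -t my_site .

    # run it, publishing Apache's port 80 on host port 8080
    docker run -d -p 8080:80 my_site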

Docker Volumes & Docker Registry

A Docker registry is simply a location where images can be pushed and stored for later use. Images are versioned with tags, 'latest' being the default tag. There is a public registry, Docker Hub, but one can also install private registries.
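Pushing an image to a registry is typically done by tagging it with the account (or registry) prefix and pushing it; myuser below is a placeholder for a Docker Hub account:

    # tag the local image for the registry
    docker tag my_site myuser/my_site:1.0

    # log in and push it
    docker login
    docker push myuser/my_site:1.0

    # on another machine, pull it back
    docker pull myuser/my_site:1.0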

A volume is a host directory located outside of a Docker container's file system. It is a means to make data created by a container in one of its directories available in an external volume directory on your laptop. A relationship is created between this inner container directory and the external directory on the local host. A volume 'belonging' to one container can be accessed by another container, given the proper configuration. For example, logs can be created by one container and processed by another. This is a typical use of volumes.

Unlike the container itself, the data in its volume directory is not deleted when the container is erased. It can be accessed again later by the same or by other containers.
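Sharing a volume between containers can be done, for example, with the --volumes-from option; my_app and my_log_processor below are hypothetical images used only for illustration:

    # start a container that writes logs into a volume mounted at /logs
    docker run -d --name log_producer -v /logs my_app

    # a second container reuses the same volume to process those logs
    docker run --rm --volumes-from log_producer my_log_processor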

It is also possible to mount a local host directory onto a directory of the container. This makes the content of the local host directory available in the container. In case of collision, the mounted data takes precedence over the container's data. It's like a poster on the blue wall. However, when the local host directory is unmounted, the original container data is available again. If you remove the poster, that part of the wall is blue again.
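Mounting a host directory uses the same -v flag with an absolute host path; the paths below are only an example, assuming the official httpd image and its usual document root:

    # mount a local site directory over Apache's document root
    docker run -d -p 8080:80 -v /home/me/site:/usr/local/apache2/htdocs httpd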

But, Why Should I Use Docker?

Docker brings several big benefits. One of them is that you don't need to install and re-install environments to develop and test new applications, which saves a lot of time. You can also re-use images by building your own on top of giants. This also saves a lot of time.

However, the biggest benefit, IMHO, is that you are guaranteed to have the same execution environment on your laptop as on your staging and production servers. Hence, if one developer works under Windows 10 and another on a Mac, it does not matter. This mitigates the risk of facing tricky environment bugs at runtime.

Hope this helped.
