If the buzz is to be believed, Docker is one word with the promise to rewrite the DevOps rulebook.

By definition, containerization is operating-system-level virtualization in which several applications are deployed without launching a dedicated VM for each. Instead, the applications run in isolated user spaces on a single shared kernel.
E.g. Docker, rkt (pronounced “rocket”)
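For instance, two unrelated applications can run side by side on one host, each in its own isolated filesystem, network stack, and process tree, while sharing the host's kernel. A minimal sketch using Docker (nginx and redis are just convenient stock images from Docker Hub):

```sh
# Two applications, one kernel: each container is isolated
# from the other, but no guest OS is booted for either.
docker run -d --name web nginx
docker run -d --name cache redis

# Both appear as ordinary containers managed by the single host kernel.
docker ps
```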

Docker is a popular open-source tool and, to date, the world’s leading containerization platform, built on Linux containers. A Docker container is often described as a “lightweight VM”, but in reality a container is quite different from a VM.

In a traditional virtualized environment, one or more virtual machines run on top of a physical machine using a hypervisor. Containers, on the other hand, run in user space on top of the operating system kernel. Containers are isolated from one another on a host using two Linux kernel features: namespaces and control groups (cgroups).
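To make those two features concrete, here is a hedged sketch of how they surface in everyday Docker commands; alpine is simply a small stock image used for the demonstration:

```sh
# Namespaces: the container gets its own hostname and its own
# process tree (its shell runs as PID 1), despite sharing the
# host's kernel.
docker run --rm alpine sh -c 'hostname; ps'

# Control groups (cgroups): cap the container's resources,
# here 256 MB of RAM and half a CPU core.
docker run --rm --memory=256m --cpus=0.5 alpine sh -c 'echo constrained'
```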

Docker containers pack a piece of software into a complete filesystem that contains everything it needs to run (code, runtime, system tools, and system libraries – anything you can install on a server). This guarantees that it will always run the same, regardless of the environment it is running in, and avoids the all-too-common problem of “it works on my machine”.
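As an illustration, here is a minimal Dockerfile sketch that bakes all of those layers into a single image; it assumes a hypothetical Python app consisting of an app.py and a requirements.txt:

```Dockerfile
# Runtime: the exact same Python version everywhere the image runs
FROM python:3.11-slim

WORKDIR /app

# Libraries: dependencies are installed into the image itself
COPY requirements.txt .
RUN pip install -r requirements.txt

# Code: the application ships inside the filesystem it runs on
COPY app.py .

CMD ["python", "app.py"]
```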

Docker is widely used in development and testing environments, where containers can be created on the fly and destroyed once the requirement is verified/tested. More recently, with the help of production-ready orchestration tools like Docker Swarm and Kubernetes, Docker is also used in production, where containers can easily be scaled up and down based on application load.
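A rough sketch of both workflows (the service and deployment named web are hypothetical):

```sh
# Dev/test: spin up a throwaway container; --rm destroys it
# automatically the moment you exit.
docker run --rm -it alpine sh

# Production: scale a Docker Swarm service up to five replicas...
docker service scale web=5

# ...or do the equivalent with Kubernetes.
kubectl scale deployment web --replicas=5
```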

Docker Advantages and Disadvantages:

Although there are several pros to using Docker, the key advantages are:

  • Containers take very little time to spin up, which comes in handy when there are spikes in user activity (see the quick timing sketch after this list).
  • Because containers don’t carry the overhead of a full guest OS, we can always spin up more containers on a server than virtual machines.
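A quick way to verify the first point yourself (a sketch; the exact timing varies by host and assumes the alpine image has already been pulled):

```sh
# A container typically starts in well under a second,
# versus tens of seconds to boot a full VM.
time docker run --rm alpine true
```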

The major disadvantage of Docker is that a container is less secure than a VM: it shares the kernel with every other container on the host, and container processes typically run as root, which means containers are less isolated from one another. A vulnerability in the shared kernel can compromise the security of all containers on the host. This is a major deterrent for many clients considering Docker in their live environments.
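Some of that risk can be mitigated with Docker's built-in hardening flags. A minimal sketch (the UID is illustrative):

```sh
# Run as an unprivileged user, drop all Linux capabilities,
# and make the container's filesystem read-only.
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --read-only \
  alpine id
```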

Before & After Docker:

In earlier days, applications were hosted on physical machines that typically used only around 10% of their total capacity. With the advent of virtualization, overall utilization increased considerably, though some capacity was still wasted installing a full OS on each VM, consuming disk space, RAM, and CPU. Docker (containerization) addresses this problem: containers don’t need any extra OS installation, freeing that capacity to host more applications.

Now comes the question: is there anything after Docker?

The industry is moving toward ever more lightweight infrastructure, even at the cost of heavy customization.

Unikernels are the most likely candidate: only the OS libraries the application actually needs are selected and compiled together with the application and configuration code into a fixed-purpose image, called a unikernel, which runs directly on a hypervisor.