
Docker vs. Vagrant

Development and operations teams have wrestled with the complexity of software environments for decades. The problem is familiar: code that works in one environment fails in another.

Both Docker and Vagrant help create predictable and repeatable development environments. However, Docker uses container technology while Vagrant uses virtual machines to achieve this goal. Understanding the strengths and weaknesses of each tool will help developers mix and match them to achieve the desired results.

Let’s start with the underlying technologies first.

Virtual Machine

A virtual machine (VM) emulates a physical computer. It comes with its own complete operating system and resource allocation. The host machine provides the physical resources, but the virtualized environment works as an independent machine with its own BIOS, CPU, storage, and network adapters.

Although VMware is the best-known name in modern VM technology today, the virtual machine idea has been around for a long time.

In 1965, IBM Yorktown Research Center needed a way to measure the effectiveness of different computer science ideas. The research team wanted to switch between features and measure the results, so they devised a scheme that divided a single machine into smaller partitions. Each partition managed its own resources: in effect, a small virtual machine.

The VM idea was successful, and IBM began building operating systems around virtual machines. IBM System 370 (S/370) and IBM System 390 (S/390), both of which ran IBM's VM family of operating systems such as VM/ESA, became popular with businesses and universities because they let institutions share computing resources among users without affecting each other's environments. The idea also influenced the development of the Unix operating system and, later, the Java virtual machine.

Modern virtual machines run on hypervisors. A hypervisor is the software, firmware, or hardware responsible for creating and running VMs. Many hypervisors are available in the market; KVM, Red Hat Enterprise Virtualization (RHEV), XenServer, Microsoft Hyper-V, and VMware vSphere/ESXi are prominent players.

Today, virtual machines have spurred the growth of cloud computing. Amazon AWS, Microsoft Azure, Google Cloud, DigitalOcean, and other cloud companies depend heavily on virtualization technology.


Container

Containers provide virtualization at the operating-system level. A container is an executable software package that isolates an application from its surrounding environment. Inside the package, the container carries everything the application needs to stay separate from outside influence: code, runtime, system libraries, and tools. Containers run on the host machine's operating system, share libraries and binaries where possible, and isolate only the resources that absolutely must be separate.
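One way to see this sharing in practice: a container reports the same kernel version as its host, because it reuses the host kernel instead of booting its own. A quick sketch, assuming Docker is installed and the small `alpine` image is used:

```shell
# Kernel version as seen on the host.
uname -r

# Kernel version as seen from inside a container: identical,
# because the container shares the host kernel rather than
# booting an operating system of its own.
docker run --rm alpine uname -r
```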

In 1979, the Unix "chroot" system call made it possible to isolate a process's view of the filesystem, the first seed of the container idea. Early container technology began with FreeBSD Jails in 2000. A year later, Linux-VServer allowed multiple Linux systems to run on a single host. In 2004, Solaris Zones provided functionality similar to FreeBSD Jails. In 2006-2007, Google developed Process Containers (later renamed cgroups), which were merged into the Linux kernel. Linux Containers (LXC) was created in 2008 to take advantage of Linux cgroups and namespaces. In 2013, Docker was created by combining the ideas behind LXC, adding tools to easily build and retrieve container images.


Docker

Docker is an open-source container technology that was originally built on LXC. It is popular because it makes it easy to create, run, and deploy applications in a self-contained environment. Docker doesn't create a whole operating system like a virtual machine. Instead, it uses the host operating system's kernel and virtualizes only the application and its necessary libraries. This approach makes it much more lightweight than a virtual machine.

Docker containers are created from Docker images, which can be thought of as snapshots of machines. Users can easily start a container from an image, and images are built in layers. Suppose a development team needs a container with Apache and Python installed on a certain version of Linux. A developer can download a Linux image from Docker Hub, start a container, install Apache and Python, create a new image from the container, and share that image. Other members of the team don't need to go through the same installation, which helps maintain a consistent environment for all.
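As a sketch, that manual workflow might look like the following; the base image tag and the `myteam/apache-python` image name are hypothetical, and Docker must be installed:

```shell
# Pull a base Linux image and start an interactive container from it.
docker pull ubuntu:22.04
docker run -it --name builder ubuntu:22.04 /bin/bash

# Inside the container, install the needed software, then exit:
#   apt-get update && apt-get install -y apache2 python3
#   exit

# Snapshot the modified container as a new image layer...
docker commit builder myteam/apache-python:1.0

# ...and share it so teammates can skip the installation steps.
docker push myteam/apache-python:1.0
```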

Docker also supports scripting and multi-container applications. A text-based Dockerfile defines the requirements for an image, which can then be built with a single command; Docker Compose extends the same idea to multi-container applications defined in a YAML file. The Apache/Python/Linux server above can also be produced this way, and the team only needs to share the Dockerfile to recreate the same environment.
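A Dockerfile for the Apache/Python example above might look like this sketch (the base image tag and package names are illustrative):

```dockerfile
# Start from a pinned Linux base image.
FROM ubuntu:22.04

# Install Apache and Python in a single image layer,
# cleaning the package cache to keep the layer small.
RUN apt-get update && \
    apt-get install -y apache2 python3 && \
    rm -rf /var/lib/apt/lists/*

# Run Apache in the foreground when the container starts.
CMD ["apachectl", "-D", "FOREGROUND"]
```

Anyone with this file can rebuild the identical image with `docker build`, and a Compose file can reference the same build to wire it together with other services.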

Docker also has more specialized tools for complex tasks. Docker Swarm helps orchestrate large-scale Docker deployments.
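A minimal Swarm sketch, assuming Docker is installed (the service name and image here are illustrative):

```shell
# Turn this host into a single-node swarm manager.
docker swarm init

# Run three replicas of a web service across the swarm;
# Swarm schedules them and restarts any that fail.
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale up without redeploying.
docker service scale web=5
```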


Vagrant

Vagrant is an open-source tool that helps create and maintain virtual machines. It works with VirtualBox, VMware, AWS, and other providers.

Vagrant simplifies the management of VMs. Using a Vagrantfile, developers can define the virtual machine's properties, such as its operating system and software installations. The text-based Vagrantfile can be shared through version control, and the machine can be started with a simple command like "vagrant up". Users can then log into the machine as they would a physical server.
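A minimal Vagrantfile might look like this sketch; the box name and provisioning commands are illustrative:

```ruby
# Vagrantfile: defines the VM that "vagrant up" will create.
Vagrant.configure("2") do |config|
  # Base box to build the VM from.
  config.vm.box = "ubuntu/jammy64"

  # Give the VM 2 GB of RAM under the VirtualBox provider.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end

  # Install software on first boot.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y apache2 python3
  SHELL
end
```

After "vagrant up", "vagrant ssh" logs into the machine like a physical server, and the file itself can be committed to version control alongside the project.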

When to Use Docker or Vagrant

The choice between Docker and Vagrant often comes down to whether you need containers or virtual machines. Here are some similarities and differences between the two in terms of use:


Both Docker and Vagrant offer easily configurable environments that can be controlled through scripts, and both are cloud friendly.


A Vagrant virtual machine provides kernel-level security separation, which makes virtual machines less risky than containers. Docker containers, on the other hand, are very lightweight: they use fewer resources and execute faster, so a single host can run far more containers than virtual machines. Starting and stopping a container is also almost instantaneous, while a VM must go through the full BIOS and operating-system boot cycle.

The security separation of a virtual machine makes a VM failure more self-contained. Containers, by contrast, share resources, so a crash can cascade, and a container security breach can reach the kernel of the host operating system.

However, the speed of execution and the lightweight footprint of containers make Docker very attractive for development. Containers pair especially well with a microservice architecture, where the risk factors are mitigated by the isolation between services. Progress is also being made to make Docker more secure every day.


Docker and Vagrant are both useful technologies that can help developers improve their productivity. If application security is a primary concern, Vagrant and VMs may be the better choice. For fast development and easy sharing, Docker has the advantage. Many teams use both to keep their operations running smoothly.


About the author


A passionate Linux user for personal and professional reasons, always exploring what is new in the world of Linux and sharing it with my readers.