Docker Containers vs. Virtual Machines (VMs)

Virtualization improves hardware utilization and lowers costs, but a closer look at how it works reveals significant duplication of resources across the "guest" (hosted) operating systems.

Virtual Machines (VMs)
Virtualization technology enables a single PC or server to simultaneously run multiple operating systems or multiple sessions of a single OS. A machine with virtualization software can host numerous applications, including those that run on different operating systems, on a single platform.

The host operating system can support a number of virtual machines, each of which has the characteristics of a particular OS. The software layer that enables this virtualization is the virtual machine monitor (VMM), or hypervisor.

Each virtual machine runs its own guest operating system. VMs with different operating systems can run on the same physical server: a UNIX VM can sit alongside a Linux VM, and so on. Each VM has its own binaries, libraries, and applications, and a single VM may be many gigabytes in size.

Every VM therefore requires a separate operating system, which is overhead on memory and storage: you need more of both in the case of virtual machines.

Fig 1.1- Containers and VMs Comparison

Docker Containers
Docker enables true independence between applications and infrastructure, helping developers and IT ops unlock their potential and creating a model for better collaboration and innovation. As the diagram above shows, you can run multiple applications on one host. Because containers share a common operating system, only a single OS needs care and feeding for bug fixes, patches, and so on. This is similar to what we experience with hypervisor hosts: fewer management points, but a slightly larger fault domain.
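You can see the shared operating system directly from the command line. As a minimal sketch (assuming Docker is installed and the public alpine and ubuntu images are available), two containers built from entirely different Linux userlands report the same kernel:

```shell
# Two containers from different images share the host's kernel,
# so `uname -r` reports the same kernel release in both.
docker run --rm alpine uname -r
docker run --rm ubuntu uname -r
# Both print the host's kernel release (e.g. something like 5.15.0-generic);
# only the userlands -- Alpine vs. Ubuntu -- differ.
```

This is the key contrast with VMs, where each guest would boot and maintain its own kernel.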

These containers are smaller and less complex than complete operating systems. We can think of them as little magic wire mesh baskets, each holding only the ground cover, lighting, food, and so on that its fish uniquely needs. Many more different kinds of "fish" can now live within a single "tank," each isolated from the others to prevent conflicts.

Containers can only be created for applications written for operating systems that are similar to, but not necessarily identical to, the host operating system. For example, you can create a container for an application that runs on the CentOS distribution of Linux and run that container on a host running the Ubuntu distribution, but you cannot create a container for a Windows application and run it on a Linux host.
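A quick sketch of this rule in practice (assuming a Linux host with Docker installed; the image names are just common public examples):

```shell
# A CentOS userland runs unmodified on an Ubuntu host, because both
# are Linux and the container reuses the host's kernel.
docker run --rm centos:7 cat /etc/os-release   # identifies itself as CentOS

# A Windows image, by contrast, needs a Windows kernel, so pulling and
# running one on a Linux host fails:
# docker run mcr.microsoft.com/windows/nanoserver:ltsc2022   # errors on Linux
```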

Fig 1.2- Docker Containers
Isolating applications within containers creates the need to route network traffic into and out of each container, which can require complicated network configurations as the number of containers grows. Fortunately, the Docker community of software engineers has created a variety of software platforms to automate this work. Better still, our own Zenoss engineering team has written one customized for Resource Manager 5.x. It's called Zenoss Control Center, and it, too, is an open source project.
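The basic building block these platforms automate is port publishing. A minimal sketch, assuming Docker is installed and using the public nginx image as a stand-in application:

```shell
# Publish host port 8080 to the container's port 80 so outside
# traffic can reach the isolated application.
docker run -d --name web -p 8080:80 nginx
curl http://localhost:8080/    # the host port forwards into the container
docker rm -f web               # clean up
```

Doing this by hand for one container is easy; wiring up dozens of interdependent containers is where tools like Control Center earn their keep.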

Access to the processes running within a container is limited, making configuration and diagnostics more complex than for a virtualized operating system. In the specific case of Zenoss Resource Manager, however, the configuration files and logs have been centralized in Control Center, mitigating this downside.
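Docker itself does provide some standard inspection commands. A sketch of the usual diagnostics workflow (the container name `web` here is illustrative):

```shell
# Read the container's stdout/stderr without entering it.
docker logs web

# Open an interactive shell inside the running container.
docker exec -it web /bin/sh

# Find the host PID of the container's main process.
docker inspect --format '{{.State.Pid}}' web
```

These reach only what the container exposes, which is exactly the limitation described above.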

Docker containers are lightweight by design and ideal for microservices application development. They accelerate the development, deployment, and rollback of tens or hundreds of containers composed into a single application.
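Composing several containers into one application is typically done with Docker Compose. A hypothetical two-service sketch (assumes Docker with the Compose plugin; the service names and images are illustrative, not from this document):

```shell
# Write a minimal Compose file describing a web front end and a cache.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  cache:
    image: redis
EOF

docker compose up -d    # deploy both containers together as one app
docker compose down     # tear the whole application back down
```

Rolling back is as simple as pointing the Compose file at previous image tags and re-running `docker compose up`.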