Recently, the rising popularity of containers seems to have pushed virtual machines (VMs) out of the limelight and into the dusty basement hall filled with no-longer-relevant technology like pagers and dial-up modems. Containers have garnered the attention of all the technology giants: Google, IBM, Microsoft. There is just one problem with that picture: the assumption that virtual machines are out of date. They are not.
For containers to have truly obliterated the relevance of VMs, they would have to prove better than, and able to substitute for, every function that VMs offer. Like most offerings in the data center sphere, the best solution splits along the particular demands and requirements of each enterprise.
Unquestionably, containers can enable a company to pack a lot more applications into a single physical server than a virtual machine can.
VMs require a lot of system resources. Each VM runs not only a full copy of an operating system, but a virtual copy of all the hardware that the operating system requires to run. This quickly adds up to a lot of RAM and CPU cycles. By comparison, all a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program.
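As a rough illustration of that difference (the image name and application file here are hypothetical), a container image definition can carry little more than the application and the libraries it needs, relying on the host's kernel at runtime:

```dockerfile
# Hypothetical minimal image: a small userland, one runtime, one app.
# There is no kernel and no virtual hardware inside the image; the
# host kernel is shared with every other container on the machine.
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

A comparable VM would have to boot a full guest operating system on top of emulated hardware, which is where the extra RAM and CPU cycles go.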
What this means in practice is that two to three times as many applications can be put on a single server with containers as with VMs.
In addition, with containers, it is possible to create a portable, consistent operating environment for development, testing, and deployment of applications.
By sharing the same operating system, supporting programs, and system resources, containers are lightweight, impose low overhead, and are easy to manage. The same characteristics that give containers an edge over VMs, however, are also the source of their main weakness.
Security risks are much higher in containers than in VMs. For example, a typical use case for XenApp (one type of application virtualization system) is the deployment of an office suite to hundreds of remote workers. To accomplish this, XenApp generates a sandboxed user space on a Windows Server for each user. All users share the same OS, including the kernel, network connection, and base file system, but each instance of the office suite runs in a separate user space.
The good news about the common source operating system is that it means caring for and monitoring only a single operating system for patches, bug fixes, and more. The bad news is that a shared OS is harder to protect and easier to compromise. Since containers share the same kernel, admins and software vendors need to take special care to prevent security issues from spreading between adjacent containers. It is also important to know that a container cannot run a guest operating system that differs from the host OS of the shared kernel: no mixing Windows containers with a Linux-based host.
Another security issue lies in the fact that many businesses are releasing containerized applications. Instead of securing the container and making sure the application inside it is safe, some companies fail to scan for potential threats and simply run the first container they download, which could carry a Trojan horse into the company's servers. Pulling containerized applications is not like downloading smartphone apps; the process needs to be more thoroughly vetted.
According to Rob Hirschfeld, CEO of RackN and OpenStack Foundation board member: “Packaging in [containers is] still tricky. Creating a locked box helps solve part of [the] downstream problem (in that you know what you have) but not the upstream problem (you don’t know what you depend on).”
This is a twofold problem: it is a security issue, but it is also a quality assurance problem. It is not enough to ensure that the container uses the correct web server; the version needs to be correct as well. There are fine-grained quality assurance issues that need to be nailed down before the container is used. It is easy to deploy an app in a container, but if the wrong one is installed, time will still end up being wasted.
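One concrete way to pin those details down (the image names and version numbers here are illustrative, not a recommendation) is to avoid floating tags in the container definition:

```dockerfile
# Risky: a floating tag such as "latest" can silently change the web
# server version between one pull and the next.
# FROM nginx:latest

# Safer: pin an exact version so QA knows precisely which build ships
# inside the container.
FROM nginx:1.25.4
```

Pinning by image digest goes one step further, guaranteeing a byte-for-byte identical base image on every pull.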
Hirschfeld also pointed out another issue: container sprawl can be a real problem. By this he means companies must be aware that “breaking deployments into more functional discrete parts is smart, but that means [there will be] MORE PARTS to manage. There’s an inflection point between separation of concerns and sprawl.”
Remember, the whole point of a container is to run a single application. The more functionality that is packed into a container, the more likely it is that a virtual machine should have been used instead.
A great rule of thumb when deciding between VMs and containers: generally speaking, it is better to use containers to run a single application and VMs to run multiple applications.
Both choices remain relevant for enterprises big and small. It all depends on the needs and details of what must be achieved. Businesses would do well to keep both options in mind and use them side by side.