The Container Ecosystem – Future of Cloud

Container technologies such as Docker, together with orchestration platforms like Kubernetes and Mesos, have transformed cloud application development. Containers are among the most talked-about technologies on social media channels, in online publications and at conferences, and most of the big names in the web/cloud space, including Google, Amazon, Microsoft and IBM, have jumped on the bandwagon.

An interesting way to think about the impact of software containers is to compare them with physical containers. One of the biggest reasons the modern shipping industry is successful is the standardization of shipping container sizes. Before standardization, shipping anything in bulk was complex and slow. Consider how troublesome it would be to move laptops from a ship to a truck one at a time. Instead of a ship specializing in transporting laptops, the laptops are simply packed into a container, knowing full well that it will fit on every container ship. The basis of software containers is similar: rather than shipping a complete operating system along with the application software, the code and all its dependencies are put into a container that can run anywhere. And since these containers are small, many of them can be packed onto a single computing machine.

In simple terms, a container comprises the complete runtime environment: the application, along with its dependencies, libraries and other binaries, and the configuration files needed to run it, all bundled into one package. By packaging the application and its dependencies together, differences in operating system distributions and underlying infrastructure are abstracted away. The host operating system is the only operating system on the server, and the containers communicate directly with it. This keeps the overhead small.
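The "one package" idea can be sketched as a minimal Dockerfile. The application, file names and base image below are hypothetical, purely for illustration:

```dockerfile
# Start from a minimal base image that supplies the OS libraries the app needs
FROM python:3.9-slim

# Copy the application code and its declared dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The configuration needed to run the app travels in the same package
EXPOSE 8000
CMD ["python", "app.py"]
```

Everything the application needs, from interpreter to libraries to start command, is captured in one image, so the same container runs unchanged on any host with a container runtime.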

This may sound quite similar to virtualization, so what is the big deal about containers? The central idea of hypervisor-based virtualization is to imitate the underlying bare-metal hardware and provide virtual hardware (with resources such as processor and memory). An operating system is then installed on top of this virtual hardware. This makes the guest operating system independent of the host: a hypervisor running on a Linux system can create virtual hardware with Windows installed on it, and vice versa.

The key advantage of virtualization is fuller utilization of hardware resources. Consider a scenario where you have a physical server with 20 GB of RAM, a 16-core processor and a 1 Gb NIC, and you are using it to host a website. The server receives HTTP requests and responds with resources such as images and files. Since the website will not face heavy traffic all the time, the server may sit idle for the bulk of the time. This is sheer wastage of computing resources. With virtualization, you can carve the server into virtual machines and allocate each one only the requisite amount of hardware.

The figure below illustrates the distinction between virtualization and containers.

[Figure 1: Hypervisor-based virtualization vs. containers]

As is evident in the figure above, virtual machines consume a lot of resources, since each runs a full copy of an operating system and needs a virtual copy of all the hardware that the operating system requires. This places extra load on CPU and RAM. In comparison, a container requires only the operating system, supporting binaries and libraries, and the resources to execute a specific program. If you want three virtual machines on a physical server, you need a hypervisor running three separate operating systems on top of it. In contrast, a physical server running three containerized applications runs just a single operating system, with each container sharing the operating system kernel. The shared parts of the operating system are read-only, while each container has its own mount point for write access. As a result, containers are much more lightweight and use far fewer resources than virtual machines.
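The kernel sharing described above can be observed from the command line on a host with Docker installed. The commands below use standard Docker CLI flags and are an illustrative sketch, not output from any particular setup:

```
# Start two containers; each is just an isolated process tree on the host
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Both containers report the host's kernel version, because they share
# its kernel rather than booting an operating system of their own
docker exec web1 uname -r
docker exec web2 uname -r

# Resource caps can still be applied per container, without reserving
# a fixed slice of hardware up front the way a virtual machine does
docker run -d --memory=256m --cpus=0.5 nginx
```

A VM-based equivalent would have required three full guest operating systems; here all three containers are ordinary processes sharing one kernel.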

Container technologies are mostly used in tandem with orchestration engines. The orchestration engines facilitate scheduling, resource management and service management for containers. Following is a list of popular container platforms:

Docker is the most popular container technology and the de facto container standard right now. Docker Inc. is the controlling company, with active support from partners such as Red Hat and IBM. At its core, Docker uses Linux kernel facilities like cgroups, namespaces and SELinux to create isolation between containers. Docker started off as a front end for the LXC container management subsystem, but release 0.9 changed the game by introducing libcontainer, a native Go library that provides the interface between user space and the kernel.
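The namespace isolation that Docker builds on can be seen with the stock util-linux `unshare` tool. This is a generic Linux sketch requiring root privileges, not a Docker-specific command:

```
# Create a new UTS namespace and change the hostname inside it;
# the change is invisible outside because the namespace is isolated
sudo unshare --uts sh -c 'hostname container-demo; hostname'

# The parent namespace still reports the original hostname
hostname
```

Docker combines several such namespaces (PID, network, mount, UTS and others) with cgroup resource limits to produce what we perceive as a container.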

CoreOS's rkt is a container technology that comes with its own orchestration engine, Fleet. Being a low-level framework based on systemd, it is often used as the foundation layer for higher-level solutions. It differentiates itself from Docker by emphasizing security, composability and standards compatibility. It can run Docker images natively and has native Kubernetes integration via 'rktnetes'.

Cloud Foundry's Garden is the container engine used under the hood by the Cloud Foundry PaaS. It is generally not used as a separate entity.

Apache Mesos is a 'distributed operating system' that can run on private and public cloud infrastructure, abstracting the resources of a cluster of machines while providing common services. It works as a complementary tool to orchestration tools such as Swarm or Kubernetes.

Docker-based Container-as-a-Service (CaaS) offerings are available from cloud vendors, such as Amazon's AWS EC2 Container Service (ECS) and Microsoft's Azure Container Service (ACS). These services can manage Docker images, run Docker containers, and schedule, orchestrate and monitor these container instances.

DigitalOcean lets you build and deploy microservices by creating Droplets on its public cloud, and handles provisioning, monitoring and other platform requirements for them. Droplets can be combined with orchestration tools such as Apache Mesos, Kubernetes or Docker Swarm.

As is evident, there are quite a few container technologies to choose from, with newer options still emerging. Container technology holds a lot of promise for true application portability across cloud platforms. Its lightweight platform abstraction, achieved without virtualization, is far more effective for creating portable workload bundles; more often than not, virtualization is overkill for workload migration. Containers thus provide a practical mechanism for moving workloads around multi-cloud environments without modifying the application. Having said that, virtualization is not going away any time soon, since it provides seamless orchestration and eases management of hardware infrastructure such as networks, servers and storage. Virtualization will continue to be used alongside containers as a complementary technology rather than a competing one, at least in the near future.

 
