Linux containers (LXC) are very popular these days among developers and companies, perhaps due to Docker, which has historically leveraged LXC on the back end. LXC is a lightweight alternative to full machine virtualization of the kind provided by "traditional" hypervisors such as VirtualBox, VMware, KVM, Xen, or ESXi.
Today, we are starting a complete tutorial series on Docker, and this first post describes the core concepts behind it. As you go along, you will learn more about how Docker is implemented and how to use it.
You may already know that existing virtualization technologies such as VirtualBox, VMware, KVM, Xen, and ESXi use full machine virtualization, which offers greater isolation at the cost of greater overhead: each virtual machine runs its own full kernel and operating system instance.
Containers, on the other hand, generally offer less isolation but lower overhead, because they share portions of the host's kernel and operating system instance.
[Figure: virtual machine vs. container]
Linux containers do not provide a virtual machine; rather, they provide a virtual environment that has its own CPU, memory, block I/O, and network space. This is provided by the namespaces and cgroups features of the Linux kernel on the LXC host. It is similar to a chroot, but offers much more isolation.
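You can see the building blocks of this virtual environment on any modern Linux system: every process already belongs to a set of kernel namespaces, which are exposed as symlinks under /proc. A quick, unprivileged sketch:

```shell
# List the namespaces the current shell belongs to; each entry names
# a namespace type (mnt, pid, net, uts, ipc, user, cgroup, ...).
ls /proc/self/ns/

# Two processes are in the same namespace when these symlinks resolve
# to the same inode. Here is the current shell's mount namespace:
readlink /proc/self/ns/mnt
```

A container runtime creates fresh namespaces of these types for a container, so its processes see their own mount table, process tree, and network stack instead of the host's.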
Docker, on the other hand, is a high-level abstraction over containers that manages their life cycle. Before the 0.9 release, Docker used LXC as its execution environment. With the release of version 0.9, Docker dropped LXC as the default execution environment, replacing it with its own libcontainer.
Libcontainer provides a native Go implementation for creating containers with namespaces, cgroups, capabilities, and filesystem access controls. It allows Docker to manage a container's lifecycle and perform additional operations after the container is created.
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. If that unit runs on your local machine, you can be confident that it will run the same way anywhere, from QA to staging to production environments. Later in this series, you will learn how to create such standardized units and deliver them from a local environment to production.
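As a taste of what such a standardized unit looks like, here is a minimal sketch: a Dockerfile declares the base image, the dependencies, and the command to run. The image name, file names, and tag below are illustrative assumptions, not part of any real project.

```shell
# Write a minimal Dockerfile for a hypothetical Python app.
# (python:3.12-slim, requirements.txt, and app.py are assumed names.)
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# On any host with a Docker daemon, the same two commands build and
# run the identical unit (commented out here, since they need Docker):
#   docker build -t myapp:1.0 .
#   docker run --rm myapp:1.0
```

Because the image bundles the runtime and dependencies, "it works on my machine" and "it works in production" become the same statement.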
I hope you are now familiar with the core concepts of containers. In the next article, we will discuss Docker and its terminology in more detail.
Let’s make it an interactive series. Tell us your views, doubts, or questions in the comments below.
cgroups (control groups) is a Linux kernel feature to limit, police, and account for the resource usage of certain processes (actually process groups). There have been multiple efforts to provide process aggregation in the Linux kernel, mainly for resource-tracking purposes, including cpusets, CKRM/ResGroups, UserBeanCounters, and virtual server namespaces. All of these require the basic notion of grouping/partitioning processes, with newly forked processes ending up in the same group (cgroup) as their parent process. More about cgroups – Introduction to Control Groups (Cgroups).
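You can inspect cgroup membership without any special privileges; actually imposing limits requires root. A minimal sketch, assuming a cgroup v2 host (the "demo" group name and 256M limit below are arbitrary examples):

```shell
# Show which cgroup the current shell belongs to. On a cgroup v2
# system this prints a single line like "0::/user.slice/...".
cat /proc/self/cgroup

# With root on a cgroup v2 host, a memory limit could be applied by
# creating a child group and writing to its control files
# (commented out, since it needs root and modifies the system):
#   mkdir /sys/fs/cgroup/demo
#   echo "256M" > /sys/fs/cgroup/demo/memory.max
#   echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```

Container runtimes do essentially this on your behalf: each container gets its own cgroup, and flags such as memory or CPU limits are translated into writes to that cgroup's control files.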
namespaces – On a server where you want to run multiple services, it is essential for security and stability that the services are as isolated from each other as possible. Imagine a server running multiple services, one of which is compromised by an intruder. The intruder may be able to exploit that service, work their way into the other services, and perhaps even compromise the entire server. Namespace isolation provides a secure environment that eliminates this risk. More about namespaces – Namespaces Overview.
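The util-linux `unshare` tool makes this isolation easy to demonstrate. A small sketch (the privileged command is commented out; only the unprivileged inspection actually runs here):

```shell
# With root, start a shell in a new UTS namespace and change its
# hostname; the host's hostname is untouched (commented out, needs root):
#   sudo unshare --uts sh -c 'hostname isolated; hostname'

# Unprivileged: confirm the current process has a user namespace,
# one of the namespace types a runtime would replace per container.
readlink /proc/self/ns/user
```

The same mechanism applies to mount, PID, and network namespaces, which is why a compromised service confined to its own namespaces cannot simply reach across to its neighbours.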