The History of Container Technology

As a part of the It’s Okay to Be New series, I’ve been doing a lot of research about containers this week. My goal is to lay the foundation for anyone, regardless of technical background, to be able to start playing with and learning about containers — be it LXC, Docker, or the next big technology. Personally, I learn best through stories, and in order to tackle something new, I have to develop the story in my head; I like to understand not only where we are in the learning process, but also how we got here. So I started piecing together the history of container technology, and I found out that to fully understand it, we need to go back much further than you would think. We need to go back to the early days of virtualization in the 1960s.

Development of Virtualization

Back in the ’60s, computers were a rare commodity. It cost well over a thousand dollars a month just to rent one, putting them out of reach for many businesses. To put that into perspective, $1,000 in 1960 had the same buying power as $8,385 in 2018.

They say necessity is the mother of invention, and the history of computers is no exception. The earliest computers were typically dedicated to a single task that might take days or even weeks to run. It was the need to share these scarce computing resources among many users at the same time that spurred the development of virtualization through the 1960s and 1970s.

With the creation of centralized computers, we began to see the first hints of what we now call virtualization. Throughout the 1960s, multiple computer terminals were connected to a single mainframe, which allowed computing to be done at a central location. Centralized computers made it possible to control all processing from a single location, so if one terminal were to break down, the user could simply go to another terminal, log in there, and still have access to all of their files.

However, this arrangement had its disadvantages. For example, if one user crashed the central computer, the system went down for everyone. Issues like this made it apparent that computers needed a way to isolate not only individual users from one another, but also the system processes they ran.

In 1979, during the development of Version 7 Unix, we took another step towards creating shared, yet isolated, environments with the introduction of chroot (change root). The chroot system call made it possible to change the apparent root directory for a running process, along with all of its children. This made it possible to isolate system processes into their own segregated filesystems so that testing could occur without impacting the global system environment. In March 1982, Bill Joy added chroot to BSD.
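
To make that idea concrete, here is a minimal sketch of the same isolation using Python's os.chroot wrapper around the chroot system call. The /srv/jail path is purely hypothetical, and the call requires root privileges and a jail directory that already contains whatever the process needs:

```python
import os

# Hypothetical jail directory; in practice it must already contain
# everything the process needs (binaries, libraries, data files).
NEW_ROOT = "/srv/jail"

os.chroot(NEW_ROOT)  # change the apparent root for this process and its children
os.chdir("/")        # step inside the new root so no handle to the old tree remains

# From here on, "/" resolves to /srv/jail; the rest of the filesystem is invisible.
print(os.listdir("/"))
```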

For the purpose of understanding containers, we can skip forward a bit in time to the 1990s, when Bill Cheswick, a computer security and networking researcher, was working to understand how a cracker would use their time if given access to his system. For those of you unfamiliar with the term cracker, it refers to someone who breaks into a computer system for malicious reasons. Now you may be thinking, “Wait… isn’t that a hacker?” But in the security world, the word hacker is generally used for someone who identifies security vulnerabilities in order to fix, rather than exploit, them. Though this difference may warrant its own blog post, for now, I will use the term cracker, since that’s the term Cheswick used in his paper “An Evening with Berferd in Which a Cracker Is Lured, Endured, and Studied.”

In this research, Cheswick built an environment that allowed him to analyze the cracker’s keystrokes in order to trace the cracker and learn their techniques. His solution was to use a chrooted environment and make modifications to it. The “jail” he built for these studies was a forerunner of the jail command we know today.

In early 2000, FreeBSD 4.0 introduced the jail command into the operating system. Although similar to chroot, it added further process sandboxing features for isolating filesystems, users, networks, and more. Each FreeBSD jail could be assigned its own IP address, its own software installation, and its own configuration. This wasn’t without its own issues, as applications inside a jail were limited in their functionality.

In 2004, we saw the release of Solaris Containers, which create full application environments through the use of Solaris Zones. With zones, you can give an application a full user, process, and filesystem space, along with access to the system hardware. However, the application can see only what is within its own zone.

In 2006, engineers at Google launched process containers, designed to isolate and limit the resource usage (CPU, memory, disk I/O, and so on) of a collection of processes. In 2007, process containers were renamed control groups (cgroups) to avoid confusion with the word container.
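
As a rough sketch of what cgroups make possible, the snippet below caps a process’s memory through the cgroup v2 interface files. It assumes root privileges, a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the memory controller enabled for child groups, and a hypothetical group named demo:

```python
import os

# Hypothetical control group; creating the directory creates the cgroup.
CGROUP = "/sys/fs/cgroup/demo"
os.makedirs(CGROUP, exist_ok=True)

# Cap the group's memory at 256 MiB via the cgroup v2 memory controller.
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))

# Move the current process into the group; its memory usage is now limited.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```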

In 2008, cgroups were merged into Linux kernel 2.6.24, which led to the creation of the project we now know as LXC. LXC stands for Linux Containers and provides virtualization at the operating system level by allowing multiple isolated Linux environments (containers) to run on a shared Linux kernel. Each of these containers has its own process and network namespace.

In 2013, Google changed containers once again by open-sourcing its container stack as a project called Let Me Contain That For You (LMCTFY). Using LMCTFY, applications could be written to be container-aware and thus able to create and manage their own sub-containers. Work on LMCTFY stopped in 2015, when Google decided to contribute the core concepts behind it to the Docker project libcontainer.

The Rise of Docker

Docker was released as an open-source project in 2013. It provided the ability to package containers so that they could be moved from one environment to another. Docker initially relied on LXC technology, but in 2014 LXC was replaced with libcontainer, which enabled containers to work with Linux namespaces, control groups, capabilities, AppArmor security profiles, network interfaces, and firewall rules. Docker continued its contributions to the community by including global and local container registries, a RESTful API, and a CLI client. Later, Docker implemented a container cluster management system called Docker Swarm.
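
To get a feel for that packaging model, here is a minimal sketch using the Docker SDK for Python (the third-party docker package, a later addition rather than part of the original release), which talks to the Engine through the same RESTful API the CLI client uses. It assumes a running Docker Engine and network access to pull the public alpine image:

```python
import docker

# Connect to the local Docker Engine using environment defaults
# (the same endpoint the `docker` CLI talks to).
client = docker.from_env()

# Pull the small Alpine image from the public registry and run a one-off
# container; the packaged image runs the same way on any Docker host.
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```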

Though we could go into container cluster management by diving into Docker Swarm, Kubernetes, and Apache Mesos, for the It’s Okay to Be New series, Docker will be our stopping point.

Our next step will be to explore Linux Academy’s Docker Deep Dive course so we can get some real, hands-on experience using Docker. After that, we will move on to the Docker Certified Associate course and put our knowledge to the test.
