
The Immutable Desktop - Part 1

For the past several months I've seen many posts and comments discussing immutable distros - questioning why they exist (a big question around the forthcoming Snap-only Ubuntu), what problems they solve, and why anyone should care. I've also seen quite a lot of discussion of Flatpak, Snap, and AppImage - again, with many questioning why they exist and what benefit the world gains from yet another package format (complete with snarky XKCD references). I'm writing this series to explain what is going on, and perhaps to convince some of the Linux desktop users out there that this IS the future, and that it's not a step backwards for the most powerful desktop OS ever developed for the PC.


The Container


Quick review: a Linux distro is made up of two things - a kernel and a userland. The kernel interacts with the hardware (managing memory, storage device access, shuffling processes across available CPU cores, and so on) and offers very stable interfaces to the applications that run on it. The userland is everything running on top of the kernel: all the applications, shells, commands, and libraries outside the kernel itself. Originally, distros all had a single userland (one big shared set of libraries and utilities), but Linux has gradually gained very extensive support for running MULTIPLE userlands on the same kernel. This evolving ability stretches from chroot in the late 1970's, through FreeBSD's "jails" system in the early 2000's and cgroups in kernel 2.6.24, to the explosion of interest when Docker was released in 2013.

Containers allow a system to have applications running within their own userlands while sharing the same kernel - which means CPU, memory, and other hardware access can be shared. This is different from virtualization (where the kernel is not shared, and memory has to be dedicated to each VM): each of these applications with its own userland is managed by the same virtual memory system, the same scheduler, and so on. This is an oversimplification, but think of containers as halfway between everything sharing one userland (like a traditional distro) and virtual machines (where each VM is fully isolated from every other VM).
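To make that concrete, here's a minimal sketch (in Go, the language Docker itself is written in) of the kernel feature container runtimes build on: asking Linux to launch a process in its own namespaces. The particular flags and the choice of /bin/sh are just illustrative, and it needs Linux and root privileges to run:

```go
package main

// Illustrative sketch: launch a shell in its own UTS, PID, and
// mount namespaces. The child shares the host kernel but gets its
// own view of hostname, process IDs, and mounts - the raw material
// container runtimes are built from. Linux-only; run as root.
import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own process-ID space
			syscall.CLONE_NEWNS, // own mount table
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, `echo $$` prints 1 - the process believes it is the first thing running on the machine - yet no second kernel was booted and no memory was reserved up front, which is exactly the contrast with a VM.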

This is how server architecture is done today. Instead of spinning up a VM (which you can still do, of course), or installing a set of packages directly into the distribution (again, still possible), server workloads are created as containers (bundles of applications and their associated userland) that are then deployed on a host OS. Why? There are many advantages. Density is much higher than with VMs, since each container allocates only the memory and CPU resources it requires from a shared kernel. Builds are reproducible, since what happens in the developer's container is the SAME as what happens in the deployed one, regardless of the underlying distribution (even when the developer is using WSL). Containers can be started and stopped almost instantly (no VM to boot up or shut down), as though the software were "installed" on the base distribution, yet they remain isolated from each other and from the rest of the system. That means they can have userlands quite different from each other, or even conflicting userlands - one application might require a version of a library that is incompatible with another application's. Since each has its own userland, they can happily coexist on the same system.
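The "conflicting userlands" point is easy to demonstrate even without a container runtime. The sketch below is hypothetical - it assumes you have already extracted two different root filesystems (say, from Alpine and Debian images) into the named directories, and it needs Linux and root - but it shows the same command running in two entirely different userlands on one shared kernel:

```go
package main

// Run the same command chrooted into two different root
// filesystems. Each child process sees a different userland
// (different shell, libc, and libraries) while sharing the one
// host kernel. The rootfs paths are assumptions: pre-extracted
// directories, e.g. from Alpine and Debian images.
import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	for _, rootfs := range []string{"./alpine-rootfs", "./debian-rootfs"} {
		cmd := exec.Command("/bin/sh", "-c", "cat /etc/os-release")
		cmd.SysProcAttr = &syscall.SysProcAttr{Chroot: rootfs}
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", rootfs, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s", rootfs, out)
	}
}
```

Real container runtimes layer namespaces and cgroups on top of this basic idea, but the userland/kernel split is the heart of it.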


The result - running a server application no longer involves "installing" software onto the system (making sure that all the dependencies are accounted for, in the right place, and not conflicting with anything else on the system), but rather "deploying" a container that has everything in it already.
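In code terms, "deploying" looks something like the following sketch using the Docker Engine's Go SDK (github.com/docker/docker/client). The SDK's option types have shuffled around between releases, so take the exact names as approximate; the shape of the workflow - pull an image, create a container from it, start it - is the point:

```go
package main

// Deploying a workload: no dependency resolution against the host,
// just pull an image (the application plus its entire userland)
// and run it. Assumes a local Docker daemon is available.
import (
	"context"
	"io"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Fetch the image: the application AND the userland it was built against.
	rc, err := cli.ImagePull(ctx, "docker.io/library/alpine:latest", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	io.Copy(io.Discard, rc) // wait for the pull to finish
	rc.Close()

	// Create and start a container - nothing is "installed" on the host.
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "alpine:latest",
		Cmd:   []string{"cat", "/etc/os-release"},
	}, nil, nil, nil, "")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}
}
```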


I believe this new model for servers is why the recent RHEL source distribution changes have been met with more "meh" responses than would have been the case 5 or 10 years ago. Server systems might be running on a RHEL clone, sure - but the server application itself is distributed as an OCI image running on Docker/Podman. In this new world, having a RHEL-compatible distro is nice for finding admins with relevant experience, but the server application software isn't actually running on the distro - it's running inside its container, and the distro inside the container is probably something like Alpine (or something else equally small and efficient). If the software needs a RHEL environment, developers can simply build their containers from the Red Hat UBI (Universal Base Image) and deploy those - on the server distro of their choice.


This isn't some imaginary future. This is how things are done today. Many who use Linux on the desktop (my brother, for one) have never heard of containers and have no idea that the server landscape has changed so dramatically over the past 10-15 years. It's not relevant for them, doesn't show up in their news feeds, and yet this is big news. For one example (at a huge scale), take a look at FIS Global replacing entire mainframe setups with a swarm of Docker containers managed through Kubernetes. It's impressive stuff. The advantages of this approach are too great to ignore.
