morphon3

The Immutable Desktop - Part 2

Continued from Part 1:

A few years ago I saw a presentation (I wish I could find it now) from someone who had managed to get all of their desktop applications running as Docker containers. It was a kind of "let's Docker ALL THE THINGS" talk, and it was quite entertaining to watch. There were so many hoops to jump through to get all the permissions right and to make features like drag-and-drop work. It was charming, but mostly just a proof-of-concept: you could replace the desktop applications that would otherwise be running natively in the distribution with the same applications running in containers.
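To give a sense of the hoops involved: a GUI application in a container typically needs the host's display socket, a shared folder, and a GPU device passed through by hand. This is only an illustrative sketch (the image name is made up), not the presenter's actual setup:

    # Share the host's X11 socket, a documents folder, and the GPU with the
    # container. "example/desktop-app" is a hypothetical image name.
    docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -v "$HOME/Documents:/home/user/Documents" \
        --device /dev/dri \
        example/desktop-app

And that still leaves audio, drag-and-drop, and theming to sort out.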


Many in attendance thought it was a gimmick. Why would anyone do this?


Here is a big reason: incomplete primary repositories.


The two largest distribution repositories are the AUR and Nixpkgs. They have tens of thousands of packages ready to go; probably everything you want is in there. But what if you don't want to run Arch (to get access to the AUR) or NixOS? What if you're using Fedora or Ubuntu? Fewer packages. What if you're using something more niche, like Oracle Linux? What if it's VERY niche, like Slackware or Void? Package availability can become a serious issue. Did some volunteer package what you need for the distribution of your choice? Is it the latest version (or close to it)? Maybe. Hopefully. Even the AUR and Nixpkgs are not always up to date with what the developer has released. If what you want isn't packaged, you'll need to do that yourself (and maybe become the volunteer who packages it for everyone else).
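If you want to see the gap for yourself, the quickest check is to search each distro's own tooling for something you care about (the package name below is a placeholder):

    yay -Ss some-package              # Arch: official repos plus the AUR
    nix search nixpkgs some-package   # Nixpkgs (flake-enabled nix)
    dnf search some-package           # Fedora / RHEL family
    apt search some-package           # Debian / Ubuntu

The same query frequently turns up results in the AUR or Nixpkgs and nothing at all elsewhere.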


The traditional solutions to this problem have their own sets of problems. There are supplemental repositories (some of the better-known ones are EPEL for RHEL and RPM Fusion for Fedora; openSUSE has a rather large build service for this kind of thing as well). Perhaps what you want is in there. If not, perhaps the developer has kindly provided their own repository. Back in the Ubuntu heyday, when it was the desktop distro of choice, many developers maintained their own PPA that let users install the software and keep it automatically updated. I remember having 10-15 PPAs on my primary desktop machine. Since Fedora only ships fully FOSS packages in its repos, most Fedora users will have at least a few things pulled from RPM Fusion.
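For reference, wiring up those extra sources looks roughly like this; the PPA name is a placeholder, and the RPM Fusion line follows the pattern documented on their site:

    # Ubuntu: add a developer's PPA (placeholder name), then refresh
    sudo add-apt-repository ppa:some-developer/some-app
    sudo apt update

    # Fedora: enable the RPM Fusion "free" repository
    sudo dnf install \
        https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm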


That, of course, is the best-case scenario: some secondary, official-adjacent repo has what you need. Minor-version updates of the distribution will probably be fine. Perhaps you can even get away with a major-version update, but these will often break, or (depending on the distro) not be possible until you first uninstall the packages that did not come from the primary repo. It's often recommended to make a backup and reinstall the desktop for a major update. But again, this is the best-case scenario.


The next-best options are much worse for keeping the system sane. You can sometimes find that the developer has packages available for download. You simply grab the RPM or DEB (good luck finding anything else) and install it. If all the dependency checks pass, your package is installed. Keep in mind, however, that these packages do not simply extract files to their destinations; they also run post-install scripts. Official repos maintain some quality control over what those scripts do, but grabbing one straight from a developer's site is a bit risky. That script might overwrite something important, and a typical user won't know until they hit a failed update or a segfault in some other application.
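If you do go this route, it's worth reading those scripts before installing. Both major formats let you inspect a package without running anything (the filenames here are placeholders):

    rpm -qp --scripts some-app.rpm      # print an RPM's pre/post install scripts
    dpkg-deb -e some-app.deb control/   # extract a DEB's maintainer scripts...
    cat control/postinst                # ...then read the post-install script, if it ships one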


The options after this get worse very quickly: a binary archive (UnrealEd is distributed this way, as a giant ZIP file with some, but not all, dependencies inside), a binary installer (DaVinci Resolve, meant to be installed on CentOS 7), or good old "make install" from a source tar.gz (many, many smaller projects and command-line utilities are installed this way). None of these use the distro's packaging system at all, and they have no way to keep themselves updated or to prevent themselves from breaking when the distro is updated. Binary installers and "make install" often make changes to the system that the user may not want (and that may break other software).
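The classic source-install dance looks something like this; about the only mitigation is pointing the install at your home directory so you at least know where the files went (the project name is a placeholder):

    tar xf some-tool-1.2.tar.gz && cd some-tool-1.2
    ./configure --prefix="$HOME/.local"   # keep files out of the system directories
    make
    make install                          # no package manager record, no clean uninstall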


The general rule of thumb is this: The more a system deviates from official repositories, the greater the fragility of that system.


Now, don't get me wrong: the current state of things isn't "broken" and desktop Linux isn't a "burning platform" with intractable problems. It is, in a way, normal to have to deal with these issues. Windows, for example, has similar problems: an official repo (the Windows Store), plus software installed from packages provided by the developer (the .EXE downloaded from the developer's site), which must be installed with administrator privileges (click "Yes" at the UAC prompt), can overwrite bits of the system, has to find some way to update itself, and so on. Windows breaks often enough that it now includes a "reinstall" option inside the newest Settings app to "refresh" the Windows experience! That almost seems like an admission of failure, but it's the natural result of installing programs straight from the developer with system-level access.


Traditional desktop Linux is BETTER than Windows in this regard, since the large distros have extensive official software repos, and there are basic users (perhaps wanting merely Chrome, LibreOffice, and an email client) who may use ONLY what is available in the official repo. This is my brother's use case, and it's why he enjoys the incredible stability and efficiency of his Linux desktop. These users have a delightful experience! The system magically stays updated, and they can go from one major version to the next with no drama. For them, there is no problem to solve.


But... What if they want to use the newest version of LibreOffice and their point-release distro won't ship it until the next version in 4 months? What if they find some software they want to use that isn't in the official repos? What if they need to use an application that absolutely requires libXYZ.3.2.15 because later versions of that library break compatibility with some crucial plugin?


And then we get to an even bigger question: should the end user REALLY be selecting their distribution because its repos have the greatest overlap with the software (and the right versions of that software) they want to use? Remember, the more the user deviates from those repos, the more fragile their system becomes - the more likely it is that something will break, or that they will need to re-install ("refresh"ing the experience, a la Windows).


At heart, this problem is about applications sharing a userland with each other. The default for desktop Linux has been to share as much as possible (compile everything against the same libraries). This works well as long as the distro maintainers can compile everything themselves, but they can't package everything: there will always be some application that isn't included, and the version they do ship may not be the very latest release of your desktop software of choice. Naturally, proprietary software will not be packaged this way either.


Containers solve this issue.


There has already been some energy put into building containers to run proprietary software. DaVinci Resolve is a prime example: it officially supports only CentOS 7 (a much older release), and getting it to run on newer versions (or other distros) is hit-or-miss. The instructions for running Resolve in a container include this rationale, which is worth quoting in full:

Besides running DaVinci Resolve in its actual intended operating system (CentOS) without ever leaving the comfort of your own non-Centos Linux machine, containers offer some other big advantages:
For one, you can maintain multiple versions of DaVinci Resolve on your system without having to reformat everything and reinstall. How? Well, say a new version comes out and you want to test it out-- you can just pop in a new resolve .zip, rebuild a new container image with a single command, and quickly give it a spin-- using your existing projects and media. If you don't like what you see, you can instantly revert to the previous version (assuming the new version didn't just trash your project or anything, so back up first!)
You can also (theoretically, I haven't tried this) switch between the free and paid version or, hardware allowing, run them both simultaneously-- though maybe not on the same project files at the same time. That could be nuts.
Containerized, DaVinci Resolve can also be isolated from the Internet while the rest of your computer continues to be connected. And once the container image is built, it can also be quickly moved onto another machine without having to re-set it all up again.

Sounds like a win-win to me. Even if you were running CentOS 7 (the only officially supported distro), why not run this mission-critical software in a container and reap these benefits? The same rationale applies to FOSS software. Take something like Freyr by miraclx, a command-line utility for downloading music files. It's written in 12k lines of JavaScript and Python and is exactly the kind of thing that would have been built with "make install" 10-15 years ago. Mainstream distros are unlikely to carry a utility like this in their official repos. How does the developer recommend running it? Using a pre-built OCI image on Docker. That is, he distributes a container (with Alpine as the base userland) with all the dependencies already included. Why wrangle with the tools needed to compile it yourself? You'd have to make sure every version of every dependency was compatible. And what if you had to modify the build to use some alternate library provided by your distro of choice?
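Running a tool shipped this way is typically a one-liner: pull the published image and mount a host directory for the output. The image name below is a placeholder - the project's README gives the real one and the exact flags:

    # Mount the current directory so downloads land on the host, not inside the container.
    docker run --rm -it -v "$PWD:/data" example/music-downloader --help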


Once I ran it in a pre-built container I started to wonder: why would I do it any other way? I literally don't have to worry about any compatibility issues other than the kernel. It starts instantly. It doesn't use much more memory than it would running natively (for a modern desktop machine the difference is negligible). I didn't need to give it root privileges to install. I can use standard container tools to delete it at any time without worrying that it left something behind in /usr/lib. And for the developer this is like a dream come true: there are ZERO distribution-specific packages for his project. He doesn't have to maintain a bunch of different RPM, DEB, or whatever packages and then field requests from people on Tumbleweed asking why the RPM doesn't work for them, or asking when an update will arrive once Ubuntu releases a new version. He doesn't have to do any of that. He gives you the option: build from source, or use this Docker container. Easy. It takes a few commands on the CLI, but no more than the copy-and-paste commands needed to add a PPA or run "sudo dnf install".
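To make "delete it at any time" concrete: the container tooling tracks everything it pulled in, so cleanup is a couple of commands rather than a hunt for stray files (the same idea works with docker or podman):

    podman ps -a            # list any leftover containers
    podman rm <container>   # remove a stopped container
    podman images           # list downloaded images
    podman rmi <image>      # delete the image - nothing is left behind in /usr/lib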


For proprietary software the benefits are even greater; consider what happened with Linux gaming. A studio would take the time (or pay someone) to port their game to Linux. Great - now people could play Unreal Tournament and zap each other at their LAN parties. But distros moved on, and gradually these games, expecting the old libraries, no longer worked on the new userland. Valve started promoting their Linux container runtimes (Sniper and Soldier) to give game developers a (nearly) unchanging userland to target when producing Linux binaries. Steam installs these runtimes and coordinates game launches, making the whole process nearly invisible; most Linux gamers aren't even aware they're there except when they see them being updated from time to time (Steam Deck users constantly asked about them during the first few weeks of the device's release). Now those games can run on newer and newer distros because they're built against a userland that isn't a moving target. The Steam Linux runtimes are probably one of the most important pieces of the puzzle in getting Linux gaming where it is today. No game developer wants to constantly port their game to newer versions of its dependencies, or maintain a compatibility list of distros - especially since they don't have to do that for Windows. With the Steam runtimes, they don't have to do it on Linux either.


Linux gaming is a massive success story for containerized applications. For it to work, the developer has to target a known container runtime, and the user needs a software launcher that keeps those runtimes updated and matches each binary to its runtime while keeping the whole process transparent. For Linux gaming, that launcher is Steam.

