Archive for the ‘Packaging’ Category
A short continuation of the previous post on KDEDIRS. So, after the KDEDIRS game, I was able to get most of the old programs running. But I had some trouble with the newer ones. KDE is slowly moving all their PIM applications to this framework called Akonadi. As of 4.4, the contacts were maintained by Akonadi.
Even though I don’t use much of KDE PIM, the pieces notoriously all integrate with each other, and KDE would constantly try to launch the Akonadi server, even at login (I have since disabled the offending KRunner plugin). Each time, it would fail, complaining that the D-Bus services were not configured, among other problems.
D-Bus is the IPC mechanism behind the modern free desktop. It was inspired by KDE3’s old DCOP system and GNOME’s CORBA implementation, and has since replaced both of its predecessors. Now, D-Bus has this concept of services: service files tell the bus how to automatically launch a program when a client connects to its well-known name.
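For reference, a session service is declared by a tiny .service file in a dbus-1/services directory on the data path. A sketch of what one for Akonadi’s control service would look like (the Exec path is an assumption; it varies by distribution):

```ini
# ~/.local/share/dbus-1/services/org.freedesktop.Akonadi.Control.service
[D-BUS Service]
Name=org.freedesktop.Akonadi.Control
Exec=/usr/bin/akonadi_control
```

When a client asks the session bus for org.freedesktop.Akonadi.Control and no one owns that name, the bus runs the Exec line itself.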
While the services for the system bus are a hopeless cause for me, I should be able to influence my session bus as I wish.
dbus-daemon’s man page does claim to follow the XDG Base Directory Specification, so everything should Just Work: I set XDG_DATA_DIRS, and D-Bus picks up my session service files.
But it doesn’t. Inspecting the process’s environment (via /proc/PID/environ) reveals that my changes don’t take effect.
The problem: D-Bus is not launched by my .xsession, where all the magic happens. D-Bus is launched by the login manager! (Well, indirectly.) In /etc/X11/Xsession.d/ are files that get sourced by your login session. In particular, one of them inserts dbus-launch into the startup sequence before I ever get to do anything. It is conditional on a STARTDBUS variable, but I am unaware of any way to modify that for my session alone. Removing this script or otherwise messing with it is not fair game; the goal is that I should be able to log back into the system KDE safely, having not modified any files it cares about.
So, lacking a better way to do this, my KDE 4.4 startup script contains this terrible little hack:
# Kill D-Bus
killall -u "$USER" dbus-daemon
# Launch KDE
exec dbus-launch --exit-with-session startkde
At some point when I have time, I’ll investigate how NixOS manages this. I imagine they patch the display manager or session scripts at some level.
Continuing from the previous installment on lockers.
So, you would think that, with the previous path setup alone, things would Just Work. Of course, there’s a minor issue of needing a newer Qt than Ubuntu provides in Karmic, but that’s easy to fix with another locker. In fact, one could even download the LGPL SDK installer for Linux and use the resulting folder as-is as a locker.
And, indeed, this mostly worked. However, I did not simply want to run my own build of a desktop. I wanted my original software to still work, but use the newer libraries. There, I ran into a problem. If I tried to launch yakuake, a terminal that I like to use for things like zephyr, I got this strange error:
Well, that’s a bother. To understand what happened, let’s look at how KDE applications locate files.
At the heart of the core kdelibs library is KStandardDirs. (KDE’s API pages are down right now, so I shall direct you to this mirror a developer set up.) When a KDE application wishes to locate a file, it does not hard-code a path or use a compiled-in PREFIX value. Instead, it asks KDECore to do the lookup: you provide a resource type (such as config) and a file path, and KStandardDirs then goes and locates it for you.
Reading down the docs a bit, we see that the class works by checking a set of registered suffixes for the resource type against a set of roots. (It also does some other magic, like appending the application name for some resources.) These roots include the compiled prefix and the entries of a colon-separated variable, KDEDIRS. The compiled prefix is the one kdelibs was built with, not the application’s. As I was using my own KDE, of course it could not find Yakuake’s files. Aha! So I add the system root to KDEDIRS and everything works… or so I thought.
Bah! What’s going on?
Well, if we look at the set of prefixes, the standard suffix for data is share/apps. This fairly KDE-specific namespace in a global install gets stuffed under /usr/share/apps, which is offensive to distributions, so they like to redirect it to /usr/share/kde4/apps. A few other directories get a similar treatment. In Ubuntu’s case, a snippet from /usr/share/pkg-kde-tools/makefiles/1/variables.mk reveals the cause:
# Standard Debian KDE 4 cmake flags
DEB_CMAKE_KDE4_FLAGS += \
	-DCMAKE_BUILD_TYPE=Debian \
	-DKDE4_BUILD_TESTS=false \
	-DKDE_DISTRIBUTION_TEXT="Kubuntu packages" \
	-DCMAKE_SKIP_RPATH=true \
	-DKDE4_USE_ALWAYS_FULL_RPATH=false \
	-DCONFIG_INSTALL_DIR=$(DEB_CONFIG_INSTALL_DIR) \
	-DDATA_INSTALL_DIR=/usr/share/kde4/apps \
	-DHTML_INSTALL_DIR=/usr/share/doc/kde/HTML \
	-DKCFG_INSTALL_DIR=/usr/share/kde4/config.kcfg \
	-DLIB_INSTALL_DIR=/usr/lib \
	-DSYSCONF_INSTALL_DIR=/etc
My kdelibs, however, were compiled directly from upstream sources (in fact, I compile from the 4.4 branch of a git-svn checkout and hack on it myself). Moreover, these flags only change the compiled-in install locations; they do not register the Kubuntu directories as standard suffixes. (Kubuntu also carries a patch that changes the system-wide FindKDE4Internal.cmake. It may actually register suffixes; I’m not sure.) When using the system kdelibs, these compiled values do their job and everything works fine. However, this makes the system KDE files special in that they are only a priori accessible via the system kdelibs. While I can inform KDE of the system root, the suffix is wrong.
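To make the failure concrete, here is a rough shell emulation of the root-times-suffix search, assuming only the upstream share/apps suffix (the data file name is a made-up example, not Yakuake’s real one):

```shell
# Emulate KStandardDirs resolving a "data" resource: try each KDEDIRS root
# joined with the compiled-in suffix. The file name here is hypothetical.
KDEDIRS="$HOME/pkg/kde-4.4:/usr"
suffix="share/apps"               # standard upstream suffix for "data"
relpath="yakuake/example.desktop" # hypothetical file the application asks for

found=
IFS=':'
for root in $KDEDIRS; do
  if [ -e "$root/$suffix/$relpath" ]; then
    found="$root/$suffix/$relpath"
    break
  fi
done
unset IFS

# On Kubuntu the file actually lives under /usr/share/kde4/apps, so the
# search with the upstream suffix comes up empty.
echo "${found:-not found}"
```

Adding /usr as a root only helps if the suffix matches; /usr/share/apps is checked, /usr/share/kde4/apps never is.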
So, I add a little hack. I have yet another locker, kde-kubuntu-fake, which contains a fake additional root for each of those directories. This contains merely a symlink farm:

kde-kubuntu-fake
`-- share
    |-- apps -> /usr/share/kde4/apps/
    |-- config -> /usr/share/kde4/config
    |-- config.kcfg -> /usr/share/kde4/config.kcfg
    `-- doc
        `-- HTML -> /usr/share/doc/kde4/HTML
which also gets added to my KDEDIRS. Finally, after all that work, I can launch Yakuake.
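For the record, such a farm is trivial to regenerate; the targets are the Kubuntu paths quoted above, and ln -s happily creates dangling links, so this works regardless of what is installed:

```shell
# Build the kde-kubuntu-fake root as a symlink farm pointing at the
# Kubuntu-relocated directories. -n replaces an existing link in place.
root="$HOME/pkg/kde-kubuntu-fake"
mkdir -p "$root/share/doc"
ln -sfn /usr/share/kde4/apps        "$root/share/apps"
ln -sfn /usr/share/kde4/config      "$root/share/config"
ln -sfn /usr/share/kde4/config.kcfg "$root/share/config.kcfg"
ln -sfn /usr/share/doc/kde4/HTML    "$root/share/doc/HTML"
```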
So, hopefully this will help convince you that random distribution patches like this are sketchy. Admittedly, given the mistake of trying to mush all packages into one single hierarchy under /usr, the namespace poisoning of /usr/share/apps is a little obnoxious, and this is a defensible change. Still, such things break compatibility between distributions and upstream and make it very hard for a unified free desktop platform to ever emerge from this tangled mess we have now.
Spring break is over. As expected, I didn’t manage to finish most of my projects for the week, but I did manage one of them: my laptop is now running an install of KDE 4.4 parallel to the system 4.3 provided by Kubuntu.
Why I did this was described previously. Actually managing it was not that simple. We do not live in a perfect world, and indeed it’s silly to expect all of KDE to run without any root activity — any setuid portions, or global D-Bus configuration, for instance. Still, I wanted to try. For this and the next few posts, I’ll talk about the setup.
I have been managing software out of my home directory for quite some time now. To that end, I’ve built up a collection of functions in zsh, my primary shell. (There’s no particular reason why they’re in zsh; I just prefer it to bash.) They are inspired by the software lockers of MIT’s Athena system and the runtime setup of Zero Install. At some point, I expect this system will converge to something that smells very much like part of Zero Install.
Any time I need some software which Ubuntu does not provide, I build it myself (or, if I’m lucky, find a binary to unpack), isolated somewhere in my home directory. The current convention so far has been ~/pkg/PKGNAME for random software or ~/proj-build/PROJECTNAME for things I’m working on, but I’m not particularly happy with this naming scheme. (It’s come up mostly by accident. I’ll likely move everything into ~/pkg or something.) Every locker approximates a UNIX directory tree.
A set of (fairly hacky) shell functions then inject subdirectories, as appropriate, into the environment when a locker is to be added. Unlike Zero Install, the variables are not specified by the locker. Instead, the shell script will look for, e.g., bin, and add it to, e.g., PATH, if it exists. This was mostly done out of laziness; at some point, variable choices will become the locker’s business. Current variables set include
There are two commands, borrowed from Athena, to add a locker to an environment. The first is dir_run, which runs a command with the given locker injected. The second is dir_add, which injects a locker into your current environment. I primarily use dir_run with fancy completion scripts, but my dot files dir_add any lockers which I use often or want injected into my standard environment.
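A minimal sketch of the two commands, assuming only PATH and LD_LIBRARY_PATH handling (the real scripts cover more variables and carry completion support):

```shell
# Hedged sketch of the Athena-style locker commands; only PATH and
# LD_LIBRARY_PATH are handled here.
dir_add () {
  local locker=$1
  [ -d "$locker/bin" ] && PATH="$locker/bin:$PATH" && export PATH
  [ -d "$locker/lib" ] &&
    LD_LIBRARY_PATH="$locker/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" &&
    export LD_LIBRARY_PATH
  return 0
}

# Run a single command with the locker injected; the subshell keeps the
# caller's environment clean.
dir_run () {
  local locker=$1; shift
  ( dir_add "$locker" && exec "$@" )
}
```

Usage would look like `dir_run ~/pkg/git git --version`, or `dir_add ~/pkg/git` from a dot file to make the locker permanent for that shell.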
So far, this setup has allowed me to run my system okular against a development build of poppler when I hack on it. It’s allowed me to maintain a local build of git. It’s allowed me to parallel-install multiple snapshots of Chromium. It’s even allowed me to, via dir_add, replace my system’s PyKerberos, when a bug in the packaged version prevented system software from using it. And, indeed, it allows me to run KDE out of my home directory.
Of course, building KDE for this wasn’t simply a matter of stuffing things into a folder and launching it. There were numerous problems along the way which I had to address, which I’ll describe in later posts.
If anyone wants my hacky scripts, they can be found in my athena Public. A disclaimer: they are hairy and very much need a cleanup. Also, they might need tweaks to work well in bash; zsh lets me be lazy about quoting arguments. All that said, it’s sufficient for my needs and, despite being far from a true package management system, I think superior to anything dpkg offers when it comes to maintaining different software configurations in parallel.
It’s no secret that I am unhappy with package management on Linux. One of these days, I’ll gather coherent enough thoughts on how things should work. In the meantime, here’s a glimpse at one of the biggest problems today.
If you look at the package management stacks in use on Linux today, be it apt/dpkg or yum/rpm or whatever, they share a fundamental assumption: there will only ever be one version of any package on the system. I argue that this mode of thinking is simply incorrect for a package manager on the free desktop. We need a package manager which fundamentally assumes parallel installation of packages. While correct parallel-install semantics are difficult, the flaws it fixes are well worth the effort.
One important use for parallel installation is testing. The user-base on any platform is different, and multiple configurations should be tested. One possibility is to use a separate machine, but this is painful. Indeed, Microsoft has not solved this problem; web designers wishing to run IE 6 and 7 concurrently were recommended to use a virtual machine.
When I used Windows, I used Portable Firefox for this. On Linux, I similarly download the official tarballs and run them out of my home directory, taking care not to eat my profile. But why should I manually manage this when I have a state-of-the-art (if the rumors on every Ubuntu advocate’s top-10 list are true) package manager on my system?
Related to the needs of testing environments is the ability to fall back when software breaks. For instance, my current browser of choice is Chrome. Now, Google provides an apt repository for Chrome, and yet I use the Chromium nightlies. The apt repositories force me into a single-install setup. Chrome is a very fast-moving target, and things sometimes break. Yet I appreciate the pace, as features I require, such as client-side certificates, get added quickly. There is a simple solution with parallel installation: I keep my old version around when updating to a new build. If the new one proves unstable, I just revert to using the old one.
(These days things are less unstable than before. Should Youtube’s HTML5 video fix its quality problems, I’ll likely start using Chrome proper. Of course, my Chromium setup still parallel-installs, so I can rollback at will.)
But why bother? I can just uninstall the new version and install the old version. In practice, this doesn’t work. A month or so ago, KDE 4.4 was released. I, being the avid KDE user that I am, was eager to try it. Well, Kubuntu offered backported packages… why not? I can always go back to 4.3 if I want, right? To make a long story short: no. When I rolled back, dpkg and apt got woefully confused, and I ended up reinstalling most of the software on my machine. I am now in the process of creating a KDE 4.4 install to run out of my home directory. When the last few remaining kinks are ironed out, I’ll describe the setup.
As they say, the best code is code you don’t have to write. Likewise, the best rollback procedure is one you don’t have to perform.
Finally, parallel installation acknowledges a fundamental fact of library compatibility: no two different pieces of software are completely compatible. Distributions love to force every package to use the same copy of every library. Most of the time, this is a sound and sensible goal. But it often falls short of reality. Even if the author of a library is very careful about keeping API and ABI working, programs may depend on subtle effects.
Take, for instance, this hypothetical situation: libfoo has a bug which causes some functionality of bar to fail. bar eventually diagnoses this and perhaps even sends a patch to libfoo. In the meantime, bar should still work, so bar adds a workaround for the bug. The workaround is, sadly, not compatible with the fix, so it is conditionalized on libfoo’s version. Now a distributor comes along, packages an older libfoo for stability, but backports the fix; bar fails to work on that distribution. Think I’m exaggerating? Search for “Debian” in these Eclipse release notes. Indeed, their solution is to install a different version of GTK+. Would it not be better if we could parallel-install GTK+ and use the specially crafted one only for Eclipse?
Sometimes a package may even be incompatible with itself. SQLite has countless incompatible build options. The only possible solution is for every program to bundle its own SQLite in parallel.
Linux package managers of today are inadequate for supporting a platform for developers, content producers, and users. We need package managers which allow as much of the system as possible to be parallel-installed to support the evolving, disorganized nature of the Linux desktop.
This is somewhat of an old story that happened back in the fall, but people often ask me what became of it, so it is probably worth recounting everything…
Have you ever attempted to install a new package on your favorite Debian/Ubuntu system only to find your install stalling on a curious “Reading database” step? Some time ago, I was investigating dpkg performance issues on an older implementation of Debathena’s login chroot. While the inherent problems of the previous LVM-based chroots have since been fixed with an aufs-based solution, the chief cause still remains: dpkg’s poor I/O behavior.
An important job of a package manager is to answer the queries “where did this file come from?” and “what files did this package install?” dpkg maintains this information in a very simple scheme. In /var/lib/dpkg/info (more specifically, the info subdirectory of the admindir, which may be set with the --admindir option or at compile time), dpkg installs a listfile for every package: packagename.list. These listfiles contain a newline-separated list of every path owned by that package. A fairly effective format — the information needed is stored, and consistency is (in theory) easy to maintain with fsync, modulo the terrors of maintaining consistency on a filesystem — but for one flaw: no consideration whatsoever is given to the first of our two queries.
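The listfile format is simple enough that the ownership query can be emulated with grep. A self-contained sketch with a fake admindir (on a real Debian system you would point at /var/lib/dpkg/info instead; real dpkg builds an in-memory table rather than shelling out like this):

```shell
# Emulate dpkg's "who owns this file?" query directly over listfiles.
# A tiny fake admindir keeps the sketch self-contained.
admindir=$(mktemp -d)
printf '/.\n/bin\n/bin/ls\n/bin/cat\n' > "$admindir/coreutils.list"
printf '/.\n/bin\n/bin/bash\n'         > "$admindir/bash.list"

# Whole-line match (-x) across every listfile, printing matching files (-l).
owner=$(grep -lx '/bin/ls' "$admindir"/*.list)
basename "$owner" .list    # prints: coreutils
```

Note that even this toy version must touch every listfile; that is the flaw in question.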
dpkg maintains no additional cache or index. Given a file, to find the package owning it, dpkg loads every listfile (via ensure_packagefiles_available) for every installed package and stuffs them into a giant global hash table. Once the table is built, lookup is easy, but building the table is extremely expensive. This is what “Reading database” means: in arbitrary order, dpkg sequentially reads each listfile from disk. On a cold cache, this is an expensive process fraught with seeking.
To see just how expensive it is, one can experiment with dpkg -S. It does little more than read the listfiles into memory (it implements the “given a file, find me the owning package” query), so it makes a perfect way to isolate the bottleneck.
On my machine:

# echo 3 > /proc/sys/vm/drop_caches
% time dpkg -S /bin/ls
coreutils: /bin/ls
dpkg -S /bin/ls  0.58s user 0.40s system 3% cpu 26.685 total
To prove that we are indeed bound by disk, run the command a second time on warm cache, and the results are much faster.
% time dpkg -S /bin/ls
coreutils: /bin/ls
dpkg -S /bin/ls  0.24s user 0.09s system 100% cpu 0.329 total
For so little data, this is hardly acceptable. Your typical off-the-shelf relational database will handle orders of magnitude more data and still perform this query quickly (it’s a simple index lookup). In fact, I proposed on the list that dpkg add some form of database-backed index, or alternatively a dumber cache. (I wrote a proof-of-concept using a tar file.) Sadly, there was little interest. The internal abstractions of dpkg (what little there are) are poorly suited for a database or other index, so I would prefer not to start such a large project without the blessings of the dpkg team. (I did, however, end up getting two refactoring patches upstream in the process. Now Launchpad thinks I am active in dpkg.) Another developer had apparently attempted a similar job in the past and was shot down.
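The idea behind the tar-based cache amounts to little more than this sketch (the cache path and rebuild policy are assumptions, not what my proof-of-concept or dpkg actually ships; the point is that one sequential read replaces thousands of scattered ones):

```shell
# Pack every listfile into a single archive so a cold-cache "Reading
# database" becomes one sequential read. Self-contained with a fake
# admindir; on Debian the real one is /var/lib/dpkg/info.
admindir=$(mktemp -d)
cache=$(mktemp)
printf '/bin/ls\n'   > "$admindir/coreutils.list"
printf '/bin/bash\n' > "$admindir/bash.list"

# Rebuild the cache (this would happen on every install/remove).
tar -C "$admindir" -cf "$cache" .

# Cold-cache load: stream the whole archive instead of seeking per file.
tar -tf "$cache"
```

The cache must of course be invalidated whenever the listfiles change, which is the usual price of any cache.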
But there is still hope! A patch appeared on the dpkg developer list some time after I made my original proposal (I would like to think I had some influence in its making, but probably not) that bandaged the problem in a slightly less intrusive, but much more hackish, way: perform a FIBMAP (later changed to FIEMAP) ioctl on every file and sort by first extent to obtain the reading order. Of course, this really wants asynchronous I/O, but that does not yet work, to the best of my knowledge. As of writing, this patch has yet to be merged, but I expect it will be; dpkg people just merge patches very slowly.
The author claims a speedup from 27 seconds to 8 seconds on cold cache. (My original tar-based proof-of-concept manages a little over a second.) So, expect a slightly faster dpkg in the next few years. Possibly in time for M…ad Monkey? I’ll probably do a little happy post here when that finally gets merged. It somewhat saddens me that, should I ever get around to trying again, I will have a much harder time justifying the re-architecting necessary to introduce a proper index, but oh well.
And, in keeping with the previous post’s conclusion, how should this actually be done? Well, if we instead organize files under their owning package, both queries are trivial. Furthermore, the install process no longer even needs to make the once-difficult query! dpkg requires the hash table of files primarily for file conflict detection, but in this setup, there is no such thing as a file conflict. It, of course, opens other cans of worms, but that’s a topic for later.
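To illustrate, under a per-package layout both queries collapse into path manipulation. The /pkgs root and layout here are hypothetical (a mktemp directory stands in for it so the sketch is self-contained):

```shell
# Hypothetical per-package layout: <root>/<package>/<installed files>.
pkgs=$(mktemp -d)    # stand-in for a real /pkgs root
mkdir -p "$pkgs/coreutils/bin"
touch "$pkgs/coreutils/bin/ls"

# "What files did coreutils install?" is a directory walk...
find "$pkgs/coreutils" -type f

# ...and "who owns this file?" falls out of the path itself.
path="$pkgs/coreutils/bin/ls"
echo "${path#"$pkgs"/}" | cut -d/ -f1    # prints: coreutils
```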