Planet Debian

Planet Debian - https://planet.debian.org/

Michael Stapelberg: Linux package managers are slow

1 October, 2020 - 14:47

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix C for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               114 MB   33s               3.4 MB/s
Debian         apt               16 MB    10s               1.6 MB/s
NixOS          Nix               15 MB    5s                3.0 MB/s
Arch Linux     pacman            6.5 MB   3s                2.1 MB/s
Alpine         apk               10 MB    1s                10.0 MB/s

qemu (large C program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               226 MB   4m37s             1.2 MB/s
Debian         apt               224 MB   1m35s             2.3 MB/s
Arch Linux     pacman            142 MB   44s               3.2 MB/s
NixOS          Nix               180 MB   34s               5.2 MB/s
Alpine         apk               26 MB    2.4s              10.8 MB/s


(Looking for older measurements? See Appendix B (2019).)

The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.
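
You can verify these index sizes yourself; for example, this checks the size of Alpine’s index (mirror URLs change over time, so treat this as a sketch — a similar check against the repodata files dnf fetches should show where Fedora’s megabytes go):

% curl -sI http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz | grep -i content-length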

Of course, the extra metadata Fedora provides helps some use cases; otherwise, it would hopefully have been removed altogether. Still, the amount of metadata seems excessive for installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence the execution of arbitrary maintainer-provided code (hooks and triggers), all tested package managers install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By implementing only what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example (see the sketch after this list):

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
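
To make this concrete, here is a minimal sketch of such an install path, with invented paths, package names, and store layout (SquashFS as the image format, as snappy uses):

# fetch the package image into an append-only store; no fsync needed
curl -so /var/pkgstore/.tmp.ack-3.4.0.img https://example.org/pkg/ack-3.4.0.img
# activation is a single atomic rename; after a crash, leftover .tmp
# files can simply be cleaned up
mv /var/pkgstore/.tmp.ack-3.4.0.img /var/pkgstore/ack-3.4.0.img
# no archive extraction: loop-mount the image read-only
mkdir -p /run/pkg/ack-3.4.0
mount -t squashfs -o loop,ro /var/pkgstore/ack-3.4.0.img /run/pkg/ack-3.4.0
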
Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name         scope    package file format                 hooks/triggers
AppImage     apps     image: ISO9660, SquashFS            no
snappy       apps     image: SquashFS                     yes: hooks
FlatPak      apps     archive: OSTree                     no
0install     apps     archive: tar.bz2                    no
nix, guix    distro   archive: nar.{bz2,xz}               activation script
dpkg         distro   archive: tar.{gz,xz,bz2} in ar(1)   yes
rpm          distro   archive: cpio.{bz2,lz,xz}           scriptlets
pacman       distro   archive: tar.xz                     install
slackware    distro   archive: tar.{gz,xz}                yes: doinst.sh
apk          distro   archive: tar.gz                     yes: .post-install
Entropy      distro   archive: tar.bz2                    yes
ipkg, opkg   distro   archive: tar{,.gz}                  yes

Conclusion

As the current landscape shows, there is no distribution-scoped package manager that uses images and leaves out hooks and triggers, not even among smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

Appendix A: related work

There are a couple of recent developments going in the same direction:

Appendix C: measurement details (2020)

ack


Fedora’s dnf takes almost 33 seconds to fetch and unpack 114 MB.

% docker run -t -i fedora /bin/bash
[root@62d3cae2e2f9 /]# time dnf install -y ack
Fedora 32 openh264 (From Cisco) - x86_64     1.9 kB/s | 2.5 kB     00:01
Fedora Modular 32 - x86_64                   6.8 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         5.6 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 9.9 MB/s |  23 MB     00:02
Fedora 32 - x86_64                            39 MB/s |  70 MB     00:01
[…]
real	0m32.898s
user	0m25.121s
sys	0m1.408s

NixOS’s Nix takes a little over 5s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.ack'
unpacking channels...
created 1 symlinks in user environment
installing 'perl5.32.0-ack-3.3.1'
these paths will be fetched (15.55 MiB download, 85.51 MiB unpacked):
  /nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man
  /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31
  /nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18
  /nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10
  /nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53
  /nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0
  /nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31
  /nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0
  /nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48
  /nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1
copying path '/nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man' from 'https://cache.nixos.org'...
copying path '/nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10' from 'https://cache.nixos.org'...
copying path '/nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18' from 'https://cache.nixos.org'...
copying path '/nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0' from 'https://cache.nixos.org'...
copying path '/nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31' from 'https://cache.nixos.org'...
copying path '/nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0' from 'https://cache.nixos.org'...
copying path '/nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1' from 'https://cache.nixos.org'...
building '/nix/store/m0rl62grplq7w7k3zqhlcz2hs99y332l-user-environment.drv'...
created 49 symlinks in user environment
real	0m 5.60s
user	0m 3.21s
sys	0m 1.66s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@1996bb94a2d1:/# time (apt update && apt install -y ack-grep)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (8088 kB/s)
[…]
The following NEW packages will be installed:
  ack libfile-next-perl libgdbm-compat4 libgdbm6 libperl5.30 netbase perl perl-modules-5.30
0 upgraded, 8 newly installed, 0 to remove and 23 not upgraded.
Need to get 7341 kB of archives.
After this operation, 46.7 MB of additional disk space will be used.
[…]
real	0m9.544s
user	0m2.839s
sys	0m0.775s

Arch Linux’s pacman takes a little under 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9f6672688a64 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            130.8 KiB  1090 KiB/s 00:00
 extra          1655.8 KiB  3.48 MiB/s 00:00
 community         5.2 MiB  6.11 MiB/s 00:01
resolving dependencies...
looking for conflicting packages...

Packages (2) perl-file-next-1.18-2  ack-3.4.0-1

Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m2.936s
user	0m0.375s
sys	0m0.160s

Alpine’s apk takes a little over 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/4) Installing libbz2 (1.0.8-r1)
(2/4) Installing perl (5.30.3-r0)
(3/4) Installing perl-file-next (1.18-r0)
(4/4) Installing ack (3.3.1-r0)
Executing busybox-1.31.1-r16.trigger
OK: 43 MiB in 18 packages
real	0m 1.24s
user	0m 0.40s
sys	0m 0.15s

qemu


Fedora’s dnf takes over 4 minutes to fetch and unpack 226 MB.

% docker run -t -i fedora /bin/bash
[root@6a52ecfc3afa /]# time dnf install -y qemu
Fedora 32 openh264 (From Cisco) - x86_64     3.1 kB/s | 2.5 kB     00:00
Fedora Modular 32 - x86_64                   6.3 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         6.0 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 334 kB/s |  23 MB     01:10
Fedora 32 - x86_64                            33 MB/s |  70 MB     00:02
[…]

Total download size: 181 M
Downloading Packages:
[…]

real	4m37.652s
user	0m38.239s
sys	0m6.321s

NixOS’s Nix takes almost 34s to fetch and unpack 180 MB.

% docker run -t -i nixos/nix
83971cf79f7e:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.qemu'
unpacking channels...
created 1 symlinks in user environment
installing 'qemu-5.1.0'
these paths will be fetched (180.70 MiB download, 1146.92 MiB unpacked):
[…]
real	0m 33.64s
user	0m 16.96s
sys	0m 3.05s

Debian’s apt takes over 95 seconds to fetch and unpack 224 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (5998 kB/s)
[…]
Fetched 216 MB in 43s (5006 kB/s)
[…]
real	1m35.375s
user	0m29.163s
sys	0m12.835s

Arch Linux’s pacman takes almost 44s to fetch and unpack 142 MB.

% docker run -t -i archlinux/base
[root@58c78bda08e8 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core          130.8 KiB  1055 KiB/s 00:00
 extra        1655.8 KiB  3.70 MiB/s 00:00
 community       5.2 MiB  7.89 MiB/s 00:01
[…]
Total Download Size:   135.46 MiB
Total Installed Size:  661.05 MiB
[…]
real	0m43.901s
user	0m4.980s
sys	0m2.615s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Appendix B: measurement details (2019)

ack


Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu


Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Russ Allbery: Review: Harrow the Ninth

1 October, 2020 - 10:55

Review: Harrow the Ninth, by Tamsyn Muir

Series: The Locked Tomb #2
Publisher: Tor
Copyright: 2020
ISBN: 1-250-31320-1
Format: Kindle
Pages: 510

Harrow the Ninth is a direct sequel to Gideon the Ninth and under absolutely no circumstances should you start reading here. You would be so lost. If you plan on reading this series, read the books as closely together as you can so that you can remember the details of the previous book. You may still resort to re-reading or searching through parts of the previous book as you go.

Muir is doing some complex structural work with Harrow the Ninth, so it's hard to know how much to say about it without spoiling some aspect of it for someone. I think it's safe to say this much: As advertised by the title, we do get a protagonist switch to Harrowhark. However, unlike Gideon the Ninth, it's not a single linear story. The storyline that picks up after the conclusion of Gideon is interwoven with apparent flashbacks retelling the story of the previous book from Harrowhark's perspective. Or at least it might have been the story of the previous book, except that Ortus is Harrowhark's cavalier, Gideon does not appear, and other divergences from the story we previously read become obvious early on.

(You can see why memory of Gideon the Ninth is important.)

Oh, and one of those storylines is written in the second person. Unlike some books that use this as a gimmick, this is for reasons that are eventually justified and partly explained in the story, but it's another example of the narrative complexity. Harrow the Ninth is dropping a lot of clues (and later revelations) in both story events and story structure, many of which are likely to upend reader expectations from the first book.

I have rarely read a novel that is this good at fulfilling the tricky role of the second book of a trilogy. Gideon the Ninth was, at least on the surface, a highly entertaining, linear, and relatively straightforward escape room mystery, set against a dying-world SF background that was more hinted at than fleshed out. Harrow the Ninth revisits and reinterprets that book in ways that add significant depth without feeling artificial. Bits of scenery in the first book take on new meaning and intention. Characters we saw only in passing get a much larger role (and Abigail is worth the wait). And we get a whole ton of answers: about the God Emperor, about Lyctors, about the world, about Gideon and Harrowhark's own pasts and backgrounds, and about the locked tomb that is at the center of the Ninth House. But there is still more than enough for a third book, including a truly intriguing triple cliffhanger ending. Harrow the Ninth is both satisfying in its own right and raises new questions that I'm desperate to see answered in the third book.

Also, to respond to my earlier self on setting, this world is not a Warhammer 40K universe, no matter how much it may have appeared in the glimpses we got in Gideon. The God Emperor appears directly in this book and was not at all what I was expecting, if perhaps even more disturbing. Muir is intentionally playing against type, drawing a sharp contrast between the God Emperor and the dramatic goth feel of the rest of the universe and many of the characters, and it's creepily effective and goes in a much different ethical direction than I had thought. (That said, I will warn that properly untangling the ethical dilemmas of this universe is clearly left to the third book.)

I mentioned in my review of Gideon the Ninth that I was happy to see more SF pulling unapologetically from fanfic. I'm going to keep beating that drum in this review in part because I think the influence may be less obvious to the uninitiated. Harrow the Ninth is playing with voice, structure, memory, and chronology in ways that I suspect the average reader unfamiliar with fanfic may associate more with literary fiction, but they would be wrongly underestimating fanfic if they did so. If anything, the callouts to fanfic are even clearer. There are three classic fanfic alternate universe premises that appear in passing, the story relies on the reader's ability to hold a canonical narrative and an alternate narrative in mind simultaneously, and the genre inspiration was obvious enough to me that about halfway through the novel I correctly guessed one of the fanfic universes in which Muir has written. (I'm not naming it here since I think it's a bit of a spoiler.)

And of course there's the irreverence. There are some structural reasons why the narrative voice isn't quite as good as Gideon the Ninth at the start, but rest assured that Muir makes up for that by the end of the book. My favorite scenes in the series so far happen at the end of Harrow the Ninth: world-building, revelations, crunchy metaphysics, and irreverent snark all woven beautifully together. Muir has her characters use Internet meme references like teenagers, which is a beautiful bit of characterization because they are teenagers. In a world that's heavy on viscera, skeletons, death, and horrific monsters, it's a much needed contrast and a central part of how the characters show defiance and courage. I don't think this will work for everyone, but it very much works for me. There's a Twitter meme reference late in the book that had me laughing out loud in delight.

Harrow the Ninth is an almost perfect second book, in that if you liked Gideon the Ninth, you will probably love Harrow the Ninth and it will make you like Gideon the Ninth even more. It does have one major flaw, though: pacing.

This was also my major complaint about Gideon, primarily around the ending. I think Harrow the Ninth is a bit better, but the problem has a different shape. The start of the book is a strong "what the hell is going on" experience, which is very effective, and the revelations are worth the build-up once they start happening. In between, though, the story drags on a bit too long. Harrow is sick and nauseated at the start of the book for rather longer than I wanted to read about, there is one more Lyctor banquet than I think was necessary to establish the characters, and I think there's a touch too much wandering the halls.

Muir also interwove two narrative threads and tried to bring them to a conclusion at the same time, but I think she had more material for one than the other. There are moments near the end of the book where one thread is producing all the payoff revelations the reader has been waiting for, and the other thread is following another interminable and rather uninteresting fight scene. You don't want your reader saying "argh, no" each time you cut away to the other scene. It's better than Gideon the Ninth, where the last fifth of the book is mostly a running battle that went on way longer than it needed to, but I still wish Muir had tightened the story throughout and balanced the two threads so that we could stay with the most interesting one when it mattered.

That said, I mostly noticed the pacing issues in retrospect and in talking about them with a friend who was more annoyed than I was. In the moment, there was so much going on here, so many new things to think about, and so much added depth that I devoured Harrow the Ninth over the course of two days and then spent the next day talking to other people who had read it, trading theories about what happened and what will happen in the third book. It was the most enjoyable reading experience I've had so far this year.

Gideon the Ninth was fun; Harrow the Ninth was both fun and on the verge of turning this series into something truly great. I can hardly wait for Alecto the Ninth (which doesn't yet have a release date, argh).

As with Gideon the Ninth, content warning for lots and lots of gore, rather too detailed descriptions of people's skeletons, restructuring bits of the body that shouldn't be restructured, and more about bone than you ever wanted to know.

Rating: 9 out of 10

Norbert Preining: Plasma 5.20 coming to Debian

1 October, 2020 - 09:30

The KDE Plasma desktop is soon getting an update to 5.20, and beta versions are out for testing.

Plasma 5.20 is going to be one absolutely massive release! More features, more fixes for longstanding bugs, more improvements to the user interface!

There are lots of new features mentioned in the release announcement; in particular, I like that settings changed from their defaults can now be highlighted.

I have been providing builds of KDE-related packages for quite some time now; see everything posted under the KDE tag. In the last few days I have prepared Debian packages for Plasma 5.19.90 on OBS, for now only targeting Debian/experimental and the amd64 architecture.

These packages require Qt 5.15, which is only available in the experimental suite, and there is no way to simply update to Qt 5.15 since all Qt-related packages need to be recompiled against it. So as long as Qt 5.15 doesn’t hit unstable, I cannot really run these packages on my main machine. Instead, I tried a clean Debian virtual machine, installing only Plasma 5.19.90 and its dependencies, plus a few more packages for a pleasant desktop experience. This worked out quite well: the VM runs Plasma 5.19.90.

Well, bottom line, as soon as we have Qt 5.15 in Debian/unstable, we are also ready for Plasma 5.20!

Jonathan Carter: Free Software Activities for 2020-09

1 October, 2020 - 07:15

This month I started working on ways to make hosting access easier for Debian Developers. I also did some work and planning for the MiniDebConf Online Gaming Edition that we’ll likely announce within the next 1-2 days. Just a bunch of content needs to be fixed, plus a registration bug, and then I think we’ll be ready to send out the call for proposals.

In the meantime, here’s my package uploads and sponsoring for September:

2020-09-07: Upload package calamares (3.2.30-1) to Debian unstable.

2020-09-07: Upload package gnome-shell-extension-dash-to-panel (39-1) to Debian unstable.

2020-09-08: Upload package gnome-shell-extension-draw-on-your-screen (6.2-1) to Debian unstable.

2020-09-08: Sponsor package sqlobject (3.8.0+dfsg-2) for Debian unstable (Python team request).

2020-09-08: Sponsor package bidict (0.21.0-1) for Debian unstable (Python team request).

2020-09-11: Upload package catimg (2.7.0-1) to Debian unstable.

2020-09-16: Sponsor package gamemode (1.6-1) for Debian unstable (Games team request).

2020-09-21: Sponsor package qosmic (1.6.0-3) for Debian unstable (Debian Mentors / e-mail request).

2020-09-22: Upload package gnome-shell-extension-draw-on-your-screen (6.4-1) to Debian unstable.

2020-09-22: Upload package bundlewrap (4.2.0-1) to Debian unstable.

2020-09-25: Upload package gnome-shell-extension-draw-on-your-screen (7-1) to Debian unstable.

2020-09-27: Sponsor package libapache2-mod-python (3.5.0-1) for Debian unstable (Python team request).

2020-09-27: Sponsor package subliminal (2.1.0-1) for Debian unstable (Python team request).

Paul Wise: FLOSS Activities September 2020

1 October, 2020 - 06:57
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian wiki: unblock IP addresses, approve accounts
Communication

Sponsors

The gensim, cython-blis, python-preshed, pytest-rerunfailures, morfessor, nmslib, visdom and pyemd work was sponsored by my employer. All other work was done on a volunteer basis.

Utkarsh Gupta: FOSS Activities in September 2020

1 October, 2020 - 06:00

Here’s my (twelfth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 21st month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

I’ve been busy with my undergraduate studies, but I still squeezed out some time for the regular Debian work. Here are the things I did in Debian this month:

Uploads and bug fixes:

Other $things:
  • Attended the Debian Ruby team meeting. Logs here.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored trace-cmd for Sudip, ruby-asset-sync for Nilesh, and mariadb-mysql-kbs for William.
RuboCop::Packaging - Helping the Debian Ruby team! \o/

This Google Summer of Code, I worked on writing a linter that could flag offenses for lines of code that are very troublesome for Debian maintainers while trying to package and maintain Ruby libraries and applications!

Whilst the GSoC period is over, I’ve been working on improving that tool and have extended that linter to now “auto-correct” these offenses by itself! \o/
You can now just use the -A flag and you’re done! Boom! The ultimate game-changer!

Here’s a quick demo for this feature:

A few quick updates on RuboCop::Packaging:

I’ve also spent a considerable amount of time raising awareness about this and, in a more general sense, about downstream maintenance.
As a result, I raised a bunch of PRs, which got a really good response. All 20 of them were merged upstream, fixing these issues.

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twelfth month as a Debian LTS and third month as a Debian ELTS paid contributor.
I was assigned 19.75 hours for LTS and 15.00 hours for ELTS and worked on the following things:
(For LTS, I over-worked 11 hours last month on the survey, so I only had 8.75 hours this month!)

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:
  • Issued ELA 274-1, fixing CVE-2020-11984, for uwsgi.
    For Debian 8 Jessie, these problems have been fixed in version 2.0.7-1+deb8u3.
  • Issued ELA 275-1, fixing CVE-2020-14363, for libx11.
    For Debian 8 Jessie, these problems have been fixed in version 2:1.6.2-3+deb8u4.
  • Issued ELA 278-1, fixing CVE-2020-8184, for ruby-rack.
    For Debian 8 Jessie, these problems have been fixed in version 1.5.2-3+deb8u4.
  • Also worked on updating the version of clamAV from v0.101.5 to v0.102.4.
    This was a bit of a tricky package to work on since it involved an ABI/API change and was more or less a transition. Super thanks to Emilio for his invaluable help and for taking over the package, finishing it, and uploading it in the end.
Other (E)LTS Work:
  • Front-desk duty from 31-08 to 06-09 and from 28-09 onward for both LTS and ELTS.
  • Triaged apache2, cryptsetup, nasm, node-bl, plinth, qemu, rsync, ruby-doorkeeper, and uwsgi.
  • Marked CVE-2020-15094/symfony as not-affected for Stretch.
  • Marked CVE-2020-{9490,11993}/apache2 as ignored for Stretch.
  • Marked CVE-2020-8244/node-bl as no-dsa for Stretch.
  • Marked CVE-2020-24978/nasm as no-dsa for Stretch.
  • Marked CVE-2020-25073/plinth as no-dsa for Stretch.
  • Marked CVE-2020-15094/symfony as not-affected for Jessie.
  • Marked CVE-2020-14382/cryptsetup as not-affected for Jessie.
  • Marked CVE-2020-14387/rsync as not-affected for Jessie.
  • Auto EOL’ed ark, collabtive, linux, nasm, node-bl, and thunderbird for Jessie.
  • Use mktemp instead of tempfile in bin/auto-add-end-of-life.sh.
  • Attended the fifth LTS meeting. Logs here.
  • General discussion on LTS private and public mailing list.

Until next time.
:wq for today.

Steinar H. Gunderson: plocate improvements

1 October, 2020 - 05:30

Since announcing plocate, a number of major and minor improvements have happened, and despite its prototype status, I've basically stopped using mlocate entirely now.

First of all, the database building now uses 90% less RAM, so if you had issues with plocate-build OOM-ing before, you're unlikely to see that happening anymore.

Second, while plocate was always lightning-fast on SSDs or with everything in cache, that isn't always the situation for everyone. It's so annoying having a tool usually be instant, and then suddenly have a 300 ms hiccup just because you searched for something rare. To get that case right, real work had to be done; I couldn't just mmap up the index anymore and search randomly around in it.

Long story short, mmap is out, and io_uring is in. (This requires Linux 5.1 or later and liburing; if you don't have either, it will transparently fall back to synchronous I/O. It will still be faster than before, but not nearly as good.) I've been itching to try io_uring for a while now, and this was suddenly the perfect opportunity. Not because I needed more efficient I/O (in fact, I believe I drive it fairly inefficiently, with lots of syscalls), but because it allows running asynchronous I/O without the pain of threads or old-style aio. It's unusual in that I haven't heard of anyone else doing io_uring specifically to gain better performance on non-SSDs; usually, it's about driving NVMe drives or large amounts of sockets more efficiently.

plocate needs a fair amount of gather reads; e.g., if you search for “plocate”, it needs to go to disk and fetch disk offsets for the posting lists “plo”, “loc”, “oca”, “cat” and “ate”; and most likely, it will be needing all five of them. io_uring allows me to blast off a bunch of reads at once, letting the kernel reorder them as it sees fit; with some luck from the elevator algorithm, I'll get all of them in one revolution of the disk, instead of reading the first one and discovering the disk head had already passed the spot where the second one was. (After that, it needs to look at the offsets and actually get the posting lists, which can then be decoded and intersected. This work can be partially overlapped with fetching the positions.) Similar optimizations exist for reading the actual filenames.

All in all, this reduces long-tail latency significantly; it's hard to benchmark cold-cache behavior faithfully (drop_caches doesn't actually always drop all the caches, it seems), but generally, a typical cold-cache query on my machine seems to go from 200–400 to 40–60 ms.
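
For reference, a rough way to reproduce a cold-cache measurement looks like this (as noted above, drop_caches does not evict everything, so treat the numbers as approximate):

# as root: drop the page cache plus dentries and inodes, then time a query
echo 3 > /proc/sys/vm/drop_caches
time plocate some-rare-pattern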

There's one part left that is synchronous; once a file is found, plocate needs to go call access() to check that you're actually allowed to see it. (Exercise: Run a timing attack against mlocate or plocate to figure out which files exist on the system that you are not allowed to see.) io_uring doesn't support access() as a system call yet; I managed to sort-of fudge it by running a statx() asynchronously, which then populates the dentry cache enough that synchronous access() on the same directory is fast, but it didn't seem to help actual query times. I guess that in a typical query (as opposed to “plocate a/b”, which will give random results all over the disk), you hit only a few directories anyway, and then you're just at the mercy of the latency distribution of getting that metadata. And you still get the overlap with the loads of the file name list, so it's not fully synchronous.

plocate also now no longer crashes if you run it without a pattern :-)

Get it at https://git.sesse.net/?p=plocate. There still is no real release. You will need to regenerate your plocate.db, as the file format has changed to allow for fewer seeks.

Antoine Beaupré: Presentation tools

1 October, 2020 - 00:21

I keep forgetting how to make presentations. I had a list of tools in a wiki from a previous job, but that's now private and I don't see why I shouldn't share this (even if for myself!).

So here it is. What's your favorite presentation tool?

Tips
  • if you have some text to present, outline keywords so that you can present your subject without reading every word
  • ideally, don't read from your slides - they are there to help people follow, not for people to read
  • even better: make your slides pretty with only a few words, or don't make slides at all

Further advice:

I'm currently using Pandoc with PDF output (with a trip through LaTeX) for most slides, because PDFs are more reliable and portable than web pages. I've also used Libreoffice, Pinpoint, and S5 (through RST) in the past. I miss Pinpoint; too bad it died.
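
For the curious, a typical Pandoc invocation for this kind of workflow looks something like the following; the exact options are illustrative, not necessarily what I use:

pandoc -t beamer -o slides.pdf slides.md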

Some of my presentations are available in my GitLab.com account:

See also my list of talks and presentations which I can't seem to keep up to date.

Tools

Beamer (LaTeX)
  • LaTeX class
  • Do not use directly unless you are a LaTeX expert or masochist, see Pandoc below
  • see also powerdot
  • Home page
Darkslide
  • HTML, Javascript
  • presenter notes, table of contents, Markdown, RST, Textile, themes, code samples, auto-reload
  • Home page, demo
Impress.js

Libreoffice Impress
  • Powerpoint clone
  • Makes my life miserable
  • PDF export, presenter notes, outline view, etc
  • Home page, screenshots
Magicpoint
  • ancestor of everyone else (1997!)
  • text input format, image support, talk timer, slide guides, HTML/Postscript export, draw on slides, X11 output
  • no release since 2008
  • Home page
mdp

Pandoc
  • Allows converting from basically whatever into slides, including Beamer, DZSlides, reveal.js, slideous, slidy, Powerpoint
  • PDF, HTML, Powerpoint export, presentation notes, full screen background images
  • nice plain text or markdown input format
  • Home page, documentation
PDF Presenter
  • PDF presentation tool, shows presentation notes
  • basically "Keynote for Linux"
  • Home page, pdf-presenter-console in Debian
Pinpoint
  • Native GNOME app
  • Full screen slides, PDF export, live change, presenter notes, pango markup, video, image backgrounds
  • Home page
  • Abandoned since at least 2019
Reveal.js
  • HTML, Javascript
  • PDF export, Markdown, LaTeX support, syntax-highlighting, nested slides, speaker notes
  • Source code, demo
S5
  • HTML, CSS
  • incremental, bookmarks, keyboard controls
  • can be transformed from ReStructuredText (RST) with rst2s5 with python-docutils
  • Home page, demo
sent
  • X11 only
  • plain text, black on white, image support, and that's it
  • from the suckless.org elitists
  • Home page
Sozi
  • Entire presentation is one poster, zooming and jumping around
  • SVG + Javascript
  • Home page, demo
Other options

Another option I have seriously considered is to just generate a series of images with good resolution, hopefully matching the resolution (or at least the aspect ratio) of the output device. Then you flip through the images one by one. In that case, any of these image viewers (not an exhaustive list) would work:

Update: it turns out I already wrote a somewhat similar thing when I did a recent presentation. If you're into rants, you might enjoy the README file accompanying the Kubecon rant presentation. TL;DR: "makes me want to scream" and "yet another unsolved problem space, sigh" (referring to "display images full-screen" specifically).

Bastian Blank: Booting Debian on ThinkPad X13 AMD

1 October, 2020 - 00:00

Running new hardware is always fun. The problems are endless. The solutions not so much.

So I've got a brand new ThinkPad X13 AMD. It features an AMD Ryzen 5 PRO 4650U, 16GB of RAM and a 256GB NVME SSD. The internal type identifier is 20UF. It runs the latest firmware as of today with version 1.09.

So far I found two problems with it:

  • It refuses to boot my Debian image with Secure Boot enabled.
  • It produces ACPI errors on every key press on the internal keyboard.
Disable Secure Boot

The system silently fails to boot a signed shim and grub from a USB thumb drive. I used one of the Debian Cloud images, which should work properly in this setup and does on my other systems.

The only fix I found was to disable Secure Boot altogether.

Select Linux in firmware

Running Linux 5.8 with the default firmware settings produces ACPI errors on each key press:

ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.GPP3], AE_NOT_FOUND (20200528/psargs-330)
ACPI Error: Aborting method \_SB.GPIO._EVT due to previous error (AE_NOT_FOUND) (20200528/psparse-529)

This can be "fixed" by setting a strategic setting inside the firmware:
Config > Power > Sleep State to Linux

Chris Lamb: Free software activities in September 2020

30 September, 2020 - 23:49

Here is my monthly update covering what I have been doing in the free software world during September 2020 (previous month):


SPI is a non-profit corporation that acts as a fiscal sponsor for organisations that develop open source software and hardware.
  • As part of my role of being the assistant Secretary of the Open Source Initiative and a board director of Software in the Public Interest, I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet as well as the usual internal discussions, etc. I participated in the OSI's inaugural State of the Source conference and began the 'onboarding' of a new project to SPI.

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.


Conservancy is a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:



§


diffoscope
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

I made the following changes to diffoscope, including preparing and uploading versions 159 and 160 to Debian:

  • New features:

    • Show "ordering differences" only in strings(1) output by applying the ordering check to all differences across the codebase. [...]
  • Bug fixes:

    • Mark some PGP tests as requiring pgpdump, and check that the associated binary is actually installed before attempting to run it. (#969753)
    • Don't raise exceptions when cleaning up after guestfs cleanup failure. [...]
    • Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump against all files that are recognised by file(1) as data. [...]
  • Codebase improvements:

    • Add some documentation for the EXTERNAL_TOOLS dictionary. [...]
    • Abstract out a variable we use a couple of times. [...]
  • diffoscope.org website improvements:

    • Make the (long) demonstration GIF less prominent. [...]

§


Debian Lintian
Lintian dissects Debian packages and reports bugs and policy violations. It contains automated checks for many aspects of Debian policy as well as some checks for common errors.

For Lintian, the static analysis tool for Debian packages, I uploaded versions 2.93.0, 2.94.0, 2.95.0 & 2.96.0 (not counting uploads to the backports repositories), as well as:

  • Bug fixes:

    • Don't emit odd-mark-in-description for large numbers such as 300,000. (#969528)
    • Update the expected Vcs-{Browser,Git} location of modules and applications maintained by recently-merged Python module/app teams. (#970743)
    • Relax checks around looking for the dh(1) sequencer by not looking for the preceding target:. (#970920)
    • Don't try and open debian/patches/series if it does not exist. [...]
    • Update all $LINTIAN_VERSION assignments in scripts and not just the ones we specify; we had added and removed some during development. [...]
  • Tag updates:

  • Developer documentation updates:

    • Add prominent and up-to-date information on how to run the testsuite. (#923696)
    • Drop recommendation to update debian/changelog manually. [...]
    • Apply wrap-and-sort -sa to the debian subdirectory. [...]
    • Merge data/README into CONTRIBUTING.md for greater visibility [...] and move CONTRIBUTING.md to use #-style Markdown headers [...].
Debian LTS

This month I've worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

You can find out more about the project via the following video:

Uploads
Bugs filed
  • bluez-source: Contains bluez-source-tmp directory. (#970130)

  • bookworm: Manual page contains debugging/warning/error information from running binary. (#970277)

  • jhbuild: Missing runtime dependency on python3-distutils. (#971418)

  • wxwidgets3.0: Links in documentation point to within the original build path, not the installed path. (#970431)

Mike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 03)

30 September, 2020 - 03:50

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has been known as Lomiri.

Recent Uploads to Debian related to Lomiri

Over the past 4 months I worked on the following bits and pieces regarding Lomiri in Debian:

  • Work on lomiri-app-launch (Debian packaging, upstream work, upload to Debian)
  • Fork lomiri-url-dispatcher from url-dispatcher (upstream work)
  • Upload lomiri-url-dispatcher to Debian
  • Fork out suru-icon-theme and make it its own upstream project
  • Package and upload suru-icon-theme to Debian
  • First glance at lomiri-ui-toolkit (currently FTBFS, needs to be revisited)
  • Update of Mir (1.7.0 -> 1.8.0) in Debian
  • Fix net-cpp FTBFS in Debian
  • Fix FTBFS in gsettings-qt.
  • Fix FTBFS in mir (support of binary-only and arch-indep-only builds)
  • Coordinate with Marius Gripsgard and Robert Tari on shift over from Ubuntu Indicator to Ayatana Indicators
  • Upload ayatana-indicator-* (and libraries) to Debian (new upstream releases)
  • Package and upload to Debian: qmenumodel (still in Debian's NEW queue)
  • Package and upload to Debian: ayatana-indicator-sound
  • Symbol-Updates (various packages) for non-standard architectures
  • Fix FTBFS of qtpim-opensource-src in Debian since Qt5.14 had landed in unstable
  • Fix FTBFS on non-standard architectures of qtsystems, qtpim and qtfeedback
  • Fix wlcs in Debian (for non-standard architectures), more Symbol-Updates (esp. for the mir DEB package)
  • Symbol-Updates (mir, fix upstream tinkering with debian/libmiral3.symbols)
  • Fix FTBFS in lomiri-url-dispatcher against Debian unstable, file merge request upstream
  • Upstream release of qtmir 0.6.1 (via merge request)
  • Improve check_whitespace.py script as used in lomiri-api to ignore debian/ subfolder
  • Upstream release of lomiri-api 0.1.1 and upload to Debian unstable.

The next two big projects / packages ahead are lomiri-ui-toolkit and qtmir.

Credits

Many big thanks go to Marius and Dalton for their work on the UBports project and being always available for questions, feedback, etc.

Thanks to Ratchanan Srirattanamet for providing some of his time for debugging some non-thread-safe unit tests (currently unsure what package we actually looked at...).

Thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Previous Posts about my Debian UBports Team Efforts

Vincent Bernat: Speeding up bgpq4 with IRRd in a container

29 September, 2020 - 15:32

When building route filters with bgpq4 or bgpq3, the speed of rr.ntt.net or whois.radb.net can be a bottleneck. Updating many filters may take several tens of minutes, depending on the load:

$ time bgpq4 -h whois.radb.net AS-HURRICANE | wc -l
909869
1.96s user 0.15s system 2% cpu 1:17.64 total
$ time bgpq4 -h rr.ntt.net AS-HURRICANE | wc -l
927865
1.86s user 0.08s system 12% cpu 14.098 total

A possible solution is to have your own IRRd instance in your network, mirroring the main routing registries. A close alternative is to bundle IRRd with all the data in a ready-to-use Docker image. This also has the advantage of easy integration into a Docker-based CI/CD pipeline.

$ git clone https://github.com/vincentbernat/irrd-legacy.git -b blade/master
$ cd irrd-legacy
$ docker build . -t irrd-snapshot:latest
[…]
Successfully built 58c3e83a1d18
Successfully tagged irrd-snapshot:latest
$ docker container run --rm --detach --publish=43:43 irrd-snapshot
4879cfe7413075a0c217089dcac91ed356424c6b88808d8fcb01dc00eafcc8c7
$ time bgpq4 -h localhost AS-HURRICANE | wc -l
904137
1.72s user 0.11s system 96% cpu 1.881 total

The Dockerfile contains three stages:

  1. building IRRd,1
  2. retrieving various IRR databases, and
  3. assembling the final container with the result of the two previous stages.

The second stage fetches the databases used by rr.ntt.net: NTTCOM, RADB, RIPE, ALTDB, BELL, LEVEL3, RGNET, APNIC, JPIRR, ARIN, BBOI, TC, and AFRINIC. However, it misses some of the databases as I was unable to locate them: ARIN-WHOIS, RPKI,2 and REGISTROBR. Feel free to adapt!

The image can be scheduled to be rebuilt daily or weekly, depending on your needs. The repository includes a .gitlab-ci.yaml file automating the build and triggering the compilation of all filters by your CI/CD upon success.
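
For example, a daily rebuild could be a simple cron entry like this one (schedule and paths are arbitrary):

# rebuild the image every night at 04:00, pulling fresh base layers
0 4 * * * cd ~/irrd-legacy && docker build --pull -t irrd-snapshot:latest .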

  1. Instead of using the latest version of IRRd, the image relies on an older version that does not require a PostgreSQL instance and uses flat files instead. ↩︎

  2. Unlike the others, the RPKI database is built from the published RPKI ROAs. ↩︎

Norbert Preining: Performance with Intel i218/i219 NIC

29 September, 2020 - 12:19

I always had the feeling that my server, hosted by Hetzner, somehow had a slow Internet connection. I put it down to the distance between Finland and Japan and didn’t care too much. Then, yesterday, my server stopped reacting to pings/ssh and needed a hard reset. It turned out that the server was running fine; only the Ethernet card had hung. Hetzner support answered promptly and directed me to this web page, which describes a change in the kernel concerning fragmentation offloading and suggests the following configuration to regain connection speed:

ethtool -K <interface> tso off gso off

And to my surprise, this simple thing worked wonders: the connection speed improved dramatically, even from Japan (something like a factor of 10 in large rsync transfers). I have added this incantation to the system crontab and run it every hour, just to be sure it is reapplied even after a reboot.
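
The cron entry could look like this (the interface name is an example; adjust it to your system):

# /etc/cron.d/ethtool-offload: reapply the setting every hour
0 * * * * root /sbin/ethtool -K eth0 tso off gso off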

If you have bad connection speed with this kind of ethernet card, give it a try.

Kentaro Hayashi: dnsZoneEntry: field should be removed when DD is retired

28 September, 2020 - 16:03

It is known that Debian Developers can set up *.debian.net subdomains.

wiki.debian.org

When a Debian Developer retires, the actual DNS entry is removed, but the dnsZoneEntry: field is kept in LDAP (db.debian.org).

So you cannot reuse a *.debian.net name if a retired Debian Developer already owns your preferred subdomain.

I've posted a question about this currently undocumented behaviour.

lists.debian.org

Vincent Bernat: Syncing RIPE, ARIN and APNIC objects with a custom Ansible module

28 September, 2020 - 15:33

The Internet is split into five regional Internet registries: AFRINIC, ARIN, APNIC, LACNIC and RIPE. Each RIR maintains an Internet Routing Registry. An IRR allows one to publish information about the routing of Internet number resources.1 Operators use this to determine the owner of an IP address and to construct and maintain routing filters. To ensure your routes are widely accepted, it is important to keep the prefixes you announce up-to-date in an IRR.

There are two common tools to query this database: whois and bgpq4. The first one allows you to do a query with the WHOIS protocol:

$ whois -BrG 2a0a:e805:400::/40
[…]
inet6num:       2a0a:e805:400::/40
netname:        FR-BLADE-CUSTOMERS-DE
country:        DE
geoloc:         50.1109 8.6821
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:58Z
last-modified:  2020-05-19T08:04:58Z
source:         RIPE

route6:         2a0a:e805:400::/40
descr:          Blade IPv6 - AMS1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

The second one allows you to build route filters using the information contained in the IRR database:

$ bgpq4 -6 -S RIPE -b AS64476
NN = [
    2a0a:e805::/40,
    2a0a:e805:100::/40,
    2a0a:e805:300::/40,
    2a0a:e805:400::/40,
    2a0a:e805:500::/40
];

There is no module available on Ansible Galaxy to manage these objects. Each IRR has a different way of being updated. Some RIRs offer an API, but some don’t. If we restrict ourselves to RIPE, ARIN and APNIC, the only common method to update objects is email updates, authenticated with a password or a GPG signature.2 Let’s write a custom Ansible module for this purpose!

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a more instructive example.

Code

The module takes a list of RPSL objects to synchronize and returns the body of an email update if a change is needed:

- name: prepare RIPE objects
  irr_sync:
    irr: RIPE
    mntner: fr-blade-1-mnt
    source: whois-ripe.txt
  register: irr
Prerequisites

The source file should be a set of objects to sync using the RPSL language. This would be the same content you would send manually by email. All objects should be managed by the same maintainer, which is also provided as a parameter.
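
For instance, whois-ripe.txt could contain an object like the route6 shown in the whois output earlier:

route6:         2a0a:e805:400::/40
descr:          Blade IPv6 - AMS1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
source:         RIPE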

Signing and sending the result is not the responsibility of this module. You need two additional tasks for this purpose:

- name: sign RIPE objects
  shell:
    cmd: gpg --batch --user noc@example.com --clearsign
    stdin: "{{ irr.objects }}"
  register: signed
  check_mode: false
  changed_when: false

- name: update RIPE objects by email
  mail:
    subject: "NEW: update for RIPE"
    from: noc@example.com
    to: "auto-dbm@ripe.net"
    cc: noc@example.com
    host: smtp.example.com
    port: 25
    charset: us-ascii
    body: "{{ signed.stdout }}"

You also need to authorize the PGP keys used to sign the updates by creating a key-cert object and adding it as a valid authentication method for the corresponding mntner object:

key-cert:  PGPKEY-A791AAAB
certif:    -----BEGIN PGP PUBLIC KEY BLOCK-----
certif:    
certif:    mQGNBF8TLY8BDADEwP3a6/vRhEERBIaPUAFnr23zKCNt5YhWRZyt50mKq1RmQBBY
[…]
certif:    -----END PGP PUBLIC KEY BLOCK-----
mnt-by:    fr-blade-1-mnt
source:    RIPE

mntner:    fr-blade-1-mnt
[…]
auth:      PGPKEY-A791AAAB
mnt-by:    fr-blade-1-mnt
source:    RIPE

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    irr=dict(type='str', required=True),
    mntner=dict(type='str', required=True),
    source=dict(type='path', required=True),
)

result = dict(
    changed=False,
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

Getting existing objects

To grab existing objects, we use the whois command to retrieve all the objects from the provided maintainer.

# Per-IRR variations:
# - whois server
whois = {
    'ARIN': 'rr.arin.net',
    'RIPE': 'whois.ripe.net',
    'APNIC': 'whois.apnic.net'
}
# - whois options
options = {
    'ARIN': ['-r'],
    'RIPE': ['-BrG'],
    'APNIC': ['-BrG']
}
# - objects excluded from synchronization
excluded = ["domain"]
if irr == "ARIN":
    # ARIN does not return these objects
    excluded.extend([
        "key-cert",
        "mntner",
    ])

# Grab existing objects
args = ["-h", whois[irr],
        "-s", irr,
        *options[irr],
        "-i", "mnt-by",
        module.params['mntner']]
proc = subprocess.run(["whois", *args], capture_output=True)
if proc.returncode != 0:
    raise AnsibleError(
        f"unable to query whois: {args}")
output = proc.stdout.decode('ascii')
got = extract(output, excluded)

The first part of the code sets up some IRR-specific constants: the server to query, the options to provide to the whois command, and the objects to exclude from synchronization. The second part invokes the whois command, requesting all objects whose mnt-by field is the provided maintainer. Here is an example of output:

$ whois -h whois.ripe.net -s RIPE -BrG -i mnt-by fr-blade-1-mnt
[…]

inet6num:       2a0a:e805:300::/40
netname:        FR-BLADE-CUSTOMERS-FR
country:        FR
geoloc:         48.8566 2.3522
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:59Z
last-modified:  2020-05-19T08:04:59Z
source:         RIPE

[…]

route6:         2a0a:e805:300::/40
descr:          Blade IPv6 - PA1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

[…]

The result is passed to the extract() function. It parses and normalizes the results into a dictionary mapping object names to objects. We store the result in the got variable.

def extract(raw, excluded):
    """Extract objects."""
    # First step, remove comments and unwanted lines
    objects = "\n".join([obj
                         for obj in raw.split("\n")
                         if not obj.startswith((
                                 "#",
                                 "%",
                         ))])
    # Second step, split objects
    objects = [RPSLObject(obj.strip())
               for obj in re.split(r"\n\n+", objects)
               if obj.strip()
               and not obj.startswith(
                   tuple(f"{x}:" for x in excluded))]
    # Last step, put objects in a dict
    objects = {repr(obj): obj
               for obj in objects}
    return objects

RPSLObject() is a class enabling normalization and comparison of objects. Look at the module code for more details.
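
As a rough idea, here is a minimal sketch of what such a class could look like (an assumed implementation, not the module's actual code): it drops volatile attributes such as created: and last-modified: before comparison, and derives its repr() from the first attribute line:

class RPSLObject:
    """Sketch of an RPSL object wrapper (assumed implementation)."""

    # Attributes generated by the registry; ignore them when comparing
    IGNORED = ("created:", "last-modified:")

    def __init__(self, raw):
        self.raw = raw
        lines = [line for line in raw.split("\n")
                 if not line.startswith(self.IGNORED)]
        self.normalized = "\n".join(lines)
        # The first attribute line gives the type and the primary key
        self.type, self.key = (x.strip()
                               for x in lines[0].split(":", 1))

    def __repr__(self):
        return f"<Object:{self.type}:{self.key}>"

    def __str__(self):
        return self.normalized

    def __eq__(self, other):
        return self.normalized == other.normalized

With that in place, extract() turns the earlier whois output into a dictionary like this: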

>>> output="""
... inet6num:       2a0a:e805:300::/40
... […]
... """
>>> pprint({k: str(v) for k, v in extract(output, excluded=[]).items()})
{'<Object:inet6num:2a0a:e805:300::/40>':
   'inet6num:       2a0a:e805:300::/40\n'
   'netname:        FR-BLADE-CUSTOMERS-FR\n'
   'country:        FR\n'
   'geoloc:         48.8566 2.3522\n'
   'admin-c:        BN2763-RIPE\n'
   'tech-c:         BN2763-RIPE\n'
   'status:         ASSIGNED\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE',
 '<Object:route6:2a0a:e805:300::/40>':
   'route6:         2a0a:e805:300::/40\n'
   'descr:          Blade IPv6 - PA1\n'
   'origin:         AS64476\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE'}

Comparing with wanted objects

Let’s build the wanted dictionary using the same structure, thanks to the extract() function we can use verbatim:

with open(module.params['source']) as f:
    source = f.read()
wanted = extract(source, excluded)

The next step is to compare got and wanted to build the diff object:

if got != wanted:
    result['changed'] = True
    if module._diff:
        result['diff'] = [
            dict(before_header=k,
                 after_header=k,
                 before=str(got.get(k, "")),
                 after=str(wanted.get(k, "")))
            for k in set((*wanted.keys(), *got.keys()))
            if k not in wanted or k not in got or wanted[k] != got[k]]

Returning updates

The module does not have a side effect. If there is a difference, we return the updates to send by email. We choose to include all wanted objects in the updates (contained in the source variable) and let the IRR ignore unmodified objects. We also append the objects to be deleted by adding a delete: attribute to each of them.

# We send all source objects and deleted objects.
deleted_mark = f"{'delete:':16}deleted by CMDB"
deleted = "\n\n".join([f"{got[k].raw}\n{deleted_mark}"
                       for k in got
                       if k not in wanted])
result['objects'] = f"{source}\n\n{deleted}"

module.exit_json(**result)

The complete code is available on GitHub. The module supports both --diff and --check flags. It does not return anything if no change is detected. It can work with APNIC, RIPE and ARIN. It is not perfect: it may not detect some changes,3 it is not able to modify objects not owned by the provided maintainer,4 and some attributes cannot be modified, requiring you to manually delete and recreate the updated object.5 However, this module should automate 95% of your needs.
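
For example, a dry run against your playbook shows the pending updates as a diff (the playbook name here is just a placeholder):

$ ansible-playbook --check --diff irr.yaml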

  1. Other IRRs exist without being attached to a RIR. The most notable one is RADb. ↩︎

  2. ARIN is phasing out this method in favor of IRR-online. RIPE has an API available, but email updates are still supported and not planned to be deprecated. APNIC plans to expose an API. ↩︎

  3. For ARIN, we cannot query key-cert and mntner objects and therefore we cannot detect changes in them. It is also not possible to detect changes to the auth mechanisms of a mntner object. ↩︎

  4. APNIC does not assign top-level objects to the maintainer associated with the owner. ↩︎

  5. Changing the status of an inetnum object requires deleting and recreating the object. ↩︎

Norbert Preining: Cinnamon for Debian – imminent removal from testing

28 September, 2020 - 06:23

I have been more or less maintaining Cinnamon for quite some time now, but using it only sporadically due to my switch to KDE/Plasma. Currently, Cinnamon's cjs package depends on mozjs52, which is also probably going to be orphaned soon. This will precipitate a lot of changes, not least Cinnamon's removal from Debian/testing.

I have pinged upstream several times, without much success. So for now the future looks bleak for Cinnamon in Debian. If there are interested developers (Debian or not), please get in touch with me, or directly try to update cjs to mozjs78.

Steinar H. Gunderson: Introducing plocate

28 September, 2020 - 05:45

In continued annoyance over locate's slowness, I made my own locate using posting lists (thus the name plocate) and compression, and it turns out that you hardly need any tuning at all to make it fast. Example search on a system with 26M files:

cassarossa:~/nmu/plocate> ls -lh /var/lib/mlocate  
total 1,5G                
-rw-r----- 1 root mlocate 1,1G Sep 27 06:33 mlocate.db
-rw-r----- 1 root mlocate 470M Sep 28 00:34 plocate.db

cassarossa:~/nmu/plocate> time mlocate info/mlocate
/var/lib/dpkg/info/mlocate.conffiles
/var/lib/dpkg/info/mlocate.list
/var/lib/dpkg/info/mlocate.md5sums
/var/lib/dpkg/info/mlocate.postinst
/var/lib/dpkg/info/mlocate.postrm
/var/lib/dpkg/info/mlocate.prerm
mlocate info/mlocate  20.75s user 0.14s system 99% cpu 20.915 total

cassarossa:~/nmu/plocate> time plocate info/mlocate
/var/lib/dpkg/info/mlocate.conffiles
/var/lib/dpkg/info/mlocate.list
/var/lib/dpkg/info/mlocate.md5sums
/var/lib/dpkg/info/mlocate.postinst
/var/lib/dpkg/info/mlocate.postrm
/var/lib/dpkg/info/mlocate.prerm
plocate info/mlocate  0.01s user 0.00s system 83% cpu 0.008 total

It will be slower if files are on rotating rust and not cached, but still much faster than mlocate.

It's a prototype, and it free-rides on updatedb from mlocate (mlocate.db is converted to plocate.db). Case-sensitive matches only, no regexes or other funny business. Get it from https://git.sesse.net/?p=plocate (clone with --recursive so that you get the TurboPFOR submodule). GPLv2+.
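
To give a rough idea of the posting-list approach, here is a toy illustration in Python (not plocate's actual on-disk format; one common choice of posting key is trigrams, i.e. 3-character substrings):

from collections import defaultdict

def trigrams(s):
    # Every 3-character window of the string
    return {s[i:i+3] for i in range(len(s) - 2)}

def build_index(paths):
    # Posting lists: trigram -> set of file IDs whose path contains it
    index = defaultdict(set)
    for i, path in enumerate(paths):
        for t in trigrams(path):
            index[t].add(i)
    return index

def search(index, paths, needle):
    # Intersect the posting lists of the needle's trigrams (needle >= 3 chars)...
    candidates = None
    for t in trigrams(needle):
        posting = index.get(t, set())
        candidates = posting if candidates is None else candidates & posting
    # ...then verify each candidate: trigram hits are necessary, not sufficient
    return [paths[i] for i in sorted(candidates or [])
            if needle in paths[i]]

Only the (usually tiny) intersection of candidates is ever checked against the query, which is why a lookup touches a small fraction of the database.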

Enrico Zini: Coup d'état in recent Italian history

28 September, 2020 - 05:00

During the Cold War, Italy was always in too strategic a position, and had too strong a left-wing movement, for the CIA not to get involved.

Here are a few stories of coup d'état and other kinds of efforts to manipulate Italian politics:

Iain R. Learmonth: Multicast IPTV

28 September, 2020 - 04:35

For almost a decade, I've been very slowly making progress on a multicast IPTV system. Recently I've made a significant leap forward in this project, and I wanted to write a little on the topic so I'll have something to look at when I pick it up next. I was aspiring to have a usable system by the end of today, but for a couple of reasons, it wasn't possible.

When I started thinking about this project, it was still common to watch broadcast television. Over time the design of this system has been changing as new technologies have become available. Multicast IP is probably the only constant, although I’m now looking at IPv6 rather than IPv4.

Initially, I’d been looking at DVB-T PCI cards. USB devices have become common and are available cheaply. There are also DVB-T hats available for the Raspberry Pi. I’m now looking at a combination of Raspberry Pi hats and USB devices with one of each on a couple of Pis.

[Photo: two Raspberry Pis with DVB hats installed, TV antenna sockets showing]

The Raspberry Pi devices will run DVBlast, an open-source DVB demultiplexer and streaming server. Each of the tuners will be tuned to a different transponder giving me the ability to stream any combination of available channels simultaneously. This is everything that would be needed to watch TV on PCs on the home network with VLC.
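
As a sketch of what this could look like (the option names and config format are recalled from the DVBlast documentation, and the multicast groups and service IDs are made up, so treat this as illustrative only), each tuner gets its own config file and frequency, and clients subscribe to the multicast group of the channel they want:

# dvblast.conf: <multicast destination>  <always-on flag>  <service ID>
239.255.0.1:5004  1  4164
239.255.0.2:5004  1  4287

$ dvblast -a 0 -f 490000000 -c dvblast.conf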

I’ve not yet worked out if Kodi will accept multicast streams as a TV source, but I do know that Tvheadend will. Tvheadend can also act as a PVR to record programmes for later playback so is useful even if the multicast streams can be viewed directly.

So how far did I get? I have built two Raspberry Pis in cases with the DVB-T hats on. They need to sit in the lounge as that’s where the antenna comes down from the roof. There’s no wired network connection in the lounge. I planned to use an OpenBSD box as a gateway, bridging the wireless network to a wired network.

Two problems quickly emerged. The first was that the wireless card I had purchased only supported 2.4GHz, not 5GHz, and I have enough noise from neighbours that the throughput rate and packet loss are unacceptable.

The second problem is that I had forgotten the problems with bridging wireless networks. To create a bridge, you need to be able to spoof the MAC addresses of wired devices on the wireless interface, but this can only be done when the wireless interface is in access point mode.

So when I come back to this, I will have to look at routing rather than bridging to work around the MAC address issue, and I’ll also be on the lookout for a cheap OpenBSD supported mini-PCIe wireless card that can do 5GHz.

Joachim Breitner: Learn Haskell on CodeWorld writing Sokoban

28 September, 2020 - 02:20

Two years ago, I held the CIS194 minicourse on Haskell at the University of Pennsylvania. In that installment of the course, I changed the first four weeks to teach the basics of Haskell using the online Haskell environment CodeWorld, and led the students towards implementing the game Sokoban.

As is customary for CIS194, I put my lecture notes and exercises online, and they have been used as a learning resource by people from all over the world. But since I left the University of Pennsylvania, I have lost the ability to update the text, and as the CodeWorld API has evolved, some of the examples and exercises no longer work.

Some recent complaints about that, in bug reports against CodeWorld and in unrealistically flattering tweets (“Shame, this was the best Haskell course ever!!!”), motivated me to extract that material and turn it into an updated stand-alone tutorial that I can host myself.

So if you feel like learning Haskell without worrying about local installation, and while creating a reasonably fun game, head over to https://haskell-via-sokoban.nomeata.de/ and get started! Improvements can now also be contributed at https://github.com/nomeata/haskell-via-sokoban.

Credits go to Brent Yorgey, Richard Eisenberg and Noam Zilberstein, who held the previous installments of the course, and Chris Smith for creating the CodeWorld environment.
