Planet Debian

Planet Debian - https://planet.debian.org/

Thomas Koch: Know your tools - simple backup with rsync

10 June, 2022 - 18:40
Posted on June 9, 2022 Tags: debian, free software

I’ve been using rsync for years and still didn’t know its full powers. I just wanted a quick and dirty backup, but realised that rsnapshot is not in Debian anymore.

However, nowadays you can do much of what rsnapshot does with rsync alone.

The --link-dest option (manpage) takes care of creating hardlinks to the previous backup for unchanged files (found here). So my backup program becomes this shell script in ~/backups/backup.sh:

#!/bin/sh

# Usage: backup.sh <server>
SERVER="${1}"
BACKUP="${HOME}/backups/${SERVER}"
SNAPSHOTS="${BACKUP}/snapshots"

# every run gets its own timestamped snapshot directory
FOLDER=$(date --utc +%F_%H-%M-%S)
DEST="${SNAPSHOTS}/${FOLDER}"

# the most recent previous snapshot, used as the hardlink source
LAST=$(ls -d1 ${SNAPSHOTS}/????-??-??_??-??-??|tail -n 1)

rsync \
  --rsh="ssh -i ${BACKUP}/sshkey -o ControlPath=none -o ForwardAgent=no" \
  -rlpt \
  --delete --link-dest="${LAST}" \
  ${SERVER}::backup "${DEST}"
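
Invoked with the server name as its only argument, a run looks like this (the hostname is just an example) and leaves a new timestamped snapshot directory under ~/backups/<server>/snapshots/:

~/backups/backup.sh server.example.org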

The script connects to rsync in daemon mode as outlined in the section “USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION” of the rsync manpage. This allows referencing a “module” as the source, which is defined on the server side as follows:

[backup]
path = /
read only = true
exclude from = /srv/rsyncbackup/excludelist
uid = root
gid = root

The important bit is the read only setting, which prevents somebody with access to the ssh key from overwriting files on the server via rsync and thereby gaining full root access.
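
The exclude from file keeps pseudo-filesystems and other noise out of the backup. Its contents are not shown in the post; a plausible /srv/rsyncbackup/excludelist could look like this:

/proc/*
/sys/*
/dev/*
/run/*
/tmp/*
/var/cache/apt/archives/*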

Finally, the command prefix in the server’s ~/.ssh/authorized_keys runs rsync as a daemon with sudo and the specified config file:

command="sudo rsync --config=/srv/rsyncbackup/config --server --daemon ."

The sudo setup is left as an exercise for the reader as mine is rather opinionated.
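
Still, as a minimal sketch (the user name and rsync path are assumptions, and a real setup would be stricter), a sudoers rule permitting exactly that command could be:

# /etc/sudoers.d/rsyncbackup -- "backupuser" is a hypothetical login used only for backups
backupuser ALL=(root) NOPASSWD: /usr/bin/rsync --config=/srv/rsyncbackup/config --server --daemon .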

Unfortunately I have not managed to configure systemd timers in the way I wanted and therefore opened an issue: “Allow retry of timer triggered oneshot services with failed conditions or asserts”. Any help there is welcome!
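
For context, a plain user timer/service pair driving the script would look roughly like this; the unit names and schedule are assumptions, and this simple form is precisely what does not cover the retry-on-failed-condition case described in the issue:

# ~/.config/systemd/user/backup@.service
[Unit]
Description=rsync backup of %i

[Service]
Type=oneshot
ExecStart=%h/backups/backup.sh %i

# ~/.config/systemd/user/backup@.timer
[Unit]
Description=Daily rsync backup of %i

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

It would be enabled with systemctl --user enable --now backup@server.example.org.timer.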

Sam Hartman: Flailing to Replace Jack with Pipewire for DJ Audio

10 June, 2022 - 07:14

I could definitely use some suggestions here, both in terms of things to try and effective places to ask questions about Pipewire audio. The docs are improving, but are still in early stages. Pipewire promises to combine the functionality of PulseAudio and Jack. That would be great for me. I use Jack for my DJ work, and it’s somewhat complicated and fragile. However, so far my attempts to replace Jack have been unsuccessful, and I might even need to use PulseAudio instead of Pipewire to get the DJ stuff working correctly.

The Setup

In the simplest setup I have a DJ controller. It’s both a MIDI device and a sound card. It has 4-channel audio, but it’s not typical surround sound. Two channels are the main speakers, and two channels are the headphones. Conceptually it might be better to model the controller as two sinks: one for the speakers and one for the headphones. At a hardware level they need to be one device for several reasons, especially including using a common clock. It’s really important that only the main mix goes out channels 1-2 (the speakers). Random beeps or sound from other applications going out the main speakers is disruptive and unprofessional.

However, because I’m blind, I need that sound. I especially need the output of Orca (my screen reader) and Emacspeak (another screen reader). So I need that output to go to the headphones.

Under Pulse/Jack

The DJ card is the Jack primary sound device (system:playback_1 through system:playback_4). I then use the module-jack-sink Pulse module to connect Pulse to Jack. That becomes the default sink for Pulse, and I link front-left from that sink to system:playback_3. So I get the system sounds and screen reader mixed into the left channel of my headphones and nowhere else.
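
For reference, a minimal sketch of that wiring from the command line, assuming the default client and port names that module-jack-sink uses (the real names may differ):

# create a stereo Pulse sink that appears as a Jack client, without auto-connecting it
pactl load-module module-jack-sink channels=2 connect=no
# route only its left channel to the left headphone channel of the DJ card
jack_connect "PulseAudio JACK Sink:front-left" system:playback_3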

Enter Pipewire

Initially Pipewire sees the DJ card as a 4-channel sound card and assumes it’s surround4.0 (so front and rear left and right). It “helpfully” expands my stereo signal so that everything goes to the front and rear. So, exactly what I don’t want to have happen happens: all my system sounds go out the main speakers (channel 1-2).

It was easy to override Wireplumber’s ALSA configuration and assign different channel positions. I tried assigning something like a1,a2,fl,fr hoping that Pipewire wouldn’t mix things into aux channels that weren’t part of the typical surround set. No luck. It did correctly reflect the channels in things like pacmd list sinks so my Pipewire config was being applied. But the sound was still wrong.

  • I tried turning off channelmix.upmix. That didn’t help; that appears to be more about mixing stereo into center, rear and LFE. The basic approach of getting a stream to conform to the output node’s channels appears to be hurting me here.

  • Turning off stream.dont-remix actually got stereo sound to do exactly what I wanted. If I use sox to play a stereo MP3 for example, it comes out the headphones and not my speakers. Unfortunately, that didn’t help with the accessibility sounds at all. Those are mono in pulse land, and apparently mono is always expanded to all channels.

  • I didn’t try turning off channelmix entirely. I’m reasonably sure that would break mono sound entirely, so I’d get no accessibility output which would make my computer entirely unusable.

  • I tried using jack_disconnect to disconnect the accessibility ports from all but the headphones. The accessibility applications aren’t actually using Jack, but one of the cool things about Pipewire is that you can use Jack interfaces to manipulate non-Jack applications. Unfortunately, at least the Emacspeak espeak speech server regularly shuts down and restarts its sound connection. So, I get speech through the headphones for a phrase or two, and then it reverts to the default config.
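
For reference, that manual rewiring was along these lines; the client and port names here are purely illustrative and would really come from jack_lsp:

# list all ports to find the screen reader's output ports
jack_lsp
# drop the connections to the main speakers, keeping only the headphone channel
jack_disconnect "speech-dispatcher:output_FL" system:playback_1
jack_disconnect "speech-dispatcher:output_FL" system:playback_2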

I’d love any ideas about how I can get this to work. I’m sure it’s simple; I’m just missing the right mental model or knowledge of how to configure things.

Pipewire Not Talking to Jack

I thought I could at least use Pipewire the same way I use Pulse. Namely, I can run a real jackd and connect up Pipewire to that server. According to the wiki, Pipewire can be a Jack client. It’s disabled by default, because you need to make sure that Wireplumber is using the real Jack libraries rather than the Pipewire replacements. That’s the case on Debian, so I enabled the feature.

A Jack device appeared in wpctl status as did a Jack sink. Using jack_lsp on that device showed it was talking to the Jack server and connected to system:playback_*. Unfortunately, it doesn’t work. The sink does not show up in pacmd list sinks, and pipewire-pulse gives an error about it not being ready. If I select it as the default sink in wpctl set-default I get no sound at all, at least from Pulse applications.

Versions of things

This is all on Debian, approximately testing/bookworm or newer for the relevant libraries.

  • Pipewire 0.3.51-1
  • Wireplumber 0.4.10-2
  • pipewire-pulse and libspa-0.2-jack are also 0.3.51-1 as you’d expect
  • Jackd2 1.9.17~dfsg-1



Reproducible Builds (diffoscope): diffoscope 216 released

10 June, 2022 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 216. This version includes the following changes:

* Print profile output if we were called with --profile and we receive a
  TERM signal.
* Emit a warning if/when we are handling a TERM signal.
* Clarify in the code in what situations the main "finally" block gets
  called, especially in relation to handling TERM signals.
* Clarify and tidy some unconditional control flow in diffoscope.profiling.

You can find out more by visiting the project homepage.

Enrico Zini: Updating cbqt for bullseye

9 June, 2022 - 17:15

Back in 2017 I did some work to set up a cross-building toolchain for QT Creator that takes advantage of Debian's packaging for the whole dependency ecosystem.

It ended with cbqt, a little script that sets up a chroot to hold cross-build-dependencies, to avoid conflicts with packages in the host system, and sets up a qmake alternative to make use of them.

Today I'm dusting off that work, to ensure it works on Debian bullseye.

Resetting QT Creator

To make things reproducible, I wanted to reset QT Creator's configuration.

Besides purging and reinstalling the package, one needs to manually remove:

  • ~/.config/QtProject
  • ~/.cache/QtProject/
  • /usr/share/qtcreator/QtProject, which is where configuration is stored if you used sdktool to programmatically configure Qt Creator (see for example this post and Debian bug #1012561).

Updating cbqt

Easy start, change the distribution for the chroot:

-DIST_CODENAME = "stretch"
+DIST_CODENAME = "bullseye"

Adding LIBDIR

Something else does not work:

Test$ qmake-armhf -makefile
Info: creating stash file …/Test/.qmake.stash
Test$ make
[...]
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link,…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o   …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Widgets.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Gui.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Core.so -lGLESv2 -lpthread
/usr/lib/gcc-cross/arm-linux-gnueabihf/10/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lGLESv2
collect2: error: ld returned 1 exit status
make: *** [Makefile:146: Test] Error 1

I figured that now I also need to set QMAKE_LIBDIR and not just QMAKE_RPATHLINKDIR:

--- a/cbqt
+++ b/cbqt
@@ -241,18 +241,21 @@ include(../common/linux.conf)
 include(../common/gcc-base-unix.conf)
 include(../common/g++-unix.conf)

+QMAKE_LIBDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/
 QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/

Now it links again:

Test$ qmake-armhf -makefile
Test$ make
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link,…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o   -L…/armhf/lib/arm-linux-gnueabihf -L…/armhf/usr/lib/arm-linux-gnueabihf -L…/armhf/usr/lib/ …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Widgets.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Gui.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Core.so -lGLESv2 -lpthread

Making it work in Qt Creator

Time to try it in Qt Creator, and sadly it fails:

…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
QMAKE_CXX.COMPILER_MACROS is not defined

I traced it to this bit in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf (irrelevant bits deleted):

isEmpty($${target_prefix}.COMPILER_MACROS) {
    msvc {
        # …
    } else: gcc|ghs {
        vars = $$qtVariablesFromGCC($$QMAKE_CXX)
    }
    for (v, vars) {
        # …
        $${target_prefix}.COMPILER_MACROS += $$v
    }
    cache($${target_prefix}.COMPILER_MACROS, set stash)
} else {
    # …
}

It turns out that qmake is not able to realise that the compiler is gcc, so vars does not get set, nothing is set in COMPILER_MACROS, and qmake fails.

Reproducing it on the command line

When run manually, however, qmake-armhf worked, so it would be good to know how Qt Creator is actually running qmake. Since it frustratingly does not show what commands it runs, I'll have to strace it:

strace -e trace=execve --string-limit=123456 -o qtcreator.trace -f qtcreator

And there it is:

$ grep qmake- qtcreator.trace
1015841 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "-query"], 0x56096e923040 /* 54 vars */) = 0
1015865 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "…/Test/Test.pro", "-spec", "arm-linux-gnueabihf", "CONFIG+=debug", "CONFIG+=qml_debug"], 0x7f5cb4023e20 /* 55 vars */) = 0

I run the command manually and indeed I reproduce the problem:

$ /usr/local/bin/qmake-armhf Test.pro -spec arm-linux-gnueabihf CONFIG+=debug CONFIG+=qml_debug
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.

I try removing options until I find the one that breaks it and... now it's always broken! Even manually running qmake-armhf, like I did earlier, stopped working:

$ rm .qmake.stash
$ qmake-armhf -makefile
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.

Debugging toolchain.prf

I tried purging and reinstalling qtcreator, and recreating the chroot, but qmake-armhf is staying broken. I'll let that be, and try to debug toolchain.prf.

By grepping gcc in the mkspecs directory, I managed to figure out that:

  • The } else: gcc|ghs { test is matching the value(s) of QMAKE_COMPILER
  • QMAKE_COMPILER can have multiple values, separated by space
  • If in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/arm-linux-gnueabihf/qmake.conf I set QMAKE_COMPILER = gcc arm-linux-gnueabihf-gcc, then things work again.

Sadly, I failed to find reference documentation for QMAKE_COMPILER's syntax and behaviour. I also failed to find out why qmake-armhf worked earlier, and I am also failing to restore the system to a situation where it works again. Maybe I dreamt that it worked? Did I have some manual change lying around from some previous fiddling with things?

Anyway at least now I have the fix:

--- a/cbqt
+++ b/cbqt
@@ -248,7 +248,7 @@ QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/

-QMAKE_COMPILER          = {chroot.arch_triplet}-gcc
+QMAKE_COMPILER          = gcc {chroot.arch_triplet}-gcc

 QMAKE_CC                = /usr/bin/{chroot.arch_triplet}-gcc

Fixing a compiler mismatch warning

In setting up the kit, Qt Creator also complained that the compiler from qmake did not match the one configured in the kit. That was easy to fix, by pointing at the host system cross-compiler in qmake.conf:

 QMAKE_COMPILER          = {chroot.arch_triplet}-gcc

-QMAKE_CC                = {chroot.arch_triplet}-gcc
+QMAKE_CC                = /usr/bin/{chroot.arch_triplet}-gcc

 QMAKE_LINK_C            = $$QMAKE_CC
 QMAKE_LINK_C_SHLIB      = $$QMAKE_CC

-QMAKE_CXX               = {chroot.arch_triplet}-g++
+QMAKE_CXX               = /usr/bin/{chroot.arch_triplet}-g++

 QMAKE_LINK              = $$QMAKE_CXX
 QMAKE_LINK_SHLIB        = $$QMAKE_CXX

Updated setup instructions

Create an armhf environment:

sudo cbqt ./armhf --create --verbose

Create a qmake wrapper that builds with this environment:

sudo ./cbqt ./armhf --qmake -o /usr/local/bin/qmake-armhf

Install the build-dependencies that you need:

# Note: :arch is added automatically to package names if no arch is explicitly specified
sudo ./cbqt ./armhf --install libqt5svg5-dev libmosquittopp-dev qtwebengine5-dev

Build with qmake

Use qmake-armhf instead of qmake and it works perfectly:

qmake-armhf -makefile
make

Set up Qt Creator

Configure a new Kit in Qt Creator:

  1. Tools/Options, then Kits, then Add
  2. Name: armhf (or anything you like)
  3. In the Qt Versions tab, click Add then set the path of the new Qt to /usr/local/bin/qmake-armhf. Click Apply.
  4. Back in the Kits, select the Qt version you just created in the Qt version field
  5. In Compilers, select the ARM versions of GCC. If they do not appear, install crossbuild-essential-armhf, then in the Compilers tab click Re-detect and then Apply to make them available for selection
  6. Dismiss the dialog with "OK": the new kit is ready

Now you can choose the default kit to build and run locally, and the armhf kit for remote cross-development.

I tried looking at sdktool to automate this step, and it requires a nontrivial amount of work to do it reliably, so these manual instructions will have to do.

Credits

This has been done as part of my work with Truelite.

Jonathan Dowland: Fight Club OST

9 June, 2022 - 16:40

I often listen to soundtracks when I'm concentrating. The Fight Club soundtrack, by the Dust Brothers, is not one I turn to very often. I do love the way it was packaged for vinyl. The cover design references IKEA, but the clever thing is it has a mailer-style pull-cord to open it up. You can't open the packaging using it without literally tearing the package in half. There is a secret, alternative way to get in with less damage, but if you try it the packaging has a surprise for you. This image album summarizes most of the packaging secrets.

The records themselves are a pleasant mottled pink colour, reminiscent of the soap bars in the movie. They're labelled "Paper Street Soap Company".

Laura Arjona Reina: Moving to a faster but smaller disk, encrypted setup

8 June, 2022 - 19:05

My work computer runs Debian 11 bullseye (the current stable release) on a mechanical 500GB disk, and I was provided with a new SSD disk, but its size was 480 GB. So I had to shrink my partitions before copying the data to the new disk. It turned out to be a bit difficult because my main partition was encrypted.

I write here how I did it; maybe there are other, simpler ways, but I couldn’t find them.

References:

I had three partitions in my old 500GB disk: /dev/sda1 is the EFI partition, /dev/sda2 the boot partition and /dev/sda3 the root partition (encrypted, with LVM, the standard way the Debian installer proposes when you choose a simple encrypted setup).

First of all, I made a disk image with Clonezilla to an external USB disk, in case I messed things up, so I could return to a safe point and start again.

Then I started my computer with a Debian 11 live USB with KDE Plasma desktop and Spanish localisation environment.

I opened the KDE Partition Manager and copied the non-encrypted partitions (sda1, EFI, and sda2, /boot) to the new disk.

I shrank the encrypted partition from the terminal with the following commands (I had enough free space, so I reduced my partition to a total of 300GB):

Removed the swap logical volume, resized the physical volume, and re-created swap:

# remove the old swap LV so the physical volume can shrink
sudo lvremove /dev/larjona-pc-vg/swap_1
# shrink the LVM physical volume inside the encrypted container to 380G
sudo pvresize --setphysicalvolumesize 380G /dev/mapper/cryptdisk
sudo pvchange -x y /dev/mapper/cryptdisk
# re-create and format a 4G swap LV
sudo lvcreate -L 4G -n swap_1 larjona-pc-vg
sudo mkswap -L swap_1 /dev/larjona-pc-vg/swap_1

Displayed information about the physical volume segments (to get the new size in sectors), shrank the LUKS container and deactivated everything:

sudo pvs -v --segments --units s /dev/mapper/cryptdisk
# the new size is given in 512-byte sectors: 838860800 sectors = 400 GiB
sudo cryptsetup -b 838860800 resize cryptdisk
sudo cryptsetup status cryptdisk
# deactivate the volume group and close the LUKS device before touching the partition
sudo vgchange -a n vgroup
sudo vgchange -an
sudo cryptsetup luksClose cryptdisk

Then I reduced the sda3 partition with the KDE Partition Manager (it took a while), and copied it to the new disk.

Turned off the computer and unplugged the old disk. Started the computer with the Debian 11 Live USB again, UEFI boot.

Now, to make my system boot:

# open the encrypted partition and activate the LVM volumes inside it
sudo cryptsetup luksOpen /dev/sda3 cryptdisk
sudo vgscan --mknodes
sudo vgchange -ay
# mount the root, boot and EFI filesystems of the installed system
sudo mount /dev/mapper/larjona--pc--vg-root /mnt
sudo mount /dev/sda2 /mnt/boot
sudo mount /dev/sda1 /mnt/boot/efi
# make the kernel's virtual filesystems available inside the chroot
sudo mount --rbind /sys /mnt/sys
sudo mount -t efivarfs none /sys/firmware/efi/efivars
for i in /dev /dev/pts /proc /run; do sudo mount -B $i /mnt$i; done
sudo chroot /mnt

Then I edited /mnt/etc/crypttab to reflect the name of the new encrypted partition, and /mnt/etc/fstab to paste in the UUIDs of the new partitions. Then I ran grub-install and reinstalled the kernels as noted in the reference, rebooted and logged into my Plasma desktop.
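
Roughly, the bootloader and initramfs steps inside the chroot amount to the following (a sketch: reinstalling the kernel packages, as the reference does, has the same effect of regenerating the initramfs with the new crypttab):

# inside the chroot, on a UEFI system with the ESP mounted at /boot/efi
grub-install
update-grub
# regenerate the initramfs so it picks up the new crypttab and UUIDs
update-initramfs -u -k all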

(Well, the actual process was not so smooth, but after several tries and errors, and searching for help, I managed to get the commands needed to make my system boot from the new disk.)

Reproducible Builds: Reproducible Builds in May 2022

6 June, 2022 - 19:23

Welcome to the May 2022 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.


Repfix paper

Zhilei Ren, Shiwei Sun, Jifeng Xuan, Xiaochen Li, Zhide Zhou and He Jiang have published an academic paper titled Automated Patching for Unreproducible Builds:

[..] fixing unreproducible build issues poses a set of challenges [..], among which we consider the localization granularity and the historical knowledge utilization as the most significant ones. To tackle these challenges, we propose a novel approach [called] RepFix that combines tracing-based fine-grained localization with history-based patch generation mechanisms.

The paper (PDF, 3.5MB) uses the Debian mylvmbackup package as an example to show how RepFix can automatically generate patches to make software build reproducibly. As it happens, Reiner Herrmann submitted a patch for the mylvmbackup package that has remained unapplied by the Debian package maintainer for over seven years, so this paper inadvertently underscores that achieving reproducible builds will require both technical and social solutions.

Python variables

Johannes Schauer discovered a fascinating bug where simply naming your Python variable _m led to unreproducible .pyc files. In particular, the types module in Python 3.10 requires the following patch to make it reproducible:

--- a/Lib/types.py
+++ b/Lib/types.py
@@ -37,8 +37,8 @@ _ag = _ag()
 AsyncGeneratorType = type(_ag)
 
 class _C:
-    def _m(self): pass
-MethodType = type(_C()._m)
+    def _b(self): pass
+MethodType = type(_C()._b)

Simply renaming the dummy method from _m to _b was enough to work around the problem. Johannes’ bug report first led to a number of improvements in diffoscope to aid in dissecting .pyc files, but upstream identified this as caused by an issue surrounding interned strings, which is being tracked in CPython bug #78274.
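
For anyone who wants to poke at this themselves, comparing the bytecode of two such builds with diffoscope is a one-liner (the file names are illustrative):

diffoscope first-build/types.cpython-310.pyc second-build/types.cpython-310.pyc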

New SPDX team to incorporate build metadata in Software Bill of Materials

SPDX, the open standard for Software Bill of Materials (SBOM), is continuously developed by a number of teams and committees. However, SPDX has welcomed a new addition: a team dedicated to enhancing metadata about software builds, complementing reproducible builds in creating a more secure software supply chain. The “SPDX Builds Team” has been working throughout May to define the universal primitives shared by all build systems, including the “who, what, where and how” of builds:

  • Who: the identity of the person or organisation that controls the build infrastructure.

  • What: the inputs and outputs of a given build, combining metadata about the build’s configuration with an SBOM describing source code and dependencies.

  • Where: the software packages making up the build system, from build orchestration tools such as Woodpecker CI and Tekton to language-specific tools.

  • How: the invocation of a build, linking metadata of a build to the identity of the person or automation tool that initiated it.

The SPDX Builds Team expects to have a usable data model by September, ready for inclusion in the SPDX 3.0 standard. The team welcomes new contributors, inviting those interested in joining to introduce themselves on the SPDX-Tech mailing list.

Talks at Debian Reunion Hamburg

Some of the Reproducible Builds team (Holger Levsen, Mattia Rizzolo, Roland Clobus, Philip Rinn, etc.) met in real life at the Debian Reunion Hamburg (official homepage). There were several informal discussions amongst them, as well as two talks related to reproducible builds.

First, Holger Levsen gave a talk on the status of Reproducible Builds for bullseye and bookworm and beyond (WebM, 210MB):

Secondly, Roland Clobus gave a talk called Reproducible builds as applied to non-compiler output (WebM, 115MB):


Supply-chain security attacks

This was another bumper month for supply-chain attacks in package repositories. Early in the month, Lance R. Vick noticed that the maintainer of the NPM foreach package had let their personal email domain expire, so Vick bought it and now “controls foreach on NPM and the 36,826 projects that depend on it”. Shortly afterwards, Drew DeVault published a related blog post titled When will we learn? that offers a brief timeline of major incidents in this area and, not uncontroversially, suggests that the “correct way to ship packages is with your distribution’s package manager”.

Bootstrapping

“Bootstrapping” is a process for building software tools progressively from a primitive compiler tool and source language up to a full Linux development environment with GCC, etc. This is important given the amount of trust we put in existing compiler binaries. This month, a bootstrappable mini-kernel was announced. Called boot2now, it comprises a series of compilers in the form of bootable machine images.

Google’s new Assured Open Source Software service

Google Cloud (the division responsible for the Google Compute Engine) announced a new Assured Open Source Software service. Noting the considerable 650% year-over-year increase in cyberattacks aimed at open source suppliers, the new service claims to enable “enterprise and public sector users of open source software to easily incorporate the same OSS packages that Google uses into their own developer workflows”. The announcement goes on to enumerate that packages curated by the new service would be:

  • Regularly scanned, analyzed, and fuzz-tested for vulnerabilities.

  • Have corresponding enriched metadata incorporating Container/Artifact Analysis data.

  • Are built with Cloud Build including evidence of verifiable SLSA-compliance

  • Are verifiably signed by Google.

  • Are distributed from an Artifact Registry secured and protected by Google.

(Full announcement)

A retrospective on the Rust programming language

Andrew “bunnie” Huang published a long blog post this month promising a “critical retrospective” on the Rust programming language. Amongst many acute observations about the evolution of the language’s syntax (etc.), the post begins to critique the language’s approach to supply chain security (“Rust Has A Limited View of Supply Chain Security”) and reproducibility (“You Can’t Reproduce Someone Else’s Rust Build”):

There’s some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix to this would be to re-configure our build system to run in some sort of a chroot environment or a virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say “almost anyone else” because this fix would be OS-dependent, so we’d be able to get reproducible builds under, for example, Linux, but it would not help Windows users where chroot environments are not a thing.

(Full post)

Reproducible Builds IRC meeting

The minutes and logs from our May 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 28th June at 15:00 UTC on #reproducible-builds on the OFTC network.

A new tool to improve supply-chain security in Arch Linux

kpcyrd published yet another interesting tool related to reproducibility. Writing about the tool in a recent blog post, kpcyrd mentions that although many PKGBUILDs provide authentication in the context of signed Git tags (i.e. the ability to “verify the Git tag was signed by one of the two trusted keys”), they do not support pinning, i.e. “upstream could create a new signed Git tag with an identical name, and arbitrarily change the source code without the [maintainer] noticing”. Conversely, other PKGBUILDs support pinning but not authentication. The new tool, auth-tarball-from-git, fixes both problems, as outlined in kpcyrd’s original blog post.


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 212, 213 and 214 to Debian unstable.

Chris also made the following changes:

  • New features:

    • Add support for extracting vmlinuz Linux kernel images. []
    • Support both python-argcomplete 1.x and 2.x. []
    • Strip sticky etc. from x.deb: sticky Debian binary package […]. []
    • Integrate test coverage with GitLab’s concept of artifacts. [][][]
  • Bug fixes:

    • Don’t mask differences in .zip or .jar central directory extra fields. []
    • Don’t show a binary comparison of .zip or .jar files if we have observed at least one nested difference. []
  • Codebase improvements:

    • Substantially update comment for our calls to zipinfo and zipinfo -v. []
    • Use assert_diff in test_zip over calling get_data with a separate assert. []
    • Don’t call re.compile and then call .sub on the result; just call re.sub directly. []
    • Clarify the comment around the difference between --usage and --help. []
  • Testsuite improvements:

    • Test --help and --usage. []
    • Test that --help includes the file formats. []

Vagrant Cascadian added an external tool reference xb-tool for GNU Guix  [] as well as updated the diffoscope package in GNU Guix itself [][][].


Distribution work

In Debian, 41 reviews of Debian packages were added, 85 were updated and 13 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated, including adding a new nondeterministic_ordering_in_deprecated_items_collected_by_doxygen toolchain issue [] as well as ones for mono_mastersummary_xml_files_inherit_filesystem_ordering [], extended_attributes_in_jar_file_created_without_manifest [] and apxs_captures_build_path [].

Vagrant Cascadian performed a rough check of the reproducibility of core package sets in GNU Guix, and in openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducible builds website

Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but also prepared and published an interview with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix. [][][][]

In addition, Tim Jones added a link to the Talos Linux project [] and billchenchina fixed a dead link [].


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Add support for detecting running kernels that require attention. []
    • Temporarily configure a host to support performing Debian builds for packages that lack .buildinfo files. []
    • Update generated webpages to clarify wishes for feedback. []
    • Update copyright years on various scripts. []
  • Mattia Rizzolo:

    • Provide a facility so that Debian Live image generation can copy a file remotely. [][][][]
  • Roland Clobus:

    • Add initial support for testing generated images with OpenQA. []

And finally, as usual, node maintenance was also performed by Holger Levsen [][].


Misc news

On our mailing list this month:


Contact

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Thorsten Alteholz: My Debian Activities in May 2022

6 June, 2022 - 17:51

FTP master

This month I accepted 288 and rejected 45 packages. The overall number of packages that got accepted was 290.

Debian LTS

This was my ninety-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 40h. During that time I did LTS and normal security uploads of:

  • [DLA 3029-1] cups security update for one embargoed CVE
  • [DLA 3028-1] atftp security update for one CVE
  • [DLA 3030-1] zipios++ security update for one CVE
  • [DSA-5149-1] cups security update in Buster and Bullseye
  • [#1008577] bullseye-pu: golang-github-russellhaering-goxmldsig/1.1.0-1+deb11u1 debdiff was approved and package uploaded
  • [#1009077] bullseye-pu: minidlna/1.3.0+dfsg-2+deb11u1 debdiff was approved and package uploaded
  • [#1009250] bullseye-pu: fribidi/1.0.8-2+deb11u1 debdiff was approved and package uploaded

Further I continued working on libvirt and started to work on blender and ncurses.

I also continued to work on security support for golang packages.

Last but not least I did some days of frontdesk duties and took care of issues on security-master.

Debian ELTS

This month was the forty-seventh ELTS month.

During my allocated time I uploaded:

  • ELS-618-1 for openldap

I also moved/refactored the current ELTS documentation to a new repository.

Further I started to work on blender and ncurses in ELTS as well as in LTS.

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

The reason for the new upstream version of ipp-usb was a strange bug. Some HP printers claim to have fax support but fail to respond to the corresponding IPP queries. I understand that nowadays sending a fax is no longer a main theme for quality assurance. But if one tries to advertise as many features as possible, all these features should basically work and not prevent the things a printer should normally do.

The reason for the new upstream version of cups was a security issue. You now should have the latest version of cups installed (there have been updates in other Debian releases as well).

Debian Astro

This month I uploaded new upstream versions or improved packaging of:

Other stuff

This month I uploaded new packages:


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.