Planet Debian

Planet Debian - https://planet.debian.org/

Dirk Eddelbuettel: RcppSimdJson 0.1.8 on CRAN: Maintenance

19 October, 2022 - 06:27

The RcppSimdJson package was just updated to release 0.1.8 today.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it can parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This release simply changes one statement to not trigger a warning under clang++-14.

The very short NEWS entry for this release follows.

Changes in version 0.1.8 (2022-10-18)
  • Use the '||' operator instead of '|' on a set of booleans to appease 'clang-14'.
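
For illustration only, here is a hedged Python analogue of why compilers now flag '|' on booleans (the actual change is in the package's C++ sources, as the NEWS entry describes): the bitwise operator always evaluates both operands, while the logical operator short-circuits.

# Hypothetical example, not RcppSimdJson code: '|' runs both sides,
# while 'or' (Python's counterpart to C++'s '||') stops as soon as
# the result is known.
def left():
    print("left evaluated")
    return True

def right():
    print("right evaluated")
    return True

print(left() | right())    # both messages print, result is True
print(left() or right())   # only "left evaluated" prints: short-circuit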

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: digest 0.6.30 on CRAN: More Package Maintenance

19 October, 2022 - 05:47

Release 0.6.30 of the digest package arrived at CRAN earlier today, and was just uploaded to Debian as well.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is mature and widely used, as many tasks may involve caching of objects, for which it provides convenient general-purpose hash key generation to quickly identify the various objects.
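
As a loose illustration of the idea (using Python's hashlib rather than the R digest API, so treat it as an analogue only), hashing a serialized object yields a compact key that can index a cache of expensive results:

# Sketch of content-addressed caching; all names here are made up.
import hashlib
import pickle

def object_key(obj) -> str:
    # Hash the pickled object to get a short, stable identifier.
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()

cache = {}
data = {"x": [1, 2, 3], "label": "example"}
key = object_key(data)
if key not in cache:
    cache[key] = sum(data["x"])   # stand-in for an expensive computation
print(key[:12], cache[key])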

This release contains two tiny changes on old standard C code to appease the new / upcoming clang-15 release now used by CRAN in their forward-looking checks.

My CRANberries provides the usual summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Scarlett Gately Moore: New KDE Gear snaps in the works

18 October, 2022 - 02:53

KDE Gear 22.08.2 was released! https://kde.org/announcements/gear/22.08.2/

So… I am working on new snaps! This release also includes a new content snap I made with Frameworks 5.98 and Qt 5.15.6. With all the new goodness, I am (re)testing all snaps to make sure they are working as expected.

You can find a link to all of my snap releases from the KDE Snap Store Releases on the menu above.

Some notable releases that have new fixes and improvements are:

  • Kalzium: Molecular editor now works!
  • Artikulate: Now works on arm64 (e.g. Raspberry Pi)
  • Dragon: Now works on arm64
  • Minuet: Now works on arm64

New 22.08.2 releases re-tested on arm64 and amd64:

  • Picmi
  • Kturtle
  • Ksudoku
  • Konquest

More coming soon!

Please consider donating! I am seeking employment, but until then, I need assistance with gas to power my generator to power my laptop. Solar doesn’t work great in these coming winter months. Thank you for your consideration! https://www.patreon.com/sgmoore

Dima Kogan: gnuplot output in an FLTK widget

18 October, 2022 - 02:28
Overview

I make a lot of plots, and the fragmentation of tools in this space really bugs me. People writing Python code mostly use matplotlib, R people use ggplot2. MS people use the internal Excel thing. I've seen people use gtkdatabox for GTK widgets, rrdtool for logging, qcustomplot for qt. And so on. This is really unhelpful, and it would benefit everybody if there was a single solid plotting backend with lots of bindings to different languages and tools.

For my own usage, I've been fighting this quixotic battle, using gnuplot as the plotting backend for all my use cases. gnuplot is

  • very mature
  • stable
  • fast
  • powerful
  • supported on every platform (within reason)
  • supports lots and lots of output backends

There are some things it can't do, but those can be added, and I haven't felt it to be limiting in over 20 years of using it.

I rarely use it directly, and usually interact with it through one of the wrapper libraries I wrote (although the Perl one was taken over by others long ago).

Recently I needed a plotting widget for an FLTK program written in Python. It would be great if there was a C++ class deriving from Fl_Widget that would be wrapped by pyfltk, but there isn't.

But it turns out that I already had all the tools to quickly hack together something that mostly works. This is a not-ready-for-primetime hack, but it works so well, I'd like to write it up. Hopefully this will be done "properly" someday.

Approach

Alright. So here I'm trying to tie together a Python program, gnuplot output and an FLTK widget. Since this is a Python program, I can use gnuplotlib to talk to the gnuplot backend. In a perfect world, gnuplot would ship a backend interfacing to FLTK. But it doesn't. What it does do is ship an x11 backend that makes plots with X11 commands, and it allows these commands to be directed to an arbitrary X11 window. So we:

  1. Make an FLTK widget that simply creates an X11 window, and never actually draws into it
  2. Tell gnuplot to plot into this window
Demo

This is really simple, and works shockingly well. Here's my Fl_gnuplotlib widget:

#!/usr/bin/python3

import sys
import gnuplotlib as gp
import fltk

class Fl_Gnuplotlib_Window(fltk.Fl_Window):

    def __init__(self, x,y,w,h, **plot_options):
        super().__init__(x,y,w,h)
        self.end()

        self._plot                 = None
        self._delayed_plot_options = None

        self.init_plot(**plot_options)

    def init_plot(self, **plot_options):
        if 'terminal' in plot_options:
            raise Exception("Fl_Gnuplotlib_Window needs control of the terminal, but the user asked for a specific 'terminal'")

        if self._plot is not None:
            self._plot = None

        self._delayed_plot_options = None

        xid = fltk.fl_xid(self)
        if xid == 0:
            # I don't have an xid (yet?), so I delay the init
            self._delayed_plot_options = plot_options
            return

        # will barf if we already have a terminal
        gp.add_plot_option(plot_options,
                           terminal = f'x11 window "0x{xid:x}"')

        self._plot = gp.gnuplotlib(**plot_options)

    def plot(self, *args, **kwargs):

        if self._plot is None:
            if self._delayed_plot_options is None:
                raise Exception("plot has not been initialized")

            self.init_plot(**self._delayed_plot_options)
            if self._plot is None:
                raise Exception("plot has not been initialized. Delayed initialization failed")

        self._plot.plot(*args, **kwargs)

Clearly it's simply making an Fl_Window, and pointing gnuplotlib at it. And a sample application that uses this widget:

#!/usr/bin/python3

import sys
import numpy as np
import numpysane as nps
from fltk import *
from Fl_gnuplotlib import *


window = Fl_Window(800, 600, "plot")
plot   = Fl_Gnuplotlib_Window(0, 0, 800,600)


iplot = 0
plotx = np.arange(1000)
ploty = nps.cat(plotx*plotx,
                np.sin(plotx/100),
                plotx)

def timer_callback(*args):

    global iplot, plotx, ploty, plot
    plot.plot(plotx,
              ploty[iplot],
              _with = 'lines')

    iplot += 1
    if iplot == len(ploty):
        iplot = 0

    Fl.repeat_timeout(1.0, timer_callback)


window.resizable(window)
window.end()
window.show()

Fl.add_timeout(1.0, timer_callback)

Fl.run()

This is nice and simple. Exactly what a program using a widget to make a plot (while being oblivious to the details) should look like. It creates a window, places the one plotting widget into it, and cycles the plot inside it at 1Hz (cycling between a parabola, a sinusoid and a line). Clearly we could place other UI elements around it, or add more plots, or whatever.

The output looks like this:

To run this you need to apt install python3-numpysane python3-gnuplotlib python3-fltk. If you're running an older distro or a non-Debian-based one, you should grab those from source.

Discussion

This works. But it's a hack. Some issues:

  • This plotting widget currently handles output only. It can make whatever plot we like, but it cannot accept UI input from the containing program in any way
  • More than that, when focused it completely replaces the FLTK event logic for that window. So all keyboard input is swallowed, including the keys to access FLTK menus, to exit the application, etc, etc.
  • This approach requires us to use the x11 gnuplot terminal. This works, but it's no longer the terminal preferred by the gnuplot devs, and it isn't maintained as vigilantly as the others.
  • And it has bugs. For instance, asking to plot into a window that doesn't yet exist causes it to create a new window. This breaks FLTK applications that start up and create a plot immediately. Here's a mailing list thread discussing these issues.

So this is a very functional hack, but it's still a hack. And it feels like making this solid will take a lot of work. Maybe. I'll push more on this as I need it. Stay tuned!

Jeremy Bicha: Ubuntu bug fix anniversary

17 October, 2022 - 20:54

I first installed Ubuntu when Ubuntu 6.06 LTS “Dapper Drake” was released. I was brand new to Linux. This was Ubuntu’s first LTS release; the very first release of Ubuntu was only a year and a half before. I was impressed by how usable and useful the system was. It soon became my primary home operating system and I wanted to help make it better.

On October 15, 2009, I was helping test the release candidates ISOs for the Ubuntu 9.10 release. Specifically, I tested Edubuntu. Edubuntu has since been discontinued but at the time it was an official Ubuntu flavor preloaded with lots of education apps. One of those education apps was Moodle, an e-learning platform.

When testing Moodle, I found that a default installation would make Moodle impossible to use locally. I figured out how to fix this issue. This was really exciting: I finally found an Ubuntu bug I knew how to fix. I filed the bug report.

This was very late in the Ubuntu 9.10 release process and Ubuntu was in the Final Freeze state. In Final Freeze, every upload to packages included in the default install needs to be individually approved by a member of the Ubuntu Release Team. Also, I didn’t have upload rights to Ubuntu. Jordan Mantha (LaserJock), an Edubuntu maintainer, sponsored my bug fix upload.

I also forwarded my patch to Debian.

While trying to figure out what wasn’t working with Moodle, I stumbled across a packaging bug. Edubuntu provided a choice of MySQL or PostgreSQL for the system default database. MySQL was the default, but if PostgreSQL were chosen instead, Moodle wouldn’t work. I figured out how to fix this bug too a week later. Jordan sponsored this upload and Steve Langasek from the Release Team approved it so it also was able to be fixed before 9.10 was released.

Although the first bug was new to 9.10 because of a behavior change in a low-level dependency, this PostgreSQL bug existed in stable Ubuntu releases. Therefore, I prepared Stable Release Updates for Ubuntu 9.04 and Ubuntu 8.04 LTS.

Afterwards

Six months later, I was able to attend my first Ubuntu Developer Summit. I was living in Bahrain (in the Middle East) at the time and a trip to Belgium seemed easier to me than if I were living in the United States where I usually live. This was the Ubuntu Developer Summit where planning for Ubuntu 10.10 took place. I like to believe that I helped with the naming since I added Maverick to the wiki page where people contribute suggestions.

I did not apply for financial sponsorship to attend and I stayed in a budget hotel on the other side of Brussels. The event venue was on the outskirts of Brussels so there wasn’t a direct bus or metro line to get there. I rented a car. I didn’t yet have a smartphone and I had a LOT of trouble navigating to and from the site every day. I learned then that it’s best to stay close to the conference site since a lot of the event is actually in the unstructured time in the evenings. Fortunately, I managed to arrive in time for Mark Shuttleworth’s keynote where the Unity desktop was first announced. This was released in Ubuntu 10.10 in the Ubuntu Netbook Remix and became the default for Ubuntu Desktop in Ubuntu 11.04.

Ubuntu’s switch to Unity provided me with a huge opportunity. In April 2011, GNOME 3.0 was released. I wanted to try it but it wasn’t yet packaged in Ubuntu or Debian. It was suggested that I could help work on packaging the major new version in a PPA. The PPA was convenient because I was able to get permission to upload there more easily than I could get permission to upload directly to Ubuntu. My contributions there then enabled me to get upload rights to the Ubuntu Desktop packages later that year.

At a later Ubuntu Developer Summit, it was suggested that I start an official Ubuntu flavor for GNOME. So along with Tim Lunn (darkxst), I co-founded Ubuntu GNOME. Years later, Canonical stopped actively developing Unity; instead, Ubuntu GNOME was merged into Ubuntu Desktop.

Along the way, I became an Ubuntu Core Developer and a Debian Developer. And in January 2022, I joined Canonical on the Desktop Team. This all still feels amazing to me. It took me a long time to be comfortable calling myself a developer!

Conclusion

My first Ubuntu bugfix was 13 years ago this week. Because Ubuntu historically uses alphabetical adjective animal release names, 13 years means that we have rolled around to the letter K again! Later today, we begin release candidate ISO testing for Ubuntu 22.10 “Kinetic Kudu”.

I encourage you to help us test the release candidates and report bugs that you find. If you figure out how to fix a bug, we still sponsor bug fixes. If you are an Ubuntu contributor, I highly encourage you to attend an Ubuntu Summit if you can. The first Ubuntu Summit in years will be in 3 weeks in Prague, but the intent is for the Ubuntu Summits to be recurring events again.

Sven Hoexter: CentOS 9, stunnel, an openssl memory leak and a VirtualBox crash

17 October, 2022 - 01:24

tl;dr: OpenSSL 3.0.1 leaks memory in ssl3_setup_write_buffer(); this seems to be fixed in 3.0.5. The issue manifests at least in stunnel and keepalived on CentOS 9. In addition I learned the hard way that running a not-so-recent VirtualBox version on Debian bullseye led to dh parameter generation crashing in libcrypto in bn_sqr8x_internal().

A recent rabbit hole I went down. The actual bug in openssl was nailed down and documented by Quentin Armitage on GitHub in keepalived. My bugreport with all the back and forth in the RedHat Bugzilla is #2128412.

Act I - Hello stunnel, this is the OOMkiller Calling

We started to use stunnel on Google Cloud compute engine instances running CentOS 9. The loadbalancer in front of those instances used a TCP health check to validate the backend availability. A day or so later the stunnel instances got killed by the OOMkiller. Restarting stunnel and looking into /proc/<pid>/smaps showed a heap segment growing quite quickly.

Act II - Reproducing the Issue

While I'm not the biggest fan of VirtualBox and Vagrant, I have to admit it's quite nice to just fire up a VM image and give other people a chance to recreate that setup as well. Since VirtualBox is no longer released with Debian/stable I just recompiled what was available in unstable at the time of the bullseye release, and used that. That enabled me to just start a CentOS 9 VM, set up stunnel with a minimal config, grab netcat and a for loop, and watch the memory grow. E.g. while true; do nc -z localhost 2600; sleep 1; done. To my surprise, in addition to the memory leak, I also observed some crashes but did not yet care too much about those.

Act III - Wrong Suspect, a Workaround and Bugreporting

Of course the first idea was that something must be wrong in stunnel itself. But I could not find any recent bugreports. My assumption is that there are still a few people around using CentOS and stunnel, so someone else should probably have seen it before. Just to be sure I recompiled the latest stunnel package from Fedora. Didn't change anything. Next I recompiled it without almost all the patches Fedora/RedHat carries. Nope, no progress. Next idea: Maybe this is related to the fact that we do not initiate a TLS context after connecting? So we changed the test case from nc to openssl s_client, and the loadbalancer healthcheck from TCP to a TLS based one. Tada, a workaround, no more memory leaking. In addition I gave Fedora a try (they have Vagrant Virtualbox images in the "Cloud" Spin, e.g. here for Fedora 36) and my local Debian installation a try. No leaks experienced on both. Next I reported #2128412.

Act IV - Crash in libcrypto and a VirtualBox Bug

When I moved with the test case from the Google Cloud compute instance to my local VM I encountered some crashes. That morphed into a real problem when I started to run stunnel with gdb and valgrind. All crashes happened in libcrypto bn_sqr8x_internal() when generating new dh parameters (stunnel does that for you if you do not use static dh parameters). I quickly worked around that by generating static dh parameters for stunnel. After some back and forth I suspected VirtualBox as the culprit. Recompiling the current VirtualBox version (6.1.38-dfsg-3) from unstable on bullseye works without any changes. Upgrading actually fixed that issue.

Epilog

I highly appreciate that RedHat, with all the bashing around the future of CentOS, still works on community contributed bugreports. My kudos go to Clemens Lang. Now that the root cause is clear, I guess RedHat will push out a fix for the openssl 3.0.1 based release they have in RHEL/CentOS 9. Until that is available at least stunnel and keepalived are known to be affected. If you run stunnel on something public it's not that pretty, because even a low rate of TCP connections will result in a DoS condition.

Colin Watson: Reproducible man-db databases

16 October, 2022 - 22:54

I’ve released man-db 2.11.0 (announcement, NEWS), and uploaded it to Debian unstable.

The biggest chunk of work here was fixing some extremely long-standing issues with how the database is built. Despite being in the package name, man-db’s database is much less important than it used to be: most uses of man(1) haven’t required it in a long time, and both hardware and software improvements mean that even some searches can be done by brute force without needing prior indexing. However, the database is still needed for the whatis(1) and apropos(1) commands.

The database has a simple format - no relational structure here, it’s just a simple key-value database using old-fashioned DBM-like interfaces and composing a few fields to form values - but there are a number of subtleties involved. The issues tend to amount to this: what does a manual page name mean? At first glance it might seem simple, because you have file names that look something like /usr/share/man/man1/ls.1.gz and that’s obviously ls(1). Some pages are symlinks to other pages (which we track separately because it makes it easier to figure out which entries to update when the contents of the file system change), and sometimes multiple pages are even hard links to the same file.

The real complications come with “whatis references”. Pages can list a bunch of names in their NAME section, and the historical expectation is that it should be possible to use those names as arguments to man(1) even if they don’t also appear in the file system (although Debian policy has deprecated relying on this for some time). Not only does that mean that man(1) sometimes needs to consult the database, but it also means that the database is inherently more complicated, since a page might list something in its NAME section that conflicts with an actual file name in the file system, and now you need a priority system to resolve ambiguities. There are some other possible causes of ambiguity as well.

The people working on reproducible builds in Debian branched out to the related challenge of reproducible installations some time ago: can you take a collection of packages, bootstrap a file system image from them, and reproduce that exact same image somewhere else? This is useful for the same sorts of reasons that reproducible builds are useful: it lets you verify that an image is built from the components it’s supposed to be built from, and doesn’t contain any other skulduggery by accident or design. One of the people working on this noticed that man-db’s database files were an obstacle to that: in particular, the exact contents of the database seemed to depend on the order in which files were scanned when building it. The reporter proposed solving this by processing files in sorted order, but I wasn’t keen on that approach: firstly because it would mean we could no longer process files in an order that makes it more efficient to read them all from disk (still valuable on rotational disks), but mostly because the differences seemed to point to other bugs.

Having understood this, there then followed several late nights of very fiddly work on the details of how the database is maintained. None of this was conceptually difficult: it mainly amounted to ensuring that we maintain a consistent well-order for different entries that we might want to insert for a given database key, and that we consider the same names for insertion regardless of the order in which we encounter files. As usual, the tricky bit is making sure that we have the right data structures to support this. man-db is written in C which is not very well-supplied with built-in data structures, and originally much of the code was written in a style that tried to minimize memory allocations; this came at the cost of ownership and lifetime often being rather unclear, and it was often difficult to make changes without causing leaks or double-frees. Over the years I’ve been gradually introducing better encapsulation to make things easier to follow, and I had to do another round of that here. There were also some problems with caching being done at slightly the wrong layer: we need to make use of a “trace” of the chain of links followed to resolve a page to its ultimate source file, but we were incorrectly caching that trace and reusing it for any link to the same file, with incorrect results in many cases.
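
To make the order-independence idea concrete, here is a toy Python sketch (not man-db's actual C code; the priority rule is a made-up stand-in): for each database key, keep only the candidate that wins under a fixed total order, so the same entry is chosen however the file system was traversed.

# Hypothetical priority: real page files beat whatis references, and ties
# are broken by source path, giving one deterministic winner per key.
def best_candidate(candidates):
    return min(candidates, key=lambda c: (c["kind"] != "file", c["path"]))

scan_one = [{"name": "ls", "kind": "whatis", "path": "man1/other.1"},
            {"name": "ls", "kind": "file",   "path": "man1/ls.1"}]
scan_two = list(reversed(scan_one))       # same entries, different scan order

assert best_candidate(scan_one) == best_candidate(scan_two)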

Oh, and after doing all that I found that the on-disk representation of a GDBM database is insertion-order-dependent, so I ended up having to manually reorganize the database at the end by reading it all in and writing it all back out in sorted order, which feels really weird to me coming from spending most of my time with PostgreSQL these days. Fortunately the database is small so this takes negligible time.
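
For that last step, here is a rough Python sketch using the standard dbm.gnu module (again, not man-db's own code) of rewriting a GDBM database in sorted key order so the on-disk layout no longer depends on insertion order:

import dbm.gnu

def rewrite_sorted(src, dst):
    # Read every key/value pair, then write them back in sorted key order.
    db = dbm.gnu.open(src, "r")
    entries = {}
    k = db.firstkey()
    while k is not None:
        entries[k] = db[k]
        k = db.nextkey(k)
    db.close()

    out = dbm.gnu.open(dst, "n")   # "n": always create a new, empty database
    for key in sorted(entries):
        out[key] = entries[key]
    out.close()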

None of this is particularly glamorous work, but it paid off:

# export SOURCE_DATE_EPOCH="$(date +%s)"
# mkdir emptydir disorder
# disorderfs --multi-user=yes --shuffle-dirents=yes --reverse-dirents=no emptydir disorder
# export TMPDIR="$(pwd)/disorder"
# mmdebstrap --variant=standard --hook-dir=/usr/share/mmdebstrap/hooks/merged-usr \
      unstable out1.tar
# mmdebstrap --variant=standard --hook-dir=/usr/share/mmdebstrap/hooks/merged-usr \
      unstable out2.tar
# cmp out1.tar out2.tar
# echo $?
0

Aigars Mahinovs: Ryzen 7000 amdgpu boot hang

16 October, 2022 - 21:15

So you decided to build a brand new system using all the latest and coolest tech, so you buy a Ryzen 7000 series Zen 4 CPU, like the Ryzen 7700X that I picked, with a new motherboard and DDR5 memory and all that jazz. But for now, you don't yet have a fitting GPU for that system (as the new ones will only come out in November), so you are booting a Debian system using the new built-in video card of the new CPUs (the Zen 4 generation has a simple AMD GPU built into every CPU now - great stuff for debugging and mostly-headless systems) and you get ... nothing on the screen. Hmm. You boot into the rescue mode and the kernel messages stop after:

Oct 16 13:31:25 home kernel: [    4.128328] amdgpu: Ignoring ACPI CRAT on non-APU system
Oct 16 13:31:25 home kernel: [    4.128329] amdgpu: Virtual CRAT table created for CPU
Oct 16 13:31:25 home kernel: [    4.128332] amdgpu: Topology: Add CPU node

That looks bad, right?

Well, if you either ssh into the machine or reboot with module_blacklist=amdgpu in the kernel command line you will find in /var/log/kern.log.1 those messages and also the following messages that will clarify the situation a bit:

Oct 16 13:31:25 home kernel: [    4.129352] amdgpu 0000:10:00.0: firmware: failed to load amdgpu/psp_13_0_5_toc.bin (-2)
Oct 16 13:31:25 home kernel: [    4.129354] firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
Oct 16 13:31:25 home kernel: [    4.129358] amdgpu 0000:10:00.0: firmware: failed to load amdgpu/psp_13_0_5_toc.bin (-2)
Oct 16 13:31:25 home kernel: [    4.129359] amdgpu 0000:10:00.0: Direct firmware load for amdgpu/psp_13_0_5_toc.bin failed with error -2
Oct 16 13:31:25 home kernel: [    4.129360] amdgpu 0000:10:00.0: amdgpu: fail to request/validate toc microcode
Oct 16 13:31:25 home kernel: [    4.129361] [drm:psp_sw_init [amdgpu]] *ERROR* Failed to load psp firmware!
Oct 16 13:31:25 home kernel: [    4.129432] [drm:amdgpu_device_init.cold [amdgpu]] *ERROR* sw_init of IP block <psp> failed -2
Oct 16 13:31:25 home kernel: [    4.129525] amdgpu 0000:10:00.0: amdgpu: amdgpu_device_ip_init failed
Oct 16 13:31:25 home kernel: [    4.129526] amdgpu 0000:10:00.0: amdgpu: Fatal error during GPU init
Oct 16 13:31:25 home kernel: [    4.129527] amdgpu 0000:10:00.0: amdgpu: amdgpu: finishing device.
Oct 16 13:31:25 home kernel: [    4.129633] amdgpu: probe of 0000:10:00.0 failed with error -2

So what you need is to get a new set of Linux Kernel Firmware blobs and unpack them in /lib/firmware. The tarball from 2022-10-12 worked well for me.

After that you also need to re-create the initramfs with update-initramfs -k all -c to include the new firmware. Having kernel version 5.18 or newer is also required for stable Zen 4 support. It might be that a fresh Mesa version is also of importance, but as I am running sid on this machine I can only say that Mesa 22.2.1 that is in sid works fine.

Vincent Fourmond: Tutorial: analysis of multiwavelength fast kinetics data

16 October, 2022 - 19:54
The purpose of this post is to demonstrate a first approach to the analysis of multiwavelength kinetic data, like those obtained in stopped-flow experiments. To practice, we will use data that were acquired during the stopped flow practicals of the MetBio summer school from the FrenchBIC. During the practicals, the student monitored the reaction of myoglobin (in its Fe(III) state) with azide, which yields a fast and strong change in the absorbance spectrum of the protein, which was monitored using a diode array. The data is publicly available on zenodo.

Aims of this tutorial The purpose of this tutorial is to teach you to use the free software QSoas to run a simple, multiwavelength exponential fit on the data, and to look at the results. This is not a kinetics lecture, so it will not go in depth into the use of the exponential fit and its meaning.

Getting started: loading the file First, make sure you have a working version of QSoas; you can download it (for free) there. Then download the data files from zenodo. We will work only on the data file Azide-1.25mm_001.dat, but of course, the purpose of this tutorial is to enable you to work on all of them. The data files contain the time evolution of the absorbance for all wavelengths, in a matrix format, in which each row corresponds to a time point and each column to a wavelength.

Start QSoas, and launch the command:
QSoas> load /comments='"'
Then, choose the Azide-1.25mm_001.dat data file. This should bring up a horizontal red line at the bottom of the data display, with X values between about 0 and 2.5. If you zoom on the red line with the mouse wheel, you'll realize it is data. The /comments='"' part is very important since it allows the extraction of the wavelength from the data. We will look at what it means another day. At this stage, you can look at the loaded data using the command:
QSoas> edit
You should have a window looking like this: The rows each correspond to a data point displayed on the window below. The first column corresponds to the X values, the second to the Y values, and all the other ones to extra Y columns (they are not displayed by default). What is especially interesting is the first row, which contains a nan as the X value and what is obviously the wavelength for all the Y values. To tell QSoas that it should take this line as the wavelength (which will be the perpendicular coordinate, the coordinate of the other direction of the matrix), first close the edit window and run:
QSoas> set-perp /from-row=0

Splitting and fitting Now, we have a single dataset containing a lot of Y columns. We want to fit all of them simultaneously with a (mono) exponential fit. For that, we first need to split the big matrix into a series of X,Y datasets (because fitting only works on the first Y). This is possible by running:
QSoas> expand /style=red-to-blue /flags=kinetics
Your screen should now look like this: You're looking at the kinetics at all wavelengths at the same time (this may take some time to display on your computer, it is after all a rather large number of data points). The /style=red-to-blue is not strictly necessary, but it gives the red to blue color gradient which makes things easier to look at (and cooler !). The /flags=kinetics is there to attach a label (a flag) to the newly created datasets so we can easily manipulate all of them at the same time. Then it's time to fit, with the following command:
QSoas> mfit-exponential-decay flagged:kinetics
This should bring up a new window. After resizing it, you should have something that looks like this: The bottom of the fit window is taken by the parameters, each with two checkboxes on the right to set them fixed (i.e. not determined by the fitting mechanism) and/or global (i.e. with a single value for all the datasets, here all the wavelengths). The top shows the current dataset along with the corresponding fit (in green), and, below, the residuals. You can change the dataset by clicking on the horizontal arrows or using Ctrl+PgUp or Ctrl+PgDown (keep holding it to scan fast). See the Z = 728.15 showing that QSoas has recognized that the currently displayed dataset corresponds to the wavelength 728.15. The equation fitted to the data is: $$y(x) = A_\infty + A_1 \times \exp -(x - x_0)/\tau_1$$ In this case, while the \(A_1\) and \(A_\infty\) parameters clearly depend on the wavelength, the time constant of evolution should be independent of wavelength (the process happens at a certain rate regardless of the wavelength we're analyzing), so that the \(\tau_1\) parameter should be common for all the datasets/wavelengths. Just click on the global checkbox at the right of the tau_1 parameter, make sure it is checked, and hit the Fit button...

The fit should not take long (less than a minute), and then you end up with the results of the fits: all the parameters. The best way to look at the non global parameters like \(A_1\) and \(A_\infty\) is to use the Show Parameters item from the Parameters menu. Using it and clicking on A_inf too should give you a display like this one: The A_inf parameter corresponds to the spectrum at infinite time (of azide-bound heme), while the A_1 parameter corresponds to the difference spectrum between the initial (azide-free) and final (azide-bound) states.
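
If you want to see the same global-parameter idea outside QSoas, here is a hedged numpy/scipy sketch (synthetic data, x_0 taken as 0, and scipy's generic least-squares solver rather than the QSoas fitting engine): a single tau is shared across all wavelengths while the amplitudes remain per-wavelength.

import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 2.5, 200)                    # time axis, arbitrary units
rng = np.random.default_rng(0)
A_inf = np.array([0.10, 0.60, 0.20])            # per-wavelength plateaus
A_1 = np.array([0.80, -0.30, 0.50])             # per-wavelength amplitudes
tau_true = 0.4                                  # single shared time constant
data = A_inf[:, None] + A_1[:, None] * np.exp(-t / tau_true)
data += 0.01 * rng.normal(size=data.shape)      # a little noise

def residuals(p, t, data):
    # p = [tau, A_inf..., A_1...]: tau is global, the amplitudes are not
    n = data.shape[0]
    tau, a_inf, a_1 = p[0], p[1:1 + n], p[1 + n:]
    model = a_inf[:, None] + a_1[:, None] * np.exp(-t / tau)
    return (model - data).ravel()

n = data.shape[0]
p0 = np.concatenate(([1.0], np.zeros(n), np.ones(n)))
fit = least_squares(residuals, p0, args=(t, data))
print("shared tau:", fit.x[0])                  # should come out close to 0.4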

Now that the fit is finished, you can save the parameters if you want to reload them in a later fit by using the Parameters/Save menu item, or export them in a form more suitable for plotting using Parameters/Export (although QSoas can also display the parameters saved using Save). This concludes this first approach to fitting the data. What you can do is
  • look at the dependence of the tau_1 parameter as a function of the azide concentration;
  • try fitting more than one exponential, using for instance:
    QSoas> mfit-exponential-decay /exponentials=2 flagged:kinetics
    

How to read the code above All the lines starting with QSoas> in the code areas above are meant to be typed into the QSoas command line (at the bottom of the window), and run by pressing enter at the end. You must remove the QSoas> bit. The other lines (when applicable) show you the response of QSoas, in the terminal just above the command line. You may want to play with the QSoas tutorial to learn more about how to interact with QSoas.

About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.1. You can freely (and at no cost) download its source code or precompiled versions for MacOS and Windows there. Alternatively, you can clone from the GitHub repository.
Contact: find my email address there, or contact me on LinkedIn.

kpcyrd: updlockfiles: Manage lockfiles in PKGBUILDs for upstreams that don't ship them

16 October, 2022 - 07:00

I’ve released a new tool to manage lockfiles for Arch Linux packages that can’t use a lockfile from the official upstream release. It integrates closely with other Arch Linux tooling like updpkgsums that’s already used to pin the content of build inputs in PKGBUILD.

To use this, the downstream lockfile becomes an additional source input in the source= array of our PKGBUILD (this is already the case for some packages).

source=("git+https://github.com/vimeo/psalm.git#commit=${_commit}"
        "composer.lock")

You would then add a new function named updlockfiles that generates new lockfiles and copies them into $outdir, and a prepare function to copy the lockfile into the right place:

prepare() {
  cd ${pkgname}
  cp ../composer.lock .
}

updlockfiles() {
  cd ${pkgname}
  rm -f composer.lock
  composer update
  cp composer.lock "${outdir}/"
}

To update the package to the latest (compatible) patch level simply run:

updlockfiles

This can also be used in case the upstream lockfile has vulnerable dependencies that you want to patch downstream. For more detailed instructions see the readme.

Thanks

This work is currently crowd-funded on github sponsors. I’d like to thank @SantiagoTorres, @repi and @rgacogne for their support in particular. ♥️

Jonathan Dowland: podman generate

15 October, 2022 - 17:04

I've been working with and on container technology for seven years, but I still learn new things every day. Recently I read the excellent LWN article Docker and the OCI container ecosystem and this was news to me:

Running the docker CLI under a process supervisor only results in supervising the CLI process. This has a variety of consequences for users of these tools. For example, any attempt to limit a container's memory usage by running the CLI as a systemd service will fail; the limits will only apply to the CLI and its non-existent children. In addition, attempts to terminate a client process may not result in terminating all of the processes in the container.

Huh — of course! I hadn't really thought about that. I run a small number of containers on my home system via docker (since I was using it at work) and systemd, and sometimes I have weird consistency issues. This explains them.

Later:

As a result, Podman plays nicely with tools like systemd; using podman run with a process supervisor works as expected, because the processes inside the container are children of podman run. The developers of Podman encourage people to use it in this way by a command to generate systemd units for Podman containers.

Given the above, it seemed like a good idea to migrate my few local containers over to Podman. This was easy. The first part is copying the images from Docker's storage to Podman's. To do this, I used the skopeo tool:

sudo skopeo copy {docker-daemon,containers-storage}:octoprint/octoprint:latest

(I want to launch these as a superuser, rather than use root-less containers, to isolate the containers from each other: rootless ones are forced to share a common namespace.)

Once the images were copied over, I needed to start up a container instance via Podman. For most of them, running under Docker, I had volume-mounted the important bits for persistent data. I can do the same with Podman:

# podman run -d --rm \
    --name octoprint \
    -p 8080:80 \
    -v /archive/octoprint:/octoprint \
    --device /dev/ttyACM0:/dev/ttyACM0 \
    octoprint/octoprint

Once an instance was running, I can use podman generate to create a Systemd unit file which describes creating an equivalent container:

# cd /etc/systemd/system
# podman generate systemd octoprint \
    --new \
    --name \
    --files

For some of the containers I was running, there are a few more steps required: migrating docker volumes and configuring equivalents for private docker networks. But Podman's versions of those concepts are largely identical. Running a mixture of Podman and Docker containers side-by-side also meant renumbering some private addresses for migrated hosts whilst the original network was still up.

Steinar H. Gunderson: Firsts

15 October, 2022 - 00:15

Two nice “firsts” for me in tech this month:

  • First Chromium feature I'm leading (previous work has mostly been about optimizations): CSS Nesting. (Note that the CSSWG is still ironing out the exact spec.)
  • First patch to a Rust project: Function context for Difftastic. (Not yet processed by upstream.)

Shirish Agarwal: Dowry, Racism, Railways

15 October, 2022 - 00:03
Dowry

A few days back, I had posted about the movie Raksha Bandhan and whatever I felt about it. Sadly, just a couple of days back, somebody shared this link. Part of me was shocked and part of me was not. A couple of acquaintances of mine in the past had said the same thing for their daughters. And in such situations you are generally left speechless because you don’t know what the right thing to do is. If he has shared it with you, an outsider, how many times must he have told the same to his wife and daughters? And from what little I have gathered in life, many people have justified it on similar lines. And while there were protests, sadly the book was not removed. Now if nurses are reading such literature, you can tell how their thought processes might be forming :(. And these are the ones whom we call for when we are sick and tired :(. And I have not taken into account how the girls/women themselves might be feeling. There are similar things in another country, but probably not the same, nor with the same motivations, although the feeling of helplessness in both would be a common thing.

But such statements are not alone. Another gentleman, in a slightly different context, shared this as well –

The above is a statement shared in a book recommended for the CTET (Central Teacher’s Eligibility Test, which became mandatory after the RTE (Right To Education) Act came in). The statement says “People from cold places are white, beautiful, well-built, healthy and wise. And people from hot places are black, irritable and of violent nature.”

Now while I can agree with one part of the statement, that people residing in colder regions are fairer than others, there are loads of other factors that determine fairness or skin colour/skin pigmentation. After a bit of search I came to know that this and similar articulations have been made in an idea/work called ‘Environmental Determinism’. Now if you look at that page, you would realize this is what colonialism was all about: the idea that the white man had a god-given right to rule over others. Similarly, if you are fair, you can lord over others. It seems simplistic, yet it has a powerful hold on many people in India. Forget the common man, this thinking is and was applicable to some of our better-known freedom fighters. Take Pune’s own Bal Gangadhar Tilak – The Arctic Home in the Vedas. It sort of talks about Aryans and how they invaded India and became settled here. I haven’t read or had access to the book, so I have to rely on third-party sources. The reason I’m sharing all this is that the right-wing has been doing this myth-making for some time now, and unless and until you shine a light on it, it will continue to perpetuate itself. For those who have read this blog, do know that India is and has been steeped in casteism forever. They even took the fairness idea and applied it to all Brahmins. According to them, all Brahmins are fair and hence have a god-given right to lord over others. What is called the Eton boys’ network serves the same purpose in this casteism. The only solution is to put those ideas under the limelight and investigate them. To take the above, how does one prove that all fair people are wise and peaceful while all black and brown people are violent? If that is so, how does one account for Mahatma Gandhi, Martin Luther King Jr., Nelson Mandela, Michael Jackson – the list is probably endless. And not to forget that when Mahatma Gandhiji did his nonviolent movements, either in India or in South Africa, both black and brown people took part in the millions. There are similar examples from Martin Luther King Jr. I know of and have read of so many non-violent civil movements that took place in the U.S., for example Rosa Parks and the Montgomery Bus Boycott. So just based on these examples, one can conclude that at least the part about the fair having exclusive rights to being wise and noble is not correct.

Now as far as violence goes, every race and every community has done violence in the past or been a victim of it. So no one is or can be blameless, although in light of the above statement, the question can be argued as to who the Vikings were. Both popular imagination and serious history share stories about the Vikings. The Vikings were somewhat nomadic in nature even though they had permanent settlements, but even then they went on raids, raped women, captured both men and women and sold them as slaves. So they are what pirates came to be, but not the kind Hollywood romanticizes about. Europe in itself has been a tale of conflict since time immemorial. It is only after the formation of the EU that most of these countries stopped fighting each other. From a historical perspective, that is far too recent. So even the part about the fair being non-violent dies in the face of this evidence. I could go on but this is enough on that topic.

Railways and Industrial Action around the World.

While I have shared about Railways so many times on this blog, it continues to fascinate me how people don’t understand the first things about Railways. For example, Railways is a natural monopoly. What that means is that you can look at any type of railway privatization around the world and you will see it is a monopoly. Unlike the roads or the skies, Railways is and will always be limited by infrastructure and the ability to build new infrastructure. Unlike on roads or in the skies (even they have their limits), you cannot run train services on a whim. At any particular point in time, only a single train could and should occupy a stretch of the railway network. You could have more trains on one line, but then front or rear-end collisions become a real possibility. You also need all sorts of good and reliable communications and redundant infrastructure, so that if one thing fails you have something else in place. The reason is that a single train can carry anywhere from 2000 to 5000 passengers or more. While this is true of Indian Railways, railways around the world would probably have similar numbers. It is in this light that I share the videos below.

To be more precise, see the fuller video –

Now to give context to the recording above, Mike Lynch is the general secretary at the RMT. For those who came in late, both the UK and the U.S. have been threatened by railway strikes. And the reason for the strikes or threat of strikes is similar. From the company perspective, all they care about is investing less and making the most profit that can be given to equity shareholders. At the same time, they have frozen the salaries of railway workers for the last 3 years, while the politicians who were asking the questions apparently gave themselves a raise twice this year. They are asking the workers to negotiate at 8% while inflation in the UK has been 12.3% and is projected to go higher. And it is not only the money. Since the 1980s, when the UK privatized the railways, they stopped investing in the infrastructure. That meant that the UK railway infrastructure over a period of time started falling behind, and is even behind, say, Indian Railways, which used to provide the most bang for the buck. And Indian Railways is far from ideal. Ironically, most of the operators in the UK are the nationalized railways of France, Germany etc., but after the hard Brexit they too are mulling cutting their operations short, and some already have. There is also the EU Entry/Exit system that would come next year.

Why am I sharing what is happening in UK rail? Because the Indian Government wants to follow the same thing, fooling the public by saying we would do it better. What inevitably will happen is that ticket prices go up, people no longer use the service, the number of services goes down and eventually they are cancelled. This has happened in both Indian Railways and the airlines. In fact, the GOI just announced a credit scheme a few days back to help airlines stay afloat. I was chatting with a friend who had come down to Pune from Chennai and the round trip cost him INR 15k/- on that single trip alone. We reminisced how a few years ago, 8 years to be precise, we could buy an air ticket for 2.5k/- just a few days before the trip, and did so. I remember doing/experiencing at least a dozen odd trips via air in the years before 2014. My friend used to come to Pune almost every weekend because he could afford it; now he can’t do that. And these are people who are in the top 5-10% of the population. And this is not just in the UK, but also in the United States. There is one big difference though: the U.S. is mainly a freight carrier while UK railway operations are mostly passenger based. What was and is interesting is that Scotland had to nationalize its services as it realized the operators cannot or will not function when they are most needed. Most of the public even in the UK seem to want a nationalized rail service, at least their polls say so. So, it would definitely be interesting to see what happens in the UK next year.

In the end, I know I promised to share about books, but the above incidents have just been too fascinating not to share, along with what I think about them. Free markets function well where there is competition, for example what is and has been happening in China with EVs, but not where you have natural monopolies. In any railway privatization, you have to hand over the area to one operator, and then they have no motivation. If you have multiple operators, then there would always be haggling as to who will run the train and at what time. In either scenario, it doesn’t work, and it raises prices while not delivering anything better.

I do take examples from the UK because a lot of things in India are still a legacy of the British. The whole civil department that was created in 1953 was a copy of the British civil department of that time, and it remains so to this day.

P.S. – Just came to know that Kwasi Kwarteng was sacked as UK Chancellor. I do commend Truss for facing the press, even though she might be dumped a week later, unlike our PM who hasn’t held a single press conference in the last 8-odd years.

https://www.youtube.com/watch?v=oTP6ogBqU7of

The difference between Indian and UK politics seems to be that the English are now asking questions, while here in India most people are still sleeping without a care in the world.

Another thing to note: MiniDebConf Palakkad is gonna happen on 12-13 November 2022. I am probably not gonna go, but I would request everyone who wants to do something in free software to attend it. I am not sure whether I would be of any use like this, and also when I get back, it would be an empty house. But for people young and old who want to do anything with free/open source software, it is a chance not to be missed. Registration closes on 1st November 2022. All the best, and break a leg!

Simon Josefsson: On language bindings & Relaunching Guile-GnuTLS

14 October, 2022 - 20:58

The Guile bindings for GnuTLS have been part of GnuTLS since spring 2007, when Ludovic Courtès contributed them after some initial discussion. I have been looking into getting back to do GnuTLS coding, and during a recent GnuTLS meeting one topic was Guile bindings. It seemed like a fairly self-contained project to pick up on. It is interesting to re-read the old thread when this work was included: some of the concerns brought up there now have a track record to be evaluated on. My opinion is that the cost of introducing a new project per language binding today is smaller than the cost of maintaining language bindings as part of the core project. I believe the cost/benefit ratio has changed during the past 15 years: introducing a new project used to come with a significant cost but this is no longer the case, as tooling and processes for packaging have improved. I have had similar experience with Java, C# and Emacs Lisp bindings for GNU Libidn as well, where maintaining them centrally slowed down the pace of updates. Andreas Metzler pointed to a similar conclusion reached by Russ Allbery.

There are many ways to separate a project into two projects; just copying the files into a new git repository would have been the simplest and was my original plan. However Ludo’ mentioned git-filter-branch in an email, and the idea of keeping all git history for some of the relevant files seemed worth pursuing to me. I quickly found git-filter-repo which appears to be the recommended approach, and experimenting with it I found a way to filter out the GnuTLS repo into a small git repository that Guile-GnuTLS could be based on. The commands I used were the following, if you want to reproduce things.

$ git clone https://gitlab.com/gnutls/gnutls.git guile-gnutls
$ cd guile-gnutls/
$ git checkout f5dcbdb46df52458e3756193c2a23bf558a3ecfd
$ git-filter-repo --path guile/ --path m4/guile.m4 --path doc/gnutls-guile.texi --path doc/extract-guile-c-doc.scm --path doc/cha-copying.texi --path doc/fdl-1.3.texi

I debated with myself back and forth whether to include some files that would be named the same in the new repository but would share few to no similar lines, for example configure.ac and Makefile.am, not to mention README and NEWS. Initially I thought it would be nice to preserve the history for all lines that went into the new project, but this is a subjective judgement call. What brought me over to a more minimal approach was that the contributor history and attribution would be quite strange for the new repository: Should Guile-GnuTLS attribute the work of the thousands of commits to configure.ac which had nothing to do with Guile? Should the people who wrote that be mentioned as contributors of Guile-GnuTLS? I think not.

The next step was to get a reasonable GitLab CI/CD pipeline up, to make sure the project builds on some free GNU/Linux distributions like Trisquel and PureOS as well as the usual non-free distributions like Debian and Fedora to have coverage of dpkg and rpm based distributions. I included builds on Alpine and ArchLinux as well, because they tend to trigger other portability issues. I wish there were GNU Guix docker images available for easy testing on that platform as well. The GitLab CI/CD rules for a project like this are fairly simple.

To get things out of the door, I tagged the result as v3.7.9 and published a GitLab release page for Guile-GnuTLS that includes OpenPGP-signed source tarballs manually uploaded built on my laptop. The URLs for these tarballs are not very pleasant to work with, and discovering new releases automatically appears unreliable, but I don’t know of a better approach.

To finish this project, I have proposed a GnuTLS merge request to remove all Guile-related parts from the GnuTLS core.

Doing some GnuTLS-related work again felt nice, it was quite some time ago so thank you for giving me this opportunity. Thoughts or comments? Happy hacking!

Gunnar Wolf: Learning some Rust with Lars!

14 October, 2022 - 12:11

A couple of weeks ago, I read a blog post by former Debian Developer Lars Wirzenius offering a free basic (6hr) course on the Rust language to interested free software and open source software programmers.

I know Lars offers training courses in programming, and besides knowing him for ~20 years and being proud to consider us to be friends, have worked with him in a couple of projects (i.e. he is upstream for vmdb2, which I maintain in Debian and use for generating the Raspberry Pi Debian images) — He is a talented programmer, and a fun guy to be around.

I was admitted to the first cohort of students of this course (please note I’m not committing him to run free courses ever again! He has said he would consider doing so, especially to offer a different time better suited for people in Asia).

I have wanted to learn some Rust for quite some time. About a year ago, I bought a copy of The Rust Programming Language, the canonical book for learning the language, and started reading it… But lacked motivation and lost steam halfway through, and without having done even a simple real project beyond the simple book exercises.

How has this been? I have enjoyed the course. I must admit I did expect it to be more hands-on from the beginning, but Rust is such a large language and it introduces so many new, surprising concepts. Session two did have two somewhat simple hands-on challenges; saying they were somewhat simple does not mean we didn’t have to sweat to get them to compile and work correctly!

I know we will finish this Saturday, and I’ll still be a complete newbie to Rust. I know the only real way to wrap my head around a language is to actually have a project that uses it… And I have some ideas in mind. However, I don’t really feel confident to approach an already existing project and start meddling with it, trying to contribute.

What does Rust have that makes it so different? Bufff… Variable ownership (borrow checking) and values’ lifetimes are the most obvious salient idea, but they are relatively simple, as you just cannot forget about them. But understanding (and adopting) idiomatic constructs such as the pervasive use of enums, understanding that errors always have to be catered for by using expect() and Result<T,E>… It will take some time to be at ease developing in it, if I ever reach that stage!

Oh, FWIW — interesting related reading. I am halfway through an interesting article, published in March in the Communications of the ACM magazine, titled «Here We Go Again: Why Is It Difficult for Developers to Learn Another Programming Language?», that presents an interesting point we don’t always consider: If I’m a proficient programmer in the X programming language and want to use the Y programming language, learning it… Should be easier for me than for the casual bystander, or not? After all, I already have a background in programming! But it happens that mental constructs we build for a given language might hamper our learning of a very different one. This article presents three interesting research questions:

  1. Does cross-language interference occur?
  2. How do experienced programmers learn new languages?
  3. What do experienced programmers find confusing in new languages?

I’m far from reaching the conclusions, but so far, it’s been a most interesting read.

Anyway, to wrap up — Thanks Lars! I am learning (although at a pace that is not magically quick… But I am aware of the steep learning curve of the language) quite a bit of a very interesting topic, and I’m also enjoying the time I spend in front of my computer on Saturday.

Dirk Eddelbuettel: GitHub Streak: Round Nine

13 October, 2022 - 07:59

Eight years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking 366 days:

github activity october 2013 to october 2014

And seven years ago a first follow-up appeared in this post about 731 days:

github activity october 2014 to october 2015

And six years ago we had a followup at 1096 days

github activity october 2015 to october 2016

And five years ago we had another one marking 1461 days

github activity october 2016 to october 2017

And four years ago another one for 1826 days

github activity october 2017 to october 2018

And three years ago another one bringing it to 2191 days

github activity october 2018 to october 2019

And two years ago another one bringing it to 2557 days

github activity october 2019 to october 2020

And last year another one bringing it to 2922 days

github activity october 2020 to october 2021

And as today is October 12, here is the newest one, from 2021 to 2022, bringing it to 3287 days:

github activity october 2021 to october 2022

As always, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Ben Hutchings: Debian LTS work, August-September 2022

11 October, 2022 - 21:33

I have continued to work for Freexian on Debian LTS. In August I carried over 21 hours from July, and worked 13 hours. In September I was assigned an additional 17 hours, and worked 16 hours. I will carry over 9 hours into October.

In August, Debian 10 "buster" entered LTS status. I spent some time on the backport of Linux 5.10 for buster. While this previously existed in buster-backports, further changes were required to add it as an alternative kernel version in buster-security, particularly around code signing. When that was complete, I issued DLA-3102-1.

I also prepared and uploaded an update to the linux (4.19) package. I issued DLA-3131-1 for these changes.

Ian Jackson: Skipping releases when upgrading Debian systems

10 October, 2022 - 22:48

Debian does not officially support upgrading from earlier than the previous stable release: you’re not supposed to “skip” releases. Instead, you’re supposed to upgrade to each intervening major release in turn.

However, skipping intervening releases does, in fact, often work quite well. Apparently, this is surprising to many people, even Debian insiders. I was encouraged to write about it some more.

My personal experience

I have three conventionally-managed personal server systems (by which I mean systems which aren’t reprovisioned by some kind of automation). Of these at least two have been skip upgraded at least once:

The one I don’t think I’ve skip-upgraded (at least, not recently) is my house network manager (and now VM host), which I try to keep to a minimum in terms of functionality and which I keep quite up to date. It was crossgraded from i386 (32-bit) to amd64 (64-bit) fairly recently, which is a thing that Debian isn’t sure it supports. The crossgrade was done in a hurry and without any planning, prompted by Spectre et al. suddenly requiring big changes to Xen. But it went well enough.

My home “does random stuff” server (media server, web cache, printing, DNS, backups, etc.) has etckeeper records starting in 2015. I upgraded it directly from jessie (Debian 8) to buster (Debian 10). I think it has probably had earlier skip upgrade(s): the oldest file in /etc is from December 1996, and I have been doing occasional skip upgrades for as long as I can remember.

And of course there’s chiark, which is one of the oldest Debian installs in existence. I wrote about the most recent upgrade, where I went directly from jessie i386 ELTS (32-bit Debian 8) to bullseye amd64 (64-bit Debian 11). That was a very extreme case which required significant planning and pre-testing, since the package dependencies were in no way sufficient for the proper ordering. But I don’t normally go to such lengths. Normally, even on chiark, I just edit the sources.list and see what apt proposes to do.

I often skip upgrade chiark because I tend to defer risky-looking upgrades partly in the hope of others fixing the bugs while I wait :-), and partly just because change is disruptive and amortising it is very helpful both to me and my users. I have some records of chiark’s upgrades from my announcements to users. As well as the recent “skip skip up cross grade, direct”, I definitely did a skip upgrade of chiark from squeeze (Debian 6) to jessie (Debian 8). It appears that the previous skip upgrade on chiark was rex (Debian 1.2) to hamm (Debian 2.0).

I don’t think it’s usual for me to choose to do a multi-release upgrade the “officially supported” way, in two (or more) stages, on a server. I have done that on systems with a GUI desktop setup, but even then I usually skip the intermediate reboot(s).

When to skip upgrade (and what precautions to take)

I’m certainly not saying that everyone ought to be doing this routinely. Most users with a Debian install that is older than oldstable probably ought to reinstall it, or do the two-stage upgrade.

Skip upgrading almost always runs into some kind of trouble (albeit, usually trouble that isn’t particularly hard to fix if you know what you’re doing).

However, officially supported non-skip upgrades go wrong too. Doing a two-or-more-releases upgrade via the intermediate releases can expose you to significant bugs in the intermediate releases, which were later fixed. Because Debian’s users and downstreams are cautious, and Debian itself can be slow, it is common for bugs to appear for one release and then be fixed only in the next. Paradoxically, this seems to be especially true with the kind of big and scary changes where you’d naively think the upgrade mechanisms would break if you skipped the release where the change first came in.

I would not recommend a skip upgrade to someone who is not a competent Debian administrator, with good familiarity with Debian package management, including use of dpkg directly to fix things up. You should have a mental toolkit of manual bug workaround techniques. I always make sure that I have rescue media (and, in the case of a remote system, full remote access including the ability to boot a different image), although I don’t often need it.

And, when considering a skip upgrade, you should be aware of the major changes that have occurred in Debian.

Skip upgrading is more likely to be a good idea with a complex and highly customised system: a fairly vanilla install is not likely to encounter problems during a two-stage update. (And, a vanilla system can be “upgraded” by reinstalling.)

I haven’t recently skip upgraded a laptop or workstation. I doubt I would attempt it; modern desktop software seems to take a much harder line about breaking things that are officially unsupported, and generally trying to force everyone into the preferred mold.

A request to Debian maintainers

I would like to encourage Debian maintainers to defer removing upgrade compatibility machinery until it is actually getting in the way, or has become hazardous, or many years obsolete.

Examples of the kinds of things which it would be nice to keep, and which do not usually cause much inconvenience to retain, are dependency declarations (particularly, alternatives), and (many) transitional fragments in maintainer scripts.

If you find yourself needing to either delete some compatibility feature, or refactor/reorganise it, I think it is probably best to delete it. If you modify it significantly, the resulting thing (which won’t be tested until someone uses it in anger) is quite likely to have bugs which make it go wrong more badly (or, more confusingly) than the breakage that would happen without it.

Obviously this is all a judgement call.

I’m not saying Debian should formally “support” skip upgrades, to the extent of (further) slowing down important improvements. Nor am I asking for any change to the routine approach to (for example) transitional packages (i.e. the technique for ensuring continuity of installation when a package name changes).

We try to make release upgrades work perfectly; but skip upgrades don’t have to work perfectly to be valuable. Retaining compatibility code can also make it easier to provide official backports, and it probably helps downstreams with different release schedules.

The fact that maintainers do in practice often defer removing compatibility code provides useful flexibility and options to at least some people. So it would be nice if you’d at least not go out of your way to break it.

Building on older releases

I would also like to encourage maintainers to provide source packages in Debian unstable that will still build on older releases, where this isn’t too hard and the resulting binaries might be basically functional.

Speaking personally, it’s not uncommon for me to rebuild packages from unstable and install them on much older releases. This is another thing that is not officially supported, but which often works well.

I’m not saying to contort your build system, or delay progress. You’ll definitely want to depend on a recentish debhelper. But, for example, retaining old build-dependency alternatives is nice. Retaining them doesn’t constitute a promise that it works - it just makes life slightly easier for someone who is going off piste.

If you know you have users on multiple distros or multiple releases, and wish to fully support them, you can go further, of course. Many of my own packages are directly buildable, or even directly installable, on older releases.




Joachim Breitner: rec-def: Minesweeper case study

10 October, 2022 - 15:22

I’m on the train back from MuniHac, where I gave a talk about the rec-def library that I have excessively blogged about recently (here, here, here and here). I got quite flattering comments about that talk, so if you want to see if they were sincere, I suggest you watch the recording of “Getting recursive definitions off their bottoms” (but it’s not necessary for the following).

After the talk, Franz Thoma approached me and told me a story of how he was once implementing the game Minesweeper in Haskell, and in particular the part of the logic where, after the user has uncovered a field, the game would automatically uncover all fields that are next to a “neutral” field, i.e. one with zero adjacent bombs. He was using a comonadic data structure, which makes a “context-dependent parallel computation” such as uncovering one field quite natural, and was hoping that, using a suitable fix-point operator, he could elegantly obtain not just the next step, but directly the result of recursively uncovering all these fields. But, much to his disappointment, that did not work out: due to the recursion inherent in that definition, a knot-tying fixed-point operator will lead to a cyclic definition.

Microsoft Minesweeper

He was wondering if the rec-def library could have helped him, and we sat down to find out, and this is the tale of this blog post. I will avoid the comonadic abstractions and program it more naively, though, to not lose too many readers along the way. Maybe read Chris Penner’s blog post and Finch’s functional pearl “Getting a Quick Fix on Comonads” if you are curious about that angle.

Minesweeper setup

Let’s start with defining a suitable data type for the grid of the minesweeper board. I’ll use the Array data type; its Ix-based indexing is quite useful for grids:

import Data.Array

type C = (Int,Int)
type Grid a = Array C a

The library lacks a function to generate an array from a generating function, but it is easy to add:

genArray :: Ix i => (i,i) -> (i -> e) -> Array i e
genArray r f = listArray r $ map f $ range r

Let’s also fix the size of the board, as a pair of lower and upper bounds (this is the format that the Ix type class needs):

size :: (C,C)
size = ((0,0), (3,3))

Now board is simply a grid of boolean values, with True indicating that a bomb is there:

type Board = Grid Bool

board1 :: Board
board1 = listArray size
  [ False, False, False, False
  , True,  False, False, False
  , True,  False, False, False
  , False, False, False, False
  ]

It would be nice to be able to see these boards in a nicer way. So let us write a function that prints a grid, including a frame, given a function that prints something for each coordinate. Together with a function that prints a bomb (as *), we can print the board:

pGrid :: (C -> String) -> String
pGrid p = unlines
    [ concat [ p' (y,x) | x <- [lx-1 .. ux+1] ]
    | y  <- [ly-1 .. uy+1] ]
  where
    ((lx,ly),(ux,uy)) = size

    p' c | inRange size c = p c
    p'  _ = "#"

pBombs :: Board -> String
pBombs b = pGrid $ \c -> if b ! c then "*" else " "

The expression b ! c looks up the coordinate in the array, and is True when there is a bomb at that coordinate.

So here is our board, with two bombs:

ghci> putStrLn $ pBombs board1
######
#    #
#*   #
#*   #
#    #
######

But that’s not what we want to show to the user: every field should have a number that indicates the number of bombs in the surrounding fields. To that end, we first define a function that takes a coordinate and returns all adjacent coordinates. This also takes care of the border, using inRange:

neighbors :: C -> [C]
neighbors (x,y) =
    [ c
    | (dx, dy) <- range ((-1,-1), (1,1))
    , (dx, dy) /= (0,0)
    , let c = (x + dx, y + dy)
    , inRange size c
    ]

With that, we can calculate what to display in each cell – a bomb, or a number:

data H = Bomb | Hint Int deriving Eq

hint :: Board -> C -> H
hint b c | b ! c = Bomb
hint b c = Hint $ sum [ 1 | c' <- neighbors c, b ! c' ]

With a suitable printing function, we can now see the full board:

pCell :: Board -> C -> String
pCell b c = case hint b c of
    Bomb -> "*"
    Hint 0 -> " "
    Hint n -> show n

pBoard :: Board -> String
pBoard b = pGrid (pCell b)

And here it is:

ghci> putStrLn $ pBoard board1
######
#11  #
#*2  #
#*2  #
#11  #
######

Next we have to add masks: We need to keep track of which fields the user already sees. We again use a grid of booleans, and define a function to print a board with the masked fields hidden behind ?:

type Mask = Grid Bool

mask1 :: Mask
mask1 = listArray size
  [ True,  True,  True,  False
  , False, False, False, False
  , False, False, False, False
  , False, False, False, False
  ]

pMasked :: Board -> Mask -> String
pMasked b m = pGrid $ \c -> if m ! c then pCell b c else "?"

So this is what the user would see:

ghci> putStrLn $ pMasked board1 mask1
######
#11 ?#
#????#
#????#
#????#
######

Uncovering some fields

With that setup in place, we now implement the piece of logic we care about: Uncovering all fields that are next to a neutral field. Here is the first attempt:

solve0 :: Board -> Mask -> Mask
solve0 b m0 = m1
  where
    m1 :: Mask
    m1 = genArray size $ \c ->
      m0 ! c || or [ m0 ! c' | c' <- neighbors c, hint b c' == Hint 0 ]

The idea is that we calculate the new mask m1 from the old one m0 by the following logic: A field is visible if it was visible before (m0 ! c), or if any of its neighboring, neutral fields are visible.

This works so far: I uncovered the three fields next to the one neutral visible field:

ghci> putStrLn $ pMasked board1 $ solve0 board1 mask1
######
#11  #
#?2  #
#????#
#????#
######

But that’s not quite what we want: We want to keep doing that to uncover all fields.

Uncovering all fields

So what happens if we change the logic to: A field is visible if it was visible before (m0 ! c), or if any of its neighboring, neutral fields will be visible.

In the code, this is just a single character change: Instead of looking at m0 to see if a neighbor is visible, we look at m1:

solve1 :: Board -> Mask -> Mask
solve1 b m0 = m1
  where
    m1 :: Mask
    m1 = genArray size $ \c ->
      m0 ! c || or [ m1 ! c' | c' <- neighbors c, hint b c' == Hint 0 ]

(This is roughly what happened when Franz started to use the kfix comonadic fixed-point operator in his code, I believe.)

Does it work? It seems so:

ghci> putStrLn $ pMasked board1 $ solve1 board1 mask1
######
#11  #
#?2  #
#?2  #
#?1  #
######

Amazing, isn’t it!

Unfortunately, it seems to work by accident. If I start with a different mask:

mask2 :: Mask
mask2 = listArray size
  [ True,  True,  False, False
  , False, False, False, False
  , False, False, False, False
  , False, False, False, True
  ]

which looks as follows:

ghci> putStrLn $ pMasked board1 mask2
######
#11??#
#????#
#????#
#??? #
######

Then our solve1 function does not work, and just sits there:

ghci> putStrLn $ pMasked board1 $ solve1 board1 mask2
######
#11^CInterrupted.

Why did it work before, but not now?

It fails to work because, as the code tries to figure out whether a field will be uncovered, it needs to know whether the next field will be uncovered. But to figure that out, it needs to know whether the present field will be uncovered. With the normal boolean connectives (|| and or), this does not make progress.

It worked with mask1 more or less by accident: none of the fields in the first column have neutral neighbors, so nothing happens there. And for all the fields in the third and fourth columns, the code knows for sure that they will be uncovered based on their upper neighbors, which come first in the neighbors list; due to the short-circuiting of ||, it does not have to look at the later cells, and the vicious cycle is avoided.
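
To see this short-circuiting effect in isolation, here is a minimal sketch in plain Haskell, detached from the Minesweeper code (the names shortCircuits and viciousCycle are made up for illustration), of when such a knot-tied boolean definition terminates and when it does not:

-- The left operand alone decides the result, so the self-reference is
-- never evaluated and the definition terminates:
shortCircuits :: Bool
shortCircuits = let x = True || x in x    -- evaluates to True

-- Here (||) has to evaluate the self-reference to decide, so evaluation
-- chases its own tail; GHC typically reports <<loop>> or just hangs:
viciousCycle :: Bool
viciousCycle = let x = False || x in x

With mask2, the still-covered fields near the top right depend on each other in the second way, with no already-visible neutral neighbor to short-circuit on, and so solve1 loops.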

rec-def to the rescue

This is where rec-def comes in: By using the RBool type in m1 instead of plain Bool, the recursive self-reference is not a problem, and it simply works:

import qualified Data.Recursive.Bool as RB

solve2 :: Board -> Mask -> Mask
solve2 b m0 = fmap RB.get m1
  where
    m1 :: Grid RB.RBool
    m1 = genArray size $ \c ->
      RB.mk (m0 ! c) RB.|| RB.or [ m1 ! c' | c' <- neighbors c, hint b c' == Hint 0 ]

Note that I did not change the algorithm, or the self-reference through m1; I just replaced Bool with RBool, || with RB.|| and or with RB.or. And used RB.get at the end to get a normal boolean out. And 🥁, here we go:

ghci> putStrLn $ pMasked board1 $ solve2 board1 mask2
######
#11  #
#?2  #
#?2  #
#?1  #
######
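
Stripped of the board, the same trick can be shown on the smallest possible cycle. This is only a sketch that reuses the RB.mk, RB.get and RB.|| names from the solve2 code above (the name tiniestKnot is made up here), assuming they behave as shown there:

import qualified Data.Recursive.Bool as RB

-- Two booleans defined in terms of each other. With plain Bool and (||)
-- this would be the vicious cycle from above; with RBool, the least
-- solution of the two equations is computed.
tiniestKnot :: Bool
tiniestKnot = RB.get x
  where
    x = RB.mk False RB.|| y
    y = RB.mk False RB.|| x

Evaluating tiniestKnot yields False, the least fixed point of the two equations.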

That’s the end of this installment of “let’s look at a tying-the-knot problem and see how rec-def helps”, which always ends up a bit anti-climactic because it “just works”, at least in these cases. Hope you enjoyed it nevertheless.

Jonathan Dowland: Focus writing with (despite) LaTeX

10 October, 2022 - 02:54

LaTeX — the age-old typesetting system — makes me angry. Not because it's bad. To clarify, not because there's something better. But because there should be.

When writing a document using LaTeX, if you are prone to procrastination it can be very difficult to focus on the task at hand, because there are so many yaks to shave. Here's a few points of advice.

  • format the document source for legible reading. Yes, it's the input to the typesetter, and yes, the output of the typesetter needs to be legible. But it's worth making the input easy to read, too. Because…

  • avoid rebuilding your rendered document too often. It's slow, it takes you out of the activity of writing, and it throws up lots of opportunities to get distracted by some rendering nit that you didn't realise would happen.

  • Unless you are very good at manoeuvring around long documents, liberally split them up. I think it's fine to have sections in their own source files.

  • Machine-assisted moving around documents is good. If you use (neo)vim, you can tweak exuberant-ctags to generate more useful tags for LaTeX documents than what you get OOTB, including jumping to \label{}s and the BibTeX source of \cite{}s. See this stackoverflow post.

  • If you use syntax highlighting in your editor, take a long, hard look at what it's drawing attention to. It's not your text, that's for sure. Is it worth having it on? Consider turning it off. Or (yak shaving beware!) tweak it to de-emphasise things, instead of emphasising them. One small example for (neo)vim, to change tokens recognised as being "todo" to match the styling used for comments (which is normally de-emphasised):

    hi def link texTodo Comment
    

In a nutshell, I think it's wise to move much document reviewing work back into the editor rather than the rendered document, at least in the early stages of a section. And to do that, you need the document to be as legible as possible in the editor. The important stuff is the text you write, not the TeX macros you've sprinkled around to format it.

A few tips I benefit from in terms of source formatting:

  • I stick a line of 78 '%' characters between each section and sub-section. This helps to visually break them up and finding them in a scroll-past is quicker.

  • I indent as much of the content as I can in each chapter/section/subsection (however deep I go in sections) to tie it to the section it belongs to and to see at a glance how deep I am in subsections, just like with source code. The exception is environments that I can't indent due to other tool limitations: I have code excerpts demarcated by \begin{code}/\end{code} which are executed by Haskell's GHCi interpreter, and the indentation would interfere with Haskell's indentation rules.

  • For large documents (like a thesis), I have little helper "standalone" .tex files whose purpose is to let me build just one chapter or section at a time.

  • I'm fairly sure I'll settle on a serif font for my final document. But I have found that a sans-serif font is easier on my eyes on-screen. YMMV.

Of course, you need to review the rendered document too! I like to bounce it to a tablet with a pen/stylus/pencil and review it in a different environment from where I write. I then end up with a long list of scrawled notes, and a third distinct activity, back at the writing desk, is to systematically go through them and apply some GTD-style thinking to them: can I fix it in a few seconds? Do it straight away. Delegate it? Unlikely… Defer it? Transfer the review note into another system of record (such as a LaTeX \todo{…}).

And finally

  • Don't stop to write a blog post about it.
