Planet Debian

Mike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 03)

17 hours 16 min ago

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has been known as Lomiri.

Recent Uploads to Debian related to Lomiri

Over the past 4 months I worked on the following bits and pieces regarding Lomiri in Debian:

  • Work on lomiri-app-launch (Debian packaging, upstream work, upload to Debian)
  • Fork lomiri-url-dispatcher from url-dispatcher (upstream work)
  • Upload lomiri-url-dispatcher to Debian
  • Fork out suru-icon-theme and make it its own upstream project
  • Package and upload suru-icon-theme to Debian
  • First glance at lomiri-ui-toolkit (currently FTBFS, needs to be revisited)
  • Update of Mir (1.7.0 -> 1.8.0) in Debian
  • Fix net-cpp FTBFS in Debian
  • Fix FTBFS in gsettings-qt.
  • Fix FTBFS in mir (support of binary-only and arch-indep-only builds)
  • Coordinate with Marius Gripsgard and Robert Tari on shift over from Ubuntu Indicator to Ayatana Indicators
  • Upload ayatana-indicator-* (and libraries) to Debian (new upstream releases)
  • Package and upload to Debian: qmenumodel (still in Debian's NEW queue)
  • Package and upload to Debian: ayatana-indicator-sound
  • Symbol-Updates (various packages) for non-standard architectures
  • Fix FTBFS of qtpim-opensource-src in Debian since Qt5.14 had landed in unstable
  • Fix FTBFS on non-standard architectures of qtsystems, qtpim and qtfeedback
  • Fix wlcs in Debian (for non-standard architectures), more Symbol-Updates (esp. for the mir DEB package)
  • Symbol-Updates (mir, fix upstream tinkering with debian/libmiral3.symbols)
  • Fix FTBFS in lomiri-url-dispatcher against Debian unstable, file merge request upstream
  • Upstream release of qtmir 0.6.1 (via merge request)
  • Improve script as used in lomiri-api to ignore debian/ subfolder
  • Upstream release of lomiri-api 0.1.1 and upload to Debian unstable.

The next two big projects / packages ahead are lomiri-ui-toolkit and qtmir.


Many big thanks go to Marius and Dalton for their work on the UBports project and for always being available for questions, feedback, etc.

Thanks to Ratchanan Srirattanamet for providing some of his time for debugging some non-thread-safe unit tests (I'm currently unsure what package we actually looked at...).

Thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Previous Posts about my Debian UBports Team Efforts

Vincent Bernat: Speeding up bgpq4 with IRRd in a container

29 September, 2020 - 15:32

When building route filters with bgpq4 or bgpq3, the speed of the IRRd server you query can be a bottleneck. Updating many filters may take several tens of minutes, depending on the load:

$ time bgpq4 -h AS-HURRICANE | wc -l
1.96s user 0.15s system 2% cpu 1:17.64 total
$ time bgpq4 -h AS-HURRICANE | wc -l
1.86s user 0.08s system 12% cpu 14.098 total

A possible solution is to have your own IRRd instance in your network, mirroring the main routing registries. A close alternative is to bundle IRRd with all the data in a ready-to-use Docker image. This also has the advantage of easy integration into a Docker-based CI/CD pipeline.

$ git clone -b blade/master
$ cd irrd-legacy
$ docker build . -t irrd-snapshot:latest
Successfully built 58c3e83a1d18
Successfully tagged irrd-snapshot:latest
$ docker container run --rm --detach --publish=43:43 irrd-snapshot
$ time bgpq4 -h localhost AS-HURRICANE | wc -l
1.72s user 0.11s system 96% cpu 1.881 total

The Dockerfile contains three stages:

  1. building IRRd,1
  2. retrieving various IRR databases, and
  3. assembling the final container with the result of the two previous stages.

The second stage fetches the databases used by NTTCOM, RADB, RIPE, ALTDB, BELL, LEVEL3, RGNET, APNIC, JPIRR, ARIN, BBOI, TC, and AFRINIC. However, it misses some of the databases as I was unable to locate them: ARIN-WHOIS, RPKI,2 and REGISTROBR. Feel free to adapt!

The image can be scheduled to be rebuilt daily or weekly, depending on your needs. The repository includes a .gitlab-ci.yml file automating the build and triggering the compilation of all filters by your CI/CD upon success.
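
A scheduled rebuild job could look roughly like the following (a hedged sketch, not the repository's actual file; the job name and image are my assumptions):

build:
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build -t irrd-snapshot:latest .
  only:
    - schedules

A daily or weekly schedule created in the GitLab UI then triggers this job, and a pipeline trigger on success can kick off the filter compilation.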

  1. Instead of using the latest version of IRRd, the image relies on an older version that does not require a PostgreSQL instance and uses flat files instead. ↩︎

  2. Unlike the others, the RPKI database is built from the published RPKI ROAs. ↩︎

Norbert Preining: Performance with Intel i218/i219 NIC

29 September, 2020 - 12:19

I always had the feeling that my server, hosted by Hetzner, somehow had a slow internet connection. I put it down to the distance between Finland and Japan and didn't care too much, until yesterday, when my server stopped reacting to pings/ssh and needed a hard reset. It turned out that the server was running fine; only the ethernet card had hung. Hetzner support answered promptly and directed me to this web page, which described a change in the kernel concerning fragmentation offloading, and suggested the following configuration to regain connection speed:

ethtool -K <interface> tso off gso off

And to my surprise, this simple thing did wonders: the connection speed improved dramatically, even from Japan (something like a factor of 10 in large rsync transfers). I have added this incantation to the system crontab and run it every hour, just to be sure that it stays applied even after a reboot.
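
For reference, such a cron entry could look like this (a sketch; /etc/cron.d/ethtool-offload is a hypothetical file name and eth0 a placeholder for the actual interface):

# /etc/cron.d/ethtool-offload -- reapply offload settings hourly
0 * * * * root /sbin/ethtool -K eth0 tso off gso off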

If you have bad connection speed with this kind of ethernet card, give it a try.

Kentaro Hayashi: dnsZoneEntry: field should be removed when DD is retired

28 September, 2020 - 16:03

It is known that a Debian Developer can set up a *.debian.net subdomain via the dnsZoneEntry: field.

When a Debian Developer has retired, the actual DNS entry is removed, but the dnsZoneEntry: field is kept in LDAP.

So you cannot reuse a *.debian.net subdomain if a retired Debian Developer already owns your preferred one.

I've posted a question about this currently undocumented behaviour.
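
For reference, a query along these lines against Debian's public LDAP directory should show the field (a sketch; I have not verified the exact base DN, or that anonymous reads expose this attribute; <login> is a placeholder):

$ ldapsearch -x -H ldap://db.debian.org -b dc=debian,dc=org uid=<login> dnsZoneEntry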

Vincent Bernat: Syncing RIPE, ARIN and APNIC objects with a custom Ansible module

28 September, 2020 - 15:33

The Internet is split into five regional Internet registries: AFRINIC, ARIN, APNIC, LACNIC and RIPE. Each RIR maintains an Internet Routing Registry. An IRR allows one to publish information about the routing of Internet number resources.1 Operators use this to determine the owner of an IP address and to construct and maintain routing filters. To ensure your routes are widely accepted, it is important to keep the prefixes you announce up-to-date in an IRR.

There are two common tools to query this database: whois and bgpq4. The first one allows you to do a query with the WHOIS protocol:

$ whois -BrG 2a0a:e805:400::/40
inet6num:       2a0a:e805:400::/40
netname:        FR-BLADE-CUSTOMERS-DE
country:        DE
geoloc:         50.1109 8.6821
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:58Z
last-modified:  2020-05-19T08:04:58Z
source:         RIPE

route6:         2a0a:e805:400::/40
descr:          Blade IPv6 - AMS1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

The second one allows you to build route filters using the information contained in the IRR database:

$ bgpq4 -6 -S RIPE -b AS64476
NN = [
    2a0a:e805:300::/40,
    2a0a:e805:400::/40
];

There is no module available on Ansible Galaxy to manage these objects. Each IRR has different ways of being updated. Some RIRs propose an API but some don’t. If we restrict ourselves to RIPE, ARIN and APNIC, the only common method to update objects is email updates, authenticated with a password or a GPG signature.2 Let’s write a custom Ansible module for this purpose!


I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a more instructive example.


The module takes a list of RPSL objects to synchronize and returns the body of an email update if a change is needed:

- name: prepare RIPE objects
  irr_sync:                     # module name assumed; it was lost in aggregation
    irr: RIPE
    mntner: fr-blade-1-mnt
    source: whois-ripe.txt
  register: irr

The source file should be a set of objects to sync using the RPSL language. This would be the same content you would send manually by email. All objects should be managed by the same maintainer, which is also provided as a parameter.

Signing and sending the result is not the responsibility of this module. You need two additional tasks for this purpose:

- name: sign RIPE objects
  shell:                        # module name assumed; structure reconstructed
    cmd: gpg --batch --user <key-id> --clearsign   # key ID stripped from the original
    stdin: "{{ irr.objects }}"
  register: signed
  check_mode: false
  changed_when: false

- name: update RIPE objects by email
  mail:                         # module name assumed; structure reconstructed
    subject: "NEW: update for RIPE"
    to: "<IRR update address, stripped from the original>"
    port: 25
    charset: us-ascii
    body: "{{ signed.stdout }}"

You also need to authorize the PGP keys used to sign the updates by creating a key-cert object and adding it as a valid authentication method for the corresponding mntner object:

key-cert:  PGPKEY-A791AAAB
certif:    -----BEGIN PGP PUBLIC KEY BLOCK-----
certif:    mQGNBF8TLY8BDADEwP3a6/vRhEERBIaPUAFnr23zKCNt5YhWRZyt50mKq1RmQBBY
certif:    -----END PGP PUBLIC KEY BLOCK-----
mnt-by:    fr-blade-1-mnt
source:    RIPE

mntner:    fr-blade-1-mnt
auth:      PGPKEY-A791AAAB
mnt-by:    fr-blade-1-mnt
source:    RIPE

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    irr=dict(type='str', required=True),
    mntner=dict(type='str', required=True),
    source=dict(type='path', required=True),
)

# Remainder completed from the standard AnsibleModule skeleton referenced above
result = dict(
    changed=False,
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True,
)

Getting existing objects

To grab existing objects, we use the whois command to retrieve all the objects from the provided maintainer.

# Per-IRR variations:
# - whois server (hostnames assumed; the original values were stripped)
whois = {
    'ARIN': 'rr.arin.net',
    'RIPE': 'whois.ripe.net',
    'APNIC': 'whois.apnic.net'
}
# - whois options
options = {
    'ARIN': ['-r'],
    'RIPE': ['-BrG'],
    'APNIC': ['-BrG']
}
# - objects excluded from synchronization
excluded = ["domain"]
if irr == "ARIN":
    # ARIN does not return these objects (see footnote 3)
    excluded.extend(["key-cert", "mntner"])

# Grab existing objects
args = ["-h", whois[irr],
        "-s", irr,
        *options[irr],
        "-i", "mnt-by",
        module.params['mntner']]
proc = subprocess.run(["whois", *args], capture_output=True)
if proc.returncode != 0:
    raise AnsibleError(
        f"unable to query whois: {args}")
output = proc.stdout.decode('ascii')
got = extract(output, excluded)

The first part of the code sets up some IRR-specific constants: the server to query, the options to provide to the whois command and the objects to exclude from synchronization. The second part invokes the whois command, requesting all objects whose mnt-by field is the provided maintainer. Here is an example of output:

$ whois -h whois.ripe.net -s RIPE -BrG -i mnt-by fr-blade-1-mnt

inet6num:       2a0a:e805:300::/40
netname:        FR-BLADE-CUSTOMERS-FR
country:        FR
geoloc:         48.8566 2.3522
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:59Z
last-modified:  2020-05-19T08:04:59Z
source:         RIPE


route6:         2a0a:e805:300::/40
descr:          Blade IPv6 - PA1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE


The result is passed to the extract() function. It parses and normalizes the results into a dictionary mapping object names to objects. We store the result in the got variable.

def extract(raw, excluded):
    """Extract objects."""
    # First step, remove comments and unwanted lines
    # (the comment prefixes below are assumed; the original tuple was truncated)
    objects = "\n".join([obj
                         for obj in raw.split("\n")
                         if not obj.startswith((
                                 "#",
                                 "%",
                         ))])
    # Second step, split objects
    objects = [RPSLObject(obj.strip())
               for obj in re.split(r"\n\n+", objects)
               if obj.strip()
               and not obj.startswith(
                   tuple(f"{x}:" for x in excluded))]
    # Last step, put objects in a dict
    objects = {repr(obj): obj
               for obj in objects}
    return objects

RPSLObject() is a class enabling normalization and comparison of objects. Look at the module code for more details.

>>> output="""
... inet6num:       2a0a:e805:300::/40
... […]
... """
>>> pprint({k: str(v) for k, v in extract(output, excluded=[]).items()})
   'inet6num:       2a0a:e805:300::/40\n'
   'netname:        FR-BLADE-CUSTOMERS-FR\n'
   'country:        FR\n'
   'geoloc:         48.8566 2.3522\n'
   'admin-c:        BN2763-RIPE\n'
   'tech-c:         BN2763-RIPE\n'
   'status:         ASSIGNED\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE',
   'route6:         2a0a:e805:300::/40\n'
   'descr:          Blade IPv6 - PA1\n'
   'origin:         AS64476\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE'}
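
Though the real class lives in the module's source, a minimal sketch of the idea could look like this (my own illustration, not the module's actual code; it normalizes by dropping the churn-prone created: and last-modified: attributes, which matches their absence in the output above):

class RPSLObject:
    """Sketch: an RPSL object normalized for comparison."""
    IGNORED = ("created:", "last-modified:")  # attributes ignored when comparing

    def __init__(self, raw):
        self.raw = raw
        self._lines = [line for line in raw.split("\n")
                       if not line.startswith(self.IGNORED)]

    def __str__(self):
        return "\n".join(self._lines)

    def __repr__(self):
        return self._lines[0]  # e.g. "inet6num:       2a0a:e805:300::/40"

    def __eq__(self, other):
        return isinstance(other, RPSLObject) and self._lines == other._lines

    def __hash__(self):
        return hash(str(self))
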
Comparing with wanted objects

Let’s build the wanted dictionary using the same structure, thanks to the extract() function we can use verbatim:

with open(module.params['source']) as f:
    source = f.read()
wanted = extract(source, excluded)

The next step is to compare got and wanted to build the diff object:

if got != wanted:
    result['changed'] = True
    if module._diff:
        result['diff'] = [
            dict(before=str(got.get(k, "")),
                 after=str(wanted.get(k, "")))
            for k in set((*wanted.keys(), *got.keys()))
            if k not in wanted or k not in got or wanted[k] != got[k]]

Returning updates

The module has no side effects. If there is a difference, we return the updates to send by email. We choose to include all wanted objects in the updates (contained in the source variable) and let the IRR ignore unmodified objects. We also append the objects to be deleted by adding a delete: attribute to each of them.

# We send all source objects and deleted objects.
deleted_mark = f"{'delete:':16}deleted by CMDB"
deleted = "\n\n".join([f"{got[k].raw}\n{deleted_mark}"
                       for k in got
                       if k not in wanted])
result['objects'] = f"{source}\n\n{deleted}"


The complete code is available on GitHub. The module supports both --diff and --check flags. It does not return anything if no change is detected. It can work with APNIC, RIPE and ARIN. It is not perfect: it may not detect some changes,3 it is not able to modify objects not owned by the provided maintainer4 and some attributes cannot be modified, requiring you to manually delete and recreate the updated object.5 However, this module should automate 95% of your needs.

  1. Other IRRs exist without being attached to a RIR. The most notable one is RADb. ↩︎

  2. ARIN is phasing out this method in favor of IRR-online. RIPE has an API available, but email updates are still supported and not planned to be deprecated. APNIC plans to expose an API. ↩︎

  3. For ARIN, we cannot query key-cert and mntner objects and therefore we cannot detect changes in them. It is also not possible to detect changes to the auth mechanisms of a mntner object. ↩︎

  4. APNIC does not assign top-level objects to the maintainer associated with the owner. ↩︎

  5. Changing the status of an inetnum object requires deleting and recreating the object. ↩︎

Norbert Preining: Cinnamon for Debian – imminent removal from testing

28 September, 2020 - 06:23

I have been more or less maintaining Cinnamon now for quite some time, but using it only sporadically due to my switch to KDE/Plasma. Currently, Cinnamon's cjs package depends on mozjs52, which is also probably going to be orphaned soon. This will precipitate a lot of changes, not the least of which is Cinnamon being removed from Debian/testing.

I have pinged upstream several times, without much success. So for now the future looks bleak for Cinnamon in Debian. If there are interested developers (Debian or not), please get in touch with me, or directly try to update cjs to mozjs78.

Steinar H. Gunderson: Introducing plocate

28 September, 2020 - 05:45

In continued annoyance over locate's slowness, I made my own locate using posting lists (thus the name plocate) and compression, and it turns out that you hardly need any tuning at all to make it fast. Example search on a system with 26M files:

cassarossa:~/nmu/plocate> ls -lh /var/lib/mlocate  
total 1,5G                
-rw-r----- 1 root mlocate 1,1G Sep 27 06:33 mlocate.db
-rw-r----- 1 root mlocate 470M Sep 28 00:34 plocate.db

cassarossa:~/nmu/plocate> time mlocate info/mlocate
mlocate info/mlocate  20.75s user 0.14s system 99% cpu 20.915 total

cassarossa:~/nmu/plocate> time plocate info/mlocate
plocate info/mlocate  0.01s user 0.00s system 83% cpu 0.008 total

It will be slower if files are on rotating rust and not cached, but still much faster than mlocate.

It's a prototype, and freerides off of updatedb from mlocate (mlocate.db is converted to plocate.db). Case-sensitive matches only, no regexes or other funny business. Get it from the plocate git repository (clone with --recursive so that you get the TurboPFOR submodule). GPLv2+.
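
To illustrate the core idea (a toy sketch of posting lists in general, not plocate's actual design or code): index every trigram of every path, intersect the posting lists for the query's trigrams, and only then run a real substring match on the few surviving candidates.

from collections import defaultdict

def build_index(paths):
    """Map each trigram to the set of file IDs whose path contains it."""
    index = defaultdict(set)
    for file_id, path in enumerate(paths):
        for i in range(len(path) - 2):
            index[path[i:i + 3]].add(file_id)
    return index

def search(index, paths, query):
    if len(query) < 3:
        return [p for p in paths if query in p]  # too short for trigrams: linear scan
    # Intersect the posting lists of all trigrams in the query...
    candidates = None
    for i in range(len(query) - 2):
        posting = index.get(query[i:i + 3], set())
        candidates = posting if candidates is None else candidates & posting
        if not candidates:
            return []
    # ...then verify the few survivors with a real substring match.
    return [paths[fid] for fid in sorted(candidates) if query in paths[fid]]

paths = ["/usr/share/info/mlocate.info", "/var/lib/mlocate/mlocate.db", "/etc/fstab"]
index = build_index(paths)
print(search(index, paths, "info/mlocate"))  # ['/usr/share/info/mlocate.info']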

Enrico Zini: Coup d'état in recent Italian history

28 September, 2020 - 05:00

Italy during the cold war has always been in too strategic a position, and with too strong a left wing movement, not to get the CIA involved.

Here are a few stories of coups d'état and other kinds of efforts to manipulate Italian politics:

Iain R. Learmonth: Multicast IPTV

28 September, 2020 - 04:35

For almost a decade, I’ve been very slowly making progress on a multicast IPTV system. Recently I’ve made a significant leap forward in this project, and I wanted to write a little on the topic so I’ll have something to look at when I pick this up next. I was aspiring to have a useable system by the end of today, but for a couple of reasons, it wasn’t possible.

When I started thinking about this project, it was still common to watch broadcast television. Over time the design of this system has been changing as new technologies have become available. Multicast IP is probably the only constant, although I’m now looking at IPv6 rather than IPv4.

Initially, I’d been looking at DVB-T PCI cards. USB devices have become common and are available cheaply. There are also DVB-T hats available for the Raspberry Pi. I’m now looking at a combination of Raspberry Pi hats and USB devices with one of each on a couple of Pis.

Two Raspberry Pis with DVB hats installed, TV antenna sockets showing

The Raspberry Pi devices will run DVBlast, an open-source DVB demultiplexer and streaming server. Each of the tuners will be tuned to a different transponder giving me the ability to stream any combination of available channels simultaneously. This is everything that would be needed to watch TV on PCs on the home network with VLC.
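
For illustration, such a setup for one tuner might look roughly like this (a hedged sketch based on my reading of the DVBlast documentation; the frequency, service IDs and multicast groups are invented):

# dvblast.conf: <multicast group>[:<port>] <always-on flag> <service ID>
239.255.0.1:5004  1  1044
239.255.0.2:5004  1  1045

# Tune adapter 0 to one transponder and stream the configured services:
dvblast -a 0 -f 490000000 -c dvblast.conf

VLC can then play a channel directly from the group, e.g. rtp://@239.255.0.1:5004 (or udp:// depending on how the stream is sent).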

I’ve not yet worked out if Kodi will accept multicast streams as a TV source, but I do know that Tvheadend will. Tvheadend can also act as a PVR to record programmes for later playback so is useful even if the multicast streams can be viewed directly.

So how far did I get? I have built two Raspberry Pis in cases with the DVB-T hats on. They need to sit in the lounge as that’s where the antenna comes down from the roof. There’s no wired network connection in the lounge. I planned to use an OpenBSD box as a gateway, bridging the wireless network to a wired network.

Two problems quickly emerged. The first was that the wireless card I had purchased only supported 2.4GHz, not 5GHz, and I have enough noise from neighbours that the throughput rate and packet loss were unacceptable.

The second problem is that I had forgotten the problems with bridging wireless networks. To create a bridge, you need to be able to spoof the MAC addresses of wired devices on the wireless interface, but this can only be done when the wireless interface is in access point mode.

So when I come back to this, I will have to look at routing rather than bridging to work around the MAC address issue, and I’ll also be on the lookout for a cheap OpenBSD supported mini-PCIe wireless card that can do 5GHz.

Joachim Breitner: Learn Haskell on CodeWorld writing Sokoban

28 September, 2020 - 02:20

Two years ago, I held the CIS194 minicourse on Haskell at the University of Pennsylvania. In that installment of the course, I changed the first four weeks to teach the basics of Haskell using the online Haskell environment CodeWorld, and led the students towards implementing the game Sokoban.

As is customary for CIS194, I put my lecture notes and exercises online, and they have been used as a learning resource by people from all over the world. But since I left the University of Pennsylvania, I have lost the ability to update the text, and as the CodeWorld API has evolved, some of the examples and exercises no longer work.

Some recent complaints about this, in bug reports against CodeWorld and in unrealistically flattering tweets (“Shame, this was the best Haskell course ever!!!”), motivated me to extract that material and turn it into an updated stand-alone tutorial that I can host myself.

So if you feel like learning Haskell without worrying about local installation, and while creating a reasonably fun game, head over to the tutorial and get started! Improvements can now also be contributed via its repository.

Credits go to Brent Yorgey, Richard Eisenberg and Noam Zilberstein, who held the previous installments of the course, and Chris Smith for creating the CodeWorld environment.

Dirk Eddelbuettel: pkgKitten 0.2.0: Now with tinytest and new docs

27 September, 2020 - 21:13

A new release 0.2.0 of pkgKitten just hit CRAN today, about eleven months after the previous release.

This release brings support for tinytest by having pkgKitten::kitten() automagically call tinytest::puppy() if the latter package is installed (and the user did not opt out of calling it). So your newly created minimal package now also uses a wonderful yet tiny testing framework. We also added a new documentation site using the previously tweeted-about wrapper for Material for MkDocs I really dig. And last but not least we switched to BSPM-based Continuous Integration (which I wrote about yesterday in R4 #30) and fixed one bug regarding the default NAMESPACE file.

Changes in version 0.2.0 (2020-09-27)
  • Continuous Integration uses the updated BSPM-based script on Travis and with GitHub Actions (Dirk in #11 plus earlier commits).

  • A new default NAMESPACE file is now installed (Dirk in #12).

  • A package documentation website was added (Dirk in #13).

  • Call tinytest::puppy if installed and not opted out (Dirk in #14).

More details about the package are at the pkgKitten webpage, the (new) pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Andrew Cater: Final post from media team for the day - most of the ordinary images and live images have been tested

27 September, 2020 - 04:51

Winding down slightly - we've worked our way through most of the images and testing. Schweer tested all of the Debian Edu/Skolelinux images, for which many thanks.

Sledge, RattusRattus, Isy and I have been working pretty much solidly for 10 3/4 hours. There are still some images to build - mips, mipsel and s390x - but these are all images that we don't really have hardware to test on.

Another good and useful day - bits and pieces done throughout. NOTE: There appear to have been some security updates since the main release this morning so, as ever, it's worth updating machines on a regular basis.

Waiting for the final images to finish building so that we can check the archive for completeness and then publish to the media mirrors. All the best until next time: thanks as ever to Sledge for his invaluable help. See you again in a couple of months in all likelihood. 

A much smaller release: some time in the next month we hope to be able to build and test an Alpha release for Bullseye. Bullseye is likely to be released somewhere round the middle of next year so we'll have additional Buster stable point releases in the meantime.

François Marier: Repairing a corrupt ext4 root partition

27 September, 2020 - 02:45

I ran into filesystem corruption (ext4) on the root partition of my backup server which caused it to go into read-only mode. Since it's the root partition, it's not possible to unmount it and repair it while it's running. Normally I would boot from an Ubuntu live CD / USB stick, but in this case the machine is using the mipsel architecture and so that's not an option.

Repair using a USB enclosure

I had to shut down the server and then pull the SSD drive out. I then moved it to an external USB enclosure and connected it to my laptop.

I started with an automatic filesystem repair:

fsck.ext4 -pf /dev/sde2

which failed for some reason and so I moved to an interactive repair:

fsck.ext4 -f /dev/sde2

Once all of the errors were fixed, I ran a full surface scan to update the list of bad blocks:

fsck.ext4 -c /dev/sde2

Finally, I forced another check to make sure that everything was fixed at the filesystem level:

fsck.ext4 -f /dev/sde2

Fix invalid alternate GPT

The other thing I noticed is this message in my dmesg log:

scsi 8:0:0:0: Direct-Access     KINGSTON  SA400S37120     SBFK PQ: 0 ANSI: 6
sd 8:0:0:0: Attached scsi generic sg4 type 0
sd 8:0:0:0: [sde] 234441644 512-byte logical blocks: (120 GB/112 GiB)
sd 8:0:0:0: [sde] Write Protect is off
sd 8:0:0:0: [sde] Mode Sense: 31 00 00 00
sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 8:0:0:0: [sde] Optimal transfer size 33553920 bytes
Alternate GPT is invalid, using primary GPT.
 sde: sde1 sde2

I therefore checked to see if the partition table looked fine and got the following:

$ fdisk -l /dev/sde
GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 799CD830-526B-42CE-8EE7-8C94EF098D46

Device       Start       End   Sectors   Size Type
/dev/sde1     2048   8390655   8388608     4G Linux swap
/dev/sde2  8390656 234441614 226050959 107.8G Linux filesystem

It turns out that all I had to do, since only the backup / alternate GPT partition table was corrupt and the primary one was fine, was to re-write the partition table:

$ fdisk /dev/sde

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.

Command (m for help): w

The partition table has been altered.
Syncing disks.

Run SMART checks

Since I still didn't know what caused the filesystem corruption in the first place, I decided to do one last check: SMART errors.

I couldn't do this via the USB enclosure since the SMART commands aren't forwarded to the drive and so I popped the drive back into the backup server and booted it up.

First, I checked whether any SMART errors had been reported using smartmontools:

smartctl -a /dev/sda

That didn't show any errors and so I kicked off an extended test:

smartctl -t long /dev/sda

which ran for 30 minutes and then passed without any errors.

The mystery remains unsolved.

Dirk Eddelbuettel: #30: Easy, Reliable, Fast and Portable Linux and macOS Continuous Integration

26 September, 2020 - 23:46

Welcome to the 30th post in the rarified R recommendation resources series, or R4 for short. The last post introduced BSPM. In the four weeks since, we have worked some more on BSPM to bring it to the point where it is ready for use with continuous integration. Building on this, it is now used inside the script that has driven our CI use for many years (via the r-travis repo).

We actually use this right now on three different platforms.

All three use the exact same script facilitating this, and run a ‘matrix’ over Linux and macOS. You read this right: one CI setup that is portable and which you can take to your CI provider of choice. No lock-in or tie-in. Use what works, change at will. Or run on all three if you like burning extra cycles.

This is already used by a handful of my repos as well as by at least two repos of friends also deploying r-travis. How does it work? In a nutshell we are

  • downloading the script via curl and changing its mode;
  • running bootstrap which sets up the operating system defaults:
    • on Linux we use Ubuntu,
      • add two PPA repos for R itself and over 4600 r-cran-* binaries,
      • and enable BSPM to use these from install.packages()
    • on macOS we use the standard setup also used on Travis, GitHub Actions and elsewhere;
    • this provides us with fast, reliable, easy, and portable access to binaries on two OSs under dependency resolution;
  • running install_deps to install just the required Depends:, Imports: and LinkingTo:
  • running tests to build the tarball and test it via R CMD check --as-cran (a sketch of the full sequence follows below).
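
A minimal sketch of that sequence as a shell job (the script URL and the run_tests command name are assumptions on my part; check the r-travis repo for the canonical invocation):

curl -OLs https://eddelbuettel.github.io/r-travis/run.sh   # URL assumed
chmod 0755 run.sh
./run.sh bootstrap       # Linux: add PPAs and enable BSPM; macOS: standard setup
./run.sh install_deps    # install Depends:, Imports:, LinkingTo:
./run.sh run_tests       # build the tarball and run R CMD check --as-cran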

There are several customizations that are possible via environment variables

  • additional PPAs or drat repos can be added to offer even more package choice;
  • alternatively one could run install_all to also install Suggests:;
  • optionally one could run install_r pkgA pkgB ... to install packages explicitly listed;
  • optionally one could also run install_aptget r-cran-pkga r-cran-pkgb otherpackage to add more Ubuntu binaries.

We find this setup compelling. The scheme is simple: there really is just one shell script behind it which can also be downloaded and altered. The scheme is also portable, as we can (as shown) rotate between CI providers. The scheme is also more flexible: in case of debugging needs one can simply run the script on a local Docker or VM instance. Lastly, the scheme moves away from single points of failure or breakage.

Currently the script uses only BSPM as I had the hunch that it would a) work and b) be compelling. Adding support for RSPM would be equally easy, but I have no immediate need to do so. Adding BioConductor installation may be next. That is easy when BioConductor uses r-release; it may be a little more challenging under r-devel, but it should work too. Stay tuned.

In the meantime, if the above sounds compelling, give the script from r-travis a go!

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Andrew Cater: Chunking through the tests for various media images ...

26 September, 2020 - 22:47

We're working our way through some of the CD/DVD/Blu-Ray media images, doing test installs, noting failures and so on. It's repetitive work but vital if we're going to provide some assurance that folk can install from the images we make. 

There's always the few things that catch us out and there's always something to note for next time. Schweer has joined us and is busy chasing down debian-edu/Skolelinux installs from Germany. We're getting there, one way and another, and significantly ahead of where we were last time around when the gremlins got in and delayed us. All good :)

Andrew Cater: There are things that money can't buy - and sensible Debian colleagues are worth gold and diamonds :)

26 September, 2020 - 20:13

Participating in the Debian media testing on debian-cd. One of my colleagues has just spent time sorting out an email issue, having spent a couple of hours with me the other night. I now have good, working email for the first time in years - I can't value that highly enough.

Sledge, RattusRattus, Isy and myself are all engaged in testing various CD images. At the same time, they're debugging a new application to save us from wiki problems when we do this - and we're also able to use a video link which is really handy to chat backwards and forwards and means I can sit virtually in Cambridge :)

Lots of backchat and messages flying backwards and forwards - couldn't wish for a better way to spend an afternoon with friends.

Andrew Cater: There's a Debian point release for Debian stable happening this weekend - 10.6

26 September, 2020 - 17:58

 Nothing particularly new or unexpected: there's a point release happening at some point this weekend for Debian stable. Usual rules apply: if you've already got a system current and up to date, there's not much to do but the base files version will change at some point to reflect 10.6 when you next update. 

If you have media from 10.5, you may not _have_ to go and get media this weekend but it's always useful to get new media in due course.

This point release will contain security fixes, consequent changes etc. as usual - it is always good and useful to keep machines up to date.

Working with the CD team to eventually test, build and release CD / DVD images and media as and when files gradually become available. As ever, this may take 12-16 hours. As ever, I'll post some blog entries as we go.

Currently "sitting in Cambridge" with Sledge, RattusRattus and Isy who are all involved in the testing and we'll have a great day, as ever.

Russell Coker: Bandwidth for Video Conferencing

25 September, 2020 - 22:44

For the Linux Users of Victoria (LUV) I’ve run video conferences on Jitsi and BBB (see my previous post about BBB vs Jitsi [1]). One issue with video conferences is the bandwidth requirements.

The place I’m hosting my video conference server has an NBN link with allegedly 40Mb/s transmission speed and 100Mb/s reception speed. My tests show that it can transmit at about 37Mb/s and receive at speeds significantly higher than that but also quite a bit lower than 100Mb/s (around 60 or 70Mb/s). For a video conference server you have a small number of sources of video and audio and a larger number of targets, as usually most people will have their microphones muted and video cameras turned off. This means that the transmission speed is the bottleneck. In every test the reception speed was well below half the transmission speed, so the tests confirmed my expectation that transmission was the only bottleneck, but the reception speed was higher than I had expected.

When we tested bandwidth use, the maximum upload speed we saw was about 4MB/s (32Mb/s) with 8+ video cameras and maybe 20 people seeing some of the video (with a bit of lag). We used 3.5MB/s (28Mb/s) when we only had 6 cameras, which seemed to be the maximum for good performance.

In another test run we had 4 people all sending video and the transmission speed was about 260KB/s.

I don’t know how BBB manages the small versions of video streams. It might reduce the bandwidth when the display window is smaller.

I don’t know the resolutions of the cameras. When you start sending video in BBB you are prompted for the “quality” with “medium” being default. I don’t know how different camera hardware and different choices about “quality” affect bandwidth.

These tests showed that for the cameras we had available a small group of people video chatting a 100/40 NBN link (the fastest Internet link in Australia that’s not really expensive) a small group of people can be all sending video or a medium size group of people can watch video streams from a small group.

For meetings of the typical size of LUV meetings we won’t have a bandwidth problem.

There is one common case that I haven’t yet tested, where there is a single video stream that many people are watching. If 4 people all sending video takes 260KB/s of transmission bandwidth, then 1 person sending video to 4 viewers presumably takes about 65KB/s. Doing some simple calculations on those numbers (scaling from 4 viewers to 240 means 60 times the bandwidth, about 3.9MB/s, just under the observed 4MB/s maximum) implies that we could have 1 person sending video to 240 people without running out of bandwidth. I really doubt that would work, but further testing is needed.


Colin Watson: Porting Launchpad to Python 3: progress report

25 September, 2020 - 18:01

Launchpad still requires Python 2, which in 2020 is a bit of a problem. Unlike a lot of the rest of 2020, though, there’s good reason to be optimistic about progress.

I’ve been porting Python 2 code to Python 3 on and off for a long time, from back when I was on the Ubuntu Foundations team and maintaining things like the Ubiquity installer. When I moved to Launchpad in 2015 it was certainly on my mind that this was a large body of code still stuck on Python 2. One option would have been to just accept that and leave it as it is, maybe doing more backporting work over time as support for Python 2 fades away. I’ve long been of the opinion that this would doom Launchpad to being unmaintainable in the long run, and since I genuinely love working on Launchpad - I find it an incredibly rewarding project - this wasn’t something I was willing to accept. We’re already seeing some of our important dependencies dropping support for Python 2, which is perfectly reasonable on their terms but which is starting to become a genuine obstacle to delivering important features when we need new features from newer versions of those dependencies. It also looks as though it may be difficult for us to run on Ubuntu 20.04 LTS (we’re currently on 16.04, with an upgrade to 18.04 in progress) as long as we still require Python 2, since we have some system dependencies that 20.04 no longer provides. And then there are exciting new features like type hints and async/await that we’d like to be able to use.

However, until last year there were so many blockers that even considering a port was barely conceivable. What changed in 2019 was sorting out a trifecta of core dependencies. We ported our database layer, Storm. We upgraded to modern versions of our Zope Toolkit dependencies (after contributing various fixes upstream, including some substantial changes to Zope’s test runner that we’d carried as local patches for some years). And we ported our Bazaar code hosting infrastructure to Breezy. With all that in place, a port seemed more of a realistic possibility.

Still, even with this, it was never going to be a matter of just following some standard porting advice and calling it good. Launchpad has almost a million lines of Python code in its main git tree, and around 250 dependencies of which a number are quite Launchpad-specific. In a project that size, not only is following standard porting advice an extremely time-consuming task in its own right, but just about every strange corner case is going to show up somewhere. (Did you know that StringIO.StringIO(None) and io.StringIO(None) do different things even after you account for the native string vs. Unicode text difference? How about the behaviour of .union() on a subclass of frozenset?) Launchpad’s test suite is fortunately extremely thorough, but even just starting up the test suite involves importing most of the data model code, so before you can start taking advantage of it you have to make a large fraction of the codebase be at least syntactically-correct Python 3 code and use only modules that exist in Python 3 while still working in Python 2; in a project this size that turns out to be a large effort on its own, and can be quite risky in places.

Canonical’s product engineering teams work on a six-month cycle, but it just isn’t possible to cram this sort of thing into six months unless you do literally nothing else, and “please can we put all feature development on hold while we run to stand still” is a pretty tough sell to even the most understanding management. Fortunately, we’ve been able to grow the Launchpad team in the last year or so, and so it’s been possible to put “Python 3” on our roadmap in the understanding that we aren’t going to get all the way there in one cycle, while still being able to do other substantial feature development work as well.

So, with all that preamble, what have we done this cycle? We’ve taken a two-pronged approach. From one end, we identified 147 classes that needed to be ported away from some compatibility code in our database layer that was substantially less friendly to Python 3: we’ve ported 38 of those, so there’s clearly a fair bit more to do, but we were able to distribute this work out among the team quite effectively. From the other end, it was clear that it would be very inefficient to do general porting work when any attempt to even run the test suite would run straight into the same crashes in the same order, so I set myself a target of getting the test suite to start up, and started hacking on an enormous git branch that I never expected to try to land directly: instead, I felt free to commit just about anything that looked reasonable and moved things forward even if it was very rough, and every so often went back to tidy things up and cherry-pick individual commits into a form that included some kind of explanation and passed existing tests so that I could propose them for review.

This strategy has been dramatically more successful than anything I’ve tried before at this scale. So far this cycle, considering only Launchpad’s main git tree, we’ve landed 137 Python-3-relevant merge proposals for a total of 39552 lines of git diff output, keeping our existing tests passing along the way and deploying incrementally to production. We have about 27000 more lines of patch at varying degrees of quality to tidy up and merge. Our main development branch is only perhaps 10 or 20 more patches away from the test suite being able to start up, at which point we’ll be able to get a buildbot running so that multiple developers can work on this much more easily and see the effect of their work. With the full unlanded patch stack, about 75% of the test suite passes on Python 3! This still leaves a long tail of several thousand tests to figure out and fix, but it’s a much more incrementally-tractable kind of problem than where we started.

Finally: the funniest (to me) bug I’ve encountered in this effort was the one I encountered in the test runner and fixed in zopefoundation/zope.testrunner#106: IDs of failing tests were written to a pipe, so if you have a test suite that’s large enough and broken enough then eventually that pipe would reach its capacity and your test runner would just give up and hang. Pretty annoying when it meant an overnight test run didn’t give useful results, but also eloquent commentary of sorts.

Reproducible Builds: ARDC sponsors the Reproducible Builds project

25 September, 2020 - 07:00

The Reproducible Builds project is pleased to announce a donation from Amateur Radio Digital Communications (ARDC) in support of its goals. ARDC’s contribution will propel the Reproducible Builds project’s efforts in ensuring the future health, security and sustainability of our increasingly digital society.

About Amateur Radio Digital Communications (ARDC)

Amateur Radio Digital Communications (ARDC) is a non-profit that was formed to further research and experimentation with digital communications using radio, with a goal of advancing the state of the art of amateur radio and to educate radio operators in these techniques.

It does this by managing the allocation of network resources, encouraging research and experimentation with networking protocols and equipment, publishing technical articles and a number of other activities to promote the public good of amateur radio and other related fields. ARDC has recently begun to contribute funding to organisations, groups, individuals and projects towards these and related goals, and their grant to the Reproducible Builds project is part of this new initiative.

Amateur radio is an entirely volunteer activity performed by knowledgeable hobbyists who have proven their ability by passing the appropriate government examinations. No remuneration is permitted. “Ham radio,” as it is also known, has proven its value in advancements of the state of the communications arts, as well as in public service during disasters and in times of emergency.

For more information about ARDC, please see their website.

About the Reproducible Builds project

One of the original promises of open source software was that peer review would result in greater end-user security and stability of our digital ecosystem. However, although it is theoretically possible to inspect and build the original source code in order to avoid maliciously-inserted flaws, almost all software today is distributed in prepackaged form.

This disconnect allows third-parties to compromise systems by injecting code into seemingly secure software during the build process, as well as by manipulating copies distributed from ‘app stores’ and other package repositories.

In order to address this, ‘Reproducible builds’ are a set of software development practices, ideas and tools that create an independently-verifiable path from the original source code, all the way to what is actually running on our machines. Reproducible builds can reveal the injection of backdoors introduced by the hacking of developers’ own computers, build servers and package repositories, but can also expose where volunteers or companies have been coerced into making changes via blackmail, government order, and so on.

A world without reproducible builds is a world where our digital infrastructure cannot be trusted and where online communities are slower to grow, collaborate less and are increasingly fragile. Without reproducible builds, we leave space for greater encroachments on our liberties both by individuals as well as powerful, unaccountable actors such as governments, large corporations and autocratic regimes.

The Reproducible Builds project began as a project within the Debian community, but is now working with many crucial and well-known free software projects such as Coreboot, openSUSE, OpenWrt, Tails, GNU Guix, Arch Linux, Tor, and many others. It is now an entirely Linux distribution independent effort and serves as the central ‘clearing house’ for all issues related to securing build systems and software supply chains of all kinds.

For more about the Reproducible Builds project, please see their website.

If you are interested in ensuring the ongoing security of the software that underpins our civilisation, and wish to sponsor the Reproducible Builds project, please reach out to the project by email.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.