Planet Debian


Dirk Eddelbuettel: Rcpp Bug fix interim version

8 hours 15 min ago

Rcpp 1.0.4 was released on March 17, following the usual sequence of fairly involved reverse-depends checks along with a call for community testing issued weeks before the release. In that email I specifically pleaded with folks to pretty-please test non-standard setups:

It would be particularly beneficial if those with “unusual” build dependencies tested it as we would increase overall coverage beyond what I get from testing against 1800+ CRAN packages. BioConductor would also be welcome.

Alas, you can’t always get what you want. Shortly after the release we were made aware that the two (large) pull requests at the book ends of the 1.0.3 to 1.0.4 release period created trouble. Of these two, the earliest PR in the 1.0.4 release upset older-than-CRAN-tested installations, i.e. R 3.3.0 or before. (Why you’d want to run R 3.3.* when R 3.6.3 is current is something I will never understand, but so be it.) This got addressed in two new PRs. And the matching last PR had a bit of sloppiness, leaving just about everyone alone, but not all those macbook-wearing data scientists using newer macOS SDKs not used by CRAN. In other words, “unusual” setups. But boy, do those folks have an ability to complain. Again, two quick PRs later that was addressed. Along came a minor PR with two more Rcpp::Shield<> uses (as life is too short to manually count PROTECT and UNPROTECT). And then a real issue between R 4.0.0 and Rcpp first noticed with RcppParallel builds on Windows but then also affecting RcppArmadillo. Another quickly issued fix. So by now the count is up to six, and we arrived at Rcpp

Which is now on CRAN, after having sat there for nearly a full week, and of course with no reason given. Because the powers that be move in mysterious ways. And don’t answer to earthlings like us.

As may transpire here, I am a little tired from all this. I think we can do better, and I think we damn well should, or I may as well throw in the towel and just release to the drat repo where each of the six interim versions was available for all to take as soon as it materialized.

Anyway, here is the state of things. Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 1897 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 191 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at one million downloads per month.

The changes for this interim version are summarized below.

Changes in Rcpp patch release version (2020-04-02)
  • Changes in Rcpp API:

    • The exception handler code in #1043 was updated to ensure proper include behavior (Kevin in #1047 fixing #1046).

    • A missing Rcpp_list6 definition was added to support R 3.3.* builds (Davis Vaughan in #1049 fixing #1048).

    • Missing Rcpp_list{2,3,4,5} definitions were added to the Rcpp namespace (Dirk in #1054 fixing #1053).

    • A further update corrected the header include and provided a missing else branch (Mattias Ellert in #1055).

    • Two more assignments are protected with Rcpp::Shield (Dirk in #1059).

  • Changes in Rcpp Attributes:

    • Empty strings are no longer passed to R CMD SHLIB, an issue seen with R 4.0.0 on Windows (Kevin in #1062 fixing #1061).
  • Changes in Rcpp Deployment:

    • Travis CI unit tests now run a matrix over the versions of R also tested at CRAN (rel/dev/oldrel/oldoldrel), and coverage runs in parallel for a net speed-up (Dirk in #1056 and #1057).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2356 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Antoine Beaupré: Mumble dreams

9 April, 2020 - 23:08

With everyone switching to remote tools for social distancing, I've been using Mumble more and more. That's partly by choice -- I don't like videoconferencing much, frankly -- and partly by necessity: sometimes my web browser fails and Mumble is generally more reliable.

Some friend on a mailing list recently asked "shouldn't we make Mumble better?" and opened the door for me to go on a long "can I get a pony?" email. Because I doubt anyone on that mailing list has the time or capacity to actually fix those issues, I figured I would copy this to a broader audience in the hope that someone else would pick it up.

Why Mumble rocks

Before I go on with the UI critique, I should show why I care: Mumble is awesome.

When you do manage to configure it correctly, Mumble just works; it's highly reliable. It uses little CPU, both on the client and the server side, and can have rooms with tens if not hundreds of participants. The server can be easily installed and configured: there's a Debian package and resource requirements are minimal. It's basically network-bound. There are at least three server implementations, the official one called Murmur, the minimalist umurmur and Grumble, a Go rewrite.

It has great quality: echo canceling, when correctly configured, is solid and latency is minimal. It has "overlays" so you can use it while gaming or demo'ing in full screen while still having an idea of who's talking. It also supports positional audio for gaming that integrates with popular games like Counterstrike or Half-Life.

It's moderately secure: it doesn't support end-to-end encryption, but client/server communication is encrypted with TLS. It supports a server password and some moderation mechanisms.

UI improvements

Mumble should be smarter about a bunch of things. Having all those settings is nice for geeky control freaks, but it makes the configuration absolutely unusable for most people. Hide most settings by default, and make better defaults.

Specifically, those should be on by default:

  • RNNoise
  • echo cancellation (the proper "monitor" channels)
  • pre-configured shortcut for PTT (Push To Talk) -- right-shift is my favorite
  • "double-PTT" to hold it enabled
  • be more silent by default (I understand why it would want to do voice synthesis, but it would need to be much better at it before it's default)

The echo test should be more accessible, one or two clicks away from the main UI. I have only found out about that feature when someone told me where to find it. This basically means to take it out of the settings page and into its own dialog.

The basic UI should be much simpler. It could look something like Jitsi: just one giant mute button with a list of speakers. Basically:

  1. Take that status bar and make it use the entire space of the main window

  2. Push the chat and room list to separate, optional dialogs (e.g. the room list could be a popup on login, but we don't need to continuously see the damn thing)

  3. Show the name of the person talking in the main UI, along with other speakers (Big Blue Button does this well: just a label that fades away with time after a person talks)

Some features could be better explained. For example, the "overlay" feature makes no sense at all for most users. It only makes sense when you're a gamer and use Mumble alongside another full-screen program, to show you who's talking.

Improved authentication. The current authentication systems in Mumble are somewhat limited: the server can have a shared password to get access to it, and from there it's pretty much a free-for-all. There are client certificates, but those are hard to understand and the most common usage scenario is that someone manages to configure them once, forgets about it and then cannot log in again with the same username.

It should be easier to get the audio right. Now, to be fair, this is hard to do in any setup, and Mumble is only a part of it. There are way too many moving parts in Linux for this to be easy: between your hardware, ALSA drivers, Pulseaudio mixers and Mumble, too many things can go wrong. So this is a general problem with multimedia on Linux, but Mumble is especially hard to configure in there.

Improved speaker stats. When you right-click on a user in Mumble, you get detailed stats about the user: packet loss, latency, bandwidth, codecs... It's pretty neat. But that is hard to parse for a user. Jitsi, in contrast, shows a neat little "bar graph" (similar to what you get on a cell phone) with a color code to show network conditions for that user. Then you can drill down to show more information. Having that info would be really useful to figure out which user is causing that echo or latency. Heck, while I'm dreaming, we could do the same thing as Jitsi and tell the user when we detect too much noise on their side and suggest muting!

There are probably more UI issues, but at that point you have basically rebuilt the entire user interface. This problem is hard to fix because UX people are unlikely to have the skills required to hack at an (old) Qt app, and C++ hackers are unlikely to have the best UX skills...

Missing features

Video. It has been on the roadmap since 2011, so I'm not holding my breath. It is, obviously, the key feature missing from the software when compared to other conferencing tools and it's nice to see they are considering it. Screensharing and whiteboarding would also be a nice addition. Unfortunately, all that is a huge undertaking and it's unlikely to happen in the short term. And even if it does, it's possible hard-core Mumble users would be really upset at the change...

A good web app -- a major blocker to the adoption of Mumble is the need for that complex client app. If users could join just with a web browser, adoption would be much easier. There is a web app called mumble-web out there, but it seems to work only for listening as there are numerous problems with recording: quality issues, audio glitches, voice activation... The CCC seems to be using that app to stream talk translation, so that part supposedly works correctly.

Dial-in -- allow plain old telephones to call into conferences. There seems to be a program called mumsi that can do this, but it's unmaintained and it's unclear if any of the forks work at all.


Now the above will probably not happen soon. Unfortunately, Mumble has had trouble with their release process recently. It took them a long time to even agree on releasing 1.3, and when they did agree, it took them a long time again to actually do the release. There has been much more activity on the Mumble client and web app recently, so hopefully I will be proven wrong. The 1.3.1 release actually came out recently which is encouraging.

All in all, Mumble has some deeply ingrained UI limitations. It's built like an app from the 1990s, all the way down to the menu system and "status bar" buttons. It's definitely not intuitive for a new user and while there's an audio wizard that can help you get started, it doesn't always work and can be confusing in itself.

I understand that I'm just this guy saying "please make this for me ktxbye". I'm not writing this as a criticism of Mumble: I love the little guy, the underdog. Mumble has been around forever and it kicks ass. I'm writing this in a spirit of solidarity, in the hope the feedback can be useful and to provide useful guidelines on how things could be improved. I wish I had the time to do this myself and actually help the project beyond just writing, but unfortunately the reality is that I'm a poor UI designer and have little time to contribute to more software projects.

So hopefully someone could take those ideas and make Mumble even greater. And if not, we'll just have to live with it.

Thanks to all the Mumble developers who, over all those years, managed to make and maintain such an awesome product. You rock!

Elana Hashman: Repack Zoom .debs to remove the `ibus` dependency

9 April, 2020 - 11:00

For whatever reason, Zoom distributes .debs that have a dependency on ibus. ibus is the "intelligent input bus" package and, as far as I'm aware, might be used for emoji input in chat or something?? But it is otherwise not actually a dependency of the Zoom package. I've tested this extensively... the client works fine without it.

I noticed when I installed ibus along with the Zoom package that ibus would frequently eat an entire core of CPU. I'm sure this is a bug in the ibus package or service, but I have no energy to try to get that fixed. If it's not a hard dependency, Zoom shouldn't depend on it in the first place.

Anyways, here's how you can repack a Zoom .deb to remove the ibus dependency:

scratch=$(mktemp -d)

# Extract package contents
dpkg -x zoom_amd64.deb "$scratch"

# Extract package control information
dpkg -e zoom_amd64.deb "$scratch/DEBIAN"

# Remove the ibus dependency
sed -i -E 's/(ibus, |, ibus)//' "$scratch/DEBIAN/control"

# Rebuild the .deb
dpkg -b "$scratch" patched_zoom_amd64.deb

Now you can install the patched .deb with

dpkg -i patched_zoom_amd64.deb
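To see exactly what that sed expression does, here is a quick sketch run against a made-up Depends line (the package names are illustrative; the real line lives in $scratch/DEBIAN/control):

```shell
# An illustrative Depends line, not Zoom's actual control file
line='Depends: libglib2.0-0, libxcb-shape0, ibus, libpulse0'

# The alternation handles ibus at the head of the list ("ibus, ")
# or anywhere after it (", ibus"), keeping the comma separation intact
echo "$line" | sed -E 's/(ibus, |, ibus)//'
# Depends: libglib2.0-0, libxcb-shape0, libpulse0
```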

The upstream fix would be for Zoom to move ibus from "Depends" to "Recommends", but they have been unwilling to do this for over a year.

But wait, what version even is my package?

By the way, you may have also noticed that the Zoom client downloads do not conform to the standard Debian package naming scheme (i.e. including the version in the filename). If you're not sure what version a zoom_amd64.deb package you've downloaded is, you can quickly extract that information with dpkg-deb:

dpkg-deb -I zoom_amd64.deb | grep Version
# Version: 3.5.383291.0407

Louis-Philippe Véronneau: Using Jitsi Meet with Puppet for self-hosted video conferencing

9 April, 2020 - 03:45

Here's a blog post I wrote for the blog. Many thanks to Ben Ford and all their team!

With everything that is currently happening around the world, many of us IT folks have had to solve complex problems in a very short amount of time. Pretty quickly at work, I was tasked with finding a way to make virtual meetings easy, private and secure.

Whereas many would have turned to a SaaS offering, we decided to use Jitsi Meet, a modern and fully on-premise FOSS videoconferencing solution. Jitsi works on all platforms by running in a browser and comes with nifty Android and iOS applications.

We've been using our instance quite a bit, and so far everyone from technical to non-technical users have been pretty happy with it.

Jitsi Meet is powered by WebRTC and can be broken into multiple parts across multiple machines if needed. In addition to the webserver running the Jitsi Meet JavaScript code, the base configuration uses the Videobridge to manage users' video feeds, Jicofo as a conference focus to manage media sessions and the Prosody XMPP server to tie it all together.

Here's a network diagram I took from their documentation to show how those applications interact:

Getting started with the Jitsi Puppet module

First of all, you'll need a valid domain name and a server with decent bandwidth. Jitsi has published a performance evaluation of the Videobridge to help you spec your instance appropriately. You will also need to open TCP ports 443, 4443 and UDP port 10000 in your firewall. The puppetlabs/firewall module could come in handy here.
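With puppetlabs/firewall, that could look something like the following sketch (the rule names and priorities are arbitrary choices of mine, not something the jitsimeet module mandates):

```puppet
firewall { '100 allow jitsi https':
  dport  => 443,
  proto  => 'tcp',
  action => 'accept',
}

firewall { '101 allow jitsi tcp fallback':
  dport  => 4443,
  proto  => 'tcp',
  action => 'accept',
}

firewall { '102 allow jitsi media':
  dport  => 10000,
  proto  => 'udp',
  action => 'accept',
}
```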

Once that is done, you can use the smash/jitsimeet Puppet module on a Debian 10 (Buster) server to spin up an instance. A basic configuration would look like this:

  class { 'jitsimeet':
    fqdn                 => '',
    repo_key             => puppet:///files/apt/jitsimeet.gpg,
    manage_certs         => true,
    jitsi_vhost_ssl_key  => '/etc/letsencrypt/live/',
    jitsi_vhost_ssl_cert => '/etc/letsencrypt/live/',
    auth_vhost_ssl_key   => '/etc/letsencrypt/live/',
    auth_vhost_ssl_cert  => '/etc/letsencrypt/live/',
    jvb_secret           => 'mysupersecretstring',
    focus_secret         => 'anothersupersecretstring',
    focus_user_password  => 'yetanothersecret',
    meet_custom_options  => {
      'enableWelcomePage'         => true,
      'disableThirdPartyRequests' => true,
    },
  }

The jitsimeet module is still pretty young: it clearly isn't perfect and some external help would be very appreciated. If you have some time, here are a few things that would be nice to work on:

  • Tests using puppet-rspec
  • Support for other OSes (only Debian 10 at the moment)
  • Integration with the Apache and Nginx modules

If you use this module to manage your Jitsi Meet instance, please send patches and bug reports our way!


David Bremner: Tangling multiple files

8 April, 2020 - 23:35

I have lately been using org-mode literate programming to generate example code and beamer slides from the same source. I hit a wall trying to re-use functions in multiple files, so I came up with the following hack. Thanks 'ngz' on #emacs and Charles Berry on the org-mode list for suggestions and discussion.

(defun db-extract-tangle-includes ()
  (goto-char (point-min))
  (let ((case-fold-search t)
        (retval nil))
    (while (re-search-forward "^#[+]TANGLE_INCLUDE:" nil t)
      (let ((element (org-element-at-point)))
        (when (eq (org-element-type element) 'keyword)
          (push (org-element-property :value element) retval))))
    retval))

(defun db-ob-tangle-hook ()
  (let ((includes (db-extract-tangle-includes)))
    (mapc #'org-babel-lob-ingest includes)))

(add-hook 'org-babel-pre-tangle-hook #'db-ob-tangle-hook t)

Usage involves something like the following in your org-file.

#+TITLE: GC V: Mark & Sweep with free list
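The keyword the tangle hook scans for is #+TANGLE_INCLUDE; each occurrence names an org file whose code blocks get ingested into the library of babel before tangling. The file name below is a made-up placeholder:

```org
#+TANGLE_INCLUDE: lib.org
```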

For batch export with make, I do something like

    emacs --batch --quick  -l org  -l ${HOME}/.emacs.d/org-settings.el --eval "(org-babel-tangle-file \"$<\")"
    touch $@
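Since $< and $@ only have meaning inside a make rule, the recipe above presumably sits in something like the following pattern rule (the stamp-file target naming is my guess, not the author's actual Makefile):

```make
%.stamp: %.org
	emacs --batch --quick -l org -l ${HOME}/.emacs.d/org-settings.el \
	  --eval "(org-babel-tangle-file \"$<\")"
	touch $@
```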

Shirish Agarwal: GMRT 2020 and lots of stories

8 April, 2020 - 05:22

First of all, congratulations to all those who got us the 2022 DebConf, so we will finally have a DebConf in India. There is, of course, a lot of work to be done between now and then. For those who would be looking forward to visiting India and especially Kochi, I would suggest you listen to this enriching tale –

I am sorry I used a YouTube link, but it is too good a podcast not to be shared. Those who don’t want YouTube can use the link for the same as shared below.

I am sure there are a lot more details, questions, answers etc. but I would direct them gently to Praveen, Shruti, Balasankar and the rest who are from Kochi to answer if you have any questions about that history.

National Science Day, GMRT 2020

First, as always, we are and were grateful to both NCRA as well as GMRT for taking such good care of us. Even though Akshat was not around, probably getting engaged, a few of us were there: about 6-7 from Mozilla Nasik while the rest represented the FOSS community. Here is a small picture which commemorates the event –

National Science Day, GMRT 2020

There is and was a lot to share about the event. For e.g. Akshay had brought an RPi Zero as well as an RPi 2 (Raspberry Pis) and showed some things. He had also brought a Debian stable live drive with persistence, although the glare from the sun was so strong that we couldn’t show it clearly to the students. This was also the case with the RPis, but still we shared what and how much we could. Maybe next year we either ask them to have double screens or give us a dark room so we can showcase things much better. We did try playing with the contrast and all, but it didn’t have much of an effect. Of course, in another stall a few students had used RPis as part of their projects, so at times we did tell some of the newbies to go to those stalls to see and ask about those projects so they would have a much wider experience of things. The Mozilla people were pushing VR as well as Mozilla lite, the browser for mobile.

We also gossiped quite a bit. I shared about indicatelts, a third-party certificate extension, although I dunno if I should file a WNPP about it or not. We didn’t have a good experience before: I had put in an RFP (Request for Package), which was accepted, for an extension with similar functionality, which we later came to know was calling home and sharing both the URLs of the sites people were visiting and the IP address they were using it from. Sadly, that didn’t leave a good taste in the mouth.

Delhi Riots

One thing I have been disappointed with is the lack of general awareness about things, especially among the youth. We have people who didn’t know that, for e.g., in the Delhi riots which happened recently, law and order (the Police) lies with the Home Minister of India, Amit Shah. This is perhaps the only capital in the world which has its own Chief Minister but doesn’t have any say on its law and order. And this has been the case for the last 70 years, i.e. since independence. The closest parallel I know of so far is the UK, but they too changed their tune in 2012. India, and especially Delhi, seems to be in a time-capsule which, while being dysfunctional, somehow is made to work. In many ways, it’s a body split into three personalities, which often makes governance a messy issue, but that probably is a topic for another day. In fact, Scroll had written a beautiful editorial noting that full statehood for Delhi was not only Arvind Kejriwal’s (AAP) call but also something that both the BJP as well as Congress had asked for in the past. In fact, nothing about the policing is in AAP’s power. All salaries, postings and transfers of police personnel are done by the Home Ministry, so if any blame has to be given, it has to be given to the Home Ministry.

American Capitalism and Ventilators

America has had a history of high-cost healthcare, as can be seen in this edition of USA Today from 2017. The Affordable Care Act was signed into law by President Obama in 2010, which Mr. Trump curtailed when he came into power a couple of years back. An estimated 80,000 people died due to seasonal flu in 2018-19. Similarly, anywhere between 24,000 and 63,000 are supposed to have died from last October to February-March this year. So the richest country can’t take care of its population, which is a third of the population of this country, while at the same time the United States has thrice the area that India has. I am sharing this as seasonal flu also strikes the elderly as well as young children more than adults. So in one sense the vulnerable groups overlap, although from some of the recent stats, for Covid-19 even those who are 20+ are also vulnerable, but that’s another story altogether.

If you see the CDC graph of the seasonal flu, it is clear that American health experts knew about it. Another common factor which joins both the seasonal flu and Covid is that both need ventilators for the most serious cases. So, in 2007 it was decided that the number of ventilators needed to be ramped up; they had approximately 62k ventilators at that point in time all over the U.S. The U.S. in 2010 asked for bids and got one from a small Californian company called Newport Medical Instruments. The price of ventilators was approximately INR 700,000 at 2010 prices, while Newport said they would be able to mass-produce them at INR 200,000 at 2010 prices. The company got the order and started designing the model, which needed to be certified by the FDA. By 2011, they had the product ready, when a big company called Covidien bought Newport Medical and shut down the project. This was shared in a press release in 2012. The whole story was broken by the New York Times again, just a few days ago, which highlighted how America’s capitalism rode roughshod over public health and put people’s lives unnecessarily in jeopardy. If those new-age ventilators had become a reality, then not just the U.S. but India and many other countries would have bought them, as every country has the same or similar needs but is unable to pay the high cost, which in many cases would be passed on to their citizens either as the price of service, or by raising taxes, or a mixture of both, with the public being none the wiser. Due to the dearth of ventilators, of specialized people to operate them, and of space, there is a possibility that many countries including India may have to make tough choices like the Italian doctors had to make as to whom to give a ventilator to, and bear the mental and emotional guilt associated with the choices made.

Some science coverage about diseases in wire and other publications

Since the Covid coverage broke out, The Wire has been bringing various reports of India’s handling of various epidemics and mysteries, some solved, some still remaining unsolved due to lack of interest or funding or both. The Nipah virus has been amply discussed in the movie Virus (2019), which I shared in the last blog post, and how easily Kerala could have been similar to Italy. Thankfully, only 24 people including a nurse succumbed to that outbreak, as shared in the movie. I had shared about Kerala nurses’ professionalism when I was in hospital a couple of years back. It’s no wonder that their understanding of hygiene and nursing procedures is a cut above the rest, hence they are sought after not just in India but world-over, including the US, the UK and the Middle East. Another study, on respiratory illness, was brought to my attention by my friend Pavithran.

Possibility of extended lockdown in India

There was talk in the media of an extended lockdown, or, better put, an environment is being created so that an extended lockdown can be done. This is probably in part due to a mathematical model and its derivatives shared about a week back by two Indian-origin Cambridge scholars, who predict that a minimum 49-day lockdown may be necessary to flatten the Covid curve.

Predictions of the outcome of the current 21-day lockdown (Source: Rajesh Singh, R. Adhikari, Cambridge University)

Alternative lockdown strategies suggested by the Cambridge model (Source: Rajesh Singh, R. Adhikari, Cambridge University)

India caving to US pressure on Hydroxychloroquine

While there has been a lot of speculation in the U.S. about Hydroxychloroquine as the wonder cure, last night Mr. Trump, in a response to a reporter, threatened that there may be retaliation against India if Mr. Modi were to say no to sharing Hydroxychloroquine.

As shared before, if YouTube is not your cup of tea you can see the same on

Now, there have been several instances in the past of the U.S. trying to bully India, going all the way back to 1954. In recent memory, there were sanctions on India by the US under the Atal Bihari Vajpayee Government (BJP) in 1998, but he didn’t buckle under the pressure; now we see our current PM taking down our own notification from a day ago and sharing not just Hydroxychloroquine but also Paracetamol with other countries, so it would look as if India is sharing with other countries. Keep in mind that India and Brazil haven’t seen eye to eye on trade agreements of late, and Paracetamol prices have risen in India. The price rise has been because the APIs (Active Pharmaceutical Ingredients) for the same come from China, where the supply chain will take time to be fixed, and we would also have to open up, although should we or should we not is another question altogether. I talk about supply chains as lean supply chains have been the talk since the late '90s, when the Japanese introduced just-in-time manufacturing, which led to lean supply chains as well as a lot of outsourcing as a consequence. Of course, the companies saved money, but at the cost of flexibility; how this model was perhaps flawed was shared in a series of articles in the Economist as early as 2004, when there were a lot of shocks to that model, and this has only been exacerbated since then. There have been frequent shocks to these fragile ecosystems, more so since the 2008 financial meltdown, and this would put more companies out of business than ever before.

The MSME sector in India had already been severely impacted, first by demonetization and then by the horrendous implementation of GST, whose cries can be heard from all sectors. The frequent changing of GST rates has also made markets jumpy and investors unsure. Judgements such as retrospective taxes, AGR (Adjusted Gross Revenue) dues etc. scared not only international investors but also domestic ones. The flight of capital has been noticeable. I had shared this before, when the Indian Government published the LRS report, which it hasn’t done since. In fact, Outlook Business had an interesting article about it, where incidentally it talked about LocalCircles, a community networking platform where you get to know of a lot of things and of which I am also a member.

At the very end, I apologize for not sharing this blog post earlier, but I was feeling down; then again, I’m not the only one.

Reproducible Builds: Reproducible Builds in March 2020

7 April, 2020 - 16:30

Welcome to the March 2020 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month and some plans for the future.

What are reproducible builds?

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security.

However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.


The report from our recent summit in Marrakesh was published and is now available in both PDF and HTML formats. A sincere thank you to all of the Reproducible Builds community for their input to the event, and to Aspiration for preparing and collating this report.

Hartmut Schorrig published a detailed document on how to compile Java applications in such a way that the .jar build artefact is reproducible across builds. A practical and hands-on guide, it details how to avoid unnecessary differences between builds by explicitly declaring an encoding (as the default value differs across Linux and MS Windows systems), ensuring that the generated .jar — a variant of a .zip archive — does not embed any nondeterministic filesystem metadata, and so on.
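The archive-metadata point is easy to demonstrate outside of Java too. Here is a minimal sketch using tar (the same principle applies to the entries of a .zip or .jar): archiving identical content with different mtimes yields different bytes, and pinning the mtime restores bit-identical output:

```shell
mkdir -p demo && echo hello > demo/a.txt

# Same content, two different mtimes: the archives differ
touch -t 202001010000 demo/a.txt
tar -cf one.tar -C demo a.txt
touch -t 202001020000 demo/a.txt
tar -cf two.tar -C demo a.txt
cmp -s one.tar two.tar && echo identical || echo different

# Pin the mtime and rebuild: bit-identical with the first archive
touch -t 202001010000 demo/a.txt
tar -cf three.tar -C demo a.txt
cmp -s one.tar three.tar && echo identical || echo different
```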

Janneke gave a quick presentation on GNU Mes and reproducible builds during the lightning talk session at LibrePlanet 2020.

Vagrant Cascadian presented There and Back Again, Reproducibly! video at SCaLE 18x in Pasadena in California which generated some attention on Twitter.

Hervé Boutemy mentioned on our mailing list in a thread titled Rebuilding and checking Reproducible Builds from Maven Central repository that since the update of a central build script (the “parent POM”) every Apache project using the Maven build system should build reproducibly. A follow-up discussion regarding how to perform such rebuilds was also started on the Apache mailing list.

The Telegram instant-messaging platform announced that they had updated their iOS and Android OS applications and claim that they are reproducible according to their full instructions, verifying that its original source code is exactly the same code that is used to build the versions available on the Apple App Store and Google Play distribution platforms respectively.

Hervé Boutemy also reported about a new project called reproducible-central which aims to allow anyone to rebuild a component from the Maven Central Repository that is expected to be reproducible and check that the result is as expected.

In last month’s report we detailed Omar Navarro Leija’s work in and around an academic paper titled Reproducible Containers, which describes in detail the workings of a user-space container tool called dettrace (PDF). Since then, the PhD student from the University of Pennsylvania presented on this tool at the ASPLOS 2020 conference in Lausanne, Switzerland. Furthermore, there were contributions to dettrace from the Reproducible Builds community itself. [][]

Distribution work

openSUSE

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:


Chris Lamb further refined his merge request for the debian-installer component to allow all arguments from sources.list files (such as “[check-valid-until=no]”) in order that we can test the reproducibility of the installer images on the Reproducible Builds project’s own testing infrastructure. (#13)

Holger Levsen filed a number of bug reports against the debrebuild tool that attempts to rebuild a Debian package given a .buildinfo file as input, including:

48 reviews of Debian packages were added, 17 were updated and 34 were removed this month adding to our knowledge about identified issues. Many issue types were noticed, categorised and updated by Chris Lamb, including:

Finally, Holger opened a bug report against the software running, a service for Debian Developers to follow the evolution of packages via web and email interfaces, to request that they integrate information from (#955434), and Chris Lamb kept up to date. []

Software development

diffoscope

Chris Lamb made the following changes to diffoscope, the Reproducible Builds project’s in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading version 138 to Debian:

  • Improvements:

    • Don’t allow errors with “R” script deserialisation to cause the entire operation to fail, for example if an external library cannot be loaded. (#91)
    • Experiment with memoising output from expensive external commands, e.g. readelf. (#93)
    • Use dumppdf from the python3-pdfminer package if we do not see any other differences from pdftotext, etc. (#92)
    • Prevent a traceback when comparing two R .rdx files directly as the get_member method will return a file even if the file is missing. []
  • Reporting:

    • Display the supported file formats in the package long description. (#90)
    • Print a potentially-helpful message if the PyPDF2 module is not installed. []
    • Remove any duplicate comparator descriptions when formatting in the --help output or in the package long description. []
    • Weaken “Install the X package to get a better output” message to “… may produce a better output” as the former is not actually guaranteed. []
  • Misc:

    • Ensure we only parse the recommended packages from --list-debian-substvars when we want them for debian/tests/control generation. []
    • Add upstream metadata file [] and add a Lintian override for upstream-metadata-in-native-source as “we” are upstream. []
    • Inline the RequiredToolNotFound.get_package method’s functionality as it is only used once. []
    • Drop the deprecated “py36 = [..]” argument in the pyproject.toml file. []

In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 138 [], as well as updating reprotest, our end-user tool that builds the same source code twice in widely-differing environments and then checks the binaries produced by each build for any differences, to version 0.7.14 [].

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month we wrote a large number of such patches, including:

Project documentation

There was further work performed on our documentation and website this month, including Alex Wilson adding a section regarding the use of Gradle for reproducible builds in JVM projects [] and Holger Levsen adding the report from our recent summit in Marrakesh [][].

In addition, Chris Lamb made a number of changes, including correcting the syntax of some CSS class formatting [], improving some “filed against” copy [] and correcting a reference to the calendar.monthrange Python method in a utility function. []

Testing framework

We operate a large and many-featured Jenkins-based testing framework that, amongst many other tasks, tracks the status of our reproducibility efforts and identifies any regressions that have been introduced.

This month, Chris Lamb reworked the web-based package rescheduling tool to:

  • Require an HTTP POST method in the web-based scheduler: not only should HTTP GET requests be idempotent, but this will also allow many future improvements in the user interface. [][][]
  • Improve the authentication error message in said rescheduler to suggest that the developer’s SSL certificate may have expired. []

In addition, Holger Levsen made the following changes:

  • Add a new ath79 subtarget for the OpenWrt distribution.
  • Revisit ordering of Debian suites; sort the experimental distribution last and reverse the ordering of suites to prioritise the suites in development. [][][]
  • Schedule Debian buster and bullseye a little less in order to allow unstable to catch up on the i386 architecture. [][]
  • Various cosmetic changes to the web-based scheduler. [][][][]
  • Improve wordings in the node health maintenance output. []

Lastly, Vagrant Cascadian updated a link to the (formerly) weekly news to our reports page [] and kpcyrd fixed the escaping in an Alpine Linux inline patch []. The usual build nodes maintenance was performed by Holger Levsen [][], Mattia Rizzolo [] and Vagrant Cascadian [][].

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Jonathan Dowland: Morphite

7 April, 2020 - 16:13

Further Switch game recommendations…

Morphite is a first-person space exploration game, with a very distinctive aesthetic, which reminds me a little bit of No Man's Sky. This is a fairly child-friendly game of exploration and discovery. It also reminds me a little bit of Frontier: First Encounters, the second sequel to Elite.

It's currently discounted in the Nintendo Switch eShop by 83%, an all-time low price of £2.29, until 5th May. I've barely scratched the surface of this one, so I don't know how deep the game will go, but it looks promising. Certainly worth sacrificing one Flat White for.

Gunnar Wolf: For real…

7 April, 2020 - 14:00

Our good friend, Octavio Méndez «Octagesimal», passed away due to complications derived from COVID-19.

Long-time free software supporter, very well known for his craft –and for his teaching– with Blender. Great systems administrator. 45 years old, father of two small girls, husband of our dear friend Claudia.

We are all broken. We will miss you.

For real, those that can still do it: Stay safe. Stay home.

Steve Kemp: A busy few days

7 April, 2020 - 12:30

Over the past few weeks things have been pretty hectic. Since I'm not working at the moment I'm mostly doing childcare instead. I need a break now and again, so I've been sending our child to päiväkoti (daycare) two days a week, with him home the rest of the time.

I love taking care of the child, because he's seriously awesome, but it's a hell of a lot of work when most of our usual escapes are unavailable. For example we can't go to the (awesome) Helsinki Central Library as that is closed.

Instead of doing things outdoors we've been baking bread together, painting, listening to music and similar. He's a big fan of any music with drums and shouting, so we've been listening to Rammstein, The Prodigy, and as much Queen as I can slip in without him complaining ("more bang bang!").

I've also signed up for some courses at the Helsinki open university, including Devops with Docker so perhaps I have a future career working with computers? I'm hazy.

Finally I saw a fun post the other day on reddit asking about the creation of a DSL for server-setup. I wrote a reply which basically said two things:

  • First of all you need to define the minimum set of primitives you can execute.
    • (Creating a file, fetching a package, reloading services when a configuration file changes, etc.)
  • Then you need to define a syntax for expressing those rules.
    • Not using YAML. Because Ansible fucked up bigtime with that.
    • It needs to be easy to explain, it needs to be consistent, and you need to decide before you begin if you want "toy syntax" or "programming syntax".
    • Because adding on conditionals, loops, and similar, will ruin everything if you add it once you've started with the wrong syntax. Again, see Ansible.

Anyway I had an idea of just expressing things in a simple fashion, borrowing Puppet syntax (which I guess is just Ruby hash literals). So a module to do stuff with files would just look like this:

file { name   => "This is my rule",
       target => "/tmp/blah",
       ensure => "absent" }

The next thing to do is to allow that to notify another rule, when it results in a change. So you add in:

notify => "Name of rule"

# or
notify => [ "Name of rule", "Name of another rule" ]

You could also express dependencies the other way round:

shell { name => "Do stuff",
        command => "wc -l /etc/passwd > /tmp/foo",
        requires => [ "Rule 1", "Rule 2"] }

Anyway the end result is a simple syntax which allows you to do things; I wrote a file to allow me to take a clean system and configure it to run a simple golang application in an hour or so.

The downside? Well the obvious one is that there's no support for setting up cron jobs, setting up docker images, MySQL usernames/passwords, etc. Just a core set of primitives.

Adding new things is easy, but also an endless job. So I added the ability to run external/binary plugins stored outside the project. Supporting that is simple with the syntax we have:

  • We pass the parameters, as JSON, to STDIN of the binary.
  • We read the result from STDOUT
    • Did the rule result in a change to the system?
    • Or was it a NOP?
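Such a plugin protocol is easy to satisfy from any language. A hypothetical plugin in Python (the parameter and result field names here are invented for illustration, not marionette's actual interface) that ensures a file has given content might look like:

```python
#!/usr/bin/env python3
"""Hypothetical marionette-style plugin: ensure a file has given content.

Rule parameters arrive as JSON on STDIN; the plugin writes a JSON result
on STDOUT saying whether it changed the system or was a NOP."""
import json
import os
import sys

def run(params):
    target = params["target"]
    content = params.get("content", "")
    # NOP if the file already has exactly the desired content.
    if os.path.exists(target):
        with open(target) as fh:
            if fh.read() == content:
                return {"changed": False}
    with open(target, "w") as fh:
        fh.write(content)
    return {"changed": True}

def main():
    # A real plugin would invoke this under `if __name__ == "__main__":`,
    # e.g.  echo '{"target": "/tmp/motd", "content": "hi"}' | ./plugin
    json.dump(run(json.load(sys.stdin)), sys.stdout)
```

The first invocation reports a change; re-running with the same parameters reports a NOP, which is exactly what the core needs to decide whether to fire any notify rules.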

All good. People can write modules, if they like, and they can do that in any language they like.

Fun times.

We'll call it marionette since it's all puppet-inspired.

And that concludes this irregular update.

Norbert Preining: QOwnNotes for Debian (update)

7 April, 2020 - 11:19

Some time ago I posted about QOwnNotes for Debian. My recent experience with the openSUSE Build System has convinced me to move also the QOwnNotes packages there, which allows me to provide builds for Debian/Buster, Debian/testing, and Debian/sid, all for both i386 and amd64 architectures.

To repeat a bit about QOwnNotes: it is a cross-platform plain text and markdown note taking application. By itself it wouldn’t be something to talk about; we have vim and emacs and everything in between. But QOwnNotes integrates nicely with the Notes application from NextCloud and OwnCloud, as well as providing useful NextCloud integration like old versions of notes, access to deleted files, watching changes, etc.

The new locations for binary packages for both amd64 and i386 architectures are as follows below. To make these repositories work out of the box, you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.


deb  ./


deb  ./


deb  ./

The source can be obtained from either the git repository or the OBS project debian-qownnotes.


Joachim Breitner: A Telegram bot in Haskell on Amazon Lambda

7 April, 2020 - 03:40

I just had a weekend full of very successful serious geekery. On a whim I thought: “Wouldn't it be nice if people could interact with my game Kaleidogen also via a telegram bot?” This led me to learn how to write a Telegram bot in Haskell and how to deploy such a Haskell program to Amazon Lambda. In particular the latter bit might be interesting to some of my readers, so here is how I went about it.


Kaleidogen is a little contemplative game (or toy) where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository. BTW, I am looking for help turning it into an Android app!

KaleidogenBot in action

Amazon Lambda

Amazon Lambda is the “Function as a service” offering of Amazon Web Services. The idea is that you don’t rent a server, where you have to deal with managing the whole system and that you are paying for constantly, but you just upload the code that responds to outside requests, and AWS takes care of the rest: Starting and stopping instances, providing a secure base system etc. When nobody is using the service, no cost occurs.

This sounds ideal for hosting a toy Telegram bot: Most of the time nobody will be using it, and I really don't want to have to babysit yet another service on my server. On Amazon Lambda, I can probably just forget about it.

But Haskell is not one of the officially supported languages on Amazon Lambda. So to run Haskell on Lambda, one has to solve two problems:

  • how to invoke the Haskell code on the server, and
  • how to build Haskell so that it runs on the Amazon Linux distribution
A Haskell runtime for Lambda

For the first we need a custom runtime. While this sounds complicated, it is actually a pretty simple concept: A runtime is an executable called bootstrap that queries the Lambda Runtime Interface for the next request to handle. The Lambda documentation is phrased as if this runtime has to be a dispatcher that calls the separate function’s handler. But it could just do all the things directly.
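Conceptually, the loop such a bootstrap runs is tiny. A sketch in Python (the endpoint paths follow AWS's documented Runtime Interface; the run_loop helper with injectable arguments is my own construction for illustration, not part of any library):

```python
import json
import os
import urllib.request

# Runtime Interface endpoints, as documented for AWS Lambda custom runtimes.
NEXT_URL = "http://{api}/2018-06-01/runtime/invocation/next"
RESPONSE_URL = "http://{api}/2018-06-01/runtime/invocation/{rid}/response"

def run_loop(handler, get_next, send_response, max_iters=None):
    """The core bootstrap loop: fetch the next event, handle it, post
    the result.  get_next/send_response are injected so the loop can be
    exercised without a live Runtime Interface."""
    done = 0
    while max_iters is None or done < max_iters:
        request_id, event = get_next()
        send_response(request_id, handler(event))
        done += 1

def http_get_next():
    """Long-poll the Runtime Interface for the next invocation."""
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    resp = urllib.request.urlopen(NEXT_URL.format(api=api))
    return resp.headers["Lambda-Runtime-Aws-Request-Id"], json.load(resp)

def http_send_response(request_id, result):
    """Report the handler's result for this invocation."""
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    req = urllib.request.Request(
        RESPONSE_URL.format(api=api, rid=request_id),
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# A real bootstrap would simply run:
#   run_loop(my_handler, http_get_next, http_send_response)
```

That is the whole contract: no framework is strictly required, just an executable that polls one endpoint and posts to another.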

I found the Haskell package aws-lambda-haskell-runtime which provides precisely that: A function

runLambda :: (LambdaOptions -> IO (Either String LambdaResult)) -> IO ()

that talks to the Lambda Runtime API and invokes its argument on each message. The package also provides Template Haskell magic to collect “handlers” of any JSON-able type and generate a dispatcher, like you might expect from other, more dynamic languages. But that was too much magic for me, so I ignored that and just wrote the handler manually:

main :: IO ()
main = runLambda (run tc)
  where
    run :: TC -> LambdaOptions -> IO (Either String LambdaResult)
    run tc opts = do
      result <- handler tc (decodeObj (eventObject opts)) (decodeObj (contextObject opts))
      either (pure . Left . encodeObj) (pure . Right . LambdaResult . encodeObj) result

data Event = Event
  { path :: T.Text
  , body :: Maybe T.Text
  } deriving (Generic, FromJSON)

data Response = Response
  { statusCode :: Int
  , headers :: Value
  , body :: T.Text
  , isBase64Encoded :: Bool
  } deriving (Generic, ToJSON)

handler :: TC -> Event -> Context -> IO (Either String Response)
handler tc Event{body, path} context =

I expose my Lambda function to the world via Amazon’s API Gateway, configured to just proxy the HTTP requests. This means that my code receives a JSON data structure describing the HTTP request (here called Event, listing only the fields I care about), and it will respond with a Response, again as JSON.

The handler can then simply pattern-match on the path to decide what to do. For example, this code handles URLs like /img/CAFFEEFACE.png and responds with an image.

handler :: TC -> Event -> Context -> IO (Either String Response)
handler tc Event{body, path} context
    | Just bytes <- isImgPath path >>= T.decodeHex = do
        let pngData = genPurePNG bytes
        pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("image/png" :: String) ]
            , isBase64Encoded = True
            , body = T.decodeUtf8 $ LBS.toStrict $ Base64.encode pngData
            }

isImgPath :: T.Text -> Maybe T.Text
isImgPath  = T.stripPrefix "/img/" >=> T.stripSuffix ".png"

If this program were to grow, then one should probably use something more structured for routing here; maybe servant, or bridging towards wai apps (almost like wai-lambda, but that still assumes an existing runtime, instead of simply being the runtime). But for my purposes, no extra layers of indirection or abstraction are needed!

Deploying Haskell to Lambda

Building Haskell locally and deploying to different machines is notoriously tricky; you often end up depending on a shared library that is not available on the other platform. The aws-lambda-haskell-runtime package, and similar projects like serverless-haskell, solve this using stack and Docker, two technologies that are probably great, but I never warmed up to them.

So instead of adding layers and complexity, can I solve this by making things simpler? If I compile my bootstrap into a static Linux binary, it should run on any Linux, including Amazon Linux.

Unfortunately, building Haskell programs statically is also notoriously tricky. But it is made much simpler by the work of Niklas Hambüchen and others in the context of the Nix package manager, coordinated in the static-haskell-nix project. The promise here is that once you have set up building your project with Nix, then getting a static version is just one flag away. The support is not completely upstreamed into nixpkgs proper yet, but their repository has a nix file that contains a nixpkgs set with their patches:

let pkgs = (import (sources.nixpkgs-static + "/survey/default.nix") {}).pkgs; in

This, plus a fairly standard nix setup to build the package, yields what I was hoping for:

$ nix-build -A kaleidogen
$ file result/bin/kaleidogen-amazon-lambda
result/bin/kaleidogen-amazon-lambda: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
$ ls -sh result/bin/kaleidogen-amazon-lambda
6,7M result/bin/kaleidogen-amazon-lambda

If we put this file, named bootstrap, into a zip file and upload it to Amazon Lambda, then it just works! Creating the zip file is easily scripted using nix:

  function-zip = pkgs.runCommandNoCC "kaleidogen-lambda" {
    buildInputs = [ ];
  } ''
    mkdir -p $out
    cp ${kaleidogen}/bin/kaleidogen-amazon-lambda bootstrap
    zip $out/ bootstrap
  '';

So to upload this, I use this one-liner (line-wrapped for your convenience):

nix-build -A function-zip &&
aws lambda update-function-code --function-name kaleidogen \
  --zip-file fileb://result/

Thanks to how Nix pins all dependencies, I am fairly confident that I can return to this project in 4 months and still be able to build it.

Of course, I want continuous integration and deployment. So I build the project with GitHub Actions, using a cachix nix cache to significantly speed up the build, and auto-deploy to Lambda using aws-lambda-deploy; see my workflow file for details.

The Telegram part

The above allows me to run basically any stateless service, and a Telegram bot is nothing else: When configured to act as a WebHook, Telegram will send a request with a message to our Lambda function, where we can react on it.

The telegram-api package provides bindings for the Telegram Bot API (although I had to use the repository version, as the version on Hackage has some bitrot). Slightly simplified, I can write a handler for an Update:

handleUpdate :: Update -> TelegramClient ()
handleUpdate Update{ message = Just m } = do
  let c = ChatId (chat_id (chat m))
  liftIO $ printf "message from %s: %s\n" (maybe "?" user_first_name (from m)) (maybe "" T.unpack (text m))
  if "/start" `T.isPrefixOf` fromMaybe "" (text m)
  then do
    rm <- sendMessageM $ sendMessageRequest c "Hi! I am @KaleidogenBot. …"
    return ()
  else do
    m1 <- sendMessageM $ sendMessageRequest c "One moment…"
    withPNGFile  $ \pngFN -> do
      m2 <- uploadPhotoM $ uploadPhotoRequest c
        (FileUpload (Just "image/png") (FileUploadFile pngFN))
      return ()
handleUpdate u =
  liftIO $ putStrLn $ "Unhandled message: " ++ show u

and call this from the handler that I wrote above:

    | path == "/telegram" =
      case eitherDecode (LBS.fromStrict (T.encodeUtf8 (fromMaybe "" body))) of
        Left err -> …
        Right update -> do
          runTelegramClient token manager $ handleUpdate update
          pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("text/plain" :: String) ]
            , isBase64Encoded = False
            , body = "Done"
            }

Note that the Lambda code receives the request as JSON data structure with a body that contains the original HTTP request body. Which, in this case, is itself JSON, so we have to decode that.
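This double decoding is easy to get wrong, so here it is spelled out in Python terms (a toy event; field names mirror the Event type above, not an actual API Gateway payload):

```python
import json

# An API Gateway proxy event: the HTTP body arrives as a *string*
# inside the outer JSON document, so it must be decoded a second time.
raw_event = json.dumps({
    "path": "/telegram",
    "body": json.dumps({"update_id": 1,
                        "message": {"text": "/start"}}),
})

event = json.loads(raw_event)        # first decode: the Lambda event
update = json.loads(event["body"])   # second decode: the Telegram Update

assert event["path"] == "/telegram"
assert update["message"]["text"] == "/start"
```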

All that is left to do is to tell Telegram where this code lives:

curl --request POST \
  --header 'content-type: application/json' \
  --data '{"url": ""}'

As a little add-on, I also created a Telegram game for Kaleidogen. A Telegram game is nothing but a webpage that runs inside Telegram, so it wasn’t much work to wrap the Web version of Kaleidogen that way, but the resulting Telegram game (which you can access via ) still looks pretty neat.

No /dev/dri/renderD128

I am mostly happy with this setup: My game is now available to more people in more ways. I don’t have to maintain any infrastructure. When nobody is using this bot no resources are wasted, and the costs of the service are negligible; this is unlikely to go beyond the free tier, and even if it does, the cost per generated image is roughly USD 0.000021.

There is one slight disappointment, though. What I find most interesting about Kaleidogen from a technical point of view is that when you play it in the browser, the images are not generated by my code. Instead, my code creates a WebGL shader program on the fly, and that program generates the image on your graphics card.

I even managed to make the GL rendering code work headlessly, i.e. from a command line program, using EGL and libgbm and a helper written in C. But it needs access to a graphics card via /dev/dri/renderD128. Amazon does not provide that to Lambda code, and neither do the other big Function-as-a-service providers. So I had to swallow my pride and reimplement the rendering in pure Haskell.

So if you think the bot is kinda slow, then that’s why. Despite properly optimizing the pure implementation (the inner loop does not do allocations and deals only with unboxed Double# values), the GL shader version is still three times as fast. Maybe in a few years GPU access will be so ubiquitous that it’s even on Amazon Lambda; then I can easily use that.

Jonathan Carter: Free Software Activities for 2020-03

7 April, 2020 - 00:00

DPL Campaign 2020

On the 12th of March, I posted my self-nomination for the Debian Project Leader election. This is the second time I’m running for DPL, and you can read my platform here. The campaign period covered the second half of the month, where I answered a bunch of questions on the debian-vote list. The voting period is currently open and ends on 18 April.

Debian Social

This month we finally announced the Debian Social project, a project that hosts a few websites with the goal of improving communication and collaboration within the Debian project, improving visibility of the work that people do, and making it easier for general users to interact with the community and feel part of the project.

Some History

This has been a long time in the making. From my side I’ve been looking at better ways to share/play our huge DebConf video archives for the last 3 years or so. Initially I was considering either some sort of script or small server side app that combined the archives and the metadata into a player, or using something like MediaDrop (which I was using on my website for a while). I ran into a lot of MediaDrop’s limitations early on. It was fine for a very small site but I don’t think it would ever be the right solution for a Debian-wide video hosting platform, and it didn’t seem all that actively maintained either. Wouter went ahead and implemented a web player option for the video archives. His solution is good because it doesn’t rely on any server side software, so it’s easy to mirror and someone who lives on an island could download it and view it offline in that player. It still didn’t solve all our problems though. Popular videos (by either views or likes) weren’t easily discoverable, and the site itself isn’t that easy to discover.

Then PeerTube came along. PeerTube provides a similar type of interface to MediaDrop or YouTube, giving you likes, view counts and comments. But what really set it apart from the other things that we looked at was that it’s a federated service. Not only does it federate with other PeerTube instances, but the protocols it uses mean that it can connect to all kinds of other services that make up an interconnected platform called the Fediverse. This was especially great since independent video sites tend to become lonely islands on the web that end up isolated and forgotten. With PeerTube, video sites can subscribe to similar sites on the Fediverse, which makes videos and other video sites significantly more discoverable and attracts more eyeballs.

At DebConf19 I wanted to ramp up the efforts to make a Debian PeerTube instance a reality. I spoke to many people about this and discovered that some Debianites are already making all kinds of Debian videos in many different languages. Some were even distributing them locally on DVD and have never uploaded them. I thought that the Debian PeerTube instance could not only be a good platform for DebConf videos, but could also be a good home for many free software content creators, especially if they create Debian-specific content. I spoke to Rhonda about it, who’s generally interested in the Fediverse and wanted to host instances of Pleroma (a microblogging service) and PixelFed (a free image hosting service that resembles the Instagram site), but needed a place to host them. We decided to combine efforts, and since a very large number of Fediverse services end with .social in their domain names, we ended up calling this project Debian Social. We’re also hosting some non-fediverse services like a WordPress multisite and a Jitsi instance for video chatting.

Current Status

Currently, we have a few services in a beta/testing state. I think we have most of the kinks sorted out to get them to a phase where they’re ready for wider use. Authentication is a bit of a pain point right now: we don’t really have a single sign-on service in Debian that guest users can use, or that all these services integrate with. So for now, if you’re a Debian Developer who wants an account on one of these services, you can request a new account by creating a ticket on and selecting the “New account” template. Not all services support having dashes (or even any punctuation in the username whatsoever), so to keep it consistent we’re currently appending just “guest” to salsa usernames for guest users, and “team” at the end of any Debian team accounts or official accounts using these services.

Stefano finished uploading all the DebConf videos to the PeerTube instance. Even though it’s largely automated, it ended up being quite a big job fixing up some old videos and their metadata and adding support for PeerTube to the DebConf video scripts. This also includes some videos from sprints and MiniDebConfs that had video coverage, currently totalling 1359 videos.

Future plans

This is still a very early phase for the project. Here are just some ideas that might develop over time on the Debian Social sites:

  • Team accounts. Some Debian teams already have accounts on a myriad of other platforms. For example, the Debian Med team has a blog on blogspot and the Debian Publicity team has an account on. I’d really like to make our Debian Social platforms (like our WordPress multisite instance and Pleroma) a place that Debian teams can trust to host their updates on. It would also be nice to have more teams use these that don’t have a particularly big online presence right now, like Debian women or a DPL team account.
  • Developer demos. I enjoy the videos that the GNOME project makes demoing the new features in every release, as they’ve done for the 3.36 release. I think it would be great if people in Debian could make some small videos to demo the things that they’ve been working on. It doesn’t have to be as flashy or elaborate as the GNOME video I’ve linked to, but sometimes just a minute-long demo can be really useful to convey a new idea or feature or to show progress that has been made.
  • User participation. YouTube is full of videos that review Debian or demo how to customise it. It would be great if we could get users to post such videos to PeerTube. For Pixelfed, I’d like to try out projects like users posting pictures of their computers with freshly installed Debian systems with a hashtag like #WeInstallDebian, then at the end of the year we could build a nice big mosaic that contains these images. Might make a cool poster for events too.
  • DebConf and other Debian events. We used to use a Gallery instance to host DebConf photos, but it’s always been a bit cumbersome managing photos there, and Gallery hasn’t updated its UI much over the years, causing it to fall a bit out of favour with attendees at these events. As a result, photos end up getting lost in WhatsApp/Telegram/Signal groups, Twitter, Facebook, etc. I hope that we can get enough users signed up on the Pixelfed instance that it could become the de facto standard place for posting Debian event photos. Having a known central place to post these makes them easier to find as well.

If you’d like to join this initiative and help out, please join #debian-social on oftc. We’re also looking for people who can help moderate posts on these sites.

Debian packaging

I had the sense that there were fewer upstream releases this month. I suspect that everyone was busy figuring out how to cope during Covid-19 lockdowns taking place all over the world.

2020-03-02: Upload package calamares (3.2.10-1) to Debian unstable.

2020-03-10: Upload package gnome-shell-extension-dash-to-panel (29-1) to Debian unstable.

2020-03-10: Upload package gnome-shell-extension-draw-on-your-screen (5.1-1) to Debian unstable.

2020-03-28: Upload package gnome-shell-extension-dash-to-panel (31-1) to Debian unstable.

2020-03-28: Upload package gnome-shell-extension-draw-on-your-screen (6-1) to Debian unstable.

2020-03-28: Update python3-flask-autoindexing packaging; not releasing due to a licensing change that needs further clarification. (GitHub issue #55).

2020-03-28: Upload package gamemode (1.5.1-1) to Debian unstable.

2020-03-28: Upload package calamares (3.2.21-1) to Debian unstable.

Debian mentoring

2020-03-03: Sponsor package python-jaraco.functools (3.0.0-1) (Python team request).

2020-03-03: Review python-ftputil (3.4-1) (Needs some more work) (Python team request).

2020-03-04: Sponsor package pythonmagick (0.9.19-6) for Debian unstable (Python team request).

2020-03-23: Sponsor package bitwise (0.41-1) for Debian unstable (Email request).

2020-03-23: Sponsor package gpxpy (1.4.0-1) for Debian unstable (Python team request).

2020-03-28: Sponsor package gpxpy (1.4.0-2) for Debian unstable (Python team request).

2020-03-28: Sponsor package celery (4.4.2-1) for Debian unstable (Python team request).

2020-03-28: Sponsor package buildbot (2.7.0-1) for Debian unstable (Python team request).

Martin Michlmayr: ledger2beancount 2.1 released

6 April, 2020 - 18:38

I released version 2.1 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.1:

  • Handle postings with posting dates and comments but no amount
  • Show transactions with only one posting (without bucket)
  • Add spacing between automatic declarations
  • Preserve preliminary info at the top

You can get ledger2beancount from GitHub.

Thanks to Thierry (thdox) for reporting a bug and for fixing some typos in the documentation. Thanks to Stefano Zacchiroli for some good feedback.

Russ Allbery: Review: Thick

6 April, 2020 - 11:21

Review: Thick, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2019
ISBN: 1-62097-437-1
Format: Kindle
Pages: 247

Tressie McMillan Cottom is an associate professor of sociology at Virginia Commonwealth University. I first became aware of her via retweets and recommendations from other people I follow on Twitter, and she is indeed one of the best writers on that site. Thick: And Other Essays is an essay collection focused primarily on how American culture treats black women.

I will be honest here, in part because I think much of the regular audience for my book reviews is similar to me (white, well-off from working in tech, and leftist but privileged) and therefore may identify with my experience. This is the sort of book that I always want to read and then struggle to start because I find it intimidating. It received a huge amount of praise on release, including being named as a finalist for the National Book Award, and that praise focused on its incisiveness, its truth-telling, and its depth and complexity. Complex and incisive books about racism are often hard for me to read; they're painful, depressing, and infuriating, and I have to fight my tendency to come away from them feeling more cynical and despairing. (Despite loving his essays, I'm still procrastinating reading Ta-Nehisi Coates's books.) I want to learn and understand but am not good at doing anything with the information, so this reading can feel like homework.

If that's also your reaction, read this book. I regret having waited as long as I did.

Thick is still, at times, painful, depressing, and infuriating. It's also brilliantly written in a way that makes the knowledge being conveyed easier to absorb. Rather than a relentless onslaught of bearing witness (for which, I should stress, there is an important place), it is a scalpel. Each essay lays open the heart of a subject in a few deft strokes, points out important features that the reader has previously missed, and then steps aside, leaving you alone with your thoughts to come to terms with what you've just learned. I needed this book to be an essay collection, with each thought just long enough to have an impact and not so long that I became numb. It's the type of collection that demands a pause at the end of each essay, a moment of mental readjustment, and perhaps a paging back through the essay again to remember the sharpest points.

The essays often start with seeds of the personal, drawing directly on McMillan Cottom's own life to wrap context around their point. In the first essay, "Thick," she uses advice given her younger self against writing too many first-person essays to talk about the writing form, its critics, and how the backlash against it has become part of systematic discrimination because black women are not allowed to write any other sort of authoritative essay. She then draws a distinction between her own writing and personal essays, not because she thinks less of that genre but because that genre does not work for her as a writer. The essays in Thick do this repeatedly. They appear to head in one direction, then deepen and shift with the added context of precise sociological analysis, defying predictability and reaching a more interesting conclusion than the reader had expected. And, despite those shifts, McMillan Cottom never lost me in a turn. This is a book that is not only comfortable with complexity and nuance, but helps the reader become comfortable with that complexity as well.

The second essay, "In the Name of Beauty," is perhaps my favorite of the book. Its spark was backlash against an essay McMillan Cottom wrote about Miley Cyrus, but the topic of the essay wasn't what sparked the backlash.

What many black women were angry about was how I located myself in what I'd written. I said, blithely as a matter of observable fact, that I am unattractive. Because I am unattractive, the argument went, I have a particular kind of experience of beauty, race, racism, and interacting with what we might call the white gaze. I thought nothing of it at the time I was writing it, which is unusual. I can usually pinpoint what I have said, written, or done that will piss people off and which people will be pissed off. I missed this one entirely.

What follows is one of the best essays on the social construction of beauty I've ever read. It barely pauses at the typical discussion of unrealistic beauty standards as a feminist issue, instead diving directly into beauty as whiteness, distinguishing between beauty standards that change with generations and the more lasting rules that instead police the bounds between white and not white. McMillan Cottom then goes on to explain how beauty is a form of capital, a poor and problematic one but nonetheless one of the few forms of capital women have access to, and therefore why black women have fought to be included in beauty despite all of the problems with judging people by beauty standards. And the essay deepens from there into a trenchant critique of both capitalism and white feminism that is both precise and illuminating.

When I say that I am unattractive or ugly, I am not internalizing the dominant culture's assessment of me. I am naming what has been done to me. And signaling who did it. I am glad that doing so unsettles folks, including the many white women who wrote to me with impassioned cases for how beautiful I am. They offered me neoliberal self-help nonsense that borders on the religious. They need me to believe beauty is both achievable and individual, because the alternative makes them vulnerable.

I could go on. Every essay in this book deserves similar attention. I want to quote from all of them. These essays are about racism, feminism, capitalism, and economics, all at the same time. They're about power, and how it functions in society, and what it does to people. There is an essay about Obama that contains the most concise explanation for his appeal to white voters that I've read. There is a fascinating essay about the difference between ethnic black and black-black in U.S. culture. There is so much more.

We do not share much in the U.S. culture of individualism except our delusions about meritocracy. God help my people, but I can talk to hundreds of black folks who have been systematically separated from their money, citizenship, and personhood and hear at least eighty stories about how no one is to blame but themselves. That is not about black people being black but about people being American. That is what we do. If my work is about anything it is about making plain precisely how prestige, money, and power structure our so-called democratic institutions so that most of us will always fail.

I, like many other people in my profession, was always more comfortable with the technical and scientific classes in college. I liked math and equations and rules, dreaded essay courses, and struggled to engage with the mandatory humanities courses. Something that I'm still learning, two decades later, is the extent to which this was because the humanities are harder work than the sciences and I wasn't yet up to the challenge of learning them properly. The problems are messier and more fluid. The context required is broader. It's harder to be clear and precise. And disciplines like sociology deal with our everyday lived experience, which means that we all think we're entitled to an opinion.

Books like this, which can offer me a hand up and a grounding in the intellectual rigor while simultaneously being engaging and easy to read, are a treasure. They help me fill in the gaps in my education and help me recognize and appreciate the depth of thought in disciplines that don't come as naturally to me.

This book was homework, but the good kind, the kind that exposes gaps in my understanding, introduces topics I hadn't considered, and makes the time fly until I come up for air, awed and thinking hard. Highly recommended.

Rating: 9 out of 10

Enrico Zini: Burnout links

6 April, 2020 - 06:00
  • Demystifying Burnout in Tech: How to save your soul from getting too callused

  • FOSDEM 2020 - Recognising Burnout: Mental health is becoming an increasingly important topic. For this talk Andrew will focus on one particular aspect of mental health, burnout. Including his own personal experiences of when it can get really bad and steps that could be taken to help catch it early.

  • Burnout is Not Your Fault: Let’s unpack society’s general misunderstanding of the latest buzzword, burnout, shall we?

  • Christina Maslach: Burnout From Heroic Action: Christina Maslach defines and explains burnout, in particular relating it to activism. She gives tips and lessons for avoiding it. Recorded at the Hero Round...

  • Understanding Job Burnout - Dr. Christina Maslach: DOES19 London — Burnout is a hot topic in today's workplace, given its high costs for both employees and organizations. What causes this problem? And what ca...

Vincent Bernat: Safer SSH agent forwarding

5 April, 2020 - 22:50

ssh-agent is a program to hold in memory the private keys used by SSH for public-key authentication. When the agent is running, ssh forwards to it the signature requests from the server. The agent performs the private key operations and returns the results to ssh. It is useful if you keep your private keys encrypted on disk and you don’t want to type the password at each connection. Keeping the agent secure is critical: someone able to communicate with the agent can authenticate on your behalf on remote servers.

ssh also provides the ability to forward the agent to a remote server. From this remote server, you can authenticate to another server using your local agent, without copying your private key onto the intermediate server. As stated in the manual page, this is dangerous!

Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent’s UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).

As mentioned, a better alternative is to use the jump host feature: the SSH connection to the target host is tunneled through the SSH connection to the jump host. See the manual page and this blog post for more details.
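For example, the jump host can be configured once in ~/.ssh/config (the hostnames below are placeholders, not from the post):

```
Host target.example.com
    ProxyJump jump.example.com
```

With this in place, a plain ssh target.example.com is tunneled through the jump host; the same effect is available ad hoc with ssh -J jump.example.com target.example.com.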

If you really need to use SSH agent forwarding, you can secure it a bit through a dedicated agent with two main attributes:

  • it holds only the private key to connect to the target host, and
  • it asks confirmation for each requested signature.

The following wrapper around the ssh command will spawn such an ephemeral agent:

assh() {
    # Ensure we don't use the "regular" agent.
    unset SSH_AUTH_SOCK
    # Spawn a new, empty, agent.
    eval $(ssh-agent)
    [ -n "$SSH_AUTH_SOCK" ] || exit 1
    # On exit, kill the agent.
    trap "ssh-agent -k > /dev/null" EXIT
    # Invoke SSH with agent forwarding enabled and
    # automatically add the needed private key in
    # the agent, with "confirm" mode.
    ssh -o AddKeysToAgent=confirm \
        -o ForwardAgent=yes \
        "$@"
}

With the -o AddKeysToAgent=confirm directive, ssh adds the unencrypted private key to the agent, but each use must be confirmed.1 Once connected, you get a confirmation prompt for each signature request:2

Request for the agent to use the specified private key

But, again, avoid using agent forwarding! ☠️

  1. Alternatively, you can add the keys with ssh-add -c. ↩︎

  2. Unfortunately, the dialog box default answer is “Yes.” ↩︎

Hideki Yamane: Zoom: You should hire an appropriate package maintainer

5 April, 2020 - 21:19
Through my daily job, I sometimes have to use Zoom for meetings and webinars, but several resources indicate that they didn't put enough effort into the security of their product, so I decided to remove it from my laptop. However, I found a weird message while doing so.
The following packages will be REMOVED:
  zoom
0 upgraded, 0 newly installed, 1 to remove and 45 not upgraded.
After this operation, 269 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 362466 files and directories currently installed.)
Removing zoom (3.5.374815.0324) ...
run post uninstall script, action is remove ...
current home is /root
Processing triggers for mime-support (3.64) ...
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for shared-mime-info (1.15-1) ...
Processing triggers for desktop-file-utils (0.24-1) ...
(Reading database ... 361169 files and directories currently installed.)
Purging configuration files for zoom (3.5.374815.0324) ...
run post uninstall script, action is purge ...
current home is /root

Wait. "current home is /root"? What did you do? Then I extracted its package (ar -x zoom_amd64.deb; tar xvf control.tar.xz; view post*):
# Program:
#       script to be run after package installation

echo "run post install script, action is $1..."

#ln -s -f /opt/zoom/ZoomLauncher /usr/bin/zoom

#$1 folder path
function remove_folder
{
        if [ -d $1 ]; then
                rm -rf $1
        fi
}

echo current home is $HOME
remove_folder "$HOME/.cache/zoom"
(snip)

Ouch. When apt runs with sudo, $HOME is /root, so their maintainer script tried to remove files under /root! Did they do any testing? Even if it worked as intended, touching a user's files under $HOME is NOT a good idea...

And it seems that this applies not only to the .deb package but also to the .rpm package.

Their linux installer scripts are clueless and icky too:

remove_folder "/opt/zoom"
remove_folder "$HOME/.zoom/logs"
remove_folder "$HOME/.cache/zoom"

rpm -q --scripts zoom output:
— Will Stephenson (@wstephenson) March 31, 2020

Joey Hess: solar powered waterfall controlled by a GPIO port

5 April, 2020 - 03:56

This waterfall is beside my yard. When it's running, I know my water tanks are full and the spring is not dry.

Also it's computer controlled, for times when I don't want to hear it. I'll also use the computer control later on to avoid running the pump excessively and wearing it out, and for some safety features like not running when the water is frozen.

This is a whole hillside of pipes, water tanks, pumps, solar panels, all controlled by a GPIO port. Easy enough; the pump controller has a float switch input and the GPIO drives a 4n35 optoisolator to open or close that circuit. Hard part will be burying all the cable to the pump. And then all the landscaping around the waterfall.
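A sketch of what driving that optoisolator might look like through the legacy sysfs GPIO interface; the pin number and paths here are my assumptions, not details from Joey's actual setup:

```shell
# Hypothetical sketch: toggle the optoisolator (and thus the pump
# controller's float-switch input) via the sysfs GPIO interface.
# GPIO pin 17 is an assumption; SYSFS is overridable for testing.
GPIO=17
SYSFS=${SYSFS:-/sys/class/gpio}

waterfall() {
    # Export the pin on first use, then drive it as an output.
    [ -d "$SYSFS/gpio$GPIO" ] || echo "$GPIO" > "$SYSFS/export"
    echo out > "$SYSFS/gpio$GPIO/direction"
    case "$1" in
        on)  echo 1 > "$SYSFS/gpio$GPIO/value" ;;  # close the circuit: pump runs
        off) echo 0 > "$SYSFS/gpio$GPIO/value" ;;  # open the circuit: pump stops
        *)   echo "usage: waterfall on|off" >&2; return 1 ;;
    esac
}
```

Safety features like "don't run when frozen" would then be a matter of wrapping waterfall in checks before calling it.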

There's a bit of lag to turning it on and off. It can take over an hour for it to start flowing, and around half an hour to stop. The water level has to get high enough in the water tanks to overcome some airlocks and complicated hydrodynamic flow stuff. Then when it stops, all that excess water has to drain back down.

Anyway, enjoy my soothing afternoon project and/or massive rube goldberg machine, I certainly am.

Thorsten Alteholz: My Debian Activities in March 2020

4 April, 2020 - 23:02

FTP master

This month I accepted 156 packages and rejected 26. The overall number of packages that got accepted was 203.

Debian LTS

This was my sixty-ninth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads of:

  • [DLA 2156-1] e2fsprogs security update for one CVE
  • [DLA 2157-1] weechat security update for three CVEs
  • [DLA 2160-1] php5 security update for two CVEs
  • [DLA 2164-1] gst-plugins-bad0.10 security update for four CVEs
  • [DLA 2165-1] apng2gif security update for one CVE

Also my work on graphicsmagick was accepted, which resulted in:

  • [DSA 4640-1] graphicsmagick security update in Buster and Stretch for 16 CVEs

Further, I sent debdiffs of weechat/stretch, weechat/buster, and e2fsprogs/stretch to the corresponding maintainers but have not received any feedback yet.

As lots of no-dsa CVEs have accumulated for wireshark, I started to work on them but could not upload yet.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-first ELTS month.

During my allocated time I uploaded:

  • ELA-218-1 for e2fsprogs
  • ELA-220-1 for php5
  • ELA-221-1 for nss

I also did some days of frontdesk duties.

Other stuff

Unfortunately, strange things happened outside Debian again this month, and the discussions within Debian did not stop. Nonetheless I got some stuff done.

I improved packaging of …

I sponsored uploads of …

  • … ocf-spec-core
  • … theme-d-gnome

Sorry to all people who also requested sponsoring, but sometimes things happen and your upload might be delayed.

I uploaded new upstream versions of …

On my Go challenge I uploaded:

  • golang-github-dreamitgetit-statuscake
  • golang-github-ensighten-udnssdk
  • golang-github-apparentlymart-go-dump
  • golang-github-suapapa-go-eddystone
  • golang-github-joyent-gosdc
  • golang-github-nrdcg-goinwx
  • golang-github-bmatcuk-doublestar
  • golang-github-go-xorm-core
  • golang-github-svanharmelen-jsonapi
  • golang-github-goji-httpauth
  • golang-github-phpdave11-gofpdi


Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.