Planet Debian

Planet Debian - https://planet.debian.org/

Jonathan Dowland: New Bike

9 October, 2020 - 18:47

I grew up riding bikes with my friends, but I didn't keep it up once I went to University. A couple of my friends persevered and are really good riders, even building careers on their love of riding.

I bought a mountain bike in 2006 (a sort of "first pay cheque" treat after changing roles) but didn't really ride it all that often until this year. Once Lockdown began, I started going for early morning rides in order to get some fresh air and exercise.

Once I'd got into doing that I decided it was finally time to buy a new bike. I knew I wanted something more like a "hybrid" than a mountain bike but apart from that I was clueless. I couldn't even name the top manufacturers.

Ross Burton—a friend from the Debian community—suggested I take a look at Cotic, a small UK-based manufacturer in the Peak District; specifically their Escapade gravel bike. (A gravel bike, it turns out, is kind-of like a hybrid.)

My new Cotic Escapade

I did some due diligence, looked at some other options, put together a spreadsheet, etc., but the Escapade was the clear winner. During the process I arranged to have a socially distant cup of tea with my childhood friend Dan, now a professional bike mechanic, who by coincidence arrived on his own Cotic Escapade. It definitely seemed to tick all the boxes. I just needed to agonise over the colour choices: Metallic Orange (a Cotic staple) or a Grey with some subtle purple undertones. I was leaning towards the Grey, but ended up plumping for the Orange.

I could just cover it under Red Hat UK’s cycle to work scheme. I’m very pleased our HR dept is continuing to support the scheme, in these times when they also forbid me from travelling to the office.

And so here we are. I’m very pleased with it! Perhaps I'll write more about riding, or post some pictures, going forward.

Molly de Blanc: COVID and Reflections on Jessica Flanigan

8 October, 2020 - 19:48

One of the points Flanigan makes in her piece “Seat Belt Mandates and Paternalism” is that we’re conditioned to use seat belts from a very early age. It’s a thing we internalize and build into our understanding of the world. People feel bad when they don’t wear a seat belt.(1) They’re unsettled. They feel unsafe. They feel like they’re doing something wrong.

Masks have started to fit into this model as well. Not wearing a mask feels wrong. An acquaintance shared a story of crying after realizing they had left the house without a mask. For some people, mask wearing has been deeply internalized.

We have regular COVID tests at NYU. Every other week I spit into a tube and then am told whether I am safe or sick. This allows me to hang out with my friends more confident than I would feel otherwise. This allows me to be closer to people than I would be otherwise. It also means that if I got sick, I would know, even if I was asymptomatic. If this happened, I would need to tell my friends. I would trace the places I’ve been, the people I’ve seen, and admit to them that I got sick. I would feel shame because something I did put me in that position.

There were (are?) calls to market mask wearing and COVID protection with the same techniques we use around sex: wear protection, get tested, think before you act, ask consent before touching, be honest and open with the people around you about your risk factors.

This is effective, at least among a swath of the population, but COVID has effectively become another STD. It’s a socially transmitted disease that we have tabooified into creating shame in people who have it.

The problem with this is, of course, that COVID isn’t treatable in the same way syphilis and chlamydia are. Still, I would ask whether people don’t report, or get tested, or even wear masks, because of shame. In some communities, wearing a mask is a sign that you’re sick. It’s stigmatizing.(2)

I think talking about COVID the way we talk about sex is not the right approach because, in my experience, the ways I learned about sex were everything from factually wrong to deeply harmful. If what we’re doing doesn’t work, what does?

(1) Yes, I know not everyone.

(2) Many men who don’t wear masks cite it as feeling emasculating, rather than stigmatizing.

Dirk Eddelbuettel: RcppSimdJson 0.1.2: Upstream update

8 October, 2020 - 19:05

A new RcppSimdJson release arrived on CRAN yesterday, bringing along the simdjson 0.5.0 release that happened a few weeks ago.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk).
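
For a flavour of the R-side interface, here is a minimal sketch using the package's fparse() function (the JSON payload and the shell invocation are purely illustrative):

# parse a small JSON string into a native R object and show its structure
Rscript -e 'str(RcppSimdJson::fparse("{\"pkg\": \"RcppSimdJson\", \"ver\": \"0.1.2\"}"))'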

Besides the upstream update, not too much happened to our package itself since 0.1.1, though Brendan did help one user to seriously speed up his JSON processing. The (this time very short) NEWS entry follows.

Changes in version 0.1.2 (2020-10-07)
  • Upgraded to simdjson 0.5.0 (Dirk #49)

Courtesy of my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: littler 0.3.12: Exciting updates

8 October, 2020 - 07:14

The thirteenth release of littler as a CRAN package became available today (after a three-day ‘rest’ at CRAN for no real reason), following in the fourteen-ish year history of the package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It has also always loaded the methods package, which Rscript only started doing in recent years.
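
A minimal sketch of both invocation styles (the expressions shown are only examples):

# piping R code straight into r
echo 'cat("hello from littler\n")' | r

# one-off expressions via -e, as with Rscript
r -e 'print(summary(cars))'

# shebang scripting: an executable file, say hello.r, containing
#!/usr/bin/env r
cat("arguments:", argv, "\n")   # littler exposes command-line arguments as argv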

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release brings five new example scripts and command wrappers (a usage sketch follows the list):

  • installDeps.r installs all dependencies of a package (directory or tarball)
  • installRSPM.r relies on RSPM to install binary packages
  • installBSPM.r relies on BSPM to install binary packages (esp. on Linux)
  • cranIncoming.r checks the incoming queue for one or more packages
  • urlUpdate.r checks and/or updates stale URLs leading to redirects
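
A hedged usage sketch (the argument conventions here are assumptions based on the descriptions above, not documented signatures):

installDeps.r .                  # dependencies of the package in the current directory
installRSPM.r data.table         # binary installation via RSPM
installBSPM.r data.table         # binary installation via BSPM
cranIncoming.r mypackage         # query the CRAN incoming queue
urlUpdate.r                      # check for stale URLs leading to redirects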

A number of commands were also extended or updated (see below for more). We have a new and very slick documentation website once again utilising Material for MkDocs. Last but not least the two included vignettes now use minidown and the fabulous water css theme—which reduced the file sizes of the two vignettes from, respectively, 884kb and 873kb to 47kb and 15kb. Yes, that is correct. That alone brought the package file size down from 641kb to 116kb. Incredible.

The NEWS file entry is below.

Changes in littler version 0.3.12 (2020-10-04)
  • Changes in examples

    • Updates to scripts tt.r, cos.r, cow.r, c4r.r, com.r

    • New script installDeps.r to install dependencies

    • Several updates to script check.r

    • New scripts installBSPM.r and installRSPM.r for binary package installation (Dirk and Iñaki in #81)

    • New script cranIncoming.r to check in Incoming

    • New script urlUpdate.r validates URLs as R does

  • Changes in package

    • Travis CI now uses BSPM

    • A package documentation website was added

    • Vignettes now use minidown resulting in much reduced filesizes: from over 800kb to under 50kb (Dirk in #83)

My CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and now also on the new package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Vincent Fourmond: QSoas quiz #1 : averaging spectra

7 October, 2020 - 19:25
Here is the first QSoas quiz! I recently measured several identical spectra in a row to evaluate the noise of the setup, and so I wanted to average all the spectra and also determine the standard deviation in the absorbances. Averaging the spectra can simply be done by taking advantage of the average command:
QSoas> load Spectrum*.dat /flags=spectra
QSoas> average flagged:spectra
However, average does not provide a means to compute standard deviations; it just takes the average of all but the X column. I wanted to add this feature, but I realized there are already at least two distinct ways to do that...

Quiz

Your task is to determine the average and standard deviations of the three spectra located there (Spectrum-1.dat, Spectrum-2.dat and Spectrum-3.dat). There are at least two ways:
  • One that relies simply on average and on apply-formula, and which requires that you remember how to compute standard deviations (the relevant identity is recalled just after this list).
  • One that is a little more involved, that requires more data manipulation (take a look at contract for instance) and relies on the fact that you can use statistics in apply-formula (and in particular you can use y_stddev to refer to the standard deviation of \(y\)), but which does not require you to know exactly how to compute standard deviations.
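
For the first route, the only mathematics you need is the one-pass identity for the (population) standard deviation, which combines the average of \(y\) with the average of \(y^2\):

\[ \sigma_y = \sqrt{\langle y^2 \rangle - \langle y \rangle^2} \]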
To help you, I've added the result in Average.dat. The figure below shows a zoom on the data superimposed on the average (bonus points for finding how to display the light red area that corresponds to the standard deviation!). I will post the answer later. In the meantime, feel free to post your own solutions or attempts, hacks, and so on!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Iustin Pop: Late report for Nationalpark Bike Marathon 2020

7 October, 2020 - 03:00

I don’t have to mention that 2020 is a special year, so the normal race plan was out the window, and I was very happy and fortunate to be able to do even one race. And I’m only three weeks late in writing this race report :/ So, here’s the story ☺

Preparing for the race

Because it was a special year, and everything was crazy, I actually managed to do more sports than usual, at least up to the end of July. My fitness, and even body weight, were thus relatively fine, so I signed up for the mid-distance race (official numbers: 78km distance, 1570 meters altitude), and then off I went on a proper summer vacation — in a hotel, even.

And while I did do some bike rides during that vacation, from then on my training regime went… just off? I did train, I did ride, I did get significant PRs, but it didn’t “click” anymore. Plus, due to—well, actually not sure what, work or coffee or something—my sleep regime also got completely ruined…

On top of that, I didn’t think about the fact that the race was going to be mid-September, and that high up in the mountains the weather could have been bad enough (I mean, in 2018 the weather was really bad even in August…) that I’d need to seriously think about clothing.

Race week

I arrive in Scuol two days before the race, very tired (I think I got only 6 hours of sleep the night before), and definitely not in good shape. I was feeling bad enough that I was not quite sure I was going to race. At least the weather was OK, such that normal summer clothing would suffice. But the race info mentioned dangerous segments, to be very careful, etc. etc., so I was quite anxious.

Note 1: my wife says this was not the first time, and likely not the last, that two days before a race I feel like quitting. And as I’m currently on-and-off reading The Brave Athlete: Calm the Fuck Down and Rise to the Occasion (by Lesley Paterson and Simon Marshall; it’s an interesting book, not sure if I recommend it or not), I am beginning to think that this is my reaction to races where I have “overshot” my usual distance. Or, in general, races where I fear the altitude gain. Not quite sure, but I think it is indeed the actual cause.

So I spend Thursday evening feeling unwell, thinking I’ll see how Friday goes. Friday comes, and having slept reasonably well the entire night, I pick up my race number, then take another nap in the afternoon - in total, I slept around 13 hours that day. So I felt much better, and was looking forward to the race.

Saturday morning comes, I manage to wake up early, and get ready in time; I almost didn’t panic at all about being late.

Note 2: my wife also says that this is the usual way I behave. Hence, it must be mostly a mental issue, rather than a real physical one ☺

Race

I reach the train station in time, I get on the train, and by the time the train reached Zernez, I had fully calmed down. There was an entire hour’s wait before the race though, and it was quite chilly. Of course I didn’t bring anything beside what I was wearing, relying on the temperature getting better later in the day.

During the wait, there were two interesting things happening.

First, we actually got there (in Zernez) before the first people from the long distance passed by, both men and women. Seeing them pass by was cool, thinking they already had ~1’200m altitude in just 30-ish kilometres.

The second thing was, as this was the middle and not the shortest distance, the people in the group looked different than in previous years. More precisely, they looked very fit, and I was feeling… fat. Well, I am overweight, so it was expected, but I was feeling it even more than usual. I think only one or two in ten people looked as fit as me or less… And of course, the post-race pictures show me even less “fit-looking” than I thought. Ah, self-deception is a sweet thing…

And yes, we all had to wear masks, up until the last minute. It was interesting, but not actually annoying - and a small enough price for being able to race!

Then the race starts, and as opposed to many other years, it starts slow. I didn’t feel that rush of people starting fast, it was… reasonable?

First part of the race (good)

Thus started the first part of the race, on a new route that I was unfamiliar with. There was not too much climbing, to be honest, and there was some tricky single-trail through the woods, with lots of roots. I actually had to get off the bike and push it, since it was too difficult to pedal uphill on that path. Other than that, I was managing so far to adjust my efforts well enough that my usual problems related to climbing (lower back pain) didn’t yet appear, even as the overall climbed meters were increasing. I was quite happy about that, and had lots of reserves. To my (pleasant) surprise, two positive things happened:

  • I was never alone, a sign that I wasn’t too far back.
  • I was passing/being passed by people, both on climbs and on descents! It’s rare, but I did overtake a few people on a difficult trail downhill.

With all the back and forth, a few people became familiar (or at least their kit), and it was fun seeing who is better uphill vs. downhill.

And second part (not so good)

I finally get to (around) S-chanf, on a very nice but small descent, and on flat roads, and start the normal route for the short race. Something was off though - I knew from past years that these last ~47km have around 700-800m altitude, but I had already done around 1000m. So the promised 1571m were likely to be off, by at least 100-150m. I set myself a new target of 1700m, and adjust my efforts based on that.

And then, like clockwork on the 3:00:00 mark, the route exited the forest, the sun got out of the clouds, and the temperature started to increase from 16-17°C to 26°+, with peaks of 31°C. I’m not joking: at 2:58:43, temp was 16°, at 3:00:00, it was 18°, at 3:05:45, it was 26°. Heat and climbing are my two nemeses, and after having a pretty good race for the first 3 hours and almost exactly 1200m of climbing, I started feeling quite miserable.

Well, it was not all bad. There were some nice stretches of flat, where I knew I could pedal strongly and keep up with other people, until my chain dropped, so I had to stop, re-set it, and lose 2 minutes. Sigh.

But, at least, I was familiar with this race, or so I thought. I completely mis-remembered the last ~20km as a two-punch climb, Guarda and Ftan, whereas it is actually a three-punch one: Guarda, Ardez, and only then Ftan. Doesn’t help that Ardez has the nice ruins that I was remembering and which threw me off.

The saddest part of the day was here, on one of the last climbs - not sure if to Guarda or to Ardez - where a guy overtakes me, and tells me he’s glad he finally caught up with me: he almost got me five or six times (!), but I always managed to break away. Always, until now. Now, this was sad (I was huffing and puffing like a steam locomotive by then), but also positive, as I never had that before. One good, one bad?

And of course, it was more than 1’700m altitude, it was 1’816m. And, due to Covid changes, the descent to Scuol was shorter, and it didn’t end as usual with the small but sharp climb which I just love.

But, I finished, and without any actual issues, and no dangerous segments as far as I saw. I was anxious for no good reason…

Conclusion (or confusion?)

So this race was interesting: three hours (to the minute) in which I went 43.5km, climbed 1200m, felt great, and was able to push and push. And then the second part: only ~32km and only 600m of climbing, but it felt quite miserable.

I don’t know if it was mainly heat, mainly my body giving up after that much climbing (or time?), or both. But it’s clear that I can’t reliably race for more than around these numbers: 3 hours, ~1000+m altitude, in >20°C temperature.

One thing that I managed to achieve though: except on the technically complex trail at the beginning, where I pushed the bike, I did not ever stop and push the bike uphill because I was too tired. Instead, I managed (badly) to switch between sitting and standing as much as I could motivate myself to, and thus continued pushing uphill. This is an achievement for me, since mentally it’s oh so easy to stop and push the bike, so I was quite glad.

As to the race results, they were quite atrocious:

  • age category (men), 38 out of 52 finishers, 4h54m, with the first finisher doing 3h09m, so more than 50% slower (!)
  • overall (men), 138 out of 173 finishers, with first finisher 2h53m.

These results clearly don’t align with my feeling of a good first half of the race, so either it was purely subjective, or maybe in this special year, only really strong people registered for the race, or something else…

One positive aspect though, compared to most other years, was the consistency of my placement (age and overall):

  • Zuoz: 38 / 141
  • S-Chanf: 39 / 141
  • Zernez: 39 / 141
  • Guarda: 38 / 138
  • Ftan: 38 / 138
  • (“next” - whatever this is): 38 / 138
  • Finish: 38 / 138

So despite all my ranting above, and all the stats I’m pulling out of my own race, it looks like my position in the race was fully settled in the very first part, and I didn’t gain nor lose practically anything afterwards. I did dip one place but then gained it back (on the climb to Guarda, even).

The split times (per-segment rankings) are a bit more variable, and show that I was actually fast on the climbs but losing speed on the descents, which I really don’t understand anymore:

  • Zernez-Zuoz (unclear type): 38 / 141
  • Zuoz-S-Chanf (unclear type): 40 / 141
  • S-Chanf-Zernez (mostly downhill): 39 / 143
  • Zernez-Guarda (mostly uphill): 37 / 136
  • Guarda-Ftan (mostly uphill): 37 / 131
  • Ftan-Scuol (mostly downhill): 43 / 156

The difference at the end is striking. I’m visually matching the map positions to km and then using VeloViewer to compute the altitude gain: Zernez to Guarda is 420m of altitude gain, and Guarda to Ftan is 200m, and yet on both I was faster than my final place, and by quite a few places overall, only to lose that on the descent (Ftan-Scuol), and by a large margin.

So, amongst all the confusion here, I think the story overall is:

  • indeed I was quite fit for me, so the climbs were better than my place in the race (if that makes sense).
  • however, I’m not actually good at climbing nor fit (watts/kg), so I’m still way back in the pack (oops!).
  • and I do suck at descending, both me (skills) and possibly my bike setup as well (too high tyre pressure, etc.), so I lose even more time here…

As usual, the final take-away points are: lose the extra weight that is not needed, get better skills, and get a better core to be better at climbing.

I’ll finish here with one pic, taken in Guarda (4 hours into the race, more or less):

Climbing in Guarda

Until next year!

Thorsten Alteholz: My Debian Activities in September 2020

6 October, 2020 - 21:04

FTP master

This month I accepted 278 packages and rejected 58. The overall number of packages that got accepted was 304.

Debian LTS

This was my seventy-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all-in-all workload was 19.75h. During that time I did LTS uploads of:

  • [DLA 2382-1] curl security update for one CVE
  • [DLA 2383-1] nfdump security update for two CVEs
  • [DLA 2384-1] yaws security update for two CVEs

I also started to work on new issues in qemu but had to learn that most of the patches I found had not yet been approved by upstream. So I moved on to python3.5 and cimg. The latter is basically just a header file and I had to find its reverse dependencies to check whether all of them can still be built with the new cimg package. This is still WIP and I hope to upload new versions soon.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-seventh ELTS month.

During my allocated time I uploaded:

  • ELA-284-1 for curl
  • ELA-288-1 for libxrender
  • ELA-289-1 for python3.4

Like in LTS, I also started to work on qemu and encountered the same problems as in LTS above.
When building the new python packages for ELTS and LTS, I used the same VM and encountered memory problems that resulted in random tests failing. This was really annoying as I spent some time just chasing the wind. So up to now only the LTS package got an update and the ELTS one has to wait for October.

Last but not least I did some days of frontdesk duties.

Other stuff

This month I only uploaded some packages to fix bugs.

Reproducible Builds: Reproducible Builds in September 2020

5 October, 2020 - 17:48

Welcome to the September 2020 report from the Reproducible Builds project. In our monthly reports, we attempt to summarise the things that we have been up to over the past month, but if you are interested in contributing to the project, please visit our main website.

This month, the Reproducible Builds project was pleased to announce a donation from Amateur Radio Digital Communications (ARDC) in support of its goals. ARDC’s contribution will propel the Reproducible Builds project’s efforts in ensuring the future health, security and sustainability of our increasingly digital society. Amateur Radio Digital Communications (ARDC) is a non-profit which was formed to further research and experimentation with digital communications using radio, with the goals of advancing the state of the art of amateur radio and of educating radio operators in these techniques. You can view the full announcement as well as more information about ARDC on their website.


In August’s report, we announced that Jennifer Helsby (redshiftzero) launched a new reproduciblewheels.com website to address the lack of reproducibility of Python ‘wheels’. This month, Kushal Das posted a brief follow-up to provide an update on reproducible sources as well.

The Threema privacy and security-oriented messaging application announced that “within the next months”, their apps “will become fully open source, supporting reproducible builds”:

This is to say that anyone will be able to independently review Threema’s security and verify that the published source code corresponds to the downloaded app.

You can view the full announcement on Threema’s website.

Events

Sadly, due to the unprecedented events in 2020, there will be no in-person Reproducible Builds event this year. However, the Reproducible Builds project intends to resume meeting regularly on IRC, starting on Monday, October 12th at 18:00 UTC (full announcement). The cadence of these meetings will probably be every two weeks, although this will be discussed and decided on at the first meeting. (An editable agenda is available.)

On 18th September, Bernhard M. Wiedemann gave a presentation in German titled Wie reproducible builds Software sicherer machen (“How reproducible builds make software more secure”) at the Internet Security Digital Days 2020 conference. (View video.)

On Saturday 10th October, Morten Linderud will give a talk at Arch Conf Online 2020 on The State of Reproducible Builds in the Arch Linux distribution:

The previous year has seen great progress in Arch Linux to get reproducible builds in the hands of the users and developers. In this talk we will explore the current tooling that allows users to reproduce packages, the rebuilder software that has been written to check packages and the current issues in this space.

During the Reproducible Builds summit in Marrakesh, GNU Guix, NixOS and Debian were able to produce a bit-for-bit identical binary when building GNU Mes, despite using three different major versions of GCC. Since the summit, additional work resulted in a bit-for-bit identical Mes binary using tcc and this month, a fuller update was posted by the individuals involved.


Development work

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Debian

Chris Lamb uploaded a number of Debian packages to address reproducibility issues that he had previously provided patches for, including cfingerd (#831021), grap (#870573), splint (#924003) & schroot (#902804).

Last month, an issue was identified where a large number of Debian .buildinfo build certificates had been ‘tainted’ on the official Debian build servers, as these environments had files underneath the /usr/local/sbin directory to prevent the execution of system services during package builds. However, this month, Aurelien Jarno and Wouter Verhelst fixed this issue in varying ways, resulting in a special policy-rcd-declarative-deny-all package.

Building on Chris Lamb’s previous work on reproducible builds for Debian .ISO images, Roland Clobus announced his work in progress on making the Debian Live images reproducible. []

Lucas Nussbaum performed an archive-wide rebuild of packages to test enabling the reproducible=+fixfilepath Debian build flag by default. Enabling the fixfilepath feature will likely fix reproducibility issues in an estimated 500-700 packages. The test revealed only 33 packages (out of 30,000 in the archive) that fail to build with fixfilepath. Many of those will be fixed when the default LLVM/Clang version is upgraded.
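
For maintainers wanting to try the feature ahead of any default change, dpkg feature areas like this are normally toggled via DEB_BUILD_MAINT_OPTIONS; a minimal sketch, assuming a recent dpkg and a package's top-level directory as the working directory:

# preview the compiler flags with the fixfilepath feature enabled
DEB_BUILD_MAINT_OPTIONS="reproducible=+fixfilepath" dpkg-buildflags --get CFLAGS
# with the feature active, CFLAGS should gain a -ffile-prefix-map=<build path>=. entry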

79 reviews of Debian packages were added, 23 were updated and 17 were removed this month, adding to our knowledge about identified issues. Chris Lamb added and categorised a number of new issue types, including packages that capture their build path via quicktest.h and absolute build directories in documentation generated by Doxygen, etc.

Lastly, Lukas Puehringer uploaded a new version of in-toto to Debian, which was sponsored by Holger Levsen. []

diffoscope

diffoscope is our in-depth and content-aware diff utility that not only locates and diagnoses reproducibility issues, but also provides human-readable diffs of all kinds.

In September, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 159 and 160 to Debian:

  • New features:

    • Show “ordering differences” only in strings(1) output by applying the ordering check to all differences across the codebase. []
  • Bug fixes:

    • Mark some PGP tests as requiring pgpdump, and check that the associated binary is actually installed before attempting to run it. (#969753)
    • Don’t raise exceptions when cleaning up after guestfs cleanup failure. []
    • Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump against all files that are recognised by file(1) as data. []
  • Codebase improvements:

    • Add some documentation for the EXTERNAL_TOOLS dictionary. []
    • Abstract out a variable we use a couple of times. []
  • diffoscope.org website improvements:

    • Make the (long) demonstration GIF less prominent on the page. []

In addition, Paul Spooren added support for automatically deploying Docker images. []

Website and documentation

This month, a number of updates were made to the main Reproducible Builds website and related documentation, with Chris Lamb contributing several changes.

In addition, Holger Levsen re-added the documentation link to the top-level navigation [] and documented that the jekyll-polyglot package is required []. Lastly, diffoscope.org and reproducible-builds.org were transferred to Software Freedom Conservancy. Many thanks to Brett Smith from Conservancy, Jérémy Bobbio (lunar) and Holger Levsen for their help with transferring and to Mattia Rizzolo for initiating this.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches.

Bernhard M. Wiedemann also reported issues in git2-rs, pyftpdlib, python-nbclient, python-pyzmq & python-sidpy.

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework to power tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • Debian:

    • Shorten the subject of “nodes have gone offline” notification emails. []
    • Also track bugs that have been usertagged with usrmerge. []
    • Drop abort-related codepaths as that functionality has been removed from Jenkins. []
    • Update the frequency we update base images and status pages. [][][][]
  • Status summary view page:

    • Add support for monitoring systemctl status [] and the number of diffoscope processes [].
    • Show the total number of nodes [] and colourise critical disk space situations [].
    • Improve the visuals with respect to vertical space. [][]
  • Debian rebuilder prototype:

    • Resume building random packages again [] and update the frequency that packages are rebuilt. [][]
    • Use --no-respect-build-path parameter until sbuild 0.81 is available. []
    • Treat the inability to locate some packages as a debrebuild problem, and not as an issue with the rebuilder itself. []
  • Arch Linux:

  • System health checks:

    • Highlight important bad conditions in colour. [][]
    • Add support for detecting more problems, including Jenkins shutdown issues [], failure to upgrade Arch Linux packages [], kernels with wrong permissions [], etc.
  • Misc:

    • Delete old schroot sessions after 2 days, not 3. []
    • Use sudo to cleanup diffoscope schroot sessions. []

In addition, stefan0xC fixed a query for unknown results in the handling of Arch Linux packages [] and Mattia Rizzolo updated the template that notifies maintainers by email of their newly-unreproducible packages to ensure that it did not get caught in junk/spam folders []. Finally, build node maintenance was performed by Holger Levsen [][][][], Mattia Rizzolo [][] and Vagrant Cascadian [][][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via IRC or our mailing lists.

Russ Allbery: California general election

5 October, 2020 - 11:09

As normal, probably of direct interest only to California residents and apologies to everyone else since my hand-rolled blog software doesn't do cut tags. I'm only going to cover propositions, since the state-wide elections aren't very interesting and I both don't have strong opinions about the local elections and would guess that almost no one cares.

See the voter guide for the full details on each proposition.

Propositions 15 through 19 were put on the ballot by the legislature and thus were written as well as our regular laws. The remaining propositions are initiatives, which means I default to voting against them because they're usually poorly-written crap.

Proposition 14: NO. I reluctantly supported the original proposition to fund stem cell research with state bonds because it was in the middle of the George W. Bush administration and his weird obsession with ending stem cell research. It seemed worth the cost to maintain the research, and I don't regret doing this. But since then we've reached a compromise on ongoing research, and this proposition looks a lot more like pork spending than investment.

I am in favor of government support of basic research, but I think that's best done by a single grant institution that can pursue a coherent agenda. The federal government, when sane, does a decent job of this, and the California agency created by the previous proposition looks dodgy. The support for this proposition also comes primarily from research institutions that benefit from it. On top of that, there are way higher priorities right now for public investment than a very specific and limited type of medical research that isn't even the most important type of medical research to do right now. There is nothing magic about stem cells other than the fact that they make a certain type of Republican lose their minds. It's time to stop funding this specific research specially and roll it into general basic research funding.

Proposition 15: YES. Yes to anything that repeals Proposition 13 in whole or in part. Repealing it for commercial and industrial real estate is a good first step.

Proposition 16: YES. Reverses a bad law to outlaw affirmative action in California. I am in favor of actual reparations, so I am of course in favor of this, which is far, far more mild.

Proposition 17: YES. Restores voting rights to felons after completion of their sentence. I think it's inexcusable that any US citizen cannot vote, including people who are currently incarcerated, so of course I'm in favor of this more mild measure. (You may notice a theme.) When we say everyone should be able to vote, that should mean literally everyone.

Proposition 18: YES. Allows 17-year-olds to vote in California (but not federal) elections in some specific circumstances. I'm generally in favor of lowering the voting age, and this seems inoffensive. (And the arguments against it are stupid.)

Proposition 19: YES. This is a complicated legislative compromise around property tax that strengthens property tax limits for seniors moving within California while removing exemptions against increases for inherited real estate not used as a primary home. Some progressives are opposed to this because it doesn't go far enough and increases exemptions for seniors. I agree that those exemptions aren't needed and shouldn't be added, but closing the inheritance loophole is huge and worth this compromise. It's a tepid improvement for the somewhat better, but it's still worth approving (and was written by the legislature, so it's somewhat better written than the typical initiative).

Proposition 20: NO. Another pile of "anyone who has ever committed a crime deserves to be treated as subhuman" bullshit. Typical harsher sentences and harsher parole nonsense. No to everything like this, always.

Proposition 21: YES. This is my one exception of voting for an initiative, and that's because the California state legislature is completely incapable of dealing with any housing problem.

This is a proposition that overhauls an ill-conceived state-wide restriction on how rent control can be handled. The problem with rent control is that a sane solution to housing problems in this state requires both rent control and massive new construction, and we only get the former and not the latter because the NIMBYism is endemic. (There's a pile of NIMBY crap on my local ballot this year.) I would much rather be approving those things together, because either of them alone makes things worse for a lot of people. So yes, the opponents of this proposition are right: it will make the housing crisis worse, because everyone refuses to deal with the supply side.

That said, we need rent control as part of a humane solution, and the current state-wide rules are bad. For example, they disallow rent control on every property newer than a certain date that's forever fixed. This initiative replaces that with a much saner 15-year rolling window for maximizing profit, which is a better balance.

I hate voting for this because the legislature should have done their job and passed comprehensive housing reform. But since they didn't, this is part of what they should have passed, and I'll vote for it. Particularly since it's opposed by all the huge commercial landlords.

Proposition 22: NO. The "exclude Uber and Lyft from labor law" proposition, which is just as bullshit as it sounds. They're spending all of their venture capital spamming the crap out of everyone in the state to try to get this passed by lying about it. Just stunningly evil companies. If your business model requires exploiting labor, get a better business model.

Proposition 23: NO. So, this is another mess. It appears to be part of some unionization fight between dialysis clinic employees and the for-profit dialysis clinics. I hate everything about this situation, starting from the fact that we have such a thing as for-profit dialysis clinics, which is a crime against humanity.

But this proposition requires some very dodgy things, such as having a doctor on staff at every clinic for... reasons? This is very reminiscent of the bullshit laws about abortion clinics, which are designed to make it more expensive to operate a clinic for no justifiable reason. I'm happy to believe there is a bit more justification here, but this sort of regulation is tricky and should be done by the legislature in a normal law-making process. Medical regulation by initiative is just a horrible idea in every way. So while I am doubtless politically on the side of the proponents of the proposition, this is the wrong tool. Take it to the legislature.

Proposition 24: NO. A deceptively-written supposed consumer privacy law written by tech companies that actually weakens consumer privacy in some critical ways that are profitable for them. No thanks, without even getting to the point that this sort of thing shouldn't be done by initiative.

Proposition 25: YES. Yes, we should eliminate cash bail, which is essentially imprisoning people for being poor. No, this doesn't create a system of government profiling; judges already set bail and can revoke bail for flight risks. (This is not legislation by initiative; the state government already passed this law, but we have a dumb law that lets people oppose legislative action via initiative, so we have to vote to approve the law that our representatives already passed and that should have already gone into effect.)

Enrico Zini: Science links

5 October, 2020 - 05:00

Weather: We can only forecast the weather a few days into the future.

Nuclear: I had no idea Thorium-based nuclear power was a thing.

Fluid dynamics applied to traffic: Traffic Flow and Phantom Jams.

Psychology, economics, and a history of culturally biased experiment results: We aren’t the world.

Sylvain Beucler: git filter-branch and --state-branch - how?

4 October, 2020 - 17:18

I'm mirroring and reworking a large Git repository with git filter-branch (conversion ETA: 20h), and I was wondering how to use --state-branch which is supposed to speed-up later updates, or split a large conversion in several updates.

The documentation is pretty terse, the option can produce weird results (like an identity mapping that breaks all later updates, or calling the expensive tree-filter but discarding the results), and wrappers are convoluted, but I got something to work, so I'll share it.

The main point is: run the initial script and the later updates in the same configuration, which means the target branch needs to be reset to the upstream branch each time, before it's rewritten again by filter-branch. In other words, don't re-run it on the rewritten branch, nor attempt some complex merge/cherry-pick.

git fetch
git branch --no-track -f myrewrite origin/master
git filter-branch \
  --xxx-filter ... \
  --xxx-filter ... \
  --state-branch refs/heads/filter-branch/myrewrite \
  -d /dev/shm/filter-branch/$$ -f \
  myrewrite

Updates restart from scratch but only take a few seconds to skim through all the already-rewritten commits, and maintain a stable history.

Note that if the process is interrupted, the state-branch isn't modified, so it's not a stop/resume feature. If you want to split a lengthy conversion, you could simulate multiple upstream updates by checking out successive points in history (e.g. per year using $(git rev-list -1 --before='2020-01-01 00:00:00Z')), as sketched below.
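
Putting that together, a sketch of such a split conversion driven by per-year cutoff commits (branch and placeholder filter names as in the invocation above):

git fetch
for year in 2018 2019 2020; do
  # pretend upstream had only reached the end of $year, then rewrite up to there
  cutoff=$(git rev-list -1 --before="${year}-12-31 23:59:59Z" origin/master)
  git branch --no-track -f myrewrite "$cutoff"
  git filter-branch \
    --xxx-filter ... \
    --state-branch refs/heads/filter-branch/myrewrite \
    -d /dev/shm/filter-branch/$$ -f \
    myrewrite
done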

--state-branch isn't meant to rewrite in reverse chronological order either, because all commit ids would constantly change. Still, you can rewrite only the recent history for a quick discardable test.

Be cautious when using/deleting rewritten branches, especially during early tests, because Git tends to save them to multiple places which may desync (e.g. .git/refs/heads/, .git/logs/refs/, .git/packed-refs). Also remember to delete the state-branch between different tests. Last, note the unique temporary directory -d to avoid ruining concurrent tests ^_^'

Ben Hutchings: Debian LTS work, September 2020

4 October, 2020 - 04:38

I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 9.75 hours from August. I only worked 8.25 hours this month, and will return excess hours to the pool.

I attended and participated in the LTS team meeting on the 24th.

I updated linux-4.19 to include the changes in the buster point release, and issued DLA-2385-1.

I began work on an update to the linux (Linux 4.9 kernel) package.

Dirk Eddelbuettel: pinp 0.0.10: More Tweaks

3 October, 2020 - 23:54

A new version of our pinp package arrived on CRAN two days ago, roughly one year after the previous release. The pinp package allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

This release adds another option for a customized date YAML variable suitable for e.g. a published at date thanks to Ilya Kashnitsky, has some tweaks to the README.md as well as support for pandoc columns mode (as a small extension of code from the nice repo by Grant McDermott).

The NEWS entry for this release follows.

Changes in pinp version 0.0.10 (2020-10-01)
  • New document_date YAML variable to optionally set a 'Published at' or alike date (Ilya Kashnitsky in #85).

  • Small tweaks to README.md (Dirk)

  • Support pandoc columns mode (Dirk in #88)

Courtesy of my CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Julian Andres Klode: Google Pixel 4a: Initial Impressions

3 October, 2020 - 18:16

Yesterday I got a fresh new Pixel 4a, to replace my dying OnePlus 6. The OnePlus had developed some faults over time: It repeatedly loses connection to the AP and the network, and it got a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.

Why get a Pixel?

Camera: OnePlus focuses on stuffing as many sensors as it can into a phone, rather than a good main sensor, resulting in pictures that are mediocre blurry messes - the dreaded oil painting effect. Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don’t come out blurry half the time, and the post processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod.

Security updates: Pixels provide 3 years of monthly updates, with security updates being published on the 5th of each month. OnePlus only provides updates every 2 months, and then the updates they do release are almost a month out of date, not counting that they are only 1st-of-month patches, meaning vendor blob updates included in the 5th-of-month updates are even a month older. Given that all my banking runs on the phone, I don’t want it to be constantly behind.

Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy.

Size and weight: OnePlus phones keep getting bigger and bigger. By today’s standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, has 6.44" and weighs 184g; the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.

First impressions

Accessories

The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter, and a SIM tray ejector. No pre-installed screen protector or bumper is provided, unlike what we’ve grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi. The SIM tray ejector has a circular end instead of the standard oval one - I assume so it looks like the ‘o’ in Google?

Google sells you fabric cases for 45€. That seems a bit excessive, although I like that a lot of it is recycled.

Haptics

Coming from a 6.18" phablet, the Pixel 4a with its 5.81" feels tiny. In fact, it’s so tiny my thumb and my index finger can touch while holding it. Cute! Bezels are a bit bigger, resulting in a slightly lower screen-to-body ratio. The bottom chin is probably impractically small; this was already a problem on the OnePlus 6, but this one is even smaller. Oh well, form over function.

The buttons on the side are very loud and clicky. As is the vibration motor. I wonder if this Pixel thinks it’s a Model M. It just feels great.

The plastic back feels really good, it’s that sort of high quality smooth plastic you used to see on those high-end Nokia devices.

The fingerprint reader is super fast. Setup just takes a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take like half a minute or a minute to set up.

Software

The software - stock Android 11 - is fairly similar to OnePlus' OxygenOS. It’s a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It’s cleaner than OxygenOS in some ways - there are no duplicate photos apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, those are minor noise once all 130 apps were transferred from the old phone.

There are various things I miss coming from OnePlus such as off-screen gestures, network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcriptions are unfortunately not available in Germany.

The display is set to display the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I’ll be like “Oh the font is huge”, but right now, it feels a bit small on the Pixel.

You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to adaptive. I’d love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, as I feel it’s a bit easier on the eyes. Or well, just give me the ability to load an ICM profile (though, I’d need to calibrate the screen then - work!).

Migration experience

Restoring the apps from my old phone only restored settings for a handful out of the 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It’s a mystery to me why people do not allow their apps to be backed up, especially something innocent like a weight tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not restored.

I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess up some things. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND stuff. I assume that’s the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup).

I’ve set up Bitwarden as my auto-fill service, so I could log in to most of my apps and websites using the stored credentials. I found that this often did not work. Chrome, for example, does autofill fine once, but if I then want to autofill again, I have to kill and restart it, otherwise I don’t get the auto-fill menu. Other apps did not allow any auto-fill at all, and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.

Performance

It hangs a bit sometimes, but this was likely due to me having set 2 million iterations on my Bitwarden KDF and using Bitwarden a lot, and then opening up all 130 apps to log into them which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6 which was to be expected, given that the benchmarks only show a slight loss in performance.

Photos do take a few seconds to process after taking them, which is annoying, but understandable given how much Google relies on computation to provide decent pictures.

Audio

The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom firing speaker doing most of the work. Still, it’s better than just having the bottom firing speaker, as it does provide a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I’ve never had this before, and it will take some time getting used to.

Final thoughts

This is a boring phone. There’s no wow factor at all. It’s neither huge, nor does it have high-res 48 or 64 MP cameras, nor does it have a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition, it doesn’t want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like a “only 350€” phone, but yet it is. 128GB storage is plenty, 1080p resolution is plenty, 12.2MP is … you guessed it, plenty.

The same applies to the other two Pixel phones - the 4a 5G and 5. Neither are particularly exciting phones, and I personally find it hard to justify spending 620€ on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As to 5G, I wouldn’t get much use out of it, seeing as it’s not available anywhere I am. Because I’m on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already and it might make sense to get a 5G phone rather than sticking to the budget choice.

Outlook

The big question for me is whether I’ll be able to adjust to the smaller display. I now have a tablet, so I’m less often using the phone (which my hands thank me for), which means that a smaller phone is probably a good call.

Oh while we’re talking about calls - I only have a data-only SIM in it, so I could not test calling. I’m transferring to a new phone contract this month, and I’ll give it a go then. This will be the first time I get VoLTE and WiFi calling, although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on twitter for quickly setting up my benefits of additional 5GB per month and 10€ discount for being an existing cable customer.

I’m also looking forward to playing around with the camera (especially night sight), and eSIM. And I’m getting a case from China, which was handed over to the airline on Sep 17 according to Aliexpress, so I guess it should arrive in the next few weeks. Oh, and the screen protector is not here yet, so I can’t really judge the screen quality much, as I still have the factory protection film on it, and that’s just a blurry mess - but good enough for setting it up. Please Google, pre-apply a screen protector on future phones and include a simple bumper case.

I might report back in two weeks when I have spent some more time with the device.

Ritesh Raj Sarraf: First Telescope

3 October, 2020 - 15:27
Curiosity

I guess this would be common to most of us.

While I grew up, right from childhood itself, the sky was always an intriguing view. The Stars, the Moon, the Eclipses were all fascinating.

As a child, in my region, religion and culture, mythology also built up stories around it. Lunar Eclipses have a story of their own. During Solar Eclipses, parents still insist that we do not go out, and that eating be done before or after the eclipse, not during it.

Then there’s the Hindu Astrology part, which claims its own theories and drags mythology in along with them. For example, you’ll still find Hindu Astrology making recommendations to follow certain practices with respect to the planets, to get auspicious personal results. As far as I know, other religions too have similar beliefs about the planets.

As a child, we are told to address the Moon as an Uncle (चंदा मामा). There’s also a rhyme around it that many of us must have heard.

And if you look at our god, Lord Mahadev, he’s got a crescent on his head.

Lord Mahadev

Reality

Fast-forward to today: as I grew, so did some of my understanding. It is fascinating how mankind has achieved so much understanding of our surroundings. You could go through the documentaries on Mars Exploration, for example, to see how the rovers are providing invaluable data.

As a mere individual, there’s a limit to what one can achieve. But the questions flow in freely.

  • Is there life beyond us?
  • What’s out there in the sky?
  • Why is all this the way it is?

Hobby

The very first step for me, for every such curiosity, has been to do the ground work with the resources I have: to study the subject. I have done this all my life. For example, I started into the Software domain as: A curiosity => A Hobby => A profession

The same was the case with some of the other hobbies I developed a liking for, equally difficult as Astronomy. I just did the ground work, studied those topics, and then applied the knowledge to improve further and build up some experience.

And star gazing was no different. As a complete noob, I had to start with the A B C of Astronomy, familiarizing myself with the usual terms. And so on…

PS: Do keep in mind that not all hobbies come to a successful end. For example, I always craved to be good at graphic design, image processing and the like, but there I’ve always failed. I was never able to keep myself motivated enough. Similar was my experience when trying to learn to play a musical instrument. It just didn’t work out for me, then.

There’s also a phase where you fail, then learn from the failures and proceed further, and eventually succeed. But we all like to talk about the successes. :-)

Astronomy

So far, my impression has been that this domain will not suit most people. While the initial attraction may be strong, given the complexity and perseverance that Astronomy requires, most people lose interest in it very soon.

Then there’s the realization factor. If one goes in expecting quick results, one may get disappointed. A telescope isn’t a point-and-shoot device that gives you results on the spot.

There’s also the expectation side of things. If you are more accustomed to taking pretty selfies - which always come out right because the phone manufacturer does heavy processing on the images to make sure you get to see a pretty, fake self most of the time - then star gazing with telescopes could be a frustrating experience altogether. What you see in images on the internet is very different from what you’ll be able to see with your eyes and a basic telescope.

There’s also the cost aspect. The more powerful (and expensive) your telescope, the better your view.

And all that aside, you may still lose interest after you’ve done all the ground work and spent a good chunk of money on it, simply because the object you are gazing at is more or less a still image, which can quickly get boring for many.

On the other hand, if none of these things get in the way, then the domain of Astronomy can be quite fascinating. It is a continuous-learning domain (which reminds me of CI in our software field these days). It is just the beginning for us here, and we hope to have a lasting experience with it.

The Internet

I have been indebted to the internet right from the beginning. The internet is what has helped me achieve all I wanted. It is one field with no boundaries. If there is a will, there is a way; and often, the internet is the way.

  • I learnt computers over the internet.
  • Learnt more about gardening and plants over the internet.
  • Learnt more about fish care-taking over the internet.

And many many more things.

Some of the communities on the internet are a great avenue for participation. They bridge the age gap, the regional gap and many more.

For my Astronomy need, I was glad to see so many active communities, with great participants, on the internet.

Telescope

While there are multiple options for getting started with star gazing, I chose to start with a telescope. But as someone completely new to this domain, there was a long way to go. And add to that real life: work + family.

I spent a good 12+ months reading up on the different types of telescopes, what they are, their differences, their costs, their practical availability etc.

The good thing is that the market has offerings for everything, from a very basic pair of binoculars to a fully automatic Maksutov-Cassegrain scope. It all depends on your budget.

Automatic vs Manual

To make it easy for users, the market has multiple options on offer. One could opt for a cheap, basic, manually operated telescope, which requires the user to do a lot of ground study. On the other hand, users also have the option of automatic telescopes, which do the hard work of locating and tracking planetary objects.

Either option aside, how much of the sky you’ll actually be able to observe still depends on many more factors: enthusiasm over time, light pollution, clear skies, timing etc.

PS: The planetary objects move across the sky at a steady pace. Objects you lock into your view now will have drifted out of the FOV in just a matter of minutes.
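
To put a number on that, here is a rough back-of-the-envelope sketch in Python (the 0.5 degree field of view is an assumed figure for a typical mid-power eyepiece; the actual FOV depends on the eyepiece and scope):

    # Sketch: how long Earth's rotation leaves an object in view,
    # with no tracking. The sky drifts roughly 15 degrees per hour.
    SIDEREAL_DRIFT_DEG_PER_HOUR = 360 / 23.934  # one sidereal day

    def minutes_in_fov(fov_degrees):
        """Minutes for an object near the celestial equator to cross
        the given true field of view."""
        return fov_degrees / SIDEREAL_DRIFT_DEG_PER_HOUR * 60

    print(round(minutes_in_fov(0.5), 1))  # ~2.0 minutes for an assumed 0.5 degree FOV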

My Telescope

After spending so much time reading up on the types of telescopes, my conclusion was that a scope with a large aperture and a long focal length was the way forward. This shortened my list to Dobsonians. But Dobsonians aren’t very cheap telescopes, whether manual or automatic.

In the end, I acquired a 6" Dobsonian Telescope. It is a Newtonian Reflecting Telescope with a 1200mm focal length and a 150mm aperture.
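
As an aside, the basic arithmetic for such a scope is simple: the focal ratio is the focal length divided by the aperture, and the magnification is the scope’s focal length divided by the eyepiece’s. A small Python sketch (the 25mm and 10mm eyepieces are just common examples, not necessarily what ships with any particular kit):

    # Basic optics arithmetic for a 1200mm focal length, 150mm aperture scope.
    focal_length_mm = 1200
    aperture_mm = 150

    print("focal ratio: f/%d" % (focal_length_mm / aperture_mm))  # f/8

    # Magnification = telescope focal length / eyepiece focal length.
    for eyepiece_mm in (25, 10):  # example eyepiece sizes
        print("%dmm eyepiece -> %dx" % (eyepiece_mm, focal_length_mm / eyepiece_mm))
    # 25mm -> 48x, 10mm -> 120x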

Another thing about this subject: most of the stuff you do in Astronomy - from telescope selection, to installation, to star gazing - is DIY, so your mileage may vary in the end result and experience.

For me, installation wasn’t very difficult. I was able to assemble the base Dobsonian mount and the scope in around 2 hours. But the installation manual I had been provided with was very brief. I ended up with one module in the mount wrongly fitted, which I was able to fix later with the help of online forums.

Dobsonian Mount

In this image you can see that the side facing out, where the handle will go, is wrong. If fitted this way, the handle will not withstand any weight at all.

Correct Panel Side

The correct fit of the handle base board. In this image, the handle is on the other side, the one I’m holding. Because the initial fit did some damage to the engineered wood, I fixed it up by sealing it with some adhesive.

With that, this is what my final telescope looks like.

Final Telescope

Clear Skies

While the telescope was ready, the skies were not. For almost the next 10 days, we had no clear skies at all. All I could do was wait - so much so that I had stopped checking on the skies. Luckily, my wife noticed clear skies this week, for a single day. Clear enough that we could try out our telescope for the very first time.

Me posing for a shot

Telescope

As I said earlier, in my opinion, it takes a lot of patience and perseverance on this subject. And most of the things here are DIY.

To start with, we targeted the Moon, because it is easy. I pointed the scope at the moon, looked into the finder scope to center it, and then looked through the eyepiece. And: blank. Nothing out there. It turns out the finder scope and the viewing eyepiece weren’t aligned. This is common, and aligning them is the first DIY step when you plan to use your telescope for viewing.

Since our first attempt was unplanned - just random, because we had luckily spotted that the skies were clear - we weren’t prepared for this. Luckily, mapping the difference in alignment in your head is not very difficult.

After a couple of minutes, I could make out the point in the finder scope where an object, if placed, would show up properly in the eyepiece.

With that done, it was just mesmerizing to see the Moon in a bit more detail than I have seen in all these years of my life.

The Moon (four photos)

The images are not exactly what we saw with our eyes; the view was much more vivid than these pictures. But as a first timer, I really wanted to capture this first moment of a closer view of the Moon.

Of the whole process - the ground work of studying telescopes, the installation of the telescope, the astronomy basics and many other things - the most difficult part of this entire journey was pointing my phone at the viewing eyepiece to get a shot of the object. This requirement introduced me to astrophotography.

And then, Dobsonians aren’t the best model for astrophotography, from what I’ve learnt so far. Hopefully, I’ll find ways to do some DIY astrophotography with the tools I have, or extend my arsenal over time.

But overall, we’ve been very pleased with the subject of Astronomy. It is a different feel altogether and we’re glad to have forayed into it.

Steve Kemp: Writing an assembler.

3 October, 2020 - 11:30

Recently I've been writing a couple of simple compilers, which take input in a particular format and generate assembly language output. This output can then be piped through gcc to generate a native executable.

Public examples include this trivial math compiler and my brainfuck compiler.

Of course there's always the nagging thought that relying upon gcc (or nasm) is a bit of a cheat. So I wondered: how hard is it to write an assembler? Something that would take an assembly-language program and generate a native (ELF) binary?

And the answer is "It isn't hard, it is just tedious".

I found some code to generate an ELF binary, and after that, assembling simple instructions was pretty straightforward. I remember from my assembly-language days that the encoding of instructions can pretty much be handled by tables, but I've not yet gone into that.

(Specifically there are instructions like "add rax, rcx", and the encoding specifies the source/destination registers - with different forms for various sized immediates.)
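
A sketch of the table idea (not code from my assembler; Python used just for illustration): on x86-64, "mov r64, imm64" is a REX.W prefix (0x48), then opcode 0xB8 plus the register number, then the immediate as 8 little-endian bytes. A table maps register names to numbers:

    import struct

    # Register numbers for the classic 64-bit registers; r8-r15 would
    # additionally need the REX.B bit, omitted here for brevity.
    REGISTERS = {"rax": 0, "rcx": 1, "rdx": 2, "rbx": 3,
                 "rsp": 4, "rbp": 5, "rsi": 6, "rdi": 7}

    def encode_mov_reg_imm64(reg, imm):
        """Encode "mov r64, imm64": REX.W, 0xB8+reg, 8-byte immediate."""
        return bytes([0x48, 0xB8 + REGISTERS[reg]]) + struct.pack("<Q", imm)

    print(encode_mov_reg_imm64("rdx", 13).hex())  # 48ba0d00000000000000

Extending this to the other forms - smaller immediates, register-to-register moves with a ModRM byte - is exactly the tedium those tables are for.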

Anyway, I hacked up a simple assembler; it can compile a.out from this input:

.hello   DB "Hello, world\n"
.goodbye DB "Goodbye, world\n"

        mov rdx, 13        ;; write this many characters
        mov rcx, hello     ;; starting at the string
        mov rbx, 1         ;; output is STDOUT
        mov rax, 4         ;; sys_write
        int 0x80           ;; syscall

        mov rdx, 15        ;; write this many characters
        mov rcx, goodbye   ;; starting at the string
        mov rax, 4         ;; sys_write
        mov rbx, 1         ;; output is STDOUT
        int 0x80           ;; syscall


        xor rbx, rbx       ;; exit-code is 0
        xor rax, rax       ;; syscall will be 1 - so set to zero, then increase
        inc rax            ;;
        int 0x80           ;; syscall

The obvious omission is support for "JMP", "JMP_NZ", etc. That's painful because jumps are encoded with relative offsets. For the moment, if you want to jump:

        push foo     ; "jmp foo" - indirectly.
        ret

:bar
        nop          ; Nothing happens
        mov rbx,33   ; first syscall argument: exit code
        mov rax,1    ; system call number (sys_exit)
        int 0x80     ; call kernel

:foo
        push bar     ; "jmp bar" - indirectly.
        ret
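
The encoding itself isn't the hard part: a near jump is opcode 0xE9 followed by a signed 32-bit displacement, measured from the end of the 5-byte jump instruction. The painful part is that forward references need a fixup pass once the target's address is known. A quick sketch of the arithmetic (Python for illustration, not code from the assembler):

    import struct

    def encode_jmp_rel32(jmp_address, target_address):
        """Encode a near jump (0xE9 + rel32); the displacement is
        relative to the address of the *next* instruction."""
        rel32 = target_address - (jmp_address + 5)
        return b"\xE9" + struct.pack("<i", rel32)

    # Jumping from 0x400080 back to 0x400060:
    print(encode_jmp_rel32(0x400080, 0x400060).hex())  # e9dbffffff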

I'll update it to add some more instructions, and see if I can use it to handle the output I generate from a couple of other tools. If so, that's a win; if not, it was a fun learning experience.

Junichi Uekawa: Already october.

2 October, 2020 - 07:49
Already October. I tried moving to vscode from emacs, but I have so far only installed the editor. emacs is my workflow engine, so it's hard to migrate everything.

Ian Jackson: Mailman vs DKIM - a novel solution

2 October, 2020 - 03:35

tl;dr: Do not configure Mailman to replace the mail domains in From: headers. Instead, try out my small new program which can make your Mailman transparent, so that DKIM signatures survive.

Background and narrative DKIM

NB: This explanation is going to be somewhat simplified. I am going to gloss over some details and make some slightly approximate statements.

DKIM is a new anti-spoofing mechanism for Internet email, intended to help fight spam. DKIM, paired with the DMARC policy system, has been remarkably successful at stemming the flood of joe-job spams. As usually deployed, DKIM works like this:

When a message is originally sent, the author's MUA sends it to the MTA for their From: domain for outward delivery. The From: domain mailserver calculates a cryptographic signature of the message, and puts the signature in the headers of the message.

Obviously not the whole message can be signed, since at the very least additional headers need to be added in transit, and sometimes headers need to be modified too. The signing MTA gets to decide which parts of the message are covered by the signature: it nominates the header fields that are covered by the signature, and specifies how to handle the body.

A recipient MTA looks up the public key for the From: domain in the DNS, and checks the signature. If the signature doesn't match, depending on policy (originator's policy, in the DNS, and recipient's policy of course), typically the message will be treated as spam.

The originating site has a lot of control over what happens in practice. It gets to publish a formal (DMARC) policy in the DNS which advises recipients what they should do with mail claiming to be from its site. As mentioned, it can say which headers are covered by the signature - including the ability to sign the absence of a particular header - so it can control which headers downstreams can get away with adding or modifying. And it can set a normalisation policy, which controls how precisely the message must match the one that was sent.
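
For illustration, here is roughly what a DKIM-Signature header looks like (all values here are made up): d= and s= select the DNS record holding the public key, h= lists the signed headers, c= names the header/body normalisation, and bh= and b= carry the body hash and the signature itself.

    DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=sel2020;
            c=simple/simple; h=From:To:Subject:Date:Message-ID;
            bh=9RNtZa...=; b=KqwIBE...=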

Mailman

Mailman is, of course, the extremely popular mailing list manager. There are a lot of things to like about it. I choose to run it myself not just because it's popular but also because it provides a relatively competent web UI, relatively competent email (un)subscription interfaces, decent bounce handling, and a pretty good set of moderation and posting access controls.

The Xen Project mailing lists also run on mailman. Recently we had some difficulties with messages sent by Citrix staff (including myself), to Xen mailing lists, being treated as spam. Recipient mail systems were saying the DKIM signatures were invalid.

This was in fact true. Citrix has chosen a fairly strict DKIM policy; in particular, they have chosen "simple" normalisation - meaning that signed message headers must match precisely in syntax as well as in a semantic sense. Examining the failing-DKIM messages showed that this was definitely a factor.

Applying my Opinions about email

My Bayesian priors tend to suggest that a mail problem involving corporate email is the fault of the corporate email. However in this case that doesn't seem true to me.

My starting point is that I think mail systems should not modify messages unnecessarily. None of the DKIM-breaking modifications made by Mailman seemed necessary to me. I have on previous occasions gone to corporate IT and requested quite firmly that things I felt were broken should be changed. But it seemed wrong to go to corporate IT and ask them to change their published DKIM/DMARC policy to accommodate a behaviour in Mailman which I didn't agree with myself. I felt that I should instead (with my Xen Project hat on) put my own house in order.

Getting Mailman not to modify messages

So, I needed our Mailman to stop modifying the headers. I needed it to not even reformat them. A brief look at the source code to Mailman showed that this was not going to be so easy. Mailman has a lot of features whose very purpose is to modify messages.

Personally, as I say, I don't much like these features. I think the subject line tags, CC list manipulations, and so on, are a nuisance and not really Proper. But they are definitely part of why Mailman has become so popular, and I can definitely see why the Mailman authors have done things this way. These features mean Mailman has to disassemble incoming messages, and then reassemble them again on output. It is very difficult to do that and still faithfully reassemble the original headers byte-for-byte in the case where nothing actually wanted to modify them. There are existing bug reports [1] [2] [3] [4]; I can see why they are still open.

Rejected approach: From:-mangling

This situation is hardly unique to the Xen lists. Many others have struggled with it. The best that anyone seems to have come up with so far is to turn on a new Mailman feature which rewrites the From: header of the messages that go through it, to contain the list's domain name instead of the originator's.

I think this is really pretty nasty. It breaks normal use of email, such as reply-to-author. It is having Mailman do additional mangling of the message in order to solve the problems caused by other undesirable manglings!

Solution!

As you can see, I asked myself: I want Mailman not to modify messages at all; how can I get it to do that? Given the existing structure of Mailman - with a lot of message-modifying functionality - that would really mean adding a bypass mode. It would have to spot, presumably depending on config settings, that messages were not to be edited; and then it would avoid disassembling and reassembling the message at all, bypassing the message modification stages. The message would still have to be parsed, of course - it's just that the copy sent out ought to be pretty much the incoming message.

When I put it to myself like that I had a thought: couldn't I implement this outside Mailman? What if I took a copy of every incoming message, and then post-process Mailman's output to restore the original?

It turns out that this is quite easy and works rather well!

outflank-mailman

outflank-mailman is a 233-line script, plus documentation, installation instructions, etc.

It is designed to run from your MTA, on all messages going into, and coming from, Mailman. On input, it saves a copy of the message in a sqlite database, and leaves a note in a new Outflank-Mailman-Id header. On output, it does some checks, finds the original message, and then combines the original incoming message with carefully-selected headers from the version that Mailman decided should be sent.
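
A heavily simplified sketch of the idea - this is not the real outflank-mailman code, and apart from the Outflank-Mailman-Id header all the names in it are illustrative. The real script also has to splice headers at the byte level, since re-serialising a parsed message is exactly the kind of reformatting we are trying to avoid; Python's email module is used here purely for brevity:

    import email
    import sqlite3
    import uuid

    db = sqlite3.connect("outflank.db")
    db.execute("CREATE TABLE IF NOT EXISTS msgs (id TEXT PRIMARY KEY, raw BLOB)")

    def on_input(raw):
        """Going into Mailman: stash the pristine message, tag it with an id."""
        msg_id = str(uuid.uuid4())
        db.execute("INSERT INTO msgs VALUES (?, ?)", (msg_id, raw))
        db.commit()
        msg = email.message_from_bytes(raw)
        msg["Outflank-Mailman-Id"] = msg_id
        return msg.as_bytes()

    def on_output(raw):
        """Coming out of Mailman: recover the original, then graft on a
        few carefully-selected headers that Mailman added."""
        mailman_msg = email.message_from_bytes(raw)
        msg_id = mailman_msg["Outflank-Mailman-Id"]
        row = db.execute("SELECT raw FROM msgs WHERE id = ?", (msg_id,)).fetchone()
        orig = email.message_from_bytes(row[0])
        for header in ("List-Id", "List-Unsubscribe"):  # illustrative selection
            if header in mailman_msg and header not in orig:
                orig[header] = mailman_msg[header]
        return orig.as_bytes()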

This was deployed for the Xen Project lists on Tuesday morning and it seems to be working well so far.

If you administer Mailman lists, and fancy some new software to address this problem, please do try it out.

Matters arising - Mail filtering, DKIM

Overall I think DKIM is a helpful contribution to the fight against spam (unlike SPF, which is fundamentally misdirected and also broken). Spam is an extremely serious problem; most receiving mail servers experience more attempts to deliver spam than real mail, by orders of magnitude. But DKIM is not without downsides.

Inherent in the design of anything like DKIM is that arbitrary modification of messages by list servers is no longer possible. In principle it might be possible to design a system which tolerated modifications reasonable for mailing lists but it would be quite complicated and have to somehow not tolerate similar modifications in other contexts.

So DKIM means that lists can no longer add those unsubscribe footers to mailing list messages. The "new" way (RFC2369, July 1998) to do this is with the List-Unsubscribe header, as illustrated below. Hopefully a good MUA will be able to deal with unsubscription semiautomatically, and I think by now an adequate MUA should at least display these headers by default.
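
Such a header looks something like this (values illustrative):

    List-Unsubscribe: <https://lists.example.org/mailman/options/mylist>,
        <mailto:mylist-request@lists.example.org?subject=unsubscribe>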

Sender:

There are implications for recipient-side filtering too. The "traditional" correct way to spot mailing list mail was to look for Resent-To:, which can be added without breaking DKIM; the "new" (RFC2919, March 2001) correct way is List-Id:, likewise fine. But during the initial deployment of outflank-mailman I discovered that many subscribers were detecting that a message was list traffic by looking at the Sender: header. I'm told that some mail systems (apparently Microsoft's included) make it inconvenient to filter on List-Id.

Really, I think a mailing list ought not to be modifying Sender:. Given Sender:'s original definition and semantics, there might well be reasonable reasons for a mailing list posting to have a different From:, and then the original Sender: ought not to be lost. And a mailing list's operation does not fit well into the original definition of Sender:. I suspect that list software likes to put in Sender: mostly for historical reasons; notably, a long time ago it was not uncommon for broken mail systems to send bounces to the Sender: header rather than the envelope sender (SMTP MAIL FROM).

DKIM makes this more of a problem. Unfortunately the DKIM specifications are vague about what headers one should sign, but they pretty much definitely include Sender: if it is present, and some materials encourage signing the absence of Sender:. The latter is Exim's default configuration when DKIM-signing is enabled.

Frankly, there seems little excuse for systems to not readily support and encourage filtering on List-Id, 20 years later, but I don't want to make life hard for my users. For now we are running a compromise configuration: if there wasn't a Sender: in the original, take Mailman's added one. This will result in (i) misfiltering for some messages whose poster put in a Sender:, and (ii) DKIM failures for messages whose originating system signed the absence of a Sender:. I'm going to mine the db for some stats after it's been deployed for a week or so, to see which of these problems is worse and decide what to do about it.

Mail routing

For DKIM to work, messages being sent From: a particular mail domain must go through a system trusted by that domain, so they can be signed.

Most users tend to do this anyway: their mail provider gives them an IMAP server and an authenticated SMTP submission server, and they configure those details in their MUA. The MUA has a notion of "accounts" and according to the user's selection for an outgoing message, connects to the authenticated submission service (usually using TLS over the global internet).

Trad unix systems where messages are sent using the local sendmail or localhost SMTP submission (perhaps by automated systems, or perhaps by human users) are fine too. The smarthost can do the DKIM signing.

But this solution is awkward for a user of a trad MUA in what I'll call "alias account" setups: where a user has an address at a mail domain belonging to different people than the system on which they run their MUA (perhaps even several such aliases for different hats). Traditionally this worked by the mail domain forwarding the incoming mail, and the user simply self-declaring their identity at the alias domain. Without DKIM there is nothing stopping anyone self-declaring their own From: line.

If DKIM is to be enabled for such a user (preventing people forging mail as that user), the user will have to somehow arrange that their trad unix MUA's outbound mail stream goes via their mail alias provider. For a single-user sending unix system this can be done with tolerably complex configuration in an MTA like Exim. For shared systems this gets more awkward and might require some hairy shell scripting etc.

edited 2020-10-01 21:22 and 21:35 +0100 to fix typos and 21:28 to linkify "my small program" in the tl;dr




Sylvain Beucler: Debian LTS and ELTS - September 2020

1 October, 2020 - 23:14

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In September, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 19.75h for LTS (out of my 30 max; all done) and 20h for ELTS (out of my 20 max; all done).

ELTS - Jessie

  • qemu: jessie triage: finish work started in August
  • qemu: backport 5 CVE fixes, perform virtual and physical testing, security upload ELA-283-1
  • libdbi-perl: global triage: clarifications, confirm incomplete and attempt to get upstream action, request new CVE following discussion with security team
  • libdbi-perl: backport 5 CVE fixes, test, security upload ELA-285-1

LTS - Stretch

  • qemu: stretch triage, while working on ELTS update; mark several CVEs unaffected, update patch/status
  • wordpress: global triage: reference new patches, request proper CVE to fix our temporary tracking
  • wordpress: revamp package: upgrade to upstream's stable 4.7.5->4.7.18 to ease future updates, re-apply missing patches, fix past regression and notify maintainer, security upload DLA-2371-1
  • libdbi-perl: common work with ELTS, security upload DLA-2386-1
  • public IRC team meeting

Documentation/Scripts

  • LTS/TestSuites/wordpress: new page with testsuite import and manual tests
  • LTS/TestSuites/qemu: minor update
  • wiki.d.o/Sympa: update Sympa while using it as a libdbi-perl reverse-dep test (update for newer versions, explain how to bootstrap admin access)
  • www.d.o/lts/security: import a couple missing announcements and notify uploaders about procedures
  • Check status for pdns-recursor, following user request
  • Check status for golang-1.7 / CVE-2019-9514 / CVE-2019-9512
  • Attempt to improve cooperation after seeing my work discarded and redone as-is, which sadly isn't the first time; no answer
  • Historical analysis of our CVE fixes: experiment to gather per-CVE tracker history

Molly de Blanc: Free Software Activities – September 2020

1 October, 2020 - 20:42

I haven’t done one of these in a while, so let’s see how it goes.

Debian

The Community Team has been busy. We’re planning a sprint to work on a bigger writing project and have some tough discussions that need to happen.
I personally have only worked on one incident, but we’ve had a few others come in.
I’m attempting to step down from the Outreach team, which is more work than I thought it would be. I had a very complicated relationship with the Outreach team. When no one else was there to take on making sure we did GSoC and Outreachy, I stepped up. It wasn’t really what I wanted to be doing, but it’s important. I’m glad to have more time to focus on other things that feel more aligned with what I’m trying to work on right now.

GNOME

In addition to, you know, work, I joined the Code of Conduct Committee. Always a good time! Rosanna and I presented at GNOME Onboard Africa Virtual about the GNOME CoC. It was super fun!

Digital Autonomy

Karen and I did an interview on FLOSS Weekly with Doc Searls and Dan Lynch. Super fun! I’ve been doing some more writing, which I still hope to publish soon, and a lot of organization on it. I’m also in the process of investigating some funding, as there are a few things we’d like to do that come with price tags. Separately, I started working on a video to explain the Principles. I’m excited!

Misc

I started a call that meets every other week where we talk about Code of Conduct stuff. Good peeps. Into it.
