Planet Debian

Planet Debian - https://planet.debian.org/

Evgeni Golov: building a simple KVM switch for 30€

18 January, 2021 - 20:25

Prompted by tweets from Lesley and Dave, I thought about KVM switches again and came up with a rather cheap solution to my individual situation (YMMV, as usual).

As I wrote last year, my desk has one monitor, keyboard and mouse, and two computers. Since writing that post I got a new (bigger) monitor, but also a USB switch again (a DIGITUS USB 3.0 Sharing Switch) - this time one that doesn't freak out my dock \o/

However, having to switch computers in two places (USB and monitor) is rather inconvenient, and getting a KVM switch that can do 4K@60Hz was out of the question.

Luckily, hackers gonna hack everything, and not only receipt printers (😉). There is a tool called ddcutil that can talk to your monitor and change various settings. And udev can execute commands when (USB) devices connect… You see where this is going?

After installing the package (available both in Debian and Fedora), we can inspect our system with ddcutil detect:

$ sudo ddcutil detect
Invalid display
   I2C bus:             /dev/i2c-4
   EDID synopsis:
      Mfg id:           BOE
      Model:
      Serial number:
      Manufacture year: 2017
      EDID version:     1.4
   DDC communication failed
   This is an eDP laptop display. Laptop displays do not support DDC/CI.

Invalid display
   I2C bus:             /dev/i2c-5
   EDID synopsis:
      Mfg id:           AOC
      Model:            U2790B
      Serial number:
      Manufacture year: 2020
      EDID version:     1.4
   DDC communication failed

Display 1
   I2C bus:             /dev/i2c-7
   EDID synopsis:
      Mfg id:           AOC
      Model:            U2790B
      Serial number:
      Manufacture year: 2020
      EDID version:     1.4
   VCP version:         2.2

The first detected display is the built-in one in my laptop, and those don't support DDC anyway. The second one is a ghost (see ddcutil#160), which we can ignore. But the third one is the one we can (and will) control. As this is the only valid display ddcutil found, we don't need to specify which display to talk to in the following commands. Otherwise we'd have to add something like --display 1 to them.

Running ddcutil capabilities will show us what the monitor is capable of (or what it thinks it is capable of; I've heard some monitors give rather buggy output here) -- we're mostly interested in the "Input Source" feature (Virtual Control Panel (VCP) code 0x60):

$ sudo ddcutil capabilities
…
   Feature: 60 (Input Source)
      Values:
         0f: DisplayPort-1
         11: HDMI-1
         12: HDMI-2
…

Seems mine supports it, and I should be able to switch inputs by jumping between 0x0f, 0x11 and 0x12. You can see the other values defined by the spec with ddcutil vcpinfo 60 --verbose; some monitors use the wrong values for their inputs 🙄. Let's see if ddcutil getvcp agrees that I'm using DisplayPort now:

$ sudo ddcutil getvcp 0x60
VCP code 0x60 (Input Source                  ): DisplayPort-1 (sl=0x0f)

And try switching to HDMI-1 using ddcutil setvcp:

$ sudo ddcutil setvcp 0x60 0x11

Cool, cool. So now we just need a way to trigger input source switching based on some event…

There are three devices connected to my USB switch: my keyboard, my mouse and my Yubikey. I do use the mouse and the Yubikey while the laptop is not docked too, so these are not good indicators that the switch has been turned to the laptop. But the keyboard is!

Let's see what vendor and product IDs it has, so we can write a udev rule for it:

$ lsusb
…
Bus 005 Device 006: ID 17ef:6047 Lenovo ThinkPad Compact Keyboard with TrackPoint
…

Okay, so let's call ddcutil setvcp 0x60 0x0f when the USB device 0x17ef:0x6047 is added to the system:

ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="17ef", ATTR{idProduct}=="6047", RUN+="/usr/bin/ddcutil setvcp 0x60 0x0f"

Save this rule to a file under /etc/udev/rules.d/ and reload udev:

$ sudo vim /etc/udev/rules.d/99-ddcutil.rules
$ sudo udevadm control --reload

And done! Whenever I connect my keyboard now, it will force my screen to use DisplayPort-1.
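
If you want to confirm that the rule actually fires, udevadm monitor can show events live while you replug the keyboard (a quick sanity check, not required for the setup):

$ sudo udevadm monitor --udev --subsystem-match=usb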

On my workstation, I deployed the same rule, but with ddcutil setvcp 0x60 0x11 to switch to HDMI-1 and my cheap not-really-KVM-but-in-the-end-KVM-USB-switch is done, for the price of one USB switch (~30€).

Note: if you want to use ddcutil with a Lenovo Thunderbolt 3 Dock (or any other dock using DisplayPort Multi-Stream Transport (MST)), you'll need kernel 5.10 or newer, which fixes a bug that prevented ddcutil from talking to the monitor using I²C.

Craig Small: Percent CPU for processes

18 January, 2021 - 16:39

The ps program gives a snapshot of the processes running on your Unix-like system. On most Linux installations, this will be the ps program from the procps project.

While you can get a lot of information from the tool, many of the fields need further explanation or can give “wrong” or confusing information; or, putting it another way, they provide the right information that looks wrong.

One of these confusing fields is the %CPU or pcpu field. You can see this as the third field with the ps aux command. You only really need the u option to see it, but ps aux is a pretty common invocation.

More than 100%?

This post was inspired by procps issue 186, where the submitter expected that the sum of %CPU across all processes cannot be more than the number of CPUs times 100%. If you have 1 CPU, then the sum of %CPU for all processes should be 100% or less; if you have 16 CPUs, then 1600% is your maximum number.

Some people put the oddity of over 100% CPU down to some rounding thing gone wrong, and at first I thought that too; except I know we get a lot of reports about the top header CPU load not lining up with the process loads, and it's because “they're different”.

The trick here is: ps is reporting a percentage of what? Or, perhaps to give a better clue, a percentage of when?

PCPU Calculations

So to get to the bottom of this, let’s look at the relevant code. In ps/output.c we have a function pr_pcpu that prints the percent CPU. The relevant lines are:

  total_time = pp->utime + pp->stime;
  if(include_dead_children) total_time += (pp->cutime + pp->cstime);
  seconds = cook_etime(pp);
  if(seconds) pcpu = (total_time * 1000ULL / Hertz) / seconds;

OK, ignoring the include_dead_children line (you get this from the S option; it means you also count the CPU time used by this process's waited-for children) and the scaling (process times are in jiffies; we keep the CPU value as 0 to 999 for reasons), you can reduce this down to:

%CPU = (T_utime + T_stime) / T_etime

So we find the amount of time the CPU(s) have been busy either in userland or the system, add them together, then divide the sum by the elapsed time. The utime and stime increment like a car's odometer. So if a process uses one jiffy of CPU time in userland, that counter goes to 1. If it does it again a few seconds later, then that counter goes to 2.

To give an example, if a process has run for ten seconds and within those ten seconds the CPU has been busy in userland for that process, then we get 10/10 = 100% which makes sense.

Not all Start times are the same

Let’s take another example: a process still consumes ten seconds of CPU time, but has been running for twenty seconds. The answer is 10/20, or 50%. On our single-CPU example system, both of these cannot be running at the same time, otherwise we would have 150% CPU utilisation, which is not possible.

However, let’s adjust this slightly. We have assumed uniform utilisation. But take the following scenario:

  • At time T: Process P1 starts and uses 100% CPU
  • At time T+10 seconds: Process P1 stops using CPU but still runs, perhaps waiting for I/O or sleeping.
  • Also at time T+10 seconds: Process P2 starts and uses 100% CPU
  • At time T+20 seconds we run the ps command and look at the %CPU column

The output for ps -o times,etimes,pcpu,comm would look something like:

    TIME ELAPSED %CPU COMMAND
      10      20   50 P1
      10      10  100 P2

What we will see is P1 has 10/20 or 50% CPU and P2 has 10/10 or 100% CPU. Add those up, and you have 150% CPU, magic!

The key here is the ELAPSED column. P1 has given you the CPU utilisation across 20 seconds of system time, and P2 the CPU utilisation across only 10 seconds. If you directly add them together, you get the wrong answer.
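
This is easy to reproduce if you want to see it yourself; a rough sketch (both processes will show up as yes in COMMAND, and your exact numbers will differ):

$ yes > /dev/null & P1=$!
$ sleep 10
$ kill -STOP "$P1"               # P1 still exists, but stops using CPU
$ yes > /dev/null & P2=$!
$ sleep 10
$ ps -o times,etimes,pcpu,comm -p "$P1,$P2"
$ kill -KILL "$P1" "$P2"         # clean up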

What’s the point of %CPU?

The %CPU column probably gives results that a lot of people are not expecting, so what's the point of it? Don't use it to see why the CPU is running hot; you can see above that those two processes were working the CPU hard at different times. What it is useful for is seeing how “busy” a process is, but be warned: it's an average over the process's lifetime. It's helpful for something that starts busy, but if a process idles or hardly uses CPU for a week and then goes bananas, you won't see it.

The top program, because a lot of its statistics are deltas from the last refresh, is a much better tool for this sort of information about what is happening right now.

Craig Small: test2

18 January, 2021 - 07:49

ignore this

Enrico Zini: OpenStreetMap maps links

18 January, 2021 - 06:00

Some interesting renderings of OpenStreetMap data:

If you want to print out local maps, MyOSMatic is a service to generate maps of cities using OpenStreetMap data. The generated maps are available in PNG, PDF and SVG formats and are ready to be printed.

On a terminal? Sure: MapSCII is a Braille & ASCII world map renderer for the console: telnet mapscii.me and explore the map with the mouse, keyboard arrows, and a and z to zoom, c to switch to block character mode, q to quit.

Alternatively you can fly over it, and you might have to dodge the rare map editing bug, or have fun landing on it: A typo created a 212-story monolith in ‘Microsoft Flight Simulator’

Wouter Verhelst: On Statements, Facts, Hypotheses, Science, Religion, and Opinions

17 January, 2021 - 21:03

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and whathaveyou.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional Volleyball player"; but the statement "Wouter Verhelst is a professional Volleyball player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional Volleyball player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future (which at this point gives it a 1 in 24,435,180 chance of being true). However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely, however, it will instead become a false statement.

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar that they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting) that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science cannot either prove or disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. They do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that no observance of a particular event happened when a scientist tried to observe something, but that this was only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there's three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a large cause, while conspiracy theorists only care about the unprovable hypotheses); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days.

Anyway...


Wouter Verhelst: Software available through Extrepo

17 January, 2021 - 21:03

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.
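
For example, enabling the vscodium repository described below looks like this (a minimal sketch; substitute any repository name from the list):

$ sudo apt install extrepo
$ sudo extrepo enable vscodium
$ sudo apt update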

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software
  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos, contain upstream versions of their respective software. These repositories contain software that is also available in Debian; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more) contain more recent versions of software packaged in Debian already by the same maintainer of that package repository. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by OpenStack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.
Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. Before you can enable such a repository, you must first allow its policy; this can be accomplished through /etc/extrepo/config.yaml.

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

Wouter Verhelst: SReview 0.6

17 January, 2021 - 21:03

... isn't ready yet, but it's getting there.

I had planned to release a new version of SReview, my online video review and transcoding system that I originally wrote for FOSDEM but that is being used for DebConf too, after it was set up and running properly for FOSDEM 2020. However, things got a bit busy (both in my personal life and in the world at large), so it fell a bit by the wayside.

I've now also been working on things a bit more, in preparation for an improved administrator's interface, and have started implementing a REST API to deal with talks etc through HTTP calls. This seems to be coming along nicely, thanks to OpenAPI and the Mojolicious plugin for parsing that. I can now design the API nicely, and autogenerate client side libraries to call them.

While I was at it, because libmojolicious-plugin-openapi-perl isn't available in Debian 10 "buster", I moved the Docker containers over from stable to testing. This revealed that both bs1770gain and inkscape changed their command lines incompatibly, resulting in me having to work around those incompatibilities. The good news is that I managed to do so in a way that keeps running SReview on Debian 10 viable, provided one installs Mojolicious::Plugin::OpenAPI from CPAN rather than from a Debian package. Or installs a backport of that package, of course. Or, heck, uses the Docker containers in a kubernetes environment or some such -- I'd love to see someone use that in production.

Anyway, I'm still finishing the API, and the implementation of that API and the test suite that ensures the API works correctly, but progress is happening; and as soon as things seem to be working properly, I'll do a release of SReview 0.6, and will upload that to Debian.

Hopefully that'll be soon.

Junichi Uekawa: Yesterday was our monthly Debian meeting.

17 January, 2021 - 08:39
Yesterday was our monthly Debian meeting. We do one every month for Tokyo and Kansai combined, because it's online and there's no reason to split for now. I presented what I learnt about nodejs packaging in Debian. This time, I started using Emacs for the presentation, presenting a PDF file. This week I switched most of my login shells to emacsclient, and am experimenting. It's interesting how much things break, and how much I depended on .bashrc and .profile being loaded. But most things work, and I don't need most things outside of emacs...

Christian Kastner: Keeping your Workstation Silent

17 January, 2021 - 04:27

I require moderately powerful boxes for my work. Moderately powerful boxes generate a non-negligible amount of heat, and therefore require appropriate cooling. Most cooling solutions involve a certain degree of noise, which I dislike.

Water-cooling is supposed to be the most effective solution for noise reduction, but it always felt like too much of a hassle for me to try.

Considering the various solutions I've tried in the past years, two observations I've made really stick out:

  1. Noctua's reputation for CPU cooling is well-earned.

  2. A "silent" (meaning sound-proofed) case, isn't. At least, not anymore.

I've tried numerous coolers in the past, some of monstrous proportions (always thinking that more mass must be better, and reputable brands are equally good), but I was never really satisfied; hence, I was doubtful that trying yet another cooler would make a difference. I'm glad I tried the Noctua NH-D15 anyway. With some tweaking to the fan profile in the BIOS, it's totally inaudible at normal to medium workloads, and just a very gentle hum at full load—subtle enough to disappear in the background.

For the past decade, I've also regularly purchased sound-proofed cases, but this habit appears anachronistic now. Years ago, sound-proofed cases helped contain the noise of a few HDDs. However, all of my boxes now contain NVMe drives (which, to me, are the biggest improvement to computing since CPUs going multi-core).

On the other hand, some of my boxes now contain powerful GPUs used for GPGPU computing, and with the recent higher-end Nvidia and AMD cards all pulling in over 300W, there is a lot of heat to manage. The best way to quickly dump heat is with good airflow. Sound-proofing works against that. Its insulation restricts airflow, which ultimately causes even more noise, as the GPU's fans need to spin at very high RPMs. This is, of course, totally obvious in hindsight.

I removed the front cover from a new GPU box, and the noise went down from almost-hair-dryer to acceptable-whoosh levels. I'm wondering whether a completely open box might not be even more effective.

Abhijith PA: Transition from Thunderbird to Mutt

16 January, 2021 - 11:37

I was doing OK with Thunderbird and enigmail (though it had many problems). Normally I go through changelogs before updating packages, and rarely do a complete upgrade of my machine. A couple of days ago I did a complete upgrade of the system, which updated my Thunderbird to the latest version and threw out the enigmail plugin in favour of the native OpenPGP support. There is a blog post from Mozilla which I should’ve read earlier. Thunderbird’s built-in OpenPGP functionality is still experimental, at least not ready for my workflow. I could’ve downgraded to version 68, but I chose to move to my secondary MUA, mutt. I had been using mutt for emails and newsletters that I check twice a year or so.

So I started configuring mutt to handle my big mailboxes. It took three evenings to configure mutt to my workflow. Though the basic setup can be done in less than an hour, it was the small nitpicks that consumed much of my time. Currently I have isync to pull and keep mails offline, mutt to read, msmtp to send, abook as the email address book and urlview to see the links in a mail. I am still learning notmuch and the virtual mailbox ways of filtering.
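
To give an idea of the glue involved, here is a minimal muttrc sketch wiring those tools together (the Maildir path and key binding are hypothetical; adjust to your isync setup):

set mbox_type = Maildir
set folder = ~/.mail/personal
set spoolfile = +Inbox
set sendmail = /usr/bin/msmtp
set query_command = "abook --mutt-query '%s'"
macro index,pager \cb "<pipe-message> urlview<Enter>" "follow links with urlview"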

There are a ton of articles out there about configuring mutt and everything related to it. But I found certain configs very hard to get right, so I will write those down.

  • As a long-time Thunderbird user, I still want my mailbox index to look a certain way. I used:

     set date_format="%d/%m/%y %I:%M %p"
     set index_format="%3C | %Z %?X?@& ? %-22.17L %-5.50s %> %-22.20D"
    

    This gives the following order - serial number, various flags (if there is an attachment, it will show ‘@’), sender, subject, and at the extreme right the date and time (12h local) of the mail

  • If you use nano to write mails, you can use
    set editor="nano --syntax=email -r 70 -b"
    

    to get a 70-character line length and mail-related syntax highlighting

  • mutt has a way to trigger new mail notifications. The new_mail_command setting can be used to execute a custom script upon new mail, such as running notify-send. There are many standalone mail notifiers such as mailnag, mail notification etc. They all just felt bulky to me, so I ended up making this:

    #!/bin/sh
    accounts="$(awk '/^Channel/ {print $2}' "$MBSYNCRC")"
    for account in $accounts; do
         acc="$(echo "$account" | sed "s/.*\///")"
         new=$(find "$MAILBOX/$acc/Inbox/new/" -type f -newer "$MUTT/.mailsynclastrun" 2> /dev/null)
         newcount=$(echo "$new" | sed '/^\s*$/d' | wc -l)
         if [ "$newcount" -gt "0" ]; then
                 for file in $new; do
                 # Extract subject and sender from mail.
                 from=$(awk '/^From: / && ++n ==1,/^\<.*\>:/' "$file" | perl -CS -MEncode -ne 'print decode("MIME-Header", $_)' | awk '{ $1=""; if (NF>=3)$NF=""; print $0 }' | sed 's/^[[:blank:]]*[\"'\''\<]*//;s/[\"'\''\>]*[[:blank:]]*$//')
                 subject=$(awk '/^Subject: / && ++n == 1,/^\<.*\>: / && ++i == 2' "$file" | head -n 1 | perl -CS -MEncode -ne 'print decode("MIME-Header", $_)' | sed 's/^Subject: //' | sed 's/^{[[:blank:]]*[\"'\''\<]*//;s/[\"'\''\>]*[[:blank:]]*$//' | tr -d '\n')
                 displays="$(pgrep -a Xorg | grep -wo "[0-9]*:[0-9]\+")"
                         for x in $displays; do
                                 export DISPLAY=$x
                                 notify-send -i $MUTT/mutt.png -t 5000 "$account received new message:" "$from: <i>$subject</i>"
                         done
                 done
         fi
    done
    touch "$MUTT/.mailsynclastrun" 
    

    And hooked it to new_mail_command. This code snippet is from lukesmith's mutt-wizard; I barely modified it to meet my needs. Currently it only looks into 'Inbox'; I need to modify it to check other mailbox folders in the future.

So far, everything is going okay.

Cons
  • sometimes mbsync throws EOF and "secret key not found" errors.
  • searching is still a pain in mutt
  • nano’s spell checker also checks the quoted text I am replying to.
More to come

Well, for now I have moved the mail part away from Thunderbird. But Thunderbird was more than a MUA to me: it was my RSS reader, calendar and to-do list manager. I will write more about those once I make a complete transition.

Junichi Uekawa: It's been 20 years since I became a Debian Developer.

16 January, 2021 - 07:22
It's been 20 years since I became a Debian Developer. Lots of fun things happened, and I think fondly of the team. I have not been active for the past 10 years due to family reasons, and it's surprising that I have been inactive for that long. I still use Debian, and I still participate in the local Debian meetings.

Dirk Eddelbuettel: Rcpp 1.0.6: Some Updates

16 January, 2021 - 06:24

The Rcpp team is proud to announce release 1.0.6 of Rcpp which arrived at CRAN earlier today, and has been uploaded to Debian too. Windows and macOS builds should appear at CRAN in the next few days. This marks the first release on the new six-months cycle announced with release 1.0.5 in July. As reminder, interim ‘dev’ or ‘rc’ releases will often be available in the Rcpp drat repo; this cycle there were four.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2174 packages on CRAN depend on Rcpp for making analytical code go faster and further (which is an 8.5% increase just since the last release), along with 207 in BioConductor.

This release features six different pull requests from five different contributors, mostly fixing fairly small corner cases, plus some minor polish on documentation and continuous integration. Before releasing we once again made numerous reverse dependency checks none of which revealed any issues. So the passage at CRAN was pretty quick despite the large dependency footprint, and we are once again grateful for all the work the CRAN maintainers do.

Changes in Rcpp patch release version 1.0.6 (2021-01-14)
  • Changes in Rcpp API:

    • Replace remaining few uses of EXTPTR_PTR with R_ExternalPtrAddr (Kevin in #1098 fixing #1097).

    • Add push_back and push_front for DataFrame (Walter Somerville in #1099 fixing #1094).

    • Remove a misleading-to-wrong comment (Mattias Ellert in #1109 cleaning up after #1049).

    • Address a sanitizer report by initializing two private bool variables (Benjamin Christoffersen in #1113).

    • External pointer finalizer toggle default values were corrected to true (Dirk in #1115).

  • Changes in Rcpp Documentation:

    • Several URLs were updated to https and/or new addresses (Dirk).
  • Changes in Rcpp Deployment:

    • Added GitHub Actions CI using the same container-based setup used previously, and also carried code coverage over (Dirk in #1128).
  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() avoids warning from R. (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2616 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub. My sincere thanks to my current sponsors for keeping me caffeinated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Michael Prokop: Revisiting 2020

16 January, 2021 - 06:07

Mainly to recall what happened last year and to give thoughts and plan for the upcoming year(s) I’m once again revisiting my previous year (previous editions: 2019, 2018, 2017, 2016, 2015, 2014, 2013 + 2012).

Due to the Coronavirus disease (COVID-19) pandemic, 2020 was special™ for several reasons, but overall I consider myself and my family privileged and am very grateful for that.

In terms of IT events, I planned to attend Grazer Linuxdays and DebConf in Haifa/Israel. Sadly Grazer Linuxdays didn’t take place at all, and DebConf took place online instead (which I didn’t really participate in for several reasons). I took part in the well organized DENOG12 + ATNOG 2020/1 online meetings. I still organize our monthly Security Treff Graz (STG) meetups, and for half of the year, those meetings took place online (which worked OK-ish overall IMO).

Only at the beginning of 2020 did I manage to play Badminton (still playing in the highest available training class (in German: “Kader”) at the University of Graz / Universitäts-Sportinstitut, USI). For the rest of the year – except for ~2 weeks in October or so – the sessions couldn’t take place.

Plenty of concerts I planned to attend were cancelled for obvious reasons, including the ones I would have played myself. But I managed to attend Jazz Redoute 2020 – Dom im Berg, Martin Grubinger in Musikverein Graz and Emiliano Sampaio’s Mega Mereneu Project at WIST Moserhofgasse (all before the corona situation kicked in). The concert from Tonč Feinig & RTV Slovenia Big Band occurred under strict regulations in Summer. At the beginning of 2020, I also visited Literaturshow “Roboter mit Senf” at Literaturhaus Graz.

The lack of concerts and rehearsals also severely impacted my playing the drums (including at HTU BigBand Graz), which pretty much didn’t take place. :(

Grml-wise we managed to publish release 2020.06, codename Ausgehfuahangl. Regarding jenkins-debian-glue I tried to clarify its state and received some really lovely feedback.

I consider 2020 as the year where I dropped regular usage of Jabber (so far my accounts still exist, but I’m no longer regularly online and am not sure for how much longer I’ll keep my accounts alive as such).

Business-wise it was our seventh year of business with SynPro Solutions GmbH. No big news but steady and ongoing work with my other business duties Grml Solutions and Grml-Forensic.

As usual, I shared childcare with my wife. Due to the corona situation, my wife got a new working schedule, which shuffled around our schedule a bit on Mondays + Tuesdays. Still, we managed to handle the homeschooling/distance learning quite well. Currently we’re sitting in the third lockdown, and yet another round of homeschooling/distance learning is going on those days (let’s see how long…). I counted 112 actual school days in all of 2020 for our older daughter with only 68 school days since our first lockdown on 16th of March, whereas we had 213(!) press conferences by our Austrian government in 2020. (Further rants about the situation in Austria snipped.)

Book reading-wise I managed to complete 60 books (see “Mein Lesejahr 2020“). Once again, I noticed that what felt like good days for me always included reading books, so I’ll try to keep my reading pace for 2021. I’ll also continue with my hobbies “Buying Books” and “Reading Books”, to get worse at Tsundoku.

Hoping for vaccination and a more normal 2021, Schwuppdiwupp!

Steinar H. Gunderson: Bullseye freeze

15 January, 2021 - 03:22

Bullseye is freezing! Yay! (And Trondheim is now below -10.)

It's too late for that kind of change now, but it would have been nice if plocate could have been the default for bullseye.

Surprisingly enough, mlocate has gone straight downhill.

It seems that since buster, there has been an override in place to change its priority away from standard, and I haven't been able to find anyone who could tell me why. (It is known that it was requested to be moved away from standard for cloud images, which makes a lot of sense, but not for desktop/server images.)

Perhaps for bookworm, we can get a locate back in the default install? plocate really is a much better user experience, in my (admittedly biased) opinion. :-)

Antoine Beaupré: New phone: Pixel 4a

14 January, 2021 - 03:50

I'm sorry to announce that I gave up on the Fairphone series and switched to a Google Phone (Pixel 4a) running CalyxOS.

Problems in fairy land

My fairphone2, even though it is less than two years old, is having major problems:

  • from time to time, the screen flickers and loses "touch" until I squeeze it back together
  • the camera similarly disconnects regularly
  • even when it works, the camera is... pretty bad: low light is basically unusable, it's slow and grainy
  • the battery can barely keep up for one day
  • the cellular coverage is very poor, in Canada: I lose signal at the grocery store and in the middle of my house...

Some of those problems are known: the Fairphone 2 is old now. It was probably old even when I got it. But I can't help but feel a little sad to let it go: the entire point of that device was to make it easy to fix. But alas, because it's sold only in Europe, local stores don't carry replacement parts. To be fair, Fairphone did offer to fix the device, but with a two-week turnaround, I had to get another phone anyway.

I did actually try to buy a Fairphone 3, from Clove. But they have some crazy validation routine: by email, they asked me to provide a photocopy of my driver's license and the credit card, arguing they need to do this to combat fraud. I found that totally unacceptable and asked them to cancel my order. And because I'm not sure the FP3 will fix the coverage issues, I decided to just give up on Fairphone until they officially ship to the Americas.

Do no evil, do not pass go, do not collect 200$

So I got a Google phone, specifically a Pixel 4a. It's a nice device, all small and shiny, but it's "plasticky" - I would have preferred metal, but it seems you need to pay much, much more to get that (in the Pixel 5).

In any case, it's certainly a better form factor than the Fairphone 2: even though the screen is bigger, the device itself is actually smaller and thinner, which feels great. The OLED screen is beautiful, awesome contrast and everything, and preliminary tests show that the camera is much better than the one on the Fairphone 2. (To be fair, again, that is another thing the FP3 improved significantly. And that is with the stock Camera app from CalyxOS/AOSP, so not as good as the Google Camera app, which does AI stuff.)

CalyxOS: success

The Pixel 4a is not supported by LineageOS: it seems every time I pick a device from that list, I manage to miss the right device by one (I bought a Samsung S9 before, which is also unsupported, even though the S8 is). But thankfully, it is supported by CalyxOS.

That install was a breeze: I was hesitant about installing a custom Android firmware on a phone again after fighting with this quite a bit in the past (e.g. htc-one-s, lg-g3-d852). But it turns out their install instructions, mostly using an AOSP-alliance device-flasher, work absolutely great. They assume you know about the command line, and they do require you to basically curl | sudo (because you need to download their binary and run it as root), but it Just. Works. It reminded me of how great it was to get the Fairphone with TWRP preinstalled...

Oh, and kudos to the people in #calyxos on Freenode: awesome tech support, super nice folks. An amazing improvement over the ambiance in #lineageos!

Migrating data

Unfortunately, migrating the data was the usual pain in the back. This should improve the next time I do this: CalyxOS ships with seedvault, a secure backup system for Android 10 (or 9?) and later which backs up everything (including settings!) with encryption. Apparently it works great, and CalyxOS is also working on a migration system to switch phones.

But, obviously, I couldn't use that on the Fairphone 2 running Android 7... So I had to, again, improvise. The first step was to install Syncthing, to have an easy way to copy data around. That's easily done through F-Droid, already bundled with CalyxOS (including the privileged extension!). Pair the devices and boom, a magic portal to copy stuff over.

The other early step I took was to copy apps over using the F-Droid "find nearby" functionality. It's a bit quirky, but really helps in copying a bunch of APKs over.

Then I setup a temporary keepassxc password vault on the Syncthing share so that I could easily copy-paste passwords into apps. I used to do this in a text file in Syncthing, but copy-pasting in the text file is much harder than in KeePassDX. (I just picked one, maybe KeePassDroid is better? I don't know.) Do keep a copy of the URL of the service to reduce typing as well.

Then the following apps required special tweaks:

  • AntennaPod has an import/export feature: export on one end, into the Syncthing share, then import on the other. then go to the queue and select all episodes and download
  • the Signal "chat backup" does copy the secret key around, so you don't get the "security number change" warning (even if it prompts you to re-register) - external devices need to be relinked though
  • AnkiDroid, DSub, Nextcloud, and Wallabag required copy-pasting passwords

I tried to sync contacts with DAVx5, but that didn't work so well: the account was set up correctly, but contacts didn't show up. There's probably just one thing I need to do to fix this, but since I don't really need synced contacts, it was easier to export a VCF file to Syncthing and import it again.

Known problems

One problem with CalyxOS I found is that the fragile little microg tweaks didn't seem to work well enough for Signal. That was unexpected so they encouraged me to file that as a bug.

The other "issue" is that the bootloader is locked, which makes it impossible to have "root" on the device. That's rather unfortunate: I often need root to debug things on Android. But I guess that most things just work out of the box now, so I don't really need it and appreciate the extra security.

OSMand still doesn't have a good import/export story. I ended up sharing the Android/data/net.osmand.plus/files directory and importing waypoints, favorites and tracks by hand. Even though maps are actually in there, it's not possible for Syncthing to write directly to the same directory on the new phone, "thanks" to the new permission system in Android which forbids this kind of inter-app messing around.

Tracks are particularly a problem: my older OSMand setup had all those folders neatly sorting those tracks by month. This makes it really annoying to track every file manually and copy it over. I have mostly given up on that for now, unfortunately. And I'll still need to reconfigure profiles and maps and everything by hand. Sigh. I guess that's a good clearinghouse for my old tracks I never use...

Vincent Fourmond: Taking advantage of Ruby in QSoas

13 January, 2021 - 21:45
First of all, let me wish you all a happy new year, with all my wishes of health and success. I sincerely hope this year will be simpler for most people than last year!

For the first post of the year, I wanted to show you how to take advantage of Ruby, the programming language embedded in QSoas, to make various things, like:
  • creating a column with the sum of Y values;
  • extending values that are present only in a few lines;
  • renaming datasets using a pattern.
Summing the values in a column

When using commands that take formulas (Ruby code), like apply-formula, the code is run for every single point, for which all the values are updated. In particular, the state of the previous point is not known. However, it is possible to store values in what are called global variables, whose names start with a $ sign. Using these, we can keep track of the previous values. For instance, to create a new column with the sum of the y values, one can use the following approach:
QSoas> eval $sum=0
QSoas> apply-formula /extra-columns=1 $sum+=y;y2=$sum
The first line initializes the variable to 0, before we start summing, and the code in the second line is run for each dataset row, in order. For the first row, for instance, $sum is initially 0 (from the eval line); after the execution of the code, it is now the first value of y. After the second row, the second value of y is added, and so on. The image below shows the resulting y2 when used on:
QSoas> generate-dataset -1 1 x


Extending values in a column

Another use of global variables is to add "missing" data. For instance, let's imagine a file giving the variation of current over time as the potential is changed, but where the potential is only changed stepwise and only indicated when it changes:
## time	current	potential
0	0.1	0.5
1	0.2
2	0.3
3	0.2
4	1.2	0.6
5	1.3
...
If you need to have the values everywhere, for instance if you need to split on their values, you can again use a global variable, taking advantage of the fact that missing values are represented by QSoas using "Not A Number" values, which can be detected with the Ruby function nan?:
QSoas> apply-formula "if y2.nan?; then y2=$value; else $value=y2;end"
Note the need for quotes, because there are spaces in the Ruby code. If the value of y2 is NaN, that is, it is missing, then it is taken from the global variable $value; otherwise, $value is set to the current value of y2. Hence, the values are propagated down:
## time	current	potential
0	0.1	0.5
1	0.2	0.5
2	0.3	0.5
3	0.2	0.5
4	1.2	0.6
5	1.3	0.6
...
Of course, this doesn't work if the first value of y2 is missing.
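
If that is a problem, one can seed the global variable first, in the same spirit as the summing example above (the 0.5 here is just the initial potential of the example data):
QSoas> eval $value=0.5
QSoas> apply-formula "if y2.nan?; then y2=$value; else $value=y2;end"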

Renaming using a pattern

The command save-datasets can be used to save a whole series of datasets to the disk. It can also rename them on the fly and, using the /mode=rename option, do only the renaming part, without saving. You can make full use of meta-data (see also a first post here) for renaming. The full power is unlocked using the /expression= option. For instance, to rename the last 5 datasets (numbers 0 to 4) using a scheme based on the value of their pH meta-data, you can use the following code:
QSoas> save-datasets /mode=rename /expression='"dataset-#{$meta.pH}"' 0..4
The double quotes are cumbersome but necessary, since the outer quotes (') prevent the inner ones (") from being removed, and the inner quotes are there to indicate to Ruby that we are dealing with text. The bit inside #{...} is interpreted by Ruby as Ruby code; here it is $meta.pH, the value of the "pH" meta-data. Finally, the 0..4 specifies the datasets to work with. So these datasets will be renamed to dataset-7 for pH 7, and so on.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

John Goerzen: Remote Directory Tree Comparison, Optionally Asynchronous and Airgapped

13 January, 2021 - 00:53

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In the previous installment on store-and-forward backups, I mentioned how easy it is to do with ZFS, and some of the tools that can be used to do it without ZFS. A lot of those tools are a bit less robust, so we need some sort of store-and-forward mechanism to verify backups. To be sure, verifying backups is good with ANY scheme, and this could be used with ZFS backups also.

So let’s say you have a shiny new backup scheme in place, and you’d like to verify that it’s working correctly. To do that, you need to compare the source directory tree on machine A with the backed-up directory tree on machine B.

Assuming a conventional setup, here are some ways you might consider to do that:

  • Just copy everything from machine A to machine B and compare locally
  • Or copy everything from machine A to a USB drive, plug that into machine B, and compare locally
  • Use rsync in dry-run mode and see if it complains about anything

The first two options are not particularly practical for large datasets, though I note that the second is compatible with airgapping. Using rsync requires both systems to be online at the same time to perform the comparison.

What would be really nice here is a tool that would write out lots of information about the files on a system: their names, sizes, last modified dates, maybe even sha256sum and other data. This file would be far smaller than the directory tree itself, would compress nicely, and could be easily shipped to an airgapped system via NNCP, UUCP, a USB drive, or something similar.

Tool choices

It turns out there are already quite a few tools in Debian (and other Free operating systems) to do this, and half of them are named mtree (though, of course, not all mtrees are compatible with each other.) We’ll look at some of the options here.

I’ve made a simple test directory for illustration purposes with these commands:

mkdir test
cd test
echo hi > hi
ln -s hi there
ln hi foo
touch empty
mkdir emptydir
mkdir somethingdir
cd somethingdir
ln -s ../there

I then also used touch to set all files to a consistent timestamp for illustration purposes.
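
One way to do that is with find and touch; the timestamp itself is arbitrary, and -depth makes find visit a directory's contents before the directory itself, so touching the files doesn't bump already-set directory mtimes:

find test -depth -exec touch -h -d '2021-01-12 03:23:53' {} +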

Tool option: getfacl (Debian package: acl)

This comes with the acl package, but can be used for purposes other than ACLs. Unfortunately, it doesn't come with a tool to directly compare its output with a filesystem (setfacl, for instance, can apply the permissions listed but won't compare). It ignores symlinks and doesn't show sizes or dates, so it is ineffective for our purposes.

Example output:

$ getfacl --numeric -R test
...
# file: test/hi
# owner: 1000
# group: 1000
user::rw-
group::r--
other::r--
...

Tool option: fmtree, the FreeBSD mtree (Debian package: freebsd-buildutils)

fmtree can prepare a “specification” based on a directory tree, and compare a directory tree to that specification. The comparison also is aware of files that exist in a directory tree but not in the specification. The specification format is a bit on the odd side, but works well enough with fmtree. Here’s a sample output with defaults:

$ fmtree -c -p test
...
# .
/set type=file uid=1000 gid=1000 mode=0644 nlink=1
.               type=dir mode=0755 nlink=4 time=1610421833.000000000
    empty       size=0 time=1610421833.000000000
    foo         nlink=2 size=3 time=1610421833.000000000
    hi          nlink=2 size=3 time=1610421833.000000000
    there       type=link mode=0777 time=1610421833.000000000 link=hi

... skipping ...

# ./somethingdir
/set type=file uid=1000 gid=1000 mode=0777 nlink=1
somethingdir    type=dir mode=0755 nlink=2 time=1610421833.000000000
    there       type=link time=1610421833.000000000 link=../there
# ./somethingdir
..

..

You might be wondering here what it does about special characters, and the answer is that it has octal escapes, so it is 8-bit clean.

To compare, you can save the output of fmtree to a file, then run like this:

fmtree -c -p test > test.fmtree
cd test
fmtree < ../test.fmtree

If there is no output, then the trees are identical. Change something and you get a line of output explaining each difference. You can also use fmtree -U to change things like modification dates to match the specification.

fmtree also supports quite a few optional keywords you can add with -K. They include things like file flags, user/group names, various types of hashes, and so forth. I'll note that none of the options lets you determine which files are hardlinked together.

Here's an excerpt with -K sha256digest added:

    empty       size=0 time=1610421833.000000000 \
                sha256digest=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    foo         nlink=2 size=3 time=1610421833.000000000 \
                sha256digest=98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4

If you include a sha256digest in the spec, then when you verify it with fmtree, the verification will also include the sha256digest. Obviously fmtree -U can't correct a mismatch there, but of course it will detect and report it.
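
Putting that together, the round trip with digests looks something like this (a sketch, reusing the test directory from above):

$ fmtree -c -K sha256digest -p test > test.fmtree
$ cd test
$ fmtree < ../test.fmtree

Because the digests are stored in the specification, the verification pass checks them without needing -K again.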

Tool option: mtree, the NetBSD mtree (Debian package: mtree-netbsd)

mtree produces (by default) output very similar to fmtree's. With minor differences (such as the name of the sha256digest keyword in the output), the discussion above about fmtree also applies to mtree.

There are some differences, and the most notable is that mtree adds a -C option which reads a spec and converts it to a "format that's easier to parse with various tools." Here's an example:

$ mtree -c -K sha256digest -p test | mtree -C
. type=dir uid=1000 gid=1000 mode=0755 nlink=4 time=1610421833.0 flags=none 
./empty type=file uid=1000 gid=1000 mode=0644 nlink=1 size=0 time=1610421833.0 flags=none 
./foo type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./hi type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=hi time=1610421833.0 flags=none 
./emptydir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir/there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=../there time=1610421833.0 flags=none 

Most definitely an improvement in both space and convenience, while still retaining the relevant information. Note that if you want the sha256digest in the formatted output, you need to pass the -K to both mtree invocations. I could have done that here, but it is easier to read without it.

mtree can verify a specification in either format. Given what I'm about to show you about bsdtar, this should illustrate why I bothered to package mtree-netbsd for Debian.

Unlike fmtree, the mtree -U command will not adjust modification times based on the spec, but it will report on differences.
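
A full create-and-verify cycle with the NetBSD mtree might thus look like this (a sketch; -f names the specification file to verify against):

$ mtree -c -K sha256digest -p test > test.mtree
$ mtree -f test.mtree -p test

As with fmtree, silence means the tree matches the specification.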

Tool option: bsdtar (Debian package: libarchive-tools)

bsdtar is a fascinating program that can work with many formats other than just tar files. Among the formats it supports is the NetBSD mtree "pleasant" format (mtree -C compatible).

bsdtar can also convert between the formats it supports. So, put this together: bsdtar can convert a tar file to an mtree specification without extracting the tar file. bsdtar can also use an mtree specification to override the permissions on files going into tar -c, so it is a way to prepare a tar file with things owned by root without resorting to tools like fakeroot.
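
As a sketch of that last trick (the file names here are made up; the #mtree signature line is how bsdtar recognizes the format, and the contents keyword names the file on disk to read):

$ cat input.mtree
#mtree
usr/bin type=dir uid=0 gid=0 mode=0755
usr/bin/mytool type=file uid=0 gid=0 mode=0755 contents=./mytool
$ bsdtar -cf output.tar @input.mtree

The resulting output.tar contains usr/bin/mytool owned by root, even though nothing here ran as root.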

Let's look at how the mtree generation side can work:

$ cd test
$ bsdtar --numeric -cf - --format=mtree .

. time=1610472086.318593729 mode=755 gid=1000 uid=1000 type=dir
./empty time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=0
./foo nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./hi nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./ormat\075mtree time=1610472086.318593729 mode=644 gid=1000 uid=1000 type=file size=5632
./there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=hi
./emptydir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir/there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=../there

You can use mtree -U to verify that as before. By passing --options with mtree: keywords, you can also add hashes and the like to the bsdtar output. Since bsdtar can use input from tar, pax, cpio, zip, iso9660, 7z, etc., this capability can be used to create verification data for the files inside quite a few different formats. You can convert with bsdtar -cf output.mtree --format=mtree @input.tar. There are some foibles with directly using these converted files with mtree -U, but usually minor changes will get it there.
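
For instance, this sketch (the sha256 keyword is among those accepted by libarchive's mtree writer) would add SHA-256 digests to the generated specification:

$ bsdtar -cf - --format=mtree --options='mtree:sha256' .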

Side mention: stat(1) (Debian package: coreutils)

This tool isn't covered in detail because it won't operate recursively, but it belongs in the same toolbox.

Putting It Together

I will still be developing a complete non-ZFS backup system for NNCP (or UUCP) in a future post. But in the meantime, here are some ideas you can reflect on:

  • Let's say your backup scheme involves sending a full backup every night. On the source system, you could pipe the generated tar file through something like tee >(bsdtar -cf backup.mtree --format=mtree @-) to generate an mtree file in-band while generating the tar file. This mtree file could be shipped over for verification; see the sketch after this list.
  • Perhaps your backup scheme involves sending incremental backup data via rdup or even ZFS, but you would like to periodically verify that everything is good -- that an incremental didn't miss something. Something like mtree -K sha256 -c -x -p / | mtree -C -K sha256 would let you accomplish that.
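
Here is a sketch of the first idea (it relies on bash's process substitution; paths are illustrative, and in practice the tar stream would go to NNCP or similar rather than a local file):

$ tar -cpf - /srv/data | tee >(bsdtar -cf backup.mtree --format=mtree @-) > backup.tar

backup.tar travels to the backup destination as usual, while the much smaller backup.mtree can be shipped to an airgapped machine for later verification.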

I will further develop at least one of these ideas in a future post.

Bonus: cross-tool comparisons

In my mtree-netbsd packaging, I added tests like this to compare between tools:

fmtree -c -K $(MTREE_KEYWORDS) | mtree
mtree -c -K $(MTREE_KEYWORDS) | sed -e 's/\(md5\|sha1\|sha256\|sha384\|sha512\)=/\1digest=/' -e 's/rmd160=/ripemd160digest=/' | fmtree
bsdtar -cf - --options 'mtree:uname,gname,md5,sha1,sha256,sha384,sha512,device,flags,gid,link,mode,nlink,size,time,uid,type,uname' --format mtree . | mtree

Molly de Blanc: 1028 Words on Free Software

12 January, 2021 - 23:40

The promise of free software is a near-future utopia, built on democratized technology. This future is just and it is beautiful, full of opportunity and fulfillment for everyone everywhere. We can create the things we dream about when we let our minds wander into the places they want to. We can be with the people we want and need to be, when we want and need to.

This is currently possible with the technology we have today, but its availability is limited by the reality of the world we live in – the injustice, the inequity, the inequality. Technology runs the world, but it does not serve the interests of most of us. In order to create a better world, our technology must be transparent, accountable, trustworthy. It must be just. It must be free.

The job of the free software movement is to demonstrate that this world is possible by living its values now: justice, equity, equality. We build them into our technology, and we build technology that makes it possible for these values to exist in the world.

At the Free Software Foundation, we liked to say that we used all free software because it was important to show that we could. You can do anything with free software, so we did everything with it. We demonstrated the importance of unions for tech workers and non-profit workers by having one. We organized collectively and protected our rights for the sake of ourselves and one another. We had non-negotiable salaries, based on responsibility level and position. That didn’t mean we worked in an office free from the systemic problems that plague workplaces everywhere, but we were able to think about them differently.

Things were this way because of Richard Stallman – but I view his influence on these things as negative rather than positive. He was a cause that forced these outcomes, rather than being supportive of the desires and needs of others. Rather than indulge in gossip or stories, I would like to jump to the idea that he was supposed to have been deplatformed in October 2019. In resigning from his position as president of the FSF, he certainly lost some of his ability to reach audiences. However, Richard still gives talks. The FSF continues to use his image and rhetoric in their own messaging and materials. They gave him time to speak at their annual conference in 2020. He maintains leadership in the GNU project and otherwise within the FSF sphere. The people who empowered him for so many years are still in charge.

Richard, and the continued respect and space he is given, is not the only problem. It represents a bigger problem. Sexism and racism (among others) run rampant in the community. This happens because of bad actors and, more significantly, because of the complacency of organizations, projects, and individuals afraid of losing contributors, respect, or funding. In a sector that has so much money and so many resources, women are still being paid less than men; we deny people opportunities to learn and grow in the name of immediate results; people who aren’t men, who aren’t white, are abused and harassed; people are mentally and emotionally taken advantage of, and we are coerced into burnout and giving up our lives for these companies and projects, and we are paid for tolerating all of this by being told we’re doing a good job or making a difference.

But we’re not making a difference. We’re perpetuating the worst of the status quo that we should be fighting against. We must not continue. We cannot. We need to live our ideals as they are, and take the natural next steps in their evolution. We cannot have a world of just technology when we live in a world of exclusion; we cannot have free software if we continue to allow, tolerate, and laud the worst of us. I’ve been in and around free software for seventeen years. Nearly every part of it I’ve participated in has members and leadership that benefit from allowing and encouraging the continuation of maleficence and systemic oppression.

We must purge ourselves of these things – of sexism, racism, injustice, and the people who continue and enable it. There is no space to argue over whether a comment was transphobic – if it hurt a trans person then it is transphobic and it is unacceptable. Racism is a global problem and we must be anti-racist or we are complicit. Sexism is present and all men benefit from it, even if they don’t want to. These are free software issues. These are things that plague software, and these are things software reinforces within our societies.

If a technology is exclusionary, it does not work. If a community is exclusionary, it must be fixed or thrown away. There is no middle ground here. There is no compromise. Without doing this, without taking the hard, painful steps to actually live the promise of user freedom and everything it requires and entails, our work is pointless and free software will fail.

I don’t think it’s too late for there to be a radical change – the radical change – that allows us to create the utopia we want to see in the world. We must do that by acknowledging that just technology leads to a just society, and that a just society allows us to make just technology. We must do that by living within the principles that guide this future now.

I don’t know what will happen if things don’t change soon. I recently saw someone comment that change doesn’t happen unless one person is willing to sacrifice everything to make that change, to lead and inspire others to play small parts. This is unreasonable to ask of or expect from someone. I’ve been burning myself out to meet other people’s expectations for seventeen years, and I can’t keep doing it. Of course I am not alone, and I am not the only one working on and occupied by these problems. More people must step up, not just for my sake, but for the sake of all of us, the work free software needs to do, and the future I dream about.

Petter Reinholdtsen: Latest Jami back in Debian Testing, and scriptable using dbus

12 January, 2021 - 23:00

After a lot of hard work by its maintainer Alexandre Viau and others, the decentralized communication platform Jami (earlier known as Ring) managed to get its latest version into Debian Testing. Several of its dependencies have caused build and propagation problems, which all seem to be solved now.

In addition to the fact that Jami is decentralized, similar to how bittorrent is decentralized, I first of all like how it is not connected to external IDs like phone numbers. This allows me to set up computers to send me notifications using Jami without having to get a phone number for each computer. Automatic notification via Jami is also made trivial thanks to the provided client side API (as a DBus service). Here is my bourne shell script demonstrating how to let any system send a message to any Jami address. It will create a new identity before sending the message, if no Jami identity exists already:

#!/bin/sh
#
# Usage: $0 <dest-address> <message>
#
# Send <message> to <dest-address>, create local jami account if
# missing.
#
# License: GPL v2 or later at your choice
# Author: Petter Reinholdtsen


if [ -z "$HOME" ] ; then
    echo "error: missing \$HOME, required for dbus to work"
    exit 1
fi

# First, get dbus running if not already running
DBUSLAUNCH=/usr/bin/dbus-launch
PIDFILE=/run/asterisk/dbus-session.pid
if [ -e $PIDFILE ] ; then
    . $PIDFILE
    if ! kill -0 $DBUS_SESSION_BUS_PID 2>/dev/null ; then
        unset DBUS_SESSION_BUS_ADDRESS
    fi
fi
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ] && [ -x "$DBUSLAUNCH" ]; then
    DBUS_SESSION_BUS_ADDRESS="unix:path=$HOME/.dbus"
    dbus-daemon --session --address="$DBUS_SESSION_BUS_ADDRESS" --nofork --nopidfile --syslog-only < /dev/null > /dev/null 2>&1 3>&1 &
    DBUS_SESSION_BUS_PID=$!
    (
        echo DBUS_SESSION_BUS_PID=$DBUS_SESSION_BUS_PID
        echo DBUS_SESSION_BUS_ADDRESS=\""$DBUS_SESSION_BUS_ADDRESS"\"
        echo export DBUS_SESSION_BUS_ADDRESS
    ) > $PIDFILE
    . $PIDFILE
fi

dringop() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

dringopreply() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session --print-reply \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

firstaccount() {
    dringopreply ConfigurationManager getAccountList | \
      grep string | awk -F'"' '{print $2}' | head -n 1
}

account=$(firstaccount)

if [ -z "$account" ] ; then
    echo "Missing local account, trying to create it"
    dringop ConfigurationManager addAccount \
      dict:string:string:"Account.type","RING","Account.videoEnabled","false"
    account=$(firstaccount)
    if [ -z "$account" ] ; then
        echo "unable to create local account"
        exit 1
    fi
fi

# Not using dringopreply to ensure $2 can contain spaces
dbus-send --print-reply --session \
  --dest=cx.ring.Ring \
  /cx/ring/Ring/ConfigurationManager \
  cx.ring.Ring.ConfigurationManager.sendTextMessage \
  string:"$account" string:"$1" \
  dict:string:string:"text/plain","$2" 
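
Assuming the script is saved as jami-send (the filename is mine, for illustration), sending a notification looks like this:

$ sh jami-send <dest-jami-address> "backup job finished"

The first argument is the destination Jami address, the second the message text.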

If you want to check it out yourself, visit the Jami project page to learn more, and install the latest Jami client from Debian Unstable or Testing.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Russell Coker: PSI and Cgroup2

12 January, 2021 - 13:50

In the comments on my post about Load Average Monitoring [1] an anonymous person recommended that I investigate PSI. As an aside, why do I get so many great comments anonymously? Don’t people want to get credit for having good ideas and learning about new technology before others?

PSI is the Pressure Stall Information subsystem for Linux that is included in kernels 4.20 and above, if you want to use it in Debian then you need a kernel from Testing or Unstable (Bullseye has kernel 4.19). The place to start reading about PSI is the main Facebook page about it, it was originally developed at Facebook [2].
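
The raw interface is a set of files under /proc/pressure. For example (the numbers here match the CPU figures from my monitoring output below):

$ cat /proc/pressure/cpu
some avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510

The avg values are running averages (over 10, 60 and 300 seconds) of the percentage of time that some tasks were stalled on the resource, and total is a cumulative stall time counter in microseconds.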

I am a little confused by the actual numbers I get out of PSI. While for the load average I can often see where the numbers come from (e.g. two processes each taking 100% of a core will give a load average of about 2), it’s difficult to work out where the PSI numbers come from. For my own use I decided to treat them as unscaled numbers that just indicate problems: a higher number is worse, and I don’t worry too much about what the number really means.

With the cgroup2 interface which is supported by the version of systemd in Testing (and which has been included in Debian backports for Buster) you get PSI files for each cgroup. I’ve just uploaded version 1.3.5-2 of etbemon (package mon) to Debian/Unstable which displays the cgroups with PSI numbers greater than 0.5% when the load average test fails.
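
Each cgroup gets its own cpu.pressure, memory.pressure and io.pressure files, so per-service numbers can be read directly (the path assumes the usual cgroup2 mount point; the numbers match the /system.slice line in the output below):

$ cat /sys/fs/cgroup/system.slice/cpu.pressure
some avg10=0.86 avg60=0.92 avg300=0.97 total=18238772699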

System CPU Pressure: avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510
/system.slice avg10=0.86 avg60=0.92 avg300=0.97 total=18238772699
/system.slice/system-tor.slice avg10=0.85 avg60=0.69 avg300=0.60 total=11996599996
/system.slice/system-tor.slice/tor@default.service avg10=0.83 avg60=0.69 avg300=0.59 total=5358485146

System IO Pressure: avg10=18.30 avg60=35.85 avg300=42.85 total=310383148314
 full avg10=13.95 avg60=27.72 avg300=33.60 total=216001337513
/system.slice avg10=2.78 avg60=3.86 avg300=5.74 total=51574347007
/system.slice full avg10=1.87 avg60=2.87 avg300=4.36 total=35513103577
/system.slice/mariadb.service avg10=1.33 avg60=3.07 avg300=3.68 total=2559016514
/system.slice/mariadb.service full avg10=1.29 avg60=3.01 avg300=3.61 total=2508485595
/system.slice/matrix-synapse.service avg10=2.74 avg60=3.92 avg300=4.95 total=20466738903
/system.slice/matrix-synapse.service full avg10=2.74 avg60=3.92 avg300=4.95 total=20435187166

Above is an extract from the output of the loadaverage check. It shows that tor is a major user of CPU time (the VM runs a Tor relay node and has close to 100% of one core devoted to that task). It also shows that Mariadb and Matrix are the main users of disk IO. When I installed Matrix the Debian package told me that using SQLite would give lower performance than MySQL, but that didn’t seem like a big deal as the server only has a few users. Maybe I should move Matrix to the Mariadb instance to improve overall system performance.

So far I have not written any code to display the memory PSI files. I don’t have a lack of RAM on systems I run at the moment and don’t have a good test case for this. I welcome patches from people who have the ability to test this and get some benefit from it.

We are probably about 6 months away from a new release of Debian and this is probably the last thing I need to do to make etbemon ready for that.

