Planet Debian

Planet Debian - https://planet.debian.org/

Emmanuel Kasper: Markdown CMS or Wiki for smallish website

9 October, 2022 - 18:20

Following my markdown craze, I am slowly starting to move my Dokuwiki based homesite to Grav, a flat file Markdown CMS.

PHP will always be PHP, but the documentation and usage seem sound (all config is either via an admin panel or editing YAML files) and it has professional support. I intend to use this Debian-based Dockerfile and Podman to deploy Grav.

Pandoc, the talented document converter, has support for Dokuwiki syntax, Markdown and PHP Markdown Extra, so I expect limited hurdles when converting the data.
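
If the conversion goes as planned, each page should need little more than a single Pandoc call; a sketch (the file names are hypothetical):

pandoc --from dokuwiki --to markdown_phpextra \
    --output page.md page.txt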

Sergio Talens-Oliag: Shared networking for Virtual Machines and Containers

9 October, 2022 - 16:45

This entry explains how I have configured a Linux bridge, dnsmasq and iptables to be able to run and interconnect different virtualization systems and containers on laptops running Debian GNU/Linux.

I’ve used different variations of this setup for a long time with VirtualBox and KVM for the Virtual Machines and Linux-VServer, OpenVZ, LXC and lately Docker or Podman for the Containers.

Required packages

I’m running Debian Sid with systemd and network-manager to configure the WiFi and Ethernet interfaces, but for the bridge I use bridge-utils with ifupdown (as I said, this setup is old; I guess ifupdown2 and ifupdown-ng would work too).

To start and stop the DNS and DHCP services and add NAT rules when the bridge is brought up or down I execute a script that uses:

  • ip from iproute2 to get the network information,
  • dnsmasq to provide the DNS and DHCP services (currently only the dnsmasq-base package is needed and it is recommended by network-manager, so it is probably installed),
  • iptables to configure NAT (for now docker kind of forces me to keep using iptables, but at some point I’d like to move to nftables).

To make sure you have everything installed you can run the following command:

sudo apt install bridge-utils dnsmasq-base ifupdown iproute2 iptables
Bridge configuration

The bridge configuration for ifupdown is available in the file /etc/network/interfaces.d/vmbr0:

# Virtual servers NAT Bridge
auto vmbr0
iface vmbr0 inet static
    address         10.0.4.1
    network         10.0.4.0
    netmask         255.255.255.0
    broadcast       10.0.4.255
    bridge_ports    none
    bridge_maxwait  0
    up              /usr/local/sbin/vmbridge ${IFACE} start nat
    pre-down        /usr/local/sbin/vmbridge ${IFACE} stop nat
Warning:

To use a separate file with ifupdown make sure that /etc/network/interfaces contains the line:

source /etc/network/interfaces.d/*

or add its contents to /etc/network/interfaces directly, if you prefer.

This configuration creates a bridge with the address 10.0.4.1 and assumes that the machines connected to it will use the 10.0.4.0/24 network; you can change the network address if you want, and as long as you use a private range that does not collide with the networks used in your Virtual Machines all should be OK.

Note:

All my configurations use IPv4 for now. I was planning to move some things to IPv6 not so long ago, but the truth is that I haven’t had the need nor the time, and the Spanish Internet providers are not helping either.

The vmbridge script is used to start the dnsmasq server and setup the NAT rules when the interface is brought up and remove the firewall rules and stop the dnsmasq server when it is brought down.

The vmbridge script

The vmbridge script launches an instance of dnsmasq, bound to the bridge interface (vmbr0 in our case), that is used as the DNS and DHCP server.

The DNS server reads the /etc/hosts file to publish local DNS names and forwards all the other requests to the dnsmasq server launched by NetworkManager that is listening on the loopback interface.

As this server already does caching, we disable caching on our own server; this has the added advantage that, if we change networks, new requests go to the new resolvers, because the DNS server handled by NetworkManager gets restarted and flushes its cache (this is useful if we connect to a new network that has internal DNS servers configured to do split DNS for internal services; with this model all requests get the internal address as soon as the DNS server is queried again).

The DHCP server is configured to provide IPs to unknown hosts for a sub range of the addresses on the bridge network and use fixed IPs if the /etc/ethers file has a MAC with a matching hostname on the /etc/hosts file.
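
As an illustration, a fixed lease needs one matching entry in each file (the MAC address, hostname and IP below are made up):

# /etc/ethers
52:54:00:aa:bb:cc testvm

# /etc/hosts
10.0.4.10 testvm

With those two entries in place dnsmasq (started with --read-ethers, as in the script below) will always hand 10.0.4.10 to the machine with that MAC address and will resolve the testvm name to that IP.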

To make things work with old DHCP clients the script also adds checksums to the DHCP packets using iptables (when the interface is not linked to a physical device the kernel does not add checksums, but we can fix it adding a rule on the mangle table).

If we want external connectivity we can pass the nat argument and then the script creates a MASQUERADE rule for the bridge network and enables IP forwarding.

The script source code is the following:

/usr/local/sbin/vmbridge
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
LOCAL_DOMAIN="vmnet"
MIN_IP_LEASE="192"
MAX_IP_LEASE="223"
# ---------
# FUNCTIONS
# ---------
get_net() {
  NET="$(
    ip a ls "${BRIDGE}" 2>/dev/null | sed -ne 's/^.*inet \(.*\) brd.*$/\1/p'
  )"
  [ "$NET" ] || return 1
}
checksum_fix_start() {
  iptables -t mangle -A POSTROUTING -o "${BRIDGE}" -p udp --dport 68 \
    -j CHECKSUM --checksum-fill 2>/dev/null || true
}
checksum_fix_stop() {
  iptables -t mangle -D POSTROUTING -o "${BRIDGE}" -p udp --dport 68 \
    -j CHECKSUM --checksum-fill 2>/dev/null || true
}
nat_start() {
  [ "$NAT" = "yes" ] || return 0
  # Configure NAT
  iptables -t nat -A POSTROUTING -s "${NET}" ! -d "${NET}" -j MASQUERADE
  # Enable forwarding (just in case)
  echo 1 >/proc/sys/net/ipv4/ip_forward
}
nat_stop() {
  [ "$NAT" = "yes" ] || return 0
  iptables -t nat -D POSTROUTING -s "${NET}" ! -d "${NET}" \
    -j MASQUERADE 2>/dev/null || true
}
do_start() {
  # Bridge address
  _addr="${NET%%/*}"
  # DHCP leases (between .MIN_IP_LEASE and .MAX_IP_LEASE)
  _dhcp_range="${_addr%.*}.${MIN_IP_LEASE},${_addr%.*}.${MAX_IP_LEASE}"
  # Bridge mtu
  _mtu="$(
    ip link show dev "${BRIDGE}" |
      sed -n -e '/mtu/ { s/^.*mtu \([0-9]\+\).*$/\1/p }'
  )"
  # Compute extra dnsmasq options
  dnsmasq_extra_opts=""
  # Disable gateway when not using NAT
  if [ "$NAT" != "yes" ]; then
    dnsmasq_extra_opts="$dnsmasq_extra_opts --dhcp-option=3"
  fi
  # Adjust MTU size if needed
  if [ -n "$_mtu" ] && [ "$_mtu" -ne "1500" ]; then
    dnsmasq_extra_opts="$dnsmasq_extra_opts --dhcp-option=26,$_mtu"
  fi
  # shellcheck disable=SC2086
  dnsmasq --bind-interfaces \
    --cache-size="0" \
    --conf-file="/dev/null" \
    --dhcp-authoritative \
    --dhcp-leasefile="/var/lib/misc/dnsmasq.${BRIDGE}.leases" \
    --dhcp-no-override \
    --dhcp-range "${_dhcp_range}" \
    --domain="${LOCAL_DOMAIN}" \
    --except-interface="lo" \
    --expand-hosts \
    --interface="${BRIDGE}" \
    --listen-address "${_addr}" \
    --no-resolv \
    --pid-file="${PIDF}" \
    --read-ethers \
    --server="127.0.0.1" \
    $dnsmasq_extra_opts
  checksum_fix_start
  nat_start
}
do_stop() {
  nat_stop
  checksum_fix_stop
  if [ -f "${PIDF}" ]; then
    kill "$(cat "${PIDF}")" || true
    rm -f "${PIDF}"
  fi
}
do_status() {
  if [ -f "${PIDF}" ] && kill -HUP "$(cat "${PIDF}")"; then
    echo "dnsmasq RUNNING"
  else
    echo "dnsmasq NOT running"
  fi
}
do_reload() {
  [ -f "${PIDF}" ] && kill -HUP "$(cat "${PIDF}")"
}
usage() {
  echo "Uso: $0 BRIDGE (start|stop [nat])|status|reload"
  exit 1
}
# ----
# MAIN
# ----
[ "$#" -ge "2" ] || usage
BRIDGE="$1"
OPTION="$2"
shift 2
NAT="no"
for arg in "$@"; do
  case "$arg" in
  nat) NAT="yes" ;;
  *) echo "Unknown arg '$arg'" && exit 1 ;;
  esac
done
PIDF="/var/run/vmbridge-${BRIDGE}-dnsmasq.pid"
case "$OPTION" in
start) get_net && do_start ;;
stop) get_net && do_stop ;;
status) do_status ;;
reload) get_net && do_reload ;;
*) echo "Unknown command '$OPTION'" && exit 1 ;;
esac
# vim: ts=2:sw=2:et:ai:sts=2
NetworkManager Configuration

The default /etc/NetworkManager/NetworkManager.conf file has the following contents:

[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=false

This means that it will leave interfaces managed by ifupdown alone and, by default, will send the connection DNS configuration to systemd-resolved if it is installed.

As we want to use dnsmasq for DNS resolution, but we don’t want NetworkManager to modify our /etc/resolv.conf we are going to add the following file (/etc/NetworkManager/conf.d/dnsmasq.conf) to our system:

/etc/NetworkManager/conf.d/dnsmasq.conf
[main]
dns=dnsmasq
rc-manager=unmanaged

and restart the NetworkManager service:

$ sudo systemctl restart NetworkManager.service

From now on NetworkManager will start a dnsmasq service that listens on 127.0.0.1:53 and forwards queries to the servers provided by the DHCP servers we connect to, but it will not touch our /etc/resolv.conf file.

Note:

We are going to use the dnsmasq service managed by NetworkManager because it is updated automatically, consumes very little memory and avoids the need for extra tricks to make the server that we will be using for DNS & DHCP notice external DNS configuration changes.

Configuring systemd-resolved

If we start using our own name server but our system has systemd-resolved installed, we will no longer need or use the DNS stub; programs that used it will now use our dnsmasq server directly, but we keep running systemd-resolved for the host programs that use its native API or access it through /etc/nsswitch.conf (when libnss-resolve is installed).

To disable the stub we add a /etc/systemd/resolved.conf.d/disable-stub.conf file to our machine with the following content:

# Disable the DNS Stub Listener, we use our own dnsmasq
[Resolve]
DNSStubListener=no

and restart systemd-resolved to make sure that the stub is stopped:

$ sudo systemctl restart systemd-resolved.service
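
As a quick sanity check (this command is not part of the setup itself) we can verify that nothing is bound to the stub address (127.0.0.53) any more:

$ sudo ss -tulpn | grep 127.0.0.53 || echo "stub listener disabled"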
Adjusting /etc/resolv.conf

First we remove the existing /etc/resolv.conf file (it does not matter if it is a link or a regular file) and then create a new one that contains at least the following line (we can add a search line if it is useful for us):

nameserver 10.0.4.1
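
For example, assuming the 10.0.4.1 bridge address used in this post, from a shell we can simply run:

$ sudo rm -f /etc/resolv.conf
$ echo 'nameserver 10.0.4.1' | sudo tee /etc/resolv.conf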

From now on we will be using the dnsmasq server launched when we bring up the vmbr0 for multiple systems:

  • as our main DNS server from the host (if we use the standard /etc/nsswitch.conf and libnss-resolve is installed it is queried first, but systemd-resolved uses it as a forwarder by default if needed),
  • as the DNS server of the Virtual Machines or containers that use DHCP for network configuration and attach their virtual interfaces to our bridge,
  • as the DNS server of docker containers that get the DNS information from /etc/resolv.conf (if we have entries that use loopback addresses the containers that don’t use the host network tend to fail, as those addresses inside the running containers are not linked to the loopback device of the host).
Testing

After all the configuration files and scripts are in place we just need to bring up the bridge interface and check that everything works:

$ # Bring interface up
$ sudo ifup vmbr0
$ # Check that it is available
$ ip a ls dev vmbr0
4: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
          group default qlen 1000
    link/ether 0a:b8:ef:b8:07:6c brd ff:ff:ff:ff:ff:ff
    inet 10.0.4.1/24 brd 10.0.4.255 scope global vmbr0
       valid_lft forever preferred_lft forever
$ # View the listening ports used by our dnsmasq servers
$ sudo ss -tulpan | grep dnsmasq
udp UNCONN 0 0  127.0.0.1:53     0.0.0.0:* users:(("dnsmasq",pid=1733930,fd=4))
udp UNCONN 0 0  10.0.4.1:53      0.0.0.0:* users:(("dnsmasq",pid=1705267,fd=6))
udp UNCONN 0 0  0.0.0.0%vmbr0:67 0.0.0.0:* users:(("dnsmasq",pid=1705267,fd=4))
tcp LISTEN 0 32 10.0.4.1:53      0.0.0.0:* users:(("dnsmasq",pid=1705267,fd=7))
tcp LISTEN 0 32 127.0.0.1:53     0.0.0.0:* users:(("dnsmasq",pid=1733930,fd=5))
$ # Verify that the DNS server works on the vmbr0 address
$ host www.debian.org 10.0.4.1
Using domain server:
Name: 10.0.4.1
Address: 10.0.4.1#53
Aliases:

www.debian.org has address 130.89.148.77
www.debian.org has IPv6 address 2001:67c:2564:a119::77
Managing running systems

If we want to update DNS entries and/or MAC addresses we can edit the /etc/hosts and /etc/ethers files and reload the dnsmasq configuration using the vmbridge script:

$ sudo /usr/local/sbin/vmbridge vmbr0 reload

That call sends a HUP signal to the running dnsmasq server, which reloads the files; after that we can refresh the DHCP addresses from the client machines or start using the new DNS names immediately.
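
On clients that use dhclient, the refresh can be done by releasing and re-requesting the lease; a sketch (the interface name is hypothetical, adjust it to the client machine):

$ sudo dhclient -r enp1s0
$ sudo dhclient enp1s0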

Russ Allbery: Tie::ShadowHash 2.01

8 October, 2022 - 23:09

Tie::ShadowHash is a small Perl module that allows one to stack an in-memory modifiable hash on top of a read-only hash obtained from anywhere you can get a hash in Perl (including a tied hash), functioning much like an overlay file system with the same benefits.

This release just (hopefully) fixes a few test suite failures, and is probably not worth announcing except that it bugs me to not announce each software release.

You can get the latest version from CPAN or the Tie::ShadowHash distribution page.

Steve Kemp: Trivial benchmarks of toy languages

8 October, 2022 - 18:15

Over the past few months (years?) I've posted on my blog about the various toy interpreters I've written.

I've used a couple of scripting languages/engines in my professional career, but in public I think I've implemented the handful that show up in the benchmark below: evalfilter, FOTH (a FORTH), BASIC, Monkey, YAL (a lisp) and TCL.

Each of these works in similar ways, and each of these filled a minor niche, or helped me learn something new. But of course there's always a question:

  • Which is fastest?

In the real world? It just doesn't matter. For me. But I was curious, so I hacked up a simple benchmark of calculating 12! (i.e. the factorial of 12).
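
For reference, the value every implementation has to produce, computed here with ordinary shell tools rather than any of the toy interpreters:

$ seq 12 | paste -sd'*' | bc
479001600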

The specific timings will vary based on the system which runs the test(s), but there's no threading involved so the relative performance is probably comparable.

Anyway the benchmark is simple, and I did it "fairly". By that I mean that I didn't try to optimize any particular test-implementation, I just wrote it in a way that felt natural.

The results? Evalfilter wins, because it compiles the program into bytecode, which can be executed pretty quickly. But I was actually shocked ("I wrote a benchmark; The results will blow your mind!") at the second and third result:

BenchmarkEvalFilterFactorial-4      61542     17458 ns/op
BenchmarkFothFactorial-4            44751     26275 ns/op
BenchmarkBASICFactorial-4           36735     32090 ns/op
BenchmarkMonkeyFactorial-4          14446     85061 ns/op
BenchmarkYALFactorial-4              2607    456757 ns/op
BenchmarkTCLFactorial-4               292   4085301 ns/op

Here we see that FOTH, my FORTH implementation, comes second. I guess this is an efficient interpreter too, because that too is essentially "bytecode". (Looking up words in a dictionary, which really maps to indexes to other words. The stack operations are reasonably simple and fast too.)

Number three? BASIC? I expected better from the other implementations to be honest. BASIC doesn't even use an AST (in my implementation), just walks tokens. I figured the TCO implemented by my lisp would make that number three.

Anyway the numbers mean nothing. Really. But still interesting.

Reproducible Builds: Reproducible Builds in September 2022

8 October, 2022 - 04:25

Welcome to the September 2022 report from the Reproducible Builds project! In our reports we try to outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.

David A. Wheeler reported to us that the US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF).

As David remarked in his post to our mailing list, it “expressly recommends having reproducible builds as part of ‘advanced’ recommended mitigations”. The publication of this document has been accompanied by a press release.

Holger Levsen was made aware of a small Microsoft project called oss-reproducible. Part of OSSGadget, a larger “collection of tools for analyzing open source packages”, the purpose of oss-reproducible is to:

analyze open source packages for reproducibility. We start with an existing package (for example, the NPM left-pad package, version 1.3.0), and we try to answer the question, Do the package contents authentically reflect the purported source code?

More details can be found in the README.md file within the code repository.

David A. Wheeler also pointed out that there are some potential upcoming changes to the OpenSSF Best Practices badge for open source software in relation to reproducibility. Whilst the badge programme has three certification levels (“passing”, “silver” and “gold”), the “gold” level includes the criterion that “The project MUST have a reproducible build”.

David reported that some projects have argued that this reproducibility criterion should be slightly relaxed as outlined in an issue on the best-practices-badge GitHub project. Essentially, though, the claim is that the reproducibility requirement doesn’t make sense for projects that do not release built software, and that timestamp differences by themselves don’t necessarily indicate malicious changes. Numerous pragmatic problems around excluding timestamps were raised in the discussion of the issue.

Sonatype, a “pioneer of software supply chain management”, issued a press release this month to report that they had found:

[…] a massive year-over-year increase in cyberattacks aimed at open source project ecosystems. According to early data from Sonatype’s 8th annual State of the Software Supply Chain Report, which will be released in full this October, Sonatype has recorded an average 700% jump in repository attacks over the last three years.

More information is available in the press release.

A number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb adding a redirect from /projects/ to /who/ in order to keep old or archived links working [], Jelle van der Waa adding a Rust programming language example for SOURCE_DATE_EPOCH [][] and Mattia Rizzolo including Protocol Labs amongst our project-level sponsors [].


Debian

There was a large amount of reproducibility work taking place within Debian this month:

  • The nfft source package was removed from the archive, and now all packages in Debian bookworm have a corresponding .buildinfo file. This can be confirmed and tracked on the associated page on the tests.reproducible-builds.org site.

  • Vagrant Cascadian announced on our mailing list an informal online sprint to help “clear the huge backlog of reproducible builds patches submitted” by performing NMU (Non-Maintainer Uploads). The first such sprint took place on September 22nd with the following results:

    • Holger Levsen:

      • Mailed #1010957 in man-db asking for an update and whether to remove the patch tag for now. This was subsequently removed and the maintainer started to address the issue.
      • Uploaded gmp to DELAYED/15, fixing #1009931.
      • Emailed #1017372 in plymouth and asked for the maintainer’s opinion on the patch. This resulted in the maintainer improving Vagrant’s original patch (and uploading it) as well as filing an issue upstream.
      • Uploaded time to DELAYED/15, fixing #983202.
    • Vagrant Cascadian:

      • Verified and updated a patch for mylvmbackup. (#782318)
      • Verified/updated patches for libranlip. (#788000, #846975 & #1007137)
      • Uploaded libranlip to DELAYED/10.
      • Verified patch for cclive. (#824501)
      • Uploaded cclive to DELAYED/10.
      • Vagrant was unable to reproduce the underlying issue within #791423 (linuxtv-dvb-apps) and so the bug was marked as “done”.
      • Researched #794398 (in clhep).

    The plan is to repeat these sprints every two weeks, with the next taking place on Thursday October 6th at 16:00 UTC on the #debian-reproducible IRC channel.

  • Roland Clobus posted his 13th update of the status of reproducible Debian ISO images on our mailing list. During the last month, Roland ensured that the live images are now automatically fed to openQA for automated testing after they have been shown to be reproducible. Additionally Roland asked on the debian-devel mailing list about a way to determine the canonical timestamp of the Debian archive. []

  • Following up on last month’s work on reproducible bootstrapping, Holger Levsen filed two bugs against the debootstrap and cdebootstrap utilities. (#1019697 & #1019698)

Lastly, 44 reviews of Debian packages were added, 91 were updated and 17 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated too, including the descriptions of cmake_rpath_contains_build_path [], nondeterministic_version_generated_by_python_param [] and timestamps_in_documentation_generated_by_org_mode []. Furthermore, two new issue types were created: build_path_used_to_determine_version_or_package_name [] and captures_build_path_via_cmake_variables [].

Other distributions

In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.

diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 222 and 223 to Debian, as well as made the following changes:

  • The cbfstools utility is now provided in Debian via the coreboot-utils package so we can enable that functionality within Debian. []

  • Looked into Mach-O support.

  • Fixed the try.diffoscope.org service by addressing a compatibility issue between glibc/seccomp that was preventing the Docker-contained diffoscope instance from spawning any external processes whatsoever []. I also updated the requirements.txt file, as some of the specified packages were no longer available [][].

In addition Jelle van der Waa added support for file version 5.43 [] and Mattia Rizzolo updated the packaging:

  • Also include coreboot-utils in the Build-Depends and Test-Depends fields so that it is available for tests. []
  • Use pep517 and pip to load the requirements. []
  • Remove packages in Breaks/Replaces that have been obsoleted since the release of Debian bullseye. []
Reprotest

reprotest is our end-user tool to build the same source code twice in widely and deliberately different environments, and to check whether the binaries produced by the builds have any differences. This month, reprotest version 0.7.22 was uploaded to Debian unstable by Holger Levsen, which included the following changes by Philip Hands:

  • Ensure that the setarch(8) utility can actually execute before including an architecture to test. []
  • Include all files matching *.*deb in the default artifact_pattern in order to archive all results of the build. []
  • Emit an error when building the Debian package if the Debian packaging version does not match the “Python” version of reprotest. []
  • Remove an unneeded invocation of the head(1) utility. []
Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Add a job to build reprotest from Git [] and use the correct Git branch when building it [].
  • Mattia Rizzolo:

    • Enable syncing of results from building live Debian ISO images. []
    • Use scp -p in order to preserve modification times when syncing live ISO images. []
    • Apply the shellcheck shell script analysis tool. []
    • In a build node wrapper script, remove some debugging code which was messing up calling scp(1) correctly [] and consequently add support to use both scp -p and regular scp [].
  • Roland Clobus:

    • Track and handle the case where the Debian archive gets updated between two live image builds. []
    • Remove a call to sudo(1) as it is not (or no longer) required to delete old live-build results. []
Contact

As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Edward Betts: Fish shell now has underscore as a number separator (my feature request)

7 October, 2022 - 20:05

In November 2021 I filed a feature request for the fish shell to add underscore as a thousand separator in numbers. My feature request has been implemented and is available in fish 3.5.0, released 16 June 2022.

The fish shell supports mathematical operations using the math command.

edward@x1c9 ~> math 2_000 + 22
2022
edward@x1c9 ~> 

The underscore can be used as a thousand separator, but there are other uses for a number separator. Here's a list taken from a post by Mathias Bynens about the number separator in JavaScript:

// A decimal integer literal with its digits grouped per thousand:
1_000_000_000_000
// A decimal literal with its digits grouped per thousand:
1_000_000.220_720
// A binary integer literal with its bits grouped per octet:
0b01010110_00111000
// A binary integer literal with its bits grouped per nibble:
0b0101_0110_0011_1000
// A hexadecimal integer literal with its digits grouped by byte:
0x40_76_38_6A_73
// A BigInt literal with its digits grouped per thousand:
4_642_473_943_484_686_707n 

Programming languages are gradually adding a number separator to their syntax; I think Perl was the first. Most languages use an underscore, but C++14 uses an apostrophe for the number separator.

Reproducible Builds (diffoscope): diffoscope 224 released

7 October, 2022 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 224. This version includes the following changes:

[ Mattia Rizzolo ]
* Fix rlib test failure with LLVM 15. Thanks to Gianfranco Costamagna
  (locutusofborg) for the patch.

You can find out more by visiting the project homepage.

Dirk Eddelbuettel: Rblpapi 0.3.14: Updates and Extensions

7 October, 2022 - 05:46

Version 0.3.14 of the Rblpapi package arrived on CRAN earlier today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the fourteenth release since the package first appeared on CRAN in 2016. It comprises a nice PR from Robert Harlow extending support to B-PIPE authentication (for those who have it) along with a few fixes made since the last release in January. The last one resulted from a kind assist by Tomas Kalibera, who pointed out how to overcome an absolute ‘rpath’ dynamic linker instruction (and, as I noticed, something I had already done in another package – ah well) so that we no longer require StagedInstall: yes.

The detailed list of changes follows below.

Changes in Rblpapi version 0.3.14 (2022-10-05)
  • Build configuration was generalized to consider local copies of library and headers (Dirk in #362)

  • A ticker symbol was corrected (Dirk in #368 addressing an issue #366 and #367)

  • Support for B-PIPE was added (Robert Harlow in #369 closing #342)

  • The package no longer requires staged installation thanks to an assist from Tomas Kalibera (Dirk in #373)

  • The retired package fts is no longer suggested (Dirk in #374 closing #372)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Matthew Garrett: Cloud desktops aren't as good as you'd think

6 October, 2022 - 15:31
Fast laptops are expensive, cheap laptops are slow. But even a fast laptop is slower than a decent workstation, and if your developers want a local build environment they're probably going to want a decent workstation. They'll want a fast (and expensive) laptop as well, though, because they're not going to carry their workstation home with them and obviously you expect them to be able to work from home. And in two or three years they'll probably want a new laptop and a new workstation, and that's even more money. Not to mention the risks associated with them doing development work on their laptop and then drunkenly leaving it in a bar or having it stolen or the contents being copied off it while they're passing through immigration at an airport. Surely there's a better way?

This is the thinking that leads to "Let's give developers a Chromebook and a VM running in the cloud". And it's an appealing option! You spend far less on the laptop, and the VM is probably cheaper than the workstation - you can shut it down when it's idle, you can upgrade it to have more CPUs and RAM as necessary, and you get to impose all sorts of additional neat security policies because you have full control over the network. You can run a full desktop environment on the VM, stream it to a cheap laptop, and get the fast workstation experience on something that weighs about a kilogram. Your developers get the benefit of a fast machine wherever they are, and everyone's happy.

But having worked at more than one company that's tried this approach, my experience is that very few people end up happy. I'm going to give a few reasons here, but I can't guarantee that they cover everything - and, to be clear, many (possibly most) of the reasons I'm going to describe aren't impossible to fix, they're simply not priorities. I'm also going to restrict this discussion to the case of "We run a full graphical environment on the VM, and stream that to the laptop" - an approach that only offers SSH access is much more manageable, but also significantly more restricted in certain ways. With those details mentioned, let's begin.

The first thing to note is that the overall experience is heavily tied to the protocol you use for the remote display. Chrome Remote Desktop is extremely appealing from a simplicity perspective, but is also lacking some extremely key features (eg, letting you use multiple displays on the local system), so from a developer perspective it's suboptimal. If you read the rest of this post and want to try this anyway, spend some time working with your users to find out what their requirements are and figure out which technology best suits them.

Second, let's talk about GPUs. Trying to run a modern desktop environment without any GPU acceleration is going to be a miserable experience. Sure, throwing enough CPU at the problem will get you past the worst of this, but you're still going to end up with users who need to do 3D visualisation, or who are doing VR development, or who expect WebGL to work without burning up every single one of the CPU cores you so graciously allocated to their VM. Cloud providers will happily give you GPU instances, but that's going to cost more and you're going to need to re-run your numbers to verify that this is still a financial win. "But most of my users don't need that!" you might say, and we'll get to that later on.

Next! Video quality! This seems like a trivial point, but if you're giving your users a VM as their primary interface, then they're going to do things like try to use Youtube inside it because there's a conference presentation that's relevant to their interests. The obvious solution here is "Do your video streaming in a browser on the local system, not on the VM" but from personal experience that's a super awkward pain point! If I click on a link inside the VM it's going to open a browser there, and now I have a browser in the VM and a local browser and which of them contains the tab I'm looking for WHO CAN SAY. So your users are going to watch stuff inside their VM, and re-compressing decompressed video is going to look like shit unless you're throwing a huge amount of bandwidth at the problem. And this is ignoring the additional irritation of your browser being unreadable while you're rapidly scrolling through pages, or terminal output from build processes being a muddy blur of artifacts, or the corner case of "I work for Youtube and I need to be able to examine 4K streams to determine whether changes have resulted in a degraded experience" which is a very real job and one that becomes impossible when you pass their lovingly crafted optimisations through whatever codec your remote desktop protocol has decided to pick based on some random guesses about the local network, and look everyone is going to have a bad time.

The browser experience. As mentioned before, you'll have local browsers and remote browsers. Do they have the same security policy? Who knows! Are all the third party services you depend on going to be ok with the same user being logged in from two different IPs simultaneously because they lost track of which browser they had an open session in? Who knows! Are your users going to become frustrated? Who knows oh wait no I know the answer to this one, it's "yes".

Accessibility! More of your users than you expect rely on various accessibility interfaces, be those mechanisms for increasing contrast, screen magnifiers, text-to-speech, speech-to-text, alternative input mechanisms and so on. And you probably don't know this, but most of these mechanisms involve having accessibility software be able to introspect the UI of applications in order to provide appropriate input or expose available options and the like. So, I'm running a local text-to-speech agent. How does it know what's happening in the remote VM? It doesn't, because it's just getting an a/v stream, so you need to run another accessibility stack inside the remote VM and the two of them are unaware of each other's existence and this works just as badly as you'd think. Alternative input mechanism? Good fucking luck with that, you're at best going to fall back to "Send synthesized keyboard inputs" and that is nowhere near as good as "Set the contents of this text box to this unicode string" and yeah I used to work on accessibility software maybe you can tell. And how is the VM going to send data to a braille output device? Anyway, good luck with the lawsuits over arbitrarily making life harder for a bunch of members of a protected class.

One of the benefits here is supposed to be a security improvement, so let's talk about WebAuthn. I'm a big fan of WebAuthn, given that it's a multi-factor authentication mechanism that actually does a good job of protecting against phishing, but if my users are running stuff inside a VM, how do I use it? If you work at Google there's a solution, but that does mean limiting yourself to Chrome Remote Desktop (there are extremely good reasons why this isn't generally available). Microsoft have apparently just specced a mechanism for doing this over RDP, but otherwise you're left doing stuff like forwarding USB over IP, and that means that your USB WebAuthn no longer works locally. It also doesn't work for any other type of WebAuthn token, such as a bluetooth device, or an Apple TouchID sensor, or any of the Windows Hello support. If you're planning on moving to WebAuthn and also planning on moving to remote VM desktops, you're going to have a bad time.

That's the stuff that comes to mind immediately. And sure, maybe each of these issues is irrelevant to most of your users. But the actual question you need to ask is what percentage of your users will hit one or more of these, because if that's more than an insignificant percentage you'll still be staffing all the teams that dealt with hardware, handling local OS installs, worrying about lost or stolen devices, and the glorious future of just being able to stop worrying about this is going to be gone and the financial benefits you promised would appear are probably not going to work out in the same way.

A lot of this falls back to the usual story of corporate IT - understand the needs of your users and whether what you're proposing actually meets them. Almost everything I've described here is a corner case, but if your company is larger than about 20 people there's a high probability that at least one person is going to fall into at least one of these corner cases. You're going to need to spend a lot of time understanding your user population to have a real understanding of what the actual costs are here, and I haven't seen anyone do that work before trying to launch this and (inevitably) going back to just giving people actual computers.

There are alternatives! Modern IDEs tend to support SSHing out to remote hosts to perform builds there, so as long as you're ok with source code being visible on laptops you can at least shift the "I need a workstation with a bunch of CPU" problem out to the cloud. The laptops are going to need to be more expensive because they're also going to need to run more software locally, but it wouldn't surprise me if this ends up being cheaper than the full-on cloud desktop experience in most cases.

Overall, the most important thing to take into account here is that your users almost certainly have more use cases than you expect, and this sort of change is going to have direct impact on the workflow of every single one of your users. Make sure you know how much that's going to be, and take that into consideration when suggesting it'll save you money.


Jonathan Dowland: git worktrees

6 October, 2022 - 14:04

I work on OpenJDK backports: taking a patch that was committed to a current version of JDK, and adapting it to an older one. There are four main OpenJDK versions that I am concerned with: the current version ("jdk"), 8, 11 and 17. These are all maintained in separate Git(Hub) repositories.

It's very useful to have access to the other JDKs when working on any particular version. For example, to backport a patch from the latest version to 17, where the delta is not too big, a lot of the time you can cherry-pick the patch unmodified. To do git cherry-pick <some-commit> in a git repository tracking JDK17, where <some-commit> is in "jdk", I need the "jdk" repository configured as a remote for my local jdk17 repository.
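
The remote setup itself is nothing special; a sketch (the remote name is my choice here, and the URL is the upstream GitHub repository):

$ git remote add jdk https://github.com/openjdk/jdk.git
$ git fetch jdk
$ git cherry-pick <some-commit>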

Maintaining completely separate local git repositories for all four JDK versions, with each of them having a subset of the others added as remotes, adds up to a lot of duplicated data on local storage.

For a little while I was exploring using shared clones: a local clone of another local git repository which shares some local metadata. This saves on some disc space, but it does not share the configuration for remotes: so I still have to add any other JDK versions I want as remotes in each shared clone (even if the underlying objects already exist in the shared metadata).

Then I discovered git worktree. The git repositories that I've used up until now have had exactly zero (for a bare clone) or one worktree: in other words, the check-out, the actual source code files. Git does actually support having more than one worktree, which can be achieved like so:

git worktree add --checkout \
    -b jdk8u-master \
    ../jdk.worktree.jdk8u \
    jdk8u-dev/master

The result (in this example) is a new checkout, in this case of a new local branch named jdk8u-master at the sibling directory path jdk.worktree.jdk8u, tracking the remote branch jdk8u-dev/master. Within that checkout, there is a file .git which contains a pointer (indirectly) to the main local repository path:

gitdir: /home/jon/rh/git/jdk/.git/worktrees/jdk.worktree.jdk8u-dev

The directory itself behaves exactly like the real one, in that I can see all the configured remotes, and other checked out branches in other worktree paths:

$ git branch
  JDK-8214520-11u
  JDK-8268362-11u
  8231111-jdk11u-merged
+ 8284977-jdk11u-dev
+ master
* 8237479-jdk8u-dev
Above, you can see that the current worktree is the branch 8237479-jdk8u-dev, marked (as usual) by the prefix *, and two other branches are checked out in other worktrees, marked by the prefix +.

I only need to configure one local git repository with all of the remotes I am concerned about; I can inspect, compare, cherry-pick, check out, etc. any objects from any of those branches from any of my work trees; there's only one .git directory with all the configuration and storage for the git blobs across all the versions.
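
For completeness, two related subcommands I didn't mention above: git worktree can also list the worktrees attached to a repository, and remove one that is no longer needed:

$ git worktree list
$ git worktree remove ../jdk.worktree.jdk8u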

Dirk Eddelbuettel: RVowpalWabbit 0.0.17: Maintenance

6 October, 2022 - 05:13

Almost one year to the week since the last maintenance release, we can announce another maintenance release, now at version 0.0.17, of the RVowpalWabbit package. The CRAN maintainers kindly and politely pointed out that I was (cough, cough) apparently the last maintainer who had packages that set StagedInstall: no. Guilty as charged.

RVowpalWabbit is one of the two packages; the other one will hopefully follow ‘shortly’. And while I long suspected linking aspects to be driving this (this is an old package; newer R packaging of the awesome VowpalWabbit is in rvw), I was plain wrong here. The cause was an absolute path to an included dataset, computed in an example, which then gets serialized. As Tomas Kalibera suggested, we can replace the constant with a function and all is well. So here is 0.0.17.

As noted before, there is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into VowpalWabbit from R go check it out.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: rewrite rule representation

5 October, 2022 - 03:29

I've begun writing up my PhD and, not for the first time, I'm pondering issues of how best to represent things. Specifically, rewrite rules.

Here's one way of representing an example rewrite rule:

streamFilter g . streamFilter f = streamFilter (g . f)

This is a fairly succinct representation. It's sort-of Haskell, but not quite. It's an equation. The left-hand side is a pattern: it's intended to describe not one expression but a family of expressions that match. The lower case individual letters g and f are free variables: labelled placeholders within the pattern that can be referred to on the right hand side. I haven't stipulated what defines a free variable and what is a more concrete part of the pattern. It's kind-of Haskell, and I've used the well-known operator . to represent the two stream operators (streamFilters) being connected together. (In practice, when we get to the system where rules are actually applied, the connecting operator is not going to be . at all, so this is also an approximation).

One thing I don't like about . here, despite its commonness, is having to read right-to-left. I adopted the habit of using the lesser-known >>> in a lot of my work (which is defined as (>>>) = flip (.)), which reads left-to-right. And then I have the reverse problem: people aren't familiar with >>>, and, just like ., it's a stand-in anyway.

Towards the beginning of my PhD, I spent some time inventing rewrite rules to operate on pairs of operators taken from a defined, known set. I began representing the rules much as in the example above. Later on, I wanted to encode them as real Haskell, in order to check them more thoroughly. I first encoded the above rule like this:

filterFilterPre     = streamFilter g . streamFilter f
filterFilterPost    = streamFilter (g . f)
prop_filterFilter s = filterFilterPre s == filterFilterPost s

This is real code: the operators were already implemented in StrIoT, and the final expression defined a property for QuickCheck. However, it's still not quite a rewrite rule. The left-hand side, which should be a pattern, is really a concrete expression. The names f and g are masquerading as free variables but are really concretely defined in a preamble I wrote to support running QuickCheck against these things: usually simple stuff like g = odd, etc.

Eventually, I had to figure out how to really implement rewrite rules in StrIoT. There were a few approaches I could take. One would be to express the rules in some abstract form like the first example (but with properly defined semantics) and write a parser for them: I really wanted to avoid doing that.

As a happy accident, the solution I landed on was enabled by the semantics of algebraic-graphs, a Graph library we adopted to support representing a stream-processing topology. I wrote more about that in data-types for representing stream-processing programs.

I was able to implement rewrite rules as ordinary Haskell functions. The left-hand side of the rewrite rule maps to the left-hand side (pattern) part of a function definition. The function body implements the right-hand side. The system that applies the rules attempts to apply each rewrite rule to every sub-graph of a stream-processing program. The rewrite functions therefore need to signal whether or not they're applicable at runtime. For that reason, the return type is wrapped in Maybe, and we provide a catch-all pattern for every rule which simply returns Nothing. The right-hand side implementation can be pretty thorny. On the left-hand side, the stream operator connector we've finally ended up with is Connect from algebraic-graphs.

Here's filter fuse, taken from the full ruleset:

filterFuse :: RewriteRule
filterFuse (Connect (Vertex a@(StreamVertex i (Filter sel1) (p:_) ty _ s1))
                    (Vertex b@(StreamVertex _ (Filter sel2) (q:_) _ _ s2))) =
    let c = a { operator    = Filter (sel1 * sel2)
              , parameters  = [[| (\p q x -> p x && q x) $(p) $(q) |]]
              , serviceTime = sumTimes s1 sel1 s2
              }
    in Just (removeEdge c c . mergeVertices (`elem` [a,b]) c)

filterFuse _ = Nothing

That's perhaps the simplest rule in the set. (See e.g. hoistOp for one of the worst!)

The question that remains to me, is, which representation, or representations, to use in the thesis? I'm currently planning to skip the abstract example I started with and start with the concrete Haskell encoding using QuickCheck. I'm not sure if it seems weird to have two completely separate implementations of the rules, but the simpler QuickCheck-checked rules are much closer to the "core essence" of the rules than the final implementation in StrIoT. And the derivation of the rules comes earlier in the timeline than the design work that led to the final StrIoT implementation. The middle-option is still compromised, however, by having concrete expressions pretending to be patterns. So I'm not sure.

Dima Kogan: mrcal 2.2 released

4 October, 2022 - 15:05

Today I released mrcal 2.2 (the release notes are available here). This release contains lots of medium-important internal improvements, and is a result of

  • Heavy dogfooding. I use these tools a lot every day, and many things are nicer, easier and work better in 2.2 than in 2.1
  • Not-yet-completed cool new functionality. Some of the required internal improvements for the big new features are being released here.

The biggest single new feature in this release is the interactive graphical tool for examining dense stereo results: accessed via mrcal-stereo --viz stereo.

The next pressing thing is improved documentation. The tour of mrcal is still a good overview of some of the functionality that makes mrcal unique and far better than traditional calibration tools. But it doesn't do a good job of demonstrating how you would actually use mrcal to diagnose and handle common calibration issues. I need to gather some releasable representative data, and write docs around that.

Then I'm going to start finishing the big new features in the roadmap (these are all already functional, but need polish):

  • Triangulation functions in the optimization loop for efficient structure-from-motion
  • Support for non-central projection to remove a significant source of bias present in very wide lenses
  • Improved projection uncertainty quantification to resolve accuracy and performance issues in the current projection uncertainty method

Russ Allbery: Review: The Dragon Never Sleeps

4 October, 2022 - 10:01

Review: The Dragon Never Sleeps, by Glen Cook

Publisher: Night Shade Books
Copyright: 1988
Printing: 2008
ISBN: 1-59780-099-6
Format: MOBI
Pages: 449

Canon Space is run, in a way, by the noble mercantile houses, who spread their cities, colonies, and mines through the mysterious Web that allows faster-than-light interstellar travel. The true rulers of Canon Space, though, are the Guardships: enormous, undefeatable starships run by crews who have become effectively immortal by repeated uploading and reincarnation. Or, in the case of the Deified, without reincarnation, observing, ordering, advising, and meddling from an entirely virtual existence. The Guardships have enforced the status quo for four thousand years.

House Tregesser thinks they have the means to break the stranglehold of the Guardships. They have contact with Outsiders from beyond Canon Space who can provide advanced technology. They have their own cloning technology, which they use to create backup copies of their elites. And they have Lupo Provik, a quietly brilliant schemer who has devoted his life to destroying Guardships.

This book was so bad. A more sensible person than I would have given up after the first hundred pages, but I was too stubborn. The stubbornness did not pay off.

Sometimes I pick up an older SFF novel and I'm reminded of how much the quality bar in the field has been raised over the past twenty years. It's not entirely fair to treat The Dragon Never Sleeps as typical of 1980s science fiction: Cook is far better known for his Black Company military fantasy series, this is one of his minor novels, and it's only been intermittently in print. But I'm dubious this would have been published at all today.

First, the writing is awful. It's choppy, cliched, awkward, and has no rhythm or sense of beauty. Here's a nearly random paragraph near the beginning of the book as a sample:

He hurled thunders and lightnings with renewed fury. The whole damned universe was out to frustrate him. XII Fulminata! What the hell? Was some malign force ranged against him?

That was his most secret fear. That somehow someone or something was using him the way he used so many others.

(Yes, this is one of the main characters throwing a temper tantrum with a special effects machine.)

In a book of 450 pages, there are 151 chapters, and most chapters switch viewpoint characters. Most of them also end with a question or some vaguely foreboding sentence to try to build tension, and while I'm willing to admit that sometimes works when used sparingly, every three pages is not sparingly.

This book is also weirdly empty of description for its size. We get a few set pieces, a few battles, and a sentence or two of physical description of most characters when they're first introduced, but it's astonishing how little of a mental image I had of this universe after reading the whole book. Cook probably describes a Guardship at some point in this book, but if he does, it completely failed to stick in my memory. There are aliens that everyone recognizes as aliens, so presumably they look different than humans, but for most of them I have no idea how. Very belatedly we're told one important species (which never gets a name) has a distinctive smell. That's about it.

Instead, nearly the whole book is dialogue and scheming. It's clear that Cook is intending to write a story of schemes and counter-schemes and jousting between brilliant characters. This can work if the dialogue is sufficiently sharp and snappy to carry the story. It does not.

"What mischief have you been up to, Kez Maefele?"

"Staying alive in a hostile universe."

"You've had more than your share of luck."

"Perhaps luck had nothing to do with it, WarAvocat. Till now."

"Luck has run out. The Ku Question has run its course. The symbol is about to receive its final blow."

There are hundreds of pages of this sort of thing.

The setting is at least intriguing, if not stunningly original. There are immortal warships oppressing human space, mysterious Outsiders, great house politics, and an essentially immortal alien warrior who ends up carrying most of the story. That's material for a good space opera if the reader slowly learns the shape of the universe, its history, and its landmarks and political factions. Or the author can decline to explain any of that. I suppose that's also a choice.

Here are some things that you may have been curious about after reading my summary, and which I'm still curious about after having finished the book: What laws do the Guardships impose and what's the philosophy behind those laws? How does the economic system work? Who built the Guardships originally, and how? How do the humans outside of Canon Space live? Who are the Ku? Why did they start fighting the humans? How many other aliens are there? What do they think of this? How does the Canon government work? How have the Guardships managed to remain technologically superior for four thousand years?

Even where the reader gets a partial explanation, such as what Web is and how it was built, it's an unimportant aside that's largely devoid of a sense of wonder. The one piece of world-building that this book is interested in is the individual Guardships and the different ways in which they've handled millennia of self-contained patrol, and even there we only get to see a few of them.

There is a plot with appropriately epic scope, but even that is undermined by the odd pacing. Five, ten, or fifty years sometimes goes by in a sentence. A war starts, with apparently enormous implications for Canon Space, and then we learn that it continues for years without warranting narrative comment. This is done without transitions and without signposts for the reader; it's just another sentence in the narration, mixed in with the rhetorical questions and clumsy foreshadowing.

I would like to tell you that at least the book has a satisfying ending that resolves the plot conflict that it finally reveals to the reader, but I had a hard time understanding why the ending even mattered. The plot was so difficult to follow that I'm sure I missed something, but it's not difficult to follow in the fun way that would make me want to re-read it. It's difficult to follow because Cook doesn't seem able to explain the plot in his head to the reader in any coherent form. I think the status quo was slightly disrupted? Maybe? Also, I no longer care.

Oh, and there's a gene-engineered sex slave in this book, who various male characters are very protective and possessive of, who never develops much of a personality, and who has no noticeable impact on the plot despite being a major character. Yay.

This was one of the worst books I've read in a long time. In retrospect, it was an awful place to start with Glen Cook. Hopefully his better-known works are also better-written, but I can't say I feel that inspired to find out.

Rating: 2 out of 10

Mike Hommey: Announcing git-cinnabar 0.6.0rc1

4 October, 2022 - 05:26

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.10?
  • Full rewrite of git-cinnabar in Rust.
  • Push performance is between twice and 10 times faster than 0.5.x, depending on scenarios.
  • Based on git 2.38.0.
  • git cinnabar fetch now accepts a --tags flag to fetch tags (see the command sketch after this list).
  • git cinnabar bundle now accepts a -t flag to give a specific bundlespec.
  • git cinnabar rollback now accepts a --candidates flag to list the metadata sha1 that can be used as target of the rollback.
  • git cinnabar rollback now also accepts a --force flag to allow any commit sha1 as metadata.
  • git cinnabar now has a self-update subcommand that upgrades it when a new version is available. The subcommand is only available when building with the self-update feature (enabled on prebuilt versions of git-cinnabar).
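
A quick sketch of the new flags in action (invocations assembled from the list above; exact required arguments and output may differ):

    # Fetch tags from the Mercurial upstream
    git cinnabar fetch --tags
    # List the metadata sha1s that can be used as rollback targets
    git cinnabar rollback --candidates
    # Upgrade to the latest release (needs a build with the self-update feature)
    git cinnabar self-update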

Shirish Agarwal: Death Certificate, Legal Heir, Succession Certificate, and Indian Bureaucracy.

4 October, 2022 - 04:00
Death Certificate

After waiting almost two to two and a half months, I finally got mum’s death certificate last week. A part of me was saddened, as it felt like I was putting the last nails in the coffin, or whatever it is (even though I’m an Agarwal); I just felt sad and awful. I had been told: just get a death certificate and your problems will be over. Some people wanted me to pay some amount under the table, which I didn’t want to be party to, and perhaps because of that it took a month or a month and a half longer; I later came to know that the certificate had been issued almost a month and a half earlier. The inflation over the last 8 years of the present Govt. has made the corrupt even more corrupt, all the while projecting that it is others who are corrupt. There have also been a few politicians who were caught red-handed, but then pieces of evidence & witnesses vanish overnight. I don’t really want to go in that direction, as it would make for unpleasant reading with no solutions at all unless the present Central Govt. goes out.

Intestate and Will

I came to know the word intestate; it was a new word/term for me. A lookup told me that intestate means dying without leaving a will. That legal term comes from U.K. law. I had read a long, long time back that almost all our laws were made or taken from U.K. law; IIRC, massive sections of the CrPC even today carry that colonial legacy. In the manifesto it shared with the public at the time of the election, the BJP had said it would remove a whole swathe of laws that don’t make sense in today’s environment. But when hard and good questions were asked, they trimmed a few, modified a few, and left most of them as they were. Sadly, most of the laws that they did modify increased Government control over people instead of decreasing it. It’s been 8 years and we still don’t have a privacy law. They had drafted something, but it was too vague and would have invited suits from day 1, so it is pretty much on the back burner :(. A good insight into what I mean is an article in The Hindu that I read a few days back. Once you read that article, I am sure you will have as many questions as I have, but sadly no answers.

Law is not supposed to be partisan, but today it is. I could cite examples from both U.S. and U.K. courts of progressive judgments and the way they go about them, but then again our people think they know better. (This again does not help me, apart from setting some kind of background of where we are.) I have also shared on this blog how African countries have been setting new records in transparency, and they did it almost 5 years back: for those new to the blog, several African countries have been broadcasting the proceedings of their Supreme Courts for almost 5 years now. I noticed it when a privacy law was being debated and a few handles that I follow on Twitter and elsewhere went and gave their submissions in their SC. It was fascinating to not only hear but also read about the case from multiple viewpoints.

And just to remind people, I am sharing all of this from Pune, the second-biggest city in Maharashtra, which has something like six million people plus probably a million or more transient students and casual laborers; but then again, that doesn’t help me other than providing some context for what I’m sharing.

Now, a couple of years back or more, I had asked mum to make a will; if she wanted to bequeath something to somebody else she could do that, and I had told her as much. There was some article in the Indian Express or elsewhere that told people what they should be doing, especially if they had crossed the barrier of age 60. For reasons best known to her, she refused, and now I have to figure out the right way to go about doing things.

Twitter Experiences

Before Twitter, a few people had been telling me I needed a legal heir certificate, while others said a succession certificate, and some claimed a death certificate is enough. I asked the same question on Twitter, hoping for at most 5-10 responses, but was overwhelmed by the response: I got something like 50-60 odd replies. Probably one of the better responses was given by Dr. Paras Jain, who shared the following –

“The answer is qualified. For movable assets, nothing is required. For a bank account, an LIC policy, or a flat with the society nomination done, nothing is required except the death certificate. However, each of them will insist on a notarized indemnity bond if the nomination is not done. Beyond that, whether a legal heir certificate plus everything else is demanded depends on the whims & fancy of each.” – Dr. Paras Jain. (Grammar cleaned up a little; the views are Dr. Paras’s.)

What was interesting for me is that most people didn’t just give me advice; many of them also shared their own experiences, what they did or went through. I was surprised to learn, for example, that a succession certificate can take up to 6 months or more. Part of me isn’t surprised, as I do know we have a huge pendency of cases in the District Courts and High Courts, leading all the way to the Supreme Court. India Today shared a brief article on the same and similar issues. Such delays have become far too common now.

Supertech Demolition and Others

Over the last couple of months, a number of high-profile demolitions have taken place, and in most cases the loss has been the homebuyers’. See, for example, the case of Supertech; a much more detailed article was penned by Moneylife. Just a couple of months back, the demolition of a few Muslims’ homes was being celebrated, yet when the flat of Shrikant Tyagi, a BJP leader, was partly demolished just 2-3 days back, there was a lot of hue and cry. Although we should be discussing these cases on the basis of legality rather than religion, somehow the idea has taken hold that there are two kinds of laws: one for the majority, the other for the minority. This has been going on for the last 8-odd years, hence you see different reactions to similar incidents. In all these cases, no strictures are passed against either the municipality or the lenders.

The most obvious question: let’s say, for argument’s sake, that I was a homeowner in Supertech and bought a flat for, say, 10 lakhs in 2012. According to the courts, today I am supposed to get 22 lakhs: the 10 lakh principal plus 12% simple interest for 10 years (10 × 0.12 × 10 = 12 lakhs of interest). Even if the builder were in a position to honor the order and did so, the homeowner would not get a house in the same area, as the circle rate would probably have at least quadrupled by then; the circle rate alone might be the above amount. The reason is very simple: a builder buys land on the cheap when there is no development around. The whole idea is that once development happens, with other builders also building flats, the area gets developed and they can sell the flats at a premium. Even circle rates get affected, as the builder pays under the table and asks the officers of the municipal authority to hike the circle rate every few months. Again, I won’t go into much depth, as the whole thing is rotten to the core. There are many such projects. I have shared Krishnaraj Rao’s videos on this blog a few times; I am sure there are a few good men like him. In the end, sadly, this is where we are.

P.S. – I haven’t shared any book reviews this week as this post itself has become too long. I may blog about a couple of books in the next couple of days; till later.

Paul Wise: FLOSS Activities September 2022

3 October, 2022 - 19:47
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes
Issues
Review
Administration
  • Debian QA services: deploy changes
  • Debian wiki: approve accounts
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC
Sponsors

All work was done on a volunteer basis.

Thorsten Alteholz: My Debian Activities in September 2022

3 October, 2022 - 19:01
FTP master

This month I accepted 226 and rejected 33 packages. The overall number of packages that got accepted was 232.

All in all I addressed about 60 RM-bugs and either simply removed the package or added a moreinfo tag. In total I spent 5 hours on this task.

Anyway, I have to repeat my comment from last month: please have a look at the removal page and check whether the created dak command is really what you wanted. It would also help if you checked the reverse dependencies and wrote a comment on whether they are important or can be ignored, or filed new bugs for them. Each removal must have one bug!

Debian LTS

This was the ninety-ninth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3111-1] mod-wsgi security update for one CVE
  • [#1020596] bullseye-pu: mod-wsgi/4.7.1-3+deb11u1
  • [DLA 3119-1] expat security update for one CVE
  • [DLA 3125-1] libvncserver security update for two CVEs
  • [DLA 3126-1] libsndfile security update for one CVE
  • [DLA 3127-1] libhttp-daemon-perl security update for one CVE
  • [DLA 3130-1] tinyxml security update for one CVE

I also started to work on frr.

Last but not least I did some days of frontdesk duties and took care of issues on security-master.

Debian ELTS

This month was the fiftieth ELTS month.

During my allocated time I uploaded:

  • [ELA-685-1] ntfs-3g security update of Stretch for eight CVEs
  • [ELA-686-1] expat security update of Jessie and Stretch for one CVE
  • [ELA-690-1] libvncserver security update of Stretch for one CVE

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

Debian IoT

This month I uploaded new upstream versions or improved packaging of:

Debian Mobcom

This month I started another upload session for new upstream versions:

Other stuff

This month I uploaded new packages:

Russ Allbery: Review: Jingo

3 October, 2022 - 10:27

Review: Jingo, by Terry Pratchett

Series: Discworld #21
Publisher: Harper
Copyright: 1997
Printing: May 2014
ISBN: 0-06-228020-1
Format: Mass market
Pages: 455

This is the 21st Discworld novel and relies on the previous Watch novels for characterization and cast development. I would not start here.

In the middle of the Circle Sea, the body of water between Ankh-Morpork and the desert empire of Klatch, a territorial squabble between one fishing family from Ankh-Morpork and one from Klatch is interrupted by a weathercock rising dramatically from the sea. When the weathercock is shortly followed by the city to which it is attached and the island on which that city is resting, it's justification for more than a fishing squabble. It's a good reason for a war over new territory.

The start of hostilities is an assassination attempt on a prince of Klatch. Vimes and the Watch start investigating, but politics outraces police work. Wars are a matter for the nobility and their armies, not for normal civilian leadership. Lord Vetinari resigns, leaving the city under the command of Lord Rust, who is eager for a glorious military victory against their long-term rivals. The Klatchians seem equally eager to oblige.

One of the useful properties of a long series is that you build up a cast of characters you can throw at a plot, and if you can assume the reader has read enough of the previous books, you don't have to spend a lot of time on establishing characterization and can get straight to the story. Pratchett uses that here. You could read this cold, I suppose, because most of the Watch are obvious enough types that the bits of characterization they get are enough, but it works best with the nuance and layers of the previous books. Of course Colon is the most susceptible to the jingoism that prompts the book's title, and of course Angua's abilities make her the best detective. The familiar characters let Pratchett dive right in to the political machinations.

Everyone plays to type here: Vetinari is deftly maneuvering everyone into place to make the situation work out the way he wants, Vimes is stubborn and ethical and needs Vetinari to push him in the right direction, and Carrot is sensible and effortlessly charismatic. Colon and Nobby are, as usual, comic relief of a sort, spending much of the book with Vetinari while not understanding what he's up to. But Nobby gets an interesting bit of characterization in the form of an extended turn as a spy that starts as cross-dressing and becomes an understated sort of gender exploration hidden behind humor that's less mocking than one might expect. Pratchett has been slowly playing more with gender in this series, and while it's simple and a bit deemphasized, I like it.

I think the best part of this book, thematically, is the contrast between Carrot's and Vimes's reactions to the war. Carrot is a paragon of a certain type of ethics in Watch novels, but a war is one of the things that plays to his weaknesses. Carrot follows rules, and wars have rules of a type. You can potentially draw Carrot into them. But Vimes, despite being someone who enforces rules professionally, is deeply suspicious of them, which makes him harder to fool. Pratchett uses one of the Klatchian characters to hold a mirror up to Vimes in ways that are minor spoilers, but that I quite liked.

The argument of jingoism, made both by Lord Rust and by the Klatchian prince, is that wars are something special, outside the normal rules of justice. Vimes absolutely rejects this position. As someone from the US, I found his reaction to Lord Rust's attempted militarization of the Watch to be one of the best moments of the book.

Not a muscle moved on Rust's face. There was a clink as Vimes's badge was set neatly on the table.

"I don't have to take this," Vimes said calmly.

"Oh, so you'd rather be a civilian, would you?"

"A watchman is a civilian, you inbred streak of pus!"

Vimes is also willing to think of a war as a possible crime, which may not be as effective as Vetinari's tricky scheming but which is very emotionally satisfying.

As with most Pratchett books, the moral underpinnings of the story aren't that elaborate: people are people despite cultural differences, wars are bad, and people are too ready to believe the worst of their neighbors. The story arc is not going to provide great insights into human character that the reader did not already have. But watching Vimes stubbornly attempt to do the right thing regardless of the rule book is wholly satisfying, and watching Vetinari at work is equally, if differently, enjoyable.

Not the best Discworld novel, but one of the better ones.

Followed by The Last Continent in publication order, and by The Fifth Elephant thematically.

Rating: 8 out of 10

Marco d'Itri: Debian bookworm on a Lenovo T14s Gen3 AMD

3 October, 2022 - 08:12

I recently upgraded my laptop to a Lenovo T14s Gen3 AMD and I am happy to report that it works just fine with Debian/unstable using a 5.19 kernel.

The only issue is that some firmware files are still missing and I had to install them manually.

Updates are needed for the firmware-amd-graphics package (#1019847) for the Radeon 680M GPU (AMD Rembrandt) and for the firmware-atheros package (#1021157) for the Qualcomm NFA725A Wi-Fi card (which is actually reported as a NFA765).
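
The post doesn't show the manual steps, but a minimal sketch, assuming the missing blobs are available in the upstream linux-firmware repository (the directory names below are assumptions based on the amdgpu and ath11k drivers these devices use), might look like this:

    # Fetch the upstream firmware collection
    git clone https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
    # Copy the Radeon 680M (amdgpu) and Qualcomm Wi-Fi (ath11k) blobs into place
    sudo cp -r linux-firmware/amdgpu /lib/firmware/
    sudo cp -r linux-firmware/ath11k /lib/firmware/
    # Rebuild the initramfs so the firmware is available at early boot
    sudo update-initramfs -u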

s2idle (AKA "modern suspend") works too.

For improved energy efficiency it is recommended to switch from the acpi_cpufreq CPU frequency scaling driver to amd_pstate. Please note that so far it is not loaded automatically.
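
One hedged sketch of the switch, using the standard systemd module-loading mechanism (the shared_mem option is an assumption for some 5.19-era kernels and may not be needed on this machine):

    # Load amd_pstate at every boot
    echo amd_pstate | sudo tee /etc/modules-load.d/amd_pstate.conf
    # Assumed module option; some 5.19-era kernels need it for the driver to probe
    echo "options amd_pstate shared_mem=1" | sudo tee /etc/modprobe.d/amd_pstate.conf
    # After a reboot, confirm which scaling driver is active
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver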

As expected, fwupdmgr can update the system BIOS and the firmware of the NVMe device.
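
For reference, the usual fwupd workflow looks like this (standard fwupdmgr subcommands, not specific to this laptop):

    fwupdmgr refresh      # fetch current update metadata from the LVFS
    fwupdmgr get-updates  # list devices with pending firmware updates
    fwupdmgr update       # download and apply them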


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.