Planet Debian

Planet Debian - https://planet.debian.org/

Enrico Zini: Relationships links

21 September, 2020 - 05:00

The Bouletcorp » Love & Dragons is a strip I like about fairytale relationships.

There are a lot of mainstream expectations about relationships. These links challenge a few of them:

More about emotional work, some more links to follow a previous links post:

Dirk Eddelbuettel: RcppSpdlog 0.0.2: New upstream, awesome new stopwatch

21 September, 2020 - 01:21

Following up on the initial RcppSpdlog 0.0.1 release earlier this week, we are pumped to announce release 0.0.2. It contains upstream version 1.8.0 for spdlog which utilizes (among other things) a new feature in the embedded fmt library, namely completely automated formatting of high resolution time stamps which allows for gems like this (taken from this file in the package and edited down for brevity):

    spdlog::stopwatch sw;                                   // instantiate a stop watch

    // some other code

    sp->info("Elapsed time: {}", sw);

What we see is all there is: One instantiates a stopwatch object, and simply references it. The rest, as they say, is magic. And we get tic / toc-like behaviour in modern C++ at essentially no cost to us (as code authors). So nice. Output from the exampleRsink() function included in the package (again edited down just a little):

R> RcppSpdlog::exampleRsink()
[13:03:23.845114] [fromR] [I] [thread 793264] Welcome to spdlog!
[...]
[13:03:23.845205] [fromR] [I] [thread 793264] Elapsed time: 0.000103611
[...]
[13:03:23.845281] [fromR] [I] [thread 793264] Elapsed time: 0.00018203
R> 

We see that the two simple logging instances come 10 and 18 microseconds into the call.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.2 (2020-09-17)
  • Upgraded to upstream release 1.8.0

  • Switched Travis CI to using BSPM, also test on macOS

  • Added 'stopwatch' use to main R sink example

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppSpdlog page.

The only sour grapes, again, are once more over the CRAN processing. And just as 0.0.1 was delayed for no good reason for three weeks, 0.0.2 was delayed by three days just because … well, that is how CRAN rules sometimes. I’d be even more mad if I had an alternative but I don’t. We remain grateful for all they do but they really could have let this one through even at one-day update delta. Ah well, now we’re three days wiser and of course nothing changed in the package.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: Links September 2020

20 September, 2020 - 20:55

MD5 cracker, find plain text that matches MD5 hash [1].

Debian Quick Image Baker – Debian VM images for various architectures [2].

Insightful article on Scientific American about how dental and orthodontic problems are caused by our modern lifestyle [3].

Cory Doctorow wrote an insightful article for Locus Magazine about Intellectual Property [4]. He makes the case that we are losing our rights due to changes in the legal system that benefit corporations.

Anarcat has an informative blog post about how to stop people using your Mailman installation to attack others [5]. Since version 1:2.1.27-1 the Debian package of Mailman has created a SUBSCRIBE_FORM_SECRET by default on install, but any installation upgraded from an older version of the Debian package won’t have it.
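
For such upgraded installations, a minimal sketch of the fix, assuming the usual Debian Mailman 2 configuration file /etc/mailman/mm_cfg.py (see the linked post for the full details):

# /etc/mailman/mm_cfg.py (Mailman 2): set a site-wide secret so the web
# subscribe form carries a token, which stops it from being scripted for
# subscription-bombing; restart Mailman afterwards.
SUBSCRIBE_FORM_SECRET = "some long random string"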

Related posts:

  1. Links July 2020 iMore has an insightful article about Apple’s transition to the...
  2. Links March 2020 Rolling Stone has an insightful article about why the Christian...
  3. Links January 2020 C is Not a Low Level Language [1] is an...

Andy Simpkins: Using an IP camera in conference calls

20 September, 2020 - 16:50

My webcam has broken, something that I have been using a lot during the last few months for some reason.

A friend of mine suggested that I use the mic and camera on my mobile phone instead.  There is a simple app ‘droidcam’ that makes the phone behave as a simple webcam; it also has a client application to run on your PC to capture the web-stream and present it as a video device.  All well and good, but I would like to keep proprietary software off my PCs (I have a hard enough time accepting it on a phone but I have to draw a line somewhere).

I decided that there had to be a simple way to ingest that stream and present it as a local video device on a Linux box.  It turns out that it is a lot simpler than I thought it would be.  I had it working within 10 minutes!

Packages needed:  ffmpeg, v4l2loopback-utils

sudo apt-get install ffmpeg v4l2loopback-utils

Start the loop-back device:

sudo modprobe v4l2loopback

Ingest video and present on loop-back device:

ffmpeg -re -i <URL_VideoStream> -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

Where:

-re
    Read input at native frame rate. Mainly used to simulate a grab
    device or live input stream (e.g. when reading from a file). It should
    not be used with actual grab devices or live input streams (where it
    can cause packet loss). By default ffmpeg attempts to read the
    input(s) as fast as possible; this option slows the reading down to
    the native frame rate of the input(s). It is useful for real-time
    output (e.g. live streaming).
-i <source>
    In this case my phone running droidcam: http://10.14.1.100:4747/video
-vcodec rawvideo
    Set the output video codec to raw video (as expected for /dev/video#).
-pix_fmt yuv420p
    Set the pixel format. In this example, tell ffmpeg that the video will
    use the YUV colour space with 4:2:0 chroma subsampling.
-f v4l2 /dev/video0
    Force the output into Video4Linux2 (v4l2) format, bound to the device
    /dev/video0.

Vincent Bernat: Keepalived and unicast over multiple interfaces

20 September, 2020 - 03:02

Keepalived is a Linux implementation of VRRP. The usual role of VRRP is to share a virtual IP across a set of routers. For each VRRP instance, a leader is elected and gets to serve the IP address, ensuring the high availability of the attached service. Keepalived can also be used for a generic leader election, thanks to its ability to use scripts for healthchecking and run commands on state change.

A simple configuration looks like this:

vrrp_instance gateway1 {
  state BACKUP          # ❶
  interface eth0        # ❷
  virtual_router_id 12  # ❸
  priority 101          # ❹
  virtual_ipaddress {
    2001:db8:ff/64
  }
}

The state keyword in ❶ instructs Keepalived to not take the leader role when starting. Otherwise, incoming nodes create a temporary disruption by taking over the IP address until the election settles. The interface keyword in ❷ defines the interface for sending and receiving VRRP packets. It is also the default interface to configure the virtual IP address. The virtual_router_id directive in ❸ is common to all nodes sharing the virtual IP. The priority keyword in ❹ helps choose which router will be elected as leader. If you need more information about Keepalived, be sure to check the documentation.

VRRP design is tied to Ethernet networks and requires a multicast-enabled network for communication between nodes. In some environments, notably public clouds, multicast is unavailable. In this case, Keepalived can send VRRP packets using unicast:

vrrp_instance gateway1 {
  state BACKUP
  interface eth0
  virtual_router_id 12
  priority 101
  unicast_peer {
    2001:db8::11
    2001:db8::12
  }
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}

Another process, like a BGP daemon, should advertise the virtual IP address to the “network”. If needed, Keepalived can trigger whatever action is needed for this by using notify_* scripts.
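
For instance (a sketch only: notify_master and friends are standard Keepalived directives, but the announce-vip helper script here is hypothetical), the instance above could be extended like this:

vrrp_instance gateway1 {
  # … same configuration as above …
  # run a helper when becoming leader, being demoted, or detecting a fault
  notify_master "/usr/local/bin/announce-vip start"
  notify_backup "/usr/local/bin/announce-vip stop"
  notify_fault  "/usr/local/bin/announce-vip stop"
}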

Until version 2.21 (not released yet), the interface directive is mandatory and Keepalived will transmit and receive VRRP packets on this interface only. If peers are reachable through several interfaces, like on a BGP on the host setup, you need a workaround. A simple one is to use a VXLAN interface:

$ ip -6 link add keepalived6 type vxlan id 6 dstport 4789 local 2001:db8::10 nolearning
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::11
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::12
$ ip link set up dev keepalived6

Learning of MAC addresses is disabled and one generic entry for each peer is added in the forwarding database: transmitted packets are broadcast to all peers, notably VRRP packets. Have a look at “VXLAN & Linux” for additional details.

vrrp_instance gateway1 {
  state BACKUP
  interface keepalived6
  mcast_src_ip 2001:db8::10
  virtual_router_id 12
  priority 101
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}

Starting from Keepalived 2.21, unicast_peer can be used without the interface directive. I think using VXLAN is still a neat trick applicable to other situations where communication using broadcast or multicast is needed, while the underlying network provides no support for this.

Vincent Bernat: Syncing NetBox with a custom Ansible module

19 September, 2020 - 22:45

The netbox.netbox collection from Ansible Galaxy provides several modules to update NetBox objects:

- name: create a device in NetBox
  netbox_device:
    netbox_url: http://netbox.local
    netbox_token: s3cret
    data:
      name: to3-p14.sfo1.example.com
      device_type: QFX5110-48S
      device_role: Compute Switch
      site: SFO1

However, if NetBox is not your source of truth, you may want to ensure it stays in sync with your configuration management database¹ by removing outdated devices or IP addresses. While it should be possible to glue together a playbook with a query, a loop and some filtering to delete unwanted elements, it feels clunky, inefficient and an abuse of YAML as a programming language. A specific Ansible module solves this issue and is likely more flexible.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a first simpler example.

Code

The module has the following signature and it syncs NetBox with the content of the provided YAML file:

netbox_sync:
  source: netbox.yaml
  api: https://netbox.example.com
  token: s3cret

The synchronized objects are:

  • sites,
  • manufacturers,
  • device types,
  • device roles,
  • devices, and
  • IP addresses.

In our environment, the YAML file is generated from our configuration management database and contains a set of devices and a list of IP addresses:

devices:
  ad2-p6.sfo1.example.com:
     datacenter: sfo1
     manufacturer: Cisco
     model: Catalyst 2960G-48TC-L
     role: net_tor_oob_switch
  to1-p6.sfo1.example.com:
     datacenter: sfo1
     manufacturer: Juniper
     model: QFX5110-48S
     role: net_tor_gpu_switch
# […]
ips:
  - device: ad2-p6.example.com
    ip: 172.31.115.18/21
    interface: oob
  - device: to1-p6.example.com
    ip: 172.31.115.33/21
    interface: oob
  - device: to1-p6.example.com
    ip: 172.31.254.33/32
    interface: lo0.0
# […]

The network team is not the sole tenant in NetBox. While adding new objects or modifying existing ones should be relatively safe, deleting unwanted objects can be risky. The module only deletes objects it did create or modify. To identify them, it marks them with a specific tag, cmdb. Most objects in NetBox accept tags.

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    source=dict(type='path', required=True),
    api=dict(type='str', required=True),
    token=dict(type='str', required=True, no_log=True),
    max_workers=dict(type='int', required=False, default=10)
)

result = dict(
    changed=False
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

It contains an additional optional argument defining the number of workers used to talk to NetBox and query the existing objects in parallel to speed up the execution.
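
This is not the module's code, just a minimal sketch of the idea: the lookups are read-only and independent, so they can be spread over a thread pool sized by max_workers (the get() method performing a single NetBox lookup is shown later in this post):

# Illustration only: run the read-only NetBox lookups in parallel.
from concurrent.futures import ThreadPoolExecutor

def fetch_current(synchronizer, keys, max_workers=10):
    """Fetch the existing NetBox objects for the given keys in parallel."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # synchronizer.get(key) performs one API call; safe to run concurrently
        return dict(zip(keys, executor.map(synchronizer.get, keys)))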

Abstracting synchronization

We need to synchronize different object types, but once we have a list of objects we want in NetBox, the grunt work is always the same:

  • check if the objects already exist,
  • retrieve them and put them in a form suitable for comparison,
  • retrieve the extra objects we don’t want anymore,
  • compare the two sets, and
  • add missing objects, update existing ones, delete extra ones.

We code these behaviours into a Synchronizer abstract class. For each kind of object, a concrete class is built with the appropriate class attributes to tune its behaviour and a wanted() method to provide the objects we want.

I am not explaining the abstract class code here. Have a look at the source if you want.
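
For orientation only, here is a rough outline of the interface the concrete classes below rely on, pieced together from the attributes and methods discussed in this post; the actual abstract class in the module source is more elaborate:

class Synchronizer:
    # class attributes tuned by each concrete synchronizer
    app = None              # NetBox application, e.g. "dcim"
    table = None            # NetBox table, e.g. "devices"
    key = None              # attribute used to look up existing objects
    foreign = {}            # NetBox attribute -> synchronizer handling it
    only_on_create = ()     # attributes set at creation but never updated
    remove_unused = None    # safety limit on the number of deletions

    def __init__(self, module, netbox, source, before, after):
        self.module, self.netbox, self.source = module, netbox, source
        self.before, self.after = before, after

    def wanted(self):
        """Return a mapping of object keys to wanted attributes."""
        raise NotImplementedError

    def prepare(self):
        """Compute current and wanted states; return True if they differ."""

    def synchronize(self):
        """Create missing objects and update existing ones."""

    def cleanup(self):
        """Delete objects tagged cmdb that are no longer wanted."""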

Synchronizing tags and tenants

As a starter, here is how we define the class synchronizing the tags:

class SyncTags(Synchronizer):
    app = "extras"
    table = "tags"
    key = "name"

    def wanted(self):
        return {"cmdb": dict(
            slug="cmdb",
            color="8bc34a",
            description="synced by network CMDB")}

The app and table attributes define the NetBox objects we want to manipulate. The key attribute is used to determine how to look up existing objects. In this example, we want to look up tags using their names.

The wanted() method is expected to return a dictionary mapping object keys to the list of wanted attributes. Here, the keys are tag names and we create only one tag, cmdb, with the provided slug, color and description. This is the tag we will use to mark the objects we create or modify.

If the tag does not exist, it is created. If it exists, the provided attributes are updated. Other attributes are left untouched.
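
Following the pprint convention used for the other synchronizers below (sync_args is defined later in this post), its wanted() output is simply:

>>> pprint(SyncTags(**sync_args).wanted())
{'cmdb': {'color': '8bc34a',
          'description': 'synced by network CMDB',
          'slug': 'cmdb'}}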

We also want to create a specific tenant for objects accepting such an attribute (devices and IP addresses):

class SyncTenants(Synchronizer):
    app = "tenancy"
    table = "tenants"
    key = "name"

    def wanted(self):
        return {"Network": dict(slug="network",
                                description="Network team")}
Synchronizing sites

We also need to synchronize the list of sites. This time, the wanted() method uses the information provided in the YAML file: it walks the devices and builds a set of datacenter names.

class SyncSites(Synchronizer):

    app = "dcim"
    table = "sites"
    key = "name"
    only_on_create = ("status", "slug")

    def wanted(self):
        result = set(details["datacenter"]
                     for details in self.source['devices'].values()
                     if "datacenter" in details)
        return {k: dict(slug=k,
                        status="planned")
                for k in result}

Thanks to the use of the only_on_create attribute, the specified attributes are not updated if they are different. The goal of this synchronizer is mostly to collect the references to the different sites for other objects.

>>> pprint(SyncSites(**sync_args).wanted())
{'sfo1': {'slug': 'sfo1', 'status': 'planned'},
 'chi1': {'slug': 'chi1', 'status': 'planned'},
 'nyc1': {'slug': 'nyc1', 'status': 'planned'}}
Synchronizing manufacturers, device types and device roles

The synchronization of manufacturers is pretty similar, except we do not use the only_on_create attribute:

class SyncManufacturers(Synchronizer):

    app = "dcim"
    table = "manufacturers"
    key = "name"

    def wanted(self):
        result = set(details["manufacturer"]
                     for details in self.source['devices'].values()
                     if "manufacturer" in details)
        return {k: {"slug": slugify(k)}
                for k in result}

Regarding the device types, we use the foreign attribute linking a NetBox attribute to the synchronizer handling it.

class SyncDeviceTypes(Synchronizer):

    app = "dcim"
    table = "device_types"
    key = "model"
    foreign = {"manufacturer": SyncManufacturers}

    def wanted(self):
        result = set((details["manufacturer"], details["model"])
                     for details in self.source['devices'].values()
                     if "model" in details)
        return {k[1]: dict(manufacturer=k[0],
                           slug=slugify(k[1]))
                for k in result}

The wanted() method refers to the manufacturer using its key attribute. In this case, this is the manufacturer name.

>>> pprint(SyncManufacturers(**sync_args).wanted())
{'Cisco': {'slug': 'cisco'},
 'Dell': {'slug': 'dell'},
 'Juniper': {'slug': 'juniper'}}
>>> pprint(SyncDeviceTypes(**sync_args).wanted())
{'ASR 9001': {'manufacturer': 'Cisco', 'slug': 'asr-9001'},
 'Catalyst 2960G-48TC-L': {'manufacturer': 'Cisco',
                           'slug': 'catalyst-2960g-48tc-l'},
 'MX10003': {'manufacturer': 'Juniper', 'slug': 'mx10003'},
 'QFX10002-36Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-36q'},
 'QFX10002-72Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-72q'},
 'QFX5110-32Q': {'manufacturer': 'Juniper', 'slug': 'qfx5110-32q'},
 'QFX5110-48S': {'manufacturer': 'Juniper', 'slug': 'qfx5110-48s'},
 'QFX5200-32C': {'manufacturer': 'Juniper', 'slug': 'qfx5200-32c'},
 'S4048-ON': {'manufacturer': 'Dell', 'slug': 's4048-on'},
 'S6010-ON': {'manufacturer': 'Dell', 'slug': 's6010-on'}}

The device roles are defined like this:

class SyncDeviceRoles(Synchronizer):

    app = "dcim"
    table = "device_roles"
    key = "name"

    def wanted(self):
        result = set(details["role"]
                     for details in self.source['devices'].values()
                     if "role" in details)
        return {k: dict(slug=slugify(k),
                        color="8bc34a")
                for k in result}
Synchronizing devices

A device is mostly a name with references to a role, a model, a datacenter and a tenant. These references are declared as foreign keys using the synchronizers defined previously.

class SyncDevices(Synchronizer):
    app = "dcim"
    table = "devices"
    key = "name"
    foreign = {"device_role": SyncDeviceRoles,
               "device_type": SyncDeviceTypes,
               "site": SyncSites,
               "tenant": SyncTenants}
    remove_unused = 10

    def wanted(self):
        return {name: dict(device_role=details["role"],
                           device_type=details["model"],
                           site=details["datacenter"],
                           tenant="Network")
                for name, details in self.source['devices'].items()
                if {"datacenter", "model", "role"} <= set(details.keys())}

The remove_unused attribute is a safety measure, implemented to fail if we have to delete more than 10 devices: this may be an indication that there is a bug somewhere, unless one of your datacenters suddenly caught fire.

>>> pprint(SyncDevices(**sync_args).wanted())
{'ad2-p6.sfo1.example.com': {'device_role': 'net_tor_oob_switch',
                             'device_type': 'Catalyst 2960G-48TC-L',
                             'site': 'sfo1',
                             'tenant': 'Network'},
 'to1-p6.sfo1.example.com': {'device_role': 'net_tor_gpu_switch',
                             'device_type': 'QFX5110-48S',
                             'site': 'sfo1',
                             'tenant': 'Network'},
[…]
Synchronizing IP addresses

The last step is to synchronize IP addresses. We do not attach them to a device.² Instead, we specify the device names in the description of the IP address:

class SyncIPs(Synchronizer):
    app = "ipam"
    table = "ip-addresses"
    key = "address"
    foreign = {"tenant": SyncTenants}
    remove_unused = 1000

    def wanted(self):
        wanted = {}
        for details in self.source['ips']:
            if details['ip'] in wanted:
                wanted[details['ip']]['description'] = \
                    f"{details['device']} (and others)"
            else:
                wanted[details['ip']] = dict(
                    tenant="Network",
                    status="active",
                    dns_name="",        # information is present in DNS
                    description=f"{details['device']}: {details['interface']}",
                    role=None,
                    vrf=None)
        return wanted

There is a slight difficulty: NetBox allows duplicate IP addresses, so a simple lookup is not enough. In case of multiple matches, we choose the best by preferring those tagged with cmdb, then those already attached to an interface:

def get(self, key):
    """Grab IP address from NetBox."""
    # There may be duplicate. We need to grab the "best".
    results = super(Synchronizer, self).get(key)
    if len(results) == 0:
        return None
    if len(results) == 1:
        return results[0]
    scores = [0]*len(results)
    for idx, result in enumerate(results):
        if "cmdb" in result.tags:
            scores[idx] += 10
        if result.interface is not None:
            scores[idx] += 5
    return sorted(zip(scores, results),
                  reverse=True, key=lambda k: k[0])[0][1]
Getting the current and wanted states

Each synchronizer is initialized with a reference to the Ansible module, a reference to a pynetbox’s API object, the data contained in the provided YAML file and two empty dictionaries for the current and expected states:

source = yaml.safe_load(open(module.params['source']))
netbox = pynetbox.api(module.params['api'],
                      token=module.params['token'])

sync_args = dict(
    module=module,
    netbox=netbox,
    source=source,
    before={},
    after={}
)
synchronizers = [synchronizer(**sync_args) for synchronizer in [
    SyncTags,
    SyncTenants,
    SyncSites,
    SyncManufacturers,
    SyncDeviceTypes,
    SyncDeviceRoles,
    SyncDevices,
    SyncIPs
]]

Each synchronizer has a prepare() method whose goal is to compute the current and wanted states. It returns True in case of a difference:

# Check what needs to be synchronized
try:
    for synchronizer in synchronizers:
        result['changed'] |= synchronizer.prepare()
except AnsibleError as e:
    result['msg'] = e.message
    module.fail_json(**result)
Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between these states. Each synchronizer registers the current and wanted states in sync_args["before"][table] and sync_args["after"][table] where table is the name of the table for a given NetBox object type. The diff object is a bit elaborate as it is built table by table. This enables Ansible to display the name of each table before the diff representation:

# Compute the diff
if module._diff and result['changed']:
    result['diff'] = [
        dict(
            before_header=table,
            after_header=table,
            before=yaml.safe_dump(sync_args["before"][table]),
            after=yaml.safe_dump(sync_args["after"][table]))
        for table in sync_args["after"]
        if sync_args["before"][table] != sync_args["after"][table]
    ]

# Stop here if check mode is enabled or if no change
if module.check_mode or not result['changed']:
    module.exit_json(**result)

Each synchronizer also exposes a synchronize() method to apply changes and a cleanup() method to delete unwanted objects. Order is important due to the relation between the objects.

# Synchronize
for synchronizer in synchronizers:
    synchronizer.synchronize()
for synchronizer in synchronizers[::-1]:
    synchronizer.cleanup()
module.exit_json(**result)

The complete code is available on GitHub. Compared to using netbox.netbox collection, the logic is written in Python instead of trying to glue Ansible tasks together. I believe this is both more flexible and easier to read, notably when trying to delete outdated objects. While I did not test it, it should also be faster. An alternative would have been to reuse code from the netbox.netbox collection, as it contains similar primitives. Unfortunately, I didn’t think of it until now. 😶

  1. In my opinion, a good option for a source of truth is to use YAML files in a Git repository. You get versioning for free and people can get started with a text editor. ↩︎

  2. This limitation is mostly due to laziness: we do not really care about this information. Our main motivation for putting IP addresses in NetBox is to keep track of the used IP addresses. However, if an IP address is already attached to an interface, we leave this association untouched. ↩︎

Bits from Debian: New Debian Maintainers (July and August 2020)

19 September, 2020 - 21:00

The following contributors were added as Debian Maintainers in the last two months:

  • Chirayu Desai
  • Shayan Doust
  • Arnaud Ferraris
  • Fritz Reichwald
  • Kartik Kulkarni
  • François Mazen
  • Patrick Franz
  • Francisco Vilmar Cardoso Ruviaro
  • Octavio Alvarez
  • Nick Black

Congratulations!

Russell Coker: Burning Lithium Ion Batteries

19 September, 2020 - 14:27

I had an old Nexus 4 phone that was expanding and decided to test some of the theories about battery combustion.

The first claim that often gets made is that if the plastic seal on the outside of the battery is broken then the battery will catch fire. I tested this by cutting the battery with a craft knife. With every cut the battery sparked a bit and then when I levered up layers of the battery (it seems to be multiple flat layers of copper and black stuff inside the battery) there were more sparks. The battery warmed up, it’s plausible that in a confined environment that could get hot enough to set something on fire. But when the battery was resting on a brick in my backyard that wasn’t going to happen.

The next claim is that a Li-Ion battery fire will be made worse by water. The first thing to note is that Li-Ion batteries don’t contain Lithium metal (the Lithium high power non-rechargeable batteries do). Lithium metal will seriously go off if exposed to water. But lots of other Lithium compounds will also react vigorously with water (like Lithium oxide for example). After cutting through most of the center of the battery I dripped some water in it. The water boiled vigorously and the corners of the battery (which were furthest away from the area I cut) felt warmer than they did before adding water. It seems that significant amounts of energy are released when water reacts with whatever is inside the Li-Ion battery. The reaction was probably giving off hydrogen gas but didn’t appear to generate enough heat to ignite hydrogen (which is when things would really get exciting). Presumably if a battery was cut in the presence of water while in an enclosed space that traps hydrogen then the sparks generated by the battery reacting with air could ignite hydrogen generated from the water and give an exciting result.

It seems that a CO2 fire extinguisher would be best for a phone/tablet/laptop fire as that removes oxygen and cools it down. If that isn’t available then a significant quantity of water will do the job; water won’t stop the reaction (it can prolong it), but it can keep the reaction below 100°C, which means it won’t burn a hole in the floor and the range of toxic chemicals released will be reduced.

The rumour that a phone fire on a plane could do a “China syndrome” type thing and melt through the Aluminium body of the plane seems utterly bogus. I gave it a good try and was unable to get a battery to burn through its plastic and metal foil case. A spare battery for a laptop in checked luggage could be a major problem for a plane if it ignited. But a battery in the passenger area seems unlikely to be a big problem if plenty of water is dumped on it to prevent the plastic case from burning and polluting the air.

I was not able to get a result that was even worthy of a photograph. I may do further tests with laptop batteries.

Related posts:

  1. On Burning Platforms Nokia is in the news for it’s CEO announcing that...
  2. Hydrogen Powered Cars Will Never Work One of the most important issues for a commodity fuel...
  3. Base Load Solar Power A frequent criticism of solar power is that the sun...

Jelmer Vernooij: Debian Janitor: Expanding Into Improving Multi-Arch

19 September, 2020 - 07:45

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

As of dpkg 1.16.2 and apt 0.8.13, Debian has full support for multi-arch. To quote from the multi-arch implementation page:

Multiarch lets you install library packages from multiple architectures on the same machine. This is useful in various ways, but the most common is installing both 64 and 32-bit software on the same machine and having dependencies correctly resolved automatically. In general you can have libraries of more than one architecture installed together and applications from one architecture or another installed as alternatives.

The Multi-Arch specification describes a new Multi-Arch header which can be used to indicate how to resolve cross-architecture dependencies.

The existing Debian Multi-Arch hinter is a version of dedup.debian.net that compares binary packages between architectures and suggests fixes to resolve multi-arch problems. It provides hints as to what Multi-Arch fields can be set, allowing the packages to be safely installed in a Multi-Arch world. The full list of almost 10,000 hints generated by the hinter is available at https://dedup.debian.net/static/multiarch-hints.yaml.

Recent versions of lintian-brush now include a command called apply-multiarch-hints that downloads and locally caches the hints and can apply them to a package maintained in Git. For example, to apply multi-arch hints to autosize.js:

 $ debcheckout autosize.js
 declared git repository at https://salsa.debian.org/js-team/autosize.js.git
 git clone https://salsa.debian.org/js-team/autosize.js.git autosize.js ...
 Cloning into 'autosize.js'...
 [...]
 $ cd autosize.js
 $ apply-multiarch-hints
 Downloading new version of multi-arch hints.
 libjs-autosize: Add Multi-Arch: foreign.
 node-autosize: Add Multi-Arch: foreign.
 $ git log -p
 commit 3f8d1db5af4a87e6ebb08f46ddf79f6adf4e95ae (HEAD -> master)
 Author: Jelmer Vernooij <jelmer@debian.org>
 Date:   Fri Sep 18 23:37:14 2020 +0000

     Apply multi-arch hints.
     + libjs-autosize, node-autosize: Add Multi-Arch: foreign.

     Changes-By: apply-multiarch-hints

 diff --git a/debian/changelog b/debian/changelog
 index e7fa120..09af4a7 100644
 --- a/debian/changelog
 +++ b/debian/changelog
 @@ -1,3 +1,10 @@
 +autosize.js (4.0.2~dfsg1-5) UNRELEASED; urgency=medium
 +
 +  * Apply multi-arch hints.
 +    + libjs-autosize, node-autosize: Add Multi-Arch: foreign.
 +
 + -- Jelmer Vernooij <jelmer@debian.org>  Fri, 18 Sep 2020 23:37:14 -0000
 +
  autosize.js (4.0.2~dfsg1-4) unstable; urgency=medium

    * Team upload
 diff --git a/debian/control b/debian/control
 index 01ca968..fbba1ae 100644
 --- a/debian/control
 +++ b/debian/control
 @@ -20,6 +20,7 @@ Architecture: all
  Depends: ${misc:Depends}
  Recommends: javascript-common
  Breaks: ruby-rails-assets-autosize (<< 4.0)
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - NodeJS
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,
 @@ -32,6 +33,7 @@ Package: node-autosize
  Architecture: all
  Depends: ${misc:Depends}
   , nodejs
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - Javascript
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,

The Debian Janitor also has a new multiarch-fixes suite that runs apply-multiarch-hints across packages in the archive and proposes merge requests. For example, you can see the merge request against autosize.js here.

For more information about the Janitor's lintian-fixes efforts, see the landing page.

Sven Hoexter: Avoiding the GitHub WebUI

18 September, 2020 - 17:05

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past month. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard.

The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti-GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master.

gh functions and aliases
# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"

#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"

### github support functions, most invoked with a PR ID as $1

#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}

# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}

# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}

# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}

# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}
wtfutil

I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:

github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github

The -label:dependencies is used here to filter out dependabot PRs in the dashboard.

Workflow

Look at a PR with ghv $ID, if it's ok ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.

Security Considerations

The world is full of bad jokes. For the WebUI access I've got the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access; everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

Daniel Lange: Fixing the Nextcloud menu to show more than eight application icons

18 September, 2020 - 16:45

I have been late to adopt an on-premise cloud solution as the security of Owncloud a few years ago wasn't so stellar (cf. my comment from 2013 in Encryption files ... for synchronization across the Internet). But the follow-up product Nextcloud has matured quite nicely and we use it for collaboration both in the company and in FLOSS related work at multiple nonprofit organizations.

There is a very annoying "feature" in Nextcloud though: the designers think menu items for apps at the top need to be limited to eight or fewer to prevent information overload in the header. The whole discussion is worth reading as it is an archetypical example of design prevalence vs. user choice.

And of course designers think they are right. That's a feature of the trade.
And because they know better, there is no user-configurable option to extend those 8 items to maybe 12 or so, which would prevent the annoying overflow menu we are seeing with 10 applications in use.

Luckily code can be changed and there are many comments floating around the Internet to change const minAppsDesktop = 8. In this case it is slightly complicated by the fact that the javascript code is distributed in compressed form (aka "minified") as core/js/dist/main.js and you probably don't want to build the whole beast locally to change one constant.

Basically

const breakpoint_mobile_width = 1024;

const resizeMenu = () => {
    const appList = $('#appmenu li')
    const rightHeaderWidth = $('.header-right').outerWidth()
    const headerWidth = $('header').outerWidth()
    const usePercentualAppMenuLimit = 0.33
    const minAppsDesktop = 8
    let availableWidth = headerWidth - $('#nextcloud').outerWidth() - (rightHeaderWidth > 210 ? rightHeaderWidth : 210)
    const isMobile = $(window).width() < breakpoint_mobile_width
    if (!isMobile) {
        availableWidth = availableWidth * usePercentualAppMenuLimit
    }
    let appCount = Math.floor((availableWidth / $(appList).width()))
    if (isMobile && appCount > minAppsDesktop) {
        appCount = minAppsDesktop
    }
    if (!isMobile && appCount < minAppsDesktop) {
        appCount = minAppsDesktop
    }

    // show at least 2 apps in the popover
    if (appList.length - 1 - appCount >= 1) {
        appCount--
    }

    $('#more-apps a').removeClass('active')
    let lastShownApp
    for (let k = 0; k < appList.length - 1; k++) {
        const name = $(appList[k]).data('id')
        if (k < appCount) {
            $(appList[k]).removeClass('hidden')
            $('#apps li[data-id=' + name + ']').addClass('in-header')
            lastShownApp = appList[k]
        } else {
            $(appList[k]).addClass('hidden')
            $('#apps li[data-id=' + name + ']').removeClass('in-header')
            // move active app to last position if it is active
            if (appCount > 0 && $(appList[k]).children('a').hasClass('active')) {
                $(lastShownApp).addClass('hidden')
                $('#apps li[data-id=' + $(lastShownApp).data('id') + ']').removeClass('in-header')
                $(appList[k]).removeClass('hidden')
                $('#apps li[data-id=' + name + ']').addClass('in-header')
            }
        }
    }

    // show/hide more apps icon
    if ($('#apps li:not(.in-header)').length === 0) {
        $('#more-apps').hide()
        $('#navigation').hide()
    } else {
        $('#more-apps').show()
    }
}

gets compressed during build time to become part of one 15,000+ character line. The relevant portion reads:

var f=function(){var e=s()("#appmenu li"),t=s()(".header-right").outerWidth(),n=s()("header").outerWidth()-s()("#nextcloud").outerWidth()-(t>210?t:210),i=s()(window).width()<1024;i||(n*=.33);var r,o=Math.floor(n/s()(e).width());i&&o>8&&(o=8),!i&&o<8&&(o=8),e.length-1-o>=1&&o--,s()("#more-apps a").removeClass("active");for(var a=0;a<e.length-1;a++){var l=s()(e[a]).data("id");a<o?(s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header"),r=e[a]):(s()(e[a]).addClass("hidden"),s()("#apps li[data-id="+l+"]").removeClass("in-header"),o>0&&s()(e[a]).children("a").hasClass("active")&&(s()(r).addClass("hidden"),s()("#apps li[data-id="+s()(r).data("id")+"]").removeClass("in-header"),s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header")))}0===s()("#apps li:not(.in-header)").length?(s()("#more-apps").hide(),s()("#navigation").hide()):s()("#more-apps").show()}

Well, we can still patch that, can't we?
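
The article continues with the details; as a rough sketch of one way to do it (a hypothetical helper that assumes this exact minified pattern, and the identifiers i and o may differ between Nextcloud releases), the constant can be bumped in place, here from 8 to 12:

#!/usr/bin/env python3
# Hypothetical helper: bump the hard-coded app menu limit in the minified bundle.
import shutil

path = "core/js/dist/main.js"
old = "o>8&&(o=8),!i&&o<8&&(o=8)"
new = "o>12&&(o=12),!i&&o<12&&(o=12)"

shutil.copy(path, path + ".orig")       # keep a backup of the shipped file
with open(path, encoding="utf-8") as fh:
    js = fh.read()
if old not in js:
    raise SystemExit("pattern not found - check the minified code of this release")
with open(path, "w", encoding="utf-8") as fh:
    fh.write(js.replace(old, new, 1))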

Continue reading "Fixing the Nextcloud menu to show more than eight application icons"

Daniel Lange: Getting rid of the Google cookie consent popup

18 September, 2020 - 16:15

If you clear your browser cookies regularly (as you should do), Google will annoy you with a full screen cookie consent overlay these days. And - of course - there is no "no tracking consent, technically required cookies only" button. You may log in to Google to set your preference. Yeah, I'm sure this is totally following the intent of the EU cookie consent regulation.

Unfortunately none of the big "anti-annoyances" filter lists seem to have picked that one up yet, but the friendly folks from the Computerbase Forum [German] came to the rescue. User "Sepp Depp" has created the following filter set that WFM (works for me):

Add this to your uBlock Origin filter rules:

! Google - remove cookie-consent-popup and restore scroll functionality
google.*##.wwYr3.aID8W.bErdLd
google.*##.aID8W.m114nf.t7xA6
google.*##div[jsname][jsaction^="dg_close"]
google.*##html:style(overflow: visible !important;)
google.*##.widget-consent-fullscreen.widget-consent

Russell Coker: Dell BIOS Updates

17 September, 2020 - 22:18

I have just updated the BIOS on a Dell PowerEdge T110 II. The process isn’t too difficult: Google for the machine name and BIOS, download the firmware image (encoded as a shell script) and its GPG signature, then run the script on the system in question.

One problem is that the Dell GPG key isn’t signed by anyone. How hard would it be to get a few well connected people in the Linux community to sign the key used for signing Linux scripts for updating the BIOS? I would be surprised if Dell doesn’t employ a few people who are well connected in the Linux community, they should just ask all employees to sign such GPG keys! Failing that there are plenty of other options. I’d be happy to sign the Dell key if contacted by someone who can prove that they are a responsible person in Dell. If I could phone Dell corporate and ask for the engineering department and then have someone tell me the GPG fingerprint I’ll sign the key and that problem will be partially solved (my key is well connected but you need more than one signature).

The next issue is how to determine that a BIOS update works. What you really don’t want is to have a BIOS update fail and brick your system! So the Linux update process loads the image into something (special firmware RAM maybe) and then reboots the system and the reboot then does a critical part of the update. If the reboot doesn’t work then you end up with the old version of the BIOS. This is overall a good thing.

The PowerEdge T110 II is a workstation with an NVidia video card (I tried an ATI card but that wouldn’t boot for unknown reasons). The Nouveau driver has some issues. One thing I have done to work around some Nouveau issues is to create a file “~/.config/plasma-workspace/env/nouveau-broken.sh” (for KDE sessions) with the following contents:

export LIBGL_ALWAYS_SOFTWARE=1

I previously wrote about using this just for Kmail to stop it crashing [1]. But after doing that I still had other problems with video and disabling all GL on the NVidia card was necessary.

The latest problem I’ve had is that even when using that configuration things don’t go well. When I run the “reboot” command I end up with a kernel message about the GPU not responding and then it doesn’t reboot. That means that the BIOS update doesn’t apply: a hard reboot signals to the system that the new BIOS wasn’t good and I end up with the old BIOS again. I discovered that disabling sddm (the latest xdm program in Debian) from starting on boot meant that a reboot command would work. Then I ran the BIOS update script and its reboot command worked and gave a successful BIOS update.

So I’ve gone from a 2013 BIOS to a 2018 BIOS! The update list says that some CVEs have been addressed, but the spectre-meltdown-checker doesn’t report any fewer vulnerabilities.

Related posts:

  1. Dell PowerEdge T110 In June 2008 I received a Dell PowerEdge T105 server...
  2. New Dell Server My Dell PowerEdge T105 server (as referenced in my previous...
  3. Dell PowerEdge T30 I just did a Debian install on a Dell PowerEdge...

Dirk Eddelbuettel: RcppSpdlog 0.0.1: New and Exciting Logging Package

17 September, 2020 - 07:54

Very thrilled to announce a new package RcppSpdlog which is now on CRAN in its first release 0.0.1. We had tweeted once about the earliest version which had already caught the eyes of Gabi upstream.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

I had meant to package this for a few years now but didn’t find an (easy, elegant) way to completely lift stdout / stderr and related uses, which R wants us to remove for smoother operations from R itself including synchronized input/output. It was only a few weeks ago that I realized I should subclass a logger (or, more concretely, a sink for a logger) which could then use R input/output. With the right idea the implementation was easy, and Gabi (https://github.com/gabime) was most helpful in making sure R CMD check would not see one or two remaining C++ i/o operations (which we currently do by not activating a default logger, and substituting REprintf() in one call). So this is now clean, and a simple use is included in an example in the package which we can show here too (in slightly shorter form minus the documentation header):

// this portmanteau include also defines the r_sink we use below, and which
// diverts all logging to R via the Rcpp::Rcout replacement for std::cout
#include <RcppSpdlog>

// [[Rcpp::export]]
void exampleRsink() {

    std::string logname = "fromR";                          // fix a name for this logger
    auto sp = spdlog::get(logname);                         // retrieve existing one
    if (sp == nullptr) sp = spdlog::r_sink_mt(logname);     // or create new one if needed

    // change log pattern (changed from [%H:%M:%S %z] [%n] [%^---%L---%$] )
    sp->set_pattern("[%H:%M:%S.%f] [%n] [%^%L%$] [thread %t] %v");

    sp->info("Welcome to spdlog!");
    sp->error("Some error message with arg: {}", 1);

    sp->warn("Easy padding in numbers like {:08d}", 12);
    sp->critical("Support for int: {0:d};  hex: {0:x};  oct: {0:o}; bin: {0:b}", 42);
    sp->info("Support for floats {:03.2f}", 1.23456);
    sp->info("Positional args are {1} {0}..", "too", "supported");
    sp->info("{:<30}", "left aligned");

}

The NEWS entry for the first release follows.

Changes in RcppSpdlog version 0.0.1 (2020-09-08)
  • Initial release with added R/Rcpp logging sink example

The only sour grapes, if any, are over the CRAN processing. This was originally uploaded three weeks ago. As a new package, it got extra attention and some truly idiosyncratic attention to two details that were already supplied in the first uploaded version. Yet it needed two rounds of going back and forth for really no great net gain, wasting a week each time. I am not all that impressed by this, and not particularly pleased either, but I presume it is the “tax” we all pay in order to enjoy the unsurpassed richness of the CRAN repository system which continues to work just flawlessly.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bastian Blank: Salsa hosted 1e6 CI jobs

17 September, 2020 - 02:00

Today, Salsa hosted its 1,000,000th CI job. The prize for hitting the target goes to the Cloud team. The job itself was not that interesting, but it was successful.

Steve Kemp: Implementing a FORTH-like language ..

17 September, 2020 - 01:30

Four years ago somebody posted a comment-thread describing how you could start writing a little reverse-polish calculator, in C, and slowly improve it until you had written a minimal FORTH-like system:

At the time I read that comment I'd just hacked up a simple FORTH REPL of my own, in Perl, and I said "thanks for posting". I was recently reminded of this discussion, and decided to work through the process.

Using only minimal outside resources the recipe worked as expected!

The end-result is I have a working FORTH-lite, or FORTH-like, interpreter written in around 2000 lines of golang! Features include:

  • Reverse-Polish mathematical operations.
  • Comments between ( and ) are ignored, as expected.
    • Single-line comments \ to the end of the line are also supported.
  • Support for floating-point numbers (anything that will fit inside a float64).
  • Support for printing the top-most stack element (., or print).
  • Support for outputting ASCII characters (emit).
  • Support for outputting strings (." Hello, World ").
  • Support for basic stack operations (drop, dup, over, swap)
  • Support for loops, via do/loop.
  • Support for conditional-execution, via if, else, and then.
  • Load any files specified on the command-line
    • If no arguments are included run the REPL
  • A standard library is loaded, from the present directory, if it is present.

To give a flavour, here we define a word called star which just outputs a single star character:

: star 42 emit ;

Now we can call that (NOTE: We didn't add a newline here, so the REPL prompt follows it, that's expected):

> star
*>

To make it more useful we define the word "stars" which shows N stars:

> : stars dup 0 > if 0 do star loop else drop then ;
> 0 stars
> 1 stars
*> 2 stars
**> 10 stars
**********>

This example uses both if to test that the parameter on the stack was greater than zero, as well as do/loop to handle the repetition.

Finally we use that to draw a box:

> : squares 0 do over stars cr loop ;
> 4 squares
****
****
****
****

> 10 squares
**********
**********
**********
**********
**********
**********
**********
**********
**********
**********

For fun we allow decompiling the words too:

> #words 0 do dup dump loop
..
Word 'square'
 0: dup
 1: *
Word 'cube'
 0: dup
 1: square
 2: *
Word '1+'
 0: store 1.000000
 2: +
Word 'test_hot'
  0: store 0.000000
  2: >
  3: if
  4: [cond-jmp 7.000000]
  6: hot
  7: then
..

Anyway if that is at all interesting feel free to take a peek. There's a bit of hackery there to avoid the use of return-stacks, etc. Compared to gforth this is actually more featureful in some areas:

  • I allow you to use conditionals in the REPL - outside a word-definition.
  • I allow you to use loops in the REPL - outside a word-definition.

Find the code here:

Bits from Debian: Debian Local Groups at DebConf20 and beyond

17 September, 2020 - 00:15

There are a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

There was a session about Debian Local Teams at DebConf20 and it generated quite a bit of constructive discussion in the live stream (recording available at https://meetings-archive.debian.net/pub/debian-meetings/2020/DebConf20/), in the session's Etherpad and in the IRC channel (#debian-localgroups). This article is an attempt at summarizing the key points that were raised during that discussion, as well as the plans for future actions to support new or existing Debian Local Groups and the possibility of setting up a local group support team.

Pandemic situation

During a pandemic it may seem strange to discuss offline meetings, but this is a good time to be planning things for the future. At the same time, the current situation makes it more important than before to encourage local interaction.

Reasoning for local groups

Debian can seem scary for those outside. Already having a connection to Debian - especially to people directly involved in it - seems to be the way through which most contributors arrive. But if one doesn't have a connection, it is not that easy; Local Groups facilitate that by improving networking.

Local groups are incredibly important to the success of Debian since they often help with translations, making us more diverse, support, setting up local bug squashing sprints, establishing a local DebConf team along with miniDebConfs, getting sponsors for the project and much more.

The existence of Local Groups would also facilitate access to "swag" like stickers and mugs, since people do not always have the time to deal with the process of finding a supplier to actually get those made. The activity of local groups might facilitate that by organizing related logistics.

How to deal with local groups, how to define a local group

Debian gathers the information about Local Groups in its Local Groups wiki page (and subpages). Other organisations also have their own schemes, some of them featuring a map, blogs, or clear rules about what constitutes a local group. In the case of Debian there is not a predefined set of "rules", even about the group name. That is perfectly fine, we assume that certain local groups may be very small, or temporary (created around a certain time when they plan several activities, and then become silent). However, the way the groups are named and how they are listed on the wiki page sets expectations with regards to what kinds of activities they involve.

For this reason, we encourage all the Debian Local Groups to review their entries in the Debian wiki, keep them current (e.g. add a line "Status: Active (2020)"), and we encourage informal groups of Debian contributors that somehow "meet" to create a new entry in the wiki page, too.

What can Debian do to support Local Groups

Having a centralized database of groups is good (if up-to-date), but not enough. We'll explore other ways of propagation and increasing visibility, like organising the logistics of printing/sending swag and facilitating access to funding for Debian-related events.

Continuation of efforts

Efforts shall continue regarding Local Groups. Regular meetings are happening every two or three weeks; interested people are encouraged to explore some other relevant DebConf20 talks (Introducing Debian Brasil, Debian Academy: Another way to share knowledge about Debian, An Experience creating a local community on a small town), websites like Debian flyers (including other printed material such as cubes and stickers), visit the events section of the Debian website and the Debian Locations wiki page, and participate in the IRC channel #debian-localgroups at OFTC.

Joey Hess: comically bad shipping estimates and middlemen

16 September, 2020 - 08:06

My inverter has unfortunately died, and I wanted to replace it with the same model. Ideally before I lose the contents of the fridge. It's a 24v inverter, which is not at all as easy to find a replacement for as a 12v inverter would be.

Somehow Walmart was the only retailer that had it available with a delivery estimate: Just 2 days.

It's the second day now, with no indication they've shipped it. I noticed the "sold and shipped by Zoro", so went and found it on that website.

So, the reality is it ships direct from China via container ship. As does every product from Zoro, which all show as 2 day delivery on Walmart's website.

I don't think this is a pandemic thing. I think it's a trying to compete with Amazon and failing thing.

My other comically bad shipping estimate this pandemic was from Amazon though. There was a run this summer on Kayaks, because social distancing is great on the water. I found a high quality inflatable kayak.

Amazon said "only 2 left in stock" and promised delivery in 1 week. One week later, it had not shipped, and they updated the delivery estimate forward 1 week. A week after that, ditto.

Eventually I bought a new model from the same manufacturer, Advanced Elements. Unfortunately, that kayak exploded the second time I inflated it, due to a manufacturing defect.

So I got in touch with Advanced Elements and they offered a replacement. I asked if, instead, they maybe still had any of the older model of kayak I had tried to order. They checked their warehouse, and found "the last one" in a corner somewhere.

No shipping estimate was provided. It arrived in 3 days.

Molly de Blanc: “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” W. Quinn

15 September, 2020 - 20:28

There are a lot of interesting and valid things to say about the philosophy and actual arguments of the “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” by Warren Quinn. Unfortunately for me, none of them are things I feel particularly inspired by. I’m much more attracted to the many things implied in this paper. Among them are the role of social responsibility in making moral decisions.

At various points in the text, Quinn makes brief comments about how we have roles that we need to fulfill for the sake of society. These roles carry with them responsibilities that may supersede our regular moral responsibilities. Examples Quinn makes include being a private life guard (and being responsible for the life of one particular person) and being a trolley driver (and your responsibility is to make sure the train doesn’t kill anyone). This is part of what has led to me brushing Quinn off as another classist. Still, I am interested in the question of whether social responsibilities are more important than moral ones or whether there are times when this might occur.

One of the things I maintain is that we cannot be the best versions of ourselves because we are not living in societies that value our best selves. We survive capitalism. We negotiate climate change. We make decisions to trade the ideal for the functional. For me, this frequently means I click through terms of service, agree to surveillance, and partake in the use and proliferation of oppressive technology. I also buy an iced coffee that comes in a single use plastic cup; I shop at the store with questionable labor practices; I use Facebook.  But also, I don’t give money to panhandlers. I see suffering and I let it pass. I do not get involved or take action in many situations because I have a pass to not. These things make society work as it is, and it makes me work within society.

This is a self-perpetuating, mutually-abusive, co-dependent relationship. I must tell myself stories about how it is okay that I am preferring the status quo, that I am buying into the system, because I need to do it to survive within it and that people are relying on the system as it stands to survive, because that is how they know to survive.

Among other things, I am worried about the psychic damage this causes us. When we view ourselves as social actors rather than moral actors, we tell ourselves it is okay to take non-moral actions (or in-actions); however, we carry within ourselves intuitions and feelings about what is right, just, and moral. We ignore these in order to act in our social roles. From the perspective of the individual, we're hurting ourselves and suffering for the sake of benefiting and perpetuating a caustic society. From the perspective of society, we are perpetuating something that is not just less than ideal, but actually not good because it is based on allowing suffering.[1]

[1] This is for the sake of this text. I don’t know if I actually feel that this is correct.

My goal was to make this only 500 words, so I am going to stop here.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2020

15 September, 2020 - 17:01

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 237.25 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

August was a regular LTS month once again, even though it was only our 2nd month with Stretch LTS.
At the end of August some of us participated in DebConf 20 online where we held our monthly team meeting. A video is available.
As of now this video is also the only public resource about the LTS survey we held in July, though a written summary is expected to be released soon.

The security tracker currently lists 56 packages with a known CVE and the dla-needed.txt file has 55 packages needing an update.

Thanks to our sponsors

Sponsors that recently joined are in bold.



Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.