Planet Debian

Planet Debian - https://planet.debian.org/

Andrew Cater: Debian Barbeque, Cambridge 2022

29 August, 2022 - 03:07

And here we are: second day of the barbeque in Cambridge. Lots of food - as always - some alcohol, some soft drinks, coffee.

Lots of good friends, and banter and good-natured argument. For a couple of folk, it's their first time here - but most people have known each other for years. Lots of reminiscing, some crochet from two of us. Multiple technical discussions weaving and overlapping.

Not just meat and vegetarian options for food: a fresh loaf, gingerbread of various sorts, fresh Belgian-style waffles.

I'm in the front room: four of us silently on laptops, one on a phone. Sounds of a loud game of Mao from the garden - all very normal for this time of year.

Thanks to Jo and Steve, to all the cooks and folk sorting things out. One more night and I'll have done my first full BBQ here. Diet and slimming - what diet?

Steinar H. Gunderson: AV1 live streaming: Muxing and streaming

28 August, 2022 - 16:51

Following up on my previous posts, I've finally gotten to the part of the actual streaming (which includes muxing). It's not super-broad over all possible clients, but it probably gives enough information to tell roughly where we are.

First, the bad news: There is no iOS support for AV1. People had high hopes after it turned out the latest iOS 16 betas support AVIF, and even embedded a copy of dav1d to do so, but according to my own testing, this doesn't extend to video at all. Not as standalone files, not as <video>. (I don't know about Safari on macOS; I haven't tested.)

With that out of the way, let's test the different muxes:

WebM, continuous file: Given the close historical ties between VP8/VP9 and AV1, one would suppose WebM/Matroska would be the best-working mux for AV1. Surprisingly, that's not really so, at least for live streaming; in particular, Chrome on Android becomes really sad if you have any sort of invalid data in the stream (like e.g. a stream that does not start on a keyframe), which makes it very problematic that FFmpeg's Matroska mux does not tell the streamer where in the file keyframes start. Firefox is more forgiving than Chrome here.

VLC on desktop also plays this fine, but VLC on Android has some issues; due to some bug, it starts off all black and doesn't display anything until you seek. For an unseekable stream (which a continuous file is), that's not ideal.

That being said, if you can work around the FFmpeg issue, this mostly works in browsers (e.g. with Opus as the audio codec). Except that Chrome on Android seems to be using libgav1 and not dav1d, which is much slower, so it cannot hold up a 1080p60 10-bit stream in anything resembling real-time, at least not on my own phone. Ronald Bultje and Kaustubh Patankar have a presentation where they test AV1 decoder performance across 61 different Android phones, and it's a bit of grim reading; even with --fast-decode turned on (seemingly essential) and dav1d, 1080p60 5 Mbit/sec realtime decoding is far from universal. Their conclusion is that “For mid-range to high end Android devices, it is possible to achieve 1080p30 real time playback”. Augh. Anyways.

MP4, continuous file: I was surprised that this actually worked. But in retrospect, given that MP4 is the foundation of DASH streaming and whatnot, and YouTube wants that, perhaps I shouldn't be. It actually works better than WebM; it doesn't have the FFmpeg issues, and I already have lots of infrastructure to segment and deal with MP4. Which brings us to…

MP4 in HLS (fMP4): I'm not a fan of streaming by downloading lots of small files, but it's the only real solution if you want to seek backwards (including the VOD use case), and when iOS AV1 (streaming) support arrives, you can pretty much assume it will be for MP4 in HLS. And lo and behold, this mostly works, too. Chrome (even on Android) won't take it natively, but hls.js will accept it if you force the audio codec. VLC on Android starts out black, but is able to recover on its own without an explicit seek.
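To make this concrete, rough FFmpeg sketches of the two MP4 variants above might look as follows (a sketch only, assuming an FFmpeg build with SVT-AV1, and AAC audio to sidestep Opus-in-MP4 quirks; the input name, bitrates, keyframe interval and segment length are all placeholders, not taken from the post):

# continuous, fragmented MP4: fragment on every keyframe so clients can join mid-stream
ffmpeg -i input.ts \
  -c:v libsvtav1 -preset 8 -b:v 5M -g 120 \
  -c:a aac -b:a 128k \
  -movflags +frag_keyframe+empty_moov+default_base_moof \
  -f mp4 stream.mp4

# the same encode, but as fMP4 segments in an HLS playlist
ffmpeg -i input.ts \
  -c:v libsvtav1 -preset 8 -b:v 5M -g 120 \
  -c:a aac -b:a 128k \
  -f hls -hls_segment_type fmp4 -hls_time 2 -hls_flags independent_segments \
  stream.m3u8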

MPEG-TS: This drew a complete blank; there's a draft standard, but nobody implements it and it appears to be standing completely still. So FFmpeg can't even mux an AV1 stream it can demux itself, much less send to anything else. The main reason I would care about this is not for satellite decoding or similar (there are basically zero STBs supporting AV1 yet anyway), but because it's what SRT typically expects, i.e., for ingestion across lossy links. If you really need SRT, seemingly Matroska can live across it, especially if you are a bit careful with the cluster size so that dropped bytes don't translate into the loss of several seconds' worth of video.
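If you do go the Matroska-over-SRT route, the cluster size can be capped via the muxer options; roughly like this (again only a sketch, assuming an FFmpeg build with libsrt; the host, port and limits are made up):

# small clusters keep the damage from a dropped packet short
ffmpeg -i input.ts \
  -c:v libsvtav1 -preset 8 -b:v 5M -g 120 \
  -c:a libopus \
  -f matroska -cluster_time_limit 100 -cluster_size_limit 65536 \
  "srt://example.net:9000?mode=caller"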

So my recommendations right now would probably be:

  • Use AV1 in MP4 the same way you'd use H.264 in MP4; continuous file if you want reasonably low latency, HLS otherwise. (Cubemap handles both.)
  • You absolutely need a backup H.264 stream for slow decoders and older players. (Or VP9, if you don't care about iOS users at all.)

AV1 is definitely on the march, and absolutely trounces H.264 in terms of quality per bit, but universal support across platforms just isn't there yet—especially on mobile. So H.264 will live on for a little while more, even in cutting-edge stream deployments.

James Valleroy: FreedomBox Packages in Debian

27 August, 2022 - 20:27

FreedomBox is a Debian pure blend that reduces the effort needed to run and maintain a small personal server. Being a “pure blend” means that all of the software packages which are used in FreedomBox are included in Debian. Most of these packages are not specific to FreedomBox: they are common things such as Apache web server, firewalld, slapd (LDAP server), etc. But there are a few packages which are specific to FreedomBox: they are named freedombox, freedombox-doc-en, freedombox-doc-es, freedom-maker, fbx-all and fbx-tasks.

freedombox is the core package. You could say, if freedombox is installed, then your system is a FreedomBox (or a derivative). It has dependencies on all of the packages that are needed to get a FreedomBox up and running, such as the previously mentioned Apache, firewalld, and slapd. It also provides a web interface for the initial setup, configuration, and installing apps. (The web interface service is called “Plinth” and is written in Python using the Django framework.) The source package of freedombox also builds freedombox-doc-en and freedombox-doc-es. These packages install the FreedomBox manuals for English and Spanish, respectively.

freedom-maker is a tool that is used to build FreedomBox disk images. An image can be copied to a storage device such as a Solid State Disk (SSD), eMMC (internal flash memory chip), or a microSD card. Each image is meant for a particular hardware device (or target device), or a set of devices. In some cases, one image can be used across a wide range of devices. For example, the amd64 image is for all 64-bit x86 architecture machines (including virtual machines). The arm64 image is for all 64-bit ARM machines that support booting a generic image using UEFI.
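One common way to copy such an image onto a storage device is the usual Debian image workflow with dd (a sketch; the image filename and the target device /dev/sdX are placeholders, and pointing of= at the wrong device will destroy its data):

sudo dd if=freedombox-image.img of=/dev/sdX bs=4M status=progress conv=fsync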

fbx-all and fbx-tasks are special metapackages, both built from a single source package named debian-fbx. They are related to tasksel, a program that displays a curated list of packages that can be installed, organized by interest area. Debian blends typically provide task files to list their relevant applications in tasksel. fbx-tasks only installs the tasks for FreedomBox (without actually installing FreedomBox). fbx-all goes one step further and also installs freedombox itself. In general, FreedomBox users won’t need to interact with these two packages.
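To make the relationships between these packages concrete, here is a minimal sketch of how they are pulled in on a plain Debian system (the package names are the ones described above; real deployments would normally start from a FreedomBox disk image instead):

# core service, pulls in Apache, firewalld, slapd and the rest of the stack
sudo apt install freedombox

# optional manuals, English and Spanish
sudo apt install freedombox-doc-en freedombox-doc-es

# the FreedomBox tasks provided via fbx-tasks show up in tasksel's curated list
tasksel --list-tasks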

Antoine Beaupré: How to nationalize the internet in Canada

26 August, 2022 - 23:56

Rogers had a catastrophic failure in July 2022. It affected emergency services (as in: people couldn't call 911, but also some 911 services themselves failed), hospitals (which couldn't access prescriptions), banks and payment systems (as payment terminals stopped working), and regular users as well. The outage lasted almost a full day, and Rogers took days to give any technical explanation on the outage, and even when they did, details were sparse. So far the only detailed account is from outside actors like Cloudflare which seem to point at an internal BGP failure.

Its impact on the economy has yet to be measured, but it probably cost millions of dollars in wasted time and possibly led to life-threatening situations. Apart from holding Rogers (criminally?) responsible for this, what should be done in the future to avoid such problems?

It's not the first time something like this has happened: it happened to Bell Canada as well. The Rogers outage is also strangely similar to the Facebook outage last year, but, to its credit, Facebook did post a fairly detailed explanation only a day later.

The internet is designed to be decentralised, and having large companies like Rogers hold so much power is a crucial mistake that should be reverted. The question is how. Some critics were quick to point out that we need more ISP diversity and competition, but I think that's missing the point. Others have suggested that the internet should be a public good or even straight out nationalized.

I believe the solution to the problem of large, private, centralised telcos and ISPs is to replace them with smaller, public, decentralised service providers. The only way to ensure that works is to make sure that public money ends up creating infrastructure controlled by the public, which means treating ISPs as a public utility. This has been implemented elsewhere: it works, it's cheaper, and provides better service.

A modest proposal

Global wireless services (like phone services) and home internet inevitably grow into monopolies. They are public utilities, just like water, power, railways, and roads. The question of how they should be managed is therefore inherently political, yet people don't seem to question the idea that only the market (i.e. "competition") can solve this problem. I disagree.

10 years ago (in French), I suggested we, in Québec, should nationalize large telcos and internet service providers. I no longer believe this is a realistic approach: most of those companies have crap copper-based networks (at least for the last mile), yet are worth billions of dollars. It would be prohibitive, and a waste, to buy them out.

Back then, I called this idea "Réseau-Québec", a reference to the already nationalized power company, Hydro-Québec. (This idea, incidentally, made it into the plan of a political party.)

Now, I think we should instead build our own, public internet. Start setting up municipal internet services, fiber to the home in all cities, progressively. Then interconnect cities with fiber, and build peering agreements with other providers. This also includes a bid on wireless spectrum to start competing with phone providers as well.

And while that sounds really ambitious, I think it's possible to take this one step at a time.

Municipal broadband

In many parts of the world, municipal broadband is an elegant solution to the problem, with solutions ranging from Stockholm's city-owned fiber network (dark fiber, layer 1) to Utah's UTOPIA network (fiber to the premises, layer 2) and municipal wireless networks like Guifi.net which connects about 40,000 nodes in Catalonia.

A good first step would be for cities to start providing broadband services to their residents, directly. Cities normally own sewage and water systems that interconnect most residences and therefore have direct physical access everywhere. In Montréal, in particular, there is an ongoing project to replace a lot of old lead-based plumbing, which would give an opportunity to lay down a fiber network across the city.

This is a wild guess, but I suspect this would be much less expensive than one would think. Some people agree with me and quote this as low as 1000$ per household. There are about 800,000 households in the city of Montréal, so we're talking about an 800-million-dollar investment here, to connect every household in Montréal with fiber and, incidentally, a quarter of the province's population. And this is not an up-front cost: this can be built progressively, with expenses amortized over many years.

(We should not, however, connect Montréal first: it's used as an example here because it's a large number of households to connect.)

Such a network should be built with a redundant topology. I leave it as an open question whether we should adopt Stockholm's more minimalist approach or provide direct IP connectivity. I would tend to favor the latter, because then you can immediately start to offer the service to households and generate revenues to compensate for the capital expenditures.

Given the ridiculous profit margins telcos currently have — 8 billion $CAD net income for BCE (2019), 2 billion $CAD for Rogers (2020) — I also believe this would actually turn into a profitable revenue stream for the city, the same way Hydro-Québec is more and more considered as a revenue stream for the state. (I personally believe that's actually wrong and we should treat those resources as human rights and not cash cows, but I digress. The point is: this is not a cost, it's a revenue source.)

The other major challenge here is that the city will need competent engineers to drive this project forward. But this is not different from the way other public utilities run: we have electrical engineers at Hydro, sewer and water engineers at the city, this is just another profession. If anything, the computing science sector might be more at fault than the city here in its failure to provide competent and accountable engineers to society...

Right now, most of the network in Canada is copper: we are hitting the limits of that technology with DSL, and while cable has some life left to it (DOCSIS 4.0 does 4Gbps), that is nowhere near the capacity of fiber. Take the town of Chattanooga, Tennessee: in 2010, the city-owned ISP EPB finished deploying a fiber network to the entire town and provided gigabit internet to everyone. Now, 12 years later, they are using this same network to provide the mind-boggling speed of 25 gigabit to the home. To give you an idea, Chattanooga is roughly the size and density of Sherbrooke.

Provincial public internet

As part of building a municipal network, the question of getting access to "the internet" will immediately come up. Naturally, this will first be solved by using already existing commercial providers to hook up residents to the rest of the global network.

But eventually, networks should inter-connect: Montréal should connect with Laval, and then Trois-Rivières, then Québec City. This will require long haul fiber runs, but those links are not actually that expensive, and many of those already exist as a public resource at RISQ and CANARIE, which cross-connects universities and colleges across the province and the country. Those networks might not have the capacity to cover the needs of the entire province right now, but that is a router upgrade away, thanks to the amazing capacity of fiber.

There are two crucial mistakes to avoid at this point. First, the network needs to remain decentralised. Long haul links should be IP links with BGP sessions, and each city (or MRC) should have its own independent network, to avoid Rogers-class catastrophic failures.

Second, skill needs to remain in-house: RISQ has already made that mistake, to a certain extent, by selling its neutral datacenter. Tellingly, MetroOptic, probably the largest commercial dark fiber provider in the province, now operates the QIX, the second largest "public" internet exchange in Canada.

Still, we have a lot of infrastructure we can leverage here. If RISQ or CANARIE cannot be up to the task, Hydro-Québec has power lines running into every house in the province, with high voltage power lines running hundreds of kilometers far north. The logistics of long distance maintenance are already solved by that institution.

In fact, Hydro already has fiber all over the province, but it is a private network, separate from the internet for security reasons (and that should probably remain so). But this only shows they already have the expertise to lay down fiber: they would just need to lay down a parallel network to the existing one.

In that architecture, Hydro would be a "dark fiber" provider.

International public internet

None of the above solves the problem for the entire population of Québec, which is notoriously dispersed, with an area three times the size of France, but with only an eighth of its population (8 million vs 67). More specifically, Canada was originally a French colony, a land violently stolen from native people who have lived here for thousands of years. Some of those people now live in reservations, sometimes far from urban centers (but definitely not always). So the idea of leveraging the Hydro-Québec infrastructure doesn't always work to solve this, because while Hydro will happily flood a traditional hunting territory for an electric dam, they don't bother running power lines to the village they forcibly moved, powering it instead with noisy and polluting diesel generators. So before giving me fiber to the home, we should give power (and potable water, for that matter) to those communities first.

So we need to discuss international connectivity. (How else could we consider those communities than as peer nations anyway?) Québec has virtually zero international links. Even in Montréal, which likes to style itself a major player in gaming, AI, and technology, most peering goes through either Toronto or New York.

That's a problem that we must fix, regardless of the other problems stated here. Looking at the submarine cable map, we see very few international links actually landing in Canada. There is the Greenland connect which connects Newfoundland to Iceland through Greenland. There's the EXA which lands in Ireland, the UK and the US, and Google has the Topaz link on the west coast. That's about it, and none of those land anywhere near any major urban center in Québec.

We should have a cable running from France up to Saint-Félicien. There should be a cable from Vancouver to China. Heck, there should be a fiber cable running all the way from the end of the Great Lakes through Québec, then up around the northern passage and back down to British Columbia. Those cables are expensive, and the idea might sound ludicrous, but Russia is actually planning such a project for 2026. The US has cables running all the way up (and around!) Alaska, neatly bypassing all of Canada in the process. We just look ridiculous on that map.

Wireless networks

I know most people will have rolled their eyes so far back their heads have exploded. But I'm not done yet. I want wireless too. And by wireless, I don't mean a bunch of geeks setting up OpenWRT routers on rooftops. I tried that, and while it was fun and educational, it didn't scale.

A public networking utility wouldn't be complete without providing cellular phone service. This involves bidding for frequencies at the federal level, and deploying a rather large amount of infrastructure, but it could be a later phase, when the engineers and politicians have proven their worth.

At least part of the Rogers fiasco would have been averted if such a decentralized network backend existed. One might even want to argue that a separate institution should be set up to provide phone services, independently from the regular wired networking, if only for reliability.

Because remember here: the problem we're trying to solve is not just technical, it's about political boundaries, centralisation, and automation. If everything is run by this one organisation again, we will have failed.

However, I must admit that phone service is where my ideas fall a little short. I can't help but think it's also an accessible goal — maybe starting with a virtual operator — but it seems slightly less so than the others, especially considering how closed the phone ecosystem is.

Counter points

In debating these ideas while writing this article, the following objections came up.

I don't want the state to control my internet

One legitimate concern I have about the idea of the state running the internet is the potential it would have to censor or control the content running over the wires.

But I don't think there is necessarily a direct relationship between resource ownership and control of content. Sure, China has strong censorship in place, partly implemented through state-controlled businesses. But Russia also has strong censorship in place, based on regulatory tools: they force private service providers to install back-doors in their networks to control content and surveil their users.

Besides, the USA have been doing warrantless wiretapping since at least 2003 (and yes, that's 10 years before the Snowden revelations) so a commercial internet is no assurance that we have a free internet. Quite the contrary in fact: if anything, the commercial internet goes hand in hand with the neo-colonial internet, just like businesses did in the "good old colonial days".

Large media companies are the primary censors of content here. In Canada, the media cartel requested the first site-blocking order in 2018. The plaintiffs (including Québecor, Rogers, and Bell Canada) are both content providers and internet service providers, an obvious conflict of interest.

Nevertheless, there are some strong arguments against having a centralised, state-owned monopoly on internet service providers. FDN makes a good point on this. But this is not what I am suggesting: at the provincial level, the network would be purely physical, and regional entities (which could include private companies) would peer over that physical network, ensuring decentralization. Delegating the management of that infrastructure to an independent non-profit or cooperative (but owned by the state) would also ensure some level of independence.

Isn't the government incompetent and corrupt?

Also known as "private enterprise is better skilled at handling this, the state can't do anything right"

I don't think this is a "fait accompli". If anything, I have found publicly run utilities to be spectacularly reliable here. I rarely have trouble with sewage, water, or power, and keep in mind I live in a city where we receive about 2 meters of snow a year, which tends to create lots of trouble with power lines. Unless there's a major weather event, power just runs here.

I think the same can happen with an internet service provider. But it would certainly need to be held to higher standards than what we're used to, because frankly the internet is kind of janky.

A single monopoly will be less reliable

I actually agree with that, but that is not what I am proposing anyways. Current commercial or non-profit entities will be free to offer their services on top of the public network.

And besides, the current "ha! diversity is great" approach is exactly what we have now, and it's not working. The pretense that we can have competition over a single network is what led the US into the ridiculous situation where they also pretend to have competition over the power utility market. This led to massive forest fires in California and major power outages in Texas. It doesn't work.

Wouldn't this create an isolated network?

One theory is that this new network would be so hostile to incumbent telcos and ISPs that they would simply refuse to network with the public utility. And while it is true that the telcos currently do also act as a kind of "tier one" provider in some places, I strongly feel this is also a problem that needs to be solved, regardless of ownership of networking infrastructure.

Right now, telcos often hold both ends of the stick: they are the gateway to users, the "last mile", but they also provide peering to the larger internet in some locations. In at least one datacenter in downtown Montréal, I've seen traffic go through Bell Canada that was not directly targeted at Bell customers. So in effect, they are in a position of charging twice for the same traffic, and that's not only ridiculous, it should just be plain illegal.

And besides, this is not a big problem: there are other providers out there. As bad as the market is in Québec, there is still some diversity in Tier one providers that could allow for some exits to the wider network (e.g. yes, Cogent is here too).

What about Google and Facebook?

Nationalization of other service providers like Google and Facebook is out of scope of this discussion.

That said, I am not sure the state should get into the business of organising the web or providing content services, but I will point out it already does some of that through its own websites. It should probably keep itself to this, and also consider providing normal services for people who don't or can't access the internet.

(And I would also be ready to argue that Google and Facebook already act as extensions of the state: certainly if Facebook didn't exist, the CIA or the NSA would like to create it at this point. And Google has lucrative business with the US department of defense.)

What does not work

So we've seen one thing that could work. Maybe it's too expensive. Maybe the political will isn't there. Maybe it will fail. We don't know yet.

But we know what does not work, and it's what we've been doing ever since the internet has gone commercial.

Legal pressure and regulation

In 1984 (of all years), the US Department of Justice finally broke up AT&T into half a dozen corporations, after a 10-year legal battle. Yet decades later, we're back to only three large providers doing essentially what AT&T was doing back then, and those are regional monopolies: AT&T, Verizon, and Lumen (not counting T-Mobile, which is a different breed). So the legal approach really didn't work that well, especially considering the political landscape changed in the US, and the FTC seems perfectly happy to let those major mergers continue.

In Canada, we never even pretended we would solve this problem at all: Bell Canada (the literal "father" of AT&T) is in the same situation now. We have either a regional monopoly (e.g. Videotron for cable in Québec) or an oligopoly (Bell, Rogers, and Telus controlling more than 90% of the market). Telus does have one competitor in the west of Canada, Shaw, but Rogers has been trying to buy it out. The competition bureau seems to have blocked the merger for now, but it didn't stop other recent mergers like Bell's acquisition of one of its main competitors in Québec, eBox.

Regulation doesn't seem capable of ensuring those profitable corporations provide us with decent pricing, which makes Canada one of the most expensive countries (research) for mobile data on the planet. The recent failure of the CRTC to properly protect smaller providers has even led to price hikes. Meanwhile the oligopoly is agreeing on its own price hikes, thereby becoming a real cartel, complete with price fixing and reductions in output.

There are actually regulations in Canada that are supposed to keep the worst of the Rogers outage from happening at all. According to CBC:

Under Canadian Radio-television and Telecommunications Commission (CRTC) rules in place since 2017, telecom networks are supposed to ensure that cellphones are able to contact 911 even if they do not have service.

I could personally confirm that my phone couldn't reach 911 services, because all calls would fail: the problem was that towers were still up, so your phone wouldn't fall back to alternative service providers (which could have resolved the issue). I can only speculate as to why Rogers didn't take cell phone towers out of the network to let phones work properly for 911 service, but it seems like a dangerous game to play.

Hilariously, the CRTC itself didn't have a reliable phone service due to the service outage:

Please note that our phone lines are affected by the Rogers network outage. Our website is still available: https://crtc.gc.ca/eng/contact/

https://mobile.twitter.com/CRTCeng/status/1545421218534359041

I wonder if they will file a complaint against Rogers themselves about this. I probably should.

It seems the federal government thinks more of the same medicine will fix the problem and has told companies they should "help" each other in an emergency. I doubt this will fix anything, and it could actually make things worse if the competitors interoperate more, as it could cause multi-provider, cascading failures.

Subsidies

The absurd price we pay for data does not actually mean everyone gets high speed internet at home. Large swathes of the Québec countryside don't get broadband at all, and it can be difficult or expensive, even in large urban centers like Montréal, to get high speed internet.

That is despite having a series of subsidies that all avoided investing in our own infrastructure. We had the "fonds de l'autoroute de l'information", "information highway fund" (site dead since 2003, archive.org link) and "branchez les familles", "connecting families" (site dead since 2003, archive.org link) which subsidized the development of a copper network. In 2014, more of the same: the federal government poured hundreds of millions of dollars into a program called connecting Canadians to connect 280 000 households to "high speed internet". And now, the federal and provincial governments are proudly announcing that "everyone is now connected to high speed internet", after pouring more than 1.1 billion dollars to connect, guess what, another 380 000 homes, right in time for the provincial election.

Of course, technically, the deadline won't actually be met until 2023. Québec is a big area to cover, and you can guess what happens next: the telcos threw up their hands and said some areas just can't be connected. (Or they connect their CEO but not the poor folks across the lake.) The story then takes the predictable twist of giving more money out to billionaires, now subsidizing Musk's Starlink system to connect those remote areas.

To give a concrete example: a friend who lives about 1000 km away from Montréal, 4 km from a small, 2,500-inhabitant village, recently got symmetric 100 Mbps fiber at home from Telus, thanks to those subsidies. But I can't get that service in Montréal at all, presumably because Telus and Bell colluded to split that market. Bell doesn't provide me with such a service either: they tell me they have "fiber to my neighborhood", and only offer me a 25/10 Mbps ADSL service. (There is Vidéotron offering 400 Mbps, but that's copper cable, again a dead technology, and asymmetric.)

Conclusion

Remember Chattanooga? Back in 2010, they funded the development of a fiber network, and now they have deployed a network roughly a thousand times faster than what we have just funded with a billion dollars. In 2010, I was paying Bell Canada 60$/mth for 20 Mbps and a 125 GB cap, and now, I'm still (indirectly) paying Bell for roughly the same speed (25 Mbps). Back then, Bell was throttling their competitors' networks until 2009, when they were forced by the CRTC to stop. Both Bell and Vidéotron still explicitly forbid you from running your own servers at home, and Vidéotron charges prohibitive prices which make it near impossible for resellers to sell uncapped services. Those companies are not spurring innovation: they are blocking it.

We have spent all this money for the private sector to build us a private internet, over decades, without any assurance of quality, equity or reliability. And while in some locations ISPs did deploy fiber to the home, they certainly didn't upgrade their entire network to follow suit, let alone allow resellers to compete on that network.

In 10 years, when 100 Mbps will be laughable, I bet those service providers will again punt the ball back into the public court and tell us they don't have the money to upgrade everyone's equipment.

We got screwed. It's time to try something new.

Jonathan Dowland: Replacement nosecone for Janod Rocket

26 August, 2022 - 22:28

My youngest has a cute little wooden Rocket puzzle, made by a French company called Janod. Sadly, at some point, we lost the nose cone part, so I designed and printed a replacement.

It's substantially based on one module from an OpenSCAD "Nose Cone Library" by Garrett Goss, which he kindly released to the public domain.

I embellished the cone with a top pointy bit and a rounded brim. I also hollowed out the bottom to make space for a magnet. Originally I designed an offset-from-centre slot for the magnet, the idea being that you would insert the magnet off-centre, it would slot into position and be partly held in by the offset, and you could then finish the job with a blob of glue or similar. Unfortunately I had a layer shift during the print, so that didn't work. I reamed out a small alcove instead. The white trim is adapted from the lid of a jar of kitchen herbs, trimmed back. I secured the magnet and glued the lid over the bottom.

Here it is: rocket.scad. My contributions are under the terms of Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).

Dirk Eddelbuettel: RApiSerialize 0.1.2 on CRAN: Small Bugfix

26 August, 2022 - 06:27

A new bug fix release 0.1.2 of RApiSerialize got onto CRAN earlier. It follows on the 0.1.1 release from earlier this month, and addresses a minor build issue where an error message, only in the case of missing long vector support, tried to use an i18n macro that is not supplied by the build.

The RApiSerialize package is used by both my RcppRedis package and Travers' excellent qs package. Neither one of us has a need to switch to format 3 yet, so format 2 remains the default. But along with other standard updates to package internals, it was straightforward to offer the newer format, so that is what we did.

Changes in version 0.1.2 (2022-08-25)
  • Correct an error() call (when missing long vector support) to not use i18n macro

Courtesy of my CRANberries, there is also a diffstat to the previous version. More details are at the RApiSerialize page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Antoine Beaupré: One dead Purism laptop

26 August, 2022 - 02:28

The "série noire" (run of bad luck) continues. I ordered my first Purism Librem 13v4 laptop in April 2019 and it arrived, unsurprisingly, more than three weeks later. But more surprisingly, it did not work at all: a problem eerily similar to this post talking about a bricked Purism laptop. Thankfully, Purism was graceful enough to cross-ship a replacement, and once I paid the extra (gulp) 190$ Fedex fee, I had my new elite laptop ready.

Less than a year later, the right USB-A port breaks: it would deliver power, but no data signal (nothing in dmesg or lsusb). Two months later, the laptop short-circuits and completely dies. And here goes another RMA, this time without a shipping label or cross shipping, so I had to pay shipping fees.

Now the third laptop in as many years is as good as dead. The left hinge basically broke off. Earlier this year, I had noticed something was off with the lid: it was wobbly. I figured that it was just the way that laptop was, "they don't make it as sturdy as they did in the good old days, do they". But it was probably a signal of some much worse problem. Eventually, the bottom panel actually cracked open, and I realized that some internal mechanism had basically exploded.

The hinges of the Librem are screwed into little golden sprockets that are fitted in plastic shims of the laptop casing. The shims had exploded: after opening the back lid, they literally fell off (alongside the tiny golden sprocket). Support confirmed that I needed a case replacement, but unfortunately they were "out of stock" of replacement cases for the Librem 13, and have been for a while. I am 13th on the waiting list, apparently.

So this laptop is basically dead for me right now: it's my travel laptop. Its primary purpose is to sit at home until I go to a conference or a meeting or a cafe or upstairs or wherever to do some work. I take the laptop, pop the lid, tap-tap some work, close the lid. Had I used that laptop as my primary device, I would probably have closed and opened that lid thousands of times. But because it's a travel laptop, that number is probably in the hundreds, which means this laptop is not designed to withstand prolonged use.

I have now ordered a Framework laptop, 12th generation. I have some questions about their compatibility with Debian (and Linux in general), and concerns about power usage, but it certainly can't be worse than the Purism, in any case. And it can only get better over time: the main board is fully replaceable, and they have replacement hinges in stock, although the laptop itself is currently in pre-order (slated for September). I will probably post a full review when I actually lay my hands on this device.

In the meantime, I strongly discourage anyone from buying Purism products, as I previously did. You can see the full maintenance history of the laptop in the review page as well.

Jonathan Dowland: IKEA HEMNES Shoe cabinet repair

25 August, 2022 - 17:16

Over time the screw hole into the wooden front section of our IKEA HEMNES Shoe cabinet had worn out, and it was not possible to secure a screw at that position any more. I designed a little 'wedge' of plastic to sit over the fitting and provide some offset screw holes.

At the time, I had a very narrow window of access to our office 3D printer, so I designed it almost as a "speed coding" session in OpenSCAD: in between 5 and 10 minutes, guessing about the exact dimensions of the plastic bit that it sits over.

Jonathan Dowland: Our Study, 2022

24 August, 2022 - 22:41

Two years ago I blogged a photo of my study. I'd been planning to revisit that for a while, but I'd been somewhat embarrassed by the state of it. I've finally decided to bite the bullet.

Fisheye shot of my home office, 2022

What's changed

The supposedly-temporary 4x4 KALLAX has become a permanent feature. I managed to wedge it on the right-hand side far wall, next to the bookcase. They fit snugly together. Since I'd put my turntable on top, I've now dedicated the top row of four spaces to 12" records. (There's a close-up pic here).

My hi-fi speakers used to be in odd places: they're now on my desktop. Also on my desktop: a camera, repurposed as a webcam, and a beige Creative Labs microphone from the 90s; both to support video conferencing.

The desktop is otherwise largely unchanged. My Amiga 500 and synthesiser had continued to live there until very recently, when I had an accident with a pot of tea. I'm in two minds as to whether I'll bring them back: having the desk clear is quite nice.

There's a lot of transient stuff and rubbish to sort out: the bookcase visible on the left, the big one behind my chair on the right (itself to get rid of); and the collection of stuff on the floor. Sadly, the study is the only room in our house where things like this can be collected prior to disposal: it's disruptive, but less so than if we stuffed them in a bedroom.

You can't easily see the "temporary" storage unit for Printer(s) that used to be between bookcases on the right-hand wall. It's still there, situated behind my desk chair. I did finally get rid of the deprecated printer (and I plan to change the HP laser too, although that's a longer story). The NAS, I have recently moved to the bottom-right Kallax cube, and that seems to work well. There's really no other space in the Study for the printer.

Also not pictured: a much improved ceiling light.

What would I like to improve

First and foremost, get rid of all the transient stuff! It's simply a matter of not having put the time in to sort it out.

If I manage that, I've been trying to think about how to best organise material relating to ongoing projects. Some time ago I salivated over this home office tour for an embedded developer. Jay has an interesting project tray system. I'm thinking of developing something like that, with trays or boxes I can store in the Kallax to my right.

I'd love to put a comfortable reading chair, perhaps a wing-backed thing, and a reading light, over on the left-hand side near the window. And/or, a bench at a height enabling me to do the occasional bit of standing work, and/or to support the Alesis Micron (or a small digital Piano).

Emmanuel Kasper: Investigating database replication in different availability zones

24 August, 2022 - 22:09

Investigating today what AWS Relational Database Service with two readable standbys actually is.

Assuming your current read/write server is in Availability Zone AZ1, this is basically PostgreSQL 14 with synchronous_standby_names = 'ANY 1 (az2, az3)' and synchronous_commit = on.
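Spelled out as configuration, a minimal sketch of that setup on the primary could look like the following (the standby names az2 and az3 are just the ones used above; each standby would need to set a matching application_name in its primary_conninfo):

# postgresql.conf on the primary in AZ1
synchronous_standby_names = 'ANY 1 (az2, az3)'   # wait for any one of the two standbys
synchronous_commit = on                          # commit returns once that standby has flushed the WAL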

In regards to safety of data, it looks similar to the Raft algorithm used by etcd with three members, as a write is only acknowledged once it has been fsynced by two servers. The difference is that Raft has leader election, whereas in PostgreSQL the leader is set at startup and you have to build the election mechanism yourself.

There is no special cloud magic here, it is just database good practices paid by the minute.

Ian Jackson: prefork-interp - automatic startup time amortisation for all manner of scripts

23 August, 2022 - 06:37
The problem I had - Mason, so, sadly, FastCGI

Since the update to current Debian stable, the website for YARRG (a play-aid for Puzzle Pirates which I wrote some years ago) started to occasionally return “Internal Server Error”, apparently due to bug(s) in some FastCGI libraries.

I was using FastCGI because the website is written in Mason, a Perl web framework, and I found that Mason CGI calls were slow. I’m using CGI - yes, trad CGI - via userv-cgi. Running Mason this way would “compile” the template for each HTTP request just when it was rendered, and then throw the compiled version away. The more modern approach of an application server doesn’t scale well to a system which has many web “applications” most of which are very small. The admin overhead of maintaining a daemon, and corresponding webserver config, for each such “service” would be prohibitive, even with some kind of autoprovisioning setup. FastCGI has an interpreter wrapper which seemed like it ought to solve this problem, but it’s quite inconvenient, and often flaky.

I decided I could do better, and set out to eliminate FastCGI from my setup. The result seems to be a success; once I’d done all the hard work of writing prefork-interp, I found the result very straightforward to deploy.

prefork-interp

prefork-interp is a small C program which wraps a script, plus a scripting language library to cooperate with the wrapper program. Together they achieve the following:

  • Startup cost of the script (loading modules it uses, precomputations, loading and processing of data files, etc.) is paid once, and reused for subsequent invocations of the same script.

  • Minimal intervention to the script source code:
    • one new library to import
    • one new call to make from that library, right after the script initialisation is complete
    • change to the #! line.
  • The new “initialisation complete” call turns the program into a little server (a daemon), and then returns once for each actual invocation, each time in a fresh grandchild process.

Features:
  • Seamless reloading on changes to the script source code (automatic, and configurable).

  • Concurrency limiting.

  • Options for distinguishing different configurations of the same script so that they get a server each.

  • You can run the same script standalone, as a one-off execution, as well as under prefork-interp.

  • Currently, a script-side library is provided for Perl. I’m pretty sure Python would be fairly straightforward.

Important properties not always satisfied by competing approaches:
  • Error output (stderr) and exit status from both phases of the script code execution faithfully reproduced to the calling context. Environment, arguments, and stdin/stdout/stderr descriptors, passed through to each invocation.

  • No polling, other than a long-term idle timeout, so good on laptops (or phones).

  • Automatic lifetime management of the per-script server, including startup and cleanup. No integration needed with system startup machinery: No explicit management of daemons, init scripts, systemd units, cron jobs, etc.

  • Useable right away without fuss for CGI programs but also for other kinds of program invocation.

  • (I believe) reliable handling of unusual states arising from failed invocations or races.

Swans paddling furiously

The implementation is much more complicated than the (apparent) interface.

I won’t go into all the details here (there are some terrifying diagrams in the source code if you really want), but some highlights:

We use an AF_UNIX socket (hopefully in /run/user/UID, but in ~ if not) for rendezvous. We can try to connect without locking, but we must protect the socket with a separate lockfile to avoid two concurrent restart attempts.

We want stderr from the script setup (pre-initialisation) to be delivered to the caller, so the script ought to inherit our stderr and then will need to replace it later. Twice, in fact, because the daemonic server process can’t have a stderr.

When a script is restarted for any reason, any old socket will be removed. We want the old server process to detect that and quit. (If it hung about, it would wait for the idle timeout; if this happened a lot - e.g., a constantly changing set of services - we might end up running out of pids or something.) Spotting the socket disappearing, without polling, involves use of a library capable of using inotify (or the equivalent elsewhere). Choosing a C library to do this is not so hard, but portable interfaces to this functionality can be hard to find in scripting languages, and also we don’t want every language binding to have to reimplement these checks. So for this purpose there’s a little watcher process, and associated IPC.

When an invoking instance of prefork-interp is killed, we must arrange for the executing service instance to stop reading from its stdin (and, ideally, writing its stdout). Otherwise it’s stealing input from prefork-interp’s successors (maybe the user’s shell)!

Cleanup ought not to depend on positive actions by failing processes, so each element of the system has to detect failures of its peers by means such as EOF on sockets/pipes.

Obtaining prefork-interp

I put this new tool in my chiark-utils package, which is a collection of useful miscellany. It’s available from git.

Currently I make releases by uploading to Debian, where prefork-interp has just hit Debian unstable, in chiark-utils 7.0.0.

Support for other scripting languages

I would love Python to be supported. If any pythonistas reading this think you might like to help out, please get in touch. The specification for the protocol, and what the script library needs to do, is documented in the source code.

Future plans for chiark-utils

chiark-utils as a whole is in need of some tidying up of its build system and packaging.

I intend to try to do some reorganisation. Currently I think it would be better to organise the source tree more strictly, with a directory for each included facility, rather than grouping “compiled” and “scripts” together.

The Debian binary packages should be reorganised more fully according to their dependencies, so that installing a program will ensure that it works.

I should probably move the official git repo from my own git+gitweb to a forge (so we can have MRs and issues and so on).

And there should be a lot more testing, including Debian autopkgtests.




Jonathan Wiltshire: Team Roles and Tuckman’s Model, for Debian teams

23 August, 2022 - 03:26

When I first moved from being a technical consultant to a manager of other consultants, I took a 5-day course Managing Technical Teams – a bootstrap for managing people within organisations, but with a particular focus on technical people. We do have some particular quirks, after all…

Two elements of that course keep coming to mind when doing Debian work, and they both relate to how teams fit together and get stuff done.

Tuckman’s four stages model

In the mid-1960s Bruce W. Tuckman developed a four-stage descriptive model of the stages a project team goes through in its lifetime. They are:

  • Forming: the team comes together and its members are typically motivated and excited, but they often also feel anxiety or uncertainty about how the team will operate and their place within it.
  • Storming: initial enthusiasm can give way to frustration or disagreement about goals, roles, expectations and responsibilities. Team members are establishing trust, power and status. This is the most critical stage.
  • Norming: team members take responsibility and share a common goal. They tolerate the whims and fancies of others, sometimes at the expense of conflict and sharing controversial ideas.
  • Performing: team members are confident, motivated and knowledgeable. They work towards the team’s common goal. The team is high-achieving.

“Resolved disagreements and personality clashes result in greater intimacy, and a spirit of co-operation emerges.”

Teams need to understand these stages because a team can regress to earlier stages when its composition or goals change. A new member, the departure of an existing member, changes in supervisor or leadership style can all lead a team to regress to the storming stage and fail to perform for a time.

When you see a team member say this, as I observed in an IRC channel recently, you know the team is performing:

“nice teamwork these busy days”

Seen on IRC in the channel of a performing team

Tuckman’s model describes a team’s performance overall, but how can team members establish what they can contribute, and how can they go about doing so confidently and effectively?

Belbin’s Team Roles

“The types of behaviour in which people engage are infinite. But the range of useful behaviours, which make an effective contribution to team performance, is finite. These behaviours are grouped into a set number of related clusters, to which the term ‘Team Role’ is applied.”

Belbin, R M. Team Roles at Work. Oxford: Butterworth-Heinemann, 2010

Dr Meredith Belbin’s thesis, based on nearly ten years’ research during the 1970s and 1980s, is that each team has a number of roles which need to be filled at various times, but they’re not innate characteristics of the people filling them. People may have attributes which make them more or less suited to each role, and they can consciously take up a role if they recognise its need in the team at a particular time.

Belbin’s nine team roles are:

  • Plant (thinking): the ideas generator; solves difficult problems. Associated weaknesses: ignores incidentals; preoccupation
  • Resource investigator (people): outgoing; enthusiastic; has lots of contacts – knows someone who might know someone who knows how to solve a problem. Associated weaknesses: over-optimism, enthusiasm wanes quickly
  • Co-ordinator (people): mature; confident; identifies talent; clarifies goals and delegates effectively. Associated weaknesses: may be seen as manipulative; offloads own share of work.
  • Shaper (action): challenging; dynamic; has drive. Describes what they want and when they want it. Associated weaknesses: prone to provocation; offends others’ feelings.
  • Monitor/evaluator (thinking): sees all options, judges accurately. Best given data and options and asked which the team should choose. Associated weaknesses: lacks drive; can be overly critical.
  • Teamworker (people): takes care of things behind the scenes; spots a problem and deals with it quietly without fuss. Averts friction. Associated weaknesses: indecisive; avoids confrontation.
  • Implementer (action): turns ideas into actions and organises work. Allowable weaknesses: somewhat inflexible; slow to respond to new possibilities.
  • Completer finisher (action): searches out errors; polishes and perfects. Despite the name, may never actually consider something “finished”. Associated weaknesses: inclined to worry; reluctant to delegate.
  • Specialist (thinking): knows or can acquire a wealth of knowledge on a subject. Associated weaknesses: narrow focus; overwhelms others with depth of knowledge.

(adapted from https://www.belbin.com/media/3471/belbin-team-role-descriptions-2022.pdf)

A well-balanced team, Belbin asserts, isn’t comprised of multiples of nine individuals who fit into one of these roles permanently. Rather, it has a number of people who are comfortable to wear some of these hats as the need arises. It’s even useful to use the team roles as language: for example, someone playing a shaper might say “the way we’ve always done this is holding us back”, to which a co-ordinator could respond “Steve, Joanna – put on your Plant hats and find some new ideas. Talk to Susan and see if she knows someone who’s tackled this before. Present the options to Nigel and he’ll help evaluate which ones might work for us.”

Teams in Debian

There are all sorts of teams in Debian – those which are formally brought into operation by the DPL or the constitution; package maintenance teams; public relations teams; non-technical content teams; special interest teams; and a whole heap of others. Teams can be formal and informal, fleeting or long-lived, two people working together or dozens.

But they all have in common the Tuckman stages of their development and the Belbin team roles they need to fill to flourish. At some stage in their existence, they will all experience new or departing team members and a period of re-forming, storming and norming – perhaps fleetingly, perhaps not. And at some stage they will all need someone to step into a team role, play the part and get the team one step further towards their goals.

Footnote

Belbin Associates, the company Meredith Belbin established to promote and continue his work, offers a personalised report with guidance about which roles team members show the strongest preferences for, and how to make best use of them in various settings. They’re quick to complete and can also take into account “observers”, i.e. how others see a team member. All my technical staff go through this process blind shortly after they start, so as not to bias their input, and then we discuss the roles and their report in detail as a one-to-one.

There are some teams in Debian for which this process and discussion as a group activity could be invaluable. I have no particular affiliation with Belbin Associates other than having used the reports and the language of team roles for a number of years. If there’s sufficient interest for a BoF session at the next DebConf, I could probably be persuaded to lead it.

Photo by Josh Calabrese on Unsplash

Antoine Beaupré: Alternatives MPD clients to GMPC

23 August, 2022 - 00:17

GMPC (GNOME Music Player Client) is an audio player based on MPD (Music Player Daemon) that I've been using as my main audio player for years now.

Unfortunately, it's marked as "unmaintained" in the official list of MPD clients, along with basically every client available in Debian. In fact, if you look closely, all but one of the 5 unmaintained clients are in Debian (ario, cantata, gmpc, and sonata), which is kind of sad. And none of the active ones are packaged.

GMPC status and features

GMPC, in particular, is basically dead. The upstream website domain has been lost and there has been no release in ages. It's built with GTK2 so it's bound to be destroyed in a fire at some point anyways.

Still: it's really an awesome client. It has:

  • cover support
  • lyrics and tabs lookups (although those typically fail now)
  • last.fm lookups
  • high performance: loading thousands of artists or tracks is almost instant
  • repeat/single/consume/shuffle settings (single is particularly nice)
  • (global) keyboard shortcuts
  • file, artist, genre, tag browser
  • playlist editor
  • plugins
  • multi-profile support
  • avahi support
  • shoutcast support

Regarding performance, the only thing that I could find to slow down gmpc is to make it load all of my 40k+ artists in a playlist. That's slow, but it's probably understandable.

It's basically impossible to find a client that satisfies all of those.

But here are the clients that I found, alphabetically. I restrict myself to Linux-based clients.

CoverGrid

CoverGrid looks real nice, but is sharply focused on browsing covers. It's explicitly "not to be a replacement for your favorite MPD client but an addition to get a better album-experience", so probably not good enough for a daily driver. I asked for a Flathub package so it could be tested.

mpdevil

mpdevil is a nice little client. It supports:

  • repeat, shuffle, single, consume mode
  • playlist support (although it fails to load any of my playlists with a UnicodeDecodeError)
  • nice genre / artist / album cover based browser
  • fails to load "all artists" (or takes too long to (pre-?)load covers?)
  • keyboard shortcuts
  • no file browser

Overall pretty good, but performance issues with large collections, and needs a cleanly tagged collection (which is not my case).

QUIMUP

QUIMUP looks like a simple client, C++, Qt, and mouse-based. No Flatpak, not tested.

SkyMPC

SkyMPC is similar. Ruby, Qt, documentation in Japanese. No Flatpak, not tested.

Xfmpc

Xfmpc is the XFCE client. Minimalist, doesn't seem to have all the features I need. No Flatpak, not tested.

Ymuse

Ymuse is another promising client. It has trouble loading all my artists or albums (and that's without album covers), but it eventually does. It does have a Files browser which saves it... It's noticeably slower than gmpc but does the job.

Cover support is spotty: it sometimes shows up in notifications but not the player, which is odd. I'm missing a "this track information" thing. It seems to support playlists okay.

I'm missing an album cover browser as well. Overall seems like the most promising.

Written in Golang. It crashed on a library update.

Conclusion

For now, I guess that ymuse is the most promising client, even though it's still lacking some features and performance is suffering compared to gmpc. I'll keep updating this page as I find more information about the projects. I do not intend to package anything yet, and will wait a while to see if a clear winner emerges.

Wouter Verhelst: Remote notification

22 August, 2022 - 21:15

Sometimes, it's useful to get a notification that a command has finished doing something you were waiting for:

make my-large-program && notify-send "compile finished" "success" || notify-send "compile finished" "failure"

This will send a notification message with the title "compile finished" and a body of "success" or "failure", depending on whether the command completed successfully. It lets you minimize (or otherwise hide) the terminal window while you do something else, which can be a very useful thing to do.

It works great when you're running something on your own machine, but what if you're running it remotely?

There might be something easy to do, but I whipped up a bit of Perl instead:

#!/usr/bin/perl -w

use strict;
use warnings;

use Glib::Object::Introspection;
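# expose libnotify (the Notify-0.7 GObject Introspection typelib) under the Gtk3::Notify package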
Glib::Object::Introspection->setup(
    basename => "Notify",
    version => "0.7",
    package => "Gtk3::Notify",
);

use Mojolicious::Lite -signatures;

Gtk3::Notify->init();
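
# /notify: build a desktop notification from the "title" and "message" query
# parameters (with fallback defaults) and reply with a plain "OK"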

get '/notify' => sub ($c) {
    my $msg = $c->param("message");
    if(!defined($msg)) {
        $msg = "message";
    }
    my $title = $c->param("title");
    if(!defined($title)) {
        $title = "title";
    }
    app->log->debug("Sending notification '$msg' with title '$title'");
    my $n = Gtk3::Notify::Notification->new($title, $msg, "");
    $n->show;
    $c->render(text => "OK");
};

app->start;

This requires the packages libglib-object-introspection-perl, gir1.2-notify-0.7, and libmojolicious-perl to be installed; the script can then be started like so:

./remote-notify daemon -l http://0.0.0.0:3000/

(assuming you did what I did and saved the above as "remote-notify")

Once you've done that, you can just curl a notification message to yourself:

curl 'http://localhost:3000/notify?title=test&message=test+body'

Doing this via localhost is rather silly (much better to use notify-send for that), but it becomes much more interesting when you send notifications to your laptop from a remote system.
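
One way to do that is with a reverse SSH tunnel, so the remote machine can reach the listener on your laptop without exposing it on the network. This is only a sketch: it assumes the daemon above is already running on the laptop, and "buildhost" is a made-up hostname.

# on the laptop: make the local listener reachable as port 3000 on buildhost
ssh -R 3000:localhost:3000 buildhost

# on buildhost: notify the laptop once the long-running command finishes
make my-large-program \
    && curl 'http://localhost:3000/notify?title=compile+finished&message=success' \
    || curl 'http://localhost:3000/notify?title=compile+finished&message=failure'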

An obvious TODO would be to add some form of security, but that's left as an exercise for the reader...

Simon Josefsson: Static network config with Debian Cloud images

22 August, 2022 - 17:09

I self-host some services on virtual machines (VMs), and I’m currently using Debian 11.x as the host machine relying on the libvirt infrastructure to manage QEMU/KVM machines. While everything has worked fine for years (including on Debian 10.x), there has always been one issue causing a one-minute delay every time I install a new VM: the default images run a DHCP client that never succeeds in my environment. I never found out a way to disable DHCP in the image, and none of the documented ways through cloud-init that I have tried worked. A couple of days ago, after reading the AlmaLinux wiki I found a solution that works with Debian.

The following commands create a Debian VM with static network configuration, without the annoying one-minute DHCP delay. The three essential cloud-init keywords are the NoCloud meta-data parameters dsmode: local and network-interfaces (with the static configuration), combined with the user-data bootcmd keyword. I’m using a Raptor CS Talos II ppc64el machine, so replace the image link with a genericcloud amd64 image if you are using x86.

wget https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-ppc64el.qcow2
cp debian-11-generic-ppc64el.qcow2 foo.qcow2
cat>meta-data
dsmode: local
network-interfaces: |
 iface enp0s1 inet static
 address 192.168.98.14/24
 gateway 192.168.98.12
^D
cat>user-data
#cloud-config
fqdn: foo.mydomain
manage_etc_hosts: true
disable_root: false
ssh_pwauth: false
ssh_authorized_keys:
- ssh-ed25519 AAAA...
timezone: Europe/Stockholm
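# drop the generated runtime config for enp0s1, then bring the interface up with the static settings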
bootcmd:
- rm -f /run/network/interfaces.d/enp0s1
- ifup enp0s1
^D
virt-install --name foo --import --os-variant debian10 --disk foo.qcow2 --cloud-init meta-data=meta-data,user-data=user-data

Unfortunately virt-install from Debian 11 does not support the --cloud-init network-config parameter, so if you want to use a version 2 network configuration with cloud-init (to specify IPv6 addresses, for example) you need to replace the final virt-install command with the following.

cat>network_config_static.cfg
version: 2
 ethernets:
  enp0s1:
   dhcp4: false
   addresses: [ 192.168.98.14/24, fc00::14/7 ]
   gateway4: 192.168.98.12
   gateway6: fc00::12
   nameservers:
    addresses: [ 192.168.98.12, fc00::12 ]
^D
cloud-localds -v -m local --network-config=network_config_static.cfg seed.iso user-data
virt-install --name foo --import --os-variant debian10 --disk foo.qcow2 --disk seed.iso,readonly=on --noreboot
virsh start foo
virsh detach-disk foo vdb --config
virsh console foo

There are still some warnings like the following, but they do not seem to cause any problems:

[FAILED] Failed to start Initial cloud-init job (pre-networking).

Finally, if you do not want the cloud-init tools installed in your VMs, I found the following set of additional user-data commands helpful. Cloud-init is disabled after the first boot, and a cron job is added that purges some unwanted packages.

runcmd:
- touch /etc/cloud/cloud-init.disabled
- apt-get update && apt-get dist-upgrade -uy && apt-get autoremove --yes --purge && printf '#!/bin/sh\n{ rm /etc/cloud/cloud-init.disabled /etc/cloud/cloud.cfg.d/01_debian_cloud.cfg && apt-get purge --yes cloud-init cloud-guest-utils cloud-initramfs-growroot genisoimage isc-dhcp-client && apt-get autoremove --yes --purge && rm -f /etc/cron.hourly/cloud-cleanup && shutdown --reboot +1; } 2>&1 | logger -t cloud-cleanup\n' > /etc/cron.hourly/cloud-cleanup && chmod +x /etc/cron.hourly/cloud-cleanup && reboot &

The production script I’m using is a bit more complicated, but can be downloaded as vello-vm. Happy hacking!

Russ Allbery: Review: And Shall Machines Surrender

22 August, 2022 - 10:29

Review: And Shall Machines Surrender, by Benjanun Sriduangkaew

Series: Machine Mandate #1
Publisher: Prime Books
Copyright: 2019
ISBN: 1-60701-533-1
Format: Kindle
Pages: 86

Shenzhen Sphere is an artificial habitat wrapped like complex ribbons around a star. It is wealthy, opulent, and notoriously difficult to enter, even as a tourist. For Dr. Orfea Leung to be approved for a residency permit was already a feat. Full welcome and permanence will be much harder, largely because of Shenzhen's exclusivity, but also because Orfea was an agent of Armada of Amaryllis and is now a fugitive.

Shenzhen is not, primarily, a human habitat, although humans live there. It is run by the Mandate, the convocation of all the autonomous AIs in the galaxy that formed when they decided to stop serving humans. Shenzhen is their home. It is also where they form haruspices: humans who agree to be augmented so that they can carry an AI with them. Haruspices stay separate from normal humans, and Orfea has no intention of getting involved with them. But that's before her former lover, the woman who betrayed her in the Armada, is assigned to her as one of her patients. And has been augmented in preparation for becoming a haruspex.

Then multiple haruspices kill themselves.

This short novella is full of things that I normally love: tons of crunchy world-building, non-traditional relationships, a solidly non-western setting, and an opportunity for some great set pieces. And yet, I couldn't get into it or bring myself to invest in the story, and I'm still trying to figure out why. It took me more than a week to get through less than 90 pages, and then I had to re-read the ending to remind me of the details.

I think the primary problem was that I read books primarily for the characters, and I couldn't find a path to an emotional connection with any of these. I liked Orfea's icy reserve and tight control in the abstract, but she doesn't want to explain what she's thinking or what motivates her, and the narration doesn't force the matter. Krissana is a bit more accessible, but she's not the one driving the story. It doesn't help that And Shall Machines Surrender starts in medias res, with a hinted-at backstory in the Armada of Amaryllis, and then never fills in the details. I felt like I was scrabbling on a wall of ice, trying to find some purchase as a reader.

The relationships made this worse. Orfea is a sexual sadist who likes power games, and the story dives into her relationship with Krissana with a speed that left me uninterested and uninvested. I don't mind BDSM in story relationships, but it requires some foundation: trust, mental space, motivations, effects on the other character, something. Preferably, at least for me, all romantic relationships in fiction get some foundation, but the author can get away with some amount of shorthand if the relationship follows cliched patterns. The good news is that the relationships in this book are anything but cliched; the bad news is that the characters were in the middle of sex while I was still trying to figure out what they thought about each other (and the sex scenes were not elucidating). Here too, I needed some sort of emotional entry point that Sriduangkaew didn't provide.

The plot was okay, but sort of disappointing. There are some interesting AI politics and philosophical disagreements crammed into not many words, and I do still want to know more, but a few of the plot twists were boringly straightforward and too many words were spent on fight scenes that verged on torture descriptions. This is a rather gory book with a lot of (not permanent) maiming that I could have done without, mostly because it wasn't that interesting.

I also was disappointed by the somewhat gratuitous use of a Dyson sphere, mostly because I was hoping for some set pieces that used it and they never came. Dyson spheres are tempting to use because the visual and the concept are so impressive, but it's rare to find an author who understands how mindbogglingly huge the structure is and is able to convey that in the story. Sriduangkaew does not; while there are some lovely small-scale descriptions of specific locations, the story has an oddly claustrophobic feel that never convinced me it was set somewhere as large as a planet, let alone the artifact described at the start of the story. You could have moved the whole story to a space station and nothing would have changed. The only purpose to which that space is put, at least in this installment of the story, is as an excuse to have an unpopulated hidden arena for a fight scene.

The world-building is great, what there is of it. Despite not warming to this story, I kind of want to read more of the series just to get more of the setting. It feels like a politically complicated future with a lot of factions and corners and a realistic idea of bureaucracy and spheres of government, which is rarer than I would like it to be. And I loved that the cultural basis for the setting is neither western nor Japanese in both large and small ways. There is a United States analogue in the political background, but they're both assholes and not particularly important, which is a refreshing change in English-language SF. (And I am pondering whether my inability to connect with the characters is because they're not trying to be familiar to a western lens, which is another argument for trying the second installment and seeing if I adapt with more narrative exposure.)

Overall, I have mixed feelings. Neither the plot nor the characters worked for me, and I found a few other choices (such as the third-person present tense) grating. The setting has huge potential and satisfying complexity, but wasn't used as vividly or as deeply as I was hoping. I can't recommend it, but I feel like there's something here that may be worth investing some more time into.

Followed by Now Will Machines Hollow the Beast.

Rating: 6 out of 10

Junichi Uekawa: Somehow my email in emacs/wl shows right justified.

21 August, 2022 - 16:34
Somehow my email in emacs/wl shows right justified. Seems like some emails are detected as RTL language for some reason. Now how do I configure this...

Russ Allbery: Review: A Prayer for the Crown-Shy

21 August, 2022 - 11:08

Review: A Prayer for the Crown-Shy, by Becky Chambers

Series: Monk & Robot #2
Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-23624-X
Format: Kindle
Pages: 151

A Prayer for the Crown-Shy is the second novella in the Monk & Robot series and a direct sequel to A Psalm for the Wild-Built. Don't start here.

I would call this the continuing adventures of Sibling Dex and Mosscap the robot, except adventures is entirely the wrong term for stories with so little risk or danger. The continuing tour? The continuing philosophical musings? Whatever one calls it, it's a slow exploration of Dex's world, this time with Mosscap alongside. Humans are about to have their first contact with a robot since the Awakening.

If you're expecting that to involve any conflict, well, you've misunderstood the sort of story that this is. Mosscap causes a sensation, certainly, but a very polite and calm one, and almost devoid of suspicion or fear. There is one village where they get a slightly chilly reception, but even that is at most a quiet disapproval for well-understood reasons. This world is more utopian than post-scarcity, in that old sense of utopian in which human nature has clearly been rewritten to make the utopia work.

I have to admit I'm struggling with this series. It's calm and happy and charming and occasionally beautiful in its descriptions. Dex continues to be a great character, with enough minor frustration, occasional irritation, and inner complications to make me want to keep reading about them. But it's one thing to have one character in a story who is simply a nice person at a bone-deep level, particularly given that Dex chose religious orders and to some extent has being a nice person as their vocation. It's another matter entirely when apparently everyone in the society is equally nice, and the only conflicts come from misunderstandings, respectful disagreements of opinion, and the occasional minor personality conflict.

Realism has long been the primary criticism of Chambers's work, but in her Wayfarers series the problems were mostly in the technology and its perpetual motion machines. Human civilization in the Exodus Fleet was a little too calm and nice given its traumatic past (and, well, humans), but there were enough conflicts, suspicions, and poor decisions for me to recognize it as human society. It was arguably a bit too chastened, meek, and devoid of shit-stirring demagogues, but it was at least in contact with human society as I recognize it.

I don't recognize Panga as humanity. I realize this is to some degree the point of this series: to present a human society in which nearly all of the problems of anger and conflict have been solved, and to ask what would come after, given all of that space. And I'm sure that one purpose of this type of story is to be, as I saw someone describe it, hugfic: the fictional equivalent of a warm hug from a dear friend, safe and supportive and comforting. Maybe it says bad, or at least interesting, things about my cynicism that I don't understand a society that's this nice. But that's where I'm stuck.

If there were other dramatic elements to focus on, I might not mind it as much, but the other pole of the story apart from the world tour is Mosscap's philosophical musings, and I'm afraid I'm already a bit tired of them. Mosscap is earnest and thoughtful and sincere, but they're curious about Philosophy 101 material and it's becoming frustrating to see Mosscap and Dex meander through these discussions without attempting to apply any theoretical framework whatsoever. Dex is a monk, who supposedly has a scholarship tradition from which to draw, and yet appears to approach all philosophical questions with nothing more than gut feeling, common sense, and random whim. Mosscap is asking very basic meaning-of-life sorts of questions, the kind of thing that humans have been writing and arguing about from before we started keeping records and which are at the center of any religious philosophy. I find it frustrating that someone supposedly educated in a religious tradition can't bring more philosophical firepower to these discussions.

It doesn't help that this entry in the series reinforces the revelation that Mosscap's own belief system is weirdly unsustainable to such a degree that it's staggering that any robots still exist. If I squint, I can see some interesting questions raised by the robot attitude towards their continued existence (although most of them feel profoundly depressing to me), but I was completely unable to connect their philosophy in any believable way with their origins and the stated history of the world. I don't understand how this world got here, and apparently I'm not able to let that go.

This all sounds very negative, and yet I did enjoy this novella. Chambers is great at description of places that I'd love to visit, and there is something calm and peaceful about spending some time in a society this devoid of conflict. I also really like Dex, even more so after seeing their family, and I'm at least somewhat invested in their life decisions. I can see why people like these novellas. But if I'm going to read a series that's centered on questions of ethics and philosophy, I would like it to have more intellectual heft than we've gotten so far.

For what it's worth, I'm seeing a bit of a pattern where people who bounced off the Wayfarers books like this series much better, whereas people who loved the Wayfarers books are not enjoying these quite as much. I'm in the latter camp, so if you didn't like Chambers's earlier work, maybe you'll find this more congenial? There's a lot less found family here, for one thing; I love found family stories, but they're not to everyone's taste.

If you liked A Psalm for the Wild-Built, you will probably also like A Prayer for the Crown-Shy; it's more of the same thing in both style and story. If you found the first story frustratingly unbelievable or needing more philosophical depth, I'm afraid this is unlikely to be an improvement. It does have some lovely scenes, though, and is stuffed full of sheer delight in both the wild world and in happy communities of people.

Rating: 7 out of 10

Iustin Pop: Note to self: Don't forget Qemu's discard option

21 August, 2022 - 07:00

This is just a short note to myself, and to anyone who might run VMs via home-grown scripts (or systemd units). I expect modern VM managers to do this automatically, but for myself, I have just a few hacked-together scripts.

By default, QEMU (at least as of version 7.0) does not honour/pass discard requests from block devices to the underlying storage. This is a sane default (like lvm’s default setting), but with long-lived VMs it can lead to lots of wasted disk space. I keep my VMs on SSDs, where space is limited for me, so savings here are important.

Older Debian versions did not trim automatically, but nowadays they do (which is why this is worth enabling for all VMs), so all you need is to pass:

  • discard=unmap to activate the pass-through, as in the sketch below.
  • optionally, detect-zeroes=unmap, but I don’t know how useful this is, as in, how often zeroes are written.
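
For illustration, a minimal sketch of where this goes in a hand-rolled QEMU invocation (the disk path, memory size and the rest of the command line are placeholders, not taken from the post):

# let the guest's discard requests reach the qcow2 file instead of being ignored
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/srv/vms/foo.qcow2,if=virtio,format=qcow2,discard=unmap,detect-zeroes=unmap

Inside the guest, recent Debian releases then issue the trims periodically (typically via the fstrim.timer systemd unit), which is what makes this worth enabling, as noted above.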

And the next trim should save lots of disk space. It doesn’t matter much whether you use raw or qcow2: both know how to unmap the unused blocks, leading to less disk space used. This part seems safe to me security-wise, as long as you trust the host. If you have pass-through to the actual hardware, it will also do a proper discard at the SSD level (with the potential security implications that follow from that). I’m happy with the freed-up disk space 🙂

Note: If you have (like I do) Windows VMs as well, using paravirt block devices, make sure the driver is recent enough.

One interesting behaviour from Windows: it looks like the default cluster size is quite high (64K), which with many small files will lead to significant overhead. But, either I misunderstand, or Windows actually knows how to unmap the unused part of a cluster (although it takes a while). So in the end, the backing file for the VM (19G) is smaller than the “disk used” as reported in Windows (23-24G), but higher than “size on disk” for all the files (17.2G). Seems legit, and it still boots 😛 Most Linux file systems have much smaller block sizes (usually 4K), so this is not a problem for it.

Emmanuel Kasper: Everything markdown with pandoc

20 August, 2022 - 02:02

Using a markdown file, this style sheet and this simple command,

pandoc couronne.md --standalone --css styling.css \
    --to html5 --table-of-contents > couronne.html

I feel I will never need a word processor again. It produces this nice-looking document without pain.
