Planet Debian

Planet Debian - https://planet.debian.org/

Paul Wise: FLOSS Activities May 2022

1 June, 2022 - 06:46
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes
Issues
Review
  • Spam: reported 1 Debian bug report and 41 Debian mailing list posts
  • Patches: reviewed gt patches
  • Debian packages: sponsored psi-notify
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved cppcheck-gui eta flpsed fluxbox p7zip-full pampi pyqso xboard
    • rejected p7zip (help output), openshot (photo of a physical library), clamav-daemon (movie cartoon character), aptitude (screenshot of random launchpad project), laditools (screenshot of tracker.d.o for src:hello), weboob-qt/chromium-browser/supercollider-vim ((NSFW) selfies), node-split (screenshot of screenshots site), libc6 (Chinese characters alongside a photo of man and bottle)
Administration
  • Debian servers: investigate etckeeper cron mail
  • Debian wiki: investigate account existence, approve accounts
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC
Sponsors

The gensim and libpst work was sponsored. All other work was done on a volunteer basis.

Russell Coker: Links May 2022

31 May, 2022 - 19:26

dontkillmyapp.com is a web site about Android phone vendors who make their phones kill your apps when you don’t want them to [1]. One of the many reasons why Pine and Purism offer the promise of better phones.

This blog post about the Librem 5 camera is interesting [2]. Currently the Librem 5 camera isn’t very usable for me as I just want to point and shoot, but it apparently works well for experts. Taking RAW photos is a good feature that I’d like to have in all my camera phones.

The Russian government being apparently unaware of the Streisand Effect has threatened Wikipedia for publishing facts about the war against Ukraine [3]. We all should publicise this as much as possible. The Wikipedia page is The 2022 Russian Invasion of Ukraine [4].

The Jerusalem Post has an interesting article about whether Mein Kampf should be published and studied in schools [5]. I don’t agree with the conclusions about studying that book in schools, but I think that the analysis of the situation in that article is worth reading. One of the issues I have with teaching Mein Kampf and similar books is the quality of “social studies” teaching at the school I attended; I’m pretty sure that teaching Mein Kampf in any way at that school would just turn out more neo-Nazis. Maybe better schools (i.e. not Christian private schools) could have productive classes about Mein Kampf.

Vanity Fair has an interesting article about the history of the private jet [6].

Current Affairs has an unusually informative article about why blockchain currencies should die in a fire [7].

The Nazi use of methamphetamine is well known, but Time has an insightful article about lesser known aspects of meth use [8]. How they considered meth as separate from the drugs they claimed were for the morally inferior is interesting.

George Monbiot wrote an insightful article comparing the 2008 bank collapse to the current system of unstable food supplies [9].

JWZ wrote an insightful blog post about “Following the Money” regarding the push to reopen businesses even though the pandemic is far from over [10]. His conclusion is that commercial property owners are pushing the governments to give them more money.

PsyPost has an interesting article on the correlation between watching Fox News and lacking knowledge of science and of society [11].

David Brin wrote an interesting paper about Disputation and how that can benefit society [12]. I think he goes too far in some of his claims, but he has interesting points. The overall idea of a Disputation arena for ideas is a really good one. I previously had a similar idea on a much smaller scale of having debates via Wiki [13].


Russ Allbery: Review: Maskerade

31 May, 2022 - 09:47

Review: Maskerade, by Terry Pratchett

Series: Discworld #18
Publisher: Harper
Copyright: 1995
Printing: February 2014
ISBN: 0-06-227552-6
Format: Mass market
Pages: 360

Maskerade is the 18th book of the Discworld series, but you probably could start here. You'd miss the introduction of Granny Weatherwax and Nanny Ogg, which might be a bit confusing, but I suspect you could pick it up as you went if you wanted. This is a sequel of sorts to Lords and Ladies, but not in a very immediate sense.

Granny is getting distracted and less interested in day-to-day witching in Lancre. This is not good; Granny is incredibly powerful, and bored and distracted witches can go to dark places. Nanny is concerned. Granny needs something to do, and their coven needs a third. It's not been the same since they lost their maiden member.

Nanny's solution to this problem is two-pronged. First, they'd had their eye on a local girl named Agnes, who had magic but who wasn't interested in being a witch. Perhaps it was time to recruit her anyway, even though she'd left Lancre for Ankh-Morpork. And second, Granny needs something to light a fire under her, something that will get her outraged and ready to engage with the world. Something like a cookbook of aphrodisiac recipes attributed to the Witch of Lancre.

Agnes, meanwhile, is auditioning for the opera. She's a sensible person, cursed her whole life by having a wonderful personality, but a part of her deep inside wants to be called Perdita X. Dream and have a dramatic life. Having a wonderful personality can be very frustrating, but no one in Lancre took either that desire or her name seriously. Perhaps the opera is somewhere where she can find the life she's looking for, along with another opportunity to try on the Perdita name. One thing she can do is sing; that's where all of her magic went.

The Ankh-Morpork opera is indeed dramatic. It's also losing an astounding amount of money for its new owner, who foolishly thought owning an opera would be a good retirement project after running a cheese business. And it's haunted by a ghost, a very tangible ghost who has started killing people.

I think this is my favorite Discworld novel to date (although with a caveat about the ending that I'll get to in a moment). It's certainly the one that had me laughing out loud the most. Agnes (including her Perdita personality aspect) shot to the top of my list of favorite Discworld characters, in part because I found her sensible personality so utterly relatable. She is fascinated by drama, she wants to be in the middle of it and let her inner Perdita goth character revel in it, and yet she cannot help being practical and unflappable even when surrounded by people who use far too many exclamation points. It's one thing to want drama in the abstract; it's quite another to be heedlessly dramatic in the moment, when there's an obviously reasonable thing to do instead. Pratchett writes this wonderfully.

The other half of the story follows Granny and Nanny, who are unstoppable forces of nature and a wonderful team. They have the sort of long-standing, unshakable adult friendship between very unlike people that's full of banter and minor irritations layered on top of a deep mutual understanding and respect. Once they decide to start investigating this supposed opera ghost, they divvy up the investigative work with hardly a word exchanged. Planning isn't necessary; they both know each other's strengths.

We've gotten a lot of Granny's skills in previous books. Maskerade gives Nanny a chance to show off her skills, and it's a delight. She effortlessly becomes the sort of friendly grandmother who blends in so well that no one questions why she's there, and thus manages to be in the middle of every important event. Granny watches and thinks and theorizes; Nanny simply gets into the middle of everything and talks to everyone until people tell her what she wants to know. There's no real doubt that the two of them are going to get to the bottom of anything they want to get to the bottom of, but watching how they get there is a delight.

I love how Pratchett handles that sort of magical power from a world-building perspective. Ankh-Morpork is the Big City, the center of political power in most of the Discworld books, and Granny and Nanny are from the boondocks. By convention, that means they should either be awed or confused by the city, or gain power in the city by transforming it in some way to match their area of power. This isn't how Pratchett writes witches at all. Their magic is in understanding people, and the people in Ankh-Morpork are just as much people as the people in Lancre. The differences of the city may warrant an occasional grumpy aside, but the witches are fully as capable of navigating the city as they are their home town.

Maskerade is, of course, a parody of opera and musicals, with Phantom of the Opera playing the central role in much the same way that Macbeth did in Wyrd Sisters. Agnes ends up doing the singing for a beautiful, thin actress named Christine, who can't sing at all despite being an opera star, uses a truly astonishing excess of exclamation points, and strategically faints at the first sign of danger. (And, despite all of this, is still likable in that way that it's impossible to be really upset at a puppy.) She is the special chosen focus of the ghost, whose murderous taunting is a direct parody of the Phantom. That was a sufficiently obvious reference that even I picked up on it, despite being familiar with Phantom of the Opera only via the soundtrack.

Apart from that, though, the references were lost on me, since I'm neither a musical nor an opera fan. That didn't hurt my enjoyment of the book in the slightest; in fact, I suspect it's part of why it's in my top tier of Discworld books. One of my complaints about Discworld to date is that Pratchett often overdoes the parody to the extent that it gets in the way of his own (excellent) characters and story. Maybe it's better to read Discworld novels where one doesn't recognize the material being parodied and thus doesn't keep getting distracted by references.

It's probably worth mentioning that Agnes is a large woman and there are several jokes about her weight in Maskerade. I think they're the good sort of jokes, about how absurd human bodies can be, not the mean sort? Pratchett never implies her weight is any sort of moral failing or something she should change; quite the contrary, Nanny considers it a sign of solid Lancre genes. But there is some fat discrimination in the opera itself, since one of the things Pratchett is commenting on is the switch from full-bodied female opera singers to thin actresses matching an idealized beauty standard. Christine is the latter, but she can't sing, and the solution is for Agnes to sing for her from behind, something that was also done in real opera. I'm not a good judge of how well this plot line was handled; be aware, going in, if this may bother you.

What did bother me was the ending, and more generally the degree to which Granny and Nanny felt comfortable making decisions about Agnes's life without consulting her or appearing to care what she thought of their conclusions. Pratchett seemed to be on their side, emphasizing how well they know people. But Agnes left Lancre and avoided the witches for a reason, and that reason is not honored in much the same way that Lancre refused to honor her desire to go by Perdita. This doesn't seem to be malicious, and Agnes herself is a little uncertain about her choice of identity, but it still rubbed me the wrong way. I felt like Agnes got steamrolled by both the other characters and by Pratchett, and it's the one thing about this book that I didn't like. Hopefully future Discworld books about these characters revisit Agnes's agency.

Overall, though, this was great, and a huge improvement over Interesting Times. I'm excited for the next witches book.

Followed in publication order by Feet of Clay, and later by Carpe Jugulum in the thematic sense.

Rating: 8 out of 10

Daniel Lange: Work-around for randomly dropping WiFi connections on ChromeOS

31 May, 2022 - 02:25

The company got me a Chromebook for times when I want to ignore email and not get dragged into code reviews but still be available on IRC and chat. Nifty devices at great price points. I really like them.

These things are meant to be very consumer-style end-user devices. You log in with your Google account and everything works. Until it doesn't.

Just setting it up caused the first issue:

I was always thrown back to a black screen and then another login-screen despite having successfully logged in initially to create the "owner" user of the Chromebook. No error message, no useful UI feedback. Just logging in again and again and again.

The issue is ... not having a GMail account associated with my Google account. Duh! So add a GMail.com address as the primary to your Google account and the initial setup completes. Of course you cannot delete that GMail.com association again because the owner user is linked to the email and not the account. Well, you can delete it but then you cannot configure "owner" items of your Chromebook any more. Great job, Google. Not. Identity management 101 fail.

Kudos to Anurag Chawake for blogging about the issue. The Google support forum thread claims this is solved now. But it didn't work for me, so this may still need to trickle down through ChromeOS releases or be deployed on more Google infra. Or whatever. We can't tell from outside the Googleplex as - of course - "Rebecca" sheds no light on what the identified "root cause" was.

Once I was able to log in to the new Chromebook, all worked fine until I started to use ssh sessions. These would hang for 30 seconds to 10 minutes and then resume, with lots of packets lost in between and the last minute or so arriving in a burst from buffering.

This was easy to see in ping as well. The connection essentially dropped dead while the WiFi icon continued to show full signal strength. The logs did not show anything useful. These are really hard to access on ChromeOS (JSON format and no useful UI on the Chromebook itself; Google provides a viewer on Google Apps Toolbox, but that requires uploading the logs). Better than no logs at all but not really nice.

The ChromeOS bug tracker and its Google corporate counterpart are also not useful at this time.

For reference:
Google ... Device users randomly disconnect from Wi-Fi network
Chromium ... Device users randomly disconnect from Wi-Fi network
Google ... Constant connect and disconnect from WiFi source post-update
Chromium ... Constant connect and disconnect from WiFi source post-update

Playing around with the device on the network showed that it reduced its transmit power to the point where it could no longer reach the access point. This is why disconnecting and re-connecting the WiFi typically fixes the issue for a few minutes.

Still, there is a better way:

In crosh (the ChromeOS shell available when pressing Alt+Ctrl+T) type:

wifi_power_save disable

This unfortunately only lasts until the next reboot but it does persist over suspend cycles (aka closing the lid).
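
To check whether the work-around is holding, one simple (and decidedly low-tech) approach is to leave a continuous ping running against the Chromebook from another machine on the network and watch for gaps in the sequence numbers; the address below is just a placeholder for the Chromebook's IP:

# From any other machine on the same network; long pauses or gaps in icmp_seq
# mean the connection is still dropping despite the full-strength WiFi icon.
ping 192.168.1.50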

Bits from Debian: Debian welcomes its new Outreachy interns

30 May, 2022 - 17:00

Debian continues participating in Outreachy, and we're excited to announce that Debian has selected two interns for the Outreachy May 2022 - August 2022 round.

Israel Galadima and Michael Ikwuegbu will work on Improve yarn package manager integration with Debian, mentored by Akshay S Dinesh and Pirate Praveen.

Congratulations and welcome to Israel Galadima and Michael Ikwuegbu!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentoring students and to outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and mailing list.

Gunnar Wolf: On to the next journey

30 May, 2022 - 10:46

Last Wednesday my father, Kurt Bernardo Wolf Bogner, took the steps towards his next journey, the last that would start in this life. I cannot put words to this… so just sharing this with the world will have to suffice. Goodbye to my teacher, my friend, the person I have always looked up to.

Some of his friends were able to put in words more than what I can come up with. If you can read Spanish, you can read the eulogy from the Science Academy of Morelos.

His last project, enjoyable by anybody who reads Spanish, is the book with his account of his youth travels — Asia, Africa and Europe, between 1966 and 1970. You can read it online. And I have many printed copies, in case you want one as well.

We will always remember you with love.

Russ Allbery: Review: Steles of the Sky

30 May, 2022 - 10:06

Review: Steles of the Sky, by Elizabeth Bear

Series: Eternal Sky #3
Publisher: Tor
Copyright: April 2014
ISBN: 0-7653-2756-2
Format: Hardcover
Pages: 429

Steles of the Sky is the third book of the Eternal Sky trilogy and a direct sequel to Shattered Pillars. You should not start here, and ideally you should read all three books close together. They form a single story, and Elizabeth Bear is somewhat notorious for not adding extra explanation to her novels.

By the end of Shattered Pillars, Bear was (mostly) finished adding new factions to this story. Temur is returning home to fight for his people and his clan. His allies are mostly in place, as are his enemies. The hissable villain has remained hissable and boring, but several of his allies are more ambiguous and therefore more interesting (and get considerably more page time). All that remains is to see how Bear will end the story, and what last-minute twists will be thrown in.

Well, that and getting the characters into the right positions, which occupies roughly the first half of the book and dragged a bit. There is an important and long-awaited reunion, Brother Hsiung gets his moment of focus, and the dowager empress gets some valuable character development, all of which did add to the story. But there's also a lot of plodding across the countryside. I also have no idea why the extended detour to Kyiv, begun in Shattered Pillars and completed here, is even in this story. It tells us a few new scraps about Erem and its implications, but nothing vital. I felt like everything that happened there could have been done elsewhere or skipped entirely without much loss.

The rest of the book is build-up to the epic conclusion, which is, somewhat unsurprisingly, a giant battle. It was okay, as giant battles go, but it also felt a bit like a fireworks display. Bear makes sure all the guns on the mantle go off by the end of the series, but a lot of them go off at the same time. It robs the plot construction of some of its power.

There's nothing objectionable about this book. It's well-written, does what it sets out to do, brings the story to a relatively satisfying conclusion, provides some memorable set pieces, and is full of women making significant decisions that shape the plot. And yet, when I finished it, my reaction was "huh, okay" and then "oh, good, I can start another book now." Shattered Pillars won me over during the book. Steles of the Sky largely did not.

I think my biggest complaint is one I've had about Bear's world-building before. She hints at some fascinating ideas: curious dragons, skies that vary with the political power currently in control, evil ancient magic, humanoid tigers with their own beliefs and magical system independent from humans, and a sky with a sun so hot that it would burn everything. Over the course of the series, she intrigued me with these ideas and left me eagerly awaiting an explanation. That explanation never comes. The history is never filled in, the tiger society is still only hints, Erem remains a vast mystery, the dragons appear only fleetingly to hint at connections with Erem... and then the book ends.

I'm not sure whether Bear did explain some details and I wasn't paying close enough attention, or if she never intended detailed explanations. (Both are possible! Bear's books are often subtle.) But I wanted so much more. For me, half the fun of SFF world-building is the explanation. I love the hints and the mystery and the sense of lost knowledge and hidden depths... but then I want the characters to find the knowledge and plumb the depths, not just solve their immediate conflict.

This is as good of a book as the first two books of the series on its own merits, but I enjoyed it less because I was hoping for more revelations before the story ended. The characters are all fine, but only a few of them stood out. Hrahima stole every scene she was in, and I would happily read a whole trilogy about her tiger people. Edene came into her own and had some great moments, but they didn't come with the revelations about Erem that I was hungry for. The rest of the large cast is varied and well-written and features a refreshing number of older women, and it wouldn't surprise me to hear that other readers had favorite characters who carried the series for them. But for me the characters weren't compelling enough to overcome my disappointment in the lack of world-building revelations.

The series sadly didn't deliver the payoff that I was looking for, and I can't recommend it given the wealth of excellent fantasy being written today. But if you like Bear's understated writing style and don't need as much world-building payoff as I do, it may still be worth considering.

Rating: 6 out of 10

John Goerzen: Fast, Ordered Unixy Queues over NNCP and Syncthing with Filespooler

30 May, 2022 - 05:39

It seems that lately I’ve written several shell implementations of a simple queue that enforces ordered execution of jobs that may arrive out of order. After writing this for the nth time in bash, I decided it was time to do it properly. But first, a word on the why of it all.

Why did I bother?

My needs arose primarily from handling Backups over Asynchronous Communication methods – in this case, NNCP. When backups contain incrementals that are unpacked on the destination, they must be applied in the correct order.

In some cases, like ZFS, the receiving side will detect an out-of-order backup file and exit with an error. In those cases, processing in random order is acceptable but can be slow if, say, hundreds or thousands of hourly backups have stacked up over a period of time. The same goes for using gitsync-nncp to synchronize git repositories. In both cases, a best effort based on creation date is sufficient to produce a significant performance improvement.
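
As a sketch of why the ordering matters (dataset and snapshot names below are placeholders): an incremental ZFS stream can only be received once its base snapshot is already present on the destination, so an out-of-order arrival fails and has to be retried rather than silently corrupting anything.

# Each incremental stream needs its base snapshot to exist on the destination.
zfs receive tank/backup < incr-snap1-to-snap2.zfs   # requires tank/backup@snap1
zfs receive tank/backup < incr-snap2-to-snap3.zfs   # only succeeds after the one above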

With other cases, such as tar or dar backups, the receiving side cannot detect out-of-order incrementals. In those situations, the incrementals absolutely must be applied with strict ordering. Many other situations with similar needs arise as well. Filespooler is the answer to them.

Existing Work

Before writing my own program, I of course looked at what was out there already. I looked at celery, gearman, nq, rq, cctools work queue, ts/tsp (task spooler), filequeue, dramatiq, GNU parallel, and so forth.

Unfortunately, none of these met my needs at all. They all tended to have properties like:

  • An extremely complicated client/server system that was incompatible with piping data over existing asynchronous tools
  • A large bias to processing of small web requests, resulting in terrible inefficiency or outright incompatibility with jobs in the TB range
  • An inability to enforce strict ordering of jobs, especially if they arrive in a different order from how they were queued

Many also lacked some nice-to-haves that I implemented for Filespooler:

  • Support for the encryption and cryptographic authentication of jobs, including metadata
  • First-class support for arbitrary compressors
  • Ability to use both stream transports (pipes) and filesystem-like transports (eg, rclone mount, S3, Syncthing, or Dropbox)
Introducing Filespooler

Filespooler is a tool in the Unix tradition: that is, do one thing well, and integrate nicely with other tools using the fundamental Unix building blocks of files and pipes. Filespooler itself doesn’t provide transport for jobs, but instead is designed to cooperate extremely easily with transports that can be written to as a filesystem or piped to – which is to say, almost anything of interest.
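
As a quick sketch of that pipe-friendliness (hostnames and paths are placeholders, and this particular invocation isn’t taken from the Filespooler docs), a job can be shipped over plain ssh using nothing but the commands shown in the tutorial below; the receiver then runs fspl queue-process as usual:

echo "some job data" \
  | fspl prepare -s ~/myqueue-seq -i - \
  | ssh backupbox 'fspl queue-write -q ~/queue'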

Filespooler is written in Rust and has an extensive Filespooler Reference as well as many tutorials on its homepage.

Basics of How it Works

Filespooler is intentionally simple:

  • The sender maintains a sequence file that includes a number for the next job packet to be created.
  • The receiver also maintains a sequence file that includes a number for the next job to be processed.
  • fspl prepare creates a Filespooler job packet and emits it to stdout. It includes a small header (<100 bytes in most cases) containing the sequence number, creation timestamp, and some other useful metadata.
  • You get to transport this job packet to the receiver in any of many simple ways, which may or may not involve Filespooler’s assistance.
  • On the receiver, Filespooler (when running in the default strict ordering mode) will simply look at the sequence file and process jobs in incremental order until it runs out of jobs to process.

The name of job files on-disk matches a pattern for identification, but the filename itself is otherwise not significant; the header inside the packet is what matters.

You can send job data in three ways:

  1. By piping it to fspl prepare
  2. By setting certain environment variables when calling fspl prepare
  3. By passing additional command-line arguments to fspl prepare, which can optionally be passed to the processing command at the receiver.

Data piped in is added to the job “payload”, while environment variables and command-line parameters are encoded in the header.

Basic usage

Here I will excerpt part of the Using Filespooler over Syncthing tutorial; consult it for further detail. As a bit of background, Syncthing is a FLOSS decentralized directory synchronization tool akin to Dropbox (but with a much richer feature set in many ways).

Preparation

First, on the receiver, you create the queue (passing the directory name to -q):

receiver$ fspl queue-init -q ~/sync/b64queue

Now, we can send a job like this:

sender$ echo Hi | fspl prepare -s ~/b64seq -i - | fspl queue-write -q ~/sync/b64queue

Let’s break that down:

  • First, we pipe “Hi” to fspl prepare.
  • fspl prepare takes two parameters:
    • -s seqfile gives the path to a sequence file used on the sender side. This file has a simple number in it that increments a unique counter for every generated job file. It is matched with the nextseq file within the queue to make sure that the receiver processes jobs in the correct order. It MUST be separate from the file that is in the queue and should NOT be placed within the queue. There is no need to sync this file, and it would be ideal to not sync it.
    • The -i option tells fspl prepare to read a file for the packet payload. -i - tells it to read stdin for this purpose. So, the payload will consist of three bytes: “Hi\n” (that is, including the terminating newline that echo wrote)
  • Now, fspl prepare writes the packet to its stdout. We pipe that into fspl queue-write:
    • fspl queue-write reads stdin and writes it to a file in the queue directory in a safe manner. The file will ultimately match the fspl-*.fspl pattern and have a random string in the middle.

At this point, wait a few seconds (or however long it takes) for the queue files to be synced over to the recipient.

On the receiver, we can see if any jobs have arrived yet:

receiver$ fspl queue-ls -q ~/sync/b64queue
ID                   creation timestamp          filename
1                    2022-05-16T20:29:32-05:00   fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl

Let’s say we’d like some information about the job. Try this:

receiver$ fspl queue-info -q ~/sync/b64queue -j 1
FSPL_SEQ=1
FSPL_CTIME_SECS=1652940172
FSPL_CTIME_NANOS=94106744
FSPL_CTIME_RFC3339_UTC=2022-05-17T01:29:32Z
FSPL_CTIME_RFC3339_LOCAL=2022-05-16T20:29:32-05:00
FSPL_JOB_FILENAME=fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl
FSPL_JOB_QUEUEDIR=/home/jgoerzen/sync/b64queue
FSPL_JOB_FULLPATH=/home/jgoerzen/sync/b64queue/jobs/fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl

This information is intentionally emitted in a format convenient for parsing.

Now let’s run the job!

receiver$ fspl queue-process -q ~/sync/b64queue --allow-job-params base64
SGkK

There are two new parameters here:

  • --allow-job-params says that the sender is trusted to supply additional parameters for the command we will be running.
  • base64 is the name of the command that we will run for every job. It will:
    • Have environment variables set as we just saw in queue-info
    • Have the text we previously prepared – “Hi\n” – piped to it

By default, fspl queue-process doesn’t do anything special with the output; see Handling Filespooler Command Output for details on other options. So, the base64-encoded version of our string is “SGkK”. We successfully sent a packet using Syncthing as a transport mechanism!

At this point, if you do a fspl queue-ls again, you’ll see the queue is empty. By default, fspl queue-process deletes jobs that have been successfully processed.

For more

See the Filespooler homepage.

This blog post is also available as a permanent, periodically-updated page.

Russ Allbery: Review: Kleptopia

29 May, 2022 - 09:11

Review: Kleptopia, by Tom Burgis

Publisher: Harper
Copyright: 2020
Printing: September 2021
ISBN: 0-06-323613-3
Format: Kindle
Pages: 340

Kleptopia is a nonfiction chronicle of international financial corruption and money laundering told via a focus on the former Soviet republic of Kazakhstan. The primary characters are a British banker named Nigel Wilkins (at the start of the story, the head of compliance in the London office of the Swiss bank BSI), a group of businessmen from Kyrgyzstan and Uzbekistan called the Trio, and a Kazakh oligarch and later political dissident named Mukhtar Ablyazov, although the story spreads beyond them. It is partly a detailed example of what money laundering looks like in practice: where the money comes from, who has it, what they want to do with it, who they hire to move it, and what countries welcome it. But it is more broadly about the use of politics for financial plunder, and what stories we tell about the results.

The title of this book feels a bit like clickbait, and I was worried it would be mostly opinion and policy discussion. It is the exact opposite. Tom Burgis is an investigations correspondent at the Financial Times, and this is a detailed piece of investigative journalism (backed by a hundred pages of notes and detailed attributions). Burgis has a clear viewpoint and expresses it when it seems relevant, but the vast majority of this book is the sort of specific account of people, dates, times, and events that one would expect from a full-length investigative magazine piece, except maintained at book length.

Even aside from the subject matter, I love that writing like this exists. It's often best in magazines, newspapers, and on-line articles for timeliness, since book publishing is slow, but some subjects require the length of a book. Burgis is the sort of reporter who travels the world to meet with sources in person, and who, before publication, presents his conclusions to everyone mentioned for their rebuttals (many of which are summarized or quoted in detail in the notes so that the reader can draw their own credibility conclusions). Whether or not one agrees with his specific conclusions or emphasis, we need more of this sort of journalism in the world.

I knew essentially nothing about Kazakhstan or its politics before reading this book. I also had not appreciated the degree to which exploitation of natural resources is the original source of the money for international money laundering. In both Kazakhstan and Africa (a secondary setting for this book), people get rich largely because of things dug or pumped out of the ground. That money, predictably, does not go to the people doing the hard work of digging and pumping, who work in appalling conditions and are put down with violence if they try to force change. (In one chapter, Burgis tells the harrowing and nightmarish story of Roza Tuletayeva, a striker at the oil field in Zhanaozen.) It's gathered up by the already rich and politically connected, who gained control of former state facilities during the post-Soviet collapse and then maintained their power with violence and corruption. And once they have money, they try to convert it into holdings in European banks, London real estate, and western corporations.

The primary thing I will remember from this book is the degree to which oligarchs rely on being able to move between two systems. They make their money in unstable or authoritarian countries with high degrees of political corruption and violence, and are adept at navigating that world (although sometimes it turns on them; more on that in a moment). But they don't want to leave their money in that world. Someone else could betray them, undermine them, or gain the ear of the local dictator. They rely instead on western countries with strong property rights, stable financial institutions, and vast legal systems devoted to protecting the wealth of people who are already rich. In essence, they play both rule sets against each other: make money through techniques that would be illegal in London, and then move their money to London so that the British government and legal system will protect it against others who are trying to do the same.

What they get out of this is obvious. What London gets out of it is another theme of this book: it's a way for a lot of people to share the wealth without doing any of the crimes. Money laundering is a very lossy activity. Lots of money falls out of the cart along the way, where it is happily scooped up by bankers, lawyers, accountants, public relations consultants, and the rest of the vast machinery of theoretically legitimate business. Everyone wins, except the people the money is being stolen from in the first place. (And the planet; Burgis doesn't talk much about the environment, but I found the image of one-way extraction of irreplaceable natural resources haunting and disturbing.)

Donald Trump does make an appearance here, although not as significant of one as I was expecting given the prominent appearance of Trump crony Felix Sater. Trump is part of the machinery that allows oligarch money to become legally-protected business investment, but it's also clear from Burgis's telling that he is (at least among the money flows Burgis is focused on) a second-tier player with delusions of grandeur. He's more notable for his political acumen and ability to craft media stories than his money-laundering skills.

The story-telling is a third theme of this book. The electorate of the societies into which oligarchs try to move their money aren't fond of crime, mob activity, political corruption, or authoritarian exploitation of workers. If public opinion turns sufficiently strongly against specific oligarchs, the normally friendly mechanisms of law and business regulation may have to take action. The primary defense of laundered money is hiding its origins sufficiently that it's too hard to investigate or explain to a jury, but there are limits to how completely the money can hide given that oligarchs still want to spend it. They need to tell a story about how they are modernizing businessmen, aghast at the lingering poor conditions of their home countries but working earnestly to slowly improve them. And they need to defend those stories against people who might try to draw a clearer link between them and criminal activity.

The competing stories between dissident oligarch Mukhtar Ablyazov and the Kazakh government led by Nursultan Nazarbayev are a centerpiece of this book. Ablyazov is in some sense a hero of the story, someone who attempted to expose the corruption of Nazarbayev's government and was jailed by Nazarbayev for corruption in retaliation. But Burgis makes clear the tricky fact that Ablyazov likely is guilty of at least some (and possibly a lot) of the money laundering that Nazarbayev accused him of. It's a great illustration of the perils of simple good or bad labels when trying to understand this world from the outside. All of these men are playing similar games, and all of them are trying to tell stories where they are heroes and their opponents are corrupt thieves. And yet, there is not a moral equivalency. Ablyazov is not a good person, but what Nazarbayev attempted to do to his family is appalling, far worse than anything Ablyazov did, and the stories about Ablyazov that have prevailed to date in British courts are not an attempt to reach the truth.

The major complaint that I have about Kleptopia is that this is a highly complex story with a large number of characters, but Burgis doesn't tell it in chronological order. He jumps forward and backward in time to introduce new angles of the story, and I'm not sure that was the right structural choice. Maybe a more linear story would have been even more confusing, but I got lost in the chronology at several points. Be prepared to put in some work as a reader in keeping the timeline straight.

I also have to warn that this is not in any way a hopeful book. Burgis introduces Nigel Wilkins and his quixotic quest to expose the money laundering activities of European banks early in the book. In a novel that would set up a happy ending in which Wilkins brings down the edifice, or at least substantial portions of it. We do not live in a novel, sadly. Kleptopia is not purely depressing, but by and large the bad guys win. This is not a history of corruption that we've exposed and overcome. It's a chronicle of a problem that is getting worse, and that the US and British governments are failing to address.

Burgis's publishers successfully defended a libel suit in British courts brought by the Eurasian Natural Resources Corporation, the primary corporate entity of the Trio who are the primary villains of this book. I obviously haven't done my own independent research, but after reading this book, including the thorough notes, this looks to me like the type of libel suit that should serve as an advertisement for its target.

I can't say I enjoyed reading this, particularly at the end when the arc becomes clear. Understanding how little governments and institutions care about stopping money laundering is not a pleasant experience. But it's an important book, well and vividly told (except for the chronology), and grounded in solid investigative journalism. Recommended; I'm glad I read it.

Rating: 7 out of 10

Steinar H. Gunderson: Speeding up Samba AD

29 May, 2022 - 01:30

One Weird Trick(TM) for speeding up a slow Samba Active Directory domain controller is seemingly to leave and rejoin the domain. (If you don't have another domain controller, you'll need to join one in temporarily.) Seemingly, not only can you switch to LMDB (which has two fsyncs instead of eight on commit—which matters a lot, especially on non-SSDs, as the Kerberos authentication path has a blocking write to update account statistics), but you also get to regenerate the database, giving you the advantage of any new indexes since last upgrade.
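
A rough sketch of what that leave-and-rejoin dance might look like with samba-tool, assuming a second DC keeps the domain alive in the meantime and that your Samba version supports the LMDB backend; realm and user names are placeholders, and for a production DC you should follow the Samba wiki rather than this outline:

# On the DC to be rebuilt: demote it so the other DC keeps the domain
# (transfer any FSMO roles it holds beforehand).
samba-tool domain demote -Uadministrator

# Clean out the old local AD database and state as documented in the Samba wiki,
# then re-join as a DC, selecting the LMDB backend:
samba-tool domain join samdom.example.com DC -Uadministrator --backend-store=mdb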

Oh, and SSDs probably help a bunch, too.

kpcyrd: auth-tarball-from-git: Verifying tarballs with signed git tags

28 May, 2022 - 07:00

I noticed there’s a common anti-pattern in some PKGBUILDs, the short scripts that are used to build Arch Linux packages. Specifically we’re looking at the part that references the source code used when building a package:

source=("git+https://github.com/alacritty/alacritty.git#tag=v${pkgver}?signed")
validpgpkeys=('4DAA67A9EA8B91FCC15B699C85CDAE3C164BA7B4'
              'A56EF308A9F1256C25ACA3807EA8F8B94622A6A9')
sha256sums=('SKIP')

This does:

  • authentication: verify the git tag was signed by one of the two trusted keys.
  • pinning: the source code is not pinned, and git tags are not immutable; upstream could create a new signed git tag with an identical name and arbitrarily change the source code without the PKGBUILD noticing.

In contrast consider this PKGBUILD:

source=($pkgname-$pkgver.tar.gz::https://github.com/alacritty/alacritty/archive/refs/tags/v$pkgver.tar.gz)
sha256sums=('e48d4b10762c2707bb17fd8f89bd98f0dcccc450d223cade706fdd9cfaefb308')
  • authentication: there’s no signature, so this PKGBUILD doesn’t cryptographically verify authorship.
  • pinning: the source code is cryptographically pinned; it’s addressed by its content. There’s nothing upstream can do to silently change the code used by this PKGBUILD.

Personally - if I had to decide between these two - I’d prefer the latter because I can always try to authenticate the pinned tarball later on, but it’s impossible to know for sure which source code has been used if all I know is “something that had a valid signature on it”. This set could be infinitely large for all we know!

But is there a way to get both? Consider this PKGBUILD:

makedepends=('auth-tarball-from-git')
source=($pkgname-$pkgver.tar.gz::https://github.com/alacritty/alacritty/archive/refs/tags/v$pkgver.tar.gz
        chrisduerr.pgp
        kchibisov.pgp)
sha256sums=('e48d4b10762c2707bb17fd8f89bd98f0dcccc450d223cade706fdd9cfaefb308'
            '19573dc0ba7a2f003377dc49986867f749235ecb45fe15eb923a74b2ab421d74'
            '5b866e6cb791c58cba2e7fc60f647588699b08abc2ad6b18ba82470f0fd3db3b')

prepare() {
  cd "$pkgname-$pkgver"

  auth-tarball-from-git --keyring ../chrisduerr.pgp --keyring ../kchibisov.pgp \
    --tag v$pkgver --prefix $pkgname-$pkgver \
    https://github.com/alacritty/alacritty.git ../$pkgname-$pkgver.tar.gz
}
  • authentication: auth-tarball-from-git verifies the signed git tag and then verifies the tarball has been built from the commit the tag is pointing to.
  • pinning: the source code and the files containing the public keys are cryptographically pinned.

In this case sha256sums= is the primary line of defense against tampering with build inputs and the git tag is “only” used to document authorship.
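
As an illustration (not taken from the project docs), the same check can also be run by hand outside a PKGBUILD; the version below is a placeholder and the .pgp keyring files are assumed to be present locally, as in the example above:

ver=0.10.1   # placeholder version
curl -LO "https://github.com/alacritty/alacritty/archive/refs/tags/v$ver.tar.gz"
auth-tarball-from-git --keyring chrisduerr.pgp --keyring kchibisov.pgp \
  --tag "v$ver" --prefix "alacritty-$ver" \
  https://github.com/alacritty/alacritty.git "v$ver.tar.gz"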

For more infos on how this works you can have a look at the auth-tarball-from-git repo, there’s also a section about attacks on signed git tags that you should probably know about.

Thanks

This work is currently crowd-funded on github sponsors. I’d like to thank @SantiagoTorres, @repi and @rgacogne for their support in particular. ♥️

Reproducible Builds (diffoscope): diffoscope 214 released

27 May, 2022 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 214. This version includes the following changes:

[ Chris Lamb ]
* Support both python-argcomplete 1.x and 2.x.

[ Vagrant Cascadian ]
* Add external tool on GNU Guix for xb-tool.

You can find out more by visiting the project homepage.

Sergio Talens-Oliag: New Blog Config

27 May, 2022 - 05:00

As promised, in this post I’m going to explain how I’ve configured this blog using hugo, asciidoctor and the papermod theme, how I publish it using nginx, how I’ve integrated the remark42 comment system and how I’ve automated its publication using gitea and json2file-go.

It is a long post, but I hope that at least parts of it can be interesting for some; feel free to ignore it if that is not your case … 😉

Hugo Configuration

Theme settings

The site is using the PaperMod theme and as I’m using asciidoctor to publish my content I’ve adjusted the settings to improve how things are shown with it.

The current config.yml file is the one shown below (probably some of the settings are not required or being used right now, but I’m including the current file, so this post will always have the latest version of it):

config.yml
baseURL: https://blogops.mixinet.net/
title: Mixinet BlogOps
paginate: 5
theme: PaperMod
destination: public/
enableInlineShortcodes: true
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
enableEmoji: true
pygmentsUseClasses: true
minify:
  disableXML: true
  minifyOutput: true
languages:
  en:
    languageName: "English"
    description: "Mixinet BlogOps - https://blogops.mixinet.net/"
    author: "Sergio Talens-Oliag"
    weight: 1
    title: Mixinet BlogOps
    homeInfoParams:
      Title: "Sergio Talens-Oliag Technical Blog"
      Content: >
        ![Mixinet BlogOps](/images/mixinet-blogops.png)
    taxonomies:
      category: categories
      tag: tags
      series: series
    menu:
      main:
        - name: Archive
          url: archives
          weight: 5
        - name: Categories
          url: categories/
          weight: 10
        - name: Tags
          url: tags/
          weight: 10
        - name: Search
          url: search/
          weight: 15
outputs:
  home:
    - HTML
    - RSS
    - JSON
params:
  env: production
  defaultTheme: light
  disableThemeToggle: false
  ShowShareButtons: true
  ShowReadingTime: true
  disableSpecial1stPost: true
  disableHLJS: true
  displayFullLangName: true
  ShowPostNavLinks: true
  ShowBreadCrumbs: true
  ShowCodeCopyButtons: true
  ShowRssButtonInSectionTermList: true
  ShowFullTextinRSS: true
  ShowToc: true
  TocOpen: false
  comments: true
  remark42SiteID: "blogops"
  remark42Url: "/remark42"
  profileMode:
    enabled: false
    title: Sergio Talens-Oliag Technical Blog
    imageUrl: "/images/mixinet-blogops.png"
    imageTitle: Mixinet BlogOps
    buttons:
      - name: Archives
        url: archives
      - name: Categories
        url: categories
      - name: Tags
        url: tags
  socialIcons:
    - name: CV
      url: "https://www.uv.es/~sto/cv/"
    - name: Debian
      url: "https://people.debian.org/~sto/"
    - name: GitHub
      url: "https://github.com/sto/"
    - name: GitLab
      url: "https://gitlab.com/stalens/"
    - name: Linkedin
      url: "https://www.linkedin.com/in/sergio-talens-oliag/"
    - name: RSS
      url: "index.xml"
  assets:
    disableHLJS: true
    favicon: "/favicon.ico"
    favicon16x16:  "/favicon-16x16.png"
    favicon32x32:  "/favicon-32x32.png"
    apple_touch_icon:  "/apple-touch-icon.png"
    safari_pinned_tab:  "/safari-pinned-tab.svg"
  fuseOpts:
    isCaseSensitive: false
    shouldSort: true
    location: 0
    distance: 1000
    threshold: 0.4
    minMatchCharLength: 0
    keys: ["title", "permalink", "summary", "content"]
markup:
  asciidocExt:
    attributes: {}
    backend: html5s
    extensions: ['asciidoctor-html5s','asciidoctor-diagram']
    failureLevel: fatal
    noHeaderOrFooter: true
    preserveTOC: false
    safeMode: unsafe
    sectionNumbers: false
    trace: false
    verbose: false
    workingFolderCurrent: true
privacy:
  vimeo:
    disabled: false
    simple: true
  twitter:
    disabled: false
    enableDNT: true
    simple: true
  instagram:
    disabled: false
    simple: true
  youtube:
    disabled: false
    privacyEnhanced: true
services:
  instagram:
    disableInlineCSS: true
  twitter:
    disableInlineCSS: true
security:
  exec:
    allow:
      - '^asciidoctor$'
      - '^dart-sass-embedded$'
      - '^go$'
      - '^npx$'
      - '^postcss$'

Some notes about the settings:

  • disableHLJS and assets.disableHLJS are set to true; we plan to use rouge on adoc and the inclusion of the hljs assets adds styles that collide with the ones used by rouge.
  • ShowToc is set to true and the TocOpen setting is set to false to make the ToC appear collapsed initially. My plan was to use the asciidoctor ToC, but after trying I believe that the theme one looks nice and I don’t need to adjust styles, although it has some issues with the html5s processor (the admonition titles use <h6> and they are shown on the ToC, which is weird); to fix it I’ve copied the layouts/partials/toc.html to my site repository and changed the range of headings to end at 5 instead of 6 (in fact 5 still seems like a lot, but as I don’t think I’ll use that heading level on the posts it doesn’t really matter).
  • params.profileMode values are adjusted, but for now I’ve left it disabled by setting params.profileMode.enabled to false, and I’ve set the homeInfoParams to show more or less the same content with the latest posts under it (I’ve added some styles to my custom.css style sheet to center the text and image of the first post to match the look and feel of the profile).
  • On the asciidocExt section I’ve adjusted the backend to use html5s, I’ve added the asciidoctor-html5s and asciidoctor-diagram extensions to asciidoctor and adjusted the workingFolderCurrent to true to make asciidoctor-diagram work right (haven’t tested it yet).
Theme customisations

To write in asciidoctor using the html5s processor I’ve added some files to the assets/css/extended directory:

  1. As said before, I’ve added the file assets/css/extended/custom.css to make the homeInfoParams look like the profile page, and I’ve also changed some theme styles a little bit to make things look better with the html5s output:

    custom.css
    /* Fix first entry alignment to make it look like the profile */
    .first-entry { text-align: center; }
    .first-entry img { display: inline; }
    /**
     * Remove margin for .post-content code and reduce padding to make it look
     * better with the asciidoctor html5s output.
     **/
    .post-content code { margin: auto 0; padding: 4px; }
  2. I’ve also added the file assets/css/extended/adoc.css with some styles taken from the asciidoctor-default.css, see this blog post about the original file; mine is the same after formatting it with css-beautify and editing it to use variables for the colors to support light and dark themes:

    adoc.css
    /* AsciiDoctor*/
    table {
        border-collapse: collapse;
        border-spacing: 0
    }
    
    .admonitionblock>table {
        border-collapse: separate;
        border: 0;
        background: none;
        width: 100%
    }
    
    .admonitionblock>table td.icon {
        text-align: center;
        width: 80px
    }
    
    .admonitionblock>table td.icon img {
        max-width: none
    }
    
    .admonitionblock>table td.icon .title {
        font-weight: bold;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        text-transform: uppercase
    }
    
    .admonitionblock>table td.content {
        padding-left: 1.125em;
        padding-right: 1.25em;
        border-left: 1px solid #ddddd8;
        color: var(--primary)
    }
    
    .admonitionblock>table td.content>:last-child>:last-child {
        margin-bottom: 0
    }
    
    .admonitionblock td.icon [class^="fa icon-"] {
        font-size: 2.5em;
        text-shadow: 1px 1px 2px var(--secondary);
        cursor: default
    }
    
    .admonitionblock td.icon .icon-note::before {
        content: "\f05a";
        color: var(--icon-note-color)
    }
    
    .admonitionblock td.icon .icon-tip::before {
        content: "\f0eb";
        color: var(--icon-tip-color)
    }
    
    .admonitionblock td.icon .icon-warning::before {
        content: "\f071";
        color: var(--icon-warning-color)
    }
    
    .admonitionblock td.icon .icon-caution::before {
        content: "\f06d";
        color: var(--icon-caution-color)
    }
    
    .admonitionblock td.icon .icon-important::before {
        content: "\f06a";
        color: var(--icon-important-color)
    }
    
    .conum[data-value] {
        display: inline-block;
        color: #fff !important;
        background-color: rgba(100, 100, 0, .8);
        -webkit-border-radius: 100px;
        border-radius: 100px;
        text-align: center;
        font-size: .75em;
        width: 1.67em;
        height: 1.67em;
        line-height: 1.67em;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        font-style: normal;
        font-weight: bold
    }
    
    .conum[data-value] * {
        color: #fff !important
    }
    
    .conum[data-value]+b {
        display: none
    }
    
    .conum[data-value]::after {
        content: attr(data-value)
    }
    
    pre .conum[data-value] {
        position: relative;
        top: -.125em
    }
    
    b.conum * {
        color: inherit !important
    }
    
    .conum:not([data-value]):empty {
        display: none
    }
  3. The previous file uses variables from a partial copy of the theme-vars.css file that changes the highlighted code background color and adds the color definitions used by the admonitions:

    theme-vars.css
    :root {
        /* Solarized base2 */
        /* --hljs-bg: rgb(238, 232, 213); */
        /* Solarized base3 */
        /* --hljs-bg: rgb(253, 246, 227); */
        /* Solarized base02 */
        --hljs-bg: rgb(7, 54, 66);
        /* Solarized base03 */
        /* --hljs-bg: rgb(0, 43, 54); */
        /* Default asciidoctor theme colors */
        --icon-note-color: #19407c;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #bf6900;
        --icon-caution-color: #bf3400;
        --icon-important-color: #bf0000
    }
    
    .dark {
        --hljs-bg: rgb(7, 54, 66);
        /* Asciidoctor theme colors with tint for dark background */
        --icon-note-color: #3e7bd7;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #ff8d03;
        --icon-caution-color: #ff7847;
        --icon-important-color: #ff3030
    }
  4. The previous styles use font-awesome, so I’ve downloaded its resources for version 4.7.0 (the one used by asciidoctor), storing font-awesome.css in the assets/css/extended dir (that way it is merged with the rest of the .css files) and copying the fonts to the static/assets/fonts/ dir (will be served directly):

    FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
    curl "$FA_BASE_URL/css/font-awesome.css" \
      > assets/css/extended/font-awesome.css
    for f in FontAwesome.otf fontawesome-webfont.eot \
      fontawesome-webfont.svg fontawesome-webfont.ttf \
      fontawesome-webfont.woff fontawesome-webfont.woff2; do
        curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
    done
  5. As already said the default highlighter is disabled (it provided a css compatible with rouge) so we need a css to do the highlight styling; as rouge provides a way to export them, I’ve created the assets/css/extended/rouge.css file with the thankful_eyes theme:

    rougify style thankful_eyes > assets/css/extended/rouge.css
  6. To support the use of the html5s backend with admonitions I’ve added a variation of the example found on this blog post to assets/js/adoc-admonitions.js:

    adoc-admonitions.js
    // replace the default admonitions block with a table that uses a format
    // similar to the standard asciidoctor ... as we are using fa-icons here there
    // is no need to add the icons: font entry on the document.
    window.addEventListener('load', function () {
      const admonitions = document.getElementsByClassName('admonition-block')
      for (let i = admonitions.length - 1; i >= 0; i--) {
        const elm = admonitions[i]
        const type = elm.classList[1]
        const title = elm.getElementsByClassName('block-title')[0];
    	const label = title.getElementsByClassName('title-label')[0]
    		.innerHTML.slice(0, -1);
        elm.removeChild(elm.getElementsByClassName('block-title')[0]);
        const text = elm.innerHTML
        const parent = elm.parentNode
        const tempDiv = document.createElement('div')
        tempDiv.innerHTML = `<div class="admonitionblock ${type}">
        <table>
          <tbody>
            <tr>
              <td class="icon">
                <i class="fa icon-${type}" title="${label}"></i>
              </td>
              <td class="content">
                ${text}
              </td>
            </tr>
          </tbody>
        </table>
      </div>`
        const input = tempDiv.childNodes[0]
        parent.replaceChild(input, elm)
      }
    })

    and enabled its minified use on the layouts/partials/extend_footer.html file by adding the following lines to it:

    {{- $admonitions := slice (resources.Get "js/adoc-admonitions.js")
      | resources.Concat "assets/js/adoc-admonitions.js" | minify | fingerprint }}
    <script defer crossorigin="anonymous" src="{{ $admonitions.RelPermalink }}"
      integrity="{{ $admonitions.Data.Integrity }}"></script>
    Note:

    As I’ve added the font-awesome resources and the standard styling for the asciidoctor admonitions, there is no need to add anything else to make things work.

Remark42 configuration

To integrate Remark42 with the PaperMod theme I’ve created the file layouts/partials/comments.html with the following content based on the remark42 documentation, including extra code to sync the dark/light setting with the one set on the site:

comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: {{ .Site.Params.remark42Url }},
    site_id: {{ .Site.Params.remark42SiteID }},
    url: {{ .Permalink }},
    locale: {{ .Site.Language.Lang }}
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for(var i = 0; i < c.length; i++){
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] +'.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>

In development I use it with anonymous comments enabled, but to avoid spam the production site uses social logins (for now I’ve only enabled GitHub & Google; if someone requests additional services I’ll check them, but those were the easiest ones for me to set up initially).

To support theme switching with remark42 I’ve also added the following inside the layouts/partials/extend_footer.html file:

{{- if (not site.Params.disableThemeToggle) }}
<script>
/* Function to change theme when the toggle button is pressed */
document.getElementById("theme-toggle").addEventListener("click", () => {
  if (typeof window.REMARK42 != "undefined") {
    if (document.body.className.includes('dark')) {
      window.REMARK42.changeTheme('light');
    } else {
      window.REMARK42.changeTheme('dark');
    }
  }
});
</script>
{{- end }}

With this code, when the theme-toggle button is pressed we change the remark42 theme before the PaperMod one (that is only needed here; on page loads the remark42 theme is synced with the main one using the code from the layouts/partials/comments.html file shown earlier).

Development setup

To preview the site on my laptop I’m using docker-compose with the following configuration:

docker-compose.yaml
version: "2"
services:
  hugo:
    build:
      context: ./docker/hugo-adoc
      dockerfile: ./Dockerfile
    image: sto/hugo-adoc
    container_name: hugo-adoc-blogops
    restart: always
    volumes:
      - .:/documents
    command: server --bind 0.0.0.0 -D -F
    user: ${APP_UID}:${APP_GID}
  nginx:
    image: nginx:latest
    container_name: nginx-blogops
    restart: always
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      -  1313:1313
  remark42:
    build:
      context: ./docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    container_name: remark42-blogops
    restart: always
    env_file:
      - ./.env
      - ./remark42/env.dev
    volumes:
      - ./remark42/var.dev:/srv/var

To run it properly we have to create the .env file with the current user ID and GID in the APP_UID and APP_GID variables (if we don’t, the files can end up being owned by a user that is not the same as the one running the services):

$ printf "APP_UID=%s\nAPP_GID=%s\n" "$(id -u)" "$(id -g)" > .env

The Dockerfile used to generate the sto/hugo-adoc image is:

Dockerfile
FROM asciidoctor/docker-asciidoctor:latest
RUN gem install --no-document asciidoctor-html5s &&\
 apk update && apk add --no-cache curl libc6-compat &&\
 repo_path="gohugoio/hugo" &&\
 api_url="https://api.github.com/repos/$repo_path/releases/latest" &&\
 download_url="$(\
  curl -sL "$api_url" |\
  sed -n "s/^.*download_url\": \"\\(.*.extended.*Linux-64bit.tar.gz\)\"/\1/p"\
 )" &&\
 curl -sL "$download_url" -o /tmp/hugo.tgz &&\
 tar xf /tmp/hugo.tgz hugo &&\
 install hugo /usr/bin/ &&\
 rm -f hugo /tmp/hugo.tgz &&\
 /usr/bin/hugo version &&\
 apk del curl && rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]

If you review it you will see that I’m using the docker-asciidoctor image as the base; the idea is that this image has everything I need to work with asciidoctor, and to use hugo I only need to download the binary from their latest release at github (as the image is based on alpine we also need to install the libc6-compat package, but once that is done things have worked fine for me so far).

The image does not launch the server by default because I don’t want it to; in fact I use the same docker-compose.yml file to publish the site in production, simply calling the container without the server arguments defined for the hugo service in that file (see later).

When running the containers with docker-compose up (or docker compose up if you have the docker-compose-plugin package installed) we also launch an nginx container and the remark42 service so we can test everything together.
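
For reference, a typical development session with this setup could look like the following sketch (the port comes from the nginx service definition above and the checkout path is hypothetical):

cd ~/projects/blogops
docker compose up -d
docker compose logs -f hugo
# The site is now available on http://localhost:1313/ with live reload
docker compose down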

The Dockerfile for the remark42 image is the original one with an updated version of the init.sh script:

Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh

The updated init.sh is similar to the original, but allows us to use an APP_GID variable and updates the /etc/group file of the container so the files get the right user and group (with the original script the group is always 1001):

init.sh
#!/sbin/dinit /bin/sh

uid="$(id -u)"

if [ "${uid}" -eq "0" ]; then
  echo "init container"

  # set container's time zone
  cp "/usr/share/zoneinfo/${TIME_ZONE}" /etc/localtime
  echo "${TIME_ZONE}" >/etc/timezone
  echo "set timezone ${TIME_ZONE} ($(date))"

  # set UID & GID for the app
  if [ "${APP_UID}" ] || [ "${APP_GID}" ]; then
    [ "${APP_UID}" ] || APP_UID="1001"
    [ "${APP_GID}" ] || APP_GID="${APP_UID}"
    echo "set custom APP_UID=${APP_UID} & APP_GID=${APP_GID}"
    sed -i "s/^app:x:1001:1001:/app:x:${APP_UID}:${APP_GID}:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:${APP_GID}:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi

echo "prepare environment"

# replace {% REMARK_URL %} by content of REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|{% REMARK_URL %}|${REMARK_URL}|g" {} \;

if [ -n "${SITE_ID}" ]; then
  #replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'${SITE_ID}'|g" /srv/web/*.html
fi

echo "execute \"$*\""
if [ "${uid}" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi

The environment file used with remark42 for development is quite minimal:

env.dev
TIME_ZONE=Europe/Madrid
REMARK_URL=http://localhost:1313/remark42
SITE=blogops
SECRET=123456
ADMIN_SHARED_ID=sto
AUTH_ANON=true
EMOJI=true

And the nginx/default.conf file used to publish the service locally is simple too:

default.conf
server { 
 listen 1313;
 server_name localhost;
 location / {
    proxy_pass http://hugo:1313;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
 }
 location /remark42/ {
    rewrite /remark42/(.*) /$1 break;
    proxy_pass http://remark42:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
Production setup

The VM where I’m publishing the blog runs Debian GNU/Linux and uses binaries from local packages and applications packaged inside containers.

To run the containers I’m using docker-ce (I could have used podman instead, but I already had it installed on the machine, so I stayed with it).

The binaries used on this project are included on the following packages from the main Debian repository:

  • git to clone & pull the repository,
  • jq to parse json files from shell scripts,
  • json2file-go to save the webhook messages to files,
  • inotify-tools to detect when new files are stored by json2file-go and launch scripts to process them,
  • nginx to publish the site using HTTPS and work as proxy for json2file-go and remark42 (I run it using a container),
  • task-spooler to queue the scripts that update the deployment.

And I’m using docker and docker compose from the Debian packages on the docker repository (a sketch of the repository setup and installation is included after this list):

  • docker-ce to run the containers,
  • docker-compose-plugin to run docker compose (it is a plugin, so no - in the name).
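
The repository setup and installation could look like the following sketch, based on Docker’s upstream instructions for Debian at the time of writing (it assumes curl, gnupg and lsb-release are installed; check the current documentation before using it):

# Add Docker's official GPG key and apt repository
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg |
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [signed-by=/etc/apt/keyrings/docker.gpg]" \
  "https://download.docker.com/linux/debian $(lsb_release -cs) stable" |
  sudo tee /etc/apt/sources.list.d/docker.list
# Install the docker packages
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin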
Note:

In the following sections I’m assuming that the user doing the work belongs to the docker group, that is, has permission to run docker.
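
If that is not the case, the user can be added to the group with something like the following (a new login is needed for the change to take effect):

# Add the current user to the docker group
sudo usermod -aG docker "$USER"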

Repository checkout

To manage the git repository I’ve created a deploy key, added it to gitea and cloned the project on the /srv/blogops path (that path is owned by a regular user that has permission to run docker, as I said before).
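
Just as an illustration, the steps are similar to the following sketch (the key path and the SSH remote are assumptions, not my actual setup):

# Generate a dedicated key without a passphrase for the deployment user
ssh-keygen -t ed25519 -N "" -f ~/.ssh/blogops-deploy -C "blogops deploy key"
# Add ~/.ssh/blogops-deploy.pub as a deploy key on the gitea repository and
# configure ~/.ssh/config to use that key for gitea.mixinet.net, then clone:
git clone git@gitea.mixinet.net:mixinet/blogops.git /srv/blogops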

Compiling the site with hugo

To compile the site we are using the docker-compose.yml file seen before; to be able to run it we first build the container images and, once we have them, we launch hugo using docker compose run:

$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --

The compilation leaves the static HTML on /srv/blogops/public (we remove the directory first because hugo does not clean the destination folder as jekyll does).

The deploy script re-generates the site as described and moves the public directory to its final place for publishing.

Running remark42 with docker

On the /srv/blogops/remark42 folder I have the following docker-compose.yml:

docker-compose.yml
version: "2"
services:
  remark42:
    build:
      context: ../docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    env_file:
      - ../.env
      - ./env.prod
    container_name: remark42
    restart: always
    volumes:
      - ./var.prod:/srv/var
    ports:
      - 127.0.0.1:8042:8080

The ../.env file is loaded to get the APP_UID and APP_GID variables that are used by my version of the init.sh script to adjust file permissions, and the env.prod file contains the rest of the settings for remark42, including the social network tokens (see the remark42 documentation for the available parameters; I don’t include my configuration here because some of them are secrets).
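
Just to give an idea of its shape, a minimal env.prod could look like the following sketch; the URL path and all the credential values are placeholders, not my real configuration, and the AUTH_* names come from the remark42 parameters documentation:

TIME_ZONE=Europe/Madrid
REMARK_URL=https://blogops.mixinet.net/remark42
SITE=blogops
SECRET=some-long-random-string
ADMIN_SHARED_ID=admin-user-id
AUTH_ANON=false
EMOJI=true
AUTH_GITHUB_CID=github-oauth-client-id
AUTH_GITHUB_CSEC=github-oauth-client-secret
AUTH_GOOGLE_CID=google-oauth-client-id
AUTH_GOOGLE_CSEC=google-oauth-client-secret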

Nginx configuration

The nginx configuration for the blogops.mixinet.net site is as simple as:

server {
  listen 443 ssl http2;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  access_log /var/log/nginx/blogops.mixinet.net-443.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-443.error.log;
  root /srv/blogops/nginx/public_html;
  location / {
    try_files $uri $uri/ =404;
  }
  include /srv/blogops/nginx/remark42.conf;
}
server {
  listen 80 ;
  listen [::]:80 ;
  server_name blogops.mixinet.net;
  access_log /var/log/nginx/blogops.mixinet.net-80.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-80.error.log;
  if ($host = blogops.mixinet.net) {
    return 301 https://$host$request_uri;
  }
  return 404;
}

On this configuration the certificates are managed by certbot and the server root directory is /srv/blogops/nginx/public_html instead of /srv/blogops/public; the reason for that is that I want to be able to compile the site without affecting the version being served, so the deployment script generates the new copy on /srv/blogops/public and, if all works well, we rename folders to do the switch, making the change feel almost atomic.
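
The switch itself boils down to a couple of renames on the same filesystem, something like the following sketch (the deployment script shown later does essentially this, with logging and error handling):

# Move the currently served copy aside and put the new build in its place
TS="$(date +%Y%m%d-%H%M%S)"
mv /srv/blogops/nginx/public_html "/srv/blogops/nginx/public_html-$TS"
mv /srv/blogops/public /srv/blogops/nginx/public_html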

json2file-go configuration

As I have a working WireGuard VPN between the machine running gitea at my home and the VM where the blog is served, I’m going to configure json2file-go to listen for connections on a high port, using a self-signed certificate and listening only on IP addresses reachable through the VPN.

To do it we create a systemd socket to run json2file-go and adjust its configuration to listen on a private IP (we use the FreeBind option on its definition to be able to launch the service even when the IP is not available, that is, when the VPN is down).

The following script can be used to set up the json2file-go configuration:

setup-json2file.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"

J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"

J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"

# ----
# MAIN
# ----

# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if ! command -v mkcert >/dev/null 2>&1; then
  sudo apt install -y mkcert
fi
sudo apt clean

# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"

# Configure json2file
[ -d "$J2F_DIR" ] || mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ] || mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ] || [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST" | tr ';' '\n')
EOF

# Service override
[ -d "$J2F_SERVICE_DIR" ] || sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF

# Socket override
[ -d "$J2F_SOCKET_DIR" ] || sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF

# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"

# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning:

The script uses mkcert to create the temporary certificates; to install the package on bullseye the backports repository must be available.
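
If the backports repository is not configured yet, something like the following should be enough on bullseye:

# Enable bullseye-backports and install mkcert from it
echo "deb http://deb.debian.org/debian bullseye-backports main" |
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -y -t bullseye-backports mkcert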

Gitea configuration

To make gitea use our json2file-go server we go to the project and enter the hooks/gitea/new page; once there we create a new webhook of type gitea, set the target URL to https://172.31.31.1:4443/blogops and put on the secret field the token generated with uuid by the setup script:

sed -n -e 's/blogops://p' /etc/json2file-go/dirlist

The rest of the settings can be left as they are:

  • Trigger on: Push events
  • Branch filter: *
Warning:

We are using an internal IP and a self-signed certificate; that means that we have to make sure that the webhook section of the app.ini of our gitea server allows us to call that IP and skips the TLS verification (you can see the available options on the gitea documentation).

The [webhook] section of my server looks like this:

[webhook]
ALLOWED_HOST_LIST=private
SKIP_TLS_VERIFY=true

Once we have the webhook configured we can try it; if it works our json2file server will store the file in the /srv/blogops/webhook/json2file/blogops/ folder.

The json2file spooler script

With the previous configuration our system is ready to receive webhook calls from gitea and store the messages on files, but we have to do something to process those files once they are saved in our machine.

An option could be to use a cronjob to look for new files, but we can do better on Linux using inotify: we will use the inotifywait command from inotify-tools to watch the json2file output directory and execute a script each time a new file is moved inside it or closed after writing (IN_MOVED_TO and IN_CLOSE_WRITE events).

To avoid concurrency problems we are going to use task-spooler to launch the scripts that process the webhooks using a single execution slot, so they are executed one by one in FIFO order.
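
As a side note, the state of that queue can be checked at any time by pointing tsp to the same socket directory used by the spooler script shown below, for example:

# List queued, running and recently finished jobs of the blogops spooler
TMPDIR=/srv/blogops/webhook/tsp tsp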

The spooler script is this:

blogops-spooler.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"

WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"

# ---------
# FUNCTIONS
# ---------

queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}

# ----
# MAIN
# ----

INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi

[ -d "$TSP_DIR" ] || mkdir "$TSP_DIR"

echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f | sort | while read -r _filename; do
  queue_job "$_filename"
done

# Use inotifywait to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done

# ----
# vim: ts=2:sw=2:et:ai:sts=2

To run it as a daemon we install it as a systemd service using the following script:

setup-spooler.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"

SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"

# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"

# ----
# MAIN
# ----

# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean

# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
EOF

# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME" || true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"

# ----
# vim: ts=2:sw=2:et:ai:sts=2
The gitea webhook processor

Finally, the script that processes the JSON files does the following:

  1. First, it checks if the repository and branch are right,
  2. Then, it fetches and checks out the commit referenced on the JSON file,
  3. Once the files are updated, compiles the site using hugo with docker compose,
  4. If the compilation succeeds the script renames directories to replace the old version of the site with the new one.

If there is a failure the script aborts, but before doing so (and also when the swap succeeds) the system sends an email with a log of what happened to the configured address and/or the user that pushed the updates to the repository.

The current script is this one:

blogops-webhook.sh
#!/bin/sh

set -e

# ---------
# VARIABLES
# ---------

# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://gitea.mixinet.net/mixinet/blogops.git"

MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# gitea's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"

# Directories
BASE_DIR="/srv/blogops"

PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"

WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"

# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"

# Query to get variables from a gitea webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(.           | @sh "gt_ref=\(.ref);"),' \
    '(.           | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher     | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher     | @sh "gt_pusher_email=\(.email);")'
)"

# ---------
# Functions
# ---------

webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}

webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}

webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}

webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}

webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}

webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}

webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}

print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the gitea 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"${NO_REPLY_ADDRESS}"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}

mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}

mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = true ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}

# ----
# MAIN
# ----
# Check directories
webhook_check_directories

# Go to the base directory
cd "$BASE_DIR"

# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi

# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"

# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_repo_clone_url'"
fi

# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi

# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"

# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
fi

# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi

# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# go back to the main branch
git switch main && git pull
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
fi

# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find "$NGINX_BASE_DIR" -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi

# ----
# vim: ts=2:sw=2:et:ai:sts=2

Dirk Eddelbuettel: RcppAPT 0.0.9: Minor Update

26 May, 2022 - 04:50

A new version of the RcppAPT package with the R interface to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN earlier today.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

This release updates the code for the Apt 2.5.0 release, which makes a cleaner distinction between public and private components of the API. We adjusted one access point to a pattern we already used and, while at it, simplified some of the transition from the pre-Apt 2.0.0 interface. No new features. The NEWS entries follow.

Changes in version 0.0.9 (2022-05-25)
  • Simplified and standardized to only use public API

  • No longer tests and accommodates pre-Apt 2.0 API

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Emmanuel Kasper: One of the strangest bugs I have ever seen on Linux

26 May, 2022 - 03:58

Networking starts when you log in as root and stops when you log off!

The SELinux messages can be ignored, I guess, but we can clearly see the devices being activated (it's a Linux bridge).

If you have any explanations I am curious.

Bits from Debian: Debian welcomes the 2022 GSOC interns

24 May, 2022 - 18:15

We are very excited to announce that Debian has selected three interns to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here are the list of the projects, interns, and details of the tasks to be performed.

Project: Android SDK Tools in Debian

  • Interns: Nkwuda Sunday Cletus and Raman Sarda

The deliverables of this project will mostly be finished packages submitted to Debian sid, both for new packages and updated packages. Whenever possible, we should also try to get patches submitted and merged upstream in the Android sources.

Project: Quality Assurance for Biological and Medical Applications inside Debian

  • Intern: Mohammed Bilal

Deliverables of the project: Continuous integration tests for all Debian Med applications (life sciences, medical imaging, others), Quality Assurance review and bug fixing.

Congratulations and welcome to all the interns!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

Arturo Borrero González: Toolforge Jobs Framework

24 May, 2022 - 02:19

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

This post continues the discussion of Toolforge updates as described in a previous post. Every non-trivial task performed in Toolforge (like executing a script or running a bot) should be dispatched to a job scheduling backend, which ensures that the job is run in a suitable place with sufficient resources.

Jobs can be scheduled synchronously or asynchronously, continuously, or simply executed once. The basic principle of running jobs is fairly straightforward:

  • You create a job from a submission server (usually login.toolforge.org).
  • The backend finds a suitable execution node to run the job on, and starts it once resources are available.
  • As it runs, the job will send output and errors to files until the job completes or is aborted.

So far, if a tool developer wanted to work with jobs, the Toolforge Grid Engine backend was the only suitable choice. This is despite the fact that Kubernetes supports this kind of workload natively. The truth is that we never prepared our Kubernetes environment to work with jobs. Luckily that has changed.

We no longer want to run Grid Engine

In a previous blog post we shared information about our desired future for Grid Engine in Toolforge. Our intention is to discontinue our usage of this technology.

Convenient way of running jobs on Toolforge Kubernetes

Some advanced Toolforge users really wanted to use Kubernetes. They were aware of the lack of abstractions or helpers, so they were forced to use the raw Kubernetes API. Eventually, they figured everything out and managed to succeed. The result of this move was in the form of docs on Wikitech and a few dozen jobs running on Kubernetes for the first time.

We were aware of this, and this initiative was much in sync with our ultimate goal: to promote Kubernetes over Grid Engine. We rolled up our sleeves and started thinking of a way to abstract and make it easy to run jobs without having to deal with lots of YAML and the raw Kubernetes API.

There is a precedent: the webservice command does exactly that. It hides all the details behind a simple command line interface to start/stop a web app running on Kubernetes. However, we wanted to go even further, be more flexible and prepare ourselves for more situations in the future: we decided to create a complete new REST API to wrap the jobs functionality in Toolforge Kubernetes. The Toolforge Jobs Framework was born.

Toolforge Jobs Framework components

The new framework is a small collection of components. As of this writing, we have three:

  • The REST API — responsible for creating/deleting/listing jobs on the Kubernetes system.
  • A command line interface — to interact with the REST API above.
  • An emailer — to notify users about their jobs activity in the Kubernetes system.

There were a couple of challenges that weren’t trivial to solve. The authentication and authorization against the Kubernetes API was one of them. The other was deciding on the semantics of the new REST API itself. If you are curious, we invite you to take a look at the documentation we have in wikitech.

Open beta phase

Once we gained some confidence with the new framework, in July 2021 we decided to start a beta phase. We suggested some advanced Toolforge users try out the new framework. We tracked this phase in Phabricator, where our collaborators quickly started reporting some early bugs, helping each other, and creating new feature requests.

Moreover, when we launched the Grid Engine migration from Debian 9 Stretch to Debian 10 Buster we took a step forward and started promoting the new jobs framework as a viable replacement for the grid. Some official documentation pages were created on wikitech as well.

As of this writing the framework continues in beta phase. We have solved basically all of the most important bugs, and we already started thinking on how to address the few feature requests that are missing.

We haven’t yet established the criteria for leaving the beta phase, but it would be good to have:

  • Critical bugs fixed and most feature requests addressed (or at least somehow planned).
  • Proper automated test coverage. We can do better on testing the different software components to ensure they are as bug free as possible. This also would make sure that contributing changes is easy.
  • REST API swagger integration.
  • Deployment automation. Deploying the REST API and the emailer is tedious. This is tracked in Phabricator.
  • Documentation, documentation, documentation.
Limitations

One of the limitations we have borne in mind since early on in the development process of this framework is the lack of support for mixing different programming languages or runtime environments in the same job.

Solving this limitation is currently one of the WMCS team priorities, because this is one of the key workflows that was available on Grid Engine. The moment we address it, the framework adoption will grow, and it will pretty much enable the same workflows as in the grid, if not more advanced and featureful.

Stay tuned for more upcoming blog posts with additional information about Toolforge.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Ulrike Uhlig: How do kids conceive the internet? - part 3

23 May, 2022 - 05:00

I received some feedback on the first part of interviews about the internet with children that I’d like to share publicly here. Thank you! Your thoughts and experiences are important to me!

In the first interview round there was this French girl.

Asked what she would change if she could, the 9 year old girl advocated for a global usage limit of the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones and people should rather spend more time with their children.

To this bit, one person reacted saying that they first laughed when reading her proposal, but then felt extremely touched by it.

Another person reacted to the same bit of text:

That’s just brilliant. We spend so much time worrying about how the internet will affect children while overlooking how it has already affected us as parents. It actively harms our relationship with our children (keeping us distracted from their amazing life) and sets a bad example for them.

Too often, when we worry about children, we should look at our own behavior first. Until about that age (9-10+) at least, they are such a direct reflection of us that it’s frightening…

Yet another person reacted to the fact that many of the interviewees in the first round seemed to believe that the internet is immaterial, located somewhere in the air, while being at the same time omnipresent:

It reminds me of one time – about a dozen years ago, when i was still working closely with one of the city high schools – where i’d just had a terrible series of days, dealing with hardware failure, crappy service followthrough by the school’s ISP, and overheating in the server closet, and had basically stayed overnight at the school and just managed to get things back to mostly-functional before kids and teachers started showing up again.

That afternoon, i’d been asked by the teacher of a dystopian fiction class to join them for a discussion of Feed, which they’d just finished reading. i had read it the week before, and came to class prepared for their questions. (the book is about a near-future where kids have cybernetic implants and their society is basically on a runaway communications overload; not a bad Y[oung]A[dult] novel, really!)

The kids all knew me from around the school, but the teacher introduced my appearance in class as “one of the most Internet-connected people” and they wanted to ask me about whether i really thought the internet would “do this kind of thing” to our culture, which i think was the frame that the teacher had prepped them with. I asked them whether they thought the book was really about the Internet, or whether it was about mobile phones. Totally threw off the teacher’s lesson plans, i think, but we had a good discussion.

At one point, one of the kids asked me “if there was some kind of crazy disaster and all the humans died out, would the internet just keep running? what would happen on it if we were all gone?”

all of my labor – even that grueling week – was invisible to him! The internet was an immaterial thing, or if not immaterial, a force of nature, a thing that you accounted for the way you accounted for the weather, or traffic jams. It didn’t occur to him, even having just read a book that asked questions about what hyperconnectivity does to a culture (including grappling with issues of disparate access, effective discrimination based on who has the latest hardware, etc), it didn’t occur to him that this shit all works to the extent that it does because people make it go.

I felt lost trying to explain it to him, because where i wanted to get to with the class discussion was about how we might decide collectively to make it go somewhere else – that our contributions to it, and our labor to perpetuate it (or not) might actually help shape the future that the network helps us slide into. but he didn’t even see that human decisions or labor played a role it in at all, let alone a potentially directive role. We were really starting at square zero, which wasn’t his fault. Or the fault of his classmates that matter – but maybe a little bit of fault on the teacher, who i thought should have been emphasizing this more – but even the teacher clearly thought of the internet as a thing being done to us not as something we might actually drive one way or another. And she’s not even wrong – most people don’t have much control, just like most people can’t control the weather, even as our weather changes based on aggregate human activity.

I was quite impressed by seeing the internet perceived as a force of nature, so we continued this discussion a bit:

that whole story happened before we started talking about “the cloud”, but “the cloud” really reinforces this idea, i think. not that anyone actually thinks that “the cloud” is a literal cloud, but language shapes minds in subtle ways.

(Bold emphasis in the texts are mine.)

Thanks :) I’m happy and touched that these interviews prompted your wonderful reactions, and I hope that there’ll be more to come on this topic. I’m working on it!

Sergio Talens-Oliag: New Blog

23 May, 2022 - 05:00

Welcome to my new Blog for Technical Stuff.

For a long time I was planning to start publishing technical articles again but to do it I wanted to replace my old blog based on ikiwiki by something more modern.

I’ve used Jekyll with GitLab Pages to build the Intranet of the ITI and to generate internal documentation sites on Agile Content, but, as happened with ikiwiki, I felt that things were kind of slow and not as easy to maintain as I would like.

So on Kyso (the Company I work for right now) I switched to Hugo as the Static Site Generator (I still use GitLab Pages to automate the deployment, though), but the contents are written using the Markdown format, while my personal preference is the Asciidoc format.

One thing I liked about Jekyll was that it was possible to use Asciidoctor to generate the HTML simply by using the Jekyll Asciidoc plugin (I even configured my site to generate PDF documents from .adoc files using the Asciidoctor PDF converter) and, luckily for me, that is also possible with Hugo, so that is what I plan to use on this blog; in fact, this post is written in .adoc.

My plan is to start publishing articles about things I’m working on to keep them documented for myself and maybe be useful to someone else.

The general intention is to write about Container Orchestration (mainly Kubernetes), CI/CD tools (currently I’m using GitLab CE for that), System Administration (with Debian GNU/Linux as my preferred OS) and that sort of things.

My next post will be about how I build, publish and update the blog, but I will probably not finish it until next week, once the site is fully operational and the publishing system is tested.

Spoiler Alert !

This is a personal site, so I’m using Gitea to host the code instead of GitLab.

To handle the deployment I’ve configured json2file-go to save the data sent by the hook calls and process it asynchronously using inotify-tools.

When a new file is detected, a script parses the JSON file using jq and builds and updates the site if appropriate.
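
As an illustration, the fields such a script cares about can be pulled from a saved gitea push payload with a jq call along these lines (payload.json is a hypothetical file name):

# Print the pushed ref, the commit id and the repository clone URL
jq -r '.ref, .after, .repository.clone_url' payload.json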

Russ Allbery: Review: On a Sunbeam

22 May, 2022 - 12:06

Review: On a Sunbeam, by Tillie Walden

Publisher: Tillie Walden
Copyright: 2016-2017
Format: Online graphic novel
Pages: 544

On a Sunbeam is a web comic that was published in installments between Fall 2016 and Spring 2017, and then later published in dead tree form. I read the on-line version, which is still available for free from its web site. It was nominated for an Eisner Award and won a ton of other awards, including the Los Angeles Times Book Prize.

Mia is a new high school graduate who has taken a job with a construction crew that repairs old buildings (that are floating in space, but I'll get to that in a moment). Alma, Elliot, and Charlotte have been together for a long time; Jules is closer to Mia's age and has been with them for a year. This is not the sort of job one commutes to: they live together on a spaceship that travels to the job sites, share meals together, and are more of an extended family than a group of coworkers. It's all a bit intimidating for Mia, but Jules provides a very enthusiastic welcome and some orientation.

The story of Mia's new job is interleaved with Mia's school experience from five years earlier. As a new frosh at a boarding school, Mia is obsessed with Lux, a school sport that involves building and piloting ships through a maze to capture orbs. Sent to the principal's office on the first day of school for sneaking into the Lux tower when she's supposed to be at assembly, she meets Grace, a shy girl with sparkly shoes and an unheard-of single room. Mia (a bit like Jules in the present timeline) overcomes Grace's reticence by being persistently outgoing and determinedly friendly, while trying to get on the Lux team and dealing with the typical school problems of bullies and in-groups.

On a Sunbeam is science fiction in the sense that it seems to take place in space and school kids build flying ships. It is not science fiction in the sense of caring about technological extrapolation or making any scientific sense whatsoever. The buildings that Mia and the crew repair appear to be hanging in empty space, but there's gravity. No one wears any protective clothing or air masks. The spaceships look (and move) like giant tropical fish. If you need realism in your science fiction graphical novels, it's probably best not to think of this as science fiction at all, or even science fantasy despite the later appearance of some apparently magical or divine elements.

That may sound surrealistic or dream-like, but On a Sunbeam isn't that either. It's a story about human relationships, found family, and diversity of personalities, all of which are realistically portrayed. The characters find their world coherent, consistent, and predictable, even if it sometimes makes no sense to the reader. On a Sunbeam is simply set in its own universe, with internal logic but without explanation or revealed rules.

I kind of liked this approach? It takes some getting used to, but it's an excuse for some dramatic and beautiful backgrounds, and it's oddly freeing to have unremarked train tracks in outer space. There's no way that an explanation would have worked; if one were offered, my brain would have tried to nitpick it to the detriment of the story. There's something delightful about a setting that follows imaginary physical laws this unapologetically and without showing the author's work.

I was, sadly, not as much of a fan of the art, although I am certain this will be a matter of taste. Walden mixes simple story-telling panels with sweeping vistas, free-floating domes, and strange, wild asteroids, but she uses a very limited color palette. Most panels are only a few steps away from monochrome, and the colors are chosen more for mood or orientation in the story (Mia's school days are all blue, the Staircase is orange) than for any consistent realism. There is often a lot of detail in the panels, but I found it hard to appreciate because the coloring confused my eye. I'm old enough to have been a comics reader during the revolution in digital coloring and improved printing, and I loved the subsequent dramatic improvement in vivid colors and shading. I know the coloring style here is an intentional artistic choice, but to me it felt like a throwback to the days of muddy printing on cheap paper.

I have a similar complaint about the lettering: On a Sunbeam is either hand-lettered or closely simulates hand lettering, and I often found the dialogue hard to read due to inconsistent intra- and interword spacing or ambiguous letters. Here too I'm sure this was an artistic choice, but as a reader I'd much prefer a readable comics font over hand lettering.

The detail in the penciling is more to my liking. I had occasional trouble telling some of the characters apart, but they're clearly drawn and emotionally expressive. The scenery is wildly imaginative and often gorgeous, which increased my frustration with the coloring. I would love to see what some of these panels would have looked like after realistic coloring with a full palette.

(It's worth noting again that I read the on-line version. It's possible that the art was touched up for the print version and would have been more to my liking.)

But enough about the art. The draw of On a Sunbeam for me is the story. It's not very dramatic or event-filled at first, starting as two stories of burgeoning friendships with a fairly young main character. (They are closely linked, but it's not obvious how until well into the story.) But it's the sort of story that I started reading, thought was mildly interesting, and then kept reading just one more chapter until I had somehow read the whole thing.

There are some interesting twists towards the end, but it's otherwise not a very dramatic or surprising story. What it is instead is open-hearted, quiet, charming, and deeper than it looks. The characters are wildly different and can be abrasive, but they invest time and effort into understanding each other and adjusting for each other's preferences. Personal loss can have huge consequences and can drive a lot of the story, but the characters are allowed to be happy after loss, and to mature independent of the bad things that happen to them. These characters felt like people I would like and would want to get to know (even if Jules would be overwhelming). I enjoyed watching their lives.

This reminded me a bit of a Becky Chambers novel, although it's less invested in being science fiction and sticks strictly to humans. There's a similar feeling that the relationships are the point of the story, and that nearly everyone is trying hard to be good, with differing backgrounds and differing conceptions of good. All of the characters are female or non-binary, which is left as entirely unexplained as the rest of the setting. It's that sort of book.

I wouldn't say this is one of the best things I've ever read, but I found it delightful and charming, and it certainly sucked me in and kept me reading until the end. One also cannot argue with the price, although if I hadn't already read it, I would be tempted to buy a paper copy to support the author. This will not be to everyone's taste, and stay far away if you are looking for realistic science fiction, but recommended if you are in the mood for an understated queer character story full of good-hearted people.

Rating: 7 out of 10
