Planet Debian

Planet Debian - https://planet.debian.org/

Louis-Philippe Véronneau: Montreal's Debian & Stuff - August 2022

7 September, 2022 - 03:15

Our local Debian user group gathered on Sunday August 28th [1] at the very hackish Foulab for the August 2022 edition of our "Debian & Stuff" meetings.

As always, the event was a success and we had lots of fun. Nine people showed up, including some new faces and people I hadn't seen in a while:

Animated picture of the people present at the event.

On my side, although I was badly sleep-deprived [2], I still managed to be somewhat productive!

One of the WiFi Access Points we use in our 4-apartment LAN had been boot-looping for a few weeks, after a failed sysupgrade to the latest version of OpenWRT. lavamind and I suspect the flash got corrupted one way or another during the upgrade process...

Lucky for us, this model has a serial port and runs U-Boot. After a bit of tinkering, some electrical tape and two different serial adapters [3], we managed to identify the pin layout and got a shell on the machine. The device has a reset button, but since the kernel panic was happening too soon in the boot process, we weren't able to get into OpenWRT's failsafe mode this way.

Once we had serial access, wiping the flash and re-installing OpenWRT fixed our problem. A quick ansible-playbook run later, the device was back to being usable and configured :)
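For the curious, the recovery went roughly along these lines. This is only a sketch from memory: the load address, image names and even the available commands vary per device and per OpenWRT target, so treat every value below as illustrative rather than a transcript of our session.

=> setenv ipaddr 192.168.1.1        # temporary address for the access point
=> setenv serverip 192.168.1.2      # laptop on the same segment, running a TFTP server
=> tftpboot 0x81000000 openwrt-initramfs-kernel.bin   # pull a ramdisk image over the network
=> bootm 0x81000000                 # boot it without touching the flash

Once the initramfs system is up, running sysupgrade -n on the sysupgrade image writes a fresh OpenWRT to the flash, discarding the (possibly corrupted) saved configuration.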

I was too tired to keep track of what others did, but I took some nice pictures of the pizza we got, and of this nice blow-up Tux wearing a Foulab t-shirt. Enjoy!

As always, thanks to the Debian project for granting us a budget to rent the venue and to buy some food.

  1. Please excuse the late blog post, it's Harvest Season here and I've been quite busy. 

  2. A bad case of wry neck kept me from sleeping properly for a while in August. 

  3. As it turns out, serial connections work better when you use the right pins for TX and RX! 

Shirish Agarwal: Debian on Phone

6 September, 2022 - 23:41
History

Before I start, the game I was talking about is called Cell To Singularity. I haven't gotten far in the game, as I have shared, but I think the Singularity it refers to is the Technological Singularity that people think will happen. Whether that will happen or not is open to debate for one and all. This is going to be a bit of a long one.

Confession time: when I wrote that blog post, I had no clue that we actually had sessions on this at this year's DebConf. I only saw the schedule yesterday and then came to know. Then I watched Guido's two talks, one at DebConf and one at FrOSCon. In fact, I saw the FrOSCon talk first, and then the one at DebConf. Both talks are nearly the same except for a thing here or there.

Because I was not there, my understanding and knowledge will be disadvantageously asymmetrical to Guido and the others who were there and could talk and share more. Having Debian on a mobile phone could also make Debian more popular and relatable to the masses, one of the things that was sadly not pointed out in the Debian India BOF. At the same time, there are some facts that are not on the table and hence not thought about.

Being a B.Com person, I have been following not just the technical side but also the economics, and smartphone penetration in India is, and historically has been, very low, say around 3-4%, while the vast majority of the market, almost 90-95%, uses what are called non-smartphones or dumbphones. During the pandemic and even after, the dumbphone market actually went up while smartphones stagnated and even declined; there is a lot of inventory at most dealers that they can't get rid of. From a dealer's perspective, it probably makes more sense to buy and sell dumbphones in larger numbers, as the turnaround of capital is much faster and easier than for smartphones. I have seen people spend hours, and rightly so, making up their minds about a smartphone, while for a dumbphone it is a ten-minute thing: ask around, figure out who is selling cheapest, and just buy. Most of these low-end phones come from China. In fact, even in the middle of the market and well into smartphones, the Chinese are the masters from whom we buy, even as they have occupied Indian territory. Among the top five, Samsung comes in at number three or four (sharing about Samsung as a fan and having used them), even though battery times are atrocious, especially with Android 12L. The only hope most smartphone manufacturers have is lowering sticker prices and betting that 5G adoption picks up, but that comes with its own share of drawbacks, as can be seen.

GNOME, MATE, memory leaks, Payments

FWIW, while I do have GNOME and do use a couple of tools from the GNOME stack, I hate GNOME with a passion. I have been a MATE user for almost a decade now and really love the simplicity that MATE has vis-a-vis GNOME. And with each release, MATE has only become better. So, it would be nice if we could have MATE on the mobile phone; how 'adaptive' the apps might be on the smaller screen area, I dunno. It would be interesting to find out if and how people are looking at debugging memory leaks on mobile phones. While finding memory leaks on any platform is good, finding and fixing them on a mobile phone is critical, as most phones have a fixed and relatively small amount of memory that can be exhausted quickly.

One of the things asked in the Q&A was about payments. Interestingly, the UK and India are markedly similar as far as contactless payments are concerned. What most Indians have or use is UPI, which is backed by your bank. Unlike in some other countries, where you have a selection of wallets and even temporary/permanent virtual accounts to minimize your risk in case your mobile gets stolen, here we don't have that. There are three digital wallets that I know of: Paytm (not used; I have heard it's creepy, but don't really know), Google Pay (unfortunately the one I use; they brought in multiple features and in the last couple of years have really taken the game away from Paytm, but it is also creepy), and Samsung Pay (haven't really used it, as their find-my-phone app always crashes; dunno how it is supposed to work). But I do find that the apps are vulnerable: every day there is some news or other of fraud happening. Previously, only states like Bihar and Jharkhand were infamous as cybercrime hubs, but now states like Andhra Pradesh have joined and surpassed them :(. People have lost lakhs and crores, as reported just a few days back. Some more info on UPI can be found here, and GitHub has a few implementation examples that anybody could look at and run with.

Balancing on three things

For any new mobile phone to crack the market, it has to balance three things. The first is achieving economies of scale: unless that is taken care of, however good or bad the product might be, it remains a niche and dies after some time. While Guido shared about Openmoko and the N900, one of the more interesting bits, from a user perspective at least, was the OLPC project. There are many nuances that the short article didn't go through. While I can't say for other countries, at least in India no education initiative happens without corruption, and perhaps Nicholas's hands were tied while other manufacturers would and could do whatever it took to achieve their sales targets. In India it flopped because there was no way for volunteers to buy or get an OLPC unless they were part of a school or college. There was some traction in FOSS communities, but that died down once OLPC did the partnership with MS-Windows, which proverbially broke the camel's back. FWIW, I think the idea, the concept, and even the machine were far ahead of their time.

The other two legs are support and warranty. Without going into details, I can tell you there were quite a few OLPC-type attempts using conventional laptops, or using Android and FOSS, or one of the mainstream distributions, but the problems have always been polish, training, and support. Guido talked about privacy as a winning feature, but failed to take into account that people want to know their privacy isn't being violated. If a mobile phone answers to 'Hey Google', does that mean it was passively gathering, storing, and sending info to third parties? We just don't know. A Debian phone could be part of the 'right to repair' profile, while at the same time forcing us to ask many questions about the way things currently are and are going to be. Six months down the line, the flagships of all companies are working on being able to send and share through satellites (satellite Internet), and perhaps a few non-flagships too. Of course, if you are going to use a satellite, you are going to drain the battery that much more quickly. In each and every event there are always gonna be tradeoffs.

The debian-mobile mailing list doesn't seem to have many takers. The latest post I could find there was written by Paul Wise. I am in a similar boat (Samsung; SM-M526B; Lahaina; arm64-v8a) v12. It is difficult to know which release would work on your machine, to make sure that the build from source is pristine and untainted, and you need a way to back up and restore if required. I even tried installing GNURoot Debian and the X server alternative they had shared, but was unable to use the touch interface on the fakeroot instance. The system talks about a back key, but what back key, I have no clue.

Precursor Events for DebConf 2023

As far as precursor events before DebConf 23 in India are concerned, all the festivals we have could be used to showcase Debian. In fact, the ongoing Ganesh Chaturthi would have been the perfect way to showcase Debian and apps suited to the audience. Even the festivals of Durga Puja, Diwali, etc. can be used. When commercial organizations use the same festivals, why can't we? What we would perhaps need to figure out is the funding part, as well as getting permissions from municipal authorities. One of the things we could do, for example, is buy either a 24″ monitor or a 34″ TV and use that to display Debian and apps; the bigger, the better. Something we could use day to day and also for events. This would require a significant amount of energy, so we could approach companies, small businesses, and individuals, both for volunteering as well as for help with funding.

Somebody asked how we could do online stuff, and why it is somewhat boring. What could be done, for example, is that instead of 4-5 hours of material, we break it into manageable 45-minute pieces. 4-5 hours is long and is gonna fatigue the best of people. Make it into 45-minute chunks and intersperse them with jokes, hacks, anecdotes, and war stories. People do not like or want to be talked down to, but rather conversed with. One of the things I saw many artists do is have shows that limit the audience to 20-24 people on a Zoom call or whatever videoconferencing system you have, and play with them. The passive audience enjoys the play between the stand-up guy and the 'crowd' he works with; some of them may be known to him personally, so he can push the envelope a bit more. The same thing can be applied here: share the passion, and share why we are doing something.

For example, you could run smem -t -k | less and give a whole talk about how memory is used and freed during a session, how things differ between desktop and ARM as far as memory architecture is concerned (if they do), what is being done on the hardware side, what is on the software side, and go on and on; see the sketch below. Then share about troubleshooting applications. Valgrind is super slow and makes life hell; is there some better app? It doesn't matter if you are a front-end or a back-end developer, you need to know this and figure out the best way to deal with it in your app or program. That would have a lot of value. And this is just one example to help trigger more ideas from the community. I am sure others probably have more fun ideas as to what can be done. I am stopping here now, otherwise I would just go on; till later. Feel free to comment and give feedback. I hope it generates some more thinking and excitement in the grey cells.
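To make the smem example above concrete, here is the kind of live demo that could anchor such a session. smem is packaged in Debian; -t appends a totals row and -k prints human-readable sizes, and the watch interval below is an arbitrary choice.

smem -t -k | less                      # per-process memory, with a totals row, in human units
watch -n 5 'smem -t -k | tail -n 5'    # watch the totals move as you start and stop applications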

Jonathan Dowland: Borg corrupted hints file

6 September, 2022 - 22:29

I've been using Borg backup for a couple of years and it has seemingly worked very well for me. One difference I really appreciate from my previous arrangement (rdiff-backup) is the freedom to move large files or file hierarchies around (including between different filesystems) without provoking large backup incrementals.

About a week ago I had my first real problem with Borg: backups started to fail with the following complaints:

Creating archive at "/backup/borg::{hostname}-home-jon-{now:%Y-%m-%dT%H:%M:%S.%f}"
segment 61916 not found, but listed in compaction data
[ further, similar lines ]
Local Exception
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4690, in main
    exit_code = archiver.run(args)
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4622, in run
    return set_ec(func(args))
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 177, in wrapper
    return method(self, args, repository=repository, **kwargs)
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 595, in do_create
    create_inner(archive, cache)
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 560, in create_inner
    archive.save(comment=args.comment, timestamp=args.timestamp)
  File "/usr/lib/python3/dist-packages/borg/archive.py", line 530, in save
    self.repository.commit()
  File "/usr/lib/python3/dist-packages/borg/repository.py", line 475, in commit
    self.compact_segments()
  File "/usr/lib/python3/dist-packages/borg/repository.py", line 835, in compact_segments
    assert segments[segment] == 0, 'Corrupted segment reference count - corrupted index or hints'
AssertionError: Corrupted segment reference count - corrupted index or hints

At about the same time I had managed to fill the backup host's root filesystem. I thought the two issues must be related. Although all the files Borg is backing up, and the backup repository it writes to, are located on different partitions, Borg's client-side of things does maintain some caching in /root/.cache/borg. My first idea was that this must have been corrupted by an aborted write, but zapping it did not cure the above problem.

It occurred to me that I run Borg via the convenience wrapper Borgmatic, and it was possible that was failing, but after a short investigation I ruled that out.

Various attempts at running borg check or borg check --repair didn't help either. The underlying filesystem (XFS) passed a filesystem check, there weren't any obvious complaints about IO errors from the kernel, and nothing was reported in the HDD's SMART data.

What did work, in the end, was removing the file matching $BORG_REPO/*hint* and trying again. Although this file is read and written on the backup partition, it seems filling the root partition caused Borg (1.1.16-3) to corrupt it.
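In concrete terms, the sequence that worked boils down to the following. The repository path is illustrative, and setting the hints file aside rather than deleting it outright is simply the cautious option:

export BORG_REPO=/backup/borg                    # illustrative repository path
ls "$BORG_REPO"                                  # transaction metadata lives in index.* and hints.* files
mkdir -p /root/borg-hints-quarantine
mv "$BORG_REPO"/hints.* /root/borg-hints-quarantine/
borg create "$BORG_REPO::{hostname}-test-{now}" /home/jon   # re-run a backup; Borg rebuilds the hints data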

Everything seems fine following that. I have recently started trying to semi-automatically verify backups on a monthly basis, on a machine independent from the NAS; all the tests I have written so far passed.

Junichi Uekawa: September.

4 September, 2022 - 09:28
September. Digging into why my apt-get doesn't complete.

James Valleroy: File sharing with bepasty

3 September, 2022 - 21:22

One of the apps running on my FreedomBox that I use frequently is bepasty. bepasty is essentially a self-hosted, free software pastebin. It allows you to paste text, or upload any type of file. You can also set an expiration date for when the file or text will automatically be deleted. If you are uploading multiple related files, you can organize them into a list.

bepasty does not have user accounts. Instead, it has shared passwords, where each password is linked to a set of permissions. There are five permissions: Read, List, Create, Delete, and Admin. (The meanings are mostly straightforward, except for Admin, which means the ability to lock and unlock files.) This allows very fine-grained control. For example, if you want someone to be able to upload files to your bepasty, but not view or download anything, then you can generate a password with only the “Create” permission, and give this password to the person who will be uploading files.

To simplify the initial setup in FreedomBox, we generate three passwords by default: one for viewers (List and Read), one for editors (List, Read, Create, and Delete), and one for admins (all permissions). In addition, when no password has been provided, the Read (but not List) permission is granted by default. This allows files to be easily shared by sending just their URLs (no password required). The URLs contain some random characters, so they are not easy to guess.
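These shared passwords map onto bepasty's PERMISSIONS setting; its configuration file is plain Python. The snippet below is only a sketch mirroring the FreedomBox defaults described above, with placeholder passwords rather than anything actually generated:

# bepasty configuration is a Python file; the passwords here are placeholders
DEFAULT_PERMISSIONS = 'read'    # no password given: fetch a known URL, but no listing

PERMISSIONS = {
    'viewer-secret': 'list,read',
    'editor-secret': 'list,read,create,delete',
    'admin-secret': 'list,read,create,delete,admin',
}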

I mostly use bepasty for moving files between systems, whether it's a physical machine or VPS, or a VM or container that I will use only briefly. Especially in the latter case, it’s nice that I don’t need to do any extra setup (such as copying SSH keys) before I copy my files over.

The bepasty package is available in Debian stable (with a newer version in stable-backports and testing). The many use-cases that it provides, and the well-maintained Debian packaging, made it a compelling choice for integration into FreedomBox, which has included bepasty for one-click installation since version 20.14.

Joachim Breitner: More recursive definitions

3 September, 2022 - 19:31

Haskell is a pure and lazy programming language, and the laziness allows us to write some algorithms very elegantly, by recursively referring to already calculated values. A typical example is the following definition of the Fibonacci numbers, as an infinite stream:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Elegant graph traversals

A maybe more practical example is the following calculation of the transitive closure of a graph:

import qualified Data.Set as S
import qualified Data.Map as M

type Graph = M.Map Int (S.Set Int)
transitive1 :: Graph -> Graph
transitive1 g = sets
  where
    sets :: M.Map Int (S.Set Int)
    sets = M.mapWithKey (\v vs -> S.insert v (S.unions [ sets M.! v' | v' <- S.toList vs ])) g

We represent graphs as maps from a vertex to its successor vertices, and define the resulting map sets recursively: the set of vertices reachable from a vertex v is v itself, plus those reachable by its successors vs, for which we query sets.

And, despite this apparent self-referential recursion, it works!

ghci> transitive1 $ M.map S.fromList $ M.fromList [(1,[3]),(2,[1,3]),(3,[])]
fromList [(1,fromList [1,3]),(2,fromList [1,2,3]),(3,fromList [3])]
Cyclic graphs ruin it all

These tricks can be very impressive … until someone tries to use them on a cyclic graph and the program just hangs until we abort it:

ghci> transitive1 $ M.map S.fromList $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,fromList ^CInterrupted.

At this point we are thrown back to implementing a more pedestrian graph traversal, typically keeping explicit track of vertices that we have seen already:

transitive2 :: Graph -> Graph
transitive2 g = M.fromList [ (v, go S.empty [v]) | v <- M.keys g ]
  where
    go :: S.Set Int -> [Int] -> S.Set Int
    go seen [] = seen
    go seen (v:vs) | v `S.member` seen = go seen vs
    go seen (v:vs) = go (S.insert v seen) (S.toList (g M.! v) ++ vs)

I have written that seen/todo recursion idiom so often in the past, I can almost write it blindly. And indeed, this code handles cyclic graphs just fine:

ghci> transitive2 $ M.map S.fromList $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,fromList [1,2,3]),(2,fromList [1,2,3]),(3,fromList [3])]

But this is a bit anticlimactic – Haskell is supposed to be a declarative language, and transitive1 declares my intent just fine!

We can have it all

It seems there actually is a way to write essentially the code in transitive1, and still get the right result in all cases, and I have just published a possible implementation as rec-def. In the module Data.Recursive.Set we find an API that resembles that of Set, with a type R (Set a), and in addition to conversion functions from and to sets, we find the two operations that we needed in transitive1:

rInsert :: Ord a => a -> R (Set a) -> R (Set a)
rUnions :: Ord a => [R (Set a)] -> R (Set a)
getR :: HasPropagator a => R a -> a

Let’s try that:

import Data.Recursive.Set

transitive2 :: Graph -> Graph
transitive2 g = M.map getR sets
  where
    sets :: M.Map Int (R (S.Set Int))
    sets = M.mapWithKey (\v vs -> rInsert v (rUnions [ sets M.! v' | v' <- S.toList vs ])) g

And indeed it works! Magic!

ghci> transitive2 $ M.map S.fromList $ M.fromList [(1,[3]),(2,[1,3]),(3,[])]
fromList [(1,fromList [1,3]),(2,fromList [1,2,3]),(3,fromList [3])]
ghci> transitive2 $ M.map S.fromList $ M.fromList [(1,[2,3]),(2,[1,3]),(3,[])]
fromList [(1,fromList [1,2,3]),(2,fromList [1,2,3]),(3,fromList [3])]

To show off some more, here are some small examples:

ghci> let s = rInsert 42 s in getR s
fromList [42]
ghci> :{
  let s1 = rInsert 23 s2
      s2 = rInsert 42 s1
  in getR s1
 :}
fromList [23,42]
How is that possible? Is it still Haskell?

The internal workings of the R (Set a) type will be the topic of a future blog post; let me just briefly mention that it uses unsafe features under the hood, and just keeps applying the equations you gave until a fixed point is reached. Because it starts with the empty set and all operations provided by Data.Recursive.Set are monotone (e.g. there is no difference operation), it will eventually find the least fixed point.

Despite the unsafe machinery under the hood, I claim that Data.Recursive.Set is itself nicely safe, and does not destroy Haskell’s nice properties like purity, referential transparency and equational reasoning. If you disagree, I’d like to hear about it (here, on Twitter, Reddit or Discourse)! There is a brief discussion at the end of the tutorial in Data.Recursive.Example.

More than sets

The library also provides Data.Recursive.Bool for recursive equations with booleans (preferring False) and Data.Recursive.DualBool (preferring True), and some operations like rMember :: Ord a => a -> R (Set a) -> R Bool can actually connect different types. I plan to add other data types (natural numbers, maps, Maybe, with suitable orders) as demand arises and as I come across nice small example use-cases for the documentation (e.g. finding shortest paths in a graph).

I believe this idiom is practically useful in a wide range of applications (which of course all have some underlying graph structure – but then almost everything in Computer Science is a graph). My original motivation was a program analysis. Imagine you want to find out from where in your program you can run into a division by zero. As long as your program does not have recursion, you can simply keep track of a boolean flag while you traverse the program, maintaining a mapping from function names to whether they can divide by zero – all nice and elegant. But once you allow mutually recursive functions, things become tricky. Unless you use R Bool! Simply use laziness, pass the analysis result down when analyzing the functions’ right-hand sides, and it just works!

Shirish Agarwal: Fantasy, J.R.R. Tolkien

3 September, 2022 - 12:02
J.R.R. Tolkien

Now unless you have been living under a rock, I am sure you know who Mr. Tolkien is. The gentleman passed away on 2nd September 1973 at the sprightly age of 81. And this gives fans like me an occasion to talk about fantasy, fantasy authors, and the love-hate relationship we have with them. For the record, I am currently reading Babylon Steel by Gaie Sebold. I won't go into many details (I never like to; if I enjoy a book, I want it to remain mysterious for the next person, so that they enjoy it as much as I did without any expectations). This book has plenty of sex, so I wouldn't recommend it for teenagers but rather for mature audiences, although for the life of me I couldn't find any rating on the book. I did come across Common Sense Media, but unfortunately it isn't well known beyond the people who use it. They sadly don't have a Google/Android app. (And before anybody comments, I know that Android is no longer interested in supporting FOSS; their loss, not ours, but that is entirely a blog post/article in itself.) So let's leave that aside for now.

Fantasy

So before talking about Mr. Tolkien and his creations, let's talk a bit about fantasy. We know for a fact that the conscious mind accounts for less than 5% of mental functioning, while the rest is handled by the subconscious and the unconscious mind (the three-mind model). So any thought or idea first germinates in either the unconscious or the subconscious part of the mind and then comes into the conscious mind. It is also the reason we dream: that's the subconscious and unconscious mind at work. While we mostly attach the word fantasy to books, it is all around us, and not just in prose: song, dance, and all sorts of creativity are fantasy. Even sci-fi actually comes from fantasy. Unfortunately, for reasons best known to themselves, people split sci-fi off and even divided fantasy into high fantasy and low fantasy. I am not going to go much into that, but here's a helpful link for those who might want to look more into it. Now the question arises, why do people write? I have asked this question many a time of the authors I have met, and the answers are as varied as they come. Two of the most common are the need to write (an itch they can't or won't control) and that it is extremely healing. In my own case, I found even writing mere blog posts unburdening. I believe this last part is what drove Mr. Tolkien and the story and arc that LOTR became.

Tolkien, LOTR, World War I

The casual reader might not know, but if you followed or were curious about Mr. Tolkien, you would have found out that he served in World War 1, also known as the Great War. It was supposed to be the war that ended all wars, but sadly it didn't. One of the things that set Mr. Tolkien apart from many of his peers was that he was very open about himself and corresponded with people far and wide. There is actually a book called 'The Letters of J.R.R. Tolkien' that I hope to get at one of the used book depots. That book spans about 480 pages and gives all the answers as to why Mr. Tolkien made Middle-earth the way he did. I sadly haven't had the opportunity to get it, and it is somewhat expensive. But I'm sure that if World War 1 hadn't happened, and Mr. Tolkien hadn't taken part and experienced what he experienced, we wouldn't have LOTR. I can bet that losing his friends and comrades, and the pain he felt for those around him, propelled him to write about a land and a race called Hobbits. I haven't done enough fantasy reading, but I do feel that his description of hobbits, the way they were and are, is unique. The names and surnames he used were for humor as well as to make a statement about them; having names such as Harfoots, Padfoot, Took, and others just wouldn't be for fun, would it? Also, the responses and behavior of the Hobbits in the four books are almost human-like. It is almost as if they are or were our cousins at one point in time, but we allowed ourselves to forget. Even the corruption of humans is shown, as well as self-doubt.

There is another part that I find fascinating: unlike most books, where there is a single hero, in LOTR we have many heroes and heroines. This again I would attribute to Mr. Tolkien and the heroism he saw on the battlefield and beyond it. All the tender emotions he shares with readers like us exist because either he himself or others around him were subjected to grace and wonderment. This is all I derive from the books; those who have 'The Letters of J.R.R. Tolkien', feel free to correct me. I was supposed to write this yesterday, but real life has its own ways.

I could go on and on, and perhaps at a later date or time I may expand on it, but it isn't a coincidence that 'The Lord of the Rings: The Rings of Power' started broadcasting on the same day that Mr. Tolkien died. In the very end, fantasy is something humans simply have, no matter how rich or poor they are. If one were to look, artists like Michelangelo and many others, who often didn't have enough for two square meals a day, were still somehow inspired to sketch models of airplanes and flying machines shockingly similar to the real thing. Many may not know that almost all primates, including apes and monkeys, as well as squirrels and even dolphins, dream, and all of them have elaborate, complex dreams just as we do. Sadly, this is not known by most people; otherwise we would be much more empathetic towards our cousins in the animal kingdom.

Kunal Mehta: Kiwix in Debian, 2022 update

2 September, 2022 - 10:06

Previous updates: 2018, 2021

Kiwix is an offline content reader, best known for distributing copies of Wikipedia. I have been maintaining it in Debian since 2017.

This year most of the work has been keeping all the packages up to date in anticipation of next year's Debian 12 Bookworm release, including several transitions for new libzim and libkiwix versions.

  • libzim: 6.3.0 → 8.0.0
  • zim-tools: 2.1.0 → 3.1.1
  • python-libzim: 0.0.3 → 1.1.1 (with a cherry-picked patch)
  • libkiwix: 9.4.1 → 11.0.0 (with DFSG issues fixed!)
  • kiwix-tools: 3.1.2 → 3.3.0
  • kiwix (desktop): 2.0.5 → 2.2.2

The Debian Package Tracker makes it really easy to keep an eye on all Kiwix-related packages.

All of the "user-facing" packages (zim-tools, kiwix-tools, kiwix) now have very basic autopkgtests that can provide a bit of confidence that the package isn't totally broken. I recommend reading the "FAQ for package maintainers" to learn about all the benefits you get from having autopkgtests.

Finally, back in March I wrote a blog post, How to mirror the Russian Wikipedia with Debian and Kiwix, which got significant readership (compared to most posts on this blog), including being quoted by LWN!

We are always looking for more contributors, please reach out if you're interested. The Kiwix team is one of my favorite groups of people to work with and they love Debian too.

John Goerzen: Dead USB Drives Are Fine: Building a Reliable Sneakernet

2 September, 2022 - 08:43

“OK,” you’re probably thinking. “John, you talk a lot about things like Gopher and personal radios, and now you want to talk about building a reliable network out of… USB drives?”

Well, yes. In fact, I’ve already done it.

What is sneakernet?

Normally, “sneakernet” is a sort of tongue-in-cheek reference to using disconnected storage to transport data or messages. By “disconnected storage” I mean anything like CD-ROMs, hard drives, SD cards, USB drives, and so forth. There are times when loading up 12TB on a device and driving it across town is just faster and easier than using the Internet for the same. And, sometimes you need to get data to places that have no Internet at all.

Another reason for sneakernet is security. For instance, if your backup system is online, and your systems being backed up are online, then it could become possible for an attacker to destroy both your primary copy of data and your backups. Or, you might use a dedicated computer with no network connection to do GnuPG (GPG) signing.

What about “reliable” sneakernet, then?

TCP is often considered a “reliable” protocol. That means that the sending side is generally able to tell if its message was properly received. As with most reliable protocols, we have these components:

  1. After transmitting a piece of data, the sender retains it.
  2. After receiving a piece of data, the receiver sends an acknowledgment (ACK) back to the sender.
  3. Upon receiving the acknowledgment, the sender removes its buffered copy of the data.
  4. If no acknowledgment is received at the sender, it retransmits the data, in case it gets lost in transit.
  5. It reorders any packets that arrive out of order, so that the recipient’s data stream is ordered correctly.

Now, a lot of the things I just mentioned for sneakernet are legendarily unreliable. USB drives fail, CD-ROMs get scratched, hard drives get banged up. Think about putting these things in a bicycle bag or airline luggage. Some of them are going to fail.

You might think, “well, I’ll just copy files to a USB drive instead of moving them, and once I get them onto the destination machine, I’ll delete them from the source.” Congratulations! You are a human retransmit algorithm! We should be able to automate this!

And we can.

Enter NNCP

NNCP is one of those things that almost defies explanation. It is a toolkit for building asynchronous networks. It can use as a carrier: a pipe, TCP network connection, a mounted filesystem (specifically intended for cases like this), and much more. It also supports multi-hop asynchronous routing and asynchronous meshing, but these are beyond the scope of this particular article.

NNCP’s transports that involve live communication between two hops already had all the hallmarks of being reliable; there was a positive ACK and retransmit. As of version 8.7.0, NNCP’s ACKs themselves can also be asynchronous – meaning that every NNCP transport can now be reliable.

Yes, that’s right. Your ACKs can flow over tapes and USB drives if you want them to.

I use this for archiving and backups.

If you aren’t already familiar with NNCP, you might take a look at my NNCP page. I also have a lot of blog posts about NNCP.

Those pages describe the basics of NNCP: the “packet” (the unit of transmission in NNCP, which can be tiny or many TB), the end-to-end encryption, and so forth. The new command we will now be interested in is nncp-ack.

The Basic Idea

Here are the basic steps to processing this stuff with NNCP:

  1. First, we use nncp-xfer -rx to process incoming packets from the USB (or other media) device. This moves them into the NNCP inbound queue, deleting them from the media device, and verifies the packet integrity.
  2. We use nncp-ack -node $NODE to create ACK packets responding to the packets we just loaded into the rx queue. It writes a list of generated ACKs onto fd 4, which we save off for later use.
  3. We run nncp-toss -seen to process the incoming queue. The use of -seen causes NNCP to remember the hashes of packets seen before, so a duplicate of an already-seen packet will not be processed twice. This command also processes incoming ACKs for packets we’ve sent out previously; if they pass verification, the relevant packets are removed from the local machine’s tx queue.
  4. Now, we use nncp-xfer -keep -tx -mkdir -node $NODE to send outgoing packets to a given node by writing them to a given directory on the media device. -keep causes them to remain in the outgoing queue.
  5. Finally, we use the list of generated ACK packets saved off in step 2 above. That list is passed to nncp-rm -node $NODE -pkt < $FILE to remove those specific packets from the outbound queue. The reason is that there will never be an ACK of an ACK packet (that would create an infinite loop), so if we didn’t delete them in this manner, they would hang around forever.

You can see these steps follow the same basic outline on upstream’s nncp-ack page.

One thing to keep in mind: if anything else is running nncp-toss, there is a chance of a race condition between steps 1 and 2 (if nncp-toss gets to it first, it might not get an ack generated). This would sort itself out eventually, presumably, as the sender would retransmit and it would be ACKed later.

Further ideas

NNCP guarantees the integrity of packets, but not ordering between packets; if you need that, you might look into my Filespooler program. It is designed to work with NNCP and can provide ordered processing.

An example script

Here is a script you might try for this sort of thing. It may have more logic than you need – really, you just need the steps above – but hopefully it is clear.

#!/bin/bash

set -eo pipefail

MEDIABASE="/media/$USER"

# The local node name
NODENAME="`hostname`"

# All nodes.  NODENAME should be in this list.
ALLNODES="node1 node2 node3"

RUNNNCP=""
# If you need to sudo, use something like RUNNNCP="sudo -Hu nncp"
NNCPPATH="/usr/local/nncp/bin"

ACKPATH="`mktemp -d`"

# Process incoming packets.
#
# Parameters: $1 - the path to scan.  Must contain a directory
# named "nncp".
procrxpath () {
    while [ -n "$1" ]; do
        BASEPATH="$1/nncp"
        shift
        if ! [ -d "$BASEPATH" ]; then
            echo "$BASEPATH doesn't exist; skipping"
            continue
        fi

        echo " *** Incoming: processing $BASEPATH"
        TMPDIR="`mktemp -d`"

        # This rsync and the one below can help with
        # certain permission issues from weird foreign
        # media.  You could just eliminate it and
        # always use $BASEPATH instead of $TMPDIR below.
        rsync -rt "$BASEPATH/" "$TMPDIR/"

        # You may need these next two lines if using sudo as above.
        # chgrp -R nncp "$TMPDIR"
        # chmod -R g+rwX "$TMPDIR"
        echo "     Running nncp-xfer -rx"
        $RUNNNCP $NNCPPATH/nncp-xfer -progress -rx "$TMPDIR"

        for NODE in $ALLNODES; do
                if [ "$NODE" != "$NODENAME" ]; then
                        echo "     Running nncp-ack for $NODE"

                        # Now, we generate ACK packets for each node we will
                        # process.  nncp-ack writes a list of the created
                        # ACK packets to fd 4.  We'll use them later.
                        # If using sudo, add -C 5 after $RUNNNCP.
                        $RUNNNCP $NNCPPATH/nncp-ack -progress -node "$NODE" \
                           4>> "$ACKPATH/$NODE"
                fi
        done

        rsync --delete -rt "$TMPDIR/" "$BASEPATH/"
        rm -fr "$TMPDIR"
    done
}


proctxpath () {
    while [ -n "$1" ]; do
        BASEPATH="$1/nncp"
        shift
        if ! [ -d "$BASEPATH" ]; then
            echo "$BASEPATH doesn't exist; skipping"
            continue
        fi

        echo " *** Outgoing: processing $BASEPATH"
        TMPDIR="`mktemp -d`"
        rsync -rt "$BASEPATH/" "$TMPDIR/"
        # You may need these two lines if using sudo:
        # chgrp -R nncp "$TMPDIR"
        # chmod -R g+rwX "$TMPDIR"

        for DESTHOST in $ALLNODES; do
            if [ "$DESTHOST" = "$NODENAME" ]; then
                continue
            fi

            # Copy outgoing packets to this node, but keep them in the outgoing
            # queue with -keep.
            $RUNNNCP $NNCPPATH/nncp-xfer -keep -tx -mkdir -node "$DESTHOST" -progress "$TMPDIR"

            # Here is the key: that list of ACK packets we made above - now we delete them.
            # There will never be an ACK for an ACK, so they'd keep sending forever
            # if we didn't do this.
            if [ -f "$ACKPATH/$DESTHOST" ]; then
                echo "nncp-rm for node $DESTHOST"
                $RUNNNCP $NNCPPATH/nncp-rm -debug -node "$DESTHOST" -pkt < "$ACKPATH/$DESTHOST"
            fi

        done

        rsync --delete -rt "$TMPDIR/" "$BASEPATH/"
        rm -rf "$TMPDIR"

        # We only want to write stuff once.
        return 0
    done
}

procrxpath "$MEDIABASE"/*

echo " *** Initial tossing..."

# We make sure to use -seen to rule out duplicates.
$RUNNNCP $NNCPPATH/nncp-toss -progress -seen

proctxpath "$MEDIABASE"/*

echo "You can unmount devices now."

echo "Done."

This post is also available on my website, where it may be periodically updated.

Emmanuel Kasper: OpenShift vs. AWS product mapping

1 September, 2022 - 14:45

If you know the Amazon Web Services portfolio, and you are interested in OpenShift or the OKD OpenShift community distribution, this is a table of corresponding technologies.

OpenShift is Red Hat’s Kubernetes distribution: it is basically upstream Kubernetes delivered with monitoring, logging, CI/CD, an underlying OS, and tested upgrade paths not found with a manual kubernetes.io kubeadm install.

AWS                                       | OpenShift                                                | OpenShift upstream project
------------------------------------------|----------------------------------------------------------|---------------------------
AWS CloudTrail                            | Kubernetes API Server audit log                          | Kubernetes
Amazon CloudWatch                         | OpenShift Monitoring                                     | Prometheus
AWS Artifact                              | Compliance Operator                                      | OpenSCAP
AWS Trusted Advisor                       | Insights                                                 |
AWS Marketplace                           | OpenShift Operator Hub                                   |
AWS Identity and Access Management (IAM)  | Red Hat SSO                                              | Keycloak
AWS Elastic Beanstalk                     | OpenShift Source2Image (S2I)                             | Source2Image (S2I)
AWS S3                                    | ODF Rados Gateway                                        | Rook RGW
AWS Elastic Block Storage                 | ODF Rados Block Device                                   | Rook RBD
AWS Elastic File System                   | ODF Ceph FS                                              | Rook CephFS
Amazon Simple Notification Service        | OpenShift Streams for Apache Kafka                       | Apache Kafka
Amazon GuardDuty                          | API Server audit log review, ACS Runtime detection       | Stackrox
Amazon Inspector                          | Quay.io container scanner, ACS Vulnerability Assessment  | Clair, Stackrox
AWS Lambda                                | OpenShift Serverless*                                    | Knative
AWS Key Management System                 | could be done with Hashicorp Vault                       | Vault
AWS WAF                                   | NGINX Ingress Controller Operator with ModSecurity       | NGINX ModSecurity
Amazon ElastiCache                        | Redis Enterprise Operator                                | Redis, memcached as alternative
AWS Relational Database Service           | Crunchy Data Operator                                    | PostgreSQL

* OpenShift Serverless requires the application to be packaged as a container, something AWS Lambda does not require.

Shirish Agarwal: Culture, Books, Friends

1 September, 2022 - 12:59
Culture

Just before I start, I would like to point out that this post may be NSFW. Then again, what counts as SFW (Safe For Work) or NSFW depends so much on culture and our perception of it, wherever we are or wherever we were born. But still, to be on the safe side, I have marked it NSFW. Now, there have been a few statements and ideas lately that gave me pause. This will be a somewhat chaotic blog post, as I am in such a phase today.

For example, while I do not know which culture or country this comes from, somebody shared that in some cultures one can say 'May your poop be easy' with a straight face. I dunno which culture this is, but if somebody said that to me I would just die laughing, or maybe poop right there. I could understand it for a constipated person, but a whole culture? Unless their DNA is really screwed up, I don't think so, but then what do I know? I do know that we shit when we have extreme reactions of either joy or fear. IIRC, this comes from the mammalian response to dangerous situations, and we kept it as humans evolved. I would really be interested to know which culture that is. I did come to know that the Japanese wish that you may not experience hard work, or something to that effect, while ironically they themselves are going extinct due to hard work and not enough relaxation; toxic workplaces are common in Japan according to social scientists and population experts.

Another term that I couldn't figure out is 'The Florida Man strikes again', which is usually used when somebody does something stupid or weird. While it is used exclusively in the American context, I am curious to know how it came about. Does Florida really have such people, or is it an exaggeration? I have heard the phrase 'What happens in Vegas, stays in Vegas'; I think Vegas is also called Sin City, although why just Vegas is beyond me.

Omron-8712 blood pressure machine

I felt so stupid. I found another e-commerce site called Wellness Forever. They had the blood pressure machine I wanted, an Omron-8712. I bought it online and they delivered it within half an hour. Amazon took six days and in the end didn't deliver it at all.

I tried taking measurements with it yesterday. I have yet to figure out what it all means, but I did get readings of 109 SYS, 88 DIA, and a pulse of 72. The pulse, I guess, is normal; the others I just don't know. If only I had known this a couple of months ago. I was able to register the product as well as download and use the Omron Connect app. For roughly INR 2.5k you have a sort of health monitoring system. It isn't a Star Trek tricorder in any shape or form, but it will have to do while the tricorder gets invented. And while we are on the subject, let's not forget Elizabeth Holmes and the scam called Theranos. It really is something to see how Elizabeth Holmes modeled so much of herself on Steve Jobs, mimicking how he left college/education halfway. A part of me is sad that Theranos is not real. Joe Scott shared some perspectives on the same just a few days ago. The idea in itself is pretty seductive, to say the least, and that is the reason the scam went on for more than a decade, and perhaps would have gone on longer if some people hadn't gotten the truth out.

I do see something like that potentially coming about as A.I. takes a bigger role in automating testing. Half a decade to a decade from now, who knows if there is an algorithm that is able to do what is needed? If such a product were to come to the marketplace at a decent price, it would revolutionize medicine, especially in countries like India, South Africa, and all sorts of remote places, particularly with all sorts of off-grid technologies coming and maturing in the marketplace. Before I forget, there is a game called 'Cell' on Android that tells the story of the evolution of life on Earth. It also lends credence to the idea that life has arisen six times on Earth and has been destroyed multiple times by asteroids. It is in the idle game format, so you can see the humble beginnings from the primordial soup through various kinds of cells and bacteria to, finally, a mammal. This is where I am, with a long way to go.

Indian Bureaucracy

One of the few things the British gave to India is the bureaucracy, and the bureaucracy tests us in myriad ways. It will be a full two months on 5th September, and I haven't yet got a death certificate. I need that for any number of things. The same goes for a disability certificate. What is and was interesting is my trip to the big local hospital called Sassoon Hospital. My mum had shared incidents from the 1950s, when she and the family had come to Pune. According to her, while Sassoon was the place to be, it was big and chaotic and you never knew where you were going. That was in 1950; I had the same experience in 2022. The adage 'the more things change, the more they remain the same' seems to hold true for Sassoon Hospital.

Btw, for those of you who think the Devil exists: he is totally a fallacy. There is a popular myth that the devil comes to deal with you when somebody close to you passes. I was waiting desperately for him when mum passed; any deal that he/she/they offered, I would have gladly taken, but all my waiting was for nothing. While I believe evil exists, it is manifested by humans and nobody else. The whole idea and story of the devil is just to control young children, and nothing beyond that.

DebConf 2023, friends, jpegoptim, and EVs

Quite a number of friends went to Albania this year, as India won the right to host DebConf for the year 2023. While I did lurk on the DebConf orga IRC channel, I'm not sure how helpful I would be currently. One piece of news that warmed my heart is that some people will be coming to India well in advance to check the site and make sure things go smoothly. Nothing like having more eyes (in this case, bodies) to throw at a problem, and hopefully it will be sorted. While I have not been working for the last couple of years, one of the things I have had to do, and have been doing, is move a lot of stuff online. This is in part due to the Government's own intention of having everything on the cloud. One thing I have probably shared more than enough times is that the storage most of these sites give is stuck in the 1990s. I tried jpegoptim, and while it works, it degrades the quality of the image quite a bit; an example invocation follows below. The whole thing seems backward, especially as newer smartphones capture more data per picture (megapixel resolution), case in point the Samsung Galaxy A04 that is being introduced. But this is not only about newer phones: even my earlier phone, a Samsung J5 that I bought in 2016, took images at 5 MB. So it is not a new issue but a continuous one. And almost all Govt. sites have the upper limit fixed at 1 MB. This is not limited to Govt. sites alone; most sites in India are somewhat frozen in the 1990s. And it isn't as if resources for designing web pages using HTML5, CSS3, JavaScript, Python, or Java aren't available. If worse comes to worst, one can even use AMP to make one's point. But this is only if they want to do stuff. I will be sharing a few photos with commentary; there are still places where I can put photos apart from social media.
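For reference, this is the kind of invocation I mean; the filename and the 1 MB target below are illustrative, chosen to match the upload limits mentioned above.

jpegoptim --size=1000k photo.jpg   # recompress until the file fits under roughly 1 MB (lossy)
jpegoptim -m80 photo.jpg           # or cap the JPEG quality factor at 80 instead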

Friends

Last week, Saturday suddenly all the friends decided to show up. I have no clue one way or the other why but am glad they showed up.

1. Mahendra, Akshat, Shirish and Sagar Sukhose (Mangesh's friend) at Bal Gandharva.
2. Electric scooter, as shared by Akshat, seen in Albania.
3. Somebody making a real-life replica of Wall Street on F.C. Road (commercial, all glass).
4. Ganesh idol near my house.
5. Wearing new clothes.

I will have to be a bit rapid about what I am sharing above so here goes nothing –

1. The first picture shows Mahendra, Akshat, me, and Sagar Sukhose (Mangesh's friend); it was taken by Mangesh Diwate. We talked quite a bit about various things that could be done in Debian. One of the things I shared was bringing more stuff from BSD to Debian; I am sure there's still quite a lot of security software that could be advantageous to have in Debian. The best person to talk to or to guide this would undoubtedly be Paul Wise, or as he is affectionately called, Pabs. He is one of the shy ones and yet knows so much about how things work; the one and only time I met him was in 2016. The other thing we talked about is porting Debian to one of the phones. This has been done in the past, by a Puneite some 4-5 years back. While I don't recollect the gentleman's name, I remember that the porting was done on a Motorola phone, as that was the easiest to do; he had tried some other mobile but that didn't work. Making Debian available on a phone is hard work. Just to get an idea, I went to the xda-developers forum and found out that while the M51 has been added, my specific phone model is not there: a Samsung Galaxy M52G (samsung; SM-M526B; lahaina; arm64-v8a) v12. You look at the chat and you understand how difficult the process might be. One of the other ideas that Akshat pitched was Debian Astro, something that is close to the heart of many, including me. I also proposed having some kind of web app where we can find and share the various astronomy and related projects done by various agencies. While there is a NASA app, nothing comes close to JSR, and that site just shares facts, no speculation. There are so many projects being done by the EU, JAXA, ISRO, and even Middle Eastern countries, but other than the people who follow some of the developments, we hear almost nothing. Even the Chinese have made long strides, but most people know nothing about them. And it's sad that those developments are not being known, shared, or even speculated about as much as, say, NASA or SpaceX. How we would go about it, and how we would get people to contribute or ask questions around it, would be interesting.

2. The second picture was shared by Akshat. He was describing how in Albania people are moving around on these electric 'scooters'; I dunno if that is the right word for them or not. I had heard from a couple of friends who went to Vietnam a few years ago how most people there had modified their scooters, with snaking lines of electric wires charging them. I have no clue whether those were closer to a Vespa or to something like the one above. In India, the Govt. is in partnership with the oil, gas, and coal mafia, just as it was in Australia (where the new Govt. is making changes). With the humongous profits that the oil sector provides the petro-states and others, corruption is bound to happen. We talk, and that's the extent of things.

3. The third picture is from a nearby area called F.C. Road, or Fergusson College Road. The area has come up quite sharply (commercially) in the last few years. Apparently, Mr. Kushal is making a real-life replica of Wall Street, which will be let to commercial tenants. Right now the real-estate market is tight in India; we will know how things pan out in the next few years.

4. Number four is an image of a Ganesh idol near my house. There is a 10-day festival of the elephant god that people celebrate every year. For the last couple of years, because of the pandemic, people were unable to celebrate the festival as it is meant to be celebrated. This time some people are going overboard, while others are cautious, and rightfully so.

5. Last but not least, one of the things people do at this celebration is wear new clothes, so I shared a photo of a gentleman who had bought and was wearing new ones. While most countries around the world are similar, Latin America is very similar to India in many ways, especially regarding religious activities; perhaps Gunnar can share. The elephant god is known for his penchant for sweets, as can be seen from his rounded stomach, and that is also how he is celebrated. He is known to make problems disappear, or that is supposed to be his thing. We do have something like 4 billion gods, so each one has to be given some work or quality to justify the same.

Russ Allbery: Summer haul

1 September, 2022 - 12:26

It's been a while since I posted one of these! Or, really, much of anything else. I've been busy and distracted this summer and am a bit behind on a wide variety of things at the moment, although thankfully not in a bad way.

Sara Alfageeh & Nadia Shammas — Squire (graphic novel)
Travis Baldree — Legends & Lattes (sff)
Leigh Bardugo — Six of Crows (sff)
Miles Cameron — Artifact Space (sff)
Robert Caro — The Power Broker (nonfiction)
Kate Elliott — Servant Mage (sff)
Nicola Griffith — Spear (sff)
Alix E. Harrow — A Mirror Mended (sff)
Tony Judt — Postwar (nonfiction)
T. Kingfisher — Nettle & Bone (sff)
Matthys Levy & Mario Salvadori — Why Buildings Fall Down (nonfiction)
Lev Menand — The Fed Unbound (nonfiction)
Courtney Milan — Trade Me (romance)
Elie Mystal — Allow Me to Retort (nonfiction)
Quenby Olson — Miss Percy's Pocket Guide (sff)
Anu Partanen — The Nordic Theory of Everything (nonfiction)
Terry Pratchett — Hogfather (sff)
Terry Pratchett — Jingo (sff)
Terry Pratchett — The Last Continent (sff)
Terry Pratchett — Carpe Jugulum (sff)
Terry Pratchett — The Fifth Elephant (sff)
Terry Pratchett — The Truth (sff)
Victor Ray — On Critical Race Theory (nonfiction)
Richard Roberts — A Spaceship Repair Girl Supposedly Named Rachel (sff)
Nisi Shawl & Latoya Peterson (ed.) — Black Stars (sff anthology)
John Scalzi — The Kaiju Preservation Society (sff)
James C. Scott — Seeing Like a State (nonfiction)
Mary Sisson — Trang (sff)
Mary Sisson — Trust (sff)
Benjanun Sriduangkaew — And Shall Machines Surrender (sff)
Lea Ypi — Free (nonfiction)

It's been long enough that I've already read and reviewed some of these. Already read and pending review are the next two Pratchett novels in my slow progress working through them; I had to catch up with the Tor.com re-read series.

So many books and quite definitely not enough time at the moment, although I've been doing better at reading this summer than last summer!

Paul Wise: FLOSS Activities August 2022

1 September, 2022 - 11:32
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes
Issues
Debugging
  • Did extensive debugging on a libpst issue but failed to figure out its cause. It seems to be related to a change to freopen in glibc that fixed compatibility with POSIX.
Review
  • FOSSjobs: approved postings
  • Spam: reported 5 Debian bug reports and 23 Debian mailing list posts
  • Debian packages: sponsored psi-notify (twice)
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved bible-kjv edb-debugger lifeograph links mu-editor unattended-upgrades
    • rejected apt-listchanges/apt-listdifferences (semi-related log file), steam-devices (package description), myspell-es/lighttpd (selfie), fraqtive (Windows), wireguard (logo), kde-telepathy-contact-list (mobile hacking app)
Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages orage, scap-security-guide, libdatetime-format-datemanip-perl
  • Debian IRC: disable anti-spam channel modes for some channels
  • Debian servers: investigate full filesystems
  • Debian wiki: unblock IP addresses, approve accounts, ping accounts with bouncing email
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC.
Sponsors

The sptag, libpst, purple-discord, circuitbreaker work was sponsored. All other work was done on a volunteer basis.

Russell Coker: Links Aug 2022

31 August, 2022 - 20:06

Armor is an interesting technology from Manchester University for stopping rowhammer attacks on DRAM [1]. Unfortunately “armor” is a term used for DRAM that looks fancy for ricers so finding out whether it’s used in production is difficult.

The Reckless Limitless Scope of Web Browsers is an insightful analysis of the size of web specs and why it’s impossible to implement them properly [2].

Framework is a company that makes laptop kits you can assemble and upgrade, interesting concept [3]. I’ll keep buying second hand laptops for less than $400 but if I wanted to spend $1000 then I’d consider one of these.

FS has an insightful article about why unstructured job interviews (i.e. the vast majority of job interviews) give bad results [4].

How a child killer inspired Ayn Rand and indirectly conservatives all around the world [5]. Ayn Rand’s love of a notoriously sadistic child killer is well known, but this article has a better discussion of it than most.

60 Minutes had an interesting article on “Foreign Accent Syndrome” where people suddenly sound like they are from another country [6]. It’s an 18 minute video but worth watching. Most Autistic people have experience of people claiming that they must be from another country because of the way they speak. That differences in brain function can lead to differences in perceived accent is nothing new.

The IEEE has an interesting article about the creation of the i860, the first million-transistor chip [7].

The Game of Trust is an interactive web site demonstrating the game theory behind trusting other people [8].

Here’s a choose your own adventure game in Twitter (Nitter is a non-tracking proxy for Twitter) [9], can you get your pawn elected Emperor of the Holy Roman Empire?

Related posts:

  1. Links July 2022 Darren Hayes wrote an interesting article about his battle with...
  2. Links Jan 2022 Washington Post has an interesting article on how gender neutral...
  3. Links May 2022 dontkillmyapp.com is a web site about Android phone vendors who...

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2022

31 August, 2022 - 16:43

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

There were no major updates on running projects.
Two projects (1, 2) are now in the pipeline.
The Tryton project is in a review phase, while the Gradle project is still a work in progress.

In July, we put aside 2389 EUR to fund Debian projects.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In July, 14 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 0.00h (out of 14.00h assigned, thus carrying over 14.00h to the next month).
  • Andreas Rönnquist did 0.00h (out of 0.00h assigned and 10.50h from previous period, thus carrying over 10.50h to the next month).
  • Anton Gladky did 23.00h (out of 25.00h assigned, thus carrying over 2.00h to the next month).
  • Ben Hutchings did 3.00h (out of 24.00h assigned, thus carrying over 21.00h to the next month).
  • Dominik George did 0.00h (out of 0.00h assigned and 22.17h from previous period, thus carrying over 22.17h to the next month).
  • Emilio Pozuelo Monfort did 72.00h (out of 35.75h assigned).
  • Enrico Zini did 0.00h (out of 0.00h assigned and 8.00h from previous period, thus carrying over 8.00h to the next month).
  • Markus Koschany did 35.75h (out of 35.75h assigned).
  • Ola Lundqvist did 8.00h (out of 0.00h assigned and 12.00h from previous period, thus carrying over 4.00h to the next month).
  • Roberto C. Sánchez did 14.25h (out of 29.25h assigned and 2.75h from previous period, thus carrying over 17.75h to the next month).
  • Stefano Rivera did 8.00h (out of 6.25h assigned and 20.75h from previous period, thus carrying over 19.00h to the next month).
  • Sylvain Beucler did 3.50h (out of 35.75h assigned, thus carrying over 32.25h to the next month).
  • Thorsten Alteholz did 20.00h (out of 35.75h assigned).
  • Utkarsh Gupta did not report back about their work so we assume they did nothing (out of 35.75 available hours, thus carrying them over to the next month).
Evolution of the situation

In July, we released 3 DLAs. July was a transition period: Debian Stretch had already moved to ELTS status, while Debian Buster was still in the hands of the security team. Many LTS members used this time to update internal infrastructure and documentation and to close some internal tickets. Now we are ready to take the next release into our hands: Buster!

Thanks to our sponsors

Sponsors that joined recently are in bold.

Wouter Verhelst: Not currently uploading

31 August, 2022 - 06:22

A notorious ex-DD decided to post garbage on his site in which he links my name to the suicide of Frans Pop, and mentions that my GPG key is currently disabled in the Debian keyring, along with some manufactured screenshots of the Debian NM site that allegedly show I'm no longer a DD. I'm not going to link to the post -- he deserves to be ridiculed, not given attention.

Just to set the record straight, however:

Frans Pop was my friend. I never treated him with anything but respect. I do not know why he chose to take his own life, but I grieved for him for a long time. It saddens me that Mr. Notorious believes it a good idea to drag Frans' name through the mud like this, but then, one can hardly expect anything else from him by this point.

Although his post is mostly garbage, there is one bit of information that is correct, and that is that my GPG key is currently no longer in the Debian keyring. Nothing sinister is going on here, however; the simple fact of the matter is that I misplaced my OpenPGP key card, which means there is a (very very slight) chance that a malicious actor (like, perhaps, Mr. Notorious) would get access to my GPG key and abuse that to upload packages to Debian. Obviously we can't have that -- certainly not from him -- so for that reason, I asked the Debian keyring maintainers to please disable my key in the Debian keyring.

I've ordered new cards; as soon as they arrive I'll generate a new key and perform the necessary steps to get my new key into the Debian keyring again. Given that shipping key cards to South Africa takes a while, this has taken longer than I would have initially hoped, but I'm hoping at this point that by about halfway through September this hurdle will have been cleared, meaning I will be able to exercise my rights as a Debian Developer again.

As for Mr. Notorious, one can only hope he will get the psychiatric help he very obviously needs, sooner rather than later, because right now he appears to be more like a goat yelling in the desert.

Ah well.

Jonathan Dowland: Venineth

30 August, 2022 - 18:25

My turntable is temporarily out of action so this post is in lieu of a crate digging update.

Some time ago, I saw Matt Hoye raving about a little indie game called Venineth. I think he said it was the perfect little game to relax to. I don’t regularly play games but I could sometimes do with some help relaxing. I was reminded about Venineth when I saw some buzz about Exo One, a similar-looking game, so I thought I’d give Venineth a go.

A scene near the start of Venineth

It’s great fun: Relaxing, mildly challenging, and satisfying. It didn’t run terribly well on my T470s but I’ve since upgraded to a workstation which should handle it easily so I need to try it again.

The reason this is a substitute for a crate digging post: although I’ve not picked Venineth back up in a while, I have had the soundtrack on reasonably heavy rotation. It’s an immersive, synth-driven instrumental album by the Polish musician Echo Ray that is good music to listen to when concentrating.

John Goerzen: The PC & Internet Revolution in Rural America

30 August, 2022 - 08:22

Inspired by several others (such as Alex Schroeder’s post and Szczeżuja’s prompt), as well as a desire to get this down for my kids, I figure it’s time to write a bit about living through the PC and Internet revolution where I did: outside a tiny town in rural Kansas. And, as I’ve been back in that same area for the past 15 years, I reflect some on the challenges that continue to play out.

Although the stories from the others were primarily about getting online, I want to start by setting some background. Those of you that didn’t grow up in the same era as I did probably never realized that a typical business PC setup might cost $10,000 in today’s dollars, for instance. So let me start with the background.

Nothing was easy

This story begins in the 1980s. Somewhere around my Kindergarten year of school, around 1985, my parents bought a TRS-80 Color Computer 2 (aka CoCo II). It had 64K of RAM and used a TV for display and sound.

That got you the computer and nothing else: no disk drive, no joysticks (which a number of games required). With no persistent storage, whenever the system powered down, or it hung and you had to power cycle it – a frequent event – you’d lose whatever you were doing and would have to re-enter the program, literally by typing it in.

The floppy drive for the CoCo II cost more than the computer, and it was quite common for people to buy the computer first and then the floppy drive later when they’d saved up the money for that.

I particularly want to mention that computers then didn’t come with a modem. That would be like buying a laptop or a tablet without wifi today. A modem, which I’ll talk about in a bit, was another expensive accessory. To cobble together a system in the 80s that was capable of talking to others – with persistent storage (floppy or hard drive), screen, keyboard, and modem – would be quite expensive. Adjusted for inflation, if you’re talking a PC-style device (a clone of the IBM PC that ran DOS), this would easily be more expensive than the Macbook Pros of today.

Few people back in the 80s had a computer at home. And the portion of those that had even the capability to get online in a meaningful way was even smaller.

Eventually my parents bought a PC clone with 640K RAM and dual floppy drives. This was primarily used for my mom’s work, but I did my best to take it over whenever possible. It ran DOS and, despite its monochrome screen, was generally a more capable machine than the CoCo II. For instance, it supported lowercase. (I’m not even kidding; the CoCo II pretty much didn’t.) A while later, they purchased a 32MB hard drive for it – what luxury!

Just getting a machine to work wasn’t easy. Say you’d bought a PC, and then bought a hard drive, and a modem. You couldn’t just plug in the hard drive and have it work. You would have to fight it every step of the way. The BIOS and DOS partition tables of the day used a cylinder/head/sector (CHS) method of addressing the drive, and various fields in those addresses had too few bits to work with the “big” drives of the day above 20MB. So you would have to lie to the BIOS and fdisk in various ways, and sort of work out how to do it for each drive. For each peripheral – serial port, sound card (in later years), etc. – you’d have to set jumpers for DMA and IRQs, hoping not to conflict with anything already in the system. Perhaps you can now start to see why USB and PCI were so welcomed.
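
The numbers make the bit-starvation concrete. Here is a back-of-the-envelope sketch in Python; the field widths are the commonly cited ones, not necessarily the exact geometry of any drive we owned:

    # Back-of-the-envelope CHS capacity math (illustrative field widths only).
    SECTOR_BYTES = 512

    def chs_capacity(cylinders, heads, sectors):
        """Maximum addressable bytes for a given cylinder/head/sector geometry."""
        return cylinders * heads * sectors * SECTOR_BYTES

    # Early DOS counted partition sectors in 16 bits, hence the famous 32MB wall:
    print(2**16 * SECTOR_BYTES // 2**20, "MiB")          # -> 32 MiB

    # Later, the intersection of the BIOS INT 13h and IDE field widths capped
    # drives at roughly 504 MiB (1024 cylinders x 16 heads x 63 sectors):
    print(chs_capacity(1024, 16, 63) // 2**20, "MiB")    # -> 504 MiB

Lying to the BIOS about the geometry – fewer cylinders, more heads – kept the fields in range, which is exactly the per-drive fiddling described above.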

Sharing and finding resources

Despite the two computers in our home, it wasn’t as if software written on one machine just ran on another. A lot of software for PC clones assumed a CGA color display. The monochrome HGC in our PC wasn’t particularly compatible. You could find a TSR program to emulate the CGA on the HGC, but it wasn’t particularly stable, and there’s only so much you can do when a program that assumes color is displayed on a monitor that can only show black, dark amber, or light amber.

So I’d periodically get to use other computers – most commonly at an office in the evening when it wasn’t being used.

There were some local computer clubs that my dad took me to periodically. Software was swapped back then; disks copied, shareware exchanged, and so forth. For me, at least, there was no “online” to download software from, and selling software over the Internet wasn’t a thing at all.

Three Different Worlds

There were sort of three different worlds of computing experience in the 80s:

  1. Home users. Initially using a wide variety of software from Apple, Commodore, Tandy/RadioShack, etc., but eventually coming to be mostly dominated by IBM PC clones
  2. Small and mid-sized business users. Some of them had larger minicomputers or small mainframes, but most that I had contact with by the early 90s were standardized on DOS-based PCs. More advanced ones had a network running Netware, most commonly. Networking hardware and software was generally too expensive for home users to use in the early days.
  3. Universities and large institutions. These are the places that had the mainframes, the earliest implementations of TCP/IP, the earliest users of UUCP, and so forth.

The difference between the home computing experience and the large institution experience was vast. Not only in terms of dollars – the large institution hardware could easily cost anywhere from tens of thousands to millions of dollars – but also in terms of sheer resources required (large rooms, enormous power circuits, support staff, etc). Nothing was in common between them; not operating systems, not software, not experience. I was never much aware of the third category until the differences started to collapse in the mid-90s, and even then I only was exposed to it once the collapse was well underway.

You might say to me, “Well, Google certainly isn’t running what I’m running at home!” And, yes of course, it’s different. But fundamentally, most large datacenters are running on x86_64 hardware, with Linux as the operating system, and a TCP/IP network. It’s a different scale, obviously, but at a fundamental level, the hardware and operating system stack are pretty similar to what you can readily run at home. Back in the 80s and 90s, this wasn’t the case. TCP/IP wasn’t even available for DOS or Windows until much later, and when it was, it was a clunky beast that was difficult to set up and use.

One of the things Kevin Driscoll highlights in his book Modem World – see my short post about it – is that the history of the Internet we usually receive focuses on case 3: the large institutions. In reality, the Internet was and is literally a network of networks. Gateways to and from the Internet existed for all three kinds of users for years, and while TCP/IP ultimately won the battle of internetworking protocols, the other two streams of users also shaped the Internet as we now know it. Like many, I had no access to the large institution networks, but as I’ve been reflecting on my experiences, I’ve found a new appreciation for the way that those of us who grew up with primarily home PCs also shaped the evolution of today’s online world.

An Era of Scarcity

I should take a moment to comment about the cost of software back then. A newspaper article from 1985 comments that WordPerfect, then the most powerful word processing program, sold for $495 (or $219 if you could score a mail order discount). That’s $1360/$600 in 2022 money. Other popular software, such as Lotus 1-2-3, was up there as well. If you were to buy a new PC clone in the mid to late 80s, it would often cost $2000 in 1980s dollars. Now add a printer – a low-end dot matrix for $300 or a laser for $1500 or even more. A modem: another $300. So the basic system would be $3600, or $9900 in 2022 dollars. If you wanted a nice printer, you’re now pushing well over $10,000 in 2022 dollars.

You start to see one barrier here, and also why things like shareware and piracy – if it was indeed even recognized as such – were common in those days.

So you can see that going from a home computer setup (TRS-80, Commodore C64, Apple ][, etc) to a business-class PC setup was an order of magnitude increase in cost. From there to the high-end minis/mainframes was another order of magnitude (at least!) increase. Eventually there was price pressure on the higher end and things all got better, which is probably why the non-DOS PCs lasted until the early 90s.

Increasing Capabilities

My first exposure to computers in school was in the 4th grade, when I would have been about 9. There was a single Apple ][ machine in that room. I primarily remember playing Oregon Trail on it. The next year, the school added a computer lab. Remember, this is a small rural area, so each graduating class might have about 25 people in it; this lab was shared by everyone in the K-8 building. It was full of some flavor of IBM PS/2 machines running DOS and Netware. There was a dedicated computer teacher too, though I think she was a regular teacher that was given somewhat minimal training on computers. We were going to learn typing that year, but I did so well on the very first typing program that we soon worked out that I could do programming instead. I started going to school early – these machines were far more powerful than the XT at home – and worked on programming projects there.

Eventually my parents bought me a Gateway 486SX/25 with a VGA monitor and hard drive. Wow! This was a whole different world. It may have come with Windows 3.0 or 3.1 on it, but I mainly remember running OS/2 on that machine. More on that below.

Programming

That CoCo II came with a BASIC interpreter in ROM. It came with a large manual, which served as a BASIC tutorial as well. The BASIC interpreter was also the shell, so literally you could not use the computer without at least a bit of BASIC.

Once I had access to a DOS machine, it also had a basic interpreter: GW-BASIC. There was a fair bit of software written in BASIC at the time, but most of the more advanced software wasn’t. I wondered how these .EXE and .COM programs were written. I could find vague references to DEBUG.EXE, assemblers, and such. But it wasn’t until I got a copy of Turbo Pascal that I was able to do that sort of thing myself. Eventually I got Borland C++ and taught myself C as well. A few years later, I wanted to try writing GUI programs for Windows, and bought Watcom C++ – much cheaper than the competition, and it could target Windows, DOS (and I think even OS/2).

Notice that, aside from BASIC, none of this was free, and none of it was bundled. You couldn’t just download a C compiler, or Python interpreter, or whatnot back then. You had to pay for the ability to write any kind of serious code on the computer you already owned.

The Microsoft Domination

Microsoft came to dominate the PC landscape, and then even the computing landscape as a whole. IBM very quickly lost control over the hardware side of PCs as Compaq and others made clones, but Microsoft has managed – in varying degrees even to this day – to keep a stranglehold on the software, and especially the operating system, side. Yes, there was occasional talk of things like DR-DOS, but by and large the dominant platform came to be the PC, and if you had a PC, you ran DOS (and later Windows) from Microsoft.

For a while, it looked like IBM was going to challenge Microsoft on the operating system front; they had OS/2, and when I switched to it sometime around the version 2.1 era in 1993, it was unquestionably more advanced technically than the consumer-grade Windows from Microsoft at the time. It had Internet support baked in, could run most DOS and Windows programs, and had introduced a replacement for the by-then terrible FAT filesystem: HPFS, in 1988. Microsoft wouldn’t introduce a better filesystem for its consumer operating systems until Windows XP in 2001, 13 years later. But more on that story later.

Free Software, Shareware, and Commercial Software

I’ve covered the high cost of software already. Obviously $500 software wasn’t going to sell in the home market. So what did we have?

Mainly, these things:

  1. Public domain software. It was free to use, and if implemented in BASIC, probably had source code with it too.
  2. Shareware
  3. Commercial software (some of it from small publishers was a lot cheaper than $500)

Let’s talk about shareware. The idea with shareware was that a company would release a useful program, sometimes limited. You were encouraged to “register”, or pay for, it if you liked it and used it. And, regardless of whether you registered it or not, you were told “please copy!” Sometimes shareware was fully functional, and registering it got you nothing more than printed manuals and an easy conscience (guilt trips for not registering weren’t necessarily very subtle). Sometimes unregistered shareware would have a “nag screen” – a delay of a few seconds while they told you to register. Sometimes it would be limited in some way; you’d get more features if you registered. With games, it was popular to have a trilogy and release the first episode – inevitably ending with a cliffhanger – as shareware; the subsequent episodes would require registration. In any event, a lot of the software people used in the 80s and 90s was shareware. Also pirated commercial software, though in the earlier days of computing, I think some people didn’t even know the difference.

Notice what’s missing: Free Software / FLOSS in the Richard Stallman sense of the word. Stallman lived in the big institution world – after all, he worked at MIT – and what he was doing with the Free Software Foundation and GNU project beginning in 1983 never really filtered into the DOS/Windows world at the time. I had no awareness of it even existing until into the 90s, when I first started getting some hints of it as a port of gcc became available for OS/2. The Internet was what really brought this home, but I’m getting ahead of myself.

I want to say again: FLOSS never really entered the DOS and Windows 3.x ecosystems. You’d see it make a few inroads here and there in later versions of Windows, and moreso now that Microsoft has been sort of forced to accept it, but still, reflect on its legacy. What is the software market like in Windows compared to Linux, even today?

Now it is, finally, time to talk about connectivity!

Getting On-Line

What does it even mean to get on line? Certainly not connecting to a wifi access point. The answer is, unsurprisingly, complex. But for everyone except the large institutional users, it begins with a telephone.

The telephone system

By the 80s, there was one communication network that already reached into nearly every home in America: the phone system. Virtually every household (note I don’t say every person) was uniquely identified by a 10-digit phone number. You could, at least in theory, call up virtually any other phone in the country and be connected in less than a minute.

But I’ve got to talk about cost. The way things worked in the USA, you paid a monthly fee for a phone line. Included in that monthly fee was unlimited “local” calling. What is a local call? That was an extremely complex question. Generally it meant, roughly, calling within your city. But of course, as you deal with things like suburbs and cities growing into each other (eg, the Dallas-Ft. Worth metroplex), things got complicated fast. But let’s just say for simplicity you could call others in your city.

What about calling people not in your city? That was “long distance”, and you paid – often hugely – by the minute for it. Long distance rates were difficult to figure out, but were generally most expensive during business hours and cheapest at night or on weekends. Prices eventually started to come down when competition was introduced for long distance carriers, but even then you often were stuck with a single carrier for long distance calls outside your city but within your state. Anyhow, let’s just leave it at this: local calls were virtually free, and long distance calls were extremely expensive.

Getting a modem

I remember getting a modem that ran at either 1200bps or 2400bps. Either way, quite slow; you could often read even plain text faster than the modem could display it. But what was a modem?

A modem hooked up to a computer with a serial cable, and to the phone system. By the time I got one, modems could automatically dial and answer. You would send a command like ATDT5551212 and it would dial 555-1212. Modems had speakers, because often things wouldn’t work right, and the telephone system was oriented around speech, so you could hear what was happening. You’d hear it wait for dial tone, then dial, then – hopefully – the remote end would ring, a modem there would answer, you’d hear the screeching of a handshake, and eventually your terminal would say CONNECT 2400. Now your computer was bridged to the other; anything going out your serial port was encoded as sound by your modem and decoded at the other end, and vice-versa.
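
For the curious, that dial-and-handshake dance can be sketched in modern code. A minimal, hypothetical re-enactment using the pyserial library (the device path and phone number are made up):

    import serial  # pyserial: talks to a serial port like the one the modem hung off

    # Hypothetical device and speed; a 2400bps modem typically ran 8N1 framing.
    port = serial.Serial("/dev/ttyS0", baudrate=2400, timeout=60)

    port.write(b"ATDT5551212\r")  # Hayes command set: ATtention, Dial, Tone
    for _ in range(20):           # bounded read loop so we cannot hang forever
        line = port.readline().strip()
        if line:
            print(line.decode(errors="replace"))
        if line.startswith((b"CONNECT", b"BUSY", b"NO CARRIER")):
            break                 # CONNECT means the two carriers are talking

After CONNECT, every byte written to the port came out of the far machine’s serial port, and vice versa.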

But what, exactly, was “the other end?”

It might have been another person at their computer. Turn on local echo, and you can see what they did. Maybe you’d send files to each other. But in my case, the answer was different: PC Magazine.

PC Magazine and CompuServe

Starting around 1986 (so I would have been about 6 years old), I got to read PC Magazine. My dad would bring home copies that were being discarded at his office for me to read, and I think eventually bought me a subscription directly. This was not just a standard magazine; it ran something like 350-400 pages an issue, and came out every other week. This thing was a monster. It had reviews of hardware and software, descriptions of upcoming technologies, and pages and pages of ads (which often were informative in their own right). And they had sections on programming. Many issues would talk about BASIC or Pascal programming, and there’d be a utility in most issues. What do I mean by a “utility in most issues”? Did they include a floppy disk with software?

No, of course not. There was a literal program listing printed in the magazine. If you wanted the utility, you had to type it in. And a lot of them were written in assembler, so you had to have an assembler. An assembler, of course, was not free and I didn’t have one. Or maybe they wrote it in Microsoft C, and I had Borland C, and (of course) they weren’t compatible. Sometimes they would list the program sort of in binary: line after line of a BASIC program, with lines like “64, 193, 253, 0, 53, 0, 87” that you would type in for hours, hopefully correctly. Running the BASIC program would, if you got it correct, emit a .COM file that you could then run. They did have a rudimentary checksum system built in, but it wasn’t even a CRC, so something like swapping two numbers would go unnoticed – until the program mysteriously hung.
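
To see just how rudimentary that checksum was, here is a sketch (in Python rather than period BASIC) using the digits quoted above. A plain sum cannot detect two swapped values, which is exactly the failure mode described:

    # Illustrative only: an additive checksum like the type-in listings used.
    listing = [64, 193, 253, 0, 53, 0, 87]   # bytes as printed in the magazine
    typed   = [64, 253, 193, 0, 53, 0, 87]   # same bytes, two of them swapped

    def checksum(data):
        return sum(data) % 256

    assert checksum(listing) == checksum(typed)  # the swap sails right through

    # The BASIC loader's final step: emit the bytes as a runnable .COM file.
    with open("UTIL.COM", "wb") as f:
        f.write(bytes(typed))                    # ...a subtly broken program

A CRC, by contrast, mixes each byte’s position into the result, so a transposition changes it.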

Eventually they teamed up with CompuServe to offer a limited slice of CompuServe for the purpose of downloading PC Magazine utilities. This was called PC MagNet. I am foggy on the details, but I believe that for a time you could connect to the limited PC MagNet part of CompuServe “for free” (after the cost of the long-distance call, that is) rather than paying for CompuServe itself (because, OF COURSE, that also charged you by the minute.) So in the early days, I would get special permission from my parents to place a long distance call, and after some nerve-wracking minutes in which we were aware every minute was racking up charges, I could navigate the menus, download what I wanted, and log off immediately.

I still, incidentally, mourn what PC Magazine became. As with computing generally, it followed the mass market. It lost its deep technical chops, cut its programming columns, stopped talking about things like how SCSI worked, and so forth. By the time it stopped printing in 2009, it was no longer a square-bound 400-page behemoth, but rather looked more like a copy of Newsweek – only with less depth.

Continuing with CompuServe

CompuServe was a much larger service than just PC MagNet. Eventually, our family got a subscription. It was still an expensive and scarce resource; I’d call it only after hours when the long-distance rates were cheapest. Everyone had a numerical username separated by commas; mine was 71510,1421. CompuServe had forums, and files. Eventually I would use TapCIS to queue up things I wanted to do offline, to minimize phone usage online.

CompuServe eventually added a gateway to the Internet. For the sum of somewhere around $1 a message, you could send or receive an email from someone with an Internet email address! I remember the thrill of one time, as a kid of probably 11 years, sending a message to one of the editors of PC Magazine and getting a kind, if brief, reply back!

But inevitably I had…

The Godzilla Phone Bill

Yes, one month I became lax in tracking my time online. I ran up my parents’ phone bill. I don’t remember how high, but I remember it was hundreds of dollars, a hefty sum at the time. As I watched Jason Scott’s BBS Documentary, I realized how common an experience this was. I think this was the end of CompuServe for me for a while.

Toll-Free Numbers

I lived near a town with a population of 500. Not even IN town, but near town. The calling area included another town with a population of maybe 1500, so all told, there were maybe 2000 people total I could talk to with a local call – though far fewer numbers, because remember, telephones were allocated by the household. There were, as far as I know, zero modems that were a local call (aside from one that belonged to a friend I met around 1992). So basically everything was long-distance.

But there was a special feature of the telephone network: toll-free numbers. Normally when calling long-distance, you, the caller, paid the bill. But with a toll-free number, beginning with 1-800, the recipient paid the bill. These numbers almost inevitably belonged to corporations that wanted to make it easy for people to call. Sales and ordering lines, for instance. Some of these companies started to set up modems on toll-free numbers. There were few of these, but they existed, so of course I had to try them!

One of them was a company called PennyWise that sold office supplies. They had a toll-free line you could call with a modem to order stuff. Yes, online ordering before the web! I loved office supplies. And, because I lived far from a big city, if the local K-Mart didn’t have it, I probably couldn’t get it. Of course, the interface was entirely text, but you could search for products and place orders with the modem. I had loads of fun exploring the system, and actually ordered things from them – and probably actually saved money doing so. With the first order they shipped a monster full-color catalog. That thing must have been 500 pages, like the Sears catalogs of the day. Every item had a part number, which streamlined ordering through the modem.

Inbound FAXes

By the 90s, a number of modems became able to send and receive FAXes as well. For those that don’t know, a FAX machine was essentially a special modem. It would scan a page and digitally transmit it over the phone system, where it would – at least in the early days – be printed out in real time (because the machines didn’t have the memory to store an entire page as an image). Eventually, PC modems integrated FAX capabilities.

There still wasn’t anything useful I could do locally, but there were ways I could get other companies to FAX something to me. I remember two of them.

One was for US Robotics. They had an “on demand” FAX system. You’d call up a toll-free number, which was an automated IVR system. You could navigate through it and select various documents of interest to you: spec sheets and the like. You’d key in your FAX number, hang up, and US Robotics would call YOU and FAX you the documents you wanted. Yes! I was talking to a computer (of sorts) at no cost to me!

The New York Times also ran a service for a while called TimesFax. Every day, they would FAX out a page or two of summaries of the day’s top stories. This was pretty cool in an era in which I had no other way to access anything from the New York Times. I managed to sign up for TimesFax – I have no idea how, anymore – and for a while I would get a daily FAX of their top stories. When my family got its first laser printer, I could then even print these FAXes complete with the gothic New York Times masthead. Wow! (OK, so technically I could print them on a dot-matrix printer also, but graphics on a 9-pin dot matrix is a kind of pain that is a whole other article.)

My own phone line

Remember how I discussed that phone lines were allocated per household? This was a problem for a lot of reasons:

  1. Anybody that tried to call my family while I was using my modem would get a busy signal (unable to complete the call)
  2. If anybody in the house picked up the phone while I was using it, that would degrade the quality of the ongoing call and either mess up or disconnect the call in progress. In many cases, that could cancel a file transfer (which wasn’t necessarily easy or possible to resume), prompting howls of annoyance from me.
  3. Generally we all had to work around each other

So eventually I found various small jobs and used the money I made to pay for my own phone line and my own long distance costs. Eventually I upgraded to a 28.8Kbps US Robotics Courier modem even! Yes, you heard it right: I got a job and a bank account so I could have a phone line and a faster modem. Uh, isn’t that why every teenager gets a job?

Now my local friend and I could call each other freely – at least on my end (I can’t remember if he had his own phone line too). We could exchange files using HS/Link, which had the added benefit of allowing split-screen chat even while a file transfer was in progress. I’m sure we spent hours chatting to each other keyboard-to-keyboard while sharing files with each other.

Technology in Schools

By this point in the story, we’re in the late 80s and early 90s. I’m still using PC-style OSs at home; OS/2 in the later years of this period, DOS or maybe a bit of Windows in the earlier years. I mentioned that they let me work on programming at school starting in 5th grade. It was soon apparent that I knew more about computers than anybody on staff, and I started getting pulled out of class to help teachers or administrators with vexing school problems. This continued until I graduated from high school, incidentally – often to my enjoyment, and the annoyance of one particular teacher who, I must say, I was fine with annoying in this way.

That’s not to say that there was institutional support for what I was doing. It was, after all, a small school. Larger schools might have introduced BASIC or maybe Logo in high school. But I had already taught myself BASIC, Pascal, and C by the time I was somewhere around 12 years old. So I wouldn’t have had any use for that anyhow.

There were programming contests occasionally held in the area. Schools would send teams. My school didn’t really “send” anybody, but I went as an individual. One of them was run by a local college (but for jr. high and high school students). Years later, I met one of the professors that ran it. He remembered me, and that day, better than I did. The programming contest had problems one could solve in BASIC or Logo. I knew nothing about what to expect going into it, but I had lugged my computer and screen along, and asked him, “Can I write my solutions in C?” He was, apparently, stunned, but said sure, go for it. I took first place that day, leading to some rather confused teams from much larger schools.

The Netware network that the school had was, as these generally were, itself isolated. There was no link to the Internet or anything like it. Several schools across three local counties eventually invested in a fiber-optic network linking them together. This built a larger, but still closed, network. Its primary purpose was to allow students to be exposed to a wider variety of classes at high schools. Participating schools had an “ITV room”, outfitted with cameras and mics. So students at any school could take classes offered over ITV at other schools. For instance, only my school taught German classes, so people at any of those participating schools could take German. It was an early “Zoom room.” But alongside the TV signal, there was enough bandwidth to run some Netware frames. By about 1995 or so, this let one of the schools purchase some CD-ROM software that was made available on a file server and could be accessed by any participating school. Nice! But Netware was mainly about file and printer sharing; there wasn’t even a facility like email, at least not on our deployment.

BBSs

My last hop before the Internet was the BBS. A BBS was a computer program, usually run by a hobbyist like me, on a computer with a modem connected. Callers would call it up, and they’d interact with the BBS. Most BBSs had discussion groups like forums and file areas. Some also had games. I, of course, continued to have that most vexing of problems: they were all long-distance.

There were some ways to help with that, chiefly QWK and BlueWave. These, somewhat like TapCIS in the CompuServe days, let me download new message posts for reading offline, and queue up my own messages to send later. QWK and BlueWave didn’t help with file downloading, though.

BBSs get networked

BBSs were an interesting thing. You’d call up one, and inevitably somewhere in the file area would be a BBS list. Download the BBS list and you’ve suddenly got a list of phone numbers to try calling. All of them were long distance, of course. You’d try calling them at random and have a success rate of maybe 20%. The other 80% would be defunct; you might get the dreaded “this number is no longer in service” or the even more dreaded angry human answering the phone (and of course a modem can’t talk to a human, so they’d just get silence for probably the nth time that week). The phone company cared nothing about BBSs and recycled their numbers just as fast as any others.

To talk to various people, or participate in certain discussion groups, you’d have to call specific BBSs. That’s annoying enough in the general case, but even more so for someone paying long distance for it all, because it takes a few minutes to establish a connection to a BBS: handshaking, logging in, menu navigation, etc.

But BBSs started talking to each other. The earliest successful such effort was FidoNet, and for the duration of the BBS era, it remained by far the largest. FidoNet was analogous to the UUCP that the institutional users had, but ran on the much cheaper PC hardware. Basically, BBSs that participated in FidoNet would relay email, forum posts, and files between themselves overnight. Eventually, as with UUCP, by hopping through this network, messages could reach around the globe, and forums could have worldwide participation – asynchronously, long before they could link to each other directly via the Internet. It was almost entirely volunteer-run.

Running my own BBS

At age 13, I eventually chose to set up my own BBS. It ran on my single phone line, so of course when I was dialing up something else, nobody could dial up me. Not that this was a huge problem; in my town of 500, I probably had a good 1 or 2 regular callers in the beginning.

In the PC era, there was a big difference between a server and a client. Server-class software was expensive and rare. Maybe in later years you had an email client, but an email server would be completely unavailable to you as a home user. But with a BBS, I could effectively run a server. I even ran serial lines in our house so that the BBS could be connected from other rooms! Since I was running OS/2, the BBS didn’t tie up the computer; I could continue using it for other things.

FidoNet had an Internet email gateway. This one, unlike CompuServe’s, was free. Once I had a BBS on FidoNet, you could reach me from the Internet using the FidoNet address. This didn’t support attachments, but then email of the day didn’t really, either.
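
The gateway worked by rewriting FidoNet’s zone:net/node addressing into Internet domain form. A sketch of the classic mapping (the name and node number below are made up):

    def fidonet_to_internet(user, zone, net, node, point=0):
        """Rewrite a FidoNet address into the old fidonet.org gateway form."""
        host = f"f{node}.n{net}.z{zone}.fidonet.org"
        if point:
            host = f"p{point}.{host}"
        return f"{user.replace(' ', '.')}@{host}"

    # A hypothetical sysop at node 1:123/456:
    print(fidonet_to_internet("John Doe", 1, 123, 456))
    # -> John.Doe@f456.n123.z1.fidonet.org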

Various others outside Kansas ran FidoNet distribution points. I believe one of them was mgmtsys; my memory is quite vague, but I think they offered a direct gateway and I would call them to pick up Internet mail via FidoNet protocols, but I’m not at all certain of this.

Pros and Cons of the Non-Microsoft World

As mentioned, Microsoft was and is the dominant operating system vendor for PCs. But I left that world in 1993, and here, nearly 30 years later, have never really returned. I got an operating system with more technical capabilities than the DOS and Windows of the day, but the tradeoff was a much smaller software ecosystem. OS/2 could run DOS programs, but it ran OS/2 programs a lot better. So if I were to run a BBS, I wanted one that had a native OS/2 version – limiting me to a small fraction of available BBS server software. On the other hand, as a fully 32-bit operating system, there started to be OS/2 ports of certain software with a Unix heritage; most notably for me at the time, gcc. At some point, I eventually came across the RMS essays and started to be hooked.

Internet: The Hunt Begins

I certainly was aware that the Internet was out there and interesting. But the first problem was: how the heck do I get connected to the Internet?

Learning Link and Gopher

ISPs weren’t really a thing; the first one in my area (though still a long-distance call) started in, I think, 1994. One service that one of my teachers got me hooked up with was Learning Link. Learning Link was a nationwide collaboration of PBS stations and schools, designed to build on the educational mission of PBS. The nearest Learning Link station was more than a 3-hour drive away… but critically, they had a toll-free access number, and my teacher convinced them to let me use it. I connected via a terminal program and a modem, like with most other things. I don’t remember much about it, but I do remember a very important thing it had: Gopher. That was my first experience with Gopher.

Learning Link was hosted by a Unix derivative (Xenix), but it didn’t exactly give everyone a shell. I seem to recall it didn’t have open FTP access either. The Gopher client had FTP access at some point; I don’t recall for sure if it did then. If it did, then when a Gopher server referred to an FTP server, I could get to it. (I am unclear at this point if I could key in an arbitrary FTP location, or knew how, at that time.) I also had email access there, but I don’t recall exactly how; probably Pine. If that’s correct, that would have dated my Learning Link access as no earlier than 1992.

I think my access time to Learning Link was limited. And, since the only way to get out on the Internet from there was Gopher and Pine, I was somewhat limited in terms of technology as well. I believe that telnet services, for instance, weren’t available to me.

Computer labs

There was one place that tended to have Internet access: colleges and universities. In 7th grade, I participated in a program that resulted in me being invited to visit Duke University, and in 8th grade, I participated in National History Day, resulting in a trip to visit the University of Maryland. I probably sought out computer labs at both of those. My most distinct memory was finding my way into a computer lab at one of those universities, and it was full of NeXT workstations. I had never seen or used NeXT before, and had no idea how to operate it. I had brought a box of floppy disks, unaware that the DOS disks probably weren’t compatible with NeXT.

Closer to home, a small college had a computer lab that I could also visit. I would go there in summer or when it wasn’t used with my stack of floppies. I remember downloading disk images of FLOSS operating systems: FreeBSD, Slackware, or Debian, at the time. The hash marks from the DOS-based FTP client would creep across the screen as the 1.44MB disk images would slowly download. telnet was also available on those machines, so I could telnet to things like public-access Archie servers and libraries – though not Gopher. Still, FTP and telnet access opened up a lot, and I learned quite a bit in those years.

Continuing the Journey

At some point, I got a copy of the Whole Internet User’s Guide and Catalog, published in 1994. I still have it. If I hadn’t already figured it out by then, I certainly became aware from it that Unix was the dominant operating system on the Internet. The examples in Whole Internet covered FTP, telnet, gopher – all assuming the user somehow got to a Unix prompt. The web was introduced about 300 pages in; clearly viewed as something that wasn’t page 1 material. And it covered the command-line www client before introducing the graphical Mosaic. Even then, though, the book highlighted Mosaic’s utility as a front-end for Gopher and FTP, and even the ability to launch telnet sessions by clicking on links. But having a copy of the book didn’t equate to having any way to run Mosaic. The machines in the computer lab I mentioned above all ran DOS and were incapable of running a graphical browser. I had no SLIP or PPP (both ways to run Internet traffic over a modem) connectivity at home. In short, the Web was something for the large institutional users at the time.

CD-ROMs

As CD-ROMs came out, with their huge (for the day) 650MB capacity, various companies started collecting software that could be downloaded on the Internet and selling it on CD-ROM. The two most popular ones were Walnut Creek CD-ROM and Infomagic. One could buy extensive Shareware and gaming collections, and then even entire Linux and BSD distributions. Although not exactly an Internet service per se, it was a way of bringing what would ordinarily be accessible only to institutional users into the home computer realm.

Free Software Jumps In

As I mentioned, by the mid 90s, I had come across RMS’s writings about free software – most probably his 1992 essay Why Software Should Be Free. (Please note, this is not a commentary on the more recently-revealed issues surrounding RMS, but rather his writings and work as I encountered them in the 90s.) The notion of a Free operating system – not just in cost but in openness – was incredibly appealing. Not only could I tinker with it to a much greater extent due to having source for everything, but it included so much software that I’d otherwise have to pay for. Compilers! Interpreters! Editors! Terminal emulators! And, especially, server software of all sorts. There’d be no way I could afford or run Netware, but with a Free Unixy operating system, I could do all that. My interest was obviously piqued. Add to that the fact that I could actually participate and contribute – I was about to become hooked on something that I’ve stayed hooked on for decades.

But then the question was: which Free operating system? Eventually I chose FreeBSD to begin with; that would have been sometime in 1995. I don’t recall the exact reasons for that. I remember downloading Slackware install floppies, and probably the fact that Debian wasn’t yet at 1.0 scared me off for a time. FreeBSD’s fantastic Handbook – far better than anything I could find for Linux at the time – was no doubt also a factor.

The de Raadt Factor

Why not NetBSD or OpenBSD? The short answer is Theo de Raadt. Somewhere in this time, when I was somewhere between 14 and 16 years old, I asked some questions comparing NetBSD to the other two free BSDs. This was on a NetBSD mailing list, but for some reason Theo saw it and got a flame war going, which CC’d me. Now keep in mind that even if NetBSD had a web presence at the time, it would have been minimal, and I would have – not all that unusually for the time – had no way to access it. I was certainly not aware of the, shall we say, acrimony between Theo and NetBSD. While I had certainly seen an online flamewar before, this took on a different and more disturbing tone; months later, Theo randomly emailed me under the subject “SLIME” saying that I was, well, “SLIME”. I seem to recall periodic emails from him thereafter reminding me that he hates me and that he had blocked me. (Disclaimer: I have poor email archives from this period, so the full details are lost to me, but I believe I am accurately conveying these events from over 25 years ago)

This was a surprise, and an unpleasant one. I was trying to learn, and while it is possible I didn’t understand some aspect or other of netiquette (or Theo’s personal hatred of NetBSD) at the time, still that is not a reason to flame a 16-year-old (though he would have had no way to know my age). This didn’t leave any kind of scar, but did leave a lasting impression; to this day, I am particularly concerned with how FLOSS projects handle poisonous people. Debian, for instance, has come a long way in this over the years, and even Linus Torvalds has turned over a new leaf. I don’t know if Theo has.

In any case, I didn’t use NetBSD then. I did try it periodically in the years since, but never found it compelling enough to justify a large switch from Debian. I never tried OpenBSD for various reasons, but one of them was that I didn’t want to join a community that tolerates behavior such as Theo’s from its leader.

Moving to FreeBSD

Moving from OS/2 to FreeBSD was final. That is, I didn’t have enough hard drive space to keep both. I also didn’t have the backup capacity to back up OS/2 completely. My BBS, which ran Virtual BBS (and at some point also AdeptXBBS), was deleted and reincarnated in a different form. My BBS was a member of both FidoNet and VirtualNet; the latter was specific to VBBS, and had to be dropped. I believe I may have also had to drop the FidoNet link for a time. This was the biggest change of computing in my life to that point. The earlier experiences hadn’t literally destroyed what came before. OS/2 could still run my DOS programs. Its command shell was quite DOS-like. It ran Windows programs. I was going to throw all that away and leap into the unknown.

I wish I had saved a copy of my BBS; I would love to see the messages I exchanged back then, or see its menu screens again. I have little memory of what it looked like. But other than that, I have no regrets. Pursuing Free, Unixy operating systems brought me a lot of enjoyment and a good career.

That’s not to say it was easy. All the problems of not being in the Microsoft ecosystem were magnified under FreeBSD and Linux. In a day before EDID, monitor timings had to be calculated manually – and you risked destroying your monitor if you got them wrong. Word processing and spreadsheet software was pretty much not there for FreeBSD or Linux at the time; I was therefore forced to learn LaTeX and actually appreciated that. Software like PageMaker or CorelDraw was certainly nowhere to be found for those free operating systems either. But I got a ton of new capabilities.
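
To give a flavor of what “calculating monitor timings manually” meant: XFree86 took hand-written “modelines”, and you worked out the resulting sync frequencies yourself to check them against the monitor’s rated limits. A sketch of the arithmetic, using the well-known VESA 1024x768 timing (not necessarily what I ran back then):

    # Modeline "1024x768" 65.0  1024 1048 1184 1344  768 771 777 806
    pixel_clock_hz = 65.0e6   # dot clock
    h_total = 1344            # pixels per scanline, blanking included
    v_total = 806             # lines per frame, blanking included

    h_freq = pixel_clock_hz / h_total   # horizontal sync frequency
    v_freq = h_freq / v_total           # vertical refresh rate

    print(f"hsync {h_freq / 1e3:.1f} kHz, refresh {v_freq:.1f} Hz")
    # -> hsync 48.4 kHz, refresh 60.0 Hz

Feed a monitor frequencies outside its rated range and, as noted above, you could genuinely damage it.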

I mentioned the BBS didn’t shut down, and indeed it didn’t. I ran what was surely a supremely unique oddity: a free, dialin Unix shell server in the middle of a small town in Kansas. I’m sure I provided things such as pine for email and some help text and maybe even printouts for how to use it. The set of callers slowly grew over the time period, in fact.

And then I got UUCP.

Enter UUCP

Even throughout all this, there was no local Internet provider and things were still long distance. I had Internet Email access via assorted strange routes, but they were all… strange. And, I wanted access to Usenet. In 1995, it happened.

The local ISP I mentioned offered UUCP access. Though I couldn’t afford the dialup shell (or later, SLIP/PPP) that they offered due to long-distance costs, UUCP’s very efficient batched processes looked doable. I believe I established that link when I was 15, so in 1995.

I worked to register my domain, complete.org, as well. At the time, the process was a bit lengthy and involved downloading a text file form, filling it out in a precise way, sending it to InterNIC, and probably mailing them a check. Well I did that, and in September of 1995, complete.org became mine. I set up sendmail on my local system, as well as INN to handle the limited Usenet newsfeed I requested from the ISP. I even ran Majordomo to host some mailing lists, including some that were surprisingly high-traffic for a few-times-a-day long-distance modem UUCP link!
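
For flavor, here is roughly what such a batched link might look like in Taylor UUCP’s configuration; the system name, phone number, and credentials are all hypothetical:

    # /etc/uucp/sys -- a made-up entry for polling the upstream ISP
    system         theisp
    time           Any            # when calls are allowed; cron decided how often
    phone          13165551212    # long distance, hence short batched calls
    port           serial0        # matches an entry in /etc/uucp/port
    speed          28800
    chat           "" \r\c ogin: \L word: \P
    call-login     mylogin
    call-password  mysecret

A cron job would then run uucico against this entry a few times a day, exchanging all the queued mail and news in one short call – the batching that made UUCP affordable over long distance.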

The modem client programs for FreeBSD were somewhat less advanced than for OS/2, but I believe I wound up using Minicom or Seyon to continue to dial out to BBSs and, I believe, continue to use Learning Link. So all the while I was setting up my local BBS, I continued to have access to the text Internet, consisting of chiefly Gopher for me.

Switching to Debian

I switched to Debian sometime in 1995 or 1996, and have been using Debian as my primary OS ever since. I continued to offer shell access, but added the WorldVU Atlantis menuing BBS system. This brought back a more BBS-like interface (by default; a shell was still an option) as well as some BBS door games such as LoRD and TradeWars 2002, running under DOS emulation.

I also continued to run INN, and ran ifgate to allow FidoNet echomail to be presented into INN Usenet-like newsgroups, and netmail to be gated to Unix email. This worked pretty well. The BBS continued to grow in these days, peaking at about two dozen total user accounts, and maybe a dozen regular users.

Dial-up access availability

I believe it was in 1996 that dial up PPP access finally became available in my small town. What a thrill! FINALLY! I could now FTP, use Gopher, telnet, and the web all from home. Of course, it was at modem speeds, but still.

(Strangely, I have a memory of accessing the Web using WebExplorer from OS/2. I don’t know exactly why; it’s possible that by this time, I had upgraded to a 486 DX2/66 and was able to reinstall OS/2 on the old 25MHz 486, or maybe something was wrong with the timeline from my memories from 25 years ago above. Or perhaps I made the occasional long-distance call somewhere before I ditched OS/2.)

Gopher sites still existed at this point, and I could access them using Netscape Navigator – which likely became my standard Gopher client at that point. I don’t recall using the UMN text-mode gopher client locally at that time, though it’s certainly possible I did.

The city

Starting when I was 15, I took computer science classes at Wichita State University. The first one was a class in the summer of 1995 on C++. I remember being worried about being good enough for it – I was, after all, just after my HS freshman year and had never taken the prerequisite C class. I loved it and got an A! By 1996, I was taking more classes.

In 1996 or 1997 I stayed in Wichita during the day due to having more than one class. So, what would I do then but… enjoy the computer lab? The CS dept. had two of them: one that had NCD X terminals connected to a pair of SunOS servers, and another one running Windows. I spent most of the time in the Unix lab with the NCDs; I’d use Netscape or pine, write code, enjoy the University’s fast Internet connection, and so forth.

In 1997 I had graduated high school and that summer I moved to Wichita to attend college. As was so often the case, I shut down the BBS at that time. It would be 5 years until I again dealt with Internet at home in a rural community.

By the time I moved to my apartment in Wichita, I had stopped using OS/2 entirely. I have no memory of ever having OS/2 there. Along the way, I had bought a Pentium 166, and then the most expensive piece of computing equipment I have ever owned: a DEC Alpha, which, of course, ran Linux.

ISDN

I must have used dialup PPP for a time, but I eventually got a job working for the ISP I had used for UUCP, and then PPP. While there, I got a 128Kbps ISDN line installed in my apartment, and they gave me a discount on the service for it. That was around 3x the speed of a modem, and crucially was always on and gave me a public IP. No longer did I have to use UUCP; now I got to host my own things! By at least 1998, I was running a web server on www.complete.org, and I had an FTP server going as well.

Even Bigger Cities

In 1999 I moved to Dallas, and there got my first broadband connection: an ADSL link at, I think, 1.5Mbps! Now that was something! But it had some reliability problems. I eventually put together a server and had it hosted at an acquaintance’s place who had SDSL in his apartment. Within a couple of years, I had switched to various kinds of proper hosting for it, but that is a whole other article.

In Indianapolis, I got a cable modem for the first time, with even better speeds but prohibitions on running “servers” on it. Yuck.

Challenges

Being non-Microsoft continued to have challenges. Until the advent of Firefox, a web browser was one of the biggest. While Netscape supported Linux on i386, it didn’t support Linux on Alpha. I hobbled along with various attempts at emulators, old versions of Mosaic, and so forth. And, until StarOffice was open-sourced as OpenOffice, reading Microsoft file formats was also a challenge, though WordPerfect was briefly available for Linux.

Over the years, I have become used to the Linux ecosystem. Perhaps I use Gimp instead of Photoshop and digikam instead of – well, whatever somebody would use on Windows. But I get ZFS, and containers, and so much that isn’t available there.

Yes, I know Apple never went away and is a thing, but for most of the time period I discuss in this article, at least after the rise of DOS, it was niche compared to the PC market.

Back to Kansas

In 2002, I moved back to Kansas, to a rural home near a different small town in the county next to where I grew up. There it was back to dialup at home, though I had faster access at work. I didn’t much care for this, and thus began a 20+-year effort to get broadband in the country. At first, I got a wireless link, which worked well enough in the winter but had serious problems in the summer when the trees leafed out. Eventually DSL became available locally – highly unreliable, but still, it was something.

Then I moved back to the community I grew up in, a few miles from my childhood home. Again I got DSL – a bit better. But after some years, being at the end of the run of DSL meant I had poor speeds and reliability problems. I eventually switched to various wireless ISPs, which continues to the present day; while people in cities can get Gbps service, I can get, at best, about 50Mbps. Long-distance fees are gone, but the speed disparity remains.

Concluding Reflections

I am glad I grew up where I did; the strong community has a lot of advantages I don’t have room to discuss here. In a number of very real senses, having no local services made things a lot more difficult than they otherwise would have been. However, perhaps I could say that I also learned a lot through the need to come up with inventive solutions to those challenges. To this day, I think a lot about computing in remote environments: partially because I live in one, and partially because I enjoy visiting places that are remote enough that they have no Internet, phone, or cell service whatsoever. I have written articles like Tools for Communicating Offline and in Difficult Circumstances based on my own personal experience. I instinctively think about making protocols robust in the face of various kinds of connectivity failures because I experience various kinds of connectivity failures myself.

(Almost) Everything Lives On

In 2002, Gopher turned 10 years old. It had probably been about 9 or 10 years since I had first used Gopher, which was the first way I got on the live Internet from my house. It was hard to believe. By that point, I had an always-on Internet link at home and at work. I had my Alpha, and probably also at least PCMCIA Ethernet for a laptop (many laptops had modems by the 90s as well). Despite its popularity in the early 90s, Gopher was mostly forgotten less than 10 years after it came on the scene and started to unify the Internet.

And it was at that moment that I decided to try to resurrect it. The University of Minnesota finally released it under an Open Source license. I wrote the first new gopher server in years, pygopherd, and introduced gopher to Debian. Gopher lives on; there are now quite a few Gopher clients and servers out there, newly started post-2002. The Gemini protocol can be thought of as something akin to Gopher 2.0, and it too has a small but blossoming ecosystem.
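For a sense of why resurrecting it was tractable: the Gopher protocol (RFC 1436) is simple enough to speak by hand. Here is a minimal sketch using only printf and nc; gopher.example.org is a placeholder host:

    # Connect to port 70, send a selector terminated by CRLF
    # (an empty selector requests the root menu), and read the reply.
    printf '\r\n' | nc gopher.example.org 70

The server sends back a tab-separated menu (or a file) and closes the connection; that one round trip is essentially the whole protocol.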

Archie, the old FTP search tool, is dead though. Same for WAIS and a number of the other pre-web search tools. But still, even FTP lives on today.

And BBSs? Well, they didn’t go away either. Jason Scott’s fabulous BBS documentary looks back at the history of the BBS, while Back to the BBS from last year talks about the modern BBS scene. FidoNet somehow is still alive and kicking. UUCP still has its place and has inspired a whole string of successors. Some, like NNCP, are clearly direct descendants of UUCP. Filespooler lives in that ecosystem, and you can even see UUCP concepts in projects as far afield as Syncthing and Meshtastic. Usenet still exists, and you can now run Usenet over NNCP just as I ran Usenet over UUCP back in the day (which you can still do as well). Telnet, of course, has been largely supplanted by ssh, but the concept is more popular now than ever, as Linux has made ssh available on everything from the Raspberry Pi to Android.
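As a hedged illustration of the Usenet-over-NNCP idea (the node name hub and the rnews mapping are assumptions made up for this example; the NNCP documentation has the real configuration details):

    # Assumes the remote node's NNCP config maps an exec handle "rnews"
    # to INN's rnews binary, e.g.:  exec: { rnews: ["/usr/sbin/rnews"] }
    # nncp-exec queues stdin as a packet to run that handle remotely:
    nncp-exec hub rnews < outgoing-news-batch
    # The packet can travel over any NNCP transport (online via
    # nncp-daemon/nncp-call, or offline via nncp-xfer and removable
    # media); on arrival, nncp-toss feeds the batch to rnews.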

And I still run a Gopher server, looking pretty much like it did in 2002.

This post also has a permanent home on my website, where it may be periodically updated.

Emmanuel Kasper: Moving blog from blogger.com to wordpress.com

29 August, 2022 - 15:05

I switched from blogger.com, the Google blog platform, to the hosted wordpress.com of Automattic, the main authors of the WordPress blog engine.
I thus gain:

I lose:

  • free CNAME redirect using my own domain name
  • a bit of advertising-free space. The blog at wordpress.com has a prominent header indicating I am using the free plan, but I am OK so far with that.

What stays the same:

  • Blogger and WordPress.com both offer tag-based RSS feed exports, so I decided to keep, for Planet Debian, a feed containing only the posts related to free, libre and open-source software (example feed URLs below).
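As a hedged illustration, both platforms expose per-tag (or per-label) feeds at predictable URLs; the blog names here are hypothetical:

    # Blogger label feed
    curl https://example.blogspot.com/feeds/posts/default/-/debian
    # WordPress.com tag feed
    curl https://example.wordpress.com/tag/debian/feed/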

I was not ready to make the jump to a self-hosted static blog generator, as I still wanted the possibility of comments without having to host the comment subsystem myself.

On the personal side, I also intend to pause my Twitter activity, as I notice current microblogging platforms tend mostly to contain flame wars, self-promotion, or shared links I could find anyway with a good feed reader.

Dirk Eddelbuettel: littler 0.3.16 on CRAN: Package Updates

29 August, 2022 - 05:54

The seventeenth release of littler as a CRAN package just landed, following in the now sixteen-year history (!!) of a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only started to do in recent years.
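A quick sketch of both modes (assuming littler's r binary is on the PATH; hello.r is a made-up script name):

    # Pipe an R expression into r
    echo 'cat(2 + 2, "\n")' | r

    # Shebang scripting: a littler script starts with #!/usr/bin/env r
    cat > hello.r <<'EOF'
    #!/usr/bin/env r
    cat("Hello from littler; args:", argv, "\n")  # argv holds the arguments
    EOF
    chmod +x hello.r && ./hello.r world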

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release, the first since last December, further extends install2.r to accept multiple repos options thanks to Tatsuya Shima, overhauls and substantially extends installBioc.r thanks to Pieter Moris, and includes a number of (generally smaller) changes I added (see below).
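A hedged example of the repeated repos usage (the repository URLs and package name are illustrative; install2.r --help has the authoritative syntax):

    # Passing more than one repository via repeated -r options:
    install2.r -r https://cloud.r-project.org \
               -r https://eddelbuettel.github.io/drat \
               digest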

The full change description follows.

Changes in littler version 0.3.16 (2022-08-28)
  • Changes in package

    • The configure code checks for two more headers

    • The RNG seeding matches the current version in R (Dirk)

  • Changes in examples

    • A cowu.r 'check Windows UCRT' helper was added (Dirk)

    • A getPandoc.r downloader has been added (Dirk)

    • The -r option to install2.r has been generalized (Tatsuya Shima in #95)

    • The rcc.r code / package checker now has a valgrind option (Dirk)

    • install2.r now installs to first element in .libPaths() by default (Dirk)

    • A very simple r2u.r help has been added (Dirk)

    • The installBioc.r has been generalized and extended similar to install2.r (Pieter Moris in #103)

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs, and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, and soon also as Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
