Planet Debian

Planet Debian - https://planet.debian.org/

Russell Coker: More About the PowerEdge T710

15 September, 2020 - 11:30

I’ve got the T710 (mentioned in my previous post [1]) online. When testing the T710 at home I noticed that the VGA monitor I was using would sometimes start flickering in some parts of the BIOS setup; it seemed that the horizontal sync wasn’t working properly. It didn’t seem like a big deal at the time. When I deployed it, the KVM display that I had planned to use with it mostly didn’t display anything. When the display was working the KVM keyboard wouldn’t work (and would prevent a regular USB keyboard from working if they were both connected at the same time). The VGA output of the T710 also wouldn’t work with my VGA->HDMI device so I couldn’t get it working with my portable monitor.

Fortunately the Dell front panel has a display and tiny buttons that allow configuring the IDRAC IP address, so I was able to get IDRAC going. One thing Dell really should do is allow the down button to change 0 to 9 when entering numbers; that would make it easier to enter 8.8.8.8 for the DNS server. Another thing Dell should do is give the default gateway a default value derived from the IP address and netmask of the server.

When I got IDRAC going it was easy to set up a serial console, boot from a rescue USB device, create a new initrd with the driver for the MegaRAID controller, and then reboot into the server image.
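
For reference, a minimal sketch of the initrd step on Debian, assuming the rescue environment has the server’s root filesystem mounted at /mnt and that the controller uses the megaraid_sas module:

# bind mount the virtual filesystems and chroot into the installed system
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# add the RAID driver to the initrd and rebuild it for all installed kernels
echo megaraid_sas >> /etc/initramfs-tools/modules
update-initramfs -u -k all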

When I transferred the SSDs from the old server to the newer Dell server, the problem I had was that the Dell drive caddies had no holes in suitable places for attaching SSDs. I ended up just pushing the SSDs in, so they are hanging in mid air attached only by the SATA/SAS connectors. Mounting them this way takes the space of the drive bay above, so instead of having 2*3.5″ disks I have 1*2.5″ SSD and need the extra space to get my hand in. The T710 is designed for 6*3.5″ disks and I’m going to have trouble if I ever want to have more than 3*2.5″ SSDs. Fortunately I don’t think I’ll need more SSDs.

After booting the system I started getting alerts about a “fault” in one SSD, with no detail on what the fault might be. My guess is that the SSD in question is M.2 and it’s in an M.2 to regular SATA adaptor which might have some problems. The data seems fine though: a BTRFS scrub found no checksum errors. I guess I’ll have to buy a replacement SSD soon.
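
For reference, that kind of check is a one-liner (a sketch, assuming the filesystem in question is mounted at /):

# scrub in the foreground (-B) and print per-device statistics (-d)
btrfs scrub start -Bd /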

I configured the system to use the “nosmt” kernel command line option to disable hyper-threading (which won’t provide much performance benefit but which makes certain types of security attacks much easier). I’ve configured BOINC to run on 6/8 CPU cores and surprisingly that didn’t cause the fans to be louder than when the system was idle. It seems that a system that is designed for 6 SAS disks doesn’t need a lot of cooling when run with SSDs.
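
A sketch of how to set that option persistently, assuming GRUB is the bootloader:

# in /etc/default/grub:
GRUB_CMDLINE_LINUX="nosmt"
# then regenerate the GRUB configuration and reboot
update-grub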

Related posts:

  1. Setting Up a Dell T710 Server with DRAC6 I’ve just got a Dell T710 server for LUV and...
  2. Dell PowerEdge T110 In June 2008 I received a Dell PowerEdge T105 server...
  3. Dell PowerEdge T105 Today I received a Dell PowerEDGE T105 for use by...

Norbert Preining: GIMP washed out colors: Color to Alpha and layer recombination

15 September, 2020 - 07:30

Just to remind myself because I keep forgetting it again and again: If you get washed out colors when doing color to alpha and then recombine layers in GIMP, that is due to the new default in GIMP 2.10 that combines layers in linear RGB.

This creates problems because Color to Alpha works in perceptual RGB, and recombining in linear space creates washed out colors. The solution is to right click on the respective layer, select “Composite Space”, and there select “RGB (perceptual)”. Here is the “bug” report that has been open for two years now.

Hoping that next time I will remember it.

Jonathan McDowell: onak 0.6.1 released

15 September, 2020 - 01:09

Yesterday I did the first release of my OpenPGP compatible keyserver, onak, in 4 years. Actually, 2 releases, because I discovered that my detection of various versions of libnettle needed some fixing.

It was largely driven by the need to get an updated package sorted for Debian due to the removal of dh-systemd, but it should have come sooner. This release has a number of clean-ups for dealing with the hostility shown to the keyserver network in recent years. In particular it implements some of dkg’s Abuse-Resistant OpenPGP Keystores, and finally adds support for fully verifying signatures. That opens up the ability to run a keyserver that will only allow verifiable updates to keys. This doesn’t suit folks who want to run PGP based systems for the anonymity, but for those of us who think PGP’s strength is in the web of trust it’s pretty handy. And it’s all configurable to taste; you can turn off all the verification if you want, or verify everything but not require any signatures, or even enable v3 keys if you feel like it.

The main reason this release didn’t come sooner is that I’m painfully aware of the bits that are missing. In particular:

  • Documentation. It’s all out of date, it needs a lot of work.
  • FastCGI support. Presently you need to run the separate CGI binaries.
  • SKS Gossip support. onak only supports the email syncing option. If you run a stand-alone server this is fine, but Gossip is the right approach for a proper keyserver network.

Anyway. Available locally or via GitHub.

0.6.0 - 13th September 2020

  • Move to CMake over autoconf
  • Add support for issuer fingerprint subpackets
  • Add experimental support for v5 keys
  • Add read-only OpenPGP keyring backed DB backend
  • Move various bits into their own subdirectories in the source tree
  • Add support for full signature verification
  • Drop v3 keys by default when cleaning keys
  • Various code cleanups
  • Implement pieces of draft-dkg-openpgp-abuse-resistant-keystore-03
  • Add support for a fingerprint blacklist (e.g. Evil32)
  • Deprecate the .conf configuration file format
  • Drop version info from armored output
  • Add option to deny new keys and only allow updates to existing keys
  • Various pieces of work removing support for 32 bit key IDs and coping with colliding 64 bit key IDs.
  • Remove support for libnettle versions that lack the full SHA2 suite

0.6.1 - 13th September 2020

  • Fixes for compilation without nettle + with later releases of nettle

Emmanuel Kasper: Quick debugging of a Linux printer via cups command line tools

15 September, 2020 - 00:17
Step by step cups debugging (here with a network printer)

Which printer queue do I have configured ?
lpstat -p
printer epson is idle.  enabled since Sat Dec 24 13:18:09 2017
# here I have a printer called 'epson', doing nothing, that the cups daemon considers as enabled

Which connection am I using to get to this printer ?
lpstat -v
device for epson: lpd://epson34dea0.local:515/PASSTHRU
# here the locally configured 'epson' printer queue is backed by a network device at the address epson34dea0.local, to which I am sending my print jobs via the lpd protocol

Is my printer ready ?
lpq
epson is ready
no entries
# here my local print queue 'epson' is accepting print jobs (which does not say anything about the physical device, it might be offline)

If your local print queue 'epson' is not ready at this point, you can try to reenable it in the cups system with:

sudo cupsenable epson

If you notice that the printer is disabled all the time, because of a flaky network for instance, you can edit /etc/cups/printers.conf and change the ErrorPolicy for each printer from stop-printer to retry-job.
It should also be possible to set this parameter globally in cupsd.conf.
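
A sketch of both options; the stanza name and the lpadmin invocation assume the queue is called 'epson' as above:

# per printer, inside the <Printer epson> stanza of /etc/cups/printers.conf
ErrorPolicy retry-job

# or, without editing files by hand, via lpadmin
sudo lpadmin -p epson -o printer-error-policy=retry-job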

Finally you can print a test page with
lpr /usr/share/cups/data/testprint

Emmanuel Kasper: Using Debian and RHEL troubleshootings containers on Kubernetes & OpenShift

14 September, 2020 - 22:18

You can connect to a running pod with oc/kubectl rsh pod_name, or start a copy of a running pod with oc debug pod_name, but as best practices recommend unprivileged, slim container images, where do you get sosreport, kdump, dig and nmap for troubleshooting?

Fortunately you can start either a transient Debian troubleshooting container with:

oc run troubleshooting-pod --stdin --tty --rm --image=docker.io/library/debian:buster

or a Red Hat Enterprise Linux one:

oc run troubleshooting-pod --stdin --tty --rm --image=registry.access.redhat.com/rhel7/rhel-tools
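
On plain Kubernetes the same approach works with kubectl, e.g. for the Debian image:

kubectl run troubleshooting-pod --stdin --tty --rm --image=docker.io/library/debian:buster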

Russ Allbery: Review: Who Do You Serve, Who Do You Protect?

14 September, 2020 - 11:27

Review: Who Do You Serve, Who Do You Protect?, edited by Maya Schenwar, et al.

Editor: Maya Schenwar Editor: Joe Macaré Editor: Alana Yu-lan Price Publisher: Haymarket Books Copyright: June 2016 ISBN: 1-60846-684-1 Format: Kindle Pages: 250

Who Do You Serve, Who Do You Protect? is an anthology of essays about policing in the United States. It's divided into two sections: one that enumerates ways that police are failing to serve or protect communities, and one that describes how communities are building resistance and alternatives. Haymarket Books (a progressive press in Chicago) has made it available for free in the aftermath of the George Floyd killing and resulting protests in the United States.

I'm going to be a bit unfair to this book, so let me start by admitting that the mismatch between it and the book I was looking for is not entirely its fault.

My primary goal was to orient myself in the discussion on the left about alternatives to policing. I also wanted to sample something from Haymarket Books; a free book was a good way to do that. I was hoping for a collection of short introductions to current lines of thinking that I could selectively follow in longer writing, and an essay collection seemed ideal for that.

What I had not realized (which was my fault for not doing simple research) is that this is a compilation of articles previously published by Truthout, a non-profit progressive journalism site, in 2014 and 2015. The essays are a mix of reporting and opinion but lean towards reporting. The earliest pieces in this book date from shortly after the police killing of Michael Brown, when racist police violence was (again) reaching national white attention.

The first half of the book is therefore devoted to providing evidence of police abuse and violence. This is important to do, but it's sadly no longer as revelatory in 2020, when most of us have seen similar things on video, as it was to white America in 2014. If you live in the United States today, while you may not be aware of the specific events described here, you're unlikely to be surprised that Detroit police paid off jailhouse informants to provide false testimony ("Ring of Snitches" by Aaron Miguel Cantú), or that Chicago police routinely use excessive deadly force with no consequences ("Amid Shootings, Chicago Police Department Upholds Culture of Impunity" by Sarah Macaraeg and Alison Flowers), or that there is a long history of police abuse and degradation of pregnant women ("Your Pregnancy May Subject You to Even More Law Enforcement Violence" by Victoria Law). There are about eight essays along those lines.

Unfortunately, the people who excuse or disbelieve these stories are rarely willing to seek out new evidence, let alone read a book like this. That raises the question of intended audience for the catalog of horrors part of this book. The answer to that question may also be the publication date; in 2014, the base of evidence and example for discussion had not been fully constructed. This sort of reporting is also obviously relevant in the original publication context of web-based journalism, where people may encounter these accounts individually through social media or other news coverage. In 2020, they offer reinforcement and rhetorical evidence, but I'm dubious that the people who would benefit from this knowledge will ever see it in this form. Those of us who will are already sickened, angry, and depressed.

My primary interest was therefore in the second half of the book: the section on how communities are building resistance and alternatives. This is where I'm going to be somewhat unfair because the state of that conversation may have been different in 2015 than it is now in 2020. But these essays were lacking the depth of analysis that I was looking for.

There is a human tendency, when one becomes aware of an obvious wrong, to simply publicize the horrible thing that is happening and expect someone to do something about it. It's obviously and egregiously wrong, so if more people knew about it, certainly it would be stopped! That has happened repeatedly with racial violence in the United States. It's also part of the common (and school-taught) understanding of the Civil Rights movement in the 1960s: activists succeeded in getting the violence on the cover of newspapers and on television, people were shocked and appalled, and the backlash against the violence created political change.

Putting aside the fact that this is too simplistic of a picture of the Civil Rights era, it's abundantly clear at this point in 2020 that publicizing racist and violent policing isn't going to stop it. We're going to have to do something more than draw attention to the problem. Deciding what to do requires political and social analysis, not just of the better world that we want to see but of how our current world can become that world.

There is very little in that direction in this book. Who Do You Serve, Who Do You Protect? does not answer the question of its title beyond "not us" and "white supremacy." While those answers are not exactly wrong, they're also not pushing the analysis in the direction that I wanted to read.

For example (and this is a long-standing pet peeve of mine in US political writing), it would be hard to tell from most of the essays in this book that any country besides the United States exists. One essay ("Killing Africa" by William C. Anderson) talks about colonialism and draws comparisons between police violence in the United States and international treatment of African and other majority-Black countries. One essay talks about US military behavior overseas ("Beyond Homan Square" by Adam Hudson). That's about it for international perspective. Notably, there is no analysis here of what other countries might be doing better.

Police violence against out-groups is not unique to the United States. No one has entirely solved this problem, but versions of this problem have been handled with far more success than here. The US has a comparatively appalling record; many countries in the world, particularly among comparable liberal democracies in Europe, are doing far better on metrics of racial oppression by agents of the government and of law enforcement violence. And yet it's common to approach these problems as if we have to develop a solution de novo, rather than ask what other countries are doing differently and if we could do some of those things.

The US has some unique challenges, both historical and with the nature of endemic violence in the country, so perhaps such an analysis would turn up too many US-specific factors to copy other people's solutions. But we need to do the analysis, not give up before we start. Novel solutions can lead to novel problems; other countries have tested, working improvements that could provide a starting framework and some map of potential pitfalls.

More fundamentally, only the last two essays of this book propose solutions more complex than "stop." The authors are very clear about what the police are doing, seem less interested in why, and are nearly silent on how to change it. I suspect I am largely in political agreement with most of the authors, but obviously a substantial portion of the country (let alone its power structures) is not, and therefore nothing is changing. Part of the project of ending police violence is understanding why the violence exists, picking apart the motives and potential fracture lines in the political forces supporting the status quo, and building a strategy to change the politics. That isn't even attempted here.

For example, the "who do you serve?" question of the book's title is more interesting than the essays give it credit for. Police are not a monolith. Why do Black people become police officers? What are their experiences? Are there police forces in the United States that are doing better than others? What makes them different? Why do police act with violence in the moment? What set of cultural expectations, training experiences, anxieties, and fears lead to that outcome? How do we change those factors?

Or, to take another tack, why are police not held accountable even when there is substantial public outrage? What political coalition supports that immunity from consequences, what are its fault lines and internal frictions, and what portions of that coalition could be broken off, peeled away, or removed from power? To whom, institutionally, are police forces accountable? What public offices can aspiring candidates run for that would give them oversight capability? This varies wildly throughout the United States; political approaches that work in large cities may not work in small towns, or with county sheriffs, or with the FBI, or with prison guards.

To treat these organizations as a monolith and their motives as uniform is bad political tactics. It gives up points of leverage.

I thought the best essays of this collection were the last two. "Community Groups Work to Provide Emergency Medical Alternatives, Separate from Police," by Candice Bernd, is a profile of several local emergency response systems that divert emergency calls from the police to paramedics, mental health experts, or social workers. This is an idea that's now relatively mainstream, and it seems to be finding modest success where it has been tried. It's more of a harm mitigation strategy than an attempt to deal with the root problem, but we're going to need both.

The last essay, "Building Community Safety" by Ejeris Dixon, is the only essay in this book that is pushing in the direction that I was hoping to read. Dixon describes building an alternative system that can intervene in violent situations without using the police. This is fascinating and I'm glad that I read it.

It's also frustrating in context because Dixon's essay should be part of a discussion. Dixon describes spending years learning de-escalation techniques, doing hard work of community discussion and collective decision-making, and making deep investment in the skills required to handle violence without calling in a dangerous outside force. I greatly admire this approach (also common in parts of the anarchist community) and the people who are willing to commit to it. But it's an immense amount of work, and as Dixon points out, that work often falls on the people who are least able to afford it. Marginalized communities, for whom the police are often dangerous, are also likely to lack both time and energy to invest in this type of skill training. And many people simply will not do this work even if they do have the resources to do it.

More fundamentally, this approach conflicts somewhat with division of labor. De-escalation and social work are both professional skills that require significant time and practice to hone, and as much as I too would love to live in a world where everyone knows how to do some amount of this work, I find it hard to imagine scaling this approach without trained professionals. The point of paying someone to do this work as their job is that the money frees up their time to focus on learning those skills at a level that is difficult to do in one's free time. But once you have an organized group of professionals who do this work, you have to find a way to keep them from falling prey to the problems that plague the police, which requires understanding the origins of those problems. And that's putting aside the question of how large the residual of dangerous crime that cannot be addressed through any form of de-escalation might be, and what organization we should use to address it.

Dixon's essay is great; I wouldn't change anything about it. But I wanted to see the next essay engaging with Dixon's perspective and looking for weaknesses and scaling concerns, and then the next essay that attempts to shore up those weaknesses, and yet another essay that grapples with the challenging philosophical question of a government monopoly on force and how that can and should come into play in violent crime. And then essays on grass-roots organizing in the context of police reform or abolition, and on restorative justice, and on the experience of attempting police reform from the inside, and on how to support public defenders, and on the merits and weaknesses of focusing on electing reform-minded district attorneys. Unfortunately, none of those are here.

Overall, Who Do You Serve, Who Do You Protect? was a disappointment. It was free, so I suppose I got what I paid for, and I may have had a different reaction if I read it in 2015. But if you're looking for a deep discussion on the trade-offs and challenges of stopping police violence in 2020, I don't think this is the place to start.

Rating: 3 out of 10

Enrico Zini: Travel links

14 September, 2020 - 05:00

A few interesting places to visit. Traveling can be complicated, but the internet searches alone can be interesting enough.

For example, churches:

Or fascinating urbanistic projects, for which it's worth looking up photos:

Or nature, like Get Lost in Mega-Tunnels Dug by South American Megafauna

Jonathan Carter: Wootbook / Tongfang laptop

14 September, 2020 - 03:44

Old laptop

I’ve been meaning to get a new laptop for a while now. My ThinkPad X250 is now 5 years old and even though it’s still adequate in many ways, I tend to run out of memory especially when running a few virtual machines. It only has one memory slot, which I maxed out at 16GB shortly after I got it. Memory has been a problem in considering a new machine. Most new laptops have soldered RAM and local configurations tend to ship with 8GB RAM. Getting a new machine with only a slightly better CPU and even just the same amount of RAM as what I have in the X250 seems a bit wasteful. I was eyeing the Lenovo X13 because it’s a super portable that can take up to 32GB of RAM, and it ships with an AMD Ryzen 4000 series chip which has great performance. With Lenovo’s discount for Debian Developers it became even more attractive. Unfortunately that’s in North America only (at least for now) so that didn’t work out this time.

Enter Tongfang

I’ve been reading a bunch of positive reviews about the Tuxedo Pulse 14 and KDE Slimbook 14. Both look like great AMD laptops, support up to 64GB of RAM and clearly run Linux well. I also noticed that they look quite similar, and after some quick searches it turns out that these are made by Tongfang and that its model number is PF4NU1F.

I also learned that a local retailer (Wootware) sells them as the Wootbook. I’ve seen one of these before, although it was an Intel-based one, but it looked like a nice machine and I was already curious about it back then. After struggling for a while to find a local laptop with a Ryzen CPU that’s nice and compact and breaks the 16GB memory barrier, finding this one that jumped all the way to 64GB sealed the deal for me.

These are the specs for the configuration I got:

  • Ryzen 7 4800H 2.9GHz Octa Core CPU (4MB L2 cache, 8MB L3 cache, 7nm process).
  • 64GB RAM (2x 32GB DDR4 2666MHz modules)
  • 1TB NVMe disk
  • 14″ 1920×1080 (16:9 aspect ratio) matte display.
  • Real ethernet port (gigabit)
  • Intel Wifi 6 AX200 wireless ethernet
  • Magnesium alloy chassis

This configuration cost R18 796 (€947 / $1122). That’s significantly cheaper than anything else I can get that even starts to approach these specs. So this is a cheap laptop, but you wouldn’t think so by using it.

I used the Debian netinstall image to install, and installation was just another uneventful and boring Debian installation (yay!). Unfortunately it needs the firmware-iwlwifi and firmware-amd-graphics packages for the binary blobs that drive the wifi card and GPU. At least it works flawlessly and you don’t need an additional non-free display driver (as is the case with NVidia GPUs). I haven’t tested the graphics extensively yet, but desktop graphics performance is very snappy. This GPU also does fancy stuff like VP8/VP9 encoding/decoding, so I’m curious to see how well it does next time I have to encode some videos. The wifi upgrade was nice for copying files over. My old laptop maxed out at 300Mbps; this one connects to my home network at between 800-1000Mbps. At this speed I don’t bother connecting via cable at home.

I read on Twitter that Tuxedo Computers thinks that it’s possible to bring Coreboot to this device. That would be yet another plus for this machine.

I’ll try to answer some of the questions I had about this device before buying it, which other people in the Debian community might also have if they’re interested in it. Since many of us are familiar with the ThinkPad X200 series of laptops, I’ll compare it a bit to my X250, and also a little to the X13 that I was considering before. Initially, I was a bit hesitant about the 14″ form factor, since I really like the portability of the 12.5″ ThinkPad. But because the screen bezel is a lot smaller, the Wootbook (that just rolls off the tongue a lot better than “the PF4NU1F”) is just slightly wider than the X250. It weighs in at 1.1kg instead of the 1.38kg of the X250. It’s also thinner, so even though it has a larger display, it actually feels a lot more portable. Here’s a picture of my X250 on top of the Wootbook; you can see a few mm of Wootbook sticking out to the right.

Card Reader

One thing that I overlooked when ordering this laptop was that it doesn’t have an SD card reader. I see that some variations have them, like on this Slimbook review. It’s not a deal-breaker for me, I have a USB card reader that’s very light and that I’ll just keep in my backpack. But if you’re ordering one of these machines and have some choice, it might be something to look out for if it’s something you care about.

Keyboard/Touchpad

On to the keyboard. This keyboard isn’t quite as nice to type on as on the ThinkPad, but it’s not bad at all. I type on many different laptop keyboards and I would rank this keyboard very comfortably in the above average range. I’ve been typing on it a lot over the last 3 days (including this blog post) and it started feeling natural very quickly and I’m not distracted by it as much as I thought I would be transitioning from the ThinkPad or my mechanical desktop keyboard. In terms of layout, it’s nice having an actual “Insert” button again. These are things normal users don’t care about, but since I use mc (where insert selects files) this is a welcome return :). I also like that it doesn’t have a Print Screen button at the bottom of the keyboard between alt and ctrl like the ThinkPad has. Unfortunately, it doesn’t have dedicated pgup/pgdn buttons. I use those a lot in apps to switch between tabs. At least the Fn button and the ctrl buttons are next to each other, so pressing those together with up and down to switch tabs isn’t that horrible, but if I don’t get used to it in another day or two I might do some remapping. The touchpad has an extra sensor-button on the top left corner that’s used on Windows to temporarily disable the touchpad. I captured its keyscan codes and it presses left control + keyscan code 93. The airplane mode, volume and brightness buttons work fine.

I do miss the ThinkPad trackpoint. It’s great especially in confined spaces, your hands don’t have to move far from the keyboard for quick pointer operations and it’s nice for doing something quick and accurate. I painted a bit in Krita last night, and agree with other reviewers that the touchpad could do with just a bit more resolution. I was initially disturbed when I noticed that my physical touchpad buttons were gone, but you get right-click by tapping with two fingers, and middle-click by tapping with three fingers. Not quite as efficient as having the real buttons, but it actually works ok. For the most part, the keyboard and touchpad are completely adequate. Only time will tell whether the keyboard still works fine a few years from now, but I really have no serious complaints about it.

Display

The X250 had a brightness of 172 nits. That’s not very bright, I think the X250 has about the dimmest display in the ThinkPad X200 range. This hasn’t been a problem for me until recently, my eyes are very photo-sensitive so most of the time I use it at reduced brightness anyway, but since I’ve been working from home a lot recently, it’s nice to sometimes sit outside and work, especially now that it’s spring time and we have some nice days. At full brightness, I can’t see much on my X250 outside. The Wootbook is significantly brighter (even at less than 50% brightness), although I couldn’t find the exact specification for its brightness online.

Ports

The Wootbook has 3x USB type A ports and 1x USB type C port. That’s already quite luxurious for a compact laptop. As I mentioned in the specs above, it also has a full-sized ethernet socket. On the new X13 (the new ThinkPad machine I was considering), you only get 2x USB type A ports and if you want ethernet, you have to buy an additional adapter that’s quite expensive especially considering that it’s just a cable adapter (I don’t think it contains any electronics).

It has one HDMI port. Initially I was a bit concerned at the lack of DisplayPort (which my X250 has), but with an adapter it's possible to convert the USB-C port to DisplayPort, and it seems like it's possible to connect up to 3 external displays without using something weird like display-over-USB3.

Overall remarks

When maxing out the CPU, the fan is louder than on a ThinkPad; I definitely noticed it while compiling the zfs-dkms module. On the plus side, that happened incredibly fast. Comparing the Wootbook to my X250, its biggest downfall is really its pointing device. It doesn’t have a trackpoint, and the touchpad is ok and completely usable, but not great. I use my laptop on a desk most of the time, so using an external mouse will mostly solve that.

If money were no object, I would definitely choose a maxed out ThinkPad for its superior keyboard/mouse, but the X13 configured with 32GB of RAM and 128GB of SSD retails for just about double what I paid for this machine. It doesn’t seem like you can buy the perfect laptop no matter how much money you want to spend; there’s some compromise no matter what you end up choosing. But this machine packs quite a punch, especially for its price, and so far I’m very happy with my purchase and the incredible performance it provides.

I’m also very glad that Wootware went with the gray/black colours, I prefer that by far to the white and silver variants. It’s also the first laptop I’ve had since 2006 that didn’t come with Windows on it.

The Wootbook is also comfortable/sturdy enough to carry with one hand while open. The ThinkPads are great like this and with many other brands this just feels unsafe. I don’t feel as confident carrying it by its display because it’s very thin (I know, I shouldn’t be doing that with the ThinkPads either, but I’ve been doing that for years without a problem :) ).

There’s also a post on Reddit that tracks where you can buy these machines from various vendors all over the world.

Steinar H. Gunderson: mlocate slowness

14 September, 2020 - 03:15
/usr/bin/mlocate asdf > /dev/null  23.28s user 0.24s system 99% cpu 23.536 total
/usr/local/sbin/mlocate-fast asdf > /dev/null  1.97s user 0.27s system 99% cpu 2.251 total

That is just changing mbsstr to strstr, which I guess causes trouble for EUC_JP locales or something? But guys, scanning linearly through millions of entries is sort of outdated :-(

Russell Coker: Setting Up a Dell T710 Server with DRAC6

13 September, 2020 - 17:57

I’ve just got a Dell T710 server for LUV and I’ve been playing with the configuration options. Here’s a list of what I’ve done and recommendations on how to do things. I decided not to try to write a step-by-step guide as the situation doesn’t really suit that; I think a list of how things are broken and what to avoid is more useful.

BIOS

Firstly, with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won’t want to upgrade the BIOS (why risk breaking something when it’s working), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run; it’s ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so it’s best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so running a distribution upgrade at the same time is a bad idea (Debian is designed to not lose data in this situation so it’s not too bad).
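
A sketch of what running such an update looks like; the file name here is hypothetical, use whatever the Dell support site provides for your system:

# as root, make the downloaded update package executable and run it
chmod +x BIOS_T710_LX_6.6.0.BIN
./BIOS_T710_LX_6.6.0.BIN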

IDRAC

IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the T710 had 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration.

Web Interface

By default IDRAC will run a web server on the IP address it gets from DHCP, you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge T710 recommends IE 6.

To get an SSL cert that matches the name you want to use (and doesn’t give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC; the only field that matters is the CN (Common Name), the rest just have to be something that Letsencrypt will accept. Certbot has the option “--config-dir /etc/letsencrypt-drac” to specify an alternate config directory, as the SSL key for DRAC should be entirely separate from the SSL key for other stuff. Then use the “--csr” option to specify the path of the CSR file. When you run letsencrypt the file name of the output file you want will be in the format “*_chain.pem”. You then have to upload that to IDRAC to have it used. Given the short lifetime of letsencrypt certificates, repeating this by hand is a major pain. Hopefully a more recent version of IDRAC has Certbot built in.
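
A sketch of the certbot invocation, assuming the CSR was saved as drac.csr and that the standalone HTTP challenge can bind port 80:

certbot certonly --config-dir /etc/letsencrypt-drac --csr drac.csr --standalone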

When you access RAC via ssh (see below) you can run the command “racadm sslcsrgen” to generate a CSR that can be used by certbot. So it’s probably possible to write expect scripts to get that CSR, run certbot, and then put the ssl certificate back. I don’t expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I’d script it.

SSH

The web interface allows configuring ssh access which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like “TERM=vt100 ssh root@drac“. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.

There is a limit to the number of open “sessions” for managing IDRAC, when you ssh to the IDRAC you can run “racadm getssninfo” to get a list of sessions and “racadm closessn -i NUM” to close a session. The closessn command takes a “-a” option to close all sessions but that just gives an error saying that you can’t close your own session because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem.

I couldn’t find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn’t get a match so there’s a reasonable chance that it’s unique to the server (please comment if you know more about this).

Don’t Use Graphical Console

The T710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn’t work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running .jnlp files; by default the web browser won’t run these, so you download the .jnlp file and pass it as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was ‘Exception in thread “Timer-0” java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated‘; Google didn’t turn up any solutions to this. Java 8 didn’t have that problem but had a “connection failed” error that some people reported as being related to the SSL key, but replacing the SSL key for the web server didn’t help. The suggestion of running a VM with an old web browser to access IDRAC didn’t appeal. So I gave up on this. Presumably a Windows VM running IE6 would work OK for this.

Serial Console

Fortunately IDRAC supports a serial console. Here’s a page summarising serial console setup for DRAC [1]. Once you have done that, put “console=tty0 console=ttyS1,115200n8” on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console remotely you can ssh in and run the RAC command “console com2” (there is no option for using a different com port). The serial port seems to be unavailable through the web interface.
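
If you also want a login prompt on that console rather than just kernel output, a sketch with systemd (assuming the virtual serial port appears as ttyS1 and a getty isn’t spawned automatically):

systemctl enable --now serial-getty@ttyS1.service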

If I was supporting many Dell systems I’d probably setup a ssh to JavaScript gateway to provide a web based console access. It’s disappointing that Dell didn’t include this.

If you disconnect from an active ssh serial console then the RAC might keep the port locked, then any future attempts to connect to it will give the following error:

/admin1-> console com2
console: Serial Device 2 is currently in use

So far the only way I’ve discovered to get console access again after that is the command “racadm racreset“. If anyone knows of a better way please let me know. As an aside having “racreset” being so close to “racresetcfg” (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea.

Host Based Management
deb http://linux.dell.com/repo/community/ubuntu xenial openmanage

The above apt sources.list line allows installing Dell management utilities (Xenial is old but they work on Debian/Buster). Probably the packages srvadmin-storageservices-cli and srvadmin-omacore will drag in enough dependencies to get it going.
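
A sketch of getting them installed, assuming the repository key has already been accepted:

apt update
apt install srvadmin-storageservices-cli srvadmin-omacore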

Here are some useful commands:

# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0
RAID Control

The RAID controller is known as PERC (PowerEdge Raid Controller), and the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. This is statically linked and appears to have been built with different versions of various libraries. The command “perccli show” gives an overview of the configuration, but the command “perccli /c0 show” to give detailed information on controller 0 SEGVs and the kernel logs a “vsyscall attempted with vsyscall=none” message. Here’s an overview of the vsyscall emulation issue [2]. Basically I could add “vsyscall=emulate” to the kernel command line and slightly reduce security for the system to allow system calls from perccli that are called from old libc code to work, or I could run code from a dubious source as root.

Some versions of IDRAC have a “racadm raid” command that can be run from a ssh session to perform RAID administration remotely; mine doesn’t have that. As an aside, the help for the RAC system doesn’t list all commands and the Dell documentation is difficult to find, so blog posts from other sysadmins are the best source of information.

I have configured IDRAC to send all of the BIOS output to the virtual serial console over ssh so I can see the BIOS prompt me for PERC administration, but it didn’t accept my key presses when I tried to do so. In any case this requires downtime and I’d like to be able to change hard drives without rebooting.

I found vsyscall_trace on Github [3], it uses the ptrace interface to emulate vsyscall on a per process basis. You just run “vsyscall_trace perccli” and it works! Thanks Geoffrey Thomas for writing this!

Here are some perccli commands:

# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

When a disk is added to a PERC controller about 525MB of space is used at the end for RAID metadata. So when you create a RAID-0 with a single device as in the above example all disk data is preserved by default except for the last 525MB. I have tested this by running a BTRFS scrub on a disk from another system after making it into a RAID-0 on the PERC.

Related posts:

  1. New Dell Server My Dell PowerEdge T105 server (as referenced in my previous...
  2. New HP Server I’ve just started work on a new HP server running...
  3. Dell PowerEdge T30 I just did a Debian install on a Dell PowerEdge...

Russ Allbery: PGP::Sign 1.03

13 September, 2020 - 05:28

Part of the continuing saga to clean up CPAN testing failures with this module. Test reports uncovered a tighter version dependency for the flags I was using for GnuPG v2 (2.1.23) and a version dependency for GnuPG v1 (1.4.20). As with the previous release, I'm now skipping tests if the version is too old, since this makes the CPAN test results more useful to me.

I also took advantage of this release to push the Debian packaging to Salsa (and the upstream branch as well since it's easier) and update the package metadata, as well as add an upstream metadata file since interest in that in Debian appears to have picked up again.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

Ryan Kavanagh: Configuring OpenIKED VPNs for StrongSwan Clients

13 September, 2020 - 04:42

A few weeks ago I configured a road warrior VPN setup. The remote end is on a VPS running OpenBSD and OpenIKED, the VPN is an IKEv2 VPN using x509 authentication, and the local end is StrongSwan. I also configured an IKEv2 VPN between my VPSs. Here are the notes for how to do so.

In all cases, to use x509 authentication, you will need to generate a bunch of certificates and keys:

  • a CA certificate
  • a key/certificate pair for each client

Fortunately, OpenIKED provides the ikectl utility to help you do so. Before going any further, you might find it useful to edit /etc/ssl/ikeca.cnf to set some reasonable defaults for your certificates.

Begin by creating and installing a CA certificate:

# ikectl ca vpn create
# ikectl ca vpn install

For simplicity, I am going to assume that you are managing your CA on the same host as one of the hosts that you want to configure for the VPN. If not, see the bit about exporting certificates at the beginning of the section on persistent host-host VPNs.

Create and install a key/certificate pair for your server. Suppose for example your first server is called server1.example.org:

# ikectl ca vpn certificate server1.example.org create
# ikectl ca vpn certificate server1.example.org install
Persistent host-host VPNs

For each other server that you want to use, you need to also create a key/certificate pair on the same host as the CA certificate, and then copy them over to the other server. Assuming the other server is called server2.example.org:

# ikectl ca vpn certificate server2.example.org create
# ikectl ca vpn certificate server2.example.org export

This last command will produce a tarball server2.example.org.tgz. Copy it over to server2.example.org and install it:

# tar -C /etc/iked -xzpvf server2.example.org.tgz

Next, it is time to configure iked. To do so, you will need to find some information about the certificates you just generated. On the host with the CA, run

$ cat /etc/ssl/vpn/index.txt
V       210825142056Z           01      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org
V       210825142208Z           02      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org

Pick one of the two hosts to play the “active” role (in this case, server1.example.org). Using the information you gleaned from index.txt, add the following to /etc/iked.conf, filling in the srcid and dstid fields appropriately.

ikev2 'server1_server2_active' active esp from server1.example.org to server2.example.org \
	local server1.example.org peer server2.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org'

On the other host, add the following to /etc/iked.conf

ikev2 'server2_server1_passive' passive esp from server2.example.org to server1.example.org \
	local server2.example.org peer server1.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org'

Note that the names 'server1_server2_active' and 'server2_server1_passive' in the two stanzas do not matter and can be omitted. Reload iked on both hosts:

# ikectl reload

If everything worked out, you should see the negotiated security associations (SAs) in the output of

# ikectl show sa

On OpenBSD, you should also see some output on success or errors in the file /var/log/daemon.
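
To watch for those messages while bringing the tunnel up:

# tail -f /var/log/daemon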

For a road warrior

Add the following to /etc/iked.conf on the remote end:

ikev2 'responder_x509' passive esp \
	from 0.0.0.0/0 to 10.0.1.0/24 \
	local server1.example.org peer any \
	srcid server1.example.org \
	config address 10.0.1.0/24 \
	config name-server 10.0.1.1 \
	tag "ROADW"

Configure or omit the address range and the name-server configurations to suit your needs. See iked.conf(5) for details. Reload iked:

# ikectl reload

If you are on OpenBSD and want the remote end to have an IP address, add the following to /etc/hostname.vether0, again configuring the address to suit your needs:

inet 10.0.1.1 255.255.255.0

Put the interface up:

# ifconfig vether0 up

Now create a client certificate for authentication. In my case, my road-warrior client was client.example.org:

# ikectl ca vpn certificate client.example.org create
# ikectl ca vpn certificate client.example.org export

Copy client.example.org.tgz to client and run

# tar -C /etc/ipsec.d/ -xzf client.example.org.tgz -- \
	./private/client.example.org.key \
	./certs/client.example.org.crt ./ca/ca.crt

Install StrongSwan and add the following to /etc/ipsec.conf, configuring appropriately:

ca example.org
  cacert=ca.crt
  auto=add

conn server1
  keyexchange=ikev2
  right=server1.example.org
  rightid=%server1.example.org
  rightsubnet=0.0.0.0/0
  rightauth=pubkey
  leftsourceip=%config
  leftauth=pubkey
  leftcert=client.example.org.crt
  auto=route

Add the following to /etc/ipsec.secrets:

# space is important
server1.example.org : RSA client.example.org.key

Restart StrongSwan, put the connection up, and check its status:

# ipsec restart
# ipsec up server1
# ipsec status

That should be it.

Sources:

Markus Koschany: My Free Software Activities in August 2020

12 September, 2020 - 14:26

Welcome to gambaru.de. Here is my monthly report (+ the first week in September) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • I packaged a new upstream release of teeworlds, the well-known 2D multiplayer shooter with cute characters called tees, to resolve a Python 2 bug (although teeworlds is actually a C++ game). The update also fixed a severe remote denial-of-service security vulnerability, CVE-2020-12066. I prepared a patch for Buster and will send it to the security team later today.
  • I sponsored updates of mgba, a Game Boy Advance emulator, for Ryan Tandy, and osmose-emulator for Carlos Donizete Froes.
  • I worked around an RC GCC 10 bug in megaglest by compiling with -fcommon (see the sketch after this list).
  • Thanks to Gerardo Ballabio who packaged a new upstream version of galois which I uploaded for him.
  • Also thanks to Reiner Herrmann and Judit Foglszinger who fixed a regression (crash) in monsterz due to the earlier port to Python 3. Reiner also made fans of supertuxkart happy by packaging the latest upstream release version 1.2.
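
A sketch of that kind of workaround in debian/rules, assuming the package uses dpkg-buildflags via debhelper:

# append -fcommon to the compiler flags to paper over multiple-definition errors
export DEB_CFLAGS_MAINT_APPEND = -fcommon
export DEB_CXXFLAGS_MAINT_APPEND = -fcommon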
Debian Java Misc
  • I was contacted by the upstream maintainer of privacybadger, a privacy addon for Firefox and Chromium, who dislikes the idea of having a stable and unchanging version in Debian stable releases. Obviously I can’t really do much about it, although I believe the release team would be open-minded about regular point updates of browser addons. However I don’t intend to do regular updates for all of my packages in stable unless there is a really good reason to do so. At the moment I’m willing to make an exception for ublock-origin and https-everywhere because I feel these addons should be core browser functionality anyway. I talked about this on our Debian Mozilla Extension Maintainers mailing list and it seems someone is interested in taking over privacybadger and preparing regular stable point updates. Let’s see how it turns out.
  • Finally this month saw the release of ublock-origin 1.29.0 and the creation of two different browser-specific binary packages for Firefox and Chromium. I have talked about this before and I believe two separate packages for ublock-origin are more aligned to upstream development and make the whole addon easier to maintain, which benefits users, upstream and maintainers.
  • imlib2, an image library, and binaryen also got updated this month.
Debian LTS

This was my 54th month as a paid contributor and I have been paid to work 20 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2303-1. Issued a security update for libssh fixing 1 CVE.
  • DLA-2327-1. Issued a security update for lucene-solr fixing 1 CVE.
  • DLA-2369-1. Issued a security update for libxml2 fixing 8 CVE.
  • Triaged CVE-2020-14340, jboss-xnio as not-affected for Stretch.
  • Triaged CVE-2020-1394, lucene-solr as no-dsa because the security impact was minor.
  • Triaged CVE-2019-17638, jetty9 as not-affected for Stretch and Buster.
  • squid3: I backported the patches for CVE-2020-15049, CVE-2020-15810, CVE-2020-15811 and CVE-2020-24606 from squid 4 to squid 3.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 8 “Jessie”. This was my 27th month, and I have been paid to work 14.25 hours on ELTS.

  • ELA-271-1. Issued a security update for squid3 fixing 19 CVEs. Most of the work was already done before ELTS started; only the patch for CVE-2019-12529 had to be adjusted for the nettle version in Jessie.
  • ELA-273-1. Issued a security update for nss fixing 1 CVE.
  • ELA-276-1. Issued a security update for libjpeg-turbo fixing 2 CVEs.
  • ELA-277-1. Issued a security update for graphicsmagick fixing 1 CVE.
  • ELA-279-1. Issued a security update for imagemagick fixing 3 CVEs.
  • ELA-280-1. Issued a security update for libxml2 fixing 4 CVEs.

Thanks for reading and see you next time.

Jelmer Vernooij: Debian Janitor: All Packages Processed with Lintian-Brush

12 September, 2020 - 09:15

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

On 12 July 2019, the Janitor started fixing lintian issues in packages in the Debian archive. Now, a little over a year later, it has processed every one of the almost 28,000 packages at least once.

As discussed two weeks ago, this has resulted in roughly 65,000 total changes, made to almost 17,000 distinct packages. Of the remaining packages, lintian-brush could not find any improvements to make in about 4,500. The rest (about 6,500) failed to be processed for one of many reasons: they are e.g. not yet migrated off alioth, use uncommon formatting that can't be preserved, or failed to build for one reason or another.

Now that the entire archive has been processed, packages are prioritized based on the likelihood of a change being made to them successfully.

Over the course of its existence, the Janitor has slowly gained support for a wider variety of packaging methods. For example, it can now edit the templates for some of the generated control files. Many of the packages that the janitor was unable to propose changes for the first time around are expected to be correctly handled when they are reprocessed.

If you’re a Debian developer, you can find the list of improvements made by the janitor in your packages by going to https://janitor.debian.net/m/$EMAIL.
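
If you want to preview these kinds of fixes on your own packages, you can run the same tool locally; a minimal sketch, assuming the package is maintained in Git (mypackage is a placeholder):

$ sudo apt install lintian-brush
$ debcheckout mypackage && cd mypackage
$ lintian-brush

lintian-brush applies whichever fixers it can and records each fix as a separate commit, which makes the result easy to review before pushing.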

For more information about the Janitor's lintian-fixes efforts, see the landing page.

Petter Reinholdtsen: Buster update of Norwegian Bokmål edition of Debian Administrator's Handbook almost done

11 September, 2020 - 14:45

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation of "The Debian Administrator's Handbook" is now almost completed. After many months of proof reading, I consider it complete enough for us to move to the next step, and have asked for the print version to be prepared and sent off to the print-on-demand service lulu.com. It is still not too late if you find any incorrect translations on the hosted Weblate service, but it will be soon. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on lulu.com and various web book stores, with links available from the web site for the book linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Louis-Philippe Véronneau: Hire me!

11 September, 2020 - 13:15

I'm happy to announce I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet1, as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas!

It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university.

Looking for a job

What an odd time to be looking for a job, right? Turns out for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more!

I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open.

For the last 6 years, I worked as a Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam.

I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 in Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it...

If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy!

Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid putting that in application, but hey, who knows.

If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)

  1. The title of the thesis is What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model. Once the final copy is approved, I'll be sure to write a longer blog post about my findings here. 

Reproducible Builds (diffoscope): diffoscope 160 released

11 September, 2020 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 160. This version includes the following changes:

* Check that pgpdump is actually installed before attempting to run it.
  Thanks to Gianfranco Costamagna (locutusofborg). (Closes: #969753)
* Add some documentation for the EXTERNAL_TOOLS dictionary.
* Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump
  against all files that are recognised by file(1) as "data".

You can find out more by visiting the project homepage.

Daniel Silverstone: Broccoli Sync Conversation

10 September, 2020 - 14:49
Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), Lars, Mark, Vince, and I discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected, but it was an interesting discussion of compression and streamed data.

Vince observed that it was an interesting way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to use available bandwidth (i.e. not be CPU throttled) but also gain the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests that it's worth putting in the effort for level 5 rather than level 1. It's interesting in that diagram that 'documents' don't compress well, but then again it is notable that such documents are likely DEFLATE'd zip files. Basically, if the data is already compressed then there's little hope Brotli will gain much.
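
You can get a feel for that trade-off with the brotli command-line tool (a minimal sketch; corpus.tar is a placeholder for any sample file):

$ brotli -q 1 -c corpus.tar | wc -c
$ brotli -q 5 -c corpus.tar | wc -c

On input that is already compressed, such as zip-based document formats, the two numbers come out nearly identical, which matches the 'documents' observation above.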

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code have unexpected memory safety issues, so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code, which was interesting.

Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage, and that while they make it clear in the article that encryption, client identification etc. are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

Reproducible Builds: Reproducible Builds in August 2020

10 September, 2020 - 01:00

Welcome to the August 2020 report from the Reproducible Builds project.

In our monthly reports, we summarise the things that we have been up to over the past month. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. If you’re interested in contributing to the project, please visit our main website.



This month, Jennifer Helsby launched a new reproduciblewheels.com website to address the lack of reproducibility of Python wheels.

To quote Jennifer’s accompanying explanatory blog post:

One hiccup we’ve encountered in SecureDrop development is that not all Python wheels can be built reproducibly. We ship multiple (Python) projects in Debian packages, with Python dependencies included in those packages as wheels. In order for our Debian packages to be reproducible, we need that wheel build process to also be reproducible.

Parallel to this, transparencylog.com was also launched, a service that verifies the contents of URLs against a publicly recorded cryptographic log. It keeps an append-only log of the cryptographic digests of all URLs it has seen. (GitHub repo)
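
The digests being logged are ordinary content hashes, the kind you can compute yourself with standard tools (a minimal sketch; the URL is a placeholder):

$ curl -sL https://www.example.org/release-1.0.tar.gz | sha256sum

The value of the service is the append-only property: once a digest has been recorded for a URL, any later change to the content served there becomes detectable.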

On 18th September, Bernhard M. Wiedemann will give a presentation in German, titled Wie reproducible builds Software sicherer machen (“How reproducible builds make software more secure”) at the Internet Security Digital Days 2020 conference.


Reproducible builds at DebConf20

There were a number of talks at the recent online-only DebConf20 conference on the topic of reproducible builds.

Holger gave a talk titled “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from ftp.debian.org are made from their claimed sources. It also served as a general update on the status of reproducible builds within Debian. The video (145 MB) and slides are available.

There were also a number of other talks that involved Reproducible Builds too. For example, the Malayalam language mini-conference had a talk titled എനിയ്ക്കും ഡെബിയനില്‍ വരണം, ഞാന്‍ എന്തു് ചെയ്യണം? (“I want to join Debian, what should I do?”) presented by Praveen Arimbrathodiyil, the Clojure Packaging Team BoF session led by Elana Hashman, as well as Where is Salsa CI right now? that was on the topic of Salsa, the collaborative development server that Debian uses to provide the necessary tools for package maintainers, packaging teams and so on.

Jonathan Bustillos (Jathan) also gave a talk in Spanish titled Un camino verificable desde el origen hasta el binario (“A verifiable path from source to binary”). (Video, 88MB)


Development work

After many years of development work, the compiler for the Rust programming language now generates reproducible binary code. This generated some general discussion on Reddit on the topic of reproducibility in general.

Paul Spooren posted a ‘request for comments’ to OpenWrt’s openwrt-devel mailing list asking for clarification on when to raise the PKG_RELEASE identifier of a package. This is needed in order to successfully perform rebuilds in a reproducible builds context.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Chris Lamb provided some comments and pointers on an upstream issue regarding the reproducibility of a Snap / SquashFS archive file. []

Debian

Holger Levsen identified that a large number of Debian .buildinfo build certificates have been “tainted” on the official Debian build servers, as these environments have files underneath the /usr/local/sbin directory []. He also filed a bug against debrebuild after spotting that it can fail to download packages from snapshot.debian.org [].

This month, several issues were uncovered (or assisted) due to the efforts of reproducible builds.

For instance, Debian bug #968710 was filed by Simon McVittie, which describes a problem with detached debug symbol files (required to generate a traceback) that is unlikely to have been discovered without reproducible builds. In addition, Jelmer Vernooij pointed out that the new Debian Janitor tool is using the property of reproducibility (as well as diffoscope) when applying archive-wide changes to Debian:

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds. []

56 reviews of Debian packages were added, 38 were updated and 24 were removed this month, adding to our knowledge about identified issues. Specifically, Chris Lamb added and categorised the nondeterministic_version_generated_by_python_param and the lessc_nondeterministic_keys toolchain issues. [][]

Holger Levsen sponsored Lukas Puehringer’s upload of the python-securesystemslib package, which is a dependency of in-toto, a framework to secure the integrity of software supply chains. []

Lastly, Chris Lamb further refined his merge request against the debian-installer component to allow all arguments from sources.list files (such as [check-valid-until=no]) so that we can test the reproducibility of the installer images on the Reproducible Builds' own testing infrastructure, and sent a ping to the team that maintains that code.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that not only locates and diagnoses reproducibility issues, but also provides human-readable diffs of all kinds (a quick usage sketch follows the changelog below). In August, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting data from PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. []
  • Bug fixes:

    • Don’t raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. []
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files, as file(1) returns “data”. (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. []
    • Don’t use Python’s repr(object) output in “Calling external command” messages. []
    • Include the filename in the “… not identified by any comparator” message. []
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. []
    • Drop some unused imports [], an unnecessary dictionary comprehension [] and some unnecessary control flow [].
    • Correct typo of “output” in a comment. []
  • Release process:

    • Move generation of debian/tests/control to an external script. []
    • Add some URLs for the site that will appear on PyPI.org. []
    • Update “author” and “author email” in setup.py for PyPI.org and similar. []
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124)
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. []
    • Add an assert_diff helper that loads and compares a fixture output. [][][][]

In addition, Mattia Rizzolo documented in setup.py that diffoscope works with Python version 3.8 [] and Frazer Clews applied some Pylint suggestions [] and removed some deprecated methods [].
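
Here is the usage sketch mentioned above (the filenames are placeholders; diffoscope accepts two files, directories or archives of almost any kind):

$ diffoscope first.deb second.deb
$ diffoscope --html report.html first.deb second.deb

By default a text diff is written to standard output; --html writes a standalone report instead.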

Website

This month, Chris Lamb updated the main Reproducible Builds website and documentation to:

  • Clarify & fix a few entries on the “who” page [][] and ensure that images do not get to large on some viewports [].
  • Clarify use of a pronoun re. Conservancy. []
  • Use “View all our monthly reports” over “View all monthly reports”. []
  • Move an “is a” suffix out of the link target on the SOURCE_DATE_EPOCH page. []

In addition, Javier Jardón added the freedesktop-sdk project [] and Kushal Das added the SecureDrop project [] to our projects page. Lastly, Michael Pöhn added internationalisation and translation support with help from Hans-Christoph Steiner [].

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework to power tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • System health checks:

    • Improve the explanation of how the status and scores are calculated. [][]
    • Update and condense view of detected issues. [][]
    • Query the canonical configuration file to determine whether a job is disabled instead of duplicating/hardcoding this. []
    • Detect several problems when updating the status of reporting-oriented ‘metapackage’ sets. []
    • Detect when diffoscope is not installable [] and failures in DNS resolution [].
  • Debian:

    • Update the URL to the Debian security team bug tracker’s Git repository. []
    • Reschedule the unstable and bullseye distributions often for the arm64 architecture. []
    • Schedule buster less often for armhf. [][][]
    • Force the build of certain packages in the work-in-progress package rebuilder. [][]
    • Only update the stretch and buster base build images when necessary. []
  • Other distributions:

    • For F-Droid, trigger jobs by commits, not by a timer. []
    • Disable the Archlinux HTML page generation job as it has never worked. []
    • Disable the alternative OpenWrt rebuilder jobs. []

Many other changes were made too.

Finally, build node maintenance was performed by Holger Levsen [], Mattia Rizzolo [][] and Vagrant Cascadian [][][][].

Mailing list

On our mailing list this month, Leo Wandersleb sent a message to the list wondering how to expand his WalletScrutiny.com project (which aims to improve the security of Bitcoin wallets) from Android wallets to Linux wallets as well:

If you think you know how to spread the word about reproducibility in the context of Bitcoin wallets through WalletScrutiny, your contributions are highly welcome on this PR []

Julien Lepiller posted to the list linking to a blog post by Tavis Ormandy titled You don’t need reproducible builds. Morten Linderud (foxboron) responded with a clear rebuttal that Tavis was only considering the narrow use-case of proprietary vendors and closed-source software. He additionally noted that the criticism that reproducible builds cannot protect against backdoors deliberately introduced into the upstream source (“bugdoors”) is decidedly (and deliberately) outside the scope of reproducible builds to begin with.

Chris Lamb included the Reproducible Builds mailing list in a wider discussion regarding a tentative proposal to include .buildinfo files in .deb packages, adding his remarks regarding requiring a custom tool in order to determine whether generated build artifacts are ‘identical’ in a reproducible context. []

Jonathan Bustillos (Jathan) posted a quick email to the list asking whether there was a list of “to do” tasks in Reproducible Builds.

Lastly, Chris Lamb responded at length to a query regarding the status of reproducible builds for Debian ISO or installation images. He noted that most of the technical work has been performed but “there are at least four issues until they can be generally advertised as such”. He pointed out that the privacy-oriented Tails operating system, which is based directly on Debian, has had reproducible builds for a number of years now. []


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website, or get in touch with us directly.
