Planet Debian

Planet Debian - https://planet.debian.org/

Kees Cook: security things in Linux v5.4

19 February, 2020 - 07:37

Previously: v5.3.

Linux kernel v5.4 was released in late November. The holidays got the best of me, but better late than never! ;) Here are some security-related things I found interesting:

waitid() gains P_PIDFD
Christian Brauner has continued his pidfd work by adding a critical mode to waitid(): P_PIDFD. This makes it possible to reap child processes via a pidfd, and completes the interfaces needed for the bulk of programs performing process lifecycle management. (i.e. a pidfd can come from /proc or clone(), and can be waited on with waitid().)
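
As a hedged illustration (mine, not from the original post), the complete flow on v5.4 looks roughly like this: get a pidfd from pidfd_open() (added in v5.3), then reap the child with waitid(P_PIDFD, ...). It assumes your libc headers provide SYS_pidfd_open; the P_PIDFD fallback value below matches the v5.4 kernel headers.

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef P_PIDFD
#define P_PIDFD 3        /* value from the v5.4 kernel headers */
#endif

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0)
        _exit(42);       /* child: exit immediately */

    /* pidfd_open() (v5.3) returns an fd referring to the child */
    int pidfd = (int) syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0) { perror("pidfd_open"); exit(1); }

    /* reap the child via its pidfd rather than its numeric pid */
    siginfo_t info;
    if (waitid(P_PIDFD, (id_t) pidfd, &info, WEXITED) < 0) {
        perror("waitid");
        exit(1);
    }
    printf("child %d exited with status %d\n",
           (int) info.si_pid, info.si_status);
    return 0;
}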

kernel lockdown
After something on the order of 8 years, Linux can now draw a bright line between “ring 0” (kernel memory) and “uid 0” (highest privilege level in userspace). The “kernel lockdown” feature, which has been an out-of-tree patch series in most Linux distros for almost as many years, attempts to enumerate all the intentional ways (i.e. interfaces, not flaws) userspace might be able to read or modify kernel memory (or execute in kernel space), and disable them. While Matthew Garrett made the internal details controllable in a fine-grained way, the basic lockdown LSM can be set to either disabled, “integrity” (kernel memory can be read but not written), or “confidentiality” (no kernel memory reads or writes). Beyond closing the many holes between userspace and the kernel, if new interfaces are added to the kernel that might violate kernel integrity or confidentiality, there is now a place to put the access control to make everyone happy, and there doesn’t need to be a rehashing of the age-old fight between “but root has full kernel access” and “not in some system configurations”.
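
As an aside (my sketch, not from the post): on a kernel with the lockdown LSM, the current mode can be read back from securityfs, with the active mode shown in brackets. This assumes securityfs is mounted at /sys/kernel/security.

#include <stdio.h>

int main(void)
{
    char line[128];
    FILE *f = fopen("/sys/kernel/security/lockdown", "r");

    if (!f) {
        perror("fopen");   /* no lockdown support, or securityfs not mounted */
        return 1;
    }
    if (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* e.g. "none [integrity] confidentiality" */
    fclose(f);
    return 0;
}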

tagged memory relaxed syscall ABI
Andrey Konovalov (with Catalin Marinas and others) introduced a way to enable a “relaxed” tagged memory syscall ABI in the kernel. This means programs running on hardware that supports memory tags (or “versioning”, or “coloring”) in the upper (non-VMA) bits of a pointer address can use these addresses with the kernel without things going crazy. This is effectively teaching the kernel to ignore these high bits in places where they make no sense (i.e. mathematical comparisons) and keeping them in place where they have meaning (i.e. pointer dereferences).

As an example, if a userspace memory allocator had returned the address 0x0f00000010000000 (VMA address 0x10000000, with, say, a “high bits” tag of 0x0f), and a program used this range during a syscall that ultimately called copy_from_user() on it, the initial range check would fail if the tag bits were left in place: “that’s not a userspace address; it is greater than TASK_SIZE (0x0000800000000000)!”, so they are stripped for that check. During the actual copy into kernel memory, the tag is left in place so that when the hardware dereferences the pointer, the pointer tag can be checked against the expected tag assigned to the referenced memory region. If there is a mismatch, the hardware will trigger the memory tagging protection.
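
To make the arithmetic concrete, here is a small userspace illustration (a sketch, not kernel code) that splits the example address into its tag and VMA address, assuming an 8-bit tag in the top byte as with ARM TBI:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define TAG_SHIFT 56                        /* assume an 8-bit top-byte tag */
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

int main(void)
{
    uint64_t tagged = 0x0f00000010000000ULL;
    uint64_t tag    = tagged >> TAG_SHIFT;  /* 0x0f */
    uint64_t vma    = tagged & ADDR_MASK;   /* 0x10000000 */

    printf("tag 0x%02" PRIx64 ", VMA address 0x%" PRIx64 "\n", tag, vma);
    return 0;
}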

Right now programs running on Sparc M7 CPUs with ADI (Application Data Integrity) can use this for hardware tagged memory, ARMv8 CPUs can use TBI (Top Byte Ignore) for software memory tagging, and eventually there will be ARMv8.5-A CPUs with MTE (Memory Tagging Extension).

boot entropy improvement
Thomas Gleixner got fed up with poor boot-time entropy and trolled Linus into coming up with a reasonable way to add entropy on modern CPUs, taking advantage of timing noise, cycle counter jitter, and perhaps even the variability of speculative execution. This means that there shouldn’t be mysterious multi-second (or multi-minute!) hangs at boot when some systems don’t have enough entropy to service getrandom() syscalls from systemd or the like.
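
For context, this is the syscall whose early-boot blocking was the problem (my example; it assumes glibc 2.25 or later for <sys/random.h>):

#include <sys/types.h>
#include <sys/random.h>
#include <stdio.h>

int main(void)
{
    unsigned char buf[16];
    /* blocks until the kernel's CRNG is initialized (no GRND_NONBLOCK) */
    ssize_t n = getrandom(buf, sizeof buf, 0);

    if (n < 0) {
        perror("getrandom");
        return 1;
    }
    printf("got %zd random bytes\n", n);
    return 0;
}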

userspace writes to swap files blocked
From the department of “how did this go unnoticed for so long?”, Darrick J. Wong fixed the kernel to not allow writes from userspace to active swap files. Without this, it was possible for a user (usually root) with write access to a swap file to modify its contents, thereby changing memory contents of a process once it got paged back in. While root normally could just use CAP_PTRACE to modify a running process directly, this was a loophole that allowed lesser-privileged users (e.g. anyone in the “disk” group) without the needed capabilities to still bypass ptrace restrictions.

limit strscpy() sizes to INT_MAX
Generally speaking, if a size variable ends up larger than INT_MAX, some calculation somewhere has overflowed. And even if not, it’s probably going to hit code somewhere nearby that won’t deal well with the result. As already done in the VFS core and vsprintf(), I added a check to strscpy() to reject sizes larger than INT_MAX.
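
Here is a userspace sketch of the shape of that check (mine; the kernel's strscpy() differs in detail and returns -E2BIG rather than -1):

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static ssize_t my_strscpy(char *dest, const char *src, size_t count)
{
    /* a size above INT_MAX almost certainly came from an overflow */
    if (count == 0 || count > INT_MAX)
        return -1;

    size_t len = strnlen(src, count - 1);
    memcpy(dest, src, len);
    dest[len] = '\0';
    return (ssize_t) len;
}

int main(void)
{
    char buf[8];
    printf("%zd %s\n", my_strscpy(buf, "hello", sizeof buf), buf);
    return 0;
}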

ld.gold support removed
Thomas Gleixner removed support for the gold linker. While this isn’t providing a direct security benefit, ld.gold has been a constant source of weird bugs. Specifically where I’ve noticed, it had been a pain while developing KASLR, and has more recently been causing problems while stabilizing building the kernel with Clang. Having this linker support removed makes things much easier going forward. There are enough weird bugs to fix in Clang and ld.lld. ;)

Intel TSX disabled
Given the use of Intel’s Transactional Synchronization Extensions (TSX) CPU feature by attackers to exploit speculation flaws, Pawan Gupta disabled the feature by default on CPUs that support disabling TSX.

That’s all I have for this version. Let me know if I missed anything. :) Next up is Linux v5.5!

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Daniel Silverstone: Subplot volunteers? (Acceptance testing tool)

19 February, 2020 - 03:24

Note: This is a repost from Lars' blog made to widen the reach and hopefully find the right interested parties.

Would you be willing to try Subplot for acceptance testing for one of your real projects, and give us feedback? We're looking for two volunteers.

given a project
when it uses Subplot
then it is successful

Subplot is a tool for capturing and automatically verifying the acceptance criteria for a software project or a system, in a way that's understood by all stakeholders.

In a software project there is always more than one stakeholder. Even in a project one writes for oneself, there are two stakeholders: oneself, and that malicious cretin oneself-in-the-future. More importantly, though, there are typically stakeholders such as end users, sysadmins, clients, software architects, developers, and testers. They all need to understand what the software should do, and when it's in an acceptable state to be put into use: in other words, what the acceptance criteria are.

Crucially, all stakeholders should understand the acceptance criteria the same way, and also how to verify they are met. In an ideal situation, all verification is automated, and happens very frequently.

There are various tools for this, from generic documentation tooling (word processors, text editors, markup languages, etc) to test automation (Cucumber, Selenium, etc). On the one hand, documenting acceptance criteria in a way that all stakeholders understand is crucial: otherwise the end users are at risk of getting something that's not useful to help them, and the project is a waste of everyone's time and money. On the other hand, automating the verification of how acceptance criteria are met is also crucial: otherwise it's done manually, which is slow, costly, and error prone, which increases the risk of project failure.

Subplot aims to solve this by an approach that combines documentation tooling with automated verification.

  • The stakeholders in a project jointly produce a document that captures all relevant acceptance criteria and also describes how they can be verified automatically, using scenarios. The document is written using Markdown.

  • The developer stakeholders produce code to implement the steps in the scenarios. The Subplot approach allows the step implementations to be done in a highly cohesive, de-coupled manner, which usually keeps such code quite simple. (Test code should be your best code.)

  • Subplot's "docgen" program produces a typeset version as PDF or HTML. This is meant to be easily comprehensible by all stakeholders.

  • Subplot's "codegen" program produces a test program in the language used by the developer stakeholders. This test program can be run to verify that acceptance criteria are met.

Subplot started in late 2018, and was initially called Fable. It is based on the yarn tool for the same purpose, from 2013. Yarn has been in active use all its life, if not popular outside a small circle. Subplot improves on yarn in document generation, markup, and decoupling of concerns. Subplot is not compatible with yarn.

Subplot is developed by Lars Wirzenius and Daniel Silverstone as a hobby project. It is free software, implemented in Rust, developed on Debian, and uses Pandoc and LaTeX for typesetting. The code is hosted on gitlab.com. Subplot verifies its own acceptance criteria. It is alpha level software.

We're looking for one or two volunteers to try Subplot on real projects of their own, and give us feedback. We want to make Subplot good for its purpose, also for people other than us. If you'd be willing to give it a try, start with the Subplot website, then tell us you're using Subplot. We're happy to respond to questions from the first two volunteers, and from others, time permitting. (The reality of life and time constraints is that we can't commit to supporting more people at this time.)

We'd love your feedback, whether you use Subplot or not.

Mike Gabriel: MATE 1.24 landed in Debian unstable

18 February, 2020 - 17:03

Last week, Martin Wimpress (from Ubuntu MATE) and I did a 2.5-day packaging sprint and after that I bundle-uploaded all MATE 1.24 related components to Debian unstable. Thus, MATE 1.24 landed in Debian unstable only four days after the upstream release. I think this was the fastest version bump of MATE in Debian ever.

Packages should have been built by now for most of the 22 architectures supported by Debian. The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Please also refer to the MATE 1.24 upstream release notes for details on what's new and what's changed [2].

Credits

One big thanks goes to Martin Wimpress. Martin and I worked on all the related packages hand in hand. Only this teamwork made this very fast upload possible. Martin especially found the fix for a flaw in Python Caja that caused all Python3-based Caja extensions to fail in Caja 1.24 / Python Caja 1.24. Well done!

Another big thanks goes to the MATE upstream team. You again did an awesome job, folks. Much, much appreciated.

Last but not least, a big thanks goes to Svante Signell for providing Debian architecture specific patches for Debian's non-Linux distributions (GNU/Hurd, GNU/kFreeBSD). We will wait now until all MATE 1.24 packages have initially migrated to Debian testing and then upload his fixes as a follow-up. As in the past, MATE shall be available on as many Debian architectures as possible (ideally: all of them). That said, all Debian porters are invited to send us patches if they see components of MATE Desktop fail on not-so-common architectures.

References

light+love,
Mike Gabriel (aka sunweaver)

Keith Packard: more-iterative-splines

18 February, 2020 - 14:41
Slightly Better Iterative Spline Decomposition

My colleague Bart Massey (who is a CS professor at Portland State University) reviewed my iterative spline algorithm article and had an insightful comment — we don't just want any spline decomposition which is flat enough, what we really want is a decomposition for which every line segment is barely within the specified flatness value.

My initial approach was to keep halving the length of the spline segment until it was flat enough. This definitely generates a decomposition which is flat enough everywhere, but some of the segments will be shorter than they need to be, by as much as a factor of two.

As we'll be taking the resulting spline and doing a lot more computation with each segment, it makes sense to spend a bit more time finding a decomposition with fewer segments.

The Initial Search

Here's how the first post searched for a 'flat enough' spline section:

t = 1.0f;

/* Iterate until s1 is flat */
do {
    t = t/2.0f;
    _de_casteljau(s, s1, s2, t);
} while (!_is_flat(s1));
Bisection Method

What we want to do is find an approximate solution for the function:

flatness(t) = tolerance

We'll use the Bisection method to find the value of t for which the flatness is no larger than our target tolerance, but is at least as large as tolerance - ε, for some reasonably small ε.

float       hi = 1.0f;
float       lo = 0.0f;

/* Search for an initial section of the spline which
 * is flat, but not too flat
 */
for (;;) {

    /* Average the lo and hi values for our
     * next estimate
     */
    float t = (hi + lo) / 2.0f;

    /* Split the spline at the target location
     */
    _de_casteljau(s, s1, s2, t);

    /* Compute the flatness and see if s1 is flat
     * enough
     */
    float flat = _flatness(s1);

    if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

        /* Stop looking when s1 is close
         * enough to the target tolerance
         */
        if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
            break;

        /* Flat: t is the new lower interval bound */
        lo = t;
    } else {

        /* Not flat: t is the new upper interval bound */
        hi =  t;
    }
}

This searches for a place to split the spline where the initial portion is flat but not too flat. I set SNEK_FLAT_TOLERANCE to 0.01, so we'll pick segments which have flatness between 0.49 and 0.50.
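
Concretely, since SCALE_FLAT(f) expands to 16×f² (see the code below), the inner loop accepts a segment exactly when:

16 × 0.49² = 3.8416 ≤ flat ≤ 16 × 0.50² = 4.0

which corresponds to a true flatness between 0.49 and 0.50.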

The benefit from the search is pretty easy to understand by looking at the number of points generated compared with the number of _de_casteljau and _flatness calls:

Search   Calls   Points
Simple     150       33
Bisect     229       25

And here's an image comparing the two:

A Closed Form Approach?

Bart also suggests attempting to find an analytical solution to decompose the spline. What we need to do is take the flatness function and find the spline which makes it equal to the desired flatness. If the spline control points are a, b, c, and d, then the flatness function is:

ux = (3×b.x - 2×a.x - d.x)²
uy = (3×b.y - 2×a.y - d.y)²
vx = (3×c.x - 2×d.x - a.x)²
vy = (3×c.y - 2×d.y - a.y)²

flat = max(ux, vx) + max(uy, vy)

When the spline is split into two pieces, all of the control points for the new splines are determined by the original control points and the 't' value which sets where the split happens. What we want is to find the 't' value which makes the flat value equal to the desired tolerance. Given that the binary search runs De Casteljau and the flatness function almost 10 times for each generated point, there's a lot of opportunity to go faster with a closed form solution.

Update: Fancier Method Found!

Bart points me at two papers:

  1. Flattening quadratic Béziers by Raph Levien
  2. Precise Flattening of Cubic Bézier Segments by Thomas F. Hain, Athar L. Ahmad, and David D. Langan

Levien's paper offers a great solution for quadratic Béziers by directly computing the minimum set of line segments necessary to approximate within a specified flatness. However, it doesn't generalize to cubic Béziers.

Hain, Ahmad and Langan do provide a directly computed decomposition of a cubic Bézier. This is done by constructing a parabolic approximation to the first portion of the spline and finding a 't' value which produces the desired flatness. There are a pile of special cases to deal with when there isn't a good enough parabolic approximation. But, overall computational cost is lower than a straightforward binary decomposition, plus there's no recursion required.

This second algorithm has the same characteristic as my Bisection method, in that the last segment may have any flatness from zero through the specified tolerance; Levien's solution is neater in that it generates line segments of similar flatness across the whole spline.

Current Implementation
/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>
#include <math.h>

typedef float point_t[2];
typedef point_t spline_t[4];

uint64_t num_flats;
uint64_t num_points;

#define SNEK_DRAW_TOLERANCE 0.5f
#define SNEK_FLAT_TOLERANCE 0.01f

/*
 * This actually returns flatness² * 16,
 * so we need to compare against scaled values
 * using the SCALE_FLAT macro
 */
static float
_flatness(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    ++num_flats;

    /*
     * If we wanted to return the true flatness, we'd use:
     *
     * return sqrtf((ux + uy)/16.0f)
     */
    return ux + uy;
}

/* Convert constants to values usable with _flatness() */
#define SCALE_FLAT(f)   ((f) * (f) * 16.0f)

/*
 * Linear interpolate from a to b using distance t (0 <= t <= 1)
 */
static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;
    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

/*
 * Split 's' into two splines at distance t (0 <= t <= 1)
 */
static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

/*
 * Decompose 's' into straight lines which are
 * within SNEK_DRAW_TOLERANCE of the spline
 */
static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    /* Start at the beginning of the spline. */
    (*draw)(s[0][0], s[0][1]);

    /* Split the spline until it is flat enough */
    while (_flatness(s) > SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
        spline_t    s1, s2;
        float       hi = 1.0f;
        float       lo = 0.0f;

        /* Search for an initial section of the spline which
         * is flat, but not too flat
         */
        for (;;) {

            /* Average the lo and hi values for our
             * next estimate
             */
            float t = (hi + lo) / 2.0f;

            /* Split the spline at the target location
             */
            _de_casteljau(s, s1, s2, t);

            /* Compute the flatness and see if s1 is flat
             * enough
             */
            float flat = _flatness(s1);

            if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

                /* Stop looking when s1 is close
                 * enough to the target tolerance
                 */
                if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
                    break;

                /* Flat: t is the new lower interval bound */
                lo = t;
            } else {

                /* Not flat: t is the new upper interval bound */
                hi =  t;
            }
        }

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }

    /* S is now flat enough, so draw to the end */
    (*draw)(s[3][0], s[3][1]);
}

void draw(float x, float y)
{
    ++num_points;
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };
    _spline_decompose(draw, spline);
    fprintf(stderr, "flats %" PRIu64 " points %" PRIu64 "\n", num_flats, num_points);
    return 0;
}

Ulrike Uhlig: Reasons for job burnout and what motivates people in their job

18 February, 2020 - 06:00

Burnout comes in many colours and flavours.

Often, burnout is conceived as a weakness of the person experiencing it: "they can't work under stress", "they lack organizational skills", "they are currently going through grief or a break up, that's why they can't keep up" — you've heard it all before, right?

But what if job burnout were actually an indicator of a toxic work environment? Or of a toxic work setup?

I had read quite a bit of literature trying to explain burnout before stumbling upon the work of Christina Maslach. She has researched burnout for thirty years and is best known for her research on occupational burnout. While in the 1990s she observed burnout mostly in caregiver professions, we can see an increase of burnout in many other fields in recent years, such as in the tech industry. Maslach outlines in one of her talks what this might be due to.

More interesting to me is the question of why job burnout occurs at all. High workload is only one out of six factors that increase the risk of burnout, according to Christina Maslach and her team.

Factors increasing job burnout
  1. Workload. This could be demand overload, lots of different tasks, lots of context switching, unclear expectations, having several part time jobs, lack of resources, lack of work force, etc.
  2. Lack of control. Absence of agency. Absence of the possibility to make decisions. Impossibility to act on one's own account.
  3. Insufficient reward. Here, we are not solely talking about financial reward, but also about gratitude, recognition, visibility, and celebration of accomplishments.
  4. Lack of community. Remote work, asynchronous communication, poor communication skills, isolation in working on tasks, few/no in-person meetings, lack of organizational caring.
  5. Absence of fairness. Invisible hierarchies, lack of (fair) decision making processes, back channel decision making, financial or other rewards unfairly distributed.
  6. Value conflicts. This could be over-emphasizing return on investment, making unethical requests, not respecting colleagues’ boundaries, lack of organizational vision, or poor leadership.

Interestingly, it is possible to improve one area of risk, and see improvements in all the other areas.

What motivates people?

So, what is it that motivates people, what makes them like their work?
Here, Maslach comes up with another interesting list:

  • Autonomy. This could mean for example to trust colleagues to work on tasks autonomously. To let colleagues make their own decisions on how to implement a feature as long as it corresponds to the code writing guidelines. The responsibility for the task should be transferred along with the task. People need to be allowed to make mistakes (and fix them). Autonomy also means to say goodbye to the expectation that colleagues do everything exactly like we would do it. Instead, we can learn to trust in collective intelligence for coming up with different solutions.
  • Feeling of belonging. This one could mean seeking to use synchronous communication whenever possible. To privilege in-person meetings. To celebrate achievements. To make collective decisions whenever the outcome affects the collective (or part of it). To have lunch together. To have lunch together and not talk about work.
  • Competence. Having a working feedback process. Valuing each other's competences. Having the possibility to evolve in the workplace. Having the possibility to get training, to try new setups, new methods, or new tools. Having the possibility to increase one's competences, possibly with the financial backing of the workplace.
  • Positive emotions. Encouraging people to take breaks. Making sure work plans also include downtime. Encouraging people to take at least 5 weeks of vacation per year. Allowing people to have (paid) time off. Practicing gratitude. Acknowledging and celebrating achievements. Giving appreciation.
  • Psychological safety. Learn to communicate with kindness. Practice active listening. Have meetings facilitated. Condemn harassment, personal insults, sexism, racism, fascism. Condemn silencing of people. Have a possibility to report on code of ethics/conduct abuses. Making sure that people who experience problems or need to share something are not isolated.
  • Fairness. How about exploring inclusive leadership models? Making invisible hierarchies visible (See the concept of rank). Being aware of rank. Have clear and transparent decision making processes. Rewarding people equally. Making sure there is no invisible unpaid work done by always the same people.
  • Meaning. Are the issues that we work on meaningful per se? Do they contribute anything to the world, or to the common good? Making sure that tasks or roles of other colleagues are not belittled. Meaning can also be given by putting tasks into perspective, for example by making developers attend conferences where they can meet users and get feedback on their work. Making sure we don't forget why we wanted to do a job in the first place. Getting familiar with the concept of bullshit jobs.

In this list, the words written in bold are what we could call "Needs". The descriptions behind them are what we could call "Strategies". There are always many different strategies to fulfill a need; I've only outlined some of them. I'm sure you can come up with others; please don't hesitate to share them with me.

Holger Levsen: 20200217-SnowCamp

18 February, 2020 - 02:56
SnowCamp 2020

This is just a late reminder that there are still some seats available for SnowCamp, taking place at the end of this week and during the whole weekend somewhere in the Italian mountains.

I believe it will be a really nice opportunity to hack on Debian things, and thus I hope there won't be empty seats, though at the moment there are some.

The venue is reachable by train and Debian will be covering the cost of accommodation, so you just have to cover transportation and meals.

The event starts in three days, so hurry up and whatever your plans are, change them!

If you have any further questions, join #suncamp (yes!) on irc.debian.org.

Jonathan Dowland: Amiga floppy recovery project scope

17 February, 2020 - 23:05

This is the eighth part in a series of blog posts. The previous post was First successful Amiga disk-dumping session. The whole series is available here: Amiga.

The main goal of my Amiga project is to read the data from my old floppy disks. After a bit of a hiatus (and after some gentle encouragement from friends at FOSDEM) I'm nearly done, with 150 of 200 disks attempted so far. Ultimately I intend to get rid of the disks to free up space in my house, and probably the Amiga, too. In the meantime, what could I do with it?

Gotek floppy emulator balanced on the Amiga

The most immediately obvious thing is to improve the housing of the emulated floppy disk. My Gotek adaptor is unceremoniously balanced on top of the case. Housing it within the A500 would be much neater. I might try to follow this guide, which requires no case modifications and no 3D-printed brackets, but instead of soldering new push-buttons, add a separate OLED display and rotary encoder (knob) in a separate housing, such as this 3D-printed wedge-shaped mount on Thingiverse. I do wonder if some kind of side-mounted solution might be better, so the top casing could be removed without having to re-route the wires each time.

3D printed OLED mount, from Amibay

Next would be improving the video output. My A520 video modulator developed problems that are most likely caused by leaking or blown capacitors. At the moment, I have a choice of B&W RF out, or using a 30 year old Philips CRT monitor. The latter is too big to comfortably fit on my main desk, and the blue channel has started to fail. Learning the skills to fix the A520 could be useful as the same could happen to the Amiga itself. Alternatively, replacements are very cheap on the second-hand market. Or I could look at a 3rd-party equivalent like the RGB4ALL. I have tried a direct, passive socket adaptor on the off-chance my LCD TV supported 15kHz, but alas, it appears it doesn't. This list of monitors known to support 15kHz is very short, so sourcing one is not likely to be easy or cheap. It's possible to buy sophisticated "Flicker Fixers/Scan Doublers" that enable the use of any external display, but they're neither cheap nor common.

My original "tank" Amiga mouse (pictured above) is developing problems with the left mouse button. Replacing the switch looks simple (as shown in this YouTube video) but will require me to invest in a soldering iron, multimeter and related equipment (not necessarily a bad thing). It might be easier to buy a different, more comfortable old serial mouse.

Once those are out of the way, it might be interesting to explore aspects of the system that I didn't touch on as a child: how do you program the thing? I don't remember ever writing any Amiga BASIC, although I had several doomed attempts to use "game makers" like AMOS or SEUCK. What programming language were the commercial games written in? Pure assembly? The 68k is supposed to have a pleasant instruction set for this. Was there ever a practically useful C compiler for the Amiga? I never networked my Amiga. I never played around with music sampling or trackers.

There's something oddly satisfying about the idea of taking a 30 year old computer and making it into a useful machine in the modern era. I could consider more involved hardware upgrades. The Amiga enthusiast community is old and the fans are very passionate. I've discovered a lot of incredible upgrades that fans have built to enhance their machines, right up to FPGA-powered CPU replacements that can run several times faster than the fastest original m68ks, and also offer digital video out, hundreds of MB of RAM, modern storage options, etc. To give an idea, check out Epsilon's Amiga Blog, which outlines some of the improvements they've made to their fleet of machines.

This is a deep rabbit hole, and I'm not sure I can afford the time (or the money!) to explore it at the moment. It will certainly not rise above my more pressing responsibilities. But we'll see how things go.

Enrico Zini: AI and privacy links

17 February, 2020 - 06:00
  • Norman by MIT Media Lab (ai) - Norman: World's first psychopath AI.
  • Machine Learning Captcha (ai, comics)
  • Amazon's Rekognition shows its true colors (ai, consent, privacy) - Mix together a bit of freely accessible facial recognition software and a free live stream of the public space, and what do you get? A powerful stalker tool.
  • Self Driving (ai, comics) - So much of "AI" is just figuring out ways to offload work onto random strangers.
  • Information flow reveals prediction limits in online social activity (privacy) - Bagrow et al., arXiv 2017. If I know your friends, then I know a lot about you! Suppose you don't personally use a given app/serv…
  • The NSA's SKYNET program may be killing thousands of innocent people (ai, politics) - «In 2014, the former director of both the CIA and NSA proclaimed that "we kill people based on metadata." Now, a new examination of previously published Snowden documents suggests that many of those people may have been innocent.»
  • What reporter Will Ockenden's metadata reveals about his life (privacy) - We published ABC reporter Will Ockenden's metadata in full and asked you to analyse it. Here's what you got right - and wrong.
  • Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance (privacy) - It's time to shed light on the technical methods and business practices behind third-party tracking. For journalists, policy makers, and concerned consumers, this paper will demystify the fundamentals of third-party tracking, explain the scope of the problem, and suggest ways for users and legislation to fight back against the status quo.

Ben Armstrong: Introducing Dronefly, a Discord bot for naturalists

16 February, 2020 - 23:51

In the past few years, since first leaving Debian as a free software developer in 2016, I’ve taken up some new hobbies, or more accurately, renewed my interest in some old ones.

Screenshot from Dronefly bot tutorial

During that hiatus, I also quietly un-retired from Debian, anticipating there would be some way to contribute to the project in these new areas of interest. That’s still an idea looking for the right opportunity to present itself, not to mention the available time to get involved again.

With age comes an increasing clamor of complaints from your body when you have a sedentary job in front of a screen, and hobbies that rarely take you away from it. You can’t just plunk down in front of a screen and do computer stuff non-stop & just bounce back again at the start of each new day. So in the past several years, getting outside more started to improve my well-being and address those complaints. That revived an old interest in me: nature photography. That, in turn, landed me at iNaturalist, re-ignited my childhood love of learning about the natural world, & hooked me on a regular habit of making observations & uploading them to iNat ever since.

Second, back in the late nineties, I wrote a little library loans renewal reminder project in Python. Python was a pleasure to work with, but that project never took off and soon was forgotten. Now once again, decades later, Python is a delight to be writing in, with its focus on writing readable code & backed by a strong culture of education.

Where Python came to bear on this new hobby was when the naturalists on the iNaturalist Discord server became a part of my life. Last spring, I stumbled upon this group & started hanging out. On this platform, we share what we are finding, we talk about those findings, and we challenge each other to get better at it. It wasn’t long before the idea to write some code to access the iNaturalist platform directly from our conversations started to take shape.

Now, ideally, what happened next would have been for an open platform, but this is where the community is. In many ways, too, other chat platforms (like IRC) are not as capable as Discord of supporting the image-rich chat experience we enjoy. Thus, it seemed that's where the code had to be. Dronefly, an open source Python bot for naturalists built on the Red DiscordBot framework, was born in the summer of 2019.

Dronefly is still alpha stage software, but in the short space of six months, has grown to roughly 3k lines of code and is used by hundreds of users across 9 different Discord servers. It includes some innovative features requested by our users, like the related command to discover the nearest common ancestor of one or more named taxa, and the map command to easily access a range map on the platform for all the named taxa. So far as I know, no equivalent features exist yet on the iNat website or apps for mobile. Commands like these put iNat data directly at users’ fingertips in chat, improving understanding of the material with minimal interruption to the flow of conversation.

This tutorial gives an overview of Dronefly’s features. If you’re intrigued, please look me up on the iNaturalist Discord server following the invite from the tutorial. You can try out the bot there, and I’d be happy to talk to you about our work. Even if this is not your thing, do have a look at iNaturalist itself. Perhaps, like me, you’ll find in this platform a fun, rewarding, & socially significant outlet that gets you outside more, with all the benefits that go along with that.

That’s what has been keeping me busy lately. I hope all my Debian friends are well & finding joy in what you’re doing. Keep up the good work!

Russell Coker: DisplayPort and 4K

16 February, 2020 - 06:00
The Problem

Video playback looks better with a higher scan rate. A lot of content that was designed for TV (EG almost all historical documentaries) is going to be 25Hz interlaced (UK and Australia) or 30Hz interlaced (US). If you view that on a low refresh rate progressive scan display (EG a modern display at 30Hz) then my observation is that it looks a bit strange. Things that move seem to jump a bit and it’s distracting.

Getting HDMI to work with 4K resolution at a refresh rate higher than 30Hz seems difficult.

What HDMI Can Do

According to the HDMI Wikipedia page [1], HDMI 1.3–1.4b (introduced in June 2006) supports 30Hz refresh at 4K resolution, and if you use 4:2:0 Chroma Subsampling (see the Chroma Subsampling Wikipedia page [2]) you can do 60Hz or 75Hz on HDMI 1.3–1.4b. Basically for colour 4:2:0 means half the horizontal and half the vertical resolution while giving the same resolution for monochrome. For video that apparently works well (4:2:0 is standard for Blu-ray) and for games it might be OK, but for text (my primary use of computers) it would suck.
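
A rough back-of-the-envelope check (my numbers, not from the Wikipedia page) shows why 60Hz doesn't fit in the older versions:

3840 × 2160 pixels × 60Hz × 24 bits/pixel ≈ 11.9 Gbit/s

That is more than the roughly 8.16 Gbit/s of video data that HDMI 1.3–1.4b can carry after 8b/10b encoding, but fits within the roughly 14.4 Gbit/s of HDMI 2.0. With 4:2:0 subsampling the average drops to 12 bits per pixel (about 6 Gbit/s), which is how 4K at 60Hz squeezes into the older versions.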

So I need support for HDMI 2.0 (introduced in September 2013) on the video card and monitor to do 4K at 60Hz. Apparently none of the combinations of video card and HDMI cable I use for Linux support that.

HDMI Cables

The Wikipedia page alleges that you need either a “Premium High Speed HDMI Cable” or an “Ultra High Speed HDMI Cable” for 4K resolution at 60Hz refresh rate. My problems probably aren’t related to the cable as my testing has shown that a cheap “High Speed HDMI Cable” can work at 60Hz with 4K resolution with the right combination of video card, monitor, and drivers. A Windows 10 system I maintain has a Samsung 4K monitor and an NVidia GT630 video card running 4K resolution at 60Hz (according to Windows). The NVidia GT630 card is one that I tried on two Linux systems at 4K resolution and causes random system crashes on both; it seems like a nice card for Windows but not for Linux.

Apparently the HDMI devices test the cable quality and use whatever speed seems to work (the cable isn’t identified to the devices). The prices at a local store are $3.98 for “high speed”, $19.88 for “premium high speed”, and $39.78 for “ultra high speed”. It seems that trying a “high speed” cable first before buying an expensive cable would make sense, especially for short cables which are likely to be less susceptible to noise.

What DisplayPort Can Do

According to the DisplayPort Wikipedia page [3] versions 1.2–1.2a (introduced in January 2010) support HBR2 which on a “Standard DisplayPort Cable” (which probably means almost all DisplayPort cables that are in use nowadays) allows 60Hz and 75Hz 4K resolution.

Comparing HDMI and DisplayPort

In summary to get 4K at 60Hz you need 2010 era DisplayPort or 2013 era HDMI. Apparently some video cards that I currently run for 4K (which were all bought new within the last 2 years) are somewhere between a 2010 and 2013 level of technology.

Also my testing (and reading review sites) shows that it’s common for video cards sold in the last 5 years or so to not support HDMI resolutions above FullHD, that means they would be HDMI version 1.1 at the greatest. HDMI 1.2 was introduced in August 2005 and supports 1440p at 30Hz. PCIe was introduced in 2003 so there really shouldn’t be many PCIe video cards that don’t support HDMI 1.2. I have about 8 different PCIe video cards in my spare parts pile that don’t support HDMI resolutions higher than FullHD so it seems that such a limitation is common.

The End Result

For my own workstation I plugged a DisplayPort cable between the monitor and video card and a Linux window appeared (from KDE I think) offering me some choices about what to do; I chose to switch to the “new monitor” on DisplayPort, and that defaulted to 60Hz. After that change TV shows on NetFlix and Amazon Prime both look better. So it’s a good result.

As an aside DisplayPort cables are easier to scrounge as the HDMI cables get taken by non-computer people for use with their TV.


Keith Packard: iterative-splines

15 February, 2020 - 12:55
Decomposing Splines Without Recursion

To make graphics usable in Snek, I need to avoid using a lot of memory, especially on the stack, as there's no stack overflow checking on most embedded systems. Today, I worked on how to draw splines with a reasonable number of line segments without requiring any intermediate storage. Here are the results from this work:

The Usual Method

The usual method I've used to convert a spline into a sequence of line segments is to split the spline in half using De Casteljau's algorithm recursively until the spline can be approximated by a straight line within a defined tolerance.

Here's an example from twin:

static void
_twin_spline_decompose (twin_path_t *path,
                        twin_spline_t *spline,
                        twin_dfixed_t tolerance_squared)
{
    if (_twin_spline_error_squared (spline) <= tolerance_squared)
    {
        _twin_path_sdraw (path, spline->a.x, spline->a.y);
    }
    else
    {
        twin_spline_t s1, s2;
        _de_casteljau (spline, &s1, &s2);
        _twin_spline_decompose (path, &s1, tolerance_squared);
        _twin_spline_decompose (path, &s2, tolerance_squared);
    }
}

The _de_casteljau function splits the spline at the midpoint:

static void
_lerp_half (twin_spoint_t *a, twin_spoint_t *b, twin_spoint_t *result)
{
    result->x = a->x + ((b->x - a->x) >> 1);
    result->y = a->y + ((b->y - a->y) >> 1);
}

static void
_de_casteljau (twin_spline_t *spline, twin_spline_t *s1, twin_spline_t *s2)
{
    twin_spoint_t ab, bc, cd;
    twin_spoint_t abbc, bccd;
    twin_spoint_t final;

    _lerp_half (&spline->a, &spline->b, &ab);
    _lerp_half (&spline->b, &spline->c, &bc);
    _lerp_half (&spline->c, &spline->d, &cd);
    _lerp_half (&ab, &bc, &abbc);
    _lerp_half (&bc, &cd, &bccd);
    _lerp_half (&abbc, &bccd, &final);

    s1->a = spline->a;
    s1->b = ab;
    s1->c = abbc;
    s1->d = final;

    s2->a = final;
    s2->b = bccd;
    s2->c = cd;
    s2->d = spline->d;
}

This is certainly straightforward, but suffers from an obvious flaw — there's unbounded recursion. With two splines in the stack frame, each containing eight coordinates, the stack will grow rapidly; 4 levels of recursion will consume more than 64 coordinates' worth of stack space. This can easily overflow the stack of a tiny machine.

De Casteljau Splits At Any Point

De Casteljau's algorithm is not limited to splitting splines at the midpoint. You can supply an arbitrary position t, 0 < t < 1, and you will end up with two splines which, drawn together, exactly match the original spline. I use 1/2 in the above version because it provides a reasonable guess as to how an arbitrary spline might be decomposed efficiently. You can use any value and the decomposition will still work; it will just change the recursion depth along various portions of the spline.

Iterative Left-most Spline Decomposition

What our binary decomposition does is to pick points t₀ … tₙ such that the splines t₀..t₁ through tₙ₋₁..tₙ are all 'flat'. It does this by recursively bisecting the spline, storing two intermediate splines on the stack at each level. If we look at just how the first, or 'left-most', spline is generated, that can be represented as an iterative process. At each step in the iteration, we split the spline in half:

S' = _de_casteljau(s, 1/2)

We can re-write this using the broader capabilities of the De Casteljau algorithm by splitting the original spline at decreasing points along it:

S[n] = _de_casteljau(s0, (1/2)ⁿ)

Now recall that the De Casteljau algorithm generates two splines, not just one. One describes the spline from 0..(1/2)ⁿ, the second the spline from (1/2)ⁿ..1. This gives us an iterative approach to generating a sequence of 'flat' splines for the whole original spline:

while S is not flat:
    n = 1
    do
        Sleft, Sright = _decasteljau(S, (1/2)ⁿ)
        n = n + 1
    until Sleft is flat
    result ← Sleft
    S = Sright
result ← S

We've added an inner loop that wasn't needed in the original algorithm, and we're introducing some cumulative errors as we step around the spline, but we don't use any additional memory at all.

Final Code

Here's the full implementation:

/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef float point_t[2];
typedef point_t spline_t[4];

#define SNEK_DRAW_TOLERANCE 0.5f

/* Is this spline flat within the defined tolerance */
static bool
_is_flat(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    return (ux + uy <= 16.0f * SNEK_DRAW_TOLERANCE * SNEK_DRAW_TOLERANCE);
}

static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;
    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    float       t;
    spline_t    s1, s2;

    (*draw)(s[0][0], s[0][1]);

    /* If s is flat, we're done */
    while (!_is_flat(s)) {
        t = 1.0f;

        /* Iterate until s1 is flat */
        do {
            t = t/2.0f;
            _de_casteljau(s, s1, s2, t);
        } while (!_is_flat(s1));

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }
    (*draw)(s[3][0], s[3][1]);
}

void draw(float x, float y)
{
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };
    _spline_decompose(draw, spline);
    return 0;
}

Russell Coker: Self Assessment

15 February, 2020 - 10:57
Background Knowledge

The Dunning Kruger Effect [1] is something everyone should read about. It’s the effect where people who are bad at something rate themselves higher than they deserve because their inability to notice their own mistakes prevents improvement, while people who are good at something rate themselves lower than they deserve because noticing all their mistakes is what allows them to improve.

Noticing all your mistakes all the time isn’t great (see Impostor Syndrome [2] for where this leads).

Erik Dietrich wrote an insightful article “How Developers Stop Learning: Rise of the Expert Beginner” [3] which I recommend that everyone reads. It is about how some people get stuck at a medium level of proficiency and find it impossible to unlearn bad practices which prevent them from achieving higher levels of skill.

What I’m Concerned About

A significant problem in large parts of the computer industry is that it’s not easy to compare various skills. In the sport of bowling (which Erik uses as an example) it’s easy to compare your score against people anywhere in the world, if you score 250 and people in another city score 280 then they are more skilled than you. If I design an IT project that’s 2 months late on delivery and someone else designs a project that’s only 1 month late are they more skilled than me? That isn’t enough information to know. I’m using the number of months late as an arbitrary metric of assessing projects, IT projects tend to run late and while delivery time might not be the best metric it’s something that can be measured (note that I am slightly joking about measuring IT projects by how late they are).

If the last project I personally controlled was 2 months late and I’m about to finish a project 1 month late does that mean I’ve increased my skills? I probably can’t assess this accurately as there are so many variables. The Impostor Syndrome factor might lead me to think that the second project was easier, or I might get egotistical and think I’m really great, or maybe both at the same time.

This is one of many resources recommending timely feedback for education [4]; it says “Feedback needs to be timely” and “It needs to be given while there is still time for the learners to act on it and to monitor and adjust their own learning”. For basic programming tasks such as debugging a crashing program the feedback is reasonably quick. For longer term tasks like assessing whether the choice of technologies for a project was good, the feedback cycle is almost impossibly long. If I used product A for a year long project does it seem easier than product B because it is easier or because I’ve just got used to its quirks? Did I make a mistake at the start of a year long project and if so do I remember why I made that choice I now regret?

Skills that Should be Easy to Compare

One would imagine that martial arts is a field where people have very realistic understanding of their own skills, a few minutes of contest in a ring, octagon, or dojo should show how your skills compare to others. But a YouTube search for “no touch knockout” or “chi” shows that there are more than a few “martial artists” who think that they can knock someone out without physical contact – with just telepathy or something. George Dillman [5] is one example of someone who had some real fighting skills until he convinced himself that he could use mental powers to knock people out. From watching YouTube videos it appears that such people convince the members of their dojo of their powers, and those people then faint on demand “proving” their mental powers.

The process of converting an entire dojo into believers in chi seems similar to the process of converting a software development team into “expert beginners”, except that martial art skills should be much easier to assess.

Is it ever possible to assess any skills if people trying to compare martial art skills often do it so badly?

Conclusion

It seems that any situation where one person is the undisputed expert has a risk of the “chi” problem if the expert doesn’t regularly meet peers to learn new techniques. If someone like George Dillman or one of the “expert beginners” that Erik Dietrich refers to were to regularly meet other people with similar skills and accept feedback from them, they would be much less likely to become a “chi” master or “expert beginner”. For the computer industry meetup.com seems the best solution to this: whatever your IT skills are, you can find a meetup where you can meet people with more skills than you in some area.

Here’s one of many guides to overcoming Imposter Syndrome [5]. Actually succeeding in following the advice of such web pages is not going to be easy.

I wonder if getting a realistic appraisal of your own skills is even generally useful. Maybe the best thing is to just recognise enough things that you are doing wrong to be able to improve and to recognise enough things that you do well to have the confidence to do things without hesitation.


Anisa Kuci: Outreachy post 4 - Career opportunities

14 February, 2020 - 19:21

As mentioned in my last blog posts, Outreachy is very interesting and I got to learn a lot already. Two months have already passed by quickly and there is still one month left for me to continue working and learning.

As I imagine all the other interns are thinking now, I am also thinking about what is going to be the next step for me. After such an interesting experience as this internship, thinking about the next steps is not that simple.

I have been contributing to Free Software projects for quite some years now. I have been part of the only FLOSS community in my country for many years and I grew up together with the community, advocating free software in and around Albania.

I have contributed to many projects, including Mozilla, OpenStreetMap, Debian, GNOME, Wikimedia projects etc. So, I am sure, the FLOSS world is definitely the right place for me to be. I have helped communities grow and I am very enthusiastic about it.

I have grown and evolved as a person through contributing to all the projects I have mentioned above. I have gained knowledge that I would not have had a chance to acquire if it were not for the “sharing knowledge” ideology that is so strong in the FLOSS environment.

Through organizing big and small events, from 300-people conferences to 30-people bug squashing parties to 5-people strategy workshops, I have been able to develop skills because the community trusted me with responsibility in event organizing even before I was able to prove myself. I have been supported by great mentors who helped me learn on the job and left me with practical knowledge that I am happy to continue applying in the FLOSS community. I am thinking about formalizing my education in the marketing or communication areas to also gain some academic background and further strengthen the practical skills.

During Outreachy I have learned to use the bash command line much better. I have learned LaTeX, as it was one of the tools that I needed to work on the fundraising materials. I have also improved a lot at using git and feel much more confident now. I have worked a lot on fundraising while also learning Python very intensively, and programming is definitely a skill that I would love to deepen.

I know that foreign languages are something I enjoy, as I speak English, Italian, Greek and of course my native language, Albanian, but lately I have learned that programming languages can be as much fun as natural languages, and I am keen on learning more of both.

I love working with people, so I hope that in the future I will be able to keep working in environments where I interact with a diverse set of people.

Dirk Eddelbuettel: RcppSimdJson 0.0.1 now on CRAN!

14 February, 2020 - 10:00

A fun weekend-morning project, namely wrapping the outstanding simdjson library by Daniel Lemire (with contributions by Geoff Langdale, John Keiser and many others) into something callable from R via a new package, RcppSimdJson, led to a first tweet on January 20, a reference to the brand new GitHub repo, and a CRAN upload a few days later. And then two weeks of nothingness.

Well, a little more than nothing, as Daniel is an excellent “upstream” to work with who promptly incorporated two changes that arose from preparing the CRAN upload. So we did that. But CRAN being as busy and swamped as they are, we needed to wait. The ten days one is warned about. And then some more. So yesterday I did a cheeky bit of “bartering”: Kurt wanted a favour with an updated digest version, so I hinted that some reciprocity would be appreciated. And lo and behold, he admitted RcppSimdJson to CRAN. So there it is now!

We have some upstream changes already in git, but I will wait a few days to let a week pass before uploading the now synced upstream code. Anybody who wants it sooner knows where to get it on GitHub.

simdjson is a gem. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).
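
As a quick taste, here is a minimal sketch of calling the package from R. It assumes the validateJSON() helper that this initial release exposes (the exported API is still tiny and may well change), and the temporary file is purely illustrative:

library(RcppSimdJson)

## write a small throwaway JSON file to check (illustrative only)
tf <- tempfile(fileext = ".json")
writeLines('{"package": "RcppSimdJson", "fast": true}', tf)

## validateJSON() runs the SIMD-accelerated simdjson parser over the
## file and reports whether it contains well-formed JSON
validateJSON(tf)   # expected: TRUE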

The NEWS entry (from a since-added NEWS file) for the initial RcppSimdJson upload follows.

Changes in version 0.0.1 (2020-01-24)
  • Initial CRAN upload of first version

  • Comment-out use of stdout (now updated upstream)

  • Deactivate use of computed GOTOs for compiler compliance and CRAN Policy via #define

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Carter: Initial experiments with the Loongson Pi 2K

14 February, 2020 - 03:29

Recently, Loongson made some Pi 2K boards available to Debian developers and Aron Xu was kind enough to bring me one to FOSDEM earlier this month. It’s a MIPS64-based board with 2GB RAM, two gigabit Ethernet ports, an m.2 (SATA) disk slot and a whole bunch more I/O. More details about the board itself are available on the Debian wiki; here is a quick board tour from there:

In my previous blog post I still had the protective wrapping on the acrylic case. Here it is all peeled off and polished, after Holger pointed that out to me on IRC. I’ll admit I kind of liked the earthy feel that the protective covers had, but this is nice too.

The reason I wanted this board is that I don’t have access to any MIPS64 hardware whatsoever, and it can be really useful for getting Calamares to run properly on MIPS64 on Debian. Calamares itself builds fine on this platform, but calamares-settings-debian will only work on amd64 and i386 right now (where it will install either grub-efi or grub-pc depending on which mode you booted in; on other platforms it will crash during installation). I already have lots of plans for the Bullseye release cycle (and for Calamares specifically), so I’m not sure if I’ll get there, but I’d like to get support for mips64 and arm64 into calamares-settings-debian for the bullseye release. I think it’s mostly just a case of detecting the platforms properly and installing/configuring the right bootloaders. Hopefully it’s that simple.

In the meantime, I decided to get to know this machine a bit better; I’m curious how else it could be useful to me. All its expansion ports definitely seem interesting. First I plugged it into my power meter to check what its power consumption looks like. According to this, it typically draws between 7.5W and 9W, averaging about 8.5W.

I initially tried it out on an old Sun monitor that I salvaged from a recycling heap. It wasn’t working anymore, but my anonymous friend replaced its power supply and its CFL backlight with an LED backlight, and now it’s a really nice 4:3 monitor for my vintage computers. On a side note, if you’re into electronics, follow his YouTube channel where you can see him repair things. Unfortunately the board doesn’t like this screen by default (just a black screen once xorg started); I didn’t check whether it was an xorg configuration issue or a hardware limitation, but I moved it to an old 720p TV that I usually use for my mini collection and it displayed fine there. I thought I’d mention it in case someone tries this board and wonders why they just see a black screen after it boots.

I was curious whether these Ethernet ports could realistically do anything more than 100Mbps (sometimes they sit on a bus that maxes out way before gigabit does), so I installed iperf3 and gave it a shot. The test went through two switches carrying some existing traffic, but the ~85MB/s I got on my first run completely satisfied me that these ports are plenty fast.

Since I first saw the board, I was curious about the PCIe slot. I attached an older NVidia card (one that still runs fine with the free Nouveau driver), attached some external power to the card, and booted it all up…

The card powers on and the fan enthusiastically spins up, but sadly the card is not detected on the Loongson board. I think you need some PC BIOS equivalent support to poke the card in the right places so that it boots up properly.

Disk performance is great, as can be expected with the SSD it has on board. It’s significantly better than the extremely slow flash you typically get on development boards.

I was starting to get curious about whether Calamares would run on this, so I went ahead and installed it along with calamares-settings-debian. I wasn’t even sure it would start up, but lo and behold, it did. This is quite possibly the first time Calamares has ever started up on a MIPS64 machine. It started up in Chinese, since I hadn’t yet changed the language settings in Xfce.

I was curious whether Calamares would start up on the framebuffer. Linux framebuffer support can be really flaky on platforms with weird/incomplete Linux drivers. I ran ‘calamares -platform linuxfb’ from a virtual terminal and it just worked.

This is all very promising and makes me a lot more eager to get it all working properly and to generate a nice image that lets you use Calamares to install Debian on a MIPS64 board. Unfortunately, at least for now, this board still needs its own kernel, so it would need its own unique installation image. Hopefully all the special bits will make it into the mainline Linux kernel before too long. Graphics performance wasn’t good, but I noticed that they have some drivers on GitHub that I haven’t tried yet; that’s an experiment for another evening.

Romain Perier: Meetup Debian Toulouse

14 February, 2020 - 01:50
Hi there!

My company, Viveris, is opening its office to host a Debian Meetup in Toulouse this summer (June 5th or June 12th).

Everyone is welcome at this event. We're currently looking for volunteers to present demos, lightning talks or full talks (following the talks, any kind of hacking session is possible, like bug triaging, coding sprints, etc.).

Any kind of topic is welcome.

See the announcement (in French) for more details.

Dirk Eddelbuettel: digest 0.6.24: Some more refinements

13 February, 2020 - 06:17

Another new version of digest arrived on CRAN (and also on Debian) earlier today.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 889k monthly downloads with 255 direct reverse dependencies and 7340 indirect reverse dependencies) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.
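
To make the caching use case concrete, here is a minimal sketch (the built-in mtcars dataset and the variable names are purely illustrative):

library(digest)

## hash an arbitrary R object; identical content yields an identical
## key, so the digest works as a general-purpose cache key
key_md5    <- digest(mtcars)                  # md5 is the default algorithm
key_sha256 <- digest(mtcars, algo = "sha256")

## the same object hashes identically, a modified copy does not
stopifnot(identical(key_md5, digest(mtcars)))
key_md5 == digest(transform(mtcars, mpg = mpg * 2))   # FALSE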

This release comes a few months after the previous release. It contains a few contributed fixes, some of which prepare for R 4.0.0 in its current development. This includes a testing change related to the matrix/array class, and corrects the registration of the PMurHash routine as pointed out by Tomas Kalibera and Kurt Hornik (who also kindly reminded me to finally upload this, as I had made the fix already in December). Moreover, Will Landau sped up one operation affecting his popular drake pipeline toolkit. Lastly, Thierry Onkelinx corrected one more aspect related to sha1.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Paulo Henrique de Lima Santana: Bits from MiniDebCamp Brussels and FOSDEM 2020

12 February, 2020 - 17:00
Bits from MiniDebCamp Brussels and FOSDEM 2020

I traveled to Brussels from January 28th to February 6th to join MiniDebCamp and FOSDEM 2020. It was my second trip to Brussels; I was there in 2019 for the Video Team Sprint and FOSDEM.

MiniDebCamp took place at Hackerspace Brussels (HSBXL) over 3 days (January 29-31). My initial idea was to travel on the 27th and arrive in Brussels on the 28th to rest and then go to MiniDebCamp on the first day, but I bought a ticket leaving Brazil on the 28th because it was cheaper.

Trip from Curitiba to Brussels

I left Curitiba on the 28th at 13:20 and arrived in São Paulo at 14:30. The flight from São Paulo to Munich departed at 18h and, after 12 hours, I arrived there at 10h (local time). The flight was 30 minutes late because we had to wait for airport staff to remove ice from the ground. I was worried because my flight to Brussels would depart at 10:25 and I still had to get through immigration.

After walking a lot, I arrived at the immigration desk (there was no line), got my passport stamped, walked a lot again, took a train, and arrived at my gate; that flight was late too, so everything was going well. I departed Munich at 10:40 and arrived in Brussels on the 29th at 12h.

I went from the airport to the Hostel Galia by bus, train and another bus to check in and leave my luggage. On the way I had lunch at “Station Brussel Noord” because I was really hungry, and I arrived at the hostel at 15h.

My reservation was in a shared dormitory, and when I arrived there I met Marcos, a Brazilian guy from Brasília who was there for an international Magic card competition. He was in Brussels for the first time and was a little lost about what to do in the city. I invited him to go downtown to look for a cellphone store, because we both needed to buy SIM cards. I wanted to buy one from Base, and the hostel front desk staff told us to go to the store on Rue Neuve. I showed the Grand-Place to Marcos, and after we bought the SIM cards we went to Primark because he needed to buy a towel. It was night by then and we decided to buy food and have dinner at the hostel. I gave up on going to HSBXL that day because I was tired, and I thought it was not a good idea to go there for the first time at night.

MiniDebCamp day 1

On Thursday (30th) morning I went to HSBXL. I walked from the hostel to “Gare du Midi” and, after walking from one side to the other, I finally found the bus stop. I got off the bus at the fourth stop, in front of the hackerspace building. It was a little hard to find the right entrance, but I managed. I arrived at the HSBXL room, talked to the other DDs there, and found an empty table for my laptop. Other DDs kept arriving throughout the day.

I read and answered e-mails, then went out for a walk in Anderlecht to get to know the city and look for a place to have lunch, because I didn’t want to eat a sandwich at the restaurant in the building. I stopped at Lidl and Aldi to buy some food for later, and had lunch at a Turkish restaurant; the food was very good. After that, I decided to walk a little further and visit the Jean-Claude Van Damme statue to take some photos :-)

Back at HSBXL, my main interest at MiniDebCamp was joining the DebConf Video Team sprint to learn how to set up voctomix and gateway machines to be used at MiniDebConf Maceió 2020. I asked Nicolas some questions about that and he suggested I do a fresh installation on the Video Team machine using Buster.

I installed Buster and, using the USB installer and ansible playbooks, set the machine up as voctotest. I had already done this setup at home on a simple machine, without a Blackmagic card or a camera. From that point on I didn’t know what to do, so Nicolas came over and set the machine up first as voctomix and then as gateway while I watched and learned. After a while, everything worked perfectly with a camera.

It was night by then and the group ordered some pizzas to eat with beers sold by HSBXL. I was celebrating too, because during the day I had received messages and a call from Rentcars: they had hired me! Before the trip I had gone to an interview at Rentcars in the morning, and I got the positive answer while I was in Brussels.

Before I left the hackerspace, I received the door codes to open HSBXL early the next day. Some days before MiniDebCamp, Holger had asked if someone could open the room on Friday morning and I had answered that I could. I left at 22h and went back to the hostel to sleep.

MiniDebCamp day 2

On Friday I arrived at HSBXL at 9h, opened the room, and took some photos of the empty space. It is amazing how spaces like that can be used in Europe. Last year I was at MiniDebConf Hamburg at Dock Europe. I miss having that kind of building and hackerspace in Curitiba.

I installed and set up the Video Team machine again, but this time I was alone, following on my own what Nicolas had done before. And everything worked perfectly again. Nicolas asked me to create a new ansible playbook combining voctomix and gateway to make installation easier, send it as a merge request, and test it.

I went out to have lunch at the same restaurant as the day before, and I discovered there was a Leonidas factory outlet in front of HSBXL, meaning I could buy Belgian chocolates more cheaply. I went there and bought a 1.5kg box of chocolates.

When I came back to HSBXL, I started testing the new ansible playbook. The test took longer than I expected and, at the end of the day, Nicolas needed to take the equipment away. It was really great to get hands-on with the real equipment used by the Video Team. I learned a lot!

To celebrate the end of MiniDebCamp, we had sponsored free beer! I have to say I drank too much, and getting back to the hostel that night was complicated :-)

A complete report from DebConf Video Team can be read here.

Many thanks to Nicolas Dandrimont for teaching me the Video Team stuff, to Kyle Robbertze for setting up the Video Sprint, to Holger Levsen for organizing MiniDebCamp, and to the HSBXL people for hosting us.

FOSDEM day 1

FOSDEM 2020 took place at ULB on February 1st and 2nd. On the first day I took a train and overheard a group of Brazilians speaking Portuguese; they were going to FOSDEM too. I arrived around 9:30 and went to the Debian booth, because I had volunteered to help and had brought t-shirts from Brazil to sell. It was madness, with people buying Debian stuff.

After a while I had to leave the booth because I had volunteered to film the talks in the Janson auditorium from 11h to 13h. I had done this job last year and decided to do it again because it is a way to help the event; they gave me a t-shirt and a free meal ticket that I exchanged for two sandwiches :-)

After lunch, I walked around the booths, got some stickers, talked with people, and drank some beers at the openSUSE booth until the end of the day. I left FOSDEM, went to the hostel to drop off my bag, and then went to the Debian dinner organized by Marco d’Itri at Chezleon.

The dinner was great, with 25 very nice Debian people. Afterwards we ate waffles, and some of us went to Delirium, but I decided to go back to the hostel to sleep.

FOSDEM day 2

On the second and last day I arrived around 9h, spent some time at the Debian booth, and went to the Janson auditorium to help again from 10h to 13h.

I got the free meal ticket and, after lunch, I walked around, visited booths, and went to the Community devroom to watch talks. The first was “Recognising Burnout” by Andrew Hutchings; listening to him, I believe I had burnout symptoms while organizing DebConf19. The second was “How Does Innersource Impact on the Future of Upstream Contributions?” by Bradley Kuhn. Both talks were great.

After FOSDEM ended, a group of us went to have dinner at a restaurant near ULB. We spent a great time together. After the dinner we took the same train and took a group photo.

Two days to enjoy Brussels

With MiniDebCamp and FOSDEM over, I had Monday and Tuesday free before returning to Brazil on Wednesday. I wanted to join Config Management Camp in Ghent, but I decided to stay in Brussels and visit some places. I visited:

  • Carrefour - to buy beers to bring to Brazil :-)

Last day and returning to Brazil

On Wednesday (5th) I woke up early to finish packing and check out. I left the hostel and took a bus, a train and another bus to Brussels Airport. My flight to Frankfurt departed at 15:05 and arrived there at 15:55. I thought about visiting the city, because I had a 6-hour layover and I had read that it was possible to look around in that time, but I was very tired and decided to stay at the airport.

I walked to my gate, got through immigration for my passport stamp, and waited until 22:05 when my flight to São Paulo departed. After 12 hours of flying, I arrived in São Paulo at 6h (local time). In São Paulo, when arriving on an international flight, we must collect all our luggage and go through customs. After I dropped my luggage off with the domestic airline, I went to the gate to wait for my flight to Curitiba.

The flight should have departed at 8:30 but was 20 minutes late, so I arrived in Curitiba at 10h, took an Uber, and finally I was home.

Last words

I wrote a diary (in Portuguese) covering each of my days in Brussels. It can be read starting here.

All my photos are here.

Many thanks to Debian for sponsoring my trip to Brussels, and to DPL Sam Hartman for approving it. It was a unique opportunity to go to Europe to meet and work with a lot of DDs, and to participate in a very important worldwide free software event.

Louis-Philippe Véronneau: Announcing miniDebConf Montreal 2020 -- August 6th to August 9th 2020

12 February, 2020 - 12:00

This is a guest post by the miniDebConf Montreal 2020 orga team on pollo's blog.

Dear Debianites,

We are happy to announce miniDebConf Montreal 2020! The event will take place in Montreal, at Concordia University's John Molson School of Business from August 6th to August 9th 2020. Anybody interested in Debian development is welcome.

Following the announcement of the DebConf20 location, our desire to participate became incompatible with our commitment toward the Boycott, Divestment and Sanctions (BDS) campaign launched by Palestinian civil society in 2005. Hence, many active Montreal-based Debian developers, along with a number of other Debian developers, have decided not to travel to Israel in August 2020 for DebConf20.

Nevertheless, recognizing the importance of DebConf for the health of both the developer community and the project as a whole, we decided to organize a miniDebConf just prior to DebConf20, in the hope that fellow developers who might otherwise have skipped DebConf entirely this year will join us instead. Fellow developers who decide to travel to both events are of course most welcome.

Registration is open

Registration is open now, and free, so go add your name and details on the Debian wiki.

We'll accept registrations until July 25th, but don't wait too long before making your travel plans! Finding reasonable accommodation in Montreal during the summer can be hard if you don't plan in advance.

We have you covered with lots of attendee information already.

Sponsors wanted

We're looking for sponsors willing to help make this event possible. Information on sponsorship tiers can be found here.

Get in touch

We gather in #debian-quebec on irc.debian.org and on the debian-dug-quebec@lists.debian.org list.

Norbert Preining: MuPDF, QPDFView and other Debian updates

12 February, 2020 - 10:03

For those interested, I have updated mupdf (1.16.1), pymupdf (1.16.10), and qpdfview (current bzr sources) to their latest versions and added them to my local Debian apt repository:

deb https://www.preining.info/debian/other unstable main
deb-src https://www.preining.info/debian/other unstable main

QPDFView now has the Fitz (MuPDF) backend available.

At the same time I have updated Elixir to 1.10.1. All packages are available in source and amd64 binary form. Information on the other apt repositories available here can be found in this post.

Enjoy.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.