Planet Debian

Planet Debian - https://planet.debian.org/

Dirk Eddelbuettel: RcppZiggurat 0.1.6

21 October, 2020 - 20:14

A new release of RcppZiggurat, now at version 0.1.6, is on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release brings a corrected seed setter and getter which now correctly take care of all four state variables, and not just one. It also corrects a few typos in the vignette. Both were fixed quite a while back, but we somehow managed to not ship this to CRAN for two years.

The NEWS file entry below lists all changes.

Changes in version 0.1.6 (2020-10-18)
  • Several typos were corrected in the vignette (Blagoje Ivanovic in #9).

  • New getters and setters for internal state were added to resume simulations (Dirk in #11 fixing #10).

  • Minor updates to cleanup script and Travis CI setup (Dirk).

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppZiggurat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: Debian donation for Peertube development

21 October, 2020 - 17:30

The Debian project is happy to announce a donation of 10,000 USD to help Framasoft reach the fourth stretch-goal of its Peertube v3 crowdfunding campaign -- Live Streaming.

This year's iteration of the Debian annual conference, DebConf20, had to be held online, and while being a resounding success, it made clear to the project our need to have a permanent live streaming infrastructure for small events held by local Debian groups. As such, Peertube, a FLOSS video hosting platform, seems to be the perfect solution for us.

We hope this unconventional gesture from the Debian project will help us make this year somewhat less terrible and give us, and thus humanity, better Free Software tooling to approach the future.

Debian thanks its numerous donors and DebConf sponsors for their commitment, particularly all those that contributed to DebConf20 online's success (volunteers, speakers and sponsors). Our project also thanks Framasoft and the PeerTube community for developing PeerTube as a free and decentralized video platform.

The Framasoft association warmly thanks the Debian Project for its contribution, from its own funds, towards making PeerTube happen.

This contribution has a twofold impact. Firstly, it's a strong sign of recognition from an international project - one of the pillars of the Free Software world - towards a small French association which offers tools to liberate users from the clutches of the web's giant monopolies. Secondly, it's a substantial amount of help in these difficult times, supporting the development of a tool which equally belongs to and is useful to everyone.

The strength of Debian's gesture proves, once again, that solidarity, mutual aid and collaboration are values which allow our communities to create tools to help us strive towards Utopia.

Reproducible Builds: Supporter spotlight: Civil Infrastructure Platform

21 October, 2020 - 07:00

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do.

This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org.

However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.


Chris: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical?

A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source ‘base layer’ of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society’s lifelines, and CIP aims to contribute to and support these important pillars of modern society.


Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today?

A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the effort spent on those systems is now exploding. What is missing is a common way of achieving this through refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system.


Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals?

A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.


Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific ‘success stories’ that the CIP is particularly proud of?

A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of the necessity to establish an open source framework and software foundation that delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems.

In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.


Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency?

A: Although the members have different products, they share the requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts or efforts that focus on narrower domains or markets.


Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims?

A: Collaboration with other projects is an essential part of how CIP operates — we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an ‘upstream first’ policy which means that, if existing projects need to be modified to our needs or are already working on issues that we also need, we work directly with them.


Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context?

A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.


Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look?

A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.


For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

Dirk Eddelbuettel: RcppArmadillo 0.10.1.0.0

21 October, 2020 - 05:36

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 786 other packages on CRAN.

A little while ago, Conrad released version 10.1.0 of Armadillo, a new major release. As before, given his initial heads-up we ran two full reverse-depends checks, and as a consequence contacted four package authors (two by email, two via PR) about a minuscule required change (as Armadillo now defaults to C++11, an old existing setting to avoid C++11 led to an error). Our thanks to those who promptly updated their packages—truly appreciated. As it turns out, Conrad also softened the error by the time the release came around.

But despite our best efforts, the release was delayed considerably by CRAN. We had made several Windows test builds, but as luck had it the uploaded package gave CRAN a (completely spurious) segfault—which can happen on a busy machine building many things at once. Sadly it took three or four days for CRAN to reply to our email. After that it took another number of days for them to ponder the behaviour of a few new 'deprecated' messages tickled by at most ten or so (out of 786) packages. Oh well. So here we are, eleven days after I emailed the rcpp-devel list about the new package being on CRAN but possibly delayed (due to that segfault). But during all that time the package was of course available via the Rcpp drat.

The changes in this release are summarized below as usual; they are mostly upstream, along with an improved Travis CI setup using the bspm package for binaries at Travis.

Changes in RcppArmadillo version 0.10.1.0.0 (2020-10-09)
  • Upgraded to Armadillo release 10.1.0 (Orchid Ambush)

    • C++11 is now the minimum required C++ standard

    • faster handling of compound expressions by trimatu() and trimatl()

    • faster sparse matrix addition, subtraction and element-wise multiplication

    • expanded sparse submatrix views to handle the non-contiguous form of X.cols(vector_of_column_indices)

    • expanded eigs_sym() and eigs_gen() with optional fine-grained parameters (subspace dimension, number of iterations, eigenvalues closest to specified value)

    • deprecated form of reshape() removed from Cube and SpMat classes

    • ignore and warn on use of the ARMA_DONT_USE_CXX11 macro

  • Switch Travis CI testing to focal and BSPM

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: Buster based Bokmål edition of Debian Administrator's Handbook

20 October, 2020 - 23:35

I am happy to report that we finally made it! Norwegian Bokmål became the first translation published on paper of the new Buster based edition of "The Debian Administrator's Handbook". The printed proofreading copy arrived a few days ago and it looked good, so now the book is approved for general distribution. This updated paperback edition is available from lulu.com. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can also be read online.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge on Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Steve Kemp: Offsite-monitoring, from my desktop.

20 October, 2020 - 14:15

For the past few years I've had a bunch of virtual machines hosting websites, services, and servers. Of course I want them to be available - especially since I charge people money to access some of them (for example my dns-hosting service) - and that means I want to know when they're not.

The way I've gone about this is to have a bunch of machines running stuff, and then dedicate an entirely separate machine solely for monitoring and alerting. Sure you can run local monitoring, testing that services are available, the root-disk isn't full, and that kind of thing. But only by testing externally can you see if the machine is actually available to end-users, customers, or friends.

A local-agent might decide "I'm fine", but if the hosting-company goes dark due to a fibre cut you're screwed.

I've been hosting my services with Hetzner (cloud) recently, and their service is generally pretty good. Unfortunately I've started to see an increasing number of false-alarms. I'd have a server in Germany, with the monitoring machine in Helsinki (coincidentally where I live!). For the past month I've started to get pinged with a failure every three/four days on average, "service down - dns failed", or "service down - timeout". When the notice would wake me up I'd go check and it would be fine, it was a very transient failure.

To be honest the reason for this is that my monitoring is just too damn aggressive; I like to be alerted immediately in case something is wrong. That means if a single test fails I get an alert, rather than only alerting on something more reasonable like three or more consecutive failures.

I'm experimenting with monitoring in a less aggressive fashion, from my home desktop. Since my monitoring tool is a single self-contained golang binary, and it is already packaged as a docker-based container, deployment was trivial. I did a little work writing an agent to receive failure-notices and ping me via telegram - instead of the previous approach where I had an online status-page which I could view via my mobile, and alerts via pushover.

So far it looks good. I've tweaked the monitoring to use a timeout of 15 seconds, instead of 5, and I've configured it to only alert me if there is an outage which lasts for >= 2 consecutive failures. I guess the TLDR is I now do offsite monitoring .. from my house, rather than from a different region.
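
As a rough sketch of that kind of threshold logic (not the actual code of my monitoring tool, just the idea in Python), the alerting boils down to a per-check counter:

# Sketch of "only alert after N consecutive failures" logic (illustrative only).
ALERT_THRESHOLD = 2           # consecutive failures before we alert

failures = {}                 # per-check failure counter

def record_result(check_name, ok):
    """Update the failure counter and decide whether to alert."""
    if ok:
        failures[check_name] = 0
        return False
    failures[check_name] = failures.get(check_name, 0) + 1
    return failures[check_name] >= ALERT_THRESHOLD

# A single transient failure no longer pages anyone; the second one does.
assert record_result("dns", ok=False) is False
assert record_result("dns", ok=False) is True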

The only real reason to write this post was mostly to say that the process of writing a trivial "notify me" gateway to interface with telegram was nice and straightforward, and to remind myself that transient failures are way more common than we expect.
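
For the curious, the core of such a gateway can be a handful of lines; here is an illustrative Python sketch using Telegram's public bot API (the token and chat id are placeholders, and this is not my actual agent):

#!/usr/bin/env python3
# Minimal "notify me via Telegram" sketch; TOKEN and CHAT_ID are placeholders.
import json
import urllib.parse
import urllib.request

TOKEN = "123456:ABC-DEF"      # bot token from @BotFather (placeholder)
CHAT_ID = "123456789"         # target chat/user id (placeholder)

def notify(text):
    """Send a plain-text message to the configured chat."""
    url = "https://api.telegram.org/bot%s/sendMessage" % TOKEN
    data = urllib.parse.urlencode({"chat_id": CHAT_ID, "text": text}).encode()
    with urllib.request.urlopen(url, data=data) as resp:
        return json.load(resp)

if __name__ == "__main__":
    notify("service down - dns failed")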

I'll leave things alone for a moment, but it was a fun experiment. I'll keep the two systems in parallel for a while, but I guess I can already predict the outcome:

  • The desktop monitoring will report transient outages now and again, because home broadband isn't 100% available.
  • The Hetzner-based monitoring, in a different region, will report transient problems, because even hosting companies are not 100% available.
    • Especially at the cheap prices I'm paying.
  • The way to avoid being woken up by transient outages/errors is to be less aggressive.
    • I think my paying users will be OK if I find out a service is offline after 5 minutes, rather than after 30 seconds.
    • If they're not we'll have to talk about budgets ..

Reproducible Builds (diffoscope): diffoscope 161 released

20 October, 2020 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 161. This version includes the following changes:

[ Chris Lamb ]
* Fix failing testsuite: (Closes: #972518)
  - Update testsuite to support OCaml 4.11.1. (Closes: #972518)
  - Reapply Black and bump minimum version to 20.8b1.
* Move the OCaml tests to the assert_diff helper.

[ Jean-Romain Garnier ]
* Add support for radare2 as a disassembler.

[ Paul Spooren ]
* Automatically deploy Docker images in the continuous integration pipeline.

You can find out more by visiting the project homepage.

Antoine Beaupré: SSH 2FA with Google Authenticator and Yubikey

19 October, 2020 - 22:08

About a lifetime ago (5 years), I wrote a tutorial on how to configure my Yubikey for OpenPGP signing, SSH authentication and SSH 2FA. In there, I used the libpam-oath PAM plugin for authentication, but it turns out it had too many problems: users couldn't edit their own 2FA tokens and I had to patch it to avoid forcing 2FA on all users. The latter was merged in the Debian package, but never upstream, and the former was never fixed at all. So I started looking at alternatives and found the Google Authenticator libpam plugin. A priori, it's designed to work with phones and the Google Authenticator app, but there's no reason why it shouldn't work with hardware tokens like the Yubikey. Both use the standard HOTP protocol so it should "just work".

After some fiddling, it turns out I was right and you can authenticate with a Yubikey over SSH. Here's that procedure so you don't have to second-guess it yourself.

Installation

On Debian, the PAM module is shipped in the google-authenticator source package:

apt install libpam-google-authenticator

Then you need to add the module in your PAM stack somewhere. Since I only use it for SSH, I added this line on top of /etc/pam.d/sshd:

auth required pam_google_authenticator.so nullok

I also used the no_increment_hotp and debug options while debugging, to avoid having to renew the token all the time and to have more information about failures in the logs.

Then reload ssh (not sure that's actually necessary):

service ssh reload
Creating or replacing tokens

To create a new key, run this command on the server:

google-authenticator -c

This will prompt you for a bunch of questions. To get them all right, I prefer to just call the right ones on the commandline directly:

google-authenticator --counter-based --qr-mode=NONE --rate-limit=1 --rate-time=30 --emergency-codes=1 --window-size=3

Those are actually the defaults, if my memory serves me right, except for the --qr-mode and --emergency-codes (which can't be disabled so I only print one). I disable the QR code display because I won't be using the codes on my phone, but you would obviously keep it if you want to use the app.

Converting to a Yubikey-compatible secret

Unfortunately, the encoding (base32) produced by the google-authenticator command is not compatible with the token expected by the ykpersonalize command used to configure the Yubikey (base16, AKA "hexadecimal", with a fixed 20-byte length). So you need a way to convert between the two. I wrote a program called oath-convert which basically does this:

read base32
add padding
convert to hex
print

Or, in Python:

import base64
import binascii


def convert_b32_b16(data_b32):
    remainder = len(data_b32) % 8
    if remainder > 0:
        # XXX: assume 6 chars are missing, the actual padding may vary:
        # https://tools.ietf.org/html/rfc3548#section-5
        data_b32 += "======"
    data_b16 = base64.b32decode(data_b32)
    if len(data_b16) < 20:
        # pad to 20 bytes
        data_b16 += b"\x00" * (20 - len(data_b16))
    return binascii.hexlify(data_b16).decode("ascii")
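
For instance, with a made-up 16-character base32 secret (so no padding is needed), the function returns the 40 hex characters ykpersonalize expects:

# Made-up secret, for illustration only.
print(convert_b32_b16("ORSXG5DTMVRXEZLU"))
# -> 7465737473656372657400000000000000000000 (10 real bytes, zero-padded to 20)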

Note that the code assumes a certain token length and will not work correctly for other sizes. To use the program, simply call it with:

head -1 .google_authenticator | oath-convert

Then you paste the output in the prompt:

$ ykpersonalize -1 -o oath-hotp -o append-cr -a
Firmware version 3.4.3 Touch level 1541 Program sequence 2
 HMAC key, 20 bytes (40 characters hex) : [SECRET GOES HERE]

Configuration data to be written to key configuration 1:

fixed: m:
uid: n/a
key: h:[SECRET REDACTED]
acc_code: h:000000000000
OATH IMF: h:0
ticket_flags: APPEND_CR|OATH_HOTP
config_flags: 
extended_flags: 

Commit? (y/n) [n]: y

Note that you must NOT pass the -o oath-hotp8 parameter to the ykpersonalize commandline, which we used to do in the Yubikey howto. That is because Google Authenticator tokens are shorter: it's less secure, but it's an acceptable tradeoff considering the plugin is actually maintained. There's actually a feature request to support 8-digit codes so that limitation might eventually be fixed as well.

Thanks to the Google Authenticator people and Yubikey people for their support in establishing this procedure.

Russell Coker: Video Decoding

19 October, 2020 - 20:14

I’ve had a saga of getting 4K monitors to work well. My latest issue has been video playing, the dreaded mplayer error about the system being too slow. My previous post about 4K was about using DisplayPort to get more than 30Hz scan rate at 4K [1]. I now have a nice 60Hz scan rate which makes WW2 documentaries display nicely among other things.

But when running a 4K monitor on a 3.3GHz i5-2500 quad-core CPU I can’t get a FullHD video to display properly. Part of the process of decoding the video and scaling it to 4K resolution is too slow, so action scenes in movies lag. When running a 2560*1440 monitor on a 2.4GHz E5-2440 hex-core CPU with the mplayer option “-lavdopts threads=3” everything is great (but it fails if mplayer is run with no parameters). In tests of apparent performance it seemed that the E5-2440 CPU gains more from the threaded mplayer code than the i5-2500; maybe the E5-2440 is more designed for server use (it’s in a Dell PowerEdge T320 while the i5-2500 is in a random white-box system) or maybe it’s just because it’s newer. I haven’t tested whether the i5-2500 system could perform adequately at 2560*1440 resolution.

The E5-2440 system has an ATI HD 6570 video card which is old, slow, and only does PCIe 2.1 which gives 5GT/s or 8GB/s. The i5-2500 system has a newer ATI video card that is capable of PCIe 3.0, but “lspci -vv” as root says “LnkCap: Port #0, Speed 8GT/s, Width x16” and “LnkSta: Speed 5GT/s (downgraded), Width x16 (ok)”. So for reasons unknown to me the system with a faster PCIe 3.0 video card is being downgraded to PCIe 2.1 speed. A quick check of the web site for my local computer store shows that all ATI video cards costing less than $300 have PCIe 3.0 interfaces and the sole ATI card with PCIe 4.0 (which gives double the PCIe speed if the motherboard supports it) costs almost $500. I’m not inclined to spend $500 on a new video card and then a greater amount of money on a motherboard supporting PCIe 4.0 and CPU and RAM to go in it.

According to my calculations 3840*2160 resolution at 24bpp (probably 32bpp data transfers) at 30 frames/sec means 3840*2160*4*30/1024/1024=950MB/s. PCIe 2.1 can do 8GB/s so that probably isn’t a significant problem.
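
That back-of-the-envelope figure is easy to sanity-check; here is the same arithmetic as a small Python snippet (the 4 bytes per pixel is the 32bpp transfer assumption mentioned above):

# Rough bandwidth needed to push raw 4K frames over the bus.
width, height = 3840, 2160
bytes_per_pixel = 4            # 24bpp padded to 32bpp transfers
fps = 30
mb_per_sec = width * height * bytes_per_pixel * fps / 1024 / 1024
print(round(mb_per_sec))       # ~949 MB/s, well under PCIe 2.1's ~8GB/s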

I’d been planning on buying a new video card for the E5-2440 system, but due to some combination of having a better CPU and lower screen resolution it is working well for video playing so I can save my money.

As an aside the T320 is a server class system that had been running for years in a corporate DC. When I replaced the high speed SAS disks with SATA SSDs it became quiet enough for a home workstation. It works very well at that task but the BIOS is quite determined to keep the motherboard video running due to the remote console support. So swapping monitors around was more pain than I felt like going through; I just got it working and left it. I ordered a special GPU power cable but found, before the cable arrived, that the older video card (which doesn’t need an extra power cable) performs adequately.

Here is a table comparing the systems.

                     2560*1440 works well         3840*2160 goes slow
System               Dell PowerEdge T320          White Box PC from rubbish
CPU                  2.4GHz E5-2440               3.3GHz i5-2500
Video Card           ATI Radeon HD 6570           ATI Radeon R7 260X
PCIe Speed           PCIe 2.1 – 8GB/s             PCIe 3.0 downgraded to PCIe 2.1 – 8GB/s

Conclusion

The ATI Radeon HD 6570 video card is one that I had previously tested and found inadequate for 4K support; I can’t remember if it didn’t work at that resolution or didn’t support more than a 30Hz scan rate. If the 2560*1440 monitor dies then it wouldn’t make sense to buy anything less than a 4K monitor to replace it, which means that I’d need to get a new video card to match. But for the moment 2560*1440 is working well enough so I won’t upgrade it any time soon. I’ve already got the special power cable (specified as being for a Dell PowerEdge R610 for anyone else who wants to buy one) so it will be easy to install a powerful video card in a hurry.

Related posts:

  1. 4K Monitors A couple of years ago a relative who uses a...
  2. DisplayPort and 4K The Problem Video playback looks better with a higher scan...
  3. ATI ES1000 Video on Debian/Squeeze The Problem I’ve just upgraded my Dell PowerEdge T105 [1]...

Louis-Philippe Véronneau: Musings on long-term software support and economic incentives

19 October, 2020 - 11:00

Although I still read a lot, during my college sophomore years my reading habits shifted from novels to more academic works. Indeed, reading dry textbooks and economic papers for classes often kept me from reading anything else substantial. Nowadays, I tend to binge read novels: I won't touch a book for months on end, and suddenly, I'll read 10 novels back to back [1].

At the start of a novel binge, I always follow the same ritual: I take out my e-reader from its storage box, marvel at the fact the battery is still pretty full, turn on the WiFi and check if there are OS updates. And I have to admit, Kobo Inc. (now Rakuten Kobo) has done a stellar job of keeping my e-reader up to date. I've owned this model (a Kobo Aura 1st generation) for 7 years now and I'm still running the latest version of Kobo's Linux-based OS.

Having recently had trouble updating my Nexus 5 (also manufactured 7 years ago) to Android 10 [2], I asked myself:

Why is my e-reader still getting regular OS updates, while Google stopped issuing security patches for my smartphone four years ago?

To try to answer this, let us turn to economic incentives theory.

Although not the be-all and end-all some think it is [3], incentives theory is not a bad tool to analyse this particular problem. Executives at Google most likely followed a very business-centric logic when they decided to drop support for the Nexus 5. Likewise, Rakuten Kobo's decision to continue updating older devices certainly had very little to do with ethics or loyalty to their user base.

So, what are the incentives that keep Kobo updating devices and why are they different than smartphone manufacturers'?

A portrait of the current long-term software support offerings for smartphones and e-readers

Before delving deeper in economic theory, let's talk data. I'll be focusing on 2 brands of e-readers, Amazon's Kindle and Rakuten's Kobo. Although the e-reader market is highly segmented and differs a lot based on geography, Amazon was in 2015 the clear worldwide leader with 53% of the worldwide e-reader sales, followed by Rakuten Kobo at 13% [4].

On the smartphone side, I'll be differentiating between Apple's iPhones and Android devices, taking Google as the barometer for that ecosystem. As mentioned below, Google is sadly the leader in long-term Android software support.

Rakuten Kobo

According to their website and to this Wikipedia table, the only e-readers Kobo has deprecated are the original Kobo eReader and the Kobo WiFi N289, both released in 2010. This makes their oldest still supported device the Kobo Touch, released in 2011. In my book, that's a pretty good track record. Long-term software support does not seem to be advertised or to be a clear selling point in their marketing.

Amazon

According to their website, Amazon has dropped support for all 8 devices produced before the Kindle Paperwhite 2nd generation, first sold in 2013. To put things in perspective, the first Kindle came out in 2007, 3 years before Kobo started selling devices. Like Rakuten Kobo, Amazon does not make promises of long-term software support as part of their marketing.

Apple

Apple has a very clear software support policy for all their devices:

Owners of iPhone, iPad, iPod or Mac products may obtain a service and parts from Apple or Apple service providers for five years after the product is no longer sold – or longer, where required by law.

This means in the worst-case scenario of buying an iPhone model just as it is discontinued, one would get a minimum of 5 years of software support.

Android

Google's policy for their Android devices is to provide software support for 3 years after the launch date. If you buy a Pixel device just before the new one launches, you could theoretically only get 2 years of support. In 2018, Google decided OEMs would have to provide security updates for at least 2 years after launch, threatening not to license Google Apps and the Play Store if they didn't comply.

A question of cost structure

From the previous section, we can conclude that in general, e-readers seem to be supported longer than smartphones, and that Apple does a better job than Android OEMs, providing support for about twice as long.

Even Fairphone, whose entire business is to build phones designed to last and to be repaired, was not able to keep the Fairphone 1 (2013) updated for more than a couple of years and seems to be struggling to keep the Fairphone 2 (2015) running an up to date version of Android.

Anyone who has ever worked in IT will tell you: maintaining software over time is hard work and hard work by specialised workers is expensive. Most commercial electronic devices are sold and developed by for-profit enterprises and software support all comes down to a question of cost structure. If companies like Google or Fairphone are to be expected to provide long-term support for the devices they manufacture, they have to be able to fund their work somehow.

In a perfect world, people would be paying for the cost of said long-term support, as it would likely be cheaper than buying new devices every few years and would certainly be better for the planet. Problem is, manufacturers aren't making them pay for it.

Economists call this type of problem externalities: things that should be part of the cost of a good, but aren't, for one reason or another. A classic example of an externality is pollution. Clearly pollution is bad and leads to horrendous consequences, like climate change. Sane people agree we should drastically cut our greenhouse gas emissions, and yet, we aren't.

Neo-classical economic theory argues the way to fix externalities like pollution is to internalise these costs, in other words, to make people pay the "real price" of the goods they buy. In the case of climate change and pollution, neo-classical economic theory is plain wrong (spoiler alert: it often is), but this is where band-aids like the carbon tax come from.

Still, coming back to long-term software support, let's see what would happen if we were to try to internalise software maintenance costs. We can do this multiple ways.

1 - Include the price of software maintenance in the cost of the device

This is the choice Fairphone makes. This might somewhat work out for them since they are a very small company, but it cannot scale for the following reasons:

  1. This strategy relies on you giving your money to an enterprise now, and trusting them to "Do the right thing" years later. As the years go by, they will eventually look at their books, see how much ongoing maintenance is costing them, drop support for the device, apologise and move on. That is to say, enterprises have a clear economic incentive to promise long-term support and not deliver. One could argue a company's reputation would suffer from this kind of behaviour. Maybe sometime it does, but most often people forget. Political promises are a great example of this.

  2. Enterprises go bankrupt all the time. Even if company X promises 15 years of software support for their devices, if they cease to exist, your device will stop getting updates. The internet is full of stories of IoT devices getting bricked when the parent company goes bankrupt and their servers disappear. This is related to point number 1: to some degree, you have a disincentive to pay for long-term support in advance, as the future is uncertain and there are chances you won't get the support you paid for.

  3. Selling your devices at a higher price to cover maintenance costs does not necessarily mean you will make more money overall — raising more money to fund maintenance costs being the goal here. To a certain point, smartphone models are substitute goods and prices higher than market prices will tend to drive consumers to buy cheaper ones. There is thus a disincentive to include the price of software maintenance in the cost of the device.

  4. People tend to be bad at rationalising the total cost of ownership over a long period of time. Economists call this phenomenon hyperbolic discounting. In our case, it means people are far more likely to buy a 500$ phone every 3 years than a 1000$ phone every 10 years (see the quick arithmetic sketch after this list). Again, this means OEMs have a clear disincentive to include the price of long-term software maintenance in their devices.

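To make point 4 concrete, here is the comparison as a quick, purely illustrative calculation over an arbitrary 30-year horizon:

# Illustrative total cost of ownership, using the numbers from point 4.
horizon = 30                       # years
cheap = 500 * (horizon // 3)       # a 500$ phone replaced every 3 years
durable = 1000 * (horizon // 10)   # a 1000$ phone replaced every 10 years
print(cheap, durable)              # 5000 3000: the longer-lived phone costs less
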
Clearly, life is more complex than how I portrayed it: enterprises are not perfect rational agents, altruism exists, not all enterprises aim solely for profit maximisation, etc. Still, in a capitalist economy, enterprises wanting to charge for software maintenance upfront have to overcome these hurdles one way or another if they want to avoid failing.

2 - The subscription model

Another way companies can try to internalise support costs is to rely on a subscription-based revenue model. This has multiple advantages over the previous option, mainly:

  1. It does not affect the initial purchase price of the device, making it easier to sell them at a competitive price.

  2. It provides a stable source of income, something that is very valuable to enterprises, as it reduces overall risks. This in return creates an incentive to continue providing software support as long as people are paying.

If this model is so interesting from an economic incentives point of view, why isn't any smartphone manufacturer offering that kind of program? The answer is, they are, but not explicitly [5].

Apple and Google can fund part of their smartphone software support via the 30% cut they take out of their respective app stores. A report from Sensor Tower shows that in 2019, Apple made an estimated US$ 16 billion from the App Store, while Google raked in US$ 9 billion from the Google Play Store. Although the Fortune 500 ranking tells us this respectively is "only" 5.6% and 6.5% of their gross annual revenue for 2019, the profit margins in this category are certainly higher than any of their other products.

This means Google and Apple have an important incentive to keep your device updated for some time: if your device works well and is updated, you are more likely to keep buying apps from their store. When software support for a device stops, there is a risk paying customers will buy a competitor device and leave their ecosystem.

This also explains why OEMs who don't own app stores tend not to provide software support for very long periods of time. Most of them only make money when you buy a new phone. Providing long-term software support thus becomes a disincentive, as it directly reduces their sale revenues.

Same goes for Kindles and Kobos: the longer your device works, the more money they make with their electronic book stores. In my opinion, it's likely Amazon and Rakuten Kobo produce quarterly cost-benefit reports to decide when to drop support for older devices, based on ongoing support costs and the recurring revenues these devices bring in.

Rakuten Kobo is also in a more precarious situation than Amazon is: considering Amazon's very important market share, if your device stops getting new updates, there is a greater chance people will replace their old Kobo with a Kindle. Again, they have an important economic incentive to keep devices running as long as they are profitable.

Can Free Software fix this?

Yes and no. Free Software certainly isn't a magic wand one can wave to make everything better, but does provide major advantages in terms of security, user freedom and sometimes costs. The last piece of the puzzle explaining why Rakuten Kobo's software support is better than Google's is technological choices.

Smartphones are incredibly complex devices and have become the main computing platform of many. Similar to the web, there is a race for features and complexity that tends to create bloat and make older devices slow and painful to use. On the other hand, e-readers are simpler devices built for a single task: display electronic books.

Control over the platform is also a key aspect of the cost structure of providing software updates. Whereas Apple controls both the software and hardware side of iPhones, Android is a sad mess of drivers and SoCs, all providing different levels of support over time [6].

If you take a look at the platforms the Kindle and Kobo are built on, you'll quickly see they both use Freescale I.MX SoCs. These processors are well known for their excellent upstream support in the Linux kernel and their relative longevity, chips being produced for either 10 or 15 years. This in turn makes updates much easier and less expensive to provide.

So clearly, open architectures, free drivers and open hardware helps tremendously, but aren't enough on their own. One of the lessons we must learn from the (amazing) LineageOS project is how lack of funding hurts everyone.

If there is no one to do the volunteer work required to maintain a version of LOS for your device, it won't be supported. Worse, when purchasing a new device, users cannot know in advance how many years of LOS support they will get. This makes buying new devices a frustrating hit-and-miss experience. If you are lucky, you will get many years of support. Otherwise, you risk your device becoming an expensive insecure paperweight.

So how do we fix this? Anyone with a brain understands throwing away perfectly good devices each 2 years is not sustainable. Government regulations enforcing a minimum support life would be a step in the right direction, but at the end of the day, Capitalism is to blame. Like the aforementioned carbon tax, band-aid solutions can make things somewhat better, but won't fix our current economic system's underlying problems.

For now though, I'll leave fixing the problem of Capitalism to someone else.

  1. My most recent novel binge has been focused on re-reading the Dune franchise. I first read the 6 novels written by Frank Herbert when I was 13 years old and only had vague and pleasant memories of his work. Great stuff. 

  2. I'm back on LineageOS! Nice folks released an unofficial LOS 17.1 port for the Nexus 5 last January and have kept it updated since then. If you are to use it, I would also recommend updating TWRP to this version specifically patched for the Nexus 5. 

  3. Very few serious economists actually believe neo-classical rational agent theory is a satisfactory explanation of human behavior. In my opinion, it's merely a (mostly flawed) lens to try to interpret certain behaviors, a tool amongst others that needs to be used carefully, preferably as part of a pluralism of approaches

  4. Good data on the e-reader market is hard to come by and is mainly produced by specialised market research companies selling their findings at very high prices. Those particular statistics come from a MarketWatch analysis. 

  5. If they were to tell people: You need to pay us 5$/month if you want to receive software updates, I'm sure most people would not pay. Would you? 

  6. Coming back to Fairphones, if they had so many problems providing an Android 9 build for the Fairphone 2, it's because Qualcomm never provided Android 7+ support for the Snapdragon 801 SoC it uses. 

Antoine Beaupré: CDPATH replacements

19 October, 2020 - 04:30

After reading this post I figured I might as well bite the bullet and improve on my CDPATH-related setup, especially because it does not work with Emacs. So I looked around for autojump-related alternatives that do.

What I use now

I currently have this in my .shenv (sourced by .bashrc):

export CDPATH=".:~:~/src:~/dist:~/wikis:~/go/src:~/src/tor"

This allows me to quickly jump into projects from my home dir, or the "source code" (~/src), "work" (src/tor), or wiki checkouts (~/wikis) directories. It works well from the shell, but unfortunately it's very static: if I want a new directory, I need to edit my config file, restart shells, etc. It also doesn't work from my text editor.

Shell jumpers

Those are commandline tools that can be used from a shell, generally with built-in shell integration so that a shell alias will find the right directory magically, usually by keeping track of the directories visited with cd.
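
As a toy illustration of that idea (and not how any particular tool actually implements it), a jumper boils down to counting visits per directory and picking the best match:

# Toy "directory jumper" sketch: count cd targets, jump to the most visited
# directory matching a pattern. Paths below are made up.
import collections

visits = collections.Counter()

def record_cd(path):
    """Call this every time the shell changes directory."""
    visits[path] += 1

def jump(pattern):
    """Return the most visited directory whose path contains `pattern`."""
    candidates = [d for d in visits if pattern in d]
    return max(candidates, key=visits.get, default=None)

record_cd("/home/me/src/tor/project-a")
record_cd("/home/me/src/tor/project-a")
record_cd("/home/me/wikis/notes")
print(jump("tor"))   # -> /home/me/src/tor/project-a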

Some of those may or may not have integration in Emacs.

autojump
fasd
z
fzf

Emacs plugins not integrated with the shell

Those projects can be used to track files inside a project or find files around directories, but do not offer the equivalent functionality in the shell.

projectile
elpy
  • home page
  • elpy has a notion of projects, so, by default, will find files in the current "project" with C-c C-f which is useful
bookmarks.el
  • built-in
  • home page
  • "Bookmarks record locations so you can return to them later"
recentf
  • built-in
  • home page
  • "builds a list of recently opened files. This list is is automatically saved across sessions on exiting Emacs - you can then access this list through a command or the menu"
references

https://www.emacswiki.org/emacs/LocateFilesAnywhere

Iustin Pop: Serendipity

19 October, 2020 - 03:15

To start off, let me say it again: I hate light pollution. I really, really hate it. I love the night sky where you look up and see thousands of stars, and constellations besides Ursa Major. As somebody said once, “You haven’t lived until you’ve seen your shadow by the light of the Milky Way”.

But, ahem, I live in a large city, and despite my attempts using star trackers, special filters, etc. you simply can’t escape it. So, whenever we go on vacation in the mountains, I’m trying to think if I can do a bit of astro-photography (not that I’m good at it).

Which brings me to our recent vacation up in the mountains. I was looking forward to it, until the week before, when the weather forecast was switching between snow, rain and overcast for the entire week. No actual day or night with clear skies, so… I didn’t take a tripod, I didn’t take a wide lens, and put night photography out of my mind.

Vacation itself was good, especially the quietness of the place, so I usually went to bed early-ish and didn’t look outside. The weather was as forecasted - no new snow (but there was enough up in the mountains), but heavy clouds all the time, and the sun only showed itself for a few minutes at a time.

One night I was up a bit longer than usual, working on the laptop and being very annoyed by a buzzing sound. At first I thought maybe I was imagining it, but from time to time it was stopping briefly, so it was a real noise; I started hunting for the source. Not my laptop, not the fridge, not the TV… but it was getting stronger near the window. I open the door to the balcony, and… bam! Very loud noise, from the hotel nearby, where — at midnight — the pool was being cleaned. I look at the people doing the work, trying to estimate how long it’ll be until they finish, but it was looking like a long time.

Fortunately with the door closed the noise was not bad enough to impact my sleep, so I debate getting angry or just resigned, and since it was late, I just sigh, roll my eyes — not metaphorically, but actually roll my eyes and look up, and I can’t believe my eyes. Completely clear sky, no trace of clouds anywhere, and… stars. Lots of stars. I sit there, looking at the sky and enjoying the view, and I think to myself that it won’t look that nice on the camera, for sure. Especially without a real tripod, and without a fast lens.

Nevertheless, I grab my camera and — just for kicks — take one handheld picture. To my surprise (and almost disbelief), blurry pixels aside, the photo does look like what I was seeing, so I grab my tiny tripod that I carried along, and (with only a 24-70 zoom lens), grab a photo. And another, and another and then I realise that if I can make the composition work, and find a good shutter speed, this can turn out a good picture.

I didn’t have a remote release, the tripod was not very stable and it cannot point the camera upwards (it’s basically an emergency tripod), so it was quite sub-optimal; still, I try multiple shots (different compositions, different shutter speeds); they look pretty good on the camera screen and on the phone, so just for safety I take a few more, and, very happy, go to bed.

Coming back from vacation, on the large monitor, it turns out that the first 28 out of the 30 pictures were either blurry or not well focused (as I was focusing manually), and the 29th was almost OK but still not very good. Only the last, the really last picture, was technically good and also composition-wise OK. Luck? Foresight? Don’t know, but it was worth deleting 28 pictures to get this one. One of my best night shots, despite being so unprepared.

Stars! Lots of stars! And mountains…

Of course, compared to other people’s pictures, this is not special. But for me, it will be a keepsake of what a real night sky should look like.

If you want to zoom in, higher resolution on flickr.

Technically, the challenges for the picture were two-fold:

  • fighting the shutter speed; the light was not the problem, but rather the tripod and lack of remote release: a short shutter speed will magnify tripod issues/movement from the release (although I was using delayed release on the camera), but will prevent star trails, and a long shutter speed will do the exact opposite; in the end, at the focal length I was using, I settled on a 5 second shutter speed.
  • composition: due to the presence of the mountains (which I couldn’t avoid by tilting the camera fully up), this was for me a difficult thing, since it’s more on the artistic side, which is… very subjective; in the end, this turned out fine (I think), but mostly because I took pictures from many different perspectives.

Next time when travelling by car, I’ll surely take a proper tripod ☺

Until next time, clear and dark skies…

François Marier: Using a Let's Encrypt TLS certificate with Asterisk 16.2

18 October, 2020 - 07:45

In order to fix the following error after setting up SIP TLS in Asterisk 16.2:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

I created a Let's Encrypt certificate using certbot:

apt install certbot
certbot certonly --standalone -d hostname.example.com

To enable the asterisk user to load the certificate successfully (it doesn't have permission to access the certificates under /etc/letsencrypt/), I copied it to the right directory:

cp /etc/letsencrypt/live/hostname.example.com/privkey.pem /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/hostname.example.com/fullchain.pem /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key

Then I set the following variables in /etc/asterisk/sip.conf:

tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
Automatic renewal

The machine on which I run asterisk has a tricky Apache setup:

  • a webserver is running on port 80
  • port 80 is restricted to the local network

This meant that the certbot domain ownership checks would get blocked by the firewall, and I couldn't open that port without exposing the private webserver to the Internet.

So I ended up disabling the built-in certbot renewal mechanism:

systemctl disable certbot.timer certbot.service
systemctl stop certbot.timer certbot.service

and then writing my own script in /etc/cron.daily/certbot-francois:

#!/bin/bash
TEMPFILE=`mktemp`

# Stop Apache and backup firewall.
/bin/systemctl stop apache2.service
/usr/sbin/iptables-save > $TEMPFILE

# Open up port 80 to the whole world.
/usr/sbin/iptables -D INPUT -j LOGDROP
/usr/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables -A INPUT -j LOGDROP

# Renew all certs.
/usr/bin/certbot renew --quiet

# Restore firewall and restart Apache.
/usr/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables-restore < $TEMPFILE
/bin/systemctl start apache2.service

# Copy certificate into asterisk.
cp /etc/letsencrypt/live/hostname.example.com/privkey.pem /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/hostname.example.com/fullchain.pem /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
/bin/systemctl restart asterisk.service

# Commit changes to etckeeper.
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt asterisk
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs."
    echo "$DIFFSTAT"
fi
popd > /dev/null

Dirk Eddelbuettel: digest 0.6.26: Blake3 and Tuning

17 October, 2020 - 23:54

And a new version of digest is now on CRAN and will go to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 896k monthly downloads, 279 direct reverse dependencies and 8057 indirect reverse dependencies, or just under half of CRAN) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release brings two nice contributed updates. Dirk Schumacher added support for blake3 (though we could probably push this a little harder for performance, help welcome). Winston Chang benchmarked and tuned some of the key base R parts of the package. Last but not least I flipped the vignette to the lovely minidown, updated the Travis CI setup using bspm (as previously blogged about in r4 #30), and added a package website using Material for MkDocs.

My CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Junichi Uekawa: Troubleshooting your audio input.

17 October, 2020 - 08:36
Troubleshooting your audio input. When doing video conferencing, sometimes I hear the remote end not doing very well. Especially when your friend tells you he bought a new mic and it didn't sound good: they might be using the wrong configuration on the OS and using the other mic, or they might have a constant noise source in the room that affects the video conferencing noise cancelling algorithms. Yes, noise cancelling algorithms aren't perfect, because detecting what is noise is heuristic, and it's better to have a low level of noise to start with. Here is the app. I have a video to demonstrate.

Yves-Alexis Perez: iOS 14 USB tethering broken on Linux: looking for documentation and contact at Apple

16 October, 2020 - 19:36

It's a bit of a long shot, but maybe someone on Planet Debian or elsewhere can help us reach the right people at Apple.

Starting with iOS 14, something apparently changed in the way USB tethering (also called Personal Hotspot) is set up, which broke it for people using Linux. The driver in use is ipheth, developed in 2009 and included in the Linux kernel in 2010.

The kernel driver negotiates over USB with the iOS device in order to set up the link. The protocol used by both parties to communicate doesn't really seem to be documented publicly, and it seems the protocol has evolved over time and iOS versions, while the Linux driver hasn't been kept up to date. On macOS and Windows the driver apparently comes with iTunes, and Apple engineers obviously know how to communicate with iOS devices, so iOS 14 is supported just fine.

There's an open bug on libimobiledevice (the set of userland tools used to communicate with iOS devices, although the update should be done in the kernel), with some debugging and communication logs between Windows and an iOS device, but so far no real progress has been made. The link is enabled, the host gets an IP from the device, can ping the device IP and can even resolve names using the device DNS resolver, but IP forwarding seems disabled: no packet goes farther than the device itself.

That means a lot of people upgrading to iOS 14 will suddenly lose USB tethering. While Wi-Fi and Bluetooth connection sharing still works, it's still suboptimal, so it'd be nice to fix the kernel driver and support the latest protocol used in iOS 14.

If someone knows the right contact (or the right way to contact them) at Apple so we can have access to some kind of documentation on the protocol and the state machine to use, please reach out to us (either via the libimobiledevice bug or my email address below).

Thanks!

Jelmer Vernooij: Debian Janitor: How to Contribute Lintian-Brush Fixers

16 October, 2020 - 01:00

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

lintian-brush can currently fix about 150 different issues that lintian can report, but that's still a small fraction of the more than a thousand different types of issue that lintian can detect.

If you're interested in contributing a fixer script to lintian-brush, there is now a guide that describes all steps of the process:

  1. how to identify lintian tags that are good candidates for automated fixing
  2. creating test cases
  3. writing the actual fixer (a rough sketch follows this list)

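To give a flavour of step 3, here is a rough, hypothetical sketch of what a fixer can look like: a small script run from the root of an unpacked source package that edits a file in place and prints a short description of the change (the guide documents the actual conventions and metadata lintian-brush expects):

#!/usr/bin/python3
# Hypothetical fixer sketch: strip a trailing "." from the short description
# in debian/control. See the lintian-brush guide for the real conventions.
import re
from pathlib import Path

control = Path("debian/control")
text = control.read_text()
fixed = re.sub(r"^(Description: .*?)\.\s*$", r"\1", text, flags=re.MULTILINE)

if fixed != text:
    control.write_text(fixed)
    # The fixer's output is used to describe the change it made.
    print("Strip trailing full stop from the package synopsis.")
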
For more information about the Janitor's lintian-fixes efforts, see the landing page.

Gunnar Wolf: I am who I am and that's all that I am

16 October, 2020 - 00:55

Mexico was one of the first countries in the world to set up a national population registry in the late 1850s, as part of the church-state separation that was for long years one of the national sources of pride.

Forty four years ago, when I was born, keeping track of the population was still mostly a manual task. When my parents registered me, my data was stored in page 161 of book 22, year 1976, of the 20th Civil Registration office in Mexico City. Faithful to the legal tradition, everything is handwritten and specified in full. Because, why would they write 1976.04.27 (or even 27 de abril de 1976) when they could spell out día veintisiete de abril de mil novecientos setenta y seis? Numbers seem to appear only for addresses.

So, the State had a record of a child being born, and we knew where to look if we came to need this information. But, many years later, a very sensible technification happened: all records (after a certain date, I guess) were digitized. Great news! I can now get my birth certificate without moving from my desk, paying a quite reasonable fee (~US$4). What’s there not to like?

Digitally certified and all! So great! But… But… Oh, there’s a problem.

Of course… Making sense of the handwriting as you can see is somewhat prone to failure. And I cannot blame anybody for failing to understand the details of my record.

So, my mother’s first family name is Iszaevich. It was digitized as Iszaerich. Fortunately, they do acknowledge some errors could have made it into the process, and there is a process to report and correct errors.

What’s there not to like?

Oh — that they do their best to emulate a public office using online tools. I followed some links from that page to get the address to contact, and last night sent them the needed documents. Quite immediately, I got an answer that… I must share with the world:

Yes, the mailing contact is in the @gmail.com domain. I could care about them not using a @….gob.mx address, but I’ll let it slip. The mail I got says (uppercase and all):

GOOD EVENING,

WE INFORM YOU THAT THE RECEPTION OF E-MAILS FOR REQUESTING
CORRECTIONS IN CERTIFICATES IS ONLY ACTIVE MONDAY THROUGH FRIDAY,
8:00 TO 15:00.

*IN CASE YOU SENT A MAIL OUTSIDE THE WORKING HOURS, IT WILL BE
AUTOMATICALLY DELETED BY THE SERVER*

CORDIAL GREETINGS,

I would only be half-surprised if they were paying the salary of somebody to spend the wee hours of the night receiving and deleting mails from their GMail account.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2020

15 October, 2020 - 21:07

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 208.25 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 12.0h (out of 14h assigned), thus carrying over 2h to October.
  • Adrian Bunk did 14h (out of 19.75h assigned), thus carrying over 5.75h to October.
  • Ben Hutchings did 8.25h (out of 16h assigned and 9.75h from August), but gave back 7.75h, thus carrying over 9.75h to October.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 19.75h (out of 19.75h assigned).
  • Holger Levsen did 5h coordinating/managing the LTS team.
  • Markus Koschany did 31.75h (out of 19.75h assigned and 12h from August).
  • Ola Lundqvist did 9.5h (out of 12h from August), thus carrying 2.5h to October.
  • Roberto C. Sánchez did 19.75h (out of 19.75h assigned).
  • Sylvain Beucler did 19.75h (out of 19.75h assigned).
  • Thorsten Alteholz did 19.75h (out of 19.75h assigned).
  • Utkarsh Gupta did 8.75h (out of 19.75h assigned), while he already anticipated the remaining 11h in August.
Evolution of the situation

September was a regular LTS month with an IRC meeting.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file has 48 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


Dirk Eddelbuettel: dang 0.0.12: Two new functions

15 October, 2020 - 07:41

A new release of the dang package is now on CRAN, roughly one year after the last release. The dang package regroups a few functions of mine that had no other home: for example, lsos() from a StackOverflow question from 2009 (!!) is one, and this overbought/oversold price band plotter from an older blog post is another. More recently added were helpers for data.table to xts conversion and a git repo root finder.

This release adds two functions. One was mentioned just days ago in a tweet by Nathan and is a reworked version of something Colin tweeted about a few weeks ago: a little data wrangling on top of the kewl rtweet to find maximally spammy accounts per search topic. In other words, those who include more than ‘N’ hashtags for a given search term. The other is something I, if memory serves, picked up a while back on one of the lists: a base R function to identify non-ASCII characters in a file. It is a C function that is not directly exported and hence not accessible, so we put it here (with credits, of course). I mentioned it yesterday when announcing tidyCpp, as this C function was the starting point for the new tidyCpp wrapper around some of the C API of R.
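
As an illustration of what that helper does (this is not the C implementation, just the idea re-expressed in Python), finding non-ASCII bytes in a file takes only a few lines:

# Report lines containing bytes outside the ASCII range (illustrative sketch only).
def non_ascii_lines(path):
    with open(path, "rb") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(b > 127 for b in line):
                yield lineno, line

for lineno, line in non_ascii_lines("R/somefile.R"):   # hypothetical file name
    print(lineno, line)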

The (very short) NEWS entry follows.

Changes in version 0.0.12 (2020-10-14)
  • New functions muteTweets and checkPackageAsciiCode.

  • Updated CI setup.

Courtesy of CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Creative Commons License: Copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.