Planet Debian

Planet Debian - https://planet.debian.org/

Russ Allbery: Review: Aerial Magic Season 1

6 hours 45 min ago

Review: Aerial Magic Season 1, by walkingnorth

Series: Aerial Magic #1
Publisher: LINE WEBTOON
Copyright: 2018
Format: Online graphic novel
Pages: 156

Aerial Magic is a graphic novel published on the LINE WEBTOON platform by the same author as the wonderful Always Human, originally in weekly episodes. It is readable for free, starting with the prologue. I was going to wait until all seasons were complete and then review the entire work, like I did with Always Human, but apparently there are going to be five seasons and I don't want to wait that long. This is a review of the first season, which is now complete in 25 episodes plus a prologue.

As with Always Human, the pages metadata in the sidebar is a bit of a lie: a very rough guess on how many pages this would be if it were published as a traditional graphic novel (six times the number of episodes, since each episode seems a bit longer than in Always Human). A lot of the artwork is large panels, so it may be an underestimate. Consider it only a rough guide to how long it might take to read.

Wisteria Kemp is an apprentice witch. This is an unusual thing to be — not the witch part, which is very common in a society that appears to use magic in much the way that we use technology, but the apprentice part. Most people training for a career in magic go to university, but school doesn't agree with Wisteria. There are several reasons for that, but one is that she's textblind and relies on a familiar (a crow-like bird named Puppy) to read for her. Her dream is to be accredited to do aerial magic, but her high-school work was... not good, and she's very afraid she'll be sent home after her ten-day trial period.

Magister Cecily Moon owns a magical item repair shop in the large city of Vectum and agreed to take Wisteria on as an apprentice, something that most magisters no longer do. She's an outgoing woman with a rather suspicious seven-year-old, two other employees, and a warm heart. She doesn't seem to have the same pessimism Wisteria has about her future; she instead is more concerned with whether Wisteria will want to stay after her trial period. This doesn't reassure Wisteria, nor do her initial test exercises, all of which go poorly.

I found the beginning of this story a bit more painful than Always Human. Wisteria has such a deep crisis of self-confidence, and I found Cecily's lack of awareness of it quite frustrating. This is not unrealistic — Cecily is clearly as new to having an apprentice as Wisteria is to being one, and is struggling to calibrate her style — but it's somewhat hard reading since at least some of Wisteria's unhappiness is avoidable. I wish Cecily had shown a bit more awareness of how much harder she made things for Wisteria by not explaining more of what she was seeing. But it does set up a highly effective pivot in tone, and the last few episodes were truly lovely. Now I'm nearly as excited for more Aerial Magic as I would be for more Always Human.

walkingnorth's art style is much the same as that in Always Human, but with more large background panels showing the city of Vectum and the sky above it. Her faces are still exceptional: expressive, unique, and so very good at showing character emotion. She occasionally uses an exaggerated chibi style for some emotions, but I feel like she's leaning more on subtlety of expression in this series and doing a wonderful job with it. Wisteria's happy expressions are a delight to look at. The backgrounds are not generally that detailed, but I think they're better than those in Always Human. They feature a lot of beautiful sky, clouds, and sunrise and sunset moments, which are perfect for walkingnorth's pastel palette.

The magical system underlying this story doesn't appear in much detail, at least yet, but what is shown has an interesting animist feel and seems focused on the emotions and memories of objects. Spells appear to be standardized symbolism that is known to be effective, which makes magic something like cooking: most people use recipes that are known to work, but a recipe is not strictly required. I like the feel of it and the way that magic is woven into everyday life (personal broom transport is common), and am looking forward to learning more in future seasons.

As with Always Human, this is a world full of fundamentally good people. The conflict comes primarily from typical interpersonal conflicts and inner struggles rather than any true villain. Also as with Always Human, the world features a wide variety of unremarked family arrangements, although since it's not a romance the relationships aren't quite as central. It makes for relaxing and welcoming reading.

Also as in Always Human, each episode features its own soundtrack, composed by the author. I am again not reviewing those because I'm a poor music reviewer and because I tend to read online comics in places and at times where I don't want the audio, but if you like that sort of thing, the tracks I listened to were enjoyable, fit the emotions of the scene, and were unobtrusive to listen to while reading.

This is an online comic on a for-profit publishing platform, so you'll have to deal with some amount of JavaScript and modern web gunk. I at least (using up-to-date Chrome on Linux with UMatrix) had fewer technical problems with delayed and partly-loaded panels than I had with Always Human.

I didn't like this first season quite as well as Always Human, but that's a high bar, and it took some time for Always Human to build up to its emotional impact as well. What there is so far is a charming, gentle, and empathetic story, full of likable characters (even the ones who don't seem that likable at first) and a fascinating world background. This is an excellent start, and I will certainly be reading (and reviewing) later seasons as they're published.

walkingnorth has a Patreon, which, in addition to letting you support the artist directly, has various supporting material such as larger artwork and downloadable versions of the music.

Rating: 7 out of 10

Keith Packard: newt-lola

15 hours 34 min ago

Newt: Replacing Bison and Flex

Bison and Flex (or any of the yacc/lex family members) are a quick way to generate reliable parsers and lexers for language development. It's easy to write a token recognizer in Flex and a grammar in Bison, and you can readily hook code up to the resulting parsing operation. However, neither Bison nor Flex is really designed for embedded systems where memory is limited and malloc is to be avoided.

When starting Newt, I didn't hesitate to use them though; it was nice to use well-tested and debugged tools so that I could focus on other parts of the implementation.

With the rest of Newt working well, I decided to go take another look at the cost of lexing and parsing to see if I could reduce their impact on the system.

A Custom Lexer

Most mature languages end up with a custom lexer. I'm not really sure why this has to be the case, but it's pretty rare to run across anyone still using Flex for lexical analysis. The lexical structure of Python is simple enough that this isn't a huge burden; the hardest part of lexing Python code is in dealing with indentation, and Flex wasn't really helping with that part much anyways.

So I decided to just follow the same path and write a custom lexer. The resulting lexer compiles to only about 1400 bytes of Thumb code, a significant savings over the Flex version, which was about 6700 bytes.

To help make the resulting language LL, I added lexical recognition of the 'is not' and 'not in' operators; instead of attempting to sort these out in the parser, the lexer does a bit of look-ahead and returns a single token for both of these.
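
The look-ahead trick can be sketched in Python (a toy lexer for illustration only, not Newt's actual C implementation):

```python
# Sketch of a lexer that folds the two-word operators 'not in' and
# 'is not' into single tokens using one-token look-ahead, so the
# parser never has to disambiguate them.
import re

# Rough token pattern: identifiers, numbers, or single punctuation.
TOKEN_RE = re.compile(r'\s*([A-Za-z_][A-Za-z_0-9]*|\d+|[^\s\w])')

def tokenize(source):
    words = TOKEN_RE.findall(source)
    tokens = []
    i = 0
    while i < len(words):
        w = words[i]
        nxt = words[i + 1] if i + 1 < len(words) else None
        # Look ahead one token and merge the compound operators.
        if w == 'not' and nxt == 'in':
            tokens.append('NOT_IN')
            i += 2
        elif w == 'is' and nxt == 'not':
            tokens.append('IS_NOT')
            i += 2
        else:
            tokens.append(w)
            i += 1
    return tokens
```

With this in place, `a not in b` lexes to three tokens rather than four, and the grammar can treat `NOT_IN` like any other binary operator.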

Parsing on the Cheap

Many common languages are "almost" LL; this may come from using recursive-descent parsers. In 'pure' form, recursive descent parsers can only recognize LL languages, but it's easy to hack them up to add a bit of look-ahead here and there to make them handle non-LL cases.

That is one reason we end up using parser generators designed to handle LALR grammars instead; that class of grammars covers most modern languages fairly well, requiring only a small number of kludges to paper over the remaining gaps. I think the other reason I like using Bison is that the way an LALR parser works makes it really easy to handle synthetic attributes during parsing. Synthetic attributes are those built from collections of tokens that match an implicit parse tree of the input.

The '$[0-9]+' notation within Bison actions represents the values of lower-level parse tree nodes, while '$$' is the attribute value passed to higher levels of the tree.
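
The idea can be sketched in Python: each rule computes its own value from the values of its children, the way '$$' is computed from '$1', '$2', and so on. This is a hypothetical expression evaluator for illustration, not anything from Newt:

```python
# Synthesized attributes in miniature: every node's value ($$) is
# built from the values of its sub-parses ($1, $3, ...).
def eval_expr(node):
    # A node is either a number (leaf) or a tuple (op, left, right).
    if isinstance(node, (int, float)):
        return node                 # $$ = leaf value
    op, left, right = node
    l = eval_expr(left)             # $1
    r = eval_expr(right)            # $3 ($2 is the operator token)
    if op == '+':
        return l + r                # $$ = $1 + $3
    if op == '*':
        return l * r
    raise ValueError("unknown operator: %r" % op)
```

In Bison this bookkeeping is implicit: the generated parser keeps the child values on its own stack and hands them to each action.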

However, LALR parser generators are pretty complicated, and the resulting parse tables are not tiny. I wondered how much space I could save by using a simpler parser structure and, equally important, one designed for embedded use. Because Python is supposed to be an actual LL language, I decided to pull out an ancient tool I had and give it a try.

Lola: Reviving Ancient Lisp Code

Back in the 80s, I wrote a little lisp called Kalypso. One of the sub-projects resulted in an LL parser generator called Lola. LL parsers are a lot easier to understand than LALR parsers, so that's what I wrote.

A program written in a long-disused dialect of lisp offers two choices:

1) Get the lisp interpreter running again

2) Re-write the program in an available language.

I started trying to get Kalypso working again, and decided that it was just too hard. Kalypso was not very portably written and depended on a lot of historical architecture, including the structure of a.out files and the mapping of memory.

So, as I was writing a Python-like language anyways, I decided to transliterate Lola into Python. It's now likely the least "Pythonic" program around as it reflects a lot of common Lisp-like programming ideas. I've removed the worst of the recursive execution, but it is still full of list operations. The Python version uses tuples for lists, and provides a 'head' and 'rest' operation to manipulate them. I probably should have just called these 'car' and 'cdr'...
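
A sketch of the sort of helpers described, over Python tuples (the actual names and details in Lola may differ):

```python
# Lisp-style list operations over immutable Python tuples.
def head(lst):
    # car: the first element of the list.
    return lst[0]

def rest(lst):
    # cdr: everything after the first element.
    return lst[1:]

def cons(item, lst):
    # Build a new list by prepending an item.
    return (item,) + lst
```

Because tuples are immutable, these behave much like proper Lisp lists: `cons` shares nothing with its argument and `rest` never mutates anything.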

One benefit of using Lisp was that I could write the grammar as s-expressions and avoid needing a parser for the Lola input language. Of course, Lola is a parser generator, so it actually bootstraps itself by having the grammar for the Lola language written as Python data structures, generating a parser for that and then parsing the user's input. Here's what the Lola grammar looks like, in Lola grammar syntax:

start       : non-term start
        |
        ;
non-term    : SYMBOL @NONTERM@ COLON rules @RULES@ SEMI
        ;
rules       : rule rules-p
        ;
rules-p     : VBAR rule rules-p
        |
        ;
rule        : symbols @RULE@
        ;
symbols     : SYMBOL @SYMBOL@ symbols
        |
        ;
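
Written directly as Python data structures, the same grammar might look something like this (a guess at the encoding for illustration; the structures Lola actually uses may differ):

```python
# Hypothetical encoding of the Lola grammar as Python data: a dict
# mapping each non-terminal to a tuple of productions, where each
# production is a tuple of symbols. '@...@' entries mark actions,
# and an empty tuple is an empty (epsilon) production.
grammar = {
    'start':    (('non-term', 'start'),
                 ()),
    'non-term': (('SYMBOL', '@NONTERM@', 'COLON', 'rules', '@RULES@', 'SEMI'),),
    'rules':    (('rule', 'rules-p'),),
    'rules-p':  (('VBAR', 'rule', 'rules-p'),
                 ()),
    'rule':     (('symbols', '@RULE@'),),
    'symbols':  (('SYMBOL', '@SYMBOL@', 'symbols'),
                 ()),
}
```

This is exactly the kind of structure a bootstrap needs: plain data that can be walked to build parse tables without first having a parser for the grammar syntax itself.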

Lola now has a fairly clean input syntax, including the ability to code actions in C (or whatever language). It has two output modules: a Python module that generates just the Python parse table structure, and a C module that generates a complete parser, ready to incorporate into your application much as Bison does.

Lola is available in my git repository, https://keithp.com/cgit/lola.git/

Actions in Bison vs Lola

Remember how I said that Bison makes processing synthetic attributes really easy? Well, the same is not true of the simple LL parser generated by Lola.

Actions in Lola are chunks of C code executed when they appear at the top of the parse stack. However, there isn't any context for them in the parsing process itself; the parsing process discards any knowledge of production boundaries. As a result, the actions have to manually track state on a separate attribute stack. Pushes to this stack in one action are expected to be matched by pops in another.
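
Sketched in Python rather than C, the pattern looks roughly like this (function names and the exact division of labor are illustrative):

```python
# The separate attribute stack that LL actions must maintain by hand.
# The parser knows nothing about it: a push in one action has to be
# balanced by pops in a later action, by convention alone.
value_stack = []

def action_symbol(token_text):
    # Runs when an @SYMBOL@ marker reaches the top of the parse
    # stack: remember the token for a later action.
    value_stack.append(token_text)

def action_rule():
    # Runs at @RULE@: collect every symbol pushed for this
    # production and leave the stack empty for the next one.
    symbols = tuple(value_stack)
    value_stack.clear()
    return symbols
```

If the pushes and pops ever get out of balance, the bug shows up far from its cause, which is exactly why these actions are error-prone to write.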

The resulting actions are not very pretty, and writing them is somewhat error-prone. I'd love to come up with a cleaner mechanism, and I've got some ideas, but those will have to wait for another time.

Bison vs Lola in Newt

Bison generated 4kB of parse tables and a 1470 byte parser. Lola generates 2kB of parse tables and a 1530 byte parser. So, switching has saved about 2kB of memory. Most of the parser code in both cases is probably the actions, which is my guess as to why they're similar in size. I think the 2kB savings is worth it, but it's a close thing for sure.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2018

15 January, 2019 - 17:26

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 224 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 8 hours (out of 8 hours allocated).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 15 hours (out of 20 hours allocated, thus keeping 5 extra hours for January).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 44 hours (out of 30 hours allocated + 39.25 extra hours, thus keeping 25.25 extra hours for January).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 3 hours (out of 4 hours allocated, thus keeping one extra hour for January).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 21 hours (out of 10 hours allocated and 1 extra hour from November and 10 hours additionally allocated during the month).
  • Ola Lundqvist did 8 hours (out of 8 hours allocated + 7 extra hours, thus keeping 7 extra hours for January).
  • Roberto C. Sanchez did 12.75 hours (out of 12 hours allocated + 0.75 extra hours from November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).
Evolution of the situation

In December we managed to dispatch all the available hours to contributors again, and we had one new contributor in training. Still, we are always looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file has 31 packages needing an update.

Thanks to our sponsors



Petter Reinholdtsen: CasparCG Server for TV broadcast playout in Debian

15 January, 2019 - 06:10

The layered video playout server created by Sveriges Television, CasparCG Server, entered Debian today. This completes many months of work to get the source ready to go into Debian. The first upload to the Debian NEW queue happened a month ago, but the work upstream to prepare it for Debian started more than two and a half months ago. So far the casparcg-server package is only available for amd64, but I hope this can be improved. The package is in contrib because it depends on the non-free fdk-aac library. The Debian package lacks support for streaming web pages because Debian is missing CEF, the Chromium Embedded Framework. CEF is wanted by several packages in Debian, but because the Chromium source is not available as a build dependency, it is not yet possible to upload CEF to Debian. I hope this will change in the future.

The reason I got involved is that the Norwegian open channel Frikanalen is starting to use CasparCG for our HD playout, and I would like to have all the free software tools we use to run the TV channel available as packages from the Debian project. The last remaining piece in the puzzle is Open Broadcast Encoder, but it depends on quite a lot of patched libraries which would have to be included in Debian first.

Jonathan Dowland: Amiga/Gotek boot test

15 January, 2019 - 02:42

This is the fourth part in a series of blog posts. The previous post was part 3: preliminaries.

A500 mainboard

In 2015, Game 2.0, a retro gaming exhibition, visited the Centre for Life in Newcastle. On display were a range of vintage home computers and consoles, rigged up so you could play them. There was a huge range of machines, including some arcade units and various (relatively) modern VR systems that were drawing big crowds, but something else caught my attention: a modest little Amiga A500, running the classic puzzle game, "Lemmings".

A couple of weeks ago I managed to disassemble my Amiga and remove the broken floppy disk drive. The machine was pretty clean under the hood, considering its age. I fed new, longer power and data ribbon cables out of the floppy slot in the case (in order to attach the Gotek Floppy Emulator externally) and re-assembled it.

Success! Lemmings!

I then iterated a bit with setting up disk images and configuring the firmware on the Gotek. It was supplied with FlashFloppy, a versatile and open source firmware that can operate in a number of different modes and read disk images in a variety of formats. I had some Amiga images in "IPF" format, others in "ADF" and also some packages with "RP9" suffixes. After a bit of reading around, I realised the "IPF" ones weren't going to work, the "RP9" ones were basically ZIP archives of other disk images and metadata, and the "ADF" format was supported.

Amiga & peripherals on my desk

For my first boot test of the Gotek adaptor, the disk image really had to be Lemmings. Success! Now that I knew the hardware worked, I spent some time re-arranging my desk at home, to try and squeeze the Amiga, its peripherals and the small LCD monitor alongside the equipment I use for my daily work. It was a struggle but they just about fit.

The next step was to be planning out and testing a workflow for writing to virtual floppies via the Gotek. Unfortunately, before I could get as far as performing the first write test, my hardware developed a problem…

Bits from Debian: "futurePrototype" will be the default theme for Debian 10

14 January, 2019 - 19:15

The theme "futurePrototype" by Alex Makas has been selected as default theme for Debian 10 'buster'.

After the Debian Desktop Team made the call for proposing themes, a total of eleven choices were submitted, and every Debian contributor had the opportunity to vote on them in a survey. We received 3,646 responses ranking the different choices, and futurePrototype was the winner among them.

We'd like to thank all the designers who participated, providing nice wallpapers and artwork for Debian 10, and we encourage everybody interested in this area of Debian to join the Design Team.

Congratulations, Alex, and thank you very much for your contribution to Debian!

Russ Allbery: Review: The Wonder Engine

14 January, 2019 - 12:32

Review: The Wonder Engine, by T. Kingfisher

Series: The Clocktaur War #2
Publisher: Red Wombat Tea Company
Copyright: 2018
ASIN: B079KX1XFD
Format: Kindle
Pages: 318

The Wonder Engine is the second half of The Clocktaur War duology, following Clockwork Boys. Although there is a substantial transition between the books, I think it's best to think of this as one novel published in two parts. T. Kingfisher is a pen name for Ursula Vernon when she's writing books for adults.

The prologue has an honest-to-God recap of the previous book, and I cannot express how happy that makes me. This time, I read both books within a month of each other and didn't need it, but I've needed that sort of recap so many times in the past and am mystified by the usual resistance to including one.

Slate and company have arrived in Anuket City and obtained temporary housing in an inn. No one is trying to kill them at the moment; indeed, the city seems oblivious to the fact that it's in the middle of a war. On the plus side, this means that they can do some unharried investigation into the source of the Clocktaurs, the war machines that are coming ever closer to smashing their city. On the minus side, it's quite disconcerting, and ominous, that the Clocktaurs involve so little apparent expenditure of effort.

The next steps are fairly obvious: pull on the thread of research of the missing member of Learned Edmund's order, follow the Clocktaurs and scout the part of the city they're coming from, and make contact with the underworld and try to buy some information. The last part poses some serious problems for Slate, though. She knows the underworld of Anuket City well because she used to be part of it, before making a rather spectacular exit. If anyone figures out who she is, death by slow torture is the best she can hope for. But the underworld may be their best hope for the information they need.

If this sounds a lot like a D&D campaign, I'm giving the right impression. The thief, ranger, paladin, and priest added a gnole to their company in the previous book, but otherwise closely match a typical D&D party in a game that's de-emphasizing combat. It's a very good D&D campaign, though, with some excellent banter, the intermittent amusement of Vernon's dry sense of humor, and some fascinating tidbits of gnole politics and gnole views on humanity, which were my favorite part of the book.

Somewhat unfortunately for me, it's also a romance. Slate and Caliban, the paladin, had a few exchanges in passing in the first book, but much of The Wonder Engine involves them dancing around each other, getting exasperated with each other, and trying to decide if they're both mutually interested and if a relationship could possibly work. I don't object to the relationship, which is quite fun in places and only rarely drifts into infuriating "why won't you people talk to each other" territory. I do object to Caliban, who Slate sees as charmingly pig-headed, a bit simple, and physically gorgeous, and who I saw as a morose, self-righteous jerk.

As mentioned in my review of the previous book, this series is in part Vernon's paladin rant, and much more of that comes into play here as the story centers more around Caliban and digs into his relationship with his god and with gods in general. Based on Vernon's comments elsewhere, one of the points is to show a paladin in a different (and more religiously realistic) light than the cliche of being one crisis of faith away from some sort of fall. Caliban makes it clear that when you've had a god in your head, a crisis of faith is not the sort of thing that actually happens, since not much faith is required to believe in something you've directly experienced. (Also, as is rather directly hinted, religions tend not to recruit as paladins the people who are prone to thinking about such things deeply enough to tie themselves up in metaphysical knots.) Guilt, on the other hand... religions are very good at guilt.

Caliban is therefore interesting on that level. What sort of person is recruited as a paladin? How does that person react when they fall victim to what they fight in other people? What's the relationship between a paladin and a god, and what is the mental framework they use to make sense of that relationship? The answers here are good ones that fit a long-lasting structure of organized religious warfare in a fantasy world of directly-perceivable gods, rather than fragile, crusading, faith-driven paladins who seem obsessed with the real world's uncertainty and lack of evidence.

None of those combine into characteristics that made me like Caliban, though. While I can admire him as a bit of world-building, Slate wants to have a relationship with him. My primary reaction to that was to want to take Slate aside and explain how she deserves quite a bit better than this rather dim piece of siege equipment, no matter how good he might look without his clothes on. I really liked Slate in the first book; I liked her even better in the second (particularly given how the rescue scene in this book plays out). Personally, I think she should have dropped Caliban down a convenient well and explored the possibilities of a platonic partnership with Grimehug, the gnole, who was easily my second-favorite character in this book.

I will give Caliban credit for sincerely trying, at least in between the times when he decided to act like an insufferable martyr. And the rest of the story, while rather straightforward, enjoyably delivered on the setup in the first book and did so with a lot of good banter. Learned Edmund was a lot more fun as a character by the end of this book than he was when introduced in the first book, and that journey was fun to see. And the ultimate source of the Clocktaurs, and particularly how they fit into the politics of Anuket City, was more interesting than I had been expecting.

This book is a bit darker than Clockwork Boys, including some rather gruesome scenes, a bit of on-screen gore, and quite a lot of anticipation of torture (although thankfully no actual torture scenes). It was more tense and a bit more uncomfortable to read; the ending is not a light romp, so you'll want to be in the right mood for that.

Overall, I do recommend this duology, despite the romance. I suspect some (maybe a lot) of my reservations are peculiar to me, and the romance will work better for other people. If you like Vernon's banter (and if you don't, we have very different taste) and want to see it applied at long novel length in a D&D-style fantasy world with some truly excellent protagonists, give this series a try.

The Clocktaur War is complete with this book, but the later Swordheart is set in the same universe.

Rating: 8 out of 10

Russell Coker: Are Men the Victims?

13 January, 2019 - 20:08

A very famous blog post is Straight White Male: The Lowest Difficulty Setting There Is by John Scalzi [1]. In that post he describes clearly that life isn’t necessarily easy for straight white men, but that there are many more opportunities for them.

Causes of Death

When this post is mentioned there are often objections; one common objection is that men have a lower life expectancy. The CIA World Factbook (which I consider a very reliable source on such matters) says that US life expectancy is 77.8 years for males and 82.3 for females [2]. The country with the highest life expectancy is Monaco, with 85.5 years for males and 93.4 for females [3]. The CDC in the US has a page with links to many summaries about causes of death [4]. The causes where men had higher rates in 2015 are heart disease (by 2.1%), cancer (by 1.7%), unintentional injuries (by 2.8%), and diabetes (by 0.4%). The difference in the death toll for heart disease, cancer, unintentional injuries, and diabetes accounts for 7% of total male deaths. The male top 10 list of causes of death also includes suicide (2.5%) and chronic liver disease (1.9%), which aren’t even in the top 10 list for females (which means that they would each comprise less than 1.6% of the female death toll).

So the difference in life expectancy would be partly due to heart problems (which are related to stress and choices about healthy eating etc), unintentional injuries (risk seeking behaviour and work safety), cancer (the CDC reports that smoking is more popular among men than women [5] by 17.5% vs 13.5%), diabetes (linked to unhealthy food), chronic liver disease (alcohol), and suicide. Largely the difference seems to be due to psychological and sociological issues.

The American Psychological Association has for the first time published guidelines for treating men and boys [6]. It’s noteworthy that the APA states that in the past “psychology focused on men (particularly white men), to the exclusion of all others” and goes on to describe how men dominate the powerful and well paid jobs. But then states that “men commit 90 percent of homicides in the United States and represent 77 percent of homicide victims”. They then go on to say “thirteen years in the making, they draw on more than 40 years of research showing that traditional masculinity is psychologically harmful and that socializing boys to suppress their emotions causes damage that echoes both inwardly and outwardly”. The article then goes on to mention use of alcohol, tobacco, and unhealthy eating as correlated with “traditional” ideas about masculinity. One significant statement is “mental health professionals must also understand how power, privilege and sexism work both by conferring benefits to men and by trapping them in narrow roles”.

The news about the new APA guidelines focuses on the conservative reaction, the NYT has an article about this [7].

I think that there is clear evidence that more flexible ideas about gender etc are good for men’s health and directly connect to some of the major factors that affect male life expectancy. Such ideas are opposed by conservatives.

Risky Jobs

Another point that is raised is the higher rate of work accidents for men than women. In Australia it was illegal for women to work in underground mines (one of the more dangerous work environments) until the late 80’s (here’s an article about this and other issues related to women in the mining industry [8]).

I believe that people should be allowed to work at any job they are qualified for. I also believe that we need more occupational health and safety legislation to reduce the injuries and deaths at work. I don’t think that the fact that a group of (mostly male) politicians created laws to exclude women from jobs that are dangerous and well-paid while also not creating laws to mitigate the danger is my fault. I’ll vote against such politicians at every opportunity.

Military Service

Another point that is often raised is that men die in wars.

In WW1 women were only allowed to serve on the battlefield as nurses. Many women died doing that. Deaths in war have never been an exclusively male thing. Women in many countries are campaigning to be allowed to serve equally in the military (including in combat roles).

As far as I am aware, the last war in which developed countries had conscription was the Vietnam war. Since then military technology has developed into increasingly complex and powerful weapons systems, with an increasing number of civilians and non-combat military personnel supporting each soldier who is directly involved in combat. So it doesn’t seem likely that conscription will be required in any developed country in the near future.

But not being directly involved in combat doesn’t make people safe. NPR has an interesting article about the psychological problems (potentially leading to suicide) that drone operators and intelligence staff experience [9]. As an aside, the article references two women doing that work.

Who Is Ignoring These Things?

I’ve been accused of ignoring these problems, it’s a general pattern on the right to accuse people of ignoring these straight white male problems whenever there’s a discussion of problems that are related to not being a straight white man. I don’t think that I’m ignoring anything by failing to mention death rates due to unsafe workplaces in a discussion about the treatment of trans people. I try to stay on topic.

The New York Times article I cited shows that conservatives are the ones trying to ignore these problems. When the American Psychological Association gives guidelines on how to help men who suffer psychological problems (which presumably would reduce the suicide rate and bring male life expectancy closer to female life expectancy) they are attacked by Fox etc.

My electronic communication (blog posts, mailing list messages, etc) is mostly connected to the free software community, which is mostly male. The majority of people who read what I write are male. But it seems that the majority of positive feedback when I write about such issues is from women. I don’t think there is a problem of women or left wing commentators failing men. I think there is a problem of men and conservatives failing men.

What Can We Do?

I’m sure that there are many straight white men who see these things as problems but just don’t say anything about it. If you don’t want to go to the effort of writing a blog post then please consider signing your name to someone else’s. If you are known for your work (EG by being a well known programmer in the Linux community) then you could just comment “I agree” on a post like this and that makes a difference while also being really easy to do.

Another thing that would be good is if we could change the hard drinking culture that seems connected to computer conferences etc. Kara has an insightful article on Model View Culture about drinking and the IT industry [10]. I decided that drinking at Linux conferences had got out of hand when about 1/3 of the guys at my table at a conference dinner vomited.

Linux Conf Au (the most prestigious Linux conference) often has a Depression BoF, which is really good. I hope they have one this year. As an aside, I have problems with depression; anyone who needs someone to talk to about such things and would rather speak to me than attend a BoF is welcome to contact me by email (please take a failure to reply immediately as a sign that I’m behind on checking my email, not anything else) or social media.

If you have any other ideas on how to improve things please make a comment here, or even better write a blog post and link to it in a comment.


Russ Allbery: DocKnot 2.00

13 January, 2019 - 12:12

This is a new major release of the utility I use to generate package documentation. It's the start of a restructure that will eventually let me merge more of my package maintenance tools into this package (and possibly eventually my web site building tools).

The functions previously provided by the docknot command-line tool have been moved to docknot generate, and the arguments have been changed around a bit. There's also a new docknot generate-all, and more default values so that one doesn't have to pass in as many arguments. The Perl module has been similarly restructured, with documentation generation moved into a new App::DocKnot::Generate module.

On the documentation template front, this release also adds a separate TESTING section for Perl modules and changes some of the templating for standard documentation of how to run the test suite.

You can get the latest release from the DocKnot distribution page or from CPAN.

Ben Hutchings: Debian LTS work, December 2018

12 January, 2019 - 06:50

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked 15 hours. I carried the remaining hours over to January.

I prepared and released another stable update for Linux 3.16 (3.16.62) and rebased jessie's linux package on this version, but did not upload a new release yet.

I also discussed the outstanding speculation-related vulnerabilities affecting Xen in jessie.

Joachim Breitner: Teaching to read Haskell

12 January, 2019 - 04:17

TL;DR: New Haskell tutorial at http://haskell-for-readers.nomeata.de/.

Half a year ago, I left the normal academic career path and joined the DFINITY Foundation, a non-profit start-up that builds a blockchain-based “Internet Computer” which will, if everything goes well, provide a general purpose, publicly owned, trustworthy service hosting platform.

DFINITY heavily bets on Haskell as a programming language to quickly develop robust and correct programs (and it was my Haskell experience that opened this door for me). DFINITY also builds heavily on innovative cryptography and cryptographic protocols to make the Internet Computer work, and has assembled an impressive group of crypto researchers.

Crypto is hard, and so is implementing crypto. How do we know that the Haskell code correctly implements what the cryptography researchers designed? Clearly, our researchers will want to review the code and make sure that everything is as intended.

But surprisingly, not everybody is Haskell-literate. This is where I come in, given that I have taught Haskell classes before, and introduce Haskell to those who do not know it well enough yet.

At first I thought I’d just re-use the material I created for the CIS 194 Haskell course at the University of Pennsylvania. But I noticed that I am facing quite a different audience. Instead of young students with fairly little computer science background who can spend quite a bit of time learning to write Haskell, I am now addressing senior and very smart computer scientists with many other important things to do, who want to learn to read Haskell.

Certainly, a number of basics are needed in either case; basic functional programming for example. But soon, the needs diverge:

  • In order to write Haskell, I have to learn how to structure a program, how to read error messages and deal with Haskell’s layout rule, but I don’t need to know all idioms and syntax features yet.
  • If I want to read Haskell, I need to navigate possibly big files, recognize existing structure, and deal with a plenitude of syntax, but I don’t need to worry about setting up a compiler or picking the right library.

So I set out to create a new Haskell course, “Haskell for Readers”, that is specifically tailored to this audience. It leaves out a few things that are not necessary for reading Haskell, is relatively concise and densely packed, but it starts with the basics and does not assume prior exposure to functional programming.

As behooves a non-profit foundation, DFINITY is happy for me to develop the lecture material in the open and release it to the public under a permissive Creative Commons license, so I invite you to read the in-progress document, and maybe learn something. Of course, my hope is to also get constructive feedback in return, and hence profit from this public release. Sources are on GitHub.

Mike Gabriel: Upcoming FreeRDP v1.1 updates for Debian jessie (LTS) and Debian stretch (please test!)

11 January, 2019 - 22:06

Recently, Bernhard Miklautz, Martin Fleisz and I have been working on old FreeRDP code. Our goal was to get FreeRDP in Debian jessie LTS and Debian stretch working again against recent Microsoft RDP servers.

It has been done now.

Context

In Debian LTS, we were discussing a complex update of the freerdp (v1.1) package. That was before X-mas.

The status of FreeRDP v1.1 (jessie/stretch) then was and still is:

  • Since March 2018, freerdp in stretch (and jessie) (a Git snapshot of the never-released v1.1) has been unusable against the latest Microsoft Windows servers. All MS Windows OS versions switched to RDP protocol version 6 plus CredSSP version 3, and the freerdp versions in Debian jessie/stretch do not support that yet.
  • For people using Debian stretch, the only viable work-around is using freerdp2 from stretch-backports.
  • People using Debian jessie LTS don't have any options (except for upgrading to stretch and using freerdp2 from stretch-bpo).
  • Currently, we know of four unfixed no-DSA CVE issues in freerdp (v1.1) (that are fixed in buster's freerdp2).

With my Debian LTS contributor hat on, I have started working on the open freerdp CVE issues (whose backported fixes luckily appeared in a Ubuntu security update, so not much work on this side) and ...

... I have started backporting the required patches (at least these: [0a,0b,0c]) to get RDP proto version 6 working in Debian jessie's and Debian stretch's freerdp v1.1 version. It turned out later that the third referenced patch [0c] is not required.

With the LTS team it was agreed that this complete endeavour for LTS only makes sense if the stable release team is open to accepting such a complex change to Debian stretch, too.

While working on these patches, I regularly got feedback from FreeRDP upstream developer Bernhard Miklautz. That was before X-mas. Over the X-mas holidays (when I took time off with the family), Bernhard Miklautz and also Martin Fleisz from FreeRDP upstream took over and a couple of days ago I was presented with a working solution. Well done, my friends. Very cool and very awesome!

As already said, more and more people on stretch have recently installed FreeRDP v2 from stretch-backports, but we know of many people and sysadmins who are not allowed to use packages from the Debian backports repository. Using FreeRDP v2 from stretch-backports is still a good (actually the best) option for people without strict software policies. But to those who are not permitted to use software from Debian backports, we can now provide a solution.

Please test FreeRDP v1.1 upload candidates

We would love to get some feedback from brave test users. Actually, if the new update works for you, there is no need to give feedback. However, let us know when things fail for you.

Packages have been upload to my personal staging repository:
https://packages.sunweavers.net/debian/pool/main/f/freerdp/

APT URL (stretch):

deb http://packages.sunweavers.net/debian stretch main

APT URL (jessie):

deb http://packages.sunweavers.net/debian jessie main

Obtain the archive key:

$ wget -qO - http://packages.sunweavers.net/archive.key | sudo apt-key add -

Install the FreeRDP-X11 package:

$ sudo apt update
$ sudo apt install freerdp-x11

As the staging repo contains various other packages, please disable that repo immediately after having installed the new FreeRDP package versions. Thanks!

Next steps

The changeset (.debdiff) has already been sent for pre-approval to the Debian stable (aka stretch) release team [2].

I will postpone the upload by at least some more days (let's say 5) to give people a chance to give feedback. When these days are over, and once (and if) I have got the release team's ACK to proceed, I will upload the updated package.

Once FreeRDP has been updated in Debian stretch, I will do an immediate upload of nearly the same package (with some formal changes) to Debian jessie LTS (installable via security.debian.org).

For Debian stretch, the updated FreeRDP version will be available to all Debian stretch users with the next Debian stable point release at the latest (if nothing of the above gets delayed). The release team may give this update some priority and make it available via stable-updates prior to the next point release.

For Debian jessie, the updated FreeRDP version will be available once the update has been acknowledged by the Debian stable release team.

References

Dirk Eddelbuettel: pinp 0.0.7: More small YAML options

11 January, 2019 - 18:24

A good six months after the previous release, another small feature release of our pinp package for snazzier one- or two-column Markdown-based pdf vignettes got onto CRAN minutes ago as another [CRAN-pretest-publish] release, indicating a fully automated process (as can be done for packages free of NOTES, WARNINGS, and ERRORS, and without ‘changes to worse’ in their reverse dependency checks).

One new option was suggested (and implemented) by Ilya Kashnitsky: the bold and small subtitle carrying a default of ‘this version built on …’ with the date is now customisable; the motivation was, for example, stating a post-publication DOI, which is indeed useful. In working with DOIs I also finally realized that I was blocking the display of DOIs in the references: the PNAS style uses \doi{} for a footer display (which we use e.g. for vignette info), shadowing its use via the JSS.cls borrowed from the Journal of Statistical Software setup. So we renamed the YAML header option to doi_footer for clarity (the old entries are still supported for backwards compatibility; yes, we care), renamed the macro for this use, and, with an assist from LaTeX wizard Achim Zeileis, added a new \doi{} now displaying DOIs in the references as they should be! We also improved some internals such as the Travis CI checks (but I should blog about that another time) and documented yet more YAML header options in the vignette.
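Based on the options named above, a pinp document header using the renamed option might look as follows. This is only an illustrative sketch: the DOI and titles are made up, only doi_footer and date_subtitle are confirmed by this release's NEWS, and the output line follows standard rmarkdown conventions; the package vignette documents the authoritative set of YAML header fields.

```yaml
---
title: "An Example pinp Vignette"
# new in 0.0.7: customise the bold subtitle that defaults to
# 'this version built on ...' with the build date
date_subtitle: "Published in the Journal of Examples, 2019"
# renamed from 'doi' (the old name still works): DOI shown in the footer
doi_footer: "10.0000/zenodo.0000000"
output: pinp::pinp
---
```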

A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

The NEWS entry for this release follows.

Changes in pinp version 0.0.7 (2019-01-11)
  • Added some more documentation for different YAML header fields.

  • A new option has been added for a 'date_subtitle' (Ilya Kashnitsky in #64 fixing #63).

  • 'doi' YAML option renamed to 'doi_footer' to permit DOIs in refs, 'doi' header still usable (Dirk in #66 fixing #65).

  • The 'doi' macro was redefined to create a hyperlink.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Hideki Yamane: Debian Bug Squash Party Tokyo 2019-01

11 January, 2019 - 08:55
Hi, we'll hold an event, "Debian Bug Squash Party Tokyo 2019-01", on Jan 19th.
Happy bug squashing, see you there! :)

Bits from Debian: DebConf19 is looking for sponsors!

11 January, 2019 - 00:30

DebConf19 will be held in Curitiba, Brazil, from July 21st to 28th, 2019. It will be preceded by DebCamp, July 14th to 19th, and Open Day on the 20th.

DebConf, Debian's annual developers conference, is an amazing event where Debian contributors from all around the world gather to present, discuss and work in teams around the Debian operating system. It is a great opportunity to get to know people responsible for the success of the project and to witness a respectful and functional distributed community in action.

The DebConf team aims to organize the Debian Conference as a self-sustaining event, despite its size and complexity. The financial contributions and support by individuals, companies and organizations are pivotal to our success.

There are many different possibilities to support DebConf and we are in the process of contacting potential sponsors from all around the globe. If you know any organization that could be interested or who would like to give back resources to FOSS, please consider handing them the sponsorship brochure or contact the fundraising team with any leads. If you are a company and want to sponsor, please contact us at sponsors@debconf.org.

Let’s work together, as every year, on making the best DebConf ever. We are waiting for you at Curitiba!

Jonathan Dowland: Amiga floppy recovery project, part 3: preliminaries

10 January, 2019 - 23:41

This is the third part in a series of blog posts, following Amiga floppy recovery project, part 2.

The first step for my Amiga project was to recover the hardware from my loft and check it all worked.

When we originally bought the A500 (in, I think, 1991) we bought a RAM expansion at the same time. The base model had a whole 512KiB of RAM, but it was common for people to buy a RAM expander that doubled the amount of memory to a whopping 1 MiB. The official RAM expander was the Amiga 501, which fit into a slot on the underside of the Amiga, behind a trapdoor.

The 501 also featured a real-time clock (RTC), which was powered by a backup NiCad battery soldered onto the circuit board. These batteries are notorious for leaking over a long enough time-frame, and our Amiga had been in a loft for at least 20 years. I had heard about this problem when I first dug the machine back out in 2015, and had a vague memory that I checked the board at the time and could find no sign of leakage, but reading around the subject more recently made me nervous, so I double-checked.

AMRAM-NC-issue 2 RAM expansion

Lo and behold, we don't have an official Commodore RAM expander: we were sold a third-party one, an "AMRAM-NC-issue 2". It contains the 512KiB RAM and a DIP switch, but no RTC or corresponding battery, so no leakage. The DIP switch was used to enable and disable the RAM expansion. Curiously it is currently flicked to "disable". I wonder if we ever actually had it switched on?

The follow-on Amiga models A500+ and A600 featured the RTC and battery directly on the machine's mainboard. I wonder if that has resulted in more of those units being irrevocably damaged from leaking batteries, compared to the A500. My neighbours had an A600, but they got rid of it at some point in the intervening decades. If I were looking to buy an Amiga model today, I'd be quite tempted by the A600, due to its low profile, lacking the numpad, and integrated composite video output and RF modulator.

Kickstart 1.3 (firmware) prompt

I wasn't sure whether I was going to have to rescue my old Philips CRT monitor from the loft. It would have been a struggle to find somewhere to house the Amiga and the monitor combined, as my desk at home is already a bit cramped. Our A500 was supplied with a Commodore A520 RF adapter which we never used in the machine's heyday. Over the Christmas break I tested it and it works, meaning I can use the A500 with my trusty 15" TFT TV (which has proven very useful for testing old equipment, far outlasting many monitors I've had since I bought it).

A520 RF modulator and external FDD

Finally I recovered my old Amiga external floppy disk drive. From what I recall this had very light usage in the past, so hopefully it still works, although I haven't yet verified it. I had partially disassembled this back in 2015, intending to re-purpose the drive into the Amiga. Now I have the Gotek, my plans have changed, so I carefully re-assembled it. Compared to enclosures I've used for PC equipment, it's built like a tank!

The next step is to remove the faulty internal floppy disk drive from the A500 and wire up the Gotek. I was thwarted from attempting this over the Christmas break. The Amiga enclosure's top part is screwed on with T9 Torx screws, and I lacked the right screwdriver part to remove it. I've now obtained the right screwdriver bit and can proceed.

Mario Lang: Please delete me from Planet

10 January, 2019 - 16:40

Wow. Hi Debian. Apparently, you've changed even more in a direction I personally never really liked. As a member of a minority group, I feel the need to explain that I highly dislike the way you are currently handling minority groups. And no, I don't feel you are ignoring them. You are giving a select view far too much attention for a technically focused project.

I am blind. I have been belittled, patronized, ignored, patted and everything else you wouldn't want to happen to you as an adult by all sorts of people that crossed my path. But I have learnt to deal with it, and move on. Not holding my whole environment hostage just because someone happened to call me the wrong name. Come on, I want your problems!!! Really, can we please swap? You'll be blind, and I am sure I can sort out my gender issues.

I left as a developer about two years ago. But now, I feel like I really don't want to be associated with an organisation that behaves like you currently do. So please, delete me from Planet as well. Trading the extra publicity for a good conscience is a great deal. Thxbye

Joachim Breitner: Nonce sense paper online

10 January, 2019 - 15:04

Nadia Heninger and I have just put the preprint version of our paper “Biased Nonce Sense: Lattice Attacks against Weak ECDSA Signatures in Cryptocurrencies”, to be presented at Financial Crypto 2019, online. In this work, we see how many private keys used on the Bitcoin, Ethereum and Ripple blockchains, as well as in HTTPS and SSH, were used in an unsafe way and hence can be compromised. The resulting numbers are not large – 300 Bitcoin keys, with a balance of around $54 – but they show (again and again) that it can be tricky to get crypto right, and that if you don’t get it right, you can lose your money.

Brief summary

When you create a cryptographic signature using ECDSA (the elliptic curve digital signature algorithm), you need to come up with a nonce, a 256-bit random number. It is really important to use a different nonce every time; otherwise it is easy for someone else to take your signatures (which might be stored for everyone to read on the Bitcoin blockchain) and calculate your private key using relatively simple math, and with your private key they can spend all your Bitcoins. In fact, there is evidence that people out there continuously monitor the blockchains for signatures with such repeated nonces and immediately extract the money from compromised keys.
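To make the repeated-nonce attack concrete, here is a small self-contained sketch (my illustration, not code from the paper): a toy ECDSA implementation over secp256k1, two signatures made with the same nonce, and the few lines of modular arithmetic that recover the private key from them. The key and nonce values are hypothetical examples.

```python
# Why nonce reuse in ECDSA is fatal: two signatures with the same nonce k
# leak the private key via simple modular arithmetic.
# Toy implementation for illustration only; not constant-time, not hardened.
import hashlib

# secp256k1 domain parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    # Point addition on y^2 = x^3 + 7 over GF(p); None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def H(msg):
    # Message hash, reduced mod the group order.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(z, d, k):
    # ECDSA with an explicit nonce k -- the bug is reusing the same k twice.
    r = ec_mul(k, G)[0] % n
    s = pow(k, -1, n) * (z + r * d) % n
    return r, s

def recover_key(r, s1, z1, s2, z2):
    # Same nonce => same r.  From s1 - s2 = k^-1 (z1 - z2) we get k,
    # and from s1 = k^-1 (z1 + r d) we get d.
    k = (z1 - z2) * pow(s1 - s2, -1, n) % n
    return (s1 * k - z1) * pow(r, -1, n) % n

d = 0x1CEB00DA          # hypothetical private key
k = 0x5EED5EED          # the nonce that gets (wrongly) reused
z1, z2 = H(b"pay 1 BTC to Alice"), H(b"pay 2 BTC to Bob")
r1, s1 = sign(z1, d, k)
r2, s2 = sign(z2, d, k)
assert r1 == r2                                   # reuse is visible on-chain
assert recover_key(r1, s1, z1, s2, z2) == d       # and fatal
```

Note that `pow(x, -1, n)` for modular inverses needs Python 3.8 or newer.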

Less well known, but still nothing new to the crypto (as in cryptography) community, is that an attacker can calculate the key from signatures that use different, but similar, nonces: for example if they are close to each other (only the low bits differ), or if they differ by exactly a large power of two (only the high bits differ). This uses a fancy and powerful technique based on lattices. Our main contribution here is to bridge crypto (as in cryptography) and crypto (as in cryptocurrency) and see if such vulnerabilities actually exist out there.

And indeed, there are some. Not many (which is good), but they do exist, and clearly due to more than one source. Unfortunately, it is really hard to find out who made these signatures, and with which code, so we can only guess about the causes of these bugs. A large number of affected signatures are related to multisig transactions, so we believe that maybe hardware tokens could be the cause here.

Observing programming bugs

Even though we could not identify the concrete implementations that caused these problems, we could still observe some interesting details about them. The most curious is certainly this one:

One set of signatures, which incidentally were created by an attacker who emptied out accounts of compromised keys (e.g. those that are created with a weak password, or otherwise leaked onto the internet), was using nonces that shared the low 128 bits, and hence revealed the (already compromised) private key of the account he emptied out. Curiously, these low 128 bits are precisely the upper 128 bits of the private key.

So it very much looks like the attacker hacked up a program that monitors the blockchain and empties out accounts, probably did so in a memory-unsafe language like C, and got the size of the buffers for the nonce and the key wrong: maybe they properly filled the nonce with good random data, but when they wrote the secret key the memory locations overlapped, and they overwrote part of their randomness with the very non-random secret key. Oh well.

Do I need to worry?

Probably not. The official blockchain clients get their crypto right (at least this part), and use properly random nonces, so as a user you don’t have to worry. In fact, since 2016, the Bitcoin client uses deterministic signatures (RFC6979) which completely removes the need for randomness in the process.

If you are using non-standard libraries, or if you write your own crypto routines (which you should only ever do if you have a really really good reason for it) you should make sure that these use RFC6979. This is even more important on embedded devices or hardware tokens where a good source of randomness might be hard to come by.
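For illustration, here is a condensed, from-memory sketch of the HMAC-based nonce derivation that RFC 6979 specifies (its HMAC_DRBG construction, instantiated with SHA-256). It is not a vetted implementation; check it against the RFC's test vectors, or better, use a maintained library, before relying on it.

```python
# Deterministic ECDSA nonce derivation in the style of RFC 6979 (HMAC_DRBG
# with SHA-256).  Sketch for illustration; use a vetted library in practice.
import hmac
from hashlib import sha256

def rfc6979_nonce(x, h1, q):
    """Derive a deterministic nonce from private key x (int), message
    hash h1 (bytes) and group order q (int)."""
    qlen = q.bit_length()
    rolen = (qlen + 7) // 8

    def bits2int(b):
        # Interpret b as a big-endian integer, truncated to qlen bits.
        v = int.from_bytes(b, "big")
        extra = len(b) * 8 - qlen
        return v >> extra if extra > 0 else v

    def int2octets(v):
        return v.to_bytes(rolen, "big")

    # Seed the HMAC_DRBG state with the private key and the message hash.
    seed = int2octets(x) + int2octets(bits2int(h1) % q)
    V, K = b"\x01" * 32, b"\x00" * 32
    K = hmac.new(K, V + b"\x00" + seed, sha256).digest()
    V = hmac.new(K, V, sha256).digest()
    K = hmac.new(K, V + b"\x01" + seed, sha256).digest()
    V = hmac.new(K, V, sha256).digest()
    while True:
        T = b""
        while len(T) < rolen:
            V = hmac.new(K, V, sha256).digest()
            T += V
        k = bits2int(T)
        if 1 <= k < q:            # accept only nonces in [1, q-1]
            return k
        # candidate out of range: update state and try again
        K = hmac.new(K, V + b"\x00", sha256).digest()
        V = hmac.new(K, V, sha256).digest()

# secp256k1 group order
q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
k1 = rfc6979_nonce(12345, sha256(b"a message").digest(), q)
k2 = rfc6979_nonce(12345, sha256(b"a message").digest(), q)
assert k1 == k2 and 1 <= k1 < q   # same inputs, same nonce, in range
```

The point of the construction is that the nonce depends only on the key and the message, so no randomness source is needed at signing time, and the same key never signs two different messages with the same nonce.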

Discrete logarithm in secp256k1 with lookup table

In the course of this work I wanted to find out if small nonces (< 2^64) were used even when a key created only one signature – the lattice-based attacks need at least two signatures to work. So I created code that calculates the discrete log in the secp256k1 curve up to an exponent of 2^64. This is made feasible using a lookup table for smaller exponents (2^39 in our case – just large enough to still fit into 2.2 TB of RAM).

This exercise turned out to be not very useful; we did not find any additional keys, but I had fun writing up the program, implemented in C and working very close to the raw data, juggling big binary files mmap’ed into memory, and implementing custom lookup indices and such. In the hope that this might be useful to someone, I share the code at https://github.com/nomeata/secp265k1-lookup-table.
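The lookup-table trick described above is essentially the baby-step giant-step algorithm: precompute a table of small powers, then take giant strides through the search space and probe the table. Here is a minimal sketch of that idea, done in the multiplicative group modulo a small prime (my toy stand-in for secp256k1, chosen so the example runs instantly; the values are hypothetical):

```python
# Baby-step giant-step discrete log: trade memory (a lookup table of
# "baby" powers) for time, finding x with g^x = h in O(sqrt(max_exp)).
from math import isqrt

def bsgs(g, h, p, max_exp):
    """Find x <= max_exp with g^x = h (mod p), or None if there is none."""
    m = isqrt(max_exp) + 1
    # Baby steps: the precomputed lookup table g^j -> j.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: repeatedly multiply h by g^-m and probe the table.
    step = pow(g, -m, p)
    gamma = h
    for i in range(m + 1):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % p
    return None

# Toy instance: recover an exponent from a group element.
p, g = 104729, 5
h = pow(g, 7777, p)
x = bsgs(g, h, p, 10**5)
assert x is not None and pow(g, x, p) == h
```

The setup in the post is the same idea at a different scale and with an asymmetric split: a 2^39-entry table covering a search space of exponents up to 2^64, on the secp256k1 curve rather than a prime field.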

Russ Allbery: Review: Bright Earth

10 January, 2019 - 11:26

Review: Bright Earth, by Philip Ball

Publisher: University of Chicago Copyright: 2001 Printing: 2003 ISBN: 0-226-03628-6 Format: Trade paperback Pages: 337

The subtitle Art and the Invention of Color does a good job advertising the topic of Bright Earth: a history of the creation of color pigments for art (specifically European painting; more on that in a moment). It starts with a brief linguistic and scientific introduction to color, sketches what's known about use and creation of color pigments in antiquity, and then settles down for serious historical study starting in the Middle Ages. Ball catalogs pigment choices, discusses manufacturing methods, and briefly surveys the opinions of various schools of art on color from before the Renaissance through to the modern art of today. He also takes two fascinating (albeit too brief) side trips to discuss aging of pigments and the problem of reproducing color art.

This is one of those non-fiction books whose primary joy for me was to introduce me to problems and constraints that were obvious in retrospect but that I'd never thought about. If someone had asked me whether painters were limited in their subject matter and methods by the colors available to them, I probably would have said "huh" and agreed, but I never thought to ask the question. Like a lot of people of my age in the US, I grew up watching Bob Ross's The Joy of Painting and its familiar list of oil paints: phthalo green, alizarin crimson, and so forth. But of course that rich palette is a product of modern chemistry. Early Renaissance painters had to make do with fewer options, many of them requiring painstaking preparation that painters or their assistants did themselves before the popularity of art and the rise of professional color makers. They knew, and were shaped by, their materials in a way that one cannot be when one buys tubes of paint from an art store.

Similarly, I was familiar with additive color mixing from physics and from computer graphics projects, and had assumed that a few reasonable primaries would provide access to the entire palette. I had never considered the now-obvious problem of subtractive mixing with impure primaries: since the pigments are removing colors from white light, mixing together multiple pigments quickly gets you a muddy brown, not a brilliant secondary color. The resulting deep distrust of mixing pigments that dates back to antiquity further limits the options available to painters.

Ball's primary topic is the complicated interplay between painting and science. Many of the new colors of the Renaissance were byproducts or accidents of alchemy, and were deeply entangled in the obsession with the transmutation of metals into gold. Most of the rest were repurposed dyes from the much more lucrative textile business. Enlightenment chemistry gave rise to a whole new palette, but the chemistry of colors is complex and fickle. Going into this book, I had a superficial impression that particular elements or compounds had particular colors, and finding pigments would be a matter of finding substances that happened to have that color. Ball debunks that idea quickly: small variations in chemical structure, and thus small variations in preparation, can produce wildly different colors. Better chemistry led to more and better colors, but mostly by accident or trial and error until surprisingly recently. The process to make a color almost always came first; understanding of why it worked might be delayed centuries.

In school, I was an indifferent art student at best, so a lot of my enjoyment of Bright Earth came from its whirlwind tour of art history through the specific lens of color. I hadn't understood why medieval European paintings seem so artificial and flat before reading this book, or why, to my modern eye, Renaissance work suddenly became more beautiful and interesting. I had also never thought about the crisis that photography caused for painting, or how much that explains of the modern move away from representational art. And I had seriously underestimated the degree to which colors are historically symbolic rather than representational. This material may be old news for those who paid attention in art history courses (or, *cough*, took them in the first place), but I enjoyed the introduction. (I often find topics more approachable when presented through an idiosyncratic lens like this.)

Ball is clear, straightforward, and keeps the overall picture coherent throughout, which probably means that he's simplifying dramatically given that the scope of this book is nothing less than the entire history of European and American painting. But I'm a nearly complete newcomer to this topic, and he kept me afloat despite the flood of references to paintings that I've never seen or thought about, always providing enough detail for me to follow his points about color. You definitely do not have to already know art history to get a lot out of this book.

I do have one caveat and one grumble. The caveat is that, despite the subtitle, this book is not about art in general. It's specifically about painting, and more specifically focused on the subset of painting that qualifies as "fine art." Ball writes just enough about textiles to hint that the vast world of dyes may be even more interesting, and was certainly more important to more people, but textiles are largely omitted from this story. More notably, one would not be able to tell from this book that eastern Asia or Africa or pre-colonial America exist, let alone have their own artistic conventions and history. Ball's topic is relentlessly limited to Europe, and then the United States, except for a few quick trips to India or Afghanistan for raw materials. There's nothing inherently wrong with this — Ball already has more history than he can fully cover in only Europe and the United States — but it would have been nice to read a more explicit acknowledgment and at least a few passing mentions of how other cultures approached this problem.

The grumble is just a minor mismatch of interests between Ball and myself, namely that the one brief chapter on art reproduction was nowhere near enough for me, and I would have loved to read three or four chapters (or a whole book) on that topic. I suspect my lack of appreciation of paintings has a lot to do with the challenges of reproducing works of art in books or on a computer screen, and would have loved more technical detail on what succeeds and what fails and how one can tell whether a reproduction is "correct" or not. I would have traded off a few alchemical recipes for more on that modern problem. Maybe I'll have to find another book.

As mentioned above, I'm not a good person to recommend books about art to anyone who knows something about art. But with that disclaimer, and the warning that the whirlwind tour of art history mixed with the maddening ambiguity of color words can be a bit overwhelming in spots, I enjoyed reading this more than I expected and will gladly recommend it.

Bright Earth does not appear to be available as an ebook, and I think that may be a wise choice. The 66 included color plates help a great deal, and I wouldn't want to read this book without them. Unless a future ebook comes with very good digital reproductions, you may want to read this book in dead tree form.

Rating: 7 out of 10
