Planet Debian


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2018

15 May, 2018 - 22:32

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 183 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for May).
  • Antoine Beaupré did 12h.
  • Ben Hutchings did 17 hours (out of 15h allocated + 2 remaining hours).
  • Brian May did 10 hours.
  • Chris Lamb did 16.25 hours.
  • Emilio Pozuelo Monfort did 11.5 hours (out of 16.25 hours allocated + 5 remaining hours, thus keeping 9.75 extra hours for May).
  • Holger Levsen did nothing (out of 16.25 hours allocated + 16.5 hours remaining, thus keeping 32.75 extra hours for May). He did not get hours allocated for May and is expected to catch up.
  • Hugo Lefeuvre did 20.5 hours (out of 16.25 hours allocated + 4.25 remaining hours).
  • Markus Koschany did 16.25 hours.
  • Ola Lundqvist did 11 hours (out of 14 hours allocated + 9.5 remaining hours, thus keeping 12.5 extra hours for May).
  • Roberto C. Sanchez did 7 hours (out of 16.25 hours allocated + 15.75 hours remaining, but immediately gave back the 25 remaining hours).
  • Santiago Ruano Rincón did 8 hours.
  • Thorsten Alteholz did 16.25 hours.

Evolution of the situation

The number of sponsored hours did not change. But a few sponsors interested in having more than 5 years of support should join LTS next month since this was a pre-requisite to benefit from extended LTS support. I did update Freexian’s website to show this as a benefit offered to LTS sponsors.

The security tracker currently lists 20 packages with a known CVE, and the dla-needed.txt file 16. Two weeks from Wheezy’s end-of-life, the number of open issues is close to a historical low.

Thanks to our sponsors

New sponsors are in bold.


Norbert Preining: Specification and Verification of Software with CafeOBJ – Part 2 – First steps with CafeOBJ

15 May, 2018 - 21:38

This blog post continues Part 1 of our series on software specification and verification with CafeOBJ.

We will go through basic operations like starting and stopping the CafeOBJ interpreter, getting help, and doing basic computations.

Starting and leaving the interpreter

If CafeOBJ is properly installed, a call to cafeobj will greet you with information about the current version of CafeOBJ, as well as build dates and which build system has been used. The following is what is shown on my Debian system with the latest version of CafeOBJ installed:

$ cafeobj
-- loading standard prelude

            -- CafeOBJ system Version 1.5.7(PigNose0.99) --
                   built: 2018 Feb 26 Mon 6:01:31 GMT
                         prelude file: std.bin
                      2018 Apr 19 Thu 2:20:40 GMT
                            Type ? for help
                  -- Containing PigNose Extensions --
                             built on SBCL

After the initial information there is the prompt CafeOBJ> indicating that the interpreter is ready to process your input. By default several files (the prelude, as it is called above) are loaded, which define certain basic sorts and operations.

If you have had enough of playing around, simply press Ctrl-D (the Control key and d at the same time), or type in quit:

CafeOBJ> quit

Getting help

Besides the extensive documentation available at the website (reference manual, user manual, tutorials, etc), the reference manual is also always at your fingertips within the CafeOBJ interpreter using the ? group of commands:

  • ? – gives general help
  • ?com class – shows available commands classified by ‘class’
  • ? name – gives the reference manual entry for name
  • ?ex name – gives examples (if available) for name
  • ?ap name – (apropos) searches the reference manual for appearances of name

To give an example on the usage, let us search for the term operator and then look at the documentation concerning one of them:

CafeOBJ> ?ap op
Found the following matches:
 . `:theory <op_exp> : -> { assoc | comm | id: }`
 . `op <name> : <sorts> -> <sort> { <attrs> }`
 . on-the-fly declaration

CafeOBJ> ? op
'op <name> : <sorts> -> <sort> { <attrs> }'

Defines an operator by its domain, co-domain, and the term construct.
'<sorts>' is a space separated list of sort names, '<sort>' is a
single sort name.

I have shortened the output a bit (indicated by ...).

Simple computations

By default, CafeOBJ is just a barren landscape, meaning that there are no rules or axioms active. Everything is encapsulated into so-called modules (which in mathematical terms are definitions of order-sorted algebras). One of these modules is NAT, which allows computations with the natural numbers. To activate a module we use open:

CafeOBJ> open NAT .

Opening the module produces quite some output (elided here) from the CafeOBJ interpreter loading additional files.

There are two things to note in the above:

  • One finishes a command with a literal dot . – this is necessary due to the completely free syntax of the CafeOBJ language and indicates the end of a statement, similar to semicolons in other programming languages.
  • The prompt has changed to %NAT> to indicate that the playground (context) we are currently working in is the natural numbers.

To actually carry out computations we use the command red or reduce. Recall from the previous post that the computational model of CafeOBJ is rewriting, and in this setting reduce means kicking off the rewrite process. Let us do this for a simple computation:

%NAT> red 2 + 3 * 4 .
-- reduce in %NAT : (2 + (3 * 4)):NzNat
(14):NzNat
(0.0000 sec for parse, 0.0000 sec for 2 rewrites + 2 matches)


Things to note in the above output:

  • Correct operator precedence: CafeOBJ correctly computes 14 due to the proper use of operator precedence. If you want to override the parsing you can use additional parentheses.
  • CafeOBJ even gives a sort (or type) information for the return value: (14):NzNat, indicating that the return value of 14 is of sort NzNat, which refers to non-zero natural numbers.
  • The interpreter tells you how much time it spent in parsing and rewriting.

If we have enough of this playground, we close the opened module with close which returns us to the original prompt:

%NAT> close .

Now if you think this is not so interesting, let us do some more fun things, like computations with rational numbers, which are provided by CafeOBJ in the RAT module. Rational numbers can be written as slashed expressions: a / b. If we don’t want to actually reduce a given expression, we can use parse to tell CafeOBJ to parse the next expression and give us the parsed expression together with a sort:

CafeOBJ> open RAT .
%RAT> parse 3/4 .
(3/4):NzRat

Again, CafeOBJ correctly determined that the given value is a non-zero rational number. More complex expressions can be parsed the same way, as well as reduced to a minimal representation:

%RAT> parse 2 / ( 4 * 3 ) .
(2 / (4 * 3)):NzRat

%RAT> red 2 / ( 4 * 3 ) .
-- reduce in %RAT : (2 / (4 * 3)):NzRat
(1/6):NzRat
(0.0000 sec for parse, 0.0000 sec for 2 rewrites + 2 matches)


NAT and RAT are not the only built-in sorts; there are several more, and others can be defined easily (see the next blog post). The currently available number sorts, together with their sort order (recall that we are in order-sorted algebras, so one sort can contain others), are:
NzNat < Nat < NzInt < Int < NzRat < Rat
which refer to non-zero natural numbers, natural numbers, non-zero integers, integers, non-zero rational numbers, rational numbers, respectively.

Then there are other data types unrelated (not ordered) to any other:
Triv, Bool, Float, Char, String, 2Tuple, 3Tuple, 4Tuple.
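A quick way to see the sort ordering in action is to parse a few literals and compare the sorts CafeOBJ reports (a sketch; the exact least sort shown may depend on the prelude version):

```
open RAT .
parse 3 .        -- a non-zero natural (NzNat)
parse -3 .       -- a non-zero integer (NzInt)
parse 3/4 .      -- a non-zero rational (NzRat), as seen above
close .
```

Since NzNat < NzRat in the hierarchy, a natural number like 3 can be used anywhere a rational is expected.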


CafeOBJ does not have functions in the usual sense, but operators defined via their arity and a set of (rewriting) equations. Let us take a look at two simple functions in the natural numbers: square which takes one argument and returns the square of it, and a function sos which takes two arguments and returns the sum of squares of the arguments. In mathematical writing: square(a) = a * a and sos(a,b) = a*a + b*b.

This can be translated into CafeOBJ code as follows (from now on I will be leaving out the prompts):

open NAT .
vars A B : Nat .
op square : Nat -> Nat .
eq square(A) = A * A .
op sos : Nat Nat -> Nat .
eq sos(A, B) = A * A + B * B .

This first declares two variables A and B to be of sort Nat (note that the module names and sort names are not the same, but the module names are usually the uppercase of the sort names). Then the operator square is introduced by providing its arity. In general an operator can have several input variables, and for each of them as well as the return value we have to provide the sorts:

  op  NAME : Sort1 Sort2 ... -> Sort

defines an operator NAME with several input parameters of the given sorts, and the return sort Sort.

The next line gives one (the only necessary) equation governing the computation rules of square. Equations are introduced by eq (and some variants of it), followed by an expression, an equal sign, and another expression. This indicates that CafeOBJ may rewrite the left expression to the right expression.

In our case we told CafeOBJ that it may rewrite an expression of the form square(A) to A * A, where A can be anything of sort Nat (for now we don't go into details how order-sorted rewriting works in general).

The next two lines do the same for the operator sos.

Having this code in place, we can easily do computations with it by using the already introduced reduce command:

red square(10) .
-- reduce in %NAT : (square(10)):Nat
(100):NzNat

red sos(10,20) .
-- reduce in %NAT : (sos(10,20)):Nat
(500):NzNat

What to do if one equation does not suffice? Let us look at a typical recursive definition of the sum of natural numbers: sum(0) = 0 and for a > 0 we have sum(a) = a + sum(a-1). This can be easily translated into CafeOBJ as follows:

open NAT .
op sum : Nat -> Nat .
eq sum(0) = 0 .
eq sum(A:NzNat) = A + sum(p A) .
red sum(10) .

where p (for predecessor) indicates the next smaller natural number. This operator is only defined on non-zero natural numbers, though.

In the above fragment we also see a new style of declaring variables, on the fly: The first occurrence of a variable in an equation can carry a sort declaration, which extends all through the equation.

Running the above code we get, not surprisingly, 55; in particular:

-- reduce in %NAT : (sum(10)):Nat
(55):NzNat
(0.0000 sec for parse, 0.0000 sec for 31 rewrites + 41 matches)

As a challenge, the reader might try to give definitions of the factorial function and the Fibonacci function; the next blog post will present solutions for them.
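Without spoiling the full solutions, the factorial function can be sketched following exactly the same pattern as sum above (assuming the built-in predecessor operator p behaves as described):

```
open NAT .
op fact : Nat -> Nat .
eq fact(0) = 1 .
eq fact(A:NzNat) = A * fact(p A) .
red fact(5) .
close .
```

which should reduce to (120):NzNat. The Fibonacci function is left for the reader.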

This concludes the second part. In the next part we will look at defining modules (aka algebras aka theories) and use them to define lists.

Enrico Zini: Starting user software in X

15 May, 2018 - 19:06

There are currently many ways of starting software when a user session starts.

This is an attempt to collect a list of pointers to piece the big picture together. It's partial and some parts might be imprecise or incorrect, but it's a start, and I'm happy to keep it updated if I receive corrections.


man xsession

  • Started by the display manager for example, /usr/share/lightdm/lightdm.conf.d/01_debian.conf or /etc/gdm3/Xsession
  • Debian specific
  • Runs scripts in /etc/X11/Xsession.d/
  • /etc/X11/Xsession.d/40x11-common_xsessionrc sources ~/.xsessionrc which can do little more than set env vars, because it is run at the beginning of X session startup
  • At the end, it starts the session manager (gnome-session, xfce4-session, and so on)
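Since ~/.xsessionrc is sourced so early, it is essentially limited to environment setup; a hypothetical example (the input-method variables here are just illustrations):

```shell
# ~/.xsessionrc -- sourced by /etc/X11/Xsession.d/40x11-common_xsessionrc
# early in X session startup; only set environment variables here.
export GTK_IM_MODULE=ibus
export QT_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
```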
systemd --user
  • Started by pam_systemd, so it might not have a DISPLAY variable set in the environment yet
  • Manages units in:
    • /usr/lib/systemd/user/ where units provided by installed packages belong.
    • ~/.local/share/systemd/user/ where units of packages that have been installed in the home directory belong.
    • /etc/systemd/user/ where system-wide user units are placed by the system administrator.
    • ~/.config/systemd/user/ where the users put their own units.
  • A trick to start a systemd user unit when the X session has been set up and the DISPLAY variable is available, is to call systemctl start from a .desktop autostart file.
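The trick from the last point can be sketched as a .desktop file placed in ~/.config/autostart/ (the unit name my-app.service is hypothetical) that asks systemd to start the unit only once the X session, including DISPLAY, is set up:

```
# ~/.config/autostart/start-my-app.desktop (hypothetical example)
[Desktop Entry]
Type=Application
Name=Start my-app user unit
Exec=systemctl --user start my-app.service
```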
dbus activation

X session manager

xdg autostart

Other startup notes

~/.Xauthority

To connect to an X server, a client needs to send a token from ~/.Xauthority, which proves that they can read the user's private data.

~/.Xauthority contains a token generated by the display manager and communicated to X at startup.

To view its contents, use xauth -i -f ~/.Xauthority list

Wouter Verhelst: Digitizing my DVDs

15 May, 2018 - 15:44

I have a rather sizeable DVD collection. The database that I created of them a few years back after I'd had a few episodes where I accidentally bought the same movie more than once claims there's over 300 movies in the cabinet. Additionally, I own a number of TV shows on DVD, which, if you count individual disks, will probably end up being about the same number.

A few years ago, I decided that I was tired of walking to the DVD cabinet, taking out a disc, and placing it in the reader; instead, I wanted to digitize the discs and use kodi to be able to watch a movie whenever I felt like it. So I made some calculations, and came up with a system with enough storage (on ZFS, of course) to store all the DVDs without needing to re-encode them.

I got started on ripping most of the DVDs using dvdbackup, but it quickly became apparent that I'd made a miscalculation; where I thought that most of the DVDs would be 4.7G ones, it turns out that most commercial DVDs are actually of the 9G type. Come to think of it, that does make a lot of sense. Additionally, now that I had a home server with some significant redundant storage, I found that I had some additional uses for it. The storage that I had, vast though it may be, wouldn't suffice.

So, I gave this some more thought, but then life interfered and nothing happened for a few years.

Recently however, I've picked it up again, changing my workflow. I started using handbrake to re-encode the DVDs so they wouldn't take up quite so much space; having chosen VP9 as my preferred codec, I end up storing the DVDs as about 1 to 2 G per main feature, rather than the 8 to 9 that it used to be -- a significant gain. However, my first workflow wasn't very efficient; I would run the handbrake GUI from my laptop on ssh -X sessions to multiple machines, encoding the videos directly from DVD that way. That worked, but it meant I couldn't shut down my laptop to take it to work without interrupting work that was happening; also, it meant that if a DVD finished encoding in the middle of the night, I wouldn't be there to replace it, so the system would be sitting idle for several hours. Clearly some form of improvement was necessary if I was going to do this in any reasonable amount of time.

So after fooling around a bit, I came up with the following:

  • First, I use dvdbackup -M -r a to read the DVD without re-encoding anything. This can be done at the speed of the optical medium, and can therefore be done much more efficiently than to use handbrake directly from the DVD. The -M option tells dvdbackup to read everything from the DVD (to make a mirror of it, in effect). The -r a option tells dvdbackup to abort if it encounters a read error; I found that DVDs sometimes can be read successfully if I eject the drive and immediately reinsert it, or if I give the disk another clean, or even just try again in a different DVD reader. Sometimes the disk is just damaged, and then using dvdbackup's default mode of skipping the unreadable blocks makes sense, but not in a first attempt.
  • Then, I run a small perl script that I wrote. It basically does two things:

    1. Run HandBrakeCLI -i <dvdbackup output> --previews 1 -t 0, parse its stderr output, and figure out what the first and the last titles on the DVD are.
    2. Run qsub -N <movie name> -v FILM=<dvdbackup output> -t <first title>-<last title> convert-film
  • The convert-film script is a bash script, which (in its first version) did this:

    mkdir -p "$OUTPUTDIR/$FILM/tmp"
    HandBrakeCLI -x "threads=1" --no-dvdnav -i "$INPUTDIR/$FILM" -e vp9 -E copy -T -t $SGE_TASK_ID --all-audio --all-subtitles -o "$OUTPUTDIR/$FILM/tmp/T${SGE_TASK_ID}.mkv"

    Essentially, that converts a single title to a VP9-encoded matroska file, with all the subtitles and audio streams intact, and forcing it to use only one thread -- having it use multiple threads is useful if you care about a single DVD converting as fast as possible, but I don't, and having four DVDs on a four-core system all convert at 100% CPU seems more efficient than having two convert at about 180% each. I did consider using HandBrakeCLI's options to only extract the "interesting" audio and subtitle tracks, but I prefer to not have dubbed audio (to have subtitled audio instead); since some of my DVDs are originally in non-English languages, doing so gets rather complex. The audio and subtitle tracks don't take up that much space, so I decided not to bother with that in the end.

The use of qsub, which submits the script into gridengine, allows me to hook up several encoder nodes (read: the server plus a few old laptops) to the same queue.

That went pretty well, until I wanted to figure out how far along something was going. HandBrakeCLI provides progress information on stderr, and I can just do a tail -f of the stderr output logs, but that really works well only for one DVD at a time, not if you're trying to follow along with about a dozen of them.

So I made a database, and wrote another perl script. This latter will parse the stderr output of HandBrakeCLI, fish out the progress information, and put the completion percentage as well as the ETA time into a database. Then it became interesting:

CREATE FUNCTION transjob_notify() RETURNS trigger AS $$
BEGIN
  IF (TG_OP = 'INSERT') OR (TG_OP = 'UPDATE' AND ((NEW.progress != OLD.progress) OR NEW.finished = TRUE)) THEN
    PERFORM pg_notify('transjob', row_to_json(NEW)::varchar);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER transjob_tcn_trigger AFTER INSERT OR UPDATE ON transjob
  FOR EACH ROW EXECUTE PROCEDURE transjob_notify();

This uses PostgreSQL's asynchronous notification feature to send out a notification whenever an interesting change has happened to the table.

#!/usr/bin/perl -w

use strict;
use warnings;

use Mojolicious::Lite;
use Mojo::Pg;


helper dbh => sub { state $pg = Mojo::Pg->new->dsn("dbi:Pg:dbname=transcode"); };

websocket '/updates' => sub {
    my $c = shift;
    my $cb = $c->dbh->pubsub->listen(transjob => sub { $c->send(pop) });
    $c->on(finish => sub { shift->dbh->pubsub->unlisten(transjob => $cb) });
};

app->start;

This uses the Mojolicious framework and Mojo::Pg to send out the payload of the "transjob" notification (which we created with the FOR EACH ROW trigger inside PostgreSQL earlier, and which contains the JSON version of the table row) over a WebSocket. Then it's just a small matter of programming to write some javascript which dynamically updates the webpage whenever that happens, and Tadaa! I have an online overview of the videos that are transcoding, and how far along they are.

That only requires me to keep the queue non-empty, which I can easily do by running dvdbackup a few times in parallel every so often. That's a nice Saturday afternoon project...

Reproducible builds folks: Reproducible Builds: Weekly report #159

15 May, 2018 - 14:20

Here’s what happened in the Reproducible Builds effort between Sunday May 6 and Saturday May 12 2018:

Packages reviewed and fixed, and bugs filed diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 94 was uploaded to Debian unstable and PyPI by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

Mattia Rizzolo subsequently backported this version to stretch.

After the release of version 94, the development continued with the following contributions from Mattia Rizzolo:

disorderfs development

Version 0.5.3-1 of disorderfs (our FUSE-based filesystem that introduces non-determinism) was uploaded to unstable by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

jenkins.debian.net development

Mattia Rizzolo made the following changes to our Jenkins-based testing framework, including:


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Russ Allbery: Review: Thanks for the Feedback

15 May, 2018 - 11:35

Review: Thanks for the Feedback, by Douglas Stone & Sheila Heen

Publisher: Penguin
Copyright: 2014
Printing: 2015
ISBN: 1-101-61427-7
Format: Kindle
Pages: 322

Another book read for the work book club.

I was disappointed when this book was picked. I already read two excellent advice columns (Captain Awkward and Ask a Manager) and have read a lot on this general topic. Many workplace-oriented self-help books also seem to be full of a style of pop psychology that irritates me rather than informs. But the point of a book club is that you read the book anyway, so I dove in. And was quite pleasantly surprised.

This book is about receiving feedback, not about giving feedback. There are tons of great books out there about how to give feedback, but, as the authors say in the introduction, almost no one giving you feedback is going to read any of them. It would be nice if we all got better at giving feedback, but it's not going to happen, and you can't control other people's feedback styles. You can control how you receive feedback, though, and there's quite a lot one can do on the receiving end. The footnoted subtitle summarizes the tone of the book: The Science and Art of Receiving Feedback Well (even when it is off base, unfair, poorly delivered, and, frankly, you're not in the mood).

The measure of a book like this for me is what I remember from it several weeks after reading it. Here, it was the separation of feedback into three distinct types: appreciation, coaching, and evaluation. Appreciation is gratitude and recognition for what one has accomplished, independent of any comparison against other people or an ideal for that person. Coaching is feedback aimed at improving one's performance. And evaluation, of course, is feedback that measures one against a standard, and usually comes with consequences (a raise, a positive review, a relationship break-up). We all need all three, but different people need different mixes, sometimes quite dramatically so. And one of the major obstacles in the way of receiving feedback well is that the three types tend to arrive mixed or confused.

That framework makes it easier to see where one's reaction to feedback often goes off the rails. If you come into a conversation needing appreciation ("I've been working long hours to get this finished on time, and a little thanks would be nice"), but the other person is focused on an opportunity for coaching ("I can point out a few tricks and improvements that will let you not work as hard next time"), the resulting conversation rarely goes well. The person giving the coaching is baffled at the resistance to some simple advice on how to improve, and may even form a negative opinion of the other person's willingness to learn. And the person receiving the feedback comes away feeling unappreciated and used, and possibly fearful that their hard work is merely a sign of inadequate skills. There are numerous examples of similar mismatches.

I found this framing immediately useful, particularly in the confusion between coaching and evaluation. It's very easy to read any constructive advice as negative evaluation, particularly if one is already emotionally low. Having words to put to these types of feedback makes it easier to evaluate the situation intellectually rather than emotionally, and to explicitly ask for clarifying evaluation if coaching is raising those sorts of worries.

The other memorable concept I took away from this book is switchtracking. This is when the two people in a conversation are having separate arguments simultaneously, usually because each person has a different understanding of what the conversation is "really" about. Often this happens when the initial feedback sets off a trigger, particularly a relationship or identity trigger (other concepts from this book), in the person receiving it. The feedback giver may be trying to give constructive feedback on how to lay out a board presentation, but the receiver is hearing that they can't be trusted to talk to the board on their own. The receiver will tend to switch the conversation away to whether or not they can be trusted, quite likely confusing the initial feedback giver, or possibly even prompting another switchtrack into a third topic of whether they can receive criticism well.

Once you become aware of this tendency, you start to see it all over the place. It's sadly common. The advice in the book, which is accompanied with a lot of concrete examples, is to call this out explicitly, clearly separate and describe the topics, and then pick one to talk about first based on how urgent the topics are to both parties. Some of those conversations may still be difficult, but at least both parties are having the same conversation, rather than talking past each other.

Thanks for the Feedback fleshes out these ideas and a few others (such as individual emotional reaction patterns to criticism and triggers that interfere with one's ability to accept feedback) with a lot of specific scenarios. The examples are refreshingly short and to the point, avoiding a common trap of books like this to get bogged down into extended artificial dialogue. There's a bit of a work focus, since we get a lot of feedback at work, but there's nothing exclusively work-related about the advice here. Many of the examples are from personal relationships of other kinds. (I found an example of a father teaching his daughters to play baseball particularly memorable. One daughter takes this as coaching and the other as evaluation, resulting in drastically different reactions.) The authors combine matter-of-fact structured information with a gentle sense of humor and great pacing, making this surprisingly enjoyable to read.

I was feeling oversaturated with information on conversation styles and approaches and still came away from this book with some useful additional structure. If you're struggling with absorbing feedback or finding the right structure to use it constructively instead of getting angry, scared, or depressed, give this a try. It's much better than I had expected.

Rating: 7 out of 10

Daniel Pocock: A closer look at power and PowerPole

15 May, 2018 - 02:25

The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery but hopefully with another four days to go before OSCAL we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

OSCAL organizer urgently looking for an Apple MacBook PSU

In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

If only Apple used PowerPole...

Why batteries?

The first question many people asked is why use batteries and not a power supply. There are two answers: portability and availability. Many hams like to operate their radios away from home. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

Why PowerPole?

Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

The pen is mightier than the sword, but what about the crimper?

The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

Here are some packets of PowerPoles in every size:

Example cables

It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

Distributing power between multiple devices

There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

Need help crimping your cables?

If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

Olivier Berger: Implementing an example Todo-Backend REST API with Symfony 4 and api-platform

14 May, 2018 - 21:20

Todo-Backend lists many implementations of the same REST API with different backend-oriented Web development frameworks.

I’ve proposed my own version using Symfony 4 in PHP, and the api-platform project, which helps implement REST APIs.

I’ve documented the way I did it in the project’s documentation in detail, for those curious about Symfony development of a very simple (JSON-based) REST API. See its README file (of course written in the mandatory org-mode ;).
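For readers who don’t want to dig into the repository right away, here is a sketch of what a Todo-Backend resource class typically looks like with api-platform 2 annotations. The class and column names below are illustrative, not necessarily the ones used in the actual project:

```php
<?php
// src/Entity/Todo.php — illustrative sketch only; see the project's README
// for the real code. Assumes api-platform 2.x with Doctrine annotations.

namespace App\Entity;

use ApiPlatform\Core\Annotation\ApiResource;
use Doctrine\ORM\Mapping as ORM;

/**
 * @ApiResource
 * @ORM\Entity
 */
class Todo
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    private $id;

    /** @ORM\Column(type="string") */
    public $title;

    /** @ORM\Column(type="boolean") */
    public $completed = false;

    /**
     * "order" is an SQL keyword, hence the explicit column name.
     * @ORM\Column(name="task_order", type="integer", nullable=true)
     */
    public $order;

    public function getId(): ?int
    {
        return $this->id;
    }
}
```

With the `@ApiResource` annotation alone, api-platform exposes the usual CRUD routes (GET/POST on the collection, GET/PUT/DELETE on items) without any controller code, which is most of the appeal for an API this small.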

You can find the rest of the code here:

AFAICS api-platform offers a great set of features for Linked-Data/REST development with Symfony in general. However, some tweaks were necessary to conform to the TodoBackend specs, mainly because TodoBackend is JSON only and doesn’t support JSON-LD…

Oh, and the hardest part was deploying on Heroku, making sure that the CORS headers would work as expected :-/
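For what it’s worth, api-platform depends on nelmio/cors-bundle, so getting the CORS headers right usually comes down to a configuration fragment along these lines. The `CORS_ALLOW_ORIGIN` environment variable name follows the Symfony recipe convention; the actual project and its Heroku setup may differ:

```yaml
# config/packages/nelmio_cors.yaml — illustrative, not the project's actual config
nelmio_cors:
    defaults:
        origin_regex: true
        allow_origin: ['%env(CORS_ALLOW_ORIGIN)%']
        allow_methods: ['GET', 'OPTIONS', 'POST', 'PUT', 'PATCH', 'DELETE']
        allow_headers: ['Content-Type', 'Authorization']
        max_age: 3600
    paths:
        '^/': null
```

On Heroku, `CORS_ALLOW_ORIGIN` would then be set as a config var (e.g. a regex matching the Todo-Backend test host) rather than in a committed `.env` file.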



Russ Allbery: Review: Twitter and Tear Gas

14 May, 2018 - 10:45

Review: Twitter and Tear Gas, by Zeynep Tufekci

Publisher: Yale University Press
Copyright: 2017
ISBN: 0-300-21512-6
Format: Kindle
Pages: 312

Subtitled The Power and Fragility of Networked Protest, Twitter and Tear Gas is a close look at the effect of social media (particularly, but not exclusively, Twitter and Facebook) on protest movements around the world. Tufekci pays significant attention to the Tahrir Square protests in Egypt, the Gezi Park protests in Turkey, Occupy Wall Street and the Tea Party in the United States, Black Lives Matter also in the United States, and the Zapatista uprising in Mexico early in the Internet era, as well as more glancing attention to multiple other protest movements since the advent of the Internet. She avoids both extremes of dismissal of largely on-line movements and the hailing of social media as a new era of mass power, instead taking a detailed political and sociological look at how protest movements organized and fueled via social media differ in both strengths and weaknesses from the movements that came before.

This is the kind of book that could be dense and technical but isn't. Tufekci's approach is analytical but not dry or disengaged. She wants to know why some protests work and others fail, what the governance and communication mechanisms of protest movements say about their robustness and capabilities, and how social media has changed the tools and landscape used by protest movements. She's also been directly involved: she's visited the Zapatistas, grew up in Istanbul and is directly familiar with the politics of the Gezi Park protests, and includes in this book a memorable story of being caught in the Antalya airport in Turkey during the 2016 attempted coup. There are some drier and more technical chapters where she's laying the foundations of terminology and analysis, but they just add rigor to an engaging, thoughtful examination of what a protest is and why it works or doesn't work.

My favorite part of this book, by far, was the intellectual structure it gave me for understanding the effectiveness of a protest. That's something about which media coverage tends to be murky, at least in situations short of a full-blown revolutionary uprising (which are incredibly rare). The goal of a protest is to force a change, and clearly sometimes this works. (The US Civil Rights movement and the Indian independence movement are obvious examples. The Arab Spring is a more recent if more mixed example.) However, sometimes it doesn't; Tufekci's example is the protests against the Iraq War. Why?

A key concept of this book is that protests signal capacity, particularly in democracies. That can be capacity to shape a social narrative and spread a point of view, capacity to disrupt the regular operations of a system of authority, or capacity to force institutional change through the ballot box or other political process. Often, protests succeed to the degree that they signal capacity sufficient to scare those currently in power into compromising or acquiescing to the demands of the protest movement. Large numbers of people in the streets matter, but not usually as a show of force. Violent uprisings are rare and generally undesirable for everyone. Rather, they matter because they demand and hold media attention (allowing them to spread a point of view), can shut down normal business and force an institutional response, and because they represent people who can exert political power or be tapped by political rivals.

This highlights one of the key differences between protest in the modern age and protest in a pre-Internet age. The March on Washington at the height of the Civil Rights movement was an impressive demonstration of capacity largely because of the underlying organization required to pull off a large and successful protest in that era. Behind the scenes were impressive logistical and governance capabilities. The same organizational structure that created the March could register people to vote, hold politicians accountable, demand media attention, and take significant and effective economic action. And the government knew it.

One thing that social media does is make organizing large protests far easier. It allows self-organizing, with viral scale, which can create numerically large movements far easier than the dedicated organizational work required prior to the Internet. This makes protest movements more dynamic and more responsive to events, but it also calls into question how much sustained capacity the movement has. The government non-reaction to the anti-war protests in the run-up to the Iraq War was an arguably correct estimation of the signaled capacity: a bet that the anti-war sentiment would not turn into sustained institutional pressure because large-scale street protests no longer indicated the same underlying strength.

Signaling capacity is not, of course, the only purpose of protests. Tufekci also spends a good deal of time discussing the sense of empowerment that protests can create. There is a real sense in which protests are for the protesters, entirely apart from whether the protest itself forces changes to government policies. One of the strongest tools of institutional powers is to make each individual dissenter feel isolated and unimportant, to feel powerless. Meeting, particularly in person, with hundreds of other people who share the same views can break that illusion of isolation and give people the enthusiasm and sense of power to do something about their beliefs. This, however, only becomes successful if the protesters then take further actions, and successful movements have to provide some mechanism to guide and unify that action and retain that momentum.

Tufekci also provides a fascinating analysis of the evolution of government responses to mass protests. The first reaction was media blackouts and repression, often by violence. Although we still see some of that, particularly against out groups, it's a risky and ham-handed strategy that dramatically backfired for both the US Civil Rights movement (due to an independent press that became willing to publish pictures of the violence) and the Arab Spring (due to social media providing easy bypass of government censorship attempts). Governments do learn, however, and have become increasingly adept at taking advantage of the structural flaws of social media. Censorship doesn't work; there are too many ways to get a message out. But social media has very little natural defense against information glut, and the people who benefit from the status quo have caught on.

Flooding social media forums with government propaganda or even just random conspiratorial nonsense is startlingly effective. The same lack of institutional gatekeepers that destroys the effectiveness of central censorship also means there are few trusted ways to determine what is true and what is fake on social media. Governments and other institutional powers don't need to convince people of their point of view. All they need to do is create enough chaos and disinformation that people give up on the concept of objective truth, until they become too demoralized to try to weed through the nonsense and find verifiable and actionable information. Existing power structures by definition benefit from apathy, disengagement, delay, and confusion, since they continue to rule by default.

Tufekci's approach throughout is to look at social media as a change and a new tool, which is neither inherently good or bad but which significantly changes the landscape of political discourse. In her presentation (and she largely convinced me in this book), the social media companies, despite controlling the algorithms and platform, don't particularly understand or control the effects of their creation except in some very narrow and profit-focused ways. The battlegrounds of "fake news," political censorship, abuse, and terrorist content are murky swamps less out of deliberate intent and more because companies have built a platform they have no idea how to manage. They've largely supplanted more traditional political spheres and locally-run social media with huge international platforms, are now faced with policing the use of those platforms, and are way out of their depth.

One specific example vividly illustrates this and will stick with me. Facebook is now one of the centers of political conversation in Turkey, as it is in many parts of the world. Turkey has a long history of sharp political divisions, occasional coups, and a long-standing, simmering conflict between the Turkish government and the Kurds, a political and ethnic minority in southeastern Turkey. The Turkish government classifies various Kurdish groups as terrorist organizations. Those groups unsurprisingly disagree. The arguments over this inside Turkey are vast and multifaceted.

Facebook has gotten deeply involved in this conflict by providing a platform for political arguments, and is now in the position of having to enforce their terms of service against alleged terrorist content (or even simple abuse), in a language that Facebook engineers largely don't speak and in a political context that they largely know nothing about. They of course hire Turkish speakers to try to understand that content to process abuse reports. But, as Tufekci (a Turkish native) argues, a Turkish speaker who has the money, education, and family background to be working in an EU Facebook office in a location like Dublin is not randomly chosen from the spectrum of Turkish politics. They are more likely to have connections to or at least sympathies for the Turkish government or business elites than to be related to a family of poor and politically ostracized Kurds. It's therefore inevitable that bias will be seen in Facebook's abuse report handling, even if Facebook management intends to stay neutral.

For Turkey, you can substitute just about any other country about which US engineers tend to know little. (Speaking as a US native, that's a very long list.) You may even be able to substitute the US for Turkey in some situations, given that social media companies tend to outsource the bulk of the work to countries that can provide low-paid workers willing to do the awful job of wading through the worst of humanity and attempting to apply confusing and vague terms of service. Much of Facebook's content moderation is done in the Philippines, by people who may or may not understand the cultural nuances of US political fights (and, regardless, are rarely given enough time to do more than cursorily glance at each report).

This is already a long review and there are still more important topics in this book I've not touched on, such as movement governance. (As both an advocate for and critic of consensus-based decision-making, Tufekci's example of governance in Occupy Wall Street had me both fascinated and cringing.) This is excellent stuff, full of personal anecdotes and entertaining story-telling backed by thoughtful and structured analysis. If you have felt mystified by the role that protests play in modern politics, I highly recommend reading this.

Rating: 9 out of 10

Norbert Preining: Gaming: Lara Croft – Rise of the Tomb Raider: 20 Year Celebration

14 May, 2018 - 06:29

I have to admit, this is the first time that I am playing something like this. Somehow, Lara Croft – Rise of the Tomb Raider was on sale, and some of the trailers were so well done that I was tempted into getting this game. And to my surprise, it actually works pretty well on Linux, too – yeah!

So I am a first-time player in this kind of league, and had a hard time getting used to controlling lovely Lara, but it turned out easier than I thought – although I guess a controller instead of mouse and keyboard would be better. One starts out somewhere in the mountains (I probably bought the game because there is so much mountaineering in it). The parts I have seen till now offer the full program: trying to evade breaking crevasses, jumping from ledge to ledge, getting washed away by avalanches.

But my favorite till now in the game is that Lara always carries an ice ax. Completely understandable on the mountain trips, where she climbs frozen icefalls, hanging cliffs, everything, like a super-pro. Wow, I would like to be such an ice climber! But even in the next mission in Syria, she still has her ice ax with her, very conveniently dangling from her side. How suuuper-cool!

After being washed away by an avalanche, we find Lara back on a trip in Syria, followed and nearly killed by the mysterious Trinity organization. During the Syria operation she needs to learn quite a lot of Greek; unfortunately the player doesn’t have to learn it with her – I could use some polishing of my Ancient Greek.

The game is a first-of-its-kind for me, with long cinematic parts between the playing actions. The switch between cinematic and play is so well done that I sometimes have the feeling I need to control Lara during these times, too. The graphics are also very stunning to my eyes, impressive.

I have never played a Lara game or seen a Lara movie, but my first association was with the Die Hard movie series – always these dirty clothes, scratches and a dirt-covered body. Lara is no exception here. Last but not least, the deaths of Lara (one – at least I – often dies in these games) are often quite funny and entertaining: spiked in some tombs, smashed to pieces by a falling stone column, etc. I really have to learn it the hard way.

I have only finished two expeditions, and have no idea how many more are to come. But it seems like I will continue. The good thing is that there are lots of restart points and permanent saves, so if one dies, or the computer dies, one doesn’t have to redo the whole bunch. Well done.

Renata D'Avila: Debian Women in Curitiba

14 May, 2018 - 03:49

This post is long overdue, but I have been so busy lately that I didn't have the time to sit down and write it in the past few weeks. What have I been busy with? Let's start with this event, that happened back in March:

Debian Women meeting in Curitiba (March 10th, 2018)

At MiniDebConf Curitiba last year, few women attended. And, as I mentioned on a previous post, there was not even a single women speaking at MiniDebConf last year.

I didn't want MiniDebConf Curitiba 2018 to be a repeat of last year. Why? In part, because I have been involved in other tech communities and I know it doesn't have to be like that (unless, of course, the community insists on being misogynistic...).

So I came up with the idea of having a meeting for women in Curitiba one month before MiniDebConf. The main goal was to create a good environment for women to talk about Debian, whether they had used GNU/Linux before or not, whether they were programmers or not.

Miriam and Kira, two other women from the state of Paraná interested in Debian, came along and helped out with planning. We used a collaborative pad to organize the tasks and activities and to create the text for the folder about Debian we had printed (based on Debian's documentation).

For the final version of the folder, it's important to acknowledge the help Luciana gave us, all the way from Minas Gerais. She collaborated with the translations, reviewed the texts and fixed the layout.

The final odg file, in Portuguese, can be downloaded here: folder_debian_30cm.odg

Very quickly, because we had so little time (we settled on a date and a place a little over one month before the meeting), I created a web page and put it online the only way I could at that moment, using Github Pages.

We used Mate Hackers' instance to register for the meeting, simply because we had to plan accordingly. This was the address for registration:

Through the Training Center, a Brazilian tech community, we got to Lucio, who works at Pipefy and offered us the space so we could hold the meeting. Thank you, Lucio, Training Center and Pipefy!

Because Miriam and I weren't in Curitiba, we had to focus the promotion of this meeting online. Not ideal when someone wants to be truly inclusive, but we worked with the resources we had. We reached out to TechLadies and invited them - as we did with many other groups.

This was our schedule:


09:00 - Welcome coffee

10:00 - What is Free Software? Copyright, licenses, sharing

10:30 - What is Debian?

12:00 - Lunch Break


14:30 - Internships with Debian - Outreachy and Google Summer of Code

15:00 - Install fest / helping with users issues

16:00 - Editing the Debian wiki to register this meeting

17:30 - Wrap up

Take outs from the meeting:

  • Because we knew more or less how many people would attend, we were able to buy the food accordingly right before the meeting - and ended up spending much less than if we had ordered some kind of catering.

  • Sadly, it would have been almost as expensive to print a dozen folders as it was to print a hundred of them. So we ended up printing 100 folders (which was expensive enough). The good part is that we would end up handing them out during MiniDebConf Curitiba.

  • We attempted a live stream of the meeting using Jitsi, but I don't think we were very successful, because we didn't have a microphone for the speakers.

  • Most of our audience ended up being women who, in fact, already knew and/or used Debian, but weren't actively involved with the community.

  • It was during this meeting that the need for a mailing list in Portuguese for women interested in Debian came up. Because, yes, in a country where English is taught so poorly in the schools, the language can still be a barrier. We also wanted to keep in touch and share information about the Brazilian community and what we are doing. We want next year's DebConf to have a lot of women, especially Brazilian women who are interested and/or who are users and/or contribute to Debian. The request for this mailing list would be put through by Helen during MiniDebConf, using the bug report system. If you can, please support us:

Pictures from the meeting:

Our breakfast table!

Miriam's talk: What is Free Software? Copyright, licenses, sharing

Miriam and Renata's talk: What is Debian?

Renata talking about internships with Debian

Thank you to all the women who participated!

And to our lovely staff. Thank you, Lucio, for getting us the space and thank you, Pipefy!

This has been partly documented at Debian Wiki (DebianWomen/History) because the very next day after this meeting, Debian Wiki completely blocked ProtonVPN from even accessing the Wiki. Awesome. If anyone is able to, feel free to copy/paste any of this text there.

Russ Allbery: Review: Deep Work

13 May, 2018 - 11:32

Review: Deep Work, by Cal Newport

Publisher: Grand Central
Copyright: January 2016
ISBN: 1-4555-8666-8
Format: Kindle
Pages: 287

If you follow popular psychology at all, you are probably aware of the ongoing debate over multitasking, social media, smartphones, and distraction. Usually, and unfortunately, this comes tainted by generational stereotyping: the kids these days who spend too much time with their phones and not enough time getting off their elders' lawns, thus explaining their inability to get steady, high-paying jobs in an economy designed to avoid steady, high-paying jobs. However, there is some real science under the endless anti-millennial think-pieces. Human brains are remarkably bad at multitasking, and it causes significant degradation of performance. Worse, that performance degradation goes unnoticed by the people affected, who continue to think they're performing tasks at their normal proficiency. This comes into harsh conflict with modern workplaces heavy on email and chat systems, and even harsher conflict with open plan offices.

Cal Newport is an associate professor of computer science at Georgetown University with a long-standing side profession of writing self-help books, initially focused on study habits. In this book, he argues that the ability to do deep work — focused, concentrated work that pushes the boundaries of what one understands and is capable of — is a valuable but diminishing skill. If one can develop both the habit and the capability for it (more on that in a moment), it can be extremely rewarding and a way of differentiating oneself from others in the same field.

Deep Work is divided into two halves. The first half is Newport's argument that deep work is something you should consider trying. The second, somewhat longer half is his techniques for getting into and sustaining the focus required.

In making his case for this approach, Newport puts a lot of effort into avoiding broader societal prescriptions, political stances, or even general recommendations and tries to keep his point narrow and focused: the ability to do deep, focused work is valuable and becoming rarer. If you develop that ability, you will have an edge. There's nothing exactly wrong with this, but much of it is obvious and he belabors it longer than he needed to. (That said, I'm probably more familiar with research on concentration and multitasking than some.)

That said, I did like his analysis of busyness as a proxy for productivity in many workplaces. The metrics and communication methods most commonly used in office jobs are great at measuring responsiveness and regular work on shallow tasks in the moment, and bad at measuring progress towards deeper, long-term goals, particularly ones requiring research or innovation. The latter is recognized and rewarded once it finally pays off, but often treated as a mysterious capability some people have and others don't. Meanwhile, the day-to-day working environment is set up to make it nearly impossible, in Newport's analysis, to develop and sustain the habits required to achieve those long-term goals. It's hard to read this passage and not be painfully aware of how much time one spends shallowly processing email, and how that's rewarded in the workplace even though it rarely leads to significant accomplishments.

The heart of this book is the second half, which is where Deep Work starts looking more like a traditional time management book. Newport lays out four large areas of focus to increase one's capacity for deep work: create space to work deeply on a regular basis, embrace boredom, quit social media, and cut shallow work out of your life. Inside those areas, he provides a rich array of techniques, some rather counter-intuitive, that have worked for him. This is in line with traditional time management guidance: focus on a few important things at a time, get better at saying no, put some effort into planning your day and reviewing that plan, and measure what you're trying to improve. But Newport has less of a focus on any specific system and more of a focus on what one should try to cut out of one's life as much as possible to create space for thinking deeply about problems.

Newport's guidance is constructed around the premise (which seems to have some grounding in psychological research) that focused, concentrated work is less a habit that one needs to maintain than a muscle that one needs to develop. His contention is that multitasking and interrupt-driven work isn't just a distraction that can be independently indulged or avoided each day, but instead degrades one's ability to concentrate over time. People who regularly jump between tasks lose the ability to not jump between tasks. If they want to shift to more focused work, they have to regain that ability with regular, mindful practice. So, when Newport says to embrace boredom, it's not just due to the value of quiet and unstructured moments. He argues that reaching for one's phone to scroll through social media in each moment of threatened boredom undermines one's ability to focus in other areas of life.

I'm not sure I'm as convinced as Newport is, but I've been watching my own behavior closely since I read this book and I think there's some truth here. I picked this book up because I've been feeling vaguely dissatisfied with my ability to apply concentrated attention to larger projects, and because I have a tendency to return to a comfort zone of unchallenging tasks that I already know how to do. Newport would connect that to a job with an open plan office, a very interrupt-driven communications culture, and my personal habits, outside of work hours, of multitasking between TV, on-line chat, and some project I'm working on.

I'm not particularly happy about that diagnosis. I don't like being bored, I greatly appreciate the ability to pull out my phone and occupy my mind while I'm waiting in line, and I have several very enjoyable hobbies that only take "half a brain," which I neither want to devote time to exclusively nor want to stop doing entirely. But it's hard to argue with the feeling that my brain skitters away from concentrating on one thing for extended periods of time, and it does feel like an underexercised muscle.

Some of Newport's approach seems clearly correct: block out time in your schedule for uninterrupted work, find places to work that minimize distractions, and batch things like email and work chat instead of letting yourself be constantly interrupted by them. I've already noticed how dramatically more productive I am when working from home than working in an open plan office, even though the office doesn't bother me in the moment. The problems with an open plan office are real, and the benefits seem largely imaginary. (Newport dismantles the myth of open office creativity and contrasts it with famously creative workplaces like MIT and Bell Labs that used a hub and spoke model, where people would encounter each other to exchange ideas and then retreat into quiet and isolated spaces to do actual work.) And Newport's critique of social media seemed on point to me: it's not that it offers no benefits, but it is carefully designed to attract time and attention entirely out of proportion to the benefits that it offers, because that's the business model of social media companies.

Like any time management book, some of his other advice is less convincing. He makes a strong enough argument for blocking out every hour of your day (and then revising the schedule repeatedly through the day as needed) that I want to try it again, but I've attempted that in the past and it didn't go well at all. I'm similarly dubious of my ability to think through a problem while walking, since most of the problems I work on rely on the ability to do research, take notes, or start writing code while I work through the problem. But Newport presents all of this as examples drawn from his personal habits, and cares less about presenting a system than about convincing the reader that it's both valuable and possible to carve out thinking space for oneself and improve one's capacity for sustained concentration.

This book is explicitly focused on people with office jobs who are rewarded for tackling somewhat open-ended problems and finding creative solutions. It may not resonate with people in other lines of work, particularly people whose jobs are the interrupts (customer service jobs, for example). But its target profile fits me and a lot of others in the tech industry. If you're in that group, I think you'll find this thought-provoking.

Recommended, particularly if you're feeling harried, have the itch to do something deeper or more interesting, and feel like you're being constantly pulled away by minutia.

You can get a sample of Newport's writing in his Study Habits blog, although be warned that some of the current moral panic about excessive smartphone and social media use creeps into his writing there. (He's currently working on a book on digital minimalism, so if you're allergic to people who have caught the minimalism bug, his blog will be more irritating than this book.) I appreciated him keeping the moral panic out of this book and instead focusing on more concrete and measurable benefits.

Rating: 8 out of 10

Russ Allbery: Review: Always Human

12 May, 2018 - 10:13

Review: Always Human, by walkingnorth

Publisher: LINE WEBTOON
Copyright: 2015-2017
Format: Online graphic novel
Pages: 336

Always Human is a graphic novel published on the LINE WEBTOON platform. It was originally published in weekly updates and is now complete in two "seasons." It is readable for free, starting with episode one. The pages metadata in the sidebar is therefore a bit of a lie: it's my guess on how many pages this would be if it were published as a traditional graphic novel (four times the number of episodes), provided as a rough guide of how long it might take to read (and because I have a bunch of annual reading metadata that I base on page count, even if I have to make up the concept of pages).

Always Human is set in a 24th century world in which body modifications for medical, cosmetic, and entertainment purposes are ubiquitous. What this story refers to as "mods" are nanobots that encompass everything from hair and skin color changes through protection from radiation to allow interplanetary travel to anti-cancer treatments. Most of them can be trivially applied with no discomfort, and they've largely taken over the fashion industry (and just about everything else). The people of this world spend as little time thinking about their underlying mechanics as we spend thinking about synthetic fabrics.

This is why Sunati is so struck by the young woman she sees at the train station. Sunati first noticed her four months ago, and she's not changed anything about herself since: not her hair, her eye color, her skin color, or any of the other things Sunati (and nearly everyone else) change regularly. To Sunati, it's a striking image of self-confidence and increases her desire to find an excuse to say hello. When the mystery woman sneezes one day, she sees her opportunity: offer her a hay-fever mod that she carries with her!

Alas for Sunati's initial approach, Austen isn't simply brave or quirky. She has Egan's Syndrome, an auto-immune problem that makes it impossible to use mods. Sunati wasn't expecting her kind offer to be met with frustrated tears. In typical Sunati form, she spends a bunch of time trying to understand what happened, overthinking it, hoping to see Austen again, and freezing when she does. Lucky for Sunati, typical Austen form is to approach her directly and apologize, leading to an explanatory conversation and a trial date.

Always Human is Sunati and Austen's story: their gentle and occasionally bumbling romance, Sunati's indecisiveness and tendency to talk herself out of communicating, and Austen's determined, relentless, and occasionally sharp-edged insistence on defining herself. It's not the sort of story that has wars, murder mysteries, or grand conspiracies; the external plot drivers are more mundane concerns like choice of majors, meeting your girlfriend's parents, and complicated job offers. It's also, delightfully, not the sort of story that creates dramatic tension by occasionally turning the characters into blithering idiots.

Sunati and Austen are by no means perfect. Both of them do hurt each other without intending to, both of them have blind spots, and both of them occasionally struggle with making emergencies out of things that don't need to be emergencies. But once those problems surface, they deal with them with love and care and some surprisingly good advice. My first reading was nervous. I wasn't sure I could trust walkingnorth not to do something stupid to the relationship for drama; that's so common in fiction. I can reassure you that this is a place where you can trust the author.

This is also a story about disability, and there I don't have the background to provide the same reassurance with much confidence. However, at least from my perspective, Always Human reliably treats Austen as a person first, weaves her disability into her choices and beliefs without making it the cause of everything in her life, and tackles head-on some of the complexities of social perception of disabilities and the bad tendency to turn people into Inspirational Disabled Role Model. It felt to me like it struck a good balance.

This is also a society that's far more open about human diversity in romantic relationships, although there I think it says more about where we currently are as a society than what the 24th century will "actually" be like. The lesbian relationship at the heart of the story goes essentially unremarked; we're now at a place where that can happen without making it a plot element, at least for authors and audiences below a certain age range. The (absolutely wonderful) asexual and non-binary characters in the supporting cast, and the one polyamorous relationship, are treated with thoughtful care, but still have to be remarked on by the characters.

I think this says less about walkingnorth as a writer than it does about managing the expectations of the reader. Those ideas are still unfamiliar enough that, unless the author is very skilled, they have to choose between dragging the viciousness of current politics into the story (which would be massively out of place here) or approaching the topic with an earnestness that feels a bit like an after-school special. walkingnorth does the latter and errs on the side of being a little too didactic, but does it with a gentle sense of openness that fits the quiet and supportive mood of the whole story. It feels like a necessary phase that we have to go through between no representation at all and the possibility of unremarked representation, which we're approaching for gay and lesbian relationships.

You can tell from this review that I mostly care about the story rather than the art (and am not much of an art reviewer), but this is a graphic novel, so I'll try to say a few things about it. The art seemed clearly anime- or manga-inspired to me: large eyes as the default, use of manga conventions for some facial expressions, and occasional nods towards a chibi style for particularly emotional scenes. The color palette has a lot of soft pastels that fit the emotionally gentle and careful mood. The focus is on human figures and shows a lot of subtlety of facial expressions, but you won't get as much in the way of awe-inspiring 24th century backgrounds. For the story that walkingnorth is telling, the art worked extremely well for me.

The author also composed music for each episode. I'm not reviewing it because, to be honest, I didn't enable it. Reading, even graphic novels, isn't that sort of multimedia experience for me. If, however, you like that sort of thing, I have been told by several other people that it's quite good and fits the mood of the story.

That brings up another caution: technology. A nice thing about books, and to a lesser extent traditionally-published graphic novels, is that whether you can read it doesn't depend on your technological choices. This is a web publishing platform, and while apparently it's a good one that offers some nice benefits for the author (and the author is paid for their work directly), it relies on a bunch of JavaScript magic (as one might expect from the soundtrack). I had to fiddle with uMatrix to get it to work and still occasionally saw confusing delays in the background loading some of the images that make up an episode. People with more persnickety ad and JavaScript blockers have reported having trouble getting it to display at all. And, of course, one has to hope that the company won't lose interest or go out of business, causing Always Human to disappear. I'd love to buy a graphic novel on regular paper at some point in the future, although given the importance of the soundtrack to the author (and possible contracts with the web publishing company), I don't know if that will be possible.

This is a quiet, slow, and reassuring story full of gentle and honest people who are trying to be nice to each other while navigating all the tiny conflicts that still arise in life. It wasn't something I was looking for or even knew I would enjoy, and turned out to be exactly what I wanted to read when I found it. I devoured it over the course of a couple of days, and am now eagerly awaiting the author's next work (Aerial Magic). It is unapologetically cute and adorable, but that covers a solid backbone of real relationship insight. Highly recommended; it's one of the best things I've read this year.

Many thanks to James Nicoll for writing a review of this and drawing it to my attention.

Rating: 9 out of 10

Norbert Preining: MySql DataTime/TimeStamp fields and Scala

12 May, 2018 - 08:21

In one of my work projects we use Play Framework on Scala to provide an API (how surprising ;-). For quite some time I was hunting after lots of milliseconds, because the API answers were terribly late compared to hammering directly at the MySql server. It turned out to be a problem of the interaction between the MySql DateTime format and Scala.

It sounded like a nice idea to save our traffic data in a MySql database with the timestamp saved in a DateTime or Timestamp column. The display in MySql Workbench looks nice and is easily readable. But somehow our API server's response was horribly slow, especially when there were several requests. Hours and hours of tuning the SQL code, trying to turn off sorting, adding extra indices, all to no avail.

The solution was rather trivial: the actual time was lost neither in the SQL part nor in the processing in our Scala code, but in the conversion from MySql DateTime/Timestamp objects to Scala/Java Timestamps. We are using ActiveRecord for Scala, a very nice and convenient library, which converts MySql DateTime/Timestamps to Scala/Java Timestamps. But this conversion, especially for a large number of entries, becomes rather slow. With months of traffic data and hundreds of thousands of timestamps to convert, the API collapsed to unacceptable response times.

Converting the whole pipeline (from data producer to database and API) to use a plain simple Long boosted the API performance considerably. Lesson learned: don't use MySql DateTime/Timestamp if you need lots of conversions.
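The same pattern, sketched in Python for illustration (the function names here are mine, not from the project): keep timestamps as plain integers all the way through, and convert to a human-readable form only at the edge where someone actually needs to read them.

```python
import datetime
import time

# Store and pass around plain epoch milliseconds (a BIGINT column in
# MySql, a Long in Scala); no per-row DateTime conversion is needed.
def now_millis() -> int:
    return int(time.time() * 1000)

# Convert only at the display edge, e.g. when rendering output for a
# human, not for every row fetched from the database.
def to_display(millis: int) -> str:
    dt = datetime.datetime.fromtimestamp(millis / 1000,
                                         tz=datetime.timezone.utc)
    return dt.isoformat()
```

The database loses its nicely readable column in Workbench, but the hot path never pays the conversion cost.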

Sune Vuorela: Modern C++ and Qt – part 2.

12 May, 2018 - 01:18

I recently did a short tongue-in-cheek blog post about Qt and modern C++. In the comments, people discovered that several compilers effectively can optimize std::make_unique<>().release() to a simple new statement, which was kind of a surprise to me.

I have recently written a new program from scratch (more about that later), and I tried to force myself to use standard library smart pointers much more than I normally have been doing.

I ended up trying to apply a set of rules for memory handling to my code base to try to see where it could end.

  • No naked deletes.
  • No new statements, unless the result is handed directly to a Qt function taking ownership of the pointer (to avoid silliness like the previous one).
  • Raw pointers in the code are observer pointers. We can do this in new code, but in older code that is hard to guarantee.

It resulted in code like

m_document = std::make_unique<QTextDocument>();
auto layout = std::make_unique<QHBoxLayout>();
auto textView = std::make_unique<QTextBrowser>();

By itself, it is quite OK to work with, and we get all ownership transfers documented. So maybe we should start coding this way.

There is also a hole in the ownership hand-over, but given that Qt methods don't throw, it shouldn't be much of a problem.

More about my new fancy / boring application at a later point.

I still haven’t fully embraced the C++17 thingies. My mental baseline is kind of the compiler in Debian Stable.

Sven Hoexter: Replacing hp-health on gen10 HPE DL360

11 May, 2018 - 22:40

A small followup regarding the replacement of hp-health and hpssacli. It turns out a few more things have to be replaced; lucky you, all who are already running on someone else's computer where you do not have to take care of the hardware.


According to the super nice and helpful Craig L. at HPE, they're planning an update of the MCP ssacli for Ubuntu 18.04. This one will also support the SmartArray firmware 1.34. If you need it now, you should be able to use the one released for RHEL and SLES. I did not test it.

replacing hp-health

The master plan is to query the iLO. Basically there are two ways: either locally via hponcfg, or remotely via a Perl script sample provided by HPE along with many helpful RIBCL XML file examples. Both approaches are not cool because you have to deal with a lot of XML, so opt for the 3rd way and use the awesome python-hpilo module (part of Debian/stretch), which abstracts all the RIBCL XML stuff nicely away from you.

If you'd like to have a taste of it: I had to reset a few iLO passwords to something sane (without quotes, double quotes and backticks), and did it like this:


function writeRIBCL {
  ssh $host "echo \"<RIBCL VERSION=\\\"2.0\\\"><LOGIN USER_LOGIN=\\\"adminname\\\" PASSWORD=\\\"password\\\"><USER_INFO MODE=\\\"write\\\"><MOD_USER USER_LOGIN=\\\"Administrator\\\"><PASSWORD value=\\\"$pw\\\"/></MOD_USER></USER_INFO></LOGIN></RIBCL>\" > /tmp/setpw.xml"
  ssh $host "sudo hponcfg -f /tmp/setpw.xml && rm /tmp/setpw.xml"
}

pwfile="ilo-pwlist-$(date +%s)"

for x in $(seq -w 004 006); do
  host="ilo-${x}"   # adjust to your iLO hostname scheme
  pw=$(pwgen -n 24 1)
  writeRIBCL
  echo "${host},${pw}" >> $pwfile
done

After I regained access to all iLO devices I used the hpilo_cli helper to add a monitoring user:

while read -r line; do
  host=$(echo $line|cut -d',' -f 1)
  pw=$(echo $line|cut -d',' -f 2)
  hpilo_cli -l Administrator -p $pw $host add_user user_login="monitoring" user_name="monitoring" password="secret" admin_priv=False remote_cons_priv=False reset_server_priv=False virtual_media_priv=False config_ilo_priv=False

done < ${1}

The helper script to actually query the iLO interfaces from our monitoring is, in comparison to those ad-hoc shell hacks, rather nice:

import hpilo, argparse

parser = argparse.ArgumentParser()
parser.add_argument("component", help="HW component to query", choices=['battery', 'bios_hardware', 'fans', 'memory', 'network', 'power_supplies', 'processor', 'storage', 'temperature'])
parser.add_argument("host", help="iLO Hostname or IP address to connect to")
args = parser.parse_args()

def askIloHealth(component, host, user, password):
    ilo = hpilo.Ilo(host, user, password)
    health = ilo.get_embedded_health()
    return health.get(component)

# query with the monitoring user created above
print(askIloHealth(args.component, args.host, "monitoring", "secret"))

You can also take a look at a more detailed state if you pprint the complete stuff returned by "get_embedded_health". If you still like to go down the RIBCL road, this is the XML:

<RIBCL VERSION="2.0">
   <LOGIN USER_LOGIN="adminname" PASSWORD="password">
      <SERVER_INFO MODE="read">
         <GET_EMBEDDED_HEALTH/>
      </SERVER_INFO>
   </LOGIN>
</RIBCL>

This whole approach of using the iLO should work since iLO 3. I tested versions 4 and 5.

Daniel Kahn Gillmor: E-mail Cryptography

11 May, 2018 - 12:00

I've been working on cryptographic e-mail software for many years now, and i want to set down some of my observations of what i think some of the challenges are. I'm involved in Autocrypt, which is making great strides in sensible key management (see the last section below, which is short not because i think it's easy, but because i think Autocrypt has covered this area quite well), but there are additional nuances to the mechanics and user experience of e-mail encryption that i need to get off my chest.

Feedback welcome!

Table of contents: Cryptography and E-mail Messages

Cryptographic protection (i.e., digital signatures, encryption) of e-mail messages has a complex history. There are several different ways that various parts of an e-mail message can be protected (or not), and those mechanisms can be combined in a huge number of ways.

In contrast to the technical complexity, users of e-mail tend to expect a fairly straightforward experience. They also have little to no expectation of explicit cryptographic protections for their messages, whether for authenticity, for confidentiality, or for integrity.

If we want to change this -- if we want users to be able to rely on cryptographic protections for some e-mail messages in their existing e-mail accounts -- we need to be able to explain those protections without getting in the user's way.

Why expose cryptographic protections to the user at all?

For a new messaging service, the service itself can simply enumerate the set of properties that all messages exchanged through the service must have, design the system to bundle those properties with message deliverability, and then users don't need to see any of the details for any given message. The presence of the message in that messaging service is enough to communicate its security properties to the extent that the users care about those properties.

However, e-mail is a widely deployed, heterogenous, legacy system, and even the most sophisticated users will always interact with some messages that lack cryptographic protections.

So if we think those protections are meaningful, and we want users to be able to respond to a protected message at all differently from how they respond to an unprotected message (or if they want to know whether the message they're sending will be protected, so they can decide how much to reveal in it), we're faced with the challenge of explaining those protections to users at some level.


The best level at which to display cryptographic protections for a typical e-mail user is on a per-message basis.

Wider than per-message (e.g., describing protections on a per-correspondent or a per-thread basis) is likely to stumble on mixed statuses, particularly when other users switch e-mail clients that don't provide the same cryptographic protections, or when people are added to or removed from a thread.

Narrower than per-message (e.g., describing protections on a per-MIME-part basis, or even within a MIME part) is too confusing: most users do not understand the structure of an e-mail message at a technical level, and are unlikely to be able to (or want to) spend any time learning about it. And a message with some cryptographic protection and other tamperable user-facing parts is a tempting vector for attack.

So at most, an e-mail should have one cryptographic state that covers the entire message.

At most, the user probably wants to know:

  • Is the content of this message known only to me and the sender (and the other people in Cc)? (Confidentiality)

  • Did this message come from the person I think it came from, as they wrote it? (Integrity and Authenticity)

Any more detail than this is potentially confusing or distracting.


Is it possible to combine the two aspects described above into something even simpler? That would be nice, because it would allow us to categorize a message as either "protected" or "not protected". But there are four possible combinations:

  • unsigned cleartext messages: these are clearly "not protected"

  • signed encrypted messages: these are clearly "protected" (though see further sections below for more troubling caveats)

  • signed cleartext messages: these are useful in cases where confidentiality is irrelevant -- posts to a publicly-archived mailing list, for example, or announcement e-mails about a new version of some piece of software. It's hard to see how we can get away with ignoring this category.

  • unsigned encrypted messages: There are people who send encrypted messages who don't want to sign those messages, for a number of reasons (e.g., concern over the reuse/misuse of their signing key, and wanting to be able to send anonymous messages). Whether you think those reasons are valid or not, there are also signed messages whose signatures cannot be validated. For example:

    • the signature was made improperly,
    • the signature was made with an unknown key,
    • the signature was made using an algorithm the message recipient doesn't know how to interpret
    • the signature was made with a key that the recipient believes is broken/bad

    We have to handle receipt of signed+encrypted messages with any of these signature failures, so we should probably deal with unsigned encrypted messages in the same way.

My conclusion is that we need to be able to represent these states separately to the user (or at least to the MUA, so it can plan sensible actions), even though i would prefer a simpler representation.

Note that some other message encryption schemes (such as those based on shared symmetric keying material, where message signatures are not used for authenticity) may not actually need these distinctions, and can therefore get away with the simpler "protected/not protected" message state. I am unaware of any such scheme being used for e-mail today.

Partial protections

Sadly, the current encrypted e-mail mechanisms are likely to make even these proposed two indicators blurry if we try to represent them in detail. To avoid adding to user confusion, we need to draw some bright lines.

  • For integrity and authenticity, either the entire message is signed and integrity-checked, or it isn't. We must not report messages as being signed when only a part of the message is signed, or when the signature comes from someone not in the From: field. We should probably also not present "broken signature" status any differently than we present unsigned mail. See discussion on the enigmail mailing list about some of these tradeoffs.

  • For confidentiality, the user likely cares that the entire message was confidential. But there are some circumstances (e.g., when replying to an e-mail, and deciding whether to encrypt or not) when they likely care if any part of the message was confidential (e.g. if an encrypted part is placed next to a cleartext part).

It's interesting (and frustrating!) to note that these are scoped slightly differently -- that we might care about partial confidentiality but not about partial integrity and authenticity.

Note that while we might care about partial confidentiality, actually representing which parts of a message were confidential represents a significant UI challenge in most MUAs.

To the extent that a MUA decides it wants to display details of a partially-protected message, i recommend that the MUA strip/remove all non-protected parts of the message, and just show the user the (remaining) protected parts. In the event that a message has partial protections like this, the MUA may need to offer the user a choice of seeing the entire partially-protected message, or the stripped-down message that has complete protections.

To the extent that we expect to see partially-protected messages in the real world, further UI/UX exploration would be welcome. It would be great to imagine a world where those messages simply don't exist though :)

Cryptographic Mechanism

There are three major categories of cryptographic protection for e-mail in use today: Inline PGP, PGP/MIME, and S/MIME.

Inline PGP

I've argued elsewhere (and it remains true) that Inline PGP signatures are terrible. Inline PGP encryption is also terrible, but in different ways:

  • it doesn't protect the structure of the message (e.g., the number and size of attachments is visible)

  • it has no way of protecting confidential message headers (see the Protected Headers section below)

  • it is very difficult to safely represent to the user what has been encrypted and what has not, particularly if the message body extends beyond the encrypted block.

No MUA should ever emit messages using inline PGP, either for signatures or for encryption. And no MUA should ever display an inline-PGP-signed block as though it was signed. Don't even bother to validate such a signature.

However, some e-mails will arrive using inline PGP encryption, and responsible MUAs probably need to figure out what to show to the user in that case, because the user wants to know what's there. :/


PGP/MIME and S/MIME are roughly equivalent to one another, with the largest difference being their certificate format. PGP/MIME messages are signed/encrypted with certificates that follow the OpenPGP specification, while S/MIME messages rely on certificates that follow the X.509 specification.

The cryptographic protections of both PGP/MIME and S/MIME work at the MIME layer, providing particular forms of cryptographic protection around a subtree of other MIME parts.

Both standards have very similar existing flaws that must be remedied or worked around in order to have sensible user experience for encrypted mail.

This document has no preference of one message format over the other, but acknowledges that it's likely that both will continue to exist for quite some time. To the extent possible, a sensible MUA that wants to provide the largest coverage will be able to support both message formats and both certificate formats, hopefully with the same fixes to the underlying problems.

Cryptographic Envelope

Given that the plausible standards (PGP/MIME and S/MIME) both work at the MIME layer, it's worth thinking about the MIME structure of a cryptographically-protected e-mail messages. I introduce here two terms related to an e-mail message: the "Cryptographic Envelope" and the "Cryptographic Payload".

Consider the MIME structure of a simple cleartext PGP/MIME signed message:

0A └┬╴multipart/signed
0B  ├─╴text/plain
0C  └─╴application/pgp-signature

Consider also the simplest PGP/MIME encrypted message:

1A └┬╴multipart/encrypted
1B  ├─╴application/pgp-encrypted
1C  └─╴application/octet-stream
1D     ╤ <<decryption>>
1E     └─╴text/plain

Or, an S/MIME encrypted message:

2A └─╴application/pkcs7-mime; smime-type=enveloped-data
2B     ╤ <<decryption>>
2C     └─╴text/plain

Note that the PGP/MIME decryption step (denoted "1D" above) may also include a cryptographic signature that can be verified, as a part of that decryption. This is not the case with S/MIME, where the signing layer is always separated from the encryption layer.

Also note that any of these layers of protection may be nested, like so:

3A └┬╴multipart/encrypted
3B  ├─╴application/pgp-encrypted
3C  └─╴application/octet-stream
3D     ╤ <<decryption>>
3E     └┬╴multipart/signed
3F      ├─╴text/plain
3G      └─╴application/pgp-signature

For an e-mail message that has some set of these layers, we define the "Cryptographic Envelope" as the set of layers of cryptographic protection that start at the root of the message and extend until the first non-cryptographic MIME part is encountered.

Cryptographic Payload

We can call the first non-cryptographic MIME part we encounter (via depth-first search) the "Cryptographic Payload". In the examples above, the Cryptographic Payload parts are labeled 0B, 1E, 2C, and 3F. Note that the Cryptographic Payload itself could be a multipart MIME object, like 4E below:

4A └┬╴multipart/encrypted
4B  ├─╴application/pgp-encrypted
4C  └─╴application/octet-stream
4D     ╤ <<decryption>>
4E     └┬╴multipart/alternative
4F      ├─╴text/plain
4G      └─╴text/html

In this case, the full subtree rooted at 4E is the "Cryptographic Payload".
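The depth-first descent described above can be sketched with Python's standard email module (the function name and the set of "cryptographic" content types are my own framing of the definitions in this section; actual decryption is elided):

```python
from email.message import Message
from typing import Optional

# MIME types that form layers of the Cryptographic Envelope.
CRYPTO_TYPES = {"multipart/signed", "multipart/encrypted",
                "application/pkcs7-mime"}

def cryptographic_payload(msg: Message) -> Optional[Message]:
    """Descend from the root through the cryptographic layers and
    return the first non-cryptographic MIME part: the Cryptographic
    Payload.  Returns None for a null Cryptographic Envelope (the
    root part carries no cryptographic protection)."""
    part = msg
    if part.get_content_type() not in CRYPTO_TYPES:
        return None  # null Cryptographic Envelope
    while part.get_content_type() in CRYPTO_TYPES:
        if part.get_content_type() != "multipart/signed":
            # multipart/encrypted and pkcs7-mime layers need to be
            # decrypted before the descent can continue; that step
            # is elided in this sketch
            return None
        # for multipart/signed, the protected subtree is the first child
        part = part.get_payload(0)
    return part
```

For example 0 above this returns the text/plain part 0B; for the encrypted examples a real MUA would decrypt the layer and continue the same descent into the cleartext.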

The cryptographic properties of the message should be derived from the layers in the Cryptographic Envelope, and nothing else, in particular:

  • the cryptographic signature associated with the message, and
  • whether the message is "fully" encrypted or not.

Note that if some subpart of the message is protected, but the cryptographic protections don't start at the root of the MIME structure, there is no message-wide cryptographic envelope, and therefore there either is no Cryptographic Payload, or (equivalently) the whole message (5A here) is the Cryptographic Payload, but with a null Cryptographic Envelope:

5A └┬╴multipart/mixed
5B  ├┬╴multipart/signed
5C  │├─╴text/plain
5D  │└─╴application/pgp-signature
5E  └─╴text/plain

Note also that if there are any nested encrypted parts, they do not count toward the Cryptographic Envelope, but may mean that the message is "partially encrypted", albeit with a null Cryptographic Envelope:

6A └┬╴multipart/mixed
6B  ├┬╴multipart/encrypted
6C  │├─╴application/pgp-encrypted
6D  │└─╴application/octet-stream
6E  │   ╤ <<decryption>>
6F  │   └─╴text/plain
6G  └─╴text/plain
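Detecting this "partially encrypted with a null Cryptographic Envelope" case is a simple tree walk; a sketch in Python (standard email module; the helper name is mine, and the smime-type distinction between enveloped and signed pkcs7 data is ignored here):

```python
from email.message import Message

# MIME types that indicate an encryption layer.
ENCRYPTION_TYPES = {"multipart/encrypted", "application/pkcs7-mime"}

def partially_encrypted(msg: Message) -> bool:
    """True when the root part carries no encryption layer but some
    nested part does, as in example 6 above."""
    if msg.get_content_type() in ENCRYPTION_TYPES:
        return False  # encrypted at the root: not the partial case
    return any(part.get_content_type() in ENCRYPTION_TYPES
               for part in msg.walk())
```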
Layering within the Envelope

The order and number of the layers in the Cryptographic Envelope might make a difference in how the message's cryptographic properties should be considered.

signed+encrypted vs encrypted+signed

One difference is whether the signature is made over the encrypted data, or whether the encryption is done over the signature. Encryption around a signature means that the signature was hidden from an adversary. And a signature around the encryption indicates that the sender may not know the actual contents of what was signed.

The common expectation is that the signature will be inside the encryption. This means that the signer likely had access to the cleartext, and it is likely that the existence of the signature is hidden from an adversary, both of which are sensible properties to want.

Multiple layers of signatures or encryption

Some specifications define triple-layering: signatures around encryption around signatures. It's not clear that this is in wide use, or how any particular MUA should present such a message to the user.

In the event that there are multiple layers of protection of a given kind in the Cryptographic Envelope, the message should be marked based on the properties of the inner-most layer of encryption, and the inner-most layer of signing. The main reason for this is simplicity -- it is unclear how to indicate arbitrary (and potentially-interleaved) layers of signatures and encryption.

(FIXME: what should be done if the inner-most layer of signing can't be validated for some reason, but one of the outer layers of signing does validate? ugh MIME is too complex…)

Signed messages should indicate the intended recipient

Ideally, all signed messages would indicate their intended recipient as a way of defending against some forms of replay attack. For example, Alice sends a signed message to Bob that says "please perform task X"; Bob reformats and forwards the message to Charlie as though it was directly from Alice. Charlie might now believe that Alice is asking him to do task X, instead of Bob.

Of course, this concern also includes encrypted messages that are also signed. However, there is no clear standard for how to include this information in either an encrypted message or a signed message.

An e-mail-specific mechanism is to ensure that the To: and Cc: headers are signed appropriately (see the "Protected Headers" section below).

See also Vincent Breitmoser's proposal of Intended Recipient Fingerprint for OpenPGP as a possible OpenPGP-specific implementation.

However: what should the MUA do if a message is encrypted but no intended recipients are listed? Or what if a signature clearly indicates the intended recipients, but does not include the current reader? Should the MUA render the message differently somehow?

Protected Headers

Sadly, e-mail cryptographic protections have traditionally only covered the body of the e-mail, and not the headers. Most users do not (and should not have to) understand the difference. There are two not-quite-standards for protecting the headers:

  • message wrapping, which puts an entire e-mail message (message/rfc822 MIME part) "inside" the cryptographic protections. This is also discussed in RFC 5751 §3.1. I don't know of any MUAs that implement this.

  • memory hole, which puts headers on the top-level MIME part directly. This is implemented in Enigmail and K-9 mail.

These two different mechanisms are roughly equivalent, with slight differences in how they behave for clients who can handle cryptographic mail but have not implemented them. If a MUA is capable of interpreting one form successfully, it probably is also capable of interpreting the other.

Note that in particular, the cryptographic headers for a given message ought to be derived directly from the headers present (in one of the above two ways) in the root element of the Cryptographic Payload MIME subtree itself. If headers are stored anywhere else (e.g. in one of the leaf nodes of a complex Payload), they should not propagate to the outside of the message.

If the headers the user sees are not protected, that lack of protection may need to be clearly explained and visible to the user. This is unfortunate because it is potentially extremely complex for the UI.

The types of cryptographic protections can differ per header. For example, it's relatively straightforward to pack all of the headers inside the Cryptographic Payload. For a signed message, this would mean that all headers are signed. This is the recommended approach when generating an encrypted message. In this case, the "outside" headers simply match the protected headers. And in the case that the outside headers differ, they can simply be replaced with their protected versions when displayed to the user. This defeats the replay attack described above.

But for an encrypted message, some of those protected headers will be stripped from the outside of the message, and others will be placed in the outer header in cleartext for the sake of deliverability. In particular, From: and To: and Date: are placed in the clear on the outside of the message.

So, consider a MUA that receives an encrypted, signed message, with all headers present in the Cryptographic Payload (so all headers are signed), but From: and To: and Date: in the clear on the outside. Assume that the external Subject: reads simply "Encrypted Message", but the internal (protected) Subject: is actually "Thursday's Meeting".

When displaying this message, how should the MUA distinguish between the Subject: and the From: and To: and Date: headers? All headers are signed, but only Subject: has been hidden. Should the MUA assume that the user understands that e-mail metadata like this leaks to the MTA? This is unfortunately true today, but not something we want in the long term.

Message-ID and threading headers

Messages that are part of an e-mail thread should ensure that Message-Id: and References: and In-Reply-To: are signed, because those markers provide contextual considerations for the signed content. (e.g., a signed message saying "I like this plan!" means something different depending on which plan is under discussion).

That said, given the state of the e-mail system, it's not clear what a MUA should do if it receives a cryptographically-signed e-mail message where these threading headers are not signed. That is the default today, and we do not want to incur warning fatigue for the user. Furthermore, unlike Date: and Subject: and From: and To: and Cc:, the threading headers are not usually shown directly to the user, but instead affect the location and display of messages.

Perhaps there is room here for some indicator at the thread level, that all messages in a given thread are contextually well-bound? Ugh, more UI complexity.

Protecting Headers during e-mail generation

When generating a cryptographically-protected e-mail (either signed or encrypted or both), the sending MUA should copy all of the headers it knows about into the Cryptographic Payload using one of the two techniques referenced above. For signed-only messages, that is all that needs doing.

The challenging question is for encrypted messages: which headers on the outside of the message (outside the Cryptographic Envelope) can be stripped (removed completely) or stubbed (replaced with a generic or randomized value)?

Subject: should obviously be stubbed -- for most users, the subject is directly associated with the body of the message (it is not thought of as metadata), and the Subject is not needed for deliverability. Since some MTAs might treat a message without a Subject: poorly, and arbitrary Subject lines are a nuisance, it is recommended to use the exact string below for all external Subjects:

Subject: Encrypted Message

However, stripping or stubbing other headers is more complex.

The Date: header can likely be stripped from the outside of an encrypted message, or can have its temporal resolution made much more coarse. However, this doesn't protect much information from the MTAs that touch the message, since they are likely to see the message while it is in transit. It may protect the message from some metadata analysis as it sits on disk, though.

The To: and Cc: headers could be stripped entirely in some cases, though that may make the e-mail more prone to being flagged as spam. However, some e-mail messages sent to Bcc groups are still deliverable, with a header of

To: undisclosed-recipients:;

Note that the Cryptographic Envelope itself may leak metadata about the recipient (or recipients), so stripping this information from the external header may not be useful unless the Cryptographic Envelope is also stripped of metadata appropriately.

The From: header could also be stripped or stubbed. It's not clear whether such a message would be deliverable, particularly given DKIM and DMARC rules for incoming domains. Note that the MTA will still see the SMTP MAIL FROM: verb before the message body is sent, and will use the address there to route bounces or DSNs. However, once the message is delivered, a stripped From: header is an improvement in the metadata available on-disk. Perhaps this is something that a friendly/cooperative MTA could do for the user?

Even worse are the Message-Id: header and the associated In-Reply-To: and References: headers. Some MUAs (like notmuch) rely heavily on the Message-Id:. A message with a stubbed-out Message-Id: would effectively change its Message-Id: when it is decrypted. This may not be a straightforward or safe process for MUAs that are Message-ID-centric. That said, a randomized external Message-Id: header could help to avoid leaking the fact that the same message was sent to multiple people, so long as the message encryption to each person was also made distinct.
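As an aside, generating a distinct, unguessable Message-Id: per outgoing copy is easy with Python's standard library; whether and where a MUA should use such a stub is the open question above:

```python
from email.utils import make_msgid

# Each call produces a fresh, randomized Message-ID for the given domain,
# so every recipient's copy can carry a different external value.
first = make_msgid(domain="example.org")
second = make_msgid(domain="example.org")
print(first != second)  # True
```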

Stripped In-Reply-To: and References: headers are also a clear metadata win -- the MTA can no longer tell which messages are associated with each other. However, this means that an incoming message cannot be associated with a relevant thread without decrypting it, something that some MUAs may not be in a position to do.

Recommendation for encrypted message generation in 2018: copy all headers during message generation; stub out only the Subject for now.
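That recommendation can be sketched with the stdlib email package. This is an illustrative skeleton only: the actual encryption of the inner part (e.g. via PGP/MIME or S/MIME) is elided, and the function name is made up for the sketch:

```python
from email.message import EmailMessage

STUB_SUBJECT = "Encrypted Message"

def split_headers(original):
    """Copy every header into the protected (to-be-encrypted) part,
    keep copies on the outside too, but stub the outer Subject."""
    protected = EmailMessage()
    outer = EmailMessage()
    for name, value in original.items():
        protected[name] = value
        outer[name] = value
    del outer["Subject"]
    outer["Subject"] = STUB_SUBJECT
    return outer, protected

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "Thursday's Meeting"

outer, protected = split_headers(msg)
print(outer["Subject"])      # Encrypted Message
print(protected["Subject"])  # Thursday's Meeting
```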

Bold MUAs may choose to experiment with stripping or stubbing other fields beyond Subject:, possibly in response to some sort of signal from the recipient that they believe that stripping or stubbing some headers is acceptable. Where should such a signal live? Perhaps a notation in the recipient's certificate would be useful.

Key management

Key management bedevils every cryptographic scheme, e-mail or otherwise. The simplest solution for users is to automate key management as much as possible, making reasonable decisions for them. The Autocrypt project outlines a sensible approach here, so I'll keep this section short and hope that it's covered by Autocrypt. While fully-automated key management is likely to be susceptible either to MITM attacks or to reliance on trusted third parties (depending on the design), as a community we need to experiment with ways to provide a straightforward (possibly gamified?) user experience that enables and encourages people to do key verification in a fun and simple way. This should probably be done without ever mentioning the word "key", if possible. Serious UI/UX work is needed. I'm hoping future versions of Autocrypt will cover that territory.

But however key management is done, the result for the e-mail user experience is that the MUA will have some sense of the "validity" of the key being used for any particular correspondent. If it is expressed at all, it should be done as simply as possible by default. In particular, MUAs should avoid confusing the user with distinct (nearly orthogonal) notions of "trust" and "validity" while reading messages, and should not necessarily associate the validity of a correspondent's key with the validity of a message cryptographically associated with that key. Identity is not the same thing as message integrity, and trustworthiness is not the same thing as identity either.

Key changes over time

Key management is hard enough in the moment. With a store-and-forward system like e-mail, evaluating the validity of a signed message a year after it was received is tough. Your concept of the correspondent's correct key may have changed, for example. I think our understanding of what to do in this context is not currently clear.

Shirish Agarwal: Reviewing Agent 6

11 May, 2018 - 03:39

The city I come from, Pune, has been experiencing somewhat of a heat-wave, so I have been cutting back on a lot of work and getting a lot of back-dated reading done. One of the first books I read was Tom Rob Smith's Agent 6. Fortunately, I read only the third book and not the first two, which from the synopsis seem to be more gruesome than the one I read, so I guess there is something to be thankful for.

While I was reading the book, I had thought that the MGB was a fictitious organization invented by the author. But a quick look at Wikipedia told me that it is the real organization the KGB was later based upon.

I found the book to be both an easy read and a layered one. I was lucky to get a large-print version, so I was able to share the experience with my mother as well. The book is somewhat hefty, topping out at around 600 pages, although it's listed as 480 pages on Amazon.

As I had shared previously, I had read Russka and been disappointed to see how the Russian public were let down time and again in their hopes for democracy. I do understand that the book (Russka) was written by a western author and could have tapped into some unconscious biases, but it seemed accurate as far as I could verify from public resources. I may return to that story at a future date, but this time it is Agent 6's turn.

I found the book pretty immersive, and at the same time it left me thinking about the many threads the author touches on and then moves past. I was often left wondering, and many times just had to sleep on it and think deep thoughts, as there was quite a bit to chew on.

I am not going to spoil any surprises except to say there are quite a few twists and the ending is also what I didn’t expect.

In the end, if you appreciate politics, history, and a bit of adventure, and have a bit of patience, the book is bound to reward you. It is not meant to be a page-turner, but if you are one who enjoys savoring your drink, you are going to enjoy it thoroughly.

Jonathan McDowell: Home Automation: Getting started with MQTT

11 May, 2018 - 02:53

I’ve been thinking about trying to sort out some home automation bits. I’ve moved from having a 7 day heating timer to a 24 hour timer and I’d forgotten how annoying that is at weekends. I’d like to monitor temperatures in various rooms and use that, along with presence detection, to be a bit more intelligent about turning the heat on. Equally I wouldn’t mind tying my Alexa in to do some voice control of lighting (eventually maybe even using DeepSpeech to keep everything local).

Before all of that I need to get the basics in place. This is the first in a series of posts about putting together the right building blocks to allow some useful level of home automation / central control. The first step is working out how to glue everything together. A few years back someone told me MQTT was the way forward for IoT applications, being more lightweight than a RESTful interface and thus better suited to small devices. At the time I wasn’t convinced, but it seems they were right and MQTT is one of the more popular ways of gluing things together.

I found the HiveMQ series on MQTT Essentials to be a pretty good intro; my main takeaway was that MQTT allows for a single message broker to enable clients to publish data and multiple subscribers to consume that data. TLS is supported for secure data transfer and there’s a whole bunch of different brokers and client libraries available. The use of a broker is potentially helpful in dealing with firewalling; clients and subscribers only need to talk to the broker, rather than requiring any direct connection.

With all that in mind I decided to set up a broker to play with the basics. I made the decision that it should run on my OpenWRT router - all the devices I want to hook up can easily see that device, and if it's down then none of them are going to be able to get to a broker hosted anywhere else anyway. I'd seen plenty of info about Mosquitto and it's already in the OpenWRT package repository. So I sorted out a Let's Encrypt cert, installed Mosquitto and created a couple of test users:

opkg install mosquitto-ssl
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user1 foo
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user2 bar
chown mosquitto /etc/mosquitto/mosquitto.users
chmod 600 /etc/mosquitto/mosquitto.users

I then edited /etc/mosquitto/mosquitto.conf and made sure the following are set. In particular you need cafile set in order to enable TLS:

port 8883
cafile /etc/ssl/lets-encrypt-x3-cross-signed.pem
certfile /etc/ssl/mqtt.crt
keyfile /etc/ssl/mqtt.key

log_dest syslog

allow_anonymous false

password_file /etc/mosquitto/mosquitto.users
acl_file /etc/mosquitto/mosquitto.acl

Finally I created /etc/mosquitto/mosquitto.acl with the following:

user user1
topic readwrite #

user user2
topic read ro/#
topic readwrite test/#

That gives me user1 who has full access to everything, and user2 with readonly access to the ro/ tree and read/write access to the test/ tree.
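For reference, those ACL entries use standard MQTT topic-filter matching. A simplified matcher (covering the '#' multi-level and '+' single-level wildcards, and ignoring edge cases like '$SYS' topics) shows why test/message falls under test/# but test2/message does not:

```python
def topic_matches(topic_filter, topic):
    """Simplified MQTT topic-filter match: '#' matches any remainder,
    '+' matches exactly one level, anything else must match literally."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True            # multi-level wildcard: rest matches
        if i >= len(t_parts):
            return False           # topic is shorter than the filter
        if part != "+" and part != t_parts[i]:
            return False           # literal level mismatch
    return len(f_parts) == len(t_parts)

print(topic_matches("#", "ro/temperature"))        # True
print(topic_matches("test/#", "test/message"))     # True
print(topic_matches("test/#", "test2/message"))    # False
```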

To test everything was working I installed mosquitto-clients on a Debian test box and in one window ran:

mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo

and in another:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test/message' -m 'Hello World!' -u user2 -P bar

(without the --capath it’ll try a plain TCP connection rather than TLS, and not produce a useful error message) which resulted in the mosquitto_sub instance outputting:

test/message Hello World!


Whereas running:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test2/message' -m 'Hello World!' -u user2 -P bar

resulted in no output due to the ACL preventing it. All good and ready to actually make use of - of which more later.

Daniel Stender: AFL in Ubuntu 18.04 is broken

10 May, 2018 - 23:21

As has been reported on the discussion list for American Fuzzy Lop lately, the fuzzer is unfortunately broken in Ubuntu 18.04 “Bionic Beaver”. Ubuntu Bionic ships AFL 2.52b, which is the current version at the moment of writing this blog post. The particular problem comes from the accompanying gcc-7 package, which is pulled in by afl via the build-essential package. It was noticed in the development branch for the next Debian release by continuous integration (#895618) that introducing a triplet-prefixed as in gcc-7 7.3.0-16 (like was done for gcc-8, see #895251) affected the -B option in such a way that afl-gcc (the gcc wrapper) can no longer use the shipped assembler (/usr/lib/afl-as) to install the instrumentation into the target binary (#896057, thanks to Jakub Wilk for spotting the problem). As a result, instrumented fuzzing and other things in afl don't work:
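A rough illustration of the lookup change (a deliberate simplification, not gcc's actual search algorithm; the function and paths are made up for the sketch): gcc tries the -B directories first and falls back to the system tool, so asking only for a triplet-prefixed name means afl's wrapper, installed under the plain name, is never found.

```python
import os
import tempfile

def find_tool(b_dirs, names):
    """Toy model of a -B lookup: try each candidate name in each
    -B directory; fall back to the system tool otherwise."""
    for d in b_dirs:
        for name in names:
            candidate = os.path.join(d, name)
            if os.path.exists(candidate):
                return candidate
    return "/usr/bin/" + names[0]

with tempfile.TemporaryDirectory() as afl_dir:
    # afl ships its assembler wrapper under the plain name "as"
    open(os.path.join(afl_dir, "as"), "w").close()
    # older gcc-7 looked for "as": finds afl's wrapper
    print(find_tool([afl_dir], ["as"]))
    # patched gcc-7 asks for the triplet-prefixed name: afl's wrapper
    # is skipped, the system assembler runs, no instrumentation happens
    print(find_tool([afl_dir], ["x86_64-linux-gnu-as"]))
```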

$ afl-gcc --version
 afl-cc 2.52b by <>
 gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
$ afl-gcc -o test-instr test-instr.c 
 afl-cc 2.52b by <>
$ afl-fuzz -i in -o out -- ./test-instr
 afl-fuzz 2.52b by <>
 [+] You have 2 CPU cores and 1 runnable tasks (utilization: 50%).
 [+] Try parallel jobs - see /usr/share/doc/afl-doc/docs/parallel_fuzzing.txt.
 [*] Creating hard links for all input files...
 [*] Validating target binary...
 [-] Looks like the target binary is not instrumented! The fuzzer depends on
     compile-time instrumentation to isolate interesting test cases while
     mutating the input data. For more information, and for tips on how to
     instrument binaries, please see /usr/share/doc/afl-doc/docs/README.
     When source code is not available, you may be able to leverage QEMU
     mode support. Consult the README for tips on how to enable this.
     (It is also possible to use afl-fuzz as a traditional, "dumb" fuzzer.
     For that, you can use the -n option - but expect much worse results.)
 [-] PROGRAM ABORT : No instrumentation detected
          Location : check_binary(), afl-fuzz.c:6920

The same error message is put out by e.g. afl-showmap. gcc-7 7.3.0-18 fixes this. As an alternative until that becomes available, afl-clang, which uses the Clang compiler, can be used instead to prepare the target binary properly:

$ afl-clang --version
 afl-cc 2.52b by <>
 clang version 4.0.1-10 (tags/RELEASE_401/final)
$ afl-clang -o test-instr test-instr.c 
 afl-cc 2.52b by <>
 afl-as 2.52b by <>
 [+] Instrumented 6 locations (64-bit, non-hardened mode, ratio 100%)


Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.