Planet Debian


Thorsten Alteholz: My Debian Activities in October 2017

6 November, 2017 - 00:08

FTP assistant

Again, this month's statistics show almost the same numbers as last month. I accepted 214 packages and rejected 22 uploads. The overall number of packages that got accepted this month was only 339.

Debian LTS

This was the fortieth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 20.75h. During that time I did LTS uploads of:

  • [DLA 1125-1] botan1.10 security update for one CVE
  • [DLA 1127-1] sam2p security update for 6 CVEs
  • [DLA 1143-1] curl security update for one CVE
  • [DLA 1149-1] wget security update for two CVEs

I also took care of radare2 twice and marked all four CVEs as not-affected for Wheezy and Jessie. As nobody else wanted to address the issues in wireshark yet, I now started to work on this package.

Last but not least I did one week of frontdesk duties.

Other stuff

During October I took care of some bugs and at one go uploaded new upstream versions of hoel and duktape (this had to be done twice as I introduced a new bug with the first upload :-(). I only fixed bugs in glewlwyd and smstools. This month I also sponsored an upload of printrun.

After about ten years of living without any power outage, some construction worker decided to cut a cable near my place. Unfortunately one of my computers used for recording TV shows did not boot after the cable had been repaired and I had to switch some timers to other boxes. All in all this was too much stress and I purchased some UPS units from APC. As apcupsd was orphaned, I took the opportunity to adopt it as DOPOM for this month.

My license pasting project now contains 31 license templates for your debian/copyright. The list of available texts can be obtained with:


The license text itself is available under the given links, for example with



Dirk Eddelbuettel: pinp 0.0.4: Small tweak

5 November, 2017 - 23:34

A maintenance release of our pinp package for snazzier one or two column vignettes is now on CRAN as of yesterday.

In version 0.0.3, we disabled the default \pnasbreak command we inherit from the PNAS LaTeX style. That change turns out to have been too drastic, so we reverted it and added a new YAML front-matter option skip_final_break which, if set to TRUE, will skip this break. With a default value of FALSE we maintain the prior behaviour.
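As a sketch of what that looks like in use (the surrounding fields are illustrative placeholders; see the package's vignette skeleton for the canonical header layout):

```yaml
title: "My package vignette"
output: pinp::pinp
# Skip the \pnasbreak on the final page; the default FALSE
# keeps the break, i.e. the pre-0.0.3 behaviour.
skip_final_break: TRUE
```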

A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

The NEWS entry for this release follows.

Changes in pinp version 0.0.4 (2017-11-04)
  • Correct NEWS headers from 'tint' to 'pinp' (#45).

  • New front-matter variable ‘skip_final_break’ skips the \pnasbreak on the final page, which is back as the default (#47).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: Sweep in Peace

5 November, 2017 - 09:17

Review: Sweep in Peace, by Ilona Andrews

Series: Innkeeper Chronicles #2
Publisher: NYLA
Copyright: 2015
ISBN: 1-943772-32-0
Format: Kindle
Pages: 302

This is the sequel to Clean Sweep. You could pick up the background as you go along, but the character relationships benefit from reading the series in order.

Dina's inn is doing a bit better, but it still desperately needs guests. That means she's not really in a position to say no when an Arbitrator shows up at her door and asks her to host a peace summit. Lucky for the Arbitrator, since every other inn on Earth did say no.

Nexus has been the site of a viciously bloody conflict between the vampires, the Hope-Crushing Horde, and the Merchants of Baha-char for years. All sides have despaired of finding any form of peace. The vampires and the Horde have both deeply entrenched themselves in a cycle of revenge. The Merchants have the most strategic position and an apparently unstoppable warrior. The situation is hopeless; by far the most likely outcome will be open warfare inside the inn, which would destroy its rating and probably Dina's future as an innkeeper. Dina will need all of her power and caution just to stop that; peace seems beyond any possibility, but thankfully isn't her problem. Maybe the Arbitrator can work some miracle if she can just keep everyone alive.

And well fed. Which is another problem. She has enough emergency money for the food, but somehow cook for forty people from four different species while keeping them all from killing each other? Not a chance. She's going to have to hire someone somehow, someone good, even though she can't afford to pay.

Sweep in Peace takes this series farther out of urban fantasy territory and farther into science fiction, and also ups the stakes (and the quality of the plot) a notch. We get three moderately interesting alien species with only slight trappings of fantasy, a wonderful alien chef who seems destined to become a regular in the series, and a legitimately tricky political situation. The politics and motives aren't going to win any awards for deep and subtle characterization, but that isn't what the book is going for. It's trying to throw enough challenges at Dina to let her best characteristics shine, and it does that rather well.

The inn continues to be wonderful, although I hope it becomes more of a character in its own right as the series continues. Dina's reshaping of it for guests, and her skill at figuring out the rooms her guests would enjoy, is my favorite part of these books. She cares about making rooms match the personality of her guests, and I love books that give the character a profession that matters to them even if it's unrelated to the plot. I do wish Andrews would find a few other ways for Dina to use her powers for combat beyond tentacles and burying people in floors, but that's mostly a quibble.

You should still not expect great literature. I guessed the big plot twist several chapters before it happened, and the resolution is, well, not how these sorts of political situations resolve in the real world. But there is not a stupid love affair, there are several interesting characters, and one of the recurring characters gets pretty solid and somewhat unusual characterization. And despite taking the plot in a more serious direction, Sweep in Peace retains its generally lighthearted tone and firm conviction in Dina's ability to handle just about anything. Also, the chef is wonderful.

One note: Partway into the book, I started getting that "oh, this is a crossover" feeling (well-honed by years of reading comic books). As near as I can tell from a bit of research, Andrews pulled in some of their characters from the Edge series. This was a bit awkward, in the "who are these people and why do they seem to have more backstory than any of the other supporting characters" cross-over sort of way, but the characters that were pulled in were rather intriguing. I might have to go read the Edge books now.

Anyway, if you liked Clean Sweep, this is better in pretty much every way. Recommended.

Followed by One Fell Sweep.

Rating: 8 out of 10

Junichi Uekawa: It's already November.

5 November, 2017 - 07:33
It's already November. Been reading up a bit on C++17 features and improvements. Nice.

Alexander Wirt: debconf mailinglists moved to

5 November, 2017 - 04:45

Today I had the pleasure to move the debconf mailinglists to That means that the following mailinglists:

are now hosted on Please update any documentation or bookmarks you have.

Next step would be to join debconf again ;).

Steinar H. Gunderson: Trøndisk 2017 live stream

4 November, 2017 - 14:18

We're streaming live from Trøndisk 2017, first round of the Norwegian ultimate frisbee series, today from 0945 CET and throughout the day/weekend. It's an interesting first for Nageru in that it's sports, where everything happens much faster and there are more demands for overlay graphics (I've made a bunch of CasparCG templates). I had hoped to get to use Narabu in this, but as the (unfinished) post series indicates, I simply had to prioritize other things. There's plenty of new things for us anyway, not the least that I'll be playing and not operating. :-)

Feel free to tune in to the live stream, although we don't have international stream reflectors. It's a fun sport with many nice properties. :-) A recording will be up on YouTube not too long after the day is over, too.

Louis-Philippe Véronneau: Migrating my website to Pelican

4 November, 2017 - 11:00

After too much time lying to myself, telling myself things like "I'll just add this neat feature I want on my blog next week", I've finally made the big jump, ditched django and migrated my website to Pelican.

I'm going to the Cambridge Mini-Debconf at the end of the month for the Debconf Videoteam Autumn sprint and I've taken on the task of writing daily sprint reports for the team. That in turn means I have to publish my blog on Planet Debian. My old website not having feeds made this a little hard, and this perfect storm gave me the energy to make the migration happen.

Anyway, django was fun. Building a (crappy) custom blogging engine with it taught me some rough basics, but honestly I don't know why I ever thought it was a good idea.

Don't get me wrong: django is great and should definitely be used for large and complicated websites. My blog just ain't one.

Migrating to Pelican was pretty easy since it also uses Jinja2 templates and generates content from Markdown. The hardest part was actually bending it to replicate the weird and specific behavior I wanted it to have.

So yeah, woooo, I migrated to Pelican. Who cares, right? Well, if you are amongst the very, very few people who read the blog posts I mainly write for myself, you'll be pleased to know that:

  • Tags are now implemented
  • You can subscribe to a wide array of ATOM feeds to follow my blog

Here's a bonus picture of a Pelican from Wikimedia, just for the sake of it:

Dirk Eddelbuettel: tint 0.0.4: Small enhancements

4 November, 2017 - 08:02

A maintenance release of the tint package arrived on CRAN earlier today. Its name expands from tint is not tufte as the package offers a fresher take on the Tufte-style for html and pdf presentations.

A screenshot of the pdf variant is below.

This release brings some minor enhancements and polish, mostly learned from having done the related pinp (two-column vignette in the PNAS style) and linl (LaTeX letter) RMarkdown-wrapper packages; see below for details from the NEWS.Rd file.

Changes in tint version 0.0.4 (2017-11-02)
  • Skeleton files are also installed as vignettes (#20).

  • A reference to the Tufte source file now points to tint (Ben Marwick in #19, later extended to other Rmd files).

  • Several spelling and grammar errors were corrected too (#13 and #16 by R. Mark Sharp and Matthew Henderson).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible builds folks: Reproducible Builds: Weekly report #131

4 November, 2017 - 00:58

Here's what happened in the Reproducible Builds effort between Sunday October 22 and Saturday October 28 2017:

Past Events

Upcoming/current events

Documentation updates

Bernhard Wiedemann started The Unreproducible Package which "is meant as a practical way to demonstrate the various ways that software can break reproducible builds using just low level primitives without requiring external existing programs that implement these primitives themselves.

It is structured so that one subdirectory demonstrates one class of issues in some variants observed in the wild."

Reproducible work in other projects

Hush, a fork of ZCash, opened an issue about reproducible builds.

A new tag was added to lintian (lint checker for Debian packages) to ensure that changelog entry timestamps are strictly increasing. This avoids certain real-world issues with identical timestamps, documented in Debian #843773.
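The property the new tag enforces is simple to state; here is a rough Python sketch of the "strictly increasing" check (an illustration only, not lintian's actual implementation, which is written in Perl):

```python
from email.utils import parsedate_to_datetime

def timestamp_problems(dates):
    """Given changelog entry dates, newest first (as in debian/changelog),
    return the adjacent pairs where an entry is not strictly newer than
    the entry below it."""
    stamps = [parsedate_to_datetime(d) for d in dates]
    return [(a, b) for a, b in zip(stamps, stamps[1:]) if a <= b]
```

Identical timestamps in adjacent entries, as in Debian #843773, would show up as a flagged pair.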

Packages reviewed and fixed, and bugs filed

Patches sent upstream:

  • Bernhard M. Wiedemann:
    • gtranslator, embedded build timestamps
    • libgda, embedded build timestamps
    • mariadb, embedded build timestamps
    • nim, embedded build timestamps

Debian bug reports:

Reviews of unreproducible packages

14 package reviews have been added, 35 have been updated and 28 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (4)
strip-nondeterminism development

Version 0.040-1 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks, as well as new ones from:

  • Mattia Rizzolo:
    • Don't open the original file in write mode
reprotest development

Development continued in git:

  • Ximin Luo:
    • New features:
      • Support a domain_host variation.
      • Support a --print-sudoers feature.
    • Documentation:
      • Note some caveats about the existing git versions as a self-reminder not to release it yet.
      • Updates about our assumptions and rearrange sudo into its own section.
    • Bug fixes:
      • main: When dropping privs, make sure the user can still move in the root.
      • tests: fix, need to preserve env for su
      • build: Don't fail when the build produces a broken symlink
      • main, presets: Properly drop privs when running the build. (Closes: #877813)
    • Code quality:
      • Improve logging to try to get to the bottom of the jenkins failures
      • Tweak tests to avoid some build dependencies
      • build: Name temporary directories after reprotest, not autopkgtest
buildinfo.debian.net development

Development continued in git:

  • Chris Lamb:
    • New features:
      • Add API endpoint to fetch specific .buildinfo files for a certain package/version/architecture, and optimise it. (Closes: #25)
    • Bug fixes:
      • Always show SHA256, regardless of viewport size. (Closes: #27)
      • Actually filter by source package
reproducible-website development
  • Holger Levsen:
    • RWS3 Berlin 2017:
      • Add CoyIM, Arch Linux, LEDE, LEAP,, Bazel, coreboot.
      • Make some sponsors visible.
      • Add short paragraph explaining that registration is mandatory.

This week's edition was written by Ximin Luo, Chris Lamb, Bernhard M. Wiedemann and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Raphaël Hertzog: My Free Software Activities in October 2017

3 November, 2017 - 17:53

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I had 1.5h left over from September too. During this time, I finally finished my work on exiv2: I completed the triage of all CVEs, backported 3 patches to the version in wheezy and released DLA-1147-1.

I also did some review of the oldest entries in dla-needed. I reclassified a bunch of CVEs on zoneminder and released DLA-1145-1 for the most problematic issue in that package. Many other packages got their CVEs reclassified as not worth an update: xbmc, check-mk, rbenv, phamm, yaml-cpp. For mosquitto, I released DLA-1146-1.

I filed #879001 (security issue) and #879002 (removal suggestion) on libpam4j. This library is no longer used by any other package in Debian, so it could be removed instead of costing us time in support.

Misc Debian work

After multiple months of waiting, I was allowed to upload my schroot stable update (#864297).

After ack from the d-i release manager, I pushed my pkgsel changes and uploaded version 0.46 of the package: this brings unattended-upgrades support in the installer. It’s now installed by default.

I nudged the upstream developer of gnome-shell-timer to get a new release for GNOME 3.26 compatibility and packaged it.

Finally, I was pleased to merge multiple patches from Ville Skyttä on Distro Tracker (the software powering It looks like Ville will continue to contribute on a regular basis, yay. \o/ He already helped me to fix the remaining blockers for the switch to Python 3.

Not really Debian related, but I also filed a bug against Tryton that I discovered after upgrading to the latest version.


See you next month for a new summary of my activities.


Chris Lamb: Faking cleaner URLs in the Debian BTS

3 November, 2017 - 15:21

Debian bug #846500 requests that the Bug Tracking System moves the canonical URL for a given bug from:

… to the shorter, cleaner and generally less ugly:

(The latter currently redirects to the former.)

However, whilst we wait for a fix we can abuse the window.history object from the HTML History API to fake this locally:

// Extract the bug number from the bugreport.cgi URL
var m = window.location.href.match(/bugreport\.cgi\?bug=(\d+)/);
if (!m) return;

for (var x of document.getElementsByTagName("a")) {
  var href = x.getAttribute("href");
  if (href && href.match(/^[^:]+\.cgi/)) {
      // Mangle relative URIs; <base> tag does not DTRT
      x.setAttribute('href', "/cgi-bin/" + href);
  }
}

history.replaceState({}, "", "/" + m[1] + window.location.hash);

This should work with most "user script" managers — I happen to use TamperMonkey in Chrome.

Joerg Jaspert: Automated wifi login, update 2

3 November, 2017 - 14:24

Seems my blog lately just consists of updates to my automated login script for the ICE wifi… But I do hate the entirely useless “Click a button” crap, every day, twice. I’ve seen it once, now leave me alone, please.

Updated script:


#!/bin/bash

# (Some) docs at

# WIFIonICE captive-portal endpoints

IFACE="$1"
ACTION="$2"
TMPDIR=${TMPDIR:-/tmp}
WGET="/usr/bin/wget"
TIMEOUT="/usr/bin/timeout -k 20 15"

case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwconfig $IFACE | grep ESSID | cut -d":" -f2 | sed 's/^[^"]*"\|"[^"]*$//g')}
        if [[ ${CONID} == WIFIonICE ]]; then
            COOKIETMP=$(mktemp -p ${TMPDIR} nmwifionice.XXXXXXXXX)
            trap "rm -f ${COOKIETMP}" EXIT TERM HUP INT QUIT
            csrftoken=$(${TIMEOUT} ${WGET} -q -O - --keep-session-cookies --save-cookies=${COOKIETMP} --referer ${REFERER} ${LOGIN} | grep -oP 'CSRFToken"\ value="\K[0-9a-z]+')
            if [[ -z ${csrftoken} ]]; then
                echo "CSRFToken is empty"
                exit 0
            fi
            sleep 1
            ${TIMEOUT} ${WGET} -q -O - --load-cookies=${COOKIETMP} --post-data="login=true&connect=connect&CSRFToken=${csrftoken}" --referer ${REFERER} ${LOGIN} >/dev/null
        fi
        ;;
    *)
        # We are not interested in this
        ;;
esac

Jaldhar Vyas: New Hackergotchi

3 November, 2017 - 11:49

The sole purpose of this post is to check my new hackergotchi looks ok on Debian Planet. As I've lost a lot of weight and this is the second Diwali I've managed to survive without regaining, I thought I should update it to a more accurate depiction.

Russell Coker: Work Stuff

3 November, 2017 - 11:21

Does anyone know of a Linux support company that provides 24*7 support for Ruby and PHP applications? I have a client that is looking for such a company.

Also I’m looking for more consulting work. If anyone knows of an organisation that needs some SE Linux consulting, or support for any of the FOSS software I’ve written then let me know. I take payment by Paypal and Bitcoin as well as all the usual ways. I can make a private build of any of my FOSS software to suit your requirements or if you want features that could be used by other people (and don’t conflict with the general use cases) I can add them on request. Small changes start at $100.


Rogério Brito: Comparison of JDK installation of various Linux distributions

3 November, 2017 - 08:13

Today I spent some time in the morning seeing how one would install the JDK on Linux distributions. This is to create a little comparative tutorial to teach introductory Java.

Installing the JDK is, thanks to the OpenJDK developers in Debian and Ubuntu (Matthias Klose and helpers), a very easy task. You simply type something like:

apt-get install openjdk-8-jdk

Since it is better for a student to have everything available for experiments, I install the full version, not only the -headless version. Given my familiarity with Debian/Ubuntu, I didn't have to think about the way of installing it, of course.

But as this is a tutorial meant to be as general as I can, I tried also to include instructions on how to install Java on other distributions. The first two that came to my mind were openSUSE and Fedora.

Both use the RPM package format for their "native" packages (in the same sense that Debian uses DEB packages for "native" packages). But they use different higher-level tools to install such packages: Fedora uses a tool called dnf, while openSUSE uses zypper.

To try these distributions, I got their netinstall ISOs and used qemu/kvm to install on a virtual machine. I used the following to install/run the virtual machines (the example below is, of course, for openSUSE):

qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -net nic,model=e1000 -net user -drive index=0,media=disk,cache=unsafe,file=suse.qcow2 -cdrom openSUSE-Leap-42.3-NET-x86_64.iso

The names of the packages also change from one distribution to another. On Fedora, I had to use:

dnf install java-1.8.0-openjdk-devel

On openSUSE, I had to use:

zypper install java-1_8_0-openjdk-devel

Note that one distribution uses dots in the names of the packages while the other uses underscores.
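The three commands above differ only in tool and package name; collecting them in one place makes the comparison easy (a small Python table just for illustration; the distro keys are my own labels):

```python
# Installing the JDK per distribution family; note the dots (Fedora)
# vs underscores (openSUSE) vs Debian's own -jdk suffix scheme.
JDK_INSTALL = {
    "debian/ubuntu": "apt-get install openjdk-8-jdk",
    "fedora":        "dnf install java-1.8.0-openjdk-devel",
    "opensuse":      "zypper install java-1_8_0-openjdk-devel",
}

def install_command(distro):
    """Return the JDK install command line for a given distribution."""
    return JDK_INSTALL[distro]
```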

One interesting thing that I noticed with dnf was that, when I used it, it automatically refreshed the package lists from the network, something which I forgot, and it was a pleasant surprise. I don't know about zypper, but I guess that it probably had fresh indices when the installation finished.

Both installations were effortless after I knew the names of the packages to install.

Oh, BTW, in my 5 minute exploration with these distributions, I noticed that if you don't want the JDK, but only the JRE, then you omit the -devel suffix. It makes sense when you think about it, for consistency with other packages, but Debian's conventions also make sense (JRE with -jre suffix, JDK with -jdk suffix).

I failed miserably to use Fedora's prebaked, vanilla cloud image, as I couldn't log in to this image, and I decided to just install the whole OS on a fresh virtual machine.

I don't have instructions on how to install on Gentoo nor on Arch, though.

I now see how hard it is to cover instructions/provide software for as many distributions as you wish, given the multitude of package managers, conventions etc.

Steinar H. Gunderson: Introducing Narabu, part 5: Encoding

3 November, 2017 - 03:34

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3 and part 4 first.

At this point, we've basically caught up with where I am, so things are less set in stone. However, let's look at what qualitatively differentiates encoding from decoding; unlike in interframe video codecs (where you need to do motion vector search and stuff), encoding and decoding are very much mirror images of each other, so we can intuitively expect them to be relatively similar in performance. The procedure is DCT, quantization, entropy coding, and that's it.

One important difference is in the entropy coding. Since our rANS encoding is non-adaptive (a choice made largely for simplicity, but also because our streams are so short), it works by first signaling a distribution and then encoding each coefficient using that distribution. However, we don't know that distribution until we've DCT-ed all blocks in the picture, so we can't just DCT each block and entropy code the coefficients on-the-fly.

There are a few ways to deal with this:

  • DCT and quantize first, then store to a temporary array, count up all the values, make rANS distributions, and then entropy code from that array. This is conceptually the simplest, but see below.
  • Use some sort of approximate distribution, which also saves us from doing the actual counting. This has the problem that different images look different, so it loses efficiency (and could do so quite badly). Furthermore, it means this approximate distribution could never have a 0 for any frequency, which also is a net-negative when we're talking about only 12-bit resolution.
  • Use the values from the previous frame, since video tends to have correlation between frames (and we can thus expect the distribution to be a better fit than some generic distribution decided ahead of time). Has many of the same problems as previous point, though, plus we get back the overhead of counting. Still, an interesting possibility that I haven't tried in practice yet.
  • Do DCT and quantization just to get the values for counting, then do it all over again later when we have the distribution. Saves the store to the temporary array (the loads will be the same, since we need to load the image twice), at the cost of doing DCT twice. I haven't tried this either.
  • A variant on the previous one: Do DCT on a few random blocks (10%?) to get an approximate distribution. I did something similar to this at some point, and it was surprisingly good, but I haven't gone further with it.

As you can see, tons of possible strategies here. For simplicity, I've ended up with the former, although this could very well be changed at some point. There are some interesting subproblems, though:

First of all, we need to decide the data type of this temporary array. The DCT tends to concentrate energy into fewer coefficients (which is a great property for compression!), so even after quantization, some of them will get quite large. This means we cannot store them in an 8-bit texture; however, even the bigger ones are very rarely bigger than 10 bits (including a sign bit), so using 16-bit textures wastes precious memory bandwidth.

I ended up slicing the coefficients by horizontal index and then pairing them up (so that we generate pairs 0+7, 1+6, 2+5 and 3+4 for the first line of the 8x8 DCT block, 8+15, 9+14, 10+13 and 11+12 for the next line, etc.). This allows us to pack two coefficients into a 16-bit texture, for an average of 8 bits per coefficient, which is what we want. It makes for some slightly fiddly clamping and bit-packing since we are packing signed values, but nothing really bad.
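The clamping and packing mechanics can be sketched in plain Python (the real thing is GLSL; the even 8-bit split per coefficient is my simplification, as the actual shader may divide the 16 bits unevenly between the large and small member of each pair):

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def pack_pair(a, b):
    # Clamp each signed coefficient into 8 bits and pack the pair
    # into one 16-bit texel value (low byte = a, high byte = b).
    a = clamp(a, -128, 127) & 0xFF
    b = clamp(b, -128, 127) & 0xFF
    return (b << 8) | a

def unpack_pair(w):
    def signed8(x):
        # Reinterpret an unsigned byte as two's-complement signed.
        return x - 256 if x >= 128 else x
    return signed8(w & 0xFF), signed8((w >> 8) & 0xFF)
```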

Second, and perhaps surprisingly enough, counting efficiently is nontrivial. We want a histogram over which coefficient values occur most often, i.e., for each value, something like ++counts[dist][coeff] (recall we have four distinct distributions). However, since we're in a massively parallel algorithm, this add needs to be atomic, and since values like e.g. 0 are super-common, all of our GPU cores will end up fighting over the cache line containing counts[dist][0]. This is… not fast. Think 10 ms/frame not fast.

Local memory to the rescue again; all modern GPUs have fast atomic adds to local memory (basically integrating adders into the cache, as I understand it, although I might have misunderstood here). This means we just make a suitably large local group, build up our sub-histogram in local memory and then add all nonzero buckets (atomically) to the global histogram. This improved performance dramatically, to the point where it was well below 0.1 ms/frame.
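The two-level scheme is easy to simulate on the CPU (plain Python standing in for the GLSL; the workgroup size here is arbitrary):

```python
from collections import Counter

def two_level_histogram(values, group_size=256):
    """Build per-'workgroup' sub-histograms first, then merge only the
    nonzero buckets into the global histogram; on a GPU each merge is one
    atomic add instead of one contended atomic add per input value."""
    global_hist = Counter()
    for start in range(0, len(values), group_size):
        local_hist = Counter(values[start:start + group_size])
        for value, count in local_hist.items():
            global_hist[value] += count  # the only "global atomic" step
    return global_hist
```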

However, our histogram is still a bit raw; it sums to 1280x720 = 921,600 values, but we want an approximation that sums to exactly 4096 (12 bits), with some additional constraints (like a nonzero frequency for every coefficient value that actually occurs). Charles Bloom has an exposition of a nearly optimal algorithm, although it took me a while to understand it. The basic idea is: Make a good approximation by multiplying each frequency by 4096/921600 (rounding intelligently). This will give you something that very nearly sums to 4096, either just above or just below, e.g. 4101. For each step you're above or below the target (five in this case), find the best single coefficient to adjust (most entropy gain, or least loss); Bloom is using a heap, but on the GPU, each core is slow but we have many of them, so it's better just to try all 256 possibilities in parallel and have a simple voting scheme through local memory to find the best one. And then finally, we want a cumulative distribution function, but that is simple through a parallel prefix sum on the 256 elements.
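In plain Python, the normalize-then-nudge idea looks roughly like this (a sequential sketch of the approach described above, not the parallel voting version, and with my own simplified scoring):

```python
import math

def normalize_freqs(counts, target=4096):
    """Scale raw symbol counts to frequencies summing exactly to `target`,
    keeping every observed symbol at frequency >= 1."""
    total = sum(counts)
    freqs = [max(1, round(c * target / total)) if c else 0 for c in counts]
    while sum(freqs) != target:
        step = -1 if sum(freqs) > target else 1
        # Pick the symbol where a +/-1 change costs the least entropy
        # (or gains the most): cost = c * (log2(f) - log2(f + step)).
        best = min(
            (i for i, c in enumerate(counts) if c and freqs[i] + step >= 1),
            key=lambda i: counts[i]
            * (math.log2(freqs[i]) - math.log2(freqs[i] + step)),
        )
        freqs[best] += step
    return freqs
```

On the GPU, the argmin becomes 256 parallel evaluations plus a vote through local memory, and the final CDF is a prefix sum over the result.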

And then finally, we can take our DCT coefficients and the finished rANS distribution, and write the data! We'll have to leave some headroom for the streams (I've allowed 1 kB for each, which should be ample except for adversarial data—and we'll probably solve that just by truncating the writes and accepting the corruption), but we'll compact them when we write to disk.

Of course, the Achilles heel here is performance. Where decoding 720p (luma only) on my GTX 950 took 0.4 ms or so, encoding is 1.2 ms or so, which is just too slow. Remember that 4:2:2 is twice that, and we want multiple streams, so 2.4 ms per frame is eating too much. I don't really know why it's so slow; the DCT isn't bad, the histogram counting is fast, it's just the rANS shader that's slow for some reason I don't understand, and also haven't had the time to really dive deeply into. Of course, a faster GPU would be faster, but I don't think I can reasonably demand that people get a 1080 just to encode a few video streams.

Due to this, I haven't really worked out the last few kinks. In particular, I haven't implemented DC coefficient prediction (it needs to be done before tallying up the histograms, so it can be a bit tricky to do efficiently, although perhaps local memory will help again to send data between neighboring DCT blocks). And I also haven't properly done bounds checking in the encoder or decoder, but it should hopefully be simple as long as we're willing to accept that evil input decodes into garbage instead of flagging errors explicitly. It also depends on a GLSL extension that my Haswell laptop doesn't have to get 64-bit divides when precalculating the rANS tables; I've got some code to simulate 64-bit divides using 32-bit, but it doesn't work yet.

The code as it currently stands is in this Git repository; you can consider it licensed under GPLv3. It's really not very user-friendly at this point, though, and rather rough around the edges.

Next time, we'll wrap up with some performance numbers. Unless I magically get more spare time in the meantime and/or some epiphany about how to make the encoder faster. :-)

Bits from Debian: New Debian Developers and Maintainers (September and October 2017)

3 November, 2017 - 02:30

The following contributors got their Debian Developer accounts in the last two months:

  • Allison Randal (wendar)
  • Carsten Schoenert (tijuca)
  • Jeremy Bicha (jbicha)
  • Luca Boccassi (bluca)
  • Michael Hudson-Doyle (mwhudson)
  • Elana Hashman (ehashman)

The following contributors were added as Debian Maintainers in the last two months:

  • Ervin Hegedüs
  • Tom Marble
  • Lukas Schwaighofer
  • Philippe Thierry


Antoine Beaupré: October 2017 report: LTS, feed2exec beta, pandoc filters, git mediawiki

2 November, 2017 - 23:12
Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on the famous KRACK attack, git-annex, golang and the continuous stream of GraphicsMagick security issues.

WPA & KRACK update

I spent most of my time this month on the Linux WPA code, to backport it to the old (~2012) wpa_supplicant release. I first published a patchset based on the patches shipped after the embargo for the oldstable/jessie release. After feedback from the list, I also built packages for i386 and ARM.

I have also reviewed the WPA protocol to make sure I understood the implications of the changes required to backport the patches. For example, I removed the patches touching the WNM sleep mode code, as that was introduced only in the 2.0 release. Chunks of code regarding state tracking were also not backported, as they are part of the state tracking code introduced later, in 3ff3323. Finally, I still have concerns about the nonce setup in patch #5. In the last chunk, you'll notice peer->tk is reset in order to negotiate a new TK. The other approach I considered was to backport 1380fcbd9f ("TDLS: Do not modify RNonce for an TPK M1 frame with same INonce"), but I figured I would play it safe and not introduce further variations.

I should note that I share Matthew Green's observations regarding the opacity of the protocol. Normally, network protocols are freely available, and security researchers like me can easily review them. In this case, I would have needed to read the opaque 802.11i-2004 PDF, which is behind a TOS wall at the IEEE. I ended up reading the IEEE_802.11i-2004 Wikipedia article, which gives a simpler view of the protocol. But it's a real problem to see such critical protocols developed behind closed doors like this.

At Guido's suggestion, I sent the final patch upstream explaining the concerns I had with the patch. I have not, at the time of writing, received any response from upstream about this, unfortunately. I uploaded the fixed packages as DLA 1150-1 on October 31st.


git-annex

The next big chunk on my list was completing the work on git-annex (CVE-2017-12976) that I started in August. It turns out the backport was simpler than I expected, even with my rusty Haskell. Type-checking really helps in doing the right thing, especially considering how Joey Hess implemented the fix: by introducing a new type.

So I backported the patch from upstream and notified the security team that the jessie and stretch updates would be similarly easy. I shipped the backport to LTS as DLA-1144-1. I also shared the updated packages for jessie (which required a similar backport) and stretch (which didn't), and Sebastien Delafond published those as DSA 4010-1.


GraphicsMagick

Up next was yet another security vulnerability in the GraphicsMagick stack. This involved the usual deep dive into intricate and sometimes just unreasonable C code, trying to fit a round tree into a square sinkhole. I'm always unsure about those patches, but the test suite passes, smoke tests show the vulnerability as fixed, and that's pretty much as good as it gets.

The announcement (DLA 1154-1) turned out to be a little special because I had previously noticed that the penultimate announcement (DLA 1130-1) was never sent out. So I made a merged announcement to cover both instead of re-sending the original 3 weeks late, which may have been confusing for our users.

Triage & misc

We always do a bit of triage even when not on frontdesk duty, so I:

I also did smaller bits of work on:

The latter reminded me of my concerns about the long-term maintainability of the golang ecosystem: because everything is statically linked, an update to a core library (say the SMTP library, as in CVE-2017-15042, thankfully not affecting LTS) requires a full rebuild of all packages that include the library, in all distributions. So what would be a simple update in a shared-library system could mean an explosion of work on statically linked infrastructures. This is a lot of work, and it can definitely be error-prone: as I've seen in other updates, some packages (for example the Ruby interpreter) just bit-rot on their own and eventually fail to build from source. We would also have to investigate all packages to see which ones include the library, something we are not well equipped for at this point.

Wheezy was the first release shipping golang packages, but at least it shipped only one... Stretch shipped with two golang versions (1.7 and 1.8), which will make maintenance even harder in the long term.

We build our computers the way we build our cities--over time, without a plan, on top of ruins. - Ellen Ullman

Other free software work

This month again, I was busy doing some serious yak shaving operations all over the internet, on top of publishing two of my largest LWN articles to date (2017-10-16-strategies-offline-pgp-key-storage and 2017-10-26-comparison-cryptographic-keycards).

feed2exec beta

Since I announced this new project last month, I have released it as a beta and it entered Debian. I have also written useful plugins like the wayback plugin, which saves pages on the Wayback Machine for eternal archival. The archive plugin can similarly save pages to the local filesystem. I also added bash completion, expanded the unit tests and documentation, fixed default file paths and a bunch of bugs, and refactored the code. Finally, I started using two external Python libraries instead of rolling my own code: pyxdg and requests-file, the latter of which I packaged in Debian (and fixed a bug in their test suite).

The program is working pretty well for me. The only thing I feel is really missing now is a retry/fail mechanism. Right now, it's a little brittle: any network hiccup will yield an error email, which is readable to me but could be confusing to a new user. Strangely enough, I am particularly having trouble with (local!) DNS resolution that I need to look into, but that is probably unrelated to the software itself. Thankfully, the user can disable those with --loglevel=ERROR to silence the WARNINGs.

Furthermore, some plugins still have rough edges. For example, the Transmission integration would probably work better as a distinct plugin instead of a simple exec call, because when it adds new torrents, the output is totally cryptic. Such a plugin could also leverage more feed parameters to save different files in different locations depending on the feed titles, something that would be hard to do safely with the exec plugin as it stands.

I am keeping a steady flow of releases. I wish there were a way to see how effectively I am reaching out with this project, but unfortunately GitLab doesn't provide usage statistics... And I have received only a few comments on IRC about the project, so maybe I need to reach out more, like it says in the fine manual. It always feels strange to have to promote your project like it's some new bubbly soap...

The next steps for the project are a final review of the API and a production-ready 1.0.0 release. I am also thinking of making a small screencast to show the basic capabilities of the software, maybe with asciinema's upcoming audio support?

Pandoc filters

As I mentioned earlier, I dove back into Haskell programming when working on the git-annex security update. But I also have a small Haskell program of my own: a Pandoc filter that I use to convert the HTML articles I publish on LWN into an Ikiwiki-compatible markdown version. It turns out the script was still missing a bunch of things: image sizes, proper table formatting, and so on. I also worked hard on automating more bits of the publishing workflow by extracting the time from the article, which allowed me to extract the full article into an almost final copy just by specifying the article ID. The only thing left is to add tags, and the article is complete.

In the process, I learned about new weird Haskell constructs. Take this code, for example:

-- remove needless blockquote wrapper around some tables
-- haskell newbie tips:
-- @ is the "as-pattern": it allows us to define a name for the
-- whole construct and inspect its contents at once
-- {} is the "empty record pattern": it basically means "match the
-- constructor but ignore its fields"
cleanBlock (BlockQuote t@[Table {}]) = t

Here the idea is to remove <blockquote> elements needlessly wrapping a <table>. I can't specify the Table type on its own, because then I couldn't address the table as a whole, only its parts. I could reconstruct the whole table bit by bit, but it wouldn't be as clean.

The other pattern was how to, at last, address multiple string elements, which was difficult because Pandoc treats spaces specially:

cleanBlock (Plain (Strong (Str "Notifications":Space:Str "for":Space:Str "all":Space:Str "responses":_):_)) = []

The last bit that drove me crazy was the date parsing:

-- the "GAByline" div has a date, use it to generate the ikiwiki dates
-- this is distinct from cleanBlock because we do not want to have to
-- deal with time there: it is only here we need it, and we need to
-- pass it in here because we do not want to mess with IO (time is I/O
-- in haskell) all across the function hierarchy
cleanDates :: ZonedTime -> Block -> [Block]
-- this mouthful is just the way the data comes in from
-- LWN/Pandoc. there could be a cleaner way to represent this,
-- possibly with a record, but this is complicated and obscure enough.
cleanDates time (Div (_, [cls], _)
                 [Para [Str month, Space, Str day, Space, Str year], Para _])
  | cls == "GAByline" = ikiwikiRawInline (ikiwikiMetaField "date"
                                           (iso8601Format (parseTimeOrError True defaultTimeLocale "%Y-%B-%e,"
                                                           (year ++ "-" ++ month ++ "-" ++ day) :: ZonedTime)))
                        ++ ikiwikiRawInline (ikiwikiMetaField "updated"
                                             (iso8601Format time))
                        ++ [Para []]
-- other elements just pass through
cleanDates time x = [x]

Now that seems just dirty, but it was even worse before. One thing I find difficult in adapting to Haskell is that you need to get into the habit of writing smaller functions. The language is really not well adapted to long discourse: it's more about getting small things connected together. Other languages (e.g. Python) discourage this because there's some overhead in calling functions (10 nanoseconds in my tests, but still), whereas functions are a fundamental and important construct in Haskell and are much more heavily optimized. So I constantly need to remind myself to split things up early, otherwise I can't get anything done in Haskell.

Other languages are more lenient, which does mean my code can be dirtier, but I feel I get things done faster that way. The oddity of Haskell makes it frustrating to work with. It's like doing construction work where you're not allowed to get the floor dirty. When I build stuff, I don't mind things being dirty: I can clean up afterwards. This is especially critical when you don't actually know how to make things clean in the first place, as Haskell will simply not let you do it at all.

And obviously, I fought with monads, or, more specifically, "I/O" or IO in this case. It turns out that getting the current time is IO in Haskell: indeed, it's not a "pure" function that will always return the same thing. But this means that I would have had to change the signature of every function that touched time to include IO. I eventually moved the time initialization up into main so that I had only one IO function, and passed that timestamp downwards as a simple argument. That way I could keep the rest of the code clean, which seems to be an acceptable pattern.

I would of course be happy to get feedback from my Haskell readers (if any) to see how to improve that code. I am always eager to learn.

Git remote MediaWiki

Few people know that there is a MediaWiki remote for Git, which allows you to mirror a MediaWiki site as a Git repository. As a disaster recovery mechanism, I have been keeping such a historical backup of the Amateur radio wiki for a while now. This originally started as a homegrown Python script that also converted the contents to Markdown. My theory then was to see if we could switch from MediaWiki to Ikiwiki, but it took so long to implement that I never completed the work.

When someone had the weird idea of renaming a page to some impossibly long name on the wiki, my script broke. I tried to fix it and then remembered I also had a mirror running using the Git remote. It turns out it broke on the same issue, and that got me looking into the remote again. I got lost in a zillion issues, including fixing that specific one, but I especially looked at the possibility of fetching all namespaces, because I realized that the remote fetches only part of the wiki by default. That drove me to submit namespace support as a patch to the git mailing list. Finally, the discussion came back to how to actually maintain that contrib: in git core or outside? It looks like I'll be doing some maintenance of that project outside of git, as I was granted access to the GitHub organisation...

Galore Yak Shaving

Then there's the usual hodgepodge of fixes and random things I did over the month.

There is no [web extension] only XUL! - Inside joke

Steve Kemp: Possibly retiring

2 November, 2017 - 05:00

For the past few years I've hosted a service for spam-testing blog/forum comments, and I think it is on the verge of being retired.

The service presented a simple API for deciding whether an incoming blog/forum comment was SPAM, in real-time. I used it myself for two real reasons:

  • For the Debian Administration website.
    • Which is now retired.
  • For my blog
    • Which still sees a lot of spam comments, but they are easy to deal with because I can execute Lua scripts in my mail client

As a result of the Debian-Administration server cleanup I'm still in the process of tidying up virtual machines and servers. It crossed my mind that retiring this spam-service would allow me to free up another host.

Initially the service was coded in Perl using XML-RPC. The current version of the software, version 2, is written as a node.js service, and despite its async nature it is still too heavyweight to live on the host which runs most of my other websites.

It was suggested to me that rewriting it in golang might allow it to process more requests, with fewer resources, so I started reimplementing the service in golang at 4AM this morning:

The service does the minimum:

  • Receives incoming HTTP POSTS
  • Decodes the body to a struct
  • Loops over that struct and calls each "plugin" to process it.
    • If any plugin decides this is spam, it returns that result.
  • Otherwise, once all plugins have run, it decides the result is "OK".
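The flow above can be sketched roughly like this (hypothetical names and types; the real service's API will differ):

```go
package main

import (
	"fmt"
	"strings"
)

// Comment is a hypothetical decoded POST body.
type Comment struct {
	Name, Email, Body string
}

// Plugin inspects a comment and reports whether it is spam.
type Plugin func(Comment) (spam bool, reason string)

// htmlStart is a toy plugin: flag comments that open with an HTML tag.
func htmlStart(c Comment) (bool, string) {
	if strings.HasPrefix(c.Body, "<") {
		return true, "starts with HTML"
	}
	return false, ""
}

// classify runs each plugin in turn; the first plugin that flags the
// comment short-circuits the loop, otherwise the verdict is "OK".
func classify(c Comment, plugins []Plugin) string {
	for _, p := range plugins {
		if spam, reason := p(c); spam {
			return "SPAM: " + reason
		}
	}
	return "OK"
}

func main() {
	plugins := []Plugin{htmlStart}
	fmt.Println(classify(Comment{Body: "<a href=\"...\">cheap pills</a>"}, plugins))
	fmt.Println(classify(Comment{Body: "nice post"}, plugins))
}
```

The short-circuit on the first positive verdict is what keeps the happy path cheap: most plugins never run for obvious spam.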

I've ported several plugins, I've got 100% test-coverage of those plugins, and the service seems to be faster than the node.js version - so there is hope.

Of course the real test will be when it is deployed for real. If it holds up for a few days I'll leave it running. Otherwise the retirement notice I placed on the website, which chances are nobody will see, will be true.

The missing feature at the moment is keeping track of the count of spam comments rejected/accepted on a per-site basis. Losing that information might be a shame, but I think I'm willing to live with it, if the alternative is closing down.

Jonathan Dowland: In defence of "Thought for the Day"

1 November, 2017 - 22:02

BBC Radio 4's long-running "Thought for the Day" has been in the news this week, as several presenters for the Today Programme have criticised the segment in an interview with Radio Times magazine.

One facet of the criticism was whether the BBC should be broadcasting three minutes of religious content daily when more than half of the population are atheist.

I'm an atheist and in my day-to-day life I have almost zero interaction with people of faith, certainly none where faith is a topic of conversation. However when I was an undergrad at Durham, I was a member of St John's College which has a Christian/Anglican/Evangelical heritage, and I met a lot of religious friends during my time there.

What I find a little disturbing about the lack of faithful people in my day-to-day life, compared to then, is how it shines a light on how disjoint our society is. This has become even more apparent with the advent of the "filter bubble" and how irreconcilable factions are around topics like Brexit, Trump, etc.

For these reasons I appreciate Thought for the Day and hearing voices from communities that I normally have little to do with. I can agree with the complaints about the lack of diversity, and I particularly enjoy hearing Thoughts from Muslims, Sikhs, Hindus, Buddhists, and such.

Another criticism levelled against the segment was that it can be "preachy". I haven't found that myself. I get the impression that most of the monologues are carefully constructed to be as unpreaching as possible. I can usually appreciate the ethical content of the talks, without having to buy into the faith aspect.

Interestingly the current principal of St John's College, David Wilkinson, has written his own response to the interview for the Radio Times.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.