Planet Debian


Vincent Fourmond: Releases 1.19.1 of Tioga and 0.13.1 of ctioga2

4 September, 2015 - 02:29
I've just released the versions 1.19.1 of Tioga and 0.13.1 of ctioga2. They both fix installation problems with recent versions of Ruby. Update as usual, though that isn't strictly necessary if you've managed to install them properly.
~ gem update tioga ctioga2

John Goerzen: There’s still a chance to save WiFi

4 September, 2015 - 02:12

You may not know it, but wifi is under assault in the USA due to proposed FCC regulations about modifications to devices with modular radios. In short, it would make it illegal for vendors to sell devices with firmware that users can replace. This is of concern to everyone, because Wifi routers are notoriously buggy and insecure. It is also of special concern to amateur radio hobbyists, due to the use of these devices in the Amateur Radio Service (FCC Part 97).

I submitted a comment to the FCC about this, which I am pasting in here. This provides a background and summary of the issues for those that are interested. Here it is:

My comment has two parts: one, the impact on the Amateur Radio service; and two, the impact on security. Both pertain primarily to the 802.11 (“Wifi”) services typically operating under Part 15.

The Amateur Radio Service (FCC part 97) has long been recognized by the FCC and Congress as important to the nation. Through it, amateurs contribute to scientific knowledge, learn skills that bolster the technological competitiveness of the United States, and save lives through their extensive involvement in disaster response.

Certain segments of the 2.4GHz and 5GHz Wifi bands authorized under FCC Part 15 also fall under the frequencies available to licensed amateurs under FCC Part 97 [1].

By scrupulously following the Part 97 regulations, many amateur radio operators are taking devices originally designed for Part 15 use and modifying them for legal use under the Part 97 Amateur Radio Service. Although the uses are varied, much effort is being devoted to fault-tolerant mesh networks that provide high-speed multimedia communications in response to a disaster, even without the presence of any traditional infrastructure or Internet backbone. One such effort [2] causes users to replace the firmware on off-the-shelf Part 15 Wifi devices, reconfiguring them for proper authorized use under Part 97. This project has many vital properties, particularly the self-discovery of routes between nodes and self-healing nature of the mesh network. These features are not typically available in the firmware of normal Part 15 devices.

It should also be noted that there is presently no vendor of Wifi devices that operate under Part 97 out of the box. The only route available to amateurs is to take Part 15 devices and modify them for Part 97 use.

Amateur radio users of these services have been working for years to make sure they do not cause interference to Part 15 users [3]. One such effort takes advantage of the modular radio features of consumer Wifi gear to enable communication on frequencies that are within the Part 97 allocation, but outside of (though adjacent to) the Part 15 allocation. For instance, the chart at [1] identifies frequencies such as 2.397GHz or 5.660GHz that will never cause interference to Part 15 users because they lie entirely outside the Part 15 Wifi allocation.

If the FCC prevents the ability of consumers to modify the firmware of these devices, the following negative consequences will necessarily follow:

1) The use of high-speed multimedia or mesh networks in the Amateur Radio service will be sharply curtailed, relegated to only outdated hardware.

2) Interference between the Amateur Radio service — which may use higher power or antennas with higher gain — and Part 15 users will be expanded, because Amateur Radio service users will no longer be able to intentionally select frequencies that avoid Part 15.

3) The culture of inventiveness surrounding wireless communication will be curtailed in an important way.

Besides the impact on the Part 97 Amateur Radio Service, I also wish to comment on the impact on end-user security. There has been a terrible slew of high-profile situations where very popular consumer Wifi devices have had incredible security holes. Vendors have often been unwilling to address these issues [4].

Michael Horowitz maintains a website tracking security bugs in consumer wifi routers [5]. Sadly these bugs are both severe and commonplace. Within just the last month, various popular routers have been found vulnerable to remote hacking [6] and used as platforms for launching Distributed Denial-of-Service (DDoS) attacks [7]. These issues impacted multiple models from multiple vendors. To make matters worse, most of these issues should never have happened in the first place, and were largely the result of carelessness or cluelessness on the part of manufacturers.

Consumers should not be at the mercy of vendors to fix their own networks, nor should they be required to trust unauditable systems. There are many well-regarded efforts to provide better firmware for Wifi devices, which still keep them operating under Part 15 restrictions. One is OpenWRT [8], which supports a wide variety of devices with a system built upon a solid Linux base.

Please keep control of our devices in the hands of consumers and amateurs, for the benefit of all.

Petter Reinholdtsen: Book cover for the Free Culture book finally done

4 September, 2015 - 02:00

Creating a good-looking book cover proved harder than I expected. I wanted to create a cover looking similar to the original cover of the Free Culture book we are translating to Norwegian, and I wanted it in vector format for high-resolution printing. But my Inkscape knowledge was not nearly good enough to pull that off.

But thanks to the great Inkscape community, I was able to wrap up the cover yesterday evening. I asked on the #inkscape IRC channel on Freenode for help and clues, and Marc Jeanmougin (Mc-) volunteered to try to recreate it based on the PDF of the cover from the HTML version. Not only did he create an SVG document with the original and his vector version side by side, he even provided an instruction video explaining how he did it. The instruction video is not easy to follow for an untrained Inkscape user, though: it is a recording of how he did it, and he is obviously very experienced, as the menu selections are very quick, and he mentioned on IRC that he used some keyboard shortcuts that can't be seen in the video. Still, it gives a good idea of the Inkscape operations to use to create the stripes with the embossed copyright sign in the center.

I took his SVG file, copied the vector image and re-sized it to fit on the cover I was drawing. I am happy with the end result, and the current English version looks like this:

I am not quite sure about the text on the back, but I guess it will do. I picked three quotes from the official site for the book, and hope they will work to trigger the interest of potential readers. The Norwegian cover will look the same, but with the texts and bar code replaced with the Norwegian versions.

The book is very close to being ready for publication, and I expect to upload the final draft to Lulu in the next few days and order a final proofreading copy to verify that everything looks like it should before allowing everyone to order their own copy of Free Culture, in English or Norwegian Bokmål. I'm waiting to give the productive proofreaders a chance to complete their work.

Steinar H. Gunderson: Intel GPU memory bandwidth

3 September, 2015 - 17:30

Two days ago, I commented that I was seeing only 1/10th or so of the theoretical bandwidth my Intel GPU should have been able to push, and asked if anyone could help me figure out the discrepancy. Now, several people (including the helpful people at #intel-gfx) have helped me understand more of the complete picture, so I thought I'd share:

First of all, my application was pushing (more or less) a 1024x576 texture from one FBO to another, in fp16 RGBA, so eight bytes per pixel. This was measured to take 1.3 ms (well, sometimes longer and sometimes shorter); 1024x576x8 bytes / 1.3 ms = 3.6 GB/sec. Given that the spec sheet for my CPU says 25.6 GB/sec, that's the basis for my “about 1/10th” number. (There's no separate number for the GPU bandwidth, since the CPU and GPU share the memory subsystem and even the bottom level of the cache.)

But it turns out these numbers are not bidirectional as I thought they'd be; they cover both read and write. So I need to include the write bandwidth in the equation as well (and I was writing to a 1280x720 framebuffer). With that factored in, the number goes up to 9.3 GB/sec. In synthetic benchmarks, I was able to push this to 9.8, but no further.

So we're still off by a bit more than a factor of two. But lo and behold, the quoted number assumes dual memory channels—and Lenovo has only fitted the X240 with a single RAM chip, with no possibility of adding more! So the theoretical number is 12.8 GB/sec, not 25.6. Reaching ~75% of the theoretical memory bandwidth is definitely within what I'd say is reasonable.
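As a sanity check, here is the whole back-of-the-envelope calculation in one line (using the texture, framebuffer and timing figures quoted above):
$ python3 -c 'print(round((1024*576*8 + 1280*720*8) / 1.3e-3 / 1e9, 1))'
9.3
That is the 9.3 GB/sec figure, to be compared against the single-channel 12.8 GB/sec.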

So, to sum up: Me neglecting to count the writes, and Lenovo designing the X240 with a memory subsystem reminiscent of the Amiga. Thanks to all developers!

Luca Falavigna: Resource control with systemd

2 September, 2015 - 23:31

I’m receiving more requests for upload accounts to the Deb-o-Matic servers lately (yay!), but that means the resources need to be monitored and shared between the build daemons to prevent server lockups.

My servers are running systemd, so I decided to give systemd.resource-control a try. My goal was to assign lower CPU shares to the build processes (debomatic itself, sbuild, and all the related tools), in order to avoid blocking other important system services from being spawned when necessary.

I created a new slice, and set a lower CPU share weight:
$ cat /etc/systemd/system/debomatic.slice
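A minimal sketch of what such a slice unit contains, assuming CPUShares=512 to mirror the value later applied to user.slice:
[Slice]
CPUShares=512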

Then, I assigned the slice to the service unit file controlling the debomatic daemons by adding the Slice=debomatic.slice option to the [Service] section.

That was not enough, though, as some processes were assigned to the user slice instead, which groups all the processes spawned by users:

This is probably because schroot spawns a login shell, and systemd considers it to belong to a different process group. So, I had to launch the command systemctl set-property user.slice CPUShares=512, so that all processes belonging to user.slice receive the same share as the debomatic ones. I consider this a workaround, and I'm open to suggestions on how to solve this issue properly :)

I’ll try to explore more options in the coming days, so I can improve my knowledge of systemd a little bit more :)

Steinar H. Gunderson: Intensity Shuttle Linux driver

2 September, 2015 - 04:54

I've released bmusb, a free driver for the Intensity Shuttle, a $199 HDMI/component/S-video/composite capture card. (They also have SDI versions that are somewhat more expensive, but I don't have those and haven't tested.) Unfortunately newer firmwares have blocked out 1080p60, but I've done stable 720p60 capture on my X240; for instance, here's a proof-of-concept video of capture from my PS3 being sent through Movit, my library for high-quality, high-performance video filters.

On a related note, does anyone know if Intel's GPU division has a devrel point of contact, like ATI and NVIDIA have? I'm having problems with my Haswell (gen6) mobile GPU getting only 1/10 or so of the main memory bandwidth the specs promise (3 GB/sec vs. 25.6 GB/sec), and I don't really understand why—the preliminary analysis is that it somehow can't deal with high latency.

Enrico Zini: if-you-know-a-browser-developer

1 September, 2015 - 22:25
If you happen to know a browser developer...

Do you happen to know a developer of Firefox or Chrome or some other mainstream browser?

If so, can you please talk to them about our experiments with Client Certificate authentication in Debian?

Client Certificate authentication rocks; with just a couple of little tweaks in the interface, it would be pretty close to perfect.

Visiting sites without using a certificate

If I want to browse a site unauthenticated instead of using a certificate, at the moment I can hit "Cancel" on the certificate popup menu, and it works nicely. I feel quite confused when I do that, though, because it's not clear to me if I am canceling use of certificates, or canceling the visit to the site.

Can you please change the wording on the Cancel button to something more descriptive?

See/change current certificate selection

My top wish is, once I choose to use (or not use) a certificate for a site, to be able to see which certificate I'm using and possibly change it.

So far I have not found a way to see which certificate I'm using, and the browser will remember the choice until it gets closed and reopened.

At the moment I can use a Private or Incognito window to switch identities or to stop authenticated access and continue anonymously, and that helps me immensely.

I think, however, that the ultimate solution could be to have the https padlock popup show an indication of which certificate is currently being used, and offer a way to re-trigger certificate selection. That would be so cool.

Also, once the certificate choice can be seen and changed at any time, it could just get remembered so that sites can be visited again without any prompts, even after the browser has been closed and reopened. That would be, to me, the ultimate convenience.

Thanks! <3

Thank you very much for all the work you have already put into this: I have been told that a few years ago using client certificates was unthinkable, and now it seems to be down to just a couple of papercuts. And SPKAC/keygen seriously rocks!

I have been constantly impressed by how well this all works right now.

Lunar: Reproducible builds: week 18 in Stretch cycle

1 September, 2015 - 19:51

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Bdale Garbee uploaded tar/1.28-1 which includes the --clamp-mtime option. Patch by Lunar.

Aurélien Jarno uploaded glibc/2.21-0experimental1, which will fix the issue where locales-all did not behave exactly like locales despite having it in the Provides field.

Lunar rebased the pu/reproducible_builds branch for dpkg on top of the released 1.18.2. This made visible an issue with udebs and automatically generated debug packages.

The summary from the meeting at DebConf15 between ftpmasters, dpkg maintainers and reproducible builds folks has been posted to the relevant mailing lists.

Packages fixed

The following 70 packages became reproducible due to changes in their build dependencies: activemq-activeio, async-http-client, classworlds, clirr, compress-lzf, dbus-c++, felix-bundlerepository, felix-framework, felix-gogo-command, felix-gogo-runtime, felix-gogo-shell, felix-main, felix-shell-tui, felix-shell, findbugs-bcel, gco, gdebi, gecode, geronimo-ejb-3.2-spec, git-repair, gmetric4j, gs-collections, hawtbuf, hawtdispatch, jack-tools, jackson-dataformat-cbor, jackson-dataformat-yaml, jackson-module-jaxb-annotations, jmxetric, json-simple, kryo-serializers, lhapdf, libccrtp, libclaw, libcommoncpp2, libftdi1, libjboss-marshalling-java, libmimic, libphysfs, libxstream-java, limereg, maven-debian-helper, maven-filtering, maven-invoker, mochiweb, mongo-java-driver, mqtt-client, netty-3.9, openhft-chronicle-queue, openhft-compiler, openhft-lang, pavucontrol, plexus-ant-factory, plexus-archiver, plexus-bsh-factory, plexus-cdc, plexus-classworlds2, plexus-component-metadata, plexus-container-default, plexus-io, pytone, scolasync, sisu-ioc, snappy-java, spatial4j-0.4, tika, treeline, wss4j, xtalk, zshdb.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #797027 on zyne by Chris Lamb: switch to pybuild to get rid of .pyc files.
  • #797180 on python-doit by Chris Lamb: sort output when creating completion script for bash and zsh.
  • #797211 on apt-dater by Chris Lamb: fix implementation of SOURCE_DATE_EPOCH.
  • #797215 on getdns by Chris Lamb: fix call to dpkg-parsechangelog in debian/rules.
  • #797254 on splint by Chris Lamb: support SOURCE_DATE_EPOCH for version string.
  • #797296 on shiro by Chris Lamb: remove username from build string.
  • #797408 on splitpatch by Reiner Herrmann: use SOURCE_DATE_EPOCH to set manpage date.
  • #797410 on eigenbase-farrago by Reiner Herrmann: set the comment style to scm-safe, which tells ResourceGen that no timestamps should be included.
  • #797415 on apparmor by Reiner Herrmann: sort capabilities with the locale set to C.
  • #797419 on resiprocate by Reiner Herrmann: set the embedded hostname to a static value.
  • #797427 on jam by Reiner Herrmann: sorting with the locale set to C.
  • #797430 on ii-esu by Reiner Herrmann: sort source list using C locale.
  • #797431 on tatan by Reiner Herrmann: sort source list using C locale.

Chris Lamb also noticed that binaries shipped with libsilo-bin did not work.

Documentation update

Chris Lamb and Ximin Luo assembled a proper specification for SOURCE_DATE_EPOCH in the hope of convincing more upstreams to adopt it. Thanks to Holger, it is published under a non-Debian domain name.

Lunar documented the easiest way to solve issues with file ordering and timestamps in tarballs using the new options that came with tar/1.28-1.
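A sketch of the kind of invocation this enables (with the --sort and --clamp-mtime options from the tar/1.28-1 mentioned above; --clamp-mtime only lowers timestamps that are newer than the date given to --mtime):
$ tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime -cf product.tar build/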

Some examples on how to use SOURCE_DATE_EPOCH have been improved to support systems without GNU date.
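The usual portability pattern looks something like this: GNU date takes -d with an @-prefixed epoch, while BSD date takes -r with a bare epoch, so one falls back to the other (the format string here is just an example):
$ date -u -d "@$SOURCE_DATE_EPOCH" '+%Y-%m-%d' 2>/dev/null || date -u -r "$SOURCE_DATE_EPOCH" '+%Y-%m-%d'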

armhf is finally being tested, which also means the remote building of Debian packages finally works! This paves the way to perform the tests on even more architectures and doing variations on CPU and date. Some packages even produce the same binary Arch:all packages on different architectures (1, 2). (h01ger)

Tests for FreeBSD are finally running. (h01ger)

As the gcc5 transition seems to have cooled off, we again schedule sid more often than testing on amd64. (h01ger)

disorderfs has been built and installed on all build nodes (amd64 and armhf). One issue related to permissions for root and unprivileged users needs to be solved before disorderfs can be used. (h01ger)


Version 0.011-1 of strip-nondeterminism has been released on August 29th. The new version updates dh_strip_nondeterminism to match recent changes in debhelper. (Andrew Ayer)


disorderfs, the new FUSE filesystem to ease testing of filesystem-related variations, is now almost ready to be used. Version 0.2.0 adds support for extended attributes. Since then Andrew Ayer also added support to reverse directory entries instead of shuffling them, and arbitrary padding to the number of blocks used by files.

Package reviews

142 reviews have been removed, 48 added and 259 updated this week.

Santiago Vila renamed the not_using_dh_builddeb issue to varying_mtimes_in_data_tar_gz_or_control_tar_gz to align better with other tag names.

New issue identified this week: random_order_in_python_doit_completion.

37 FTBFS issues have been reported by Chris West (Faux) and Chris Lamb.


h01ger gave a talk at FrOSCon on August 23rd. Recordings are already online.

These reports are being reviewed and enhanced every week by many people hanging out on #debian-reproducible. Huge thanks!

Raphaël Hertzog: My Free Software Activities in August 2015

1 September, 2015 - 18:49

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

  • Prepared and released DLA-301-1 fixing 2 CVEs in python-django.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 11 commits to the security tracker.

Apart from that, I also gave a talk about Debian LTS at DebConf 15 in Heidelberg and also coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings:

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That’s always good.

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support for the new “overlay” filesystem that replaces “aufs” in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian’s SSO).

While the numerous discussions I had during DebConf can’t be qualified as “work”, they certainly contribute to building up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested in the “Debian in the corporate IT” BoF led by Michael Meskes (Credativ’s CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian, and we will try to do something about this. Expect further news and discussions on the topic.

Martin Krafft, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform the Debian Partners program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is moving forward again.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1, fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. This is in line with the new requirements of the release managers: a package in unstable should migrate to testing reasonably quickly, and it’s not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican, so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected, and I filed no fewer than 3 bugs upstream (new bash-completion install path, request to provide sources of a minified javascript file, drop a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It’s not a new issue, but I decided that it was time to report it upstream, so I did: #2079. Some research helped me find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key; again it was already reported, but there was no clear analysis, so I tried to do my own and added the results of my investigation in #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is started before the DBUS session. The simple fix is to restart the gpg-agent in the session, but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn’t solve the issue for users of other init systems, so it’s not satisfying).

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223); go grab them and make your first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut, who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It’s working nicely, and this authentication scheme is far easier to support. Good job, Enrico!

The tracker broke during DebConf: it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 by xz. He dropped the old Packages.bz2 but missed some Sources.bz2, which were thus stale, and APT reported “Hashsum mismatch” on the uncompressed content.

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously “aufs” was hardcoded).
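One way such detection can be done (a sketch, not necessarily what the formulas actually do) is to ask the kernel which filesystems it supports after attempting to load the module:
$ sudo modprobe overlay 2>/dev/null; grep -qw overlay /proc/filesystems && echo overlay || echo aufs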


See you next month for a new summary of my activities.


Bits from Debian: New Debian Developers and Maintainers (July and August 2015)

1 September, 2015 - 18:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller


Russ Allbery: Review: Kanban

1 September, 2015 - 11:06

Review: Kanban, by David J. Anderson

Publisher: Blue Hole
Copyright: 2010
ISBN: 0-9845214-0-2
Format: Trade paperback
Pages: 240

Another belated review, this time of a borrowed book. Which I can now finally return! A delay in the review of this book might be a feature if I had actually used it for, as the subtitle puts it, successful evolutionary change in my technology business. Sadly, I haven't, so it's just late.

So, my background: I've done a lot of variations of traditional project management for IT projects (both development and more operational ones), both as a participant and as a project manager. (Although I've never done the latter as a full-time job, and have no desire to do so.) A while back at Stanford, my team adopted Agile, specifically Scrum, so I did a fair bit of research about Scrum including a couple of training courses. Since then, at Dropbox, I've used a few different variations on a vaguely Agile-inspired planning process, although it's not particularly consistent with any one system.

I've been hearing about Kanban for a while and have friends who swear by it, but I only had a vague idea of how it worked. That seemed like a good excuse to read a book.

And Anderson's book is a good one if, like me, you're looking for a basic introduction. It opens with a basic description and definition, talks about motivation and the expected benefits, and then provides a detailed description of Kanban as a system. The tone is on the theory side, written using the terminology of (I presume) management theory and operations management, areas about which I know almost nothing, but the terminology wasn't so heavy as to make the book hard to read. Anderson goes into lots of detail, and I thought he did a good job of distinguishing between basic principles, optional techniques, and variations that may be appropriate for particular environments.

If you're also not familiar, the basic concept of Kanban is to organize work using an in-progress queue. It eschews time-bounded planning entirely in favor of staging work at the start of a sequence of queues and letting the people working on each queue pull from the previous queue when they're ready to take on a new task. As you might guess from that layout, Kanban was originally invented for assembly-line manufacturing (at Toyota). That was one of the problems that I had with it, or at least the presentation in this book: most of my software development doesn't involve finishing one part of something and handing it off to someone else, which made it hard to identify with the pipeline model. Anderson has clearly spent a lot of time working with large-scale programming shops, including outsourced development firms, with dedicated QA and operations roles. This is not at all how Silicon Valley agile development works, so parts of this book felt like missives from a foreign country.

That said, the key point of Kanban is not the assembly line but the work limit. One of the defining characteristics of Kanban, at least as Anderson presents it, is that one does not try to estimate work and schedule it based on those estimates, not even to the extent that Scrum and other Agile methodologies do within the sprint. Instead, each person takes as long as they take on the things they're working on, and pulls a new task when they have free capacity. The system instead puts a limit on how many things they can have in progress at a time. The problem of pipeline stalls is dealt with via continuous improvement, via "swarming" a problem to unblock the line (since other teams may not be able to work until the block is fixed), and by being careful about the sizing of work items (I'm used to calling them stories) that go in the front end.

Predictability, which Scrum uses story sizing and team velocity analysis to try to achieve, is basically statistical. One uses a small number of story-size buckets, and the whole pipeline will finish some number of work items per unit time, with a variance. The promise made to clients and other teams is that some percentage of the work items will be finished within some time frame from when they enter the system. Most importantly, these are all descriptive properties, determined statistically after the fact, rather than planned properties worked out through story sizing and extensive team discussion. If you, like me, are pretty thoroughly sick of two-hour sprint planning meetings and endless sizing exercises, this is rather appealing.

My problem with most work planning systems is that I think they overplan and put too much weight on our ability to estimate how long work will take. Kanban is very appealing when viewed through that lens: it gives up on things we're bad at in favor of simple measurement and building a system with enough slack that it can handle work of various different sizes. As mentioned at the top, I haven't had a chance to try it (and I'm not sure it's a good fit with the inter-group planning methods in use at my current workplace), but I came away from this book wanting to do so.

If, like me, your experience is mostly with small combined teams or individual programming work, Anderson's examples may seem rather large, rather formal, and bureaucratic. But this is a solid introduction and well worth reading if your only experience with Agile is sprint planning, story writing and sizing, and fast iterations.

Rating: 7 out of 10

Norbert Preining: PiwigoPress release 2.31

1 September, 2015 - 07:36

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates new features for the sidebar widget, and better interoperability with some Piwigo galleries.

The new features are:

  • Selection of images: Up to now, images for the widget were selected at random. The current version allows selecting images either at random (the default, as before) or in ascending or descending order by various criteria (upload date, availability time, id, name, etc.). With this change it is now possible to always display the most recent image(s) from a gallery.
  • Interoperability: Some Piwigo galleries don’t have thumbnail-sized representatives. For these galleries the widget was broken and didn’t display any image. We now check for either square or thumbnail representatives.

That’s all. Enjoy, and leave your wishlist items and complaints at the issue tracker of the github project piwigopress.

Junichi Uekawa: I've been writing an ELF parser for fun using C++ templates to see how much I can simplify.

1 September, 2015 - 04:36
I've been writing an ELF parser for fun using C++ templates to see how much I can simplify it. I've been reading the bionic loader code enough these days that I already know what it would look like in C, and I gradually converted it into C++, but it's nevertheless fun to have a pet project that kind of grows.

Matthew Garrett: Working with the kernel keyring

1 September, 2015 - 00:18
The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
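To decode these masks: each of the four bytes covers one category (possessor, user, group and other, from most significant to least), and within a byte the bits are view (0x01), read (0x02), write (0x04), search (0x08), link (0x10) and setattr (0x20), 0x3f being all of them. So:
0x3f3f0000: possessor all, user all, group none, other none
0x3f010000: possessor all, user view only, group none, other none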

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.


Mart&iacute;n Ferrari: Romania

31 August, 2015 - 13:59

It's been over 2 years since I decided to start a new, nomadic life. I had the idea of blogging about this experience as it happened, but not only am I incredibly lazy when it comes to writing, most of the time I have been too busy just enjoying this lifestyle!

The TL;DR version of these last 2 years:

  • Lounged a bit in Ireland after leaving work, went on a great road trip along the West coast.
  • Lived in Nice 3 months, back in the same house I lived between 2009 and 2010.
    • During that time, my dad visited and I took him for a trip around Northern Italy, the Côte d'Azur and Paris; then I travelled to DebConf in Switzerland, visited Gregor in Innsbruck, and travelled back to Nice by train, crossing the Alps and a big chunk of Italy.
  • Then went to Buenos Aires for 3 months (mom was very happy).
  • Back in Europe, attended Fosdem, and moved to Barcelona for 3 months; so far, the best city I ever lived in!
  • Went back to Dublin for a while, ended up staying over 8 months, including getting a temporary job at a normal office (booo!).
    • Although one of these months I spent travelling in the States (meh).
    • And of course, many more short trips, including a couple of visits to Barcelona, Lille, Nice, and of course Brussels for Fosdem.
  • Again went to Buenos Aires, only 2 months this time.
  • Another month in Dublin, then holidays visiting my friends in Lisbon, wedding in Oviedo, and a road trip around Asturias and Galicia.
  • From Spain I flew to Prague and stayed for 2 months (definitely not enough).
  • Quick trip to Dublin, then CCC camp and DebConf in Germany.

And now, I am in Cluj-Napoca, Romania.

View from my window


Mart&iacute;n Ferrari: IkiWiki

31 August, 2015 - 11:55

I haven't posted in a very long time. Not only because I suck at this, but also because IkiWiki decided to stop working with OpenID, so I can't use the web interface to post any more. Very annoying.

I've already spent a good deal of time trying to find a solution, without any success. I really don't want to migrate to other software again, but this is becoming a showstopper for me.


Russ Allbery: Review: Through Struggle, The Stars

31 August, 2015 - 10:54

Review: Through Struggle, The Stars, by John J. Lumpkin

Series: Human Reach #1
Publisher: John J. Lumpkin
Copyright: July 2011
ISBN: 1-4611-9544-6
Format: Kindle
Pages: 429

Never let it be said that I don't read military SF. However, it can be said that I read books and then get hellishly busy and don't review them for months. So we'll see if I can remember this well enough to review it properly.

In Lumpkin's future world, mankind has spread to the stars using gate technology, colonizing multiple worlds. However, unlike most science fiction futures of this type, it's not all about the United States, or even the United States and Russia. The great human powers are China and Japan, with the United States relegated to a distant third. The US mostly maintains its independence from either, and joins the other lesser nations and UN coalitions to try to pick up a few colonies of its own. That's the context in which Neil and Rand join the armed services: the former as a pilot in training, and the latter as an army grunt.

This is military SF, so of course a war breaks out. But it's a war between Japan and China: improved starship technology and the most sophisticated manufacturing in the world against a much larger economy with more resources and a larger starting military. For reasons that are not immediately clear, and become a plot point later on, the United States president immediately takes an aggressive tone towards China and pushes the country towards joining the war on the side of Japan.

Most of this book is told from Neil's perspective, following his military career during the outbreak of war. His plans to become a pilot get derailed as he gets entangled with US intelligence agents (and a bad supervisor). The US forces are not in a good place against China, struggling when they get into ship-to-ship combat, and Neil's ship goes on a covert mission to attempt to complicate the war with political factors. Meanwhile, Rand tries to help fight off a Chinese invasion of one of the rare US colony worlds.

Through Struggle, The Stars does not move quickly. It's over 400 pages, and it's a bit surprising how little happens. What it does instead is focus on how the military world and the war feels to Neil: the psychological experience of wanting to serve your country but questioning its decisions, the struggle of working with people who aren't always competent but who you can't just make go away, the complexities of choosing a career path when all the choices are fraught with politics that you didn't expect to be involved in, and the sheer amount of luck and random events involved in the progression of one's career. I found this surprisingly engrossing despite the slow pace, mostly because of how true to life it feels. War is not a never-ending set of battles. Life in a military ship has moments when everything is happening far too quickly, but other moments when not much happens for a long time. Lumpkin does a great job of reflecting that.

Unfortunately, I thought there were two significant flaws, one of which means I probably won't seek out further books in this series.

First, one middle portion of the book switches away from Neil to follow Rand instead. The first part of that involves the details of fighting orbiting ships with ground-based lasers, which was moderately interesting. (All the technological details of space combat are interesting and thoughtfully handled, although I'm not the sort of reader who will notice more subtle flaws. But David Weber this isn't, thankfully.) But then it turns into a fairly generic armed resistance story, which I found rather boring.

It also ties into the second and more serious flaw: the villain. The overall story is constructed such that it doesn't need a personal villain. It's about the intersection of the military and politics, and a war that may be ill-conceived but that is being fought anyway. That's plenty of conflict for the story, at least in my opinion. But Lumpkin chose to introduce a specific, named Chinese character in the villain role, and the characterization is... well.

After he's humiliated early in the story by the protagonists, Li Xiao develops an obsession with killing them, for his honor, and then pursues them throughout the book in ways that are sometimes destructive to the Chinese war efforts. It's badly unrealistic compared to the tone of realism taken by the rest of the story. Even if someone did become this deranged, it's bizarre that a professional military (and China's forces are otherwise portrayed as fairly professional) would allow this. Li reads like pure caricature, and despite being moderately competent apart from his inexplicable (but constantly repeated) obsession, is cast in a mustache-twirling role of personal villainy. This is weirdly out of place in the novel, and started irritating me enough that it took me out of the story.

Through Struggle, The Stars is the first book of a series, and does not resolve much by the end of the novel. That plus its length makes the story somewhat unsatisfying. I liked Neil's development, and liked him as a character, and those who like the details of combat mixed with front-lines speculation about politics may enjoy this. But a badly-simplified, mustache-twirling villain and some extended, uninteresting bits mar the book enough that I probably won't seek out the sequels.

Followed by The Desert of Stars.

Rating: 6 out of 10

Andrew Cater: Rescuing a Windows 10 failed install using GParted Live on CD

31 August, 2015 - 03:09
Windows 10 is here, for better or worse. As the family sysadmin, I've been tasked to update the Windows machines: ultimately, failure modes are not well documented and I needed Free software and several hours to recover a vital machine.

The "free upgrade for users of prior Windows versions" is a limited time offer for a year from launch. Microsoft do not offer licence keys for the upgrade: once a machine has updated to Windows 10 and authenticated on the 'Net, then a machine can be re-installed and will be regarded by Microsoft as pre-authorised. Users don't get the key at any point.

Although Microsoft have pushed the fact that this can be done through Windows Update, there is the option to use Microsoft's Media Creation tool to do the upgrade directly on the Windows machine concerned. This would be necessary to get the machine to upgrade and register before a full clean install of Windows 10 from media.

This Media Creation Tool failed for me on three machines with "Unable to access System reserved partition".

All the machines have SSDs from various makers: a colleague suggested that resizing the partition might enable the upgrade to continue. Of the machines that failed, all were running Windows 7 - two booting via BIOS, one via UEFI on a machine with no Legacy/CSM mode.

Using the GParted Live .iso - itself based on Debian Live - allowed me to resize the System partition from 100 MiB to 200 MiB by moving the Windows partition, but Windows then became unbootable.

In two cases, I was able to boot from DVD Windows installation media and make Windows bootable again, at which point the Microsoft Media Creation Tool could be used to install Windows 10.

The UEFI boot machine proved more problematic: I had to create a Windows 7 System Repair disk and repair Windows booting before Windows 10 could proceed.

My Windows-using colleague had used only Windows-based recovery disks: using Linux tools allowed me to repair Windows installations I couldn't boot.

Antonio Terceiro: DebConf15, testing debian packages, and packaging the free software web

31 August, 2015 - 02:12

This is my August update, and by far the coolest thing in it is DebConf.


I don’t get tired of saying it is the best conference I ever attended. First, it’s a mix of meeting both new people and old friends, and having the chance to chat with people whose work you admire but never had the chance to meet before. Second, it’s always quality time: an informal environment, interesting and constructive presentations and discussions.

This year the venue was again very nice. Another thing that was very nice was having so many kids and families. This was no coincidence, since this was the first DebConf in which there was organized childcare. As the community gets older, this is a very good way of keeping those who start having kids from being alienated from the community. Of course, not being a parent yet, I have no idea how hard it actually is to bring small kids to a conference like DebConf. ;-)

I presented two talks:

  • Tutorial: Functional Testing of Debian packages, where I introduced the basic concepts of DEP-8/autopkgtest and went over several examples from my packages, giving tips and tricks on how to write functional tests for Debian packages (a minimal sketch of the test format follows after this list).
  • Packaging the Free Software Web for the end user, where I presented the motivation for, and the current state of shak, a project I am working on to make it trivial for end users to install server side applications in Debian. I spent quite some hacking time during DebConf finishing a prototype of the shak web interface, which was demonstrated live in the talk (of course, as usual with live demos, not everything worked :-)).
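For those who have never seen DEP-8: tests are declared in debian/tests/control, and each named test is an executable that must fail loudly on error. A minimal, hypothetical example (the package and test names are invented for illustration):

# debian/tests/control
Tests: smoke
Depends: @

# debian/tests/smoke
#!/bin/sh
set -e
# "Depends: @" installs every binary package built from this source;
# the smoke test then simply checks that the installed tool runs.
mypackage --version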

There was also the now traditional Ruby BoF, where we discussed the state and future of the Ruby ecosystem in Debian, and an impromptu Ruby packaging workshop where we introduced the basics of packaging in general, and of Ruby packaging specifically.

Besides shak, I was able to hack on a few cool things during DebConf:

  • debci has been updated with a first version of the code to produce britney hints files that block packages that fail their tests from migrating to testing. There are some issues to be sorted out together with the release team to make sure we don’t block packages unnecessarily, e.g. we don’t want to block packages that never passed their test suite — most likely the test suite, and not the package, is broken in those cases.
  • while hacking I ended up updating jquery to the newest version in the 1.x series, and in fact adopting it, I guess. This allowed me to drop the embedded jquery copy I used to have in the shak repository, and since then I was able to improve the build to produce an output that is identical, except for a build timestamp inside a comment and a few empty lines, to the one produced by upstream, without using grunt.
Miscellaneous updates

DebConf team: DebConf15: Farewell, and thanks for all the Fisch (Posted by DebConf Team)

31 August, 2015 - 01:24

A week ago, we concluded our biggest DebConf ever! It was a huge success.

We are overwhelmed by the positive feedback, for which we’re very grateful. We want to thank you all for participating in the talks; speakers and audience alike, in person or live over the global Internet — it wouldn’t be the fantastic DebConf experience without you!

Many of our events were recorded and streamed live, and are now available for viewing, as are the slides and photos.

To share a sense of the scale of what all of us accomplished together, we’ve compiled a few statistics:

  • 555 attendees from 52 countries (including 28 kids)
  • 216 scheduled events (183 talks and workshops), of which 119 were streamed and recorded
  • 62 sponsors and partners
  • 169 people sponsored for food & accommodation
  • 79 professional and 35 corporate registrations

Our very own designer Valessio Brito made a lovely video of impressions and images of the conference.


We’re collecting impressions from attendees as well as links to press articles, including Linux Weekly News coverage of specific sessions of DebConf. If you find something not yet included, please help us by adding links to the wiki.

We tried a few new ideas this year, including a larger number of invited and featured speakers than ever before.

On the Open Weekend, some of our sponsors presented their career opportunities at our job fair, which was very well attended.

And a diverse selection of entertainment options provided the necessary breaks and ample opportunity for socialising.

On the last Friday, the Oscar-winning documentary “Citizenfour” was screened, with some introductory remarks by Jacob Appelbaum and a remote address by its director, Laura Poitras, and followed by a long Q&A session by Jacob.

DebConf15 was also the first DebConf with organised childcare (including a Teckids workshop for kids of age 8-16), which our DPL Neil McGovern standardised for the future: “it’s a thing now,” he said.

The participants used the week before the conference for intensive work, sprints and workshops, and throughout the main conference, significant progress was made on Debian and Free Software. Possibly the most visible was the endeavour to provide reproducible builds, but the planning of the next stable release “stretch” received no less attention. Groups like the Perl team, the diversity outreach programme and even DebConf organisation spent much time together discussing next steps and goals, and hundreds of commits were made to the archive, as well as bugs closed.

“DebConf15 was an amazing conference, it brought together hundreds of people, some oldtimers as well as plenty of new contributors, and we all had a great time, learning and collaborating with each other,” says Margarita Manterola of the organiser team, and continues: “The whole team worked really hard, and we are all very satisfied with the outcome.” Another organiser, Martin Krafft, adds: “We mainly provided the infrastructure and space. A lot of what happened during the two weeks was thanks to our attendees. And that’s what makes DebConf be DebConf.”

Our organisation was greatly supported by the staff of the conference venue, the Jugendherberge Heidelberg International, who didn’t take very long to identify with our diverse group, and who left no wishes untried. The venue itself was wonderfully spacious and never seemed too full as people spread naturally across the various conference rooms, the many open areas, the beergarden, the outside hacklabs and the lawn.

The network installed specifically for our conference in collaboration with the nearby university, the neighbouring zoo, and the youth hostel provided us with a 1 Gbps upstream link, which we managed to almost saturate. The connection will stay in place, leaving the youth hostel as one with possibly the fastest Internet connection in the state.

And the kitchen catered high-quality food to all attendees and their special requirements. Regional beer and wine, as well as local specialities, were provided at the bistro.

DebConf exists to bring people together, which includes paying for travel, food and accommodation for people who could not otherwise attend. We would never have been able to achieve what we did without the support of our generous sponsors, especially our Platinum Sponsor Hewlett-Packard. Thank you very much.

See you next year in Cape Town, South Africa!


Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.