Planet Debian


Simon Richter: CEC remote control with Onkyo Receivers

29 October, 2015 - 01:34

For future reference: Onkyo AV receivers are picky about the CEC logical address of the player. In a default installation of Kodi, the device type is "recording device", and the logical address is 14 -- this is rejected.

If you change the CEC device configuration, which can be found in .kodi/userdata/peripheral_data/, to use a device_type of 4 (which maps to a logical address of 4), then the receiver will start sending key events to Kodi.

You may also need to set your remote control mode to 32910, by pressing the Display button in combination with the input source selector your Pi is connected to for three seconds, and then entering the code.

This setting is not available from the GUI, so you really have to change the XML file.
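For reference, the settings file in that directory looks roughly like this. The exact filename depends on your CEC adapter, and the element and attribute names here are from memory rather than checked against a live install, so compare with your own file:

```xml
<!-- .kodi/userdata/peripheral_data/cec_*.xml (name varies by adapter) -->
<settings>
    <!-- device_type 4 corresponds to CEC logical address 4 ("playback device") -->
    <setting id="device_type" value="4" />
</settings>
```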

Next step: finding out why Kodi ignores these key presses. :(

Daniel Pocock: Free Real-Time Communications workshop in Manchester, 2 November

28 October, 2015 - 23:29

Manchester Free Software and MadLab are hosting a workshop on Free and Federated Real-Time Communications with Free Software next Monday night, 2 November from 7pm.

Add this to your calendar (iCalendar link)

Users and developers are invited to come and discover the opportunities for truly free and secure chat, voice and video with WebRTC, mobile VoIP and the free desktop.

This is an interactive workshop with practical, hands-on opportunities to explore this topic, see what works and what doesn't and learn about troubleshooting techniques to help developers like myself improve the software. This is also a great opportunity for any Java developers who would like to get some exposure to the Android platform.

People wishing to fully participate in all the activities are encouraged to bring a laptop with the Wireshark software and Android tools (especially adb, the command line utility) or if you prefer to focus on WebRTC, have the latest version of Firefox and Chrome installed.

If you are keen to run the DruCall module for WebRTC or your own SIP or XMPP server, you can try setting it up as described in the RTC Quick Start Guide and come along to the workshop with any questions you have.

A workshop near you?

Manchester has a history of great technological innovation, including the first stored-program computer, and it is a real thrill for me to offer this workshop there.

FSFE Manchester ran a workshop evaluating the performance of free software softphones back in 2012.

Over the coming months, I would like to be able to hold further workshops in other locations to get feedback from people who are trying this technology, including the Lumicall app, JSCommunicator and DruCall. If you are interested in helping organize such an event or you have any other feedback about this topic, please come and discuss it on the Free RTC mailing list.

Martin-Éric Racine: xf86-video-geode: Last call, dernier sévice

28 October, 2015 - 14:36

I guess that the time has finally come to admit that, as far as upstream development is concerned, the Geode X.Org driver is reaching retirement age:

While there have indeed been recent contributions by a number of developers to keep it compilable against recent X releases, the Geode driver has accumulated too much cruft from the Cyrix and NSC days, and it hasn't seen any active contribution from AMD in a long time. Besides, nowadays, Xserver pretty much assumes that it runs on an X driver that leverages its matching kernel driver and thus won't require root privileges to launch. This isn't the case with the Geode driver, since it directly probes FBDEV and MSR, both of which reside in /dev and require root privileges to access.

On Debian, as a stopgap measure, the package now Recommends a legacy wrapper that enforces operation as root. Meanwhile, other distributions are mercilessly dropping all X drivers that don't leverage KMS. Basically, unless a miracle happens really quickly, Geode will soon become unusable on X.

Back when AMD was still involved, a consensus had been reached that, since the Geode series doesn't offer any sort of advanced graphics capabilities, the most sensible option would indeed be to write a KMS driver and let Xserver use its generic modesetting driver on top of that, then drop the Geode X driver entirely. Amazingly enough, someone did start working on a KMS driver for Geode LX, but it never made it as far as the Linux kernel tree (additionally, Gitorious seems to be down, but I have a copy of the driver's Git tree on hand, if anyone is interested). While I'll still be accepting and merging patches to the Geode X driver, our best long-term option would be to finalize the KMS driver and have it merged into Linux ASAP.

John Goerzen: The Train to Galesburg

28 October, 2015 - 08:13

Sometimes, children are so excited you just can’t resist. Jacob and Oliver have been begging for a train trip for a while now, so Laura and I took advantage of a day off school to take them to the little town of Galesburg, IL for a couple days.

Galesburg is a special memory for me; nearly 5 years ago, it was the first time Jacob and I took an Amtrak trip somewhere, just the two of us. And, separately, Laura’s first-ever train trip had been to Galesburg to visit friends.

There was excitement in the air. I was asked to supply a bedtime story about trains — I did. On the way to the train station — in the middle of the night — there was excited jabbering about trains. Even when I woke them up, they leapt out of bed and raced downstairs, saying, “Dad, why aren’t you ready yet?”

As the train was passing through here at around 4:45AM, and we left home with some time to spare, we did our usual train trip thing of stopping at the one place open at such a time: Druber’s Donuts.

Much as Laura and I might have enjoyed some sleep once we got on the train, Jacob and Oliver weren’t having it. Way too much excitement was in the air. Jacob had his face pressed against the window much of the time, while Oliver was busy making “snake trains” from colored clay — complete with eyes.

The boys were dressed up in their train hats and engineer overalls, and Jacob kept musing about what would happen if somebody got confused and thought that he was the real engineer. When an Amtrak employee played along with that later, he was quite thrilled!

We were late enough into Galesburg that we ate lunch in the dining car. A second meal there — what fun! Here they are anxiously awaiting the announcement that the noon reservations could make their way to the dining car. Oh, and jockeying for position to see who would be first and get to do the all-important job of pushing the button to open the doors between train cars.

Even waiting for your food can be fun.

Upon arriving, we certainly couldn’t leave the train station until our train did, even though it was raining.

Right next to the train station is the Discovery Depot Children’s Museum. It was a perfect way to spend a few hours. Jacob really enjoyed the building wall, where you can assemble systems that use gravity (really a kinetic/potential energy experiment wall) to funnel rubber balls all over the place. He sticks out his tongue when he’s really thinking. Fun to watch.

Meanwhile, Oliver had a great time with the air-powered tube system, complete with several valves that can launch things through a complicated maze of transparent tubes.

They both enjoyed pretending I was injured and giving me rides in the ambulance. I was diagnosed with all sorts of maladies — a broken leg, broken nose. One time Jacob held up the pretend stethoscope to me, and I said “ribbit.” He said, “Dad, you’ve got a bad case of frog! You will be in the hospital 190 days!” Later I would make up things like “I think my gezotnix is all froibled” and I was ordered to never leave the ambulance again. He told the story of this several times.

After the museum closed, we ate supper. Keep in mind the boys had been up since the middle of the night without sleeping and were still doing quite well! They did start to look a bit drowsy — I thought Oliver was about to fall asleep, but then their food came. And at the hotel, they were perfectly happy to invent games involving jumping off the bed.

Saturday, we rode over to Peck Park. We had heard about this park from members of our church in Kansas, but oddly even the taxi drivers hadn’t ever heard of it. It’s well known as a good place to watch trains, as it has two active lines that cross each other at a rail bridge. And sure enough, in only a little while, we took in several trains.

The rest of that morning, we explored Galesburg. We visited an antique mall and museum, saw the square downtown, and checked out a few of the shops — my favorite was the Stray Cat, featuring sort of a storefront version of Etsy with people selling art from recycled objects. But that wasn’t really the boys’ thing, so we drifted out of there on our way to lunch at Baked, where we had some delicious deep-dish pizza.

After that, we still had some time to kill before getting back on the train. We discussed our options. And what do you know — we ended up back at the children’s museum. We stopped at a bakery to get the fixins for a light supper on the train, and ate a nice meal in the dining car once we got on. Then, this time, they actually slept.

Before long, it was 3AM again and time to get back off the train. Oliver was zonked out sleepy. Somehow I managed to get his coat and backpack on him despite him being totally limp, and carried him downstairs to get off the train. Pretty soon we walked to our car and drove home.

We tucked them in, and then finally tucked ourselves in. Sometimes being really tired is well worth it.

Martín Ferrari: Tales from the SRE trenches: Dev vs Ops

27 October, 2015 - 21:05

This is the second part in a series of articles about SRE, based on the talk I gave in the Romanian Association for Better Software.

In the first part, I briefly introduced what SRE is. Today, I present some concrete ways in which SRE tried to make things better, by stopping the war between developers and SysAdmins.

Dev vs Ops: the eternal battle

So, it starts by looking at the problem: how do we increase the reliability of the service? It turns out that some of the biggest sources of outages are new launches: a new feature that seemed innocuous somehow managed to bring the whole site down.

Devs want to launch, and Ops want to have a quiet weekend, and this is where the struggle begins. When launches are problematic, bureaucracy is put in place to minimise the risks: launch reviews, checklists, long-lived canaries. This is followed by development teams finding ways of side-stepping those hurdles. Nobody is happy.

One of the key aspects of SRE is to avoid this conflict completely, by changing the incentives, so these pressures between development and operations disappear. At Google, they achieve this with a few different strategies:

Have an SLA for your service

Before any service can be supported by SRE, the level of availability it must achieve to make the users and the company happy has to be determined: this is called the Service Level Agreement (SLA).

The SLA will define how availability is measured (for example, percentage of queries handled successfully in less than 50ms during the last quarter), and what the minimum acceptable value for it is (the Service Level Objective, or SLO). Note that this is a product decision, not a technical one.

This number is very important for an SRE team and its relationship with the developers. It is not taken lightly, and it should be measured and enforced rigorously (more on that later).

Only a few things on earth really require 100% availability (pacemakers, for example), and achieving really high availability is very costly. Most of us are dealing with more mundane affairs, and in the case of websites, there are many other things that fail pretty often: network glitches, OS freezes, browsers being slow, etc.

So an SLO of 100% is almost never a good idea, and in most cases it is impossible to reach. In places like Google an SLO of "five nines" (99.999%) is not uncommon, and this means that the service can't fail completely for more than 5 minutes across a whole year!
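To put numbers on that claim, here is a quick back-of-the-envelope calculation in plain Python (nothing SRE-specific, just the arithmetic):

```python
def allowed_downtime_minutes(slo, period_minutes=365 * 24 * 60):
    """Return how many minutes a service may be fully down over the
    period while still meeting the given availability SLO."""
    return (1 - slo) * period_minutes

# "Five nines" leaves barely over five minutes of total downtime per year...
print(round(allowed_downtime_minutes(0.99999), 2))  # ~5.26
# ...while "three nines" still allows under nine hours.
print(round(allowed_downtime_minutes(0.999), 1))    # ~525.6
```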

Measure and report performance against SLA/SLO

Once you have a defined SLA and SLO, it is very important that these are monitored accurately and reported constantly. If you wait for the end of the quarter to produce a hand-made report, the SLA is useless, as you only know you broke it when it is too late.

You need automated and accurate monitoring of your service level, and this means that the SLA has to be concrete and actionable. Fuzzy requirements that can't be measured are just a waste of time.

This is a very important tool for SRE, as it allows them to see the progression of the service over time, detect capacity issues before they become outages, and at the same time show how much downtime can be taken without breaking the SLA. Which brings us to one core aspect of SRE:

Use error budgets and gate launches on them

If the SLO is the minimum rate of availability, then 1 - SLO is the fraction of the time a service can fail without falling out of the SLA. This is called an error budget, and you get to use it the way you want.

If the service is flaky (e.g. it fails consistently 1 of every 10000 requests), most of that budget is just wasted and you won't have any margin for launching riskier changes.

On the other hand, a stable service that does not eat the budget away gives you the chance to bet part of it on releasing more often, and getting your new features quicker to the user.

The moment the error budget is spent, no more launches are allowed until the average goes back out of the red.
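As a purely illustrative sketch (this is not Google's actual tooling), the gating rule boils down to a comparison between measured availability and the SLO:

```python
def launch_allowed(failed_requests, total_requests, slo):
    """A launch is allowed only while measured availability stays at
    or above the SLO, i.e. the error budget is not yet spent."""
    availability = 1 - failed_requests / total_requests
    return availability >= slo

# With a 99.999% SLO, 4 failures in a million requests leaves budget...
print(launch_allowed(4, 1_000_000, 0.99999))   # True
# ...but 20 failures in a million has already overspent it.
print(launch_allowed(20, 1_000_000, 0.99999))  # False
```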

Once everyone can see how the service is performing against this agreed contract, many of the traditional sources of conflict between development and operations just disappear.

If the service is working as intended, then SRE does not need to interfere with new feature launches: SRE trusts the developers' judgement. Instead of stopping a launch because it seems risky or under-tested, hard numbers make the decision for you.

Traditionally, Devs get frustrated when they want to release, but Ops won't accept it. Ops thinks there will be problems, but it is difficult to back this feeling with hard data. This fuels resentment and distrust, and management is never pleased. Using error budgets based on already established SLAs means there is nobody to get upset at: SRE does not need to play bad cop, and SWE is free to innovate as much as they want, as long as things don't break.

At the same time, this provides a strong incentive for developers to avoid risking their budget in poorly-prepared launches, to perform staged deployments, and to make sure the error budget is not wasted by recurrent issues.

That's all for today. The next article will continue delving into how the traditional tensions between Devs and Ops play out in the SRE world.


Francesca Ciceri: Emacs

27 October, 2015 - 20:06

"Every now and then I install and try emacs, just because. Usually this happens:

(aghisla talking about editors -- quoted with permission)

Dirk Eddelbuettel: Rcpp now used by over 500 CRAN packages

27 October, 2015 - 19:02

This morning, Rcpp reached another round milestone: 501 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations, and even excluding one or two packages with an accidental declaration that do not use it). The graph on the left depicts the growth of Rcpp usage over time. And there are a full seventy more on BioConductor in its development branch (but BioConductor is not included in the chart).

Rcpp cleared 300 packages less than a year ago. It passed 400 packages this June, when I only tweeted about it (while traveling for Rcpp training at U Zuerich, the R Summit at CBS, and the fabulous useR! 2015 at U Aalborg; so no blog post). The first and less detailed part uses manually saved entries; the second half of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of user packages is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of last year, six percent this July and now stand at 6.77 percent, or about one in fourteen R packages.

500 user packages is very humbling, a staggering number and a big responsibility. We will try our best to keep Rcpp as performant and reliable as it has been, so that the next set of packages can rely on it---just like these 500 do.

So with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Wouter Verhelst: Adventures with modern electronics

27 October, 2015 - 17:25

I got me a new television last Wednesday. The previous one was still functional, but with its 20-inch display it was (by today's standards) a very small one. In itself that wasn't an issue, but combined with my rather large movie collection and the 5.1 surround set that I bought a few years ago, my living room was starting to look slightly ridiculous. The new TV is a Panasonic 40" UHD ("4K") 3D-capable so-called smart thing. Unfortunately, all did not go well.

When I initially hooked it up, I found it confusingly difficult to figure out how to make it display something useful from the HDMI input rather than from the (not yet connected) broadcast cable. This eventually turned out to be due to the fact that the HDMI input was not selectable by a button marked "AUX" or similar, but by a button marked with some pictogram on a location near the teletext controls, which was pretty much the last place I'd look for such a thing.

After crossing that bridge, I popped in the correct film for that particular day and started watching it. The first thing I noticed, however, was that something was odd with the audio. It turned out that the TV as well as the 5.1 amplifier support the CEC protocol, which allows the TV to control some functionality of the A/V receiver. Unfortunately, the defaults were set in the TV to route audio to the TV's speakers, rather than to the 5.1 amp. This was obviously wrong, and it took me well over an hour to figure out why that was happening, and how I could fix it. My first solution at that time was to disable CEC on the amplifier, so that I could override where the audio would go there. Unfortunately, that caused the audio and video to go out of sync; not very pleasant. In addition, the audio would drop out every twenty seconds or so, which if you're trying to watch a movie is horribly annoying, and eventually I popped the DVD into a machine with analog 5.1 audio and component HD video outputs; not the best quality, but at least I could stop getting annoyed about it.

Over the next few days, I managed to get the setup working better and better:

  • The audio dropping was caused by an older HDMI cable being used. I didn't know this, but apparently there are several versions of HDMI wiring, and if an older cable is used then the amount of data that can be passed over the line is not as high. Since my older TV didn't do 1080p (only 1080i) I didn't notice this before getting the new set, but changing out some of the HDMI cables fixed that issue.
  • After searching the settings a bit, I found that the TV does have a setting for making it route audio to the 5.1 amp, so I'm back to using CEC, which has several advantages.
  • The hardcoding of one particular type of video to one particular input in the 5.1 amp that I complained about in that other post does seem to have at least some merit: it turns out that this is part of the CEC as well, and so when I change from HDMI input to broadcast data, the TV will automatically switch the amp's input to the "TV" input, too. That's pretty cool, even though I had to buy yet another cable (this time a TOSLINK one) to make it work well.

There's just one thing remaining: when I go into the channel list and try to move some channels around, the TV has the audacity to tell me that it's "not allowed". I mean, I paid for it, so I get to say what's allowed, not you, thankyouverymuch. Anyway, I'm sure I'll figure that out eventually.

The TV also has some 3D capability, but unfortunately it's one of those that require active 3D glasses, so the set that I bought at the movie theatre a while ago won't work. So after spending several tens of euros on extra cabling, I'll have to spend even more on a set of 3D glasses. They'll probably be brand-specific, too. Ah well.

It's a bit odd, in my opinion, that it takes me almost a week to get all that stuff to properly work. Ten years ago, the old TV had some connections, a remote, and that was it; you hook it up and you're done. Not anymore.

Chris Lamb: ImportError: cannot import name add_to_builtins under Django 1.9

27 October, 2015 - 16:20

Whilst upgrading projects to Django 1.9, I found myself repeatedly searching for the following code snippet.

So if you also used to use add_to_builtins to avoid tedious {% load module %} calls in your templates files and are now seeing a traceback like:

Traceback (most recent call last):
  File "django/core/management/", line 324, in execute
  File "django/", line 18, in setup
  File "django/apps/", line 108, in populate
  File "django/apps/", line 202, in import_models
    self.models_module = import_module(models_module_name)
  File "/usr/lib/python2.7/importlib/", line 37, in import_module
  File "myproject/myproject/utils/", line 1, in <module>
    from django.template.base import add_to_builtins
ImportError: cannot import name add_to_builtins

... then you need to move to using the TEMPLATES setting instead.

This setting replaces a number of your existing settings, including TEMPLATE_CONTEXT_PROCESSORS, TEMPLATE_DIRS, TEMPLATE_LOADERS:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            os.path.join(BASE_DIR, 'templates'),
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                # ...
            ],
            'builtins': [
                # ...
            ],
        },
    },
]

Place the modules you previously loaded with add_to_builtins in the builtins key under OPTIONS.

You can read more in the release notes for Django 1.9, as well as read about the TEMPLATES setting generally.

Marco d'Itri: Per-process netfilter rules

27 October, 2015 - 10:02

This article documents how the traffic of specific Linux processes can be subjected to a custom firewall or routing configuration, thanks to the magic of cgroups. We will use the Network classifier cgroup, which allows tagging the packets sent by specific processes.

To create the cgroup which will be used to identify the processes I added something like this to /etc/rc.local:

mkdir /sys/fs/cgroup/net_cls/unlocator
/bin/echo 42 > /sys/fs/cgroup/net_cls/unlocator/net_cls.classid
chown md: /sys/fs/cgroup/net_cls/unlocator/tasks

The tasks file, which controls the membership of processes in a cgroup, is made writeable by my user: this way I can add new processes without becoming root. 42 is the arbitrary class identifier that the kernel will associate with the packets generated by the member processes.

A command like systemd-cgls /sys/fs/cgroup/net_cls/ can be used to explore which processes are in which cgroup.

I use a simple shell wrapper to start a shell or a new program as members of this cgroup:

#!/bin/sh -e

CGROUP_NAME=unlocator

if [ ! -d /sys/fs/cgroup/net_cls/$CGROUP_NAME/ ]; then
  echo "The $CGROUP_NAME net_cls cgroup does not exist!" >&2
  exit 1
fi

/bin/echo $$ > /sys/fs/cgroup/net_cls/$CGROUP_NAME/tasks

if [ $# = 0 ]; then
  exec ${SHELL:-/bin/sh}
fi

exec "$@"

My first goal is to use a special name server for the DNS queries of some processes, thanks to a second dnsmasq process which acts as a caching forwarder.





[Unit]
Description=dnsmasq - Second instance

[Service]
ExecStartPre=/usr/sbin/dnsmasq --test
ExecStart=/usr/sbin/dnsmasq --keep-in-foreground --conf-file=/etc/dnsmasq2.conf
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target


Do not forget to enable the new service:

systemctl enable dnsmasq2
systemctl start dnsmasq2

Since the cgroup match extension is not yet available in a released version of iptables, you will first need to build and install it manually:

git clone git://
cd iptables
make -k
sudo cp extensions/ /lib/xtables/
sudo chmod -x /lib/xtables/

The netfilter configuration required is very simple: all DNS traffic from the marked processes is redirected to the port of the local dnsmasq2:

iptables -t nat -A OUTPUT -m cgroup --cgroup 42 -p udp --dport 53 -j REDIRECT --to-ports 5354
iptables -t nat -A OUTPUT -m cgroup --cgroup 42 -p tcp --dport 53 -j REDIRECT --to-ports 5354

For related reasons, I also need to disable IPv6 for these processes:

ip6tables -A OUTPUT -m cgroup --cgroup 42 -j REJECT

I use a different cgroup to force some programs to use my office VPN by first setting a netfilter packet mark on their traffic:

iptables -t mangle -A OUTPUT -m cgroup --cgroup 43 -j MARK --set-mark 43

The packet mark is then used to policy-route this traffic using a dedicated VRF, i.e. routing table 43:

ip rule add fwmark 43 table 43

This VPN VRF just contains a default route for the VPN interface:

ip route add default dev tun0 table 43

Depending on your local configuration it may be a good idea to also add to the VPN VRF the routes of your local interfaces:

ip route show scope link proto kernel \
  | xargs -I ROUTE ip route add ROUTE table 43

Since the source address selection happens before the traffic is diverted to the VPN, we also need to source-NAT the marked packets to the VPN address:

iptables -t nat -A POSTROUTING -m mark --mark 43 --out-interface tun0 -j MASQUERADE

Carl Chenet: db2twitter: get data in database, build and send tweets

27 October, 2015 - 06:00

db2twitter automatically extracts fields from your database, uses them to fill a tweet template, and sends the tweet.

db2twitter is developed by and run for, the job board of the French-speaking Free Software and Open Source community.

Imagine you want to send tweets like this: MyTux hires a #Django developer

Let’s say you have a MySQL database "mytuxjobs", with a table "jobs" and fields "title" and "url".

db2twitter will need the following information:


Lets define a template for our tweet:

tweet=MyTux hires a {} {}

db2twitter gets data from the database, builds a tweet by filling the placeholders of the tweet template with that data, and sends the tweet. By default the last row of the given table is used. Cool, isn’t it?

db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.

Steve Kemp: It begins - a new mail client, with lua scripting

27 October, 2015 - 03:00

Once upon a time I wrote a mail-client, which worked in the console directly via Maildir manipulation.

My mail client was written in C++, and used Lua for scripting unlike clients such as mutt, alpine, and similar alternatives which don't have complete scripting support.

I've pondered several times whether to restart this project, but I think it is the right thing to do.

The original lumail client has a rich API, but it is very ad-hoc and random. Functions were added where they seemed like a good idea, but with no real planning, and although there are grouped functions that operate similarly there isn't a lot of consistency. The implementation is clean in places, elegant in others, and horrid in yet more parts.

This time round everything is an object, accessible to Lua, with Lua, and for Lua. This time round all the drawing-magic will be written in Lua.

So to display a list of Maildirs I create a bunch of objects, one for each Maildir, and then the Lua function Maildir.to_string is called. That function looks like this:

-- This method returns the text which is displayed when a maildir is
-- to be show in maildir-mode.
function Maildir.to_string(self)
   local total  = self:total_messages()
   local unread = self:unread_messages()
   local path   = self:path()

   local output = string.format( "[%05d / %05d] - %s", unread, total, path );

   if ( unread > 0 ) then
      output = "$[RED]" .. output
   end

   if ( string.find( output, "Automated." ) ) then
      output = strip_colour( output )
      output = "$[YELLOW]" .. output
   end

   return output
end

The end result is something that looks like this:

[00001 / 00010 ] -
[00000 / 00023 ] - Automated.root

The formatting can thus be adjusted clearly, easily, and without hacking the core of the client, provided I implement the appropriate methods on the Maildir object.

It's still work in progress. You can view maildirs, view indexes, and view messages. You cannot reply, forward, or scroll properly. That said the hard part is done now, and I'm reasonably happy with it.

The sample configuration file is a bit verbose, but a good demonstration regardless.

See the code, if you wish, online here:

Lunar: Reproducible builds: week 26 in Stretch cycle

26 October, 2015 - 23:24

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Stefano Rivera uploaded python-cffi/1.3.0-1 which makes the generated code order deterministic for anonymous unions and anonymous structs. Reported by Tristan Seligmann, and fixed upstream.

Mattia Rizzolo created a bug report to continue the discussion on storing cryptographic checksums of the installed .deb in dpkg database. This follows the discussion that happened in June and is a pre-requisite to add checksums to .buildinfo files.

Niko Tyni identified why the Vala compiler would generate code in varying order. A better patch than his initial attempt still needs to be written.

Packages fixed

The following 15 packages became reproducible due to changes in their build dependencies: alt-ergo, approx, bin-prot, caml2html, coinst, dokujclient, libapreq2, mwparserfromhell, ocsigenserver, python-cryptography, python-watchdog, slurm-llnl, tyxml, unison2.40.102, yojson.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

pbuilder has been updated to version 0.219~bpo8+1 on all eight build nodes. (Mattia Rizzolo, h01ger)

Packages that FTBFS but for which no open bugs have been recorded are now tested again after 3 days. Likewise for “depwait” packages. (h01ger)

Out of disk situations will not cause IRC notifications anymore. (h01ger)

Documentation update

Lunar continued to work on writing documentation for the future website.

Package reviews

44 reviews have been removed, 81 added and 48 updated this week.

Chris West and Chris Lamb identified 70 “fail to build from source” issues.


h01ger presented the project in Mexico City at the 3er Congreso de Seguridad de la Información. It seems there is an interest in academic papers related to reproducible builds.

Bryan has been doing hard work to improve reproducibility for OpenWrt. He wrote a report linking to the patches and test results he published.

C.J. Adams-Collier: Some statistics from the router at the cabin

26 October, 2015 - 23:23

sip0 is a GRE tunnel between the router and the colo box in Seattle, the payload of which is encapsulated as IPsec traffic before being transmitted over the Ubiquiti equipment to the switch that the CenturyLink DSL modem attaches to. I don’t get CenturyLink easter eggs in my search results when I use this interface.

eth9 is the local gigabit transceiver. It is attached directly to a SIP phone, which bridges to a gigabit switch on which the Ubiquiti equipment participates.

lo is the local loopback interface, of course. Good old 127/8

usb1 is the 100Mbit USB NIC to which the WRT54G with ssid ‘’ is attached. The vast majority of the traffic originates on this interface.

cjac@wanjet1:~$ uptime
 09:11:18 up 32 days, 18:33,  6 users,  load average: 0.00, 0.01, 0.05
cjac@wanjet1:~$ sudo ifconfig
[sudo] password for cjac: 
eth9      Link encap:Ethernet  HWaddr 00:26:b9:e3:9b:47  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::226:b9ff:fee3:9b47/64 Scope:Link
          RX packets:176060020 errors:4 dropped:12 overruns:0 frame:2
          TX packets:131573705 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:216753073131 (201.8 GiB)  TX bytes:30777442696 (28.6 GiB)
          Interrupt:20 Memory:f5400000-f5420000 

lo        Link encap:Local Loopback  
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3423 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3423 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:392714 (383.5 KiB)  TX bytes:392714 (383.5 KiB)

sip0      Link encap:UNSPEC  HWaddr 64-40-6A-02-30-30-3A-30-00-00-00-00-00-00-00-00  
          inet addr:  P-t-P:  Mask:
          inet6 addr: fe80::200:5efe:6440:6a02/64 Scope:Link
          inet6 addr: 2607:ff08:f5:1338::6440:6b02/126 Scope:Global
          RX packets:468792 errors:0 dropped:0 overruns:0 frame:0
          TX packets:297314 errors:30556 dropped:5 overruns:0 carrier:30549
          collisions:0 txqueuelen:0 
          RX bytes:512744498 (488.9 MiB)  TX bytes:21652345 (20.6 MiB)

usb1      Link encap:Ethernet  HWaddr 40:3c:fc:01:35:a5  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::423c:fcff:fe01:35a5/64 Scope:Link
          RX packets:131960908 errors:0 dropped:0 overruns:0 frame:0
          TX packets:175522396 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28447118558 (26.4 GiB)  TX bytes:216131320364 (201.2 GiB)

Arturo Borrero González: Debian meeting at Sevilla (October 2015) report

26 October, 2015 - 21:40

There was a Debian meeting at Sevilla last week.

The meeting was meant to be informal, just to get to know each other, hang out and drink some beers. There are several people here involved or interested in Debian and FLOSS, but we barely know each other in person.

The event was announced more than a month in advance on several mailing lists (Debian lists, local lists and university lists as well), and about 14 people explicitly said they were interested in the event.

You can see the wiki page I set up:

Of the 14 people, 3 got in touch before the event to say they couldn't attend. The remaining 11 were supposed to show up :-) In the end, we were 4 people, and 3 of us already knew each other beforehand.

The attendees:

  • vejeta (Juan, perhaps the main supporter of the meeting, Debian user)
  • Pablo Neira (Netfilter upstream project and Linux kernel maintainer)
  • Javier Barroso (IT architect in the local government and Debian user)
  • me (the only one directly involved in the Debian project; I'm a DM)

This is a picture of that moment:

The truth is that we had a very good afternoon, but I personally missed having some more people there.
I remember Ana Guerrero telling me that a meeting of just 4-5 people is indeed a victory :-)

We will have to try again :-)

Simon Josefsson: Combining Dnsmasq and Unbound

26 October, 2015 - 17:52

For my home office network I have been using Dnsmasq for some time. Dnsmasq provides me with DNS, DHCP, DHCPv6, and IPv6 Router Advertisement. I run dnsmasq on a Debian Jessie server, but it works similarly with OpenWrt if you want to use a smaller device. My entire /etc/dnsmasq.d/local configuration used to look like this:


Here dhcp-authoritative enables DHCP. interface=eth1 says to listen on eth1 only, which is my internal (IPv4 NAT) network. I try to keep track of the MAC addresses of all my devices in a /etc/ethers file, so I use read-ethers to have dnsmasq give stable IP addresses to them. The dhcp-range is used to enable DHCP and DHCPv6 on my internal network. The dhcp-option=option6:dns-server,[::] statement is needed to inform the DHCP clients of the DNS resolver’s IPv6 address, otherwise they would only get the IPv4 DNS server address. The enable-ra parameter enables IPv6 router advertisement on the internal network, thereby removing the need to also run radvd, useful since I prefer to use copyleft software.
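Pieced together from that description, the file presumably looked something like this (a sketch on my part; the interface name, address ranges and lease times are placeholders, not the actual values):

```
dhcp-authoritative
interface=eth1
read-ethers
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-range=::100,::1ff,constructor:eth1,12h
dhcp-option=option6:dns-server,[::]
enable-ra
```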

Recently I had a desire to use DNSSEC, and enabled it in Dnsmasq using the following statements:
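In sketch form these statements look as follows (placeholders stand in for the actual trust anchor fields, which come from IANA's root-anchors file):

```
dnssec
trust-anchor=.,<keytag>,<algorithm>,<digest type>,<digest from IANA root-anchors>
dnssec-check-unsigned
```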


The dnssec keyword enables DNSSEC validation in dnsmasq, using the indicated trust-anchor (get the root-anchors from IANA). The dnssec-check-unsigned option deserves some more discussion. The dnsmasq manpage describes it as follows:

As a default, dnsmasq does not check that unsigned DNS replies are legitimate: they are assumed to be valid and passed on (without the “authentic data” bit set, of course). This does not protect against an attacker forging unsigned replies for signed DNS zones, but it is fast. If this flag is set, dnsmasq will check the zones of unsigned replies, to ensure that unsigned replies are allowed in those zones. The cost of this is more upstream queries and slower performance.

For example, this means that dnsmasq’s DNSSEC functionality is not secure against active man-in-the-middle attacks between dnsmasq and the DNS server it is using. Even if a domain used DNSSEC properly, an attacker could fake to dnsmasq that it was unsigned, and I would potentially get incorrect values in return. We all know that the Internet is not a secure place, and your threat model should include active attackers. I believe this mode should be the default in dnsmasq, and users should have to configure dnsmasq out of it if they really want to (with the obvious security warning).

Running with this enabled for a couple of days resulted in frustration about not being able to reach a couple of domains. The behaviour was that my clients would hang indefinitely or get a SERVFAIL, both resulting in lack of service. You can enable query logging in dnsmasq with log-queries, and doing so I noticed three distinct error patterns:

jow13gw dnsmasq 460 - -  forwarded to
jow13gw dnsmasq 460 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply is BOGUS DNSKEY
jow13gw dnsmasq 547 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply is BOGUS DS
jow13gw dnsmasq 547 - -  validation result is BOGUS

The first only happened intermittently, the second did not cause any noticeable problem, and the final one was reproducible. To be fair, I only found the last example after starting to search for problem reports (see post confirming bug).

At this point, I had a confirmed bug in dnsmasq that affects my use-case. I want to use official packages from Debian on this machine, so installing newer versions manually is not an option. So I started to look into alternatives for DNS resolving, and quickly found Unbound. Installing it was easy:

apt-get install unbound

I created a local configuration file in /etc/unbound/unbound.conf.d/local.conf as follows:

server:
	interface: ::1
	interface: 2001:9b0:104:42::2
	access-control: allow
	access-control: ::1 allow
	access-control: allow
	access-control: 2001:9b0:104:42::2/64 allow
	outgoing-interface: 2001:9b0:1:1a04::2
#	log-queries: yes
#	verbosity: 2

The interface keyword determines which IP addresses to listen on; here I used the loopback interface and the local address of the physical network interface for my internal network. The access-control statements allow recursive DNS resolving from those networks. And outgoing-interface specifies my external Internet-connected interface. log-queries and/or verbosity are useful for debugging.

To make things work, dnsmasq has to stop providing DNS services. This can be achieved with the port=0 keyword; however, that will also disable informing DHCP clients about the DNS server to use, so this has to be added in manually. I ended up adding the two following lines to /etc/dnsmasq.d/local:
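Presumably the two lines were along these lines (a sketch; the address handed to IPv4 clients is a placeholder and must match the host actually running Unbound):

```
port=0
dhcp-option=6,192.168.1.2
```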


Restarting unbound and dnsmasq now leads to working (and secure) internal DNSSEC-aware name resolution over both IPv4 and IPv6. I can verify that resolution works, and that Unbound verifies signatures and rejects bad domains properly, with dig as below, or use an online DNSSEC resolver test page, although I’m not sure how confident you can be in the result from that page.

$ host has address mail is handled by 1
$ host
;; connection timed out; no servers could be reached

I use Munin to monitor my services, and I was happy to find a nice Unbound Munin plugin. I installed the file in /usr/share/munin/plugins/ and created a Munin plugin configuration file /etc/munin/plugin-conf.d/unbound as follows:

[unbound*]
user root
env.statefile /var/lib/munin-node/plugin-state/root/unbound.state
env.unbound_conf /etc/unbound/unbound.conf
env.unbound_control /usr/sbin/unbound-control
env.spoof_warn 1000
env.spoof_crit 100000

I ran munin-node-configure --shell | sh to enable it. For this to work, unbound has to be configured as well, so I created /etc/unbound/unbound.conf.d/munin.conf as follows.

server:
	extended-statistics: yes
	statistics-cumulative: no
	statistics-interval: 0
remote-control:
	control-enable: yes

The graphs may be viewed at my munin instance.

Russ Allbery: Review: Hawk

26 October, 2015 - 10:09

Review: Hawk, by Steven Brust

Series: Vlad Taltos #14
Publisher: Tor
Copyright: October 2014
ISBN: 0-7653-2444-X
Format: Hardcover
Pages: 320

This is the fourteenth book in the Vlad Taltos series (not counting the various associated books and other series); it builds directly on the long-term plot arc of the series (finally!) and is deeply entangled with Vlad's friends and former life as a Jhereg boss. As you might imagine from that introduction, this is absolutely not the place to start with this series.

For the past few books, Brust has been following a pattern of advancing the series plot in one book and then taking the next book to fill in past history or tell some side story. That means, following Tiassa, we were due some series advancement, and that's exactly what we get. We also, finally, get some more details about Lady Teldra. Nothing all that revelatory, but certainly intriguing, and more than just additional questions (at last). When Brust finally takes this gun off the wall and fires it, the resulting bits of world-building might be even better than Issola.

At its heart, though, Hawk is a caper novel. If you're like me, you're thinking "it's about time." I think this is the sort of story Brust excels at, particularly with Vlad as his protagonist. Even better, unlike some of the other multi-part novels, this is a book-length caper focused on a very important goal, and with the potential to get rid of some annoyances in Vlad's life that have lingered for rather too long. We see many of Vlad's Dragaeran friends, but (apart from Daymar) mostly in glimpses. This is Vlad's book, with heavy helpings of Loiosh.

The caper is also a nicely twisty one, involving everything from different types of magic to the inner workings of the Jhereg organization. As is typical for Vlad's schemes, there are several false fronts and fake goals, numerous unexpected twists, and a rather fun guest appearance. Oh, and lots and lots of snark, of course. I think my favorite part of the book was the interaction between Vlad and Kragar, which added a lot of emotional depth both to this story and to some of the previous stories of Vlad's life as a Jhereg. And I'm hoping that where Brust leaves things at the end of this book implies a Vlad who is more free to act, to see his friends, and to get entangled in Imperial politics, since I think that leads to the best stories.

Of course, if Brust holds to pattern, the next book will be backfill or side stories and we'll have to wait longer for a continuation of the main story. As much as I like those side stories, I'm hoping Brust will break pattern. I'm increasingly eager to see where this story will go. The all-too-brief interaction with Sethra in this book promises so much for the future.

If you like the Vlad Taltos books overall, you'll probably like this one. It's a return to the old scheming Vlad, but tempered by more experience and different stakes. There's a bit of lore, a bit of world-building, and a lot of Vlad being tricky. This series is still going strong fourteen books in.

Rating: 8 out of 10

Craig Small: ps standards and locales

26 October, 2015 - 08:52

I looked at two interesting issues today around the ps program in the procps project. One had a solution; the other I’m still puzzled about.

ps User-defined Format

Issue #9 was quite the puzzle. The output of ps changed depending on whether a different option had a hyphen before it or not.

First, the expected output

$ ps p $$ -o pid=pid,comm=comm
 pid comm
31612 bash

Next, the unusual output.

$ ps -p $$ -o pid=pid,comm=comm

The difference is that the second uses -p rather than p. The second unexpected thing is that it was designed that way: the Unix98 standard apparently permits this sort of craziness. To me it is a useless feature that will more likely than not confuse people. Within ps, depending on what sort of flags you start with, either a SysV parser or a BSD parser is used. One of them triggered the Unix98 compatibility option while the other did not, hence the change in behavior. The next version of procps will ship with a ps that has the user-defined output format of the first example. I had a look at the latest standard, IEEE 1003.1-2013, which doesn’t seem to have anything like that in it.


Short Month Length

This one has me stuck. A user reported in issue #5 that when they use their locale, columns such as start time get misaligned because their short month names are longer than 3 characters. They also mention that some other columns, for memory etc., are not long enough, but that’s a generic problem that is impossible to fix sensibly.

OK, for the month problem the fix would be to find out the short month length and set the width of those columns to that plus a bit more for the other fixed components; simple, really. Except: how do you know, for a specific locale, what the short month length is? I always assumed it was three! I haven’t found anything that provides this information. Note, I’m not looking for strlen() but for some function that gives the maximum length of the short month names (e.g. Jan, Feb, Mar etc.). This also got me thinking about how safe some of those date-to-string functions are if you have a static buffer as the destination. It’s not safe to assume the result will be “DD MMM YYYY” because there might be more Ms.


So if you know how to work out the short month name length, let me know!

Johannes Schauer: unshare without superuser privileges

26 October, 2015 - 01:44

TLDR: With the help of Helmut Grohne I finally figured out most of the bits necessary to unshare everything without becoming root (though one might say that this is still cheating because the suid root tools newuidmap and newgidmap are used). I wrote a Perl script which documents how this is done in practice. This script is nearly equivalent to using the existing commands lxc-usernsexec [opts] -- unshare [opts] -- COMMAND except that these two together cannot be used to mount a new proc. Apart from this problem, this Perl script might also be useful by itself because it is architecture independent and easily inspectable for the curious mind (it is heavily documented, at nearly 2 lines of comments per line of code on average). It can be retrieved here.

Long story: Nearly two years after my last rant about everything needing superuser privileges in Linux, I'm still interested in techniques that let me do more things without becoming root. Helmut Grohne had been telling me for a while about unshare(), or user namespaces, as the right way to have things like chroot without root. There are also reports of LXC containers working without root privileges, but they are hard to come by. A couple of days ago I had some time again, so Helmut helped me to get through the major blockers that were so far stopping me from using unshare in a meaningful way without executing everything with sudo.

My main motivation at that point was to let dpkg-buildpackage when executed by sbuild be run with an unshared network namespace and thus without network access (except for the loopback interface) because like pbuilder I wanted sbuild to enforce the rule not to access any remote resources during the build. After several evenings of investigating and doctoring at the Perl script I mentioned initially, I came to the conclusion that the only place that can unshare the network namespace without disrupting anything is schroot itself. This is because unsharing inside the chroot will fail because dpkg-buildpackage is run with non-root privileges and thus the user namespace has to be unshared. But this then will destroy all ownership information. But even if that wasn't the case, the chroot itself is unlikely to have (and also should not) tools like ip or newuidmap and newgidmap installed. Unsharing the schroot call itself also will not work. Again we first need to unshare the user namespace and then schroot will complain about wrong ownership of its configuration file /etc/schroot/schroot.conf. Luckily, when contacting Roger Leigh about this wishlist feature in bug#802849 I was told that this was already implemented in its git master \o/. So this particular problem seems to be taken care of and once the next schroot release happens, sbuild will make use of it and have unshare --net capabilities just like pbuilder already had since last year.

With the sbuild case taken care of, the rest of this post will introduce the Perl script I wrote. The name user-unshare is really arbitrary. I just needed some identifier for the git repository and a filename.

The most important discovery I made was that Debian disables unprivileged user namespaces by default with the patch add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by-default.patch to the Linux kernel. To enable them, one has to first either do

echo 1 | sudo tee /proc/sys/kernel/unprivileged_userns_clone > /dev/null

or

sudo sysctl -w kernel.unprivileged_userns_clone=1
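To make the setting persist across reboots, the same key can go into a sysctl drop-in (a sketch; the file name is arbitrary):

```
# /etc/sysctl.d/99-userns.conf
kernel.unprivileged_userns_clone = 1
```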

The tool tries to be like unshare(1) but with the power of lxc-usernsexec(1) to map more than one id into the new user namespace by using the programs newgidmap and newuidmap. Or in other words: This tool tries to be like lxc-usernsexec(1) but with the power of unshare(1) to unshare more than just the user and mount namespaces. It is nearly equal to calling:

lxc-usernsexec [opts] -- unshare [opts] -- COMMAND

Its main reasons for existence are:

  • as a project for me to learn how unprivileged namespaces work
  • written in Perl which means:
    • architecture independent (same executable on any architecture)
    • easily inspectable by other curious minds
  • tons of code comments to let others understand how things work
  • no need to install the lxc package in a minimal environment (perl itself might not be called minimal either but is present in every Debian installation)
  • not suffering from being unable to mount proc

I hoped that systemd-nspawn could do what I wanted, but it seems that its requirement for being run as root will not change any time soon.

Another tool in Debian that offers to do chroot without superuser privileges is linux-user-chroot but that one cheats by being suid root.

Had I found lxc-usernsexec earlier I would probably not have written this. But after I found it I happily used it to get an even better understanding of the matter and to further improve the comments in my code. I started writing my own tool in Perl because that's the language sbuild is written in and, as mentioned initially, I intended to use this script with sbuild. Now that the sbuild problem is taken care of, this is not so important anymore, but I like it when I can read the code of simple programs I run directly from /usr/bin without having to retrieve the source code first.

The only thing I wasn't able to figure out is how to properly mount proc into my new mount namespace. I found a workaround that works by first mounting a new proc to /proc and then bind-mounting /proc to whatever new location for proc is requested. I didn't figure out how to do this without mounting to /proc first, partly also because this doesn't work at all when using lxc-usernsexec and unshare together. In this respect, this Perl script is a bit more powerful than those two tools together. I suppose that the reason is that unshare wasn't written with being called without superuser privileges in mind. If you have an idea what could be wrong, the code has a big FIXME about this issue.

Finally, here a demonstration of what my script can do. Because of the /proc bug, lxc-usernsexec and unshare together are not able to do this but it might also be that I'm just not using these tools in the right way. The following will give you an interactive shell in an environment created from one of my sbuild chroot tarballs:

$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net \
    --uts --mount --fork -- sh -c 'ip link set lo up && ip addr && \
    hostname hoothoot-chroot && \
    tar -C /tmp/buildroot -xf /srv/chroot/unstable-amd64.tar.gz; \
    /usr/sbin/chroot /tmp/buildroot /sbin/runuser -s /bin/bash - josch && \
    umount /tmp/buildroot/proc && rm -rf /tmp/buildroot'
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ whoami
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ hostname
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ ls -lha /proc | head
total 0
dr-xr-xr-x 218 nobody nogroup    0 Oct 25 19:06 .
drwxr-xr-x  22 root   root     440 Oct  1 08:42 ..
dr-xr-xr-x   9 root   root       0 Oct 25 19:06 1
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 15
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 16
dr-xr-xr-x   9 root   root       0 Oct 25 19:06 7
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 8
dr-xr-xr-x   4 nobody nogroup    0 Oct 25 19:06 acpi
dr-xr-xr-x   6 nobody nogroup    0 Oct 25 19:06 asound

Of course, instead of running this long command, we can write a small shell script and execute that. The following does the same things as the long command above but adds some comments for further explanation:


#!/bin/sh
set -exu

# I'm using /tmp because I have it mounted as a tmpfs
rootdir=/tmp/buildroot

# bring the loopback interface up
ip link set lo up

# show that the loopback interface is really up
ip addr

# make use of the UTS namespace being unshared
hostname hoothoot-chroot

# extract the chroot tarball. This must be done inside the user namespace for
# the file permissions to be correct.
# tar will fail to call mknod and to change the permissions of /proc but we are
# ignoring that
tar -C "$rootdir" -xf /srv/chroot/unstable-amd64.tar.gz || true

# run chroot and inside, immediately drop permissions to the user "josch" and
# start an interactive shell
/usr/sbin/chroot "$rootdir" /sbin/runuser -s /bin/bash - josch

# unmount /proc and remove the temporary directory
umount "$rootdir/proc"
rm -rf "$rootdir"

and then:

$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./

As mentioned in the beginning, the tool is nearly equivalent to calling lxc-usernsexec [opts] -- unshare [opts] -- COMMAND but because of the problem with mounting proc (mentioned earlier), lxc-usernsexec and unshare cannot be used with the above example. If one tries anyway, one will only get:

$ lxc-usernsexec -m b:0:1000:1 -m b:1:558752:1 -- unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./
unshare: mount /tmp/buildroot/proc failed: Invalid argument

I'd be interested in finding out why that is and how to fix it.


Creative Commons License
The copyright of each article belongs to its respective author.
These works are licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.