Planet Debian


Enrico Zini: Ansible config for my stereo

10 April, 2017 - 01:54

I bought a Raspberry Pi 2 and its case. I could not reuse the existing SD card because it wants a MicroSD.

A wise person once told me:

First you do it, then you document it, then you automate it.

I had done the first two, and now I've redone the whole setup with ansible, here: stereo.tar.xz.

Sam Hartman: When "when" is too hard a question: SQLAlchemy, Python datetime, and ISO8601

10 April, 2017 - 01:39
A new programmer asked on a work chat room how timezones are handled in databases. He asked if it was a good idea to store things in UTC. The senior programmers all laughed as we told some of our horror stories with timezones. Yes, UTC is great; if only it were that simple.
About a week later I was designing the schema for a blue sky project I'm implementing. I had to confront time in all its Pythonic horror.
Let's start with the datetime.datetime class. Datetime objects optionally include a timezone. If no timezone is present, several methods, such as timestamp, treat the object as a local time in the system's timezone. The timestamp method returns a POSIX timestamp, which is always expressed in UTC, so knowing the input timezone is important. The now method constructs such an object from the current time.
However other methods act differently. The utcnow method constructs a datetime object that has the UTC time, but is not marked with a timezone. So, for example datetime.fromtimestamp(datetime.utcnow().timestamp()) produces the wrong result unless your system timezone happens to have the same offset as UTC.
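A quick sketch of the pitfall, using only the standard library:

```python
from datetime import datetime, timezone

# Naive "UTC": utcnow() holds the UTC wall-clock time but carries no tzinfo,
# so timestamp() reinterprets it in the system's local timezone.
naive = datetime.utcnow()

# Aware UTC: now(timezone.utc) is marked as UTC, so timestamp() is correct,
# and the round trip through a POSIX timestamp is lossless.
# (microseconds dropped only to keep the float comparison exact)
aware = datetime.now(timezone.utc).replace(microsecond=0)
roundtrip = datetime.fromtimestamp(aware.timestamp(), tz=timezone.utc)
assert roundtrip == aware

# datetime.fromtimestamp(naive.timestamp()) only reproduces naive when the
# system's UTC offset happens to be zero; anywhere else it drifts by that
# offset, which is exactly the bug described above.
```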
It's also possible to construct a datetime object that includes a UTC time and is marked as having a UTC time. The utcnow method never does this, but you can pass the UTC timezone into the now method and get that effect. As you'd expect, the timestamp method returns the correct result on such a datetime.
Now enter SQLAlchemy, one of the more popular Python ORMs. Its DATETIME type has an argument that tries to request a column capable of storing a timezone from the underlying database. You aren't guaranteed to get this though; some databases don't provide that functionality. With PostgreSQL, I do get such a column, although something in SQLAlchemy is not preserving the timezones (although it is correctly adjusting the time). That is, I'll store a UTC time in an object, flush it to my session, and then read back the same time represented in my local timezone (marked as my local timezone). You'd think this would be safe.
Enter SQLite. SQLite makes life hard for people wanting to store time; it seems to want to store things as strings. That's fairly incompatible with storing a timezone and doing any sort of comparisons on dates. SQLAlchemy does not try to store a timezone in SQLite. It just trims any timezone information from the datetime. So, if I do something like
from datetime import datetime, timezone

d = datetime.now(timezone.utc)  # a UTC-marked datetime (reconstructed; the
                                # original assignment was lost in formatting)
obj.date_col = d                # obj is a mapped object with a DATETIME column
# ... after the object has been flushed, SQLite has trimmed the tzinfo:
assert obj.date_col == d                               # fails
assert obj.date_col.timestamp() == d.timestamp()       # fails
assert d == obj.date_col.replace(tzinfo=timezone.utc)  # finally succeeds

There are some unfortunate consequences of this. If you mark your datetimes with timezone information (even if it is always the same timezone), whether two datetimes representing the same datetime compare equal depends on whether objects have been flushed to the session yet. If you don't mark your objects with timezones, then you may not store timezone information on other databases.
At least if you use only the methods we've discussed so far, you're reasonably safe if you use local time everywhere in your application and don't mark your datetimes with timezones. That's undesirable because as our new programmer correctly surmised, you really should be using UTC. This is particularly true if users of your database might span multiple timezones.
You can use UTC time and not mark your objects as UTC. This will give the wrong data with a database that actually does support timezones, but will sort of work with SQLite. You need to be careful never to convert your datetime objects into POSIX time as you'll get the wrong result.
It turns out that my life was even more complicated because parts of my project serialize data into JSON. For that serialization, I've chosen ISO 8601. You've probably seen that format: '2017-04-09T18:17:27.340410+00:00'. Datetime provides the convenient isoformat method to print timestamps in the ISO 8601 format. If the datetime has a timezone indication, it is included in the ISO formatted string. If not, then no timezone indication is included. You know how I mentioned that datetime treats a value without a timezone marker as local time? Yeah, well, that's not what ISO 8601 does: UTC all the way, baby! And at least the parser in the iso8601 module will always include timezone markers. So, if you use datetime to print a timestamp without a timezone marker and then read that back in to construct a new datetime on the deserialization side, then you'll get the wrong time. OK, so mark things with timezones then. Well, if you use local time, then the time you get depends on whether you print the ISO string before or after session flush (before or after SQLAlchemy trims the timezone information as it goes to SQLite).
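The asymmetry is easy to demonstrate, here using the standard library's fromisoformat (added later, in Python 3.7) in place of the iso8601 module:

```python
from datetime import datetime, timezone

aware = datetime(2017, 4, 9, 18, 17, 27, tzinfo=timezone.utc)
marked = aware.isoformat()                         # offset suffix included
unmarked = aware.replace(tzinfo=None).isoformat()  # no offset suffix

assert marked == '2017-04-09T18:17:27+00:00'
assert unmarked == '2017-04-09T18:17:27'

# Parsing the marked string recovers the exact instant:
assert datetime.fromisoformat(marked) == aware
# The unmarked string parses to a naive datetime; interpreting that as
# local time (datetime's convention) silently shifts the instant on any
# system whose timezone is not UTC.
assert datetime.fromisoformat(unmarked).tzinfo is None
```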
It turns out that I had the additional complication of one side of my application using SQLite and one side using PostgreSQL. Remember how I mentioned that something between SQLAlchemy and PostgreSQL was recasting my times in the local timezone (although keeping the time the same)? Well, consider how that's going to work. I serialize with the timezone marker on the PostgreSQL side. I get an ISO 8601 localtime marked with the correct timezone marker. I deserialize on the SQLite side. Before session flush, I get a local time marked as localtime. After session flush, I get a local time with no marking. That's bad. If I further serialize on the SQLite side, I'll get that local time incorrectly marked as UTC. Moreover, all the times being locally generated on the SQLite side are UTC, and as we explored, SQLite really only wants one timezone in play.
I eventually came up with the following approach:

  1. If I find myself manipulating a time without a timezone marking, assert that its timezone is UTC not localtime.

  2. Always use UTC for times coming into the system.

  3. If I'm generating an ISO 8601 time from a datetime that has a timezone marker in a timezone other than UTC, represent that time as a UTC-marked datetime adjusting the time for the change in timezone.
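These rules can be sketched as a small normalization helper; the function name and the choice to simply attach UTC to naive values (per the convention in rule 1) are my assumptions, not the author's code:

```python
from datetime import datetime, timedelta, timezone

def to_utc_marked(dt):
    """Hypothetical helper for the rules above: naive datetimes are taken
    to already hold UTC wall-clock time (rule 1) and are merely marked;
    aware ones are adjusted for the offset change (rule 3)."""
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

# A naive value is marked, not shifted:
assert to_utc_marked(datetime(2017, 4, 9, 18, 0)) == \
    datetime(2017, 4, 9, 18, 0, tzinfo=timezone.utc)

# An aware UTC+2 value is adjusted for the change in timezone:
cest = timezone(timedelta(hours=2))
assert to_utc_marked(datetime(2017, 4, 9, 20, 0, tzinfo=cest)) == \
    datetime(2017, 4, 9, 18, 0, tzinfo=timezone.utc)
```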

This is way too complicated. I think that both datetime and SQLAlchemy's SQLite time handling have a lot to answer for. I think SQLAlchemy's core time handling may also have some to answer for, but I'm less sure of that.

Antoine Beaupré: Contribute your skills to Debian in Montreal, April 14 2017

9 April, 2017 - 22:06

Join us in Montreal, on April 14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).

Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian.

The upcoming version of Debian, called Stretch, will be released later this year. We need you to help us make it awesome!

Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated.

Here's what we will be doing:

  • We will triage bug reports that are blocking the release of the upcoming version of Debian.

  • Debian package maintainers will fix some of these bugs.

Goals and principles

This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet.

We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend.

We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic. Attending this event requires reading and respecting the Debian Code of Conduct, which sets the standards of behaviour for the whole event, including communication (public and private) before, during, and after it.

The space where this event will take place is unfortunately not accessible to wheelchairs. Food (including vegetarian options) should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best.

What we will be doing

This will be an informal session to confirm and fix bugs in Debian. If you have never worked with Debian packages, this is a good opportunity to learn about packaging and bugtracker usage.

Bugs flagged as Release Critical are blocking the release of the upcoming version of Debian. To fix them, it helps to make sure the bug report documents the up-to-date status of the bug, and of its resolution. One does not need to be a programmer to do this work! For example, you can try and reproduce bugs in software you use... or in software you will discover. This helps package maintainers better focus their work.

We will also try to actually fix bugs by testing patches and uploading fixes into Debian itself. Antoine Beaupré, a seasoned Debian developer, will be available to sponsor uploads and teach people about basic Debian packaging skills.

Where? When? How to register?

See for the exact address and time.

Christoph Egger: Secured OTP Server (ASIS CTF 2017)

9 April, 2017 - 20:20

This weekend was ASIS Quals weekend again. And just like last year they have quite a lot of nice crypto-related puzzles which are fun to solve (and not "the same as every ctf").

Secured OTP Server is pretty much the same as the First OTP Server (actually it's a "fixed" version to enforce the intended attack). However, the template phrase now starts with enough stars to prevent a simple cube root:

def gen_otps():
    template_phrase = '*************** Welcome, dear customer, the secret passphrase for today is: '

    OTP_1 = template_phrase + gen_passphrase(18)
    OTP_2 = template_phrase + gen_passphrase(18)

    otp_1 = bytes_to_long(OTP_1)
    otp_2 = bytes_to_long(OTP_2)

    nbit, e = 2048, 3
    privkey = RSA.generate(nbit, e = e)
    pubkey  = privkey.publickey().exportKey()
    n = getattr(privkey.key, 'n')

    r = otp_2 - otp_1
    if r < 0:
        r = -r
    IMP = n - r**(e**2)
    if IMP > 0:
        c_1 = pow(otp_1, e, n)
        c_2 = pow(otp_2, e, n)
    return pubkey, OTP_1[-18:], OTP_2[-18:], c_1, c_2

Now let A = template * 2^(18*8), B = passphrase. This results in OTP = A + B. c therefore is (A+B)^3 mod n == A^3 + 3A^2B + 3AB^2 + B^3 mod n. Notice that of these terms only A^3 is larger than N, and A is statically known. Therefore we can calculate A^3 // N and add that many multiples of N to c to "undo" the modulo operation. With that it's only iroot and long_to_bytes to the solution. Note that we're talking about OTP and C here. The code actually produced two OTP and C values but you can use either one just fine.


import sys
from util import bytes_to_long
from gmpy2 import iroot

PREFIX = b'*************** Welcome, dear customer, the secret passphrase for today is: '
OTPbase = bytes_to_long(PREFIX + b'\x00' * 18)

N = 27990886688403106156886965929373472780889297823794580465068327683395428917362065615739951108259750066435069668684573174325731274170995250924795407965212988361462373732974161447634230854196410219114860784487233470335168426228481911440564783725621653286383831270780196463991259147093068328414348781344702123357674899863389442417020336086993549312395661361400479571900883022046732515264355119081391467082453786314312161949246102368333523674765325492285740191982756488086280405915565444751334123879989607088707099191056578977164106743480580290273650405587226976754077483115441525080890390557890622557458363028198676980513

WRAPPINGS = (OTPbase ** 3) // N

C = 13094996712007124344470117620331768168185106904388859938604066108465461324834973803666594501350900379061600358157727804618756203188081640756273094533547432660678049428176040512041763322083599542634138737945137753879630587019478835634179440093707008313841275705670232461560481682247853853414820158909864021171009368832781090330881410994954019971742796971725232022238997115648269445491368963695366241477101714073751712571563044945769609486276590337268791325927670563621008906770405196742606813034486998852494456372962791608053890663313231907163444106882221102735242733933067370757085585830451536661157788688695854436646


x = C + WRAPPINGS * N  # undo the modular reduction
val, exact = iroot(x, 3)
assert exact

# poor man's long_to_bytes: print the recovered plaintext byte by byte
bstr = "%x" % int(val)
for i in range(0, len(bstr) // 2):
    sys.stdout.write(chr(int(bstr[2*i:2*i+2], 16)))
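The same trick can be checked with toy numbers; the modulus, A, B and the icbrt helper below are all made up for illustration and are much smaller than the real challenge parameters:

```python
def icbrt(n):
    """Integer cube root by binary search; returns (root, is_exact)."""
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo, lo ** 3 == n

n = (2 ** 601) + 949   # arbitrary odd modulus, not a real RSA n
A = 0xC0FFEE << 180    # known high part; A**3 > n
B = 0xDEADBEEFCAFE     # unknown low part; small enough that the cross
                       # terms 3*A*A*B + 3*A*B*B + B**3 stay below n
c = pow(A + B, 3, n)

# (A+B)**3 = q*n + c where q is A**3 // n or at most one more, so try both:
q = A ** 3 // n
for k in (q, q + 1):
    root, exact = icbrt(c + k * n)
    if exact:
        break
assert root - A == B   # the secret low part is recovered
```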


Michael Stapelberg: what’s new since the launch?

9 April, 2017 - 18:23

On 2017-01-18, I announced that had been modernized. Let me catch you up on a few things which happened in the meantime:

  • Debian experimental was added to I was surprised to learn that adding experimental only required 52MB of disk usage. Further, Debian contrib was added after realizing that contrib licenses are compatible with the DFSG.
  • Indentation in some code examples was fixed upstream in mandoc.
  • Address-bar search should now also work in Firefox, which apparently requires a title attribute on the opensearch XML file reference.
  • manpages now specify their language in the HTML tag so that search engines can offer users the most appropriate version of the manpage.
  • I contributed mandocd(8) to the mandoc project, which debiman now uses for significantly faster manpage conversion (useful for disaster recovery/development). An entire run previously took 2 hours on my workstation. With this change, it takes merely 22 minutes. The effects are even more pronounced on manziarly, the VM behind
  • Thanks to Peter Palfrader (weasel) from the Debian System Administrators (DSA) team, is now serving its manpages (and most of its redirects) from Debian’s static mirroring infrastructure. That way, planned maintenance won’t result in service downtime. I contributed README.static-mirroring.txt, which describes the infrastructure in more detail.

The list above is not complete, but rather a selection of things I found worth pointing out to the larger public.

There are still a few things I plan to work on soon, so stay tuned :).

Matthew Garrett: A quick look at the Ikea Trådfri lighting platform

9 April, 2017 - 07:16
Ikea recently launched their Trådfri smart lighting platform in the US. The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent. Hardware-wise, the device is pretty minimal - it seems to be based on the Cypress[1] WICED IoT platform, with 100MBit ethernet and a Silicon Labs Zigbee chipset. It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on a single UDP port. As IoT devices go, it's pleasingly minimal.

That single port seems to be a COAP server running with DTLS and a pre-shared key that's printed on the bottom of the device. When you start the app for the first time it prompts you to scan a QR code that's just a machine-readable version of that key. The Android app has code for using the insecure COAP port rather than the encrypted one, but the device doesn't respond to queries there so it's presumably disabled in release builds. It's also local only, with no cloud support. You can program timers, but they run on the device. The only other service it seems to run is an mdns responder, which responds to the _coap._udp.local query to allow for discovery.

From a security perspective, this is pretty close to ideal. Having no remote APIs means that security is limited to what's exposed locally. The local traffic is all encrypted. You can only authenticate with the device if you have physical access to read the (decently long) key off the bottom. I haven't checked whether the DTLS server is actually well-implemented, but it doesn't seem to respond unless you authenticate first which probably covers off a lot of potential risks. The SoC has wireless support, but it seems to be disabled - there's no antenna on board and no mechanism for configuring it.

However, there's one minor issue. On boot the device grabs the current time from (fine) but also hits . That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal. Realistically, this is only a problem if someone already has enough control over your network to mess with your DNS, and being wired-only makes this pretty unlikely. I'd be surprised if it's ever used as a real avenue of attack.

Overall: as far as design goes, this is one of the most secure IoT-style devices I've looked at. I haven't examined the COAP stack in detail to figure out whether it has any exploitable bugs, but the attack surface is pretty much as minimal as it could be while still retaining any functionality at all. I'm impressed.

[1] Formerly Broadcom


Dirk Eddelbuettel: #4: Simpler shoulders()

9 April, 2017 - 02:09

Welcome to the fourth post in the repulsively random R ramblings series, or R4 for short.

My twitter feed was buzzing about a nice (and as yet unpublished, ie not-on-CRAN) package by Dirk Schumacher which compiles a list of packages (ordered by maintainer count) for your current session (or installation or ...) with a view towards saying thank you to those whose packages we rely upon. Very nice indeed.

I had a quick look and ran it twice ... and had a reaction of ewwww, really?, as running it twice gave different results: on the second instance a boatload of tibblyverse packages appeared. Because apparently kids these days can only slice data that has been tidied or something.

So I had another quick look ... and put together an alternative version using just base R (as there was only one subfunction that needed reworking):

format_pkg_df <- function(df) { # non-tibblyverse variant
    tb <- table(df[,2])
    od <- order(tb, decreasing=TRUE)
    ndf <- data.frame(maint=names(tb)[od], npkgs=as.integer(tb[od]))
    colpkgs <- function(m, df) { paste(df[ df$maintainer == m, "pkg_name"], collapse=",") }
    ndf[, "pkg"] <- sapply(ndf$maint, colpkgs, df)
    ndf
}

Running this in the ESS session I had open gives:

R> shoulders()  ## by Dirk Schumacher, with small modifications
                               maint npkgs                                                                 pkg
1 R Core Team <>     9 compiler,graphics,tools,utils,grDevices,stats,datasets,methods,base
2 Dirk Eddelbuettel <>     4                                  RcppTOML,Rcpp,RApiDatetime,anytime
3  Matt Dowle <>     1                                                          data.table

and for good measure a screenshot is below:

I think we need a catchy moniker for R work using good old base R. SoberVerse? GrumbyOldFolksR? PlainOldR? Better suggestions welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Arturo Borrero González: openvpn deployment with Debian Stretch

7 April, 2017 - 12:00

Debian Stretch feels like an excellent release by the Debian project. The final stable release is expected to happen soon.

Among the great things you can do with Debian, you could set up a VPN using the openvpn software.

In this blog post I will describe how I’ve deployed an openvpn server using Debian Stretch, my network environment, and my configurations & workflow.

First of all, let me list my requirements and the characteristics of what I needed:

  • a VPN server which allows internet clients to access our datacenter internal network (intranet) securely
  • strong authentication mechanisms for the users (user/password + client certificate)
  • the user/password information is stored in a LDAP server of the datacenter
  • support for many (perhaps hundreds of) clients
  • only need to route certain subnets (intranet) through the VPN, not the entire network traffic of the clients
  • full IPv4 & IPv6 dual stack support, of course
  • a group of system admins will perform changes to the configurations, adding and deleting clients

I agree this is a rather complex scenario, and not everyone will face these requirements.

The service diagram has this shape:

(DIA source file)

So, it works like this:

  1. clients connect via internet to our openvpn server,
  2. the openvpn server validates the connection and the tunnel is established (green)
  3. now the client is virtually inside our network (blue)
  4. the client wants to access some intranet resource, the tunnel traffic is NATed (red)

Our datacenter intranet uses public IPv4 addressing, but the VPN tunnels use private IPv4 addresses. To avoid mixing public and private addresses, NAT is used; obviously we don’t want to spend public IPv4 addresses on our internal tunnels. We don’t have this limitation in IPv6 and could use public IPv6 addresses within the tunnels. But we prefer sticking to a strict dual-stack IPv4/IPv6 approach: we also use private IPv6 addresses inside the tunnels and likewise NAT the IPv6 traffic from private to public.

This way, there are no differences in how the IPv4 and IPv6 networks are managed.

We follow this approach for the addressing:

  • client 1 tunnel:, fd00:0:1::11
  • client 1 public NAT: x.x.x.11, x:x::11
  • client 2 tunnel:, fd00:0:1::12
  • client 2 public NAT: x.x.x.12, x:x::12
  • […]

The NAT runs in the VPN server, since this is kind of a router. We use nftables for this task.

As the final win, I will describe how we manage all this configuration using the git version control system. Using git we can track which admin made which change. A git hook will deploy the files from the git repo itself to /etc/ so the services can read them.

The VPN server networking configuration is as follows (/etc/network/interfaces file, adjust to your network environments):

auto lo
iface lo inet loopback

# main public IPv4 address of
allow-hotplug eth0
iface eth0 inet static
        address x.x.x.4
        gateway x.x.x.1

# main public IPv6 address of
iface eth0 inet6 static
        address x:x:x:x::4
        netmask 64
        gateway x:x:x:x::1

# NAT Public IPv4 addresses (used to NAT tunnel of client 1)
auto eth0:11
iface eth0:11 inet static
        address x.x.x.11

# NAT Public IPv6 addresses (used to NAT tunnel of client 1)
iface eth0:11 inet6 static
        address x:x:x:x::11
        netmask 64

# NAT Public IPv4 addresses (used to NAT tunnel of client 2)
auto eth0:12
iface eth0:12 inet static
        address x.x.x.12

# NAT Public IPv6 addresses (used to NAT tunnel of client 2)
iface eth0:12 inet6 static
        address x:x:x:x::12
        netmask 64

Thanks to the amazing and tireless work of Alberto Gonzalez Iniesta (DD), the openvpn package in Debian is in very good shape, ready to use.

On the server, install the required packages:

% sudo aptitude install openvpn openvpn-auth-ldap nftables git sudo

Two git repositories will be used, one for the openvpn configuration and another for nftables (the nftables config is described later):

% sudo mkdir -p /srv/git/
% sudo git init --bare /srv/git/
% sudo mkdir -p /srv/git/
% sudo git init --bare /srv/git/
% sudo chown -R :git /srv/git/*
% sudo chmod -R g+rw /srv/git/*

The repositories belong to the git group, a system group we create to let systems admins operate the server using git:

% sudo addgroup --system git
% sudo adduser admin1 git
% sudo adduser admin2 git

For the openvpn git repository, we need at least this git hook (file /srv/git/ with execution permission):


#!/bin/sh
# NAME, GIT_WORK_TREE and the info() wrapper are reconstructed; adjust them.
NAME="post-receive"
UNAME=$(uname -n)
export GIT_WORK_TREE="/etc/openvpn/"

info()
{
        echo "${UNAME} ${NAME} $1 ..."
}

info "checkout latest data to $GIT_WORK_TREE"
sudo git checkout -f
info "cleaning untracked files and dirs at $GIT_WORK_TREE"
sudo git clean -f -d

For this hook to work, sudo permissions are required (file /etc/sudoers.d/openvpn-git):

User_Alias      OPERATORS = admin1, admin2
Defaults        env_keep += "GIT_WORK_TREE"
OPERATORS       ALL=(ALL) NOPASSWD:/usr/bin/git checkout -f
OPERATORS       ALL=(ALL) NOPASSWD:/usr/bin/git clean -f -d

Please review this sudoers file to match your environment and security requirements.

The openvpn package deploys several systemd services:

% dpkg -L openvpn | grep service

We don’t need all of them; we can use the simple openvpn.service:

% sudo systemctl edit --full openvpn.service

And put a content like this:

% systemctl cat openvpn.service
# /etc/systemd/system/openvpn.service
[Unit]
Description=OpenVPN server

[Service]
ExecStart=/usr/sbin/openvpn --daemon ovpn --status /run/openvpn/%i.status 10 --cd /etc/openvpn --config /etc/openvpn/server.conf --writepid /run/openvpn/
ExecReload=/bin/kill -HUP $MAINPID
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw

[Install]
WantedBy=multi-user.target

We can move on now to configure nftables to perform the NATs.

First, it’s good to load the NAT configuration at boot time, so you need a service file like this (/etc/systemd/system/nftables.service):

[Unit]
Description=nftables ruleset

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/etc/nftables.d
ExecStart=/usr/sbin/nft -f ruleset.nft
ExecReload=/usr/sbin/nft -f ruleset.nft
ExecStop=/usr/sbin/nft flush ruleset

[Install]
WantedBy=multi-user.target

The nftables git hooks are implemented as described in nftables managed with git. We are interested in the git hooks:

(file /srv/git/


#!/bin/sh
# NAME, GIT_WORK_TREE, NFT_ROOT, RULESET and info() are reconstructed;
# adjust the paths to your environment.
NAME="post-receive"
UNAME=$(uname -n)
export GIT_WORK_TREE="/etc/nftables.d/"
NFT_ROOT="/etc/nftables.d"
RULESET="ruleset.nft"

info()
{
        echo "${UNAME} ${NAME} $1 ..."
}

info "checkout latest data to $GIT_WORK_TREE"
sudo git checkout -f
info "cleaning untracked files and dirs at $GIT_WORK_TREE"
sudo git clean -f -d

info "deploying new ruleset"
set -e
cd $NFT_ROOT && sudo nft -f $RULESET
info "new ruleset deployment was OK"

This hook moves our nftables configuration to /etc/nftables.d and then applies it to the kernel. So a single commit changes the runtime configuration of the server.

You could implement some QA using the update git hook.

Remember, git hooks require execution permission to work. And of course, you will again need a sudo policy for these nft hooks.

Finally, we can start configuring both openvpn and nftables using git. For the VPN you will need to configure the PKI side: server certificates, and the CA signing your clients’ certificates. You can check openvpn’s own documentation about this.

Your first commit for openvpn could be the server.conf file:

plugin		/usr/lib/openvpn/ common-auth
mode		server
user		nobody
group		nogroup
port		1194
proto		udp6

cert		/etc/ssl/private/vpn.example.com_pub.crt
key		/etc/ssl/private/vpn.example.com_priv.pem
ca		/etc/ssl/cacert/clients_ca.pem
dh		/etc/ssl/certs/dh2048.pem
cipher		AES-128-CBC

dev		tun
topology	subnet
server-ipv6	fd00:0:1:35::/64

client-config-dir ccd
max-clients	100
inactive	43200
keepalive	10 360

log-append	/var/log/openvpn.log
status		/var/log/openvpn-status.log
status-version	1
verb		4
mute		20

Don’t forget the ccd/ directory. This directory contains a file per user using the VPN service. Each file is named after the CN of the client certificate:

# private addresses for client 1
ifconfig-ipv6-push	fd00:0:1::11/64

# routes to the intranet network
push "route-ipv6 x:x:x:x::/64"
push "route x.x.3.128"
# private addresses for client 2
ifconfig-ipv6-push	fd00:0:1::12/64

# routes to the intranet network
push "route-ipv6 x:x:x:x::/64"
push "route x.x.3.128"

You end up with at least these files in the openvpn git tree:


Please note that if you commit a change to ccd/, the changes are read at runtime by openvpn. On the other hand, changes to server.conf require you to restart the openvpn service by hand.

Remember, the addressing is like this:

(DIA source file)

In the nftables git tree, you should put a ruleset like this (a single file named ruleset.nft is valid):

flush ruleset
table ip nat {
	map mapping_ipv4_snat {
		type ipv4_addr : ipv4_addr
		elements = { : x.x.x.11, : x.x.x.12 }
	}

	map mapping_ipv4_dnat {
		type ipv4_addr : ipv4_addr
		elements = {	x.x.x.11 :,
				x.x.x.12 : }
	}

	chain prerouting {
		type nat hook prerouting priority -100; policy accept;
		dnat to ip daddr map @mapping_ipv4_dnat
	}

	chain postrouting {
		type nat hook postrouting priority 100; policy accept;
		oifname "eth0" snat to ip saddr map @mapping_ipv4_snat
	}
}

table ip6 nat {
	map mapping_ipv6_snat {
		type ipv6_addr : ipv6_addr
		elements = {	fd00:0:1::11 : x:x:x::11,
				fd00:0:1::12 : x:x:x::12 }
	}

	map mapping_ipv6_dnat {
		type ipv6_addr : ipv6_addr
		elements = {	x:x:x::11 : fd00:0:1::11,
				x:x:x::12 : fd00:0:1::12 }
	}

	chain prerouting {
		type nat hook prerouting priority -100; policy accept;
		dnat to ip6 daddr map @mapping_ipv6_dnat
	}

	chain postrouting {
		type nat hook postrouting priority 100; policy accept;
		oifname "eth0" snat to ip6 saddr map @mapping_ipv6_snat
	}
}

table inet filter {
	chain forward {
		type filter hook forward priority 0; policy accept;
		# some forwarding filtering policy, if required, for both IPv4 and IPv6
	}
}
Since the server is in fact routing packets between the tunnel and the public network, we require forwarding enabled in sysctl:

net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1

Of course, the VPN clients will require a client.conf file which looks like this:

remote 1194
dev tun
proto udp
resolv-retry infinite
verb 5
user nobody
group nogroup
ca      /etc/ssl/cacert/server_ca.crt
pkcs12  /home/user/mycertificate.p12
verify-x509-name name
cipher AES-128-CBC

Workflow for the system admins:

  1. git clone the openvpn repo
  2. modify ccd/ and server.conf
  3. git commit the changes, push to the server
  4. if server.conf was modified, restart openvpn
  5. git clone the nftables repo
  6. modify ruleset
  7. git commit the changes, push to the server

Comments via email welcome!

Dirk Eddelbuettel: #3: Follow R-devel

7 April, 2017 - 10:10

Welcome to the third post in the rarely relevant R recommendation series, or R4 for short.

Today will be brief, but of some importance. In order to know where R is going next, few places provide a better vantage point than the actual ongoing development.

A few years ago, I mentioned to Duncan Murdoch how straightforward the setup of my CRANberries feed (and site) was. After all, static blog compilers converting textual input to html, rss feed and whatnot have been around for fifteen years (though they keep getting reinvented). He took this to heart and built the (not too pretty) R-devel daily site (which also uses a fancy diff tool as it shows changes in NEWS) as well as a more general description of all available sub-feeds. I follow this mostly through blog aggregations -- Google Reader in its day, now Feedly. A screenshot is below just to show that it doesn't have to be ugly just because it is on them intertubes:

This shows a particularly useful day when R-devel folded into the new branch for what will be the R 3.4.0 release come April 21. The list of upcoming changes is truly impressive and quite comprehensive -- and the package registration helper, focus of posts #1 and #2 here, is but one of these many changes.

One function I learned about that day is tools::CRAN_package_db(), a helper to get a single (large) data.frame with all package DESCRIPTION information. Very handy. Others may have noticed that CRAN repos now have a new top-level file PACKAGES.rds and this function does indeed just fetch it--which you could do with a similar one-liner in R-release as well. Still very handy.

But do read about changes in R-devel and hence upcoming changes in R 3.4.0. Lots of good things coming our way.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Norbert Preining: Stella Stejskal – Im Mezzanin

7 April, 2017 - 09:04

A book about being a woman and a mother in a modern but still traditional society. About happiness and fulfillment, love and sex, responsibility and dependency, about life: Stella Stejskal‘s Im Mezzanin (in German).

The book was written by Stella Stejskal, a friend from my old times in Vienna and, like me, an emigrant. I saw parts of it while it was being written, and I was happy to finally see and read the finished product.

Stella's first novel revolves around Anna, the protagonist: a wife, mother, and woman trying to find her balance between house, family, kids, work, and her incredible energy and drive to live, and to live to the fullest.

Dein Leben in der Vorstadt, im Einfamilienhaus mit dem Garten, macht Dich nicht glücklich. Vordergründig hast Du alles, was eine Frau sich wünscht und doch fehlt Dir etwas Wesentliches: Verlangen und Leidenschaft.

(Your life in the suburb, in the one-family home with garden, it doesn’t make you happy. On the surface you do have everything what a woman could wish for, but you are missing something essential: desire and passion)

The author does not shy away from explicit language without ever dropping into the merely vernacular or banal. She manages to convey the incredible tension felt by those stretched between the necessities of daily life and the need for a more personal life.

Last but not least, I loved this book for quoting one of my most favorite lines from a song:

Konstantin Weckers Was passiert in den Jahren, drangen leise durch den Raum. “Komm, wir gehen mit der Flut und verwandeln mit den Wellen unsere Angst in neuen Mut”, sang ich mit und dachte an den Sommer …

For those capable of German, very recommendable.

Reproducible builds folks: Reproducible Builds: week 101 in Stretch cycle

7 April, 2017 - 05:29

Here's what happened in the Reproducible Builds effort between Sunday March 26 and Saturday April 1 2017:

Media coverage

Sylvain Beucler wrote a follow-up post, Practical basics of reproducible builds 2, which like last week's article is about his experiences making software build reproducibly.

Reproducible work in other projects

Colin Watson started writing a patch to make launchpad store .buildinfo files. (It's not yet deployed.)

Toolchain development and fixes

Ximin Luo continued to work on BUILD_PATH_PREFIX_MAP patches for GCC 6 and dpkg.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Mattia Rizzolo:

Reviews of unreproducible packages

49 package reviews have been added, 25 have been updated and 42 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (4)
  • Mattia Rizzolo (1)
diffoscope development

diffoscope 81 was uploaded to experimental by Chris Lamb. It included contributions from:

  • Chris Lamb
    • Correct meaningless "1234-content" metadata when introspecting files within archives. This was a regression since #854723 due to the use of auto-incrementing on-disk filenames. (Closes: #858223)
  • Ximin Luo
    • Improve ISO9660/DOS/MBR check.
reprotest development

reprotest development continued in git, including contributions from:

  • Ximin Luo:
    • Preserve directory structure when copying artifacts.

Development continued in git, including contributions from:

  • Chris Lamb:
    • Tidy rejection of supported formats.
    • Don't parse "Format:" header as the source package version.
reproducible-website development

Holger switched and to letsencrypt certificates.


This week's edition was written by Ximin Luo and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Norbert Preining: Planet Earth I – From Pole to Pole

6 April, 2017 - 08:11

In preparation for watching the new Planet Earth II (BBC page, Wiki) by David Attenborough, I re-watched the original 2006 BBC Planet Earth, episode one, From Pole to Pole; fortunately it is now available on Netflix. I had forgotten how great it is. I was moved to tears.

It is now 10 years old but still at the top of its craft, even compared to the new Cosmos series. The nature shots, often in slow motion, give spectacular views of animals rarely seen this way. Here is a male Superb bird of paradise posing in front of a female, making a very strange impression (bad screenshot, sorry).

White sharks are dangerous, we all know, but did you know that they can actually fly? Well, not actually fly, but they can jump so high that they completely leave the water:

The first episode finishes in the Okavango Delta with elephants bathing. Here is a young calf dancing under water.

Deeply impressed, deeply moved. In light of the anti-environmental, anti-climate-change, anti-intelligence comedian at the White House, more of us should see these great pieces of journalism, let them influence our thinking, and hopefully vote differently when it is our turn again.

Steinar H. Gunderson: Nageru 1.5.0 released

6 April, 2017 - 05:30

I just released version 1.5.0 of Nageru, my live video mixer. The biggest feature is obviously the HDMI/SDI live output, but there are lots of small nuggets everywhere; it's been four months in the making. I'll simply paste the NEWS entry here:

Nageru 1.5.0, April 5th, 2017

  - Support for low-latency HDMI/SDI output in addition to (or instead of) the
    stream. This currently only works with DeckLink cards, not bmusb. See the
    manual for more information.

  - Support changing the resolution from the command line, instead of locking
    everything to 1280x720.

  - The A/V sync code has been rewritten to be more in line with Fons
    Adriaensen's original paper. It handles several cases much better,
    in particular when trying to match 59.94 and 60 Hz sources to each other.
    However, it might occasionally need a few extra seconds on startup to
    lock properly if startup is slow.

  - Add support for using x264 for the disk recording. This makes it possible,
    among other things, to run Nageru on a machine entirely without VA-API.

  - Support for 10-bit Y'CbCr, both on input and output. (Output requires
    x264 disk recording, as Quick Sync Video does not support 10-bit H.264.)
    This requires compute shader support, and is in general a little bit
    slower on input and output, due to the extra amount of data being shuffled
    around. Intermediate precision is 16-bit floating-point or better,
    as before.

  - Enable input mode autodetection for DeckLink cards that support it.
    (bmusb mode has always been autodetected.)

  - Add functionality to add a time code to the stream; useful for debugging.

  - The live display is now both more performant and of higher image quality.

  - Fix a long-standing issue where the preview displays would be too bright
    when using an NVIDIA GPU. (This did not affect the finished stream.)

  - Many other bugfixes and small improvements.

1.5.0 is on its way into Debian experimental (it's too late for the stretch release, especially as it also depends on Movit and bmusb from experimental), or you can get it from the home page as always.

Jonathan Carter: GNOME Shell Extensions in Debian 9.0

6 April, 2017 - 04:27

GNOME 3 introduced an extensions framework that allows its users to extend the desktop shell by writing extensions using JavaScript and CSS. It works quite well and dozens of extensions have already been uploaded to the extensions site. Some of these solve some annoyances that users typically share with GNOME, while others add useful functionality.

During DebCamp last year, I started packaging some of these for Debian, and that's been going really well. Now that Ubuntu is finally dropping Unity in favour of GNOME, that served as a nudge to get this blog post out of my drafts. These extensions also make their way into Ubuntu and other Debian/Ubuntu derivatives.

Here are some extensions I've been packaging that are already in the archive:


Provides a multitude of options for the shell dock. Not only really useful but also well maintained by upstream; see their website for more info. This is a great extension if you support former Unity users, since you can set your panel to look and behave very similarly to Unity. I think the app launcher is slightly better in GNOME because apps are easier to discover.


Simple extension that hides the “Activities” button from the top left corner.


Speeds up shell animations. Animations can make the system more usable, but they can also be distracting or cause slight delays while you wait for them to complete. This extension gives you a sliding scale to choose how much you'd like to speed them up.


Simple extension that moves the clock from the center of the panel to the right.


In gnome-shell, network manager doesn't automatically refresh the list of available networks, which can be quite annoying. Currently a user has to turn wifi off and back on in order to see a refreshed list. This has been fixed upstream and will be in the next version of GNOME. In the meantime, this extension fixes that.


Items in the top panel contain dropdown arrows, which are useful for new users who might not be aware that they expand into more entries. For more experienced users, the arrows just add clutter to the panel; this extension hides them.


This allows you to hover over the volume control indicator and scroll up and down to increase/decrease the volume. Probably another extension that should really just be integrated into gnome-shell by default.


Many new laptops either don't have a hard disk LED anymore, or hide it so that it's not really all that visible. This extension shows hard disk activity in your panel. There's also work being done to report reads and writes separately; I'll be looking at backporting that when it's available.


Allows you to disconnect from the current network without having to turn off wi-fi entirely.


Title bars can be incredibly pixel-hungry, which isn’t great on small displays. This extension hides the title bar when a window is maximised, and adds control buttons for that window to the top panel.



Displays a trash icon in the top panel when there are items in the trash. From there you can view or delete the trash contents.


This extension adds some tweaks for users of multiple monitors. Its most useful feature is that you can have desktop overviews on both displays and easily move apps between them.

More extensions

Here are some more extensions packaged in Debian that others have packaged:

  • gnome-shell-extension-shortcuts – shows a keyboard shortcut overlay
  • gnome-shell-extension-show-ip – shows ip address in the notification area
  • gnome-shell-extension-autohidetopbar – hide the top panel
  • gnome-shell-extension-caffeine – prevents computer from suspending when enabled
  • gnome-shell-extension-mediaplayer – control mediaplayer from the system menu
  • gnome-shell-extension-redshift – change colour temperature to improve attention span and sleep patterns
  • gnome-shell-extension-suspend-button – adds suspend shortcut
  • gnome-shell-extension-taskbar – adds plenty of options for the top panel and optional bottom taskbar
  • gnome-shell-extension-top-icons-plus – moves system tray icons from the bottom hidden menu to the top
  • gnome-shell-extension-weather – weather report in panel
  • gnome-shell-extensions-gpaste – clipboard manager
  • gnome-shell-extension-onboard – on-screen keyboard manager
Didn’t make it this time

Both Debian and Ubuntu are in feature freeze right now, and the following didn’t make it in the archives in time, but should be in the following releases (they’re still installable via the extensions site in the meantime):

  • gnome-shell-extension-remove-round-corners – remove round corners on top panel
  • gnome-shell-extension-proxy-switcher – adds menu in system menu to quickly switch between proxy settings
  • gnome-shell-extension-apt-update-indicator – show apt status and available upgrades
  • gnome-shell-extension-uptime-indicator – indicator that displays uptime
  • gnome-shell-extension-dash-to-panel – great extension that combines the dock into the top panel
  • gnome-shell-extension-tilix-dropdown – shortcut configurator for tilix’s quake mode
Next steps

Consider auto-enable

Currently, when you install these Debian packages, (most of) the extensions won't be enabled by default. Users have to use gnome-tweak-tool to enable them after installation. The rationale is that a system administrator of a network of computers might only want to enable certain extensions for certain users. After some more consideration, I think such administrators will probably already have a system (like a configuration management system) in place to manage this. So, to make it easier for the typical user, I think it's worth considering enabling these by default on installation. Feedback welcome :)
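The enabled set lives in the `org.gnome.shell` `enabled-extensions` GSettings key, a GVariant list of UUID strings, so enabling-on-install amounts to appending a UUID to that list. A minimal sketch of just the list edit; `add_uuid` is a made-up helper name, the example UUIDs are illustrative, and on a real system you would wrap it around `gsettings get/set org.gnome.shell enabled-extensions`:

```shell
# Append an extension UUID to a GVariant-style list string, skipping
# duplicates. add_uuid is a hypothetical helper; real code would read and
# write the key with `gsettings get/set org.gnome.shell enabled-extensions`.
add_uuid() {
  list="$1"; uuid="$2"
  case "$list" in
    *"'$uuid'"*) printf '%s' "$list" ;;                     # already enabled
    "[]")        printf "['%s']" "$uuid" ;;                 # empty list
    *)           printf "%s, '%s']" "${list%]}" "$uuid" ;;  # append
  esac
}

add_uuid "['dash-to-dock@micxgx.gmail.com']" "trash@kylecorry31.github.io"
# → ['dash-to-dock@micxgx.gmail.com', 'trash@kylecorry31.github.io']
```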

Debian team

The list of packaged extensions is growing fast, and it would be nice to have them team-maintained. It might be a good idea to start a team for this or use an existing team under the Debian GNOME team namespace.

Packaging guide

I’ve packaged enough Gnome extensions to be aware of the typical gotchas and things that need fixing. They’re overall easy to package and a good place to start for someone who wants to get into packaging. I want to put together a good short guide on how to properly package gnome-shell extensions.

Anything else?

Any other extensions you’d like to see packaged? Let me know. Even better, package it yourself and help test my extension package guide (once it exists) so that we can improve that too.

Norbert Preining: Gaming: Quern – Undying Thoughts

5 April, 2017 - 10:55

I have been an addict of Myst-like games since the very beginning. Solving mind-boggling riddles by logical means (instead of weapons) was always my preferred kind of gaming. And it seems 2016 had a great share of games fitting my taste: Obduction, The Eyes of Ara, The Witness, and last but not least Quern – Undying Thoughts. Due to work, research, online courses, diapers, and some real life (these are also the excuses for my long silence on this blog) it took me ages to complete this game, but with a bit of help I finally managed it.

Those who have ever played any game from the Myst series (Myst, Riven etc) know what you get: Several worlds/areas to explore, finding clues, solving riddles, lots of reading, leading to a finish where you have to decide between two fates.

The game leads you through a variety of surroundings, from open air with gorgeous views and lovingly crafted details, to caves with dark corners and hidden chambers (even diving excursions are included), until you reach the final showdown at a huge underground facility. The graphics are so beautifully crafted that they make you want to book a trip to this island.


I have to admit that due to the long time over which I had to play the game, as well as my missing Sherlock Holmes traits, some of the riddles were really hard for me. Sometimes the clues are so insidiously hidden or inconspicuously placed that one just easily misses them. But that makes the game a real challenge.

For me, Quern strikes a great balance between a riddle game and an exploration game. I don't like games where you just have to go around and click every nook to make sure you find all the pieces (a typical example being The Room series), but on the other hand, games that consist only of solving riddles, without having to search for them, are only half the fun. Myst, Riven, and Quern too all strike a delicate balance between these two poles and deliver an absolutely astonishing experience.

Fully recommended, 100/100 for me!

Thomas Lange: FAI website now supports HTTPS

4 April, 2017 - 19:41

The FAI webpage is now reachable via HTTPS. You can also access the package repository at via HTTPS if you use this line in /etc/apt/sources.list:

deb jessie koeln

Thanks to Let's Encrypt for making this possible.


Ritesh Raj Sarraf: Fixing Hardware Bugs

4 April, 2017 - 18:02

Bugs can be annoying, especially the ones that crash or hang and have no identified root cause. A good example of such an annoyance is a kernel bug where a faulty device driver hinders the kernel's suspend/resume process. As a user, you suspend your machine in the middle of your work, hoping to resume that work later at your destination. But during suspend or resume, the bug randomly triggers, leaving you with no choice but a hardware reset. Ultimately, you lose the entire work state you were in.


Such is the situation I encountered with my two-year-old Lenovo Yoga 2 13. For two years, I had been living with this bug and all the side effects mentioned.

Mar 01 18:43:28 learner kernel: usb 2-4: new high-speed USB device number 38 using xhci_hcd
Mar 01 18:43:54 learner kernel: usb 2-4: new high-speed USB device number 123 using xhci_hcd
Mar 01 18:44:00 learner kernel: usb 2-4: new high-speed USB device number 125 using xhci_hcd
Mar 01 18:44:11 learner kernel: usb 2-4: new high-speed USB device number 25 using xhci_hcd
Mar 01 18:44:16 learner kernel: usb 2-4: new high-speed USB device number 26 using xhci_hcd
Mar 01 18:44:22 learner kernel: usb 2-4: new high-speed USB device number 27 using xhci_hcd
Mar 01 18:44:22 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:22 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:22 learner kernel: usb 2-4: new high-speed USB device number 28 using xhci_hcd
Mar 01 18:44:23 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:23 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:23 learner kernel: usb 2-4: new high-speed USB device number 29 using xhci_hcd
Mar 01 18:44:23 learner kernel: usb 2-4: Device not responding to setup address.
Mar 01 18:44:23 learner kernel: usb 2-4: Device not responding to setup address.
Mar 01 18:44:23 learner kernel: usb 2-4: device not accepting address 29, error -71
Mar 01 18:44:24 learner kernel: usb 2-4: new high-speed USB device number 30 using xhci_hcd
Mar 01 18:44:24 learner kernel: usb 2-4: Device not responding to setup address.
Mar 01 18:44:24 learner kernel: usb 2-4: Device not responding to setup address.
Mar 01 18:44:24 learner kernel: usb 2-4: device not accepting address 30, error -71
Mar 01 18:44:24 learner kernel: usb usb2-port4: unable to enumerate USB device
Mar 01 18:44:24 learner kernel: usb 2-4: new high-speed USB device number 31 using xhci_hcd
Mar 01 18:44:24 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:25 learner kernel: usb 2-4: new high-speed USB device number 32 using xhci_hcd
Mar 01 18:44:30 learner kernel: usb 2-4: new high-speed USB device number 33 using xhci_hcd
Mar 01 18:44:30 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:31 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:31 learner kernel: usb 2-4: new high-speed USB device number 34 using xhci_hcd
Mar 01 18:44:36 learner kernel: usb 2-4: new high-speed USB device number 35 using xhci_hcd
Mar 01 18:44:36 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:36 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:37 learner kernel: usb 2-4: new high-speed USB device number 36 using xhci_hcd
Mar 01 18:44:37 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:37 learner kernel: usb 2-4: device descriptor read/64, error -71
Mar 01 18:44:37 learner kernel: usb 2-4: new high-speed USB device number 37 using xhci_hcd
Mar 01 18:44:37 learner kernel: usb 2-4: Device not responding to setup address.
Mar 01 18:44:37 learner kernel: usb 2-4: Device not responding to setup address.
Mar 01 18:44:38 learner kernel: usb 2-4: device not accepting address 37, error -71
Mar 01 18:44:38 learner kernel: usb 2-4: new high-speed USB device number 38 using xhci_hcd
Mar 01 18:44:38 learner kernel: usb 2-4: Device not responding to setup address.


Mar 02 13:34:05 learner kernel: usb 2-4: new high-speed USB device number 45 using xhci_hcd
Mar 02 13:34:05 learner kernel: usb 2-4: new high-speed USB device number 46 using xhci_hcd
Mar 02 13:34:05 learner kernel: usb 2-4: New USB device found, idVendor=0bda, idProduct=0129
Mar 02 13:34:05 learner kernel: usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Mar 02 13:34:05 learner kernel: usb 2-4: Product: USB2.0-CRW
Mar 02 13:34:05 learner kernel: usb 2-4: Manufacturer: Generic
Mar 02 13:34:05 learner kernel: usb 2-4: SerialNumber: 20100201396000000
Mar 02 13:34:06 learner kernel: usb 2-4: USB disconnect, device number 46
Mar 02 13:34:16 learner kernel: usb 2-4: new high-speed USB device number 47 using xhci_hcd
Mar 02 13:34:21 learner kernel: usb 2-4: new high-speed USB device number 48 using xhci_hcd
Mar 02 13:34:26 learner kernel: usb 2-4: new high-speed USB device number 49 using xhci_hcd
Mar 02 13:34:32 learner kernel: usb 2-4: new high-speed USB device number 51 using xhci_hcd
Mar 02 13:34:37 learner kernel: usb 2-4: new high-speed USB device number 52 using xhci_hcd
Mar 02 13:34:43 learner kernel: usb 2-4: new high-speed USB device number 54 using xhci_hcd
Mar 02 13:34:43 learner kernel: usb 2-4: new high-speed USB device number 55 using xhci_hcd
Mar 02 13:34:49 learner kernel: usb 2-4: new high-speed USB device number 57 using xhci_hcd
Mar 02 13:34:55 learner kernel: usb 2-4: new high-speed USB device number 58 using xhci_hcd
Mar 02 13:35:00 learner kernel: usb 2-4: new high-speed USB device number 60 using xhci_hcd
Mar 02 13:35:06 learner kernel: usb 2-4: new high-speed USB device number 61 using xhci_hcd
Mar 02 13:35:11 learner kernel: usb 2-4: new high-speed USB device number 63 using xhci_hcd
Mar 02 13:35:17 learner kernel: usb 2-4: new high-speed USB device number 64 using xhci_hcd
Mar 02 13:35:22 learner kernel: usb 2-4: new high-speed USB device number 65 using xhci_hcd
Mar 02 13:35:28 learner kernel: usb 2-4: new high-speed USB device number 66 using xhci_hcd
Mar 02 13:35:33 learner kernel: usb 2-4: new high-speed USB device number 68 using xhci_hcd
Mar 02 13:35:39 learner kernel: usb 2-4: new high-speed USB device number 69 using xhci_hcd
Mar 02 13:35:44 learner kernel: usb 2-4: new high-speed USB device number 70 using xhci_hcd
Mar 02 13:35:50 learner kernel: usb 2-4: new high-speed USB device number 71 using xhci_hcd
Mar 02 13:35:50 learner kernel: usb 2-4: Device not responding to setup address.
Mar 02 13:35:50 learner kernel: usb 2-4: Device not responding to setup address.
Mar 02 13:35:50 learner kernel: usb 2-4: device not accepting address 71, error -71
Mar 02 13:35:50 learner kernel: usb 2-4: new high-speed USB device number 73 using xhci_hcd
Mar 02 13:35:51 learner kernel: usb 2-4: new high-speed USB device number 74 using xhci_hcd
Mar 02 13:35:56 learner kernel: usb 2-4: new high-speed USB device number 75 using xhci_hcd
Mar 02 13:35:57 learner kernel: usb 2-4: new high-speed USB device number 77 using xhci_hcd
Mar 02 13:36:03 learner kernel: usb 2-4: new high-speed USB device number 78 using xhci_hcd
Mar 02 13:36:08 learner kernel: usb 2-4: new high-speed USB device number 79 using xhci_hcd
Mar 02 13:36:14 learner kernel: usb 2-4: new high-speed USB device number 80 using xhci_hcd
Mar 02 13:36:20 learner kernel: usb 2-4: new high-speed USB device number 83 using xhci_hcd
Mar 02 13:36:26 learner kernel: usb 2-4: new high-speed USB device number 86 using xhci_hcd


Thanks to the Linux USB maintainers, we tried investigating the issue, which resulted in uncovering other bugs. Unfortunately, this one was concluded to be a possible hardware bug. The only odd bit is that this machine still has a Windows 8.1 copy lying on the spare partition, where the issue was not seen at all. It could very well be that it was not a hardware bug at all, or that it was a hardware bug with a workaround in the Windows driver.

But the results of the exercise weren't of much use to me, because I use the machine under the Linux kernel most of the time.


So, in March 2017, two years after purchasing the device, I was annoyed enough by the bug to try finding other ways of taming the issue.

Lenovo has some variations of this device. I know that it comes with multiple options for the storage and wifi components; I'm not sure if there are more differences.

The majority of the devices on this machine are connected over the xHCI bus. If a single device is faulty, it can screw up the entire user experience for the machine. Such is my case. Hardware manufacturers could do a better job by providing a means to disable hardware, for example in the BIOS. HP-shipped machines have such an option in the BIOS, where you can disable devices that have no important use case for the user. Good examples of such devices are fingerprint readers, SD card readers, LOMs and maybe Bluetooth too. At least the latter should apply to Linux users, as the majority of us have an unpleasant time getting Bluetooth to work out of the box.
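Even without BIOS support, Linux offers a software-side escape hatch: a misbehaving USB device can be unbound from its driver through sysfs, which effectively disables it without a soldering iron. A minimal sketch; `unbind_usb` is a made-up helper, the default path is the real sysfs interface (root required), and the demonstration below writes to a temporary stand-in file instead of the live sysfs node:

```shell
# Unbind a USB device (e.g. bus id "2-4") from the generic usb driver by
# writing its id to the driver's sysfs unbind file. unbind_usb is a
# hypothetical helper; /sys/bus/usb/drivers/usb/unbind is the real
# interface (writing to it requires root on a real system).
unbind_usb() {
  dev="$1"
  ctrl="${2:-/sys/bus/usb/drivers/usb/unbind}"
  printf '%s' "$dev" > "$ctrl"
}

# Demonstrated against a temporary file standing in for the sysfs node:
fake=$(mktemp)
unbind_usb "2-4" "$fake"
cat "$fake"   # the id that would have been written to sysfs
```

The downside is that the unbind does not survive a reboot, so it would need to be reapplied from a boot script, whereas the desoldering described below is rather more permanent.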

But my Lenovo Yoga came with a ridiculous BIOS/UEFI with very limited options for change. Thankfully, it did have an option to set the boot mode for the device, giving the choice of Legacy Boot or UEFI.

Back to the topic: after two years of living with the bug, and no clarity on whether it was a hardware bug or a driver bug, I was left with no choice but to open up the machine.

Next to the mSATA HDD sits the additional board, which houses the Power, USB, Audio In, and the SD Card reader.


Opening that up, I got to the small board. I barely use the SD card reader, and given the annoyances I had to suffer because of it, there was no mercy left for that device.

So the next step was to unsolder the SD card reader completely.

Once done and fitted back into the machine, everything has been working great for the last two weeks. This entire fix cost me ₹0. So sometimes, fixing a bug is all that matters.

In the Hindi language, such a scenario reminds me of a nice phrase from the great Chanakya: "साम, दाम, दंड और भेद" (sama, dama, danda, bheda: conciliation, gifts, punishment, and division; in other words, by whatever means necessary).


Matthias Klumpp: On Tanglu

4 April, 2017 - 15:12

It’s time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people who wanted to create a Debian derivative for workstations with a time-based release schedule, using and showcasing new technologies (which include systemd, but also bundling systems and other things), built in the open with a community and using infrastructure similar to Debian’s. Tanglu is designed explicitly to complement Debian, not to compete with it on all devices.

Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd and with the help of our contributors we could kill a few nasty issues affecting it and Debian before it ended up becoming default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience additionally to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (but might be revived later). A lot of other less-impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!).

On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak for distributions which aren’t Debian easier. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally ending up with Debile, and added a set of own custom tools to collect archive QA information and present it to our developers in an easy to digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of basic Debian archive management tools.

During the past year, however, the project’s progress slowed down significantly. For this, mostly I am to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu who were interested in contributing were unfortunately not packagers and sometimes not developers at all, and we didn’t have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests were and what they would like to do, to give them a useful task. This sounds great in principle, but in practice it is actually not very helpful. A curated list of “junior jobs” is a much better starting point. We also invested almost zero time in making our project known and creating the necessary “buzz” and excitement that’s needed to sustain a project like this. Doing more in the advertisement domain and the “help newcomers” area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn’t enough; talking about it is of crucial importance, and that is something I knew about but didn’t realize the impact of for quite a while. As strange as it sounds, investing in the tech alone isn’t enough; community building is of equal importance.

Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.

The other issue why Tanglu has problems is too much stuff being centralized on myself. That is a problem I wanted to rectify for a long time, but as soon as a task wasn’t done in Tanglu because no people were available to do it, I completed it. This essentially increased the project’s dependency on me as single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn’t a problem as long as that person is available enough to perform tasks if asked for), it also centralizes knowledge on how to run services and how to do things. And if you want to give up power, people will need the knowledge on how to perform the specific task first (which they will never gain if there’s always that one guy doing it). I still haven’t found a great way to solve this – it’s a problem that essentially kills itself as soon as the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.

Last year I had way less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus – that’s probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally I need to invest a lot more time into other projects such as AppStream and a lot of random other stuff that just needs continuous maintenance and discussion (especially AppStream eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can’t split myself and KDE’s cloning machine remains broken, so I can’t even use that ;-). In terms of projects there is also a personal hard limit of how much stuff I can handle, and exceeding it long-term is not very healthy, as in these cases I try to satisfy all projects and in the end do not focus enough on any of them, which makes me end up with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).

Good news everyone! (sort of)

This sounded overly negative, so where does this leave Tanglu? The fact is, I cannot commit the crazy amounts of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work on Purism overlaps with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the “make bundling nice!” department).

So, what actually is the way forward? First, maybe I have the chance to find a few people willing to work on tasks in Tanglu. It’s a fun project, and I learned a lot while working on it. Tanglu also possesses some unique properties few other Debian derivatives have, like being built from source completely (allowing us things like swapping core components or compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we do not have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system with machines doing most of the background work.

If it turns out that absolutely nothing works and we can't attract new people to help with Tanglu, it would mean that there generally isn't much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing.

The only thing that has to stop is leaving our users in the dark about what is happening.

Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about.

If you are interested in contributing to Tanglu, get in touch with us! We have an IRC channel #tanglu-devel on Freenode (go there for quicker responses!), forums and mailing lists.

It looks like I will be at Debconf this year as well, so you can also catch me there! I might even talk about PureOS/Tanglu infrastructure at the conference.

Vincent Fourmond: Variations around ls -lrt

4 April, 2017 - 03:12
I have been using ls -lrt almost compulsively for a long time now. As per the ls man page, this command lists the files of the current directory with the latest files at the end, so that they are the ones that show up just above your next command line. This is very convenient for working with, hmmm, not-so-well-organized directories, because it shows just what you're working on, and you can safely ignore the rest. A typical example is a big Downloads directory, but I use it everywhere. I quickly set up alias lrt="ls -lrt" to make it easier, but... I thought I might as well have a reliable way to directly use what I saw. So I came up with the following shell function (zsh, but it probably works with most Bourne-like shells):
lrt() {
    ls -lrt "$@"
    lrt="$(ls -rt "$@" | tail -n1)"
}
This small function runs ls -lrt as usual, but also sets the $lrt shell variable to the latest file, so you can use it in your next commands! Especially useful for complex file names. Demonstration:
22:05 vincent@ashitaka ~/Downloads lrt
-rw-r--r-- 1 vincent vincent   1490027 Apr  2 15:44
-rw-r--r-- 1 vincent vincent    668566 Apr  3 22:05 1-s2.0-S0013468617305947-main.pdf
22:06 vincent@ashitaka ~/Downloads cp -v $lrt ~/nice-paper.pdf
'1-s2.0-S0013468617305947-main.pdf' -> '/home/vincent/nice-paper.pdf'
This saves typing the name of 1-s2.0-S0013468617305947-main.pdf: in this case, automatic completion doesn't help much, since many files in my Downloads directory start with the same prefix... I hope this helps!
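To see the function in action end to end, here is a small self-contained check (not from the original post; it uses touch -d, a GNU coreutils extension, and invented file names):

```shell
# Same function as above: list newest-last, and remember the newest
# entry in the $lrt variable as a side effect.
lrt() {
    ls -lrt "$@"
    lrt="$(ls -rt "$@" | tail -n1)"
}

dir=$(mktemp -d)                       # scratch directory for the demo
touch -d '2020-01-01' "$dir/old.pdf"   # older file
touch -d '2021-01-01' "$dir/new.pdf"   # newer file
lrt "$dir" > /dev/null                 # sets $lrt, listing discarded
echo "$lrt"                            # prints: new.pdf
rm -r "$dir"
```

Note that the variable is set without `local`, which is exactly why it survives the function call and is usable in the next command.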

Markus Koschany: My Free Software Activities in March 2017

3 April, 2017 - 23:49

Here is my monthly report that covers what I have been doing for Debian. If you're interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android
  • A new upstream release of apktool was uploaded to experimental.
Debian Games
  • I packaged new upstream releases of megaglest and megaglest-data.
  • I fixed a bug in pangzero (#857474) that crashed the game when someone pressed the pause key. The updated package will be part of Stretch.
  • The severity was inflated and the issue debatable, but since it took less time to “fix” bug #857801 in dopewars than to write this sentence, I did it anyway.
  • I fixed bugs #857236 and #857845 in holotz-castle. Background: there are various packages in Debian that ship a considerable amount of documentation, which is usually a good thing. We always strive to optimize packages, and reducing the package size is one option. In the past people thought that symlinking the doc directory of an arch:all (architecture-independent) package to an arch:any (architecture-dependent) package saves disk space, because it is not necessary to duplicate the same content on every architecture. Unfortunately this feature, dh_installdocs --link-doc, is broken by design (#766711) and in its current state not usable for this use case. As a consequence I filed #857851, asking for an improvement of piuparts' status reports, and also filed #857852 against dpkg, which was later cloned as #858036 for debhelper. In a nutshell, I would like to see better documentation on how to use dpkg-maintscript-helper and *.maintscript files. I also believe it would be nice to simplify the latter by using only one file.
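For readers unfamiliar with the *.maintscript files mentioned above, they are declarative per-package files that dh_installdeb turns into dpkg-maintscript-helper calls in the generated maintainer scripts. A hypothetical fragment (package name, paths and version are invented for illustration):

```
# debian/foo.maintscript — one dpkg-maintscript-helper directive per line;
# dh_installdeb appends the remaining maint-script parameters automatically.
# Convert the former /usr/share/doc symlink back into a real directory:
symlink_to_dir /usr/share/doc/foo foo-data 1.2-1~
# Remove a conffile that is no longer shipped:
rm_conffile /etc/foo/obsolete.conf 1.2-1~
```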
Debian Java
  • I packaged version 5.4 of sweethome3d, added myself to Uploaders and closed two bugs (#854030, #856769).
  • I fixed an RC bug (#856626) in lucene-solr, more precisely in one of the configuration files of solr-tomcat, a search engine with Tomcat integration, that prevented the server from starting.
  • I am still investigating an RC issue (#857343) in logback. This is a potential security vulnerability that allows remote attackers to execute arbitrary code. My first patch was incomplete and more backported code from the latest upstream release is required. Unfortunately upstream was not very helpful in tracking down the necessary code changes. My question still remains unanswered.
  • Netbeans (#837081): Netbeans has been crashing from time to time. It is not easy to trigger the issue but it is related to libatk-wrapper-java-jni and the Accessibility ToolKit (ATK). I have cloned bug number #837081 as #858700 for now because I don’t think it can be fixed in Netbeans.
Debian LTS

This was my thirteenth month as a paid contributor and I have been paid to work 14.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 6 March until 13 March I was in charge of our LTS frontdesk. I triaged security issues in qbittorrent, imagemagick, freetype, glibc, vim, suricata, texlive-base, web2py, lxc, r-base, mysql-5.5, partclone, irssi, wordpress, mupdf and php5.
  • DLA-846-1. Issued a security update for libzip-ruby fixing 1 CVE.
  • DLA-853-1. Issued a security update for pidgin fixing 1 CVE.
  • DLA-855-1. Issued a security update for roundcube fixing 1 CVE.
  • DLA-860-1. Issued a security update for wordpress fixing 3 CVEs.
  • DLA-870-1. Issued a security update for libplist fixing 3 CVEs.
  • DLA-872-1. Issued a security update for xrdp fixing 1 CVE.
  • DLA-875-1. Issued a security update for php5 fixing 3 CVEs.
  • March 2017 also saw a new version of MediathekView (now in experimental).

Thanks for reading and see you next time.


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.