Planet Debian


Mark Brown: We show up

11 February, 2017 - 19:12

It’s really common for pitches to management within companies about Linux kernel upstreaming to focus on the cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that this is how other upstream developers become aware of what sorts of hardware and use cases exist out there.

From this point of view it’s often the things that are most difficult to get upstream that are the most valuable to talk to upstream about. Of course it’s not quite that simple: a track record of engagement on the simpler drivers, and the knowledge and relationships built up in that process, make discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it’s not just that; the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that the better Linux is likely to meet their needs.

Noah Meyerhans: Using FAI to customize and build your own cloud images

11 February, 2017 - 14:42

At this past November's Debian cloud sprint, we classified our image users into three broad buckets in order to help guide our discussions and ensure that we were covering the common use cases. Our users fit generally into one of the following groups:

  1. People who directly launch our image and treat it like a classic VPS. These users will most likely log into their instances via ssh and configure them interactively, though they may also install and use a configuration management system at some point.
  2. People who directly launch our images but configure them automatically via launch-time configuration passed to the cloud-init process on the agent. This automatic configuration may optionally serve to bootstrap the instance into a more complete configuration management system. The user may or may not ever actually log in to the system at all.
  3. People who will not use our images directly at all, but will instead construct their own image based on ours. They may do this by launching an instance of our image, customizing it, and snapshotting it, or they may build a custom image from scratch by reusing and modifying the tools and configuration that we use to generate our images.

This post is intended to help people in the final category get started with building their own cloud images based on our tools and configuration. As I mentioned in my previous post on the subject, we are using the FAI project with configuration from the fai-cloud-images repository. It's probably a good idea to get familiar with FAI and our configs before proceeding, but it's not strictly necessary.

You'll need to use FAI version 5.3.4 or greater. 5.3.4 is currently available in stretch and jessie-backports. Images can be generated locally on your non-cloud host, or on an existing cloud instance. You'll likely find it more convenient to use a cloud instance so you can avoid the overhead of having to copy disk images between hosts. For the most part, I'll assume throughout this document that you're generating your image on a cloud instance, but I'll highlight the steps where it actually matters. I'll also be describing the steps to target AWS, though the general workflow should be similar if you're targeting a different platform.
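Since the fai-server version requirement above is easy to trip over, a quick guard at the top of a build script can save a confusing failure later. This is only a sketch, not part of the official tooling; the dpkg-query invocation is the usual way to read an installed package version, and 5.4.0 below is a stand-in value:

```shell
# Hypothetical version guard. On a real host you would set:
#   have=$(dpkg-query -W -f='${Version}' fai-server)
need=5.3.4
have=5.4.0
# sort -V sorts by version; if $have sorts last, it is >= $need.
newest=$(printf '%s\n' "$need" "$have" | sort -V | tail -n1)
if [ "$newest" = "$have" ]; then
    echo "fai-server $have is new enough"
else
    echo "need fai-server >= $need (have $have)" >&2
    exit 1
fi
```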

To get started, install the fai-server package on your instance and clone the fai-cloud-images git repository. (I'll assume the repository is cloned to /srv/fai/config.) In order to generate your own disk image that generally matches what we've been distributing, you'll use a command like:

sudo fai-diskimage --hostname stretch-image --size 8G \
    --class DEBIAN,STRETCH,AMD64,GRUB_PC,CLOUD,EC2 /tmp/stretch-image.raw

This command will create an 8 GB raw disk image at /tmp/stretch-image.raw, create some partitions and filesystems within it, and install and configure a bunch of packages into it. Exactly what packages it installs and how it configures them will be determined by the FAI config tree and the classes provided on the command line. The package_config subdirectory of the FAI configuration contains several files, the names of which are FAI classes. Activating a given class by referencing it on the fai-diskimage command line instructs FAI to process the contents of the matching package_config file if such a file exists. The files use a simple grammar that provides you with the ability to request certain packages to be installed or removed.
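The grammar can be seen in a slightly fuller class file; the WEBSERVER class name and its package list here are made up purely for illustration, and install/remove are among the commands the package_config grammar supports:

```shell
mkdir -p package_config
cat > package_config/WEBSERVER <<'EOF'
PACKAGES install
nginx
curl

PACKAGES remove
exim4-base
EOF
# Two stanzas: one set of packages to install, one to remove.
grep -c '^PACKAGES' package_config/WEBSERVER
```

Enabling it would then be a matter of adding WEBSERVER to the --class list, just as with any other class.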

Let's say for example that you'd like to build a custom image that looks mostly identical to Debian's images, but that also contains the Apache HTTP server. You might do that by introducing a new file, package_config/HTTPD, as follows:

PACKAGES install
apache2

Then, when running fai-diskimage, you'll add HTTPD to the list of classes:

sudo fai-diskimage --hostname stretch-image --size 8G \
    --class DEBIAN,STRETCH,AMD64,GRUB_PC,CLOUD,EC2,HTTPD /tmp/stretch-image.raw

Aside from custom package installation, you're likely to also want custom configuration. FAI allows the use of pretty much any scripting language to perform modifications to your image. A common task that these scripts may want to perform is the installation of custom configuration files. FAI provides the fcopy tool to help with this. Fcopy is aware of FAI's class list and is able to select an appropriate file from the FAI config's files subdirectory based on classes. The scripts/EC2/10-apt script provides a basic example of using fcopy to select and install an apt sources.list file. The files/etc/apt/sources.list/ subdirectory contains both an EC2 and a GCE file. Since we've enabled the EC2 class on our command line, fcopy will find and install that file. You'll notice that the sources.list subdirectory also contains a preinst file, which fcopy can use to perform additional actions prior to actually installing the specified file. postinst scripts are also supported.
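fcopy's class-based lookup can be mimicked in plain shell to make the mechanics concrete. This is an illustration only (the file contents are hypothetical and the real selection is done by fcopy against the config space); FAI gives the last class in the list the highest priority:

```shell
# A miniature config space with two class-specific variants.
mkdir -p files/etc/apt/sources.list
echo 'deb http://cdn-aws.deb.debian.org/debian stretch main' \
    > files/etc/apt/sources.list/EC2
echo 'deb http://deb.debian.org/debian stretch main' \
    > files/etc/apt/sources.list/GCE

# Classes as enabled on the fai-diskimage command line, lowest
# priority first; the last matching class wins.
src=""
for c in DEBIAN STRETCH EC2; do
    if [ -f "files/etc/apt/sources.list/$c" ]; then
        src="files/etc/apt/sources.list/$c"
    fi
done
cat "$src"    # the EC2 variant is selected
```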

Beyond package and file installation, FAI also provides mechanisms to support debconf preseeding, as well as hooks that are executed at various stages of the image generation process. I recommend following the examples in the fai-cloud-images repo, as well as the FAI guide for more details. I do have one caveat regarding the documentation, however: FAI was originally written to help provision bare-metal systems, and much of its documentation is written with that use case in mind. The cloud image generation process is able to ignore a lot of the complexity of these environments (for example, you don't need to worry about pxeboot and tftp!) However, this means that although you get to ignore probably half of the FAI Guide, it's not immediately obvious which half it is that you get to ignore.

Once you've generated your raw image, you can inspect it by telling Linux about the partitions contained within, and then mount and examine the filesystems. For example:

admin@ip-10-0-0-64:~$ sudo partx --show /tmp/stretch-image.raw
NR START      END  SECTORS SIZE NAME UUID
 1  2048 16777215 16775168   8G      ed093314-01
admin@ip-10-0-0-64:~$ sudo partx -a /tmp/stretch-image.raw
partx: /dev/loop0: error adding partition 1
admin@ip-10-0-0-64:~$ lsblk
NAME      MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
xvda      202:0    0      8G  0 disk
├─xvda1   202:1    0 1007.5K  0 part
└─xvda2   202:2    0      8G  0 part /
loop0       7:0    0      8G  0 loop
└─loop0p1 259:0    0      8G  0 loop
admin@ip-10-0-0-64:~$ sudo mount /dev/loop0p1 /mnt/
admin@ip-10-0-0-64:~$ ls /mnt/
bin/   dev/  home/        initrd.img.old@  lib64/       media/  opt/   root/  sbin/  sys/  usr/  vmlinuz@
boot/  etc/  initrd.img@  lib/             lost+found/  mnt/    proc/  run/   srv/   tmp/  var/  vmlinuz.old@

In order to actually use your image with your cloud provider, you'll need to register it with them. Strictly speaking, these are the only steps that are provider specific and need to be run on your provider's cloud infrastructure. AWS documents this process in the User Guide for Linux Instances. The basic workflow is:

  1. Attach a secondary EBS volume to your EC2 instance. It must be large enough to hold the raw disk image you created.
  2. Use dd to write your image to the secondary volume, e.g. sudo dd if=/tmp/stretch-image.raw of=/dev/xvdb
  3. Use the script in the fai-cloud-images repo to snapshot the volume and register the resulting snapshot with AWS as a new AMI. Example: ./ vol-04351c30c46d7dd6e

The script must be run with access to AWS credentials that grant access to several EC2 API calls: describe-snapshots, create-snapshot, and register-image. It recognizes a --help command-line flag and several options that modify characteristics of the AMI that it registers. When it completes, it will print the AMI ID of your new image. You can now work with this image using standard AWS workflows.

As always, we welcome feedback and contributions via the debian-cloud mailing list or #debian-cloud on IRC.

Jonas Meurer: debian lts report 2017.01

10 February, 2017 - 23:07
Debian LTS report for January 2017

January 2017 was my fifth month as a Debian LTS team member. I was allocated 12 hours and had 6.75 hours left over from December 2016. This makes a total of 18.75 hours. Unfortunately I found less time than expected to work on Debian LTS in January. In total, I spent 9 hours on the following security updates:

  • DLA 787-1: XSS protection via Content Security Policy for otrs2
  • DLA 788-1: fix vulnerability in pdns-recursor by dropping illegitimate long queries
  • DLA 798-1: fix multiple vulnerabilities in pdns

Rhonda D'Vine: Anouk

10 February, 2017 - 19:19

I need music to be more productive. Sitting in an open workspace, it also helps to shut out outside noise. And often enough I just turn cmus into shuffle mode and let it play what comes along. Yesterday I stumbled again upon a singer whose voice I fell in love with a long time ago. This is about Anouk.

The song was on a compilation series that I followed because it so easily brought great groups to my attention in a genre that I simply love. It was called "Crossing All Over!" and featured several groups that I dug further into and still love to listen to.

Anyway, don't want to delay the songs for you any longer, so here they are:

  • Nobody's Wife: The first song I heard from her, and her voice totally caught me.
  • Lost: A quieter song for a break.
  • Modern World: A great song about the toxic beauty norms that society likes to paint. Lovely!

Like always, enjoy!


Dirk Eddelbuettel: anytime 0.2.1

10 February, 2017 - 18:37

An updated anytime package arrived at CRAN yesterday. This is release number nine, and the first with a little gap to the prior release on Christmas Eve as the features are stabilizing, as is the implementation.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to either POSIXct or Date objects -- and to do so without requiring a format string. See the anytime page, or the GitHub repo, for a few examples.

This release addresses two small things related to the anydate() and utcdate() conversion (see below) and adds one nice new format, besides some internal changes detailed below:

R> library(anytime)
R> anytime("Thu Sep 01 10:11:12 CDT 2016")
[1] "2016-09-01 10:11:12 CDT"
R> anytime("Thu Sep 01 10:11:12.123456 CDT 2016") # with frac. seconds
[1] "2016-09-01 10:11:12.123456 CDT"

Of course, all commands are also fully vectorised. See the anytime page, or the GitHub repo, for more examples.

Changes in anytime version 0.2.1 (2017-02-09)
  • The new DatetimeVector class from Rcpp is now used, and proper versioned Depends: have been added (#43)

  • The anydate and utcdate functions convert again from factor and ordered (#46 closing #44)

  • A format similar to RFC 2822 but with additional timezone text can now be parsed (#48 closing #47)

  • Conversion from POSIXt to Date now also respects the timezone (#50 closing #49)

  • The internal .onLoad function was updated

  • The Travis setup uses https to fetch the run script

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve Kemp: Old packages are interesting.

9 February, 2017 - 21:29

Recently Vincent Bernat wrote about writing his own simple terminal, using vte. That was a fun read, as the sample code built really easily and was functional.

At the end of his post he said :

evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it.

That set me off looking at evilvte, and it was one of those rare projects which seems to be pretty stable, and also hasn't changed in any recent release of Debian GNU/Linux:

  • lenny had 0.4.3-1.
  • etch had nothing.
  • squeeze had 0.4.6-1.
  • wheezy has release 0.5.1-1.
  • jessie has release 0.5.1-1.
  • stretch has release 0.5.1-1.
  • sid has release 0.5.1-1.

I wonder if it would be possible to easily generate a list of packages which have the same revision in multiple distributions? Anyway, I had a look at the source and unfortunately spotted that it didn't handle clicking on hyperlinks terribly well. Clicking on a link would pretty much run:

 firefox '%s'

That meant there was an obvious security problem.
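To see why, consider a link whose URL itself contains a single quote (the URL below is a contrived, attacker-controlled example):

```shell
# The quote in the URL terminates the argument; everything after it
# becomes a separate shell command once the string reaches sh -c.
url="http://example.com/'; touch /tmp/pwned; echo '"
printf "firefox '%s'\n" "$url"
# The printed command line, if handed to a shell, would also run
# `touch /tmp/pwned`.
```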

It is a great terminal though, and it just goes to show how short, simple, and readable such things can be. I enjoyed looking at the source, and furthermore enjoyed using it. Unfortunately due to a dependency issue it looks like this package will be removed from stretch.

Charles Plessy: Beware of libinput 1.6.0-1

9 February, 2017 - 20:22

Since I updated this evening, touch to click with my touchpad is almost totally broken. Fortunately, a correction is pending.

Sven Hoexter: Limit host access based on LDAP groupOfUniqueNames with sssd

9 February, 2017 - 19:02

For CentOS 4 to CentOS 6 we used pam_ldap to restrict host access to machines, based on groupOfUniqueNames entries in an OpenLDAP directory. With RHEL/CentOS 6, Red Hat already deprecated pam_ldap and strongly recommended using sssd instead, and with RHEL/CentOS 7 they finally removed pam_ldap from the distribution.

Since pam_ldap supported groupOfUniqueNames to restrict logins, a large collection of groupOfUniqueNames had been created to restrict access to all kinds of groups/projects and so on. But sssd is in general only able to filter based on an "ldap_access_filter", or to use the host attribute via "ldap_user_authorized_host"; neither allows the use of groupOfUniqueNames. So to allow a smooth migration I had to configure sssd in some way to still support groupOfUniqueNames. The configuration I ended up with looks like this:

[domain/hostacl]
autofs_provider = none
ldap_schema = rfc2307bis
# to work properly we've to keep the search_base at the highest level
ldap_search_base = ou=foo,ou=people,o=myorg
ldap_default_bind_dn = cn=ro,ou=ldapaccounts,ou=foo,ou=people,o=myorg
ldap_default_authtok = foobar
id_provider = ldap
auth_provider = ldap
chpass_provider = none
ldap_uri = ldaps://ldapserver:636
ldap_id_use_start_tls = false
cache_credentials = false
ldap_tls_cacertdir = /etc/pki/tls/certs
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
ldap_tls_reqcert = allow
ldap_group_object_class = groupOfUniqueNames
ldap_group_member = uniqueMember
access_provider = simple
simple_allow_groups = fraappmgmtt

[sssd]
domains = hostacl
services = nss, pam
config_file_version = 2

Important side note: With current sssd versions you're more or less forced to use ldaps with a validating CA chain, though hostnames are not required to match the CN/SAN so far.

Relevant are:

  • set the ldap_schema to rfc2307bis to use a schema that knows about groupOfUniqueNames at all
  • set the ldap_group_object_class to groupOfUniqueNames
  • set the ldap_group_member to uniqueMember
  • use the access_provider simple

In practice, what we do is match the members of the groupOfUniqueNames to the sssd-internal group representation.

The best explanation of the several possible LDAP object classes for group representation that I've found so far is unfortunately in a German blog post. Another explanation is in the LDAP wiki. In short: within a groupOfUniqueNames you'll find full DNs, while in a posixGroup you usually find login names. Different object classes require different handling.
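The difference is easiest to see side by side (the DNs below are invented to match the configuration above):

```ldif
# groupOfUniqueNames: members are full DNs
dn: cn=fraappmgmtt,ou=groups,ou=foo,ou=people,o=myorg
objectClass: groupOfUniqueNames
cn: fraappmgmtt
uniqueMember: uid=jdoe,ou=foo,ou=people,o=myorg

# posixGroup: members are plain login names
dn: cn=admins,ou=groups,ou=foo,ou=people,o=myorg
objectClass: posixGroup
cn: admins
gidNumber: 10000
memberUid: jdoe
```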

Next step would be to move auth and nss functionality to sssd as well.

Vincent Bernat: Integration of a Go service with systemd

9 February, 2017 - 15:00

Unlike other programming languages, Go’s runtime doesn’t provide a way to reliably daemonize a service. A system daemon has to supply this functionality. Most distributions ship systemd which would fit the bill. A correct integration with systemd is quite straightforward. There are two interesting aspects: readiness & liveness.

As an example, we will daemonize this service whose goal is to answer requests with nifty 404 errors:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    l, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    http.Serve(l, nil)
}

You can build it with go build 404.go.

Here is the service file, 404.service1:

[Unit]
Description=404 micro-service

[Service]
Type=notify
ExecStart=/usr/bin/404
WatchdogSec=30s
Restart=on-failure

[Install]
WantedBy=multi-user.target

The classic way for a Unix daemon to signal its readiness is to daemonize. Technically, this is done by calling fork(2) twice (which also serves other purposes). This is a very common task and the BSD systems, as well as some other C libraries, supply a daemon(3) function for this purpose. Services are expected to daemonize only when they are ready (after reading configuration files and setting up a listening socket, for example). Then, a system can reliably initialize its services with a simple linear script:

syslogd
unbound
ntpd -s

Each daemon can rely on the previous one being ready to do its work. The sequence of actions is the following:

  1. syslogd reads its configuration, activates /dev/log, daemonizes.
  2. unbound reads its configuration, listens on 127.0.0.1:53, daemonizes.
  3. ntpd reads its configuration, connects to NTP peers, waits for clock to be synchronized2, daemonizes.

With systemd, we would use Type=fork in the service file. However, Go’s runtime does not support that. Instead, we use Type=notify. In this case, systemd expects the daemon to signal its readiness with a message to a Unix socket. The go-systemd package handles the details for us:

package main

import (
    "log"
    "net"
    "net/http"

    "github.com/coreos/go-systemd/daemon"
)

func main() {
    l, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    daemon.SdNotify(false, "READY=1") // ❶
    http.Serve(l, nil)                // ❷
}

It’s important to place the notification after net.Listen() (in ❶): if the notification was sent earlier, a client would get “connection refused” when trying to use the service. When a daemon listens to a socket, connections are queued by the kernel until the daemon is able to accept them (in ❷).

If the service is not run through systemd, the added line is a no-op.


Another interesting feature of systemd is its ability to watch the service and restart it if it happens to crash (thanks to the Restart=on-failure directive). It’s also possible to use a watchdog: the service sends watchdog keep-alives at regular intervals. If it fails to do so, systemd will restart it.

We could insert the following code just before http.Serve() call:

go func() {
    interval, err := daemon.SdWatchdogEnabled(false)
    if err != nil || interval == 0 {
        return
    }
    for {
        daemon.SdNotify(false, "WATCHDOG=1")
        time.Sleep(interval / 3)
    }
}()

However, this doesn’t add much value: the goroutine is unrelated to the core business of the service. If, for some reason, the HTTP part gets stuck, the goroutine will happily continue to send keep-alives to systemd.

In our example, we can just do a HTTP query before sending the keep-alive. The internal loop can be replaced with this code:

for {
    _, err := http.Get("") // ❸
    if err == nil {
        daemon.SdNotify(false, "WATCHDOG=1")
    }
    time.Sleep(interval / 3)
}

In ❸, we connect to the service to check if it’s still working. If we get some kind of answer, we send a watchdog keep-alive. If the service is unavailable or if http.Get() gets stuck, systemd will trigger a restart.

There is no universal recipe. However, checks can be split into two groups:

  • Before sending a keep-alive, you execute an active check on the components of your service. The keep-alive is sent only if all checks are successful. The checks can be internal (like in the above example) or external (for example, check with a query to the database).

  • Each component reports its status, telling if it’s alive or not. Before sending a keep-alive, you check the reported status of all components (passive check). If some components are late or reported fatal errors, don’t send the keep-alive.

If possible, recovery from errors (for example, with a backoff retry) and self-healing (for example, by reestablishing a network connection) is always better, but the watchdog is a good tool to handle the worst cases and avoid too complex recovery logic.

For example, if a component doesn’t know how to recover from an exceptional condition3, instead of using panic(), it could signal its situation before dying. Another dedicated component could try to resolve the situation by restarting the faulty component. If it fails to reach a healthy state in time, the watchdog timer will trigger and the whole service will be restarted.

  1. Depending on the distribution, this should be installed in /lib/systemd/system or /usr/lib/systemd/system. Check with the output of the command pkg-config systemd --variable=systemdsystemunitdir. 

  2. This highly depends on the NTP daemon used. OpenNTPD doesn’t wait unless you use the -s option. ISC NTP doesn’t either unless you use the --wait-sync option. 

  3. An example of an exceptional condition is to reach the limit on the number of file descriptors. Self-healing from this situation is difficult and it’s easy to get stuck in a loop. 

Dirk Eddelbuettel: RcppArmadillo 0.7.700.0.0

9 February, 2017 - 08:24

Time for another update of RcppArmadillo with a new release 0.7.700.0.0 based on a fresh Armadillo 7.700.0. Following my full reverse-dependency check of 318 packages (commit log here), CRAN took another day to check again.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 318 other packages on CRAN -- an increase of 20 just since the last CRAN release of 0.7.600.1.0 in December!

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.700.0.0 (2017-02-07)
  • Upgraded to Armadillo release 7.700.0 ("Rogue State")

    • added polyfit() and polyval()

    • added second form of log_det() to directly return the result as a complex number

    • added range() to statistics functions

    • expanded trimatu()/trimatl() and symmatu()/symmatl() to handle sparse matrices

Changes in RcppArmadillo version 0.7.600.2.0 (2017-01-05)
  • Upgraded to Armadillo release 7.600.2 ("Coup d'Etat Deluxe")

    • Bug fix to memory allocation for fields

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Iustin Pop: Solarized colour theme

9 February, 2017 - 07:18

A while back I was looking for some information on the web and happened upon a blog post on the subject. I don't remember what I was looking for, but on the same blog there was a screenshot of what I then learned was the Solarized theme. It caught my eye so much that I decided to try it myself ASAP.

For many years, up until last year, I had been using the 'black on light yellow' xterm scheme. This is good during the day, but too strong at night, so on some machines I switched to 'white on black', but that was not entirely satisfying.

The solarized theme promises consistent colours over both light and dark backgrounds, which would help to make my setups finally consistent, and it extends to a number of programs. Amongst these, there are themes for mutt on both light and dark backgrounds using only 16 colours. This was good, as my previous hand-built theme was based on 256 colours, which doesn't work well in the Linux console.

So I tried changing my terminal to the custom colours, played with it for about 10 minutes, then decided that its contrast was too low, bordering on unreadable. I switched to another desktop where I still had an xterm open using white-on-black, and—this being at night—my eyes immediately went 'no no no, too high contrast'. In about ten minutes I got so used to the new theme that the old one was really, really uncomfortable. There was no turning back now ☺

Interestingly, the light theme was not that much better than black-on-light-yellow, as that theme is already pretty well behaved. But I still migrated for consistency.


Starting from the home page and the internet, I found resources for:

  • Vim and Emacs (for which I use the debian package elpa-solarized-theme).
  • Midnight Commander, for which I currently use peel's theme, although I'm not happy with it; interestingly, the default theme almost works on 16-custom-colours light terminal scheme, but not quite on the dark one.
  • Mutt, which is both in the main combined repository but also on the separate one. I'm not really happy with mutt's theme either, but that seems mostly because I was using a quite different theme before. I'll try to improve what I feel is missing over time.
  • dircolors; I found this to be an absolute requirement for good readability of ls --color, as the defaults are too bad
  • I also took the opportunity to unify my git diff and colordiff theme, but this was not really something that I found and took 'as-is' from some repository; I basically built my own theme.
16 vs 256 colours

The solarized theme/configuration can be done in two ways:

  • by changing the Xresources/terminal 16 basic colours to custom RGB values, or:
  • by using approximations from the fixed 256 colours available in the xterm-256color terminfo

Upstream recommends the custom ones, as they are precisely tuned, instead of using the approximated ones; honestly I don't know if it would make a difference. It's too bad upstream went silent a few years back, as technically it's possible to also override colours above 16 in the 256-colour palette. In any case, each of the two options has its cons:

  • using customised 16-colour means that all terminal programs get the new colours scheme, even if they were designed (colour-wise) based on the standard values; this makes some things pretty unreadable (hence the need to fix dircolors), but at least somewhat consistent.
  • using the 256-colour palette, unchanged programs stay the same, but now they look very different from the programs that were updated to solarized; note though that I haven't tested this, that's just how I understand things would be.

So either way it's not perfect.

Desktop-wide consistency

Also not perfect is that for proper consistent look, many more programs would have to be changed; but I don't see that happening in today's world. I've seen for example 3 or 4 Midnight Commander themes, but none of them were actually in the spirit of solarized, even though they were tweaked for solarized.

Even between vim and emacs, which both have one canonical solarized theme, the look is close but not really the same (looking at the markdown source for this blog post: URLs, headers and spelling mistakes are all different), but this might not necessarily be due to the theme itself.

So no global theme consistency (I'd wish), but still, I find this much better on the eyes and not lower on readability after getting used to it.

Thanks Ethan!

Manuel A. Fernandez Montecelo: FOSDEM 2017: People, RISC-V and ChaosKey

9 February, 2017 - 06:52

This year, for the first time, I attended FOSDEM.

There I met lots of people, including:

  • friends that I don't see very often;
  • old friends that I didn't expect to see there, some of whom decided to travel from far away in the last minute;
  • people whom I met in person for the first time, having previously known them only through the internet -- one of whom is a protagonist of a previous blog entry, about the Debian port for OpenRISC;

I met new people in:

  • bars/pubs,
  • restaurants,
  • breakfast tables at lodgings,
  • and public transport.

... from the first hour to the last hour of my stay in Brussels.

In summary, lots of people around.

I also hoped to meet or spend some (more) time with a few people, but in the end I didn't catch them, or could not spend as much time with them as I would have wished.

For somebody like me who enjoys quiet time alone, it was a bit too intensive in terms of interacting with people. But overall it was a nice winter break, definitely worth attending, and an even better experience than I had expected.

Talks / Events

Of course, I also attended a few talks, some of which were very interesting; although the event is so (sometimes uncomfortably) crowded that the rooms were full more often than not, in which case it was not possible to enter (the doors were closed) or there were very long queues.

And with so many talks crammed into a weekend, I had so many schedule clashes with the talks that I had pre-selected as interesting, that I ended up missing most of them.

In terms of technical stuff, I especially enjoyed the talk by Arun Thomas, RISC-V -- Open Hardware for Your Open Source Software, and some conversations about toolchain and other upstream work, as well as about the Debian port for RISC-V.

The talk Resurrecting dinosaurs, what can possibly go wrong? -- How Containerised Applications could eat our users, by Richard Brown, was also very good.


Apart from that, I witnessed a shady cash transaction, in a bus from the city centre to FOSDEM, in exchange for hardware -- not very unlike what I had read about only days before.

So I could not help but get involved in a subsequent transaction myself, to lay my hands on a ChaosKey.

Steve Kemp: Old packages are interesting.

9 February, 2017 - 05:00

Recently Vincent Bernat wrote about writing his own simple terminal, using vte. That was a fun read, as the sample code built really easily and was functional.

At the end of his post he said :

evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it.

That set me off looking at evilvte, and it was one of those rare projects which seems to be pretty stable, and also hasn't changed in any recent release of Debian GNU/Linux:

  • lenny had 0.4.3-1.
  • etch had nothing.
  • squeeze had 0.4.6-1.
  • wheezy has release 0.5.1-1.
  • jessie has release 0.5.1-1.
  • stretch has release 0.5.1-1.
  • sid has release 0.5.1-1.

I wonder if it would be possible to easily generate a list of packages which have the same revision in multiple distributions? Anyway, I had a look at the source, and unfortunately spotted that it didn't handle clicking on hyperlinks safely. Clicking on a link would pretty much run:

 firefox '%s'

Since the URL is substituted into that command line, there was an obvious security problem.

It is a great terminal though, and it just goes to show how short, simple, and readable such things can be. I enjoyed looking at the source, and furthermore enjoyed using it. Unfortunately due to a dependency issue it looks like this package will be removed from stretch.
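
Kemp's question about same-revision packages can be sketched with rmadison (from devscripts) and a little awk. This is my own sketch, not an existing tool; the sample data stands in for what `rmadison evilvte` would print, using its "package | version | suite | arch" line format:

```shell
# Sample rmadison-style output; in practice, pipe `rmadison <package>`
# into the awk stage instead.
madison_sample='evilvte | 0.4.6-1 | squeeze | source
evilvte | 0.5.1-1 | wheezy  | source
evilvte | 0.5.1-1 | jessie  | source
evilvte | 0.5.1-1 | stretch | source'
# Count how many suites carry each version and report the shared ones.
shared=$(printf '%s\n' "$madison_sample" |
  awk -F'|' '{gsub(/ /, "", $2); n[$2]++}
             END {for (v in n) if (n[v] > 1) print v, "shared by", n[v], "suites"}')
echo "$shared"   # → 0.5.1-1 shared by 3 suites
```

Run over real rmadison output, a version reported as shared by several suites is exactly the situation in the list above.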

Antoine Beaupré: Reliably generating good passwords

9 February, 2017 - 00:00

Passwords are used everywhere in our modern life. Between your email account and your bank card, a lot of critical security infrastructure relies on "something you know", a password. Yet there is little standard documentation on how to generate good passwords. There are some interesting possibilities for doing so; this article will look at what makes a good password and some tools that can be used to generate them.

There is growing concern that our dependence on passwords poses a fundamental security flaw. For example, passwords rely on humans, who can be coerced to reveal secret information. Furthermore, passwords are "replayable": if your password is revealed or stolen, anyone can impersonate you to get access to your most critical assets. Therefore, major organizations are trying to move away from single password authentication. Google, for example, is enforcing two-factor authentication for its employees and is considering abandoning passwords on phones as well, although we have yet to see that controversial change implemented.

Yet passwords are still here and are likely to stick around for a long time until we figure out a better alternative. Note that in this article I use the word "password" instead of "PIN" or "passphrase", which all roughly mean the same thing: a small piece of text that users provide to prove their identity.

What makes a good password?

A "good password" may mean different things to different people. I will assert that a good password has the following properties:

  • high entropy: hard to guess for machines
  • transferable: easy to communicate for humans or transfer across various protocols for computers
  • memorable: easy to remember for humans

High entropy means that the password should be unpredictable to an attacker, for all practical purposes. It is tempting (and not uncommon) to choose a password based on something else that you know, but unfortunately those choices are likely to be guessable, no matter how "secret" you believe it is. Yes, with enough effort, an attacker can figure out your birthday, the name of your first lover, your mother's maiden name, where you were last summer, or other secrets people think they have.

The only solution here is to use a password randomly generated with enough randomness or "entropy" that brute-forcing the password will be practically infeasible. Considering that a modern off-the-shelf graphics card can guess millions of passwords per second using freely available software like hashcat, the typical requirement of "8 characters" is not considered enough anymore. With proper hardware, a powerful rig can crack such passwords offline within about a day. Even though a recent US National Institute of Standards and Technology (NIST) draft still recommends a minimum of eight characters, we now more often hear recommendations of twelve characters or fourteen characters.

A password should also be easily "transferable". Some characters, like & or !, have special meaning on the web or the shell and can wreak havoc when transferred. Certain software also has policies of refusing (or requiring!) some special characters exactly for that reason. Weird characters also make it harder for humans to communicate passwords across voice channels or different cultural backgrounds. In a more extreme example, the popular Signal software even resorted to using only digits to transfer key fingerprints. They outlined that numbers are "easy to localize" (as opposed to words, which are language-specific) and "visually distinct".

But the critical piece is the "memorable" part: it is trivial to generate a random string of characters, but those passwords are hard for humans to remember. As xkcd noted, "through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess". The comic explains how a series of words is a better password than a single word with some characters replaced.

Obviously, you should not need to remember all passwords. Indeed, you may store some in password managers (which we'll look at in another article) or write them down in your wallet. In those cases, what you need is not a password, but something I would rather call a "token", or, as Debian Developer Daniel Kahn Gillmor (dkg) said in a private email, a "high entropy, compact, and transferable string". Certain APIs are specifically crafted to use tokens. OAuth, for example, generates "access tokens" that are random strings that give access to services. But in our discussion, we'll use the term "token" in a broader sense.

Notice how we removed the "memorable" property and added the "compact" one: we want to efficiently convert the most entropy into the shortest password possible, to work around possibly limiting password policies. For example, some bank cards only allow 5-digit security PINs and most web sites have an upper limit in the password length. The "compact" property applies less to "passwords" than tokens, because I assume that you will only use a password in select places: your password manager, SSH and OpenPGP keys, your computer login, and encryption keys. Everything else should be in a password manager. Those tools are generally under your control and should allow large enough passwords that the compact property is not particularly important.

Generating secure passwords

We'll look now at how to generate a strong, transferable, and memorable password. These are most likely the passwords you will deal with most of the time, as security tokens used in other settings should actually never show up on screen: they should be copy-pasted or automatically typed in forms. The password generators described here are all operated from the command line. Password managers often have embedded password generators, but usually don't provide an easy way to generate a password for the vault itself.

The previously mentioned xkcd cartoon is probably a common cultural reference in the security crowd and I often use it to explain how to choose a good passphrase. It turns out that someone actually implemented xkcd author Randall Munroe's suggestion into a program called xkcdpass:

    $ xkcdpass
    estop mixing edelweiss conduct rejoin flexitime

In verbose mode, it will show the actual entropy of the generated passphrase:

    $ xkcdpass -V
    The supplied word list is located at /usr/lib/python3/dist-packages/xkcdpass/static/default.txt.
    Your word list contains 38271 words, or 2^15.22 words.
    A 6 word password from this list will have roughly 91 (15.22 * 6) bits of entropy,
    assuming truly random word selection.
    estop mixing edelweiss conduct rejoin flexitime

Note that the above password has 91 bits of entropy, which is about what a fifteen-character password would have, if chosen at random from uppercase, lowercase, digits, and ten symbols:

    log2((26 + 26 + 10 + 10)^15) = approx. 92.548875

It's also interesting to note that this is closer to the entropy of a fifteen-letter base64 encoded password: since each character is six bits, you end up with 90 bits of entropy. xkcdpass is scriptable and easy to use. You can also customize the word list, separators, and so on with different command-line options. By default, xkcdpass uses the 2 of 12 word list from 12 dicts, which is not specifically geared toward password generation but has been curated for "common words" and words of different sizes.

Another option is the diceware system. Diceware works by having a word list in which you look up words based on dice rolls. For example, rolling the five dice "1 4 2 1 4" would give the word "bilge". By rolling those dice five times, you generate a five word password that is both memorable and random. Since paper and dice do not seem to be popular anymore, someone wrote that as an actual program, aptly called diceware. It works in a similar fashion, except that passwords are not space separated by default:

    $ diceware

Diceware can obviously change the output to look similar to xkcdpass, but can also accept actual dice rolls for those who do not trust their computer's entropy source:

    $ diceware -d ' ' -r realdice -w en_orig
    Please roll 5 dice (or a single dice 5 times).
    What number shows dice number 1? 4
    What number shows dice number 2? 2
    What number shows dice number 3? 6
    Aspire O's Ester Court Born Pk

The diceware software ships with a few word lists, and the default list has been deliberately created for generating passwords. It is derived from the standard diceware list with additions from the SecureDrop project. Diceware ships with the EFF word list that has words chosen for better recognition, but it is not enabled by default, even though diceware recommends using it when generating passwords with dice. That is because the EFF list was added later on. The project is currently considering making the EFF list be the default.

One disadvantage of diceware is that it doesn't actually show how much entropy the generated password has — those interested need to compute it for themselves. The actual number depends on the word list: the default word list has 13 bits of entropy per word (since it is exactly 8192 words long), which means the default 6 word passwords have 78 bits of entropy:

    log2(8192) * 6 = 78

Both of these programs are rather new, having, for example, entered Debian only after the last stable release, so they may not be directly available for your distribution. The manual diceware method, of course, only needs a set of dice and a word list, so that is much more portable, and both the diceware and xkcdpass programs can be installed through pip. However, if this is all too complicated, you can take a look at Openwall's passwdqc, which is older and more widely available. It generates more memorable passphrases while at the same time allowing for better control over the level of entropy:

    $ pwqgen
    $ pwqgen random=78

For some reason, passwdqc restricts the entropy of passwords between the bounds of 24 and 85 bits. That tool is also much less customizable than the other two: what you see here is pretty much what you get. The 4096-word list is also hardcoded in the C source code; it comes from a Usenet sci.crypt posting from 1997.

A key feature of xkcdpass and diceware is that you can craft your own word list, which can make dictionary-based attacks harder. Indeed, with such word-based password generators, the only viable way to crack those passwords is to use dictionary attacks, because the password is so long that character-based exhaustive searches are not workable, since they would take centuries to complete. Changing from the default dictionary therefore brings some advantage against attackers. This may be yet another "security through obscurity" procedure, however: a naive approach may be to use a dictionary localized to your native language (for example, in my case, French), but that would deter only an attacker that doesn't do basic research about you, so that advantage is quickly lost to determined attackers.
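
The core idea of word-based generation (drawing words uniformly from a list of your choosing) can be sketched with plain shell tools. This is not how xkcdpass or diceware are implemented, the word list below is a toy stand-in, and note that shuf samples without replacement, unlike real diceware, so the entropy is very slightly lower:

```shell
# Toy word list standing in for a curated dictionary.
printf '%s\n' correct horse battery staple zebra mango walnut ivory > /tmp/demo-wordlist.txt
# Draw 6 words using the kernel's entropy pool as the randomness source.
passphrase=$(shuf --random-source=/dev/urandom -n 6 /tmp/demo-wordlist.txt | paste -sd' ' -)
echo "$passphrase"
```

With a realistically sized list, substitute your own curated dictionary for the toy one.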

One should also note that the entropy of the password doesn't depend on which word list is chosen, only its length. Furthermore, a larger dictionary only expands the search space logarithmically; in other words, doubling the word-list length only adds a single bit of entropy. It is actually much better to add a word to your password than words to the word list that generates it.
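
The arithmetic behind those two claims, using diceware's default 8192-word list and six words (a quick check, not from the original article):

```shell
python3 - <<'EOF'
from math import log2
d, n = 8192, 6
print(round(n * log2(d)))        # 78 bits: 6 words from an 8192-word list
print(round(n * log2(2 * d)))    # 84 bits: doubling the list adds 1 bit per word
print(round((n + 1) * log2(d)))  # 91 bits: one extra word adds 13 bits
EOF
```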

Generating security tokens

As mentioned before, most password managers feature a way to generate strong security tokens, with different policies (symbols or not, length, etc). In general, you should use your password manager's password-generation functionality to generate tokens for sites you visit. But how are those functionalities implemented and what can you do if your password manager (for example, Firefox's master password feature) does not actually generate passwords for you?

pass, the standard UNIX password manager, delegates this task to the widely known pwgen program. It turns out that pwgen has a pretty bad track record for security issues, especially in the default "phoneme" mode, which generates non-uniformly distributed passwords. While pass uses the more "secure" -s mode, I figured it was worth removing that option to discourage the use of pwgen in the default mode. I made a trivial patch to pass so that it generates passwords correctly on its own. The gory details are in this email. It turns out that there are lots of ways to skin this particular cat. I was suggesting the following pipeline to generate the password:

    head -c $entropy /dev/random | base64 | tr -d '\n='

The above command reads a certain number of bytes from the kernel (head -c $entropy /dev/random) encodes that using the base64 algorithm and strips out the trailing equal sign and newlines (for large passwords). This is what Gillmor described as a "high-entropy compact printable/transferable string". The priority, in this case, is to have a token that is as compact as possible with the given entropy, while at the same time using a character set that should cause as little trouble as possible on sites that restrict the characters you can use. Gillmor is a co-maintainer of the Assword password manager, which chose base64 because it is widely available and understood and only takes up 33% more space than the original 8-bit binary encoding. After a lengthy discussion, the pass maintainer, Jason A. Donenfeld, chose the following pipeline:

    read -r -n $length pass < <(LC_ALL=C tr -dc "$characters" < /dev/urandom)

The above is similar, except that it uses tr to select characters directly from the kernel's output, restricted to a certain set ($characters) that is defined earlier as consisting of [:alnum:] for letters and digits and [:graph:] for symbols, depending on the user's configuration. Then the read command extracts the chosen number of characters from the output and stores the result in the pass variable. A participant on the mailing list, Brian Candler, argued that this wastes entropy, as the use of tr discards bits from /dev/urandom with little gain when compared to base64. But in the end, the maintainer argued that reading from /dev/urandom has no effect on /proc/sys/kernel/random/entropy_avail on Linux and dismissed the objection.
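
To make the base64 approach concrete, here is a small run of the pipeline quoted earlier, reading 16 bytes (128 bits). I use /dev/urandom rather than /dev/random to avoid blocking on the entropy pool; $entropy counts bytes, as in the original:

```shell
entropy=16   # bytes, i.e. 128 bits
# Read raw bytes, base64-encode them, strip newlines and '=' padding.
token=$(head -c "$entropy" /dev/urandom | base64 | tr -d '\n=')
echo "$token"
echo "${#token}"   # → 22: base64 turns every 3 bytes into 4 characters
```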

Another password manager, KeePass, uses its own routines to generate tokens, but the procedure is the same: read from the kernel's entropy source (and user-generated sources in the case of KeePass) and transform that data into a transferable string.


While there are many aspects to password management, we have focused on different techniques for users and developers to generate secure but also usable passwords. Generating a strong yet memorable password is not a trivial problem as the security vulnerabilities of the pwgen software showed. Furthermore, left to their own devices, users will generate passwords that can be easily guessed by a skilled attacker, especially if they can profile the user. It is therefore essential we provide easy tools for users to generate strong passwords and encourage them to store secure tokens in password managers.

Note: this article first appeared in the Linux Weekly News.

Alberto García: QEMU and the qcow2 metadata checks

8 February, 2017 - 15:52

When choosing a disk image format for your virtual machine, one of the factors to take into consideration is its I/O performance. In this post I’ll talk a bit about the internals of qcow2 and about one of the aspects that can affect its performance under QEMU: its consistency checks.

As you probably know, qcow2 is QEMU’s native file format. The first thing that I’d like to highlight is that this format is perfectly fine in most cases and its I/O performance is comparable to that of a raw file. When it isn’t, chances are that this is due to an insufficiently large L2 cache. In one of my previous blog posts I wrote about the qcow2 L2 cache and how to tune it, so if your virtual disk is too slow, you should go there first.

I also recommend Max Reitz and Kevin Wolf’s qcow2: why (not)? talk from KVM Forum 2015, where they talk about a lot of internal details and show some performance tests.

qcow2 clusters: data and metadata

A qcow2 file is organized into units of constant size called clusters. The cluster size defaults to 64KB, but a different value can be set when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G

Clusters can contain either data or metadata. A qcow2 file grows dynamically and only allocates space when it is actually needed, so apart from the header there’s no fixed location for any of the data and metadata clusters: they can appear mixed anywhere in the file.

Here’s an example of what it looks like internally:

In this example we can see the most important types of clusters that a qcow2 file can have:

  • Header: this one contains basic information such as the virtual size of the image, the version number, and pointers to where the rest of the metadata is located, among other things.
  • Data clusters: the data that the virtual machine sees.
  • L1 and L2 tables: a two-level structure that maps the virtual disk that the guest can see to the actual location of the data clusters in the qcow2 file.
  • Refcount table and blocks: a two-level structure with a reference count for each data cluster. Internal snapshots use this: a cluster with a reference count >= 2 means that it’s used by other snapshots, and therefore any modifications require a copy-on-write operation.
Metadata overlap checks

In order to detect corruption when writing to qcow2 images QEMU (since v1.7) performs several sanity checks. They verify that QEMU does not try to overwrite sections of the file that are already being used for metadata. If this happens, the image is marked as corrupted and further access is prevented.

Although in most cases these checks are innocuous, under certain scenarios they can have a negative impact on disk write performance. This depends a lot on the case, and I want to insist that in most scenarios it doesn’t have any effect. When it does, the general rule is that you’ll have more chances of noticing it if the storage backend is very fast or if the qcow2 image is very large.

In these cases, and if I/O performance is critical for you, you might want to consider tweaking the images a bit or disabling some of these checks, so let’s take a look at them. There are currently eight different checks. They’re named after the metadata sections that they check, and can be divided into the following categories:

  1. Checks that run in constant time. These are equally fast for all kinds of images and I don’t think they’re worth disabling.
    • main-header
    • active-l1
    • refcount-table
    • snapshot-table
  2. Checks that run in variable time but don’t need to read anything from disk.
    • refcount-block
    • active-l2
    • inactive-l1
  3. Checks that need to read data from disk. There is just one check here and it’s only needed if there are internal snapshots.
    • inactive-l2

By default all tests are enabled except for the last one (inactive-l2), because it needs to read data from disk.

Disabling the overlap checks

Tests can be disabled or enabled from the command line using the following syntax:

-drive file=hd.qcow2,overlap-check.inactive-l2=on
-drive file=hd.qcow2,overlap-check.snapshot-table=off

It’s also possible to select the group of checks that you want to enable using the following syntax:

-drive file=hd.qcow2,overlap-check.template=none
-drive file=hd.qcow2,overlap-check.template=constant
-drive file=hd.qcow2,overlap-check.template=cached
-drive file=hd.qcow2,overlap-check.template=all

Here, none means that no tests are enabled, constant enables all tests from group 1, cached enables all tests from groups 1 and 2, and all enables all of them.

As I explained in the previous section, if you’re worried about I/O performance then the checks that are probably worth evaluating are refcount-block, active-l2 and inactive-l1. I’m not counting inactive-l2 because it’s off by default. Let’s look at the other three:

  • inactive-l1: This is a variable-time check because it depends on the number of internal snapshots in the qcow2 image. However, its performance impact is likely to be negligible in all cases, so I don’t think it’s worth bothering with.
  • active-l2: This check depends on the virtual size of the image, and on the percentage that has already been allocated. This check might have some impact if the image is very large (several hundred GBs or more). In that case one way to deal with it is to create an image with a larger cluster size. This also has the nice side effect of reducing the amount of memory needed for the L2 cache.
  • refcount-block: This check depends on the actual size of the qcow2 file and it’s independent from its virtual size. This check is relatively expensive even for small images, so if you notice performance problems chances are that they are due to this one. The good news is that we have been working on optimizing it, so if it’s slowing down your VMs the problem might go away completely in QEMU 2.9.

The qcow2 consistency checks are useful to detect data corruption, but they can affect write performance.

If you’re unsure and you want to check it quickly, open an image with overlap-check.template=none and see for yourself, but remember again that this will only affect write operations. To obtain more reliable results you should also open the image with cache=none in order to perform direct I/O and bypass the page cache. I’ve seen performance increases of 50% and more, but whether you’ll see them depends a lot on your setup. In many cases you won’t notice any difference.
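
Concretely, that quick check amounts to booting the same guest twice with the same write workload, once with each setting (a sketch; the image name and the rest of the QEMU command line are illustrative):

```
qemu-system-x86_64 … -drive file=hd.qcow2,cache=none,overlap-check.template=all
qemu-system-x86_64 … -drive file=hd.qcow2,cache=none,overlap-check.template=none
```

Then compare the results of the same benchmark (for example, a simple dd of a few gigabytes inside the guest) between the two runs.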

I hope this post was useful to learn a bit more about the qcow2 format. There are other things that can help QEMU perform better, and I’ll probably come back to them in future posts, so stay tuned!


My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the rest of the QEMU development team.

Shirish Agarwal: Sex, death and nature

8 February, 2017 - 14:56


There is/was a somewhat controversial book by Osho which I read long back, ‘Sambhog se Samadhi Ki Aur’, or the English version ‘From Sex to Superconsciousness’.

While I can’t say I understand or understood it all (I read it about a decade back), the main point shared in the book was that if you are able to achieve bliss/orgasm during sex, you might be able to have a glimpse of super-consciousness.

I had to share the above context as I had gone to a meetup a couple of weeks back where a friend, Dr. Swati Shome, is attempting to write an educational book for teenagers to talk about sex. I did help her a bit; in the past I tried to share some of the concerns I had, as my generation didn’t have any guidance from parents or teachers. Most of us were left to our own devices, which is similar to today’s children as well, with the exception that they have the web. Look at both books, both written in Pune (my home town) and both talking about the same subject, but from such different view-points. If you see the comments on the meetup page, it is really painful to see people’s concerns. I don’t know if there is any solution to the widespread ignorance, myth-making etc., and hence felt a bit sad. Sharing a small clip I had seen a few months back.

Just to give a bit of context: the law, as has been shared, was passed in 2015, after the 2012 Delhi gang rape. Part of it is also that Indian society still frowns upon live-in relationships, so it may in part be a push-back from the conservatives. After all, the BJP, a right-of-center party, has been in power for 2.5 years now, so it’s possible that they were part of it. As I don’t have enough knowledge of what the actual case was, who the litigants and defendants were, or the lawyers and judge involved, I cannot speculate further. If somebody has more info or a link, please pass it on. It would be interesting to know if it was a single-bench ruling or a 3-5 judge bench.

The yin-yang symbol I had shared becomes a bit more apt, as in quite a few cultures, including Indian and Japanese, the two are seen as two sides of the same coin: one life-giving, the other life-taking, or not even taking but converting into something else.


That came a few days later, when I was reading an article about sleep. The purpose of sleep, it seems, is to forget. It was a slightly strange and yet interesting article. What disturbed me, though, was the bit about the mouse being killed and its brain being sliced. I tried to find many a justification for it, but none I could be at peace with. And the crux of that is that the being’s, the creature’s, wilful consent hasn’t been taken. In nature’s eyes humans and mice are one and the same. We don’t get any special passes due to the fact that we are human. A natural disaster doesn’t care whether you are small or big, fat or strong, mouse or wo/man, coward or brave. It’s sheer luck and disaster preparedness that determine whether people and animals get saved or not.

I thought quite a bit about why, instead of animals being used for scientific experiments, we don’t use actual humans. While I’m sure PETA supporters have probably spear-headed this idea for a long time, that doesn’t mean I can’t come to this realization by myself. After all, it’s not about pandering to a group but rather about what I think is right.

Passing the baton to humans does have its own knotty problems though. For any such kind of endeavour, people’s participation and wilful consent would be needed.

While humans can and do give wilful consent, it is a difficult problem, as you don’t know the situation in which that consent was given. We all know about organ trafficking. Many people, especially from lower economic backgrounds, may be enticed and cheated, given the economics involved. In most Indian middle and upper-middle classes, religion plays a part, even though with death the body is cremated and is supposed to scatter among the Pancha Mahaboota, the five elements.

I, for one, have no hang-ups if some scientist were to slice my brain, or for that matter any other part of my body, to find something, provided I’m dead. If more people thought like that, we probably wouldn’t have to specially grow and then kill lab mice and guinea pigs to test out theories, and medical innovation would probably be a lot faster than it is now. Ironically, most medical innovations have happened during wars, and that continues to this day.

Comments, ideas, suggestions and criticisms all are welcome.

Filed under: Miscellenous Tagged: #Death, #Innocence, #Medical Innovatiion, #Medicine, #Murder, #PETA, #Sex, #sleep, #war, education, exploitation, nature

Vincent Bernat: Write your own terminal emulator

8 February, 2017 - 06:30

I was a happy user of rxvt-unicode until I got a laptop with a HiDPI display. Switching from a LoDPI to a HiDPI screen and back was a pain: I had to manually adjust the font size on all terminals or restart them.

VTE is a library to build a terminal emulator using the GTK+ toolkit, which handles DPI changes. It is used by many terminal emulators, like GNOME Terminal, evilvte, sakura, termit and ROXTerm. The library is quite straightforward and writing a terminal doesn’t take much time if you don’t need many features.

Let’s see how to write a simple one.

A simple terminal

Let’s start small with a terminal with the default settings. We’ll write that in C. Another supported option is Vala.

#include <vte/vte.h>

int
main(int argc, char *argv[])
{
    GtkWidget *window, *terminal;

    /* Initialise GTK, the window and the terminal */
    gtk_init(&argc, &argv);
    terminal = vte_terminal_new();
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "myterm");

    /* Start a new shell */
    gchar **envp = g_get_environ();
    gchar **command = (gchar *[]){g_strdup(g_environ_getenv(envp, "SHELL")), NULL };
    g_strfreev(envp);
    vte_terminal_spawn_sync(VTE_TERMINAL(terminal),
        VTE_PTY_DEFAULT,
        NULL,       /* working directory  */
        command,    /* command */
        NULL,       /* environment */
        0,          /* spawn flags */
        NULL, NULL, /* child setup */
        NULL,       /* child pid */
        NULL, NULL);/* cancellable, error */

    /* Connect some signals */
    g_signal_connect(window, "delete-event", gtk_main_quit, NULL);
    g_signal_connect(terminal, "child-exited", gtk_main_quit, NULL);

    /* Put widgets together and run the main loop */
    gtk_container_add(GTK_CONTAINER(window), terminal);
    gtk_widget_show_all(window);
    gtk_main();
}
You can compile it with the following command:

gcc -O2 -Wall $(pkg-config --cflags --libs vte-2.91) term.c -o term

And run it with ./term:

More features

From here, you can have a look at the documentation to alter behavior or add more features. Here are three examples.


Colors
You can define the 16 basic colors with the following code:

#define CLR_R(x)   (((x) & 0xff0000) >> 16)
#define CLR_G(x)   (((x) & 0x00ff00) >>  8)
#define CLR_B(x)   (((x) & 0x0000ff) >>  0)
#define CLR_16(x)  ((double)(x) / 0xff)
#define CLR_GDK(x) (const GdkRGBA){ .red = CLR_16(CLR_R(x)), \
                                    .green = CLR_16(CLR_G(x)), \
                                    .blue = CLR_16(CLR_B(x)), \
                                    .alpha = 0 }
vte_terminal_set_colors(VTE_TERMINAL(terminal),
    &CLR_GDK(0xffffff),          /* foreground */
    &(GdkRGBA){ .alpha = 0.85 }, /* background */
    (const GdkRGBA[]){
        /* … 16 CLR_GDK(0xRRGGBB) palette entries … */
}, 16);

While you can’t see it on the screenshot [1], this also enables background transparency.

Miscellaneous settings

VTE comes with many settings to change the behavior of the terminal. Consider the following code:

vte_terminal_set_scrollback_lines(VTE_TERMINAL(terminal), 0);
vte_terminal_set_scroll_on_output(VTE_TERMINAL(terminal), FALSE);
vte_terminal_set_scroll_on_keystroke(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_rewrap_on_resize(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_mouse_autohide(VTE_TERMINAL(terminal), TRUE);

This will:

  • disable the scrollback buffer,
  • not scroll to the bottom on new output,
  • scroll to the bottom on keystroke,
  • rewrap content when the terminal size changes, and
  • hide the mouse cursor when typing.
Update the window title

An application can change the window title using XTerm control sequences (for example, with printf "\e]2;${title}\a"). If you want the actual window title to reflect this, you need to define this function:

static gboolean
on_title_changed(GtkWidget *terminal, gpointer user_data)
{
    GtkWindow *window = user_data;
    gtk_window_set_title(window,
        vte_terminal_get_window_title(VTE_TERMINAL(terminal)) ?: "myterm");
    return TRUE;
}

Then, connect it to the appropriate signal, in main():

g_signal_connect(terminal, "window-title-changed", 
    G_CALLBACK(on_title_changed), GTK_WINDOW(window));
Final words

I don’t need much more as I am using tmux inside each terminal. In my own copy, I have also added the ability to complete a word using ones from the current window or other windows (also known as dynamic abbrev expansion). This requires implementing a terminal daemon to handle all terminal windows with one process, similar to urxvtcd.

While writing a terminal “from scratch” [2] suits my needs, it may not be worth it. evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it. You should also note that the primary goal of VTE is to be a library to support GNOME Terminal. Notably, if a feature is not needed for GNOME Terminal, it won’t be added to VTE. If it already exists, it will likely be deprecated and removed.

  1. Transparency is handled by the composite manager (Compton, in my case). 

  2. For some definition of “scratch”, since the hard work is handled by VTE.

Carl Chenet: The Gitlab database incident and the Backup Checker project

8 February, 2017 - 06:00

The database incident of 2017/01/31 and the resulting data loss reminded everyone (at least for the next few days) how easy it is to lose data, even when you think all your systems are safe.

Being really interested in the process of backing up data, I read the report with interest (kudos to the Gitlab company for being so transparent about it) and I was so excited to find the following sentence:

Regular backups seem to also only be taken once per 24 hours, though team-member-1 has not yet been able to figure out where they are stored. According to team-member-2 these don’t appear to be working, producing files only a few bytes in size.

Whoa, guys! I’m so sorry for you about the data loss, but from my point of view I was excited to find a big FOSS company publicly admitting and communicating about a perfect use case for the Backup Checker project, a piece of Free Software I’ve been writing these last years.

Data loss: nobody cares before, everybody cries after

Usually people don’t care about backups. It’s a serious business for web hosts and the backup teams of big companies, but otherwise, nobody cares.

Everybody agrees that backups are important, but few people make them or set up an automated system to create them, and until the day they are needed, nobody verifies they are usable. The reason is obvious: it’s totally boring and, in some cases (e.g. for large archives), difficult.

Because verifying backups is boring for humans, I launched the Backup Checker project in order to automate this task.

Backup Checker offers a wide range of features: it checks lots of different kinds of archives (tar.{gz,bz2,xz}, zip, trees of files) and offers lots of different tests (hash sum; size equal to, smaller or greater than a given value; Unix rights; …). Have a look at the official documentation for an exhaustive list of features and possible tests.

Automate the checks of your backups with Backup Checker

Checking your backups means describing in a configuration file what a backup should look like, e.g. a gzipped database dump. You usually know roughly what size the archive is going to be, and what the owner and the group owner should be.

Even easier: with Backup Checker you can generate this list of criteria from an actual archive, then remove the unneeded criteria to create a template you can re-use for different kinds of archives.

Ok, give me 2 minutes of your time for a real-world example. I use an existing database SQL dump in a tar.gz archive to automatically create the list describing this backup:

$ backupchecker -G database-dump.tar.gz
$ cat database-dump.list
mtime| 1486480274.2923253

database.sql| =7854803 uid|1000 gid|1000 owner|chaica group|chaica mode|644 type|f mtime|1486480253.0

Now, just remove parameters too precise from this list to get a backup template. Here is a possible result:

database.sql| >6m uid|1000 gid|1000 mode|644 type|f

We define here a template for the archive, meaning that the database.sql file in the archive should have a size greater than 6 megabytes, be owned by the user with uid 1000 and the group with gid 1000; the file should have mode 644 and be a regular file. In order to use a template instead of the complete list, you also need to remove the sha512 from the .conf file.
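For completeness, the -G run also writes a matching .conf file next to the .list. A rough sketch of what it contains (the exact field names here are from memory of Backup Checker's documentation and should be double-checked against the generated file; the sha512 line is the one to delete when switching to a template):

```ini
[main]
name=database-dump
type=archive
path=database-dump.tar.gz
files_list=database-dump.list
; remove the sha512 line to use the template instead of the exact list
sha512=...
```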

Pretty easy, hmm? Ok, just for fun, let’s replicate the part of the database incident mentioned above and create an archive containing an empty SQL dump:

$ touch /tmp/database.sql && \
tar zcvf /tmp/database-dump.tar.gz /tmp/database.sql && \
cp /tmp/database-dump.tar.gz .

Now we launch Backup Checker with the previously created template. If you didn’t change the name of database-dump.list file, the command should only be:

$ backupchecker -C database-dump.conf
$ cat a.out 
WARNING:root:1 file smaller than expected while checking /tmp/article-backup-checker/database-dump.tar.gz: 
WARNING:root:database.sql size is 0. Should have been bigger than 6291456.

The automated controls of Backup Checker trigger a warning in the log file: the empty SQL dump has been identified inside the archive.

A step further

As you can see in this article, verifying some of your backups is not a time-consuming task, given that there is a FOSS project dedicated to it, with an easy way to build a template of your backups and to use it.

This article provided a really simple example of such a use case, but Backup Checker has lots of features to offer when verifying your backups. Read the official documentation for a more complete description of the available possibilities.
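Since the whole point is automation, the check itself can also go in cron. A minimal sketch (the paths, the a.out log name from the run above, and the mail recipient are all assumptions for illustration):

```shell
# check_log: succeed only when the Backup Checker log contains no warnings
check_log() {
    # $1: path to the log file written by backupchecker
    ! grep -q '^WARNING' "$1"
}

# hypothetical nightly cron entry using it:
#   0 3 * * * cd /var/backups/checks && backupchecker -C database-dump.conf \
#       && check_log a.out || mail -s "backup check failed" admin@example.com < a.out
check_log /dev/null && echo "no warnings"
```

With cron's MAILTO or an explicit mail command, a backup that shrinks to a few bytes turns into an alert instead of a surprise.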

Data loss, especially for projects storing user data, is always a terrible event in the life of an organization. Let’s try to learn from mistakes which could happen to anyone, and build better backup systems.

More information about the Backup Checker project



Craig Small: WordPress 4.7.2

8 February, 2017 - 03:53

When WordPress originally announced their latest security update, there were three security fixes. While all security updates can be serious, they didn’t seem too bad. Shortly after, they updated their announcement with a fourth and more serious security problem.

I have looked after the Debian WordPress package for a while. This is the first time I have heard of people actually having their sites hacked almost as soon as a vulnerability was announced.

If you are running WordPress 4.7 or 4.7.1, your website is vulnerable and there are bots out there looking for it. You should immediately upgrade to 4.7.2 (or, if there is a later 4.7.x version, to that). There are now updated Debian wordpress 4.7.2 packages for unstable, testing and stable backports.

For stable, you are on a patched version 4.1 which doesn’t have this specific vulnerability (it was introduced in 4.7) but you should be using 4.1+dfsg-1+deb8u12 which has the fixes found in 4.7.1 ported back to 4.1 code.
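To check whether the installed package is already at a fixed version, a quick sketch (sort -V is only an approximation of Debian version ordering; on Debian itself, `dpkg --compare-versions` is the authoritative tool):

```shell
# version_ge A B: succeed when version A is at least version B
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# e.g. on a Debian host (assumed invocation):
#   version_ge "$(dpkg-query -W -f '${Version}' wordpress)" 4.7.2 \
#       && echo "patched" || echo "upgrade now"
version_ge 4.7.2 4.7.1 && echo "4.7.2 is new enough"
```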

Bits from Debian: DebConf17: Call for Proposals

8 February, 2017 - 03:00

The DebConf Content team would like to Call for Proposals for the DebConf17 conference, to be held in Montreal, Canada, from August 6 through August 12, 2017.

You can find this Call for Proposals in its latest form at:

Please refer to this URL for updates on the present information.

Submitting an Event

Submit an event proposal and describe your plan. Please note, events are not limited to traditional presentations or informal sessions (BoFs). We welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be beneficial to the Debian community.

Please include a short title, suitable for a compact schedule, and an engaging description of the event. You should use the "Notes" field to provide us with information such as additional speakers, scheduling restrictions, or any special requirements we should consider for your event.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (like workshops) could have different durations. Please choose the most suitable duration for your event and explain any special requests.

You will need to create an account on the site to submit a talk. We encourage Debian account holders (e.g. DDs) to use Debian SSO when creating an account, but this isn't required for everybody: you can sign up with an e-mail address and password.


The first batch of accepted proposals will be announced in April. If you depend on having your proposal accepted in order to attend the conference, please submit it as soon as possible so that it can be considered during this first evaluation period.

All proposals must be submitted before Sunday 4 June 2017 to be evaluated for the official schedule.

Topics and Tracks

Though we invite proposals on any Debian- or FLOSS-related subject, we have some broad topics on which we encourage people to submit proposals, including:

  • Blends
  • Debian in Science
  • Cloud and containers
  • Social context
  • Packaging, policy and infrastructure
  • Embedded
  • Systems administration, automation and orchestration
  • Security

You are welcome to either suggest more tracks, or become a coordinator for any of them; please refer to the Content Tracks wiki page for more information on that.

Code of Conduct

Our event is covered by a Code of Conduct designed to ensure everyone's safety and comfort. The code applies to all attendees, including speakers and the content of their presentations. For more information, please see the Code on the Web, and do not hesitate to contact us at if you have any questions or are unsure about certain content you'd like to present.

Video Coverage

Providing video of sessions amplifies DebConf achievements and is one of the conference goals. Unless speakers opt-out, official events will be streamed live over the Internet to promote remote participation. Recordings will be published later under the DebConf license, as well as presentation slides and papers whenever available.

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsor Savoir-Faire Linux. DebConf17 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

In case of any questions, or if you want to bounce some ideas off us first, please do not hesitate to reach out to us at

We hope to see you in Montreal!

The DebConf team


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.