Planet Debian


Enrico Zini: Shutdown button for my Raspberry Pi

1 April, 2017 - 05:10

My Raspberry Pi hi-fi component setup lacked a way to cleanly shut down the system without ssh.

I wished the Raspberry Pi had a button I could press to tell it to shut down.

I added one to the GPIO connector.

It turns out many wishes can come true when one has a GPIO board.

This is the /usr/local/bin/stereo-gpio script that reacts to the button press and triggers a shutdown:

import subprocess

import RPi.GPIO as GPIO

# Use Broadcom pin numbering and enable the internal pull-up on pin 18,
# so pressing the button pulls the pin to ground.
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)

while True:
    # Block until the button shorts pin 18 to ground, then shut down.
    pin = GPIO.wait_for_edge(18, GPIO.FALLING)
    if pin == 18:
        subprocess.check_call(["/bin/systemctl", "halt"])

This is the /etc/systemd/system/stereo-gpio.service systemd unit file that runs the script as a daemon:

[Unit]
Description=Stereo GPIO manager

[Service]
# ExecStart path assumed to match the script location given above
ExecStart=/usr/local/bin/stereo-gpio
Restart=always

[Install]
WantedBy=multi-user.target

Then systemctl start stereo-gpio to start the script, and systemctl enable stereo-gpio to start the script at boot.

Chris Lamb: Free software activities in March 2017

1 April, 2017 - 05:05

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Fixed two issues in try.diffoscope.org, a web-based version of the diffoscope in-depth and content-aware diff utility:
    • Fix command-line API breakage. (commit)
    • Use over (commit)
  • Made a number of improvements to travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change, including:
    • Correctly detecting the distribution to build with for some tags. (commit)
    • Use Lintian from the backports repository where appropriate. (#44)
    • Don't build upstream/ branches even if they contain .travis.yml files. (commit)
  • Fixed an issue in django-staticfiles-dotd, my Django staticfiles adaptor to concatenate .d-style directories, where some .d directories were being skipped. This was caused by modifying the contents of a Python list during iteration. (#3)
  • Performed some miscellaneous cleanups in django12factor, a Django utility to make projects adhere better to the 12-factor web-application philosophy. (#58)
  • Submitted a pull request for Doomsday-Engine, a portable, enhanced source port of Doom, Heretic and Hexen, to make the build reproducible (#16)
  • Created a pull request for gdata-python-client (a Python client library for Google APIs) to make the build reproducible. (#56)
  • Authored a pull request for the MochaJS JavaScript test framework to make the build reproducible. (#2727)
  • Filed a pull request against vine, a Python promises library, to avoid a non-deterministic default keyword argument appearing in the documentation. (#12)
  • Filed an issue for the Redis key-value database addressing build failures on the MIPS architecture. (#3874)
  • Submitted a bug report against xdotool — a tool to automate window and keyboard interactions — reporting a crash when searching after binding an action with behave. (#169)
  • Reviewed a pull request from Dan Palmer for django-email-from-template, a library to send emails in Django generated entirely from the templating system, which intends to add an option to send mails upon transaction commit.
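The list-mutation pitfall behind that django-staticfiles-dotd fix is worth spelling out; a minimal standalone sketch (with hypothetical directory names) of how entries get skipped:

```python
# Buggy pattern: removing entries from the list being iterated makes the
# iterator skip the element that slides into the freed slot.
dirs = ["a.d", "b.d", "c.d", "d.d"]
seen = []
for d in dirs:
    seen.append(d)
    dirs.remove(d)              # shifts remaining items left
assert seen == ["a.d", "c.d"]   # every other entry was silently skipped

# Fix: iterate over a copy, leaving the original free to be modified.
dirs = ["a.d", "b.d", "c.d", "d.d"]
seen_fixed = []
for d in list(dirs):
    seen_fixed.append(d)
    dirs.remove(d)
assert seen_fixed == ["a.d", "b.d", "c.d", "d.d"]
```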
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
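Since "identical results" here means bit-for-bit identical artifacts, the consensus step reduces to comparing checksums of independently produced builds. A toy sketch of that comparison (illustrative bytes, not any real build or tool):

```python
import hashlib

def artifact_digest(artifact: bytes) -> str:
    """Checksum an independently produced build artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Two independent rebuilds of the same source yield byte-identical output:
builder_a = artifact_digest(b"\x7fELF same bytes")
builder_b = artifact_digest(b"\x7fELF same bytes")
assert builder_a == builder_b            # consensus: build reproduced

# A tampered build stands out immediately:
assert artifact_digest(b"\x7fELF backdoored") != builder_a
```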

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features/optimisations:
    • Extract squashfs archive in one go rather than per-file, speeding up ISO comparison by ~10x.
    • Add support for .docx and .odt files via docx2txt & odt2txt. (#859056).
    • Add support for PGP files via pgpdump. (#859034).
    • Add support for comparing Pcap files. (#858867).
    • Compare GIF images using gifbuild. (#857610).
  • Bug fixes:
    • Ensure that we really are using ImageMagick and not the GraphicsMagick compatibility layer. (#857940).
    • Fix and add test for meaningless 1234-content metadata when introspecting archives. (#858223).
    • Fix detection of ISO9660 images processed with isohybrid.
    • Skip icc tests if the Debian-specific patch is not present. (#856447).
    • Support newer versions of cbfstool to avoid test failures. (#856446).
    • Update the progress bar prior to working to ensure filename is in sync.
  • Cleanups:
    • Use /usr/share/dpkg/ over manual calls to dpkg-parsechangelog in debian/rules.
    • Ensure tests and the runtime environment can locate binaries in /usr/sbin (e.g. tcpdump).


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Fix a possible endless loop while stripping .ar files due to trusting the file's own file size data. (#857975).
  • Add support for testing files we should reject and include the filename when evaluating fixtures.


buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Add support for Format: 1.0. (#20).
  • Don't parse Format: header as the source package version. (#21).
  • Show the reproducible status of packages.
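The .ar endless-loop fix above guards against a general pattern: a loop that advances through a file by a length field read from the file itself never terminates if that field can be zero. A hypothetical sketch of the pattern and the guard (not strip-nondeterminism's actual code):

```python
import io
import struct

def read_records(buf):
    """Read length-prefixed records; a zero or negative declared length
    would stall this loop forever without the explicit guard."""
    records = []
    while True:
        header = buf.read(4)
        if len(header) < 4:
            break                         # clean end of file
        (size,) = struct.unpack(">i", header)
        if size <= 0:
            raise ValueError("corrupt record size: %d" % size)
        records.append(buf.read(size))
    return records

# Well-formed input: two records.
good = struct.pack(">i", 3) + b"foo" + struct.pack(">i", 2) + b"hi"
assert read_records(io.BytesIO(good)) == [b"foo", b"hi"]

# A zero-length field would otherwise make the loop spin forever.
bad = struct.pack(">i", 0) + b"junk"
try:
    read_records(io.BytesIO(bad))
    assert False, "expected ValueError"
except ValueError:
    pass
```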


I submitted my platform for the 2017 Debian Project Leader Elections. This was subsequently covered on LWN and I have been participating in the discussions on the debian-vote mailing list since then.

Patches contributed

Debian LTS

This month I have been paid to work 14.75 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 848-1 for the freetype font library fixing a denial of service vulnerability.
  • Issued DLA 851-1 for wget preventing a header injection attack.
  • Issued DLA 863-1 for the deluge BitTorrent client correcting a cross-site request forgery vulnerability.
  • Issued DLA 864-1 for jhead (an EXIF metadata tool) patching an arbitrary code execution vulnerability.
  • Issued DLA 865-1 for the suricata intrusion detection system, fixing an IP protocol matching error.
  • Issued DLA 871-1 for python3.2 fixing a TLS stripping vulnerability in the smtplib library.
  • Issued DLA 873-1 for apt-cacher preventing an HTTP response splitting vulnerability.
  • Issued DLA 876-1 for eject to prevent an issue regarding the checking of setuid(2) and setgid(2) return values.
  • python-django:
    • 1:1.10.6-1 — New upstream bugfix release.
    • 1:1.11~rc1-1 — New upstream release candidate.
  • redis:
    • 3:3.2.8-2 — Avoid conflict between RuntimeDirectory and tmpfiles.d(5) both attempting to create /run/redis with differing permissions. (#856116)
    • 3:3.2.8-3 — Revert the creation of a /usr/bin/redis-check-rdb to /usr/bin/redis-server symlink to avoid a dangling symlink if only the redis-tools package is installed. (#858519)
  • gunicorn 19.7.0-1 & 19.7.1-1 — New upstream releases.
  • adminer 4.3.0-1 — New upstream release.

Finally, I also made the following non-maintainer uploads (NMUs):

Debian bugs filed

I additionally filed 5 bugs for packages that access the internet during build against golang-github-mesos-mesos-go, ipywidgets, ruby-bunny, ruby-http & sorl-thumbnail.

I also filed 13 FTBFS bugs against android-platform-frameworks-base, ariba, calendar-exchange-provider, cylc, git, golang-github-grpc-ecosystem-go-grpc-prometheus, node-dateformat, python-eventlet, python-tz, sogo-connector, spyder-memory-profiler, sushi & tendermint-go-rpc.

FTP Team

As a Debian FTP assistant I ACCEPTed 121 packages: 4pane, adql, android-platform-system-core, android-sdk-helper, braillegraph, deepnano, dh-runit, django-auth-ldap, django-dirtyfields, drf-extensions, gammaray, gcc-7, gnome-keysign, golang-code.gitea-sdk, golang-github-bluebreezecf-opentsdb-goclient, golang-github-bsm-redeo, golang-github-cupcake-rdb, golang-github-denisenkom-go-mssqldb, golang-github-exponent-io-jsonpath, golang-github-facebookgo-ensure, golang-github-facebookgo-freeport, golang-github-facebookgo-grace, golang-github-facebookgo-httpdown, golang-github-facebookgo-stack, golang-github-facebookgo-subset, golang-github-go-openapi-loads, golang-github-go-openapi-runtime, golang-github-go-openapi-strfmt, golang-github-go-openapi-validate, golang-github-golang-geo, golang-github-gorilla-pat, golang-github-gorilla-securecookie, golang-github-issue9-assert, golang-github-issue9-identicon, golang-github-jaytaylor-html2text, golang-github-joho-godotenv, golang-github-juju-errors, golang-github-kisielk-gotool, golang-github-kubernetes-gengo, golang-github-lpabon-godbc, golang-github-lunny-log, golang-github-makenowjust-heredoc, golang-github-mrjones-oauth, golang-github-nbutton23-zxcvbn-go, golang-github-neelance-sourcemap, golang-github-ngaut-deadline, golang-github-ngaut-go-zookeeper, golang-github-ngaut-log, golang-github-ngaut-pools, golang-github-ngaut-sync2, golang-github-optiopay-kafka, golang-github-quobyte-api, golang-github-renstrom-dedent, golang-github-sergi-go-diff, golang-github-siddontang-go, golang-github-smartystreets-go-aws-auth, golang-github-xanzy-go-cloudstack, golang-github-xtaci-kcp, golang-github-yohcop-openid-go, graywolf, haskell-raaz, hfst-ospell, hikaricp, iptraf-ng, kanboard-cli, kcptun, kreport, libbluray, libcatmandu-store-elasticsearch-perl, libcsfml, libnet-prometheus-perl, libosmocore, libpandoc-wrapper-perl, libseqlib, matrix-synapse, mockldap, nfs-ganesha, node-buffer, node-pako, nose-el, nvptx-tools, nx-libs, 
open-ath9k-htc-firmware, pagein, paleomix, pgsql-ogr-fdw, profanity, pyosmium, python-biotools, python-django-extra-views, python-django-otp, python-django-push-notifications, python-dnslib, python-gmpy, python-gmpy2, python-holidays, python-kanboard, python-line-profiler, python-pgpy, python-pweave, python-raven, python-xapian-haystack, python-xopen, r-cran-v8, repetier-host, ruby-jar-dependencies, ruby-maven-libs, ruby-psych, ruby-retriable, seafile-client, spyder-unittest, stressant, systray-mdstat, telegram-desktop, thawab, tigris, tnseq-transit, typesafe-config, vibe.d, x2goserver & xmlrpc-c.

I additionally filed 14 RC bugs against packages that had incomplete debian/copyright files against: golang-github-cupcake-rdb, golang-github-sergi-go-diff, graywolf, hfst-ospell, libbluray, pgsql-ogr-fdw, python-gmpy, python-gmpy2, python-pgpy, python-xapian-haystack, repetier-host, telegram-desktop, tigris & xmlrpc-c.

Holger Levsen: 20170331-pillow-fight

1 April, 2017 - 01:26
Pillow fight

For some time I struggled with how to tell this short story… It started with some trolls (clever, but still spiteful, or maybe just with too much energy) who made a pillow printed with a picture of the initial systemd author, Lennart Poettering, and put it on the sofas of the CCC Hamburg hackerspace…

I perceived this as clever trolling, as it really was a harmless pillow, which one could use to innocently punch Lennart in the face. Or rather not, as denial was easy and built in: of course this pillow was also suitable for hugging!

For some time, I didn't know how to react. Some clever person turned the pillow cover upside down, so at least we had a white pillow instead. But still, I wanted a better reaction…

And then I met Lennart in Brno and realized that I would meet him again the following weekend in Brussels… so on the Saturday morning of FOSDEM 2017 I embarrassed Lennart (and myself) a little, but it was very much worth it! And so he signed it:

To put this into perspective, or why I think the original pillow was a bad idea: some people hate systemd; fine (though I do think systemd is a fine piece of software, but you can surely disagree about this). Just, please, keep in mind: "hate/fight the game, not the players".

Ross Gammon: Resurrecting my old Drupal Site

31 March, 2017 - 17:36

As I have previously blogged, I recently managed to resurrect my old Drupal site that ran in the Amazon AWS cloud, and get it working again on a new host. I have just written up a summary of how I battled through the process, which can be found here.

Unfortunately, I took a long time to write it up, so it is not as detailed as I originally intended. But if, like me, you run a Drupal site, or you did and it is now broken, feel free to follow the link for a read; it may at least give you some ideas to follow up. I made heavy use of DrupalVM. If you are just starting out with a Drupal website and have more than FTP access to your hosting, I recommend DrupalVM (which is built with Vagrant & Ansible) for local development and testing.

Enrico Zini: Raspberry Pi as a Hi-Fi component

31 March, 2017 - 14:06

I have a 25-year-old Technics hi-fi system that still works fine, and I gave it a new life by replacing the CD player and cassette player modules with a Raspberry Pi.


Each component of the hi-fi has a mains input and a mains plug that is used to power the next component. The element where the mains power lead goes in is the radio component, which has a remote control receiver, a clock and a timer, and will power on the rest of the system when turned on by its power button, the remote control, or the alarm function.

I disconnected the cassette and cd player modules, and plugged the Raspberry Pi phone charger/power supply in the free plug behind the amplifier module, at the end of the (now very short) power lead chain.

I also connected the audio output of the Raspberry Pi to the CD input of my stereo. The advantage of CD over AUX is that the remote control buttons for switching audio sources don't cover the AUX inputs.

With alsamixer I adjusted the output volume to match that of the radio component, so that I can switch between the two without surprising jumps in volume. I used alsactl store to save the mixer levels.

Now when I turn the hifi on I also turn the Raspberry Pi on, and when I turn the hifi off, I also cut power from the Raspberry Pi.

Operating system

Operating system install instructions:

  1. I downloaded a Raspbian Jessie Lite image
  2. I put it on an SD card
  3. I created an empty ssh file on the boot partition
  4. I put the SD card in the Raspberry Pi and turned on the stereo.
  5. ssh pi@raspberrypi (default password: raspberry)
  6. sudo raspi-config to change the hostname, the password, and to enlarge the root partition to include all the rest of the space available in the SD card.
Music Player Daemon

This is the set up of the music player part, with mpd.

apt install mpd

The configuration file is /etc/mpd.conf. The changes I made are:

Make mpd accessible from my local network:

bind_to_address         "any"

Make mpd discoverable:

zeroconf_enabled                "yes"
zeroconf_name                   "stereo"

Allow anyone who visits me to control the playlist, and only me to run admin functions:

password                        "SECRET@read,add,control,admin"
default_permissions             "read,add,control"

At my first try, mpd hung when changing songs. I had to disable dmix by uncommenting the device option in the audio_output configuration. use_mmap is cargo-culted from the archlinux wiki.

audio_output {
        type            "alsa"
        name            "My ALSA Device"
        device          "hw:0,0"        # optional
        use_mmap        "yes"
}
If at some point I'll decide to use other audio software on the system, I'll probably want to play via pulseaudio.

Sending music to the stereo

I made a little script to sync the music directory on my laptop with /var/lib/mpd/music:

#!/bin/sh

rsync -avz --filter=". sync-stereo.filter" --copy-links --prune-empty-dirs --delete ./ pi@stereo:/var/lib/mpd/music

ssh pi@stereo "chmod u=rwX,go=rX -R /var/lib/mpd/music"

It uses this sync-stereo.filter rules file for rsync:

hide /_archive
include */
include **.mp3
hide *
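Read top-down, those rules mean: skip /_archive, descend into any directory, keep only .mp3 files, hide everything else. A rough Python rendering of that first-match-wins decision (a hypothetical helper, just to illustrate the rule order):

```python
def should_sync(relpath, is_dir=False):
    """First matching rule wins, mirroring rsync filter semantics."""
    if relpath == "_archive" or relpath.startswith("_archive/"):
        return False                  # hide /_archive
    if is_dir:
        return True                   # include */  (descend everywhere)
    if relpath.endswith(".mp3"):
        return True                   # include **.mp3
    return False                      # hide *

assert should_sync("albums", is_dir=True)
assert should_sync("albums/track01.mp3")
assert not should_sync("albums/cover.jpg")
assert not should_sync("_archive/old.mp3")
```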
mpd clients

mpc
$ mpc -h stereo status
UltraCat - Unexpected Little Happenings
[playing] #15/22   0:03/4:06 (1%)
volume: 80%   repeat: off   random: on    single: off   consume: off

On my phone I installed M.A.L.P. and now I have a remote control for mpd.

In its settings, I made a profile for home where I just had to set the hostname for the stereo and the admin password.


On my laptop I installed cantata, set the hostname and password in the preferences, and had the client ready.


Now I can take the remote control of my hi-fi, turn it on, and after a while mpd will resume playing the song that was playing when I last shut it down.

I also have realtime player status on my phone and on my laptop, and can control music from either at any time. Friends who visit me can do that as well.

Everything was rather straightforward, well documented and easy to replicate. The hardware is cheap and very easy to come by.

Dirk Eddelbuettel: #2: Even Easier Package Registration

31 March, 2017 - 10:31

Welcome to the second post in the rambling random R recommendation series, or R4 for short.

Two days ago I posted the initial (actual) post. It provided context for why we need package registration entries (tl;dr: because R CMD check now tests for it, and because it is The Right Thing to do, see documentation in the posts). I also showed how generating such a file src/init.c was essentially free, as all it took was a single call to a new helper function added to R-devel by Brian Ripley and Kurt Hornik.

Now, to actually use R-devel you obviously need to have it accessible. There are a myriad of ways to achieve that: just compile it locally as I have done for years, use a Docker image as I showed in the post -- or be creative with eg Travis or win-builder both of which give you access to R-devel if you're clever about it.

But as no good deed goes unpunished, I was of course told off today for showing a Docker example as Docker was not "Easy". I think the formal answer to that is baloney. But we leave that aside, and promise to discuss setting up Docker at another time.

R is after all ... just R. So below please find a script you can save as, say, ~/bin/pnrrs.r. And calling it---even with R-release---will generate the same code snippet as I showed via Docker. Call it a one-off backport of the new helper function -- with a half-life of a few weeks at best as we will have R 3.4.0 as default in just a few weeks. The script will then reduce to just the final line as the code will be present with R 3.4.0.



#!/usr/bin/r

.find_calls_in_package_code <- tools:::.find_calls_in_package_code
.read_description <- tools:::.read_description

## all that follows is from R-devel, aka R 3.4.0-to-be

package_ff_call_db <- function(dir) {
    ## A few packages such as CDM use base::.Call
    ff_call_names <- c(".C", ".Call", ".Fortran", ".External",
                       "base::.C", "base::.Call",
                       "base::.Fortran", "base::.External")

    predicate <- function(e) {
        (length(e) > 1L) &&
            !is.na(match(deparse(e[[1L]]), ff_call_names))
    }

    calls <- .find_calls_in_package_code(dir,
                                         predicate = predicate,
                                         recursive = TRUE)
    calls <- unlist(Filter(length, calls))

    if(!length(calls)) return(NULL)

    attr(calls, "dir") <- dir

    calls
}

native_routine_registration_db_from_ff_call_db <- function(calls, dir = NULL, character_only = TRUE) {
    if(!length(calls)) return(NULL)

    ff_call_names <- c(".C", ".Call", ".Fortran", ".External")
    ff_call_args <- lapply(ff_call_names,
                           function(e) args(get(e, baseenv())))
    names(ff_call_args) <- ff_call_names
    ff_call_args_names <-
                      function(e) names(formals(e))), setdiff,

        dir <- attr(calls, "dir")

    package <- # drop name
        as.vector(.read_description(file.path(dir, "DESCRIPTION"))["Package"])

    symbols <- character()
    nrdb <-
               function(e) {
                   if (startsWith(deparse(e[[1L]]), "base::"))
                       e[[1L]] <- e[[1L]][3L]
                   ## First figure out whether ff calls had '...'.
                   pos <- which(unlist(Map(identical,
                                           lapply(e, as.character),
                   ## Then match the call with '...' dropped.
                   ## Note that only .NAME could be given by name or
                   ## positionally (the other ff interface named
                   ## arguments come after '...').
                   if(length(pos)) e <- e[-pos]
                   ## drop calls with only ...
                   if(length(e) < 2L) return(NULL)
                   cname <- as.character(e[[1L]])
                   ## The help says
                   ## '.NAME' is always matched to the first argument
                   ## supplied (which should not be named).
                   ## But some people do (Geneland ...).
                   nm <- names(e); nm[2L] <- ""; names(e) <- nm
                   e <-[[cname]], e)
                   ## Only keep ff calls where .NAME is character
                   ## or (optionally) a name.
                   s <- e[[".NAME"]]
                   if( {
                       s <- deparse(s)[1L]
                       if(character_only) {
                           symbols <<- c(symbols, s)
                   } else if(is.character(s)) {
                       s <- s[1L]
                   } else { ## expressions
                       symbols <<- c(symbols, deparse(s))
                   ## Drop the ones where PACKAGE gives a different
                   ## package. Ignore those which are not char strings.
                   if(!is.null(p <- e[["PACKAGE"]]) &&
                      is.character(p) && !identical(p, package))
                   n <- if(length(pos)) {
                            ## Cannot determine the number of args: use
                            ## -1 which might be ok for .External().
                        } else {
                                            ff_call_args_names[[cname]]))) - 1L
                   ## Could perhaps also record whether 's' was a symbol
                   ## or a character string ...
                   cbind(cname, s, n)
    nrdb <-, nrdb)
    nrdb <-, stringsAsFactors = FALSE)

    if(NROW(nrdb) == 0L || length(nrdb) != 3L)
        stop("no native symbols were extracted")
    nrdb[, 3L] <- as.numeric(nrdb[, 3L])
    nrdb <- nrdb[order(nrdb[, 1L], nrdb[, 2L], nrdb[, 3L]), ]
    nms <- nrdb[, "s"]
    dups <- unique(nms[duplicated(nms)])

    ## Now get the namespace info for the package.
    info <- parseNamespaceFile(basename(dir), dirname(dir))
    ## Could have ff calls with symbols imported from other packages:
    ## try dropping these eventually.
    imports <- info$imports
    imports <- imports[lengths(imports) == 2L]
    imports <- unlist(lapply(imports, `[[`, 2L))

    info <- info$nativeRoutines[[package]]
    ## Adjust native routine names for explicit remapping or
    ## namespace .fixes.
    if(length(symnames <- info$symbolNames)) {
        ind <- match(nrdb[, 2L], names(symnames), nomatch = 0L)
        nrdb[ind > 0L, 2L] <- symnames[ind]
    } else if(!character_only &&
              any((fixes <- info$registrationFixes) != "")) {
        ## There are packages which have not used the fixes, e.g. utf8latex
        ## fixes[1L] is a prefix, fixes[2L] is an undocumented suffix
        nrdb[, 2L] <- sub(paste0("^", fixes[1L]), "", nrdb[, 2L])
            nrdb[, 2L] <- sub(paste0(fixes[2L]), "$", "", nrdb[, 2L])
    ## See above.
    if(any(ind <- ![, 2L], imports))))
        nrdb <- nrdb[!ind, , drop = FALSE]

    ## Fortran entry points are mapped to l/case
    dotF <- nrdb$cname == ".Fortran"
    nrdb[dotF, "s"] <- tolower(nrdb[dotF, "s"])

    attr(nrdb, "package") <- package
    attr(nrdb, "duplicates") <- dups
    attr(nrdb, "symbols") <- unique(symbols)

format_native_routine_registration_db_for_skeleton <- function(nrdb, align = TRUE, include_declarations = FALSE) {

    fmt1 <- function(x, n) {
        c(if(align) {
              paste(format(sprintf("    {\"%s\",", x[, 1L])),
                    format(sprintf(if(n == "Fortran")
                                       "(DL_FUNC) &F77_NAME(%s),"
                                       "(DL_FUNC) &%s,",
                                   x[, 1L])),
                    format(sprintf("%d},", x[, 2L]),
                           justify = "right"))
          } else {
              sprintf(if(n == "Fortran")
                          "    {\"%s\", (DL_FUNC) &F77_NAME(%s), %d},"
                          "    {\"%s\", (DL_FUNC) &%s, %d},",
                      x[, 1L],
                      x[, 1L],
                      x[, 2L])
          "    {NULL, NULL, 0}")

    package <- attr(nrdb, "package")
    dups <- attr(nrdb, "duplicates")
    symbols <- attr(nrdb, "symbols")

    nrdb <- split(nrdb[, -1L, drop = FALSE],
                  factor(nrdb[, 1L],
                         levels =
                             c(".C", ".Call", ".Fortran", ".External")))

    has <- vapply(nrdb, NROW, 0L) > 0L
    nms <- names(nrdb)
    entries <- substring(nms, 2L)
    blocks <- Map(function(x, n) {
                      c(sprintf("static const R_%sMethodDef %sEntries[] = {",
                                n, n),
                        fmt1(x, n),

    decls <- c(
        "/* FIXME: ",
        "   Add declarations for the native routines registered below.",

    if(include_declarations) {
        decls <- c(
            "/* FIXME: ",
            "   Check these declarations against the C/Fortran source code.",
            if(NROW(y <- nrdb$.C)) {
                 args <- sapply(y$n, function(n) if(n >= 0)
                                paste(rep("void *", n), collapse=", ")
                                else "/* FIXME */")
                c("", "/* .C calls */",
                  paste0("extern void ", y$s, "(", args, ");"))
            if(NROW(y <- nrdb$.Call)) {
                args <- sapply(y$n, function(n) if(n >= 0)
                               paste(rep("SEXP", n), collapse=", ")
                               else "/* FIXME */")
               c("", "/* .Call calls */",
                  paste0("extern SEXP ", y$s, "(", args, ");"))
            if(NROW(y <- nrdb$.Fortran)) {
                 args <- sapply(y$n, function(n) if(n >= 0)
                                paste(rep("void *", n), collapse=", ")
                                else "/* FIXME */")
                c("", "/* .Fortran calls */",
                  paste0("extern void F77_NAME(", y$s, ")(", args, ");"))
            if(NROW(y <- nrdb$.External))
                c("", "/* .External calls */",
                  paste0("extern SEXP ", y$s, "(SEXP);"))

    headers <- if(NROW(nrdb$.Call) || NROW(nrdb$.External))
        c("#include <R.h>", "#include <Rinternals.h>")
    else if(NROW(nrdb$.Fortran)) "#include <R_ext/RS.h>"
    else character()

      "#include <stdlib.h> // for NULL",
      "#include <R_ext/Rdynload.h>",
      if(length(symbols)) {
            "  The following symbols/expresssions for .NAME have been omitted",
            "", strwrap(symbols, indent = 4, exdent = 4), "",
            "  Most likely possible values need to be added below.",
            "*/", "")
      if(length(dups)) {
            "  The following name(s) appear with different usages",
            "  e.g., with different numbers of arguments:",
            "", strwrap(dups, indent = 4, exdent = 4), "",
            "  This needs to be resolved in the tables and any declarations.",
            "*/", "")
      unlist(blocks, use.names = FALSE),
      ## We cannot use names with '.' in: WRE mentions replacing with "_"
      sprintf("void R_init_%s(DllInfo *dll)",
              gsub(".", "_", package, fixed = TRUE)),
      sprintf("    R_registerRoutines(dll, %s);",
                            paste0(entries, "Entries"),
                     collapse = ", ")),
      "    R_useDynamicSymbols(dll, FALSE);",

package_native_routine_registration_db <- function(dir, character_only = TRUE) {
    calls <- package_ff_call_db(dir)
    native_routine_registration_db_from_ff_call_db(calls, dir, character_only)
}

package_native_routine_registration_skeleton <- function(dir, con = stdout(), align = TRUE,
                                                         character_only = TRUE, include_declarations = TRUE) {
    nrdb <- package_native_routine_registration_db(dir, character_only)
    writeLines(format_native_routine_registration_db_for_skeleton(nrdb,
                                                                  align, include_declarations),
               con)
}
package_native_routine_registration_skeleton(".")  ## when R 3.4.0 is out you only need this line

Here I use /usr/bin/r as I happen to like littler a lot, but you can use Rscript the same way.

Easy enough now?

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Michal Čihař: Tests coverage from Windows builds revisited

30 March, 2017 - 15:30

A few months ago I wrote about getting coverage information from many platforms into one report. That approach worked, but I always felt guilty about pushing almost a thousand files to Codecov.

This week I finally found time to revisit this and make it work better and faster. Just uploading those files took about 30 minutes; together with the slow test execution (about 30 minutes as well), we were hitting AppVeyor build time limits and builds quite often timed out, which is not a nice result.

I started by rewriting the wrapper used to execute OpenCppCoverage. I had originally used Python for that, which is nice, but I suspected the overhead must be noticeable. As it is not possible to execute a Python script directly from CTest, it was wrapped in a simple bat file, adding yet more overhead. Reimplementing the wrapper in C showed that there is indeed overhead, but removing it was not going to save more than a few minutes.

The next obvious step was to look at uploading the coverage files, as that really is something that should be fast rather than take such an enormous time. When writing the original post, I had already tried merging the coverage data using OpenCppCoverage, but that proved too slow to be usable (the testsuite didn't complete in the given 60 minutes).

I also looked at existing solutions for merging Cobertura XML files, but found nothing that worked reasonably fast. The problem is that all of them merge only two files per step, which makes merging a thousand files a really slow job: you are constantly generating, parsing and processing the resulting XML file, a thousand times over. These solutions are probably also more generic than what I needed.

In the end these files are just simple XML, and merging them should not be hard, so I was able to quickly write a Python script to do it. It does not support all of the Cobertura attributes; it only merges line-based coverage (as that is the only thing OpenCppCoverage generates), but it works fast and reliably.
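The single-pass idea can be sketched with xml.etree: accumulate per-file, per-line hit counts across all reports in one dictionary instead of re-serialising an XML tree after every pairwise merge. (A simplified illustration, not the actual script; it only handles <line number="…" hits="…"> elements under <class filename="…">.)

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def merge_line_coverage(xml_reports):
    """Sum per-line hit counts over many Cobertura-style reports in one pass."""
    hits = defaultdict(int)           # (filename, line number) -> total hits
    for report in xml_reports:
        root = ET.fromstring(report)
        for cls in root.iter("class"):
            fname = cls.get("filename")
            for line in cls.iter("line"):
                hits[(fname, int(line.get("number")))] += int(line.get("hits"))
    return dict(hits)

r1 = ('<coverage><class filename="a.c">'
      '<line number="1" hits="2"/><line number="2" hits="0"/>'
      '</class></coverage>')
r2 = ('<coverage><class filename="a.c">'
      '<line number="2" hits="3"/>'
      '</class></coverage>')

merged = merge_line_coverage([r1, r2])
assert merged == {("a.c", 1): 2, ("a.c", 2): 3}
```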

Overall the build time went down from 60 minutes to 35 minutes, and I don't see much room for improvement besides speeding up OpenCppCoverage itself, which is really out of scope for me. Interestingly, an older version performs way faster than the current one (0.9.6), which is 2-3 times slower.


Shirish Agarwal: The tale of the dancing girl – #nsfw

30 March, 2017 - 07:18

Demonstration of a Lapdance – Wikipedia

This post is adult/mature in nature, so those below 18, please excuse yourselves. It is about an anecdote from almost 20 years ago. The reason it is being posted is that I had dinner with a friend, shared the story during a conversation about being young and foolish, and he thought it would be nice if I published it, hence this post. The blog post was supposed to be about 'Aadhar', which shocked me both in the way no political discourse happened and the way the public as well as public policy were gamed, but that will have to wait for another day.


I left college in 1995. The anecdote/incident happened a couple of years earlier, so probably 1992-1993. At that time I was in my teens, and like a typical teenager I made a few friends. One of those friends will remain nameless: we have since drifted apart, and as I have not taken his permission, naming him would not be a good idea.

Anyway, let's call this gentleman Mr. X. A couple of months before, he had bought an open jeep, similar to (but very different from) the jeep shown below. The open jeep had become a fashion statement in those days, as a movie starring Salman Khan featured one, and anybody who had money wanted one just like it.

Illustration of an open jeep, sadly it's a military one – Wikipedia

In those days we didn't have cell-phones, and I had given my land-line number to very few friends, as land-lines were a finicky instrument.

One fine morning, I got a call from my friend telling me he was going to come near my place, that I should meet him at such-and-such a place, and that we would go on a picnic for the whole day, possibly returning the next day. As it was the holidays and only a fool would throw away the chance to ride in an open-air jeep, I immediately agreed. I said at home that my friends had organized a picnic, gave another friend's number (who knew nothing about it), got permission, and went to meet Mr. X. This was very early in the morning, around 0600 hrs. After we met, he told me that we would be going to Mumbai, pick up some more friends there, and then move on. In those days a railway ticket from Shivaji Nagar to V.T. (now C.S.T.) cost INR 30/-. I had been to Mumbai a few times before for various technical conferences and knew a few cheap places to eat; I knew that going by train we could go and come back spending at most INR 150/- and still have some change left over (today a meal at a roadside/street vendor easily passes that mark).

The Journey

I told him that it would be costly and that I had no money to cover the fuel expenses, and he said he would shoulder the expenses; he just wanted my company for the road. In those days it was the scenic old Mumbai-Pune highway, and we took plenty of stops to admire the ghats (hills and valleys together). That journey must have taken around 7-8 hours, while today, on the new Expressway, you could do the same thing in 2.5-3 hours. Anyhow, we arrived at some swanky hotel in South Mumbai. South Mumbai was not the financial powerhouse that it is today; there was a mix of very old buildings and new ones like the swanky hotel we had checked into. I have no memory nor any idea whether it was 1-star, 3-star or 5-star, and could not have cared less, as I was tired from the journey. We checked in, I had a long warm-water bath, and then I slept in the king-size bed with the curtains drawn. Evening came, and we took the jeep, picked up 2-3 of his friends, who were my age or a year or two older, and went to Nariman Point. Seeing the Queen's Necklace from Nariman Point at night is a sight in itself. In my innocence, I was under the impression that we had arrived at our destination; at this our host, Mr. X, and his Bombaiya friends had a quiet laugh, saying it was a young night still. We must have whiled away a couple of hours, having chai and throwing rocks into the sea.

The Meeting

After a while, Mr. X took us to another swanky place. My eyes nearly popped out of their sockets, as this seemed to be as elitist a place as could be. I saw many white European women in various stages of undress, pole-dancing and lap-dancing. I had recently (in those days) come to know the term, but was under the impression that it was something that happened only in Europe and the States. I had no idea that, according to Wikipedia, lap-dancing is older than my birth. So, looking back now, I am not surprised that within two decades the concept had crossed the oceans. Again, Mr. X, being the host, agreed to bear all the costs, and all of us had food, drink and a lap-dance from any of the dancers on the floor. As I was young and probably shy (I still am), I asked for Mr. X's help in picking a girl/woman for me. The woman he picked was auburn-haired and either my age or a year or two older or younger than me. What followed was about 20-30 minutes of totally sexualized, erotic experience. While he and all his friends picked girls to go all the way, I was hesitant to let loose. Maybe it was my lack of courage or inexperience, maybe it was that I was not in my own city and couldn't predict the outcome, maybe I was just afraid that reality might mar fantasy; I don't know to this date. We did kiss and neck a lot, though; I guess that should count for something.

The conversation

Sometime after all my friends had gone to the various rooms, I excused myself, went to the loo, splashed some cold water on my face, came out, had a couple of glasses of water, and returned to my seat. The lady came back, and I told her that I was not interested in going further; while she was beautiful, I just didn't have the guts. I did ask whether she would keep me company for some time, though, as I didn't know anyone else at that place. Our conversation was more about her than me, as I had led a more or less average life up to that moment. There were only three unorthodox things I had done before meeting her: I had drunk wines of different types, smoked weed, and had a magic-mushroom experience the year before in Goa with another group of friends I had made there. Goa in those days was simply magical, but that would probably need its own blog post.

When I enquired about her, she shared that she was from Russia, and she rattled off more than half a dozen places around the world she had been to; this was her second or third stint in Mumbai, and she wasn't at all unhappy about the lifestyle and the choices she had made. I had no answer for her, being a young, penniless college-going student. Her self-confidence and the way she carried herself were impressive, with or without clothes. During the course of the conversation, she shared a couple of contacts from whom I could get better weed at a slightly higher price if I were ever in Goa.

A few months later, those contacts turned out to be genuine. After some time, all of us and the women, around 8-9 people, piled into his jeep (how he negotiated that is beyond me) and went to a hygienic pani puri and bhel (puffed rice mixed with a variety of spices, typically tomato, potato, and coriander chutney as well as tamarind chutney, among other things) place, and moved them to tears (the spices in the bhel and pani puri did it for them), even though we had explicitly asked the bhel-wala to make it extremely mild, with just a hint of spice. Anyway, sometime later we dropped them back at the same place, dropped off his friends, came back to the hotel we had booked, and got drunk again.


A few years later, it came out in the newspapers/media that while India had broken out of financial isolation only a few years earlier (1991) and was profiting from it, many countries of the former USSR were going the other way, and hence huge human trafficking and emigration had taken place. This was in line with what the lady, Miss X, had shared with me.

The latest trigger

The latest trigger came a couple of months back, when I learnt of a hero flight attendant saving a girl from human trafficking.

To this date, I am unsure whether she was doing it willingly or just putting on a brave smile in front of me, because even if she had confided in me in any way, I probably would have been too powerless to help her. I just don't know.

Foolishness thy name

While my friend took advantage of my innocence and introduced me to a world which I otherwise would probably not have known existed, it could easily have gone some other way as well.

While I'm still unsure of the choices I made, I was and am happy that I was able to strike up a conversation with her and attempt to reach the person within. Whether it was the truth or an elaborate lie fabricated to protect both herself and me, I will never know.


I understand that as a 'customer', or as somebody taking part in such performances or experiences, it isn't easy in any way to know or say whether the performer is doing it willingly or not, as the experiences take place in tightly controlled settings.

Filed under: Miscellenous Tagged: #anecdote, #confusion, #elitist, #growing up, #lap dance, #NSFW, #Open Jeep, Mumbai

Dirk Eddelbuettel: #1: Easy Package Registration

29 March, 2017 - 18:52

Welcome to the first actual post in the R4 series, following the short announcement earlier this week.


Last month, Brian Ripley announced on r-devel that registration of routines would now be tested for by R CMD check in r-devel (which by next month will become R 3.4.0). A NOTE is issued for now; this will presumably turn into a WARNING at some point. Writing R Extensions has an updated introduction to the topic.

Package registration has long been available, and applies to all native (i.e. "compiled") functions via the .C(), .Call(), .Fortran() or .External() interfaces. If you use any of those -- and .Call() may be the only truly relevant one here -- then this is of interest to you.

Brian Ripley and Kurt Hornik also added a new helper function: tools::package_native_routine_registration_skeleton(). It parses the R code of your package and collects all native function entrypoints in order to autogenerate the registration. It is available in R-devel now, will be in R 3.4.0 and makes adding such registration truly trivial.

But as of today, it requires that you have R-devel. Once R 3.4.0 is out, you can call the helper directly.

As for R-devel, there have always been at least two ways to use it: build it locally (which we may cover in another R4 installment), or use Docker. Here we will focus on the latter, relying on the Rocker project by Carl and myself.

Use the Rocker drd Image

We assume you can run Docker on your system. How to add it on Windows, macOS or Linux is beyond our scope here today, but also covered extensively elsewhere. So we assume you can execute docker and e.g. bring in the 'daily r-devel' image drd from our Rocker project via

~$ docker pull rocker/drd

With that, we can use R-devel to create the registration file very easily in a single call (which is a long command-line we have broken up with one line-break for the display below).

The following is a real-life example from when I needed to add registration to the RcppTOML package for this week's update:

~/git/rcpptoml(master)$ docker run --rm -ti -v $(pwd):/mnt rocker/drd \  ## line break
             RD --slave -e 'tools::package_native_routine_registration_skeleton("/mnt")'
#include <R.h>
#include <Rinternals.h>
#include <stdlib.h> // for NULL
#include <R_ext/Rdynload.h>

/* FIXME: 
   Check these declarations against the C/Fortran source code. */

/* .Call calls */
extern SEXP RcppTOML_tomlparseImpl(SEXP, SEXP, SEXP);

static const R_CallMethodDef CallEntries[] = {
    {"RcppTOML_tomlparseImpl", (DL_FUNC) &RcppTOML_tomlparseImpl, 3},
    {NULL, NULL, 0}
};

void R_init_RcppTOML(DllInfo *dll)
{
    R_registerRoutines(dll, NULL, CallEntries, NULL, NULL);
    R_useDynamicSymbols(dll, FALSE);
}
Decompose the Command

We can understand the docker command invocation above through its components:

  • docker run is the basic call to a container
  • --rm -ti does subsequent cleanup (--rm) and gives a terminal (-t) that is interactive (-i)
  • -v $(pwd):/mnt uses the -v a:b options to make local directory a available as b in the container; here $(pwd) calls print working directory to get the local directory which is now mapped to /mnt in the container
  • rocker/drd invokes the 'drd' container of the Rocker project
  • RD is a shorthand to the R-devel binary inside the container, and the main reason we use this container
  • -e 'tools::package_native_routine_registration_skeleton("/mnt")' calls the helper function of R (currently in R-devel only, so we use RD) to compute a possible init.c file based on the current directory -- which is /mnt inside the container

That's it. We get a call to the R function executed inside the Docker container, examining the package in the working directory and creating a registration file for it, which is displayed on the console.

Copy the Output to src/init.c

We simply copy the output to a file src/init.c; I often fold one opening brace one line up.


We also change one line in NAMESPACE from (for this package) useDynLib("RcppTOML") to useDynLib("RcppTOML", .registration=TRUE). Adjust accordingly for other package names.
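In other words, the NAMESPACE diff is a one-liner (shown here for RcppTOML; substitute your own package name):

```
## before
useDynLib("RcppTOML")

## after
useDynLib("RcppTOML", .registration=TRUE)
```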

That's it!

And with that we have a package which no longer provokes the NOTE, as seen on the checks page. Calls to native routines are now safer (less chance of name clashes), get called more quickly as we skip the symbol search (see the WRE discussion), and best of all this applies to all native routines, whether written by hand or via a generator such as Rcpp Attributes.

So give this a try to get your package up-to-date.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Lars Wirzenius: A tiny PC as a router

29 March, 2017 - 17:35

We needed a router and wifi access point in the office, and simultaneously both I and my co-worker Ivan needed such a thing at our respective homes. After some discussion, and after reading articles in Ars Technica about building PCs to act as routers, we decided to do just that.

  • The PC solution seems to offer better performance, but this is actually not a major reason for us.

  • We want to have systems we understand and can hack. A standard x86 PC running Debian sounds ideal to use.

  • Why not a cheap commercial router? They tend to be opaque and mysterious, and can't be managed with standard tooling such as Ansible. They may or may not have good security support. Also, they may or may not have sufficient functionality for nice things, such as DNS for local machines, or the full power of iptables for firewalling.

  • Why not OpenWRT? Some models of commercial routers are supported by OpenWRT. Finding good hardware that is also supported by OpenWRT is a task in itself, and not the kind of task I especially like to do. Even if one goes this route, the environment isn't quite a standard Linux system, because of various hardware limitations. (OpenWRT is a worthy project, just not our preference.)

We got some hardware:

Component      Model                                                              Cost
Barebone       Qotom Q190G4, VGA, 2x USB 2.0, 134x126x36mm, fanless               130€
CPU            Intel J1900, 2-2.4GHz quad-core                                    -
NIC            Intel WG82583, 4x 10/100/1000                                      -
Memory         Crucial CT102464BF160B, 8GB DDR3L-1600 SODIMM 1.35V CL11           40€
SSD            Kingston SSDNow mS200, 60GB mSATA                                  42€
WLAN           AzureWave AW-NU706H, Ralink RT3070L, 300M 802.11b/g/n, half mPCIe  17€
mPCIe adapter  Half to full mPCIe adapter                                         3€
Antennas       2x 2.4/5GHz 6dBi, RP-SMA, U.FL
Cables                                                                            7€

These were bought at various online shops, including AliExpress and

After assembling the hardware, we installed Debian on them:

  • Connect the PC to a monitor (VGA) and keyboard (USB), as well as power.

  • I built a "factory image" to be put on the SSD, and a USB stick installer image, which includes the factory one. Write the installer image on a USB stick, boot off that, then copy the factory image to the SSD and reboot off the SSD.

  • The router now runs a very bare-bones, stripped-down Debian system, which runs a DHCP server on eth3 (marked LAN4 on the box). You can log in as root on the console (no password), or via ssh, but for ssh you need to replace the /home/ansible/.ssh/authorized_keys file with one that contains only your public ssh key.

  • Connect a laptop to the Ethernet port marked LAN4, and get an IP address with DHCP.

  • Log in with ssh to ansible@, and verify that sudo id works without a password. (You can't do this unless you have put your ssh key in the authorized_keys file above.)

  • Git clone the ansible playbooks, adjust their parameters in minipc-router.yml as wanted, and run the playbook. Then reboot the router again.

  • You should now have wifi, routing (with NAT), and be generally speaking able to do networking.

There's a lot of limitations and problems:

  • There's no web UI for managing anything. If you're not comfortable doing sysadmin via ssh (with or without ansible), this isn't for you.

  • No IPv6. We didn't want to enable it yet, until we understand it better. You can, if you want to.

  • No real firewalling, but adjust roles/router/files/ferm.conf as you wish.

  • The router factory image is 4 GB in size, and our SSD is 60 GB. That's a lot of wasted space.

  • The router factory image embeds our public keys in the ansible user's authorized keys file for ssh. This is because we built this for ourselves first. If there's interest by others in using the images, we'll solve this.

  • Probably a lot of stupid things. Feel free to tell us what it is ( would be a good address for that).
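To give a flavour of what roles/router/files/ferm.conf might contain, here is a hypothetical minimal fragment (a sketch, not the project's actual configuration; interface names are made up) that masquerades LAN traffic out of the WAN port:

```
# Hypothetical ferm.conf sketch, not the playbook's real file.
# eth0 is assumed to be the WAN side, eth3 the LAN4 port.
table nat {
    chain POSTROUTING {
        outerface eth0 MASQUERADE;
    }
}
table filter {
    chain FORWARD {
        policy DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface eth3 outerface eth0 ACCEPT;  # LAN -> WAN
    }
}
```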

If you'd like to use the images and Ansible playbooks, please do. We'd be happy to get feedback, bug reports, and patches. Send them to me ( or my ticketing system (

Daniel Pocock: Brexit: If it looks like racism, if it smells like racism and if it feels like racism, who else but a politician could argue it isn't?

29 March, 2017 - 12:33

Since the EU referendum got under way in the UK, it has become an almost everyday occurrence to turn on the TV and hear some politician explaining "I don't mean to sound racist, but..." (example)

Of course, if you didn't mean to sound racist, you wouldn't sound racist in the first place, now would you?

The reality is, whether you like politics or not, political leaders have a significant impact on society and the massive rise in UK hate crimes, including deaths of Polish workers, is a direct reflection of the leadership (or profound lack of it) coming down from Westminster. Maybe you don't mean to sound racist, but if this is the impact your words are having, maybe it's time to shut up?

Choosing your referendum

Why choose to have a referendum on immigration issues and not on any number of other significant topics? Why not have a referendum on nuking Mr Putin to punish him for what looks like an act of terrorism against the Malaysian Airlines flight MH17? Why not have a referendum on cutting taxes or raising speed limits, turning British motorways into freeways or an autobahn? Why choose to keep those issues in the hands of the Government, but invite the man-in-a-white-van from middle England to regurgitate Nigel Farage's fears and anxieties about migrants onto a ballot paper?

Even if David Cameron sincerely hoped and believed that the referendum would turn out otherwise, surely he must have contemplated that he was playing Russian Roulette with the future of millions of innocent people?

Let's start at the top

For those who are fortunate enough to live in parts of the world where the press provides little exposure to the antics of British royalty, an interesting fact you may have missed is that the Queen's husband, Prince Philip, Duke of Edinburgh is actually a foreigner. He was born in Greece and has Danish and German ancestry. Migration (in both directions) is right at the heart of the UK's identity.

Home office minister Amber Rudd recently suggested British firms should publish details about how many foreign people they employ and in which positions. She argued this is necessary to help boost funding for training local people.

If that is such a brilliant idea, why hasn't it worked for the Premier League? It is a matter of public knowledge how many foreigners play football in England's most prestigious division, so why hasn't this caused local clubs to boost training budgets for local recruits? After all, when you consider that England hasn't won a World Cup since 1966, what have they got to lose?

All this racism, it's just not cricket. Or is it? One of the most remarkable cricketers to play for England in recent times, Kevin Pietersen, dubbed "the most complete batsman in cricket" by The Times and "England's greatest modern batsman" by the Guardian, was born in South Africa. In the five years he was contracted to the Hampshire county team, he only played one match, because he was too busy representing England abroad. His highest position was nothing less than becoming England's team captain.

Are the British superior to every other European citizen?

One of the implications of the rhetoric coming out of London these days is that the British are superior to their neighbours, entitled to have their cake and eat it too, making foreigners queue up at Paris' Gare du Nord to board the Eurostar while British travelers should be able to walk or drive into European countries unchallenged.

This superiority complex is not uniquely British, you can observe similar delusions are rampant in many of the places where I've lived, including Australia, Switzerland and France. America's Donald Trump has taken this style of politics to a new level.

Look in the mirror Theresa May: after British 10-year old schoolboys Robert Thompson and Jon Venables abducted, tortured, murdered and mutilated 2 year old James Bulger in 1993, why not have all British schoolchildren fingerprinted and added to the police DNA database? Why should "security" only apply based on the country where people are born, their religion or skin colour?

In fact, after Brexit, people like Venables and Thompson will remain in Britain while a Dutch woman, educated at Cambridge and with two British children will not. If that isn't racism, what is?

Running foreigners off the roads

Theresa May has only been Prime Minister for less than a year but she has a history of bullying and abusing foreigners in her previous role in the Home Office. One example of this was a policy of removing driving licenses from foreigners, which has caused administrative chaos and even taken away the licenses of many people who technically should not have been subject to these regulations anyway.

Shouldn't the DVLA (Britain's office for driving licenses) simply focus on the competence of somebody to drive a vehicle? Bringing all these other factors into licensing creates a hostile environment full of mistakes and inconvenience at best and opportunities for low-level officials to engage in arbitrary acts of racism and discrimination.

Of course, when you are taking your country on the road to nowhere, who needs a driving license anyway?

What does "maximum control" over other human beings mean to you?

The new British PM has said she wants "maximum control" over immigrants. What exactly does "maximum control" mean? Donald Trump appears to be promising "maximum control" over Muslims, Hitler sought "maximum control" over the Jews, hasn't the whole point of the EU been to avoid similar situations from ever arising again?

This talk of "maximum control" in British politics has grown like a weed out of the UKIP. One of their senior figures has been linked to kidnappings and extortion, which reveals a lot about the character of the people who want to devise and administer these policies. Similar people in Australia aspire to jobs in the immigration department where they can extort money out of people for getting them pushed up the queue. It is no surprise that the first member of Australia's parliament ever sent to jail was put there for obtaining bribes and sexual favours from immigrants. When Nigel Farage talks about copying the Australian immigration system, he is talking about creating jobs like these for his mates.

Even if "maximum control" is important, who really believes that a bunch of bullies in Westminster should have the power to exercise that control? Is May saying that British bosses are no longer competent to make their own decisions about who to employ or that British citizens are not reliable enough to make their own decisions about who they marry and they need a helping hand from paper-pushers in the immigration department?

Echoes of the Third Reich

Most people associate acts of mass murder with the Germans who lived in the time of Adolf Hitler. These are the stories told over and over again in movies, books and the press.

Look more closely, however, and it appears that the vast majority of Germans were not in immediate contact with the gas chambers. Even Goebbels' secretary writes that she was completely oblivious to it all. Many people were simply small cogs in a big bad machine. The clues were there, but many of them couldn't see the big picture. Even if they did get a whiff of it, many chose not to ask questions, and to carry on with their comfortable lives.

Today, with mass media and the Internet, it is a lot easier for people to discover the truth if they look, but many are still reluctant to do so.

Consider, for example, the fingerprint scanners installed in British post offices and police stations to fingerprint foreigners and criminals (as if they have something in common). If all the post office staff refused to engage in racist conduct the fingerprint scanners would be put out of service. Nonetheless, these people carry on, just doing their job, just following orders. It was through many small abuses like this, rather than mass murder on every street corner, that Hitler motivated an entire nation to serve his evil purposes.

Technology like this is introduced in small steps: first it was used for serious criminals, then anybody accused of a crime, then people from Africa and next it appears they will try and apply it to all EU citizens remaining in the UK.

How will a British man married to a French woman explain to their children that mummy has to be fingerprinted by the border guard each time they return from vacation?

The Nazis pioneered biometric technology with the tracking numbers branded onto Jews. While today's technology is electronic and digital, isn't it performing the same function?

There is no middle ground between "soft" and "hard" brexit

An important point for British citizens and foreigners in the UK to consider today is that there is no compromise between a "soft" Brexit and a "hard" Brexit. It is one or the other. Anything less (for example, a deal that is "better" for British companies and worse for EU citizens) would imply that the British are a superior species and it is impossible to imagine the EU putting their stamp on such a deal. Anybody from the EU who is trying to make a life in the UK now is playing a game of Russian Roulette - sure, everything might be fine if it morphs into "soft" Brexit, but if Theresa May has her way, at some point in your life, maybe 20 years down the track, you could be rounded up by the gestapo and thrown behind bars for a parking violation. There has already been a five-fold increase in the detention of EU citizens in British concentration camps and they are using grandmothers from Asian countries to refine their tactics for the efficient removal of EU citizens. One can only wonder what type of monsters Theresa May has been employing to run such inhumane operations.

This is not politics

Edmund Burke's quote "The only thing necessary for the triumph of evil is for good men to do nothing" comes to mind on a day like today. Too many people think it is just politics and they can go on with their lives and ignore it. Barely half the British population voted in the referendum. This is about human beings treating each other with dignity and respect. Anything less is abhorrent and may well come back to bite.

Michal Čihař: Gammu 1.38.2

29 March, 2017 - 11:00

Yesterday, Gammu 1.38.2 was released. This is a bugfix release, fixing for example USSD or MMS decoding in some situations.

The Windows binaries are available as well. These are built using AppVeyor and will help bring Windows users back to the latest versions.

Full list of changes and new features can be found on Gammu 1.38.2 release page.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

Keith Packard: DRM-lease

29 March, 2017 - 05:22
DRM display resource leasing (kernel side)

So, you've got a fine head-mounted display and want to explore the delights of virtual reality. Right now, on Linux, that means getting the window system to cooperate because the window system is the DRM master and holds sole access to all display resources. So, you plug in your device, play with RandR to get it displaying bits from the window system and then carefully configure your VR application to use the whole monitor area and hope that the desktop will actually grant you the boon of page flipping so that you will get reasonable performance and maybe not even experience tearing. Results so far have been mixed, and depend on a lot of pieces working in ways that aren't exactly how they were designed to work.

We could just hack up the window system(s) and try to let applications reserve the HMD monitors, somehow removing them from the normal display area so that other applications don't randomly pop up in the middle of the screen. That would probably work, and would take advantage of much of the existing window system infrastructure for setting video modes and performing page flips. However, we've got a pretty spiffy standard API in the kernel for both of those, and getting the window system entirely out of the way seems like something worth trying.

I spent a few hours in Hobart chatting with Dave Airlie during LCA and discussed how this might actually work.

  1. Use KMS interfaces directly from the VR application to drive presentation to the HMD.

  2. Make sure the window system clients never see the HMD as a connected monitor.

  3. Maybe let logind (or other service) manage the KMS resources and hand them out to the window system and VR applications.

  4. Don't make KMS resources appear and disappear. It turns out applications get confused when the set of available CRTCs, connectors and encoders changes at runtime.
An Outline for Multiple DRM masters

By the end of our meeting in Hobart, Dave had sketched out a fairly simple set of ideas with me. We'd add support in the kernel to create additional DRM masters. Then, we'd make it possible to 'hide' enough state about the various DRM resources so that each DRM master would automagically use disjoint subsets of resources. In particular, we would:

  1. Pretend that connectors were always disconnected

  2. Mask off crtc and encoder bits so that some of them just didn't seem very useful.

  3. Block access to resources controlled by other DRM masters, just in case someone tried to do the wrong thing.

Refinement with Eric over Swedish Pancakes

A couple of weeks ago, Eric Anholt and I had breakfast at the original pancake house and chatted a bit about this stuff. He suggested that the right interface for controlling these new DRM masters was through the existing DRM master interface, and that we could add new ioctls that the current DRM master could invoke to create and manage them.

Leasing as a Model

I spent some time just thinking about how this might work and came up with a pretty simple metaphor for these new DRM masters. The original DRM master on each VT "owns" the output resources and has final say over their use. However, a DRM master can create another DRM master and "lease" resources it has control over to the new DRM master. Once leased, resources cannot be controlled by the owner unless the owner cancels the lease, or the new DRM master is closed. Here's some terminology:

DRM Master
Any DRM file which can perform mode setting.

Owner
The original DRM Master, created by opening /dev/dri/card*.

Lessor
A DRM master which has leased out resources to one or more other DRM masters.

Lessee
A DRM master which controls resources leased from another DRM master. Each Lessee leases resources from a single Lessor.

Lessee ID
An integer which uniquely identifies a lessee within the tree of DRM masters descending from a single Owner.

Lease
The contract between the Lessor and Lessee which identifies which resources may be controlled by the Lessee. All of the resources must be owned by or leased to the Lessor.

With Eric's input, the interface to create a lease was pretty simple to write down:

int drmModeCreateLease(int fd,
               const uint32_t *objects,
               int num_objects,
               int flags,
               uint32_t *lessee_id);

Given an FD to a DRM master, and a list of objects to lease, a new DRM master FD is returned that holds a lease to those objects. 'flags' can be any combination of O_CLOEXEC and O_NONBLOCK for the newly minted file descriptor.

Of course, the owner might want to take some resources back, or even grant new resources to the lessee. So, I added an interface that rewrites the terms of the lease with a new set of objects:

int drmModeChangeLease(int fd,
               uint32_t lessee_id,
               const uint32_t *objects,
               int num_objects);

Note that nothing here makes any promises about the state of the objects across changes in the lease status; the lessor and lessee are expected to perform whatever modesetting is required for the objects to be useful to them.

Window System Integration

There are two ways to integrate DRM leases into the window system environment:

  1. Have logind "lease" most resources to the window system. When an HMD is connected, it would lease out suitable resources to the VR environment.

  2. Have the window system "own" all of the resources and then add window system interfaces to create new DRM masters leased from its DRM master.

I'll probably go ahead and do 2. in X and see what that looks like.

One trick with any of this will be to hide HMDs from any RandR clients listening in on the window system. You probably don't want the window system to tell the desktop that a new monitor has been connected, have it start reconfiguring things, and then have your VR application create a new DRM master, making the HMD appear to the window system to have disconnected and triggering another round of reconfiguration.

I'm not sure how this might work, but perhaps having the VR application register something like a passive grab on hot plug events might make sense? Essentially, you want it to hear about monitor connect events, go look to see if the new monitor is one it wants, and if not, release that to other X clients for their use. This can be done in stages, with the ability to create a new DRM master over X done first, and then cleaning up the hotplug stuff later on.

Current Status

I hacked up the kernel to support the drmModeCreateLease API, and then hacked up kmscube to run two threads with different sets of KMS resources. That ran for nearly a minute before crashing and requiring a reboot. I think there may be some locking issues with page flips from two threads to the same device.

I think I also made the wrong decision about how to handle lessors closing down. I tried to let the lessors get deleted and then 'orphan' the lessees. I've rewritten that so that lessees hold a reference on their lessor, keeping the lessor in place until the lessee shuts down. I've also written the kernel parts of the drmModeChangeLease support.

Remaining Questions

  • What should happen when a Lessor is closed? Should all access to controlled resources be revoked from all descendant Lessees?

    Proposed answer -- lessees hold a reference to their lessor so that the entire tree remains in place. A Lessor can clean up before exiting by revoking lessee access if it chooses.

  • How about when a Lessee is closed? Should the Lessor be notified in some way?

  • CRTCs and Encoders have properties. Should these properties be automatically included in the lease?

    Proposed answer -- no, userspace is responsible for constructing the entire lease.

Joachim Breitner: Birthday greetings communication behaviour

29 March, 2017 - 00:42

Randall Munroe recently mapped how he communicated with his social circle. As I got older recently, I had an opportunity to create a similar statistic that shows how people close to me chose to fulfil their social obligations:

Communication variants

(Diagram created with the xkcd-font and using these two stackoverflow answers.)

In related news: Heating 3½ US cups of water to a boil takes 7 minutes and 40 seconds on one particular gas stove, but only 3 minutes and 50 seconds with an electric kettle, despite the 110V-induced limitation to 1.5kW.

Sylvain Beucler: Practical basics of reproducible builds 2

28 March, 2017 - 22:24

Let's review what we learned so far:

  • compiler version needs to be identical and recorded
  • build options and their order need to be identical and recorded
  • build path needs to be identical and recorded
    (otherwise debug symbols - and BuildIDs - change)
  • diffoscope helps checking for differences in build output

We stopped when compiling a PE .exe produced a varying output.
It turns out that PE carries a build date timestamp.

The spec says that bound DLL timestamps are referred to in the "Delay-Load Directory Table". Maybe that's also the date Windows displays when a system-wide DLL is about to be replaced.
Build timestamps look unused in .exe files though.

Anyway, Stephen Kitt pointed out (thanks!) that Debian's MinGW linker binutils-mingw-w64 has an upstream-pending patch that sets the timestamp to SOURCE_DATE_EPOCH if set.

Alternatively, one can pass -Wl,--no-insert-timestamp to set it to 0 (though see caveats below):

$ i686-w64-mingw32.static-gcc -Wl,--no-insert-timestamp hello.c -o hello.exe 
$ md5sum hello.exe 
298f98d74e6e913628a8b74514eddcb2  hello.exe
$ /opt/mxe/usr/bin/i686-w64-mingw32.static-gcc -Wl,--no-insert-timestamp hello.c -o hello.exe 
$ md5sum hello.exe 
298f98d74e6e913628a8b74514eddcb2  hello.exe

If we don't care about debug symbols, unlike with ELF, stripped PE binaries look stable too!

$ cd repro/
$ i686-w64-mingw32.static-gcc hello.c -o hello.exe && i686-w64-mingw32.static-strip hello.exe
$ md5sum hello.exe 
6e07736bf8a59e5397c16e799699168d  hello.exe
$ i686-w64-mingw32.static-gcc hello.c -o hello.exe && i686-w64-mingw32.static-strip hello.exe
$ md5sum hello.exe 
6e07736bf8a59e5397c16e799699168d  hello.exe
$ cd ..
$ cp -a repro repro2/
$ cd repro2/
$ i686-w64-mingw32.static-gcc hello.c -o hello.exe && i686-w64-mingw32.static-strip hello.exe
$ md5sum hello.exe 
6e07736bf8a59e5397c16e799699168d  hello.exe

Now that we have the main executable covered, what about the dependencies?
Let's see how well MXE compiles SDL2:

$ cd /opt/mxe/
$ cp -a ./usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp
$ rm -rf * && git checkout .
$ make sdl2
$ md5sum ./usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp/libSDL2.a 
68909ab13181b1283bd1970a56d41482  ./usr/i686-w64-mingw32.static/lib/libSDL2.a
68909ab13181b1283bd1970a56d41482  /tmp/libSDL2.a

Neat - what about another build directory?

$ cd /usr/src/mxe
$ make sdl2
$ md5sum usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp/libSDL2.a 
c6c368323927e2ae7adab7ee2a7223e9  usr/i686-w64-mingw32.static/lib/libSDL2.a
68909ab13181b1283bd1970a56d41482  /tmp/libSDL2.a
$ ls -l ./usr/i686-w64-mingw32.static/lib/libSDL2.a /tmp/libSDL2.a 
-rw-r--r-- 1 me me 5861536 mars  23 21:04 /tmp/libSDL2.a
-rw-r--r-- 1 me me 5862488 mars  25 19:46 ./usr/i686-w64-mingw32.static/lib/libSDL2.a

Well that was expected.
But what about the filesystem order?
With such an automated build, could potential variations in the order of files go undetected?
Would the output be different on another filesystem format (ext4 vs. btrfs...)?

It was a good opportunity to test the disorderfs fuse-based tool.
And while I'm at it, check if reprotest is easy enough to use (the manpage is scary).
Let's redo our basic tests with it - basic usage is actually very simple:

$ apt-get install reprotest disorderfs faketime
$ reprotest 'make hello' 'hello'
will vary: environment
will vary: fileordering
will vary: home
will vary: kernel
will vary: locales
will vary: exec_path
will vary: time
will vary: timezone
will vary: umask
--- /tmp/tmpk5uipdle/control_artifact/
+++ /tmp/tmpk5uipdle/experiment_artifact/
│   --- /tmp/tmpk5uipdle/control_artifact/hello
├── +++ /tmp/tmpk5uipdle/experiment_artifact/hello
├── stat {}
│ │ @@ -1,8 +1,8 @@
│ │  
│ │    Size: 8632       Blocks: 24         IO Block: 4096   regular file
│ │  Links: 1
│ │ -Access: (0755/-rwxr-xr-x)  Uid: ( 1000/      me)   Gid: ( 1000/      me)
│ │ +Access: (0775/-rwxrwxr-x)  Uid: ( 1000/      me)   Gid: ( 1000/      me)
│ │  
│ │  Modify: 1970-01-01 00:00:00.000000000 +0000
│ │  
│ │   Birth: -
# => OK except for permissions

$ reprotest 'make hello && chmod 755 hello' 'hello'
Reproduction successful
No differences in hello
c8f63b73265e69ab3b9d44dcee0ef1d2815cdf71df3c59635a2770e21cf462ec  hello

$ reprotest 'make hello CFLAGS="-g -O2"' 'hello'
# => lots of differences, as expected

Now let's apply to the MXE build.
We keep the same build path, and also avoid using linux32 (because MXE would then recompile all the host compiler tools for 32-bit):

$ reprotest --dont-vary build_path,kernel 'touch src/ && make sdl2 && cp -a usr/i686-w64-mingw32.static/lib/libSDL2.a .' 'libSDL2.a'
Reproduction successful
No differences in libSDL2.a
d9a39785fbeee5a3ac278be489ac7bf3b99b5f1f7f3e27ebf3f8c60fe25086b5  libSDL2.a

That checks!
What about a full MXE environment?

$ reprotest --dont-vary build_path,kernel 'make clean && make sdl2 sdl2_gfx sdl2_image sdl2_mixer sdl2_ttf libzip gettext nsis' 'usr'
# => changes in installation dates
# => timestamps in .exe files (dbus, ...)
# => libicu doesn't look reproducible (derb.exe, genbrk.exe, genccode.exe...)
# => apparently ar timestamp variations in libaclui

Most libraries look reproducible enough.
ar differences may go away at FreeDink link time since I'm aiming at a static build. Let's try!

First let's see how FreeDink behaves with stable dependencies.
We can compile with -Wl,--no-insert-timestamp and strip the binaries in a first step.
There are various issues (timestamps, permissions) but first let's check the executables themselves:

$ cd freedink/
$ reprotest --dont-vary build_path 'mkdir cross-woe-32/ && cd cross-woe-32/ && export PATH=/opt/mxe/usr/bin:$PATH && LDFLAGS='-Wl,--no-insert-timestamp' ../configure --host=i686-w64-mingw32.static --enable-static && make -j$(nproc) && make install-strip DESTDIR=$(pwd)/destdir' 'cross-woe-32/destdir/usr/local/bin'
# => executables are identical!

# Same again, just to make sure
$ reprotest --dont-vary build_path 'mkdir cross-woe-32/ && cd cross-woe-32/ && export PATH=/opt/mxe/usr/bin:$PATH && LDFLAGS='-Wl,--no-insert-timestamp' ../configure --host=i686-w64-mingw32.static --enable-static && make -j$(nproc) && make install-strip DESTDIR=$(pwd)/destdir' 'cross-woe-32/destdir/usr/local/bin'
│   --- /tmp/tmp2yw0sn4_/control_artifact/bin/freedink.exe
├── +++ /tmp/tmp2yw0sn4_/experiment_artifact/bin/freedink.exe
│ │ @@ -2,20 +2,20 @@
│ │  00000010: b800 0000 0000 0000 4000 0000 0000 0000  ........@.......
│ │  00000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
│ │  00000030: 0000 0000 0000 0000 0000 0000 8000 0000  ................
│ │  00000040: 0e1f ba0e 00b4 09cd 21b8 014c cd21 5468  ........!..L.!Th
│ │  00000050: 6973 2070 726f 6772 616d 2063 616e 6e6f  is program canno
│ │  00000060: 7420 6265 2072 756e 2069 6e20 444f 5320  t be run in DOS 
│ │  00000070: 6d6f 6465 2e0d 0d0a 2400 0000 0000 0000  mode....$.......
│ │ -00000080: 5045 0000 4c01 0a00 e534 0735 0000 0000  PE..L....4.5....
│ │ +00000080: 5045 0000 4c01 0a00 0000 0000 0000 0000  PE..L...........
│ │  00000090: 0000 0000 e000 0e03 0b01 0219 00f2 3400  ..............4.
│ │  000000a0: 0022 4e00 0050 3b00 c014 0000 0010 0000  ."N..P;.........
│ │  000000b0: 0010 3500 0000 4000 0010 0000 0002 0000  ..5...@.........
│ │  000000c0: 0400 0000 0100 0000 0400 0000 0000 0000  ................
│ │ -000000d0: 00e0 8900 0004 0000 7662 4e00 0200 0000  ........vbN.....
│ │ +000000d0: 00e0 8900 0004 0000 89f8 4e00 0200 0000  ..........N.....
│ │  000000e0: 0000 2000 0010 0000 0000 1000 0010 0000  .. .............
│ │  000000f0: 0000 0000 1000 0000 00a0 8700 b552 0000  .............R..
│ │  00000100: 0000 8800 d02d 0000 0050 8800 5006 0000  .....-...P..P...
│ │  00000110: 0000 0000 0000 0000 0000 0000 0000 0000  ................
│ │  00000120: 0060 8800 4477 0100 0000 0000 0000 0000  .`..Dw..........
│ │  00000130: 0000 0000 0000 0000 0000 0000 0000 0000  ................
│ │  00000140: 0440 8800 1800 0000 0000 0000 0000 0000  .@..............
├── stat {}
│ │ │ @@ -1,8 +1,8 @@
│ │ │  
│ │ │    Size: 5121536       Blocks: 10008      IO Block: 4096   regular file
│ │ │  Links: 1
│ │ │  Access: (0755/-rwxr-xr-x)  Uid: ( 1000/      me)   Gid: ( 1000/      me)
│ │ │  
│ │ │ -Modify: 2017-03-26 01:26:35.233841833 +0000
│ │ │ +Modify: 2017-03-26 01:27:01.829592505 +0000
│ │ │  
│ │ │   Birth: -

AFAIU there is something random in the linking phase: sometimes the timestamp is removed, sometimes it's not.
Not very easy to track, but I believe I reproduced it with the "hello" example:

# With MXE:
$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello' 'hello'
# => different
# => maybe because it imports the build timestamp from -lSDL2main

# With Debian's MinGW (but without SOURCE_DATE_EPOCH):
$ reprotest 'i686-w64-mingw32-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello' 'hello'
Reproduction successful
No differences in hello
0b2d99dc51e2ad68ad040d90405ed953a006c6e58599beb304f0c2164c7b83a2  hello

# Let's remove -Dmain=SDL_main and let our main() have precedence over the one in -lSDL2main:
$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello' 'hello'
Reproduction successful
No differences in hello
6c05f75eec1904d58be222cc83055d078b4c3be8b7f185c7d3a08b9a83a2ef8d  hello

$ LANG=C i686-w64-mingw32.static-ld --version  # MXE
GNU ld (GNU Binutils) 2.25.1
Copyright (C) 2014 Free Software Foundation, Inc.
$ LANG=C i686-w64-mingw32-ld --version  # Debian
GNU ld (GNU Binutils)
Copyright (C) 2016 Free Software Foundation, Inc.

It looks like there is a random behavior in binutils 2.25, coupled with SDL2's wrapping of my main().

So FreeDink is nearly reproducible, except for this build timestamp issue that pops up in all kinds of situations. In the worst case I can zero it out, or patch MXE's binutils until they upgrade.

More importantly, what if I recompile FreeDink and the dependencies twice?

$ (cd /opt/mxe/ && make clean && make sdl2 sdl2_gfx sdl2_image sdl2_mixer sdl2_ttf glm libzip gettext nsis)
$ (mkdir cross-woe-32/ && cd cross-woe-32/ \
  && export PATH=/opt/mxe/usr/bin:$PATH \
  && LDFLAGS="-Wl,--no-insert-timestamp" ../configure --host=i686-w64-mingw32.static --enable-static \
  && make V=1 -j$(nproc) \
  && make install-strip DESTDIR=$(pwd)/destdir)
$ mv cross-woe-32/ cross-woe-32-1/

# Same again...
$ mv cross-woe-32/ cross-woe-32-2/

$ diff -ru cross-woe-32-1/destdir/ cross-woe-32-2/destdir/

I could not reproduce the build timestamp issue in the stripped binaries, though it was still varying in the unstripped src/freedinkedit.exe.

I mentioned there were other changes noticed by diffoscope.

  • Changes in file timestamps.

That one is interesting.
Could be ignored, but we want to generate an identical binary package/archive too, right?
That's where archive meta-data matters.
make INSTALL="$(which install) -p" could help for static files, but not generated ones.
The doc suggests clamping all files to SOURCE_DATE_EPOCH - i.e. all generated files will have their date set to that timestamp:

$ export SOURCE_DATE_EPOCH=$(date +%s) \
  && reprotest --dont-vary build_path \
  'make ... && find destdir/ -newermt "@${SOURCE_DATE_EPOCH}" -print0 | xargs -0r touch --no-dereference --date="@${SOURCE_DATE_EPOCH}"' 'cross-woe-32/destdir/'
  • Changes in directory permissions

Caused by varying umask.
I attempted to mitigate the issue by playing with make install MKDIR_P="mkdir -p -m 755".
However even mkdir -p -m ... does not set permissions for intermediate directories.
Maybe it's better to set and record the umask...

So, aside from minor issues such as BuildIDs and build timestamps, the toolchain is pretty stable as of now.
The issue is more about fixing and recording the build environment, which is probably the next challenge.

Dirk Eddelbuettel: nanotime 0.1.2

28 March, 2017 - 18:32

A new minor version of the nanotime package for working with nanosecond timestamps arrived yesterday on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

This release just arranges things neatly before Leonardo Silvestri and I may shake things up with a possible shift to doing it all in S4 as we may need the added rigour for nanotime object operations for use in his ztsdb project.

Changes in version 0.1.2 (2017-03-27)
  • The as.integer64 function is now exported as well.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible builds folks: Reproducible Builds: week 100 in Stretch cycle

28 March, 2017 - 14:37

Welcome to the 100th issue of this weekly news! We hope you have been enjoying our posts and would love to receive some feedback via our mailing list!

Anyway, here's what happened in the Reproducible Builds effort between Sunday March 19 and Saturday March 25 2017:

Reproducible Builds Hackathon Hamburg 2017

The Reproducible Builds Hamburg Hackathon 2017 (or RB-HH-2017 for short) is a 3-day hacking event taking place May 5th-7th in the CCC Hamburg Hackerspace inside Frappant, a collective art space located within a historical monument in Hamburg, Germany. The event is open to everyone and we still have some free seats. If you wish to attend, please register your interest as soon as possible.

Media coverage

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reviews of unreproducible packages

30 package reviews have been added, 13 have been updated and 2 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (2)
diffoscope development

Misc.

This week's edition was written by Chris Lamb & Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Axel Beckert: System Tray Icon to Monitor a Linux Software RAID Locally

28 March, 2017 - 09:09
About a year ago I bought a new workstation computer for myself at home. It’s a Tuxedo XUX_Cube, which is advertised as a gaming PC. But I ordered a slightly atypical non-gamer configuration:

  • As much RAM as possible (64 GB)
  • Intel i7 CPU, but the low power variant
  • Only with the onboard Intel graphics card. (No need for NVidia binary crap drivers.)
  • 2× Samsung 128 GB SSD for OS and $HOME plus 2× 3 TB WD Red disks for media storage; both pairs set up as RAID 1
  • Bitfenix Prodigy-M case in Orange. (Not available in Tuxedo Computer’s online shop, but they nevertheless ordered it for me. :-)

Of course the box runs Debian. To be more precise, it runs Debian Sid with sysvinit-core as init system and i3 as window manager. As I usually have no monitoring clients on my laptops and private workstations, I rather often felt the urge to do a cat /proc/mdstat on that box.

So at some point I wanted something like smart-notifier, but for Linux Software (MD) RAIDs. And since I found nothing, I did what Open Source guys usually do in such cases: I wrote it myself — of course in Perl — and called it systray-mdstat.

First I wondered about which build system would be most suitable for that task, but in the end I once again went with Dist::Zilla for the upstream build system and hence dh-dist-zilla for the Debian packaging.

Ideas for the actual implementation were taken from Wouter’s fdpowermon for the systray icon framework in Perl and Myon’s mdstat Xymon plugin for an already proven logic to parse /proc/mdstat. (Both, Wouter and Myon have stated in a GnuPG-signed e-mail that I copied less code than would validate their copyrights, so I was able to license it under a single license, namely GNU GPL version 3.)

As of now, systray-mdstat is also available as a package in Debian Unstable. It won’t make it to Stretch as its first line of code was written after the soft-freeze for Stretch was already in place.

Axel Beckert: Maintaining Debian Packages of Perl Modules with dh-dist-zilla

28 March, 2017 - 08:59
Maintaining Debian packages of Perl modules usually can be done with the common git-buildpackage (aka gbp) workflow with its three git branches master (or debian), upstream and pristine-tar:

  • upstream contains the upstream code as imported from upstream’s release tar-balls.
  • pristine-tar contains the binary diffs between the contents of the upstream branch and the original tar-ball. This mostly contains meta-data (timestamps, permissions, file owners, etc.) as git doesn’t store them.
  • master (or debian) which contains upstream plus packaging.

This also works more or less fine for Perl modules, where the Debian package maintainer is also the upstream developer. In that case mostly the upstream branch is used (and then maybe called master while the Debian packaging branch is then called debian).

But the files needed for a proper so called “CPAN distribution” of a Perl module often contain redundant information (version numbers, required modules, etc.) which needs to be maintained. And for that, many people prefer Don’t Repeat Yourself (DRY) as a principle.


One nice and common tool for that is Dist::Zilla or short dzil. It generates most redundant but required data out of a central source, e.g. Dist::Zilla’s dist.ini or the contained .pm files, etc. dzil build creates a tar ball which contains all files required by CPAN.

But now we have a dilemma: Debian expects those generated files inside the upstream branch while the files are only generated from other files in that branch. There are multiple solutions, but all of them involve committing generated files to the git repository:

  • Commit them into the upstream branch. Disadvantage: You’ll likely later forget which files were generated and which weren’t.
  • Commit the generated files into a separated branch, e.g. use master (original code), upstream (original code + stuff generated by dzil build, maybe imported with git-import-orig), pristine-tar and a debian (based on upstream) branches.

librun-parts-perl aka Run::Parts (a Perl wrapper around and a pure-perl implementation of Debian’s run-parts tool) was initially maintained in the latter way.

But especially in cases where we just need a Perl module packaged as .deb without uploading it to CPAN (e.g. project-internal modules), this is a tedious workflow and overkill. It would be much nicer if debhelper would just call dzil to generate all the stuff it needs to build the package.


Well, you can do that now, at least with Debian Jessie. This is what dh-dist-zilla does: It is a debhelper sequence plugin which calls dzil build and dzil clean at the right moment and takes care that all dh_auto_* commands look in the directory with the generated files instead of the rather clean project root directory.

To use dh-dist-zilla, you just need to add a build-dependency on it and the Dist::Zilla plugins you use, and add --with dist-zilla to your minimal dh-style debian/rules file:

#!/usr/bin/make -f

%:
	dh $@ --with dist-zilla

That’s it.

With regards to workflow and git branches, you may still want to use separate branches for upstream work and debian work, and you may want to continue to use pristine-tar, but you don’t have to commit generated files to git anymore and you can maintain a clean master branch with nearly no redundancy.

And if you need to generate the final upstream tar ball for your Debian package, just call dh get-orig-source or, maybe easier to use with tab completion, dh_dist_zilla_origtar.

This is how the librun-parts-perl package is maintained nowadays. There’s otherwise not much difference to the old, classically maintained versions.

More DRY

Next step in the DRY evolution is to reduce redundancies between upstream (Dist::Zilla based) packaging and the Debian packaging. There are a few tools available, partially brand new, partially not yet packaged.

I wouldn’t be surprised if there’s more to come in this area.

P.S.: I actually started this blog posting in September 2014 and never finished it until now. I had to kick out some already-outdated stuff, but could also add some more recent things.

Dirk Eddelbuettel: #0: Introducing R^4

27 March, 2017 - 20:10

So I had been toying with the idea of getting back to the blog and more regularly writing / posting little tips and tricks. I even started taking some notes, but because perfect is always the enemy of the good, it never quite materialized.

But the relatively broad discussion spawned by last week's short rant on Suggests != Depends made a few things clear. There appears to be an audience. It doesn't have to be long. And it doesn't have to be too polished.

So with that, let's get the blogging back from micro-blogging.

This note forms post zero of what will be a new segment I call R4, which is shorthand for relatively random R rambling.

Stay tuned.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Creative Commons License: the copyright of each article belongs to its author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.