Planet Debian


Jonathan McDowell: GnuK on the Maple Mini

8 February, 2017 - 01:34

Last weekend, as a result of my addiction to buying random microcontrollers to play with, I received some Maple Minis. I bought the Baite clone direct from AliExpress - so just under £3 each including delivery. Not bad for something that’s USB capable, is based on an ARM and has plenty of IO pins.

I’m not entirely sure what my plan is for the devices, but as a first step I thought I’d look at getting GnuK up and running on it. Only to discover that chopstx already has support for the Maple Mini, and it was just a matter of doing a ./configure --vidpid=234b:0000 --target=MAPLE_MINI --enable-factory-reset ; make. I’d hoped to install via the DFU bootloader already on the Mini, but I ended up making it unhappy, so I used SWD instead, following the same steps with OpenOCD as for the FST-01/BusPirate (SWCLK is D21 and SWDIO is D22 on the Mini; a rough sketch of the flashing invocation follows the kernel log below). Reset after flashing and the device is detected just fine:

usb 1-1.1: new full-speed USB device number 73 using xhci_hcd
usb 1-1.1: New USB device found, idVendor=234b, idProduct=0000
usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-1.1: Product: Gnuk Token
usb 1-1.1: Manufacturer: Free Software Initiative of Japan
usb 1-1.1: SerialNumber: FSIJ-1.2.3-87155426
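
(For reference, the OpenOCD flashing step mentioned above might look roughly like the following. This is only a sketch, not the exact invocation from the post: the serial port, the ELF path and any extra Bus Pirate settings are assumptions to adapt to your setup.)

openocd -f interface/buspirate.cfg \
        -c "buspirate_port /dev/ttyUSB0" \
        -f target/stm32f1x.cfg \
        -c "program build/gnuk.elf verify reset exit"
# depending on the Bus Pirate, extra buspirate_mode/buspirate_speed settings
# may be required in addition to the port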

And GPG is happy:

$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.3-87155426:0
Application ID ...: D276000124010200FFFE871554260000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 87155426
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

While GnuK isn’t the fastest OpenPGP smart card implementation this certainly seems to be one of the cheapest ways to get it up and running. (Plus the fact that chopstx already runs on the Mini provides me with a useful basis for other experimentation.)

Olivier Berger: Making Debian stable/jessie images for OpenStack with bootstrap-vz and cloud-init

8 February, 2017 - 00:09

I’m investigating the creation of VM images for different virtualisation solutions.

Among the target platforms is a desktop-as-a-service platform based on an OpenStack public cloud.

We’ve been working with bootstrap-vz for creating VMs for Vagrant+VirtualBox so I wanted to test its use for OpenStack.

There are already pre-made images available, including official Debian ones, but I like to be able to re-create things instead of depending on some external magic (which also means to be able to optimize, customize and avoid potential MitM, of course).

It appears that bootstrap-vz can be used with cloud-init provided that some bits of config are specified.

In particular the cloud_init plugin of bootstrap-vz requires metadata_sources set to “NoCloud, ConfigDrive, OpenStack, Ec2”. Note we explicitly spell it ‘OpenStack’ and not ‘Openstack’ as was mistakenly done in the default Debian cloud images.

The following snippet of manifest provides the necessary bits :

name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
  name: kvm
  - virtio_pci
  - virtio_blk
  workspace: /target
  # create or reuse a tarball of packages
  tarball: true
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
  backing: raw
    #type: gpt
    type: msdos
      filesystem: ext4
      size: 4GiB
      size: 512MiB
  # change if another mirror is closer
    password: whatever
    username: debian
    # Note we explicitly spell it 'OpenStack' and not 'Openstack' as done in the default Debian cloud images
    metadata_sources: NoCloud, ConfigDrive, OpenStack, Ec2
  # admin_user:
  #   username: Administrator
  #   password: Whatever
    # reduce the size by around 250 Mb
    zerofree: true

I’ve tested this with the bootstrap-vz version in stretch/testing (0.9.10+20170110git-1) for creating jessie/stable images, which were booted on the OVH OpenStack public cloud. YMMV.
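
For the record, building the image and pushing it to the cloud boils down to something like the following sketch; the output filename, image name and disk format are assumptions that depend on the manifest, so adjust accordingly.

# build the image from the manifest (typically needs root for loop devices)
sudo bootstrap-vz ./manifest.yml
# upload the result to the OpenStack image service
openstack image create --disk-format raw --container-format bare \
    --file debian-jessie-amd64.raw debian-jessie-custom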

Hope this helps.

Sven Hoexter: Dell Latitude E7470 hold and mark with upper left touchpad button

7 February, 2017 - 18:55

Recently some of my coworkers and I experienced an issue with using the upper left touchpad button on our Dell Latitude E7470 and similar laptops (E5xxx from the current generation). Some time in January we could no longer hold down this button and select text with the touchpad. Using the left button below the touchpad still worked. This hit my coworker running Fedora and myself running Debian/stretch. So I first thought that it's likely a libinput issue (same version in Debian/stretch and Fedora, and I had recently pulled that in as an update), somehow blacklisting the upper left key because it's connected to the trackpoint. So I filed #99594 upstream. While this was not very helpful at first, and according to Peter very unlikely to be related to libinput, another coworker using Debian/jessie found that this issue hit him when he upgraded the backports kernel in use from 4.8 to 4.9. That finally led to the conclusion that it's a bug in the Linux alps driver, which is already fixed in 4.10 and probably 4.9.6.

Until the Debian kernel team pulls in a fresh 4.9 point release I'm using 4.10-rc6 from experimental. For Debian/jessie + backports kernel users it might be more convenient to just stay at 4.8 in case this issue annoys you.

Kudos to Peter, Benjamin, TW and WW for the help in locating the origin of this issue!

Lessons learned:

  • I should've started with the painful downgrade of xorg and libinput via snapshot.d.o before opening the bug report (a sketch of such a downgrade follows after this list).
  • A lot more of the touchpad related hardware support is nowadays in the kernel and not in the xorg layer. Either that was just my personal historic misunderstanding, or it was different 10 years ago.
  • There is an interesting set of slides from Benjamin related to debugging input device issues.
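
For completeness, a downgrade via snapshot.debian.org usually looks roughly like the following. This is only a sketch: the snapshot timestamp, package names and versions are placeholders, not the ones actually involved in this bug.

# add a snapshot from before the suspected update (timestamp is a placeholder)
echo 'deb http://snapshot.debian.org/archive/debian/20161215T000000Z/ stretch main' | \
    sudo tee /etc/apt/sources.list.d/snapshot.list
# snapshot's Release files are long expired, so relax the validity check
sudo apt-get -o Acquire::Check-Valid-Until=false update
# then install the specific older versions, e.g.
sudo apt-get install libinput10=<old-version> xserver-xorg-core=<old-version>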

Junichi Uekawa: According to annual health check my weight has not increased the last year.

7 February, 2017 - 18:36
According to the annual health check, my weight has not increased over the last year. Hopefully that's because of going to the gym.

Sean Whitton: reclaiming conversation

7 February, 2017 - 09:52

On Friday night I attended a talk by Sherry Turkle called “Reclaiming Conversation: The Power of Talk in a Digital Age”. Here are my notes.

Turkle is an anthropologist who interviews people from different generations about their communication habits. She has observed cross-generational changes thanks to (a) the proliferation of instant messaging apps such as WhatsApp and Facebook Messenger; and (b) fast web searching from smartphones.

Her main concern is that conversation is being trivialised. Consider six or seven college students eating a meal together. Turkle’s research has shown that the etiquette among such a group has shifted such that so long as at least three people are engaged in conversation, others at the table feel comfortable turning their attention to their smartphones. But then the topics of verbal conversation will tend away from serious issues – you wouldn’t talk about your mother’s recent death if anyone at the table was texting.

There are also studies that purport to show that the visibility of someone’s smartphone causes them to take a conversation less seriously. The hypothesis is that the smartphone is a reminder of all the other places they could be, instead of with the person they are with.

A related cause of the trivialisation of conversation is that people are far less willing to make themselves emotionally vulnerable by talking about serious matters. People have a high degree of control over the interactions that take place electronically (they can think about their reply for much longer, for example). Texting is not open-ended in the way a face-to-face conversation is. People are unwilling to give up this control, so they choose texting over talking.

What is the upshot of these two respects in which conversation is being trivialised? Firstly, there are psycho-social effects on individuals, because people are missing out on opportunities to build relationships. But secondly, there are political effects. Disagreeing about politics immediately makes a conversation quite serious, and people just aren’t having those conversations. This contributes to polarisation.

Note that this is quite distinct from the problems of fake news and the bubbling effects of search engine algorithms, including Facebook’s news feed. It would be much easier to tackle fake news if people talked about it with people around them who would be likely to disagree with them.

Turkle understands connection as a capacity for solitude and also for conversation. The drip feed of information from the Internet prevents us from using our capacity for solitude. But then we fail to develop a sense of self. Then when we finally do meet other people in real life, we can’t hear them because we just use them to try to establish a sense of self.

Turkle wants us to be more aware of the effects that our smartphones can have on conversations. People very rarely take their phone out during a conversation because they want to escape from that conversation. Instead, they think that the phone will contribute to that conversation, by sharing some photos, or looking up some information online. But once the phone has come out, the conversation almost always takes a turn for the worse. If we were more aware of this, we would have access to deeper interactions.

A further respect in which the importance of conversation is being downplayed is in the relationships between teachers and students. Students would prefer to get answers by e-mail than build a relationship with their professors, but of course they are expecting far too much of e-mail, which can’t teach them in the way interpersonal contact can.

All the above is, as I said, cross-generational. Something that is unique to millennials and below is that we seek validation for the way that we feel using social media. A millennial is not sure how they feel until they send a text or make a broadcast (this makes them awfully dependent on others). Older generations feel something, and then seek out social interaction (presumably to share, but not in the social media sense of ‘share’).

What does Turkle think we can do about all this? She had one positive suggestion and one negative suggestion. In response to student or colleague e-mails asking for something that ought to be discussed face-to-face, reply “I’m thinking.” And you’ll find they come to you. She doesn’t want anyone to write “empathy apps” in response to her findings. For once, more tech is definitely not the answer.

Turkle also made reference to the study reported here and here and here.

Joachim Breitner: Why prove programs equivalent when your compiler can do that for you?

7 February, 2017 - 07:38

Last week, while working on CodeWorld, via a sequence of yak shavings, I ended up creating a nicely small library that provides Control.Applicative.Succs, a new applicative functor. And because I am trying to keep my Haskell karma good1, I wanted to actually prove that my code fulfills the Applicative and Monad laws.

This led me to writing long comments into my code, filled with lines like this:

The second Applicative law:

  pure (.) <*> Succs u us <*> Succs v vs <*> Succs w ws
= Succs (.) [] <*> Succs u us <*> Succs v vs <*> Succs w ws
= Succs (u .) (map (.) us) <*> Succs v vs <*> Succs w ws
= Succs (u . v) (map ($v) (map (.) us) ++ map (u .) vs) <*> Succs w ws
= Succs (u . v) (map (($v).(.)) us ++ map (u .) vs) <*> Succs w ws
= Succs ((u . v) w) (map ($w) (map (($v).(.)) us ++ map (u .) vs) ++ map (u.v) ws)
= Succs ((u . v) w) (map (($w).($v).(.)) us ++ map (($w).(u.)) vs ++ map (u.v) ws)
= Succs (u (v w)) (map (\u -> u (v w)) us ++ map (\v -> u (v w)) vs ++ map (\w -> u (v w)) ws)
= Succs (u (v w)) (map ($(v w)) us ++ map u (map ($w) vs ++ map v ws))
= Succs u us <*> Succs (v w) (map ($w) vs ++ map v ws)
= Succs u us <*> (Succs v vs <*> Succs w ws)

Honk if you have done something like this before!

I proved all the laws, but I was very unhappy. I have a PhD on something about Haskell and theorem proving. I have worked with Isabelle, Agda and Coq. Both Haskell and theorem proving are decades old. And yet, I sit here, and tediously write manual proofs by hand. Is this really the best we can do?

Of course I could have taken my code, rewritten it in, say, Agda, and proved it correct there. But (right now) I don’t care about Agda code. I care about my Haskell code! I don’t want to write it twice, worry about copying mistakes and mismatches in semantics, and have external proofs to maintain. Instead, I want to prove where I code, and have the proofs checked together with my code!

Then it dawned on me that this is, to some extent, possible. The Haskell compiler comes with a sophisticated program transformation machinery, which is meant to simplify and optimize code. But it can also be used to prove Haskell expressions to be equivalent! The idea is simple: Take two expressions, run both through the compiler’s simplifier, and check if the results are the same. If they are, then the expressions are, as far as the compiler is concerned, equivalent.

A handful of hours later, I was able to write proof tasks like

app_law_2 = (\ a b (c::Succs a) -> pure (.) <*> a <*> b <*> c)
        === (\ a b c -> a <*> (b <*> c))

and others into my source file, and the compiler would tell me happily:

[1 of 1] Compiling Successors       ( Successors.hs, Successors.o )
GHC.Proof: Proving getCurrent_proof1 …
GHC.Proof: Proving getCurrent_proof2 …
GHC.Proof: Proving getCurrent_proof3 …
GHC.Proof: Proving ap_star …
GHC.Proof: Proving getSuccs_proof1 …
GHC.Proof: Proving getSuccs_proof2 …
GHC.Proof: Proving getSuccs_proof3 …
GHC.Proof: Proving app_law_1 …
GHC.Proof: Proving app_law_2 …
GHC.Proof: Proving app_law_3 …
GHC.Proof: Proving app_law_4 …
GHC.Proof: Proving monad_law_1 …
GHC.Proof: Proving monad_law_2 …
GHC.Proof: Proving monad_law_3 …
GHC.Proof: Proving return_pure …
GHC.Proof proved 15 equalities

This is how I want to prove stuff about my code!

Do you also want to prove stuff about your code? I packaged this up as a GHC plugin in the Haskell library ghc-proofs (not yet on Hackage). The README of the repository has a bit more detail on how to use this plugin, how it works, what its limitations are and where this is heading.
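
If you want to give it a spin, a proof task lives right next to the code it talks about. A minimal module might look roughly like this; the plugin and import names here are from memory, so check the ghc-proofs README for the exact spelling, and whether a given equation is actually provable depends on what the simplifier can do with it.

{-# OPTIONS_GHC -fplugin GHC.Proof.Plugin #-}
{-# LANGUAGE ScopedTypeVariables #-}
module SuccsProofs where

import GHC.Proof
import Control.Applicative.Succs

-- the first functor law, written in the same style as the laws above
functor_law_1 = (\(x :: Succs a) -> fmap id x)
            === (\x -> x)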

This is still only a small step, but finally there is a step towards low threshold program equivalence proofs in Haskell.

  1. Or rather recover my karma after abominations such as ghc-dup, seal-module or ghc-heap-view.

Martin Pitt: Migrated blog from WordPress to Hugo

7 February, 2017 - 03:04

My WordPress blog got hacked two days ago and now twice today. This morning I purged MySQL and restored a good backup from three days ago, changed all DB and WordPress passwords (both the old and new ones were long and autogenerated ones), but not even an hour after the redeploy the hack was back. (It can still be seen on Planet Debian and Planet Ubuntu.) Neither the Apache logs nor the Journal had anything obvious, nor were there any new files in global or user www directories, so I’m a bit stumped how this happened. It was certainly not due to bruteforcing a password; that would both have shown up in the logs and also have triggered ban2fail, so this looks like an actual vulnerability.

I upgraded to WordPress 4.7.1 a few days ago, and apparently 4.7.2 fixes a few vulnerabilities, although none of them sounds like it would match my situation. jessie-backports is still at 4.7.1, so I missed that update. But either way, all WordPress blogs hosted on my server are down for the time being.

I took this as motivation to finally migrate to something more robust. WordPress has tons of features that I never need, and also a lot of overhead (dynamic generation, MySQL, its own user/passwords, etc.). I had a look around, and it seems Hugo and Blogofile are nice contenders – no privileges, no database, outputting static files, input is Markdown (so much nicer to type than HTML!), and maintaining your blog in git and previewing the changes on my local laptop are straightforward. I happened to try Hugo first, and like it enough to give it an extended try – you have plenty of themes to choose from and they are straightforward to customize, so I don’t need to spend a lot of time learning and crafting CSS.
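
For the record, the day-to-day Hugo workflow then boils down to a handful of commands (a sketch; the content path and commit message are just examples and depend on the chosen theme and setup):

hugo new posts/my-first-post.md    # create a new Markdown post
hugo server -D                     # live preview, drafts included, at http://localhost:1313
git add content/ && git commit -m "New post"
hugo                               # render the static site into public/ for deployment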

I ran the WordPress to Hugo Exporter, and it produced remarkable results – fairly usable HTML → Markdown and metadata conversion, it keeps all the original URLs, and it’s painless to use. Nicely done!

So here it is, on to a much more secure server now! \o/

Wouter Verhelst: FOSDEM 2017 is finished...

6 February, 2017 - 20:53

... but that doesn't mean the work is over.

One big job that needs to happen after the conference is to review and release the video recordings that were made. With several hundred videos to be checked and only a handful of people with the ability to do so, review was a massive job that for the past three editions took several months; e.g., in 2016 the last video work was done in July, when the preparation of the 2017 edition had already started.

Obviously this is suboptimal, and therefore another solution was required. After working on it for quite a while (in my spare time), I came up with SReview, a video review and transcoding system written in Perl.

An obvious question that could be asked is why I wrote yet another system, though, and did not use something that already existed. The short answer to that is "because what's there did not exactly do what I wanted to". The somewhat longer answer also involves the fact that I felt like writing something from scratch.

The full story, however, is this: there isn't very much out there, and what does exist is flawed in some ways. I am aware of three other review systems that are or were used by other conferences:

  1. A bunch of shell scripts that were written by the DebConf video team and hooked into the penta database. Nobody but DebConf ever used it. It allowed review via an NFS share and a web interface, and required people to watch .dv files directly from the filesystem in a media player. For this and other reasons, it could only ever be used from the conference itself. If nothing else, that final limitation made it impossible for FOSDEM to use it, but even if that wasn't the case it was still too basic to ever be useful for a conference the size of FOSDEM.
  2. A review system written by the CCC "voc" team. I've never actually seen it in use, but I've heard people describe it. It involves a complicated setup of NFS (or was it HTTP?) servers, short MPEG-4 transport stream segments, a FUSE filesystem, and kdenlive, which took someone several days to set up as an experiment back at DebConf15. Critically, important parts of it are also not licensed as free software, which to me rules it out for a tool in support of FOSDEM. Even if that wasn't the case, however, I'm still not sure it would be ideal; this system requires intimate knowledge of how it works from its user, which makes it harder for us to crowdsource the review to the speaker, as I had planned to.
  3. Veyepar. This one gets many things right, and we used it for video review at DebConf from DebConf14 onwards, as well as FOSDEM 2014 (but not 2015 or 2016). Unfortunately, it also gets many things wrong. Most of these can be traced back to the fact that Carl, as he freely admits, is not a programmer; he's more of a sysadmin type who also manages to cobble together a few scripts now and then. Some of the things it gets wrong are minor issues that would theoretically be fixable with a minimal amount of effort; others would be more involved. It is also severely underdocumented, and so as a result it is rather tedious for someone not very familiar with the system to be able to use it. On a more personal note, veyepar is also written in the wrong language, so while I might have spent some time improving it, I ended up starting from scratch.

Something all these systems have in common is that they try to avoid postprocessing as much as possible. This only makes sense; if you have to deal with loads and loads of video recordings, having to do too much postprocessing only ensures that it won't get done...

Despite the issues that I have with it, I still think that veyepar is a great system, and am not ashamed to say that SReview borrows many ideas and concepts from it. However, it does things differently in some areas, too:

  1. A major focus has been on making the review form be as easy to use as possible. While there is still room for improvement (and help would certainly be welcome in that area from someone with more experience in UI design than me), I think the SReview review form is much easier to use than the veyepar one (which has so many options that it's pretty hard to understand sometimes).
  2. SReview assumes that as soon as there are recordings in a given room sufficient to fill all the time that a particular event in that room was scheduled for, the whole event is available. It will then generate a first rough cut, and send a notification to the speaker in question, as well as the people who organized the devroom. The reviewer will then almost certainly be required to request a second (and possibly third or fourth) cut, but I think the advantage of that is that it makes the review workflow be more intuitive and easier to understand.
  3. Where veyepar requires one or more instances of per-state scripts to be running (which will then each be polling the database and just start a transcode or cut or whatever script as needed), SReview uses a single "dispatch" script, which needs to be run once for the whole system (if using an external scheduler) or once per core that may be used (if not using an external scheduler), and which does all the database polling required. The use of an external scheduler seemed more appropriate, given that things like gridengine exist; gridengine is a job scheduler which allows one to submit a job to be run on any node in a cluster, along with the resources that this particular job requires, and which will then either find an appropriate node to run the job on, or will put the job in a "pending" state until the required resources can be found. This allows me to more easily add extra encoding capacity when required, and allows me to also do things like allocate fewer resources to a particular part of the whole system, even while jobs are already running, without necessarily needing to abort jobs that might be using those resources.

The system seems to be working fine, although there's certainly still room for improvement. I'm thinking of using it for DebConf17 too, and will therefore probably work on improving it during DebCamp.

Additionally, the experience of using it for FOSDEM 2017 has given me ideas of where to improve it further, so it can be used more easily by different parties, too. Some of these have been filed as issues against a "1.0" milestone on github, but others are only newly formed in my gray matter and will need some thinking through before they can be properly implemented. Certainly, it looks like this will be something that'll give me quite some fun developing further.

In the mean time, if you're interested in the state of a particular video of FOSDEM 2017, have a look at the video overview page, which lists all talks along with their review/transcode status. Also, if you were a speaker or devroom organizer at FOSDEM 2017, please check your mailbox and review your talk! With your help, we should hopefully be able to release all our videos by the end of the week.

Update (2017-02-06 17:18): clarified my position on the qualities of some of the other systems after feedback from people who were a bit disappointed by my description of them... and which was fair enough. Apologies.

Jaldhar Vyas: Don't Believe Everything You Read on Debian Planet

6 February, 2017 - 12:41

Martin Pitt won the popular vote.

Russell Coker: SE Linux in Debian/Stretch

6 February, 2017 - 10:17

Debian/Stretch has been frozen. Before the freeze I got almost all the bugs in policy fixed, both bugs reported in the Debian BTS and bugs that I know about. This is going to be one of the best Debian releases for SE Linux ever.

Systemd with SE Linux is working nicely. The support isn’t as good as I would like; there is still work to be done for systemd-nspawn. But it’s close enough that anyone who needs to use it can use audit2allow to generate the extra rules needed. Systemd-nspawn is not used by default and it’s not something that a new Linux user is going to use; I think that expert users who are capable of using such features are capable of doing the extra work to get them going.

In terms of systemd-nspawn and some other rough edges, the issue is the difference between writing policy for a single system vs writing policy that works for everyone. If you write policy for your own system you can allow access for a corner case without a lot of effort. But if I wrote policy to allow access for every corner case then they might add up to a combination that can be exploited. I don’t recommend blindly adding the output of audit2allow to your local policy (be particularly wary of access to shadow_t and write access to etc_t, lib_t, etc). But OTOH if you have a system that’s running in enforcing mode that happens to have one daemon with more access than is ideal then all the other daemons will still be restricted.
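
As a concrete illustration of that workflow, the usual local-policy steps look roughly like this (a sketch; the module name is arbitrary, and as noted above you should review the generated rules instead of loading them blindly):

# collect recent AVC denials and turn them into a local policy module
ausearch -m avc -ts recent | audit2allow -M local-nspawn
# inspect what access would actually be granted before loading it
cat local-nspawn.te
# load the module (as root)
semodule -i local-nspawn.pp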

As for previous releases I plan to keep releasing updates to policy packages in my own apt repository. I’m also considering releasing policy source updates that can be applied on existing Stretch systems. So if you want to run the official Debian packages but need updates that came after Stretch then you can get them. Suggestions on how to distribute such policy source are welcome.

Please enjoy SE Linux on Stretch. It’s too late for most bug reports regarding Stretch as most of them won’t be sufficiently important to justify a Stretch update. The vast majority of SE Linux policy bugs are issues of denying wanted access, not permitting unwanted access (so not a security issue), and can be easily fixed by local configuration, so it’s really difficult to make a case for an update to Stable. But feel free to send bug reports for Buster (Stretch+1).


Daniel Stender: Howto create a Debian 9 preview as Vagrant box with Packer

6 February, 2017 - 07:00

I’ve got some little scripts and a template here to automatically create Vagrant boxes from cutting edge Debian testing daily snapshots (netinstall ISO image) using HashiCorp’s Packer.

To create Vagrant boxes with these, you first need a running binary of Packer. There is a Debian package available if that’s also your working environment, but Packer is going to be introduced into the stable branch with the upcoming Stretch release itself. However, Ubuntu already has it, and some other derivatives, too. And there are prebuilt binaries available from the developer’s site which run fine out-of-the-box (you just have to put the single binary somewhere into your $PATH, or expand that to find it). The JSON template should run with any Packer which is available for any of the different systems.

Vagrant itself isn’t needed to build the box with Packer, but Virtualbox is of course needed to pre-bake the machine image within a virtual machine. In Debian the base binaries of Virtualbox are in the contrib archive section, so that source might be added to /etc/apt/sources.list, if you haven’t already. The scripts have been tested to run with 5.1.10, and I haven’t seen any particular later version being demanded, but of course heavily outdated versions might not work properly.

Packer installs the guest additions ISO file for Virtualbox into the virtual machine (and the shipped provisioning script then builds them inside). For that, the Debian package which ships the ISO (which is in non-free) is recognized if it is installed, and can then be used by Packer. When the ISO isn’t available anywhere on the working machine, the builder automatically downloads the corresponding ISO.

When the tarball with the scripts is unpacked, just do make create and the process should run through, if Packer and Virtualbox are available. If your environment doesn’t have GNU Make or wget you might want to copy the relevant lines from the Makefile and run them manually. But if it does, just do it like this:

/tmp/debian-testing-vagrantbox$ make create
virtualbox-iso output will be in this color.
==> virtualbox-iso: Downloading or copying Guest additions
    virtualbox-iso: Downloading or copying: file:///usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Downloading or copying ISO
    virtualbox-iso: Downloading or copying:
    virtualbox-iso: Download progress: 10%
    virtualbox-iso: Download progress: 96%
==> virtualbox-iso: Starting HTTP server on port 8219
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 2885)
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Waiting for SSH to become available...

The Virtualbox window then pops up and the build process continues within the virtual machine for a while. Please file a Github issue if there’s a problem on your machine (and include the tail of your packer.log)!
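
If you would rather invoke Packer directly instead of going through make create, the equivalent is roughly the following (a sketch; the log path is just a suggestion and the Makefile may set further variables):

PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build debian-testing-vagrant.json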

The Packer template (debian-testing-vagrant.json) is described in the documentation of the virtualbox-iso builder. A preseeding script for the Debian Installer (preseed.cfg) is also included which gets injected into the virtual build environment during the build process. The creation progress of the Debian base installation can be easily monitored since the Virtualbox window is fully shown during the Packer run (if you “lose” your mouse pointer by clicking inside that window, press the right <Control> key to escape). For good performance, a fast internet connection is needed since a whole base system must be downloaded – if that’s available, the whole automated process only takes a couple of minutes to complete on a non-SSD machine.

When Packer has finished and a fresh box is created (the size is about 690 MB), it can then be used with Vagrant. Just add the new box with:

/tmp/debian-testing-vagrantbox$ vagrant box add stretch-preview
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'stretch-preview' (v0) for provider: 
    box: Unpacking necessary files from: file:///tmp/debian-testing-vagrantbox/
==> box: Successfully added box 'stretch-preview' (v0) for 'virtualbox'!

It can then be initialized within any working directory of your choice with:

/tmp/myproject$ vagrant init stretch-preview
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`` for more information on using Vagrant.

After that, you could launch the virtual box with:

/tmp/myproject$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'stretch-preview'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: myproject_default_1486321215067_75270
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address:
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /tmp/myproject

Then you can SSH into it by doing the following (touch is used here only to show the shared folder in action):

/tmp/myproject$ touch hello!
/tmp/myproject$ vagrant ssh -- -X

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
/usr/bin/xauth:  file /home/vagrant/.Xauthority does not exist
vagrant@packer-virtualbox-iso-1486319595:~$ cat /etc/debian_version
vagrant@packer-virtualbox-iso-1486319595:~$ ls /vagrant/
hello!  Vagrantfile
vagrant@packer-virtualbox-iso-1486319595:~$ exit
Connection to closed.
/tmp/myproject$ vagrant halt
==> default: Attempting graceful shutdown of VM...

If you haven’t worked with Vagrant before, maybe this is appealing. The experience differs from using a chroot. Packer makes it very convenient to keep freshly created boxes coming. Inside the box, to change the pre-installed US keyboard layout just do sudo dpkg-reconfigure keyboard-configuration (no password needed for sudo), and then sudo systemctl restart keyboard-setup.service.

Have fun!

Iustin Pop: Short rant/review of La La Land

6 February, 2017 - 06:16

Warning: Spoilers below. Rant below. Much angry, MANY ALL-CAPS. You've been warned!

So, today we went to see "La La Land", because I've heard good things about it, and because I do enjoy good musicals. And because of this, I wrote this post, instead of what I originally had in mind (related to kernel configuration).

Was it a good movie? Definitely yes. Was it a good musical? So and so. Did I like the ending? HELL NO, over and over NO.

The movie itself was much better than I expected. I don't read plot details or real reviews in advance, so I expected more of a musical, and less of a good plot. But the movie had a very good plot. Two young people, striving to fulfil their artistic dreams, fall in love, and they fight through, sometimes helping, sometimes hindering each other, until, finally, each gets their own breakthrough, etc.

The choice of actress was spot on—halfway through the movie, I was thinking that I can't imagine the same plot played by a different actress. Of course many other actresses could have played the part, but Emma Stone played so well, I have trouble seeing the same character with the same always half-happy, half-sad attitude. The choice of actor was I think OK—at first I was in doubt, but he played also well. Or maybe it was just that I couldn't identify with him at first. Not that I identify well with artists in general ☺

The dance scenes were OK, and the singing good, but as I said, the musical part was secondary to the actual struggles of the characters. The movie itself was, technically, very well done; a lot of filming was in bars/clubs/locations with difficult lighting, and the shooting was very good. They also had a scene on a pier, looking towards the ocean and the setting sun, and the characters walking towards the beach—so heavily back-lighted, and I kept thinking "If I get only one shot this perfectly exposed and colour correct(ed), I'm happy". So high notes here.

Back to the plot. The story of how she and him fought their own struggles was very nicely told. Tick-tack, up and down (hope and rejection), leaning on the other to get morale back, is a captivating story. The cliff-hanger at the pre-end with her career, the going back home, the last minute save, all very well told.

So at this stage, I would have given the movie a 9/10. And I was happy.

Then we have the usual "one character has to go away to a far away country for a long time", except in this case it was just 4 months. And they have the usual discussion "what do we do with our relation, where do we take it", and she says "I will always love you", to which he replies "And I will too" (or equivalent).

In my mind, this means they'll have to survive during the break, they'll have to also survive through his touring months/years, but in the end love will be stronger. Because this is what the movie told us until now, that she made it because of him, and he made it because of her. Neither of them would have been this strong without the other (he wouldn't have picked up the invitation from his old pal, she wouldn't have gone to the final audition request nor write the play which got her the audition/recognition). Estimated movie ending: awesome.

And then… something happens. The timeline jumps 5 years in the future (as expected), and she is famous, married (WITH SOMEONE ELSE) and happy mother of a 3-year old. Through fate, she and her husband enter the club of Sebastian (as he also fulfilled his dream), she and Sebastian see each other, he plays their song, during which we're served a re-run of the movie but in stupid "everything goes well" style (all bad events eliminated), in which it is she and Sebastian who enter the club (which belongs now to somebody else), and then we're back in real time, song ends, she and her husband leave, but before that she and Sebastian exchange one last smile, THE END.

And I'm sitting there, not believing my eyes. WHAT THE? So I get home, not write this post for four hours to calm down, but I can't. Because this doesn't make sense. AT ALL.

What does the internet say? Quoting from this CNN article, written exactly today. The director says:

"That ending was there from the get-go," [director Damien Chazelle] told CNN in a recent interview. "I think I just have a thing about love stories where the lovers don't wind up together at the end; I find it very romantic."

Huh, excuse me?

"I think there's a reason why most of the greatest love stories in history don't end with happily ever after," Chazelle said. "To me, if you're telling a story about love, love has to be bigger than the characters." Chazelle sees Mia and Sebastian's love as a "third character" and something that "lives on." "[The ending gives] you that sense that even if the relationship itself might be over in practical terms, the love is not over," he said. "The love lasts, and I think that's just a beautiful kind of thing."

OH FOR THE LOVE OF. This is a wishy-washy explanation that tries to approach the thing from the artistic side. No, this is bullshit, because of multiple things. Let me try to roll back and explain what I think was the intention.

  1. An earlier fight between Mia and Sebastian points to the fact that they're both very dedicated to their careers, and this means it's hard for them to stay together if they both chase their dream. He has to be on tour, and she has to rehearse for her play, so they won't see each other for at least two weeks (in this instance). Later, she calls him and leaves a message that she hasn't seen him in a while (complex scene which ends in another fight, which is very well done). So we see the conflict that seems to say "You can't have a relation of equals; one party has to give up their dream". While this might be partially true in the real world, I don't go to movies to see the real world.

  2. After the year-long window into their life, I can't think that either Sebastian or Mia can be really successful without the other; because they are so alike, so passionate about their dreams, that a normal person wouldn't be able to understand and push the other when they need it. However, the ending shows both Mia and Sebastian quite successful, so one has to wonder: did they make it alone? Sebastian seems so (we don't see a partner for him), Mia unclear, likely not. How did Sebastian get through? What did Mia find in her husband?

  3. This is very one-sided, since I'm a man, so bear with me: Sebastian helped Mia through her tough time. Once she got the breakthrough (and they split), she found somebody else, and I have to wonder in what circumstances they met. In the sense that maybe her husband only knew "successful Mia" and not "struggling/aspiring Mia". Her husband seems completely oblivious to all the eye contact between Mia and Sebastian in the club, and doesn't seem to know Sebastian, or about Sebastian, at all. How deep is their relation?

  4. This is still one-sided, sorry. When they break up (before Mia leaves for Paris), Sebastian asks "so where do we go from here?". Mia says "Nowhere". He asks once more, she rejects him again. So after one year of mad love and cries and happy moments, he gives up over two sentences? He's been following his dream (proper Jazz) in spite of all downturns in life until then, but he gives up on his real love over this? It doesn't make sense; trying to identify myself with the character, I can't reconcile this scene at all, unless he didn't really love her.

So no, I don't see them ending apart as romantic. I see it as the director is saying "You can't have both love and your [career] dreams. Choose either.", and he gives the "love" fake ending in the mini-re-roll of the movie, and the "career" wrong ending in the actual ending. And worse, he does it by negating significant parts of the character development done until now.

Moreover, this conclusion is wrong. Wrong because this is a movie, and if movies don't manage to make you dream that you can achieve all, if movies tell you "choose either", then all is lost. Their love is not a separate character; them struggling to find each other in the successful phase of their life, learning to adapt to the new "he" and "she", would be the third thing. As it was shown, their love is simply a young love, that can't really survive the changes in life; they each said "I'll love you forever", but with this ending it sounds more "I'll cherish the memory of young you forever". Or differently said, it sounded like a cheap excuse to use when ending their relationship, in order to not negate the relationship itself.

My version of the movie is another half hour long. It explains how Sebastian gets over the "only jazz is pure old jazz" attitude and manages to build a successful business around his old-style-but-modern jazz, instead of the pop-style jazz of the touring band (while thinking about her). It explains how Mia becomes a successful actress and gets over her first/second movies (while thinking about him), because one movie doesn't make one really successful (that reminds me: 3 year old child after 5 year forward-jump? when/how did her career go?). Hell, make it even more bitter—show how their correspondence starts strong but becomes more and more sporadic over time, dying after the first 2 years. Show how both of them try other relationships, and don't find the same spark that they had before.

And then, after they have matured, they meet again. And, just like the first time, they fall for each other, once again. She for his music, him for her passion for acting/for acting itself. She finds that him naming his club after her suggestion is oh-so-grown-up-and-sweet, he is happy that she finally grew into what he saw in her from the beginning. And he sings their song once more.

But no. I'm not an artist, so I can only get the "die hope die die die love because I can" version. I still recommend the movie, but not the "after 5 years" scenes.

Also, I didn't get time to bike today nor yesterday, so all you really get here is an ANGRY RANT. Because while I drink the coffee black and the tea without sugar, I like my happy endings, DAMN IT.

Dirk Eddelbuettel: random 0.2.6

6 February, 2017 - 06:04

A pure maintenance release of the random package for truly (hardware-based) random numbers as provided by is now on CRAN. As requested by CRAN, we made running tests optional. Not running tests is clearly one way of not getting (spurious, networking-related) failures ...

Courtesy of CRANberries comes a diffstat report for this release. Current and previous releases are available here as well as on CRAN.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Andreas Metzler: Testing gnutls28 3.3.8-6+deb8u5 (stable)

6 February, 2017 - 00:34

I am in the process of trying to fix CVE-2017-533[4567] for jessie and have created a preliminary candidate for uploading to stable. Source (and amd64 binaries for the trusting) is available here and also on the gnutls28_jessie branch in GIT.

I would appreciate testing and feedback to gnutls28 at Thanks!

Vincent Bernat: A Makefile for your Go project

5 February, 2017 - 20:28

My most loathed feature of Go is the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. Hopefully, this issue is slowly starting to be accepted by the main authors. In the meantime, you can work around this problem with more opinionated tools (like gb) or by crafting your own Makefile.

For the latter, you can have a look at Filippo Valsorda’s example or my own take which I describe in more detail here. This is not meant to be a universal Makefile but a relatively short one with some batteries included. It comes with a simple “Hello World!” application.

Project structure

For a standalone project, vendoring is a must-have1 as you cannot rely on your dependencies to not introduce backward-incompatible changes. Some packages are using versioned URLs but most of them aren’t. There is currently no standard tool to handle vendoring. My personal take is to vendor all dependencies with Glide.

It is a good practice to split an application into different packages while the main one stays fairly small. In the hellogopher example, the CLI is handled in the cmd package while the application logic for printing greetings is in the hello package:

├── cmd/
│   ├── hello.go
│   ├── root.go
│   └── version.go
├── glide.lock (generated)
├── glide.yaml
├── vendor/ (dependencies will go there)
├── hello/
│   ├── root.go
│   └── root_test.go
├── main.go
└── Makefile

Down the rabbit hole

Let’s take a look at the various “features” of the Makefile.

GOPATH handling

Since all dependencies are vendored, only our own project needs to be in the GOPATH:

PACKAGE  = hellogopher
GOPATH   = $(CURDIR)/.gopath
BASE     = $(GOPATH)/src/$(PACKAGE)

$(BASE):
    @mkdir -p $(dir $@)
    @ln -sf $(CURDIR) $@

The base import path is hellogopher, not a fully-qualified repository path: this shortens imports and makes them easily distinguishable from imports of dependency packages. However, your application won’t be go get-able. This is a personal choice and can be adjusted with the $(PACKAGE) variable.

We just create a symlink from .gopath/src/hellogopher to our root directory. The GOPATH environment variable is automatically exported to the shell commands of the recipes. Any tool should work fine after changing the current directory to $(BASE). For example, this snippet builds the executable:

.PHONY: all
all: | $(BASE)
    cd $(BASE) && $(GO) build -o bin/$(PACKAGE) main.go
Vendoring dependencies

Glide is a bit like Ruby’s Bundler. In glide.yaml, you specify what packages you need and the constraints you want on them. Glide computes a glide.lock file containing the exact versions for each dependency (including recursive dependencies) and downloads them into the vendor/ folder. I choose to check both the glide.yaml and glide.lock files into the VCS. It’s also possible to only check in the first one or to also check in the vendor/ directory. A work-in-progress is currently ongoing to provide a standard dependency management tool with a similar workflow.
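
For illustration, a minimal glide.yaml could look like this (the dependency and version constraint below are made up for the example):

package: hellogopher
import:
- package: github.com/pkg/errors
  version: ^0.8.0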

We define two rules2:

GLIDE = glide

glide.lock: glide.yaml | $(BASE)
    cd $(BASE) && $(GLIDE) update
    @touch $@
vendor: glide.lock | $(BASE)
    cd $(BASE) && $(GLIDE) --quiet install
    @ln -sf . vendor/src
    @touch $@

We use a variable to invoke glide. This enables a user to easily override it (for example, with make GLIDE=$GOPATH/bin/glide).

Using third-party tools

Most projects need some third-party tools. We can either expect them to be already installed or compile them in our private GOPATH. For example, here is the lint rule:

BIN    = $(GOPATH)/bin
GOLINT = $(BIN)/golint

$(BIN)/golint: | $(BASE) # ❶
    go get

.PHONY: lint
lint: vendor | $(BASE) $(GOLINT) # ❷
    @cd $(BASE) && ret=0 && for pkg in $(PKGS); do \
        test -z "$$($(GOLINT) $$pkg | tee /dev/stderr)" || ret=1 ; \
     done ; exit $$ret

As for glide, we give the user a chance to override which golint executable to use. By default, it uses a private copy. But a user can use their own copy with make GOLINT=/usr/bin/golint.

In ❶, we have the recipe to build the private copy. We simply issue go get3 to download and build golint. In ❷, the lint rule executes golint on each package contained in the $(PKGS) variable. We’ll explain this variable in the next section.

Working with non-vendored packages only

Some commands need to be provided with a list of packages. Because we use a vendor/ directory, the shortcut ./... is not what we expect as we don’t want to run tests on our dependencies4. Therefore, we compose a list of packages we care about:

PKGS = $(or $(PKG), $(shell cd $(BASE) && \
    env GOPATH=$(GOPATH) $(GO) list ./... | grep -v "^$(PACKAGE)/vendor/"))

If the user has provided the $(PKG) variable, we use it. For example, if they want to lint only the cmd package, they can invoke make lint PKG=hellogopher/cmd which is more intuitive than specifying PKGS.

Otherwise, we just execute go list ./... but we remove anything from the vendor directory.


Here are some rules to run tests:

TEST_TARGETS := test-default test-bench test-short test-verbose test-race
.PHONY: $(TEST_TARGETS) check test tests
test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
test-short:   ARGS=-short
test-verbose: ARGS=-v
test-race:    ARGS=-race

check test tests: fmt lint vendor | $(BASE)
    @cd $(BASE) && $(GO) test -timeout $(TIMEOUT)s $(ARGS) $(PKGS)

A user can invoke tests in different ways:

  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.
Tests coverage

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented, otherwise the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want an output compatible with Jenkins, you’ll need some additional tools.

COVERAGE_MODE    = atomic
COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml

.PHONY: test-coverage test-coverage-tools
test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❸
test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -Iseconds)
test-coverage: fmt lint vendor test-coverage-tools | $(BASE)
    @mkdir -p $(COVERAGE_DIR)/coverage
    @cd $(BASE) && for pkg in $(PKGS); do \ # ❹
        $(GO) test \
            -coverpkg=$$($(GO) list -f '{{ join .Deps "\n" }}' $$pkg | \
                    grep '^$(PACKAGE)/' | grep -v '^$(PACKAGE)/vendor/' | \
                    tr '\n' ',')$$pkg \
            -covermode=$(COVERAGE_MODE) \
            -coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
     done
    @$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
    @$(GO) tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)

First, we define some variables to let the user override them. We also require the following tools (in ❸):

  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.

The rules to build those tools are similar to the rule for golint described a few sections ago.

In ❹, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg by using go list to get a list of dependencies for the tested package and keeping only our own.

Final result

While the main goal of using a Makefile was to work around GOPATH, it’s also a good place to hide the complexity of some operations, notably around test coverage.

The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks!

  1. In Go, “vendoring” is about both bundling and dependency management. As the Go ecosystem matures, the bundling part (fixed snapshots of dependencies) may become optional but the vendor/ directory may stay for dependency management (retrieval of the latest versions of dependencies matching a set of constraints). 

  2. If you don’t want to automatically update glide.lock when a change is detected in glide.yaml, rename the target to deps-update and make it a phony target. 

  3. There is some irony in bad-mouthing go get and then immediately using it because it is convenient.

  4. I think ./... should not include the vendor/ directory by default. Dependencies should be trusted to have run their own tests in the environment they expect them to succeed in. Unfortunately, this is unlikely to change.

Francois Marier: IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

5 February, 2017 - 20:20

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (i.e. running your own IPv6 broker).

Request an additional IPv6 block

The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.

If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Setup the new IPv6 address

If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.

Once you've setup the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1
OpenVPN configuration

The only thing I had to change in my OpenVPN configuration (/etc/openvpn/server.conf) was to change:

proto udp

to

proto udp6

in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"

to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.

In addition to updating the OpenVPN config, you will need to enable IPv6 forwarding by adding the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1

and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT

in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.

With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
systemctl restart openvpn.service

Testing the connection

Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.
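
As a rough check (the exact output format depends on your iproute2 version, so treat this as a sketch), you can look for the pushed 2000::/3 route pointing at the tun interface:

# On the client, after the VPN is up: the pushed IPv6 route should show up,
# e.g. a line similar to "2000::/3 dev tun0 ..." in the routing table.
ip -6 route | grep '2000::/3'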

Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server (for example ipv6.google.com) from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002

If that works, other online tests should also work.
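
One quick command-line variant of such a test (the service used here is just a well-known example, not one mentioned above) is to ask an external service which address it sees:

# Sketch: force curl to use IPv6; the address it reports should be the
# client's VPN-assigned address from the 2600:3c01:xxxx:xxxx::/64 block.
curl -6 https://icanhazip.com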

Bits from Debian: Debian welcomes its Outreachy interns

5 February, 2017 - 18:00

Better late than never, we'd like to welcome our three Outreachy interns for this round, lasting from the 6th of December 2016 to the 6th of March 2017.

Elizabeth Ferdman is working in the Clean Room for PGP and X.509 (PKI) Key Management.

Maria Glukhova is working in Reproducible builds for Debian and free software.

Urvika Gola is working in improving voice, video and chat communication with free software.

From the official website: Outreachy helps people from groups underrepresented in free and open source software get involved. We provide a supportive community for beginning to contribute any time throughout the year and offer focused internship opportunities twice a year with a number of free software organizations.

The Outreachy program is possible in Debian thanks to the effort of Debian developers and contributors who dedicate part of their free time to mentoring students and to outreach tasks, and the help of the Software Freedom Conservancy, who provides administrative support for Outreachy, as well as the continued support of Debian's donors, who provide funding for the internships.

Debian will also participate in the next round for Outreachy, during the summer of 2017. More details will follow in the next weeks.

Join us and help extend Debian! You can follow the work of the Outreachy interns reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and mailing list.

Congratulations, Elizabeth, Maria and Urvika!

Shirish Agarwal: Flights of fancy – how to figure trends for Airline tickets.

5 February, 2017 - 15:28

Google Flights Fees tracking between PNQ and YUL and back, economy fares.

A couple of weeks ago, either on some mailing list, on IRC or somewhere else, somebody mentioned that people always put a higher amount of airline expenditure for themselves when asking for sponsorship.

Last year, there was a gap of three months between sending the application and getting the approval for sponsorship. When I put up my application, I added 10% to the cheapest flight-ticket prices that skyscanner or any of the meta-search-engines were showing me at that point in time.

I was sceptical whether the amount I had put in for the to and fro tickets would be enough. Strangely, I was lucky enough to get my ticket at around the new estimated price. I should mention, though, that there were only 2 tickets left at that price; if I had waited just a few hours more, those tickets would have gone too, and all the other tickets were around 25% more expensive. The only reasons I could fathom are –

a. Luck, pure and simple.

b. Going at the end of the tourism season – this was evident as I was able to book my extended stay at any hostel just 2 days before my stay at UCT (University of Cape Town) was over. It was corroborated by hostel staff, shop-owners, as well as whatever info I found on the web before and during my stay in SA.

c. South Africa probably being more lenient than Canada in giving and processing visas.

While looking at the third point, I thought I had better check world tourism rankings and saw the Wikipedia page for it. Interestingly, South Africa seems to have a slight edge over Canada when it comes to statistics, which strengthens my assumption that probably more people apply for SA than for Canada because they know they have a better chance of making it through visa processing. It stands to reason that more people would apply for a tourist or similar short-term visa if they know they have a good chance of getting through.

While researching the topic I also came across (well, hunted for) the hardest places to get a visa for and was surprised to find India listed therein. Coincidentally, that site also has a UK domain. It does burst the 'Incredible India' bubble a little bit.

As a newbie who had no clue, I knew I was probably a victim of information asymmetry: the airlines have much more information about travel trends, ongoing trends at airports, the politics and economics of countries, the price of crude oil, profit, competition and probably many more fare-deciding factors that I haven't taken into account.

While researching the topic, one of the interesting finds was that airlines didn't pass on fuel savings to their customers. I don't know whether this was the same around the world or only in the UK. I am shocked that British (and, by extension, EU, as the UK was part of the EU at that time) travellers or consumer groups didn't file a suit in a court of law, as the above smells of anticompetitive behaviour. The most shocking statement was this –

“Average fares to Spain rose by 10 per cent over the same period.” – Telegraph, UK.

In order to lessen this information asymmetry a bit, I used Google Flights and its data for the past 2 months to see how the fares have been changing and to get some insight into where they might end up. I know Google is hated by one and all, but in this instance I couldn't find any comparable site which does this kind of thing.

As can be seen in the graph, the tickets started relatively cheap at around INR 65k and are now around INR 80k, a jump of around 30% in the last couple of months. All of these flights have a layover somewhere in Europe, with a second flight from there to Canada.

The one which didn't show much action is the direct flight between BOM and YUL and back, but that seems to be a premium service. Taking a direct flight from BOM to YUL is north of INR 90k, which doesn't make much sense unless one is fond of spending 13+ hours in flight. Definitely not my cup of tea.

With layovers it makes the experience a bit more bearable.

While the real action is probably 3 or more months away, it's interesting to see how things are panning out, at least for airline ticket prices and the dynamics involved.

Even with all the above attempts at finding the answer, I'm no closer to figuring out how to estimate airline ticket prices when the booking window is a largish 3 months. Any ideas, anybody?

If the previous jump is any indication, then a 10-15% escalation might not hack it this time around. Any strategies people could advise for putting together a ball-park figure?

Filed under: Miscellenous Tagged: #air-travel, #Airfares, #Competition, #Price Estimation, #Sponsorship

Dirk Eddelbuettel: RcppCCTZ 0.2.1

5 February, 2017 - 05:41

A new minor version, 0.2.1, of RcppCCTZ is now on CRAN. It corrects a possible shortcoming with rounding in the conversion from the internal representation (in C++11, using int64_t) to the two double values for seconds and nanoseconds handed to R. Two other minor changes are also summarized below.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

The changes in this version are summarized here:

Changes in version 0.2.1 (2017-02-04)
  • Conversion from timepoint to two double values now rounds correctly (#14 closing #12, with thanks to Leonardo)

  • The Description was expanded to stress the need for a modern C++11 compiler; g++-4.8 (as on 'trusty' eg in Travis CI) works

  • Travis CI is now driven from our fork

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: nanotime 0.1.1

5 February, 2017 - 05:33

A new version of the nanotime package for working with nanosecond timestamps is now on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting, and the bit64 package for the actual integer64 arithmetic.

This release adds an improved default display format always showing nine digits of fractional seconds. It also changes the print() method to call format() first, and we started to provide some better default Ops methods. These fixes were suggested by Matt Dowle. We also corrected a small issue which could lead to precision loss in formatting as pointed out by Leonardo Silvestri.

Changes in version 0.1.1 (2017-02-04)
  • The default display format now always shows nine digits (#10 closing #9)

  • The default print method was updated to use formatted output, and a new converter as.integer64 was added

  • Several 'Ops' methods are now explicitly defined, allowing casting of results (rather than falling back on bit64 behaviour)

  • The format routine is now more careful about not losing precision (#13 closing #12)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

