Planet Debian


Raphaël Hertzog: My Free Software Activities in August 2015

1 September, 2015 - 18:49

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

  • Prepared and released DLA-301-1, fixing 2 CVEs in python-django.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 11 commits to the security tracker.

Apart from that, I also gave a talk about Debian LTS at DebConf 15 in Heidelberg and coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings.

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That’s always good.

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support for the new “overlay” filesystem that replaces “aufs” in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian’s SSO).

While the numerous discussions I had during DebConf can’t be qualified as “work”, they certainly contribute to building up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested in the “Debian in the corporate IT” BoF led by Michael Meskes (Credativ’s CEO). He pointed out a number of problems that corporate users might face when they first consider using Debian, and we will try to do something about this. Expect further news and discussions on the topic.

Martin Krafft, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian’s Partner program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is moving forward again.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1, fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. This is in line with the new requirement of the release managers: a package in unstable should migrate to testing reasonably quickly; it’s not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican, so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected and I filed no fewer than 3 bugs upstream (the new bash-completion install path, a request to provide sources of a minified JavaScript file, and a request to drop a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I again stumbled upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It’s not a new issue, but I decided that it was time to report it upstream, so I did (upstream bug #2079). Some research helped me find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key. Again, this had already been reported, but there was no clear analysis, so I did my own and added the results of my investigation to #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is itself started before the D-Bus session. The simple fix is to restart the gpg-agent within the session, but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn’t solve the issue for users of other init systems, so it’s not satisfying).
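
For illustration, that workaround can be scripted. This is only a sketch, assuming a recent GnuPG (2.1 or later, where gpgconf can kill the agent; on older versions killing the gpg-agent process by hand achieves the same):

# Restart the agent from within the already-running D-Bus session;
# it is restarted automatically on its next use.
gpgconf --kill gpg-agent
gpg-connect-agent /bye   # triggers the restart, now inside the session
ssh-add -l               # the smartcard-backed key should be listed again
                         # (assumes SSH_AUTH_SOCK points at the agent)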

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223); go grab them and make a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut, who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It works nicely, and this authentication scheme is far easier to support. Good job, Enrico!

tracker.debian.org broke during DebConf: it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 with xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files, which were thus stale… and APT reported “Hashsum mismatch” on the uncompressed content.

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They now auto-detect which overlay filesystem is available with the running kernel (previously “aufs” was hardcoded).


See you next month for a new summary of my activities.


Bits from Debian: New Debian Developers and Maintainers (July and August 2015)

1 September, 2015 - 18:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller


Russ Allbery: Review: Kanban

1 September, 2015 - 11:06

Review: Kanban, by David J. Anderson

Publisher: Blue Hole
Copyright: 2010
ISBN: 0-9845214-0-2
Format: Trade paperback
Pages: 240

Another belated review, this time of a borrowed book. Which I can now finally return! A delay in the review of this book might be a feature if I had actually used it for, as the subtitle puts it, successful evolutionary change in my technology business. Sadly, I haven't, so it's just late.

So, my background: I've done a lot of variations of traditional project management for IT projects (both development and more operational ones), both as a participant and as a project manager. (Although I've never done the latter as a full-time job, and have no desire to do so.) A while back at Stanford, my team adopted Agile, specifically Scrum, so I did a fair bit of research about Scrum including a couple of training courses. Since then, at Dropbox, I've used a few different variations on a vaguely Agile-inspired planning process, although it's not particularly consistent with any one system.

I've been hearing about Kanban for a while and have friends who swear by it, but I only had a vague idea of how it worked. That seemed like a good excuse to read a book.

And Anderson's book is a good one if, like me, you're looking for a basic introduction. It opens with a basic description and definition, talks about motivation and the expected benefits, and then provides a detailed description of Kanban as a system. The tone is on the theory side, written using the terminology of (I presume) management theory and operations management, areas about which I know almost nothing, but the terminology wasn't so heavy as to make the book hard to read. Anderson goes into lots of detail, and I thought he did a good job of distinguishing between basic principles, optional techniques, and variations that may be appropriate for particular environments.

If you're also not familiar, the basic concept of Kanban is to organize work using an in-progress queue. It eschews time-bounded planning entirely in favor of staging work at the start of a sequence of queues and letting the people working on each queue pull from the previous queue when they're ready to take on a new task. As you might guess from that layout, Kanban was originally invented for assembly-line manufacturing (at Toyota). That was one of the problems that I had with it, or at least the presentation in this book: most of my software development doesn't involve finishing one part of something and handing it off to someone else, which made it hard to identify with the pipeline model. Anderson has clearly spent a lot of time working with large-scale programming shops, including outsourced development firms, with dedicated QA and operations roles. This is not at all how Silicon Valley agile development works, so parts of this book felt like missives from a foreign country.

That said, the key point of Kanban is not the assembly line but the work limit. One of the defining characteristics of Kanban, at least as Anderson presents it, is that one does not try to estimate work and tile it based on estimates, not even to the extent that Scrum and other Agile methodologies do within the sprint. Instead, each person takes as long as they take on the things they're working on, and pulls a new task when they have free capacity. The system instead puts a limit on how many things they can have in progress at a time. The problem of pipeline stalls is dealt with via continuous improvement, by "swarming" a problem to unblock the line (since other teams may not be able to work until the block is fixed), and by being careful about the sizing of the work items (I'm used to calling them stories) that go in the front end.

Predictability, which Scrum uses story sizing and team velocity analysis to try to achieve, is basically statistical. One uses a small number of buckets of sizes of stories, and the whole pipeline will finish some number of work items per unit time, with a variance. The promise made to clients and other teams is that some percentage of the work items will be finished within some time frame from when they enter the system. Most importantly, these are all descriptive properties, determined statistically after the fact, rather than planned properties worked out through story sizing and extensive team discussion. If you, like me, are pretty thoroughly sick of two-hour sprint planning meetings and endless sizing exercises, this is rather appealing.

My problem with most work planning systems is that I think they overplan and put too much weight on our ability to estimate how long work will take. Kanban is very appealing when viewed through that lens: it gives up on things we're bad at in favor of simple measurement and building a system with enough slack that it can handle work of various different sizes. As mentioned at the top, I haven't had a chance to try it (and I'm not sure it's a good fit with the inter-group planning methods in use at my current workplace), but I came away from this book wanting to do so.

If, like me, your experience is mostly with small combined teams or individual programming work, Anderson's examples may seem rather large, rather formal, and bureaucratic. But this is a solid introduction and well worth reading if your only experience with Agile is sprint planning, story writing and sizing, and fast iterations.

Rating: 7 out of 10

Norbert Preining: PiwigoPress release 2.31

1 September, 2015 - 07:36

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates new features for the sidebar widget, and better interoperability with some Piwigo galleries.

The new features are:

  • Selection of images: Up to now, images for the widget were selected at random. The current version allows selecting images at random (the default, as before) or in ascending or descending order by various criteria (upload time, availability time, id, name, etc.). With this change it is now possible to always display the most recent image(s) from a gallery.
  • Interoperability: Some Piwigo galleries don’t have thumbnail-sized representatives. For these galleries the widget was broken and didn’t display any image. We now check for either square- or thumbnail-sized images.

That’s all. Enjoy, and leave your wishlist items and complaints at the issue tracker of the github project piwigopress.

Junichi Uekawa: I've been writing ELF parser for fun using C++ templates to see how much I can simplify.

1 September, 2015 - 04:36
I've been writing an ELF parser for fun, using C++ templates to see how much I can simplify it. I've been reading the bionic loader code enough these days that I already know what it would look like in C, and I gradually converted it into C++; it's nevertheless fun to have a pet project that keeps growing.

Matthew Garrett: Working with the kernel keyring

1 September, 2015 - 00:18
The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
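
To make those masks easier to decode, here is a breakdown of the two values used above, following the byte layout documented in keyrings(7) and keyctl(1):

# The mask has one byte per category: 0xPPUUGGOO
#   PP = possessor, UU = user, GG = group, OO = other
# with six bits within each byte:
#   view=0x01 read=0x02 write=0x04 search=0x08 link=0x10 setattr=0x20
#   (all six together = 0x3f)
#
# 0x3f3f0000: possessor = all, user = all        (the user keyring)
# 0x3f010000: possessor = all, user = view only  (default for a new key)
sudo keyctl describe @u   # prints the current mask of UID 0's user keyring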

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.
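
To recap the whole dance, the working sequence fits in a few lines of shell. This is a sketch; instead of hard-coding a key ID like 678913344, it captures the ID that keyctl add prints:

# Create the key in the session keyring, which we possess...
keyid=$(sudo keyctl add user testkey testdata @s)
# ...so that we are allowed to open up its permissions...
sudo keyctl setperm "$keyid" 0x3f3f0000
# ...then link it into UID 0's user keyring so root sessions see it...
sudo keyctl link "$keyid" @u
# ...and finally unlink it from our session keyring so the unprivileged
# user can no longer revoke it out from under us.
sudo keyctl unlink "$keyid" @s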

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.


Martín Ferrari: Romania

31 August, 2015 - 13:59

It's been over 2 years since I decided to start a new, nomadic life. I had the idea of blogging about this experience as it happened, but not only am I incredibly lazy when it comes to writing, most of the time I have been too busy just enjoying this lifestyle!

The TL;DR version of these last 2 years:

  • Lounged a bit in Ireland after leaving work, went on a great road trip along the West coast.
  • Lived in Nice 3 months, back in the same house I lived between 2009 and 2010.
    • During that time, my dad visited and I took him for a trip around Northern Italy, the Côte d'Azur and Paris; then I travelled to DebConf in Switzerland, visited Gregor in Innsbruck, and travelled back to Nice by train, crossing the Alps and a big chunk of Italy.
  • Then went to Buenos Aires for 3 months (mom was very happy).
  • Back in Europe, attended Fosdem, and moved to Barcelona for 3 months; so far, the best city I ever lived in!
  • Went back to Dublin for a while, ended up staying over 8 months, including getting a temporary job at a normal office (booo!).
    • Although one of these months I spent travelling in the States (meh).
    • And of course, many more short trips, including a couple of visits to Barcelona, Lille, Nice, and of course Brussels for Fosdem.
  • Again went to Buenos Aires, only 2 months this time.
  • Another month in Dublin, then holidays visiting my friends in Lisbon, wedding in Oviedo, and a road trip around Asturias and Galicia.
  • From Spain I flew to Prague and stayed for 2 months (definitely not enough).
  • Quick trip to Dublin, then CCC camp and DebConf in Germany.

And now, I am in Cluj-Napoca, Romania.

View from my window


Martín Ferrari: IkiWiki

31 August, 2015 - 11:55

I haven't posted in a very long time. Not only because I suck at this, but also because IkiWiki decided to stop working with OpenID, so I can't use the web interface to post any more. Very annoying.

I've already spent a good deal of time trying to find a solution, without any success. I really don't want to migrate to different software yet again, but this is becoming a showstopper for me.


Russ Allbery: Review: Through Struggle, The Stars

31 August, 2015 - 10:54

Review: Through Struggle, The Stars, by John J. Lumpkin

Series: Human Reach #1
Publisher: John J. Lumpkin
Copyright: July 2011
ISBN: 1-4611-9544-6
Format: Kindle
Pages: 429

Never let it be said that I don't read military SF. However, it can be said that I read books and then get hellishly busy and don't review them for months. So we'll see if I can remember this well enough to review it properly.

In Lumpkin's future world, mankind has spread to the stars using gate technology, colonizing multiple worlds. However, unlike most science fiction futures of this type, it's not all about the United States, or even the United States and Russia. The great human powers are China and Japan, with the United States relegated to a distant third. The US mostly maintains its independence from either, and joins the other lesser nations and UN coalitions to try to pick up a few colonies of its own. That's the context in which Neil and Rand join the armed services: the former as a pilot in training, and the latter as an army grunt.

This is military SF, so of course a war breaks out. But it's a war between Japan and China: improved starship technology and the most sophisticated manufacturing in the world against a much larger economy with more resources and a larger starting military. For reasons that are not immediately clear, and become a plot point later on, the United States president immediately takes an aggressive tone towards China and pushes the country towards joining the war on the side of Japan.

Most of this book is told from Neil's perspective, following his military career during the outbreak of war. His plans to become a pilot get derailed as he gets entangled with US intelligence agents (and a bad supervisor). The US forces are not in a good place against China, struggling when they get into ship-to-ship combat, and Neil's ship goes on a covert mission to attempt to complicate the war with political factors. Meanwhile, Rand tries to help fight off a Chinese invasion of one of the rare US colony worlds.

Through Struggle, The Stars does not move quickly. It's over 400 pages, and it's a bit surprising how little happens. What it does instead is focus on how the military world and the war feels to Neil: the psychological experience of wanting to serve your country but questioning its decisions, the struggle of working with people who aren't always competent but who you can't just make go away, the complexities of choosing a career path when all the choices are fraught with politics that you didn't expect to be involved in, and the sheer amount of luck and random events involved in the progression of one's career. I found this surprisingly engrossing despite the slow pace, mostly because of how true to life it feels. War is not a never-ending set of battles. Life in a military ship has moments when everything is happening far too quickly, but other moments when not much happens for a long time. Lumpkin does a great job of reflecting that.

Unfortunately, I thought there were two significant flaws, one of which means I probably won't seek out further books in this series.

First, one middle portion of the book switches away from Neil to follow Rand instead. The first part of that involves the details of fighting orbiting ships with ground-based lasers, which was moderately interesting. (All the technological details of space combat are interesting and thoughtfully handled, although I'm not the sort of reader who will notice more subtle flaws. But David Weber this isn't, thankfully.) But then it turns into a fairly generic armed resistance story, which I found rather boring.

It also ties into the second and more serious flaw: the villain. The overall story is constructed such that it doesn't need a personal villain. It's about the intersection of the military and politics, and a war that may be ill-conceived but that is being fought anyway. That's plenty of conflict for the story, at least in my opinion. But Lumpkin chose to introduce a specific, named Chinese character in the villain role, and the characterization is... well.

After he's humiliated early in the story by the protagonists, Li Xiao develops an obsession with killing them, for his honor, and then pursues them throughout the book in ways that are sometimes destructive to the Chinese war efforts. It's badly unrealistic compared to the tone of realism taken by the rest of the story. Even if someone did become this deranged, it's bizarre that a professional military (and China's forces are otherwise portrayed as fairly professional) would allow this. Li reads like pure caricature, and despite being moderately competent apart from his inexplicable (but constantly repeated) obsession, is cast in a mustache-twirling role of personal villainy. This is weirdly out of place in the novel, and started irritating me enough that it took me out of the story.

Through Struggle, The Stars is the first book of a series, and does not resolve much by the end of the novel. That plus its length makes the story somewhat unsatisfying. I liked Neil's development, and liked him as a character, and those who like the details of combat mixed with front-lines speculation about politics may enjoy this. But a badly-simplified, mustache-twirling villain and some extended, uninteresting bits mar the book enough that I probably won't seek out the sequels.

Followed by The Desert of Stars.

Rating: 6 out of 10

Andrew Cater: Rescuing a Windows 10 failed install using GParted Live on CD

31 August, 2015 - 03:09
Windows 10 is here, for better or worse. As the family sysadmin, I've been tasked to update the Windows machines: ultimately, failure modes are not well documented and I needed Free software and several hours to recover a vital machine.

The "free upgrade for users of prior Windows versions" is a limited time offer for a year from launch. Microsoft do not offer licence keys for the upgrade: once a machine has updated to Windows 10 and authenticated on the 'Net, then a machine can be re-installed and will be regarded by Microsoft as pre-authorised. Users don't get the key at any point.

Although Microsoft have pushed the fact that this can be done through Windows Update, there is the option to use Microsoft's Media Creation tool to do the upgrade directly on the Windows machine concerned. This would be necessary to get the machine to upgrade and register before a full clean install of Windows 10 from media.

This Media Creation Tool failed for me on three machines with "Unable to access System reserved partition".

All the machines have SSDs from various makers: a colleague suggested that resizing the partition might enable the upgrade to continue. Of the machines that failed, all were running Windows 7: two booted via BIOS, one via UEFI on a machine with no Legacy/CSM mode.

Using the GParted Live .iso (itself based on Debian Live) allowed me to resize the System partition from 100MiB to 200MiB by moving the Windows partition, but Windows then became unbootable.

In two cases, I was able to boot from DVD Windows installation media and make Windows bootable again, at which point the Microsoft Media Creation Tool could be used to install Windows 10.

The UEFI boot machine proved more problematic: I had to create a Windows 7 System Repair disk and repair Windows booting before the Windows 10 install could proceed.

My Windows-using colleague had used only Windows-based recovery disks: using Linux tools allowed me to repair Windows installations I couldn't boot.

Antonio Terceiro: DebConf15, testing debian packages, and packaging the free software web

31 August, 2015 - 02:12

This is my August update, and by far the coolest thing in it is DebConf.


I don’t get tired of saying it is the best conference I ever attended. First, it’s a mix of meeting both new people and old friends, with the chance to chat with people whose work you admire but whom you never had a chance to meet before. Second, it’s always quality time: an informal environment, interesting and constructive presentations and discussions.

This year the venue was again very nice. Another thing that was very nice was having so many kids and families around. This was no coincidence, since this was the first DebConf in which there was organized childcare. As the community gets older, this is a very good way of keeping those who start having kids from being alienated from the community. Of course, not being a parent yet, I have no idea how hard it actually is to bring small kids to a conference like DebConf. ;-)

I presented two talks:

  • Tutorial: Functional Testing of Debian packages, where I introduced the basic concepts of DEP-8/autopkgtest, and went over several examples from my packages giving tips and tricks on how to write functional tests for Debian packages.
  • Packaging the Free Software Web for the end user, where I presented the motivation for, and the current state of shak, a project I am working on to make it trivial for end users to install server side applications in Debian. I spent quite some hacking time during DebConf finishing a prototype of the shak web interface, which was demonstrated live in the talk (of course, as usual with live demos, not everything worked :-)).

There was also the now traditional Ruby BoF, where we discussed the state and future of the Ruby ecosystem in Debian, and an impromptu Ruby packaging workshop where we introduced the basics of packaging in general, and of Ruby packaging specifically.

Besides shak, I was able to hack on a few cool things during DebConf:

  • debci has been updated with a first version of the code to produce britney hints files that block packages that fail their tests from migrating to testing. There are some issues to be sorted out together with the release team to make sure we don’t block packages unnecessarily, e.g. we don’t want to block packages that never passed their test suite (most likely the test suite, and not the package, is broken).
  • while hacking I ended up updating jquery to the newest version in the 1.x series, and in fact adopting it, I guess. This allowed me to drop the embedded jquery copy I used to have in the shak repository, and since then I was able to improve the build to produce output that is identical to the one produced by upstream (except for a build timestamp inside a comment and a few empty lines), without using grunt.
Miscellaneous updates

DebConf team: DebConf15: Farewell, and thanks for all the Fisch (Posted by DebConf Team)

31 August, 2015 - 01:24

A week ago, we concluded our biggest DebConf ever! It was a huge success.

We are overwhelmed by the positive feedback, for which we’re very grateful. We want to thank you all for participating in the talks; speakers and audience alike, in person or live over the global Internet — it wouldn’t be the fantastic DebConf experience without you!

Many of our events were recorded and streamed live, and are now available for viewing, as are the slides and photos.

To share a sense of the scale of what all of us accomplished together, we’ve compiled a few statistics:

  • 555 attendees from 52 countries (including 28 kids)
  • 216 scheduled events (183 talks and workshops), of which 119 were streamed and recorded
  • 62 sponsors and partners
  • 169 people sponsored for food & accommodation
  • 79 professional and 35 corporate registrations

Our very own designer Valessio Brito made a lovely video of impressions and images of the conference.


We’re collecting impressions from attendees as well as links to press articles, including Linux Weekly News coverage of specific sessions of DebConf. If you find something not yet included, please help us by adding links to the wiki.

We tried a few new ideas this year, including a larger number of invited and featured speakers than ever before.

On the Open Weekend, some of our sponsors presented their career opportunities at our job fair, which was very well attended.

And a diverse selection of entertainment options provided the necessary breaks and ample opportunity for socialising.

On the last Friday, the Oscar-winning documentary “Citizenfour” was screened, with some introductory remarks by Jacob Appelbaum and a remote address by its director, Laura Poitras, followed by a long Q&A session with Jacob.

DebConf15 was also the first DebConf with organised childcare (including a Teckids workshop for kids of age 8-16), which our DPL Neil McGovern standardised for the future: “it’s a thing now,” he said.

The participants used the week before the conference for intensive work, sprints and workshops, and throughout the main conference, significant progress was made on Debian and Free Software. Possibly the most visible was the endeavour to provide reproducible builds, but the planning of the next stable release “stretch” received no less attention. Groups like the Perl team, the diversity outreach programme and even DebConf organisation spent much time together discussing next steps and goals, and hundreds of commits were made to the archive, as well as bugs closed.

“DebConf15 was an amazing conference; it brought together hundreds of people, some oldtimers as well as plenty of new contributors, and we all had a great time, learning and collaborating with each other,” says Margarita Manterola of the organiser team, and continues: “The whole team worked really hard, and we are all very satisfied with the outcome.” Another organiser, Martin Krafft, adds: “We mainly provided the infrastructure and space. A lot of what happened during the two weeks was thanks to our attendees. And that’s what makes DebConf be DebConf.”

Our organisation was greatly supported by the staff of the conference venue, the Jugendherberge Heidelberg International, who didn’t take very long to identify with our diverse group, and who left no wish unfulfilled. The venue itself was wonderfully spacious and never seemed too full as people spread naturally across the various conference rooms, the many open areas, the beergarden, the outside hacklabs and the lawn.

The network installed specifically for our conference in collaboration with the nearby university, the neighbouring zoo, and the youth hostel provided us with a 1 Gbps upstream link, which we managed to almost saturate. The connection will stay in place, leaving the youth hostel as one with possibly the fastest Internet connection in the state.

And the kitchen catered high-quality food to all attendees and their special requirements. Regional beer and wine, as well as local specialities, were provided at the bistro.

DebConf exists to bring people together, which includes paying for travel, food and accommodation for people who could not otherwise attend. We would never have been able to achieve what we did without the support of our generous sponsors, especially our Platinum Sponsor Hewlett-Packard. Thank you very much.

See you next year in Cape Town, South Africa!

Philipp Kern: Automating the 3270 part of a Debian System z install

31 August, 2015 - 00:36
If you try to install Debian on System z within z/VM you might be annoyed at the various prompts it shows before it lets you access the network console via SSH. We can do better. From within CMS copy the default EXEC and default PARMFILE:


Now edit DEBAUTO EXEC A and replace the DEBIAN in 'PUNCH PARMFILE DEBIAN * (NOHEADER' with DEBAUTO. This will load the alternate kernel parameters file into the card reader, while still loading the original kernel and initrd files.

Replace PARMFILE DEBAUTO A's content with this (note the 80 character column limit):

ro locale=C                                                              
s390-netdevice/choose_networktype=qeth s390-netdevice/qeth/layer2=true   
netcfg/get_ipaddress=<IPADDR> netcfg/get_netmask=
netcfg/get_gateway=<GW> netcfg/get_nameservers=<FIRST-DNS>    
netcfg/confirm_static=true netcfg/get_hostname=debian                    

Replace <IPADDR>, <GW>, and <FIRST-DNS> to suit your local network config. You might also need to change the netmask, which I left in for clarity about the format. Adjust the device address of your OSA network card. If it's in layer 3 mode (very likely) you should set layer2=false. Note that mixed case matters, hence you will want to SET CASE MIXED in xedit.

Then there are the two URLs that need to be changed. The authorized_keys_url file contains your SSH public key and is fetched unencrypted and unauthenticated, so be careful what networks you traverse with your request (HTTPS is not supported by debian-installer in Debian).

preseed/url is needed for installation parameters that do not fit into the parameters file - there is an upper character limit that's about two lines longer than my example. This is why this example only contains the bare minimum for the network part; everything else goes into the preseeding file. The file can optionally be protected with an MD5 checksum in preseed/url/checksum.

Both URLs need to be very short. I thought that there was a way to specify a line continuation, but in my tests I was unable to produce one. Hence everything needs to fit on one line, including the key. You might want to use an IPv4 address as the hostname.
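
For illustration, the two URL lines could look like this in the parameters file (a sketch: <SERVER> is a placeholder, and network-console/authorized_keys_url and preseed/url are the debian-installer parameters referred to above):

network-console/authorized_keys_url=http://<SERVER>/k
preseed/url=http://<SERVER>/preseed.cfg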

To skip the initial boilerplate prompts and to skip straight to the user and disk setup you can use this as preseed.cfg:

d-i debian-installer/locale string en_US
d-i debian-installer/country string US
d-i debian-installer/language string en
d-i time/zone string US/Eastern
d-i mirror/country string manual
d-i mirror/http/mirror string
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string

I'm relatively certain that the DASD disk setup part cannot be automated yet. But the other bits of the installation should be preseedable just like on non-mainframe hardware.

Dirk Eddelbuettel: RcppGSL 0.3.0

30 August, 2015 - 22:05

A new version of RcppGSL just arrived on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using our Rcpp package.

Following on the heels of an update last month, we updated the package (and its vignette) further. One of the key additions concerns memory management: given that our proxy classes around the GSL vector and matrix types are real C++ objects, we can monitor their scope and automagically call free() on them rather than insisting on the user doing it. This renders code much simpler, as illustrated below. Dan Dillon added const correctness over a series of pull requests, which allows us to write more standard (and simply nicer) function interfaces. Lastly, a few new typedef declarations further simplify the use of the (most common) double and int vectors and matrices.

Maybe a code example will help. RcppGSL contains a full and complete example package illustrating how to write a package using the RcppGSL facilities. It contains an example of computing a column norm -- which we blogged about before when announcing a much earlier version. In its full glory, it looks like this:

#include <RcppGSL.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

extern "C" SEXP colNorm(SEXP sM) {

  try {

        RcppGSL::matrix<double> M = sM;     // create gsl data structures from SEXP
        int k = M.ncol();
        Rcpp::NumericVector n(k);           // to store results

        for (int j = 0; j < k; j++) {
            RcppGSL::vector_view<double> colview = gsl_matrix_column (M, j);
            n[j] = gsl_blas_dnrm2(colview);
        } ;
        return n;                           // return vector

  } catch( std::exception &ex ) {
        forward_exception_to_r( ex );

  } catch(...) {
        ::Rf_error( "c++ exception (unknown reason)" );
  return R_NilValue; // -Wall

We manually translate the SEXP coming from R, manually cover the try and catch exception handling, manually free the memory etc pp.

Well in the current version, the example is written as follows:

#include <RcppGSL.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

// [[Rcpp::export]]
Rcpp::NumericVector colNorm(const RcppGSL::Matrix & G) {
    int k = G.ncol();
    Rcpp::NumericVector n(k);           // to store results
    for (int j = 0; j < k; j++) {
        RcppGSL::VectorView colview = gsl_matrix_const_column (G, j);
        n[j] = gsl_blas_dnrm2(colview);
    }
    return n;                           // return vector
}

This takes full advantage of Rcpp Attributes automagically creating the interface and exception handler (as per the previous release), adds a const & interface, does away with the tedious and error-prone free(), and uses the shorter typedef forms RcppGSL::Matrix and RcppGSL::VectorView for double variables. Now the function is short and concise and hence easier to read and maintain. The package vignette has more details on using RcppGSL.
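
As a quick usage sketch from the R side (the file name colNorm.cpp is hypothetical; when compiling standalone via sourceCpp rather than inside the example package, the file also needs a // [[Rcpp::depends(RcppGSL)]] attribute at the top):

library(Rcpp)
sourceCpp("colNorm.cpp")           # compiles the snippet and exports colNorm()
M <- matrix(rnorm(20), nrow = 4)
colNorm(M)                         # one Euclidean norm per column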

The NEWS file entries follow below:

Changes in version 0.3.0 (2015-08-30)
  • The RcppGSL matrix and vector classes now keep track of object allocation and can therefore automatically free allocated objects in the destructor. Explicit use is still supported.

  • The matrix and vector classes now support const reference semantics in the interfaces (thanks to PR #7 by Dan Dillon)

  • The matrix_view and vector_view classes are reorganized to better support const arguments (thanks to PR #8 and #9 by Dan Dillon)

  • Shorthand forms such as RcppGSL::Matrix have been added for double and int vectors and matrices, including views.

  • Examples such as fastLm can now be written in a much cleaner and shorter way as GSL objects can appear in the function signature and without requiring explicit .free() calls at the end.

  • The included examples, as well as the introductory vignette, have been updated accordingly.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sven Hoexter: 1960 SubjectAlternativeNames on one certificate

30 August, 2015 - 20:58

tl;dr: You can add 1960+ SubjectAlternativeNames to one certificate, and at least Firefox and Chrome work fine with that. Internet Explorer failed, but I did not investigate why.

So why would you want close to 2K SANs on one certificate? While we're working on adopting a more dynamic development workflow at my workplace, we're currently bound to a central development system. From there we serve a classic virtual hosting setup with "projectname.username.devel.ourdomain.example" mapped onto "/web/username/projectname/". That is 100% dynamic with wildcard DNS entries: you can just add a new project to your folder and use it directly. All of that is served from just a single VirtualHost.

Now our developers have started to go through all our active projects to make them fit for serving via HTTPS. While we can verify the proper usage of HTTPS on our staging system, where we have valid certificates, that's not the way you'd like to work. So someone approached me to look into a solution for our development system. Obvious choices like wildcard certificates do not work here because we have two dynamic components in the FQDN: we would have to buy a wildcard certificate for every developer, and create a VirtualHost entry for every new developer. That's expensive and we don't want all that additional work. So I started to search for documented limits on the number of SANs you can have on a certificate. The good news: there are none. The RFC does not define a limit. So much for the theory.

Following Ivan's excellent documentation I set up an internal CA, and an ugly "find ... | sed ... | tr ..." one-liner later I had a properly formatted openssl config file to generate a CSR with all 1960 "projectname.username..." SAN combinations found on the development system. Two openssl invocations (CSR generation and signing) later, I had a signed certificate with 1960 SANs on it. I imported the internal CA I had created into Firefox and Chrome, and to my surprise it worked.

Noteworthy: to sign with "openssl ca" without interactive prompts you have to use the "-batch" option.
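
As a sketch of how such a certificate can be produced (file names, the sed mapping and the request config are assumptions; ca.cnf stands for the internal CA configured following Ivan's documentation):

# Build a comma-separated SAN list from the /web/<user>/<project> layout.
SANS=$(find /web -mindepth 2 -maxdepth 2 -type d \
       | sed 's|/web/\([^/]*\)/\([^/]*\)|DNS:\2.\1.devel.ourdomain.example|' \
       | tr '\n' ',' | sed 's/,$//')

# Minimal request config carrying all SANs in a single extension.
cat > san.cnf <<EOF
[req]
distinguished_name = dn
req_extensions = v3_req
[dn]
[v3_req]
subjectAltName = ${SANS}
EOF

# openssl invocation one: generate key and CSR.
openssl req -new -newkey rsa:2048 -nodes -keyout dev.key -out dev.csr \
        -subj '/CN=devel.ourdomain.example' -config san.cnf

# openssl invocation two: sign with the internal CA; -batch avoids the prompts.
openssl ca -batch -config ca.cnf -extfile san.cnf -extensions v3_req \
        -in dev.csr -out dev.crt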

I'm thinking about regenerating the certificate every morning, so our developers would just have to create a new project directory and within 24h serving via HTTPS would be enabled. The only thing I'm currently pondering is how to properly run the CA in a corporate Windows world. We could of course ask the Windows guys to include it for everyone, but then we would have to really invest time in properly running the CA, and I'd like to avoid that hassle. So I guess we'll just stick to providing the CA for those developers who need it. This all-or-nothing model is a constant PITA, and you really do not want to get owned via your own badly managed CA.

Regarding Internet Explorer, it jumped in my face with a strange error message that recommended enabling TLS 1.0, 1.1 and 1.2 in the options menu. Of course all of that is already enabled. I'll try to take a look at the handshake next week, but I bet we'll have to accept for the moment that IE will not work with so many SANs. It would be interesting to try out Windows 10 with Spartan, but I'm not interested enough in Windows to invest more time on that front. Other TLS implementations, like Java's, would also be interesting to test.


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.