Planet Debian


Noah Meyerhans: On the demise of Linux Journal

3 December, 2017 - 09:54

LWN, Slashdot, and many others have marked the recent announcement of Linux Journal's demise. I'll take this opportunity to share some of my thoughts, and to thank the publication and its many contributors for their work over the years.

I think it's probably hard for younger people to imagine what the Linux world was like 20 years ago. Today, it's really not an exaggeration to say that the Internet as we know it wouldn't exist at all without Linux. Almost every major Internet company you can think of runs almost completely on Linux. Amazon, Google, Facebook, Twitter, etc, etc. All Linux. In 1997, though, the idea of running a production workload on Linux was pretty far out there.

I was in college in the late 90's, and worked for a time at a small Cambridge, Massachusetts software company. The company wrote a pretty fancy (and expensive!) GUI builder targeting big expensive commercial UNIX platforms like Solaris, HP-UX, SGI IRIX, and others. At one point a customer inquired about the availability of our software on Linux, and I, as an enthusiastic young student, got really excited about the idea. The company really had no plans to support Linux, though. I'll never forget the look of disbelief on a company exec's face as he asked "$3000 on a Linux system?"

Throughout this period, on my lunch breaks from work, I'd swing by the now defunct Quantum Books. One of the monthly treats was a new issue of Linux Journal on the periodicals shelf. In these issues, I learned that more forward-thinking companies actually were using Linux to do real work. An article entitled "Linux Sinks the Titanic" described how Hollywood deployed hundreds(!) of Linux systems running custom software to generate the special effects for the 1997 movie Titanic. Other articles documented how Linux was making inroads at NASA and in the broader scientific community. Even the ads were interesting, as they showed increasing commercial interest in Linux, both on the hardware (HyperMicro, VA Research, Linux Hardware Solutions, etc) and software (CDE, Xi Graphics) fronts.

The software world is very different now than it was in 1997. The media world is different, too. Not only is Linux well established, it's pretty much the dominant OS on the planet. When Linux Journal reported in the late 90's that Linux was being used for some new project, that was news. When they documented how to set up a Linux system to control some new piece of hardware or run some network service, you could bet that they filled a gap that nobody else was working on. Today, it's no longer news that a successful company is using Linux in production. Nor is it surprising that you can run Linux on a small embedded system; in fact it's quite likely that the system shipped with Linux pre-installed. On the media side, it used to be valuable to have everything bundled in a glossy, professionally produced archive published on a regular basis. Today, at least in the Linux/free software sphere, that's less important. Individual publication is easy on the Internet today, and search engines are very good at ensuring that the best content is the most discoverable content. The whole Internet is basically one giant continuously published magazine.

It's been a long time since I paid attention to Linux Journal, so from a practical point of view I can't honestly say that I'll miss it. I appreciate the role it played in my growth, but there are so many options for young people today entering the Linux/free software communities that it appears that the role is no longer needed. Still, the termination of this magazine is a permanent thing, and I can't help but worry that there's somebody out there who might thrive in the free software community if only they had the right door open before them.

Thomas Goirand: There’s cloud, and it can even be YOURS on YOUR computer

3 December, 2017 - 05:00

Each time I see the FSFE picture, just like in Daniel’s last post to planet.d.o, where it says:

“There is NO CLOUD, just other people’s computers”

it makes me so frustrated. There’s such a thing as a private cloud, set up on your own servers. I’ve been working on delivering OpenStack to Debian for the last six and a half years, motivated exactly by this issue: I refuse a world in which the only clouds people can use are closed-source solutions like GCE, AWS or Azure. The FSFE (and the FSF) completely dismissing this work is more than annoying: it is counterproductive. Not only should the FSFE not pull anyone away from the cloud, it should push the public to choose cloud providers running free software like OpenStack.

The marketplace lists 23 public cloud providers running OpenStack, so there is now no excuse to use any other type of cloud: for sure, there’s one where you need it. If you use a free software solution like OpenStack, then the question of whether you’re running on your own hardware, on rented hardware (on which you deployed OpenStack yourself), or on someone else’s OpenStack deployment is just a practical one, and one you can revisit quickly. That’s one of the very reasons to deploy on the cloud: it makes it possible to redeploy quickly on another cloud provider, or even on your own private cloud. This gives you more freedom than you ever had, because you are no longer dependent on the hosting company you’ve selected: switching provider is just a matter of launching a script. The reality is that neither the FSFE nor RMS understands this. Please don’t buy into the FSFE’s very wrong message.
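The provider switch described above can be reduced to a client-side configuration entry. Here is a minimal sketch of an OpenStack clouds.yaml with two interchangeable targets; all names, URLs and credentials are hypothetical:

```yaml
# ~/.config/openstack/clouds.yaml (sketch; every value here is made up)
clouds:
  my-private-cloud:
    auth:
      auth_url: https://keystone.internal.example:5000/v3
      username: me
      project_name: prod
    region_name: RegionOne
  public-provider:
    auth:
      auth_url: https://auth.cloud-provider.example/v3
      username: me
      project_name: prod
```

With this in place, the same deployment script can be pointed at either cloud by changing a single flag: `openstack --os-cloud my-private-cloud server list` versus `openstack --os-cloud public-provider server list`.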

Steve Kemp: repository cleanup, and email-changes.

3 December, 2017 - 05:00

I've shuffled around all the repositories which are associated with the blogspam service, such that they're all in the same place and refer to each other correctly.

Otherwise I've done a bit of tidying up on virtual machines, and I'm just about to drop the use of qpsmtpd for handling my email. I've used the (perl-based) qpsmtpd project for many years, and documented how my system works in a "book".

I'll be switching to a pure exim4-based setup later today, and we'll see what that does. So far today I've received over five thousand spam emails:

  steve@ssh /spam/today $ find . -type f | wc -l

Looking more closely, though, over half of these rejections are "dictionary attacks", so they're not spam I'd suddenly see if I dropped the qpsmtpd layer. Here's a sample log entry (for a mail that was both rejected at SMTP-time by qpsmtpd and archived to disc in case of error):

    "reason":"Mail for juha not accepted at",
    "subject":"Viagra Professional. Beyond compare. Buy at our shop.",
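To see how much of the archived rejection traffic is dictionary attacks versus other causes, the JSON archive can be tallied. A sketch, assuming one JSON object per archived file with a "reason" key as in the sample above (the directory layout is an assumption):

```python
import json
from collections import Counter
from pathlib import Path

def tally_reasons(log_dir):
    """Count how often each rejection reason occurs in the archive.

    Assumes one JSON object per file; files that don't parse are skipped.
    """
    counts = Counter()
    for path in Path(log_dir).glob("*"):
        try:
            entry = json.loads(path.read_text())
        except (json.JSONDecodeError, IsADirectoryError, UnicodeDecodeError):
            continue  # partial or non-JSON files
        counts[entry.get("reason", "unknown")] += 1
    return counts.most_common()
```

Running it over a day's archive gives a quick breakdown like "dictionary attack: 2800, DNS blacklist: 1900, ...", which is how one would arrive at the "over half" figure.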

I suspect that with procmail piping to crm114, and a beefed-up spam-checking configuration for exim4, I'll not see a significant difference, and I'll have removed something non-standard. For what it is worth, over 75% of the remaining junk which was rejected at SMTP-time was rejected via DNS-blacklists. So again exim4 will take care of that for me.
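A sketch of what the equivalent exim4 checks could look like in the RCPT ACL: recipient verification catches the dictionary-attack traffic, and dnslists handles the DNS blacklists. The blacklist names are illustrative, and the exact file depends on how the Debian split config is laid out:

```
# e.g. in /etc/exim4/conf.d/acl/30_exim4-config_check_rcpt (sketch)

# Reject unknown local parts: this is the dictionary-attack traffic.
deny
  message = unknown user
  !verify = recipient

# Reject hosts listed in DNS blacklists.
deny
  message  = $sender_host_address is listed in $dnslist_domain
  dnslists = zen.spamhaus.org : bl.spamcop.net
```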

If it turns out that I'm getting inundated with junk-mail I'll revert this, but I suspect that it'll all be fine.

Thorsten Alteholz: My Debian Activities in November 2017

2 December, 2017 - 23:55

FTP master

As you might have read elsewhere, I am no longer an FTP assistant. I am very delighted about my new delegation as FTP master.

So this month I almost doubled the number of packages I accepted, to 385, and rejected 60 uploads. The overall number of packages that got accepted this month was 448.

Debian LTS

This was my forty-first month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 13 hours. During that time I did LTS uploads of:

  • [DLA 1188-1] libxml2 security update (one CVE)
  • [DLA 1191-1] python-werkzeug security update (one CVE)
  • [DLA 1192-1] libofx security update (two CVEs)
  • [DLA 1195-1] curl security update (one CVE)
  • [DLA 1194-1] libxml2 security update (two CVEs)

I also took care of an rsync issue and continued to work on wireshark.

Other stuff

During November I uploaded new upstream versions of …

I also did uploads of …

  • openoverlayrouter to change the source package Section: and fix some problems in Ubuntu
  • duktape to not only provide a shared library but also a pkg-config file
  • astronomical-almanac to make Helmut happy and fix an FTCBFS, for which he also provided the patch

Last month I wrote about apcupsd as the DOPOM of October. Unfortunately, November brought the next power outage, due to a malfunction in a transformer station. I never would have guessed that such a malfunction could do so much harm within the power grid. Anyway, the power was back after 31 minutes, and my batteries would have lasted 34 minutes before turning off all computers. At least my spec was correct :-).

The DOPOM for this month has been dateutils.

As it is this time of the year again, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team runs a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug will be closed. Don’t hesitate, start to squash :-).

Last but not least I sponsored the upload of evqueue-core.

Martin-Éric Racine: dbus, rsyslogd, systemd: Which one is the culprit?

2 December, 2017 - 22:03
I have been facing this issue for a few weeks on Testing. For many weeks, it prevented upgrading dbus to the latest version that trickled into Testing. Having manually force-installed dbus via the Recovery Mode's shell, I then ran into this issue:

This is a nasty one, since it also prevents performing a clean poweroff. That systemd-journald line about getting a timeout while attempting to connect to the Watchdog keeps on showing ad infinitum.

What am I missing?

Daniel Pocock: Hacking with posters and stickers

2 December, 2017 - 03:27

The hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.

Ben Hutchings: Debian LTS work, November 2017

2 December, 2017 - 00:54

I was assigned 13 hours of work by Freexian's Debian LTS initiative and carried over 4 hours from October. I worked all 17 hours.

I prepared and released two updates on the Linux 3.2 longterm stable branch (3.2.95, 3.2.96), but I didn't upload an update to Debian. However, I have rebased the Debian package on 3.2.96 and expect to make a new upload soon.

Ben Hutchings: Mini-DebConf Cambridge 2017

2 December, 2017 - 00:51

Last week I attended Cambridge's annual mini-DebConf. It's slightly strange to visit a place one has lived in for a long time but which is no longer home. I joined Nattie in the 'video team house' which was rented for the whole week; I only went for four days.

I travelled down on Wednesday night, and spent a long time (rather longer than planned) on trains and in waiting rooms. I used this time to catch up on discussions about signing infrastructure for Secure Boot, explaining my concerns with the most recent proposal and proposing some changes that might alleviate those. Sorry to everyone who was waiting for that; I should have replied earlier.

On the Thursday and Friday I prepared for my talk, and had some conversations with Steve McIntyre and others about SB signing infrastructure. Nattie and Andy respectively organised group dinners at the Polish club on Thursday and a curry house on Friday, both of which I enjoyed.

The mini-DebConf proper took place on the Saturday and Sunday, and I presented my now annual talk on "What's new in the Linux kernel". As usual, the video team did a fine job of recording and publishing video of the talks.

Junichi Uekawa: I was trying to get selenium up and running.

2 December, 2017 - 00:20
I was trying to get selenium up and running. I wanted to try headless chrome, and selenium seemed to be the usable option, but it didn't just work out of the box with the Debian apt-get-installed binary. hmm.

James McCoy: Monthly FLOSS activity - 2017/11 edition

1 December, 2017 - 12:02
Debian vim
  • Uploaded version 2:8.0.1257-1 to fix a flaky test and the debsources syntax highlighting
    • This round of builds revealed some more flaky tests, so I sent PR#2282 to make the tests more deterministic.
  • Uploaded version 2:8.0.1257-2
  • Uploaded versions 0.2.1-1 and 0.2.1-1
    • There were various test failures, which thankfully boiled down to a few common issues
      • Lua appears to be stricter than LuaJIT when formatting nil
      • A few potentially flaky screen tests
      • Garbage being formatted into an error string
  • Uploaded version 0.2.1-3 to fix the test failures
  • Uploaded version 0.2.2-1
  • Uploaded version 0~bzr715-1
    • This is needed by the new pangoterm revision to report focus events
  • Uploaded version 0~bzr607-1
    • Focus reporting can now be enabled by applications
    • The -T switch, an alias for --title, is supported for full compliance with the requirements of an x-terminal-emulator alternative
  • Uploaded version 1.3.9-4
    • Cleaned up the packaging
    • Marked a symbol as public since svn has been using it for 10 years, which let me drop a patch from svn's packaging
  • Changed how swig is invoked so there are explicit mechanisms for passing general or language-specific options when generating the bindings. No longer are CPPFLAGS being (ab)used to handle this, so the whack-a-mole game of stripping unrecognized switches was also removed. (r1816254)
  • Since there was a bit of development activity at the hackathon and talk of another pre-release for 1.10, I performed a Coverity scan of trunk. I did a quick scrub of some of the new and existing issues, one of which I proposed for a backport to 1.9.x.

Paul Wise: FLOSS Activities November 2017

1 December, 2017 - 09:44
Changes / Issues / Review / Administration
  • hexchat-addons: merged pull request
  • Debian: fix LDAP issue, reenable webserver on a VM, redirect users to support channels
  • Debian wiki: whitelist email addresses
  • Debian website: check on a build issue
  • Invite Slax to the Debian derivatives census

All work was done on a volunteer basis.

Antoine Beaupré: November 2017 report: LTS, standard disclosure, Monkeysphere in Python, flash fraud and Goodbye Drupal

1 December, 2017 - 05:33
Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I didn't do as much as I wanted this month, so a bunch of hours are carried over to next month. I got frustrated by two difficult packages: exiv2 and libreoffice.


For Exiv2 I first reported the issues upstream as requested in the original CVE assignment. Then I went to see if I could reproduce the issue. Valgrind didn't find anything, so I went on to try the new instructions that explain how to build with ASAN in LTS. It turns out I couldn't make that work either. Fortunately, Roberto was able to build the package properly and confirmed the wheezy version as non-vulnerable, so I marked the three CVEs as not-affected and moved on.


Next up was LibreOffice. I started backporting the patches to wheezy, which was a little difficult because any error in the backport takes hours to find, LibreOffice being so big. The monster takes about 4 hours to build on my i3-6100U processor - I can't imagine how long that would take on a slower machine. Still, I managed to get patches that mostly build. I say mostly because while most of the code builds, the tests fail to build. Not only do they fail to build, they even segfault the linker. At that point, I had already spent too many hours in this frustrating work/build-wait/crash loop, so I gave up.

I also worked on reproducing a supposed regression associated with the last security update. Somehow, I couldn't reproduce it either - the description of the regression was very limited and all suggested approaches failed to trigger the problems described.


Finally, a little candy: an easy backport of a simple 2-line patch for a simple program, OptiPNG, which, ironically, had a vulnerability (CVE-2017-16938) in its GIF parsing. I could do hundreds of those breezy updates; they are fun, simple, and easy to test. This resulted in the trivial DLA-1196-1.


LibreOffice stretched the limits of my development environment. I had to figure out how to deal with out-of-space conditions in the build tree (/build), something that is really not obvious in sbuild. I ended up documenting that in a new troubleshooting section in the wiki.
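For reference, one workaround for a cramped /build in an sbuild/schroot setup is to bind-mount a roomier filesystem over it via the schroot profile fstab. This is a sketch, not the wiki's exact recipe; the path and profile name are assumptions:

```
# /etc/schroot/sbuild/fstab (or whichever profile the chroot uses):
# bind a larger filesystem over /build so monsters like libreoffice
# don't exhaust the chroot's space. /srv/big-build is a made-up path.
/srv/big-build  /build  none  rw,bind  0  0
```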

Other free software work

feed2exec

I pushed forward with the development of my programmable feed reader, feed2exec. Last month I mentioned I released the 0.6.0 beta: since then 4 more releases were published, and we are now at the 0.10.0 beta. This added a bunch of new features:

  • wayback plugin to save feed items to the Wayback Machine
  • archive plugin to save feed items to the local filesystem
  • transmission plugin to save RSS Torrent feeds to the Transmission torrent client
  • vast expansion of the documentation, now hosted on ReadTheDocs. The design was detailed with a tour of the source code and detailed plugin writing instructions were added to the documentation, also shipped as a feed2exec-plugins manpage.
  • major cleanup and refactoring of the codebase, including standard locations for the configuration files, which have moved
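The core fetch/match/dispatch loop of a programmable feed reader can be sketched in a few lines of stdlib Python. This only illustrates the idea, not feed2exec's actual API; process_feed and its signature are invented for the example:

```python
import xml.etree.ElementTree as ET

def process_feed(rss_text, handler, seen):
    """Run handler(title, link) for each item whose GUID is not in `seen`.

    Returns the number of new items; `seen` is updated in place, so a
    second run over the same feed dispatches nothing.
    """
    new = 0
    for item in ET.fromstring(rss_text).iter("item"):
        guid = item.findtext("guid") or item.findtext("link")
        if guid in seen:
            continue
        seen.add(guid)
        handler(item.findtext("title"), item.findtext("link"))
        new += 1
    return new
```

A plugin like "wayback" or "transmission" is then just a different handler function, which is what makes the design programmable.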

The documentation deserves special mention. If you compare version 0.6 with the latest version, you can see 4 new sections:

  • Plugins - extensive documentation on plugin use, the design of the plugin system and a full tutorial on how to write new plugins. The tutorial was written alongside the archive plugin, which was created as an example just for that purpose and should be readable by novice programmers
  • Support - a handy guide on how to get technical support for the project, copied over from the Monkeysign project.
  • Code of conduct - was originally part of the contribution guide. The idea then was to force people to read the Code when they wanted to contribute, but it wasn't a good one: the contribution page was overloaded, and critical parts were hidden far down the page, after what is essentially boilerplate text. Conversely, the Code was itself hidden in the contribution guide. Now it is clearly visible from the top, and trolls will see this is an ethical community.
  • Contact - another idea from the Monkeysign project. It became essential when the security contact was added (see below).

All those changes were backported into the ecdysis template documentation, and I hope to propagate them to my other projects eventually. As part of my documentation work, I also drifted into the Sphinx project itself and submitted a patch to make manpage references clickable as well.

I now use feed2exec to archive new posts on my website to the Internet Archive, which means I have an ad-hoc offsite backup of all the content I post online. I think that's pretty neat. I also leverage the LinkChecker program to look for dead links in new articles published on the site. This is possible thanks to an Ikiwiki-specific filter to extract links to changed files from the Recent Changes RSS feed.

I'm considering making the parse step of the program pluggable. This is an idea I had in mind for a while, but it didn't make sense until recently. I described the project and someone said "oh that's like IFTTT", a tool I wasn't really aware of, which connects various "feeds" (Twitter, Facebook, RSS) to each other, using triggers. The key concept here is that feed2exec could be made to read from Twitter or other feeds, like IFTTT and not just write to them. This could allow users to bridge social networks by writing only to a single one and broadcasting to the other ones.

Unfortunately, this means adding a lot of interface code and I do not have a strong use case for this just yet. Furthermore, it may mean switching from a "cron" design to a more interactive, interrupt-driven design that would run continuously and wake up on certain triggers.

Maybe that could come in a 2.0 release. For now, I'll see to it that the current codebase is solid. I'll release a 0.11 release candidate shortly, which has seen a major refactoring since 0.10. I again welcome beta testers and users to report their issues. I am happy to report I got and fixed my first bug report on this project this month.

Towards standard security disclosure guidelines

When reading the excellent State of Open Source Security report, some metrics caught my eye:

  • 75% of vulnerabilities are not discovered by the maintainer

  • 79.5% of maintainers said that they had no public-facing disclosure policy in place

  • 21% of maintainers who do not have a public disclosure policy have been notified privately about a vulnerability

  • 73% of maintainers who do have a public disclosure policy have been notified privately about a vulnerability

In other words, having a public disclosure policy more than triples your chances of being notified of a security vulnerability. I was also surprised to find that 4 out of 5 projects do not have such a page. Then I realized that none of my projects had such a page, so I decided to fix that and fix my documentation templates (the infamously named ecdysis project) to specifically include a section on security issues.

I found that the HackerOne disclosure guidelines were pretty good, except they require having a bounty for disclosure. I understand it's part of their business model, but I have no such money to give out - in fact, I don't even pay myself for the work of developing the project, so I don't quite see why I would pay for disclosures.

I also found that many projects include OpenPGP key fingerprints in their contact information. I find that a little silly: project documentation is no place to offer OpenPGP key discovery. If security researchers cannot find and verify OpenPGP key fingerprints, I would be worried about their capabilities. Adding a fingerprint or key material is just bound to create outdated documentation when maintainers rotate. Instead, I encourage people to use proper key discovery mechanisms like the Web of Trust, WKD, or obviously TOFU, which is basically what publishing a fingerprint does anyway.
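With WKD, the lookup location is derived mechanically from the address, so no fingerprint needs to appear in the docs at all. A sketch of the "direct method" path per the IETF draft (SHA-1 of the lowercased local part, z-base-32 encoded); wkd_path is a hypothetical helper, not a gpg API:

```python
import base64
import hashlib

# z-base-32 alphabet used by WKD, mapped from the RFC 4648 base32 alphabet.
_B32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
_ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h768"

def wkd_path(address):
    """Return the WKD "direct method" URL for an email address."""
    local, domain = address.rsplit("@", 1)
    # 20-byte SHA-1 digest encodes to exactly 32 base32 characters.
    digest = hashlib.sha1(local.lower().encode("utf-8")).digest()
    hashed = base64.b32encode(digest).decode().translate(
        str.maketrans(_B32, _ZB32))
    return f"https://{domain}/.well-known/openpgpkey/hu/{hashed}"
```

In practice one would simply let GnuPG do this: `gpg --locate-keys` with WKD enabled performs the same derivation and fetch.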


After being granted access to the Git-Mediawiki project last month, I got to work. I fought hard with Travis, Git, Perl, and MediaWiki to add continuous integration to the repository. It turns out the MediaWiki remote doesn't support newer versions of MediaWiki because the authentication system changed radically. Even though there is supposed to be backwards-compatibility, it doesn't seem to really work in my case, which means that any test case that requires logging into the wiki fails. This also required using an external test suite instead of the one provided by Git, which insists on building Git before being used at all. I ended up using the Sharness project and submitted a few tests that were missing.


I also opened a discussion regarding the versioning scheme and started tracking issues that would be part of the next release. I encourage people to get involved in this project if they are interested: the door is wide open for contributors, of course.

Rewriting Monkeysphere in Python?

After being told one too many times that Monkeysphere doesn't support elliptic curve algorithms, I finally documented the issue and looked into solutions. It turns out that a key part of Monkeysphere is a Perl program written using a library that doesn't itself support ECC, which makes this problem very hard to solve. So I looked into the PGPy project to see if it could be useful, and it turns out that ECC support already exists - the only problem was that the specific curve GnuPG uses, Ed25519, was missing. The author fixed that, but the fix requires a development version of OpenSSL, because even the 1.1 release doesn't support that curve.

I nevertheless went ahead to see how hard it would be to re-implement that key component of Monkeysphere in Python, and ended up with monkeypy, an analysis of the Monkeysphere design and a first prototype of a Python-based OpenPGP / SSH conversion tool. This led to a flurry of issues on the PGPy project to fix UTF-8 support, add easier revocation checks, and a few more. I also reviewed 9 pull requests that were pending in the repository. A key piece missing is keyserver interaction, and I made a first prototype of this as well.

It was interesting to review Monkeysphere's design. I hope to write more about this in the future, especially about how it could be improved, and find the time to do this re-implementation which could mean wider adoption of this excellent project.

Goodbye, Drupal

I finally abandoned all my Drupal projects, except the Aegir-related ones, which I somewhat still feel attached to, even though I do not do any Drupal development at all anymore. I do not use Drupal anywhere (unless as a visitor on some websites maybe?), I do not manage Drupal or Aegir sites anymore, and in fact, I have completely lost interest in writing anything remotely close to PHP.

So it was time to go. I have been involved with Drupal for a long time: I registered in March 2001, almost 17 years ago. I was active for about 12 years on the site: I made my first post in 2003, which showed I was already "hacking core", which back then was the way to go. My first commit, to the mailman_api project, was also in 2003, and I kept contributing until my last commit in 2015, on the Aegir project of course.

That is probably the longest time I have spent on any software project, with the notable exception of the Debian project. I had a lot of fun working with Drupal: I met amazing people, traveled all over the world and made powerful websites that shook the world. Unfortunately, the Drupal project has decided to go in a direction I cannot support: Drupal 8 is a monster that is beyond anything needed for me, my friends or the organizations I support. It may be all very well for startups and corporations creating large sites, but it seems to have completely lost touch with its roots. Drupal used to be a simple tool for "community plumbing" (ref), it is now a behemoth to "Launch, manage, and scale ambitious digital experiences—with the flexibility to build great websites or push beyond the browser" (ref).

When I heard a friend had his small artist website made in Drupal, my first questions were "which version?" and "who takes care of the upgrades?" Those questions still don't seem to be resolved in the community: a whole part of the project is still stuck in the old Drupal 7 world, years after the first D8 alpha releases. Drupal 8 doesn't address the fundamental issue of upgradability and cost: Drupal sites are bigger than they ever were. The Drupal community seems to be following a very different way of thinking (notice the Amazon Echo device there) than what I hope the future of the Internet will be.

Goodbye, Drupal - it's been a great decade, but I'm already gone and I'm not missing you at all.

Fixing fraudulent memory cards

I finally got around to testing the interesting f3 project on a real use case, which allowed me to update the stressant documentation with actual real-world results. A friend told me the memory card he put in his phone had trouble with some pictures he was taking: they would show up as "gray" when he opened them, yet the thumbnails looked good.

It turns out his memory card wasn't actually 32GB as advertised, but only 16GB. Any write sent to the card above that 16GB barrier was silently discarded, and reads came back as garbage. But thanks to the f3 project it was possible to fix the card so he could use it normally.
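The failure mode is easy to model: the controller advertises more blocks than the flash behind it can store, so an f3-style check writes a distinct pattern to every advertised block and counts what fails to read back. A toy simulation of the idea, with tiny block counts standing in for the real 16/32 GB:

```python
class FakeCard:
    """A card that claims `claimed` blocks but really stores only `real`.

    Writes past the real capacity are silently discarded, and reading
    those blocks returns garbage, mimicking the fraudulent card.
    """
    def __init__(self, real, claimed):
        self.real = real
        self.claimed = claimed
        self.store = {}

    def write(self, block, data):
        if block < self.real:
            self.store[block] = data

    def read(self, block):
        return self.store.get(block, b"garbage")

def f3_style_check(card):
    """Write a distinct pattern everywhere, then count failed round trips."""
    for n in range(card.claimed):
        card.write(n, n.to_bytes(4, "big"))
    return sum(1 for n in range(card.claimed)
               if card.read(n) != n.to_bytes(4, "big"))
```

A genuine card fails zero round trips; a "32-block" card backed by 16 real blocks fails exactly the upper half, which is how f3write/f3read expose the fraud.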

In passing, I worked on the f3 project to merge the website with the main documentation, again using the Sphinx project to generate more complete documentation. The old website is still up, but hopefully a redirection will be installed soon.

  • quick linkchecker patch, still haven't found the time to make a real release

  • tested the goaccess program to inspect web server logs and get some visibility on the traffic to this blog. found two limitations: some graphs look weird and would benefit from a logarithmic scale and it would be very useful to have plain text reports. otherwise this makes for a nice replacement for the aging analog program I am currently using, especially as a sysadmin quick diagnostic tool. for now I use this recipe to send a HTML report by email every week

  • found out about the mpd/subsonic bridge and sent a few issues about problems I found. turns out the project is abandoned and this prompted the author to archive the repository. too bad...

  • uploaded a new version of etckeeper into Debian, and started an upstream discussion about what to do with the pile of patches waiting there

  • uploaded a new version of the also aging smokeping in Debian as well to fix a silly RC bug about the rename dependency

  • GitHub also says I "created 7 repositories", "54 commits in 8 repositories", "24 pull requests in 11 repositories", "reviewed 19 pull requests in 9 repositories", and "opened 23 issues in 14 repositories".

  • blocked a bunch of dumb SMTP bots attacking my server using this technique. this gets rid of those messages I was getting many times per second in the worst times:

    lost connection after AUTH from unknown[]
  • happily noticed that fail2ban gained the possibility of incremental ban times in the upcoming version 0.11, which I documented. unfortunately, the Debian package is out of date for now...

  • tested Firefox Quantum with great success: all my extensions are supported and it is much, much faster than it was. I do miss the tab list drop-down, which somehow disappeared, and I also miss a Debian package. I am also concerned about the maintainability of Rust in Debian, but we'll see how that goes.
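For the fail2ban 0.11 incremental ban times mentioned above, the setup boils down to a short jail.local fragment; the cap value below is an example, not a recommendation:

```ini
# /etc/fail2ban/jail.local (fail2ban >= 0.11)
[DEFAULT]
# grow the ban time for repeat offenders (the default formula
# doubles it on each new offence)
bantime.increment = true
# cap the growth at five weeks
bantime.maxtime = 5w
```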

I spent only 30% of this month on paid work.

Chris Lamb: Free software activities in November 2017

1 December, 2017 - 00:51

Here is my monthly update covering what I have been doing in the free software world in November 2017 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Presented at the Open Compliance Summit 2017 in Yokohama, Japan and had many follow-up conversations regarding using reproducible builds as a way of ensuring the long-term sustainability of civil infrastructure.
  • Created pull requests upstream for fswatch, bitz-server, stetl, nbsphinx & stardicter.
  • Updated diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, to only parse DTB's version number, not any -dirty suffix. (#880279)
  • Expanded the documentation for disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues, highlighting the non-intuitive recommendation to sort instead of shuffle. [...]
  • Made some brief changes to my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them:
    • Add a by-hash API endpoint. [...]
    • Support ?key__uid=X&key__uid=Y filtering. [...]
  • Updated our website:
    • Move the "contribute" page from the Debian wiki to /contribute/. [...]
    • Add a (redirecting) /docs/source-date-epoch/ page so we have a canonical URL. [...]
    • Add recent talks to Resources page. [...]
    • Cachebust CSS files. [...]
  • In Debian, categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Made some changes to our comprehensive testing framework:
    • Ignore "warning" strings in commit messages that were causing builds to be marked as unstable. [...]
    • Update the email subject of status-change mails away from a Debian-specific URI. [...]
    • Move some IRC announcements to #debian-reproducible-changes. [...]
    • Move some IRC announcements to #debian-reproducible-changes. [...]
  • Worked on publishing our weekly reports. (#132, #133, #134 & #135)
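The directory-ordering non-determinism that disorderfs exposes can be simulated without FUSE at all; this sketch mimics an unstable readdir order with shuf (shuf and seq are GNU coreutils assumptions) and shows why the "sort instead of shuffle" recommendation matters:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
mkdir tree
for i in $(seq 1 8); do echo "$i" > "tree/file$i"; done

# Simulated disorderfs: if the file order fed to tar varies between
# runs, the archive bytes vary too, even though the contents are
# identical.
find tree -type f | shuf | tar -cf rand1.tar -T -
find tree -type f | shuf | tar -cf rand2.tar -T -
cmp -s rand1.tar rand2.tar || echo "unsorted input: archives differ"

# Sorting the file list first fixes the member order, and therefore
# the output: the archive becomes reproducible.
find tree -type f | sort | tar -cf sorted1.tar -T -
find tree -type f | sort | tar -cf sorted2.tar -T -
cmp -s sorted1.tar sorted2.tar && echo "sorted input: archives identical"
```

disorderfs works the other way around: it deliberately injects this kind of ordering instability at the filesystem layer so that packages which forgot to sort fail loudly during testing.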


My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed
  • dget: Please support downloading packages over gopher://. (#880649)
  • gpaw: Incorrectly creates logging files called - instead of logging to standard output. (#882638)
  • pk4: Please avoid the use of avail in package descriptions. (#881343)
Debian LTS

This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1161-1 for the redis key-value storage database to fix a cross-protocol scripting attack.
  • Issued DLA 1162-1 & DLA 1163-1 to fix out-of-bounds memory vulnerabilities in apr and apr-util, portability libraries for various Apache applications.
  • Issued DLA 1173-1 to fix a heap-based buffer overflow in procmail, a tool used to sort incoming mail into various directories and filter out spam messages.
  • Issued DLA 1174-1 to correct a denial of service vulnerability in the konversation IRC client related to parsing of color formatting codes.
  • Issued DLA 1175-1 for the lynx-cur web browser, preventing a use-after-free vulnerability in the HTML parser which could lead to memory/information disclosure.

Uploads
  • python-django:
  • redis:
    • 4.0.2-6 — Correct locations of redis-sentinel pidfiles. (#880980)
    • 4.0.2-7 — Add a redis metapackage. (#876475)
    • 4.0.2-8 — Use get_current_dir_name over a PATHMAX, etc. (#881684), don't rely on taskset existing for kFreeBSD-* (#881683), drop "memory efficiency" tests on advice from upstream (#881682) and allow the package to be bin-NMUable.
    • 4.0.2-9 — Modify aof.c for MAXPATHLEN issues. (#881684)
    • 4.0.2-9~bpo9+1 — Upload to stretch-backports.
  • bfs:
    • 1.1.4-1 — New upstream release.
    • 1.1.4-2 — Use upstream's new manpage.
  • python-daiquiri:
    • 1.3.0-2 — Ensure all dependencies are available for DEP-8 tests. (#882876)
  • redisearch (0.90.0~alpha1-1, 0.90.1-1, 0.99.0-1 & 0.99.2-1) — New upstream releases.

Finally, I also made a non-maintainer upload (NMU) of cpio (2.12+dfsg-5) to the experimental distribution.

Debian bugs filed
  • cappuccino: Broken symlink in /usr/games. (#880714)
  • statsmodels: Accesses during build. (#882641)
  • python-lti: Please run the upstream testsuite. (#880834)
  • git-buildpackage: gbp dch needs a better workflow description. (#880552)
  • audacity: New upstream release. (#880717)
  • python-djangorestframework: New upstream release. (#880538)
  • djangorestframework: New upstream release. (#880558)

I also filed 2 FTBFS bugs against django-axes & plinth.

FTP Team

As a Debian FTP assistant I ACCEPTed 58 packages: aladin, apulse, aribb24, ayatana-indicator-printers, beads, belr, binutils, breezy-debian, brightnessctl, cupt, dino-im, evqueue-core, fdm-materials, fonts-noto-color-emoji, gcc-8-cross, gcc-8-cross-ports, gnome-shell-extension-hide-veth, gnome-shell-extension-no-annoyance, gnome-shell-extension-tilix-shortcut, gnome-shell-extension-workspaces-to-dock, goocanvasmm-2.0, intel-vaapi-driver-shaders, ldc, libaws-signature4-perl, libcdio-paranoia, libemail-address-xs-perl, libjs-jquery-file-upload, libmath-utils-perl, libosmo-abis, libosmocore, libsavitar, libsignal-protocol-c, lr, mate-window-applets, node-ms, openjdk-10, phast, pspg, python-daphne, r-cran-cardata, r-cran-cvst, r-cran-forcats, r-cran-gower, r-cran-guerry, r-cran-haven, r-cran-lava, r-cran-nortest, r-cran-rcpproll, r-cran-readr, r-cran-tidyselect, ros-geometry2, shoogle, snapd-glib, sphinx-intl, tang, ulfius & webapps-metainfo.

I additionally filed 2 RC bugs against libsavitar & fdm-materials for incomplete debian/copyright files.

Jonathan Dowland: distribution-wide projects in Debian

30 November, 2017 - 23:36

A few weeks ago, Lars and I had a discussion: wouldn't it be nice if, across the whole of Debian, applications stopped writing to arbitrary dot-files in your home directory (e.g. ~/.someapp) and instead abided by the XDG Base Directory Specification?
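The convention the specification asks for is simple to follow; here is a minimal sketch of the lookup an application would do (the someapp name is hypothetical):

```shell
# Per the XDG Base Directory Specification, configuration lives under
# $XDG_CONFIG_HOME, falling back to ~/.config when the variable is
# unset or empty -- instead of a bespoke ~/.someapp dot-directory.
config_home="${XDG_CONFIG_HOME:-$HOME/.config}"
echo "$config_home/someapp/config.ini"
```

The spec defines similar fallbacks for data (`XDG_DATA_HOME`, defaulting to `~/.local/share`) and caches (`XDG_CACHE_HOME`, defaulting to `~/.cache`), so a compliant application never needs to clutter the top level of the home directory.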

Assuming that there was Debian-wide consensus that this was a good idea, in theory it could be achieved. The main problem would be that many of the upstream authors of the software we package would not accept the change. Consequently, Debian would be left carrying the patches.

We generally try to remain as close to upstream's code as possible and shy away from carrying too many patches in too many packages. The ideal lifecycle for a patch is for it to be accepted upstream. Patches are a burden for packagers, and we don't have enough packagers or packager time (or both).

Contrast this to OpenBSD. I know very little about the BSDs, and what little I do know I've mostly gleaned from the excellent blog posts by Ted Unangst, but my understanding is that when they import some software into their main project, their mental model appears to be closer to that of a fork of the upstream software than the way Debian (and most Linux distributions) operate. The entire OpenBSD Operating System is in a single (CVS!) version control repository. When some software is imported to the core Operating System, that software's source is copied into place within the tree and from that point forward is managed as part of the whole.

(OpenBSD also has a separate source code collection called "ports", which is managed in a different manner, much closer to a Linux distribution's notion of packaging, but I won't cover it further here.)

If they decided that a distribution-wide project was worthy, they would have no hesitation in making that change across all the software in their Operating System. Their focus is on consistency and they seem to maintain that focus rather than thinking about packages in relative isolation.

Although I do appreciate the practical problem of Debian managing a lot of patches, I am sometimes envious of OpenBSD's approach and corresponding perspective. Although I really do not miss CVS.

Thanks to Ryan Freeman for proof-reading this blog post. The remaining errors are mine.

Enrico Zini: Qt Creator cross-platform development in Stretch: chroot automation

30 November, 2017 - 15:35

I wrote a tool to automate the creation and maintenance of Qt cross-build environments built from packages in Debian Stretch.

It makes it possible to:

  • Do cross-architecture development with Qt Creator, including remote debugging on the target architecture
  • Compile using native compilers and cross-compilers, avoiding the slowdown of running the compilers inside an emulator
  • Leverage all of Debian as a development environment, using existing Debian packages for cross-build-dependencies
Getting started
# Creates an armhf environment under the current directory
$ sudo cbqt ./armhf --create --verbose
2017-11-30 14:09:23,715 INFO armhf: Creating /home/enrico/lavori/truelite/system/cross/armhf
2017-11-30 14:14:49,887 INFO armhf: Configuring cross-build environment

# Get information about an existing chroot.
# Note: the output is machine-parsable yaml
$ cbqt ./armhf --info
name: armhf
path: ./armhf
arch: armhf
arch_triplet: arm-linux-gnueabihf
exists: true
configured: true
issues: []

# Create a qmake wrapper for this environment
$ sudo ./cbqt ./armhf --qmake -o /usr/local/bin/qmake-armhf

# Install the build-depends you need
# Note: :arch is added automatically to package names if no arch is explicitly
#       specified
$ sudo ./cbqt ./armhf --install libqt5svg5-dev libmosquittopp-dev qtwebengine5-dev
Building packages

To build a package, use the qmake wrapper generated with cbqt --qmake instead of the normal qmake:

$ qmake-armhf -makefile
$ make
arm-linux-gnueabihf-g++ … -I../armhf/usr/include/arm-linux-gnueabihf/… -I../armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/arm-linux-gnueabihf -o browser.o browser.cpp
/home/enrico/…/armhf/usr/bin/moc …
arm-linux-gnueabihf-g++ … -Wl,-rpath-link,/home/enrico/…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,/home/enrico/…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,/home/enrico/…/armhf/usr/lib/ -o program browser.o … -L/home/enrico/…/armhf/usr/lib/arm-linux-gnueabihf …
Building in Qt Creator

Configure a new Kit in Qt Creator:

  1. Tools/Options, then Build & Run, then Kits, then Add
  2. Name: armhf (or anything you like)
  3. Qt version: choose the one autodetected from /usr/local/bin/qmake-armhf
  4. In Compilers, add a GCC C compiler with path arm-linux-gnueabihf-gcc, and a GCC C++ compiler with path arm-linux-gnueabihf-g++
  5. Choose the newly created compilers in the kit
  6. Dismiss the dialog with "OK": the new kit is ready

Now you can choose the default kit to build and run locally, and the armhf kit for remote cross-development.

Where to get it



This has been done as part of my work with Truelite.


Creative Commons License. The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.