Planet Debian


intrigeri: Contribute your skills to Debian in Paris, May 13-14 2017

4 March, 2017 - 23:33

(There is a French translation of this announcement.)

Join us in Paris, on May 13-14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).

Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian.

The upcoming version of Debian, called Stretch, will be released later this year. We need you to help us make it awesome :)

Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated.

Here's what we will be doing:

  • We will test the upcoming version of Debian and gather all kinds of feedback.
  • We will identify problems about graphics and design in Debian, and solve some of them.
  • We will triage bug reports that are blocking the release of the upcoming version of Debian.
  • Debian package maintainers will fix some of these bugs.
Goals and principles

Before diving into the exact content of this event, a few words from the organization team.

This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet.

We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend.

We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic. Attending this event requires reading and respecting our Code of Conduct, which sets the standards of behaviour for the whole event, including communication (public and private) before, during, and after it. There will be a team ready to act if you feel you have been or are being harassed or made uncomfortable by an attendee.

We believe that money should not be a blocker for contributing to Debian. We will sponsor travel and find a place to sleep for as many enthusiastic attendees as possible. The space where this event will take place is wheelchair accessible. We are trying to organize translation into sign language (probably French Sign Language). Vegan food should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best to accommodate them.

What we will be doing

There will be a number of stations: physical spaces, each dedicated to people with a given set of skills and hosted by someone who is ready to make that space welcoming and productive.

A few stations are already organized, and are described below. If you want to host a station yourself, or would like us to organize another one, please let us know. For example, you may want to assess the state of Debian Stretch for a specific field of interest, be it audio editing, office work, network auditing or whatever you are doing with Debian :)

Test the upcoming version of Debian

We will test Debian Stretch and gather feedback. We are particularly interested in:

  • feedback about support for universal access technologies, such as screen readers;
  • feedback about the state of translations into your language;
  • the top 3 things you like or dislike most in the current version of Debian; the top 3 things you like or dislike most in the upcoming version of Debian;
  • general feelings about your experience with Debian!

Experienced Debian contributors will be ready to relay this feedback to the relevant teams so it is put to good use. Hypra collaborators will be there to bring a focus on universal access technologies.

Design and graphics

Truth be told, Debian lacks people who are good at design and graphics. There is definitely a good amount of low-hanging fruit that can be tackled in a weekend, either in Debian per se, in upstream projects, or in Debian derivatives.

This station will be hosted by Juliette Belin. She designed the themes for the last two versions of Debian.

Triage Release Critical bugs

Bugs flagged as Release Critical are blocking the release of the upcoming version of Debian. To fix them, it helps to make sure the bug report documents the up-to-date status of the bug, and of its resolution. One does not need to be a programmer to do this work! For example, you can try and reproduce bugs in software you use... or in software you will discover. This helps package maintainers better focus their work.

This station will be hosted by Solveig. She has experience in bug triaging, in Debian and elsewhere. Cyril Brulebois, a long-time Debian developer, will provide support and advice upon request.

Fix Release Critical bugs

There will be a Bug Squashing Party.

Where? When? How to register?

See the event page for the exact address and time.

Please register by the end of March if you want to attend this event!

If you have any questions, please get in touch with us.

Markus Koschany: My Free Software Activities in February 2017

4 March, 2017 - 00:11

Here is my monthly report covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

We have reached the end of Stretch’s development cycle, a phase called full freeze. That means packages may only migrate to Testing, aka Stretch, after approval by the release team. Changes must be minimal and only address important or release-critical bugs. This is usually the time when I stop uploading new upstream releases to unstable to avoid any disruptions. Of course there are exceptions, but if you are unsure, best practice is to use experimental instead. A lot of RC bugs are still open and affect the next release. In February I was able to close five of them and triage two more.

Debian Games
  • I packaged new upstream releases of Bullet (2.86 and 2.86.1), a 3D physics engine, after I was informed by Debian’s OpenMW maintainer (who is also one of the upstream developers) that this would fix a couple of issues for them.
  • Debian Bug #848063 – ri-li: FTBFS randomly (Impossible d’initialiser SDL: Couldn’t open X11 display): I usually note bug fixes very briefly but this one deserves some extra information. Apparently ri-li randomly failed to build from source on the bug reporter’s build system. The error message indicated that an X11 display could not be opened. For those who wonder why an X11 display is required to build a package from source: ri-li is a game and comes with a small development program, MakeDat, to build the data files from source. The program is only needed at build time but it requires some SDL functions to work properly. During the compilation step MakeDat tries to initialize SDL, and it requires an X11 display to do that. Ri-Li uses the xvfb-run wrapper to create a virtual X server environment and then executes MakeDat. I hadn’t needed to touch the package for more than two years and, needless to say, ri-li had always worked so far. I agreed that this was probably a regression in one of ri-li’s dependencies. I immediately suspected xvfb and the xvfb-run script as the most likely cause of the build failures, and after some investigation on the Internet I uploaded a new revision using the “-a” switch for xvfb-run. Unfortunately that didn’t work as expected. On the other hand, I noticed that the package built fine on the official buildd network for all release architectures, not to mention on my own system. I decided that severity important would be the appropriate severity level for this issue, because the majority of users were unaffected and the claim that the package failed to build 99% of the time was simply wrong.

    So much for the prologue. The bug reporter disagreed with the bug severity and insisted on making #848063 release critical again. Since nobody in the Games Team wanted to do that, and there were more people in a similar situation who disagreed with such a move, a thread was started on the debian-devel mailing list. I stayed away from it, mainly because I had already participated in the same discussions before, where I got the impression that concerns were simply ignored. Other people also responded well and expressed my views, for instance here, here and here.

    In my opinion Debian is more than just an operating system and “not an academic exercise”. I really do think that a package which fails to build from source is a bug and should be fixed, but not every FTBFS is release critical; that’s why we have, for example, release architectures and ports. We already make distinctions, and we don’t support every possible hardware configuration. If a package FTBFS on my laptop because 64 MB of RAM or a 6 GB hard disk don’t cut it anymore, I’m not going to file RC bugs against the package in question; I’ll try again with a slightly more powerful machine. RC bugs are a big hammer, and we should be really considerate when we file them, because as a consequence, if we can’t fix them in time, the package will not be part of the next stable release or will even be removed from Debian. We certainly don’t have a shortage of bugs, and if there is disagreement we should make case-by-case decisions and not blindly act “by the book”. Threatening to escalate bugs to Debian’s Technical Committee isn’t helpful either. I am not saying that random build failures should be ignored. There are tests which are designed to fail 50% of the time; obviously they are not very useful for Debian. But we also have many tests that check for real-life situations, which require a specific amount of memory and disk space. I think it is a shame that we have to disable those tests, or even the whole test suite, if they work locally and on the official buildd network but not in a custom build environment. I fear we don’t make Debian better but instead “verschlimmbessern” it (improve it for the worse). Last but not least, bug #848063 was never about a single- vs. multi-core CPU issue; even the bug reporter agreed with this statement, but apparently a lot of people who commented on debian-devel never fully read the bug report or followed it closely enough.

Debian Java

Debian LTS

This was my twelfth month as a paid contributor and I have been paid to work 13 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 6 February until 13 February I was in charge of our LTS frontdesk. I triaged security issues in mp3splt, spice, gnome-keyring, irssi, gtk-vnc, php5, openpyxl, postfixadmin, sleekxmpp, mcabber, psi-plus, vim, mupdf, netpbm-free and libplist.
  • DLA-820-1. Issued a security update for viewvc fixing 1 CVE.
  • DLA-823-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-825-1. Issued a security update for spice fixing 2 CVEs.
  • DLA-823-2. Issued a regression update for tomcat7.
  • DLA-834-1. Issued a security update for phpmyadmin fixing 1 CVE.
  • DLA-835-1. Issued a security update for cakephp fixing 1 CVE.
  • DLA-840-1. Issued a security update for libplist fixing 2 CVEs.
Non-maintainer uploads

Thanks for reading and see you next time.

Petter Reinholdtsen: Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress

3 March, 2017 - 20:50

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.

A fresh PDF edition of the book in A4 format (the final book will have smaller pages) is created every morning and is available for proofreading. If you find any errors, please visit Weblate and correct them. The state of the translation, including figures, is a useful source for those providing Norwegian Bokmål screenshots and figures.

Michal Čihař: Weblate 2.12

3 March, 2017 - 18:00

Weblate 2.12 has been released today, a few days behind schedule. It brings improved screenshot management, better search-and-replace features, and improved import. Many of the new features were already announced in a previous post, where you can find more details about them.

Full list of changes:

  • Improved admin interface for groups.
  • Added support for Yandex Translate API.
  • Improved speed of sitewide search.
  • Added project and component wide search.
  • Added project and component wide search and replace.
  • Improved rendering of inconsistent translations.
  • Added support for opening source files in local editor.
  • Added support for configuring visual keyboard with special characters.
  • Improved screenshot management with OCR support for matching source strings.
  • Default commit message now includes translation information and URL.
  • Added support for Joomla translation format.
  • Improved reliability of import across file formats.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.


Joey Hess: what I would ask my lawyers about the new Github TOS

3 March, 2017 - 02:35

The Internet saw Github's new TOS yesterday and collectively shrugged.

That's weird.

I don't have any lawyers, but the way Github's new TOS is written, I feel I'd need to consult with lawyers to understand how it might affect the license of my software if I hosted it on Github.

And the license of my software is important to me, because it is the legal framework within which my software lives or dies. If I didn't care about my software, I'd be able to shrug this off, but since I do it seems very important indeed, and not worth taking risks with.

If I were looking over the TOS with my lawyers, I'd ask these questions...

4 License Grant to Us

This seems to be saying that I'm granting an additional license to my software to Github. Is that right or does "license grant" have some other legal meaning?

If the Free Software license I've already licensed my software under allows for everything in this "License Grant to Us", would that be sufficient, or would my software still be licenced under two different licences?

There are violations of the GPL that can revoke someone's access to software under that license. Suppose that Github took such an action with my software, and their GPL license was revoked. Would they still have a license to my software under this "License Grant to Us" or not?

"Us" is actually defined earlier as "GitHub, Inc., as well as our affiliates, directors, subsidiaries, contractors, licensors, officers, agents, and employees". Does this mean that if someone say, does some brief contracting with Github, that they get my software under this license? Would they still have access to it under that license when the contract work was over? What does "affiliates" mean? Might it include other companies?

Is it even legal for a TOS to require a license grant? Don't license grants normally involve some action on the licensor's part, like signing a contract or writing a license down? All I did was loaded a webpage in a browser and saw on the page that by loading it, they say I've accepted the TOS. (I then set about removing everything from Github.)

Github's old TOS was not structured as a license grant. What reasons might they have for structuring this TOS in such a way?

Am I asking too many questions only 4 words into this thing? Or not enough?

Your Content belongs to you, and you are responsible for Content you post even if it does not belong to you. However, we need the legal right to do things like host it, publish it, and share it. You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.

If this is a software license, the wording seems rather vague compared with other software licenses I've read. How much wiggle room is built into that wording?

What are the chances that, if we had a dispute and this came before a judge, Github's lawyers would be able to find a creative reading of this that makes "do things like" include whatever they want?

Suppose that my software is javascript code or gets compiled to javascript code. Would this let Github serve up the javascript code for their users to run as part of the process of rendering their website?

That means you're giving us the right to do things like reproduce your content (so we can do things like copy it to our database and make backups); display it (so we can do things like show it to you and other users); modify it (so our server can do things like parse it into a search index); distribute it (so we can do things like share it with other users); and perform it (in case your content is something like music or video).

Suppose that Github modified my software, does not distribute the modified version, but converts it to javascript code and distributes that to their users to run as part of the process of rendering their website. If my software is AGPL licensed, they would be in violation of that license, but doesn't this additional license allow them to modify and distribute my software in such a way?

This license does not grant GitHub the right to sell your Content or otherwise distribute it outside of our Service.

I see that "Service" is defined as "the applications, software, products, and services provided by GitHub". Does that mean at the time I accept the TOS, or at any point in the future?

If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not?

If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license they want granted to my software?

5 License Grant to Other Users

Any Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of your Content in repositories they control).

Let's say that company Foo does something with my software that violates its GPL license and the license is revoked. So they no longer are allowed to copy my software under the GPL, but it's there on Github. Does this "License Grant to Other Users" give them a different license under which they can still copy my software?

The word "fork" has a particular meaning on Github, which often includes modification of the software in a repository. Does this mean that other users could modify my software, even if its regular license didn't allow them to modify it or had been revoked?

How would this use of the platform-specific term "fork" be interpreted in this license if it were being analyzed in a courtroom?

If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.

This paragraph seems entirely innocuous. So, what does your keen lawyer mind see in it that I don't?

How sure are you about your answers to all this? We're fairly sure we know how well the GPL holds up in court; how well would your interpretation of all this hold up?

What questions have I forgotten to ask?

And finally, the last question I'd be asking my lawyers:

What's your bill come to? That much? Is using Github worth that much to me?

Jonathan McDowell: Rational thoughts on the GitHub ToS change

3 March, 2017 - 01:13

I woke this morning to Thorsten claiming the new GitHub Terms of Service could require the removal of Free software projects from it. This was followed by joeyh removing everything from github. I hadn’t actually been paying attention, so I went looking for some sort of summary of whether I should be worried and ended up reading the actual ToS instead. TL;DR version: No, I’m not worried and I don’t think you should be either.

First, a disclaimer. I’m not a lawyer. I have some legal training, but none of what I’m about to say is legal advice. If you’re really worried about the changes then you should engage the services of a professional.

The gist of the concerns around GitHub’s changes are that they potentially circumvent any license you have applied to your code, either converting GPL licensed software to BSD style (and thus permitting redistribution of binary forms without source) or making it illegal to host software under certain Free software licenses on GitHub due to being unable to meet the requirements of those licenses as a result of GitHub’s ToS.

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. There are sadly too many people who upload code there without a license, meaning that technically no one can do anything with it. Don’t do this people; make sure that any project you put on GitHub has some sort of license attached to it (don’t write your own - it’s highly likely one of Apache/BSD/GPL will suit your needs) so people know whether they can make use of it or not. “I don’t care” is not a valid reason not to do this.

Section D, relating to user generated content, is the one causing the problems. It’s possibly easiest to walk through each subsection in order.

D1 says GitHub don’t take any responsibility for your content; you make it, you’re responsible for it, they’re not accepting any blame for harm your content does nor for anything any member of the public might do with content you’ve put on GitHub. This seems uncontentious.

D2 reaffirms your ownership of any content you create, and requires you to only post 3rd party content to GitHub that you have appropriate rights to. So I can’t, for example, upload a copy of ‘Friday’ by Rebecca Black.

Thorsten has some problems with D3, where GitHub reserve the right to remove content that violates their terms or policies. He argues this could cause issues with licenses that require unmodified source code. This seems to be alarmist, and also applies to any random software mirror. The intent of such licenses is in general to ensure that the pristine source code is clearly separate from 3rd party modifications. Removal of content that infringes GitHub’s T&Cs is not going to cause an issue.

D4 is a license grant to GitHub, and I think forms part of joeyh’s problems with the changes. It affirms the content belongs to the user, but grants rights to GitHub to store and display the content, as well as make copies such as necessary to provide the GitHub service. They explicitly state that no right is granted to sell the content at all or to distribute the content outside of providing the GitHub service.

This term would seem to be the minimum necessary for GitHub to ensure they are allowed to provide code uploaded to them for download, and provide their web interface. If you’ve actually put a Free license on your code then this isn’t necessary, but from GitHub’s point of view I can understand wanting to make it explicit that they need these rights to be granted. I don’t believe it provides a method of subverting the licensing intent of Free software authors.

D5 provides more concern to Thorsten. It seems he believes that the ability to fork code on GitHub provides a mechanism to circumvent copyleft licenses. I don’t agree. The second paragraph of this subsection limits the license granted to the user to be the ability to reproduce the content on GitHub - it does not grant them additional rights to reproduce outside of GitHub. These rights, to my eye, enable the forking and viewing of content within GitHub but say nothing about my rights to check code out and ignore the author’s upstream license.

D6 clarifies that if you submit content to a GitHub repo that features a license you are licensing your contribution under these terms, assuming you have no other agreement in place. This looks to be something that benefits projects on GitHub receiving contributions from users there; it’s an explicit statement that such contributions are under the project license.

D7 confirms the retention of moral rights by the content owner, but states they are waived purely for the purposes of enabling GitHub to provide service, as stated under D4. In particular, this right is revocable, so in the event they do something you don’t like you can instantly remove all of their rights. Thorsten is more worried about the ability to remove attribution and thus breach CC-BY or some BSD licenses, but GitHub’s whole model is providing attribution for changesets and tracking such changes over time, so it’s hard to see exactly where the service falls down on ensuring the provenance of content is clear.

There are reasons to be wary of GitHub (they’ve taken a decentralised revision control system and made a business model around being a centralised implementation of it, and they store additional metadata such as PRs that aren’t as easily extracted), but I don’t see any indication that the most recent changes to their Terms of Service are something to worry about. The intent is clearly to provide GitHub with the legal basis they need to provide their service, rather than to provide a means for them to subvert the license intent of any Free software uploaded.

Antoine Beaupré: A short history of password hashers

2 March, 2017 - 21:45

These are notes from my research that led to the publication of the password hashers article. This article is more technical than the previous ones and compares the various cryptographic primitives and algorithms used in the software I have reviewed. The criteria for inclusion in this list are fairly vague: I mostly included a password hasher if it was significantly different from previous implementations in some way, and I have included all the major ones I could find as well.

The first password hashers

Nic Wolff claims to be the first to have written such a program, all the way back in 2003. Back then the hashing algorithm was MD5; Wolff has since updated the algorithm to use SHA-1 and still maintains his webpage for public use. Another ancient but unrelated implementation is the pwdhash software from Stanford University's Applied Cryptography group. That implementation was published in 2004 and, unfortunately, was never updated: it still uses MD5 as a hashing algorithm, but at least it uses HMAC to generate tokens, which makes the use of rainbow tables impractical. Those implementations are the simplest password hashers: the inputs are simply the site URL and a password. So the algorithms are, basically, for Wolff's:

token = base64(SHA1(password + domain))

And for Stanford's PwdHash:

token = base64(HMAC(MD5, password, domain))
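
The two formulas above can be sketched in Python with nothing but the standard library. This is a toy illustration of the constructions, not the actual code of either tool; the real implementations also truncate and post-process the output to satisfy site password policies:

```python
import base64
import hashlib
import hmac


def wolff_token(password: str, domain: str) -> str:
    """Nic Wolff's scheme: base64(SHA1(password + domain))."""
    digest = hashlib.sha1((password + domain).encode()).digest()
    return base64.b64encode(digest).decode()


def pwdhash_token(password: str, domain: str) -> str:
    """PwdHash-style scheme: base64(HMAC(MD5, password, domain)),
    keyed with the password so rainbow tables don't apply."""
    digest = hmac.new(password.encode(), domain.encode(), hashlib.md5).digest()
    return base64.b64encode(digest).decode()
```

Note how the HMAC variant keys the hash with the password instead of simply concatenating, which is what defeats precomputed rainbow tables.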

Another unrelated implementation that is still around is SuperGenPass, a bookmarklet created around 2007. It originally used MD5 as well, but now also supports SHA-512, although the output is still limited to 24 characters like MD5 (which needlessly limits the entropy of the resulting password), and it still defaults to MD5 with too few rounds (10, when key derivation recommendations are generally around 10,000, so that it is slower to bruteforce).

Note that Chris Zarate, the SuperGenPass author, actually credits Nic Wolff as the inspiration for his implementation. SuperGenPass is still in active development and is available for the browser (as a bookmarklet) or mobile (as a webpage). SuperGenPass allows you to modify the password length, and also to add an extra profile secret, which is mixed into the password and generates a personalized identicon, presumably to prevent phishing. This also introduces an interesting protection, the profile-specific secret, only found later in Password Hasher Plus. So the SuperGenPass algorithm looks something like this:

token = base64(SHA512(password + profileSecret + ":" + domain, rounds))
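
A rough Python equivalent of that construction might look like the following. This is illustrative only: the exact encoding between rounds, the truncation, and the parameter names are my assumptions, not SuperGenPass's actual code (which also enforces password-policy rules on the result):

```python
import base64
import hashlib


def sgp_like_token(password: str, profile_secret: str, domain: str,
                   rounds: int = 10, length: int = 24) -> str:
    """Iterated SHA-512 over password + profileSecret + ":" + domain,
    truncated to the configured output length."""
    data = (password + profile_secret + ":" + domain).encode()
    for _ in range(rounds):
        data = hashlib.sha512(data).digest()
    return base64.b64encode(data).decode()[:length]
```

The default of 10 rounds mirrors the low default criticized above; a sensible deployment would raise it by several orders of magnitude.
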
The Wijjo Password Hasher

Another popular implementation is the Wijjo Password Hasher, created around 2006. It was probably the first shipped as a browser extension, which greatly improved the security of the product as users didn't have to continually download the software on the fly. Wijjo's algorithm also improved on the above algorithms, as it uses HMAC-SHA-1 instead of plain SHA-1 or HMAC-MD5, which makes it harder to recover the plaintext. Password Hasher allows you to set different password policies (use digits, punctuation, mixed case, special characters, and password length) and saves the site names it uses for future reference. It also happens that the Wijjo Password Hasher, in turn, took its inspiration from a different project, hashapass, created in 2006 and also based on HMAC-SHA-1. Indeed, a hashapass password "can easily be generated on almost any modern Unix-like system using the following command line pattern":

echo -n parameter \
| openssl dgst -sha1 -binary -hmac password \
| openssl enc -base64 \
| cut -c 1-8

So the algorithm here is obviously:

token = base64(HMAC(SHA1, password, domain + ":" + counter))[:8]

... although in the case of Password Hasher, there is a special routine that takes the token and inserts random characters in locations determined by the sum of the values of the characters in the token.
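The openssl pipeline above translates almost directly to Python's standard library. This sketch reproduces the hashapass recipe (Password Hasher's extra character-insertion routine is left out):

```python
import base64
import hashlib
import hmac


def hashapass_token(password: str, parameter: str) -> str:
    """Python equivalent of:
    echo -n parameter | openssl dgst -sha1 -binary -hmac password \
                      | openssl enc -base64 | cut -c 1-8
    """
    digest = hmac.new(password.encode(), parameter.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()[:8]
```

Cutting the base64 output to 8 characters keeps passwords typeable but discards most of the HMAC's entropy, a trade-off the later tools make configurable.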

Password Hasher Plus

Years later, in 2010, Eric Woodruff ported the Wijjo Password Hasher to Chrome and called it Password Hasher Plus. Like the original Password Hasher, the "plus" version also keeps those settings in the extension and uses HMAC-SHA-1 to generate the password, as it is designed to be backwards-compatible with the Wijjo Password Hasher. Woodruff did add one interesting feature: a profile-specific secret key that gets mixed in to create the security token, like what SuperGenPass does now. Stealing the master password is therefore not enough to generate tokens anymore. This solves one security concern with Password Hasher: a hostile page could watch your keystrokes, steal your master password, and use it to derive passwords on other sites. Having a profile-specific secret key that is not accessible to the site's JavaScript works around that issue, but typing the master password directly in the password field, while convenient, is just a bad idea, period. The final algorithm looks something like:

token = base64(HMAC(SHA1, password, base64(HMAC(SHA1, profileSecret, domain + ":" + counter))))

Honestly, that seems rather strange, but it's what I read from the source code, which nowadays is only available after decompressing the extension. I would have expected the simpler version:

token = base64(HMAC(SHA1, HMAC(SHA1, profileSecret, password), domain + ":" + counter))

The idea there would be to "hide" the master password from brute-force attacks as early as possible... but maybe the two constructions are equivalent.

Regardless, Password Hasher Plus then takes the token and applies the same special character insertion routine as the Password Hasher.
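For comparison, both constructions can be written out as follows (a sketch; the function names and my reading of the decompressed source are mine, not Password Hasher Plus code):

```python
import base64
import hashlib
import hmac

def hmac_sha1_b64(key: bytes, msg: bytes) -> bytes:
    """HMAC-SHA1 then base64, the building block of both variants."""
    return base64.b64encode(hmac.new(key, msg, hashlib.sha1).digest())

def token_as_read(password: bytes, profile_secret: bytes, param: bytes) -> bytes:
    # What the decompressed source appears to do: derive an inner token
    # from the profile secret and the site parameter, then HMAC that
    # inner token with the master password.
    inner = hmac_sha1_b64(profile_secret, param)
    return hmac_sha1_b64(password, inner)

def token_expected(password: bytes, profile_secret: bytes, param: bytes) -> bytes:
    # The simpler variant: fold the master password into the key first,
    # then derive the site token from the strengthened key.
    key = hmac.new(profile_secret, password, hashlib.sha1).digest()
    return hmac_sha1_b64(key, param)
```

Both produce a 28-character base64 token from a 20-byte SHA-1 digest; whether they are equivalent from a security standpoint is the open question raised above.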

LessPass

Last year, Guillaume Vincent, a French self-described "humanist and scuba diving fan", released the LessPass extension for Chrome, Firefox and Android. LessPass introduces several interesting features. It is probably the first to include a command-line version. It also uses a more robust key derivation algorithm (PBKDF2) and takes the username on the site into account, allowing multi-account support. The original release (version 1) used only 8192 rounds, which is now considered too low. In the bug report, it was interesting to note that LessPass couldn't follow the usual practice of running the key derivation for 1 second to determine the number of rounds needed, as the results need to be deterministic.

At first glance, the LessPass source code seems clear and easy to read, which is always a good sign, but of course, the devil is in the details. One key feature of Password Hasher Plus that is missing here is the profile-specific seed, although, as far as I know, it should be impossible for a hostile web page to steal keystrokes from a browser extension.

The algorithm then gets a little more interesting:

entropy = PBKDF2(SHA256, masterPassword, domain + username + counter, rounds, length)

entropy is then used to pick characters to match the chosen profile.
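A rough sketch of this derivation with Python's hashlib (the salt encoding and the character-picking loop here are simplifications for illustration, not LessPass's actual routines):

```python
import hashlib

# Illustrative charset; LessPass builds its charset from the chosen profile.
CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def lesspass_like(master_password: str, domain: str, username: str,
                  counter: int = 1, rounds: int = 100_000,
                  length: int = 16) -> str:
    # Salt mixes the site, the account and the counter, so the same
    # master password yields distinct tokens per site and per account.
    salt = (domain + username + format(counter, "x")).encode()
    entropy = hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                                  salt, rounds)
    # Simplified picking: treat the entropy as a big integer and take
    # successive remainders modulo the charset size.
    n = int.from_bytes(entropy, "big")
    out = []
    for _ in range(length):
        n, idx = divmod(n, len(CHARSET))
        out.append(CHARSET[idx])
    return "".join(out)
```

The important property is visible even in the sketch: the rounds count is fixed, since deriving it dynamically (as password storage schemes usually do) would break determinism.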

Regarding code readability, I quickly got confused by the PBKDF2 implementation: SubtleCrypto.importKey() doesn't seem to support PBKDF2 in the API, yet that's how it is used there... Is it just something to extract key material? We see later what looks like a more standard AES-based PBKDF2 implementation, but this code just looks strange to me. It could be my unfamiliarity with newer Javascript coding patterns, however.

There is also a LessPass-specific character-picking routine that is not based on base64 and differs from the original Password Hasher algorithm.

Master Password

A review of password hashers would hardly be complete without mentioning the Master Password and its elaborate algorithm. While the applications surrounding the project are not as refined (there is no web browser plugin and the web interface can't be easily turned into a bookmarklet), the algorithm has been well developed. Of all the password managers reviewed here, Master Password uses one of the strongest key derivation algorithms out there, scrypt:

key = scrypt( password, salt, cost, size, parallelization, length )
salt = "com.lyndir.masterpassword" + len(username) + name
cost = 32768
size = 8
parallelization = 2
length = 64
entropy = hmac-sha256(key, "com.lyndir.masterpassword" + len(domain) + domain + counter )
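A sketch of this two-stage derivation with Python's hashlib, assuming the length and counter fields are packed as 4-byte big-endian integers (an assumption on my part, not taken from the Master Password specification):

```python
import hashlib
import hmac
import struct

SCOPE = b"com.lyndir.masterpassword"

def master_key(name: str, password: str) -> bytes:
    # Slow, memory-hard step: run once per session.
    salt = SCOPE + struct.pack(">I", len(name)) + name.encode()
    # n/r/p/dklen mirror cost/size/parallelization/length above; maxmem
    # must be raised because n=32768, r=8 needs slightly more than the
    # 32 MiB OpenSSL allows by default.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=32768, r=8, p=2, dklen=64,
                          maxmem=64 * 1024 * 1024)

def site_entropy(key: bytes, domain: str, counter: int = 1) -> bytes:
    # Fast step: run once per site password, from the in-memory key.
    seed = (SCOPE + struct.pack(">I", len(domain)) + domain.encode()
            + struct.pack(">I", counter))
    return hmac.new(key, seed, hashlib.sha256).digest()
```

The split matches the author's rationale quoted below: the expensive scrypt call happens once, and each per-site token is then a cheap HMAC-SHA256 over the large derived key.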

Master Password then uses one of 6 sets of "templates" specially crafted to be "easy for a user to read from a screen and type using a keyboard or smartphone" and "compatible with most site's password policies", our "transferable" criteria defined in the first passwords article. For example, the default template mixes vowels, consonants, numbers and symbols, while carefully avoiding visually similar characters like O and 0 or i and 1 (although it does mix 1 and l, oddly enough).

The main strength of Master Password seems to be the clear definition of its algorithm (although it does not give out OpenSSL command-line examples...), which led to its reuse in another application called freepass. The Master Password app also doubles as a stateful password manager...

Other implementations

I have also considered including easypasswords, which uses PBKDF2-HMAC-SHA1, in my list of recommendations. I discovered only recently that its author wrote a detailed review of many more password hashers and scored them according to their relative strength. In the end, I covered LessPass instead, since the design is very similar and LessPass does seem a bit more usable. Covering LessPass also allowed me to show the contrast and issues around algorithm changes, for example.

It is also interesting to note that the EasyPasswords author has criticized the Master Password algorithm quite severely:

[...] scrypt isn’t being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is combined with the site name via SHA-256 hashing then. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key — as long as the username doesn’t change this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary[...]

During a discussion with the Master Password author, he outlined that "there is nothing "easy" about brute-force deriving a 64-byte key through a SHA-256 algorithm." SHA-256 is used in the last stage because it is "extremely fast". scrypt is used as a key derivation algorithm to generate a large secret and is "intentionally slow": "we don't want it to be easy to reverse the master password from a site password". "But it's unnecessary for the second phase because the input to the second phase is so large. A master password is tiny, there are only a few thousand or million possibilities to try. A master key is 8^64, the search space is huge. Reversing that doesn't need to be made slower. And it's nice for the password generation to be fast after the key has been prepared in-memory so we can display site passwords easily on a mobile app instead of having to lock the UI a few seconds for every password."

Finally, I considered covering Blum's Mental Hash (also covered here and elsewhere). This is an algorithm that can basically be run by the human brain directly. It's not for the faint of heart, however: if I understand it correctly, it requires remembering a password that is basically a string of 26 digits, plus computing modular arithmetic on the outputs. Needless to say, most people don't do modular arithmetic every day...

Guido Günther: Debian Fun in February 2017

2 March, 2017 - 17:15
Debian LTS

February marked the 22nd month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated which I used by:

  • the 2nd half of a LTS frontdesk week
  • an update to so we don't ignore undetermined issues anymore
  • testing the bind9 update prepared by Thorsten Alteholz
  • testing of apache2 packages prepared by Jonas Meurer and Antoine Beaupré
  • triaging of QEMU CVEs and fixing most of them, resulting in DLA-842-1
Other Debian stuff
  • libvirt and gtk-vnc uploads to fix CVEs in unstable and stretch
  • A git-buildpackage upload to unstable to unbreak importing large histories with import-dsc
  • Some CSS improvements for git-buildpackage to (hopefully) make the manual easier to read.
Some other Free Software activities

Nothing exciting, just some minor fixes at several places:

Martín Ferrari: Prometheus in Jessie(bpo) and Stretch

2 March, 2017 - 14:31

Just over 2 years ago, I started packaging Prometheus for Debian. It was a daunting task, mainly because the Golang ecosystem is so young, almost none of its dependencies were packaged, and upstream practices are very different from Debian's.

Today I want to announce that this is complete.

The main part of the Prometheus stack is available in Debian testing, which very soon will become the Stretch release:

These are available for a bunch of architectures (sadly, not all of them), and are the most important building blocks to deploy Prometheus in your network.

I have also finished preparing backports for all the required dependencies, and jessie-backports has a complete Prometheus stack too!

Adding to these, the archive already has the client libraries for Go, Python, and Django; and a bunch of other exporters. Except for the Postgres exporter, most of these are going to be in the Stretch release, and I plan to prepare backports for Jessie too:

Note that not all of these have been packaged by me, luckily other Debianites are also working on Prometheus packaging!

I am confident that Prometheus is going to become one of the main monitoring tools in the near future, and I am very happy that Debian is the first distribution to offer official packages for it. If you are interested, there is still lots of work ahead. Patches, bug reports, and co-maintainers are welcome!


Hideki Yamane: Debian docker image is smaller than Oracle Linux 7

2 March, 2017 - 12:03
From Oracle Developers Blog,
We've just introduced a new base Oracle Linux 7-slim Docker image that's a minuscule 114MB. Ok, so it's not quite as small as Alpine, but it's now the smallest base image of any of the major distributions. Check out the numbers in the graph to see just how small.

It's not fair, Oracle. If you talk about a -slim image for Oracle Linux, then you should do the same for other distros, too.
$ sudo docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
debian              jessie-slim         232f5cd0c765        2 days ago          80 MB
debian              jessie              978d85d02b87        2 days ago          123 MB
oraclelinux         7-slim              f005b5220b05        8 days ago          114 MB

Debian's jessie-slim image is 80 MB, smaller than the oraclelinux 7-slim image.

And, we're going to go Debian 9 "Stretch"
debian              stretch-slim        02ee50628785        2 days ago          57.1 MB
debian              stretch             6f4d77d39d73        2 days ago          100 MB

The stretch image is smaller than Oracle's -slim image by default, and the stretch-slim image is half that size. Nice, isn't it? :)

Petter Reinholdtsen: Unlimited randomness with the ChaosKey?

2 March, 2017 - 02:50

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box, you need the Linux kernel version 4.1 or later. I tested on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small one-liner to test it: it first prints the current entropy level, drains /dev/random, and then prints the entropy level once per second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
0+1 records in
0+1 records out
28 bytes copied, 0.000264565 s, 106 kB/s

The entropy level increases by 3-4 every second. In such a case, any application requiring random bits (like an HTTPS-enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
0+1 records in
0+1 records out
104 bytes copied, 0.000487647 s, 213 kB/s

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Neil McGovern: GNOME ED update – Week 9

2 March, 2017 - 01:38

As mentioned in my previous post, I’ll be posting regularly with an update on what I’ve been up to as the GNOME Executive Director, and highlighting some cool stuff around the project!

If you find these dull, they’re tagged with [update-post] so hopefully you can filter them out. And dear folks – if this annoys you too much, I can change the feed category to turn this off if it’s not interesting enough :) However, if you like these or have any suggestions for things you’d like to see here, let me know.


One of the areas we’ve been working on is the sponsorship brochure for GUADEC and GNOME.Asia. Big thanks to Allan Day and the Engagement team for helping out here – and I’m pleased to say it’s almost finished! In the meantime, if you or your company are interested in sponsoring us, please drop a mail to!


A fairly lengthy and wide-ranging interview with myself has been published at It covers a bit of my background (although mistakenly says I worked for Collabora Productivity, rather than Collabora Limited!), and looks at a few different areas on where I see GNOME and how it sits within the greater GNU/Linux movement – I cover “some uncomfortable subjects around desktop Linux”. It’s well worth a read.

Release update

The GNOME 3.24 release is happening soon! As such, the release team announced the string freeze. If you want to help out with how much has been translated into your language, then is a good place to start. I’d like to give a shout out to the translation teams in particular too. Sometimes people don’t realise how much work goes into this, and it’s fantastic that we’re able to reach so many more users with our software.

Google Summer of Code

GNOME is now announced as a mentoring organisation for Google Summer of Code! There are some great ideas for Summer (Well, in the Northern hemisphere anyway) projects, so if you want to spend your time coding on Free Software, and get paid for it, why not sign up as a student?

Brett Parker: Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

2 March, 2017 - 01:35

So, more because I was intrigued than anything else, I've got a pi3 from Mythic Beasts; they're supplied with IPv6-only connectivity, and the file storage is NFS over a private v4 network. The proxy will happily redirect requests to either http or https on the Pi, but (without turning on the Proxy Protocol) this results in your logs showing the remote addresses of the proxy servers, which is not entirely useful.

So, first step first, we get our RPi and we make sure that we can login to it via ssh (I'm nearly always on a v6 connection anyways, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. For apache2, I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, now we deploy haproxy infront of it, the basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy-protocol'd setup from the proxy servers, and you can still talk directly to the Pi over IPv6. You're not yet logging the right remote IPs, but we're a step closer. Next, enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat:

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://<your-pi-name>, e.g.

Next up, you'll want something like dehydrated - I grabbed the packaged version from debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!), once you've got dehydrated installed, you'll probably want to tweak it a bit, I have some magic extra files that I use, I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.









case $action in
    deploy_cert)
        cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
        chmod 640 /etc/ssl/private/srwpi.pem
        ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/ files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:4443 and enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

Joey Hess: removing everything from github

2 March, 2017 - 01:24

Github recently drafted an update to their Terms Of Service. The new TOS is potentially very bad for copylefted Free Software: it potentially neuters copyleft entirely, so that GPL-licensed software hosted on Github would have an implicit BSD-like license. I'll leave the full analysis to the lawyers, but see Thorsten's analysis.

I contacted Github about this weeks ago, and received only an anodyne response. The Free Software Foundation was also talking with them about it. It seems that Github doesn't care or has some reason to want to effectively neuter copyleft software licenses.

I am deleting my repositories from Github at this time. If you used the Github mirrors for git-annex, propellor, ikiwiki, etckeeper, myrepos, click on the links for the non-Github repos ( also has mirrors). (There's an oncoming severe weather event here, so it may take some time before I get everything deleted and cleaned up.)

[Some commits to git-annex were pushed to Github this morning by an automated system, but I had NOT accepted their new TOS at that point, and explicitly do NOT give Github or anyone any rights to git-annex not granted by its GPL and AGPL licenses.]

See also: PDF of Github TOS that can be read without being forced to first accept Github's TOS

Brett Parker: Ooooooh! Shiny!

1 March, 2017 - 22:12

Yay! So, it's a year and a bit on from the last post (eeep!), and we get the news of the Psion Gemini - I wants one, that looks nice and shiny and just the right size to not be inconvenient to lug around all the time, and far better for ssh usage than the onscreen keyboard on my phone!

Junichi Uekawa: I was wondering why my Termux is so slow.

1 March, 2017 - 14:32
I was wondering why my Termux is so slow. ARM might be slow but not this slow. Maybe I have too high expectations on interactivity of a local development environment.

Paul Wise: FLOSS Activities February 2017

1 March, 2017 - 09:37
Changes Issues Review Administration
  • Debian: do the samhain dance, ask for new local contacts at one site, ask local admins to reset one machine, powercycle 2 dead machines, redirect 1 user to the support channels, redirect 1 user to a service admin, redirect 1 spam reporter to the right mechanisms, investigate mail logs for a missing bug report, ping bugs-search.d.o service admin about moving off glinka and remove data, poke cdimage-search.d.o service admin about moving off glinka, update a cron job on denis.d.o for the rename of to dehydrated, debug planet.d.o issue and remove stray cron job lock file, check if ftp is used on a couple of security.d.o mirrors, discuss storage upgrade for LeaseWeb for snapshot.d.o/deriv.d.n/etc, investigate SSD SMART error and ignore the unknown attribute, ask 9 users to restart their processes, investigate apt-get update failure in nagios, swapoff/swapon a swap file to drain it, restart/disable some failed services, help restore the backup server, debug stretch /dev/log issue,
  • Debian QA: deploy merged PTS/tracker patches,
  • Debian wiki: answer 1 IP-blocked VPN user, pinged 1 user on IRC about their bouncing mail, disabled 4 accounts due to bouncing mail, redirect 1 person to documentation/lists, whitelist 5 email addresses, forward 1 password reset token, killed 1 spammer account, reverted 1 spammer edit,
  • Debian mentors: security upgrades, check which email a user signed up with
  • Openmoko: security upgrades, daemon restarts, reboot
Debian derivatives
  • Turned off the census cron job because it ran out of disk space
  • Update Armbian sources.list
  • Ping siduction folks about updating their sources.list
  • Start a discussion about DebConf17
  • Notify the derivatives based on jessie or older that stretch is frozen
  • Invite Rebellin Linux (again)

The libesedb Debian backport was sponsored by my employer. All other work was done on a volunteer basis.

Thorsten Glaser: New GitHub Terms of Service r̲e̲q̲u̲i̲r̲e̲ removing many Open Source works from it

1 March, 2017 - 07:00

The new Terms of Service of GitHub became effective today, which is quite problematic — there was a review phase, but the problems were not answered, and, while the language is somewhat changed from the draft, they became effective immediately.

Now, the new ToS are not so bad that one immediately must stop using their service for disagreement, but it’s important that certain content may no longer legally be pushed to GitHub. I’ll try to explain which is affected, and why.

I’m mostly working my way backwards through section D, as that’s where the problems I identified lie, and because this is from easier to harder.

Note that using a private repository does not help, as the same terms apply.

Anything requiring attribution (e.g. CC-BY, but also BSD, …)

Section D.7 requires the person uploading content to waive any and all attribution rights. Ostensibly “to allow basic functions like search to work”, which I can even believe, but, for a work the uploader did not create completely by themselves, they can’t grant this licence.

The CC licences are notably bad because they don't permit sublicencing, but even so, anything requiring attribution can, in almost all cases, not be "written or otherwise, created or uploaded by our Users". This is fact, and the exceptions are few.

Anything putting conditions on the right to “use, display and perform” the work and, worse, “reproduce” (all Copyleft)

Section D.5 requires the uploader to grant all other GitHub users…

  • the right to “use, display and perform” the work (with no further restrictions attached to it) — while this (likely — I didn’t check) does not exclude the GPL, many others (I believe CC-*-SA) are affected, and…
  • the right to "reproduce your Content solely on GitHub as permitted through GitHub's functionality", with no further restrictions attached; this is a killer for, I believe, any and all licences falling into the "copyleft" category.

Note that section D.4 is similar, but granting the licence to GitHub (and their successors); while this is worded much more friendly than in the draft, this fact only makes it harder to see if it affects works in a similar way. But that doesn’t matter since D.5 is clear enough.

This means that any and all content under copyleft licences is also no longer welcome on GitHub.

Anything requiring integrity of the author’s source (e.g. LPPL)

Some licences are famous for requiring people to keep the original intact while permitting patches to be piled on top; this is actually permissible for Open Source, even though annoying, and the most common LaTeX licence is rather close to that. Section D.3 says any (partial) content can be removed — though keeping a PKZIP archive of the original is a likely workaround.

But what if I just fork something under such a licence?

Only “continuing to use GitHub” constitutes accepting the new terms. This means that repositories from people who last used GitHub before March 2017 are excluded.

Even then, the new terms likely only apply to content uploaded in March 2017 or later (note that git commit dates are unreliable, you have to actually check whether the contribution dates March 2017 or later).

And then, most people are likely unaware of the new terms. If they upload content for which they don't hold the appropriate rights (waivers to attribution and copyleft/share-alike clauses), it's plainly illegal, and it also makes your upload of that content, or a derivative thereof, no more legal.

Granted, people who, in full knowledge of the new ToS, share any “User-Generated Content” with GitHub on or after 1ˢᵗ March, 2017, and actually have the appropriate rights to do that, can do that; and if you encounter such a repository, you can fork, modify and upload that iff you also waive attribution and copyleft/share-alike rights for your portion of the upload. But — especially in the beginning — these will be few and far between (even more so taking into account that GitHub is, legally spoken, a mess, and they don’t even care about hosting only OSS / Free works).

Conclusion (Fazit)

I’ll be starting to remove any such content of mine, such as the source code mirrors of jupp, which is under the GNU GPLv1, now and will be requesting people who forked such repositories on GitHub to also remove them. This is not something I like to do but something I am required to do in order to comply with the licence granted to me by my upstream. Anything you’ve found contributed by me in the meantime is up for review; ping me if I forgot something. (mksh is likely safe, even if I hereby remind you that the attribution requirement of the BSD-style licences still applies outside of GitHub.)

(Pet peeve: why can’t I “adopt a licence” with British spelling? They seem to require overseas barbarian spelling.)

The others

Atlassian Bitbucket has similar terms (even worse actually; I looked at them to see whether I could mirror mksh there, and turns out, I can’t if I don’t want to lose most of what few rights I retain when publishing under a permissive licence). Gitlab seems to not have such, but requires you to indemnify them… YMMV. I think I’ll self-host the removed content.

Chris Lamb: Free software activities in February 2017

1 March, 2017 - 05:09

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Submitted a number of pull requests to the Django web development framework:
    • Add a --mode=unified option to the "diffsettings" management command. (#8113)
    • Fix a crash in setup_test_environment() if ALLOWED_HOSTS is a tuple. (#8101)
    • Use Python 3 "shebangs" now that the master branch is Python 3 only. (#8105)
    • URL namespacing warning should consider nested namespaces. (#8102)
  • Created an experimental patch against the Python interpreter in order to find reproducibility-related assumptions in dict handling in arbitrary Python code. (#29431)
  • Filed two issues against dh-virtualenv, a tool to package Python virtualenv environments in Debian packages:
    • Fix "upgrage-pip" typo in usage documentation. (#195)
    • Missing DH_UPGRADE_SETUPTOOLS equivalent for dh_virtualenv (#196)
  • Fixed a large number of spelling corrections in Samba, a free-software re-implementation of the Windows networking protocols.
  • Reviewed and merged a pull request by @jheld for django-slack (my library to easily post messages to the Slack group-messaging utility) to support per-message backends and channels. (#63)
  • Created a pull request for django-two-factor-auth, a complete Two-Factor Authentication (2FA) framework for projects using the Django web development framework to drop use of the @lazy_property decorator to ensure compatibility with Django 1.11. (#195)
  • Filed, triaged and eventually merged a change from @evgeni to fix an autopkgtest-related issue in, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change) (#41)
  • Submitted a pull request against social-core — a library to allow Python applications to authenticate against third-party web services such as Facebook, Twitter, etc. — to use the more-readable X if Y else Z construction over Y and X or Z. (#44)
  • Filed an issue against freezegun (a tool to make it easier to write Python tests involving times) to report that dateutils was missing from requirements.txt. (#173)
  • Submitted a pull request against the Hypothesis "QuickCheck"-like testing framework to make the build reproducible. (#440)
  • Fixed an issue reported by @davidak in trydiffoscope (a web-based version of the diffoscope in-depth and content-aware diff utility) where the maximum upload size was incorrectly calculated. (#22)
  • Created a pull request for the Mars Simulation Project to remove some embedded timestamps from the changelog.gz and mars-sim.1.gz files in order to make the build reproducible. (#24)
  • Filed a bug against the cpio archiving utility to report that the testsuite fails when run in the UTC +1300 timezone. (Thread)
  • Submitted a pull request against the "pnmixer" system-tray volume mixer in order to make the build reproducible. (#153)
  • Sent a patch to Testfixtures (a collection of helpers and mock objects that are useful when writing Python unit tests or doctests) to make the build reproducible. (#56)
  • Created a pull request for the "Cloud" Sphinx documentation theme in order to make the output reproducible. (#22)
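One of the items above, the social-core change, replaces the legacy `Y and X or Z` idiom with a conditional expression. As a minimal illustration (with hypothetical helper names, not code from social-core) of why the newer form is not just more readable but also correct:

```python
# The pre-Python-2.5 idiom `Y and X or Z` silently returns the wrong
# value whenever X is falsy (0, '', None, ...): `Y and X` evaluates to
# X, which then short-circuits into the `or` branch.
def pick_legacy(condition, if_true, if_false):
    return condition and if_true or if_false

def pick(condition, if_true, if_false):
    return if_true if condition else if_false

assert pick(True, "", "fallback") == ""                  # correct
assert pick_legacy(True, "", "fallback") == "fallback"   # wrong result
assert pick(False, "x", "z") == pick_legacy(False, "x", "z") == "z"
```

The two forms only agree when the "true" value is guaranteed to be truthy, which is why the conditional expression is the safer default.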
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
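The verification step itself is simple: if a build is a pure function of its inputs, independent rebuilds produce bit-for-bit identical artifacts whose checksums anyone can compare. A toy sketch of that idea (the `build` function here is a stand-in, not any real toolchain):

```python
# Reproducible-builds consensus in miniature: two independent "builds"
# of the same source must hash identically. A reproducible build is a
# pure function of its inputs: no timestamps, build paths or randomness.
import hashlib

def build(source: bytes) -> bytes:
    # Stand-in for a compilation step.
    return source[::-1]

def checksum(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

first = checksum(build(b"hello.c"))
second = checksum(build(b"hello.c"))  # an independent rebuild
assert first == second  # third parties can corroborate the result
```

In practice the comparison is done over real package contents, and tools such as diffoscope explain *where* two builds differ when the checksums do not match.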

(I have been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)

This month I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Add a machine-readable JSON output format. (Closes: #850791).
    • Add an --exclude option. (Closes: #854783).
    • Show results from debugging packages last. (Closes: #820427).
    • Extract archive members using an auto-incrementing integer avoiding the need to sanitise filenames. (Closes: #854723).
    • Apply --max-report-size to --text output. (Closes: #851147).
    • Specify <html lang="en"> in the HTML output. (re. #849411).
  • Bug fixes:
    • Fix errors when comparing directories with non-directories. (Closes: #835641).
    • Device and RPM fallback comparisons require xxd. (Closes: #854593).
    • Fix tests that call xxd on Debian Jessie due to change of output format. (Closes: #855239).
    • Add missing Recommends for comparators. (Closes: #854655).
    • Importing submodules (i.e. parent.child) will attempt to import parent. (Closes: #854670).
    • Correct logic of module_exists ensuring we correctly skip the debian.deb822 tests when python3-debian is not installed. (Closes: #854745).
    • Clean all temporary files in the signal handler thread instead of attempting to pass the exception back to the main thread. (Closes: #852013).
    • Fix behaviour of setting report maximums to zero (i.e. no limit).
  • Optimisations:
    • Don't uselessly run xxd(1) on non-directories.
    • No need to track libarchive directory locations.
    • Optimise create_limited_print_func.
  • Tests:
    • When comparing two empty directories, ensure that the mtime of the directory is consistent to avoid non-deterministic failures.
    • Ensure we can at least import the "deb_fallback" and "rpm_fallback" modules.
    • Add test for symlink differing in destination.
    • Add tests for --progress, --status-fd and profiling output options as well as the Deb{Changes,Buildinfo,Dsc} and RPM fallback comparisons.
    • Add get_data and @skip_unless_module_exists test helpers.
    • Mark impossible-to-reach code to improve test coverage.


My experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them also saw the following changes:

  • Drop raw_text fields now that we've moved these to Amazon S3.
  • Drop storage of Installed-Build-Depends and subsequently-orphaned Binary package instances to recover diskspace.


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Print log entry when fixing a file. (Closes: #777239).
  • Run our entire testsuite in autopkgtests, not just the first test. (Closes: #852517).
  • Don't test for stat(2)'s blksize and block attributes. (Closes: #854937).
  • Use error() over a "manual" die().
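strip-nondeterminism itself is written in Perl, but the class of problem it fixes is easy to demonstrate in any language: gzip, for example, embeds a modification time in its header, so compressing identical data at different times yields different bytes. A small Python illustration (not strip-nondeterminism's actual code):

```python
# gzip's header contains a 4-byte MTIME field, a classic source of
# non-reproducibility. Pinning the mtime (e.g. to 0) makes the output
# deterministic again, which is what normalisation tools do.
import gzip
import io

def gz(data: bytes, mtime: int) -> bytes:
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(data)
    return buf.getvalue()

# Same payload, different mtimes: the compressed bytes differ.
assert gz(b"changelog", mtime=1) != gz(b"changelog", mtime=2)
# Same payload, pinned mtime: bit-for-bit identical output.
assert gz(b"changelog", mtime=0) == gz(b"changelog", mtime=0)
```

The changelog.gz timestamp fix for the Mars Simulation Project mentioned earlier is exactly this kind of normalisation.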

Debian LTS

This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 817-1 for libphp-phpmailer, correcting a local file disclosure vulnerability where insufficient parsing of HTML messages could be used by an attacker to read a local file.
  • Issued DLA 826-1 for wireshark, fixing a denial of service vulnerability where a malformed NATO Ground Moving Target Indicator Format ("STANAG 4607") capture file could cause memory exhaustion or an infinite loop.
Uploads

  • python-django (1:1.11~beta1-1) — New upstream beta release.
  • redis (3:3.2.8-1) — New upstream release.
  • gunicorn (19.6.0-11) — Use ${misc:Pre-Depends} to populate Pre-Depends for dpkg-maintscript-helper.
  • dh-virtualenv (1.0-1~bpo8+1) — Upload to jessie-backports.

I sponsored the following uploads:

I also performed the following QA uploads:

  • dh-kpatches (0.99.36+nmu4) — Make kernel builds reproducible.

Finally, I made the following non-maintainer uploads:

  • cpio (2.12+dfsg-3) — Remove rmt.8.gz to prevent a piuparts error.
  • dot-forward (1:0.71-2.2) — Correct an FTBFS; we don't install anything to /usr/sbin, so use GNU Make's $(wildcard ..) over the shell's own * expansion.
Debian bugs filed

I also filed 15 FTBFS bugs against binaryornot, chaussette, examl, ftpcopy, golang-codegangsta-cli, hiro, jarisplayer, libchado-perl, python-irc, python-stopit, python-stopit, python-stopit, python-websockets, rubocop & yash.

FTP Team

As a Debian FTP assistant I ACCEPTed 116 packages: autobahn-cpp, automat, bglibs, bitlbee, bmusb, bullet, case, certspotter, checkit-tiff, dash-el, dash-functional-el, debian-reference, el-x, elisp-bug-hunter, emacs-git-messenger, emacs-which-key, examl, genwqe-user, giac, golang-github-cloudflare-cfssl, golang-github-docker-goamz, golang-github-docker-libnetwork, golang-github-go-openapi-spec, golang-github-google-certificate-transparency, golang-github-karlseguin-ccache, golang-github-karlseguin-expect, golang-github-nebulouslabs-bolt, gpiozero, gsequencer, jel, libconfig-mvp-slicer-perl, libcrush, libdist-zilla-config-slicer-perl, libdist-zilla-role-pluginbundle-pluginremover-perl, libevent, libfunction-parameters-perl, libopenshot, libpod-weaver-section-generatesection-perl, libpodofo, libprelude, libprotocol-http2-perl, libscout, libsmali-1-java, libtest-abortable-perl, linux, linux-grsec, linux-signed, lockdown, lrslib, lua-curses, lua-torch-cutorch, mariadb-10.1, mini-buildd, mkchromecast, mocker-el, node-arr-exclude, node-brorand, node-buffer-xor, node-caller, node-duplexer3, node-ieee754, node-is-finite, node-lowercase-keys, node-minimalistic-assert, node-os-browserify, node-p-finally, node-parse-ms, node-plur, node-prepend-http, node-safe-buffer, node-text-table, node-time-zone, node-tty-browserify, node-widest-line, npd6, openoverlayrouter, pandoc-citeproc-preamble, pydenticon, pyicloud, pyroute2, pytest-qt, pytest-xvfb, python-biomaj3, python-canonicaljson, python-cgcloud, python-gffutils, python-h5netcdf, python-imageio, python-kaptan, python-libtmux, python-pybedtools, python-pyflow, python-scrapy, python-scrapy-djangoitem, python-signedjson, python-unpaddedbase64, python-xarray, qcumber, r-cran-urltools, radiant, repo, rmlint, ruby-googleauth, ruby-os, shutilwhich, sia, six, slimit, sphinx-celery, subuser, swarmkit, tmuxp, tpm2-tools, vine, wala & x265.

I additionally filed 8 RC bugs against packages that had incomplete debian/copyright files against: checkit-tiff, dash-el, dash-functional-el, libcrush, libopenshot, mkchromecast, pytest-qt & x265.

