Planet Debian


Antoine Beaupré: January 2018 report: LTS

3 February, 2018 - 05:25

I have already published a yearly report which covers all of 2017 but also some of my January 2018 work, so I'll try to keep this short.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I was happy to switch to the new Git repository for the security tracker this month. It feels like some operations (namely pull / push) are a little slower, but others, like commits or log inspection, are much faster. So I think it is a net win.


I did some work on trying to clean up a situation with the jQuery package, which I explained in more detail in a long post. It turns out there are multiple databases out there that track security issues in web development environments (such as JavaScript, Ruby or Python) but do not follow the normal CVE tracking process. This means that Debian had a few vulnerabilities in its jQuery packages that were not tracked by the security team, in particular three that were tracked only in those databases (CVE-2012-6708, CVE-2015-9251 and CVE-2016-10707). The resulting discussion was interesting and is worth reading in full.

A more worrying aspect is that this problem is not limited to flashy new web frameworks. Ben Hutchings estimated that almost half of the Linux kernel vulnerabilities are not tracked by CVE. The consensus seems to be that we want to keep following the CVE process, and MITRE has been helpful in distributing this work by letting other entities, called CVE Numbering Authorities (CNAs), issue their own CVEs. After contacting Snyk, it turns out that they have started the process of becoming a CNA and are trying to make this part of their workflow, so that's a good sign.

LTS meta-work

I've done some documentation and architecture work on the LTS project itself, mostly around updating the wiki with current practices.


I've done a quick security update of OpenSSH for LTS, which resulted in DLA-1257-1. Unfortunately, after a discussion with the security researcher who published that CVE, it turned out that this was only a "self-DoS": the NEWKEYS attack would only make the SSH client terminate its own connection, and therefore not impact the rest of the server. One has to wonder, in that case, why this was issued a CVE at all: presumably the vulnerability could be leveraged somehow, but I did not look deeply enough into it to figure that out.

Hopefully the patch won't introduce a regression: I tested it summarily and it didn't seem to cause issues at first glance.

Hardlinks attacks

An interesting attack (CVE-2017-18078) was discovered against systemd, where the "tmpfiles" feature could be abused to bypass filesystem access restrictions through hardlinks. The trick is that the attack is only possible if kernel hardening (specifically fs.protected_hardlinks) is turned off. That feature has been available in the Linux kernel since the 3.6 release, but was turned off by default in 3.7. In the commit message, Linus Torvalds explained that the change was breaking some userland applications, which is a huge taboo in Linux, and recommended that distros configure this at boot instead. Debian took the reverse approach: Hutchings issued a patch which restores the more secure default. But this means users of custom kernels are still vulnerable to this issue.

But, more importantly, this affects more than systemd. With the hardening turned off, the vulnerability can also be triggered with plain old chown, by running a simple command like this:

chown -R non-root /path/owned/by/non-root

I didn't realize this, but hardlinks share permissions and ownership: the two names refer to the same inode, so if you change permissions on file a that's hardlinked to file b, both files have the new permissions. This is especially nasty if users can hardlink to critical files like /etc/passwd or setuid binaries, which is why the hardening was introduced in the first place.
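A quick way to see this for yourself (a sketch run in a scratch directory, assuming GNU coreutils):

```shell
# Hardlinks are two names for the same inode, so permission bits
# and ownership are shared between them.
dir=$(mktemp -d)
cd "$dir"
touch a
ln a b             # b is now a hardlink to a (same inode)
chmod 600 a        # change permissions through one name...
stat -c '%a' b     # ...and the other name reflects it: prints 600
```

The same sharing applies to ownership, which is what makes a recursive chown over attacker-controlled directories dangerous.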

In Debian, this is especially an issue in maintainer scripts, which often call chown -R on arbitrary, non-root directories. Daniel Kahn Gillmor reported this to the Debian security team all the way back in 2011, but it didn't get traction back then. He has now opened Debian bug #889066 to at least enable a warning in lintian, and an issue was also opened on colord (Debian bug #889060) as an example, but many more packages are vulnerable. Again, this only applies if the hardening is turned off.

Normally, systemd is supposed to turn that hardening on, which would protect custom kernels, but this was disabled in Debian. Anyway, Debian still supports non-systemd init systems (although those users have mostly migrated to Devuan by now), so the fix wouldn't be complete. I have therefore filed Debian bug #889098 against procps (which owns /etc/sysctl.conf and related files) to try and fix the issue more broadly there.
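For reference, the hardening can be enabled persistently, independent of the init system, with a sysctl fragment like the following (the file name here is my own choice, not a Debian default):

```ini
# /etc/sysctl.d/99-protect-links.conf (hypothetical file name)
# Restrict hardlink creation to files the user owns or can write to,
# and restrict following symlinks in world-writable sticky directories.
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
```

Running `sysctl --system` as root (or rebooting) applies the settings.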

And to be fair, this was very explicitly mentioned in the jessie release notes, so those people without the protection kind of get what they deserve here...


Lastly, I did a fairly trivial update of the p7zip package, which resulted in DLA-1268-1. The patch was sent upstream and went through a few iterations, including coordination with the security researcher.

Unfortunately, the latter wasn't willing to share the proof of concept (PoC) so that we could test the patch. We are therefore trusting the researcher that the fix works, which is too bad because they do not trust us with the PoC...

Other free software work

I probably did more stuff in January that wasn't documented in the previous report. But I don't know if it's worth my time going through all this. Expect a report in February instead!

Have a happy new year and all that stuff.

Joachim Breitner: The magic “Just do it” type class

3 February, 2018 - 02:01

One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about that.
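For comparison, here is what a hand-written, purely mechanical implementation of that signature looks like (my own sketch, not the plugin's actual output):

```haskell
-- A mechanical implementation of foo: thread the environment r through,
-- short-circuit on Left, and pair up the intermediate result with the
-- final one.
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
foo f g = \r -> case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e  -> Left e
    Right b -> Right (a, b)
```

There is nothing creative in it: every step is dictated by the types.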

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt
What is this justDoIt?
*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?

Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

data Result a b = Failure a | Success b
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:

    • Preprocessors
    • Template Haskell
    • Generic programming together with type-level computation (e.g. generic-lens)
    • GHC Core-to-Core plugins

    In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.

  • To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?

A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there needs to be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.
  • It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.

If someone wants to pick it up from here, that’d be great!

I have seen this before…

Indeed, the idea is not new.

Most famously in the Haskell world is certainly Lennart Augustsson’s Djinn tool, which creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library djinn-ghc, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT than to implement type inference).

Lennart Spitzner’s exference is a much more sophisticated tool that also takes library API functions into account.

In the Scala world, Sergei Winitzki very recently presented the pretty neat curryhoward library, which uses Scala macros. He seems to have some good ideas about ordering solutions by likely desirability.

And in Idris, Joomy Korkut has created hezarfen.

Joey Hess: improving powertop autotuning

2 February, 2018 - 22:33

I'm wondering about improving powertop's auto-tuning. Currently the situation is that, if you want to tune your laptop's power consumption, you can run powertop and turn on all the tunables and try it for a while to see if anything breaks. The breakage might be something subtle.

Then after a while you reboot and your laptop is using too much power again until you remember to run powertop again. This happens a half dozen or so times. You then automate running powertop --auto-tune or individual tuning commands on boot, probably using instructions you find in the Arch wiki.

Everyone has to do this separately, which is a lot of duplicated and rather technical effort for users, while developers are left with a lot of work to manually collect information, like Hans de Goede is doing now for enabling PSR by default.

To improve this, powertop could come with a service file to start it on boot, read a config file, and apply tunings if enabled.
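Sketching that idea, a minimal unit (hypothetical; powertop does not ship this today, and the file names are my own invention) might look like:

```ini
# /etc/systemd/system/powertop-autotune.service (hypothetical)
[Unit]
Description=Apply powertop power-saving tunings at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
```

A config file and a disable flag, as described above, would then gate whether the tunings are actually applied.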

There could be a simple GUI to configure it, where the user can report when it's causing a problem. In case the problem prevents booting, there would need to be a boot option that disables the autotuning too.

When the user reports a problem, the GUI could optionally walk them through a bisection to find the problematic tuning, which would probably take only 4 or so steps.

Information could be uploaded anonymously to a hardware tunings database. Developers could then use that to find and whitelist safe tunings. Powertop could also query it to avoid tunings that are known to cause problems on a given laptop.

I don't know if this is a new idea, but if it's not been tried before, it seems worth working on.

Wouter Verhelst: Day four of the pre-FOSDEM Debconf Videoteam sprint

2 February, 2018 - 17:04

Day four was a day of pancakes and stew. Oh, and some video work, too.


Did more documentation review. She finished SReview documentation, got started on the documentation of the examples of our ansible repository.


Finished splitting out the ansible configuration from the ansible code repository. The code repository now includes an example configuration that is well documented for getting started, whereas our production configuration lives in a separate repository.


Spent much time on the debconf website, mostly working on a new upstream release of wafer.

He also helped review Kyle's documentation, and spent some time together with Tzafrir debugging our ansible test setup.


Worked on documentation, and did a test run of the ansible repository. Found and fixed issues that cropped up during that.


Spent much time trying to figure out why SReview was not doing what he was expecting it to do. Side note: I hate video codecs. Things are working now, though, and most of the fixes were implemented in a way that makes it reusable for other conferences.

There's one more day coming up today. Hopefully I won't forget to blog about it tonight.

Daniel Pocock: Everything you didn't know about FSFE in a picture

2 February, 2018 - 16:51

As FSFE's community begins exploring our future, I thought it would be helpful to start with a visual guide to the current structure.

All the information I've gathered here is publicly available but people rarely see it in one place, hence the heading. There is no suggestion that anything has been deliberately hidden.

The drawing at the bottom includes Venn diagrams to show the overlapping relationships clearly and visually. For example, in the circle for the General Assembly, all the numbers add up to 29, the total number of people listed on the "People" page of the web site. In the circle for Council, there are 4 people in total and in the circle for Staff, there are 6 people, 2 of them also in Council and 4 of them in the GA but not council.

The financial figures on this diagram are taken from the 2016 financial summary. The summary published by FSFE is very basic, so the exact amount paid in salaries is not clear; the amount in the Staff circle probably covers a lot more than just salaries, and I feel FSFE gets good value for this money.

Some observations about the numbers:

  • The volunteers don't elect any representatives to the GA, although some GA members are also volunteers
  • From 1,700 fellowship members, only 2 are elected, holding 2 of the 29 GA seats, yet those members provide over thirty percent of the funding through recurring payments
  • Out of 6 staff, all 6 are members of the GA (6 out of 29), following a decision to accept them at the last GA meeting
  • Only the 29 people in the General Assembly are full (legal) members of the FSFE e.V. association with the right to vote on things like constitutional changes. Those people are all above the dotted line on the page. All the people below the line have been referred to by other names, like fellow, supporter, community, donor and volunteer.

If you ever clicked the "Join the FSFE" button or filled out the "Join the FSFE" form on the FSFE web site and made a payment, did you believe you were becoming a member with an equal vote? If so, one procedure you can follow is to contact the president as described here and ask to be recognized as a full member. I feel it is important for everybody who filled out the "Join" form to clarify their rights and their status before the constitutional change is made.

I have not presented these figures to create conflict between staff and volunteers. Both have an important role to play. Many people who contribute time or money to FSFE are very satisfied with the concept that "somebody else" (the staff) can do difficult and sometimes tedious work for the organization's mission and software freedom in general. As I've been elected as a fellowship representative, I feel a responsibility to ensure the people I represent are adequately informed about the organization and their relationship with it, and I hope these notes and the diagram help to fulfil that responsibility.

Therefore, this diagram is presented to the community not for the purpose of criticizing anybody but for the purpose of helping make sure everybody is on the same page about our current structure before it changes.

If anybody has time to make a better diagram it would be very welcome.

John Goerzen: How are you handling building local Debian/Ubuntu packages?

2 February, 2018 - 16:26

I’m in the middle of some conversations about Debian/Ubuntu repositories, and I’m curious how others are handling this.

How are people maintaining repos for an organization? Are you integrating them with a git/CI (github/gitlab, jenkins/travis, etc) workflow? How do packages propagate into repos? How do you separate prod from testing? Is anyone running buildd locally, or integrating with more common CI tools?

I’m also interested in how people handle local modifications of packages — anything from newer versions of C libraries to newer interpreters. Do you just use the regular Debian toolchain, packaging them up for (potentially) the multiple distros/versions that you have in production? Pin them in apt?

Just curious what’s out there.

Some Googling has so far turned up just one relevant hit: Michael Prokop’s DebConf15 slides, “Continuous Delivery of Debian packages”. Looks very interesting, and discusses jenkins-debian-glue.

Some tangentially-related but interesting items:

Edit 2018-02-02: I should have also mentioned BuildStream

Bits from Debian: Debian welcomes its Outreachy interns

2 February, 2018 - 14:30

We'd like to welcome our three Outreachy interns for this round, lasting from December 2017 to March 2018.

Juliana Oliveira is working on reproducible builds for Debian and free software.

Kira Obrezkova is working on bringing open-source mobile technologies to a new level with Debian (Osmocom).

Renata D'Avila is working on a calendar database of social events and conferences for free software developers.

Congratulations, Juliana, Kira and Renata!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentoring and outreach tasks, to the Software Freedom Conservancy's administrative support, and to the continued support of Debian's donors, who provide funding for the internships.

Debian will also participate this summer in the next round for Outreachy, and is currently applying as mentoring organisation for the Google Summer of Code 2018 programme. Have a look at the projects wiki page and contact the Debian Outreach Team mailing list to join as a mentor or welcome applicants into the Outreachy or GSoC programme.

Join us and help extend Debian!

Ritesh Raj Sarraf: Laptop Mode Tools 1.72

2 February, 2018 - 00:45

What a way to make a gift!

I'm pleased to announce the 1.72 release of Laptop Mode Tools. Major changes include the port of the GUI configuration utility to Python 3 and PyQt5, some tweaks, fixes and enhancements in current modules, the extension of the device {black,white}lists to types other than USB, and the listing of devices by their devtype attribute.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository. 

Source tarball and Fedora/SUSE RPM packages are available at:

Debian packages will be available soon in Unstable.

Mailing List:



1.72 - Thu Feb  1 21:59:24 IST 2018
    * Switch to PyQt5 and Python3
    * Add btrfs to list of filesystems for which we can set commit interval
    * Add pkexec invocation script
    * Add desktop file to upstream repo and invoke script
    * Update installer to include gui wrappers
    * Install new SVG pixmap
    * Control all available cards in radeon-dpm
    * Prefer to use the new runtime pm autosuspend_delay_ms interface
    * tolerate broken device interfaces quietly
    * runtime-pm: Make {black,white}lists work with non-USB devices
    * send echo errors to verbose log
    * Extend blacklist by device types of devtype


What is Laptop Mode Tools
Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking.
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power.

PS: This release took around 13 months. A lot of things changed for me, personally. Some life lessons learnt. Some idiots uncovered. But the best of 2017: I got married. I am hopeful of keeping work and life balanced, including time for FOSS.


Raphaël Hertzog: My Free Software Activities in January 2018

1 February, 2018 - 22:08

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

While I continue to manage the administrative side of Debian LTS, I'm taking a break from the technical work (i.e. preparing and releasing security updates). The hope is that it will help me focus more on my book, which (still) needs to be updated for stretch. In truth, this did not happen in January, but I hope to do better in the upcoming months.

Salsa and related

The switch to Salsa is a major event in our community. Last month I started with the QA team and the distro-tracker repository as an experiment. This month I took the opportunity to bring to fruition a merge between the pkg-security team and the forensics team, which I had already proposed in the past and which we postponed because it was deemed busy work for no gain. Now that both teams had to migrate anyway, it was easier to migrate everything at once under a single project.

All our repositories are now managed under the same team in Salsa. But for the mailing list we are still waiting for the new list to be created (#888136).

As part of this work, I contributed some fixes to the scripts maintained by Mehdi Dogguy. I also filed a wishlist request for a new script to make it easy to share repositories with the Debian group.

With the expected demise of alioth mailing lists, there’s some interest in having the Debian package tracker host the official maintainer email. As the central hub for most emails related to packages, it seems natural indeed. We made some progress lately on making this possible (with the downside of currently receiving duplicate emails), but that’s not really an option when you maintain many packages and want to see them grouped under the same maintainer email. Furthermore, it doesn’t allow for automatic association of a package to its maintainer team. So I implemented an email address that works for each team registered on the package tracker and that will automatically associate the package with its team. The address is just a black hole for now (not really a problem, as most automatic emails are already received through another email), but I expect to forward non-automatic mails to team members to make it useful as a way to discuss between team members.

The package tracker also learned to recognize commit mails generated by GitLab and it will now forward them to the source package whose name is matching the name of the GitLab project that generated them (see #886114).

Misc Debian stuff

Distro Tracker. I got my first two merge requests, which I reviewed and merged. One adds native HTML support for toggling action items (i.e. without JavaScript on recent browsers) and the other improves some of the messages shown by the vcswatch integration. In #886450, we discussed how to better filter build failure mails sent by the build daemons. New headers have been added.

Bug reports and patches. I forwarded and/or got moving a couple of bugs that we encountered in Kali (glibc: new data brought to #820826, raspi3-firmware: #887062, glibc: tracking down #886506 to a glibc regression affecting busybox, gr-fcdproplus: #888853 new watch file, gjs: upstream bug #33). I also needed a new feature in live-build so I filed #888507 which I implemented almost immediately (but released only in Kali because it’s not documented yet and can possibly be improved a bit further).

While doing my yearly accounting, I opened an issue on tryton and pushed a fix after approval. While running unit tests on distro-tracker, I got an unexpected warning that seems to be caused by virtualenv (see upstream issue #1120).

Debian Packaging. I uploaded zim 0.68~rc1-1 to experimental.


See you next month for a new summary of my activities.


Shirish Agarwal: webmail saga continues

1 February, 2018 - 21:53

I was pleased to see a reply from Daniel as a reaction to my post. I read and re-read the blog post a couple of times yesterday, and once more today, to question my own understanding and see if there is any way I could make life easier and simpler for myself and the people I interact with, but I am finding it somewhat of an uphill task. I will not limit myself to e-mail alone, as I feel that until we share the big picture it would remain incomplete.

Allow me to share a few observations below:

1. The first one is probably cultural in nature (I have no contextual information on whether it is specific to India or worldwide). Very early in my professional and personal life I understood that e-mails are leaky by design. By leaky I mean liable to being leaked by individuals for profit or some similar motive.

Also, e-mails are and were used as misinformation tools by companies and individuals, then and now, by quoting a subset or superset of them without the context in which they were written. While this could be construed as a straw man, I do not know any other way to put it. So the best way, at least for me, is to construct e-mails such that even if some information is leaked, I'm OK with it being in the public domain. It just hurts less. I could probably name 10-15 high-profile public outings in the last 2-3 years alone, involving millionaires and billionaires, people on whom many others rely for their livelihoods and who should have known better. Indian companies' communications often have specific clauses whereby any communication you had with them is subject to privacy, and if you share it with somebody you would be prosecuted; on the other hand, if the company leaks it, it gets a free pass.

2. Because of my own experiences I have been pretty circumspect, even slightly paranoid, about anybody promising or selling the kool-aid of total privacy. Another example, of slightly more recent vintage, which pains me even today, was a Mozilla add-on for which I had filed an RFP (Request for Package), which a person then packaged (it will probably be moved to salsa in the near future), and I thanked him/her for it. Two years later it came to light that, under the guise of protecting us from bad cookies or whatever the add-on was supposed to do, it was actually tracking us and selling this information to third parties.

This was found out by a security researcher casually auditing the code two years down the line (not Mozilla), and was then confirmed by other security researchers as well. It was a moment of anguish for me, as so many people's privacy had been invaded even though there were good intentions on my side.

It was also a bit sad, as I had assumed (perhaps incorrectly) that Debian does some automated security auditing, along with the hardening flags it uses when a package is built. This isn't to show Debian in a bad light, but to understand and acknowledge that Debian has its own shortcomings in many ways. I did hear recently that a lot of packages from Kali would make it into Debian core; hopefully some of those packages could also serve as additional tools for examining packages when they are being built.

I do know it’s a lot to ask for as Debian is a volunteer effort. I am happy to test or whichever way I can contribute to Debian if in doing so we can raise the bar for intended or unintended malicious apps. to go through. I am not a programmer but still I’m sure there might be somehow I could add strength to the effort.

3. The other part is that I don't deny that Google is intrusive. Google is intrusive not just in e-mail but in every way: every page that uses Google Analytics or the Google search spider can be used to track where you are and what you are doing. They have embedded themselves in web pages to the point that it has become almost impossible to view most web pages (some exceptions remain) without Google seeing what you are seeing. I use requestpolicy-continued to know which third-party domains are on a web page, and I see Google and some of the others almost all the time. The problem is that we also don't know how much information Google gathers. For example, even if I don't use the Google search engine, if I am researching a particular topic and 3-4 of the websites I visit use Google in any form or manner, it is easy for them to infer what information I'm looking for. That is as much of a problem for me as e-mail, if not more, and I have no solution for it. Tor and torbrowser-launcher are and were supposed to be an answer to this problem, but most big CDNs (Content Delivery Networks) are against it, so privacy remains an elusive dream there as well.

5. It becomes all the more dangerous in the mobile space, where Google's Android is effectively the only vendor. The carrier-handset locking which is prevalent in the west has also started making inroads in India. In the manufacturer-carrier-operating-system complex, such things will become more common. I have no idea about other vendors, but from what I have seen I think the majority are probably doing the same. The iPhone is also supposed to have a lot of nastiness when it comes to surveillance.

6. My main worry with Protonmail or any other vendor is whether we should just take them at face value, or whether there is some way for people around the world to be assured, and, in case things take a turn for the worse, to file a claim for damages if the terms and conditions are not met. I looked to see if I could find an answer to this question, which I asked in my previous post, but didn't find any appropriate answer in your post. The only way out I see is decentralized networks and apps, but they too leave much to be desired. Two examples of the latter: Diaspora started with the idea that I could have my profile in one pod and, if for some reason I didn't like the pod, take all the info to another pod, with all the messages, everything, in an instant. At least until a few months ago, when I tried to migrate to another pod, that feature didn't work / was still a work in progress.

Similarly, another service claimed to use decentralization, but for the last year or so I haven't been able to send a single email to another user.

I used both these examples because both are FOSS and both have considerable communities and traction built around them. Security and/or anonymity still lag behind, though, as of yet.

I hope I was able to share where I’m coming from.

Daniel Pocock: Our future relationship with FSFE

1 February, 2018 - 20:19

Below is an email that has been distributed to the FSFE community today. FSFE aims to be an open organization and people are welcome to discuss it through the main discussion group (join, thread and reply) whether you are a member or not.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site. The "No Cloud" stickers and the Public Money Public Code campaign are examples of initiatives started by FSFE - you can request free stickers and posters by filling in this form.

Dear FSFE Community,

I'm writing to you today as one of your elected fellowship representatives rather than to convey my own views, which you may have already encountered in my blog or mailing list discussions.

The recent meeting of the General Assembly (GA) decided that the annual elections will be abolished but this change has not yet been ratified in the constitution.

Personally, I support an overhaul of FSFE's democratic processes and the bulk of the reasons for this change are quite valid. One of the reasons proposed for the change, the suggestion that the election was a popularity contest, is an argument I don't agree with: the same argument could be used to abolish elections anywhere.

One point that came up in discussions about the elections is that people don't need to wait for the elections to be considered for GA membership. Matthias Kirschner, our president, has emphasized this to me personally as well: he looks at each new request with an open mind and forwards it to all of the GA for discussion. According to our constitution, anybody can write to the president at any time and request to join the GA. In practice, the president and the existing GA members will probably need to have seen some of your activities in one of the FSFE teams or local groups before accepting you as a member. I want to encourage people to become familiar with the GA membership process, to discuss it within their teams and local groups, and to think about whether they or anybody they know may be a good candidate.

According to the minutes of the last GA meeting, several new members were already accepted this way in the last year. It is particularly important for the organization to increase diversity in the GA at this time.

The response rate for the last fellowship election was lower than in previous years and there is also concern that emails don't reach everybody because of spam filters or the Google Promotions tab (if you use Gmail). If you had problems receiving emails about the last election, please consider sharing that feedback on the discussion list.

Understanding where the organization will go beyond the extinction of the fellowship representative is critical. The Identity review process, championed by Jonas Oberg and Kristi Progri, is actively looking at these questions. Please contact Kristi if you wish to participate and look out for updates about this process in emails and Planet FSFE. Kristi will be at FOSDEM this weekend if you want to speak to her personally.

I'll be at FOSDEM this weekend and would welcome the opportunity to meet with you personally. I will be visiting many different parts of FOSDEM at different times, including the FSFE booth, the Debian booth, the real-time lounge (K-building) and the Real-Time Communications (RTC) dev-room on Sunday, where I'm giving a talk. Many other members of the FSFE community will also be present, if you don't know where to start, simply come to the FSFE booth. The next European event I visit after FOSDEM will potentially be OSCAL in Tirana, it is in May and I would highly recommend this event for anybody who doesn't regularly travel to events outside their own region.

Changing the world begins with the change we make ourselves. If you only do one thing for free software this year and you are not sure what it is going to be, then I would recommend this: visit an event that you never visited before, in a city or country you never visited before. It doesn't necessarily have to be a free software or IT event. In 2017 I attended OSCAL in Tirana and the Digital-Born Media Carnival in Kotor for the first time. You can ask FSFE to send you some free stickers and posters (online request with optional donation) to give to the new friends you meet on your travels. Change starts with each of us doing something new or different and I hope our paths may cross in one of these places.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site.

Please feel free to discuss this through the FSFE discussion group (join, thread and reply)

Paul Wise: FLOSS Activities January 2018

1 February, 2018 - 07:12
Changes Issues Review Administration
  • Debian: try to regain OOB access to a host, try to connect with a hoster, restart bacula after db restart, provide some details to a hoster, add debsnap to snapshot host, debug external email issue, redirect users to support channels
  • Debian mentors: redirect to sponsors, teach someone about dput .upload files, check why a package disappeared
  • Debian wiki: unblacklist IP address, whitelist email addresses, whitelist email domain, investigate DocBook output crash
  • Initiate discussion about ingestion of more security issue feeds
  • Invite LinuxCNC to the Debian derivatives census

I renewed my support of Software Freedom Conservancy.

The Discord related uploads (harmony, librecaptcha, purple-discord) and the Debian fakeupstream change were sponsored by my employer. All other work was done on a volunteer basis.

Chris Lamb: Free software activities in January 2018

1 February, 2018 - 05:20

Here is my monthly update covering what I have been doing in the free software world in January 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have been generously awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Compare JSON files using the jsondiff module. (#888112)
    • Report differences in extended file attributes when comparing files. (#888401)
    • Show extended filesystem metadata when directly comparing two files, not just when we specify two directories. (#888402)
    • Do some fuzzy parsing to detect JSON files not named .json. [...]
  • Bug fixes:
    • Return unknown if we can't parse the readelf version number for (e.g.) FreeBSD. (#886963)
    • If the LLVM disassembler does not work, try the internal one. (#886736)
  • Misc:
    • Explicitly depend on e2fsprogs. (#887180)
    • Clarify the "Unidentified file" log message, as we did try to look the file up via the comparators first. [...]

I also fixed an issue in the "trydiffoscope" command-line client that was preventing installation on non-Debian systems (#888882).


disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Correct "explicitly" typo in disorderfs.1.txt. [...]
  • Bump Standards-Version to 4.1.3. [...]
  • Drop trailing whitespace in debian/control. [...]
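The class of issue disorderfs flushes out, where output depends on the order in which a directory happens to be read, can be sketched without FUSE at all; the file and archive names below are made up for illustration:

```shell
# Two archives of the same files, differing only in member order.
mkdir -p demo-dir
echo hello > demo-dir/a
echo world > demo-dir/b
tar -cf order1.tar demo-dir/a demo-dir/b
tar -cf order2.tar demo-dir/b demo-dir/a

# The archives hold identical data but are not byte-identical, so any
# build that embeds one of them is no longer reproducible.
cmp -s order1.tar order2.tar && echo identical || echo different
```

disorderfs makes the underlying filesystem return entries in a deliberately unstable order, so builds that depend on readdir order fail loudly instead of silently differing between machines.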


My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

In addition to this, I:

  • Published an overview of why APT does not rely solely on SSL for validating downloaded packages, as I noticed this was being asked a lot on support forums.
  • Reported a number of issues for the review service.
Patches contributed
  • dput: Suggest --force if package has already been uploaded. (#886829)
  • linux: Add link to the Firmware page on the wiki to failed to load log messages. (#888405)
  • markdown: Make markdown exit with a non-zero exit code if cannot open input file. (#886032)
  • spectre-meltdown-checker: Return a sensible exit code. (#887077)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Initial draft of a script to automatically detect when CVEs should be assigned to multiple source packages in the case of legacy renames, duplicates or embedded code copies.
  • Issued DLA 1228-1 for the poppler PDF library to fix an overflow vulnerability.
  • Issued DLA 1229-1 for imagemagick correcting two potential denial-of-service attacks.
  • Issued DLA 1233-1 for gifsicle — a command-line tool for manipulating GIF images — to fix a use-after-free vulnerability.
  • Issued DLA 1234-1 to fix multiple integer overflows in the GTK gdk-pixbuf graphics library.
  • Issued DLA 1247-1 for rsync, fixing a command-injection vulnerability.
  • Issued DLA 1248-1 for libgd2 to prevent a potential infinite loop caused by signedness confusion.
  • Issued DLA 1249-1 for smarty3 fixing an arbitrary code execution vulnerability.
  • "Frontdesk" duties, triaging CVEs, etc.
Uploads
  • adminer (4.5.0-1) — New upstream release.
  • bfs (1.2-1) — New upstream release.
  • dbus-cpp (5.0.0+18.04.20171031-1) — Initial upload to Debian.
  • installation-birthday (7) — Add e2fsprogs to Depends so it can drop Essential: yes. (#887275)
  • process-cpp:
    • 3.0.1-1 — Initial upload to Debian.
    • 3.0.1-2 — Fix FTBFS due to symbol versioning.
  • python-django (1:1.11.9-1 & 2:2.0.1-1) — New upstream releases.
  • python-gflags (1.5.1-4) — Always use SOURCE_DATE_EPOCH from the environment.
  • redis:
    • 5:4.0.6-3 — Use --clients argument to runtest to force single-threaded operation over using taskset.
    • 5:4.0.6-4 — Re-add procps to Build-Depends. (#887075)
    • 5:4.0.6-5 — Fix a dangling symlink (and thus a broken package). (#884321)
    • 5:4.0.7-1 — New upstream release.
  • redisearch (1.0.3-1, 1.0.4-1 & 1.0.5-1) — New upstream releases.
  • trydiffoscope (67.0.0) — New upstream release.

I also sponsored the following uploads:

Debian bugs filed
  • gdebi: Invalid gnome-mime-application-x-deb icon in AppStream metadata. (#887056)
  • git-buildpackage: Please make gbp clone not quieten the output by default. (#886992)
  • git-buildpackage: Please word-wrap generated changelog lines. (#887055)
  • isort: Don't install to global Python namespace. (#887816)
  • restrictedpython: Please add Homepage. (#888759)
  • xcal: Missing patches due to 00List != 00list. (#888542)

I also filed 4 bugs against packages missing patches due to incomplete quilt conversions: cernlib, geant321, mclibs & paw.

RC bugs
  • gnome-shell-extension-tilix-shortcut: Invalid date in debian/changelog. (#886950)
  • python-qrencode: Missing PIL dependencies due to use of Python 2 substvars in Python 3 package. (#887811)

I also filed 7 FTBFS bugs against lintian, netsniff-ng, node-coveralls, node-macaddress, node-timed-out, python-pyocr & sleepyhead.

FTP Team

As a Debian FTP assistant I ACCEPTed 173 packages: appmenu-gtk-module, atlas-cpp, canid, check-manifest, cider, citation-style-language-locales, citation-style-language-styles, cloudkitty, coreapi, coreschema, cypari2, dablin, dconf, debian-dad, deepin-icon-theme, dh-dlang, django-js-reverse, flask-security, fpylll, gcc-8, gcc-8-cross, gdbm, gitlint, gnome-tweaks, gnupg-pkcs11-scd, gnustep-back, golang-github-juju-ansiterm, golang-github-juju-httprequest, golang-github-juju-schema, golang-github-juju-testing, golang-github-juju-webbrowser, golang-github-posener-complete, golang-gopkg-juju-environschema.v1, golang-gopkg-macaroon-bakery.v2, golang-gopkg-macaroon.v2, harmony, hellfire, hoel, iem-plugin-suite, ignore-me, itypes, json-tricks, jstimezonedetect.js, libcdio, libfuture-asyncawait-perl, libgig, libjs-cssrelpreload, liblxi, libmail-box-imap4-perl, libmail-box-pop3-perl, libmail-message-perl, libmatekbd, libmoosex-traitfor-meta-class-betteranonclassnames-perl, libmoosex-util-perl, libpath-iter-perl, libplacebo, librecaptcha, libsyntax-keyword-try-perl, libt3highlight, libt3key, libt3widget, libtree-r-perl, liburcu, linux, mali-midgard-driver, mate-panel, memleax, movit, mpfr4, mstch, multitime, mwclient, network-manager-fortisslvpn, node-babel-preset-airbnb, node-babel-preset-env, node-boxen, node-browserslist, node-caniuse-lite, node-cli-boxes, node-clone-deep, node-d3-axis, node-d3-brush, node-d3-dsv, node-d3-force, node-d3-hierarchy, node-d3-request, node-d3-scale, node-d3-transition, node-d3-zoom, node-fbjs, node-fetch, node-grunt-webpack, node-gulp-flatten, node-gulp-rename, node-handlebars, node-ip, node-is-npm, node-isomorphic-fetch, node-js-beautify, node-js-cookie, node-jschardet, node-json-buffer, node-json3, node-latest-version, node-npm-bundled, node-plugin-error, node-postcss, node-postcss-value-parser, node-preact, node-prop-types, node-qw, node-sellside-emitter, node-stream-to-observable, node-strict-uri-encode, node-vue-template-compiler, ntl, 
olivetti-mode, org-mode-doc, otb, othman, papirus-icon-theme, pgq-node, php7.2, piu-piu, prometheus-sql-exporter, py-radix, pyparted, pytest-salt, pytest-tempdir, python-backports.tempfile, python-backports.weakref, python-certbot, python-certbot-apache, python-certbot-nginx, python-cloudkittyclient, python-josepy, python-jsondiff, python-magic, python-nose-random, python-pygerrit2, python-static3, r-cran-broom, r-cran-cli, r-cran-dbplyr, r-cran-devtools, r-cran-dt, r-cran-ggvis, r-cran-git2r, r-cran-pillar, r-cran-plotly, r-cran-psych, r-cran-rhandsontable, r-cran-rlist, r-cran-shinydashboard, r-cran-utf8, r-cran-whisker, r-cran-wordcloud, recoll, restrictedpython, rkt, rtklib, ruby-handlebars-assets, sasmodels, spectre-meltdown-checker, sphinx-gallery, stepic, tilde, togl, ums2net, vala-panel, vprerex, wafw00f & wireguard.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against: fpylll, gnome-tweaks, org-mode-doc & py-radix.

Wouter Verhelst: Day three of the pre-FOSDEM Debconf Videoteam sprint

1 February, 2018 - 01:46

This should really have been the "day two" post, but I forgot to do that yesterday, and now it's the end of day three already, so let's just do the two together for now.


Has been hacking on the opsis so we can get audio through it, but so far without much success. In addition, he's been working a bit more on documentation, as well as splitting up some data that's currently in our ansible repository into a separate one so that other people can use our ansible configuration more easily, without having to fork too much.


Did some tests on the ansible setup, and did some documentation work, and worked on a kodi plugin for parsing the metadata that we've generated.


Did some work on the DebConf website. This wasn't meant to be much, but yak shaving sucks. Additionally, he's been doing some work on the youtube uploader as well.


Did more work reviewing our documentation, and has been working on rewording some of the more awkward bits.


Spent much time on improving the SReview installation for FOSDEM. While at it, fixed a number of bugs in some of the newer code that were exposed by full tests of the FOSDEM installation. Additionally, added code to SReview to generate metadata files that can be handed to Stefano's youtube uploader.


Although he had less time yesterday than he did on monday (and apparently no time today) to sprint remotely, Pollo still managed to add a basic CI infrastructure to lint our ansible playbooks.

Abhijith PA: Swatantra17

31 January, 2018 - 20:49

It's very late, but here it goes…

Last month Thiruvananthapuram witnessed one of the biggest Free and Open Source Software conferences, Swatantra17. Swatantra is the flagship FOSS conference from ICFOSS, formerly triennial (from this edition on, the organizers have decided to hold it every 2 years). This year there were more than 30 speakers from all around the world. The event was held on 20-21 December at Mascot Hotel, Thiruvananthapuram. I was one of the community volunteers for the event and was excited from the day it was announced :).

Current Kerala Chief Minister Pinarayi Vijayan inaugurated Swatantra17. The first day's sessions started with a keynote from Software Freedom Conservancy executive director Karen Sandler. Karen talked about the safety of medical devices, like defibrillators, which run proprietary software. After that there were many parallel talks about various free software projects, technologies and tools. This edition of Swatantra focused more on art, and it was good to learn more about artists' free software stacks. The most amazing thing is that throughout the conference I met so many people from FSCI whom I previously knew only through Matrix/IRC/email.

The first day's talks ended at 6 PM. After that the Oorali band performed for us. The band is well known in Kerala because they speak out on many social and political issues, which makes them the best match for a free software conference's cultural program :). Their songs are mainly about birds, forests and freedom, and we danced to many of them.

On the last day's evening there was a kind of BoF with FSF person Benjamin Mako Hill. Halfway through, I came to know that he is also a Debian Developer :D. Unfortunately the BoF stopped as he was called away for a panel discussion. After the panel discussion all of us Debian people gathered and had a chat.

Daniel Leidert: Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics

31 January, 2018 - 19:24

With the deprecation of the old hosting platform, the subversion service hosted there will be shut down too. According to lintian, the estimated date is May 1st 2018, and there are currently more than 1500 source packages affected. In the debichem group we've used the subversion service since 2006. Our repository contains around 7500 commits made by around 20 different alioth user accounts, and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last few days preparing the Git migration: comparing different tools, checking the created repositories and testing possibilities to automate the process as much as possible. The resulting scripts can currently be found here.

Of course I began as described at the Debian Wiki. But following this guide, using git-svn and converting the tags with the script supplied under the rubric Convert remote tags and branches to local ones, gave me really weird results: the tags were pointing to the wrong commit IDs. I thought that git-svn was to blame and reported this as bug #887881. In the following mail exchange Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision tags in our repository, as shown in the following example:

$ svn log -v -r7405
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Mi, 17. Jan 2018) | 1 Zeile
Geänderte Pfade:
A /tags/shelxle/1.0.888-1 (von /unstable/shelxle:7396)
R /tags/shelxle/1.0.888-1/debian/changelog (von /unstable/shelxle/debian/changelog:7404)
R /tags/shelxle/1.0.888-1/debian/control (von /unstable/shelxle/debian/control:7403)
D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
R /tags/shelxle/1.0.888-1/debian/patches/series (von /unstable/shelxle/debian/patches/series:7402)
R /tags/shelxle/1.0.888-1/debian/rules (von /unstable/shelxle/debian/rules:7403)

[svn-buildpackage] Tagging shelxle 1.0.888-1

Looking into the git log, the tags determined by git-svn are really not in their right place in the history, even before running the script to convert the branches into real Git tags. So IMHO git-svn is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use /branch/package/, I began to look for different tools and found svn-all-fast-export, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt with this tool was so successful (not to speak of how fast it is) that I want to describe it in more detail. Maybe it will prove useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)

Step 1: Setting up a local subversion mirror

First I suggest setting up a local copy of the subversion repository to be migrated, kept in sync with the remote repository. This can be achieved using the svnsync command. There are several howtos for this, so I won't describe this step here. Please check out this guide. In my case I have such a copy in /srv/svn/debichem.

Step 2: Creating the identity map

svn-all-fast-export needs at least two files to work. One is the so called identity map. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as used by git-svn:

loginname = author name <mail address>


dleidert = Daniel Leidert <>

The list of subversion user IDs can be obtained the same way as described in the Wiki:

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u

Just replace the placeholder SVN_URL with your subversion URL. Here is the complete file for the debichem group.

Step 3: Creating the rules

The most important thing is the second file, which contains the processing rules. There is really not much documentation out there, so when in doubt one has to read the source file src/ruleparser.cpp. I'll describe what I've found out so far. If you are impatient, here is my result.

The basic rules are:

create repository REPOSITORY
end repository

match PATTERN
end match

The first rule creates a bare git repository with the name you've chosen (above represented by REPOSITORY). It can have one child, the repository description, which is put into the repository's description file. There are AFAIK no other elements allowed here. So in the case of e.g. ShelXle the rule might look like this:

create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository

You'll have to create every repository before you can put something into it, else svn-all-fast-export will exit with an error. JFTR: it won't complain if you create a repository but don't put anything into it; you will just end up with an empty Git repository.

Now the second type of rule is the most important one. Based on regular expression match patterns (above represented by PATTERN), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. The patterns are applied in their order of appearance: if a matching pattern is found, other patterns that also match but appear later in the rules file won't apply! So a special rule should always be put above a general rule. The patterns seem to be of type QRegExp and look like basic Perl regular expressions, including e.g. capturing, backreferences and lookahead capabilities. For a multi-package subversion repository with standard layout (that is /PACKAGE/{trunk,tags,branches}/), clean naming and clean subversion history, the rules could be:

match /([^/]+)/trunk/
repository \1
branch master
end match

match /([^/]+)/tags/([^/]+)/
repository \1
branch refs/tags/debian/\2
annotated true
end match

match /([^/]+)/branches/([^/]+)/
repository \1
branch \2
end match

The first rule captures the (source) package name from the path and puts it into the backreference \1. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory (here we simply use the backreference \1 as that name), and into the master branch there. Note that svn-all-fast-export will error out if it tries to access a repository which has not been created, so make sure all repositories are created with the create repository rule as shown. The second rule also captures the (source) package name from the path into the backreference \1, but in backreference \2 it further captures (and applies to) all the tag directories under the /tags/ directory; usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to annotated Git tags (another advantage of svn-all-fast-export over git-svn); without enabling the annotated statement, the created tags would be lightweight tags. The tag name (here: debian/VERSION) is determined via backreference \2. The third rule is almost the same, except that everything found in the matching path is pushed into a Git branch named after the directory captured from under /branches/.
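To sanity-check what a rule will capture before running the conversion, the patterns can be tried out with an ordinary regex tool. Here is a sketch with sed -E standing in for svn-all-fast-export's QRegExp matching, applied to the tag rule above (the package path is taken from the earlier svn log example):

```shell
# Emulate the tag rule: /([^/]+)/tags/([^/]+)/ captures the repository
# name in \1 and the tag (Debian version) in \2.
path=/shelxle/tags/1.0.888-1/
echo "$path" | sed -E 's|^/([^/]+)/tags/([^/]+)/$|repository=\1 tag=debian/\2|'
```

This prints the repository name and the annotated tag the rule would create, so a typo in the pattern shows up before the hours-long conversion run.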

Now in an ideal world this might be enough and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called authors, that the rules file is called rules and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at /srv/svn/mymirror. So ...

svn-all-fast-export --identity-map=../authors --rules=../rules --stats /srv/svn/mymirror

... will create one or more bare Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...

git -C REPOSITORY/ --bare show-ref
git -C REPOSITORY/ --bare log --all --graph

... and you will find your repositories description (if you added one to the rules file) in REPOSITORY/description:

cat REPOSITORY/description

Please note that not all of the Debian version strings are well-formed Git reference names and therefore need fixing. There might also be gaps shown in the Git history log. Or maybe the command didn't even succeed or complained (without you noticing it), or you ended up with an empty repository although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.

But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to skip to the next step.

In the debichem group we used a different layout. The packaging directories were under /{unstable,experimental,wheezy,lenny,non-free}/PACKAGE/. This translates to /unstable/PACKAGE/ and /non-free/PACKAGE/ being the trunk directories and the others being the branches. The tags are in /tags/PACKAGE/. And packages that are yet to be uploaded are located in /wnpp/PACKAGE/. With this layout, the basic rules are:

# trunk handling
# e.g. /unstable/espresso/
# e.g. /non-free/molden/
match /(?:unstable|non-free)/([^/]+)/
repository \1
branch master
end match

# handling wnpp
# e.g. /wnpp/osra/
match /(wnpp)/([^/]+)/
repository \2
branch \1
end match

# branch handling
# e.g. /wheezy/espresso/
match /(lenny|wheezy|experimental)/([^/]+)/
repository \2
branch \1
end match

# tags handling
# e.g. /tags/espresso/VERSION/
match /tags/([^/]+)/([^/]+)/
repository \1
annotated true
branch refs/tags/debian/\2
substitute branch s/~/_/
substitute branch s/:/_/
end match

In the first rule there is a non-capturing expression (?: ... ), which simply means that the rule applies to both /unstable/ and /non-free/. Thus the backreference \1 refers to the second part of the path, the package directory name. The contents found are pushed to the master branch. In the second rule, the contents of the wnpp directory are not pushed to master, but instead to a branch called wnpp. This was necessary because of overlaps between the /unstable/ and /wnpp/ history, and it already shows that the repository's history makes things complicated. In the third rule, the first backreference \1 determines the branch (note the capturing expression, in contrast to the first rule) and the second backreference \2 the package repository to act on. The last rule is similar, but now \1 determines the package repository and \2 the tag name (the Debian package version) based on the matching path. The example also shows another issue, which I'd like to explain more in the next article: some characters we use in Debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the substitute branch EXPRESSION instructions.
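The effect of those substitute statements can be previewed with sed, which applies the same s/~/_/ and s/:/_/ rewrites; the version string below is a made-up example containing both an epoch and a tilde:

```shell
# A Debian version with an epoch (:) and a tilde (~), neither of which
# is allowed in a Git reference name.
version='1:2.0~rc1-1'

# Apply the same substitutions as the rules file to get a valid tag name.
echo "debian/$version" | sed -e 's/~/_/g' -e 's/:/_/g'
```

The resulting name passes git check-ref-format, whereas the raw version string would be rejected when the tag is created.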

Step 4: Cleaning the bare repository

The tool documentation suggests running ...

git -C REPOSITORY/ repack -a -d -f

... before you upload this bare repository to another location. But Stuart Prescott told me on the debichem list that this might not be enough and could still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository: either make a bare clone, or clone and then init a new bare one. I used the first approach:

git clone --bare REPOSITORY/ REPOSITORY.git
git -C REPOSITORY.git/ repack -a -d -f

Please note that this won't copy the repository's description file; you'll have to copy it manually if you want to keep it. The resulting bare repository can then be uploaded as a personal repository:

cp REPOSITORY/description REPOSITORY.git/description
touch REPOSITORY.git/git-daemon-export-ok
rsync -avz REPOSITORY.git

Or you clone the repository, add a remote origin and push everything there. It is even possible to use the gitlab API to create a project and push there. I'll save the latter for another post. If you are in a hurry, you'll find a script here.

John Goerzen: An old DOS BBS in a Docker container

31 January, 2018 - 18:32

A while back, I wrote about my Debian Docker base images. I decided to extend this concept a bit further: to running DOS applications in Docker.

But first, a screenshot:

It turns out this is possible, but difficult. I went through all three major DOS emulators available (dosbox, qemu, and dosemu). I got them all running inside the Docker container, but had a number of, er, fun issues to resolve.

The general thing one has to do here is present a fake modem to the DOS environment. This needs to be exposed outside the container as a TCP port. That much is possible in various ways — I wound up using tcpser. dosbox had a TCP modem interface, but it turned out to be too buggy for this purpose.

The challenge comes in where you want to be able to accept more than one incoming telnet (or TCP) connection at a time. DOS was not a multitasking operating system, so there were any number of hackish solutions back then. One might have had multiple physical computers, one for each incoming phone line. Or they might have run multiple pseudo-DOS instances under a multitasking layer like DESQview, OS/2, or even Windows 3.1.

(Side note: I just learned of DESQview/X, which integrated DESQview with X11R5 and replaced the Windows 3 drivers to allow running Windows as an X application).

For various reasons, I didn’t want to try running one of those systems inside Docker. That left me with emulating the original multiple physical node setup. In theory, pretty easy — spin up a bunch of DOS boxes, each using at most 1MB of emulated RAM, and go to town. But here came the challenge.

In a multiple-physical-node setup, you need some sort of file sharing, because your nodes have to access the shared message and file store. There were a myriad of clunky ways to do this in the old DOS days – Netware, LAN manager, even some PC NFS clients. I didn’t have access to Netware. I tried the Microsoft LM client in DOS, talking to a Samba server running inside the Docker container. This I got working, but the LM client used so much RAM that, even with various high memory tricks, BBS software wasn’t going to run. I couldn’t just mount an underlying filesystem in multiple dosbox instances either, because dosbox did caching that wasn’t going to be compatible.

This is why I wound up using dosemu. Besides being a more complete emulator than dosbox, it had a way of sharing the host’s filesystems that was going to work.
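dosemu's LREDIR command maps a host directory to a DOS drive letter, which is the sharing mechanism I mean. A sketch of the idea (the drive letter and host path here are illustrative; see dosemu's LREDIR documentation for the exact syntax):

```
REM In the dosemu autoexec, attach the shared host directory as drive D:
LREDIR D: LINUX\FS\bbs
```

Because every dosemu instance redirects through the host filesystem rather than a cached disk image, multiple nodes can safely see each other's writes to the shared message base.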

So all of this wound up in jgoerzen/docker-bbs-renegade.

I also prepared building blocks for others that want to do something similar: docker-dos-bbs and the lower-level docker-dosemu.
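To give a flavor of the multiple-node idea, here is a hypothetical docker-compose sketch. The service names, ports, and volume layout are all assumptions for illustration; the repositories above document the real usage:

```yaml
# Hypothetical: two emulated BBS nodes sharing one data volume,
# each answering "calls" on its own host port.
version: "2"
services:
  node1:
    image: jgoerzen/docker-bbs-renegade
    ports:
      - "2323:23"
    volumes:
      - bbsdata:/bbs
  node2:
    image: jgoerzen/docker-bbs-renegade
    ports:
      - "2324:23"
    volumes:
      - bbsdata:/bbs
volumes:
  bbsdata:
```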

As a side bonus, I also attempted running this under Joyent’s Triton (SmartOS, Solaris-based). I was pleasantly impressed that I got it all almost working there. So yes, a Renegade DOS BBS running under a Linux-based DOS emulator in a container on a Solaris machine.

Russ Allbery: Review: My Grandmother Asked Me to Tell You She's Sorry

31 January, 2018 - 11:19

Review: My Grandmother Asked Me to Tell You She's Sorry, by Fredrik Backman

Series: Britt-Marie #1
Translator: Henning Koch
Publisher: Washington Square
Copyright: 2014
Printing: April 2016
ISBN: 1-5011-1507-3
Format: Trade paperback
Pages: 372

Elsa is seven, going on eight. She's not very good at it; she knows she's different and annoying, which is why she gets chased and bullied constantly at school and why her only friend is her grandmother. But Granny is a superhero, who's also very bad at being old. Her superpowers are lifesaving and driving people nuts. She made a career of being a doctor in crisis zones; now she makes a second career of, well, this sort of thing:

Or that time she made a snowman in Britt-Marie and Kent's garden right under their balcony and dressed it up in grown-up clothes so it looked as if a person had fallen from the roof. Or that time those prim men wearing spectacles started ringing all the doorbells and wanted to talk about God and Jesus and heaven, and Granny stood on her balcony with her dressing gown flapping open, shooting at them with her paintball gun.

The other thing Granny is good at is telling fairy tales. She's been telling Elsa fairy tales since she was small and her mom and dad had just gotten divorced and Elsa was having trouble sleeping. The fairy tales are all about Miamas and the other kingdoms of the Land-of-Almost-Awake, where the fearsome War-Without-End was fought against the shadows. Miamas is the land from which all fairy tales come, and Granny has endless stories from there, featuring princesses and knights, sorrows and victories, and kingdoms like Miploris where all the sorrows are stored.

Granny and Miamas and the Land-of-Almost-Awake make Elsa's life not too bad, even though she has no other friends and she's chased at school. But then Granny dies, right after giving Elsa one final quest, her greatest quest. It starts with a letter and a key, addressed to the Monster who lives downstairs. (Elsa calls him that because he's a huge man who only seems to come out at night.) And Granny's words:

"Promise you won't hate me when you find out who I've been. And promise me you'll protect the castle. Protect your friends."

My Grandmother Asked Me to Tell You She's Sorry is written in third person, but it's close third person focused on Elsa and her perspective on the world. She's a precocious seven-year-old who I thought was nearly perfect (rare praise for me for children in books), which probably means some folks will think she's a little too precocious. But she has a wonderful voice, a combination of creative imagination, thoughtfulness, and good taste in literature (particularly Harry Potter and Marvel Comics). The book is all about what it's like to be seven, going on eight, with a complicated family situation and an awful time at school, but enough strong emotional support from her family that she's still full of stubbornness, curiosity, and fire.

Her grandmother's quest gets her to meet the other residents of the apartment building she lives in, turning them into more than the backdrop of her life. That, in turn, adds new depth to the fairy tales her Granny told her. Their events turn out to not be pure fabrication. They were about people, the many people in her Granny's life, reshaped by Granny's wild imagination and seen through the lens of a child. They leave Elsa surprisingly well-equipped to navigate and start to untangle the complicated relationships surrounding her.

This is where Backman pulls off the triumph of this book. Elsa's discoveries that her childhood fairy tales are about the people around her, people with a long history with her grandmother, could have been disillusioning. This could be the story of magic fading into reality and thereby losing its luster. And at first Elsa is quite angry that other people have this deep connection to things she thought were hers, shared with her favorite person. But Backman perfectly walks that line, letting Elsa keep her imaginative view of the world while intelligently mapping her new discoveries onto it. The Miamas framework withstands serious weight in this story because Elsa is flexible, thoughtful, and knows how to hold on to the pieces of her story that carry deeper truth. She sees the people around her more clearly than anyone else because she has a deep grasp of her grandmother's highly perceptive, if chaotic, wisdom, baked into all the stories she grew up with.

This book starts out extremely funny, turns heartwarming and touching, and develops real suspense by the end. It starts out as Elsa nearly alone against the world and ends with a complicated matrix of friends and family, some of whom were always supporting each other beneath Elsa's notice and some of whom are re-learning the knack. It's a beautiful story, and for the second half of the book I could barely put it down.

I am, as a side note, once again struck by the subtle difference in stories from cultures with a functional safety net. I caught my American brain puzzling through ways that some of the people in this book could still be alive and living in this apartment building since they don't seem capable of holding down jobs, before realizing this story is not set in a brutal Hobbesian jungle of all against all like the United States. The existence of this safety net plays no significant role in this book apart from putting a floor under how far people can fall, and yet it makes all the difference in the world and in some ways makes Backman's plot possible. Perhaps publishers should market Swedish literary novels as utopian science fiction in the US.

This is great stuff. The back and forth between fairy tales and Elsa's resilient and slightly sarcastic life can take a bit to get used to, but stick with it. All the details of the fairy tales matter, and are tied back together wonderfully by the end of the book. Highly recommended. In its own way, this is fully as good as A Man Called Ove.

There is a subsequent book, Britt-Marie Was Here, that follows one of the supporting characters of this novel, but My Grandmother Asked Me to Tell You She's Sorry stands alone and reaches a very satisfying conclusion (including for that character).

Rating: 10 out of 10

Jeremy Bicha: logo.png for default avatar for GitLab repos

31 January, 2018 - 10:13

Debian and GNOME have both recently adopted self-hosted GitLab for their git hosting. GNOME’s service is simply named GitLab; Debian’s has the more intriguing name Salsa. If you ask the Salsa sysadmins, they’ll explain that they were in a Mexican restaurant when they needed to decide on a name!

There’s a useful under-documented feature I found. If you place a logo.png in the root of your repository, it will be automatically used as the default “avatar” for your project (in other words, the logo that shows up on the web page next to your project).

I added a logo.png to GNOME Tweaks at GNOME and it automatically showed up in Salsa when I imported the new version.

Other Notes

I first tried with a symlink to my app icon, but it didn’t work. I had to actually copy the icon.
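To illustrate the symlink-vs-copy point, here is a self-contained sketch with a placeholder file standing in for the real application icon:

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for the actual application icon file:
printf 'icon-bytes' > app-icon.png

# A symlink named logo.png was ignored by GitLab's avatar lookup:
ln -s app-icon.png logo-as-symlink.png

# Copying the actual file is what worked:
cp app-icon.png logo.png

ls logo.png
```

Commit the resulting regular file `logo.png` at the repository root and GitLab picks it up as the project avatar on the next import or push.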

The logo.png convention doesn’t seem to be supported at GitHub currently.

Daniel Pocock: Fair communication requires mutual consent

31 January, 2018 - 03:33

I was pleased to read Shirish Agarwal's blog in reply to the blog I posted last week, Do the little things matter?

Given the militaristic theme used in my own post, I was also somewhat amused to see news this week of the Strava app leaking locations and layouts of secret US military facilities like Area 51. What a way to mark International Data Privacy Day. Maybe rather than inadvertently misleading people to wonder if I was suggesting that Gmail users don't make their beds, I should have emphasized that Admiral McRaven's boot camp regime for Navy SEALS needs to incorporate some of my suggestions about data privacy?

A highlight of Agarwal's blog is his comment "I usually wait for a day or more when I feel myself getting inflamed/heated", and I wish this practice had been followed in some of the other places where my ideas were discussed. Even though my ideas are sometimes provocative, I would kindly ask people to keep point 2 of the Debian Code of Conduct in mind: Assume good faith.

One thing that became clear to me after reading Agarwal's blog is that some people saw my example one-line change to Postfix's configuration as a suggestion that everybody needs to run their own mail server. I had seen such comments before, but I hadn't realized why people were reaching that conclusion. The purpose of that line was simply to emphasize the content of the proposed bounce message, to help people understand: the receiver of an email may never have agreed to Google's non-privacy policy, but if you send them a message from a Gmail account, you impose that surveillance regime on them, not just on yourself.

Communication requires mutual agreement about the medium. Think about it another way: if you go to a meeting with your doctor and some stranger in a foreign military uniform is in the room, you might choose to leave and find another doctor rather than communicate under surveillance.

As it turns out, many people are using alternative email services, even if they only want a web interface. There is already a feature request discussion in ProtonMail about letting users choose to opt-out of receiving messages monitored by Google and send back the bounce message suggested in my blog. Would you like to have that choice, even if you didn't use it immediately? You can vote for that issue or leave your own feedback comments in there too.


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.