Planet Debian


Sean Whitton: This weekend's nutrition

5 February, 2018 - 00:23

Bird went out into the driveway alone. It wasn’t raining and the wind had died: the clouds sailing the sky were bright, dry. A brilliant morning had broken from the dawn’s cocoon of semidarkness, and the air had a good, first-days-of-summer smell that slackened every muscle in Bird’s body. A night softness had lingered in the hospital, and now the morning light, reflecting off the wet pavement and off the leafy trees, stabbed like icicles at Bird’s pampered eyes. Labouring into this light on his bike was like being poised on the edge of a diving board; Bird felt severed from the certainty of the ground, isolated. And he was as numb as stone, a weak insect on a scorpion’s sting. (Ōe, A Personal Matter (trans. John Nathan), ch. 2)

Forenoon: the most exhilarating hour of an early summer day. And a breeze that recalled elementary school excursions quickened the worms of tingling pleasure on Bird’s cheeks and earlobes, flushed from lack of sleep. The nerve cells in his skin, the farther they were from conscious restraint, the more thirstily they drank the sweetness of the season and the hour. Soon a sense of liberation rose to the surface of his consciousness. (ch. 3)

Shirish Agarwal: tracking pictures and a minidebconf in Pune

4 February, 2018 - 16:35

This is going to be a longish post.

I have been a user of my blogging platform for a long, long time. Some time back I began to notice that on a few of my blog posts the pictures were not appearing. I asked the planet maintainers what was going on. They in turn shared with me the list of filters that they use by default. While I'm not at liberty to share any of the filters, it did become clear from reading the regular expressions in the filters, and from conversations with the planet maintainers, that my blogging platform was at fault and not Planet Debian. I tried to see if there was anything I could do as a content producer, but apparently there is nothing: neither the media settings nor the general settings offer any way for me to stop the tracking.

Sharing a screenshot below –

So while there's nothing I can do at the moment, I can share about the Debian event that we did at reserved-bit a couple of months ago. Before I start, here's a small brief about reserved-bit: it's a makerspace right next to where all the big IT companies are, where people come to pass the time after work. It's on top of a mall.

Reserved-bit is run jointly by Siddhesh and Nisha Poyarekar, a husband-wife duo. Siddhesh was working with Red Hat and now does his own thing. He works with Linaro and is a glibc maintainer, and I read somewhere that he was even the release manager for a couple of glibc releases (update: actually 2.25 and 2.26 of glibc, which is wow and also a big responsibility). Pune is to India what Detroit was to the States: we have a number of automobile companies, and Siddhesh did share that he was working on glibc variants for the automobile and embedded markets.

Nisha, on the other hand, is more on the maker side of things, and I believe she knows quite a bit of Arduino. There was a workshop on Arduino yesterday, but due to time and personal constraints I was not able to attend it, or I would have had more to share. She is the one looking after the day-to-day operations of the makerspace, and Siddhesh chips in as and when he can.

Because of the image issue, I had been dragging my feet on posting about the event for more than a couple of months now. I do have access to a debconf gallery instance but was not comfortable using it for this. If I do attend a DebConf, then that would probably be the place for it.

Anyways, about 3 months back Gaurav shared an e-mail on the local LUG mailing list. We were trying to get the college where the LUG meets, as it is in one of the central parts of the city, but due to scheduling changes it was decided to hold the event at reserved-bit. I had not talked with Praveen for some time, but had an inkling that he might be bringing one of the big packages with a lot of dependencies, which I shared in an email. As can be seen, I also shared the immense list that Paul always has; free software is just growing in leaps and bounds, and what we are missing are more packagers and maintainers.

I also thought it possible that somebody might want to install Debian, and hence shared about that as well.

As I wasn't feeling strong enough, I decided to forgo taking the laptop along. Fortunately, a friend arrived and together we were able to reach the venue on time. We probably missed the first 10 minutes or so, which was a bit of the introduction session.

Image – courtesy Gaurav Sitlani

Praveen is in the middle, somewhat like me, with a beard and a white t-shirt.

I had mentally prepared myself for newbie questions but, refreshingly, even though there were a lot of amateurs, most of them had used Debian for some time. So instead of talking about why we need to have Debian as a choice, or why distro X is better than Y, we had more pointed, topical questions. There were questions about privacy as well, where Debian is strong and looking to set the bar even higher. I came to know much later that Kali people are interested in porting most of their packages into main and maintaining them there: more eyes on the code, a larger superset of people using their work than those who use only Kali, and in time higher-quality packages, which is a win-win for all the people concerned.

As I had suspected, Praveen shared two huge lists of potential software that needs to be packaged. Before starting, he went through some of the introductory part of the npm2deb tutorial. I had played with build tools before, but npm2deb seemed a bit more automated than others, specifically in the way it picks up metadata about the software to be packaged. I do and did realize that npm2deb is for specific cases only, and that is probably why it can be more automated than something like make, CMake or Premake; the latter are more generic in nature and are not tied to a specific platform or way of doing things.

He showed a demo of npm2deb and the resultant deb package, and ran lintian on top of it. He did share the whole list of software that needs to be packaged in order to see npm come into Debian. He and Shruti also ran a crowdfunding campaign for it some time back.

I am not sure how many people noticed, but from what I recollect both nodejs and npm came into Debian around June/July 2017. While I don't know for sure, it seems Praveen and Shruti did the boring yet hard work of bringing both of these biggish packages into Debian. There may be other people involved whom I don't know about, but any omission is unintentional. If anybody knows better, feel free to correct me and I will update it here as well.

Then after a while Raju shared the work he has been doing with Hamara, though not in great detail, as a lot of work is yet to be done. There were questions about rolling releases and how often people update packages; while both Praveen and Raju pointed out that they did monthly updates, I am more of a weekly offender. I usually use debdelta to update packages, as it is far easier to fetch the package diffs cheaply without affecting the bandwidth too much.

I wanted to share about adequate, as I think it's one of the better tools, but since it has been orphaned and nobody has stepped up, it seems it will die a slow death. What a waste of a useful tool.

What we hadn't prepared for was that somebody actually wanted to install Debian on their laptop then and there. I just happened to have the netinstall USB stick, but the person who wanted to install Debian had not done the preparatory work which needs to be done before setting up Debian. We had to send a couple of people to get a spare external hdd, which took time; we then copied the person's data, formatted the partition, and shared the different ways Debian could be installed onto the system. There was a bit of bike-shedding there, as there are just too many ways. I am personally towards having separate /, /boot (part of which I am still unable to resolve under the whole Windows 10 nightmare), /home, /logs and swap partitions. There was also a bit of discussion about swap, as the older 1:1 memory model doesn't hold much water in the 8 GB RAM+ scenario.

By the time the external hdd came, we were able to download a CD .iso and show a minimal desktop installation. We had discussions about the various window managers and desktop environments, their differences and similarities. IIRC, CD 1 has just LXDE, as none of the other desktop environments fit on CD 1. I also shared my South African DebConf experience, as well as the whole year-long preparation it takes to organize a DebConf. IIRC, I *think* I shared that having a conference like that costs north of USD 100,000 (it cost that much for South Africa, a beautiful country); the Canadian one might have cost more, and the Taiwan one coming this July would cost about the same even though accommodation is free. I did share that we had something like 300+ people at the conference, while the Germany one the year before had 500, so for any Indian bid we would have to grow a whole lot more before we can think of hosting a DebConf in India.

There was interest from people in contributing to Debian, but this is where it gets a bit foggy: while some of the students want/ed to contribute, they were not clear as to where they could. I think we shared the mailing lists with them, shared/showed them IRC/Matrix, and sort of left them to their own devices. I do think we mentioned #debian-welcome and #debian-mentors as possible points of contact. As all of us are busy with our lives, work etc., it does become hard to advise people. Over the years we have realized that it's much better to just share the starting info and let them find whatever interests them.

There was also discussion about different operating systems and how their work culture and practices differed from the Debian perspective. For example, I shared how we have borrowed quite a bit of security software from the BSD stable, along with some plausible reasons why BSD made it big in some areas and sort of failed in others. The same was dissected for the other operating systems in the free software space, and quite a few students realized it's a big universe out there. We shared about Devuan and how a group of people who didn't like systemd did their own thing, but at the same time realized the amount of time it takes to maintain a distro. In many ways, it is a miracle that Debian is able to remain independent and keep its own morals and compass. We also shared bits of the Debian constitution and manifesto, but not too much, otherwise it would have become too preachy.

Coming towards the end, it gives me quite a bit of pleasure to share that Debian will be taking part in Outreachy and GSoC at the same time. While the projects are different, I do have some personal favorites. The most exciting to me as a user are:

1. Wizard/GUI helping students/interns apply and get started – While they have scoped it narrowly, it should help almost everybody who has to get over the learning curve to make her/his contribution to Debian. Having all the tools configured and ready to work would make the job of onboarding onto Debian a whole lot easier.

2. Firefox and Thunderbird plugins for free software habits – It's always a good idea to start off with privacy tools; it would make the journey into free software much easier and more enjoyable.

3. Android SDK Tools in Debian – I think this would be a multi-year project for as long as Android remains a competitor in the mobile space. Especially for Pune students, doing work on Android might lead to upstream work with Linaro, who have been working with companies and various stakeholders to bring more homogeneity to the kernel, which would make it more secure and more maintainable in both the short and the long run.

4. Open Agriculture Food Computer – This would probably be a bit difficult, but is a good fit for colleges like COEP, which have CNC lathes and 3-D printers, and for people interested in agriculture such as Sharad Pawar and Nitin Gadkari. The TED link shared and reproduced below gives some idea. Vandana Shiva, who has been a cultural force and runs a seed bank (so that we have culture, recipes and food for generations), would be pretty much the right person for the problems we face. It actually ties in with another TED talk on a concern that is also global: the shortage of water and the recycling of water.

Again, from what I could assess with my almost non-existent agricultural skills, this would be a multi-year project, as the science and understanding of it are at an early stage. People from agriculture, IT, atmospheric science etc. would all have a role in a project like this. The interesting part is that, from what has been shared, it seems there is a lot that can be done in that arena.

Lastly, I would like some of the more privacy-conscious people to weigh in on bug 1322748. I have used all the addons mentioned in the bugzilla entry at one time or another, and am stymied: my web experience is poorer because I cannot know whom to trust and whom not to without info about which ciphers webmasters are using. Public pressure can only work when that info is available.

I am sure I missed a lot, but that’s all I could cover. If people have some more ideas or inputs, feel free to suggest in the comments and I will see if I can incorporate them in the blog post if need be.

Keith Packard: CompositeAcceleration

3 February, 2018 - 07:31
Composite acceleration in the X server

One of the persistent problems with the modern X desktop is the number of moving parts required to display application content. Consider a simple PresentPixmap call as made by the Vulkan WSI or GL using DRI3:

  1. Application calls PresentPixmap with new contents for its window

  2. X server receives that call and pends any operation until the target frame

  3. At the target frame, the X server copies the new contents into the window pixmap and delivers a Damage event to the compositor

  4. The compositor responds to the damage event by copying the window pixmap contents into the next screen pixmap

  5. The compositor calls PresentPixmap with the new screen contents

  6. The X server receives that call and either posts a Swap call to the kernel or delays any action until the target frame

This sequence has a number of issues:

  • The operation is serialized between three processes with at least three context switches involved.

  • There is no traceable relation between when the application asked for the frame to be shown and when it is finally presented. Nor do we even have any way to tell the application what time that was.

  • There are at least two copies of the application contents, from DRI3 buffer to window pixmap and from window pixmap to screen pixmap.

We'd also like to be able to take advantage of the multi-plane capabilities in the display engine (where available) to directly display the application contents.

Previous Attempts

I've tried to come up with solutions to this issue a couple of times in the past.

Composite Redirection

My first attempt to solve (some of) this problem was through composite redirection. The idea there was to directly pass the Present'd pixmap to the compositor and let it copy the contents directly from there in constructing the new screen pixmap image. With some additional hand waving, the idea was that we could associate that final presentation with all of the associated redirected compositing operations and at least provide applications with accurate information about when their images were presented.

This fell apart when I tried to figure out how to plumb the necessary events through to the compositor and back. With that, and the realization that we still weren't solving problems inherent with the three-process dance, nor providing any path to using overlays, this solution just didn't seem worth pursuing further.

Automatic Compositing

More recently, Eric Anholt and I have been discussing how to have the X server do all of the compositing work by natively supporting ARGB window content. By changing compositors to place all screen content in windows, the X server could then generate the screen image by itself and not require any external compositing manager assistance for each frame.

Given that a primitive form of automatic compositing is already supported, extending that to support ARGB windows and having the X server manage the stack seemed pretty tractable. We would extend the driver interface so that drivers could perform the compositing themselves using a mixture of GPU operations and overlays.

This runs up against five hard problems though.

  1. Making transitions between Manual and Automatic compositing seamless. We've seen how well the current compositing environment works when flipping compositing on and off to allow full-screen applications to use page flipping. Lots of screen flashing and application repaints.

  2. Dealing with RGB windows with ARGB decorations. Right now, the window frame can be an ARGB window with the client being RGB; painting the client into the frame yields an ARGB result with the A values being 1 everywhere the client window is present.

  3. Mesa currently allocates buffers exactly the size of the target drawable and assumes that the upper left corner of the buffer is the upper left corner of the drawable. If we want to place window manager decorations in the same buffer as the client and not need to copy the client contents, we would need to allocate a buffer large enough for both client and decorations, and then offset the client within that larger buffer.

  4. Synchronizing window configuration and content updates with the screen presentation. One of the major features of a compositing manager is that it can construct complete and consistent frames for display; partial updates to application windows need never be shown to the user, nor does the user ever need to see the window tree partially reconfigured. To make this work with automatic compositing, we'd need to both codify frame markers within the 2D rendering stream and provide some method for collecting window configuration operations together.

  5. Existing compositing managers don't do this today. Compositing managers are currently free to paint whatever they like into the screen image; requiring that they place all screen content into windows would mean they'd have to buy in to the new mechanism completely. That could still work with older X servers, but the additional overhead of more windows containing decoration content would slow performance with those systems, making migration less attractive.

I can think of plausible ways to solve the first three of these without requiring application changes, but the last two require significant systemic changes to compositing managers. Ick.

Semi-Automatic Compositing

I was up visiting Pierre-Loup at Valve recently and we sat down for a few hours to consider how to help applications regularly present content at known times, and to always know precisely when content was actually presented. That addresses just one of the above issues, but when you consider the additional work required by pure manual compositing, solving that one issue is likely best achieved by solving all three.

I presented the Automatic Compositing plan and we discussed the range of issues. Pierre-Loup focused on the last problem -- getting existing Compositing Managers to adopt whatever solution we came up with. Without any easy migration path for them, it seemed like a lot to ask.

He suggested that we come up with a mechanism which would allow Compositing Managers to ease into the new architecture and slowly improve things for applications. Towards that, we focused on a much simpler problem:

How can we get a single application at the top of the window stack to reliably display frames at the desired time, and know when that doesn't occur?

Coming up with a solution for this led to a good discussion and a possible path to a broader solution in the future.

Steady-state Behavior

Let's start by ignoring how we start and stop this new mode and look at how we want applications to work when things are stable:

  1. Windows not moving around
  2. Other applications idle

Let's get a picture I can use to describe this:

In this picture, the compositing manager is triple buffered (as is normal for a page flipping application) with three buffers:

  1. Scanout. The image currently on the screen

  2. Queued. The image queued to be displayed next

  3. Render. The image being constructed from various window pixmaps and other elements.

The contents of the Scanout and Queued buffers are identical with the exception of the orange window.

The application is double buffered:

  1. Current. What it has displayed for the last frame

  2. Next. What it is constructing for the next frame

Ok, so in the steady state, here's what we want to happen:

  1. Application calls PresentPixmap with 'Next' for its window

  2. X server receives that call and copies Next to Queued.

  3. X server posts a Page Flip to the kernel with the Queued buffer

  4. Once the flip happens, the X server swaps the names of the Scanout and Queued buffers.

If the X server supports Overlays, then the sequence can look like:

  1. Application calls PresentPixmap

  2. X server receives that call and posts a Page Flip for the overlay

  3. When the page flip completes, the X server notifies the client that the previous Current buffer is now idle.

When the Compositing Manager has content to update outside of the orange window, it will:

  1. Compositing Manager calls PresentPixmap

  2. X server receives that call and paints the Current client image into the Render buffer

  3. X server swaps Render and Queued buffers

  4. X server posts Page Flip for the Queued buffer

  5. When the page flip occurs, the server can mark the Scanout buffer as idle and notify the Compositing Manager

If the Orange window is in an overlay, then the X server can skip step 2.
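The buffer-name rotation in the two sequences above can be sketched as a tiny state machine. This is a hypothetical simulation of the rotation only (no real X server or kernel calls); the class and method names are invented for illustration:

```python
# Hypothetical simulation of the compositor's triple-buffer rotation
# described above; no actual X server or kernel interaction involved.

class CompositorBuffers:
    def __init__(self):
        self.scanout = "A"  # image currently on the screen
        self.queued = "B"   # image queued to be displayed next
        self.render = "C"   # image being constructed

    def compositor_present(self):
        # Steps 3-4 of the compositor sequence: swap Render and
        # Queued, then post a page flip for the (new) Queued buffer.
        self.render, self.queued = self.queued, self.render
        self.flip_complete()

    def flip_complete(self):
        # Step 5 (and steady-state step 4): after the flip, Scanout
        # and Queued swap names; the old Scanout becomes idle.
        self.scanout, self.queued = self.queued, self.scanout

b = CompositorBuffers()
b.compositor_present()
print(b.scanout, b.queued, b.render)  # the old Render buffer is now on screen
```

After one compositor present, what was the Render buffer is scanned out, the old Scanout is idle and queued for reuse, and the old Queued buffer becomes the new Render target.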

The Auto List

To give the Compositing Manager control over the presentation of all windows, each call to PresentPixmap by the Compositing Manager will be associated with the list of windows, the "Auto List", for which the X server will be responsible for providing suitable content. Transitioning from manual to automatic compositing can therefore be performed on a window-by-window basis, and each frame provided by the Compositing Manager will separately control how that happens.

The Steady State behavior above would be represented by having the same set of windows in the Auto List for the Scanout and Queued buffers, and when the Compositing Manager presents the Render buffer, it would also provide the same Auto List for that.

Importantly, the Auto List need not contain only children of the screen Root window. Any descendant window at all can be included, and the contents of that drawn into the image using appropriate clipping. This allows the Compositing Manager to draw the window manager frame while the client window is drawn by the X server.

Any window at all can be in the Auto List. Windows with PresentPixmap contents available would be drawn from those. Other windows would be drawn from their window pixmaps.

Transitioning from Manual to Auto

To transition a window from Manual mode to Auto mode, the Compositing Manager would add it to the Auto List for the Render image, and associate that Auto List with the PresentPixmap request for that image. For the first frame, the X server may not have received a PresentPixmap for the client window, and so the window contents would have to come from the Window Pixmap for the client.

I'm not sure how we'd get the Compositing Manager to provide another matching image that the X server can use for subsequent client frames; perhaps it would just create one itself?

Transitioning from Auto to Manual

To transition a window from Auto mode to Manual mode, the Compositing manager would remove it from the Auto List for the Render image and then paint the window contents into the render image itself. To do that, the X server would have to paint any PresentPixmap data from the client into the window pixmap; that would be done when the Compositing Manager called GetWindowPixmap.

New Messages Required

For this to work, we need some way for the Compositing Manager to discover windows that are suitable for Auto compositing. Normally, these will be windows managed by the Window Manager, but it's possible for them to be nested further within the application hierarchy, depending on how the application is constructed.

I think what we want is to tag Damage events with the source window, and perhaps additional information to help Compositing Managers determine whether it should be automatically presenting those source windows or a parent of them. Perhaps it would be helpful to also know whether the Damage event was actually caused by a PresentPixmap for the whole window?

To notify the server about the Auto List, a new request will be needed in the Present extension to set the value for a subsequent PresentPixmap request.

Actually Drawing Frames

The DRM module in the Linux kernel doesn't provide any mechanism to remove or replace a Page Flip request. While this may get fixed at some point, we need to deal with how it works today, if only to provide reasonable support for existing kernels.

I think about the best we can do is to set a timer to fire a suitable time before vblank and have the X server wake up and execute any necessary drawing and Page Flip kernel calls. We can use feedback from the kernel to know how much slack time there was between any drawing and the vblank and adjust the timer as needed.
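One way to picture that feedback loop is a simple proportional adjustment of the wakeup lead time from the measured slack. This is purely a hypothetical sketch with invented names and constants, not the proposed server implementation:

```python
# Hypothetical sketch: adjust the pre-vblank wakeup timer using
# feedback about how much slack there was between drawing and vblank.

class FlipTimer:
    def __init__(self, lead_ms=3.0, target_slack_ms=1.0, gain=0.25):
        self.lead_ms = lead_ms                # how long before vblank to wake up
        self.target_slack_ms = target_slack_ms  # slack we aim to leave
        self.gain = gain                      # smoothing factor for feedback

    def report_slack(self, measured_slack_ms):
        # More slack than needed: wake up later next frame.
        # Less slack (or a miss): wake up earlier next frame.
        error = measured_slack_ms - self.target_slack_ms
        self.lead_ms -= self.gain * error
        self.lead_ms = max(0.5, self.lead_ms)  # never wake absurdly late

t = FlipTimer()
t.report_slack(2.0)        # plenty of slack this frame
print(round(t.lead_ms, 2))  # lead time shrinks toward the target
```

The smoothing keeps a single slow frame from swinging the wakeup time wildly, at the cost of reacting over several frames.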

Given that the goal is to provide for reliable display of the client window, it might actually be sufficient to let the client PresentPixmap request drive the display; if the Compositing Manager provides new content for a frame where the client does not, we can schedule that for display using a timer before vblank. When the Compositing Manager provides new content after the client, it would be delayed until the next frame.

Changes in Compositing Managers

As described above, one explicit goal is to ease the burden on Compositing Managers by making them able to opt-in to this new mechanism for a limited set of windows and only for a limited set of frames. Any time they need to take control over the screen presentation, a new frame can be constructed with an empty Auto List.

Implementation Plans

This post is the first step in developing these ideas to the point where a prototype can be built. The next step will be to take feedback and adapt the design to suit. Of course, there's always the possibility that this design will also prove unworkable in practice, but I'm hoping that this third attempt will actually succeed.

Dirk Eddelbuettel: RVowpalWabbit 0.0.12

3 February, 2018 - 06:20

And yet another boring little RVowpalWabbit package update, now to version 0.0.12, and still in response to the CRAN request not to write files where we should not (as caught by new tests added by Kurt). I had misinterpreted one flag and actually instructed the examples and tests to write model files back to the installed directory. Oops. Now fixed. I also added a reusable script for such tests in the repo for everybody's perusal (but it will require Linux and bindfs).

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch. I am also thinking that rvw extensions / work may make for a good GSoC 2018 project (and wrote up a short note). Again, if interested, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Antoine Beaupré: January 2018 report: LTS

3 February, 2018 - 05:25

I have already published a yearly report which covers all of 2017 but also some of my January 2018 work, so I'll try to keep this short.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I was happy to switch to the new Git repository for the security tracker this month. It feels like some operations (namely pull / push) are a little slower, but others, like commits or log inspection, are much faster. So I think it is a net win.


I did some work on trying to clean up a situation with the jQuery package, which I explained in more detail in a long post. It turns out there are multiple databases out there that track security issues in web development environments (like JavaScript, Ruby or Python) but do not follow the normal CVE tracking system. This means that Debian had a few vulnerabilities in its jQuery packages that were not tracked by the security team, in particular three that were tracked only elsewhere (CVE-2012-6708, CVE-2015-9251 and CVE-2016-10707). The resulting discussion was interesting and is worth reading in full.

A more worrying aspect is that this problem is not limited to flashy new web frameworks. Ben Hutchings estimated that almost half of the Linux kernel vulnerabilities are not tracked by CVE. It seems the consensus is that we want to try to follow the CVE process, and Mitre has been helpful in distributing this work by letting other entities, called CVE Numbering Authorities or CNAs, issue their own CVEs. After contacting Snyk, it turns out that they have started the process of becoming a CNA and are trying to make this part of their workflow, so that's a good sign.

LTS meta-work

I've done some documentation and architecture work on the LTS project itself, mostly around updating the wiki with current practices.


I did a quick security update of OpenSSH for LTS, which resulted in DLA-1257-1. Unfortunately, after a discussion with the security researcher who published that CVE, it turned out that this was only a "self-DoS", i.e. the NEWKEYS attack would only make the SSH client terminate its own connection, and therefore would not impact the rest of the server. One has to wonder, in that case, why this was issued a CVE at all: presumably the vulnerability could be leveraged somehow, but I did not look deeply enough into it to figure that out.

Hopefully the patch won't introduce a regression: I tested it summarily and it didn't seem to cause issues at first glance.

Hardlinks attacks

An interesting attack (CVE-2017-18078) was discovered against systemd, where the "tmpfiles" feature could be abused to bypass filesystem access restrictions through hardlinks. The trick is that the attack is possible only if kernel hardening (specifically fs.protected_hardlinks) is turned off. That feature has been available in the Linux kernel since the 3.6 release, but was actually turned off by default in 3.7. In the commit message, Linus Torvalds explained that the change was breaking some userland applications, which is a huge taboo in Linux, and recommended that distros configure this at boot instead. Debian took the reverse approach: Hutchings issued a patch which restores the more secure default. But this means users of custom kernels are still vulnerable to this issue.
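Whether a given system actually has the hardening enabled can be checked directly from /proc. A minimal check (Linux only; on kernels older than 3.6 the sysctl file simply doesn't exist):

```python
# Check whether the fs.protected_hardlinks hardening is enabled.
# Linux only: the sysctl appeared in kernel 3.6.
from pathlib import Path

p = Path("/proc/sys/fs/protected_hardlinks")
if p.exists():
    enabled = p.read_text().strip() == "1"
    print("protected_hardlinks enabled:", enabled)
else:
    print("kernel does not support protected_hardlinks")
```

The same value can of course be read or set with `sysctl fs.protected_hardlinks`; the point of the hardening is that a "1" here forbids users from creating hardlinks to files they cannot write to.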

But, more importantly, this affects more than systemd. The vulnerability also happens when using plain old chown with hardening turned off, when running a simple command like this:

chown -R non-root /path/owned/by/non-root

I didn't realize this, but hardlinks share permissions: if you change permissions on file a that's hardlinked to file b, both names show the new permissions. This is especially nasty if users can hardlink to critical files like /etc/passwd or suid binaries, which is why the hardening was introduced in the first place.
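The permission-sharing behaviour is easy to demonstrate safely in a temporary directory: both names refer to the same inode, so a chmod through one name is visible through the other.

```python
# Demonstrate that hardlinks share permissions: changing the mode
# through one name is visible through the other, since both names
# point at the same inode.
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a")
    b = os.path.join(d, "b")
    with open(a, "w") as f:
        f.write("secret")
    os.link(a, b)       # b is now a hardlink to a
    os.chmod(a, 0o600)  # change permissions via name a
    mode_b = stat.S_IMODE(os.stat(b).st_mode)
    print(oct(mode_b))  # the change shows up via name b as well
```

This prints 0o600: the mode set on `a` is observed through `b`, because the mode lives in the shared inode, not in either directory entry.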

In Debian, this is especially an issue in maintainer scripts, which often call chown -R on arbitrary, non-root directories. Daniel Kahn Gillmor had reported this to the Debian security team all the way back in 2011, but it didn't get traction back then. He has now opened Debian bug #889066 to at least enable a warning in lintian, and an issue was also opened against colord (Debian bug #889060) as an example, but many more packages are vulnerable. Again, this only applies if the hardening is turned off.

Normally, systemd is supposed to turn that hardening on, which should protect custom kernels, but this was turned off in Debian. Besides, Debian still supports non-systemd init systems (although most of those users have probably migrated to Devuan by now), so the fix wouldn't be complete anyway. I have therefore filed Debian bug #889098 against procps (which owns /etc/sysctl.conf and related files) to try to fix the issue more broadly there.

And to be fair, this was very explicitly mentioned in the jessie release notes, so those people without the protection kind of get what they deserve here...


Lastly, I did a fairly trivial update of the p7zip package, which resulted in DLA-1268-1. The patch was sent upstream and went through a few iterations, including coordination with the security researcher.

Unfortunately, the latter wasn't willing to share the proof of concept (PoC) so that we could test the patch. We are therefore trusting the researcher that the fix works, which is too bad because they do not trust us with the PoC...

Other free software work

I probably did more stuff in January that wasn't documented in the previous report. But I don't know if it's worth my time going through all this. Expect a report in February instead!

Have a happy new year and all that stuff.

Joachim Breitner: The magic “Just do it” type class

3 February, 2018 - 02:01

One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about that.
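To make the "completely mechanical" point concrete, here is one hand-written implementation of foo (a sketch of one possible term; the plugin described below may of course produce a differently-shaped but equivalent one):

```haskell
-- Thread the reader argument r through both steps, short-circuit on
-- Left, and pair up the two intermediate results.
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
foo f g = \r -> case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e  -> Left e
    Right b -> Right (a, b)
```

Every step here is forced by the types, which is exactly why writing it by hand feels redundant.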

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

What is this justDoIt?
*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?

Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

data Result a b = Failure a | Success b
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:

    • Preprocessors
    • Template Haskell
    • Generic programming together with type-level computation (e.g. generic-lens)
    • GHC Core-to-Core plugins

    In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.

  • To advertise the idea of taking type-driven computation to its logical conclusion, and to free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?

A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there needs to be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.
  • It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.

If someone wants to pick it up from here, that’d be great!

I have seen this before…

Indeed, the idea is not new.

Most famously in the Haskell world is certainly Lennart Augustsson’s Djinn tool that creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library djinn-ghc, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT rather than to implement type inference).

Lennart Spitzner’s exference is a much more sophisticated tool that also takes library API functions into account.

In the Scala world, Sergei Winitzki very recently presented the pretty neat curryhoward library that does the same for Scala, using macros. He seems to have some good ideas about ordering solutions by likely desirability.

And in Idris, Joomy Korkut has created hezarfen.

Joey Hess: improving powertop autotuning

2 February, 2018 - 22:33

I'm wondering about improving powertop's auto-tuning. Currently the situation is that, if you want to tune your laptop's power consumption, you can run powertop and turn on all the tunables and try it for a while to see if anything breaks. The breakage might be something subtle.

Then after a while you reboot and your laptop is using too much power again until you remember to run powertop again. This happens a half dozen or so times. You then automate running powertop --auto-tune or individual tuning commands on boot, probably using instructions you find in the Arch wiki.

Everyone has to do this separately, which is a lot of duplicated and rather technical effort for users, while developers are left with a lot of work to manually collect information, like Hans de Goede is doing now for enabling PSR by default.

To improve this, powertop could come with a service file to start it on boot, read a config file, and apply tunings if enabled.
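A minimal sketch of what such a unit might look like (the unit name and file path here are hypothetical, not something powertop currently ships -- which is the point of the post):

```ini
# Hypothetical /etc/systemd/system/powertop-autotune.service
[Unit]
Description=Apply powertop power-saving tunings at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
```

Enabling it once with systemctl enable would then apply the tunings on every boot, instead of each user rediscovering the Arch wiki recipe.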

There could be a simple GUI to configure it, where the user can report when it's causing a problem. In case the problem prevents booting, there would need to be a boot option that disables the autotuning too.

When the user reports a problem, the GUI could optionally walk them through a bisection to find the problematic tuning, which would probably take only 4 or so steps.
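The bisection itself is straightforward. As an illustrative sketch (the function and callback names here are made up, not powertop's API), a binary search over the list of applied tunables:

```python
# Find the single tunable that breaks the system by repeatedly applying
# half of the suspect range and asking the user whether things still work.
# With 16 tunables and one culprit this takes 4 questions.
def find_bad_tuning(tunables, is_ok):
    lo, hi = 0, len(tunables)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_ok(tunables[lo:mid]):
            lo = mid   # first half is fine: the culprit is in the second half
        else:
            hi = mid   # applying the first half broke things: narrow to it
    return tunables[lo]
```

This assumes exactly one problematic tuning; is_ok stands in for "apply this subset, then ask the user whether the problem occurred".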

Information could be uploaded anonymously to a hardware tunings database. Developers could then use that to find and whitelist safe tunings. Powertop could also query it to avoid tunings that are known to cause problems on the laptop.

I don't know if this is a new idea, but if it's not been tried before, it seems worth working on.

Wouter Verhelst: Day four of the pre-FOSDEM Debconf Videoteam sprint

2 February, 2018 - 17:04

Day four was a day of pancakes and stew. Oh, and some video work, too.


Did more documentation review. She finished the SReview documentation and got started on documenting the examples in our ansible repository.


Finished splitting out the ansible configuration from the ansible code repository. The code repository now includes an example configuration that is well documented for getting started, whereas our production configuration lives in a separate repository.


Spent much time on the debconf website, mostly working on a new upstream release of wafer.

He also helped review Kyle's documentation, and spent some time together with Tzafrir debugging our ansible test setup.


Worked on documentation, and did a test run of the ansible repository. Found and fixed issues that cropped up during that.


Spent much time trying to figure out why SReview was not doing what he was expecting it to do. Side note: I hate video codecs. Things are working now, though, and most of the fixes were implemented in a way that makes it reusable for other conferences.

There's one more day coming up today. Hopefully I won't forget to blog about it tonight.

Daniel Pocock: Everything you didn't know about FSFE in a picture

2 February, 2018 - 16:51

As FSFE's community begins exploring our future, I thought it would be helpful to start with a visual guide to the current structure.

All the information I've gathered here is publicly available but people rarely see it in one place, hence the heading. There is no suggestion that anything has been deliberately hidden.

The drawing at the bottom includes Venn diagrams to show the overlapping relationships clearly and visually. For example, in the circle for the General Assembly, all the numbers add up to 29, the total number of people listed on the "People" page of the web site. In the circle for Council, there are 4 people in total and in the circle for Staff, there are 6 people, 2 of them also in Council and 4 of them in the GA but not council.

The financial figures on this diagram are taken from the 2016 financial summary. The summary published by FSFE is very basic, so the exact amount paid in salaries is not clear; the amount in the Staff circle probably covers a lot more than just salaries, and I feel FSFE gets good value for this money.

Some observations about the numbers:

  • The volunteers don't elect any representatives to the GA, although some GA members are also volunteers
  • From 1,700 fellowship members, only 2 representatives are elected, holding 2 of the 29 GA seats, yet the fellows provide over thirty percent of the funding through recurring payments
  • Out of 6 staff, all 6 are members of the GA (6 out of 29) since a decision to accept them at the last GA meeting
  • Only the 29 people in the General Assembly are full (legal) members of the FSFE e.V. association with the right to vote on things like constitutional changes. Those people are all above the dotted line on the page. All the people below the line have been referred to by other names, like fellow, supporter, community, donor and volunteer.

If you ever clicked the "Join the FSFE" button or filled out the "Join the FSFE" form on the FSFE web site and made a payment, did you believe you were becoming a member with an equal vote? If so, one procedure you can follow is to contact the president as described here and ask to be recognized as a full member. I feel it is important for everybody who filled out the "Join" form to clarify their rights and their status before the constitutional change is made.

I have not presented these figures to create conflict between staff and volunteers. Both have an important role to play. Many people who contribute time or money to FSFE are very satisfied with the concept that "somebody else" (the staff) can do difficult and sometimes tedious work for the organization's mission and software freedom in general. As I've been elected as a fellowship representative, I feel a responsibility to ensure the people I represent are adequately informed about the organization and their relationship with it, and I hope these notes and the diagram help to fulfil that responsibility.

Therefore, this diagram is presented to the community not for the purpose of criticizing anybody but for the purpose of helping make sure everybody is on the same page about our current structure before it changes.

If anybody has time to make a better diagram it would be very welcome.

John Goerzen: How are you handling building local Debian/Ubuntu packages?

2 February, 2018 - 16:26

I’m in the middle of some conversations about Debian/Ubuntu repositories, and I’m curious how others are handling this.

How are people maintaining repos for an organization? Are you integrating them with a git/CI (github/gitlab, jenkins/travis, etc) workflow? How do packages propagate into repos? How do you separate prod from testing? Is anyone running buildd locally, or integrating with more common CI tools?

I’m also interested in how people handle local modifications of packages — anything from newer versions of C libraries to newer interpreters. Do you just use the regular Debian toolchain, packaging them up for (potentially) the multiple distros/versions that you have in production? Pin them in apt?

Just curious what’s out there.

Some Googling has so far turned up just one relevant hit: Michael Prokop’s DebConf15 slides, “Continuous Delivery of Debian packages”. Looks very interesting, and discusses jenkins-debian-glue.

Some tangentially-related but interesting items:

Edit 2018-02-02: I should have also mentioned BuildStream

Bits from Debian: Debian welcomes its Outreachy interns

2 February, 2018 - 14:30

We'd like to welcome our three Outreachy interns for this round, lasting from December 2017 to March 2018.

Juliana Oliveira is working on reproducible builds for Debian and free software.

Kira Obrezkova is working on bringing open-source mobile technologies to a new level with Debian (Osmocom).

Renata D'Avila is working on a calendar database of social events and conferences for free software developers.

Congratulations, Juliana, Kira and Renata!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Debian will also participate this summer in the next round of Outreachy, and is currently applying as a mentoring organisation for the Google Summer of Code 2018 programme. Have a look at the projects wiki page and contact the Debian Outreach Team mailing list to join as a mentor or welcome applicants into the Outreachy or GSoC programme.

Join us and help extend Debian!


Creative Commons License. The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.