Planet Debian


Kees Cook: evolution of seccomp

12 November, 2015 - 01:01

I’m excited to see other people thinking about userspace-to-kernel attack surface reduction ideas. Theo de Raadt recently published slides describing Pledge. This uses the same ideas that seccomp implements, but with less granularity: seccomp works at the individual syscall level, and in addition to killing processes it allows for signaling, tracing, and errno spoofing. As de Raadt mentions, Pledge could be implemented with seccomp very easily: libseccomp would just categorize syscalls.

I don’t really understand the presentation’s mention of “Optional Security”, though. Pledge, like seccomp, is an opt-in feature. Nothing in the kernel refuses to run “unpledged” programs. I assume his point was that when it gets ubiquitously built into programs (like stack protector), it’s effectively not optional (which is alluded to later as “comprehensive applicability ~= mandatory mitigation”). Regardless, this sensible (though optional) design gets me back to his slide on seccomp, which seems to have a number of misunderstandings:

  • A Turing complete eBPF program watches your program Strictly speaking, seccomp is implemented using a subset of BPF, not eBPF. And since BPF (and eBPF) programs are guaranteed to halt, seccomp filters are not Turing complete.
  • Who watches the watcher? I don’t even understand this. It’s in the kernel. The kernel watches your program. Just like always. If this is a question of BPF program verification, there is literally a program verifier that checks various properties of the BPF program.
  • seccomp program is stored elsewhere This, with the next statement, is just totally misunderstood. Programs using seccomp define their program in their own code. It’s used the same way as the Pledge examples are shown doing.
  • Easy to get desyncronized either program is updated As above, this just isn’t the case. The only place where this might be true is when using seccomp on programs that were not written natively with seccomp. In that case, yes, desync is possible. But that’s one of the advantages of seccomp’s design: a program launcher (like minijail or systemd) can declare a seccomp filter for a program that hasn’t yet been ported to use one natively.
  • eBPF watcher has no real idea what the program under observation is doing… I don’t understand this statement. I don’t see how Pledge would “have a real idea” either: they’re both doing filtering. If we get AI out of our syscall filters, we’re in serious trouble. :)

OpenBSD has some interesting advantages in the syscall filtering department, especially around sockets. Right now, it’s hard for Linux syscall filtering to understand why a given socket is being used. Something like SOCK_DNS seems like it could be quite handy.

Another nice feature of Pledge is the path whitelist feature. As it’s still under development, I hope they expand this to include more things than just paths. Argument inspection is a weak point for seccomp, but under Linux, most of the arguments are ultimately exposed to the LSM layer. Last year I experimented with creating a “seccomp LSM” for path matching where programs could declare whitelists, similar to standard LSMs.

So, yes, Linux “could match this API on seccomp”. It’d just take some extensions to libseccomp to implement pledge(), as I described at the top. With OpenBSD doing a bunch of analysis work on common programs, it’d be excellent to see this usable on Linux too. So far on Linux, only a few programs (e.g. Chrome, vsftpd) have bothered to do this using seccomp, and it could be argued that this is ultimately due to how fine grained it is.

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Jonathan Dowland: Useful Mac programs

11 November, 2015 - 22:02

A little over a year ago I wrote about how I'd been using a Mac as my main work machine. I hadn't written anything on the subject since. Here are four useful Mac programs that I can recommend to people.

Sizeup, from Irradiated Software. A bit like Windows' Aero Snap, but on steroids. I love this. I regularly move windows between two desktops (external and internal display), resize and centre them, or put them in one quarter of the screen with just a few key presses. The "snapback" feature is also great.

X Lossless Decoder (XLD). A handy transcoder that can use QuickTime encoders and so can write out Apple/QuickTime/iTunes-encoded AAC/MP4 files en masse, translating file metadata.


LimeChat, an IRC client. I theme it to look like Colloquy. The net result is pretty clean. I like using a proportional font for text, so getting IRC out of a terminal is a win for me.

LimeChat automatically fetches and thumbnails URIs to images that people paste in channels, which is either incredibly convenient or a curse, depending on the channel. You can toggle that behaviour but only across the whole client, not on a channel or network basis.

Finally, 1Password is an incredibly slick password manager. I use it in a very basic fashion: no mobile client, no syncing to anything outside of my workstation. You could also use LastPass, which is similar and has a Linux client; I haven't tried it. There is also 1pass, a third-party tool by Ryan Coleman of SDL fame for reading 1Password password stores on Linux.

Ritesh Raj Sarraf: apt-offline 1.7

11 November, 2015 - 19:29

Hello World,

In this part of the world, today is a great day. Today is Diwali, the festival of lights.

On this day, I am very happy to announce the release of apt-offline, version 1.7. This release brings in a large number of fixes and is a recommended update. Thanks to Bernd Dietzel for uncovering the shell injection bug which could be exploited by carefully crafting the signature file. Since apt-offline could be run as 'root', this one was an important bug. Also thanks to him for the fix.

During my tests, I also realized that apt-offline's --install-src-packages implementation had broken over time. --install-src-packages option can be useful to users who would like the offline ability to download a [Debian] source package, along with all its Build Dependencies.

For further details on many of the other fixes, please refer to the git repository at the homepage. Packages for Debian (and derivatives) are already in the queue.

Wishing You and Your Loved Ones a Very Happy Diwali.


Mike Gabriel: My FLOSS activities in October 2015

11 November, 2015 - 16:07

October 2015 was mainly dedicated to contracted/paid work. I could only address a few issues during the last month:

  • Fix FTBFS of Arctica Greeter on non-Ubuntu systems
  • Co-working on renewed Xinerama support in nx-libs
  • Development of GOsa² Password Management Add-on
  • Improving Debian Edu main server upgrade documentation (from Debian Edu squeeze to Debian Edu jessie)
  • Fixing my personal Horde Groupware installation for access via mobile devices
  • Learning Dovecot et al.
Arctica Project

While having a week off from work, I managed to get Arctica Greeter to build on non-Ubuntu systems. The issue was very simple: the build crashed during the test-suite run because the XDG_DATA_DIRS variable was not set in my clean build environment. Furthermore, I added several more session-type icons to Arctica Greeter (XFCE, LXDE, MATE, Openbox, TWM, Default X11 Session, etc.) and also rebased the Arctica Greeter code base against all recent commits found in Unity Greeter for Ubuntu 15.10 / the upcoming 16.04.

Together with Ulrich Sibiller, I continued our work on the new Xinerama implementation for the remote X11 server nxagent (used as x2goagent in X2Go). However, this is unfortunately still work in progress, because various theoretical monitor layout issues became evident that require being handled in the new code before it can get merged into nx-libs's current 3.6.x branch.

Also, I managed to do a little work on the still too rudimentary project homepage.


Jaldhar Vyas: Happy Diwali and Sal Mubarak 2072

11 November, 2015 - 04:42

Wishing you all a happy Diwali and Gujarati New Year (V.S. 2072 called Plavanga.)

Instead of being something sane like a flat disk supported by four elephants standing on a tortoise, the Earth is an oblate spheroid in heliocentric orbit. The Moon too offends by maintaining an unreasonably non-epicyclic orbit. Combined, this means that the Americas (plus Iceland, if there are any Hindus there) observe Diwali today, whereas in India and the rest of the world it is tomorrow. Technically, in these longitudes the Ashvayuja amavasya tithi pervades both Tuesday and Wednesday, but as Lakshmi Puja takes place at the pradosha kala, it should be celebrated today, though no doubt most of the non-astronomically minded will just follow India anyway. The New Year is on Thursday all over the world.

Regardless of when you celebrate, may your year be full of happiness, good fortune and prosperity.

John Goerzen: Wow. I did that!

11 November, 2015 - 02:19

It’s now official: I’m a pilot. This has been one of the most challenging, and also most rewarding, journeys I’ve been on. It had its moments of struggle, moments of joy, moments of poetry. I wrote about the poetry of flying at night recently. Here is the story of my first landing on a grass runway, a few months ago.


Where the air is so pure, the zephyrs so free,
The breezes so balmy and light,
That I would not exchange my home on the range
For all of the cities so bright.

– John A. Lomax

We are used to seeing planes in these massive palaces of infrastructure we call airports. We have huge parking garages, giant terminals, security lines hundreds of people deep, baggage carts, jetways, video screens, restaurants, and miles and miles of concrete.

But most of the world’s airports are not like that. A pilot of a small plane gets to see the big airports, sure, but we also get to use the smaller airports — hidden in plain sight to most.

Have you ever taken off from a strip of grass? As I told my flight instructor when I tried it for the first time, “I know this will work, but somehow I will still be amazed if it actually does.”

I took off from a strip of grass not long ago. The airport there had one paved runway, and the rest were grass. Short runways, narrow runways, grass runways. No lights. No paint. No signs. No trucks, no jetways, nothing massive. In fact, no people. Just a mowed path and a couple of yellow or white markers.

I taxied down the grass runway, being careful to never let the plane’s wheels stop moving lest the nose gear get stuck in a pothole. I felt all the bumps in the ground as we moved.

End of runway. Turn the plane around. A little bit of flap for more lift, full throttle, mind the centerline — imaginary centerline, this time. It starts picking up speed, slower than usual, bump bump bump. Those buildings at the end of the runway are staring me down. More speed, and suddenly the runway feels smooth; it has enough lift to keep from falling into every bump. Then we lift off just a touch; I carefully keep the plane down until we pick up enough speed to ascend farther, then up we go. I keep a watchful eye on those buildings straight ahead and that water tower just slightly off to the one side. We climb over a lake as I watch that water tower pass safely below and to the side of the plane. It had worked, and I had a smile of amazement.

With a half mile of grass, you really can go anywhere.

Many times I had driven within half a mile of that runway, but never seen it. Never wondered where people go after using it. Never realized that, although it’s a 45-minute drive from my house, it’s really pretty close. Never understood that “where people go” after taking off from that runway is “everywhere”.

Steinar H. Gunderson: HTTPS-enabling gitweb

11 November, 2015 - 01:25

If you have an HTTPS-enabling proxy in front of your gitweb, so that gitweb tries to emit <base href="http://..."> (because it doesn't know that the user is actually using HTTPS), here's the Apache configuration variable to tell it otherwise:
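(The snippet itself didn't survive the feed aggregation. Assuming gitweb running as a CGI under Apache, the usual approach is to export the HTTPS environment variable, which the CGI layer consults when constructing URLs; a sketch:)

```apache
# In the backend vhost that serves gitweb over plain HTTP
# behind the TLS-terminating proxy:
SetEnv HTTPS on
```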


So now the site works with HTTPS after Let's Encrypt, without the CSS being broken. Woo. (Well, the clone URL still says http. So, halfway there, at least.)

Steinar H. Gunderson: Launch

11 November, 2015 - 01:24

Daniel Pocock: Aggregating tasks from multiple issue trackers

10 November, 2015 - 19:37

After my experiments with the iCalendar format at the beginning of 2015, including Bugzilla feeds from Fedora and reSIProcate, aggregating tasks from the Debian Maintainer Dashboard, Github issue lists and even unresolved Nagios alerts, I decided this was fertile ground for a GSoC student.
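For illustration (the field values below are invented, not from any of the feeds mentioned), a bug or alert in one of these iCalendar feeds typically surfaces as a VTODO component roughly like this:

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//issue-feed//EN
BEGIN:VTODO
UID:bug-12345@bugzilla.example.org
DTSTAMP:20151110T120000Z
SUMMARY:[example] Bug 12345: crash on startup
URL:https://bugzilla.example.org/show_bug.cgi?id=12345
STATUS:NEEDS-ACTION
END:VTODO
END:VCALENDAR
```

Any client or aggregator that understands RFC 5545 tasks can then merge such entries from many trackers into one list.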

In my initial experiments, I tried using the Mozilla Lightning plugin (Iceowl-extension on Debian/Ubuntu) and GNOME Evolution's task manager. Setting up the different feeds in these clients is not hard, but they have some rough edges. For example, Mozilla Lightning doesn't tell you if a feed is not responding; this can be troublesome if the Nagios server goes down: no alerts are visible, so you assume all is fine.

To take things further, Iain Learmonth and I proposed a GSoC project for a student to experiment with the concept. Harsh Daftary from Mumbai, India was selected to work on it. Over the summer, he developed a web application to pull together issue, task and calendar feeds from different sources and render them as a single web page.

Harsh presented his work at DebConf15 in Heidelberg, Germany; the video is available here. The source code is in a Github repository. The code is currently running as a service, although it is not specific to Debian and is probably helpful for any developer who works with more than one issue tracker.


Norbert Preining: Music check: Google versus Apple – Is that all? You can do better, Google!

10 November, 2015 - 07:34

Ok, I have been an iPhone user since I moved to Japan and got my first smartphone ever. First a 3s, then a 4s, then a 5s that I dropped into the toilet, so I switched the SIM back into the 4s to have a working phone. Furthermore, I am a heavy music listener and used iTunes Radio for 3 months. Since I am planning to switch to an Android phone (being fed up with Apple’s super-closed environment), I tried out Google Music Plus for about 3 months, too. Here is my verdict – Google Music is a big pain, far from iTunes in comfort and user friendliness.

My move to Android is not in much danger, though.

General description of the service

In principle, iTunes Radio and Google Music Subscription do the same things:

  • allow you to have your own music in the cloud
  • stream any music from the respective market place to your device
  • provide radio stations, pre-curated or based on artist/genre/etc

There are slight differences that often create confusion, especially with Apple’s services: if you are only signed in, you can listen to music stations and can skip some songs (with limits), but you cannot listen to arbitrary songs from the whole Apple Music library. Then there is another service from Apple, called iTunes Match, which only lets you upload your music library to the cloud; other than that, it again only offers the normal radio stations.

Google Music is much simpler, there are only two options: By default it is free to have your music (up to 25000 songs) in the cloud, but if you want to listen to radio stations or any other music in the Google music store, you need a subscription.

Prices and features overview:

                  Apple                                                 Google
                  No subscription   iTunes Match    iTunes Radio       No subscription   With subscription
Price             0                 25$/year        10$/month          0                 10$/month (early access 8$)
Cloud space       0                 25000           (?)                25000             25000
Extra service     Radio stations    Radio stations  Radio/all music    Nothing           Radio/all music

Music on the go – the application

Let us first consider the applications provided to listen to music on your smart phone:

Apple Music can of course only be used on Apple devices, and uses the built-in Music application. Start up time is about a few seconds after a cold boot (all on my 4s), and music streaming starts with hardly any delay. Responsiveness is good, and the user interface is clear and easy.

Google’s application is available on iOS and Android. I have tested the latest version on iOS, but it is a pain in the butt:

  • Starting the application, even after cold boot, is successful at a rate of 1/10. Most of the time the application crashes right away. This might be a problem due to my low end device (iPhone 4s with 64Gb), but not due to space problems (half empty) nor internet connectivity (wlan).
  • Responsiveness is abysmal
  • Access to additional content (radio, songs) and your own library is well done, similar to iMusic.
Managing your library

Here iTunes is the way to go. It is a bit of a pain when using Linux, but there are other reasons I keep a Windows installation in parallel, so I don’t mind booting into Windows now and then. iTunes gives you very powerful tools to change all kinds of data.

Google gives you a few options: use your iTunes library – in this case all your playlists and ratings (but see later) are also uploaded. I am not sure what happens if I retag a song in iTunes or change anything else in there. I guess the song will not be re-uploaded, but who knows for sure. Furthermore, you can edit your library via the web interface, but this is rather poor.

Searching your library

Over time your library grows to hundreds if not thousands of artists. So you want to search them. The natural way is to scroll to the first letter of the artist you are searching for, and then look it up. Well, that works perfectly in both Apple’s and Google’s applications – unless you have artists written in some non-Latin script like Japanese or Korean.

iTunes allows you to set a field called something like “Artist name for sorting”, which allows me to put for example “友川カズキ” or “김두수” into the artist field, and into the sort field “Tomokawa Kazuki” and “Kim Doo Soo”. This way I find the artists in the correct place.

Google on the other hand uses simple Unicode code-point order for sorting – how could you do that? This is simply plain wrong, and everyone should know that by now. Japanese users will never be able to find anything in their list. And – in contrast to Google Contacts – Google Music does not support phonetic name fields (similar to the artist sort field) for artists.

What I had to do was rename all artists to put the phonetic name first, followed by the proper name, as in “Tomokawa Kazuki (友川カズキ)”. Something I strongly detest!

Radio stations

I might have a slightly peculiar music taste, but the radio stations mixed by Google are simply a pain. The reason is easy to explain: I live in Japan, so Google mixes about 80% J-Pop into the radio, all those happy yodeling girlies I really dislike (see my Anti J-Pop campaign for alternatives – yes, they do exist in Japan, too!). iTunes Radio seems more relaxed here.

I appreciate Google’s attempt to cater to local (dis)taste, but besides voting down each and every song I hear, I don’t see any other option. And honestly, I cannot go through all this voting down without dying from the pain inflicted by J-Pop.

Other than this, the two radio-station offerings are probably more or less the same – but as I said, due to the local flavor it is hard to compare.


Ratings

Ohhh, what a dire point. So there you have your well-curated iTunes library with 5-star ratings. I used the ratings such that songs I like get 1 star, songs I like even more get 2 stars, and my absolute favorites get 5 stars. I didn’t do any negative ratings.

iTunes/iOS Music app allows you to easily adjust rating, and they are synced between devices. All as you would expect.

Now for Google – they did have a 5-star rating system at some time, but:

Thank you for your feedback. We’ve decided to remove the 5-star Rating lab. This decision wasn’t made lightly, but Thumbs Up/Down is integral for the future vision for Play Music and will be a central design point for our future releases. Please note that we’ll continue to store your ratings that you’ve set via the star lab or via iTunes, and we’ll translate them to thumbs up/down (1-2 stars = thumbs down, 4-5 stars = thumbs up).

Amanda, Google Play Community Manager

Here we go – with the move to a new design they threw away the 5-star system, for no reason other than being “integral for the future vision” – rubbish sales speak. Besides thumbs up/down (good, bad, don’t care) being a very, very poor rating system, the translation from my rating scheme to thumbs is just plain wrong.

Why on earth is the current movement to reduce functionality and rob users of control? Gnome 3 is the prime example of how we `stupiditize’ users by taking away any freedom from them in the name of simple design. Google now does the same. I am so sick of being patronized this way; Gnome 3 was completely banished from my computer and replaced by Cinnamon, which uses the same (good) underlying technology but takes users seriously!

Overall verdict

As much as I would like to see Google Music become a valid alternative to iTunes, for now it gives the impression of a quickly hacked-together, rotten piece of code that tries to grab a share of the market without providing an equivalent service. Google is using its market presence and convenience to convert people, not features and quality. I can only hope that this changes in the future.

That still leaves me with the question – move to Android or remain with iOS …

Ben Armstrong: The passing of Debian Live

10 November, 2015 - 06:03

Debian Live has passed on. And it has done so under unhappy circumstances. (You can search the list archives for more if you are confused.) I have reposted here my response to this one thread because it’s all I really want to say, after all of the years of working with the team.

On 09/11/15 12:47 PM, Daniel Baumann wrote:
> So long, and thanks for all the fish[7].
> Daniel
> [7]

Enough bitter words have been said. I don’t want to add any more. So:

I’m proud.

Indeed, that long list of downstreams does speak to the impact you’ve had in inspiring and equipping people to make their own live images. I’m proud to have been a small part of this project.

I’m thankful.

I’m thankful that I was able to, through this project, contribute to something for a while that had a positive impact on many people, and made Debian more awesome.

I remember the good times.

I remember fondly the good times we had in the project’s heyday. I certainly found your enthusiasm and vision for the project, Daniel, personally inspiring. It motivated me to contribute. Debconf10 was a highlight among those experiences, but also I had many good times and made many friendships online, too.

I’m sad.

I’m sad, because although I made some attempts to liaise between Debian Live and the CD and Installer teams, I don’t feel I did an effective job there, and that contributed to the situation we now find ourselves in. If I did you or the project injury in trying to fulfill that role, please forgive me.

I’m hopeful.

I’m hopeful that whichever way we all go from here, that the bitterness will not be forever. That we’ll heal. That we’ll have learned. That we’ll move on to accomplish new things, bigger and better things.

Thank you, Daniel. Thank you, Debian Live team.


Jonathan McDowell: The Joy of Recruiters

10 November, 2015 - 00:45

Last week Simon retweeted a link to Don’t Feed the Beast – the Great Tech Recruiter Infestation. Which reminded me I’d been meaning to comment on my own experiences from earlier in the year.

I don’t entertain the same level of bile as displayed in the post, but I do have a significant level of disappointment in the recruitment industry. I had conversations with 3 different agencies, all of whom were geographically relevant. One contacted me, the other 2 (one I’d dealt with before, one that was recommended to me) I contacted myself. All managed to fail to communicate with any level of acceptability.

The agency that contacted me eventually went quiet, after having asked if they could put my CV forward for a role and pushing very hard about when I could interview. The contact in the agency I’d dealt with before replied to say I was being passed to someone else who would get in contact. Who of course didn’t. And the final agency, who had been recommended, passed me between 3 different people, said they were confident they could find me something, and then went dark except for signing me up to their generic jobs list which failed to have anything of relevance on it.

As it happens my availability and skill set were not conducive to results at that point in time, so my beef isn’t with the inability to find a role. Instead it’s with the poor levels of communication presented by an industry which seems, to me, to have communication as part of the core value it should be offering. If anyone had said at the start “Look, it’s going to be tricky, we’ll see what we can do” or “Look, that’s not what we really deal in, we can’t help”, that would have been fine. I’m fine with explanations. I get really miffed when I’m just left hanging.

I’d love to be able to say I’ll never deal with a recruiter again, but the fact of the matter is they do serve a purpose. There’s only so far a company can get with word of mouth recruitment; eventually that network of personal connections from existing employees who are considering moving dries up. Advertising might get you some more people, but it can also result in people who are hugely inappropriate for the role. From the company point of view recruiters nominally fulfil 2 roles. Firstly they connect the prospective employer with a potentially wider base of candidates. Secondly they should be able to do some sort of, at least basic, filtering of whether a candidate is appropriate for a role. From the candidate point of view the recruiter hopefully has a better knowledge of what roles are out there.

However the incentives to please each side are hugely unbalanced. The candidate isn’t paying the recruiter. “If you’re not paying for it, you’re the product” may be bandied around too often, but I believe this is one of the instances where it’s very applicable. A recruiter is paid by their ability to deliver viable candidates to prospective employers. The delivery of these candidates is the service. Whether or not the candidate is happy with the job is irrelevant beyond them staying long enough that the placement fee can be claimed. The lengthy commercial relationship is ideally between the company and the recruitment agency, not the candidate and the agency. A recruiter wants to be able to say “Look at the fine candidate I provided last time, you should always come to me first in future”. There’s a certain element of wanting the candidate to come back if/when they are looking for a new role, but it’s not a primary concern.

It is notable that the recommendations I’d received were from people who had been on the hiring side of things. The recruiter has a vested interest in keeping the employer happy, in the hope of a sustained relationship. There is little motivation for keeping the candidate happy, as long as you don’t manage to scare them off. And, in fact, if you scare some off, who cares? A recruiter doesn’t get paid for providing the best possible candidate. Or indeed a candidate who will fully engage with the role. All they’re required to provide is a hire-able candidate who takes the role.

I’m not sure what the resolution is to this. Word of mouth only scales so far for both employer and candidate. Many of the big job websites seem to be full of recruiters rather than real employers. And I’m sure there are some decent recruiters out there doing a good job, keeping both sides happy and earning their significant cut. I’m sad to say I can’t foresee any big change any time soon.

[Note I’m not currently looking for employment.]

[No recruitment agencies were harmed in the writing of this post. I have deliberately tried to avoid outing anyone in particular.]

Mike Gabriel: Making appindicators available for non-Ubuntu platforms

9 November, 2015 - 20:24

As many (Debianic) people possibly know, the appindicator support (libindicator, libappindicator, etc.) in Debian is very weak and outdated. Various native indicators (the indicator-* packages, where * is "datetime", "sound", "session", etc.) are missing or unmaintained in Debian, and neither is the indicator-application service available (a service that allows other applications, e.g. the nm-applet tool, to dock to the indicator area of the desktop's panel bar). Furthermore, no recent appindicator-related uploads have been seen in Debian (the last uploads date from 2013).

I recently e-mailed with Andrew Starr-Bochicchio, one of the Ayatana Packaging team members, about the current Debian status of indicator packages specifically and Ayatana packages [1] in general. The below information summarizes (I hope) what I got from the mail exchange:


Ben Hutchings: Debian LTS work, October 2015

9 November, 2015 - 18:41

For my 11th month working on Debian LTS, I carried over 5.5 hours from September and was assigned another 13.5 hours of work by Freexian's Debian LTS initiative. I worked 14 of a possible 19 hours.

As I mentioned in the report for September, I uploaded binutils and issued DLA-324-1 early in October.

I fixed a few security issues in the kernel, uploaded and issued DLA-325-1.

I had some email discussions about long-term support of the Linux kernel with Willy Tarreau (Linux 2.6.32 maintainer) and Greg Kroah-Hartman (overall stable maintainer). Greg normally selects one version per year to maintain as a 'longterm' branch for 2 years, after which he may hand it over to another maintainer. A few upstream versions have received long-term support entirely from another developer. We were in agreement that it's desirable to have fewer of these branches with more developers and distributions contributing to each, but didn't come to a conclusion about how to coordinate this. The topic came up again at the Kernel Summit, and Greg then agreed to select the first kernel version of each calendar year, starting with Linux 4.4 (expected in January). This doesn't fit well with Debian's current release schedule, but I mean to discuss with the release team whether the freeze date can be set to allow inclusion of 2017's LTS kernel.

I spent another week in the 'front desk' role, where I triaged new security issues for squeeze. There was a mixture of serious and trivial, old and new (not affecting squeeze) issues.

The many ntp issues announced in October were fixed in unstable by a new upstream release. I spent a long time digging out the specific commits that fixed them and comparing with the older version in squeeze. Several of the issues had been introduced in ntp 4.2.7 or 4.2.8 and therefore didn't affect squeeze (or the newer stable releases). Of the fixes that were needed, most applied with minimal changes. Having prepared an update, I asked the ntp maintainer, Kurt Roeckx, to review my work. (The package has a limited test suite and none of the fixes added new tests.) Following this review he added a few more patches, uploaded and issued DLA-335-1.

MySQL 5.1, as shipped in squeeze, no longer receives security support from upstream, and the security fixes they do issue are mixed with other changes that make it impractical to backport them. The LTS team is planning to backport the mysql-5.5 package to squeeze while avoiding conflicts with the binaries built from mysql-5.1. Santiago Ruano Rincón has prepared a backport and I spent some time reviewing this, but haven't yet sent my review comments.

Vasudev Kamath: Forwarding host port to service in container

9 November, 2015 - 18:36

I have an lxc container running the distcc daemon, and I would like to forward the distcc traffic arriving at my host system to the container.


The following simple script, which uses iptables, did the job.

#!/bin/sh

set -e

usage() {
    cat <<EOF
$(basename $0) [options] <in-interface> <out-interface> <port> <destination>

    --clear           Clear the previous rules before inserting new ones
    --protocol        Protocol for the rules to use (tcp or udp)

    in-interface      Interface on which incoming traffic is expected
    out-interface     Interface to which incoming traffic is to be forwarded

    port              Port to be forwarded. Can be an integer or a string
                      from /etc/services.
    destination       IP and port of the destination system to which
                      traffic needs to be forwarded. This should be in
                      the form <destination_ip:port>

(C) 2015 Vasudev Kamath - This program comes with ABSOLUTELY NO
WARRANTY. This is free software, and you are welcome to redistribute
it under the GNU GPL Version 3 (or later) License
EOF
}

setup_portforwarding () {
    local protocol="$1"
    iptables -t nat -A PREROUTING -i $IN_INTERFACE -p "$protocol" --dport $PORT \
           -j DNAT --to $DESTINATION
    iptables -A FORWARD -p "$protocol" -d ${DESTINATION%%\:*} --dport $PORT -j ACCEPT

    # Returning packets should carry the gateway IP
    iptables -t nat -A POSTROUTING -s ${DESTINATION%%\:*} -o \
           $IN_INTERFACE -j SNAT --to ${IN_IP%%\/*}
}

if [ $(id -u) -ne 0 ]; then
    echo "You need to be root to run this script"
    exit 1
fi

while true; do
    case "$1" in
        --clear)
            CLEAR_RULES=1
            shift 1
            ;;
        --protocol|--protocol=*)
            if [ "$1" = "--protocol" -a -n "$2" ]; then
                PROTOCOL="$2"
                shift 2
            elif [ "${1#--protocol=}" != "$1" ]; then
                PROTOCOL="${1#--protocol=}"
                shift 1
            else
                echo "You need to specify a protocol (tcp|udp)"
                exit 2
            fi
            ;;
        *)
            break
            ;;
    esac
done

if [ $# -ne 4 ]; then
    usage $0
    exit 2
fi

IN_INTERFACE="$1"
OUT_INTERFACE="$2"
PORT="$3"
DESTINATION="$4"

# Get the incoming interface IP. This is used for SNAT.
IN_IP=$(ip addr show $IN_INTERFACE|\
             perl -nE '/inet\s(.*)\sbrd/ and print $1')

if [ -n "$CLEAR_RULES" ]; then
    iptables -t nat -X
    iptables -t nat -F
    iptables -F
fi

if [ -n "$PROTOCOL" ]; then
    setup_portforwarding $PROTOCOL
else
    setup_portforwarding tcp
    setup_portforwarding udp
fi
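The script leans on POSIX parameter expansion to split the `<destination_ip:port>` argument and the CIDR-suffixed address returned by `ip addr`. A quick standalone illustration, with made-up addresses:

```shell
#!/bin/sh
# Standalone illustration of the parameter expansions used above.
# Both addresses are invented for the example.
DESTINATION="192.168.1.10:3632"   # the <destination_ip:port> argument
IN_IP="192.168.1.1/24"            # what the perl one-liner extracts

dest_ip=${DESTINATION%%:*}        # drop the first ':' and everything after it
gw_ip=${IN_IP%%/*}                # drop the '/prefix' suffix

printf '%s\n' "$dest_ip"          # 192.168.1.10
printf '%s\n' "$gw_ip"            # 192.168.1.1
```

The `%%pattern` form removes the longest matching suffix, which is why `%%:*` and `%%/*` cleanly strip the port and the netmask.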

Coming to systemd-nspawn, I see there is a --port option which takes an argument of the form proto:hostport:destport, where proto can be either tcp or udp, and hostport and destport are numbers from 1-65535. This option assumes private networking is enabled in the container. I've not tried this option yet, but it would simplify things quite a lot; it's like the -p switch used by docker.

Lunar: Reproducible builds: week 28 in Stretch cycle

9 November, 2015 - 17:20

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Colin Watson uploaded groff/1.22.3-2 which implements support for SOURCE_DATE_EPOCH.
  • Colin Watson uploaded halibut/1.1-2 which implements support for SOURCE_DATE_EPOCH.

Chris Lamb filed a bug against python-setuptools with a patch to make the generated requires.txt files reproducible. The patch has been forwarded upstream.

Chris also worked out why the shebang in some Python scripts kept being non-deterministic: setuptools, as called by dh-python, could skip re-installing the scripts if the build had been too fast (under one second). #804339 offers a patch fixing the issue by passing --force to install.

#804141, reported against gettext, asks for support of SOURCE_DATE_EPOCH in gettextize. Santiago Vila pointed out that this doesn't feel appropriate, as gettextize is supposed to be an interactive tool. The problem reported seems to lie in avahi's build system instead.

Packages fixed

The following packages became reproducible due to changes in their build dependencies: celestia, dsdo, fonts-taml-tscu, fte, hkgerman, ifrench-gut, ispell-czech, maven-assembly-plugin, maven-project-info-reports-plugin, python-avro, ruby-compass, signond, thepeg, wagon2, xjdic.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

Chris Lamb closed a wrongly reopened bug against haskell-devscripts that was actually a problem in haddock.

FreeBSD tests are now run for three branches: master, stable/10, release/10.2.0. (h01ger)

diffoscope development

Support has been added for Free Pascal unit files (.ppu). (Paul Gevers)

The homepage is now available using HTTPS, thanks to Let's Encrypt!

Work has been done to be able to publish diffoscope on the Python Package Index (also known as PyPI): the tlsh module is now optional, compatibility with python-magic has been added, and the fallback code to handle RPM has been fixed.

Documentation update

Reiner Herrmann, Paul Gevers, Niko Tyni, opi, and Dhole offered various fixes and wording improvements to the documentation. A mailing list is now available to receive change notifications.

NixOS, Guix, and Baserock are featured as projects working on reproducible builds.

Package reviews

70 reviews have been removed, 74 added and 17 updated this week.

Chris Lamb opened 22 new “fail to build from source” bugs.

New issues this week: randomness_in_ocaml_provides, randomness_in_qdoc_page_id, randomness_in_python_setuptools_requires_txt, gettext_creates_ChangeLog_files_and_entries_with_current_date.


h01ger and Chris Lamb presented “Beyond reproducible builds” at the MiniDebConf in Cambridge on November 8th. They gave an overview of where we stand and the changes in user tools, infrastructure, and developer practices that we might want to see happening. Feedback on these thoughts is welcome. Slides are already available, and the video should be online soon.

At the same event, a meeting took place with the release team to discuss the best strategy regarding releases and reproducibility. Minutes have been posted on the Debian reproducible-builds mailing-list.

Daniel Pocock: RTC: announcing XMPP, SIP presence and more

9 November, 2015 - 14:57

Announced 7 November 2015 on the debian-devel-announce mailing list.

The Debian Project now has an XMPP service available to all Debian Developers. Your email identity can be used as your XMPP address.

The SIP service has also been upgraded and now supports presence. SIP and XMPP presence, rosters and messaging are not currently integrated.

The Lumicall app has been improved to enable rapid setup for SIP users.

This announcement concludes the maintenance window on the RTC services. All services are now running on jessie (using packages from jessie-backports).

XMPP and SIP enable a whole new world of real-time multimedia communications possibilities: video/webcam, VoIP, chat messaging, desktop sharing and distributed, federated communication are the most common use cases.

Details about how to get started and get support are explained in the User Guide in the Debian wiki. As it is a wiki, you are completely welcome to help it evolve.

Several of the people involved in the RTC team were also at the Cambridge mini-DebConf (7-8 November).

The password for all these real time communication services can be set via the LDAP control panel. Please note that this password needs to be different to any of your other existing passwords. Please use a strong password and please keep it secure.

Some of the infrastructure, like the TURN server, is shared by clients of both SIP and XMPP. Please configure your XMPP and SIP clients to use the TURN server for audio or video streaming to work most reliably through NAT.

A key feature of both our XMPP and SIP services is that they support federated inter-connectivity with other domains. Please try it. The FedRTC service for Fedora developers is one example of another SIP service that supports federation. For details of how it works and how we establish trust between domains, please see the RTC Quick Start Guide. Please reach out to other communities you are involved with and help them consider enabling SIP and XMPP federation of their own communities/domains: as Metcalfe's law suggests, each extra person or community who embraces open standards like SIP and XMPP has far more than just an incremental impact on the value of these standards and makes them more pervasive.

If you are keen to support and collaborate on the wider use of Free RTC technology, please consider joining the Free RTC mailing list sponsored by FSF Europe. There will also be a dedicated debian-rtc list for discussion of these technologies within Debian and derivatives.

This service has been made possible by the efforts of the DSA team in the original SIP+WebRTC project and the more recent jessie upgrades and XMPP project. Real-time communications systems have specific expectations for network latency, connectivity, authentication schemes and various other things. Therefore, it is a great endorsement of the caliber of the team and the quality of the systems they have in place that they have been able to host this largely within their existing framework for Debian services. Feedback from the DSA team has also been helpful in improving the upstream software and packaging to make them convenient for system administrators everywhere.

Special thanks to Peter Palfrader and Luca Filipozzi from the DSA team, Matthew Wild from the Prosody XMPP server project, Scott Godin from the reSIProcate project, Juliana Louback for her contributions to JSCommunicator during GSoC 2014, Iain Learmonth for helping get the RTC team up and running, Enrico Tassi, Sergei Golovan and Victor Seva for the Prosody and prosody-modules packaging and also the Debian backports team, especially Alexander Wirt, helping us ensure that rapidly evolving packages like those used in RTC are available on a stable Debian system.

Vasudev Kamath: Taming systemd-nspawn for running containers

9 November, 2015 - 14:22

I've been trying to run containers using systemd-nspawn for quite some time, but I was always bumping into one dead end or another. This is not systemd-nspawn's fault; rather, it was the lack of a good tutorial-like article available online (and my impatience stopping me from reading the manual pages properly). Compared to this, LXC has quite a lot of good tutorials and howtos available online.

This article is my effort to put all the required information in one place as a set of notes.

Creating a Debian Base Install

The first step is to have a minimal Debian system somewhere on your hard disk. This can easily be done using debootstrap. I wrote a custom script to avoid reading the manual every time I want to run debootstrap. Parts of this script (mostly the package list and the root password generation) are borrowed from the lxc-debian template provided by the lxc package.


#!/bin/sh

set -e
set -x

usage () {
    echo "${0##*/} [options] <suite> <target> [<mirror>]"
    echo "Bootstrap rootfs for Debian"
    cat <<EOF
    --arch         set the architecture to install
    --root-passwd  set the root password for bootstrapped rootfs
EOF
}

# copied from the lxc-debian template; the full package list is not
# reproduced here, but it must include dbus (see the note below)
packages=dbus

if [ $(id -u) -ne 0 ]; then
    echo "You must be root to execute this command"
    exit 2
fi

if [ $# -lt 2 ]; then
   usage $0
   exit 1
fi

while true; do
    case "$1" in
        --root-passwd|--root-passwd=*)
            if [ "$1" = "--root-passwd" -a -n "$2" ]; then
                ROOT_PASSWD="$2"
                shift 2
            elif [ "$1" != "${1#--root-passwd=}" ]; then
                ROOT_PASSWD="${1#--root-passwd=}"
                shift 1
            else
                # copied from lxc-debian template
                ROOT_PASSWD="$(dd if=/dev/urandom bs=6 count=1 2>/dev/null|base64)"
                shift 1
            fi
            ;;
        --arch|--arch=*)
            if [ "$1" = "--arch" -a -n "$2" ]; then
                ARCHITECTURE="$2"
                shift 2
            elif [ "$1" != "${1#--arch=}" ]; then
                ARCHITECTURE="${1#--arch=}"
                shift 1
            else
                ARCHITECTURE="$(dpkg-architecture -q DEB_HOST_ARCH)"
                shift 1
            fi
            ;;
        *)
            break
            ;;
    esac
done

ARCHITECTURE=${ARCHITECTURE:-$(dpkg-architecture -q DEB_HOST_ARCH)}

if [ -z "$1" ] || [ -z "$2" ]; then
    echo "You must specify suite and target"
    exit 1
fi

release="$1"
target="$2"

if [ -n "$3" ]; then
    MIRROR="$3"
fi

echo "Downloading Debian $release ..."
debootstrap --verbose --variant=minbase --arch=$ARCHITECTURE \
             --include=$packages \
             "$release" "$target" ${MIRROR:+"$MIRROR"}

if [ -n "$ROOT_PASSWD" ]; then
    echo "root:$ROOT_PASSWD" | chroot "$target" chpasswd
    echo "Root password is '$ROOT_PASSWD', please change!"
fi

It just gets the job done for me; if you don't like it, feel free to modify it or use debootstrap directly.

!NB Please install the dbus package in the minimal base install, otherwise you will not be able to control the container using machinectl.

Manually Running Container and then persisting it

Next we need to run the container manually. This is done using the following command:

systemd-nspawn -bD /path/to/container --network-veth \
     --network-bridge=natbr0 --machine=Machinename

The --machine option is not mandatory; if it is not specified, systemd-nspawn will take the directory name as the machine name. But if you have characters like - in the directory name, they get translated to the hex code x2d, and controlling the container by name becomes difficult.
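For illustration, this escaping can be mimicked with sed — a sketch only; systemd-escape(1) is the real tool that implements the full escaping rules:

```shell
#!/bin/sh
# Sketch: mimic how systemd escapes '-' in machine names derived from
# directory names. systemd-escape(1) does the real, complete escaping.
name="my-container"
escaped=$(printf '%s' "$name" | sed 's/-/\\x2d/g')
printf '%s\n' "$escaped"    # my\x2dcontainer
```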

--network-veth tells systemd-nspawn to enable virtual Ethernet based networking, and --network-bridge tells it which bridge interface on the host system to use. Together these options give the container private networking. If they are not specified, the container can use the host system's interfaces, thereby removing the container's network isolation.

Once you run this command, the container comes up. You can now use machinectl to control the container. The container can be persisted using the following command:

machinectl enable container-name

This will create a symbolic link from /lib/systemd/system/systemd-nspawn@.service into /etc/systemd/system/. This allows you to start or stop the container using machinectl or systemctl. The only catch here is that your base install should be under /var/lib/machines/. What I do in my case is create a symbolic link from my base container to /var/lib/machines/container-name.

!NB Note that the symbolic link name under /var/lib/machines should be the same as the container name you gave with the --machine switch, or the directory name if you didn't specify --machine.

Persisting Container Networking

We persisted the container in the above step, but this doesn't persist the networking options we provided on the command line. systemd-nspawn@.service uses the following command to invoke the container:

ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth --settings=override --machine=%I

To persist the bridge networking configuration we set up on the command line, we need the help of systemd-networkd. So first we need to enable systemd-networkd.service on both the container and the host system:

systemctl enable systemd-networkd.service

Inside the container, interfaces will be named hostN, where N increments with the number of interfaces. In our example we have a single interface, so it will be named host0. By default network interfaces are down inside the container, hence systemd-networkd is needed to bring them up.

We put the following in a file under /etc/systemd/network/ inside the container.


Description=Container wired interface host0
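A complete .network file along these lines — the filename, the [Match] section and the DHCP setting are my assumptions — might look like:

```ini
# /etc/systemd/network/host0.network inside the container
# (filename and DHCP setting are assumptions)
[Match]
Name=host0

[Network]
Description=Container wired interface host0
DHCP=yes
```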

And on the host system we just configure the bridge interface using systemd-networkd. I put the following in natbr0.netdev in /etc/systemd/network/:

Description=Bridge natbr0
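A complete .netdev for the bridge — the [NetDev] section is my reconstruction — might look like:

```ini
# /etc/systemd/network/natbr0.netdev on the host (reconstruction)
[NetDev]
Description=Bridge natbr0
Name=natbr0
Kind=bridge
```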

In my case I had already configured the bridge using the /etc/network/interfaces file for lxc, so using systemd-networkd is not really needed here. Since systemd-networkd doesn't do anything if the network/virtual device is already present, I could safely put the above configuration in place and enable systemd-networkd.

Just for the notes, here is my natbr0 configuration in the interfaces file.

auto natbr0
iface natbr0 inet static
   pre-up brctl addbr natbr0
   post-down brctl delbr natbr0
   post-down sysctl net.ipv4.ip_forward=0
   post-down sysctl net.ipv6.conf.all.forwarding=0
   post-up sysctl net.ipv4.ip_forward=1
   post-up sysctl net.ipv6.conf.all.forwarding=1
   post-up iptables -A POSTROUTING -t mangle -p udp --dport bootpc -s <bridge-subnet> -j CHECKSUM --checksum-fill
   pre-down iptables -D POSTROUTING -t mangle -p udp --dport bootpc -s <bridge-subnet> -j CHECKSUM --checksum-fill

Once this is done, just restart systemd-networkd and make sure you have dnsmasq or some other DHCP server running on your system.
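For reference, a minimal dnsmasq snippet serving DHCP on the bridge could look like this — the subnet is entirely made up:

```ini
# /etc/dnsmasq.d/natbr0.conf (subnet is an assumption)
interface=natbr0
dhcp-range=172.16.10.10,172.16.10.100,12h
```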

Now the last part is to tell systemd-nspawn to use the bridge interface we have defined. This is done using a container-name.nspawn file. Put this file under the /etc/systemd/nspawn folder.




Here you can specify the networking and file-mounting sections for the container. For the full list, please refer to the systemd.nspawn manual page.
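Based on the command line used earlier, a minimal container-name.nspawn might look like this — a sketch; check systemd.nspawn(5) for the exact option names in your systemd version:

```ini
# /etc/systemd/nspawn/container-name.nspawn (sketch)
[Network]
VirtualEthernet=yes
Bridge=natbr0
```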

Now that all this is done, you can happily run either of the following:

machinectl start container-name
systemctl start systemd-nspawn@container-name
Resource Control

Now that all is said and done, one last part remains: what is the point if we can't control how much resource the container uses? At least it matters to me, because I use an old and somewhat low-powered laptop.

Systemd provides a way to control resources through its resource-control interfaces. To see all the interfaces exposed by systemd, please refer to the systemd.resource-control manual page.

Resources are controlled using systemctl. Once the container is running, we can run the following command:

systemctl set-property container-name CPUShares=200 CPUQuota=30% MemoryLimit=500M

The manual page does say that these settings can be put under the [Slice] section of unit files. I don't have a clear idea whether they can also go in .nspawn files. For the sake of persisting the container, I manually wrote a service file for the container by copying systemd-nspawn@.service and adding a [Slice] section. But I don't know how to find out whether this had any effect.

If someone knows more about this, please share your suggestions with me and I will update this section with the information you provide.
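As an alternative to copying the whole unit file, a drop-in for the template instance might work — this is an untested sketch on my part, using the same resource-control directives from the systemctl command above:

```ini
# /etc/systemd/system/systemd-nspawn@container-name.service.d/limits.conf
# (untested sketch; these directives belong in the [Service] section
# of the instantiated unit)
[Service]
CPUShares=200
CPUQuota=30%
MemoryLimit=500M
```

After creating such a drop-in, `systemctl daemon-reload` would be needed before restarting the container.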


All in all, I like systemd-nspawn a lot. I use it to run a container for development of apt-offline. I previously used lxc, where everything can be controlled through a single config file, but I feel systemd-nspawn is more tightly integrated with the system than lxc.

There is definitely more to systemd-nspawn than I've figured out so far. The only problem is that it's not as popular as the alternatives and definitely lacks good howto documentation. For now the only way forward is to dig through the manual pages, scratch your head, pull your hair out, and figure out new possibilities in systemd-nspawn. ;-)

Andrew Cater: MiniDebconf 2015 ARM, Cambridge - ARM, Cambridge 1640 - final wrap-up - thanks to all

8 November, 2015 - 23:45
Two days worth of sprints - lots of hugely good work done.

Two days worth of open days with talks - lots of interest, feedback, chatting out of earshot in the breakout rooms.

Two days worth of monetary and other input from all the sponsors: ARM, Codethink, Cosworth, Hewlett Packard Enterprise, Collabora - PRICELESS

Sociableness in pubs and so on overnight

Thanks also to Steve and Jo McIntyre for a houseful of guests and to all on front desk

And also lastly, again to ARM, as in all these posts because of this great venue and for the small army of ARM pass holders who have let us in and out of doors all this time.

Andrew Cater: MiniDebconf, ARM, Cambridge - ARM 8 November 1600

8 November, 2015 - 23:26
Peter Green's talk was very clear and well received.

Now for (almost) the last of the day - Steve McIntyre (93sam / Sledge) for his annual presentation on the state of UEFI


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.