Planet Debian


Petter Reinholdtsen: Norwegian citizens now required by law to give their fingerprint to the police

10 May, 2015 - 21:00

Five days ago, the Norwegian Parliament decided, unanimously, that all citizens of Norway, whether suspected of anything criminal or not, are required to give their fingerprints to the police (vote details from Holder de ord). The law makes it sound optional, but in a few years there will be no option left. The ID will be required to vote, to get a bank account or a bank card, to change one's address at the post office, to receive an electronic ID or to get a driver's license, and for many other tasks required to function in Norway. The banks plan to stop providing their own ID on bank cards when this new national ID is introduced, and the national road authorities plan to change the driver's license so it can no longer be used as an identity card. In effect, to function as a citizen in Norway a national ID card will be required, and to get one, you must provide your fingerprints to the police.

In addition to handing the fingerprints to the police (who promise not to make a copy of the fingerprint image at that point in time, but say nothing about doing so later), a picture of the fingerprints will be stored on the RFID chip, along with a picture of the face and other information about the person. Some of the information will be encrypted, but the encryption will be the same system as currently used in the passports. The codes to decrypt will be available to a lot of government offices and their suppliers around the globe, but for those who do not know anyone in those circles, it is good to know that the encryption is already broken. And the chips can be read from 70 meters away. This can be mitigated a bit by keeping the card in a Faraday cage (a metal box or metal wire container), but one will be required to take it out often enough to expose one's private and personal information to a lot of people who have no business getting access to that information.

The new Norwegian national IDs are a vehicle for identity theft, and I feel sorry for us all, having politicians who accept such an invasion of privacy without any objections. So are the Norwegian passports, but it has been possible to function in Norway without those so far. That option goes away with the passing of the new law. In this I envy the Germans, because for them it is optional how much biometric information is stored in their national ID.

And if forced collection of fingerprints were not bad enough, the information collected in the national ID card register can be handed over to foreign intelligence services and police authorities, "when extradition is not considered disproportionate".

Update 2015-05-12: For those unable to believe that the Parliament really could make such a decision, I wrote a summary of the sources I have for concluding the way I do (in Norwegian only, as the sources are all in Norwegian).

Ian Campbell: Qcontrol Homepage Moved to

10 May, 2015 - 20:01

Since gitorious has now shut down, I've (finally!) moved the qcontrol homepage to:

Source can now be found at

Daniel Silverstone: Kimchi Trial 1 - Part 3 - Kimchiguk

10 May, 2015 - 03:18

Earlier this week I let some of my colleagues at work try the Kimchi, and one of them (James) liked it enough to ask for a tub of it just for himself. Being a lovely person, I obliged.

My darling Rob decided that what he really wanted was Kimchiguk or Kimchi-jjigae. Being both wonderful and lacking in ingredients, I set about achieving something.

I read a few recipes, worked out that I lacked critical ingredients and so set about simulating Kimchiguk. I grabbed two and a bit cupfuls of kimchi from the box in the fridge, chopped it up more finely, popped it into a pot along with about a pound of cubed pork belly, two teaspoons of hot pepper flakes, two teaspoons of sugar, and two teaspoons of cornflour. I then topped that off with five cups of water, mixed it up, brought it to the boil and simmered it for 30 minutes. Then I cubed some tofu, popped that in and simmered it for another 10 minutes while some rice cooked. Served in a bowl over rice, my approximation of Kimchiguk turned out pretty well. (Note it fed both of us and there was a serving left over.)

With a do-over, I'd try harder to get pork shoulder (belly is quite fatty) and given the option I'd wait until I had hot pepper paste because the malty fermented sweetness of hot pepper paste is pretty much impossible to emulate otherwise.

Daniel Silverstone: Kimchi Trial 1 - Part 2

10 May, 2015 - 03:18

Last night we tried the small amount of 'fresh kimchi' from my epic kimchi making trial.

I marinated some beef strips in a blended paste of onion, garlic, ginger, and sesame oil for a few hours before frying that up and serving it on rice with stir-fried carrots, leek and spring onion. A little fresh kimchi on the side for flavour.

I think the Kimchi was a success. Dearest Rob said that it tasted of Kimchi. The flavours were not very well combined and quite clearly underdeveloped, and the pepper flakes were still a little hard. I expect this to resolve over the next few days as it begins to ferment.

All in all, experiment going well, expect part 3 when I know how it tastes after fermentation begins.

Daniel Silverstone: Kimchi Trial 1 - Part 1

10 May, 2015 - 03:18

I have spent today making my first ever batch of Kimchi. I have been documenting it in photos as I go, but thought I'd write up what I did so that if anyone else fancies having a go too, we can compare results.

For a start, this recipe is nowhere near "traditional" because I don't have access to certain ingredients such as glutinous rice flour. I'm sure if I searched in many of the Asian supermarkets around the city centre I could find it, but I'm lazy so I didn't even try.

I am not writing this up as a traditional recipe because I'm kinda making it up as I go along, with hints from various sources including the great and wonderful Maangchi whose YouTube channel I follow. Observant readers or followers of Maangchi will recognise the recipe as being close to her Easy Kimchi recipe, however since I'm useless, it won't be exact. If this batch turns out okay then I'll write it up as a proper recipe for you to follow.

I started off with three Chinese Leaf cabbages which seemed to be about 1.5kg or so once I'd stripped the less nice outer leaves, cored and chopped them.

I then soaked and drained the cabbage in cold water...

...before sprinkling a total of one third of a cup of salt over the cabbage and mixing it to distribute the salt.

Then I returned to the cabbage every 30 minutes to re-mix it a total of three times. After the cabbage had been salted for perhaps 1h45m or so, I rinsed it out. Maangchi recommends washing the cabbage three times so that's what I did before setting it out to drain in a colander.

Maangchi then calls for the creation of a porridge made from sweet rice flour, which it turns out is very glutinous. Since I lack the ability to get that flour easily, I substituted cornflour, which I hope will be okay, and then continued as before. One cup of water and one third of a cup of cornflour were heated until the mixture started to bubble, and then one sixth of a cup of sugar was added. Stirring throughout, once it went translucent I turned the heat off and proceeded.

One half of a red onion, a good thumb (once peeled) of ginger, half a bulb of garlic and one third of a cup of fish sauce went into a mini-zizzer. I then diagonal-chopped about five spring onions and one leek, before cutting a fair-sized carrot into inch-long pieces and then halving and thinly slicing them. Maangchi calls for julienned carrots but I am not that patient.

Into the cooled porridge I put two thirds of a cup of korean hot pepper flakes (I have the coarse, but a mix of coarse and fine would possibly be better), the zizzed onion/garlic/ginger/fish mix and the vegetables...

...before mixing that thoroughly with a spatula.

Next came the messy bit (I put latex gloves on, and thoroughly washed my gloved hands for this). Into my largest mixing bowl I put a handful of the drained cabbage and a handful of the pepper mix, mixing thoroughly before adding another handful of each, and repeated until all the cabbage and the hot-pepper-mixed vegetables were well combined. I really got my arms into it, squishing it around, separating the leek somewhat, etc.

As a final task, I scooped the kimchi into a click-lock type box, pressing it down firmly to try and remove all the air bubbles, before sealing it in for a jolly good fermenting effort. I will have to remove a little tonight for our dinner (beef strips marinated in onion, ginger and garlic, with kimchi on rice) but the rest will then sit to ferment for a bit. Expect a part-2 with the report from tonight's dinner and a part-3 with the results after fermentation begins.

As an aside, I got my hot pepper flakes from Sous Chef who, it turns out, also stock glutinous rice flour -- I may have to get some more from them in the future. (#notsponsored)

Yves-Alexis Perez: Xfce 4.12 in Debian sid

10 May, 2015 - 02:05

So, following the Jessie release, and after a quick approval by the release team for the 4.12 transition, we've uploaded Xfce 4.12 to sid and have asked the RT to schedule the relevant binNMUs for the libxfce4util and xfce4-panel reverse dependencies.

Apparently it went well (besides some hiccups here and there, like some lag on sparc, and some build failures on hurd). So Xfce 4.12 is now in sid, and should migrate to Stretch in the following weeks, provided nothing release-critical is found.

Hideki Yamane: Now we don't have do-release-upgrade/fedup

10 May, 2015 - 00:30
We've added some instructions to the release notes, but the next step is probably to provide an upgrade tool like do-release-upgrade (Ubuntu) or fedup (Fedora). I hope such a tool will be available before the Stretch release.
Or, if someone has already started on one, tell me about it.

John Goerzen: First impressions and review of OwnCloud

9 May, 2015 - 08:57

In my recent post (I give up on Google), a lot of people suggested using OwnCloud as a replacement for several Google services. I’ve been playing around with it for a few days, and it is something of a mix of awesome and disappointing, in my opinion.

Files

OwnCloud started as a file-sync tool, somewhat akin to Google Drive and Dropbox. It has clients for every platform, and it is also a client for every platform: you can have subfolders of your OwnCloud installation stored on WebDAV, FTP, Google Drive, Dropbox, you name it. It is a pretty nice integrator of other storage services, and provides the only way to use some of them on Linux (*cough* Google Drive *cough*).
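
Since the Files component is plain WebDAV underneath, it's easy to poke at from scripts. A quick smoke test with curl might look like this (server name and user are placeholders; remote.php/webdav is the standard OwnCloud WebDAV endpoint):

$ # upload a file over WebDAV
$ curl -u john -T notes.txt https://cloud.example.com/owncloud/remote.php/webdav/notes.txt
$ # list the top-level folder
$ curl -u john -X PROPFIND https://cloud.example.com/owncloud/remote.php/webdav/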

One particularly interesting feature is the live editing in the browser of ODT, DOCX, and TXT files. This is somewhat similar to Google Docs and the only such thing I’ve seen in Open Source software. It writes changes directly back to the documents and, in my limited testing, seems to work well. A very nice feature!

I’ve tested the syncing only on Linux so far, but it looks solid.

There are two surprising issues, however: there is no deduplication and no delta-uploads. Add 10 bytes to the end of a 1GB file, and you re-upload the 1GB file. Thankfully the OwnCloud GUI client is smart enough to use inotify to notice an mv, but my guess is — and I haven’t tested this, but apparently OwnCloud doesn’t use hashes at all — that the CLI client would require a reupload after any mv, because it doesn’t run continuously.

In some situations, Syncany may be a useful work-around for this, as it does chunk-based dedup and client-side encryption. However, you would lose a lot of the sharing features inside OwnCloud by doing this, and the integration with the OwnCloud “apps” for photos, videos, and music.

The Android/mobile apps support all the usual auto-upload options.

Calendar

A lot of people report using OwnCloud as a calendar server, and it does indeed use CalDAV. With a program like DAVDroid or Mozilla Lightning, this makes, in theory, a fully functioning calendar syncing tool. There is, of course, also a web interface to the calendar. It, sadly, is limited. Or shall we say, VERY limited. Even something like sending an invite is missing — and in fact, the GUI for sharing an event is baffling. You can share it with someone, but they get no say in whether or not it shows up; it shows up on their calendar on the web only (not on synced copies), and they have no way to remove it!

Sharing calendars is similar; you can hide the display of any one of your calendars on the web interface, but not of any calendars shared with you. Baffling.
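
For what it's worth, the CalDAV side is scriptable too. A sketch with curl (placeholders again; OwnCloud of this vintage exposes calendars under remote.php/caldav/, and appending ?export should fetch a whole calendar as one .ics file):

$ curl -u john -o personal.ics \
    "https://cloud.example.com/owncloud/remote.php/caldav/calendars/john/personal?export"

A client like DAVDroid needs only the calendars/john/ collection URL to discover and sync everything under it.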

Address Book

I haven’t tested this yet, but there’s not much to test, I suspect. It can be shared with others, which I could see as a nice feature.

Bookmarks

An interesting bookmarks manager, though mysteriously without Firefox Sync support. There is Chrome sync support, and separate Mozilla Sync support, but it doesn't provide cross-browser syncing, apparently.

Music

It is designed to present an interface to music that is stored in Files. It provides an Ampache-compatible API, so there are a lot of clients that can stream music. It has very few options, not even for transcoding, so I don’t see how it would be useful for my FLAC collection.

Photos

Sort of a gallery view of photos synced up with Files. Very basic. Has a sharing button to share a link to an entire folder, but no option to embed photos in blog posts at a lower resolution or shortcut to sharing individual photos.

Notes, Tasks, etc.

I haven’t had the chance to look at this much. Some of them sync to various clients. The Notes are saved as HTML files that get synced down.

Clients overall

There is a very helpful page that lists all the sync clients for OwnCloud — not just for files, but also for calendars, contacts, etc. The list is extensive!

Other options

The two other Open Source options mentioned on my blog post were Kolab and Sogo, and there is also Zimbra which also has a community edition. The Debian Groupware page lists a number of other groupware options as well. Citadel caught my eye (wow, it’s still around!). Sogo has ActiveSync support, which might make phone integration a lot easier. It is not dead-simple to set up like OwnCloud is, though, so I haven’t tried it out, but I will probably be looking at it and Citadel next.

Daniel Kahn Gillmor: Cheers to audacity!

9 May, 2015 - 01:43
When paultag recently announced a project to try to move debian infrastructure to python3, my first thought was how large that undertaking would likely be. It seems like a classic engineering task, full of work and nit-picky details to get right, useful/necessary in the long-term, painful in the short-term, and if you manage to pull it off successfully, the best you can usually hope for is that no one will notice that it was done at all.

I always find that kind of task a little off-putting and difficult to tackle, but I was happy to see someone driving the project, since it does need to get done. Debian is potentially also in a position to help the upstream python community, because we have a pretty good view of what things are being used, at least within our own ecosystem.

I'm happy to say that I had also missed one of the other great benefits of paultag's audacious proposal, which is how it has engaged people who already knew about debian but who aren't yet involved. Evidence of this engagement is already visible on the py3porters-devel mailing list. But if that wasn't enough, I ran into a friend recently who told me, "Hey, I found a way to contribute to debian finally!" and pointed me to the py3-porters project. People want to contribute to the project, and are looking for ways in.

So cheers to the people who propose audacious projects and make them inviting to everyone, newcomers included. And cheers to the people who step up to potentially daunting work, stake out a task, roll up their sleeves, and pitch in. Even if the py3porters project doesn't move all of debian's python infrastructure to python3 as fast as paultag wants it to, I think it's already a win for the project as a whole. I am looking forward to seeing what comes out of it (and it's reminding me I need to port some of my own python work, too!)

The next time you stumble over something big that needs doing in debian, even something that might seem impossible, please make it inviting, and dive in. The rest of the project will grow and improve from the attempt.

Tags: py3-porters

Gunnar Wolf: Guests in the Classroom: Felipe Esquivel (@felipeer) on the applications of parallelism, focusing on 3D animation

8 May, 2015 - 22:44

I love having guests give my classes :)

This time we had Felipe Esquivel, a good friend whom I had invited to the Faculty once before, about two years ago. And it was due time to invite him again!

Yes, this is the same Felipe I recently blogged about — To give my blog some credibility, you can refer to Felipe's entry in IMDb and, of course, to the Indiegogo campaign page for Natura.

Felipe knows his way around the different aspects of animation. For this class (2015-04-15), he explained how traditional ray-tracing techniques work, and showed clear evidence of the promises and limits of parallelism — relating back to my subject and to academic rigor, he clearly showed how quickly we run into Amdahl's Law, which limits the efficiency of parallelization to a certain degree per program construct, counterpointed against Gustafson's Law, whereby our problem can be solved in greater detail given more processing capacity (and will thus not hit Amdahl's hard ceiling).
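
For reference, the standard formulations (my summary, not from Felipe's slides): with a fraction p of a program parallelizable, Amdahl's Law bounds the speedup on n processors at

S(n) = 1 / ((1 - p) + p/n)

so even with p = 0.95, S(64) ≈ 15.4, and the hard ceiling is S(∞) = 1/(1 - p) = 20. Gustafson's Law instead grows the problem with the machine, giving a scaled speedup of

S(n) = (1 - p) + p·n

which for the same p and n comes to about 60.9: more processors buy more detail instead of hitting Amdahl's ceiling.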

A nice and entertaining talk. But I know you are looking for the videos! Get them, either at my server or at

Eddy Petrișor: Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1

8 May, 2015 - 07:24
About 2 months ago I set a goal to run some kind of BSD on the spare Linksys NSLU2 I had. This was driven mostly by curiosity, after listening to a few BSDNow episodes and becoming a regular listener, but it was a really interesting experience (it was also somewhat frustrating, mostly due to lacking documentation or proprietary code).

Looking for documentation on how to install any BSD flavour on the Linksys NSLU2, I have found what appears to be some too-incomplete-to-be-useful-for-a-BSD-newbie information about installing FreeBSD, no information about OpenBSD and some very detailed information about NetBSD on the Linksys NSLU2.

I was very impressed by the NetBSD script which can be used to cross-compile the entire NetBSD system - to do that, it also builds the appropriate toolchain - the NetBSD kernel and the base system, even when run on a Linux host. Having some experience with cross-compilation for GNU/Linux embedded systems, I can honestly say this is immensely impressive. Well done, NetBSD!

After a few failed attempts to properly follow the instructions and lots of hours of (re)building, I had the kernel and the sets (the NetBSD system is split into several parts grouped by functionality; these are the sets), so I was in a position to set things up for net booting - kernel loading via TFTP and rootfs on NFS.

But it wouldn't be challenging if the instructions were followed to the letter, so the first thing I wanted to change was that I didn't want to run dhcpd just to pass the DHCP boot configuration to the NSLU2; that seemed like a waste of resources since I already had dnsmasq running.

After some effort and struggling with missing documentation, I managed to use dnsmasq to pass DHCP boot parameters to the slug, and also to use it as the TFTP server - after some time I documented this on my blog for future reference, and expect to refer back to it in the future.
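
The dnsmasq side boils down to a handful of lines. A sketch of the idea (the MAC address, IP and paths here are placeholders, specific to one's own network):

# /etc/dnsmasq.d/slug.conf: DHCP boot parameters plus built-in TFTP
# pin the NSLU2's MAC address to a fixed IP
dhcp-host=00:11:22:33:44:55,slug,192.168.0.10
# boot file name handed to the slug via DHCP
dhcp-boot=netbsd-nfs.bin
# enable dnsmasq's built-in TFTP server; the kernel image lives in tftp-root
enable-tftp
tftp-root=/srv/tftp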

Setting up NFS wasn't a problem, but when trying to boot, I found that I had managed to misread some of the NSLU2-related information on the NetBSD wiki at least 3 or 4 times. To be able to debug what was happening, I concluded the slug should have a serial console attached to it, which helped a lot.

Still the result was that I wasn't able to boot the trunk version of the NetBSD code on my NSLU2.

Long story short, with the help of some people from the #netbsd IRC channel on Freenode and from the port-arm NetBSD mailing list I found out that I might have a better chance with specific older versions. In practice what really worked was the code from the netbsd_6_1 branch.

Discussions on the port-arm mailing list, some digging into the (recently found) PRs (problem reports), and a successful execution of the trunk kernel (at the time, version 7.99.4) together with the 6.1.5 userspace led me to the conclusion that the NetBSD userspace for armbe was broken in the trunk branch.

And since I concluded this would be a good occasion to learn a few details about NetBSD, I set out to git bisect through the trunk history to identify when this happened. But that meant being able to easily load kernels and run them from TFTP, which was not how the RedBoot bootloader flashed into the slug behaves by default.
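
The bisect loop itself is mechanical, roughly like this (the build-and-boot step stands in for the cross-build and TFTP boot described above):

$ git bisect start
$ git bisect bad master          # trunk userspace misbehaves
$ git bisect good netbsd_6_1     # the 6.1 code is known to work
$ # build the checked-out revision, TFTP-boot it on the slug, then:
$ git bisect good                # or 'git bisect bad', until the culprit commit is found

Each round needs a fresh kernel+userspace pair booted on the device, hence the need for unattended loading via TFTP.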

By default, the RedBoot bootloader flashed into the NSLU2 waits 2 seconds for manual interaction (it waits for a ^C) on the serial console or on the telnet RedBoot prompt; then, if no such event happens, it copies the Linux image it has in flash, starting at address 0x50060000, into RAM at address 0x01d00000 (after stripping the Sercomm header) and then executes the copied code from RAM.
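
For comparison, loading a kernel by hand from the RedBoot prompt looks something like this (the addresses and server IP are illustrative):

RedBoot> ip_address -h 192.168.0.2           # point RedBoot at the TFTP server
RedBoot> load -r -b 0x200000 netbsd-nfs.bin  # raw-load the image into RAM
RedBoot> go                                  # jump to the load address

Doable, but it needs a human on the serial console or the telnet prompt for every single boot.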

Of course, this is not a very handy way to boot things from TFTP, so my first idea to overcome this limitation was to use a second-stage bootloader which would load the NetBSD kernel via TFTP, then execute it from RAM. Flashing this second-stage bootloader instead of the Linux kernel at 0x50060000 would make sure that no manual intervention except power-on would be necessary when a new kernel+userspace pair was ready to be tested.

Another advantage was that I would not risk bricking the NSLU2 since I would not be changing RedBoot, the original bootloader.

I knew Apex was used as the second stage bootloader in Debian, so I started configuring my own version of the APEX bootloader to make it work for the netbsd-nfs.bin image to be loaded via TFTP.

My first disappointment was that Apex did not support receiving the boot parameters via DHCP, only via RARP (it was clearly less tested with BOOTP or DHCP), and TFTP was documented in the code as being problematic. That meant I would have to hard-code the boot configuration or configure RARP, but that wasn't too bad.

Later I found out that I had wasted time on that avenue, because the network driver in Apex was some Intel code (the NPE Access Library) which can't be freely distributed, but could have been downloaded from Intel's site back in 2008-2009. The bad news was that current versions did not work at all with the old patchwork that had been done in Apex to let the driver made for Linux compile in a world of its own, so it could be incorporated into Apex.

I was stuck, and the only options I had were:
  1. Fight with the available Intel code and make it compile in Apex
  2. Incorporate the NPE driver from NetBSD into a rump kernel which would be included in Apex, since I knew the NetBSD driver needed only a very easily obtainable binary blob, instead of the entire driver as was in Apex before
  3. Hack together an Apex version that simulates the typing of the necessary commands to load the netbsd-nfs.bin image inside RedBoot, or in other words, call from Apex the RedBoot functions necessary to load from TFTP and execute NetBSD.
Option 1 did not look that appealing after looking into the horrible Intel build system and its endless dependencies on a specific Linux kernel version.

Option 2 was more appealing, but since I didn't know NetBSD and had only once tried to build and run a NetBSD rump kernel, it seemed like a doable project only for an experienced NetBSD developer, or at least an experienced NetBSD user, which I was not.

So I was left with option 3, which meant I had to do some reverse engineering of the code, because, although RedBoot is GPL, Linksys did not publish the source from which the running RedBoot was built.

(to be continued)

Ben Hutchings: Debian LTS work, April 2015

7 May, 2015 - 17:14

This was my fifth month working on Debian LTS. I was assigned 16 hours by Freexian's Debian LTS initiative. I worked on several packages but haven't uploaded updates yet.

linux-2.6

There is a steady stream of security issues in all Linux kernel versions. As we don't want to prompt reboots too often, these won't necessarily result in an update every month; it depends on the severity of the issue(s). However, I triaged the current issues and committed the available fixes to version control.

dpkg

The PGP signature validation in dpkg-source has a number of bugs, including CVE-2015-0840 which allows the validation to be completely subverted. I backported fixes for these from the 1.16 (wheezy) branch to the 1.15 (squeeze) branch. dpkg has a test suite and the fixes all came with new regression tests, so this was easy to verify.

As dpkg is a native package (i.e. there is little separation between the 'upstream' and Debian packaging), I preferred that the dpkg maintainers would turn this into an 'upstream' release that I could upload. Guillem Jover has now reviewed and (semi-)released my changes, so I will complete this update soon.

tiff

A large number of bugs have been found in the venerable tiff library and tools, mostly by 'fuzzing' them with afl. The bugs have been unhelpfully grouped into a smaller number of CVE IDs, although it's not clear that the bugs in each group were introduced at the same time, and they certainly weren't fixed at the same time.

Most of the upstream bug reports come with samples to reproduce the bug (crash or memory corruption detectable with valgrind), but many of these did not turn out to be reproducible (either upstream or with the version in 'squeeze'). Where they were reproducible, I've verified that the patches fix the issue.
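
The verification step is mostly mechanical. For each reproducible report it is something along these lines, where poc.tif stands in for whatever sample the bug report attaches:

$ # unpatched build: valgrind flags invalid reads/writes, or the tool crashes outright
$ valgrind --error-exitcode=42 tiffinfo poc.tif; echo $?
$ # after rebuilding with the candidate patch, the same command should exit cleanly
$ valgrind --error-exitcode=42 tiffinfo poc.tif; echo $?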

Unfortunately tiff does not have a test suite, so I made a call for testing on the debian-lts list. If I don't get any responses soon, I'll run my own basic tests with programs that use libtiff before uploading.

Benjamin Mako Hill: Books Room

7 May, 2015 - 11:11

Is the locked “books room” at McMahon Hall at UW a metaphor for DRM in the academy? Could it be, like so many things in Seattle, sponsored by Amazon?

Mika noticed the room several weeks ago but felt that today's International Day Against DRM was an opportune time to raise the questions in front of a wider audience.

Benjamin Mako Hill: DRM on Streaming Services

7 May, 2015 - 10:43

For the 2015 International Day Against DRM, I wrote a short essay on DRM for streaming services posted on the Defective by Design website. I’m republishing it here.

Between 2003 and 2009, most music purchased through Apple’s iTunes store was locked using Apple’s FairPlay digital restrictions management (DRM) software, which is designed to prevent users from copying music they purchased. Apple did not seem particularly concerned by the fact that FairPlay was never effective at stopping unauthorized distribution and was easily removed with publicly available tools. After all, FairPlay was effective at preventing most users from playing their purchased music on devices that were not made by Apple.

No user ever requested FairPlay. Apple did not build the system because music buyers complained that CDs purchased from Sony would play on Panasonic players or that discs could be played on an unlimited number of devices (FairPlay allowed five). Like all DRM systems, FairPlay was forced on users by a recording industry paranoid about file sharing and, perhaps more importantly, by technology companies like Apple, who were eager to control the digital infrastructure of music distribution and consumption. In 2007, Apple began charging users 30 percent extra for music files not processed with FairPlay. In 2009, after lawsuits were filed in Europe and the US, and after several years of protests, Apple capitulated to their customers’ complaints and removed DRM from the vast majority of the iTunes music catalog.

Fundamentally, DRM for downloaded music failed because it is what I've called an antifeature. Like features, antifeatures are functionality created at enormous cost to technology developers. But unlike features, which users clamor to pay extra for, antifeatures are things users pay to have removed. You can think of antifeatures as a technological mob protection racket. Apple charges more for music without DRM, and independent music distributors often use "DRM-free" as a primary selling point for their products.

Unfortunately, after being defeated a half-decade ago, DRM for digital music is becoming the norm again through the growth of music streaming services like Pandora and Spotify, which nearly all use DRM. Impressed by the convenience of these services, many people have forgotten the lessons we learned in the fight against FairPlay. Once again, the justification for DRM is both familiar and similarly disingenuous. Although the stated goal is still to prevent unauthorized copying, tools for “stripping” DRM from services continue to be widely available. Of course, the very need for DRM on these services is reduced because users don’t normally store copies of music and because the same music is now available for download without DRM on services like iTunes.

We should remember that, like ten years ago, the real effect of DRM is to allow technology companies to capture value by creating dependence in their customers and by blocking innovation and competition. For example, DRM in streaming services blocks third-party apps from playing music from services, just as FairPlay ensured that iTunes music would only play on Apple devices. DRM in streaming services means that listening to music requires one to use special proprietary clients. For example, even with a premium account, a subscriber cannot listen to music from their catalog using an alternative or modified music player. It means that their television, car, or mobile device manufacturer must cut deals with their service to allow each paying customer to play the catalog they have subscribed to. Although streaming services are able to capture and control value more effectively, this comes at the cost of reduced freedom, choice, and flexibility for users and at higher prices paid by subscribers.

A decade ago, arguments against DRM for downloaded music focused on the claim that users should have control over the music they purchase. Although these arguments may not seem to apply to subscription services, it is worth remembering that DRM is fundamentally a problem because it means that we do not have control of the technology we use to play our music, and because the firms aiming to control us are using DRM to push antifeatures, raise prices, and block innovation. In all of these senses, DRM in streaming services is exactly as bad as FairPlay, and we should continue to demand better.

Steve Kemp: On de-duplicating uploaded file-content.

7 May, 2015 - 07:00

This evening I've been mostly playing with removing duplicate content. I've had this idea for the past few days about object-storage, and obviously in that context if you can handle duplicate content cleanly that's a big win.

The naive implementation of object-storage involves splitting uploaded files into chunks, storing them separately, and writing database-entries such that you can reassemble the appropriate chunks when the object is retrieved.

If you store chunks on-disk, by the hash of their contents, then things are nice and simple.

The end result is that you might upload the file /etc/passwd, split that into four-byte chunks, and then hash each chunk using SHA256.

This leaves you with some database entries, and a bunch of files on-disk.


In my toy code I wrote out the data in 4-byte chunks, which is grossly inefficient. But the value of using such small pieces is that there are liable to be a lot of collisions, and that means we save space. It is a trade-off.
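
The core of the idea fits in a few lines of shell; a toy illustration of content-addressed chunk storage (here with 4096-byte chunks), not the actual server code:

$ mkdir -p store
$ split -b 4096 /etc/passwd chunk.          # cut the upload into fixed-size pieces
$ for c in chunk.*; do
>     mv "$c" store/$(sha256sum "$c" | cut -d' ' -f1)
> done                                      # identical chunks land on the same path

Retrieval then just means concatenating the chunks in the order recorded in the database, and a second upload of identical content costs no extra disk at all.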

So the main thing I was experimenting with was the size of the chunks. If you make them too small you lose on I/O due to the overhead of writing out so many small files, but you gain because collisions are common.

The rough testing I did involved using chunks of 16, 32, 128, 255, 512, 1024, 2048, and 4096 bytes. As sizes went up the overhead shrank, but so did the collisions.

Unless you can count on users uploading a lot of files like /bin/ls, which are going to collide 100% of the time with prior uploads, using larger chunks just didn't win as much as I thought it would.

I wrote a toy server using Sinatra & Ruby, which handles the splitting and hashing, and stores the block IDs in SQLite. It's not so novel, given that it took only an hour or so to write.

The downside of my approach is also immediately apparent: all the data must live on a single machine, so that reassembly works in the simple fashion. That's possible even with lots of content if you use GlusterFS or similar, but it's probably not a great approach in general. If you have large-capacity storage available locally then this might work well enough for storing backups, etc., but .. yeah.

Matthieu Caneill: Debsources got swag and continuous integration

7 May, 2015 - 05:00

Debsources is still under active development. We recently had a Gnome Outreachy intern, Jingjie Jiang, and we're about to work with 2 GSoC students, Clément Schreiner and Orestis Ioannou.

I will present here the GitHub mirror we've set up in order to allow external pull requests to be submitted, and to use the continuous integration service provided by Travis-CI.

GitHub and Travis-CI

Debsources' source code is hosted on Debian's git servers, and from there is mirrored to GitHub. Every time a commit is pushed (to master or other branches) or a pull request is opened, the test suite is automatically run on Travis-CI, and the result (tests pass or don't) is displayed on GitHub. This allows us to quickly filter external contributions (when they are submitted on GitHub), and to be sure everything works with our setup, before reviewing the work.
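
The glue is a single .travis.yml file in the repository. A simplified sketch of its shape (the install and script commands here are illustrative; the real file lives in the repo):

language: python
services:
  - postgresql
install:
  - pip install -r requirements.txt   # illustrative; the actual dependencies are listed in the repo
script:
  - nosetests                         # run the Debsources test suite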

Travis-CI runs the tests in OpenVZ containers. The complete infrastructure was a bit challenging to set up, but as we now have a Docker recipe to quickly begin hacking on Debsources, most of the work could be done using the Dockerfile instructions.

On average, a run on Travis-CI (which includes git-cloning the code and test data, setting up the server, and running the test suite) takes 7 minutes, which in my opinion is an acceptable amount of time to wait before submitting a pull request.

Bugs discovered in the process

Setting up this continuous integration infrastructure made me discover a few bugs.

Python magic does black magic

Debsources runs fine on Debian (not surprisingly), but I got tricked by black magic when I tried to run it on Ubuntu (which is the OS running in Travis-CI's containers).

We use the magic library to guess the type of files we're dealing with, for instance when we need to decide between rendering a file (for text files) or downloading it (for binary files).

Here comes the tricky part: the Python bindings for libmagic are not the same in Debian and on Pypi. Debsources uses the Debian package python-magic, which is not in Ubuntu 12.04. Moreover, there's no Python egg for it on Pypi, which instead has another package (called magic) providing a different API.

I solved this with a dirty hack, using the fact that python-magic fits in a single file:

mkdir /tmp/python-magic && wget -O /tmp/python-magic/ && export PYTHONPATH=/tmp/python-magic/:$PYTHONPATH

It simply downloads the library, saves it in a temporary folder and includes it in the Python path. Let's see how long it works before everything breaks!

Size of a directory

One test in the suite ensured that the information returned by ls -l on a directory matched what was stored in the DB. Inode metadata was tested, such as name, permissions, type, and size.

Interestingly enough, the size of a directory was tested and expected to be 4096 bytes. The size of a directory actually depends on the filesystem in use, and on the number of files the directory contains. We often see 4096 because it's the size of a not-too-big directory on ext4.

Travis-CI doesn't use ext4:

$ df -T
Filesystem            Type     1K-blocks      Used Available Use% Mounted on
/vz/private/209140041 simfs    125829120 103460612  22368508  83% /
none                  devtmpfs   1572864         8   1572856   1% /dev
none                  tmpfs       314576        56    314520   1% /run
none                  tmpfs         5120         4      5116   1%
none                  tmpfs      1572864         0   1572864   0%
/dev/null             tmpfs       786432    171584    614848  22%

Simfs is a container filesystem for OpenVZ, on which directories have different sizes than on ext4:

$ ls -al /
total 0
drwxr-xr-x 23 root     root      480 Feb  4 18:08 .
drwxr-xr-x 23 root     root      480 Feb  4 18:08 ..
drwxr-xr-x  2 root     root     2480 Feb  4 18:20 bin
drwxr-xr-x  2 root     root       40 Apr 19  2012 boot
drwxr-xr-x  5 root     root      660 Apr 30 13:56 dev
drwxr-xr-x 99 root     root     3560 Apr 30 13:56 etc
-rw-r--r--  1 root     root        0 Feb  4 17:56 fastboot
drwxr-xr-x  3 root     root       80 Feb  4 17:57 home

Directory sizes are not even powers of 2.

Hence I changed the test to not check directory sizes. Hopefully this will help to make Debsources work on more filesystems!

An empty file is hiding

Last but not least, because this bug is still open in the wild: a file which appears to be empty is not taken into account by Debsources' updater. This file is sources/non-free/m/make-doc-non-dfsg/4.0-2/.pc/applied-patches. It is present in the filesystem in the container, and is not the only empty file over there, but it still doesn't appear in the database, which makes the test that counts files fail.

The test has been commented out (booooooh), so that we can still use Travis-CI's platform for our GSoC students before it's fixed.

Conclusion

Making Debsources run automatically on a different platform from the one we usually use allowed us to spot bugs, write dirty hacks, and broaden the set of filesystems it's supposed to run on.

Now, let's hope the continuous integration will help our GSoC students, and let's wish them good luck!

C.J. Adams-Collier: A harrowing upgrade

7 May, 2015 - 03:16

I just dist-upgraded the blog server to jessie. A couple of errors from code that checks which modules are enabled from .htaccess (deprecated and now obsolete behavior). Nothing all that rocky. Go team Debian!

John Goerzen: I Give Up on Google: Free is Too Expensive

7 May, 2015 - 01:26

I am really tired of things Google has done lately.

The most recent example being retiring Classic Maps. That's a problem, because the current Maps mysteriously doesn't show most of my saved ("starred") places. Google has known about this since at least 2013. There are posts all over their forums about it, going back to when what is now "regular" Google Maps was in beta. Google employees even knew about it and did nothing. For someone who made heavy use of it, this was quite annoying.

But there have been plenty of others:

  • Removing My Places and My Maps from Maps for Android. Those features were used to, for instance, plan trips, highlight routes, add campground possibilities, etc. (They eventually brought this feature back months/years later, in limited form.)
  • Removing the 7-day and month views from Calendar for Android, claiming this was "better" for users. They finally re-added those views a few months later after many complaints. I even participated in a survey process with them where they were clearly struggling to understand why anybody wanted to see 7 days at once, when that feature had been there for years…
  • Removing the XMPP capabilities in Google Talk/Hangouts.
  • Picasaweb pretty much shut down, with very strong redirects to Google+ Photos. Which still to this day doesn’t have a handy feature for embedding in a blog post or anything that’s not, well, Google+.
  • General creeping crapification of everything they touch. It’s almost like Microsoft in the 90s all over again. All of a sudden my starred places stop showing up in Google Maps, but show up in Google Drive — shared with the whole world. What? I never wanted them in Google Drive to start with.
  • All the products that are all-but-dead — Google Groups and the sad state of the Deja News archives. Maybe Google+ itself goes on this list soon?
  • Looks like they’re trying to kill off Google Voice and merge it into hangouts, but I can’t send a text from the web with Hangouts.
  • And this massive list of discontinued services and products. Yeowch. Remember when Google Code was hot, and then they didn’t touch it at all for years?
  • And they still haven’t fixed some really basic things, such as letting people change their email address when they get married.
  • Dropping SIP from Grand Central, ActiveSync from Apps, etc.

I even used to use Flickr, then moved to Picasa when Yahoo stopped investing in Flickr. Now I’m back to Flickr, because Google stopped investing in Picasa.

The takeaway is that you can’t really rely on Google for anything. Counting on something being there for an upcoming trip and then having it be suddenly yanked away is a level of frustration that just makes the service not so useful. Never knowing when obvious things (7-day calendar view) will be removed means you just can’t depend on it.

So, are there good alternatives? Things I’m thinking of include:

  • Alternative calendar applications. Ideally it would support shared calendars for multiple people in a family, an Android app that lets you easily view some or all calendars, etc. I wonder if is really the only competitor here? Last I looked — a few years ago — none of the Open Source options really worked well.
  • Alternative mapping applications. Must-haves include directions, navigation in the car, saving points of interest, and offline storage on Android. Nice-to-haves would include restaurant review integration, etc. Looks like Nokia ( and Mapquest, plus a few OSM spinoffs, are the leading contenders here.
  • Email is easily enough found elsewhere, and I’ve never used Gmail much anyhow.

Anybody else moving off Google?

Martin Pitt: autopkgtest 3.14 “now twice as rebooty”

6 May, 2015 - 17:15

Almost every new autopkgtest release brings some small improvements, but 3.14 got some reboot related changes worth pointing out.

First of all, I simplified and unified the implementation of rebooting across all runners that support it (ssh, lxc, and qemu). If you use a custom setup script for adt-virt-ssh you might have to update it: Previously, the setup script needed to respond to a reboot function to trigger a reboot, wait for the testbed to go down, and come back up. This got split into issuing the actual reboot system command directly by adt-run itself on the testbed, and the “wait for go down and back up” part. The latter now has a sensible default implementation: it simply waits for the ssh port to become unavailable, and then waits for ssh to respond again; most testbeds should be fine with that. You only need to provide the new wait-reboot function in your ssh setup script if you need to do anything else (such as re-enabling ssh after reboot). Please consult the manpage and the updated SKELETON for details.
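
If you do need a custom wait-reboot, the relevant branch of the setup script can stay small. A sketch, assuming nc is available to poll the ssh port:

# excerpt from a hypothetical adt-virt-ssh setup script
wait-reboot)
    while nc -z "$TESTBED" 22; do sleep 1; done    # wait for the testbed to go down
    while ! nc -z "$TESTBED" 22; do sleep 1; done  # wait for ssh to answer again
    ;;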

The ssh runner gained a new --reboot option to indicate that the remote testbed can be rebooted. This will automatically declare the reboot testbed capability and thus you can now run rebooting tests without having to use a setup script. This is very useful for running tests on real iron.

Finally, in testbeds which support rebooting, your tests will now find a new /tmp/autopkgtest-reboot-prepare command. Like /tmp/autopkgtest-reboot it takes an arbitrary "marker", saves the current state, restores it after reboot and re-starts your test with the marker; however, it will not trigger the actual reboot but expects the test to do that. This is useful if you want to test a piece of software which does a reboot as part of its operation, such as a system-image upgrade. Another use case is testing kernel crashes, kexec, or another "nonstandard" way of rebooting the testbed. README.package-tests shows an example of how this looks.
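
A rebooting test might then be structured like this (a sketch; the marker name is arbitrary, and I am assuming the ADT_REBOOT_MARK variable documented for the existing reboot interface):

#!/bin/sh
# debian/tests/upgrade-and-reboot (sketch)
set -e
case "$ADT_REBOOT_MARK" in
    "")                                                # first run
        echo "kicking off the system-image upgrade"
        /tmp/autopkgtest-reboot-prepare after-upgrade  # save state, but do not reboot
        reboot                                         # the software under test does the reboot
        ;;
    after-upgrade)                                     # restarted after the reboot
        echo "verifying the upgrade took effect"
        ;;
esac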

3.14 is now available in Debian unstable and Ubuntu wily. As usual, for older releases you can just grab the deb and install it, it works on all supported Debian and Ubuntu releases.

Enjoy, and let me know if you run into troubles or have questions!

