Planet Debian


Iain R. Learmonth: The Domain Name System

23 October, 2016 - 01:15

As I posted yesterday, we released PATHspider 1.0.0. What I didn’t talk about in that post was an event that occurred only a few hours before the release.

Everything was going fine, proofreading of the documentation was in progress, a quick git push with the documentation updates and… CI FAILED!?! Our CI doesn’t build the documentation, only tests the core code. I’m planning to release real soon and something has broken.

Starting to panic.

irl@orbiter# ./
Ran 16 tests in 0.984s


This makes no sense. Maybe I forgot to add a dependency and it’s been broken for a while? I scrutinise the dependencies list and it all looks fine.

In fairness, probably the first thing I should have done is look at the build log in Jenkins, but I’ve never had a failure that I couldn’t reproduce locally before.

It was at this point that I realised there was something screwy going on. A sigh of relief as I realise that there’s not a catastrophic test failure but now it looks like maybe there’s a problem with the University research group network, which is arguably worse.

Being focussed on getting the release ready, I didn’t realise that the Internet was falling apart. Unknown to me, a massive DDoS attack against Dyn, a major DNS host, was in progress. After a few attempts to debug the problem, I hardcoded a line into /etc/hosts, still believing it to be a localised issue.
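For illustration, a pinned entry of that sort looks like this; the address below is a placeholder from the TEST-NET-3 documentation range and the hostname is hypothetical, not the actual host I pinned:

```
# /etc/hosts -- temporary pin while the resolver is unreachable
203.0.113.10    ci.example.org
```

Entries like this are easy to forget about, so it pays to remove them once DNS recovers.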

I’ve just removed this line as the problem seems to have resolved itself for now. There are two main points I’ve taken away from this:

  • CI failure doesn’t necessarily mean that your code is broken, it can also indicate that your CI infrastructure is broken.
  • Decentralised internetwork routing is pretty worthless when the centralised name system goes down.

This afternoon I read a post by [tj] on the 57North Planet, and this is where I learnt what had really happened. He mentions multicast DNS and Namecoin as distributed name system alternatives. I’d like to add some more to that list:

Only the first of these is really a distributed solution.

My idea with ICMP Domain Name Messages is that you send an ICMP message to a webserver. Somewhere along the path, you’ll hit either a surveillance or censorship middlebox. These middleboxes could provide value by caching any DNS replies they see; an ICMP DNS request message would then not be forwarded, but answered directly from the cache. If a middlebox cannot generate a reply, it can still forward the request to other surveillance and censorship boxes.

I think this would be a great secondary use for the NSA and GCHQ boxen on the Internet. It clearly fits within the scope of “defending national security”, as the Internet is kinda dead if DNS is down, plus it’d make it nice and easy to find the boxes with PATHspider.

Dirk Eddelbuettel: RcppArmadillo 0.7.500.0.0

22 October, 2016 - 22:43

A few days ago, Conrad released Armadillo 7.500.0. The corresponding RcppArmadillo release 0.7.500.0.0 is now on CRAN (and will get into Debian shortly).

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 274 other packages on CRAN.

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.500.0.0 (2016-10-20)
  • Upgraded to Armadillo release 7.500.0 (Coup d'Etat)

    • Expanded qz() to optionally specify ordering of the Schur form

    • Expanded each_slice() to support matrix multiplication

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Christoph Egger: Running Debian on the ClearFog

22 October, 2016 - 17:37

Back in August, I was looking for a Homeserver replacement. During FrOSCon I was then reminded of the Turris Omnia project. The basic SoC (Marvell Armada 38x) seemed to be nice and have decent mainline support (and, with the Turris, users interested in keeping it working). Only I don't want any WIFI and I wasn't sure the standard case would be all that useful. Fortunately, there's also a simple board available with the same SoC called ClearFog, so I got one of these (the Base version). With shipping and the SSD (the only 2242 M.2 SSD with 250 GiB I could find, an ADATA SP600) it slightly exceeds the budget, but oh well.

When installing the machine, the obvious goal was to use mainline FOSS components only if possible. Fortunately there's mainline kernel support for the device as well as mainline U-Boot. First attempts to boot from a micro SD card did not work out at all, both with mainline U-Boot and the vendor version though. Turns out the eMMC version of the board does not support any micro SD cards at all, a fact that is documented but others failed to notice as well.


As the board does not come with any loader on eMMC and booting directly from M.2 requires removing some resistors from the board, the easiest way is using UART for booting. The vendor wiki has some shell script wrapping an included C fragment to feed U-Boot to the device, but all that is really needed is U-Boot's kwboot utility. For some reason the SPL didn't properly detect UART booting on my device (wrong magic number), but patching the if (in arch-mvebu's spl.c) to always assume UART boot is an easy way around it.
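For reference, a UART boot with kwboot looks roughly like this; the serial device path and the image file name are assumptions for illustration:

```
# push the U-Boot image over the serial line, then attach a terminal (-t)
host# kwboot -t -B 115200 -b u-boot-spl.kwb /dev/ttyUSB0
```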

The plan then was to boot a Debian armhf rootfs with a defconfig kernel from USB stick, and install U-Boot and the rootfs to eMMC from within that system. Unfortunately U-Boot seems to be unable to talk to the USB3 port, so no kernel loading from there. One could probably make UART loading work, but switching between screen for the serial console and xmodem seemed somewhat fragile and I never got it working. However, ethernet can be made to work, though you need to copy eth1addr to eth3addr (or just the right one of these) in U-Boot, saveenv and reboot. After that TFTP works (but is somewhat slow).
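A minimal sketch of that dance on the U-Boot console (the TFTP server address and the kernel file name are made up for illustration):

```
=> setenv eth3addr ${eth1addr}
=> saveenv
=> reset
...
=> setenv serverip 192.0.2.1
=> tftpboot ${kernel_addr_r} vmlinuz
```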


There's one last step required to allow U-Boot and Linux to access the eMMC. The eMMC is wired to the same pins as the SD card would be. However, the SD card slot has an additional indicator pin showing whether a card is present. You might be lucky inserting a dummy card into the slot, or go the clean route and remove the pin specification from the device tree.

--- a/arch/arm/dts/armada-388-clearfog.dts
+++ b/arch/arm/dts/armada-388-clearfog.dts
@@ -306,7 +307,6 @@

                        sdhci@d8000 {
                                bus-width = <4>;
-                               cd-gpios = <&gpio0 20 GPIO_ACTIVE_LOW>;
                                pinctrl-0 = <&clearfog_sdhci_pins

Next up is flashing the U-Boot to eMMC. This seems to work with the vendor U-Boot but proves to be tricky with mainline. The fun part boils down to the fact that the boot firmware reads the first block from eMMC, but the second from SD card. If you write the mainline U-Boot, which was written and tested for SD card, to eMMC, the SPL will try to load the main U-Boot starting from its second sector from flash -- obviously resulting in garbage. This one took me several tries to figure out and made me read most of the SPL code for the device. The fix however is trivial (apart from the question of how to support all the different variants from one codebase, which I'll leave to the U-Boot developers):

--- a/include/configs/clearfog.h
+++ b/include/configs/clearfog.h
@@ -143,8 +143,7 @@
 #define CONFIG_SYS_MMC_U_BOOT_OFFS             (160 << 10)
-                                                + 1)
 #define CONFIG_SYS_U_BOOT_MAX_SIZE_SECTORS     ((512 << 10) / 512) /* 512KiB */
 #define CONFIG_FIXED_SDHCI_ALIGNED_BUFFER      0x00180000      /* in SDRAM */

Now we have a system booting from eMMC with mainline U-Boot (which is a most welcome speedup compared to the UART and TFTP combination from the beginning). Next is fine-tuning Linux on the device -- we want to install the armmp Debian kernel and have it work. As all the drivers are built as modules for that kernel, this also means initrd support. Funnily, U-Boot's bootz allows booting a plain vmlinux kernel, but I couldn't get it to boot a plain initrd; passing a uImage initrd and a normal kernel however works pretty well. Back when I first tried, there were some modules missing and ethernet didn't work with the PHY driver built as a module. In the meantime the PHY problem was fixed in the Debian kernel and almost all modules were already added. Ben then only added the USB3 module on my suggestion, and as a result unstable's armhf armmp kernel should work perfectly well on the device (you still need to patch the device tree similar to the patch above). Still missing is an updated flash-kernel to automatically generate the initrd uImage; that work is in progress but got stalled until I fixed the U-Boot-on-eMMC problem. With that done, everything should be fine -- and maybe we can get Debian U-Boot builds for that board too.
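Generating the initrd uImage by hand is a one-liner with mkimage from u-boot-tools; the kernel version in the path here is illustrative:

```
host# mkimage -A arm -O linux -T ramdisk -C gzip \
      -n initramfs -d /boot/initrd.img-4.7.0-1-armmp /boot/uInitrd
```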

Pro versus Base

The main difference so far between the Pro and the Base version of the ClearFog is the switch chip which is included on the Pro. The Base instead "just" has two gigabit ethernet ports and an SFP. Both Linux's and U-Boot's device trees are intended for the Pro version, which makes one of the ethernet ports unusable (it tries to find the switch behind the ethernet port, which isn't there). To get both ports working (or the one you settled on earlier), there's a second patch to the device tree (my version might be sub-optimal but works). Below is the U-Boot version -- the Linux kernel version is a trivial adaptation:

--- a/arch/arm/dts/armada-388-clearfog.dts
+++ b/arch/arm/dts/armada-388-clearfog.dts
@@ -89,13 +89,10 @@
        internal-regs {
            ethernet@30000 {
                mac-address = [00 50 43 02 02 02];
+              managed = "in-band-status";
+              phy = <&phy1>;
                phy-mode = "sgmii";
                status = "okay";
-              fixed-link {
-                  speed = <1000>;
-                  full-duplex;
-              };

            ethernet@34000 {
@@ -227,6 +224,10 @@
                pinctrl-0 = <&mdio_pins>;
                pinctrl-names = "default";

+              phy1: ethernet-phy@1 { /* Marvell 88E1512 */
+                   reg = <1>;
+              };
                phy_dedicated: ethernet-phy@0 {
                     * Annoyingly, the marvell phy driver
@@ -386,62 +386,6 @@
        tx-fault-gpio = <&expander0 13 GPIO_ACTIVE_HIGH>;

-  dsa@0 {
-      compatible = "marvell,dsa";
-      dsa,ethernet = <&eth1>;
-      dsa,mii-bus = <&mdio>;
-      pinctrl-0 = <&clearfog_dsa0_clk_pins &clearfog_dsa0_pins>;
-      pinctrl-names = "default";
-      #address-cells = <2>;
-      #size-cells = <0>;
-      switch@0 {
-          #address-cells = <1>;
-          #size-cells = <0>;
-          reg = <4 0>;
-          port@0 {
-              reg = <0>;
-              label = "lan1";
-          };
-          port@1 {
-              reg = <1>;
-              label = "lan2";
-          };
-          port@2 {
-              reg = <2>;
-              label = "lan3";
-          };
-          port@3 {
-              reg = <3>;
-              label = "lan4";
-          };
-          port@4 {
-              reg = <4>;
-              label = "lan5";
-          };
-          port@5 {
-              reg = <5>;
-              label = "cpu";
-          };
-          port@6 {
-              /* 88E1512 external phy */
-              reg = <6>;
-              label = "lan6";
-              fixed-link {
-                  speed = <1000>;
-                  full-duplex;
-              };
-          };
-      };
-  };
    gpio-keys {
        compatible = "gpio-keys";
        pinctrl-0 = <&rear_button_pins>;

Apart from the mess with eMMC this seems to be a pretty nice device. It's now happily running with a M.2 SSD providing enough storage for now and still has a mSATA/mPCIe plug left for future journeys. It seems to be drawing around 5.5 Watts with SSD and one Ethernet connected while mostly idle and can feed around 500 Mb/s from disk over an encrypted ethernet connection which is, I guess, not too bad. My plans now include helping to finish flash-kernel support, creating a nice case and probably get it deployed. I might bring it to FOSDEM first though.

Working on it was really quite some fun (apart from the frustrating part of finding the one-block offset...) and people were really helpful. Big thanks here to Debian's ARM folks, Ben Hutchings the kernel maintainer, and U-Boot upstream (especially Tom Rini and Stefan Roese).

Matthew Garrett: Fixing the IoT isn't going to be easy

22 October, 2016 - 12:14
A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.

Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered, no matter how old the device, is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do is silently disable the firmware update mechanism.

We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

Right. I'm off to portscan another smart socket.

[1] UDP connection refused messages are typically ratelimited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.
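For concreteness, the kind of scan in question (the target here is a documentation-range placeholder address):

```
host# nmap -sU -p- 192.0.2.50
```

With port-unreachable replies ratelimited to one per second, closed ports trickle back at roughly that rate, which is where the almost-a-day estimate for 65535 ports comes from.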

[2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

[3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign


Russell Coker: Another Broken Nexus 5

22 October, 2016 - 11:56

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was ok, the call-back was good but unfortunately there was some confusion which delayed replacement.

Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have grown larger (which is a common symptom of battery related problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones, the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone.

For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.
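The break-even intuition can be checked with quick shell arithmetic, using the figures above:

```shell
price=700      # Nexus 6P including postage
premium=108    # Kogan's 3-year extended warranty
# premium as a percentage of the purchase price: a rough break-even
# failure probability for years 2 and 3 (integer arithmetic rounds down)
echo "$((premium * 100 / price))%"
```

This prints 15%, i.e. roughly the 1/7 figure above, before even accounting for the depreciated value of a replacement phone.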


Iain R. Learmonth: PATHspider 1.0.0 released!

22 October, 2016 - 06:46

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.

For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.

PATHspider is a framework for performing and analyzing these measurements, while the actual A/B test can be easily customized. Late on the 21st October, we released version 1.0.0 of PATHspider and it’s ready for “production” use (whatever that means for Internet research software).

Our first real release of PATHspider was version 0.9.0 just in time for the presentation of PATHspider at the 2016 Applied Networking Research Workshop co-located with IETF 96 in Berlin earlier this year. Since this release we have made a lot of changes and I’ll talk about some of the highlights here (in no particular order):

Switch from twisted.plugin to straight.plugin

While we anticipate that some plugins may wish to use some features of Twisted, we didn’t want to have Twisted as a core dependency for PATHspider. We found that straight.plugin was not just a drop-in replacement but it simplified the way in which 3rd-party plugins could be developed and it was worth the effort for that alone.

Library functions for the Observer

PATHspider has an embedded flow-meter (think something like NetFlow but highly customisable). We found that even with the small selection of plugins that we had we were duplicating code across plugins for these customisations of the flow-meter. In this release we now provide library functions for common needs such as identifying TCP 3-way handshake completions or identifying ICMP Unreachable messages for flows.

New plugin: DSCP

We’ve added a new plugin for this release to detect breakage when using DiffServ code points to achieve differentiated services within a network.

Plugins are now subcommands

Using the subparsers feature of argparse, all plugins including 3rd-party plugins will now appear as subcommands to the PATHspider command. This makes every plugin a first-class citizen and makes PATHspider truly generalised.

We have an added benefit from this that plugins can also ask for extra arguments that are specific to the needs of the plugin, for example the DSCP plugin allows the user to select which code point to send for the experimental test.
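As a sketch of the resulting interface (the flag spellings here are illustrative, not copied from the real 1.0.0 command line):

```
host$ pathspider --help          # plugins appear as subcommands
host$ pathspider dscp --codepoint 46
```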

Test Suite

PATHspider now has a test suite! As the size of the PATHspider code base grows we need to be able to make changes and have confidence that we are not breaking code that another module relies on. We have so far only achieved 54% coverage of the codebase but we hope to improve this for the next release. We have focussed on the critical portions of data collection to ensure that all the results collected by PATHspider during experiments are valid.

DNS Resolver Utility

Back when PATHspider was known as ECNSpider, it had a utility for resolving IP addresses from the Alexa top 1 million list. This utility has now been fully integrated into PATHspider and appears as a plugin to allow for repeated experiments against the same IP addresses even if the DNS resolver would have returned a different address.


Documentation

Documentation is definitely not my favourite activity, but it has to be done. PATHspider 1.0.0 now ships with documentation covering commandline usage, input/output formats and development of new plugins.

If you’d like to check out PATHspider, you can find the website at

Debian packages will be appearing shortly and will find their way into stable-backports within the next 2 weeks (hopefully).

Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.

Dirk Eddelbuettel: anytime 0.0.4: New features and fixes

21 October, 2016 - 09:25

A brand-new release of anytime is now on CRAN following the three earlier releases since mid-September. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects -- and does so without requiring a format string. See the anytime page for a few examples.

With release 0.0.4, we add two nice new features. First, NA, NaN and Inf are now simply skipped (similar to what the corresponding base R functions do). Second, we now also accept large numeric values so that, e.g., anytime(as.numeric(Sys.time())) also works, effectively adding another input type. We have also squashed an issue reported by the 'undefined behaviour' sanitizer, and widened the test for when we try to deploy the gettz package to get missing timezone information.

A quick example of the new features:

anydate(c(NA, NaN, Inf, as.numeric(as.POSIXct("2016-09-01 10:11:12"))))
[1] NA           NA           NA           "2016-09-01"

The NEWS file summarises the release:

Changes in anytime version 0.0.4 (2016-10-20)
  • Before converting via lexical_cast, assign to atomic type via template logic to avoid an UBSAN issue (PR #15 closing issue #14)

  • More robust initialization and timezone information gathering.

  • More robust processing of non-finite input also coping with non-finite values such as NA, NaN and Inf which all return NA

  • Allow numeric POSIXt representation on input, also creating proper POSIXct (or, if requested, Date)

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kees Cook: CVE-2016-5195

21 October, 2016 - 06:02

My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note: this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Daniel Pocock: Choosing smartcards, readers and hardware for the Outreachy project

20 October, 2016 - 14:25

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards, Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained with a table:

| | On disk | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad |
|---|---|---|---|
| Software | Free/open | Mostly free/open, proprietary firmware in reader | Mostly free/open, proprietary firmware in reader |
| Key extraction | Possible | Not generally possible | Not generally possible |
| Passphrase compromise attack vectors | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers) |
| Other factors | No hardware | Small, USB key form-factor | Largest form factor |

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily
SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
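A minimal sketch of how the multiple copies might be written and verified; the key filename and the SD card mount points are hypothetical:

```shell
# copy_and_verify FILE DIR: copy FILE into DIR and confirm the copy's
# SHA-256 checksum matches the original before trusting it.
copy_and_verify() {
    src=$1
    dest=$2/$(basename "$1")
    cp "$src" "$dest"
    [ "$(sha256sum "$src" | cut -d' ' -f1)" = \
      "$(sha256sum "$dest" | cut -d' ' -f1)" ]
}

# Example (hypothetical mount points for three SD cards):
#   for mnt in /media/sd-a /media/sd-b /media/sd-c; do
#       copy_and_verify master-key.gpg "$mnt" || echo "mismatch on $mnt" >&2
#   done
```

Verifying each copy right after writing catches a failing card before it becomes the only surviving backup.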

For convenience, it would be desirable to use a multi-card reader:

although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Pau Garcia i Quiles: FOSDEM Desktops DevRoom 2017 Call for Participation

20 October, 2016 - 06:41

FOSDEM is one of the largest (5,000+ hackers!) gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium, Europe).

Once again, one of the tracks will be the Desktops DevRoom (formerly known as “CrossDesktop DevRoom”), which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to:

  • Open Desktops: Gnome, KDE, Unity, Enlightenment, XFCE, Razor, MATE, Cinnamon, ReactOS, CDE etc
  • Closed desktops: Windows, Mac OS X, MorphOS, etc (when talking about a FLOSS topic)
  • Software development for the desktop
  • Development tools
  • Applications that enhance desktops
  • General desktop matters
  • Cross-platform software development
  • Web
  • Thin clients, desktop virtualization, etc

Talks can be very specific, such as the advantages/disadvantages of distributing a desktop application with snap vs flatpak, or as general as using HTML5 technologies to develop native applications.

Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2016 schedule might give you some inspiration.


Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed along with around 400 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)
  • Requested time: from 15 to 45 minutes. Normal duration is 30 minutes. Longer duration requests must be properly justified. You may be assigned LESS time than you request.
How to submit

All submissions are made in the Pentabarf event planning tool:

To submit your talk, click on “Create Event”, then make sure to select the “Desktops” devroom as the “Track”. Otherwise your talk will not even be considered for any devroom at all.

If you already have a Pentabarf account from a previous year, even if your talk was not accepted, please reuse it. Create an account if, and only if, you don’t have one from a previous year. If you have any issues with Pentabarf, please contact


The deadline for submissions is December 5th 2016.

FOSDEM will be held on the weekend of 4 & 5 February 2017 and the Desktops DevRoom will take place on Sunday, February 5th 2017.

We will contact every submitter with a “yes” or “no” before December 11th 2016.

Recording permission

The talks in the Desktops DevRoom will be audio and video recorded, and possibly streamed live too.

In the “Submission notes” field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example:

“If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, <NAME>.”

If you want us to stop the recording in the Q & A part (should you have one), please tell us. We can do that but only for the Q & A part.

More information

The official communication channel for the Desktops DevRoom is its mailing list

Use this page to manage your subscription:


The Desktops DevRoom 2017 is managed by a team representing the most notable open desktops:

  • Pau Garcia i Quiles, KDE
  • Christophe Fergeau, Gnome
  • Michael Zanetti, Unity
  • Philippe Caseiro, Enlightenment
  • Jérome Leclanche, Razor

If you want to join the team, please contact

Héctor Orón Martínez: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service

20 October, 2016 - 02:18

The previous post gave an overview of the Open Build Service software architecture. This post presents a tutorial on setting up a package build with OBS using Debian packages.


  • Generate a test environment by creating Stretch/SID VM
  • Enable experimental repository
  • Install OBS server, api, worker and osc CLI packages
  • Ensure all OBS services are running
  • Create an OBS project for Download on Demand (DoD)
  • Create an OBS project linked to DoD
  • Adding a package to the project
  • Troubleshooting OBS
Generate a test environment by creating Stretch/SID VM

Really, use whatever suits you best, but please create an untrusted test environment for this one.

This tutorial assumes “$hostname” is “stretch”; the VM should be running the stretch or sid suite.

Be aware that copying and pasting configuration files from this post may introduce broken characters (e.g. smart quotes: “).

Debian Stretch weekly netinst CD

Enable experimental repository
# echo "deb experimental main" >> /etc/apt/sources.list.d/experimental.list
# apt-get update
Install and setup OBS server, api, worker and osc CLI packages
# apt-get install obs-server obs-api obs-worker osc

During the install process a MySQL database is needed, so if no MySQL server is set up yet, a password needs to be provided.

  • When the OBS API database ‘obs-api’ is created, we need to pick a password for it; provide “opensuse”. The ‘obs-api’ package will configure the apache2 HTTPS webserver (creating a dummy certificate for “stretch”) to serve the OBS web UI.
  • Add “stretch” and “obs” aliases to the “localhost” entry in your /etc/hosts file.
  • Enable the worker by setting ENABLED=1 in /etc/default/obsworker.
  • Try to connect to the web UI: https://stretch/
  • Log into the OBS web UI; default login credentials: Admin/opensuse.
  • From the command line tool, try to list projects in OBS:

 $ osc -A https://stretch ls

Accept the dummy certificate and provide credentials (defaults: Admin/opensuse).
If the install proceeded as expected, continue to the next step.
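The /etc/hosts alias step above can be sketched as a small helper; this is only a sketch, and hosts layouts differ between systems, so try it on a copy first:

```shell
# add_obs_aliases FILE: append the "stretch" and "obs" aliases to the
# localhost entry in the given hosts file.
add_obs_aliases() {
    sed -i 's/^\(127\.0\.0\.1[[:space:]].*localhost.*\)$/\1 stretch obs/' "$1"
}

# Usage (after backing up the original):
#   sudo cp /etc/hosts /etc/hosts.bak
#   sudo sh -c '. ./aliases.sh; add_obs_aliases /etc/hosts'
```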

Ensure all OBS services are running
# backend services
obsrun     813  0.0  0.9 104960 20448 ?        Ss   08:33   0:03 /usr/bin/perl -w /usr/lib/obs/server/bs_dodup
obsrun     815  0.0  1.5 157512 31940 ?        Ss   08:33   0:07 /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun    1295  0.0  1.6 157644 32960 ?        S    08:34   0:07  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun     816  0.0  1.8 167972 38600 ?        Ss   08:33   0:08 /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
obsrun    1296  0.0  1.8 168100 38864 ?        S    08:34   0:09  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
memcache   817  0.0  0.6 346964 12872 ?        Ssl  08:33   0:11 /usr/bin/memcached -m 64 -p 11211 -u memcache -l
obsrun     818  0.1  0.5  78548 11884 ?        Ss   08:33   0:41 /usr/bin/perl -w /usr/lib/obs/server/bs_dispatch
obsserv+   819  0.0  0.3  77516  7196 ?        Ss   08:33   0:05 /usr/bin/perl -w /usr/lib/obs/server/bs_service
mysql      851  0.0  0.0   4284  1324 ?        Ss   08:33   0:00 /bin/sh /usr/bin/mysqld_safe
mysql     1239  0.2  6.3 1010744 130104 ?      Sl   08:33   1:31  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/ --socket=/var/run/mysqld/mysqld.sock --port=3306

# web services
root      1452  0.0  0.1 110020  3968 ?        Ss   08:34   0:01 /usr/sbin/apache2 -k start
root      1454  0.0  0.1 435992  3496 ?        Ssl  08:34   0:00  \_ Passenger watchdog
root      1460  0.3  0.2 651044  5188 ?        Sl   08:34   1:46  |   \_ Passenger core
nobody    1465  0.0  0.1 444572  3312 ?        Sl   08:34   0:00  |   \_ Passenger ust-router
www-data  1476  0.0  0.1 855892  2608 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1477  0.0  0.1 856068  2880 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1761  0.0  4.9 426868 102040 ?       Sl   08:34   0:29 delayed_job.0
www-data  1767  0.0  4.8 425624 99888 ?        Sl   08:34   0:30 delayed_job.1
www-data  1775  0.0  4.9 426516 101708 ?       Sl   08:34   0:28 delayed_job.2
nobody    1788  0.0  5.7 496092 117480 ?       Sl   08:34   0:03 Passenger RubyApp: /usr/share/obs/api
nobody    1796  0.0  4.9 488888 102176 ?       Sl   08:34   0:00 Passenger RubyApp: /usr/share/obs/api
www-data  1814  0.0  4.5 282576 92376 ?        Sl   08:34   0:22 delayed_job.1000
www-data  1829  0.0  4.4 282684 92228 ?        Sl   08:34   0:22 delayed_job.1010
www-data  1841  0.0  4.5 282932 92536 ?        Sl   08:34   0:22 delayed_job.1020
www-data  1855  0.0  4.9 427988 101492 ?       Sl   08:34   0:29 delayed_job.1030
www-data  1865  0.2  5.0 492500 102964 ?       Sl   08:34   1:09 clockworkd.clock
www-data  1899  0.0  0.0  87100  1400 ?        S    08:34   0:00 /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
www-data  1900  0.1  0.4 161620  8276 ?        Sl   08:34   0:51  \_ /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf

# OBS worker
root      1604  0.0  0.0  28116  1492 ?        Ss   08:34   0:00 SCREEN -m -d -c /srv/obs/run/worker/boot/screenrc
root      1605  0.0  0.9  75424 18764 pts/0    Ss+  08:34   0:06  \_ /usr/bin/perl -w ./bs_worker --hardstatus --root /srv/obs/worker/root_1 --statedir /srv/obs/run/worker/1 --id stretch:1 --reposerver http://obs:5252 --jobs 1
Create an OBS project for Download on Demand (DoD)

Create a meta project file:

$ osc -A https://stretch:443 meta prj Debian:8 -e

<project name="Debian:8">
  <title>Debian 8 DoD</title>
  <description>Debian 8 DoD</description>
  <person userid="Admin" role="maintainer"/>
  <repository name="main">
    <download arch="x86_64" url="" repotype="deb"/>
  </repository>
</project>

Visit webUI to check project configuration

Create a meta project configuration file:

$ osc -A https://stretch:443 meta prjconf Debian:8 -e

Add the following file, as found at

Repotype: debian

# create initial user
Preinstall: base-passwd
Preinstall: user-setup

# required for preinstall images
Preinstall: perl

# preinstall essentials + dependencies
Preinstall: base-files base-passwd bash bsdutils coreutils dash debconf
Preinstall: debianutils diffutils dpkg e2fslibs e2fsprogs findutils gawk
Preinstall: gcc-4.9-base grep gzip hostname initscripts insserv libacl1
Preinstall: libattr1 libblkid1 libbz2-1.0 libc-bin libc6 libcomerr2 libdb5.3
Preinstall: libgcc1 liblzma5 libmount1 libncurses5 libpam-modules
Preinstall: libpcre3 libsmartcols1
Preinstall: libpam-modules-bin libpam-runtime libpam0g libreadline6
Preinstall: libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2
Preinstall: libslang2 libss2 libtinfo5 libustr-1.0-1 libuuid1 login lsb-base
Preinstall: mount multiarch-support ncurses-base ncurses-bin passwd perl-base
Preinstall: readline-common sed sensible-utils sysv-rc sysvinit sysvinit-utils
Preinstall: tar tzdata util-linux zlib1g

Runscripts: base-passwd user-setup base-files gawk

VMinstall: libdevmapper1.02.1

Order: user-setup:base-files

# Essential packages (this should also pull the dependencies)
Support: base-files base-passwd bash bsdutils coreutils dash debianutils
Support: diffutils dpkg e2fsprogs findutils grep gzip hostname libc-bin 
Support: login mount ncurses-base ncurses-bin perl-base sed sysvinit 
Support: sysvinit-utils tar util-linux

# Build-essentials
Required: build-essential
Prefer: build-essential:make

# build script needs fakeroot
Support: fakeroot
# lintian support would be nice, but breaks too much atm
#Support: lintian

# helper tools in the chroot
Support: less kmod net-tools procps psmisc strace vim

# everything below same as for Debian:6.0 (apart from the version macros ofc)

# circular dependendencies in openjdk stack
Order: openjdk-6-jre-lib:openjdk-6-jre-headless
Order: openjdk-6-jre-headless:ca-certificates-java

Keep: binutils cpp cracklib file findutils gawk gcc gcc-ada gcc-c++
Keep: gzip libada libstdc++ libunwind
Keep: libunwind-devel libzio make mktemp pam-devel pam-modules
Keep: patch perl rcs timezone

Prefer: cvs libesd0 libfam0 libfam-dev expect

Prefer: gawk locales default-jdk
Prefer: xorg-x11-libs libpng fam mozilla mozilla-nss xorg-x11-Mesa
Prefer: unixODBC libsoup glitz java-1_4_2-sun gnome-panel
Prefer: desktop-data-SuSE gnome2-SuSE mono-nunit gecko-sharp2
Prefer: apache2-prefork openmotif-libs ghostscript-mini gtk-sharp
Prefer: glib-sharp libzypp-zmd-backend mDNSResponder

Prefer: -libgcc-mainline -libstdc++-mainline -gcc-mainline-c++
Prefer: -libgcj-mainline -viewperf -compat -compat-openssl097g
Prefer: -zmd -OpenOffice_org -pam-laus -libgcc-tree-ssa -busybox-links
Prefer: -crossover-office -libgnutls11-dev

# alternative pkg-config implementation
Prefer: -pkgconf
Prefer: -openrc
Prefer: -file-rc

Conflict: ghostscript-library:ghostscript-mini

Ignore: sysvinit:initscripts

Ignore: aaa_base:aaa_skel,suse-release,logrotate,ash,mingetty,distribution-release
Ignore: gettext-devel:libgcj,libstdc++-devel
Ignore: pwdutils:openslp
Ignore: pam-modules:resmgr
Ignore: rpm:suse-build-key,build-key
Ignore: bind-utils:bind-libs
Ignore: alsa:dialog,pciutils
Ignore: portmap:syslogd
Ignore: fontconfig:freetype2
Ignore: fontconfig-devel:freetype2-devel
Ignore: xorg-x11-libs:freetype2
Ignore: xorg-x11:x11-tools,resmgr,xkeyboard-config,xorg-x11-Mesa,libusb,freetype2,libjpeg,libpng
Ignore: apache2:logrotate
Ignore: arts:alsa,audiofile,resmgr,libogg,libvorbis
Ignore: kdelibs3:alsa,arts,pcre,OpenEXR,aspell,cups-libs,mDNSResponder,krb5,libjasper
Ignore: kdelibs3-devel:libvorbis-devel
Ignore: kdebase3:kdebase3-ksysguardd,OpenEXR,dbus-1,dbus-1-qt,hal,powersave,openslp,libusb
Ignore: kdebase3-SuSE:release-notes
Ignore: jack:alsa,libsndfile
Ignore: libxml2-devel:readline-devel
Ignore: gnome-vfs2:gnome-mime-data,desktop-file-utils,cdparanoia,dbus-1,dbus-1-glib,krb5,hal,libsmbclient,fam,file_alteration
Ignore: libgda:file_alteration
Ignore: gnutls:lzo,libopencdk
Ignore: gnutls-devel:lzo-devel,libopencdk-devel
Ignore: pango:cairo,glitz,libpixman,libpng
Ignore: pango-devel:cairo-devel
Ignore: cairo-devel:libpixman-devel
Ignore: libgnomeprint:libgnomecups
Ignore: libgnomeprintui:libgnomecups
Ignore: orbit2:libidl
Ignore: orbit2-devel:libidl,libidl-devel,indent
Ignore: qt3:libmng
Ignore: qt-sql:qt_database_plugin
Ignore: gtk2:libpng,libtiff
Ignore: libgnomecanvas-devel:glib-devel
Ignore: libgnomeui:gnome-icon-theme,shared-mime-info
Ignore: scrollkeeper:docbook_4,sgml-skel
Ignore: gnome-desktop:libgnomesu,startup-notification
Ignore: python-devel:python-tk
Ignore: gnome-pilot:gnome-panel
Ignore: gnome-panel:control-center2
Ignore: gnome-menus:kdebase3
Ignore: gnome-main-menu:rug
Ignore: libbonoboui:gnome-desktop
Ignore: postfix:pcre
Ignore: docbook_4:iso_ent,sgml-skel,xmlcharent
Ignore: control-center2:nautilus,evolution-data-server,gnome-menus,gstreamer-plugins,gstreamer,metacity,mozilla-nspr,mozilla,libxklavier,gnome-desktop,startup-notification
Ignore: docbook-xsl-stylesheets:xmlcharent
Ignore: liby2util-devel:libstdc++-devel,openssl-devel
Ignore: yast2:yast2-ncurses,yast2-theme-SuSELinux,perl-Config-Crontab,yast2-xml,SuSEfirewall2
Ignore: yast2-core:netcat,hwinfo,wireless-tools,sysfsutils
Ignore: yast2-core-devel:libxcrypt-devel,hwinfo-devel,blocxx-devel,sysfsutils,libstdc++-devel
Ignore: yast2-packagemanager-devel:rpm-devel,curl-devel,openssl-devel
Ignore: yast2-devtools:perl-XML-Writer,libxslt,pkgconfig
Ignore: yast2-installation:yast2-update,yast2-mouse,yast2-country,yast2-bootloader,yast2-packager,yast2-network,yast2-online-update,yast2-users,release-notes,autoyast2-installation
Ignore: yast2-bootloader:bootloader-theme
Ignore: yast2-packager:yast2-x11
Ignore: yast2-x11:sax2-libsax-perl
Ignore: openslp-devel:openssl-devel
Ignore: java-1_4_2-sun:xorg-x11-libs
Ignore: java-1_4_2-sun-devel:xorg-x11-libs
Ignore: kernel-um:xorg-x11-libs
Ignore: tetex:xorg-x11-libs,expat,fontconfig,freetype2,libjpeg,libpng,ghostscript-x11,xaw3d,gd,dialog,ed
Ignore: yast2-country:yast2-trans-stats
Ignore: susehelp:susehelp_lang,suse_help_viewer
Ignore: mailx:smtp_daemon
Ignore: cron:smtp_daemon
Ignore: hotplug:syslog
Ignore: pcmcia:syslog
Ignore: avalon-logkit:servlet
Ignore: jython:servlet
Ignore: ispell:ispell_dictionary,ispell_english_dictionary
Ignore: aspell:aspel_dictionary,aspell_dictionary
Ignore: smartlink-softmodem:kernel,kernel-nongpl
Ignore: OpenOffice_org-de:myspell-german-dictionary
Ignore: mediawiki:php-session,php-gettext,php-zlib,php-mysql,mod_php_any
Ignore: squirrelmail:mod_php_any,php-session,php-gettext,php-iconv,php-mbstring,php-openssl

Ignore: simias:mono(log4net)
Ignore: zmd:mono(log4net)
Ignore: horde:mod_php_any,php-gettext,php-mcrypt,php-imap,php-pear-log,php-pear,php-session,php
Ignore: xerces-j2:xml-commons-apis,xml-commons-resolver
Ignore: xdg-menu:desktop-data
Ignore: nessus-libraries:nessus-core
Ignore: evolution:yelp
Ignore: mono-tools:mono(gconf-sharp),mono(glade-sharp),mono(gnome-sharp),mono(gtkhtml-sharp),mono(atk-sharp),mono(gdk-sharp),mono(glib-sharp),mono(gtk-sharp),mono(pango-sharp)
Ignore: gecko-sharp2:mono(glib-sharp),mono(gtk-sharp)
Ignore: gnome-libs:libgnomeui
Ignore: nautilus:gnome-themes
Ignore: gnome-panel:gnome-themes
Ignore: gnome-panel:tomboy

Substitute: utempter

%ifnarch s390 s390x ppc ia64
Substitute: java2-devel-packages java-1_4_2-sun-devel
 %ifnarch s390x
Substitute: java2-devel-packages java-1_4_2-ibm-devel
Substitute: java2-devel-packages java-1_4_2-ibm-devel xorg-x11-libs-32bit

Substitute: yast2-devel-packages docbook-xsl-stylesheets doxygen libxslt perl-XML-Writer popt-devel sgml-skel update-desktop-files yast2 yast2-devtools yast2-packagemanager-devel yast2-perl-bindings yast2-testsuite

# SUSE compat mappings
Substitute: gcc-c++ gcc
Substitute: libsigc++2-devel libsigc++-2.0-dev
Substitute: glibc-devel-32bit
Substitute: pkgconfig pkg-config

%ifarch %ix86
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-bigsmp kernel-debug kernel-um kernel-xen kernel-kdump
%ifarch ia64
Substitute: kernel-binary-packages kernel-default kernel-debug
%ifarch x86_64
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-xen kernel-kdump
%ifarch ppc
Substitute: kernel-binary-packages kernel-default kernel-kdump kernel-ppc64 kernel-iseries64
%ifarch ppc64
Substitute: kernel-binary-packages kernel-ppc64 kernel-iseries64
%ifarch s390
Substitute: kernel-binary-packages kernel-s390
%ifarch s390x
Substitute: kernel-binary-packages kernel-default

%define debian_version 800

%debian_version 800

Visit webUI to check project configuration

Create an OBS project linked to DoD
$ osc -A https://stretch:443 meta prj test -e

<project name="test">
  <person userid="Admin" role="maintainer"/>
  <repository name="Debian_8.0">
    <path project="Debian:8" repository="main"/>
  </repository>
</project>

Visit webUI to check project configuration

Adding a package to the project

$ osc -A https://stretch:443 co test ; cd test
$ mkdir hello ; cd hello ; apt-get source -d hello ; cd -
$ osc add hello
$ osc ci -m "New import" hello

The package should go to the “dispatched” state, then to “blocked” while it downloads build dependencies via the DoD link, and eventually it should start building. Please check the journal logs if something goes wrong or gets stuck.

Visit webUI to check hello package build state
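The build state can also be watched from the command line. A small sketch, assuming the usual three-column output (repository, architecture, state) that `osc results` prints:

```shell
# count_state STATE: count how many repository/arch combinations of a
# package are in the given state, reading `osc results` output on stdin.
count_state() {
    awk -v s="$1" '$3 == s { n++ } END { print n + 0 }'
}

# Example against the tutorial instance (not run here):
#   osc -A https://stretch results test hello | count_state succeeded
```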

OBS logging to the journal

Check in the journal logs that everything went fine:

$ sudo journalctl -u obsdispatcher.service -u obsdodup.service -u obsscheduler@x86_64.service -u obsworker.service -u obspublisher.service

Currently we are facing a few issues with the web UI:

And there are more issues that have not been reported yet, so please do run ‘reportbug obs-api’.

Héctor Orón Martínez: Open Build Service in Debian needs YOU! ☞

19 October, 2016 - 23:26

“Open Build Service is a generic system to build and distribute packages from sources in an automatic, consistent and reproducible way.”


openSUSE distributions’ build system is based on a generic framework named Open Build Service (OBS). I have been using these tools in my work environment, and I have to say, as a Debian developer, that it is a great tool. In this blog post I plan to teach you the very basics of the tool and provide a tutorial to get, at least, a Debian package building.


Fig 1 – Open Build Service Architecture

The figure above shows the Open Build Service (from now on, OBS) software architecture. There are several parts which we should differentiate:

  • Web UI / API (obs-api)
  • Backend (obs-server)
  • Build daemon / worker (obs-worker)
  • CLI tool to manage API (osc)

Each one of the above packages can be installed on separate machines as a distributed architecture; it is very easy to split the system into several machines running the services. However, in the tutorial below everything is installed on one machine.


The backend is composed of several scripts written either in shell or Perl. There are several services running in the backend:

  • Source service
  • Repository service
  • Scheduler service
  • Dispatcher service
  • Warden service
  • Publisher service
  • Signer service
  • DoD service

The backend manages source packages (in any format, such as RPM, DEB, …) and schedules them for a build on the worker. Once the package is built, it can be published in a repository for the wider audience or kept unpublished and used by other builds.


The system can have several worker machines, which are in charge of performing the package builds. There are different options that can be configured (see /etc/default/obsworker), such as the enabling switch, the number of worker instances, and the number of jobs per instance. This part of the system is written in shell and/or Perl.


The frontend allows in a clickable way to get around most options OBS provides: setup projects, upload/branch/delete packages, submit review requests, etc. As an example, you can see a live instance running at

The frontend parts are really a Ruby on Rails web application. We (mainly thanks to Andrew Lee, with the Ruby team's help) have tried to get it running nicely, however we have had lots of issues due to JavaScript or Rubygems malfunctioning. The current web UI is visible and provides some package status, however most actions do not work properly: configurations cannot be applied as the editor does not save changes, and projects or packages in a project are not listed either. If you are a Ruby on Rails expert, or if you are able to help us out with some of the web UI issues we get in Debian, that would be really appreciated from our side.


OSC is a managing command line tool, written in Python, that interfaces with OBS API to be able to perform actions, edit configurations, do package reviews, etc.


Now that we have done a general overview of the system, let me introduce you to OBS with a practical tutorial.

TUTORIAL: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2016

19 October, 2016 - 17:29

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, about 152 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Balint Reczey did 15 hours (out of 12.25 hours allocated + 7.25 remaining, thus keeping 4.5 extra hours for October).
  • Ben Hutchings did 6 hours (out of 12.3 hours allocated + 1.45 remaining, he gave back 7h and thus keeps 9.75 extra hours for October).
  • Brian May did 12.25 hours.
  • Chris Lamb did 12.75 hours (out of 12.30 hours allocated + 0.45 hours remaining).
  • Emilio Pozuelo Monfort did not publish his report yet (he was allocated 12.3 hours and had 2.95 hours remaining).
  • Guido Günther did 6 hours (out of 7h allocated, thus keeping 1 extra hour for October).
  • Hugo Lefeuvre did 12 hours.
  • Jonas Meurer did 8 hours (out of 9 hours allocated, thus keeping 1 extra hour for October).
  • Markus Koschany did 12.25 hours.
  • Ola Lundqvist did 11 hours (out of 12.25 hours assigned thus keeping 1.25 extra hours).
  • Raphaël Hertzog did 12.25 hours.
  • Roberto C. Sanchez did 14 hours (out of 12.25h allocated + 3.75h remaining, thus keeping 2 extra hours).
  • Thorsten Alteholz did 12.25 hours.
Evolution of the situation

The number of sponsored hours reached 172 hours per month thanks to maxcluster GmbH joining as silver sponsor and RHX Srl joining as bronze sponsor.

We only need a couple of supplementary sponsors now to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 39 packages with a known CVE and the dla-needed.txt file 34. It’s a small bump compared to last month, but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

Kees Cook: Security bug lifetime

19 October, 2016 - 11:46

In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
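The extraction can be sketched with a couple of shell helpers; the `git describe` invocation is an assumption about how a containing release could be derived from a SHA in a kernel checkout:

```shell
# parse_break_fix: pull the introducing and fixing SHAs out of a
# "break-fix:" line (as found in the Ubuntu CVE Tracker files on stdin).
parse_break_fix() {
    awk '/break-fix:/ { print $2, $3 }'
}

# Given a kernel git checkout, a SHA could then be mapped to the first
# tagged release containing it (hypothetical usage; a "-" SHA means the
# start of git history and needs special-casing):
#   git describe --contains --match 'v*' "$sha" | sed 's/[~^].*//'
```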

Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

And here it is zoomed in to just Critical and High:

The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

  • Critical: 2 @ 3.3 years
  • High: 34 @ 6.4 years
  • Medium: 334 @ 5.2 years
  • Low: 186 @ 5.0 years

This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.
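As a sanity check, the count-weighted mean of the four severity figures above lands at about the same number:

```shell
# weighted_mean: read "count lifetime" pairs on stdin and print the
# count-weighted mean lifetime.
weighted_mean() {
    awk '{ n += $1; sum += $1 * $2 } END { printf "%.1f\n", sum / n }'
}

printf '2 3.3\n34 6.4\n334 5.2\n186 5.0\n' | weighted_mean    # prints 5.2
```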

While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or sometimes any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Michal Čihař: Gammu 1.37.90

19 October, 2016 - 11:00

Yesterday Gammu 1.37.90 was released. This release brings quite a lot of changes and is intended for testing purposes. Hopefully stable 1.38.0 will follow soon, provided I don't get negative feedback on the changes.

Besides code changes, there is one piece of news for Windows users: there is a Windows binary coming with the release. This was possible to automate thanks to AppVeyor, who provide a CI service where you can download built artifacts. Without this, I'd not be able to make this happen, as I don't have a single Windows computer :-).

Full list of changes:

  • Improved support for Huawei K3770.
  • API changes in some parameter types.
  • Fixed various Windows compilation issues.
  • Fixed several resource leaks.
  • Create outbox SMS atomically in FILES backend.
  • Removed getlocation command as we no longer fit into their usage policy.
  • Fixed call diverts on TP-LINK MA260.
  • Initial support for Oracle database.
  • Removed unused daemons, pbk and pbk_groups tables from the SMSD schema.
  • SMSD outbox entries now can have priority set in the database.
  • Added SIM IMSI to the SMSD status table.
  • Added CheckNetwork directive.
  • SMSD attempts to power on radio if disabled.
  • Fixed processing of AT unsolicited responses in some cases.
  • Fixed parsing USSD responses from some devices.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

Reproducible builds folks: Reproducible Builds: week 77 in Stretch cycle

19 October, 2016 - 07:02

What happened in the Reproducible Builds effort between Sunday October 9 and Saturday October 15 2016:

Media coverage
  • despinosa wrote a blog post on Vala and reproducibility
  • h01ger and lynxis gave a talk called "From Reproducible Debian builds to Reproducible OpenWrt, LEDE" (video, slides to be linked here any moment) at the OpenWrt Summit 2016 held in Berlin, together with ELCE, held by the Linux Foundation.
  • A discussion on debian-devel@ resulted in a nice quotable comment from Paul Wise: "(Reproducible) builds from source (with continuous rechecking) is the only way to have enough confidence that a Debian user has the freedoms promised to them by the Debian social contract."
  • Chris Lamb will present a talk at Software Freedom Kosovo on reproducible builds on Saturday 22nd October.
Documentation update

After discussions with HW42, Steven Chamberlain, Vagrant Cascadian, Daniel Shahaf, Christopher Berg, Daniel Kahn Gillmor and others, Ximin Luo has started writing up more concrete and detailed design plans for setting SOURCE_ROOT_DIR for reproducible debugging symbols, buildinfo security semantics and buildinfo security infrastructure.

Toolchain development and fixes

Dmitry Shachnev noted that our patch for #831779 has been temporarily rejected by docutils upstream; we are trying to persuade them again.

Tony Mancill uploaded javatools/0.59 to unstable containing an original patch by Chris Lamb. This fixed an issue where documentation Recommends: substvars would not be reproducible.

Ximin Luo filed bug 77985 against GCC as a prerequisite for future patches to make debugging symbols reproducible.

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • aodh/3.0.0-2 by Thomas Goirand.
  • eog-plugins/3.16.5-1 by Michael Biebl.
  • flam3/3.0.1-5 by Daniele Adriana Goulart Lopes.
  • hyphy/2.2.7+dfsg-1 by Andreas Tille.
  • libbson/1.4.1-1 by A. Jesse Jiryu Davis.
  • libmongoc/1.4.1-1 by A. Jesse Jiryu Davis.
  • lxc/1:2.0.5-1 by Evgeni Golov.
  • spice-gtk/0.33-1 by Liang Guo.
  • spice-vdagent/0.17.0-1 by Liang Guo.
  • tnef/1.4.12-1 by Kevin Coyner.

Some uploads have addressed some reproducibility issues, but not all of them:

Some uploads have addressed nearly all reproducibility issues, except for build path issues:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

101 package reviews have been added, 49 have been updated and 4 have been removed this week, adding to our knowledge about identified issues.

3 issue types have been updated:

Weekly QA work

In the course of reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Anders Kaseorg (1)
  • Chris Lamb (18)


  • h01ger has turned off the "Scheduled in testing+unstable+experimental" regular IRC notifications and turned them into emails to those running jenkins.d.n.
  • Re-add opi2a armhf node and 3 new builder jobs for a total of 60 build jobs for armhf. (h01ger and vagrant)
  • vagrant suggested adding a variation of init systems affecting the build, and h01ger added it to the TODO list.
  • Steven Chamberlain submitted a patch so that all buildinfo files are now collected (still unsigned) at
  • Holger enabled CPU type variation (Intel Haswell or AMD Opteron 62xx) for i386. Thanks to for their great and continued support!


  • Increased memory on the two build nodes from 12GB to 16GB, thanks to

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

Enrico Zini: debtags and aptitude forget-new

18 October, 2016 - 15:25

I like to regularly go through the new packages section in aptitude to see what interesting new packages entered testing, but recently that joyful moment got less joyful for me because of a barrage of obscurely named packages.

I have just realised that aptitude forget-new supports search patterns, and that brought back the joy.

I put this in a script that I run before looking for new packages in aptitude:

aptitude forget-new '?tag(field::biology)
                   | ?tag(devel::lang:ruby)
                   | ?tag(devel::lang:perl)
                   | ?tag(role::shared-lib)
                   | ?tag(suite::openstack)
                   | ?tag(implemented-in::php)
                   | ~n^node-'

The actual content of the search pattern is purely a matter of taste.

I'm happy to see how debtags becomes quite useful here, to keep my own user experience manageable as the size of Debian keeps growing.

MJ Ray: Rinse and repeat

18 October, 2016 - 11:28

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web.

Dirk Eddelbuettel: gettz 0.0.2

18 October, 2016 - 09:16

Release 0.0.2 of gettz is now on CRAN.

gettz provides a possible fallback for situations where Sys.timezone() fails to determine the system timezone. That can happen when, e.g., the file /etc/localtime is somehow not a link into the corresponding zoneinfo data file under, say, /usr/share/zoneinfo.

Windows is now no longer excluded, though it doesn't do anything useful yet. The main use of the package is still for Linux.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: pgpcontrol 2.5

18 October, 2016 - 06:36

pgpcontrol is the collection of the original signing and verification scripts that David Lawrence wrote (in Perl) for verification of Usenet control messages. I took over maintenance of it, with a few other things, but haven't really done much with it. It would benefit a lot from an overhaul of both the documentation and the code, and turning it into a more normal Perl module and supporting scripts.

This release is none of those things. It's just pure housekeeping, picking up changes made by other people (mostly Julien ÉLIE) to the copies of the scripts in INN and making a few minor URL tweaks. But I figured I may as well release it rather than distribute old versions of the scripts.

You can tell how little I've done with this stuff by noting that they don't even have a distribution page on my web site. The canonical distribution site is, although I'm not sure if that site will pick up the new release. (This relies on a chain of rsync commands that have been moved multiple times since the last time I pushed the release button, and I suspect that has broken.) I'll ping someone about possibly fixing that; in the meantime, you can find the files on


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.