Planet Debian


Pau Garcia i Quiles: FOSDEM Desktops DevRoom 2016

22 October, 2015 - 23:45

It is now official: KDE will be present again at FOSDEM in the 2016 edition, on the 30th and 31st of January, 2016.

Talks will take place in the Desktops DevRoom on Sunday the 31st, but not exclusively: in past years there were also Qt- and KDE-related talks in the mobile devroom, the lightning talks, the distributions and open document editors devrooms, and more.

KDE will be sharing the room with other desktop environments, as usual: Gnome, Unity, Enlightenment, Razor, etc. Representatives from those communities will be helping me in managing and organizing the devroom: Christophe Fergeau, Michael Zanetti, Philippe Caseiro and Jérome Leclanche.

I would like to extend the invitation to any other free/open source desktop environment and related projects. Check last year’s schedule for an example. Closed-source shops (Microsoft, Apple, Oracle, etc) are ALSO invited, provided that you talk about something related to open source.

We will publish the Call for Talks for the Desktops DevRoom 2016 soon. Stay tuned.

In the meanwhile, you can subscribe to the Desktops DevRoom mailing list to receive important and useful information, and to discuss FOSDEM and issues specific to the Desktops DevRoom.

Russ Allbery: Review: What If?

22 October, 2015 - 10:20

Review: What If?, by Randall Munroe

Publisher: Houghton Mifflin Harcourt
Copyright: 2014
ISBN: 0-544-27299-4
Format: Hardcover
Pages: 295

This is another one of those reviews that's somewhat pointless to write, at least beyond telling people who for some strange reason aren't xkcd readers that this is a thing that exists in the world. What If? is a collection of essays from that feature on the xkcd web site and new essays in the same vein. (Over half are new to this collection.) If you've read them, you know what to expect; if you haven't, and have any liking at all for odd scientific facts or stick figures, you're in for a treat.

So, short review: The subtitle is Serious Scientific Answers to Absurd Hypothetical Questions, and it's exactly what it says on the tin, except that "serious" includes a healthy dose of trademark xkcd humor. Go read the web site for numerous samples of Munroe's essay style. If you like what you see, this is a whole book of that: a nice, high-quality hardcover (at least the edition I bought), featuring the same mix of text and cartoon commentary, and with new (and in some cases somewhat longer) material. You probably now have all the information necessary to make a purchasing decision.

If you need more motivation, particularly to buy a physical copy, the inside of the dust jacket of the hardcover is a detailed, labeled map of the world after a drain in the Marianas Trench has emptied most of the oceans onto Mars. And the book inside the dust jacket is embossed with what happens after the dinosaur on the cover is lowered into, or at least towards, the Great Pit of Carkoon. This made me particularly happy, since too often hardcovers inside the dust jacket look just like every other hardcover except for the spine lettering. Very few of them have embossed Star Wars references.

Personally, I think that's a great reason to buy the hardcover even if, like me, you've been following What If? on the web religiously since it started. But of course the real draw is the new material. There's enough of it that I won't try any sort of comprehensive list, but rest assured that it's of equal or better quality than the web-published essays we know and love. My favorite of the new pieces is the answer to the question "what would happen if you made a periodic table out of cube-shaped bricks, where each brick was made of the corresponding element?" As with so many What If? questions, it starts with killing everyone in the vicinity, and then things get weird.

Another nice touch in this collection is what I'd call "rejected questions": questions that people submitted but that didn't inspire an essay. Most of these (I wish all of them did) get a single-cartoon reaction to the question itself, and these include some of the funniest (and most touching) panels in the book.

Ebook formatting has gotten much better, so there's some hope that at least some platforms could do justice to this book with its embedded cartoons. Putting the footnotes properly at the bottom of each page (thank you!) might be a challenge, though. Writing mixed with art is one of the things I think benefits greatly from a physical copy, and the hardcover is a satisfying and beautiful artifact. (I see there's also an audio book, but I'm not sure how well that could work; so much of the joy of What If? is the illustrations, and I'm dubious that one could adequately describe them.) Prior web readers will be relieved to know that the mouse-over text is preserved as italic captions under the cartoons, although sadly most cartoons are missing captions. (As I recall, that's also the case for the early web What If? essays, but later essays have mouse-over text for nearly every cartoon.)

Anyway, this is a thing that exists. If you follow xkcd, you probably knew that already, given that the book was published last year and I only now got around to reading it. (My current backlog is... impressive.) If you were not previously aware of What If? or of xkcd itself, now you are, and I envy you the joy of discovery. A short bit of reading will tell you for certain whether this is something you want to purchase. If your relationship to physics is at all similar to mine, I suspect the answer will be yes.

A small personal note: I just now realized how much the style of What If? resembles the mixed text and illustrations of One Two Three... Infinity. Given how foundational that book was to my love of obscure physics facts, my love of What If? is even less surprising.

Rating: 10 out of 10

Joey Hess: propellor orchestration

22 October, 2015 - 08:02

With the disclaimer that I don't really know much about orchestration, I have added support for something resembling it to Propellor.

Until now, when using propellor to manage a bunch of hosts, you updated them one at a time by running propellor --spin $somehost, or maybe you set up a central git repository, and a cron job to run propellor on each host, pulling changes from git.

I like both of these ways to use propellor, but they only go so far...

  • Perhaps you have a lot of hosts, and would like to run propellor on them all concurrently.

      master = host ""
          & concurrently conducts alotofhosts
  • Perhaps you want to run propellor on your dns server last, so when you add a new webserver host, it gets set up and working before the dns is updated to point to it.

      master = host ""
          & conducts webservers
              `before` conducts dnsserver
  • Perhaps you have something more complex, with multiple subnets that propellor can run in concurrently, finishing up by updating that dnsserver.

      master = host ""
          & concurrently conducts [sub1, sub2]
              `before` conducts dnsserver
      sub1 = ""
          & concurrently conducts webservers
          & conducts loadbalancers
      sub2 = ""
          & conducts dockerservers
  • Perhaps you need to first run some command that creates a VPS host, and then want to run propellor on that host to set it up.

      vpscreate h = cmdProperty "vpscreate" [hostName h]
          `before` conducts h

All those scenarios are supported by propellor now!

Well, I haven't actually implemented concurrently yet, but the point is that the conducts property can be used with any of propellor's property combinators, like before etc, to express all kinds of scenarios.

The conducts property works in combination with an orchestrate function to set up all the necessary stuff to let one host ssh into another and run propellor there.

main = defaultMain (orchestrate hosts)

hosts = 
    [ master
    , webservers 
    , ...
    ]

The orchestrate function does a bunch of stuff:

  • Builds up a graph of what conducts what.
  • Removes any cycles that might have snuck in by accident, before they cause foot shooting.
  • Arranges for the ssh keys to be accepted as necessary.
    Note that you need to add ssh key properties to all relevant hosts so it knows what keys to trust.
  • Arranges for the private data of a host to be provided to the hosts that conduct it, so they can pass it along.

I'm very pleased that I was able to add the Propellor.Property.Conductor module implementing this with only a tiny change to the rest of propellor. Almost everything needed to implement it was already there in propellor's infrastructure.

Also kind of cool that it only needed 13 lines of imperative code, the other several hundred lines of the implementation being all pure code.

Daniel Pocock: A mission statement for free real-time communications

22 October, 2015 - 00:51

At FOSDEM 2013, the RTC panel in the main track used the tag line "Can we finally replace Skype, Viber, Twitter and Facebook?"

Does replacing something else have the right ring to it though? Or does such a goal create a negative impression? Even worse, does the term Skype replacement fall short of being a mission statement that gives people direction and sets expectations of what we would actually like to replace it with?

Towards a clearer definition

Let's consider what a positive statement might look like:

Making it as easy as possible to make calls to other people and to receive calls from other people for somebody who chooses only to use genuinely Free Software, open standards, a free choice of service providers and a credible standard of privacy.

If you agree with this or if you feel you can propose a more precise statement, please come and share your thoughts on the Free RTC email list.

The value of a mission statement should not be underestimated. With the right mission statement, it should be possible to envision what the future will look like if we succeed and also if we don't. With the vision of success in mind, it should be easier for developers and the wider free software community to identify the steps that must be taken to make progress.

DebConf team: DebConf15 dates are set, come and join us! (Posted by DebConf15 team)

21 October, 2015 - 19:34

At DebConf14 in Portland, Oregon, USA, next year’s DebConf team presented their conference plans and announced the conference dates: DebConf15 will take place from 15 to 22 August 2015 in Heidelberg, Germany. On the Open Weekend on 15/16 August, we invite members of the public to participate in our wide offering of content and events, before we dive into the more technical part of the conference during the following week. DebConf15 will also be preceded by DebCamp, a time and place for teams to gather for intensive collaboration.

A set of slides from a quick show-case during the DebConf14 closing ceremony provides an overview of what you can expect next year. For more in-depth information, we invite you to watch the video recording of the full session, in which the team provides detailed information on the preparations so far, location and transportation to the venue at Heidelberg, the different rooms and areas at the Youth Hostel (for accommodation, hacking, talks, and social activities), details about the infrastructure that is being worked on, and the plans around the conference schedule.

We invite everyone to join us in organising this conference. There are different areas where your help could be very valuable, and we are always looking forward to your ideas. Have a look at our wiki page, join our IRC channels and subscribe to our mailing lists.

We are also contacting potential sponsors from all around the globe. If you know any organisation that could be interested, please consider handing them our sponsorship brochure or contact the fundraising team with any leads.

Let’s work together, as every year, on making the best DebConf ever!

Russell Coker: LUV Server Upgrade to Jessie

21 October, 2015 - 12:08

On Sunday night I started the process of upgrading the LUV server to Debian/Jessie from Debian/Wheezy. My initial plan was to just upgrade Apache first but dependencies required upgrading systemd too.

One problem I’ve encountered in the past is that the Wheezy version of systemd will often hang on an upgrade to a newer version. Generally the solution to this is to run “systemctl daemon-reexec” from another terminal. The problem in this case was that not all the libraries needed for systemd had been installed, so systemd could re-exec itself but immediately aborted. The kernel really doesn’t like it when process 1 aborts repeatedly and apparently immediately hanging is the result. At the time I didn’t know this, all I knew was that my session died and the server stopped responding to pings immediately after I requested a reexec.

The LUV server is hosted at VPAC for free. As their staff have actual work to do they couldn’t spend a lot of time working on the LUV server. They told me that the screen was flickering and suspected a VGA cable. I got to the VPAC server room with the spare LUV server (LUV had been given 3 almost identical Sun servers from Barwon Water) at 16:30. By 17:30 I had fixed the core problem (boot with “init=/bin/bash“, mount the root filesystem rw, finish the upgrade of systemd and its dependencies, and then reboot normally). That got it into a stage where the Xen server for Wikimedia Au was working but most LUV functionality wasn’t working.
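In outline, that recovery looks something like the following sketch (the exact package set to fix up will differ from system to system):

    # append init=/bin/bash to the kernel command line at the boot loader prompt
    mount -o remount,rw /
    dpkg --configure -a    # finish the half-configured systemd upgrade
    apt-get -f install     # pull in the libraries that were still missing
    sync
    reboot -f              # then boot normally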

By 23:00 on Monday I had the full list server functionality working for users; this is the main feature that users want when it’s not near a meeting time. I can’t remember whether it was Monday night or Tuesday morning when I got the Drupal site going (the main LUV web site). Last night at midnight I got the last of the Mailman administrative interface going. I admit I could have got it going a bit earlier by putting SE Linux in permissive mode, but I don’t think that the members would have benefited from that (I’ll upload a SE Linux policy package that gets Mailman working on Jessie soon).

Now it’s Wednesday and I’m still fixing some cron jobs. Along the way I noticed some problems with excessive disk space use that I’m fixing now and I’ve also removed some Wikimedia related configuration files that were obsolete and would have prevented anyone from using a address to subscribe to the LUV mailing lists.

Now I believe that everything is working correctly and generally working better than before.

Lessons Learned

While Sunday night wasn’t a bad time to start the upgrade, it wasn’t the best. If I had started the upgrade on Monday morning there would have been less down-time. Another possibility might have been to do the upgrade while near the VPAC office during business hours: I could have started the upgrade at a nearby cafe and then visited the server room immediately if something went wrong.

Doing an upgrade on a day when there’s no meeting within a week was a good choice. It wasn’t really a conscious choice as I’m usually doing other LUV work near the meeting day which precludes doing other LUV work that doesn’t need to be done soon. But in future it would be best to consciously plan upgrades for a date when users aren’t going to need the service much.

While the Wheezy systemd bug is unlikely to ever be fixed there are work-arounds that shouldn’t result in a broken server. At the moment it seems that the best option would be to kill -9 the systemctl processes that hang until the packages that systemd depends on are installed. The problem is that the upgrade hangs while the new systemctl tries to tell the old systemd to restart daemons. If we can get past that to the stage where the shared objects are installed then it should be ok.
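One way to do that from a second terminal might be something like this (untested, just the idea):

    # while the upgrade runs, keep killing the systemctl invocations that hang
    while pgrep -x systemctl > /dev/null; do
        pkill -9 -x systemctl
        sleep 5
    done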

The Apache upgrade from 2.2.x to 2.4.x changed the operation of some access control directives and it took me some time to work out how to fix that. Doing a Google search on the differences between those would have led me to the Apache document about upgrading from 2.2 to 2.4 [1]. That wouldn’t have prevented some down-time of the web sites but would have allowed me to prepare for it and to more quickly fix the problems when they became apparent. Also the rather confusing configuration of the LUV server (supporting many web sites that are no longer used) didn’t help things. I think that removing cruft from an installation before an upgrade would be better than waiting until after things break.
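For reference, the main change in question is the access control syntax; the old 2.2 style directives generally need to be rewritten along these lines (an illustrative snippet, not the actual LUV configuration):

    # Apache 2.2
    Order allow,deny
    Allow from all

    # Apache 2.4
    Require all granted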

Next time I do an upgrade of such a server I’ll write notes about it while I go. That will give a better blog post about it if it becomes newsworthy enough to be blogged about and also more opportunities to learn better ways of doing it.

Sorry for the inconvenience.


Antoine Beaupré: Proprietary VDSL2 Linux routers adventures

21 October, 2015 - 11:40

I recently bought a wireless / phone adapter / VDSL modem from my Internet Service Provider (ISP) during my last outage. It generally works fine as a VDSL modem, but unfortunately, I can't seem to get used to configuring the device through their clickety web user interface... Furthermore, I am worried that I can't back up the config in a meaningful way: if the device fails, I will probably not find the same model again, and because they run a custom Linux distribution, the chances of the backup being restorable on another machine are basically zero. There is no way I will waste my time configuring this black box. So I started looking at running a distribution like OpenWRT on it.

(Unfortunately, I don't even dare hoping to run a decent operating system like Debian on those devices, if only because of the exotic chipsets that require all sorts of nasty hacks to run...)

The machine is a SmartRG SR630n (specs). I am linking to a third-party site, because the SmartRG site doesn't seem to know about their own product (!). I paid extra for this device to get one that would do both Wifi and VoIP, so I could replace two machines: my current Soekris net5501 router and a Cisco ATA 186 phone adapter that seems to mysteriously defy the challenges of time. (I don't remember when I got that thing, but it's at least from 2006.)

Unfortunately, it seems that SmartRG are running a custom, proprietary Linux distribution. According to my ISP, init is a complete rewrite that reads an XML config file (and indeed that is the format of the backup files) and does the configuration through a shared memory scheme (!?). According to DSL reports, the device seems to be running a Broadcom 63168 SOC (system on a chip) that is unsupported in Linux. There are some efforts to write drivers for it from scratch, but they have been basically stalled for years now.

Here are more details on the sucker:

Now the next step would logically be to "simply" build a new image with OpenWRT and install it in place. Then I would need to figure out a way to load the binary blobs into the OpenWRT kernel and run all the ADSL utilities as well. It's basically impossible: the odds of the binary modules being compatible with another arbitrary release of the Linux kernel are near zero. Furthermore, the userland tools are most likely custom as well. And worst of all: it seems that Bell Canada deployed a custom "Lucent Stinger" DSLAM which requires a custom binary firmware in the modem. This could be why the SmartRG is so bizarre in the first place. As long as the other end is non-standard, we are all screwed. And those Stinger DSLAMs will stick around for a long time, thanks to Bell.

See this other good explanation of Stinger.

Which means this machine is now yet another closed box sitting on the internet without firmware upgrades, totally handicapped. I will probably end up selling it back for another machine that has OpenWRT support for its VDSL modem. But there are very few such machines, and for a lot of them VDSL support is marked as "spotty" or "in progress". Some machines are supported but are basically impossible to find. The Draytek modems are also interesting because, apparently, some models run OpenWRT out of the box, which is a huge benefit. This is because they use the more open Lantiq SOC, which is probably not going to support Stinger lines.

Still, there are some very interesting projects out there... The Omnia is one I am definitely interested in right now. I really like their approach... But then they don't have a VDSL chipset in there (I asked for one, actually). And the connectors are only mini-PCIe, which makes it impossible to connect a VDSL PCI card into it.

I could find a single VDSL2 PCI card online, and it could be supported, but only annex B is available, not annex A, and it seems the network is using "annex A" according to the ADSL stats I had in 2015-05-28-anarcat-back-again. With such a card, I could use my existing Soekris net5501 router, slam a DSL card into it, and just use the SmartRG as a dumb wifi router/phone adapter. It remains to be seen how well those VDSL cards are supported in FreeBSD (they provide Linux source code, so that's cool). And of course, all this assumes the card works with the "Stinger" mode, which is probably not the case anyway. Besides, I have VDSL2 here, not the lowly ADSL2+.

By the way, Soekris keeps on pushing new interesting products out: their net6501, with 2 extra Gig-E cards could be a really interesting high-end switch, all working with free software tools.

A friend has a SmartRG 505n modem, which looks quite similar, except without the ATA connectors. And those modems are the ones that Teksavvy recommends ("You may use a Cellpipe 7130 or Sagemcom F@ST 2864 in lieu of our SmartRG SR505N for our DSL 15/10, DSL 25 or DSL 50 services."). Furthermore, Teksavvy provides a firmware update for the 505n - again, no idea if it works with the 630n. Of course, the 505n doesn't run OpenWRT either.

So, long story short, again I got screwed by my ISP: I thought I would get a pretty hackable device, "running Linux" as my ISP said over the phone. I got weeks of downtime, no refund, and while I got a better line (more reliable, higher bandwidth), my costs doubled. And I have yet another computing device to worry about: instead of simplifying and reducing waste, I actually just added crap on top of my already cluttered desk.

Next time, maybe I'll tell you about how my ISP overbilled me, broke IPv6 and drops large packets to the floor. I haven't had a response from them in months now... hopefully they will either answer and fix all of this (doubtful) or I'll switch to some other provider, probably Teksavvy.

Many thanks to the numerous people in the DSL reports Teksavvy forum that have amazing expertise. They are even building a map of Bell COs... Thanks also to Taggart for helping me figure out how the firmware images work and encouraging me to figure out how my machine works overall.

Note: all the information shared here is presented in the spirit of the fair use conditions of copyright law.

Russ Allbery: Review: The Oathbound

21 October, 2015 - 05:14

Review: The Oathbound, by Mercedes Lackey

Series: Vows and Honor #1
Publisher: DAW
Copyright: July 1988
ISBN: 0-88677-414-4
Format: Mass market
Pages: 302

This book warrants a bit of explanation.

Before Arrows of the Queen, before Valdemar (at least in terms of publication dates), came Tarma and Kethry short stories. I don't know if they were always intended to be set in the same world as Valdemar; if not, they were quickly included. But they came from another part of the world and a slightly different sub-genre. While the first two Valdemar trilogies were largely coming-of-age fantasy, Tarma and Kethry are itinerant sword-and-sorcery adventures featuring two women with a soul bond: the conventionally attractive, aristocratic mage Kethry, and the celibate, goddess-sworn swordswoman Tarma. Their first story was published, appropriately, in Marion Zimmer Bradley's Swords and Sorceress III.

This is the first book about Tarma and Kethry. It's a fix-up novel: shorter stories, bridged and re-edited, and glued together with some additional material. And it does not contain the first Tarma and Kethry story.

As mentioned in my earlier Valdemar reviews, this is a re-read, but it's been something like twenty years since I previously read the whole Valdemar corpus (as it was at the time; I'll probably re-read everything I have on hand, but it's grown considerably, and I may not chase down the rest of it). One of the things I'd forgotten is how oddly, from a novel reader's perspective, the Tarma and Kethry stories were collected. Knowing what I know now about publishing, I assume Swords and Sorceress III was still in print at the time The Oathbound was published, or the rights weren't available for some other reason, so their first story had to be omitted. Whatever the reason, The Oathbound starts with a jarring gap that's no less irritating in this re-read than it was originally.

Also as is becoming typical for this series, I remembered a lot more world-building and character development than is actually present in at least this first book. In this case, I strongly suspect most of that characterization is in Oathbreakers, which I remember as being more of a coherent single story and less of a fix-up of puzzle and adventure stories with scant time for character growth. I'll be able to test my memory shortly.

What we do get is Kethry's reconciliation of her past, a brief look at the Shin'a'in and the depth of Tarma and Kethry's mutual oath (unfortunately told more than shown), the introduction of Warrl (again, a relationship that will gain a great deal more depth later), and then some typical sword and sorcery episodes: a locked room mystery, a caravan guard adventure about which I'll have more to say later, and two rather unpleasant encounters with a demon. The material is bridged enough that it has a vague novel-like shape, but the bones of the underlying short stories are pretty obvious. One can tell this isn't really a novel even without the tell of a narrative recap in later chapters of events that you'd just read earlier in the same book.

What we also get is rather a lot of rape, and one episode of seriously unpleasant "justice."

A drawback of early Lackey is that her villains are pure evil. My not entirely trustworthy memory tells me that this moderates over time, but early stories tend to feature villains completely devoid of redeeming qualities. In this book alone one gets to choose between the rapist pedophile, the rapist lord, the rapist bandit, and the rapist demon who had been doing extensive research in Jack Chalker novels. You'll notice a theme. Most of the rape happens off camera, but I was still thoroughly sick of it by the end of the book. This was already a cliched motivation tactic when these stories were written.

Worse, as with the end of Arrow's Flight, the protagonists don't seem to be above a bit of "turnabout is fair play." When you're dealing with rape as a primary plot motivation, that goes about as badly as you might expect. The final episode here involves a confrontation that Tarma and Kethry brought entirely on themselves through some rather despicable actions, and from which they should have taken a lesson about why civilized societies have criminal justice systems. Unfortunately, despite an ethical priest who is mostly played for mild amusement, no one in the book seems to have drawn that rather obvious conclusion. This, too, I recall as getting better as the series goes along and Lackey matures as a writer, but that only helps marginally with the early books.

Some time after the publication of The Oathbound and Oathbreakers, something (presumably the rights situation) changed. Oathblood was published in 1998 and includes not only the first Tarma and Kethry story but also several of the short stories that make up this book, in (I assume) something closer to their original form. That makes The Oathbound somewhat pointless and entirely skippable. I re-read it first because that's how I first approached the series many years ago, and (to be honest) because I'd forgotten how much was reprinted in Oathblood. I'd advise a new reader to skip it entirely, start with the short stories in Oathblood, and then read Oathbreakers before reading the final novella. You'd miss the demon stories, but that's probably for the best.

I'm complaining a lot about this book, but that's partly from familiarity. If you can stomach the rape and one stunningly unethical protagonist decision, the stories that make it up are solid and enjoyable, and the dynamic between Tarma and Kethry is always a lot of fun (and gets even better when Warrl is added to the mix). I think my favorite was the locked room mystery. It's significantly spoiled by knowing the ending, and it has little deeper significance, but it's a classic sort of unembellished, unapologetic sword-and-sorcery tale that's hard to come by in books. But since it too is reprinted (in a better form) in Oathblood, there's no point in reading it here.

Followed by Oathbreakers.

Rating: 6 out of 10

Ritesh Raj Sarraf: Controlling ill behaving applications with Linux Cgroups

21 October, 2015 - 03:23

For some time, I have been wanting to read more on Linux Cgroups to explore possibilities of using them to control ill-behaving applications. Being stuck in travel at the moment has given me some time to look into it.

In our Free Software world, most things are a do-ocracy, i.e. when your use case is not the common one, it is typically you who has to explore possible solutions. It could be bugs, feature requests or, as in my case, performance issues. But that is not to assume that we do not have better quality software in the Free Software world. In fact, in my opinion, some of the tools available are far better than the competition in terms of features, and to add a sweetener (or nutritional facts) to it, Free Software liberates the user.

One of my favorite tools, for photo management, is Digikam. Digikam is a big project, very featureful, and has some functionality that may not be available in the competition. But as with most Free Software projects, Digikam is a tool which underneath consumes many more subprojects from the Free Software ecosystem.

Anyone who has used Digikam may know some of the bugs that surface in it. Not necessarily a bug in Digikam itself, but maybe in one of the underlying libraries/tools that it consumes (Exiv, libkface, marble, OpenCV, libPGF etc). But the bottom line is that the overall Digikam experience (and if I may say: the overall GNU/Linux experience) takes a hit.

Digikam has pretty powerful features for annotation, tagging, facial recognition. These features, together with Digikam, make it a compelling product. But the problem is that many of these projects are independent. Thus tight integration is a challenge. And at times, bugs can be hard to find, root cause and fix.

Let's take a real example here. If you were to use Digikam today (version 4.13.0) with annotation, tagging and facial recognition as some of the core features for your use case, you may run into a frustrating overall experience. Not just that, the bugs would also affect your overall GNU/Linux experience.

The facial recognition feature, if triggered, will eat up all your memory, thus leading you to uncover Linux's long-standing memory bug.

The tagging feature, if triggered, will again lead to frequent I/O, thus again leading to a stalled Linux system because of blocked CPU cycles, for nothing.

So one of the items on my TODO list was to explore Linux Cgroups, and see if it was cleanly possible to confine a process, so that even if it is ill-behaving (for whatever reason), your machine does not take the beating.

And now that the cgroups consumer dust has kind of settled down, systemd was my first obvious choice to look at. systemd provides a helper utility, systemd-run, for exactly such tasks. With systemd-run, you can apply the resource controller logic (typically cpu, memory and blkio) to a given process and restrict it to a certain set. You can also define what user to run the service as.

rrs@learner:/var/tmp/Debian-Build/Result$ systemd-run -p BlockIOWeight=10 find /
Running as unit run-23805.service.
2015-10-20 / 21:37:44 ♒♒♒  ☺
rrs@learner:/var/tmp/Debian-Build/Result$ systemctl status -l run-23805.service
● run-23805.service - /usr/bin/find /
   Loaded: loaded
  Drop-In: /run/systemd/system/run-23805.service.d
           └─50-BlockIOWeight.conf, 50-Description.conf, 50-ExecStart.conf
   Active: active (running) since Tue 2015-10-20 21:37:44 CEST; 6s ago
 Main PID: 23814 (find)
   Memory: 12.2M
      CPU: 502ms
   CGroup: /system.slice/run-23805.service
           └─23814 /usr/bin/find /

Oct 20 21:37:45 learner find[23814]: /proc/3/net/raw6
Oct 20 21:37:45 learner find[23814]: /proc/3/net/snmp
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/rt_cache
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/arp_cache
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/ndisc_cache
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/ip_conntrack
Oct 20 21:37:45 learner find[23814]: /proc/3/net/stat/nf_conntrack
Oct 20 21:37:45 learner find[23814]: /proc/3/net/tcp6
Oct 20 21:37:45 learner find[23814]: /proc/3/net/udp6
2015-10-20 / 21:37:51 ♒♒♒  ☺

But, out of the box, graphical applications do not work. I haven't looked, but it should be doable by giving it the correct environment details.
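My guess (untested) is that passing the display environment and the user in as unit properties would be part of the answer, something along these lines, with paths and limits only illustrative:

    systemd-run -p User=rrs \
        -p "Environment=DISPLAY=:0 XAUTHORITY=/home/rrs/.Xauthority" \
        -p MemoryLimit=2G digikam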

Underneath, systemd uses the same Linux Control Groups to limit resources for individual applications. So in cases where you have such a requirement but do not have systemd, or you want to make use of cgroups directly, it can easily be done with basic cgroups tools like cgroup-tools.

With cgroup-tools, I now have a simple cgroups hierarchy set up for my current use case, i.e. Digikam.
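Roughly, such a hierarchy can be created and used with commands along these lines (a sketch; the limit values here are only illustrative, not the exact ones I use):

    # create the parent group as root and delegate it to the user
    sudo cgcreate -a rrs:rrs -t rrs:rrs -g memory,cpu,blkio:/rrs_customCG
    # the user can then create the sub-group and set its limits
    cgcreate -g memory,cpu,blkio:/rrs_customCG/digikam
    cgset -r memory.limit_in_bytes=2G rrs_customCG/digikam
    cgset -r cpu.shares=256 rrs_customCG/digikam
    cgset -r blkio.weight=100 rrs_customCG/digikam
    # and finally start the application inside the group
    cgexec -g memory,cpu,blkio:rrs_customCG/digikam digikam

The resulting hierarchy is visible under /sys/fs/cgroup: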
rrs@learner:/var/tmp/Debian-Build/Result$ ls /sys/fs/cgroup/memory/rrs_customCG/
cgroup.clone_children           memory.kmem.tcp.limit_in_bytes      memory.numa_stat
cgroup.event_control            memory.kmem.tcp.max_usage_in_bytes  memory.oom_control
cgroup.procs                    memory.kmem.tcp.usage_in_bytes      memory.pressure_level
digikam/                        memory.kmem.usage_in_bytes          memory.soft_limit_in_bytes
memory.failcnt                  memory.limit_in_bytes               memory.stat
memory.force_empty              memory.max_usage_in_bytes           memory.swappiness
memory.kmem.failcnt             memory.memsw.failcnt                memory.usage_in_bytes
memory.kmem.limit_in_bytes      memory.memsw.limit_in_bytes         memory.use_hierarchy
memory.kmem.max_usage_in_bytes  memory.memsw.max_usage_in_bytes     notify_on_release
memory.kmem.slabinfo            memory.memsw.usage_in_bytes         tasks
memory.kmem.tcp.failcnt         memory.move_charge_at_immigrate
2015-10-20 / 21:45:38 ♒♒♒  ☺

rrs@learner:/var/tmp/Debian-Build/Result$ ls /sys/fs/cgroup/memory/rrs_customCG/digikam/
cgroup.clone_children           memory.kmem.tcp.max_usage_in_bytes  memory.oom_control
cgroup.event_control            memory.kmem.tcp.usage_in_bytes      memory.pressure_level
cgroup.procs                    memory.kmem.usage_in_bytes          memory.soft_limit_in_bytes
memory.failcnt                  memory.limit_in_bytes               memory.stat
memory.force_empty              memory.max_usage_in_bytes           memory.swappiness
memory.kmem.failcnt             memory.memsw.failcnt                memory.usage_in_bytes
memory.kmem.limit_in_bytes      memory.memsw.limit_in_bytes         memory.use_hierarchy
memory.kmem.max_usage_in_bytes  memory.memsw.max_usage_in_bytes     notify_on_release
memory.kmem.slabinfo            memory.memsw.usage_in_bytes         tasks
memory.kmem.tcp.failcnt         memory.move_charge_at_immigrate
memory.kmem.tcp.limit_in_bytes  memory.numa_stat
2015-10-20 / 21:45:53 ♒♒♒  ☺

rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/cpu/rrs_customCG/cpu.shares 
2015-10-20 / 21:48:44 ♒♒♒  ☺
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/cpu/rrs_customCG/digikam/cpu.shares 
2015-10-20 / 21:49:05 ♒♒♒  ☺
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/memory/rrs_customCG/memory.limit_in_bytes 
2015-10-20 / 22:20:14 ♒♒♒  ☺
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/memory/rrs_customCG/digikam/memory.limit_in_bytes 
2015-10-20 / 22:20:27 ♒♒♒  ☺
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/blkio/rrs_customCG/blkio.weight
2015-10-20 / 21:51:43 ♒♒♒  ☺
rrs@learner:/var/tmp/Debian-Build/Result$ cat /sys/fs/cgroup/blkio/rrs_customCG/digikam/blkio.weight
2015-10-20 / 21:51:50 ♒♒♒  ☺
The base group, $USER_customCG, needs super-admin privileges to set up. Once set up appropriately, it allows the user to further define sub-groups on their own, and to set separate limits per sub-group.

With the resource limitations in place, my overall experience on very recent hardware (Intel Haswell Core i7, 8 GiB RAM, 500 GB SSHD, 128 GB SSD) has improved considerably. It still is not perfect, but it definitely is a huge improvement over what I had to go through earlier: a machine stalled for hours.
top - 21:54:38 up 1 day,  6:46,  1 user,  load average: 7.22, 7.51, 7.37
Tasks: 299 total,   1 running, 298 sleeping,   0 stopped,   0 zombie
%Cpu0  :  7.1 us,  3.0 sy,  1.0 ni, 11.1 id, 77.8 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  6.0 us,  4.0 sy,  2.0 ni, 49.0 id, 39.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  5.0 us,  2.0 sy,  0.0 ni, 24.8 id, 68.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  5.9 us,  5.0 sy,  0.0 ni, 21.8 id, 67.3 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem : 7908.926 total,   96.449 free, 4634.922 used, 3177.555 buff/cache
MiB Swap: 8579.996 total, 3454.746 free, 5125.250 used. 2753.324 avail Mem 

PID to signal/kill [default pid = 8879] 
  PID  PPID nTH USER        PR  NI S %CPU %MEM     TIME+ COMMAND                           UID 
 8879  8868  18 rrs         20   0 S  8.2 31.2  37:44.64 digikam                          1000 
10255  9960   4 rrs         39  19 S  1.0  0.8  19:47.73 tracker-miner-f                  1000 
10157  9960   7 rrs         20   0 S  0.5  3.0  32:29.76 gnome-shell                      1000 
    7     2   1 root        20   0 S  0.2        0:53.48 rcu_sched                           0 
  401     1   1 root        20   0 S  0.2  1.3   0:54.93 systemd-journal                     0 
10269  9937   4 rrs         20   0 S  0.2  0.4   2:34.50 gnome-terminal-                  1000 
15316     1  14 rrs         20   0 S  0.2  3.7  30:05.96 evolution                        1000 
23777     2   1 root        20   0 S  0.2        0:05.73 kworker/u16:0                       0 
23814     1   1 root        20   0 D  0.2  0.0   0:02.00 find                                0 
24049     2   1 root        20   0 S  0.2        0:01.29 kworker/u16:3                       0 
24052     2   1 root        20   0 S  0.2        0:02.94 kworker/u16:4                       0 
    1     0   1 root        20   0 S       0.1   0:18.24 systemd                             0 

The reporting tools may not be entirely correct here, because from what is being reported above, I should have a stalled machine, heavily paging, with the kernel scanning its list of processes to find the best one to kill.

From this approach of jailing processes, the major side effect I can see is that the process (Digikam) is now starved of resources and will take much more time than it usually would. But in the usual case it takes up everything, and ends up starving (and getting killed) for consuming all available resources.

So I guess it is better to be on a balanced resource diet. :-)


Dirk Eddelbuettel: Rblpapi 0.3.1

20 October, 2015 - 19:01

The first update to the Rblpapi package since the initial CRAN upload in August is now available.

Rblpapi connects R to the Bloomberg system, giving access to a truly vast array of time series data and custom calculations.

This release brings a new beqs() function to access the results from Bloomberg EQS queries, improves the build system, and corrects a bug in the retrieval of multiple tick series. The changes are detailed below.

Changes in Rblpapi version 0.3.1 (2015-10-19)
  • Added beqs() to run Bloomberg Equity Screen Search, based on initial PR #79 by Rademeyer Vermaak, reworked in PRs #83 and #84 by Dirk; closes tickets #63 and #80.

  • Bloomberg header and library files are now cached locally instead of being re-downloaded for every build (PR #78 by Dirk addressing issue #65).

  • For multiple ticks, time-stamps are unique-yfied before merging (PR #77 by Dirk addressing issue #76).

  • A compiler warning under new g++ versions is now suppressed (PR #69 by John, fixing #68).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Simon Richter: Bluetooth and Dual Boot

20 October, 2015 - 11:28

I still have a dual-boot installation because sometimes I need Windows for something (usually consulting work), and I use a Bluetooth mouse.

Obviously, when I boot into Windows, it does not have the pairing information for the mouse, so I have to redo the entire pairing process, and repeat that when I get back to Linux.

So, dear lazyweb: is there a way to share Bluetooth pairing information between multiple operating system installations?

Petter Reinholdtsen: Lawrence Lessig interviewed Edward Snowden a year ago

19 October, 2015 - 16:50

Last year, Lawrence Lessig, US presidential candidate in the Democratic Party, interviewed Edward Snowden. The one-hour interview was published by Harvard Law School on Youtube on 2014-10-23, and the meeting took place on 2014-10-20.

The questions are very good, and there is lots of useful information to be learned and very interesting issues to think about being raised. Please check it out.

I find it especially interesting to hear again that Snowden did try to bring up his reservations through the official channels, without any luck. It is in sharp contrast to the answers given on 2013-11-06 by the Norwegian prime minister Erna Solberg to the Norwegian Parliament, claiming Snowden is not a whistle-blower because he should have taken up his concerns internally and through official channels. It makes me sad that this is the political leadership we have here in Norway.

Lunar: Reproducible builds: week 25 in Stretch cycle

19 October, 2015 - 01:51

What happened in the reproducible builds effort this week:

Toolchain fixes

Niko Tyni wrote a new patch adding support for SOURCE_DATE_EPOCH in Pod::Man. This would complement or replace the previously implemented POD_MAN_DATE environment variable in a more generic way.
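For context, SOURCE_DATE_EPOCH is simply a Unix timestamp exported into the build environment. For a Debian package it is typically derived from the latest changelog entry, for example:

    SOURCE_DATE_EPOCH=$(date -d "$(dpkg-parsechangelog --show-field Date)" +%s)
    export SOURCE_DATE_EPOCH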

Niko Tyni proposed a fix to prevent mtime variation in directories due to debhelper usage of cp --parents -p.

Packages fixed

The following 119 packages became reproducible due to changes in their build dependencies: aac-tactics, aafigure, apgdiff, bin-prot, boxbackup, calendar, camlmix, cconv, cdist, cl-asdf, cli-common, cluster-glue, cppo, cvs, esdl, ess, faucc, fauhdlc, fbcat, flex-old, freetennis, ftgl, gap, ghc, git-cola, globus-authz-callout-error, globus-authz, globus-callout, globus-common, globus-ftp-client, globus-ftp-control, globus-gass-cache, globus-gass-copy, globus-gass-transfer, globus-gram-client, globus-gram-job-manager-callout-error, globus-gram-protocol, globus-gridmap-callout-error, globus-gsi-callback, globus-gsi-cert-utils, globus-gsi-credential, globus-gsi-openssl-error, globus-gsi-proxy-core, globus-gsi-proxy-ssl, globus-gsi-sysconfig, globus-gss-assist, globus-gssapi-error, globus-gssapi-gsi, globus-net-manager, globus-openssl-module, globus-rsl, globus-scheduler-event-generator, globus-xio-gridftp-driver, globus-xio-gsi-driver, globus-xio, gnome-control-center, grml2usb, grub, guilt, hgview, htmlcxx, hwloc, imms, kde-l10n, keystone, kimwitu++, kimwitu-doc, kmod, krb5, laby, ledger, libcrypto++, libopendbx, libsyncml, libwps, lprng-doc, madwimax, maria, mediawiki-math, menhir, misery, monotone-viz, morse, mpfr4, obus, ocaml-csv, ocaml-reins, ocamldsort, ocp-indent, openscenegraph, opensp, optcomp, opus, otags, pa-bench, pa-ounit, pa-test, parmap, pcaputils, perl-cross-debian, prooftree, pyfits, pywavelets, pywbem, rpy, signify, siscone, swtchart, tipa, typerep, tyxml, unison2.32.52, unison2.40.102, unison, uuidm, variantslib, zipios++, zlibc, zope-maildrophost.

The following packages became reproducible after getting fixed:

Packages which could not be tested:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

Lunar reported that test strings depend on default character encoding of the build system in ongl.

The 189 packages composing the Arch Linux “core” repository are now being tested. No packages are currently reproducible, but most of the time the difference is limited to metadata. This has already gained some interest in the Arch Linux community.

An explicit log message is now visible when a build has been killed due to the 12 hours timeout. (h01ger)

Remote build setup has been made more robust and self maintenance has been further improved. (h01ger)

The minimum age for rescheduling of already tested amd64 packages has been lowered from 14 to 7 days, thanks to the increase of hardware resources sponsored by ProfitBricks last week. (h01ger)

diffoscope development

diffoscope version 37 was released on October 15th. It adds support for two new file formats (CBFS images and Debian .dsc files). After proposing the required changes to TLSH, fuzzy hashes are now computed incrementally. This avoids reading entire files into memory, which caused problems for large packages.

New tests have been added for the command-line interface. More character encoding issues have been fixed. Malformed md5sums will now be compared as binary files instead of making diffoscope crash, amongst several other minor fixes.
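As a reminder, a typical command-line invocation looks like this (the file names here are made up for the example):

    diffoscope --html report.html package_1.0-1.dsc package_1.0-2.dsc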

Version 38 was released two days later to fix the versioned dependency on python3-tlsh.

strip-nondeterminism development

strip-nondeterminism version 0.013-1 has been uploaded to the archive. It fixes an issue with nonconformant PNG files with trailing garbage reported by Roland Rosenfeld.

disorderfs development

disorderfs version 0.4.1-1 is a stop-gap release that will disable lock propagation, unless --share-locks=yes is specified, as it still is affected by unidentified issues.

Documentation update

Lunar has been busy creating a proper website that would be a common location for news, documentation, and tools for all free software projects working on reproducible builds. It's not yet ready to be published, but it's surely getting there.

Package reviews

103 reviews have been removed, 394 added and 29 updated this week.

72 FTBFS issues were reported by Chris West and Niko Tyni.

New issues: random_order_in_static_libraries, random_order_in_md5sums.

Hideki Yamane: Who freezes my laptop?

18 October, 2015 - 07:29
After a reboot, my laptop suddenly could not boot :-(
Then I tried to log in to rescue mode, but the root account is disabled.

So I used a USB stick to boot Ubuntu, mounted the encrypted volume, and enabled the root account.

After checking syslog: libvirt calls dmidecode, and it seems that something is wrong with it.

I removed gnome-boxes and libvirt*, rebooted my system, and I could see the login dialog from gdm3.

However, after login, I tried to exec dmidecode and nothing happened.

Also, reinstalling the gnome-boxes and libvirt* packages and rebooting works fine now.

Okay, back to normal. But... hmmm...? Who stole my time??

Sven Hoexter: TclCurl snapshot uploaded to unstable

18 October, 2015 - 05:23

While I was pondering whether I should drop the tclcurl package and get it removed from Debian, Christian Werner from Androwish (a Tcl/Tk port for Android) started to fix the open bugs. Thanks a lot Christian!

I've now uploaded a new TclCurl package to unstable based on the code in the new upstream repository plus the patches from Christian. In case you're one of the few TclCurl users out there please try the new package.

I'm still pondering if it's worth keeping the package. For the last five years or so I could get along with the Tcl http module just fine and thus no longer use TclCurl myself. In case someone would like to adopt it just write me a mail, I'd be happy to give it away.

Simon Richter: Key Transition

18 October, 2015 - 03:49

So, for several years now I've had a second gpg key, 4096R/6AABE354. Several of you have already signed it, and I've been using it in Debian for some time already, but I've not announced it more widely yet, and I occasionally still get mail encrypted to the old key (which remains valid and usable, but is only 1024D).

Of course, I've also made a formal transition statement:

Hash: SHA1

OpenPGP Key Transition Statement for Simon Richter

I have created a new OpenPGP key and will be transitioning away from
my old key.  The old key has not been compromised and will continue to
be valid for some time, but I prefer all future correspondence to be
encrypted to the new key, and will be making signatures with the new
key going forward.

I would like this new key to be re-integrated into the web of trust.
This message is signed by both keys to certify the transition.  My new
and old keys are signed by each other.  If you have signed my old key,
I would appreciate signatures on my new key as well, provided that
your signing policy permits that without re-authenticating me.

The old key, which I am transitioning away from, is:

pub   1024D/5706A4B4 2002-02-26
      Key fingerprint = 040E B5F7 84F1 4FBC CEAD  ADC6 18A0 CC8D 5706 A4B4

The new key, to which I am transitioning, is:

pub   4096R/6AABE354 2009-11-19
      Key fingerprint = 9C43 2534 95E4 DCA8 3794  5F5B EBF6 7A84 6AAB E354

The entire key may be downloaded from:

To fetch the full new key from a public key server using GnuPG, run:

  gpg --keyserver --recv-key 6AABE354

If you already know my old key, you can now verify that the new key is
signed by the old one:

  gpg --check-sigs 6AABE354

If you are satisfied that you've got the right key, and the User IDs
match what you expect, I would appreciate it if you would sign my key:

  gpg --sign-key 6AABE354

You can upload your signatures to a public keyserver directly:

  gpg --keyserver --send-key 6AABE354

Or email (possibly encrypted) the output from:

  gpg --armor --export 6AABE354

If you'd like any further verification or have any questions about the
transition please contact me directly.

To verify the integrity of this statement:

  wget -q -O- | gpg --verify

Version: GnuPG v1


Joey Hess: it's a bird, it's a plane, it's a super monoid for propellor

18 October, 2015 - 01:43

I've been doing a little bit of dynamically typed programming in Haskell, to improve Propellor's Info type. The result is kind of interesting in a scary way.

Info started out as a big record type, containing all the different sorts of metadata that Propellor needed to keep track of. Host IP addresses, DNS entries, ssh public keys, docker image configuration parameters... This got quite out of hand. Info needed to have its hands in everything, even types that should have been private to their module.

To fix that, recent versions of Propellor let a single Info contain many different types of values. Look at it one way and it contains DNS entries; look at it another way and it contains ssh public keys, etc.

As an émigré from lands where you can never know what type of value is in a $foo until you look, this was a scary prospect at first, but I found it's possible to have the benefits of dynamic types and the safety of static types too.

The key to doing it is Data.Dynamic. Thanks to Joachim Breitner for suggesting I could use it here. What I arrived at is this type (slightly simplified):

newtype Info = Info [Dynamic]
    deriving (Monoid)

So Info is a monoid, and it holds a bunch of dynamic values, which could each be of any type at all. Eep!

So far, this is utterly scary to me. To tame it, the Info constructor is not exported, and so the only way to create an Info is to start with mempty and use this function:

addInfo :: (IsInfo v, Monoid v) => Info -> v -> Info
addInfo (Info l) v = Info (toDyn v : l)

The important part of that is that it only allows adding values that are in the IsInfo type class. That prevents the foot shooting associated with dynamic types, by only allowing use of types that make sense as Info. Otherwise arbitrary Strings etc could be passed to addInfo by accident, and all get concatenated together, and that would be a total dynamic programming mess.

Anything you can add into an Info, you can get back out:

getInfo :: (IsInfo v, Monoid v) => Info -> v
getInfo (Info l) = mconcat (mapMaybe fromDynamic (reverse l))

Only monoids can be stored in Info, so if you ask for a type that an Info doesn't contain, you'll get back mempty.

Crucially, IsInfo is an open type class. Any module in Propellor can make a new data type and make it an instance of IsInfo, and then that new data type can be stored in the Info of a Property, and any Host that uses the Property will have that added to its Info, available for later introspection.

For example, this weekend I'm extending Propellor to have controllers: Hosts that are responsible for running Propellor on some other hosts. Useful if you want to run propellor once and have it update the configuration of an entire network of hosts.

There can be whole chains of controllers controlling other controllers etc. The problem is, what if host foo has the property controllerFor bar and host bar has the property controllerFor foo? I want to avoid a loop of foo running Propellor on bar, running Propellor on foo, ...

To detect such loops, each Host's Info should contain a list of the Hosts it's controlling. Which is not hard to accomplish:

newtype Controlling = Controlled [Host]
    deriving (Typeable, Monoid)

isControlledBy :: Host -> Controlling -> Bool
h `isControlledBy` (Controlled hs) = any (== hostName h) (map hostName hs)

instance IsInfo Controlling where
    propigateInfo _ = True

mkControllingInfo :: Host -> Info
mkControllingInfo controlled = addInfo mempty (Controlled [controlled])

getControlledBy :: Host -> Controlling
getControlledBy = getInfo . hostInfo

isControllerLoop :: Host -> Host -> Bool
isControllerLoop controller controlled = go S.empty controlled
  where
    -- (Data.Set is imported qualified as S)
    go checked h
        | controller `isControlledBy` c = True
        -- avoid checking loops that have been checked before
        | hostName h `S.member` checked = False
        | otherwise = any (go (S.insert (hostName h) checked)) l
      where
        c@(Controlled l) = getControlledBy h

This is all internal to the module that needs it; the rest of propellor doesn't need to know that the Info is being used for this. And yet, the necessary information about Hosts is gathered as propellor runs.

So, that's a useful technique. I do wonder if I could somehow make addInfo combine together values in the list that have the same type; as it is the list can get long. And, to show Info, the best I could do was this:

instance Show Info where
    show (Info l) = "Info " ++ show (map dynTypeRep l)

The resulting long list of the types of values stored in a host's info is not as useful as it could be. Of course, getInfo can be used to get any particular type of value:

*Main> hostInfo kite
Info [InfoVal System,PrivInfo,PrivInfo,Controlling,DnsInfo,DnsInfo,DnsInfo,AliasesInfo, ...
*Main> getInfo (hostInfo kite) :: AliasesInfo
AliasesInfo (fromList ["","","","" ...

And finally, I keep trying to think of a better name than "Info".

Iustin Pop: Server upgrades and monitoring

18 October, 2015 - 00:18

Undecided whether the title should be "exercises in Yak shaving" or "paying back technical debt" or "too much complexity for personal systems". Anyway…

I started hosting my personal website and some other small stuff on a dedicated box (rented from a provider) in early 2008. Even for a relatively cheap box, it worked without issues for a good number of years. A surprising number of years, actually; the only issue was a power supply failure that was solved by the provider automatically and then nothing for many years. Even the harddrive (mechanical) had no issues at all for 7 years (Power_On_Hours: 64380; I probably got it after it had a few months of uptime). I believe it was the longest running harddrive I've ever used (for the record: Seagate Barracuda 7200.10, ST3250310AS).

The reason I delayed the upgrade for a long time was twofold: first, at the same provider I couldn't get a similar SLA for the same amount of money. I could get better hardware, but with a worse SLA and worse options. This is easily solvable, of course, by just finding a different provider.

The other issue was that I never bothered to set up proper configuration management for the host; after all, it was only supposed to run Apache with ikiwiki and a few other trivial things. The truth was that over time it kept piling up more and more "small things"… so actually changing the host had become expensive.

As the server neared 7 years of age, I thought I'd combine the upgrade from Wheezy to Jessie with a hardware upgrade. I managed to find a different provider that had my desired SLA and hardware configuration, got the server, and the only thing left was to do the migration.

Previous OS upgrades were simple as they were on the same host: rely on Debian's reliable upgrade process and nothing else, then eventually adjust some configs slightly. With a cross-host upgrade (I couldn't just copy the old OS since it was also a 32-to-64 bit change) it's much worse: since there's no previous installation, I had to manually check and port the old configuration for each individual service. This got very tedious, and I realised I had to make it somehow better.

"Proper" configuration management aside, I thought that I needed proper monitoring first. I already had (for a long while actually) graphing via Munin, but no actual monitoring. Since the host only had a few services, this was again supposed to be easy; same mistake again.

The problem is that once you have any monitoring system set up, it's very easy to add "just one more" host or service to it. First it was only the external box, then it was my firewall, then it was the rest of my home network. Then it was the cloud services that I use: for example, checking whether my domain registrar's nameservers are still authoritative for my domain, or whether the expiration date is still far in the future. And so on…
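
As a flavour of what such a check can look like, here is a minimal shell sketch (all names hypothetical: example.org for the domain, ns1.registrar.example for the registrar's nameserver). It queries the nameserver directly and looks for the authoritative-answer flag, exiting with a Nagios-style status:

#!/bin/sh
# Sketch: check that a nameserver still answers authoritatively for a domain.
DOMAIN=example.org
NS=ns1.registrar.example
if dig +norecurse @"$NS" "$DOMAIN" SOA | grep -q 'flags:.* aa'; then
    echo "OK: $NS is authoritative for $DOMAIN"
    exit 0
else
    echo "CRITICAL: $NS is not (or no longer) authoritative for $DOMAIN"
    exit 2
fi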

In the end, what had been only a half-weekend job in previous iterations (e.g. the Squeeze to Wheezy upgrade) was spread out over many weekends (interleaved with other activities, not working on it full time). I had to keep the old machine running for an extra month in order to make sure everything was up and running, and I ended up with 80 services monitored across multiple systems; the migrated machine itself accounts for almost half of these. Some of these are light items (e.g. a check that a single vhost responds), others are aggregates. I still need to add some more checks though, especially more complex (end-to-end) ones.

The lesson I learned in all this is that, with or without configuration management in place, having monitoring makes it much easier to do host or service moves, as you can tell much better whether everything is "done-done" or just "almost done".

The question that remains, though: with 80 services for a home network plus external systems (all personal use), I'm not sure whether I'm doing things right (monitoring the stuff I need) or wrong (do I really need this many things?).

Russell Coker: Mail Server Training

17 October, 2015 - 16:08

Today I ran a hands-on training session for LUV on configuring an MTA with Postfix and Dovecot. I gave each student a virtual machine running Debian/Jessie with full Internet access and instructions on how to configure it as a basic mail server. Here is a slightly modified set of instructions that anyone can follow on their own system.

Today I learned that documentation that includes passwords on a command line should put quotes around the password: one student used a semicolon character in his password, which caused some confusion (it's the command separator character in Bash). I also discovered that just telling users which virtual server to log in to is prone to errors; in future I'll print out a list of user-names and passwords for the virtual servers and tear one off for each student, so there's no possibility of two users logging in to the same system.

I gave each student a sub-domain of (a zone that I use for various random sysadmin type things). I have changed the instructions to use which is the official address for testing things (or you could use any zone that you use). The test VMs that I set up had a user named “auser”; the documentation assumes this account name. You could change “auser” to something else if you wish.

Below are all the instructions for anyone who wants to try it at home or setup virtual machines and run their own training session.

Basic MTA Configuration
  1. Run “apt-get install postfix” to install Postfix, select “Internet Site” for the type of mail configuration and enter the domain name you selected for the mail name.
  2. The main Postfix configuration file is /etc/postfix/ Change the myhostname setting to the fully qualified name of the system, something like
    You can edit /etc/postfix/ with vi (or any other editor) or use the postconf command to change it, eg “postconf -e“.
  3. Add “home_mailbox=Maildir/” to the Postfix configuration to make it deliver to a Maildir spool in the user’s home directory.
  4. Restart Postfix to apply the changes.
  5. Run “apt-get install swaks libnet-ssleay-perl” to install swaks (a SMTP test tool).
  6. Test delivery by running the command “swaks -f -t -s localhost“. Note that swaks displays the SMTP data so you can see exactly what happens and if something goes wrong you will see everything about the error.
  7. Inspect /var/log/mail.log to see the messages about the delivery. View the message which is in ~auser/Maildir/new.
  8. When other students get to this stage, run the same swaks command but with the -t address changed to an address in their domain; check mail.log to see that the messages were transferred, and view the mail with less to see the Received lines. If you do this on your own, specify a recipient address that's a regular email address of yours (e.g. a Gmail account).
Basic Pop/IMAP Configuration
  1. Run “apt-get install dovecot-pop3d dovecot-imapd” to install Dovecot POP and IMAP servers.
    Run “netstat -tln” to see the ports that have daemons listening on them, and observe that ports 110 and 143 are in use.
  2. Edit /etc/dovecot/conf.d/10-mail.conf and change mail_location to “maildir:~/Maildir“. Then restart Dovecot.
  3. Run the command “nc localhost 110” to connect to POP, then run the following commands to get capabilities, login, and retrieve mail:
    capa
    user auser
    pass WHATEVERYOUMADEIT
    retr 1
  4. Run the command “nc localhost 143” to connect to IMAP, then run the following commands to list capabilities, login, and logout:
    a capability
    b login auser WHATEVERYOUMADEIT
    c logout
  5. For the above commands, make note of the capabilities; we will refer to them later.

Now you have a basically functional mail server on the Internet!


To avoid password sniffing we need to use SSL. Doing it properly requires obtaining a signed certificate for a DNS name, but we can do the technical work with the “snakeoil” certificate that is generated by Debian.

  1. Edit /etc/dovecot/conf.d/10-ssl.conf and change “ssl = no” to “ssl = required“. Then add the following 2 lines:
    ssl_cert = </etc/ssl/certs/ssl-cert-snakeoil.pem
    ssl_key = </etc/ssl/private/ssl-cert-snakeoil.key
    1. Run “netstat -tln” and note that ports 993 and 995 are not in use.
    2. Edit /etc/dovecot/conf.d/10-master.conf and uncomment the following lines:
      port = 993
      ssl = yes
      port = 995
      ssl = yes
    3. Restart Dovecot, run “netstat -tln” and note that ports 993 and 995 are in use.
  2. Run “nc localhost 110” and “nc localhost 143” as before, and note that the capabilities have changed to include STLS and STARTTLS respectively.
  3. Run “gnutls-cli --tofu -p 993” to connect to the server via IMAPS and “gnutls-cli --tofu -p 995” to connect via POP3S. The --tofu option means “Trust On First Use”: it stores the public key in ~/.gnutls and checks it the next time you connect. This allows you to safely use a “snakeoil” certificate if all apps can securely get a copy of the key.
Postfix SSL
  1. Edit /etc/postfix/ and add the following 4 lines:
    smtpd_tls_received_header = yes
    smtpd_tls_loglevel = 1
    smtp_tls_loglevel = 1
    smtp_tls_security_level = may

    Then restart Postfix. This makes Postfix log TLS summary messages to syslog and in the Received header. It also permits Postfix to send with TLS.
  2. Run “nc localhost 25” to connect to your SMTP port and then enter the following commands:
    ehlo test

    Note that the response to the EHLO command includes 250-STARTTLS; this is because Postfix was configured with the Snakeoil certificate by default.
  3. Run “gnutls-cli --tofu -p 25 -s” and enter the following commands:
    ehlo test
    starttls

    After the CTRL-D, gnutls-cli will establish an SSL connection.
  4. Run “swaks -tls -f -t -s localhost” to send a message with SSL encryption. Note that swaks doesn’t verify the key.
  5. Try using swaks to send messages to other servers with SSL encryption. Gmail is one example of a mail server that supports SSL: run “swaks -tls -f -t” to send TLS (encapsulated SSL) mail to Gmail via swaks. Also run “swaks -tls -f -t -s localhost” to send via your new mail server (which should log that it was a TLS connection from swaks and a TLS connection to Gmail).

SASL is the system of SMTP authentication for mail relaying. It is needed to permit devices without fixed IP addresses to send mail through a server. The easiest way of configuring Postfix SASL is to have Dovecot provide its authentication data to Postfix. Among other things, this means that if you change Dovecot to authenticate in another way you won't need to make any matching changes to Postfix.

  1. Run “mkdir -p /var/spool/postfix/var/spool” and “ln -s ../.. /var/spool/postfix/var/spool/postfix“, this allows parts of Postfix to work with the same configuration regardless of whether they are running in a chroot.
  2. Add the following to /etc/postfix/ and restart Postfix:
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_type = dovecot
    smtpd_sasl_path = /var/spool/postfix/private/auth
    broken_sasl_auth_clients = yes
    smtpd_sasl_authenticated_header = yes
  3. Edit /etc/dovecot/conf.d/10-master.conf, uncomment the following lines, and then restart Dovecot:
    unix_listener /var/spool/postfix/private/auth {
      mode = 0666
    }
  4. Edit /etc/postfix/, uncomment the line for the submission service, and restart Postfix. This makes Postfix listen on port 587 which is allowed through most firewalls.
  5. From another system (i.e. not the virtual machine you are working on) run “swaks -tls -f -t -s YOURSERVER” and note that the message is rejected with “Relay access denied”.
  6. Now run “swaks -tls --auth-user auser --auth-password WHATEVER -f -t YOURREALADDRESS -s YOURSERVER” and observe that the mail is delivered (subject to anti-spam measures at the recipient).
Configuring a MUA

If every part of the previous 3 sections is complete then you should be able to set up your favourite MUA. Use “auser” as the user-name for SMTP and IMAP, for the SMTP/IMAP server and it should just work! Of course you need to use the same DNS server for your MUA to have this just work. But another possibility for testing is to have the MUA talk to the server by IP address rather than by name.


Steve Kemp: Robbing Peter to pay Paul, or location spoofing via DNS

17 October, 2015 - 13:00

I rarely watched TV online when I was located in the UK, but now that I've moved to Finland, with its appalling local TV choices, it has become more common.

The biggest problem with trying to watch BBC's iPlayer, and similar services, is the location restrictions.

Not a huge problem though:

  • Rent a virtual machine.
  • Configure an OpenVPN server on it.
  • Connect from $current-country to it.

The next part is the harder one: making your traffic pass over the VPN. The simple approach would be to say "send everything over the VPN", but that would slow down local traffic, so instead you have to use trickery.

My approach was just to run a series of routing additions, similar to this (except I did it in the openvpn configuration, via pushed-routes):

ip -4 route add .... dev tun0

This works, but it is a pain as you have to add more and more routes. The simpler solution, which I switched to after a while, was just configuring mitmproxy on the remote OpenVPN end-point and then configuring that proxy in the browser. With that in use, all your browser traffic goes over the VPN link (if you enable the proxy in your browser), but nothing else does.

I've got a network device on order which will let me watch Netflix, etc., from my TV, and I'm led to believe it won't let you set up proxies or similar for region-bypassing.

It occurs to me that I can configure my router to give out bogus DNS responses - if the device asks for "" it can return - which is the remote host running the proxy.

I imagine this will be nice and simple, and thought I was being clever:

  • Remote OpenVPN server.
  • MITM proxy on remote VPN-host
    • Which is basically a transparent HTTP/HTTPS proxy.
  • Route traffic to it via DNS.
    • e.g. For any DNS request, if it ends in return

Because I can handle DNS-magic on the router I can essentially spoof my location for all the devices on the internal LAN, which is a good thing.
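
As a concrete sketch (assuming the router runs dnsmasq, and using the placeholder names and for the elided domain and proxy address), a single configuration line is enough:

# /etc/dnsmasq.conf (sketch): answer and all of its
# subdomains with the address of the VPN endpoint that runs the
# transparent proxy; everything else resolves as normal.

Restart dnsmasq and every LAN client that uses the router for DNS will transparently reach the proxy for that domain, with no per-device configuration.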

Anyway I was reasonably pleased with the idea of using DNS to route traffic over the VPN, in combination with a transparent proxy. I was even going to blog about it, and say "Hey! This is a cool idea I've never heard of before".

Instead I did a quick google(.fi) and discovered that there are companies offering this as a service. They don't mention the proxying bit, but it's clearly what they're doing - for example OverPlay's SmartDNS.

So in conclusion I can keep my current setup, or I can use the income I receive from DNS hosting to pay for SmartDNS, or other DNS-based location-fakers.

Regardless. DNS. VPN. Good combination. Try it if you get bored.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.