Planet Debian


Russell Coker: A Linux Conference as a Ritual

10 July, 2014 - 16:00

Sociological Images has an interesting post by Jay Livingston PhD about a tennis final as a ritual [1]. The main point is that you can get a much better view of the match on your TV at home with more comfort and less inconvenience, so what you get for the price of the ticket (and all the effort of getting there) is participating in the event as a spectator.

It seems to me that the same idea applies to community Linux conferences (such as LCA) and some Linux users group meetings. In terms of watching a lecture there are real benefits to downloading it after the conference so that you can pause it and study related web sites or repeat sections that you didn’t understand. Also, wherever you might sit at home to watch a video of a conference lecture, you will be a lot more comfortable than in a university lecture hall. Some people don’t attend conferences and users’ group meetings because they would rather watch a video at home.

Benefits of Attending (Apart from a Ritual)

One of the benefits of attending a lecture is the ability to ask questions. But that seems to mostly apply to the high-status people who ask most of the questions. I’ve previously written about speaking stacks and my observations about who asks questions vs the number that can reasonably be asked [2].

I expect that most delegates ask no questions for the entire conference. I created a SurveyMonkey survey to discover how many questions people ask [3]. I count LCA as a 3-day conference because I am only counting the days with presentations that have been directly approved by the papers committee; approving a mini-conf (and thus delegating the ability to approve speeches) is different.

Another benefit of attending is the so-called “hallway track” where people talk to random other people. But that seems to be of most benefit to people who have some combination of high status in the community and good social skills. In the past I’ve attended the “Professional Delegates Networking Session” which is an event for speakers and people who pay the “Professional” registration fee. Sometimes at such events there has seemed to be a great divide between speakers (who mostly knew each other before the conference) and “Professional Delegates” which diminishes the value of the event to anyone who couldn’t achieve similar benefits without it.

How to Optimise a Conference as a Ritual

To get the involvement of people who take the ritual approach, one could emphasise being part of the event. For example, to get people to attend the morning keynote speeches (which are sometimes poorly attended due to partying the night before) one could emphasise that anyone who doesn’t attend the keynote isn’t really attending the conference.

Conference shirts seem to be strongly correlated with the ritual aspect of conferences; the more “corporate” conferences don’t seem to offer branded clothing to delegates. If an item of branded schwag was given out before each keynote then that would increase attendance by everyone who follows the ritual aspect (as well as everyone who just likes free stuff).

Note that I’m not suggesting that organisers of LCA or other conferences go to the effort of giving everyone schwag before the morning keynote, that would be a lot of work. Just telling people that anyone who misses the keynote isn’t really attending the conference would probably do.

I’ve always wondered why conference organisers want people to attend the keynotes and award prizes to random delegates who attend them. Is a keynote lecture a ritual that is incomplete if the attendance isn’t good enough?

Related posts:

  1. Length of Conference Questions After LCA last year I wrote about “speaking stacks” and...
  2. meeting people at Linux conferences One thing that has always surprised me is how few...
  3. Creating a Micro Conference The TEDxVolcano The TED conference franchise has been extended to...

Russell Coker: Taxing Inferior Products

10 July, 2014 - 10:48

I recently had a medical appointment cancelled due to a “computer crash”. Apparently the reception computer crashed and lost all bookings for a day and they just made new bookings for whoever called – and anyone who had a previous booking just missed out. I’ll probably never know whether they really had a computer problem or just used computer problems as an excuse when they made a mistake. But even if it wasn’t a real computer problem the fact that computers are so unreliable overall that “computer crash” is an acceptable excuse indicates a problem with the industry.

The problem of unreliable computers is a cost to everyone; it’s effectively a tax on all business and social interactions that involve computers. While I spent the extra money on a server with ECC RAM for my home file storage, I have no control over the computers purchased by all the companies I deal with – which are mostly the cheapest available computers. I also have no option to buy a laptop with ECC RAM because companies like Lenovo have decided not to manufacture them.

It seems to me that the easiest way of increasing the overall reliability of computers would be to use ECC RAM everywhere. In the early 90s all IBM-compatible PCs had parity RAM: for each byte there was one extra bit, which would report 100% of single-bit errors and 50% of errors that involved random memory corruption. Then manufacturers decided to save a tiny amount of money by using 8/9 the number of memory chips for desktop/laptop systems, and probably to make more money on selling servers with ECC RAM. If the government was to impose a 20% tax on computers that lack ECC RAM then manufacturers would immediately start using it everywhere, and the end result would be no price increase overall, as it’s cheaper to design desktop systems and servers with the same motherboards. Apparently some desktop systems have motherboard support for ECC RAM but don’t ship with suitable RAM or advertise the support for it.
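The parity scheme described above is simple enough to sketch in a few lines of Python (an illustration of the principle only, not of any real memory controller):

```python
# Even parity: store one extra bit per byte so that the total number of
# 1 bits (data + parity) is even. Any single-bit flip changes the count
# by one and is always detected; a random multi-bit corruption leaves
# the parity unchanged half the time, so only ~50% of those are caught.
def parity_bit(byte):
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    return parity_bit(byte) == stored_parity

data = 0b10110010
p = parity_bit(data)
print(check(data, p))               # True: data is intact
print(check(data ^ 0b00000100, p))  # False: single-bit flip detected
```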

This principle applies to many other products too. One obvious example is cars: a car manufacturer can sell cheap cars with few safety features, and then when occupants of those cars and other road users are injured the government ends up paying for medical expenses and disability pensions. If there was a tax for every car that has a poor crash test rating and a tax for every car company that performs badly in real-world use, then it would give car companies some incentive to manufacture safer vehicles.

Now there are situations where design considerations preclude such features. For example implementing ECC RAM in mobile phones might involve technical difficulties (particularly for 32bit phones) and making some trucks and farm equipment safer might be difficult. But when a company produces multiple similar products that differ significantly in quality, such as PCs with and without ECC RAM or cars with and without air-bags, there would be no difficulty in making all of them higher quality.

I don’t think that we will have a government that implements such ideas any time soon, it seems that our government is more interested in giving money to corporations than taxing them. But one thing that could be done is to adopt a policy of only giving money to companies if they produce high quality products. If a car company is to be given hundreds of millions of dollars for not closing a factory then that factory should produce cars with all possible safety features. If a computer company is going to be given significant tax breaks for doing R&D then they should be developing products that won’t crash.

No related posts.

Paul Tagliamonte: Dell XPS 13

10 July, 2014 - 10:38

More hardware adventures.

I got my Dell XPS13. Amazing.

The good news: this MacBook Air clone is clearly an Air competitor, and slightly better in nearly every regard except for the battery.

The bad news is that the Intel wireless card needs non-free firmware (I’ll be replacing that card shortly), and the touchpad’s driver isn’t fully implemented until kernel 3.16. I’m currently building a 3.14 kernel with the patch to send to the kind Debian kernel people. We’ll see if that works. Ubuntu Trusty already has the patch, but it didn’t get upstreamed. That kinda sucks.

It also shipped with UEFI disabled, defaulting to boot in ‘legacy’ mode. It shipped with Ubuntu; I was a bit disappointed not to see Ubuntu keys on the machine.

The touchscreen works; in short: stunning. I think I found my new travel buddy. Debian unstable runs great; stable had some issues.

Mike Gabriel: Cooperation between X2Go and TheQVD

10 July, 2014 - 01:51

I recently got in contact with Nicolas Arenas Alonso and Nito Martinez from the Quindel group (located in Spain) [1].

Those guys bring forth a software product called TheQVD (The Quality Virtual Desktop) [2]. The project does things similar to what X2Go does. In fact, they use NX 3.5 from NoMachine internally, like we do in X2Go. Already a year ago I noticed their activity on TheQVD and thought... "Ahaaa!?!".

Now, a couple of weeks back we received a patch for libxcomp3 that fixes an FTBFS (fails to build from source) for nx-libs-lite against Android [3].


Christoph Berg: New urxvt tab in current directory

10 July, 2014 - 01:13

Following Enrico's terminal-emulators comparison, I wanted to implement "start a new terminal tab in my current working directory" for rxvt-unicode aka urxvt. As Enrico notes, this functionality is something between "rather fragile" and non-existing, so I went to implement it myself. Martin Pohlack had the right hint, so here's the patch:

--- /usr/lib/urxvt/perl/tabbed  2014-05-03 21:37:37.000000000 +0200
+++ ./tabbed    2014-07-09 18:50:26.000000000 +0200
@@ -97,6 +97,16 @@
       $term->resource (perl_ext_2 => $term->resource ("perl_ext_2") . ",-tabbed");
+   if (@{ $self->{tabs} }) {
+      # Get the working directory of the current tab and append a -cd to the command line
+      my $pid = $self->{cur}{pid};
+      my $pwd = readlink "/proc/$pid/cwd";
+      #print "pid $pid pwd $pwd\n";
+      if ($pwd) {
+         push @argv, "-cd", $pwd;
+      }
+   }
    push @urxvt::TERM_EXT, urxvt::ext::tabbed::tab::;
    my $term = new urxvt::term
@@ -312,6 +322,12 @@
+sub tab_child_start {
+   my ($self, $term, $pid) = @_;
+   $term->{pid} = $pid;
+   1;
+}
+
 sub tab_start {
    my ($self, $tab) = @_;
@@ -402,7 +418,7 @@
 # simply proxies all interesting calls back to the tabbed class.
-   for my $hook (qw(start destroy key_press property_notify)) {
+   for my $hook (qw(start destroy key_press property_notify child_start)) {
       eval qq{
          sub on_$hook {
             my \$parent = \$_[0]{term}{parent}

Sune Vuorela: CMake and library properties

9 July, 2014 - 14:30

When writing libraries with CMake, you need to set a couple of properties, especially the VERSION and SOVERSION properties. For library libbar, it could look like:

set_property(TARGET bar PROPERTY VERSION "0.0.0")
set_property(TARGET bar PROPERTY SOVERSION 0)

This will give you a libbar.so => libbar.so.0 => libbar.so.0.0.0 symlink chain, with a SONAME of libbar.so.0 encoded into the library.

The SOVERSION target property controls the number in the middle part of the symlink chain as well as the numeric part of the SONAME encoded into the library. The VERSION target property controls the last part of the last element of the symlink chain.

This also means that the first part of VERSION should match what you put in SOVERSION to avoid surprises for others and for the future you.

Both these properties control “Technical parts” and should be looked at from a technical perspective. They should not be used for the ‘version of the software’, but purely for the technical versioning of the library.
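Putting the two properties together for the libbar example (a sketch; the target name and version numbers are illustrative):

```cmake
add_library(bar SHARED bar.c)
# VERSION is the full technical version of the library and SOVERSION
# the ABI version; the first component of VERSION should match SOVERSION.
set_target_properties(bar PROPERTIES
    VERSION 0.0.0
    SOVERSION 0
)
```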

In the kdeexamples git repository, it is handled like this:


And a bit later:

set_target_properties(bar PROPERTIES VERSION ${BAR_VERSION})

which is a fine way to ensure that things actually match.

Oh, and these components are not something that should be inherited from external projects.

So people, please be careful to use these correctly.

Junichi Uekawa: Having fun with lisp.

9 July, 2014 - 08:18
Having fun with lisp. I was writing a lisp interpreter in C++ using boost::spirit. I am happy that my eval can do lambda. It took me a long time to figure out what was wrong with the different types. The data structure was recursive, and I needed to make a recursive type. make_recursive_variant works, but when it doesn't, it's not obvious why.

Wouter Verhelst: HP printers require systemd, apparently

9 July, 2014 - 02:15
printer-driver-postscript-hp Depends: hplip
hplip Depends: policykit-1
policykit-1 Depends: libpam-systemd
libpam-systemd Depends: systemd (= 204-14)

Since the last of the above is a versioned dependency, you can't use systemd-shim to satisfy it.

I do think we should migrate to systemd. However, it's unfortunate that this change is being rushed like this. I want to migrate my personal laptop to systemd—but not before I have the time to deal with any fallout that might result, and to make sure I can properly migrate my configuration.

Workaround (for now): hold policykit-1 at 0.105-3 rather than have it upgrade to 0.105-6. That version doesn't have a dependency on libpam-systemd.
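That hold can be expressed as an apt pin (a sketch; the file name under /etc/apt/preferences.d/ is arbitrary):

```
# /etc/apt/preferences.d/policykit-hold  (hypothetical file name)
Package: policykit-1
Pin: version 0.105-3
Pin-Priority: 1001
```

A priority above 1000 keeps that version installed even when a newer one is available; `echo "policykit-1 hold" | dpkg --set-selections` is the simpler alternative.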

Off-hand questions:

  • Why does one need to log in to an init system? (yes, yes, it's probably a session PAM module, not an auth or password module. Still)
  • What does policykit do that can't be solved with proper use of Unix domain sockets and plain old unix groups?

All this feels like another case of overengineering, like most of the *Kit thingies.

Update: so the systemd package doesn't actually cause systemd to be run; there are other packages that do that, and systemd-shim can be installed. I misread things. Of course, the package name is somewhat confusing... but that's no excuse.

Matthew Palmer: Doing Password Complexity Wrong

8 July, 2014 - 13:00

I just made an account on yet another web service. On the suggestion of my password manager, I attempted to use the password “W:9[$X*F”. It was rejected because “Password must contain at least one non-alphabet character, one lowercase letter, one uppercase letter”. OK, how about “Passw0rd”? Yep, that’s fine.

Anyone want to guess which of those two passwords is going to fall victim to a brute-force attack first? Go on, don’t be shy, take a wild shot in the dark!
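To make the point concrete, here is a small Python sketch (my own illustration, with a hypothetical substitution table): a dictionary attacker recovers “Passw0rd” from a tiny candidate set, while the random password never appears in any wordlist.

```python
from itertools import product

# Hypothetical attacker model: take a dictionary word, try every common
# "leet" substitution, and also capitalise the first letter.
SUBS = {"a": "a@4", "o": "o0", "e": "e3", "i": "i1!", "s": "s$5"}

def leet_variants(word):
    pools = [SUBS.get(c, c) for c in word]
    for combo in product(*pools):
        cand = "".join(combo)
        yield cand
        yield cand.capitalize()

variants = set(leet_variants("password"))
print(len(variants))           # 108 candidates -- a trivial search space
print("Passw0rd" in variants)  # True
```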

Joey Hess: laptop death

8 July, 2014 - 08:00

So I was at Ocracoke island, camping with family, and I brought my laptop along as I've done probably half a dozen times before. An enormous thunderstorm came up. It rained for 8 hours and thundered for 3 of those. Some lightning cracked quite close by as we crouched in the food tent, our feet up off the increasingly wet ground "just in case". The campground flooded. Luckily we were camped in the dunes and the tents mostly avoided being flooded with 2-3 inches of water. (That was just the warmup; a hurricane hit a week after we left.)

My laptop was in my tent when this started, and I got soaked to the skin just running over there and throwing it up on the thermarest to keep it out of any flooding and away from any drips. It seemed ok, so best not to try to move it to the car in that downpour.

Next time I checked, it turned out the top vent of the tent was slightly open and dripping. The laptop bag was damp. But inside it seemed ok. Rain had slackened to just heavy, so I ran it down to the car. Laptop appeared barely damp, but it was hard to tell as I had quite forgotten what "dry" was. Turned it on for 10 seconds to check the time. It was 7:30 and we still had to cook dinner in this mess. Transferred it to a dry bag.

(By the way, in some situations, discovering you have a single dry towel you didn't know you had is the best gift in the world!)

Next morning, the laptop was dead. When powered on, the fan came on full, the screen stayed black, and after a few seconds it turned itself back off.

I need this for work, so it was a crash priority to get it fixed or a replacement. Before I even got home, I had logged onto Lenovo's website to check warranty status and found 2 things:

  1. They needed some number from a sticker on the bottom of my laptop. Which was no longer there.
  2. The process required some strange login on an entirely different IBM website.

At this point, I had a premonition of how the bureaucracy would go. Reading Sesse's Blehnovo, I see I was right. I didn't even try. I ordered a replacement with priority shipping.

When I got home, I pulled the laptop apart to try to debug it. I still don't know what's wrong with it. The SSD may be damaged; it seems to cause anything I put it into to fail to work.

New laptop arrived in 2 days. Since this model is now a year old, it was a few hundred dollars cheaper this time around. And now I have an extra power supply, and a replacement keyboard, and a replacement fan etc. And I've escaped the dead USB port and broken rocker switch of the old laptop too.

The only weird thing is that, while my old laptop had no problem with my Toshiba passport USB drive, this new one refuses to recognize it unless I plug it into a USB 1.0 hub. Oh well..

Update: Ocracode for this trip:

OBX1.1 P6 L7 SA3d+++b+c++ U2(rearended,laptop death) T4f2b1 R2T Bb+m++++n++
F+++u++ SC+s-g6 H+f0i3 V+++s++m0 E++r+

Steinar H. Gunderson: Blehnovo

8 July, 2014 - 06:47

Here's my own little (ongoing) story about Lenovo's customer support; feel free to skip if you don't like rants. (You may remember that it took me several months to get to actually buy this laptop in the first place.) Everything within “quotes” are actual quotes from Lenovo, except where otherwise noted.

May 30th: My laptop accidentally goes into the ground, and the screen cracks. Gah. Oh well, I'll be without laptop over the weekend, but I have this nice accident warranty and NBD thing from Lenovo, right? I go to their support web site; they recommend that I register with IBM and file a service ticket. I do so. Their site says I will receive a confirmation email within ten minutes.

Jun 1st: I realize I haven't received anything from Lenovo or IBM, despite 36 hours passing. Oh well.

Jun 2nd: The web system claims Lenovo has “successfully contacted” me several times, despite me never hearing anything from them.

Jun 3rd: I call Lenovo. They don't speak any English. They say there's an error in the “type” I've given them; seemingly “X240” is an invalid type, I needed to write “20AL”. I get it corrected.

Jun 4th: Lenovo calls. I talk to them in German and explain what happened (again). They say that I have the choice between paying €150 + parts and sending it in, or €450 + parts to have a serviceman come to me. (I am not 100% sure these numbers are correct, but they're in the right ballpark.) I say that this sounds very weird since I have accident insurance, but the guy from Lenovo seems unfazed and says they will only cover things under warranty if it's a design mistake. Eventually I say that sure, I'll pay for the serviceman; I just want my laptop fixed, fast. They ask for photos of the damage, which I send immediately.

Jun 6th: A week after the damage, and nothing has happened.

Jun 12th: Still nothing has happened. I press the “escalate” button on the web page.

Jun 18th: Still nothing has happened. I send Lenovo email asking what the heck is going on. My case now changes to “the customer will send the machine in to the depot for servicing” (not an exact quote; I don't have this text anymore), and I get an email with an address. I reply asking why on Earth this is, quoting their web page for saying “If you are entitled to Onsite Warranty, your Accidental Damage Protection claim may be repaired at your location”.

Later that day, Lenovo calls me again. It turns out they have no extended warranty or insurance registered on me. They ask me to provide “proof of purchase”, and give me a new case number (since the old one is now seemingly locked into a “will send to depot” situation). I send them the warranty email they originally sent me, including a long warranty code (20 alphanumeric digits) and a PIN. (In passing, I notice that due to a very delayed shipment, this warranty seemingly started running a month or so before I actually received the laptop, so the so-called 4-year warranty is seemingly 3 years 11 months. Oh well.)

Jun 19th: I am contacted by Lenovo. They say this information is not good enough as “proof of purchase”. They reiterate that I need to send them “proof of purchase”. I send them every single email I have ever received for them regarding my purchase.

Jun 24th: Nothing has happened. I email Lenovo asking for a status update. I get an email saying they have “forwarded all the needed information to the warranty service, so that the extended warranty will be registered”. All I can do is wait.

Jun 30th: I miss a telephone call from Lenovo. I get an email saying they'll close the case in two days. I call them, choosing English in the telephone menu. I get to a polite gentleman who speaks English well, but all the case notes are in German, so he can't make heads or tails of my case. He says he'll have the technician responsible for my case call me back.

He does really call me back the same day. He says what they have received is not valid as “proof of purchase”. I become agitated over the phone, pointing out that it should not be my problem if their internal systems are messed up; I've obviously paid 228 CHF for something. He claims to understand, but says that the systems will not work without a “proof of purchase”. He says I need to call Digital River (the company that operates Lenovo's online store). He gives me their telephone number. I think it looks funny, and ask him if this is really the right number; he says oh, no, that's the German one, not the Swiss one. He gives me the Swiss one. I call the number and it's for some completely different company, so I try the German one. It gives me a telephone menu, which says that for ThinkPad warranty questions, I need to call <some number>. I call that number; it's for Lenovo tech support in Germany. The tech on the other end of the line does not understand why Digital River would send me to Lenovo for warranty questions, but gives me their Swiss number and email address. The Swiss number is indeed correct, but just sends me to exactly the same menu. I send them an email.

On a whim, I check my warranty page on Lenovo's website. It clearly says I have the extended warranty properly registered already! I forward a screenshot to Lenovo.

Jul 1st: I get an email from Lenovo: “Although the warranty appears on Lenovo website to be ok, please send us the proof of purchase from the extended warranty, so we can register it in our database(it appears NOT to be registered). Thank you.”

Jul 3rd: I get an email from Digital River, pointing me to a web page where I can print out some very nondescript-looking bill. I make a PDF out of it and send it to Lenovo.

Jul 7th: I still haven't heard anything from Lenovo. But! Now I am in Norway on vacation, which means I have a new trick up my sleeve: I call Lenovo Norway. I describe the case. The man says that this won't be covered by warranty, and I point out that I have accident insurance. He says (my translation/paraphrasing): “Oh, you're right, it does show up in this other system here! Don't worry, we'll fix this.” He asks me to send him an email with the screenshot of the warranty. I do so. He opens a new case, tells me that I'll have to send it in (seemingly onsite is only for warranty coverage after all?), but that it'll usually take less than a week. I receive an email with a link to DHL for ordering pickup, packaging instructions and pre-filled customs documents. It also has a form where I am supposed to briefly describe the case again (sure), say what I want them to do if the SSD is damaged (give it back to me unrepaired so I can do my own rescue; no Windows 8.1 reimaging, please) and write down all my passwords (fat chance).

So, there we are. Seven minutes with Lenovo Norway got me where 38 days of talking to Lenovo Switzerland/Germany couldn't—now let's just hope that DHL actually picks it up tomorrow and that I get it repaired and back within reasonable time.

The end? I hope.

Jonathan McDowell: 2014 SPI Board election nominations open

8 July, 2014 - 04:13

I put out the call for nominations for the 2014 Software in the Public Interest (SPI) Board election last week. At this point I haven't yet received any nominations, so I'm mentioning it here in the hope of a slightly wider audience. Possibly not the most helpful as I would hope readers who are interested in SPI are already reading spi-announce. There are 3 positions open this election and it would be good to see a bit more diversity in candidates this year. Nominations are open until the end of Tuesday July 13th.

The primary hard and fast time commitment a board member needs to make is to attend the monthly board meetings, which are conducted publicly via IRC (#spi on the OFTC network). These take place at 20:00 UTC on the second Thursday of every month. More details, including all past agendas and minutes, can be found on the SPI website. Most of the rest of the board communication is carried out via various mailing lists.

The ideal candidate will have an existing involvement in the Free and Open Source community, though this need not be with a project affiliated with SPI.

Software in the Public Interest (SPI) is a non-profit organization which was founded to help organizations develop and distribute open hardware and software. We see it as our role to handle things like holding domain names and/or trademarks, and processing donations for free and open source projects, allowing them to concentrate on actual development.

Examples of projects that SPI helps include Debian, LibreOffice, OFTC and PostgreSQL. A full list can be found on the SPI website.

Jan Wagner: Monitoring Plugins release ahead

7 July, 2014 - 21:41

It seems to be a great time for monitoring solutions. Some of you may have noticed that Icinga has released its first stable version of the completely redeveloped Icinga 2.

After several changes in the recent past, where the team maintaining the plugins used by several monitoring solutions was busy moving everything to new infrastructure, they are now back on track. The recent development milestone has been reached and a call for testing has also been sent out.

In the meantime I prepared the packaging for this bigger move. The packages are now moved to the source package monitoring-plugins; the whole set of packaging changes can be observed in the changelog. With this new release we also have some NEWS, which might be useful to check. The same goes for the upstream NEWS.

You can give the packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories. Right after the stable release, the packages will be uploaded to Debian unstable, but they might get delayed in the NEW queue due to the new package names.

Dominique Dumont: Status and next step on lcdproc automatic configuration upgrade with Perl and Config::Model

7 July, 2014 - 00:42

Back in March, I uploaded a Debian version of lcdproc with a unique feature: user and maintainer configurations are merged during package upgrade, so user customizations and developer enhancements are both preserved in the new configuration file. (See this blog for more details.) This avoids tedious editing of the LCDd.conf configuration file after every upgrade of the lcdproc package.

At the beginning of June, a new version of lcdproc (0.5.7-1) was uploaded. This triggered another round of automatic upgrades on users’ systems.

According to the popcon rise of libconfig-model-lcdproc-perl, about 100 people have upgraded lcdproc on their systems. Since automatic upgrade has an opt-out feature, one cannot say for sure that 100 people are actually using automatic upgrade, but I bet a fair portion of them are.

So far, only one person has complained: a bug report was filed about the many dependencies brought in by libconfig-model-lcdproc-perl.

The next challenge for lcdproc configuration upgrade is brought by a bug reported on Ubuntu: the device file provided by the imon kernel module is a moving target. The device file created by the kernel can be /dev/lcd0, /dev/lcd1 or even /dev/lcd2. Static configuration files and moving targets don’t mix well.

The obvious solution is to provide a udev rule so that a symbolic link is created from a fixed location (/dev/lcd-imon) to the moving target. Once the udev rule is installed, the user only has to update LCDd.conf file to use the symlink as imon device file and we’re done.
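Such a rule might look roughly like this (a sketch only; the match key is an assumption on my part, and the real rule ships with the Debian package):

```
KERNEL=="lcd[0-9]*", SYMLINK+="lcd-imon"
```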

But, wait… The whole point of automatic configuration upgrade is to spare the user this kind of trouble: the upgrade must be completely automatic.

Moreover, the upgrade must work in all cases: whether udev is available (Linux) or not. If udev is not available, the value present in the configuration file must be preserved.

To know whether udev is available, the upgrade tool (aka cme) will check whether the file provided by udev (/dev/lcd-imon) is present or not. This will be done by the lcdproc postinst script (which is run automatically at the end of lcdproc upgrade). This means that the new udev rule must also be activated in the postinst script before the upgrade is done.

In other words, the next version of lcdproc (0.5.7-2) will:

  • Install a new udev rule to provide lcd-imon symbolic link
  • Activate this rule in lcdproc postinst script before upgrading the configuration (note to udev experts: yes, the udev rule is activated with “--action=change” option)
  • Upgrade the configuration by running “cme migrate” in lcdproc postinst script.

In the lcdproc configuration model installed by libconfig-model-lcdproc-perl, the “imon device” parameter is enhanced so that running cme check lcdproc or cme migrate lcdproc issues a warning if /dev/lcd-imon exists and if imon driver is not configured to use it.

This way, the next installation of lcdproc will deliver a fix for imon and cme will fix user’s configuration file without requiring user input.

The last point is admittedly bad marketing as users will not be aware of the magic performed by Config::Model… Oh well…

In the previous section, I’ve briefly mentioned that the “imon_device” parameter is “enhanced” in the lcdproc configuration model. If you’re not already bored, let’s lift the hood and see what kind of enhancements were added.

Let’s peek into lcdproc’s configuration file, LCDd.conf, which is used to generate the lcdproc configuration model. You may remember that the formal description of all LCDd.conf parameters and their properties is generated from LCDd.conf itself. The comments in LCDd.conf follow a convention so that most properties of the parameters can be extracted from them. In the example below, the comments show that NewFirmware is a boolean value expressed as yes or no, the latter being the default:

# Set the firmware version (New means >= 2.0) [default: no; legal: yes, no]
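The convention is regular enough that a toy parser shows the idea (a Python illustration of the comment format only; Config::Model’s actual generator is Perl and far more thorough):

```python
import re

# Extract "default" and "legal" values from an LCDd.conf-style comment,
# e.g. "[default: no; legal: yes, no]" at the end of the line.
SPEC = re.compile(r"\[default:\s*([^;\]]+?)\s*(?:;\s*legal:\s*([^\]]+))?\]")

def parse_spec(comment):
    m = SPEC.search(comment)
    if not m:
        return None
    default, legal = m.group(1), m.group(2)
    return {
        "default": default,
        "legal": [v.strip() for v in legal.split(",")] if legal else None,
    }

spec = parse_spec("# Set the firmware version (New means >= 2.0) "
                  "[default: no; legal: yes, no]")
print(spec)  # {'default': 'no', 'legal': ['yes', 'no']}
```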

Back to the moving target. In LCDd.conf, the imon device file parameter is declared this way:

# Select the output device to use [default: /dev/lcd0]

This means that device is a string whose default value is /dev/lcd0.

This is wrong once the special udev rule provided with the Debian packages is activated. With this rule, the default value must be /dev/lcd-imon.

To fix this problem, a special comment is added in the Debian version of LCDd.conf to further tune the properties of the device parameter:

# select the device to use
# {%
#   default~
#   compute
#     use_eval=1
#     formula="my $l = '/dev/lcd-imon'; -e $l ? $l : '/dev/lcd0';"
#     allow_override=1 -
#   warn_if:not_lcd_imon
#     code="my $l = '/dev/lcd-imon';defined $_ and -e $l and $_ ne $l ;"
#     msg="imon device does not use /dev/lcd-imon link."
#     fix="$_ = undef;"
#   warn_unless:found_device_file
#     code="defined $_ ? -e : 1"
#     msg="missing imon device file"
#     fix="$_ = undef;"
#   - %}

This special comment between “{%” and “%}” follows the syntax of Config::Model::Loader. A small configuration model is declared there to enhance the model generated from the LCDd.conf file.

Here are the main parts:

  • default~ suppresses the default value of the “device” parameter declared in the original LCDd.conf (i.e. “/dev/lcd0”)
  • compute and the 3 lines below it compute a default value for the device file. Since “use_eval” is true, the formula is evaluated as Perl code. This code returns /dev/lcd-imon if that file is found; otherwise /dev/lcd0 is returned. Hence either /dev/lcd-imon or /dev/lcd0 will be used as the default value. allow_override=1 lets the user override this computed value
  • warn_if and the 3 lines below it test the configured device file with the Perl instructions provided by the code parameter. There, the device value is available in the $_ variable. The code returns true if /dev/lcd-imon exists and the configured device does not use it; this triggers a warning showing the specified message
  • Similarly, warn_unless and the 3 lines below it warn the user if the configured device file is not found

In both the warn_if and warn_unless parts, the fix code snippet is run by the command cme fix lcdproc and is used to “repair” the warning condition. In this case, the fix consists of resetting the device configuration value so that the computed value above can be used.

cme fix lcdproc is triggered by the package post-install script installed by dh_cme_upgrade.

Come to think of it, generating a configuration model from a configuration file can probably be applied to other projects: for instance, php.ini and kdmrc are also shipped with detailed comments. Maybe I should make a more generic model generator from the example used to generate the lcdproc model…

Well, I will do it if people show interest. Not in the form “yeah, that would be cool”, but in the form “yes, I will use your work to generate a configuration model for project [...]”. I’ll let you fill in the blank ;-)

Tagged: Config::Model, configuration, debian, lcdproc, Perl, upgrade

Eugene V. Lyubimkin: (Finland) FUUG foundation gives money for FLOSS development

6 July, 2014 - 23:46
You live in Finland? Do you work on a FLOSS project, or on a project helping FLOSS in one way or another? Apply for FUUG's limited sponsorship program! Rules and details (in Finnish): .

Ian Campbell: Setting absolute date based Amazon S3 bucket lifecycles with curl

6 July, 2014 - 18:45

For my local backup regimen I use flexbackup to create a full backup twice a year and differential/incremental backups on a weekly/monthly basis. I then upload these to a new Amazon S3 bucket for each half year (so each bucket corresponds to a full backup plus the associated differentials and incrementals).

I then set the bucket's lifecycle to archive to glacier (cheaper offline storage) from the month after that half year has ended (reducing costs) and to delete it a year after the half ends. It used to be possible to do this via the S3 web interface but the absolute date based options seem to have been removed in favour of time since last update, which is not what I want. However the UI will still display such lifecycles if they are configured and directs you to the REST API to set them up.

I had a look around but couldn't find any existing CLI tools to do this directly, but I figured it must be possible with curl. A little bit of reading later I found that it was possible, but it involved some faff calculating signatures etc. Luckily EricW has written Amazon S3 Authentication Tool for Curl (AKA s3curl), which automates the majority of that faff. The tool is "New BSD" licensed according to that page, or Apache 2.0 licensed according to the included LICENSE file and code comments.


Following the included README, set up ~/.s3curl containing your id and secret key (I called mine personal, which I then use below).
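For reference, a sketch of what that ~/.s3curl looks like — it is a Perl fragment in the format described by the s3curl README; the "personal" profile name matches the --id used below, and the credentials here are AWS's documentation placeholders, not real keys:

```shell
# Write a ~/.s3curl with one named credential profile.
cat > ~/.s3curl <<'EOF'
%awsSecretAccessKeys = (
    personal => {
        id  => 'AKIAIOSFODNN7EXAMPLE',
        key => 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    },
);
EOF
chmod 600 ~/.s3curl   # keep the secret key private
```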

Getting the existing lifecycle

Retrieving an existing lifecycle is pretty easy. For the bucket which I used for the first half of 2014:

$ s3curl --id=personal -- --silent "http://$BUCKET.s3.amazonaws.com/?lifecycle" | xmllint --format -
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <ID>Archive and Expire</ID>

See GET Bucket Lifecycle for details of the XML.

Setting a new lifecycle

The desired configuration needs to be written to a file. For example to set the lifecycle for the bucket I'm going to use for the second half of 2014:

$ cat s3.lifecycle
    <ID>Archive and Expire</ID>
$ s3curl --id=personal --put s3.lifecycle --calculateContentMd5 -- "http://$BUCKET.s3.amazonaws.com/?lifecycle"

See PUT Bucket Lifecycle for details of the XML.
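The XML shown above is truncated to the rule ID. As a sketch, a complete s3.lifecycle matching the policy described (archive the month after the half ends, delete a year later) might look like the following — the dates and the rule ID are illustrative assumptions, while the element names follow the S3 PUT Bucket Lifecycle API:

```shell
# Generate a hypothetical full lifecycle document for a second-half-of-2014
# bucket: transition to Glacier after the half ends, expire a year later.
cat > s3.lifecycle <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Date>2015-02-01T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2016-01-01T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
EOF
```

Since the rules use absolute Date elements rather than Days, the bucket's schedule is fixed regardless of when objects were last updated, which is exactly the behaviour the web UI no longer offers.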

Daniel Pocock: News team jailed, phone hacking not fixed though

6 July, 2014 - 16:20

This week former News of the World executives were sentenced, most going to jail, for the British phone hacking scandal.

Noticeably absent from the trial and much of the media attention are the phone companies. Did they know their networks could be so systematically abused? Did they care?

In any case, the public has never been fully informed about how phones have been hacked. Speculation has it that phone hackers were guessing the PINs for remote voicemail access, typically trying birthdates and weak PINs like 0000 or 1234.

There is more to it

Those in the industry know that there are additional privacy failings in mobile networks, especially the voicemail service. It is not just in the UK either.

There are various reasons for not sharing explicit details on a blog like this and comments concerning such techniques can't be accepted.

Nonetheless, there are some points that do need to be made:

  • it is still possible for phones, especially voicemail, to be hacked on demand
  • an attacker does not need expensive equipment nor do they need to be within radio range (or even the same country) as their target
  • the attacker does not need to be an insider (phone company or spy agency employee)
Disable voicemail completely - the only way to be safe

The bottom line is that the only way to prevent voicemail hacking is to disable the phone's voicemail service completely. Voicemail is not really necessary given that most phones support email now. For those who feel they need it, consider running the voicemail service on your own private PBX using free software like Asterisk or FreeSWITCH. Some Internet telephony service providers also offer third-party voicemail solutions that are far more secure than those default services offered by mobile networks.

To disable voicemail, simply do two things:

  • send a letter to the phone company telling them you do not want any voicemail box in their network
  • in the mobile phone, select the menu option to disable all diversions, or manually disable each diversion one by one (e.g. disable forwarding when busy, disable forwarding when not answered, disable forwarding when out of range)

Russell Coker: Desktop Publishing is Wrong

6 July, 2014 - 14:53

When I first started using computers a “word processor” was a program that edited text. The most common and affordable printers were dot-matrix and people who wanted good quality printing used daisy wheel printers. Text from a word processor was sent to a printer a letter at a time. The options for fancy printing were bold and italic (for dot-matrix), underlines, and the use of spaces to justify text.

It really wasn’t much good if you wanted to include pictures, graphs, or tables. But if you just wanted to write some text it worked really well.

When you were editing text it was typical that the entire screen (25 rows of 80 columns) would be filled with the text you were writing. Some word processors used 2 or 3 lines at the top or bottom of the screen to display status information.

Some time after that desktop publishing (DTP) programs became available. Initially most people had no interest in them because of the lack of suitable printers, the early LASER printers were very expensive and the graphics mode of dot matrix printers was slow to print and gave fairly low quality. Printing graphics on a cheap dot matrix printer using the thin continuous paper usually resulted in damaging the paper – a bad result that wasn’t worth the effort.

When LASER and Inkjet printers started to become common, word processing programs started getting many more features and basically took over from desktop publishing programs. This made them slower and more cumbersome to use. For example StarOffice/OpenOffice/LibreOffice has distinguished itself by remaining equally slow as it transitioned from running on an OS/2 system with 16M of RAM in the early ’90s to a Linux system with 256M of RAM in the late ’90s to a Linux system with 1G of RAM in more recent times. It’s nice that with the development of PCs that have AMD64 CPUs and 4G+ of RAM we have finally managed to increase PC power faster than LibreOffice can consume it. But it would be nicer if they could optimise for the common cases. LibreOffice isn’t the only culprit, it seems that every word processor that has been in continual development for that period of time has had the same feature bloat.

The DTP features that made word processing programs so much slower also required more menus to control them. So instead of just having text on the screen with maybe a couple of lines for status we have a menu bar at the top followed by a couple of lines of “toolbars”, then a line showing how much width of the screen is used for margins. At the bottom of the screen there’s a search bar and a status bar.

Screen Layout

By definition the operation of a DTP program will be based around the size of the paper to be used. The default for this is A4 (or “Letter” in the US) in a “portrait” layout (higher than it is wide). The cheapest (and therefore most common) monitors in use are designed for displaying wide-screen 16:9 ratio movies. So we have images of A4 paper with a width:height ratio of 0.707:1 displayed on a wide-screen monitor with a 1.777:1 ratio. This means that only about 40% of the screen space would be used if you don’t zoom in (but if you zoom in then you can’t see many rows of text on the screen). One of the stupid ways this is used is by companies that send around word processing documents when plain text files would do, so everyone who reads the document uses a small portion of the screen space and a large portion of the email bandwidth.

Note that this problem of wasted screen space isn’t specific to DTP programs. When I use the Google Keep website [1] to edit notes on my PC they take up a small fraction of the screen space (about 1/3 screen width and 80% screen height) for no good reason. Keep displays about 70 characters per line and 36 lines per page. Really every program that allows editing moderate amounts of text should allow more than 80 characters per line if the screen is large enough and as many lines as fit on the screen.

One way to alleviate the screen waste on DTP programs is to use a “landscape” layout for the paper. This is something that all modern printers support (AFAIK the only printers you can buy nowadays are LASER and ink-jet and it’s just a big image that gets sent to the printer). I tried to do this with LibreOffice but couldn’t figure out how. I’m sure that someone will comment and tell me I’m stupid for missing it, but I think that when someone with my experience of computers can’t easily figure out how to perform what should be a simple task then it’s unreasonably difficult for the vast majority of computer users who just want to print a document.

When trying to work out how to use landscape layout in LibreOffice I discovered the “Web Layout” option in the “View” menu which allows all the screen space to be used for text (apart from the menu bar, tool bars, etc). That also means that there are no page breaks! That means I can use LibreOffice to just write text, take advantage of the spelling and grammar correcting features, and only have screen space wasted by the tool bars and menus etc.

I never worked out how to get Google Docs to use a landscape document or a single webpage view. That’s especially disappointing given that the proportion of documents that are printed from Google Docs is probably much lower than most word processing or DTP programs.

What I Want

What I’d like to have is a word processing program that’s suitable for writing draft blog posts and magazine articles. For blog posts most of the formatting is done by the blog software and for magazine articles the editorial policy demands plain text in most situations, so there’s no possible benefit of DTP features.

The ability to edit a document on an Android phone and on a Linux PC is a good feature. While the size of a phone screen limits what can be done, it does allow jotting down ideas and correcting mistakes. I previously wrote about using Google Keep on a phone for lecture notes [2]. It seems that the practical limit of Keep for editing notes on a PC is about the notes for a 45 minute lecture. So while Keep works well for that task it won’t do well for anything bigger unless Google make some changes.

Google Docs is quite good for editing medium size documents on a phone if you use the Android app. Given the limitations of the device size and input capabilities it works really well. But it’s not much good for use on a PC.

I’ve seen a positive review of One Note from Microsoft [3]. But apart from the fact that it’s from Microsoft (with all the issues that involves) there’s the issue of requiring another account. Using an Android phone requires a Gmail account (in practice for almost all possible uses if not in theory) so there’s no need to get an extra account for Google Keep or Docs.

What would be ideal is an Android editor that could talk to a cloud service that I run (maybe using WebDAV) and which could use the same data as a Linux-X11 application.

Any suggestions?


Matthew Palmer: Witness the security of this fully DNSSEC-enabled zone!

6 July, 2014 - 13:00

After dealing with the client side of the DNSSEC puzzle last week, I thought it behooved me to also go about getting DNSSEC going on the domains I run DNS for. Like the resolver configuration, the server side work is straightforward enough once you know how, but boy howdy are there some landmines to be aware of.

One thing that made my job a little less ordinary is that I use and love tinydns. It’s an amazingly small and simple authoritative DNS server, strong in the Unix tradition of “do one thing and do it well”. Unfortunately, DNSSEC is anything but “small and simple” and so tinydns doesn’t support DNSSEC out of the box. However, Peter Conrad has produced a patch for tinydns to do DNSSEC, and that does the trick very nicely.

A brief aside about tinydns and DNSSEC, if I may… Poor key security is probably the single biggest compromise vector for crypto. So you want to keep your keys secure. A great way to keep keys secure is to not put them on machines that run public-facing network services (like DNS servers). So, you want to keep your keys away from your public DNS servers. A really great way of doing that would be to have all of your DNS records somewhere out of the way, and when they change regenerate the zone file, re-sign it, and push it out to all your DNS servers. That happens to be exactly how tinydns works. I happen to think that tinydns fits very nicely into a DNSSEC-enabled world. Anyway, back to the story.

Once I’d patched the tinydns source and built updated packages, it was time to start DNSSEC-enabling zones. This breaks down into a few simple steps:

  1. Generate a key for each zone. This will produce a private key (which, as the name suggests, you should keep to yourself), a public key in a DNSKEY DNS record, and a DS DNS record. More on those in a minute.

    One thing to be wary of if you’re like me and don’t want or need separate “Key Signing” and “Zone Signing” keys: you must generate a “Key Signing” key – a key with a “flags” value of 257. Doing this wrong will result in all sorts of odd-ball problems. I wanted to just sign zones, so I generated a “Zone Signing” key, which has a “flags” value of 256. Big mistake.

    Also, the DS record is a hash of everything in the DNSKEY record, so don’t just think you can change the 256 to a 257 and everything will still work. It won’t.

  2. Add the key records to the zone data. For tinydns, this is just a matter of copying the zone records from the generated key into the zone file itself, and adding an extra pseudo record (it’s all covered in the tinydnssec howto).

  3. Publish the zone data. Reload your BIND config, run tinydns-sign and tinydns-data then rsync, or do whatever it is PowerDNS people do (kick the database until replication starts working again?).

  4. Test everything. I found the Verisign Labs DNSSEC Debugger to be very helpful. You want ticks everywhere except for where it’s looking for DS records for your zone in the higher-level zone. If there are any other freak-outs, you’ll want to fix those – because broken DNSSEC will take your domain off the Internet in no time.

  5. Tell the world about your DNSSEC keys. This is simply a matter of giving your DS record to your domain registrar, for them to add it to the zone data for your domain’s parent. Wherever you’d normally go to edit the nameservers or contact details for your domain, you probably want to go to the same place and look for something about “DS” or “Delegation Signer” records. Copy and paste the details from the DS record in your zone into there, submit, and wait a minute or two for the records to get published.

  6. Test again. Before you pat yourself on the back, make sure you’ve got a full board of green ticks in the DNSSEC Debugger. If anything’s wrong, you want to roll back immediately, because broken DNSSEC means that anyone using a DNSSEC-enabled resolver just lost the ability to see your domain.

That’s it! There’s a lot of complicated crypto going on behind the scenes, and DNSSEC seems to revel in the number of acronyms and concepts that it introduces, but the actual execution of DNSSEC-enabling your domains is quite straightforward.
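The flags pitfall in step 1 is easy to check mechanically. A small sketch (the DNSKEY record here is hypothetical and truncated) that inspects the flags field of a presentation-format DNSKEY record:

```shell
# With owner name, TTL, class and type present, the flags value is the 5th
# whitespace-separated field: 257 marks a Key Signing Key, 256 a Zone
# Signing Key.
key='example.org. 3600 IN DNSKEY 257 3 8 AwEAAc...'
flags=$(echo "$key" | awk '{print $5}')
case "$flags" in
  257) echo "KSK (flags 257): safe to use as the zone's only key" ;;
  256) echo "ZSK (flags 256): zone-signing-only key; expect odd-ball problems if used alone" ;;
  *)   echo "unexpected flags field: $flags" ;;
esac
```

Remember from step 1 that you cannot just edit 256 into 257 after the fact: the DS record hashes the whole DNSKEY, so the key must be regenerated as a KSK.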

Maximilian Attems: xserver-xorg-video-intel 2.99.912+20140705 in experimental

6 July, 2014 - 07:00

Since the release of xf86-video-intel 2.99.912 a month ago several enhancements and fixes in xf86-video-intel git piled up. Again testing is very much appreciated: xserver-xorg-video-intel packages.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.