Planet Debian

Planet Debian - http://planet.debian.org/

Nicolas Dandrimont: We need your help to make GSoC and Outreachy in Debian a success this summer!

19 February, 2015 - 22:50

Hi everyone,

A quick announcement: Debian has applied to the Google Summer of Code, and will also participate in Outreachy (formerly known as the Outreach Program for Women) for the Summer 2015 round! Those two mentoring programs are a great way for our project to bootstrap new ideas, give a new impulse to some old ones, and of course to welcome an outstanding team of motivated, curious, lively new people among us.

We need projects and mentors to sign up really soon (before February 27th, that’s next week), as our project list is what Google uses to evaluate our application to GSoC. Project proposals should be described on our wiki page. We have three sections:

  1. Coding projects with confirmed mentors are proposed to both GSoC and Outreachy applicants
  2. Non-Coding projects with confirmed mentors are proposed only to Outreachy applicants
  3. Project ideas without confirmed mentors will only happen if a mentor appears. They are kept on the wiki page until the application period starts, as we don’t want to give applicants false hopes of being picked for a project that won’t happen.

Once you’re done, or if you have any questions, drop us a line on our mailing-list (soc-coordination@lists.alioth.debian.org), or on #debian-soc on OFTC.

We also would LOVE to be able to welcome more Outreachy interns. So far, and thanks to our DPL, Debian has committed to fund one internship (US$6500). If we want more Outreachy interns, we need your help :).

If you, or your company, have some money to put towards an internship, please drop us a line at opw@debian.org and we’ll be in touch. Some of the successes of our Outreachy alumni include the localization of the Debian Installer to a new locale, improvements in the sources.debian.net service, documentation of the debbugs codebase, and a better integration of AppArmor profiles in Debian.

Thanks a lot for your help!

Richard Hartmann: Listing screen sessions on login

19 February, 2015 - 03:32

Given Peter Eisentraut's blog post on the same topic, I thought I would also share this Zsh function (from 2011):

startup () {
    # info on any running screens
    if [[ -x $(which screen) ]]
    then
        ZSHRC_SCREENLIST=(${${(M)${(f)"$(screen -ls)"}:#(#s)[[:space:]]#([0-9]##).*}/(#b)[[:space:]]#([0-9]##).*/$match[1]})
        if [[ $#ZSHRC_SCREENLIST -ge 1 ]]
        then
            echo "There are $#ZSHRC_SCREENLIST screens running. $ZSHRC_SCREENLIST"
        fi
    fi
}
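
To have this run at login, the function just needs to be invoked from the startup file that defines it; a minimal sketch:

# at the end of ~/.zshrc (or ~/.zprofile), after the definition above
startup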

Philipp Kern: Caveats of the HP MicroServer Gen8

19 February, 2015 - 01:15
If you intend to buy the HP MicroServer Gen8 as a home server there are a few caveats that I didn't find on the interwebs before I bought the device:
  • Even though the main chassis fan is now fixed in AHCI mode with recent BIOS versions, there is still an annoying PSU fan that's tiny and high frequency. You cannot control it and the PSU seems to be custom-built.
  • The BIOS does not support ACPI S3 (suspend-to-RAM) at all. Apparently, since it is a server BIOS, they chose not to include the code paths needed to properly turn devices off and back on. This means that it's not possible to simply suspend the machine and have it wake up when your media center boots.
  • In contrast to the older AMD-based MicroServers the Gen8 comes with iLO, which will consume quite a few watts just for being present even if you don't use it. I read figures of about ten watts. It also cannot be turned off, as it does system management like fan control.
  • The HDD cages are not vibration proof or decoupled.
If you try to boot FreeBSD with its zfsloader you will likely need to apply a workaround patch, because the BIOS seems to do something odd. Linux works as expected.

Lucas Nussbaum: Some email organization tips and tricks

18 February, 2015 - 00:34

I’d like to share a few tips that helped me strengthen my personal email organization. Most of what follows is probably neither very new nor special, but hey, let’s document it anyway.

Many people have an inbox folder that just grows over time. It’s actually similar to a Twitter or RSS feed (except they probably agree that they are supposed to read more of their email “feed”). When I send an email to them, it sometimes happens that they don’t notice it, if the email arrives at a bad time. Of course, as long as they don’t receive too many emails, and there aren’t too many people relying on them, it might just work. But from time to time, it’s super-painful for those interacting with them, when they miss an email and need to be pinged again. So let’s try not to be like them. :-)

Tip #1: do Inbox Zero (or your own variant of it)

Inbox Zero is an email management methodology inspired by David Allen’s Getting Things Done book. It’s best described in this video. The idea is to turn one’s Inbox into an area that is only temporary storage, where every email will get processed at some point. Processing can mean deleting an email, archiving it, doing the action described in the email (and replying to it), etc. Put differently, it basically means implementing the Getting Things Done workflow on one’s email.

Tip #1.1: archive everything

One of the time-consuming decisions in the original GTD workflow is to decide whether something should be eliminated (deleted) or stored for reference. Given that disk space is quite cheap, it’s much easier to never decide about that, and just archive everything (by duplicating the email to an archive folder when it is received). To retrieve archived emails when needed, I then use notmuch within mutt to easily search through recent (< 2 year) archives. I use archivemail to archive older email in compressed mboxes from time to time, and grepmail to search through those mboxes when needed.
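
For illustration, the archive-and-search part of that workflow looks roughly like this (the paths are placeholders and the flags are from memory, so treat them as assumptions and check the man pages):

# move mail older than two years out of the archive folder into a compressed mbox
archivemail --days=730 --output-dir=$HOME/mail-archive $HOME/Maildir/.Archive/
# later, search the compressed mboxes
grepmail -i "release team" $HOME/mail-archive/*.gz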

I don’t archive most Debian mailing lists though, as they are easy to fetch from master.d.o with the following script:

#!/bin/sh
rsync -vP master.debian.org:~debian/*/*$1/*$1.${2:-$(date +%Y%m)}* .

Then I can fetch a specific list archive with getlist devel 201502, or a set of archives with e.g. getlist devel 2014, or the current month with e.g. getlist devel. Note that to use grepmail on XZ-compressed archives, you need libmail-mbox-messageparser-perl version 1.5002-3 (only in unstable — I was using a locally-patched version for ages, but finally made a patch last week, which Gregor kindly uploaded).
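
For example, assuming the script above is saved as getlist somewhere in $PATH:

$ getlist devel 201502   # the debian-devel archive for February 2015
$ getlist devel 2014     # all the 2014 archives of debian-devel
$ getlist devel          # the current month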

Tip #1.2: split your inbox

(Yes, this one looks obvious but I’m always surprised at how many people don’t do that.)

Like me, you probably receive various kinds of emails:

  • emails about your day job
  • emails about Debian
  • personal emails
  • mailing lists about your day job
  • mailing lists about Debian
  • etc.

Splitting those into separate folders has several advantages:

  • I can adjust my ‘default action’ based on the folder I am in (e.g. delete after reading for most mailing lists, as it’s archived already)
  • I can adjust my level of focus depending on the folder (I might not want to pay a lot of attention to each and every email from a mailing list I am only remotely interested in; while I should definitely pay attention to each email in my ‘DPL’ folder)
  • When busy, I can skip the less important folders for a few days, and still be responsive to emails sent in my more important folders

I’ve seen some people splitting their inbox into too many folders. There’s not much point in having a per-sender folder organization (unless there’s really a recurring flow of emails from a specific person), as it increases the risk of missing an email.

I use procmail to organize my email into folders. I know that there are several more modern alternatives, but I haven’t looked at them since procmail does the job for me.

Resulting organization

I use one folder for my day-job email, one for my DPL email, one for all other email directed or Cced to me. Then, I have a few folders for automated notifications of stuff. My Debian mailing list folders are auto-managed by procmail’s $MATCH:

:0:
 * ^X-Mailing-List: <.*@lists\.debian\.org>
 * ^X-Mailing-List: <debian-\/[^@]*
 .ml.debian.$MATCH/

Some other mailing lists go to their own separate folders, and there’s a catch-all folder for the remaining ones. Ah, and since I use feed2imap, I have folders for the RSS/Atom feeds I follow.

I have two different commands to start mutt. One only shows a restricted number of (important) folders. The other one shows the full list of (non-empty) folders. This is a good trick to avoid spending time reading email when I am supposed to do something more important. :)

As is probably the case for many people, my own organization is loosely based on GTD and Inbox Zero. It sometimes happens that some emails stay in my Inbox for several days or weeks, but I very rarely have more than 20 or 30 emails in one of my main inbox folders. I also review the whole content of my main inbox folders once or twice a week, to ensure that I did not miss an email that could be acted on quickly.

A last trick is that I have a special folder replies, where procmail copies emails that are replies to a mail I sent, but which do not Cc me. That’s useful to work around Debian’s “no Cc on reply to mailing list posts” policy.
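
For illustration only (this is not my actual recipe), a rough procmail sketch of the idea could rely on the fact that locally-generated Message-IDs contain my own hostname, so replies to my mails can be spotted in the In-Reply-To header; the hostname below is a placeholder:

# copy (c flag) anything whose In-Reply-To points at a Message-ID generated on my machine
:0 c
* ^In-Reply-To:.*@my\.host\.example
.replies/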

I receive email using offlineimap (over SSH to my mail server), and send it using nullmailer (through a SSH tunnel). The main advantage of offlineimap over using IMAP directly in mutt is that using IMAP to a remote server feels much more sluggish. Another advantage is that I only need SSH access to get my full email setup to work.

Tip #2: tracking sent emails

Two recurring needs I had were:

  • Get an overview of emails I sent to help me write the day-to-day DPL log
  • Easily see which emails got an answer, or did not (which might mean that they need a ping)

I developed a short script to address that. It scans the content of my ‘Sent’ maildir and my archive maildirs, and, for each email address I use, displays (see example output in README) the list of emails sent with this address (one email per line). It also detects if an email was replied to (“R” column after the date), and abbreviates common parts in email addresses (debian-project@lists.debian.org becomes d-project@l.d.o). It also generates a temporary maildir with symlinks to all emails, so that I can just open the maildir if needed.

Clint Adams: Copyleft licenses are oppressing someone

18 February, 2015 - 00:18

I go to a party, carrying two expensive bottles of liquor that I have acquired from faraway lands.

The hosts of the party provide a variety of liquors, snacks, and mixers.

Some neuro guy shows up, looks around, feels guilty, says that he should have brought something.

His friend shows up, bearing hot food. The neuro guy decides to contribute $7 to the purchase of food since he didn't bring anything. The friend then proceeds to charge us each $7.

No one else demands money for any of the other things being shared and consumed by everyone. The hosts do not retroactively charge a cover fee for entrance to the house. No one else offers to pay anyone for anything.

The neuro guy attempts to wash some dishes before leaving, but is stopped by the hosts, because he is a guest.

John Goerzen: “Has Linux lost its way?” comments prompt a Debian developer to revisit FreeBSD after 20 years

17 February, 2015 - 23:11

I’ll admit it. I have a soft spot for FreeBSD. FreeBSD was the first Unix I ran, and it was somewhere around 20 years ago that I did so, before I switched to Debian. Even then, I still used some of the FreeBSD Handbook to learn Linux, because Debian didn’t have the great Reference that it does now.

Anyhow, some comments in my recent posts (“Has modern Linux lost its way?” and Reactions to that, and the value of simplicity), plus a latent desire to see how ZFS fares in FreeBSD, caused me to try it out. I installed it both in VirtualBox under Debian, and on an old 64-bit Thinkpad sitting in my basement that previously ran Debian.

The results? A mixture of amazing and disappointing. I will say that I am quite glad that both exist; there is plenty of innovation happening everywhere and neat features exist everywhere, too. But I can also come right out and say that the statement that FreeBSD doesn’t have issues like Linux does is false and misleading. In many cases, it’s running the exact same stack. In others, it’s better, but there are also others where it’s worse. Perhaps this article might dispel a bit of the FUD surrounding jessie, while also showing off some of the nice things FreeBSD does. My conclusion: Both jessie and FreeBSD 10.1 are awesome Free operating systems, but both have their warts. This article is more about FreeBSD than Debian, but it will discuss a few of Debian’s warts as well.

The experience

My initial reaction to FreeBSD was: wow, this feels so familiar. It reminds me of a commercial Unix, or maybe of Linux from a few years ago. A minimal, well-documented base system, everything pretty much in logical places in the filesystem, and solid memory management. I felt right at home. It was almost reassuring, even.

Putting together a FreeBSD box is a lot of package installing and config file editing. The FreeBSD Handbook, describing how to install X, talks about editing this or that file for this or that feature. I like being able to learn directly how things fit together by doing this.

But then you start remembering the reasons you didn’t like Linux a few years ago, or the commercial Unixes: maybe it’s that programs like apache are still not as well supported, or maybe it’s that the default vi has this tendency to corrupt the terminal periodically, or perhaps it’s that root’s default shell is csh. Or perhaps it’s that I have to do a lot of package installing and config file editing. It is not quite the learning experience it once was, either; now there are things like “paste this XML file into some obscure polkit location to make your mouse work” or something.

Overall, there are some areas where FreeBSD kills it in a way no other OS does. It is unquestionably awesome in several areas. But there are a whole bunch of areas where it’s about 80% as good as Linux, a number of areas (even polkit, dbus, and hal) where it’s using the exact same stack Linux is (so all these comments about FreeBSD being so differently put together strike me as hollow), and frankly some areas that need a lot of work and make it hard to manage systems in a secure and stable way.

The amazing

Let’s get this out there: I’ve used ZFS too much to use any OS that doesn’t support it or something like it. Right now, I’m not aware of anything like ZFS that is generally stable and doesn’t cost a fortune, so pretty much: if your Unix doesn’t do ZFS, I’m not interested. (btrfs isn’t there yet, but will be awesome when it is.) That’s why I picked FreeBSD for this, rather than NetBSD or OpenBSD.

ZFS on FreeBSD is simply awesome. They have integrated it extremely well. The installer supports root on zfs, even encrypted root on zfs (though neither is a default). top on a FreeBSD system shows a line of ZFS ARC (cache) stats right alongside everything else. The ZFS defaults for maximum cache size, readahead, etc. auto-tune themselves at boot (unless overridden) based on the amount of RAM in a system and the system type. Seriously, these folks have thought of everything and it just reeks of solid. I haven’t seen ZFS this well integrated outside the Solaris-type OSs.

I have been using ZFSOnLinux for some time now, but it is just not as mature as ZFS on FreeBSD. ZoL, for instance, still has some memory tuning issues, and is not really suggested for 32-bit machines. FreeBSD just nails it. ZFS on FreeBSD even supports TRIM, which is not available in ZoL and I think fairly unique even among OpenZFS platforms. It also supports delegated administration of the filesystem, both to users and to jails on the system, seemingly very similar to Solaris zones.
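
As a rough illustration of the delegation features (the dataset, user and jail names below are made up):

# let a regular user manage snapshots of their own dataset
zfs allow alice create,destroy,mount,snapshot tank/home/alice
# hand a dataset over to a running jail (the jail with JID 1 here)
zfs set jailed=on tank/jails/web
zfs jail 1 tank/jails/web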

FreeBSD also supports beadm, similar to the tool of the same name on Solaris. This lets you basically use ZFS snapshots to make lightweight “boot environments”, so you can select which one to boot into. This is useful, say, before doing upgrades.
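
Typical use before an upgrade looks something like this (the boot environment name is arbitrary):

beadm create pre-upgrade     # snapshot the current root as a new boot environment
beadm list                   # show the available boot environments
beadm activate pre-upgrade   # go back to it if the upgrade turns out badly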

Then there are jails. Linux has tried so hard to get this right, and fallen on its face so many times, a person just wants to take pity sometimes. We’ve had linux-vserver, openvz, lxc, and still none of them match what FreeBSD jails have done for a long time. Linux’s current jail-du-jour is LXC, though it is extremely difficult to configure in a secure way. Even its author comments that “you won’t hear any of the LXC maintainers tell you that LXC is secure” and that it pretty much requires AppArmor profiles to achieve reasonable security. These are still rather in flux, as I found out last time I tried LXC a few months ago. My confidence in LXC being as secure as, say, KVM or FreeBSD is simply very low.

FreeBSD’s jails are simple and well-documented where LXC is complex and hard to figure out. Its security is fairly transparent and easy to control and they just work well. I do think LXC is moving in the right direction and might even get there in a couple years, but I am quite skeptical that even Docker is getting the security completely right.

The simply different

People have been throwing around the word “distribution” with respect to FreeBSD, PC-BSD, etc. in recent years. There is an analogy there, but it’s not perfect. In the Linux ecosystem, there is a kernel project, a libc project, a coreutils project, a udev project, a systemd/sysvinit/whatever project, etc. You get the idea. In FreeBSD, there is a “base system” project. This one project covers the kernel and the base userland. Some of what they use in the base system is code pulled in from elsewhere but maintained in their tree (ssh), some is completely homegrown (kernel), etc. But in the end, they have a nicely-integrated base system that always gets upgraded in sync.

In the Linux world, the distribution makers are responsible for integrating the bits from everywhere into a coherent whole.

FreeBSD is something of a toolkit to build up your system. Gentoo might be an analogy on the Linux side. On the other end of the spectrum, Ubuntu is a “just install it and it works, tweak later” sort of setup. Debian straddles the middle ground, offering both approaches in many cases.

There are pros and cons to each approach. Generally, I don’t think either one is better. They are just different.

The not-quite-there

I said that there are a lot of things in FreeBSD that are about 80% of where Linux is. Let me touch on them here.

Its laptop support leaves something to be desired. I installed it on a few-years-old Thinkpad — basically the best possible platform for working suspend in a Free OS. It has worked perfectly out of the box in Debian for years. In FreeBSD, suspend only works if it’s in text mode. If X is running, the video gets corrupted and the system hangs. I have not tried to debug it further, but would also note that suspend on closed lid is not automatic in FreeBSD; the somewhat obscure instructions tell you what policykit pkla file to edit to make suspend work in XFCE. (Incidentally, they also say what policykit file to edit to make the shutdown/restart options work.)

Its storage subsystem also has some surprising misses. Its rough version of LVM, LUKS, and md-raid is called GEOM. GEOM, however, supports only RAID0, RAID1, and RAID3. It does not support RAID5 or RAID6 in software RAID configurations! Linux’s md-raid, by comparison, supports RAID0, RAID1, RAID4, RAID5, RAID6, etc. There seems to be a highly experimental RAID5 patchset floating around for many years, but it is certainly not integrated into the latest release kernel. The current documentation makes no mention of RAID5, although it seems that a dated logical volume manager supported it. In any case, RAID5 does not seem to be well-supported in software like it is in Linux.

ZFS does have its raidz1 level, which is roughly the same as RAID5. However, that requires full use of ZFS. ZFS also does not support some common operations, like adding a single disk to an existing RAID5 group (which is possible with md-raid and many other implementations.) This is a ZFS limitation on all platforms.
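
For instance (disk names are made up), a raidz1 pool is built from a fixed set of disks, and growing it means adding a whole new vdev rather than a single disk:

zpool create tank raidz1 da1 da2 da3   # three-disk raidz1, roughly RAID5
zpool add tank raidz1 da4 da5 da6      # works: adds a second raidz1 vdev
# there is no command to grow the existing three-disk raidz1 vdev by one disk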

FreeBSD’s filesystem support is rather a miss. They once had support for Linux ext* filesystems using the actual Linux code, but ripped it out because it was GPL-licensed and rewrote it so it had a BSD license. The resulting driver really only works with ext2 filesystems, as it doesn’t work with ext3/ext4 in many situations. Frankly I don’t see why they bothered; they now have something that is BSD-licensed but only works with a filesystem so old nobody uses it anymore. There are only two FreeBSD filesystems that are really usable: UFS2 and ZFS.

Virtualization under FreeBSD is also not all that present. Although it does support the VirtualBox Open Source Edition, this is not really a full-featured or fast enough virtualization environment for a server. Its other option is bhyve, which looks to be something of a Xen clone. bhyve, however, does not support Windows guests, and requires some hoops to even boot Linux guest installers. It will be several years at least before it reaches feature-parity with where KVM is today, I suspect.

One can run FreeBSD as a guest under a number of different virtualization systems, but their instructions for making the mouse work best under VirtualBox did not work. There may have been some X.Org reshuffle in FreeBSD that wasn’t taken into account.

The installer can be nice and fast in some situations, but one wonders a little bit about QA. I had it lock up on me twice. Turns out this is a known bug reported 2 months ago with no activity, in which the installer attempts to use a package manager that it hasn’t set up yet to install optional docs. I guess the devs aren’t installing the docs in testing.

There is nothing like Dropbox for FreeBSD. Apparently this is because FreeBSD has nothing like Linux’s inotify. The Linux Dropbox does not work in FreeBSD’s Linux mode. There are sketchy reports of people getting an OwnCloud client to work, but in something more akin to rsync rather than instant-sync mode, if they get it working at all. Some run Dropbox under wine, apparently.

The desktop environments tend to need a lot more configuration work to get them going than on Linux. There’s a lot of editing of polkit, hal, dbus, etc. config files mentioned in various places. So, not only does FreeBSD use a lot of the same components that cause confusion in Linux, it doesn’t really configure them for you as much out of the box.

FreeBSD doesn’t support as many platforms as Linux. FreeBSD has only two platforms that are fully supported: i386 and amd64. You’ll see people refer to a list of other platforms that are “supported”, but those don’t have security support, official releases, or even built packages. They include arm, ia64, powerpc, and sparc64.

The bad: package management

Roughly 20 years ago, this was one of the things that pulled me to Debian. Perhaps I am spoiled from running the distribution that has been the gold standard for package management for so long, but I find FreeBSD’s package management — even “pkg-ng” in 10.1-RELEASE — to be lacking in a number of important ways.

To start with, FreeBSD actually has two different package management systems: one for the base system, and one for what they call the ports/packages collection (“ports” being the way to install from source, and “packages” being the way to install from binaries, but both related to the same tree.) For the base system, there is freebsd-update which can install patches and major upgrades. It also has a “cron” option to automate this. Sadly, it has no way of automatically indicating to a calling script whether a reboot is necessary.

freebsd-update really manages less than a dozen packages though. The rest are managed by pkg. And pkg, it turns out, has a number of issues.

The biggest: it can take a week to get security updates. The FreeBSD handbook explains pkg audit -F, which will look at your installed packages (but NOT the ones in the base system) and alert you to packages that need to be updated, similar to a stripped-down version of Debian’s debsecan. I discovered this myself, when pkg audit -F showed a vulnerability in xorg, but pkg upgrade showed my system was up-to-date. This discrepancy is not documented in the Handbook, but people on the mailing list explained it to me. There are workarounds, but they can be laborious.

If that’s not bad enough, FreeBSD has no way to automatically install security patches for things in the packages collection. Debian has several (unattended-upgrades, cron-apt, etc.) There is “pkg upgrade”, but it upgrades everything on the system, which may be quite a bit more than you want to be upgraded. So: if you want to run Apache with PHP, and want it to just always apply security patches, FreeBSD packages are not up to the job like Debian’s are.
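
To make the split concrete, the update workflow described above boils down to something like this (shown with the standard subcommands; exact behaviour may differ between releases):

freebsd-update fetch install   # patch the base system (the dozen or so base "packages")
pkg audit -F                   # fetch the vulnerability database and list affected packages
pkg upgrade                    # upgrades *everything* from packages, not just the vulnerable bits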

The pkg tool doesn’t have very good error-handling. In fact, its error handling seems to be nonexistent at times. I noticed that some packages had failures during install time, but pkg ignored them and marked the package as correctly installed. I only noticed there was a problem because I happened to glance at the screen at the right moment during messages about hundreds of packages. In Debian, by contrast, if there are any failures, at the end of the run, you get a nice report of which packages failed, and an exit status to use in scripts.

It also has another issue that Debian resolved about a decade ago: package scripts displaying messages that are important for the administrator, but showing so many of them that they scroll off the screen and are never seen. I submitted a bug report for this one also.

Some of these things just make me question the design of pkg. If I can’t trust it to accurately report if the installation succeeded, or show me the important info I need to see, then to what extent can I trust it?

Then there is the question of testing of the ports/packages. It seems that, automated tests aside, basically everyone is running off the “master” branch of the ports/packages. That’s like running Debian unstable on your servers. I am distinctly uncomfortable with this notion, though it seems FreeBSD people report it mostly works well.

There are some other issues, too: FreeBSD ports make no distinction between development and runtime files like Debian’s packages do. So, just by virtue of wanting to run a graphical desktop, you get all of the static libraries, include files, build scripts, etc for XOrg installed.

For a project as concerned about licensing as FreeBSD, the packages collection does not have separate sections like Debian’s main, contrib, and non-free. It’s all in one big pot: BSD-licensed, GPL-licensed, proprietary without source. There is /usr/local/share/licenses where you can look up a license for each package, but there is no way with FreeBSD, like there is with Debian, to say “never even show me packages that aren’t DFSG-free.” This is useful, for instance, when running in a company to make sure you never install packages that are for personal use only or something.

The bad: ABI stability

I’m used to being able to run binaries I compiled years ago on a modern system. This is generally possible in Linux, assuming you have the correct shared libraries available. In FreeBSD, this is explicitly NOT possible. After every major version upgrade, you must reinstall or recompile every binary on your system.

This is not necessarily a showstopper for me, but it is a hassle for a lot of people.

Update 2015-02-17: Some people in the comments are pointing out compat packages in the ports that may help with this situation. My comment was based on advice in the FreeBSD Handbook stating “After a major version upgrade, all installed packages and ports need to be upgraded”. I have not directly tried this, so if the Handbook is overstating the need, then this point may be in error.

Conclusions

As I said above, I found little validation to the comments that the Debian ecosystem is noticeably worse than the FreeBSD one. Debian has its warts too — particularly with keeping software up-to-date. You can see that the two projects are designed around a different passion: FreeBSD’s around the base system, and Debian’s around an integrated whole system. It would be wrong to say that either of those is always better. FreeBSD’s approach clearly produces some leading features, especially jails and ZFS integration. Yet Debian’s approach also produces some leading features in the way of package management and security maintainability beyond the small base.

My criticism of excessive complexity in the polkit/cgmanager/dbus area still stands. But to those people commenting that FreeBSD hasn’t “lost its way” like Linux has, I would point out that FreeBSD mostly uses these same components also, and FreeBSD has excessive complexity in its ports/package system and system management tools. I think it’s a draw. You pick the best for your use case. If you’re looking for a platform to run a single custom app then perhaps all of the Debian package management benefits don’t apply to you (you may not even need FreeBSD’s packages, or just a few). The FreeBSD ZFS support or jails may well appeal. If you’re looking to run a desktop environment, or a server with some application that needs a ton of PHP, Python, Perl, or C libraries, then Debian’s package management and security handling may well be attractive.

I am disappointed that Debian GNU/kFreeBSD will not be a release architecture in jessie. That project had the promise to provide a best of both worlds for those that want jails or tight ZFS integration.

Enrico Zini: akonadi-install

17 February, 2015 - 21:34
Setting up Akonadi

Now that I have a CalDAV server that syncs with my phone I would like to use it from my desktop.

It looks like akonadi is able to sync with CalDAV servers, so I'm giving it a try.

First things first: let's give a meaning to the arbitrary name of this thing. Wikipedia says it is the oracle goddess of justice in Ghana. That still does not hint at all at personal information servers, but seems quite nice. Ok. I gave up on software having purpose-related names ages ago.

# apt-get install akonadi-server akonadi-backend-postgresql

Akonadi wants a SQL database as a backend. By default it uses MySQL, but I had enough of MySQL ages ago.

I tried SQLite but the performance with it is terrible. Terrible as in, it takes 2 minutes between adding a calendar entry and having it show up in the calendar. I'm fascinated by how Akonadi manages to use SQLite so badly, but since I currently just want to get a job done, next in line is PostgreSQL:

# su - postgres
$ createuser enrico
$ psql postgres
postgres=# alter user enrico createdb;

Then as enrico:

$ createdb akonadi-enrico
$ cat <<EOT > ~/.config/akonadi/akonadiserverrc
[%General]
Driver=QPSQL

[QPSQL]
Name=akonadi-enrico
StartServer=false
Host=
Options=
ServerPath=
InitDbPath=
EOT
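
Then the Akonadi server needs a restart so it picks up the new configuration; something like:

$ akonadictl restart
$ akonadictl status   # should report the server as running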

I can now use kontact to connect Akonadi to my CalDAV server and it works nicely, both with calendar and with addressbook entries.

KDE has at least two clients for Akonadi: Kontact, which is a kitchen sink application similar to Evolution, and KOrganizer, which is just the calendar and scheduling component of Kontact.

Both work decently, and KOrganizer has a pretty decent startup time. I now have a usable desktop PIM application that is synced with my phone. W00T!

Next step is to port my swift little calendar display tool to use Akonadi as a back-end.

Peter Eisentraut: Listing screen sessions on login

17 February, 2015 - 08:00

There is a lot of helpful information about screen out there, but I haven’t found anything about this. I don’t want to “forget” any screen sessions, so I’d like to be notified when I log into a box and there are screens running for me. Obviously, there is screen -ls, but it needs to be wrapped in a bit of logic so that it doesn’t annoy when there is no screen running or even installed.

After perusing the screen man page a little, I came up with this for .bash_profile or .zprofile:

if which screen >/dev/null; then
    screen -q -ls
    if [ $? -ge 10 ]; then
        screen -ls
    fi
fi

The trick is that -q in conjunction with -ls gives you exit codes about the current status of screen.

Here is an example of how this looks in practice:

~$ ssh host
Last login: Fri Feb 13 11:30:10 2015 from 192.0.2.15
There is a screen on:
        31572.pts-0.foobar      (2015-02-15 13.03.21)   (Detached)
1 Socket in /var/run/screen/S-peter.

peter@host:~$ 

Matthew Garrett: Intel Boot Guard, Coreboot and user freedom

17 February, 2015 - 03:44
PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

But that's not all. Someone with physical access to your system could reflash your system. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.

This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

[1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.


Julien Danjou: Hacking Python AST: checking methods declaration

16 February, 2015 - 18:39

A few months ago, I wrote the definitive guide about Python method declaration, which was quite successful. I still fight every day in OpenStack to have the developers declare their methods correctly in the patches they submit.

Automation plan

The thing is, I really dislike doing the same things over and over again. Furthermore, I'm not perfect either, and I miss a lot of these kinds of problems in the reviews I make. So I decided to replace myself with a program – a more scalable and less error-prone version of my brain.

In OpenStack, we rely on flake8 to do static analysis of our Python code in order to spot common programming mistakes.

But we are really pedantic, so we wrote some extra hacking rules that we enforce on our code. To that end, we wrote a flake8 extension called hacking. I really like these rules, and I even recommend applying them in your own projects. Though I might be biased, or a victim of Stockholm syndrome. Your call.

Anyway, it's pretty clear that I need to add a check for method declaration in hacking. Let's write a flake8 extension!

Typical error

The typical error I spot is the following:

class Foo(object):
    # self is not used, the method does not need
    # to be bound, it should be declared static
    def bar(self, a, b, c):
        return a + b - c


That would be the correct version:

class Foo(object):
    @staticmethod
    def bar(a, b, c):
        return a + b - c


This kind of mistake is not a show-stopper. It's just not optimized. Why you have to manually declare static or class methods might be a language issue, but I don't want to debate about Python misfeatures or design flaws.

Strategy

We could probably use some big magical regular expression to catch this problem. flake8 is based on the pep8 tool, which can do a line by line analysis of the code. But this method would make it very hard and error prone to detect this pattern.

However, it's also possible to do an AST-based analysis on a per-file basis with pep8, so that's the method I picked, as it's the most solid.

AST analysis

I won't dive deeply into Python AST and how it works. You can find plenty of sources on the Internet, and I even talk about it a bit in my book The Hacker's Guide to Python.

To check correctly if all the methods in a Python file are correctly declared, we need to do the following:

  • Iterate over all the statement nodes of the AST
  • Check that the statement is a class definition (ast.ClassDef)
  • Iterate over all the function definitions (ast.FunctionDef) of that class statement to check whether each is already declared with @staticmethod or not
  • If the method is not declared static, we need to check if the first argument (self) is used somewhere in the method

Flake8 plugin

In order to register a new plugin in flake8 via hacking, we just need to add an entry in setup.cfg:

[entry_points]
flake8.extension =
    […]
    H904 = hacking.checks.other:StaticmethodChecker
    H905 = hacking.checks.other:StaticmethodChecker


We register 2 hacking codes here. As you will notice later, we are actually going to add an extra check in our code for the same price. Stay tuned.

The next step is to write the actual plugin. Since we are using an AST based check, the plugin needs to be a class following a certain signature:

@core.flake8ext
class StaticmethodChecker(object):
    def __init__(self, tree, filename):
        self.tree = tree

    def run(self):
        pass


So far, so good and pretty easy. We store the tree locally; then we just need to use it in run() and yield the problems we discover, following pep8's expected signature, which is a tuple of (lineno, col_offset, error_string, code).

This AST is made for walking ♪ ♬ ♩

The ast module provides the walk function, which allows you to iterate easily over a tree. We'll use that to run through the AST. First, let's write a loop that ignores the statements that are not class definitions.

@core.flake8ext
class StaticmethodChecker(object):
    def __init__(self, tree, filename):
        self.tree = tree

    def run(self):
        for stmt in ast.walk(self.tree):
            # Ignore non-class
            if not isinstance(stmt, ast.ClassDef):
                continue


We still don't check for anything, but we know how to ignore statements that are not class definitions. The next step is to ignore what is not a function definition. We just iterate over the attributes of the class definition.

for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue


We're all set for checking the method, which is body_item. First, we need to check if it's already declared as static. If so, we don't have to do any further check and we can bail out.

for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue
        # Check that it has a decorator
        for decorator in body_item.decorator_list:
            if (isinstance(decorator, ast.Name)
                    and decorator.id == 'staticmethod'):
                # It's a static function, it's OK
                break
        else:
            # Function is not static, we do nothing for now
            pass


Note that we use the special for/else form of Python, where the else is evaluated unless we used break to exit the for loop.

for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue
        # Check that it has a decorator
        for decorator in body_item.decorator_list:
            if (isinstance(decorator, ast.Name)
                    and decorator.id == 'staticmethod'):
                # It's a static function, it's OK
                break
        else:
            try:
                first_arg = body_item.args.args[0]
            except IndexError:
                yield (
                    body_item.lineno,
                    body_item.col_offset,
                    "H905: method misses first argument",
                    "H905",
                )
                # Check next method
                continue


We finally added a check! We grab the first argument from the method signature. If that fails, we know there's a problem: you can't have a bound method without the self argument, so we raise the H905 code to signal a method that is missing its first argument.

Now you know why we registered this second pep8 code along with H904 in setup.cfg. We have here a good opportunity to kill two birds with one stone.

The next step is to check if that first argument is used in the code of the method.

for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue
        # Check that it has a decorator
        for decorator in body_item.decorator_list:
            if (isinstance(decorator, ast.Name)
                    and decorator.id == 'staticmethod'):
                # It's a static function, it's OK
                break
        else:
            try:
                first_arg = body_item.args.args[0]
            except IndexError:
                yield (
                    body_item.lineno,
                    body_item.col_offset,
                    "H905: method misses first argument",
                    "H905",
                )
                # Check next method
                continue
            for func_stmt in ast.walk(body_item):
                if six.PY3:
                    if (isinstance(func_stmt, ast.Name)
                            and first_arg.arg == func_stmt.id):
                        # The first argument is used, it's OK
                        break
                else:
                    if (func_stmt != first_arg
                            and isinstance(func_stmt, ast.Name)
                            and func_stmt.id == first_arg.id):
                        # The first argument is used, it's OK
                        break
            else:
                yield (
                    body_item.lineno,
                    body_item.col_offset,
                    "H904: method should be declared static",
                    "H904",
                )


To that end, we iterate using ast.walk again and look for the use of the same variable name (usually self, but it could be anything, like cls for @classmethod) in the body of the function. If it's not found, we finally yield the H904 error code. Otherwise, we're good.

Conclusion

I've submitted this patch to hacking, and, fingers crossed, it might be merged one day. If it's not, I'll create a new Python package with that check for flake8. The actual submitted code is a bit more complex, to take into account the use of the abc module, and includes some tests.

As you may have noticed, the code walks over the module AST several times. There might be a couple of optimizations to browse the AST in only one pass, but I'm not sure it's worth it considering the actual usage of the tool. I'll leave that as an exercise for the reader interested in contributing to OpenStack. 😉

Happy hacking!

The Hacker's Guide to Python

A book I wrote talking about designing Python applications, state of the art, advice to apply when building your application, various Python tips, etc. Interested? Check it out.

Vincent Sanders: To a child, often the box a toy came in is more appealing than the toy itself.

16 February, 2015 - 07:31
I think Allen Klein might not have been referring to me when he said that but I do seem to like creating boxes for my toys.

My Lenovo laptop has an Ultrabay; these are a way to easily swap optical and hard drives. They allow me to carry around additional storage and, provided I remembered to pack the drive, access optical media.
Over time I have acquired several additional hard drives housed in Ultrabay caddies. Generally I only need to access one at a time but increasingly I want to have more than one available.

Lenovo used to sell docking stations with multiple Ultrabays but since Series 3 was introduced this is no longer the case as the docks have been reduced to port replicators.

One solution is to buy a SATA to USB converter which lets you use the drive externally. However, once you have more than one drive this becomes somewhat untidy, not to mention all those unhoused drives on your desk become something of a hazard.

Recently, after another close call, I decided what I needed was a proper external enclosure to house all my drives. After some extensive googling I found nothing suitable ready to buy. Most normal people would give up at this point; I appear to be an abnormal person, so I got the CAD package out.

A few hours of design and a load of laser cutting later I came up with a four bay enclosure that now houses all my Ultrabay caddies.

The design was slightly evolved to accommodate the features of some older caddies and allow a pencil to be used to eject the drives (I put a square hole in the back).

The completed unit uses about £10 of plastic and takes 30 minutes to lasercut.

The only issue with the enclosure as manufactured is that Makespace ran out of black plastic stock and I had to use transparent to finish, so it is not in classic black as Lenovo intended.

As usual all the design files are publicly available from my design repo.

Antonio Terceiro: rmail: reviving upstream maintenance

15 February, 2015 - 22:37

It is always fun to write new stuff, and be able to show off that shiny new piece of code that just came out of your brilliance and/or restless effort. But the world does not spin based just on shiny things; for free software to continue making the world work, we also need the dusty, and maybe a little rusty, things that keep our systems together. Someone needs to make sure the rust does not take over, and that these venerable but useful pieces of code keep it together as the ecosystem around them evolves. As you know, Someone is probably the busiest person there is, so often you will have to take Someone’s job for yourself.

rmail is a Ruby library able to parse, modify, and generate MIME mail messages. While handling transitions of Ruby interpreters in Debian, it was one of the packages we always had to fix for new Ruby versions, to the point where the Debian package has accumulated quite a few patches. The situation became ridiculous.

It was considered to maybe drop it from the Debian archive, but dropping it would mean either also dropping feed2imap and sup, or porting both to another mail library.

Since doing this type of port is always painful, I decided instead to do something about the sorry state in which rmail was on the upstream side.

The reasons why it was not properly maintained upstream do not matter: people lose interest, move on to other projects, are not active users anymore; that is normal in free software projects, and instead of blaming upstream maintainers in any way we need to thank them for writing us free software in the first place, and step up to fix the stuff we use.

I got in touch with the people listed as owner for the package on rubygems.org, and got owner permission, which means I can now publish new versions myself.

With that, I cloned the repository where the original author had imported the latest code uploaded to rubygems and had started to receive contributions, but that repository had been inactive for more than one year. It had already received some contributions from the sup developers which never made it into a new rmail release, so the sup people started using their own fork called “rmail-sup”.

Already in my repository, I have imported all the patches that still made sense from the Debian repository, did a bunch of updates, mainly to modernize the build system, and did a 1.1.0 release to rubygems.org. This release is pretty much compatible with 1.0.0, but since I did not test it with Ruby versions older than the one in my work laptop (2.1.5), I bumped the minor version number as a warning to prospective users still on older Ruby versions.

In this release, the test suite passes 100% clean, which always gives my mind a lot of comfort:

$ rake
/usr/bin/ruby2.1 -I"lib:." -I"/usr/lib/ruby/vendor_ruby" "/usr/lib/ruby/vendor_ruby/rake/rake_test_loader.rb" "test/test*.rb"
Loaded suite /usr/lib/ruby/vendor_ruby/rake/rake_test_loader
Started
...............................................................................
...............................................................................
........

Finished in 2.096916712 seconds.

166 tests, 24213 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
100% passed

79.16 tests/s, 11546.95 assertions/s

And in the new release I have just uploaded to the Debian experimental suite (1.1.0-1), I was able to drop all of the patches and just use the upstream source as is.

So that’s it: if you use rmail for anything, consider testing version 1.1.0-1 from Debian experimental, or 1.1.0 from rubygems.org if you’re into that, and report any bugs to the github repository (https://github.com/terceiro/rmail). My only commitment for now is to keep it working, but if you want to add new features I will definitely review and merge them.

Jingjie Jiang: Less is More

15 February, 2015 - 21:13
Things haven’t gone well recently.

I have been working on refactoring stuff for the past two or three weeks. At first, I found it much more exciting and interesting than merely fixing minor bugs. So I worked on most parts of the codebase, and made lots of changes.

It took me much more time and energy to do all this stuff. But sadly, the extra energy resulted in a more broken codebase. It seems all these efforts were a waste, since the code can no longer be merged. It was very frustrating.

Get back on track

Well, I guess such trials and failures are just inevitable on the way to becoming an experienced developer. Despite all the detours and dismay I have gone through, my internship must get back on track.

I am now more realistic. My first and foremost task is to GET THINGS DONE by focusing on small and doable changes. Although the improvement may seem LESS, it means MORE to have a not-so-perfect finished product (in my view) than a to-be-perfect mess.

Journey continues.

Sergio Talens-Oliag: Retooling

15 February, 2015 - 16:45

I haven't blogged for a long time, but I've decided that I'm going to try to write again, at least about technical stuff.

My plan was to blog about the projects I've been working on lately, the main one being the setup of the latest version of Kolab with the systems we already have at work, but I'll do that on the next days.

Today I'm just going to make a list of the tools I use on a daily basis and my plans to start using additional ones in the near future.

Shells, Terminals and Text Editors

I do almost all my work on Z Shell sessions running inside tmux; for terminal emulation I use gnome-terminal on X, VX ConnectBot on Android systems and iTerm2 on Mac OS X.

For text editing I've been using Vim for a long time (even on Mobile devices) and while I'm aware I don't know half of the things it can do, what I know is good enough for my day to day needs.

In the past I also used Emacs as a programming editor and as my main tool to write HTML, SGML and XML, but since I haven't really needed an IDE for ages and I mainly use Lightweight Markup Languages, I haven't used it in a long time (I briefly tried Org mode, but for some reason I ended up leaving it).

Documentation formats and tools

For a long time I've been an advocate of Lightweight Markup Languages; I started out with LaTeX and Lout, then moved to SGML/XML formats (LinuxDoc and DocBook), and finally to plain-text-based formats.

I started using Wiki formats (parsewiki) and soon moved to reStructuredText; I also use other markup languages like Markdown (for this blog, aka ikiwiki) and tried MultiMarkdown as a replacement for reStructuredText for general use, but as I never liked Markdown syntax, I didn't like an extended version of it either.

While I've been using ReStructuredText for a long time, I recently found Asciidoctor and the Asciidoc format and I guess I'll be using it instead of rst whenever I can (I still need to try the slide backends and conversions to ODT, but if that works I guess I'll write all my new documents using Asciidoc).

Programming languages

I'm not a developer, but I read and patch a lot of free software code written in a lot of different programming languages (I wouldn't be able to write whole programs in most of them, but thanks to Stack Overflow I'm usually able to fix what I need).

Anyway, I'm able to program in some languages; I write a lot of shell scripts and I go for Python and C when I need something more complicated.

In the near future I plan to read about JavaScript programming and nodejs (I'll probably need it at work), and I have already started looking at Haskell (I guess it was time to learn about functional programming, and after reading about it, it looks like Haskell is the way to go for me).

Version Control

For a long time I've been a Subversion user, at least for my own projects, but it seems that everything has moved to git now, so I have finally started to use it (I even opened a github account) and plan to move all my personal Subversion repositories at home and at work to git, including moving all my Debian packages from svn-buildpackage to git-buildpackage.

Further Reading

With the previous plans in mind, I've started reading a couple of interesting books:

  • Learn You a Haskell by Miran Lipovača (http://learnyouahaskell.com/)
  • Pro Git written by Scott Chacon and Ben Straub (http://git-scm.com/book/en/v2)

Now I just need to get enough time to finish reading them ... ;)

Clint Adams: Now with Stripe and Twitter too

15 February, 2015 - 12:17

I just had the best Valentine's Day ever.

Matthew Palmer: The Vicious Circle of Documentation

15 February, 2015 - 07:00

Ever worked at a company (or on a codebase, or whatever) where it seemed like, no matter what the question was, the answer was written down somewhere you could easily find it? Most people haven’t, sadly, but they do exist, and I can assure you that it is an absolute pleasure.

On the other hand, practically everyone has experienced completely undocumented systems and processes, where knowledge is shared by word-of-mouth, or lost every time someone quits.

Why are there so many more undocumented systems than documented ones out there, and how can we cause more well-documented systems to exist? The answer isn’t “people are lazy”, and the solution is simple – though not easy.

Why Johnny Doesn’t Read

When someone needs to know something, they might go look for some documentation, or they might ask someone else or just guess wildly. The behaviour “look for documentation” is often reinforced negatively, by the result “documentation doesn’t exist”.

At the same time, the behaviours “ask someone” and “guess wildly” are positively reinforced, by the results “I get my question answered” and/or “at least I can get on with my work”. Over time, people optimise their behaviour by skipping the “look for documentation” step, and just go straight to asking other people (or guessing wildly).

Why Johnny Doesn’t Write

When someone writes documentation, they’re hoping that people will read it and not have to ask them questions in order to be productive and do the right thing. Hence, the behaviour “write documentation” is negatively reinforced by the results “I still get asked questions”, and “nobody does things the right way around here, dammit!”

Worse, though, is that there is very little positive reinforcement for the author: when someone does read the docs, and thus doesn’t ask a question, the author almost certainly doesn’t know they dodged a bullet. Similarly, when someone does things the right way, it’s unlikely that anyone will notice. It’s only the mistakes that catch the attention.

Given that the experience of writing documentation tends to skew towards the negative, it’s not surprising that eventually, the time spent writing documentation is reallocated to other, more utility-producing activities.

Death Spiral

The combination of these two situations is self-reinforcing. While a suitably motivated reader might start off conscientiously looking for documentation, and an author might initially be keen to document their work fully, over time the “reflex” will be for readers to just go ask someone, because “there’s never any documentation!”, and for authors to not write documentation because “nobody bothers to read what I write anyway!”.

It is important to recognise that this iterative feedback loop is the “natural state” of the reader/author ecosystem, resulting in something akin to thermodynamic entropy. To avoid the system descending into chaos, energy needs to be constantly applied to keep the system in order.

The Solution

Effective methods for avoiding the vicious circle can be derived from the things that cause it. Change the forces that apply themselves to readers and authors, and they will behave differently.

On the reader’s side, the most effective way to encourage people to read documentation is for it to consistently exist. This means that those in control of a project or system mustn’t consider something “done” until the documentation is in a good state. Patches shouldn’t be landed, and releases shouldn’t be made, unless the documentation is altered to match the functional changes being made. Yes, this requires discipline, which is just a form of energy application to prevent entropic decay.

Writing documentation should be an explicit and well-understood part of somebody’s job description. Whoever is responsible for documentation needs to be given the time to do it properly. Writing well takes time and mental energy, and that time needs to be factored into the plans. Never forget that skimping on documentation, like short-changing QA or customer support, is a false economy that will cost more in the long term than it saves in the short term.

Even if the documentation exists, though, some people are going to tend towards asking people rather than consulting the documentation. This isn’t a moral failing on their part, but only happens when they believe that asking someone is more beneficial to them than going to the documentation. To change the behaviour, you need to change the belief.

You could change the belief by increasing the “cost” of asking. You could fire (or hellban) anyone who ever asks a question that is answered in the documentation. But you shouldn’t. You could yell “RTFM!” at everyone who asks a question. Thankfully that’s one acronym that’s falling out of favour.

Alternately, you can reduce the “cost” of getting the answer from the documentation. Possibly the largest single productivity boost for programmers, for example, has been the existence of Google. Whatever your problem, there’s a pretty good chance that a search or two will find a solution. For your private documentation, you probably don’t have the power of Google available, but decent full-text search systems are available. Use them.
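
Even a blunt recursive grep over a checkout of the docs goes a long way (the path and search term below are made up):

# List documentation files that mention the topic, case-insensitively
grep -ril --include='*.md' 'certificate renewal' /srv/docs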

Finally, authors would benefit from more positive reinforcement. If you find good documentation, let the author know! It requires a lot of effort (comparatively) to look up an author’s contact details and send them a nice e-mail. The “like” button is a more low-energy way of achieving a similar outcome – you click the button, and the author gets a warm, fuzzy feeling. If your internal documentation system doesn’t have some way to “close the loop” and let readers easily give authors a bit of kudos, fix it so it does.

Heck, even if authors just know that a page they wrote was loaded N times in the past week, that’s better than the current situation, in which deafening silence persists, punctuated by the occasional plaintive cry of “Hey, do you know how to…?”.

Do you have any other ideas for how to encourage readers to read, and for authors to write?

John Goerzen: Willis Goerzen – a good reason to live in Kansas

15 February, 2015 - 05:55

From time to time, people ask me, with a bit of a disbelieving look on their face, “Tell me again why you chose to move to Kansas?” I can explain something about how people really care about their neighbors out here, how connections through time to a place are strong, how the people are hard-working, achieve great things, and would rather not talk about their achievements too much. But none of this really conveys it.

This week, as I got word that my great-uncle Willis Goerzen passed away, it occurred to me that the reason I live in Kansas is simple: people like Willis.

Willis was a man that, through and through, simply cared. For everyone. He had hugs ready anytime. When I used to see him in church every Sunday, I’d usually hear his loud voice saying, “Well John!” Then a hug, then, “How are you doing?” When I was going through a tough time in life, hugs from Willis and Thelma were deeply meaningful. I could see how deeply he cared in his moist eyes, the way he sought me out to offer words of comfort, reassurance, compassion, and strength.

Willis didn’t just defy the stereotypes on men having to hide their emotions; he also did so by being just gut-honest. Americans often ask, in sort of a greeting, “How are you?” and usually get an answer like “fine”. If I asked Willis “How are you?”, I might hear “great!” or “it’s hard” or “pretty terrible.” In a place where old-fashioned stoicism is still so common, this was so refreshing. Willis and I could have deep, heart-to-heart conversations or friendly ones.

Willis also loved to work. He worked on a farm, in construction, and then for many years doing plumbing and heating work. When he retired, he just kept on doing it. Not for the money, but because he wanted to. I remember calling him up one time about 10 years ago, asking if he was interested in helping me with a heating project. His response: “I’ll hitch up the horses and be right there!” (Of course, he had no horses anymore.) When I had a project to renovate what had been my grandpa’s farmhouse (that was Willis’s brother), he did all the plumbing work. He told me, “John, it’s great to be retired. I can still do what I love to do, but since I’m so cheap, I don’t have to be fast. My old knees can move at their own speed.” He did everything so precisely, built it so sturdy, that I used to joke that if a tornado struck the house, the house would be a pile of rubble but the ductwork would still be fine.

One of his biggest frustrations about ill health was being unable to work, and in fact he had a project going before cancer started to get the best of him. He was quite distraught that, for the first time in his life, he didn’t properly finish a job.

Willis installed a three-zone system (using automated dampers to send heat or cool from a single furnace/AC into only the parts of the house where it was needed) for me. He had never done that before. The night Willis and his friend Bob came over to finish the setup was one to remember. The two guys, both in their 70s, were figuring it all out, and their excitement was catching. By the time the evening was over, I certainly was more excited about thermostats than I ever had been in my life.

I heard a story about him once – he was removing some sort of noxious substance from someone’s house. I forget what it was — whatever it was, it had pretty bad long-term health effects. His comment: “Look, I’m old. It’s not going to be this that does me in.” And he was right.

In his last few years, Willis started up a project that only Willis would dream up. He invited people to bring him all their old and broken down appliances and metal junk – air conditioners, dehumidifiers, you name it. He carefully took them apart, stripped them down, and took the metals into a metal salvage yard. He then donated all the money he got to a charity that helped the poor, and it was nearly $5000.

Willis had a sense of humor about him that he somehow deployed at those perfect moments when you least expected it. Back in 2006, before I had moved into the house that had been grandpa’s, there was a fire there. I lost two barns (one was the big old red one with lots of character) and a chicken house. When I got out there to see what had happened, Willis was already there. It was quite the disappointment for me. Willis asked me if grandpa’s old manure spreader was still in the chicken house. (Cattle manure is sometimes used as a fertilizer.) This old manure spreader was horse-drawn. I told him it was, and so it had burned up. So Willis put his arm around me, and said, “John, do you know what we always used to call a manure spreader?” “Nope.” “Shit-slinger!” That was so surprising I couldn’t help but break out laughing. Willis was the only person that got me to laugh that day.

In his last few years, Willis battled several health ailments. When he was in a nursing home for a while due to complications from knee surgery, I’d drop by to visit. And lately as he was declining, I tried to drop in at his house to visit with Willis and Thelma as much as possible. Willis was always so appreciative of those visits. He always tried to get in a hug if he could, even if Thelma and I had to hold on to him when he stood up. He would say sometimes, “John, you are so good to come here and visit with me.” And he’d add, “I love you.” As did I.

Sometimes when Willis was feeling down about not being able to work more, or not finishing a project, I told him how he was an inspiration to me, and to many others. And I reminded him that I visited with him because I wanted to, and that being able to do that meant as much to me as it did to him. I'm not sure if he ever could quite believe how deeply true that was, because his humble nature was a part of who he was.

My last visit earlier last week was mostly with Thelma. Willis was not able to be very alert, but I held his hand and made sure, that time, to tell him that I love and care for him. I'm not sure if he was able to hear, but I am sure that he didn't need to. Willis left behind a community of hundreds of people who love him and had their lives touched by his kind and inspirational presence.

Steinar H. Gunderson: Running Cisco management software in KVM

14 February, 2015 - 20:00

In terms of wireless, Cisco primarily makes hardware (access points), but they also have a relatively wide range of associated support software: In particular, WLC (Wireless Controller, the thing that all your Cisco APs talk to for centralized configuration/authentication/load balancing/etc.), PI (Prime Infrastructure, logging/management/inventory), and MSE (Mobility Services Engine, physical positioning management).

You can buy all of these as hardware appliances, but they're also lately available as virtual appliances—after all, they're just Linux machines with software on. (You will need a Cisco support contract to download them; then you will get a free 30-day trial if you don't have a license.) But of course, since this is ENTERPRISE and it is NETWORKING, it flat-out assumes that your virtualization solution of choice is VMware ESXi. Not VMware Player, not VirtualBox, not KVM, not Hyper-V. Of course you already have an ESXi box with ~24 GB spare RAM, no?

Well, I didn't, and I wanted to try all three of these anyway. First of all, I should add that this is quite obviously not supported by Cisco. You will not get any support, and you will not be able to buy a license against such VMs; it's only for learning and evaluation. So here's the hackery needed to get it to run under KVM/QEMU:

vWLC is easy; supposedly it's even sort-of supported. Just untar the .vmdk image, convert the inner image with qemu-img to qcow2, and run. Tada.
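
In practice that boils down to something like this (the file names are placeholders, since they change between releases):

# Unpack the appliance archive and convert the disk image
tar xvf vwlc.ova
qemu-img convert -O qcow2 vwlc-disk1.vmdk vwlc.qcow2
# Boot it under KVM; tune RAM and NICs to whatever the release notes ask for
qemu-system-x86_64 -enable-kvm -m 2048 -smp 1 \
  -drive file=vwlc.qcow2 -net nic -net user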

MSE is harder. You can install it (assuming you give it exactly 8192 MB of RAM; no more, no less), but half-way through the process it will freeze in strange ways. What's going on under the hood is that it tries to get the number of CPUs, and this fails with the default CPU map KVM presents through SMBIOS. The magic incantation you need to add is

-smbios type=1,product="VMware Virtual Platform" -smbios type=4,sock_pfx="Proc"

And add -cpu host for good measure.
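
Put together, the MSE invocation ends up looking roughly like this (the disk image name is a placeholder; the SMBIOS overrides are the part that matters):

qemu-system-x86_64 -enable-kvm -cpu host -m 8192 -smp 4 \
  -smbios type=1,product="VMware Virtual Platform" \
  -smbios type=4,sock_pfx="Proc" \
  -drive file=mse.qcow2 -net nic -net user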

PI, on the other hand, took quite a while. I will not go into details, but it turns out what you need is to modify the emulated BIOS from SeaBIOS to simulate the ESXi Option ROM:

( cat pc-bios/bios-256k.bin ; dd if=/dev/zero bs=655511 count=1 ; printf "VMware-56 4d 45 81 db 1a 63 8d-a9 45 65 c1 af f3 a1 a1\0" ; dd if=/dev/zero bs=130866 count=1 ) > mybios.bin

and then start with -bios mybios.bin.

Now, after all of this is done and the installation has finished, you can even get a root shell from the appliance, upgrade the kernel (find a more modern CentOS kernel from somewhere), modify the initrd script to load virtio_blk and virtio_pci, and then use a virtio disk instead of IDE. (virtio networking works out of the box.)
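
With the guest side prepared, the disk can be switched over with flags along these lines (the image name is a placeholder, and memory/SMP flags are left out for brevity):

# Same as before, but with the disk (and optionally the NIC) on virtio
qemu-system-x86_64 -enable-kvm -cpu host -bios mybios.bin \
  -drive file=pi.qcow2,if=virtio \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0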

Easy, huh?

Wouter Verhelst: Docker

14 February, 2015 - 16:58

... is the new hype these days. Everyone seems to want to be part of it; even Microsoft wants to allow Docker to run on its platform. How they envision that is slightly beyond me, seeing as Docker is mostly a case of "run a bunch of LXC instances", which by definition can't happen on Windows. Presumably they'll just run a lot more VMs, then, which is a possible workaround. Or maybe Docker for Windows will be the same in concept, but not in implementation. I guess the future will tell.

As I understand the premise, the idea of Docker is that getting software to run on "all" distributions is a Hard Problem[TM], so in a Docker thing you just define that this particular stuff is meant to run on top of this and this and that environment, and Docker then compartmentalises everything for you. It should make things easier to maintain, and that's a good thing.

I'm not a fan. If the problem that Docker tries to fix is "making software run on all platforms is hard", then Docker's "solution" is "I give up, it's not possible". That's sad. Sure, having a platform which manages your virtualisation for you, without having to manually create virtual machines (or having to write software to do so) is great. And sure, compartmentalising software so that every application runs in its own space can help towards security, manageability, and a whole bunch of other advantages.

But having an environment which says "if you want to run this application, I'll set up a chroot with distribution X for you; if you want to run this other application, I'll set up a chroot with distribution Y for you; and if you want to run yet another application here, I'll set up a chroot with distribution Z for you" will, in the end, get you a situation where, if there's another bug in libc6 or libssl, you now have a nightmare trying to track down all the different versions in all the Docker instances to make sure they're all fixed. And while it may work perfectly well on the open Internet, if you're on a corporate network with a paranoid firewall and proxy, downloading packages from public mirrors is harder than just creating a local mirror instead. Which you now have to do not only for your local distribution of choice, but also for the distributions of choice of all the developers of the software you're trying to use. Which may result in more work than just trying to massage the software in question to actually bloody well work, dammit.

I'm sure Docker has a solution for some or all of the problems it introduces, and I'm not saying it doesn't work in practice. I'm sure it does fix some part of the "making software run on all platforms is hard" problem, and so I might even end up using it at some point. But from an aesthetic point of view, I don't think Docker is a good system.

I'm not very fond of giving up.

Junichi Uekawa: February.

14 February, 2015 - 08:12
February. Wow.


Creative Commons License — The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.