Planet Debian


Laura Arjona: Upgrading my laptop to Debian Jessie

21 July, 2014 - 04:41

Some days ago I decided to upgrade my laptop from stable to testing.

I had been trying Jessie for several months on my husband’s laptop, but that was a fresh install on a not-so-old machine, and we don’t have much software installed there.

On my netbook (Compaq Mini 110c), running stable, I had already installed Pumpa, Dianara and how-can-i-help from testing, and since the freeze is coming, I thought I could full-upgrade and use Jessie from now on, reporting my issues and helping to diagnose or fix them, if possible, before the freeze.

I keep Debian stable at work for my desktop and servers (well, some of them are still on oldstable, thanks LTS team!!), and I have testing on a laptop that I use as a clonezilla/drbl server (but I had issues; next week I’ll put some time into them, write up my findings here, and report bugs, if any).

So! Let’s go. Here I write up my experience and the issues that I found (very few! I’m not sure whether they are bugs, configuration problems or something else; I’ll keep an eye on them).

The upgrade

I pointed my /etc/apt/sources.list to jessie, then ran apt-get update and apt-get dist-upgrade. (With the servers I am much more careful: I read the release notes, upgrade guides and so on, or go straight for a fresh install, but with my laptop, I am too lazy.)
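Mechanically, the upgrade amounts to something like this (a sketch; it assumes the stable lines in sources.list say "wheezy", the release name at the time, and the whole thing needs root):

```shell
# Switch every suite reference from wheezy to jessie,
# then fetch the new package lists and upgrade everything.
sed -i 's/\bwheezy\b/jessie/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade
```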

I went to bed (wow, risky LArjona!) and when I got up to go to work, the laptop was waiting for me to confirm things like blocking root ssh access and restarting some services. Ok! The upgrade resumed… but I had to go to work and I wanted my laptop! Since all the packages were already downloaded, I closed the lid (doubly risky LArjona!), unplugged it, put everything in my bag, and caught the bus in time :)

On the bus, I opened the lid of my laptop again (crossing fingers!) and, perfect: the laptop had suspended and came back to life, and the upgrade just resumed with no problem. Wow! I love you Debian! After 15 minutes, I had to suspend again, since the bus arrived and I had to take the metro. On the metro, the upgrade resumed, and finished. I shut down my laptop and arrived at work.

Testing testing :)

During a lunch break, I opened my brand new laptop (the hardware is the same, but the software is totally renewed, so it’s brand new to me). I should say that I use Xfce, with some GNOME/GTK apps installed (gedit, cheese, evince, XChat…) and some others that use Qt or are part of the KDE project (Okular, Kile, QtLinguist, Pumpa, Dianara). I don’t know/care too much about desktops or tweaking mine: I just set the terminal and gedit to a black background (the Debian wallpaper is dark enough for me), made the font size a bit smaller to better use my low vertical resolution, and that’s all; I only configure something else if it really annoys me.

My laptop booted correctly and a nice, more modern LightDM greeted me. I logged in and everything worked ok, except for the issues that follow.

Network Manager and WPA2-enterprise wireless connections

I had to reconfigure some wireless connections in Network Manager. At work we use WPA2-enterprise, TTLS + PAP. I had stored my username and password in the connection, and Network Manager seemed to remember the username but not the password. No problem, I said, and typed it when asked, but… the “Save” or “OK” button was greyed out. I could not click it.

Then I went to edit the connections, and it was more or less the same: it seemed that I could edit, but not save, the (new) configuration. Finally, I removed the wireless connection and created it again, and everything worked like a charm.

I had to do this with both wireless networks at my university (both of them WPA2-enterprise TTLS + PAP). At home, I have WPA2 Personal, and I had no issues; everything worked ok.

This problem does not appear in a fresh install, since there are no old configs to keep.

Adblock Plus not working any more

I opened Iceweasel and began to see ads on the webpages I visited. What? I checked, and Adblock Plus was installed and activated… I reinstalled the package xul-ext-adblock-plus and it worked again.
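The reinstall described above is a one-liner (needs root; this only helps when, as here, the packaged extension broke during the upgrade):

```shell
# Re-unpack the packaged Adblock Plus extension for Iceweasel.
apt-get install --reinstall xul-ext-adblock-plus
```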

Strange display in programs based on Qt

When I opened Pumpa I noticed that the edges of the windows were too rough, as if it was not using a desktop theme. I asked a friend who uses Plasma and he suggested installing qt4-qtconfig and then selecting a theme for my Qt apps. It worked like a charm, but I find it strange that I didn’t need it before in stable. Maybe the default Xfce configuration in stable sets a theme, and the new one does not, so the Qt apps are left “barefoot”.
With qtconfig I chose a GTK+ GUI style for my Qt apps, and then they looked similar to what I had in stable (frankly, I cannot say whether it was “similar” or “exactly the same”, but I didn’t find them strange as before, so I’m fine).
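A sketch of the fix described above (the install needs root; in Debian the tool that qt4-qtconfig ships is started as qtconfig-qt4):

```shell
apt-get install qt4-qtconfig   # the Qt 4 settings tool
qtconfig-qt4                   # pick "GTK+" as the GUI style, then save
```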

Strange display in programs from GNOME

Well, this is not a Jessie problem; it’s just that some programs adopted the new GNOME appearance, and since I’m on Xfce, not GNOME, they look a bit strange (no menu integration, and so on). I am not sure that I can run GNOME (fallback? classic?) on my 1 GB RAM laptop; I have to investigate whether I can tweak it to use less memory.

I’m not very tied to Xfce, and in fact it does not look so light (well, on top of it I don’t run light programs: I run Iceweasel, Icedove, LibreOffice, and some others). At work I use GNOME on my desktop, but with GNOME Shell, not the fallback or classic modes, so I’m thinking about giving MATE a chance, or LXDE a second chance. We’ll see.

Issues when opening the lid (waking up from suspend)

This is the strangest thing I found in the migration, and the most dangerous one, I think.

As I said before, I don’t tweak my desktop much if it works with the default configuration, and I’m not sure that I know the differences between suspend, hibernate, hard disk disconnection and so on. In stable, when I closed the lid of my laptop, it just shut down the screen; then I heard something like the system going to suspend, and after some seconds, the hard disk and fans stopped, the wireless LED turned off, and the power LED began to blink. Ok. When I opened the lid, it woke up by itself (the power LED stayed on, the wireless LED turned on), and when I tapped the touchpad or typed anything, the screen came back, with xscreensaver asking for my password. Just sometimes, when the screen was turning on, I could see my desktop for less than a second before xscreensaver turned the background black and asked for the password.

Since I migrated to Jessie, I’m experiencing a different behavior. When I close the lid, the laptop behaves the same. When I open the lid, the laptop behaves the same, but when I type or tap the touchpad and xscreensaver comes up to ask for the password, before I can type it, the laptop just suspends again (or hibernates, I’m not sure), and I have to press the power button to bring it back to life (then I see xscreensaver again asking for the password, I type it, and my desktop is there, the same as I left it when I closed the lid).

Strange, isn’t it?

I have tried suspending my laptop directly from the menu, and it comes to the same state in which I have to press the power button to bring it back to life, but then no xscreensaver password is required (which is doubly strange, IMHO).

Things I miss in Jessie

Well, until now, the only thing I miss in Jessie is the software center. I rarely use it (I love apt), but I think it does a good job of easing the installation of programs in Debian for people coming from other operating systems (especially after smartphones and their copied software stores became popular).

I hope the maintainer can upload a new version before the freeze, so that it makes it into the release. I’ll try to contact him.


I have a Debian stable laptop at work (this one with Xfce + GNOME); I’ll try to upgrade it and see whether I run into the same problems I noticed on mine. Then I’ll check the corresponding packages for open bugs, and if there are none, report them to their maintainers.

I have to review the wiki pages related to the Jessie desktop theme selection; I think they wanted the wallpaper to be in before the freeze. Maybe I can help with publicity about that, handle the voting and so on. I like Joy, but it’s time to change a bit; fresh air into the room!

Filed under: My experiences and opinion Tagged: Contributing to libre software, Debian, English, Free Software, Moving into free software

John Goerzen: Beautiful Earth

21 July, 2014 - 03:04

Sometimes you see something that takes your breath away, maybe even makes your eyes moist. That happened when I saw this photo one morning:

Photography has been one of my hobbies since I was a child, and I’ve enjoyed it all these years. Recently I was inspired by the growing ease of aerial photography using model aircraft, and now I can fly two short-range RC quadcopters. That photo came from the first one, and despite being from a low-res 1280×720 camera, the image of our home in the yellow glow of sunrise brought a deep feeling of beauty and peace.

Somehow seeing our home surrounded by the beauty of the immense wheat fields and green pastures drives home how small we all are in comparison to the vastness of the earth, and how lucky we are to inhabit this beautiful planet.

As the sun starts to come up over the pasture, the only way you can tell the height of the grass at 300ft is to see the shadow it makes on the mowed pathway Laura and I use to get down to the creek.

This is a view of our church in a small town nearby — the church itself is right in the center of the photo. Off to the right, you see the grain elevators that can be seen for miles across the Kansas prairie, and of course the fields are never all that far off in the background.

Here you can see the quadcopter taking off from the driveway:

And here it is flying over my home church out in the country:

That’s the country church, at the corner of two gravel roads – with its lighted cross facing east that can be seen from a mile away at night. To the right is the church park, and the green area along the road farther back is the church cemetery.

Sometimes we get in debates about environmental regulations, politics, religion, whatever. We hear stories of missiles, guns, and destruction. It is sad, this damage we humans inflict on ourselves and our earth. Our earth — our home — is worth saving. Its stunning beauty from all its continents is evidence enough of that. To me, this photo of a small corner of flat Kansas is proof enough that the home we all share deserves to be treated well, and saved so that generations to come can also get damp eyes viewing its beauty from a new perspective.

Thomas Goirand: sysvinit not sending output to all consoles

21 July, 2014 - 00:10

I spent many, many hours trying to understand why I couldn’t have both “nova console-log” showing me the output of the log, AND have the OpenStack dashboard (eg: horizon) console to work at the same time. Normally, this is done very easily, by passing multiple times the console= parameter to the Linux kernel as follow:

console=tty0 console=ttyS0,115200
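On a Debian system this kernel command line is typically set in /etc/default/grub (a sketch, assuming GRUB is the bootloader; the exact console names depend on your hardware):

```shell
# /etc/default/grub -- with multiple console= entries, the last one
# listed becomes /dev/console.  Run update-grub after editing.
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
```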

But it never worked. It was always the last console= entry that was taken into account by sysvinit (or, shall I say, bootlogd). Spending hours trying to figure out what the correct kernel command line would be didn't help. Then this week-end, by the pure chance of being subscribed to the sysvinit bug reports, I finally found out. We've had this bug in Debian for more than 10 years:

And it has the patch! It just feels so lame that the issue has been pending since 2003, with a patch since 2006, and nobody even tried to get it into Debian. I tried the Wheezy patch in the above bug report, and then tadaaaaaa! I finally had both the "nova console-log" (eg: ttyS0) console output and the interactive tty0 working on my Debian cloud image. I have produced a fixed version of the sysvinit package for Wheezy, if anyone wants to try it:

This doesn't affect only the cloud image use case. Let's say you have a server. If it's a modern server, you probably have IPMI 2.0 on it. While having access through the integrated KVM-over-IP may be nice, watching the boot process through serial console redirection is often a lot snappier than the (often VNC-based) video output, and it doesn't require Java. Too often, Java is a requirement for these nasty IPMI web interfaces (that's the case for at least: Dell DRAC, Supermicro IPMI, and HP iLO). Well, it should now be possible to just use ipmitool to debug the server boot process, or to go fix stuff in the single-user interactive shell, AND keep the "normal" video output! :)
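For instance, with the serial console working, something like this attaches you to the server's boot output over the network (hypothetical BMC address and credentials):

```shell
# IPMI 2.0 serial-over-LAN: watch the boot process remotely, no Java needed.
ipmitool -I lanplus -H bmc.example.com -U admin -P secret sol activate
```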

But keeping this fix private doesn't help much. I would really love to get this fixed within Debian. So I have sent the patch (which needed a very small rebase) to the Git repository of sysvinit. I of course tested it in Sid too, though only under a Xen virtual machine; I see no reason why it would work there and not elsewhere. That being said, I would welcome more testing, given the high profile of sysvinit (everyone uses/needs it, and I wouldn't like to carry alone the unhappiness of having no boot log). Please do test it from the sysvinit git, before it's even uploaded to Sid. Also, these days sysvinit often gets uploaded to Experimental first; that will probably also be the case for version 2.88dsf-56.

If it works well and nobody complains about the patch, maybe it’d be worth adding it to Wheezy as well (though that decision is of course left to the release team once the fix reaches Jessie).

DebConf team: Talks review and selection process. (Posted by René Mayorga)

20 July, 2014 - 22:10

Today we finished the talk selection process. We are very grateful to everyone who decided to submit talks and events for DebConf14.

If you have submitted an event, please check your email :). If you have not received any confirmation regarding your talk status, please contact us on

During the selection process, we bore in mind the number of talk slots available during the conference, as well as maintaining a balance among the different submitted topics. We are pleased to announce that we received a total of 115 events, of which 80 have been approved (69%). Approval means your event will be scheduled during the conference and is guaranteed video coverage.

The list of approved talks can be found on the following link:

If you got an email saying your talk has been approved, and your talk is not listed, don’t panic. Check the status in summit and make sure to select a track; if you have track suggestions, please mail us and tell us about them.

This year, we also expect to have a sort of “unconference” schedule. This will take place during the designated “hacking time”. During that time the talk rooms will be empty, and ad hoc meetings can be scheduled on-site while we are at the conference. The method for booking a room for your ad hoc meeting will be decided and announced later, but is expected to be flexible (i.e. an open scheduling board / booking one day or less in advance). Please don’t abuse the system: bear in mind that space is limited, and only book your event if you can gather enough people to work on your idea.

Please make sure to read the email regarding your talk. :) and prepare yourself.

Time is ticking and we will be happy to meet you in Portland.

Steve Kemp: Did you know xine will download and execute scripts?

20 July, 2014 - 04:48

Today I was poking around the source of Xine, the well-known media player. During the course of this poking I spotted that Xine has skin support - something I've been blissfully ignorant of for many years.

How do these skins work? You bring up the skin-browser, by default this is achieved by pressing "Ctrl-d". The browser will show you previews of the skins available, and allow you to install them.

How does Xine know what skins are available? It downloads the contents of:

NOTE: This is an insecure URL.

The downloaded file is a simple XML thing, containing references to both preview-images and download locations.

For example the theme "Sunset" has the following details:

  • Download link:
  • Preview link:

If you choose to install the skin, the Sunset.tar.gz file is downloaded via HTTP, extracted, and the shell-script is executed, if present.

So if you control DNS on your LAN you can execute arbitrary commands if you persuade a victim to download your "corporate xine theme".

Probably a low-risk attack, but still a surprise.

Jo Shields: Transition tracker

20 July, 2014 - 03:35

Friday was my last day at Collabora, the awesome Open Source consultancy in Cambridge. I’d been there more than three years, and it was time for a change.

As luck would have it, that change came in the form of a job offer 3 months ago from my long-time friend in Open Source, Miguel de Icaza. Monday morning, I fly out to Xamarin’s main office in Boston for just over a week of induction and face time with my new co-workers, as I take on the title of Release Engineer.

My job is to make sure Mono on Linux is a first-class citizen, rather than the best-effort it has been since Xamarin was formed from the ashes of the Attachmate/Novell deal. I’m thrilled to work full-time on what I already do as community work – including making Mono great on Debian/Ubuntu – and hope to form new links with the packaging communities in other major distributions. And I’m delighted that Xamarin has chosen to put its money where its mouth is and fund continued Open Source development around Mono.

If you’re in the Boston area next week or the week after, ping me via the usual methods!

Vasudev Kamath: Stop messing with my settings, Network Manager

20 July, 2014 - 03:09

I use a laptop with an Atheros wifi card driven by ath9k. I use hostapd to convert my laptop's wifi into an AP (access point) so I can share the network with my Nexus 7 and Kindle. This had been working fine for quite some time, until my recent update.

After a recent system update (I use Debian Sid), I couldn't for some reason convert my wifi into an AP so my devices could connect. I can't find anything in the logs nor in the hostapd debug messages that is useful to troubleshoot the issue. Every time I start the laptop, my wifi card is blocked by RF-KILL and I have to manually unblock it (both hard and soft). The script which I use to convert my wifi into an AP is below.

#!/bin/bash
# Usage: $1 = wifi interface (e.g. wlan0), $2 = uplink interface (e.g. eth0)

# Initial wifi interface configuration
# (the address and netmask values were lost from the original post; supply your own)
ifconfig "$1" up netmask
sleep 2

# start dhcp
sudo systemctl restart dnsmasq.service

# NAT from the wifi interface out through the uplink
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables -t nat -A POSTROUTING -o "$2" -j MASQUERADE
iptables -A FORWARD -i "$1" -j ACCEPT

sysctl -w net.ipv4.ip_forward=1

# start hostapd
hostapd /etc/hostapd/hostapd.conf  &> /dev/null &
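The manual RF-KILL unblocking mentioned above can be done with the rfkill tool (a sketch; needs root):

```shell
rfkill list            # show which radios are soft- or hard-blocked
rfkill unblock wifi    # clear the soft block on all wifi radios
```

Note that a hard block usually comes from a physical switch or hotkey and cannot be cleared from software.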

I tried rebooting the laptop, and for some time I managed to convert my wifi into an AP. I noticed at the same time that Network Manager was not started once the laptop booted; yeah, this also started happening after the recent upgrade, which I guess is the black magic of systemd. After some time I noticed the wifi had gone down, and now I can't bring it up because it's blocked by RF-KILL. Checking the syslog, I noticed the following lines.

Jul 18 23:09:30 rudra kernel: [ 1754.891060] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): using nl80211 for WiFi device control
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): driver supports Access Point (AP) mode
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): new 802.11 WiFi device (driver: 'ath9k' ifindex: 10)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): exported as /org/freedesktop/NetworkManager/Devices/8
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): preparing device
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> devices added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> device added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0): no ifupdown configuration found.
Jul 18 23:09:33 rudra ModemManager[891]: <warn>  Couldn't find support for device at '/sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0': not supported by any plugin

Well I couldn't figure out much but it looks like NetworkManager has come up and after seeing interface mon.wlan0, a monitoring interface created by hostapd to monitor the AP goes mad and tries to do something with it. I've no clue what it is doing and don't have enough patience to debug that. Probably some expert can give me hints on this.

So as a last resort I purged NetworkManager completely from the system, settled back on good old wicd, and rebooted. After the reboot, the wifi card is happy and not blocked by RF-KILL, and now I can convert it to an AP and use it as long as I wish without any problems. Wicd is not a great tool, but it's good enough to get the job done, and it does only what it is asked, unlike NetworkManager.

So in short

NetworkManager please stop f***ing with my settings and stop acting oversmart.

Ian Donnelly: How-To: Write a Plug-in

19 July, 2014 - 03:49

Hi Everybody!

I wanted to write a how-to on writing an Elektra plug-in. Plug-ins are what allow Elektra to translate regular configuration files into the Elektra key database, and allow keys to be converted back into regular configuration files. For example, the hosts plug-in is used to transcribe a hosts file into a meaningful KeySet in the Elektra key database. This plug-in is what allows the kdb tool to work with hosts files, as in our mount tutorial.

While there is already a good number of plug-ins written for Elektra, we obviously don’t cover all the different types of configuration files. If you want to use the features of Elektra, you may have to write a plug-in for your type of configuration file. Luckily, it’s not hard to do. Over the next few weeks I will be writing a tutorial on how to write your very own plug-in, as well as explaining all the components of an Elektra plug-in. I will link the tutorials below as I finish them for easy reference, or if you already follow my blog, I am sure you will notice them as they get published.

Dominique Dumont: Looking for help to package Perl6, moar and others for Debian

19 July, 2014 - 01:32

Let’s face reality: I cannot find the time to properly maintain the Perl6-related packages for Debian. Given the recent surge in popularity of rakudo, it would be a shame to let these packages rot.

Instead of throwing in the towel, I’d rather call for help to maintain these packages. You don’t need to be a Debian Developer or Maintainer: I will gladly review and upload packages.

The following packages are looking for maintainer:

  • rakudo (currently RC buggy)
  • moar (needs to be packaged, some work has been done by Daniel Dehennin)
  • parrot (up to date)
  • nqp (needs to be updated; the current version no longer compiles on all archs)

Next step to help Perl6 on Debian is to join:

All the best


Tagged: debian, package, Perl6

Osamu Aoki: Debian does not boot ...Crucial/Micron RealSSD m4/C400/P400

18 July, 2014 - 23:50
Today, my PC did not boot into Debian as usual.  The BIOS could not find my /dev/sda and was looking for the netboot image.  I restarted my PC and got into the BIOS boot settings menu.  Hmmm… my first SSD (/dev/sda) was missing.  My second HDD (/dev/sdb) was there, but I did not put the GRUB boot loader there.  No wonder it did not boot.

I have a 32 GB USB3 stick with a full Debian system.  It is not a live CD image USB stick but an HDD-formatted and encrypted system.  Though it is not the fastest system, it is very light and usable.  I plugged it in and powered it up.  It booted OK, but /dev/sda was still missing.  While it booted, I saw "ata1: COMRESET failed (errno=-16)".  So this ata1 SSD cannot be accessed from the BIOS nor from Linux.  Sigh…

Looking around the web from the USB stick system, I saw some people saying that a loose serial ATA cable sometimes causes this message.  Since my PC is a laptop, it has no flexible cable but an on-board connector inside for the SSD.

Hoping my problem was just a bad connection, I cracked open the back panel of my PC.  The SSD looked fine.  I unplugged it from the connector and reinserted it.  After repeating this several times to be sure, I closed the back panel and booted.

It boots as expected into Debian.  Looks like everything is fine:
  SMART Error Log Version: 1
  No Errors Logged
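The SMART error log shown above can be read with smartctl, from the smartmontools package (needs root; /dev/sda as in this post):

```shell
smartctl -l error /dev/sda   # print the drive's SMART error log
```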

If you have a boot problem like mine, please reseat your SSD in its connector, like I did, before you panic.

Good luck.


PS: This Crucial/Micron RealSSD m4/C400/P400 M4-CT256M4SSD2 previously had a problem: a firmware bug made it read-only.  The firmware update fixed my Debian system on this SSD.  I could fix it without a Win*** OS, since the firmware update came as a bootable disk image.

Mario Lang: Crowdsourced accessibility: Self-made digital menus

18 July, 2014 - 22:50

Something straight out of the real world: menu cards in restaurants are not nice to deal with if you are blind. It is an old problem we have grown used to ignoring over time, but still something that can be quite nagging.

There are a lot of psychological issues involved in this one. Of course, you can ask for the menu to be read out to you by the staff. While they usually do their best, you end up missing out on some things most of the time.

First of all, depending on the current workload in the restaurant, the staff will usually try to save some time and not read everything to you. What they usually do is try to understand what type of meal you are interested in, and just read the choices from that category. While this can be considered a service in some situations (human preprocessing), there are situations where you will definitely miss a highlight on the menu that you would have liked to choose if you had known it was there.

And even if the staff decides to read the complete menu to you (which is rare), you are confronted with the seven-things-in-my-head-at-once problem. It is usually rather hard to decide among a list of more than 7 items, because our short-term memory is rather limited. What sighted restaurant-goers do is skip back and forth between the available options until they hit a decisive moment. True, that can take a while, but it is definitely a lot easier if you can perform "random access reads" of the list of choices yourself. However, if someone presents a substantial number of choices to you in a row, as sequential speech, you lose that random-access ability. You either remember every choice from the beginning and do your choosing mentally (if you have extraordinary mental abilities), or you end up asking the staff to read previous items aloud again. This can work, but usually it doesn't. At some point, you do not want to bother the staff anymore, and you even start to feel stupid for asking again and again, while this is something totally normal for every sighted person; it's just that "they" do their "random access browsing" on their own, so "they" have no need to feel bad about how long it takes them to decide, minus the typical social pressure that arises after a certain time for everyone, at least if you are dining in a group.

In very rare cases, you happen to meet staff that are truly "awake", doing their best not to let you feel that they might be pressed for time, and really taking as much time as necessary to help you make the perfect decision. This is rare, but when it happens, it is almost a magical moment: one of those moments where there are no "artificial" barriers between humans communicating. Anyway, I am drifting away.

The perfect solution to this problem is to provide random-access browsing of a restaurant menu with the help of digital devices. Trying to make braille menus available in all restaurants is not a realistically reachable goal. Menus go out of date and need changing, and getting a physical braille copy updated and reprinted is considerably more involved than with digital media. Restaurant owners will also likely not see the benefit of providing a braille menu for a very small circle of customers. With a digital online menu, that is a completely different story.

These days, almost every blind person in at least my social circles owns an iOS (or similar) device. These devices have speech synthesis and web browsers.

Of course, some restaurants, especially in urban areas, already have a menu online. I have found them manually with Google and friends sometimes in the past, which has already given me the ability to actually sit back and really comfortably choose among the available offerings myself, without having to bother a human, and without having to feel bad about (ab)using their time.

However, the case where a restaurant really has its menu online is still rather rare in my area. And it can be tedious to google for a restaurant website. Sometimes the website itself is only marginally accessible, which makes it even more frustrating to get a relaxed dinner experience.

I have recently discovered a location-based solution for the restaurant-menu problem. Foursquare offers the ability to provide a direct link to the menu in a restaurant entry. I figured, since all you need to do is write a single webpage where the (common) menu items are listed per restaurant, that I could begin to create restaurant menus for my favourite locations on my own. Well, not quite, but almost: I will sometimes need help from others to get the menu digitized, but that's just a one-time piece of work I can hopefully outsource :-). Once the actual content is in my INBOX, I create a nice HTML page listing the menu in a rather speech-browser-friendly way.

I began doing this today, with the menu of a restaurant just about 500 meters away from my apartment. Unterm goldenen Dachl now has a menu online, and the Foursquare change request to publish the corresponding URL is already pending. I don't fully understand how the Foursquare change review process works yet, but I hope the URL will be published in the coming days/weeks.

I am using Foursquare because it is the backend of a rather popular mobile navigation app for blind people, called Blindsquare. Blindsquare lets you comfortably use OpenStreetMap and Foursquare data to get an overview of your surroundings. If a food place has a menu entry listed in Foursquare, Blindsquare conveniently shows it to you and opens a browser if you tap it. So there is no need to actually search for the restaurant; you can just use the location-based search of Blindsquare to discover the restaurant entry and its menu link directly from within Blindsquare. Actually, you could even find a restaurant by accident and, with a little luck, find its menu by location, without even knowing what the restaurant is called. Isn't that neat? Yeah, that's how it is supposed to work; that's as much independence as you can get.

And it is, as the title suggests, crowdsourced accessibility. Because while it is nice if restaurant owners care to publish their menu themselves, if they haven't, you can do it yourself: either as a user of assistive technologies, to scratch your own itch, or as a friend of a person with a need for assistive technologies. Next time you go to lunch with your blind friend, consider making the menu available to them digitally in advance, instead of reading it. Other people will likely thank you for that, and you will have actually achieved something that day. And if you happen to put a menu online, make sure to submit a change request to Foursquare. Many blind people are using Blindsquare these days, which makes it super-easy for them to discover the menu.

Elena 'valhalla' Grandi: Reducing useless noise from irssi

18 July, 2014 - 20:24
Yesterday I missed a query from a friend (with the answer to a question *I* had asked in the first place) because it ended up in window 30-something and my statusbar was full of dim numbers from channels where people had just joined/left.

This morning I've set
activity_hide_level = JOINS PARTS QUITS
and my world is a much neater place :)

(I may have to add NICKS and possibly MODES, but they are rare enough and I'm still not sure I don't care about them, especially the latter.)
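For reference, the same change can be made interactively at the irssi prompt instead of editing the config file by hand (a sketch; don't forget the /save so it survives a restart):

```
/set activity_hide_level JOINS PARTS QUITS
/save
```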

Jonathan McDowell: On the state of Free VoIP

18 July, 2014 - 06:00

Every now and then I decide I'll try and sort out my VoIP setup. And then I give up. Today I tried again. I really didn't think I was aiming that high. I thought I'd start by making my email address work as a SIP address. Seems reasonable, right? I threw in the extra constraints of wanting some security (so TLS, not UDP) and a soft client that would work on my laptop (I have a Grandstream hardphone and would like an Android client as well, but I figure those are the easy cases while the "I have my laptop and I want to remain connected" case is a bit trickier). I had a suitable Internet connected VM, access to control my DNS fully (so I can do SRV records) and time to read whatever HOWTOs required. And oh my ghod the state of the art is appalling.
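The SRV-record part is at least straightforward. A zone-file sketch (hostnames, TTL and priorities are illustrative) that points SIP-over-TLS clients for user@example.org addresses at a server, per the RFC 3263 lookup rules:

```
; SIP over TLS (what TLS-capable clients should prefer)
_sips._tcp.example.org. 3600 IN SRV 10 0 5061 sip.example.org.
; optional plain-TCP fallback, at lower priority
_sip._tcp.example.org.  3600 IN SRV 20 0 5060 sip.example.org.
```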

Let's start with getting a SIP server up and running. I went with repro which seemed to be a reasonably well recommended SIP server to register against. And mostly getting it up and running and registering against it is fine. Until you try and make a TLS SIP call through it (to a test address). Problem the first; the StartCom free SSL certs are not suitable because they don't advertise TLS Client. So I switch to CACert. And then I get bitten by the whole question about whether the common name on the cert should be the server name, or the domain name on the SIP address (it's the domain name on the SIP address apparently, though that might make your SIP client complain).
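Whether a given certificate is usable for TLS client authentication can be checked up front with openssl's purpose report, which saves a round of debugging against the SIP server. A sketch, using a throwaway self-signed certificate as a stand-in for the CA-issued one:

```shell
# Generate a throwaway self-signed cert (stand-in for the real one)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/sip-key.pem -out /tmp/sip-cert.pem \
  -subj "/CN=example.org" 2>/dev/null

# List the purposes the cert is valid for; a cert that cannot act as a
# TLS client will show "SSL client : No" here
openssl x509 -in /tmp/sip-cert.pem -noout -purpose
```

A cert whose extendedKeyUsage omits TLS Web Client Authentication (the StartCom case described above) shows up as "SSL client : No" in this output.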

That gets the SIP side working. Of course RTP is harder. repro looks like it's doing the right thing. The audio never happens. I capitulate at this point, and install Lumicall on my phone. That registers correctly and I can call the test number and hear the time. So the server is functioning, it's the client that's a problem. I try the following (Debian/testing):

  • jitsi - Registers fine, seems to lack any sort of TURN/STUN support.
  • ekiga - No sign of TLS registration support.
  • twinkle - Not in testing. A recompile leads to no sign of an actual client starting up when executed.
  • sflphone - Fails to start (Debian bug #745695).
  • Empathy - Fails to connect. Doesn't show any useful debug.
  • linphone - No TLS connect (Debian bug #743494).

I'm bored at this point. Can I "dial" my SIP address from Lumicall? Of course not; I get a "Codecs incompatible" (SIP 488 Not Acceptable Here) response. I have no idea what that means. I seem to have all of the options on Lumicall enabled. Is it a NAT thing? A codec thing? Did I sacrifice the wrong colour of goat?

At some point during this process I get a Skype call from some friends, which I answer. Up comes a video call with them, their newborn, perfect audio, and no hassle. I have a conversation with them that doesn't involve me cursing technology at all. And then I go back to fighting with SIP.

Gunnar makes the comment about Skype creating a VoIP solution 10 years ago when none was to be found. I believe they're still the market leader. It just works. I'm running the Linux client, and they're maintaining it (a little behind the curve, but close enough), and it works for text chat, voice chat and video calls. I've spent half a day trying to get a Free equivalent working and failing. I need something that works behind NAT, because it's highly likely when I'm on the move that's going to be the case. I want something that lets my laptop be the client, because I don't want to rely on my mobile phone. I want my email address to also be my VoIP address. I want some security (hell, I'm not even insisting on SRTP, though I'd like to). And the state of the Open VoIP stack just continues to make me embarrassed.

I haven't given up yet, but I'd appreciate some pointers. And Skype, if you're hiring, drop me a line. ;)

Daniel Pocock: MH17 and the elephant in the room

18 July, 2014 - 03:09

Just last week, air passengers were told of intrusive new checks on their electronic devices when flying.

For years, passengers have also suffered bans on basic essentials like drinking water and excesses like the patting down of babies that even Jimmy Savile would find offensive.

Of course, all this is being done for public safety.

So if western leaders claim the safety and security of their citizens is really their number one priority, just how is it that a passenger aircraft can be flying through a war zone where two other planes were shot down this month? When it comes to aviation security, this really is the elephant in the room. The MH17 tragedy today demonstrates that terror always finds a way. It is almost like the terrorists can have their cake and eat it too: they force "free" countries to give up their freedoms and public decency and then they still knock the occasional plane out of the sky anyway.

History in the making?

It is 100 years since the assassination of Austrian Archduke Franz Ferdinand started World War I, and just over 50 years since the Cuban missile crisis. Will this incident achieve similar notoriety in history? The downing of MH17 may well have been a "mistake", but the casualties are real and very tragic indeed. I've flown with Malaysia Airlines many times, including on the same route as MH17, and feel a lot of sympathy for the people who have been affected.

Tiago Bortoletto Vaz: HOPE X ical for schedule

18 July, 2014 - 01:05

As the Adirondack (the MTL-NYC train line) is not Internet-friendly for RSS feeds, I can't take advantage of my ~11h of travel to check this huge schedule the way I want to (i.e. with a timetable view including room, description and speakers). HOPE X has just released a pdf and an xls (wtf??), but these contain only titles and rooms.

So I've coded an ics generator to process their feed. The result file is available at and should be up to date with the original RSS.

Craig Small: No more dspam, now what?

17 July, 2014 - 20:33

I was surprised at first to see that a long-standing bug in dspam had been fixed. That is, until I realised the mail was from the Debian ftp masters, and the reason the bug was being closed was that dspam was being removed from the Debian archive.




So, now what? What is a good replacement for dspam that is actually maintained? I don’t need anti-virus, because mutt just ignores those sorts of things and, besides, it doesn’t run too well on Debian. dspam basically used tokens to find common patterns of spam and ham, with you bouncing misses so it learnt from its mistakes. I’ve already got postgrey running for greylisting, so what I really need is something that does the Bayesian filtering.


Some initial comments:

  • bogofilter looks interesting and seems the closest thing so far
  • cluebringer aka policyd seems like a policy/blacklist type of spam filter, not Bayesian
  • I’ve heard crm114 is good but hard to use
  • spamassassin – I used to use this, not sure why I stopped

There really is only me on the mailserver, with a pretty light load, so there is no need to worry about efficiency. Not sure if it matters, but my MTA is postfix and I already use procmail for delivery.
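If bogofilter wins out, wiring it into that procmail delivery could look something like this (a sketch adapted from bogofilter's documented procmail recipe; flags and folder names should be checked against the local setup):

```
# ~/.procmailrc fragment: run each message through bogofilter, which
# adds an X-Bogosity header; -u auto-registers the message's tokens,
# -e keeps the exit code procmail-friendly, -p passes the message through
:0fw
| bogofilter -u -e -p

# file anything classified as spam into a separate maildir
:0:
* ^X-Bogosity: Spam
spam/
```

Training then happens mostly automatically via -u, with misclassifications corrected by re-feeding them with `bogofilter -Ns` (unmark ham, mark spam) or `bogofilter -Sn` (the reverse).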



Juliana Louback: Contribute a JSCommunicator Translation

17 July, 2014 - 16:06

To those who don’t know, JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging. We’ve recently added internationalization (i18n) support for English, Portuguese, Spanish, French and now Hebrew. We would like to have support for other languages, and it’s a pretty straightforward process to add a translation, so even if you do not have a tech background you can contribute!


First, create an account on Github. Github is a hosting service for git repositories, and open source projects get to use Github for FREE! As a result, many open source project repositories are hosted on Github, including but not limited to JSCommunicator.

Once you have created your git account, follow steps 1-4 in the section Setting up git listed in Github’s setup tutorial. This will help you install git and configure anything you need.


Now fork the JSCommunicator repo. I really like the word ‘fork’. Great term. If this is your first fork, know that this will make a copy of the JSCommunicator repo and all its respective code to your Github account. Here’s how we go about it:

  1. Sign in to Github

  2. Browse to

  3. Somewhere around the upper left corner, you will see a button labeled ‘Fork’. Click that.

You should now have your own JSCommunicator repository on your Github account. On your main page select the tab ‘Repositories’. It should be the first on the list. The url will be My Github username is JLouback, so my jscommunicator repo can be found at


Now you should download all that code to your local machine so you can edit it. On your JSCommunicator page somewhere around the lower right corner you’ll see a field entitled HTTPS clone URL:

I don’t know if this is correct, but you can just copy the URL on your browser too. I do that.

Now on your terminal, navigate to the directory you’d like to place your repository and enter

git clone

The current directory should now have a folder named ‘jscommunicator’ and in it is all the JSCommunicator code.

Switch branches

The JSCommunicator repo has a branch for the i18n support. A branch is basically a copy of the code that you can modify without changing the original code. It’s great for adding new features: once everything is done and tested, you can merge the new feature into the original code. A more detailed explanation of branches can be found here.

On the terminal, enter

cd jscommunicator
git branch -a

This will list all the branches in the repository, both local and remote. There should be a branch named ‘remotes/origin/i18n-support’. Switch to this branch by entering

git checkout --track origin/i18n-support

This will create a local branch named i18n-support with all the contents of the remote i18n-support branch.

Create a .properties file

Now your jscommunicator directory will have some new files for the internationalization functionality. Among them you should find an INTERNATIONALIZATION_README file with instructions for adding a JSCommunicator translation. It’s a less detailed version of this post, but it’s worth a read in case I’ve missed something here.

Go to the directory ‘internationalization’. You’ll see a few .properties files with different language codes. Choose the language of your preference for the translation base and make a copy of it in the internationalization directory. Your copy should be named ‘’. For example, the language code for german is ‘de’, so the german file should be named ‘’. If you’re not sure about the code for your language, please check out this list of ISO 639-1 language codes so you’re using the same code as (most) browsers.

The .properties file is a list of key-value pairs: the key is a word or a few words joined by ‘_’, followed by an equals sign ‘=’, then a word or phrase which is the value. Do not change anything to the left of the equals sign; translate only what is on the right of it.
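For illustration, a couple of hypothetical entries (the key names here are invented, not taken from the real file) and what their German counterparts would look like:

```
# English original: keys stay untouched
call_button=Call
busy_message=The line is busy, please try again later.

# German translation: same keys, only the right-hand side changes
call_button=Anrufen
busy_message=Die Leitung ist besetzt, bitte versuchen Sie es spaeter erneut.
```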

Modify available_languages.xml

Go back to the root directory (jscommunicator) and open the file named ‘available_languages.xml’. This file has a series of language elements. After the last ‘</language>’, add a new language element with your language's display name and code, copying the previous language elements. You can actually put your language element anywhere, as long as it sits inside the root element and doesn't cut into any of the other language elements. Your display name should be the name of the language in its native language; for example, for a french translation we’ll put ‘Français’ (I think). And the code is the code we used for the .properties file. Here’s how it should look:
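The original post shows the result as a screenshot; as a rough sketch only (the existing elements in available_languages.xml are the authority on the exact tag names), a new entry would follow the same shape as its neighbours, e.g.:

```xml
<language>
  <display>Deutsch</display>
  <code>de</code>
</language>
```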


Save your changes and voila!

Test your work

Once you’ve made a .properties file, JSCommunicator will automatically load your translation if you’ve set your browser preference to that language. If your browser preference is french, it will load the french .properties file. If your browser preference is a language we don’t have a translation for (let’s say german), JSCommunicator will load the default file, which is in english.

The available_languages.xml is used to build a language selection menu. If you add a language element without a corresponding .properties file, JSCommunicator will throw a JS error and load the default (english) translation. The same happens if you use different language codes in the .properties file name and in available_languages.xml. The error won’t disturb your use of JSCommunicator, it just won’t load the language you selected. No harm, no foul. But if you want others to use your translation, do take care to do this correctly.

If you read the INTERNATIONALIZATION_README, you will have seen that we need the file that’s included in the .html pages. You can download that code here. Be sure to place it in the jscommunicator directory. This is only necessary if you want to see your addition at work; you don’t need this file to contribute your translation. There are other 3rd-party code dependencies for running JSCommunicator too, as you may have noticed. Just creating the .properties file and adding a language element to available_languages.xml, if done correctly, is enough to make a pull request.


Here’s the part where we load all your local changes to your remote repository. In the terminal, navigate to the root directory (jscommunicator) and enter

git add internationalization/
git add available_languages.xml

Of course, please replace ‘’ with the name of your new .properties file. And mind you, we don’t have a german translation yet!

Then enter

git commit -m "German translation"

You can enter whatever you’d like in the “ “ part, it’s a good idea to explain what language you are adding. Now we can push the changes:

git push -u origin i18n-support

You should now see the changes you made in your Github account page at The message you put in double quotations (“ “) in the commit will be beside each modified file. This push also creates a new branch in your jscommunicator repository; now you will have a ‘master’ and an ‘i18n-support’ branch.


Now it’s time to share your translation with everyone else, if you feel so inclined. Your version of JSCommunicator has the new translation, but the official version does not. First navigate to your jscommunicator repo at and make sure you are on the i18n-support branch, as that will contain your changes.

Click the green button directly to the left of the branch drop down menu shown above. This will create a pull request to add your new code to the official jscommunicator code. Once you click that button, you should see this:

Make sure that the base repo (the first on the line above the green Create pull request button) is opentelecoms-org:i18n-support and not opentelecoms-org:master or another branch name. If it is, just click ‘Edit’ on the right and you’ll be able to select the branch from a dropdown like so:

If the first repo is opentelecoms-org:i18n-support and the second is YOUR_USER:i18n-support, and you’ve verified (scroll down) that the 2 files changed are your new .properties file and the added element in available_languages.xml, go ahead and press ‘Create pull request’. It’ll open a window with a text box where you can add an explanation, with one final ‘Create pull request’ button. Click that and you’re done! You’ve kindly contributed a translation for JSCommunicator!

Russ Allbery: wallet 1.1

17 July, 2014 - 08:16

Wallet is the secure credential management infrastructure that we use at Stanford, primarily for keytabs but increasingly for any sort of security keys that have to be stored somewhere and retrieved by specific systems or people.

The primary goal of this release is to add Duo support. This is currently somewhat preliminary, with only a single Duo integration object type that creates a UNIX integration. (Well, technically it can create any type of integration, but the integration information is returned in the format expected by the UNIX integration.) I expect a later release to rename all existing "duo" object types to "duo-unix" and add additional object types for the various other types of integrations that one wants to support, but that work will have to wait for another day.

Since it's been over a year since the previous release, there are also other accumulated bug fixes and improvements. I also tried to merge or address as many issues or patches that had been sent to me over the past year as I could, although many larger patches or improvements had to be deferred. Highlights:

  • The owner and getacl commands now return the name of the ACL instead of its numeric ID, as they probably should have from the beginning.

  • The date passed to expires can now be in any format Date::Parse supports. (On a related note, Date::Parse is now required.)

  • wallet-rekey now works properly on keytabs containing multiple principals. I had for some reason assumed that one could form a keytab containing multiple principals by just concatenating several together, but that definitely does not work. wallet-rekey now appends new keys to the end of the existing keytab. Unfortunately, I didn't get a chance to implement purging of old keys, for the folks stuck with MIT Kerberos ktutil instead of Heimdal's.

There are also multiple other bug fixes and general improvements, such as using DateTime objects uniformly for all database access that involves date fields, and recording ACL renames in the ACL history table. Both the API and the database layer are still kind of a mess, and I'd love to rewrite them with the benefit of experience and more knowledge, but that's a project for another day.

You can get the latest release from the wallet distribution page.

Steve Kemp: So what can I do for Debian?

17 July, 2014 - 04:49

So I recently announced my intention to rejoin the Debian project, having been a member between 2002 & 2011 (inclusive).

In the past I resigned mostly due to lack of time, and what has changed is that these days I have more free time - primarily because my wife works in accident & emergency and has "funny shifts". This means we spend many days and evenings together, then she might work 8pm-8am for three nights in a row, which then becomes Steve-time, and can involve lots of time browsing reddit, coding obsessively, and watching bad TV (currently watching "Lost Girl". Shades of Buffy/Blood Ties/similar. Not bad, but not great.)

My NM-progress can be tracked here, and once accepted I have a plan for my activities:

  • I will minimally audit every single package running upon any of my personal systems.
  • I will audit as many of the ITP-packages I can manage.
  • I may, or may not, actually package software.

I believe this will be useful, even though there will be limits - I've no patience for PHP and will just ignore it, along with its ecosystem, for example.

As progress today I reported #754899 / CVE-2014-4978 against Rawstudio, and discussed some issues with ITP: tiptop (the program seems semi-expected to be installed setuid(0), but if it is then it will allow arbitrary files to be truncated/overwritten via "tiptop -W /path/to/file").

(ObRandom still waiting for a CVE identifier for #749846/TS-2867..)

And now sleep.

Matthias Klumpp: AppStream 0.7 specification and library released

16 July, 2014 - 23:03

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, any other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion percentage for each language. This allows software-centers to apply smart filtering on applications, to highlight the ones which are available in the user's native language.
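In distribution XML, that could look like the following sketch (the percentages and locales here are made up; the AppStream specification is the normative reference for the exact syntax):

```xml
<languages>
  <lang percentage="100">en_GB</lang>
  <lang percentage="96">de</lang>
</languages>
```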

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.

The <provides/> tag gained a new dbus item-type to expose D-Bus interface names the component provides to the outside world. This means in future it will be possible to search for components providing a specific dbus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the cli tool)
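The upstream metadata answering that query would carry the interface name in its <provides/> block, along the lines of this sketch (consult the specification for the exact attribute names):

```xml
<provides>
  <dbus type="system">org.freedesktop.PackageKit</dbus>
</provides>
```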

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.

Timestamps in <release/> tags must now be UNIX epochs, YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package-tuning (which can become difficult if a <provides/> tag needs to be satisfied).

As a small sidenote: the multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive crosslinking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have “Quickstart pages” in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop applications and addons; more will follow in the future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: We switched from using LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 license family of GPL licenses – I like it for tivoization protection, its explicit compatibility with some important other licenses and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, a LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be used by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request of people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.


A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that’s it with the changes for now! Thanks to everyone who helped making 0.7 ready, being it feedback, contributions to the documentation, translation or coding. You can get the release tarballs at Freedesktop. Have fun!


Creative Commons License. Copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.