Planet Debian


Russ Allbery: rra-c-util 5.9

1 hour 43 min ago

A minor release of my C utility library, including some changes required for the previous release of pam-afs-session and the upcoming release of remctl.

The Kerberos portability layer now correctly defines the strings for dealing with anonymous principals when built with Heimdal, and adds KRB5_ANON_REALM (required for doing the authentication). The PAM testing framework has some improvements for handling pam_modutil_getpwnam and supports testing against PAM_SESSION_ERR.

You can get the latest version from the rra-c-util distribution page.

Erich Schubert: ELKI 0.7.0 on Maven and GitHub

6 hours 58 min ago
Version 0.7.0 of our data mining toolkit ELKI is now available on the project homepage, GitHub and Maven. You can also clone this example project to get started easily.

What is new in ELKI 0.7.0? Too much to list here; please see the release notes.

What is ELKI exactly? ELKI is a Java-based data mining toolkit. We focus on cluster analysis and outlier detection, because there are plenty of tools available for classification already. But there is a kNN classifier, and a number of frequent itemset mining algorithms in ELKI, too.

ELKI is highly modular. You can combine almost everything with almost everything else. In particular, you can combine algorithms such as DBSCAN with arbitrary distance functions, and you can choose from many index structures to accelerate the algorithm. Because these parts are well separated, you can add a new index, a new distance function, or a new data type, and still benefit from the other parts. In other tools such as R, you cannot easily plug a new distance function into an arbitrary algorithm and get good performance: all the fast code in R is written in C and Fortran, and cannot easily be extended this way. In ELKI, you can define a new data type, a new distance function, or a new index, and still use most algorithms. (Some algorithms may have prerequisites that your new data type does not fulfill, of course.)

ELKI is also very fast. Of course, well-tuned C code can be faster, but then it usually is no longer as modular and easy to extend.

ELKI is documented. We have JavaDoc, and we annotate classes with their scientific references (see the list of all references we have). So you know which algorithm a class is supposed to implement, and can look up details there. This makes it very useful for science.

ELKI is not a turnkey solution. It aims at researchers, developers and data scientists. If you have a SQL database and want to do a point-and-click analysis of your data, please get a business solution with commercial support instead.
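
To make the modularity concrete, here is a rough sketch of running DBSCAN from ELKI's command-line interface with an explicitly chosen distance function. The jar name, the input file and the parameter names are written from memory and may not match 0.7.0 exactly, so treat it purely as an illustration and check the ELKI documentation:

# Illustrative only: jar name, data file and parameter names are assumptions.
java -jar elki-bundle-0.7.0.jar KDDCLIApplication \
  -dbc.in pointcloud.csv \
  -algorithm clustering.DBSCAN \
  -algorithm.distancefunction minkowski.ManhattanDistanceFunction \
  -dbscan.epsilon 0.05 -dbscan.minpts 20 \
  -resulthandler ResultWriter -out output/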

Gergely Nagy: Feeding Emacs

27 November, 2015 - 21:01

For the past fifteen years, I have been tweaking my ~/.emacs continuously, most recently by switching to Spacemacs. With that switch done, I started to migrate a few more things to Emacs, an Atom/RSS reader being one that's been in the queue for years - ever since Google Reader shut down. Since March 2013, I have been a Feedly user, but I have wanted to migrate to something better for a long time. I wanted to use Free Software, for one.

I saw a mention of Elfeed somewhere a little while ago, and in the past few days, I decided to give it a go. The results are pretty amazing.

For now, I'm using my own fork of Elfeed, because of some new features I'm adding which have not been submitted upstream yet. It's all possible with the upstream sources if one monkey-patches a few functions - but that's ugly.

Anyhow, this is how my feed reader looks right now:

This required quite a bit of elisp to be written, and depends on popwin and powerline, but I'm very happy with the results.

Without much further ado, the magic to make this happen:

(defun feed-reader/stats ()
  "Count the number of entries and feeds being currently displayed."
  (if (and elfeed-search-filter-active elfeed-search-filter-overflowing)
      (list 0 0 0)
    (cl-loop with feeds = (make-hash-table :test 'equal)
             for entry in elfeed-search-entries
             for feed = (elfeed-entry-feed entry)
             for url = (elfeed-feed-url feed)
             count entry into entry-count
             count (elfeed-tagged-p 'unread entry) into unread-count
             do (puthash url t feeds)
             finally (cl-return (list unread-count entry-count
                                      (hash-table-count feeds))))))

(defun feed-reader/search-header ()
  "Returns the string to be used as the Elfeed header."
  (let* ((separator-left (intern (format "powerline-%s-%s"
                                         (powerline-current-separator)
                                         (car powerline-default-separator-dir))))
         (separator-right (intern (format "powerline-%s-%s"
                                          (powerline-current-separator)
                                          (cdr powerline-default-separator-dir)))))
    (if (zerop (elfeed-db-last-update))
        (elfeed-search--intro-header)
      (let* ((db-time (seconds-to-time (elfeed-db-last-update)))
             (update (format-time-string "%Y-%m-%d %H:%M:%S %z" db-time))
             (stats (feed-reader/stats))
             (search-filter (cond (elfeed-search-filter-active "")
                                  (elfeed-search-filter elfeed-search-filter)
                                  ("")))
             (lhs (list (powerline-raw (concat search-filter " ") 'powerline-active1 'l)
                        (funcall separator-right 'powerline-active1 'mode-line)))
             (center (list (funcall separator-left 'mode-line 'powerline-active2)
                           (destructuring-bind (unread entry-count feed-count) stats
                             (let* ((content (format " %d/%d:%d" unread entry-count feed-count))
                                    (help-text nil))
                               (if url-queue
                                   (let* ((total (length url-queue))
                                          (in-process (cl-count-if #'url-queue-buffer url-queue)))
                                     (setf content (concat content " (*)"))
                                     (setf help-text (format " %d feeds pending, %d in process ... "
                                                             total in-process))))
                               (propertize content
                                           'face 'powerline-active2
                                           'help-echo help-text)))
                           (funcall separator-right 'powerline-active2 'mode-line)))
             (rhs (list (funcall separator-left 'mode-line 'powerline-active1)
                        (powerline-raw (concat " " update) 'powerline-active1 'r))))
        (concat (powerline-render lhs)
                (powerline-fill-center nil (/ (powerline-width center) 2.0))
                (powerline-render center)
                (powerline-fill nil (powerline-width rhs))
                (powerline-render rhs))))))

(defun popwin:elfeed-show-entry (buff)
  (popwin:popup-buffer buff
                       :position 'right
                       :width 0.5
                       :dedicated t
                       :noselect nil
                       :stick t))

(defun popwin:elfeed-kill-buffer ()
  (interactive)
  (let ((window (get-buffer-window (get-buffer "*elfeed-entry*"))))
    (kill-buffer (get-buffer "*elfeed-entry*"))
    (delete-window window)))

(setq elfeed-show-entry-switch #'popwin:elfeed-show-entry
      elfeed-show-entry-delete #'popwin:elfeed-kill-buffer
      elfeed-search-header-function #'feed-reader/search-header)

I probably learned more elisp with this experiment than in the past 15 years combined, and I'm finally starting to understand how properties and powerline work. Great! I'm looking forward to simplifying the current setup and then pushing it further!

I'd like to be able to change the search page layout, for example: I want the tags on a separate column. Elfeed also looks like a great candidate for a number of pull requests I plan to make in December...

Norbert Preining: Slick Google Map 0.3 released

27 November, 2015 - 14:17

I have just pushed a new version of the Slick Google Map plugin for WordPress to the servers. There are not many changes, but there is a crucial fix for parsing coordinates in DMS (degree-minute-second) format.

The documentation stated that all kinds of DMS formats can be used to specify a location, but these DMS-encoded locations were simply sent to Google for geocoding. Unfortunately, it seems Google is incapable of handling DMS formats and returns slightly off coordinates. By using a library for DMS conversion, which I adapted slightly, it is now possible to use a wide variety of location formats. Practically everything that can reasonably be interpreted as a location will be properly converted to decimal coordinates.
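
For reference, the underlying conversion is just degrees + minutes/60 + seconds/3600; the plugin itself is PHP, but the arithmetic can be illustrated with a one-liner (the coordinates are made up):

# 48°51'29.6" in DMS -> decimal degrees (D + M/60 + S/3600)
echo '48 51 29.6' | awk '{ printf "%.6f\n", $1 + $2/60 + $3/3600 }'    # prints 48.858222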

Plans for the next release are:

  • prepare for translation via the WordPress translation team
  • add html support for the marker text via uuencoding

Please see the dedicated page or the WordPress page for more details and downloads.


Olivier Berger: Handling video files produced for a MOOC on Windows with git and git-annex

26 November, 2015 - 16:51

This post is intended to document some elements of the workflow that I’ve set up to manage videos produced for a MOOC, where different colleagues work collaboratively and remotely on a set of video sequences.

We are a team from several schools working on the same course, and we have an incremental process, so we need collaboration among many remote authors, over quite a long period, on a set of video sequences.

We’re probably going to review some of the videos and make changes, so we need to monitor changes and submit versions to colleagues on remote sites so they can criticize them and get later edits. We may have more than one site doing video production. Thus we need to share videos along the flow of production, editing and revision of the course contents, in a way that is manageable by power users (we’re all computer scientists, used to SVN or Git).

I’ve decided to start an experiment with Git and Git-Annex to try and manage the videos the way we are used to doing for slide sources in LaTeX. Obviously the main issue is that videos are big files, demanding in storage space and bandwidth for transfers.

We want to keep track of everything done during the production of the videos, so that we can later redo some of the video editing, for instance if we change the graphic design elements (logos, subtitles, frame dimensions, additional effects, etc.), or if we improve the classes over the seasons. On the other hand, not all colleagues want to download a full copy of all rushes to their laptop if they just want to review one particular sequence of the course; they will only need to download the final edit MP4. Even so, some of them are interested in being able to fetch all the rushes, should they want to try and improve the videos.

Git-Annex brings us the ability to decouple the presence of files in directories, managed by regular Git commands, from the presence of the file contents (the big stuff), which is managed by Git-Annex.

Here’s a quick description of our setup:

  • we do screen capture and video editing with Camtasia on a Windows 7 system. Camtasia (although proprietary) is quite manageable without being a video editing expert, and suits our needs quite well in terms of screen capture, green-background shooting with later face insertion over slide captures, additional “motion design”-like enhancement, etc.
  • the rushes captured (audio, video) are kept on that machine
  • the MP4 rendering of the edits are performed on that same machine
  • all these files are stored locally on that computer, but we perform regular backups, on demand, to a remote system, using rsync+SSH (see the example command after this list). We have installed Git for Windows, so we use bash, rsync and ssh from Git's install. SSH uses a public key without a passphrase, to connect easily to the Linux remote, but that isn’t mandatory.
  • the mirrored files appear on a Linux filesystem on another host (running Debian), where the target is actually managed with git and git-annex.
  • there we handle all the files added, removed or modified with git-annex.
  • we have 2 more git-annex remote repos, accessed through SSH (again using a passphrase-less public key), run by GitoLite, to which git-annex rsyncs copies of all the file contents. These repos are on different machines keeping backups in case of crashes. git-annex is setup to mandate keeping at least 2 copies of files (numcopies).
  • colleagues in turn clone from either of these repos and git-annex get to download the video contents, only for files which they are interested in (for instance final edits, but not rushes), which they can then play locally on their preferred OS and video player.
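
As an illustration of the rsync+SSH backup step mentioned in the list above, the command run from the Git-for-Windows bash prompt could look roughly like this; the local path, user and host are placeholders, not our actual setup:

# Hypothetical example: mirror the Camtasia project tree to the Linux host.
rsync -av --delete -e ssh "/c/Users/prof/Videos/MOOC/" backup@linuxhost:/srv/mooc-videos/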

Why didn’t we use git-annex directly on the Windows host, which is the source of the files?

We tried, but it didn’t work out. The Git-Annex assistant somehow crashed on us, causing the Git history to become strange, so that became unmanageable. More importantly, we need robust backups, so we can’t afford to rely on something we don’t fully trust: shooting a video again is really costly (setting up the shooting set again, with lighting, cameras, and a professor who has to repeat the speech!).

The rsync (with --delete on the destination) from Windows to Linux is robust. Git-Annex on Linux seems robust so far. That’s enough for now.

The drawback is that we need manual intervention for starting the rsync, and also that we must make sure that the rsync target is ready to get a backup.

The target of the rsync on Linux is a git-annex clone using the default “indirect” mode, which handles the files as symlinks to the actual copies managed by git-annex inside the .git/ directory. But that isn’t suitable for comparison with the origin of the rsync mirror, which consists of plain files on the Windows computer.

We must then do a “git-annex edit” on the whole target of the rsync mirror before the rsync, so that the files are present as regular video files. This is costly in terms of storage, and also in copying time (our repo contains around 50 GB, and the Linux host is a rather tiny laptop).

After the rsync, all the files need to be compared to the SHA256 checksums known to git-annex so that only modified files are taken into account in the commit. We perform a “git-annex add” on all the files (for new files having appeared at rsync time), and then a “git-annex sync”. That takes a lot of time, since the SHA256 computations are quite long for such a set of big files (the video rushes and edited videos are in HD).

So the process needs to be the following, on the target Linux host:

  1. git annex add .
  2. git annex sync
  3. git annex copy . --to server1
  4. git annex copy . --to server2
  5. git annex edit .
  6. only then : rsync

Iterate ad lib
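
As a minimal sketch, the steps above can be wrapped in a small script on the Linux host; the directory and the remote names (server1, server2) are placeholders taken from the description above:

#!/bin/sh
# Sketch of one iteration of the workflow on the Linux target (names are placeholders).
set -e
cd /srv/mooc-videos

git annex add .                  # register files that appeared during the last rsync
git annex sync                   # commit and sync the git metadata
git annex copy . --to server1    # push file contents to the two backup repositories
git annex copy . --to server2
git annex edit .                 # unlock files so the next rsync sees plain files

# ...then run the rsync from the Windows machine again, and repeat.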

I would have preferred to have a working git-annex on Windows, but this is a much more manageable process for me for now, and until our repo holds more videos than our Linux laptop can store, we’re quite safe.

Next steps will probably involve gardening the contents of the repo on the Linux host so we’re only keeping copies of current files, and older copies are only kept on the 2 servers, in case of later need.

I hope this can be useful to others, and I’d welcome suggestions on how to improve our process.

Tiago Bortoletto Vaz: Birthday as in the good old days

26 November, 2015 - 07:43

This year I've got zero happy-birthday spam messages from phone, post, email, or from random people on that Internet social thing. These days, that's a WOW, yes it is.

On the other hand, full of love and simple celebrations together with local ones. A few emails and phone calls from close friends/family who are physically distant.

I'm happier than ever with my last years' choices of caring about my privacy, not spending time with fake relationships and keeping myself an unimportant one for the $SYSTEM. That means a lot for me.

Steve Kemp: A transient home-directory?

25 November, 2015 - 18:30

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.
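
A minimal sketch of how that could be wired up, assuming the dotfiles are managed with mr and cloned first; the repository URL and paths are placeholders:

# Hypothetical login hook (e.g. run from a shell profile or display manager script).
rm -rf "$HOME"/*
git clone git://example.org/skx/dotfiles.git "$HOME/.dotfiles"    # placeholder URL
ln -s "$HOME/.dotfiles/.mrconfig" "$HOME/.mrconfig"
mr -d "$HOME" checkout    # clone the remaining repositories listed in .mrconfig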

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example, I like "Ctrl-Alt-t" to open a new gnome-terminal. That has to be configured the first time I log in to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.

Daniel Pocock: Introducing elfpatch, for safely patching ELF binaries

25 November, 2015 - 17:30

I recently had a problem with a program behaving badly. As a developer familiar with open source, my normal strategy in this case would be to find the source and debug or patch it. Although I was familiar with the source code, I didn't have it on hand and would have faced significant inconvenience having it patched, recompiled and introduced to the runtime environment.

Conveniently, the program has not been stripped of symbol names, and it was running on Solaris. This made it possible for me to whip up a quick dtrace script to print a log message as each function was entered and exited, along with the return values. This gives a precise record of the runtime code path. Within a few minutes, I could see that just changing the return value of a couple of function calls would resolve the problem.

On the x86 platform, functions set their return value by putting the value in the EAX register. This is a trivial thing to express in assembly language and there are many web-based x86 assemblers that will allow you to enter the instructions in a web-form and get back hexadecimal code instantly. I used the bvi utility to cut and paste the hex code into a copy of the binary and verify the solution.
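
As an illustration (not part of any particular tool), forcing a function to return 1 on 32-bit x86 takes six bytes, and an assembler such as radare2's rasm2 will print the hex for you, assuming it is installed:

# "mov eax, 1; ret" assembles to B8 01 00 00 00 C3 on 32-bit x86.
rasm2 -a x86 -b 32 'mov eax, 1; ret'    # prints b801000000c3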

All I needed was a convenient way to apply these changes to all the related binary files, with a low risk of error. Furthermore, it needed to be clear for a third-party to inspect the way the code was being changed and verify that it was done correctly and that no other unintended changes were introduced at the same time.

Finding or writing a script to apply the changes seemed like the obvious solution. A quick search found many libraries and scripts for reading ELF binary files, but none offered a patching capability. Tools like objdump on Linux and elfedit on Solaris show the raw ELF data, such as virtual addresses, which must be converted manually into file offsets, which can be quite tedious if many binaries need to be patched.

My initial thought was to develop a concise C/C++ program using libelf to parse the ELF headers and then calculating locations for the patches. While searching for an example, I came across pyelftools and it occurred to me that a Python solution may be quicker to write and more concise to review.

elfpatch (on github) was born. As input, it takes a text file with a list of symbols and hexadecimal representations of the patch for each symbol. It then reads one or more binary files and either checks for the presence of the symbols (read-only mode) or writes out the patches. It can optionally backup each binary before changing it.

Daniel Pocock: Drone strikes coming to Molenbeek?

25 November, 2015 - 14:28

The St Denis siege last week and the Brussels lockdown this week provide all of us in Europe with an opportunity to reflect on why over ten thousand refugees per day have been coming here from the Middle East, especially Syria.

At this moment, French warplanes and American drones are striking cities and villages in Syria, killing whole families in their effort to shortcut the justice system and execute a small number of very bad people without putting them on trial. Some observers estimate air strikes and drones kill twenty innocent people for every one bad guy. Women, children, the sick, elderly and even pets are most vulnerable. The leak of the collateral murder video simultaneously brought Wikileaks into the public eye and demonstrated how the crew of a US attack helicopter had butchered unarmed civilians and journalists like they were playing a video game.

Just imagine that the French president had sent the fighter jets to St Denis and Molenbeek instead of using law enforcement. After all, how are the terrorists there any better or worse than those in Syria, don't they deserve the same fate? Or what if Obama had offered to help out with a few drone strikes on suburban Brussels? After all, if the drones are such a credible solution for Syria's future, why won't they solve Brussels' (perceived) problems too?

If the aerial bombing solutions had been attempted in a western country, it would lead to chaos. Half the population of Paris and Brussels would find themselves camping at the migrant camps in Calais, hoping to sneak into the UK in the back of a truck.

Over a hundred years ago, Russian leaders proposed a treaty agreeing never to drop bombs from balloons and the US and UK happily signed it. Sadly, the treaty wasn't updated after the invention of fighter jets, attack helicopters, rockets, inter-continental ballistic missiles, satellites and drones.

The reality is that asymmetric warfare hasn't worked and never will work in the middle east and as long as it is continued, experts warn that Europe may continue to face the consequences of refugees, terrorists and those who sympathize with their methods. By definition, these people can easily move from place to place and it is ordinary citizens and small businesses who will suffer a lot more under lockdowns and other security measures.

In our modern world, people often look to technology for shortcuts. The use of drones in the middle east is a shortcut from a country that spent enormous money on ground invasions of Iraq and Afghanistan and doesn't want to do it again. Unfortunately, technological shortcuts can't always replace the role played by real human beings, whether it is bringing law and order to the streets or in any other domain.

The French police deserve significant credit for the relatively low loss of life in the St Denis siege. If their methods and results were replicated in Syria and other middle eastern hotspots, would it be more likely to improve the situation in the long term than drone strikes?

Ben Armstrong: Debian Live After Debian Live

25 November, 2015 - 08:00
Get involved

After this happened, my next step was to get re-involved in Debian Live to help it carry on after the loss of Daniel. Here’s a quick update on some team progress, notes that could help people building Stretch images right now, and what to expect next.

Team progress
  • Iain uploaded live-config, incorporating an important fix, #bc8914bc, for a bug that prevented images from booting.
  • I want to get live-images ready for an upload, including #8f234605 to fix wrong config/bootloaders that prevented images from building.
Test build notes
  • As always, build Stretch images with latest live-build from Sid (i.e. 5.x).
  • Build Stretch images, not Sid, as there’s less of a chance of dependency issues spoiling the build, and that’s the default anyway.
  • To make build iterations faster, make sure the config is modified to not build source and not to include the installer (edit auto/config before ‘lb config’), and use an apt caching proxy (see the sketch after these notes).
  • Don’t forget to inject fixed packages (e.g. live-config) into each config. Use apt pinning as per live-manual, or drop the debs into config/packages.chroot.
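
As a hedged sketch of those auto/config tweaks (the option names are from memory; check them against the live-build documentation for the version you use):

#!/bin/sh
# auto/config sketch: skip source images and the installer, and use a local apt proxy.
lb config noauto \
    --source false \
    --debian-installer false \
    --apt-http-proxy http://localhost:3142/ \
    "${@}"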
Test boot notes
  • Use kvm, giving it enough ram (-m 1024 works for me); see the example invocation after these notes.
  • For gnome-desktop and kde-desktop, use -vga qxl, or else the desktop will crash and restart repeatedly.
  • When using qxl, edit boot params to add qxl.modeset=1 (workaround for #779515, which will be fixed in kernel >= 4.3).
  • My gnome image test was spoiled by #802929. The mouse doesn’t work (pointer moves, but no buttons work). Waiting on a new kernel to fix this. This is a test environment related bug only, i.e. should work fine on hardware. (Test pending.)
  • The Stretch standard, lxde-desktop, cinnamon-desktop, xfce-desktop, and gnome-desktop images all built and booted fine (except for the gnome issue noted above).
  • The Stretch kde-desktop and mate-desktop images are next on my list to test, along with Jessie images.
  • On the standard and lxde-desktop images, I’ve only tested that if the installer is included, booting from the Install boot menu option starts the installer (i.e. I didn’t do an actual install).
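
Putting the boot notes together, a typical test invocation might look like this; the image filename is a placeholder:

# Boot a built live image under kvm with 1 GiB of RAM and the qxl video device.
kvm -m 1024 -vga qxl -cdrom live-image-amd64.hybrid.iso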
Coming soon

See the TODO in the wiki. We’re knocking these off steadily. It will be faster with more people helping (hint, hint).

Bernd Zeimetz: online again

25 November, 2015 - 02:41

Finally, my site is back online and I’m planning to start blogging again! Part of the reason why I became inactive was the usage of ikiwiki, which is great, but in the end unnecessarily complicated. So I’ve migrated my page to Hugo, a static website generator written in Go. Hugo has an active community and it is easy to create themes for it or to enhance it. Also, it uses plain Markdown syntax instead of special ikiwiki syntax mixed into it, which should make it easy to migrate away again if necessary.

In case somebody else would like to convert from ikiwiki to Hugo, here is the script I’ve hacked together to migrate my old blog posts.


find . -type f -name '*.mdwn' | while read i; do
    tmp=$(mktemp)          # temporary file for the converted post
    {
        # Hugo front matter
        echo '+++'
        slug="$(echo $i | sed 's,.*/,,;s,\.mdwn$,,')"
        echo "slug = \"${slug}\""
        echo "title = \"$(echo $i | sed 's,.*/,,;s,\.mdwn$,,;s,_, ,g;s/\b\(.\)/\u\1/;s,debian,Debian,g')\""
        # date: take it from the ikiwiki "meta updated" directive if present,
        # otherwise from the git commit that added the file
        if grep -q 'meta updated' $i; then
            echo -n 'date = '
            sed '/meta updated/!d;/.*meta updated.*/s,.*=",,;s,".*,,;s,^,",;s,$,",' $i
        else
            echo -n 'date = '
            git log --diff-filter=A --follow --format='"%aI"' -1 -- $i
        fi
        if grep -q '\[\[!tag' $i; then
            echo -n 'tags ='
            sed '/\[\[!tag/!d;s,[^ ]*tag ,,;s,\]\],,;s,\([^ ]*\),"\1",g;s/ /,/g;s,^,[,;s,$,],' $i
        fi
        echo 'categories = ["linux"]'
        echo 'draft = false'
        echo '+++'
        echo ''

        # body: drop ikiwiki directives and convert link/image syntax to plain Markdown
        sed -e '/\[\[!tag/d' \
            -e '/meta updated/d' \
            -e '/\[\[!plusone *\]\]/d' \
            -e 's,\[\[!img files[0-9/]*/\([^ ]*\) alt="\([^"]*\).*,![\2](../\1),g' \
            -e 's,\[\([^]]*\)\](\([^)]*\)),[\1](\2),g' \
            -e 's,\[\[\([^|]*\)|\([^]]*\)\]\],[\1](\2),g' \
            $i
    } > $tmp
    #cat $tmp; rm $tmp
    mv $tmp `echo $i | sed 's,\.mdwn,.md,g'`
done

For the Planet Debian readers: only Linux-related posts will show up on the planet. If you are interested in my mountain activities and other things I post, please follow my blog directly.

Carl Chenet: db2twitter: Twitter out of the browser

25 November, 2015 - 01:00

You have a database, a tweet pattern, and want to automatically tweet on a regular basis? No need for RSS, fancy tricks, or a 3rd-party website to translate RSS to Twitter. Just use db2twitter.

db2twitter is pretty easy to use!  First define your Twitter credentials:


Then your database information:


Then the pattern of your tweet, a Python-style formatted string:

tweet={} hires a {}{}

Add db2twitter in your crontab:

*/10 * * * * db2twitter db2twitter.ini

And you’re all set! db2twitter will generate and tweet the following tweets:

MyGreatCompany hires a web developer
CoolStartup hires a devops skilled in Docker

db2twitter is developed by and run for the job board of the French-speaking Free Software and Open Source community.

db2twitter also has cool options like:

  • only tweet during user-specified times (e.g. 9AM-6PM)
  • use a user-specified SQL filter to get data from the database (e.g. only fetch rows where status == "edited")

db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.

Rhonda D'Vine: Salut Salon

24 November, 2015 - 15:26

I don't really remember where or how I stumbled upon these four women, so I'm sorry that I can't give credit where credit is due; I even believe that I already started writing a blog entry about them somewhere. Anyway, today I want to present you Salut Salon. They might play classical instruments, but not in a classical way. See and hear for yourself:

  • Wettstreit zu viert: This is the first one I stumbled upon, and it caught my attention. A lovely interpretation of classical tunes and a sweet mixup.
  • Ievan Polkka: I love the catchy tune—and their interpretation of the song.
  • We'll Meet Again: While the history of the song might not be so laughable the giggling of them is just contagious. :)

So like always, enjoy!


Michal Čihař: Wammu 0.40

24 November, 2015 - 15:09

Yesterday, Wammu 0.40 was released.

The list of changes is not really huge:

  • Correctly escape XML output.
  • Make error message selectable.
  • Fixed spurious D-Bus error message.
  • Translation updates.

I will not make any promises for future releases (if there will be any) as the tool is not really in active development.


Riku Voipio: Using ser2net for serial access.

24 November, 2015 - 02:55
Is your table a mess of wires? Do you have multiple devices connected via serial and can't remember which /dev/ttyUSBX is connected to what board? Unless you are an embedded developer, you are unlikely to deal with serial much anymore - in that case you can just jump to the next post in your news feed.

Introducing ser2net

Usually people start with minicom for serial access. There are better tools - picocom, screen, etc. But to easily map multiple serial ports, use ser2net. Ser2net makes serial ports available over telnet.

Persistent usb device names and ser2net

To remember which usb-serial adapter is connected to what, we use the /dev/serial tree created by udev, in /etc/ser2net.conf:

# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT
The by-path syntax is needed if you have many identical usb-to-serial adapters. In that case a patch from the BTS is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream - a sure sign that a project is stagnant is a homepage still hosted at SourceForge. This patch, among other interesting features, can also be found in various ser2net forks on GitHub.

Setting easy to remember names

Finally, unless you want to memorize the port numbers, set TCP port to name mappings in /etc/services:

# Local services
arndale 7004/tcp
cubox 7005/tcp
sonic-screwdriver 7006/tcp
Now finally:
telnet localhost sonic-screwdriver
Mandatory picture of serial port connection in action.

C.J. Adams-Collier: Regarding fdupes

24 November, 2015 - 01:04

Dear readers,

There is a very useful tool for finding duplicate files and merging them to share permanent storage, and its name is fdupes. There was a terrible occurrence in the software after version 1.51, however: they removed the -L argument because too many people were complaining about lost data. It sounds like user error to me, and so I continue to use version 1.51. I have to build from source, since the newer versions do not have the -L option.
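
For context, a hedged example of the removed behaviour; double-check your version's man page before pointing it at data you care about:

# With fdupes <= 1.51: recurse into ~/photos and hard-link duplicates together,
# keeping the first file of each set and replacing the others with hard links.
fdupes -r -L ~/photos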

And so there you are. I recommend using it, even though this most useful feature has been deprecated and removed from the software. Perhaps there should be a fdupes-danger package in Debian?

Lunar: Reproducible builds: week 30 in Stretch cycle

23 November, 2015 - 23:43

What happened in the reproducible builds effort this week:

Toolchain fixes
  • Markus Koschany uploaded antlr3/3.5.2-3 which includes a fix by Emmanuel Bourg to make the generated parser reproducible.
  • Markus Koschany uploaded maven-bundle-plugin/2.4.0-2 which includes a fix by Emmanuel Bourg to use the date in the DEB_CHANGELOG_DATETIME variable in the file embedded in the jar files.
  • Niels Thykier uploaded debhelper/9.20151116 which makes the timestamp of directories created by dh_install, dh_installdocs, and dh_installexamples reproducible. Patch by Niko Tyni.

Mattia Rizzolo uploaded a version of perl to the “reproducible” repository including the patch written by Niko Tyni to add support for SOURCE_DATE_EPOCH in Pod::Man.

Dhole sent an updated version of his patch adding support for SOURCE_DATE_EPOCH in GCC to the upstream mailing list. Several comments have been made in response which have been quickly addressed by Dhole.

Dhole also forwarded his patch adding support for SOURCE_DATE_EPOCH in libxslt upstream.
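
For context, SOURCE_DATE_EPOCH is an environment variable carrying a Unix timestamp that build tools are expected to use instead of the current time; a minimal sketch of a build script honouring it could look like this:

# Use SOURCE_DATE_EPOCH (seconds since the epoch) if set, otherwise fall back to "now".
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%d %H:%M:%S')
echo "Build date: ${BUILD_DATE}"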

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: antlr3/3.5.2-3, clusterssh, cme, libdatetime-set-perl, libgraphviz-perl, liblingua-translit-perl, libparse-cpan-packages-perl, libsgmls-perl, license-reconcile, maven-bundle-plugin/2.4.0-2, siggen, stunnel4, systemd, x11proto-kb.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Vagrant Cascadian has set up a new armhf node using a Raspberry Pi 2. It should soon be added to the Jenkins infrastructure.

diffoscope development

diffoscope version 42 was released on November 20th. It adds a missing dependency on python3-pkg-resources and, to prevent similar regressions, another autopkgtest to ensure that the command line is functional when Recommends are not installed. Two more encoding-related problems have been fixed (#804061, #805418). A missing Build-Depends on binutils-multiarch has also been added to make the test suite pass on architectures other than amd64.

Package reviews

180 reviews have been removed, 268 added and 59 updated this week.

70 new “fail to build from source” bugs have been reported by Chris West, Chris Lamb and Niko Tyni.

New issue this week: randomness_in_ocaml_preprocessed_files.


Jim MacArthur started to work on a system to rebuild and compare packages built on using .buildinfo and

On December 1-3rd 2015, a meeting of about 40 participants from 18 different free software projects will be held in Athens, Greece with the intent of improving the collaboration between projects, helping new efforts to be started, and brainstorming on end-user aspects of reproducible builds.

Jonathan Dowland: CDs should come with download codes

23 November, 2015 - 23:06

boxes of CDs & the same data on MicroSD

There's a Vinyl resurgence going on, with vinyl record sales growing year-on-year. Many of the people buying records don't have record players. Many records are sold including a download code, granting the owner an (often one-time) opportunity to download a digital copy of the album they just bought.

Some may be tempted to look down upon those buying vinyl records, especially those who don't have a means to play them. The record itself is, now more than ever, a physical totem rather than a media for the music. But is this really that different to how we've treated audio CDs this century?

For at least 15 years, I've ripped every CD I've bought and then stored it in a shoebox. (I'm up to 10 shoeboxes). The ripped copy is the only thing I listen to. The CD is little more than a totem, albeit one which I have to use in a relatively inconvenient ritual in order to get something I can conveniently listen to.

The process of ripping CDs has improved a lot in this time, but it's still a pain. CD-ROM drives are also becoming a lot more scarce. Ripping is not necessarily reliable, either. The best tool to verify a rip is AccurateRip, a privately-owned database of track checksums. The private status is a problem for the community (remember what happened to CDDB?) and it is only useful if other people using an AccurateRip-supported ripper have already successfully ripped the CD.

Then there are things like CD pre-emphasis. It turns out that the Red Book standard defines a rarely-used flag that means the CD (or individual tracks) has had pre-emphasis applied to the treble end of the frequency spectrum. The CD player is supposed to apply de-emphasis on playback. This doesn't happen if you fetch the audio data digitally, so it becomes the CD ripper's responsibility to handle this. CD rippers have only relatively recently grown support for it. Awareness has been pretty low, so low that nobody has a good idea of how many CDs actually have pre-emphasis set: it's thought to be very rare, but (as far as I know) MusicBrainz doesn't (yet) track it.
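
If a rip does turn out to come from a pre-emphasised disc, the treble can still be corrected after the fact; as a hedged example, SoX ships a deemph effect intended for CD de-emphasis (the filenames are placeholders):

# Apply CD de-emphasis to a rip that was made without it.
sox ripped.wav fixed.wav deemph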

So some proportion of my already-ripped CDs may have actually been ripped incorrectly, and I can't easily determine which ones without re-ripping them all. I know that at least my Quake computer game CD has it set, and I have suspicions about some other releases.

Going forward, this could be avoided entirely if CDs were treated more like totems, as vinyl records are, than the media delivering the music itself, and if record labels routinely included download cards with audio CDs. For just about anyone, no matter how the music was obtained, media-less digital is the canonical form for engaging with it. Attention should also be paid to make sure that digital releases are of a high quality: but that's a topic for another blog post.

Gergely Nagy: Keyboard updates

23 November, 2015 - 17:53

Last Friday, I compiled a list of keyboards I'm interested in, and received a lot of incredible feedback, thank you all! This allowed me to shorten the list considerably, to basically two pieces. I'm reasonably sure by now which one I want to buy (both), but will spend this week calming down to avoid impulse-buying. My attention was also brought to a few keyboards originally not on my list, and I'll take this opportunity to present my thoughts on those too.

The Finalists

ErgoDox

  • Great design, by the looks of it.
  • Mechanical keys.
  • Open source hardware and firmware, thus programmable.
  • Thumb keys.
  • Available as an assembled product, from multiple sources.
  • Primarily a kit, but assembled available.
  • Assembled versions aren't as nice as home-made variants.

The keyboard looks interesting, primarily due to the thumb keys. From the ErgoDox EZ campaign, I'm looking at $270. That's friendly, and makes ErgoDox a viable option! (Thanks @miffe!)

There's also another option, FalbaTech, which ships sooner, I can customize the keyboard to some extent, and Poland is much closer to Hungary than the US. With this option, I'm looking at $205 + shipping, a very low price for what the keyboard has to offer. (Thanks @pkkolos for the suggestion!)

Keyboardio M01

  • Mechanical keyboard.
  • Hardwood body.
  • Blank and dot-only keycaps option.
  • Open source: firmware, hardware, and so on. Comes with a screwdriver.
  • The physical key layout has much in common with my TypeMatrix.
  • Numerous thumb-accessible keys.
  • A palm key, that allows me to use the keyboard as a mouse.
  • Fully programmable LEDs.
  • Custom macros, per-application even.
  • Fairly expensive.
  • Custom keycap design, thus rearranging them physically is not an option, which leaves me with the blank or dot-only keycap options only.
  • Available late summer, 2016.

With shipping cost and whatnot, I'm looking at something in the $370 ballpark, which is on the more expensive side. On the other hand, I get a whole lot of bang for my buck: LEDs, two center bars (tripod mounting sounds really awesome!), hardwood body, and a key layout that is very similar to what I came to love on the TypeMatrix.

I also have a thing for wooden stuff. I like the look of it, the feel of it.

The Verdict

Right now, I'm seriously considering the Model 01, because even if it is about twice the price of the ErgoDox, it also offers a lot more: hardwood body (I love wood), LEDs, palm key. I also prefer the layout of the thumb keys on the Model 01.

The Model 01 also comes pre-assembled and looks stunning, while the ErgoDox pales a little in comparison. I know I could make it look stunning too, but I do not want to build things. I'm not good at it, I don't want to be good at it, I don't want to learn it. I hate putting things together. I'm the kind of guy who needs three tries to put together a set of IKEA shelves, and I'm not exaggerating. I also like the shape of the keys better on the Model 01.

Nevertheless, the ErgoDox is still an option, due to the price. I'd love to buy both, if I could. Which means that once I'm ready to replace my keyboard at work, I will likely buy an ErgoDox. But for home, Model 01 it is, unless something even better comes along before my next pay.

The Kinesis Advantage was also a strong contender, but I ended up removing it from my preferred options, because it doesn't come with blank keys, and is not a split keyboard. And similar to the ErgoDox, I prefer the Model 01's thumb-key layout. Despite all this, I'm very curious about the key wells, and want to try it someday.

Suggested options

Yogitype

Suggested by Andred Carter, a very interesting keyboard with a unique design.

  • Portable, foldable.
  • Active support for forearm and hand.
  • Hands never obstruct the view.
  • Not mechanical.
  • Needs a special inlay.
  • Best used for word processing, programmers may run into limitations.

I like the idea of the keyboard, and if it didn't need a special inlay but used a small screen or something to show the keys, I'd like it even more. Nevertheless, I'm looking for a mechanical keyboard right now, which I can also use for coding.

But I will definitely keep the Yogitype in mind for later!

Matias Ergo Pro

  • Mechanical keys.
  • Simple design.
  • Split keyboard.
  • Doesn't seem to come with a blank keys option, nor in Dvorak.
  • No thumb key area.
  • Neither open source, nor open hardware.
  • I have no need for the dedicated undo, cut, paste keys.
  • Does not appear to be programmable.

This keyboard hardly meets any of my desired properties, and doesn't have anything standing out in comparison with the others. I had a quick look at it when compiling my original list, but it was quickly discarded. Nevertheless, people asked me why, so I'm including my reasoning here:

While it is a split keyboard, with a fairly simple design, it doesn't come in the layout I'd prefer, nor with blank keys. It lacks the thumb key area that ErgoDox and the Model 01 have, and which I developed an affection for.

Microsoft Sculpt Ergonomic Keyboard

  • Numpad is a separate unit.
  • Reverse tilt.
  • Well positioned, big Alt keys.
  • Cheap.
  • Not a split keyboard.
  • Not mechanical.
  • No blank or Dvorak option as far as I see.

This keyboard does not buy me much over my current TypeMatrix 2030. If I were looking for the cheapest option among ergonomic keyboards, this would be my choice. But only because of the price.

Truly Ergonomic Keyboard

  • Mechanical.
  • Detachable palm rest.
  • Programmable firmware.
  • Not a split keyboard.
  • Layouts are virtual only, the printed keycaps stay QWERTY, as far as I see.
  • Terrible navigation key setup.

Two important factors for me are physical layout and splittability. This keyboard fails both. While it is a portable device, that's not a priority for me at this time.

Thomas Goirand: OpenStack Liberty and Debian

23 November, 2015 - 15:30
Long overdue post

It’s been a long time since I’ve written here. And lots of things have happened in the OpenStack planet. As a full-time employee with the mission to package OpenStack in Debian, it feels like it is kind of my duty to tell everyone what’s going on.

Liberty is out, uploaded to Debian

Since my last post, OpenStack Liberty, the 12th release of OpenStack, was released. In late August, Debian was the first platform to include Liberty, as I proudly outran both RDO and Canonical. So I was the first to announce that Liberty passed most of the Tempest tests with the beta 3 release of Liberty (beta 3 is always kind of the first pre-release, as this is when feature freeze happens). Though I never made the announcement that Liberty final was uploaded to Debian, it was done just a single day after the official release.

Before the release, all of Liberty was living in Debian Experimental. Following the upload of the final packages in Experimental, I uploaded all of it to Sid. This represented 102 packages, so it took me about 3 days to do it all.

Tokyo summit

I had the pleasure to be in Tokyo for the Mitaka summit. I was very pleased with the cross-project sessions during the first day. Lots of these sessions were very interesting for me. In fact, I wish I could have attended them all, but of course, I can’t split myself in 3 to follow all of the 3 tracks.

Then there were the 2 sessions about Debian packaging on the upstream OpenStack infra. The goal is to set up the OpenStack upstream infrastructure to allow packaging using Gerrit, and to gate each git commit using the usual tools: building the package and checking there’s no FTBFS, running checks like lintian, piuparts and such. I already knew the overview of what was needed to make it happen. What I didn’t know were the implementation details, which I hoped we could figure out during the 1:30 slot. Unfortunately, this didn’t happen as I expected, and we discussed more general things than I wished. I was told that just reading the docs from the infra team was enough, but in reality, it was not. What currently needs to happen is building a Debian-based image, using disk-image-builder, which would include the usual tools to build packages: git-buildpackage, sbuild, and so on. I’m still stuck at this stage, which would be trivial if I knew a bit more about how the upstream infra works, since I already know how to set up all of that on a local machine.

I’ve been told by Monty Taylor that he would help. Though he’s always a very busy man, and to date, he still hasn’t found enough time to give me a hand. Nobody replied to my request for help on the openstack-dev list either. Hopefully, with a bit of insistence, someone will help.

Keystone migration to Testing (aka: Debian Stretch) blocked by python-repoze.who

Absolutely all of OpenStack Liberty, as of today, has migrated to Stretch. All? No. Keystone is blocked by a chain of dependencies. Keystone depends on python-pysaml2, itself blocked by python-repoze.who. The latter I upgraded to version 2.2, though python-repoze.what depends on version <= 1.9, which is blocking the migration. Since python-repoze.who-plugins, python-repoze.what and python-repoze.what-plugins aren’t used by any package anymore, I asked for them to be removed from Debian (see #805407). Until this request is processed by the FTP masters, Keystone, which is the most important piece of OpenStack (it does the authentication), will be blocked from migrating to Stretch.

New OpenStack server packages available

In my presentation at DebConf 15, I quickly introduced new services which were released upstream. I have since packaged them all:

  • Barbican (Key management as a Service)
  • Congress (Policy as a Service)
  • Magnum (Container as a Service)
  • Manila (Filesystem share as a Service)
  • Mistral (Workflow as a Service)
  • Zaqar (Queuing as a Service)

Congress, unfortunately, was not accepted to Sid yet, because of some licensing issues, especially with the doc of python-pulp. I will correct this (remove the non-free files) and reattempt an upload.

I hope to make them all available in jessie-backports (see below). For the previous release of OpenStack (ie: Kilo), I skipped the uploads of services which I thought were not really critical (like Ironic, Designate and more). But from the feedback of users, they would really like to have them all available. So this time, I will upload them all to the official jessie-backports repository.

Keystone v3 support

For those who don’t know about it, Keystone API v3 means that, on top of users and tenants, there’s a new entity called a “domain”. All of Liberty now comes with Keystone v3 support. This includes the automated Keystone catalog registration done using debconf for all *-api packages. As far as I could tell by running tempest on my CI, everything still works pretty well. In fact, Liberty is, in my experience, the first release of OpenStack to support Keystone API v3.

Uploading Liberty to jessie-backports

I have rebuilt all of Liberty for jessie-backports on my laptop using sbuild. This is more than 150 packages (166 packages currently). It took me about 3 days to rebuild them all, including unit tests run at build time. As soon as #805407 is closed by the FTP masters, all that remains (mostly Keystone) will be available in Stretch, and the upload will be possible. As there will be a lot of NEW packages (from the point of view of backports), I do expect that the approval will take some time. Also, I have to warn the original maintainers of the packages that I don’t maintain (for example, those maintained within the DPMT) that, because of the big number of packages, I will not be able to do the usual communication to tell them that I’m uploading to backports. However, here is the, hopefully exhaustive, list of packages that I will upload to jessie-backports and that I don’t maintain myself. If you see one that you maintain, and you wish to upload the backport yourself, please let me know:

alabaster contextlib2 kazoo python-cachetools python-cffi python-cliff python-crank python-ddt python-docker python-eventlet python-git python-gitdb python-hypothesis python-ldap3 python-mock python-mysqldb python-pathlib python-repoze.who python-setuptools python-smmap python-unicodecsv python-urllib3 requests routes ryu sphinx sqlalchemy turbogears2 unittest2 zzzeeksphinx.

More than ever, I wish I could just upload these to a PPA^W Bikeshed, to minimize the disruption for both the backports FTP masters, other maintainers, and our OpenStack users. Hopefully, Bikesheds will be available soon. I am sorry to give that much approval work to the backports FTP masters, however, using the latest stable system with the latest release, is what most OpenStack users really want to do. All other major distributions have specific repositories too (ie: RDO for CentOS / Red Hat, and cloud archive for Ubuntu), and stable-backports is currently the only place where I can upload support for the Stable release.

Debian listed as supported distribution on openstack.org

Good news! If you look at the list of supported distributions on openstack.org, you will see that, after 6 months of lobbying from my side, Debian is also listed there. I am proud of that. The process of having Debian there included talking with folks from the OpenStack foundation, and having Bdale sign an agreement so that the Debian logo could be reproduced on openstack.org. Thanks to Bdale Garbee, Neil McGovern, Jonathan Brice, and Danny Carreno, without whom this wouldn’t have happened.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.