Planet Debian


Jamie McClelland: Trusted Mobile Device: How hard could it be?

1 September, 2016 - 10:36

I bought a new phone. After my experiences with signal, and the helpful comments readers gave regarding the ability to run android and signal without Google Play using microg, I thought I would give it a shot.

Since microg reports that signature spoofing is required, and signature spoofing comes out-of-the-box with omnirom, I thought I'd aim for installing omnirom's version of Android 6 (Marshmallow) after years of using cyanogenmod's version of Android.

The Nexus line of phones seemed well-supported by omnirom in particular (and the alternative ROM community in general) so I bought a Nexus 5x.

I carefully followed the directions for installing omnirom; however, when it came time to boot into omnirom, I just got the boot sequence animation over and over again.

Frustrated, I decided to go back to cyanogenmod and see if I could use one of the microg-recommended methods for getting signature spoofing to work. The easiest seemed to be Needle by moosd, but alas, no such luck with Marshmallow. Someone else forked the code and might fix it one day. I then spent too much time trying to understand what xposed is before I gave up understanding it and just tried to install it (whoops, it looks like the installer page is out of date, so instead I followed sketchy instructions from a forum thread). To make a long story short, it resulted in a boot loop.

So, I decided to return to omnirom. After reading some vague references to omnirom and supersu, I decided to flash both of them together and voila, it worked!

Next, I decided to enable full disk encryption. Not so fast. After clicking through the screens and hitting the final confirmation, my phone rebooted and spent the next 5 hours showing me the omnirom boot animation. Somehow, powering down and starting again resulted in a working machine, but no disk encryption.

After much web searching, guessing and trial and error, I fixed the problem by clicking on the SuperSU option to "Full unroot" the device (I pressed "no" when prompted to attempt to restore stock image). Then I rebooted and followed the directions to encrypt the device. And it worked! Hooray!

I had to reboot and re-flash the supersu to regain su privileges.

All was great.

The first root action I decided to take was to install the cryptfs program from f-droid because using the same password to decrypt your device as you use to unlock the screen seems either tedious or insecure.

That process didn't work so well. I got a message saying to use this command from a root shell before rebooting: vdc cryptfs changepw <password>. I followed the advice, carefully typing in my 12-character password, which includes numbers and letters.

Then, I happily did what I expected to be my last reboot when, to my horror, I was prompted to decrypt my disk with ... a numeric-only keypad.

That wasn't going to work. At this point I had already spent 5 days and about 8 hours on this project. Sigh. So, I started over.

Guess what? It took only 25 minutes, but it seems that cryptfs is broken. Even with a numeric password it fails. OK, I guess I need a long PIN to unlock my phone. This time it took me only 15 minutes to wipe and re-install everything.

There are only two positive things I can think of:

  • TWRP, which provides the recovery image, is really great. Every time something went wrong I booted into the TWRP recovery image and could fix anything.
  • I'm starting to get used to the error on startup warning me that "Your device is corrupt. It can't be trusted and may not work properly." It's a good thing to remember about all digital devices.

p.s. I haven't even tried to install microg yet... which was the whole point.

Enrico Zini: Links for September 2016

1 September, 2016 - 05:00
A Few Useful Mental Tools from Richard Feynman [archive]

«These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day.»

Pasta [archive]

A comprehensive introduction to pasta, to keep at hand in case I meet someone who has little familiarity with it.

MPTP: One Designer Drug and Serendipity [archive]

Abstract: Through an unlikely series of coincidences and fortunate accidents, the development of Parkinson’s disease in several illicit drug users was traced to their use of a meperidine analog contaminated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). The discovery of a chemical capable of producing animal models of the disease has revitalized research efforts and resulted in important new information. The serendipitous finding also prompted consideration of what changes seem advisable if designer drugs are to be dealt with more efficaciously.

The Debunking Handbook: now freely available for download

The Debunking Handbook, a guide to debunking misinformation, is now freely available to download. Although there is a great deal of psychological research on misinformation, there's no summary of the literature that offers practical guidelines on the most effective ways of reducing the influence of myths.

Faulty neon light jams radio appliances [archive]

«An apparent interference source began plaguing wireless vehicle key fobs, cell phones, and other wireless electronics. Key fob owners found they could not open or start their vehicles remotely until their vehicles were towed at least a block away, nor were they able to call for help on their cell phones when problems occurred»

Calvin & Muad'Dib

Calvin & Hobbes with text taken from Frank Herbert's Dune. It's been around since 2013 and I consistently found it moving and deep.

When Birds Attack - Bike Helmet Hacks [archive]

Australian magpies attacking cyclists have prompted several creative adaptations, including attaching an afro wig to the bike helmet.

Chris Lamb: Free software activities in August 2016

1 September, 2016 - 04:48

Here is my monthly update covering what I have been doing in the free software world (previously):

  • Worked on nsntrace, a userspace tool to perform network traces on processes using kernel namespaces:
    • Overhauled error handling to ensure the return code of the wrapped process is returned to the surrounding environment. (#10).
    • Permit the -u argument to also accept uids as well as usernames. (#16).
    • Always kill the (hard-looping) udp_send utility, even on test failures. (#13).
    • Updated to look for iptables in /sbin & /usr/sbin (#11) and to raise an error if pcap.h is missing (#15).
    • Drop bashisms in #!/bin/sh script (#14) and ignore the generated manpage in the Git repository (#12).
  • Independently discovered a regression in the Django web development framework where field__isnull=False filters were not working with some foreign keys, resulting in extending the testsuite and release documentation. (#7104).
  • Proposed a change to django-enumfield (a custom field for type-safe constants) to ensure passing a string type to Enum.get returned None on error to match the documentation. (#36).
  • Fixed an issue in the Mopidy music player's podcast extension where the testsuite was failing tests in extreme timezones. (#40).
  • Proposed changes to make various upstreams reproducible:
    • botan, a crypto/TLS library for C++11. (#587).
    • cookiecutter, a project template generator, removing nondeterministic keyword arguments from appearing in the documentation. (#800).
    • pyicu, a Python wrapper for the IBM Unicode library. (#27).
  • Integrated a number of issues raised by @piotr1212 to python-fadvise, my Python interface to posix_fadvise(2), where the API was not being applied to open file descriptors (#1) and moving the .so to a module directory (#2).
  • Various improvements to the hosted version of the diffoscope in-depth and content-aware diff utility, including introducing an HTTP API (#21), updating the SSL certificate, and correcting a logic issue where errors in diffoscope itself were not being detected correctly (b0ff49). Continued thanks to Bytemark for sponsoring the hardware.
  • Fixed a bug in django-slack, my library to easily post messages to the Slack group-messaging utility, correcting an EncodeError exception under Python 3 (#53) and updated the minimum required version of Django to 1.7 (#54).
  • Various updates to tickle-me-email, my Getting Things Done-inspired email toolbox, to also match / in IMAP's LIST separators (#6) and to encode the folder list as UTF-7 (#7). Thanks to @resiak.
  • Clarified the documentation for my hosted script to easily test and build Debian packages on the Travis CI continuous integration platform, regarding how to integrate with GitHub (#20).
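One of the items above concerns python-fadvise, an interface to posix_fadvise(2). For reference, Python's standard library has exposed this call directly as os.posix_fadvise since 3.3; a minimal sketch of the sequential-read hint (the temporary file is purely for illustration):

```python
import os
import tempfile

# Create a throwaway file to advise on.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"some data we intend to read sequentially\n")
    path = f.name

# Tell the kernel we will read the whole file sequentially, so it can
# read ahead more aggressively. This is advice, not a guarantee.
fd = os.open(path, os.O_RDONLY)
try:
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 4096)
finally:
    os.close(fd)
    os.unlink(path)

print(data.decode(), end="")
```

A wrapper library mainly adds convenience (applying advice to already-open file objects, as issue #1 above describes) on top of this raw call.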

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most Linux distributions provide binary (or "compiled") packages to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced, either maliciously or accidentally, during this compilation process, by promising that identical binary packages are always generated from a given source.
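That promise can be checked mechanically: independent rebuilds of the same source should yield bit-for-bit identical artifacts, which is typically verified by comparing checksums. A toy sketch of that comparison (not the actual Debian tooling; the byte strings stand in for package contents):

```python
import hashlib

def checksum(artifact: bytes) -> str:
    """Return the SHA-256 hex digest of a build artifact's contents."""
    return hashlib.sha256(artifact).hexdigest()

# Two independent "builds" of the same source: if the build process is
# reproducible, the resulting artifacts are bit-for-bit identical.
build_a = b"\x7fELF...contents of package built on machine A"
build_b = b"\x7fELF...contents of package built on machine A"

reproducible = checksum(build_a) == checksum(build_b)
print("reproducible" if reproducible else "differs")
```

When the digests differ, a tool like diffoscope (below) is what tells you *where* the two artifacts diverge.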

Toolchain issues

I submitted the following patches to fix reproducibility-related toolchain issues:

My work in the Reproducible Builds project was also covered in our weekly reports. (#67, #68, #69, #70).


diffoscope is our "diff on steroids" that will not only recursively unpack archives but will transform binary formats into human-readable forms in order to compare them:

  • Added a command-line interface to the web service.
  • Added a JSON comparator.
  • In the HTML output, highlight lines when hovering to make it easier to visually track.
  • Ensure that we pass str types to our Difference class, otherwise we can't be sure we can render them later.
  • Testsuite improvements:
    • Generate test coverage reports.
    • Add tests for Haskell and GitIndex comparators.
    • Completely refactored all of the comparator tests, extracting out commonly-used routines.
    • Confirm rendering of text and HTML presenters when checking non-existing files.
    • Dropped a squashfs test as it was simply too unreliable and/or had too many requirements to satisfy.
  • A large number of miscellaneous cleanups, including:
    • Reworking the comparator setup/preference internals by dynamically importing classes via a single list.
    • Split exceptions out into dedicated diffoscope.exc module.
    • Tidying the PROVIDERS dict in diffoscope/
    • Use html.escape over xml.sax.saxutils.escape, cgi.escape, etc.
    • Removing hard-coding of manual page targets names in debian/rules.
    • Specify all string format arguments as logging function parameters, not using interpolation.
    • Tidying imports, correcting indentation levels and drop unnecessary whitespace.


disorderfs is our FUSE filesystem that deliberately introduces nondeterminism in system calls such as readdir(3).

  • Added a testsuite to prevent regressions. (f124965)
  • Added a --sort-dirents=yes|no option for forcing deterministic ordering. (2aae325)
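The readdir(3) nondeterminism that disorderfs amplifies is easy to hit from ordinary code: directory listing order is not guaranteed by POSIX, so tools aiming for reproducibility should sort explicitly (which is what an option like --sort-dirents forces at the filesystem level). A small Python illustration:

```python
import os
import tempfile

# Create a directory with a few entries. The order readdir returns them
# in is filesystem-dependent; disorderfs deliberately shuffles it.
d = tempfile.mkdtemp()
for name in ("b.txt", "a.txt", "c.txt"):
    open(os.path.join(d, name), "w").close()

unsorted_listing = os.listdir(d)       # order not guaranteed
deterministic = sorted(os.listdir(d))  # what reproducible tools should use

print(deterministic)
```

Running a build under disorderfs is a cheap way to catch code that silently depends on the unsorted order.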

  • Improved strip-nondeterminism, our tool to remove specific nondeterministic information after a build:
    • Match more styles of Java .properties files.
    • Remove hyphen from "non-determinism" and "non-deterministic" throughout package for consistency.
  • Improvements to our testing infrastructure:
    • Improve the top-level navigation so that we can always get back to "home" of a package.
    • Give expandable elements cursor: pointer CSS styling to highlight they are clickable.
    • Drop various trailing underlined whitespaces after links.
    • Explicitly log that build was successful or not.
    • Various code-quality improvements, including preferring str.format over concatenation.
  • Miscellaneous updates to our filter-packages internal tool:
    • Add --random=N and --url options.
    • Add support for --show=comments.
    • Correct ordering so that --show-version runs after --filter-ftbfs.
    • Rename --show-ftbfs to --filter-ftbfs and --show-version to --show=version.
  • Created a proof-of-concept reproducible-utils package to contain commonly-used snippets aimed at developers wishing to make their packages reproducible.

I also submitted 92 patches to fix specific reproducibility issues in advi, amora-server, apt-cacher-ng, ara, argyll, audiotools, bam, bedtools, binutils-m68hc1x, botan1.10, broccoli, congress, cookiecutter, dacs, dapl, dateutils, ddd, dicom3tools, dispcalgui, dnssec-trigger, echoping, eekboek, emacspeak, eyed3, fdroidserver, flashrom, fntsample, forkstat, gkrellm, gkrellm, gnunet-gtk, handbrake, hardinfo, ircd-irc2, ircd-ircu, jack-audio-connection-kit, jpy, kxmlgui, libbson, libdc0, libdevel-cover-perl, libfm, libpam-ldap, libquvi, librep, lilyterm, mozvoikko, mp4h, mp4v2, myghty, n2n, nagios-nrpe, nikwi, nmh, nsnake, openhackware, pd-pdstring, phpab, phpdox, phpldapadmin, pixelmed-codec, pleiades, pybit, pygtksourceview, pyicu, python-attrs, python-gflags, quvi, radare2, rc, rest2web, roaraudio, rt-extension-customfieldsonupdate, ruby-compass, ruby-pg, sheepdog, tf5, ttf-tiresias, ttf-tiresias, tuxpaint, tuxpaint-config, twitter-bootstrap3, udpcast, uhub, valknut, varnish, vips, vit, wims, winswitch, wmweather+ & xshisen.

Debian GNU/Linux

Patches contributed

I also submitted 22 patches to fix typos in debian/rules files against ctsim, f2c, fonts-elusive-icons, ifrit, ldapscripts, libss7, libvmime, link-grammar, menulibre, mit-scheme, mugshot, nlopt, nunit, proftpd-mod-autohost, proftpd-mod-clamav, rabbyt, radvd, ruby-image-science, snmpsim, speech-tools, varscan & whatmaps.

Debian LTS

This month I have been paid to work 15 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Authored the patch & issued DLA 596-1 for extplorer, a web-based file manager, fixing an archive traversal exploit.
  • Issued DLA 598-1 for suckless-tools, fixing a segmentation fault in the slock screen locking tool.
  • Issued DLA 599-1 for cracklib2, a pro-active password checker library, fixing a stack-based buffer overflow when parsing large GECOS fields.
  • Improved the find-work internal tool adding optional colour highlighting and migrating it to Python 3.
  • Wrote an lts-missing-uploads tool to find mistakes where there was no corresponding package in the archive after an announcement.
  • Added optional colour highlighting to the lts-cve-triage tool.
  • redis 2:3.2.3-1 — New upstream release, move to the DEP-5 debian/copyright format, ensure that we are running as root in LSB initscripts and add a README.Source regarding our local copies of redis.conf and sentinel.conf.
  • python-django:
    • 1:1.10-1 — New upstream release.
    • 1:1.10-2 — Fix test failures due to mishandled upstream translation updates.

  • gunicorn:
    • 19.6.0-2 — Reload logrotate in the postrotate action to avoid processes writing to the old files and move to DEP-5 debian/copyright format.
    • 19.6.0-3 — Drop our /usr/sbin/gunicorn{,3}-debian and related Debian-specific machinery to be more like upstream.
    • 19.6.0-4 — Drop "template" systemd .service files and point towards examples and documentation instead.

  • adminer:
    • 4.2.5-1 — Take over package maintenance, completely overhauling the packaging with a new upstream version, move to virtual-mysql-server to support MariaDB, updating package names of dependencies and fix the outdated Apache configuration.
    • 4.2.5-2 — Correct the php5 package names.

Bugs filed (without patches)

RC bugs

I filed 3 RC bugs with patches:

I additionally filed 8 RC bugs for packages that access the internet during build against autopkgtest, golang-github-xenolf-lego, pam-python, pexpect, python-certbot, python-glanceclient, python-pykka & python-tornado.

I also filed 74 FTBFS bugs against airlift-airline, airlift-slice, alter-sequence-alignment, apktool, atril, auto-apt-proxy, bookkeeper, bristol, btfs, caja-extensions, ccbuild, cinder, clustalo, colorhug-client, cpp-netlib, dimbl, edk2, elasticsearch, ganv, git-remote-hg, golang-codegangsta-cli, golang-goyaml, gr-radar, imagevis3d, jacktrip, jalv, kdepim, kiriki, konversation, libabw, libcereal, libdancer-plugin-database-perl, libdist-zilla-plugins-cjm-perl, libfreemarker-java, libgraph-writer-dsm-perl, libmail-gnupg-perl, libminc, libsmi, linthesia, lv2-c++-tools, lvtk, mate-power-manager, mcmcpack, mopidy-podcast, nageru, nfstrace, nova, nurpawiki, open-gram, php-crypt-gpg, picmi, projectl, pygpgme, python-apt, python-django-bootstrap-form, python-django-navtag, python-oslo.config, qmmp, qsapecng, r-cran-sem, rocs, ruby-mini-magick, seahorse-nautilus, shiro, snap, tcpcopy, tiledarray, triggerhappy, ucto, urdfdom, vmmlib, yara-python, yi & z3.

FTP Team

As a Debian FTP assistant I ACCEPTed 90 packages: android-platform-external-jsilver, android-platform-frameworks-data-binding, camlpdf, consolation, dfwinreg, diffoscope, django-restricted-resource, django-testproject, django-testscenarios, gitlab-ci-multi-runner, gnome-shell-extension-taskbar, golang-github-flynn-archive-go-shlex, golang-github-jamesclonk-vultr, golang-github-weppos-dnsimple-go, golang-golang-x-time, google-android-ndk-installer, haskell-expiring-cache-map, haskell-hclip, haskell-hdbc-session, haskell-microlens-ghc, haskell-names-th, haskell-persistable-record, haskell-should-not-typecheck, haskell-soap, haskell-soap-tls, haskell-th-reify-compat, haskell-with-location, haskell-wreq, kbtin, libclipboard-perl, libgtk3-simplelist-perl, libjs-jquery-selectize.js, liblemon, libplack-middleware-header-perl, libreoffice, libreswan, libtest-deep-json-perl, libtest-timer-perl, linux, linux-signed, live-tasks, llvm-toolchain-3.8, llvm-toolchain-snapshot, lua-luv, lua-torch-image, lua-torch-nn, magic-wormhole, mini-buildd, ncbi-vdb, node-ast-util, node-es6-module-transpiler, node-es6-promise, node-inline-source-map, node-number-is-nan, node-object-assign, nvidia-graphics-drivers, openhft-chronicle-bytes, openhft-chronicle-core, openhft-chronicle-network, openhft-chronicle-threads, openhft-chronicle-wire, pycodestyle, python-aptly, python-atomicwrites, python-click-log, python-django-casclient, python-git-os-job, python-hypothesis, python-nosehtmloutput, python-overpy, python-parsel, python-prov, python-py, python-schema, python-tackerclient, python-tornado, pyvo, r-cran-cairo, r-cran-mi, r-cran-rcppgsl, r-cran-sem, ruby-curses, ruby-fog-rackspace, ruby-mixlib-archive, ruby-tzinfo-data, salt-formula-swift, scapy3k, self-destructing-cookies, trollius-redis & websploit.

Michal Čihař: Weblate 2.8

31 August, 2016 - 16:30

Quite on schedule (just one day later), Weblate 2.8 is out today. This release brings Subversion support and an improved zen mode.

Full list of changes:

  • Documentation improvements.
  • Translations.
  • Updated bundled javascript libraries.
  • Added list_translators management command.
  • Django 1.8 is no longer supported.
  • Fixed compatibility with Django 1.10.
  • Added Subversion support.
  • Separated XML validity check from XML mismatched tags.
  • Fixed API to honor HIDE_REPO_CREDENTIALS settings.
  • Show source change in zen mode.
  • Alt+PageUp/PageDown/Home/End now works in zen mode as well.
  • Add tooltip showing exact time of changes.
  • Add option to select filters and search from translation page.
  • Added UI for translation removal.
  • Improved behavior when inserting placeables.
  • Fixed auto locking issues in zen mode.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also used as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English Gammu phpMyAdmin SUSE Weblate

Joey Hess: late summer

31 August, 2016 - 08:15

With days beginning to shorten toward fall, my house is in initial power saving mode. Particularly, the internet gateway is powered off overnight. Still running electric lights until bedtime, and still using the inverter and other power without much conservation during the day.

Indeed, I had two laptops running cpu-melting keysafe benchmarks for much of today, and one of them had to charge up from empty too. That's why the house power is a little low, at 11.0 volts now, despite over 30 amp-hours of charge having been produced on this mostly clear day. (The 1-week average is 18.7 amp-hours.)

September/October is the tricky time where it's easy to fall off a battery depletion cliff and be stuck digging out for a long time. So time to start dusting off the conservation habits after summer's excess.

I think this is the first time I've mentioned any details of living off grid with a bare minimum of PV capacity in over 4 years. The solar tag has a lot of older posts about it, and I'm going to note down the typical milestones and events over the next 8 months.

Mike Gabriel: credential-sheets: User Account Credential Sheets Tool

31 August, 2016 - 03:05
Preface

This little piece of work has been pending on my todo list for about two years now. For our local school project "IT-Zukunft Schule" I wrote the little tool credential-sheets. It is a little Perl script that turns a series of import files (CSV format), as they have to be provided for user mass import into GOsa² (i.e. LDAP), into a series of A4 sheets with little cards on them containing initial user credential information. The upstream sources are on Github and I have just uploaded this little tool to Debian.

Introduction

After mass import of user accounts (e.g. into LDAP), most site administrators have to create information sheets (or snippets) containing those new credentials (like username, password, policy of usage, etc.). With this tiny tool, providing these pieces of information to multiple users becomes really simple. Account data is taken from a CSV file and the sheets are output as PDF using easily configurable LaTeX template files.

Usage

Synopsis: credential-sheets [options] <CSV-file-1> [<CSV-file-2> [...]]

Options

The credential-sheets command accepts the following command-line options:
   --help Display a help with all available command line options and exit.

          Name of the template to use.

          Render <x> columns per sheet.

          Render <y> rows per sheet.

   --zip  Do create a ZIP file at the end.

          Alternative ZIP file name (default: name of parent folder).

          Don't remove temporary files.
CSV File Column Arrangement

The credential-sheets tool can handle any sort of column arrangement in the given CSV file(s). It expects the CSV file(s) to have column names in their first line. The given column names have to map to the VAR-<column-name> placeholders in credential-sheets's LaTeX templates. The shipped-with templates (students, teachers) can handle these column names:
  • login -- The user account's login id (uid)
  • lastName -- The user's last name(s)
  • firstName -- The user's first name(s)
  • password -- The user's password
  • form -- The form name/ID (student template only)
  • subjects -- A list of subjects taught by a teacher (teacher template only)
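As a hypothetical illustration of the column conventions above, an input CSV for the students template could be generated like this (the account data is invented; only the header names matter to the tool):

```python
import csv
import io

# Column names must match the VAR-<column-name> placeholders of the
# students template: login, lastName, firstName, password, form.
fieldnames = ["login", "lastName", "firstName", "password", "form"]
rows = [
    {"login": "jdoe", "lastName": "Doe", "firstName": "Jane",
     "password": "s3cret", "form": "7b"},
    {"login": "mmus", "lastName": "Mustermann", "firstName": "Max",
     "password": "pa55word", "form": "7b"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()  # first line carries the column names, as required
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text, end="")
```

Saving this as, say, students.csv and passing it to credential-sheets would then render one credential card per row.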
If you create your own templates, you can be very flexible in using your own column names and template names. Only make sure that the column names provided in the CSV file(s)'s first line match the variables used in the customized LaTeX template.

Customizations

The shipped-with credential sheets templates are expected to be installed in /usr/share/credential-sheets/ for system-wide installations. When customizing templates, simply place a modified copy of any of those files into ~/.credential-sheets/ or /etc/credential-sheets/. For further details, see below. The credential-sheets tool uses these configuration files:
  • header.tex (LaTeX file header)
  • <tpl-name>-template.tex (where as <tpl-name> students and teachers is provided on default installations, this is extensible by defining your own template files, see below).
  • footer.tex (LaTeX file footer)
Search paths for configuration files (in listed order):
  • $HOME/.credential-sheets/
  • ./
  • /etc/credential-sheets/
  • /usr/local/share/credential-sheets/
  • /usr/share/credential-sheets/
You can easily customize the resulting PDF files generated with this tool by placing your own template files, header and footer where appropriate.

Dependencies

This project requires the following dependencies:
  • Text::CSV Perl module
  • Archive::Zip Perl module
  • texlive-latex-base
  • texlive-fonts-extra
Copyright and License

Copyright © 2012-2016, Mike Gabriel <>. Licensed under GPL-2+ (see COPYING file).

Daniel Stender: My work for Debian in August

31 August, 2016 - 00:42

Here again is a little list of my humble off-time contributions, which I'm happy to add to the large amount of work we're completing all together each month. Then there is one more "new in Debian" (meaning: "new in unstable") announcement. First, the uploads (a few of them are from July):

  • afl/2.21b-1
  • djvusmooth/0.2.17-1
  • python-bcrypt/3.1.0-1
  • python-latexcodec/1.0.3-4 (closed #830604)
  • pylint-celery/0.3-2 (closed #832826)
  • afl/2.28b-1 (closed #828178)
  • python-afl/0.5.4-1
  • vulture/0.10-1
  • afl/2.30b-1
  • prospector/0.12.2-1
  • pyinfra/0.1.1-1
  • python-afl/0.5.4-2 (fix of elinks_dump_varies_output_with_locale)
  • httpbin/0.5.0-1
  • python-afl/0.5.5-1 (closed #833675)
  • pyinfra/0.1.2-1
  • afl/2.33b-1 (experimental, build/run on llvm 3.8)
  • pylint-flask/0.3-2 (closed #835601)
  • python-djvulibre/0.8-1
  • pylint-flask/0.5-1
  • pytest-localserver/0.3.6-1
  • afl/2.33b-2
  • afl/2.33b-3

New packages:

  • keras/1.0.7-1 (initial packaging into experimental)
  • lasagne/0.1+git20160728.8b66737-1

Sponsored uploads:

  • squirrel3/3.1-4 (closed #831210)

Requested or suggested for packaging:

  • yapf: Python code formatter
  • spacy: industrial-strength natural language processing for Python
  • ralph: asset management and DCIM tool for data centers
  • pytest-cookies: Pytest plugin for testing Cookiecutter templates
  • blocks: another deep learning framework built on top of Theano
  • fuel: data provider for Blocks and Python DNN frameworks in general
New in Debian: Lasagne (deep learning framework)

Now that the mathematical expression compiler Theano is available in Debian, deep learning frameworks and toolkits which have been built on top of it can become available within Debian, too (like Blocks, mentioned before). Theano is a general computing engine developed with a focus on machine learning and neural networks, featuring its own declarative tensor language. The toolkits built upon it vary in how much they abstract the bare features of Theano, whether they are "thick" or "thin", so to say. With higher abstraction you gain more end-user convenience, up to the level that the architectural components of neural networks are available for combination like in a Lego box, while the more complicated things going on "under the hood" (like how the networks are actually implemented) are hidden. The downside is that thick abstraction layers usually make it difficult to implement novel features like custom layers or loss functions. So more experienced users and specialists might seek out the lower-abstraction toolkits, where you have to think more in terms of Theano.

I've got an initial package of Keras in experimental (1.0.7-1); it runs (only a Python 3 package is available so far) but needs some more work (e.g. building the documentation with mkdocs). Keras is a minimalistic, highly modular DNN library inspired by Torch1. It has a clean, rather easy API for experimenting and fast prototyping. It can also run on top of Google's TensorFlow, and we're going to have it ready for that, too.

Lasagne follows a different approach. Like Keras and Blocks, it is a Python library to create and train multi-layered artificial neural networks in/on Theano for applications like image recognition and classification, speech recognition, image caption generation, or other purposes like style transfer from paintings to pictures2. It abstracts Theano as little as possible, and could be seen as an extension or an add-on rather than an abstraction3. Therefore, knowledge of how things work in Theano is needed to make full use of this piece of software.

With the new Debian package (0.1+git20160728.8b66737-1)4, the whole required software stack (the corresponding Theano package, NumPy, SciPy, a BLAS implementation, and the nvidia-cuda-toolkit and NVIDIA kernel driver to carry out computations on the GPU5) can be installed most conveniently by a single apt-get install python{,3}-lasagne command6. If wanted, the documentation package lasagne-doc for offline use (no running around at remote airports looking for a WiFi spot) can be added as well, either for the Python 2 or the Python 3 branch, or both flavours altogether7. While others have to spend a whole weekend gathering, compiling and installing the needed libraries, you can grab yourself a fresh cup of coffee. These are the advantages of a fully integrated system (subliminal message, as always: desktop users, switch to Linux!).

When the installation of packages has completed, the MNIST example of Lasagne can be used for a quick check whether the whole library stack works properly8:

$ THEANO_FLAGS=device=gpu,floatX=float32 python /usr/share/doc/python-lasagne/examples/ mlp 5
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN 5005)
Loading data...
Downloading train-images-idx3-ubyte.gz
Downloading train-labels-idx1-ubyte.gz
Downloading t10k-images-idx3-ubyte.gz
Downloading t10k-labels-idx1-ubyte.gz
Building model and compiling functions...
Starting training...
Epoch 1 of 5 took 2.488s
  training loss:        1.217167
  validation loss:      0.407390
  validation accuracy:      88.79 %
Epoch 2 of 5 took 2.460s
  training loss:        0.568058
  validation loss:      0.306875
  validation accuracy:      91.31 %

The example on how to train a neural network on the MNIST database of handwritten digits is refined (it also provides --help) and explained in detail in the Tutorial section of the documentation in /usr/share/doc/lasagne-doc/html/. Very good starting points are also the IPython notebooks that are available from the tutorials by Eben Olson9 and Geoffrey French on the PyData London 201610. There you have Theano basics, examples for employing convolutional neural networks (CNN) and recurrent neural networks (RNN) for a range of different purposes, how to use pre-trained networks for image recognition, etc.

  1. For a quick comparison of Keras and Lasagne with other toolkits, see Alex Rubinsteyn's PyData NYC 2015 presentation on using LSTM (long short term memory) networks on varying length sequence data like Grimm's fairy tales ( 27:30 sq.) 


  3. Great introduction to Theano and Lasagne by Eben Olson on the PyData NYC 2015: 

  4. The package is currently "freelancing" in collab-maint; setting up a deep learning packaging team within Debian is at the stage of discussion.

  5. Only available for amd64 and ppc64el. 

  6. At present you would need "testing" as a package source in /etc/apt/sources.list to install it from the archive (I have run that for years, but whether Debian testing can be recommended as a production system is a discussion for elsewhere); it is coming up in Debian 9. The cuda-toolkit and pycuda are in the non-free section of the archive, so non-free (usually in combination with contrib) must be added alongside main. Also, the CUDA packages are only a suggestion of the Theano packages (to keep Theano itself in main), so --install-suggests is needed to pull them automatically with the same command, or they must be installed explicitly. 

  7. For dealing with Theano in Debian, see this previous blog posting 

  8. As suggested in the guideline From Zero to Lasagne on Ubuntu 14.04. cuDNN isn't available as an official Debian package yet, but it can be downloaded as a .deb package after registration. It integrates well out of the box. 


  10., video: 

Christoph Egger: DANE and DNSSEC Monitoring

31 August, 2016 - 00:11

At this year's FrOSCon I repeated my presentation on DNSSEC. In the audience, there was a suggestion that proper, easily available monitoring plugins for a DANE and DNSSEC infrastructure were lacking. As I already had some personal tools around and some spare time to burn, I've just started a repository with some useful tools. It's available on my website and has mirrors on GitLab and GitHub. I intend to keep this repository up-to-date with my personal requirements (which also means adding an XMPP check soon) and am happy to take any contributions (either by mail or as "pull requests" on one of the two mirrors). It currently has smtp (both ssmtp and starttls) and https support as well as support for checking valid DNSSEC configuration of a zone.
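The core of such a DANE check is comparing the server's certificate against the TLSA record published in DNS. A minimal sketch of that comparison (covering only the RFC 6698 matching types; the usage and selector fields of a real TLSA record, and the DNS lookup itself, are omitted for brevity):

```python
import hashlib

def tlsa_matches(cert_data, tlsa_hex, matching_type=1):
    """Compare certificate (or SPKI) bytes against TLSA association data.

    matching_type per RFC 6698: 0 = exact match, 1 = SHA-256, 2 = SHA-512.
    """
    if matching_type == 0:
        digest = cert_data.hex()
    elif matching_type == 1:
        digest = hashlib.sha256(cert_data).hexdigest()
    elif matching_type == 2:
        digest = hashlib.sha512(cert_data).hexdigest()
    else:
        raise ValueError("unknown TLSA matching type: %d" % matching_type)
    return digest == tlsa_hex.lower()
```

A real plugin would fetch the certificate over smtp/https, fetch and DNSSEC-validate the TLSA record, and then perform exactly this comparison.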

While working on it, it turned out that some things can be complicated. My language of choice was Python 3 (if only because the ssl library has improved a lot since 2.7), however ldns and unbound in Debian lack python3 support in their bindings. This seems fixable, as the source in Debian builds and works with python3, so it just needs packaging adjustments. Funnily enough, the ldns module (which is only needed for check_dnssec) is currently buggy in Debian for both python2 and python3, and ldns' python3 support is somewhat lacking, so I spent several hours hunting SWIG problems.

Rhonda D'Vine: Thomas D

30 August, 2016 - 23:12

It's not often that an artist touches you deeply, but Thomas D managed to do so, to the point that I am (only half) jokingly saying that if there were a church of Thomas D I would absolutely join it. His lyrics always stood out for me within the band through which I found out about him, and the way he lives his life is definitely outstanding. And additionally there are these special songs that give so much and share a lot. I feel sorry for the people who don't understand German and so can't fully appreciate him.

Here are three songs that I suggest you listen to closely:

  • Fluss: This song gave me a lot of strength in a difficult time of my life. And it still works wonders, when I feel down, to get my ass up off the floor again.
  • Gebet an den Planeten: This song gives me shivers. Let the lyrics touch you. And take the time to think about them.
  • An alle Hinterbliebenen: This song might be a bit difficult to deal with. It's about loss and how to deal with suffering.

Like always, enjoy!

/music | permanent link | Comments: 0 | Flattr this

Joachim Breitner: Explicit vertical alignment in Haskell

30 August, 2016 - 20:35

Chris Done’s automatic Haskell formatter hindent has been released in a new version and is getting quite a bit of deserved attention. He is polling Haskell programmers on whether two or four spaces are the right indentation. But that is just cosmetics…

I am in principle very much in favor of automatic formatting, and I hope that a tool like hindent will eventually be better at formatting code than a human.

But it currently is not there yet. Code is literature meant to be read, and good code goes to great lengths to be easily readable; formatting can carry semantic information.

The Haskell syntax was (at least I get that impression) designed to allow authors to write nice-looking, easy-to-understand code. One important tool here is vertical alignment of corresponding concepts on different lines. Compare

maze :: Integer -> Integer -> Integer
maze x y
| abs x > 4  || abs y > 4  = 0
| abs x == 4 || abs y == 4 = 1
| x ==  2    && y <= 0     = 1
| x ==  3    && y <= 0     = 3
| x >= -2    && y == 0     = 4
| otherwise                = 2


maze :: Integer -> Integer -> Integer
maze x y
| abs x > 4 || abs y > 4 = 0
| abs x == 4 || abs y == 4 = 1
| x == 2 && y <= 0 = 1
| x == 3 && y <= 0 = 3
| x >= -2 && y == 0 = 4
| otherwise = 2

The former is a quick-to-grasp specification; the latter (the output of hindent at the moment) is a desert of numbers and operators.

I see two ways forward:

  • Tools like hindent get improved to the point that they are able to detect such patterns, and indent it properly (which would be great, but very tricky, and probably never complete) or
  • We give the user a way to indicate intentional alignment in a non-obtrusive way that gets detected and preserved by the tool.

What could such ways be?

  • For guards, it could simply detect that within one function definition there are multiple | on the same column, and keep them aligned.
  • More generally, one could take the approach of lhs2TeX (which, IMHO, with careful input, a proportional font and the great polytable LaTeX backend, produces the most pleasing code listings). There, two or more spaces indicate an alignment point, and if two such alignment points are in the same column, their alignment is preserved – even if there are lines in between!

    With the latter approach, the code up there would be written

    maze :: Integer -> Integer -> Integer
    maze x y
    | abs x > 4   ||  abs y > 4   = 0
    | abs x == 4  ||  abs y == 4  = 1
    | x ==  2     &&  y <= 0      = 1
    | x ==  3     &&  y <= 0      = 3
    | x >= -2     &&  y == 0      = 4
    | otherwise                   = 2

    And now the intended alignment is explicit.
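Detecting such alignment points mechanically is not hard; a small sketch (in Python, for neutrality) of the two-or-more-spaces rule:

```python
def alignment_points(line):
    """Columns where a token starts after a run of two or more spaces."""
    points, spaces = [], 0
    for col, ch in enumerate(line):
        if ch == " ":
            spaces += 1
        else:
            if spaces >= 2:
                points.append(col)
            spaces = 0
    return points

# Two guard lines from the example above share all their alignment points,
# so a formatter could detect and preserve the intended columns.
a = "| abs x > 4   ||  abs y > 4   = 0"
b = "| abs x == 4  ||  abs y == 4  = 1"
shared = sorted(set(alignment_points(a)) & set(alignment_points(b)))
print(shared)
```

A formatter would then treat the shared columns as hard tab stops when re-emitting the lines, instead of collapsing the whitespace.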

(This post is cross-posted on reddit.)

Petter Reinholdtsen: First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public

30 August, 2016 - 15:10

In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administer a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it on the Get the Debian Administrator's Handbook page (under Other languages). The first eight chapters have a first-draft translation, and we are working on proofreading the content. If you want to help out, please start contributing via the hosted Weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update Weblate if you find errors.

Our goal is still to make the Norwegian book available on paper as well as electronic form.

Dirk Eddelbuettel: RProtoBuf 0.4.5: now with protobuf v2 and v3!

30 August, 2016 - 09:55

A few short weeks after the 0.4.4 release of RProtoBuf, we are happy to announce a new version 0.4.5 which appeared on CRAN earlier today.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release brings support for the recently released 'version 3' of the Protocol Buffers standard, used e.g. by the (very exciting) gRPC project (which was just released as version 1.0). RProtoBuf continues to support 'version 2' but now also cleanly supports 'version 3'.

Changes in RProtoBuf version 0.4.5 (2016-08-29)
  • Support for version 3 of the Protocol Buffers API

  • Added 'syntax = "proto2";' to all proto files (PR #17)

  • Updated Travis CI script to test against both versions 2 and 3 using custom-built .deb packages of version 3 (PR #16)

  • Improved build system with support for custom CXXFLAGS (Craig Radcliffe in PR #15)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David Moreno: Webhook Setup with Facebook::Messenger::Bot

30 August, 2016 - 01:49

The documentation for the Facebook Messenger API points out how to set up your initial bot webhook. I just committed a quick patch that makes it very easy to set up a quick script to get it done, using the unreleased and still-in-progress Perl module Facebook::Messenger::Bot:

use Facebook::Messenger::Bot;

use constant VERIFY_TOKEN => 'imsosecret';

my $bot = Facebook::Messenger::Bot->new(); # no config specified!
$bot->expect_verify_token( VERIFY_TOKEN );

This should get you sorted. What endpoint would that be, though? Well that depends on how you’re giving Facebook access to your Plack’s .psgi application.
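Under the hood, the verification endpoint only has to implement a tiny handshake: Facebook sends a GET request carrying hub.mode, hub.verify_token and hub.challenge parameters, and your endpoint must echo the challenge back if the token matches. A framework-agnostic sketch of that logic (function and parameter handling are mine for illustration, not part of the Perl module):

```python
def webhook_verification(params, expected_token):
    """Return the challenge string to echo back (HTTP 200), or None (HTTP 403).

    `params` is the parsed query string of Facebook's verification GET request.
    """
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == expected_token):
        return params.get("hub.challenge")
    return None
```

Whatever Plack route serves your webhook just needs to wire this check into its GET handler; POST requests on the same endpoint then carry the actual messaging events.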

Michal Čihař: motranslator 1.1

29 August, 2016 - 23:00

Four months after the 1.0 release, motranslator 1.1 is out. If you happen to use it for untrusted data, this might as well be called a security release, though doing so is still not a good idea until we remove the use of eval() to evaluate the plural formula.
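For comparison, Python's standard library handles the same problem with a restricted parser instead of a bare eval(): gettext.c2py() accepts only the operators and the variable n that a gettext plural formula may contain, and rejects anything else. A quick illustration:

```python
from gettext import c2py

# Parse the common "Germanic" plural rule; c2py returns a function of n
# that yields the index of the plural form to use.
plural = c2py("n != 1")
print(plural(1), plural(2))

# Arbitrary code is rejected by the parser rather than evaluated.
try:
    c2py("__import__('os').system('true')")
    dangerous_accepted = True
except ValueError:
    dangerous_accepted = False
```

Replacing motranslator's eval() with a parser of this kind is exactly the kind of change that would make the library safe for untrusted catalogs.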

Full list of changes:

  • Improved handling of corrupted mo files
  • Minor performance improvements
  • Stricter validation of plural expression

motranslator is a translation library used in the current phpMyAdmin master (the upcoming 4.7.0), with a focus on speed and memory usage. It uses Gettext MO files to load the translations. It also comes with a test suite (100% coverage) and basic documentation.

The recommended way to install it is using Composer from the Packagist repository:

composer require phpmyadmin/motranslator

The Debian package will probably become available around the time phpMyAdmin 4.7.0 is out, but if you need it earlier, just let me know.

Filed under: Debian English phpMyAdmin | 0 comments

Zlatan Todorić: Support open source motion comic

29 August, 2016 - 20:25

There is an ongoing campaign for a motion comic. It will be done entirely with FLOSS tools (Blender, Krita, GNU/Linux) and, besides that, it really looks great (and no, it is not only for kids!). Please support this effort if you can, because it also shows the power of Free Software tools. Everything will be released under the Creative Commons Attribution-ShareAlike license, together with all sources.

Michal Čihař: Improving phpMyAdmin Docker container

29 August, 2016 - 15:00

Since I created the phpMyAdmin container for Docker, I've always felt strange about using PHP's built-in web server there. It really made it a poor choice for any production setup and was probably causing a lot of the problems users saw with this container. Over the weekend, I changed it to use a more complex setup with Supervisor, nginx and PHP-FPM.

As building this container was one of my first experiences with Docker (together with the Weblate container), it was not as straightforward as I'd hoped for, but in the end it seems to be working just fine. While touching the code, I've also improved testing of the Docker container to test all supported setups and to report better in case of test failures.

The nice side effect of this is that the PHP code is no longer being executed under root in the container, so that should make it more sane for production use as well (honestly I never liked this approach that almost everything is executed as root in Docker containers).

Filed under: Debian English phpMyAdmin | 0 comments

Gergely Nagy: Recruitment mistakes: part 3

29 August, 2016 - 15:00

It has been a while since I was last contacted by a recruiter, and the last few contacts were fairly decent conversations, where they made an effort to research me first, and even if they did not get everything right, they still listened, and we had a productive talk. But four days ago, another recruiter reached out to me, from a company I know oh so well, one I ranted about before: Google. Apparently, their recruiters still do carpet-bombing style outreach. My first thought was "what took them so long?" - it has been five years since my last contact with a Google recruiter. I almost started missing them. Almost. To think that Google is now powerful enough to read my mind is scary. Yet, I believe, this is not the case; rather, it's just another embarrassing mistake.

To make my case, let me quote the full e-mail I was sent, with the name of the sender redacted, and my comments - which I'm also sending to the recruiter:

Hi Gergely,


How are you? Hope you're well.

Thank you, I'm fine, just back from vacation, and I was thrilled to read your e-mail. Although, I did find it surprising too, and considering past events, I thought it to be spam first.

My name is XXX and I am recruiting for the Google Engineering team. I have just found your details on Github...

I'm happy that you found me through GitHub, but I'm curious why you mailed me at my address then? That address is not on my GitHub profile, and even though I have some code signed with that address, that's not what I use normally. My GitHub profile also links to a page I created especially for recruiters, which I don't think you have read - but more on that below.

...and your experience with software development combined with your open source contributions is particularly relevant to Google

Which part of my recent contributions are relevant to Google? For the past few months, the vast majority (over 90%) of my open source work has been on keyboard firmware. If Google is developing a keyboard, then this may be relevant, otherwise, I find it doubtful.

Some of my past OSS contributions may be more relevant, but that's in the past, and it would take some digging to see those from GitHub. And if you did that kind of digging, you would have found the page on my site for recruiters, and would not have e-mailed me at my Debian address, either.

We are always interested in talking to top engineers with your mix of skills and I was wondering if you are at all open to exploring roles with Google in EMEA.

To make things short, my stance is the same as it was five years ago, when I wrote - and I quote - "So, here's a public request: do not email me ever again about job opportunities within Google. I do not wish to work for Google. Not now, not tomorrow, not ever."

This still stands. Please do not ever e-mail me about job opportunities within Google. I do not wish to work for Google, not now, not tomorrow, not five years from now, not ever. This will not change. Many things may change, this however, will not.

But even if I ignore this, let me ask: which mix of skills are you exactly interested in? Keyboard firmware hacking, mixed with Emacs Lisp, some Clojure, and hacking on Hy (which, if you have explored my GitHub profile, you will surely know about) from time to time? Or is it my Debian Developer hat that got your interest? If so, why didn't you say so? Not that it would change anything, but I'm curious.

Is it my participation in 24pullrequests? Or my participation in GSoC in years past?

Nevertheless, the list of conditions I mentioned above applies. Is Google able to fulfill all my requirements, and the preferences too? Last time I heard, working for Google required one to relocate, which, as I clearly said on that page, I'm not willing to do.

The teams I recruit for are responsible for Google's planet-scale systems. Just to give you more of an idea, some of the projects we work on that might be of interest to you would include:

  • MapReduce
  • App Engine
  • Mesa
  • Maglev

And how would these be interesting to me, considering my recent OSS work? (Hint: none of them interest me, not a bit. Ok perhaps MapReduce, a little.)

I think you could be a great fit here, where you can develop a great career and at the same time you will be part of the most mission critical team, designing and developing systems which run Google Search, Gmail, YouTube, Google+ and many more as you can imagine.

My career is already developing fine, thank you, I do not need Google to do that. I am already working on things I consider important and interesting, if I wanted to change, I would. But I certainly would not consider a company which I had asked numerous times NOT to contact me about opportunities. If you can't respect this wish, and forget about this in mere five years, why would I trust you to keep any other promises you make now?

Thank you so much for your time and I look forward to hearing from you.

I wish you had spent as much time researching me - or even half that - as I have spent replying to you. I suppose this is not the reply you were expecting, or looking for, but this is the only one I'll ever give to anyone from Google.

And here, at the end, if you read this far, I'm asking you, and everyone at Google, to never contact me about job opportunities. I will not work for Google. Not now, not tomorrow, not five years from now, not ever. Please do not e-mail me again, and do not reply to this e-mail either. I'm interested in neither apologies nor promises that you won't contact me - just simply don't do it.

Thank you.

Russell Coker: Monitoring of Monitoring

29 August, 2016 - 09:23

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier, SATA is designed for hot-swap and even systems that don’t support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who’s not a sysadmin SATA provides more benefits over IDE than for some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

Doing it Properly

Ever since RAID was introduced, there has never been an excuse for keeping important data on a single disk on its own. Linux Software RAID didn't support online rebuild back when 10G was a large disk, but since the late 90s it has worked well and there's no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.

Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1, and both disks developed errors during one hot summer. I wrote a program that’s like ddrescue but which would read from the second disk if the first gave a read error, and ended up not losing any important data AFAIK. BTRFS has some potential benefits for recovering from such situations, but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user monitoring it themselves. Since I started using mon (which is very easy to set up) it has notified me of some problems with my laptop that I wouldn’t otherwise have noticed. I think that ideally desktop systems should have monitoring of disk space, temperature, and certain critical daemons that need to be running but whose crashes the user wouldn’t immediately notice (such as cron and syslogd).
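As an illustration of how little code such a check needs, here is a minimal disk-space probe of the kind a mon-style monitor would run periodically (the threshold and path are arbitrary examples; a real plugin would also report which filesystem tripped and by how much):

```python
import shutil

def disk_space_ok(path="/", max_used_fraction=0.9):
    """Return True if the filesystem holding `path` is below the usage threshold."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total < max_used_fraction

print(disk_space_ok("/"))
```

Temperature and daemon checks follow the same pattern: read a value, compare against a threshold, and alert on failure rather than success.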

There are some companies that provide 3G SIMs for embedded/IoT applications at rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there’s a problem with the hard drive etc, and also if the system fails to send an “I’m OK” message for a certain period of time.
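The "I'm OK" scheme is a classic dead-man's switch, and on the monitoring side it reduces to a timestamp comparison (the timeout value here is made up for illustration):

```python
import time

def heartbeat_ok(last_seen, timeout=300, now=None):
    """True if an "I'm OK" message arrived within `timeout` seconds."""
    if now is None:
        now = time.time()
    return (now - last_seen) <= timeout

# A message under five minutes old is fine; one ten minutes old raises an alarm.
print(heartbeat_ok(last_seen=0, timeout=300, now=290))
print(heartbeat_ok(last_seen=0, timeout=300, now=600))
```

The important property is that silence triggers the alarm, so a dead system, a failed SIM, or a cut cable all produce the same alert.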

I don’t know if people are selling CCTV systems without monitoring to compete on price, or if companies are cancelling monitoring contracts to save money. But whichever is happening, it’s significantly reducing the value derived from monitoring.

Related posts:

  1. Health and Status Monitoring via Smart Phone Health Monitoring Eric Topol gave an interesting TED talk about...
  2. Planning Servers for Failure Sometimes computers fail. If you run enough computers then you...
  3. Shelf-life of Hardware Recently I’ve been having some problems with hardware dying. Having...

Joey Hess: hiking the Roan

29 August, 2016 - 09:10

Three moments from earlier this week..

Sprawled under a tree after three hours of hiking with a heavy, water-filled pack, I look past my feet at six ranges of mountains behind mountains behind flowers.

From my campsite, I can see the rest of the path of the Appalachian Trail across the Roan balds, to Big Hump mountain. It seems close enough to touch, but not this trip. Good to have a goal.

Near sunset, land and sky merge as the mist moves in.

Reproducible builds folks: Reproducible builds: week 70 in Stretch cycle

29 August, 2016 - 06:01

What happened in the Reproducible Builds effort between Sunday August 21 and Saturday August 27 2016:

GSoC and Outreachy updates

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

10 package reviews have been added and 6 have been updated this week, adding to our knowledge about identified issues.

A large number of issue types have been updated:

Weekly QA work

29 FTBFS bugs have been reported by:

  • Chris Lamb (27)
  • Daniel Stender (1)
  • Santiago Vila (1)
diffoscope development

strip-nondeterminism development

strip-nondeterminism 0.023-1 was uploaded by Chris Lamb:

 * Support Android .apk files with the JAR normalizer.
 * handlers/ Drop unused Archive::Zip import
 * Remove hyphen from non-determinism and non-deterministic.
 * Match more styles of .properties and loosen filename matching.
 * Improve tests:
   - Make fixture runner generic to all normalizer types.
   - Replace (single) pearregistry test with a fixture.
   - Set a canonical time for fixture tests.
   - Add gzip testcase fixture.
   - Replace t/javadoc.t with fixture
   - Replace t/ar.t with a fixture.
   - t/javaproperties: move and tests to fixtures
   - t/fixtures.t: move to using subtests
   - t/fixtures.t: Explicitly test that we can find a normalizer
   - t/fixtures.t: Don't run normalizer if we didn't find one.

strip-nondeterminism 0.023-2 uploaded by Mattia Rizzolo to allow stderr in autopkgtest.

disorderfs development


  • Link to jenkins documentation in every page (h01ger)
  • In the "pre build" check, whether a node is up, now also detects if a node has a read-only filesystem, which sometimes happens on some broken armhf nodes. (h01ger)
  • Collapse whitespace to avoid ugly "trailing underlines" in hyperlinks for diffoscope results and pkg sets (Chris Lamb)
  • Give details HTML elements "cursor: pointer" CSS property to highlight they are clickable. (Chris Lamb)

This week's edition was written by Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.