Planet Debian


Steve Kemp: If your code accepts URIs as input..

10 September, 2016 - 16:24

There are many online sites that read input from remote locations. For example, a site might try to extract all the text from a webpage, or show you the HTTP headers a given server sends back in response to a request.

If you run such a site you must make sure you validate the scheme of any URI you're given, and remember to re-validate after following any HTTP redirects.
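
In practice that means whitelisting the schemes you accept, checking where the host resolves to, and repeating the check on every redirect target. A minimal sketch in Python (my own illustration, not the code of any site mentioned here; it assumes the third-party requests library):

import ipaddress
import socket
from urllib.parse import urlparse, urljoin

import requests

ALLOWED_SCHEMES = {"http", "https"}

def is_safe(url):
    """Reject file://, gopher:// etc., and anything that resolves to a
    loopback, link-local or private address."""
    parts = urlparse(url)
    if parts.scheme not in ALLOWED_SCHEMES or not parts.hostname:
        return False
    try:
        for info in socket.getaddrinfo(parts.hostname, parts.port or 80):
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_loopback or addr.is_private or addr.is_link_local:
                return False
    except (socket.gaierror, ValueError):
        return False
    return True

def fetch(url, max_redirects=5):
    """Fetch a document, re-checking the target after every redirect."""
    for _ in range(max_redirects):
        if not is_safe(url):
            raise ValueError("refusing to fetch %r" % url)
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.is_redirect:
            # The redirect target must pass the same check again.
            url = urljoin(url, resp.headers["Location"])
            continue
        return resp.text
    raise ValueError("too many redirects")

# Note: a production version would also pin the resolved address when
# connecting, to avoid DNS-rebinding races between the check and the fetch.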

Really the issue here is a confusion between URL & URI.

The only time I ever communicated with Aaron Swartz was unfortunately after his death, because I didn't make the connection. I randomly stumbled upon the html2text software he put together, which had an online demo containing a form for entering a location. I tried the obvious input: a file:// URI pointing at a local file.

The software was vulnerable, read the file, and showed it to me.

The site gives errors on all inputs now, so it cannot be used to demonstrate the problem, but on Friday I saw another site on Hacker News with the very same input-issue, and it reminded me that there's a very real class of security problems here.

The site in question allows you to enter a URL to convert to markdown; I found it via the Hacker News submission.

A link pointing the converter at /etc/hosts demonstrates the problem.

The output looks like this:

.. localhost broadcasthost
::1 localhost
fe80::1%lo0 localhost stage files brettt..

(In the actual output all newlines had been stripped. Weird.)

Despite my reporting the problem to the author on Friday, and following up via Twitter, this has not yet been fixed; after four days I assume I'm not alone in spotting it.

Enrico Zini: Dreaming of being picked

10 September, 2016 - 14:47

From "Stop stealing dreams":

«Settling for the not-particularly uplifting dream of a boring, steady job isn’t helpful. Dreaming of being picked — picked to be on TV or picked to play on a team or picked to be lucky — isn’t helpful either. We waste our time and the time of our students when we set them up with pipe dreams that don’t empower them to adapt (or better yet, lead) when the world doesn’t work out as they hope.

The dreams we need are self-reliant dreams. We need dreams based not on what is but on what might be. We need students who can learn how to learn, who can discover how to push themselves and are generous enough and honest enough to engage with the outside world to make those dreams happen.»

This made me think that I know many hero stories based on "the chosen", like Matrix, like most superheroes getting powers either from some entity choosing them for it, or from chance.

I have a hard time thinking of a superhero who becomes one just by working hard at acquiring and honing their skills: I can only think of Batman and Ironman, and they start off as super rich.

If I think of people who start from scratch as commoners and work hard to become exceptional, in the standard superhero narrative, I can only think of supervillains.


It makes me feel culturally biased into thinking that a common person cannot be trusted to act responsibly, and that only the rich, the chosen and the aristocrats can.

As a bias it may serve the rich and the aristocrats, but I don't think it serves society as a whole.

Dirk Eddelbuettel: RProtoBuf 0.4.6: bugfix update

10 September, 2016 - 06:40

Relatively quickly after version 0.4.5 of RProtoBuf was released, we have a new version 0.4.6 to announce which appeared on CRAN today.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This version contains a contributed bug-fix pull request covering conversion of zero-length vectors, and adding native support for S4 objects. At the request / suggestion of the CRAN maintainers, it also uncomments a LaTeX macro in the vignette (corresponding to our recent JSS paper) which older R versions do not (yet) have in their jss.cls file.

Changes in RProtoBuf version 0.4.6 (2016-09-08)
  • Support for serializing zero-length objects was added (PR #18 addressing #13)

  • S4 objects are natively encoded (also PR #18)

  • The vignette based on the JSS paper no longer uses a macro available only with the R-devel version of jss.cls, and hence builds on all R versions

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Lars Wirzenius: Thinking about CI, maybe writing ick2

10 September, 2016 - 01:04

A year ago I got tired of Jenkins and wrote a CI system for myself, Ick. It's served me well since, but it's a bit clunky and awkward and I have to hope nobody else wants to use it.

I've been thinking about re-architecting Ick from scratch, and so I wrote down some of my thinking about this. It's very raw, but just in case someone else might be interested, I put it online at ick2.

At this point I'm still thinking about very high level concepts. I've not written any code, and probably won't in the next couple of months. But I had to get this out of my brain.

Steve McIntyre: Time flies

9 September, 2016 - 22:57

Slightly belated...

Another year, another OMGWTFBBQ. By my count, we had 49 people (and a dog) in my house and garden at the peak on Saturday evening. It was excellent to see people from all over coming together again, old friends and new. This year we had some weather issues, but due to the delights of gazebo technology most people stayed mostly dry. :-)

Also: thanks to a number of companies near and far who sponsored the important refreshments for the weekend.

As far as I could tell, everybody enjoyed themselves; I know I definitely did!

Jonathan Dowland: Metropolis

9 September, 2016 - 17:55

Every year since 2010 the Whitley Bay Film Festival has put on a programme of movies in my home town, often with some quirk or gimmick. A few years back we watched "Dawn Of The Dead" in a shopping centre—the last act was interrupted by a fake film-reel break, then a load of zombies emerged from the shops. Sometime after that, we saw "The Graduate" within a Church as part of their annual "Secret Cinema" showing. Other famous stunts (which I personally did not witness) include a screening of Jaws on the beach and John Carpenter's "The Fog" in Whitley Bay Lighthouse.

Massive thanks to Hunter North Recruitment for sponsoring Metropolis @snattaz

— Whitley Film Fest (@wbayfilmfest) August 14, 2016

This year I only went to one showing, Fritz Lang's Metropolis. Two twists this time: it was being shown in The Rendezvous Cafe, an Art-Deco themed building on the sea front; the whole film was accompanied by a live, improvised synthesizer jam by a group of friends and synth/sound enthusiasts who branded themselves "The Mediators" for the evening.

Metropolis live soundtrack preparations at the Rendezvous Cafe. Doors open 19.30 #metropolis

— Whitley Film Fest (@wbayfilmfest) August 14, 2016

I've been meaning to watch Metropolis for a long time (I've got the Blu-Ray still sat in the shrink-wrap) and it was great to see the newly restored version, but the live synth accompaniment was what really made the night special for me. They used a bunch of equipment, most notably a set of Korg Volcas. The soundtrack varied in style and intensity to suit the scenes, with the various under-city scenes backed by a pumping, industrial-style improvisation which sounded quite excellent.

I've had an interest in playing with synthesisers and making music for years, but haven't put the time in to do it properly. I left newly inspired and energised to finally try to make the time to explore it.

Jamie McClelland: Wait... is that how you are supposed to configure your SSD card?

9 September, 2016 - 00:43

I bought a laptop with only SSD drives a while ago and, based on a limited amount of reading, added the "discard" option to my /etc/fstab file for all partitions and happily went on my way, expecting to avoid the performance degradation problems that happen on SSD cards without this setting.

Yesterday, after a several month ordeal, I finally installed SSD drives in one of May First/People Link's servers and started doing more research to find the best way to set things up.

I was quite surprised to learn that my change in /etc/fstab had accomplished nothing. Well, that's not entirely true: my /boot partition was still getting empty sectors reported to the SSD card.

Since my filesystem is on top of LVM, and LVM is on top of an encrypted disk, those messages from the file system to the disk were not getting through. I learned that when I tried to run the fstrim command on one of the partitions and received the message that the disk didn't support it. Since my /boot partition is not in LVM or encrypted, it worked on /boot.

I then made the necessary changes to /etc/lvm/lvm.conf and /etc/crypttab, restarted and... same result. Then I ran update-initramfs -u and rebooted and now fstrim works. I decided to remove the discard option from /etc/fstab and will set a cron job to run fstrim periodically.
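
As a quick sanity check for a setup like this, you can just try fstrim on each mount point and see whether discards actually make it through all the layers. A small sketch of my own (not from the original post), in Python:

import subprocess

def trim_supported(mountpoint):
    """Return True if fstrim succeeds, i.e. discard requests actually
    reach the underlying device through the LVM/dm-crypt layers."""
    result = subprocess.run(["fstrim", "-v", mountpoint],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

# Example: check the mount points mentioned above (run as root).
for mountpoint in ("/", "/boot"):
    status = "trim works" if trim_supported(mountpoint) else "discards not getting through"
    print(mountpoint, "-", status)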

Also, I learned of some security implications of using trim on an encrypted disk which don't seem to outweigh the benefits.

Antonio Terceiro: Debian CI updates for September 2016

8 September, 2016 - 05:07

debci 1.4 was released just a few days ago. Among general improvements, I would like to highlight:

  • pretty much every place in the web UI that mentions a PASS or a FAIL now also displays the tested package version. This was suggested to me on IRC by Holger.
  • I also tried to work around an instability when setting up the LXC containers used for the tests, where the test bed setup would finish without failure even though some steps in the middle of it had failed. This caused the very final step of the debci-specific setup to fail, so there was no debci user inside the container, which in turn caused tests to fail because that user was missing. Before this was fixed I was always keeping an eye on the issue, fixing it by hand and re-triggering the affected packages, so as far as I can tell no package has had its status permanently affected by it.
  • Last, but not least, this release brings an interesting contribution by Gordon Ball: keeping track of different failure states. debci will now let you know whether a currently failing package has always failed, has passed in a previous version, or whether the same version that is currently failing has previously passed.

The Debian CI production system was upgraded to debci 1.4 just after that. At the same time I have also upgraded autodep8 and autopkgtest to their latest versions, available in jessie-backports. This means that it is now safe for Debian packages to assume the changes in autopkgtest 4.0 are available, in particular the $AUTOPKGTEST_* environment variables.

In other news, for several weeks we had issues with tests not being scheduled when they should have been. I had been assuming that the existing test scheduler, debci-batch, was simply broken. Today, while working on a new implementation that is going to be a lot faster, I started to hit a similar issue in my local tests and finally realized what was wrong. debci-batch stores the timestamp of the last time a package was scheduled to run, and if there is no test result newer than that timestamp, it assumes the package is still in the queue to be tested and does not schedule it again. It turns out that a few weeks ago, during maintenance work, I had cleared the queue, discarding all jobs that were there, but forgot to reset those timestamps; so when debci-batch came around again, it checked the timestamp of the last request and did not make new requests because there was no test result after it. I have cleared all those timestamps, and the system should now be back to normal.
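
To make that failure mode concrete, here is a small sketch of the scheduling check described above (debci itself is not written in Python and stores this state differently; the names below are invented purely for illustration):

def should_schedule(last_request, last_result):
    """last_request: when the package was last queued for testing (or None);
    last_result: when the newest test result arrived (or None)."""
    if last_request is None:
        return True                # never requested: schedule it
    if last_result is None or last_result < last_request:
        # No result newer than the last request: assume the package is
        # still sitting in the queue and do not schedule it again.
        # If the queue was cleared without resetting last_request, this
        # branch is taken forever and the package is never retested;
        # exactly the situation described above.
        return False
    return True

# The fix was effectively to reset last_request for all packages after
# discarding the queue, so that should_schedule() returns True again.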

That is it for now. If you want to contribute to the Debian CI project or want to get in touch, you can pop up on the #debci channel on the OFTC IRC network, or mail the autopkgtest-devel mailing list.

Wouter Verhelst: Installing files for other applications with autotools

7 September, 2016 - 23:40

Let's say you have a configure.ac file which contains this:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")

and that it goes with a Makefile.am which contains this:

dist_p11_module_DATA = foo.module

Then things should work fine, right? When you run make install, your modules install to the right location, and p11-kit will pick up everything the way it should.

Well, no. Not exactly. That is, it will work for the common case, but not for some other cases. You see, if you do that, then make distcheck will fail pretty spectacularly. At least if you run that as non-root (which you really really should do). The problem is that by specifying the p11_moduledir variable in that way, you hardcode it; it doesn't honour any $prefix or $DESTDIR variables that way. The result of that is that when a user installs your package by specifying --prefix=/opt/testmeout, it will still overwrite files in the system directory. Obviously, that's not desirable.

The $DESTDIR bit is especially troublesome, as it makes packaging your software for the common distributions complicated (most packaging software heavily relies on DESTDIR support to "install" your software in a staging area before turning it into an installable package).

So what's the right way then? I've been wondering about that myself, and asked for the right way to do something like that on the automake mailinglist a while back. The answer I got there wasn't entirely satisfying, and at the time I decided to take the easy way out (EXTRA_DIST the file, but don't actually install it). Recently, however, I ran against a similar problem for something else, and decided to try to do it the proper way this time around.

p11-kit, like systemd, ships pkg-config files which contain variables for the default locations to install files into. These variables' values are meant to be easy to use from scripts, so that no munging of them is required if you want to directly install to the system-wide default location. The downside of this is that, if you want to install to the system-wide default location by default from an autotools package (but still allow the user to --prefix your installation into some other place, accepting that then things won't work out of the box), you do need to do the aforementioned munging.

Luckily, that munging isn't too hard, provided whatever package you're installing for did the right thing:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")
PKG_CHECK_VAR([p11kit_libdir], "p11-kit-1", "libdir")
if test -z $ac_cv_env_p11_moduledir_set; then
    p11_moduledir=$(echo $p11_moduledir|sed -e "s,$p11kit_libdir,\${libdir},g")
fi
AC_SUBST([p11_moduledir])

Whoa, what just happened?

First, we ask p11-kit-1 where it expects modules to be. After that, we ask p11-kit-1 what was used as "libdir" at installation time. Usually that should be something like /usr/lib or /usr/lib/<gnu arch triplet> or some such, but it could really be anything.

Next, we test to see whether the user set the p11_moduledir variable on the command line. If so, we don't want to munge it.

The next line looks for the value of whatever libdir was set to when p11-kit-1 was installed in the value of p11_module_path, and replaces it with the literal string ${libdir}.

Finally, we exit our if and AC_SUBST our value into the rest of the build system.

The resulting package will have the following semantics:

  • If someone installs p11-kit-1 and your package with the same prefix, the files will be installed to the correct location.
  • If someone installs both packages with a different prefix, then by default the files will not install to the correct location. This is what you'd want, however; using a non-default prefix is the only way to install something as non-root, and if root installed something into /usr, a normal user wouldn't be able to fix things.
  • If someone installs both packages with a different prefix, but sets the p11_moduledir variable to the correct location, at configure time, then things will work as expected.

I suppose it would've been easier if the PKG_CHECK_VAR macro could (optionally) do that munging by itself, but then, can't have everything.

Mike Gabriel: Debian's GTK-3+ v3.21 breaks Debian MATE 1.14

7 September, 2016 - 18:21
sunweaver sighs...

This short post is to inform all Debian MATE users that the recent GTK-3+ upload to Debian (GTK-3+ v3.21) broke most parts of the MATE 1.14 desktop environment as currently available in Debian testing (aka stretch). This raises some questions here on the MATE maintainers' side...

  1. Isn't GTK-3+ a shared library? This one was rhetorical... Yes, it is.

  2. One that breaks other applications with every point release? Well, unfortunately, as experience over the past years has shown: yes, this has happened several times so far, and it has happened again.

  3. Why is it that GTK-3+ uploads appear in Debian without going through a proper transition? This question is not rhetorical. If someone has an answer, please enlighten me.

Potential Counter Measures

For Debian MATE users running on Debian testing: This is untested, but it is quite likely that your MATE desktop environment will work again, once you have reverted your GTK-3+ library back to v3.20. For obtaining old Debian package versions, please visit the site.


The MATE 1.16 release is expected for Sep 20th, 2016. We will do our best to provide MATE 1.16 in Debian before this month is over. MATE 1.16 will again run smoothly (so I heard) on GTK-3+ 3.21.

sunweaver (who is already scared of the 3.22 GTK+ release, luckily the last development release of the GTK+ 3-series)

Reproducible builds folks: Reproducible Builds: week 71 in Stretch cycle

7 September, 2016 - 15:14

What happened in the Reproducible Builds effort between Sunday August 28 and Saturday September 3 2016:

Media coverage

Antonio Terceiro blogged about testing build reproducibility with debrepro.

GSoC and Outreachy updates

The next round is being planned now: see their page with a timeline and a list of participating organizations.

Maybe you want to participate this time? Then please reach out to us as soon as possible!

Packages reviewed and fixed, and bugs filed

The following packages have addressed reproducibility issues in other packages:

The following updated packages have become reproducible in our current test setup after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out yet. (Relevant changelogs did not mention reproducible builds.)

The following 4 packages were not changed, but have become reproducible due to changes in their build-dependencies:

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

706 package reviews have been added, 22 have been updated and 16 have been removed in this week, adding to our knowledge about identified issues.

5 issue types have been added:

1 issue type has been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (8)
  • Lucas Nussbaum (3)
diffoscope development

diffoscope development on the next version (60) continued in git, taking in contributions from:

  • Mattia Rizzolo:
    • Better and more thorough testing
    • Improvements to packaging
    • Improvements to the ppu comparator
strip-nondeterminism development

Mattia Rizzolo uploaded strip-nondeterminism 0.023-2~bpo8+1 to jessie-backports.

A new version of strip-nondeterminism 0.024-1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Chris Lamb:
    • Improve code quality of zip, jar, ar, png processors
  • AYANOKOUZI, Ryuunosuke:
    • Preserve file attribute information of target file (#836075)

Holger added jobs to run the test suite on every commit. There is one job for the master branch and one for the other branches.

disorderfs development

Holger added jobs to run the test suite on every commit. There is one job for the master branch and one for the other branches.

Debian: We now vary the GECOS records of the two build users. Thanks to Paul Wise for providing the patch.


This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

Julian Andres Klode: New software: sicherboot

7 September, 2016 - 05:13

Today, I wrote sicherboot, a tool to integrate systemd-boot into a Linux distribution in an entirely new way: with secure boot support. To be precise, the use case here is to only run trusted code, which then unlocks an otherwise fully encrypted disk, as in my setup.

If you want, sicherboot automatically creates db, KEK, and PK keys, and puts the public keys on your EFI System Partition (ESP) together with the KeyTool tool, so you can enroll the keys in UEFI. You can of course also use other keys, you just need to drop a db.crt and a db.key file into /etc/sicherboot/keys. It would be nice if sicherboot could enroll the keys directly in Linux, but there seems to be a bug in efitools preventing that at the moment. For some background: The Platform Key (PK) signs the Key Exchange Key (KEK) which signs the database key (db). The db key is the one signing binaries.

sicherboot also handles installing new kernels to your ESP. For this, it combines the kernel with its initramfs into one executable UEFI image, and then signs that. Combined with a fully encrypted disk setup, this assures that only you can run UEFI binaries on the system, and attackers cannot boot any other operating system or modify parts of your operating system (except for, well, any block of your encrypted data, as XTS does not authenticate the data; but then you do have to know which blocks are which, which is somewhat hard).
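
The combining-and-signing step is essentially the usual systemd EFI-stub recipe: glue the pieces into the stub with objcopy, then sign the result with sbsign. A rough sketch of that technique in Python (my own illustration, not sicherboot's actual code; the paths and section offsets are the commonly documented ones and only meant as an example):

import subprocess

STUB = "/usr/lib/systemd/boot/efi/linuxx64.efi.stub"

def build_signed_image(kernel, initrd, cmdline, osrel, key, cert, output):
    """Glue kernel, initramfs, command line and os-release into one UEFI
    executable, then sign it for Secure Boot."""
    unsigned = output + ".unsigned"
    subprocess.run([
        "objcopy",
        "--add-section", ".osrel=" + osrel, "--change-section-vma", ".osrel=0x20000",
        "--add-section", ".cmdline=" + cmdline, "--change-section-vma", ".cmdline=0x30000",
        "--add-section", ".linux=" + kernel, "--change-section-vma", ".linux=0x2000000",
        "--add-section", ".initrd=" + initrd, "--change-section-vma", ".initrd=0x3000000",
        STUB, unsigned,
    ], check=True)
    subprocess.run([
        "sbsign", "--key", key, "--cert", cert, "--output", output, unsigned,
    ], check=True)

# e.g. build_signed_image("/boot/vmlinuz-4.7.0-1-amd64",
#                         "/boot/initrd.img-4.7.0-1-amd64",
#                         "/etc/kernel/cmdline", "/etc/os-release",
#                         "/etc/sicherboot/keys/db.key",
#                         "/etc/sicherboot/keys/db.crt",
#                         "/boot/efi/EFI/Linux/linux-4.7.0-1-amd64.efi")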

sicherboot integrates with various parts of Debian: It can work together with dracut via an evil hack (diverting dracut’s kernel/postinst.d config file, so we can run sicherboot after running dracut), it should support initramfs-tools (untested), and it also integrates with systemd upgrades via triggers on the /usr/lib/systemd/boot/efi directory.

Currently sicherboot only supports Debian-style setups with /boot/vmlinuz-<version> and /boot/initrd.img-<version> files, it cannot automatically create combined boot images from or install boot loader entries for other naming schemes yet. Fixing that should be trivial though, with a configuration setting and some eval magic (or string substitution).

Future planned features include: (1) support for multiple ESP partitions, so you can have a fallback partition on a different drive (think RAID type situation, keep one ESP on each drive, so you can remove a failing one); and (2) a tool to create a self-contained rescue disk image from a directory (which will act as initramfs) and a kernel (falling back to a vmlinuz file).

It might also be interesting to add support for other bootloaders and setups, so you could automatically sign a grub cryptodisk image for example. Not sure how much sense that makes.

I published the source (MIT licensed) and uploaded the package to Debian, it should enter the NEW queue soon (or be in NEW by the time you read this). Give it a try, and let me know what you think.

Filed under: Debian, sicherboot

Markus Koschany: My Free Software Activities in August 2016

7 September, 2016 - 04:28

Welcome! Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android
  • This was the final month of the Google Summer of Code and the students achieved the main goal of packaging the Android SDK. It is now possible to build Android apps on Debian with packages only from the main distribution (apt install android-sdk). Chirayu Desai fixed the last remaining issue in android-platform-system-core (#827216). That also means apktool is now ready to rebuild Android applications. You can find more information about the students’ work on their individual pages: Chirayu Desai, Kai-Chung Yan and Mouaad Aallam.
  • I sponsored a new upstream release (2.2.0) of apktool for Chirayu Desai.
  • I also reviewed and sponsored the following packages for Kai-Chung and Chirayu Desai (RC bug fixes and new upstream releases): android-platform-dalvik, android-platform-frameworks-base, android-sdk-meta.
Debian Games
  • I started the month with package updates for foobillardplus, tuxpuck, etw, cube2, cube2-data and neverball.
  • I released a new revision of triplane to fix a reproducible build issue.
  • I packaged a new upstream release of springlobby.
  • I fixed GCC-6 FTBFS bugs in stormbaancoureur and love and updated both packages to use modern Debian helpers (stormbaancoureur needed it badly).
  • I invested some time to package Liquidwar 6 (#680023) and attached my preliminary work to the bug report. Liquidwar 6 has been in the works for a long time now and is a complete rewrite of the original Liquidwar game. The graphics are much more polished and dozens of new levels are available. I didn’t complete my work on Liquidwar 6 because, at least on my system, the game constantly consumes 100% CPU time. Network mode isn’t finished yet and it still depends on SDL 1. Nowadays I’m only interested in SDL 2 (or similar) games though because I think the library is more future-proof and SDL 1 will probably become a burden for future maintainers.
  • In the second half of the month I fixed a couple of RC bugs again caused by the Boost 1.61 transition and yes still more GCC-6 bugs : libclaw (GCC-6 and Boost 1.61 issues, new upstream release), freeorion (Boost 1.61 FTBFS, #833773. This one was arguably a regression in Boost 1.61 and I filed #833794 because of it), pokerth (GCC-6 RC bugs. I also took the opportunity to implement systemd support for pokerth-server and I modified the package to run the server as the _pokerth system user out-of-the-box.), 0ad (missing build-dependency on python).
  • Even music packages can pile up bug reports, so I went ahead and updated fretsonfire-songs-muldjord and fretsonfire-songs-sectoid.
  • In the last days of August 2016 I packaged a new upstream release of redeclipse and redeclipse-data, a first-person shooter. The older version was network-incompatible and long unsupported.
Debian Java
Debian LTS

This was my seventh month as a paid contributor and I have been paid to work 14.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 01. August to 07. August I was in charge of our LTS frontdesk. I triaged CVEs in wordpress, mysql-5.5, libsys-syslog-perl, libspring-java, curl and squid and answered questions on the debian-lts mailing list.
  • DLA-586-1. Issued a security update for curl fixing 2 CVE.
  • DLA-585-1. Announced the security update for firefox-esr which was prepared by Mike Hommey.
  • I was involved in an embargoed security issue that currently affects two source packages in Wheezy. The update will be released on 15. September 2016 and will be coordinated with Debian’s Security Team and other distributions. I will add more information next month.
  • DLA-610-1. I spent most of the time this month on triaging and fixing security issues in tiff3, a library providing support for the Tagged Image File Format (TIFF). 99 source packages currently build-depend on this library in Wheezy. In total I triaged 35 CVEs and fixed 23 of them. I could confirm that CVE-2015-1547, CVE-2016-5322, CVE-2016-5314, CVE-2016-5315, CVE-2016-5316, CVE-2016-5317 and CVE-2016-5320 were duplicates of other CVEs fixed in this update. The update hardened the library and fixed possible denial-of-service (application crash) and arbitrary code execution issues. I tested whenever possible against the provided reproducers (malicious tiff images). The tiff3 package now includes all currently available patches. Most of the current open vulnerabilities do not directly affect end-users since no binary package has been provided for the tiff tools in Wheezy. However they can still pose a threat to people who build these tools from source manually. Though the majority of users should not be affected. It is also unlikely that the remaining issues will be fixed by tiff’s upstream developers since they decided to remove the affected applications from newer releases but again most of them can’t be exploited since the tools are not built by default in this version.
Non-maintainer uploads
  • I did a NMU for pacman fixing one GCC-6 RC bug.
  • I packaged a new upstream release of pygccxml and worked around an RC bug that threatened to remove spring. For similar reasons I filed #835121 against castxml, which was quickly fixed by Gert Wollny.

Elena 'valhalla' Grandi: Candy from Strangers

7 September, 2016 - 01:46
Candy from Strangers

A few days ago I gave a talk at ESC about some reasons why I think that using software and especially libraries from the packages of a community managed distribution is important and much better than alternatives such as PyPI, npm etc. This article is a translation of what I planned to say before forgetting bits of it and luckily adding it back as an answer to a question :)

When I was young, my parents taught me not to accept candy from strangers, unless they were present and approved of it, because there was a small risk of very bad things happening. It was of course a simplistic rule, but it had to be easy enough to follow for somebody who wasn't proficient (yet) in the subtleties of social interactions.

One of the reasons why it worked well was that following it wasn't a big burden: at home candy was plenty and actual offers were rare: I only remember missing one piece of candy because of it, and while it may have been a great one, the ones I could have at home were also good.

Contrary to candy, offers of gratis software from random strangers are quite common: from suspicious looking websites to legit and professional looking ones, to platforms that are explicitly designed to allow developers to publish their own software with little or no checks.

Just like candy, there is also a source of trusted software in the Linux distributions, especially those led by a community: I mention mostly Debian because it's the one I know best, but the same principles apply to Fedora and, to some measure, to most of the other distributions. Like good parents, distributions can be wrong, and they do leave room for older children (and proficient users) to make their own choices, but still provide a safe default.

Among the unsafe sources there are many different cases, and while they do share some of the risks, they have different targets with different issues; for brevity the scope of this article is limited to the ones that mostly concern software developers: language-specific package managers and software distribution platforms like PyPI, npm, rubygems etc.

These platforms are extremely convenient both for the writers of libraries, who are enabled to publish their work with minor hassles, and for the people who use such libraries, because they provide an easy way to install and use a huge amount of code. They are of course also an excellent place for distributions to find new libraries to package and distribute, and this I agree is a good thing.

What I however believe is that getting code from such sources and using it without carefully checking it is even more risky than accepting candy from a random stranger on the street in an unfamiliar neighbourhood.

The risks aren't trivial: while you probably won't be taken hostage for ransom, your data could be, or your devices and those that run your programs could be used in some criminal act, causing at least some monetary damage both to yourself and to society at large.

If you're writing code that should be maintained over time there are also other risks, even when no malice is involved, because each package on these platforms has a different policy with regards to updates, their backwards compatibility and what can be expected in case an old version is found to have security issues.

The very fact that everybody can publish anything on such platforms is both their biggest strength and their main source of vulnerability: while most of the people who publish their libraries do so with good intentions, attacks have been described and publicly tested, such as the fun typo-squatting one that published harmless malicious code under common typos for famous libraries.

Contrast this with Debian, where everybody can contribute, but before they are allowed full unsupervised access to the archive they have to establish a relationship with the rest of the community, which includes meeting other developers in real life, at the very least to get their gpg keys signed.

This doesn't prevent malicious people from introducing software, but raises significantly the effort required to do so, and once caught people can usually be much more effectively prevented from repeating it than a simple ban on an online-only account can do.

It is true that not every Debian maintainer actually does a full code review of everything that they allow in the archive, and in some cases it would be unreasonable to expect it, but in most cases they are at least reasonably familiar with the code to do at least bug triage, and most importantly they are in an excellent position to establish a relationship of mutual trust with the upstream authors.

Additionally, package maintainers don't work in isolation: a growing number of packages are being maintained by a team of people, and most importantly there are aspects that involve potentially the whole community, from the fact that new packages that enter the distribution are publicly announced on a mailing list to the various distribution-wide QA efforts.

Going back to the language specific distribution platforms, sometimes even the people who manage the platform themselves can't be fully trusted to do the right thing: I believe everybody in the field remembers the npm fiasco, where a lawyer's letter requesting the removal of a package started a series of events that resulted in potentially breaking a huge number of automated build systems.

Here some of the problems were caused by some technical policies that caused the whole ecosystem to be especially vulnerable, but one big issue was the fact that the managers of the npm platform are a private entity with no oversight from the user community.

Here not all distributions are equal, but contrast this with Debian, where the distribution is managed by a community that is based on a social contract and is governed via democratic procedures established in its constitution.

Additionally, the long history of the distribution model means that many issues have already been met, the errors have already been done, and there are established technical procedures to deal with them in a better way.

So, shouldn't we use language specific distribution platforms at all? No! As developers we aren't children, we are adults who have the skills to distinguish between safe and unsafe libraries just as well as the average distribution maintainer can do. What I believe we should do is stop treating them as a safe source that can be used blindly and reserve that status to actual trustful sources like Debian, falling back to the language specific platforms only when strictly needed, and in that case:

  • actually check carefully what we are using, both by reading the code and by analysing the development and community practices of the authors;
  • if possible, share that work by becoming ourselves maintainers of that library in our favourite distribution, to prevent duplication of effort and to give back to the community whose work we get advantage from.

Guido Günther: Debian Fun in August 2016

7 September, 2016 - 01:08
Debian LTS

August marked the sixteenth month I contributed to Debian LTS under the Freexian umbrella. I spent 9 hours (of the allocated 8) mostly on Rails related CVEs, which resulted in DLA-603-1 and DLA-604-1, fixing 6 CVEs and marking others as not affecting the packages. The hardest part was proper testing, since the split packages in Wheezy don't allow running the upstream test suite as is. There's still CVE-2016-0753 which I need to check if it affects activerecord or activesupport.

Additionally I had one relatively quiet week of LTS frontdesk work triaging 10 CVEs.

Other Debian stuff
  • I uploaded git-buildpackage 0.8.2 to experimental and 0.8.3 to unstable. The latter brings all the enhancements and bugfixes since DebConf 16 to sid and testing.
  • The usual bunch of libvirt related uploads

Andrew Shadura: Manual control of OpenEmbedded -dbg packages

6 September, 2016 - 19:28

In December last year, OpenEmbedded introduced automatic debug packages. Prior to that, you’d need to manually construct the FILES_${PN}-dbg variable in your recipe. If you need to retain manual control over precisely what goes into debug packages, set the undocumented NOAUTOPACKAGEDEBUG variable to 1, the same way the Qt recipe does:

NOAUTOPACKAGEDEBUG = "1"
FILES_${PN}-dev = "${includedir}/${QT_DIR_NAME}/Qt/*"
FILES_${PN}-dbg = "/usr/src/debug/"
FILES_${QT_BASE_NAME}-demos-doc = "${docdir}/${QT_DIR_NAME}/qch/qt.qch"

P.S. Knowing this would have saved me and my colleagues days of work.

Norbert Preining: Yukio Mishima: Patriotism (憂国)

6 September, 2016 - 18:59

A masterpiece by Yukio Mishima – Patriotism – a story of love and death. A short story about the double suicide of a lieutenant and his wife following the Ni Ni Roku Incident, in which parts of the military tried to overthrow government and military leaders. Although Lieutenant Takeyama wasn’t involved in the coup, because his friends wanted to safeguard him and his new wife, he found himself facing a fight against his friends and their execution. Unable to cope with this situation, he commits suicide, followed by his wife.

Written in 1960 by Yukio Mishima, one of the most interesting writers of modern Japanese history, the book, and the movie Mishima himself made of it, paint a deeply disturbing image of the relation between the individual and the state.

Although the English title says Patriotism, the Japanese one is 憂国 (Yukoku), which is closer to Concern for one’s own country. It is this concern, and the feeling of devotion to the Imperial system and the country, that leads the two to their deed. We are guided through the whole book and movie by a large scroll with 至誠 (shisei, devotion) written on it.

But indeed, Patriotism is a good title I think – for one of the most dangerous concepts mankind has brought forth. If patriotism were only the love for one’s own country, all would be fine. But reality shows that patriotism unfailingly brings along xenophobia and a feeling of superiority.

Coming from a small and unimportant country, I never felt even the slightest temptation to be patriotic in the bad sense. And looking at the world and the people around me, I often have the feeling that it is mainly the big countries that produce the biggest and worst style of patriotism. This is obvious in countries like China, but having recently learned that all US pupils have to recite (obviously without understanding it) the Pledge of Allegiance, the shock of how badly patriotism can wash the brains of even the smallest kids in a seemingly free country is still with me.

But back to the book: here the patriotism is exhibited by the presence of the Imperial images and the shrine in the entrance, in front of which the two pray one last time before taking their lives.

Not only is the book a masterpiece in itself, the movie is also a special piece of art: filmed in silent-movie style with text inserts, the whole story takes place on a Noh stage. This is particularly interesting as Mishima was one of the few, if not the only, modern writers of Noh plays; he wrote several of them.

Another very impressive scene for me was when, after her husband’s suicide, Reiko returns from putting up her final make-up into the central room. Her kimono is already blood-soaked and the trailing kimono leaves traces on the Noh stage resembling the strokes of a calligraphy, as if her movement is guided, too, by 至誠.

The final scene of the movie shows the two of them in a Zen stone garden, forming the stone, the unreachable island of happiness.

Very impressive, both the book as well as the movie.

Clint Adams: Can't put your arms around a memory

6 September, 2016 - 08:41

“I think it stems from employing people who are capable of telling you what BGP stands for,” he said. “Watching my DevOps team in action is an infuriating mix of ‘Damn, that's a slick CI/CD process you’ve built,’ and ‘What do you mean you don't know what the output of netstat means?’”

Joachim Breitner: The new CIS-194

6 September, 2016 - 01:09

The Haskell minicourse at the University of Pennsylvania, also known as CIS-194, has always had a reach beyond the students of Penn, at least since Brent Yorgey gave the course in 2013, wrote extensive lecture notes and eventually put the material on GitHub.

This year, it is my turn to give the course. I could not resist making some changes, at least to the first few weeks: instead of starting with a locally installed compiler, doing exercises that revolve mostly around arithmetic and lists, I send my students to CodeWorld, which is a web programming environment created by Chris Smith [1].

This greatly lowers the initial hurdle of having to set up the local toolchain, and is inclusive towards those who have had little exposure to the command line before. Not that I do not expect my students to handle that, but it does not hurt to move it towards later in the course.

But more importantly: CodeWorld comes with a nicely designed, simple API to create vector graphics, to animate these graphics and even create interactive programs. This means that instead of having to come up with yet another set of exercises revolving around lists and numbers, I can have the students create Haskell programs that are visual. I believe that this is more motivating and stimulating, and will nudge the students to spend more time programming and thus practicing.

In fact, the goal is that in their third homework assignment, the students will implement a fully functional, interactive Sokoban game. And all that before talking about the built-in lists or tuples, just with higher order functions and custom datatypes. (Click on the picture above, which is part of the second week’s homework. You can use the arrow keys to move the figure around and press the escape key to reset the game. Boxes cannot be moved yet -- that will be part of homework 3.)

If this sounds interesting to you, and you always wanted to learn Haskell from scratch, feel free to tag along. The lecture notes should be elaborate enough to learn from that, and with the homework problems, you should be able to tell whether you have solved it yourself. Just do not publish your solutions before the due date. Let me know if you have any comments about the course so far.

Eventually, I will move to local compilation, use of the interpreter and text-based IO and then start using more of the material of previous iterations of the course, which were held by Richard Eisenberg in 2014 and by Noam Zilberstein in 2015.

  1. Chris has been very helpful in making sure CodeWorld works in a way that suits my course, thanks for that!


Creative Commons License: The copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.