Planet Debian


Ben Hutchings: Debian LTS work, September 2017

9 October, 2017 - 23:25

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from August. I only worked 12 hours, so I will carry over 9 hours to the next month.

I prepared and released another update on the Linux 3.2 longterm stable branch (3.2.93). I then rebased the Debian linux package onto this version, added further security fixes, and uploaded it (DLA-1099-1).

Michal Čihař: Better access control in Weblate

9 October, 2017 - 23:00

The upcoming Weblate 2.17 will bring improved access control settings. Previously this could be controlled only by server admins, but now the project visibility and access presets can be configured.

This allows you to better tweak access control for your needs. There is an additional choice of making the project public but restricting translations, which has been requested by several projects.

You can see the possible choices on the UI screenshot:

On Hosted Weblate this feature is currently available only to commercial hosting customers. Projects hosted for free are limited to public visibility only.

Filed under: Debian English SUSE Weblate

Antonio Terceiro: pristine-tar updates

9 October, 2017 - 22:06

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator, Joey Hess. A little after that, Tomasz Buchert joined me, and we are now a functional two-person team.

The goal of pristine-tar is to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don’t have to store a full copy of the tarball. This is done by storing a binary delta file which can be used to reconstruct the original tarball from a tarball produced from the contents of the VCS. Ultimately, we want to make sure that the tarball uploaded to Debian is exactly the same as the one downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.
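
The delta idea can be illustrated with a toy example, using line-based diffs from the Python standard library purely for illustration (pristine-tar actually stores binary deltas):

```python
# Toy sketch of the delta idea: keep only the difference between what we
# can regenerate from the VCS and the original artifact, then use that
# difference to reconstruct the original exactly.
import difflib

# What we can regenerate today from the VCS contents (slightly different).
regenerated = ["line %d\n" % i for i in range(5)] + ["trailer v2\n"]
# The pristine upstream artifact we want to be able to reproduce.
original = ["line %d\n" % i for i in range(5)] + ["trailer v1\n"]

# Store only the delta...
delta = list(difflib.ndiff(regenerated, original))
# ...and later reconstruct the original from regenerated + delta.
restored = list(difflib.restore(delta, 2))
assert restored == original
```

The real tool applies the same principle at the byte level, with a binary delta generator producing the stored difference.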

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem: our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all the compression tools (gzip, bzip2, xz), and by the exact options that were used when creating the original tarballs. Because of this, pristine-tar currently embeds copies of old versions of several compressors in order to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while.

So basically keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar.

Going forward, we may need to rely on other ways of ensuring integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn’t matter), or maybe even a different system for storing actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.
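
The appeal of signing uncompressed tarballs can be sketched with the Python standard library: two compressed files that differ only in compression metadata still decompress to identical bytes, so a checksum (and hence a signature) over the uncompressed content is stable:

```python
# Sketch: why signing the *uncompressed* tarball sidesteps the problem.
# Two gzip streams differing only in metadata (here, the mtime header
# field) are different files on disk...
import gzip
import hashlib

payload = b"upstream source tree\n" * 100
blob_a = gzip.compress(payload, mtime=0)
blob_b = gzip.compress(payload, mtime=1)
assert blob_a != blob_b  # the compressed files are not bit-identical

# ...but a digest over the decompressed content is identical, so a
# signature made over it would verify for either file.
digest_a = hashlib.sha256(gzip.decompress(blob_a)).hexdigest()
digest_b = hashlib.sha256(gzip.decompress(blob_b)).hexdigest()
assert digest_a == digest_b
```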

Recent improvements

Even if keeping pristine-tar useful in the long term will be hard, too much of Debian work currently relies on it, so we can’t just abandon it. Instead, we keep figuring out ways to improve. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit.

In order to understand how much better we are getting at this, I created a visualization of the regression test suite results. With the help of that data, let’s look at the improvements made since pristine-tar 1.38, the version included in stretch.

pristine-tar 1.39: xdelta3 by default

This was the first release made after stretch, and it made xdelta3 the default delta generator for newly imported tarballs. Existing tarballs with deltas produced by xdelta are still supported; this only affects new imports.

The support for having multiple delta generators was written by Tomasz and had been there since 1.35, but we decided to only flip the switch once xdelta3 support was present in a stable release.

pristine-tar 1.40: improved compression heuristics

pristine-tar uses a few heuristics to produce the smallest delta possible, and this includes trying different compression options. In this release, Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved the support for rsyncable gzip-compressed files. We can see an example of this type of improvement in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz:
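
At its core, this heuristic is a search over compression parameters. A minimal sketch over gzip levels only (pristine-tar searches a much larger space of tools and options):

```python
# Minimal sketch of the parameter search: find gzip settings that
# regenerate a compressed blob exactly.
import gzip

def find_gzip_level(target: bytes, payload: bytes):
    """Return a compression level that regenerates `target` from
    `payload` byte for byte, or None if no level matches."""
    for level in range(1, 10):
        if gzip.compress(payload, compresslevel=level, mtime=0) == target:
            return level
    return None

payload = b"example data that compresses\n" * 50
target = gzip.compress(payload, compresslevel=6, mtime=0)
level = find_gzip_level(target, payload)
assert level is not None
# Whatever level we found reproduces the blob exactly:
assert gzip.compress(payload, compresslevel=level, mtime=0) == target
```

When no combination of parameters reproduces the original, the search fails, which is exactly the situation the recompression feature below is meant to address.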

pristine-tar 1.41: support for signatures

This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs

I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the “pristine” bit of the whole business, but on the other hand: 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit-by-bit the same as the upstream release, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine.

In some cases, as with the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all:

In other cases, if the current heuristics can’t produce a reasonably small delta, recompressing makes a huge difference. It’s the case for mumble_1.1.8.orig.tar.gz:

Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocations.
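
For example, a sketch of enabling recompression globally via the environment (the exact options you pass are up to you):

```shell
# Make every pristine-tar invocation use recompression, including the
# ones made on your behalf by wrappers such as gbp buildpackage.
export PRISTINE_TAR="--recompress"
```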

Also, even if you enable recompression, pristine-tar will only try it if the delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what “too large” means with the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.
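
The “too large” check amounts to something like the following sketch; the function name and default values here are invented for illustration, not pristine-tar’s actual defaults:

```python
# Illustrative sketch of the "too large" check described above; the
# defaults here are made up, not pristine-tar's real ones.
def delta_too_large(delta_size, tarball_size,
                    threshold_bytes=512 * 1024,
                    threshold_percent=30):
    """Return True if a delta is big enough to justify recompressing."""
    if delta_size > threshold_bytes:
        return True
    return 100 * delta_size / tarball_size > threshold_percent

assert delta_too_large(600 * 1024, 10 * 1024 * 1024)     # over byte cap
assert delta_too_large(400 * 1024, 1 * 1024 * 1024)      # ~40% of tarball
assert not delta_too_large(10 * 1024, 10 * 1024 * 1024)  # small delta
```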

Michal Čihař: stardicter 1.1

9 October, 2017 - 20:15

Stardicter 1.1, the set of scripts to convert some freely available dictionaries to the StarDict format, has been released today. The biggest change is that it now keeps the source data together with the generated dictionaries. This is good for licensing reasons and will also make it possible to actually build these as packages within Debian.

Full list of changes:

  • Various cleanups for first stable release.
  • Fixed generating of README for dictionaries.
  • Added support for generating source tarballs.
  • Fixed installation on systems with non utf-8 locale.

As usual, you can install it from pip, download the source, or download the generated dictionaries from my website. The package should soon be available in Debian as well.

Filed under: Debian English StarDict

Petter Reinholdtsen: Generating 3D prints in Debian using Cura and Slic3r(-prusa)

9 October, 2017 - 15:50

At my nearby maker space, Sonen, I heard that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux, while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request to add it to Debian from 2013, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

Now I am very happy to see that all the packages required by a working Cura in Debian are uploaded into Debian and waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

On a related note, two competitors to Cura, which I found harder to use and was unable to configure correctly for the Ultimaker 2+ in the short time I spent on them, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.

Gunnar Wolf: Achievement unlocked - Made with Creative Commons translated to Spanish! (Thanks, @xattack!)

9 October, 2017 - 11:05

I am very, very, very happy to report this — And I cannot believe we have achieved this so fast:

Back in June, I announced I'd start working on the translation of the Made with Creative Commons book into Spanish.

Over the following few weeks, I worked out the most viable infrastructure, gathered input and commitments for help from a couple of friends, submitted my project for inclusion in the Hosted Weblate translations site (and got it approved!)

Then, we quietly and slowly started working.

Then, as it usually happens in late August, early September... The rush of the semester caught me in full, and I left this translation project for later — For the next semester, perhaps...

Today, I received a mail that surprised me. That stunned me.

99% of the strings translated! Of course, it does not look as neat as "100%" would, but there are several strings that are not meant to be translated.

So, yay for collaborative work! Oh, and FWIW — thanks to everybody who helped. And really, really, really, hats off to Luis Enrique Amaya, a friend whom I see way less often than I should. A LIDSOL graduate, and a nice guy all around. Why him specifically? Well... this has several wrinkles to iron out, but, by number of translated lines:

  • Andrés Delgado 195
  • scannopolis 626
  • Leo Arias 812
  • Gunnar Wolf 947
  • Luis Enrique Amaya González 3258

...Need I say more? Luis, I hope you enjoyed reading the book :-]

There is still a lot of work to do, and I'm asking the rest of the team some days so I can get my act together. From the mail I just sent, I need to:

  1. Review the Pandoc conversion process, to get the strings formatted again into a book; I had got this working somewhere in the process, but last I checked it broke. I expect this not to be too much of a hurdle, and it will help all other translations.
  2. Start the editorial process at my Institute. Once the book builds, I'll have to start the stylistic correction process again so the Institute agrees to print it under its seal. This time, we have the hurdle that our correctors will probably hate us because part of the work was done before we had actually agreed on some important Spanish language issues... which differ between Mexico, Argentina and Costa Rica (where the translators are from).

    Anyway — This sets the mood for a great start of the week. Yay!


Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

9 October, 2017 - 03:45

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • SSH host keys are generated on first boot.
  • Old kernel versions are now removed from /boot/firmware when purged.
  • The image is built with vmdb2, the successor to vmdebootstrap. The input files are available at
  • The image uses the linux-image-arm64 4.13.4-3 kernel, which provides HDMI output.
  • The image is now compressed using bzip2, reducing its size to 220M.

A couple of issues remain, notably the lack of WiFi and Bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome!

As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at

To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget
$ bunzip2 2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ sudo dd if=2017-10-08-raspberry-pi-3-buster-PREVIEW.img of=/dev/sdb bs=5M

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is “raspberry”

Joachim Breitner: e.g. in TeX

9 October, 2017 - 02:08

When I learned TeX, I was told to not write e.g. something, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write e.g.\␣.

Years later, I learned from a convincing, but since forgotten, source that in fact e.g.\@ is the proper thing to write. I vaguely remember that e.g.\␣ supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until recently, when I was called out for doing it wrong: in fact, e.g.\␣ is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that e.g.\@ is proper.

What now? I guess I just have to find it out myself.

The problem and two solutions

The above image shows three variants: The obviously broken version with e.g., and the two contesting variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how \@ and \␣ react when the line length changes and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let’s see:

Expanding whitespace, take 1

Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?

Expanding whitespace, take 3

Expanding whitespace, take 4

Again, I see the same behaviour.

Conclusion: It does not matter, but e.g.\␣ is less hassle when using lhs2tex than e.g.\@ (which has to be escaped as e.g.\@@), so the winner is e.g.\␣!

(Unless you put it in a macro, in which case \@ might be preferable; and it is still needed between a capital letter and a sentence-ending period.)
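
For reference, here is a minimal LaTeX sketch of the three variants discussed above (with ␣ written as an actual space):

```latex
% Minimal sketch of the three variants discussed above.
\documentclass{article}
\begin{document}
Broken: e.g. an apple.        % TeX treats the period after ``g'' as a sentence end
Variant one: e.g.\ an apple.  % backslash-space forces an ordinary inter-word space
Variant two: e.g.\@ an apple. % \@ resets the space factor, so the space is inter-word
\end{document}
```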

Daniel Pocock: A step change in managing your calendar, without social media

9 October, 2017 - 00:36

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and WordPress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites, like Agenda du Libre and GriCal, which aggregate events from multiple communities so that people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
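
Such a reminder is just a VTODO component in an iCalendar file. A minimal sketch in Python (the event name, UID and deadline are invented for illustration):

```python
# Sketch: emit an iCalendar VTODO for a talk-proposal deadline.
# The event name, UID and date below are invented examples.
from datetime import datetime, timezone

def make_vtodo(uid: str, summary: str, due: datetime) -> str:
    """Build a minimal iCalendar document containing one VTODO."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//event-reminders//EN",
        "BEGIN:VTODO",
        f"UID:{uid}",
        f"SUMMARY:{summary}",
        f"DUE:{due.strftime('%Y%m%dT%H%M%SZ')}",
        "END:VTODO",
        "END:VCALENDAR",
        "",
    ])

ics = make_vtodo("cfp-minidebconf@example.org",
                 "Submit talk proposal: MiniDebConf",
                 datetime(2017, 11, 1, tzinfo=timezone.utc))
assert "BEGIN:VTODO" in ics
```

A calendar client subscribed to such a feed would then surface the deadline alongside the user's other to-dos.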

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina. The pizzas are huge, but that didn't stop them disappearing before we finished the photos:

Ricardo Mones: Cannot enable. Maybe the USB cable is bad?

8 October, 2017 - 23:17
One of the reasons that made me switch my old 17" BenQ monitor for a Dell U2413 three years ago was that it had an integrated SD card reader. I find it very convenient to take the camera's card out, plug it into the monitor and click on the KDE device monitor's “Open with digiKam” option to download the photos or videos.

But last week, reconnecting the USB cable to the new board just didn't work, and the kernel log messages were not very hopeful:

[190231.770349] usb 2-2.3.3: new SuperSpeed USB device number 15 using xhci_hcd
[190231.890439] usb 2-2.3.3: New USB device found, idVendor=0bda, idProduct=0307
[190231.890444] usb 2-2.3.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[190231.890446] usb 2-2.3.3: Product: USB3.0 Card Reader
[190231.890449] usb 2-2.3.3: Manufacturer: Realtek
[190231.890451] usb 2-2.3.3: SerialNumber: F141000037E1
[190231.896592] usb-storage 2-2.3.3:1.0: USB Mass Storage device detected
[190231.896764] scsi host8: usb-storage 2-2.3.3:1.0
[190232.931861] scsi 8:0:0:0: Direct-Access     Generic- SD/MMC/MS/MSPRO  1.00 PQ: 0 ANSI: 6
[190232.933902] sd 8:0:0:0: Attached scsi generic sg5 type 0
[190232.937989] sd 8:0:0:0: [sde] Attached SCSI removable disk
[190243.069680] hub 2-2.3:1.0: hub_ext_port_status failed (err = -71)
[190243.070037] usb 2-2.3-port3: cannot reset (err = -71)
[190243.070410] usb 2-2.3-port3: cannot reset (err = -71)
[190243.070660] usb 2-2.3-port3: cannot reset (err = -71)
[190243.071035] usb 2-2.3-port3: cannot reset (err = -71)
[190243.071409] usb 2-2.3-port3: cannot reset (err = -71)
[190243.071413] usb 2-2.3-port3: Cannot enable. Maybe the USB cable is bad?

I was sure the USB 3.0 ports were working, because I had already used them with a USB 3.0 drive, so my first thought was that the monitor's USB hub had failed. But it seemed unlikely that a cable which had not been moved in 3 years was suddenly failing; is that even possible?

But a few moments later the same cable plugged into a USB 2.0 port worked flawlessly and all photos could be downloaded, just noticeably slower.

A bit confused, and thinking that, since everything else was working, maybe the cable had to be replaced, I happened to upgrade the system in the meantime. And luck came to the rescue, because now it works again with the 4.9.30-2+deb9u5 kernel. Looking at the package changelog, it seems the fix was “usb:xhci:Fix regression when ATI chipsets detected”. So, not a bad cable but a little kernel bug ;-)

Thanks to all involved, specially Ben for the package update!

Iain R. Learmonth: Free Software Efforts (2017W40)

8 October, 2017 - 23:00

Here’s my weekly report for week 40 of 2017. In this week I have looked at censorship in Catalonia and had my “deleted” Facebook account hacked (which made HN front page). I’ve also been thinking about DRM on the web.


I have prepared and uploaded fixes for the measurement-kit and hamradio-maintguide packages.

I have also sponsored uploads for gnustep-base (to experimental) and chkservice.

I have given DM upload privileges to Eric Heintzmann for the gnustep-base package, as he has shown that he cares for the GNUstep packages well. In the near future, I think we’re looking at a transition for gnustep-{base,back,gui}, as these packages all have updates.

Bugs filed: #877680

Bugs closed (fixed/wontfix): #872202, #877466, #877468

Tor Project

This week I have participated in a discussion around renaming the “Operations” section of the Metrics website.

I have also filed a new ticket on Atlas, which I am planning to implement, to link to the new relay lifecycle post on the Tor Project blog if a relay is less than a week old to help new relay operators understand the bandwidth usage they’ll be seeing.

Finally, I’ve been hacking on a Twitter bot to tweet factoids about the public Tor network. I’ve detailed this in a separate blog post.

Bugs closed (fixed/wontfix): #23683

Sustainability


I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remains £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

Iain R. Learmonth: Tor Relays on Twitter

8 October, 2017 - 21:00

A while ago I played with a Twitter bot that would track radio amateurs using a packet radio position reporting system, tweeting their location, a picture from Flickr that was taken near their location, and a link to their packet radio activity. It’s really not that hard to put these things together, and they can be a lot of fun. The tweets looked like this:

VK4CVL is mobile near Chapel Hill, Australia #hamradio #hamr

— HamLocator (@HamLocator) June 12, 2015

This isn’t about building a system that serves any critical purpose, it’s about fun. As the radio stations were chosen essentially at random, there could be some cool things showing up that you wouldn’t otherwise have seen. Maybe you’d spot a callsign of a station you’ve spoken to before on HF or perhaps you’d see stations in areas near you or in cool places.

On Friday evening I had a go at hacking together a bot for Tor relays, the idea being to post regular snippets of information from the Tor network in which perhaps you’ll spot something insightful or interesting. Not every tweet is going to be amazing, but it wasn’t running for very long before I spotted a relay very close to its 10th birthday:

esko in Finland started contributing bandwidth to the #Tor network 9 years and 51 weeks ago

— Tor Atlas (@TorAtlas) October 7, 2017

The relays are chosen at random, and tweet templates are chosen at random too. So far, tweets about individual relays can be about age or current bandwidth contribution to the Tor network. There are also tweets about how many relays run in a particular autonomous system (again, chosen at random) and tweets about the total number of relays currently running. The total relays tweets come with a map:

There are currently 6638 #Tor relays running.

— Tor Atlas (@TorAtlas) October 7, 2017

The maps are produced using xplanet. The Earth will rotate to show the current side in daylight at the time the tweet is posted.
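
The random relay/template mechanism can be sketched in a few lines of Python (the relay data and wording here are invented, not the bot’s actual code):

```python
# Sketch of the bot's core loop: pick a random relay and a random
# template, then fill it in.  The data and wording are invented.
import random

templates = [
    "{nickname} in {country} started contributing bandwidth to the "
    "#Tor network {age}",
    "{nickname} currently contributes {bandwidth} to the #Tor network",
]
relays = [
    {"nickname": "esko", "country": "Finland",
     "age": "9 years and 51 weeks ago", "bandwidth": "10 MB/s"},
]

relay = random.choice(relays)
tweet = random.choice(templates).format(**relay)
assert "esko" in tweet
```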

Unfortunately, the bot currently cannot tweet as the account has been suspended. You should still be able to though and tweets will begin appearing again once I’ve resolved the suspension.

I plan to rewrite the mess of cron-activated Python scripts into a coherent Python (or maybe Java) application and publish the sources soon. There are also a number of new tweet templates I’d like to explore, including the number of relays and bandwidth contributed per family, and statistics on operating system diversity.

Update (2017-10-08): The @TorAtlas account should now be unsuspended.

Thomas Lange: FAI 5.4 enters the embedded world

8 October, 2017 - 20:05

Since DebConf 17 I have been working on cross-architecture support for FAI. The new FAI release supports creating cross-architecture disk images; for example, you can build an image for Arm64 (aarch64) on a host running 64-bit x86 Linux (amd64) in about 6 minutes.

The release announcement has more details, and I also created a video showing the build process for an Arm64 disk image and booting this image using Qemu.

I'm happy to join the Debian cloud sprint in a week, where more FAI related work is waiting.

FAI embedded ARM

Chris Lamb: python-gfshare: Secret sharing in Python

7 October, 2017 - 17:12

I've just released python-gfshare, a Python library that implements Shamir’s method for secret sharing, a technique to split a "secret" into multiple parts.

A configurable number of those parts is then needed to recover the original file, but any smaller combination of parts is useless to an attacker.

For instance, you might split a GPG key into a “3-of-5” share, putting one share on each of three computers and two shares on a USB memory stick. You can then use the GPG key on any of those three computers using the memory stick.

If the memory stick is lost you can ultimately recover the key by bringing the three computers back together again.

For example:

$ pip install gfshare
>>> import gfshare
>>> shares = gfshare.split(3, 5, b"secret")
>>> shares
{104: b'1\x9cQ\xd8\xd3\xaf',
 164: b'\x15\xa4\xcf7R\xd2',
 171: b'>\xf5*\xce\xa2\xe2',
 173: b'd\xd1\xaaR\xa5\x1d',
 183: b'\x0c\xb4Y\x8apC'}
>>> gfshare.combine(shares)
b'secret'

After removing two "shares" we can still reconstruct the secret, as we still have 3 of the original 5:

>>> del shares[104]
>>> del shares[171]
>>> gfshare.combine(shares)
b'secret'

Under the hood it uses Daniel Silverstone’s libgfshare library. The source code is available on GitHub as is the documentation.

Patches welcome.

Scarlett Clark: KDE at #UbuntuRally in New York! KDE Applications snaps!

7 October, 2017 - 03:50

KDE at #UbuntuRally New York

I was happy to attend Ubuntu Rally last week in New York with Aleix Pol to represent KDE. We were able to accomplish many things during the week, and that is a result of having direct contact with Snap developers. So a big thank you to Canonical for sponsoring me. I now have all of the KDE core applications, and many KDE extragear applications, in the edge channel looking for testers. I have also made a huge dent in the massive KDE PIM snap! I hope to have it done by the weekend. Most of our issue list made it onto TO-DO lists 🙂 So from a KDE perspective, this sprint was a huge success!

Raphaël Hertzog: My Free Software Activities in September 2017

6 October, 2017 - 15:30

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10.5h. During this time, I continued my work on exiv2. I finished reproducing all the issues and then went on to do code reviews to confirm that the vulnerabilities were not present when an issue was not reproducible. I found two CVEs where the vulnerability was present in the wheezy version, and I posted patches in the upstream bug tracker: #57 and #55.

Then another batch of 10 CVEs appeared and I started the process over… I’m currently trying to reproduce the issues.

While doing all this work on exiv2, I also discovered that the package in experimental fails to build (reported here).

Misc Debian/Kali work

Debian Live. I merged 3 live-build patches prepared by Matthijs Kooijman and added an armel fix to cope with the rename of the orion5x image into the marvell one. I also uploaded a new live-config to fix a bug with the keyboard configuration. Finally, I released a new live-installer udeb to cope with a recent live-build change that broke the locale selection during the installation process.

Debian Installer. I prepared a few patches on pkgsel to merge a few features that had been added to Ubuntu, most notably the possibility to enable unattended-upgrades by default.

More bug reports. I dug much further into my problem with non-booting qemu images when they are built by vmdebootstrap in a chroot managed by schroot (cf. #872999), and while we have much more data, it’s not yet clear why it doesn’t work. But we have a working work-around…

While investigating issues seen in Kali, I opened a bunch of reports on the Debian side:

  • #874657: pcmanfm: should have explicit recommends on lxpolkit | polkit-1-auth-agent
  • #874626: bin-nmu request to complete two transitions and bring back some packages in testing
  • #875423: openssl: Please re-enable TLS 1.0 and TLS 1.1 (at least in testing)

Packaging. I sponsored two uploads (dirb and python-elasticsearch).

Debian Handbook. My work on updating the book mostly stalled. The only thing I did was to review the patch about wireless configuration in #863496. I must really get back to work on the book!


See you next month for a new summary of my activities.


Ross Gammon: My FOSS activities for August & September 2017

6 October, 2017 - 02:35

I am writing this from my hotel room in Bologna, Italy before going out for a pizza. After a successful Factory Acceptance Test today, I might also allow myself to celebrate with a beer. But anyway, here is what I have been up to in the FLOSS world for the last month and a bit.

  • Uploaded gramps (4.2.6) to stretch-backports & jessie-backports-sloppy.
  • Started working on the latest release of node-tmp. It needs further work due to new documentation being included etc.
  • Started working on packaging the latest goocanvas-2.0 package. Everything is ready except for producing some autopkgtests.
  • Moved node-coffeeify from experimental to unstable.
  • Updated the Multimedia Blends Tasks with all the latest ITPs etc.
  • Reviewed doris for Antonio Valentino, and sponsored it for him.
  • Reviewed pyresample for Antonio Valentino, and sponsored it for him.
  • Reviewed a new parlatype package for Gabor Karsay, and sponsored it for him.
  • Successfully did my first merge using git-ubuntu for the Qjackctl package. Thanks to Nish for patiently answering my questions, reviewing my work, and sponsoring the upload.
  • Refreshed the gramps backport request to 4.2.6. Still no willing sponsor.
  • Tested Len’s rewrite of ubuntustudio-controls, adding a CPU governor option in particular. There are a couple of minor things to tidy up, but we have probably missed the chance to get it finalised for Artful.
  • Tested the First Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes. Also drafted my first release announcement on the Ubuntu Studio website, which Eylul reviewed and published.
  • Refreshed the ubuntustudio-meta package and requested sponsorship. This was done by Steve Langasek. Thanks Steve.
  • Tested the Final Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes.
  • Started working on a new Carla package, starting from where Víctor Cuadrado Juan left it (ITP in Debian).

Wouter Verhelst: Patching Firefox

5 October, 2017 - 21:49

At work, I help maintain a smartcard middleware that is provided to Belgian citizens who want to use their electronic ID card to, e.g., log on to government websites. This middleware is a piece of software that hooks into various browsers and adds a way to access the smartcard in question, through whatever APIs the operating system and the browser in question provide for that purpose. The details of how that is done differ between each browser (and in the case of Google Chrome, for the same browser between different operating systems); but for Firefox (and Google Chrome on free operating systems), this is done by way of a PKCS#11 module.

For Firefox 57, mozilla decided to overhaul much of their browser. The changes are massive and, in some ways, revolutionary. It's no surprise, therefore, that some of the changes break compatibility with older things.

One of the areas in which breaking changes were made is in the area of extensions to the browser. Previously, Firefox had various APIs available for extensions; right now, all APIs apart from the WebExtensions API are considered "legacy" and support for them will be removed from Firefox 57 going forward.

Since installing a PKCS#11 module manually is a bit complicated, and since the legacy APIs provided a way to do so automatically (provided the user first installed an add-on, or the installer of the PKCS#11 module sideloaded it), most parties who provide a PKCS#11 module for use with Firefox also provide an add-on to install it automatically. Since the alternative involves entering the right values in a dialog box that's hidden away somewhere deep in the preferences screen, the add-on option is much more user-friendly.

I'm sure you can imagine my dismay when I found out that there was no WebExtensions API to provide the same functionality. So, after asking around a bit, I filed bug 1357391 to get a discussion started. While it took some convincing initially to get people to understand the reasons for wanting such an API, eventually the bug was assigned the "P5" priority -- essentially, a "we understand the need and won't block it, but we don't have the time to implement it. Patches welcome, though" statement.

Since having an add-on was something that work really wanted, and since I had the time, I got the go-ahead from management to look into implementing the required code myself. It became obvious rather quickly that my background in Firefox was fairly limited, though, so I was assigned a mentor to help me through the process.

Having been a Debian Developer for the past fifteen years, I do understand how to develop free software. Yet, the experience was different enough that I still learned some new things about free software development, which was somewhat unexpected.

Unfortunately, the process took much longer than I had hoped, which meant that the patch was not ready by the time Firefox 57 was branched off mozilla's "central" repository. The result of that is that while my patch has been merged into what will eventually become Firefox 58, it strongly looks as though it won't make it into Firefox 57. That's going to cause some severe headaches, which I'm not looking forward to; and while I can certainly understand the reasons for not wanting to grant the exception for the merge into 57, I can't help feeling like this is a missed opportunity.
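For the curious, the mechanism that landed for Firefox 58 pairs a small "native manifest" describing the PKCS#11 module with a call from the extension. A rough sketch of such a manifest follows; the module name, description, library path, and extension ID are made-up examples, not the actual Belgian eID values:

  {
    "name": "my_module",
    "description": "Example smartcard PKCS#11 module",
    "type": "pkcs11",
    "path": "/usr/lib/libexample-pkcs11.so",
    "allowed_extensions": ["smartcard-helper@example.org"]
  }

The manifest is dropped into the location Firefox scans for pkcs11 native manifests (on Linux, a pkcs11-modules directory under the Mozilla configuration paths), after which the listed extension can call browser.pkcs11.installModule("my_module") to load the module into the browser, with isModuleInstalled() and uninstallModule() rounding out the API.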

Anyway, writing code for the massive Open Source project that mozilla is has been a load of fun, and in the process I've learned a lot -- not only about Open Source development in general, but also about this weird little thing that Javascript is. That might actually be useful for this other project that I've got running here.

In closing, I'd like to thank Tomislav 'zombie' Jovanovic for mentoring me during the whole process, without whom it would have been doubtful if I would even have been ready by now. Apologies for any procedural mistakes I've made, and good luck in your future endeavours!

Steve Kemp: Tracking aircraft in real-time, via software-defined-radio

5 October, 2017 - 04:00

So my last blog-post was about creating a digital-radio, powered by an ESP8266 device, there's a joke there about wireless-control of a wireless. I'm not going to make it.

Sticking with a theme this post is also about radio, software-defined radio. I know almost nothing about SDR, except that it can be used to let your computer "do stuff" with radio. The only application I've ever read about that seemed interesting was tracking aircraft.

This post is about setting up a Debian GNU/Linux system to do exactly that: show aircraft in real-time above your head! This was almost painless to set up.

  • Buy the hardware.
  • Plug in the hardware.
  • Confirm it is detected.
  • Install the appropriate sdr development-package(s).
  • Install the magic software.
    • Written by @antirez, no less, you know it is gonna be good!

So I bought this USB device from AliExpress for the grand total of €8.46. I have no idea if that URL is stable, but I suspect it is probably not. Good luck finding something similar if you're living in the future!

Once I connected the antenna to the USB stick and inserted it into a spare slot, it showed up in the output of lsusb:

  $ lsusb
  Bus 003 Device 043: ID 0bda:2838 Realtek Semiconductor Corp. RTL2838 DVB-T

In more detail I see the vendor and product IDs:

  idVendor           0x0bda Realtek Semiconductor Corp.
  idProduct          0x2838 RTL2838 DVB-T

So far, so good. I installed the development headers/library I needed:

  # apt-get install librtlsdr-dev libusb-1.0-0-dev

Once that was done I could clone antirez's repository, and build it:

  $ git clone
  $ cd dump1090
  $ make

And run it:

  $ sudo ./dump1090 --interactive --net

This failed initially as a kernel-module had claimed the device, but removing that was trivial:

  $ sudo rmmod dvb_usb_rtl28xxu
  $ sudo ./dump1090 --interactive --net

Once it was running I'd see live updates on the console, every second:

  Hex    Flight   Altitude  Speed   Lat       Lon       Track  Messages Seen
  4601fc          14200     0       0.000     0.000     0     11        1 sec
  4601f2          9550      0       0.000     0.000     0     58        0 sec
  45ac52 SAS1716  2650      177     60.252    24.770    47    26        1 sec

And opening a browser pointing at http://localhost:8080/ would show that graphically, like so:
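Besides the web view, dump1090's --net mode also exposes machine-readable feeds over TCP, including BaseStation/SBS-style CSV messages (conventionally on port 30003). As a minimal sketch of consuming that feed, here is a Python parser for one such CSV message; the field layout follows the common SBS-1 convention, and the sample line is fabricated to match the console output above:

```python
# Minimal parser for BaseStation/SBS-style CSV messages, as emitted by
# dump1090's network mode. Field positions follow the usual SBS-1 layout:
# index 4 is the ICAO hex ident, 10 the callsign, 11 altitude, 12 ground
# speed, 13 track, 14/15 latitude/longitude.

def parse_sbs(line):
    """Return a dict of the interesting fields from one SBS CSV message,
    or None if the line is not a MSG record."""
    fields = line.strip().split(",")
    if len(fields) < 16 or fields[0] != "MSG":
        return None
    return {
        "hexident": fields[4],
        "callsign": fields[10].strip(),
        "altitude": fields[11],
        "speed": fields[12],
        "track": fields[13],
        "lat": fields[14],
        "lon": fields[15],
    }

if __name__ == "__main__":
    sample = ("MSG,3,1,1,45AC52,1,2017/10/05,12:00:00.000,"
              "2017/10/05,12:00:00.000,SAS1716 ,2650,177,47,"
              "60.252,24.770,,,,,,0")
    print(parse_sbs(sample))
```

In a real client you would read lines from a socket connected to localhost:30003 and feed each one to parse_sbs().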

NOTE: In this view I'm in Helsinki, and the airport is at Vantaa, just outside the city.

Of course there are tweaks to be made:

  • With the right udev-rules in place it is possible to run the tool as non-root, and blacklist the default kernel module.
  • There are other, more up-to-date forks of the dump1090 software to explore.
  • SDR can do more than track planes.
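As a sketch of the first tweak, something like the following should work; the vendor/product IDs are the ones from the lsusb output above, while the file names and the plugdev group are arbitrary choices:

  # /etc/udev/rules.d/20-rtlsdr.rules - let members of "plugdev" use the stick
  SUBSYSTEM=="usb", ATTRS{idVendor}=="0bda", ATTRS{idProduct}=="2838", MODE="0660", GROUP="plugdev"

  # /etc/modprobe.d/blacklist-rtlsdr.conf - stop the DVB-T driver grabbing it
  blacklist dvb_usb_rtl28xxu

After reloading the udev rules and re-plugging the stick, dump1090 can run without sudo, and the rmmod step above is no longer needed.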

Daniel Silverstone: F/LOSS (in)activity, September 2017

4 October, 2017 - 19:53

In the interests of keeping myself "honest" regarding F/LOSS activity, here's a report; sadly, it's not very good.

Unfortunately, September was a poor month for me in terms of motivation and energy for F/LOSS work. I did some amount of Gitano work, merging a patch from Richard Ipsum for help text of the config command. I also submitted another patch to the STM32F103xx Rust repository, though it wasn't a particularly big thing. Otherwise I've been relatively quiet on the Rust/USB stuff and have otherwise kept away from projects.

Sometimes one needs to take a step away from things in order to recuperate and care for oneself rather than the various demands on one's time. This is something I had been feeling I needed for a while, and with a lack of motivation toward the start of the month I gave myself permission to take a short break.

Next weekend is the next Gitano developer day and I hope to pick up my activity again then, so I should have more to report for October.


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.