Planet Debian


Russell Coker: KMail Crashing and LIBGL

3 November, 2019 - 07:39

One problem I’ve had recently on two systems with NVidia video cards is KMail crashing (SEGV) while reading mail. Sometimes it goes for months without having problems, and then it gets into a state where reading a few messages (or sometimes reading one particular message) causes a crash. The crash happens somewhere in the Mesa library stack.

In an attempt to investigate this I tried running KMail via ssh (as that precludes a lot of the GL stuff), but that crashed in a different way (I filed an upstream bug report [1]).

I have discovered a workaround for this issue: setting the environment variable LIBGL_ALWAYS_SOFTWARE=1 makes things work. At this stage I can’t be sure exactly where the problems are. As it’s certain KMail operations that trigger it, I think that’s evidence of problems originating in KMail, but the end result when it happens often includes a kernel error log, so there’s probably a problem in the Nouveau driver as well. I spent quite a lot of time investigating this, including recompiling most of the library stack in debugging mode, and didn’t get much of a positive result. Hopefully putting it out there will help the next person who has such issues.

Here is a list of environment variables that can be set to debug LIBGL issues (strangely I couldn’t find documentation on this when Googling it). If you are stuck with a problem related to LIBGL you can try setting each of these to “1” in turn and see if it makes a difference. That can either be for the purpose of debugging a problem or creating a workaround that allows you to run the programs you need to run. I don’t know why GL is required to read email.
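As a minimal sketch of the workaround, here is one way to force Mesa's software rasteriser for a single program at a time. The libgl_soft helper function is mine, not from the original post; kmail stands in for whatever GL-using program misbehaves, and LIBGL_ALWAYS_SOFTWARE is a standard Mesa variable:

```shell
# Wrap the launch of a GL-using program with software rendering forced.
# libgl_soft is a hypothetical helper; kmail is just an example target.
libgl_soft() {
    LIBGL_ALWAYS_SOFTWARE=1 "$@"
}

# Usage would be: libgl_soft kmail
# Confirm the variable actually reaches the child process:
libgl_soft sh -c 'echo "LIBGL_ALWAYS_SOFTWARE=$LIBGL_ALWAYS_SOFTWARE"'
```

The same wrapper pattern works for trying the other Mesa libGL variables one at a time, as described above.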



Dirk Eddelbuettel: binb 0.0.5: More improvements

2 November, 2019 - 19:25

The fifth release of the binb package just arrived on CRAN. binb regroups four rather nice themes for writing LaTeX Beamer presentations much more easily in (R)Markdown. As a teaser, a quick demo combining all four themes follows; documentation and examples are in the package.
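For readers new to the package, selecting one of the themes from (R)Markdown is just a front-matter switch; a minimal sketch, assuming binb's output formats follow the binb::<theme> naming (as with binb::monash for the Monash theme; title and author are placeholders):

```yaml
---
title: "My talk"
author: "A. Presenter"
output: binb::monash
---
```

Rendering with rmarkdown then produces the Beamer PDF as usual.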

This release contains some nice extensions to the Monash theme by Rob Hyndman. You can see a longer demo in this pdf and the extended options (i.e. for the titlepage) in this pdf. David Selby also corrected a minor internal wart in Presento.

Changes in binb version 0.0.5 (2019-11-02)
  • The Monash theme was updated with new titlepage and font handling and an expanded demo (Rob in #20).

  • The presento theme is now correctly labeled as exported (David Selby in #22).

  • The two Monash demos are now referenced from (Dirk).

CRANberries provides a summary of changes to the previous version. For questions or comments, please use the issue tracker at GitHub.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: ∑(No,12k,Lg,17Mif) New Order + Liam Gillick: So It Goes..

1 November, 2019 - 22:56

For the Manchester International Festival in 2017, New Order teamed up with visual designer Liam Gillick to conceive of a new and very visual way of presenting their material. They performed a short run of concerts with a 12-piece accompanying "synth orchestra", housed in a specially designed stage that was more than a little influenced by Kraftwerk. For the set list, New Order deviated significantly from their usual staples, pulling some deep cuts and re-arranging them all for the five-piece band plus 12-piece orchestra set up.

The stage set-up

How the shows were designed, built and delivered, as well as some background on the synth orchestra performers, their rehearsals, and good coverage of the visual elements of the show: these aspects were all covered pretty well in a documentary called "Decades" that aired on Sky Arts and a few other places but is not in general circulation.

And so it came as a little bit of a surprise that the band's official release marking this event is an audio only CD/digital/vinyl double (or triple) album. The booklet that accompanies the physical releases features some photos of the stage set up but doesn't really do a good enough job of capturing that aspect of the work.

So on to the music. The set list for the shows was largely static (as you might expect) and, for a fan wanting to hear something a bit different from the set you'd expect at a "regular" New Order live gig, very compelling. There's no Blue Monday here; not even Temptation, although the regular favourite Bizarre Love Triangle does sneak in, as does one of my personal favourites Your Silent Face. However, their presentation, at least without the visual element, is pretty much exactly what you'd expect from a regular gig: BLT is re-arranged along the lines of the classic Richard X remix, and YSF's arpeggio is the more simplistic one that has always been played live, relative to the record, but Bernard's melodica solo is significantly extended. You can enjoy almost the same arrangements on the recent NOMC15 live album.

BLT does end with the lovely synth orchestra motif from Vanishing Point, the next song on the set list, and the two are segued together, although not entirely seamlessly, as the motif stops for a few bars before resuming a few octaves higher.

For the less common songs, it's a great pleasure to hear, in particular, Ultraviolence, In a Lonely Place, All Day Long and Decades. For some inexplicable reason IaLP is performed a measure or two quicker than the single version, which frustrates me. I'm tempted to slow it down in a DAW and see if it sounds better. Of all the arrangements, Decades benefits the most from the synth orchestra, sounding sublime. Bernard's vocal delivery also works best on the Joy Division tracks from the set (the other ones are Disorder and Heart and Soul). He's not having one of his best days on most of the others.

To wrap up: the best way to have experienced these shows would undoubtedly have been to attend them in person, which I sadly was unable to do. Failing that, it seems very strange for the official record of what took place to be, well, a record, rather than a video, especially since there is a video in the form of the Sky Arts documentary. Having said all that, there are some lovely arrangements captured here, even if the ebb-and-flow of the set list as sequenced is unusual.

Mike Gabriel: My Work on Debian LTS/ELTS (October 2019)

1 November, 2019 - 21:47

In October 2019, I have worked on the Debian LTS project for 11.75 hours (of 11.75 hours planned) and on the Debian ELTS project for 0 hours (of 5 hours planned) as a paid contributor. I have given back those 5 ELTS hours to the pool.

LTS Work
  • Work on a pre-OpenSSL-1.0.2 patch, adding hostname validation support to imapfilter as found in Debian jessie (built against OpenSSL 1.0.1t) [1]
  • File a Github PR against imapfilter upstream that got OpenSSL versioned #ifdef'ed code sections straight [2]
  • Upload imapfilter 2.5.2-2+deb8u1 to jessie-security (DLA-1976-1)
  • Upload libvncserver 0.9.9+dfsg2+deb8u6 to jessie-security (DLA-1977-1)
  • Do a security audit of libvncserver-derived packages in Debian [5]
  • Upload italc 1:2.0.2+dfsg1-2+deb8u1 to jessie-security (DLA-1979-1) [6]

In fact, preparing the italc security upload took more time (an extra 1.7h) than I had available for my LTS work in October. I will carry these 1.7h over to November and invoice them then.

In November, I plan to follow up on the VNC security audit and prepare several VNC-related package uploads to Debian jessie LTS. I will also work on .debdiff patches for the package versions in stretch, buster and unstable.

As a first action, I will likely NMU-upload a new upstream release of libvncserver to unstable the coming week [7].

ELTS Work
  • I did not do any ELTS work in October 2019.

Steve Kemp: Keeping a simple markdown work-log, via emacs

1 November, 2019 - 20:30

For the past few years I've been keeping a work-log of everything I do. I don't often share these, though it is sometimes interesting to be able to paste into a chat-channel "Oh on the 17th March I changed that .."

I've had a couple of different approaches but for the past few years I've mostly settled upon emacs ~/ I just create a heading for the date and I'm done:

 # 10-03-2019

 * Did a thing.
   * See this link
 * Did another thing.

 ## Misc.

 Happy Birthday to me.

As I said I've been doing this for years, but it was only last week that I decided to start making it more efficient. Since I open this file often I should bind it to a key:

(defun worklog ()
  (interactive "*")
  (find-file "~/Work.MD"))

(global-set-key (kbd "C-x w") 'worklog)

This allows me to open the log by just pressing C-x w. The next step was to automate the headers. So I came up with a function which will search for today's date, adding it if missing:

(defun worklog-today ()
  "Move to today's date; if it isn't found then append it."
  (interactive "*")
  (goto-char (point-min))
  (unless (search-forward (format-time-string "# %d-%m-%Y") nil t 1)
    (goto-char (point-max))
    (insert (format-time-string "\n\n# %d-%m-%Y\n"))))

Now we use some magic to make this function run every time I open ~/Work.MD:

(defun worklog_hook ()
  (when (equalp (file-name-nondirectory (buffer-file-name)) "Work.MD")
    (worklog-today)))

(add-hook 'find-file-hook 'worklog_hook)

Finally there is a useful package imenu-list which allows you to create an inline sidebar for files. Binding that to a key allows it to be toggled easily:

    (add-hook 'markdown-mode-hook
     (lambda ()
      (local-set-key (kbd "M-'") 'imenu-list-smart-toggle)))

The end result is a screen that looks something like this:

If you have an interest in such things I store my emacs configuration on github, in a dotfile-repository. My init file is written in markdown, which makes it easy to read:

Junichi Uekawa: I've learnt how to write Dockerfile.

1 November, 2019 - 19:18
I've learnt how to write Dockerfile. But I feel I am yet to learn how to run them.

Russ Allbery: Review: Sweep of the Blade

1 November, 2019 - 10:26

Review: Sweep of the Blade, by Ilona Andrews

Series: Innkeeper Chronicles #4
Publisher: NYLA
Copyright: 2019
ISBN: 1-64197-104-5
Format: Kindle
Pages: 310

Sweep of the Blade is the fourth book of the Innkeeper Chronicles and the first with a main character other than Dina. The identity of that main character and the nature of the plot are spoilers for parts of One Fell Sweep, the previous book in the series, so don't read further in this review if you would prefer to avoid those.

Maud was the wife of a vampire (vampires in this series being, as in previous books, more like religious Klingons than the typical fantasy creature). She learned their forms of combat, their languages, and their customs. Her behavior and willingness to fit in were unquestionable. For her trouble, she and her daughter were treated like curiosities and trophies, pawns in her husband's ambition. His schemes left her exiled and fighting for her and her daughter's life on a lawless desert planet. By the time she made her escape (in One Fell Sweep), she was utterly done with vampires. But then she met Arland.

Arland, being the Arland that readers of this series have come to know, is determined to win her heart and her hand. Her heart may be a lost cause. Her hand is another matter. Maud had promised herself to never return to the Holy Anocracy to be rejected a second time, or to return Helen to a life of being bullied as a mongrel child. Arland nonetheless convinces her to meet his formidable family, determined to convince her to marry him.

Maud's requirement is that they take her and Helen on their own terms, or not at all.

This is not the sort of book you read for the plot twists and surprise endings. If you've read much at all, you're going to know the ending within the first few chapters, and are likely to be able to predict most of the story beats. If that's not what you're in the mood for, if you want something less expected and more uncertain, save this for another time. But if you're in the mood for a kick-ass heroine facing down assholes and winning the respect of all the people who matter while protecting her adorably feral child (and it takes a lot for me to like child characters), this is great.

The reader knows, going into this book, that Maud is worthy of Arland, and that Arland can probably manage to be worthy of Maud. The authors know that the reader knows and don't call that into question. The only question is exactly how they're going to demonstrate that, how long it will take the other vampires to figure it out, and what Maud will do to her enemies along the way. The setting is a complex inter-species negotiation mixed with vampire politics and a somewhat strained conspiracy, which gives Maud plenty of opportunity to use her innkeeper background and the vampires lots of opportunities to utterly fail to understand people who are not vampires. The fun is in Maud letting people underestimate her until just the right moment, and then proving she's the smartest and most capable person in the book.

This series as a whole is great fun, but I think this is my favorite book to date. That's not because of complex plots or detailed world-building (the setting is fun but composed of not horribly original components), but because it's the kind of series that unabashedly delivers on a guaranteed reading experience. I wanted to read a book where I knew the good guys were going to win and all the important people were going to learn to like the protagonist as much as I did, where I could just relax into the story and enjoy myself and grin at Maud cutting down (metaphorically or literally) people who thoroughly deserved it.

Also, Helen is an absolute delight. She's fearless, sincere, and childish in turn, and in just the right amount. Her relationship with Maud has just the right mix of protectiveness and trust. A whole book of the two of them together is a wonderful treat.

If you too are in the mood for that sort of uncomplicated story, this (and the whole series) delivers exactly what it promises on the tin. Highly recommended.

This entry doesn't advance the over-arching series plot all that much, but the teaser at the end of the story implies the next book will. As of this writing, it doesn't yet have a title.

Rating: 8 out of 10

Kurt Kremitzki: Halloween Update for FreeCAD & Debian Science Work

1 November, 2019 - 10:04

There's been a spooky amount of activity in the FreeCAD & Debian Science world since I last wrote! Because this update covers August, September, and October, I'll try to be brief and only touch on the key points.


Staging & Merging App::Link Functionality

The "App::Link" object allows lightweight linking of objects in a document and from external documents.

In August, a major milestone towards unified, mainline mechanical assembly functionality in FreeCAD was reached.

One of the core challenges in implementing assembly functionality is the problem of topological naming. In a CAD model there are topological entities, such as solids, faces, edges, and vertices. We must choose some algorithm to name them so that you can refer to them when defining assembly relationships. A simple example would be two cubes, connected by touching faces. If a parameter in your model changes and, after recalculation, your "Face_N" ends up on the wrong side of the cube, your assembly may break or not be what you expect. Without a good approach to topological naming, parametric FreeCAD models won't be robust to changes and recalculations, which defeats the purpose of parametric modeling.

Because this is such a difficult problem, progress has been slow. However, recently a relatively new FreeCAD developer, 'realthunder', put significant work towards this problem, with a solution finally on the horizon. Because it required major changes to FreeCAD's internals, the review and testing period was and continues to be lengthy.

The milestone in particular was merging of a large chunk of this code, referred to in short as "App::Link functionality". This diff is huge: +70,441 -14,562 lines of C++. You can read more about it on the pull request itself.

I only played a minor contributing role in this effort, preparing a short-lived staging PPA to provide a package for testers after the pull request was opened in July, but it's rather significant news in terms of the project and so worth spreading. Mechanical assembly is (no surprise) a must-have for mechanical engineers interested in FreeCAD, and it's considered by some to be the last remaining blocker for FreeCAD's 1.0 release.

Extra links: notes for code reviewers from the PR author, for the dedicated readers out there.

FreeCAD Python 2 removal in Debian Testing

The Python 2 removal in Debian Testing continues, and with it, FreeCAD's Python 2 package is gone. However, upon upload, several new build failures appeared on the i386, mipsel, and s390x platforms. It turns out there was a regression in dwz, a tool I had never heard of before. I tried troubleshooting, but in the meantime FreeCAD dropped out of testing due to the Python 2 removal bug filed against it.

Luckily, when I filed a bug against dwz, Matthias Klose and Tom de Vries helped isolate and upstream the problem, with Tom even bisecting the regressing commit in the upstream bug. Thank you, you two!

After adding a workaround to fix those build failures, the new Python 3-only FreeCAD package is on its way to Debian Testing.

FreeCAD Python 3 support added for Ubuntu 16.04

Qt 5 & Python 3 builds for FreeCAD have been available on Ubuntu 18.04 and newer and Debian 10 and newer platforms for some time now. However, Ubuntu 16.04 was problematic due to its old Qt version, which meant a Qt 4, Python 3 build had to be produced. This had been on the back burner for a while because when I initially attempted backporting the necessary packages, I encountered some friction.

However, since it's the last holdout Python 2 platform, I took another look at things to try to speed up the "py2ectomy". It turns out that the missing package needed for a Python 3 build, pivy, was failing to build because of "PIE hardening", a Debian security hardening flag. I had all such flags turned on, so I just had to disable that particular one to get things going.
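Disabling a single hardening feature goes through dpkg's build-flags interface. As a sketch (the exact option set used for pivy may differ), the relevant debian/rules line looks something like:

```make
# debian/rules fragment: keep all hardening features except PIE
export DEB_BUILD_MAINT_OPTIONS = hardening=+all,-pie
```

dpkg-buildflags then omits the PIE-related compiler and linker flags while leaving the rest of the hardening set intact.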

So, Python 3 builds are now available for Ubuntu 16.04 in the FreeCAD Daily Builds PPA, and they will be coming soon to the stable PPA as well. A new bugfix release has been made for stable, 0.18.4, so I am working on that first, and the Python 3 package will come with it.

Netgen and Pybind11 Builds

Good news for users of FreeCAD's finite element modeling workbench!

Integration with Netgen, a mesh generator, has long been an optional but off-by-default build option for FreeCAD, due to packaging difficulties. However, since I have taken over things in recent years, I have finally gotten things to the state where we can turn this back on by default. As part of this change, I am also building FreeCAD with Pybind11 instead of Boost.Python, marking another milestone in managing FreeCAD's dependencies.

Since this may introduce bugs, I've started by making this change for all of FreeCAD's daily builds in the Ubuntu PPA, as well as the package currently in Debian Unstable. Eventually, this change may come to the stable PPA.

OpenCASCADE 7.4.0

An assembly of a single solid box replicated ~93,000 times. This test case is more than 10x faster in OCCT 7.4.0.

After more than a year of development (PDF warning), a new minor version of OpenCASCADE Technology (OCCT) has been released.

OCCT is the geometry & topology kernel of FreeCAD, and it is also a dependency for several related projects including Gmsh, IFCOpenShell, Netgen, and OpenCAMLib. New releases in OCCT generally herald stability and performance upgrades for core behavior. However, there are some breaking changes and so these improvements are yet to be seen.

For the time being, OCCT 7.4.0 packages are available in my OpenCASCADE PPA and by building the package directly from

OpenFOAM 1906

I uploaded the latest version of OpenFOAM, the toolbox for computational fluid dynamics. It's now available in Ubuntu 19.10, Debian Testing, and via the FreeCAD Community Extras PPA.

Gmsh 4.4.1

The latest version of Gmsh, a 3D finite element mesh generator, is also in Ubuntu 19.10, Debian Testing, and the Community Extras PPA. Thanks to Nico Schlömer for helping maintain this package.

Netgen 6.2.1905

This version of Netgen is only available via the FreeCAD Daily and Community Extras PPAs. Unfortunately Netgen has been stuck in the Debian NEW queue for over 8 months now.

GitHub Sponsors Program

I was accepted into the GitHub Sponsors program! GitHub is matching donations for the first year. Hopefully this helps fund my FOSS work, and FOSS work in general.

Thanks for your support

I appreciate any feedback you might have.

You can get in touch with me via Twitter @thekurtwk.

If you'd like to donate to help support my work, there are several methods available on my site.

Paul Wise: FLOSS Activities October 2019

1 November, 2019 - 07:48
Changes

Issues

Review

Administration
  • Debian: restart dead stunnel, report offline machine to hoster
  • Debian mentors: proposed additional admin
  • Debian wiki: welcome folks to the community, whitelist email addresses
Communication

Sponsors

The purple-discord and libapache-mod-auth-kerb work was sponsored by my employer. All other work was done on a volunteer basis.

Thadeu Lima de Souza Cascardo: Overriding ACPI tables on Debian

1 November, 2019 - 05:08

Linux supports overriding your computer's ACPI tables by adding updated tables to the initrd. Check out the documentation in the Linux tree at Documentation/admin-guide/acpi/initrd_table_override.rst.

I just uploaded to Debian a small initramfs-tools hook that adds tables found at /var/lib/acpi-override/ to the initrd. For now, that's all it does. Users have to modify the tables themselves and drop them in that directory.

It should be on the NEW queue and out to unstable after some time, and it's maintained at salsa.

My usecase for this is a small change to the DMAR table of my Libreboot-enabled X200, so I can experiment with IOMMU on it.

As future work, I may consider a tool to help at least update the OEM revision of the tables, to make it less error-prone for users to update them.

I hope this is useful to other people.

Jonathan Wiltshire: Daisy and George’s Corfian Holiday

1 November, 2019 - 03:34

Daisy and George have worked hard all year being diplomats in Arabia, helping test Debian CDs and writing best-selling books. They deserve a holiday!

This resort is beautiful.

George has five different pools to choose from as well as the sea

There’s no shortage of food and drink on the menu at dinner. Daisy has been enjoying sampling the local cocktails.

“Which one do you want to try next, Daisy?”

George is determined to get the hang of some Greek, but he isn’t finding it easy. Perhaps it makes more sense upside-down?

“I don’t think this helps much, George.”


Chris Lamb: Free software activities in October 2019

1 November, 2019 - 02:28

Here is my monthly update covering what I have been doing in the free software world during October 2019 (previous month):

  • Made some changes to my tickle-me-email library which implements Getting Things Done-like behaviours in IMAP inboxes, including ensuring attached files have their "basename" path as the filename metadata, not the full/absolute one passed to the program [...].

  • As part of my duties of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics and policy etc.

  • Opened pull requests to make the build reproducible in:

    • SPIRV-Tools, part of the Khronos 3D graphics processing libraries etc. to ensure a timestamp does not vary with the build timezone. [...]

    • The "stacked" Git stgit tool. [...]

    • The traitlets Python type-checking/enforcement library to make sure that traitlet.Set values are returned in a sorted order. [...]

    • The flask microframework for building Python web applications to make the documentation build reproducibly. [...]

    • The ROS Robot Operating System code generation library for Python to ensure that generated struct constructs are reproducible. [...]

    • khard, a commandline address book utility. [...]

  • Even more hacking on the Lintian static analysis tool for Debian packages:

    • New checks/features:

      • Suggest switching from debian/compat to the debhelper-compat virtual package. (#933304)
      • Warn about missing ${sphinxdoc:Depends} when --with sphinxdoc or dh_sphinxdoc is used. (#940999, #943711)
      • Warn about packages that use the deprecated $ADTTMP autopkgtest variable. [...]
      • Add 4.4.1 as a known Standards-Version. [...]
    • Bug fixes / false-positive corrections:

    • Reporting/output:

    • Misc:

      • Don't build Git tags on salsa. [...]
      • Add a trailing ellipsis to the "Preparing X work directories" message to denote that processing is occurring in the background. [...]
      • Improve the test package generation logging output to include a current/total progress indicator. [...]
      • Refresh data/fields/perl-provides. [...][...]
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
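The idea can be illustrated with the SOURCE_DATE_EPOCH convention the effort standardises. The build_banner function below is a made-up stand-in for a real build step, and GNU date syntax is assumed:

```shell
# A hypothetical build step that stamps a date; it becomes reproducible
# by honouring SOURCE_DATE_EPOCH when that variable is set (GNU date).
build_banner() {
    # Fall back to the current time only when no reproducible date is given.
    epoch="${SOURCE_DATE_EPOCH:-$(date +%s)}"
    date -u -d "@${epoch}" +"Built on %Y-%m-%d"
}

# Two builds with the same source date give byte-identical output:
SOURCE_DATE_EPOCH=0 build_banner   # prints "Built on 1970-01-01"
```

Any third party re-running the build with the same SOURCE_DATE_EPOCH gets the same bytes, which is what makes independent verification possible.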

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • I spent some more time working on our website this month too, including:

    • Improving the formatting of our reports. [...]
    • Adding some missing space. [...]
    • Tidying the new "Testing framework" links. [...]
    • Updating the monthly report template. [...]
  • strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, I dropped the test fixture as it is no longer compatible with the latest version of Perl's Archive::Zip. (#940973)


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, I made the following changes:

  • Disassembling and reporting on files related to the R (programming language):

    • Expose an .rdb file's absolute paths in the semantic/human-readable output, not hidden deep in a hexdump. [...]
    • Rework and refactor the handling of .rdb files with respect to locating the parallel .rdx prior to inspecting the file to ensure that we do not add files to the user's filesystem in the case of directly comparing two .rdb files or — worse — overwriting a file in its place. [...]
    • Query the container for the full path of the parallel .rdx file to the .rdb file as well as looking in the same directory. This ensures that comparing two Debian packages shows any varying path. [...]
    • Correct the matching of .rds files by also detecting newer versions of this file format. [...]
    • Don't read the site and user environment when comparing .rdx, .rdb or .rds files by using Rscript's --vanilla option. [...][...]
    • Ensure all object names are displayed, including ones beginning with a fullstop (.) [...] and sort package fields when dumping data from .rdb files [...].
    • Mask/hide standard error when processing .rdb files [...] and don't include useless/misleading NULL when dumping data from them. [...]
    • Format package contents as foo = bar rather than using ugly and misleading brackets, etc. [...] and include the object's type [...].
    • Don't pass our long script to parse .rdb files via the command line; use standard input instead. [...]
    • Call the deparse function to ensure that we do not error out and revert to a binary diff when processing .rdb files with internal "vector" types; they do not automatically coerce to strings. [...]
    • Other misc/cosmetic changes. [...][...][...]
  • Output/logging:

    • When printing an error from a command, format the command for the user. [...]
    • Truncate very long command lines when displaying them as an external source of data. [...]
    • When formatting command lines ensure newlines and other metacharacters appear escaped as \n, etc. [...][...]
    • When displaying the standard error from commands, ensure we use the escaped version. [...]
    • Use "exit code" over "return code" terminology when referring to UNIX error codes in displayed differences. [...]
  • Internal API:

    • Add ability to pass bytestring input to external commands. [...]
    • Split out command-line formatting into a separate utility function. [...]
    • Add support for easily masking the standard error of commands. [...][...]
    • To match the libarchive container, raise a KeyError exception if we request an invalid member from a directory. [...]
    • Correct string representation output in the traceback when we cannot locate a specific item in a container. [...]
  • Misc:

    • Move build-dependency on python-argcomplete to its Python 3 equivalent to facilitate Python 2.x removal. (#942967)
    • Track and report on missing Python modules. (#72)
    • Move from deprecated $ADTTMP to $AUTOPKGTEST_TMP in the autopkgtests. [...]


I filed two patches against the r-base package for not respecting the nocheck and nodoc build profiles respectively (#942867 & #942870), as well as filing a bug against python3-pluggy for a missing dependency on python3-importlib-metadata (#943320).

Uploads

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

You can find out more about the Debian LTS project via the following video:

FTP Team

As a Debian FTP assistant I ACCEPTed 25 packages: backintime, celery-batches, eslint, golang-github-containers-image, gtk-d, jsbundle-web-interfaces, networkx, node-eslint-plugin-eslint-plugin, node-eslint-plugin-node, node-eslint-scope, node-eslint-visitor-keys, node-esquery, node-file-entry-cache, node-flatted, node-functional-red-black-tree, node-ignore, node-leche, node-mock-fs, node-proxyquire, numpy, openvswitch, puppet-module-voxpupuli-collectd, pyrsistent, python-dbussy & z3.

I additionally filed 5 RC bugs against packages that had potentially-incomplete debian/copyright files against backintime, celery-batches, networkx, openvswitch & z3.

Sylvain Beucler: Debian LTS and ELTS - October 2019

1 November, 2019 - 02:10

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In October, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 22.75h for LTS (out of 30 max) and 20h for ELTS (max).

There was a bit of backlog during my LTS triage week and for once I didn't make a pass at classifying old undetermined issues.

MITRE was responsive for public (non-embargoed) issues in common free software packages when I submitted new references or requested a CVE to identify known issues. There was more ball-passing and delay when another CNA (CVE Numbering Authority) was involved.

Interestingly, some issues were not fixed in LTS because they were marked 'ignored' in later distros (sometimes, regrettably, with no clear rationale), as fixing them would mean a regression when upgrading. It's probably worth checking my past security uploads to see whether such discrepancies appeared (this month's nfs-utils comes to mind; I'll re-offer an oldstable/stable upload next month).

Ubuntu recently started a post-LTS extended security support as well, with private updates. For now it's not clear whether we can get access to ease cooperation.

The last uploads I did took me some more hours than expected, so I'm a bit over my time - that means I have a few hours in advance for next month (not accounted for above).

ELTS - Wheezy

  • CVE-2019-16928/exim4: triage: not-affected
  • CVE-2019-15165/libpcap: security upload
  • libpcap: triage: other vulnerabilities not-affected
  • CVE-2019-3689/nfs-utils: proposed patch and testing procedure to upstream and sid/salsa, ping, security upload
  • CVE-2019-3689/nfs-utils: update security tracker: proposed fs.protected_symlinks mitigation is not valid as /var/lib/nfs has no sticky bit; coordinate with MITRE and SuSE to update CVE
  • CVE-2019-17041,CVE-2019-17042/rsyslog: security upload, clarify triage description
  • CVE-2019-17040/rsyslog: triage: not-affected
  • CVE-2019-14287/sudo: request backport to maintainer, security upload
  • CVE-2019-17544/aspell: security upload
  • CVE-2019-11043/php5: security upload, provide feedback about applicability to cgi
  • CVE triage week part 1
    • CVE-2019-13464/modsecurity-crs: triage: not-affected (affected rule is not present)
    • CVE-2019-14847,CVE-2019-14833/samba: triage: not-affected
    • CVE-2019-10218/samba: triage: affected
    • CVE-2019-14866/cpio: triage: affected
    • CVE-2018-21029/systemd: triage: not-affected

LTS - Jessie

  • Front-Desk week
    • firefox: ping i386 build status following user request
    • CVE-2019-3689/nfs-utils: triage: affected
    • CVE-2019-16723/cacti: triage: affected
    • CVE-2019-16892/ruby-zip: triage: postponed (minor issue, fix is zip bomb mitigation not enabled by default)
    • CVE-2018-21016,CVE-2018-21015/gpac: triage: postponed (minor issue, local DoS)
    • CVE-2019-13376/phpbb3: triage: reference fixes, request CVE for prior incomplete CSRF fix (SECURITY-188), fix-up confusion following that
    • CVE-2018-20839/xorg-server: re-triage: clarify and mark for later fix
    • CVE-2019-13504/exiv2: update: reference missing patch, check that it's not needed for jessie
    • CVE-2019-14369,CVE-2019-14370/exiv2: triage: not-affected
    • CVE-2019-11755/thunderbird: triage: affected
    • CVE-2019-16370,CVE-2019-15052/gradle: triage: postponed (old gradle mainly used to build Debian packages in a restricted environment)
    • CVE-2019-12412/libapreq2: triage: affected
    • CVE-2019-0193/lucene-solr: triage: affected; research commit for actual fix
    • CVE-2019-12401/lucene-solr: triage: affected; issue potentially in dependencies
    • CVE-2017-18635/novnc: triage: affected
    • CVE-2019-16239/openconnect: triage: affected
    • CVE-2019-14491,CVE-2019-14492,CVE-2019-14493/opencv: triage: postponed (DoS, PoC not crashing)
    • CVE-2019-14850,CVE-2019-14851/nbdkit: triage: ignored (DoS/amplification for specific configuration, non-trivial backport, low popcon)
    • CVE-2019-16910/polarssl: triage: affected, locate and reference patch
    • CVE-2019-16276/golang: triage: affected; later marked ignored, clarify that it's for consistency with later distros
    • CVE-2019-10723/libpodofo: revisit my early triage: ignored->postponed (minor but easy to add in later security upload)
    • DSA 4509-2/subversion: triage: not-affected
    • CVE-2019-8943/wordpress: triage: add precisions
    • CVE-2019-12922/phpmyadmin: triage: postponed (minor issue, unlikely situation); reference patch, reference patch at MITRE, mark unfixed
    • CVE-2019-16910/polarssl: reference patch at MITRE
    • CVE-2019-10219/libhibernate-validator-java: triage: no changes (still no clear information nor patch)
    • CVE-2019-11027/ruby-openid: triage: no changes (still no clear information nor patch)
    • CVE-2019-3685/osc: triage: no changes, report bug to packager, reference BTS
    • CVE-2019-1010091/tinymce: triage: ignored (questionable self-xss)
    • CVE-2019-16866/unbound: triage: not-affected
    • tcpdump,libpcap: triage: affected
    • CVE-2018-16301/libpcap: triage: asked upstream for commit, conclude duplicate, relay info to MITRE (not clear enough for them to mark duplicate AFAICS)
    • CVE-2019-14553/edk2: triage: end-of-life (non-free)
    • CVE-2019-9959/poppler: triage: affected
    • CVE-2019-10871/poppler: triage: cancel postponed (new upstream fix)
    • Remove remaining "not used by any sponsor" justification for Jessie LTS (one left-over from April clean-up)
  • CVE-2019-14287/sudo: security upload
  • CVE-2019-3689/nfs-utils: security upload
  • CVE-2019-11043/php5: security upload


  • Development: add reminder to add package short description / context in security announcements, some team members tend to forget it (myself included)
  • ampache: provide feedback about maintaining support
  • libclamunrar: provide feedback about dropping support

Jonathan McDowell: Getting native IPv6 over Fibre To The Home

1 November, 2019 - 01:35

Last week I changed ISP. My primary reason was to get native IPv6 at home. As a side effect I’ve lowered my monthly costs and moved from VDSL2 (Fibre To The Cabinet/FTTC) to GPON (Fibre To The Premises/FTTP). But trust me when I say the thing that prompted the move was the desire for native v6.

First, some words of thanks to my previous ISP. I was with MCL Services who have been absolutely fantastic; no issues with service, and responsive support when I had queries. The problem was that they’re a Gamma reseller, and Gamma are showing no signs of enabling v6 (I had Daniel poke them several times, because even a rough ETA would have kept me hanging around to see if they made good on it).

What caused me to even start looking elsewhere was BT mailshotting me about the fact I’m in a Fibre First area and FTTP was thus now available to me. They dangled some pretty attractive pricing in front of me (£50/month for 300M/50M). BT have enabled v6 across their consumer network (and should be applauded for that), but unfortunately don’t provide a static v6 range as part of that. One of the things I wanted was to give my internal hosts static IPs. A dynamic range doesn’t allow for that. So BT was a no.

Conveniently enough there’d been a thread on the debian-uk mailing list about server-friendly ISPs. I’m not looking to run services on the end of my broadband line - as long as I can SSH in and provide a basic HTTPS endpoint for some remote services to call in, that’s perfect - but a mention of Aquiss came up as a competent option. I was already aware of them as I know several existing users, and I knew they use Entanet to provide pieces of their service. Enta are long time IPv6 supporters, so I took a look. And discovered that I could move to an equivalent service to what I was on, except over fibre and for cheaper (because there was no need to pay for phone line rental I wasn’t using). No brainer.

So last Thursday an engineer from Openreach turned up. Like last time the job was bigger than expected (I think the Openreach database has just failed to record the fact the access isn’t where they think it is). Also like last time they didn’t just go away, but instead arranged for another engineer to turn up to help with the two-man bit of the job, and got it all done that day. The only worrying bit was when my existing line went down - FTTP is a brand new install rather than a migration - but that turned out to be because they run a new hybrid cable from the pole with both fibre and copper on it. Once the new cable was spliced back in the existing connection came back fine. Total outage was just over an hour - something to be aware of if you’re trying to work from home during the install like I was. Thankfully I have enough spare data on my Three contract that I was able to keep working.

A picture of the ONT as installed is above; it’s a new style one with no battery backup and a single phone port + ethernet port. I had it placed beside my existing master socket, because that’s where everything is currently situated, but I was given the option to have it placed elsewhere. There’s a wall-wart for power, so you do need a free socket. The ethernet port provides a GigE connection (even though my line is currently only configured for 80M/20M), and it does PPPoE - no VLANs or anything required, though you do need the username/password from your ISP for CHAP authentication, which looks exactly like a normal ADSL username/password.

I rejigged my OpenWRT setup so I had a spare port on the HomeHub 5A, then configured up a “wan2” interface with the PPPoE login details and IPv6 enabled:

config interface 'wan2'
    option ifname 'eth0.100'
    option proto 'pppoe'
    option username 'noodles@fttp'
    option password 'gimmev6fttp'
    option ipv6 '1'
    option ip6prefix '2001:xxxx:yyyy:zz00::/56'
    option defaultroute 0

(I’d put the spare port into VLAN 100, hence eth0.100)
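The VLAN side of that, as a hypothetical sketch (the switch name and port numbers here are purely illustrative and entirely device-specific), might look like:

```
# /etc/config/network — tag LAN port 3 and the CPU port into VLAN 100
config switch_vlan
    option device 'switch0'
    option vlan '100'
    option ports '3 6t'
```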

For the moment I’m using the old line for IPv4 (I have a 30 day notice on it) and the new line for just IPv6, hence setting defaultroute to 0. I actually end up with more IPv6 traffic than I’d expect (though there’d be more if my TV did v6 for Netflix):

I had to do a bunch of internal reconfiguration as well; I’d previously used a Hurricane Electric tunnel, but only enabled it for certain hosts (I couldn’t saturate my connection over the tunnel). Now I have native IPv6 I wanted everything configured up properly, with internal DNS properly sorted so internal traffic tried to use v6 where possible. That means my MQTT broker is doing v6 (though unfortunately not for my ESP8266 devices), and I’m accessing my Home Assistant instance over v6 (needed server_host: ::0 in the http configuration section to make it listen on v6, which also stops it listening on v4. Not a problem for me as I front it with an SSL proxy that can do both). Equally SSH to all my internal hosts and containers is now over v6.
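For reference, the Home Assistant change mentioned above amounts to something like this (a sketch; adjust to your own configuration.yaml):

```yaml
http:
  server_host: "::0"   # listen on IPv6 only; front with a dual-stack proxy for v4
```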

Of course, ultimately there’s no real externally visible indication that things are using IPv6, even for the external bits. Which is exactly as it should be.

John Goerzen: TCP/IP over LoRa radios

31 October, 2019 - 07:28

As I wrote yesterday, I have been experimenting with LoRa radios. Today, I got TCP/IP working over them!

The AX.25 protocol did indeed turn out to be well-suited to this. It’s simple and works. The performance is, predictably, terrible; ping times around 500-600ms, but it does work. I fired up ssh, ran emacs, did a bit with bash, and — yep! Very cool. I tried mosh as well, thinking it would be great for this, but for some reason it just flooded the link with endless packets and was actually rather terrible.

I wrote up how to use it. It’s not even all that hard!

Pretty satisfying seeing this work.

Ritesh Raj Sarraf: Comments on Hugo with Isso

30 October, 2019 - 20:41
Integrating Comments on Hugo with Isso

Oh! Boy. Finally been able to get something set up almost to my liking. After moving away from Drupal to Hugo, getting the commenting system in place was challenging. There were many solutions but I was adamant about what I wanted.

  • Simple. Something very simple, so that I only have to spend a very limited amount of time tinkering with it (especially 2 years down the line)
  • Independent. No Google/Facebook/3rd Party dependency
  • Simple workflow. No Github/Gitlab/Staticman dependency
  • Simple moderation workflow

First, the migration away from Drupal itself was a PITA. I let go of all the comments there. Knowing my limited web skills, the foremost requirement was to have something as simple as possible.


Tracking is becoming very, very common. While I still have Google Analytics enabled on my website, that is more of a personal choice as I would like to see some monthly reporting. And whenever I want, I can just quietly disable it if it becomes perversely invasive. As for the commenting system, I can now be assured that it doesn’t depend on a 3rd-party service.

Simple workflow

Lots and lots of people moved to static sites and chose a self-hosted model on GitHub (and other similar services). Then services like Staticman complemented it with a commenting system integrated into GitHub’s issue tracker workflow.

From my past experiences, my quest has been to keep my setups as simple as possible. For example, for passwords, I wouldn’t ever again want to rely on Seahorse or KWallet etc. Similarly, after getting out of the clutches of Drupal, I wanted to go back to keeping the website content structure simple. And Hugo does a good job there. And so went the quest for the commenting system.

Simple moderation workflow

SPAM is inevitable. I wanted to have something simple, basic and well tested. What better than good old email? Every comment added is put into a moderation queue, and the admin is sent an email containing a unique approve/reject link for it.
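Isso supports exactly this workflow. A hypothetical configuration sketch (the paths and addresses are placeholders; the listen address matches the localhost:8000 that the nginx snippet later in this post proxies to):

```ini
[general]
dbpath = /var/lib/isso/comments.db
host = https://www.example.com/
notify = smtp

[moderation]
enabled = true
purge-after = 30d

[smtp]
host = localhost
port = 25
to = admin@example.com
from = isso@example.com

[server]
listen = http://localhost:8000/
```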

Isso Commenting System

To describe it:

Description-en: lightweight web-based commenting system
 A lightweight commenting system written in Python. It supports CRUD comments
 written in Markdown, Disqus import, I18N, website integration via JavaScript.
 Comments are stored in SQLite.

I had heard the name a couple of months ago but hadn’t spent much time on it. This week, with some spare time on hand, I stumbled across some good articles about setup and integration for Isso.

The good part is that Isso is already packaged for Debian so I didn’t have to go through the source based setup. I did spend some time fiddling around with the .service vs .socket discrepancies but my desperate focus was to get the bits together and have the commenting system in place.

Once I get some free time again, I’d like to extract useful information and file a bug report on Debian BTS. But for now, Thank You to the package maintainer for packaging/maintaining Isso for Debian.

Integration for my setup (nginx + Hugo + BeautifulHugo + Isso)

For nginx, just the following additional lines integrate Isso:

        location /isso {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass http://localhost:8000;
        }

For Hugo and BeautifulHugo

commit 922a88c41d784dc59aa17a9cbdba4a1898984a3e (HEAD -> isso)
Author: Ritesh Raj Sarraf <>
Date:   Tue Oct 29 12:52:13 2019 +0530

    Add display content for isso

diff --git a/layouts/_default/single.html b/layouts/_default/single.html
index 0ab1bf5..afda94e 100644
--- a/layouts/_default/single.html
+++ b/layouts/_default/single.html
@@ -55,6 +55,11 @@
       {{ end }}
+     {{"<!-- begin comments //-->" | safeHTML}}
+       <section id="isso-thread">
+       </section>
+    {{"<!-- end comments //-->" | safeHTML}}
       {{ if (.Params.comments) | or (and (or (not (isset .Params "comments")) (eq .Params.comments nil)) (and .Site.Params.comments (ne .Type "page"))) }}
         {{ if .Site.DisqusShortname }}
diff --git a/layouts/partials/header.html b/layouts/partials/header.html
index 2182534..161d3f0 100644
--- a/layouts/partials/header.html
+++ b/layouts/partials/header.html
@@ -81,6 +81,11 @@
+    {{ "<!-- isso -->" | safeHTML }}
+       <script data-isso="{{ .Site.BaseURL }}isso/" src="{{ .Site.BaseURL }}isso/js/embed.min.js"></script>
+    {{ "<!-- end isso -->" | safeHTML }}
 {{ else }}
   <div class="intro-header"></div>

And the JavaScript:

commit 3e1d52cefe8be425777d60387c8111c908ddc5c1
Author: Ritesh Raj Sarraf <>
Date:   Tue Oct 29 12:44:38 2019 +0530

    Add javascript for isso comments

diff --git a/static/isso/js/embed.min.js b/static/isso/js/embed.min.js
new file mode 100644
index 0000000..1d9a0e3
--- /dev/null
+++ b/static/isso/js/embed.min.js
@@ -0,0 +1,1430 @@
+ * @license almond 0.3.1 Copyright (c) 2011-2014, The Dojo Foundation All Rights Reserved.
+ * Available via the MIT or new BSD license.
+ * see: for details
+ */
+  Copyright (C) 2013 Gregory Schier <>
+  Copyright (C) 2013 Martin Zimmermann <>
+  Inspired by


That’s pretty much it. Happy Me :-)

Bálint Réczey: New tags on the block: update-excuse and friends!

30 October, 2019 - 18:45

In Ubuntu’s development process new package versions don’t immediately get released, but they enter the -proposed pocket first, where they are built and tested. In addition to testing the package itself other packages are also tested together with the updated package, to make sure the update doesn’t break the other packages either.

The packages in the -proposed pocket are listed on the update excuses page with their testing status. When a package is successfully built and all triggered tests passed the package can migrate to the release pocket, but when the builds or tests fail, the package is blocked from migration to preserve the quality of the release.

Sometimes packages are stuck in -proposed for a longer period because the build or test failures can’t be solved quickly. In the past several people may have triaged the same problem without being able to easily share their observations, but from now on if you figured out something about what broke, please open a bug against the stuck package with your findings and mark the package with the update-excuse tag. The bug will be linked to from the update excuses page so the next person picking up the problem can continue from there. You can even leave a patch in the bug so a developer with upload rights can find it easily and upload it right away.

The update-excuse tag applies to the current development series only, but it does not come alone. To leave notes for a specific release’s -proposed pocket, use the update-excuse-$SERIES tag, for example update-excuse-bionic to have the bug linked from 18.04’s (Bionic Beaver’s) update excuses page.

Fixing failures in -proposed is a big part of the integration work done by Ubuntu Developers, and help is always very welcome. If you see your favorite package being stuck on update excuses, please take a look at why and maybe open an update-excuse bug. You may be the one who helped the package make it into the next Ubuntu release!

(The new tags were added by Tiago Stürmer Daitx and me during the last Canonical engineering sprint’s coding day. Fun!)

John Goerzen: Long-Range Radios: A Perfect Match for Unix Protocols From The 70s

30 October, 2019 - 16:41

It seems I’ve been on a bit of a vintage computing kick lately. After connecting an original DEC vt420 to Linux and resurrecting some old operating systems, I dove into UUCP.

In fact, it so happened that earlier in the week, my used copy of Managing UUCP & Usenet by none other than Tim O’Reilly arrived. I was reading about the challenges of networking in the 70s: half-duplex lines, slow transmission rates, and modems that had separate dialers. And then I stumbled upon long-distance radio. It turns out that a lot of modern long-distance radio has much in common with the challenges of communication in the 1970s – 1990s, and some of our old protocols might be particularly well-suited for it. Let me explain — I’ll start with the old software, and then talk about the really cool stuff going on in hardware (some radios that can send a signal for 10-20km or more with very little power!), and finally discuss how to bring it all together.


UUCP, for those of you that may literally have been born after it faded in popularity, is a batch system for exchanging files and doing remote execution. For users, the uucp command copies files to or from a remote system, and uux executes commands on a remote system. In practical terms, the most popular use of this was to use uux to execute rmail on the remote system, which would receive an email message on stdin and inject it into the system’s mail queue. All UUCP commands are queued up and transmitted when a “call” occurs — over a modem, TCP, ssh pipe, whatever.

UUCP had to deal with all sorts of line conditions: very slow lines (300bps), half-duplex lines, noisy and error-prone communication, poor or nonexistent flow control, even 7-bit communication. It supports a number of different transport protocols that can accommodate these varying conditions. It turns out that these mesh fairly perfectly with some properties of modern long-distance radio.


The AX.25 stack is a frame-based protocol used by amateur radio folks. Its air speed is 300bps, 1200bps, or (rarely) 9600bps. The Linux kernel has support for the AX.25 protocol and it is quite possible to run TCP/IP atop it. I have personally used AX.25 to telnet to a Linux box 15 miles away over a 1200bps air speed, and have also connected all the way from Kansas to Texas and Indiana using 300bps AX.25 using atmospheric skip. AX.25 has “connected” packets (like TCP) and unconnected/broadcast ones (similar to UDP) and is an error-detected protocol with retransmission. The radios generally used with AX.25 are always half-duplex and some of them have iffy carrier detection (which means collisions are frequent).
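As a rough sketch of what that looks like on Linux (the callsign, serial device, and IP address here are all placeholders), an AX.25 port is declared in /etc/ax25/axports and a KISS TNC is attached to it:

```
# /etc/ax25/axports
# name  callsign  speed  paclen  window  description
radio   N0CALL-1  9600   236     2       1200bps packet port

# then, as root, attach the TNC and (optionally) assign an address:
#   kissattach /dev/ttyUSB0 radio 44.128.0.1
```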

A lot of this is achieved using equipment that’s not particularly portable: antennas on poles, radios that transmit with anywhere from 1W to 100W of power (even 1W is far more than small portable devices normally use), etc. Also, under the regulations of the amateur radio service, transmitters must be managed by a licensed operator and cannot be encrypted.

Nevertheless, AX.25 is just a protocol and it could, of course, run on other kinds of carriers than traditional amateur radios.

Long-range low-power radios

There is a lot being done with radios these days, much of which I’m not going to discuss. I’m not covering very short-range links such as Bluetooth, ZigBee, etc. Nor am I covering longer-range links that require large and highly-directional antennas (such as some are doing in the 2.4GHz and 5GHz bands). What I’m covering is long-range links that can be used by portable devices.

There is always a compromise in radios, and if we are going to achieve long-range links with poor antennas and low power, the compromise is going to be in bitrate. These technologies may scale down to as low as 300bps or up to around 115200bps. They can, as a side bonus, often be quite cheap.

HC-12 radios

HC-12 is a radio board, commonly used with Arduino, that sports 500bps to 115200bps communication. According to the vendor, in 500bps mode, the range is 1800m or 0.9mi, while at 115200bps, the range is 100m or 328ft. They’re very cheap, at around $5 each.

There are a few downsides to HC-12. One is that the lowest air bitrate is 500bps, but the lowest UART bitrate is 1200bps, and they have no flow control. So, if you are running in long-range mode, “only small packets can be sent: max 60 bytes with the interval of 2 seconds.” This would pose a challenge in many scenarios: though not much for UUCP, which can be perfectly well configured to have a 60-byte packet size and a window size of 1, which would wait for a remote ACK before proceeding.
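In Taylor UUCP terms, that tuning might look something like this hypothetical sys/port pair (the system name, port name, and device are placeholders; ‘g’ protocol packet sizes are powers of two, so 32 keeps each packet under the 60-byte frame limit):

```
# /etc/uucp/sys
system remotehost
time any
port hc12port
protocol g
protocol-parameter g packet-size 32
protocol-parameter g window 1

# /etc/uucp/port
port hc12port
type direct
device /dev/ttyUSB0
speed 1200
```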

Also, they operate over 433.4-473.0 MHz which appears to fall outside the license-free bands. It seems that many people using HC-12 are doing so illegally. With care, it would be possible to operate it under amateur radio rules, since this range is mostly within the 70cm allocation, but then it must follow amateur radio restrictions.

LoRa radios

LoRa is a set of standards for long range radios, which are advertised as having a range of 15km (9mi) or more in rural areas, and several km in cities.

LoRa can be done in several ways: the main LoRa protocol, and LoRaWAN. LoRaWAN expects to use an Internet gateway, which will tell each node what frequency to use, how much power to use, etc. LoRa is such that a commercial operator could set up roughly one LoRaWAN gateway per city due to the large coverage area, and some areas have good LoRa coverage due to just such operators. The difference between the two is roughly analogous to the difference between connecting two machines with an Ethernet crossover cable, and a connection over the Internet; LoRaWAN includes more protocol layers atop the basic LoRa. I have yet to learn much about LoRaWAN; I’ll follow up later on that point.

The speed of LoRa ranges from (and different people will say different things here) about 500bps to about 20000bps. LoRa is a packetized protocol, and the maximum packet size depends on the settings in use.

LoRa sensors often advertise battery life in the months or years, and can be quite small. The protocol makes an excellent choice for sensors in remote or widely dispersed areas. LoRa transceiver boards for Arduino can be found for under $15 from places like Mouser.

I wound up purchasing two LoStik USB LoRa radios from Amazon. With some experimentation, with even very bad RF conditions (tiny antennas, one of them in the house, the other in a car), I was able to successfully decode LoRa packets from 2 miles away! And these aren’t even the most powerful transmitters available.

Talking UUCP over LoRa

In order to make this all work, I needed to write interface software; the LoRa radios don’t just transmit things straight out. So I wrote lorapipe. I have successfully transmitted files across this UUCP link!

Developing lorapipe was somewhat more challenging than I expected. For one, the LoRa modem raw protocol isn’t well-suited to rapid-fire packet transmission; after receiving each packet, the modem exits receive mode and must be told to receive again. Collisions with protocols that ACKed data and had a receive window — which are many — were so bad a problem that it rendered some of the protocols unusable. I wound up adding an “expect more data after this packet” byte to every transmission, and having the receiver not transmit until it believes the sender is finished. This dramatically improved things. There’s more detail on this in my lorapipe documentation.
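The idea can be sketched in a few lines of Python (this illustrates the framing scheme only; it is not lorapipe’s actual code):

```python
# Each frame carries a one-byte flag: MORE means the sender has further
# frames queued, FINAL means the receiver is now free to transmit.
MORE = b"\x01"
FINAL = b"\x00"

def frame(payload: bytes, max_size: int = 60) -> list[bytes]:
    """Split payload into radio-sized frames, flagging all but the last."""
    chunks = [payload[i:i + max_size]
              for i in range(0, len(payload), max_size)] or [b""]
    return [(MORE if i < len(chunks) - 1 else FINAL) + chunk
            for i, chunk in enumerate(chunks)]

def receive(frames: list[bytes]) -> tuple[bytes, bool]:
    """Reassemble frames; second value is True once FINAL was seen."""
    data = b"".join(f[1:] for f in frames)
    done = frames[-1][:1] == FINAL
    return data, done

frames = frame(b"x" * 130)
assert [f[:1] for f in frames] == [MORE, MORE, FINAL]
data, done = receive(frames)
assert data == b"x" * 130 and done
```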

So far, I have successfully communicated over LoRa using UUCP, kermit, and YMODEM. KISS support will be coming next.

I am also hoping to discover the range I can get from this thing if I use more proper antennas (outdoor) and transmitters capable of transmitting with more power.

All in all, a fun project so far.

Wouter Verhelst: Announcing policy-rcd-declarative

30 October, 2019 - 02:57

A while ago, Debian's technical committee considered a request to figure out what a package should do if a service that is provided by that package does not restart properly on upgrade.

Traditional behavior in Debian has been to restart a service on upgrade, and to cause postinst (and, thus, the packaging system) to break if the daemon start fails. This has obvious disadvantages; when package installation is not initiated by a person running apt-get upgrade in a terminal, failure to restart a service may cause unexpected downtime, and that is not something you want to see.

At the same time, when restarting a service is done through the command line, having the packaging system fail is a pretty good indicator that there is a problem here, and therefore, it tells the system administrator early on that there is a problem, soon after the problem was created -- which is helpful for diagnosing that issue and possibly fixing it.

Eventually, the bug was closed with the TC declining to take a decision (for good reasons; see the bug report for details), but one takeaway for me was that the current interface on Debian for telling the system whether or not to restart a service upon package installation or upgrade, known as policy-rc.d, is flawed and has several issues:

  1. The interface is too powerful; it requires an executable, which will probably be written in a Turing-complete language, when all most people want is to change the standard configuration for pretty much every init script
  2. The interface is undetectable. That is, for someone who has never heard of it, it is actually very difficult to discover that it exists, since the default state ("allow everything") of the interface is defined by "there is no file on the filesystem that points to it".
  3. Although the design document states that policy-rc.d scripts MUST (in the RFC sense of that word) be installed through the alternatives system, in practice most cases where a policy-rc.d script is used do not do this. Since only one policy-rc.d can exist at any one time, the result is that one policy script overwrites the other, or that the cleanup routine of one policy script in fact cleans up the other. This situation has caused at least one Debian derivative to believe they were installing a policy-rc.d script when in fact they were not.
  4. In some cases, it might be useful to have partial policies installed through various packages that cooperate. Given point 3, this is currently not possible.
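For context, the current interface is just an executable at /usr/sbin/policy-rc.d whose exit status encodes the policy; 101 means “action forbidden”. A minimal hypothetical example (wrapped in a function for clarity) that blocks service starts, e.g. inside a chroot or image build:

```shell
#!/bin/sh
# Hypothetical minimal /usr/sbin/policy-rc.d.
# invoke-rc.d calls it as: policy-rc.d <initscript id> <action>
# Return 101 ("action forbidden") for starts, 0 ("allowed") otherwise.
policy() {
  case "$2" in
    start|restart|force-reload)
      return 101
      ;;
    *)
      return 0
      ;;
  esac
}
policy "$@"
```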

Because I believe the current situation leaves room for improvement, by way of experiment I wrote up a proof-of-concept script called policy-rcd-declarative, whose intent is to use the current framework to provide a declarative interface to replace policy-rc.d. I uploaded it to experimental back in March (because the buster release was too close at the time for me to feel comfortable uploading it to unstable), but I have just uploaded a new version to unstable.

This is currently still an experiment, and I might decide in the end that it's a bad idea; but I do think that most things are an improvement over a plain policy-rc.d interface, so let's see how this goes.

Comments are (most certainly) welcome.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.