Planet Debian


Ritesh Raj Sarraf: Comments on Hugo with Isso

30 October, 2019 - 20:41
Integrating Comments on Hugo with Isso

Oh! Boy. Finally been able to get something set up almost to my liking. After moving away from Drupal to Hugo, getting a commenting system in place was challenging. There were many solutions, but I was adamant about what I wanted:

  • Simple. Something very simple, so that I spend only very limited time tinkering with it (especially 2 years down the line)
  • Independent. No Google/Facebook/3rd Party dependency
  • Simple workflow. No Github/Gitlab/Staticman dependency
  • Simple moderation workflow

First, the migration away from Drupal itself was a PITA. I let go of all the comments there. Knowing my limited web skills, the foremost requisite was to have something as simple as possible.


Independent

Tracking is becoming very common. While I still have Google Analytics enabled on my website, that is more of a personal choice, as I would like to see some monthly reporting. And whenever I want, I can just quietly disable it, in case it becomes perversely invasive. As for the commenting system, I can now be assured that it doesn't depend on a 3rd-party service.

Simple workflow

Lots and lots of people moved to static sites and chose a self-hosted model on Github (and other similar services). Then services like Staticman complemented them with a commenting system built on Github's issue-tracker workflow.

From my past experiences, my quest has been to keep my setups as simple as possible. For passwords, for example, I never again want to rely on Seahorse, KWallet, and the like. Similarly, after getting out of the clutches of Drupal, I wanted to go back to keeping the website's content structure simple, and Hugo does a good job there. So it was with the quest for the commenting system.

Simple moderation workflow

SPAM is inevitable. I wanted something simple, basic and well tested. What better than good old email? Every comment added is put into a moderation queue, and the admin is sent an email about it, including unique approve/reject links.
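This moderation flow maps to a handful of Isso settings. A minimal sketch (the database path, site URL and mail addresses here are placeholders, not my actual setup):

```ini
[general]
; SQLite database where all comments are stored
dbpath = /var/lib/isso/comments.db
host = https://example.com/

[moderation]
; hold every new comment until it is approved via the emailed link
enabled = true

[smtp]
; send a moderation mail, with approve/reject links, for each new comment
host = localhost
port = 25
to = admin@example.com
from = isso@example.com
```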

Isso Commenting System

To describe it:

Description-en: lightweight web-based commenting system
 A lightweight commenting system written in Python. It supports CRUD comments
 written in Markdown, Disqus import, I18N, website integration via JavaScript.
 Comments are stored in SQLite.

I had heard the name a couple of months ago but hadn't spent much time on it. This week, with some spare time on hand, I stumbled across some good articles about setup and integration of Isso.

The good part is that Isso is already packaged for Debian, so I didn't have to go through a source-based setup. I did spend some time fiddling with the .service vs .socket discrepancies, but my desperate focus was to get the bits together and have the commenting system in place.

Once I get some free time again, I'd like to extract useful information and file a bug report on the Debian BTS. But for now, thank you to the package maintainer for packaging/maintaining Isso for Debian.

Integration for my setup (nginx + Hugo + BeautifulHugo + Isso)

For nginx, just the following additional lines integrate Isso:

        location /isso {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass http://localhost:8000;
        }

For Hugo and BeautifulHugo:

commit 922a88c41d784dc59aa17a9cbdba4a1898984a3e (HEAD -> isso)
Author: Ritesh Raj Sarraf <>
Date:   Tue Oct 29 12:52:13 2019 +0530

    Add display content for isso

diff --git a/layouts/_default/single.html b/layouts/_default/single.html
index 0ab1bf5..afda94e 100644
--- a/layouts/_default/single.html
+++ b/layouts/_default/single.html
@@ -55,6 +55,11 @@
       {{ end }}
+     {{"<!-- begin comments //-->" | safeHTML}}
+       <section id="isso-thread">
+       </section>
+    {{"<!-- end comments //-->" | safeHTML}}
       {{ if (.Params.comments) | or (and (or (not (isset .Params "comments")) (eq .Params.comments nil)) (and .Site.Params.comments (ne .Type "page"))) }}
         {{ if .Site.DisqusShortname }}
diff --git a/layouts/partials/header.html b/layouts/partials/header.html
index 2182534..161d3f0 100644
--- a/layouts/partials/header.html
+++ b/layouts/partials/header.html
@@ -81,6 +81,11 @@
+    {{ "<!-- isso -->" | safeHTML }}
+       <script data-isso="{{ .Site.BaseURL }}isso/" src="{{ .Site.BaseURL }}isso/js/embed.min.js"></script>
+    {{ "<!-- end isso -->" | safeHTML }}
 {{ else }}
   <div class="intro-header"></div>

And the JavaScript:

commit 3e1d52cefe8be425777d60387c8111c908ddc5c1
Author: Ritesh Raj Sarraf <>
Date:   Tue Oct 29 12:44:38 2019 +0530

    Add javascript for isso comments

diff --git a/static/isso/js/embed.min.js b/static/isso/js/embed.min.js
new file mode 100644
index 0000000..1d9a0e3
--- /dev/null
+++ b/static/isso/js/embed.min.js
@@ -0,0 +1,1430 @@
+ * @license almond 0.3.1 Copyright (c) 2011-2014, The Dojo Foundation All Rights Reserved.
+ * Available via the MIT or new BSD license.
+ * see: for details
+ */
+  Copyright (C) 2013 Gregory Schier <>
+  Copyright (C) 2013 Martin Zimmermann <>
+  Inspired by


That’s pretty much it. Happy Me :-)

Bálint Réczey: New tags on the block: update-excuse and friends!

30 October, 2019 - 18:45

In Ubuntu’s development process new package versions don’t immediately get released, but they enter the -proposed pocket first, where they are built and tested. In addition to testing the package itself other packages are also tested together with the updated package, to make sure the update doesn’t break the other packages either.

The packages in the -proposed pocket are listed on the update excuses page with their testing status. When a package is successfully built and all triggered tests passed the package can migrate to the release pocket, but when the builds or tests fail, the package is blocked from migration to preserve the quality of the release.

Sometimes packages are stuck in -proposed for a longer period because the build or test failures can’t be solved quickly. In the past several people may have triaged the same problem without being able to easily share their observations, but from now on if you figured out something about what broke, please open a bug against the stuck package with your findings and mark the package with the update-excuse tag. The bug will be linked to from the update excuses page so the next person picking up the problem can continue from there. You can even leave a patch in the bug so a developer with upload rights can find it easily and upload it right away.

The update-excuse tag applies to the current development series only, but it does not come alone. To leave notes for a specific release's -proposed pocket, use the update-excuse-$SERIES tag, for example update-excuse-bionic to have the bug linked from 18.04's (Bionic Beaver's) update excuses page.

Fixing failures in -proposed is a big part of the integration work done by Ubuntu Developers, and help is always very welcome. If you see your favorite package being stuck on update excuses, please take a look at why, and maybe open an update-excuse bug. You may be the one who helps the package make it into the next Ubuntu release!

(The new tags were added by Tiago Stürmer Daitx and me during the last Canonical engineering sprint's coding day. Fun!)

John Goerzen: Long-Range Radios: A Perfect Match for Unix Protocols From The 70s

30 October, 2019 - 16:41

It seems I’ve been on a bit of a vintage computing kick lately. After connecting an original DEC vt420 to Linux and resurrecting some old operating systems, I dove into UUCP.

In fact, it so happened that earlier in the week, my used copy of Managing UUCP & Usenet by none other than Tim O’Reilly arrived. I was reading about the challenges of networking in the 70s: half-duplex lines, slow transmission rates, and modems that had separate dialers. And then I stumbled upon long-distance radio. It turns out that a lot of modern long-distance radio has much in common with the challenges of communication in the 1970s – 1990s, and some of our old protocols might be particularly well-suited for it. Let me explain — I’ll start with the old software, and then talk about the really cool stuff going on in hardware (some radios that can send a signal for 10-20km or more with very little power!), and finally discuss how to bring it all together.


UUCP, for those of you that may literally have been born after it faded in popularity, is a batch system for exchanging files and doing remote execution. For users, the uucp command copies files to or from a remote system, and uux executes commands on a remote system. In practical terms, the most popular use of this was to use uux to execute rmail on the remote system, which would receive an email message on stdin and inject it into the system’s mail queue. All UUCP commands are queued up and transmitted when a “call” occurs — over a modem, TCP, ssh pipe, whatever.

UUCP had to deal with all sorts of line conditions: very slow lines (300bps), half-duplex lines, noisy and error-prone communication, poor or nonexistent flow control, even 7-bit communication. It supports a number of different transport protocols that can accommodate these varying conditions. It turns out that these mesh fairly perfectly with some properties of modern long-distance radio.


The AX.25 stack is a frame-based protocol used by amateur radio folks. Its air speed is 300bps, 1200bps, or (rarely) 9600bps. The Linux kernel has support for the AX.25 protocol and it is quite possible to run TCP/IP atop it. I have personally used AX.25 to telnet to a Linux box 15 miles away over a 1200bps air speed, and have also connected all the way from Kansas to Texas and Indiana using 300bps AX.25 via atmospheric skip. AX.25 has "connected" packets (like TCP) and unconnected/broadcast ones (similar to UDP), and is an error-detected protocol with retransmission. The radios generally used with AX.25 are always half-duplex, and some of them have iffy carrier detection (which means collisions are frequent). Although the whole AX.25 stack has grown rare in recent years, a subset of it is still in wide use as the basis for APRS.
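To give a flavour of how vintage AX.25's framing is, here is a small illustrative sketch (not from the post) of its address-field encoding: each callsign character is shifted left one bit, so that the freed low bit can mark the end of the address chain.

```python
def encode_ax25_address(callsign: str, ssid: int = 0, last: bool = True) -> bytes:
    """Encode a callsign into a 7-byte AX.25 address field.

    Each of the six callsign characters (space-padded) is shifted left one
    bit; the seventh byte carries the 4-bit SSID, with the low bit set on
    the final address in the chain.
    """
    padded = callsign.upper().ljust(6)[:6]
    addr = bytes((ord(c) << 1) for c in padded)
    ssid_byte = 0x60 | ((ssid & 0x0F) << 1) | (0x01 if last else 0x00)
    return addr + bytes([ssid_byte])

# "N0CALL" is a placeholder callsign, not a real station
field = encode_ax25_address("N0CALL", ssid=7)
```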

A lot of this is achieved using equipment that’s not particularly portable: antennas on poles, radios that transmit with anywhere from 1W to 100W of power (even 1W is far more than small portable devices normally use), etc. Also, under the regulations of the amateur radio service, transmitters must be managed by a licensed operator and cannot be encrypted.

Nevertheless, AX.25 is just a protocol and it could, of course, run on other kinds of carriers than traditional amateur radios.

Long-range low-power radios

There is a lot being done with radios these days, much of which I’m not going to discuss. I’m not covering very short-range links such as Bluetooth, ZigBee, etc. Nor am I covering longer-range links that require large and highly-directional antennas (such as some are doing in the 2.4GHz and 5GHz bands). What I’m covering is long-range links that can be used by portable devices.

There is always a compromise in radios, and if we are going to achieve long-range links with poor antennas and low power, the compromise is going to be in bitrate. These technologies may scale down to as low as 300bps or up to around 115200bps. They can, as a side bonus, often be quite cheap.

HC-12 radios

HC-12 is a radio board, commonly used with Arduino, that sports 500bps to 115200bps communication. According to the vendor, in 500bps mode the range is 1800m (about 1.1mi), while at 115200bps the range is 100m (328ft). They're very cheap, at around $5 each.

There are a few downsides to HC-12. One is that the lowest air bitrate is 500bps, but the lowest UART bitrate is 1200bps, and there is no flow control. So, if you are running in long-range mode, "only small packets can be sent: max 60 bytes with the interval of 2 seconds." This would pose a challenge in many scenarios, though not much for UUCP, which can perfectly well be configured with a 60-byte packet size and a window size of 1, waiting for a remote ACK before proceeding.
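That constraint, at most 60 bytes every 2 seconds with a window of 1, amounts to a stop-and-wait sender, which can be sketched like this (a hypothetical illustration, not the post's actual code; send_packet and wait_for_ack stand in for whatever the real link layer provides):

```python
import time

MAX_PACKET = 60        # HC-12 long-range mode packet limit
SEND_INTERVAL = 2.0    # required seconds between packets

def split_packets(payload: bytes, size: int = MAX_PACKET) -> list:
    """Split a payload into chunks no larger than the radio's packet limit."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def send_stop_and_wait(payload, send_packet, wait_for_ack):
    """Window-size-1 sender: transmit a chunk, block until it is ACKed."""
    for chunk in split_packets(payload):
        send_packet(chunk)
        wait_for_ack()              # window of 1: no pipelining
        time.sleep(SEND_INTERVAL)   # respect the 2-second packet interval
```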

Also, they operate over 433.4-473.0 MHz which appears to fall outside the license-free bands. It seems that many people using HC-12 are doing so illegally. With care, it would be possible to operate it under amateur radio rules, since this range is mostly within the 70cm allocation, but then it must follow amateur radio restrictions.

LoRa radios

LoRa is a set of standards for long range radios, which are advertised as having a range of 15km (9mi) or more in rural areas, and several km in cities.

LoRa can be done in several ways: the main LoRa protocol, and LoRaWAN. LoRaWAN expects to use an Internet gateway, which will tell each node what frequency to use, how much power to use, etc. LoRa is such that a commercial operator could set up roughly one LoRaWAN gateway per city due to the large coverage area, and some areas have good LoRa coverage due to just such operators. The difference between the two is roughly analogous to the difference between connecting two machines with an Ethernet crossover cable, and a connection over the Internet; LoRaWAN includes more protocol layers atop the basic LoRa. I have yet to learn much about LoRaWAN; I’ll follow up later on that point.

The speed of LoRa ranges from (and different people will say different things here) about 500bps to about 20000bps. LoRa is a packetized protocol, and the maximum packet size depends on the data rate in use.

LoRa sensors often advertise battery life of months or years, and can be quite small. The protocol makes an excellent choice for sensors in remote or widely dispersed areas. LoRa transceiver boards for Arduino can be found for under $15 from places like Mouser.

I wound up purchasing two LoStik USB LoRa radios from Amazon. With some experimentation, with even very bad RF conditions (tiny antennas, one of them in the house, the other in a car), I was able to successfully decode LoRa packets from 2 miles away! And these aren’t even the most powerful transmitters available.

Talking UUCP over LoRa

In order to make this all work, I needed to write interface software; the LoRa radios don’t just transmit things straight out. So I wrote lorapipe. I have successfully transmitted files across this UUCP link!

Developing lorapipe was somewhat more challenging than I expected. For one, the LoRa modem's raw protocol isn't well-suited to rapid-fire packet transmission; after receiving each packet, the modem exits receive mode and must be told to receive again. Collisions with protocols that ACK data and have a receive window — which are many — were a problem so bad that it rendered some of the protocols unusable. I wound up adding an "expect more data after this packet" byte to every transmission, and having the receiver not transmit until it believes the sender is finished. This dramatically improved things. There's more detail on this in my lorapipe documentation.
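The scheme described, one flag byte prepended to each packet marking whether more data follows, can be sketched roughly like this (names are illustrative; lorapipe's actual framing may differ):

```python
FLAG_MORE = 0x01   # sender has more packets queued after this one
FLAG_LAST = 0x00   # sender is done; the receiver may now transmit

def frame(packets):
    """Prepend a more-data flag byte to each packet of a transmission."""
    framed = []
    for i, pkt in enumerate(packets):
        flag = FLAG_LAST if i == len(packets) - 1 else FLAG_MORE
        framed.append(bytes([flag]) + pkt)
    return framed

def receiver_may_transmit(packet: bytes) -> bool:
    """The receiver holds its own traffic until the last-packet flag arrives."""
    return packet[0] == FLAG_LAST
```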

So far, I have successfully communicated over LoRa using UUCP, kermit, and YMODEM. KISS support will be coming next.

I am also hoping to discover the range I can get from this thing if I use more proper antennas (outdoor) and transmitters capable of transmitting with more power.

All in all, a fun project so far.

Wouter Verhelst: Announcing policy-rcd-declarative

30 October, 2019 - 02:57

A while ago, Debian's technical committee considered a request to figure out what a package should do if a service that is provided by that package does not restart properly on upgrade.

Traditional behavior in Debian has been to restart a service on upgrade, and to cause postinst (and, thus, the packaging system) to break if the daemon start fails. This has obvious disadvantages; when package installation is not initiated by a person running apt-get upgrade in a terminal, failure to restart a service may cause unexpected downtime, and that is not something you want to see.

At the same time, when restarting a service is done through the command line, having the packaging system fail is a pretty good indicator that there is a problem here, and therefore, it tells the system administrator early on that there is a problem, soon after the problem was created -- which is helpful for diagnosing that issue and possibly fixing it.

Eventually, the bug was closed with the TC declining to take a decision (for good reasons; see the bug report for details), but one takeaway for me was that the current interface on Debian for telling the system whether or not to restart a service upon package installation or upgrade, known as policy-rc.d, is flawed, and has several issues:

  1. The interface is too powerful; it requires an executable, which will probably be written in a Turing-complete language, when all most people want is to change the standard configuration for pretty much every init script.
  2. The interface is undetectable. That is, for someone who has never heard of it, it is actually very difficult to discover that it exists, since the default state ("allow everything") of the interface is defined by "there is no file on the filesystem that points to it".
  3. Although the design document states that policy-rc.d scripts MUST (in the RFC sense of that word) be installed through the alternatives system, in practice this is not done in most cases where a policy-rc.d script is used. Since only one policy-rc.d can exist at any one time, the result is that one policy script gets overwritten by another, or that the cleanup routine of one policy script in fact cleans up another. This has caused at least one Debian derivative to believe they were installing a policy-rc.d script when in fact they were not.
  4. In some cases, it might be useful to have partial policies installed through various packages that cooperate. Given point 3, this is currently not possible.
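For context, the policy-rc.d interface these points criticize is just an executable that invoke-rc.d runs with the init script name and requested action, and whose exit status (0 to allow, 101 to deny) decides whether the action happens. A declarative replacement essentially reduces that to a table lookup, roughly like this (an illustrative sketch, not the actual policy-rcd-declarative implementation):

```python
ALLOW, DENY = 0, 101   # exit codes defined by the policy-rc.d interface

def decide(policy: dict, service: str, action: str) -> int:
    """Look up a (service, action) pair in a declarative policy table.

    The interface's default state is "allow everything", which is also
    why its mere presence is so hard to discover.
    """
    rules = policy.get(service, policy.get("*", {}))
    return DENY if action in rules.get("deny", ()) else ALLOW

# e.g. a chroot/container policy: never start or restart any service
policy = {"*": {"deny": ("start", "restart")}}
```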

Because I believe the current situation leaves room for improvement, by way of experiment I wrote up a proof-of-concept script called policy-rcd-declarative, whose intent is to use the current framework to provide a declarative interface replacing policy-rc.d. I uploaded it to experimental back in March (because the buster release was too close at the time for me to feel comfortable uploading it to unstable), but I have just uploaded a new version to unstable.

This is currently still an experiment, and I might decide in the end that it's a bad idea; but I do think that most of this is an improvement over a plain policy-rc.d interface, so let's see how this goes.

Comments are (most certainly) welcome.

Jonas Meurer: debian lts report 2019.10

29 October, 2019 - 21:47
Debian LTS report for October 2019

This month I was allocated 0 hours and carried over 14.5 hours from August. Unfortunately, once again I didn't find time to work on LTS issues. Since I expect it to stay that way for a few more months, I set the limit of hours that I get allocated to 0 last month already. I'll give back the remaining 14.5 hours and continue with LTS work once I again have some spare cycles to do so.


Jaldhar Vyas: Sal Mubarak 2076

29 October, 2019 - 10:42

It's the Gujarati new year and to the entire Debian community, best wishes for good health and great prosperity in Vikram Samvat 2076 (named Virodhakrt.)

The image above is not exactly related; it's actually the Ganga Arati at Haradwar which I recently visited and will write more about later.

Joey Hess: how I maybe didn't burn out

28 October, 2019 - 23:21

Last week I found myself in the uncomfortable position of many users strongly disagreeing with a decision I had made about git-annex. It felt much like a pile-on of them vs me, strong language was being deployed, and it was starting to get mentioned in multiple places on the website, in ways I felt would lead to fear, uncertainty, and doubt when other users saw it.

It did not help that I had dental implant surgery recently, and was still more in recovery than I knew when I finally tackled looking at this long thread. So it hit hard.

I've been involved in software projects that have a sometimes adversarial relationship with their users. At times, Debian has been one. I don't know if it is today, but I remember being on #debian and #debian-devel, or debian-user@lists and debian-devel@lists, and seeing two almost entirely diverged communities who were interacting only begrudgingly and with friction.

I don't want that in any of my projects now. My perspective on the history of git-annex is that most of the best developments have come after I made a not great decision or a mistake and got user feedback, and we talked it over and found a way to improve things, leading to a better result than would have been possible without the original mistake, much how a broken bone heals stronger. So this felt wrong, wrong, wrong.

Part of the problem with this discussion was that, though I'd tried to explain the constraints that led to the design decision -- which I'd made well over three years ago -- not everyone was able to follow that or engage with it constructively. Largely, I think because git-annex has a lot more users now, with a wider set of viewpoints. (Which is generally why Debian has to split off user discussions of course.) The users are more fearful of change than earlier adopters tended to be, and have more to lose if git-annex stops meeting their use case. They're invested in it, and so defensive of how they want it to work.

It also doesn't help that, surgery aside, I lack time to keep up with every discussion about git-annex now, if I'm going to also develop it. Just looking at the website tends to eat an entire day with maybe a couple bug fixes and some support answers being the only productive result. So, I have stepped back from following the git-annex website at all, for now. (I'll eventually start looking at parts of it again I'm sure.)

I did find enough value in the thread that I was able to develop a fix that should meet everyone's needs, as I now understand them (released in version 7.20191024). I actually came away with entirely new use cases; I did not realize that some users would perhaps use git-annex for a single large file in a repository otherwise kept entirely in git. Or quite how many users mix storing files in git and git-annex, which I have always seen as somewhat of an exception aside from the odd dotfile.

So the open questions are: How do I keep up with discussion and support traffic now; can I find someone to provide lower-level support and filtering or something? (Good news is, some funding could probably be arranged.) How do I prevent the git-annex community fracturing along users/developer lines as it grows, given that I don't want to work within such a fractured community?

I've been working on git-annex for 9 years this week. Have I avoided burning out? Probably, but maybe too early to tell. I think that being able to ask these questions is a good thing. I'd appreciate hearing from anyone who has grappled with these issues in their own software communities.

Bits from Debian: Debian Donates to Support GNOME Patent Defense

28 October, 2019 - 23:00

Today, the Debian Project pledges to donate $5,000 to the GNOME Foundation in support of their ongoing patent defense. On October 23, we wrote to express our support for GNOME in an issue that affects the entire free software community. Today we make that support tangible.

"This is bigger than GNOME," said Debian Project Leader Sam Hartman. "By banding together and demonstrating that the entire free software community is behind GNOME, we can send a strong message to non-practicing entities (patent trolls). When you target anyone in the free software community, you target all of us. We will fight, and we will fight to invalidate your patent. For us, this is more than money. This is about our freedom to build and distribute our software."

"We're incredibly grateful to Debian for this kind donation, and also for their support," said Neil McGovern, Executive Director of the GNOME Foundation. "It's been heartening to see that when free software is attacked in this way we all come together on a united front."

If GNOME needs more money later in this defense, Debian will be there to support the GNOME Foundation. We encourage individuals and organizations to join us and stand strong against patent trolls.

Didier Raboud: miniDebConf Vaumarcus happened

28 October, 2019 - 17:45

The miniDebConf19 Vaumarcus took place this weekend. Some 35 attendees gathered at LeCamp, which provided accommodation, food, and all the hacking and talk spaces.

The view is really fantastic from here! Thanks for all the fish!

A dozen talks and BoFs, ranging from ZFS to keyboard firmware, were presented, and, thanks to the awesome DebConf Video Team volunteers, a live video feed covering most talks was provided for remote attendees. Most talk videos are already available on the Meetings Archive.

Thank you to our sponsors and supporters!

We’ll be getting in touch with the attendees soon to gather feedback about the miniDebConf; the association will then discuss early next year whether to organize another miniDebConf in 2020. We’ll keep you posted!

Romain Perier: Announcing official IRC channel for Raspberry PI in Debian

27 October, 2019 - 20:20
Hi !

A Debian for Raspberry Pi presence has existed on IRC for a few months, but nothing had been done to create an official channel, nor was there any communication announcing such a channel on Planet. #debian-raspberrypi, the official Debian channel for topics related to Raspberry Pi boards in Debian, was created two weeks ago, so this is the right moment to announce it officially!

Join us and say hello !

Dirk Eddelbuettel: littler 0.3.9: More nice new features

27 October, 2019 - 19:27

The tenth release of littler as a CRAN package is now available, continuing the thirteen-ish year history of a package started by Jeff in 2006 and joined by me a few weeks later.

littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only started to do more recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release adds several new helper scripts / examples such as a Solaris-checker for rhub, a Sweave runner, and bibtex-record printer for packages. It also extends several existing scripts: render.r can now compact pdf files, build.r does this for package builds, tt.r covers parallel tinytest use, rcc.r reports the exit code from rcmdcheck, update.r corrects which package library directories it looks at, kitten.r can add puppies for tinytest, and thanks to Stefan the dratInsert.r (and render.r) script use call. correctly in stop().

The NEWS file entry is below.

Changes in littler version 0.3.9 (2019-10-27)
  • Changes in examples

    • The use of call. in stop() was corrected (Stefan Widgren in #72).

    • New script cos.r to check (at rhub) on Solaris.

    • New script compactpdf.r to compact pdf files.

    • The build.r script now compacts vignettes and resaves data.

    • The tt.r script now supports parallel tests and side effects.

    • The rcc.r script can now report error codes.

    • The '--libloc' option to update.r was updated.

    • The render.r script can optionally compact pdfs.

    • New script sweave.r to render (and compact) pdfs.

    • New script pkg2bibtex.r to show bibtex entries.

    • The kitten.r script has a new option --puppy to add tinytest support in purring packages.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jo Shields: My name is Jo and this is home now

27 October, 2019 - 05:59

After just over three years, my family and I are now Lawful Permanent Residents (Green Card holders) of the United States of America. It’s been a long journey.


Before anything else, I want to credit those who made it possible to reach this point. My then-manager Duncan Mak, his manager Miguel de Icaza. Amy and Alex from work for bailing us out of a pickle. Microsoft’s immigration office/HR. Gigi, the “destination services consultant” from DwellWorks. The immigration attorneys at Fragomen LLP. Lynn Community Health Center. And my family, for their unwavering support.

The kick-off

It all began in July 2016. With support from my management chain, I went through the process of applying for an L-1 intracompany transferee visa – a 3-5 year dual-intent visa, essentially a time-limited secondment from Microsoft UK to Microsoft Corp. After a lot of paperwork and an in-person interview at the US consulate in London, we were finally granted the visa (and L-2 dependent visas for the family) in April 2017. We arranged the actual move in July 2017, giving us a short window to wind up our affairs in the UK as much as possible, and run out most of my eldest child’s school year.

We sold the house, sold the car, gave to family all the electronics which wouldn’t work in the US (even with a transformer), and stashed a few more goodies in my parents’ attic. Microsoft arranged for movers to come and pack up our lives; they arranged a car for us for the final week; and a hotel for the final week too (we rejected the initial golf-spa-resort they offered and opted for a budget hotel chain in our home town, to keep sending our eldest to school with minimal disruption). And on the final day we set off at the crack of dawn to Heathrow Airport, to fly to Boston, Massachusetts, and try for a new life in the USA.

Finding our footing

I cannot complain about the provisions made by Microsoft – although they were not without snags. The 3.5 hours we spent waiting at immigration in Logan airport, due to some computer problem on the day, did not help us relax. Neither did the cat arriving at our company-arranged temporary condo before we did (with no food, or litter, or anything). Nor did the satnav provided with the company-arranged hire car not working – or my phone, which I tried using to navigate instead, shooting under the passenger seat the first time I had to brake, leading to a fraught commute from Logan to Third St, Cambridge.

Nevertheless, the liquor store under our condo building, and my co-workers Amy and Alex dropping off an emergency run of cat essentials, helped calm things down. We managed a good first night’s exhausted sleep, and started the following day with pancakes and syrup at a place called The Friendly Toast.

With the support of Gigi, a consultant hired to help us with early-relocation basics like social security and bank accounts, we eventually made our way to our own rental in Melrose (a small suburb north of Boston, a shortish walk from the MBTA Orange Line); with our own car (once the money from selling our house in the UK finally arrived); with my eldest enrolled in a local school. Aiming for normality.

The process

Fairly soon after settling into office life, the emails from Microsoft Immigration started, kicking off the process of applying for permanent residency. We were acutely aware of the time ticking on the three-year visas – and we had already burned three months prior to the move. Work permits; permission to leave and re-enter; Department of Labor certification. Papers, forms, papers, forms. Swearing that none of us had ever recruited child soldiers, or engaged in sex trafficking.

Tick tock.

Months at a time without hearing anything from USCIS.

Tick tock.

Work permits for all, but big delays listed on the monthly USCIS visa bulletin.

Tick tock.

We got to August 2019, and I started to really worry about the next deadline – our eldest’s passport expiring, along with the initial visas a couple of weeks later.

Tick tock.

Then my wife had a smart idea for plan B, something better than the burned out Mad Max dystopia waiting for us back in the UK: Microsoft had just opened a big .NET development office in Prague, so maybe I could make a business justification for relocation to the Czech Republic?

I start teaching myself Czech.

Duolingo screenshot, Czech language, “can you see my goose”

Tick tock.

Then, a month later, out of the blue, a notice from USCIS: our Adjustment of Status interviews (in theory the final piece before being granted green cards) were scheduled, for less than a month later. Suddenly we went from too much time, to too little.



The problem with only a month's notice was that we had one crucial piece of paperwork missing – for each of us, an I-693 medical exam issued by a USCIS-certified civil surgeon. I started calling around, and got a response from an immigration clinic in Lynn, with a date in mid October. They also gave us a rough indication of the medical exams and extra vaccinations required for the I-693, which we were told to source via our normal doctors (where they would be billable to insurance, if not free entirely). Any costs at the immigration clinic can't go via insurance or an HSA, because they're officially immigration paperwork, not medical paperwork. The total ended up being over a grand.

More calling around. We got scheduled for various shots and tests, and went to our medical appointment with everything sorted.


Turns out the TB tests the kids had were no longer recognised by USCIS. And all four of us had vaccination record gaps. So not only unexpected jabs after we promised them it was all over – unexpected bloodletting too. And a follow-up appointment for results and final paperwork, only 2 days prior to the AOS interview.

By this point, I’m something of a wreck. The whole middle of October has been a barrage of non-stop, short-term, absolutely critical appointments.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

Wednesday, I can't eat, I can't sleep, and I'm suffering various other physiological issues. The AOS interview is the next day. I'm as prepared as I can be, but still more terrified than I ever have been.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

I was never this worried about going through a comparable process when applying for the visa, because the worst case there was the status quo. Here the worst case is having to restart our green card process, with too little time to reapply before the visas expire. Having wasted two years of my family’s comfort with nothing to show for it. The year it took my son to settle again at school. All of it riding on one meeting.


Our AOS interviews are perfectly timed to coincide with lunch, so we load the kids up on snacks, and head to the USCIS office in Lawrence.

After parking up, we head inside, and wait. We have all the paperwork we could reasonably be expected to have – birth certificates, passports, even wedding photos to prove that our marriage is legit.

To keep the kids entertained in the absence of electronics (due to a no camera rule which bars tablets and phones) we have paper and crayons. I suggest “America” as a drawing prompt for my eldest, and he produces a statue of liberty and some flags, which I guess covers the topic for a 7 year old.

Finally we’re called in to see an Immigration Support Officer, the end-boss of American bureaucracy and… It’s fine. It’s fine! She just goes through our green card forms and confirms every answer; takes our medical forms and photos; checks the passports; asks us about our (Caribbean) wedding and takes a look at our photos; and gracefully accepts the eldest’s drawing for her wall.

We’re in and out of her office in under an hour. She tells us that unless she finds an issue in our background checks, we should be fine – expect an approval notice within 3 weeks, or call up if there’s still nothing in 6. Her tone is congratulatory, but with nothing tangible, and still the “unless” lingering, it’s hard to feel much of anything. We head home, numb more than anything.


After two fraught weeks, we’re both not entirely sure how to process things. I had expected a stress headache then normality, but instead it was more… Gradual.

During the following days, little things like the colours of the leaves leave me tearing up – and as my wife and I talk, we realise the extent to which the stress has been getting to us. And, more to the point, the extent to which being adrift without having somewhere we can confidently call home has caused us to close ourselves off.

The first day back in the office after the interview, a co-worker pulls me aside and asks if I’m okay – and I realise how much the answer has been “no”. Friday is the first day where I can even begin to figure that out.

The weekend continues with emotions all over the place, but a feeling of cautious optimism alongside.

I-485 Adjustment of Status approval notifications

On Monday, 4 calendar days after the AOS interview, we receive our notifications, confirming that we can stay. I’m still not sure I’m processing it right. We can start making real, long term plans now. Buying a house, the works.

I had it easy, and don’t deserve any sympathy

I’m a white guy, who had thousands of dollars’ worth of support from a global megacorp and their army of lawyers. The immigration process was fraught enough for me that I couldn’t sleep or eat – and I went through the process in one of the easiest routes available.

Youtube video from HBO show Last Week Tonight, covering legal migration into the USA

I am acutely aware of how much more terrifying and exhausting the process might be, for anyone without my resources and support.

Never, for a second, think that migration to the US – legal or otherwise – is easy.

The subheading where I answer the inevitable question from the peanut gallery

My eldest started school in the UK in September 2015. Previously he’d been at nursery, and we’d picked him up around 6-6:30pm every work day. Once he started at school, instead he needed picking up before 3pm. But my entire team at Xamarin was on Boston time, and did not have the world’s earliest risers – meaning I couldn’t have any meaningful conversations with co-workers until I had a child underfoot and the TV blaring. It made remote working suck, when it had been fine just a few months earlier. Don’t underestimate the impact of time zones on remote workers with families. I had begun to consider, at this point, my future at Microsoft, purely for logistical reasons.

And then, in June 2016, the UK suffered from a collective case of brain worms, and voted for self immolation.

I relocated my family to the US, because I could make a business case for my employer to fund it. It was the fastest, cheapest way to move my family away from the uncertainty of life in the UK after the brain-worm-addled plan to deport 13% of NHS workers. To cut off 40% of the national food supply. To make basic medications like Metformin and Estradiol rarities, rationed by pharmacists.

I relocated my family to the US, because despite all the country’s problems, despite the last three years of bureaucracy, it still gave them a better chance at a safe, stable life than staying in the UK.

And even if time proves me wrong about Brexit, at least now we can make our new lives, permanently, regardless.

Russ Allbery: remctl 3.16

27 October, 2019 - 04:00

remctl is a simple RPC mechanism (generally based on GSS-API) with rich ACLs and native support in multiple languages.

The primary change in this release is support for Python 3. This has only been tested with Python 2.7 and Python 3.7, but should work with any version of Python 3 later than Python 3.1. This release also cleans up some obsolete syntax in the Python code and deprecates passing in a command as anything other than an iterable of str or bytes.

This release also adds a -t flag to the remctl command-line tool to specify the network timeout, fixes a NULL pointer dereference after memory allocation failure in the client library, adds GCC attributes to the client library functions, and fixes a few issues with the build system.

You can get the latest release from the remctl distribution page.

Arturo Borrero González: seville kubernetes meetup 2019-10-24 - summary

25 October, 2019 - 15:28

Yesterday I attended a meetup event in Seville organized by the SVK (seville kubernetes) group. The event was held in the offices of Bitnami, now a VMware business.

The agenda for the event consisted of a couple of talks strongly focused on kubernetes, both of which interested me personally.

First one was Deploying apps with kubeapps, a talk by Andres Martinez Gotor, engineer at Bitnami. He presented the kubeapps utility, which is an application dashboard for kubernetes developed by Bitnami. We got a variety of information, from how to use kubeapps, to how this integrates with helm/tiller, and how this works in a multi-tenant enabled cluster. Some comments were added from the security point of view, things to take into account, etc. In general, kubeapps seems easy to install and use, and enables end users to easily deploy arbitrary apps into kubernetes.

My feeling during the talk was that this technology is quite interesting for several use cases, including ours in Toolforge, where we allow users to run arbitrary (mostly webservices) apps in the platform. Enabling operations that don't require users to dive into a terminal is always welcome, since we offer our services to a wide range of users with very different technical backgrounds, knowledge and experience.

Next talk was A kube-proxy deep-dive, by Laura Garcia Liebana, engineer and founder of Zevenet. She started the talk by giving an overview of how docker uses iptables to set up networking and proxying. As she pointed out, the way docker does it has a direct influence on how kubernetes does its default networking, in the iptables-based kube-proxy component. Of the many approaches available for load balancing and network design in this kind of environment, kube-proxy by default uses an iptables ruleset that is not very performant: it generates about 4 iptables rules per endpoint, which is not great for a kubernetes cluster with 10k endpoints (you would have 40k iptables rules in each node). It was mentioned that some people are using the ipvs-based kube-proxy mode to gain a bit of performance.

But Laura had an even more interesting proposal. They are developing a new tool called kube-nftlb, a kube-proxy replacement built on nftlb, a load-balancing solution based on nftables. It seems kube-nftlb is still in the development stage, but in a live demo she showed how the nftables rulesets generated by the tool are far more performant and optimized than those generated by kube-proxy, which results in greatly improved scalability of the kubernetes cluster.

After the talks, some pizza time followed, and I greeted many old and new friends. Interesting day! Thanks Bitnami for organizing the event and thanks to the speakers for giving us new ideas and points of views!

Dirk Eddelbuettel: dang 0.0.11: Small improvements

25 October, 2019 - 08:35

A new release of what may be my most minor package, dang, is now on CRAN. The dang package regroups a few functions of mine that had no other home: lsos() from a StackOverflow question from 2009 (!!) is one; the overbought/oversold price band plotter from an older blog post is another. More recently added were helpers for data.table to xts conversion and a git repo root finder.

Some of these functions (like lsos()) were hanging around in my .Rprofile, others just lived in scripts, so some years ago I started to collect them in a package, and as of February this is now on CRAN too, for reasons that are truly too bizarre to go into. It’s a weak and feeble variant of the old Torvalds line about backups and ftp sites …

As I didn’t blog about the 0.0.10 release, the NEWS entry for both follows:

Changes in version 0.0.11 (2019-10-24)
  • New functions getGitRoot, inGit and isConnected.

  • Improved function

Changes in version 0.0.10 (2019-02-10)
  • Initial CRAN release. See ChangeLog for earlier changes.

Courtesy of CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Norbert Preining: Calibre 4.2 based on Python3 in Debian/experimental

25 October, 2019 - 07:59

Following up on the last post on the state of Calibre in Debian, some news for those who want to get rid of Python2 on their computers as soon as possible.

With the Qt transition to 5.12 finally finished, Calibre 4 can be included in Debian. I have just uploaded a Calibre 4.2 build using Python 3 to the experimental suite in Debian. This allows all those who want to get rid of Python2 to upgrade to the experimental version.


There are a few warnings you shouldn’t forget:

  • the Python3 version is experimental
  • practically all external plugins will be broken, and you will need to remove them from ~/.config/calibre/plugins

That’s it, happy e-booking!

Steinar H. Gunderson: Exploring the DDR arcade CDs

24 October, 2019 - 04:45

Dance Dance Revolution, commonly known as DDR, is/was a series of music games by Konami, where the player has to hit notes by stepping on four panels in time with music. I say “was” because while there are still new developments, the phenomenon has largely faded, at least in my parts of the world.

Back in the heyday, the arcade machines (beasts of 220 kg, plus the two pads weighing 100 kg each!) were based off of Konami's System 573 (573 is chosen because with the appropriate amount of creative readings in Japanese, you can make it sound like “go-na-mi”). System 573 (well, 573D, to be exact) is basically a Playstation with a custom controller connector and an I/O board capable of decoding MP3s. The songs are loaded from a regular CD-ROM.

Recently, MAME developers have cracked the encryption used in S573 so as to be able to emulate the system (a heroic effort!), which allowed me to finally have a look at what's going on in the ISOs. I wasn't involved in this at all, but you can have a look at the source code at GitHub.

The algorithm is home-grown (curiously enough, neither a pure block cipher nor a pure stream cipher) and naturally very weak, but it kept people at bay for 10+ years, so I guess it was a success nevertheless? A quick rundown goes as follows (this is the decryption routine; reverse for encryption):

  1. A 16-bit ciphertext word, V, is read from the input. (I had to do byteswapping, but I don't know if this is just an artifact of my tooling or because something in the 573 naturally works in a different endianness.)
  2. Two 16-bit keys, S1 and S2, are XOR-ed to form a temporary subkey M. Depending on certain bits in M, neighboring bits in V are swapped.
  3. Certain other bits (eight of them) in M are extracted, interleaved with zero bits, and XOR-ed into V.
  4. An 8-bit counter (it starts at the value S3, which forms the last part of the key) is extended to 16 bits by shuffling and duplicating its bits in a fixed pattern, and then XOR-ed into V, producing the plaintext word.
  5. S1 is rotated one bit to the right, except that the sign bit (the 15th bit) stays still.
  6. If the 0th and 15th bits of S1 are unequal, S2 is rotated to the right by one bit (this time, the sign bit is like any other bit).
  7. S3 is incremented by one.
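To make the shape of the scheme concrete, here is a minimal Python sketch following the seven steps above. The exact bit positions used in steps 2 and 3 are not spelled out in this post, so the ones below are placeholders (the real constants live in the MAME source); everything else follows the steps one-to-one.

```python
MASK16 = 0xFFFF

def advance(s1, s2, s3):
    """Steps 5-7: advance the key state after each word."""
    low15 = s1 & 0x7FFF
    # Step 5: rotate the low 15 bits of S1 right by one; bit 15 stays put.
    s1 = (s1 & 0x8000) | (low15 >> 1) | ((low15 & 1) << 14)
    # Step 6: if bits 0 and 15 of S1 are unequal, rotate S2 right by one bit.
    if (s1 & 1) != (s1 >> 15):
        s2 = ((s2 >> 1) | ((s2 & 1) << 15)) & MASK16
    # Step 7: increment the 8-bit counter.
    return s1, s2, (s3 + 1) & 0xFF

def swap_bits(v, m):
    """Step 2: conditionally swap neighbouring bit pairs of v.

    The pair positions and the controlling bits of m are illustrative."""
    for bit in (0, 2, 4, 6):
        if (m >> bit) & 1:
            lo, hi = (v >> bit) & 1, (v >> (bit + 1)) & 1
            v = (v & ~(0b11 << bit) & MASK16) | (lo << (bit + 1)) | (hi << bit)
    return v

def spread(m):
    """Step 3: eight bits of m, interleaved with zero bits (illustrative)."""
    s = 0
    for i in range(8):
        s |= ((m >> (8 + i)) & 1) << (2 * i)
    return s

def decrypt(words, s1, s2, s3):
    out = []
    for v in words:
        m = s1 ^ s2
        v = swap_bits(v, m)        # step 2
        v ^= spread(m)             # step 3
        v ^= (s3 << 8) | s3        # step 4: counter extended to 16 bits
        out.append(v)
        s1, s2, s3 = advance(s1, s2, s3)
    return out

def encrypt(words, s1, s2, s3):
    # Every per-word step is an involution, so encryption is simply the
    # same operations applied in reverse order.
    out = []
    for v in words:
        m = s1 ^ s2
        v ^= (s3 << 8) | s3
        v ^= spread(m)
        v = swap_bits(v, m)
        out.append(v)
        s1, s2, s3 = advance(s1, s2, s3)
    return out
```

Note that decrypting with {S1 XOR 0xFFFF, S2 XOR 0xFFFF, S3} gives identical output, since only S1 XOR S2 is ever used per word and the rotations commute with complementation – the keyspace observation discussed next.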

The key is different per-file; someone has published a long list of known keys. I don't know where these come from (perhaps extracted from the images themselves), but I thought it would be a fun challenge to do some entry-level cryptanalysis and see if I could crack a few of the files myself.

There are a couple of observations we can make right away. First, due to the way S1/S2 are updated, and because only their XOR is ever used for anything, we can invert both of them to get an equivalent key. (That is, if {S1,S2,S3} form a key, {S1 XOR 0xFFFF, S2 XOR 0xFFFF, S3} will have the exact same effect.) This reduces the effective keyspace from 40 bits to 39 bits. I'm fairly certain this was unintentional.

Second, the algorithm splits very naturally into two orthogonal pieces: the S1/S2 part does one thing, the S3 part does something else, and they don't really interact (they run cleanly after each other). My attack was a simple known-plaintext one; since most of the MP3s seem to have a few large blocks of zeros in the first couple of hundred bytes, we can test the entire keyspace and check whether we suddenly get a lot of zero bytes after decryption. But due to this orthogonality, we can first apply a given S1/S2, store the result, and then compare it against all possible offsets in the S3 keystream. (The S3 step just XORs in a known sequence, after all; all we need to figure out is the offset into that sequence.) This saves a lot of time, especially as we can precompute the S3 sequence. It's a good example of how testing N keys can be much cheaper than N times testing one key, due to structural similarities between the operations given by the key.
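As a toy illustration of that factoring, here is a self-contained Python sketch of the search (again using illustrative bit positions rather than the real constants, and scanning a deliberately tiny keyspace). The expensive S1/S2 half of each trial decryption is computed once per candidate pair, then compared against the precomputed S3 keystream for every possible starting counter:

```python
def advance(s1, s2):
    # Key-state update (steps 5-6 of the scheme; S3 is handled separately).
    low15 = s1 & 0x7FFF
    s1 = (s1 & 0x8000) | (low15 >> 1) | ((low15 & 1) << 14)
    if (s1 & 1) != (s1 >> 15):
        s2 = ((s2 >> 1) | ((s2 & 1) << 15)) & 0xFFFF
    return s1, s2

def s1s2_half(words, s1, s2):
    """Apply only the S1/S2-dependent half of decryption (steps 2-3).

    Bit positions here are illustrative placeholders, as before."""
    out = []
    for v in words:
        m = s1 ^ s2
        for bit in (0, 2, 4, 6):                  # step 2: pair swaps
            if (m >> bit) & 1:
                lo, hi = (v >> bit) & 1, (v >> (bit + 1)) & 1
                v = (v & ~(0b11 << bit) & 0xFFFF) | (lo << (bit + 1)) | (hi << bit)
        sp = 0
        for i in range(8):                        # step 3: spread XOR
            sp |= ((m >> (8 + i)) & 1) << (2 * i)
        out.append(v ^ sp)
        s1, s2 = advance(s1, s2)
    return out

def s3_stream(s3, n):
    # Step 4 is a fixed, data-independent keystream: precompute it once.
    return [(((s3 + i) & 0xFF) << 8) | ((s3 + i) & 0xFF) for i in range(n)]

def search(ct, s1_range, s2_range):
    """Known-plaintext search assuming the plaintext is all-zero words.

    If pt == 0, then s1s2_half(ct) must equal the S3 keystream exactly."""
    n = len(ct)
    streams = {s3: s3_stream(s3, n) for s3 in range(256)}   # precomputed
    for s1 in s1_range:
        for s2 in s2_range:
            partial = s1s2_half(ct, s1, s2)       # computed once...
            for s3, ks in streams.items():        # ...reused for all 256 S3
                if partial == ks:
                    return s1, s2, s3
    return None
```

A real run would of course scan the full S1/S2 space and accept "mostly zeros" rather than exact equality, but the structure – one expensive S1/S2 pass reused across all 256 counter phases – is the same.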

Third, while the S1/S2 steps are fairly slow on traditional hardware, they map really well to the PDEP/PEXT instructions in BMI2. Add some tables, some AVX2, and we can brute-force the entire keyspace in about half a core-hour (and it's all trivially parallelizable).

And finally, note that the avalanche effect of the S1/S2 steps is really poor: if your key is only off by a few bits, your plaintext will also be wrong by only a few bits. You can see this either as a blessing (if you suddenly start getting zero bytes, you know you are very close to the key and can just try flipping a few bits here and there) or a curse (it's much harder to know whether you have the right key or not, since your plaintext looks “almost” right even if it does not form a valid MP3 file).

So, with that in mind, I cracked most of the files on the EuroMix 2 and DDR Extreme CDs. I was pleased to hear that the files would play with no issues whatsoever in VLC after decryption; unfortunately, there were no interesting ID3 tags or the like. (A fair number of files had a “LAME3.92” tag, though. Strangely, not all of them. Obviously, Konami's asset management didn't involve batch-encoding all songs from golden masters.)

All the files are encoded in CBR; I don't know if this is because the 573D can't handle VBR, or if Konami just didn't care all that much. EM2 is in 160 kbit/sec, and Extreme is in a more paltry 112 kbit/sec. Some of the preview snippets are in as little as 56 kbit/sec, and encoded in 32 kHz instead of 44.1! (I always thought the short for Electro Tuned sounded a bit off, and now I finally understand why.)

Nearly all of the songs include a second or three of dead silence at the start, presumably to give the player a bit of time to find the scrolling arrows on the screen. I always intuitively interpreted this as loading time, but it's really part of the MP3—and given CBR, it wastes space on the disc. Similarly, the previews (which are looped) include the fade-out, but this is more forgivable.

Finally, we can uncover a mystery that's been bothering players for a long time; DDR Extreme features 240 songs (when all are unlocked), but there are only 239 song files on the CD. (All 240 previews are there, though, plus some menu music and such.) Which one is missing, and where is it? After some searching and correlating with the previews, I found an answer that will probably surprise nobody: It's indeed the One More Extra Stage song, Dance Dance Revolution! And for some inexplicable reason, it is stored in the flash image. Given the consequences, I guess they didn't want OMES to skip, ever...

Dirk Eddelbuettel: linl 0.0.4: Now with footer

23 October, 2019 - 18:48

A new release of our linl package for writing LaTeX letters with (R)markdown just arrived on CRAN. linl makes it easy to write letters in markdown, with some extra bells and whistles thanks to some cleverness chiefly by Aaron.

This version now supports a (pdf, png, …) footer along with the already-supported header, thanks to an initial PR by Michal Bojanowski to which Aaron added nice customization for scale and placement (as supported by the LaTeX package wallpaper). I also added support for continuous integration testing at Travis CI via a custom Docker RMarkdown container—which is something I should actually say more about at another point.

Here is a screenshot of the vignette showing the simple input for some moderately fancy output (now with a footer):

The NEWS entry follows:

Changes in linl version 0.0.4 (2019-10-23)
  • Continuous integration tests at Travis are now running via custom Docker container (Dirk in #21).

  • A footer for the letter can now be specified (Michal Bojanowski in #23 fixing #10).

  • The header and footer options can be customized more extensively, and are documented (Aaron in #25 and #26).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the linl page. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: The Debian Project stands with the GNOME Foundation in defense against patent trolls

23 October, 2019 - 15:00

In 2012, the Debian Project published our Position on Software Patents, stating the threat that patents pose to Free Software.

The GNOME Foundation has announced recently that they are fighting a lawsuit alleging that Shotwell, a free and Open Source personal photo manager, infringes a patent.

The Debian Project firmly stands with the GNOME Foundation in their efforts to show the world that we in the Free Software communities will vigorously defend ourselves against any abuses of the patent system.

Please read this blog post about GNOME's defense against this patent troll and consider making a donation to the GNOME Patent Troll Defense Fund.

Steve Kemp: /usr/bin/timedatectl

23 October, 2019 - 13:30

Today I was looking over a system to see what it was doing, checking all the running processes, etc, and I spotted that it was running openntpd.

This post is a reminder to myself that systemd now contains an NTP-client, and I should go round and purge the ntpd/openntpd packages from my systems.

You can check on the date/time via:

$ timedatectl 
                      Local time: Wed 2019-10-23 09:17:08 EEST
                  Universal time: Wed 2019-10-23 06:17:08 UTC
                        RTC time: Wed 2019-10-23 06:17:08
                       Time zone: Europe/Helsinki (EEST, +0300)
       System clock synchronized: yes
systemd-timesyncd.service active: yes
                 RTC in local TZ: no

If the system is not setup to sync it can be enabled via:

$ sudo timedatectl set-ntp true

Finally logs can be checked as you would expect:

$ journalctl -u systemd-timesyncd.service


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.