Planet Debian


Michal Čihař: New projects on Hosted Weblate

12 February, 2018 - 18:00

Hosted Weblate also provides free hosting for free software projects. The hosting request queue had grown too long, with some requests waiting for more than a month, so it was time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate; recurring donations in particular are welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Julien Danjou: Scaling a polling Python application with asyncio

12 February, 2018 - 16:01

This article is a follow-up of my previous blog post about scaling a large number of connections. If you don't remember, I was trying to solve one of my followers' problem:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

In the first article, we wrote a program that could handle this problem at a large scale by using multiple threads. While that worked pretty well, it had some severe limitations. This time, we're going to take a different approach.

The job

The job has not changed: it is still about connecting to a remote server via ssh. This time, rather than faking it with ping, we are going to connect to a real ssh server. Once connected to the remote server, the mission will be to run a single command. For the sake of this example, the command run here is just a simple "echo hello world".

Using an event loop

This time, rather than leveraging threads, we are using asyncio. Asyncio is the leading Python event loop system implementation. It allows executing multiple functions (named coroutines) concurrently. The idea is that each time a coroutine performs an I/O operation, it yields control back to the event loop. As the input or output might be blocking (e.g., the socket has no data yet to be read), the event loop will reschedule the coroutine as soon as there is work to do. In the meantime, the loop can schedule another coroutine that has something to do – or wait for that to happen.
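A minimal sketch of this yielding behaviour (my example, not from the original article), using asyncio.sleep to stand in for a blocking I/O operation:

```python
import asyncio

async def fetch(name, delay):
    # asyncio.sleep stands in for a blocking I/O operation; awaiting
    # it yields control back to the event loop until it completes.
    await asyncio.sleep(delay)
    return "%s done" % name

async def main():
    # Both coroutines run concurrently, so the total runtime is
    # about 0.2 s (the longest delay), not 0.3 s (the sum).
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

results = asyncio.run(main())  # asyncio.run requires Python 3.7+
print(results)
```

While coroutine "a" sleeps, the loop schedules "b"; neither blocks the other.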

Not all libraries are compatible with the asyncio framework. In our case, we need an ssh library that has support for asyncio. It happens that AsyncSSH is a Python library that provides ssh connection handling support for asyncio. It is particularly easy to use, and the documentation has plenty of examples.

Here's the function that we're going to use to execute our command on a remote host:

import asyncssh

async def run_command(host, command):
    async with asyncssh.connect(host) as conn:
        result = await conn.run(command)
    return result.stdout

The function run_command runs a command on a remote host once connected to it via ssh. It then returns the standard output of the command. The function uses the async and await keywords that were introduced in Python 3.5 for asyncio. They indicate that the called functions are coroutines that might block, and that control is yielded back to the event loop.

As I don't own hundreds of servers that I can connect to, I will be using a single remote server as the target – but the program will connect to it multiple times. The server is at a latency of about 6 ms, which will magnify the results a bit.

The first version of this program is simple and stupid. It'll run N times the run_command function serially by providing the tasks one at a time to the asyncio event loop:

import asyncio

loop = asyncio.get_event_loop()
outputs = [
    loop.run_until_complete(
        run_command("myserver", "echo hello world %d" % i))
    for i in range(200)
]
Once executed, the program prints the following:

$ time python3
['hello world 0\n', 'hello world 1\n', 'hello world 2\n', … 'hello world 199\n']
python3 6.11s user 0.35s system 15% cpu 41.249 total

It took 41 seconds to connect 200 times to the remote server and execute a simple printing command.

To make this faster, we're going to schedule all the coroutines at the same time. We just need to feed the event loop with the 200 coroutines at once. That will give it the ability to schedule them efficiently.

outputs = loop.run_until_complete(asyncio.gather(
*[run_command("myserver", "echo hello world %d" % i)
for i in range(200)]))

By using asyncio.gather, it is possible to pass a list of coroutines and wait for all of them to be finished. Once run, this program prints the following:

$ time python3
['hello world 0\n', 'hello world 1\n', 'hello world 2\n', … 'hello world 199\n']
python3 4.90s user 0.34s system 35% cpu 14.761 total

This version took only ⅓ of the original execution time to finish! As a fun note, the main limitation here is that my remote server has trouble handling more than 150 connections in parallel, so this program is a bit tough on it alone.
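Since the target can be overwhelmed like this, one natural refinement (my addition, not part of the original article) is to cap the number of in-flight connections with an asyncio.Semaphore:

```python
import asyncio

async def run_limited(coros, limit=100):
    # Cap how many coroutines may hold a connection at once;
    # the rest queue up on the semaphore.
    semaphore = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with semaphore:
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))

# Used with the article's run_command (names as defined there):
# outputs = loop.run_until_complete(run_limited(
#     (run_command("myserver", "echo hello world %d" % i)
#      for i in range(200)),
#     limit=100))
```

With a limit of around 100, the program still finishes far faster than the serial version while keeping the ssh server comfortable.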


To show how great this method is, I've built a chart below that shows the difference in execution time between the two approaches, depending on the number of hosts the application has to connect to.

The trend lines highlight the difference in execution time and how important concurrency is here. For 10,000 nodes, a serial execution would need around 40 minutes, whereas the cooperative approach would need only 7 minutes – quite a difference. The concurrent approach allows executing one command 205 times a day rather than only 36 times!

That was the second step

Using an event loop for tasks that can run concurrently due to their I/O-intensive nature is really a great way to maximize the throughput of a program. This simple change made the program almost 6× faster.

Anyhow, this is not the only way to scale a Python program. There are a few other options available on top of this mechanism – I've covered those in my book Scaling Python, if you're interested in learning more!

Until then, stay tuned for the next article of this series!

Russ Allbery: February Haul

12 February, 2018 - 10:49

Most of this is the current StoryBundle: Black Narratives, in honor of Black History Month in the United States. But there's also a random selection of other things that I couldn't resist.

(I'm still reading this year too! Just a touch behind on reviews at the moment.)

Alicia Wright Brewster — Echo (sff)
T. Thorne Coyle — To Raise a Clenched Fist to the Sky (sff)
T. Thorne Coyle — To Wrest Our Bodies from the Fire (sff)
Julie E. Czerneda — Riders of the Storm (sff)
Julie E. Czerneda — Rift in the Sky (sff)
Terah Edun — Blades of Magic (sff)
Terah Edun — Blades of Illusion (sff)
L.L. Farmer — Black Borne (sff)
Jim C. Hines — Goblin Quest (sff)
Jim C. Hines — The Stepsister Scheme (sff)
Nalo Hopkinson — The Salt Roads (sff)
S.L. Huang — Root of Unity (sff)
Ursula K. Le Guin — Steering the Craft (nonfiction)
Nnedi Okorafor — Kabu-Kabu (sff collection)
Karen Lord — Redemption in Indigo (sff)
L. Penelope — Angelborn (sff)
Elizabeth Wein — The Pearl Thief (mainstream)

I'm slowly reading through the Czerneda that I missed, since I liked the Species Imperative series so much. Most of it isn't that good, and Czerneda has a few predictable themes, but it's fun and entertaining.

The Wein is a prequel to Code Name Verity, so, uh, yes please.

Russ Allbery: pgpcontrol 2.6

12 February, 2018 - 03:26

This is the legacy bundle of Usenet control message signing and verification tools, distributed primarily via (which hasn't updated yet as I write this). You can see the files for the current release at

This release adds support for using gpg for signature verification, provided by Thomas Hochstein, since gpgv may no longer support insecure digest algorithms.

Honestly, all the Perl Usenet control message code I maintain is a mess and needs some serious investment in effort, along with a major migration for the Big Eight signing key (and probably the signing key for various other archives). A lot of this stuff hasn't changed substantially in something like 20 years now, still supports software that no one has used in eons (like the PGP 2.6.3i release), doesn't use modern coding idioms, doesn't have a working test suite any longer, and is full of duplicate code to mess about with temporary files to generate signatures.

The formal protocol specification is also a pretty old and scanty description from the original project, and really should be a proper RFC.

I keep wanting to work on this, and keep not clearing the time to start properly and do a decent job of it, since it's a relatively large effort. But this could all be so much better, and I could then unify about four different software distributions I currently maintain, or at least layer them properly, and have something that would have a modern test suite and could be packaged properly. And then I could start a migration for the Big Eight signing key, which has been needed for quite some time.

Not sure when I'm going to do this, though, since it's several days of work to really get started. Maybe my next vacation?

(Alternately, I could just switch everything over to Julien's Python code. But I have a bunch of software already written in Perl of which the control message processing is just a component, so it would be easier to have a good Perl implementation.)

Steinar H. Gunderson: Chess960 opening position analysis

12 February, 2018 - 01:54

Magnus Carlsen and Hikaru Nakamura are playing an unofficial Chess960 world championship, so I thought I'd have a look at what the white advantage is for the 960 different positions. Obviously, you can't build anything like a huge opening book, but I let Stockfish run on the positions for increasing depths until I didn't have time anymore (in all, it was a little over a week, multiplied by 20 cores plus hyperthreading).
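As an aside, the 960 legal starting positions themselves are easy to enumerate: the two bishops must stand on opposite-coloured squares and the king must stand between the rooks. A short Python sketch (mine, not part of the original analysis):

```python
from itertools import permutations

def chess960_back_ranks():
    # Enumerate all legal Chess960 back ranks: bishops on opposite
    # colours, king somewhere between the two rooks.
    ranks = set()
    for perm in permutations("RNBQKBNR"):
        rank = "".join(perm)
        bishops = [i for i, p in enumerate(rank) if p == "B"]
        if bishops[0] % 2 == bishops[1] % 2:
            continue  # bishops must stand on opposite colours
        rooks = [i for i, p in enumerate(rank) if p == "R"]
        if not rooks[0] < rank.index("K") < rooks[1]:
            continue  # king must stand between the rooks
        ranks.add(rank)
    return sorted(ranks)

print(len(chess960_back_ranks()))  # 960
```

The classical setup RNBQKBNR is, of course, one of the 960.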

I've been asked to publish the list, so here it is. It's calculated deterministically using a prerelease version of Stockfish 9 (git from about a month before release), using only a single thread and consistently cleared 256 MB hash. All positions are calculated to depth 39, which is about equivalent to looking at the position for 2–3 hours, but a few are at depth 40. (At those depths, the white advantage varies from 0.01 to 0.57 pawns.) Unfortunately, I didn't save the principal variation, so it can be hard to know exactly why it thinks one position is particularly good for white, but generally, the best positions for white contain early attacks that are hard to counter without putting the pieces in awkward positions.

One thing you may notice is that the evaluation varies quite a lot between depths. This means you shouldn't take the values as absolute gospel; it's fairly clear that the +0.57 position is better than the +0.01 position, but the difference between +0.5 and +0.4 is much less clear-cut, as you can easily see one position varying between e.g. +0.2 and +0.5.

Note that my analysis page doesn't use this information, since Stockfish doesn't have persistent hash; it calculates from scratch every game.

Petter Reinholdtsen: How hard can æ, ø and å be?

11 February, 2018 - 23:10

The year is 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at restaurants. Recently I visited a Peppes Pizza restaurant and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.

I would say that this state of affairs is past sad and well into embarrassing.

I removed personal and private information to be nice.
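What has most likely happened (an assumption on my part; I have not seen the printer's firmware) is that the printer decodes UTF-8 bytes using a legacy single-byte code page. Each Norwegian letter is two bytes in UTF-8, so it comes out as two strange symbols:

```python
# CP437 is an assumption here; receipt printers commonly default to
# it or to CP850. The point is that any single-byte code page turns
# the two UTF-8 bytes of 'ø' into two unrelated symbols.
text = "Servitør"
mangled = text.encode("utf-8").decode("cp437")
print(text, "->", mangled)
```

Eight letters in, nine symbols out – exactly the kind of garbage on the receipt.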

Shirish Agarwal: A hack and a snowflake

11 February, 2018 - 18:04

This will be a long post. Before starting, I would like to explain that I am not a native English speaker. I say this because what I'm going to write may or may not use the same terms or meanings that others understand, as I'm an Indian who uses a mix of American and British English.

So it's very much possible that I have picked up all the bad habits of learning and understanding and none of the good ones in writing English, as bad habits, here as elsewhere in life, are the easier ones to pick up. Also, I'm not a trained writer, nor have I ever taken writing lessons, apart from learning English in school as a language meant for communication.

A few days back, I was reading an opinion piece (I have tried and failed to find it again; if anybody finds it, please share it in the comments and I will link it here). A feminist author described how some poets preached or depicted violence against women in their writings and poems, including some of the most famous poets we admire today, such as William Wordsworth. She picked out particular poems from their body of work which seemed to convey that message. Going further than that, she chose to draw a line between the poet and their larger body of work. I wish she had cared enough to also look a bit more deeply into the poet's life rather than labeling him from one poem among the tens or hundreds he may have written. I confess I haven't read much of Wordsworth beyond what was taught in school, and from that he seemed to be a nature lover rather than the sexual predator he is being made out to be. It is possible that I have been misinformed.



The reason I say this is because I'm a hack. A hack in the writing department or 'business' is somebody who is not supposed to tell any back story and should just go away. Writers, though, even those who write short stories, need to have a backstory at least in the back of their minds for each character they introduce into the story. Because it's a short story, they cannot reveal where the characters come from but can only point at their actions. I had started the process twice, and two small stories got somewhat written through me, but I stopped both times midway.

While I was hammering at the keyboard for the stories, it was as if the characters themselves were taking me on a journey which was dark, and I didn't want to venture further. I had heard this from quite a few authors, a few of them published, and had dismissed it as a kind of sales pitch or something.

When I did write those stories, for the sake of argument, I realized that the only things the author has are an idea and the overall arc of the story. You have to have complete faith in your characters even if they lead you astray or into unexpected places. The characters speak to you and through you rather than the other way around. It is the maddest and most mysterious of journeys, and it seemed the characters liked the darker tones more than the lighter ones. I do not know whether it is the same for all writers/hacks (at least in the beginning) or just me, or maybe it's a cathartic expression. I do hope to write more stories and even complete them, even if they have dark overtones, just to understand the process. By dark I mean violence here.

That is why I said that if the author of the opinion piece had taken the pains to share more of the context surrounding the poems – when Mr. Wordsworth or the other poets wrote them – perhaps I could have identified with that, as could many writers/authors themselves.

I was disappointed with the article in two ways: on the one hand they dismissed the poet/artist, and yet they did not seem to want to critique or ban all the other works, because

a. either they liked the major part of the work, or

b. they knew that the audience the article was serving probably likes the other works, and chose not to provoke them.

Another point: I felt that when you push and punish poets, are you doing so because they are softer targets now more than ever? Almost all the poets she talked about are unfortunately no longer in this mortal realm. On the business side of things, the publishing industry is in for grim times. The poets and poems are perhaps the easiest targets at the moment, as they are not the center of the world anymore, as they used to be. Both in the United States as well as here in India, literature, or even fiction for that matter, has been booted out of the educational system. The point I'm trying to make here is that publishers are not in a position to protect authors, or even themselves, when such articles are written and opinions are formed. Also see for an Indian viewpoint of the same.

I also did not understand what the author wanted when she named and shamed the poets. If you really want to name and shame people who have committed and are committing acts of violence against women, then the horror genre, apart from the action genre, could easily be targeted. In many horror movies, in Hollywood, Bollywood and perhaps other countries as well, the female protagonist/lead is often molested, sexually assaulted, maimed, killed, cannibalized and so on and so forth. Should we ban such movies forthwith?

Also, does 'banning' a work of art really work? The movie 'Padmaavat' has been mired in controversy due to a cultural history where, as the story/myth goes, 'Rani Padmavati' (whether she is real or an imaginary figure is probably fodder for another blog post), when confronted with Khilji, committed 'Jauhar' or self-immolation so that she would remain 'pure'. The fanatics rally around her as she is supposed to have paid the ultimate price, sacrificing herself. But if she were really a queen, shouldn't she have thought of her people and lived to lead the fight, run away to fight another day, or, if she were cunning enough, wormed her way into Khilji's heart and toppled him from within? The history and politics of those days offered all those options to her, if she were a real character; why commit suicide?

Because of the violence being perpetrated around Rani Padmavati, there hasn't been a decent critique of either the movie or the historical times in which she lived. It perhaps makes the men of the land secure in the knowledge that women, then and even now, should kill themselves rather than fall in love with the 'other' (a Muslim), romantically thought of as the 'invader' – a thought which has been perpetuated by the English, for their own political gains, ever since the East India Company came. Another idea, of women being pure and oblivious rather than 'devious', could also be debated.

(sarcasm) Of course, the idea, held by actual historians, that Khilji and Rani Padmavati could not have lived in the same century is too crackpot to believe, as cultural history wins over real history. (/sarcasm)

The reason this whole thing got triggered was the 'snowflake' comments on . The article itself is a pretty good read; even though I'm an outsider to how the kernel comes together, and although I have the theoretical knowledge of how the various subsystem maintainers pull and push patches up the train and how Linus manages to eke out a kernel release every 3-4 months, I did have an opportunity to observe how fly-by contributors are ignored by subsystem maintainers.

About a decade or so ago, my 2-button wheel Logitech mouse of the time was dying, and I had no idea why it sometimes functioned and sometimes didn't. A hacker named 'John Hill' put up a patch. What the patch did, essentially, was trigger warnings on the console when the system was unable to get a signal from my 2-button wheel mouse. I commented and tried to get it pushed into the trunk, but it wasn't, and there was no explanation from anyone as to why the patch was discarded. I did come to know, while building the mouse module, how many types and models of mice there are, which was a revelation to me at that point in time. By pushing, I mean I commented a couple of times, both where the patch was posted and on the mailing list where threads for patches are posted, saying that the patch by John Hill should be considered, but nobody got back to me or him.

It's been a decade since then, and AFAIK we still do not have any proper error reporting process if the mouse/keyboard fails to transmit messages/signals to the system.

That apart, the really long thread was about the term 'snowflake'. I had been called that in the past but had sort of tuned it out, as I didn't know what the term meant.

When I went to Wikipedia and looked up 'snowflake', it came with three meanings for the same word.

a. A unique white crystalline shape

b. A person who believes that s/he is unique and hence entitled

c. A person who is weak or thin-skinned (overly sensitive)

I believe we are all of the above; the only difference is perhaps one of degree. If we weren't meant to be unique, we wouldn't have been given a personality, a body type, a sense of reasoning and logic and, perhaps most important, a sense of what is right or wrong. And with being thick-skinned comes the inability to love and have empathy for others.

To round off on a somewhat hopeful note, I was re-reading, maybe for the umpteenth time, 'Sacred Stone', an action thriller in which four Hindus, along with a corrupt, wealthy and hurt billionaire, try to blow up the most sacred sites of the Muslims, Mecca and Medina. While I don't know whether it would be possible or not, I would for sure like to see people using the pious days for reflection. I don't have to do anything, just be.

Similarly, the Spanish pilgrimage as shown in 'The Way'. I don't think any of my issues would be resolved by being in either of the two places, but it may trigger paths within me which I have not yet explored or forgot a long time ago.

At the end, I would like to share two interesting articles that I saw/read over the week: the first one is about the 'Alphonso' and the other about Samarkhand. I hope you enjoy both articles.

Steve Kemp: Decoding 433Mhz-transmissions with software-defined radio

11 February, 2018 - 05:00

This blog-post is a bit of a diversion, and builds upon my previous entry of using 433Mhz radio-transmitters and receivers with Arduino and/or ESP8266 devices.

As mentioned in my post I've recently been overhauling my in-house IoT buttons, and I decided to go down the route of using commercially-available buttons which broadcast signals via radio, rather than using IR, or WiFi. The advantage is that I don't need to build any devices, or worry about 3D-printing a case - the commercially available buttons are cheap, water-proof, portable, and reliable, so why not use them? Ultimately I bought around ten buttons, along with a radio-receiver and radio-transmitter modules for my ESP8266 device. I wrote code to run on my device to receive the transmissions, decode the device-ID, and take different actions based upon the specific button pressed.

In the gap between buying the buttons (read: radio transmitters) and waiting for the transmitter/receiver modules I intended to connect to my ESP8266/arduino device(s) I remembered that I'd previously bought a software-defined-radio receiver, and figured I could use it to receive and react to the transmissions directly upon my PC.

The dongle I'd bought in the past was a simple USB-device which identifies itself as follows when inserted into my desktop:

  [17844333.387774] usb 3-9: New USB device found, idVendor=0bda, idProduct=2838
  [17844333.387777] usb 3-9: New USB device strings: Mfr=1, Product=2, SerialNumber=3
  [17844333.387778] usb 3-9: Product: RTL2838UHIDIR
  [17844333.387778] usb 3-9: Manufacturer: Realtek
  [17844333.387779] usb 3-9: SerialNumber: 00000001

At the time I bought it I wrote a brief blog post, which described tracking aircraft, and I said "I know almost nothing about SDR, except that it can be used to let your computer do stuff with radio."

So my first step was finding some suitable software to listen on the right frequency and ideally decode the transmissions. A brief search led me to the following repository:

The RTL_433 project is pretty neat as it allows receiving transmissions and decoding them. Of course it can't decode everything, but it has the ability to recognize a bunch of commonly-used hardware, and when it does it outputs the payload in a useful way, rather than just dumping a bitstream/bytestream.

Once you've got your USB-dongle plugged in, and you've built the project you can start receiving and decoding all discovered broadcasts like so:

  skx@deagol ~$ ./build/src/rtl_433 -U -G
  trying device  0:  Realtek, RTL2838UHIDIR, SN: 00000001
  Found Rafael Micro R820T tuner
  Using device 0: Generic RTL2832U OEM
  Exact sample rate is: 250000.000414 Hz
  Sample rate set to 250000.
  Bit detection level set to 0 (Auto).
  Tuner gain set to Auto.
  Reading samples in async mode...
  Tuned to 433920000 Hz.

Here we've added flags:

  • -G
    • Enable all decoders. So we're not just listening for traffic at 433Mhz, but we're actively trying to decode the payload of the transmissions.
  • -U
    • Timestamps are in UTC

Leaving this running for a few hours I noted that there are several nearby cars which are transmitting data about their tyre-pressure:

  2018-02-10 11:53:33 :      Schrader       :      TPMS       :      25
  ID:          1B747B0
  Pressure:    2.500 bar
  Temperature: 6 C
  Integrity:   CRC

The second log is from running with "-F json" to cause output to be generated in JSON format:

  {"time" : "2018-02-10 09:51:02",
   "model" : "Toyota",
   "type" : "TPMS",
   "id" : "5e7e0637",
   "code" : "63e6026d",
   "mic" : "CRC"}

In both cases we see "TPMS", and according to wikipedia that is Tyre Pressure Monitoring System. I'm a little shocked to receive this data, unencrypted!

Other events also became visible when I left the scanner running; this one is presumably from some kind of temperature-sensor a neighbour has running:

  2018-02-10 13:19:08 : RF-tech
     Id:              0
     Battery:         LOW
     Button:          0
     Temperature:     0.0 C

Anyway I have a bunch of remote-controlled sockets, branded "NEXA", which look like this:

When I press the remote I can see the transmissions and program my PC to react to them:

  2018-02-11 07:31:20 : Nexa
    House Code:  39920705
    Group:  1
    Channel: 3
    State:   ON
    Unit:    2
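My actual reaction code isn't shown here, but as a sketch (the JSON field names for the Nexa events are assumptions based on the decoded output above), one could pipe `rtl_433 -F json` into a small Python script:

```python
import json

def dispatch(event):
    # Sketch only: react to a decoded rtl_433 event. The "model",
    # "state" and "unit" keys are assumptions based on the decoded
    # output shown above.
    if event.get("model") == "Nexa" and event.get("state") == "ON":
        return "Nexa unit %s pressed" % event.get("unit")
    return None

def watch(stream):
    # Feed this from e.g.:  rtl_433 -F json | python3 watch.py
    reactions = []
    for line in stream:
        try:
            event = json.loads(line)
        except ValueError:
            continue  # skip status lines that aren't JSON
        message = dispatch(event)
        if message:
            reactions.append(message)
    return reactions
```

Hooking `watch(sys.stdin)` up to a real rtl_433 process is then a one-liner.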

In conclusion:

  • SDR can be used to easily sniff & decode cheap and commonly available 433Mhz-based devices.
  • "Modern" cars transmit their tyre-pressure, apparently!
  • My neighbours can probably overhear my button presses.

Norbert Preining: In memoriam Staszek Wawrykiewicz

10 February, 2018 - 20:05

We have lost a dear member of our community, Staszek Wawrykiewicz. I got notice that our friend died in an accident the other day. My heart stopped for an instant when I read the news, it cannot be – one of the most friendly, open, heart-warming friends has passed.

Staszek was an active member of the Polish TeX community, and an incredibly valuable TeX Live team member. His insistence and perseverance have saved TeX Live from many disasters and bugs. Although I had been in contact with Staszek over the TeX Live mailing lists for some years, I met him in person for the first time at my first ever BachoTeX, EuroBachoTeX 2007. His friendliness, his openness to all new things, his inquisitiveness – all took a great place in my heart.

I dearly remember the evenings with Staszek and our Polish friends in one of the Bachotek huts, around the bonfire, him playing the guitar and singing traditional and not-so-traditional Polish music, inviting everyone to join in and enjoy together. Rarely have technical and social abilities found such a nice combination as in Staszek.

Despite his age he often felt like someone in his twenties, always ready for a joke, always ready to party, always ready to have fun. It is this kind of attitude I would like to carry with me as I get older. Thanks for giving me a great example.

The few times I managed to come to BachoTeX from far-away Japan, Staszek was as welcoming as ever – it is the feeling between close friends that even if you haven't seen each other for a long time, the moment you meet it feels like it was just yesterday. And wherever you went during a BachoTeX conference, his traces and his funniness were always present.

It is a very sad loss for all of those who knew Staszek. If I could I would like to board the plane just now and join the final service to a great man, a great friend.

Staszek, we will miss you. BachoTeX will miss you, TeX Live will miss you, I will miss you badly. A good friend has passed away. May you rest in peace.

Photo credit goes to various people attending BachoTeX conferences.

Junichi Uekawa: Writing chrome extensions.

10 February, 2018 - 19:46
Writing chrome extensions. I am writing javascript with lots of async/await, and I forget. It's also a bit annoying that many system provided functions don't support promises yet.

John Goerzen: The Big Green Weasel Song

10 February, 2018 - 04:08

One does not normally get into one’s car intending to sing about barfing green weasels to the tune of a Beethoven symphony for a quarter of an hour. And yet, somehow, that’s what I wound up doing today.

Little did I know that when Jacob started band last fall, it would inspire him to sing about weasels to the tune of Beethoven’s 9th. That may come as a surprise to his band teacher, too, who didn’t likely expect that having the kids learn the theme to the Ninth Symphony would inspire them to sing about large weasels.

But tonight, as we were driving, I mentioned that I knew the original German words. He asked me to sing. I did.

Then, of course, Jacob and Oliver tried to sing it back in German. This devolved into gibberish and a fit of laughter pretty quick, and ended with something sounding like “schneezel”. So of course, I had to be silly and added, “I have a big green weasel!”

From then on, they were singing about big green weasels. It wasn’t long before they decided they would sing “the chorus” and I was supposed to improvise verse after verse about these weasels. Improvising to the middle of the 9th Symphony isn’t the easiest thing, but I had verses about giving weasels a hug, about weasels smelling like a bug, about a green weasel on a chair, about a weasel barfing on the stair. And soon, Jacob wanted to record the weasel song to make a CD of it. So we did, before we even got to town. Here it is:

[Youtube link]

I love to hear children delight in making music. And I love to hear them enjoying Beethoven. Especially with weasels.

I just wonder what will happen when they learn about Mozart.

Erich Schubert: Genius Nonsense & Spam

9 February, 2018 - 22:01

Booking.com just spammed me with an email claiming that I am a “frequent traveller” (which I am not), and would thus get “Genius” status and rebates (which means they are going to hide some non-partner search results from me…) - I hate such marketing spam.

What a big rip-off.

I have rarely ever used Booking.com, and in fact I last used it in 2015.

That is certainly not what you would call a “frequent traveler”.

But Booking.com sells this to their hotel customers as “most loyal guests”. As I am clearly not a “loyal guest”, I consider this claim to be borderline fraud. And beware: since this is a partner programme, it comes with a downside for the user, as the partner results will be “boosted in our search results”. In other words, your search results will be biased. They will hide other results to boost their partners' – results that would otherwise come first (for example, because they are closer to your desired location, or even cheaper).

Forget Booking.com and their “Genius program”. It's a marketing fake.

Going to report this as spam, and kill my account there now.

Pro tip: use incognito mode whenever possible for surfing. For Chromium (or Google Chrome), add the option --incognito to your launcher icon, for Firefox use --private-window. On a smartphone, you may want to switch to Firefox Focus, or the DuckDuckGo browser.

Looks like those hotel booking brokers (who are in fierce competition) are getting quite desperate. We are certainly heading into the second big dot-com bubble, and it is probably going to burst rather sooner than later. Maybe the current stock market fragility will finally trigger this. If some parts of the “old” economy have to cut down their advertising budgets, this will have a very immediate effect on Google, Facebook, and many others.

Lars Wirzenius: Qvisqve - an authorisation server, first alpha release

9 February, 2018 - 21:41

My company, QvarnLabs Ab, has today released the first alpha version of our new product, Qvisqve. Below is the press release. I wrote pretty much all the code, and it's free software (AGPL+).

Helsinki, Finland - 2018-02-09. QvarnLabs Ab is happy to announce the first public release of Qvisqve, an authorisation server and identity provider for web and mobile applications. Qvisqve aims to be secure, lightweight, fast, and easy to manage. "We have big plans for Qvisqve, and helping customers manage cloud identities", says Kaius Häggblom, CEO of QvarnLabs.

In this alpha release, Qvisqve supports the OAuth2 client credentials grant, which is useful for authenticating and authorising automated systems, including IoT devices. Qvisqve can be integrated with any web service that can use OAuth2 and JWT tokens for access control.

Future releases will provide support for end-user authentication by implementing the OpenID Connect protocol, with a variety of authentication methods, including username/password, U2F, TOTP, and TLS client certificates. Multi-factor authentication will also be supported. "We will make Qvisqve flexible for any serious use case", says Lars Wirzenius, software architect at QvarnLabs. "We hope Qvisqve will be useful to the software freedom ecosystem in general", Wirzenius adds.

Qvisqve is developed and supported by QvarnLabs Ab, and works together with the Qvarn software, which is award-winning free and open-source software for managing sensitive personal information. Qvarn is in production use in Finland and Sweden and manages over a million identities. Both Qvisqve and Qvarn are released under the Affero General Public Licence.

Olivier Berger: A review of Virtual Labs virtualization solutions for MOOCs

9 February, 2018 - 21:17

I’ve just uploaded a new memo A review of Virtual Labs virtualization solutions for MOOCs in the form of a page on my blog, before I eventually publish something more elaborate (and validated by peer review).

The subtitle is “From Virtual Machines running locally or on IaaS, to containers on a PaaS, up to hypothetical ports of tools to WebAssembly for serverless execution in the Web browser”.

Excerpt from the intro:

In this memo, we try to draw an overview of some benefits and concerns with existing approaches at using virtualization techniques for running Virtual Labs, as distributions of tools made available for distant learners.

We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, (2) displaying the remote execution of similar virtual machines on an IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud.

We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of the modern Web browsers.

I hope this will be of some interest to some of you.

Feel free to comment on this blog post.

Steve Kemp: Creating an IoT button, the smart way

9 February, 2018 - 05:00

There are several projects out there designed to create an IoT button:

  • You press a button.
  • Magic happens, and stuff runs on your computer, or is otherwise triggered remotely.

I made my own internet-button, an esp8266-based alarm-button, and recently I've wanted to have a few more dotted around our flat. To recap, the basic way these things work is that you have a device with a button on it.

Once deployed you would press the button, your device wakes up, connects to your WiFi and sends a "message". That message then goes on to trigger some kind of defined action. In my case my button would mute any existing audio-playback, then trigger the launch of an "alarm.mp3" file. In short - if somebody presses the button I would notice.

I wanted a few more doing slightly more complex things in the flat, such as triggering lights and various programs. Unfortunately these buttons are actually relatively heavy-weight, largely because connecting to WiFi demands a reasonable amount of power-draw. Even with deep-sleeping between invocations, driving such a device from battery-power means the life-span is not great. (In my case I cheat, my button is powered by a small phone-charger, which means power isn't a concern, but my "button" is hobbled.)

Ideally what everybody wants is security, simplicity, and availability. Running from batteries, avoiding the need to program WiFi credentials and having a decent form-factor makes an IoT button a lot simpler to sell - you don't need to do complicated setup, and things are nice and neat.

So I wondered whether such an impossible dream was actually possible, and it turns out that yes, such a device is trivial.

Instead of building WiFi into a bunch of buttons you could build the smarts into one device, a receiver, connected to your PC via a USB cable - the buttons are very very simple, don't use WiFi, don't need to be programmed, and don't draw a lot of current. How? Radio.

There exist pre-packaged and simple radio-based buttons, such as this one:

You press a button and it broadcasts a simple message on 433MHz. There exist very cheap and reliable 433MHz receivers which you can connect to an Arduino, or ESP8266-based device. Which means you have a different approach:

  • You build a device based upon an Arduino/ESP8266/similar.
  • It listens for 433Mhz transmissions, decodes the device ID.
  • Once it finds something it recognizes it can write to STDOUT (more or less)
  • The host system opens /dev/ttyUSB0 and reads the output
    • Which it can then use to trigger actions.

The net result is you can buy a bunch of buttons, for €5 each and add them to your system. The transmissions are easy to decode, and each button has a unique ID. You don't need to program them with your WiFi credentials, or set them up - except on the host - and because these devices don't do much, they just sleep waiting for a press, make a brief radio-transmission, then go back to sleep, their batteries can last for months.

So that's what I've done. I've written a simple program which decodes the transmissions and posts to an MQ instance "button-press-a", "button-press-b", etc, and I can react to them uniquely. (In some cases the actions taken depend upon the time of day.)
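The post shows no code (it depends on the exact buttons), but the host-side mapping from decoded device IDs to MQ topic names can be very small. Here is an illustrative Python sketch; the "RECV 0x…" serial-line format, the device IDs, and the topic names are all invented for the example, not taken from the post:

```python
# Hypothetical sketch: map decoded 433MHz device IDs (read from the
# receiver's serial output) to MQ topic names. Real IDs depend on
# which buttons you buy, and the line format on your firmware.

BUTTON_TOPICS = {
    0x4A21B3: "button-press-a",
    0x4A21B7: "button-press-b",
}

def topic_for_line(line):
    """Parse a serial line such as 'RECV 0x4A21B3' and return the
    MQ topic to publish, or None for unknown IDs or noise."""
    parts = line.strip().split()
    if len(parts) != 2 or parts[0] != "RECV":
        return None
    try:
        device_id = int(parts[1], 16)
    except ValueError:
        return None
    return BUTTON_TOPICS.get(device_id)
```

In a real setup you would read lines from /dev/ttyUSB0 in a loop (e.g. with pyserial) and publish each non-None topic to your message queue.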

No code to show here, because it depends upon the precise flavour of button(s) that you buy. But I had fun because some of the remote-controls around my house use the same frequency - and a lot of the cheap "remote-controlled power-switches" use this frequency too. If you transmit as well as receive you can have a decent amount of fun. :)

Of course radio is "broadcast", so somebody nearby could tell I've pressed a button, but as far as security goes there are no obvious IoT-snafus that I think I care about.

In my next post I might even talk about something more interesting - SMS-based things. In Europe (data)-roaming fees have recently been abolished, and anti-monopoly regulations make it "simple" to get your own SIMs made. This means you can buy a bunch of SIMs, stick them in your devices and get cheap data-transfer Europe-wide. There are obvious commercial aspects available if you go down this route, if you can accept the caveat that you need to add a SIM-card to each transmitter and each receiver. If you can, a lot of things become possible, especially when coupled with GPS. Not only do you gain the ability to send messages/events/data, but you can see where it came from, physically/geographically, and that is something that I think has a lot of interesting use-cases.

Enrico Zini: Gnome without chrome-gnome-shell

8 February, 2018 - 16:26

New laptop, has a touchscreen, can be folded into a tablet, I heard gnome-shell would be a good choice of desktop environment, and I managed to tweak it enough that I can reuse existing habits.

I have a big problem, however, with how it encourages one to download random extensions off the internet and run them as part of the whole desktop environment. I have an even bigger problem with gnome-core having a hard dependency on chrome-gnome-shell, a plugin which cannot be disabled without editing files in /etc as root, and which exposes parts of my desktop environment to websites.

Visit this site and it will know which extensions you have installed, and it will be able to install more. I do not trust that, I do not need that, I do not want that. I am horrified by the idea of that.

I made a workaround.

How can one do the same for firefox?


chrome-gnome-shell is a hard dependency of gnome-core, and it installs a browser plugin that one may not want, and mandates its use by system-wide chrome policies.

I consider having chrome-gnome-shell an unneeded increase of the attack surface of my system, in exchange for the dubious privilege of being able to download and execute, as my main user, random unreviewed code.

This package satisfies the chrome-gnome-shell dependency, but installs nothing.
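The contents of the contain-gnome-shell control file are not shown in the post; a minimal equivs control file for this purpose might look roughly like the sketch below (field values are illustrative guesses, not the actual file):

```
Section: misc
Priority: optional
Package: contain-gnome-shell
Version: 1.0
Architecture: all
Provides: chrome-gnome-shell
Description: empty package satisfying the chrome-gnome-shell dependency
 Installs nothing; exists only so that gnome-core's dependency on
 chrome-gnome-shell is met without installing the browser plugin.
```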

Note that after installing this package you need to purge chrome-gnome-shell if it was previously installed, to have it remove its chromium policy files in /etc/chromium

apt install equivs
equivs-build contain-gnome-shell
sudo dpkg -i contain-gnome-shell_1.0_all.deb
sudo dpkg --purge chrome-gnome-shell

Russell Coker: Thinkpad X1 Carbon

8 February, 2018 - 11:18

I just bought a Thinkpad X1 Carbon to replace my Thinkpad X301 [1]. It cost me $289 with free shipping from an eBay merchant which is a great deal, a new battery for the Thinkpad X301 would have cost about $100.

It seems that laptops aren’t depreciating in value as much as they used to. Grays Online used to reliably have refurbished Thinkpads with manufacturer’s warranty selling for about $300. Now they only have IdeaPads (a cheaper low-end line from Lenovo) at good prices, admittedly $100 to $200 for an IdeaPad is a very nice deal if you want a cheap laptop and don’t need something too powerful. But if you want something for doing software development on the go then you are looking at well in excess of $400. So I ended up buying a second-hand system from an eBay merchant.


I was quite excited to read the specs that it has an i7 CPU, but now I have it I discovered that the i7-3667U CPU scores 3990 according to passmark [2]. While that is much better than the U9400 in the Thinkpad X301 that scored 968, it’s only slightly better than the i5-2520M in my Thinkpad T420 that scored 3582 [3]. I bought the Thinkpad T420 in August 2013 [4] and had hoped that Moore’s Law would result in me getting a system at least twice as fast as my last one. But buying second-hand meant I got a slower CPU. Also the small form factor of the X series limits the heat dissipation and therefore limits the CPU performance.


Thinkpads have traditionally had the best keyboards, but they are losing that advantage. This system has a keyboard that feels like an Apple laptop keyboard not like a traditional Thinkpad. It still has the Trackpoint which is a major feature if you like it (I do). The biggest downside is that they rearranged the keys. The PgUp/PgDn keys are now by the arrow keys, this could end up being useful if you like the SHIFT-PgUp/SHIFT-PgDn combinations used in the Linux VC and some Xterms like Konsole. But I like to keep my keys by the home keys and I can’t do that unless I use the little finger of my right hand for PgUp/PgDn. They also moved the Home, End, and Delete keys which is really annoying. It’s not just that the positions are different to previous Thinkpads (including X series like the X301), they are different to desktop keyboards. So every time I move between my Thinkpad and a desktop system I need to change key usage.

Did Lenovo not consider that touch typists might use their products?

The keyboard moved the PrtSc key, and lacks ScrLk and Pause keys, but I hardly ever use the PrtSc key, and never use the other 2. The lack of those keys would only be of interest to people who have mapped them to useful functions and people who actually use PrtSc. It’s impractical to have a key as annoying to accidentally press as PrtSc between the Ctrl and Alt keys.

One significant benefit of the keyboard in this Thinkpad is that it has a backlight instead of having a light on the top of the screen that shines on the keyboard. It might work better than the light above the keyboard and looks much cooler! As an aside I discovered that my Thinkpad X301 has a light above the keyboard, but the key combination to activate it sometimes needs to be pressed several times.


X1 Carbon 1600*900
T420 1600*900
T61 1680*1050
X301 1440*900

Above are the screen resolutions for all my Thinkpads of the last 8 years. The X301 is an anomaly as I got it from a rubbish pile and it was significantly older than Thinkpads usually are when I get them. It’s a bit disappointing that laptop screen resolution isn’t increasing much over the years. I know some people have laptops with resolutions as high as 2560*1600 (as high as a high-end phone), but it seems that most laptops are below phone resolution.

Kogan is currently selling the Agora 8+ phone new for $239, including postage that would still be cheaper than the $289 I paid for this Thinkpad. There’s no reason why new phones should have lower prices and higher screen resolutions than second-hand laptops. The Thinkpad is designed to be a high-end brand, other brands like IdeaPad are for low end devices. Really 1600*900 is a low-end resolution by today’s standards, 1920*1080 should be the minimum for high-end systems. Now I could have bought one of the X series models with a higher screen resolution, but most of them have the lower resolution and hunting for a second hand system with the rare high resolution screen would mean missing the best prices.

I wonder if there’s an Android app to make a phone run as a second monitor for a Linux laptop, that way you could use a high resolution phone screen to display data from a laptop.

This display is unreasonably bright by default. So bright it hurt my eyes. The xbacklight program doesn’t support my display but the command “xrandr --output LVDS-1 --brightness 0.4” sets the brightness to 40%. The Fn key combination to set brightness doesn’t work. Below a brightness of about 70% the screen looks grainy.


This Thinkpad has a 180G SSD that supports contiguous reads at 500MB/s. It has 8G of RAM which is the minimum for a usable desktop system nowadays and while not really fast the CPU is fast enough. Generally this is a nice system.

It doesn’t have an Ethernet port which is really annoying. Now I have to pack a USB Ethernet device whenever I go anywhere. It also has mini-DisplayPort as the only video connector, as that is almost never available at a conference venue (VGA and HDMI are the common ones) I’ll have to pack an adaptor when I give a lecture. It also only has 2 USB ports, the X301 has 3. I know that not having HDMI, VGA, and Ethernet ports allows designing a thinner laptop. But I would be happier with a slightly thicker laptop that has more connectivity options. The Thinkpad X301 has about the same mass and is only slightly thicker and has all those ports. I blame Apple for starting this trend of laptops lacking IO options.

This might be the last laptop I own that doesn’t have USB-C. Currently not having USB-C is not a big deal, but devices other than phones supporting it will probably be released soon and fast phone charging from a laptop would be a good feature to have.

This laptop has no removable battery. I don’t know if it will be practical to replace the battery if the old one wears out. But given that replacing the battery may be more than the laptop is worth this isn’t a serious issue. One significant issue is that there’s no option to buy a second battery if I need to have it run without mains power for a significant amount of time. When I was travelling between Australia and Europe often I used to pack a second battery so I could spend twice as much time coding on the plane. I know it’s an engineering trade-off, but they did it with the X301 and could have done it again with this model.


This isn’t a great laptop. The X1 Carbon is described as a flagship for the Thinkpad brand and the display is letting down the image of the brand. The CPU is a little disappointing, but it’s a trade-off that I can deal with.

The keyboard is really annoying and will continue to annoy me for as long as I own it. The X301 managed to fit a better keyboard layout into the same space, there’s no reason that they couldn’t have done the same with the X1 Carbon.

But it’s great value for money and works well.

Related posts:

  1. More About the Thinkpad X301 Last month I blogged about the Thinkpad X301 I got...
  2. I Just Bought a new Thinkpad and the Lenovo Web Site Sucks I’ve just bought a Thinkpad T61 at auction for $AU796....
  3. Thinkpad T420 I’ve owned a Thinkpad T61 since February 2010 [1]. In...

Dirk Eddelbuettel: RcppEigen

8 February, 2018 - 09:28

A new minor release of RcppEigen hit CRAN earlier today, and just went to Debian as well. It brings Eigen 3.3.4 to R.

Yixuan once again did the leg-work of bringing the most recent Eigen release in along with the small set of patches we have carried forward for a few years.

One additional and recent change was the accommodation of a recent CRAN Policy change to not allow gcc or clang to mess with diagnostic messages. A word of caution: this may make your compilation of packages using RcppEigen very noisy, so consider adding -Wno-ignored-attributes to the compiler flags in your ~/.R/Makevars.
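For example, the ~/.R/Makevars entry could look like the following sketch (whether you need CXXFLAGS or CXX11FLAGS depends on which C++ standard your packages compile under, and you may already have other flags set there):

```make
# Append to, rather than replace, any existing C++ flags
CXXFLAGS += -Wno-ignored-attributes
```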

The complete NEWS file entry follows.

Changes in RcppEigen version (2018-02-05)
  • Updated to version 3.3.4 of Eigen (Yixuan in #49)

  • Also carried over on new upstream (Yixuan, addressing #48)

  • As before, condition long long use on C++11.

  • Pragmas for g++ & clang to suppress diagnostics messages are disabled per CRAN Policy; use -Wno-ignored-attributes to quieten.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Iustin Pop: Releasing Corydalis

8 February, 2018 - 08:59

It's been a long time baking…

Back when I started doing photography more seriously, I settled for Lightroom as a RAW processor and image catalog solution. It works, but it's not perfect.

The main issue that I had over time with Lightroom is that while it does a good job on the technical (hard) aspects of RAW processing, editing, etc., on the catalog aspect… it leaves things to be desired. Thus, over time I started using more and more of Jeffrey Friedl's plugins for Lightroom, which makes it better, but it is still hard to get a grasp of your entire collection, beyond just the RAW sources. And even for the RAW files, Lightroom's UI is sluggish enough that I try to avoid it as much as possible, outside of image development.

On top of that, ten years ago most of my image viewing (and my family's) was on the desktop, using things such as geeqie reading the pictures from the NAS. In the meantime, things have changed, and now a lot of image viewing is done either on desktop or mobile clients, but without guaranteed file-system access to the actual images. Thus, I wanted something to be able to view all my pictures, in a somewhat seamless mode, based on various global searches - e.g. "show all my pictures that contain person $x and taken in location $foo". Also, I wanted something that could view all my pictures, RAW or JPEGs, without the user having to care about this aspect (some, but not all, viewing-oriented programs do this).

So, for the last ~5 years or so, I've been slowly working on a very basic program to do what I wanted. First git commit is on August 19th, 2013, titled "Initial commit - After trying to bump deps to Yesod 1.2.", so that was not the start. In an old backup, I find files from April 27th that year, so that's probably around when I started writing it.

At one point I even settled on a name, with commit 3c00458, which was the last major hurdle for releasing this, I thought. Ha! Three years later, I was finally able to bring it to a shape where there is probably one other person somewhere who could actually use it and have it be of any help. It even has documentation now!

So, without further ado, … wait, I already said everything! Corydalis v0.2.0 (a rather arbitrarily chosen version number) is up on GitHub.

Looking forward to bug reports/suggestions/feedback, it's been a really long time since I last open sourced anything not entirely trivial.

P.S.: Yes, I know, there are no (meaningful) unit tests; yes, I feel ashamed, this being 2018.

P.P.S.: Yes, of course it's Haskell! with a sprinkle of JavaScript, sadly. And I'm not familiar with best JavaScript practices, so the way I bundled things with the application is probably not good.

P.P.P.S.: If you actually try this, don't try it against your entire picture tree—for large trees (tens of thousands of pictures), it will take many hours/maybe days to scan/extract all; it is designed more for incremental updates, so the initial scan is what it is.

P⁴.S.: It's not slow because of Haskell!!

Ben Hutchings: Debian LTS work, January 2018

8 February, 2018 - 02:41

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from December. I worked all these hours.

I put together and tested a more-or-less complete backport of KPTI/KAISER to Linux 3.2, based on work by Hugh Dickins and several others. This mitigates the Meltdown vulnerability on amd64 (only). I prepared and uploaded an update for wheezy with this and several other security fixes, and issued DLA-1232-1. I also released another update on the Linux 3.2 longterm stable branch (3.2.98), and started work on the next (3.2.99).


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.