Planet Debian


Thomas Lange: build service now supports creation of VM disk images

13 March, 2018 - 23:27

A few days ago, I've added a new feature to the build service.

In addition to creating an installation image, the build service can now create bootable disk images. These disk images can be booted in a VM like KVM, VirtualBox or VMware, or in an OpenStack environment.

You can define the disk image size, select a language, set a user and a root password, select a Debian distribution and enable backports with just one click. It's possible to add your public key for access to the root account without a password; this can also be done by just specifying your GitHub account. Several disk formats are supported, like raw (compressed with xz or zstd), qcow2, vdi, vhdx and vmdk. And you can add your own list of packages you want to have inside the OS. After a few minutes the disk image is created and you will get a download link, including a log of the creation process and a link to the FAI configuration that was used to create your customized image.
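As an illustration, a qcow2 image downloaded from the service could be booted locally with KVM roughly like this (the image filename is hypothetical):

# boot the downloaded image in KVM with 2 GB of RAM (filename is illustrative)
qemu-system-x86_64 -enable-kvm -m 2048 -drive file=faibuild.qcow2,format=qcow2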

The new service is available at

If you have any comments, feature requests or feedback, do not hesitate to contact me.

Petter Reinholdtsen: First rough draft Norwegian and Spanish edition of the book Made with Creative Commons

13 March, 2018 - 19:00

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and histories from those around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the 2017 book Made with Creative Commons was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate the book into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books to Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first round of proofreading is almost done, and only the second and third rounds remain. We will also need to translate the 14 figures and create a book cover. Once that is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded from there as ODT and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for the translators. The translated PO file is then combined with the aforementioned DocBook file to create a translated DocBook file, which is finally given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one in English and one for each of the translations.
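A rough sketch of such a pipeline in shell, under the assumption that po4a handles the Gettext round-trip (the fixup script names are hypothetical stand-ins for the project-specific ones):

# hypothetical sketch: ODT -> Markdown -> DocBook -> POT/PO -> translated DocBook -> PDF
pandoc book.odt -f odt -t markdown -o book.md
./fixup-markdown.sh book.md                          # project-specific tweaks (hypothetical)
pandoc book.md -f markdown -t docbook -o book.xml
./fixup-docbook.sh book.xml                          # more project-specific tweaks (hypothetical)
po4a-gettextize -f docbook -m book.xml -p book.pot   # POT file for the translators
po4a-translate -f docbook -m book.xml -p nb.po -k 0 -l book-nb.xml
dblatex book-nb.xml -o book-nb.pdf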

The translation is conducted using the Weblate web-based translation system. Please have a look there and get in touch if you would like to help out with the proofreading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Antoine Beaupré: The cost of hosting in the cloud

13 March, 2018 - 01:33

This is one part of my coverage of KubeCon Austin 2017. Other articles include:

Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk's talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we'll use Dyachuk's talk to try to answer the "million dollar question": "buy or rent?"

Computing the cost of compute

So how much does hosting cost these days? To answer that apparently trivial question, Dyachuk presented a detailed analysis made from a spreadsheet that compares the costs of "colocation" (running your own hardware in somebody else's data center) versus those of hosting in the cloud. For the latter, Dyachuk chose Amazon Web Services (AWS) as a standard, reminding the audience that "63% of Kubernetes deployments actually run off AWS". Dyachuk focused only on the cloud and colocation services, discarding the option of building your own data center as too complex and expensive. The question is whether it still makes sense to operate your own servers when, as Dyachuk explained, "CPU and memory have become a utility", a transition that Kubernetes is also helping push forward.

Another assumption of his talk is that server uptime isn't that critical anymore; there used to be a time when system administrators would proudly brandish multi-year uptime counters as proof of server stability. As an example, Dyachuk performed a quick survey in the room and the record was an uptime of 5 years. In response, Dyachuk asked: "how many security patches were missed because of that uptime?" The answer was, of course, "all of them". Kubernetes helps with security upgrades, in that it provides a self-healing mechanism to automatically re-provision failed services or rotate nodes when rebooting. This changes hardware designs; instead of building custom, application-specific machines, system administrators now deploy large, general-purpose servers that use virtualization technologies to host arbitrary applications in high-density clusters.

When presenting his calculations, Dyachuk explained that "pricing is complicated" and, indeed, his spreadsheet includes hundreds of parameters. However, after reviewing his numbers, I can say that the list is impressively exhaustive, covering server memory, disk, and bandwidth, but also backups, storage, staffing, and networking infrastructure.

For servers, he picked a Supermicro chassis with 224 cores and 512GB of memory from the first result of a Google search. Once amortized over an aggressive three-year rotation plan, the $25,000 machine ends up costing about $8,300 yearly. To compare with Amazon, he picked the m4.10xlarge instance as a commonly used standard, which currently offers 40 cores, 160GB of RAM, and 4Gbps of dedicated storage bandwidth. At the time he did his estimates, the going rate for such a server was $2 per hour or $17,000 per year. So, at first, the physical server looks like a much better deal: half the price and close to quadruple the capacity. But, of course, we also need to factor in networking, power usage, space rental, and staff costs. And this is where things get complicated.
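A quick back-of-the-envelope check of those figures, as shell arithmetic using the numbers quoted above:

# colo: $25,000 server amortized over 3 years
echo $(( 25000 / 3 ))      # => 8333, i.e. roughly $8,300 per year
# AWS: m4.10xlarge at $2/hour, running on-demand all year
echo $(( 2 * 24 * 365 ))   # => 17520, i.e. roughly $17,000 per year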

First, colocation rates will vary a lot depending on location. While bandwidth costs are often much lower in large urban centers because of proximity to fast network links, real estate and power prices are often much higher. Bandwidth costs are now the main driver in hosting costs.

For the purpose of his calculation, Dyachuk picked a real-estate figure of $500 per standard cabinet (42U). His calculations yielded a monthly power cost of $4,200 for a full rack, at $0.50/kWh. Those rates seem rather high for my local data center, where the rate is closer to $350 for the cabinet and $0.12/kWh for power. Dyachuk took into account that power is usually not "metered billing", where you pay for the actual power usage, but "stepped billing", where you pay for a circuit with a (say) 25-amp breaker regardless of how much power you use on that circuit. This accounts for some of the discrepancy, but the estimate still seems too high to be accurate according to my calculations.

Then there's networking: all those machines need to connect to each other and to an uplink. This means finding a bandwidth provider, which Dyachuk pinned at a reasonable average cost of $1/Mbps. But the most expensive part is not the bandwidth; the cost of managing network infrastructure includes not only installing switches and connecting them, but also tracing misplaced wires, dealing with denial-of-service attacks, and so on. Cabling, a seemingly innocuous task, is actually the majority of hardware expenses in data centers, as previously reported. From networking, Dyachuk went on to detail the remaining cost estimates, including storage and backups, where the physical world is again cheaper than the cloud. All this is, of course, assuming that crafty system administrators can figure out how to glue all the hardware together into a meaningful package.

Which brings us to the sensitive question of staff costs; Dyachuk described those as "substantial". These costs are for the system and network administrators who are needed to buy, order, test, configure, and deploy everything. Evaluating those costs is subjective: for example, salaries will vary between countries. He fixed the yearly cost per person at $250,000 (counting overhead on top of an actual $150,000 salary) and accounted for three people on staff. Those costs may also vary with the colocation service; some will include remote hands and networking, but he assumed in his calculations that the costs would end up being roughly the same because providers will charge extra for those services.

Dyachuk also observed that staff costs are the majority of the expenses in a colocation environment: "hardware is cheaper, but requires a lot more people". In the cloud, it's the opposite; most of the costs consist of computation, storage, and bandwidth. Staff also introduce a human factor of instability in the equation: in a small team, there can be a lot of variability in ability levels. This means there is more uncertainty in colocation cost estimates.

In our discussions after the conference, Dyachuk pointed out a social aspect to consider: cloud providers are operating a virtual oligopoly. Dyachuk worries about the impact of Amazon's growing power over different markets:

A lot of businesses are in direct competition with Amazon. A fear of losing commercial secrets and being spied upon has not been confirmed by any incidents yet. But Walmart, for example, moved out of AWS and requested that its suppliers do the same.

Demand management

Once the extra costs described are factored in, colocation still would appear to be the cheaper option. But that doesn't take into account the question of capacity: a key feature of cloud providers is that they pool together large clusters of machines, which allow individual tenants to scale up their services quickly in response to demand spikes. Self-hosted servers need extra capacity to cover for future demand. That means paying for hardware that stays idle waiting for usage spikes, while cloud providers are free to re-provision those resources elsewhere.

Satisfying demand in the cloud is easy: allocate new instances automatically and pay the bill at the end of the month. In a colocation, provisioning is much slower and hardware must be systematically over-provisioned. Those extra resources might be used for preemptible batch jobs in certain cases, but workloads are often "transaction-oriented" or "realtime" which require extra resources to deal with spikes. So the "spike to average" ratio is an important metric to evaluate when making the decision between the cloud and colocation.

Cost reductions are possible by improving analytics to reduce over-provisioning. Kubernetes makes it easier to estimate demand; before containerized applications, estimates were made per application, each with its own margin of error. By pooling all applications together in a cluster, the problem is generalized and individual workloads balance out in aggregate, even if they fluctuate individually. Therefore Dyachuk recommends using the cloud when future growth cannot be forecast, to avoid the risk of under-provisioning. He also recommended "The Art of Capacity Planning" as a good forecasting resource; even though the book is old, the basic math hasn't changed, so it is still useful.

The golden ratio

Colocation prices finally overshoot cloud prices after adding extra capacity and staff costs. In closing, Dyachuk identified the crossover point where colocation becomes cheaper at around $100,000 per month, or 150 Amazon m4.2xlarge instances, which can be seen in the graph below. Note that he picked a different instance type for the actual calculations: instead of the largest instance (m4.10xlarge), he chose the more commonly used m4.2xlarge instance. Because Amazon pricing scales linearly, the math works out to about the same once reserved instances, storage, load balancing, and other costs are taken into account.

He also added that the figure will change based on the workload; Amazon is more attractive with more CPU and less I/O. Inversely, I/O-heavy deployments can be a problem on Amazon; disk and network bandwidth are much more expensive in the cloud. For example, bandwidth can sometimes be more than triple what you can easily find in a data center.

Your mileage may vary; those numbers shouldn't be taken as an absolute. They are a baseline that needs to be tweaked according to your situation, workload and requirements. For some, Amazon will be cheaper, for others, colocation is still the best option.

He also emphasized that the graph stops at 500 instances; beyond that lies another "wall" of investment due to networking constraints. At around the equivalent of 2000-3000 Amazon instances, networking becomes a significant bottleneck and demands larger investments in networking equipment to upgrade internal bandwidth, which may make Amazon affordable again. It might also be that application design should shift to a multi-cluster setup, but that implies increases in staff costs.

Finally, we should note that some organizations simply cannot host in the cloud. In our discussions, Dyachuk specifically expressed concerns about Canada's government services moving to the cloud, for example: what is the impact on state sovereignty when confidential data about citizens ends up in the hands of private contractors? So far, Canada's approach has been to only move "public data" to the cloud, but Dyachuk pointed out this already includes sensitive departments like correctional services.

In Dyachuk's model, the cloud offers significant cost reduction over traditional hosting in small clusters, at least until a deployment reaches a certain size. However, different workloads significantly change that model and can make colocation attractive again: I/O and bandwidth intensive services with well-planned growth rates are clear colocation candidates. His model is just a start; any project manager would be wise to make their own calculations to confirm the cloud really delivers the cost savings it promises. Furthermore, while Dyachuk wisely avoided political discussions surrounding the impact of hosting in the cloud, data ownership and sovereignty remain important considerations that shouldn't be overlooked.

A YouTube video and the slides [PDF] from Dyachuk's talk are available online.

This article first appeared in the Linux Weekly News, under the title "The true costs of hosting in the cloud".

Junichi Uekawa: I've been writing js more for chrome extensions.

12 March, 2018 - 14:54
I've been writing more JS for Chrome extensions. I write Python using pandas for plotting graphs now. I wonder if there's a good graphing solution for JS. I don't remember how I crafted R graphs anymore.

Ben Hutchings: Debian LTS work, February 2018

12 March, 2018 - 07:51

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 13 hours. I will carry over 2 hours to March.

I made another release on the Linux 3.2 longterm stable branch (3.2.99) and started the review cycle for the next update (3.2.100). I rebased the Debian package onto 3.2.99 but didn't upload an update to Debian this month.

I also discussed the possibilities for cooperation between Debian LTS and CIP, briefly reviewed leptonlib for additional security issues, and updated the wiki page about the status of Spectre and Meltdown in Debian.

Elena Gjevukaj: CoderGals Hackathon

11 March, 2018 - 16:04

The CoderGals Hackathon was organized for the first time in my country. The event took place in the beautiful city of Prizren. The hackathon, which ran for 24 to 48 hours, started as an idea of two girls majoring in Computer Science, Qendresa and Albiona Hoti.

Thanks to them, we had the chance to work on exciting projects as well as be mentored by key tech people including: Mergim Cahani, Daniel Pocock, Taulant Mehmeti, Mergim Krasniqi, Kolos Pukaj, Bujar Dervishaj, Arta Shehu Zaimi and Edon Bajrami.

We brainstormed for about 3-4 hours to decide on a project. We discussed many ideas, ranging from the Doppler effect to GUI interfaces for phone calls. Finally, we ended up building a project that links your PC with your phone, so that you don't have to use both devices when you need to add a contact, make a call or even send text messages. We called it the Phone Client project.

You can check our work online:

Phone Client

It was a challenge for us because it was our first time working on Debian.

Projects that the other girls worked on:

Vasudev Kamath: Biboumi - A XMPP - IRC Gateway

11 March, 2018 - 12:19

IRC is a communication mode (technically, a communication protocol) used by many Free Software projects for communication and collaboration. It is serving these projects well even 30 years after its inception. Though I'm pretty much okay with IRC, I had the problem of not being able to use it from my mobile phone. The main problem is the inconsistent network connection, whereas IRC needs to be always connected. This is where I came across Biboumi.

Biboumi by itself does not have anything to do with mobile phones; it's just a gateway that allows you to connect to an IRC channel as if it were an XMPP MUC (multi-user chat) room, from any XMPP client. The benefit of this is that it lets you enjoy some XMPP features in your IRC channel (not all, but those that can be mapped).

I run Biboumi with my ejabberd instance, and thereby I can now connect to some of the Debian IRC channels directly from my phone, using the Conversations XMPP client for Android.

Biboumi is packaged for Debian. Though I'm a co-maintainer of the package, most of the hard work of keeping the package in shape is done by Jonas Smedegaard. It is also available in stretch-backports (though slightly outdated, as it's not packaged by us for backports). Once you install the package, copy the example configuration file from /usr/share/doc/biboumi/examples/example.conf to /etc/biboumi/biboumi.cfg and modify the values as needed. Below is my sample file, with the password redacted.
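A minimal sketch of what such a biboumi.cfg can look like (the key names are documented in man biboumi; the values here are illustrative assumptions chosen to match the ejabberd snippet below):

hostname=biboumi.localhost
password=xxx
xmpp_server_ip=127.0.0.1
port=8888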


An explanation of all the keys and values in the configuration file is available in the man page (man biboumi).

Biboumi is configured as an external component of the XMPP server. In my case, I'm using ejabberd to host my XMPP service. Below is the configuration needed to allow biboumi to connect to ejabberd.

  port: 8888
  ip: ""
  module: ejabberd_service
  access: all
  password: xxx

The password field in the biboumi configuration should match the password value in your XMPP server configuration.

After doing the above configuration, reload ejabberd (or your XMPP server) and start biboumi. The biboumi package provides a systemd service file, so you might need to enable it first. That's it: you now have an XMPP to IRC gateway ready.

You might notice that I'm using a local host name for the hostname key, as well as in the ip field of the ejabberd configuration. This is because TLS support was added to the biboumi Debian package only after the 7.2 release, as botan 2.x was not available in Debian until that point. Hence, using a proper domain name and making biboumi listen publicly is not safe, at least prior to Debian package version 7.2-2. Making the biboumi service public also means you will need to handle spam bots trying to connect through your service to IRC, which might get your VPS banned from IRC.

Connection Semantics

Once biboumi is configured and running, you can use the XMPP client of your choice (Gajim, Conversations, etc.) to connect to IRC. To connect to OFTC from your XMPP client, you need to use the following address in the Group Chat section:

Replace the part after the @ with what you have configured in the hostname field of the biboumi configuration. To join a specific channel on an IRC server, you need to join a group conversation with the following format:

If your nickname is registered and you want to identify yourself to the IRC server, you can do that by joining a group conversation with NickServ, using the following address:
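For illustration, assuming the gateway hostname biboumi.localhost from the configuration sketch above, the addresses would look roughly like this (the channel syntax follows the biboumi documentation; the NickServ form is my assumption and may vary between biboumi versions):

#debian-in%irc.oftc.net@biboumi.localhost (joins the #debian-in channel on OFTC)
nickserv%irc.oftc.net@biboumi.localhost (a private conversation with NickServ)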

Once connected, you can send NickServ commands directly in this virtual channel, like identify password nick. It is also possible to configure XMPP clients like Gajim to send ad-hoc commands on connection to a particular IRC server, to identify yourself with the IRC server, but I did not get this part working in Gajim.

If you are running your own XMPP server, then biboumi gives you the best way to connect to IRC from your mobile phone. And with applications like Conversations, running an XMPP application won't be hard on your phone's battery.


Jeremy Bicha: webkitgtk in Debian Stretch: Report Card

11 March, 2018 - 00:25

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).

Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.

Release History

Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).

Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).

Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)

Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).

Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date).

Release Schedule

webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.

Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).

There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).

In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.

Report Card

Of the five Debian 9 releases, we have been up to date in two or three of them (depending on how you count the 9.2 release).

Using a letter-grade scale, I think I'd give Debian a B or B- so far. But this is significantly better than Debian 8, which offered no webkitgtk updates at all except through backports. In my grading, Debian could get an A- if we consistently updated webkitgtk in these point releases.

To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.

If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.


I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.

Andrew Shadura: Say no to Slack, say yes to Matrix

10 March, 2018 - 20:50

Of all proprietary chat systems, Slack has always seemed one of the worst to me. Not only is it a closed proprietary system with no sane clients, open source or not; it is not just one walled garden, as Facebook or WhatsApp are, but a constellation of walled gardens, isolated from each other. To be able to participate in multiple Slack communities, the user has to create multiple accounts and keep multiple chat windows open all the time. Federation? Self-hosting? Owning your data? None of those are a thing in Slack. Until recently, it was at least possible to keep the logs of all conversations locally by connecting to the chat using IRC or XMPP, if the gateway was enabled.

Now, with Slack shutting down the gateways, not only can you not keep the logs on your computer, you also cannot use a client of your choice to connect to Slack. They also began changing the bots API, which was likely the reason the Matrix-to-Slack gateway didn't work properly at times. That issue has since resolved itself, but Slack doesn't give any guarantees the gateway will continue working, and obviously they aren't really interested in keeping it working.

So, following Gunnar Wolf’s advice (consider also reading this article by Megan Squire), I recommend you stop using Slack. If you prefer an isolated chat system with features Slack provides, and you can self-host, consider MatterMost or Rocket.Chat. Both seem to provide more or less the same features as Slack, but don’t lock you in, and you can choose to either use their paid cloud offering, or run it on your own server. We’ve been using MatterMost at Collabora since July last year, and while it’s not perfect, it’s not a bad piece of software.

If you would prefer a system you can federate, you may be interested in having a look at Matrix. Matrix is an open decentralised protocol and ecosystem, which architecturally looks similar to XMPP, but uses different technologies and offers a richer and more modern baseline, including VoIP, end-to-end encryption, decentralised history and content storage, easy bot integration and more. The web client for Matrix, Riot, is comparable to Slack, but unlike with Slack, there are more clients you can use, including Weechat, libpurple, a bunch of Qt-based clients and, importantly, Riot for Android and iOS.

You don’t have to self-host a Matrix homeserver, since runs one you can use, but it’s quite easy to run one if you decide to, and you don’t even have to migrate your existing chats — you just join them from accounts on your own homeserver, and that’s it!

To help you with the decision to move from Slack to Matrix, you should know that since Matrix has a Slack gateway, you can gradually migrate your colleagues to the new infrastructure, by joining the Slack and Matrix chats together, and dropping the gateway only when everyone moves from Slack.

Repeating Gunnar, say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Michael Stapelberg: dput usability changes

10 March, 2018 - 16:00

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.

Gunnar Wolf: On the demise of Slack's IRC / XMPP gateways

10 March, 2018 - 08:23

I have grudgingly joined three Slack workspaces, due to me being part of projects that use it as a communications center for their participants. Why grudgingly? Because there is very little that it adds to the well-established communications standards that we have had for many years, even decades.

On this topic, I must refer you to the talk and article presented by Megan Squire, one of the clear highlights of my participation last year at the 13th International Conference on Open Source Systems (OSS2017): «Considering the Use of Walled Gardens for FLOSS Project Communication». Please do have a good read of this article.

Thing is, after several years of playing open with probably the best integration gateway I have seen, Slack is joining the Embrace, Extend and Extinguish-minded companies. Of course, I strongly doubt they will manage to extinguish XMPP or IRC, but they want to strengthen the walls around their walled garden...

So, once they have established their presence among companies and developer groups alike, Slack is shutting down their gateways to XMPP and IRC, arguing it's impossible to achieve feature-parity via the gateway.

Of course, I guess all of us recognize and understand there has long not been feature parity. But that's a feature, not a bug! I expressly dislike the abuse of emojis and images inside what's supposed to be a work-enabling medium. Of course, connecting to Slack via IRC, I just don't see the content not meant for me.

The real motivation is they want to control the full user experience.

Well, they have lost me as a user. The day my IRC client fails to connect to Slack, I will delete my user account. They already have a record of all of my interactions using their system. Maybe I won't be able to move any of the groups I am part of away from Slack, but many of us can help create a flood.

Say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Adnan Hodzic: Hello world!

10 March, 2018 - 05:08

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Sven Hoexter: half-assed Oracle JRE/JDK 10 support for java-package

10 March, 2018 - 01:22

I spent an hour adding very basic support for the upcoming Java 10 to my fork of java-package. It still has some rough edges, and the list of binary executables managed via the alternatives system requires a major cleanup. I think once Java 8 is EOL in September, that is a good point to consolidate and strip out everything except Java 11 support. If someone requires an older release, they can still go back to an earlier version, but by then we won't see any new releases of Java 8, 9 or 10, not to speak of even older stuff.

[sven@digital lib (master)]$ java -version
java version "10" 2018-03-20
Java(TM) SE Runtime Environment 18.3 (build 10+46)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10+46, mixed mode)

Olivier Berger: Adding a reminder notification in XFCE systray that I should launch a backup script

9 March, 2018 - 21:10

I’ve started using borg and borgmatic for backups of my machines. I won’t be using a fully automated backup via a crontab for a start. Instead, I’ve added a recurrent reminder system that will appear on my XFCE desktop to tell me it may be time to do backups.

I’m using yad (a zenity on steroids) to add notifications in the desktop via an anacron.

The notification icon, when clicked, will start a shell script that performs the backups, starting borgmatic.

Here are some bits of my setup:

crontab -l excerpt:

@hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool
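# anacron flags: -s serializes job execution, -t points at a custom anacrontab,
# -S uses a custom spool directory (so the jobs run without root privileges)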

~/.anacron/etc/anacrontab excerpt:

7 15      borgmatic-home  /home/olivier/bin/

The idea of this anacrontab is to remind me weekly that I should do a backup, 15 minutes after I’ve booted the machine. Another reminding mechanism may be more handy… time will tell.

Then, the script:

notify-send 'Borg backups at home!' "It's time to do a backup." --icon=document-save

# borrowed from

# create a FIFO file, used to manage the I/O redirection from shell
PIPE=$(mktemp -u --tmpdir ${0##*/}.XXXXXXXX)
mkfifo $PIPE

# attach a file descriptor to the file
exec 3<> $PIPE

# add handler to manage process shutdown
function on_exit() {
 echo "quit" >&3
 rm -f $PIPE
}
trap on_exit EXIT

# add handler for tray icon left click
function on_click() {
 # echo "pid: $YAD_PID"
 echo "icon:document-save" >/proc/$YAD_PID/fd/3
 echo "visible:blink" >/proc/$YAD_PID/fd/3
 xterm -e bash -c "/home/olivier/bin/ --verbosity 1 -c /home/olivier/borgmatic/home-config.yaml; read -p 'Press any key ...'"
 echo "quit" >/proc/$YAD_PID/fd/3
 # kill -INT $YAD_PID
}
export -f on_click

# create the notification icon
yad --notification \
 --listen \
 --image="appointment-soon" \
 --text="Click icon to start borgmatic backup at home" \
 --command="bash -c on_click $YAD_PID" <&3

The script will start yad so that it displays an icon in the systray. When the icon is clicked, it will start borgmatic, after having changed the icon. Borgmatic will be started inside an xterm so as to get passphrase input, and display messages. Once borgmatic is done backing up, yad will be terminated.

There may be a more elegant way to pass commands to yad listening on file descriptor 3/the pipe, but I couldn't figure one out, hence the /proc hack. This works on Linux, but I'm not sure about other Unices.

Hope this helps.

Christoph Berg: Cool Unix Features: paste

9 March, 2018 - 16:06

paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
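A minimal illustration (by default, paste joins corresponding lines with a tab):

$ paste <(seq 3) <(seq 11 13)
1	11
2	12
3	13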

One application for this came up today, where some tool was called for several files at once and would spit out one line per file, but unfortunately without including the filename.

$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)

[1] See "J" in The ABCs of Unix

[PS: I meant to blog this in 2011, but apparently never committed the file...]

Christoph Berg: Stepping down as DAM

9 March, 2018 - 15:58

After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.

Thanks for the fish!

Iustin Pop: Corydalis 0.3.0 release

9 March, 2018 - 15:45

Short notice: I've just released 0.3.0, with a large number of new features and improvements; see the changelog for details.

Without aiming for it, this release follows almost exactly a month after v0.2, so maybe a monthly release cycle, while I still have lots of things to add (and some time to actually do it), would be an interesting goal.

One potentially interesting thing: since v0.2, I've added a demo site at using a few photos from my own collection, so if you're curious what this actually is, check that out.

Steve Kemp: A change of direction ..

9 March, 2018 - 15:00

In my previous post I talked about how our child-care works here in wintery Finland, and suggested there might be a change in the near future.

So here is the predictable update: I've resigned from my job and I'm going to be taking over childcare/daycare. Ideally this will last indefinitely, but it is definitely going to continue until November. (Which is the earliest any child could be moved into public day-care if there are problems.)

I've loved my job, twice, but even though it makes me happy (in a way that several other positions didn't) there is no comparison. Child-care makes me happier-still. Sure there are days when your child just wants to scream, refuse to eat, and nothing works. But on average everything is awesome.

It's a hard decision, a "brave" decision too apparently (which I read negatively!), but also an easy one to make.

It'll be hard. I'll have no free time from 7AM-5PM, except during nap-time (11AM-1PM, give or take). But it will be worth it.

And who knows, maybe I'll even get to rant at people who ask "Where's his mother?" I live for those moments. Truly.

Joey Hess: prove you are not an Evil corporate person

9 March, 2018 - 12:24

In which Google be Google and I drop a hot AGPL tip.

Google Is Quietly Providing AI Technology for Drone Strike Targeting Project
Google Is Helping the Pentagon Build AI for Drones

to automate the identification and classification of images taken by drones — cars, buildings, people — providing analysts with increased ability to make informed decisions on the battlefield

These news reports don't mention reCaptcha explicitly, but it's been asking about a lot of cars lately. Whatever the source of the data that Google is using for this, it's disgusting that they're mining it from us without our knowledge or consent.

Google claims that "The technology flags images for human review, and is for non-offensive uses only". So, if a drone operator has a neural network that we all were tricked & coerced into training to identify cars and people helping to highlight them on their screen and center the crosshairs just right, and the neural network is not pressing the kill switch, is it being used for "non-offensive purposes only"?

Google is known to be deathly allergic to the AGPL license. Not only on servers; they don't even allow employees to use AGPL software on workstations. If you write free software, and you'd prefer that Google not use it, a good way to ensure that is to license it under the AGPL.

I normally try to respect the privacy of users of my software, and of personal conversations. But at this point, I feel that Google's behavior has mostly obviated those moral obligations. So...

Now seems like a good time to mention that I have been contacted by multiple people at Google about several of my AGPL licensed projects (git-annex and either keysafe or debug-me, I can't remember which), trying to get me to switch them to the GPL, and had long conversations with them about it.

Google has some legal advice that the AGPL source provision triggers much more often than it's commonly understood to. I encouraged them to make that legal reasoning public, so the community could address/debunk it, but I don't think they have. I won't go into details about it here, other than it seemed pretty bonkers.

Mixing in some AGPL code with an otherwise GPL codebase also seems sufficient to trigger Google's allergy. In the case of git-annex, it's possible to build all releases (until next month's) with a flag that prevents linking with any AGPL code, which should mean the resulting binary is GPL licensed, but Google still didn't feel able to use it, since the git-annex source tree includes AGPL files.

I don't know if Google's allergy to the AGPL extends to software used for drone murder applications, but in any case I look forward to preventing Google from using more of my software in the future.

(Illustration by scatter//gather)

Russ Allbery: My friend Stirge

9 March, 2018 - 09:52

Eric Sturgeon, one of my oldest and dearest friends, died this week of complications from what I'm fairly certain was non-alcoholic fatty liver disease.

It was not entirely unexpected. He'd been getting progressively worse over the past six months. But at the same time there's no way to expect this sort of hole in my life.

I've known Stirge for twenty-five years, more than half of my life. We were both in college when we first met on Usenet in 1993 in the rec.arts.comics.* hierarchy, where Stirge was the one with the insane pull list and the canonical knowledge of the Marvel Universe. We have been friends ever since: part of on-line fiction groups, IRC channels, and free-form role-playing groups. He's been my friend through school and graduation, through every step of my career, through four generations of console systems, through two moves for me and maybe a dozen for him, through a difficult job change... through my entire adult life.

For more than fifteen years, he's been spending a day or a week or two, several times a year, sitting on my couch and playing video games. Usually he played and I navigated, researching FAQs and walkthroughs. Twitch was immediately obvious to me the moment I discovered it existed; it's the experience I'd had with Stirge for years before that. I don't know what video games are without his thoughts on them.

Stirge rarely was able to put his ideas into stories he could share with other people. He admired other people's art deeply, but wasn't an artist himself. But he loved fictional worlds, loved their depth and complexity and lore, and was deeply and passionately creative. He knew the stories he played and read and watched, and he knew the characters he played, particularly in World of Warcraft and Star Wars: The Old Republic. His characters had depth and emotions, histories, independent viewpoints, and stories that I got to hear. Stirge wrote stories the way that I do: in our heads, shared with a small number of people if anyone, not crafted for external consumption, not polished, not always coherent, but deeply important to our thoughts and our emotions and our lives. He's one of the very few people in the world I was able to share that with, who understood what that was like.

He was the friend who I could not see for six months, a year, and then pick up a conversation with as if we'd seen each other yesterday.

After my dad had a heart attack and emergency surgery to embed a pacemaker while we were on vacation in Oregon, I was worrying about how we would manage to get him back home. Stirge immediately volunteered to drive down from Seattle to drive us. He had a crappy job with no vacation, and if he'd done that he almost certainly would have gotten fired, and I knew with absolute certainty that he would have done it anyway.

I didn't take him up on the offer (probably to his vast relief). When I told him years later how much it meant to me, he didn't feel like it should have counted, since he didn't do anything. But he did. In one of the worst moments of my life, he said exactly the right thing to make me feel like I wasn't alone, that I wasn't bearing the burden of figuring everything out by myself, that I could call on help if I needed it. To this day I start crying every time I think about it. It's one of the best things that anyone has ever done for me.

Stirge confided in me, the last time he visited me, that he didn't think he was the sort of person anyone thought about when he wasn't around. That people might enjoy him well enough when he was there, but that he'd quickly fade from memory, with perhaps a vague wonder about what happened to that guy. But it wasn't true, not for me, not ever. I tried to convince him of that while he was alive, and I'm so very glad that I did.

The last time I talked to him, he explained the Marvel Cinematic Universe to me in detail, and gave me a rundown of the relative strength of every movie, the ones to watch and the ones that weren't as good, and then did the same thing for the DC movies. He got to see Star Wars before he died. He would have loved Black Panther.

There were so many games we never finished, and so many games we never started.

I will miss you, my friend. More than I think you would ever have believed.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.