Planet Debian

Planet Debian - https://planet.debian.org/

Russ Allbery: Review: The Last Emperox

3 hours 32 min ago

Review: The Last Emperox, by John Scalzi

Series: Interdependency #3
Publisher: Tor
Copyright: April 2020
ISBN: 0-7653-8917-7
Format: Kindle
Pages: 318

This is the conclusion of the Interdependency trilogy, which is a single story told in three books. Start with The Collapsing Empire. You don't want to read this series out of order.

All the pieces and players are in place, the causes and timeline of the collapse of the empire she is accidentally ruling are now clear, and Cardenia Wu-Patrick knows who her friends and enemies are. What she doesn't know is what she can do about it. Her enemies, unfettered by Cardenia's ethics or her desire to save the general population, have the advantage of clearer and more achievable goals. If they survive and, almost as important, remain in power, who cares what happens to everyone else?

As with The Consuming Fire, the politics may feel a bit too on-the-nose for current events, this time for the way that some powerful people are handling (or not handling) the current pandemic. Also as with The Consuming Fire, Scalzi's fast-moving story, likable characters, banter, and occasional humorous descriptions prevent those similarities from feeling heavy or didactic. This is political wish fulfillment to be sure, but it doesn't try to justify itself or linger too much on its improbabilities. It's a good story about entertaining people trying (mostly) to save the world with a combination of science and political maneuvering.

I picked up The Last Emperox as a palate cleanser after reading Gideon the Ninth, and it provided exactly what I was looking for. That gave me an opportunity to think about what Scalzi does in his writing, why his latest novel was one of my first thoughts for a palate cleanser, and why I react to his writing the way that I do.

Scalzi isn't a writer about whom I have strong opinions. In my review of The Collapsing Empire, I compared his writing to the famous description of Asimov as the "default voice" of science fiction, but that's not quite right. He has a distinct and easily-recognizable style, heavy on banter and light-hearted description. But for me his novels are pleasant, reliable entertainment that I forget shortly after reading them. They don't linger or stand out, even though I enjoy them while I'm reading them.

That's my reaction. Others clearly do not have that reaction, fully engage with his books, and remember them vividly. That indicates to me that there's something his writing is doing that leaves substantial room for difference of personal taste and personal reaction to the story, and the sharp contrast between The Last Emperox and Gideon the Ninth helped me put my finger on part of it. I don't feel like Scalzi's books try to tell me how to feel about the story.

There's a moment in The Last Emperox where Cardenia breaks down crying over an incredibly difficult decision that she's made, one that the readers don't find out about until later. In another book, there would be considerably more emotional build-up to that moment, or at least some deep analysis of it later once the decision is revealed. In this book, it's only a handful of paragraphs and then a few pages of processing later, primarily in dialogue, and less focused on the emotions of the characters than on the forward-looking decisions they've made to deal with those emotions. The emotion itself is subtext. Many other authors would try to pull the reader into those moments and make them feel what the characters are feeling. Scalzi just relates them, and leaves the reader free to feel what they choose to feel.

I don't think this is a flaw (or a merit) in Scalzi's writing; it's just a difference, and exactly the difference that made me reach for this book as an emotional break after a book that got its emotions all over the place. Calling Scalzi's writing emotionally relaxing isn't quite right, but it gives me space to choose to be emotionally relaxed if I want to be. I can pick the level of my engagement. If I want to care about these characters and agonize over their decisions, there's enough information here to mull over and use to recreate their emotional states. If I just want to read a story about some interesting people and not care too much about their hopes and dreams, I can choose to do that instead, and the book won't fight me. That approach lets me sidle up on the things that I care about and think about them at my leisure, or leave them be.

This approach makes Scalzi's books less intense than other novels for me. This is where personal preference comes in. I read books in large part to engage emotionally with the characters, and I therefore appreciate books that do a lot of that work for me. Scalzi makes me do the work myself, and the result is not as effective for me, or as memorable.

I think this may be part of what I and others are picking up on when we say that Scalzi's writing is reminiscent of classic SF from decades earlier. It used to be common for SF to not show any emotional vulnerability in the main characters, and to instead focus on the action plot and the heroics and martial virtues. This is not what Scalzi is doing, to be clear; he has a much better grasp of character and dialogue than most classic SF, adds considerable light-hearted humor, and leaves clear clues and hooks for a wide range of human emotions in the story. But one can read Scalzi in that tone if one wants to, since the emotional hooks do not grab hard at the reader and dig in. By comparison, you cannot read Gideon the Ninth without grappling with the emotions of the characters. The book will not let you.

I think this is part of why Scalzi is so consistent for me. If you do not care deeply about Gideon Nav, you will not get along with Gideon the Ninth, and not everyone will. But several main characters in The Last Emperox (Marce and to some extent Cardenia) did little or nothing for me emotionally, and it didn't matter. I liked Kiva and enjoyed watching her strategically smash her way through social conventions, but it was easy to watch her from a distance and not get too engrossed in her life or her thoughts. The plot trundled along satisfyingly, regardless. That lack of emotional involvement precludes, for me, a book becoming the sort of work that I will rave about and try to press into other people's hands, but it also makes it comfortable and gentle and relaxing in a way that a more emotionally fraught book could not be.

This is a long-winded way to say that this was a satisfying conclusion to a space opera trilogy that I enjoyed reading, will recommend mildly to others, and am already forgetting the details of. If you liked the first two books, this is an appropriate and fun conclusion with a few new twists and a satisfying amount of swearing (mostly, although not entirely, from Kiva). There are a few neat (albeit not horribly original) bits of world-building, a nice nod to and subversion of Asimov, a fair bit of political competency wish fulfillment (which I didn't find particularly believable but also didn't mind being unbelievable), and one enjoyable "oh no she didn't" moment. If you like the thing that Scalzi is doing, you will enjoy this book.

Rating: 8 out of 10

Enrico Zini: Music links

7 hours 46 min ago
  • Great Big Sea - End Of The World (music, 2020-05-25): It's the end of the world as we know it, twice as fast.
  • Smash Mouth - All Star (Melon Cover) (music, 2020-05-25)
  • Brett Domino: Bad Romance (Lady Gaga) - Korg Monotron and Kaossilator (music, 2020-05-25)
  • J.S. Bach - Crab Canon on a Möbius Strip (music, 2020-05-25)
  • Homemade Instruments and PVC Pipe - Rare And Strange Instruments (music, archive.org, 2020-05-25): After Homemade Instruments Week on the facebook page, here is an article with some PVC pipe instruments! Percussion on PVC pipes: a classic, long pipes for big bass, easy to tune by changing the length …
  • Ut queant laxis - Wikipedia (history, music, archive.org, 2020-05-25): "Ut queant laxis" or "Hymnus in Ioannem" is a Latin hymn in honor of John the Baptist, written in Horatian Sapphics and traditionally attributed to Paulus Diaconus, the eighth-century Lombard historian. It is famous for its part in the history of musical notation, in particular solmization. The hymn belongs to the tradition of Gregorian chant.

Dirk Eddelbuettel: #3 T^4: Customizing The Shell

9 hours 55 min ago

The third video (following the announcement, the shell colors one, as well as last week’s shell prompt one) is up in the still new T^4 series of video lightning talks with tips, tricks, tools, and toys. Today we cover customizing the shell some more.
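
To give a flavor of the topic, here are a couple of typical customizations one might put in ~/.bashrc; these lines are illustrative examples, not necessarily the ones shown in the video:

# convenience aliases and friendlier history defaults
alias ll='ls -l --color=auto'
alias grep='grep --color=auto'
shopt -s histappend       # append to the history file instead of overwriting it
export HISTSIZE=100000    # keep far more commands in the history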

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: More reliable vlc bittorrent plugin in Debian (version 2.9)

24 May, 2020 - 22:00

I am very happy to report that a more reliable VLC bittorrent plugin was just uploaded to Debian. This fixes a couple of crash bugs in the plugin, hopefully making the VLC experience even better when streaming directly from a bittorrent source. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

It also supports magnet links and local .torrent files.
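
For example (both the file name and the magnet hash below are made-up placeholders):

vlc ~/Downloads/example.torrent
vlc "magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"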

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Holger Levsen: 20200523-i3statusbar

24 May, 2020 - 20:22
new i3 statusbar
🔌 96% 🚀 192.168.x.y 🐁 🤹 5+4+1 Qubes (80 avail/5 sys/16 tpl) 💾 77G 🧠 4495M/15596M 🤖 11% 🌡️ 50°C 2955🍥 dos y cuarto 🗺  

And soon, with Qubes 4.1, it will become this colorful too.

That depends on whether fonts-symbola and/or fonts-noto-color-emoji are available.
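
On Debian, installing both fonts should be enough to get the glyphs rendered (a sketch, assuming a current Debian system):

apt install fonts-symbola fonts-noto-color-emoji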

François Marier: Printing hard-to-print PDFs on Linux

24 May, 2020 - 10:05

I recently found a few PDFs which I was unable to print because those files trigger insufficient printer memory errors.

I found a detailed explanation of what might be causing this which pointed the finger at transparent images, a PDF 1.4 feature which apparently requires a more recent version of PostScript than what my printer supports.

Using Okular's Force rasterization option (accessible via the print dialog) does work, by essentially rendering everything ahead of time and outputting a big image to be sent to the printer. The quality is not very good, however.

Converting a PDF to DjVu

The best solution I found makes use of a different file format: .djvu

Such files are not PDFs, but can still be opened in Evince and Okular, as well as in the dedicated DjVuLibre application.

As an example, I was unable to print page 11 of this paper. Using pdfinfo, I found that it is in PDF 1.5 format and so the transparency effects could be the cause of the out-of-memory printer error.
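
To check this on your own files, pdfinfo (from the poppler-utils package) reports the version directly:

pdfinfo 2002.04049.pdf | grep 'PDF version'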

Here's how I converted it to a high-quality DjVu file I could print without problems using Evince:

pdf2djvu -d 1200 2002.04049.pdf > 2002.04049-1200dpi.djvu
Converting a PDF to PDF 1.3

I also tried the DjVu trick on a different unprintable PDF, but it failed to print, even after lowering the resolution to 600dpi:

pdf2djvu -d 600 dow-faq_v1.1.pdf > dow-faq_v1.1-600dpi.djvu

In this case, I used a different technique and simply converted the PDF to version 1.3 (from version 1.6 according to pdfinfo):

ps2pdf13 -r1200x1200 dow-faq_v1.1.pdf dow-faq_v1.1-1200dpi.pdf

This eliminates the problematic transparency and rasterizes the elements that version 1.3 doesn't support.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2020

23 May, 2020 - 23:10

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 284.5 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 10.0h (out of 14h assigned), thus carrying over 4h to May.
  • Adrian Bunk did nothing (out of 28.75h assigned), thus is carrying over 28.75h for May.
  • Ben Hutchings did 26h (out of 20h assigned and 8.5h from March), thus carrying over 2.5h to May.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 6h (out of 6h assigned).
  • Emilio Pozuelo Monfort did not report back about their work so we assume they did nothing (out of 28.75h assigned plus 17.25h from March), thus is carrying over 46h for May.
  • Markus Koschany did 11.5h (out of 28.75h assigned and 38.75h from March), thus carrying over 56h to May.
  • Mike Gabriel did 1.5h (out of 8h assigned), thus carrying over 6.5h to May.
  • Ola Lundqvist did 13.5h (out of 12h assigned and 8.5h from March), thus carrying over 7h to May.
  • Roberto C. Sánchez did 28.75h (out of 28.75h assigned).
  • Sylvain Beucler did 28.75h (out of 28.75h assigned).
  • Thorsten Alteholz did 28.75h (out of 28.75h assigned).
  • Utkarsh Gupta did 24h (out of 24h assigned).
Evolution of the situation

In April we dispatched more hours than ever, and something else was new too: we had our first (virtual) contributors meeting on IRC! Logs and minutes are available, and we plan to continue doing IRC meetings every other month.
Sadly, one contributor, Hugo Lefeuvre, decided to go inactive in April.
Finally, we would like to remind you that the end of Jessie LTS is coming in less than two months!
In case you missed it (or failed to act on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 4 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.

Thanks to our sponsors



Dirk Eddelbuettel: RcppSimdJson 0.0.5: Updated Upstream

23 May, 2020 - 22:09

A new RcppSimdJson release with updated upstream simdjson code just arrived on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as the use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

This release brings updated upstream code (thanks to Brendan Knapp) plus a new example and minimal tweaks. The full NEWS entry follows.

Changes in version 0.0.5 (2020-05-23)
  • Add parseExample from earlier upstream announcement (Dirk).

  • Synced with upstream (Brendan in #12, closing #11).

  • Updated example parseExample to API changes (Brendan).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve Kemp: Updated my linux-security-modules for the Linux kernel

22 May, 2020 - 12:15

Almost three years ago I wrote my first linux-security-module, inspired by a comment I read on LWN.

I did a little more learning/experimentation and actually produced a somewhat useful LSM, which allows you to restrict command-execution via the use of a user-space helper:

  • Whenever a user tries to run a command, the LSM hook receives the request.
  • Then it executes a userspace binary to decide whether to allow that or not (!)

Because the detection is done in userspace, writing your own custom rules is both safe and easy. No need to touch the kernel any further!
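
As a sketch of how small such a helper can be, here is a hypothetical policy script; the argument convention (user, then command path) and the exit-code protocol are assumptions made for illustration, so consult the module's repository for the real interface:

#!/bin/sh
# Hypothetical policy helper: exit 0 to allow execution, non-zero to deny it.
user="$1"   # assumed: the user attempting to run the command
cmd="$2"    # assumed: the full path of the command being executed
case "$cmd" in
    /usr/bin/nc|/usr/bin/ncat) exit 1 ;;   # deny these binaries
    *) exit 0 ;;                           # allow everything else
esac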

Yesterday I rebased all the modules so that they work against the latest stable kernel 5.4.22 in #7.

The last time I'd touched them they were built against 5.1, which was itself a big jump forwards from the 4.16.7 version I'd initially used.

Finally, I updated the can-exec module to make it gated, which means you can turn it on, but not turn it off without a reboot. That was an obvious omission from the initial implementation (#11).

Anyway, updated code is available here.

I'd kinda like to write more similar things, but I lack inspiration.

Bits from Debian: Debian welcomes the 2020 GSOC interns

22 May, 2020 - 07:30

We are very excited to announce that Debian has selected nine interns to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of the projects, the students, and details of the tasks to be performed.

Project: Android SDK Tools in Debian

  • Student(s): Manas Kashyap, Raman Sarda, and Samyak-jn

Deliverables of the project: Make the entire Android toolchain, Android Target Platform Framework, and SDK tools available in the Debian archives.

Project: Packaging and Quality assurance of COVID-19 relevant applications

  • Student: Nilesh

Deliverables of the project: Quality assurance including bug fixing, continuous integration tests and documentation for all Debian Med applications that are known to be helpful to fight COVID-19

Project: BLAS/LAPACK Ecosystem Enhancement

  • Student: Mo Zhou

Deliverables of the project: Better environment, documentation, policy, and lintian checks for BLAS/LAPACK.

Project: Quality Assurance and Continuous integration for applications in life sciences and medicine

  • Student: Pranav Ballaney

Deliverables of the project: Continuous integration tests for all Debian Med applications, QA review, and bug fixes.

Project: Systemd unit translator

  • Student: K Gopal Krishna

Deliverables of the project: A systemd unit to OpenRC init script translator. Updated OpenRC package into Debian Unstable.

Project: Architecture Cross-Grading Support in Debian

  • Student: Kevin Wu

Deliverables of the project: Evaluate, test, and develop tools to evaluate cross-grade checks for system and user configuration.

Project: Upstream/Downstream cooperation in Ruby

  • Student: utkarsh2102

Deliverables of the project: Create a guide for rubygems.org on good practices for upstream maintainers; develop a tool that can detect problems and, if possible, fix those errors automatically; establish good documentation; and design the tool to be extensible to other languages.

Congratulations and welcome to all the interns!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

Norbert Preining: Plasma 5.19 coming to Debian

20 May, 2020 - 09:33

The KDE Plasma desktop is soon getting an update to 5.19, and beta versions are out for testing.

In this release, we have prioritized making Plasma more consistent, correcting and unifying designs of widgets and desktop elements; worked on giving you more control over your desktop by adding configuration options to the System Settings; and improved usability, making Plasma and its components easier to use and an overall more pleasurable experience.

There are lots of new features mentioned in the release announcement; I like in particular the much more usable settings application as well as the new info center.




I have been providing builds of KDE related packages for quite some time now; see everything posted under the KDE tag. In the last days I have prepared Debian packages for Plasma 5.18.90 on OBS, for now only targeting Debian/sid and the amd64 architecture.

These packages require Qt 5.14, which is only available in the experimental suite, and there is no way to simply update to Qt 5.14 since all Qt related packages need to be recompiled. So as long as Qt 5.14 doesn’t hit unstable, I cannot really run these packages on my main machine, but I tried a clean Debian virtual machine, installing only Plasma 5.18.90 and depending packages, plus some more for a pleasant desktop experience. This worked out quite well; the VM runs Plasma 5.18.90.

I don’t have 3D running on the VM, so I cannot really check all the nice new effects, but I am sure on my main system they would work.

Well, bottom line, as soon as we have Qt 5.14 in Debian/unstable, we are also ready for Plasma 5.19!

Junichi Uekawa: After much investigation I decided to write a very simple page for getUserMedia.

19 May, 2020 - 15:14
After much investigation I decided to write a very simple page for getUserMedia. When I am performing music I provide audio via line input, with echo, noise, and other concerns already resolved. The idea is that I can cast the tab to video conferencing software, and the conferencing software will hopefully not reduce noise or echo. The page is here. If the video conferencing software is reducing noise or echo from a tab, I will ask: why is it doing so?

François Marier: Displaying client IP address using Apache Server-Side Includes

19 May, 2020 - 04:50

If you use a Dynamic DNS setup to reach machines which are not behind a stable IP address, you will likely have a need to probe these machines' public IP addresses. One option is to use an insecure service like Oracle's http://checkip.dyndns.com/ which echoes back your client IP, but you can also do this on your own server if you have one.

There are multiple options to do this, like writing a CGI or PHP script, but those are fairly heavyweight if that's all you need mod_cgi or PHP for. Instead, I decided to use Apache's built-in Server-Side Includes.

Apache configuration

Start by turning on the include filter by adding the following in /etc/apache2/conf-available/ssi.conf:

AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

and making that configuration file active:

a2enconf ssi

Then, find the vhost file where you want to enable SSI and add the following options to a Location or Directory section:

<Location /ssi_files>
    Options +IncludesNOEXEC
    SSLRequireSSL
    Header set Content-Security-Policy: "default-src 'none'"
    Header set X-Content-Type-Options: "nosniff"
</Location>

before adding the necessary modules:

a2enmod headers
a2enmod include

and restarting Apache:

apache2ctl configtest && systemctl restart apache2.service
Create an shtml page

With the web server ready to process SSI instructions, the following HTML blurb can be used to display the client IP address:

<!--#echo var="REMOTE_ADDR" -->

or any other built-in variable.

Note that you don't need to write a valid HTML document for the variable to be substituted, and so the above one-liner is all I use on my server.
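
To verify that the substitution works, fetch the page and check that your IP address comes back (the hostname below is a placeholder):

curl https://example.com/ssi_files/whatsmyip.shtml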

Security concerns

The first thing to note is that the configuration section uses the IncludesNOEXEC option in order to disable arbitrary command execution via SSI. In addition, you can also make sure that the cgi module is disabled since that's a dependency of the more dangerous side of SSI:

a2dismod cgi

Of course, if you rely on this IP address to be accurate, for example because you'll be putting it in your DNS, then you should make sure that you only serve this page over HTTPS, which can be enforced via the SSLRequireSSL directive.

I included two other headers in the above vhost config (Content-Security-Policy and X-Content-Type-Options) in order to limit the damage that could be done in case a malicious file was accidentally dropped in that directory.

Finally, I suggest making sure that only the root user has writable access to the directory which has server-side includes enabled:

$ ls -la /var/www/ssi_includes/
total 12
drwxr-xr-x  2 root     root     4096 May 18 15:58 .
drwxr-xr-x 16 root     root     4096 May 18 15:40 ..
-rw-r--r--  1 root     root        0 May 18 15:46 index.html
-rw-r--r--  1 root     root       32 May 18 15:58 whatsmyip.shtml

Arturo Borrero González: A better Toolforge: upgrading the Kubernetes cluster

19 May, 2020 - 00:00

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

One of the most successful and important products provided by the Wikimedia Cloud Services team at the Wikimedia Foundation is Toolforge. Toolforge is a platform that allows users and developers to run and use a variety of applications that help the Wikimedia movement and mission from the technical point of view in general. Toolforge is a hosting service commonly known in the industry as a Platform as a Service (PaaS). Toolforge is powered by two different backend engines, Kubernetes and GridEngine.

This article focuses on how we made a better Toolforge by integrating a newer version of Kubernetes and, along with it, some more modern workflows.

The starting point in this story is 2018. Yes, two years ago! We identified that we could do better with our Kubernetes deployment in Toolforge. We were using a very old version, v1.4. Using an old version of any software has more or less the same consequences everywhere: you lack security improvements and some modern key features.

Once it was clear that we wanted to upgrade our Kubernetes cluster, both the engineering work and the endless chain of challenges started.

It turns out that Kubernetes is a complex and modern technology, which adds some extra abstraction layers to add flexibility and some intelligence to a very old systems engineering need: hosting and running a variety of applications.

Our first challenge was to understand what our use case for a modern Kubernetes was. We were particularly interested in some key features:

  • The increased security and controls required for a public user-facing service, using RBAC, PodSecurityPolicies, quotas, etc.
  • Native multi-tenancy support, using namespaces
  • Advanced web routing, using the Ingress API

Soon enough we faced another Kubernetes native challenge: the documentation. For a newcomer, learning and understanding how to adapt Kubernetes to a given use case can be really challenging. We identified some baffling patterns in the docs. For example, different documentation pages would assume you were using different Kubernetes deployments (Minikube vs kubeadm vs a hosted service). We are running Kubernetes like you would on bare-metal (well, in CloudVPS virtual machines), and some documents directly referred to ours as a corner case.

During late 2018 and early 2019, we started brainstorming and prototyping. We wanted our cluster to be reproducible and easily rebuildable, and in the Technology Department at the Wikimedia Foundation, we rely on Puppet for that. One of the first things to decide was how to deploy and build the cluster while integrating with Puppet. This is not as simple as it seems because Kubernetes itself is a collection of reconciliation loops, just like Puppet is. So we had to decide what to put directly in Kubernetes and what to control and make visible through Puppet. We decided to stick with kubeadm as the deployment method, as it seems to be the more upstream-standardized tool for the task. We had to make some interesting decisions by trial and error, like where to run the required etcd servers, what the kubeadm init file would look like, how to proxy and load-balance the API on our bare-metal deployment, what network overlay to choose, etc. If you take a look at our public notes, you can get a glimpse of the number of decisions we had to make.
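
For readers unfamiliar with kubeadm, the deployment step itself boils down to feeding an init file to the tool on the first control-plane node; a minimal sketch (the file name is illustrative, not the actual Toolforge configuration):

# run on the first control plane node; the YAML file holds the cluster settings
kubeadm init --config=kubeadm-init.yaml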

Our Kubernetes wasn’t going to be a generic cluster, we needed a Toolforge Kubernetes service. This means we don’t use some of the components, and also, we add some additional pieces and configurations to it. By the second half of 2019, we were working full-speed on the new Kubernetes cluster. We already had an idea of what we wanted and how to do it.

There were a couple of important topics for discussions, for example:

  • Ingress
  • Validating admission controllers
  • Security policies and quotas
  • PKI and user management

We will describe in detail the final state of those pieces in another blog post, but each of the topics required several hours of engineering time, research, tests, and meetings before reaching a point in which we were comfortable with moving forward.

By the end of 2019 and early 2020, we felt like all the pieces were in place, and we started thinking about how to migrate the users, the workloads, from the old cluster to the new one. This migration plan mostly materialized in a Wikitech page which contains concrete information for our users and the community.

The interaction with the community was a key success element. Thanks to our vibrant and involved users, we had several early adopters and beta testers that helped us identify early flaws in our designs. The feedback they provided was very valuable for us. Some folks helped solve technical problems, helped with the migration plan or even helped make some design decisions. Worth noting that some of the changes that were presented to our users were not easy to handle for them, like new quotas and usage limits. Introducing new workflows and deprecating old ones is always a risky operation.

Even though the migration procedure from the old cluster to the new one was fairly simple, there were some rough edges. We helped our users navigate them. A common issue was a webservice not being able to run in the new cluster due to stricter quotas limiting the resources for the tool. Another example is the new Ingress layer failing to work properly with some webservices' particular options.

By March 2020, we no longer had anything running in the old Kubernetes cluster, and the migration was completed. We then started thinking about another step towards making a better Toolforge, which is introducing the toolforge.org domain. There is plenty of information about the change to this new domain in Wikitech News.

The community wanted a better Toolforge, and so did we, and after almost 2 years of work, we have it! All the work that was done represents the commitment of the Wikimedia Foundation to support the technical community and how we really want to pursue technical engagement in general in the Wikimedia movement. In a follow-up post we will present and discuss some technical details of the new Kubernetes cluster in more depth, so stay tuned!


Russell Coker: A Good Time to Upgrade PCs

18 May, 2020 - 16:09

PC hardware just keeps getting cheaper and faster. Now that so many people have been working from home the deficiencies of home PCs are becoming apparent. I’ll give Australian prices and URLs in this post, but I think that similar prices will be available everywhere that people read my blog.

From MSY (parts list PDF) [1] 120G SATA SSDs are under $50 each. 120G is more than enough for a basic workstation, so you are looking at $42 or so for fast quiet storage or $84 or so for the same with RAID-1. Being quiet is a significant luxury feature, and it's also useful if you are going to be in video conferences.

For more serious storage, NVMe starts at around $100 per unit; I think that $124 for a 500G Crucial NVMe is the best low-end option (paying $95 for a 250G Kingston device doesn’t seem like enough savings to be worth it). So that’s $248 for 500G of very fast RAID-1 storage. There’s a Samsung 2TB NVMe device for $349 which is good if you need more storage; it’s interesting to note that this is significantly cheaper than the Samsung 2TB SSD, which costs $455. I wonder if SATA SSD devices will go away in the future; it might end up being SATA for slow/cheap spinning media and M.2 NVMe for solid state storage. The SATA SSD devices are only good for use in older systems that don’t have M.2 sockets on the motherboard.

It seems that most new motherboards have one M.2 socket on the motherboard with NVMe support, and presumably support for booting from NVMe. But dual M.2 sockets are rare, and the price difference is significantly greater than the cost of a PCIe M.2 card to support NVMe, which is $14. So for NVMe RAID-1 it seems that the best option is a motherboard with a single NVMe socket (starting at $89 for an AM4 socket motherboard, the current standard for AMD CPUs) and a PCIe M.2 card.

One thing to note about NVMe is that different drivers are required. On Linux this means building a new initrd before the migration (or afterwards when booted from a recovery image), and on Windows it probably means a fresh install from special installation media with NVMe drivers.
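
On Debian the initrd rebuild can look like this (a sketch; the explicit modules entry is only needed if your initramfs is built with MODULES=dep rather than the default):

echo nvme >> /etc/initramfs-tools/modules   # make sure the NVMe driver is included
update-initramfs -u -k all                  # rebuild the initrds for all installed kernels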

All the AM4 motherboards seem to have RADEON Vega graphics built in, which is capable of 4K resolution at a stated refresh of around 24Hz. The ones that give detail about the interfaces say that they have HDMI 1.4, which means a maximum of 30Hz at 4K resolution if you have the color encoding that suits text (i.e. for use other than just video). I covered this issue in detail in my blog post about DisplayPort and 4K resolution [2]. So a basic AM4 motherboard won’t give great 4K display support, but it will probably be good for a cheap start.

$89 for motherboard, $124 for 500G NVMe, $344 for a Ryzen 5 3600 CPU (not the cheapest AM4 but in the middle range and good value for money), and $99 for 16G of RAM (DDR4 RAM is cheaper than DDR3 RAM) gives the core of a very decent system for $656 (assuming you have a working system to upgrade and peripherals to go with it).

Currently Kogan has 4K resolution monitors starting at $329 [3]. They probably won’t be the greatest monitors but my experience of a past cheap 4K monitor from Kogan was that it is quite OK. Samsung 4K monitors started at about $400 last time I could check (Kogan currently has no stock of them and doesn’t display the price), I’d pay an extra $70 for Samsung, but the Kogan branded product is probably good enough for most people. So you are looking at under $1000 for a new system with fast CPU, DDR4 RAM, NVMe storage, and a 4K monitor if you already have the case, PSU, keyboard, mouse, etc.

It seems quite likely that the 4K video hardware on a cheap AM4 motherboard won’t be that great for games and it will definitely be lacking for watching TV documentaries. Whether such deficiencies are worth spending money on a PCIe video card (starting at $50 for a low end card but costing significantly more for 3D gaming at 4K resolution) is a matter of opinion. I probably wouldn’t have spent extra for a PCIe video card if I had 4K video on the motherboard. Not only does using built in video save money it means one less fan running (less background noise) and probably less electricity use too.

My Plans

I currently have a workstation with 2*500G SATA SSDs in a RAID-1 array, 16G of RAM, and a i5-2500 CPU (just under 1/4 the speed of the Ryzen 5 3600). If I had hard drives then I would definitely buy a new system right now. But as I have SSDs that work nicely (quiet and fast enough for most things) and almost all machines I personally use have SSDs (so I can’t get a benefit from moving my current SSDs to another system) I would just get CPU, motherboard, and RAM. So the question is whether to spend $532 for more than 4* the CPU performance. At the moment I’ll wait because I’ll probably get a free system with DDR4 RAM in the near future, while it probably won’t be as fast as a Ryzen 5 3600, it should be at least twice as fast as what I currently have.


Enrico Zini: Art links

18 May, 2020 - 06:00
  • Guglielmo Achille Cavellini - Wikipedia (art, history, italy, archive.org, 2020-05-18): Guglielmo Achille Cavellini (11 September 1914 – 20 November 1990), also known as GAC, was an Italian artist and art collector. After an initial activity as a painter, in the 1940s and 1950s he became one of the major collectors of contemporary Italian abstract art, developing a deep relationship of patronage and friendship with the artists. This experience had its pinnacle in the exhibition Modern painters of the Cavellini collection at the National Gallery of Modern Art in Rome in 1957. In the 1960s Cavellini resumed his activity as an artist, with an ample production spanning from Neo-Dada to performance art to mail art, of which he became one of the prime exponents with the Exhibitions at Home and the Round Trip works. In 1971 he invented autostoricizzazione (self-historicization), upon which he acted to create a deliberate popular history surrounding his existence. He also authored the books Abstract Art (1959), Man painter (1960), Diary of Guglielmo Achille Cavellini (1975), Encounters/Clashes in the Jungle of Art (1977) and Life of a Genius (1989).
  • Gustave Doré - Wikipedia (art, comics, history, archive.org, 2020-05-18): Paul Gustave Louis Christophe Doré (/dɔːˈreɪ/; French: [ɡys.tav dɔ.ʁe]; 6 January 1832 – 23 January 1883) was a French artist, printmaker, illustrator, comics artist, caricaturist, and sculptor who worked primarily with wood-engraving.
  • Enrico Baj: Generale (art, history, italy, archive.org, 2020-05-18): "Enrico Baj was really good at poking fun at power using imagination. With that simplicity that only the great possess, he gathers up things like buttons, scraps of fabric, cords, assorted trimmings, and sticks them onto the canvas together with his painting: it almost looks like he is just playing, but playing and playing, ever so quietly, he manages to turn the world upside down. …" (translated from the original Roman dialect)
  • Tamara de Lempicka: Autoritratto sulla Bugatti verde (art, history, italy, archive.org, 2020-05-18)

Dirk Eddelbuettel: #2 T^4: Customizing The Shell Prompt

18 May, 2020 - 05:27

The second video (following the announcement and last week’s shell colors one) is up in the still new T^4 series of video lightning talks with tips, tricks, tools, and toys. Today we cover customizing shell prompts.
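
As a flavor of what prompt customization looks like, here is a minimal hypothetical example, not necessarily the one from the video:

# in ~/.bashrc: green user@host, blue working directory; '\$' shows '#' for root
PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '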

The slides are available here. Next week we will likely continue on shell customization with aliases.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: krb5-strength 3.2

18 May, 2020 - 00:43

krb5-strength provides password strength checking for Kerberos KDCs (either MIT or Heimdal), and also provides a password history implementation for Heimdal.

This release adds a check-only mode to the heimdal-history command to interrogate history without modifying it and increases the default hash iterations used when storing old passwords. explicit_bzero is now used, where available, to clear the memory used for passwords after processing. krb5-strength can now optionally be built without CrackLib support at all, if you only want to use the word list, edit distance, or length and character class rules.

It's been a few years since the previous release, so this release also updates all the portability code, overhauls valgrind testing, and now passes tests when built with system CrackLib (by skipping tests for passwords that are rejected by the stronger rules of the embedded CrackLib fork).

You can get the latest release from the krb5-strength distribution page. New packages will be uploaded to Debian unstable shortly (as soon as a Perl transition completes enough to make the package buildable in unstable).

Dirk Eddelbuettel: RcppArmadillo 0.9.880.1.0

17 May, 2020 - 22:17

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 719 other packages on CRAN.

Conrad released a new upstream version 9.880.1 of Armadillo on Friday which I packaged and tested as usual (result log here in the usual repo). The R package also sports a new OpenMP detection facility once again motivated by macOS which changed its setup yet again.

Changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.880.1.0 (2020-05-15)
  • Upgraded to Armadillo release 9.880.1 (Roasted Mocha Detox)

    • expanded qr() to optionally use pivoted decomposition

    • updated physical constants to NIST 2018 CODATA values

    • added ARMA_DONT_USE_CXX11_MUTEX configuration option to disable use of std::mutex

  • OpenMP capability is tested explicitly (Kevin Ushey and Dirk in #294, #295, and #296 all fixing #290).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Erich Schubert: Contact Tracing Apps are Useless

17 May, 2020 - 19:04

Some people believe that automatic contact tracing apps will help contain the Coronavirus epidemic. They won’t.

Sorry to bring the bad news, but IT and mobile phones and artificial intelligence will not solve every problem.

In my opinion, those who promise to solve these things with artificial intelligence / mobile phones / apps / your-favorite-buzzword are at least overly optimistic and guilty of “blinder Aktionismus” (*), if not naive, detached from reality, or fraudsters who just want to get some funding.

(*) there does not seem to be an English word for this – “doing something just for the sake of doing something, without thinking about whether it makes sense to do so”

Here are the reasons why it will not work:

  1. Signal quality. Forget detecting proximity with Bluetooth Low Energy. Yes, there are attempts to use BLE beacons for indoor positioning. But these rely on learning “fingerprints” of which beacons are visible at which points, combined with additional information such as movement sensors and history (you do not teleport around in a building). BLE signals and antennas apparently tend to be very prone to orientation differences, signal reflections, and of course you will not have the idealized controlled environment used in such prototypes. The contacts have a single device, and they move – this is not comparable to indoor positioning. I strongly doubt you can tell whether you are “close” to someone, or not.
  2. Close vs. protection. The app cannot detect protection in place. Being close to someone behind a plexiglass window or even a solid wall is very different from being close otherwise. You will get a lot of false contacts this way. That neighbor that you have never seen living in the apartment above will likely be considered a close contact of yours, as you sleep “next” to each other every day…
  3. Low adoption rates. Apparently even in technology-affine Singapore, fewer than 20% of people installed the app. That does not even mean they use it regularly. In Austria, the number is apparently below 5%, and people complain that it does not detect contacts… But in order for this approach to work, you would need Chinese-style mass surveillance that literally puts you in prison if you do not install the app.
  4. False alerts. Because of these issues, you will get false alerts, until you just do not care anymore.
  5. False sense of security. Honestly: the app does not protect you at all. All it tries to do is to make the tracing of contacts easier. It will not tell you reliably if you have been infected (as mentioned above, too many false positives, too few users) nor that you are relatively safe (too few contacts included, too slow testing and reporting). It will all be of the quality of “about 10 days ago you may or may not have had contact with someone that tested positive; please contact someone to expose more data to tell you that it is actually another false alert”.
  6. Trust. In Germany, the app will be operated by T-Systems and SAP. Not exactly two companies that have a lot of fans… SAP seems to be one of the most hated software vendors around. Neither company is known for caring much about privacy, but they are prototypical for “business first”. It's like trusting the cat to guard the cream. Yes, I know they want to make it open-source. But likely only the client, and you will still have to trust that the binary in the app stores is actually built from this source code, and not from a modified copy. As long as the names T-Systems and SAP are associated with the app, people will not trust it. Plus, we all know that the app will be bad, given the reputation of these companies at making horrible software systems…
  7. Too late. SAP and T-Systems want to have the app ready in mid June. Seriously, this must be a joke? It will be very buggy in the beginning (because it is SAP!) and it will not work reliably before the end of July. There will not be a substantial user base before fall. But given the low infection rates in Germany, nobody will bother to install it anymore, because the perceived benefit is zero once the infection rates are low.
  8. Infighting. You may remember that there was the discussion before that there should be a pan-European effort. Except that in the end, everybody fought everybody else, countries went in different directions, and they all broke up. France wanted a centralized system, while in Germany people pointed out that the users would not accept this and only a distributed system would have a chance. That failed effort was known as “Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT)” vs. “Decentralized Privacy-Preserving Proximity Tracing (DP-3T)”, and it turned out to have become a big “clusterfuck”. And that is just the tip of the iceberg.

Iceland, probably the country that handled the Corona crisis best (they issued a travel advisory against Austria when it was still happily spreading the virus at après-ski; they massively tested, and got the infections down to almost zero within 6 weeks), has been experimenting with such an app. Iceland, as a fairly close-knit community, managed to have almost 40% of people install their app. So did it help? No: “The technology is more or less … I wouldn’t say useless […] it wasn’t a game changer for us.”

The contact tracing app is just a huge waste of effort and public money.

And pretty much the same applies to any other attempts to solve this with IT. There is a lot of buzz about solving the Corona crisis with artificial intelligence: bullshit!

That is just naive. Do not speculate about the magic powers of AI. Get the data, understand the data, and you will see it does not help.

Because it's real data. It's dirty. It's late. It's contradictory. It's incomplete. It is everything that AI currently cannot handle well. This is not image recognition. You have no labels. Many of the attempts in this direction already fail at the trivial 7-day seasonality you observe in the data… For example, the widely known Johns Hopkins “Has the curve flattened” trend has a stupid, useless indicator based on 5-day averages. And hence you get the weekly ups and downs due to weekends. They show pretty “up” and “down” indicators. But these are affected mostly by the day of the week. And nobody cares. Did you notice that they currently even have big negative infections in their plots?

There is no data on when someone was infected. Because such data simply does not exist. What you have is data on when someone tested positive (mostly), when someone reported symptoms (sometimes, but some never have symptoms!), and when someone died (but then you do not know whether it was because of Corona, because of other issues that became “just” worse because of Corona, or whether they were hit by a car without any relation to Corona). The data that we work with is incredibly delayed, yet we pretend it is “live”.

Stop reading tea leaves. Stop pretending AI can save the world from Corona.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.