Planet Debian

Planet Debian - https://planet.debian.org/

Andrew Cater: CD / DVD image testing happening now - 202005091330

9 May, 2020 - 20:33
The usual suspects are very much involved - Isy, myself, RattusRattus, schweer and Sledge. Image testing is going on to good effect. Update pulses are starting to hit the mirrors: I've just done a dist-upgrade on one of the machines here. All good.

Dirk Eddelbuettel: ttdo 0.0.5: Reflect tinytest update

9 May, 2020 - 18:36

A maintenance release of our (still small) ttdo package just arrived on CRAN. As introduced last fall, the ttdo package extends the most excellent (and very minimal / zero-depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs.

tinytest has an extension mechanism that we use. As tinytest was just upgraded to version 1.2.0 and, among other nice extensions, changed one interface by allowing a new error class argument, we had to rebuild as well in order to document the new argument.

The release was actually prepared three days ago when tinytest itself was updated, but we waited for the binaries at CRAN to be updated and rebuilt to take advantage of the fully automated submission and test process at CRAN.

The NEWS entry follows.

Changes in ttdo version 0.0.5 (2020-05-06)
  • Rebuilt under tinytest 1.2.0 to add support for class argument in error-code test predicates

CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Andrew Cater: And again - CD release for Buster release 4 is going to happen over the weekend ...

9 May, 2020 - 17:30
And for our regular readers - it's happening again. I'm sitting here with two laptops, a desktop, a connection to IRC - and friends in Cambridge and elsewhere involved in this too.

All good fun, as ever :)
 

Thorsten Alteholz: My Debian Activities in April 2020

9 May, 2020 - 17:23

FTP master

This month I accepted 384 packages and rejected 47. The overall number of packages that got accepted was 457.

Debian LTS

This was my seventieth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 28.75 hours. During that time I did LTS uploads of:

  • [DLA 2183-1] libgsf security update for one CVE
  • [DLA 2184-1] jsch security update for one CVE
  • [DLA 2185-1] eog security update for one CVE
  • [DLA 2187-1] radicale security update for one CVE
  • [DLA 2186-1] ncmpc security update for one CVE
  • [DLA 2188-1] php5 security update for three CVEs
  • [DLA 2189-1] rzip security update for one CVE
  • [DLA 2195-1] w3m security update for two CVEs
  • [DLA 2194-1] yodl security update for one CVE
  • [DLA 2197-1] miniupnpc security update for one CVE
  • [DLA 2196-1] pound security update for one CVE

As there have been lots of no-dsa CVEs, I continued my work on wireshark.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-second ELTS month.

During my small allocated time I only uploaded:

  • ELA-227-1 for php5 fixing four CVEs

I also did some days of frontdesk duties.

Other stuff

Unfortunately, strange things again happened outside Debian this month, and I only got a few things done.

I improved packaging of …

I sponsored uploads of …

  • … ulfius

I uploaded the new package …

  • … puppet-module-cirrax-gitolite

As part of my Go challenge I uploaded:
golang-github-facebookgo-subset, golang-github-facebookgo-ensure, golang-github-shurcool-gopherjslib, golang-github-grafana-grafana-plugin-model, golang-github-crewjam-httperr, golang-github-hashicorp-terraform-svchost, golang-github-neelance-sourcemap, golang-github-neelance-astrewrite, golang-github-kisielk-gotool, golang-github-gopherjs-gopherjs, golang-github-yvasiyarov-newrelic-platform-go, golang-github-rhnvrm-simples3, golang-github-robfig-go-cache, golang-github-xorcare-pointer, golang-github-goburrow-serial

Sandro Tosi: It's a waiting game... but just how long we gotta wait?

9 May, 2020 - 10:08
While waiting for my priority date to become current, and with enough "quarantine time" on my hands, I came up with a very simple Python tool to parse the USCIS Visa Bulletin and gather some data from it.

You can find code and images in this GitHub repo.

For now it only contains a single plot for the EB3 final action date; it answers a simple question: how many months in the past your priority date needs to be if you want to file your AOS in a given month. We started from FY2016, to cover the final full year of the Obama administration.

If you're interested in more classes/visas, let me know and the tool can easily be extended to cover them too. PRs are always welcome.

Petter Reinholdtsen: Jami as a Zoom client, a trick for password protected rooms...

8 May, 2020 - 18:30

Half a year ago, I wrote about the Jami communication client, capable of peer-to-peer encrypted communication. It handles messages as well as audio and video. It uses distributed hash tables instead of central infrastructure to connect its users to each other, which in my book is a plus. I mentioned briefly that it could also work as a SIP client, which came in handy when the higher educational sector in Norway started to promote Zoom as its video conferencing solution. I am reluctant to use the official Zoom client software, due to their copyright license clauses prohibiting users from reverse engineering (for example to check the security) and benchmarking it, and thus prefer to connect to Zoom meetings with free software clients.

Jami worked OK as a SIP client to Zoom as long as there was no password set on the room. The Jami daemon leaks memory like crazy (approximately 1 GiB a minute) when I am connected to the video conference, so I had to restart the client every 7-10 minutes, which is not great. I tried to get other SIP Linux clients to work without success, so I decided I would have to live with this wart until someone managed to fix the leak in the dring code base. But another problem showed up once the rooms were password protected. I could not get my dial tone signaling through from Jami to Zoom, and dial tone signaling is used to enter the password when connecting to Zoom. I tried a lot of different permutations with my Jami and Asterisk setup to try to figure out why the signaling did not get through, only to finally discover that the fundamental problem seems to be that Zoom is simply not able to receive dial tone signaling when connecting via SIP. There seems to be nothing wrong with the Jami and Asterisk end; it is simply broken on the Zoom end. I got help from a very skilled VoIP engineer figuring out this last part. And being a very skilled engineer, he was also able to locate a solution for me. Or to be exact, a workaround that solves my initial problem of connecting to password protected Zoom rooms using Jami.

So, how do you do this, I am sure you are wondering by now. The trick is already documented by Zoom, and it is to modify the SIP address to include the room password. What is most surprising about this is that the automatically generated email from Zoom with instructions on how to connect via SIP does not mention this. The SIP address to use normally consists of the room ID (a number), an @ character and the IP address of the Zoom SIP gateway. But Zoom understands a lot more than just the room ID in front of the at sign. The format is "[Meeting ID].[Password].[Layout].[Host Key]", and you can here see how you can both enter the password, control the layout (full screen, active presence and gallery) and specify the host key to start the meeting. The full SIP address entered into Jami to provide the password will then look like this (all using made up numbers):

sip:657837644.522827@192.168.169.170

Now if only Jami would reduce its memory usage, I could even recommend this setup to others. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Dirk Eddelbuettel: Rcpp Virtual Talk on June 5

8 May, 2020 - 07:48

We had to cancel R/Finance 2020 due to what is happening all around us. But I plan to present the one-hour workshop I often give in the tutorial session preceding the first day—but this time online!

To keep it simple, we will stick with the same day, and possibly the same time: Friday morning at 8:00am! So that makes Friday, June 5, at 08:00h Central time.

This YouTube! link should then provide the stream; I reckon there may also be a recording afterwards.

The talk / demo / presentation will be about an hour long, and material should be similar to the previous ones (of the same length) still available at the talks page (which also has longer talks all the way to the two-day workshops).

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Norbert Preining: KDE/Plasma 5.18.5 for Debian

7 May, 2020 - 14:52

After the KDE Apps 20.04 update, the recently released Plasma 5.18.5 is now ready for Debian.

Furthermore, since the most recent versions of the KDE frameworks have been uploaded to Debian/experimental, I have adapted the packages to make upgrades to the versions in experimental – and hopefully soon in unstable – smooth. I am also working with the Debian KDE Qt Team to update KDE Apps and Plasma in Debian proper. Stay tuned.

For now, here is what you need: Use the following APT sources in /etc/apt/sources.list.d/obs-npreining-kde.list:

For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Testing/ ./

And don’t forget that you need to import my OBS GPG key, obs-npreining.asc; it is best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
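For instance, assuming you have saved the key file to the current directory and added the APT sources above, the remaining steps are just:

# Install the repository key and pull in the new packages
sudo cp obs-npreining.asc /etc/apt/trusted.gpg.d/obs-npreining.asc
sudo apt update
sudo apt full-upgrade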

Enjoy.

Tim Retout: Blog Posts

7 May, 2020 - 05:23

Jonathan Dowland: template haskell

6 May, 2020 - 22:43

I've been meaning to write more about my PhD work for absolutely ages, but I've held myself back by wanting to try and keep a narrative running through the blog posts. That's not realistic for a number of reasons so I'm going to just write about different aspects of things without worrying about whether they make sense in the context of recent blog posts or not.

Part of what I am doing at the moment is investigating Template Haskell to see whether it would usefully improve our system implementation. Before I write more about how it might apply to our system, I'll first write a bit about Template Haskell itself.

Template Haskell (TH) is a meta-programming system: you write programs that are executed at compile time and can output code to be spliced into the parent program. The approach used by TH is really nice: you perform your meta-programming in real first-class Haskell, and it integrates really well with the main program.

TH provides two pairs of special brackets. Oxford brackets surrounding any Haskell expression cause the whole expression to be replaced by the result of parsing the expression — an expression tree — which can be inspected and manipulated by the main program:

[| \x -> x + 1 |]

The expression data-type is a series of mutually-recursive data types that represent the complete Haskell grammar. The top-level is Exp, for expression, which has constructors for the different expression types. The above lambda expression is represented as

LamE [VarP x_1]
    (InfixE (Just (VarE x_1))
            (VarE GHC.Num.+)
            (Just (LitE (IntegerL 1))))

Such expressions can be pattern-matched against, constructed, deconstructed etc just like any other data type.

The other bracket type performs the opposite operation: it takes an expression structure and splices it into code in the main program, to be compiled as normal:

λ> 1 + $( litE (IntegerL 1) )
2

The two are often intermixed, sometimes nested to several levels. What follows is a typical beginner TH meta-program. The standard function fst operates on a 2-tuple and returns the first value. It cannot operate on a tuple of a different valence. However, a meta-program can generate a version of fst specialised for an n-tuple of any n:

genfst n = do
    xs <- replicateM n (newName "x")
    let ntup = tupP (map varP xs)
    [| \ $(ntup) ->  $(varE (head xs)) |]

Used like so

λ> $(genfst 2) (1,2)
1
λ> $(genfst 3) ('a','b','c')
'a'
λ> :t $(genfst 10)
$(genfst 10) :: (a, b, c, d, e, f, g, h, i, j) -> a

That's a high-level gist of how you can use TH. I've skipped over a lot of detail, in particular an important aspect relating to scope and naming, which is key to the problem I am exploring at the moment. Oxford brackets and splice brackets do not operate directly on the simple Exp data-type, but upon an Exp within the Q Monad:

λ> :t [| 1 |]
[| 1 |] :: ExpQ

ExpQ is a synonym for Q Exp. Eagle-eyed Haskellers will have noticed that genfst above was written in terms of some Monad. And you might also have noticed the case discrepancy between the constructors VarE (etc.) and the varE, tupP, varP used in that function definition. These are convenience functions that wrap the relevant constructor in Q. The point of the Q Monad is (I think) to handle name scoping, and avoid unintended name clashes. Look at the output of these simple expressions, passed through runQ:

λ> runQ [| \x -> x |]
LamE [VarP x_1] (VarE x_1)
λ> runQ [| \x -> x |]
LamE [VarP x_2] (VarE x_2)

Those x are not the same x in the context they are evaluated (a GHCi session). And that's the crux of the problem I am exploring. More in a later blog post!

Reproducible Builds: Reproducible Builds in April 2020

6 May, 2020 - 22:11

Welcome to the April 2020 report from the Reproducible Builds project. In our regular reports we outline the most important things that we and the rest of the community have been up to over the past month.

What are reproducible builds? One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. But whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.

News

It was discovered that more than 725 malicious packages were downloaded thousands of times from RubyGems, the official channel for distributing code for the Ruby programming language. Attackers used a variation of “typosquatting” and replaced hyphens and underscores (for example, uploading a malevolent atlas-client in place of atlas_client) that executed a script that intercepted Bitcoin payments. (Ars Technica report)

Bernhard M. Wiedemann launched ismypackagereproducibleyet.org, a service that takes a package name as input and displays whether the package is reproducible in a number of distributions. For example, it can quickly show the status of Perl as being reproducible on openSUSE but not in Debian. Bernhard also improved the documentation of his “unreproducible package” to add some example patches for hash issues. [].

There was a post on Chaos Computer Club’s website listing Ten requirements for the evaluation of “Contact Tracing” apps in relation to the SARS-CoV-2 epidemic. In particular:

4. Transparency and verifiability: The complete source code for the app and infrastructure must be freely available without access restrictions to allow audits by all interested parties. Reproducible build techniques must be used to ensure that users can verify that the app they download has been built from the audited source code.

Elsewhere, Nicolas Boulenguez wrote a patch for the Ada programming language component of the GCC compiler to skip -f.*-prefix-map options when writing Ada Library Information files. Amongst other properties, these .ali files embed the compiler flags used at the time of the build which results in the absolute build path being recorded via -ffile-prefix-map, -fdebug-prefix-map, etc.

In the Arch Linux project, kpcyrd reported that they held their first “rebuilder workshop”. The session was held on IRC and participants were provided a document with instructions on how to install and use Arch’s repro tool. The meeting resulted in multiple people with no prior experience of Reproducible Builds validating their first package. Later in the month he also announced that it was now possible to run independent rebuilders under Arch in a “hands-off, everything just works™” solution to distributed package verification.

Mathias Lang submitted a pull request against dmd, the canonical compiler for the ‘D’ programming language, to add support for our SOURCE_DATE_EPOCH environment variable as well as the other C preprocessor tokens such as __DATE__, __TIME__ and __TIMESTAMP__, which was subsequently merged. SOURCE_DATE_EPOCH defines a distribution-agnostic standard for build toolchains to consume and emit timestamps in situations where they are deemed to be necessary. []
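As a generic illustration of the convention (a sketch, not dmd's actual implementation): the build environment exports the timestamp of the last source change, and any tool that would normally embed the current time embeds that value instead.

# Export the last source change as the canonical build timestamp (here taken from git)
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)

# A toolchain honouring the convention embeds this value instead of "now"
date -u -d "@$SOURCE_DATE_EPOCH" +"%Y-%m-%dT%H:%M:%SZ"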

The Telegram instant-messaging platform announced that they had updated to version 5.1.1, continuing their claim that they are reproducible according to their full instructions and therefore verifying that its original source code is exactly the same code that is used to build the versions available on the Apple App Store and Google Play distribution platforms respectively.

Lastly, Hervé Boutemy reported that 97% of the current development versions of various Maven packages appear to have a reproducible build. []


Distribution work

In Debian this month, 89 reviews of Debian packages were added, 21 were updated and 33 were removed, adding to our knowledge about identified issues. Many issue types were noticed, categorised and updated by Chris Lamb.

In addition, Holger Levsen filed a feature request against debrebuild, a tool for rebuilding a Debian package given a .buildinfo file, proposing to add --standalone or --one-shot-mode functionality.


In openSUSE, Bernhard M. Wiedemann made the following changes:

In Arch Linux, a rebuilder instance has been set up at reproducible.archlinux.org that is rebuilding Arch’s [core] repository directly. The first rebuild has led to approximately 90% of packages being reproducible, contrasting with 94% on the Reproducible Builds project’s own Arch Linux status page on tests.reproducible-builds.org, which continuously builds packages and does not verify Arch Linux packages. More information may be found on the corresponding wiki page and the underlying decisions were explained on our mailing list.


Software development diffoscope

Chris Lamb made the following changes to diffoscope, the Reproducible Builds project’s in-depth and content-aware diff utility that can locate and diagnose reproducibility issues (including preparing and uploading versions 139, 140, 141, 142 and 143 to Debian which were subsequently uploaded to the backports repository):

  • Comparison improvements:

    • Dalvik .dex files can also serve as APK containers so restrict the narrower identification of .dex files to files ending with this extension and widen the identification of APK files to when file(1) discovers a Dalvik file. (#28)
    • Add support for Hierarchical Data Format (HDF5) files. (#95)
    • Add support for .p7c and .p7b certificates. (#94)
    • Strip paths from the output of zipinfo(1) warnings. (#97)
    • Don’t uselessly include the JSON “similarity” percentage if it is “0.0%”. []
    • Render multi-line difference comments in a way to show indentation. (#101)
  • Testsuite improvements:

    • Add pdftotext as a requirement to run the PDF test_metadata test. (#99)
    • apktool 2.5.0 changed the handling of output of XML schemas so update and restrict the corresponding test to match. (#96)
    • Explicitly list python3-h5py in debian/tests/control.in to ensure that we have this module installed during a test run to generate the fixtures in these tests. []
    • Correct parsing of ./setup.py test --pytest-args arguments. []
  • Misc:

    • Capitalise “Ordering differences only” in text comparison comments. []
    • Improve documentation of FILE_TYPE_HEADER_PREFIX and FALLBACK_FILE_TYPE_HEADER_PREFIX to highlight that only the first 16 bytes are used. []

Michael Osipov created a well-researched merge request to return diffoscope to using zipinfo directly instead of piping input via /dev/stdin in order to ensure portability to the BSD operating system []. In addition, Ben Hutchings documented how --exclude arguments are matched against filenames [] and Jelle van der Waa updated the LLVM test fixture difference for LLVM version 10 [] as well as adding a reference to the name of the h5dump tool in Arch Linux [].
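As a reminder of what that option looks like in practice, --exclude takes a glob that is matched against file names inside the compared artifacts (the file names below are made up):

# Compare two builds of the same package while ignoring embedded log files
diffoscope --exclude '*.log' first-build/hello_2.10-2_amd64.deb second-build/hello_2.10-2_amd64.deb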

Lastly, Mattia Rizzolo also fixed an incorrect build dependency [] and Vagrant Cascadian enabled diffoscope to locate the openssl and h5dump packages on GNU Guix [][], and updated diffoscope in GNU Guix to version 141 [] and 143 [].

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. In April, Chris Lamb made the following changes:

  • Add deprecation plans to all handlers documenting how — or if — they could be disabled and eventually removed, etc. (#3)
  • Normalise *.sym files as Java archives. (#15)
  • Add support for custom .zip filename filtering and exclude two patterns of files generated by Maven projects in “fork” mode. (#13)
disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

This month, Chris Lamb fixed a long-standing issue by not dropping UNIX groups in FUSE multi-user mode when we are not root (#1) and uploaded version 0.5.9-1 to Debian unstable. Vagrant Cascadian subsequently refreshed disorderfs in GNU Guix to version 0.5.9 [].

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

In addition, Bernhard informed the following projects that their packages are not reproducible:

  • acoular (report unknown non-determinism)
  • cri-o (report a date issue)
  • gnutls (report certtool being unable to extend certificates beyond 2049)
  • gnutls (report copyright year variation)
  • libxslt (report a bug about non-deterministic output from data corruption)
  • python-astropy (report a future build failure in 2021)
Project documentation

This month, Chris Lamb made a large number of changes to our website and documentation in the following categories:

  • Community engagement improvements:

    • Update instructions to register for Salsa on our Contribute page now that the signup process has been overhauled. []
    • Make it clearer that joining the rb-general mailing list is probably a first step for contributors to take. []
    • Make our full contact information easier to find in the footer (#19) and improve text layout using bullets to separate sections [].
  • Accessibility:

    • To improve accessibility, make all links underlined. (#12)
    • Use an enhanced foreground/background contrast ratio of 7.04:1. (#11)
  • General improvements:

  • Internals:

    • Move to using jekyll-redirect-from over manual redirect pages [][] and add a redirect from /docs/buildinfo/ to /docs/recording/. (#23)
    • Limit the website self-check to not scan generated files [] and remove the “old layout” checker now that I have migrated them all [].
    • Move the news archive under the /news/ namespace [] and improve formatting of archived news links [].
    • Various improvements to the draft template generation. [][][][]

In addition, Holger Levsen clarified exactly which month we ceased to do weekly reports [] and Mattia Rizzolo adjusted the title style of an event page [].

Marcus Hoffman also started a discussion on our website’s issue tracker asking for clarification on embedded signatures and Chris Lamb subsequently replied and asked Marcus to go ahead and propose a concrete change.

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org that, amongst many other tasks, tracks the status of our reproducibility efforts as well as identifies any regressions that have been introduced.

  • Chris Lamb:

    • Print the build environment prior to executing a build. []
    • Drop a misleading disorderfs-debug prefix in log output when we change non-disorderfs things in the file and, as it happens, do not run disorderfs at all. []
    • The CSS for the package report pages added a margin to all <a> HTML elements under <li> ones, which was causing a comma/bullet spacing issue. []
    • Tidy the copy in the project links sidebar. []
  • Holger Levsen:

    • General:
    • Debian:

      • Reduce scheduling frequency of the buster distribution on the arm64 architecture, etc.. [][]
      • Show builds per day on a per-architecture basis for the last year on the Debian dashboard. []
      • Drop the Subgraph OS package set as development halted in 2017 or 2018. []
      • Update debrebuild to version from the latest version of devscripts. [][]
      • Add or improve various parts of the documentation. [][][]
    • Work on a Debian rebuilder:

      • Integrate sbuild. [][][][][]
      • Select a random .buildinfo file and attempt to build and compare the result. [][][][]
      • Improve output and related output formatting. [][][][][]
      • Outline next steps for the development of the tool. [][][]
      • Various refactoring and code improvements. [][][]

Lastly, Mattia Rizzolo fixed some log parsing code regarding potentially-harmless warnings from package installation [][] and the usual build node maintenance was performed by Holger Levsen [][][] and Mattia Rizzolo [][][].

Misc news

On our mailing list this month, Santiago Torres asked whether we were still publishing releases of our tools to our website and Chris Lamb replied that this was not the case and fixed the issue. Later in the month Santiago also reported that the signature for the disorderfs package did not pass its GPG verification which was also fixed by Chris Lamb.

Hans-Christoph Steiner of the Guardian Project asked whether there would be interest in making our website translatable which resulted in a WIP merge request being filed against the website and a discussion on how to track translation updates.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via the rb-general mailing list or on IRC.


This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Daniel Shahaf, Holger Levsen, Jelle van der Waa, kpcyrd, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Russell Coker: About Reopening Businesses

6 May, 2020 - 14:12

Currently there is political debate about when businesses should be reopened after the Covid19 quarantine.

Small Businesses

One argument for reopening things is for the benefit of small businesses. The first thing to note is that the protests in the US say “I need a haircut” not “I need to cut people’s hair”. Small businesses won’t benefit from reopening sooner.

For every business there is a certain minimum number of customers needed to be profitable. There are many comments from small business owners who want it to remain shut down. When the government has declared a shutdown, paused rent payments and provided social security to employees who aren’t working, a small business can avoid bankruptcy. If they suddenly have to pay salaries or make redundancy payouts, and have to pay rent while they can’t make a profit because customers are staying home, they will go bankrupt.

Many restaurants and cafes make little or no profit at most times of the week (I used to be 1/3 owner of an Internet cafe and know this well). For such a company to be viable you have to be open most of the time so customers can expect you to be open. Generally you don’t keep a cafe open at 3PM to make money at 3PM, you keep it open so people can rely on there being a cafe open there; someone who buys a can of soda at 3PM one day might come back for lunch at 1:30PM the next day because they know you are open. A large portion of the opening hours of most retail companies can be considered either advertising for trade at the profitable hours or loss-making times that you can’t close because you can’t send an employee home for an hour.

If you have seating for 28 people (as my cafe did) then for about half the opening hours you will probably have 2 or fewer customers in there at any time, and for about a quarter of the opening hours you probably won’t cover the salary of the one person on duty. The weekend is when you make the real money, especially Friday and Saturday nights when you sometimes get all the seats full and people coming in for takeaway coffee and snacks. On Friday and Saturday nights the 60-seat restaurant next door to my cafe used to tell customers that my cafe made better coffee. It wasn’t economical for them to have a table full for an hour while they sell a few cups of coffee; they wanted customers to leave after dessert and free the table for someone who wants a meal with wine (alcohol is the real profit for many restaurants).

The plans for reopening with social distancing mean that a 28-seat cafe can only have 14 chairs or fewer (some plans allow 25% capacity, which would mean 7 people maximum). That means decreasing the revenue of the most profitable times by 50% to 75% while not decreasing the operating costs much. A small cafe has 2-3 staff when it’s crowded, so there’s no possibility of reducing staff by 75% when reducing revenue by 75%.

My Internet cafe would have closed immediately if forced to operate in the proposed social distancing model. It would have been 1/4 of the trade and about 1/8 of the profit at the most profitable times, even if enough customers were prepared to visit – and social distancing would kill the atmosphere. Most small businesses are barely profitable anyway; most don’t last 4 years even in normal economic circumstances.

This reopen movement is about cutting unemployment benefits not about helping small business owners. Destroying small businesses is also good for big corporations, kill the small cafes and restaurants and McDonald’s and Starbucks will win. I think this is part of the motivation behind the astroturf campaign for reopening businesses.

Forbes has an article about this [1].

Psychological Issues

Some people claim that we should reopen businesses to help people who have psychological problems from isolation, to help victims of domestic violence who are trapped at home, to stop older people being unemployed for the rest of their lives, etc.

Here is one article with advice for policy makers from domestic violence experts [2]. One thing it mentions is that the primary US federal government program to deal with family violence had a budget of $130M in 2013. The main thing that should be done about family violence is to make it a priority at all times (not just when it can be a reason for avoiding other issues) and allocate some serious budget to it. An agency that deals with problems that affect families and only has a budget of $1 per family per year isn’t going to be able to do much.

There are ongoing issues of people stuck at home for various reasons. We could work on better public transport to help people who can’t drive. We could work on better healthcare to help some of the people who can’t leave home due to health problems. We could have more budget for carers to help people who can’t leave home without assistance. Wanting to reopen restaurants because some people feel isolated is ignoring the fact that social isolation is a long term ongoing issue for many people, and that many of the people who are affected can’t even afford to eat at a restaurant!

Employment discrimination against people in the 50+ age range is an ongoing thing; many people in that age range know that if they lose their job and can’t immediately find another they will be unemployed for the rest of their lives. Reopening small businesses won’t help that: businesses running at low capacity will have to lay people off, and it will probably be the older people. Also, the unemployment system doesn’t deal well with part-time work. The Australian system (which I think is similar to most systems in this regard) reduces unemployment benefits by $0.50 for every dollar earned in part-time work, which effectively puts people who are doing part-time work because they can’t get a full-time job in the highest tax bracket! If someone is going to pay for transport to get to work, work a few hours, and then get half the money they earned deducted from their unemployment benefits, it hardly makes it worthwhile to work. While the exact health impacts of Covid19 aren’t well known at this stage, it seems very clear that older people are disproportionately affected, so forcing older people to go back to work before there is a vaccine isn’t going to help them.

When it comes to these discussions I think we should be very suspicious of people who raise issues they haven’t previously shown interest in. If the discussion of reopening businesses seems to be someone’s first interest in the issues of mental health, social security, etc then they probably aren’t that concerned about such issues.

I believe that we should have a Universal Basic Income [3]. I believe that we need to provide better mental health care and challenge the gender ideas that hurt men and cause men to hurt women [4]. I believe that we have significant ongoing problems with inequality not small short term issues [5]. I don’t think that any of these issues require specific changes to our approach to preventing the transmission of disease. I also think that we can address multiple issues at the same time, so it is possible for the government to devote more resources to addressing unemployment, family violence, etc while also dealing with a pandemic.


Jonathan Dowland: Introducing Red Hat UBI OpenJDK runtime images

5 May, 2020 - 20:27

At Red Hat we've just shipped Universal Base Image (UBI) build images for OpenJDK. Download them with your favourite container manager, e.g.:

podman pull registry.access.redhat.com/ubi8/openjdk-8
podman pull registry.access.redhat.com/ubi8/openjdk-11

UBI, announced a year ago, is an initiative where you can obtain, share and build upon official Red Hat container images without needing a Red Hat subscription. Unlike something like CentOS, they aren't modified in any way (e.g. to remove branding); they're exactly the same base images that Red Hat products are built upon, composed entirely of Open Source software. Your precise rights are covered in the EULA.

I work on the Red Hat OpenJDK container images, which are designed primarily for use with OpenShift. We've been based upon the RHEL base images since inception. Although our containers are open source (of course), we haven't been able to distribute the binary images more widely than to Red Hat customers, until now.

These are based upon the "minimal" flavour base image and add an OpenJDK runtime and development packages (either JDK8 or JDK11, as per the image name) as well as Maven and OpenShift integration scripts that allow the image to be used with Source To Image.
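For example, with the s2i command-line tool you could build and run a Maven project directly on top of one of these images; a sketch, where the repository URL and exposed port are placeholders:

# Build an application image from source using the UBI OpenJDK 11 builder image
s2i build https://github.com/example/my-maven-app registry.access.redhat.com/ubi8/openjdk-11 my-app

# Run it locally; the port depends on what the application actually listens on
podman run --rm -p 8080:8080 my-app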

If you give these a try, please let us know what you think! Comment here, on twitter, or file Issues or Pull Requests in the GitHub project.

Dima Kogan: Redirection and /dev/stderr

5 May, 2020 - 04:30

I just hit some unexpected unix-ism (or maybe Linux-ism?) that I'd like to mention here. I found a workaround, but if anybody knows what's actually happening and can enlighten me, please send me an email, and I'll update the post.

OK, so I have a program that does stuff, and outputs its progress to stderr. And I'm invoking this from a shell script that also outputs stuff to stderr. A minimized sample:

echo "Outer print 1111" > /dev/stderr
perl -E 'say STDERR ("1111 1111 1111 1111 1111 1111 1111 1111\n")'

echo "Outer print 2222" > /dev/stderr
perl -E 'say STDERR ("2222 2222 2222 2222 2222 2222 2222 2222\n")'

Let's call this dothing.sh. This works just fine. Now let's say I want to log the output to a file. Naturally I do this:

./dothing.sh 2> /tmp/log

Surprisingly, this does not work:

$ bash dothing.sh 2> /tmp/log && cat /tmp/log
Outer print 2222
2222 2222 2222 2222 2222 2222 2222 2222

$ bash dothing.sh 2> /tmp/log && xxd /tmp/log
0000000: 4f75 7465 7220 7072 696e 7420 3232 3232  Outer print 2222
0000010: 0a00 0000 0000 0000 0000 0000 0000 0000  ................
0000020: 0000 0000 0000 0000 0032 3232 3220 3232  .........2222 22
0000030: 3232 2032 3232 3220 3232 3232 2032 3232  22 2222 2222 222
0000040: 3220 3232 3232 2032 3232 3220 3232 3232  2 2222 2222 2222
0000050: 0a0a                                     ..

The 1111 prints are gone, and we now have a block of binary 0 bytes in there for some reason. I had been assuming that writes to stderr block until they're finished, so that I could, without thinking about it, mix writing to fd 2 with opening, and then writing to, /dev/stderr. Apparently the right way to do this is to write to fd 2 in the script directly, instead of opening /dev/stderr:

echo "Outer print 1111" >&2
perl -E 'say STDERR ("1111 1111 1111 1111 1111 1111 1111 1111\n")'

echo "Outer print 2222" >&2
perl -E 'say STDERR ("2222 2222 2222 2222 2222 2222 2222 2222\n")'

Why?

Jonathan Dowland: Amiga floppy recovery project: what next?

5 May, 2020 - 02:08

This is the ninth part in a series of blog posts. The previous post was Amiga floppy recovery project scope. The whole series is available here: Amiga.

It's been a while since I've reported on my Amiga floppy recovery project. With my bulky Philips CRT attached I slowly ground through the process of importing all my floppy disks, which is now done. The majority of disks were imported without errors. Commercial disks were the most likely to fail to import. Possibly I'd have more success with them if I used a different copying technique than X-COPY's default, but my focus was not on the commercial disks.

I have not yet restored the use of my LCD TV with the Amiga. I was waiting to hear back from Amiga Kit about an agreed return, but despite their web store claiming they're still open for business, I haven't been able to get any response to my emails to them since mid-February. I've given up, written off that order and bought an RGB/SCART adaptor elsewhere instead. Meanwhile my bulky CRT has returned to the loft.

The next step is pretty boring: time to check over my spreadsheet of disks, the imported copies of them, make sure everything is accounted for and decide whether to make further attempts for any that failed or write them off. For commercial disks it's almost certainly less effort to track down someone else's rip than to try mine again.

After that I can return to moving the Gotek floppy adaptor inside the Amiga case, as described in the last entry, and start playing.

Christoph Berg: arm64 on apt.postgresql.org

4 May, 2020 - 16:20

The apt.postgresql.org repository has been extended to cover the arm64 architecture.

We had occasionally received user requests to add "arm" in the past, but it was never really clear which kind of "arm" made sense to target for PostgreSQL. In terms of Debian architectures, there's (at least) armel, armhf, and arm64. Furthermore, Raspberry Pis are very popular (and indeed what most users seemed to be asking about), but the raspbian "armhf" port is incompatible with the Debian "armhf" port.

Now that most hardware has moved to 64-bit, it was becoming clear that "arm64" was the way to go. Amit Khandekar made it happen that HUAWEI Cloud Services donated an arm64 build host with enough resources to build the arm64 packages at the same speed as the existing amd64, i386, and ppc64el architectures. A few days later, all the build jobs were done, including passing all test-suites. Very few arm-specific issues were encountered, which makes me confident that arm64 is a solid architecture to run PostgreSQL on.

We are targeting Debian buster (stable), bullseye (testing), and sid (unstable), and Ubuntu bionic (18.04) and focal (20.04). To use the arm64 archive, just add the normal sources.list entry:

deb https://apt.postgresql.org/pub/repos/apt buster-pgdg main
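For example, a complete setup on a buster arm64 host could look like this (a sketch following the usual pgdg installation steps; postgresql-12 is the current major version at the time of writing):

# Import the repository key, add the source entry, and install the server
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
echo "deb https://apt.postgresql.org/pub/repos/apt buster-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt-get update
sudo apt-get install postgresql-12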

Ubuntu focal

At the same time, I've added the next Ubuntu LTS release to apt.postgresql.org: focal (20.04). It ships amd64, arm64, and ppc64el binaries.

deb https://apt.postgresql.org/pub/repos/apt focal-pgdg main

Old PostgreSQL versions

Many PostgreSQL extensions still support older server versions that are EOL. For testing these extensions, server packages need to be available. I've built packages for PostgreSQL 9.2+ on all Debian distributions, and all Ubuntu LTS distributions. 9.1 will follow shortly.

This means people can move to newer base distributions in their .travis.yml, .gitlab-ci.yml, and other CI files.

Russ Allbery: Review: Seraphina

4 May, 2020 - 11:51

Review: Seraphina, by Rachel Hartman

Series: Seraphina #1
Publisher: Ember
Copyright: 2012
ISBN: 0-375-89658-9
Format: Kindle
Pages: 360

Forty years ago, dragons and humans negotiated a fragile truce. The fighting stopped, the dragon-killing knights were outlawed, and dragons were allowed to visit the city in peace, albeit under stringent restrictions. Some on both sides were never happy with that truce and now, as the anniversary approaches, Prince Rufus has been murdered while hunting. His head was never found, and not a few members of the court are certain that it was eaten.

Sixteen-year-old Seraphina had no intention of being part of that debate. She's desperately trying to keep a low profile as the assistant court music director and music tutor to a princess. Her father is furious that she's at court at all, since they are hiding a family secret that cannot get out. But Seraphina has a bad habit of being competent in ways that are hard to ignore: improving the princess's willingness to learn music beyond all expectations, performing memorably at Prince Rufus's funeral, and then helping, with her dragon tutor, a newskin dragon (one new to shapeshifting) who was attacked by a mob. This brings her to the attention of Prince Lucian Kiggs: royal bastard, fiancé of the princess, head of the royal guard, and observant investigator. For Seraphina and her secrets, that's a threat, but she has made more friends at court than she realizes.

I probably should spoil Seraphina's secret, since it's hard to talk about this book without it and Hartman reveals it relatively early, but I try to avoid spoilers. I'll instead say that Seraphina is in danger from both the court and the dragons if her secret is uncovered, but she has an ability that will prove more useful than she ever expected in helping the kingdom avoid war. That ability is not something flashy; it lies in listening, understanding, and forming connections.

As you have probably guessed from the age of the protagonist, this is a young adult fantasy. It has that YA shape; Seraphina is uncertain but brave, gets into trouble by being unable to keep her mouth shut or stand by when she can prevent bad things from happening, and is caught by surprise when others find those characteristics likable. The cast is small despite an epic fantasy setup, and the degree to which Seraphina ends up at the heart of the kingdom's affairs is perhaps a touch unrealistic. Like a lot of YA, Seraphina is very centered on its main character. Your enjoyment of this book will likely hinge on how much you like her mix of uncertainty, determination, and ethics.

I liked her. I also appreciated the way that Hartman had her stumble into the plot through a series of accidents and entanglements with her past and her secret, despite her own best intentions. Seraphina is trying to avoid attention, not get into the middle of a novel, but she's naturally the sort of person who rushes towards danger to help others whenever events happen too fast for her to think. She has also attracted the attention (and unexpected friendship) of critical members of the royal family who like to meddle, which is bad for her attempts to hide. This could have felt artificial and too coincidental, but it didn't.

The one thing that did bother me about this book, though, was the nature of dragons, although it's possible that I'm being unfair. Dragons in Hartman's world can shapeshift into human form, but they don't understand (and deeply distrust) human emotions, finding them overwhelming and impure. This bit of world-building is not original to this book, and perhaps I should attribute it to the ubiquitous influence of Spock and Vulcans. But I kept stumbling over the feeling like dragons were based partly on stereotypes of the autism spectrum, which hurt my ability to engross myself in the story. It would not surprise me if I had this all wrong, Hartman didn't intend anything of the sort, and no one else will read it that way. But it still seemed worth mentioning.

Seraphina's dynamic with Kiggs becomes the core of the story, but it's slow and stumbling and occasionally frustrating when Seraphina is more cautious than the reader thinks she needs to be. The payoff is mostly worth the frustration, though. I wish Seraphina had been a bit more curious about her abilities, a bit more willing to notice the obvious (the bit with the dancers dragged on far too long), and a bit more trusting of people who deserve her trust, and I wish Hartman had taken a different approach with the dragon attitude towards emotions. But this was fun.

Recommended if you want a good-hearted story where doing the right thing is rewarded and people in positions of power notice when someone is a good person.

Followed by Shadow Scale.

Rating: 7 out of 10

Enrico Zini: Tech links

4 May, 2020 - 06:00
  • Debops (2014) | Hacker News (debian, devel; archive.org; 2020-05-04): The way developers somehow think DevOps is (or should be) an abbreviation of "Developers doing/replacing Operations" is terrifying to me. I'm also in the same boat as the author, in that I recommend and target Debian Stable + Backports (and some vendor/community repos when required).
  • What nobody tells you about documentation - Blog - Divio (devel; archive.org; 2020-05-04): However hard you work on documentation, it won't work for your software - unless you do it the right way.
  • Eagle's Path: (2019-11-09) (devel, python; archive.org; 2020-05-04): I'm going to preach the wonders of Python dataclasses, but for reasons of interest to those who have already gone down the delightful rabbit-hole of typed Python. So let me start with a quick plug for mypy if you haven't heard about it.
  • Encoding your WiFi access point password into a QR code (howto, security; archive.org; 2020-05-04)
  • My Abandonware: because old video games were better games (history; archive.org; 2020-05-04): Database of 15500 abandonware games, free. One of the most complete video game museums. Take a trip down Memory Lane now! Warning: whole weekends can be lost.

Dirk Eddelbuettel: #0: Introducing T^4: Tips, Tricks, Tools, and Toys

4 May, 2020 - 05:18

For way too long now, something I had meant to start was a little series about tips, tricks, tools, and toys. I had mentioned the idea a few times to a friend or two, and generally received a thumbs up or a ‘go for it’. But it takes a little to get over the hump and get going. And it turns out that last week’s r^4 talk on upgrading to R 4.0.0 hit some latent demand, as we are now at 1400 views on YouTube. Wowser.

So, without further ado, let’s kick off T^4. Similar in spirit to R^4, but broader in scope and going beyond R. The opening slides explaining what we plan to do are here, and the video link follows below.

With some luck we should have the first actual talk next week. See you then!

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sean Whitton: Processing mail from discussion lists differently

4 May, 2020 - 03:09

I realised this week that for some years I have been applying Inbox Zero indiscriminately to all e-mail that I receive, but this does not make sense, and has some downsides.

My version of Inbox Zero works very well when applied to mail addressed directly to me, and for mail to certain mailing lists, where each post to the list might as well be addressed directly to me, in addition to the list. However, I also receive by e-mail the following things to which I now believe Inbox Zero should not be applied:

  • discussion lists like debian-devel, notmuch@notmuchmail.org, etc.

  • mail to aliases like ftpmaster@debian.org (except when that mail is in reply to mail written by me from that address)

  • automated notifications received via Debian team mailing lists, where I’m not solely responsible for the Debian package in question, such as notifications received via the Debian Perl Group’s mailing list

  • RSS feed articles supplied by my (years old but still going strong) rss2email cronjob.

I believe that applying Inbox Zero to these sorts of things is not only incorrect but is actually harming my engagement with these mediums. Let us distinguish

  1. processing e-mail – this means applying the Inbox Zero decision procedure to incoming messages, at set times during the day (I do it once, around 4pm)

  2. browsing and sometimes catching up RSS articles and list mail – looking through unread items, replying if I think I have something useful to say, leaving things for later, and occasionally marking all as read if I’ve not had time for that group of lists lately.

These should be kept apart. The easy case to see is why you shouldn’t apply the browsing/catching up approach to e-mail which should rather be processed – that’s just the original wisdom of Inbox Zero. And clearly you don’t want ever to have to resort to just marking all mail addressed directly to you as read.

What goes wrong, then, if you misapply the processing mentality to e-mail which should, rather, be browsed/caught up? Well, the core of Inbox Zero is deciding whether to read and reply to something right now, add it to your todo list to handle later, or decide the mail cannot be dealt with quickly but is not important enough to go on that todo list. If you apply this to RSS articles and discussion list mail, then the third option is basically ruled out, because almost nothing on mailing lists or on blogs that I follow is important enough to go on my list of Real Tasks. But then you’re faced with either reading the article/post right now, or discarding it. There is no option to leave the item marked as unread and then maybe come back to it, or postpone discarding it until you’ve decided to catch up the group. Applying Inbox Zero to discussion lists and RSS feeds creates a false sense of urgency.

I’ve realised that my implicit response to this has been reluctance to subscribe to new mailing lists and feeds, because I don’t have things set up to allow me to read them in a leisurely way. But then I’m missing out on discussions and writing that might be relevant and beneficial to me if I could only approach them when I’m in a frame of mind other than “time to get my inbox down to zero”.

This also means that subscribing to new mailing lists just in order to post something has an unreasonably high cost: introducing all mail on those lists into my Inbox Zero processing window.

So, what’s needed is to make the virtual folder views which I use to read new mail correspond cleanly to the distinction between mail to be processed and mail to be browsed. I.e. for each virtual folder it should be clear whether mail there is to be processed or to be browsed. Then I can continue to use my processing views once per day, and access the browsing views at leisure. (Indeed, I’ve created a keybinding to cycle through the new browsing views, and another to catch them up.)

As I mentioned, some list mail ought to be processed. For example, I want to process rather than browse the debian-policy mailing list, as one of the Policy Editors. With notmuch, it’s easy to include this mail in the relevant processing views (I’ve long had one processing view for weekdays and another for weekends).

Additionally, I’ve used a bit of Emacs Lisp to create a dynamic “uncategorised unread” view which catches mail to be browsed which (i) doesn’t show up in one of the other virtual folders for mail to be browsed, and (ii) doesn’t show up in one of the processing views. So now subscribing to a mailing list is cheap: mail will end up in the uncategorised view, and I can decide whether to leave it there for a low traffic list; create a new virtual folder for the list (trivial as it’s all just more Emacs Lisp, no actual moving of files required); or unsubscribe.
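For illustration, the underlying idea corresponds to a notmuch query along the following lines (the tag names are made up for the example; my real views are driven from Emacs Lisp as described above):

# Unread mail not caught by any of the dedicated browsing or processing views
notmuch search tag:unread and not tag:to-me and not tag:lists and not tag:rss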

I’ve been experiencing nostalgia for my time in secondary school when my friends and I had all sorts of interesting stuff flowing into our mailstores, from RSS feeds to each other’s blogs where we’d post both our own content and links elsewhere, RSS feeds to stranger’s blogs, and e-mails sent to each other with links. We didn’t have to worry about processing anything back then, as our e-mail accounts were used for intellectual enrichment but not the completion of tasks. I think I can recapture something of that with my new virtual folders for browsing.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.