Planet Debian

Planet Debian - https://planet.debian.org/

Norbert Preining: Multi-device and RAID1 with btrfs

26 May, 2020 - 08:01

I have been using btrfs, a modern copy-on-write filesystem for Linux, for many years now. But only recently did I realize how amateurish my usage has been. Over the last day I switched to multiple devices and threw in a RAID1 level at the same time.

For the last few years, I had been using btrfs in a completely naive way: simply creating new filesystems, mounting them, moving data over, linking the directories into my home dir, and so on. It all became a huge mess over time. I had heard of “multi-device support”, but always thought that it was for the big players in the data centers – not realizing that it works trivially on your home system, too. Thanks to an article by Mark McBride I learned how to use it better!

Btrfs has an impressive list of features and is often compared to (Open)ZFS (by the way, the domain openzfs.org has a botched SSL certificate .. umpf, my trust disappears even more) due to its high level of data security. I have been playing around with the idea of using ZFS for quite some time, but first of all it requires compiling extra modules all the time, because ZFS cannot be included in the kernel source. More importantly, I realized that ZFS is simply too inflexible with respect to disks of different sizes in a storage pool.

Btrfs on the other hand allows adding and removing devices to the “filesystem” on a running system. I just added a 2TB disk to my rig, and called:

btrfs device add /dev/sdh1 /

and with that alone, my root filesystem grew immediately. In the end I consolidated data from 4 extra SSDs into this new “filesystem” spanning multiple disks, and got rid of all the links and loops.
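Note that adding a device does not by itself move any existing data onto it; new allocations simply start using the extra space. If you want the existing data spread across all devices right away, you can trigger a rebalance on the mounted filesystem (a minimal sketch, and entirely optional for plain capacity expansion):

btrfs balance start /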

For good measure, and since I had enough space left, I also switched to RAID1 for this filesystem. This again, surprisingly, works on a running system!

btrfs balance start -dconvert=raid1 -mconvert=raid1 /

Here, both data and metadata are mirrored on the devices. With 6TB of total disk space, the balancing operation took quite some time, about 6h in my case, but finished without a hiccup.
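If you are curious how far along such a conversion is, you can query the running balance from another terminal; the only assumption here is the mount point:

btrfs balance status /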

After all that, the filesystem now looks like this:

$ sudo btrfs fi show /
Label: none  uuid: XXXXXX
	Total devices 5 FS bytes used 2.19TiB
	devid    1 size 899.01GiB used 490.03GiB path /dev/sdb3
	devid    2 size 489.05GiB used 207.00GiB path /dev/sdd1
	devid    3 size 1.82TiB used 1.54TiB path /dev/sde1
	devid    4 size 931.51GiB used 649.00GiB path /dev/sdf1
	devid    5 size 1.82TiB used 1.54TiB path /dev/sdc1

and using btrfs fi usage / I can get detailed information about the device usage and status.

Stumbling blocks

You wouldn’t expect such a deep rebuild of the intestines of a system to go through without a few bumps, and indeed, there are a few:

First of all, update-grub is broken when device names are used. If you have GRUB_DISABLE_LINUX_UUID=true, so that actual device nodes are used in grub.cfg, the generated entries are broken because they list all the devices. This comes from the fact that grub-mkconfig uses grub-probe --target=device / to determine the root device, which in our case returns:

# grub-probe --target=device /
/dev/sdb3
/dev/sdd1
/dev/sde1
/dev/sdf1
/dev/sdc1

and thus the grub config file contains entries like:

...
menuentry 'Debian GNU/Linux ...' ... {
  ...
  linux	/boot/vmlinuz-5.7.0-rc7 root=/dev/sdb3
/dev/sdd1
/dev/sde1
/dev/sdf1
/dev/sdc1 ro <other options>
}

This is of course an invalid entry; fortunately grub still boots, but it ignores the rest of the command-line options.

So I decided to turn back to using the UUID for the root entry, which should be better supported. But alas, I couldn’t even boot anymore. Grub gave me very cryptic messages like cannot find UUID device, dropped me into the grub rescue shell, and the rescue shell itself was unable to read any filesystem at all (not even FAT or ext2!). The most cryptic message was grub header bytenr is not equal node addr, on which even Google gave up.

In the end I booted into a rescue image (you always have something like SystemRescueCD on a USB stick next to you during these operations, right?), mounted the filesystem manually, and reinstalled grub, which fixed the problem; the grub config file now contains only the UUID for root.
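For the record, the rescue dance boils down to something like the following sketch; it uses the device names from above, and the mount points and the grub-install target will of course depend on your actual setup:

mount /dev/sdb3 /mnt
for d in dev proc sys; do mount --rbind /$d /mnt/$d; done
chroot /mnt
grub-install /dev/sdb
update-grub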

I don’t blame btrfs for that; it is more that we, after sooo many years, still don’t have a good boot system.

All in all, a very smooth transition, and at least for some time I don’t have to worry about which partition has still some space left.

Thanks btrfs and Open Source!

Elana Hashman: Beef and broccoli

26 May, 2020 - 06:30

Beef and broccoli is one of my favourite easy weeknight meals. It's savoury, it's satisfying, it's mercifully quick, and so, so delicious.

The recipe here is based on this one but with a few simplifications and modifications. The ingredients are basically the same, but the technique is a little different. Oh, and I include some fermented black beans because they're yummy :3

Made 🥩 and 🥦 pic.twitter.com/cSpyA3hzP6

— e. hashman (@ehashdn) April 21, 2020

🥩 and 🥦

Serves 3. Active time: 20-30m. Total time: 30m.

Ingredients:

  • ¾ lb steak, thinly sliced (leave it in the freezer for ~30m to make slicing easier)
  • ¾ lb broccoli florets
  • 3 large cloves garlic, chopped
  • 1 teaspoon fermented black beans, rinsed and chopped (optional)

For the marinade:

  • 1 teaspoon soy sauce
  • 1 teaspoon shaoxing rice wine or dry sherry
  • ½ teaspoon corn starch
  • ⅛ teaspoon ground black pepper

For the sauce:

  • 2 tablespoons oyster sauce
  • 2 teaspoons soy sauce
  • 1 teaspoon shaoxing rice wine or dry sherry
  • ¼ cup chicken broth (I use Better than Bouillon)
  • 2 tablespoons water
  • 1 teaspoon corn starch

For the rice:

  • ¾ cup white medium-grain rice (I use calrose)
  • 1¼ cup water
  • large pinch of salt

Recipe:

  1. Begin by preparing the rice: combine rice, water, and salt in a small saucepan and bring to a boil. Once it reaches a boil, cover and reduce to a simmer for 20 minutes. Then turn off the heat and let it rest.
  2. Mix the marinade in a bowl large enough to hold the meat.
  3. Thinly slice the beef. Place slices in the marinade bowl and toss to evenly coat.
  4. Start heating a wok or similar pan on medium-high to high heat. This will ensure you get a good sear.
  5. Prep the garlic, black beans, and broccoli. If you use frozen broccoli, you can save some time here :)
  6. Combine the ingredients for the sauce and mix well to ensure the corn starch doesn't form any lumps.
  7. By this point the beef should have been marinating for about 10-15m. Add a tablespoon of neutral cooking oil (like canola or soy) to the meat, give it one more toss to ensure it's thoroughly coated, then add a layer of meat to the dry pan. You may need to sear in batches. On high heat, cooking will take 1-2 minutes per side, ensuring each is nicely browned. Once the strips are cooked, remove from the pan and set aside.
  8. While the beef is cooking, steam the broccoli until it's almost (but not quite) cooked through. I typically do this by putting it in a large bowl, adding 2 tbsp water, covering it and microwaving: 3 minutes on high if frozen, 1½ minutes on high if fresh, pausing halfway through to stir.
  9. Once all the beef is cooked and set aside, spray the pan or add oil to coat and add the garlic and black bean (if using). Stir and cook for 30-60 seconds until fragrant.
  10. When the garlic is ready, give the sauce a quick stir and add it to the pan, using it to deglaze. Once there are no more bits stuck to the bottom and the sauce is thick to your liking, add the broccoli and beef to the pan. Mix until thoroughly coated in sauce and heated through. Taste and adjust seasoning (salt, pepper, soy, etc.) if necessary.
  11. Fluff the rice and serve. Enjoy!

But I am a vegetarian???

Seitan can substitute for the beef relatively easily. Finding a substitute for oyster sauce is a little bit trickier, especially since it's the star ingredient. You can buy vegetarian or vegan oyster sauce (they're usually mushroom-based), but I have no idea how good they taste. You can also try making it at home! If you do, let me know how it turns out?

Bits from Debian: DebConf20 registration is open!

25 May, 2020 - 16:30

We are happy to announce that registration for DebConf20 is now open. The event will take place from August 23rd to 29th, 2020 at the University of Haifa, in Israel, and will be preceded by DebCamp, from August 16th to 22nd.

Although the Covid-19 situation is still rather fluid, as of now Israel seems to be on top of it. Days with fewer than 10 newly diagnosed infections are becoming common, and businesses and schools are slowly reopening. As such, we are hoping that, at least as far as regulations go, we will be able to hold an in-person conference: barring a second wave, there is reason to hope that it can go forward. There is more (and up-to-date) information at the conference's FAQ.

For that, we need your help. We need to know, assuming health regulations permit it, how many people intend to attend. This year probably more than ever before, prompt registration is very important to us. If after months of staying at home you feel that rubbing elbows with fellow Debian Developers is precisely the remedy that will salvage 2020, then we ask that you do register as soon as possible.

Sadly, things are still not clear enough for us to make a final commitment to holding an in-person conference, but knowing how many people intend to attend will be a great help in making that decision. The deadline for deciding on postponing, cancelling or changing the format of the conference is June 8th.

To register for DebConf20, please visit our website and log into the registration system and fill out the form. You can always edit or cancel your registration, but please note that the last day to confirm or cancel is July 26th, 2020 23:59:59 UTC. We cannot guarantee availability of accommodation, food and swag for unconfirmed registrations.

We do suggest that attendees begin making travel arrangements as soon as possible, of course. Please bear in mind that most air carriers allow free cancellations and changes.

Any questions about registrations should be addressed to registration@debconf.org.

Bursary for travel, accommodation and meals

In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register.

As resources are limited, we will examine the requests and decide who will receive the bursaries. Two kinds are available:

  • Debian funded bursaries are available to active Debian contributors.
  • Debian diversity bursaries are available to newcomers to Debian/DebConf, especially those from under-represented communities.

Giving a talk, organizing an event or helping during DebConf20 is taken into account when deciding upon your bursary, so please mention any such plans in your bursary application.

For more information about bursaries, please visit Applying for a Bursary to DebConf.

Attention: the deadline to apply for a bursary using the registration form is May 31st, 2020 23:59:59 UTC. This deadline is necessary so that the organisers have some time to analyze the requests.

To register for the Conference, either with or without a bursary request, please visit: https://debconf20.debconf.org/register

Participation in DebConf20 is conditional on your respect of our Code of Conduct. We require you to read, understand and abide by this code.

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsor Lenovo and Gold Sponsors deepin and Matanel Foundation. DebConf20 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

Russ Allbery: Review: The Last Emperox

25 May, 2020 - 10:14

Review: The Last Emperox, by John Scalzi

Series: Interdependency #3
Publisher: Tor
Copyright: April 2020
ISBN: 0-7653-8917-7
Format: Kindle
Pages: 318

This is the conclusion of the Interdependency trilogy, which is a single story told in three books. Start with The Collapsing Empire. You don't want to read this series out of order.

All the pieces and players are in place, the causes and timeline of the collapse of the empire she is accidentally ruling are now clear, and Cardenia Wu-Patrick knows who her friends and enemies are. What she doesn't know is what she can do about it. Her enemies, unfettered by Cardenia's ethics or her desire to save the general population, have the advantage of clearer and more achievable goals. If they survive and, almost as important, remain in power, who cares what happens to everyone else?

As with The Consuming Fire, the politics may feel a bit too on-the-nose for current events, this time for the way that some powerful people are handling (or not handling) the current pandemic. Also as with The Consuming Fire, Scalzi's fast-moving story, likable characters, banter, and occasional humorous descriptions prevent those similarities from feeling heavy or didactic. This is political wish fulfillment to be sure, but it doesn't try to justify itself or linger too much on its improbabilities. It's a good story about entertaining people trying (mostly) to save the world with a combination of science and political maneuvering.

I picked up The Last Emperox as a palate cleanser after reading Gideon the Ninth, and it provided exactly what I was looking for. That gave me an opportunity to think about what Scalzi does in his writing, why his latest novel was one of my first thoughts for a palate cleanser, and why I react to his writing the way that I do.

Scalzi isn't a writer about whom I have strong opinions. In my review of The Collapsing Empire, I compared his writing to the famous description of Asimov as the "default voice" of science fiction, but that's not quite right. He has a distinct and easily-recognizable style, heavy on banter and light-hearted description. But for me his novels are pleasant, reliable entertainment that I forget shortly after reading them. They don't linger or stand out, even though I enjoy them while I'm reading them.

That's my reaction. Others clearly do not have that reaction, fully engage with his books, and remember them vividly. That indicates to me that there's something his writing is doing that leaves substantial room for difference of personal taste and personal reaction to the story, and the sharp contrast between The Last Emperox and Gideon the Ninth helped me put my finger on part of it. I don't feel like Scalzi's books try to tell me how to feel about the story.

There's a moment in The Last Emperox where Cardenia breaks down crying over an incredibly difficult decision that she's made, one that the readers don't find out about until later. In another book, there would be considerably more emotional build-up to that moment, or at least some deep analysis of it later once the decision is revealed. In this book, it's only a handful of paragraphs and then a few pages of processing later, primarily in dialogue, and less focused on the emotions of the characters than on the forward-looking decisions they've made to deal with those emotions. The emotion itself is subtext. Many other authors would try to pull the reader into those moments and make them feel what the characters are feeling. Scalzi just relates them, and leaves the reader free to feel what they choose to feel.

I don't think this is a flaw (or a merit) in Scalzi's writing; it's just a difference, and exactly the difference that made me reach for this book as an emotional break after a book that got its emotions all over the place. Calling Scalzi's writing emotionally relaxing isn't quite right, but it gives me space to choose to be emotionally relaxed if I want to be. I can pick the level of my engagement. If I want to care about these characters and agonize over their decisions, there's enough information here to mull over and use to recreate their emotional states. If I just want to read a story about some interesting people and not care too much about their hopes and dreams, I can choose to do that instead, and the book won't fight me. That approach lets me sidle up on the things that I care about and think about them at my leisure, or leave them be.

This approach makes Scalzi's books less intense than other novels for me. This is where personal preference comes in. I read books in large part to engage emotionally with the characters, and I therefore appreciate books that do a lot of that work for me. Scalzi makes me do the work myself, and the result is not as effective for me, or as memorable.

I think this may be part of what I and others are picking up on when we say that Scalzi's writing is reminiscent of classic SF from decades earlier. It used to be common for SF to not show any emotional vulnerability in the main characters, and to instead focus on the action plot and the heroics and martial virtues. This is not what Scalzi is doing, to be clear; he has a much better grasp of character and dialogue than most classic SF, adds considerable light-hearted humor, and leaves clear clues and hooks for a wide range of human emotions in the story. But one can read Scalzi in that tone if one wants to, since the emotional hooks do not grab hard at the reader and dig in. By comparison, you cannot read Gideon the Ninth without grappling with the emotions of the characters. The book will not let you.

I think this is part of why Scalzi is so consistent for me. If you do not care deeply about Gideon Nav, you will not get along with Gideon the Ninth, and not everyone will. But several main characters in The Last Emperox (Marce and to some extent Cardenia) did little or nothing for me emotionally, and it didn't matter. I liked Kiva and enjoyed watching her strategically smash her way through social conventions, but it was easy to watch her from a distance and not get too engrossed in her life or her thoughts. The plot trundled along satisfyingly, regardless. That lack of emotional involvement precludes, for me, a book becoming the sort of work that I will rave about and try to press into other people's hands, but it also makes it comfortable and gentle and relaxing in a way that a more emotionally fraught book could not be.

This is a long-winded way to say that this was a satisfying conclusion to a space opera trilogy that I enjoyed reading, will recommend mildly to others, and am already forgetting the details of. If you liked the first two books, this is an appropriate and fun conclusion with a few new twists and a satisfying amount of swearing (mostly, although not entirely, from Kiva). There are a few neat (albeit not horribly original) bits of world-building, a nice nod to and subversion of Asimov, a fair bit of political competency wish fulfillment (which I didn't find particularly believable but also didn't mind being unbelievable), and one enjoyable "oh no she didn't" moment. If you like the thing that Scalzi is doing, you will enjoy this book.

Rating: 8 out of 10

Enrico Zini: Music links

25 May, 2020 - 06:00
  • Great Big Sea - End Of The World (music, 2020-05-25): It's the end of the world as we know it, twice as fast.
  • Smash Mouth - All Star (Melon Cover) (music, 2020-05-25)
  • Brett Domino: Bad Romance (Lady Gaga) - Korg Monotron and Kaossilator (music, 2020-05-25)
  • J.S. Bach - Crab Canon on a Möbius Strip (music, 2020-05-25)
  • Homemade Instruments and PVC Pipe - Rare And Strange Instruments (music, archive.org, 2020-05-25): After Homemade Instruments Week on the facebook page, here is an article with some PVC pipe instruments! Percussion on PVC pipes: a classic, long pipes for big bass, easy to tune by changing the length …
  • Ut queant laxis - Wikipedia (history, music, archive.org, 2020-05-25): "Ut queant laxis" or "Hymnus in Ioannem" is a Latin hymn in honor of John the Baptist, written in Horatian Sapphics and traditionally attributed to Paulus Diaconus, the eighth-century Lombard historian. It is famous for its part in the history of musical notation, in particular solmization. The hymn belongs to the tradition of Gregorian chant.

Dirk Eddelbuettel: #3 T^4: Customizing The Shell

25 May, 2020 - 03:51

The third video (following the announcement, the shell colors one, and last week’s shell prompt one) is up in the still new T^4 series of video lightning talks with tips, tricks, tools, and toys. Today we cover customizing the shell some more.

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: More reliable vlc bittorrent plugin in Debian (version 2.9)

24 May, 2020 - 22:00

I am very happy to report that a more reliable VLC bittorrent plugin was just uploaded into Debian. This fixes a couple of crash bugs in the plugin, hopefully making the VLC experience even better when streaming directly from a bittorrent source. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

It also supports magnet links and local .torrent files.
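For example, playing a local torrent file should work the same way (the file name here is hypothetical):

vlc ~/Downloads/video.torrent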

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Holger Levsen: 20200523-i3statusbar

24 May, 2020 - 20:22
new i3 statusbar
🔌 96% 🚀 192.168.x.y 🐁 🤹 5+4+1 Qubes (80 avail/5 sys/16 tpl) 💾 77G 🧠 4495M/15596M 🤖 11% 🌡️ 50°C 2955🍥 dos y cuarto 🗺  

and soon, with Qubes 4.1, it will become this colorful too.

That depends on whether fonts-symbola and/or fonts-noto-color-emoji are available.

François Marier: Printing hard-to-print PDFs on Linux

24 May, 2020 - 10:05

I recently found a few PDFs which I was unable to print because they caused insufficient printer memory errors.

I found a detailed explanation of what might be causing this which pointed the finger at transparent images, a PDF 1.4 feature which apparently requires a more recent version of PostScript than what my printer supports.

Using Okular's Force rasterization option (accessible via the print dialog) does work, by essentially rendering everything ahead of time and outputting a big image to be sent to the printer. The quality is not very good, however.

Converting a PDF to DjVu

The best solution I found makes use of a different file format: .djvu

Such files are not PDFs, but can still be opened in Evince and Okular, as well as in the dedicated DjVuLibre application.

As an example, I was unable to print page 11 of this paper. Using pdfinfo, I found that it is in PDF 1.5 format and so the transparency effects could be the cause of the out-of-memory printer error.
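Checking the version is a quick pdfinfo one-liner:

$ pdfinfo 2002.04049.pdf | grep 'PDF version'
PDF version:    1.5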

Here's how I converted it to a high-quality DjVu file I could print without problems using Evince:

pdf2djvu -d 1200 2002.04049.pdf > 2002.04049-1200dpi.djvu

Converting a PDF to PDF 1.3

I also tried the DjVu trick on a different unprintable PDF, but it failed to print, even after lowering the resolution to 600dpi:

pdf2djvu -d 600 dow-faq_v1.1.pdf > dow-faq_v1.1-600dpi.djvu

In this case, I used a different technique and simply converted the PDF to version 1.3 (from version 1.6 according to pdfinfo):

ps2pdf13 -r1200x1200 dow-faq_v1.1.pdf dow-faq_v1.1-1200dpi.pdf

This eliminates the problematic transparency and rasterizes the elements that version 1.3 doesn't support.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2020

23 May, 2020 - 23:10

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 284.5 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 10.0h (out of 14h assigned), thus carrying over 4h to May.
  • Adrian Bunk did nothing (out of 28.75h assigned), thus is carrying over 28.75h for May.
  • Ben Hutchings did 26h (out of 20h assigned and 8.5h from March), thus carrying over 2.5h to May.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 6h (out of 6h assigned).
  • Emilio Pozuelo Monfort did not report back about their work so we assume they did nothing (out of 28.75h assigned plus 17.25h from March), thus is carrying over 46h for May.
  • Markus Koschany did 11.5h (out of 28.75h assigned and 38.75h from March), thus carrying over 56h to May.
  • Mike Gabriel did 1.5h (out of 8h assigned), thus carrying over 6.5h to May.
  • Ola Lundqvist did 13.5h (out of 12h assigned and 8.5h from March), thus carrying over 7h to May.
  • Roberto C. Sánchez did 28.75h (out of 28.75h assigned).
  • Sylvain Beucler did 28.75h (out of 28.75h assigned).
  • Thorsten Alteholz did 28.75h (out of 28.75h assigned).
  • Utkarsh Gupta did 24h (out of 24h assigned).
Evolution of the situation

In April we dispatched more hours than ever, and there was another first: we had our first (virtual) contributors meeting on IRC! Logs and minutes are available, and we plan to continue doing IRC meetings every other month.
Sadly, one contributor, Hugo Lefeuvre, decided to go inactive in April.
Finally, we would like to remind you that the end of Jessie LTS is coming in less than two months!
In case you missed it (or failed to act), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 4 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.


Dirk Eddelbuettel: RcppSimdJson 0.0.5: Updated Upstream

23 May, 2020 - 22:09

A new RcppSimdJson release with updated upstream simdjson code just arrived on CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as the use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

This release brings updated upstream code (thanks to Brendan Knapp) plus a new example and minimal tweaks. The full NEWS entry follows.

Changes in version 0.0.5 (2020-05-23)
  • Add parseExample from earlier upstream announcement (Dirk).

  • Synced with upstream (Brendan in #12, closing #11).

  • Updated example parseExample to API changes (Brendan).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible Builds (diffoscope): diffoscope 145 released

23 May, 2020 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 145. This version includes the following changes:

  [ Chris Lamb ]

  * Improvements:

    - Add support for Apple Xcode mobile provisioning .mobileprovision files.
      (Closes: reproducible-builds/diffoscope#113)
    - Add support for printing the signatures via apksigner(1).
      (Closes: reproducible-builds/diffoscope#121)
    - Use SHA256 over MD5 when generating page names for the HTML directory
      presenter, validate checksums for files referenced in .changes files
      using SHA256 too, and move to using SHA256 in "Too much input for diff"
      output too. (Closes: reproducible-builds/diffoscope#124)
    - Don't leak the full path of the temporary directory in "Command [..]
      exited with 1".  (Closes: reproducible-builds/diffoscope#126)
    - Identify "iOS App Zip archive data" files as .zip files.
      (Closes: reproducible-builds/diffoscope#116)

  * Bug fixes:

    - Correct "differences" typo in the ApkFile handler.
      (Closes: reproducible-builds/diffoscope#127)

  * Reporting/output improvements:

    - Never emit the same id="foo" HTML anchor reference twice, otherwise
      identically-named parts will not be able to be linked to via "#foo".
      (Closes: reproducible-builds/diffoscope#120)
    - Never emit HTML with empty "id" anchor elements as it is not possible to
      link to "#" (vs "#foo"). We use "#top" as a fallback value so it will
      work for the top-level parent container.
    - Clarify the message when we cannot find the "debian" Python module.
    - Clarify "Command [..] failed with exit code" to remove duplicate "exited
      with exit" but also to note that diffoscope is interpreting this as an
      error.
    - Add descriptions for the 'fallback' Debian module file types.
    - Rename the --debugger command-line argument to --pdb.

  * Testsuite improvements:

    - Prevent CI (and runtime) apksigner test failures due to lack of
      binfmt_misc on Salsa CI and elsewhere.

  * Codebase improvements:

    - Initially add a pair of comments to tidy up some code that slightly
      violated abstraction levels in diffoscope.comparators.missing_file and
      the .dsc/.buildinfo file handling, but replace this later by inlining
      MissingFile's special handling of deb822 to prevent leaking through
      abstraction layers in the first place.
    - Use a BuildinfoFile (etc.) regardless of whether the associated files
      such as the orig.tar.gz and the .deb are present, but don't treat them as
      actual containers. (Re: reproducible-builds/diffoscope#122)
    - Rename the "Openssl" command class to "OpenSSLPKCS7" to accommodate other
      commands with this prefix.
    - Wrap a docstring across multiple lines, drop an inline pprint import and
      comment the HTMLPrintContext class, etc.

  [ Emanuel Bronshtein ]
  * Avoid build-cache in building the released Docker image.
    (Closes: reproducible-builds/diffoscope#123)

  [ Holger Levsen ]
  * Wrap long lines in older changelog entries.
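As a quick reminder of how to exercise a release like this: diffoscope simply takes two files or packages to compare, and the HTML presenter touched by several of the changes above is selected with --html (the file names here are hypothetical):

diffoscope --html report.html package_1.0-1_amd64.deb package_1.0-2_amd64.deb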

You can find out more by visiting the project homepage.

Steve Kemp: Updated my linux-security-modules for the Linux kernel

22 May, 2020 - 12:15

Almost three years ago I wrote my first linux-security-module, inspired by a comment I read on LWN.

I did a little more learning/experimentation and actually produced a somewhat useful LSM, which allows you to restrict command-execution via the use of a user-space helper:

  • Whenever a user tries to run a command the LSM-hook receives the request.
  • Then it executes a userspace binary to decide whether to allow that or not (!)

Because the detection is done in userspace writing your own custom rules is both safe and easy. No need to touch the kernel any further!
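To illustrate, the userspace policy helper can be as trivial as a shell script. The following is a purely hypothetical sketch; the actual invocation convention (arguments and exit codes) is defined by the module itself:

#!/bin/sh
# Hypothetical helper: receives the candidate binary as $1,
# exits 0 to allow the execution and non-zero to deny it.
case "$1" in
    /usr/bin/*|/bin/*) exit 0 ;;   # allow system binaries
    *) exit 1 ;;                   # deny everything else
esac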

Yesterday I rebased all the modules so that they work against the latest stable kernel 5.4.22 in #7.

The last time I'd touched them they were built against 5.1, which was itself a big jump forwards from the 4.16.7 version I'd initially used.

Finally I updated the can-exec module to make it gated, which means you can turn it on, but not turn it off without a reboot. That was an obvious omission from the initial implementation (#11).

Anyway, updated code is available here:

I'd kinda like to write more similar things, but I lack inspiration.

Bits from Debian: Debian welcomes the 2020 GSOC interns

22 May, 2020 - 07:30

We are very excited to announce that Debian has selected nine interns to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of the projects, students, and details of the tasks to be performed.

Project: Android SDK Tools in Debian

  • Student(s): Manas Kashyap, Raman Sarda, and Samyak-jn

Deliverables of the project: Make the entire Android toolchain, Android Target Platform Framework, and SDK tools available in the Debian archives.

Project: Packaging and Quality assurance of COVID-19 relevant applications

  • Student: Nilesh

Deliverables of the project: Quality assurance including bug fixing, continuous integration tests and documentation for all Debian Med applications that are known to be helpful to fight COVID-19.

Project: BLAS/LAPACK Ecosystem Enhancement

  • Student: Mo Zhou

Deliverables of the project: Better environment, documentation, policy, and lintian checks for BLAS/LAPACK.

Project: Quality Assurance and Continuous integration for applications in life sciences and medicine

  • Student: Pranav Ballaney

Deliverables of the project: Continuous integration tests for all Debian Med applications, QA review, and bug fixes.

Project: Systemd unit translator

  • Student: K Gopal Krishna

Deliverables of the project: A systemd unit to OpenRC init script translator, and an updated OpenRC package in Debian Unstable.

Project: Architecture Cross-Grading Support in Debian

  • Student: Kevin Wu

Deliverables of the project: Evaluate, test, and develop tools to evaluate cross-grade checks for system and user configuration.

Project: Upstream/Downstream cooperation in Ruby

  • Student: utkarsh2102

Deliverables of the project: Create a guide for rubygems.org on good practices for upstream maintainers, and develop a tool that can detect problems and, if possible, fix those errors automatically. Establish good documentation, and design the tool to be extensible to other languages.

Congratulations and welcome to all the interns!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

Norbert Preining: Plasma 5.19 coming to Debian

20 May, 2020 - 09:33

The KDE Plasma desktop is soon getting an update to 5.19, and beta versions are out for testing.

In this release, we have prioritized making Plasma more consistent, correcting and unifying designs of widgets and desktop elements; worked on giving you more control over your desktop by adding configuration options to the System Settings; and improved usability, making Plasma and its components easier to use and an overall more pleasurable experience.

There are lots of new features mentioned in the release announcement; I like in particular the much more usable settings application, as well as the new info center.




I have been providing builds of KDE related packages for quite some time now; see everything posted under the KDE tag. In the last few days I have prepared Debian packages for Plasma 5.18.90 on OBS, for now only targeting Debian/sid and the amd64 architecture.

These packages require Qt 5.14, which is only available in the experimental suite, and there is no way to simply update to Qt 5.14 since all Qt related packages need to be recompiled. So as long as Qt 5.14 doesn’t hit unstable, I cannot really run these packages on my main machine, but I tried a clean Debian virtual machine installing only Plasma 5.18.90 and dependent packages, plus some more for a pleasant desktop experience. This worked out quite well: the VM runs Plasma 5.18.90.

I don’t have 3D running on the VM, so I cannot really check all the nice new effects, but I am sure on my main system they would work.

Well, bottom line, as soon as we have Qt 5.14 in Debian/unstable, we are also ready for Plasma 5.19!

Junichi Uekawa: After much investigation I decided to write a very simple page for getUserMedia.

19 May, 2020 - 15:14
After much investigation I decided to write a very simple page for getUserMedia. When I am performing music, I provide audio via line input, with echo, noise, and other concerns already resolved. The idea is that I can cast the tab to video conferencing software, and the conferencing software will hopefully not reduce noise or echo. The page is here. If the video conferencing software is reducing noise or echo from a tab, I will ask: why is it doing so?

François Marier: Displaying client IP address using Apache Server-Side Includes

19 May, 2020 - 04:50

If you use a Dynamic DNS setup to reach machines which are not behind a stable IP address, you will likely have a need to probe these machines' public IP addresses. One option is to use an insecure service like Oracle's http://checkip.dyndns.com/ which echoes back your client IP, but you can also do this on your own server if you have one.

There are multiple options to do this, like writing a CGI or PHP script, but those are fairly heavyweight if that's all you need mod_cgi or PHP for. Instead, I decided to use Apache's built-in Server-Side Includes.

Apache configuration

Start by turning on the include filter by adding the following in /etc/apache2/conf-available/ssi.conf:

AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

and making that configuration file active:

a2enconf ssi

Then, find the vhost file where you want to enable SSI and add the following options to a Location or Directory section:

<Location /ssi_files>
    Options +IncludesNOEXEC
    SSLRequireSSL
    Header set Content-Security-Policy: "default-src 'none'"
    Header set X-Content-Type-Options: "nosniff"
</Location>

before adding the necessary modules:

a2enmod headers
a2enmod include

and restarting Apache:

apache2ctl configtest && systemctl restart apache2.service

Create an shtml page

With the web server ready to process SSI instructions, the following HTML blurb can be used to display the client IP address:

<!--#echo var="REMOTE_ADDR" -->

or any other built-in variable.

Note that you don't need to write valid HTML for the variable to be substituted, and so the above one-liner is all I use on my server.
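If you would rather serve a well-formed page anyway, a minimal wrapper is enough; this is just a sketch:

<!DOCTYPE html>
<html>
  <body>
    <p>Your IP address is: <!--#echo var="REMOTE_ADDR" --></p>
  </body>
</html>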

Security concerns

The first thing to note is that the configuration section uses the IncludesNOEXEC option in order to disable arbitrary command execution via SSI. In addition, you can also make sure that the cgi module is disabled since that's a dependency of the more dangerous side of SSI:

a2dismod cgi

Of course, if you rely on this IP address to be accurate, for example because you'll be putting it in your DNS, then you should make sure that you only serve this page over HTTPS, which can be enforced via the SSLRequireSSL directive.

I included two other headers in the above vhost config (Content-Security-Policy and X-Content-Type-Options) in order to limit the damage that could be done in case a malicious file was accidentally dropped in that directory.

Finally, I suggest making sure that only the root user has write access to the directory which has server-side includes enabled:

$ ls -la /var/www/ssi_includes/
total 12
drwxr-xr-x  2 root     root     4096 May 18 15:58 .
drwxr-xr-x 16 root     root     4096 May 18 15:40 ..
-rw-r--r--  1 root     root        0 May 18 15:46 index.html
-rw-r--r--  1 root     root       32 May 18 15:58 whatsmyip.shtml

Arturo Borrero González: A better Toolforge: upgrading the Kubernetes cluster

19 May, 2020 - 00:00

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

One of the most successful and important products provided by the Wikimedia Cloud Services team at the Wikimedia Foundation is Toolforge. Toolforge is a platform that allows users and developers to run and use a variety of applications that help the Wikimedia movement and mission from the technical point of view in general. Toolforge is a hosting service commonly known in the industry as a Platform as a Service (PaaS). Toolforge is powered by two different backend engines, Kubernetes and GridEngine.

This article focuses on how we made a better Toolforge by integrating a newer version of Kubernetes and, along with it, some more modern workflows.

The starting point in this story is 2018. Yes, two years ago! We identified that we could do better with our Kubernetes deployment in Toolforge. We were using a very old version, v1.4. Using an old version of any software has more or less the same consequences everywhere: you lack security improvements and some modern key features.

Once it was clear that we wanted to upgrade our Kubernetes cluster, both the engineering work and the endless chain of challenges started.

It turns out that Kubernetes is a complex and modern technology, which adds some extra abstraction layers to add flexibility and some intelligence to a very old systems engineering need: hosting and running a variety of applications.

Our first challenge was to understand what our use case for a modern Kubernetes was. We were particularly interested in some key features:

  • The increased security and controls required for a public user-facing service, using RBAC, PodSecurityPolicies, quotas, etc.
  • Native multi-tenancy support, using namespaces
  • Advanced web routing, using the Ingress API

Soon enough we faced another Kubernetes native challenge: the documentation. For a newcomer, learning and understanding how to adapt Kubernetes to a given use case can be really challenging. We identified some baffling patterns in the docs. For example, different documentation pages would assume you were using different Kubernetes deployments (Minikube vs kubeadm vs a hosted service). We are running Kubernetes like you would on bare-metal (well, in CloudVPS virtual machines), and some documents directly referred to ours as a corner case.

During late 2018 and early 2019, we started brainstorming and prototyping. We wanted our cluster to be reproducible and easily rebuildable, and in the Technology Department at the Wikimedia Foundation, we rely on Puppet for that. One of the first things to decide was how to deploy and build the cluster while integrating with Puppet. This is not as simple as it seems because Kubernetes itself is a collection of reconciliation loops, just like Puppet is. So we had to decide what to put directly in Kubernetes and what to control and make visible through Puppet. We decided to stick with kubeadm as the deployment method, as it seems to be the more upstream-standardized tool for the task. We had to make some interesting decisions by trial and error, like where to run the required etcd servers, what the kubeadm init file would look like, how to proxy and load-balance the API on our bare-metal deployment, what network overlay to choose, etc. If you take a look at our public notes, you can get a glimpse of the number of decisions we had to make.

Our Kubernetes wasn’t going to be a generic cluster; we needed a Toolforge Kubernetes service. This means we don’t use some of the components, and also, we add some additional pieces and configurations to it. By the second half of 2019, we were working full-speed on the new Kubernetes cluster. We already had an idea of what we wanted and how to do it.

There were a couple of important topics for discussion, for example:

  • Ingress
  • Validating admission controllers
  • Security policies and quotas
  • PKI and user management

We will describe in detail the final state of those pieces in another blog post, but each of the topics required several hours of engineering time, research, tests, and meetings before reaching a point in which we were comfortable with moving forward.

By the end of 2019 and early 2020, we felt like all the pieces were in place, and we started thinking about how to migrate the users, the workloads, from the old cluster to the new one. This migration plan mostly materialized in a Wikitech page which contains concrete information for our users and the community.

The interaction with the community was a key success element. Thanks to our vibrant and involved users, we had several early adopters and beta testers that helped us identify early flaws in our designs. The feedback they provided was very valuable for us. Some folks helped solve technical problems, helped with the migration plan or even helped make some design decisions. Worth noting that some of the changes that were presented to our users were not easy to handle for them, like new quotas and usage limits. Introducing new workflows and deprecating old ones is always a risky operation.

Even though the migration procedure from the old cluster to the new one was fairly simple, there were some rough edges, and we helped our users navigate them. A common issue was a webservice not being able to run in the new cluster due to stricter quotas limiting the resources available to the tool. Another example was the new Ingress layer failing to work properly with some webservices’ particular options.

By March 2020, we no longer had anything running in the old Kubernetes cluster, and the migration was completed. We then started thinking about another step towards making a better Toolforge, which is introducing the toolforge.org domain. There is plenty of information about the change to this new domain in Wikitech News.

The community wanted a better Toolforge, and so did we, and after almost 2 years of work, we have it! All the work that was done represents the commitment of the Wikimedia Foundation to support the technical community and shows how we really want to pursue technical engagement in general in the Wikimedia movement. In a follow-up post we will present and discuss some of the technical details of the new Kubernetes cluster in more depth. Stay tuned!

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

Russell Coker: A Good Time to Upgrade PCs

18 May, 2020 - 16:09

PC hardware just keeps getting cheaper and faster. Now that so many people have been working from home the deficiencies of home PCs are becoming apparent. I’ll give Australian prices and URLs in this post, but I think that similar prices will be available everywhere that people read my blog.

From MSY (parts list PDF) [1], 120G SATA SSDs are under $50 each. 120G is more than enough for a basic workstation, so you are looking at $42 or so for fast quiet storage or $84 or so for the same with RAID-1. Being quiet is a significant luxury feature and it’s also useful if you are going to be in video conferences.

For more serious storage NVMe starts at around $100 per unit, I think that $124 for a 500G Crucial NVMe is the best low end option (paying $95 for a 250G Kingston device doesn’t seem like enough savings to be worth it). So that’s $248 for 500G of very fast RAID-1 storage. There’s a Samsung 2TB NVMe device for $349 which is good if you need more storage, it’s interesting to note that this is significantly cheaper than the Samsung 2TB SSD which costs $455. I wonder if SATA SSD devices will go away in the future, it might end up being SATA for slow/cheap spinning media and M.2 NVMe for solid state storage. The SATA SSD devices are only good for use in older systems that don’t have M.2 sockets on the motherboard.

It seems that most new motherboards have one M.2 socket on the motherboard with NVMe support, and presumably support for booting from NVMe. But dual M.2 sockets are rare, and the price difference is significantly greater than the $14 cost of a PCIe M.2 card to support NVMe. So for NVMe RAID-1 it seems that the best option is a motherboard with a single NVMe socket (starting at $89 for an AM4 socket motherboard – the current standard for AMD CPUs) and a PCIe M.2 card.

One thing to note about NVMe is that different drivers are required. On Linux this means building a new initrd before the migration (or afterwards, when booted from a recovery image), and on Windows it probably means a fresh install from special installation media with NVMe drivers.
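On Debian, regenerating the initrd is a couple of commands; a sketch assuming initramfs-tools, run before moving the installation over:

echo nvme >> /etc/initramfs-tools/modules
update-initramfs -u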

All the AM4 motherboards seem to have RADEON Vega graphics built in, which is capable of 4K resolution at a stated refresh of around 24Hz. The ones that give details about the interfaces say that they have HDMI 1.4, which means a maximum of 30Hz at 4K resolution if you have the color encoding that suits text (i.e. for use other than just video). I covered this issue in detail in my blog post about DisplayPort and 4K resolution [2]. So a basic AM4 motherboard won’t give great 4K display support, but it will probably be good for a cheap start.

$89 for motherboard, $124 for 500G NVMe, $344 for a Ryzen 5 3600 CPU (not the cheapest AM4 but in the middle range and good value for money), and $99 for 16G of RAM (DDR4 RAM is cheaper than DDR3 RAM) gives the core of a very decent system for $656 (assuming you have a working system to upgrade and peripherals to go with it).

Currently Kogan has 4K resolution monitors starting at $329 [3]. They probably won’t be the greatest monitors but my experience of a past cheap 4K monitor from Kogan was that it is quite OK. Samsung 4K monitors started at about $400 last time I could check (Kogan currently has no stock of them and doesn’t display the price), I’d pay an extra $70 for Samsung, but the Kogan branded product is probably good enough for most people. So you are looking at under $1000 for a new system with fast CPU, DDR4 RAM, NVMe storage, and a 4K monitor if you already have the case, PSU, keyboard, mouse, etc.

It seems quite likely that the 4K video hardware on a cheap AM4 motherboard won’t be that great for games and it will definitely be lacking for watching TV documentaries. Whether such deficiencies are worth spending money on a PCIe video card (starting at $50 for a low end card but costing significantly more for 3D gaming at 4K resolution) is a matter of opinion. I probably wouldn’t have spent extra for a PCIe video card if I had 4K video on the motherboard. Not only does using built in video save money it means one less fan running (less background noise) and probably less electricity use too.

My Plans

I currently have a workstation with 2*500G SATA SSDs in a RAID-1 array, 16G of RAM, and a i5-2500 CPU (just under 1/4 the speed of the Ryzen 5 3600). If I had hard drives then I would definitely buy a new system right now. But as I have SSDs that work nicely (quiet and fast enough for most things) and almost all machines I personally use have SSDs (so I can’t get a benefit from moving my current SSDs to another system) I would just get CPU, motherboard, and RAM. So the question is whether to spend $532 for more than 4* the CPU performance. At the moment I’ll wait because I’ll probably get a free system with DDR4 RAM in the near future, while it probably won’t be as fast as a Ryzen 5 3600, it should be at least twice as fast as what I currently have.


Enrico Zini: Art links

18 May, 2020 - 06:00
  • Guglielmo Achille Cavellini - Wikipedia (art, history, italy, archive.org, 2020-05-18): Guglielmo Achille Cavellini (11 September 1914 – 20 November 1990), also known as GAC, was an Italian artist and art collector. After an initial activity as a painter, in the 1940s and 1950s he became one of the major collectors of contemporary Italian abstract art, developing a deep relationship of patronage and friendship with the artists. This experience had its pinnacle in the exhibition Modern painters of the Cavellini collection at the National Gallery of Modern Art in Rome in 1957. In the 1960s Cavellini resumed his activity as an artist, with an ample production spanning from Neo-Dada to performance art to mail art, of which he became one of the prime exponents with the Exhibitions at Home and the Round Trip works. In 1971 he invented autostoricizzazione (self-historicization), upon which he acted to create a deliberate popular history surrounding his existence. He also authored the books Abstract Art (1959), Man painter (1960), Diary of Guglielmo Achille Cavellini (1975), Encounters/Clashes in the Jungle of Art (1977) and Life of a Genius (1989).
  • Gustave Doré - Wikipedia (art, comics, history, archive.org, 2020-05-18): Paul Gustave Louis Christophe Doré (/dɔːˈreɪ/; French: [ɡys.tav dɔ.ʁe]; 6 January 1832 – 23 January 1883) was a French artist, printmaker, illustrator, comics artist, caricaturist, and sculptor who worked primarily with wood-engraving.
  • Enrico Baj: Generale (art, history, italy, archive.org, 2020-05-18): "Enrico Baj was very good at mocking power using his imagination. With that simplicity that only the great possess, he gathered up things like buttons, pieces of fabric, cords, assorted trimmings, and stuck them onto the canvas together with his painting: it almost looks like he is just playing, but by playing and playing, quietly, he manages to turn the world upside down." (translated from the Roman-dialect Italian)
  • Tamara de Lempicka: Autoritratto sulla Bugatti verde (art, history, italy, archive.org, 2020-05-18)
