Planet Debian


Matthew Garrett: Cloud desktops aren't as good as you'd think

6 October, 2022 - 15:31
Fast laptops are expensive, cheap laptops are slow. But even a fast laptop is slower than a decent workstation, and if your developers want a local build environment they're probably going to want a decent workstation. They'll want a fast (and expensive) laptop as well, though, because they're not going to carry their workstation home with them and obviously you expect them to be able to work from home. And in two or three years they'll probably want a new laptop and a new workstation, and that's even more money. Not to mention the risks associated with them doing development work on their laptop and then drunkenly leaving it in a bar or having it stolen or the contents being copied off it while they're passing through immigration at an airport. Surely there's a better way?

This is the thinking that leads to "Let's give developers a Chromebook and a VM running in the cloud". And it's an appealing option! You spend far less on the laptop, and the VM is probably cheaper than the workstation - you can shut it down when it's idle, you can upgrade it to have more CPUs and RAM as necessary, and you get to impose all sorts of additional neat security policies because you have full control over the network. You can run a full desktop environment on the VM, stream it to a cheap laptop, and get the fast workstation experience on something that weighs about a kilogram. Your developers get the benefit of a fast machine wherever they are, and everyone's happy.

But having worked at more than one company that's tried this approach, my experience is that very few people end up happy. I'm going to give a few reasons here, but I can't guarantee that they cover everything - and, to be clear, many (possibly most) of the reasons I'm going to describe aren't impossible to fix, they're simply not priorities. I'm also going to restrict this discussion to the case of "We run a full graphical environment on the VM, and stream that to the laptop" - an approach that only offers SSH access is much more manageable, but also significantly more restricted in certain ways. With those details mentioned, let's begin.

The first thing to note is that the overall experience is heavily tied to the protocol you use for the remote display. Chrome Remote Desktop is extremely appealing from a simplicity perspective, but is also lacking some extremely key features (eg, letting you use multiple displays on the local system), so from a developer perspective it's suboptimal. If you read the rest of this post and want to try this anyway, spend some time working with your users to find out what their requirements are and figure out which technology best suits them.

Second, let's talk about GPUs. Trying to run a modern desktop environment without any GPU acceleration is going to be a miserable experience. Sure, throwing enough CPU at the problem will get you past the worst of this, but you're still going to end up with users who need to do 3D visualisation, or who are doing VR development, or who expect WebGL to work without burning up every single one of the CPU cores you so graciously allocated to their VM. Cloud providers will happily give you GPU instances, but that's going to cost more and you're going to need to re-run your numbers to verify that this is still a financial win. "But most of my users don't need that!" you might say, and we'll get to that later on.

Next! Video quality! This seems like a trivial point, but if you're giving your users a VM as their primary interface, then they're going to do things like try to use Youtube inside it because there's a conference presentation that's relevant to their interests. The obvious solution here is "Do your video streaming in a browser on the local system, not on the VM" but from personal experience that's a super awkward pain point! If I click on a link inside the VM it's going to open a browser there, and now I have a browser in the VM and a local browser and which of them contains the tab I'm looking for WHO CAN SAY. So your users are going to watch stuff inside their VM, and re-compressing decompressed video is going to look like shit unless you're throwing a huge amount of bandwidth at the problem. And this is ignoring the additional irritation of your browser being unreadable while you're rapidly scrolling through pages, or terminal output from build processes being a muddy blur of artifacts, or the corner case of "I work for Youtube and I need to be able to examine 4K streams to determine whether changes have resulted in a degraded experience" which is a very real job and one that becomes impossible when you pass their lovingly crafted optimisations through whatever codec your remote desktop protocol has decided to pick based on some random guesses about the local network, and look everyone is going to have a bad time.

The browser experience. As mentioned before, you'll have local browsers and remote browsers. Do they have the same security policy? Who knows! Are all the third party services you depend on going to be ok with the same user being logged in from two different IPs simultaneously because they lost track of which browser they had an open session in? Who knows! Are your users going to become frustrated? Who knows oh wait no I know the answer to this one, it's "yes".

Accessibility! More of your users than you expect rely on various accessibility interfaces, be those mechanisms for increasing contrast, screen magnifiers, text-to-speech, speech-to-text, alternative input mechanisms and so on. And you probably don't know this, but most of these mechanisms involve having accessibility software be able to introspect the UI of applications in order to provide appropriate input or expose available options and the like. So, I'm running a local text-to-speech agent. How does it know what's happening in the remote VM? It doesn't, because it's just getting an a/v stream, so you need to run another accessibility stack inside the remote VM, and the two of them are unaware of each other's existence and this works just as badly as you'd think. Alternative input mechanism? Good fucking luck with that, you're at best going to fall back to "Send synthesized keyboard inputs" and that is nowhere near as good as "Set the contents of this text box to this unicode string" and yeah I used to work on accessibility software maybe you can tell. And how is the VM going to send data to a braille output device? Anyway, good luck with the lawsuits over arbitrarily making life harder for a bunch of members of a protected class.

One of the benefits here is supposed to be a security improvement, so let's talk about WebAuthn. I'm a big fan of WebAuthn, given that it's a multi-factor authentication mechanism that actually does a good job of protecting against phishing, but if my users are running stuff inside a VM, how do I use it? If you work at Google there's a solution, but that does mean limiting yourself to Chrome Remote Desktop (there are extremely good reasons why this isn't generally available). Microsoft have apparently just specced a mechanism for doing this over RDP, but otherwise you're left doing stuff like forwarding USB over IP, and that means that your USB WebAuthn token no longer works locally. It also doesn't work for any other type of WebAuthn token, such as a bluetooth device, or an Apple TouchID sensor, or any of the Windows Hello support. If you're planning on moving to WebAuthn and also planning on moving to remote VM desktops, you're going to have a bad time.

That's the stuff that comes to mind immediately. And sure, maybe each of these issues is irrelevant to most of your users. But the actual question you need to ask is what percentage of your users will hit one or more of these, because if that's more than an insignificant percentage you'll still be staffing all the teams that dealt with hardware, handling local OS installs, worrying about lost or stolen devices, and the glorious future of just being able to stop worrying about this is going to be gone and the financial benefits you promised would appear are probably not going to work out in the same way.

A lot of this falls back to the usual story of corporate IT - understand the needs of your users and whether what you're proposing actually meets them. Almost everything I've described here is a corner case, but if your company is larger than about 20 people there's a high probability that at least one person is going to fall into at least one of these corner cases. You're going to need to spend a lot of time understanding your user population to have a real understanding of what the actual costs are here, and I haven't seen anyone do that work before trying to launch this and (inevitably) going back to just giving people actual computers.

There are alternatives! Modern IDEs tend to support SSHing out to remote hosts to perform builds there, so as long as you're ok with source code being visible on laptops you can at least shift the "I need a workstation with a bunch of CPU" problem out to the cloud. The laptops are going to need to be more expensive because they're also going to need to run more software locally, but it wouldn't surprise me if this ends up being cheaper than the full-on cloud desktop experience in most cases.

Overall, the most important thing to take into account here is that your users almost certainly have more use cases than you expect, and this sort of change is going to have direct impact on the workflow of every single one of your users. Make sure you know how much that's going to be, and take that into consideration when suggesting it'll save you money.


Jonathan Dowland: git worktrees

6 October, 2022 - 14:04

I work on OpenJDK backports: taking a patch that was committed to a current version of JDK, and adapting it to an older one. There are four main OpenJDK versions that I am concerned with: the current version ("jdk"), 8, 11 and 17. These are all maintained in separate Git(Hub) repositories.

It's very useful to have access to the other JDKs when working on any particular version. For example, to backport a patch from the latest version to 17, where the delta is not too big, a lot of the time you can cherry-pick the patch unmodified. To do git cherry-pick <some-commit> in a git repository tracking JDK17, where <some-commit> is in "jdk", I need the "jdk" repository configured as a remote for my local jdk17 repository.
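
A throwaway sketch of that workflow, using two tiny local repositories as stand-ins for the real OpenJDK clones (all names, paths and the commit message here are made up):

```shell
set -e
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.org
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.org
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the upstream "jdk" repository
git init -q jdk
git -C jdk commit -q --allow-empty -m "base"

# Stand-in for my local jdk17 repository (cloned before the fix exists)
git clone -q jdk jdk17

# A patch lands upstream...
echo 'the fix' > jdk/fix.txt
git -C jdk add fix.txt
git -C jdk commit -q -m "JDK-1234567: the fix"

# With real, separately hosted repositories the first step would be:
#   git -C jdk17 remote add jdk <url>; git -C jdk17 fetch jdk
# Here the clone's "origin" remote already points at the stand-in:
git -C jdk17 fetch -q origin
git -C jdk17 cherry-pick "$(git -C jdk rev-parse HEAD)"
```

The point is only that once the other repository is configured as a remote and fetched, its commits are ordinary objects that `git cherry-pick` can apply.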

Maintaining completely separate local git repositories for all four JDK versions, with each of them having a subset of the others added as remotes, adds up to a lot of duplicated data on local storage.

For a little while I was exploring using shared clones: a local clone of another local git repository which shares some local metadata. This saves on some disc space, but it does not share the configuration for remotes: so I still have to add any other JDK versions I want as remotes in each shared clone (even if the underlying objects already exist in the shared metadata).

Then I discovered git worktree. The git repositories that I've used up until now have had exactly zero (for a bare clone) or one worktree: in other words, the check-out, the actual source code files. Git does actually support having more than one worktree, which can be achieved like so:

git worktree add --checkout \
    -b jdk8u-master \
    ../jdk.worktree.jdk8u \
    jdk8u-dev/master

The result (in this example) is a new checkout, in this case of a new local branch named jdk8u-master at the sibling directory path jdk.worktree.jdk8u, tracking the remote branch jdk8u-dev/master. Within that checkout, there is a file .git which contains a pointer to (indirectly) the main local repository path:

gitdir: /home/jon/rh/git/jdk/.git/worktrees/jdk.worktree.jdk8u-dev

The directory itself behaves exactly like the real one, in that I can see all the configured remotes, and other checked out branches in other worktree paths:

$ git branch
  JDK-8214520-11u                               + 8284977-jdk11u-dev
  JDK-8268362-11u                               + master
  8231111-jdk11u-merged                         * 8237479-jdk8u-dev

Above, you can see that the current worktree is the branch 8237479-jdk8u-dev, marked (as usual) by the prefix *, and two other branches are checked out in other worktrees, marked by the prefix +.

I only need to configure one local git repository with all of the remotes I am concerned about; I can inspect, compare, cherry-pick, check out, etc. any objects from any of those branches from any of my work trees; there's only one .git directory with all the configuration and storage for the git blobs across all the versions.
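
A minimal runnable sketch of the above, on a throwaway repository (the remote URL is a placeholder):

```shell
set -e
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.org
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.org
tmp=$(mktemp -d); cd "$tmp"

git init -q jdk && cd jdk
git commit -q --allow-empty -m "base"
git remote add jdk8u-dev https://example.org/jdk8u-dev.git  # placeholder URL

# Add a second worktree, on a new branch, at a sibling path
git worktree add --checkout -b jdk8u-master ../jdk.worktree.jdk8u HEAD

# The new checkout shares all configuration with the main repository:
git -C ../jdk.worktree.jdk8u remote   # prints: jdk8u-dev
head -c 7 ../jdk.worktree.jdk8u/.git  # prints: gitdir:
git worktree list
```

Because the worktree's `.git` is just a pointer back into the main repository, remotes, branches and objects are configured and stored exactly once.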

Dirk Eddelbuettel: RVowpalWabbit 0.0.17: Maintenance

6 October, 2022 - 05:13

Almost to the week one year since the last maintenance release, we can announce another maintenance release, now at version 0.0.17, of the RVowpalWabbit package. The CRAN maintainers kindly and politely pointed out that I was (cough, cough) apparently the last maintainer who had packages that set StagedInstall: no. Guilty as charged.

RVowpalWabbit is one of the two packages; the other one will hopefully follow ‘shortly’. And while I long suspected linking aspects drove this (this is an old package; newer R packaging of the awesome VowpalWabbit is in rvw), I was plain wrong here. The cause was an absolute path to an included dataset, computed in an example, which then gets serialized. As Tomas Kalibera suggested, we can replace the constant with a function and all is well. So here is 0.0.17.

As noted before, there is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into VowpalWabbit from R go check it out.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: rewrite rule representation

5 October, 2022 - 03:29

I've begun writing up my phd and, not for the first time, I'm pondering issues of how best to represent things. Specifically, rewrite rules.

Here's one way of representing an example rewrite rule:

streamFilter g . streamFilter f = streamFilter (g . f)

This is a fairly succinct representation. It's sort-of Haskell, but not quite. It's an equation. The left-hand side is a pattern: it's intended to describe not one expression but a family of expressions that match. The lower case individual letters g and f are free variables: labelled placeholders within the pattern that can be referred to on the right hand side. I haven't stipulated what defines a free variable and what is a more concrete part of the pattern. It's kind-of Haskell, and I've used the well-known operator . to represent the two stream operators (streamFilters) being connected together. (In practice, when we get to the system where rules are actually applied, the connecting operator is not going to be . at all, so this is also an approximation).

One thing I don't like about . here, despite its commonness, is having to read right-to-left. I adopted the habit of using the lesser-known >>> in a lot of my work (which is defined as (>>>) = flip (.)), which reads left-to-right. And then I have the reverse problem: people aren't familiar with >>>, and, just like ., it's a stand-in anyway.

Towards the beginning of my PhD, I spent some time inventing rewrite rules to operate on pairs of operators taken from a defined, known set. I began representing the rules much as in the example above. Later on, I wanted to encode them as real Haskell, in order to check them more thoroughly. The above rule, I first encoded like this

filterFilterPre     = streamFilter g . streamFilter f
filterFilterPost    = streamFilter (g . f)
prop_filterFilter s = filterFilterPre s == filterFilterPost s

This is real code: the operators were already implemented in StrIoT, and the final expression defined a property for QuickCheck. However, it's still not quite a rewrite rule. The left-hand side, which should be a pattern, is really a concrete expression. The names f and g are masquerading as free variables but are really concretely defined in a preamble I wrote to support running QuickCheck against these things: usually simple stuff like g = odd, etc.

Eventually, I had to figure out how to really implement rewrite rules in StrIoT. There were a few approaches I could take. One would be to express the rules in some abstract form like the first example (but with properly defined semantics) and write a parser for them: I really wanted to avoid doing that.

As a happy accident, the solution I landed on was enabled by the semantics of algebraic-graphs, a Graph library we adopted to support representing a stream-processing topology. I wrote more about that in data-types for representing stream-processing programs.

I was able to implement rewrite rules as ordinary Haskell functions. The left-hand side of the rewrite rule maps to the left-hand side (pattern) part of a function definition. The function body implements the right-hand side. The system that applies the rules attempts to apply each rewrite rule to every sub-graph of a stream-processing program. The rewrite functions therefore need to signal whether or not they're applicable at runtime. For that reason, the return type is wrapped in Maybe, and we provide a catch-all pattern for every rule which simply returns Nothing. The right-hand side implementation can be pretty thorny. On the left-hand side, the stream operator connector we've finally ended up with is Connect from algebraic-graphs.

Here's filter fuse, taken from the full ruleset:

filterFuse :: RewriteRule
filterFuse (Connect (Vertex a@(StreamVertex i (Filter sel1) (p:_) ty _ s1))
                    (Vertex b@(StreamVertex _ (Filter sel2) (q:_) _ _ s2))) =
    let c = a { operator    = Filter (sel1 * sel2)
              , parameters  = [[| (\p q x -> p x && q x) $(p) $(q) |]]
              , serviceTime = sumTimes s1 sel1 s2
              }
    in Just (removeEdge c c . mergeVertices (`elem` [a,b]) c)

filterFuse _ = Nothing

That's perhaps the simplest rule in the set. (See e.g. hoistOp for one of the worst!)

The question that remains to me, is, which representation, or representations, to use in the thesis? I'm currently planning to skip the abstract example I started with and start with the concrete Haskell encoding using QuickCheck. I'm not sure if it seems weird to have two completely separate implementations of the rules, but the simpler QuickCheck-checked rules are much closer to the "core essence" of the rules than the final implementation in StrIoT. And the derivation of the rules comes earlier in the timeline than the design work that led to the final StrIoT implementation. The middle-option is still compromised, however, by having concrete expressions pretending to be patterns. So I'm not sure.

Dima Kogan: mrcal 2.2 released

4 October, 2022 - 15:05

Today I released mrcal 2.2 (the release notes are available here). This release contains lots of medium-important internal improvements, and is a result of

  • Heavy dogfooding. I use these tools a lot every day, and many things are nicer, easier and work better in 2.2 than in 2.1
  • Not-yet-completed cool new functionality. Some of the required internal improvements for the big new features are being released here.

The biggest single new feature in this release is the interactive graphical tool for examining dense stereo results: accessed via mrcal-stereo --viz stereo.

The next pressing thing is improved documentation. The tour of mrcal is still a good overview of some of the functionality that makes mrcal unique and far better than traditional calibration tools. But it doesn't do a good job of demonstrating how you would actually use mrcal to diagnose and handle common calibration issues. I need to gather some releasable representative data, and write docs around that.

Then I'm going to start finishing the big new features in the roadmap (these are all already functional, but need polish):

  • Triangulation functions in the optimization loop for efficient structure-from-motion
  • Support for non-central projection to remove a significant source of bias present in very wide lenses
  • Improved projection uncertainty quantification to resolve accuracy and performance issues in the current projection uncertainty method

Russ Allbery: Review: The Dragon Never Sleeps

4 October, 2022 - 10:01

Review: The Dragon Never Sleeps, by Glen Cook

Publisher: Night Shade Books
Copyright: 1988
Printing: 2008
ISBN: 1-59780-099-6
Format: MOBI
Pages: 449

Canon Space is run, in a way, by the noble mercantile houses, who spread their cities, colonies, and mines through the mysterious Web that allows faster-than-light interstellar travel. The true rulers of Canon Space, though, are the Guardships: enormous, undefeatable starships run by crews who have become effectively immortal by repeated uploading and reincarnation. Or, in the case of the Deified, without reincarnation, observing, ordering, advising, and meddling from an entirely virtual existence. The Guardships have enforced the status quo for four thousand years.

House Tregesser thinks they have the means to break the stranglehold of the Guardships. They have contact with Outsiders from beyond Canon Space who can provide advanced technology. They have their own cloning technology, which they use to create backup copies of their elites. And they have Lupo Provik, a quietly brilliant schemer who has devoted his life to destroying Guardships.

This book was so bad. A more sensible person than I would have given up after the first hundred pages, but I was too stubborn. The stubbornness did not pay off.

Sometimes I pick up an older SFF novel and I'm reminded of how much the quality bar in the field has been raised over the past twenty years. It's not entirely fair to treat The Dragon Never Sleeps as typical of 1980s science fiction: Cook is far better known for his Black Company military fantasy series, this is one of his minor novels, and it's only been intermittently in print. But I'm dubious this would have been published at all today.

First, the writing is awful. It's choppy, cliched, awkward, and has no rhythm or sense of beauty. Here's a nearly random paragraph near the beginning of the book as a sample:

He hurled thunders and lightnings with renewed fury. The whole damned universe was out to frustrate him. XII Fulminata! What the hell? Was some malign force ranged against him?

That was his most secret fear. That somehow someone or something was using him the way he used so many others.

(Yes, this is one of the main characters throwing a temper tantrum with a special effects machine.)

In a book of 450 pages, there are 151 chapters, and most chapters switch viewpoint characters. Most of them also end with a question or some vaguely foreboding sentence to try to build tension, and while I'm willing to admit that sometimes works when used sparingly, every three pages is not sparingly.

This book is also weirdly empty of description for its size. We get a few set pieces, a few battles, and a sentence or two of physical description of most characters when they're first introduced, but it's astonishing how little of a mental image I had of this universe after reading the whole book. Cook probably describes a Guardship at some point in this book, but if he does, it completely failed to stick in my memory. There are aliens that everyone recognizes as aliens, so presumably they look different than humans, but for most of them I have no idea how. Very belatedly we're told one important species (which never gets a name) has a distinctive smell. That's about it.

Instead, nearly the whole book is dialogue and scheming. It's clear that Cook is intending to write a story of schemes and counter-schemes and jousting between brilliant characters. This can work if the dialogue is sufficiently sharp and snappy to carry the story. It does not.

"What mischief have you been up to, Kez Maefele?"

"Staying alive in a hostile universe."

"You've had more than your share of luck."

"Perhaps luck had nothing to do with it, WarAvocat. Till now."

"Luck has run out. The Ku Question has run its course. The symbol is about to receive its final blow."

There are hundreds of pages of this sort of thing.

The setting is at least intriguing, if not stunningly original. There are immortal warships oppressing human space, mysterious Outsiders, great house politics, and an essentially immortal alien warrior who ends up carrying most of the story. That's material for a good space opera if the reader slowly learns the shape of the universe, its history, and its landmarks and political factions. Or the author can decline to explain any of that. I suppose that's also a choice.

Here are some things that you may have been curious about after reading my summary, and which I'm still curious about after having finished the book: What laws do the Guardships impose and what's the philosophy behind those laws? How does the economic system work? Who built the Guardships originally, and how? How do the humans outside of Canon Space live? Who are the Ku? Why did they start fighting the humans? How many other aliens are there? What do they think of this? How does the Canon government work? How have the Guardships managed to maintain technological superiority for four thousand years?

Even where the reader gets a partial explanation, such as what the Web is and how it was built, it's an unimportant aside that's largely devoid of a sense of wonder. The one piece of world-building that this book is interested in is the individual Guardships and the different ways in which they've handled millennia of self-contained patrol, and even there we only get to see a few of them.

There is a plot with appropriately epic scope, but even that is undermined by the odd pacing. Five, ten, or fifty years sometimes goes by in a sentence. A war starts, with apparently enormous implications for Canon Space, and then we learn that it continues for years without warranting narrative comment. This is done without transitions and without signposts for the reader; it's just another sentence in the narration, mixed in with the rhetorical questions and clumsy foreshadowing.

I would like to tell you that at least the book has a satisfying ending that resolves the plot conflict that it finally reveals to the reader, but I had a hard time understanding why the ending even mattered. The plot was so difficult to follow that I'm sure I missed something, but it's not difficult to follow in the fun way that would make me want to re-read it. It's difficult to follow because Cook doesn't seem able to explain the plot in his head to the reader in any coherent form. I think the status quo was slightly disrupted? Maybe? Also, I no longer care.

Oh, and there's a gene-engineered sex slave in this book, who various male characters are very protective and possessive of, who never develops much of a personality, and who has no noticeable impact on the plot despite being a major character. Yay.

This was one of the worst books I've read in a long time. In retrospect, it was an awful place to start with Glen Cook. Hopefully his better-known works are also better-written, but I can't say I feel that inspired to find out.

Rating: 2 out of 10

Mike Hommey: Announcing git-cinnabar 0.6.0rc1

4 October, 2022 - 05:26

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.10?
  • Full rewrite of git-cinnabar in Rust.
  • Push performance is between twice and 10 times faster than 0.5.x, depending on scenarios.
  • Based on git 2.38.0.
  • git cinnabar fetch now accepts a --tags flag to fetch tags.
  • git cinnabar bundle now accepts a -t flag to give a specific bundlespec.
  • git cinnabar rollback now accepts a --candidates flag to list the metadata sha1 that can be used as target of the rollback.
  • git cinnabar rollback now also accepts a --force flag to allow any commit sha1 as metadata.
  • git cinnabar now has a self-update subcommand that upgrades it when a new version is available. The subcommand is only available when building with the self-update feature (enabled on prebuilt versions of git-cinnabar).

Shirish Agarwal: Death Certificate, Legal Heir, Succession Certificate, and Indian Bureaucracy.

4 October, 2022 - 04:00
Death Certificate

After waiting for almost two, two and a half months, I finally got mum’s death certificate last week. A part of me was saddened, as it felt like I was putting nails in her coffin or whatever it is (even though I’m an Agarwal); I just felt sad and awful. I was told: just get a death certificate and your problems will be over. Some people wanted me to give some amount under the table or something, which I didn’t want to be a party to, and because of that it perhaps took a month, month and a half more, as I came to know later that it had been issued almost a month and a half back. The inflation over the last 8 years of the present Govt. has made the corrupt even more corrupt, all the while projecting and telling others that the others are corrupt. There have also been a few politicians who were caught red-handed, but then pieces of evidence & witnesses vanish overnight. I don’t really wanna go in that direction as it would make for unpleasant reading with no solutions at all unless the present Central Govt. goes out.

Intestate and Will

I came to know the word intestate. This was a new word/term for me. A lookup told me that intestate means a person dying without making a will. That legal term comes from U.K. law. I had read a long, long time back that almost all our laws were made or taken from U.K. law. IIRC, massive sections of the CrPC even today have that colonial legacy. In its manifesto shared with the public at the time of the election, the BJP had said that they would remove a whole swathe of laws that don’t make sense in today’s environment. But when hard and good questions were asked, they trimmed a few, modified a few, and left most of them as they were. Sadly, most of the laws that they did modify increased Government control over people instead of decreasing it. It’s been 8 years and yet we still don’t have a privacy law. They had made something, but it was too vague and would have invited suits from day 1, so it is pretty much on the backburner :(. A good insight into what I mean is an article in the Hindu I read a few days back. Once you read that article, I am sure you will have as many questions as I have, but sadly no answers.

Law is not supposed to be partisan, but today it is. I could cite examples from both the U.S. and U.K. courts about progressive judgments or the way they go about it, but then again our people think they know better. (But this again does not help me, apart from setting some kind of background of where we are.) I have on this blog also shared how Africans have been setting new records in transparency, and they did it almost 5 years back. For those new to the blog: African countries have now been broadcasting proceedings of their SC for almost 5 years. I noticed it when privacy law was being debated and a few handles that I follow on Twitter and elsewhere had gone and given their submissions in their SC. It was fascinating to not only hear but also read about the case from multiple viewpoints.

And just to remind people, I am sharing all of this from Pune, Maharashtra, the second-biggest city in the state, with something like six million people and probably a million or more transitory students and casual laborers; but then again that doesn’t help me other than providing some context to what I’m sharing.

Now, a couple of years back or more, I had asked mum to make a will. If she wanted to bequeath something to somebody she could do that; I had told her as much. There was some article in the Indian Express or elsewhere that told people what they should be doing, especially if they had crossed the barrier of age 60. For reasons best known to her, she refused, and now I have to figure out the right way to go about doing things.

Twitter Experiences

Now, before Twitter, a few people had been telling me I needed a legal heir certificate, others a succession certificate, and some claimed a death certificate is enough. I asked the same question on Twitter hoping for at most 5-10 responses, but was overwhelmed by the response: I got something like 50-60 odd replies. Probably one of the better responses was given by Dr. Paras Jain, who shared the following –

“Answer is qualified Movable assets nothing required Bank LIC flat with society nomination done nothing required except death certificate. However, each will insist on a notarized indemnity bond if the nomination is not done. Depends on whims & fancy of each mind legal heir certificate,+ all” – Dr. Paras Jain. (I cleared up the grammar a little; otherwise the views are Dr. Paras’s.)

What was interesting for me is that most people didn’t just give me advice; many of them also shared their own experiences or what they went through. I was surprised to learn, for example, that a succession certificate can take up to 6 months or more. Part of me isn’t surprised, as I do know we have a huge pendency of cases in the High Courts and District Courts, going all the way up to the Supreme Court. India Today shared a brief article on the same and similar issues. Such delays have become far too common now.

Supertech Demolition and Others

Over the last couple of months, a number of high-profile demolitions have taken place, and in most cases the loss has been the homebuyers’. See, for example, the case of Supertech; a much more detailed article was penned by Moneylife. A couple of months back there were a few Muslims whose homes were demolished, and that was being celebrated; but just 2-3 days back, when the flat of Shrikant Tyagi, a BJP politician, was partly demolished, there was a lot of hue and cry. Although we should be discussing legality rather than religion, somehow the idea has taken hold that there are two kinds of laws: one for the majority, the other for the minority. This has been going on for the last 8 odd years, hence you see different reactions to the same kind of incident instead of similar reactions. In none of the cases are strictures passed, either against the Municipality or against the lenders.

The most obvious question: let’s say, for argument’s sake, I was a homeowner in Supertech and bought a flat for 10 lakhs in 2012. According to the courts, today I am supposed to get 22 lakhs, i.e. 12% simple interest for 10 years. Even if the builder is in a position to honor the order, the homeowner will not get a house in the same area, as the circle rate would probably have at least quadrupled by then; the circle rate alone might be the above amount. The reason is very simple: a builder buys land on the cheap when there is no development around. The whole idea is that once development happens, with other builders also building flats, the whole area gets developed and they are able to sell the flats at a premium. Even circle rates get affected, as the builder pays under the table and asks the officers of the municipal authority to hike the circle rate every few months. Again, I won’t go into much depth, as the whole thing is rotten to the core. There are many such projects. I have shared Krishnaraj Rao’s videos on this blog a few times; I am sure there are a few good men like him. In the end, sadly, this is where we are.
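That 22-lakh figure is nothing more than plain simple interest on the purchase price:

```latex
A = P\,(1 + r t) = 10\ \text{lakh} \times (1 + 0.12 \times 10) = 22\ \text{lakh}
```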

P.S. – I haven’t shared any book reviews this week as this post itself has become too long. I may blog about a couple of books in the next couple of days. Till later.

Paul Wise: FLOSS Activities September 2022

3 October, 2022 - 19:47

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Administration
  • Debian QA services: deploy changes
  • Debian wiki: approve accounts
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

All work was done on a volunteer basis.

Thorsten Alteholz: My Debian Activities in September 2022

3 October, 2022 - 19:01
FTP master

This month I accepted 226 and rejected 33 packages. The overall number of packages that got accepted was 232.

All in all I addressed about 60 RM-bugs and either simply removed the package or added a moreinfo tag. In total I spent 5 hours on this task.

Anyway, I have to repeat my comment from last month: please have a look at the removal page and check whether the created dak command is really what you wanted. It would also help if you check the reverse dependencies and write a comment whether they are important or can be ignored or also file a new bug for them. Each removal must have one bug!

Debian LTS

This was my ninety-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3111-1] mod-wsgi security update for one CVE
  • [#1020596] bullseye-pu: mod-wsgi/4.7.1-3+deb11u1
  • [DLA 3119-1] expat security update for one CVE
  • [DLA 3125-1] libvncserver security update for two CVEs
  • [DLA 3126-1] libsndfile security update for one CVE
  • [DLA 3127-1] libhttp-daemon-perl security update for one CVE
  • [DLA 3130-1] tinyxml security update for one CVE

I also started to work on frr.

Last but not least I did some days of frontdesk duties and took care of issues on security-master.

Debian ELTS

This month was the fiftieth ELTS month.

During my allocated time I uploaded:

  • [ELA-685-1] ntfs-3g security update of Stretch for eight CVEs
  • [ELA-686-1] expat security update of Jessie and Stretch for one CVE
  • [ELA-690-1] libvncserver security update of Stretch for one CVE

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

Debian IoT

This month I uploaded new upstream versions or improved packaging of:

Debian Mobcom

This month I started another upload session for new upstream versions:

Other stuff

This month I uploaded new packages:

Russ Allbery: Review: Jingo

3 October, 2022 - 10:27

Review: Jingo, by Terry Pratchett

Series: Discworld #21
Publisher: Harper
Copyright: 1997
Printing: May 2014
ISBN: 0-06-228020-1
Format: Mass market
Pages: 455

This is the 21st Discworld novel and relies on the previous Watch novels for characterization and cast development. I would not start here.

In the middle of the Circle Sea, the body of water between Ankh-Morpork and the desert empire of Klatch, a territorial squabble between one fishing family from Ankh-Morpork and one from Klatch is interrupted by a weathercock rising dramatically from the sea. When the weathercock is shortly followed by the city to which it is attached and the island on which that city is resting, it's justification for more than a fishing squabble. It's a good reason for a war over new territory.

The start of hostilities is an assassination attempt on a prince of Klatch. Vimes and the Watch start investigating, but politics outraces police work. Wars are a matter for the nobility and their armies, not for normal civilian leadership. Lord Vetinari resigns, leaving the city under the command of Lord Rust, who is eager for a glorious military victory against their long-term rivals. The Klatchians seem equally eager to oblige.

One of the useful properties of a long series is that you build up a cast of characters you can throw at a plot, and if you can assume the reader has read enough of the previous books, you don't have to spend a lot of time on establishing characterization and can get straight to the story. Pratchett uses that here. You could read this cold, I suppose, because most of the Watch are obvious enough types that the bits of characterization they get are enough, but it works best with the nuance and layers of the previous books. Of course Colon is the most susceptible to the jingoism that prompts the book's title, and of course Angua's abilities make her the best detective. The familiar characters let Pratchett dive right in to the political machinations.

Everyone plays to type here: Vetinari is deftly maneuvering everyone into place to make the situation work out the way he wants, Vimes is stubborn and ethical and needs Vetinari to push him in the right direction, and Carrot is sensible and effortlessly charismatic. Colon and Nobby are, as usual, comic relief of a sort, spending much of the book with Vetinari while not understanding what he's up to. But Nobby gets an interesting bit of characterization in the form of an extended turn as a spy that starts as cross-dressing and becomes an understated sort of gender exploration hidden behind humor that's less mocking than one might expect. Pratchett has been slowly playing more with gender in this series, and while it's simple and a bit deemphasized, I like it.

I think the best part of this book, thematically, is the contrast between Carrot's and Vimes's reactions to the war. Carrot is a paragon of a certain type of ethics in Watch novels, but a war is one of the things that plays to his weaknesses. Carrot follows rules, and wars have rules of a type. You can potentially draw Carrot into them. But Vimes, despite being someone who enforces rules professionally, is deeply suspicious of them, which makes him harder to fool. Pratchett uses one of the Klatchian characters to hold a mirror up to Vimes in ways that are minor spoilers, but that I quite liked.

The argument of jingoism, made by both Lord Rust and by the Klatchian prince, is that wars are something special, outside the normal rules of justice. Vimes absolutely refuses this position. As someone from the US, his reaction to Lord Rust's attempted militarization of the Watch was one of the best moments of the book.

Not a muscle moved on Rust's face. There was a clink as Vimes's badge was set neatly on the table.

"I don't have to take this," Vimes said calmly.

"Oh, so you'd rather be a civilian, would you?"

"A watchman is a civilian, you inbred streak of pus!"

Vimes is also willing to think of a war as a possible crime, which may not be as effective as Vetinari's tricky scheming but which is very emotionally satisfying.

As with most Pratchett books, the moral underpinnings of the story aren't that elaborate: people are people despite cultural differences, wars are bad, and people are too ready to believe the worst of their neighbors. The story arc is not going to provide great insights into human character that the reader did not already have. But watching Vimes stubbornly attempt to do the right thing regardless of the rule book is wholly satisfying, and watching Vetinari at work is equally, if differently, enjoyable.

Not the best Discworld novel, but one of the better ones.

Followed by The Last Continent in publication order, and by The Fifth Elephant thematically.

Rating: 8 out of 10

Marco d'Itri: Debian bookworm on a Lenovo T14s Gen3 AMD

3 October, 2022 - 08:12

I recently upgraded my laptop to a Lenovo T14s Gen3 AMD and I am happy to report that it works just fine with Debian/unstable using a 5.19 kernel.

The only issue is that some firmware files are still missing and I had to install them manually.

Updates are needed for the firmware-amd-graphics package (#1019847) for the Radeon 680M GPU (AMD Rembrandt) and for the firmware-atheros package (#1021157) for the Qualcomm NFA725A Wi-Fi card (which is actually reported as a NFA765).

s2idle (AKA "modern suspend") works too.

For improved energy efficiency it is recommended to switch from the acpi_cpufreq CPU frequency scaling driver to amd_pstate. Please note that so far it is not loaded automatically.
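One minimal way to arrange that is a modules-load.d configuration fragment; this is a sketch under a couple of assumptions (the file name is my own choice, and on some platforms the driver may additionally need the amd_pstate.shared_mem=1 kernel parameter):

```shell
# /etc/modules-load.d/amd-pstate.conf
# Ask systemd to load the amd_pstate scaling driver at boot
amd_pstate
```

After a reboot, /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver should then report amd-pstate rather than acpi-cpufreq.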

As expected, fwupdmgr can update the system BIOS and the firmware of the NVMe device.

Dirk Eddelbuettel: RcppArmadillo on CRAN: Updates

2 October, 2022 - 23:08

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1023 other packages on CRAN, downloaded 26.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 497 times according to Google Scholar.

This release reflects the new upstream release 11.4.0 that Conrad made recently. It turns out that it triggered warnings under g++-12 for about five packages in the fortify mode that is the default for Debian builds. Conrad then kindly addressed this with a few fixes.

The full set of changes (since the last CRAN release) follows.

Changes in RcppArmadillo version (2022-10-01)
  • Upgraded to Armadillo release 11.4.0 (Ship of Theseus)

    • faster handling of compound expressions by sum()

    • extended pow() with various forms of element-wise power operations

    • added find_nan() to find indices of NaN elements

  • Also applied fixes to avoid g++-12 warnings affecting just a handful of CRAN packages.

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve McIntyre: Firmware vote result - the people have spoken!

2 October, 2022 - 21:15

It's time for another update on Debian's firmware GR. I wrote about the problem back in April and about the vote itself a few days back.

Voting closed last night and we have a result! This is unofficial so far - the official result will follow shortly when the Project Secretary sends a signed mail to confirm it. But that's normally just a formality at this point.

A Result!

The headline result is: Choice 5 / Proposal E won: Change SC for non-free firmware in installer, one installer. I'm happy with this - it's the option that I voted highest, after all. More importantly, however, it's a clear choice of direction, as I was hoping for. Of the 366 Debian Developers who voted, 289 of them voted this option above NOTA and 63 below, so it also meets the 3:1 super-majority requirement for amending a Debian Foundation Document (Constitution section 4.1.3).
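The super-majority arithmetic works out comfortably:

```latex
\frac{289}{63} \approx 4.6 \;\geq\; 3
```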

So, what happens next?

We have quite a few things to do now, ideally before the freeze for Debian 12 (bookworm), due January 2023. This list of work items is almost definitely not complete, and Cyril and I are aiming to get together this week and do more detailed planning for the d-i pieces. Off the top of my head I can think of the following:

  • Update the SC with the new text, update the website.
  • Check/add support for the non-free-firmware section in various places:
    • d-i build
    • debian-cd
    • debmirror (?)
    • ftpsync (?)
    • Any others?
  • Uploads of firmware packages targeting non-free-firmware.
  • Extra d-i code to inform users about what firmware blobs have been loaded and the matching non-free-firmware packages. Plus information about the hardware involved. Maybe a new d-i module / udeb for this? Exact details here still TBD. Probably the biggest individual piece of work here.
  • Tweaks to add the non-free-firmware section in the apt-setup module if desired/needed.
  • An extra boot option (a debconf variable) to disable loading extra firmware automatically, then exposed as an extra option through the isolinux and GRUB menus. d-i "expert mode" can also be used to tweak this behaviour later, except (obviously) for things like audio firmware that must be loaded early if they're going to be useful at all.
  • Update the image build scripts to include the n-f-f packages, and only build one type of image. I'll do my best to keep config and support around for images without n-f-f included, to make it easier to still build those for people who want them.
  • Matching updates to docs, website, wiki etc.
  • ...

If you think I've missed anything here, please let me and Cyril know - the best place would be the mailing list. If you'd like to help implement any of these changes, that would be lovely too!

Junichi Uekawa: I've sent a kernel patch.

1 October, 2022 - 20:07
I've sent a kernel patch. It's been 4 years since my last upstream kernel patch it seems. Time flies.

Jonathan Dowland: vim-css-color

1 October, 2022 - 02:48

Last year I wrote about a subset of the vim plugins I was using, specifically those created by master craftsman Tim Pope. Off and on since then I've reviewed the other plugins I use and tried a few others, so I thought I'd write about them.

automatic colour name colouring

vim-css-color is a simple plugin that recognises colour names specified in CSS style: e.g. 'red', '#ff0000', 'rgb(255,0,0)' etc., and colours them accordingly. True to its name, once installed it's active when editing CSS files, but it also supports many other file types, and extending it further is not hard.

Reproducible Builds (diffoscope): diffoscope 223 released

30 September, 2022 - 07:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 223. This version includes the following changes:

[ Chris Lamb ]
* The cbfstools utility is now provided in Debian via the coreboot-utils
  Debian package, so we can enable that functionality within Debian.
  (Closes: #1020630)

[ Mattia Rizzolo ]
* Also include coreboot-utils in Build-Depends and Test-Depends so it is
  available for the tests.

[ Jelle van der Waa ]
* Add support for file 5.43.

You find out more by visiting the project homepage.

Antoine Beaupré: Detecting manual (and optimizing large) package installs in Puppet

30 September, 2022 - 02:05

Well this is a mouthful.

I recently worked on a neat hack called puppet-package-check. It is designed to warn about manually installed packages, to make sure "everything is in Puppet". But it turns out it can also (probably?) dramatically speed up Puppet bootstrap when a large number of packages needs to be installed.

Detecting manual packages

On a clean workstation, it looks like this:

root@emma:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
0 unmanaged packages found

A messy workstation will look like this:

root@curie:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
288 unmanaged packages found
apparmor-utils beignet-opencl-icd bridge-utils clustershell cups-pk-helper davfs2 dconf-cli dconf-editor dconf-gsettings-backend ddccontrol ddrescueview debmake debootstrap decopy dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dnsdiag dropbear-initramfs ebtables efibootmgr elpa-lua-mode entr eog evince figlet file file-roller fio flac flex font-manager fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi freetype2-demos ftp fwupd-amd64-signed gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdisk gdm3 gdu gedit gedit-plugins gettext-base git-debrebase gnome-boxes gnote gnupg2 golang-any golang-docker-credential-helpers golang-golang-x-tools grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gtypist gvfs-backends hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-gtk3 ibus-libpinyin ibus-pinyin im-config imediff img2pdf imv initramfs-tools input-utils installation-birthday internetarchive ipmitool iptables iptraf-ng jackd2 jupyter jupyter-nbextension-jupyter-js-widgets jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools keyboard-configuration kfind konsole krb5-locales kwin-x11 leiningen lightdm lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 lynx lz4json magic-wormhole mailscripts mailutils manuskript mat2 mate-notification-daemon mate-themes mime-support mktorrent mp3splt mpdris2 msitools mtp-tools mtree-netbsd mupdf nautilus nautilus-sendto ncal nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache npm2deb ntfs-3g ntpdate nvme-cli nwipe obs-studio okular-extra-backends openstack-clients openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv 
playerctl plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby ruby-dev ruby-feedparser ruby-magic ruby-mocha ruby-ronn rygel-playbin rygel-tracker s-tui sanoid saytime scrcpy scrcpy-server screenfetch scrot sdate sddm seahorse shim-signed sigil smartmontools smem smplayer sng sound-juicer sound-theme-freedesktop spectre-meltdown-checker sq ssh-audit sshuttle stress-ng strongswan strongswan-swanctl syncthing system-config-printer system-config-printer-common system-config-printer-udev systemd-bootchart systemd-container tardiff task-desktop task-english task-ssh-server tasksel tellico texinfo texlive-fonts-extra texlive-lang-cyrillic texlive-lang-french texlive-lang-german texlive-lang-italian texlive-xetex tftp-hpa thunar-archive-plugin tidy tikzit tint2 tintin++ tipa tpm2-tools traceroute tree trocla ucf udisks2 unifont unrar-free upower usbguard uuid-runtime vagrant-cachier vagrant-libvirt virt-manager vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas wireshark xapian-tools xclip xdg-user-dirs-gtk xlax xmlto xsensors xserver-xorg xsltproc xxd xz-utils yubioath-desktop zathura zathura-pdf-poppler zenity zfs-dkms zfs-initramfs zfsutils-linux zip zlib1g zlib1g-dev
157 old: apparmor-utils clustershell davfs2 dconf-cli dconf-editor ddccontrol ddrescueview decopy dnsdiag ebtables efibootmgr elpa-lua-mode entr figlet file-roller fio flac flex font-manager freetype2-demos ftp gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdu gedit git-debrebase gnote golang-docker-credential-helpers golang-golang-x-tools gtypist hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-pinyin imediff input-utils internetarchive ipmitool iptraf-ng jackd2 jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools kfind konsole leiningen lightdm lynx lz4json magic-wormhole manuskript mat2 mate-notification-daemon mktorrent mp3splt msitools mtp-tools mtree-netbsd nautilus nautilus-sendto nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache ntpdate nwipe obs-studio openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby-feedparser ruby-magic ruby-mocha ruby-ronn s-tui saytime scrcpy screenfetch scrot sdate seahorse shim-signed sigil smem smplayer sng sound-juicer spectre-meltdown-checker sq ssh-audit sshuttle stress-ng system-config-printer system-config-printer-common tardiff tasksel tellico texlive-lang-cyrillic texlive-lang-french tftp-hpa tikzit tint2 tintin++ tpm2-tools traceroute tree unrar-free vagrant-cachier vagrant-libvirt vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas xdg-user-dirs-gtk xlax xmlto xsensors xxd yubioath-desktop zenity zip
131 new: beignet-opencl-icd bridge-utils cups-pk-helper dconf-gsettings-backend debmake debootstrap dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dropbear-initramfs eog evince file fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi fwupd-amd64-signed gdisk gdm3 gedit-plugins gettext-base gnome-boxes gnupg2 golang-any grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gvfs-backends ibus-gtk3 ibus-libpinyin im-config img2pdf imv initramfs-tools installation-birthday iptables jupyter jupyter-nbextension-jupyter-js-widgets keyboard-configuration krb5-locales kwin-x11 lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 mailscripts mailutils mate-themes mime-support mpdris2 mupdf ncal npm2deb ntfs-3g nvme-cli okular-extra-backends openstack-clients plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap ruby ruby-dev rygel-playbin rygel-tracker sanoid scrcpy-server sddm smartmontools sound-theme-freedesktop strongswan strongswan-swanctl syncthing system-config-printer-udev systemd-bootchart systemd-container task-desktop task-english task-ssh-server texinfo texlive-fonts-extra texlive-lang-german texlive-lang-italian texlive-xetex thunar-archive-plugin tidy tipa trocla ucf udisks2 unifont upower usbguard uuid-runtime virt-manager wireshark xapian-tools xclip xserver-xorg xsltproc xz-utils zathura zathura-pdf-poppler zfs-dkms zfs-initramfs zfsutils-linux zlib1g zlib1g-dev

Yuck! That's a lot of shit to go through.

Notice how the packages get sorted between "old" and "new" packages. This is because popcon is used as a tool to mark which packages are "old". If you have unmanaged packages, the "old" ones are likely things that you can uninstall, for example.

If you don't have popcon installed, you'll also get this warning:

popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'

The error can otherwise be safely ignored, but you won't get "help" prioritizing the packages to add to your manifests.

Note that the tool ignores packages that were "marked" (see apt-mark(8)) as automatically installed. This implies that you might have to do a little bit of cleanup the first time you run this, as Debian doesn't necessarily mark all of those packages correctly on first install. For example, here's how it looks on a clean install, after Puppet ran:

root@angela:/home/anarcat# ./bin/puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
127 unmanaged packages found
ca-certificates console-setup cryptsetup-initramfs dbus file gcc-12-base gettext-base grub-common grub-efi-amd64 i3lock initramfs-tools iw keyboard-configuration krb5-locales laptop-detect libacl1 libapparmor1 libapt-pkg6.0 libargon2-1 libattr1 libaudit-common libaudit1 libblkid1 libbpf0 libbsd0 libbz2-1.0 libc6 libcap-ng0 libcap2 libcap2-bin libcom-err2 libcrypt1 libcryptsetup12 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libedit2 libelf1 libext2fs2 libfdisk1 libffi8 libgcc-s1 libgcrypt20 libgmp10 libgnutls30 libgpg-error0 libgssapi-krb5-2 libhogweed6 libidn2-0 libip4tc2 libiw30 libjansson4 libjson-c5 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 liblocale-gettext-perl liblockfile-bin liblz4-1 liblzma5 libmd0 libmnl0 libmount1 libncurses6 libncursesw6 libnettle8 libnewt0.52 libnftables1 libnftnl11 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnss-systemd libp11-kit0 libpam-systemd libpam0g libpcre2-8-0 libpcre3 libpcsclite1 libpopt0 libprocps8 libreadline8 libselinux1 libsemanage-common libsemanage2 libsepol2 libslang2 libsmartcols1 libss2 libssl1.1 libssl3 libstdc++6 libsystemd-shared libsystemd0 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libtinfo6 libtirpc-common libtirpc3 libudev1 libunistring2 libuuid1 libxtables12 libxxhash0 libzstd1 linux-image-amd64 logsave lsb-base lvm2 media-types mlocate ncurses-term pass-extension-otp puppet python3-reportbug shim-signed tasksel ucf usr-is-merged util-linux-extra wpasupplicant xorg zlib1g
popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'

Normally, there should be no unmanaged packages here. But because of the way Debian is installed, a lot of libraries and some core packages are marked as manually installed, and are of course not managed through Puppet. There are two solutions to this problem:

  • really manage everything in Puppet (argh)
  • mark packages as automatically installed

I typically choose the second path and mark a ton of stuff as automatic. Then either they will be auto-removed, or they will stop being listed. In the above scenario, one could mark all libraries as automatically installed with:

apt-mark auto $(./bin/puppet-package-check | grep -o 'lib[^ ]*')

... but if you trust that most of that stuff is actually garbage that you don't really want installed anyways, you could just mark it all as automatically installed:

apt-mark auto $(./bin/puppet-package-check)

In my case, that ended up keeping basically all libraries (because of course they're installed for some reason) and auto-removing this:

dh-dkms discover-data dkms libdiscover2 libjsoncpp25 libssl1.1 linux-headers-amd64 mlocate pass-extension-otp pass-otp plocate x11-apps x11-session-utils xinit xorg

You'll notice xorg in there: yep, that's bad. Not what I wanted. But for some reason, on other workstations, I did not actually have xorg installed. Turns out having xserver-xorg is enough, and that one has dependencies. So now I guess I just learned to stop worrying and live without X(org).

Optimizing large package installs

But that, of course, is not all. Why make things simple when you can have an unreadable title that is trying to be both syntactically correct and click-baity enough to flatter my vain ego? Right.

One of the challenges in bootstrapping Puppet with large package lists is that it's slow. Puppet lists packages as individual resources and will basically run apt install $PKG on every package in the manifest, one at a time. While the overhead of apt is generally small, when you add things like apt-listbugs, apt-listchanges, needrestart, triggers and so on, it can take forever setting up a new host.

So for initial installs, it can actually make sense to skip the queue and just install everything in one big batch.

And because the above tool inspects the packages installed by Puppet, you can run it against a catalog and get a full list of all the packages Puppet would install, even before Puppet is running.

So when reinstalling my laptop, I basically did this:

apt install puppet-agent/experimental
puppet agent --test --noop
apt install $(./puppet-package-check --debug \
    2>&1 | grep '^puppet packages' \
    | sed 's/puppet packages://;s/ /\n/g' \
    | grep -v -e onionshare -e golint -e git-sizer -e github-backup -e hledger -e xsane -e audacity -e chirp -e elpa-flycheck -e elpa-lsp-ui -e yubikey-manager -e git-annex -e hopenpgp-tools -e puppet \
) puppet-agent/experimental

That massive grep is there because a lot of packages in my catalog are currently missing from bookworm: they just haven't made it there yet. Sad, I know. I eventually worked around that by adding bullseye sources so that the Puppet manifest actually ran.
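For the curious, here is a toy reproduction of what that sed incantation does, assuming the debug output contains a line of the form `puppet packages: pkg1 pkg2 ...` (the exact format is an assumption based on the pipeline above; GNU sed is assumed for `\n` in the replacement):

```shell
#!/bin/sh
# Hypothetical debug line standing in for ./puppet-package-check --debug output.
# Strip the prefix, split one package per line, then filter unwanted packages
# (and the empty line left over from the leading space).
printf 'puppet packages: git tmux vim\n' \
    | sed 's/puppet packages://;s/ /\n/g' \
    | grep -v -e '^$' -e tmux
```

This prints `git` and `vim`, one per line, which is exactly the shape `apt install $(...)` wants.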

The point here is that this improves the Puppet run time a lot. All packages get installed at once, and you get a nice progress bar. Then you actually run Puppet to deploy configurations and all the other goodies:

puppet agent --test

I wish I could tell you how much faster that ran. I don't know, and I will not go through a full reinstall just to please your curiosity. The only hard number I have is that it installed 444 packages (which exploded into 10,191 packages with dependencies) in a mere 10 minutes. That may also have been with the packages already downloaded.

In any case, I have that gut feeling it's faster, so you'll have to just trust my gut. It is, after all, much more important than you might think.

Russell Coker: Links September 2022

29 September, 2022 - 19:55

Tony Kern wrote an insightful document about the crash of a B-52 at Fairchild air base in 1994 as a case study of failed leadership [1].

Cory Doctorow wrote an insightful Medium article “We Should Not Endure a King” describing the case for anti-trust laws [2]. We need them badly.

The Guardian has an insightful article about the way reasonable responses to the bad situations people find themselves in are diagnosed as mental health problems [3]. Providing better mental healthcare is good, but the government should also work on poverty etc.

Cory Doctorow wrote an insightful Locus article about some of the issues that have to be dealt with in applying anti-trust legislation to tech companies [4]. We really need this to be done.

Ars Technica has an interesting article about Stable Diffusion, an open source ML system for generating images [5]; the results it can produce are very impressive. One interesting thing is that the license has a set of usage conditions which preclude exploiting or harming minors or generating false information [6]. This means it will need to go in the non-free section of Debian at best.

Dan Wang wrote an interesting article on optimism as human capital [7] which covers the reasons that people feel inspired to create things.

Related posts:

  1. Links September 2020 MD5 cracker, find plain text that matches MD5 hash [1]....
  2. Links Aug 2022 Armor is an interesting technology from Manchester University for stopping...
  3. Links July 2022 Darren Hayes wrote an interesting article about his battle with...

Jelmer Vernooij: Northcape 4000

29 September, 2022 - 05:00

This summer, I signed up to participate in the Northcape 4000, an annual 4000 km bike ride between Rovereto (in northern Italy) and the northernmost point of Europe, the North Cape.

The Northcape event has been held for several years, and while it always ends on the North Cape, the route there varies. Last year's route went through the Baltics, but this year's was perhaps as direct as possible, taking us through Italy, Austria, Switzerland, Germany, the Czech Republic, Germany again, Sweden, Finland and finally Norway.

The ride is unsupported, meaning you have to find your own food and accommodation and can only avail yourself of resupply and sleeping options on the route that are available to everybody else as well. The event is not meant to be a race (unlike the Transcontinental, which starts on the same day), so there is a minimum time to finish it (10 days) as well as a maximum (21 days).

Unfortunately, this meant skipping some other events I'd wanted to attend (DebConf, MCH).


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.