Planet Debian

Planet Debian - https://planet.debian.org/

Russ Allbery: Review: Thick

6 April, 2020 - 11:21

Review: Thick, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2019
ISBN: 1-62097-437-1
Format: Kindle
Pages: 247

Tressie McMillan Cottom is an associate professor of sociology at Virginia Commonwealth University. I first became aware of her via retweets and recommendations from other people I follow on Twitter, and she is indeed one of the best writers on that site. Thick: And Other Essays is an essay collection focused primarily on how American culture treats black women.

I will be honest here, in part because I think much of the regular audience for my book reviews is similar to me (white, well-off from working in tech, and leftist but privileged) and therefore may identify with my experience. This is the sort of book that I always want to read and then struggle to start because I find it intimidating. It received a huge amount of praise on release, including being named as a finalist for the National Book Award, and that praise focused on its incisiveness, its truth-telling, and its depth and complexity. Complex and incisive books about racism are often hard for me to read; they're painful, depressing, and infuriating, and I have to fight my tendency to come away from them feeling more cynical and despairing. (Despite loving his essays, I'm still procrastinating reading Ta-Nehisi Coates's books.) I want to learn and understand but am not good at doing anything with the information, so this reading can feel like homework.

If that's also your reaction, read this book. I regret having waited as long as I did.

Thick is still, at times, painful, depressing, and infuriating. It's also brilliantly written in a way that makes the knowledge being conveyed easier to absorb. Rather than a relentless onslaught of bearing witness (for which, I should stress, there is an important place), it is a scalpel. Each essay lays open the heart of a subject in a few deft strokes, points out important features that the reader has previously missed, and then steps aside, leaving you alone with your thoughts to come to terms with what you've just learned. I needed this book to be an essay collection, with each thought just long enough to have an impact and not so long that I became numb. It's the type of collection that demands a pause at the end of each essay, a moment of mental readjustment, and perhaps a paging back through the essay again to remember the sharpest points.

The essays often start with seeds of the personal, drawing directly on McMillan Cottom's own life to wrap context around their point. In the first essay, "Thick," she uses advice given her younger self against writing too many first-person essays to talk about the writing form, its critics, and how the backlash against it has become part of systematic discrimination because black women are not allowed to write any other sort of authoritative essay. She then draws a distinction between her own writing and personal essays, not because she thinks less of that genre but because that genre does not work for her as a writer. The essays in Thick do this repeatedly. They appear to head in one direction, then deepen and shift with the added context of precise sociological analysis, defying predictability and reaching a more interesting conclusion than the reader had expected. And, despite those shifts, McMillan Cottom never lost me in a turn. This is a book that is not only comfortable with complexity and nuance, but helps the reader become comfortable with that complexity as well.

The second essay, "In the Name of Beauty," is perhaps my favorite of the book. Its spark was backlash against an essay McMillan Cottom wrote about Miley Cyrus, but the topic of the essay wasn't what sparked the backlash.

What many black women were angry about was how I located myself in what I'd written. I said, blithely as a matter of observable fact, that I am unattractive. Because I am unattractive, the argument went, I have a particular kind of experience of beauty, race, racism, and interacting with what we might call the white gaze. I thought nothing of it at the time I was writing it, which is unusual. I can usually pinpoint what I have said, written, or done that will piss people off and which people will be pissed off. I missed this one entirely.

What follows is one of the best essays on the social construction of beauty I've ever read. It barely pauses at the typical discussion of unrealistic beauty standards as a feminist issue, instead diving directly into beauty as whiteness, distinguishing between beauty standards that change with generations and the more lasting rules that instead police the bounds between white and not white. McMillan Cottom then goes on to explain how beauty is a form of capital, a poor and problematic one but nonetheless one of the few forms of capital women have access to, and therefore why black women have fought to be included in beauty despite all of the problems with judging people by beauty standards. And the essay deepens from there into a trenchant critique of both capitalism and white feminism that is both precise and illuminating.

When I say that I am unattractive or ugly, I am not internalizing the dominant culture's assessment of me. I am naming what has been done to me. And signaling who did it. I am glad that doing so unsettles folks, including the many white women who wrote to me with impassioned cases for how beautiful I am. They offered me neoliberal self-help nonsense that borders on the religious. They need me to believe beauty is both achievable and individual, because the alternative makes them vulnerable.

I could go on. Every essay in this book deserves similar attention. I want to quote from all of them. These essays are about racism, feminism, capitalism, and economics, all at the same time. They're about power, and how it functions in society, and what it does to people. There is an essay about Obama that contains the most concise explanation for his appeal to white voters that I've read. There is a fascinating essay about the difference between ethnic black and black-black in U.S. culture. There is so much more.

We do not share much in the U.S. culture of individualism except our delusions about meritocracy. God help my people, but I can talk to hundreds of black folks who have been systematically separated from their money, citizenship, and personhood and hear at least eighty stories about how no one is to blame but themselves. That is not about black people being black but about people being American. That is what we do. If my work is about anything it is about making plain precisely how prestige, money, and power structure our so-called democratic institutions so that most of us will always fail.

I, like many other people in my profession, was always more comfortable with the technical and scientific classes in college. I liked math and equations and rules, dreaded essay courses, and struggled to engage with the mandatory humanities courses. Something that I'm still learning, two decades later, is the extent to which this was because the humanities are harder work than the sciences and I wasn't yet up to the challenge of learning them properly. The problems are messier and more fluid. The context required is broader. It's harder to be clear and precise. And disciplines like sociology deal with our everyday lived experience, which means that we all think we're entitled to an opinion.

Books like this, which can offer me a hand up and a grounding in the intellectual rigor while simultaneously being engaging and easy to read, are a treasure. They help me fill in the gaps in my education and help me recognize and appreciate the depth of thought in disciplines that don't come as naturally to me.

This book was homework, but the good kind, the kind that exposes gaps in my understanding, introduces topics I hadn't considered, and makes the time fly until I come up for air, awed and thinking hard. Highly recommended.

Rating: 9 out of 10

Enrico Zini: Burnout links

6 April, 2020 - 06:00

  • Demystifying Burnout in Tech (burnout, selfcare, archive.org, 2020-04-06): How to save your soul from getting too callused
  • FOSDEM 2020 - Recognising Burnout (burnout, selfcare, archive.org, 2020-04-06): Mental health is becoming an increasingly important topic. For this talk Andrew will focus on one particular aspect of mental health, burnout. Including his own personal experiences of when it can get really bad and steps that could be taken to help catch it early.
  • Burnout is Not Your Fault (burnout, selfcare, archive.org, 2020-04-06): Let’s unpack society’s general misunderstanding of the latest buzzword - burnout, shall we?
  • Christina Maslach: Burnout From Heroic Action (burnout, selfcare, 2020-04-06): Christina Maslach defines and explains burnout, in particular relating it to activism. She gives tips and lessons for avoiding it. Recorded at the Hero Round...
  • Understanding Job Burnout - Dr. Christina Maslach (burnout, selfcare, 2020-04-06): DOES19 London - Burnout is a hot topic in today's workplace, given its high costs for both employees and organizations. What causes this problem? And what ca...

Vincent Bernat: Safer SSH agent forwarding

5 April, 2020 - 22:50

ssh-agent is a program to hold in memory the private keys used by SSH for public-key authentication. When the agent is running, ssh forwards to it the signature requests from the server. The agent performs the private key operations and returns the results to ssh. It is useful if you keep your private keys encrypted on disk and you don’t want to type the password at each connection. Keeping the agent secure is critical: someone able to communicate with the agent can authenticate on your behalf on remote servers.
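
For reference, a minimal local session looks something like this (the key path here is only an example):

# Start an agent for the current shell and load one key;
# ssh-add prompts for the key's passphrase once.
eval $(ssh-agent)
ssh-add ~/.ssh/id_ed25519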

ssh also provides the ability to forward the agent to a remote server. From this remote server, you can authenticate to another server using your local agent, without copying your private key on the intermediate server. As stated in the manual page, this is dangerous!

Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent’s UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).

As mentioned, a better alternative is to use the jump host feature: the SSH connection to the target host is tunneled through the SSH connection to the jump host. See the manual page and this blog post for more details.
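
For illustration, a jump-host connection looks something like this (both hostnames are placeholders):

# Authenticate to the jump host and to the target from the local
# machine; no agent or key material is exposed on the jump host.
ssh -J jump.example.org target.example.org

# The equivalent persistent configuration in ~/.ssh/config:
#   Host target.example.org
#       ProxyJump jump.example.org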

If you really need to use SSH agent forwarding, you can secure it a bit through a dedicated agent with two main attributes:

  • it holds only the private key to connect to the target host, and
  • it asks confirmation for each requested signature.

The following wrapper around the ssh command will spawn such an ephemeral agent:

assh() {
  (
    # Ensure we don't use the "regular" agent.
    unset SSH_AUTH_SOCK
    # Spawn a new, empty, agent.
    eval $(ssh-agent)
    [ -n "$SSH_AUTH_SOCK" ] || exit 1
    # On exit, kill the agent.
    trap "ssh-agent -k > /dev/null" EXIT
    # Invoke SSH with agent forwarding enabled and
    # automatically add the needed private key in
    # the agent, with "confirm" mode.
    ssh -o AddKeysToAgent=confirm \
        -o ForwardAgent=yes \
        "$@"
  )
}
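
Usage is the same as for plain ssh; for example (the hostname is a placeholder):

# The key used to authenticate to target.example.org is added to the
# ephemeral agent in confirm mode; the agent is killed when the
# session ends.
assh user@target.example.org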

With the -o AddKeysToAgent=confirm directive, ssh adds the unencrypted private key to the agent, but each use must be confirmed.1 Once connected, you get a confirmation prompt for each signature request:2

[Screenshot: request for the agent to use the specified private key]

But, again, avoid using agent forwarding! ☠️

  1. Alternatively, you can add the keys with ssh-add -c. ↩︎

  2. Unfortunately, the dialog box default answer is “Yes.” ↩︎

Hideki Yamane: Zoom: You should hire an appropriate package maintainer

5 April, 2020 - 21:19
Through my daily job, I sometimes have to use Zoom for meetings and webinars, but several resources indicate that they haven't put enough effort into the security of their product, so I decided to remove it from my laptop. However, I noticed a weird message while doing so:
The following packages will be REMOVED:
  zoom*
0 upgraded, 0 newly installed, 1 to remove and 45 not upgraded.
After this operation, 269 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 362466 files and directories currently installed.)
Removing zoom (3.5.374815.0324) ...
run post uninstall script, action is remove ...
current home is /root
Processing triggers for mime-support (3.64) ...
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for shared-mime-info (1.15-1) ...
Processing triggers for desktop-file-utils (0.24-1) ...
(Reading database ... 361169 files and directories currently installed.)
Purging configuration files for zoom (3.5.374815.0324) ...
run post uninstall script, action is purge ...
current home is /root

Wait. "current home is /root"? What did you do? Then I extracted the package's maintainer scripts (ar -x zoom_amd64.deb; tar xvf control.tar.xz; view post*):
#!/bin/bash
# Program:
#       script to be run after package installation

echo "run post install script, action is $1..."

#ln -s -f /opt/zoom/ZoomLauncher /usr/bin/zoom

#$1 folder path
function remove_folder
{
        if [ -d $1 ]; then
                rm -rf $1
        fi
}

echo current home is $HOME
remove_folder "$HOME/.cache/zoom"
(snip)

Ouch. When I run apt with sudo, $HOME is /root, so their maintainer script tried to remove files under /root! Did they do any tests? Even if it worked as intended, touching a user's files under $HOME is NOT a good idea...
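
The effect is easy to reproduce; with a stock Debian sudo configuration, $HOME is reset to the target user's home directory:

# Maintainer scripts run via "sudo apt" therefore see HOME=/root.
sudo sh -c 'echo $HOME'
# prints: /root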

And it seems this applies not only to the .deb package but also to the .rpm package.

Their linux installer scripts are clueless and icky too:

remove_folder "/opt/zoom"
remove_folder "$HOME/.zoom/logs"
remove_folder "$HOME/.cache/zoom"

rpm -q --scripts zoom output: https://t.co/HwnvljYyp4 -- Will Stephenson (@wstephenson), March 31, 2020

Craig Small: WordPress 5.4

5 April, 2020 - 09:59

Debian packages for WordPress version 5.4 will be uploaded shortly. I’m just going through the install testing now.

One problem I have noticed, at least for my setup, is an issue with network updates: WordPress asks me if I want to update the network sites, I say yes, and I get an SSL error.

After lots of debugging, it turns out the problem is that the fsockopen option to use SNI is turned off for network updates. My sites need SNI, so without it they just bomb out with an SSL handshake error.

I’m not sure what the real fix is, but my work-around was to temporarily enable SNI in the fsockopen transport while doing the site updates.

The file you want is wp-includes/Requests/Transport/fsockopen.php; in the request method of Requests_Transport_fsockopen you’ll see something like:

                       stream_context_set_option($context, array('ssl' => $context_options)); 
                } 
                else { 
                        $remote_socket = 'tcp://' . $host; 
                }

Just before the stream_context_set_option call, put the line:

                        $context_options['SNI_enabled'] = true;

Ugly, but it works.

Joey Hess: solar powered waterfall controlled by a GPIO port

5 April, 2020 - 03:56

This waterfall is beside my yard. When it's running, I know my water tanks are full and the spring is not dry.

Also it's computer controlled, for times when I don't want to hear it. I'll also use the computer control later on to avoid running the pump excessively and wearing it out, and for some safety features like not running when the water is frozen.

This is a whole hillside of pipes, water tanks, pumps, solar panels, all controlled by a GPIO port. Easy enough; the pump controller has a float switch input and the GPIO drives a 4n35 optoisolator to open or close that circuit. Hard part will be burying all the cable to the pump. And then all the landscaping around the waterfall.
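
As a rough illustration (not the actual setup; the GPIO number is made up), driving such a pin from the shell via the legacy sysfs interface looks like this:

# Export the pin, configure it as an output, and drive the LED side
# of the optoisolator; whether 1 means "pump enabled" depends on how
# the float-switch circuit is wired.
echo 12 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio12/direction
echo 1 > /sys/class/gpio/gpio12/value
echo 0 > /sys/class/gpio/gpio12/value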

There's a bit of lag to turning it on and off. It can take over an hour for it to start flowing, and around half an hour to stop. The water level has to get high enough in the water tanks to overcome some airlocks and complicated hydrodynamic flow stuff. Then when it stops, all that excess water has to drain back down.

Anyway, enjoy my soothing afternoon project and/or massive rube goldberg machine, I certainly am.

Thorsten Alteholz: My Debian Activities in March 2020

4 April, 2020 - 23:02

FTP master

This month I accepted 156 packages and rejected 26. The overall number of packages that got accepted was 203.

Debian LTS

This was my sixty-ninth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads of:

  • [DLA 2156-1] e2fsprogs security update for one CVE
  • [DLA 2157-1] weechat security update for three CVEs
  • [DLA 2160-1] php5 security update for two CVEs
  • [DLA 2164-1] gst-plugins-bad0.10 security update for four CVEs
  • [DLA 2165-1] apng2gif security update for one CVE

Also my work on graphicsmagick was accepted, which resulted in:

  • [DSA 4640-1] graphicsmagick security update in Buster and Stretch for 16 CVEs

Further, I sent debdiffs of weechat/stretch, weechat/buster, and e2fsprogs/stretch to the corresponding maintainers but have not received any feedback yet.

As lots of no-dsa CVEs have accumulated for wireshark, I started to work on them but could not upload yet.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-first ELTS month.

During my allocated time I uploaded:

  • ELA-218-1 for e2fsprogs
  • ELA-220-1 for php5
  • ELA-221-1 for nss

I also did some days of frontdesk duties.

Other stuff

Unfortunately, strange things happened outside Debian again this month, and the discussions within Debian did not stop. Nonetheless I got some stuff done.

I improved packaging of …

I sponsored uploads of …

  • … ocf-spec-core
  • … theme-d-gnome

Sorry to all people who also requested sponsoring, but sometimes things happen and your upload might be delayed.

I uploaded new upstream versions of …

On my Go challenge I uploaded:
golang-github-dreamitgetit-statuscake, golang-github-ensighten-udnssdk, golang-github-apparentlymart-go-dump, golang-github-suapapa-go-eddystone, golang-github-joyent-gosdc, golang-github-nrdcg-goinwx, golang-github-bmatcuk-doublestar, golang-github-go-xorm-core, golang-github-svanharmelen-jsonapi, golang-github-goji-httpauth, golang-github-phpdave11-gofpdi

Sean Whitton: Manifest to run Debian pre-upload tests on builds.sr.ht

4 April, 2020 - 00:47

Before uploading stuff to Debian, I build in a clean chroot, and then run piuparts, autopkgtest and lintian. For some of my packages this can take around an hour on my laptop, which is fairly old. Normally I don’t mind waiting, but sometimes I want to put my laptop away, and then it would be good for things to be faster. It occurred to me that I could make use of my builds.sr.ht account to run these tests on more powerful hardware.

This build manifest seems to work:

# BEGIN CONFIGURABLE
sources:
  - https://salsa.debian.org/perl-team/modules/packages/libgit-annex-perl.git
environment:
  source: libgit-annex-perl
  quilt:  auto
# END CONFIGURABLE

image: debian/unstable
packages:
  - autopkgtest
  - build-essential
  - devscripts
  - dgit
  - lintian
  - piuparts
tasks:
  - setup: |
      cd $source
      source_version=$(dpkg-parsechangelog -SVersion)
      echo "source_version=$source_version" >>~/.buildenv
      git deborig || origtargz
      sudo apt-get -y build-dep .
  - build: |
      cd $source
      dgit --quilt=$quilt build
  - lintian: |
      lintian ${source}_${source_version}_multi.changes
  - piuparts: |
      sudo piuparts ${source}_${source_version}_multi.changes
  - autopkgtest: |
      sudo autopkgtest ${source}_${source_version}_multi.changes -- null

Jonathan Dowland: More Switch games

3 April, 2020 - 22:44

Sonic Mania

Sonic Mania is a really lovely homage to the classic 90s Sonic the Hedgehog platform games. Featuring more or less the classic gameplay, and expanded versions of the original levels, with lots of secrets, surprises and easter eggs for fans of the original. On my recommendation a friend of mine bought it for her daughter's birthday recently but her daughter will now have to prise her mum off it! Currently on sale at 30% off (£11.19). The one complaint I have about it is the lack of females in the roster of 5 playable characters.

Butcher

Butcher is a very violent side-scrolling shooter/platformer with a Doom-esque aesthetic, currently on sale at 70% off (just £2.69, the price of a coffee). I've played it for about 10 minutes during coffee breaks and it's fun, hard, and pretty intense. The soundtrack is great, and available to buy separately, but only if you own or buy the original game from the same store, which is a strange restriction. It's also on Spotify.

Dirk Eddelbuettel: RcppSimdJson 0.0.4: Even Faster Upstream!

3 April, 2020 - 22:15

A new (upstream) simdjson release was announced by Daniel Lemire earlier this week, and my Twitter mentions have been running red-hot ever since as he was kind enough to tag me. Do look at that blog post; there is some impressive work in there. We wrapped the (still very simple) rcppsimdjson around it last night and shipped it this morning.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. For illustration, I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle used per byte parsed.

This release brings upstream 0.3 (and 0.3.1) plus a minor tweak (also shipped back upstream). Our full NEWS entry follows.

Changes in version 0.0.4 (2020-04-03)
  • Upgraded to new upstream releases 0.3 and 0.3.1 (Dirk in #9 closing #8)

  • Updated example validateJSON to API changes.

But because Daniel is such a fantastic upstream developer to collaborate with, he even filed a full feature request ‘maybe you can consider upgrading’ as issue #8 at our repo, containing the fully detailed list of changes. As it is so impressive, I will simply quote the upper half of just the major changes:

Highlights
  • Multi-Document Parsing: Read a bundle of JSON documents (ndjson) 2-4x faster than doing it individually. API docs / Design Details
  • Simplified API: The API has been completely revamped for ease of use, including a new JSON navigation API and fluent support for error code and exception styles of error handling with a single API. Docs
  • Exact Float Parsing: Now simdjson parses floats flawlessly without any performance loss (https://github.com/simdjson/simdjson/pull/558). Blog Post
  • Even Faster: The fastest parser got faster! With a shiny new UTF-8 validator and meticulously refactored SIMD core, simdjson 0.3 is 15% faster than before, running at 2.5 GB/s (where 0.2 ran at 2.2 GB/s).

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: Opinionated IkiWiki

3 April, 2020 - 21:10

For various personal projects and things, past and present (including my personal site) I use IkiWiki, which (by modern standards) is a bit of a pain to set up and maintain. For that reason I find it hard to recommend to people. It would be nice to fire up a snapshot of an existing IkiWiki instance to test what the outcome of some changes might be. That's cumbersome enough at the moment that I haven't bothered to do it more than once. Separately, some months ago I did a routine upgrade of Debian for the web server running this site, and my IkiWiki installation broke for the first time in ten years. I've never had issues like this before.

For all of these reasons I've just dusted off an old experiment of mine, now renamed Opinionated IkiWiki. It's IkiWiki in a container, configured to be usable out of the box, with some opinionated configuration decisions made for you. The intention is that you should be able to fire up this container and immediately have a useful IkiWiki instance to work from. It should hopefully be easier to clone an existing wiki (content, configuration and all) for experimentation.

You can check out the source at GitHub, and grab container images from quay.io. Or fire one up immediately at http://127.0.0.1:8080 with something like

podman run --rm -ti -p 8080:8080 \
quay.io/jdowland/opinionated-ikiwiki:latest

This was a good excuse to learn about multi-stage container builds and explore quay.io.

Feedback gratefully received: As GitHub issues, comments here, or mail.

Norbert Preining: KDE/Plasma updates for Debian sid/testing

3 April, 2020 - 07:07

I have written before about getting updated packages for KDE/Plasma on Debian. In the meantime I have moved all package building to the openSUSE Build Service, thus I am able to provide builds for Debian/testing, both i386 and amd64 architectures.

For those in a hurry: new binary packages that can be used on both Debian/testing and Debian/sid can be obtained for both i386 and amd64 architectures here:

deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Testing  ./

To make this repository work out of the box, you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
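
Putting both steps together, the setup looks roughly like this (assuming the key file has already been downloaded from the link above; the sources.list.d file name is only a suggestion):

# Trust the OBS signing key, add the repository, and refresh.
sudo cp obs-npreining.asc /etc/apt/trusted.gpg.d/obs-npreining.asc
echo 'deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Testing ./' \
  | sudo tee /etc/apt/sources.list.d/npreining-plasma.list
sudo apt update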

The sources for the above binaries are available at the OBS site for the debian-plasma sub-project, but I will also try to keep them apt-get-able on my server as before:

deb-src https://www.preining.info/debian unstable kde

I have chosen the openSUSE Build Service because of the ease of pushing new packages and the automatic resolution of package dependencies within the same repository. No need to compile the packages myself, nor to search for the correct build order. I have also added a few new packages and updates (dolphin, umbrello, kwalletmanager, kompare, …); at the moment we are at 131 packages that got updated. If you have requests for updates, drop me an email!

Enjoy

Norbert

Dirk Eddelbuettel: RQuantLib 0.4.12: Small QuantLib 1.18 update

3 April, 2020 - 04:57

A new release 0.4.12 of RQuantLib arrived on CRAN today, and was uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This version does relatively little. When QuantLib 1.18 came out, I immediately did my usual bit of packaging it for Debian, as well as creating binaries via my Ubuntu PPA, so that I could test the package against it. A few calls from RQuantLib were now hitting interface functions marked as ‘deprecated’, leading to compiler nags, so I fixed that in PR #146. And today CRAN sent me an email asking to please fix this in the released version, so I rolled it up as 0.4.12. No other changes.

Changes in RQuantLib version 0.4.12 (2020-04-01)
  • Changes in RQuantLib code:

    • Calls deprecated-in-QuantLib 1.18 were updated (Dirk in #146).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sven Hoexter: New TLDs and Automatic link detection was a bad idea

2 April, 2020 - 20:07

Update: This seems to be a Firefox-specific bug in the Slack web application; it works as it should in Chrome and in the Slack Electron application, but not in my Firefox ESR 68.6.0esr-1~deb10u1.

Ah, I like that we now have so many TLDs, and matching on them seems to go wrong more often now. The latest occasion is Slack (which I think is a pile of shit written by morons, but that is a different story), which somehow does not properly match on .co domains, leading to this auto-linking:

Now I'm not sure if someone encountered the same issue, or if people just registered random domains because they could. I found registrations for

  • resolv.co
  • pam.co
  • sysctl.co
  • so.co (ld.so.co would've been really cute)

There are a few more .conf files in /etc which could be interesting in an IT environment, but for the sake of playing with it I registered nsswitch.co at GoDaddy. I do not want to endorse them in any way, but for the first year it's only 13.08 EUR right now, which is okay to pay for a stupid demo. So if you feel like it, you can probably register something stupid for yourself to play around with. I do not intend to renew this domain next year, so be aware of what might happen then with the next owner.

Ulrike Uhlig: Breaking the chain reaction of reactions to reactions

2 April, 2020 - 14:00

Sometimes, in our day-to-day interactions, communication becomes disruptive, resembling a chain of reactions to reactions to reactions. Sometimes we lose the capacity to express our ideas and feelings. Sometimes communication just gets stuck, maybe conflict breaks out. When we see these same patterns over and over again, this might be due to the same roles that we keep adopting and playing. Learnt in childhood, these roles are deeply ingrained in our adult selves, and acted out as unconscious scripts. Until we notice and work on them.

This is a post inspired by contents from my mediation training.

In the 1960s, Stephen Karpman devised a model of human communication that maps the destructive interactions which occur between people. This map is known as the drama triangle.

Karpman defined three roles that interact with each other. We can play one role at work, and a different one at home, and another one with our children. Or we can switch from one role to the other in just one conversation. The three roles are:

  • The Persecutor. I'm right. It's all your fault. The Persecutor acts out criticism, accusation, and condemnation. Their behavior is controlling, blaming, shaming, oppressive, hurtful, angry, authoritarian, superior. They know everything better, they laugh about others, bully, shame, or belittle them. The Persecutor discounts others' value, looking down on them. Persecutor's thought: I'm okay, you're not okay.
  • The Victim. I'm blameless. Poor me. The Victim feels not accepted by others, oppressed, helpless, hopeless, powerless, ashamed, inferior. The Victim thinks they are unable or not good enough to solve problems on their own. The Victim discounts themselves. Victim's thought: I'm not okay, you're okay.
  • The Rescuer. I'm good. Let me help you! The Rescuer is a person who has unsolicited and unlimited advice concerning the Victim's problems. They think for the Victim, and comfort them, generally without having been asked to do so. The Rescuer acts seemingly to help the Victim but rescuing mostly helps them to feel better themselves, as it allows them to ignore their own anxieties, worries, or shortcomings. The Rescuer needs a Victim to rescue, effectively keeping the Victim powerless. The Rescuer discounts others' abilities to think and act for themselves, looking down on them. Rescuer's thought: I'm okay, you're not okay.
Does this sound familiar?

"Involvement in an unhealthy drama triangle is not something another person is doing to you. It's something you are doing with another person or persons." Well, to be more precise, it's something that we are all doing to each other: "Drama triangles form when participants who are predispositioned to adopt the roles of a drama triangle come together over an issue." (quoted from: Escaping conflict and the Karpman Drama Triangle.)

People act out these roles to meet personal (often unconscious) needs. But each of these roles is toxic in that it sees others as problems to react to. In not being able to see that we take on these roles, we keep the triangle going, like in a dispute in which one word provokes another until someone leaves, slamming the door. This is drama. When we are stuck in the drama triangle, no one wins because all three roles "cause pain", "perpetuate shame [and] guilt", and "keep people caught in dysfunctional behavior" (quoted from Lynne Namka: The Drama Triangle, Three Roles of Victim-hood).

How to get out of the drama triangle

Awareness. To get out of the triangle, it is foremost suggested to be aware of its existence. I agree, it helps. I see it everywhere now.

Identifying one's role and starting to act differently. While we switch roles, we generally take on a preferred role that we act out most of the time, and that was learnt in childhood. (I found a test to identify one's common primary role — in German.)

But how do we act differently? We need to take another look at that uncanny triangle.

From the drama triangle to the winner triangle

I found it insightful to ask what benefit each role could potentially bring into the interaction.
Acey Choy created the Winner Triangle in 1990 as an attempt to transform social interactions away from drama. Her winner triangle shifts our perceptions of the roles: the Victim becomes the Vulnerable, the Rescuer becomes the Caring, the Persecutor becomes the Assertive.

Persecutor            Rescuer    Assertive              Caring
    ----------------------           ----------------------
    \                    /           \                    /
     \                  /             \                  /
      \                /               \                /
       \              /                 \              /
        \            /                   \            /
         \          /                     \          /
          \        /                       \        /
           \      /                         \      /
            \    /                           \    /
             \  /                             \  /
              \/                               \/
            Victim                         Vulnerable

Karpman Dreaded Drama Triangle        Choy's Winner Triangle

The Assertive "I have needs." has a calling, aims at change, initiates, and gives feedback. Skills to learn: The Assertive needs to learn to identify their needs, communicate them, and negotiate with others on eye level without shaming, punishing, or belittling them. The Assertive needs to learn to give constructive feedback, without dismissing others. The Assertive could benefit from learning to use I-Statements.

The Caring "I'm listening." shows good will and sensitivity, cares, is empathic and supportive. Skills to learn: The Caring needs to learn to respect the boundaries of others: trusting their abilities to think, problem solve and talk for themselves. Therefore, the Caring could benefit from improving their active listening skills. Furthermore the Caring needs to learn to identify and respect their own boundaries and not to do things only because it makes them feel better about themselves.

The Vulnerable "I'm struggling." has the skill of seeing and naming problems. Skills to learn: The Vulnerable needs to learn to acknowledge their feelings and needs, practice self-awareness, and self-empathy. They need to untie their self-esteem from the validation of other people. They need to learn to take care of themselves, and to strengthen their problem solving and decision making skills.

What has this got to do with autonomy and power structures?

Each of these interactions is embedded in larger society, and, as said above, we learn these roles from childhood. Therefore, we perpetually reproduce power structures, and learnt behavior. I doubt that fixing this on an individual level is sufficient to transform our interactions outside of small groups, families or work places. Although that would be a good start.

We can see that the triangle holds together because the Victim, seemingly devoid of a way to handle their own needs, transfers care of their needs to the Rescuer, thereby giving up on their autonomy. The Rescuer is provided by the Victim with a sense of autonomy, knowledge, and power, that only works while denying the Victim their autonomy. At the same time, the Persecutor denies everyone else's needs and autonomy, and feels powerful by dismissing others. I've recently mentioned the importance of autonomy in order to avoid burnout, and as a means to control one's own life. If the Rescuer can acknowledge being in the triangle, and give the Victim autonomy, by supporting them with compassion, empathy, and guidance, and at the same time respecting their own boundaries, we could find even more ways to escape the drama triangle.

Notes

My description of the roles was heavily inspired by the article Escaping Conflict and the Karpman Drama Triangle that has a lot more detail on how to escape the triangle, and how to recognize when we're moving into one of the roles. While the article is informing families living with a person suffering from a spectrum of Borderline Personality Disorder, the content applies to any dysfunctional interaction.

Mike Gabriel: Q: RoamingProfiles under GNU/Linux? What's your Best Practice?

2 April, 2020 - 13:36

This post is an open question to the wide range of GNU/Linux site admins out there. Possibly some of you have the joy of maintaining GNU/Linux also on user endpoint devices (i.e. user workstations, user notebooks, etc.), not only on corporate servers.

TL;DR: In the context of a customer project, I am researching ways of mimicking (or inventing anew) a feature well known (and sometimes also well hated) from the MS Windows world: Roaming User Profiles. If anyone does have any input on that, please contact me (OFTC/Freenode IRC, Telegram, email). I am curious what your solution may be.

The Use Case Scenario

In my use case, all user machines shall be mobile (notebooks, convertibles, etc.). The machines may be on-site most of the time, but they need offline capabilities so that the users can transparently move off-site and continue their work. At the same time, a copy of the home directory (or the home directory itself) shall be stored on some backend fileservers (for central backups as well as for giving the user the possibility to log in to another machine and be up-and-running +/- out-of-the-box).

The Vision

Initial Login

Ideally, I'd like to have a low level file system feature for this that handles it all. On corporate user logon (which must take place on-site and uses some LDAP database as backend), the user credentials get cached locally (and get re-mapped and re-cached with every on-site login later on), and the home directory gets mounted from a remote server at first.

Shortly after having logged in everything in the user's home gets sync'ed to a local cache in the background without the user noticing. At the end of the sync a GUI user notification would be nice, e.g. like "All user data has been cached locally, you are good to go and leave off-site now with this machine."

Moving Off-Site

A day later, the user may be travelling or such; they log into the machine again, the machine senses that it is offline or on some alien (non-corporate) network, but the user can just continue their work, all in the local cache.

Several days later, the same user with the same machine returns back to office, logs into the machine again, and immediately after login, all cached data gets synced back to the user's server filespace.

Possible Conflict Policies

Now there might be cases where the user has been working locally for a while and the profile data has received slight changes. The user might have been able to log into other corporate servers from the alien network he*she is on, and with that login some user profile files will probably have changed.

Regarding client-server sync policies, one could now enforce a client-always-wins policy that leads to changes being dropped server-side once the user's mobile workstation returns on-site. One could also set up a bi-directional sync policy for normal data files, but a client-always-wins policy for configuration files (.files and .folders). And so on.
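
As a very rough sketch of a client-always-wins policy (rsync-based; the server name and paths are hypothetical), the two sync steps could look like this:

# First on-site login: seed the local cache from the file server.
rsync -a fileserver:/srv/homes/$USER/ /home/$USER/

# Returning on-site: push local changes back, letting the client win
# over anything that changed server-side in the meantime.
rsync -a --delete /home/$USER/ fileserver:/srv/homes/$USER/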

Request for Feedback and Comments

I could go on and on making up edge and corner cases for all of this. We had a little discussion on this on the #debian-devel IRC channel some days ago already. Thanks to all contributors to that discussion.

And again, if you have solved the above riddle on your site and are corporate-wise allowed to share the concept, I'd be happy about your feedback.

Please get in touch!

light+love
Mike (aka sunweaver on the Fediverse and in Debian)

Ben Hutchings: Debian LTS work, March 2020

2 April, 2020 - 04:34

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and carried over 0.75 hours from February. I only worked 12.25 hours this month, so I will carry over 8.5 hours to April.

I issued DLA 2114-1 for the update to linux-4.9.

I continued preparing and testing the next update to Linux 3.16. This includes a number of filesystem fixes that require running the "xfstests" test suite.

I also replied to questions from LTS contributors and users, sent to me personally or on the public mailing list.

Joachim Breitner: 30 years of Haskell

2 April, 2020 - 01:16

Vitaly Bragilevsky, in a mail to the GHC Steering Committee, reminded me that the first version of the Haskell programming language was released exactly 30 years ago. On April 1st. So that raises the question: Was Haskell just an April fool's joke that was never retracted?

[Image: the cover of the 1.0 Haskell report]

My own first exposure to Haskell was in April 2005; the oldest piece of Haskell I could find on my machine is this part of a university assignment from April:

> pascal 1 = [1]
> pascal (n+1) = zipWith (+) (x ++ [0]) (0 : x) where x = pascal n

This means that I now have witnessed half of Haskell's existence. I have never regretted getting into Haskell, and every time I come back from having worked in other languages (which all have their merits too), I greatly enjoy the beauty and elegance of expressing my ideas in a lazy and strictly typed language with a concise syntax.

I am looking forward to witnessing (and, to a very small degree, shaping) the next 15 years of Haskell.

Sylvain Beucler: Debian LTS and ELTS - March 2020

1 April, 2020 - 21:26

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In March, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 30h for LTS (out of 30 max; all done) and 20h for ELTS (out of 20 max; I did 0).

Most contributors claimed vulnerabilities by performing early CVE monitoring/triaging on their own, making me question the relevance of the Front-Desk role. It could be due to a transient combination of a higher hours volume and fewer open vulnerabilities.

Working as a collective of hourly paid freelancers makes it more likely that we work in silos, resulting in little interaction when raising workflow topics on the mailing list. Maybe we're reaching a point where regular team meetings will be beneficial.

As previously mentioned, I structure my work keeping the global Debian security in mind. It can be stressful though, and I believe current communication practices may deter such initiatives.

ELTS - Wheezy

  • No work. ELTS has few sponsors right now and few vulnerabilities to fix, which is why I could not work on it this month. I gave back my hours at the end of the month.

LTS - Jessie

  • lua-cgi: global triage: CVE-2014-10399,CVE-2014-10400/lua-cgi not-affected, CVE-2014-2875/lua-cgi referenced in BTS
  • libpcap: global triage: request CVE-2018-16301 rejection as upstream failed to; got MITRE to reject (not "dispute") a CVE for the first time!
  • nfs-utils: suites harmonization: CVE-2019-3689: ping upstream again, locate upstream'd commit, reference it at BTS and MITRE; close MR which had been ignored and now redone following said referencing
  • slurm-llnl: re-add; create CVE-2019-12838 reproducer, test abhijith's pending upload; reference patches; witness regression in CVE-2019-19728, get denied access to upstream bug, triage as ignored (minor issue + regression); security upload DLA 2143-1
  • xerces-c: global triage progress: investigate ABI-(in)compatibility of hle's patch direction; initiate discussion at upstream and RedHat; mark postponed
  • nethack: jessie triage fix: mark end-of-life
  • tor: global triage fix: CVE-2020-10592,CVE-2020-10593: fix upstream BTS links, fix DSA reference
  • php7.3: embedded copies: removed from unstable (replaced with php7.4); checked whether libonig is still bundled (no, now properly unbundled at upstream level); jessie still not-affected
  • okular: CVE-2020-9359: reference PoC, security upload DLA 2159-1

Documentation/Scripts

  • data/dla-needed.txt: tidy/refresh pending packages status
  • LTS/Development: DLA regression numbering when a past DLA affects a different package
  • LTS/FAQ: document past LTS releases archive location following a user request; trickier than expected, 3 contributors required to find the answer
  • Question aggressive package claims; little feedback
  • embedded-copies: libvncserver: reference various state of embedded copies in italc/ssvnc/tightvnc/veyon/vncsnapshot; builds on initial research from sunweaver
  • Attempt to progress on libvncserver embedded copies triaging; technical topic not answered, organizational topic ignored
  • phppgadmin: provide feedback on CVE-2019-10784
  • Answer general workflow question about vulnerability severity
  • Answer GPAC CVE information request from a PhD student at CEA, following my large security update

Joey Hess: DIN distractions

1 April, 2020 - 21:12

My offgrid house has an industrial automation panel.

I started building this in February, before covid-19 was impacting us here, when lots of mail orders were no big problem, and getting an unusual 3D-printed DIN rail bracket for an SSD was just a couple of clicks.

I finished a month later, deep into social isolation and quarantine, scrounging around the house for scrap wire, scavenging screws from unused stuff and cutting them to size, and hoping I would not end up in a "need just one more part that I can't get" situation.

It got rather elaborate, and working on it was often a welcome distraction from the news when I couldn't concentrate on my usual work. I'm posting this now because people sometimes tell me they like hearing about my offgrid stuff, and perhaps you could use a distraction too.

The panel has my house's computer on it, as well as both AC and DC power distribution, breakers, and switching. Since the house is offgrid, the panel is designed to let every non-essential power drain be turned off, from my offgrid fridge to the 20 terabytes of offline storage to the inverter and satellite dish, the spring pump for my gravity flow water system, and even the power outlet by the kitchen sink.

Saving power is part of why I'm using old-school relays and stuff and not IOT devices, the other reason is of course: IOT devices are horrible dystopian e-waste. I'm taking the utopian Star Trek approach, where I can command "full power to the vacuum cleaner!"

At the core of the panel, next to the cubietruck arm board, is a custom IO daughterboard. Designed and built by hand to fit into a DIN mount case, it uses every GPIO pin on the cubietruck's main GPIO header. Making this board took 40+ hours, and was about half the project. It got pretty tight in there.

This was my first foray into DIN rail mount, and it really is industrial lego -- a whole universe of parts that all fit together and are immensely flexible. Often priced more than seems reasonable for a little bit of plastic and metal, until you look at the spec sheets and the ratings. (Total cost for my panel was $400.) It's odd that it's not more used outside its niche -- I came of age in the Bay Area, surrounded by rack mount equipment, but no DIN mount equipment. Hacking the hardware in a rack is unusual, but DIN invites hacking.

Admittedly, this is a second system kind of project, replacing some unsightly shelves full of gear and wires everywhere with something kind of overdone. But should be worth it in the long run as new gear gets clipped into place and it evolves for changing needs.

Also, wire gutters, where have you been all my life?

Finally, if you'd like to know what everything on the DIN rail is, from left to right: Ground block, 24v DC disconnect, fridge GFI, spare GFI, USB hub switch, computer switch, +24v block, -24v block, IO daughterboard, 1tb SSD, arm board, modem, 3 USB hubs, 5 relays, AC hot block, AC neutral block, DC-DC power converters, humidity sensor.


Creative Commons License: The copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.