Planet Debian

Planet Debian - https://planet.debian.org/

Markus Koschany: My Free Software Activities in December 2019

12 January, 2020 - 00:36

Welcome to gambaru.de. Here is my monthly report covering what I have been doing for Debian. If you’re interested in Java, games, and LTS topics, this might be for you.

Debian Games
  • I started the month by backporting the latest version of minetest to buster-backports.
  • New versions of Springlobby, the single and multiplayer lobby for the Spring RTS engine, and Freeciv (now at 2.6.1) were packaged.
  • I had to remove python-pygccxml as a build-dependency from spring because of the Python 2 removal, and another unrelated build failure got fixed as well.
  • I also released a new version of the debian-games metapackages. A considerable number of games were removed from Debian in the past months, in part due to the ongoing Python 2 removal but also because of inactive maintainers or upstreams. There were also some new games though. Check out the 3.1 changelog for more information. As a consequence of our Python 2 goal, the development metapackage for Python 2 is gone now.
Debian Java Misc
  • The imlib2 image library was updated to version 1.6.1 and now supports the webp image format.
  • I backported the Thunderbird addon dispmua to Buster and Stretch because the new Thunderbird ESR version had made it unusable.
  • I also updated binaryen, a compiler and library for WebAssembly, and asked upstream if they could relax the build-dependency on Git, which they did.
Debian LTS

This was my 46th month as a paid contributor and I have been paid to work 16.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

From 23.12.2019 until 05.01.2020 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in sudo, shiro, waitress, sa-exim, imagemagick, nss, apache-log4j1.2, sqlite3, lemonldap-ng, libsixel, graphicsmagick, debian-lan-config, xerces-c, libpodofo, vim, pure-ftpd, gthumb, opencv, jackson-databind, pillow, fontforge, collabtive, libhibernate-validator-java, lucene-solr and gpac.

  • DLA-2051-1. Issued a security update for intel-microcode fixing 2 CVEs.
  • DLA-2058-1. Issued a security update for nss fixing 1 CVE.
  • DLA-2062-1. Issued a security update for sa-exim fixing 1 CVE.
  • I prepared a security update for tomcat7 by updating to the latest upstream release in the 7.x series. It is pending review by Mike Gabriel at the moment.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my nineteenth month and I have been assigned to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 23.12.2019 until 05.01.2020 and I triaged CVEs in sqlite3, libxml2 and nss.
  • ELA-200-2. Issued a security update for intel-microcode.
  • Worked on tomcat7, CVE-2019-12418 and CVE-2019-17563, and finished the patches prepared by Mike Gabriel. We have discovered some unrelated test failures and are currently investigating the root cause of them.
  • Worked on nss, which is required to build OpenJDK 7 and also needed at runtime for the SunEC security provider. I am currently investigating CVE-2019-17023, which was assigned only a few days ago.
  • ELA-206-1. Issued a security update for apache-log4j1.2 fixing 1 CVE.

Thanks for reading and see you next time.

Ritesh Raj Sarraf: Laptop Mode Tools 1.73

11 January, 2020 - 16:44
Laptop Mode Tools 1.73

I am pleased to announce the release of Laptop Mode Tools version 1.73.

This release includes many bug fixes. For user convenience, two new command options have been added.

rrs@priyasi:~$ laptop_mode -h
****************************
Following user commands are understood
status      :   Display a Laptop Mode Tools power savings status
power-stats  :  Display the power statistics on the machine
power-events :  Trap power related events on the machine
help        :   Display this help message (--help, -h)
version     :   Display program version (--version, -v)
****************************
15:22 ♒ ༐ ☺ 😄


rrs@priyasi:~$ sudo laptop_mode status
[sudo] password for rrs: 
Mounts:
   /dev/mapper/nvme0n1p4_crypt on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,autodefrag,subvolid=5,subvol=/)
   /dev/nvme0n1p3 on /boot type ext4 (rw,relatime)
   /dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
   /dev/fuse on /run/user/1000/doc type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
 
Drive power status:
   Cannot read /dev/[hs]d[abcdefgh], permission denied - /usr/sbin/laptop_mode needs to be run as root
 
(NOTE: drive settings affected by Laptop Mode cannot be retrieved.)
 
Readahead states:
   /dev/mapper/nvme0n1p4_crypt: 128 kB
   /dev/nvme0n1p3: 128 kB
   /dev/nvme0n1p1: 128 kB
 
Laptop Mode Tools is allowed to run: /var/run/laptop-mode-tools/enabled exists.
 
/proc/sys/vm/laptop_mode:
   0
 
/proc/sys/vm/dirty_ratio:
   40
 
/proc/sys/fs/xfs/xfssyncd_centisecs:
   3000
 
/proc/sys/vm/dirty_background_ratio:
   10
 
/proc/sys/vm/dirty_expire_centisecs:
   3000
 
/proc/sys/vm/dirty_writeback_centisecs:
   500
 
......SNIPPED......

/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq:
   400000
 
/sys/devices/system/cpu/cpu5/cpufreq/cpuinfo_max_freq:
   2001000
 
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor:
   schedutil
 
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor:
   schedutil
 
/proc/acpi/button/lid/LID0/state:
   state:      open
 
/sys/class/power_supply/AC/online:
   1
 
/sys/class/power_supply/BAT0/status:
   Charging
 
15:22 ♒ ༐ ☺ 😄



rrs@priyasi:~$ laptop_mode power-stats
Power Supply details for /sys/class/power_supply/AC

P: /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
L: 0
E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
E: POWER_SUPPLY_NAME=AC
E: POWER_SUPPLY_ONLINE=1
E: SUBSYSTEM=power_supply

Power Supply details for /sys/class/power_supply/BAT0

P: /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
L: 0
E: DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
E: POWER_SUPPLY_NAME=BAT0
E: POWER_SUPPLY_STATUS=Charging
E: POWER_SUPPLY_PRESENT=1
E: POWER_SUPPLY_TECHNOLOGY=Li-poly
E: POWER_SUPPLY_CYCLE_COUNT=0
E: POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
E: POWER_SUPPLY_VOLTAGE_NOW=8760000
E: POWER_SUPPLY_CURRENT_NOW=545000
E: POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
E: POWER_SUPPLY_CHARGE_FULL=6592000
E: POWER_SUPPLY_CHARGE_NOW=6526000
E: POWER_SUPPLY_CAPACITY=98
E: POWER_SUPPLY_CAPACITY_LEVEL=Normal
E: POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
E: POWER_SUPPLY_MANUFACTURER=SMP
E: POWER_SUPPLY_SERIAL_NUMBER=1549
E: SUBSYSTEM=power_supply

15:23 ♒ ༐ ☺ 😄



rrs@priyasi:~$ laptop_mode power-events
Running Laptop Mode Tools in event tracing mode. Press ^C to interrupt
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[140321.536870] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=0
SEQNUM=5908

KERNEL[140321.569526] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Discharging
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8761000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5909

UDEV  [140321.577770] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=0
SEQNUM=5908
USEC_INITIALIZED=140321550931

UDEV  [140321.582123] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Discharging
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8761000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5909
USEC_INITIALIZED=140321580812

KERNEL[140324.857185] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=1
SEQNUM=5912

UDEV  [140324.916156] change   /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0003:00/power_supply/AC
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=AC
POWER_SUPPLY_ONLINE=1
SEQNUM=5912
USEC_INITIALIZED=140324887055

KERNEL[140324.917955] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Unknown
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8622000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5913

UDEV  [140324.922916] change   /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0 (power_supply)
ACTION=change
DEVPATH=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0A:00/power_supply/BAT0
SUBSYSTEM=power_supply
POWER_SUPPLY_NAME=BAT0
POWER_SUPPLY_STATUS=Unknown
POWER_SUPPLY_PRESENT=1
POWER_SUPPLY_TECHNOLOGY=Li-poly
POWER_SUPPLY_CYCLE_COUNT=0
POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7600000
POWER_SUPPLY_VOLTAGE_NOW=8622000
POWER_SUPPLY_CHARGE_FULL_DESIGN=6842000
POWER_SUPPLY_CHARGE_FULL=6592000
POWER_SUPPLY_CHARGE_NOW=6526000
POWER_SUPPLY_CAPACITY=98
POWER_SUPPLY_CAPACITY_LEVEL=Normal
POWER_SUPPLY_MODEL_NAME=DELL G8VCF6C
POWER_SUPPLY_MANUFACTURER=SMP
POWER_SUPPLY_SERIAL_NUMBER=1549
SEQNUM=5913
USEC_INITIALIZED=140324922572

^C
15:24 ♒ ༐ ☚ 😟 => 130

A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

1.73 - Sat Jan 11 14:52:11 IST 2020
* Respect black/white lists when disabling autosuspend
* Add newer power supply names
* Fix crash due external battery of mouse
* Honor configuration setting for battery level polling
* cpufreq: intel_pstate should use performance governors
* runtime-pm: Speed up by avoiding fork in echo_to_file
* runtime-pm: Inline echo_to_file_do
* runtime-pm: Fix echo_to_file* indentation
* runtime-pm: Speed up by avoiding fork in listed_by_{id,type}
* runtime-pm: Simplify vendor/product match
* add help and version user commands
* Add a power-stats status command
* Separate power sysfs attributes and add sysfs status attribute
* Add device type 'sd' to default blacklist
* Fix rpm spec file for new installable files

Source tarball and Fedora/SUSE RPM packages are available at: https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki

Mailing List: https://groups.google.com/d/forum/laptop-mode-tools

What is Laptop Mode Tools
Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

Russ Allbery: Review: True Porn Clerk Stories

11 January, 2020 - 13:06

Review: True Porn Clerk Stories, by Ali Davis

Publisher: Amazon.com
Copyright: August 2009
ASIN: B002MKOQUG
Format: Kindle
Pages: 160

The other day I realized, as a cold claw of pure fear squeezed my frantic heart, that I have been working as a video clerk for ten months.

This is a job that I took on a temporary basis for just a month or two until freelancing picked back up and I got my finances in order.

Ten months.

It has been a test of patience, humility, and character.

It has been a lesson in dealing with all humankind, including their personal bodily fluids.

It has been $6.50 an hour.

If you're wondering whether you'd heard of this before and you were on the Internet in the early 2000s, you probably have. This self-published book is a collection of blog posts from back when blogs were a new thing and went viral before Twitter existed. It used to be available on-line, but I don't believe it is any more. I ran across a mention of it about a year ago and felt like reading it again, and also belatedly tossing the author a tiny bit of money.

I'm happy to report that, unlike a lot of nostalgia trips, this one holds up. Davis's stories are still funny and the meanness fairy has not visited and made everything awful. (The same, alas, cannot be said for Acts of Gord, which is of a similar vintage but hasn't aged well.)

It's been long enough since Davis wrote her journal that I feel like I have to explain the background. Back in the days when the Internet was slow and not many people had access to it, people went to a local store to rent movies on video tapes (which had to be rewound after watching, something that customers were irritatingly bad at doing). Most of those only carried normal movies (Blockbuster was the ubiquitous chain store, now almost all closed), but a few ventured into the far more lucrative, but more challenging, business of renting porn. Some of those were dedicated adult stores; others, like the one that Davis worked at, carried a mix of regular movies and, in a separate part of the store, porn. Prior to the days of ubiquitous fast Internet, getting access to video porn required going into one of those stores and handing lurid video tape covers and money to a human being who would give you your rented videos. That was a video clerk.

There is now a genre of web sites devoted to stories about working in retail and the bizarre, creepy, abusive, or just strange things that customers do (Not Always Right is probably the best known). Davis's journal predated all of that, but is in the same genre. I find most of those sites briefly interesting and then get bored with them, but I had no trouble reading this (short) book cover to cover even though I'd read the entries on the Internet years ago.

One reason for that is that Davis is a good story-teller. She was (and I believe still is) an improv comedian, and it shows. Many of the entries are stories about specific customers, who Davis gives memorable code names (Mr. Gentle, Mr. Cheekbones, Mr. Creaky) and describes quickly and efficiently. She has a good sense of timing and keeps the tone at "people are amazingly strange and yet somehow fascinating" rather than slipping too far into the angry ranting that, while justified, makes a lot of stories of retail work draining to read.

That said, I think a deeper reason why this collection works is that a porn store does odd things to the normal balance of power between a retail employee and their customers. Most retail stories are from stores deeply embedded in the "customer is always right" mentality, where the employee is essentially powerless and has to take everything the customer dishes out with a smile. The stories told by retail employees are a sort of revenge, re-asserting the employee's humanity by making fun of the customer. But renting porn is not like a typical retail transaction.

A video clerk learns things about a customer that perhaps no one else in their life knows, shifting some of the vulnerability back to the customer. The store Davis worked at was one of the most comprehensive in the area, and in a relatively rare business, so the store management knew they were going to get business anyway and were not obsessed with keeping every customer happy. They had regular trouble with customers (the 5% of retail customers who get weird in a porn store often get weird in disgusting and illegal ways) and therefore empowered the store clerks to be more aggressive about getting rid of unwanted business. That meant the power balance between the video clerks and the customers, while still not exactly equal, was more complicated and balanced in ways that make for better (and less monotonously depressing) stories.

There are, of course, stories of very creepy customers here, as well as frank thoughts on porn and people's consumption habits from a self-described first-amendment feminist who tries to take the over-the-top degrading subject matter of most porn with equanimity but sometimes fails. But those are mixed with stories of nicer customers, which gain something that's hard to describe from the odd intimacy of knowing little about them except part of their sex life. There are also some more-typical stories of retail work that benefit from the incongruity between their normality and the strangeness of the product and customers. Davis's account of opening the store by playing Aqua mix tapes is glorious. (Someone else who likes Aqua for much the same reason that I do!)

Content warning for public masturbation, sex-creep customers, and lots of descriptions of the sorts of degrading (and sexist and racist) sex acts portrayed on porn video boxes, of course. But if that doesn't drive you away, these are still-charming and still-fascinating slice-of-life stories about retail work in a highly unusual business that thrived for one brief moment in time and effectively no longer exists. Recommended, particularly if you want the nostalgia factor of re-reading something you vaguely remember from twenty years ago.

Rating: 7 out of 10

Anisa Kuci: Outreachy post 3 - Midterm report

11 January, 2020 - 01:37

Time passes quickly when you do the things that you like, and so the first six weeks of Outreachy have flown by. The first half of the internship has been an amazing experience for me. I have worked on and learned so many new things. I became more closely familiar with the Debian project, which I had already been contributing to in the past, though less intensively. I am very happy to get to know more people from the community, to feel so welcomed and to find such a warm environment.

Since the first weeks of the internship, one of my tasks has been working on fundraising materials for DebConf20 using LaTeX, which is an amazing tool for creating different types of documents. My LaTeX skills have improved, and the more I use it, the more I discover how powerful it is and the variety of things you can do with it. Lately I have worked on the flyer and brochure that will be sent to potential sponsors.

On the flyer I removed the translation elements, since this year the materials will be in English only. I updated the content to make it relevant for this year, and also updated the logo to the winning entry of a contest the local team ran. Matching the dominant color of the DebConf20 logo, I created a color scale that we are using for headlines and decorative elements within the fundraising material and on the conference web page.

As for the fundraising brochure, I took the content from a Google doc that was carefully created by my mentor Karina and converted it into LaTeX. I adapted it with the new logo, colors and monetary values in the local currency. For this I needed to create a TeX \newcommand, as the ILS currency symbol (₪) is not supported natively. This also restricted the choice of available fonts, because the ILS symbol needs to be part of the font. With support from the wider DebConf team we settled on Liberation Sans. As we work on the visual identity of DebConf20, we are close to finalizing the fundraising materials for this edition.
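
For illustration, a minimal sketch of such a command could look like the following; the macro name, font setup and example amount here are just placeholders, not the actual DebConf20 sources:

% Minimal sketch (XeLaTeX/LuaLaTeX): a command for the new Israeli shekel sign.
% The macro name \ils and the example amount are illustrative assumptions.
\documentclass{article}
\usepackage{fontspec}
\setmainfont{Liberation Sans}        % the chosen font must contain U+20AA (₪)
\newcommand{\ils}{\symbol{"20AA}}    % typeset U+20AA from the current font
\begin{document}
Gold sponsorship: \ils\,20,000
\end{document}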

I have also worked on the draft email templates that I have proposed for the next phases of contacting sponsors, hoping to receive good feedback from the team. They are available in a private DebConf git repo. The basic idea is to highlight new aspects of the benefits of sponsoring a DebConf with each contact we make while reaching out to sponsors.

Besides practicing LaTeX, I have also worked a lot with git, and it has been very helpful for me to practice. There is so much information to absorb and so much you can do with git, and I am trying to get beyond the common level of understanding it.

Another of my tasks is documentation, so I have worked on this too, in parallel. As each DebConf is organized in a different country every year, you can imagine that not everything is familiar to the local team, even if they are part of Debian, and much also depends on the experience they have with organizing events or with fundraising specifically. So, working on fundraising now, I have come across many things I was not completely familiar with, and I have started documenting the workflow so that the process will hopefully be more convenient and smooth for future DebConf local organizing teams.

As mentioned in my last blog post, I have already joined the main communication channels that the Debian community uses. I try to be as available as I can and to stay up to date with all the information that might be relevant to my internship. I participate in all the biweekly DebConf20 team meetings, giving updates about my progress and staying in the loop on organizational topics related to the conference.

I stay in contact with my mentors Daniel and Karina via IRC and email. I would like to take a moment to thank them for all their encouragement, support and feedback, which has helped me improve and has motivated me a lot to continue working on this awesome project. I also keep in touch with the wider community via IRC, Planet Debian and by following the mailing lists.

Last but not least, I also participate in the Outreachy webchats, where I had the chance to learn a little about the backgrounds of other Outreachy interns and meet the people who run the Outreachy program. I am so glad to see what a safe, easygoing and inclusive environment they have created for everyone.

My experience so far has been a blast!

Dirk Eddelbuettel: rfoaas 2.1.0: New upstream so new access point!

10 January, 2020 - 07:37

FOAAS, having been resting upstream for some time, released version 2.1.0 of its wonderful service this week! So without too much further ado we went to work and added support for it. And now we are in fact thrilled to announce that release 2.1.0 of rfoaas is now on CRAN as of this afternoon (with a slight delay as yours truly managed to state the package release date as 2019-01-09 which was of course flagged as ‘too old’).

The new 2.1.0 release of FOAAS brings a full eleven new REST access points, namely even(), fewer(), ftfty(), holygrail(), idea(), jinglebells(), legend(), logs(), ratsarse(), rockstar(), and waste(). On our end, documentation and tests were updated.

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Lisandro Damián Nicanor Pérez Meyer: Qt 4 removed from Debian bullseye (current testing)

10 January, 2020 - 05:31
Today Qt 4 (aka src:qt4-x11) has been removed from Debian bullseye, which is what we currently know as "testing". We plan to remove it from unstable pretty soon.



Russ Allbery: DocKnot 3.02

9 January, 2020 - 08:04

DocKnot is my set of tools for generating package documentation and releases. The long-term goal is for it to subsume the various tools and ad hoc scripts that I use to manage my free software releases and web site.

This release includes various improvements to docknot dist for generating a new distribution tarball: xz-compressed tarballs are created automatically if necessary, docknot dist now checks that the distribution tarball contains all of the expected files, and it correctly handles cleaning the staging directory when regenerating distribution tarballs. This release also removes make warnings when testing C++ builds since my current Autoconf machinery in rra-c-util doesn't properly exclude options that aren't supported by C++.

This release also adds support for the No Maintenance Intended badge for orphaned software in the Markdown README file, and properly skips a test on Windows that requires tar.

With this release, the check-dist script on my scripts page is now obsolete, since its functionality has been incorporated into DocKnot. That script will remain available from my page, but I won't be updating it further.

You can get the latest release from CPAN or the DocKnot distribution page. I've also uploaded Debian packages to my personal repository. (I'm still not ready to upload this to Debian proper since I want to make another major backwards-incompatible change first.)

Dirk Eddelbuettel: BH 1.72.0-3 on CRAN

9 January, 2020 - 05:30

The BH 1.72.0-1 release of BH required one update 1.72.0-2 when I botched a hand-edited path (to comply with the old-school path-length-inside-tar limit).

Turns out another issue needed a fix. This release improved on prior ones by starting from a pristine directory. But as a side effect, Boost Accumulators ended up incomplete, with only the depended-upon-by-others files included (by virtue of the bcp tool). So now we have declared Boost Accumulators a full-fledged part of BH, ensuring that bcp copies it “whole”. If you encounter issues with another incomplete part, please file an issue ticket at the GitHub repo.

No other changes were made.

Also, this fix was done initially while CRAN took a well-deserved winter break, and I had tweeted on Dec 31 about availability via drat and may use this more often for pre-releases. CRAN is now back, and this (large !!) package is now processed as part of the wave of packages that were in waiting (and Henrik got that right yesterday…).

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve Kemp: I won't write another email client

9 January, 2020 - 00:30

Once upon a time I wrote an email client, in a combination of C++ and Lua.

Later I realized it was flawed and, because I hadn't yet realized that writing email clients is hard, I decided to write it anew (again in C++ and Lua).

Nowadays I do realize how hard writing email clients is, so I'm not going to do that again. But still .. but still ..

I was doing some mail-searching recently and realized I wanted to write something that processed all the messages in a Maildir folder. Imagine I wanted to run:

 message-dump ~/Maildir/people-foo/ ~/Maildir/people-bar/  \
     --format '${flags} ${filename} ${subject}'

As this required access to (arbitrary) headers I had to read, parse, and process each message. It was slow, but it wasn't that slow. The second time I ran it, even after adjusting the format-string, it was nice and fast because buffer-caches rock.

Anyway after that I wanted to write a script to dump the list of folders (because I store them recursively so ls -1 ~/Maildir wasn't enough):

 maildir-dump --format '${unread}/${total} ${path}'

I guess you can see where this is going now! If you have the following three primitives, you have a mail-client (albeit read-only):

  • List "folders"
  • List "messages"
  • List a single message.
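
As a rough illustration of those primitives (and nothing to do with the actual tool described below), listing messages in a Maildir can be done with nothing but Python's standard library mailbox module:

#!/usr/bin/env python3
"""Toy sketch of the read-only 'list messages' primitive above."""
import mailbox
import sys


def dump_messages(maildir_path, fmt="{flags} {key} {subject}"):
    """Print one line per message in a Maildir, driven by a format string."""
    md = mailbox.Maildir(maildir_path, create=False)
    for key in md.keys():
        msg = md[key]  # a mailbox.MaildirMessage
        print(fmt.format(
            flags=msg.get_flags(),       # e.g. "RS" = replied + seen
            key=key,                     # the message key, not the on-disk filename
            subject=msg.get("Subject", ""),
        ))


if __name__ == "__main__":
    for path in sys.argv[1:]:
        dump_messages(path)

Listing folders is then little more than walking the directory tree for subdirectories containing cur/ and new/, and listing a single message is parsing one file.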

So I hacked up a simple client that would have a sub-command for each one of these tasks. I figured somebody else could actually use that, be a little retro, be a little cool, pretend they were using MH. Of course I'd have to write something horrid as a bash-script to prove it worked - probably using dialog to drive it.

And then I got interested. The end result is a single golang binary that will:

  • List maildirs, with a cute format string.
  • List messages, with a cute format string.
  • List a single message, decoding the RFC2047 headers, showing text/plain, etc.
  • AND ALSO USE ITSELF TO PROVIDE A GUI

And now I wonder, am I crazy? Is writing an email client hard? I can't remember.

Probably best to forget the GUI exists. Probably best to keep it as a couple of standalone sub-commands for "scripting email stuff".

But still .. but still ..

Enrico Zini: Checking sphinx code blocks

8 January, 2020 - 23:14

I'm too lazy to manually check code blocks in autogenerated sphinx documentation to see if they are valid and reasonably up to date. Doing it automatically feels much more interesting to me: here's how I did it.

This is a simple sphinx extension to extract code blocks into a JSON file.

If the documentation is written well enough, I even get annotations on what programming language each snippet is written in:

## Extract code blocks from sphinx

from docutils.nodes import literal_block, Text
import json

found = []


def find_code(app, doctree, fromdocname):
    for node in doctree.traverse(literal_block):
        # if "dballe.DB.connect" in str(node):
        lang = node.attributes.get("language", "default")
        for subnode in node.traverse(Text):
            found.append({
                "src": fromdocname,
                "lang": lang,
                "code": subnode,
            })


def output(app, exception):
    if exception is not None:
        return

    dest = app.config.test_code_output
    if dest is None:
        return

    with open(dest, "wt") as fd:
        json.dump(found, fd)


def setup(app):
    app.add_config_value('test_code_output', None, '')

    app.connect('doctree-resolved', find_code)
    app.connect('build-finished', output)

    return {
        "version": '0.1',
        'parallel_read_safe': True,
        'parallel_write_safe': True,
    }
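
For example, wiring the extension into a project's conf.py could look like this; the module name extract_code and the output file name are placeholders, not necessarily what I actually use:

# conf.py (sketch): enable the extension and tell it where to dump snippets.
# "extract_code" is a placeholder name for the module shown above.
extensions = [
    # ... other extensions already used by the project ...
    "extract_code",
]

# Read by output() above via app.config.test_code_output.
test_code_output = "code_blocks.json"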

And this is an early prototype in Python that runs each code block in a subprocess to see if it works.

It does interesting things, such as:

  • Walk the AST to see if the code expects some well known variables to have been set, and prepare the test environment accordingly
  • Collect DeprecationWarnings to spot old snippets using deprecated functions
  • Provide some unittest-like assert* functions that snippets can then use if they want
  • Run every snippet in a subprocess, which then runs in a temporary directory, deleted after execution
  • Colorful output, including highlighting of code lines that threw exceptions
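
Stripped of all those niceties, the core idea is roughly this kind of loop (a heavily simplified sketch, not the prototype itself; the input file name matches the conf.py sketch above):

# Simplified sketch: run each extracted snippet in a subprocess inside a
# throwaway directory and report failures. Illustrative only.
import json
import subprocess
import sys
import tempfile

with open("code_blocks.json") as fd:
    snippets = json.load(fd)

for snippet in snippets:
    if snippet["lang"] not in ("python", "python3", "default"):
        continue
    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            [sys.executable, "-W", "error::DeprecationWarning",
             "-c", snippet["code"]],
            cwd=workdir, capture_output=True, text=True,
        )
    if result.returncode != 0:
        print(f"FAIL {snippet['src']}:\n{result.stderr}")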

Thomas Lange: 20 years of FAI and a new release

8 January, 2020 - 19:21

Twenty years ago, on December 20, 1999, FAI 1.0 was released. Many things have happened since then. Some milestones:

  • 1999: version 1.0
  • 2000: first official Debian package
  • 2001: first detailed user report ("No complete installs. Teething problems.")
  • 2005: Wiki page and IRC
  • 2005: FAI CD
  • 2006: fai dirinstall
  • 2007: new partitioning tool setup-storage
  • 2009: new web design
  • 2014: btrfs support
  • 2016: autodiscover function, profiles menu
  • 2016: fai-diskimage, cloud images
  • 2017: cross architecture builds
  • 2017: Fai.me web service
  • 2020: UEFI support

Besides that, a lot of other things happened in the FAI project. Apart from the first report, we got more than 300 detailed reports containing positive feedback. We had several FAI developer meetings and I gave more than 40 talks about FAI all over the world. In the past we had a discussion about an alleged GPL violation of FAI, and I made several attempts to get a logo for FAI, but we still do not have one. We moved from subversion to git, which was very demanding for me. The FAI.me service for customized installation and cloud images has been used more than 5000 times. The Debian Cloud team now uses FAI to build the official Debian cloud images.

I'm very happy with the outcome of this project and I would like to thank all the people who contributed to FAI in the past 20 years!

This week, I've released the new FAI version 5.9. It supports UEFI boot from CD/DVD and USB stick. Also, two new tools were added:

  • fai-sed - call sed on a file but check for changes before writing
  • fai-link - create symlinks idempotently

UEFI support in fai-cd uses only GRUB; no syslinux or isolinux is needed. New FAI installation images are also available from:

https://fai-project.org/fai-cd

The FAI.me build service is also using the newest FAI version, and the customized ISO images can now be booted in a UEFI environment.

https://fai-project.org/FAIme

Russ Allbery: C TAP Harness 4.6

8 January, 2020 - 10:04

C TAP Harness is my test framework for C software packages.

This release is mostly a release for my own convenience to pick up the reformatting of the code using clang-format, as mentioned in my previous release of rra-c-util. There are no other user-visible changes in this release.

I did do one more bit of housekeeping, namely added proper valgrind testing support to the test infrastructure. I now run the test suite under valgrind as part of the release process to look for any memory leaks or other errors in the harness or in the C TAP library.

The test suite for this package is written entirely in shell (with some C helpers), and I'm now regretting that. The original goal was to make this package maximally portable, but I ended up adding Perl tests anyway to test the POD source for the manual pages, and then to test a few other things, and now the test suite effectively depends on Perl and could have from the start. At some point, I'll probably rewrite the test machinery in Perl, which will make it far more maintainable and easier to read.

I think I've now finally learned my lesson for new packages: Trying to do things in shell for portability isn't worth it. As soon as any bit of code becomes non-trivial, and possibly before then, switch to a more maintainable programming language with better primitives and library support.

You can get the latest release from the C TAP Harness distribution page.

Ingo Juergensmann: XMPP - Prosody & Ejabberd

8 January, 2020 - 03:21

In my day job I'm responsible for maintaining the VoIP and XMPP infrastructure. That's approximately 40,000 phones and several thousand users of enterprise XMPP software. Namely, it is Cisco CUCM and IM&P on the server side and Cisco Jabber on the client side. There is also Cisco Webex and Cisco Telepresence infrastructure to maintain.

On the other hand, I'm running an XMPP server myself for a few users. It all started with ejabberd more than a decade ago or so. Then I moved to Openfire because it was more modern and had a nice web GUI for administration. At some point Prosody appeared as the new shiny star. It has now been running for many users, mostly without any problems, but also without much love and attention.

It all started as "Let's see what this Jabber stuff is..." on a subdomain like jabber.domain.com - it was only later that I discovered the benefits of SRV records and the possibility of having the same address for mail, XMPP and SIP. So I began to provide XMPP accounts for some of my mail domains as well.

A year ago I enabled XMPP for my Friendica node on Nerdica.net, the second largest Friendica node according to the-federation.info. Although there are hundreds of monthly active users on Friendica, only a handful of users are using XMPP. XMPP has had a hard time since Google and Facebook moved from open federation to walling in their user bases.

My personal impression is that there has been a lot of XMPP development in recent years, thanks to the Conversations client on Android and its Compliance Tester. With that tool it is quite easy to establish a common ground for the features most needed to meet today's user expectations in a mobile world. There is also some news regarding XMPP clients on Apple iOS, but that's for another article.

This is about the server side, namely Prosody and Ejabberd. Of course there are already several excellent comparisons of these two servers, so this is just my personal opinion and the impressions of the two that I have gathered in the past two weeks.

Prosody:
As I have the most experience with Prosody, I'll start with it. Prosody has the advantage of being actively maintained and having lots of community modules to extend its functionality. This is a big win, but there is also a flip side: you need to install and configure many contrib modules to pass 100% in the Compliance Tester, and some of those modules might not be that well maintained. Another obstacle I faced with Prosody is the configuration style: usually you have the main config file where you configure common settings, modules for all virtual hosts, and components like PubSub, MUC, HTTP Upload and such. And then there are the config files for the virtual hosts, which feature the same kind of configuration. Important to note (apparently): order does matter! This can get confusing: components are similar to loading modules, using both for the same purpose can be, well, interesting, and configuring modules and components can be challenging as well. When trying to get mod_http_upload working in the last few days, I found that a config on one virtual host was working while the same config on a different host was not. This was when I thought I might give Ejabberd a chance...

Ejabberd:
Contrary to Prosody, there is a company behind Ejabberd, and this is often perceived as a good thing that brings some stability to Ejabberd. However, when I joined the Ejabberd chat room, I learned within the first minutes from the chat log that the main developer had left that company and the company itself seemed to have lost interest in Ejabberd. The people in the chat room were relaxed, though: it's not the end of the world and there are other developers working on the code. So, no issue in the end, but that's not something you expect to read when you join a chat room for the first time. ;)
Contrary to Prosody, Ejabberd seems well prepared to pass the Compliance Tester without installing (too many) additional modules. Large sites such as conversations.im run on Ejabberd. It is also said that Ejabberd doesn't need server restarts for certain config changes, as Prosody does. The config file itself appears more straightforward and doesn't differentiate between modules and components, which makes it a little easier to understand.

So far I haven't been able to work much with Ejabberd, but one other difference is that there is a Debian repository on Prosody.im, while for Ejabberd there is no such repository; you'll have to use backports.debian.org for a newer version of Ejabberd on Debian Buster. It's up to you to decide which is better for you.

I'm still somewhat undecided whether or not to proceed with Ejabberd and migrate from Prosody. The developer of Prosody is very helpful and responsive and I like that. On the other hand, the folks in the Ejabberd chat room are very supportive as well. I like the flexibility and the large number of contrib modules for Prosody, but then again it's hard to find the correct/best one to load and configure for a given task and to satisfy the Compliance Tester. Both servers feature a web GUI for some basic tasks, but I like Ejabberd's more.

So, in the end, I'm also open for suggestions about either one. Some people will state of course that neither is the best way and I should consider Matrix, Briar or some other solutions, but that's maybe another article comparing XMPP and other options. This one is about XMPP server options: Prosody or Ejabberd. What do you prefer and why?

 

Category: Debian | Tags: Debian, Server, Software, XMPP

Enrico Zini: Staticsite for blogging

7 January, 2020 - 20:53

I just released staticsite version 1.4, dedicated to creating a blog.

After reorganising the documentation, I decided to write a simple tutorial showing how to get a new blog started.

The goal of this release was to make it so that the tutorial would be as simple as possible: the result is "A new blog in under one minute".

Once staticsite is installed [1], one can start a new blog by copy-pasting a short text file, then just adding markdown files anywhere in its directory. Staticsite can then serve a live preview of the site, automatically updated as pages are saved, and build an HTML version ready to be served.

I enjoyed picking a use case to drive a release. Next up is going to be "use staticsite to view a git repository and preview its documentation". I already use it for that, so let's see what comes out after polishing it for that use case.

  [1] I just uploaded staticsite 1.4.1 to Debian Unstable.

Reproducible Builds: Reproducible Builds in December 2019

6 January, 2020 - 19:54

Welcome to the December 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

In this report for December, we cover:

  • Media coverage: A Google whitepaper, The Update Framework graduates within the Cloud Native Computing Foundation, etc.
  • Reproducible Builds Summit 2019: What happened at our recent meetup?
  • Distribution work: The latest reports from Arch, Debian and openSUSE, etc.
  • Software development: Patches, patches, patches…
  • Mailing list summary
  • Contact: How to contribute

If you are interested in contributing to our project, please visit the Contribute page on our website.

Media coverage

Google published Binary Authorization for Borg, a whitepaper on how they reduce exposure of user data to unauthorised code as well as methods for verifying code provenance using their Borg cluster manager. In particular, the paper notes how they attempt to limit their “insider risk”, i.e. the potential for internal personnel to use organisational credentials or knowledge to perform malicious activities.

The Linux Foundation announced that The Update Framework (TUF) has graduated within the Cloud Native Computing Foundation (CNCF) and thus becomes the first specification and first security-focused project to reach the highest maturity level in that group. TUF is a technology that secures software update systems initially developed by Justin Cappos at the NYU Tandon School of Engineering.

Andrew “bunnie” Huang published a blog post asking Can We Build Trustable Hardware? Whilst it concludes pessimistically that “open hardware is precisely as trustworthy as closed hardware” it does mention that reproducible builds can:

Enable any third-party auditor to download, build, and confirm that the program a user is downloading matches the intent of the developers.

At the 36th Chaos Communication Congress (36C3) in Leipzig, Hannes Mehnert from the MirageOS project gave a presentation called Leaving legacy behind, which talks generally about the MirageOS system offering a potential alternative and minimalist approach to security, and includes a section on reproducible builds (at 38m41s).

Reproducible Builds Summit 2019

We held our fifth annual Reproducible Builds summit between the 1st and 8th December at Priscilla, Queen of the Medina in Marrakesh, Morocco.

The aim of the meeting was to spend time discussing and working on Reproducible Builds with a widely diverse agenda, and the event was a huge success.

During our time together, we updated and exchanged the status of reproducible builds in our respective projects, improved collaboration between and within these efforts, expanded the scope and reach of reproducible builds to yet more interested parties, established and continued strategic long-term thinking in a way not typically possible via remote channels, and brainstormed designs for tools to enable end-users to get the most benefit from reproducible builds.

Outside of these achievements in the hacking sessions kpcyrd made a breakthrough in Alpine Linux by producing the first reproducible package — specifically, py3-uritemplate — in this operating system. After this, progress was accelerated and by the denouement of our meeting the reproducibility status in Alpine reached 94%. In addition, Jelle van der Waa, Mattia Rizzolo and Paul Spooren discussed and implemented substantial changes to the database that underpins the testing framework that powers tests.reproducible-builds.org in order to abstract the schema in a distribution agnostic way, for example to allow submitting the results of attempts to verify officially distributed Arch Linux packages.

Lastly, Jan Nieuwenhuizen, David Terry and Vagrant Cascadian used three entirely-separate distributions (GNU Guix, NixOS and Debian) to produce a bit-for-bit identical GNU Mes binary despite using three different major versions of GCC and other toolchain components to build an initial binary, which was then used to build a final, bit-for-bit identical, binary of Mes.

The event was held at Priscilla, Queen of the Medina in Marrakesh, a location sui generis that stands for gender equality, female empowerment and the engagement of vulnerable communities locally through cultural activism. The event was open to anybody interested in working on Reproducible Builds issues, with or without prior experience.

A number of reports and blog posts have already been written, including for:

… as well as a number of tweets including ones from Jan Nieuwenhuizen celebrating progress in GNU Guix [] and Hannes [].

Distribution work

Within Debian, Chris Lamb categorised a large number of packages and issues in the Reproducible Builds notes.git repository, including identifying and creating markdown_random_email_address_html_entities and nondeterministic_devhelp_documentation_generated_by_gtk_doc.

In openSUSE, Bernhard published his monthly Reproducible Builds status update and filed the following patches:

Bernhard also filed bugs against:

The Yocto Project announced that it is running continuous tests on the reproducibility of its output which can observed through the oe-selftest runs on their build server. This was previously limited to just the mini images but this has now been extended to the larger graphical images. The test framework is available for end users to use against their own builds. Of particular interest is the production of binary identical results — despite arbitrary build paths — to allow more efficient builds through reuse of previously built objects, a topic covered in more-depth in a recent LWN article.

In Arch Linux, the database structure on tests.reproducible-builds.org was changed and the testing jobs updated to match and work has been started on a verification test job which rebuilds the officially released packages and verifies if they are reproducible or not. In the “hacking” time after our recent summit, several key packages were made reproducible, raising the amount of reproducible packages by approximately 1.5%. For example libxslt was patched with the patch originating from Debian and openSUSE.

Software development diffoscope

diffoscope is our in-depth and content-aware diff-like utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, diffoscope version 134 was uploaded to Debian unstable by Chris Lamb. He also made the following changes to diffoscope itself, including:

  • Always pass a filename with a .zip extension to zipnote otherwise it will return with an UNIX exit code of 9 and we fallback to displaying a binary difference for the entire file. []
  • Include the libarchive file listing for ISO images to ensure that timestamps – and not just dates – are visible in any difference. (#81)
  • Ensure that our autopkgtests are run with our pyproject.toml present for the correct black source code formatter settings. (#945993)
  • Rename the text_option_with_stdiout test to text_option_with_stdout [] and tidy some unnecessary boolean logic in the ISO9660 tests [].

In addition, Eli Schwartz fixed an error in the handling of the progress bar [] and Vagrant Cascadian added external tool reference for the zstd compression format for GNU Guix [] as well as updated the version to 133 [] and 134 [] in that distribution.

Project website & documentation

There was more work performed on our website this month, including:

In addition, Paul Spooren added a new page overviewing our Continuous Tests overview [], Hervé Boutemy made a number of improvements to our Java and JVM documentation expanding and clarifying various definitions as well as adding external links [][][][] and Mariana Moreira added a .jekyll-cache entry to the .gitignore file [].

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Holger Levsen:

    • Alpine:

      • Indicate where Alpine is being built on the node overview page. []
      • Turn off debugging output. []
      • Sleep longer if no packages are to be built. []
    • Misc:

      • Add some help text to our script to powercycle IONOS (neé Profitbricks) nodes. []
      • Install mosh everywhere. []
      • Only install ripgrep on Debian nodes. []
  • Mattia Rizzolo:

    • Arch Linux:

      • Normalise the suite names in the database. [][][][][]
      • Drop an unneeded line in the scheduler. []
    • Debian:

      • Fix a number of SQL errors. [][][][]
      • Use the debian.debian_support Python library over apt_pkg to perform version comparisons. []
    • Misc:

      • Permit other distributions to use our web-based package scheduling script. []
      • Reformat our power-cycling script using Black and use the Python logging module. []
      • Introduce a dsources database view to simplify some queries [] and add a build_type field to support both “doublerebuilds” and verification rebuilds [].
      • Move (almost) all the timestamps in the database schema from raw strings to “real” timestamp data types. []
      • Only block bots on jenkins.debian.net and tests.reproducible-builds.org, not any other sites. []

  • kpcyrd (for Alpine Linux):

    • Patch/install the abuild utility to one that is reproducible. [][][][]
    • Bump the number of build workers and collect garbage more frequently. [][][][]
    • Classify and display build results consistently. [][][]
    • Ensure that tmux and ripgrep are installed. [][]
    • Support building packages in the future. [][][]

Lastly, Paul Spooren removed the project overview from the bottom-left of the generated pages [] and the usual node maintenance was performed by Holger Levsen [] and Mattia Rizzolo [][].

Mailing list summary

There was considerable activity on our mailing list this month. Firstly, Bernhard M. Wiedemann posted a thread asking What is the goal of reproducible builds? in order to encourage refinements, extra questions and other contributions to what an end-user experience of reproducible builds should or even could look like.

Eli Schwartz then resurrected a previous thread titled Progress in rpm and openSUSE in 2019 to clarify some points around Arch Linux and Python package installation. Hans-Christoph Steiner followed-up to a separate thread originally started by Hervé Boutemy announcing the status of .buildinfo file support in the Java ecosystem, and Paul Spooren then informed the list that Google Summer of Code is now looking for projects for the latest cohort.

Lastly, Lars Wirzenius enquired about the status of Reproducible system images which resulted in a large number of responses.

Contact

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:


This month’s report was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Hervé Boutemy, Holger Levsen, Jelle van der Waa, Lukas Puehringer and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Julien Danjou: Atomic lock-free counters in Python

6 January, 2020 - 17:47

At Datadog, we're really into metrics. We love them, we store them, but we also generate them. To do that, you need to juggle with integers that are incremented, also known as counters.

While having an integer that changes its value sounds dull, it might not be without some surprises in certain circumstances. Let's dive in.

The Straightforward Implementation
class SingleThreadCounter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

Pretty easy, right?

Well, not so fast, buddy. As the class name implies, this works fine with a single-threaded application. Let's take a look at the instructions in the increment method:

>>> import dis
>>> dis.dis("self.value += 1")
  1           0 LOAD_NAME                0 (self)
              2 DUP_TOP
              4 LOAD_ATTR                1 (value)
              6 LOAD_CONST               0 (1)
              8 INPLACE_ADD
             10 ROT_TWO
             12 STORE_ATTR               1 (value)
             14 LOAD_CONST               1 (None)
             16 RETURN_VALUE

The self.value += 1 line of code generates 8 different operations for Python, operations that could be interrupted at any time in their flow to switch to a different thread that could also increment the counter.

Indeed, the += operation is not atomic: one needs to do a LOAD_ATTR to read the current value of the counter, then an INPLACE_ADD to add 1, to finally STORE_ATTR to store the final result in the value attribute.

If another thread executes the same code at the same time, you could end up adding 1 to an old value:

Thread-1 reads the value as 23
Thread-1 adds 1 to 23 and gets 24
Thread-2 reads the value as 23
Thread-1 stores 24 in value
Thread-2 adds 1 to 23
Thread-2 stores 24 in value

Boom. Your Counter class is not thread-safe. 😭

The Thread-Safe Implementation

To make this thread-safe, a lock is necessary. We need a lock each time we want to increment the value, so we are sure the increments are done serially.

import threading

class FastReadCounter(object):
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()
        
    def increment(self):
        with self._lock:
            self.value += 1

This implementation is thread-safe. There is no way for multiple threads to increment the value at the same time, so there's no way that an increment is lost.

The only downside of this counter implementation is that you need to take the lock each time you want to increment it. There can be a lot of contention around this lock if many threads update the counter frequently.

On the other hand, if it's barely updated and often read, this is an excellent implementation of a thread-safe counter.
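
Dropping the locked counter into the same kind of stress test (reusing the hammer helper from the sketch above together with the FastReadCounter class just shown) should always produce the exact total, whatever the thread timing:

counter = FastReadCounter()
threads = [threading.Thread(target=hammer, args=(counter, 100_000))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every increment happens under the lock, so nothing is lost.
print(counter.value)  # always 800000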

A Fast Write Implementation

There's a way to implement a thread-safe counter in Python that does not need to be locked on write. It's a trick that should only work on CPython because of the Global Interpreter Lock.

While everybody is unhappy with it, this time the GIL is going to help us. When a C function is executed and does not do any I/O, it cannot be interrupted by any other thread. It turns out the Python standard library ships a counter-like class implemented in C: itertools.count.

We can use this count class to our advantage, avoiding the need for a lock when incrementing the counter.

If you read the documentation for itertools.count, you'll notice that there's no way to read the current value of the counter. This is tricky, and this is where we'll need to use a lock to bypass this limitation. Here's the code:

import itertools
import threading

class FastWriteCounter(object):
    def __init__(self):
        self._number_of_read = 0
        self._counter = itertools.count()
        self._read_lock = threading.Lock()

    def increment(self):
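        # No lock here: next() on itertools.count runs as a single C call,
        # so under the GIL it is effectively atomic for our purposes.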
        next(self._counter)

    def value(self):
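        # Reading consumes one tick of the counter, so count the reads
        # and subtract them from what next() returns.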
        with self._read_lock:
            value = next(self._counter) - self._number_of_read
            self._number_of_read += 1
        return value

The increment code is quite simple in this case: the counter is just incremented without any lock. The GIL protects concurrent access to the internal data structure in C, so there's no need for us to lock anything.

On the other hand, Python does not provide any way to read the value of an itertools.count object. We need to use a small trick to get the current value. The value method increments the counter and then gets the value while subtracting the number of times the counter has been read (and therefore incremented for nothing).

This counter is, therefore, lock-free for writing, but not for reading: the opposite of our previous implementation.
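
To make the bookkeeping concrete, here is a small worked example (my own, continuing from the FastWriteCounter class above) showing how value() stays correct even though each read consumes one tick of the underlying count:

c = FastWriteCounter()
for _ in range(5):
    c.increment()      # the internal itertools.count now sits at 5

print(c.value())       # next() yields 5, minus 0 previous reads -> 5
print(c.value())       # next() yields 6, minus 1 previous read  -> 5
c.increment()          # the internal count moves on to 8
print(c.value())       # next() yields 8, minus 2 previous reads -> 6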

Measuring Performance

After writing all of this code, I wanted to see how the different implementations affected speed. Using the timeit module and my fancy laptop, I've measured the performance of reading and writing to each counter.

Operation    SingleThreadCounter    FastReadCounter    FastWriteCounter
increment    176 ns                 390 ns             169 ns
value        26 ns                  26 ns              529 ns

I'm glad that the performance measurements in practice match the theory 😅. Both SingleThreadCounter and FastReadCounter have the same performance for reading. Since they use a simple variable read, it makes absolute sense.

The same goes for SingleThreadCounter and FastWriteCounter, which have the same performance for incrementing the counter. Again they're using the same kind of lock-free code to add 1 to an integer, making the code fast.
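
The exact numbers obviously depend on the machine and Python version, but a rough reproduction with timeit might look like the following sketch (it assumes the three classes live in a counters.py module; the module name and the iteration count are my choices):

import timeit

for name in ("SingleThreadCounter", "FastReadCounter", "FastWriteCounter"):
    # Time one million increments and report the average cost per call.
    total = timeit.timeit(
        "c.increment()",
        setup=f"from counters import {name}; c = {name}()",
        number=1_000_000,
    )
    print(f"{name}.increment(): {total / 1_000_000 * 1e9:.0f} ns per call")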

Conclusion

It's pretty obvious, but if you're using a single-threaded application and do not have to care about concurrent access, you should stick to using a simple incremented integer.

For fun, I've published a Python package named fastcounter that provides those classes. The sources are available on GitHub. Enjoy!

Russ Allbery: rra-c-util 8.1

6 January, 2020 - 13:13

rra-c-util is my collection of utility code that I use in my various other software packages (mostly, but not only, C).

I now forget what I was reading, but someone on-line made a side reference to formatting code with clang-format, which is how I discovered that it exists. I have become a big fan of automated code reformatting, mostly via very positive experiences with Python's black and Rust's rustfmt. (I also use perltidy for my Perl code, but I'm not as fond of it; it's a little too aggressive and it changes how it formats code from version to version.) They never format things in quite the way that I want, but some amount of inelegant formatting is worth it for not having to think about or manually fix code formatting or argue with someone else about it.

So, this afternoon I spent some time playing with clang-format and got it working well enough. For those who are curious, here's the configuration file that I ended up with:

Language: Cpp
BasedOnStyle: LLVM
AlignConsecutiveMacros: true
AlignEscapedNewlines: Left
AlwaysBreakAfterReturnType: AllDefinitions
BreakBeforeBinaryOperators: NonAssignment
BreakBeforeBraces: WebKit
ColumnLimit: 79
IndentPPDirectives: AfterHash
IndentWidth: 4
IndentWrappedFunctionNames: false
MaxEmptyLinesToKeep: 2
SpaceAfterCStyleCast: true

This fairly closely matches my personal style, and the few differences are minor enough that I'm happy to change. The biggest decisions that I'm not fond of are how it formats array initializers that are longer than a single line, and its tendency to move __attribute__ annotations onto the same line as the end of the function arguments in function declarations.

I had some trouble with __attribute__ annotations on function definitions, but then found that moving the annotation to before the function return value made the right thing happen, so I'm now content there.

I did have to add some comments to disable formatting in various places where I lined related code up into columns, but that's normal for code formatting tools and I don't mind the minor overhead.

This release of rra-c-util reformats all of the code with clang-format (version 10 required since one of the options above is only in the latest version). It also includes the changes to my Perl utility code to drop support for Perl 5.6, since I dropped that in my last two hold-out Perl packages, and some other minor fixes.

You can get the latest version from the rra-c-util distribution page.

Enrico Zini: Gender in history links

6 January, 2020 - 06:00
Amelio Robles Ávila - Wikipedia (history, gender, archive.org, 2020-01-06)
Amelio Robles Ávila (3 November 1889 – 9 December 1984) was a colonel during the Mexican Revolution. Assigned female at birth with the name Amelia Robles Ávila, Robles fought in the Mexican Revolution, rose to the rank of colonel, and lived openly as a man from age 24 until his death at age 95.

Alan L. Hart - Wikipedia (history, gender, archive.org, 2020-01-06)
Alan L. Hart (October 4, 1890 – July 1, 1962) was an American physician, radiologist, tuberculosis researcher, writer and novelist. He was in 1917–18 one of the first trans men to undergo hysterectomy in the United States, and lived the rest of his life as a man. He pioneered the use of x-ray photography in tuberculosis detection, and helped implement TB screening programs that saved thousands of lives.[1]

Wartime cross-dressers - Wikipedia (history, gender, 2020-01-06)
Many people have engaged in cross-dressing during wartime under various circumstances and for various motives. This has been especially true of women, whether while serving as a soldier in otherwise all-male armies, while protecting themselves or disguising their identity in dangerous circumstances, or for other purposes.

Breeching (boys) - Wikipedia (history, gender, archive.org, 2020-01-06)
Breeching was the occasion when a small boy was first dressed in breeches or trousers. From the mid-16th century[1] until the late 19th or early 20th century, young boys in the Western world were unbreeched and wore gowns or dresses until an age that varied between two and eight.[2] Various forms of relatively subtle differences usually enabled others to tell little boys from little girls, in codes that modern art historians are able to understand.

Sino al termine dell'800 anche i bambini Maschi venivano vestiti da Femmine ("Until the end of the 1800s, even male children were dressed as girls") (history, gender, archive.org, 2020-01-06)
Everything and its opposite has been written about whether gender differences should be emphasized in the early years of childhood. Regardless of what each of us may think, once again it seems that history contradicts firmly held convictions.

Michael Prokop: Revisiting 2019

6 January, 2020 - 05:58

Mainly to recall what happened last year, to gather my thoughts, and to plan for the upcoming year(s), I'm once again revisiting my previous year (previous editions: 2018, 2017, 2016, 2015, 2014, 2013 + 2012).

In terms of IT events, I attended Grazer Linuxdays 2019 and gave a talk (Best Practices in der IT-Administration, Version 2019) and was interviewed by Radio Helsinki there. With the Grml project, we attended the Debian Bug Squashing Party in Salzburg in April. I also visited a meeting of the Foundation for Applied Privacy in Vienna. Being one of the original founders I still organize the monthly Security Treff Graz (STG) meetups. In 2020 I might attend DebConf 20 in Israel (though not entirely sure about it yet), will definitely attend Grazer Linuxdays (maybe with a talk about »debugging for sysadmins« or alike) and of course continue with the STG meetups.

I continued to play Badminton in the highest available training class (in German: “Kader”) at the University of Graz (Universitäts-Sportinstitut, USI). I took part in the Zoo run in Tiergarten Schönbrunn (thanks to an invitation by a customer).

I started playing the drums at the »HTU Big Band Graz« (giving a concert on the 21st of November). Playing in a big band is like a dream come true: I have been a big fan of modern jazz big bands since I was a kid, and I even played the drums in a big band more than 20 years ago, so I'm back™. I own a nice e-drum set, recently bought a Zildjian Gen16 cymbal set, and have also owned a master keyboard (AKA MIDI keyboard) for many years, which is excellent for recording. But in terms of “living room practicality” I wanted something more piano-like, and we bought a Yamaha CLP-645 B digital piano, which my daughters use quite regularly and which I manage to practice on now and then as well. As you might guess, I want to make music a more significant part of my life again.

I visited some concerts, including Jazz Redoute, Jazzwerkstatt Graz, Billy Cobham’s Crosswinds Project, Jazz Night Musikforum Viktring, Gnackbruch evening with AMMARITE, a concert of the Kärntner Sinfonieorchester, Steven Wilson’s To The Bone tour, Sting’s My Songs tour and the Corteo show of Cirque du Soleil. I took some local trips in Graz, including a Murkraftwerk Graz tour and a »Kanalführung«.

Business-wise it was the sixth year of business with SynPro Solutions, and we moved the legal form of our company from GesnbR to GmbH. No big news but steady and ongoing work with my other business duties Grml Solutions and Grml-Forensic.

I also continued taking care of our kids every Monday and half of another day of the week – which is still challenging now and then when running your own business, but absolutely worth it. With a kid going to school, it was quite a change for my schedule and day planning as well. Now that most days have a fixed schedule, the Sonos soundbox wakes us up with Ö1 news and its Ö1 signature tune Monday to Friday. Thanks to Pavlovian conditioning, when waking up on Saturdays and Sundays I also hear the Ö1 signature tune in my head, even though no radio is playing then. :)

I tried to minimize my Amazon usage as much as possible and will try to continue doing so in 2020 as well.

I had quite some trouble with my Vespa PX125; hopefully things are sorted out by now. *knockingonwood*

After ~20 years on Usenet (mostly de.* + at.* + tu-graz.*), I pretty much gave it up.

Book reading became more of a habit again, and I managed to complete 42 books (see Bookdump 01/2019 and Bookdump 02/2019). I noticed that what felt like good days for me always included reading books, and want to keep my reading pace for 2020.

Russ Allbery: Term::ANSIColor 5.00

6 January, 2020 - 04:12

Term::ANSIColor is the core Perl module that provides functions to change text attributes using ECMA-048 escape sequences.

This release adds support for true color (24-bit color), with which I was not previously familiar but which turns out to be widely supported, including by xterm (which calls it direct-color). The new color attributes are in the form rNNNgNNNbNNN and on_rNNNgNNNbNNN and work like the existing color attributes. There is no constant support for somewhat obvious logistical reasons, so they're supported only in the function interface. Thanks to Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 for the initial patch and drawing true color to my attention.

Color aliases now can expand to more than one attribute. This means that you can do things like:

coloralias('warning', 'black', 'on_bright_red');

and have the expected thing happen. I took advantage of this to clean up the alias handling in general, so you can also now define aliases in terms of other aliases (although the second-level alias doesn't change when the first-level alias changes). The alias fixes are based on work by Yanick Champoux.

Also in this release are a few minor cleanups and documentation improvements.

Finally, this release drops support for Perl 5.6. The minimum supported version is now Perl 5.8. Testing on 5.6 is spotty and Travis-CI doesn't support it, so I don't think I can truly claim it's supported any more.

You can get the latest release from CPAN or from the Term::ANSIColor distribution page.
