Planet Debian

Planet Debian - https://planet.debian.org/

Norbert Preining: Debian package updates preining.info: Digikam (6.4 and 7), Elixir, Kitty, Certbot

11 hours 16 min ago

I have updated some of the Debian packages distributed at https://www.preining.info/debian/; the complete list as of now is below. Packages are signed with my GPG key (RSA 0x860CDC13, fingerprint F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13). Use at your own risk. Enjoy.

Let’s Encrypt related backports to Buster (certbot, acme)

Currently 1.1.0-1~bpo10+1 of python-acme and certbot packages

deb http://www.preining.info/debian/letsencryt buster main
deb-src http://www.preining.info/debian/letsencryt buster main
Digikam 7 pre-releases

Currently Digikam 7 beta1

deb http://www.preining.info/debian/digikam7 unstable main
deb-src http://www.preining.info/debian/digikam7 unstable main
Other packages

Currently sshguard 2.4.0, digikam 6.4, kitty 0.16.0, elixir 1.10.0

deb http://www.preining.info/debian/other unstable main
deb-src http://www.preining.info/debian/other unstable main
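
If you have not added a third-party apt repository before, the steps look roughly like the following. This is only a sketch: the keyserver and the name of the sources.list.d file are assumptions, and the digikam7 repository is just one example; only the fingerprint and the deb lines come from the post itself.

# import the signing key quoted above (keyserver choice is an assumption)
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F7D8A92826E316A19FA0ACF06CACA448860CDC13
# add one of the repositories listed above, here the digikam7 one
echo "deb http://www.preining.info/debian/digikam7 unstable main" | sudo tee /etc/apt/sources.list.d/preining-digikam7.list
sudo apt update && sudo apt install digikam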

Matthew Garrett: Avoiding gaps in IOMMU protection at boot

11 hours 43 min ago
When you save a large file to disk or upload a large texture to your graphics card, you probably don't want your CPU to sit there spending an extended period of time copying data between system memory and the relevant peripheral - it could be doing something more useful instead. As a result, most hardware that deals with large quantities of data is capable of Direct Memory Access (or DMA). DMA-capable devices are able to access system memory directly without the aid of the CPU - the CPU simply tells the device which region of memory to copy and then leaves it to get on with things. However, we also need to get data back to system memory, so DMA is bidirectional. This means that DMA-capable devices are able to read and write directly to system memory.

As long as devices are entirely under the control of the OS, this seems fine. However, this isn't always true - there may be bugs, the device may be passed through to a guest VM (and so no longer under the control of the host OS) or the device may be running firmware that makes it actively malicious. The third is an important point here - while we usually think of DMA as something that has to be set up by the OS, at a technical level the transactions are initiated by the device. A device that's running hostile firmware is entirely capable of choosing what and where to DMA.

Most reasonably recent hardware includes an IOMMU to handle this. The CPU's MMU exists to define which regions of memory a process can read or write - the IOMMU does the same but for external IO devices. An operating system that knows how to use the IOMMU can allocate specific regions of memory that a device can DMA to or from, and any attempt to access memory outside those regions will fail. This was originally intended to handle passing devices through to guests (the host can protect itself by restricting any DMA to memory belonging to the guest - if the guest tries to read or write to memory belonging to the host, the attempt will fail), but is just as relevant to preventing malicious devices from extracting secrets from your OS or even modifying the runtime state of the OS.

But setting things up in the OS isn't sufficient. If an attacker is able to trigger arbitrary DMA before the OS has started then they can tamper with the system firmware or your bootloader and modify the kernel before it even starts running. So ideally you want your firmware to set up the IOMMU before it even enables any external devices, and newer firmware should actually do this automatically. It sounds like the problem is solved.

Except there's a problem. Not all operating systems know how to program the IOMMU, and if a naive OS fails to remove the IOMMU mappings and asks a device to DMA to an address that the IOMMU doesn't grant access to then things are likely to explode messily. EFI has an explicit transition between the boot environment and the runtime environment triggered when the OS or bootloader calls ExitBootServices(). Various EFI components have registered callbacks that are triggered at this point, and the IOMMU driver will (in general) then tear down the IOMMU mappings before passing control to the OS. If the OS is IOMMU aware it'll then program new mappings, but there's a brief window where the IOMMU protection is missing - and a sufficiently malicious device could take advantage of that.

The ideal solution would be a protocol that allowed the OS to indicate to the firmware that it supported this functionality and request that the firmware not remove it, but in the absence of such a protocol we're left with non-ideal solutions. One is to prevent devices from being able to DMA in the first place, which means the absence of any IOMMU restrictions is largely irrelevant. Every PCI device has a busmaster bit - if the busmaster bit is disabled, the device shouldn't start any DMA transactions. Clearing that seems like a straightforward approach. Unfortunately this bit is under the control of the device itself, so a malicious device can just ignore this and do DMA anyway. Fortunately, PCI bridges and PCIe root ports should only forward DMA transactions if their busmaster bit is set. If we clear that then any devices downstream of the bridge or port shouldn't be able to DMA, no matter how malicious they are. Linux will only re-enable the bit after it's done IOMMU setup, so we should then be in a much more secure state - we still need to trust that our motherboard chipset isn't malicious, but we don't need to trust individual third party PCI devices.

This patch just got merged, adding support for this. My original version did nothing other than clear the bits on bridge devices, but this did have the potential for breaking devices that were still carrying out DMA at the moment this code ran. Ard modified it to call the driver shutdown code for each device behind a bridge before disabling DMA on the bridge, which in theory makes this safe but does still depend on the firmware drivers behaving correctly. As a result it's not enabled by default - you can either turn it on in kernel config or pass the efi=disable_early_pci_dma kernel command line argument.
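
For anyone wanting to try this out, a minimal sketch of enabling it on a Debian-style system follows; the GRUB file path and the kernel config option name are assumptions here (the command line argument is the one quoted above), so double-check them against your distribution and kernel tree.

# Option 1: build a kernel with the feature enabled by default
#   CONFIG_EFI_DISABLE_PCI_DMA=y    (option name is an assumption; verify in your kernel's Kconfig)
# Option 2: pass the command line argument at boot:
sudoedit /etc/default/grub          # append efi=disable_early_pci_dma to GRUB_CMDLINE_LINUX_DEFAULT
sudo update-grub
# after a reboot, confirm it took effect:
grep -o 'efi=disable_early_pci_dma' /proc/cmdline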

In combination with firmware that does the right thing, this should ensure that Linux systems can be protected against malicious PCI devices throughout the entire boot process.


Andrej Shadura: FOSDEM by train

15 hours 13 min ago

I’ve always loved train journeys, but with flygskam changing people’s travel preferences across Europe (and possibly worldwide, though probably not to the same extent), I decided to take the train to FOSDEM this time.

When I first went to FOSDEM (which, just in case you don’t know, happens each February in Brussels at ULB), I flew with Ryanair from Bratislava to Charleroi because it was cheaper. After repeating the same journey a couple of times, I once nearly missed the last coach to Brussels because of a late flight, and decided to rather pay more but travel in more comfort via Brussels Zaventem, the main airport of Brussels. It’s well connected with Brussels: trains run fast and often, a significant upgrade compared to Charleroi, where the options were limited to coaches and a slow train connection from Charleroi the town.

As some of my readers may know, my backpack was stolen from me after FOSDEM two years ago, and with it were gone, among other things, my passport and my residence permit card. With my flight home planned two and a half hours from the moment I realised my things were gone, I couldn’t get a replacement travel document from the embassy quickly enough, so I had to stay at my friend’s in Vilvoorde (thanks a lot again, Jurgen!) and travel home with the cheapest ground transportation I could find. In my case, that was a night RegioJet coach to Prague with a connection to (again) a RegioJet train to Bratislava. (I couldn’t fly even though I already had my temporary travel document, since I might have needed to somehow prove that I was allowed to be in the Schengen zone, which is difficult to do without a valid residence permit.) Sleeping on a bus isn’t the best way to travel long distances, and I was knackered when I finally dropped onto my sofa in Bratislava the next morning. However, I learnt that it was possible, and were it a bit more comfortable, I wouldn’t mind doing something like this again.

Here I must admit that I’ve travelled by long-distance train quite a bit: in my childhood we went by train for summer holidays to Eupatoria in Crimea and to Obzor in Bulgaria (through Varna). Both journeys took days, and the latter also involved a long process of changing the bogies on the Moldovan-Romanian border (Giurgiulești/Galați). Since I moved to Slovakia, I have taken the night train from Minsk to Warsaw, with a connection to Bratislava or Žilina, many times; the journey usually takes at least 18 hours. Which is to say, I’m not exactly new to this mode of travel.

With the Austrian railway company ÖBB expanding their night train services as part of their Nightjet brand, the Vienna to Brussels sleeper returned to service last week. Prices are still a bit higher than I would have preferred (at the time of writing, tickets in a compartment with 6 couchettes start at €79), but it’s not as bad as it could be (apparently the last-minute price is more than €200). Anyway, when I decided to go to Brussels by train this service didn’t exist yet, so instead I followed the very useful tips from the Man in Seat 61 and booked a day-time connection: Bratislava to Vienna to Frankfurt to Brussels.

Train route from Bratislava to Vienna to Frankfurt to Brussels

Date  Station                          Arrival  Departure  Train
30.1  Bratislava hl.st.                         9:38       REX 2511
      Wien Hbf                         10:44    11:15      ICE 26
      Frankfurt am Main Hbf            17:36    18:29      ICE 10
      Bruxelles-Nord / Brussel Noord   21:26    21:37      IC 3321
      Vilvoorde                        21:45    21:46

Date  Station                          Arrival  Departure  Train
4.2   Bruxelles-Midi                            8:23       ICE 13
      Frankfurt am Main Hbf            11:31    12:22      ICE 27
      Wien Hbf                         18:45    19:16      R 2530
      Bratislava hl.st.                20:22

I’ve booked two through tickets, since in this case the Super Sparschiene discount offered lower prices than a normal international return ticket would. For some reason neither ZSSK (Slovak railways) nor ÖBB offered this ticket online (or for a comparable price in a ticket office anyway), so I booked online with Deutsche Bahn for €59.90 each way. This sort of ticket, while bookable online, had to be posted for an extra €5.90.

Since I’m staying the first night at a friend’s in Vilvoorde again, I also had to buy a ticket for this small stretch of the track directly from the Belgian railways.

See you at FOSDEM!

[Photos: Building AW, inside the K building, Grand Place]

Steve Kemp: Initial server migration complete..

28 January, 2020 - 14:30

So recently I talked about how I was moving my email to a paid GSuite account; that process has now completed.

To recap I've been paying approximately €65/month for a dedicated host from Hetzner:

  • 2 x 2TB drives.
  • 32GB RAM.
  • 8-core CPU.

To be honest the server itself has been fine, but the invoice is a little horrific regardless:

  • SB31 - €26.05
  • Additional subnet /27 - €26.89

I'm actually paying more for the IP addresses than for the server! Anyway I was running a bunch of virtual machines on this host:

  • mail
    • Exim4 + Dovecot + SSH
    • I'd SSH to this host, daily, to read mail with my console-based mail-client, etc.
  • www
    • Hosted websites.
    • Each different host would run an instance of lighttpd, serving on localhost:XXX running under a dedicated UID.
    • Then Apache would proxy to the right one, and handle SSL.
  • master
    • Puppet server, and VPN-host.
  • git
  • ..
    • Bunch more servers, nine total.

My plan is to basically cut down and kill 99% of these servers, and now I've made the initial pass:

I've now bought three virtual machines, and juggled stuff around upon them. I now have:

  • debian - €3.00/month
  • dns - €3.00/month
    • This hosts my commercial DNS thing
    • Admin overhead is essentially zero.
    • Profit is essentially non-zero :)
  • shell - €6.00/month
    • The few dynamic sites I maintain were moved here, all running as www-data behind Apache. Meh.
    • This is where I run cron-jobs to invoke rss2email, my google mail filtering hack (a minimal crontab sketch follows after this list).
    • This is also a VPN-provider, providing a secure link to my home desktop, and the other servers.
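
As a concrete illustration of the cron-driven rss2email setup mentioned above, a single crontab entry is enough; the 15-minute schedule and the use of the r2e command-line front end are assumptions, not details taken from the post:

# crontab -e on the shell host; fetch feeds and mail new entries every 15 minutes (assumed schedule)
*/15 * * * * r2e run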

The end result is that my hosting bill has gone down from being around €50/month to about €20/month (€6/month for gsuite hosting), and I have far fewer hosts to maintain, update, manage, and otherwise care about.

Since I'm all cloudy now I have backups via the provider, as well as those maintained by rsync.net. I'll need to rebuild the shell host over the next few weeks, as I mostly shuffled stuff around in-place in an ad-hoc fashion, but the two other boxes were deployed entirely via Ansible and Deployr. I made the decision early on that these hosts should be trivial to relocate, and they have been!

All static sites, such as my blog, my vanity site and similar, have been moved to netlify. I lose the ability to view access logs, but I'd already removed analytics because I just don't care. I've also lost the ability to have custom 404 pages, etc. But the fact that I don't have to maintain a host just to serve static pages is great. I was considering using AWS to host these sites (i.e. S3) but decided against it in the end, as it is a bit complex if you want to use CloudFront/Cloudflare to avoid bandwidth-based billing surprises.

I dropped MX records from a bunch of domains, so now I only receive email at steve.fi, steve.org.uk, and to a lesser extent dns-api.com. That goes to Google. Migrating to GSuite was pretty painless although there was a surprise: I figured I'd setup a single user, then use aliases to handle the mail such that:

  • debian@example -> steve
  • facebook@example -> steve
  • webmaster@example -> steve

All told I have about 90 distinct local-parts configured in my old Exim setup. It turns out that GSuite has a limit of around 20 aliases per user. Happily you can achieve the same effect with address maps: if you add an address map you can have about 4000 distinct local-parts, and reject anything else. (I can't think of anything worse than wildcard handling; I've been hit by too many bounce-attacks in the past!)

Oh, and I guess for completeness I should say I also have a single off-site box hosted by Scaleway for €5/month. This runs monitoring via overseer and notification via purppura. Monitoring includes testing that websites are up, that responses contain a specific piece of text, that DNS records resolve to expected values, that SSL certificates haven't expired, etc.

Monitoring is worth paying for. I'd be tempted to charge people to use it, but I suspect nobody would pay. It's a cute setup, and very flexible and reliable. I've been pondering adding a scripting language to the notification side, since at the moment it alerts me via Pushover, email, and SMS messages. Perhaps I should just settle on one! Having a scripting language would allow me to use different mechanisms for different services and severities.

Then again, maybe I should just pay for Pingdom or similar? I have about 250 tests which run every two minutes. That usually exceeds most services' free/cheap offerings.

Norbert Preining: SUSI.AI release 20200120: Desktop and Smart Speaker

28 January, 2020 - 05:31

More than a month has passed, but the winter holidays allowed me to update, fix, and streamline a lot of corners in SUSI.AI, and above all to work on a desktop version that can easily be installed. Thus, the FOSSASIA team can finally release SUSI.AI 2020-01-20, the privacy-aware personal assistant.

It has long been possible to install and run SUSI.AI on Debian-based desktops, and recent months have brought (partial) support for other distributions (RedHat, SUSE etc). This is the first release that provides an installation package for desktop environments, allowing for easy installation. As usual, we also ship an image for the SUSI.AI Smart Speaker based on the Raspberry Pi.

Changes in this release are:

  • Much improved documentation concerning necessary requirements
  • Separate installers for required dependencies and actual SUSI.AI, with support for different packaging infrastructure
  • fixed problems in portaudio due to upgrade to Debian/buster
  • reworked hotword/audio system for increased stability
  • release for desktop environments: fully relocatable SUSI.AI folder
  • susi-config program now can install .desktop files, systemd service files, and link binaries to directories in the PATH
  • initial work towards DeepSpeech support
  • many fixes and internal improvements

We are looking forward to feedback, suggestions, improvements, and pull requests! Please send issues to one of the git repositories, or join us at one of the gitter channels (susi_hardware, susi_server, susi_android, susi_webclient, or generally at fossasia)!

Emmanuel Kasper: Render HTML from standard input with w3m

27 January, 2020 - 23:25
For a very long time, I used the links browser as my default browser. It allows quick testing of a web site with a simple
links -dump debian.org
Alas, links does not support rendering HTML from standard input, so I finally switched to w3m, which handles that well. So I could do:
cat bla.html | w3m -T text/html -dump
Using curl and any member of the Unix toolkit gang, you can see what possibilities this opens up. Oh, and UTF-8 works well too!
w3m -T text/html -dump https://www.debian.org/index.ru.html | grep свободная
Debian — это свободная операционная система (ОС) для вашего компьютера.
свободная is Russian for “free”.
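
To make the “curl plus Unix toolkit” idea concrete, here is one possible pipeline; the URL and the head filter are just illustrative choices, not taken from the post:

curl -s https://www.debian.org/ | w3m -T text/html -dump | head -n 20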

Emmanuel Kasper: Mark a Screenshot on Linux

27 January, 2020 - 23:09
More often than not, to explain things quickly, I like to take a screenshot of the (web) application I am talking about and then circle the corresponding area so that everything is clear. Preferably with a rounded rectangle, as I find it the cutest variant.

This is how I do it on Linux:
Install necessary tools:
apt install gimp scrot                                                                   
Take the screenshot:
# Interactively select a window or rectangle with the mouse                              
scrot --selection screenshot.png
Open the screenshot and annotate it with gimp:
gimp screenshot.png                                                                      
Then in gimp:
  • Tools -> Selection Tools -> Rectangle Select, and mark the area
  • Select -> Rounded Rectangle, and keep the default
  • Change the color to a nice blue shade in the toolbox
  • Edit -> Stroke selection
Maybe gimp is a bit overkill for that. But instead of learning a limited tool, I prefer to learn an advanced one like gimp step by step.
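
If gimp does feel like too much, the same rounded-rectangle annotation can also be scripted with ImageMagick; this is only a sketch, and the coordinates, colour and output filename are made up for the example:

# draw a blue rounded rectangle over the area of interest (corners 100,100 and 400,250, 15px corner radius)
convert screenshot.png -fill none -stroke '#3465a4' -strokewidth 3 -draw 'roundrectangle 100,100 400,250 15,15' annotated.png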

Matthias Klumpp: A big AppStream status update

27 January, 2020 - 21:48

It has been a while since the last AppStream-related post (or any post for that matter) on this blog, but of course development didn’t stand still all this time. Quite the opposite – it was just me writing less about it, which actually is a problem as some of the new features are much less visible. People don’t seem to re-read the specification constantly for some reason. As a consequence, we have pretty good adoption of features I blogged about (like fonts support), but much of the new stuff is still not widely used. Also, I had to make a promise to several people to blog about the new changes more often, and I am definitely planning to do so. So, expect posts about AppStream stuff a bit more often now.

What actually was AppStream again? The AppStream Freedesktop Specification describes two XML metadata formats to describe software components: One for software developers to describe their software, and one for distributors and software repositories to describe (possibly curated) collections of software. The format written by upstream projects is called Metainfo and encompasses any data installed in /usr/share/metainfo/, while the distribution format is just called Collection Metadata. A reference implementation of the format and related features written in C/GLib exists as well as Qt bindings for it, so the data can be easily accessed by projects which need it.

The software metadata contains a unique ID for the respective software so it can be identified across software repositories. For example the VLC Mediaplayer is known with the ID org.videolan.vlc in every software repository, no matter whether it’s the package archives of Debian, Fedora, Ubuntu or a Flatpak repository. The metadata also contains translatable names, summaries, descriptions, release information etc. as well as a type for the software. In general, any information about a software component that is in some form relevant to displaying it in software centers is or can be present in AppStream. The newest revisions of the specification also provide a lot of technical data for systems to make the right choices on behalf of the user, e.g. Fwupd uses AppStream data to describe compatible devices for a certain firmware, and the mediatype information in AppStream metadata can be used to more easily install applications for an unknown filetype. Information AppStream does not contain is data the software bundling systems are responsible for: mechanistic data about how to build a software component or how exactly to install it is out of scope.

So, now let’s finally get to the new AppStream features since last time I talked about it – which was almost two years ago, so quite a lot of stuff has accumulated!

Specification Changes/Additions

Web Application component type

(Since v0.11.7) A new component type web-application has been introduced to describe web applications. A web application can for example be GMail, YouTube, Twitter, etc. launched by the browser in a special mode with less chrome. Fundamentally though it is a simple web link. Therefore, web apps need a launchable tag of type url to specify an URL used to launch them. Refer to the specification for details. Here is a (shortened) example metainfo file for the Riot Matrix client web app:

<component type="web-application">
  <id>im.riot.webapp</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>Apache-2.0</project_license>
  <name>Riot.im</name>
  <summary>A glossy Matrix collaboration client for the web</summary>
  <description>
    <p>Communicate with your team[...]</p>
  </description>
  <icon type="stock">im.riot.webapp</icon>
  <categories>
    <category>Network</category>
    <category>Chat</category>
    <category>VideoConference</category>
  </categories>
  <url type="homepage">https://riot.im/</url>
  <launchable type="url">https://riot.im/app</launchable>
</component>
Repository component type

(Since v0.12.1) The repository component type describes a repository of downloadable content (usually other software) to be added to the system. Once a component of this type is installed, the user has access to the new content. In case the repository contains proprietary software, this component type pairs well with the agreements section.

This component type can be used to provide easy installation of e.g. trusted Debian or Fedora repositories, but also can be used for other downloadable content. Refer to the specification entry for more information.

Operating System component type

(Since v0.12.5) It makes sense for the operating system itself to be represented in the AppStream metadata catalog. Information about it can be used by software centers to display information about the current OS release and also to notify about possible system upgrades. It also serves as a component the software center can use to attribute package updates to that do not have AppStream metadata. The operating-system component type was designed for this and you can find more information about it in the specification documentation.

Icon Theme component type

(Since v0.12.8) While styles, themes, desktop widgets etc. are already covered in AppStream via the addon component type as they are specific to the toolkit and desktop environment, there is one exception: Icon themes are described by a Freedesktop specification and (usually) work independent of the desktop environment. Because of that and on request of desktop environment developers, a new icon-theme component type was introduced to describe icon themes specifically. From the data I see in the wild and in Debian specifically, this component type appears to be very underutilized. So if you are an icon theme developer, consider adding a metainfo file to make the theme show up in software centers! You can find a full description of this component type in the specification.

Runtime component type

(Since v0.12.10) A runtime is mainly known in the context of Flatpak bundles, but it actually is a more universal concept. A runtime describes a defined collection of software components used to run other applications. To represent runtimes in the software catalog, the new runtime component type was introduced in the specification, but it has been used by Flatpak for a while already as a nonstandard extension.

Release types

(Since v0.12.0) Not all software releases are created equal. Some may be for general use, others may be development releases on the way to becoming an actual final release. In order to reflect that, AppStream introduced a type property on the release tag in a releases block, which can be set to either stable or development. Software centers can then decide to hide or show development releases.

End-of-life date for releases

(Since v0.12.5) Some software releases have an end-of-life date from which onward they will no longer be supported by the developers. This is especially true for Linux distributions, which are described in an operating-system component. To define an end-of-life date, a release in AppStream can now have a date_eol property using the same syntax as the date property but defining the date when the release will no longer be supported (refer to the releases tag definition).

Details URL for releases

(Since v0.12.5) The release descriptions are short, text-only summaries of a release, usually only consisting of a few bullet points. They are intended to give users a fast, quick to read overview of a new release that can be displayed directly in the software updater. But sometimes you want more than that. Maybe you are an application like Blender or Krita and have prepared an extensive website with an in-depth overview, images and videos describing the new release. For these cases, AppStream now permits an url tag in a release tag pointing to a website that contains more information about a particular release.

Release artifacts

(Since v0.12.6) AppStream limited release descriptions to their version numbers and release notes for a while, without linking the actual released artifacts. This was intentional, as any information how to get or install software should come from the bundling/packaging system that Collection Metadata was generated for.

But the AppStream metadata has outgrown this more narrowly defined purpose and has since been used for a lot more things, like generating HTML download pages for software, making it the canonical source for all the software metadata in some projects. From Richard Hughes’ awesome Fwupd project also came the need to link to firmware binaries from an AppStream metadata file, as the LVFS/Fwupd use AppStream metadata exclusively to provide metadata for firmware. Therefore, the specification was extended with an artifacts tag for releases, to link to the actual release binaries and tarballs. This replaced the previous makeshift “release location” tag.

Release artifacts always have to link to releases directly, so the releases can be acquired by machines immediately and without human intervention. A release can have a type of source or binary, indicating whether a source tarball or binary artifact is linked. Each binary release can also have an associated platform triplet for Linux systems, an identifier for firmware, or any other identifier for a platform. Furthermore, we permit sha256 and blake2 checksums for the release artifacts, as well as specifying sizes. Take a look at the example below, or read the specification for details.

<releases>
  <release version="1.2" date="2014-04-12" urgency="high">
    [...]
    <artifacts>
      <artifact type="binary" platform="x86_64-linux-gnu">
        <location>https://example.com/mytarball.bin.tar.xz</location>
        <checksum type="blake2">852ed4aff45e1a9437fe4774b8997e4edfd31b7db2e79b8866832c4ba0ac1ebb7ca96cd7f95da92d8299da8b2b96ba480f661c614efd1069cf13a35191a8ebf1</checksum>
        <size type="download">12345678</size>
        <size type="installed">42424242</size>
      </artifact>
      <artifact type="source">
        <location>https://example.com/mytarball.tar.xz</location>
        [...]
      </artifact>
    </artifacts>
  </release>
</releases>
Issue listings for releases

(Since v0.12.9) Software releases often fix issues, sometimes security relevant ones that have a CVE ID. AppStream provides a machine-readable way to figure out which components on your system are currently vulnerable to which CVE registered issues. Additionally, a release tag can also just contain references to any normal resolved bugs, via bugtracker URLs. Refer to the specification for details. Example for the issues tag in AppStream Metainfo files:

<issues>
  <issue url="https://example.com/bugzilla/12345">bz#12345</issue>
  <issue type="cve">CVE-2019-123456</issue>
</issues>
Requires and Recommends relations

(Since v0.12.0) Sometimes software has certain requirements that are only satisfied by some systems, and sometimes it might recommend specific things about the system it will run on in order to run at full performance.

I was against adding relations to AppStream for quite a while, as doing so would add a more “functional” dimension to it, impacting how and when software is installed, as opposed to being only descriptive and not essential to be read in order to install software correctly. However, AppStream has pretty much outgrown its initial narrow scope and adding relation information to Metainfo files was a natural step to take. For Fwupd it was an essential step, as Fwupd firmware might have certain hard requirements on the system in order to be installed properly. And AppStream requirements and recommendations go way beyond what regular package dependencies could do in Linux distributions so far.

Requirements and recommendations can be on other software components via their id, on a modalias, specific kernel version, existing firmware version or for making system memory recommendations. See the specification for details on how to use this. Example:

<requires>
  <id version="1.0" compare="ge">org.example.MySoftware</id>
  <kernel version="5.6" compare="ge">Linux</kernel>
</requires>
<recommends>
  <memory>2048</memory> <!-- recommend at least 2GiB of memory -->
</recommends>

This means that AppStream currently supports provides, suggests, recommends and requires relations to refer to other software components or system specifications.

Agreements

(Since v0.12.1) The new agreement section in AppStream Metainfo files was added to make it easier for software to be compliant to the EU GDPR. It has since been expanded to be used for EULAs as well, which was a request coming (to no surprise) from people having to deal with corporate and proprietary software components. An agreement consists of individual sections with headers and descriptive texts and should – depending on the type – be shown to the user upon installation or first use of a software component. It can also be very useful in case the software component is a firmware or driver (which often is proprietary – and companies really love their legal documents and EULAs).

Contact URL type

(Since v0.12.4) The contact URL type can be used to simply set a link back to the developer of the software component. This may be an URL to a contact form, their website or even a mailto: link. See the specification for all URL types AppStream supports.

Videos as software screenshots

(Since v0.12.8) This one was quite long in the making – the feature request for videos as screenshots had been filed in early 2018. I was a bit wary about adding video, as that lets you run into a codec and container hell as well as requiring software centers to support video and potentially requiring the appstream-generator to get into video transcoding, which I really wanted to avoid. Alternatively, we would have had to make AppStream add support for multiple likely proprietary video hosting platforms, which certainly would have been a bad idea on every level. Additionally, I didn’t want to have people add really long introductory videos to their applications.

Ultimately, the problem was solved by simplification and reduction: People can add a video as “screenshot” to their software components, as long as it isn’t the first screenshot in the list. We only permit the vp9 and av1 codecs and the webm and matroska container formats. Developers should expect the audio of their videos to be muted, but if audio is present, the opus codec must be used. Videos will be size-limited, for example Debian imposes a 14MiB limit on video filesize. The appstream-generator will check for all of these requirements and reject a video in case it doesn’t pass one of the checks. This should make implementing videos in software centers easy, and also provide the safety guarantees and flexibility we want.

So far we have not seen many videos used for application screenshots. As always, check the specification for details on videos in AppStream. Example use in a screenshots tag:

<screenshots>
  <screenshot type="default">
    <image type="source" width="1600" height="900">https://example.com/foobar/screenshot-1.png</image>
  </screenshot>
  <screenshot>
    <video codec="av1" width="1600" height="900">https://example.com/foobar/screencast.mkv</video>
  </screenshot>
</screenshots>
Emphasis and code markup in descriptions

(Since v0.12.8) It has long been requested to have a little bit more expressive markup in descriptions in AppStream, at least more than just lists and paragraphs. That has not happened for a while, as it would be a breaking change to all existing AppStream parsers. Additionally, I didn’t want to let AppStream descriptions become long, general-purpose “how to use this software” documents. They are intended to give a quick overview of the software, and not comprehensive information. However ultimately we decided to add support for at least two more elements to format text: Inline code elements as well as em emphases. There may be more to come, but that’s it for now. This change was made about half a year ago, and people are currently advised to use the new styling tags sparingly, as otherwise their software descriptions may look odd when parsed with older AppStream implementation versions.

Remove-component merge mode

(Since v0.12.4) This addition is specified for the Collection Metadata only, as it affects curation. Since AppStream metadata is in one big pool for Linux distributions, and distributions like Debian freeze their repositories, it sometimes is required to merge metadata from different sources on the client system instead of generating it in the right format on the server. This can also be used for curation by vendors of software centers. In order to edit preexisting metadata, special merge components are created. These can permit appending data, replacing data etc. in existing components in the metadata pool. The one thing that was missing was a mode that permitted the complete removal of a component. This was added via a special remove-component merge mode. This mode can be used to pull metadata from a software center’s catalog immediately even if the original metadata was frozen in place in a package repository. This can be very useful in case an inappropriate software component is found in the repository of a Linux distribution post-release. Refer to the specification for details.

Custom metadata

(Since v0.12.1) The AppStream specification is extensive, but it can not fit every single special usecase. Sometimes requests come up that can’t be generalized easily, and occasionally it is useful to prototype a feature first to see if it is actually used before adding it to the specification properly. For that purpose, the custom tag exists. The tag defines a simple key-value structure that people can use to inject arbitrary metadata into an AppStream metainfo file. The libappstream library will read this tag by default, providing easy access to the underlying data. Thereby, the data can easily be used by custom applications designed to parse it. It is important to note that the appstream-generator tool will by default strip the custom data from files unless it has been whitelisted explicitly. That way, the creator of a metadata collection for a (package) repository has some control over what data ends up in the resulting Collection Metadata file. See the specification for more details on this tag.

Miscellaneous additions

(Since v0.12.9) Additionally to JPEG and PNG, WebP images are now permitted for screenshots in Metainfo files. These images will – like every image – be converted to PNG images by the tool generating Collection Metadata for a repository though.

(Since v0.12.10) The specification now contains a new name_variant_suffix tag, which is a translatable string that software lists may append to the name of a component in case there are multiple components with the same name. This is intended to be primarily used for firmware in Fwupd, where firmware may have the same name but actually be slightly different (e.g. region-specific). In these cases, the additional name suffix is shown to make it easier to distinguish the different components in case multiple are present.

(Since v0.12.10) AppStream has an URI format to install applications directly from webpages via the appstream: scheme. This URI scheme now permits alternative IDs for the same component, in case it switched its ID in the past. Take a look at the specification for details about the URI format.

(Since v0.12.10) AppStream now supports version 1.1 of the Open Age Rating Service (OARS), so applications (especially games) can voluntarily age-rate themselves. AppStream does not replace parental guidance here, and all data is purely informational.

Library & Implementation Changes

Of course, besides changes to the specification, the reference implementation also received a lot of improvements. There are too many to list them all, but a few are noteworthy to mention here.

No more automatic desktop-entry file loading

(Since v0.12.3) By default, libappstream was loading information from local .desktop files into the metadata pool of installed applications. This was done to ensure installed apps were represented in software centers to allow them to be uninstalled. This generated much more pain than it was useful for though, with metadata appearing two to three times in software centers because people didn’t set the X-AppStream-Ignore=true tag in their desktop-entry files. Also, the generated data was pretty bad. So, newer versions of AppStream will only load data of installed software that doesn’t have an equivalent in the repository metadata if it ships a metainfo file. One more good reason to ship a metainfo file!

Software centers can override this default behavior change by setting the AS_POOL_FLAG_READ_DESKTOP_FILES flag for AsPool instances (which many already did anyway).

LMDB caches and other caching improvements

(Since v0.12.7) One of the biggest pain points in adding new AppStream features was always adjusting the (de)serialization of the new markup: AppStream exists as a YAML version for Debian-based distributions for Collection Metadata, an XML version based on the Metainfo format as default, and a GVariant binary serialization for on-disk caching. The latter was used to drastically reduce memory consumption and increase speed of software centers: Instead of loading all languages, only the one we currently needed was loaded. The expensive icon-finding logic, building of the token cache for searches and other operations were performed and the result was saved as a binary cache on-disk, so it was instantly ready when the software center was loaded next.

Adjusting three serialization formats was pretty laborious and a very boring task. At one point I benchmarked the (de)serialization performance of the different formats and found out that the XML reading/writing was actually massively outperforming that of the GVariant cache. Since the XML parser received much more attention, that was only natural (but there were also other issues with GVariant deserializing large dictionary structures).

Ultimately, I removed the GVariant serialization and replaced it with a memory-mapped XML-based cache that reuses 99.9% of the existing XML serialization code. The cache uses LMDB, a small embeddable key-value store. This makes maintaining AppStream much easier, and we are using the same well-tested codepaths for caching now that we also use for normal XML reading/writing. With this change, AppStream also uses even less memory, as we only keep the software components in memory that the software center currently displays. Everything that isn’t directly needed also isn’t in memory. But if we do need the data, it can be pulled from the memory-mapped store very quickly.

While refactoring the caching code, I also decided to give people using libappstream in their own projects a lot more control over the caching behavior. Previously, libappstream was magically handling the cache behind the back of the application that was using it, guessing which behavior was best for the given usecase. But actually, the application using libappstream knows best how caching should be handled, especially when it creates more than one AsPool instance to hold and search metadata. Therefore, libappstream will still pick the best defaults it can, but give the application that uses it all control it needs, down to where to place a cache file, to permit more efficient and more explicit management of caches.

Validator improvements

(Since v0.12.8) The AppStream metadata validator, used by running appstreamcli validate <file>, is the tool that each Metainfo file should run through to ensure it is conformant to the AppStream specification and to give some useful hints to improve the metadata quality. It knows four issue severities: Pedantic issues are hidden by default (show them with the --pedantic flag) and affect upcoming features or really “nice to have” things that are completely nonessential. Info issues are not directly a problem, but are hints to improve the metadata and get better overall data. Things the specification recommends but doesn’t mandate also fall into this category. Warnings will result in degraded metadata but don’t make the file invalid in its entirety. Yet, they are severe enough that we fail the validation. Things like that are for example a vanishing screenshot from an URL: Most of the data is still valid, but the result may not look as intended. Invalid email addresses, invalid tag properties etc. fall into that category as well: They will all reduce the amount of metadata systems have available. So the metadata should definitely be warning-free in order to be valid. And finally errors are outright violation of the specification that may likely result in the data being ignored in its entirety or large chunks of it being invalid. Malformed XML or invalid SPDX license expressions would fall into that group.

Previously, the validator would always show very long explanations for all the issues it found, giving detailed information on an issue. While this was nice if there were few issues, it produces very noisy output and makes it harder to quickly spot the actual error. So, the whole validator output was changed to be based on issue tags, a concept that is also known from other lint tools such as Debian’s Lintian: Each error has its own tag string, identifying it. By default, we only show the tag string, line of the issue, severity and component name it affects as well a short repeat of an invalid value (in case that’s applicable to the issue). If people do want to know detailed information, they can get it by passing --explain to the validation command. This solution has many advantages:

  • It makes the output concise and easy to read by humans and is mostly already self-explanatory
  • Machines can parse the tags easily and identify which issue was emitted, which is very helpful for AppStream’s own testsuite but also for any tool wanting to parse the output
  • We can now have translators translate the explanatory texts

Initially, I didn’t want to have the validator return translated output, as that may be less helpful and harder to search the web for. But now, with the untranslated issue tags and much longer and better explanatory texts, it makes sense to trust the translators to translate the technical explanations well.

Of course, this change broke any tool that was parsing the old output. I had an old request by people to have appstreamcli return machine-readable validator output, so they could integrate it better with preexisting CI pipelines and issue reporting software. Therefore, the tool can now return structured, machine-readable output in the YAML format if you pass --format=yaml to it. That output is guaranteed to be stable and can be parsed by any CI machinery that a project already has running. If needed, other output formats could be added in future, but for now YAML is the only one and people generally seem to be happy with it.
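
Typical invocations, tying the above together; the file name is a placeholder and the flags are the ones described in the text:

# human-readable validation with full explanations and pedantic hints
appstreamcli validate --pedantic --explain org.example.myapp.metainfo.xml
# machine-readable output for CI pipelines
appstreamcli validate --format=yaml org.example.myapp.metainfo.xml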

Create desktop-entry files from Metainfo

(Since v0.12.9) As you may have noticed, an AppStream Metainfo file contains some information that a desktop-entry file also contains. Yet, the two file formats serve very different purposes: A desktop file is basically launch instructions for an application, with some information about how it is displayed. A Metainfo file is mostly display information and few to no launch instructions. Admittedly though, there is quite a bit of overlap which may make it useful for some projects to simply generate a desktop-entry file from a Metainfo file. This may not work for all projects, most notably ones where multiple desktop-entry files exist for just one AppStream component. But for the simplest and most common of cases, a direct mapping between Metainfo and desktop-entry file, this option is viable.

The appstreamcli tool now permits this, using the appstreamcli make-desktop-file subcommand. It just needs a Metainfo file as its first parameter and a desktop-entry output file as its second parameter. If the desktop-entry file already exists, it will be extended with the new data from the Metainfo file. For the Exec field in a desktop-entry file, appstreamcli will read the first binary entry in a provides tag, or use an explicitly provided line passed via the --exec parameter.
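
A short example of the parameter order described above; the file names are placeholders and the --exec value is purely illustrative:

appstreamcli make-desktop-file org.example.myapp.metainfo.xml org.example.myapp.desktop
# or override the launch command explicitly
appstreamcli make-desktop-file --exec "myapp %u" org.example.myapp.metainfo.xml org.example.myapp.desktop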

Please take a look at the appstreamcli(1) manual page for more information on how to use this useful feature.

Convert NEWS files to Metainfo and vice versa

(Since v0.12.9) Writing the XML for release entries in Metainfo files can sometimes be a bit tedious. To make this easier and to integrate better with existing workflows, two new subcommands for appstreamcli are now available: news-to-metainfo and metainfo-to-news. They permit converting a NEWS textfile to Metainfo XML and vice versa, and can be integrated with an application’s build process. Take a look at AppStream itself on how it uses that feature.

In addition to generating the NEWS output or reading it, there is also a second YAML-based option available. Since YAML is a structured format, more of the features of AppStream release metadata are available in the format, such as marking development releases as such. You can use the --format flag to switch the output (or input) format to YAML.
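
As an illustration of the conversion commands, something like the following should work; the argument order (input file first, output second) is an assumption based on the description above, so check the manual page, and the file names are placeholders:

appstreamcli news-to-metainfo NEWS org.example.myapp.metainfo.xml
appstreamcli metainfo-to-news org.example.myapp.metainfo.xml NEWS
# YAML-based release notes instead of a plain NEWS file:
appstreamcli news-to-metainfo --format=yaml releases.yml org.example.myapp.metainfo.xml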

Please take a look at the appstreamcli(1) manual page for a bit more information on how to use this feature in your project.

Support for recent SPDX syntax

(Since v0.12.10) This has been a pain point for quite a while: SPDX is a project supported by the Linux Foundation to (mainly) provide a unified syntax to identify licenses for Open Source projects. They did change the license syntax twice in incompatible ways though, and AppStream had already implemented a previous version, so we could not simply jump to the latest version without supporting the old one.

With the latest release of AppStream though, the software should transparently convert between the different version identifiers and also support the most recent SPDX license expressions, including the WITH operator for license exceptions. Please report any issues if you see them!

Future Plans?

First of all, congratulations for reading this far into the blog post! I hope you liked the new features! In case you skipped here, welcome to one of the most interesting sections of this blog post!

So, what is next for AppStream? The 1.0 release, of course! The project is certainly mature enough to warrant that, and originally I wanted to get the 1.0 release out of the door this February, but it doesn’t look like that date is still realistic. But what does “1.0” actually mean for AppStream? Well, here is a list of the intended changes:

  • Removal of almost all deprecated parts of the specification. Some things will remain supported forever though: For example the desktop component type is technically deprecated for desktop-application but is so widely used that we will support it forever. Things like the old application node will certainly go though, and so will the /usr/share/appdata path as metainfo location, the appcategory node that nobody uses anymore and all other legacy cruft. I will be mindful about this though: If a feature still has a lot of users, it will stay supported, potentially forever. I am closely monitoring what is used mainly via the information available via the Debian archive. As a general rule of thumb though: A file for which appstreamcli validate passes today is guaranteed to work and be fine with AppStream 1.0 as well.
  • Removal of all deprecated API in libappstream. If your application still uses API that is flagged as deprecated, consider migrating to the supported functions and you should be good to go! There are a few bigger refactorings planned for some of the API around releases and data serialization, but in general I don’t expect this to be hard to port.
  • The 1.0 specification will be covered by an extended stability promise. When a feature is deprecated, there will be no risk that it is removed or becomes unsupported (so the removal of deprecated stuff in the specification should only happen once). What is in the 1.0 specification will quite likely be supported forever.

So, what is holding up the 1.0 release besides the API cleanup work? Well, there are a few more points I want to resolve before releasing the 1.0 release:

  • Resolve hosting release information at a remote location, not in the Metainfo file (#240): This will be a disruptive change that will need API adjustments in libappstream for sure, and certainly will – if it happens – need the 1.0 release. Fetching release data from remote locations as opposed to having it installed with software makes a lot of sense, and I either want to have this implemented and specified properly for the 1.0 release, or have it explicitly dismissed.
  • Mobile friendliness / controls metadata (#192 & #55): We need some way to identify applications as “works well on mobile”. I also work for a company called Purism which happens to make a Linux-based smartphone, so this is obviously important for us. But it also is very relevant for users and other Linux mobile projects. The main issue here is to define what “mobile” actually means and what information makes sense to have in the Metainfo file to be future-proof. At the moment, I think we should definitely have data on supported input controls for a GUI application (touch vs mouse), but for this the discussion is still not done.
  • Resolving addon component type complexity (lots of issue reports): At the moment, an addon component can be created to extend an existing application by $whatever thing. This can be a plugin, a theme, a wallpaper, extra content, etc. This all lives in the addon supergroup of components, which makes it difficult for applications and software centers to group addons into useful groups – a plugin is functionally very different from a theme. Therefore I intend to possibly allow components to name “addon classes” they support and that addons can sort themselves into, allowing easy grouping and sorting of addons. This would of course add extra complexity. So this feature will either go into the 1.0 release, or be rejected.
  • Zero pending feature requests for the specification: Any remaining open feature request for the specification itself in AppStream’s issue tracker should either be accepted & implemented, or explicitly deferred or rejected.

I am not sure yet when the todo list will be completed, but I am certain that the 1.0 release of AppStream will happen this year, most likely before summer. Any input, especially from users of the format, is highly appreciated.

Thanks a lot to everyone who contributed or is contributing to the AppStream implementation or specification, you are great! Also, thanks to you, the reader, for using AppStream in your project. I definitely will give a bit more frequent and certainly shorter updates on the project’s progress from now on. Enjoy your rich software metadata, firmware updates and screenshot videos meanwhile!

Norbert Preining: Switching audio output devices for OpenAL based software

27 January, 2020 - 07:26

When playing a few rounds of Talos Principle, I realized I couldn’t get the sound output to my USB headphones. Selecting a different sink never worked. It was only in a forum post that I found the answer: put the following into ~/.alsoftrc

[pulse]
allow-moves=true

and all will be fine. Quoting from the forum post:

Recent versions of OpenAL default to disallow pulse streams from being moved.

Thanks!
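
Once allow-moves is enabled, the game’s stream can be moved to the headphones like any other PulseAudio stream, either from pavucontrol or on the command line. This is a generic PulseAudio sketch rather than anything from the post; the stream index and sink name below are placeholders:

pactl list short sink-inputs    # find the index of the game's audio stream
pactl list short sinks          # find the name of the USB headset sink
pactl move-sink-input 42 alsa_output.usb-headset.analog-stereo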

Enrico Zini: Conspiracy links

27 January, 2020 - 06:00

Why bother inventing conspiracies?

  • Perception management - Wikipedia (propaganda; archive.org 2020-01-27) Perception management is a term originated by the US military.[citation needed] The US Department of Defense (DOD) gives this definition:
  • Tuskegee syphilis experiment - Wikipedia (racism, history; archive.org 2020-01-27) Tuskegee Study of Untreated Syphilis in the Negro Male[a] was a clinical study conducted between 1932 and 1972 by the U.S. Public Health Service.[1][2] The purpose of this study was to observe the natural history of untreated syphilis; the African-American men in the study were only told they were receiving free health care from the United States government.[3]
  • Cellulite Isn't Real. This Is How It Was Invented. (privilege, health, italian; archive.org 2020-01-27) Here's how cellulite came to be the most endemic and untreatable “invented disease” of all time.
  • Propaganda Due - Wikipedia (politics, history, italy; archive.org 2020-01-27) Propaganda Due (Italian pronunciation: [propaˈɡanda ˈduːe]; P2) was a Masonic lodge under the Grand Orient of Italy, founded in 1877. Its Masonic charter was withdrawn in 1976, and it transformed into a clandestine, pseudo-Masonic, ultraright[1][2][3] organization operating in contravention of Article 18 of the Constitution of Italy that banned secret associations. In its latter period, during which the lodge was headed by Licio Gelli, P2 was implicated in numerous Italian crimes and mysteries, including the collapse of the Vatican-affiliated Banco Ambrosiano, the murders of journalist Mino Pecorelli and banker Roberto Calvi, and corruption cases within the nationwide bribe scandal Tangentopoli. P2 came to light through the investigations into the collapse of Michele Sindona's financial empire.[4]
  • Operation Gladio - Wikipedia (politics, history, italy; archive.org 2020-01-27) Operation Gladio is the codename for clandestine "stay-behind" operations of armed resistance that were planned by the Western Union (WU), and subsequently by NATO, for a potential Warsaw Pact invasion and conquest of Europe. Although Gladio specifically refers to the Italian branch of the NATO stay-behind organizations, "Operation Gladio" is used as an informal name for all of them. Stay-behind operations were prepared in many NATO member countries, and some neutral countries.[1]
  • Strategia della tensione in Italia - Wikipedia (archive.org 2020-01-27) The “strategy of tension” in Italy is a political theory that generally refers to a very troubled period of Italian history, in particular the 1970s, known as the Years of Lead, during which a subversive design aimed at destabilising or dismantling the pre-established equilibria.

Birger Schacht: Installing Debian on the Pinebook Pro

27 January, 2020 - 00:30

@brion on mastodon:

If you want the Linux-circa-2004 experience back, just try Linux on ARM!

  • everything compiles slowly
  • distro-hopping to find better hardware support
  • oops, you need proprietary drivers for that
  • forum posts hold the authoritative documentation and code for your distro

:D

In November last year, I ordered a Pinebook Pro from Pine64. The Pinebook Pro is a 14” (1080p) ARM laptop based on a RK3399 SOC. It has an eMMC built in and it is possible to add an NVMe SSD drive using an adapter. It also has a micro SD card reader and can boot from that. The notebook is very lightweight and the case seems solid. The bottom cover is attached using normal Phillips head screws and there is a lot of detailed documentation in the wiki about the parts of the board and how to access the internals.

I’m not really a fan of the keyboard, because in my opinion it feels a bit cheap - pressing the keys does not feel as smooth as I’m used to from other keyboards, such as the ThinkPad X230’s or the 2012 MacBook Air’s. In addition to that, I made the mistake of choosing an ANSI keyboard, which makes it harder for me to reach the Enter key. The big advantage of the device is definitely the battery. In the first week of playing around with the device (not using it that much, but doing a lot of tests with booting different images) I didn’t even unpack the power supply. Another nice feature is the privacy switches - when you press F1, F2 or F3 for 10 seconds, you cut the power to the BT/WiFi module (F1), the webcam (F2) or the microphone (F3). At least that’s the theory; it does not work with my Pinebook, but there is a firmware update for the keyboard that I have not installed yet, which might fix that.

I also really like how the Pinebook Pro creators keep you up to date with news regarding their products and related software. They publish monthly updates in their blog, they try to take part in the discussions in the pine64 forum, and they have a presence on the fediverse (there are more communication channels, but those are the ones I follow/used).

There are a couple of different pre-built operating system images one can dd to SD cards or the eMMC, and there are also some scripts to install (instead of dding) systems. The laptop comes preinstalled with what is usually (in the forums and the wiki) called Debian Desktop. It is a Debian based image with a Mate Desktop and a lot of modifications. The images for this system are distributed via a github repository. I did not find any source code for the images, nor documentation about the changes from upstream Debian, so I have no idea how they are built (the archives behind the Source code links on the release page of the images only contain the README.md file). I only started the preinstalled system once or twice, but it seemed to work very well (suspend worked) and it ships a lot of useful software for end users. But I did not take a deeper look at this image. There are also two Ubuntu based images listed in the Pine64 wiki, one of which comes with LXDE as desktop system, the other one with the Mate Desktop. They are also distributed via github release pages, but in these cases the repository also contains the code of the build scripts. Manjaro, an Arch Linux based distribution, also provides images for the Pinebook Pro. Besides those there are Armbian images, Android images, Chromium images and some more.

I did not really want to use any of the provided images, but rather install my own Debian system. There is an installer script which installs Debian on an SD card or the eMMC using debootstrap. This script does a lot of useful stuff, and a good part of my approach to installing Debian is based on it.

I installed Debian on an SD card using my older HP laptop. There are two main parts one needs that are not part of Debian yet: a heavily patched kernel on the one hand, and the u-boot bootloader, also with some patches, on the other. There is a great tutorial on how to build an (almost) upstream u-boot for the Pinebook. This is based on this git repository, which contains the u-boot upstream sources modified to work on the Pinebook and with some changes to the boot order. The main patch is this one, which was posted to the u-boot mailing list in November, but I’m not sure what its status is. Let’s hope it will be merged upstream for the next release of u-boot.
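
For orientation, the cross-build itself follows the usual u-boot procedure. The following is only a rough sketch and rests on two assumptions of mine: that the patched tree provides a pinebook-pro-rk3399_defconfig, and that an ARM Trusted Firmware build (bl31.elf) is already at hand, since the RK3399 needs one:

# rough sketch only; the defconfig name and the bl31.elf path are assumptions
apt install gcc-aarch64-linux-gnu
cd u-boot-pinebook-pro        # checkout of the patched u-boot tree
make pinebook-pro-rk3399_defconfig
make BL31=/path/to/bl31.elf CROSS_COMPILE=aarch64-linux-gnu- -j`nproc`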

For the kernel there is a repository in the Manjaro GitLab, and the maintainer of this kernel repository announced that they plan on mainlining the patches. The only thing not working yet is suspend to RAM. I’m currently using the v5.5-rc7-panfrost-fixes branch of the kernel.

To crossbuild the kernel, I had to first prepare my build machine (which is AMD64):

apt install crossbuild-essential-arm64 flex bison fakeroot build-essential bc libssl-dev

then I cloned the repository and copied the configuration the Manjaro kernel is using from their kernel package repository. I also had to disable compression of kernel modules.

git clone https://gitlab.manjaro.org/tsys/linux-pinebook-pro
cd linux-pinebook-pro
wget https://gitlab.manjaro.org/manjaro-arm/packages/core/linux-pinebookpro/raw/master/config -O .config
scripts/config --set-str LOCALVERSION -custom
scripts/config --disable MODULE_COMPRESS
make -j`nproc` ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- KBUILD_IMAGE=arch/arm64/boot/Image deb-pkg

The developer who maintains the kernel also published a repository with firmware for the Broadcom Wifi module and the DisplayPort. Another manjaro repository contains the firmware for the bluetooth chip.

Next step was to bootstrap Debian. First I installed the packages to bootstrap a system with another architecture and then I prepared the SD card:

apt install qemu-user-static binfmt-support
sfdisk /dev/sdc < gpt.sfdisk
mkfs.ext4 /dev/sdc1
cryptsetup luksFormat /dev/sdc2
cryptsetup luksOpen /dev/sdc2 sdc2_crypt
mkfs.ext4 /dev/mapper/sdc2_crypt

With gpt.sfdisk containing the following partition layout:

label: gpt
unit: sectors

/dev/sdc1 : start=      442368, size=     1024000, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, name="Boot"
/dev/sdc2 : start=     1466368,                    type=0FC63DAF-8483-4772-8E79-3D69D8477DE4

Then I created a temporary folder, mounted the partitions and used qemu-debootstrap to install the base system:

CHROOT=`mktemp -d`
mount /dev/mapper/sdc2_crypt $CHROOT
mkdir $CHROOT/boot
mount /dev/sdc1 $CHROOT/boot

sudo qemu-debootstrap --arch=arm64 --include=u-boot-menu,initramfs-tools,sudo,network-manager,cryptsetup,cryptsetup-initramfs bullseye $CHROOT

Then I copied the kernel package I built onto the SD card and installed it in the chroot. Part of the Linux image is also a *.dtb file for the RK3399, which I had to copy to /boot (because u-boot needs this file and the root (/) filesystem is encrypted).

mount -o bind /dev $CHROOT/dev
mount -o bind /sys $CHROOT/sys
mount -t proc /proc $CHROOT/proc
chroot $CHROOT
dpkg -i linux-image-5.5.0-rc7-custom+_5.5.0-rc7-custom+-1_arm64.deb
cp /usr/lib/linux-image-5.5.0-rc7-custom+/rockchip/rk3399-pinebook-pro.dtb /boot/

echo UUID=$(blkid -s UUID -o value /dev/mapper/sdc2_crypt) / ext4 defaults 0 1 >> /etc/fstab
echo sdc2_crypt PARTUUID=$(blkid -s PARTUUID -o value /dev/sdc2 ) none luks,discard,initramfs >> /etc/crypttab
echo UUID=$(blkid -s UUID -o value /dev/sdc1) /boot ext4 defaults 0 1 >> /etc/fstab

Finally, I pointed the u-boot-update script to the *.dtb file and added my user account:

# in /etc/default/u-boot
U_BOOT_FDT="rk3399-pinebook-pro.dtb"
U_BOOT_PARAMETERS="console=tty1"

adduser bisco
adduser bisco sudo

In the running system I then also enabled s2idle, because suspend to RAM does not work yet.
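
One way to do that (a minimal sketch, not necessarily the exact way it was done here) is to select the s2idle state via sysfs:

# show the suspend variants the kernel offers; the active one is in brackets
cat /sys/power/mem_sleep
# select s2idle for the current boot; mem_sleep_default=s2idle on the kernel
# command line makes the choice permanent
echo s2idle | sudo tee /sys/power/mem_sleep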

I haven’t had time to do any more tests on this device, but I hope I’ll get to that in February. If I manage to set up the system to a usable state, I’ll bring it to FOSDEM, which will be its first outside test…

On the software side, most things work fine so far. The main downside is the missing TorBrowser package, but this is tracked upstream. Alacritty does not work and won’t in the near future, it seems. When I tried to use tilix, that led to #949952, so I’m using rxvt-unicode for now…

There is now also a wiki page for the Debian installer script which lists some issues and tips how to fix them.

Bits from Debian: New Debian Developers and Maintainers (November and December 2019)

26 January, 2020 - 21:00

The following contributors got their Debian Developer accounts in the last two months:

  • Louis-Philippe Véronneau (pollo)
  • Olek Wojnar (olek)
  • Sven Eckelmann (ecsv)
  • Utkarsh Gupta (utkarsh)
  • Robert Haist (rha)

The following contributors were added as Debian Maintainers in the last two months:

  • Denis Danilov
  • Joachim Falk
  • Thomas Perret
  • Richard Laager

Congratulations!

Wouter Verhelst: SReview kubernetes update

26 January, 2020 - 14:11

About a week and a half ago, I mentioned that I'd been working on making SReview, my AGPLv3 video review and transcode system work from inside a Kubernetes cluster. I noted at the time that while I'd made it work inside minikube, it couldn't actually be run from within a real Kubernetes cluster yet, mostly because I misunderstood how Kubernetes works, and assumed you could just mount the same Kubernetes volume from multiple pods, and share data that way (answer: no you can't).

The way to fix that is to share the data not through volumes, but through something else. That would require that the individual job containers download and upload files somehow.

I had a look at how the Net::Amazon::S3 perl module works (answer: it's very simple really) and whether it would be doable to add a transparent file access layer to SReview which would access files either on the local file system, or an S3 service (answer: yes).

So, as of yesterday or so, SReview supports interfacing with an S3 service (only tested with MinIO for now) rather than "just" files on the local file system. As part of that, I also updated the code so it would not re-scan all files every time the sreview-detect job for detecting new files runs, but only when the "last changed" time (or mtime for local file system access) has changed -- otherwise it would download far too many files every time.

This turned out to be a lot easier than I anticipated, and I have now successfully managed, using MinIO, to run a full run of a review cycle inside Kubernetes, without using any volumes except for the "ReadWriteOnce" ones backing the database and MinIO containers.

Additionally, my kubernetes configuration files are now split up a bit (so you can apply the things that make sense for your configuration somewhat more easily), and are (somewhat) tested.

If you want to try out SReview and you've already got Kubernetes up and running, this may be for you! Please give it a try and don't forget to send some feedback my way.

Joey Hess: announcing arduino-copilot

26 January, 2020 - 03:31

arduino-copilot, released today, makes it easy to use Haskell to program an Arduino. It's an FRP-style system, and uses the Copilot DSL to generate embedded C code.

gotta blink before you can run

To make your arduino blink its LED, you only need 4 lines of Haskell:

import Copilot.Arduino
main = arduino $ do
    led =: blinking
    delay =: constant 100

Running that Haskell program generates an Arduino sketch in an .ino file, which can be loaded into the Arduino IDE and uploaded to the Arduino the same as any other sketch. It's also easy to use things like Arduino-Makefile to build and upload sketches generated by arduino-copilot.

shoulders of giants

Copilot is quite an impressive embedding of C in Haskell. It was developed for NASA by Galois and is intended for safety-critical applications. So it's neat to be able to repurpose it into hobbyist microcontrollers. (I do hope to get more type safety added to Copilot though, currently it seems rather easy to confuse eg miles with kilometers when using it.)

I'm not the first person to use Copilot to program an Arduino. Anthony Cowley showed how to do it in Abstractions for the Functional Roboticist back in 2013. But he had to write a skeleton of C code around the C generated by Copilot. Among other features, arduino-copilot automates generating that C skeleton. So you don't need to remember to enable GPIO pin 13 for output in the setup function; arduino-copilot sees you're using the LED and does that for you.

frp-arduino was a big inspiration too, especially how easy it makes it to generate an Arduino sketch without writing any C. The "=:" operator in arduino-copilot is copied from it. But frp-arduino contains its own DSL, which seems less capable than Copilot. And when I looked at using frp-arduino for some real world sensing and control, it didn't seem to be possible to integrate it with existing Arduino libraries written in C. While I've not done that with arduino-copilot yet, I did design it so it should be reasonably easy to integrate it with any Arduino library.

a more interesting example

Let's do something more interesting than flashing a LED. We'll assume pin 12 of an Arduino Uno is connected to a push button. When the button is pressed, the LED should stay lit. Otherwise, flash the LED, starting out flashing it fast, but flashing slower and slower over time, and then back to fast flashing.

{-# LANGUAGE RebindableSyntax #-}
import Copilot.Arduino.Uno

main :: IO ()
main = arduino $ do
        buttonpressed <- readfrom pin12
        led =: buttonpressed || blinking
        delay =: longer_and_longer * 2

This is starting to use features of the Copilot DSL; "buttonpressed || blinking" combines two FRP streams together, and "longer_and_longer * 2" does math on a stream. What a concise and readable implementation of this Arduino's behavior!

Finishing up the demo program is the implementation of longer_and_longer. This part is entirely in the Copilot DSL, and actually I lifted it from some Copilot example code. It gives a reasonable flavor of what it's like to construct streams in Copilot.

longer_and_longer :: Stream Int16
longer_and_longer = counter true $ counter true false `mod` 64 == 0

counter :: Stream Bool -> Stream Bool -> Stream Int16
counter inc reset = cnt
   where
        cnt = if reset then 0 else if inc then z + 1 else z
        z = [0] ++ cnt

This whole example turns into just 63 lines of C code, which compiles to a 1248 byte binary, so there's plenty of room left for larger, more complex programs.

simulating an Arduino

One of Copilot's features is it can interpret code, without needing to run it on the target platform. So the Arduino's behavior can be simulated, without ever generating C code, right at the console!

But first, one line of code needs to be changed, to provide some button states for the simulation:

        buttonpressed <- readfrom' pin12 [False, False, False, True, True]

Now let's see what it does:

# runghc demo.hs -i 5
delay:         digitalWrite: 
(2)            (13,false)    
(4)            (13,true)     
(8)            (13,false)    
(16)           (13,true)     
(32)           (13,true)     

Which is exactly what I described it doing! To prove that it always behaves correctly, you could use copilot-theorem.

Development of arduino-copilot was sponsored by Trenton Cronholm and Jake Vosloo on Patreon.

Vincent Bernat: ThinkPad X1 Carbon 2014: 5 years later

26 January, 2020 - 01:30

I have recently replaced my ThinkPad X1 Carbon 2014 (second generation). I have kept it for more than five years, using it every day and carrying it everywhere. The expected lifetime of a laptop is always an unknown. Let me share my feedback.

ThinkPad X1 Carbon 20A7 with its lid closed

My configuration embeds an Intel vPro Core i7-4600U, 8 GiB of RAM, a 256 GiB SATA SSD, a matte WQHD display and a WWAN LTE card. I got it in June 2014. It has spent these years running Debian Sid, starting from Linux 3.14 to Linux 5.4.

The inside is still quite dust-free! In the bottom left, there is the Intel WLAN card, the Sierra WWAN card as well as the SSD.

This generation of ThinkPad X1 Carbon has been subject to a variety of experiments around the keyboard, and we are still hunting the culprits. The layout is totally messed up, with many keys displaced.1 I have remapped most of them. It also lacks physical function keys: they have been replaced by a non-customizable touch bar. I do not like it, due to the absence of tactile feedback, and it is quite easy to hit a key by mistake. I would not recommend buying this generation as a second-hand device because of this.

The keyboard layout is maddening: check the “Home”, “End”, “Esc” and “Backspace” keys. The backquote key is between “AltGr” and right “Ctrl” while it should be where the “Esc” is. The touch bar is not very usable and shows significant signs of wear.

The screen is a WQHD display 2560x1440 (210 DPI). In 2014, Linux HiDPI support was in its infancy. This has not changed that much for X11 and the 1.5× factor is still a challenge: fonts can be scaled correctly, but many applications won’t adapt their interfaces. However, my most used applications are a terminal, Emacs, and Firefox. They handle this fractional factor without issue. As the power usage of a 4K display is significantly higher, in my opinion, a WQHD screen still is the perfect balance for a laptop: you get crisp texts while keeping power usage low.

After two or three years, white spots have started appearing on the screen. They are noticeable when displaying a uniform color. This seems to be a common problem due to pressure when the laptop sits closed in a bag. Most of the time, I don’t pay attention to this defect. Lenovo did not really acknowledge this issue but agrees to replace the screen under warranty.

After several years, the screen exhibits several white spots. The effect is not as strong when sitting just in front and hardly noticeable when not displaying a solid color.

The battery was replaced three years ago as a precautionary measure. I am still able to get around four hours from it despite its wear—65% of its design capacity. During the years, Linux became more power-efficient. At the beginning, powertop was reporting around 10 W of power usage when the screen brightness is at 20%, with Emacs, Firefox and a few terminals running. With a 5.4 kernel, I now get around 7 W in the same conditions.

The laptop contains a Sierra Wireless EM7345 4G LTE WWAN card. It is supported by Modem Manager when operating as a MBIM device. In the early days, the card dropped the network every 20 minutes. A firmware upgrade solved this reliability issue. This is not an easy task as you need to find the right firmware for your card and the right tool to flash it. At the time, I was only able to do that with Windows. I don’t recommend using a WWAN card anymore. They are black boxes with unreliable firmwares. I had the same kind of issues with the Qualcomm Gobi 2000 WWAN card present in my previous ThinkPad laptop. Lenovo switched from Sierra to Fibocom for the recent generations of ThinkPad and they are even more difficult to use with Linux, despite being manufactured by Intel. It is less trouble using a phone as a wireless hotspot.

At work, I was plugging the laptop into a dock, a ThinkPad OneLink Pro Dock. The proprietary connector for the dock combines power, USB3 and DisplayPort. The dock features both a DisplayPort and a DVI-I connector and acts as an MST hub. Support for such a configuration was pretty recent in Linux, as it was only added in version 3.17 (October 2014). Over the years, I didn’t run into much trouble with this dock.

Here is the rear face of the ThinkPad OneLink Pro Dock, featuring two USB3 ports, two USB2 ports, one Ethernet port, one DisplayPort and a DVI-I connector. The front face features two USB3 ports and an audio jack.

In summary, after five years of daily use, the laptop is still in good working condition. Only the screen and the touch bar show major signs of wear. Therefore, Lenovo keeps my trust for building durable and reliable laptops. I have replaced it with another ThinkPad X1 Carbon.

  1. The Swiss German layout may not help, but I didn’t care much about what is written on the keycaps. ↩︎

Norbert Preining: Let’s Encrypt on Debian/Buster: switching from acmetool to certbot

25 January, 2020 - 08:14

I have been using acmetool on my Buster server for quite some time. When choosing a client, I liked that it is written in Go, and that it is small and not overloaded with features, so I decided against Certbot and in favor of acmetool. Unfortunately, times are changing and the Let’s Encrypt v1 protocol will be discontinued in June 2020, and in preparation for this I have switched to certbot.

Certbot is somehow the default choice, proposed by Let’s Encrypt and developed by the Electronic Frontier Foundation (EFF). Acmetool is a personal project. Both program versions are quite old in Buster (acmetool 0.0.62-3+b11 and certbot 0.31.0-1), while the latest releases are 0.0.67 and 1.1.0 (though I have to say that there are no functional changes in the acmetool releases). Acmetool has a beta version supporting the v2 protocol, but I wasn’t convinced I wanted to try that out.

So I turned to certbot, and first of all updated the packaging of python3-acme and certbot in Debian to the latest released version 1.1.0. These packages can be installed on Debian Buster, testing, and sid:

deb https://www.preining.info/debian/letsencrypt buster main
deb-src https://www.preining.info/debian/letsencrypt buster main

My git updates are available at github: acme-debian and certbot-debian, based on the official (but outdated and broken due to missing pushes in the pristine-tar branch) repositories on Salsa.

The new protocol version of Let’s Encrypt also supports wildcard certificates, so I thought I would opt for that. DNS authentication is necessary for wildcards, though, so I needed a plugin for my DNS registrar, which is Gandi. Fortunately, there is a third-party plugin certbot-plugin-gandi that can do that trick. After installing the package with

pip3 install certbot-plugin-gandi

as root, and saving the API key into /etc/letsencrypt/gandi.ini, a call to

certbot certonly --certbot-plugin-gandi:dns-credential /etc/letsencrypt/gandi.ini -d preining.info,*.preining.info

gave new certificates in /etc/letsencrypt/live/preining.info.

What remained was updating the location of keys and certificates in all the domains hosted here, as well as the certificate for the IMAP server. All in all very painless and quick. Finally, purging the acmetool package and removing the corresponding cron job finalized the switch. The certbot package already installs a systemd timer and a cron job (which is not run if systemd is used), so updates should be automated.
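
To check that automatic renewal is actually wired up, a look at the timer plus a dry run is enough (a minimal sketch):

# the systemd timer shipped by the Debian package
systemctl list-timers certbot.timer
# simulate a renewal without touching the live certificates
certbot renew --dry-run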

Problems

Several things felt a bit painful with the switch to certbot:

  • The version in Debian/Buster and sid is too old, and I am not sure whether it would work with the external plugin for Gandi. Furthermore, with items like certificates I prefer the latest versions, which incorporate fixes.
  • Certbot itself has the problem that one cannot configure external plugins in the configuration file cli.ini, see here and here. As seen above, the configuration needs colons in the keys (certbot-plugin-gandi:dns-credential), but colons are not allowed in keys by the Python module used for reading the config file. This has been known for 1.5 years (or longer) and unfortunately there has been no progress.
  • Certbot development seems to be stuck or frozen with respect to external plugin support: about a year ago it was announced that the inclusion of plugins would be frozen to clarify the interface etc., but since then there have been no changes.

I can only hope that over time the issues with such an important piece of software will be resolved positively, and I am looking forward to seeing updates to certbot and friends in Debian.

Dirk Eddelbuettel: RcppArmadillo 0.9.800.4.0

25 January, 2020 - 08:08

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 680 other packages on CRAN.

A second small Armadillo bugfix upstream update 9.800.4 came out yesterday for the 9.800.* series, following a similar bugfix release 9.800.3 in December. This time just one file was changed (see below).

Changes in RcppArmadillo version 0.9.800.4.0 (2020-01-24)
  • Upgraded to Armadillo release 9.800.4 (Horizon Scraper)

    • fixes for incorrect type promotion in normpdf()

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve Kemp: procmail for gmail?

24 January, 2020 - 17:30

After 10+ years I'm in the process of retiring my mail-host. In the future I'll no longer be running exim4/dovecot/similar, and handling my own mail. Instead it'll all go to a (paid) Google account.

It feels like the end of an era, as it means a lot of my daily life will no longer be spent inside a single host; no longer will I run:

ssh steve@mail.steve.org.uk

I'm still within my Gsuite trial, but I've mostly finished importing my vast mail archive, via mbsync.

The only outstanding thing I need is some scripting for the mail. Since my mail has been self-hosted I've evolved a large and complex procmail configuration file which sorted incoming messages into Maildir folders.
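
For readers who never used procmail: the configuration is a list of recipes that match on headers and deliver into Maildir folders, roughly along these lines (a tiny made-up example, not taken from the real configuration; the trailing slash selects Maildir delivery):

:0
* ^List-Id:.*debian-user\.lists\.debian\.org
.lists.debian-user/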

Having a quick look around last night I couldn't find anything similar for the brave new world of Google Mail. So I hacked up a quick script which will automatically add labels to new messages that don't have any.

Finding messages which are new/unread and which don't have labels is a matter of searching for:

is:unread -has:userlabels

From there adding labels is pretty simple, if you decide what you want. For the moment I'm keeping it simple:

  • If a message comes from "Bob Smith" <bob.smith@example.com>
    • I add the label "bob.smith".
    • I add the label "example.com".

Both labels will be created if they don't already exist, and the actual coding part was pretty simple. To be more complex/flexible I would probably need to integrate a scripting language (oh, I have one of those), and let the user decide what to do for each message.
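
To give an idea of what is involved, the underlying REST calls can be sketched with curl; this is only an illustration of the API, not the script itself, and the access token, label ID and message ID are placeholders:

# find unread messages without any user label
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://gmail.googleapis.com/gmail/v1/users/me/messages?q=is:unread+-has:userlabels"
# create a label (the response contains its id)
curl -s -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"name": "example.com"}' \
  "https://gmail.googleapis.com/gmail/v1/users/me/labels"
# attach that label to a message
curl -s -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"addLabelIds": ["Label_123"]}' \
  "https://gmail.googleapis.com/gmail/v1/users/me/messages/MSGID/modify"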

The biggest annoyance is setting up the Google project, and all the OAUTH magic. I've documented briefly what I did but I don't actually know if anybody else could run the damn thing - there's just too much "magic" involved in these APIs.

Anyway procmail-lite for gmail. Job done.

Bdale Garbee: Digital Photo Creation Dates

24 January, 2020 - 16:09

I learned something new yesterday, that probably shouldn't have shocked me as much as it did. For legacy reasons, the "creation time" in the Exif metadata attached to digital camera pictures is not expressed in absolute time, but rather in some arbitrary expression of "local" time! This caused me to spend a long evening learning how to twiddle Exif data, and then how to convince Piwigo to use the updated metadata. In case I or someone else need to do this in the future, it seems worth taking the time to document what I learned and what I did to "make things right".

The reason photo creation time matters to me is that my wife Karen and I are currently in the midst of creating a "best of" subset of photos taken on our recently concluded family expedition to Antarctica and Argentina. Karen loves taking (sometimes award-winning) nature photos, and during this trip she took thousands of photos using her relatively new Nikon COOLPIX P900 camera. At the same time, both of us and our kids also took many photos using the cameras built into our respective Android phones. To build our "best of" list, we wanted to be able to pick and choose from the complete set of photos taken, so I started by uploading all of them to the Piwigo instance I host on a virtual machine on behalf of the family, where we assigned a new tag for the subset and started to pick photos to include.

Unfortunately, to our dismay, we noted that all the photos taken on the P900 weren't aligning correctly in the time-line. This was completely unexpected, since one of the features of the P900 is that it includes a GPS chip and adds geo-tags to every photo taken, including a GPS time stamp.

Background

We've grown accustomed to the idea that our phones always know the correct time due to their behavior on the mobile networks around the world. And for most of us, the camera in our phone is probably the best camera we own. Naively, my wife and I assumed the GPS time stamps on the photos taken by the P900 would allow it to behave similarly and all our photos would just automatically align in time... but that's not how it worked out!

The GPS time stamp implemented by Nikon is included as an Exif extension separate from the "creation time", which is expressed in the local time known by the camera. While my tiny little mind revolts at this and thinks all digital photos should just have a GPS-derived UTC creation time whenever possible... after thinking about it for a while, I think I understand how we got here.

In the early days of Exif, most photos were taken using chemical processes and any associated metadata was created and added manually after the photo existed. That's probably why there are separate tags for creation time and digitization time, for example. As cameras went digital and got clocks, it became common to expect the photographer to set the date and time in their camera, and of course most people would choose the local time since that's what they knew.

With the advent of GPS chips in cameras, the hardware now has access to an outstanding source of "absolute time". But the Nikon guys aren't actually using that directly to set image creation time. Instead, they still assume the photographer is going to manually set the local time, but added a function buried in one of the setup menus to allow a one-time set of the camera's clock from GPS satellite data.

So, what my wife needs to do in the future is remember, at the start of any photo shooting period where time sync of her photos with those of others is important, to make sure her camera's time is correctly set, taking advantage of the function that allows her to set the local time from the GPS time. But of course, that only helps future photos...

How I fixed the problem

So the problem in front of me was several thousand images taken with the camera's clock "off" by 15 hours and 5 minutes. We figured that out by a combination of noting the amount the camera's clock skewed by when we used the GPS function to set the clock, then noticing that we still had to account for the time zone to make everything line up right. As far as I can tell, 12 hours of that was due to AM vs PM confusion when my wife originally set the time by hand, less 1 hour of daylight savings time not accounted for, plus 4 time zones from home to where the photos were taken. And the remaining 5 minutes probably amount to some combination of imprecision when the clock was originally set by hand, and drift of the camera's clock in the many months since then.

I thought briefly about hacking Piwigo to use the GPS time stamps, but quickly realized that wouldn't actually solve the problem, since they're in UTC and the pictures from our phone cameras were all using local time. There's probably a solution lurking there somewhere, but just fixing up the times in the photo files that were wrong seemed like an easier path forward.

A Google search or two later, and I found jhead, which fortunately was already packaged for Debian. It makes changing Exif timestamps of an on-disk Jpeg image file really easy. Highly recommended!
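
Before touching thousands of files it is worth trying the adjustment on a single copy first (a minimal sketch; the file name is just an example):

jhead IMG_0001.JPG              # show the current Exif date/time fields
jhead -ta+15:05 IMG_0001.JPG    # shift the timestamps forward by 15 hours and 5 minutes
exiv2 IMG_0001.JPG              # cross-check the result with another tool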

Compounding my problem was that my wife had already spent many hours tagging her photos in the Piwigo web GUI, so it really seemed necessary to fix the images "in place" on the Piwigo server. The first problem with that is that as you upload photos to the server, they are assigned unique filenames on disk based on the upload date and time plus a random hash, and the original filename becomes just an element of metadata in the Piwigo database. Piwigo scans the Exif data at image import time and stuffs the database with a number of useful values from there, including the image creation time that is fundamental to aligning images taken by different cameras on a timeline.

I could find no Piwigo interface to easily extract the on-disk filenames for a given set of photos, so I ended up playing with the underlying database directly. The Piwigo source tree contains a file piwigo_structure-mysql.sql used in the installation process to set up the database tables that served as a handy reference for figuring out the database schema. Looking at the piwigo_categories table, I learned that the "folder" I had uploaded all of the raw photos from my wife's camera to was category 109. After a couple hours of re-learning mysql/mariadb query semantics and just trying things against the database, this is the command that gave me the list of all the files I wanted:

select piwigo_images.path into outfile '/tmp/imagefiles'
  from piwigo_image_category, piwigo_images
  where piwigo_image_category.category_id=109
    and piwigo_images.date_creation >= '2019-12-14'
    and piwigo_image_category.image_id=piwigo_images.id;

That gave me a list of the on-disk file paths (relative to the Piwigo installation root) of images uploaded from my wife's camera since the start of this trip in a file. A trivial shell script loop using that list of paths quickly followed:

        cd /var/www/html/piwigo
        for i in `cat /tmp/imagefiles`
        do
                echo $i
                sudo -u www-data jhead -ta+15:05 $i
        done

At this point, all the files on disk were updated, as a little quick checking with exif and exiv2 at the command line confirmed. But my second problem was figuring out how to get Piwigo to notice and incorporate the changes. That turned out to be easier than I thought! Using the admin interface to go into the photos batch manager, I was able to select all the photos in the folder we upload raw pictures from Karen's camera to that were taken in the relevant date range (which I expressed as taken:2019-12-14..2021), then selected all photos in the resulting set, and performed action "synchronize metadata". All the selected image files were rescanned, the database got updated...

Voila! Happy wife!

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2019

24 January, 2020 - 01:19

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 208.00 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

Though December was as quiet as was to be expected due to the holiday season, the usual amount of security updates was still released by our contributors.
We currently have 59 LTS sponsors sponsoring 219 hours each month. Still, as always, we are welcoming new LTS sponsors!

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 33 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

