Planet Debian

Planet Debian - https://planet.debian.org/

Louis-Philippe Véronneau: Paying for The Internet

6 August, 2019 - 11:00

For a while now, I've been paying for The Internet. Not the internet connection provided by my ISP, mind you, but for the stuff I enjoy online and the services I find useful.

Most of the Internet as we currently know it is funded by ads. I hate ads and I take a vicious pride in blocking them with the help of great projects like uBlock Origin and NoScript. More fundamentally, I believe the web shouldn't be funded via ads:

  • they control your brain (that alone should be enough to ban ads)
  • they create morally wrong economic incentives towards consumerism
  • they create significant security risks and make websites gather data on you

I could go on like this, but I feel those are pretty strong arguments. Feel free to disagree.

So I've started paying. Paying for my emails. Paying for the comics I enjoy online 1. Paying for the few YouTube channels I like. Paying for the newspapers I read.

At the moment, The Internet costs me around 260 USD per year. Luckily for me, I'm privileged enough that it doesn't have a significant impact on my finances. I also pay for a lot of the software I use and enjoy by making patches and spending time working on them. I feel that's a valid way to make The Internet a more sustainable place.

I don't think individual actions like this one have a very profound impact on how things work, but like riding your bike to work or eating locally produced organic food, it opens a window into a possible future. A better future.

  1. I currently like these comics enough to pay for them:

Dirk Eddelbuettel: #23: Debugging with Docker and Rocker – A Concrete Example helping on macOS

6 August, 2019 - 08:48

Welcome to the 23rd post in the rationally reasonable R rants series, or R4 for short. Today’s post was motivated by an exchange on the r-devel list earlier in the day, and a few subsequent off-list emails.

Roger Koenker posted a question: how to best debug an issue arising only with gfortran-9, which is difficult to get hold of on his macOS development platform. Some people followed up, and I mentioned that I had good success using Docker, and particularly our Rocker containers—and outlined a quick mini-tutorial (which had one mini-typo lacking the important slash in -w /work). Roger and I followed up over a few more off-list emails, and by and large this worked for him.

So what follows below is a jointly written / edited ‘mini HOWTO’ of how to deploy Docker on macOS for debugging under particular toolchains more easily available on Linux. Use on Windows and Linux should be very similar, differing only in the initial install. In fact, I frequently debug or test in Docker sessions when I do not want to install on my Linux host system. Roger sent one version (I had also edited) back to the list. What follows is my final version.

Debugging with Docker: Getting Hold of Particular Compilers

Context: The quantreg package was seen exhibiting errors when compiled with gfortran-9. The following shows how to use gfortran-9 on macOS by virtue of Docker. It is written in Roger Koenker’s voice, but authored by Roger and myself.

With extensive help from Dirk Eddelbuettel I have installed docker on my mac mini from

https://hub.docker.com/editions/community/docker-ce-desktop-mac

which installs from a dmg in quite standard fashion. This has allowed me to simulate running R in a Debian environment with gfortran-9 and begin the process of debugging my ancient rqbr.f code.

Some further details:

Step 0: Install Docker and Test

Install Docker for macOS following this Docker guide. Do some initial testing, e.g.

docker --version 
docker run hello-world       
Step 1: Download r-base and test OS

We use the plainest Rocker container rocker/r-base, in the aliased form of the official Docker container for R, i.e. r-base. We first ‘pull’, then test the version and drop into bash as a second test.

docker pull r-base                       # downloads r-base for us 
docker run --rm -ti r-base R --version   # to check we have the R we want
docker run --rm -ti r-base bash          # now in shell, Ctrl-d to exit
Step 2: Setup the working directory

We tell Docker to run from the current directory and access the files therein. For the work on the quantreg package this is projects/rq for Roger:

cd projects/rq
docker run --rm -ti -v ${PWD}:/work -w /work r-base bash

This puts the contents of projects/rq into the /work directory, and starts the session in /work (as can be seen from the prompt).

Next, we update the package information inside the container:

root@90521904fa86:/work# apt-get update
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian testing InRelease [117 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8,385 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian testing/main amd64 Packages [7,916 kB]
Fetched 16.6 MB in 4s (4,411 kB/s)                           
Reading package lists... Done
Step 3: Install gcc-9 and gfortran-9
root@90521904fa86:/work# apt-get install gcc-9 gfortran-9
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
cpp-9 gcc-9-base libasan5 libatomic1 libcc1-0 libgcc-9-dev libgcc1 libgfortran-9-dev
libgfortran5 libgomp1 libitm1 liblsan0 libquadmath0 libstdc++6 libtsan0 libubsan1
Suggested packages:
gcc-9-locales gcc-9-multilib gcc-9-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg
libasan5-dbg liblsan0-dbg libtsan0-dbg libubsan1-dbg libquadmath0-dbg gfortran-9-multilib
gfortran-9-doc libgfortran5-dbg libcoarrays-dev
The following NEW packages will be installed:
cpp-9 gcc-9 gfortran-9 libgcc-9-dev libgfortran-9-dev
The following packages will be upgraded:
gcc-9-base libasan5 libatomic1 libcc1-0 libgcc1 libgfortran5 libgomp1 libitm1 liblsan0
libquadmath0 libstdc++6 libtsan0 libubsan1
13 upgraded, 5 newly installed, 0 to remove and 71 not upgraded.
Need to get 35.6 MB of archives.
After this operation, 107 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libasan5 amd64 9.1.0-10 [390 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libubsan1 amd64 9.1.0-10 [128 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libtsan0 amd64 9.1.0-10 [295 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9-base amd64 9.1.0-10 [190 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libstdc++6 amd64 9.1.0-10 [500 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libquadmath0 amd64 9.1.0-10 [145 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian testing/main amd64 liblsan0 amd64 9.1.0-10 [137 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libitm1 amd64 9.1.0-10 [27.6 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgomp1 amd64 9.1.0-10 [88.1 kB]
Get:10 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran5 amd64 9.1.0-10 [633 kB]
Get:11 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libcc1-0 amd64 9.1.0-10 [47.7 kB]
Get:12 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libatomic1 amd64 9.1.0-10 [9,012 B]
Get:13 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc1 amd64 1:9.1.0-10 [40.5 kB]
Get:14 http://cdn-fastly.deb.debian.org/debian testing/main amd64 cpp-9 amd64 9.1.0-10 [9,667 kB]
Get:15 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc-9-dev amd64 9.1.0-10 [2,346 kB]
Get:16 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9 amd64 9.1.0-10 [9,945 kB]
Get:17 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran-9-dev amd64 9.1.0-10 [676 kB]
Get:18 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gfortran-9 amd64 9.1.0-10 [10.4 MB]
Fetched 35.6 MB in 6s (6,216 kB/s)      
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libasan5_9.1.0-10_amd64.deb ...
Unpacking libasan5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libubsan1_9.1.0-10_amd64.deb ...
Unpacking libubsan1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libtsan0_9.1.0-10_amd64.deb ...
Unpacking libtsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../gcc-9-base_9.1.0-10_amd64.deb ...
Unpacking gcc-9-base:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up gcc-9-base:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libstdc++6_9.1.0-10_amd64.deb ...
Unpacking libstdc++6:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up libstdc++6:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../0-libquadmath0_9.1.0-10_amd64.deb ...
Unpacking libquadmath0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../1-liblsan0_9.1.0-10_amd64.deb ...
Unpacking liblsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../2-libitm1_9.1.0-10_amd64.deb ...
Unpacking libitm1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../3-libgomp1_9.1.0-10_amd64.deb ...
Unpacking libgomp1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../4-libgfortran5_9.1.0-10_amd64.deb ...
Unpacking libgfortran5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../5-libcc1-0_9.1.0-10_amd64.deb ...
Unpacking libcc1-0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../6-libatomic1_9.1.0-10_amd64.deb ...
Unpacking libatomic1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../7-libgcc1_1%3a9.1.0-10_amd64.deb ...
Unpacking libgcc1:amd64 (1:9.1.0-10) over (1:9.1.0-8) ...
Setting up libgcc1:amd64 (1:9.1.0-10) ...
Selecting previously unselected package cpp-9.
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../cpp-9_9.1.0-10_amd64.deb ...
Unpacking cpp-9 (9.1.0-10) ...
Selecting previously unselected package libgcc-9-dev:amd64.
Preparing to unpack .../libgcc-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgcc-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gcc-9.
Preparing to unpack .../gcc-9_9.1.0-10_amd64.deb ...
Unpacking gcc-9 (9.1.0-10) ...
Selecting previously unselected package libgfortran-9-dev:amd64.
Preparing to unpack .../libgfortran-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgfortran-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gfortran-9.
Preparing to unpack .../gfortran-9_9.1.0-10_amd64.deb ...
Unpacking gfortran-9 (9.1.0-10) ...
Setting up libgomp1:amd64 (9.1.0-10) ...
Setting up libasan5:amd64 (9.1.0-10) ...
Setting up libquadmath0:amd64 (9.1.0-10) ...
Setting up libatomic1:amd64 (9.1.0-10) ...
Setting up libgfortran5:amd64 (9.1.0-10) ...
Setting up libubsan1:amd64 (9.1.0-10) ...
Setting up cpp-9 (9.1.0-10) ...
Setting up libcc1-0:amd64 (9.1.0-10) ...
Setting up liblsan0:amd64 (9.1.0-10) ...
Setting up libitm1:amd64 (9.1.0-10) ...
Setting up libtsan0:amd64 (9.1.0-10) ...
Setting up libgcc-9-dev:amd64 (9.1.0-10) ...
Setting up gcc-9 (9.1.0-10) ...
Setting up libgfortran-9-dev:amd64 (9.1.0-10) ...
Setting up gfortran-9 (9.1.0-10) ...
Processing triggers for libc-bin (2.28-10) ...
root@90521904fa86:/work# pwd

Here filenames and versions reflect the Debian repositories as of today, August 5, 2019. While minor details may change at a future point in time, the key fact is that we get the components we desire via a single call, as the Debian system has a well-honed package system.

Step 4: Prepare Package

At this point Roger removed some dependencies from the package quantreg that he knew were not relevant to the debugging problem at hand.

Step 5: Set Compiler Flags

Next, set compiler flags as follows:

root@90521904fa86:/work# mkdir ~/.R; vi ~/.R/Makevars 

adding the values

CC=gcc-9
FC=gfortran-9
F77=gfortran-9

to the file. Alternatively, one can find the settings of CC, FC, CXX, … in /etc/R/Makeconf (which for the Debian package is a softlink to R’s actual Makeconf) and alter them there.
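If one prefers not to use an editor inside the container, the same file can be written non-interactively; the following is just a sketch of an equivalent step using the same three variables:

# a sketch: write ~/.R/Makevars without an editor (same three variables as above)
mkdir -p ~/.R
cat > ~/.R/Makevars <<'EOF'
CC=gcc-9
FC=gfortran-9
F77=gfortran-9
EOF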

Step 6: Install the Source Package

Now run

R CMD INSTALL quantreg_5.43.tar.gz

which uses the gfortran-9 compiler, and this version did reproduce the error initially reported by the CRAN maintainers.

Step 7: Debug!

With the tools in place and the bug reproduced, it is (just!) a matter of finding the bug and fixing it.

And that concludes the tutorial.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Romain Perier: Free software activities in May, June and July 2019

6 August, 2019 - 01:32
Hi Planet, it has been a long time since my last post.
Here is an update covering what I have been doing in my free software activities during May, June and July 2019.
May

Only contributions related to Debian were done in May:
  •  linux: Update to 5.1 (including porting of all debian patches to the new release)
  • linux: Update to 5.1.2
  • linux: Update to 5.1.3
  • linux: Update to 5.1.5
  • firmware-nonfree: misc-nonfree: Add GV100 signed firmwares (Closes: #928672)
June

Debian
  • linux: Update to 5.1.7
  • linux: Update to 5.1.8
  • linux: Update to 5.1.10
  • linux: Update to 5.1.11
  • linux: Update to 5.1.15
  • linux: [sparc64] Fix device naming inconsistency between sunhv_console and sunhv_reg (Closes: #926539)
  • raspi3-firmware:  New upstream version 1.20190517
  • raspi3-firmware: New upstream version 1.20190620+1
Kernel Self Protection Project

I have recently joined the Kernel Self Protection Project, which basically intends to harden the mainline Linux kernel as much as possible by adding subsystems that improve security or make internal subsystems more robust against common errors that might lead to security issues.

As a first contribution, Kees Cook asked me to check all the NLA_STRING attributes for non-nul-terminated strings. Internal functions for NLA attributes expect standard nul-terminated strings and use standard string functions like strcmp() or equivalent. A few drivers were using non-nul-terminated strings in some cases, which might lead to buffer overflows. I have checked all the NLA_STRING uses in all drivers and forwarded a status for each of them. Everything was already fixed in linux-next (hopefully).
July

Debian
  • linux: Update to 5.1.16
  • linux: Update to 5.2-rc7 (including porting of all debian patches to the new release)
  • linux: Update to 5.2
  • linux: Update to 5.2.1
  • linux: [rt] Update to 5.2-rt1
  • linux: Update to 5.2.4
  • ethtool: New upstream version 5.2
  • raspi3-firmware: Fixed lintians warnings about the binaries blobs for the raspberry PI 4
  • raspi3-firmware: New upstream version 1.20190709
  • raspi3-firmware: New upstream version 1.20190718
The following CVEs are for buster-security:
  • linux: [x86] x86/insn-eval: Fix use-after-free access to LDT entry (CVE-2019-13233)
  • linux: [powerpc*] mm/64s/hash: Reallocate context ids on fork (CVE-2019-12817)
  • linux: nfc: Ensure presence of required attributes in the deactivate_target handler (CVE-2019-12984)
  • linux: binder: fix race between munmap() and direct reclaim (CVE-2019-1999)
  • linux: scsi: libsas: fix a race condition when smp task timeout (CVE-2018-20836)
  • linux: Input: gtco - bounds check collection indent level (CVE-2019-13631)
Kernel Self Protection Project

I am currently improving the API of the internal kernel subsystem "tasklet". This is an old API and, like "timer", it has several limitations regarding the way information is passed to the callback handler. A future patch set will be sent upstream; I will probably write a blog post about it.

Reproducible Builds: Reproducible Builds in July 2019

5 August, 2019 - 23:06

Welcome to the July 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
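As a rough illustration of the idea (a sketch only, not our actual tooling, and with a hypothetical package name), reproducibility means that building the same source twice should yield bit-for-bit identical artifacts:

# sketch: build the same Debian source package twice and compare the results
dpkg-buildpackage -b -us -uc && sha256sum ../hello_1.0-1_amd64.deb > first.sha256
# ... rebuild from a clean checkout, ideally varying time, path and environment ...
dpkg-buildpackage -b -us -uc && sha256sum -c first.sha256   # identical => reproducible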

In July’s report, we cover:

  • Front page: Media coverage, upstream news, etc.
  • Distribution work: Shenanigans at DebConf19
  • Software development: Software transparency, yet more diffoscope work, etc.
  • On our mailing list: GNU tools, education and buildinfo files
  • Getting in touch: … and how to contribute

If you are interested in contributing to our project, we enthusiastically invite you to visit our Contribute page on our website.

Front page

Nico Alt wrote a detailed and well-researched article titled “Trust is good, control is better” which discusses Reproducible builds in F-Droid, the alternative application repository for Android mobile phones. In contrast to the bigger commercial app stores, F-Droid only offers apps that are free and open source software. The post not only demonstrates using diffoscope but talks more generally about how reproducible builds can prevent single developers or other important centralised infrastructure becoming targets for toolchain-based attacks.

Later in the month, F-Droid’s aforementioned reproducibility status was mentioned on episode 68 of the Late Night Linux podcast. (direct link to 14:12)

Morten (“Foxboron”) Linderud published his academic thesis “Reproducible Builds: break a log, good things come in trees” which investigates and describes how transparency log overlays can provide additional security guarantees for computers automatically producing software packages. The thesis was part of Morten’s studies at the University of Bergen, Norway and is an extension of the work New York University Tandon School of Engineering has been doing with package rebuilder integration in APT.

Mike Hommey posted to his blog about Reproducing the Linux builds of Firefox 68 which leverages that builds shipped by Mozilla should be reproducible from this version. He discusses the problems caused by the builds being optimised with Profile-Guided Optimisation (PGO) but armed with the now-published profiling data, Mike provides Docker-based instructions how to reproduce the published builds yourself.

Joel Galenson has been making progress on a reproducible Rust compiler which includes support for a --remap-path-prefix argument, related to the concepts and problems involved in the BUILD_PATH_PREFIX_MAP proposal to fix issues with build paths being embedded in binaries.

Lastly, Alessio Treglia posted to their blog about Cosmos Hub and Reproducible Builds which describes the reproducibility work happening in the Cosmos Hub, a network of interconnected blockchains. Specifically, Alessio talks about work being done on the Gaia development kit for the Hub.


Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Enabling Link Time Optimization (LTO) in this distribution’s “Tumbleweed” branch caused multiple issues due to the number of cores on the build host being added to the CFLAGS variable. This affected, for example, a debuginfo/rpm header, as well as resulting in CFLAGS appearing in built binaries such as fldigi, gmp, haproxy, etc.

As highlighted in last month’s report, the OpenWrt project (a Linux operating system targeting embedded devices such as wireless network routers) hosted a summit in Hamburg, Germany. Their full summit report and roundup is now available that covers many general aspects within that distribution, including the work on reproducible builds that was done during the event.

Debian

It was an extremely productive time in Debian this month in and around DebConf19, the 20th annual conference for both contributors and users, which was held at the Federal University of Technology in Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28. The conference was preceded by “DebCamp” from the 14th until the 19th, with an additional “Open Day” targeted at the more general public on the 20th.

There were a number of talks touching on the topic of reproducible builds and secure toolchains throughout the conference, including:

There were naturally countless discussions regarding Reproducible Builds in and around the conference on the questions of tooling, infrastructure and our next steps as a project.

The release of Debian 10 buster has also meant the release cycle for the next release (codenamed “bullseye”) has just begun. As part of this, the Release Team recently announced that Debian will no longer allow binaries built and uploaded by maintainers on their own machines to be part of the upcoming release. This is great news not only for toolchain security in general but also in that it will ensure that all binaries that will form part of this release will likely have a .buildinfo file and thus metadata that could be used by others to reproduce and verify the builds.

Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated if packages had to be manually approved or processed in the so-called “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a potential patch for the issue which is pending review.

David Bremner posted to his blog about “Yet another buildinfo database” that provides a SQL interface for querying .buildinfo attestation documents, particularly focusing on identifying packages that were built with a specific — and possibly buggy — build-dependency. Later at DebConf, David demonstrated his tool live (starting at 36:30).

Ivo de Decker (“ivodd”) scheduled rebuilds of over 600 packages that last experienced an upload to the archive in December 2016 or earlier. This was so that they would be built using a version of the low-level dpkg package build tool that supports the generation of reproducible binary packages. The effect of this on the main archive will be deliberately staggered and thus visible throughout the upcoming weeks, potentially resulting in some of these packages now failing to build.

Joaquin de Andres posted an update regarding the work being done on continuous integration on Debian’s Gitlab instance at DebConf19 in which he mentions, inter alia, a tool called atomic-reprotest. This is a relatively new utility to help debug failures logged by our reprotest tool, which attempts to test whether a build is reproducible or not. This tool was also mentioned in a subsequent lightning talk.

Chris Lamb filed two bugs to drop the test jobs for both strip-nondeterminism (#932366) and reprotest (#932374) after modifying them to build on the Salsa server’s own continuous integration platform, and Holger Levsen shortly resolved them.

Lastly, 63 reviews of Debian packages were added, 72 were updated and 22 were removed this month, adding to our knowledge about identified issues. Chris Lamb added and categorised four new issue types, umask_in_java_jar_file, built_by-in_java_manifest_mf, timestamps_in_manpages_generated_by_lopsubgen and codadef_coda_data_files.

Software development

The goal of Benjamin Hof’s Software Transparency effort is to improve on the cryptographic signatures of the APT package manager by introducing a Merkle tree-based transparency log for package metadata and source code, in a similar vein to certificate transparency. This month, he pushed a number of repositories to our revision control system for further future development and review.

In addition, Bernhard M. Wiedemann updated his (deliberately) unreproducible demonstration project to add support for floating point variations as well as changes in the project’s copyright year.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Neal Gompa, Michael Schröder & Miro Hrončok responded to Fedora’s recent change to rpm-config with some new developments within rpm to fix an unreproducible “Build Date” and reverted a change to the Python interpreter to switch back to unreproducible/time-based compile caches.

Lastly, kpcyrd submitted a pull request for Alpine Linux to add SOURCE_DATE_EPOCH support to the abuild build tool in this operating system.
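For context, SOURCE_DATE_EPOCH is simply an environment variable holding a UNIX timestamp that build tools are expected to use in place of the current time; a minimal sketch of setting and consuming it:

# sketch: pin the embedded timestamp to the last git commit rather than "now"
export SOURCE_DATE_EPOCH="$(git log -1 --pretty=%ct)"
date -u -d "@${SOURCE_DATE_EPOCH}" '+%Y-%m-%d %H:%M:%S'   # the date a build would embed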


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.
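A typical invocation simply takes two files or directories to compare (the filenames below are hypothetical):

# sketch: compare two builds of the same package and write an HTML report
diffoscope --html report.html first-build/foo_1.0_amd64.deb second-build/foo_1.0_amd64.deb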

This month, Chris Lamb made the following changes:

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [].
  • If a command fails to execute but does not print anything to standard error, try to include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) []
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers [], but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [].
  • Cease ignoring test failures in stable-backports. []
  • Add missing textual DESCRIPTION headers for .zip and “Mozilla”-optimised .zip files. []
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. []
  • Update some reporting:
    • Re-add “return code” noun to “Command foo exited with X” error messages. []
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. []
    • Skip the extra newline in Output:\nfoo. []
  • Add some explicit return values to appease Pylint, etc. []
  • Also include the python3-tlsh in the Debian test dependencies. []
  • Released and uploaded versions 116, 117, 118, 119 & 120. [][][][][]

In addition, Marc Herbert provided a patch to catch failures to disassemble ELF binaries. []


Project website

Yet more effort was put into our website this month, including:

  • Bernhard M. Wiedemann:
    • Update multiple works to use standard (or at least consistent) terminology. []
    • Document an alternative Python snippet in the SOURCE_DATE_EPOCH examples. []
  • Chris Lamb:
    • Split out our non-fiscal sponsors with a description [] and make them non-display three-in-a-row [].
    • Correct references to 1&1 IONOS (née Profitbricks). []
    • Reduce ambiguity in our environment names. []
    • Recreate the badge image, saving the .svg alongside it. []
    • Update our fiscal sponsors. [][][]
    • Tidy the weekly reports section on the news page [], fixup the typography on the documentation page [] and make all headlines stand out a bit more [].
    • Drop some old CSS files and fonts. []
    • Tidy news page a bit. []
    • Fixup a number of issues in the report template and previous reports. [][][][][][]

Holger Levsen also added explanations on how to install diffoscope on OpenBSD [] and FreeBSD [] to its homepage, and Arnout Engelen added a preliminary and work-in-progress idea for a badge or “shield” program for upstream projects. [][][].

A special thank you to Alexander Borkowski [], Georg Faerber [], and John Scott [] for their individual fixes. To err is human; to reproduce, divine.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Niko Tyni provided a patch to use the Perl Sub::Override library for some temporary workarounds for issues in Archive::Zip instead of Monkey::Patch which was due for deprecation. [].
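A typical invocation (with a hypothetical filename) normalises known sources of non-determinism, such as timestamps inside archives, in place:

# sketch with a hypothetical path: normalise archive timestamps and similar in place
strip-nondeterminism build/libfoo.jar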

In addition, Chris Lamb made the following changes:

  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. []
  • Support OpenJDK “.jmod” files. []
  • Pass --no-sandbox if necessary to bypass the seccomp-enabled version of file(1) which was causing a huge number of regressions in our testing framework.
  • Don’t just run the tests but build the Debian package instead using Salsa’s centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [][]
  • Update tests:
    • Don’t build release Git tags on salsa.debian.org. []
    • Merge the debian branch into the master branch to simplify testing and deployment [] and update debian/gbp.conf to match [].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. []


Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were performed in the last month:

  • Holger Levsen:
    • Debian-specific changes:
      • Make a large number of adjustments to support the new Debian bullseye distribution and the release of buster. [][][][][][][] [][][][]
      • Fix the colours for the five suites now being built. []
      • Make a number of code improvements to the calculation of our “metapackage” sets including refactoring and changes of email address, etc. [][][][][]
      • Add the “http-proxy” variable to the displayed node info. []
    • Alpine changes:
      • Rebuild the webpages every two hours (instead of twice per hour). []
    • Reproducible tooling:
      • Fix the detection of version number in Arch Linux. []
      • Drop reprotest and strip-nondeterminism jobs as we run that via Salsa CI now. [][]
      • Add a link to current SQL database schema. []
  • Mattia Rizzolo:
    • Make a number of adjustments to support the new Debian bullseye distribution. [][][][]
    • Ensure that our arm64 hosts always trust the Debian archive keyring. []
    • Enable the backports repositories on the arm64 build hosts. []

Holger Levsen [][][] and Mattia Rizzolo [][][] performed the usual node maintenance and lastly, Vagrant Cascadian added support to generate a reproducible-tracker.json metadata file for the next release of Debian (bullseye). []

On the mailing list

Chris Lamb cross-posted his reply to the “Re: file(1) now with seccomp support enabled” thread that was originally started on the debian-devel Debian list. In his post, he refers to strip-nondeterminism not being able to accommodate the additional security hardening in file(1) and the changes made to the tool in order to fix this issue, which was causing a huge number of regressions in our testing framework.

Matt Bearup wrote about his experience when he generated different checksums for the libgcrypt20 package, which resulted in some pointers that one should use the corresponding .buildinfo post-build certificate when attempting to reproduce any particular build.

Vagrant Cascadian posted a request for comments regarding a potential proposal to the GNU Tools “Cauldron” gathering to be held in Montréal, Canada during September 2019 and Bernhard M. Wiedemann posed a query about using consistent terms on our webpages and elsewhere.

Lastly, in a thread titled “Reproducible Builds - aiming for bullseye: comments and a purpose” Jathan asked about whether we had considered offering “101”-like beginner sessions to fix packages that are not currently reproducible.


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

This month’s report was written by Benjamin Hof, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Thorsten Alteholz: My Debian Activities in July 2019

5 August, 2019 - 01:30

FTP master

After the release of Buster I could start with real work in NEW again. Even the temperature could not hinder me from rejecting something. So this month I accepted 279 packages and rejected 15. The overall number of packages that got accepted was 308.

Debian LTS

This was my sixty-first month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.5h. During that time I did LTS uploads of:

  • [DLA 1849-1] zeromq3 security update for one CVE
  • [DLA 1833-2] bzip2 regression update for one patch
  • [DLA 1856-1] patch security update for one CVE
  • [DLA 1859-1] bind9 security update for one CVE
  • [DLA 1864-1] patch security update for one CVE

I am glad that I could finish the bind9 upload this month.
I also started to work on ruby-mini-magick and python2.7. Unfortunately, when building both packages (even without new patches), the test suite fails. So I first have to fix that as well.

Last but not least I did ten days of frontdesk duties. This was more than a week as everybody was at DebConf and I seemed to be the only one at home …

Debian ELTS

This month was the fourteenth ELTS month.

During my allocated time I uploaded:

  • ELA-132-2 of bzip2 for an upstream regression
  • ELA-144-1 of patch for one CVE
  • ELA-147-1 of patch for one CVE
  • ELA-148-1 of bind9 for one CVE

I also did some days of frontdesk duties.

Other stuff

This month I reuploaded some Go packages that would not migrate due to being binary uploads.

I also filed rm bugs to remove all alljoyn packages. Upstream is dead, no one is using this software anymore and bugs won’t be fixed.

Emmanuel Kasper: Debian 9 -> 10 Upgrade report

4 August, 2019 - 22:23
I upgraded my laptop and VPS to Debian 10, as usual in Debian everything worked out of the box, the necessary daemons restarted without problems.
I followed my usual upgrade approach, which involves upgrading a backup of the root FS of the server in a container, to test the upgrade path, followed by a config file merge.

I had one major problem, though, connecting to my php based Dokuwiki subsole.org website, which displayed a rather unwelcoming screen after the upgrade:




I was a bit unsure at first, as I thought I would need to fight my way through the nine different config files of the dokuwiki Debian package in /etc/dokuwiki.

However, the issue was not so complicated: as the apache2 php module was disabled, apache2 was outputting the source code of dokuwiki instead of executing it. As you see, I don't php that often.

A simple
a2enmod php7.3
systemctl restart apache2


fixed the issue.

I understood the problem after noticing that a simple phpinfo() would not get executed by the server.
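For anyone hitting the same symptom, a quick check is to drop a one-line test file into the web root and see whether Apache executes it or serves it back verbatim (a sketch, assuming the default Debian docroot):

# assumes the default /var/www/html docroot; remove the test file afterwards
echo '<?php phpinfo();' | sudo tee /var/www/html/info.php
curl -s http://localhost/info.php | head -n 3   # raw "<?php" output means the module is not enabled
sudo rm /var/www/html/info.php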

I would have expected the upgrade to automatically enable the new php7.3 module, since the oldstable php7.0 apache module was removed as part of the upgrade, but I am not sure what the Debian policy would recommend here, or if I am missing something else.
If I can reproduce the issue in an upgrade scenario, I'll probably submit a bug to the php package maintainers.

Debian GSoC Kotlin project blog: Packaging Dependencies Part 2; and plan on how to.

4 August, 2019 - 18:52
Mapping and packaging dependencies part 1.

Hey all, I had my exams during weeks 8 and 9 so I couldn't update my blog nor get much accomplished; but last week was completely free so I managed to finish packaging all the dependencies from packaging dependencies part 1. Since some of you may not remember how I planned to tackle packaging dependencies, I'll mention it here one more time.

"I split this task into two sub tasks that can be done independently. The 2 subtasks are as follows:
->part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
--->part1.1:package these dependencies
->part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build ; i.e try to recreate whatever is in it."

This is taken from my last blog, which was specifically on packaging dependencies in part 1. Now I am happy to tell all of you that packaging dependencies for part 1 is now complete and all the needed packages are either in the NEW queue or already in the sid archive as of 04 August 2019. I would like to thank ebourg, seamlik and andrewsh for helping me with this.

How to build kotlin 1.3.30 after dependency packaging part 1 and design choices.

Before I go into how to build the project as it is now I'll briefly talk of some of the choices I made while packaging dependencies in part 1 and general things you should know.

Two dependencies in part 1 were Jcabi-aether and sonatype-aether; both of these are incompatible with maven-3 and were only used in one single file in the entire dist task graph. Considering the time it would take to migrate these dependencies to maven-3, I chose to patch out the one file that needed both of them; that change is denoted by this [commit](https://salsa.debian.org/m36-guest/kotlin-1.3.30/commit/cb298ba550ca9f727ff66e4ffca0cb73e9ee03f1). Also, it must be noted that so far we are only trying to build the dist task, which only builds the basic kotlin compiler; it doesn't build the maven artifacts with poms, nor does it build the kotlin-gradle-plugin. Those are built and installed in the local maven repository (the .m2 directory in the source project when you invoke debuild) using the install task, which I am planning to tackle once we finish successfully building the dist task. Invoking the install task in our master as of Aug 04 2019 will build and install all available maven artifacts into the local maven repo, but this again will not include kotlin-gradle-plugin or such, since I have removed those subprojects as they aren't needed by the dist task. Keeping them would mean that I have to convert and patch them to groovy if they are written in .kts, since they are evaluated during the initialization phase.

Now we are ready to build the project. I have written a simple makefile which copies all the needed bootstrap jars and prebuilts to their proper places. All you need to build the project is

1.git clone https://salsa.debian.org/m36-guest/kotlin-1.3.30.git  
2.cd kotlin-1.3.30
3.git checkout buildv1  
4.debian/pseudoBootstrap bootstrap
5.debuild -b -rfakeroot -us -uc
Note that we need only do steps 1 through 4 the very first time you are building this project. Every time after that, just invoke step 5.
Packaging dependencies part 2.

Now packaging dependencies part 2 involves packaging the dependencies in :buildSrc:prepare-deps:intellij-sdk:build. This is the folder that is taking up the most space in Kotlin-1.3.30-temp-requirements. The sole purpose of this task is to reduce the jars in this folder and substitute them with jars from the Debian environment. I have managed to map out the needed jars from these for the dist task graph and they are:

```
saif@Hope:/srv/chroot/KotlinCh/home/kotlin/kotlin-1.3.30-debian-maintained/buildSrc/prepare-deps/intellij-sdk/repo/kotlin.build.custom.deps/183.5153.4$ ls -R
.:
intellij-core  intellij-core.ivy.xml  intellijUltimate  intellijUltimate.ivy.xml  jps-standalone  jps-standalone.ivy.xml

./intellij-core:
asm-all-7.0.jar  intellij-core.jar  java-compatibility-1.0.1.jar

./intellijUltimate:
lib

./intellijUltimate/lib:
asm-all-7.0.jar  guava-25.1-jre.jar  jna.jar           log4j.jar      openapi.jar    picocontainer-1.2.jar  platform-impl.jar   trove4j.jar
extensions.jar   jdom.jar            jna-platform.jar  lz4-1.3.0.jar  oro-2.0.8.jar  platform-api.jar       streamex-0.6.7.jar  util.jar

./jps-standalone:
jps-model.jar
```

This folder is treated as an ant repository and the code for that is here. Build.gradle files use this via methods like this, which tells the project to take only the needed jars from the collection. I am planning on replacing this with plain old maven repository resolution using a format like compile(groupID:artifactId:version), but we will need the jars to be in our system anyway; at least now we know that this particular file structure can be avoided.

Please note that these jars listed above by me are only needed for the dist task and the ones needed for other subprojects in the original install task can still be found here.

Like I did for packaging part 1, I will post all the needed packages with their source links here in this blog.

So if any of you kind souls want to help me out please kindly take on any of these and package them.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates weekly.

Mike Gabriel: MATE 1.22 landed in Debian unstable

4 August, 2019 - 17:55

Last week, I did a bundle upload of (nearly) all MATE 1.22 related components to Debian unstable. Packages should have been built by now for most of the 24 architectures supported by Debian (I just fixed an FTBFS of mate-settings-daemon on non-Linux host archs). The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Credits

Again a big thanks goes to the packaging team and also to the upstream maintainers of the MATE desktop environment. Martin Wimpress and I worked on most parts of the packaging for the 1.22 release series this time. On the upstream side, a big thanks goes to all developers, esp. Vlad Orlov and Wolfgang Ulbrich for fixing / reviewing many many issues / merge requests. Good work, folks!!! plus Big Thanks!!!

References


light+love,
Mike Gabriel (aka sunweaver)

Andy Simpkins: Debconf19: Curitiba, Brazil – AV Setup

4 August, 2019 - 01:37

I write this on Monday whilst sat in the airport in São Paulo awaiting my onward flight back to the UK, and the fun of the change of personnel in Downing Street, something I have fortunately been able to ignore whilst at DebConf.  [Edit: and finishing writing the Saturday after getting home after much sleep]

Arriving on the first Sunday of DebCamp meant that I was one of the first people to arrive; however most of the video team were either arriving about the same time or had landed before me.  We spent most of our daytime during DebCamp setting up for the following week's conference.

Step one was getting a network operational.  We had been offered space for our servers in a university machine room, but chose instead to occupy the two ‘green’ rooms below the main auditorium stage, using one as a makeshift V/NOC and the other as our machine room, as this gave us continuous and easy access [0] to our servers whilst separating us from the fan noise.  Running additional network cable between the back of the stage and our makeshift machine room, routing the cable around the back of the stage and into the ceiling void to just outside the V/NOC, was relatively simple.   Routing into the V/NOC needed a bit of help to get the cable through a small gap we found where some other cables ran through the ‘fire break’.  Getting a cable between the two ‘green rooms’ however was a PITA.  Many people, including myself, eventually gave up before I finally returned to the problem and, with the aid of a fully extended server rail gaffer-taped to a clothing rail to make a 4m long pole, I was eventually able to deliver a cable through the 3 floor supports / fire breaks that separated the two rooms (and before someone suggests I should have used a ‘fish’ wire, that was what we tried first).   The university were providing us with backbone network but it did take a couple of meetings to get our video network in its own separate VLAN and get it to pass traffic unmolested between nodes.

The final network setup (for video that is – the conference was piggy-backing on the university WiFi network and there was also a DebConf network in the National Inn) was to make live the fibre links that had been installed prior to our arrival.  Two links had been pulled through so that we could join the ‘Video Confrencia’ room and the ‘Front Desk’ to the rest of the university network; however when we came to commission them we discovered that the wrong media converters had been supplied: they should have been for single-mode fibre but multi-mode converters had been delivered.  Nothing that the university IT department couldn’t solve, and indeed they did as soon as we pointed out the mistake.  They provided us with replacement media converters capable of driving a signal down *both* single and multi-mode fibre, something I have not seen before.

For the rest of the week Paddatrapper and myself spent most of our time running cables and setting up the three talk rooms that were to be filmed.  Phls had been able to provide us with details of the venue’s AV system AND scale plans of the three talk rooms; this, along with the photos provided by the local team and Tumbleweed’s visit to the site, enabled us to plan the cable runs right down to the location of power sockets.

I am going to add scale plans & photos to the things that we request for all future DebConfs.  They made planning and setup so much easier and faster.  Of course we still ended up running more cables than we originally expected – we ran Ethernet front to back in all three rooms when we originally intended to only do this in Video Confrencia (the small BoF room); this was because it turned out that the sockets at different ends of the room were on differing packet switches that in turn fed into the university backbone.  We were informed that the backbone is 1Gb/s, which meant that the video LAN would have consumed the entire bandwidth of the backbone with nothing left over.

We have 200Mb/s streams from the OPSIS frame grabbers and a 2nd 200Mb/s output stream from each room.  That equates to exactly 1Gb/s (the Video Confrencia BoF room is small enough that we were always going to run a front/back cable), and that is before any backups of recordings to our server.  As it turns out that wasn’t true, but by then we had already run the cables and got things working…

I won’t blog about the software setup of the servers, our back-end CDN or the review process – this is not my area of expertise.  You need to thank Olasd, Tumbleweed & Ivo for the on-site system setup and Walter for the review process.  Actually there are also Carlfk, Ubec, Valhalla and I am sure numerous other people that I am too tired to remember; I apologise for forgetting you…

So back to physical setup.  The main auditorium was operational.  I had re-patched the mixing desk to give a setup as close as possible in all three talk rooms – we are most interested in audio for the stream/recording and so use the main mix output for this, and move the room PA onto a sub group output.  Unusually for a DebConf, I found that I needed to ensure that there *was* a ground connection at the desk for all output feeds – It appears that there was NO earth in the entire auditorium; well there was at some point back in time but had been systematically removed either by cutting off the earth pin on power plugs, or unfortunately for us, by cutting and removing cables from any bonding points, behind sockets etc.   Done, probably, because RCDs kept tripping and clearly the problem is that there is an earth present to leak into and not that there is a leak in the first place, or just long cable runs into inductive loads that mean that a different ‘trip curve’ needed to be selected <sigh>.

We still had significant mains hum on the PA system (slightly less than was present before I started room setup, so nothing I had done).  The venue AV team pointed out that they had a magnetic coupler AND an audio DSP unit in front of the PA amplifier stack – telling me that this was to reduce the hum.  Fortunately for us the venue had 4 equalisers that I could use, one for each of the mics, so I was able to knock out 60Hz, 120Hz and dip higher harmonics, and this again made an improvement.  Apparently we were getting the best results in living memory of the on-site AV team, so at this point I stopped tweaking the setup – “It was good enough”, we could live with the remaining hum.

The other two talk rooms were pretty much the same setup, only the rooms are smaller.  The exception being that whilst we do have a small portable PA in the Video Confrencia room, we only use it for audio from the presenter’s laptop – the room was so small there was no point in amplifying presenters…

Right, I could now move on to ‘lighting’.  We were supposed to use the flood ‘work’ lights above the stage, but quite a few of the halogen lamps were blown.  This meant that there were far too many ‘dark’ patches along the stage.  Additionally the colour temperatures of the different work lights were all over the place, and this would cause havoc with white balance; still, we could have lived with this…  I asked about getting the lamps replaced.  Initially I was told no, but once I pointed out the problem to a more senior member of staff they agreed that the lamps could be replaced and that it would be done the following day.  It wasn’t.  I offered that we could replace the lamps but was then told that they would now be doing this as part of a service in a few weeks’ time.  I was however told that instead, if I was prepared to rig them myself, we could use the stage lights on the dimmers.  Win!  This would have been my preferred option all along and I suspect we were only offered this having started to build a reasonable working relationship with the site AV team.  I was able to sign out a bunch of lamps from the stores and rig them as I saw fit.  I was given a large wooden step ladder, and shown how to access the catwalk.  I could now rig lights where I wanted them.

Two overhead floods and two spots were used to light the lectern from different angles.  Three overhead floods and three focused cans were used to light the discussion table.  I also hung two forward-facing spots to illuminate someone stood at the question mic, and finally 4 cans (2 focus cans and a pair of 1kW par cans sharing the same plug) to add some light to the front 5 or 6 rows of the audience.  The venue AV team repaired the DMX cable to the lighting dimmers and we were good to go…  well, just as soon as I had worked out the DMX addressing / cable patching at the dimmer banks, and then there was a short time whilst I read the instructions for the desk – enough to apply ‘soft patches’ so I could allocate a fader to each dimmer channel we were using.  I then read the instructions a bit further and came back the following day and programmed appropriate scenes so that the table could be lit using one ‘slider’, the lectern by another and so on.  JMW came back later in the week and updated the program again to add a timed fade up or down, and we also set a maximum level on the audience lights to stop us from ‘blinding’ people in the first couple of rows (we set the maximum value of that particular scene to be 20% available intensity).

Lighting in the mini auditorium was from simple overhead ‘domestic’ lamps; I still needed to get some bulbs replaced, and then move / adjust them to best light a speaker stood at the lectern or a discussion panel sat at the table.   Finally, we had no control of lighting in Video Confrencia (about normal for a DebConf).

Later in the week we revisited the hum problem again.  We confirmed that the hum was no longer being emitted out of the desk, so it must have been on the cable run to the stack or in the stack itself.  The hum was still annoying and Kyle wanted to confirm that the DSP at the top of the amp stack was correctly set up – could we improve things?  It took a little persuasion but eventually we were granted permission, and the password, to access the DSP.  The DSP had not been configured properly at all.  Kyle applied a 60Hz notch filter, and this made some difference.  I suggested a comb filter, which Kyle then applied for 60Hz and 5 or 6 orders of harmonics, and that did the trick (thanks Kyle – I wouldn’t have had a clue how to drive the DSP).  There was no longer any perceivable noise coming out of the left hand speakers, but there was still a noticeable, but much lower, hum from the right.  We removed the input cable to the amp stack and yes, the hum was still there, so we were picking up noise between the amps and the speakers!  A quick confirmation: turning off the lighting dimmers and the noise dropped again.  I started chasing the right hand speaker cables – they run up and over the stage along the catwalk, in the same bundle as all the unearthed lighting AND permanent power cables.  We were inducing mains noise directly onto the speaker cables.  The only fix for this would be to properly screen AND separate the speaker feed cables.  Better yet, send a balanced audio feed, separated from the power cables, to the right hand side of the stage and move the right hand amplifiers to that side of the stage.  Nothing we could do – but something that we could point out to the venue AV team, who, strangely, hadn’t considered this before…

 

 

[0] Where continuous access meant “whilst we had access to the site” (the whole campus is closed overnight)

Jonas Meurer: debian lts report 2019.07

3 August, 2019 - 22:44
Debian LTS report for July 2019

This month I was allocated 17 hours. I also had 2 hours left over from June, which makes a total of 19 hours. I spent all of them on the following tasks/issues.

  • DLA-1843-1: Fixed CVE-2019-10162 and CVE-2019-10163 in pdns.
  • DLA-1852-1: Fixed CVE-2019-9948 in python3.4. Also found, debugged and fixed several further regressions in the former CVE-2019-9740 patches.
  • Improved testing of LTS uploads: We had some internal discussion in the Debian LTS team on how to improve the overall quality of LTS security uploads by doing more (semi-)automated testing of the packages before uploading them to jessie-security. I tried to summarize the internal discussion, bringing it to the public debian-lts mailinglist. I also did a lot of testing and worked on Jessie support in Salsa-CI. Now that salsa-ci-team/images MR !74 and ci-team/debci MR !89 got merged, we only have to wait for a new debci release in order to enable autopkgtest Jessie support in Salsa-CI. Afterwards, we can use the Salsa-CI pipeline for (semi-)automatic testing of packages targeted at jessie-security.

Links

Dirk Eddelbuettel: RcppCCTZ 0.2.6

3 August, 2019 - 19:45

A shiny new release 0.2.6 of RcppCCTZ is now at CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages which remains less than ideal.

This version updates to CCTZ release 2.3 from April, plus changes accrued since then. It also switches to tinytest which, among other benefits, permits continued testing of the installed package.

Changes in version 0.2.6 (2019-08-03)
  • Synchronized with upstream CCTZ release 2.3 plus commits accrued since then (Dirk in #30).

  • The package now uses tinytest for unit tests (Dirk in #31).

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: New Debian Developers and Maintainers (May and June 2019)

3 August, 2019 - 15:00

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Philippe Mengual (jpmengual)
  • Taowa Munene-Tardif (taowa)
  • Georg Faerber (georg)
  • Kyle Robbertze (paddatrapper)
  • Andy Li (andyli)
  • Michal Arbet (kevko)
  • Sruthi Chandran (srud)
  • Alban Vidal (zordhak)
  • Denis Briand (denis)
  • Jakob Haufe (sur5r)

The following contributors were added as Debian Maintainers in the last two months:

  • Bobby de Vos
  • Jongmin Kim
  • Bastian Germann
  • Francesco Poli

Congratulations!

Elana Hashman: My favourite bash alias for git

3 August, 2019 - 11:00

I review a lot of code. A lot. And an important part of that process is getting to experiment with said code so I can make sure it actually works. As such, I find myself with a frequent need to locally run code from a submitted patch.

So how does one fetch that code? Long ago, when I was a new maintainer, I would add the remote repository I was reviewing to my local repo so I could fetch that whole fork and target branch. Once downloaded, I could play around with that on my local machine. But this was a lot of overhead! There was a lot of clicking, copying, and pasting involved in order to figure out the clone URL for the remote repo, and a bunch of commands to set it up. It felt like a lot of toil that could be easily automated, but I didn't know a better way.

One day, when a coworker of mine saw me struggling with this, he showed me the better way.

Turns out, most hosted git repos with pull request functionality will let you pull down a read-only version of the changeset from the upstream fork using git, meaning that you don't have to set up additional remote tracking to fetch and run the patch or use platform-specific HTTP APIs.

Using GitHub's git references for pull requests

I first learned how to do this on GitHub.

GitHub maintains a copy of pull requests against a particular repo at the pull/NUM/head reference. (More documentation on refs here.) This means that if you have set up a remote called origin and someone submits a pull request #123 against that repository, you can fetch the code by running

$ git fetch origin pull/123/head
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 4 (delta 3), reused 3 (delta 3), pack-reused 1
Unpacking objects: 100% (4/4), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

Woah.

Using pull request references for CI

As a quick aside: This is also handy if you want to write your own CI scripts against users' pull requests. Even better—on GitHub, you can fetch a tree with the pull request already merged onto the top of the current master branch by fetching pull/NUM/merge. (I'm not sure if this is officially documented somewhere, and I don't believe it's widely supported by other hosted git platforms.)
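
As a quick illustration, a minimal sketch assuming the same pull request #123 and a GitHub-hosted origin remote as above:

$ git fetch origin pull/123/merge
$ git checkout FETCH_HEAD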

If you also specify the --depth flag in your fetch command, you can fetch code even faster by limiting how much upstream history you download. It doesn't make much difference on small repos, but it is a big deal on large projects:

elana@silverpine:/tmp$ time git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Enumerating objects: 295, done.
remote: Counting objects: 100% (295/295), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 980446 (delta 148), reused 136 (delta 128), pack-reused 980151
Receiving objects: 100% (980446/980446), 648.95 MiB | 12.47 MiB/s, done.
Resolving deltas: 100% (686795/686795), done.
Checking out files: 100% (20279/20279), done.

real    1m31.035s
user    1m17.856s
sys     0m7.782s

elana@silverpine:/tmp$ time git clone --depth=10 https://github.com/kubernetes/kubernetes.git kubernetes-shallow
Cloning into 'kubernetes-shallow'...
remote: Enumerating objects: 34305, done.
remote: Counting objects: 100% (34305/34305), done.
remote: Compressing objects: 100% (22976/22976), done.
remote: Total 34305 (delta 17247), reused 19060 (delta 10567), pack-reused 0
Receiving objects: 100% (34305/34305), 34.22 MiB | 10.25 MiB/s, done.
Resolving deltas: 100% (17247/17247), done.

real    0m31.495s
user    0m3.941s
sys     0m1.228s

Writing the pull alias

So how can one harness all this as a bash alias? It takes just a little bit of code:

pull() {
    git fetch "$1" pull/"$2"/head && git checkout FETCH_HEAD
}

alias pull='pull'

Then I can check out a PR locally with the short command pull <remote> <num>:

$ pull origin 123
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 5 (delta 4), reused 4 (delta 4), pack-reused 1
Unpacking objects: 100% (5/5), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

You can even add your own commits, save them on a local branch, and push that to your collaborator's repository to build on their PR if you're so inclined... but let's not get too ahead of ourselves.

Changeset references on other git platforms

These pull request refs are not a special feature of git itself, but rather a per-platform implementation detail using an arbitrary git ref format. As far as I'm aware, most major git hosting platforms implement this, but they all use slightly different ref names.

GitLab

At my last job I needed to figure out how to make this work with GitLab in order to set up CI pipelines with our Jenkins instance. Debian's Salsa platform also runs GitLab.

GitLab calls user-submitted changesets "merge requests" and that language is reflected here:

git fetch origin merge-requests/NUM/head

They also have some nifty documentation for adding a git alias to fetch these references. They do so in a way that creates a local branch automatically, if that's something you'd like—personally, I check out so many patches that I would not be able to deal with cleaning up all the extra branch mess!
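
If you prefer the detached-HEAD workflow from above, here is a sketch of the same helper adapted to GitLab's ref names (the function name mr is just an example):

mr() {
    # fetch the head of merge request $2 from remote $1 and check it out
    git fetch "$1" merge-requests/"$2"/head && git checkout FETCH_HEAD
}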

BitBucket

Bad news: as of the time of publication, this isn't supported on bitbucket.org, even though a request for this feature has been open for seven years. (BitBucket Server supports this feature, but that's standalone and proprietary, so I won't bother including it in this post.)

Gitea

While I can't find any official documentation for it, I tested and confirmed that Gitea uses the same ref names for pull requests as GitHub, and thus you can use the same bash/git aliases on a Gitea repo as those you set up for GitHub.

Saved you a click?

Hope you found this guide handy. No more excuses: now that it's just one short command away, go forth and run your colleagues' code locally!

Sven Hoexter: From 30 to 230 docker container per host

2 August, 2019 - 21:44

I could not find much information on the interwebs about how many containers you can run per host. So here are mine, and the issues we ran into along the way.

The Beginning

In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled with HPe DL360G10 with 24 physical cores and 128GB of RAM.

THP and Defragmentation

When we moved to the dedicated bare metal hosts we were running Debian/stretch + Linux from stretch-backports, at that time Linux 4.17. These machines were sized to run 95+ containers. Once we were above 55 containers we started to see occasional hiccups. The first occurrences lasted only for 20s, then 2min, and suddenly some lasted for around 20min. Our system metrics, as collected by prometheus-node-exporter, could only provide vague hints. The metric export did work, so processes were being executed, but the CPU usage and subsequently the network throughput went down to close to zero.

I've seen similar hiccups in the past with PostgreSQL running on a host with THP (Transparent Huge Pages) enabled, so a good bet was to look into that area. By default /sys/kernel/mm/transparent_hugepage/enabled is set to always, so THP are enabled. We stuck with that, but changed the defrag mode /sys/kernel/mm/transparent_hugepage/defrag (available since Linux 4.12) from the default madvise to defer+madvise.

This moves page reclaims and compaction for pages which were not allocated with madvise to the background, which was enough to get rid of those hiccups. See also the upstream documentation. Since there is no sysctl like facility to adjust sysfs values, we're using the sysfsutils package to adjust this setting after every reboot.
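
A minimal sketch of what that could look like (the file name is an assumption; sysfsutils reads /etc/sysfs.conf and /etc/sysfs.d/*.conf, with attribute paths relative to /sys):

# /etc/sysfs.d/thp.conf -- hypothetical file name
kernel/mm/transparent_hugepage/defrag = defer+madvise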

Conntrack Table

Since the default docker networking setup involves a shitload of NAT, it shouldn't be surprising that nf_conntrack will start to drop packets at some point. We're currently fine with setting the sysctl tunable

net.netfilter.nf_conntrack_max = 524288

but that's very much up to your network setup and traffic characteristics.

Inotify Watches and Cadvisor

Along the way cadvisor refused to start at one point. Turned out that the default settings (again sysctl tunables) for

fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

are too low. We increased to

fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768

Ephemeral Ports

We didn't run into an issue with running out of ephemeral ports directly, but dockerd has a constant issue of keeping track of ports in use and we already see collisions appear regularly. Very unscientifically we set the sysctl

net.ipv4.ip_local_port_range = 11000 60999

NOFILE limits and Nomad

Initially we restricted nomad (via systemd) with

LimitNOFILE=65536

which apparently is not enough for our setup once we crossed the 100 containers per host mark. The error message we saw was hard to understand though:

[ERROR] client.alloc_runner.task_runner: prestart failed: alloc_id=93c6b94b-e122-30ba-7250-1050e0107f4d task=mycontainer error="prestart hook "logmon" failed: Unrecognized remote plugin message:

This was solved by following the official recommendation and setting

LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

The main lead here was looking into the "hashicorp/go-plugin" library source and understanding that it tries to read the stdout of some other process, which sounded roughly like something that would have to open a file at some point.
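
A sketch of how those limits could be applied as a systemd drop-in (the unit and file names are assumptions; adjust for however nomad is run on your hosts):

# /etc/systemd/system/nomad.service.d/limits.conf -- hypothetical drop-in
[Service]
LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

followed by systemctl daemon-reload and a restart of the nomad unit.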

Running out of PIDs

Once we were close to 200 containers per host (test environment with 256GB RAM per host), we started to experience failures of all kinds because processes could no longer be forked. Since that was also true for completely fresh user sessions, it was clear that we were hitting some global limitation and not something bound to the session via a PAM module.

It's important to understand that most of our workloads are written in Java, and a lot of the other software we use is written in go. So we've a lot of Threads, which in Linux are presented as "Lightweight Process" (LWP). So every LWP still exists with a distinct PID out of the global PID space.

With /proc/sys/kernel/pid_max defaulting to 32768 we actually ran out of PIDs. We increased that limit vastly, probably way beyond what we currently need, to 500000. The actual limit on 64bit systems is 2^22 according to man 5 proc.
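
For completeness, a sketch of persisting the sysctl tunables mentioned in this post in one place (the file name is hypothetical; the values are the ones quoted above):

# /etc/sysctl.d/90-containers.conf -- hypothetical file name
net.netfilter.nf_conntrack_max = 524288
fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768
net.ipv4.ip_local_port_range = 11000 60999
kernel.pid_max = 500000

Apply with sysctl --system or a reboot.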

Vincent Bernat: Securing BGP on the host with the RPKI

2 August, 2019 - 16:16

An increasingly popular design for a data-center network is BGP on the host: each host ships with a BGP daemon to advertise the IPs it handles and receives the routes to its fellow servers. Compared to a L2-based design, it is very scalable, resilient, cross-vendor and safe to operate.1 Take a look at “L3 routing to the hypervisor with BGP” for a usage example.

BGP on the host with a spine-leaf IP fabric. A BGP session is established over each link and each host advertises its own IP prefixes.

While routing on the host eliminates the security problems related to Ethernet networks, a server may announce any IP prefix. In the above picture, two of them are announcing 2001:db8:cc::/64. This could be a legit use of anycast or a prefix hijack. BGP offers several solutions to improve this aspect and one of them is to leverage the features around the RPKI infrastructure.

Short introduction to the RPKI

On the Internet, BGP is mostly relying on trust. This contributes to various incidents due to operator errors, like the one that affected Cloudflare a few months ago, or to malicious attackers, like the hijack of Amazon DNS to steal cryptocurrency wallets. RFC 7454 explains the best practices to avoid such issues.

IP addresses are allocated by five Regional Internet Registries (RIR). Each of them maintains a database of the assigned Internet resources, notably the IP addresses and the associated AS numbers. These databases may not be totally reliable but are widely used to build ACLs to ensure peers only announce the prefixes they are expected to. Here is an example of ACLs generated by bgpq3 when peering directly with Apple:2

$ bgpq3 -l v6-IMPORT-APPLE -6 -R 48 -m 48 -A -J -E AS-APPLE
policy-options {
 policy-statement v6-IMPORT-APPLE {
replace:
  from {
    route-filter 2403:300::/32 upto /48;
    route-filter 2620:0:1b00::/47 prefix-length-range /48-/48;
    route-filter 2620:0:1b02::/48 exact;
    route-filter 2620:0:1b04::/47 prefix-length-range /48-/48;
    route-filter 2620:149::/32 upto /48;
    route-filter 2a01:b740::/32 upto /48;
    route-filter 2a01:b747::/32 upto /48;
  }
 }
}

The RPKI (RFC 6480) adds public-key cryptography on top of it to sign the authorization for an AS to be the origin of an IP prefix. Such record is a Route Origination Authorization (ROA). You can browse the databases of these ROAs through the RIPE’s RPKI Validator instance:

RPKI validator shows one ROA for 85.190.88.0/21

BGP daemons do not have to download the databases or to check digital signatures to validate the received prefixes. Instead, they offload these tasks to a local RPKI validator implementing the “RPKI-to-Router Protocol” (RTR, RFC 6810).

For more details, have a look at “RPKI and BGP: our path to securing Internet Routing.”

Using the RPKI in the datacenter

While it is possible to setup our own RPKI for use inside the datacenter, we can take a shortcut and use a validator implementing RTR, like GoRTR, and accepting another source of truth. Let’s work on the following topology:

BGP on the host with prefix validation using RTR. Each server has its own AS number. The leaf routers establish RTR sessions to the validators.

You need a place to maintain a mapping between private AS numbers and the allowed prefixes:3

ASN       Allowed prefixes
AS 65005  2001:db8:aa::/64
AS 65006  2001:db8:bb::/64, 2001:db8:11::/64
AS 65007  2001:db8:cc::/64
AS 65008  2001:db8:dd::/64
AS 65009  2001:db8:ee::/64, 2001:db8:11::/64
AS 65010  2001:db8:ff::/64

From this table, we build a JSON file for GoRTR, assuming each host can announce the provided prefixes or longer ones (like 2001:db8:aa::42:d9ff:fefc:287a/128 for AS 65005):

{
  "roas": [
    {
      "prefix": "2001:db8:aa::/64",
      "maxLength": 128,
      "asn": "AS65005"
    }, {
      "…": "…"
    }, {
      "prefix": "2001:db8:ff::/64",
      "maxLength": 128,
      "asn": "AS65010"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65006"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65009"
    }
  ]
}

This file is deployed to all validators and served by a web server. GoRTR is configured to fetch it and update it every 10 minutes:

$ gortr -refresh=600 \
        -verify=false -checktime=false \
        -cache=http://127.0.0.1/rpki.json
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

The refresh time could be lowered but GoRTR can be notified of an update using the SIGHUP signal. Clients are immediately notified of the change.
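
For example, a minimal sketch of triggering that reload (assuming gortr runs as a single process under that name):

$ kill -HUP "$(pidof gortr)"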

The next step is to configure the leaf routers to validate the received prefixes using the farm of validators. Most vendors support RTR:

Platform       Over TCP?  Over SSH?
Juniper JunOS  ✔️          ❌
Cisco IOS XR   ✔️          ✔️
Cisco IOS XE   ✔️          ❌
Cisco IOS      ✔️          ❌
Arista EOS     ❌          ❌
BIRD           ✔️          ✔️
FRR            ✔️          ✔️

Configuring JunOS

JunOS only supports plain-text TCP. First, let’s configure the connections to the validation servers:

routing-options {
    validation {
        group RPKI {
            session validator1 {
                hold-time 60;         # session is considered down after 1 minute
                record-lifetime 3600; # cache is kept for 1 hour
                refresh-time 30;      # cache is refreshed every 30 seconds
                port 8282;
            }
            session validator2 { /* OMITTED */ }
            session validator3 { /* OMITTED */ }
        }
    }
}

By default, at most two sessions are randomly established at the same time. This provides a good way to load-balance them among the validators while maintaining good availability. The second step is to define the policy for route validation:

policy-options {
    policy-statement ACCEPT-VALID {
        term valid {
            from {
                protocol bgp;
                validation-database valid;
            }
            then {
                validation-state valid;
                accept;
            }
        }
        term invalid {
            from {
                protocol bgp;
                validation-database invalid;
            }
            then {
                validation-state invalid;
                reject;
            }
        }
    }
    policy-statement REJECT-ALL {
        then reject;
    }
}

The policy statement ACCEPT-VALID turns the validation state of a prefix from unknown to valid if the ROA database says it is valid. It also accepts the route. If the prefix is invalid, the prefix is marked as such and rejected. We have also prepared a REJECT-ALL statement to reject everything else, notably unknown prefixes.

A ROA only certifies the origin of a prefix. A malicious actor can therefore prepend the expected AS number to the AS path to circumvent the validation. For example, AS 65007 could announce 2001:db8:bb::/64, a prefix allocated to AS 65006, by advertising it with the AS path 65007 65006. To avoid that, we define an additional policy statement to reject AS paths with more than one AS:

policy-options {
    as-path EXACTLY-ONE-ASN "^.$";
    policy-statement ONLY-DIRECTLY-CONNECTED {
        term exactly-one-asn {
            from {
                protocol bgp;
                as-path EXACTLY-ONE-ASN;
            }
            then next policy;
        }
        then reject;
    }
}

The last step is to configure the BGP sessions:

protocols {
    bgp {
        group HOSTS {
            local-as 65100;
            type external;
            # export [ … ];
            import [ ONLY-DIRECTLY-CONNECTED ACCEPT-VALID REJECT-ALL ];
            enforce-first-as;
            neighbor 2001:db8:42::a10 {
                peer-as 65005;
            }
            neighbor 2001:db8:42::a12 {
                peer-as 65006;
            }
            neighbor 2001:db8:42::a14 {
                peer-as 65007;
            }
        }
    }
}

The import policy rejects any AS path longer than one AS, accepts any validated prefix and rejects everything else. The enforce-first-as directive is also pretty important: it ensures the first (and, here, only) AS in the AS path matches the peer AS. Without it, a malicious neighbor could inject a prefix using an AS different than its own, defeating our purpose.4

Let’s check the state of the RTR sessions and the database:

> show validation session
Session                                  State   Flaps     Uptime #IPv4/IPv6 records
2001:db8:4242::10                        Up          0   00:16:09 0/9
2001:db8:4242::11                        Up          0   00:16:07 0/9
2001:db8:4242::12                        Connect     0            0/0

> show validation database
RV database for instance master

Prefix                 Origin-AS Session                                 State   Mismatch
2001:db8:11::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::10                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::11                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::10                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::11                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::10                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::11                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::10                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::11                       valid

  IPv4 records: 0
  IPv6 records: 18

Here is an example of accepted route:

> show route protocol bgp table inet6 extensive all
inet6.0: 11 destinations, 11 routes (8 active, 0 holddown, 3 hidden)
2001:db8:bb::42/128 (1 entry, 0 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd050470
                Next-hop reference count: 4
                Source: 2001:db8:42::a12
                Next hop: 2001:db8:42::a12 via em1.0, selected
                Session Id: 0x0
                State: <Active NotInstall Ext>
                Local AS: 65006 Peer AS: 65000
                Age: 12:11
                Validation State: valid
                Task: BGP_65000.2001:db8:42::a12+179
                AS path: 65006 I
                Accepted
                Localpref: 100
                Router ID: 1.1.1.1

A rejected route would be similar with the reason “rejected by import policy” shown in the details and the validation state would be invalid.

Configuring BIRD

BIRD supports both plain-text TCP and SSH. Let’s configure it to use SSH. We need to generate keypairs for both the leaf router and the validators (they can all share the same keypair). We also have to create a known_hosts file for BIRD:

(validatorX)$ ssh-keygen -qN "" -t rsa -f /etc/gortr/ssh_key
(validatorX)$ echo -n "validatorX:8283 " ; \
              cat /etc/gortr/ssh_key.pub
validatorX:8283 ssh-rsa AAAAB3[…]Rk5TW0=
(leaf1)$ ssh-keygen -qN "" -t rsa -f /etc/bird/ssh_key
(leaf1)$ echo 'validator1:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ echo 'validator2:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ cat /etc/bird/ssh_key.pub
ssh-rsa AAAAB3[…]byQ7s=
(validatorX)$ echo 'ssh-rsa AAAAB3[…]byQ7s=' >> /etc/gortr/authorized_keys

GoRTR needs additional flags to allow connections over SSH:

$ gortr -refresh=600 -verify=false -checktime=false \
      -cache=http://127.0.0.1/rpki.json \
      -ssh.bind=:8283 \
      -ssh.key=/etc/gortr/ssh_key \
      -ssh.method.key=true \
      -ssh.auth.user=rpki \
      -ssh.auth.key.file=/etc/gortr/authorized_keys
INFO[0000] Enabling ssh with the following authentications: password=false, key=true
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

Then, we can configure BIRD to use these RTR servers:

roa6 table ROA6;
template rpki VALIDATOR {
   roa6 { table ROA6; };
   transport ssh {
     user "rpki";
     remote public key "/etc/bird/known_hosts";
     bird private key "/etc/bird/ssh_key";
   };
   refresh keep 30;
   retry keep 30;
   expire keep 3600;
}
protocol rpki VALIDATOR1 from VALIDATOR {
   remote validator1 port 8283;
}
protocol rpki VALIDATOR2 from VALIDATOR {
   remote validator2 port 8283;
}

Unlike JunOS, BIRD doesn’t have a feature to only use a subset of validators. Therefore, we only configure two of them. As a safety measure, if both connections become unavailable, BIRD will keep the ROAs for one hour.

We can query the state of the RTR sessions and the database:

> show protocols all VALIDATOR1
Name       Proto      Table      State  Since         Info
VALIDATOR1 RPKI       ---        up     17:28:56.321  Established
  Cache server:     rpki@validator1:8283
  Status:           Established
  Transport:        SSHv2
  Protocol version: 1
  Session ID:       0
  Serial number:    1
  Last update:      before 25.212 s
  Refresh timer   : 4.787/30
  Retry timer     : ---
  Expire timer    : 3574.787/3600
  No roa4 channel
  Channel roa6
    State:          UP
    Table:          ROA6
    Preference:     100
    Input filter:   ACCEPT
    Output filter:  REJECT
    Routes:         9 imported, 0 exported, 9 preferred
    Route change stats:     received   rejected   filtered    ignored   accepted
      Import updates:              9          0          0          0          9
      Import withdraws:            0          0        ---          0          0
      Export updates:              0          0          0        ---          0
      Export withdraws:            0        ---        ---        ---          0

> show route table ROA6
Table ROA6:
    2001:db8:11::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:11::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:aa::/64-128 AS65005  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:bb::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:cc::/64-128 AS65007  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:dd::/64-128 AS65008  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ee::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ff::/64-128 AS65010  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)

Like in the JunOS case, a malicious actor could try to work around the validation by building an AS path where the last AS number is the legitimate one. BIRD is flexible enough to allow us to use any AS to check the IP prefix. Instead of checking the origin AS, we ask it to check the peer AS with this function, without looking at the AS path:

function validated(int peeras) {
   if (roa_check(ROA6, net, peeras) != ROA_VALID) then {
      print "Ignore invalid ROA ", net, " for ASN ", peeras;
      reject;
   }
   accept;
}

The BGP instance is then configured to use the above function as the import policy:

protocol bgp PEER1 {
   local as 65100;
   neighbor 2001:db8:42::a10 as 65005;
   ipv6 {
      import keep filtered;
      import where validated(65005);
      # export …;
   };
}

You can view the rejected routes with show route filtered, but BIRD does not store information about the validation state in the routes. You can also watch the logs:

2019-07-31 17:29:08.491 <INFO> Ignore invalid ROA 2001:db8:bb::40:/126 for ASN 65005

Currently, BIRD does not reevaluate the prefixes when the ROAs are updated. There is work in progress to fix this. If this feature is important to you, have a look at FRR instead: it also supports the RTR protocol and triggers a soft reconfiguration of the BGP sessions when ROAs are updated.

  1. Notably, the data flow and the control plane are separated. A node can remove itself by notifying its peers without losing a single packet. ↩︎

  2. People often use AS sets, like AS-APPLE in this example, as they are convenient if you have multiple AS numbers or customers. However, there is currently nothing preventing a rogue actor from adding arbitrary AS numbers to their AS set. ↩︎

  3. We are using 16-bit AS numbers for readability. Because we need to assign a different AS number for each host in the datacenter, in an actual deployment, we would use 32-bit AS numbers. ↩︎

  4. Cisco routers and FRR enforce the first AS by default. It is a tunable value to allow the use of route servers: they distribute prefixes on behalf of other routers. ↩︎

Junichi Uekawa: Started wanting to move stuff to docker.

2 August, 2019 - 11:55
Started wanting to move stuff to docker. Especially around build systems. If things are mutable they will go bad and fixing them is annoying.

Mike Gabriel: My Work on Debian LTS/ELTS (July 2019)

2 August, 2019 - 01:24

In July 2019, I have worked on the Debian LTS project for 15.75 hours (of 18.5 hours planned) and on the Debian ELTS project for another 12 hours (as planned) as a paid contributor.

LTS Work
  • Upload to jessie-security: libssh2 (DLA 1730-3) [1]
  • Upload to jessie-security: libssh2 (DLA 1730-4) [2]
  • Upload to jessie-security: glib2.0 (DLA 1866-1) [3]
  • Upload to jessie-security: wpa (DLA 1867-1) [4]

The Debian Security package archive only has arch-any buildds attached, so source packages that build at least one arch-all bin:pkg must include the arch-all DEBs from a local build. So, ideally, we upload source + arch-all builds and leave the arch-any builds to the buildds. However, this seems to be problematic when doing the builds using sbuild. So, I spent a little time on...

  • sbuild: Try to understand the mechanism of building an arch-all + source package (i.e. omit arch-any uploads)... Unfortunately, there is no "-g" option (like in dpkg-buildpackage; see the sketch below). Neither does the parameter combination "--source --arch-all --no-arch-any" result in a source + arch-all build. More investigation / communication with the developers of sbuild is required here. To be continued...
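
For reference, a sketch of the dpkg-buildpackage invocation alluded to above (not an sbuild solution, just the behaviour we would like sbuild to offer):

$ dpkg-buildpackage -g    # equivalent to --build=source,all: source + arch-indep packages only
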
ELTS Work
  • Upload to wheezy-lts: freetype (ELA 149-1) [5]
  • Upload to wheezy-lts: libssh2 (ELA 99-3) [6]

References

Gunnar Wolf: Goodbye, pgp.gwolf.org

1 August, 2019 - 22:25

I started running an SKS keyserver a couple of years ago (don't really remember, but I think it was around 2014). I am, as you probably expect me to be given my lines of work, a believer of the Web-of-Trust model upon which the PGP network is built. I have published a couple of academic papers (Strengthening a Curated Web of Trust in a Geographically Distributed Project, with Gina Gallegos, Cryptologia 2016, and Insights on the large-scale deployment of a curated Web-of-Trust: the Debian project’s cryptographic keyring, with Victor González Quiroga, Journal of Internet Services and Applications, 2018) and presented several conferences regarding some aspects of it, mainly in relation to the Debian project.

Even in light of the recent flooding attacks (more info by dkg, Daniel Lange, Michael Altfield and others available; GnuPG task tracker), I still believe in the model. But I have had enough of the implementation's brittleness. I don't know how much to blame SKS and how much to blame myself, but I cannot devote more time to fiddling around trying to get it to work as it should — I was providing an unstable service. Besides, this year I already had to rebuild the database three times due to it getting corrupted... And yesterday I just could not get past segfaults when importing.

So, I have taken the unhappy decision to shut down my service. I have contacted both the SKS mailing list and the servers I was peering with. Due to the narrow scope of a single SKS server, possibly this post is not needed... But it won't hurt, so here it goes.
