Planet Debian

Planet Debian - http://planet.debian.org/

Chris Lamb: Free software activities in September 2017

1 October, 2017 - 01:31

Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):

  • Submitted a pull request to Quadrapassel (the Gnome version of Tetris) to start a new game when the pause button is pressed outside of a game. This means you would no longer have to use the mouse to start a new game. [...]
  • Made a large number of improvements to AptFS — my FUSE-based filesystem that provides a view on unpacked Debian source packages as regular folders — including moving away from manual parsing of package lists [...] and numerous code tidying/refactoring changes.
  • Sent a small patch to django-sitetree, a Django library for menu and breadcrumb navigation elements, so that it does not mask test exit codes from the surrounding shell. [...]
  • Updated travis.debian.net, my hosted service that lets projects hosting their Debian packaging on GitHub use the Travis CI continuous integration platform to test builds:
    • Add support for "sloppy" backports. Thanks to Bernd Zeimetz for the idea and ongoing testing. [...]
    • Merged a pull request from James McCoy to pass DEB_BUILD_PROFILES through to the build. [...]
    • Work around Travis CI's HTTP proxy, which does not appear to support SRV records. [...]
    • Run debc from devscripts if the build was successful [...] and output the .buildinfo file if it exists [...].
  • Fixed a few issues in local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the debconf configuration tool:
    • Fix an issue where file permissions from the remote could result in a local archive that was impossible to access. [...]
    • Clear out empty directories on the local repository. [...]
  • Updated django-staticfiles-dotd, my Django staticfiles adaptor to concatenate static media in .d-style directories, to support Python 3.x by using bytes objects (commit) and move away from monkeypatch as it does not have a Python 3.x port yet (commit).
  • I also posted a short essay to my blog entitled "Ask the Dumb Questions" as well as provided an update on the latest Lintian release.
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Published a short blog post about how to determine which packages on your system are reproducible. [...]
  • Submitted a pull request for Numpy to make the generated config.py files reproducible. [...]
  • Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
  • Within Debian:
    • Updated isdebianreproducibleyet.com, moving it to HTTPS, adding cachebusting as well as keeping the number up-to-date.
    • Submitted the following patches to fix reproducibility-related toolchain issues:
      • gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
      • texlive-bin: Make PDF IDs reproducible. (#874102)
    • Submitted a patch to fix a reproducibility issue in doit.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Chaired our monthly IRC meeting. [...]
  • Worked on publishing our weekly reports. (#123, #124, #125, #126 & #127)


I also made the following changes to our tooling:

reproducible-check

reproducible-check is our script to determine which of the packages actually installed on your system are reproducible.

  • Handle multi-architecture systems correctly. (#875887)
  • Use the "restricted" data file to mask transient issues. (#875861)
  • Expire the cache file after one day and base the local cache filename on the remote name. [...] [...]

I also blogged about this utility. [...]

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
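
As a quick illustration of basic usage (a sketch; the package file names are placeholders), diffoscope takes two files or archives to compare and can emit its report in several formats:

$ diffoscope package_1.0-1_amd64.deb package_1.0-1_amd64.rebuild.deb
$ diffoscope --html report.html package_1.0-1_amd64.deb package_1.0-1_amd64.rebuild.deb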

  • Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
  • New features:
    • Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
    • Print a message if we are reading data from standard input. [...]
  • Bug fixes:
    • Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
    • Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
  • Testing:
    • Make failing some critical flake8 tests result in a failed build. [...]
    • Check we identify all CPIO fixtures. [...]
  • Misc:
    • No need for try-assert-except block in setup.py. [...]
    • Compare types with identity not equality. [...] [...]
    • Use logging.py's lazy argument interpolation. [...]
    • Remove unused imports. [...]
    • Numerous PEP8, flake8, whitespace, other cosmetic tidy-ups.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Log which handler processed a file. (#876140). [...]
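
For illustration, the tool can also be invoked directly on a single file (the path below is a placeholder); within Debian package builds it normally runs automatically via the dh_strip_nondeterminism debhelper addon:

$ strip-nondeterminism build/libexample.jar   # placeholder path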

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.
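
A minimal usage sketch (directory names are placeholders): once mounted, reading the mountpoint returns directory entries in a random order, so a build run against it will expose any hidden ordering assumptions:

$ mkdir rootdir mountpoint
$ disorderfs --shuffle-dirents=yes rootdir mountpoint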



Debian

My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

Lintian

I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers.

I also blogged specifically about the Lintian 2.5.54 release.

Patches contributed
  • debconf: Please add a context manager to debconf.py. (#877096)
  • nm.debian.org: Add pronouns to ALL_STATUS_DESC. (#875128)
  • user-setup: Please drop set_special_users hack added for "the convenience of heavy testers". (#875909)
  • postgresql-common: Please update README.Debian for PostgreSQL 10. (#876438)
  • django-sitetree: Should not mask test failures. (#877321)
  • charmtimetracker:
    • Missing binary dependency on libqt5sql5-sqlite. (#873918)
    • Please drop "Cross-Platform" from package description. (#873917)

I also submitted 5 patches for packages with incorrect calls to find(1) in debian/rules against hamster-applet, libkml, pyferret, python-gssapi & roundcube.

Debian LTS

This month I have been paid to work 15¾ hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Documented an example usage of autopkgtests to test security changes.
  • Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
  • Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
  • Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
  • Issued DLA 1096-1 for wordpress-shibboleth, correcting a cross-site scripting vulnerability in the Shibboleth identity provider module.
Uploads
  • python-django:
    • 1.11.5-1 — New upstream security release. (#874415)
    • 1.11.5-2 — Apply upstream patch to fix QuerySet.defer() with "super" and "subclass" fields. (#876816)
    • 2.0~alpha1-2 — New upstream alpha release of Django 2.0, dropping support for Python 2.x.
  • redis:
    • 4.0.2-1 — New upstream release.
    • 4.0.2-2 — Update 0004-redis-check-rdb autopkgtest test to ensure that the redis.rdb file exists before testing against it.
    • 4.0.2-2~bpo9+1 — Upload to stretch-backports.
  • aptfs (0.11.0-1) — New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
  • lintian (2.5.53, 2.5.54) — New upstream releases. (Documented in more detail above.)
  • bfs (1.1.2-1) — New upstream release.
  • docbook-to-man (1:2.0.0-39) — Tighten autopkgtests and enable testing via travis.debian.net.
  • python-daiquiri (1.3.0-1) — New upstream release.

I also made the following non-maintainer uploads (NMUs):

  • vimoutliner (0.3.4+pristine-9.3):
    • Make the build reproducible. (#776369)
    • Expand placeholders in Debian.README. (#575142, #725634)
    • Recommend that the ftplugin is enabled. (#603115)
    • Correct "is not enable" typo.
  • bittornado (0.3.18-10.3):
    • Make the build reproducible. (#796212).
    • Add missing Build-Depends on dh-python.
  • dtc-xen (0.5.17-1.1):
    • Make the build reproducible. (#777322)
    • Add missing Build-Depends on dh-python.
  • dict-gazetteer2k (1.0.0-5.4):
    • Make the build reproducible. (#776376).
    • Override empty-binary-package Lintian warning to avoid dak autoreject.
  • cgilib (0.6-1.1) — Make the build reproducible. (#776935)
  • dhcping (1.2-4.2) — Make the build reproducible. (#777320)
  • dict-moby-thesaurus (1.0-6.4) — Make the build reproducible. (#776375)
  • dtaus (0.9-1.1) — Make the build reproducible. (#777321)
  • fastforward (1:0.51-3.2) — Make the build reproducible. (#776972)
  • wily (0.13.41-7.3) — Make the build reproducible. (#777360)
Debian bugs filed
  • clipit: Please choose a sensible startup default in "live" mode. (#875903)
  • git-buildpackage: Please add a --reset option to gbp pull. (#875852)
  • bluez: Please default Device "friendly name" to hostname without domain. (#874094)
  • bugs.debian.org: Please explicitly link to {packages,tracker}.debian.org. (#876746)
  • Requests for packaging:
    • selfspy — log everything you do on the computer. (#873955)
    • shoogle — use the Google API from the shell. (#873916)
FTP Team

As a Debian FTP assistant I ACCEPTed 86 packages: bgw-replstatus, build-essential, caja-admin, caja-rename, calamares, cdiff, cockpit, colorized-logs, comptext, comptty, copyq, django-allauth, django-paintstore, django-q, django-test-without-migrations, docker-runc, emacs-db, emacs-uuid, esxml, fast5, flake8-docstrings, gcc-6-doc, gcc-7-doc, gcc-8, golang-github-go-logfmt-logfmt, golang-github-google-go-cmp, golang-github-nightlyone-lockfile, golang-github-oklog-ulid, golang-pault-go-macchanger, h2o, inhomog, ip4r, ldc, libayatana-appindicator, libbson-perl, libencoding-fixlatin-perl, libfile-monitor-lite-perl, libhtml-restrict-perl, libmojo-rabbitmq-client-perl, libmoosex-types-laxnum-perl, libparse-mime-perl, libplack-test-agent-perl, libpod-projectdocs-perl, libregexp-pattern-license-perl, libstring-trim-perl, libtext-simpletable-autowidth-perl, libvirt, linux, mac-fdisk, myspell-sq, node-coveralls, node-module-deps, nov-el, owncloud-client, pantomime-clojure, pg-dirtyread, pgfincore, pgpool2, pgsql-asn1oid, phpliteadmin, powerlevel9k, pyjokes, python-evdev, python-oslo.db, python-pygal, python-wsaccel, python3.7, r-cran-bindrcpp, r-cran-dotcall64, r-cran-glue, r-cran-gtable, r-cran-pkgconfig, r-cran-rlang, r-cran-spatstat.utils, resolvconf-admin, retro-gtk, ring-ssl-clojure, robot-detection, rpy2-2.8, ruby-hocon, sass-stylesheets-compass, selinux-dbus, selinux-python, statsmodels, webkit2-sharp & weston.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: comptext, comptty, ldc & python-oslo.concurrency.

Hideki Yamane: MIRROR DISK USAGE: growing

30 September, 2017 - 18:20
One year later: mirror disk usage is growing
I'll prepare to replace the whole system at the end of this year.

Iain R. Learmonth: Breaking RSS Change in Hugo

30 September, 2017 - 17:45

My website and blog are managed by the static site generator Hugo. I’ve found this to be a stable and flexible system, but at the last upgrade a breaking change occurred that broke the syndication of my blog on various planets.

At first I thought that perhaps, with my increased posting rate, the planets were truncating my posts, but this was not the case. The problem was in Hugo pull request #3129, where for some reason they have changed the RSS feed to contain only a “lead” instead of the full article.

I’ve seen other content management systems offer a similar option, but at least they point out that it’s truncated and offer a “read more” link. Here it just looks like I’m publishing truncated, unfinished, really short posts.

If you take a look at the post above, you’ll see that the change is in an embedded template, and it took a little reading of the docs to work out how to revert the change. The steps are actually not that difficult, but it’s still annoying that the change occurred.

In a Hugo site, you will have a layouts directory that will contain your overrides from your theme. Create a new file in the path layouts/_default/rss.xml (you may need to create the _default directory) with the following content:

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ if eq  .Title  .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }}</title>
    <link>{{ .Permalink }}</link>
    <description>Recent content {{ if ne  .Title  .Site.Title }}{{ with .Title }}in {{.}} {{ end }}{{ end }}on {{ .Site.Title }}</description>
    <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
    <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
    <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
    <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
    <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
    <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
    {{ with .OutputFormats.Get "RSS" }}
        {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
    {{ end }}
    {{ range .Data.Pages }}
    <item>
      <title>{{ .Title }}</title>
      <link>{{ .Permalink }}</link>
      <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
      {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
      <guid>{{ .Permalink }}</guid>
      <description>{{ .Content | html }}</description>
    </item>
    {{ end }}
  </channel>
</rss>

If you like my new Hugo theme, please let me know and I’ll bump tidying it up and publishing it further up my todo list.

Arturo Borrero González: Installing spotify-client in Debian testing (Buster)

30 September, 2017 - 16:51

Similar to the problem described in the post Google Hangouts in Debian testing (Buster), the Spotify application for Debian (a package called spotify-client) is not ready to run in Debian testing (Buster) as is.

In this particular case, it seems there is only one problem, and it is related to openssl/libssl. The spotify-client package requires libssl1.0.0, while in Debian testing (Buster) we have an updated libssl1.1.

Fortunately, this is rather easy to solve, given the few additional dependencies of both spotify-client and libssl1.0.0.

What we will do is to install libssl1.0.0 from jessie-backports, coexisting with libssl1.1.

Simple steps:

  • 1) add jessie-backports repository to your /etc/apt/sources.list file:
    deb http://httpredir.debian.org/debian/ jessie-backports main

  • 2) update your repo database:
    % user@debian:~ $ sudo aptitude update
    
  • 3) verify we have both libssl1.1 and libssl1.0.0 ready to install:
    % user@debian:~ $ aptitude search libssl
    [...]
    p   libssl1.0.0       - Secure Sockets Layer toolkit - shared libraries                                       
    i   libssl1.1         - Secure Sockets Layer toolkit - shared libraries
    [...]
    
  • 4) Follow steps by Spotify to install the spotify-client package:
    https://www.spotify.com/uk/download/linux/

  • 5) Run it and enjoy your music!

  • 6) You can cleanup the jessie-backports line from /etc/apt/sources.list.
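
If step 4 does not pull in libssl1.0.0 automatically, it can be installed explicitly from the backports repository first (this extra step should not normally be needed, and is not part of Spotify's instructions):

% user@debian:~ $ sudo aptitude install -t jessie-backports libssl1.0.0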


Bonus point: Why jessie-backports?? Well, according to the openssl package tracker, jessie-backports contains the most recent version of the libssl1.0.0 package.

BTW, thanks to the openssl Debian maintainers, their work is really appreciated :-) And thanks to Spotify for providing a Debian package :-)

Enrico Zini: Systemd socket units

30 September, 2017 - 05:00

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.socket units

Socket units tell systemd to listen on a given IPC, network socket, or file system FIFO, and use another unit to service requests to it.

For example, this creates a network service that listens on port 55555:

# /etc/systemd/system/ddate.socket
[Unit]
Description=ddate service on port 55555

[Socket]
ListenStream=55555
Accept=true

[Install]
WantedBy=sockets.target
# /etc/systemd/system/ddate@.service
[Unit]
Description=Run ddate as a network service

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/ddate; sleep 1m; done'
StandardOutput=socket
StandardError=journal

Note that the .service file is called ddate@ instead of ddate: units whose name ends in '@' are template units which can be activated multiple times, by adding any string after the '@' in the unit name.
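
To try this out (a minimal sketch), reload systemd and enable the socket unit; instances of the service are then spawned on demand:

# systemctl daemon-reload
# systemctl enable --now ddate.socket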

If I run nc localhost 55555 a couple of times, and then check the list of running units, I see ddate@… instantiated twice, adding the local and remote socket endpoints to the unit name:

$ systemctl list-units 'ddate@*'
  UNIT                                             LOAD   ACTIVE SUB     DESCRIPTION
  ddate@15-127.0.0.1:55555-127.0.0.1:36936.service loaded active running Run ddate as a network service (127.0.0.1:36936)
  ddate@16-127.0.0.1:55555-127.0.0.1:37002.service loaded active running Run ddate as a network service (127.0.0.1:37002)

This allows me to monitor each running service individually.

systemd also automatically creates a slice unit called system-ddate.slice grouping all services together:

$ systemctl status system-ddate.slice
● system-ddate.slice
   Loaded: loaded
   Active: active since Thu 2017-09-21 14:25:02 CEST; 9min ago
    Tasks: 4
   CGroup: /system.slice/system-ddate.slice
           ├─ddate@15-127.0.0.1:55555-127.0.0.1:36936.service
           │ ├─18214 /bin/sh -ec while true; do /usr/bin/ddate; sleep 1m; done
           │ └─18661 sleep 1m
           └─ddate@16-127.0.0.1:55555-127.0.0.1:37002.service
             ├─18228 /bin/sh -ec while true; do /usr/bin/ddate; sleep 1m; done
             └─18670 sleep 1m

This also allows working with all the running services of this template unit as a whole, sending a signal to all their processes and setting up resource control features for the service as a whole.
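
For example, a sketch of both operations:

# systemctl kill -s SIGTERM system-ddate.slice
# systemctl set-property system-ddate.slice CPUQuota=20%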

See:

Iain R. Learmonth: Tor Metrics Team Meeting in Berlin

29 September, 2017 - 21:00

We had a meeting of the Metrics Team in Berlin yesterday to organise a roadmap for the next 12 months. This roadmap isn’t yet finalised as it will now be taken to the main Tor developers meeting in Montreal where perhaps there are things we thought were needed but aren’t, or things that we had forgotten. Still we have a pretty good draft and we were all quite happy with it.

We have updated tickets in the Metrics component on the Tor trac to include either “metrics-2017” or “metrics-2018” in the keywords field to identify tickets that we expect to be able to resolve either by the end of this year or by the end of next year (again, not yet finalised but should give a good idea). In some cases this may mean closing the ticket without fixing it, but only if we believe that either the ticket is out of scope for the metrics team or that it’s an old ticket and no one else has had the same issue since.

Having an in-person meeting has allowed us to have easy discussion around some of the more complex tickets that have been sitting around. In many cases these are tickets where we need input from other teams, or perhaps even just reassigning the ticket to another team, but without a clear plan we couldn’t do this.

My work for the remainder of the year will be primarily on Atlas where we have a clear plan for integrating with the Tor Metrics website, and may include some other small things relating to the website.

I will also be triaging the current Compass tickets as we look to shut down Compass and integrate the functionality into Atlas. Compass-specific tickets will be closed, but some tickets relating to desirable functionality may be moved to Atlas with the fix implemented there instead.

Sven Hoexter: Last rites to the lyx and elyxer packaging

29 September, 2017 - 17:39

After having been a heavy LyX user from 2005 to 2010 I've continued to maintain LyX more or less till now. Finally I'm starting to leave that stage and removed myself from the Uploaders list. The upload with some other last packaging changes is currently sitting in the git repo, mainly because lintian on ftp-master currently rejects 'packagename@packages.d.o' maintainer addresses (the alternative to the lists.alioth.d.o maintainer mailing lists). For elyxer I filed a request for removal. It hasn't seen any upstream activity for a while, and LyX's built-in HTML export support has improved.

My hope is that if I step away far enough someone else might actually pick it up. I had this strange moment when I recently realized that xchat got reintroduced to Debian after mapreri and I spent some time last year getting it removed before the stretch release.

Petter Reinholdtsen: Visualizing GSM radio chatter using gr-gsm and Hopglass

29 September, 2017 - 15:30

Every mobile phone announces its existence over radio to the nearby mobile cell towers, and this radio chatter is available to anyone with a radio receiver capable of receiving it. Details about the mobile phones are of course collected with very good accuracy by the phone companies, but this is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone that cares to listen.

I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival), taking place today and tomorrow at Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation will show the nearby mobile phones (aka IMSIs) as dots in a web browser graph, with lines to the dots representing the mobile base stations they are talking to. It was working in the lab yesterday, and was moved into place this morning.

We set up a fairly powerful desktop machine using Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected, and visualize the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed, as the grgsm_livemon_headless processes from gr-gsm converting the radio signal to data packets are quite CPU intensive.

The frequencies to listen to are identified using a slightly patched scan-and-livemon (to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but their coordinates appear to be hardcoded to some random location in Germany. The code should be replaced with code to look up the location in a text file, a sqlite database or one of the online databases mentioned in the github issue for the topic.

If this sounds interesting, visit the stand at the festival!

Dirk Eddelbuettel: Rcpp 0.12.13: Updated vignettes, and more

29 September, 2017 - 08:31

The thirteenth release in the 0.12.* series of Rcpp landed on CRAN this morning, following a little delay because Uwe Ligges was traveling and whatnot. We had announced its availability to the mailing list late last week. As usual, a rather substantial amount of testing effort went into this release so you should not expect any surprises.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, and the 0.12.12 release in July 2017, making it the seventeenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1069 packages (and hence 73 more since the last release) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a large-ish update to the documentation as all vignettes (apart from the unit test one, which is a one-off) now use Markdown and the (still pretty new) pinp package by James and myself. There is also a new vignette corresponding to the PeerJ preprint James and I produced as an updated and current introduction to Rcpp, replacing the older JSS piece (which is still included as a vignette too).

A few other things got fixed: Dan is working on the const iterators you would expect with modern C++, Lei Yu spotted an error in Modules, and more. See below for details.

Changes in Rcpp version 0.12.13 (2017-09-24)
  • Changes in Rcpp API:

    • New const iterator functions cbegin() and cend() have been added to several vector and matrix classes (Dan Dillon and James Balamuta in #748, starting to address #741).
  • Changes in Rcpp Modules:

    • Misplacement of one parenthesis in macro LOAD_RCPP_MODULE was corrected (Lei Yu in #737)
  • Changes in Rcpp Documentation:

    • Rewrote the macOS sections to depend on official documentation due to large changes in the macOS toolchain. (James Balamuta in #742 addressing issue #682).

    • Added a new vignette ‘Rcpp-introduction’ based on new PeerJ preprint, renamed existing introduction to ‘Rcpp-jss-2011’.

    • Transitioned all vignettes to the 'pinp' RMarkdown template (James Balamuta and Dirk Eddelbuettel in #755 addressing issue #604).

    • Added an entry on running 'compileAttributes()' twice to the Rcpp-FAQ (#745).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Enrico Zini: Systemd path units

29 September, 2017 - 05:00

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.path units

This kind of unit can be used to monitor a file or directory for changes using inotify, and activate other units when an event happens.

For example, this monitors a spool directory, activating another unit whenever a .pdf file is added to /tmp/spool/:

[Unit]
Description=Monitor /tmp/spool/ for new .pdf files

[Path]
Unit=beeponce.service
PathExistsGlob=/tmp/spool/*.pdf
MakeDirectory=true

This instead activates another unit whenever /tmp/ready is changed, for example by someone running touch /tmp/ready:

[Unit]
Description=Monitor /tmp/ready

[Path]
Unit=beeponce.service
PathChanged=/tmp/ready

And beeponce.service:

[Unit]
Description=Beeps once

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav
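
Assuming the second path unit above is saved as /etc/systemd/system/monitor-ready.path (a hypothetical name), a quick test could look like this:

# systemctl daemon-reload
# systemctl start monitor-ready.path
$ touch /tmp/ready   # beeponce.service gets activated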

See man systemd.path

Sean Whitton: Debian Policy 4.1.1.0 released

29 September, 2017 - 03:35

I just released Debian Policy version 4.1.1.0.

There are only two normative changes, and neither is very important. The main thing is that this upload fixes a lot of packaging bugs that were found since we converted to build with Sphinx.

There are still some issues remaining; I hope to submit some patches to the www-team’s scripts to fix those.

Ricardo Mones: Long time no post

29 September, 2017 - 02:31
Seems the breakage of my desktop computer more than 3 months ago also caused a hiatus in my online publishing activities... it was not really intended, it just happened that I was busy with other things ಠ_ಠ.

With a broken desktop, being able to build software on the laptop became a priority. Around September 2016 or so the good'n'old black MacBook decided to stop working. I didn't really need a replacement by that time, but I never liked having just a single working system, and in October I found an offer which I could not resist and bought a ThinkPad X260. It helped to build my final project (it was faster than the desktop), but lacking time for FOSS I hadn't used it for much more.

Setting up the laptop for software (Debian packages and Claws Mail, mainly) was somewhat easy. Finding a replacement for the broken desktop was a bit more difficult. I considered a lot of configurations and prices, from those new Ryzen to just buying the same components (pretty difficult now because they're discontinued). In the end, I decided to spend the minimum and make good use of everything else still working (memory, discs and wireless card), so I finally got an AMD A10-7860K on top of an Asus A88M-PLUS. This board has more SATA ports, so I added an unused SSD, remains of a broken laptop, to install the new system —Debian Stretch, of course ʘ‿ʘ— while keeping the existing software RAID partitions of the spinning drives.


The last thing distracting from the usual routine was replacing the car. Our child is growing as expected and the Fiesta was starting to feel small and uncomfortable, especially for long distance travel. We went for a hybrid model with a high capacity boot. Given our budget, we only found 3 models below the limit: Kia Niro, Hyundai Ioniq and Toyota Auris TS. The color was decided by the kid (after forbidding black), and this was the winner...

In the middle of all of this we also took some vacation to travel to the south of Galicia, mostly around Vigo area, but also visiting Oporto and other nice places.

Matthias Klumpp: Adding fonts to software centers

28 September, 2017 - 21:24

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes did some work on it years ago. We weren’t happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year, I was implementing the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) as well as the AppStream specification. This blogpost was sitting on my todo list as a draft for a long time, and I only now managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already.

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have hundreds of fonts immediately available in software centers), I also came to the same conclusion as Richard: the best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: What is a font? Any person who knows about fonts will understand one font as one font face, e.g. “Lato Regular Italic” or “Lato Bold”. A user, however, will see the font family as a font, e.g. just “Lato” instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream font component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components will advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a good-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/com.latofonts.Lato.metainfo.xml for the AppStream metadata generator to pick up:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>

  <name>Lato</name>
  <summary>A sanserif type­face fam­ily</summary>
  <description>
    <p>
      Lato is a sanserif type­face fam­ily designed in the Sum­mer 2010 by Warsaw-based designer
      Łukasz Dziedzic (“Lato” means “Sum­mer” in Pol­ish). In Decem­ber 2010 the Lato fam­ily
      was pub­lished under the open-source Open Font License by his foundry tyPoland, with
      sup­port from Google.
    </p>
  </description>

  <url type="homepage">http://www.latofonts.com/</url>

  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>

When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an “icon” for the respective font component using its regular typeface. Of course that is not ideal – what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display?

This behavior can be influenced by adding <font/> tags to a <provides/> tag in the metainfo file. The font-provides tags should contain the fullnames of the font faces you want to associate with this font component. If the font file does not define a fullname, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first mentioned font name in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag.

The example lines of text are written in a language matching the font using Pango.

But what about symbolic fonts? Or fonts where any heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we can not render good example images for fonts. AppStream-Generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText will override the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative for a font.

For example, a font with mathematical symbols might want to add the following to its metainfo file:

<custom>
  <value key="FontIconText">∑√</value>
  <value key="FontSampleText">∑ ∮ √ ‖...‖ ⊕ 𝔼 ℕ ⋉</value>
</custom>

Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts.

So, in summary:

  • Fonts are hard
  • I need to blog faster
  • Please add metainfo files to your fonts and submit them upstream if you can!
  • Fonts must have a metainfo file in order to show up in GNOME Software, KDE Discover, AppCenter, etc.
  • The “new” font specification is backwards compatible to Richard’s pioneer work in 2014
  • The appstream-generator supports a few non-standard values to influence how font images are rendered that you might be interested in (maybe we can do something like that for appstream-builder as well)
  • The appstream-generator does not (yet?) support the <extends/> logic Richard outlined in his blog post, mainly because it wasn’t necessary in Debian/Ubuntu/Arch yet (which is asgen’s primary audience), and upstream projects would rarely want to write multiple metainfo files.
  • The metainfo files are not supposed to replace the existing fontconfig files, and we can not generate them from existing metadata, sadly
  • If you want a more detailed look at writing font metainfo files, take a look at the AppStream specification.
  • Please write more font metadata

 

Russell Coker: Process Monitoring

28 September, 2017 - 20:46

Since forking the Mon project to etbemon [1] I’ve been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy; deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I’m about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don’t use mon, the monitor scripts return 0 if everything is OK and 1 if there’s a problem, along with using stdout to display an error message. While I’m not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

Basic Monitoring
ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2

I’m currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this, the “master” process in this instance refers to the main process of Postfix, but other daemons use the same process name (it’s one of those names that’s wrong because it’s so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
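
For instance, a sketch of counting only the Postfix master processes by full path with standard tools (rather than mon's own matching):

$ pgrep -c -f '^/usr/lib/postfix/sbin/master'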

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post – merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won’t be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string “SSH-2.0-OpenSSH_”. Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.
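
As a sketch, the kind of per-UID breakdown such a monitor would need can be seen with:

$ ps -C sshd -o user= | sort | uniq -c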

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix “master” process is running regardless of what other “master” processes there are. But for my use I find it handy to have multiple monitors, if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that “master” isn’t running I don’t need to fully wake up to know where the problem is.

SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I’m not interested in writing tests for other security systems I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.

Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when “logrotate” or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the “alertafter 2” directive. The “failure_interval” directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn’t delay the notification much.

To deal with this I’ve been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.

CPU Use

Mon currently has a loadavg.monitor script that checks the load average. But that won’t catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won’t catch the case of a CPU hungry process going quiet (e.g. when the SETI at Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script have yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds, unless it’s in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip which is run from several cron jobs).

Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid() which means that the program doesn’t have permission to call setgid(). If return codes aren’t checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside a quick examination of a Debian/Testing workstation didn’t show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
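
A sketch of such a check with standard tools, printing any process whose effective group is root while its effective user is not:

$ ps -eo pid,euser,egroup,comm | awk '$3 == "root" && $2 != "root"'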

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn’t happen in Stretch systems running daemons such as mysqld and tor due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring so we need automated tests for them.

Automated tests for configuration errors that might impact system security is a bigger issue, I’ll probably write a separate blog post about it.


Lior Kaplan: LibreOffice community celebrates 7th anniversary

28 September, 2017 - 19:52

The Document Foundation blog has a post about the LibreOffice 7th anniversary:

Berlin, September 28, 2017 – Today, the LibreOffice community celebrates the 7th anniversary of the leading free office suite, adopted by millions of users in every continent. Since 2010, there have been 14 major releases and dozens of minor ones, fulfilling the personal productivity needs of both individuals and enterprises, on Linux, macOS and Windows.

I wanted to take a moment to remind people that 7 years ago the community decided to make the de facto fork of OpenOffice.org official after life under Sun (and then Oracle) had become problematic. From the very first hours the project showed its effectiveness. See my post about LibreOffice's first steps. Not to mention what it has achieved in the past 7 years.

This is still one of my favourite open source contributions, not because it was sophisticated or hard, but because it was about using the freedom part of free software:
Replace hardcoded “product by Oracle” with “product by %OOOVENDOR”.

On a personal note, for me, after years of trying to help with OOo l10n for Hebrew and RTL support, things started to move forward at a reasonable pace: getting patches in after years of trying, having upstream fix some of the issues, and actually being able to do the translation. We made it to 100% with LibreOffice 3.5.0 in February 2012 (something we should redo soon…).



Russ Allbery: Review: The Seventh Bride

28 September, 2017 - 11:41

Review: The Seventh Bride, by T. Kingfisher

Publisher: 47North
Copyright: 2015
ISBN: 1-5039-4975-3
Format: Kindle
Pages: 225

There are two editions of this book, although only one currently for sale. This review is of the second edition, released in November of 2015. T. Kingfisher is a pen name for Ursula Vernon when she's writing for adults.

Rhea is a miller's daughter. She's fifteen, obedient, wary of swans, respectful to her parents, and engaged to Lord Crevan. The last was a recent and entirely unexpected development. It's not that she didn't expect to get married eventually, since of course that's what one does. And it's not that Lord Crevan was a stranger, since that's often how it went with marriage for people like her. But she wasn't expecting to get married now, and it was not at all clear why Lord Crevan would want to marry her in particular.

Also, something felt not right about the entire thing. And it didn't start feeling any better when she finally met Lord Crevan for the first time, some days after the proposal to her parents. The decidedly non-romantic hand kissing didn't help, nor did the smug smile. But it's not like she had any choice. The miller's daughter doesn't say no to a lord and a friend of the viscount. The miller's family certainly doesn't say no when they're having trouble paying the bills, the viscount owns the mill, and they could be turned out of their livelihood at a whim.

They still can't say no when Lord Crevan orders Rhea to come to his house in the middle of the night down a road that quite certainly doesn't exist during the day, even though that's very much not the sort of thing that is normally done. Particularly before the marriage. Friends of the viscount who are also sorcerers can get away with quite a lot. But Lord Crevan will discover that there's still a limit to how far he can order Rhea around, and practical-minded miller's daughters can make a lot of unexpected friends even in dire circumstances.

The Seventh Bride is another entry in T. Kingfisher's series of retold fairy tales, although the fairy tale in question is less clear than with The Raven and the Reindeer. Kirkus says it's a retelling of Bluebeard, but I still don't quite see that in the story. I think one could argue equally easily that it's an original story. Nonetheless, it is a fairy tale: it has that fairy tale mix of magical danger and practical morality, and it's about courage and friendships and their consequences.

It also has a hedgehog.

This is a T. Kingfisher story, so it's packed full of bits of marvelous phrasing that I want to read over and over again. It has wonderful characters, the hedgehog among them, and it has, at its heart, a sort of foundational decency and stubborn goodness that's deeply satisfying for the reader.

The Seventh Bride is a lot closer to horror than the other T. Kingfisher books I've read, but it never fell into my dislike of the horror genre, despite a few gruesome bits. I think that's because neither Rhea nor the narrator treat the horrific aspects as representative of the true shape of the world. Rhea instead confronts them with a stubborn determination and an attempt to make the best of each moment, and with a practical self-awareness that I loved reading about.

The problem with crying in the woods, by the side of a white road that leads somewhere terrible, is that the reason for crying isn't inside your head. You have a perfectly legitimate and pressing reason for crying, and it will still be there in five minutes, except that your throat will be raw and your eyes will itch and absolutely nothing else will have changed.

Lord Crevan, when Rhea finally reaches him, toys with her by giving her progressively more horrible puzzle tasks, threatening her with the promised marriage if she fails at any of them. The way this part of the book finally resolves is one of the best moments I've read in any book. Kingfisher captures an aspect of moral decisions, and a way in which evil doesn't work the way that evil people expect it to work, that I can't remember seeing an author capture this well.

There are a lot of things here for Rhea to untangle: the nature of Crevan's power, her unexpected allies in his manor, why he proposed marriage to her, and of course how to escape his power. The plot works, but I don't think it was the best part of the book, and it tends to happen to Rhea rather than being driven by her. But I have rarely read a book quite this confident of its moral center, or quite as justified in that confidence.

I am definitely reading everything Vernon has published under the T. Kingfisher name, and quite possibly most of her children's books as well. Recommended, particularly if you liked the excerpt above. There's an entire book full of paragraphs like that waiting for you.

Rating: 8 out of 10

Dirk Eddelbuettel: RcppZiggurat 0.1.4

28 September, 2017 - 09:06

A maintenance release of RcppZiggurat is now on the CRAN network for R. It switched the vignette to our new pinp package and its two-column pdf default.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

The NEWS file entry below lists all changes.

Changes in version 0.1.4 (2017-07-27)
  • The vignette now uses the pinp package in two-column mode.

  • Dynamic symbol registration is now enabled.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Enrico Zini: Systemd device units

28 September, 2017 - 05:00

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.device units

Several devices are automatically represented inside systemd by .device units, which can be used to activate services when a given device exists in the file system.

See systemctl --all --full -t device for a list of all devices for which systemd has a unit on your system.

For example, this .service unit plays a sound as long as a specific USB key is plugged into my system:

[Unit]
Description=Beeps while a USB key is plugged
DefaultDependencies=false
StopWhenUnneeded=true

[Install]
WantedBy=dev-disk-by\x2dlabel-ERLUG.device

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

If you need to work with a device not seen by default by systemd, you can add a udev rule that makes it available, by adding the systemd tag to the device with TAG+="systemd".

It is also possible to give the device an extra alias using ENV{SYSTEMD_ALIAS}="/dev/my-alias-name".
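
Putting the two together, a rule like this sketch (the file name and label match are illustrative, modelled on the ERLUG key above) tags the device for systemd and gives it an alias:

# /etc/udev/rules.d/70-usbkey-systemd.rules  (illustrative name)
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="ERLUG", TAG+="systemd", ENV{SYSTEMD_ALIAS}="/dev/my-alias-name"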

To figure out all you can use for matching a device:

  1. Run udevadm monitor --environment and plug the device
  2. Look at the DEVNAME= values and pick one that addresses your device the way you prefer
  3. udevadm info --attribute-walk --name=*the value of devname* will give you all you can use for matching in the udev rule.

See:

Enrico Zini: Qt cross-architecture development in Debian

27 September, 2017 - 20:25

Use case: use Debian Stable as an environment to run amd64 development machines to develop Qt applications for Raspberry Pi or other smallish armhf devices.

Qt Creator is used as Integrated Development Environment, and it supports cross-compiling, running the built source on the target system, and remote debugging.

Debian Stable (vanilla or Raspbian) runs on both the host and the target systems, so libraries can be kept in sync, and both systems have access to a vast amount of libraries, with security support.

On top of that, armhf libraries can be installed with multiarch also in the host machine, so cross-builders have access to the exact same libraries as the target system.
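
For example, making the armhf Qt libraries visible to the host could look like this sketch (whether a given -dev package is co-installable alongside its amd64 twin depends on its Multi-Arch metadata):

$ sudo dpkg --add-architecture armhf
$ sudo apt update
$ sudo apt install libqt5core5a:armhf qtbase5-dev:armhf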

This sounds like a dream system. But. We're not quite there yet.

cross-compile attempts

I tried cross compiling a few packages:

$ sudo debootstrap stretch cross
$ echo "strech_cross" | sudo tee cross/etc/debian_chroot
$ sudo systemd-nspawn -D cross
# dpkg --add-architecture armhf
# echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
# apt update
# apt install --no-install-recommends build-essential crossbuild-essential-armhf

Some packages work:

# apt source bc
# cd bc-1.06.95/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
dh_auto_configure -- --prefix=/usr --with-readline
        ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=\${prefix}/include --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=\${prefix}/lib/arm-linux-gnueabihf --libexecdir=\${prefix}/lib/arm-linux-gnueabihf --disable-maintainer-mode --disable-dependency-tracking --host=arm-linux-gnueabihf --prefix=/usr --with-readline
…
dpkg-deb: building package 'dc-dbgsym' in '../dc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc-dbgsym' in '../bc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'dc' in '../dc_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc' in '../bc_1.06.95-9_armhf.deb'.
 dpkg-genbuildinfo --build=binary
 dpkg-genchanges --build=binary >../bc_1.06.95-9_armhf.changes
dpkg-genchanges: info: binary-only upload (no source code included)
 dpkg-source --after-build bc-1.06.95
dpkg-buildpackage: info: binary-only upload (no source included)

With qmake-based Qt packages, qmake is not configured for cross-building, probably because that is not currently supported:

# apt source pumpa
# cd pumpa-0.9.3/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
        qmake -makefile -nocache "QMAKE_CFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=.
          -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_LFLAGS_RELEASE=-Wl,-z,relro -Wl,-z,now"
          "QMAKE_LFLAGS_DEBUG=-Wl,-z,relro -Wl,-z,now" QMAKE_STRIP=: PREFIX=/usr
qmake: could not exec '/usr/lib/x86_64-linux-gnu/qt5/bin/qmake': No such file or directory
…
debian/rules:19: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

With cmake-based Qt packages it goes a little better: the build finds the cross-compiler, pkg-config and some multiarch paths, but then it tries to run the armhf moc, which fails:

# apt source caneda
# cd caneda-0.3.0/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
        cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None
          -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SYSTEM_NAME=Linux
          -DCMAKE_SYSTEM_PROCESSOR=arm -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc
          -DCMAKE_CXX_COMPILER=arm-linux-gnueabihf-g\+\+
          -DPKG_CONFIG_EXECUTABLE=/usr/bin/arm-linux-gnueabihf-pkg-config
          -DCMAKE_INSTALL_LIBDIR=lib/arm-linux-gnueabihf
…
CMake Error at /usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfig.cmake:27 (message):
  The imported target "Qt5::Core" references the file

     "/usr/lib/arm-linux-gnueabihf/qt5/bin/moc"

  but this file does not exist.  Possible reasons include:

  * The file was deleted, renamed, or moved to another location.

  * An install or uninstall procedure did not complete successfully.

  * The installation package was faulty and contained

     "/usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfigExtras.cmake"

  but not all the files it references.

Note: Although I improvised a chroot to be able to fool around with it, I would use pbuilder or sbuild to do the actual builds.

Helmut suggests pbuilder --host-arch or sbuild --host.
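
A rough sketch of what those invocations could look like, reusing the bc source package from above (untested here; check the sbuild and pbuilder documentation for the details of your setup):

$ sbuild --host=armhf --dist=stretch bc_1.06.95-9.dsc
$ sudo pbuilder build --host-arch armhf bc_1.06.95-9.dsc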

Doing it the non-Debian way

In the meantime, this guide explains how to set up a cross-compiling Qt toolchain in a rather dirty way: recompiling Qt while pointing it at pieces of the Qt deployment on the Raspberry Pi.

Following that guide, replacing the CROSS_COMPILE value with /usr/bin/arm-linux-gnueabihf- gave me a working qtbase, for which it is easy to create a working Qt Creator Kit that supports linking applications against Debian development packages that do not use Qt.

However, at that point I needed to recompile all Qt-using dependencies myself, and I quickly got stuck on that monster of QtWebEngine, whose sources embed the whole of Chromium.

A Qt-based development environment in which I need to become the maintainer of the whole Qt toolchain is not a product I can offer to a customer. Cross-compiling qmake-based packages on stretch is not currently supported, so for the moment I had to suggest postponing all plans for total world domination for at least two years.

Cross-building Debian

In the meantime, Helmut Grohne has been putting a lot of effort into making Debian packages cross-buildable:

helmut> enrico: yes, cross building is painful. we have ~26000 source packages. of those, ~13000 build arch-dep packages. of those, ~6000 have cross-satisfiable build-depends. of those, I tried cross building ~2300. of those 1300 cross built. so we are at about 10% working.

helmut> enrico: plus there are some 607 source packages affected by some 326 bugs with patches.

helmut> enrico: gogo nmu them

helmut> enrico: I've filed some 1000 bugs (most of them with patches) now. around 600 are fixed :)

He is doing it mostly alone, and I would like people not to be alone when they do a lot of work in Debian, so…

Join Helmut in the effort of making Debian cross-buildable!

Build any Debian package for any device right from the comfort of your own work computer!

Have a single development environment seamlessly spanning architecture boundaries, with the power of all that there is in Debian!

Join Helmut in the effort of making Debian cross-buildable!

Apply here, or join #debian-bootstrap on OFTC!

Cross-building Qt in Debian

mitya57 summarised the situation on the KDE team side:

mitya57> we have cross-building stuff on our TODO list, but it will likely require a lot of time and neither Lisandro nor I have it currently.

mitya57> see https://gobby.debian.org/export/Teams/KDE/qt-cross for a summary of what needs to be done.

mitya57> Any help or patches are always welcome :))

qemu-user-static

Helmut also suggested using qemu-user-static to make the host system able to run binaries compiled for the target system, so that even if a non-cross-aware Qt build tries to run moc and friends in their target-architecture versions, they transparently succeed.

At that point, it would just be a matter of replacing compiler paths to point to the native cross-compiling gcc, and the build would not be slowed down by much.
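
A sketch of the mechanism, assuming the armhf moc from the cmake error above were actually installed:

$ sudo apt install qemu-user-static binfmt-support
$ /usr/lib/arm-linux-gnueabihf/qt5/bin/moc --version
(binfmt_misc now dispatches armhf binaries to qemu-arm-static transparently)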

Fixing bug #781226 would help in making it possible to configure a multiarch version of qmake as the qmake used for cross compiling.

I have not had a chance of trying to cross-build in this way yet.

In the meantime...

Having Qt Creator able to work on an amd64 development machine and deploy/test/debug remotely on an armhf target machine, where both machines run Debian Stable and have libraries in sync, would be a great thing to have even though packages do not cross-build yet.

Helmut summarised the situation on IRC:

svuorela and others repeat that Qt upstream is not compatible with Debian's multiarch thinking, in that Qt upstream insists on having one toolchain for each pair of architectures, whereas the Debian way tends to be to make packages generic and split stuff such that it can be mixed and matched.

An example: you need to run qmake (thus you need qmake for the build architecture), but qmake also embeds the relevant paths and you need to query it for them (so you also need qmake for the host architecture).

Either you run it through qemu, or you have a dedicated cross qmake for your build/host pair, or you fix Qt upstream to stop this madness.

Building qmake in Debian for each host-target pair, even just limited to released architectures, would mean building Qt 100 times, and that's not going to scale.

I wonder:

  • can I have a qmake-$ARCH binary that builds a source tree using locally installed multiarch Qt libraries? Do I need to recompile and ship the whole of Qt, or just qmake?
  • is there a recipe for building a cross-building Qt environment that would be able to use Debian development libraries installed in the normal multiarch way?
  • we can't do perfect yet, but can we do better than this?

Dirk Eddelbuettel: RcppAnnoy 0.0.10

27 September, 2017 - 09:05

A few short weeks after the more substantial 0.0.9 release of RcppAnnoy, we have a quick bug-fix update.

RcppAnnoy is our Rcpp-based R integration of the nifty Annoy library by Erik. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours.

Michaël Benesty noticed that our getItemsVector() function didn't, ahem, do much besides crashing. Simple bugs happen; this one is now fixed, and a unit test has been added.

Changes in this version are summarized here:

Changes in version 0.0.10 (2017-09-25)
  • The getItemsVector() function no longer crashes (#24)

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
