Planet Debian


Antoine Beaupré: Notmuch, offlineimap and Sieve setup

13 May, 2016 - 06:29

I've been using Notmuch since about 2011, when I switched away from Mutt to deal with the monstrous amount of email I was, and still am, dealing with on the computer. I have contributed a few patches and configs on the Notmuch mailing list, but I have basically given up on merging patches, and instead keep a custom config in Emacs that extends it the way I want. In the last 5 years, Notmuch has progressed significantly, so I haven't found the need to patch it or make sweeping changes.

The huge INBOX of death

The one thing that is problematic with my use of Notmuch is that I end up with a ridiculously large INBOX folder. Before the cleanup I did this morning, I had over 10k emails in there, out of about 200k emails overall.

Since I mostly work from my laptop these days, the Notmuch tags live only on the laptop and are not propagated to the server. This makes accessing the mail spool directly, from webmail or simply through a local client (say Mutt) on the server, really inconvenient, because the client has to load a very large spool of mail, which is very slow in Mutt. Even worse, a bunch of mail that was archived in Notmuch still shows up in the spool, because archiving in Notmuch only removes tags: the mails remain in the inbox, even though they are marked as read.

So I was hoping that Notmuch would help me deal with the giant inbox of death problem, but in fact, when I don't use Notmuch, it actually makes the problem worse. Today, I did a bunch of improvements to my setup to fix that.

The first thing I did was to kill procmail, which I was surprised to discover has been dead for over a decade. I switched over to Sieve for filtering, having already switched the server to Dovecot a while back. I tried the procmail2sieve.pl conversion tool, but it didn't work very well, so I basically rewrote the whole file. Since I was mostly using Notmuch for filtering, there wasn't much left to convert.

Sieve filtering

But this is where things got interesting: Sieve is so much simpler and more intuitive to use that I started doing more interesting things to bridge the filtering system (Sieve) with the tagging system (Notmuch). Basically, I use Sieve to split large chunks of email off my main inbox, to remove as much spam, bulk email, notifications and mailing list traffic as possible from the larger flow of emails. Then Notmuch comes in and does some fine-tuning, assigning tags to specific mailing lists or topics, and being generally the awesome search engine that I use on a daily basis.

Dovecot and Postfix configs

For all of this to work, I had to tweak my mail servers to talk Sieve. First, I enabled Sieve in Dovecot:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -44,5 +44,5 @@

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).
-  #mail_plugins = $mail_plugins
+  mail_plugins = $mail_plugins sieve
 }

Then I had to switch from procmail to Dovecot for local delivery. That was an easy change in Postfix's perennial main.cf:

#mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_command = /usr/lib/dovecot/dovecot-lda -a "$RECIPIENT"

Note that dovecot takes the full recipient as an argument, not just the extension. That's normal. It's clever, it knows that kind of stuff.
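
To check that delivery actually goes through dovecot-lda and the Sieve script, it helps to feed a test message through by hand; here is a sketch, with made-up addresses:

# feed a minimal message through dovecot-lda (-d selects the destination
# user, -a the full recipient address, as in the main.cf line above)
printf 'From: test@example.com\nTo: anarcat+test@example.com\nSubject: lda test\n\nhello\n' |
    /usr/lib/dovecot/dovecot-lda -d anarcat -a anarcat+test@example.com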

One last tweak I did was to enable automatic mailbox creation and subscription, so that the automatic extension filtering (below) can create mailboxes on the fly:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -37,10 +37,10 @@
 #lda_original_recipient_header =

 # Should saving a mail to a nonexistent mailbox automatically create it?
-#lda_mailbox_autocreate = no
+lda_mailbox_autocreate = yes

 # Should automatically created mailboxes be also automatically subscribed?
-#lda_mailbox_autosubscribe = no
+lda_mailbox_autosubscribe = yes

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).
Sieve rules

Then I had to create a Sieve ruleset. That thing lives in ~/.dovecot.sieve, since I'm running Dovecot. Your provider may accept an arbitrary ruleset like this, or you may need to go through a web interface, or who knows. I'm assuming you're running Dovecot and have a shell from now on.
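
As an aside, Pigeonhole ships a compiler that catches syntax errors before delivery time; a quick check of the ruleset looks like this:

# compile the script; errors are reported and a .svbin file is produced
sievec ~/.dovecot.sieve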

The first part of the file is simply to enable a bunch of extensions, as needed:

# Sieve Filters
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples
# https://tools.ietf.org/html/rfc5228
require "fileinto";
require "envelope";
require "variables";
require "subaddress";
require "regex";
require "vacation";
require "vnd.dovecot.debug";

Some of those are not used yet; for example, I haven't tested the vacation module, but I have high hopes that I can use it to announce a special "urgent" mailbox while I'm traveling. The rationale is to have a distinct mailbox for urgent messages that is announced in the autoreply, one that hopefully won't be parsable by bots.
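
As a sketch of that idea (untested, and the +urgent address is hypothetical), such a vacation rule could look like this:

# hypothetical autoreply announcing a separate urgent mailbox; mail to
# the +urgent extension is then filed by the wildcard rule further below
if not header :contains "to" "anarcat+urgent" {
  vacation :days 7 :subject "away, reading email irregularly"
    "I am travelling and read email irregularly. For urgent matters, please write to my +urgent address.";
}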

Spam filtering

Then I filter spam using this fairly standard expression:

########################################################################
# spam 
# possible improvement, server-side:
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Filtering_using_the_spamtest_and_virustest_extensions
if header :contains "X-Spam-Flag" "YES" {
  fileinto "junk";
  stop;
} elsif header :contains "X-Spam-Level" "***" {
  fileinto "greyspam";
  stop;
}

This puts stuff into the junk or greyspam folder, based on the severity. I am very aggressive with spam: stuff often ends up in the greyspam folder, which I need to check from time to time, but it beats having too much spam in my inbox.
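
To keep those periodic checks quick, a one-liner like the following (a sketch; adjust the folder name to your prefix) shows whether the greyspam folder needs attention:

# count messages awaiting review in the greyspam folder
notmuch count folder:greyspam and tag:unread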

Mailing lists

Mailing lists are generally put into a lists folder, with some mailing lists getting their own folder:

########################################################################
# lists
# converted from procmail
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
} elsif header :contains "List-Id" "koumbit.org" {
    fileinto "koumbit";
} elsif header :contains ["to", "cc"] ["lists.debian.org",
                                       "anarcat@debian.org"] {
    fileinto "debian";
# Debian BTS
} elsif exists "X-Debian-PR-Message" {
    fileinto "debian";
# default lists fallback
} elsif exists "List-Id" {
    fileinto "lists";
}

The idea here is that I can safely subscribe to lists without polluting my mailbox by default. Further processing is done in Notmuch.

Extension matching

I also use the magic +extension feature on emails. If you send email to, say, foo+extension@example.com, then the email ends up in the extension folder. This is done with the help of the following recipe:

########################################################################
# wildcard +extension
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Plus_Addressed_mail_filtering
if envelope :matches :detail "to" "*" {
  # Save the detail part in ${name}, all lowercased:
  # Joe, joe and jOe thus all become 'joe'.
  set :lower "name" "${1}";
  fileinto "${name}";
  #debug_log "filed into mailbox ${name} because of extension";
  stop;
}

This is actually very effective: any time I register for a service, I try as much as possible to add a +extension that describes the service. Of course, spammers and marketers (it's the same thing, really) are free to drop the extension, and I suspect a lot of them do, but it helps with honest providers, and it actually sorts a lot of stuff out of my inbox into topically-defined folders.

It is also a security issue: someone could flood my filesystem with tons of mail folders, which would cripple the IMAP server and eat all the inodes, 4 times faster than just sending emails. But I guess I'll cross that bridge when I get there: anyone can flood my address, and I have other mechanisms to deal with that.
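
If that ever becomes a real problem, the wildcard rule could be hardened. Here is a hypothetical sketch (not in my actual config) that only files into plain alphanumeric folder names:

# only accept simple alphanumeric extensions, so that hostile senders
# cannot create arbitrarily-named mailboxes
if envelope :matches :detail "to" "*" {
  set :lower "name" "${1}";
  if string :regex "${name}" "^[a-z0-9]+$" {
    fileinto "${name}";
    stop;
  }
}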

The trick is to then assign tags to all folders so that they appear in the Notmuch-emacs welcome view:

echo tagging folders
# for each mailbox, except feeds, Sent and INBOX, add a tag named after
# the folder itself and remove the inbox tag
for folder in $(ls -ad $HOME/Maildir/${PREFIX}*/ | egrep -v "Maildir/${PREFIX}(feeds.*|Sent.*|INBOX/|INBOX/Sent)\$"); do
    # strip the trailing slash and the leading path to get the bare name
    tag=$(echo $folder | sed 's#/$##;s#^.*/##')
    notmuch tag +$tag -inbox tag:inbox and not tag:$tag and folder:${PREFIX}$tag
done

This is part of my notmuch-tag script that includes a lot more fine-tuned filtering, detailed below.

Automated reports filtering

Another thing I get a lot of is machine-generated "spam". Well, it's not commercial spam, but a pile of Nagios alerts, cron jobs, and god knows what other software thinks it's important enough to email me every day. I get a lot fewer of those these days since I'm off work at Koumbit, but the filters can still be useful for others:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"
                                        ],
           header :contains "Auto-Submitted" "auto-generated",
           envelope :contains "from" ["nagios@",
                                      "logcheck@"])
    {
    fileinto "rapports";
}
# imported from procmail
elsif header :comparator "i;octet" :contains "Subject" "Cron" {
  if header :regex :comparator "i;octet"  "From" ".*root@" {
        fileinto "rapports";
  }
}
elsif header :comparator "i;octet" :contains "To" "root@" {
  if header :regex :comparator "i;octet"  "Subject" "\\*\\*\\* SECURITY" {
        fileinto "rapports";
  }
}
elsif header :contains "Precedence" "bulk" {
    fileinto "bulk";
}
Refiltering emails

Of course, after all this I still had thousands of emails in my inbox, because the Sieve filters apply only to new emails. The beauty of Sieve support in Dovecot is that there is a neat sieve-filter command that can reprocess an existing mailbox. That was a lifesaver. To run a specific Sieve filter on a mailbox, I simply run:

sieve-filter .dovecot.sieve INBOX 2>&1 | less

Well, this doesn't actually do anything. To really execute the filters, you need the -e flag, and to write to the INBOX for real, you need the -W flag as well, so the real run looks something more like this:

sieve-filter -e -W -v .dovecot.sieve INBOX > refilter.log 2>&1

The funky output redirection is necessary because this outputs a lot of crap. Also note that, unfortunately, the dry-run output differs from the real run and is actually more verbose, which makes it less useful than it could be.

Archival

I also archive my mail every year, rotating my mailbox into an Archive.YYYY directory. For example, all mails from 2015 are now archived in an Archive.2015 directory. I used to do this with Mutt tagging, and it was a little slow and error-prone. Now, I simply have this Sieve script:

require ["variables","date","fileinto","mailbox", "relational"];

# Extract date info
if currentdate :matches "year" "*" { set "year" "${1}"; }

if date :value "lt" :originalzone "date" "year" "${year}" {
  if date :matches "received" "year" "*" {
    # Archive messages from previous years into per-year folders.
    # Create the folder when it does not exist.
    fileinto :create "Archive.${1}";
  }
}
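
Assuming the archival rules live in their own file (say archive.sieve, a name I made up), they can be applied with the same sieve-filter workflow as above:

# dry-run first, then the real thing, exactly as with the refiltering
sieve-filter -v archive.sieve INBOX 2>&1 | less
sieve-filter -e -W -v archive.sieve INBOX > archive.log 2>&1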

I went from 15613 to 1040 emails in my real inbox with this process (including refiltering with the default filters as well).

Notmuch configuration

My Notmuch configuration is in three parts. First, I have small settings in ~/.notmuch-config. The gist of it is:

[new]
tags=unread;inbox;
ignore=

#[maildir]
# synchronize_flags=true
# tentative patch that was refused upstream
# http://mid.gmane.org/1310874973-28437-1-git-send-email-anarcat@koumbit.org
#reckless_trash=true

[search]
exclude_tags=deleted;spam;

I omitted the fairly trivial [user] section for privacy reasons, and the [database] section to reduce clutter.

Then I have a notmuch-tag script symlinked into ~/Maildir/.notmuch/hooks/post-new. It does way too much stuff to describe in detail here, but here are a few snippets:

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

This sets a variable that makes the script work on my laptop (angela), where mailboxes are in Maildir/Anarcat/foo, as well as on the server, where mailboxes are in Maildir/.foo.

I also have special rules to tag my RSS feeds, which are generated by feed2imap, documented further below:

echo tagging feeds
( cd $HOME/Maildir/ && for feed in ${PREFIX}feeds.*; do
    # derive the tag name from the mailbox name, e.g. feeds.debian-planet
    name=$(echo $feed | sed "s#${PREFIX}feeds\\.##")
    notmuch tag +feeds +$name -inbox folder:$feed and not tag:feeds
done )

Another useful example is how to tag mailing lists; for instance, this removes the inbox tag and adds the lists and notmuch tags to emails from the Notmuch mailing list:

notmuch tag +lists +notmuch      -inbox tag:inbox and "to:notmuch@notmuchmail.org"
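
The same pattern extends naturally to other lists; for example, a similar (hypothetical) rule could tag whatever the Sieve rules above filed into the debian folder:

# sketch: tag list traffic that Sieve already filed into a folder
notmuch tag +lists +debian -inbox tag:inbox and folder:${PREFIX}debian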

Finally, I have a bunch of special keybindings in ~/.emacs.d/notmuch-config.el:

;; autocompletion
(eval-after-load "notmuch-address"
  '(progn
     (notmuch-address-message-insinuate)))

; use fortune for signature, config is in custom
(add-hook 'message-setup-hook 'fortune-to-signature)
; don't remember what that is
(add-hook 'notmuch-show-hook 'visual-line-mode)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; keymappings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-key notmuch-show-mode-map "S"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("+spam" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "S"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+spam" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "H"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("-spam"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "H"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-spam") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "l" 
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "u"
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-deleted") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "d"
  (lambda (&optional beg end)
    "delete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+deleted" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "d"
  (lambda ()
    "delete current message and advance"
    (interactive)
    (notmuch-show-tag '("+deleted" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

;; https://notmuchmail.org/emacstips/#index17h2
(define-key notmuch-show-mode-map "b"
  (lambda (&optional address)
    "Bounce the current message."
    (interactive "sBounce To: ")
    (notmuch-show-view-raw-message)
    (message-resend address)
    (kill-buffer)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; my custom notmuch functions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun anarcat/notmuch-search-next-message ()
  "Skip to next message from region or point

This is necessary because notmuch-search-next-thread just starts
from point, whereas it seems to me more logical to start from the
end of the region."
  ;; move line before the end of region if there is one
  (unless (= beg end)
    (goto-char (- end 1)))
  (notmuch-search-next-thread))

;; Linking to notmuch messages from org-mode
;; https://notmuchmail.org/emacstips/#index23h2
(require 'org-notmuch nil t)

(message "anarcat's custom notmuch config loaded")

This is way too long: in my opinion, a bunch of that stuff should be factored in upstream, but some features have been hard to get in. For example, Notmuch is really hesitant about marking emails as deleted. The community is also very strict about having unit tests for everything, which makes writing new patches a significant challenge for a newcomer, who will often need to be familiar with both Elisp and C. So for now I just carry those configs around.

Emails marked as deleted or spam are processed with the following script named notmuch-purge which I symlink to ~/Maildir/.notmuch/hooks/pre-new:

#!/bin/sh

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

echo moving tagged spam to the junk folder
notmuch search --output=files tag:spam \
        and not folder:${PREFIX}junk \
        and not folder:${PREFIX}greyspam \
        and not folder:Koumbit/INBOX \
        and not path:Koumbit/** \
    | while read file; do
          mv "$file" "$HOME/Maildir/${PREFIX}junk/cur"
      done

echo unconditionally deleting deleted mails
notmuch search --output=files tag:deleted | xargs -r rm
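
Before trusting a hook like this, it is worth previewing what it would act on, for example with something like:

# count what would be deleted and sample which files would be moved
notmuch count tag:deleted
notmuch search --output=files tag:spam | head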

Oh, and there's also customization for Notmuch:

;; -*- mode: emacs-lisp; auto-recompile: t; -*-
(custom-set-variables
 ;; from https://anarc.at/sigs.fortune
 '(fortune-file "/home/anarcat/.mutt/sigs.fortune")
 '(message-send-hook (quote (notmuch-message-mark-replied)))
 '(notmuch-address-command "notmuch-address")
 '(notmuch-always-prompt-for-sender t)
 '(notmuch-crypto-process-mime t)
 '(notmuch-fcc-dirs
   (quote
    ((".*@koumbit.org" . "Koumbit/INBOX.Sent")
     (".*" . "Anarcat/Sent"))))
 '(notmuch-hello-tag-list-make-query "tag:unread")
 '(notmuch-message-headers (quote ("Subject" "To" "Cc" "Bcc" "Date" "Reply-To")))
 '(notmuch-saved-searches
   (quote
    ((:name "inbox" :query "tag:inbox and not tag:koumbit and not tag:rt")
     (:name "unread inbox" :query "tag:inbox and tag:unread")
     (:name "unread" :query "tag:unred")
     (:name "freshports" :query "tag:freshports and tag:unread")
     (:name "rapports" :query "tag:rapports and tag:unread")
     (:name "sent" :query "tag:sent")
     (:name "drafts" :query "tag:draft"))))
 '(notmuch-search-line-faces
   (quote
    (("deleted" :foreground "red")
     ("unread" :weight bold)
     ("flagged" :foreground "blue"))))/
 '(notmuch-search-oldest-first nil)
 '(notmuch-show-all-multipart/alternative-parts nil)
 '(notmuch-show-all-tags-list t)
 '(notmuch-show-insert-text/plain-hook
   (quote
    (notmuch-wash-convert-inline-patch-to-part notmuch-wash-tidy-citations notmuch-wash-elide-blank-lines notmuch-wash-excerpt-citations)))
 )

I think that covers it.

Offlineimap

So of course the above works well on the server directly, but how do I run Notmuch on a remote machine that doesn't have direct access to the mail spool? This is where OfflineIMAP comes in. It allows me to incrementally synchronize a local Maildir folder hierarchy with a remote IMAP server. I am assuming you already have an IMAP server configured, since you already configured Sieve above.

Note that other synchronization tools exist. The other popular one is isync but I had trouble migrating to it (see courriels for details) so for now I am sticking with OfflineIMAP.

The configuration is fairly simple:

[general]
accounts = Anarcat
ui = Blinkenlights
maxsyncaccounts = 3

[Account Anarcat]
localrepository = LocalAnarcat
remoterepository = RemoteAnarcat
# refresh all mailboxes every 10 minutes
autorefresh = 10
# run notmuch after refresh
postsynchook = notmuch new
# sync only mailboxes that changed
quick = -1
## possible optimisation: ignore mails older than a year
#maxage = 365

# local mailbox location
[Repository LocalAnarcat]
type = Maildir
localfolders = ~/Maildir/Anarcat/

# remote IMAP server
[Repository RemoteAnarcat]
type = IMAP
remoteuser = anarcat
remotehost = anarc.at
ssl = yes
# without this, the cert is not verified (!)
sslcacertfile = /etc/ssl/certs/DST_Root_CA_X3.pem
# do not sync archives
folderfilter = lambda foldername: not re.search('(Sent\.20[01][0-9]\..*)', foldername) and not re.search('(Archive.*)', foldername)
# and only subscribed folders
subscribedonly = yes
# don't reconnect all the time
holdconnectionopen = yes
# get mails from INBOX immediately, doesn't trigger postsynchook
idlefolders = ['INBOX']

Critical parts are:

  • postsynchook: obviously, we want to run notmuch after fetching mail
  • idlefolders: receives emails immediately, without waiting for the longer autorefresh delay, which means that other mailboxes don't see new emails for up to 10 minutes in the worst case. Unfortunately, IDLE doesn't run the postsynchook, so I need to hit G in Emacs to see new mail
  • quick=-1, subscribedonly, holdconnectionopen: make most runs much, much faster by skipping unchanged or unsubscribed folders and keeping the connection to the server open

The other settings should be self-explanatory.
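
For testing configuration changes, a one-off synchronization of a single account is handy:

# run once instead of looping (-o), restricted to one account (-a)
offlineimap -o -a Anarcat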

RSS feeds

I gave up on RSS readers, or more precisely, I merged RSS feeds and email. The first time I heard of this, it sounded like a horrible idea, because it means yet more emails! But with proper filtering, it's actually a really nice way to process them, since it leverages the distributed nature of email.

For this I use a fairly standard feed2imap, although I do not deliver to an IMAP server, but straight to a local Maildir. The configuration looks like this:

---
include-images: true
target-prefix: &target "maildir:///home/anarcat/Maildir/.feeds."
feeds:
- name: Planet Debian
  url: http://planet.debian.org/rss20.xml
  target: [ *target, 'debian-planet' ]

I obviously have more feeds; the above is just an example. This will deliver the feeds as emails, one mailbox per feed: in the above example, into ~/Maildir/.feeds.debian-planet.
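
feed2imap then just needs to run periodically; a hypothetical crontab entry could fetch the feeds hourly and index them right away (assuming the default ~/.feed2imaprc config location):

# fetch feeds, then index the new mail
@hourly feed2imap && notmuch new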

Troubleshooting

You will fail at writing the Sieve filters correctly at first, and mail will (hopefully?) fall through to your regular mailbox. Syslog will tell you when things fail, as expected, and the details are in the .dovecot.sieve.log file in your home directory.

I also enabled debugging on the Sieve module:

--- a/dovecot/conf.d/90-sieve.conf
+++ b/dovecot/conf.d/90-sieve.conf
@@ -51,6 +51,7 @@ plugin {
   # deprecated imapflags extension in addition to all extensions were already
   # enabled by default.
   #sieve_extensions = +notify +imapflags
+  sieve_extensions = +vnd.dovecot.debug

   # Which Sieve language extensions are ONLY available in global scripts. This
   # can be used to restrict the use of certain Sieve extensions to administrator

This allowed me to use the debug_log function in the rulesets to output stuff directly to the logfile.

Further improvements

This is all done on the commandline, which is somewhat expected if you are already running Notmuch, but it would be much easier to edit those filters through a GUI. Roundcube has a nice Sieve plugin, and Thunderbird has one as well. Since Sieve is a standard, there's a bunch of clients available. All of those need you to set up some sort of ManageSieve service on the server, which I haven't bothered doing yet.

And of course, a key improvement would be to have Notmuch synchronize its state with the mailboxes directly, instead of relying on the notmuch-purge hack above. The Dovecot and Maildir formats support up to 26 flags, and there have been discussions about using those flags to synchronize Notmuch tags, so that multiple Notmuch clients could see the same tags on different machines transparently.
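
For the standard maildir flags at least, Notmuch can already do part of this; a minimal sketch of the relevant ~/.notmuch-config stanza (the same setting that is commented out in my config above):

[maildir]
synchronize_flags=true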

This, however, won't make Notmuch work on my phone or webmail or any other more generic client: for that, Sieve rules are still very useful.

I still don't have webmail set up at all: to read email, I need an actual client, which is currently my phone, which means I need Wifi access to read email. "Internet cafés" or "this guy's computer" won't work as well, although I can always use ssh to login straight to the server and read mail with Mutt.

I am also considering using X509 client certificates to authenticate to the mail server without a passphrase. This involves configuring Postfix, which seems simple enough. Dovecot's configuration seems a little more involved and less well documented. It seems that both OfflineIMAP and K-9 Mail support client-side certs. OfflineIMAP prompts me for the password, so it doesn't get leaked anywhere. I am a little concerned about building yet another CA, but I guess it would not be so hard...

The server side of things needs more documenting, particularly the spam filters. This is currently spread around this wiki, mostly in configuration.

Security considerations

The whole purpose of this was to make it easier to read my mail on other devices. This introduces a new vulnerability: someone may steal that device or compromise it to read my mail, impersonate me on different services and even get a shell on the remote server.

Thanks to the two-factor authentication I set up on the server, I feel a little more confident that just getting the passphrase to the mail account isn't sufficient anymore to leverage shell access. It also allows me to login with ssh on the server without trusting the machine too much, although that only goes so far... Of course, sudo is then out of the question, and I must assume that everything I see is also seen by the attacker, who can also inject keystrokes and do all sorts of nasty things.

Since I also connected my email account on my phone, someone could steal the phone and start impersonating me. The mitigation here is that there is a PIN for the screen lock, and the phone is encrypted. Encryption isn't so great when the passphrase is a PIN, but I'm working on having a better key that is required on reboot, and the phone shuts down after 5 failed attempts. This is documented in my phone setup.

Client-side X509 certificates would further mitigate those kinds of compromises, as the certificate won't give shell access.

Basically, if the phone is lost, all hell breaks loose: I need to change the email password (or revoke the certificate), as I assume the account is about to be compromised. I do not trust Android security to give me protection indefinitely. In fact, one could argue that the phone is already compromised and putting the password there already enabled a possible state-sponsored attacker to hijack my email address. This is why I have an OpenPGP key on my laptop to authenticate myself for critical operations like code signatures.

The risk of identity theft from the state is, after all, a tautology: the state is the primary owner of identities, some could say by definition. So if a state-sponsored attacker would like to masquerade as me, they could simply issue a passport under my name and join a OpenPGP key signing party, and we'd have other problems to deal with, namely, proper infiltration counter-measures and counter-snitching.

Ingo Juergensmann: Xen randomly crashing server - part 2

13 May, 2016 - 01:38

Some weeks ago I blogged about "Xen randomly crashing server". The problem back then was that I couldn't get any information about why the server reboots. Using a netconsole was not possible, because netconsole refused to work with the bridge that is used for Xen networking. Luckily my colocation partner rrbone.net connected the second network port of my server to the network, so that I could use eth1 instead of the bridged eth0 for netconsole.
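
For reference, a netconsole setup on a dedicated interface boils down to loading the module with source and target parameters; a sketch with made-up addresses:

# send kernel messages over UDP from eth1 to a remote syslog host
# (format: local-port@local-ip/interface,remote-port@remote-ip/remote-mac)
modprobe netconsole netconsole=6666@192.0.2.2/eth1,514@192.0.2.1/00:11:22:33:44:55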

Today the server crashed several times, and I was able to collect some more information than just the screenshots from the IPMI/KVM console shown in my last blog entry (the full netconsole output is attached as a file):

May 12 11:56:39 31.172.31.251 [829681.040596] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64 #1 Debian 3.16.7-ckt25-2
May 12 11:56:39 31.172.31.251 [829681.040647] Hardware name: Supermicro X9SRE/X9SRE-3F/X9SRi/X9SRi-3F/X9SRE/X9SRE-3F/X9SRi/X9SRi-3F, BIOS 3.0a 01/03/2014
May 12 11:56:39 31.172.31.251 [829681.040701] task: ffffffff8181a460 ti: ffffffff81800000 task.ti: ffffffff81800000
May 12 11:56:39 31.172.31.251 [829681.040749] RIP: e030:[<ffffffff812b7e56>]
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.040802] RSP: e02b:ffff880280e03a58  EFLAGS: 00010286
May 12 11:56:39 31.172.31.251 [829681.040834] RAX: ffff88026eec9070 RBX: ffff88023c8f6b00 RCX: 00000000000000ee
May 12 11:56:39 31.172.31.251 [829681.040880] RDX: 00000000000004a0 RSI: ffff88006cd1f000 RDI: ffff88026eec9422
May 12 11:56:39 31.172.31.251 [829681.040927] RBP: ffff880280e03b38 R08: 00000000000006c0 R09: ffff88026eec9062
May 12 11:56:39 31.172.31.251 [829681.040973] R10: 0100000000000000 R11: 00000000af9a2116 R12: ffff88023f440d00
May 12 11:56:39 31.172.31.251 [829681.041020] R13: ffff88006cd1ec66 R14: ffff88025dcf1cc0 R15: 00000000000004a8
May 12 11:56:39 31.172.31.251 [829681.041075] FS:  0000000000000000(0000) GS:ffff880280e00000(0000) knlGS:ffff880280e00000
May 12 11:56:39 31.172.31.251 [829681.041124] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
May 12 11:56:39 31.172.31.251 [829681.041153] CR2: ffff88006cd1f000 CR3: 0000000271ae8000 CR4: 0000000000042660
May 12 11:56:39 31.172.31.251 [829681.041202] Stack:
May 12 11:56:39 31.172.31.251 [829681.041225]  ffffffff814d38ff
May 12 11:56:39 31.172.31.251  ffff88025b5fa400
May 12 11:56:39 31.172.31.251  ffff880280e03aa8
May 12 11:56:39 31.172.31.251  9401294600a7012a
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041287]  0100000000000000
May 12 11:56:39 31.172.31.251  ffffffff814a000a
May 12 11:56:39 31.172.31.251  000000008181a460
May 12 11:56:39 31.172.31.251  00000000000080fe
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041346]  1ad902feff7ac40e
May 12 11:56:39 31.172.31.251  ffff88006c5fd980
May 12 11:56:39 31.172.31.251  ffff224afc3e1600
May 12 11:56:39 31.172.31.251  ffff88023f440d00
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041407] Call Trace:
May 12 11:56:39 31.172.31.251 [829681.041435]  <IRQ>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041441]
May 12 11:56:39 31.172.31.251  [<ffffffff814d38ff>] ? ndisc_send_redirect+0x3bf/0x410
May 12 11:56:39 31.172.31.251 [829681.041506]  [<ffffffff814a000a>] ? ipmr_device_event+0x7a/0xd0
May 12 11:56:39 31.172.31.251 [829681.041548]  [<ffffffff814bc74c>] ? ip6_forward+0x71c/0x850
May 12 11:56:39 31.172.31.251 [829681.041585]  [<ffffffff814c9e54>] ? ip6_route_input+0xa4/0xd0
May 12 11:56:39 31.172.31.251 [829681.041621]  [<ffffffff8141f1a3>] ? __netif_receive_skb_core+0x543/0x750
May 12 11:56:39 31.172.31.251 [829681.041729]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.041771]  [<ffffffffa0585eb2>] ? br_handle_frame_finish+0x1c2/0x3c0 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041821]  [<ffffffffa058c757>] ? br_nf_pre_routing_finish_ipv6+0xc7/0x160 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041872]  [<ffffffffa058d0e2>] ? br_nf_pre_routing+0x562/0x630 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041907]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041955]  [<ffffffff8144fb65>] ? nf_iterate+0x65/0xa0
May 12 11:56:39 31.172.31.251 [829681.041987]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042035]  [<ffffffff8144fc16>] ? nf_hook_slow+0x76/0x130
May 12 11:56:39 31.172.31.251 [829681.042067]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042116]  [<ffffffffa0586220>] ? br_handle_frame+0x170/0x240 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042148]  [<ffffffff8141ee24>] ? __netif_receive_skb_core+0x1c4/0x750
May 12 11:56:39 31.172.31.251 [829681.042185]  [<ffffffff81009f9c>] ? xen_clocksource_get_cycles+0x1c/0x20
May 12 11:56:39 31.172.31.251 [829681.042217]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.042251]  [<ffffffffa063f50f>] ? xenvif_tx_action+0x49f/0x920 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042299]  [<ffffffffa06422f8>] ? xenvif_poll+0x28/0x70 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042331]  [<ffffffff8141f7b0>] ? net_rx_action+0x140/0x240
May 12 11:56:39 31.172.31.251 [829681.042367]  [<ffffffff8106c6a1>] ? __do_softirq+0xf1/0x290
May 12 11:56:39 31.172.31.251 [829681.042397]  [<ffffffff8106ca75>] ? irq_exit+0x95/0xa0
May 12 11:56:39 31.172.31.251 [829681.042432]  [<ffffffff8135a285>] ? xen_evtchn_do_upcall+0x35/0x50
May 12 11:56:39 31.172.31.251 [829681.042469]  [<ffffffff8151669e>] ? xen_do_hypervisor_callback+0x1e/0x30
May 12 11:56:39 31.172.31.251 [829681.042499]  <EOI>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.042506]
May 12 11:56:39 31.172.31.251  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042561]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042592]  [<ffffffff81009e7c>] ? xen_safe_halt+0xc/0x20
May 12 11:56:39 31.172.31.251 [829681.042627]  [<ffffffff8101c8c9>] ? default_idle+0x19/0xb0
May 12 11:56:39 31.172.31.251 [829681.042666]  [<ffffffff810a83e0>] ? cpu_startup_entry+0x340/0x400
May 12 11:56:39 31.172.31.251 [829681.042705]  [<ffffffff81903076>] ? start_kernel+0x497/0x4a2
May 12 11:56:39 31.172.31.251 [829681.042735]  [<ffffffff81902a04>] ? set_init_arg+0x4e/0x4e
May 12 11:56:39 31.172.31.251 [829681.042767]  [<ffffffff81904f69>] ? xen_start_kernel+0x569/0x573
May 12 11:56:39 31.172.31.251 [829681.042797] Code:
May 12 11:56:39 31.172.31.251  <f3>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.043113] RIP
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.043145]  RSP <ffff880280e03a58>
May 12 11:56:39 31.172.31.251 [829681.043170] CR2: ffff88006cd1f000
May 12 11:56:39 31.172.31.251 [829681.043488] ---[ end trace 1838cb62fe32daad ]---
May 12 11:56:39 31.172.31.251 [829681.048905] Kernel panic - not syncing: Fatal exception in interrupt
May 12 11:56:39 31.172.31.251 [829681.048978] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

I'm not that good at reading this kind of output, but to me it seems that ndisc_send_redirect is at fault. When googling for "ndisc_send_redirect", you can find a patch on lkml.org and Debian bug #804079; both seem to be related to IPv6.

When looking at the Linux kernel source mentioned in the lkml patch, I see that this patch is already applied (line 1510):

        if (ha) 
                ndisc_fill_addr_option(buff, ND_OPT_TARGET_LL_ADDR, ha);

So, although the patch was intended to prevent "leading to data corruption or in the worst case a panic when the skb_put failed", it does not help in my case, nor in the case of #804079.

Any tips are appreciated!

PS: I'll contribute to that bug in the BTS, of course!

Attachment: syslog-xen-crash.txt (24.27 KB)

Matthew Garrett: Convenience, security and freedom - can we pick all three?

12 May, 2016 - 21:40
Moxie, the lead developer of the Signal secure communication application, recently blogged on the tradeoffs between providing a supportable federated service and providing a compelling application that gains significant adoption. There's a set of perfectly reasonable arguments around that which I don't want to rehash - regardless of feelings on the benefits of federation in general, there's certainly an increase in engineering cost in providing a stable inter-server protocol that still allows for addition of new features, and the person leading a project gets to make the decision about whether that's a valid tradeoff.

One voiced complaint about Signal on Android is the fact that it depends on the Google Play Services. These are a collection of proprietary functions for integrating with Google-provided services, and Signal depends on them to provide a good out of band notification protocol to allow Signal to be notified when new messages arrive, even if the phone is otherwise in a power saving state. At the time this decision was made, there were no terribly good alternatives for Android. Even now, nobody's really demonstrated a free implementation that supports several million clients and has no negative impact on battery life, so if your aim is to write a secure messaging client that will be adopted by as many people as possible, keeping this dependency is entirely rational.

On the other hand, there are users for whom the decision not to install a Google root of trust on their phone is also entirely rational. I have no especially good reason to believe that Google will ever want to do something inappropriate with my phone or data, but it's certainly possible that they'll be compelled to do so against their will. The set of people who will ever actually face this problem is probably small, but it's probably also the set of people who benefit most from Signal in the first place.

(Even ignoring the dependency on Play Services, people may not find the official client sufficient - it's very difficult to write a single piece of software that satisfies all users, whether that be down to accessibility requirements, OS support or whatever. Slack may be great, but there are still people who choose to use Hipchat)

This shouldn't be a problem. Signal is free software and anybody is free to modify it in any way they want to fit their needs, and as long as they don't break the protocol code in the process it'll carry on working with the existing Signal servers and allow communication with people who run the official client. Unfortunately, Moxie has indicated that he is not happy with forked versions of Signal using the official servers. Since Signal doesn't support federation, that means that users of forked versions will be unable to communicate with users of the official client.

This is awkward. Signal is deservedly popular. It provides strong security without being significantly more complicated than a traditional SMS client. In my social circle there's massively more users of Signal than any other security app. If I transition to a fork of Signal, I'm no longer able to securely communicate with them unless they also install the fork. If the aim is to make secure communication ubiquitous, that's kind of a problem.

Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?


Michal Čihař: Changed Debian repository signing key

12 May, 2016 - 14:10

After getting complaints from apt and users, I've finally decided to upgrade the signing key on my Debian repository to something more decent than DSA. If you are using that repository, you will now have to fetch the new key to make it work again.

The old DSA key was there really because of my laziness, as I didn't want users to have to reimport the key, but I think it's really good that apt started to complain about it (it doesn't complain about DSA itself, but rather about the SHA1 signatures, which are the most you can get out of a DSA key).

Anyway, the new key ID is DCE7B04E7C6E3CD9 and the fingerprint is 4732 8C5E CD1A 3840 0419 1F24 DCE7 B04E 7C6E 3CD9. It's signed by my GPG key, so you can verify it that way. Of course, the instructions on my Debian repository page have been updated as well.
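
A typical way to fetch and double-check the new key would be something like this (the keyserver choice is an assumption):

# fetch the key and verify its fingerprint against the one above
gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys DCE7B04E7C6E3CD9
gpg --fingerprint DCE7B04E7C6E3CD9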


Petter Reinholdtsen: Debian now with ZFS on Linux included

12 May, 2016 - 12:30

Today, after many years of hard work from many people, ZFS for Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux and on the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.

Elena 'valhalla' Grandi: GnuPG Crowdfunding and sticker

11 May, 2016 - 19:22
GnuPG Crowdfunding and sticker

I've just received my laptop sticker from the GnuPG crowdfunding campaign http://goteo.org/project/gnupg-new-website-and-infrastructure: it is of excellent quality, but comes with HOWTO-like detailed instructions to apply it the proper way.

This strikes me as oddly appropriate.

#gnupg

Elena 'valhalla' Grandi: New gpg subkey

11 May, 2016 - 19:21
New gpg subkey

The GPG subkey http://www.trueelena.org/about/gpg.html I keep for daily use was going to expire, and this time I decided to create a new one instead of changing the expiration date.

Doing so, I found out that GnuPG does not support importing just a private subkey for a key it already has (on IRC I've heard that there may be more information about this on the gpg-users mailing list), so I've written a few notes on what I had to do on my website http://www.trueelena.org/computers/howto/gpg_subkeys.html, so that I can remember them next year.

The short version is:

* Create your new subkey (in the full keyring, the one with the private master key)
* Export every subkey (including the expired ones, if you want to keep them available), but not the master key
* (Copy the exported keys from the offline computer to the online one)
* Delete your private key from your regular-use keyring
* Import back the private keys you exported before.
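
A rough sketch of those steps as gpg commands (KEYID is a placeholder):

# on the offline machine: export all secret subkeys, but not the master key
gpg --export-secret-subkeys KEYID > subkeys.gpg
# on the online machine: delete the old secret key, then import the subkeys
gpg --delete-secret-keys KEYID
gpg --import subkeys.gpg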

#gnupg

Julian Andres Klode: Backing up with borg and git-annex

11 May, 2016 - 16:47

I recently found out that I have access to a 1 TB cloud storage drive from 1&1, so I decided to start taking off-site backups of my $HOME (well, backups at all: previously I only mirrored the latest version from my SSD to an HDD).

I initially tried obnam. Obnam seems like a good tool, but is insanely slow. Unencrypted it can write about 3 MB/s, which is somewhat OK, but even then it can spend hours forgetting generations (1 generation takes probably 2 minutes, and there might be 22 of them). In encrypted mode, the speed reduces a lot, to about 500 KB/s if I recall correctly, which is just unusable.

I found borg backup, a fork of attic. Borg backup achieves speeds of up to 15 MB/s which is really nice. It’s also faster with scanning: I can now run my bihourly backups in about 1 min 30s (usually backs up about 30 to 80 MB – mostly thanks to Chrome I suppose!). And all those speeds are with encryption turned on.

Both borg and obnam use some form of chunks from which they compose files. Obnam stores each chunk in its own file, borg stores multiple chunks (even from different files) in a single pack file which is probably the main reason it is faster.

So how am I backing up? My laptop has an internal SSD and an HDD. I back up every 2 hours (at 09,11,13,15,17,19,21,23,01:00 hours) using a systemd timer event, from the SSD to the HDD. The backup includes all of $HOME except for Downloads, .cache, the trash, the Android SDK, and the eclipse and IntelliJ IDEA IDEs.
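
As an illustration only (the repository location and archive naming are my guesses; the excludes follow the description above), such a timer-driven invocation might look like:

# create an archive in the encrypted repo on the HDD, skipping the
# directories mentioned above (all paths are assumptions)
borg create --compression lz4 \
    /mnt/hdd/backup::"$(hostname)-$(date +%Y-%m-%dT%H:%M)" \
    "$HOME" \
    --exclude "$HOME/Downloads" \
    --exclude "$HOME/.cache"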

Now the magic comes in: the backup repository on the HDD is monitored by the git-annex assistant, which automatically encrypts and uploads any new files in there to my 1&1 WebDAV drive and registers them in a git repository hosted on Bitbucket. All files are encrypted and checksummed using SHA256, reducing the chance of the backup being corrupted.

I’m not sure how the WebDAV thing will work once I want to prune things, I suspect it will then delete some pack files and repack things into new files which means it will spend more bandwidth than obnam would. I’d also have to convince git-annex to actually drop anything from the WebDAV remote, but that is not really that much of a concern with 1TB storage space in the next 2 years at least…

I also have an external encrypted HDD which I can take backups on, it currently houses a fuller backup of $HOME that also includes Downloads, the Android SDK, and the IDEs for quicker recovery. Downloads changes a lot, and all of them can be fairly easily re-retrieved from the internet as needed, so there’s not much point in painfully uploading them to a WebDAV backup site.

 



Norbert Preining: Gaming: The Room series

11 May, 2016 - 12:55

After having finished Monument Valley and some spin-offs, Google Play suggested The Room series of games to me (The Room, The Room II, The Room III): classical puzzle games with a common theme, namely that one needs to escape from some confinement.

I have finished all three games; game play was very nice and smooth on my phone (Nexus 6P). The graphics and level of detail are often astonishing, and everything is well made.

But there is one drop of bitterness: you need a strong finger-tapping muscle! I really love solving the puzzles, but most of them were not actually difficult. The real difficulty is finding everything, by touching each and every knob and looking from all angles at all times. This latter part, the tedious hunt for things by tapping on often illogical, strange places until you realize "ahh, there is something that turns", is what I do not like.

I had the feeling that more than 60% of the game play is searching for things. Once you have found them, their use and the actual riddle are mostly straightforward, though.

The Room series somehow reminded me of the Myst series (Myst, Riven, Myst III, etc.), but as far as I recall, the Myst series had more involved, more complicated riddles, and less searching. Also, the recently reviewed Talos Principle and the Portal series have clearly set problems that challenge your brain, not your finger-tapping muscle.

But all in all a very enjoyable series of games.

Final remark: I recently learned that there are real-world games like this, called "Escape Rooms". It is somehow tempting to try one out …

Reproducible builds folks: Reproducible builds: week 54 in Stretch cycle

11 May, 2016 - 04:00

What happened in the Reproducible Builds effort between May 1st and May 7th 2016:

Media coverage

There was a surprising tweet last week: "Props to @FiloSottile for his nifty gvt golang tool. We're using it to get reproducible builds for a Zika & West Nile monitoring project." To our surprise, Kenn confirmed privately that he indeed meant "reproducible builds" as in "bit-by-bit identical builds". Wow. We're looking forward to learning more details about this; for now we just know that they are doing it basically for software quality reasons.

Two of the four GSoC and Outreachy participants for Reproducible builds posted their introductions to Planet Debian:

Toolchain fixes and other upstream developments

dpkg 1.18.5 was uploaded fixing two bugs relevant to us:

  • #719845 (make the file order within the {data,control}.tar.gz .deb members deterministic)
  • #819194 (add -fdebug-prefix-map to the compilers options)

This upload made it necessary to rebase our dpkg on the version in sid again, which Niko Tyni and Lunar promptly did. A few days later, 1.18.6 was released to fix a regression in the previous upload, and Niko promptly updated our patched version again. Following this, Niko Tyni found #823428: "dpkg: many packages affected by dpkg-source: error: source package uses only weak checksums".

Alexis Bienvenüe worked on tex related packages and SOURCE_DATE_EPOCH:

  • Alexis uploaded texlive-bin to our repo improving the existing patches.
  • pdftex upstream discussion by Alexis Bienvenüe began on the tex-k mailing list to make \today honour SOURCE_DATE_EPOCH. Upstream has already committed enhanced versions of the proposed patches.
  • A similar discussion on the luatex side took place on the luatex mailing list. Upstream is working on it and has already committed some changes.

Emmanuel Bourg uploaded jflex/1.4.3+dfsg-2, which removes timestamps from generated files.

Packages fixed

The following 285 packages have become reproducible due to changes in their build dependencies (mostly from GCC honouring SOURCE_DATE_EPOCH, see the previous week report): 0ad abiword abcm2ps acedb acpica-unix actiona alliance amarok amideco amsynth anjuta aolserver4-nsmysql aolserver4-nsopenssl aolserver4-nssqlite3 apbs aqsis aria2 ascd ascii2binary atheme-services audacity autodocksuite avis awardeco bacula ballerburg bb berusky berusky2 bindechexascii binkd boinc boost1.58 boost1.60 bwctl cairo-dock cd-hit cenon.app chipw ckermit clp clustalo cmatrix coinor-cbc commons-pool cppformat crashmail crrcsim csvimp cyphesis-cpp dact dar darcs darkradiant dcap dia distcc dolphin-emu drumkv1 dtach dune-localfunctions dvbsnoop dvbstreamer eclib ed2k-hash edfbrowser efax-gtk efax exonerate f-irc fakepop fbb filezilla fityk flasm flightgear fluxbox fmit fossil freedink-dfarc freehdl freemedforms-project freeplayer freeradius fxload gdb-arm-none-eabi geany-plugins geany geda-gaf gfm gif2png giflib gifticlib glaurung glusterfs gnokii gnubiff gnugk goaccess gocr goldencheetah gom gopchop gosmore gpsim gputils grcompiler grisbi gtkpod gvpe hardlink haskell-github hashrat hatari herculesstudio hpcc hypre i2util incron infiniband-diags infon ips iptotal ipv6calc iqtree jabber-muc jama jamnntpd janino jcharts joy2key jpilot jumpnbump jvim kanatest kbuild kchmviewer konclude krename kscope kvpnc latexdiff lcrack leocad libace-perl libcaca libcgicc libdap libdbi-drivers libewf libjlayer-java libkcompactdisc liblscp libmp3spi-java libpwiz librecad libspin-java libuninum libzypp lightdm-gtk-greeter lighttpd linpac lookup lz4 lzop maitreya meshlab mgetty mhwaveedit minbif minc-tools moc mrtrix mscompress msort mudlet multiwatch mysecureshell nifticlib nkf noblenote nqc numactl numad octave-optim omega-rpg open-cobol openmama openmprtl openrpt opensm openvpn openvswitch owx pads parsinsert pcb pd-hcs pd-hexloader pd-hid pd-libdir pear-channels pgn-extract phnxdeco php-amqp php-apcu-bc php-apcu php-solr pidgin-librvp plan plymouth pnscan pocketsphinx polygraph portaudio19 postbooks-updater postbooks powertop previsat progressivemauve puredata-import pycurl qjackctl qmidinet qsampler qsopt-ex qsynth qtractor quassel quelcom quickplot qxgedit ratpoison rlpr robojournal samplv1 sanlock saods9 schism scorched3d scummvm-tools sdlbasic sgrep simh sinfo sip-tester sludge sniffit sox spd speex stimfit swarm-cluster synfig synthv1 syslog-ng tart tessa theseus thunar-vcs-plugin ticcutils tickr tilp2 timbl timblserver tkgate transtermhp tstools tvoe ucarp ultracopier undbx uni2ascii uniutils universalindentgui util-vserver uudeview vfu virtualjaguar vmpk voms voxbo vpcs wipe x264 xcfa xfrisk xmorph xmount xyscan yacas yasm z88dk zeal zsync zynaddsubfx

Last week the 1000th bug usertagged "reproducible" was fixed! This means roughly 2 bugs per day since 2015-01-01. Kudos and huge thanks to everyone involved! Please also note: FTBFS packages have not been counted here and there are still 600 open bugs with reproducible patches provided. Please help bringing that number down to 0!

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Uploads which fix reproducibility issues, but currently FTBFS:

Patches submitted that have not made their way to the archive yet:

  • #823174 against ros-pluginlib by Daniel Shahaf: use printf instead of echo to fix implementation-specific behavior.
  • #823239 against gspiceui by Alexis Bienvenüe: sort list of object files for linking binary.
  • #823241 against unhide by Alexis Bienvenüe: sort list of source files passed to compiler.
  • #823393 against kdbg by Alexis Bienvenüe: fix changelog encoding and call grep in text mode.
  • #823452 against khronos-opengl-man4 by Daniel Shahaf: sort file lists deterministically.
Package reviews

54 reviews have been added, 6 have been updated and 44 have been removed in this week.

18 FTBFS bugs have been reported by Chris Lamb, James Cowgill and Niko Tyni.

diffoscope development

Thanks to Mattia, diffoscope 52~bpo8+1 is available in jessie-backports now.

tests.reproducible-builds.org
  • All packages from all tested suites have finally been built on i386.
  • Due to GCC supporting SOURCE_DATE_EPOCH sid/armhf has finally reached 20k reproducible packages and sid/amd64 has even reached 21k reproducible packages. (These numbers are about our test setup. The numbers for the Debian archive are still all 0. dpkg and dak need to be fixed to get the numbers above 0.)
  • IRC notifications for non-Debian related jenkins job results go to #reproducible-builds now, while Debian related notifications stay on #debian-reproducible. (h01ger)
  • profitbricks-build4-amd64 has been fully set up now and is running 398 days in the future. Next: update coreboot/OpenWrt/Fedora/Archlinux/FreeBSD/NetBSD scripts to use it.
Misc.

This week's edition was written by Reiner Herrmann, Holger Levsen and Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC. Mattia also wrote a small ikiwiki macro for this blog to ease linking reproducible issues, packages in the package tracker and bugs in the Debian BTS.

Mike Hommey: Using git to access mercurial repositories, without mercurial

10 May, 2016 - 22:45

If you’ve been following this blog, you know I’ve been working on a git remote helper that gives access to mercurial repositories, named git-cinnabar. So far, it has been using libraries from mercurial itself in order to talk to local or remote repositories.

That is, until today. The current master branch now has experimental support for direct access to remote mercurial repositories, without mercurial.

Lars Wirzenius: Qvarn Platform announcement

10 May, 2016 - 20:30

In March we started a new company, to develop and support the software whose development I led at my previous job. The software is Qvarn, and it's fully free software, licensed under AGPL3+. The company is QvarnLabs (no website yet). Our plan is to earn a living from this, and our hope is to provide software that is actually useful for helping various organisations handle data securely.

The first press release about Qvarn was sent out today. We're still setting up the company and getting operational, but a little publicity never hurts. (Even if it is more marketing-speak and self-promotion than I would normally put on my blog.)

So this is what I do for a living now.


The development of the open source Qvarn Platform was led by QvarnLabs CEO Kaius Häggblom (left) and CTO Lars Wirzenius.

Helsinki, Finland 10.05.2016 – With Privacy by Design, integrated Gluu access management and comprehensive support for regulatory data compliance, Qvarn is set to become the Europe-wide platform of choice for managing workforce identities and providing associated value-added services.

Construction industry federations in Sweden, Finland and the Baltic States have been using the Qvarn Platform (http://www.qvarn.org) since October 2015 to securely manage the professional digital identities of close to one million construction workers. Developed on behalf of these same federations, Qvarn is now free and open source software; making it a compelling solution for any organization that needs to manage a secure register of workers’ data.

"There is something universal and fundamental at the core of the Qvarn platform. And that’s trust," said Qvarn evangelist Kaius Häggblom. "We decided to make it free, open source and include Gluu access management because we wanted all those using Qvarn or contributing to its continued development to have the freedom to work with the platform in whatever way is best for them."

Qvarn has been designed to meet the requirements of the European Union’s new General Data Protection Regulation (GDPR), enabling organizations that use the platform to ensure their compliance with the new law. Qvarn has also incorporated the principles of Privacy by Design to minimize the disclosure of non-essential personal information and to give people more control over their data.

"Today, Qvarn is used by the construction industry as a way to manage the data of employees, many of whom frequently move across borders. In this way the platform helps to combat the grey economy in the building sector, thereby improving quality and safety, while simultaneously protecting the professional identity data of almost a million individuals," said Häggblom. "Qvarn is so flexible and secure that we envision it becoming the preferred platform for the provision of any value-added services with an identity management component, eventually even supporting monetary transactions."

Qvarn is a cloud-based solution supported on both Amazon Web Services (AWS) and OpenStack. In partnership with Gluu, the platform delivers an out-of-the-box solution that uses open and standard protocols to provide powerful yet flexible identity and access management, including mechanisms for appropriate authentication and authorization.

"Qvarn's identity management and governance capabilities perfectly compliment the Gluu Server's access management features," said Founder and CEO of Gluu, Michael Schwartz. "Free open source software (FOSS) is essential to the future of identity and access management. And the FOSS development methodology provides the transparency that is needed to foster the strong sense of community upon which a vibrant ecosystem thrives."

Qvarn’s development team continues to be led by recognized open source developer and platform architect Lars Wirzenius. He has been developing free and open source software for 30 years and is a renowned expert in the Linux environment, with a particular focus on the Debian distribution. Lars works at all levels of software development – from writing code to designing system architecture.

About the Qvarn Platform:

The Qvarn Platform is free and open source software for managing workforce identities. Qvarn is integrated with the Gluu Server’s access management features out of the box, using open and standard protocols to provide the platform with a single common digital identity and mechanisms for appropriate authentication and authorization. A cloud-based solution, Qvarn is supported on both Amazon Web Services (AWS) and OpenStack. Privacy by Design is central to the architecture of Qvarn, and the platform has been third-party audited to a security level of HIGH.

http://www.qvarn.org

For more information, please contact:
Andrew Flowers
andrew.flowers@ellisnichol.com
+358 40 161 5668

Dmitry Shachnev: ReText 6.0 and PyMarkups 2.0 released

10 May, 2016 - 18:45

Today I have released the new major version of the ReText editor.

This release would not have been possible without Maurice van der Pot, who authored the major features that justified the version bump:

  • Synchronized scrolling for Markdown. It means that the preview area scrolls automatically to make sure the paragraph you are editing is visible.

  • The multi-process mode. The text conversion now happens in a separate process, so it will never block the UI. As a result, the interface should now be more responsive when editing huge texts.

    This also required refactoring the PyMarkups library. ReText now depends on version 2.0 of it (released yesterday) or newer.

Other news worth mentioning are:

  • The tags box was replaced with a new “Formatting” box providing quick access to the most common Markdown features, such as headers, lists, links, images and so on. The initial version of this code was contributed by Donato Marrazzo.

  • Images can now be pasted into ReText: a save dialog is presented first, and then the appropriate Markdown or reStructuredText markup is inserted. This was contributed by Bart Clephas and Maurice van der Pot.

  • Basic CSS styling for tables has been added (to make sure their borders are visible). This applies only if you do not use your own CSS.

  • Added a button to quickly close the search bar, for those who prefer buttons over keyboard shortcuts.

  • Hitting the Return key twice now closes an itemized list and removes the inserted list marker from the previous line.

As usual, some bugs have been fixed (most of the fixes have also been backported to the 5.3.1 release), and some translations have been updated from Transifex.

Please report any bugs you find to our issue tracker.

Arturo Borrero González: Talk about contributing to FLOSS

10 May, 2016 - 01:00

On the 26th of April I gave a talk at the University of Seville (ETSII) about contributing to FLOSS, focusing on the main projects I contribute to: Debian and Netfilter.

The talk was hosted by SUGUS, the university's local group of FLOSS users.
The audience consisted of other students of the university, all of them young (like me), so the talk was very relaxed and free of formalities :-)

I talked about my experiences contributing to FLOSS, the types of projects out there, how to get started, and how I integrate all this with my full-time job.

Here is a video recording of the talk (in Spanish):



I gave a similar talk some months ago to the students of the IES Gonzalo Nazareno.

Mike Gabriel: Recent progress in NXv3 development

9 May, 2016 - 19:42

This is to give a comprehensive overview of the recent progress made in NXv3 (aka nx-libs) development.

The upstream sources of nx-libs can be found at / viewed on / cloned from Github:
https://github.com/ArcticaProject/nx-libs

A great portion of the current work is sponsored by the Qindel Group [1] in Spain (QINDEL FORMACIÓN Y SERVICIOS, S.L.). Thanks for making this possible.

Planned release date: 2nd July, 2016

We aim to release a completely tidied-up nx-libs code tree, versioned 3.6.0, on July 2nd, 2016. There is still a whole bunch of work to do, but I am positive that we can make this release date.

Goals of our Efforts

There are basically two major goals for spending a considerable amount of time, money and energy on NXv3 hacking:

  • make this beast long-term maintainable
  • make it work with latest X11 desktop environments and applications

The efforts undertaken always keep the various existing use cases in mind (esp. the framework of the upcoming Arctica Project, TheQVD and X2Go).

Overview of Recent Development Progress

General Code Cleanups

Making this beast maintainable means first of all: identifying code redundancies, unused code passages, etc., and removing them.

This is where we came from (NoMachine's NX 3.5.x, including nxcomp, nxcompext, nxcompshad, nx-X11 and nxagent): 1,757,743 lines of C/C++ code.

[mike@minobo nx-libs.35 (3.5.0.x)]$ cloc --match-f '.*\.(c|cpp|h)$' .
    5624 text files.
    5614 unique files.                                          
    2701 files ignored.

http://cloc.sourceforge.net v 1.60  T=18.59 s (302.0 files/s, 132847.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                             3134         231180         252893        1326393
C/C++ Header                  2274          78062         116132         349743
C++                            206          20037          13312          81607
-------------------------------------------------------------------------------
SUM:                          5614         329279         382337        1757743
-------------------------------------------------------------------------------

On the current 3.6.x branch of nx-libs (at commit 6c6b6b9), this is where we are now: 662,635 lines of C/C++ code; the code base has been reduced to roughly a third of its original size.

[mike@minobo nx-libs (3.6.x)]$ cloc --match-f '.*\.(c|cpp|h)' .
    2012 text files.
    2011 unique files.                                          
    1898 files ignored.

http://cloc.sourceforge.net v 1.60  T=5.63 s (341.5 files/s, 161351.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                             1015          74605          81625         463244
C/C++ Header                   785          26992          34354         138063
C++                            122          16984          10804          61328
-------------------------------------------------------------------------------
SUM:                          1922         118581         126783         662635
-------------------------------------------------------------------------------

The latest development branch currently has these statistics: 619,353 lines of C/C++ code, i.e. another roughly 43,000 lines could be dropped.

[mike@minobo nx-libs (pr/libnx-xext-drop-unused-extensions)]$ cloc --match-f '.*\.(c|cpp|h)' .
    1932 text files.
    1931 unique files.                                          
    1898 files ignored.

http://cloc.sourceforge.net v 1.60  T=5.66 s (325.4 files/s, 150598.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C                              983          69474          77186         426564
C/C++ Header                   738          25616          33048         131599
C++                            121          16984          10802          61190
-------------------------------------------------------------------------------
SUM:                          1842         112074         121036         619353
-------------------------------------------------------------------------------
Dropping various libNX_X* shared libraries (and using X.org shared client libraries instead)

First, various bundled libraries could be dropped from the nx-X11 code tree. Second, several bundled X.org libraries could be dropped, because we managed to build against the system-wide versions instead.

Then, and this undertaking was much trickier, we could drop nearly all Xlib extension libraries used by nxagent in its role as an X11 client.

We successfully dropped these Xlib extension libraries from nx-X11, because we managed to build nxagent against the matching X.org libraries: libNX_Xdmcp, libNX_Xfixes, libNX_XComposite, libNX_Xdamage, libNX_Xtst, libNX_Xinerama, libNX_Xfont, libNX_xkbui, and various others. All of these removals happened without any loss of functionality.
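As a quick sanity check (a sketch; the module list is illustrative and assumes the X.org development packages are installed), one can verify that the system-wide replacements are available at build time:

# the dropped libNX_X* libraries correspond to these standard pkg-config modules
pkg-config --modversion xdmcp xfixes xcomposite xdamage xtst xinerama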

However, some shared X client libraries are not easy to remove without losing functionality, or cannot be removed at all.

Dropping libNX_Xrender

We recently dropped libNX_Xrender [2] and now build nxagent against X.org's libXrender. However, this cost us a compression feature in NX: the libNX_Xrender code contains passages that zero-pad the unused memory portions of non-32-bit-depth glyphs (the NX_RENDER_CLEANUP feature). We hope to be able to reintroduce that feature later, but current efforts [3] still fail at run-time.

Dropping libNX_Xext is not possible...

...the libNX_Xext / Xserver Xext code has been cleaned up instead.

Quite an amount of research and testing has been spent on removing the libNX_Xext library from the build workflow of nxagent. However, it seems that building against X.org's libXext will require us to update the Xext part of the nxagent Xserver at the same time. While taking this deep look into the Xext code, we dropped various Xext extensions from the nx-X11 Xserver code. The extensions that were dropped [5] have all already been dropped from X.org's Xserver code as well.

Further investigation showed that the only client-side extension code from libNX_Xext actually in use is the XShape extension. Thus, all other client-side extensions have been dropped in a very recent pull request [4].

Dropping libNX_X11 is not easy; a research summary is given here

For the sake of dropping the Xlib library bundled with nx-libs, we have attempted to write a shared library called libNX_Xproxy. This library is supposed to contain a subset of the NXtrans-related Xlib code that NoMachine patched into X.org's libX11 (and libxtrans).

Results of this undertaking [6] so far:

  • We managed to build nxagent against Xlib from X.org
  • Local nxagent sessions (using the normal X11 transport) worked and seemed to perform better than sessions running under the original Xlib library (libNX_X11) bundled with nx-libs.
  • NXtrans support in libNX_Xproxy is half implemented but far from finished yet. The referenced branch [6] is a work-in-progress branch: don't expect it to work, and please expect force-pushes on that branch, too.

Over the weekend, I thought this all through once more and I am now pretty sure that we can actually make libNX_Xproxy work and drop all of libNX_X11 from nx-libs soon. Although we have to port (i.e. copy) various functions related to the NX transport from libNX_X11 into libNX_Xproxy, this change will allow us to drop all Xlib drawing routines and use those provided by X.org's Xlib shared library directly.

Composite extension backport in nxagent to version 0.4

Mihai Moldovan looked into what it takes to make the Composite extension functional in nxagent. Unfortunately, the exact answer cannot be given yet. As a start, Mihai backported the latest Composite extension (v0.4) code from X.org's Xserver into the nxagent Xserver [7]. The Composite extension version currently shipped in nxagent is v0.2.

Work on the NX Compression shared library (aka nxcomp v3)

Fernando Carvajal and Salvador Fandiño from the Qindel Group [1] recently filed three pull requests against the nxcomp part of nx-libs: two of them are code cleanups (done by Fernando), and the third is a feature enhancement regarding channels in nxcomp (provided by Salva).

Protocol clean-up: drop pre-3.5 support

With the release of nx-libs 3.6.x, we will drop support for nxcomp versions earlier than 3.5. Thus, if you are still on nxcomp 3.4, be prepared to upgrade to at least version 3.5.x.

The code removal was submitted as pull request #111 ("Remove compatibility code for nxcomp before 3.5.0") [8]. The PR has already been reviewed and merged.

Fernando filed another code cleanup PR (#119 [9]) against nx-libs that also already got merged into the 3.6.x branch.

UNIX Socket Support for Channels

The nxcomp library (and thus, nxproxy) provides a feature called "channels". Channels in nxcomp can be used for forwarding traffic between the NX client side and the NX server side (alongside the graphical X11 transport that nxcomp is designed for). Until version 3.5.x, nxcomp was only able to use local TCP sockets for providing / connecting to channel endpoints. We consider local TCP sockets insecure and aim at adding UNIX file socket support to nxcomp wherever a connection is established.

Salva provided a patch against nxcomp that adds UNIX socket support to channel endpoints. The initial use case for this patch is to connect to the client-side pulseaudio socket file and avoid enabling the TCP listening socket in pulseaudio. The traffic is then channeled to the server side, so that pulse clients can connect to a UNIX socket file rather than to a local TCP port.

The channel support patch has already been reviewed and merged into the 3.6.x branch of nx-libs.
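To illustrate the intended endpoint setup (a sketch; the socket path is an assumption, and the actual channel wiring happens inside nxcomp/nxproxy): the client-side pulseaudio listens on a UNIX socket instead of TCP, and pulse clients on the server side talk to the socket file that the channel exposes.

# client side: let pulseaudio listen on a UNIX socket (no TCP module needed)
pactl load-module module-native-protocol-unix socket=/tmp/pulse-nx.sock

# server side: point pulse clients at the socket file provided by the channel
PULSE_SERVER=unix:/tmp/pulse-nx.sock paplay /usr/share/sounds/alsa/Front_Center.wav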

Rebasing nxagent against latest X.org

Ulrich Sibiller spent a considerable amount of time and energy on providing a build chain that allows building nxagent against a modularized X.org 7.0 (rather than against the monolithic build tree of X.org 6.9, like we still do in nx-libs 3.6.x). We plan to adapt and minimize this build workflow for nx-libs 3.7.x (scheduled for summer 2017).

A short howto showing how to build nxagent with this new workflow will be posted on this blog within the next few days. So stay tuned.

Further work accomplished

Quite a lot of code cleanup PRs have been filed by myself against nx-libs. Most of them target the removal of unnecessary code from the nx-X11 Xserver code base and the nxagent DDX:

  • Amend all issues generating compiler warnings in programs/Xserver/hw/nxagent/. (PR #102 [10], not yet merged).
  • nx-X11's Xserver: Drop outdated Xserver extensions. (PR #106 [11], not yet merged).
  • nxagent: drop duplicate Xserver code and include original Xserver code in nxagent code files. (PR #120 [12], not yet merged).
  • libNX_Xext: Drop unused extensions. (PR #121 [4], not yet merged).

The third one in the list (PR #120) requires some detailed explanation:

We discovered that nxagent overrides some symbols from the original Xserver code base. These overrides are induced by copies of files from various Xserver sub-directories placed into the hw/nxagent/ DDX path; all of those files' names match the pattern NX*.c. The copying is done in a way that keeps the C compiler from throwing its 'symbol "" redefined: first defined in ""; redefined in ""' errors.

The approach taken, however, requires keeping tens of thousands of lines of redundant code in hw/nxagent/NX*.c that is also shipped in the Xserver sub-directories (mostly dix/ and render/).

With pull request #120, we have identified all code passages in hw/nxagent/NX*.c that must be considered as NX'ish. We differentiated the NX'ish functions from functions that never got changed by NoMachine when developing nxagent.

I then came up with four different approaches ([13,14,15,16]) of dropping redundant code from those hw/nxagent/NX*.c files. We (Mihai Moldovan and myself) discussed the various approaches and favoured the disable-Xserver-code-and-include-into-NX*.c variant [14] over the others for the following reasons:

  • It requires the least invasive change in Xserver code files.
  • It pulls in Xserver code (MIT/X11) into nxagent code (GPL-2) at build time (vs. [13]).
  • It does not require weakening symbols; static symbols can stay static [15].
  • We don't have to use the hacky interception approach as shown in [16].

In the long run, the Xserver portion of the patches provided via pull request #120 needs to be upstreamed into X.org's Xserver. The discussion around this will start when we fully dive into rebasing nxagent's Xserver code base against the latest X.org Xserver.

Tasks ahead before the 3.6.x Release

We face various tasks before 3.6.x can be released. Here is a probably incomplete list:

  • Drop libNX_X11 from nx-libs (2nd attempt)
  • Drop libNX_Xau from nx-libs
  • Fully fix the Composite extension in nxagent (hopefully)
  • Several QA pull requests to close several of the open issues
  • Upgrade XRandR extension in the nxagent Xserver to 1.4
  • Attempt to fix KDE5 startup inside nxagent and recent GNOME application failures (both due to the missing Xinput2 extension in nxagent)
  • More UNIX file socket support in nxcomp
  • Fix reparenting in nxagent
  • Make nxagent run smoothly in x11vnc when suspended
Tasks ahead after the 3.6.x Release (i.e., for 3.7.x)

Here is an even rougher and probably highly incomplete list for tasks after the 3.6.x release:

  • Rename nx-libs to a new name that does not remind us of the original authoring company (NoMachine) that much.
  • Generalize the channel support in nxcomp (make it possible to fire up an arbitrary number of channels with TCP and/or UNIX file socket endpoints).
  • Proceed with the X.org Rebasing Effort.
Credits

Some people have to be named here who give their heart and love to this project. Thank you all for supporting the development efforts around nx-libs and the Arctica Project:

Thanks to Nico Arenas Alonso from and on behalf of the Qindel Group for coordinating the current funding project around nx-libs.

Thanks to Ulrich Sibiller for giving a great amount of spare time to working on the nxagent-rebase-against-X.org effort.

Thanks to Mihai Moldovan for doing endless code reviews and being available for contracted work via BAUR-ITCS UG [17] on NXv3, as well.

Thanks to Mario Becroft for providing a patch that allows us to hook into nxagent X11 sessions with VNC and have the session fully available over VNC while the NX transport is in suspended state. Mario is also pouring some fancy UDP ideas into the re-invention of remote desktop computing under way in the Arctica Project. Mario has been an NX supporter for years, and I am glad to still have him around after so many years (although he was close to abandoning NX usage at least once).

Thanks to Fernando Carvajal from Qindel (until April 2016) for cleaning up nxcomp code.

Thanks to Orion Poplawski from the Fedora Project for working on the first bundled libraries removal patches and being a resource on RPM packaging.

Thanks to my friend Lee for working behind the scenes on the Arctica Core code and constantly pouring many of his ideas into my head. Thanks for regularly reminding me to benchmark things.


Folks, thanks to all of you for all your various efforts on this huge beast of software. You are a great resource of expertise and it's a pleasure and honour working with you all.


New Faces

Last but not least, I'd like to let everyone know that the Qindel Group sponsors another developer joining in on NXv3 development: Vadim Troshchinskiy (aka vatral on Github). Vadim has worked on NXv3 before and we are looking forward to having him and his skills in the team soon (probably end of May 2016).

Welcome aboard the team, Vadim.


[1] http://www.qindel.com
[2] https://github.com/ArcticaProject/nx-libs/pull/93
[3] https://github.com/sunweaver/nx-libs/commit/be41bde7efc46582b442706dfb85...
[4] https://github.com/ArcticaProject/nx-libs/pull/121
[5] https://github.com/ArcticaProject/nx-libs/pull/106
[6] https://github.com/sunweaver/nx-libs/tree/wip/libnx-x11-full-removal
[7] https://github.com/sunweaver/nx-libs/tree/pr/composite-0_4
[8] https://github.com/ArcticaProject/nx-libs/pull/111
[9] https://github.com/ArcticaProject/nx-libs/pull/119
[10] https://github.com/ArcticaProject/nx-libs/pull/102
[11] https://github.com/ArcticaProject/nx-libs/pull/106
[12] https://github.com/ArcticaProject/nx-libs/pull/120
[13] https://github.com/sunweaver/nx-libs/commit/9692e6a7045b3ab5cb0daaed187e... (include NX'ish code into Xserver)
[14] https://github.com/sunweaver/nx-libs/commit/3d359bfc2b6d021c1ae9c6e19e96... (include Xserver code into NX*.c)
[15] https://github.com/sunweaver/nx-libs/commit/af72ee5624a15d21c610528e37b6... (use weak symbols and non-static symbols)
[16] https://github.com/sunweaver/nx-libs/commit/7205bb8848c49ee3e78a82fde906... (override symbols with interceptions)
[17] http://www.baur-itcs.de/20-x2go/20-x2gosupport/

Riku Voipio: Booting ubuntu 16.04 cloud images on Arm64

9 May, 2016 - 19:32
For testing kvm/qemu, prebaked cloud images are nice. However, there are a few steps to get started. First we need a recent Qemu (2.5 is good enough). An EFI firmware is needed, as well as cloud-utils for customizing our VM.

sudo apt install -y qemu qemu-utils cloud-utils
wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img
Cloud images are plain: there is no user setup and no default user/password combo, so to log in to the image we need to customize it on first boot. The de facto tool for this is cloud-init. The simplest method for using cloud-init is passing a block device with a settings file; of course, for a real cloud deployment, you would use one of the fancy network-based initialization protocols cloud-init supports. Enter the following into a file, say cloud.txt:

#cloud-config

users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
This minimal config just sets up a user with an ssh key. A more complex setup can install packages, write files and run arbitrary commands on first boot. In professional setups, you would most likely end up using cloud-init only to start Ansible or another configuration management tool.
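For illustration, a slightly richer cloud.txt could also install packages and run a command on first boot (a sketch; the package name and command are arbitrary examples):

cat >> cloud.txt <<'EOF'
packages:
  - htop
runcmd:
  - echo 'first boot finished' >> /var/log/first-boot.log
EOF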

cloud-localds cloud.img cloud.txt
qemu-system-aarch64 -smp 2 -m 1024 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -redir tcp:2222::22 \
-enable-kvm -cpu host
If you are on an x86 host and want to use qemu to run an aarch64 image, replace the last line with "-cpu cortex-a57". Now, since the example uses user networking with a tcp port redirect, you can ssh into the VM:

ssh -p 2222 you@localhost
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-22-generic aarch64)
....

Joachim Breitner: Doctoral Thesis Published

9 May, 2016 - 15:48

I have officially published my doctoral thesis “Lazy Evaluation: From natural semantics to a machine-checked compiler transformation” (DOI: 10.5445/IR/1000054251). The abstract of the 226-page document that earned me a “summa cum laude” reads

In order to solve a long-standing problem with list fusion, a new compiler transformation, 'Call Arity' is developed and implemented in the Haskell compiler GHC. It is formally proven to not degrade program performance; the proof is machine-checked using the interactive theorem prover Isabelle. To that end, a formalization of Launchbury's Natural Semantics for Lazy Evaluation is modelled in Isabelle, including a correctness and adequacy proof.

and I assembled all relevant artefacts (the thesis itself, its LaTeX-sources, the Isabelle theories, various patches against GHC, raw benchmark results, errata etc.) at http://www.joachim-breitner.de/thesis/.

Other, less retrospective news: My paper on the Incredible Proof Machine got accepted at ITP in Nancy, and I was invited to give a keynote demo about the proof machine at LFMTP in Porto. Exciting!

Russell Coker: BIND Configuration Files

9 May, 2016 - 15:32

I’ve recently been setting up more monitoring etc. to increase the reliability of the servers I run. One ongoing issue with computer reliability is any case where a person enters the same data in multiple locations: people often make mistakes and enter slightly different data, which can give bad results.

For DNS you need at least 2 authoritative servers for each zone. I’ve written the below Makefile to extract the zone names from the primary server and generate a config file suitable for use on a secondary server. The next step is to automate this further by having the Makefile copy the config file to the secondary servers and run “rndc reload”. Note that in a typical Debian configuration any user in group “bind” can write to BIND config files and reload the server configuration, so this can be done without granting the script on the primary server root access on the secondary servers.

My blog replaces the TAB character with 8 spaces; you need to fix this up if you want to run the Makefile on your own system, and also replace 10.10.10.10 with the IP address of your primary server. A sketch of both follow-up steps is shown after the Makefile.

all: other/secondary.conf

other/secondary.conf: named.conf.local Makefile
        for n in $$(grep ^zone named.conf.local | cut -f2 -d\"|sort) ; do echo "zone \"$$n\" {\n  type slave;\n  file \"$$n\";\n  masters { 10.10.10.10; };\n};\n" ; done > other/secondary.conf
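A minimal sketch of those follow-up steps (the secondary host name and account are assumptions; the account must be in group “bind” on the secondary):

# restore the leading TAB that the blog rendered as 8 spaces
sed -i 's/^        /\t/' Makefile

# copy the generated config to a secondary server and reload BIND there
scp other/secondary.conf admin@ns2.example.com:/etc/bind/secondary.conf
ssh admin@ns2.example.com rndc reload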

