Planet Debian

Planet Debian - https://planet.debian.org/

Brett Parker: The Psion Gemini

7 June, 2018 - 20:04

So, I backed the Gemini and received my shiny new device just a few months after they said it'd ship - not bad for an indiegogo project! Out of the box, I flashed it using the (at the time) non-approved linux flashing tool, and failed to back up the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was fairly much unusable (which was marginally upsetting, but hey). After a few more hours / days of playing around I got the IMEI number back into the Gemini and put the stock android image back on. I didn't at this point have working bluetooth or wifi, which was a bit of a pain too - turns out the mac addresses for those are also stored in the nvram (doh!). That's now mostly working through a bit of collaboration with another Gemini owner: my Gemini currently uses the mac addresses from his device... which I'll need to fix in the next month or so, else we'll probably have a mac address collision.

Overall, it's not a bad machine. The keyboard isn't quite as good as I was hoping for, the phone functionality is not bad once you're on a call (but not great until you're on a call), and I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full-time phone. It is, however, really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you: the keyboard is better than using the on-screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes; I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

Mario Lang: Debian on a synthesizer

7 June, 2018 - 17:00

Bela is a low-latency platform for audio applications, built using Debian and Xenomai and running on a BeagleBone Black. I recently stumbled upon this platform while skimming through a modular synthesizer related forum. Bela has teamed up with the guys at Rebel Technologies to build a Bela based system in eurorack module format, called Salt. Luckily enough, I managed to secure a unit for my modular synthesizer.

Inputs and Outputs

Salt features 2 audio inputs and 2 audio outputs (44.1kHz), 8 analog inputs and 8 analog outputs (22kHz), and a number of digital I/Os. It also features a USB host port, which is what I need to connect a Braille display.

Accessible synthesizers

do not really exist. Complex devices like sequencers, or basically anything with an elaborate menu structure, are usually not usable by the blind. However, Bela, or more specifically Salt, is actually a game changer. I was able to install brltty and libbrlapi-dev (and a number of C++ libraries I like to use) with just a simple apt invocation.
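
For illustration, a minimal sketch of that invocation on the module (only the two packages named above are shown; any additional C++ libraries install the same way):

apt update
apt install brltty libbrlapi-dev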

Programmable module

Salt is marketed as a programmable module. To make life easy for creative people, the Bela platform provides integration for well-known audio processing systems like PureData, SuperCollider and, recently, Csound. This is great to get started. However, it also allows you to write your own C++ applications, which is what I am doing right now, since I want to implement full Braille integration. So the display of my synthesizer is going to be tactile!

A stable product

Bought in May 2018, Salt shipped with Debian Stretch preinstalled. This means I get to use GCC 6.4 (C++14). Nice to see stable ship in commercial products.

Pepper

pepper is an obvious play on words. The goal for this project is to provide a Bela application for braille display users.

As a proof of concept, I have already managed to successfully run a number of LV2 plugins via pepper on my Salt module. In the coming days, I hope to find enough spare time to make further progress with this programming project.

Norbert Preining: Git and Subversion collaboration

7 June, 2018 - 09:28

Git is great, we all know that, but there are use cases where its completely distributed development model does not shine (see here and here). And while my old git-svn mirror of the TeX Live subversion repository was working well, git pull and git svn rebase didn’t work well together, re-pulling the same changes again and again. Finally, I took the time to experiment and fix this!

Most of the material in this post has already been written up elsewhere; the best sources I found are here and here. Practically everything is written down there, but when one gets down to business some things work out a bit differently. So here we go.

Aim

The aim of the setup is to allow several developers to work on a git-svn mirror of a central subversion repository. “Work” here means:

  • pull from the git mirror to get the latest changes
  • normal git workflows: branch, develop new features, push new branches to the git mirror
  • commit to the subversion repository using git svn dcommit

and all that with as much redundancy removed as possible.

One solution would be for each developer to create their own git-svn mirror. While this is fine in principle, it is error-prone, costs lots of time, and everyone has to do git svn rebase etc. We want to be able to use normal git workflows as far as possible.

Layout

The basic layout of our setup involves the following entities:

  • SvnRepo: the central subversion repository
  • FetchingRepo: the git-svn mirror which does regular fetches and pushes to the BareRepo
  • BareRepo: the central repository which is used by all developers to pull and collaborate
  • DevRepo: normal git clones of the BareRepo on the developers’ computers

The flow of data between these entities is as follows:

  • git svn fetch: the FetchingRepo is updated regularly (using cron) to fetch new revisions and new branches/tags from the SvnRepo
  • git push (1): the FetchingRepo pushes changes regularly (using cron) to the BareRepo
  • git pull: developers pull from the BareRepo, can check out remote branches and do normal git workflows
  • git push (2): developers push changes and new branches to the BareRepo
  • git svn dcommit: developers rebase-merge their changes into the main branch and commit from there to the SvnRepo

Besides the requirement to use git svn dcommit for submitting the changes to the SvnRepo, and the requirement by git svn to have linear histories, everything else can be done with normal workflows.

Procedure

Let us for the following assume that SVNREPO points to the URI of the Subversion repository, and BAREREPO points to the URI of the BareRepo. Furthermore, we refer to the path on the system (server, local) with variables like $BareRepo etc.

Step 1 – preparation of authors-file

To get consistent entries for committers, we need to set up an authors file, mapping Subversion users to names/emails:

svnuser1 = AAA BBB <aaa@example.org>
svnuser2 = CCC DDD <ccc@example.org>
...

Let us assume that the AUTHORSFILE environment variable points to this file.
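
If you do not want to write this file by hand, one common way to bootstrap it is to extract the user names from the Subversion history. This is only a sketch, and the example.org addresses are placeholders you will want to replace with real ones:

svn log -q $SVNREPO | awk -F '|' '/^r[0-9]+/ { u=$2; gsub(/^ +| +$/, "", u); print u" = "u" <"u"@example.org>" }' | sort -u > "$AUTHORSFILE"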

Step 2 – creation of fetching repository

This step creates a git-svn mirror; please read the documentation for further details. If the Subversion repository follows the standard layout (trunk, branches, tags), then the following line will work:

git svn clone --prefix="" --authors-file=$AUTHORSFILE -s $SVNREPO

The important part here is the --prefix one. The documentation of git svn says here:

Setting a prefix (with a trailing slash) is strongly encouraged in any case, as your SVN-tracking refs will then be located at “refs/remotes/$prefix/”, which is compatible with Git’s own remote-tracking ref layout (refs/remotes/$remote/). Setting a prefix is also useful if you wish to track multiple projects that share a common repository. By default, the prefix is set to origin/.

Note: Before Git v2.0, the default prefix was “” (no prefix). This meant that SVN-tracking refs were put at “refs/remotes/*”, which is incompatible with how Git’s own remote-tracking refs are organized. If you still want the old default, you can get it by passing --prefix "" on the command line.

While one might be tempted to use a prefix of “svn” or “origin”, both of which I have done, this will complicate (make impossible?) later steps, in particular the synchronization of git pull with git svn fetch.

The original blogs I mentioned in the beginning were written before the switch to the “origin” default was made, so this was the part that puzzled me: I didn’t understand why the old descriptions didn’t work anymore.

Step 3 – cleanup of the fetching repository

By default, git svn creates and checks out a master branch. In our case, the Subversion repository’s “master” is the “trunk” branch, and we want to keep it like this. Thus, let us check out the trunk branch and remove master. After entering the FetchingRepo, do:

cd $FetchingRepo
git checkout trunk
git checkout -b trunk
git branch -d master

The two checkouts are necessary because the first one will leave you with a detached head. In fact, not checking anything out would be fine, too, but git svn does not work on bare repositories, so we need to check out some branch.

Step 4 – init the bare BareRepo

This is done in the usual way, I guess you know that:

git init --bare $BareRepo

Step 5 – set up the FetchingRepo to push all branches and push them

The cron job we will introduce later will fetch all new revisions, including new branches. We want to push all branches to the BareRepo. This is done by adjusting the fetch and push configuration after changing into the FetchingRepo:

cd $FetchingRepo
git remote add origin $BAREREPO
git config remote.origin.fetch '+refs/remotes/*:refs/remotes/origin/*'
git config remote.origin.push 'refs/remotes/*:refs/heads/*'
git push origin

What we have done is configure fetch to update the remote-tracking branches, and push to send those remote-tracking branches to the BareRepo as normal heads. This ensures that new Subversion branches (or tags, which are nothing else than branches) are also pushed to the BareRepo.

Step 6 – adjust the default checkout branch in the BareRepo

By default git clones/checks out the master branch, but we don’t have a master branch; “trunk” plays its role. Thus, let us adjust the default in the BareRepo:

cd $BareRepo
git symbolic-ref HEAD refs/heads/trunk

Step 7 – developers branch

Now we are ready to use the bare repo and clone it onto one of the developers’ machines:

git clone $BAREREPO

But before we can actually use this clone, we need to make sure that git commits sent to the Subversion repository have the same user name and email for the committer everywhere. The reason for this is that the commit hash is computed from various pieces of information including the name/email (see details here), so the git svn dcommit in the DeveloperRepo and the git svn fetch on the FetchingRepo must create the very same hash! Each developer therefore needs to set up an authors file with at least his own entry:

cd $DeveloperRepo
echo 'mysvnuser = My Name <my.name@example.org>' > .git/usermap
git config svn.authorsfile '.git/usermap'

Important: the line for mysvnuser must exactly match the one in the original authorsfile from Step 1!

The final step is to allow the developer to commit to the SvnRepo by adding the necessary information to the git configuration:

git svn init -s $SVNREPO

Warning: Here we rely on two things: first, that git clone initializes the remote name with the default origin, and second, that git svn init uses the default prefix “origin/”, as discussed above.

If this is too shaky for you, the other option is to define the remote name during clone, and use that for the prefix:

git clone -o mirror $BAREREPO
git svn init --prefix=mirror/ -s $SVNREPO

This way the default remote will be “mirror” and all is fine.

Note: Upon your first git svn usage in the DeveloperRepo, as well as always after a pull, you will see messages like:

Rebuilding .git/svn/refs/remotes/origin/trunk/.rev_map.c570f23f-e606-0410-a88d-b1316a301751 ...
rNNNN = 1bdc669fab3d21ed7554064dc461d520222424e2
rNNNM = 2d1385fdd8b8f1eab2a95d325b0d596bd1ddb64f
...

This is a good sign: it means that git svn does not re-fetch the whole set of revisions, but reuses the objects pulled from the BareRepo and only rebuilds the mapping, which should be fast.

Updating the FetchingRepo

Updating the FetchingRepo should be done automatically using cron; the necessary steps are:

cd $FetchingRepo
git svn fetch --all
git push

This fetches all new revisions and pushes the configured default refspec, that is, all remote-tracking heads, to the BareRepo.
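
As a sketch only (the repository path and the schedule are assumptions, adjust them to your setup), a crontab entry for the user owning the FetchingRepo could look like this, running every 15 minutes:

*/15 * * * * cd /srv/fetching-repo && git svn fetch --all && git push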

Note: If a developer commits a change to the SvnRepo using git svn dcommit and then, before the FetchingRepo has updated the BareRepo (i.e., before the next cron run), also does a git pull, he will see something like:

$ git pull
From preining.info:texlive2
 + 10cc435f163...953f9564671 trunk      -> origin/trunk  (forced update)
Already up to date.

This is due to the fact that the remote head is still behind the local head, which can easily be seen by looking at the output of git log: Before the FetchingRepo updated the BareRepo, one would see something like:

$ git log
commit 3809fcc9aa6e0a70857cbe4985576c55317539dc (HEAD -> trunk)
Author: ....

commit eb19b9e6253dbc8bdc4e1774639e18753c4cd08f (origin/trunk, origin/HEAD)
...

and afterwards all three refs would point to the same top commit. This is nothing to worry about and normal behavior. In fact, the default setup for fetching remote-tracking branches is a forced update.

Protecting the trunk branch

I sometimes found myself wrongly pushing to trunk instead of using git svn dcommit. This can be avoided by placing restrictions on pushing. With gitolite, simply add a rule

- refs/heads/trunk = USERID

to the repo stanza of your mirror. When using Git(Lab|Hub) there are options to protect branches.

A more advanced restriction policy would be to require that branches created by users are within a certain namespace. For example, the gitolite rules

repo yoursvnmirror
    RW+      = fetching-user
    RW+ dev/ = USERID
    R        = USERID

would allow only the FetchingRepo (identified by fetching-user) to push everywhere, while I (USERID) could push/rewind/delete only branches starting with “dev/”, but read everything.

Workflow for developers

The recommended workflow compatible with this setup is

  • use git pull to update the local developers repository
  • use only branches that are not created/updated via git-svn
  • at commit time, (1) rebase your branch onto trunk, (2) merge (fast-forward) your branch into trunk, (3) commit your changes with git svn dcommit (see the sketch below the list)
  • rinse and repeat
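
Putting the commit-time steps together, a minimal sketch might look like the following (the branch name dev/myfeature is just an example):

cd $DeveloperRepo
git checkout dev/myfeature
git rebase trunk                  # linearize on top of trunk, as git svn requires
git checkout trunk
git merge --ff-only dev/myfeature # fast-forward trunk to the rebased branch
git svn dcommit                   # send the commits to the SvnRepo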

The more detailed discussion and safety measures laid out in the git-svn documentation apply as well and are worth reading!

Athos Ribeiro: Running OBS Workers and OBS staging instance

7 June, 2018 - 03:10

This is my third post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

  • Post 1: My Google Summer of Code 2018 project
  • Post 2: Setting up a local OBS development environment

About the stuck OBS workers

As I mentioned in my last post, OBS workers were hanging on my local installation. I finally got to the point where the only missing piece of my local installation (to have a raw OBS install which can build Debian packages) was to figure out this issue with the OBS workers.

Sylvain Beucler: Best GitHub alternative: us

7 June, 2018 - 00:20

Why try to choose the host that sucks less, when hosting a single-file CGI gets you decentralized git-like + tracker + wiki?

https://www.fossil-scm.org/

We gotta take the power back.

Joey Hess: the single most important criteria when replacing Github

6 June, 2018 - 23:40

I could write a lot of things about the Github acquisition by Microsoft. About Github's embrace and extend of git, and how it passed unnoticed by people who now fear the same thing, now that Microsoft is in the picture. About the stultifying effects of Github's centralization, and its retardant effect on general innovation in spaces around git and software development infrastructure.

Instead I'd rather highlight one simple criterion you can consider when you are evaluating any git hosting service, whether it's Gitlab or something self-hosted, or federated, or P2P[1], or whatever:

Consider all the data that's used to provide the value-added features on top of git. Issue tracking, wikis, notes in commits, lists of forks, pull requests, access controls, hooks, other configuration, etc.
Is that data stored in a git repository?

Github avoids doing that and there's a good reason why: by keeping this data in their own database, they lock you into the service. Consider if Github issues had been stored in a git repository next to the code. Anyone could quickly and easily clone the issue data, consume it, write alternative issue tracking interfaces, which then start accepting git pushes of issue updates and syncing all around. That would quickly have become the de facto distributed issue tracking data format.

Instead, Github stuck it in a database, with a rate-limited API, and while this probably had as much to do with expediency, and a certain centralized mindset, as intentional lock-in at first, it's now become such good lock-in that Microsoft felt Github was worth $7 billion.

So, if whatever thing you're looking at instead of Github doesn't do this, it's at worst hoping to emulate that, or at best it's neglecting an opportunity to get us out of the trap we now find ourselves in.

[1] Although in the case of a P2P system which uses a distributed data structure, that can have many of the same benefits as using git. So, git-ssb, which stores issues etc as ssb messages, is just as good, for example.

Russell Coker: BTRFS and SE Linux

6 June, 2018 - 18:07

I’ve had problems with systems running SE Linux on BTRFS losing the XATTRs used for storing the SE Linux file labels after a power outage.

Here is the link to the patch that fixes this [1]. Thanks to Hans van Kranenburg and Holger Hoffstätte for the information about this patch which was already included in kernel 4.16.11. That was uploaded to Debian on the 27th of May and got into testing about the time that my message about this issue got to the SE Linux list (which was a couple of days before I sent it to the BTRFS developers).

The kernel from Debian/Stable still has the issue. So using a testing kernel might be a good option to deal with this problem at the moment.

Below is the information on reproducing this problem. It may be useful for people who want to reproduce similar problems. Also, all sysadmins should know about “reboot -nffd”: if something really goes wrong with your kernel you may need to do that immediately to prevent corrupted data being written to your disks.

The command “reboot -nffd” (kernel reboot without flushing kernel buffers or writing status) when run on a BTRFS system with SE Linux will often result in /var/log/audit/audit.log being unlabeled. It also results in some systemd-journald files like /var/log/journal/c195779d29154ed8bcb4e8444c4a1728/system.journal being unlabeled, but that is rarer. I think that the same problem afflicts both systemd-journald and auditd but it’s a race condition that on my systems (both production and test) is more likely to affect auditd.

root@stretch:/# xattr -l /var/log/audit/audit.log 
security.selinux: 
0000   73 79 73 74 65 6D 5F 75 3A 6F 62 6A 65 63 74 5F    system_u:object_ 
0010   72 3A 61 75 64 69 74 64 5F 6C 6F 67 5F 74 3A 73    r:auditd_log_t:s 
0020   30 00                                              0.

SE Linux uses the xattr “security.selinux”; you can inspect it with xattr(1), but generally using “ls -Z” is easiest.
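
For reference (these commands are not from the original post, just standard SE Linux tooling), viewing and then restoring the label of the affected file on a system with the policy installed might look like:

ls -Z /var/log/audit/audit.log
restorecon -v /var/log/audit/audit.log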

If this issue just affected “reboot -nffd” then a solution might be to just not run that command. However this affects systems after a power outage.

I have reproduced this bug with kernel 4.9.0-6-amd64 (the latest security update for Debian/Stretch which is the latest supported release of Debian). I have also reproduced it in an identical manner with kernel 4.16.0-1-amd64 (the latest from Debian/Unstable). For testing I reproduced this with a 4G filesystem in a VM, but in production it has happened on BTRFS RAID-1 arrays, both SSD and HDD.

#!/bin/bash 
set -e 
COUNT=$(ps aux|grep [s]bin/auditd|wc -l) 
date 
if [ "$COUNT" = "1" ]; then 
 echo "all good" 
else 
 echo "failed" 
 exit 1 
fi

Firstly, the above is the script /usr/local/sbin/testit. I test for auditd running because it aborts if the context on its log file is wrong: when SE Linux is in enforcing mode an incorrect/missing label on the audit.log file causes auditd to abort.

root@stretch:~# ls -liZ /var/log/audit/audit.log 
37952 -rw-------. 1 root root system_u:object_r:auditd_log_t:s0 4385230 Jun  1 
12:23 /var/log/audit/audit.log

Above is before I do the tests.

while ssh stretch /usr/local/sbin/testit ; do 
 ssh stretch "reboot -nffd" > /dev/null 2>&1 & 
 sleep 20 
done

Above is the shell code I run to do the tests. Note that the VM in question runs on SSD storage which is why it can consistently boot in less than 20 seconds.

Fri  1 Jun 12:26:13 UTC 2018 
all good 
Fri  1 Jun 12:26:33 UTC 2018 
failed

Above is the output from the shell code in question. After the first reboot it fails. The probability of failure on my test system is greater than 50%.

root@stretch:~# ls -liZ /var/log/audit/audit.log  
37952 -rw-------. 1 root root system_u:object_r:unlabeled_t:s0 4396803 Jun  1 12:26 /var/log/audit/audit.log

Now the result. Note that the inode has not changed. I could understand a newly created file missing an xattr, but this is an existing file which shouldn’t have had its xattr changed. But somehow it gets corrupted.

The first possibility I considered was that SE Linux code might be at fault. I asked on the SE Linux mailing list (I haven’t been involved in SE Linux kernel code for about 15 years) and was informed that this isn’t likely at all. There have been no problems like this reported with other filesystems.

Evgeni Golov: Not-So-Self-Hosting

6 June, 2018 - 16:54

I planned to write about this for quite some time now (last time end of April), and now, thanks to the GitHub acquisition by Microsoft and all that #movingtogitlab traffic, I am finally sitting here and writing these lines.

This post is not about Microsoft, GitHub or GitLab, nor about any other SaaS solution out there; the named companies and products are just examples. It's more about "do you really want to self-host?"

Every time a big company acquires, shuts down or changes an online service (SaaS - Software as a Service), you hear people say "told you so, you should better have self-hosted from the beginning". And while I do run quite a lot of own infrastructure, I think this statement is too general and does not work well for many users out there.

Software as a Service

There are many code-hosting SaaS offerings: GitHub (proprietary), GitLab (open core), Pagure (FOSS) to name just a few. And while their licenses, ToS, implementations and backgrounds differ, they have a few things in common.

Benefits:

  • (sort of) centralized service
  • free (as in beer) tier available
  • high number of users (and potential collaborators)
  • high number of hosted projects
  • good (fsvo "good") connection from around the globe
  • no maintenance required from the users

Limitations:

  • dependency on the interest/goodwill of the owner to continue the service
  • some features might require signing up for a paid tier

Overall, SaaS is handy if you're lazy, just want to get the job done and benefit from others being able to easily contribute to your code.

Hosted Solutions

All of the above mentioned services also offer a hosted solution: GitHub Enterprise, GitLab CE and EE, Pagure.

As those are software packages you can install essentially everywhere, you can host the service "in your basement", in the cloud or in any data center you have hardware or VMs running.

However, with self-hosting, the above list of common things shifts quite a bit.

Benefits:

  • the service is configured and secured exactly like you need it
  • the data remains inside your network/security perimeter if you want it

Limitations:

  • requires users to create their own account on your instance for collaboration
  • probably low number of users (and potential collaborators)
  • connection depends on your hosting connection
  • infrastructure (hardware, VM, OS, software) requires regular maintenance
  • dependency on your (free) time to keep the service running
  • dependency on your provider (network/hardware/VM/cloud)

I think especially the first and last points are very important here.

First, many contributions happen because someone sees something small and wants to improve it, be it a typo in the documentation, a formatting error in the manpage or a trivial improvement of the code. But these contributions only happen when the complexity to submit it is low. Nobody not already involved in OpenStack would submit a typo-fix to their Gerrit which needs a Launchpad account… A small web-edit on GitHub or GitLab on the other hand is quickly done, because "everybody" has an account anyways.

Second, while it is called "self-hosting", in most cases it's more of a "self-running" or "self-maintaining" as most people/companies don't own the whole infrastructure stack.

Let's take this website as an example (even though it does not host any Git repositories): the webserver runs in a container (LXC) on a VM I rent from netcup. In the past, netcup used to get their infrastructure from Hetzner - however I am not sure that this is still the case. So worst case, the hosting of this website depends on me maintaining the container and the container host, netcup maintaining the virtualization infrastructure and Hetzner maintaining the actual data center. This also implies that I have to trust those companies and their suppliers as I only "own" the VM upwards, not the underlying infrastructure and not the supporting infrastructure (network etc).

SaaS vs Hosted

There is no silver bullet to that. One important question is "how much time/effort can you afford?" and another "which security/usability constraints do you have?".

Hosted for a dedicated group

If you need a solution for a dedicated group (your work, a big FOSS project like Debian or a social group like riseup), a hosted solution seems like a good idea. Just ensure that you have enough infrastructure and people to maintain it as a 24x7 service or at least close to that, for a long time, as people will depend on your service.

The same also applies if you need/want to host your code inside your network/security perimeter.

Hosted for an individual

Contrary to a group, I don't think a hosted solution makes sense for an individual most of the time. The burden of maintenance quite often outweighs the benefits, especially as you'll have to keep track of (security) updates for the software and the underlying OS as otherwise the "I own my data" benefit becomes "everyone owns me" quite quickly. You also have to pay for the infrastructure, even if the OS and the software are FOSS.

You're also probably missing out on potential contributors, which might have an account on the common SaaS platforms, but won't submit a pull-request for a small change if they have to register on your individual instance.

SaaS for a dedicated group

If you don't want to maintain an own setup (resources/costs), you can also use a SaaS platform for a group. Some SaaS vendors will charge you for some features (they have to pay their staff and bills too!), but it's probably still cheaper than having the right people in-house unless you have them anyways.

You also benefit from a networking effect, as other users of the same SaaS platform can contribute to your projects "at no cost".

SaaS for an individual

For an individual, a SaaS solution is probably the best fit as it's free (as in beer) in the most cases and allows the user to do what they intend to do, instead of shaving yaks and stacking turtles (aka maintaining infrastructure instead of coding).

And you again get the networking effect of the drive-by contributors who would not sign up for a quick fix.

Selecting the right SaaS

When looking for a SaaS solution, try to answer the following questions:

  • Do you trust the service to be present next year? In ten years? Is there a sustainable business model?
  • Do you trust the service with your data?
  • Can you move between SaaS and hosted easily?
  • Can you move to a different SaaS (or hosted solution) easily?
  • Does it offer all the features and integrations you want/need?
  • Can you leverage the network effect of being on the same platform as others?

Selecting the right hosted solution

And answer these when looking for a hosted one:

  • Do you trust the vendor to ship updates next year? In ten years?
  • Do you understand the involved software stack and are you willing to debug it when things go south?
  • Can you get additional support from the vendor (for money)?
  • Does it offer all the features and integrations you want/need?

So, do you really want to self-host?

I can't speak for you, but for my part, I don't want to run a full-blown Git hosting just for my projects, GitHub is just fine for that. And yes, GitLab would be equally good, but there is little reason to move at the moment.

And yes, I do run my own Nextcloud instance, mostly because I don't want to backup the pictures from my phone to "a cloud". YMMV.

Thomas Lange: FAI 5.7

6 June, 2018 - 13:33

The new FAI release 5.7 is now available. Packages are uploaded to unstable and are available from the fai-project.org repository. I've also created new FAI ISO images, and the special Ubuntu-only installation FAI CD now installs Ubuntu 18.04 aka Bionic. The FAI.me build service is also using the new FAI release.

In summary, the process for this release went very smoothly and I am happy that the update of the ISO images and the FAI.me service happened very shortly after the new release.

Louis-Philippe Véronneau: Disaster a-Brewing

6 June, 2018 - 11:00

I brewed two new batches of beer last March and I've been so busy since that I haven't had time to share how much of a failure it was.

See, after three years I thought I was getting better at brewing beer and the whole process of mashing, boiling, fermenting and bottling was supposed to be all figured out by now.

Turns out I was both greedy and unlucky and - woe is me! - one of my carboys exploded. Imagine 15 liters (out of a 19L batch) spilling out in my bedroom at 1AM with such force that the sound of the rubber bung shattering on the ceiling woke me up in a panic. I legitimately thought someone had been shot in my bedroom.

The aftermath left the walls, the ceiling and the wooden floor covered in thick semi-sweet brown liquid.

This was the first time I tried a "new" brewing technique called parti-gyle. When doing a parti-gyle, you reuse the same grains twice to make two different batches of beer: typically, the first batch is strong, whereas the second one is pretty low in alcohol. Parti-gyle used to be the way beer was brewed a few hundred years ago. The Belgian monks made their Tripels with the first mash, the Dubbels with the second mash, and the final mash was brewed with funky yeasts to make lighter beers like Saisons.

The reason for my carboy exploding was twofold. First of all, I was greedy and filled the carboy too much for the high-gravity porter I was brewing. When your wort is very sweet, the yeast tends to degas a whole lot more and needs more head space not to spill over. At this point, any homebrewer with experience will revolt and say something like "Why didn't you use a blow-off tube you dummy!". A blow-off tube is a tube that comes out the airlock into a large tub of water and helps contain the effects of violent primary fermentation. With a blow-off tube, instead of having beer spill out everywhere (or worse, having your airlock completely explode), the mess is contained to the water vessel the tube is in.

The thing is, I did use a blow-off tube. Previous experience taught me how useful they can be. No, the real reason my carboy exploded was that my airlock clogged up and let pressure build up until the bung gave way. The particular model of airlock I used was a three-piece airlock with a little cross at the end of the plastic tube [1]. Turns out that little cross accumulated yeast and when that yeast dried up, it created a solid plug. Needless to say, my airlocks don't have these little crosses anymore...

On a more positive note, it was also the first time I dry-hopped with full cones instead of pellets. I had some leftover cones in the freezer from my summer harvest and decided to use them. The result was great as the cones make for less trub than pellets when dry-hopping.

Recipes

What was left of the porter came out great. Here's the recipe if you want to try to replicate it. The second mash was also surprisingly good and turned out to be a very drinkable brown beer.

Party Porter (first mash)

The target boil volume is 23L and the target batch size 17L. Mash at 65°C and ferment at 19°C.

Since this is a parti-gyle, do not sparge. If you don't reach the desired boil size in the kettle, top it off with water until you reach 23L.

Black Malt gives very nice toasty aromas to this porter, whereas the Oat Flakes and the unmalted Black Barley make for a nice black and foamy head.

Malt:

  • 5.7 kg x Superior Pale Ale
  • 450 g x Amber Malt
  • 450 g x Black Barley (not malted)
  • 400 g x Oat Flakes
  • 300 g x Crystal Dark
  • 200 g x Black Malt

Hops:

  • 13 g x Bravo (15.5% alpha acid) - 60 min Boil
  • 13 g x Bramling Cross (6.0% alpha acid) - 30 min Boil
  • 13 g x Challenger (7.0% alpha acid) - 30 min Boil

Yeast:

  • White Labs - American Ale Yeast Blend - WLP060

Party Brown (second mash)

The target boil volume is 26L and the target batch size 18L. Mash at 65°C for over an hour, sparge slowly and ferment at 19°C.

The result is a very nice table beer.

Malt:

same as for the Party Porter, since we are doing a parti-gyle.

Hops:

  • 31 g x Northern Brewer (9.0% alpha acid) - 60 min Boil
  • 16 g x Kent Goldings (5.5% alpha acid) - 15 min Boil
  • 13 g x Kent Goldings (5.5% alpha acid) - 5 min Boil
  • 13 g x Chinook (cones) - Dry Hop

Yeast:

  • White Labs - Nottingham Ale Yeast - WLP039

  [1] The same kind of cross you can find in sinks to keep you from dropping objects down the drain by inadvertence.

Thomas Goirand: Using a dummy network interface

6 June, 2018 - 03:45

For a long time, I’ve been very much annoyed by network setups on virtual machines. Either you choose a bridged interface (which is very easy with something like Virtualbox), or you choose NAT. The issue with NAT is that you can’t easily get into your VM (for example, virtualbox doesn’t expose the gateway to your VM). With bridging, you’re getting in trouble because your VM will attempt to get DHCP from the outside network, which means that first, you’ll get a different IP depending on where your laptop runs, and second, the external server may refuse your VM because it’s not authenticated (for example because of a MAC address filter, or 802.1X auth).

But there’s a solution to it. I’m now very happy with my network setup, which is using a dummy network interface. Let me share how it works.

In the modern Linux kernel, there’s a “fake” network interface available through a module called “dummy”. To add such an interface, simply load the kernel module (i.e. “modprobe dummy”) and start playing. Then you can bridge that interface, tap it, and plug your VM into it. Since the dummy interface really lives in your computer, you do have access to this internal network with a route to it.

I’m using this setup for connecting both KVM and Virtualbox VMs; you can even mix both. For Virtualbox, simply use the dropdown list for the bridge. For KVM, use something like this on the command line: -device e1000,netdev=net0,mac=08:00:27:06:CF:CF -netdev tap,id=net0,ifname=mytap0,script=no,downscript=no
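
As a sketch only (the disk image name and memory size are placeholders, not from the original post), a complete KVM invocation using the mytap0 device created by the script below could look like this, run as root since the tap device is owned by root:

kvm -m 1024 \
    -drive file=debian.qcow2,if=virtio \
    -device e1000,netdev=net0,mac=08:00:27:06:CF:CF \
    -netdev tap,id=net0,ifname=mytap0,script=no,downscript=no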

Here’s a simple script to set that up, with, on top, masquerading for both IPv4 and IPv6:

# Load the dummy interface module
modprobe dummy

# Create a dummy interface called mynic0
ip link set name mynic0 dev dummy0

# Set its MAC address
ifconfig mynic0 hw ether 00:22:22:dd:ee:ff

# Add a tap device
ip tuntap add dev mytap0 mode tap user root

# Create a bridge, and bridge to it mynic0 and mytap0
brctl addbr mybr0
brctl addif mybr0 mynic0
brctl addif mybr0 mytap0

# Set IP addresses on the bridge
ifconfig mybr0 192.168.100.1 netmask 255.255.255.0 up
ip addr add fd5d:12c9:2201:1::1/64 dev mybr0

# Make sure all interfaces are up
ip link set mybr0 up
ip link set mynic0 up
ip link set mytap0 up

# Set basic masquerading for both ipv4 and 6
iptables -I FORWARD -j ACCEPT
iptables -t nat -I POSTROUTING -s 192.168.100.0/24 -j MASQUERADE
ip6tables -I FORWARD -j ACCEPT
ip6tables -t nat -I POSTROUTING -s fd5d:12c9:2201:1::/64 -j MASQUERADE
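
On the guest side, a static address in the bridged network is all that is needed; a minimal sketch (the interface name ens3 and the chosen address are assumptions, adjust as needed), run inside the VM, would be:

ip addr add 192.168.100.10/24 dev ens3
ip link set ens3 up
ip route add default via 192.168.100.1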

Daniel Pocock: Public Money Public Code: a good policy for FSFE and other non-profits?

6 June, 2018 - 03:40

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes.

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

Jonathan McDowell: Getting started with Home Assistant

6 June, 2018 - 02:05

Having set up some MQTT sensors and controllable lights the next step was to start tying things together with a nicer interface than mosquitto_pub and mosquitto_sub. I don’t yet have enough devices setup to be able to do some useful scripting (turning on the snug light when the study is cold is not helpful), but a web control interface makes things easier to work with as well as providing a suitable platform for expansion as I add devices.

There are various home automation projects out there to help with this. I’d previously poked openHAB and found it quite complex, and I saw reference to Domoticz which looked viable, but in the end I settled on Home Assistant, which is written in Python and has a good range of integrations available out of the box.

I shoved the install into a systemd-nspawn container (I have an Ansible setup which makes spinning one of these up with a basic Debian install simple, and it makes it easy to cleanly tear things down as well). One downside of Home Assistant is that it decides it’s going to install various Python modules once you actually configure up some of its integrations. This makes me a little uncomfortable, but I set it up with its own virtualenv to make it easy to see what had been pulled in. Additionally I separated out the logs, config and state database, all of which normally go in ~/.homeassistant/. My systemd service file went in /etc/systemd/system/home-assistant.service and looks like:

[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=hass
ExecStart=/srv/hass/bin/hass -c /etc/homeassistant --log-file /var/log/homeassistant/homeassistant.log

MemoryDenyWriteExecute=true
ProtectControlGroups=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target
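
The ExecStart above assumes Home Assistant lives in a virtualenv at /srv/hass; a sketch of how that might have been created (the exact commands are my assumption, not from the original post):

python3 -m venv /srv/hass
/srv/hass/bin/pip install --upgrade pip wheel
/srv/hass/bin/pip install homeassistant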

Moving the state database needs an edit to /etc/homeassistant/configuration.yaml (a default will be created on first startup, I’ll only mention the changes I made here):

recorder:
  db_url: sqlite:///var/lib/homeassistant/home-assistant_v2.db

I disabled the Home Assistant cloud piece, as I’m not planning on using it:

# cloud:

And the introduction card:

# introduction:

The existing MQTT broker was easily plumbed in:

mqtt:
  broker: mqtt-host
  username: hass
  password: !secret mqtt_password
  port: 8883
  certificate: /etc/ssl/certs/ca-certificates.crt

Then the study temperature sensor (part of the existing sensor block that had weather prediction):

sensor:
  - platform: mqtt
    name: "Study Temperature"
    state_topic: "collectd/mqtt.o362.us/mqtt/temperature-study"
    value_template: "{{ value.split(':')[1] }}"
    device_class: "temperature"
    unit_of_measurement: "°C"

The templating ability lets me continue to log to MQTT in a format collectd can parse, while still being able to pull the information into Home Assistant.
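
As an illustration (the payload value and the plain mosquitto_pub options are assumptions based on the value_template above, not taken from the post), publishing a collectd-style timestamp:value payload by hand shows what the template splits apart; {{ value.split(':')[1] }} would yield 21.7 here:

mosquitto_pub -h mqtt-host -t 'collectd/mqtt.o362.us/mqtt/temperature-study' -m '1528276800:21.7'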

Finally the Sonoff controlled light:

light:
  - platform: mqtt
    name: snug
    command_topic: 'cmnd/sonoff-snug/power'

I set http_password (to prevent unauthenticated access) and mqtt_password in /etc/homeassistant/secrets.yaml. Then systemctl start home-assistant brought the system up on http://hass-host:8123/, and the default interface presented the study temperature and a control for the snug light, as well as the default indicators of whether the sun is up or not and the local weather status.

I do have a few niggles with Home Assistant:

  • Single password for access: There’s one password for accessing the API endpoint, so no ability to give different users different access or limit what an external integration can do.
  • Wants an entire subdomain: This is a common issue with webapps; they don’t want to live in a subdirectory under a main site (I also have this issue with my UniFi controller and Keybase, who don’t want to believe my main website is old skool with /~noodles/). There’s an open configurable webroot feature request, but no sign of it getting resolved. Sadly it involves work to both the backend and the frontend - I think a modicum of hacking could fix up the backend bits, but have no idea where to start with a Polymer frontend.
  • Installs its own software: I don’t like the fact the installation of Python modules isn’t an up front thing. I’d rather be able to pull a dependency file easily into Ansible and lock down the installation of new things. I can probably get around this by enabling plugins, allowing the modules to be installed and then locking down permissions but it’s kludgy and feels fragile.
  • Textual configuration: I’m not really sure I have a good solution to this, but it’s clunky to have to do all the configuration via a text file (and I love scriptable configuration). This isn’t something that’s going to work out of the box for non-technical users, and even for those of us happy hand editing YAML there’s a lot of functionality that’s hard to discover without some digging. One of my original hopes with Home Automation was to get a better central heating control and if it’s not usable by any household member it isn’t going to count as better.

Some of these are works in progress, some are down to my personal preferences. There’s active development, which is great to see, and plenty of documentation - both official documentation on the project website, and in the community forums. And one of the nice things about tying everything together with MQTT is that if I do decide Home Assistant isn’t the right thing down the line, I should be able to drop in anything else that can deal with an MQTT broker.

Reproducible builds folks: Reproducible Builds: Weekly report #162

5 June, 2018 - 14:47

Here’s what happened in the Reproducible Builds effort between Sunday May 27 and Saturday June 2 2018:

Packages reviewed and fixed, and bugs filed

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

reproducible-builds.org website updates

There were a number of changes to the reproducible-builds.org website this week too, including:

Chris Lamb also updated the diffoscope.org website, including adding a progress bar animation as well as making the “try it online” link more prominent and correcting the source tarball link.

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Russ Allbery: Review: The Obelisk Gate

5 June, 2018 - 10:22

Review: The Obelisk Gate, by N.K. Jemisin

Series: The Broken Earth #2
Publisher: Orbit
Copyright: August 2016
ISBN: 0-316-22928-8
Format: Kindle
Pages: 448

The Obelisk Gate is the sequel to The Fifth Season and picks up right where it left off. This is not a series to read out of order.

The complexity of The Fifth Season's three entwined stories narrows down to two here, which stay mostly distinct. One follows Essun, who found at least a temporary refuge at the end of the previous book and now is split between learning a new community and learning more about the nature of the world and orogeny. The second follows Essun's daughter, whose fate had been left a mystery in the first book. This is the middle book of a trilogy, and it's arguably less packed with major events than the first book, but the echoing ramifications of those events are vast and provide plenty to fill a novel. The Obelisk Gate never felt slow. The space between major events is filled with emotional processing and revelations about the (excellent) underlying world-building.

We do finally learn at least something about the stone-eaters, although many of the details remain murky. We also learn something about Alabaster's goals, which were the constant but mysterious undercurrent of the first book. Mixed with this is the nature of the Guardians (still not quite explicit, but much clearer now than before), the purpose of the obelisks, something of the history that made this world such a hostile place, and the underlying nature of orogeny.

The last might be a touch disappointing to some readers (I admit it was a touch disappointing to me). There are enough glimmers of forgotten technology and alternative explanations that I was wondering if Jemisin was setting up a quasi-technological explanation for orogeny. This book makes it firmly clear that she's not: this is a fantasy, and it involves magic. I have a soft spot in my heart for apparent magic that's some form of technology, so I was a bit sad, but I do appreciate the clarity. The Obelisk Gate is far more open with details and underlying systems (largely because Essun is learning more), which provides a lot of meat for the reader to dig into and understand. And it remains a magitech world that creates artifacts with that magic and uses them (or, more accurately, used them) to build advanced civilizations. I still see some potential pitfalls for the third book, depending on how Jemisin reconciles this background with one quasi-spiritual force she's introduced, but the world building has been so good that I have high hopes those pitfalls will be avoided.

The world-building is not the best part of this book, though. That's the characters, and specifically the characters' emotions. Jemisin manages the feat of both giving protagonists enough agency that the story doesn't feel helpless while still capturing the submerged rage and cautious suspicion that develops when the world is not on your side. As with the first book of this series, Jemisin captures the nuances, variations, and consequences of anger in a way that makes most of fiction feel shallow.

I realized, while reading this book, that so many action-oriented and plot-driven novels show anger in only two ways, which I'll call "HULK SMASH!" and "dark side" anger. The first is the righteous anger when the protagonist has finally had enough, taps some heretofore unknown reservoir of power, and brings the hurt to people who greatly deserved it. The second is the Star Wars cliche: anger that leads to hate and suffering, which the protagonist has to learn to control and the villain gives into. I hadn't realized how rarely one sees any other type of anger until Jemisin so vividly showed me the vast range of human reaction that this dichotomy leaves out.

The most obvious missing piece is that both of those modes of anger are active and empowered. Both are the anger of someone who can change the world. The argument between them is whether anger changes the world in a good way or a bad way, but the ability of the angry person to act on that anger and for that anger to be respected in some way by the world is left unquestioned. One might, rarely, see helpless anger, but it's usually just the build-up to a "HULK SMASH!" moment (or, sometimes, leads to a depressing sort of futility that makes me not want to read the book at all).

The Obelisk Gate felt like a vast opening-up of emotional depth that has a more complicated relationship to power: hard-earned bitterness that brings necessary caution, angry cynicism that's sometimes wrong but sometimes right, controlled anger, anger redirected as energy into other actions, anger that flares and subsides but doesn't disappear. Anger that one has to live with, and work around, and understand, instead of getting an easy catharsis. Anger with tradeoffs and sacrifices that the character makes consciously, affected by emotion but not driven by it. There is a moment in this book where one character experiences anger as an overwhelming wave of tiredness, a sharp realization that they're just so utterly done with being angry all the time, where the emotion suddenly shifts into something more introspective. It was a beautifully-captured moment of character depth that I don't remember seeing in another book.

This may sound like it would be depressing and exhausting to read, but at least for me it wasn't at all. I didn't feel like I was drowning in negative emotions — largely, I think, because Jemisin is so good at giving her characters agency without having the world give it to them by default. The protagonists are self-aware. They know what they're angry about, they know when anger can be useful and when it isn't, and they know how to guide it and live with it. It feels more empowering because it has to be fought for, carved out of a hostile world, earned with knowledge and practice and stubborn determination. Particularly in Essun, Jemisin is writing an adult whose life is full of joys and miseries, who doesn't forget her emotions but also isn't controlled by them, and who doesn't have the luxury of either being swept away by anger or reaching some zen state of unperturbed calm.

I think one key to how Jemisin pulls this off is the second-person perspective used for Essun's part of the book (and carried over into the other strand, which has the same narrator but a different perspective since this story is being told to Essun). That's another surprise, since normally this style strikes me as affected and artificial, but here it serves the vital purpose of giving the reader a bit of additional distance from Essun's emotions. Following an emotionally calmer retelling of someone else's perspective on Essun made it easier to admire what Jemisin is doing with the nuances of anger without getting too caught up in it.

It helps considerably that the second-person perspective here has a solid in-story justification (not explicitly explained here, but reasonably obvious by the end of the book), and is not simply a gimmick. The answers to who is telling this story and why they're telling it to a protagonist inside the story are important, intriguing, and relevant.

This series is doing something very special, and I'm glad I stuck to it through the confusing and difficult parts in the first book. There's a reason why every book in it was nominated for the Hugo and The Obelisk Gate won in 2017 (and The Fifth Season in 2016). Despite being the middle book of a trilogy, and therefore still leaving unresolved questions, this book was even better than The Fifth Season, which already set a high bar. This is very skillful and very original work and well worth the investment of time (and emotion).

Followed by The Stone Sky.

Rating: 9 out of 10

Norbert Preining: Hyper Natural Deduction

5 June, 2018 - 07:40

After quite some years of research, the paper on Hyper Natural Deduction by my colleague Arnold Beckmann and myself has finally been published in the Journal of Logic and Computation. This paper was the difficult but necessary first step in our program to develop a Curry-Howard style correspondence between standard Gödel logic (and its Hypersequent calculus) and some kind of parallel computation.

The results of this article were first announced at the LICS (Logic in Computer Science) Conference in 2015, but the current version is much more intuitive thanks to a switch to an inductive definition, the use of a graph representation for proofs, and, finally, the fix of a serious error. The abstract of the current article reads:

We introduce a system of Hyper Natural Deduction for Gödel Logic as an extension of Gentzen’s system of Natural Deduction. A deduction in this system consists of a finite set of derivations which uses the typical rules of Natural Deduction, plus additional rules providing means for communication between derivations. We show that our system is sound and complete for infinite-valued propositional Gödel Logic, by giving translations to and from Avron’s Hypersequent Calculus. We provide conversions for normalization extending usual conversions for Natural Deduction and prove the existence of normal forms for Hyper Natural Deduction for Gödel Logic. We show that normal deductions satisfy the subformula property.

The article (preprint version) by itself is rather long (around 70 pages including the technical appendix), but for those interested, the first 20 pages give a nice introduction and the inductive definition of our system, which suffices for building upon this work. The rest of the paper is dedicated to an extensional definition – not an inductive definition, but one via clearly defined properties on the final object – and to the proof of normalization.

The starting point of our investigations was Arnon Avron's comments on parallel computations and communication when he introduced the Hypersequent calculus (Hypersequents, Logical Consequence and Intermediate Logics for Concurrency, Ann. Math. Art. Int. 4 (1991) 225-248):

The second, deeper objective of this paper is to contribute towards a better understanding of the notion of logical consequence in general, and especially its possible relations with parallel computations.

We believe that these logics […] could serve as bases for parallel λ-calculi.

The name “communication rule” hints, of course, at a certain intuitive interpretation that we have of it as corresponding to the idea of exchanging information between two multiprocesses: […]

In working towards a Curry-Howard (CH) correspondence between Gödel logics and some kind of process calculus, we are guided by the original path, as laid out in the schematic below: starting from Intuitionistic Logic (IL) and its sequent calculus (LJ), a Natural Deduction system (ND) provided the link to the λ-calculus. We started from Gödel logics (GL) and their Hypersequent calculus (HLK) and in this article developed a Hyper Natural Deduction with properties similar to those of the original Natural Deduction system.
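
In LaTeX form the schematic amounts to little more than two parallel rows; this is only a rough rendering of the sentence above, and the right-hand end of the second row is precisely what is still missing:

    % A schematic of the two paths (requires amsmath for \text);
    % the arrows only mean "provides the link to the next step".
    \[
    \begin{array}{lccccccc}
    \text{intuitionistic side:} & \text{IL} & \longrightarrow & \text{LJ} & \longrightarrow & \text{ND} & \longrightarrow & \lambda\text{-calculus}\\
    \text{G\"odel-logic side:} & \text{GL} & \longrightarrow & \text{HLK} & \longrightarrow & \text{HND} & \longrightarrow & \text{parallel calculus (open)}
    \end{array}
    \]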

Curry-Howard correspondences provide deep conceptual links between formal proofs and computational programs. A whole range of such CH correspondences have been identified and explored. The most fundamental one is between the Natural Deduction proof formalism for intuitionistic logic and a foundational programming language, the lambda calculus. This CH correspondence interprets formulas in proofs as types in programs, and proof transformations like cut-reduction as computation steps like beta-reduction in the lambda calculus. These insights have led to applications of logical tools in programming language technology, and to the development of programming languages like CAML and of proof assistants like Coq.
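
To make the formulas-as-types reading concrete, here is a toy sketch (my own illustration in Python, not taken from the paper, using only the standard typing module): an implication becomes a function type, and implication elimination becomes function application.

    from typing import Callable, Tuple, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    # A value of type Callable[[A], B] plays the role of a proof of "A implies B".
    # Applying it to a proof of A is the implication-elimination (modus ponens)
    # rule of Natural Deduction; simplifying such an application corresponds to
    # beta-reduction in the lambda calculus.
    def modus_ponens(f: Callable[[A], B], a: A) -> B:
        return f(a)

    # The tautology "(A and B) implies A" corresponds to the first projection.
    def proj1(pair: Tuple[A, B]) -> A:
        return pair[0]

The open question of our program is what the additional communication rules of Hyper Natural Deduction should correspond to on the computational side; presumably some primitive for exchanging information between parallel processes, as Avron already hinted.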

CH correspondences are well established for sequential programming, but are far less clear for parallel programming. Current approaches to establishing such links for parallel programming always start from established models of parallel programming like process algebras (CSP, CCS, the pi-calculus) and define a related logical formalism. For example, the linear logic proof formalism is inspired by the pi-calculus process algebra. Although some links between linear logic and the pi-calculus have been established, a deep, inspiring connection is missing. Another problem is that logical formalisms established in this way often lack a clear semantics which is independent of the related computational model on which they are based. Thus, despite years of intense research on this topic, we are far from having a clear and useful answer that leads to strong applications in programming language technology, as we have seen with the fundamental CH correspondence for sequential programming.

Although it was a long and tiring path to the current status, it is only the beginning.

Shashank Kumar: Google Summer of Code 2018 with Debian - Week 3

5 June, 2018 - 01:30

Coming into the third week of GSoC, it felt like this had been part of my daily schedule forever. Daily updates to mentors, reviews and evaluations on merge/pull requests, and a constant learning process kept my schedule full of adrenaline. Here's what I worked on!

How A Project Is Made with Sanyam Khurana

Building an idea into a project seems like a lot of work and excitement, right? You can do all sorts of crazy stuff with your code to make it as amazing as possible, and use all sorts of cool tools in the hope of making something out of it at the end. But this is where the problem lies: diving into a sea of amusement and uncertainty never promises a good ending. And hence, my mentor Sanyam Khurana and I sat down for my intervention in the hope of structuring the tasks for good. And this is how a project begins. Sanyam, with his experience in open source as well as in industry, taught me the importance of dividing work into tasks which are atomic and which clearly define, in plain English, what we are trying to achieve. For example, when you are trying to make a blogging website, you don't create a pull request with all of the functionality needed for the blog. First, think about the atomic tasks which can be done independently. Then create a series of these tasks (we call them tickets/issues/features). So, you have a ticket for, say, setting up a Pelican blog, another for creating a theme for your blog, another for adding analytics to your blog, and so on.

Now, you can also create boards or tables with columns which define the state in which these tasks lie. A task may be in development, in testing, or in the review phase. This makes it easier to visualize what needs to be done, what has already been done, and which tasks should be in focus right now. In a broader sense, this methodology, applied within a proper framework and with a lot of discipline, is known as Agile Software Development.

Dividing Project into Tasks

After learning so much about how to proceed, I sketched out the way in which I could separate out the atomic features needed for the project. We're using the Debian-hosted Redmine for our project management, and I started by jotting down the issues. Here are the issues which shape the beginning of the project.

  • Create Design Guideline - The first issue in order to create a reference GUI design guideline for the application.
  • Design GUI for Sign Up - Design mockup following the guideline to describe how Sign Up module should look like on the GUI.
  • Design GUI for Sign In - Design mockup following the guideline to describe how Sign In module should look like on the GUI.
  • Create Git repository for the project - Project mentor Daniel created this issue as the first step which marks the beginning of the project.
  • Initializing skeleton KIVY application - After a dedicated repository has been created for our project, a KIVY application has to be set up, which should also include tests, documentation, and a changelog.
  • Create SignUp Feature - After the skeleton is set up, the Sign Up module can be implemented, which should present a GUI the user can use to create an account for accessing the application. This screen should be the user's first interaction with the application after they run it for the first time.
  • Create SignIn Feature - If the user has already signed up for an account on the application, this screen will be the medium through which they can sign in with their credentials.
  • Add a license to Project Repository - Being an open source project, picking a license is a fairly elaborate process where we also have to look at all the dependencies our application has, among other parameters. Hence, this issue is more of a discussion which will conclude with adding a license file to the project repository.

These were some of the key issues which came up after my discussion with Sanyam (except for creating the Git repository, which Daniel kickstarted). These issues were enough to begin with, and as we progress we can create more issues on Redmine. As part of the first couple of weeks of GSoC, I had already completed the first 3 design issues, and I also wrote a blog post explaining my process and the outcome. So, for the third week, I started with initializing the skeleton KIVY application.

The First Merge Request

Don't be confused if you are a GitHub native: since we are using the Debian-hosted GitLab (called Salsa), it has Merge Requests in place of Pull Requests.

The issue which I was trying to solve in my first Merge Request was initializing the skeleton KIVY application. It was just about creating a boilerplate from scratch so that development from now on would be smooth. I set out to achieve the following things in my Merge Request:

  • Add a KIVY application which can create a sample window with sample text on it to showcase that KIVY is working just fine (see the minimal sketch below)
  • Create the project structure to fit documentation, ui, tests and modules
  • Add pipenv support for virtual environment and dependency management
  • Integrate pylint to test Python code for PEP8 compliance
  • Integrate pytest and write tests for unit and integration testing
  • Adding Gitlab CI support
  • Add a README.md file and write a general description of the project and all its components
  • Add documentation for end users to help them easily run the application and to explain all of its features
  • Add documentation for developers to help them build the application from source
  • Add documentation for contributors to share some of the best practices while contributing to this application

Here's the Merge Request which resulted in all of the above additions to the project. It was a lot of pain getting CI to work for the first time, but once you get a green tick, you know what makes CI tick. Throughout the development process, Sanyam helped me with reviews, and the merge request finally got merged into the repository by Daniel.
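
For readers who have never seen Kivy, the "sample window with sample text" item boils down to very few lines. The following is only an illustrative sketch, assuming a working Kivy installation; the class name and label text are invented here and this is not the project's actual code.

    from kivy.app import App
    from kivy.uix.label import Label


    class SkeletonApp(App):
        """Open a single window showing some sample text, proving Kivy works."""

        def build(self):
            # The widget returned here becomes the root widget of the window.
            return Label(text="Hello from the skeleton application!")


    if __name__ == "__main__":
        SkeletonApp().run()

A first pytest unit test can then do little more than instantiate SkeletonApp and check that build() returns a Label, which is roughly the kind of check such a skeleton's test suite might start with.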

Conclusion

This week kickstarted the main development process for New Contributor Wizard and gave me a chance to learn about project/software management. I will be creating more issues and will share what I'm working on in next week's GSoC blog.

Markus Koschany: My Free Software Activities in May 2018

5 June, 2018 - 00:46

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Debian LTS

This was my twenty-seventh month as a paid contributor and I have been paid to work 24.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 21.05.2018 until 27.05.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in glusterfs, tomcat7, zookeeper, imagemagick, strongswan, radare2, batik, mupdf and graphicsmagick.
  • I drafted an announcement for Wheezy's EOL that was later released as DLA-1393-1 and as an official Debian news item.
  • DLA-1384-1. I reviewed and uploaded xdg-utils for Abhijith PA.
  • DLA-1381-1. Issued a security update for imagemagick/Wheezy fixing 3 CVEs.
  • DLA-1385-1. Issued a security update for batik/Wheezy fixing 1 CVE.
  • Prepared a backport of Tomcat 7.0.88 for Jessie which fixes all open CVEs (5) in Jessie. From now on we intend to provide the latest upstream releases for a specific Tomcat branch. We hope this will improve the user experience. It also allows Debian users to get more help from Tomcat developers directly because there is no significant Debian-specific delta anymore. The update is pending review by the security team.
  • Prepared a security update for graphicsmagick fixing 19 CVEs. I also investigated CVE-2017-10794 and CVE-2017-17913 and came to the conclusion that the Jessie version is not affected. I merged and reviewed another update by László Böszörményi. At the moment the update is pending review by the security team. Together these updates will fix the most important issues in Graphicsmagick/Jessie.
  • DSA-4214-1. Prepared a security update for zookeeper fixing 1 CVE.
  • DSA-4215-1. Prepared a security update for batik/Jessie fixing 1 CVE.
  • Prepared a security update for memcached in Jessie and Stretch fixing 2 CVEs. This update is also pending review by the security team.
  • Finished the security update for JRuby (Jessie and Stretch) fixing 5 and 7 CVEs respectively. However, we discovered that JRuby fails to build from source in Jessie, and a fix or workaround would most likely break reverse dependencies. Thus we have decided to mark JRuby as end-of-life in Jessie, also because the version is already eight years old.
Misc
  • I reviewed and sponsored xtrkcad for Jörg Frings-Fürst.

Thanks for reading and see you next time.

Raphaël Hertzog: My Free Software Activities in May 2018

4 June, 2018 - 23:56

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

distro-tracker

With the disappearance of many Alioth mailing lists, I took the time to finish proper support for a team email in distro-tracker. There's no official documentation yet, but it's already used by a bunch of teams. If you look at the pkg-security team on tracker.debian.org, it has used "pkg-security" as its unique identifier and it has thus inherited team+pkg-security@tracker.debian.org as an email address that can be used in the Maintainer field (and it can be used to communicate between all team subscribers that have the contact keyword enabled on their team subscription).

I also dealt with a few merge requests:

I also filed ticket #7283 on rt.debian.org to have local_part_suffix = "+" for tracker.debian.org's exim config. This will let us bounce emails sent to invalid email addresses. Right now all emails are delivered to a Maildir; valid messages are processed and the rest is silently discarded. At the time of processing, it's too late to send bounces back to the sender.

pkg-security team

This month my activity is limited to sponsorship of new packages:

  • grokevt_0.5.0-2.dsc fixing one RC bug (missing build-dep on python3-distutils)
  • dnsrecon_0.8.13-1.dsc (new upstream release)
  • recon-ng_4.9.3-1.dsc (new upstream release)
  • wifite_2.1.0-1.dsc (new upstream release)
  • aircrack-ng (add patch from upstream git)

I also interacted multiple times with Samuel Henrique, who started to work on the Google Summer of Code project porting Kali packages to Debian. He mainly worked on getting an overview of the work to be done.

Misc Debian work

I reviewed multiple changes submitted by Hideki Yamane on debootstrap (on the debian-boot mailing list, and also in MR 2 and MR 3). I reviewed and merged some changes on live-boot too.

Extended LTS

I spent a good part of the month dealing with the setup of the Wheezy Extended LTS program. Given the lack of interest from the various Debian teams, it's hosted on a Freexian server and not on any debian.org infrastructure. But the principle is basically the same as for Debian LTS, except that the package list is reduced to the set of packages used by Extended LTS sponsors. The updates prepared in this project are nevertheless freely available for all.

It's not too late to join the program: you can always contact me at deblts@freexian.com with a source package list that you'd like to see supported, and I'll send you back an estimate of the cost.

Thanks to an initial contribution from Credativ, Emilio Pozuelo Monfort has prepared a merge request making it easy for third parties to host their own security tracker that piggy-backs on Debian's. For Extended LTS, we thus have our own tracker.

Thanks

See you next month for a new summary of my activities.

