Planet Debian

Planet Debian - http://planet.debian.org/

Antoine Beaupré: bup vs attic silly benchmark

18 November, 2014 - 12:39

after seeing attic introduced in a discussion about bup, i figured i could give it a try. it addressed two of my biggest concerns with bup:

  • backup removal
  • encryption

and it seemed to appear magically out of nowhere and basically do everything i need, with an inline manual on top of it.

disclaimer

Note: this is not a real benchmark! i would probably need to port bup and attic to liw's seivot software to report on this properly (and that would be amazing and really interesting, but it's late now). even worse, this was done on a production server with other stuff going on, so take the results with a grain of salt.

procedure and results

Here's what I did. I set up backups of my ridiculously huge ~/src directory on the external hard drive where I usually make my backups. I ran a clean backup with attic, then redid it, then I ran a similar backup with bup, then redid it. Here are the results:

anarcat@marcos:~$ sudo apt-get install attic # this installed 0.13 on debian jessie amd64
[...]
anarcat@marcos:~$ attic init /mnt/attic-test
Initializing repository at "/media/anarcat/calyx/attic-test"
Encryption NOT enabled.
Use the "--encryption=passphrase|keyfile" to enable encryption.
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src ~/src/
Initializing cache...
------------------------------------------------------------------------------
Archive name: src
Archive fingerprint: 7bdcea8a101dc233d7c122e3f69e67e5b03dbb62596d0b70f5b0759d446d9ed0
Start time: Tue Nov 18 00:42:52 2014
End time: Tue Nov 18 00:54:00 2014
Duration: 11 minutes 8.26 seconds
Number of files: 283910

                       Original size      Compressed size    Deduplicated size
This archive:                6.74 GB              4.27 GB              2.99 GB
All archives:                6.74 GB              4.27 GB              2.99 GB
------------------------------------------------------------------------------
311.60user 68.28system 11:08.49elapsed 56%CPU (0avgtext+0avgdata 122824maxresident)k
15279400inputs+6788816outputs (0major+3258848minor)pagefaults 0swaps
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src-2014-11-18 ~/src/
------------------------------------------------------------------------------
Archive name: src-2014-11-18
Archive fingerprint: be840f1a49b1deb76aea1cb667d812511943cfb7fee67f0dddc57368bd61c4bf
Start time: Tue Nov 18 00:05:57 2014
End time: Tue Nov 18 00:06:35 2014
Duration: 38.15 seconds
Number of files: 283910

                       Original size      Compressed size    Deduplicated size
This archive:                6.74 GB              4.27 GB            116.63 kB
All archives:               13.47 GB              8.54 GB              3.00 GB
------------------------------------------------------------------------------
30.60user 4.66system 0:38.38elapsed 91%CPU (0avgtext+0avgdata 104688maxresident)k
18264inputs+258696outputs (0major+36892minor)pagefaults 0swaps
anarcat@marcos:~$ sudo apt-get install bup # this installed bup 0.25
anarcat@marcos:~$ free && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free # flush caches
anarcat@marcos:~$ export BUP_DIR=/mnt/bup-test
anarcat@marcos:~$ bup init
Initialized empty Git repository in /mnt/bup-test/
anarcat@marcos:~$ time bup index ~/src
Indexing: 345249, done.
56.57user 14.37system 1:45.29elapsed 67%CPU (0avgtext+0avgdata 85236maxresident)k
699920inputs+104624outputs (4major+25970minor)pagefaults 0swaps
anarcat@marcos:~$ time bup save -n src ~/src
Reading index: 345249, done.
bloom: creating from 1 file (200000 objects).
bloom: adding 1 file (200000 objects).
bloom: creating from 3 files (600000 objects).
Saving: 100.00% (6749592/6749592k, 345249/345249 files), done.
bloom: adding 1 file (126005 objects).
383.08user 61.37system 10:52.68elapsed 68%CPU (0avgtext+0avgdata 194256maxresident)k
14638104inputs+5944384outputs (50major+299868minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup index ~/src
Indexing: 345249, done.
56.13user 13.08system 1:38.65elapsed 70%CPU (0avgtext+0avgdata 133848maxresident)k
806144inputs+104824outputs (137major+38463minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup save -n src2 ~/src
Reading index: 1, done.
Saving: 100.00% (0/0k, 1/1 files), done.
bloom: adding 1 file (1 object).
0.22user 0.05system 0:00.66elapsed 42%CPU (0avgtext+0avgdata 17088maxresident)k
10088inputs+88outputs (39major+15194minor)pagefaults 0swaps

Disk usage is comparable:

anarcat@marcos:attic$ du -sc /mnt/*attic*
2943532K        /mnt/attic-test
2969544K        /mnt/bup-test

People are encouraged to try and reproduce those results, which should be fairly trivial.
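For reference, a rerun can be scripted along the following lines. This is a sketch assembled from the commands above, not the exact runs (the destination paths are placeholders; adjust them to your setup):

#!/bin/bash
# Back up the same tree twice with attic and twice with bup, timing each run.
# Optionally flush caches between runs: sync && echo 3 > /proc/sys/vm/drop_caches
set -e
SRC=${SRC:-$HOME/src}

attic init /mnt/attic-test
time attic create --stats /mnt/attic-test::first "$SRC"
time attic create --stats /mnt/attic-test::second "$SRC"

export BUP_DIR=/mnt/bup-test
bup init
time bup index "$SRC" && time bup save -n first "$SRC"
time bup index "$SRC" && time bup save -n second "$SRC"

du -s /mnt/attic-test /mnt/bup-test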

Observations

Here are interesting things I noted while working with both tools:

  • attic is Python 3: i could compile it, with dependencies, by doing apt-get build-dep attic and running setup.py - i could also install it with pip if i needed to (but i didn't)
  • bup is Python 2, and has a scary makefile
  • both have an init command that basically does almost nothing and takes little enough time that i'm ignoring it in the benchmarks
  • attic backups are a single command, bup requires me to know that i first want to index and then save, which is a little confusing
  • bup has nice progress information, especially during save (because when it loaded the index, it knew how much was remaining) - just because of that, bup "feels" faster
  • bup, however, lets me know about its deep internals (now i know it uses a bloom filter, for example), which most people probably find barely understandable
  • on the contrary, attic gives me useful information about the size of my backups, including the size of the current increment
  • it is not possible to get that information from bup, even after the fact - you need to run du before and after the backup (a workaround is sketched after this list)
  • attic modifies the files' access times when backing up, while bup is more careful (there's a pull request to fix this in attic, which is how i found out about it)
  • both backup systems seem to produce roughly the same data size from the same input
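As mentioned in the list above, approximating the size of a bup increment takes a manual du dance around the save; a minimal sketch, using the same paths as the benchmark:

before=$(du -s /mnt/bup-test | cut -f1)
bup index ~/src && bup save -n src ~/src
after=$(du -s /mnt/bup-test | cut -f1)
echo "increment: $((after - before)) KiB"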

Summary

attic and bup are about equally fast. bup took 30 seconds less than attic to save the files, but that's not counting the 1m45s it took to index them, so on total run time bup was actually slower. attic was also (almost) two times faster on the second run. but this could be within the margin of error of this very quick experiment, so my provisional verdict is that they are about as fast.

bup may be more robust (for example it doesn't modify the atimes), but this has not been extensively tested and is based more on my familiarity with the "conservatism" of the bup team than on actual tests.

considering all the features promised by attic, it makes for a really serious contender to the already amazing bup.

Next steps

To do this properly, we would need to:

  • include other software (thinking of Zbackup, Burp, ddar, obnam, rdiff-backup and duplicity)
  • bench attic with the noatime patch
  • bench dev attic vs dev bup
  • bench data removal
  • bench encryption
  • test data recovery
  • run multiple backup runs, on different datasets, on a cleaner environment
  • ideally, extend seivot to do all of that

Vincent Sanders: NetSurf Developer workshop IV

18 November, 2014 - 03:54
Over the weekend the NetSurf developers met to make a concentrated effort on improving the browser. This time we were kindly hosted by Codethink in their Manchester office in a pleasant environment with plenty of refreshments.

Five developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone, Rob Kendrick and Vincent Sanders. We also had Chris Young providing some bug fixes remotely.

We started the weekend by discussing all the thorny core issues that had been put on the agenda and ensuring the outcomes were properly noted. We also held the society AGM which was minuted by Daniel.

The emphasis of this weekend was very much on planning and doing the disruptive changes we had been putting off until we were all together.

John-Mark and myself managed to change the core build system used by all the libraries to use standard triplets to identify systems and to follow the GNU autoconf style of naming for parameters (i.e. HOST, BUILD and CC being used correctly).

This was accompanied by improvements and configuration changes to the CI system to accommodate the new usage.

Several issues from the bug tracker were addressed and we put ourselves in a stronger position to address numerous other usability problems in the future.

We managed to pack a great deal into the 20 hours of work on Saturday and Sunday although because we were concentrating much more on planning and infrastructure rather than a release the metrics of commits and files changed were lower than at previous events.

Niels Thykier: The first 12 days and 408 unblock requests into the Jessie freeze

18 November, 2014 - 03:17

The release team is receiving an extreme number of unblock requests right now.  For the past 22 days[1], we have received no less than 408 unblock/ageing requests.  That is an average of ~18.5/day.  In the same period, the release team has closed 350 unblock requests, averaging 15.9/day.

This number does not account for the unblocks we add without a request, when we happen to spot something while looking at the list of RC bugs[2]. Nor does it account for unblock requests currently tagged “moreinfo”, of which there are currently 25.

All in all, it has been 3 intensive weeks for the release team.  I am truly proud of my fellow team members for keeping up with this for so long!  Also a thanks to the non-RT members, who help us by triaging and reviewing the unblock requests!  It is much appreciated. :)


Random bonus info:

  • d (our diffing tool) finally got colordiff support during the Release Sprint last week.  Prior to that, we got black’n’white diffs!
    • ssh coccia.debian.org -t /srv/release.debian.org/tools/scripts/d <srcpkg>
    • Though coccia.debian.org does not have colordiff installed right now.  I have filed a request to have it installed.
  • The release team has about 132 (active) unblock hints deployed right now in our hint files.


[1] We started receiving some in the 10 days before the freeze as people realised that their uploads would need an unblock to make it into Jessie.

[2] Related topics: “what is adsb?” (the answer being: Our top hinter for Wheezy)



Daniel Leidert: Rsync files between two machines over SSH and limit read access

17 November, 2014 - 23:32

From time to time I need to get contents from a remote machine to my local workstation. The data is sometimes big and I don't want to start all over again if something fails. Further, the transmission should be secure and the connection should be limited to syncing only this path and its sub-directories. So I've set up a way to do this using rsync and ssh, and I'm going to describe this setup.

Consider you have already created an SSH key, say ~/.ssh/key_rsa together with ~/.ssh/key_rsa.pub, and that on the remote machine there is an SSH server running which allows login by public key, and that rsync is available. Let's further assume the following:

  • the remote machine is rsync.domain.tld
  • the path on the remote machine that holds the data is /path/mydata
  • the user on the remote machine being able to read /path/mydata and to login via SSH is remote_user
  • the path on the local machine to put the data is /path/mydest
  • the user on the local machine being able to write /path/mydest is local_user
  • the user on the local machine has the private key ~local_user/.ssh/key_rsa and the public key ~local_user/.ssh/key_rsa.pub

Now the public key ~local_user/.ssh/key_rsa.pub is added to the remote user's ~remote_user/.ssh/authorized_keys file. The file will then probably look like this (there is just one very long line with the key, here cut out by [..]):

ssh-rsa [..]= user@domain.tld

Now I would like to limit the abilities of a user logging in with this key to only rsyncing the special directory /path/mydata. I therefore precede the key with a command prefix, which is explained in the manual page sshd(8). The file then looks like this:

command="/usr/bin/rsync --server --sender -vlogDtprze . /path/mydata",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [..]= user@domain.tld
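A more flexible variant is to point command= at a small wrapper script that validates SSH_ORIGINAL_COMMAND instead of hard-coding a single rsync invocation. A minimal sketch (the wrapper path is made up, and the expected option string should be checked against what your rsync client actually sends):

#!/bin/sh
# ~remote_user/bin/rsync-only.sh, referenced from authorized_keys as:
#   command="/home/remote_user/bin/rsync-only.sh",no-pty,... ssh-rsa [..]
# Only allow rsync in server/sender mode, restricted to /path/mydata.
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server --sender "*" . /path/mydata"*)
        # Relies on word splitting; assumes no spaces in the path.
        exec $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "rejected: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac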

I then can rsync the remote directory to a local location over SSH by running:

rsync -avz -P --delete -e ssh remote_user@rsync.domain.tld:/path/mydata/ /path/mydest

That's it.

Thorsten Alteholz: Manage own CA with Debian

17 November, 2014 - 20:57

Self-signed SSL certificates are nice, but they only provide encryption of the retrieved data. Nobody knows who is really sending the data.

If one buys an SSL certificate for a website, the browser doesn’t complain as much as with a self-signed certificate. But can you really trust the other side? Almost every commercial CA has some kind of “fast validation” or “domain validation, issued in minutes”, which is done by email or phone. So if required, within minutes anybody might become you. Even with putting money on the table, your users cannot be sure whether this server really belongs to the right guy.

Well, why waste time and money? Just create your own root CA and tell users that they need to add something in order to avoid some error messages. In Debian we basically have five packages that claim to be able to manage some kind of CA.
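For comparison, the bare openssl route without any frontend boils down to a handful of commands. A minimal sketch (key sizes, digests, validity periods and file names are arbitrary choices):

# Create a root CA key and a self-signed root certificate (SHA-256, 10 years).
openssl genrsa -aes256 -out rootca.key 4096
openssl req -x509 -new -sha256 -days 3650 -key rootca.key -out rootca.crt

# Issue a server certificate signed by that root CA.
openssl req -new -sha256 -nodes -newkey rsa:2048 -keyout server.key -out server.csr
openssl x509 -req -sha256 -days 365 -in server.csr -CA rootca.crt -CAkey rootca.key -CAcreateserial -out server.crt

The packages below exist to take over the bookkeeping (serials, revocation, Sub-CAs) that this quickly degenerates into.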

easy-rsa is mainly needed to manage certificates used by openVPN. Within this use case it works like a charm, but I don’t want to manage a more complex CA with it.

gnomint is dead upstream and only uses SHA1 as its signature algorithm. This will cause lots of problems, as Microsoft and Google want to deprecate SHA1 in their products by 2017. Besides, this package is already orphaned and maybe it can disappear now.

tinyCA supports more signature algorithms, but unfortunately SHA1 seems to be the “best” it can do. There are some patches to support up to SHA512, but they don't work for all parts of the software yet. For example, Sub-CAs still use SHA1 despite choosing something different in the GUI. So: nice, but not (yet) usable in Jessie.

FreeIPA seems to be great, but didn’t make it into Jessie in time. Unfortunately the Release Team has reasons to not unblock it. So nice, but not usable in Jessie.

xca is based on Qt4. As announced in the 15th DPN of 2014, the deprecated Qt4 will be removed from Debian Stretch (= Jessie+1). Apart from this, the software meets all my requirements.

Jonathan Wiltshire: Getting things into Jessie (#2)

17 November, 2014 - 17:45
If your request doesn’t appear on the list, probably the diff was too big

There’s a size limit for mails to debian-release@lists.debian.org, and in general that’s how we read unblock requests. If your mail doesn’t appear on the list, but you do get a bug number back, your mail is likely too large.

In this case, that’s probably a sign that we won’t accept your request without exceptional circumstances. Making so many changes in a package is almost certainly not appropriate for this phase of the cycle.

Do use filterdiff to remove automatically-generated files from your diff before sending it, which increases the chances of acceptance.
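For example, something along these lines strips autotools noise out of a debdiff before attaching it (the package name and exclude patterns are purely illustrative):

debdiff package_1.0-1.dsc package_1.0-2.dsc \
    | filterdiff -x '*/configure' -x '*/Makefile.in' -x '*/aclocal.m4' \
    > package.debdiff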

Do follow up to the bug if it doesn’t appear on the list; send a short message explaining what happened and why your giant diff should be considered.

Don’t be surprised if thousands and thousands of lines changed are rejected.


Chris Lamb: Calculating the number of pedal turns on a bike ride

17 November, 2014 - 16:34

If you have a cadence sensor on your bike such as the Garmin GSC-10, you can approximate the number of pedal turns you made on the bike ride using the following script (requires GPSBabel):

#!/bin/sh

STYLE="$(mktemp)"

cat >${STYLE} <<EOF
FIELD_DELIMITER COMMA
RECORD_DELIMITER NEWLINE
OFIELD CADENCE,"","%d"
EOF

exec gpsbabel -i garmin_fit -f "${1}" -o xcsv,style=${STYLE} -F- |
    awk '{x += $1} END {print int(x / 60)}'

... then call with:

$ sh cadence.sh ~/path/to/2014-11-16-14-46-05.fit
24344

Unfortunately the Garmin .fit format doesn't store the actual number of pedal turns, only the average cadence for each particular second. However, since cadence is measured in revolutions per minute and sampled once per second, summing the per-second values and dividing by 60 (which is what the awk expression above does) recovers the total number of turns. The result should be reasonably accurate as long as one keeps a fairly steady cadence.

As a bonus, using a small amount of shell plumbing you can then sum an entire year's worth of riding like so:

$ for X in ~/path/to/2014-*.fit; do sh cadence.sh ${X}; done | awk '{x += $1} END { print x }'
749943

Paul Tagliamonte: BOS -> DC

17 November, 2014 - 09:06

Hello, World

Been a while since my last blog post - things have been a bit hectic lately, and I’ve not really had the time.

Now that things have settled down a bit — I’m in DC! I’ve moved down south to join the rest of my colleagues at Sunlight to head up our State & Local team.

Leaving behind the brilliant Free Software community in Boston won’t be easy, but I’m hoping to find a similar community here in DC.

Daniel Leidert: Install automatically starting XBMC to N54L microserver under Debian Wheezy 7.7

17 November, 2014 - 06:12

This is a followup to my previous post about getting sound output from the Sapphire Radeon HD 6450 card in my HP N54L microserver via HDMI. This post will describe how to install XBMC from Wheezy backports and how to start it automatically. Again, there are various ways and I'll only describe mine. Further, this is what I did so far: enable the audio output for the Radeon card and install X.org together with lightdm.

Step 3 - Install XBMC

This is a pretty easy task. I've chosen to install XBMC 13.2 from the Wheezy backports repository.

# apt-get install -t wheezy-backports xbmc

Step 4 - Automatically start XBMC

There are various ways; some involve starting it as a service using init scripts for sysvinit or upstart or systemd. You'll easily find them. I've chosen to create a user, automatically log him into X and start XBMC. The user is called xbmc.

# adduser --home /home/xbmc --add_extra_groups xbmc

I chose to set a password, but I wonder if using --disabled-password would work too. Next I adjusted /etc/lightdm/lightdm.conf. Below are only the differences to the stock version of this file; I haven't touched other lines.

[SeatDefaults]
greeter-session=lightdm-gtk-greeter
user-session=XBMC
autologin-guest=false
autologin-user=xbmc
autologin-user-timeout=0

The file /usr/share/xsessions/XBMC.desktop is the stock one, no changes made.
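For reference, the stock session file shipped with the xbmc package looks roughly like this (quoted as a sketch, not verbatim):

[Desktop Entry]
Name=XBMC
Comment=This session will start XBMC media center
Exec=xbmc-standalone
TryExec=xbmc-standalone
Type=Application

After restarting lightdm: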

# service lightdm restart

XBMC is started automatically. If anything goes wrong or doesn't work, I suggest checking /var/log/auth.log, /home/xbmc/.xsession-errors and /var/log/lightdm/*.log. In a few cases it seems necessary to log in the user xbmc manually once, although that wasn't necessary here.

JFTR: When I checked /var/log/auth.log I saw a few errors and installed gnome-keyring too:

apt-get install --install-recommends gnome-keyring

Step 5 - Useful packages

There are some packages, which might be useful running XBMC, e.g.

Conclusion

I'm now running XBMC on top of Debian Wheezy on the N54L microserver without a bloated desktop environment. Video and sound are working fine, though it was necessary to install recent firmware and a recent kernel from Wheezy backports to get it done. The system automatically starts the XBMC session on start/reboot. I'm now examining the XBMC features :)

Tollef Fog Heen: Resigning as a Debian systemd maintainer

17 November, 2014 - 05:55

Apparently, people care when you, as a privileged person (white, male, long-time Debian Developer), throw in the towel because the amount of crap thrown your way just becomes too much. I guess that's good, both because it gives me a soap box for a short while, but also because if enough people talk about how poisoned the well that is Debian has become, we can fix it.

This morning, I resigned as a member of the systemd maintainer team. I then proceeded to leave the relevant IRC channels and announced this on twitter. The responses I've gotten have almost all been heartwarming. People have generally been offering hugs, saying thanks for the work put into systemd in Debian and so on. I've greatly appreciated those (and I've been getting those before I resigned too, so this isn't just a response to that). I feel bad about leaving the rest of the team, they're a great bunch: competent, caring, funny, wonderful people. On the other hand, at some point I had to draw a line and say "no further".

Debian and its various maintainer teams are a bunch of tribes (with possibly Debian itself being a supertribe). Unlike many other situations, you can be part of multiple tribes. I'm still a member of the DSA tribe for instance. Leaving pkg-systemd means leaving one of my tribes. That hurts. It hurts even more because it feels like a forced exit rather than because I've lost interest or been distracted by other shiny things for long enough that you don't really feel like part of a tribe. That happened with me with debian-installer. It was my baby for a while (with a then quite small team), then a bunch of real life things interfered and other people picked it up and ran with it and made it greater and more fantastic than before. I kinda lost touch, and while it's still dear to me, I no longer identify as part of the debian-boot tribe.

Now, how did I, standing stout and tall, get forced out of my tribe? I've been a DD for almost 14 years, I should be able to weather any storm, shouldn't I? It turns out that no, the mountain does get worn down by the rain. It's not a single hurtful comment here and there. There's a constant drum about this all being some sort of conspiracy and there are sometimes flares where people wish people involved in systemd would be run over by a bus or just accusations of incompetence.

Our code of conduct says, "assume good faith". If you ever find yourself not doing that, step back, breathe. See if there's a reasonable explanation for why somebody is saying something or behaving in a way that doesn't make sense to you. It might be as simple as your native tongue being English and theirs being something else.

If you do genuinely disagree with somebody (something which is entirely fine), try not to escalate, even if the stakes are high. Examples from the last year include talking about this as a war and talking about "increasingly bitter rear-guard battles". By using and accepting this terminology, we, as a project, poison ourselves. Sam Hartman puts this better than me:

I'm hoping that we can all take a few minutes to gain empathy for those who disagree with us. Then I'm hoping we can use that understanding to reassure them that they are valued and respected and their concerns considered even when we end up strongly disagreeing with them or valuing different things.

I'd be lying if I said I didn't ever feel the urge to demonise my opponents in discussions; that they're worse, as people, than I am. However, it is imperative to never give in to this, since doing so will diminish us as humans and make the entire project poorer. Civil disagreements with reasonable discussions lead to better technical outcomes, happier humans and a healthier project.

John Goerzen: Contemplative Weather

17 November, 2014 - 05:30

Sometimes I look out the window and can’t help but feel “this weather is deep.” Deep with meaning, with import. Almost as if the weather is confident of itself, and is challenging me to find some meaning within it.

This weekend brought the first blast of winter to the plains of Kansas. Saturday was chilly and windy, and overnight a little snow fell. Just enough to cover up the ground and let the tops of the blades of grass poke through. Just enough to make the landscape look totally different, without completely hiding what lies beneath. Laura and I stood silently at the window for a few minutes this morning, gazing out over the untouched snow, extending out as far as we can see.

Yesterday, I spent some time with my great uncle and aunt. My great uncle isn’t doing so well. He’s been battling cancer and other health issues for some time, and can’t get out of the house very well. We talked for an hour and a half – about news of the family, struggles in life now and in the past, and joys. There were times when all three of us had tears in our eyes, and times when all of us were laughing so loudly. My great uncle managed to stand up twice while I was there — this took quite some effort — once to give me a huge hug when I arrived, and another to give me an even bigger hug when I left. He has always been a person to give the most loving hugs.

He hadn’t been able to taste food for awhile, due to treatment for cancer. When I realized he could taste again, I asked, “When should I bring you some borscht?” He looked surprised, then got a huge grin, glanced at his watch, and said, “Can you be back by 3:00?”

His brother, my grandpa, was known for his beef borscht. I also found out my great uncle’s favorite kind of bread, and thought that maybe I would do some cooking for him sometime soon.

Today on my way home from church, I did some shopping. I picked up the ingredients for borscht and for bread. I came home, said hi to the cats that showed up to greet me, and went inside. I turned on the radio – Prairie Home Companion was on – and started cooking.

It takes a long time to prepare what I was working on – I spent a solid two hours in the kitchen. As I was chopping up a head of cabbage, I remembered coming to what is now my house as a child, when my grandpa lived here. I remembered his borscht, zwiebach, monster cookies; his dusty but warm wood stove; his closet with toys in it. I remembered two years ago, having nearly 20 Goerzens here for Christmas, hosted by the boys and me, and the 3 gallons of borscht I made for the occasion.

I poured in some tomato sauce, added some water. The radio was talking about being kind to people, remembering that others don’t always have the advantages we do. Garrison Keillor’s fictional boy in a small town, when asked what advantages he had, mentioned “belonging.” Yes, that is an advantage. We all deal with death, our own and that of loved ones, but I am so blessed by belonging – to a loving family, two loving churches, a wonderful community.

Out came three pounds of stew beef. Chop, chop, slice, plunk into the cast iron Dutch oven. It’s my borscht pot. It looks as if it would be more at home over a campfire than a stovetop, but it works anywhere.

Outside, the sun came up. The snow melts a little, and the cats start running around even though it’s still below freezing. They look like they’re having fun playing.

I’m chopping up parsley and an onion, then wrapping them up in a cheesecloth to make the spice ball for the borscht. I add the basil and dill, some salt, and plonk them in, too. My 6-quart pot is nearly overflowing as I carefully stir the hearty stew.

On the radio, a woman who plays piano in a hospital and had dreamed of being on that particular radio program for 13 years finally was. She played with passion and delight I could hear through the radio.

Then it’s time to make bread. I pour in some warm water, add some brown sugar, and my thoughts turn to Home On The Range. I am reminded of this verse:

How often at night when the heavens are bright
With the light from the glittering stars
Have I stood here amazed and asked as I gazed
If their glory exceeds that of ours.

There’s something about a beautiful landscape out the window to remind a person of all the blessings in life. This has been a quite busy weekend — actually, a busy month — but despite the fact I have a relative that is sick in the midst of it all, I am so blessed in so many ways.

I finish off the bread, adding some yeast, and I remember my great uncle thanking me so much for visiting him yesterday. He commented that “a lot of younger people have no use for visiting an old geezer like me.” I told him, “I’ve never been like that. I am so glad I could come and visit you today. The best gifts are those that give in both directions, and this surely is that.”

Then I clean up the kitchen. I wipe down the counters from all the bits of cabbage that went flying. I put away all the herbs and spices I used, and finally go to sit down and reflect. From the kitchen, the smells of borscht and bread start to seep out, sweeping up the rest of the house. It takes at least 4 hours for the borscht to cook, and several hours for the bread, so this will be an afternoon of waiting with delicious smells. Soon my family will be home from all their activities of the day, and I will be able to greet them with a warm house and the same smells I stepped into when I was a boy.

I remember this other verse from Home On the Range:

Where the air is so pure, the zephyrs so free,
The breezes so balmy and light,
That I would not exchange my home on the range
For all of the cities so bright.

Today’s breeze is an icy blast from the north – maybe not balmy in the conventional sense. But it is the breeze of home, the breeze of belonging. Even today, as I gaze out at the frozen landscape, I realize how balmy it really is, for I know I wouldn’t exchange my life on the range for anything.

Gregor Herrmann: RC bugs 2014/45-46

17 November, 2014 - 05:07

I was not home much during the last two weeks, so there is not much to report about RC bug activities. Some general observations:

  1. the RC bug count is still relatively low, even after lucas' archive rebuild.
  2. the release team is extremely fast in handling unblock requests - kudos! – pro tip: file them quickly after uploading, or some{thing,one} else might be faster :)

my small contributions:

  • #765327 – libnet-dns-perl: "lintian fails if the machine has a link-local IPv6 nameserver configured"
    discuss possible fix with upstream (pkg-perl)
  • #768683 – src:libmoosex-storage-perl: "libmoosex-storage-perl: missing runtime dependencies cause (build) failures in other packages"
    move packages from Recommends to Depends (pkg-perl)
  • #768692 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #768712 – src:libpoe-component-client-mpd-perl: "libpoe-component-client-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #769003 – libgluegen2-build-java: "libjogl2-java: FTBFS on arm64, ppc64el, s390x"
    add patch from Colin Watson (pkg-java)

Daniel Leidert: Getting the audio over HDMI to work for the HP N54L microserver running Debian Wheezy and a Sapphire Radeon HD 6450

17 November, 2014 - 02:13
Conclusion: Sound over HDMI works with the Sapphire Radeon HD 6450 card in my HP N54L microserver. It requires a recent kernel and firmware from Wheezy 7.7 backports and the X.org server. There is no sound without X.org, even if audio has been enabled for the radeon kernel module.

Last year I couldn't get audio over HDMI to work after I installed a Sapphire Radeon HD 6450 1 GB (11190-02-20g) card into my N54L microserver. The cable that connects the HDMI interfaces between the card and the TV monitor supports HDMI 1.3, so audio should have been possible even then. However, I didn't get any audio output by XBMC playing video or music files. Nothing happened with stock Wheezy 7.1 and X.org/XBMC installed. So I removed the latter two and used the server as stock server without X/desktop and delayed my plans for an HTPC.

Now I tried again after I found some new hints that made me curious enough for a second try :) Imagine my joy when (finally) speaker-test produced noise on the TV! So here is my configuration and a step-by-step guide to

  • enable Sound over HDMI for the Radeon HD 6450
  • install a graphical environment
  • install XBMC
  • automatically start XBMC on boot

The latter two will be covered by a second post. Also note that there is a lot of information out there about achieving the above tasks, so this is only about my configuration. Some packages below are marked as optional. A few are necessary only for the N54L microserver (firmware) and for a few I'm not sure whether they are necessary at all.

Step 1 - Prepare the system

At this point I don't have any desktop nor any other graphical environment (X.org) installed. First I purged pulseaudio and related packages completely and only use ALSA:

# apt-get autoremove --purge pulseaudio pulseaudio-utils pulseaudio-module-x11 gstreamer0.10-pulseaudio
# apt-get install alsa-base alsa-utils alsa-oss

Next I installed a recent linux kernel and recent firmware from Wheezy backports:

# apt-get install -t wheezy-backports linux-image-amd64 firmware-linux-free firmware-linux firmware-linux-nonfree firmware-atheros firmware-bnx2 firmware-bnx2x

This put linux-image-3.16-0.bpo.3-amd64 and recent firmware onto my system. I've chosen to upgrade linux-image-amd64 instead of picking a specific (recent) kernel package from Wheezy backports, so that I keep up to date with recent kernels from there.

Then I enabled the audio output of the kernel radeon module. Essentially there are at least three ways to do this; I chose to set the audio parameter in /etc/modprobe.d/radeon.conf. The hw_i2c parameter is disabled. I read that it might cause trouble with the audio output here, although I never personally experienced it:

options radeon audio=1 hw_i2c=0

JFTR: This is how I boot the N54L by default:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force pcie_aspm=force nmi_watchdog=0"

After rebooting I see this for the Radeon card in question:


# lsmod | egrep snd\|radeon\|drm | awk '{print $1}' | sort
[..]
drm
drm_kms_helper
i2c_algo_bit
i2c_core
radeon
snd
snd_hda_codec
snd_hda_codec_hdmi
snd_hda_controller
snd_hda_intel
snd_hwdep
snd_pcm
snd_seq
snd_seq_device
snd_timer
soundcore
ttm
[..]
# lspci -k
[..]
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Caicos [Radeon HD 6450/7450/8450 / R5 230 OEM]
Subsystem: PC Partner Limited / Sapphire Technology Device e204
Kernel driver in use: radeon
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Caicos HDMI Audio [Radeon HD 6400 Series]
Subsystem: PC Partner Limited / Sapphire Technology Radeon HD 6450 1GB DDR3
Kernel driver in use: snd_hda_intel
[..]
# cat /sys/module/radeon/parameters/audio
1
# cat /sys/module/radeon/parameters/hw_i2c
0
# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
Subdevices: 0/1
Subdevice #0: subdevice #0
# aplay -L
null
Discard all samples (playback) or generate zero samples (capture)
pulse
PulseAudio Sound Server
hdmi:CARD=HDMI,DEV=0
HDA ATI HDMI, HDMI 0
HDMI Audio Output

At this point, without having the X.org server installed, I still have no audio output to the connected monitor. Running alsamixer I only see the S/PDIF bar for the HDA ATI HDMI device, showing a value of 00. I can mute and un-mute this device but not change the value. No need to worry, sound comes with step two.

Step 2 - Install a graphical environment (X.org server)

Next is to install a graphical environment, basically the X.org server. In Debian this is done by the desktop task. Unfortunately tasksel makes use of APT::Install-Recommends="true" and would install a desktop environment and some more recommended packages. At the moment I don't want this, only X. So basically I installed only the task-desktop package with its dependencies:

# apt-get install task-desktop xfonts-cyrillic

Next is to install a display manager. I've chosen lightdm:

# apt-get install lightdm accountsservice

Done. Now (re-)start the X server. Simply ...

# service lightdm restart

... should do. And now there is sound, probably due to the X.org Radeon driver. The following command created noise on the two monitor speakers :)

# speaker-test -c2 -D hdmi:0 -t pink

Finally there is sound over HDMI!
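For applications that open the ALSA default device instead of selecting hdmi:0 explicitly, the default PCM can also be pointed at the HDMI output. A minimal /etc/asound.conf sketch, assuming the card and device names reported by aplay -L above:

# /etc/asound.conf - route the ALSA default PCM to the HDMI output
pcm.!default {
    type plug
    slave.pcm "hdmi:CARD=HDMI,DEV=0"
}
ctl.!default {
    type hw
    card HDMI
}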

Step 3 - Install XBMC

To be continued ...

Bernhard R. Link: Enabling Change

16 November, 2014 - 22:51

Big changes are always a complicated thing to get done, and they get harder the bigger or more diverse the organization they take place in.

Transparency

Ideally every change is well communicated early and openly. Leaving people in the dark about what will change and when means people have much less time to get comfortable with it or arrange themselves with it mentally. Especially bad is extending the change later or shortening transition periods: letting people think they have some time to transition, only to force them to rush later, will remove any credibility you have and severely reduce their ability to believe you are not crossing them. Making a new way optional is a great way to create security (see below), but declaring it obligatory before it has even reached them as an option will not make them very willing to embrace change.

Take responsibility

Every transformation has costs. Even if some change only improved things and did not make anything worse once implemented (the ideal change you will never meet in reality), the deployment of the change still costs: processes have adapted to the old ways, people have to relearn how to do things, how to detect if something goes wrong, how to fix it, documentation has to be adapted, and so on. Even if the change causes more good than costs in the whole organization (let's hope it does; I hope you wouldn't try to do something whose total benefit is negative), the benefits, and thus the benefit-to-cost ratio, will differ between the different parts of your organization and the different people within it. It's hardly avoidable that for some people there will not be much benefit, much less perceived benefit, compared to the costs they have to bear for it. Those are the people whose good will you want to fight for, not the people you want to fight against.

They have to pay with their labor/resources, and thus their good will, for the overall benefit.

This is much easier if you acknowledge that fact. If you blame them for the costs they bear, claim their situation does not even exist, or ridicule them for not embracing change, you only prepare yourself for frustration. You might be able to persuade yourself that everyone who is not willing to invest in the change is just acting out of malevolent self-interest. But you will hardly be able to persuade people that it is evil not to help your cause if you treat them as enemies.

And once you ignored or played down costs that later actually happen, your credibility in being able to see the big picture will simply cease to exist at all for the next change.

Allow different metrics

People have different opinions about priorities, about what is important, about how much something costs and even about what constitutes a problem. If you want to persuade them, try to take that into account. If you do not understand why something is given as a reason, it might be because the point is stupid. But it might also be that you are missing something. And often there is simply a different valuation of what is important, what the costs are and what the problems are. If you want to persuade people, it is worth trying to understand those valuations.

If all you want to do is persuade some leader or some majority, then ridiculing their concerns might get you somewhere. But how do you want to win people over if you do not even appear to understand their problems? Why should people trust you that their costs will be worth the overall benefits if you tell them the costs that they clearly see do not exist? How credible is referring to the bigger picture if the part of the picture they can see does not match what you say the bigger picture looks like?

Don't get trolled and don't troll

There will always be people who are unreasonable or even try to provoke you. Don't allow yourself to be provoked. Remember that for successful changes you need to win broad support. Feeling personally attacked, or feeling confronted with a large number of pointless arguments, easily results in not giving proper responses or not actually looking at the arguments. If someone is only trolling and purely malevolent, they will tease you best by bringing up actual concerns of people in a way where answering is likely to degrade yourself and your point. Becoming impertinent with the troll is like attacking the annoying little goblin hiding next to the city guards with area damage.

When you are not able to persuade people, it is also far too easy to assume they act in bad faith and/or to attack them personally. This can only escalate things even more. Worst case, you frustrate someone acting in good faith. In most cases you poison the discussion so much that people actually arguing in good faith will no longer contribute to it. It might be rewarding in the short term, because after some escalation only obviously unreasonable people will talk against you, but it makes it much harder to find solutions together that could benefit everyone, and almost impossible to persuade those who simply left the discussion.

Give security

Last but not least, remember that humans are quite risk-averse. In general they might invest in (even small) chances to win, but they go a long way to avoid risks. Thus an important part of enabling change is to reduce risks, real and perceived, and to give people a feeling of security.

In the end, almost every measure boils down to that: You give people security by giving them the feeling that the whole picture is considered in decisions (by bringing them into the process early, by making sure their concerns are understood and are part of the global profit/cost calculation, and by making sure their experiences with the change are part of the evaluation). You give people security by allowing them to predict and control things (by transparency about plans, about how far the change will go, and about guaranteed transition periods, and by giving them enough time so they can actually plan and do the transition). You give people security by avoiding early points of no return (by having wide enough tests, rollback scenarios, ...). You give people security by not leaving them alone (by having good documentation, availability of training, ...).

Especially the side-by-side availability of old and new is an extremely powerful tool, as it fits all of the above: It allows people to actually test the change (and not some little prototype mostly but not quite totally unrelated to reality) so their feedback can be heard. It makes things more predictable, as all the new ways can be tried before the old ones stop working. It is the ultimate roll-back scenario (just switch off the new). And it allows for learning the new before losing the old.

Of course, giving people a feeling of security needs resources. But it is a very powerful way to get people to embrace the change.

Also, in my experience, people who only fear for themselves will usually be mostly passive, not pushing forward and trying to avoid or escape the changes. (After all, working against something costs energy, so purely egoistic behavior is quite limiting in that regard.) Most people actively pushing back do it because they fear for something larger than just themselves. And any measure that makes them fear less that you will ruin the overall organization not only avoids unnecessary hurdles in rolling out the change, but also gives you some small chance of actually avoiding running into disaster with closed eyes.

Vincent Bernat: Staging a Netfilter ruleset in a network namespace

16 November, 2014 - 22:28

A common way to build a firewall ruleset is to run a shell script calling iptables and ip6tables. This is convenient since you get access to variables and loops. There are three major drawbacks with this method:

  1. While the script is running, the firewall is temporarily incomplete. Even if existing connections can be arranged to be left untouched, the new ones may not be allowed to be established (or unauthorized flows may be allowed). Also, essential NAT rules or mangling rules may be absent.

  2. If an error occurs, you are left with a half-working firewall. Therefore, you should ensure that some rules authorizing remote access are set very early, or implement some kind of automatic rollback system.

  3. Building a large firewall can be slow. Each ip{,6}tables command will download the ruleset from the kernel, add the rule and upload the whole modified ruleset to the kernel.

Using iptables-restore

A classic way to solve these problems is to build a rule file that will be read by iptables-restore and ip6tables-restore1. Those tools send the ruleset to the kernel in one pass. The kernel applies it atomically. Usually, such a file is built with ip{,6}tables-save but a script can fit the task.
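As a bonus, such a file can be syntax-checked without touching the running firewall, provided your iptables version is recent enough to support the --test flag of ip{,6}tables-restore (worth verifying on your system; the file names are placeholders):

$ iptables-restore --test < ruleset.v4
$ ip6tables-restore --test < ruleset.v6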

The ruleset syntax understood by ip{,6}tables-restore is similar to the syntax of ip{,6}tables but each table has its own block and chain declaration is different. See the following example:

$ iptables -P FORWARD DROP
$ iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
$ iptables -N SSH
$ iptables -A SSH -p tcp --dport ssh -j ACCEPT
$ iptables -A INPUT -i lo -j ACCEPT
$ iptables -A OUTPUT -o lo -j ACCEPT
$ iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A FORWARD -j SSH
$ iptables-save
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
COMMIT

*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:SSH - [0:0]
-A INPUT -i lo -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j SSH
-A OUTPUT -o lo -j ACCEPT
-A SSH -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT

As you see, we have one block for the nat table and one block for the filter table. The user-defined chain SSH is declared at the top of the filter block with other builtin chains.

Here is a script diverting ip{,6}tables commands to build such a file (heavily relying on some Zsh-fu2):

#!/bin/zsh
set -e

work=$(mktemp -d)
trap "rm -rf $work" EXIT

# ➊ Redefine ip{,6}tables
iptables() {
    # Intercept -t
    local table="filter"
    [[ -n ${@[(r)-t]} ]] && {
        # Which table?
        local index=${(k)@[(r)-t]}
        table=${@[(( index + 1 ))]}
        argv=( $argv[1,(( $index - 1 ))] $argv[(( $index + 2 )),$#] )
    }
    [[ -n ${@[(r)-N]} ]] && {
        # New user chain
        local index=${(k)@[(r)-N]}
        local chain=${@[(( index + 1 ))]}
        print ":${chain} -" >> ${work}/${0}-${table}-userchains
        return
    }
    [[ -n ${@[(r)-P]} ]] && {
        # Policy for a builtin chain
        local index=${(k)@[(r)-P]}
        local chain=${@[(( index + 1 ))]}
        local policy=${@[(( index + 2 ))]}
        print ":${chain} ${policy}" >> ${work}/${0}-${table}-policy
        return
    }
    # iptables-restore only handle double quotes
    echo ${${(q-)@}//\'/\"} >> ${work}/${0}-${table}-rules #'
}
functions[ip6tables]=${functions[iptables]}

# ➋ Build the final ruleset that can be parsed by ip{,6}tables-restore
save() {
    for table (${work}/${1}-*-rules(:t:s/-rules//)) {
        print "*${${table}#${1}-}"
        [ ! -f ${work}/${table}-policy ] || cat ${work}/${table}-policy
        [ ! -f ${work}/${table}-userchains ] || cat ${work}/${table}-userchains
        cat ${work}/${table}-rules
        print "COMMIT"
    }
}

# ➌ Execute rule files
for rule in $(run-parts --list --regex '^[.a-zA-Z0-9_-]+$' ${0%/*}/rules); do
    . $rule
done

# ➍ Apply the generated rulesets
ret=0
save iptables  | iptables-restore  || ret=$?
save ip6tables | ip6tables-restore || ret=$?
exit $ret

In ➊, a new iptables() function is defined and will shadow the iptables command. It will try to locate the -t parameter to know which table should be used. If such a parameter exists, the table is remembered in the $table variable and removed from the list of arguments. Defining a new chain (with -N) is also handled as well as setting the policy (with -P).

In ➋, the save() function will output a ruleset that should be parseable by ip{,6}tables-restore. In ➌, user rules are executed. Each ip{,6}tables command will call the previously defined function. When no error has occurred, in ➍, ip{,6}tables-restore is invoked. The command will either succeed or fail.

This method works just fine3. However, the second method is more elegant.

Using a network namespace

A hybrid approach is to build the firewall rules with ip{,6}tables in a newly created network namespace, save them with ip{,6}tables-save and apply them in the main namespace with ip{,6}tables-restore. Here is the gist (still using Zsh syntax):

#!/bin/zsh
set -e

alias main='/bin/true ||'
[ -n $iptables ] || {
    # ➊ Execute ourself in a dedicated network namespace
    iptables=1 unshare --net -- \
        $0 4> >(iptables-restore) 6> >(ip6tables-restore)
    # ➋ In main namespace, disable iptables/ip6tables commands
    alias iptables=/bin/true
    alias ip6tables=/bin/true
    alias main='/bin/false ||'
}

# ➌ In both namespaces, execute rule files
for rule in $(run-parts --list --regex '^[.a-zA-Z0-9_-]+$' ${0%/*}/rules); do
    . $rule
done

# ➍ In test namespace, save the rules
[ -z $iptables ] || {
    iptables-save >&4
    ip6tables-save >&6
}

In ➊, the current script is executed in a new network namespace. Such a namespace has its own ruleset that can be modified without altering the one in the main namespace. The $iptables environment variable tells us which namespace we are in. In the new namespace, we execute all the rule files (➌). They contain classic ip{,6}tables commands. If an error occurs, we stop here and nothing happens, thanks to the use of set -e. Otherwise, in ➍, the rulesets of the new namespace are saved using ip{,6}tables-save and sent to dedicated file descriptors.

Now, the execution in the main namespace resumes in ➊. The results of ip{,6}tables-save are fed to ip{,6}tables-restore. At this point, the firewall is mostly operational. However, we then replay the rule files (➌) with the ip{,6}tables commands disabled (➋), so that additional commands in the rule files, like enabling IP forwarding, are executed.

The new namespace does not provide the same environment as the main namespace. For example, there is no network interface in it, so we cannot get or set IP addresses. A command that must not be executed in the new namespace should be prefixed by main:

main ip addr add 192.168.15.1/24 dev lan-guest

You can look at a complete example on GitHub.

  1. Another nifty tool is iptables-apply which will apply a rule file and rollback after a given timeout unless the change is confirmed by the user. 

  2. As you can see in the snippet, Zsh comes with some powerful features to handle arrays. Another big advantage of Zsh is it does not require quoting every variable to avoid field splitting. Hence, the script can handle values with spaces without a problem, making it far more robust. 

  3. If I were nitpicking, there are three small flaws with it. First, when an error occurs, it can be difficult to match the appropriate location in your script since you get the position in the ruleset instead. Second, a table can be used before it is defined. So, it may be difficult to spot some copy/paste errors. Third, the IPv4 firewall may fail while the IPv6 firewall is applied, and vice-versa. Those flaws are not present in the next method. 

Vincent Bernat: Intel Wireless 7260 as an access point

16 November, 2014 - 22:27

My home router acts as an access point with an Intel Dual-Band Wireless-AC 7260 wireless card. This card supports 802.11ac (on the 5 GHz band) and 802.11n (on both the 5 GHz and 2.4 GHz band). While this seems a very decent card to use in managed mode, this is not really a great choice for an access point.

$ lspci -k -nn -d 8086:08b1
03:00.0 Network controller [0280]: Intel Corporation Wireless 7260 [8086:08b1] (rev 73)
        Subsystem: Intel Corporation Dual Band Wireless-AC 7260 [8086:4070]
        Kernel driver in use: iwlwifi

TL;DR: Use an Atheros card instead.

Limitations

First, the card is said to be “dual-band”, but you can only use one band at a time because there is only one radio. Almost all wireless cards have this limitation. If you want to use both the 2.4 GHz band and the less crowded 5 GHz band, two cards are usually needed.

5 GHz band

There is no support for setting up an access point on the 5 GHz band: the firmware doesn’t allow it. This can be checked with iw:

$ iw reg get
country CH: DFS-ETSI
        (2402 - 2482 @ 40), (N/A, 20), (N/A)
        (5170 - 5250 @ 80), (N/A, 20), (N/A)
        (5250 - 5330 @ 80), (N/A, 20), (0 ms), DFS
        (5490 - 5710 @ 80), (N/A, 27), (0 ms), DFS
        (57240 - 65880 @ 2160), (N/A, 40), (N/A), NO-OUTDOOR
$ iw list
Wiphy phy0
[...]
        Band 2:
                Capabilities: 0x11e2
                        HT20/HT40
                        Static SM Power Save
                        RX HT20 SGI
                        RX HT40 SGI
                        TX STBC
                        RX STBC 1-stream
                        Max AMSDU length: 3839 bytes
                        DSSS/CCK HT40
                Frequencies:
                        * 5180 MHz [36] (20.0 dBm) (no IR)
                        * 5200 MHz [40] (20.0 dBm) (no IR)
                        * 5220 MHz [44] (20.0 dBm) (no IR)
                        * 5240 MHz [48] (20.0 dBm) (no IR)
                        * 5260 MHz [52] (20.0 dBm) (no IR, radar detection)
                          DFS state: usable (for 192 sec)
                          DFS CAC time: 60000 ms
                        * 5280 MHz [56] (20.0 dBm) (no IR, radar detection)
                          DFS state: usable (for 192 sec)
                          DFS CAC time: 60000 ms
[...]

While the 5 GHz band is allowed by the CRDA, all frequencies are marked with no IR. Here is the explanation for this flag:

The no-ir flag exists to allow regulatory domain definitions to disallow a device from initiating radiation of any kind and that includes using beacons, so for example AP/IBSS/Mesh/GO interfaces would not be able to initiate communication on these channels unless the channel does not have this flag.

Multiple SSID

This card can only advertise one SSID. Managing several of them is useful to set up distinct wireless networks, like a public access (routed to Tor), a guest access and a private access. iw can confirm this:

$ iw list
        valid interface combinations:
                 * #{ managed } <= 1, #{ AP, P2P-client, P2P-GO } <= 1, #{ P2P-device } <= 1,
                   total <= 3, #channels <= 1

Here is the output for an Atheros card able to manage 8 SSIDs:

$ iw list
        valid interface combinations:
                 * #{ managed, WDS, P2P-client } <= 2048, #{ IBSS, AP, mesh point, P2P-GO } <= 8,
                   total <= 2048, #channels <= 1

Configuration as an access point

Except for those two limitations, the card works fine as an access point. Here is the configuration that I use for hostapd:

interface=wlan-guest
driver=nl80211

# Radio
ssid=XXXXXXXXX
hw_mode=g
channel=11

# 802.11n
wmm_enabled=1
ieee80211n=1
ht_capab=[HT40-][SHORT-GI-20][SHORT-GI-40][DSSS_CCK-40]

# WPA
auth_algs=1
wpa=2
wpa_passphrase=XXXXXXXXXXXXXXX
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Because of the use of channel 11, only the 802.11n HT40- rate can be enabled. Look at the Wikipedia page for 802.11n to check whether you can use HT40-, HT40+ or both.

Vincent Bernat: Replacing Swisscom router by a Linux box

16 November, 2014 - 22:26

I have recently moved to Lausanne, Switzerland. Broadband Internet access is not as cheap as in France: Free, a French ISP, provides FTTH access with a bandwidth of 1 Gbps1 for about 38 € (including TV and phone service), while Swisscom provides roughly the same service for about 200 €2. Swisscom fiber access was available for my apartment and I chose the 40 Mbps contract without phone service for about 80 €.

Like many ISPs, Swisscom provides an Internet box with an additional box for TV. I didn’t unpack the TV box as I have no use for it. The Internet box comes with some nice features like the ability to set up firewall rules, a guest wireless access and some file sharing possibilities. No shell access!

I have bought a small PC to act as router and replace the Internet box. I have loaded the upcoming Debian Jessie on it. You can find the whole software configuration in a GitHub repository.

This blog post only covers the Swisscom-specific setup (and QoS). Have a look at those two blog posts for related topics:

Ethernet

The Internet box is packed with a Siligence-branded 1000BX SFP3. This SFP receives and transmits data on the same fiber using a different wavelength for each direction.

Instead of using a network card with an SFP port, I bought a Netgear GS110TP which comes with 8 gigabit copper ports and 2 fiber SFP ports. It is a cheap switch bundled with many interesting features like VLAN and LLDP. It works fine if you don’t expect too much from it.

IPv4

IPv4 connectivity is provided over VLAN 10. A DHCP client is mandatory. Moreover, the DHCP vendor class identifier option (option 60) needs to be advertised. This can be done by adding the following line to /etc/dhcp/dhclient.conf when using the ISC DHCP client:

send vendor-class-identifier "100008,0001,,Debian";

The first two numbers are here to identify the service you are requesting. I suppose this can be read as requesting the Swisscom residential access service. You can put whatever you want after that. Once you get a lease, you need to use a browser to identify yourself to Swisscom on the first use.
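For reference, here is a minimal sketch of the corresponding interface setup; the physical interface name eth0 is an assumption, and the VLAN interface is named internet to match the QoS section below:

# Create the VLAN 10 interface on top of the physical uplink,
# then request a lease on it (option 60 comes from dhclient.conf).
ip link add link eth0 name internet type vlan id 10
ip link set internet up
dhclient -4 internet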

IPv6

Swisscom provides IPv6 access through the 6rd protocol. This is a tunneling mechanism to facilitate IPv6 deployment across an IPv4 infrastructure. This kind of tunnel has been natively supported by Linux since kernel version 2.6.33.

To set up IPv6, you need the base IPv6 prefix and the 6rd gateway. Some ISPs provide those values through DHCP (option 212), but this is not the case for Swisscom. The gateway is 6rd.swisscom.com and the prefix is 2a02:1200::/28. After appending the IPv4 address to the prefix, you still get 4 bits for internal subnets.
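As a sanity check, take the documentation address 192.0.2.1 (0xc0000201 in hexadecimal): appending its 32 bits to the 28-bit prefix 2a02:120 yields the delegated prefix 2a02:120c:0:2010::/60, leaving the last nibble of the fourth group for the 4-bit subnet identifier.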

Swisscom doesn’t provide a fixed IPv4 address. Therefore, it is not possible to precompute the IPv6 prefix. When installed as a DHCP hook (in /etc/dhcp/dhclient-exit-hooks.d/6rd), the following script configures the tunnel:

sixrd_mtu=1472                  # This is 1500 - 20 - 8 (PPPoE header)
sixrd_ttl=64
sixrd_prefix=2a02:1200::/28     # No way to guess, just have to know it.
sixrd_br=193.5.29.1             # That's "6rd.swisscom.com"

sixrd_down() {
    ip tunnel del internet6 || true
}

sixrd_up() {
    ipv4=${new_ip_address:-$old_ip_address}

    sixrd_subnet=$(ruby <<EOF
require 'ipaddr'
prefix = IPAddr.new "${sixrd_prefix}", Socket::AF_INET6
prefixlen = ${sixrd_prefix#*/}
ipv4 = IPAddr.new "${ipv4}", Socket::AF_INET
ipv6 = IPAddr.new (prefix.to_i + (ipv4.to_i << (64 + 32 - prefixlen))), Socket::AF_INET6
puts ipv6
EOF
)

    # Let's configure the tunnel
    ip tunnel add internet6 mode sit local $ipv4 ttl $sixrd_ttl
    ip addr add ${sixrd_subnet}1/64 dev internet6
    ip link set mtu ${sixrd_mtu} dev internet6
    ip link set internet6 up
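    # The kernel encapsulates towards the border relay: ::193.5.29.1
    # is its IPv4-compatible IPv6 address.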
    ip route add default via ::${sixrd_br} dev internet6
}

case $reason in
    BOUND|REBOOT)
        sixrd_down
        sixrd_up
        ;;
    RENEW|REBIND)
        if [ "$new_ip_address" != "$old_ip_address" ]; then
            sixrd_down
            sixrd_up
        fi
        ;;
    STOP|EXPIRE|FAIL|RELEASE)
        sixrd_down
        ;;
esac

The computation of the IPv6 prefix is offloaded to Ruby instead of trying to do it in the shell. Even though the ipaddr module is pretty “basic”, it does the job.
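Since the hook is plain shell, it can also be exercised by hand to check the resulting tunnel; a sketch, using a placeholder address and assuming root privileges and Ruby available:

reason=BOUND new_ip_address=192.0.2.1 \
    sh /etc/dhcp/dhclient-exit-hooks.d/6rd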

Swisscom is using the same MTU for all clients. Because some of them are using PPPoE, the MTU is 1472 instead of 1480. You can easily check your MTU with this handy online MTU test tool.
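The tunnel MTU can also be checked from the router itself: with an MTU of 1472, the largest ICMPv6 echo payload that fits unfragmented is 1472 - 40 - 8 = 1424 bytes (IPv6 and ICMPv6 echo headers). A sketch with a placeholder destination:

ping6 -M do -s 1424 -c 3 example.net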

It is not uncommon for PMTUD to be broken on some parts of the Internet. While not ideal, clamping the TCP MSS will alleviate any problems you may run into with an MTU of less than 1500:

ip6tables -t mangle -A POSTROUTING -o internet6 \
          -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu
QoS

Once upon a time, QoS was a tricky subject. The Wonder Shaper was a common way to get a somewhat working setup. Nowadays, thanks to the work of the Bufferbloat project, there are two simple steps to get something quite good:

  1. Reduce the transmit queue of your network devices to something like 32 packets. This helps TCP detect congestion and act accordingly, while still being able to saturate a gigabit link.

    ip link set txqueuelen 32 dev lan
    ip link set txqueuelen 32 dev internet
    ip link set txqueuelen 32 dev wlan
    
  2. Change the root qdisc to fq_codel. A qdisc receives packets to be sent from the kernel and decides how they are handed to the network card. Packets can be dropped, reordered or rate-limited. fq_codel is a queuing discipline combining fair queuing and controlled delay. Fair queuing means that all flows get an equal chance to be served; put another way, a high-bandwidth flow won’t starve the other flows. Controlled delay means that the queue size is limited to ensure the latency stays low. This is achieved by dropping packets more aggressively when the queue grows.

    tc qdisc replace dev lan root fq_codel
    tc qdisc replace dev internet root fq_codel
    tc qdisc replace dev wlan root fq_codel
    
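To verify that fq_codel is in place and watch it at work, inspect the qdisc statistics; the device name comes from the examples above:

tc -s qdisc show dev internet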
  1. Maximum download speed is 1 Gbps, while maximum upload speed is 200 Mbps. 

  2. This is the standard Vivo XL package rated at CHF 169.– plus the 1 Gbps option at CHF 80.–. 

  3. There are two references on it: SGA 441SFP0-1Gb and OST-1000BX-S34-10DI. It transmits on the 1310 nm wavelength and receives on the 1490 nm one. 

Dirk Eddelbuettel: Introducing RcppAnnoy

16 November, 2014 - 21:36

A new package RcppAnnoy is now on CRAN.

It wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify.

While Annoy is set up for use from Python, RcppAnnoy offers the exact same functionality from R via Rcpp.

A new page for RcppAnnoy provides some more detail, example code and further links. See a recent blog post by Erik for a performance comparison of different approximate nearest neighbours libraries for Python.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Stefano Zacchiroli: Debsources Participation in FOSS Outreach Program

16 November, 2014 - 19:45
Jingjie Jiang selected as OPW intern for Debsources

I'm glad to announce that Jingjie Jiang, AKA sophiejjj, has been selected as an intern to work on Debsources as part of the FOSS Outreach Program (formerly known as the Outreach Program for Women, or OPW). I'll co-mentor her work together with Matthieu Caneill.

I've just added sophiejjj's blog to Planet Debian, so you will soon hear about her work in the Debian blogosphere.

I've been impressed by the interest that the Debsources proposal has generated in this round of OPW. Together with Matthieu I have interacted with more than a dozen OPW applicants. Many of them have contributed useful patches during the application period, and those patches have been in production at http://sources.debian.net for quite a while now (see the commit log for details). A special mention goes to Akshita Jha, who has shown a lot of determination in tackling both simple and complex issues affecting Debsources. I hope there will be other chances to work with her in the future.

The OPW internship will begin on December 9th; fasten your seat belts for a boost in Debsources development!

Jonathan Wiltshire: Getting things into Jessie (#1)

16 November, 2014 - 17:35
Make it easy for us to review your request

The release team gets a lot of mail at this time in the cycle. Make it easy for us by:

  • including as much information as you can think of
    • yes, even if you think it’s too much
    • remember we have probably never seen your package before
    • if you do write a lot, include a short summary at the top
  • not deviating from the freeze policy without a really good reason
    • explain in your request why you deviated and why your reason is really good
  • demonstrating that you’ve considered and checked your changes carefully
    • that’s one (but not the sole) reason we ask for a source debdiff (and assume you’ve read it yourself)

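For reference, a source debdiff can be produced with the debdiff tool from devscripts; a sketch with placeholder package names and versions:

debdiff foo_1.0-1.dsc foo_1.0-2.dsc > foo.debdiff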
 

Getting things into Jessie (#1) is a post from: jwiltshire.org.uk | Flattr
