Planet Debian

Planet Debian - http://planet.debian.org/

Petter Reinholdtsen: Debian Jessie, PXE and automatic firmware installation

17 October, 2014 - 20:10

When PXE installing laptops with Debian, I often run into the problem that the WiFi card requires some firmware to work properly, and fixing this using preseeding in Debian has been a pain; normally something more is needed. But thanks to my isenkram package and its recent tasksel extension, it has now become easy to do this using simple preseeding.

The isenkram-cli package provides tasksel tasks which will install firmware for the hardware found in the machine (more precisely, the firmware requested by the kernel modules for that hardware). (It can also install user space programs supporting the detected hardware, but that is not the focus of this story.)

To get this working in the default installation, two preseeding values are needed. First, the isenkram-cli package must be installed into the target chroot (aka the hard drive) before tasksel is executed in the pkgsel step of the debian-installer system. This is done by preseeding the base-installer/includes debconf value to include the isenkram-cli package; the package name is then passed to debootstrap for installation. With isenkram-cli in place, tasksel will automatically use its tasks to detect hardware specific packages for the machine being installed and install them.

Second, one needs to enable the non-free APT repository, because most firmware is non-free. This is done by preseeding the apt-mirror-setup step. It is unfortunate, but for a lot of hardware this is the only option in Debian.

The end result is two lines needed in your preseeding file to get firmware installed automatically by the installer:

base-installer base-installer/includes string isenkram-cli
apt-mirror-setup apt-setup/non-free boolean true

The current version of isenkram-cli in testing/jessie will install both firmware and user space packages when using this method. That version also does not work reliably, so use version 0.15 or later. Installing both firmware and user space packages might give you a bit more than you want, so I decided to split the tasksel task in two: one for firmware and one for user space programs. The firmware task is enabled by default, while the one for user space programs is not. This split is implemented in the package currently in unstable.

If you decide to give this a go, please let me know (via email) how the recipe works for you. :)

So, I bet you are wondering: how can this work? First and foremost, it works because tasksel is modular, and driven by whatever files it finds in /usr/lib/tasksel/ and /usr/share/tasksel/. So the isenkram-cli package places two files for tasksel to find. First there is the task description file (/usr/share/tasksel/descs/isenkram.desc):

Task: isenkram-packages
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: show show
Relevance: 8
Packages: for-current-hardware

Task: isenkram-firmware
Section: hardware
Description: Hardware specific firmware packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific firmware
 packages are proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware-firmware

The key parts are the Test-new-install line, which indicates how the task should be handled, and the Packages line, which references a script in /usr/lib/tasksel/packages/. These scripts use other scripts to get the list of packages to install. The for-current-hardware-firmware script looks like this, listing the relevant firmware for the machine:

#!/bin/sh
#
PATH=/usr/sbin:$PATH
export PATH
isenkram-autoinstall-firmware -l
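
The companion for-current-hardware script presumably works the same way, calling isenkram's package lookup instead of the firmware listing. Here is a minimal sketch of what it might look like (my reconstruction, assuming the isenkram-lookup command; not copied from the package):

#!/bin/sh
#
# Hypothetical sketch: list the user space packages supporting
# the hardware detected on this machine.
PATH=/usr/sbin:$PATH
export PATH
isenkram-lookup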

With those two pieces in place, the firmware is installed by tasksel during the normal d-i run. :)

If you want to see what tasksel would install when isenkram-cli is present, run DEBIAN_PRIORITY=critical tasksel --test --new-install to get the list of packages.

Debian Edu will be the pilot for testing this feature, as isenkram is used there now to install firmware, replacing the earlier scripts.

Junichi Uekawa: test.

17 October, 2014 - 06:20
test.

Bits from Debian: Help empower the Debian Outreach Program for Women

17 October, 2014 - 01:30

Debian is thrilled to participate in the 9th round of the GNOME FOSS Outreach Program. While OPW is similar to the Google Summer of Code, it has a winter session in addition to a summer session, and it is open to non-students.

Back at DebConf 14 several of us decided to volunteer because we want to increase diversity in Debian. Shortly thereafter the DPL announced Debian's participation in OPW 2014.

We have reached out to several corporate sponsors and are thrilled that so far Intel has agreed to fund an intern slot (in addition to the slot offered by the DPL)! While that makes two funded slots we have a third sponsor that has offered a challenge match: for each dollar donated by an individual to Debian the sponsor will donate another dollar for Debian OPW.

This is where we need your help! If we can raise $3,125 by October 22, the match means we can fund a third intern ($6,250 in total). Please spread the word and donate today if you can at: http://debian.ch/opw2014/

If you'd like to participate as an intern, the application deadline is the same (October 22nd). You can find out more on the Debian Wiki.

Junichi Uekawa: Trying to migrate to new server and new infrastructure.

15 October, 2014 - 15:47
Trying to migrate to new server and new infrastructure.

Raphaël Hertzog: Freexian’s second report about Debian Long Term Support

15 October, 2014 - 15:45

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September 2014, 3 contributors have been paid for 11h each. Here are their individual reports:

Evolution of the situation

Compared to last month, we have gained 5 new sponsors; that’s great. We’re now at almost 25% of a full-time position. But we’re not done yet. We believe that we would need at least twice as many sponsored hours to do a reasonable job on at least the most used packages, and possibly four times as much to cover the full archive.

We’re now at 39 packages that need an update in Squeeze (+9 compared to last month), and the contributors paid by Freexian handled 11 of them last month (with 33 sponsored hours, this gives an approximate rate of 3 hours per update, CVE triage included).

Open questions

Dear readers, what can we do to convince more companies to join the effort?

The list of sponsors contains almost exclusively companies from Europe. It’s true that Freexian’s offer is in Euros, but the economy is world-wide and international invoices are common. When Ivan Kohler asked whether an offer in dollars would help convince other companies, we got zero feedback.

What are the main obstacles that you face when you try to convince your managers to get the company to contribute?

By the way, we prefer that companies take small sponsorship commitments that they can sustain over multiple years, rather than granting lots of money now and then not being able to afford it the next year.

Thanks to our sponsors

Let me thank our main sponsors:

Matthew Palmer: My entry in the "Least Used Software EVAH" competition

15 October, 2014 - 13:00

For some reason, I seem to end up writing software for very esoteric use-cases. Today, though, I think I’ve outdone myself: I sat down and wrote a Ruby library to get and set process resource limits – those things that nobody ever thinks about except when they run out of file descriptors.

I didn’t even have a direct need for it. Recently I was grovelling through the EventMachine codebase, looking at the filehandle limit code, and noticed that the pure-ruby implementation didn’t manipulate filehandle limits. I considered adding it, then realised that there wasn’t a library available to do it. Since I haven’t berked around with FFI for a while, I decided to write rlimit. Now to find the time to write that patch for EventMachine…

Since I doubt there are many people who have a burning need to manipulate rlimits in Ruby, this gem will no doubt sit quiet and undisturbed in the dark, dusty corners of rubygems.org. However, for the three people on earth who find this useful: you’re welcome.

Julian Andres Klode: Key transition

15 October, 2014 - 05:46

I started transitioning from 1024D to 4096R. The new key is available at:

https://people.debian.org/~jak/pubkey.gpg

and the keys.gnupg.net key server. A very short transition statement is available at:

https://people.debian.org/~jak/transition-statement.txt

and included below (the http version might get extended over time if needed).

The key consists of one master key and 3 sub keys (signing, encryption, authentication). The sub keys are stored on an OpenPGP v2 Smartcard. That’s really cool, isn’t it?

Somehow it seems that GnuPG 1.4.18 also works with 4096R keys on this smartcard (I accidentally used it instead of gpg2 and it worked fine), although only GPG 2.0.13 and newer is supposed to work.
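
If you want to verify the transition statement yourself, something along these lines should work (my commands, assuming wget and gpg are available and that the old key is also in your keyring):

# fetch and import the new key
wget https://people.debian.org/~jak/pubkey.gpg
gpg --import pubkey.gpg
# fetch the dual-signed statement and check both signatures
wget https://people.debian.org/~jak/transition-statement.txt
gpg --verify transition-statement.txt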

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512

Because 1024D keys are not deemed secure enough anymore, I switched to
a 4096R one.

The old key will continue to be valid for some time, but i prefer all
future correspondence to come to the new one.  I would also like this
new key to be re-integrated into the web of trust.  This message is
signed by both keys to certify the transition.

the old key was:

pub   1024D/00823EC2 2007-04-12
      Key fingerprint = D9D9 754A 4BBA 2E7D 0A0A  C024 AC2A 5FFE 0082 3EC2

And the new key is:

pub   4096R/6B031B00 2014-10-14 [expires: 2017-10-13]
      Key fingerprint = AEE1 C8AA AAF0 B768 4019  C546 021B 361B 6B03 1B00

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlQ9j+oACgkQrCpf/gCCPsKskgCgiRn7DoP5RASkaZZjpop9P8aG
zhgAnjHeE8BXvTSkr7hccNb2tZsnqlTaiQIcBAEBCgAGBQJUPY/qAAoJENc8OeVl
gLOGZiMP/1MHubKmA8aGDj8Ow5Uo4lkzp+A89vJqgbm9bjVrfjDHZQIdebYfWrjr
RQzXdbIHnILYnUfYaOHUzMxpBHya3rFu6xbfKesR+jzQf8gxFXoBY7OQVL4Ycyss
4Y++g9m4Lqm+IDyIhhDNY6mtFU9e3CkljI52p/CIqM7eUyBfyRJDRfeh6c40Pfx2
AlNyFe+9JzYG1i3YG96Z8bKiVK5GpvyKWiggo08r3oqGvWyROYY9E4nLM9OJu8EL
GuSNDCRJOhfnegWqKq+BRZUXA2wbTG0f8AxAuetdo6MKmVmHGcHxpIGFHqxO1QhV
VM7VpMj+bxcevJ50BO5kylRrptlUugTaJ6il/o5sfgy1FdXGlgWCsIwmja2Z/fQr
ycnqrtMVVYfln9IwDODItHx3hSwRoHnUxLWq8yY8gyx+//geZ0BROonXVy1YEo9a
PDplOF1HKlaFAHv+Zq8wDWT8Lt1H2EecRFN+hov3+lU74ylnogZLS+bA7tqrjig0
bZfCo7i9Z7ag4GvLWY5PvN4fbws/5Yz9L8I4CnrqCUtzJg4vyA44Kpo8iuQsIrhz
CKDnsoehxS95YjiJcbL0Y63Ed4mkSaibUKfoYObv/k61XmBCNkmNAAuRwzV7d5q2
/w3bSTB0O7FHcCxFDnn+tiLwgiTEQDYAP9nN97uibSUCbf98wl3/
=VRZJ
-----END PGP SIGNATURE-----


Joachim Breitner: Switching to systemd-networkd

15 October, 2014 - 03:30

Ever since I read that systemd-networkd was in the making I was looking forward to trying it out. I kept watching for the package to appear in Debian, or at least for ITP bugs. A few days ago, by accident, I noticed that I already have systemd-networkd on my machine: it is simply shipped with the systemd package!

My previous setup was a combination of ifplugd, to detect when I plug or unplug the ethernet cable, with a plain DHCP entry in /etc/network/interfaces. A while ago I was using guessnet to do a static setup depending on where I am, but I don’t need this flexibility any more, so the very simple approach with systemd-networkd is just fine with me. So after stopping ifplugd and

$ cat > /etc/systemd/network/eth.network <<__END__
[Match]
Name=eth0
[Network]
DHCP=yes
__END__
$ systemctl enable systemd-networkd
$ systemctl start systemd-networkd

I was ready to go. Indeed, systemd-networkd, probably due to the integrated dhcp client, felt quite a bit faster than the old setup. And what’s more important (and my main motivation for the switch): It did the right thing when I put it to sleep in my office, unplug it there, go home, plug it in and wake it up. ifplugd failed to detect this change and I often had to manually run ifdown eth0 && ifup eth0; this now works.

But then I was bitten by what I guess some people call the viral nature of systemd: systemd-networkd does not update /etc/resolv.conf itself, but rather relies on systemd-resolved. And that requires me to change /etc/resolv.conf to be a symlink to /run/systemd/resolve/resolv.conf. But of course I also use my wireless adapter, which, at that point, was still managed using ifupdown, which would use dhclient, which updates /etc/resolv.conf directly.
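
Spelled out, the switch to systemd-resolved looks roughly like this (my commands, following the description above):

# let systemd-resolved manage DNS lookups
systemctl enable systemd-resolved
systemctl start systemd-resolved
# point resolv.conf at the file resolved maintains
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf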

So I investigated whether I could use systemd-networkd for my wireless adapter as well. I am not using NetworkManager or the like, but rather keep wpa_supplicant running in roaming mode, controlled from ifupdown (not sure how exactly that works and what controls what, but it worked). I found out that this setup works just fine with systemd-networkd: I start wpa_supplicant with this service file (which I found in the wpasupplicant repo, but not yet in the Debian package):

[Unit]
Description=WPA supplicant daemon (interface-specific version)
Requires=sys-subsystem-net-devices-%i.device
After=sys-subsystem-net-devices-%i.device

[Service]
Type=simple
ExecStart=/sbin/wpa_supplicant -c/etc/wpa_supplicant/wpa_supplicant-%I.conf -i%I

[Install]
Alias=multi-user.target.wants/wpa_supplicant@%i.service
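
Given the Alias line, the per-interface instance is then enabled and started in the usual way (my invocation, not from the original post):

systemctl enable wpa_supplicant@wlan0.service
systemctl start wpa_supplicant@wlan0.service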

Then wpa_supplicant will get the interface up and down as it goes, while systemd-networkd, equipped with

[Match]
Name=wlan0
[Network]
DHCP=yes

does the rest.

So suddenly I have a system without /etc/init.d/networking and without ifup. Feels a bit strange, but also makes sense. I still need to migrate how I manage my UMTS modem device to that model.

The only thing that I’m missing so far is a way to trigger actions when the network configuration changes, like I could with /etc/network/if-up.d/ etc. I want to run things like killall -ALRM tincd and exim -qf. If you know how to do that, please tell me, or answer over at Stack Exchange.

Gunnar Wolf: When Open Access meets the Napster anniversary

15 October, 2014 - 00:58

Two causally unrelated events which fit together in the greater scheme of things ;-)

In some areas, the world is better aligning to what we have been seeking for many years. In some, of course, it is not.

In this case, today I found that our article on the Network of Digital Repositories for our University was published in the Revista Digital Universitaria [en línea]. We were invited to prepare an article on this topic because this month's issue of the magazine is devoted to Open Access in Mexico and Latin America, as a law was recently passed that makes conditions much more interesting for the nonrestricted publication of academic research. Of course, there is still a long way to go, but this clearly is a step in the right direction.

On the other hand, after a long time of not looking in that direction (even though it's a lovely magazine), I found that this edition of FirstMonday takes as its main topic Napster, 15 years on: Rethinking digital music distribution.

I know that nonrestricted academic publishing via open access and nonauthorized music sharing via Napster are two very different topics. However, there is a continuous push and trend towards considering and accepting open licensing terms, and they are both points in the same struggle. An interesting data point to add is that, although many different free licenses have existed over time, Creative Commons (which gave a lot of visibility and made the discussion within the reach of many content creators) was created in 2001 — 13 years ago today, two years after Napster. And, yes, there are no absolute coincidences.

Marco d'Itri: The Italian peering ecosystem

15 October, 2014 - 00:34

I published the slides of my talk "An introduction to peering in Italy - Interconnections among the Italian networks" that I presented today at the MIX-IT (the Milano internet exchange) technical meeting.

Philipp Kern: pbuilder and pam_tmpdir

14 October, 2014 - 06:28
It turns out that my recent woes with pbuilder were all due to libpam-tmpdir being installed (at least two old bug reports exist about this issue: #576425 and #725434). I rather like my private temporary directory that cannot be accessed by other (potential) users on the same system. Previously I used a hook to fix this up by ensuring that the directory actually exists in the chroot, but somehow that recently broke.

A rather crude but working solution seems to be "session required pam_env.so user_readenv=1" in /etc/pam.d/sudo and "TMPDIR=/tmp" in /root/.pam_environment. One could probably skip pam_tmpdir.so for root, but I did not want to start fighting with pam-auth-update as this is in /etc/pam.d/common-session*.
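
Spelled out, the workaround consists of two small additions (exactly as quoted above):

# /etc/pam.d/sudo: read the user environment file for sudo sessions
session required pam_env.so user_readenv=1

# /root/.pam_environment: force a predictable TMPDIR for root
TMPDIR=/tmp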

Konstantinos Margaritis: SIMD optimizations, cont.

14 October, 2014 - 03:19

A friend of mine told me that I should advertise my passion for and know-how of SIMD more, and I decided to follow his advice. Though I am terrible at marketing, and even more so at personal marketing, I've made an attempt to do just that and advertise the fact that I'm offering SIMD Optimization Services (with emphasis on PowerPC AltiVec/VMX/VSX and ARM NEON, but I'm OK with SSE as well; the logic is pretty much the same, though the differences are in the details). For this reason I'm offering a free evaluation of your performance critical code (open or closed; able to sign NDAs if needed) to let you know whether it's worth optimizing, what kind of performance gain you would get, and how much it would cost you to get that result.
You can read more here.

John Goerzen: Update on the systemd issue

14 October, 2014 - 01:46

The other day, I wrote about my poor first impressions of systemd in jessie. Here’s an update.

I’d like to start with the things that are good. I found the systemd community to be one of the most helpful in Debian, and the #debian-systemd IRC channel especially so. I was in there for quite some time yesterday, and appreciated the help from many people, especially Michael. This is a nontechnical factor, but an extremely important one; it has significantly allayed my concerns about systemd right there.

There are things about the systemd design that impress. The dependency and configuration systems are a lot more flexible than sysvinit’s. They are also a lot more complicated, and it is difficult to figure out what’s happening. I am unconvinced of the utility of parallelizing the boot to begin with; I rarely reboot any of my Linux systems, desktops or servers, and it seems to introduce needless complexity.

Anyhow, on to the filesystem problem, and a bit of background. My laptop runs ZFS, which is somewhat similar to btrfs in that it’s a volume manager (like LVM), RAID manager (like md), and filesystem in one. My system runs LVM, and inside LVM, I have two ZFS “pools” (volume groups): one, called rpool, that is unencrypted and holds mainly the operating system; and the other, called crypt, that is stacked atop LUKS. ZFS on Linux doesn’t yet have built-in crypto, which is why LVM is even in the picture here (to separate out the SSD at a level above ZFS to permit parts of it to be encrypted). This is a bit of an antiquated setup for me; as more systems have AES-NI, I’m moving towards everything except /boot being encrypted.

Anyhow, inside rpool are the / filesystem, /var, and /usr. Inside crypt are /tmp and /home.

Initially, I tried to just boot it, knowing that systemd is supposed to work with LSB init scripts, and ZFS has init scripts with carefully-planned dependencies. This was evidently not working, perhaps because of /lib/systemd/systemd/… It turns out that systemd has a few assumptions that are less true with ZFS than otherwise. ZFS filesystems are normally not mounted via /etc/fstab; a ZFS pool has internal properties about which dataset gets mounted where (similar to LVM’s actions after a vgscan and vgchange -ay). Even though there are ordering constraints in the units, systemd was writing files to /var before /var got mounted, resulting in the mount failing (unlike ext4, ZFS by default will reject an attempt to mount over a non-empty directory). Partly this is due to the debian-fixup.service, and partly it is due to systemd reacting to udev items like backlight.

This problem was eventually worked around by doing zfs set mountpoint=legacy rpool/var, and then adding a line to fstab (“rpool/var /var zfs defaults 0 2”) for /var and its descendent filesystems.
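
Concretely, that workaround amounts to (reconstructed from the description above):

# tell ZFS to stop auto-mounting /var and defer to fstab
zfs set mountpoint=legacy rpool/var

# /etc/fstab entry for /var
rpool/var /var zfs defaults 0 2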

This left the problem of /tmp; again, it wasn’t getting mounted soon enough. In this case, it required crypttab to be processed first, and there seem to be a lot of bugs in the crypttab processing in systemd (more on that below). I eventually worked around that by adding After=cryptsetup.target to the zfs-import-cache.service file. For /tmp, it did NOT work to put it in /etc/fstab, because then it tried to mount it before starting cryptsetup for some reason. It probably didn’t help that the system’s cryptdisks.service is a symlink to /dev/null, a fact I didn’t realize until after a lot of needless reboots.
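
The After=cryptsetup.target addition can also be expressed as a drop-in, leaving the shipped unit untouched (a sketch; the post does not say which way it was done):

# /etc/systemd/system/zfs-import-cache.service.d/wait-for-crypt.conf
[Unit]
After=cryptsetup.target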

Anyhow, one thing I stumbled across was poor console control with systemd. On numerous occasions, I had things like two cryptsetup processes trying to read a password, plus an emergency mode console trying to do so. I had this memorable line of text at one point:

(or type Control-D to continue): Please enter passphrase for disk athena-crypttank (crypt)! [ OK ] Stopped Emergency Shell.

And here we venture into unsatisfying territory with systemd. One answer to this in IRC was to install plymouth, which apparently serializes console I/O. However, plymouth is “an attractive boot animation in place of the text messages that normally get shown.” I don’t want an “attractive boot animation”. Yet neither systemd-sysv nor cryptsetup depends on plymouth, so by default, the prompt for a password at boot is obscured by various other text.

Worse, plymouth doesn’t support serial consoles, so at the moment booting a system that uses LUKS with systemd over a serial console is a matter of blind luck of typing the right password at the right time.

In the end, though, the system booted and after a few more tweaks, the backlight buttons do their thing again. Whew!

Update 2014-10-13: uau pointed out that Plymouth is more than a bootsplash, and can work with serial consoles, despite the description of the package. I stand corrected on that. (It is still the case, however, that packages don’t depend on it where they should, and the default experience for people using cryptsetup is not very good.)

Steve McIntyre: Successful Summer of Code in Linaro

14 October, 2014 - 01:06

It's past time I wrote about how Linaro's students fared in this year's Google Summer of Code. You might remember me posting earlier in the year when we welcomed our students. We started with 3 student projects at the beginning of the summer. One of the students unfortunately didn't work out, but the other two were hugely successful.

Gaurav Minocha was a graduate student at the University of British Columbia, Vancouver, Canada. He worked on Linux Flattened Device Tree Self-checking, mentored by Grant Likely from Linaro's Office of the CTO. Gaurav achieved all of his project's goals, and he was invited to the recent Linaro Connect USA conference in California to meet people and talk about his project. He and Grant presented a session on their work; it was filmed, and video is online. Grant said he was very happy with Gaurav's "strong, solid performance" during the project.

Varad Gautam was a student at Birla Institute of Technology and Science, Pilani, India. He succeeded in porting UEFI to the BeagleBone Black. Leif Lindholm from the Linaro Enterprise Group was his mentor for the summer. At the end of the summer, Varad delivered a UEFI port ready for booting Linux, and his code was included in Linaro's September UEFI release. Leif said that he was "very pleased with Varad's self sufficiency and ability to pick up an entirely new software project very quickly". We were hoping to invite Varad to Connect in California also, but travel document delays got in the way. With luck we'll see him at the next Connect in Hong Kong in February 2015.

Well done, guys! It was great to work with these young developers for the summer, and we wish them lots more success in their future endeavours.

Google have also just confirmed that they will be running the Summer of Code program again in 2015. I'm hoping that Linaro will be accepted again next year as a mentoring organisation. I'll post more about that early next year.

Dirk Eddelbuettel: Seinfeld streak at GitHub

13 October, 2014 - 08:23

Early last year, I mentioned a Seinfeld Streak in a blog post about almost two months of updates to the Rcpp Gallery. This is sometimes called Jerry Seinfeld's secret to productivity: Just keep at it. Don't break the streak.

I now have a different streak:

Now we'll see how far this one will go.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Wiltshire: Clean builds for the win

13 October, 2014 - 04:50

I’ve just spent a little time squashing several bugs on the trot, all the same: insufficient build-dependencies when built in a clean environment. Typically this means that the package was uploaded after being built on a developer’s normal machine, which already has everything required installed.

It’s long been the case that we have several ways to build packages in a clean chroot before upload, which reveals these sorts of errors and more. There’s not really any excuse for uploading packages that fail to build in this way.
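
For instance, with pbuilder (one of several options; sbuild and friends work too, and the package name below is just a placeholder):

# one-time setup of the clean chroot
sudo pbuilder --create --distribution sid
# build a source package inside it
sudo pbuilder --build mypackage_1.0-1.dsc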

Please, for the sanity of everyone working with the archive, don’t upload packages that haven’t been built in a clean environment. It’s such a waste of everybody’s time if you don’t do this most basic of checks.


Steinar H. Gunderson: Short SSH keys

13 October, 2014 - 03:00

I'm sure this is useful for something beyond being neat:

klump:~> cat .ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFePWUlZmVbCZ9KHa4pOOMBXHaMFeuuIZDw0uHHEY2/m sesse@klump
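
For reference, such a key is generated with OpenSSH 6.5 or later:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519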

I hope OpenSSH doesn't eventually grow a sort-of single point of failure in “djb ALL the algorithms!” by default, though.

Iustin Pop: Day trip on the Olympic Peninsula

13 October, 2014 - 01:53
Day trip on the Olympic Peninsula

TL;DR: drove many kilometres on very nice roads, took lots of pictures, saw sunshine and fog and clouds, an angry ocean and a calm one, a quiet lake and lots and lots of trees: a very well spent day. Pictures at http://photos.k1024.org/Daytrips/Olympic-Peninsula-2014/.

Sometimes I travel to the US on business, and as such I've been a few times in the Seattle area. Until this summer, when I had my last trip there, I was content to spend any extra days (weekend or such) just visiting Seattle itself, or shopping (I can spend hours in the REI store!), or working on my laptop in the hotel.

This summer though, I thought - I should do something a bit different. Not too much, but still - no sense in wasting both days of the weekend. So I thought maybe driving to Mount Rainier, or something like that.

On the Wednesday of my first week in Kirkland, as I was preparing my drive to the mountain, I made the mistake of scrolling the map westwards, and I saw for the first time the Olympic Peninsula; furthermore, I was zoomed in enough that I saw there was a small road right up to the north-west corner. Intrigued, I zoomed further and learned about Cape Flattery (“the northwestern-most point of the contiguous United States!”), so after spending a bit of time reading about it, I was determined to go there.

Easier said than done - from Kirkland, it's a 4h 40m drive (according to Google Maps), so it would be a full day on the road. I was thinking of maybe spending the night somewhere on the peninsula then, in order to actually explore the area a bit, but from Wednesday to Saturday was too short notice - all hotels that seemed OK-ish were fully booked. I spent some time trying to find something, even not directly on my way, but I failed to find any room.

What I did manage to do though, is to learn a bit about the area, and to realise that there's a nice loop around the whole peninsula - the 104 from Kirkland up to where it meets the 101N on the eastern side, then take the 101 all the way to Port Angeles, Lake Crescent, near Lake Pleasant, then south toward Forks, crossing the Hoh river, down to Ruby Beach, down along the coast, crossing the Queets River, east toward Lake Quinault, south toward Aberdeen, then east towards Olympia and back out of the wilderness, into the highway network and back to Kirkland. This looked like an awesome road trip, but it is as long as it sounds - around 8 hours (continuous) drive, though skipping Cape Flattery. Well, I said to myself, something to keep in mind for a future trip to this area, with a night in between. I was still planning to go just to Cape Flattery and back, without realising at that point that this trip was actually longer (as you drive on smaller, lower-speed roads).

Preparing my route, I read about the queues at the Edmonds-Kingston ferry, so I was planning to wake up early on the weekend, go to Cape Flattery, and go right back (maybe stop by Lake Crescent).

Saturday comes, I - of course - sleep longer than my trip schedule said, and start the day in somewhat cloudy weather, driving north from my hotel on Simonds Road, which was quite a bit nicer than the usual East-West or North-South roads in this area. The weather was becoming nicer; however, as I was nearing the ferry terminal and the traffic was getting denser, I started suspecting that I'd spend quite a bit of time waiting to board the ferry.

And unfortunately so it was (photo altered to hide some personal information):


The weather at least was nice, so I tried to enjoy it and simply observe the crowd - people were looking forward to a relaxing weekend, so nobody seemed annoyed by the wait. After almost half an hour, it was time to get on the ferry - my first time on a ferry in the US, yay! But it was quite the same as in Europe, just that the ship was much larger.

Once I secured the car, I went up deck, and was very surprised to be treated with some excellent views:

The crossing was not very short, but it seemed so, because of the view, the sun, the water and the wind. Soon we were nearing the other shore; also, see how well panorama software deals with waves :P!

And I was finally on the "real" part of the trip.

The road was quite interesting. Taking the 104 North, crossing the "Hood Canal Floating Bridge" (my, what a boring name), then finally joining the 101 North. The environment was quite varied, from bare plains and hills, to wooded areas, to quite dense forests, then into inhabited areas - quite a long stretch of human presence, from the Sequim Bay to Port Angeles.

Port Angeles surprised me: it had nice views of the ocean, and an interesting port (a few big ships), but it was much smaller than I expected. The 101 crosses it, and in less than 10 minutes or so it was already over. I expected something nicer, based on the name, but… Anyway, onwards!

Soon I was at a crossroads and had to decide: I could either follow the 101, crossing the Elwha River and then to Lake Crescent, then go north on the 113/112, or go right off 101 onto 112, and follow it until close to my goal. I took the 112, because on the map it looked "nicer", and closer to the shore.

Well, the road itself was nice, but quite narrow and twisty here and there, and there was some annoying traffic, so I didn't enjoy this segment very much. At least it had the very interesting property (to me) that whenever I got closer to the ocean, the sun suddenly disappeared, and I was finding myself in the fog:

So my plan to drive nicely along the coast failed. At one point, there was even heavy smoke (not fog!), and I wondered for a moment how safe was to drive out there in the wilderness (there were other cars though, so I was not alone).

Only quite a bit later, close to Neah Bay, did I finally see the ocean: I saw a small parking spot, stopped, and crossing a small line of trees I found myself in a small cove? bay? In any case, I had the impression I stepped out of the daily life in the city and out into the far far wilderness:

There was a couple sitting on chairs, just enjoying the view. I felt very much that I was intruding, behaving as I did like a tourist: running in, taking pictures, etc., so I tried at least to be quiet ☺. I then quickly moved on, since I still had some road ahead of me.

Soon I entered Neah Bay, and was surprised to see once more blue, and even more blue. I'm a sucker for blue, whether sky blue or sea blue ☺, so I took a few more pictures (watch out for the evil fog in the second one):

Well, the town had some event, and there were lots of people, so I just drove on, now on the last stretch towards the cape. The road here was also very interesting, yet another environment - I was driving on Cape Flattery Road, which cuts across the tip of the peninsula (quite narrow here) along the Waatch River and through its flooding plains (at least this is how it looked to me). Then it finally starts going up through the dense forest, until it reaches the parking lot, and from there, one goes on foot towards the cape. It's a very easy and nice walk (not a hike), and the sun was shining very nicely through the trees:

But as I reached the peak of the walk, and started descending towards the coast, I was surprised, yet again, by fog:

I realised that probably this means the cape is fully in fog, so I won't have any chance to enjoy the view.

Boy, was I wrong! There are three viewpoints on the cape, and at each one I just went "wow" and "aah" at the view. Even though it was not a sunny summer view, and there was no blue in sight, the combination of the fog (which was hiding the horizon and even the closer islands), the angry ocean throwing wave after wave at the shore with a loud noise, and the fact that even this seemingly inhospitable area was teeming with life, was both unexpected and awesome. I took waay too many pictures here; just a couple are inlined:

I spent around half an hour here, just enjoying the rawness of nature. It was so amazing to see life encroaching on each bit of land, even though it was not what I would consider a nice place. Ah, how we see everything through our own eyes!

The walk back was through fog again, and at one point it switched over back to sunny. Driving back on the same road was quite different, knowing what lies at its end. On this side, the road had some parking spots, so I managed to stop and take a picture - even though this area was much less wild, it still has that “outdoors” flavour, at least for me:

Back in Neah Bay, I stopped to eat. I had a place in mind from TripAdvisor, and indeed - I was able to get a custom order pizza at "Linda's Woodfired Kitchen". Quite good, and I ate without hurry, looking at the people walking outside, as they were coming back from the fair or event that was taking place.

While eating, a somewhat disturbing thought was going through my mind. It was still early, around two to half past two, so if I went straight back to Kirkland I would arrive early at the hotel. But it was also early enough that I could - in theory at least - still do the "big round-trip". I was still turning the thought over as I left…

On the drive back I passed once more near Sekiu, Washington, which is a very small place but the map tells me it even has an airport! Fun, and the view was quite nice (a bit of blue before the sea is swallowed by the fog):

After passing Sekiu and Clallam Bay, the 112 curves inland and goes on a bit until you are at the crossroads: to the left the 112 continues, back the same way I came; to the right, it's the 113, going south until it meets the 101. I looked left - remembering the not-so-nice road back, I looked south - where a very appealing, early afternoon sun was beckoning - so I said, let's take the long way home!

It's just a short stretch on the 113, and then you're on the 101. The 101 is a very nice road, wide enough, and it goes through very very nice areas. Here, west to south-west of the Olympic Mountains, it's a very different atmosphere from the 112/101 that I drove on in the morning; much warmer colours, a bit different tree types (I think), and more flat. I soon passed through Forks, which is one of the places I looked at when searching for hotels. I did so without any knowledge of the town itself (its wikipedia page is quite drab), so imagine my surprise when a month later I learned from a colleague that this is actually a very important place for vampire-book fans. Oh my, and I didn't even stop! This town also had some event, so I just drove on, enjoying the (mostly empty) road.

My next planned waypoint was Ruby Beach, and I was looking forward to relaxing a bit under the warm sun - the drive was excellent, the weather perfect, so I was watching the distance countdown on my Garmin. At two miles out, the "Near waypoint Ruby Beach" message appeared, and two seconds later the sun went out. What the… I hoped this was something temporary, but as I slowly drove the remaining mile I couldn't believe my eyes: I was, yet again, in the fog…

I park the car, thinking that asking for a refund would at least allow me to feel better - but it was I who planned the trip! So I resigned myself, thinking that possibly this beach is another special location that is always in the fog. However, getting near the beach it was clear that it was not so - some people were still in their bathing suits, just getting dressed, so… it seems I was just unlucky with regards to timing. Still, the beach itself was nice, even in the fog (I later saw sunny pictures online, and it is quite beautiful), with the lush trees reaching almost to the shore, and the rocks "sitting" on the beach:

Since the weather was not that nice, I took a few more pictures, then headed back and started driving again. I was soo happy that the weather didn't clear at the 2 mile mark (it was not just Ruby Beach!), but alas - it cleared as soon as the 101 turns left and leaves the shore, as it crosses the Queets river. Driving towards my next planned stop was again a nice drive in the afternoon sun, so I think it simply was not a sunny day on the Pacific shore. Maybe seas and oceans have something to do with fog and clouds ☺! In Switzerland, I'm very happy when I see fog, since it's a somewhat rare event (and seeing mountains disappearing in the fog is nice, since it gives the impression of a wider space). After this day, I was a bit fed up with fog for a while ☺…

Along the 101 one reaches Lake Quinault, which seemed pretty nice on the map; driving a bit along the lake brings you to a local symbol, the "World's largest spruce tree". I don't know what a spruce tree is, but I like trees, so I was planning to go there, weather allowing. And the weather did cooperate, except that the tree was not as imposing as I thought! In any case, I was glad to stretch my legs a bit:

However, the most interesting thing here in Quinault was not this tree, but rather - the quiet little town and the view on the lake, in the late afternoon sun:

The entire town was very very quiet, and the sun shining down on the lake gave an even stronger sense of tranquillity. No wind, not many noises that tell of human presence, just a few, and an overall sense of peace. It was quite the opposite of the Cape Flattery… and a very nice way to end the trip.

Well, almost end - I still had a bit of driving ahead. Starting from Quinault, driving back and entering the 101, driving down to Aberdeen:

then turning east towards Olympia, and back onto the highways.

As to Aberdeen and Olympia, I just drove through, so I couldn't make any impression of them. The old harbour and the rusted things in Aberdeen were a bit interesting, but the day was late so I didn't stop.

And since the day shouldn't end without any surprises, during the last profile change between walking and driving in Quinault, my GPS decided to reset its active maps list and I ended up with all maps activated. This usually is not a problem, at least if you follow a pre-calculated route, but I did trigger recalculation as I restarted my driving, so the Montana was trying to decide which map to route me on - torn between the Garmin North America map and the OpenStreetMap one, it never understood which road I was on. It always said "Drive to I5", even though I was on I5. Anyway, thanks to road signs, and no thanks to "just this evening ramp closures", I was able to arrive safely at my hotel.

Overall, a very successful, if long trip: around 725 kilometres, 10h:30m moving, 13h:30m total:

There were many individual good parts, but the overall thing about this road trip was that I was able to experience lots of different environments of the peninsula in the same day, and that overall it's a very very nice area.

The downside was that I was in a rush, without being able to actually stop and enjoy the locations I visited. And there's still so much to see! A two-night trip sounds just about right, with some long hikes in the rain forest, and afternoons spent on a lake somewhere.

Another not so optimal part was that I only had my "travel" camera (a Nikon 1 series camera, with a small sensor), which was a bit overwhelmed here and there by the situation. It was fortunate that the light was more or less good, but looking back at the pictures, how I wish that I had my "serious" DSLR…

So, that means I have two reasons to go back! Not too soon though, since Mount Rainier is also a good location to visit ☺.

If the pictures didn't bore you yet, the entire gallery is on my smugmug site. In any case, thanks for reading!

Giuseppe Iuculano: apt-get purge chromium

13 October, 2014 - 00:10

As you may know, I was the Debian chromium maintainer for many years. Some weeks ago, I decided to stop working on the chromium package because it is not possible anymore to contribute and work in the team. In fact, Michael Gilbert started to work in a manner that prevents people from helping to maintain the package.

In the last period the git repository was rarely updated, and my requests were systematically ignored. Having an up-to-date git repository is mandatory in a big package like Chromium; if you don't push your changes, other people will waste their time.

Now, after deciding to stop maintaining Chromium, I also decided to purge it and switch to the Google Chrome binary. Why? Chromium is a pain: huge commits not documented in the changelog, causing stupid bugs because no one can double-check them.

At this moment we have an unusable [1] [2] [3] version of Chromium in testing, because the maintainer demoted grave bugs with the recommendation to rm -rf ~/.config/chromium … and nobody can understand the sense of the latest commits.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.