Planet Debian


Marco d'Itri: The Italian peering ecosystem

15 October, 2014 - 00:34

I published the slides of my talk "An introduction to peering in Italy - Interconnections among the Italian networks" that I presented today at the MIX-IT (the Milano internet exchange) technical meeting.

Philipp Kern: pbuilder and pam_tmpdir

14 October, 2014 - 06:28
It turns out that my recent woes with pbuilder were all due to libpam-tmpdir being installed (at least two old bug reports exist about this issue: #576425 and #725434). I rather like my private temporary directory that cannot be accessed by other (potential) users on the same system. Previously I used a hook to fix this up by ensuring that the directory actually exists in the chroot, but somehow that recently broke.

A rather crude but working solution seems to be "session required pam_env.so user_readenv=1" in /etc/pam.d/sudo and "TMPDIR=/tmp" in /root/.pam_environment. One could probably skip pam_tmpdir.so for root, but I did not want to start fighting with pam-auth-update as this is in /etc/pam.d/common-session*.
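
Spelled out, the two additions are (file paths exactly as above):

  # /etc/pam.d/sudo: have sudo sessions read root's environment file
  session required pam_env.so user_readenv=1

  # /root/.pam_environment: pin the temporary directory back to /tmp
  TMPDIR=/tmp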

Konstantinos Margaritis: SIMD optimizations, cont.

14 October, 2014 - 03:19

A friend of mine told me that I should advertise my passion for and know-how of SIMD more, and I decided to follow his advice. Though I am terrible at marketing, and even more so at personal marketing, I've made an attempt to do just that: advertise the fact that I'm offering SIMD Optimization Services (with emphasis on PowerPC AltiVec/VMX/VSX and ARM NEON, but I'm fine with SSE as well; the logic is pretty much the same, though the differences are in the details). To that end I'm offering a free evaluation of your performance-critical code (open or closed source; I can sign NDAs if needed) to let you know whether it's worth optimizing, what kind of performance gain you could expect, and how much it would cost you to get that result.
You can read more here.

John Goerzen: Update on the systemd issue

14 October, 2014 - 01:46

The other day, I wrote about my poor first impressions of systemd in jessie. Here’s an update.

I’d like to start with the things that are good. I found the systemd community to be one of the most helpful in Debian, and the #debian-systemd IRC channel especially so. I was in there for quite some time yesterday, and appreciated the help from many people, especially Michael. This is a nontechnical factor, but an extremely important one; it has significantly allayed my concerns about systemd right there.

There are things about the systemd design that impress. The dependency and configuration systems are a lot more flexible than sysvinit’s. They are also a lot more complicated, and it is harder to figure out what’s happening. I am unconvinced of the utility of parallelizing the boot to begin with; I rarely reboot any of my Linux systems, desktops or servers, and it seems to introduce needless complexity.

Anyhow, on to the filesystem problem, and a bit of a background. My laptop runs ZFS, which is somewhat similar to btrfs in that it’s a volume manager (like LVM), RAID manager (like md), and filesystem in one. My system runs LVM, and inside LVM, I have two ZFS “pools” (volume groups): one, called rpool, that is unencrypted and holds mainly the operating system; and the other, called crypt, that is stacked atop LUKS. ZFS on Linux doesn’t yet have built-in crypto, which is why LVM is even in the picture here (to separate out the SSD at a level above ZFS to permit parts of it to be encrypted). This is a bit of an antiquated setup for me; as more systems have AES-NI, I’m moving toward having everything except /boot encrypted.

Anyhow, inside rpool are the / filesystem, /var, and /usr. Inside crypt are /tmp and /home.

Initially, I tried to just boot it, knowing that systemd is supposed to work with LSB init scripts, and ZFS has init scripts with carefully-planned dependencies. This was evidently not working, perhaps because of /lib/systemd/systemd/… It turns out that systemd has a few assumptions that turn out to be less true with ZFS than otherwise. ZFS filesystems are normally not mounted via /etc/fstab; a ZFS pool has internal properties about which dataset gets mounted where (similar to LVM’s actions after a vgscan and vgchange -ay). Even though there are ordering constraints in the units, systemd is writing files to /var before /var gets mounted, resulting in the mount failing (unlike ext4, ZFS by default will reject an attempt to mount over a non-empty directory). Partly this is due to the debian-fixup.service, and partly it is due to systemd reacting to udev items like backlight.

This problem was eventually worked around by doing zfs set mountpoint=legacy rpool/var, and then adding a line to fstab (“rpool/var /var zfs defaults 0 2”) for /var and its descendant filesystems.
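
In concrete terms, the workaround boils down to this (the command and fstab line are exactly as quoted above):

  zfs set mountpoint=legacy rpool/var   # stop ZFS from auto-mounting /var itself
  # and then, in /etc/fstab, let a normal fstab-driven mount handle it:
  rpool/var  /var  zfs  defaults  0  2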

This left the problem of /tmp; again, it wasn’t getting mounted soon enough. In this case, it required crypttab to be processed first, and there seem to be a lot of bugs in the crypttab processing in systemd (more on that below). I eventually worked around that by adding After=cryptsetup.target to the zfs-import-cache.service file. For /tmp, it did NOT work to put it in /etc/fstab, because then it tried to mount it before starting cryptsetup for some reason. It probably didn’t help that the system’s cryptdisks.service is a symlink to /dev/null, a fact I didn’t realize until after a lot of needless reboots.
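
The same ordering can also be expressed as a drop-in instead of editing the shipped unit file; a minimal sketch, assuming the stock zfs-import-cache.service name:

  mkdir -p /etc/systemd/system/zfs-import-cache.service.d
  printf '[Unit]\nAfter=cryptsetup.target\n' \
      > /etc/systemd/system/zfs-import-cache.service.d/after-crypt.conf
  systemctl daemon-reload   # make systemd pick up the new drop-in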

Anyhow, one thing I stumbled across was poor console control with systemd. On numerous occasions, I had things like two cryptsetup processes trying to read a password, plus an emergency mode console trying to do so. I had this memorable line of text at one point:

(or type Control-D to continue): Please enter passphrase for disk athena-crypttank (crypt)! [ OK ] Stopped Emergency Shell.

And here we venture into unsatisfying territory with systemd. One answer to this in IRC was to install plymouth, which apparently serializes console I/O. However, plymouth is “an attractive boot animation in place of the text messages that normally get shown.” I don’t want an “attractive boot animation”. Nevertheless, neither systemd-sysv nor cryptsetup depends on plymouth, so by default, the prompt for a password at boot is obscured by various other text.

Worse, plymouth doesn’t support serial consoles, so at the moment booting a system that uses LUKS with systemd over a serial console is a matter of blind luck of typing the right password at the right time.

In the end, though, the system booted and after a few more tweaks, the backlight buttons do their thing again. Whew!

Update 2014-10-13: uau pointed out that Plymouth is more than a bootsplash, and can work with serial consoles, despite the description of the package. I stand corrected on that. (It is still the case, however, that packages don’t depend on it where they should, and the default experience for people using cryptsetup is not very good.)

Steve McIntyre: Successful Summer of Code in Linaro

14 October, 2014 - 01:06

It's past time I wrote about how Linaro's students fared in this year's Google Summer of Code. You might remember me posting earlier in the year when we welcomed our students. We started with 3 student projects at the beginning of the summer. One of the students unfortunately didn't work out, but the other two were hugely successful.

Gaurav Minocha was a graduate student at the University of British Columbia, Vancouver, Canada. He worked on Linux Flattened Device Tree Self-checking, mentored by Grant Likely from Linaro's Office of the CTO. Gaurav achieved all of his project's goals, and he was invited to Linaro's recent Linaro Connect USA conference in California to meet people and talk about his project. He and Grant presented a session on their work; it was filmed, and the video is online. Grant said he was very happy with Gaurav's "strong, solid performance" during the project.

Varad Gautam was a student at Birla Institute of Technology and Science, Pilani, India. He succeeded in porting UEFI to the BeagleBone Black. Leif Lindholm from the Linaro Enterprise Group was his mentor for the summer. At the end of the summer, Varad delivered a UEFI port ready for booting Linux, and his code was included in Linaro's September UEFI release. Leif said that he was "very pleased with Varad's self-sufficiency and ability to pick up an entirely new software project very quickly". We were hoping to invite Varad to Connect in California also, but travel document delays got in the way. With luck we'll see him at the next Connect in Hong Kong in February 2015.

Well done, guys! It was great to work with these young developers for the summer, and we wish them lots more success in their future endeavours.

Google have also just confirmed that they will be running the Summer of Code program again in 2015. I'm hoping that Linaro will be accepted again next year as a mentoring organisation. I'll post more about that early next year.

Dirk Eddelbuettel: Seinfeld streak at GitHub

13 October, 2014 - 08:23

Early last year, I referred to a Seinfeld Streak in a blog post about almost two months of updates to the Rcpp Gallery. This is sometimes called Jerry Seinfeld's secret to productivity: Just keep at it. Don't break the streak.

I now have a different streak:

Now we'll see how far this one will go.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Wiltshire: Clean builds for the win

13 October, 2014 - 04:50

I’ve just spent a little time squashing several bugs on the trot, all the same: insufficient build-dependencies when built in a clean environment. Typically this means that the package was uploaded after being built on a developer’s normal machine, which already has everything required installed.

It’s long been the case that we have several ways to build packages in a clean chroot before upload, which reveals these sorts of errors and more. There’s not really any excuse for uploading packages that fail to build in this way.
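
For instance, a minimal pbuilder round trip looks roughly like this (sbuild works just as well; the package name is only a placeholder):

  sudo pbuilder create --distribution sid      # build the clean base chroot once
  sudo pbuilder build ../mypackage_1.2-1.dsc   # rebuild the package inside a minimal chroot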

Please, for the sanity of everyone working with the archive, don’t upload packages that haven’t been built in a clean environment. It’s such a waste of everybody’s time if you don’t do this most basic of checks.


Steinar H. Gunderson: Short SSH keys

13 October, 2014 - 03:00

I'm sure this is useful for something beyond being neat:

klump:~> cat .ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFePWUlZmVbCZ9KHa4pOOMBXHaMFeuuIZDw0uHHEY2/m sesse@klump
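
Generating such a key is a one-liner with OpenSSH 6.5 or newer:

  ssh-keygen -t ed25519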

I hope OpenSSH doesn't eventually grow a sort-of single point of failure in “djb ALL the algorithms!” by default, though.

Iustin Pop: Day trip on the Olympic Peninsula

13 October, 2014 - 01:53

TL;DR: drove many kilometres on very nice roads, took lots of pictures, saw sunshine and fog and clouds, an angry ocean and a calm one, a quiet lake and lots and lots of trees: a very well spent day. Pictures at http://photos.k1024.org/Daytrips/Olympic-Peninsula-2014/.

Sometimes I travel to the US on business, and as such I've been a few times in the Seattle area. Until this summer, when I had my last trip there, I was content to spend any extra days (weekend or such) just visiting Seattle itself, or shopping (I can spend hours in the REI store!), or working on my laptop in the hotel.

This summer though, I thought - I should do something a bit different. Not too much, but still - no sense in wasting both days of the weekend. So I thought maybe driving to Mount Rainier, or something like that.

On the Wednesday of my first week in Kirkland, as I was preparing my drive to the mountain, I made the mistake of scrolling the map westwards, and I saw for the first time the Olympic Peninsula; furthermore, I was zoomed in enough that I saw there was a small road right up to the north-west corner. Intrigued, I zoomed further and learned about Cape Flattery (“the northwestern-most point of the contiguous United States!”), so after spending a bit of time reading about it, I was determined to go there.

Easier said than done - from Kirkland, it's a 4h 40m drive (according to Google Maps), so it would be a full day on the road. I was thinking of maybe spending the night somewhere on the peninsula then, in order to actually explore the area a bit, but from Wednesday to Saturday it was too short notice - all hotels that seemed OK-ish were fully booked. I spent some time trying to find something, even places not directly on my way, but I failed to find any room.

What I did manage to do, though, was to learn a bit about the area, and to realise that there's a nice loop around the whole peninsula - the 104 from Kirkland up to where it meets the 101N on the eastern side, then take the 101 all the way to Port Angeles, Lake Crescent, near Lake Pleasant, then south toward Forks, crossing the Hoh river, down to Ruby Beach, down along the coast, crossing the Queets River, east toward Lake Quinault, south toward Aberdeen, then east towards Olympia and back out of the wilderness, into the highway network and back to Kirkland. This looked like an awesome road trip, but it is as long as it sounds - around 8 hours of (continuous) driving, and that's skipping Cape Flattery. Well, I said to myself, something to keep in mind for a future trip to this area, with a night in between. I was still planning to go just to Cape Flattery and back, without realising at that point that this trip was actually longer (as you drive on smaller, lower-speed roads).

Preparing my route, I read about the queues at the Edmonds-Kingston ferry, so I was planning to wake up early on the weekend, go to Cape Flattery, and go right back (maybe stop by Lake Crescent).

Saturday comes, I - of course - sleep longer than my trip schedule said, and start the day in somewhat cloudy weather, driving north from my hotel on Simonds Road, which was much nicer than the usual East-West or North-South roads in this area. The weather was becoming nicer; however, as I was nearing the ferry terminal and the traffic was getting denser, I started suspecting that I'd spend quite a bit of time waiting to board the ferry.

And unfortunately so it was (photo altered to hide some personal information):


The weather at least was nice, so I tried to enjoy it and simply observe the crowd - people were looking forward to a weekend relaxing, so nobody seemed annoyed by the wait. After almost half an hour, time to get on the ferry - my first time on a ferry in the US, yay! But it was quite the same as in Europe, just that the ship was much larger.

Once I secured the car, I went up on deck, and was very surprised to be treated to some excellent views:

The crossing was not very short, but it seemed so, because of the view, the sun, the water and the wind. Soon we were nearing the other shore; also, see how well panorama software deals with waves :P!

And I was finally on the "real" part of the trip.

The road was quite interesting. Taking the 104 North, crossing the "Hood Canal Floating Bridge" (my, what a boring name), then finally joining the 101 North. The environment was quite varied, from bare plains and hills, to wooded areas, to quite dense forests, then into inhabited areas - quite a long stretch of human presence, from the Sequim Bay to Port Angeles.

Port Angeles surprised me: it had nice views of the ocean, and an interesting port (a few big ships), but it was much smaller than I expected. The 101 crosses it, and in less than 10 minutes or so it was already over. I expected something nicer, based on the name, but… Anyway, onwards!

Soon I was at a crossroads and had to decide: I could either follow the 101, crossing the Elwha River and then to Lake Crescent, then go north on the 113/112, or go right off 101 onto 112, and follow it until close to my goal. I took the 112, because on the map it looked "nicer", and closer to the shore.

Well, the road itself was nice, but quite narrow and twisty here and there, and there was some annoying traffic, so I didn't enjoy this segment very much. At least it had the very interesting property (to me) that whenever I got closer to the ocean, the sun suddenly disappeared, and I was finding myself in the fog:

So my plan to drive nicely along the coast failed. At one point, there was even heavy smoke (not fog!), and I wondered for a moment how safe it was to drive out there in the wilderness (there were other cars though, so I was not alone).

Only quite a bit later, close to Neah Bay, did I finally see the ocean: I saw a small parking spot, stopped, and crossing a small line of trees I found myself in a small cove? bay? In any case, I had the impression I stepped out of the daily life in the city and out into the far far wilderness:

There was a couple, sitting on chairs, just enjoying the view. I felt very much like I was intruding, behaving as I did like a tourist: running in, taking pictures, etc., so I tried at least to be quiet ☺. I then quickly moved on, since I still had some road ahead of me.

Soon I entered Neah Bay, and was surprised to see once more blue, and even more blue. I'm a sucker for blue, whether sky blue or sea blue ☺, so I took a few more pictures (watch out for the evil fog in the second one):

Well, the town had some event, and there were lots of people, so I just drove on, now on the last stretch towards the cape. The road here was also very interesting, yet another environment - I was driving on Cape Flattery Road, which cuts across the tip of the peninsula (quite narrow here) along the Waatch River and through its flooding plains (at least this is how it looked to me). Then it finally starts going up through the dense forest, until it reaches the parking lot, and from there, one goes on foot towards the cape. It's a very easy and nice walk (not a hike), and the sun was shining very nicely through the trees:

But as I reached the peak of the walk, and started descending towards the coast, I was surprised, yet again, by fog:

I realised that probably this means the cape is fully in fog, so I won't have any chance to enjoy the view.

Boy, was I wrong! There are three viewpoints on the cape, and at each one I was just "wow" and "aah" at the view. Even though it was not a sunny summer view, and there was no blue in sight, the combination of the fog (which was hiding the horizon and even the closer islands), the angry ocean throwing wave after wave at the shore, making a loud noise, and the fact that even this seemingly inhospitable area was just teeming with life, was both unexpected and awesome. I took waay too many pictures here; here are just a couple inlined:

I spent around half an hour here, just enjoying the rawness of nature. It was so amazing to see life encroaching on each bit of land, even though it was not what I would consider a nice place. Ah, how we see everything through our own eyes!

The walk back was through fog again, and at one point it switched over back to sunny. Driving back on the same road was quite different, knowing what lies at its end. On this side, the road had some parking spots, so I managed to stop and take a picture - even though this area was much less wild, it still has that “outdoors” flavour, at least for me:

Back in Neah Bay, I stopped to eat. I had a place in mind from TripAdvisor, and indeed - I was able to get a custom order pizza at "Linda's Woodfired Kitchen". Quite good, and I ate without hurry, looking at the people walking outside, as they were coming back from the fair or event that was taking place.

While eating, a somewhat disturbing thought was going through my mind. It was still early, around two to half past two, so if I went straight back to Kirkland I would be early at the hotel. But it was also early enough that I could - in theory at least - still do the "big round-trip". I was still mulling the thought over as I left…

On the drive back I passed once more near Sekiu, Washington, which is a very small place but the map tells me it even has an airport! Fun, and the view was quite nice (a bit of blue before the sea is swallowed by the fog):

After passing Sekiu and Clallam Bay, the 112 curves inland and goes on a bit until you are at the crossroads: to the left the 112 continues, back the same way I came; to the right, it's the 113, going south until it meets the 101. I looked left - remembering the not-so-nice road back, I looked south - where a very appealing, early afternoon sun was beckoning - so I said, let's take the long way home!

It's just a short stretch on the 113, and then you're on the 101. The 101 is a very nice road, wide enough, and it goes through very very nice areas. Here, west to south-west of the Olympic Mountains, it's a very different atmosphere from the 112/101 that I drove on in the morning; much warmer colours, a bit different tree types (I think), and more flat. I soon passed through Forks, which is one of the places I looked at when searching for hotels. I did so without any knowledge of the town itself (its wikipedia page is quite drab), so imagine my surprise when a month later I learned from a colleague that this is actually a very important place for vampire-book fans. Oh my, and I didn't even stop! This town also had some event, so I just drove on, enjoying the (mostly empty) road.

My next planned waypoint was Ruby Beach, and I was looking forward to relaxing a bit under the warm sun - the drive was excellent, weather perfect, so I was watching the distance countdown on my Garmin. At two miles out, the "Near waypoint Ruby Beach" message appeared, and two seconds later the sun went out. What the… I was hoping this is something temporary, but as I slowly drove the remaining mile I couldn't believe my eyes that I was, yet again, finding myself in the fog…

I park the car, thinking that asking for a refund would at least allow me to feel better - but it was I who planned the trip! So I resigned myself, thinking that possibly this beach is another special location that is always in the fog. However, getting near the beach it was clear that it was not so - some people were still in their bathing suits, just getting dressed, so… it seems I was just unlucky with regards to timing. Still, the beach itself was nice, even in the fog (I later saw sunny pictures online, and it is quite beautiful): the way the lush trees reach almost to the shore, and the way the rocks are “sitting” on the beach:

Since the weather was not that nice, I took a few more pictures, then headed back and started driving again. I was soo happy that the weather didn't clear at the 2 mile mark (it was not just Ruby Beach!), but alas - it cleared as soon as the 101 turns left and leaves the shore, as it crosses the Queets river. Driving towards my next planned stop was again a nice drive in the afternoon sun, so I think it simply was not a sunny day on the Pacific shore. Maybe seas and oceans have something to do with fog and clouds ☺! In Switzerland, I'm very happy when I see fog, since it's a somewhat rare event (and seeing mountains disappearing in the fog is nice, since it gives the impression of a wider space). After this day, I was a bit fed up with fog for a while ☺…

Along the 101 one reaches Lake Quinault, which seemed pretty nice on the map, and a short drive along the lake brings you to a local symbol, the "World's largest spruce tree". I don't know what a spruce tree is, but I like trees, so I was planning to go there, weather allowing. And the weather did cooperate, except that the tree was not as imposing as I thought! In any case, I was glad to stretch my legs a bit:

However, the most interesting thing here in Quinault was not this tree, but rather - the quiet little town and the view on the lake, in the late afternoon sun:

The entire town was very very quiet, and the sun shining down on the lake gave an even stronger sense of tranquillity. No wind, not many noises that tell of human presence, just a few, and an overall sense of peace. It was quite the opposite of the Cape Flattery… and a very nice way to end the trip.

Well, almost end - I still had a bit of driving ahead. Starting from Quinault, driving back and entering the 101, driving down to Aberdeen:

then turning east towards Olympia, and back onto the highways.

As to Aberdeen and Olympia, I just drove through, so I couldn't form any impression of them. The old harbour and the rusted things in Aberdeen were a bit interesting, but the day was late so I didn't stop.

And since the day shouldn't end without any surprises, during the last profile change between walking and driving in Quinault, my GPS decided to reset its active maps list and I ended up with all maps activated. This usually is not a problem, at least if you follow a pre-calculated route, but I did trigger recalculation as I restarted my driving, so the Montana was trying to decide on which map to route me - torn between the Garmin North America map and the OpenStreetMap one, it never understood which road I was on. It always said "Drive to I5", even though I was on I5. Anyway, thanks to road signs, and no thanks to "just this evening ramp closures", I was able to arrive safely at my hotel.

Overall, a very successful, if long trip: around 725 kilometres, 10h:30m moving, 13h:30m total:

There were many individual good parts, but the overall thing about this road trip was that I was able to experience lots of different environments of the peninsula on the same day, and that overall it's a very very nice area.

The downside was that I was in a rush, without being able to actually stop and enjoy the locations I visited. And there's still so much to see! A two-night trip sounds just about right, with some long hikes in the rain forest, and afternoons spent on a lake somewhere.

Another not so optimal part was that I only had my "travel" camera (a Nikon 1 series camera, with a small sensor), which was a bit overwhelmed here and there by the situation. It was fortunate that the light was more or less good, but looking back at the pictures, how I wish that I had my "serious" DSLR…

So, that means I have two reasons to go back! Not too soon though, since Mount Rainier is also a good location to visit ☺.

If the pictures didn't bore you yet, the entire gallery is on my smugmug site. In any case, thanks for reading!

Giuseppe Iuculano: apt-get purge chromium

13 October, 2014 - 00:10

As you may know, I was the Debian chromium maintainer for many years. Some weeks ago, I decided to stop working on the chromium package because it is not possible anymore to contribute and work within the team. In fact, Michael Gilbert started to work in a manner that prevents people from helping to maintain the package.

Lately the git repository was rarely updated, and my requests were systematically ignored. Having an up-to-date git repository is mandatory for a big package like Chromium, and if you don't push your changes, other people will waste their time.

Now, after deciding to stop maintaining Chromium, I also decided to purge it and switch to the Google Chrome binary. Why? Chromium is a pain: huge commits not documented in the changelog that caused stupid bugs, because no one can double-check them.

At the moment we have an unusable [1] [2] [3] version of Chromium in testing, because the maintainer demoted grave bugs with the recommendation to rm -rf ./config/chromium … and nobody can understand the sense of the latest commits.

Mario Lang: soundCLI works again

12 October, 2014 - 21:30

I recently ranted about my frustration with GStreamer in a SoundCloud command-line client written in Ruby.

Well, it turns out that there was quite a bit of confusion going on. I still haven't figured out why my initial tries resulted in an error regarding $DISPLAY not being set. But now that I have played a bit with gst-launch-1.0, I can positively confirm that this was very likely not the fault of GStreamer.
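
Something along these lines, run with no $DISPLAY at all, is enough to confirm that (the file path is just an example of a locally available sound):

  unset DISPLAY
  gst-launch-1.0 playbin uri=file:///usr/share/sounds/alsa/Front_Center.wav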

The actual issue is that ruby-gstreamer assumes gstreamer-1.0, while soundCLI was still written against the gstreamer-0.10 API.

Since the ruby gst module doesn't have the Gstreamer API version in its name, and since Ruby is a dynamic language that only detects most errors at runtime, this led to all sorts of cascaded errors.

It turns out I only had to correct the use of query_position, query_duration, and get_state, as well as switching from playbin2 to playbin.

soundCLI is now running in the background and playing my SoundCloud stream. A pull request against soundCLI has also been opened.

On a somewhat related note, I found a GCC bug (ICE SIGSEGV) this weekend. My first one. It is related to C++11 bracketed initializers. Given that I have heard GCC 5.0 aims to remove the experimental nature of C++11 (and maybe also C++14), this seems like a good time to hit this one. I guess that means I should finally isolate the C++ regex (runtime) segfault I recently stumbled across.

Guido Günther: Testing a NetworkManager VPN plugin password dialog

12 October, 2014 - 19:55

Testing the password dialog of a NetworkManager VPN plugin is as simple as:

echo -e 'DATA_KEY=foo\nDATA_VAL=bar\nDONE\nQUIT\n' | ./auth-dialog/nm-iodine-auth-dialog -n test -u $(uuid) -i

The above is for the iodine plugin when run from the built source tree. This allows one to test these dialogs even though one hasn't seen them in ages, since GNOME Shell uses the external UI mode to query for the password.


John Goerzen: First impressions of systemd, and they’re not good

12 October, 2014 - 11:18

Well, I finally bit the bullet. My laptop, which runs jessie, got dist-upgraded for the first time in a few months. My brightness keys stopped working, and it would no longer suspend to RAM when the lid was closed; after chasing things down from XFCE to policykit, it appears that major parts of the desktop now break without systemd in jessie. Sigh.

So apt-get install systemd-sysv (and watch sysvinit-core get uninstalled) and reboot.

Only, my system doesn’t come back up. In fact, over several hours of trying to make it boot with systemd, it failed in numerous spectacular and hilarious (or, would be hilarious if my laptop would boot) ways. I had text obliterating the cryptsetup password prompt almost every time. Sometimes there were two processes trying to read a cryptsetup password at once. Sometimes a process was trying to read that while another one was trying to read an emergency shell password. Many times it tried to write to /var and /tmp before they were mounted, meaning they *wouldn’t* mount because there was stuff there.

I noticed it not doing much with ZFS, complaining of a dependency loop between zfs-mount and $local-fs. I fixed that, but it still wouldn’t boot. In fact, it simply hung after writing something about wall passwords.

I’ve dug into systemd, finding a “unit generator for fstab” (whatever the heck that is, it’s not at all made clear by systemd-fstab-generator(8)).

In some cases, there’s info in journalctl, but if I can’t even get to an emergency mode prompt, the practice of hiding all stdout and stderr output is not all that pleasant.

I remember thinking “what’s all the flaming about?” systemd wasn’t my first choice (I always thought “if it ain’t broke, don’t fix it” about sysvinit), but basically ignored the thousands of messages, thinking whatever happens, jessie will still boot.

Now I’m not so sure. Even if the thing boots out of the box, it seems like the boot process with systemd is colossally fragile.

For now, at least zfs rollback can undo upgrades to 800 packages in about 2 seconds. But I can’t stay at some early jessie checkpoint forever.
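
That escape hatch is a single command, assuming a snapshot was taken beforehand (the dataset and snapshot names here are hypothetical):

  zfs snapshot rpool/ROOT@pre-upgrade    # taken before the dist-upgrade
  zfs rollback rpool/ROOT@pre-upgrade    # back to the checkpoint, in seconds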

Have we made a vast mistake that can’t be undone? (If things like even *brightness keys* now require systemd…)

Dirk Eddelbuettel: RPushbullet 0.1.0 with a lot more awesome

11 October, 2014 - 11:29

A new release 0.1.0 of the RPushbullet package (interfacing the neat Pushbullet service) landed on CRAN today.

It brings a number of goodies relative to the first release 0.0.2 of a few months ago:

  • pushing of files is now supported thanks to a nice pull request by Mike Birdgeneau
  • a default device can be designated in the ~/.rpushbullet.json file or options
  • initialization has been rewritten to use recipients which can be indices, device names or, if missing entirely, the (new) default device
  • alternatively, email is supported as another recipient option in which case the Pushbullet service will send an email to the given address
  • pbGetDevices() now returns a proper S3 object with corresponding print() and summary() methods
  • the documentation regarding package initialization, and setting of key, devices, etc has been expanded
  • more examples have been added to the documentation
  • various minor cleanups, fixes, corrections throughout

There is a whole boat load of more wickedness in the Pushbullet API so if anybody feels compelled to add it, fire off pull requests at GitHub.

More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Matthias Klumpp: Listaller + Glick: Some new ideas

10 October, 2014 - 20:59

As you might know, due to invasive changes in PackageKit, I am currently rewriting the 3rd-party application installer Listaller. Since I am not the only one looking at the 3rd-party app-installation issue (there is a larger effort going on at GNOME, based on Lennart's ideas), it makes sense to redesign some concepts of Listaller.

Currently, dependencies and applications are installed into directories in /opt, and Listaller contains some logic to make applications find dependencies, and to talk to the package manager to install missing things. This has some drawbacks, like the need to install an application before using it, the need for applications to be relocatable, and application-installations being non-atomic.

Glick2

There is/was another 3rd-party app installer approach on the GNOME side, by Alexander Larsson, called Glick2. Glick uses application bundles (do you remember Klik from back in the days?) mounted via FUSE. This allows some neat features, like atomic installations and software upgrades, no need for relocatable apps and no need to install the application.

However, it also has disadvantages. Quoting the introduction document for Glick2:

“Bundling isn’t perfect, there are some well known disadvantages. Increased disk footprint is one, although current storage space size makes this not such a big issues. Another problem is with security (or bugfix) updates in bundled libraries. With bundled libraries its much harder to upgrade a single library, as you need to find and upgrade each app that uses it. Better tooling and upgrader support can lessen the impact of this, but not completely eliminate it.”

This is something Listaller does better, since it was designed to go to great lengths to avoid duplicating code.

Also, currently Glick doesn’t have support for updates and software-repositories, which Listaller had.

Combining Listaller and Glick ideas

So, why not combine the ideas of Listaller and Glick? In order to have Glick share resources, the system needs to know which shared resources are available. This is not possible if there is one huge Glick bundle containing all of the application’s dependencies. So I modularized Glick bundles to contain just one software component, which could be e.g. GTK+, Qt or GStreamer, or even a larger framework (e.g. “GNOME 3.14 Platform”). These components are identified using AppStream XML metadata, which allows them to be installed from the distributor’s software repositories as well, if that is wanted.

If you now want to deploy your application, you first create a Glick bundle for it. Then, in a second step, you bundle your application bundle with its dependencies in one larger tarball, which can also be GPG signed and can contain additional metadata.

The resulting “metabundle” will look like this (diagram in the original post: the application bundle packed together with the bundles of its dependencies):

This doesn’t look like we share resources yet, right? The dependencies are still bundled with the application requiring them. The trick lies in the “installation” step: While the application above can be executed right away without installing it, there will also be an option to install it. For the user, this will mean that the application shows up in GNOME-Shell’s overview or KDEs Plasma launcher, gets properly registered with mimetypes and is – if installed for all users – available system-wide.

Technically, this will mean that the application’s main bundle is extracted and moved to a special location on the file system, so are the dependency-bundles. If bundles already exist, they will not be installed again, and the new application will simply use the existing software. Since the bundles contain information about their dependencies, the system is able to determine which software is needed and which can simply be deleted from the installation directories.

If the application is started now, the bundles are combined and mounted, so the application can see the libraries it depends on.

Additionally, this concept allows secure updates of applications and shared resources. The bundle metadata contains a URL which points to a bundle repository. If new versions are released, the system’s auto-updater can automatically pick these up and install them – this means e.g. the Qt bundle will receive security updates, even if the developer who shipped it with his/her app didn’t think of updating it.

Conclusion

So far, no productive code exists for this – I just have a proof-of-concept here. But I pretty much like the idea, and I am thinking about going further in that direction, since it allows deploying applications on the Linux desktop as well as deploying software on servers in a way which plays nice with the native package manager, and which does not duplicate much code (less risk of having not-updated libraries with security flaws around).

However, there might be issues I haven’t thought about yet. Also, it makes sense to look at GNOME to see how the whole “3rd-party app deployment” issue develops. In case I go further with Listaller-NEXT, it is highly likely that it will make use of the ideas sketched above (comments and feedback are more than welcome!).

Mario Lang: GStreamer and the command-line?

10 October, 2014 - 17:00

I was recently looking for a command-line client for SoundCloud. soundCLI on GitHub appeared to be what I wanted. But wait, there is a problem with its implementation.

soundCLI uses gstreamer's playbin2 to play audio data. But that apparently requires $DISPLAY to be set.

So no, soundCLI is not a command-line client. It is a command-line client for X11 users.

Ahem.

A bit of research on Stackoverflow and related sites did not tell me how to modify playbin2 usage such that it does not require X11, while it is only playing AUDIO data.

What the HECK is going on here. Are the graphical people trying to silently overtake the world? Is Linux becoming the new Windows? The distinction between CLI and GUI has become more and more blurry in the recent years. I fear for my beloved platform.

If you know how to patch soundCLI to not require X11, please let me know. My current work-around is to replace all gstreamer usage with a simple "system" call to vlc. That works, but it does not give me comment display (since soundCLI doesn't know the playback position anymore) and hangs after every track, requiring me to enter "quit" manually on the VLC prompt. I really would have liked to use mplayer2 for this, but alas, mplayer2 does not support https. Oh well, why would it need to, in this day and age where everyone seems to switch to https by default. Oh well.

Martin Pitt: Running autopkgtests in the cloud

10 October, 2014 - 15:25

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Ingo Juergensmann: Buildd.Net: update-buildd.net v0.99 released

10 October, 2014 - 03:29

Buildd.Net offers a buildd-centric view of an autobuilder network, such as previously Debian's autobuilder network or nowadays the autobuilder network of debian-ports.org. The policy of debian-ports.org requires the GPG key that a buildd uses to sign packages for upload to be valid for 1 year. Buildd admins are usually lazy people. At least they are running a buildd instead of building those packages all manually. Being a lazy buildd admin, it might happen that you forget to renew your GPG key, which will render your buildd unable to upload newly built packages.

When participating in Buildd.Net you need to run update-buildd.net, a small script that transmits some statistical data about your package building. I have now added a GPG key expiry check to that script, which will warn the buildd admin by mail and with a notice on the Buildd.Net arch status page, such as the one for m68k. So, either your client updates automatically to the new version or you can download the script yourself.
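
Checking (and renewing) a key's expiry by hand is quick as well; a small sketch, with a placeholder key ID:

  gpg --list-keys 0xDEADBEEF          # the "pub" line shows "[expires: ...]" if an expiry is set
  gpg --edit-key 0xDEADBEEF expire    # interactively extend the expiry date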


Lisandro Damián Nicanor Pérez Meyer: Qt 5.3.2 in Wheezy-backports: just a few hours away

10 October, 2014 - 02:50
In more or less 24 hours most of Qt 5.3.2 will be available as a Wheezy backport. That means that if you are using Debian stable you don't need to wait for Jessie: just wait a few hours, add the wheezy-backports repo to your sources.list and get it :)
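
For example, once the packages have cleared the queue, something like this should work on a stock Wheezy system (the mirror and package name are only examples):

  # as root:
  echo 'deb http://http.debian.net/debian wheezy-backports main' >> /etc/apt/sources.list
  apt-get update
  apt-get -t wheezy-backports install qtbase5-dev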

The rest of Qt 5 will arrive soon.

This is the same version that will be shipped in Jessie, so whatever you develop with it will work with the next Debian stable release :)

Don't forget: you better start porting your Qt4 apps to Qt5!

Chris Lamb: London—Paris—London 2014

10 October, 2014 - 01:19

I've wanted to ride to Paris for a few months now but was put off by the hassle of taking a bicycle on the Eurostar, as well as having a somewhat philosophical and aesthetic objection to taking a bike on a train in the first place. After all, if one is already in possession of a mode of transport...

My itinerary was straightforward:

  • Friday 12h00: London → Newhaven
  • Friday 23h00: Newhaven → Dieppe (ferry)
  • Saturday 04h00: Dieppe → Paris
  • Saturday 23h00: (Sleep)
  • Sunday 07h00: Paris → Dieppe
  • Sunday 18h00: Dieppe → Newhaven (ferry)
  • Sunday 21h00: Newhaven → Peacehaven
  • Sunday 23h00: (Sleep)
  • Monday 07h00: Peacehaven → London

Packing list
  • Ferry ticket (unnecessary in the end)
  • Passport
  • Credit card
  • USB A male → mini A male (charges phone, battery pack & front light)
  • USB A male → mini B male (for charging or connecting to Edge 800)
  • USB mini A male → OTG A female (for Edge 800 uploads via phone)
  • Waterproof pocket
  • Sleeping mask for ferry (probably unnecessary)
  • Battery pack

Not pictured:

  • Castelli Gabba Windstopper short-sleeve jersey
  • Castelli Velocissimo bib shorts
  • Castelli Nanoflex arm warmers
  • Castelli Squadra rain jacket
  • Garmin Edge 800
  • Phone
  • Front light: Lezyne Macro Drive
  • Rear lights: Knog Gekko (on bike), Knog Frog (on helmet)
  • Inner tubes (X2), Lezyne multitool, tire levers, hand pump

Day 1: London → Newhaven

Tower Bridge.

Many attempt to go from Tower Bridge → Eiffel Tower (or Marble Arch → Arc de Triomphe) in less than 24 hours. This would have been quite easy if I had left a couple of hours later.

Fanny's Farm Shop, Merstham, Surrey.

Plumpton, East Sussex.

West Pier, Newhaven.

Leaving Newhaven on the 23h00 ferry.


Day 2: Dieppe → Paris

Beauvoir-en-Lyons, Haute-Normandie.

Sérifontaine, Picardie.

La tour Eiffel, Paris.

Champ de Mars, Paris.

Pont de Grenelle, Paris.


Day 3: Paris → Dieppe

Cormeilles-en-Vexin, Île-de-France.

Gisors, Haute-Normandie.

Paris-Brest, Gisors, Haute-Normandie.

Wikipedia: This pastry was created in 1910 to commemorate the Paris–Brest bicycle race begun in 1891. Its circular shape is representative of a wheel. It became popular with riders on the Paris–Brest cycle race, partly because of its energizing high caloric value, and is now found in pâtisseries all over France.

Gournay-en-Bray, Haute-Normandie.

Début de l'Avenue Verte, Forges-les-Eaux, Haute-Normandie.

Mesnières-en-Bray, Haute-Normandie.

Dieppe, Haute-Normandie.

«La Mancha».


Day 4: Peacehaven → London

Peacehaven, East Sussex.

Highbrook, West Sussex.

London weather.


Summary
  • Distance: 588.17 km
  • Pedal turns: ~105,795

My only non-obvious tips would be to buy a disposable blanket in the Newhaven Co-Op to help you sleep on the ferry. In addition, as the food on the ferry is good enough you only need to get to the terminal one hour before departure, avoiding time on your feet in unpicturesque Newhaven.

In terms of equipment, I would bring another light for the 4AM start on «L'Avenue Verte» if only as a backup and I would have checked I could arrive at my Parisian Airbnb earlier in the day - I had to hang around for five hours in the heat before I could have a shower, properly relax, etc.

I had been warned not to rely on being able to obtain enough water en route on Sunday but whilst most shops were indeed shut I saw a bustling tabac or boulangerie at least once every 20km so one would never be truly stuck.

Route-wise, the suburbs of London and Paris are both equally dismal and unmotivating, and there is about 50km of rather uninspiring and exposed riding on the D915.

However, «L'Avenue Verte» is fantastic even in the pitch-black and the entire trip was worth it simply for the silent and beautiful Normandy sunrise. I will be back.
