Planet Debian

Planet Debian - http://planet.debian.org/

Michael Prokop: Book Review: The Docker Book

24 July, 2014 - 04:16

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a Docker setup with Jenkins integration and a private docker-registry at a customer’s site, and I pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – who works for Docker Inc – released the first version of the book, and thanks to being on holiday I already had a few hours to read it AND blog about it. (Note: I read the Kindle version 1.0.0, and all the issues I found and reported to James have already been fixed in the current version, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach of using “ENV REFRESHED_AT $TIMESTAMP” for better control of the cache behaviour and definitely consider using it in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command-line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
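To illustrate, here is a minimal Dockerfile sketch of that cache-busting trick; the base image and the timestamp are just placeholders:

FROM debian:wheezy
# bump this value to invalidate the build cache from this point onwards
ENV REFRESHED_AT 2014-07-24
RUN apt-get update && apt-get -y upgrade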

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options, …). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspiration about what folks from Docker Inc consider best practices.

Tanguy Ortolo: GNU/Linux graphic sessions: suspending your computer

23 July, 2014 - 20:45

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically: only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

$ dbus-send --system --print-reply \
            --dest='org.freedesktop.UPower' \
            /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with
signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
                        /usr/sbin/pm-suspend-hybrid, \
                        /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
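For example, a minimal ~/.xbindkeysrc sketch binding the sleep key to suspend; the XF86Sleep keysym is an assumption, and xbindkeys -k will tell you what your key actually sends:

"sudo pm-suspend"
    XF86Sleep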

Francesca Ciceri: Adventures in Mozillaland #3

23 July, 2014 - 19:04

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information on how to better debug their component/product into comments, but trust me: this will make you happy in the long run.
A wiki page with basic information on how to debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs and what information to ask the reporter for to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held in #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

Steinar H. Gunderson: The sad state of Linux Wi-Fi

23 July, 2014 - 18:45

I've been using 802.11 on Linux now for over a decade, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care, and try to dig deeper, it always ends up in the answer “this is just crap”.

I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:

  • Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz—this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
  • Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello “iwl-agn-rs”, I'm looking at you.)
  • Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast. RFC4436 describes some of the tricks they're using, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
  • Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. “there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine”, which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.

With a billion mobile devices now running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.

Augh.

Andrew Pollock: [tech] Going solar

23 July, 2014 - 13:36

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 W Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best, and it's pretty much impossible to make a decent assessment of their claims. I went with the Seraphim panels because they scored well on the PHOTON tests; that said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions, so it's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and because they're apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is for Energex to come out and replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

Matthew Palmer: Per-repo update hooks with gitolite

23 July, 2014 - 12:45

Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured via files stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.

In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself – something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”. Then for each repo you can pick which of those hooks to execute. It’s neat.
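As a sketch, per-repo hook configuration in gitolite.conf looks roughly like this, assuming hook scripts named validate-style, code-review and push-to-ci have been installed as repo-specific hooks, and that multiple hook names can be listed to chain them (repo and user names are made up):

repo widgets
    RW+                      =   alice bob
    option hook.post-receive =   validate-style code-review push-to-ci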

There’s one glaring problem, though – you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; they’re stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.
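Compare the VREF syntax used for the update chain; this sketch, adapted from the gitolite documentation, uses the bundled COUNT VREF to reject pushes from one user that touch more than five files:

repo widgets
    RW+                 =   alice bob
    -   VREF/COUNT/5    =   bob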

The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.

The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!

Jonathan McCrohan: Git remote helpers

23 July, 2014 - 09:19

If you follow upstream Git development closely, you may have noticed that the Mercurial and Bazaar remote helpers (use git to interact with hg and bzr repos) no longer live in the main Git tree. They have been split out into their own repositories, here and here.

git-remote-bzr had been packaged (as git-bzr) for Debian since March 2013, but was removed in May 2014 when the remote helpers were removed upstream. There had been a wishlist bug report open since Mar 2013 to get git-remote-hg packaged, and I had submitted a patch, but it was never applied.

The splitting out of these remote helpers upstream has allowed Vagrant Cascadian and myself to pick up these packages, and both are now available in Debian:

apt-get install git-remote-hg git-remote-bzr
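Once installed, git can talk to Mercurial and Bazaar repositories directly via the hg:: and bzr:: URL prefixes; the repository URLs below are placeholders:

git clone hg::https://example.org/some-hg-repo
git clone bzr::lp:some-project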

Tim Retout: Cowbuilder and Tor

23 July, 2014 - 05:31

You've installed apt-transport-tor to help prevent targeted attacks on your system. Great! Now you want to build Debian packages using cowbuilder, and you notice these are still using plain HTTP.

If you're willing to fetch the first few packages without using apt-transport-tor, this is as easy as:

  • Add 'EXTRAPACKAGES="apt-transport-tor"' to your pbuilderrc.
  • Run 'cowbuilder --update'
  • Set 'MIRRORSITE=tor+http://http.debian.net/debian' in pbuilderrc.
  • Run 'cowbuilder --update' again.

Now any future builds should fetch build-dependencies over Tor.

Unfortunately, creating a base.cow from scratch is more problematic. Neither 'debootstrap' nor 'cdebootstrap' actually relies on apt acquire methods to download files - they look at the URL scheme themselves to work out where to fetch from. I think it's a design point that they shouldn't need apt, anyway, so that you can debootstrap on non-Debian systems. I don't have a good solution beyond using some other means to route these requests over Tor.
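One such means, offered here only as an untested sketch, would be wrapping the bootstrap in torsocks so that its plain HTTP fetches go through the local Tor SOCKS proxy:

sudo torsocks cowbuilder --create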

Neil Williams: Validating ARMMP device tree blobs

23 July, 2014 - 05:18

I’ve done various bits with ARMMP and LAVA on this blog already, usually waiting until I’ve got all the issues ironed out before writing it up. However, this time I’m just going to do a dump of where it’s at, how it works and what can be done.

I’m aware that LAVA can seem mysterious at first; the package description has improved enormously recently, thanks to exposure in Debian: LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

The LAVA documentation has a glossary of terms like result bundle and all the documentation is also available in the lava-server-doc package.

The goal is to validate the dtbs built for the Debian ARMMP kernel. One of the most accessible ways to get the ARMMP kernel onto a board for testing is tftp using the Debian daily DI builds. Actually using the DI initrd can come later, once I’ve got a complete preseed config so that the entire install can be automated. (There are some things to sort out in LAVA too before a full install can be deployed and booted but those are at an early stage.) It’s enough at first to download the vmlinuz which is common to all ARMMP deployments, supply the relevant dtb, partner those with a minimal initrd and see if the board boots.

The first change comes when this process is compared to how boards are commonly tested in LAVA – with a zImage or uImage and all/most of the modules already built in. Packaged kernels won’t necessarily raise a network interface or see the filesystem without modules, so the first step is to extend a minimal initramfs to include the armmp modules.

apt install pax u-boot-tools

The minimal initramfs I selected is one often used within LAVA:

wget http://images.armcloud.us/lava/common/linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

It has a u-boot header added, as most devices using this initramfs would be booting with u-boot; having the header already in place makes it easier to debug boot failures, since the initramfs can simply be downloaded to a local directory and passed to the board as a tftp location. To modify it, though, the u-boot header needs to be removed. Rather than assuming its size, the u-boot tools can (indirectly) show the size:

$ ls -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
-rw-r--r-- 1 neil neil  5179571 Nov 26  2013 linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

$ mkimage -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot 
Image Name:   linaro-image-minimal-initramfs-g
Created:      Tue Nov 26 22:30:49 2013
Image Type:   ARM Linux RAMDisk Image (gzip compressed)
Data Size:    5179507 Bytes = 5058.11 kB = 4.94 MB
Load Address: 00000000
Entry Point:  00000000

Referencing http://www.omappedia.com/wiki/Development_With_Ubuntu, the header size is the file size minus the data size listed by mkimage.

5179571 - 5179507 == 64

So, create a second file without the header:

dd if=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot of=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz skip=64 bs=1

Decompress it:

gunzip linaro-image-minimal-initramfs-genericarmv7a.cpio.gz

Now for the additions:

dget http://ftp.uk.debian.org/debian/pool/main/l/linux/linux-image-3.14-1-armmp_3.14.12-1_armhf.deb

(Yes, this process will need to be repeated when this package is rebuilt, so I’ll want to script this at some point.)

dpkg -x linux-image-3.14-1-armmp_3.14.12-1_armhf.deb kernel-dir
cd kernel-dir

Pulling in the modules we need for most needs, comes thanks to a script written by the Xen folks. The set is basically disk, net, filesystems and LVM.

find lib -type d -o -type f -name modules.\*  -o -type f -name \*.ko  \( -path \*/kernel/lib/\* -o  -path \*/kernel/crypto/\* -o  -path \*/kernel/fs/mbcache.ko -o  -path \*/kernel/fs/ext\* -o  -path \*/kernel/fs/jbd\* -o  -path \*/kernel/drivers/net/\* -o  -path \*/kernel/drivers/ata/\* -o  -path \*/kernel/drivers/scsi/\* -o -path \*/kernel/drivers/md/\* \) | pax -x sv4cpio -s '%lib%/lib%' -d -w >../cpio
gzip -9f cpio

original Xen script (GPL-3+)

I found it a bit confusing that i is used for extract by cpio, but that’s how it is. Extract the minimal initramfs to a new directory:

sudo cpio -id < ../linaro-image-minimal-initramfs-genericarmv7a.cpio

Extract the new cpio into the same location. (Yes, I could do this the other way around and pipe the output of find into the already extracted location but that's for when I get a script to do this):

sudo cpio --no-absolute-filenames -id < ../ramfs/cpio

From the CPIO manual: the newc format is the new (SVR4) portable format, which supports file systems having more than 65536 i-nodes and sizes up to 4294967295 bytes. The resulting archive comes out at about 41M:

find . | cpio -H newc -o > ../armmp-image.cpio
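Compress the new archive before the header goes back on (this step is implied by the .gz filename used below):

gzip -9 armmp-image.cpio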

... and add the u-boot header back:

mkimage -A arm -T ramdisk -C none -d armmp-image.cpio.gz debian-armmp-initrd.cpio.gz.u-boot
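Since this whole procedure has to be repeated for each rebuild of the kernel package, here is a rough sketch of how the steps above could be scripted; the filenames are assumptions to adjust, and the module filter is deliberately simplified compared to the full find expression earlier:

#!/bin/sh
set -e

BASE=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
DEB=linux-image-3.14-1-armmp_3.14.12-1_armhf.deb

# strip the 64-byte u-boot header and decompress
dd if=$BASE of=base.cpio.gz skip=64 bs=1
gunzip -f base.cpio.gz

# extract the packaged kernel and build a cpio of its modules
# (simplified: the full filter above selects only disk, net,
# filesystem and LVM modules)
dpkg -x $DEB kernel-dir
(cd kernel-dir && find lib -type f -name \*.ko | \
    pax -x sv4cpio -s '%lib%/lib%' -d -w) > modules.cpio

# merge both archives; run as root to preserve ownership, as with
# the sudo cpio calls above
mkdir -p merged
(cd merged && cpio -id < ../base.cpio)
(cd merged && cpio --no-absolute-filenames -id < ../modules.cpio)
(cd merged && find . | cpio -H newc -o) | gzip -9 > armmp-image.cpio.gz

# add the u-boot header back
mkimage -A arm -T ramdisk -C none -d armmp-image.cpio.gz debian-armmp-initrd.cpio.gz.u-boot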
Now what?

Now send the combination to LAVA and test it.

Results bundle for a local LAVA test job using this technique. (18k)

submission JSON - uses file:// references, so would need modification before being submitted to LAVA elsewhere.

complete log of the test job (72k)

Those familiar with LAVA will spot that I haven't optimised this job, it boots the ARMMP kernel into a minimal initramfs and then expects to find apt and other tools. Actual tests providing useful results would use available tools, add more tools or specify a richer rootfs.

The tests themselves are very quick (the job described above took 3 minutes to run) and don't need to be run particularly often, just once per board type per upload of the ARMMP kernel. LAVA can easily run those jobs in parallel and submission can be automated using authentication tokens and the lava-tool CLI. lava-tool can be installed without lava-server, so can be used in hooks for automated submissions.

Extensions

That's just one DTB and one board. I have a range of boards available locally:

* iMX6Q Wandboard (used for this test)
* iMX.53 Quick Start Board (needs updated u-boot)
* Beaglebone Black
* Cubie2
* CubieTruck
* arndale (no dtb?)
* pandaboard

Other devices could include the ARMv7 boards hosted at www.armv7.com and validation.linaro.org – as part of a thank-you to the Debian community for providing the OS which is (now) running all of the LAVA infrastructure.

That doesn't cover all of the current DTBs (and includes many devices which have no DTBs) so there is plenty of scope for others to get involved.

Hopefully, the above will help get people started with a suitable kernel+dtb+initrd and I'd encourage anyone interested to install lava-server and have a go at running test jobs based on those so that we start to build data about as many of the variants as possible.

(If anyone in DI fancies producing a suitable initrd with modules alongside the DI initrd for armhf builds, or if anyone comes up with a patch for DI to do that, it would help enormously.)

This will at least help Debian answer the question of what the Debian ARMMP package can actually support.

For help on LAVA, do read through the documentation and then come to us at #linaro-lava or the linaro-validation mailing list or file bugs in Debian: reportbug lava-server.

You can also ask me in person: I'm giving one talk on the LAVA software and there will be a BoF on validation and CI in Debian.

Russell Coker: Public Lectures About FOSS

22 July, 2014 - 16:22
Eventbrite

I’ve recently started using the Eventbrite Web site [1] and the associated Eventbrite Android app [2] to discover public events in my area. Both the web site and the Android app lack features for searching (I’d like to save search alerts and have my phone notify me when new events are added to their database), but they are basically functional. The main issue is content: Eventbrite has a lot of good events in their database (I’ve got tickets for 6 free events in the next month). I assume that Eventbrite also has many people attending their events; otherwise the events wouldn’t be promoted there.

At this time I haven’t compared Eventbrite to any similar services; Eventbrite events have taken up much of my available time for the next 6 weeks (I appreciate the button on the app to add an entry to my calendar), so I don’t have much incentive to find other web sites that list events. I would appreciate comments from users of competing event registration systems and may write a future post comparing different systems. Also, I have only checked for events in Melbourne, Australia, as I don’t have any personal interest in events in other places. For the topic of this post Eventbrite is good enough: it meets all requirements for Melbourne, and I’m sure that if it isn’t useful in other cities then there are competing services.

I think that we need to have free FOSS events announced through Eventbrite. We regularly have experts in various fields related to FOSS visiting Melbourne who give a talk for the Linux Users of Victoria (and sometimes other technical groups). This is a good thing but I think we could do better. Most people in Melbourne probably won’t attend a LUG meeting and if they did they probably wouldn’t find it a welcoming experience.

Also I recommend that anyone who is looking for educational things to do in Melbourne visit the Eventbrite web site and/or install the Android app.

Accessible Events

I recently attended an Eventbrite event where a professor described the work of his research team; it was a really good talk that made the topic of his research accessible to random members of the public like me. Then when it came to question time, the questions were mostly opinion pieces disguised as questions, which used a lot of industry-specific jargon and probably lost the interest of most people in the audience who weren't from the university department that hosted the lecture. I spent the last 15 minutes in that lecture hall reading Wikipedia and resisted the temptation to load an Android game.

Based on this lecture (and many other lectures I’ve seen) I get the impression that when the speaker or the MC addresses a member of the audience by name (EG “John Smith has a question”) then it’s strongly correlated with a low quality question. See my previous post about the Length of Conference Questions for more on this topic [3].

It seems to me that when running a lecture everyone involved has to agree about whether it’s a public lecture (IE one that is for any random people) as opposed to a society meeting (which while free for anyone to attend in the case of a LUG is for people with specific background knowledge). For a society meeting (for want of a better term) it’s OK to assume a minimum level of knowledge that rules out some people. If 5% of the audience of a LUG don’t understand a lecture that doesn’t necessarily mean it’s a bad lecture, sometimes it’s not possible to give a lecture that is easily understood by those with the least knowledge that also teaches the most experienced members of the audience.

For a public lecture the speaker has to give a talk for people with little background knowledge. Then the speaker and/or the MC have to discourage or reject questions that are for a higher level of knowledge.

As an example of how this might work consider the case of an introductory lecture about how an OS kernel works. When one of the experienced Linux kernel programmers visits Melbourne we could have an Eventbrite event organised for a lecture introducing the basic concepts of an OS kernel (with Linux as an example). At such a lecture any questions about more technical topics (such as specific issues related to compilers, drivers, etc) could be met with “we are having a meeting for more technical people at the Linux Users of Victoria meeting tomorrow night” or “we are having coffee at a nearby cafe afterwards and you can ask technical questions there”.

Planning Eventbrite Events

When experts in various areas of FOSS visit Melbourne they often offer a talk for LUV. For any such experts who read this post please note that most lectures at LUV meetings are by locals who can reschedule, so if you are only in town for a short time we can give you an opportunity to speak at short notice.

I would like to arrange to have some of those people give a talk aimed at a less experienced audience which we can promote through Eventbrite. The venue for LUV talks (Melbourne University 7PM on the first Tuesday of the month) might not work for all speakers so we need to find a sponsor for another venue.

I will contact Linux companies that are active in Melbourne and ask whether they would be prepared to sponsor the venue for such a talk. The fallback option would be to have such a lecture at a LUV meeting.

I will talk to some of the organisers of science and technology events advertised on Eventbrite and ask why they chose the times that they did. Maybe they have some insight into which times are best for getting an audience. Also I will probably get some idea of the best times by just attending many events and observing the attendance. I think that the aim of an Eventbrite event is to attract delegates who wouldn’t attend other meetings, so it is a priority to choose a suitable time and place.

Finally please note that while I am a member of the LUV committee I’m not representing LUV in this post. My aim is that community feedback on this post will help me plan such events. I will discuss this with the LUV committee after I get some comments here.

Please comment if you would like to give such a public lecture, attend such a lecture, or if you just have any general ideas.


Martin Pitt: autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

22 July, 2014 - 14:16

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) are now gone. There is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid
Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. has been documented in the manpages and the various READMEs. Enjoy!

MJ Ray: Three systems

22 July, 2014 - 11:59

There are three basic systems:

The first is slick and easy to use, but fiddly to set up correctly and if you want to do something that its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible and requiring complete reinitialisation.

The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.

The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.

So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?

Andrew Pollock: [debian] Day 174: Kindergarten, startup stuff, tennis

22 July, 2014 - 09:23

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

Andrew Pollock: [debian] Day 173: Investigation for bug #749410 and fixing my VMs

22 July, 2014 - 04:25

I have a couple of virt-manager virtual machines for doing DHCP-related work. I have one for the DHCP server and one for the DHCP client, and I have a private network between the two so I can simulate DHCP requests without messing up anything else. It works nicely.

I got a bit carried away, and I use LVM snapshots for the work I do, so that when I'm done I can throw away the virtual machines' disks and work with a fresh snapshot next time I want to do something.

I have a cron job that, on a good day, fires up the virtual machines using the master logical volumes and does a dist-upgrade on a weekly basis. It seems to have varying degrees of success though.

So I fired up my VMs to do some investigation of the problem for #749410 and discovered that they weren't booting, because the initramfs couldn't find the root filesystem.

Upon investigation, the problem seemed to be that the logical volumes weren't getting activated. I didn't get to the bottom of why, but a manual activation of the logical volumes allowed the instances to continue booting successfully, and after doing manual dist-upgrades and kernel upgrades, they booted cleanly again. I'm not sure if I got hit by a passing bug in unstable, or what the problem was. I did burn about 2.5 hours just fixing everything up though.

Then I realised that there'd been more activity on the bug since I'd last read it while I was on vacation, and half the investigation I needed to do wasn't necessary any more. Lesson learned.

I haven't got to the bottom of the bug yet, but I had a fun day anyway.

Chris Lamb: Disabling internet for specific processes with libfiu

22 July, 2014 - 02:26

My primary use case is to prevent test suites and build systems from contacting internet-based services. Such contact introduces an element of non-determinism at the very least, and malicious code at worst.

I use Alberto Bertogli's libfiu for this, specifically the fiu-run utility, which is part of the fiu-utils package on Debian and Ubuntu.

Here's a contrived example, where I prevent Curl from talking to the internet:

$ fiu-run -x -c 'enable name=posix/io/net/connect' curl google.com
curl: (6) Couldn't resolve host 'google.com'

... and here's an example of it detecting two possibly internet-connecting tests:

$ fiu-run -x -c 'enable name=posix/io/net/connect' ./manage.py test
[..]
----------------------------------------------------------------------
Ran 892 tests in 2.495s

FAILED (errors=2)
Destroying test database for alias 'default'...

Note that libfiu inherits all the drawbacks of LD_PRELOAD; in particular, it cannot limit child processes that call setuid binaries such as /bin/ping, which ignore LD_PRELOAD:

$ fiu-run -x -c 'enable name=posix/io/net/connect' ping google.com
PING google.com (173.194.41.65) 56(84) bytes of data.
64 bytes from lhr08s01.1e100.net (173.194.41.65): icmp_req=1 ttl=57 time=21.7 ms
64 bytes from lhr08s01.1e100.net (173.194.41.65): icmp_req=2 ttl=57 time=18.9 ms
[..]

Whilst it would certainly be more robust and flexible to use iptables—such as allowing localhost and other local socket connections but disabling all others—I gravitate towards this entirely userspace solution as it requires no setup and I can quickly modify it to block other calls on an ad-hoc basis. The list of other "modules" libfiu supports is viewable here.
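For comparison, a sketch of that iptables approach, assuming the build runs as a dedicated builduser account:

# allow builduser to reach localhost, reject all other outbound traffic
iptables -A OUTPUT -o lo -m owner --uid-owner builduser -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner builduser -j REJECT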

Ian Campbell: sunxi-tools now available in Debian

22 July, 2014 - 02:10

I've recently packaged the sunxi tools for Debian. These are a set of tools produced by the Linux Sunxi project for working with the Allwinner "sunxi" family of processors. See the package page for details. Thanks to Steve McIntyre for sponsoring the initial upload.

The most interesting components of the package are the tools for working with the Allwinner processors' FEL mode. This is a low-level processor mode which implements a simple USB protocol allowing for initial programming of the device and recovery; it can be entered on boot (usually by pressing a special 'FEL button' somewhere on the device). It is thanks to FEL mode that most sunxi-based devices are pretty much unbrickable.

The most common use of FEL is to boot over USB. In the Debian package the fel and usb-boot tools are named sunxi-fel and sunxi-usb-boot respectively but otherwise can be used in the normal way described on the sunxi wiki pages.

One enhancement I made to the Debian version of usb-boot is to integrate with the u-boot packages to allow you to easily FEL boot any sunxi platform supported by the Debian packaged version of u-boot (currently only Cubietruck, more to come I hope). To make this work we take advantage of Multiarch to install the armhf version of u-boot (unless your host is already armhf of course, in which case just install the u-boot package):

# dpkg --add-architecture armhf
# apt-get update
# apt-get install u-boot:armhf
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  u-boot:armhf
0 upgraded, 1 newly installed, 0 to remove and 1960 not upgraded.
Need to get 0 B/546 kB of archives.
After this operation, 8,676 kB of additional disk space will be used.
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Selecting previously unselected package u-boot:armhf.
(Reading database ... 309234 files and directories currently installed.)
Preparing to unpack .../u-boot_2014.04+dfsg1-1_armhf.deb ...
Unpacking u-boot:armhf (2014.04+dfsg1-1) ...
Setting up u-boot:armhf (2014.04+dfsg1-1) ...

With that done FEL booting a cubietruck is as simple as starting the board in FEL mode (by holding down the FEL button when powering on) and then:

# sunxi-usb-boot Cubietruck -
fel write 0x2000 /usr/lib/u-boot/Cubietruck_FEL/u-boot-spl.bin
fel exe 0x2000
fel write 0x4a000000 /usr/lib/u-boot/Cubietruck_FEL/u-boot.bin
fel write 0x41000000 /usr/share/sunxi-tools//ramboot.scr
fel exe 0x4a000000

Which should result in something like this on the Cubietruck's serial console:

U-Boot SPL 2014.04 (Jun 16 2014 - 05:31:24)
DRAM: 2048 MiB


U-Boot 2014.04 (Jun 16 2014 - 05:30:47) Allwinner Technology

CPU:   Allwinner A20 (SUN7I)
DRAM:  2 GiB
MMC:   SUNXI SD/MMC: 0
In:    serial
Out:   serial
Err:   serial
SCSI:  SUNXI SCSI INIT
Target spinup took 0 ms.
AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl SATA mode
flags: ncq stag pm led clo only pmp pio slum part ccc apst 
Net:   dwmac.1c50000
Hit any key to stop autoboot:  0 
sun7i# 

As more platforms become supported by the u-boot packages you should be able to find them in /usr/lib/u-boot/*_FEL.

There is one minor inconvenience which is the need to run sunxi-usb-boot as root in order to access the FEL USB device. This is easily resolved by creating /etc/udev/rules.d/sunxi-fel.rules containing either:

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", OWNER="myuser"

or

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", GROUP="mygroup"

to enable access for myuser or mygroup respectively. Once you have created the rules file, reload the udev rules to enable it:

# udevadm control --reload-rules

As well as the FEL mode tools the packages also contain a FEX (de)compiler. FEX is Allwinner's own hardware description language and is used with their Android SDK kernels and the fork of that kernel maintained by the linux-sunxi project. Debian's kernels follow mainline and therefore use Device Tree.

Daniel Pocock: Australia can't criticize Putin while competing with him

22 July, 2014 - 01:00

While much of the world is watching the tragedy of MH17 and contemplating the grim fate of 298 deceased passengers sealed into a refrigerated freight train in the middle of a war zone, Australia (with 28 victims on that train) has more than just theoretical skeletons in the closet too.

At this moment, some 153 Tamil refugees, fleeing the same type of instability that brought a horrible death to the passengers of MH17, have been locked up in the hull of a customs ship on the high seas. Windowless cabins and a supply of food not fit for a dog are part of the Government's strategy to brutalize these people for simply trying to avoid the risk of enhanced imprisonment(TM) in their own country.

Under international protocol for rescue at sea and political asylum, these people should be taken to the nearest port and given a humanitarian visa on arrival. Australia, however, is trying to lie and cheat their way out of these international obligations while squealing like a stuck pig about the plight of Australians in the hands of Putin. If Prime Minister Tony Abbott wants to encourage Putin to co-operate with the international community, shouldn't he try to lead by example? How can Australians be safe abroad if our country systematically abuses foreigners in their time of need?

Steve Kemp: An alternative to devilspie/devilspie2

21 July, 2014 - 22:30

Recently I was updating my dotfiles, because I wanted to ensure that media-players were "always on top", when launched, as this suits the way I work.

For many years I've used devilspie to script the placement of new windows, and once I googled a recipe I managed to achieve my aim.

However during the course of my googling I discovered that devilspie is unmaintained, and has been replaced by something using Lua - something I like.

I'm surprised I hadn't realized that the project was dead; although I've always hated the configuration syntax, it is something that I've used on a constant basis since I found it.

Unfortunately the replacement, despite using Lua and despite being functional, just didn't seem to gel with me. So I figured "How hard could it be?".

In the past I've written software which iterated over all (visible) windows, and obviously I'm no stranger to writing Lua bindings.

However I did run into a snag. My initial implementation did two things:

  • Find all windows.
  • For each window invoke a lua script-file.

This worked. This worked well. This worked too well.

The problem I ran into was that if I wrote something like "Move window 'emacs' to desktop 2" that action would be applied, over and over again. So if I launched emacs, and then manually moved the window to desktop3 it would jump back!

In short I needed to add a "stop()" function, which would cause further actions against a given window to cease. (By keeping a linked list of windows-to-ignore, and avoiding processing them.)

The code did work, but it felt wrong to have an ever-growing linked-list of processed windows. So I figured I'd look at the alternative - the original devilspie used libwnck to operate. That library allows you to nominate a callback to be executed every time a new window is created.

If you apply your magic only on a window-create event - well you don't need to bother caching prior-windows.

So in conclusion:

I think my code is better than devilspie2 because it is smaller, simpler, and does things more neatly - for example instead of a function to get geometry and another to set it, I use one. (e.g. "xy()" returns the position of a window, but xy(3,3) sets it.).
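To give a flavour, here is a sketch of a kpie rule using only the primitives shown in this post; the window class is a placeholder:

-- keep a media player on workspace 1; stop() prevents the rule
-- from being re-applied if the window is later moved by hand
if ( window_class() == "mplayer2" ) then
    workspace(1)
    stop()
end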

kpie also allows you to run as a one-off job, and using the simple primitives I wrote a file to dump your windows, and their size/placement, which looks like this:

shelob ~/git/kpie $ ./kpie --single ./samples/dump.lua
-- Screen width : 1920
-- Screen height: 1080
..
if ( ( window_title() == "Buddy List" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1536,24 )
     size(384,1032 )
     workspace(2)
end
if ( ( window_title() == "feeds" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1,24 )
     size(1536,1032 )
     workspace(2)
end
..

As you can see that has dumped all my windows, along with their current state. This allows a simple starting-point - Configure your windows the way you want them, then dump them to a script file. Re-run that script file and your windows will be set back the way they were! (Obviously there might be tweaks required.)

I used that starting-point to define a simple recipe for configuring pidgin, which is more flexible than what I ever had with pidgin, and suits my tastes.

Bug-reports welcome.

Tim Retout: apt-transport-tor 0.2.1

21 July, 2014 - 20:17

apt-transport-tor 0.2.1 should now be on your preferred unstable Debian mirror. It will let you download Debian packages through Tor.

New in this release: support for HTTPS over Tor, to keep up with people.debian.org. :)

I haven't mentioned it before on this blog. To get it working, you need to "apt-get install apt-transport-tor", and then use sources.list lines like so:

deb tor+http://http.debian.net/debian unstable main

Note the use of http.debian.net in order to pick a mirror near whichever Tor exit node the connection emerges from. Throughput is surprisingly good.

On the TODO list: reproducible builds? It would be nice to have some mirrors offer Tor hidden services, although I have yet to think about the logistics of this, such as how the load could be balanced (maybe a service like http.debian.net). I also need to look at how cowbuilder etc. can be made to play nicely with Tor. And then Debian installer support!

Francois Marier: Creating a modern tiling desktop environment using i3

21 July, 2014 - 19:03

Modern desktop environments like GNOME and KDE involve a lot of mousing around, and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs, including gnome-settings-daemon.

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as gnome-settings-daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false
Screensaver

In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow
Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the following packages: pulseaudio-utils (for pactl), xbacklight, and gnome-power-manager (for gnome-power-statistics).

Keyboard layout switcher

Another thing that used to work with GNOME and had to be re-created in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
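The script itself isn't shown in the post, but a minimal sketch might look like this, assuming a toggle between two hypothetical layouts (us and fr):

#!/bin/sh
# toggle the X keyboard layout between two choices
current=$(setxkbmap -query | awk '/^layout/ {print $2}')
if [ "$current" = "us" ]; then
    setxkbmap fr
else
    setxkbmap us
fi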
Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:

HandleLidSwitch=lock

Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
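The suspend script isn't included in the post either; a hypothetical reconstruction based on that description might be:

#!/bin/sh
# clear both X selections, flush writes to disk, lock, then suspend
xsel --clipboard --clear
xsel --clear
sync
gnome-screensaver-command --lock
sudo pm-suspend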

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend
Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

before clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1
Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"


Creative Commons License: the copyright of each article belongs to its respective author. These works are licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.