Planet Debian

Planet Debian - http://planet.debian.org/

John Goerzen: My boys love 1986 computing

1 hour 30 min ago

Yesterday, Jacob (age 8) asked to help me put together a 30-year-old computer from parts in my basement. Meanwhile, Oliver (age 5) asked Laura to help him learn cursive. Somehow, this doesn’t seem odd for a Saturday at our place.

Let me tell you how this came about.

I’ve had a project going on for a while now to load data from old floppies. It’s been fun, and had a surprise twist the other day: my parents gave me an old TRS-80 Color Computer II (aka “CoCo 2”). It was, in fact, my first computer, one they got for me when I was in kindergarten. It is nearly 30 years old.

I have been musing lately about the great disservice Apple did the world by making computers easy to use — namely the fact that few people ever bother to learn about them. Who bothers to learn about them when, on the iPhone for instance, the case is sealed shut, the lifespan is 1 or 2 years for many purchasers, and the platform is closed in lots of ways?

I had forgotten how finicky computers used to be. But after some days struggling with IDE incompatibilities, booting issues, etc., when I actually managed to get data off a machine that had last booted in 1999, I had quite the sense of accomplishment, which I rarely have lately. I did something that was hard to do in a world where most of the interfaces don’t work with equipment that old (even if nominally they are supposed to.)

The CoCo is one of those computers normally used with a floppy drive or cassette recorder to store programs. You type DIR, and you feel the clack of the drive heads through the desk. You type CLOAD and you hear the relay click closed to turn on the tape motor. You wiggle cables around until they make contact just right. You power-cycle for the times when the reset button doesn’t quite do the job. The details of how it works aren’t abstracted away by innumerable layers of controllers, interfaces, operating system modules, etc. It’s all right there, literally vibrating your desk.

So I thought this could be a great opportunity for Jacob to learn a few more computing concepts, such as the difference between mass storage and RAM, plus a great way to encourage him to practice critical thinking. So we trekked down to the basement and came up with handfuls of parts. We brought up the computer, some joysticks, and all sorts of tangled cables. We needed adapters and an old TV. Jacob helped me hook everything up, and then the moment of truth: success! A green BASIC screen!

I added more parts, but struck out when I tried to connect the floppy drive. The thing just wouldn’t start up right whenever the floppy controller cartridge was installed. I cleaned the cartridge. I took it apart, scrubbed the contacts, even did a re-seat of the chips. No dice.

So I fired up my CoCo emulator (xroar), and virtually “saved” some programs to cassette (a .wav file). I then burned those .wav files to an audio CD, brought up an old CD player from the basement, connected the “cassette in” plug to the CD player’s headphone jack, and presto — instant programs. (Well, almost. It takes a couple of minutes to load a program from audio.)

The picture above is Oliver cackling at one of the very simplest BASIC programs there is: “number find.” The computer picks a random number between 1 and 2000, and asks the user to guess it, giving a “too low” or “too high” clue with each incorrect guess. Oliver delighted in giving invalid input (way too high numbers, or things that weren’t numbers at all) and cackled at the sarcastic error messages built into the program. During Jacob’s turn, he got very serious about it, and is probably going to be learning about how to calculate halfway points before too long.
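
For readers curious what that program boils down to, here is a minimal Python sketch of the same game as described above (range 1 to 2000, too-low/too-high hints, a jab at invalid input); the messages and the function name are invented rather than taken from the original BASIC listing.

import random

def number_find(low=1, high=2000):
    # Pick the secret number, then loop until it is guessed.
    secret = random.randint(low, high)
    tries = 0
    while True:
        answer = input("Your guess? ")
        try:
            guess = int(answer)
        except ValueError:
            print("That is not even a number. Try again.")
            continue
        if guess < low or guess > high:
            print("Way out of range, smarty-pants.")
            continue
        tries += 1
        if guess < secret:
            print("Too low!")
        elif guess > secret:
            print("Too high!")
        else:
            print("Got it in", tries, "guesses!")
            return tries

if __name__ == "__main__":
    number_find()

The halfway-point strategy mentioned above finds any number in that range in at most 11 guesses.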

But imagine my pride when this morning, Jacob found the new CD I had made last night (correcting a couple recordings), found my one-line instruction on just part of how to load a program, and correctly figured out by himself all the steps to do in order (type CLOAD on the CoCo, advance the CD to the proper track, press play on the player, wait for it to load on the CoCo, then type RUN).

I ordered a replacement floppy controller off eBay tonight, and paid $5 for a coax adapter that should fix some video quality issues. I rescued some 5.25″ floppies from my trash can from another project, so they should have plenty of tools for exploration.

It is so much easier for them to learn how a disk drive works, and even what the heck a track is, when you can look at a floppy drive with the cover off and see the heads move. There are other things we can do with more modern equipment — Jacob has shown a lot of interest in Arduino projects — but I have so far drawn a blank on ways to really let kids discover how a modern PC (let alone a modern phone or tablet) works.

Dimitri John Ledkov: Analyzing public OpenPGP keys

7 hours 42 min ago
The OpenPGP Message Format (RFC 4880) defines key structures and wire formats (OpenPGP packets) quite well. Thus, when I looked into setting up a public key network (SKS) server, I quickly found pointers to dump files in said format for bootstrapping a key server.

I did not feel like experimenting with Python and instead opted for Go, and found the http://code.google.com/p/go.crypto/openpgp/packet library, which has comprehensive support for parsing low-level OpenPGP structures. I downloaded the SKS dump, verified its MD5SUM hashes (lolz), and went ahead to process the files in Go.

With help from http://github.com/lib/pq and database/sql, I've written a small program to churn through all the dump files, filter for primary RSA keys (not subkeys) and inject them into a database table. The fields I have chosen to inject are the fingerprint, N and E, where N is the modulus of the RSA key pair and E is the public exponent; together they form the public part of an RSA keypair. So far, nothing fancy.

Next I ran an SQL query to see how unique things are... and found 92 unique N & E pairs that have between two and fifteen duplicates each. In total that is 231 unique fingerprints which use key material with a known duplicate in the public key network. That didn't sound good. It is also odd, given that over 940,000 other RSA keys managed to gather enough entropy to pull a unique key out of the keyspace haystack (which is humongously huge, by the way).
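
The post does not show the actual query. Assuming a table named keys with columns fingerprint, n and e (the fields injected above), the duplicate check could look roughly like this Python/psycopg2 sketch; the author's real program is written in Go with lib/pq, so this is only an illustration of the idea.

import psycopg2

# Connection string and table/column names are placeholders; adjust as needed.
conn = psycopg2.connect("dbname=pgpkeys")
cur = conn.cursor()

# Find (N, E) pairs shared by more than one primary RSA key.
cur.execute("""
    SELECT n, e, count(*) AS copies
    FROM keys
    GROUP BY n, e
    HAVING count(*) > 1
    ORDER BY copies DESC
""")
duplicates = cur.fetchall()
print(len(duplicates), "duplicated (N, E) pairs")

# List the fingerprints affected by each duplicated pair.
for n, e, copies in duplicates:
    cur.execute("SELECT fingerprint FROM keys WHERE n = %s AND e = %s", (n, e))
    print(copies, [row[0] for row in cur.fetchall()])

cur.close()
conn.close()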

Having the list of the keys, I fetched them, and they do not look like regular keys: their UIDs do not have names & emails; instead they look like something from the monkeysphere. The keys look like they were originally used for TLS and/or SSH authentication, but were converted into OpenPGP format and uploaded to the public key server. This reminded me of Debian's SSL key generation vulnerability, CVE-2008-0166. So these keys might have been generated with bad entropy due to tools affected by that CVE and later converted to OpenPGP.

Looking at the openssl-blacklist package, it should be relatively easy for me to generate all possible RSA key pairs, and I believe all other material that is hashed to generate the fingerprint is also available (RFC 4880#12.2). Thus it should be reasonably possible to generate matching private keys, generate revocation certificates and publish the revocation certificates with pointers to CVE-2008-0166 (or email them to the people who have signed the given monkeysphered keys). When I have a minute I will work on generating openpgp-blacklist type of scripts to address this.

If anyone is interested in the Go source code I've written to process openpgp packets, please drop me a line and I'll publish it on github or something.

Steinar H. Gunderson: Scaling analysis.sesse.net

8 hours 52 min ago

As I previously mentioned, I've been running live chess analysis during the Carlsen–Anand World Chess Championship match. Now it's all over (congratulations to Magnus!), so I thought I should write a few words about scaling, as we ended up peaking at (I think) 1527 simultaneous users, much more than the system was originally designed for (2–3 or so :-) ).

Let me explain first the requirements. Generally the backend system outputs analysis as soon as it comes in from the two chess engines (although rate-limited so that it doesn't output more than once a second), and we want to push this out to the clients as soon as possible. The clients are regular web browsers (both on mobile and on desktop; I haven't checked the ratio) running a fair amount of JavaScript; they generally request an URL in a loop, and whenever something comes in, they display it on the chessboard, wait 100 ms (just as a safeguard) and then go fetch again.

Of course, I could have just had the clients ask every second, but it seems inelegant and a bit wasteful, especially for mobile. If the analysis has come far, or even has stopped entirely since e.g. the game is over, there's no need to go fetch the same data over and over again. Instead, what I want is a system where, if the client already has the latest data, the HTTP request hangs until there's more data, and then gets it immediately. Together with this, there's also a special header that says how many people are connected (which is also shown to the viewers). If a client has been hanging/sleeping for more than 30 seconds, I just re-send the latest analysis; in this world of NATs, transparent proxies and other unpredictable network conditions, I don't want to have connections hanging for minutes with no idea of whether I can actually answer them when the time comes.

The client tells the server, in a CGI parameter (again for simplicity, so that I don't have to deal with caching proxies etc. on the way), the timestamp of the latest data it has. This leads to the following different scenarios (sketched in code after the list):

  1. A client comes in and has no existing data. They should get the latest data, immediately.
  2. A client has old data, and re-asks. They should also get the latest data, immediately.
  3. A client already has the newest data, which causes it to hang, and no new data is ready within 30 seconds. They should get the latest data anew (or I could have returned some other HTTP code, but I decided not to get fancy).
  4. A client already has the newest data, but new data comes in while the request is hanging. They should get the new data.
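
The real backend is Node.js (fronted by Varnish, as described below); purely to make the four scenarios concrete, here is a minimal thread-per-request Python sketch of the hanging-request logic. The query parameter name, port and JSON format are invented; only the 30-second limit and the compare-timestamp-then-hang behaviour come from the description above.

import json, threading, time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import parse_qs, urlparse

# Latest analysis and its timestamp, shared between the publisher and all requests.
state = {"ts": time.time(), "analysis": "startpos"}
changed = threading.Condition()

def publish(analysis):
    # Called whenever the engines produce new output; wakes every hanging client.
    with changed:
        state["ts"] = time.time()
        state["analysis"] = analysis
        changed.notify_all()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        client_ts = float(qs.get("ts", ["0"])[0])   # timestamp the client already has
        deadline = time.time() + 30                 # answer anyway after 30 seconds
        with changed:
            # Scenarios 1 and 2 skip this loop; scenarios 3 and 4 hang in it.
            while state["ts"] <= client_ts:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break                           # scenario 3: re-send the latest data
                changed.wait(remaining)             # scenario 4: woken by publish()
            body = json.dumps(state).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), Handler).serve_forever()

In the real setup, Varnish request coalescing takes over this hanging state, so only one such request at a time ever reaches the backend.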

Unsurprisingly, this leads to a lot of clients being in the “hanging” state at the same time, and then when new analysis comes in, there's a thundering herd of clients that should have it at the same time (and then come back for more soon after).

Like I wrote earlier, Node.js was pretty much the ideal case model-wise for this; there's only one process to handle all of them, which means the extra memory overhead per hanging client is very low, and when there's new data, we can just send to all of them after each other. Furthermore, since there's only one process, it is also easy to find the viewer count: Simply count all the hanging clients, plus the ones that I think are simply processing the latest data and should come back with a new request soon (the limit was five seconds or so).

However, at around 600–700 clients, I started getting issues with requests not coming through. It turns out that the single Node.js process just couldn't handle that many clients and started hitting the roof CPU-wise. Everyone who's done a bit of performance work in nontrivial systems knows that you can't really optimize anything without profiling it first, but unfortunately, Node.js was extremely limited here. There were some profiling systems that send lots of data to external services, which I didn't really feel like using. Then there were some tools that try to interpret V8's debug output logs, and they simply gave bogus answers.

In particular, they claimed 93% of my time was spent in glibc, but couldn't say where in glibc, and when I pointed perf at them, it was pretty clear that most of the time was actually spent in JavaScript and V8 support functions. I took a guess at my viewer count functions, optimized so I didn't have to count as often, and it helped—but I still wasn't really confident it would scale very far. Usually people start up more Node.js workers and then have some load balancer in front, but it would make the viewer counting much more complicated, and the CPU would need to be shared with the chess engine itself, so I wasn't happy with the “just give it more cores” approach either.

So I turned to everyone's favorite website scaling tool, Varnish. With lots of help from Lasse (Varnish Software) and Tollef (ex-Varnish Software, now Fastly), I got things working; it was a sort of bumpy road, though, especially as I hit two different crash bugs in Varnish 4.0.2 that only manifested themselves under actual production load. Here's what I ended up with (running on git master):

The first thing to realize is that we're not trying to keep backend traffic entirely minimal, just cut it significantly. For instance, if Varnish sees #1 (no existing data) and #2 (old existing data) as different and fires off two different backend requests for them, it doesn't really matter. However, we really want all the hanging clients to get the same backend connection; thankfully, Varnish gives us this entirely by itself with its backend coalescing; if it has a backend request going and another one comes in for the same URL, it simply puts the second one on the sleep list and gives both the same response when it comes back. Also, if a slow or new client doesn't manage to get onto the hanging request (i.e., it comes in after the backend response came), it should simply be served entirely out of Varnish' cache.

A lot of this comes automatically, and some of it comes with some cooperation between the backend and the VCL. In particular, we can let the backend tag the response with the timestamp of the data, and once something new comes in, simply purge/ban every object with a different timestamp header, causing us to never give out stale data from the Varnish cache. Varnish bans can be a bit tricky since they're checked lazily, and if you're not entirely careful, you can end up with a very long ban list, but it seems to work well in practice.

However, the distinction between #3 and #4 gives us a problem. We now have a situation where people ask for an URL, it times out after 30 seconds and gives us a response... which we then should give out to everybody, but the next time they ask, we should hang again! This was incredibly tricky to get right; the combination of TTL, Expires headers, grace, and the problem of clock skew for long-running requests (what exactly is the Date timestamp supposed to mean; request received, first byte of backend response, or something else?) and so on was just too much for me. Eventually I got tired of reading the Varnish source code (which, frankly, I find quite opaque) and decided to just sidestep the problem. Now, instead of the 30-second timeout, the backend simply just touches the data file every 30 seconds if it hasn't been changed, so it gets a new timestamp every time. Problem solved.
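
The touch trick is simple enough to show; here is a minimal Python sketch of that backend-side refresher. The file name and the one-second polling interval are made up; only the 30-second rule comes from the text above.

import os
import time

ANALYSIS_FILE = "analysis.json"   # hypothetical name for the data file

while True:
    age = time.time() - os.path.getmtime(ANALYSIS_FILE)
    if age > 30:
        # No new analysis for 30 seconds: bump the timestamp so hanging
        # clients get an answer and Varnish caches a "fresh" object instead
        # of the request timing out.
        os.utime(ANALYSIS_FILE, None)
    time.sleep(1)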

Finally, there's the counting problem; the backend doesn't see all the requests anymore, so we need a different way of counting. I solve this by tailing the access logs (using varnishncsa) in a separate process, comparing to the updates of the analysis file, and then trying to figure out if they've fallen out or not. Then I simply inject the viewer count into the backend every second. Problem solved, again. (Well, at low traffic numbers, seemingly there's some sort of buffering somewhere, causing me to see the requests way too late, and this causes the count to oscillate down between 1 and the real number somehow. But I don't care too much right now.)

So, there you have it. Varnish' threaded architecture isn't super for this kind of thing; in a sense, much less so than Node.js. However, even in this day and age, optimized C beats JavaScript any day of the week, seemingly by a factor of five or so. In the end, the 1500 clients were handled with CPU usage of about 40% of one core. I don't really like the fact that it needs ~1500 worker threads for those 1500 clients (I had to increase it from the default of 1024 in order to keep the HTTP errors away), but I used taskset to restrict it to two physical cores in order not to disturb the chess worker threads too much (they are already rather sensitive to the kernel's scheduling decisions).

So, how far can it go? Well, those 1500 clients needed about 33 Mbit/sec, so we could go to ~45k clients based on bandwidth alone (the server is on gigabit). At that point, though, I sincerely doubt both Varnish and the chess engine could keep going on the same machine, so I'd have to move the serving somewhere external. So next up, maybe Fastly? Well, at least if they start supporting IPv6.

You can find all the code, including the Varnish snippets, in the git repository. Until next time—perhaps WCC 2016! Waiting for Carlsen–Caruana. :-)

Final bonus: Munin graphs. Everyone loves Munin graphs; it's the Comic Sans of system administration.

Iustin Pop: Debian, Debian…

23 November, 2014 - 16:23

Due to some technical issues, I've been without access to my lists subscription email for a bit more than a week. Once I regained access and proceeded to read the batch of emails, I was - once again - shocked. Shocked at the amount of emails spent on the systemd issue, shocked at the number of people resigning, shocked at the amount of mud thrown.

I just hope that the GR results will finally mean silence and a chance to get over the last 3-6 months.

For the record:

  • I seconded the GR because I believed we were moving too fast (I wanted one full release as a transition period, even if that's a long time)
  • I am quite happy with the result of the GR!
  • I am not happy with the amount of people leaving (I hope they're just taking a break)
  • I am, as usual, behind on my Debian packaging ☹

However, some of the more recent emails on -private give me more hope, so I'm looking forward to the next 6 months. I wonder how this will all look in two years?

(Side-note: emacs-nox shows me the italic word above as italic in text mode. I never saw that before, and didn't know that it's possible to have italic fonts in xterm! What is this trickery⁈ … It seems to be related to the font I use. Fun!)

Matthew Palmer: You stay classy, Uber

23 November, 2014 - 07:00

You may have heard that Uber has been under a bit of fire lately for its desires to hire private investigators to dig up “dirt” on journalists who are critical of Uber. From using users’ ride data for party entertainment, putting the assistance dogs of blind passengers in the trunk, adding a surcharge to reduce the number of dodgy drivers, or even booking rides with competitors and then cancelling, or using the ride to try and convince the driver to change teams, it’s pretty clear that Uber is a pretty good example of how companies are inherently sociopathic.

However, most of those examples are internal stupidities that happened to be made public. It’s a very rare company that doesn’t do all sorts of shady things, on the assumption that the world will never find out about them. Uber goes quite a bit further, though, and is so out-of-touch with the world that it blogs about analysing people’s sexual activity for amusement.

You’ll note that if you follow the above link, it sends you to the Wayback Machine, and not Uber’s own site. That’s because the original page has recently turned into a 404. Why? Probably because someone at Uber realised that bragging about how Uber employees can amuse themselves by perving on your one night stands might not be a great idea. That still leaves the question open of what sort of a corporate culture makes anyone ever think that inspecting user data for amusement would be a good thing, let alone publicising it? It’s horrific.

Thankfully, despite Uber’s fairly transparent attempt at whitewashing (“clearwashing”?), the good ol’ Wayback Machine helps us to remember what really went on. It would be amusing if Uber tried to pressure the Internet Archive to remove their copies of this blog post (don’t bother, Uber; I’ve got a “Save As” button and I’m not afraid to use it).

In any event, I’ve never used Uber (not that I’ve got one-night stands to analyse, anyway), and I’ll certainly not be patronising them in the future. If you’re not keen on companies amusing themselves with your private data, I suggest you might consider doing the same.

Steve Kemp: Lumail 2.x ?

23 November, 2014 - 04:39

I've continued to ponder the idea of reimplementing the console mail-client I wrote, lumail, using a more object-based codebase.

For one thing having loosely coupled code would allow testing things in isolation, which is clearly a good thing.

I've written some proof of concept code which will allow the following Lua to be evaluated:

-- Open the maildir.
users = Maildir.new( "/home/skx/Maildir/.debian.user" )

-- Count the messages.
print( "There are " .. users:count() .. " messages in the maildir " .. users:path() )

--
-- Now we want to get all the messages and output their paths.
--
for k,v in ipairs( users:messages()) do
    --
    -- Here we could do something like:
    --
    --   if ( string.find( v:headers["subject"], "troll", 1, true ) ) then v:delete()  end
    --
    -- Instead play-nice and just show the path.
    print( k .. " -> " .. v:path() )
end

This is all a bit ugly, but I've butchered some code together that works, and tried to solicit feedback from lumail users.

I'd write more but I'm tired, and intending to drink whisky and have an early night. Today I mostly replaced pipes in my attic. (Is it "attic", or is it "loft"? I keep alternating!) Fingers crossed this will mean a dry kitchen in the morning.

Sune Vuorela: Is linux about choice?

23 November, 2014 - 04:10

Occasionally, various quotes show up from people having an opinion on whether Linux is about choice or not. Even pages like http://www.islinuxaboutchoice.com/ have shown up.

My short answer is “YES”. Linux is about choice. And you get all your choices directly from your F/LOSS definition of choice (FSF’s four freedoms / OSI’s open source definition / Debian Free Software Guidelines).

It doesn’t mean that you get all the GUI configuration bits that you want. It doesn’t mean that you can switch out any component without any problems. But it does mean that you can get it exactly your way, though it might require you to edit some source code and compile some stuff.

Daniel Pocock: rtc.debian.org updated for latest browsers

22 November, 2014 - 23:24

I've just updated rtc.debian.org with the latest versions of JSCommunicator and JsSIP.

The version of JsSIP that had been on the site was actually quite old, from February 2014 and the browsers have evolved a lot since then.

If you've tried it before and it didn't work consistently please try again and feel free to share any feedback you have.

Jonathan Wiltshire: Getting things into Jessie (#7)

22 November, 2014 - 17:18
Keep in touch

We don’t really have a lot of spare capacity to check up on things, so if we ask for more information or send you away to do an upload, please stay in touch about it.

Do remove a moreinfo tag if you reply to a question and are now waiting for us again.

Do ping the bug if you get a green light about an upload, and have done it. (And remove moreinfo if it was set.)

Don’t be afraid of making sure we’re aware of progress.


Craig Small: WordPress 4.0.1 for Debian

22 November, 2014 - 15:49

WordPress recently released an update with multiple security patches for their (then) current version 4.0. This release is 4.0.1 and includes important security fixes. The Debian packages have just been uploaded; if you are running the Debian-packaged wordpress, you should update to 4.0.1+dfsg-1 or later.

I am going to look at these patches and see if they can and need to be backported to wordpress 3.6.1. Unfortunately I believe they will need to be. I’m also asking for it to be unblocked into Jessie, as it is a security fix.

There were, at the time of writing, no CVE numbers.

Petter Reinholdtsen: How to stay with sysvinit in Debian Jessie

22 November, 2014 - 07:00

By now, it is well known that Debian Jessie will not be using sysvinit as its boot system by default. But how can one keep using sysvinit in Jessie? It is fairly easy, and here are a few recipes, courtesy of Erich Schubert and Simon McVittie.

If you already are using Wheezy and want to upgrade to Jessie and keep sysvinit as your boot system, create a file /etc/apt/preferences.d/use-sysvinit with this content before you upgrade:

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1

This file content will tell apt and aptitude not to consider installing systemd-sysv as part of any installation or upgrade solution when resolving dependencies, and thus tell them to avoid systemd as the default boot system. The end result should be that the upgraded system keeps using sysvinit.

If you are installing Jessie for the first time, there is no way to get sysvinit installed by default (debootstrap, as used by debian-installer, has no option for this), but one can tell the installer to switch to sysvinit before the first boot, either by passing a kernel argument to the installer or by adding a line to the preseed file used. First, the kernel command line argument:

preseed/late_command="in-target apt-get install -y sysvinit-core"

Next, the line to use in a preseed file:

d-i preseed/late_command string in-target apt-get install -y sysvinit-core

One can of course also do this after the first boot by installing the sysvinit-core package.

I recommend only using sysvinit if you really need it, as the sysvinit boot sequence in Debian has several hardware-specific bugs on Linux, caused by the fact that it is unpredictable when hardware devices show up during boot. But on the other hand, the new default boot system still has a few rough edges I hope will be fixed before Jessie is released.

Joey Hess: propelling containers

22 November, 2014 - 04:33

Propellor has supported docker containers for a "long" time, and it works great. This week I've worked on adding more container support.

docker containers (revisited)

The syntax for docker containers has changed slightly. Here's how it looks now:

example :: Host
example = host "example.com"
    & Docker.docked webserverContainer

webserverContainer :: Docker.Container
webserverContainer = Docker.container "webserver" "joeyh/debian-stable"
    & os (System (Debian (Stable "wheezy")) "amd64")
    & Docker.publish "80:80"
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"

That makes example.com have a web server in a docker container, as you'd expect, and when propellor is used to deploy the DNS server it'll automatically make www.example.com point to the host (or hosts!) where this container is docked.

I use docker a lot, but I have drunk little of the Docker KoolAid. I'm not keen on using random blobs created by random third parties using either unreproducible methods or the weirdly underpowered dockerfiles. (As for vast complicated collections of containers that each run one program and talk to one another etc ... I'll wait and see.)

That's why propellor runs inside the docker container and deploys whatever configuration I tell it to, in a way that's both replicatable later and lets me use the full power of Haskell.

Which turns out to be useful when moving on from docker containers to something else...

systemd-nspawn containers

Propellor now supports containers using systemd-nspawn. It looks a lot like the docker example.

example :: Host
example = host "example.com"
    & Systemd.persistentJournal
    & Systemd.nspawned webserverContainer

webserverContainer :: Systemd.Container
webserverContainer = Systemd.container "webserver" chroot
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"
  where
    chroot = Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase

Notice how I specified the Debian Unstable chroot that forms the basis of this container. Propellor sets up the container by running debootstrap, boots it up using systemd-nspawn, and then runs inside the container to provision it.

Unlike docker containers, systemd-nspawn containers use systemd as their init, and it all integrates rather beautifully. You can see the container listed in systemctl status, including the services running inside it, use journalctl to examine its logs, etc.

But no, systemd is the devil, and docker is too trendy...

chroots

Propellor now also supports deploying good old chroots. It looks a lot like the other containers. Rather than repeat myself a third time, and because we don't really run webservers inside chroots much, here's a slightly different example.

example :: Host
example = host "mylaptop"
    & Chroot.provisioned (buildDepChroot "git-annex")

buildDepChroot :: Apt.Package -> Chroot.Chroot
buildDepChroot pkg = Chroot.debootstrapped system Debootstrap.buildd dir
    & Apt.buildDep pkg
  where
    dir = "/srv/chroot/builddep/" ++ pkg
    system = System (Debian Unstable) "amd64"

Again this uses debootstrap to build the chroot, and then it runs propellor inside the chroot to provision it (btw without bothering to install propellor there, thanks to the magic of bind mounts and completely linux distribution-independent packaging).

In fact, the systemd-nspawn container code reuses the chroot code, and so turns out to be really rather simple. 132 lines for the chroot support, and 167 lines for the systemd support (which goes somewhat beyond the nspawn containers shown above).

Which leads to the hardest part of all this...

debootstrap

Making a propellor property for debootstrap should be easy. And it was, for Debian systems. However, I have crazy plans that involve running propellor on non-Debian systems, to debootstrap something, and installing debootstrap on an arbitrary linux system is ... too hard.

In the end, I needed 253 lines of code to do it, which is barely one order of magnitude less code than debootstrap itself. I won't go into the ugly details, but this could be made a lot easier if debootstrap catered more to being used outside of Debian.

closing

Docker and systemd-nspawn have different strengths and weaknesses, and there are sure to be more container systems to come. I'm pleased that Propellor can add support for a new container system in a few hundred lines of code, and that it abstracts away all the unimportant differences between these systems.

PS

Seems likely that systemd-nspawn containers can be nested to any depth. So, here's a new kind of fork bomb!

infinitelyNestedContainer :: Systemd.Container
infinitelyNestedContainer = Systemd.container "evil-systemd"
    (Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase)
    & Systemd.nspawned infinitelyNestedContainer

Strongly typed purely functional container deployment can only protect us against a certain subset of all badly thought out systems. ;)

Niels Thykier: Release Team unblock queue flushed

22 November, 2014 - 03:46

At the start of this week, I wrote that we had 58 unblock requests open (of which 25 were tagged moreinfo).  Thanks to an extra effort from the Release Team, we are now down to 25 open unblocks, of which 18 are tagged moreinfo.

We have now resolved 442 unblock requests (out of a total of 467).  The rate of new requests has also declined to an average of ~18 a day (over 26 days), and our closing rate increased to ~17.

With all of this awesomeness, some of us are now more than ready to have a well-deserved weekend to recharge our batteries.  Meanwhile, feel free to keep the RC bug fixes flowing into unstable.


Richard Hartmann: Release Critical Bug report for Week 47

22 November, 2014 - 03:31

There's a BSP this weekend. If you're interested in remote participation, please join #debian-muc on irc.oftc.net.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1213 (Including 210 bugs affecting key packages)
    • Affecting Jessie: 342 (key packages: 152) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 260 (key packages: 119) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 37 bugs are tagged 'patch'. (key packages: 20) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 12 bugs are marked as done, but still affect unstable. (key packages: 3) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 211 bugs are neither tagged patch, nor marked done. (key packages: 96) Help make a first step towards resolution!
      • Affecting Jessie only: 82 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 65 bugs are in packages that are unblocked by the release team. (key packages: 26)
        • 17 bugs are in packages that are not unblocked. (key packages: 7)

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)
49    256 (180+76)   360 (216+155)
50    204 (148+56)   339 (195+144)
51    178 (124+54)   323 (190+133)
52    115 (78+37)    289 (190+99)
1     93 (60+33)     287 (171+116)
2     82 (46+36)     271 (162+109)
3     25 (15+10)     249 (165+84)
4     14 (8+6)       244 (176+68)
5     2 (0+2)        224 (132+92)
6     release!       212 (129+83)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Jonathan Wiltshire: On kFreeBSD and FOSDEM

22 November, 2014 - 02:16

Boy I love rumours. Recently I’ve heard two, which I ought to put to rest now everybody’s calmed down from recent events.

kFreeBSD isn’t an official Jessie architecture because <insert systemd-related scare story>

Not true.

Our sprint at ARM (who kindly hosted and caffeinated us for four days) was timed to coincide with the Cambridge Mini-DebConf 2014. The intention was that this would save on travel costs for those members of the Release Team who wanted to attend the conference, and give us a jolly good excuse to actually meet up. Winners all round.

It also had an interesting side-effect. The room we used was across the hall from the lecture theatre being used as hack space and, later, the conference venue, which meant everybody attending during those two days could see us locked away there (and yes, we were in there all day for two days solid, except for lunch times and coffee missions). More than one conference attendee remarked to me in person that it was interesting for them to see us working (although of course they couldn’t hear what we were discussing), and hadn’t appreciated before that how much time and effort goes into our meetings.

Most of our first morning was taken up with the last pieces of architecture qualification, and that was largely the yes/no decision we had to make about kFreeBSD. And you know what? I don’t recall us talking about systemd in that context at all. Don’t forget kFreeBSD already had a waiver for a reduced scope in Jessie because of the difficulty in porting systemd to it.

It’s sadly impossible to capture the long and detailed discussion we had into a couple of lines of status information in a bits mail. If bits mails were much longer, people would be put off reading them, and we really really want you to take note of what’s in there. The little space we do have needs to be factual and to the point, and not include all the background that led us to a decision.

So no, the lack of an official Jessie release of kFreeBSD has very little, if anything, to do with systemd.

Jessie will be released during (or even before) FOSDEM

Not necessarily true.

Debian releases are made when they’re ready. That sets us apart from lots of other distributions, and is a large factor in our reputation for stability. We may have a target date in mind for a freeze, because that helps both us and the rest of the project plan accordingly. But we do not have a release date in mind, and will not do so until we get much closer to being ready. (Have you squashed an RC bug today?)

I think this rumour originated from the office of the DPL, but it’s certainly become more concrete than I think Lucas intended.

However, it is true that we’ve gone into this freeze with a seriously low bug count, because of lots of other factors. So it may indeed be that we end up in good enough shape to think about releasing near (or even at) FOSDEM. But rest assured, Debian 8 “Jessie” will be released when it’s ready, and even we don’t know when that will be yet.

(Of course, if we do release before then, you could consider throwing us a party. Plenty of the Release Team, FTP masters and CD team will be at FOSDEM, release or none.)


Gunnar Wolf: Status of the Debian OpenPGP keyring — November update

22 November, 2014 - 01:29

Almost two months ago I posted our keyring status graphs, showing the progress of the transition to >=2048-bit keys for the different active Debian keyrings. So, here are the new figures.

First, the Non-uploading keyring: We were already 100% transitioned. You will only notice a numerical increase: That little bump at the right is our dear friend Tássia finally joining as a Debian Developer. Welcome! \o/

As for the Maintainers keyring: We can see a sharp increase in 4096-bit keys. Four 1024-bit DM keys were migrated to 4096R, but we also had eight new DMs coming in. To them, also, welcome \o/.

Sadly, we had to remove a 1024-bit key, as Peter Miller sadly passed away. So, in a 234-key universe, 12 new 4096R keys is a large bump!

Finally, our current-greatest worry — If for nothing else, for the size of the beast: The active Debian Developers keyring. We currently have 983 keys in this keyring, so it takes considerably more effort to change it.

But we have managed to push it noticeably.

This last upload saw a great deal of movement. We received only one new DD (but hey — welcome nonetheless! \o/ ). 13 DD keys were retired; as one of the maintainers of the keyring, of course this makes me sad — but then again, in most cases it's rather an acknowledgement of fact: Those keys' holders often state they had long not been really involved in the project, and the decision to retire was in fact timely. But the greatest bulk of movement was the key replacements: A massive 62 1024D keys were replaced with stronger ones. And, yes, the graph changed quite abruptly:

We still have a bit over one month to go for our cutoff line, where we will retire all 1024D keys. It is important to say we will not retire the affected accounts, mark them as MIA, nor anything like that. If you are a DD and only have a 1024D key, you will still be a DD, but you will be technically unable to do work directly. You can still upload your packages or send announcements to regulated mailing lists via sponsor requests (although you will be unable to vote).

Speaking of votes: We have often said that we believe the bulk of the short keys belong to people not really active in the project anymore. Not all of them, sure, but a big proportion. We just had a big, controversial GR vote with one of the highest voter turnouts in Debian's history. I checked the GR's tally sheet, and the results are interesting: Please excuse my ugly bash, but I'm posting this so you can play with similar runs on different votes and points in time using the public keyring Git repository:

$ git checkout 2014.10.10
$ for KEY in $( for i in $( grep '^V:' tally.txt |
                            awk '{print "<" $3 ">"}' )
                do
                    grep $i keyids | cut -f 1 -d ' '
                done )
  do
      if [ -f debian-keyring-gpg/$KEY -o -f debian-nonupload-gpg/$KEY ]
      then
          gpg --keyring /dev/null --keyring debian-keyring-gpg/$KEY \
              --keyring debian-nonupload-gpg/$KEY --with-colons \
              --list-key $KEY 2>/dev/null \
              | head -2 | tail -1 | cut -f 3 -d :
      fi
  done | sort | uniq -c
     95 1024
     13 2048
      1 3072
    371 4096
      2 8192

So, as of mid-October: 387 out of the 482 votes (80.3%) were cast by developers with >=2048-bit keys, and 95 (19.7%) were cast by short keys.

If we were to run the same vote with the new active keyring, 417 votes would have been cast with >=2048-bit keys (87.2%), and 61 with short keys (12.8%). We would have four fewer votes, as their holders retired:

     61 1024
     14 2048
      2 3072
    399 4096
      2 8192

So, let's hear it for November/December. How much can we push down that pesky yellow line?

Disclaimer: Any inaccuracy due to bugs in my code is completely my fault!

Daniel Pocock: PostBooks 4.7 packages available, xTupleCon 2014 award

21 November, 2014 - 21:12

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages

As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize

While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award which I happily accepted not just for my own efforts but for the Debian Project as a whole.

Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy

This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.

Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks, and all the people who have collectively done far more work than myself to make this possible.

Here is a screenshot of the xTuple web / JSCommunicator integration; it was one of the highlights of xTupleCon:

It gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.

xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

Julien Danjou: Distributed group management and locking in Python with tooz

21 November, 2014 - 19:10

With OpenStack embracing the Tooz library more and more over the past year, I think it's a good time to write a bit about it.

A bit of history

A little more than a year ago, with my colleague Yassine Lamgarchal and others at eNovance, we investigated how to solve a problem often encountered inside OpenStack: synchronization of multiple distributed workers. And while many people in our ecosystem continue to drive development by adding new bells and whistles, we made a point of solving new problems with a generic solution able to address the technical debt at the same time.

Yassine wrote down the first ideas of what should be the group membership service that was needed for OpenStack, identifying several projects that could make use of it. I presented this concept during the OpenStack Summit in Hong Kong in an Oslo session. It turned out that the idea was well received, and the week following the summit we started the tooz project on StackForge.

Goals

Tooz is a Python library that provides a coordination API. Its primary goal is to handle groups and membership of these groups in distributed systems.

Tooz also provides another useful feature which is distributed locking. This allows distributed nodes to acquire and release locks in order to synchronize themselves (for example to access a shared resource).

The architecture

If you are familiar with distributed systems, you might be thinking that there are a lot of solutions already available to solve these issues: ZooKeeper, the Raft consensus algorithm or even Redis for example.

You'll be thrilled to learn that Tooz is not the result of the NIH syndrome, but is an abstraction layer on top of all these solutions. It uses drivers to provide the actual functionality behind the API, and does not try to do anything fancy.

Not all of the drivers offer the same amount of functionality or robustness, but depending on your environment, any available driver might suffice. Like most of OpenStack, we let the deployers/operators/developers choose whichever backend they want to use, informing them of the potential trade-offs they will make.

So far, Tooz provides drivers based on:

All drivers are distributed across processes. Some can be distributed across the network (ZooKeeper, memcached, redis…) and some are only available on the same host (IPC).

Also note that the Tooz API is completely asynchronous, allowing it to be more efficient, and potentially included in an event loop.

Features

Group membership

Tooz provides an API to manage group membership. The basic operations provided are: the creation of a group, the ability to join it, leave it and list its members. It's also possible to be notified as soon as a member joins or leaves a group.

Leader election

Each group can have a leader elected. Each member can decide if it wants to run for the election. If the leader disappears, another one is elected from the list of current candidates. It's possible to be notified of the election result and to retrieve the leader of a group at any moment.

Distributed locking

When trying to synchronize several workers in a distributed environment, you may need a way to lock access to some resources. That's what a distributed lock can help you with.
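
To make the API concrete, here is a minimal sketch of group membership plus a distributed lock using tooz; the memcached backend URL, member ID, group and lock names are illustrative, and the periodic heartbeat and error handling a real worker needs are reduced to the bare minimum.

from tooz import coordination

# One coordinator object per worker; the URL picks the driver (memcached here).
coordinator = coordination.get_coordinator('memcached://127.0.0.1:11211',
                                           b'worker-1')
coordinator.start()

group = b'alarm-evaluators'
try:
    coordinator.create_group(group).get()   # the API is asynchronous: .get() waits
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(group).get()

# In a real worker, call coordinator.heartbeat() periodically so the
# membership does not expire.
print(coordinator.get_members(group).get())

# Distributed lock protecting a shared resource.
lock = coordinator.get_lock(b'shared-resource')
with lock:
    pass  # only one member at a time runs this block

coordinator.leave_group(group).get()
coordinator.stop()

With the ZooKeeper driver the same code coordinates workers across the network; with the IPC driver it only coordinates processes on one host, as noted above.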

Adoption in OpenStack

Ceilometer is the first project in OpenStack to use Tooz. It has replaced part of the old alarm distribution system, where RPC was used to detect active alarm evaluator workers. The group membership feature of Tooz was leveraged by Ceilometer to coordinate between alarm evaluator workers.

Another new feature part of the Juno release of Ceilometer is the distribution of polling tasks of the central agent among multiple workers. There's again a group membership issue to know which nodes are online and available to receive polling tasks, so Tooz is also being used here.

The Oslo team has accepted the adoption of Tooz during this release cycle. That means that it will be maintained by more developers, and will be part of the OpenStack release process.

This opens the door to pushing Tooz further into OpenStack. Our next candidate would be to write a service group driver for Nova.

The complete documentation for Tooz is available online and has examples for the various features described here, go read it if you're curious and adventurous!

Jonathan Wiltshire: Getting things into Jessie (#6)

21 November, 2014 - 17:30
If it’s not in an unblock bug, we probably aren’t reading it

We really, really prefer unblock bugs to anything else right now (at least, for things relating to Jessie). Mails on the list get lost, and IRC is dubious. This includes for pre-approvals.

It’s perfectly fine to open an unblock bug and then find it’s not needed, or the question is really about something else. We’d rather that than your mail get lost between the floorboards. Bugs are easy to track, have metadata so we can keep the status up to date in a standard way, and are publicised in all the right places. They make a great to-do list.

By all means twiddle with the subject line, for example appending “(pre-approval)” so it’s clearer – though watch out for twiddling too much, or you’ll confuse udd.

(to continue my theme: asking you to file a bug instead costs you one round-trip; don’t forget we’re doing it at scale)

 



Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.