Planet Debian

Planet Debian - http://planet.debian.org/

Bastian Venthur: Synchronizing Google Mail Contacts with Thunderbird

30 January, 2013 - 19:37

Dear Lazyweb,

can anyone recommend a good Thunderbird extension which allows for synchronizing the address book with Google Mail? So far I've tried Google Contacts, but something went wrong with the syncing and some contacts were deleted on both sides. To avoid this problem, one can use Google Contacts in read-only mode (it will only fetch contacts from Google, never write to it), but then you have to import new Thunderbird contacts into Google Mail manually.

Google introduced CardDAV in December 2012, which allows for syncing of contacts, but since Thunderbird's development is apparently on hold, this is probably not gonna be supported out of the box. There are some other extensions for Thunderbird, but since synchronization is hard, and a lot more complicated than "replace the older version with the newer one", I'm looking for something mature and well tested.

Before someone suggests it: I know Evolution has this feature built in. I gave it a try last week and found so many grave bugs with the calendar and newsgroups that Evolution is simply unfit for my needs. I really like Thunderbird and want to stick with it for a few more years until I have to look for something else.

Yours truly,

Basti

Raphael Geissert: A bashism a week: sleep

30 January, 2013 - 16:30
To delay execution of some commands in a shell script, the sleep command comes in handy.
Even though many shells do not provide it as a built-in and the external GNU sleep command is what usually gets run, there are a couple of things to note:

  • Suffixes may not be supported. E.g. 1d (1 day), 2m (2 minutes), 3s (3 seconds), 4h (4 hours).
  • Fractions of units (seconds, by default) may not be supported. E.g. sleeping for 1.5 seconds may not work under all implementations.

This, of course, is relative to what POSIX:2001 requires: it only requires the sleep command to accept an unsigned integer (a number of seconds). FreeBSD's sleep command does accept fractions of a second, for example.

Remember: if you rely on any non-standard behaviour or feature, make sure you document it and, if feasible, check for it at run-time.

In this case, since the sleep command is not required to be a built-in, it does not matter what shell you specify in your script's shebang. Moreover, calling /bin/sleep doesn't guarantee you anything. The exception is if you specify a shell that has its own sleep built-in; then you could probably rely on it.

The easiest replacement for suffixes is calculating the desired amount of time in seconds (e.g. sleep 120 rather than sleep 2m). As for fractions of seconds, you may want to reconsider your use of a shell script.
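
If a sub-second delay is genuinely needed, one option is to do just that step in a language that guarantees it. As a minimal sketch (an illustration, not something the original post prescribes), Perl's core Time::HiRes module provides a fraction-capable sleep:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(sleep);   # replaces sleep() with one accepting fractions

    sleep 1.5;                   # sleeps for 1.5 seconds, portably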

Russ Allbery: Quick Module::Build note

30 January, 2013 - 15:57

I'm struggling a little to keep to a schedule since the weekend: I neither followed my normal evening exercise schedule nor wrote this up earlier, when I could have added more details. Ah well. But I'm still going to post something today, since I like this streak of (mostly) providing some useful, or at least hopefully entertaining, content.

Today, I converted the Perl module build system inside WebAuth to use Module::Build instead of ExtUtils::MakeMaker, and it was a rousing success. I highly recommend doing this, and I'm going to do the same with all of my other Perl packages, both those embedded in larger packages and the standalone ones.

I have several packages where Perl XS modules are embedded within a larger Autoconf and Automake project that includes a shared library used by the Perl module. The largest problem with doing this is integrating the build systems in such a way that the Perl module is built against the in-tree version of the shared library, rather than some version of it that may already be on the system.

I've managed to do this with ExtUtils::MakeMaker, but it was horribly ugly, involving overriding internal target rules and setting a bunch of other variables. With Module::Build and ExtUtils::CBuilder, it's much easier and even sensible. Both support standard ways of overriding Config settings, which provides just the lever required. So this:

    our $PATH = '@abs_top_builddir@/lib/.libs';
    my $lddlflags = $Config{lddlflags};
    my $additions = "-L$PATH @LDFLAGS@";
    $lddlflags =~ s%(^| )-L% $additions -L%;
    package MY;
    sub const_loadlibs {
        my $loadlibs = shift->SUPER::const_loadlibs (@_);
        $loadlibs =~ s%^(LD_RUN_PATH =.*[\s:])$main::PATH(:|\n)%$1$2%m;
        return $loadlibs;
    }
    package main;

(comments stripped to just show the code) became this:

    my @LDFLAGS = qw(-L@abs_top_builddir@/lib/.libs @LDFLAGS@);
    my $LDDLFLAGS = join q{ }, @LDFLAGS, $Config{lddlflags};

plus adding config => { lddlflags => $LDDLFLAGS } to the standard Module::Build configuration. So much better!
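
For context, here is a minimal sketch of how such an override plugs into a Build.PL; the module name, version, and library path are placeholders, not the actual WebAuth build file:

    use strict;
    use warnings;
    use Config;
    use Module::Build;

    # Placeholder standing in for the Autoconf-substituted in-tree path.
    my @LDFLAGS   = qw(-L/path/to/tree/lib/.libs);
    my $LDDLFLAGS = join q{ }, @LDFLAGS, $Config{lddlflags};

    my $build = Module::Build->new(
        module_name  => 'Example::Module',   # placeholder module name
        dist_version => '0.01',              # placeholder version
        license      => 'perl',
        # Prepend our -L flags so the in-tree library wins over any
        # installed copy when linking the XS module.
        config       => { lddlflags => $LDDLFLAGS },
    );
    $build->create_build_script;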

At some point, I'll write up a set of guidelines on how to embed Perl XS modules into Automake projects.

Gunnar Wolf: The phone is dead. How to stay reasonably unbound?

30 January, 2013 - 12:31

So, today an endurance test can be declared finished.

In early 2008, for the first time, I paid for a cellular phone (my previous ones were all 100% subsidized by the operator in a fixed plan). I got a Nokia N95. And although at the beginning I was quite thrilled with my smartphone (when such things were still a novelty), it didn't take me long to start dumbing it down to what is really useful to me: a phone with a GPS. And the GPS only because it is the only toy I want in such a form factor.

Anyway, despite the operator repeatedly offering me newer and more capable models, I kept this one, and as soon as I was free of the forced 18-month rental period, I switched to a non-data-enabled, pay-as-you-go plan. I don't want to be connected to the intertubes when I'm away from a usable computer!

But yes, five years are more than a modern phone is meant to endure. Over time, I first started getting SIM card errors whenever the phone was dropped or slightly twisted; as an infrequent phone user, I didn't care much. Charging also became an exercise in patience, as getting the Nokia micro-connector to make contact has grown less and less reliable. Over a year ago, the volume control (two sensors on the side) died after a drop (and some time later I found the switches had broken loose from the mainboard); a nuisance, yes, but nothing too bad. I don't know how, but some time ago the volume went down while I was using the radio, and as I can't raise it again, my phone became radio-disabled. And today, the screen died (it gets power, but stays black). I can blind-operate the phone, but of course it is really not meant for that.

So, I expect to go get a new phone this Saturday. Between now and then, I'll be cellphone-deprived (in case you wonder why I'm not answering your messages or whatever). I would love to get a phone with a real keyboard (as I prefer not to look at the screen when writing messages, just to check that everything came out right and fix what's needed). I understand Android phones are more likely to keep me happy as a free software geek, and I'd be delighted to use Cyanogen if it is usable and stable — but my phone is *not* my smart computer and it should not attempt to be, so it's not such a big deal. I will look for something with FM radio capability, and GPS.

Of course, I want something cheap. It would be great to get it at no cost, but I don't expect I'll find such a bargain. Oh, and I want something I can find at the first Telcel office I come to; am I asking for too much? :)

Anyway - I'll enjoy some days of being really disconnected from any wireless bugs (that I am aware of).

Paul Tagliamonte: dput-ng/1.4 in unstable

30 January, 2013 - 11:31

Changes:

dput-ng (1.4) unstable; urgency=low

   [ Arno Töll ]
   * Really fix #696659 by making sure the command line tool uses the most recent
     version of the library.
   * Mark several fields to be required in profiles (incoming, method)
   * Fix broken tests.
   * Do not run the check-debs hook in our mentors.d.n profile
   * Fix "[dcut] dm bombed out" by using the profile key only when defined
     (Closes: #698232)
   * Parse the gecos field to obtain the user name / email address from the local
     system when DEBFULLNAME and DEBEMAIL are not set.
   * Fix "dcut reschedule sends "None-day" to ftp-master if the delay is
     not specified" by forcing the corresponding parameter (Closes: #698719)
 .
   [ Luca Falavigna ]
   * Implement default_keyid option. This is particularly useful with multiple
     GPG keys, so dcut is aware of which one to use.
   * Make scp uploader aware of "port" configuration option.
 .
   [ Paul Tagliamonte ]
   * Hack around Launchpad's SFTP implementation. We musn't stat *anything*.
     "Be vewy vewy quiet, I'm hunting wabbits" (Closes: #696558).
   * Rewrote the test suite to actually test the majority of the codepaths we
     take during an upload. Back up to 60%.
   * Added a README for the twitter hook, Thanks to Sandro Tosi for the bug,
     and Gergely Nagy for poking me about it. (Closes: #697768).
   * Added a doc for helping folks install hooks into dput-ng (Closes: #697862).
   * Properly remove DEFAULT from loadable config blocks. (Closes: #698157).
   * Allow upload of more then one file. Thanks to Iain Lane for the
     suggestion. (Closes: #698855).
 .
   [ Bernhard R. Link ]
   * allow empty incoming dir to upload directly to the home directory
 .
   [ Sandro Tosi ]
   * Install example hooks (Closes: #697767).

Thanks to all the contributors!

For anyone who doesn’t know, you should check out the docs.

Uwe Hermann: libsigrokdecode 0.1.1 released, more protocol decoders supported

30 January, 2013 - 09:18

Just a quick announcement: we released libsigrokdecode 0.1.1 today, a new version of one of the shared libraries that make up the open-source sigrok project (for signal acquisition/analysis with various test & measurement gear, like logic analyzers, scopes, multimeters, etc.). I will update the Debian package soonish.

As you probably know, in addition to the infrastructure for protocol decoding, this library also ships with a bunch of protocol decoders written in Python. Currently we support 29 different ones (in various states of "completeness"; improvements are ongoing).

This release adds support for a number of new protocol decoders.

Please check the announcement on the sigrok blog and/or the NEWS file for the full list of changes and improvements.

Happy hacking and decoding!

Rogério Brito: Looking for a (Free) Video Editor

30 January, 2013 - 08:16

Let's suppose that you went to a show of your favourite band some time ago and you were able to sneak in a camera (well, in those times, cell phones weren't able to record much more than 176x144 pixels at 12fps).

But then you suddenly found that some people uploaded (short) fragments of that very same show to YouTube and, by collecting those, you may be able to create a "multi-camera" version of the video to keep as a record of your memorable concert.

Multi-camera, in the sense above, is not the same as multiple angles (like on some DVDs), but something like a TV broadcast, where the stage is filmed by cameras positioned in various places and the image that is broadcast is switched from time to time according to who is singing or playing, etc.

So, Dear Lazyweb, do we happen to have any Free Software (preferably already packaged in Debian) that is able to help with the task of "aligning" (in time) videos from various (different) sources so as to produce one multi-camera video?

What would my options be? I asked this on YouTube of a person who did exactly what I want, and their answer was Sony Vegas, which I fear would not exactly be allowed in Debian.

Any comments are welcome, thanks, and if I am successful, I will upload the final video.

Jonathan Carter: Ubuntu Developer Summit for 13.04 (Raring)

30 January, 2013 - 06:57
The War on Time

Whoosh! I’ve been incredibly quiet on my blog for the last 2-3 months. It’s been a crazy time but I’ll catch up and explain everything over the next few entries.

Firstly, I’d like to get out a few details about the last Ubuntu Developer Summit that took place in Copenhagen, Denmark in October. I’m usually really good at getting my blog post out by the end of UDS or a day or two after, but this time it just flew by so incredibly fast for me that I couldn’t keep up. It was a bit shorter than usual at 4 days, as apposed to the usual 5. The reason I heard for that was that people commented in previous post-UDS surveys that 5 days were too long, which is especially understandable for Canonical staff who are often in sprints (away from home) for the week before the UDS as well. I think the shorter period works well, it might need a bit more fine-tuning, I think the summary session at the end wasn’t that useful because, like me, there wasn’t enough time for people to process the vast amount of data generated during UDS and give nice summaries on it. Overall, it was a great get-together of people who care about Ubuntu and also many areas of interest outside of Ubuntu.

Copenhagen, Denmark

I didn’t take many photos this UDS, my camera is broken and only takes blurry pics (not my fault I swear!). So I just ended up taking a few pictures with my phone. Go tag yourself on Google+ if you were there. One of the first interesting things I saw when arriving in Copenhagen was the hotel we stayed in. The origami-like design reminded me of the design of the Quantal Quetzel logo that is used for the current stable Ubuntu release.

The Road ahead for Edubuntu to 14.04 and beyond

Stéphane previously posted about the vision we share for Edubuntu 14.04 and beyond; this was mostly what we discussed during UDS, along with how we'll approach those goals for the 13.04 release.

This release will mostly focus on the Edubuntu Server aspect. If everything works out, you will be able to use the standard Edubuntu DVD to also install an Edubuntu Server system that will act as a Linux container host as well as an Active Directory-compatible directory server using Samba 4. The catch with Samba 4 is that it doesn't have many administration tools for Linux yet. Stéphane has started work on a web interface for Edubuntu Server that already looks quite nice. I'm supposed to do some CSS work on it, but I have to say it looks really good as it is; it's based on the MAAS service theme, with some colour changes and fixes that Stéphane has made.

From the Edubuntu installer, you'll be able to choose whether the machine should act as a domain server, or whether you would like to join an existing domain. Since Edubuntu Server is highly compatible with Microsoft Active Directory, the installer will connect to it regardless of whether it's a Windows domain or an Edubuntu domain. This should make things really easy for administrators in schools with mixed environments and where complete infrastructure migrations are planned.

You will be able to connect to the same domain whether you're using Edubuntu on thin clients, desktops or tablets, and everything is controllable using the Epoptes administration tool.

Many people are asking whether this is planned for Ubuntu / Ubuntu Server as well, since it could be incredibly useful in other organisations that have a domain infrastructure. It's currently meant to be easily rebrandable, and the aim is to have it available as a general solution for Ubuntu once all the pieces work together.

Empowering Ubuntu Flavours

This cycle, Ubuntu is making some changes to the release schedule. One of the biggest is that the alpha and beta releases are being dropped for the main Ubuntu product. This session was about establishing how much divergence the Ubuntu flavours (Ubuntu Studio, Mythbuntu, Kubuntu, Lubuntu and Edubuntu) could have from the main release cycle. Edubuntu and Kubuntu decided to be a bit more conservative and maintain the snapshot releases. For Edubuntu this has certainly helped so far in identifying some early bugs, and I'm already glad that we did that. Mythbuntu is also a notable exception, since it will now only do LTS releases. We're tempted to change Edubuntu's official policy so that the LTS releases are the main releases and the releases in between are treated more like technology previews for the next LTS. That's already not such a far stretch from the truth, but we'll need to properly review and communicate it at some point.

Valve at UDS and Steam for Linux

One of the first plenaries was from Valve, where Drew Bliss talked about Steam on Linux. Steam is one of the most popular publishing and distribution systems for games, and until recently it was only available on Windows and Mac. Valve (the company behind Steam and many popular games such as Half-Life and Portal) is actively working on porting games to run natively on Linux as well.

Some people have asked me what I think about it, since the system is essentially using a free software platform to promote a lot of non-free software. My view on this is pretty simple: I think it's an overwhelmingly good thing for Linux desktop adoption, and it's been proven to be a good thing even for people who don't play games. Since the announcement from Valve, Nvidia has already doubled performance in many cases for its Linux drivers. AMD, who had been slacking on Linux support for the last few years, have beefed up their support drastically with new drivers released earlier this month. This new collection of AMD drivers also adds support for a range of cards whose drivers had been completely discontinued, giving new life to many older laptops and machines which would otherwise be destined for the dumpster. This benefits not only gamers, but everyone from an average office worker who wants snappy office suite performance and fast web browsing to designers who work with graphics, videos and computer-aided design.

Also, it means that many home users who prefer Linux-based systems would no longer need to dual-boot to Windows or OS X for their games. While Steam will actively promote non-free software, it more than makes up for that by what it enables for the free software ecosystem. I think anyone who disagrees with that is somewhat of a purist and should be more willing to make compromises in order to make progress.

Ubuntu Release Changes

Last week, there was a lot of media noise stating that Ubuntu will no longer do releases and will become a rolling release except for the LTS releases. This is certainly not the case, at least not any time soon. One theme that I've noticed increasingly over the last few UDSs is a growing desire to improve the LTS releases and to use the usual Ubuntu releases more and more for experimentation purposes.

I think there’s more and more consensus that the current 6 month cycle isn’t really optimal and that there must be a better way to get Ubuntu to the masses, it’s just the details of what the better way is that leaves a lot to be figured out. There’s a desire between developers to provide better support (better SRUs and backports) for the LTS releases to make it easier for people to stick with it and still have access to new features and hardware support. Having less versions between LTS releases will certainly make that easier. In my opinion it will probably take at least another 2 cycles worth of looking at all the factors from different angles and getting feedback from all the stakeholders before a good plan will have formed for the future of Ubuntu releases. I’m glad to see that there is so much enthusiastic discussion around this and I’m eager to see how Ubuntu’s releases will continue to evolve.

Lightning Talks

Lightning talks are a lot like punk-rock songs. When they're good, they're really, really amazingly good and fun. When they're bad, at least they'll be over soon :)

Unfortunately, since it's been a few months since the UDS, I can't remember all the details of the lightning talks, but one thing I find worth mentioning is that they're not just good for the topic they cover (for example, the one lightning talk session I attended was on the topic of "Tests in your software"): since they are more demo-like than presentation-like, you also get to learn a lot of neat tricks and cool things that you didn't know before. Every few minutes someone would do something and I'd hear someone say something like "Awesome! I didn't know you could do that with apt-daemon!". It's fun and educational, and I hope lightning talks will continue to be a tradition at future UDSs.

Social

Stefano Rivera (fellow MOTU, Debianista, Capetonian, Clugger) wins the prize for the person I've seen in the most countries in one year. In 2012, I saw him in Cape Town for ScaleConf, in Managua during DebConf, in Oakland for a previous UDS and in Copenhagen for this UDS. Sometimes when I look at silly little statistics like that, I realise what a great adventure the year was!

Between the meet 'n' greet, an evening of lightning talks and the closing party (which was Viking-themed and pretty awesome), there was just one free evening left. I used it to gather with the Debian folk who were at UDS. It was great to see how many Debian people were attending; I think we had around a dozen people at the dinner, and there were even more who couldn't make it because they work for Canonical or Linaro and had to attend team dinners the same evening. It was, as usual, great to put some more faces to names and get to know some people better.

It was also great to have a UDS with many strong technical community folk present who were willing to engage in discussion. A few people's absences were still felt, but fewer than at some previous UDSs.

I also discovered my face on a few puzzles! They were a *great* idea; I saw a few people come and go to work on them during the week, and they seem to have acted as good menial activities for people to rest their brains when they got fried during sessions :)

Overall, this was a good and punchy UDS. I'll probably not make the next one in Oakland due to many changes currently taking place in my life (although I will participate remotely), but I will probably make the one later this year, especially if it's in Europe. I'll also make a point of live-blogging a bit more; it's just so hard remembering all the details a few months after the fact. Thanks to everyone who contributed their piece in making it a great week!

Julien Danjou: Going to FOSDEM 2013

30 January, 2013 - 06:01

For the first time, I'll be at FOSDEM 2013 in Brussels, on Sunday, 3rd February 2013.

You'll probably find me hanging out in the cloud devroom, where I'll talk about Ceilometer with my fellow developers Nick Barcet and Eoghan Glynn.

I also hope I'll find time to take a peek at some other talks, like the PostgreSQL ones that Dimitri Fontaine will give.

See you there!

Hideki Yamane: upstream and distro have the "same" goal

30 January, 2013 - 06:00

Tom Marble has a nice blog entry, "Crowdsourcing Upstream Refactoring". It's interesting and I agree with most of it, but I want to push back on one thing in the "conclusion" of his presentation.

He said "upstreams and distros have different goals", but I don't think so. Distros and upstreams have the same goal, "deliver value to users"; distros just have more criteria than upstreams, such as licensing, avoiding duplicated libraries, etc.

If we (distro and upstream) cannot share this view, then we will fail, IMHO.

Jon Dowland: Puppet and persistent network interface names

30 January, 2013 - 03:45

On Linux, network interface device naming has been somewhat chaotic: depending on a number of factors, eth0 today might not be eth0 tomorrow. For me, this has never been a problem in practice. At work, our physical servers have a bank of on-mainboard network ports, all managed by the same driver and so are assigned names predictably. For our virtual machines, the same is true: 99% of the time they have one network interface, but when they have more than one, they are of the same type and so are assigned predictably. At home, even on desktops and laptops with multiple network devices being added and removed at a time, I've never actually hit a problem where one was mis-identified.

However, it must have been a problem for some people because it has been solved three times now. Sadly, these incompatible solutions cause headaches in all of my use cases.

At work we have a puppet snippet that relies on knowing the interface name for a host's "primary" interface. In our case, "primary" means something like "the one the default route is over". We used to hardcode eth0 into this recipe. I don't feel too bad about that, as the puppet folks do the exact same thing in many of their scripts.

This breaks for newer physical hosts where one of the three schemes is in play and the interfaces are named/numbered em1 and counting. We worked around it by parameterizing the interface name in our recipe and defining it in our nodes.pp for the troublesome hosts, but that just moves the problem around.

My more permanent solution is to define a facter fact primary_interface that tells puppet what the correct interface should be. Here's my first attempt:

Facter.add("primary_interface") do
  setcode do
    # All interfaces known to the kernel, minus the virtual ones
    # (lo, bridges, etc.). Both listings include "." and "..", so
    # the subtraction disposes of those entries too.
    ifs = Dir.new("/sys/class/net").each.to_a -
          Dir.new("/sys/devices/virtual/net").each.to_a

    # Filter out interfaces with no link: carrier reads "1\n" when a
    # link is present, and raises EINVAL while the interface is down.
    ifs.reject! do |i|
      begin
        File.read("/sys/class/net/#{i}/carrier") != "1\n"
      rescue Errno::EINVAL
        true
      end
    end

    ifs[0]
  end
end

Of course, there is plenty of room for improvement in this solution: if ifs has more than one element at the end, a lexicographic sort may prove more accurate, and if there is an existing default network route up, that might be a strong clue too. It's also very Linux-specific. My question, though, is whether this is a sensible approach in the first place.

Daniel Kahn Gillmor: visualizing MIME structure

30 January, 2013 - 00:16
Better debugging tools can help us understand what's going on with MIME messages. A python scrap i wrote a couple of years ago named printmimestructure has been very useful to me, so i thought i'd share it.

It reads a message from stdin, and prints a visualisation of its structure, like this:

0 dkg@alice:~$ printmimestructure < 'Maildir/cur/1269025522.M338697P12023.monkey,S=6459,W=6963:2,Sa' 
└┬╴multipart/signed 6546 bytes
 ├─╴text/plain inline 895 bytes
 └─╴application/pgp-signature inline [signature.asc] 836 bytes
0 dkg@alice:~$ 
You can fetch it with git if you like:
git clone git://lair.fifthhorseman.net/~dkg/printmimestructure
It feels silly to treat this ~30 line script as its own project, but i don't know of another simple tool that does this. If you know of one, or of something similar, i'd love to hear about it in the comments (or by sending me e-mail if you prefer).

If it's useful for others, I'd be happy to contribute printmimestructure to a project of like-minded tools. Does such a project exist? Or if people think it would be handy to have in debian, i can also package it up, though that feels like it might be overkill.

and oh yeah, as always: bug reports, suggestions, complaints, and patches are welcome :)

Tags: debugging, mime, python

Tollef Fog Heen: Abusing sbuild for fun and profit

29 January, 2013 - 22:32

Over the last couple of weeks, I have been working on getting binary packages for Varnish modules built. In the current version, you need a built, unpacked source tree to build a module against. This is being fixed in the next version, but until then, I needed to provide this in the build environment somehow.

RPMs were surprisingly easy, since our RPM build setup is much simpler and doesn't use mock/mach or other chroot-based tools. Just make a source RPM available and unpack + compile that.

Debian packages, on the other hand, were not easy to get going. My first problem was just getting the Varnish source package into the chroot. I ended up making a directory in /var/lib/sbuild/build, which is exposed as /build once sbuild runs. The other hard part was getting Varnish itself built. sbuild exposes two hooks that could work: a pre-build hook and a chroot-setup hook. Neither worked: pre-build is called before the chroot is set up, so we can't build Varnish there, and chroot-setup is run before the build-dependencies are installed and runs as the user invoking sbuild, so it can't install packages.

Sparc32 and similar architectures use the linux32 tool to set the personality before building packages. I ended up abusing this mechanism: I set HOME to a temporary directory containing a .sbuildrc which sets $build_env_cmnd to a script which in turn unpacks the Varnish source, builds it and then chains to dpkg-buildpackage. Of course, the build-dependencies for modules don't include all the build-dependencies for Varnish itself, so I have to extract those from the Varnish source package too.
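
To make the shape of that concrete, here is a minimal sketch of such a .sbuildrc; the wrapper path is a hypothetical placeholder, not the author's actual script:

    # .sbuildrc dropped into the temporary $HOME (sbuild configuration is Perl).
    # Replace the normal build command with a wrapper that unpacks and builds
    # the Varnish source tree, then chains to dpkg-buildpackage.
    $build_env_cmnd = '/build/varnish-build-wrapper';   # hypothetical path
    1;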

No source available at this point, mostly because it's beyond ugly. I'll see if I can get it cleaned up.

Michal Čihař: phpMyAdmin 3.5.6 for Ubuntu and Debian

29 January, 2013 - 19:00

Finally, the phpMyAdmin packages for Debian and Ubuntu do not lag much behind upstream. Today, I've prepared packages for the bug-fix release 3.5.6, which came out yesterday.

For Debian users, the package should soon be available in experimental (sorry, no uploads to unstable during the freeze).

Ubuntu users can use my phpMyAdmin PPA. After dozens of comments and no help offered, I've still decided to be nice to Ubuntu users and adjusted the package so that it should work on Lucid as well. The downside is that, unlike in Debian, the package includes a bundled copy of many PHP and JavaScript libraries.

PS: As soon as Debian is no longer frozen or 4.0 is officially released, I will start uploading 4.0 to experimental. My current bet is that the 4.0 release will come first.

Filed under: Debian English Phpmyadmin

Russ Allbery: Much better

29 January, 2013 - 13:30

Indeed, two days was enough time to recover from an afternoon of socializing, despite having an hour and a half of meetings today. But today was a day of catching up: sorting issues in JIRA, lots of planning meetings, and some debugging, so I don't have anything new code-wise to give to the world. So have a photograph.

The product backlog for the Account Services project is now all sorted into the new phases that will define the work through pretty much the end of the calendar year, I'm guessing. And everything is now a story, not a bug or enhancement or something else without story points, and a few things have been shuffled into a more reasonable order.

I'm probably going to find a few hours this week to go play video games or work on personal projects, given that I've been working well over 40 hours a week since the start of the year.

Tollef Fog Heen: FOSDEM talk: systemd in Debian

28 January, 2013 - 23:20

Michael Biebl and I are giving a talk on systemd in Debian at FOSDEM on Sunday morning at 10:00. We'll be talking a bit about the current state in Wheezy, what our plans for Jessie are, and what Debian packagers should be aware of. We would love to get input from people about what systemd in Jessie should look like, so if you have any ideas, opinions or insights, please come along. If you're just curious, you are of course also welcome to join.

Michal Čihař: Winter in Lusatian Mountains

28 January, 2013 - 19:00

We spent last weekend in the Lusatian Mountains. It was cold and the weather was not so nice, but I still think some of the pictures are worth publishing.

Filed under: English Photography

Russ Allbery: Management and the problem stream

28 January, 2013 - 14:36

Well, today was another day of sleeping, zoning out, and having no brain for programming (and no willpower for that or exercise, which is even more frustrating), so you all get more musing about missions and work. Tomorrow, I will re-engage my normal schedule and turn the television off again, because seriously, two days of recovery should be enough even for a whole afternoon of meetings.

Perhaps the best idea in the scrum methodology for agile project management is the concept of the product backlog and the product owner. For those who aren't familiar, a scrum project has, as one of its inputs, a prioritized list of stories that the team will implement. During each planning session (generally between once a week and once a month), the development team estimates and takes from the front of the product backlog the top-priority stories that can be completed during that period of time. The product owner is responsible for building the list of pending development stories based on business needs and keeping them sorted in priority order, so that the work of the team follows the goals of the larger organization. But the product owner does not get to say how difficult the stories are; the development team estimates the stories, and the product owner may change the order based on those estimates (to do simpler things earlier, for example).

We've been doing scrum to some extent or another for quite some time now, and while there are a variety of things that I like about it, I think this is the best part. And I think it has very useful concepts for management in general.

The product backlog is effectively a problem stream of the sort that I talked about in my last couple of posts. It's a set of problems that need to be solved, that matter to other people. And, if the team is doing scrum properly, the problems are presented in a form that describes a complete unit of desired functionality with its requirements and constraints, but without assuming any sort of implementation. The problem is the domain of the product owner; the solution is the domain of the development team.

I am increasingly convinced that my quality of life at work, and that of many other people doing similar sorts of development and project work, would be drastically improved if managers considered this role the core of their jobs. (Ironically, at least in our initial experiences with scrum, it was quite rare for regular managers to be product owners; instead, individual line staff or project managers act as product owners. I think this speaks to how confused a lot of managers are about their roles in a technical development organization. This seems to be improving.) The core of the sort of work that I do is a combination of ongoing maintenance and solving new problems. Both of those can be viewed as a sequence of (possibly recurring) stories; I know this because that's largely how I do my own time management. Apart from HR and staffing, the core role of a manager is almost exactly what scrum calls "backlog grooming": communicating with everyone who has input into what their group is doing, finding out what their problems are, translating that into a list of things that the organization needs to have done, prioritizing them, breaking them into manageable chunks, ensuring they are fully specified and actionable (read: well-written stories), and communicating their completion back to other stakeholders (and then possibly iterating).

This leapt out at me when I started thinking about our larger strategic vision. That strategic vision is a sort of product backlog: a set of development stories (or, in this case, epics that would be broken down into a large number of smaller stories). But most strategic plans have glaring flaws if they're evaluated by the standards of scrum product backlogs. They're normally written as marketing statements and aimed at the clients or customers rather than at the staff. From the staff perspective, they're often hopelessly vague, not concrete, actionable epics. Often, they are almost entirely composed of grand, conceptual epics that any scrum team would immediately reject as too large and nebulous to even discuss. And often they're undefined but still scheduled: statements that the organization will definitely complete some underspecified thing within the next year or two.

Scrum rightfully rejects any form of scheduling for unspecified stories and epics. Additional vagueness and sketchiness is tolerated the farther the story is from the current iteration, so they can be put into the future backlog, but scrum makes no guarantees about when they'll get done until they've been sized and you can apply a reasonable velocity estimate. If you want to put a date on something, you have to make it concrete. This is exactly the problem that I have with long-range strategic plans. We already know this about technology development: long-range strategic plans are feel-good guesses, half-truths, and lies that we tell other people because we're expected to do so, but which no one really believes. The chance that we can predict anything about the shape of projects and rate of delivery of those projects three years from now is laughable, and certainly none of the work that would be required to make real estimates has normally been done for those sorts of long-term strategic projects.

There are a lot of scrum advocates who would go farther than I in asking for a complete revolution of how technical projects are planned. I'm not sure how realistic that is, or how well the whole scrum process would work when rolling up work across dozens, if not hundreds, of scrum-sized teams. But in the specific area of the role of managers in a development organization, I think we could learn a lot from this product backlog concept. Right now, there is a constant tension between managers who feel like they need to provide some sort of visionary leadership and guidance (the cult of Steve Jobs) and line staff who feel like managers don't understand enough about specific technology to lead or guide anything. But if you ask a technical developer whether, instead, it would be useful for managers to provide a well-structured, priority-ordered, and clean problem stream from the larger organization so that the developer can trust the problem stream (and not have to go talk to everyone themselves) but have control over the choice of implementation within the described constraints, I think you would find a great deal of enthusiasm.

As a further bonus, scrum has a lot of advice about what that problem stream should look like. I think some of the story structuring is overblown (for example, I can't stand the story phrasing structures, the "AS A... I WANT TO... SO THAT..." shackles), but one thing it does correctly emphasize is the difference between problem and solution. The purpose of the story is to describe the outcome from the perspective of the person who will use the resulting technology. Describing the constraints on the solution, in terms of cost or integrations, is actively encouraged. Describing the implementation is verboten; it's up to the project team to choose the implementation that satisfies the constraints. If managers in general would pick up even that simple distinction, I think it would greatly improve the engagement, excitement, and joy that developers could take in their work.

There's very little that's more frustrating than being given an interesting problem in the guise of a half-designed solution and being told that one is not permitted to apply any of one's expertise in elegance, robustness, or durability to replacing the half-baked partial solution. Or than not even being allowed to know the actual motivating problem, rather than the half-digested form that the manager is willing to pass on.

John Sullivan: FOSDEM

28 January, 2013 - 14:26

I'll be at FOSDEM again this year, arriving in Brussels on Thursday 31st and leaving on Tuesday 5th.

I'll be speaking on Sunday in the legal issues devroom at 10:00.

If you will be there and want to meet up, let me know.

I may be trying to watch the Super Bowl from there, a plan that didn't quite work out last year but seems more likely this year.

State of the GNUnion

FSF licensing policy challenges in 2013

This talk will cover the main challenges facing the Free Software Foundation's Licensing and Compliance lab in 2013, and will invite discussion of the FSF's work and policies in this area. We'll explore:

  • Copyright assignment: Some high-profile GNU maintainers have recently criticized the FSF's copyright assignment policy and system. What are these criticisms, what does the FSF intend to do about them, and what's the point of its assignment process to begin with?
  • GPL adoption: Last year here, in "Is copyleft being framed?", I put numbers supposedly showing declining GPL adoption in perspective, showing problems with the data, questioning the conclusions drawn from the data, and presenting different data leading to the opposite conclusions. We'll look at the questions that were raised since then about my data, and at some new data that's been made available, and draw new conclusions.
  • App Stores: When Apple's App Store launched, the FSF concluded that its terms were incompatible with the GPL -- and with any kind of strong copyleft. Since then, we have several new app stores, most notably from Google and Microsoft. Are the Apple terms still incompatible with the GPL? Are the other stores any better? Are these stores undermining GPL adoption, and should copyleft relax its standards in order to get free software to this audience, or should it stand its ground?

Ben Hutchings: Testing network link state

28 January, 2013 - 12:04

Jan Wagner writes about using ethtool to poll the link state of network devices. While I'm pleased to see another happy ethtool user, it's a bit heavyweight for reading one bit of information, and it's not ideal for scripting.

Much information about each network device is available under the /sys/class/net/<name> directory, including the carrier and speed attributes. (When the interface is down, some of these attributes cannot be read.) It's also possible to listen for link changes (rather than polling periodically) using the rtnetlink API, but this is probably not suitable for munin.
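
For scripting, reading the attribute directly is nearly free. A minimal Perl sketch (the interface name is just an example, not from the original post):

    #!/usr/bin/perl
    # Report the link state of one interface by reading sysfs directly.
    use strict;
    use warnings;

    my $iface = shift || 'eth0';    # example interface name
    my $attr  = "/sys/class/net/$iface/carrier";

    # carrier cannot be read while the interface is down, hence the check.
    open my $fh, '<', $attr or die "cannot read $attr (interface down?): $!\n";
    chomp(my $carrier = <$fh>);
    print $carrier eq '1' ? "$iface: link up\n" : "$iface: no link\n";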


Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.