Planet Debian


Martin Pitt: Urgent PostgreSQL security updates for Debian/Ubuntu

4 April, 2013 - 22:41

PostgreSQL just released security updates. 9.1 (as found in Debian testing and unstable and Ubuntu 11.10 and later) is affected by a critical remote vulnerability which potentially allows anyone who can access the TCP port (without credentials) to corrupt local files. If your PostgreSQL database exposes the TCP port to any potentially untrusted location, please shut down your servers and update now!

PostgreSQL 8.4 for Debian stable (squeeze) and Ubuntu 8.04 LTS and 10.04 LTS also got an update, but these are much less urgent.

Debian and Ubuntu advisories for all stable releases, as well as Debian testing, are going out as we speak. The updates are already available.

I also uploaded updates for Debian unstable (8.4, 9.1, and 9.2 in experimental) and the Ubuntu backports PPA, but it will take a bit for these to build as we don’t have embargoed staging builds for those. Christoph updated the repository as well.

Warning: If you use the current Ubuntu raring Beta-2 candidate images, you will still have the old version. So if you do anything serious with those installations, please make sure to upgrade immediately.

Update: Debian and Ubuntu security announcements have been sent out, and all packages in the backports PPA are built.

Please see the official FAQ if you want to know some more details about the nature of the vulnerabilities.

Daniel Pocock: Comparing packaging workflows in Debian and beyond

4 April, 2013 - 22:00

My blog post yesterday about Debian's git packaging workflows was meant to help fill some gaps in the documentation of this process, one of the most significant being the diagram.

I was surprised to find that some people felt my post argues that distribution tarballs are a must-have or that separate repositories are the optimal solution. In fact, my post was more a reflection of how things are being done, and it is great to see contributions from Joey, Russ and Thomas proposing ways to integrate the workflow in a single repository and also raising questions about the future of tarballs. Some of the changes outlined by Russ only entered the tools after many people were already using the workflow I have described.

Given the rise of collaboration through collab-maint and packaging teams, it is more important than ever that workflows and tools are easily understood and documented. This lets new contributors (including people new to Debian) jump in more quickly and with less risk of disruption. It would be great to see some of these latest ideas covered more thoroughly with diagrams, and I'm happy for people to rip off the dia source file for my diagram and amend it as they please (under the generous terms of the GPL v3).

A look at other packaging systems

With the role of the tarball in people's sights, it's worth looking outside Debian for a moment:

  • Android packaging is radically different. All packages have a single integer version number; the pretty version number displayed to the user is meaningless. Due to the continuously increasing integer version, and the way that database schemas are versioned, it is not really possible to keep multiple release branches in a repository. For an example, see the version in the Lumicall AndroidManifest.xml and the database upgrade function. Google Play/Android Market makes no rules about how a project manages its code. f-droid, the open source market for apps, builds projects like Lumicall directly from git; it relies on them having an ant build file in the standard format generated by the SDK.
  • OpenCSW does not keep upstream tarballs at all. They keep a single git repository for tracking all packages. Each package is built from a Makefile (sample), and their tool suite takes care of downloading the upstream sources (using a URL specified in the Makefile) and verifying against checksums tracked in the repository (sample). The common style of the Makefiles makes it very easy for somebody familiar with the tools to work on just about any of the packages, and anybody who knows how to write a Makefile can start quickly.
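
The OpenCSW recipe idea — fetch the source from a URL recorded in the Makefile, then verify it against a checksum tracked in the repository — can be sketched in a few lines of shell. This is a runnable toy, not OpenCSW's actual tooling; the "download" is simulated with a local file and all names are made up:

```shell
set -e
work=$(mktemp -d) && cd "$work"
echo "upstream source" > pkg-1.0.tar.gz    # stands in for the downloaded tarball
md5sum pkg-1.0.tar.gz > checksums          # what the recipe repository would track
md5sum -c checksums                        # the verification step run at build time
```

If the fetched file ever differs from what the checksum file records, the `md5sum -c` step fails and the build stops.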

Of course, these are just a few examples. These days, software is distributed in many different ways, whether it is a runnable VM image or an embedded device implanted during surgery.

Hideki Yamane: waagent_1.2-1_amd64.changes ACCEPTED into unstable

4 April, 2013 - 21:11
Thank you, ftpmasters.

waagent (Windows Azure Linux Agent) hits the Debian repository now... Yes, "Windows Azure" is a cloud computing service provided by Microsoft.

Thomas Goirand: Git packaging workflow

4 April, 2013 - 19:31

Seeing what has been posted recently on planet.d.o, I would like to share my thoughts and workflow as well, and say that I agree with Joey Hess on many of his arguments. Especially when he says that Debian fetishises upstream tarballs. It's 2013, the age of the Internet; more and more upstream authors are using Git, and more and more they don't care about releasing tarballs. I've seen some upstream authors who simply stopped doing so completely, as a Git tag is really enough. I also fully agree that disk space and network speed aren't much of a problem these days.

When there are tags available, I use the following debian/gbp.conf:

[DEFAULT]
upstream-branch = master
debian-branch = debian
upstream-tag = %(version)s
compression = xz

export-dir = ../build-area/

On many of my packages, I now just use Git tags from upstream when they are available. To make this easier, I now nearly always use the following piece of code in my debian/rules files:

DEBVERS         ?= $(shell dpkg-parsechangelog | sed -n -e 's/^Version: //p')
VERSION         ?= $(shell echo '$(DEBVERS)' | sed -e 's/^[[:digit:]]*://' -e 's/[-].*//')
DEBFLAVOR       ?= $(shell dpkg-parsechangelog | grep -E ^Distribution: | cut -d" " -f2)
DEBPKGNAME      ?= $(shell dpkg-parsechangelog | grep -E ^Source: | cut -d" " -f2)
DEBIAN_BRANCH   ?= $(shell cat debian/gbp.conf | grep debian-branch | cut -d'=' -f2 | awk '{print $$1}')
GIT_TAG         ?= $(shell echo '$(VERSION)' | sed -e 's/~/_/')

get-upstream-source:
        git remote add upstream git:// || true
        git fetch upstream
        if ! git checkout master ; then \
                echo "No upstream branch: checking out" ; \
                git checkout -b master upstream/master ; \
        fi
        git checkout $(DEBIAN_BRANCH)

make-orig-file:
        if [ ! -f ../$(DEBPKGNAME)_$(VERSION).orig.tar.xz ] ; then \
                git archive --prefix=$(DEBPKGNAME)-$(GIT_TAG)/ $(GIT_TAG) | xz >../$(DEBPKGNAME)_$(VERSION).orig.tar.xz ; \
        fi
        [ ! -e ../build-area ] && mkdir ../build-area || true
        [ ! -e ../build-area/$(DEBPKGNAME)_$(VERSION).orig.tar.xz ] && cp ../$(DEBPKGNAME)_$(VERSION).orig.tar.xz ../build-area || true

Packaging a new upstream VERSION now means that I only have to edit debian/changelog, run ./debian/rules get-upstream-source so that I get the new commits and tags, then "git merge -X theirs VERSION" to import the changes, and finally invoke ./debian/rules make-orig-file to create the orig.tar.xz. My debian branch is then ready for git-buildpackage. Note that the sed with the GIT_TAG thing is there because, unfortunately, Git doesn't support the ~ char in tags, and most of the time upstream doesn't use _ in version numbers. Say upstream releases version 1.2.3rc1: I simply do "git tag 1.2.3_rc1 1.2.3rc1" so that I have a new tag which points to the same commit as 1.2.3rc1, but which can be used for the Debian 1.2.3~rc1-1 release and by make-orig-file.

All this might look like overkill at first, but in fact it is really convenient and efficient. Also, even though there is a master branch above, it isn't needed to build the package. Git is smarter than that, so even if you haven't checked out the upstream master branch from the "upstream" remote, make-orig-file and git-buildpackage will simply continue to work. Which is cool, because it means you can store a single branch on Alioth (which is what I do).

Michal Čihař: phpMyAdmin translations status

4 April, 2013 - 18:00

phpMyAdmin 4.0-rc1 is out, and it's really time to work on translations if you want them to be ready for the final release.

So let's look at which translations are at 100% right now (new ones are bold):

Almost complete:

As you can see, there are still a lot of languages missing; this might be your opportunity to contribute to phpMyAdmin. You are also welcome to translate phpMyAdmin 4.0 using the translation server.

If your language is already fully translated and you want to help further, you can translate our documentation as well.


Bits from Debian: Improvements in Debian's core infrastructure

4 April, 2013 - 16:00

Thanks to a generous donation by Bytemark Hosting, Debian started deploying machines for its core infrastructure services in a new data center in York, UK.

This hardware and hosting donation will allow the Debian Systems Administration (DSA) team to distribute Debian's core services across a greater number of geographically diverse locations, and improve, in particular, the fault-tolerance and availability of end-user facing services. Additionally, the storage component of this donation will dramatically reduce the storage challenges that Debian currently faces.

The hardware provided by Bytemark Hosting consists of a fully-populated HP C7000 BladeSystem chassis containing 16 server blades:

  • 12 BL495cG5 blades with 2x Opteron 2347 and 64GB RAM each
  • 4 BL465cG7 blades with 2x Opteron 6100 series and 128GB RAM each

and several HP Modular Storage Arrays:

  • 3 MSA2012sa
  • 6 MSA2000 expansion shelves

with 108 drive bays in total, mostly 500GB SATA drives, some 2TB, some 600GB 15kRPM SAS, providing a total of 57 TB.

57 TB today could host roughly 80 times the current Debian archive or 3 times the Debian Snapshot archive. But remember both archives are constantly growing!

Russ Allbery: Debian packaging of Git upstreams

4 April, 2013 - 13:55

Since there's a discussion of packaging software for Debian that uses Git upstream on Planet Debian right now, I wanted to weigh in and advocate for my current workflow for this situation, which I'm quite fond of. It's worth noting that I'm also upstream for quite a few of the packages I maintain, all in Git, and I use (almost) exactly the same structure for packaging my own software as for packaging anyone else's. So I have some experience with both sides of this.

First off, I completely agree with Joey: if upstream is already using Git, there's no reason not to base the Debian packaging on the upstream repository, and many, many reasons to do so. One of the biggest advantages is that when repositories share a common basis and have been regularly merged, you can easily cherry-pick commits, which is wonderful for security releases and situations where you need a quick bug fix from an unreleased upstream branch. I make very heavy use of this when packaging OpenAFS.

I do, however, like to base the Debian packaging on the released tarball, if for no other reason than that's the artifact that other people can more easily confirm. Yes, you can do the same thing with a Git tag, but the tarball is what upstream considers a release, so if one is available, I think it makes the most sense to base the packaging on it. I do this even for my own software.

Thankfully, it's not that difficult to do both. Sam Hartman was the one who showed me this technique, and (after I used a manual script for some time for a couple of packages) Guido Günther incorporated the support into git-import-orig. The key idea is to still import the tarball into the upstream branch, but instead of making that import a simple commit, you make it a merge commit referencing the upstream release tag or commit from their Git repository.

This means that you still get the exact contents of the release tarball on the upstream branch (and pristine-tar works as normal), but that branch is also based on the full upstream line of development. Therefore, so is your packaging branch (master or what have you), since you merge upstream into it. You can then cherry-pick and take advantage of all of the normal Git features when following upstream development.

This is dead simple to do with git-import-orig. Just add the upstream repository as a remote for your Git repository, make sure it's up to date with git fetch and you have the upstream tags, and then pass the flag --upstream-vcs-tag <tag> to git-import-orig whenever importing the upstream release tarball. git-import-orig will handle the construction of the merge commit for you and everything will just work, exactly like it normally does with git-buildpackage except with a more complete history.

This support was added in git-buildpackage 0.6.0~git20120324, so it's available in unstable and testing.

(I was going to update my notes on Debian packaging with Git to include this information before posting this, but I see that it will require some restructuring and quite a few changes to that document and I don't have time tonight. Hopefully soon.)

Andrew Pollock: [life/repatexpat] Day #4 of repatriation -- delivery central

4 April, 2013 - 10:54

Today was spent at my apartment with Zoe. Harvey Norman were scheduled to deliver the fridge, washing machine and TV. Someone from Telstra was scheduled to come out and monkey with the MDF to get the naked ADSL happening, and my desk was scheduled to be delivered.

My parents drove us over in the morning, with some of our suitcases. Zoe was very happy with her new room and bed.

I was going to get a 1 hour advance warning of Harvey Norman coming, so we all went for a little walk around the neighbourhood to explore. It turns out there's a convenience store right next door, which is, well, extremely convenient. I won't even need to hop in the car to get last-minute bread or milk or anything like that. Very happy about that. There's also a really gorgeous little boutique deli/gourmet grocery easily within walking distance. The neighbourhood is indeed very nice.

Mid-morning, Brent dropped around with his daughter to say hi. Zoe had a good time playing with her as well, and we went out for lunch at the Hawthorne Garage. At the end of lunch, Harvey Norman called to say they'd be an hour away, which was well timed.

Zoe declined to nap again, so we just hung out waiting for the delivery. In the middle of them delivering, the desk delivery happened as well, and then as Brent was leaving, the Telstra guy turned up, so it all happened at once.

I set up the TV and DVD player and Zoe happily christened it all by watching some Play School DVDs, and then my Dad came back and picked us up.

So the apartment is now almost habitable. I just need my bed. That's scheduled for Saturday. I'm planning on sleeping there on Saturday night.

In the furniture department, I'm still lacking a sofa, a dining table and something to put the TV on. Leah has volunteered to help me shop tomorrow, but I'm starting to think I should focus on resolving the lack of a car, then I can do any further shopping myself.

Nick had set me up with a car wholesaler who was going to search for a used Subaru Forester for me, but so far he hasn't turned anything up, so I'm thinking I need to widen my net a little and use some other avenues as well. I'd really wanted for the car finding to be outsourced as much as possible so I could focus on other things, but it's not looking like that's going to be the case, and I really need mobility.

I got a notification from Internode after I'd left today that the Internet should now be working, so I need to configure the ADSL router when I next get a chance and confirm that's the case, then I'm all sorted for being technologically able to work from home.

Sylvain Le Gall: Sekred a password helper for puppet.

4 April, 2013 - 07:31

Puppet is a nice tool but it has a significant problem with passwords:

  • it is recommended to store puppet manifests (*.pp) and related files in a VCS (i.e. git)
  • it is not recommended to store password in a VCS

This leads to complex situations and various workarounds that more or less work:

  • serve passwords from a separate file/DB or do an extlookup on the master (pre-set passwords)
  • store passwords on the master and fetch them through a generate function (random passwords, but kept on the master)

Most of these workarounds are complex, don't let you easily share the passwords you have set, and most of the time store them somewhere other than the target node.

So I have decided to create my own solution: sekred (LGPL-2.1).

The idea of sekred is to generate the password on the target node and make it available to the user who needs it. The user then just has to ssh into the host and get the password.

Advantages:
  • the password is generated and stored on the node
  • no VCS commit of your password
  • no DB storage of your password beside the local filesystem of the host
  • no need to use a common pre-set password for all your hosts; the password is randomly generated for a single host
  • to steal the password you need to crack the host first, but if you have root access on the host, accessing a randomly generated password is pointless

Drawbacks:
  • the password is stored in clear text
  • the password is only protected by the filesystem ACL
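
The principle is easy to sketch in plain shell. This is an illustration of the idea only — not sekred's actual implementation — and the store path and password id are made up:

```shell
set -e
store=$(mktemp -d)                        # stand-in for sekred's on-host store
pw=$(head -c 12 /dev/urandom | base64)    # random, per-host password
printf '%s\n' "$pw" > "$store/root@mysql"
chmod 600 "$store/root@mysql"             # filesystem permissions are the only protection
cat "$store/root@mysql"                   # the retrieval step
```

The password never leaves the node; anyone who can read the file already has the access the password would grant.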

Let's see some concrete examples.

Setting mysql root password

This is a very simple problem. When you first install mysql on Debian Squeeze, the root password is not set. That's bad. Let's set it using sekred and puppet.

node "mysqlserver" {

  package { ["mysql-server", "mysql-client", "sekred"]:
      ensure => installed;
  }

  service { "mysqld":
      name => "mysql",
      ensure => running,
      hasrestart => true,
      hasstatus => true;
  }

  exec { "mysql-root-password":
      command => "mysqladmin -u root password $(sekred get root@mysql)",
      onlyif => "mysql -u root",  # Trigger only if password-less root account.
      require => [Service["mysqld"], Package["mysql-client", "sekred"]];
  }
}

And to get the root password for mysql, just login into the node "mysqlserver":

$> sekred get root@mysql

Setting password for SSH-only user

This example is quite typical of the broken fully automated scenario with passwords:

  • you set up a remote host only accessible through SSH
  • you create a user and set their SSH public key to authorize access
  • your user cannot access their account because SSH prevents password-less account login!

In other words, you need to log in to the node, set a password for the user and mail it back to them... That defeats the "automation" provided by puppet a little bit.

Here is what I do with sekred:

define user::template () {
  user { $name:
      ensure => present,
      membership => minimum,
      shell => "/bin/bash";
  }
  include "ssh_keys::$name"

  # Check password-less account and set one, if required.
  $user_passwd = "$(sekred get --uid $name $name@login)"
  exec { "set-password-$name":
      command => "echo $name:$user_passwd | chpasswd",
      onlyif => "test \"$(getent shadow $name | cut -f2 -d:)\" = \"!\"",
      require => [User[$name], Package["sekred"]];
  }
}

So the command "test \"$(getent shadow $name | cut -f2 -d:)\" = \"!\"" tests for a password-less account. If that is the case, it creates a password using sekred get --uid $name $name@login and sets it through chpasswd.

Note that $user_passwd uses a shell expansion that is evaluated only when the command runs, on the host. The --uid flag of sekred assigns ownership of the password to the given user id.

So now the user (foo) can log in to the node and retrieve their password using sekred get foo@login.

Try it!

Sekred was a very short project, but I am pretty happy with it. It solves a long-standing problem and helps cover an extra mile of automation when setting up new nodes.

The homepage is here and you can download it here. Feel free to send patches, bugs and feature requests (here, login required).

Jonas Smedegaard: Debian Pure Blends - Creating sustainable hacks

4 April, 2013 - 06:46

Today, Thursday April 4th, I am giving a talk at Distro-recipes in Paris about Debian Pure Blends.

Slides and sources for them.

Luke Faraone: Teaching free/open source to high school students

4 April, 2013 - 06:16
A few weeks ago I taught a class on Open Source: Contributing to free culture (catalog entry) for Spark, a one-day program put on by the student-run MIT Educational Studies Program. I was fortunate to have two helpful co-teachers, Tyler Hallada and Jacob Hurwitz, who assisted with the lesson plan and the in-class lecture.

We ended up teaching 3 sessions of the 1hr 50min class that Saturday, with about 10 students in each session.

I was pretty impressed by the quality of the students; a number of them had used GNU/Linux before, but even those who hadn't were able to gain something from the experience. The class was broken up into three segments:

  1. Lecture on a brief history of open source and the free software movement
  2. Small research project on an open source project
  3. Lab where students could work through OpenHatch's training missions

The point was to mix up what could otherwise be a very boring lecture.

I think we might have missed the mark on the last bit, as I get the feeling that we didn't end up giving the students good actionables. While the quality of OpenHatch is high and the organization's campus outreach programs are amazing, skills practice only goes so far without clear direction to apply said skills. I'll be following up with the class participants to see how they're progressing on their own open source contributor journey, and will post updates if I have any.

This wasn't an OpenHatch event, but if this sort of thing interests you, OpenHatch runs a series of events like this one and has a mailing list for discussing planning and sharing best practices. Subscribe and say hi!

The presentation is enclosed below, and of course is licensed under Creative Commons Attribution-ShareAlike 3.0. [PDF]

Junichi Uekawa: playing with constexpr.

4 April, 2013 - 06:12
playing with constexpr. Much better than C++ templates.

Julien Danjou: Hy, Lisp in Python

4 April, 2013 - 06:01

I'd been meaning to look at Hy since Paul Tagliamonte started talking to me about it, but never got the chance until now. Yesterday, Paul indicated it was a good time for me to start looking at it, so I spent a few hours playing.

But what's Hy?

Python is very nice: it has a great community and a wide range of useful libraries. But let's face it, it misses a great language.

Hy is an implementation of a Lisp on top of Python.

Technically, Hy is built directly with a custom made parser (for now) which then translates expressions using the Python AST module to generate code, which is then run by Python. Therefore, it shares the same properties as Python, and is a Lisp-1 (i.e. with a single namespace for symbols and functions).
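
To make that concrete, here is a rough sketch of the machinery Hy drives: an expression is built as a Python AST, compiled, and evaluated. This is ordinary Python run from the shell, not Hy's actual code:

```shell
result=$(python3 - <<'EOF'
import ast

# hand-built AST for the expression 1 + 2, then compiled and evaluated,
# which is the same pipeline a Lisp front end can feed
expr = ast.Expression(
    body=ast.BinOp(left=ast.Constant(1), op=ast.Add(), right=ast.Constant(2))
)
ast.fix_missing_locations(expr)
print(eval(compile(expr, "<sketch>", "eval")))
EOF
)
echo "$result"
```

A Lisp parser only has to produce nodes like these from its s-expressions; Python's own compiler does the rest.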

If you're interested in hearing Paul talk about Hy at the last PyCon US, I recommend watching his lightning talk. As the name implies, it's only a few minutes long.

Does it work?

I cloned the code and played around a bit with Hy, and to my greatest surprise and pleasure, it works quite well. You can imagine writing Python from there easily. Part of the syntax smells like Clojure's, which looks like a good thing since they're playing in the same area.

You can try a Hy REPL in your Web browser if you want.

Here's what some code looks like:

(import requests)
(setv req (requests.get ""))
(if (= req.status_code 200)
    (for (kv (.iteritems req.headers))
      (print kv))
    (throw (Exception "Wrong status code")))

This code would output:

('date', 'Wed, 03 Apr 2013 12:09:23 GMT')
('connection', 'keep-alive')
('content-encoding', 'gzip')
('transfer-encoding', 'chunked')
('content-type', 'text/html; charset=utf-8')
('server', 'nginx/1.2.6')

As you can see, it's really simple to write Lispy code that really uses Python idioms.

There are obviously still a lot of missing features in Hy. The language is far from complete and many parts are moving, but it's really promising, and Paul's doing a great job implementing every idea.

I actually started to hack a bit on Hy, and will try to continue to do so, since I'm really eager to learn a bit more about both Lisp and Python internals in the process. I've already sent a few patches for small bugs I've encountered, and proposed a few ideas. It's really exciting to be able to influence, early on, the design of a language that I'll love to use! Being a recent fan of Common Lisp, I tend to grab the good stuff from it and add it to Hy.

Petter Reinholdtsen: Isenkram 0.2 finally in the Debian archive

4 April, 2013 - 05:40

Today the Isenkram package finally made it into the archive, after lingering in NEW for many months. I uploaded it to the Debian experimental suite on 2013-01-27, and today it was accepted into the archive.

Isenkram is a system for suggesting to users what packages to install to work with a pluggable hardware device. The suggestion pops up when the device is plugged in. For example, if a Lego Mindstorms NXT is inserted, it will suggest installing the program needed to program the NXT controller. Give it a go, and report bugs and suggestions to the BTS. :)

Sylvestre Ledru: LLVM Debian/Ubuntu nightly packages

4 April, 2013 - 01:38

Lately, I have been working on providing nightly packages of the whole LLVM toolchain.
With the help of folks from Intel, Google and Apple, I am happy to announce the publication of these packages:

Built through a Jenkins instance, packages for Debian wheezy and unstable and for Ubuntu quantal, precise and raring are created twice a day.

The 3.2 and 3.3 llvm-toolchain packages are currently waiting in the Debian NEW queue.

More information on the LLVM blog.

Joey Hess: upstream git repositories

4 April, 2013 - 00:53

Daniel Pocock posted The multiple repository conundrum in Linux packaging. While a generally good and useful post, which upstream developers will find helpful to understand how Debian packages their software, it contains this statement:

If it is the first download, the maintainer creates a new git repository. If it has been packaged before, he clones the repository. The important point here is that this is not the upstream repository, it is an independent repository for Debian packaging.

The only thing important about that point is that it highlights an unnecessary disconnect between the Debian developer and upstream development. One which upstream will surely find annoying and should certainly not be bothered with.

There is absolutely no technical reason to not use the upstream git repository as the basis for the git repository used in Debian packaging. I would never package software maintained in a git repository upstream and not do so.

The details are as follows:

  • For historical reasons that are continually vanishing in importance, Debian fetishises the tarballs produced by upstream. While upstreams increasingly consider them an unimportant distraction, Debian insists on hoarding and rolling around on its nest of gleaming pristine tarballs.

    I wrote pristine-tar to facilitate this behavior, while also pointing fun at it, and perhaps introducing a weak spot with which to eventually slay this particular dragon. It is widely used within Debian.

    Anyway, the point is that it's no problem to import upstream's tarball into a clone of their git repository. It's fine if that tarball includes files not present in their git repository. Indeed, upstream can do this at release time if they like. Or Debian developers can do it and push a small quantity of data back to upstream in a branch.

  • Sometimes tagged releases in upstream git repositories differ from the files in their released tarballs. This is actually, in my experience, less due to autotools generated files, and more due to manual and imperfect release processes, human error, etc. (Arguably, autotools are a form of human error.)

    When this happens, and the Debian developer is tracking upstream git, they can quite easily modify their branch to reflect the contents of the tarball as closely as they desire. Or modify the source package uploaded to Debian to include anything left out of the tarball.

    My favorite example of this is an upstream who forgot to include their README in their released tarball. Not a made up example; as mentioned tarballs are increasingly an irrelevant side-show to upstreams. If I had been treating the tarball as canonical I would have released a package with no documentation.

  • Whenever Debian developers interact with upstream, whether it's by filing bug reports or sending patches, they're going to be referring to refs in the upstream git repository. They need to have that repository available. The closer and better the relationship with upstream, the more the DD will use that repository. Anything that pulls them away from using that repository is going to add friction to dealing with upstream.

    There have, historically, been quite a lot of sources of friction. From upstreams who choose one VCS while the DD preferred using another, to DDs low on disk space who decided to only version control the debian directory, and not the upstream source code. With disk space increasingly absurdly cheap, and the preponderance of development converging on git, there's no reason for this friction to be allowed to continue.

So using the upstream git repository is valuable. And there is absolutely no technical value, and plenty of potential friction in maintaining a history-disconnected git repository for Debian packaging.

Michal Čihař: Weblate and Hackweek 9

3 April, 2013 - 23:30

You might have already noticed that Hackweek 9 is coming next week. At SUSE we will get pizza, ice cream and other nice stuff, but most importantly we can spend the week hacking on anything we want.

Same as last year, I want to spend most of my Hackweek on Weblate, a nice crowdsourcing tool for translations. The major goal is to finish the 1.5 release, which should not be that hard. The most challenging bits of the new machine translation interface are already implemented, and the rest is pretty much only tweaking of existing code.

Another thing we want to explore is the possibility of using Weblate for openSUSE translations. Currently they are mostly kept in SVN, which is a blocker for using Weblate, but we will see what can be done there.


Jon Dowland: UKUUG and FLOSS UK

3 April, 2013 - 17:00

Last year I failed to mention that I'd joined the Council of the Free/Libre Open-Source Software UK group FLOSS UK, formerly known as the UK UNIX User's Group (UKUUG). As a council member, I helped to organise the recent Large Installation Systems Administration conference that took place in my native Newcastle, UK.

Five years ago I gave a talk at the (then) UKUUG Linux conference in Manchester, 2008, about documentation for sysadmins, using ikiwiki. I recently noticed that I hadn't put the abstract or slides up here, so now I have.

Daniel Pocock: The multiple repository conundrum in Linux packaging

3 April, 2013 - 16:00

I'm involved with a number of free software projects as both a developer and as the maintainer of packages for various distributions such as Debian (which also feeds packages to Ubuntu) and OpenCSW.

I regularly come across the following situations:

  • Developers of great software who would like to see it packaged, distributed and promoted conveniently through platforms like Debian
  • Users of Linux distributions who are keen to use free software if it is presented in a convenient and accessible manner they are familiar with.

Sadly, despite everybody having the best intentions, there is sometimes a chasm separating these two groups of people.

Upstream developers are often busy developing new features and don't have time to work on the intricacies of packaging. I hope that by sharing a few of my own experiences I can help more developers get their software packaged more easily.

Fortunately, a number of great tools like git-buildpackage have emerged for streamlining the packaging process, but this has also created more confusion for developers who have their own git repositories and don't quite understand how the Debian git repository and patching process relates to their own repository.

The autotools world

Here, I focus on autotools-based software, because this type of software has its own peculiar issues when packaging. In particular, these issues appear when using a version control system to track upstream releases. Some of the concepts can be applied to plain Makefile or CMake projects as well.

Here is a diagram giving an overview:

Let's work through each of the steps in the diagram:

The upstream release
  1. The developer/release manager updates the version number in configure.ac (sometimes called configure.in) and tags the code. (Usually this tag is on a dedicated release branch.)

  2. The developer checks out a copy of the code from the tag into a fresh working directory
  3. The developer runs the autoreconf/automake tools, usually from a bootstrap script. These tools create a number of new files that don't exist in the project repository. Finally, the developer runs make dist, which puts all the files, including the generated files, into a distribution tarball. It is worth emphasizing this point: the tarball is not just an archive of the files from the repository/tag; it also contains a number of files generated by autotools.

  4. The developer uploads the tarball to a web site such as the Sourceforge download page. Usually a release announcement is made now containing checksums for the tarball.
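The release steps above can be sketched as a short shell session. This is a minimal illustration, not a definitive recipe: the project name, version, tag format and bootstrap script name are all hypothetical placeholders that vary from project to project.

```shell
#!/bin/sh
# Sketch of the upstream release steps (hypothetical project "myproject").
VERSION=1.2.3

# 1. Bump the version in configure.ac and tag the release branch:
#      git tag -s v$VERSION

# 2. Check out a clean copy of the tag into a fresh directory:
#      git archive --prefix=myproject-$VERSION/ v$VERSION | tar -x

# 3. Regenerate the build system and roll the tarball; note that the
#    tarball now contains autotools-generated files (configure, Makefile.in,
#    ...) that are not in the repository:
#      ./bootstrap          # or: autoreconf -fi
#      ./configure && make dist

# 4. Publish the tarball together with checksums for the announcement:
#      sha256sum myproject-$VERSION.tar.gz

echo "would release myproject-$VERSION"
```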

At this point, the upstream developer's work is done and packaging teams from various projects such as Debian take over. Sometimes the upstream developer also builds the packages and continues on to the next steps himself:

  1. The package is downloaded by the package maintainer
  2. If it is the first download, the maintainer creates a new git repository. If it has been packaged before, he clones the repository. The important point here is that this is not the upstream repository, it is an independent repository for Debian packaging. The maintainer uses the git-import-orig tool to import the upstream tarball into the packaging repository. The git-import-orig tool captures an exact snapshot of the upstream release tarball contents in a branch called upstream. One point where the Debian repository differs fundamentally from the upstream repository is that all files from the tarball will be tracked in the Debian git repository, even those automatically generated files that were created by autotools and don't exist in the upstream repository.
  3. The maintainer creates or updates the various artifacts for packaging. These files are kept on the master branch, and the tarball contents from the upstream branch are merged into master to create packages.
  4. When the maintainer feels the code is ready, he will check out a clean copy of the repository to build the package from.
  5. The maintainer executes a tool such as git-buildpackage or regular dpkg-buildpackage, which creates the *.deb files.
  6. The files are checked with a tool like lintian and some manual testing/installation. If all is OK, a tag is made in the packaging repository, with a suffix appended to the upstream version number to indicate which iteration of the package it applies to.
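The Debian-side steps can likewise be sketched as a command sequence. This is a hedged outline, assuming the git-buildpackage tools are installed; the package name "mypkg", the version and the tag format are placeholders, and real packages may need pristine-tar and other options not shown here.

```shell
#!/bin/sh
# Sketch of the Debian packaging steps (hypothetical package "mypkg").
PKG=mypkg
UPSTREAM_VERSION=1.2.3

# 1-2. First upload: create a fresh packaging repository; subsequent
#      uploads: clone the existing one. Then import the upstream tarball,
#      which snapshots its full contents (including the autotools-generated
#      files) onto the "upstream" branch and merges into "master":
#        git-import-orig ../${PKG}_${UPSTREAM_VERSION}.orig.tar.gz

# 3. Edit the packaging artifacts on master:
#        debian/changelog, debian/control, debian/rules, ...

# 4-5. Build the *.deb files from a clean checkout:
#        git-buildpackage        # or: dpkg-buildpackage -us -uc

# 6. Check the result, then tag with the Debian revision suffix:
#        lintian ../${PKG}_${UPSTREAM_VERSION}-1_*.changes
#        git tag debian/${UPSTREAM_VERSION}-1

echo "would build ${PKG}_${UPSTREAM_VERSION}-1"
```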
Attachment: release-packaging-workflow.dia (3.97 KB)

Peter Eisentraut: Installing multiple PostgreSQL versions on Homebrew

3 April, 2013 - 09:59

I was going to post this yesterday, but some might have thought that it was a joke. April 1st is always an annoying day to communicate real information.

If you have been fond of the way Debian and Ubuntu manage multiple PostgreSQL versions in parallel, you can now have the same on OS X with Homebrew:

brew tap petere/postgresql
brew install postgresql-9.2
# etc.
brew install --HEAD postgresql-common

postgresql-common is the same code as in Debian, only mangled a little.

Now you have all the client programs symlinked through pg_wrapper, and you can use the server management tools such as:

pg_createcluster 9.2 main
pg_ctlcluster 9.2 main start

Let me know if you find this useful.



Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.