Planet Debian

Subscribe to Planet Debian feed
Planet Debian -
Updated: 1 hour 58 min ago

Gunnar Wolf: Can printing be so hard‽

24 September, 2014 - 02:23

Dear lazyweb,

I am tired of trying to figure out how to get my users happily printing again. Please help.

Details follow.

Several years ago, I configured our Institute's server to provide easy, nifty printing support for all of our users. Using Samba+CUPS, I automatically provided drivers to Windows client machines, integration with our network user scheme (allowing for group authorization — that means you can only print on your designated printer), flexible printer management (i.e. I can change printers on the server side without the users even noticing — great when we get new hardware or printers get sent for repairs!)...

Then, this year the people in charge of client machines in the institute decided to finally ditch WinXP licenses and migrate to Windows 7. Sweet! How can it hurt?

Oh, it can hurt. Terribly.

Windows 7 uses a different driver model, and after quite a bit of hair loss, I was not able to convince Samba to deliver drivers to Win7 (FWIW, I think we are mostly using 64 bit versions). Not only that, it also barfs when we try to install drivers manually and print to a share. And of course, it barfs in the least useful way, so it took me quite a bit of debugging and Web reading to find out it was not only my fault.

So, many people have told me that Samba (or rather, Windows-type networking) is no longer regarded as a good idea for printing. The future is here, and it's called IPP. And it is simpler, because Windows can talk directly with CUPS! Not only that, CUPS allows me to set valid users+groups to each printer. So, what's there to lose?

Besides time, that is. It took me some more hair pulling to find out that Windows 7 is shipped by default (at least in the version I'm using) with the Internet Printing Server feature disabled. Duh. OK, enable it, and... Ta-da! It works with CUPS! Joy, happiness!

Only that... It works only when I use it with no authentication.

Windows has an open issue, with its corresponding hotfix even, because Win7 and 2008 fail to provide user credentials to print servers...

So, yes, I can provide site-wide printing capabilities, but I still cannot provide per-user or per-group authorization and accounting, which are needed here.
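For reference, the CUPS side of this per-group authorization is straightforward; an illustrative printers.conf fragment (the queue and group names here are made up, and lpadmin normally maintains this file for you) looks roughly like:

```
# /etc/cups/printers.conf (fragment, illustrative)
<Printer accounting-laser>
  # Only members of the local "accounting" group may submit jobs
  AllowUser @accounting
</Printer>
```

The same restriction can be set with lpadmin -p accounting-laser -u allow:@accounting. The missing piece is getting Windows 7 to send authenticated user names that CUPS can check against such a list.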

I cannot believe this issue is still unsolvable under Windows 7, several years after it hit the market. Or am I just being obtuse and missing an obvious solution?

Dear lazyweb, I did my homework. Please help me!

Enrico Zini: pressure

23 September, 2014 - 22:18

I've just stumbled on this bit that seems relevant to me:

Insist on using objective criteria

The final step is to use mutually agreed and objective criteria for evaluating the candidate solutions. During this stage they encourage openness and surrender to principle not pressure.

I find the concept of "pressure" very relevant, and I like the idea of discussions being guided by content rather than pressure.

I'm exploring the idea of filing under this concept of "pressure" most of the things described in codes of conduct, and I'm toying with looking at gender or race issues from the point of view of making people surrender to pressure.

In that context, most codes of conduct seem to give a partial definition of "pressure". I've been uncomfortable at DebConf this year, because the conference PG12 code of conduct would cause me trouble for talking about what lessons Debian can learn from consent culture in BDSM communities, but it would still allow situations in which people would have to yield to pressure, as long as the pressure was applied while avoiding the behaviours blacklisted by the CoC.

Pressure could be the phrase "you are wrong" without further explanation, spoken by someone with more reputation than I have in a project. It could be someone with the time for writing ten emails a day discussing with someone with barely the time to write one. It could be someone using elaborate English discussing with someone who needs to look up every other word in a dictionary. It could be just ignoring emails from people who have issues different than mine.

I like the idea of having "please do not use pressure to bring your issues forward" written somewhere, rather than spending time blacklisting all possible ways of pressuring people.

I love how the Diversity Statement is elegantly getting all this where it says: «We welcome contributions from everyone as long as they interact constructively with our community.»

However, I also find it hard not to fall back to using pressure, even just for self-preservation: I have often found myself in the situation of having the responsibility to get a job done, and not having the time or emotional resources to even read the emails I get about the subject. All my life I've seen people in such a situation yell "shut up and let me work!", and I feel a burning thirst for other kinds of role models.

A CoC saying "do not use pressure" would not help me much here, but being around people who do that, learning to notice when and how they do it, and knowing that I could learn from them, that certainly would.

If you can link to examples, I'd like to add them here.

Dariusz Dwornikowski: debrfstats software for RFS statistics

23 September, 2014 - 17:35

Last time I said that I would release the software I used to make the RFS stats plots. You can find it in my github repo -

The software contains a small class to get the data needed to generate the plots, as well as to do some simple bug analysis, plus an R script to make plots from a CSV file. For now debrfstats uses the SOAP interface to Debbugs, but I am working on adding a UDD data source.

The software is written in Python 2 (SOAPpy does not come in a Python 3 flavour); some usage examples are in the file in the repository.

If you have any questions or wishes for debrfstats do not hesitate to contact me.

Keith Packard: easymega-118k

23 September, 2014 - 12:33
Neil Anderson Flies EasyMega to 118k’ At BALLS 23

Altus Metrum would like to congratulate Neil Anderson and Steve Cutonilli on the success of the two-stage rocket, “A Money Pit”, which flew on Saturday the 20th of September on an N5800 booster followed by an N1560 sustainer.

“A Money Pit” used two Altus Metrum EasyMega flight computers in the sustainer, each one configured to light the sustainer motor and deploy the drogue and main parachutes.

Safely Staged After a 7 Second Coast

After the booster burned out, the rocket coasted for 7 seconds to 250m/s, at which point EasyMega was programmed to light the sustainer. As a back-up, a timer was set to light the sustainer 8 seconds after the booster burn-out. In both cases, the sustainer ignition would have been inhibited if the rocket had tilted more than 20° from vertical. During the coast, the rocket flew from 736m to 3151m, with speed going from 422m/s down to 250m/s.

This long coast, made safe by EasyMega’s quaternion-based tilt sensor, allowed this flight to reach a spectacular altitude.
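The staging logic described above reduces to a small decision function. This is a simplified sketch of mine, not the actual EasyMega firmware; the thresholds come from the flight description:

```python
def should_light_sustainer(speed_m_s, seconds_since_burnout, tilt_deg):
    """Decide whether to fire the sustainer igniter.

    Primary condition: the rocket has slowed to the programmed 250 m/s.
    Backup condition: 8 seconds have elapsed since booster burn-out.
    Safety interlock: never fire when tilted more than 20 degrees
    from vertical.
    """
    if tilt_deg > 20.0:  # the tilt interlock always wins
        return False
    primary = speed_m_s <= 250.0
    backup = seconds_since_burnout >= 8.0
    return primary or backup
```

In the real flight the primary condition fired first; the timer existed only as a backup, and both paths were gated by the tilt check.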

Apogee Determined by Accelerometer

Above 100k’, the MS5607 barometric sensor is out of range. However, as you can see from the graph, the barometric sensor continued to return useful data. EasyMega doesn’t expect that to work, and automatically switched to accelerometer-only apogee determination mode.

Because off-vertical flight will under-estimate the time to apogee when using only an accelerometer, the EasyMega boards were programmed to wait for 10 seconds after apogee before deploying the drogue parachute. That turned out to be just about right; the graph shows the barometric data leveling off right as the apogee charges fired.
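Accelerometer-only apogee detection can be approximated by integrating the net vertical acceleration until the velocity estimate crosses zero. Again, this is my own sketch, not EasyMega's implementation:

```python
def apogee_index(net_accel_m_s2, dt, initial_velocity):
    """Return the index of the sample where integrated vertical velocity
    first reaches zero (apogee), or None if it never does.

    net_accel_m_s2: net vertical acceleration samples (sensed
    acceleration with gravity already subtracted), in m/s^2.
    dt: sample interval in seconds.
    """
    v = initial_velocity
    for i, a in enumerate(net_accel_m_s2):
        v += a * dt  # integrate acceleration into velocity
        if v <= 0.0:
            return i
    return None
```

Off-vertical flight makes the axial accelerometer over-read the deceleration along the vertical axis, so this estimate comes out early; that is why the boards were set to wait a further 10 seconds before firing the drogue charge.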

Fast Descent in Thin Air

Even with the drogue safely fired at apogee, the descent rate rose to over 200m/s in the rarefied air of the upper atmosphere. With increasing air density, the airframe slowed to 30m/s when the main parachute charge fired at 2000m. The larger main chute slowed the descent further to about 16m/s for landing.

Dirk Eddelbuettel: RcppArmadillo 0.4.450.1.0

23 September, 2014 - 11:00

Continuing with his standard pace of approximately one new version per month, Conrad released a new minor release of Armadillo a few days ago. As before, I had created a GitHub-only pre-release which was tested against all eighty-seven (!!) CRAN dependents of our RcppArmadillo package and then uploaded RcppArmadillo 0.4.450.0 to CRAN.

The CRAN maintainers pointed out that under the R development release, a NOTE was issued concerning the C library's rand() call. This is a pretty new NOTE, but it means using the (sometimes poor quality) rand() generator is now a no-no. Now, Armadillo, being as robustly engineered as it is, offers a new random number generator based on C++11 as well as a fallback generator for those unfortunate enough to live with an older C++98 compiler. (I would like to note here that I find Conrad's continued support for both C++11, offering very useful modern language idioms, and the fallback code for continued deployment and usage by those constrained in their choice of compilers rather exemplary --- because contrary to what some people may claim, it is not a matter of one or the other. C++ always was, and continues to be, a multi-paradigm language which can easily be supported across several standards. But I digress...)

In any event, one cannot argue with CRAN about their prescription of a C++98 compiler. So Conrad and I discussed this over email, and came up with a scheme where a user-package (such as RcppArmadillo) can provide an alternate generator which Armadillo then deploys. I implemented a first solution which was then altered / reflected by Conrad in a revised version 4.450.1 of Armadillo. I packaged, and now uploaded, that version as RcppArmadillo 0.4.450.1.0 to both CRAN and into Debian.

Besides the RNG change already discussed, this release brings a few smaller changes from the Armadillo side. These are detailed below in the extract from the NEWS file. On the RcppArmadillo side, we now have support for pkgKitten which is both very exciting and likely the topic of another blog post with an example of creating an RcppArmadillo package that purrs. In the process, I overhauled and polished how new packages are created by RcppArmadillo.package.skeleton(). An upcoming blog post may provide an example.

Changes in RcppArmadillo version 0.4.450.1.0 (2014-09-21)
  • Upgraded to Armadillo release Version 4.450.1 (Spring Hill Fort)

    • faster handling of matrix transposes within compound expressions

    • expanded symmatu()/symmatl() to optionally disable taking the complex conjugate of elements

    • expanded sort_index() to handle complex vectors

    • expanded the gmm_diag class with functions to generate random samples

  • A new random-number implementation for Armadillo uses the RNG from R as a fallback (when C++11 is not selected so the C++11-based RNG is unavailable) which avoids using the older C++98-based std::rand

  • The RcppArmadillo.package.skeleton() function was updated to only set an "Imports:" for Rcpp, but not RcppArmadillo which (as a template library) needs only LinkingTo:

  • The RcppArmadillo.package.skeleton() function will now prefer pkgKitten::kitten() over package.skeleton() in order to create a working package which passes R CMD check.

  • The pkgKitten package is now a Suggests:

  • A manual page was added to provide documentation for the functions provided by the skeleton package.

  • A small update was made to the package manual page.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gunnar Wolf: One month later: How is the set of Debian keyrings faring?

23 September, 2014 - 02:13

OK, it's almost one month since we (the keyring-maintainers) gave our talk at DebConf14; how are we faring regarding key transitions since then? You can compare the numbers (the graphs, really) to those in our DC14 presentation.

Since the presentation, we have had two keyring pushes:

First of all, the Non-uploading keyring is all fine: As it was quite recently created, and as it is much smaller than our other keyrings, it has no weak (1024 bit) keys. It briefly had one in 2010-2011, but it's long been replaced.

Second, the Maintainers keyring: In late July we had 222 maintainers (170 with >=2048 bit keys, 52 with weak keys). By the end of August we had 221: 172 and 49 respectively, and by September 18 we had 221: 175 and 46.

As for the Uploading developers, in late July we had 1002 uploading developers (481 with >=2048 bit keys, 521 with weak keys). By the end of August we had 1002: 512 and 490 respectively, and by September 18 we had 999: 531 and 468.

Please note that these numbers do not say directly that six DMs or that 50 uploading DDs moved to stronger keys, as you'd have to factor in new people being added, keys migrating between different keyrings (mostly DM⇒DD), and people retiring from the project; you can get the detailed information looking at the public copy of our Git repository, particularly of its changelog.

And where does that put us?

Of course, I'm very happy to see that the lines in our largest keyring have already crossed: We now have more people with >=2048 bit keys. And there was a lot of work to get this processing done! But that still means... that in order not to lock a large proportion of Debian Developers and Maintainers out of the project, we have a whole lot of work left to do. We would like to keep the replacement slope high (because, remember, on January 1st we will remove all small keys from the keyring).

And yes, we are willing to do the work. But we need you to push us for it: We need you to get a new key created, to gather enough (two!) DD signatures on it, and to request a key replacement via RT.

So, by all means: Do keep us busy!


Konstantinos Margaritis: EfikaMX updated wheezy and jessie images available

23 September, 2014 - 01:38

A while ago, I promised some people in the forum that I would provide bootable armhf images for wheezy, but most importantly for jessie, with an updated kernel. After a delay (I did have the images ready and working, but had to clean them up a bit) I decided to publish them here first.

So, here are the images: (559MB) (635MB)

Joachim Breitner: Using my Kobo eBook reader as an external eInk monitor

22 September, 2014 - 02:15

I have an office with a nice large window, but more often than not I have to close the shades to be able to see something on my screen. Even worse: there were so many nice and sunny days where I would have loved to take my laptop outside and work there, but it (a Thinkpad T430s) is simply not usable in bright sun. I have seen those nice eInk based eBook readers, which get clearer the brighter the light is. That's what I want for my laptop, and I am willing to sacrifice color and a bit of usability due to latency for being able to work in the bright daylight!

So while I was in Portland for DebConf14 (where I guess I felt a bit more like tinkering than otherwise) I bought a Kobo Aura HD. I chose this device because it has a resolution similar to my laptop (1440×1080) and I have seen reports from people running their own software on it, including completely separate systems such as Debian or Android.

This week, I was able to play around with it. It was indeed simple to tinker with: You can simply copy a tarball to it which is then extracted over the root file system. There are plenty of instructions online, but I found it easier to take them as inspiration and do it my way – with basic Linux knowledge that’s possible. This way, I extended the system boot script with a hook to a file on the internal SD card, and this file then runs the telnetd daemon that comes with the device’s busybox installation. Then I just have to make the device go online and telnet onto it. From there it is a pretty normal Linux system, albeit without an X server, using the framebuffer directly.

I even found an existing project providing a VNC client implementation for this and other devices, and pretty soon I could see my laptop screen on the Kobo. Black and white worked fine, but colors and greyscales, including all anti-aliased fonts, were quite broken. After some analysis I concluded that it was confusing the bit pattern of the pixels. Luckily kvncclient shares that code with koreader, which worked fine on my device, so I could copy some files and settings from there et voilà: I now have an eInk monitor for my laptop. As a matter of fact, I am writing this text with my Kobo sitting on top of the folded-back laptop screen!

I did some minor adjustments to my laptop:

  • I changed the screen size to match the Kobo’s resolution. Using xrandr’s --panning option this is possible even though my real screen is only 900 pixels high.
  • I disabled the cursor-blink where possible. In general, screen updates should be avoided, so I hide my taffybar (which has a CPU usage monitor) and text is best written at the very end of the line (and not before a, say, </p>).
  • My terminal windows are now black-on-white.
  • I had to increase my font-size a bit (the kobo has quite a high DPI), and color is not helpful (so :set syntax=off in vim).

All this is still very manual (going online with the kobo, finding its IP address, logging in via telnet, killing the Kobo's normal main program, starting x11vnc, finding my ip address, starting the vnc client, doing the adjustments mentioned above), so I need to automate it a bit. Unfortunately, there is no canonical way to extend the Kobo with your own application: The Kobo developers made their device quite open, but stopped short of actually encouraging extensions, so people have created many weird ways to start programs on the Kobo – dedicated start menus, background programs observing when the regular Kobo app opens a specific file, complete replacements for the system. I am considering simply running an SSH server on the device and driving the whole process from the laptop. I'll keep you up-to-date.
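A rough sketch of that automation, driven from the laptop. All paths, the login prompt, and the kvncclient invocation below are assumptions of mine; adjust them for your own setup:

```python
# Automate the manual steps: export the laptop display over VNC,
# then log into the Kobo's telnetd and start the VNC client there.
import subprocess

def kobo_commands(laptop_ip):
    """Shell commands to run on the Kobo once logged in.
    The client path is hypothetical."""
    return [
        "killall nickel",  # stop the Kobo's regular reading UI
        "/mnt/onboard/kvncclient %s &" % laptop_ip,
    ]

def setup_eink_monitor(kobo_ip, laptop_ip):
    import telnetlib  # stdlib up to Python 3.12
    # 1. Export the laptop's X display over VNC.
    subprocess.Popen(["x11vnc", "-display", ":0", "-forever"])
    # 2. Log into the Kobo and replace its UI with the VNC client.
    tn = telnetlib.Telnet(kobo_ip)
    tn.read_until(b"login: ")
    tn.write(b"root\n")
    for cmd in kobo_commands(laptop_ip):
        tn.write(cmd.encode() + b"\n")
    tn.close()
```

Finding the two IP addresses automatically (e.g. from avahi or the DHCP leases) is the remaining manual step this sketch does not cover.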

A dream for the future would be to turn the kobo into a USB monitor and simply connect it to any computer, where it then shows up as a new external monitor. I wonder if there is a standard for USB monitors, and if it is simple enough (but I doubt it).

A word about the kobo development scene: It seems to be quite active and healthy, and a number of interesting applications are provided for it. But unfortunately it all happens on a web forum, and they use it not only for discussion, but also as a wiki, a release page, a bug tracker, a feature request list and as a support line – often on one single thread with dozens of posts. This makes it quite hard to find relevant information and decide whether it is still up-to-date. Unfortunately, you cannot really do without it. The PDF viewer that comes with the kobo is barely okish (e.g. no crop functionality), so installing, say, koreader is a must if you read more PDFs than actual ebooks. And then you have to deal with the how-to-start-it problem.

That reminds me: I need to find a decent RSS reader for the kobo, or possibly a good RSS-to-epub converter that I can run automatically. Any suggestions?

PS and related to this project: Thanks to Kathey!

Dariusz Dwornikowski: statistics of RFS bugs and sponsoring process

21 September, 2014 - 22:21

For some days I have been working on statistics of the sponsoring process in Debian. I find this to be one of the most important things Debian has for attracting and enabling new contributions. It is important to know how this process works, whether we need more sponsors, how effective the sponsoring is, and what the timings connected to it are.

How I did this?

I have used the Debbugs SOAP interface to get all bugs that are filed against the sponsorship-requests pseudo-package. SOAP introduces a little overhead because it needs to download the complete list of bugs for the sponsorship-requests package, and then process them according to given date ranges. The same information can easily be extracted from the UDD database in the future; it will be faster because SQL is obviously better at working with date ranges than Python.

The most problematic part was getting the "real done date" of a particular bug, and frankly I spent most of my time writing a rather dirty and complicated script. The script gets the log for a particular bug number and returns its "real done date". I have published a proof of concept in a previous post.

What I measured?

RFS is a queue, and for every queue one is interested in the mean time to get processed. In this case I called the metric global MTTGS (mean time to get sponsored). This is a metric that gives an overall insight into the performance of the RFS queue. Time to get sponsored (TTGS) for a bug is the number of days that passed between filing an RFS bug and closing it (i.e. the bug was sponsored). Mean time to get sponsored is calculated as the sum of the TTGSs of all bugs divided by the number of bugs (in a given period of time). Global MTTGS is MTTGS calculated for the period from 2012-01-01 until today.
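The definition above boils down to a few lines of Python (a sketch of mine, not the actual debrfstats code; each bug is represented as a (filed, done) date pair, with done set to None for bugs that are still open):

```python
from datetime import date

def mttgs(bugs, until):
    """Global mean time to get sponsored, in days.

    `bugs` is a list of (filed, done) date pairs; `done` is None for
    bugs that are still open.  Only bugs closed on or before `until`
    count, matching the definition of the global metric.
    """
    ttgs = [(done - filed).days
            for filed, done in bugs
            if done is not None and done <= until]
    if not ttgs:
        return 0.0
    return sum(ttgs) / float(len(ttgs))
```

Calling mttgs(bugs, date.today()) then gives the global MTTGS over the whole epoch.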

Besides MTTGS I have also measured typical bug-related metrics:

  • number of bugs closed in a given day,
  • number of bugs opened in a given day,
  • number of bugs with status open in a given day,
  • number of bugs with status closed in a given day.
Plots and graphs

Below is a plot of global MTTGS vs. time (click for a larger image).

As you can see, MTTGS grows quickly at first and tends to settle around 60 days by the end of 2013. This does not mean that your package will wait 60 days on average to get sponsored nowadays. Remember that this is a global MTTGS, so even if the MTTGS of the last month was very low, the global MTTGS would decrease only slightly. It does, however, give a good glimpse into the performance of the process. Even though more packages are filed for sponsoring now (see the next graphs) than at the beginning of the epoch, the sponsoring rate is high enough to flatten the global MTTGS, and with time maybe decrease it.

The image below (click for a larger one) shows how many bugs reside in the queue with status open or closed (calculated for each day). For closed we have an almost linear function, so each day more or less the same number of bugs is closed, and they increase the pool of bugs with status closed. For bugs with status open the interesting part begins around May 2012, when the system gets saturated or popular. It can be interpreted as a plot of how many bugs reside in the queue; the important part is that it is stable and does not show a clear increasing trend.

The last plot shows arrival and departure rate of bugs from RFS queue, i.e. how many bugs are opened and closed each day. The interesting part here are the maxima. Let's look at them.

The maximal number of opened bugs (21) was on 2013-05-06. As it appears, it was a batch upload of RFSs for tryton-modules-*:

  706953  RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1 
  706954  RFS: tryton-modules-purchase-shipment-cost/2.8.0-1 
  706948  RFS: tryton-modules-production/2.8.0-1 
  706969  RFS: tryton-modules-account-fr/2.8.0-1 
  706946  RFS: tryton-modules-project-invoice/2.8.0-1 
  706950  RFS: tryton-modules-stock-supply-production/2.8.0-1 
  706942  RFS: tryton-modules-product-attribute/2.8.0-1 
  706957  RFS: tryton-modules-stock-lot/2.8.0-1 
  706958  RFS: tryton-modules-carrier-weight/2.8.0-1 
  706941  RFS: tryton-modules-stock-supply-forecast/2.8.0-1 
  706955  RFS: tryton-modules-product-measurements/2.8.0-1 
  706952  RFS: tryton-modules-carrier-percentage/2.8.0-1 
  706949  RFS: tryton-modules-account-asset/2.8.0-1 
  706904  RFS: chinese-checkers/0.4-1 
  706944  RFS: tryton-modules-stock-split/2.8.0-1 
  706981  RFS: distcc/3.1-6 
  706945  RFS: tryton-modules-sale-supply/2.8.0-1 
  706959  RFS: tryton-modules-carrier/2.8.0-1 
  706951  RFS: tryton-modules-sale-shipment-cost/2.8.0-1 
  706943  RFS: tryton-modules-account-stock-continental/2.8.0-1 
  706956  RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1

The maximum number of closed bugs (18) was on 2013-09-24 and, as you probably guessed, tryton modules also had an impact on that:

  706953  RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1 
  706954  RFS: tryton-modules-purchase-shipment-cost/2.8.0-1 
  706948  RFS: tryton-modules-production/2.8.0-1 
  706969  RFS: tryton-modules-account-fr/2.8.0-1 
  706946  RFS: tryton-modules-project-invoice/2.8.0-1 
  706950  RFS: tryton-modules-stock-supply-production/2.8.0-1 
  706942  RFS: tryton-modules-product-attribute/2.8.0-1 
  706958  RFS: tryton-modules-carrier-weight/2.8.0-1 
  706941  RFS: tryton-modules-stock-supply-forecast/2.8.0-1 
  706955  RFS: tryton-modules-product-measurements/2.8.0-1 
  706952  RFS: tryton-modules-carrier-percentage/2.8.0-1 
  706949  RFS: tryton-modules-account-asset/2.8.0-1 
  706944  RFS: tryton-modules-stock-split/2.8.0-1 
  706959  RFS: tryton-modules-carrier/2.8.0-1 
  723991  RFS: mapserver/6.4.0-2 
  706951  RFS: tryton-modules-sale-shipment-cost/2.8.0-1 
  706943  RFS: tryton-modules-account-stock-continental/2.8.0-1 
  706956  RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1
The software

Most of the software was written in Python; the graphs were generated in R. After a code cleanup I will publish the complete solution on my github account, free for everybody to use. If you would like to see other statistics, please let me know; I can create them if the data provides sufficient information.

Konstantinos Margaritis: VSX port added to Eigen!

21 September, 2014 - 21:03

Being the SIMD fanatic that I am, a few years ago I did the PowerPC Altivec and ARM NEON ports for the Eigen linear algebra library, one of the best, most popular, and most ported libraries.

Recently I thought it would be a good idea to extend both ports to 64-bit, and it would also help me with the SIMD book, using VSX in the one case and ARMv8 NEON (or Advanced SIMD, as ARM likes to call it) in the other. ARMv8 hardware is a bit scarce at the moment, so I thought I'd start with VSX. Being in Debian, I have access to a number of porterboxes in several architectures, and luckily one of those was a Power7 (with VSX) running ppc64. So I started porting -or rather extending the code- to use VSX in the 64-bit doubles case. Unluckily, I could not test anything, because Debian kernels do not have VSX enabled in wheezy -which is what the porterbox is running- and enabling it is a non-option (#758620). So, running VSX code would turn out to be quite hard.

Laura Arjona: Happy Software Freedom Day!

20 September, 2014 - 17:58

Today we celebrate the day of free software (each year, a Saturday around mid-September). More info at

There are no public events in Madrid, but I’m going to try to hack and write a bit more this weekend, as my personal celebration.

In this blog post you can find some of my very very recent activities on free software, and my plans for this weekend of celebration!

Debian Children distros aka Derivatives

I had the translation/update of the page pending for a long time. It's a long page, and I was not sure what was better: picking up the too-outdated last translation and reviewing it carefully in order to update it, or starting from scratch. I decided to reuse the last translation (thanks Luis Uribe!) and after some days dedicating my commuting time to it, yesterday evening I finally finished it at home. Now it's in the review queue, and I hope in 10 days or so it will be uploaded.

In the meantime, I have learned a bit about the Debian Derivatives subproject and census, I have watched the Derivatives Panel at DebConf13, and had a look at the bug #723069 about keeping the children-distros page up to date.

So now that I’m liberated about this translation, I’m going to put some time in keeping up to date the original English page (I’m part of the www and publicity team, so I think it makes sense). My goal is to review at least one Debian derivative each two days, and when I finish the list, start again. I can update the wiki myself, and for the www, I’ll send patches against #723069, unless I’m told to do it other way.

BTW, wouldn’t be nice to mark web/wiki pages as “RFH” the same as packages?, so other people can easily decide to put some time on them, and make even more awesome! Or make them appear in the how-can-i-help reminders :)  Mmm maybe it’s just a matter of filing a bug and tagging it as “gift”? I think no, because nobody has the package “” installed in their system… I’ll talk with the maintainer about this.

New Member process

I promised myself to try to work a bit more in Debian during the summer and September, and if everything goes well, try to apply to the new member process in October.

I wanted to read all the documentation first, and one challenge is to review/update the translations of the folder. This way, both myself and the Spanish-speaking community benefit from the effort. Yesterday I translated one of those pending pages and I hope during the weekend I can translate/update the rest. When I finish that, I'll keep reading the other documentation.

DebConf15
This summer I was invited to join the DebConf15 organization team and pick up tasks in the publicity area. I was very happy to join. I'm not at all sure that I can go to DebConf15 in Heidelberg (Germany); in fact, I'm quite sure I will not go, since mid-August is the only opportunity to visit family who live far away. But anyway, there are things that we can do before DebConf15, and I can contribute.

For now, I attended the meeting on IRC last Monday, and I'm finishing a short blog post about the DebConf14 talk presenting DebConf15, which will be published on the DebConf15 blog.

Android, F-Droid

I keep trying to spread the word about F-Droid and the free software available for Android. Last week some of my friends updated Kontalk to the 3.0.b1 version (I had updated at the beginning of September) and they liked that the images are now sent encrypted, as well as the text messages :)

Some friends also liked the 2048 game, since it can be played offline, without ads, and so on.

I decided to spend some time this weekend contributing translations to the Android apps that I use.

A long pending issue is to try to put some workforce into the F-Droid project itself so that app descriptions are internationalized (the program is fully translatable, but the categories of apps and the descriptions themselves are not). This is a complicated issue: it requires taking some design decisions and later, of course, the implementation. I cannot do it alone, and I cannot do it in the short term. But today I have filed a bug report (#35), so maybe I'll find other people able to help.

Jabber/XMPP and the “RedesLibres” chatroom

For several months I've been using my Jabber/XMPP account more often to join the chatroom

There I meet some people that I follow in (for example, the people who write the Comunícate Libremente or Lignux blogs) and we talk about free software, free services, and other things. I feel very comfortable there; it's nice to have a Spanish-speaking group inside the Free Software community, and I'm also learning a bit about XMPP (I've tried a lot of desktop and Android clients, just for fun!), free networks, and so on.

So today I wanted to publicly thank everybody in that chatroom, who welcomed me so well :)

Thank you, free software friends

And, by extension, I want to thank all the people who work and have fun in the Free Software communities, in the projects where I contribute and in others. They (we) hack to make the world better, and to allow others to join this beautiful challenge that is making machines do what their (final) users want.


You can comment on this post in this thread.

Filed under: My experiences and opinion Tagged: Android, Communities, Contributing to libre software, Debian, English, F-Droid, federation, Free Software, Freedom, internationalization, libre software, localization, translations

Francesca Ciceri: Four Ways to Forgiveness

20 September, 2014 - 16:20

"I have seen a picture," Havzhiva went on.
The Chosen was impassive; he might or might not know the word. "Lines and colors made with earth on earth may hold knowledge in them. All knowledge is local, all truth is partial," Havzhiva said with an easy, colloquial dignity that he knew was an imitation of his mother, the Heir of the Sun, talking to foreign merchants. "No truth can make another truth untrue. All knowledge is a part of the whole knowledge. A true line, a true color. Once you have seen the larger pattern, you cannot go back to seeing the part as the whole."

I've just finished reading "Four Ways to Forgiveness" by U.K. Le Guin.
It deeply resonated with me; it's still there doing its magic in my brain, lingering in the corners of my mind, tickling my view of reality, humming with the beauty of ideas you didn't know were inside you till you've seen them written on paper.
And then you know they were there all along, you just didn't know how to put them into words.
Le Guin knows how to do it, wonderfully.

I loved the whole book, but the last two stories were eye-openers.
Thanks Enrico for suggesting this one to me, and thanks dkg for introducing me to Le Guin's books (with another fantastic book: The Left Hand of Darkness).

Dariusz Dwornikowski: getting real "done date" of a bug from Debian BTS

19 September, 2014 - 15:17

As I wrote in my last post, currently neither the SOAP interface nor the Ultimate Debian Database provides the date when a given bug was closed (its done date). It is quite hard to calculate statistics on a bug tracker when you do not know when bugs were closed (!!).

The done date of a bug can be found in its log. The log itself can be downloaded with the SOAP method get_bug_log, but processing it is quite complicated. The same goes for scraping the BTS's web interface. Fortunately, the web interface makes it possible to download a log in mbox format.

Below is a script that extracts the done date of a bug from its log in mbox format. It uses requests to download the mbox and caches the result in ~/.cache/rfs_bugs, which you need to create. It performs several checks:

  1. Check for the existence of a header like Received: (at 657783-done) by bugs.debian.org; 29 Jan 2012 13:27:42 +0000
  2. Check for a CC: header containing NUMBER-close or NUMBER-done
  3. Check for a To: header containing NUMBER-close or NUMBER-done
  4. Check for close/done in the body.
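For instance, the first check boils down to a single regex over the Received header. A quick standalone demonstration on the example header above (the pattern here is a simplified version of the ones in the full script):

```python
import re
from datetime import datetime

header = "Received: (at 657783-done) by bugs.debian.org; 29 Jan 2012 13:27:42 +0000"
# match "(at NNNNNN-close|done) by <host>; DD Mon YYYY" and capture the date
m = re.search(r"\(at\s\d+-(?:close|done)\)\s+by\s\S+;\s(\d{1,2}\s\w{3}\s\d{4})",
              header)
done = datetime.strptime(m.group(1), "%d %b %Y").date()
# done -> 2012-01-29
```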

The code is below:

import requests
from datetime import datetime
import mailbox
import re
import os
import tempfile


def get_done_date(bug_num):

    CACHE_DIR = os.path.expanduser("~") + "/.cache/rfs_bugs/"

    def get_from_cache():
        if os.path.exists("{}{}".format(CACHE_DIR, bug_num)):
            with open("{}{}".format(CACHE_DIR, bug_num)) as f:
                return datetime.strptime(f.readlines()[0].rstrip(), "%Y-%m-%d").date()
        return None

    def try_header(text):
        # Check 1: a "Received: (at NNNNNN-done) by ..." header.
        reg = r"Received:\s\(at\s\d+-(close|done)\)\s+by.+"
        result = re.search(reg, text)
        if result is None:
            return None
        reg2 = r"\d{1,2}\s\w\w\w\s\d\d\d\d"
        result = re.search(reg2, result.group(0))
        if result is None:
            return None
        return datetime.strptime(result.group(0), "%d %b %Y").date()

    def try_cc(text):
        # Checks 2 and 3: a mail whose CC: or To: header contains NNNNNN-done.
        reg = r"\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"
        handle, name = tempfile.mkstemp()
        with open(name, "w") as f:
            f.write(text)
        mbox = mailbox.mbox(name)
        for i in mbox.items():
            if ('CC' in i[1] and "done" in i[1]['CC']) or \
               ('To' in i[1] and "done" in i[1]['To']):
                result = re.search(reg, i[1]['Received'])
                if result is not None:
                    return datetime.strptime(result.group(1), "%d %b %Y").date()
        return None

    def try_body(text):
        # Check 4: "close"/"done" in the body of a (possibly multipart) mail.
        reg = r"\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"
        handle, name = tempfile.mkstemp()
        with open(name, "w") as f:
            f.write(text)
        mbox = mailbox.mbox(name)
        for i in mbox.items():
            if i[1].is_multipart():
                for m in i[1].get_payload():
                    if "close" in str(m) or "done" in str(m):
                        result = re.search(reg, i[1]['Received'])
                        if result is not None:
                            return datetime.strptime(result.group(1), "%d %b %Y").date()
            elif "close" in i[1].get_payload() or "done" in i[1].get_payload():
                result = re.search(reg, i[1]['Received'])
                if result is not None:
                    return datetime.strptime(result.group(1), "%d %b %Y").date()
        return None

    done_date = get_from_cache()
    if done_date is not None:
        return done_date

    # Fetch the bug log as an mbox from the BTS web interface.
    r = requests.get("https://bugs.debian.org/cgi-bin/bugreport.cgi"
                     "?mboxstatus=yes;bug={}".format(bug_num))
    d = try_header(r.text)
    if d is None:
        d = try_cc(r.text)
    if d is None:
        d = try_body(r.text)
    if d is not None:
        with open("{}{}".format(CACHE_DIR, bug_num), "w") as f:
            f.write(d.strftime("%Y-%m-%d"))
    return d


if __name__ == "__main__":
    print(get_done_date(752210))

PS: I hope the script will not be needed in the near future, as Don Armstrong plans a new BTS database; a DebConf14 video about it is here.

Daniel Pocock: reSIProcate migration from SVN to Git completed

19 September, 2014 - 14:47

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to GitHub accounts. This also allowed us to see their recent commits on other projects and how they wanted their name and email address represented when their previous SVN commits were mapped to Git commits.
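The matching itself can be done against GitHub's public user search API, which supports an in:email qualifier. A hedged sketch of the idea (this is not Daniel's actual script, and the function name is mine):

```python
def github_login_for_email(email, fetch=None):
    """Return the GitHub login whose public profile email matches, or None."""
    if fetch is None:
        import requests  # third-party; only needed for the default fetcher
        fetch = requests.get
    # GitHub's user search lets a query be qualified with "in:email".
    r = fetch("https://api.github.com/search/users",
              params={"q": "{} in:email".format(email)})
    r.raise_for_status()
    items = r.json().get("items", [])
    return items[0]["login"] if items else None
```

The injectable fetch argument is just there to make the function easy to test without hitting the network.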

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project on GitHub. This allowed people to test it, and it also allowed us to start using some GitHub features before officially moving to Git.

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.

Paul Tagliamonte: Docker PostgreSQL Foreign Data Wrapper

19 September, 2014 - 09:49

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it looks like, there's some example SQL down below.

First question: what the heck is a PostgreSQL Foreign Data Wrapper? PostgreSQL Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external data source.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

If you're interested in some of these, there are a bunch in the Multicorn VCS repo, such as the gitfdw example.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface into querying API data, joining data cross-API using common crosswalks, such as using Capitol Words to query for Legislators, and use the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW that I got really excited. Cue a few hours of hacking, and I'm proud to say: here's Docker FDW.

Currently it only implements reading from the API, but extending it to allow SQL DELETE operations isn't out of the question, and is likely to be implemented soon. This lets us ask all sorts of really interesting questions of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic.
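For a sense of what the Python side of such a wrapper looks like, here is a minimal sketch of a read-only Multicorn FDW (the class and its data are made up for illustration; the real wrappers live in dockerfdw.wrappers). Multicorn instantiates the class with the options from the foreign table definition and calls execute() for every scan, expecting an iterable of dicts keyed by column name:

```python
try:
    from multicorn import ForeignDataWrapper
except ImportError:
    # multicorn is only importable inside PostgreSQL's embedded Python;
    # fall back to a plain class so the sketch can be run standalone.
    ForeignDataWrapper = object


class StaticContainerFdw(ForeignDataWrapper):
    """Serves one fixed row; a real FDW would query the Docker API instead."""

    def __init__(self, options, columns):
        # options come from the OPTIONS (...) clause of the foreign table
        self.host = options.get('host', 'unix:///run/docker.sock')
        self.columns = columns

    def execute(self, quals, columns):
        # quals carry the WHERE clauses; a naive FDW may ignore them and
        # let PostgreSQL re-filter whatever rows it returns.
        for row in [{'id': 'abc123', 'name': '/foo', 'running': True}]:
            yield {c: row.get(c) for c in self.columns}
```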

Setting it up

The only stumbling block you might find (at least on Debian and Ubuntu) is that you'll need a Multicorn `.deb`. It's currently undergoing an official Debianization by the Postgres team, but in the meantime I have put the source and binary up myself. Feel free to use those while the Debian PostgreSQL team prepares the upload to unstable.

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group).

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE foreign table docker_containers (
    "id"          TEXT,
    "image"       TEXT,
    "name"        TEXT,
    "names"       TEXT[],
    "privileged"  BOOLEAN,
    "ip"          TEXT,
    "bridge"      TEXT,
    "running"     BOOLEAN,
    "pid"         INT,
    "exit_code"   INT,
    "command"     TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);

CREATE foreign table docker_images (
    "id"              TEXT,
    "architecture"    TEXT,
    "author"          TEXT,
    "comment"         TEXT,
    "parent"          TEXT,
    "tags"            TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);
And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
  FROM docker_containers
  RIGHT JOIN docker_images
    ON (docker_containers.image = docker_images.id);
     ip      |            names            |                  tags                   
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet me @paultag, and file bugs / feature requests. It's currently a bit of a hack, but it's something that I think has long-term potential once some work goes into making sure that this is a rock-solid interface to the Docker API.

Jaldhar Vyas: Scotland: Vote A DINNAE KEN

19 September, 2014 - 07:39

From the crack journalists at CNN.

Interesting fact: anyone who wore a kilt at debconf is allowed to vote in the referendum.

Jonathan McDowell: Automatic inline signing for mutt with RT

18 September, 2014 - 18:00

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages which mean that PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that, but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign"
send-hook "~t rt.debian.org" "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto-inlined PGP signatures, but when emailing anything at Debian's RT instance turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8 which I can understand, but Mutt will by default try to find an 8 bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.

Dariusz Dwornikowski: RFS health in Debian

18 September, 2014 - 16:50

I am working on a small project to create WNPP-like statistics for open RFS bugs. I think this could improve the effectiveness of sponsoring new packages a little by giving insight into bugs that are on their way to being starved (i.e. never sponsored, or rotting in the queue).

The script attached to this post is written in Python and uses the Debbugs SOAP interface to get the currently open RFS bugs and calculate their dust and age.

The dust factor is calculated as the absolute value of the difference between today and the bug's log_modified date, i.e. the number of days since the bug was last touched.

Later I would like to create full-blown stats for the RFS queue, taking into account its whole history (i.e. 2012-01-01 until now), check its health, and calculate the MTTGS (mean time to get sponsored).
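Once the (opened, closed) dates are available per bug, MTTGS itself is a short computation. A sketch of what I have in mind (the function name and input format are my own, not part of the script below):

```python
from datetime import date


def mttgs(bugs):
    """Mean time to get sponsored, in days.

    bugs: iterable of (opened, closed) date pairs for sponsored RFS bugs.
    """
    spans = [(closed - opened).days for opened, closed in bugs]
    return sum(spans) / float(len(spans)) if spans else 0.0
```

For example, two bugs sponsored after 10 and 20 days give an MTTGS of 15 days.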

The list looks more or less like this:

Age  Dust Number  Title
37   0    757966  RFS: lutris/0.3.5-1 [ITP]
1    0    762015  RFS: s3fs-fuse/1.78-1 [ITP #601789] -- FUSE-based file system backed by Amazon S3
81   0    753110  RFS: mrrescue/1.02c-1 [ITP]
456  0    712787  RFS: distkeys/1.0-1 [ITP] -- distribute SSH keys
120  1    748878  RFS: mwc/1.7.2-1 [ITP] -- Powerful website-tracking tool
1    1    762012  RFS: fadecut/0.1.4-1
3    1    761687  RFS: abraca/0.8.0+dfsg-1 -- Simple and powerful graphical client for XMMS2
35   2    758163  RFS: kcm-ufw/0.4.3-1 ITP
3    2    761636  RFS: raceintospace/1.1+dfsg1-1 [ITP]

The script can be found below; it uses SOAPpy (Python 2 only, unfortunately).


import SOAPpy
from datetime import date

# Debbugs SOAP endpoint (the URL was missing from the original post; this is
# the standard Debbugs SOAP interface).
url = 'https://bugs.debian.org/cgi-bin/soap.cgi'
namespace = 'Debbugs/SOAP'
server = SOAPpy.SOAPProxy(url, namespace)


class RFS(object):

    def __init__(self, obj):
        self._obj = obj
        self._last_modified = date.fromtimestamp(obj.log_modified)
        self._date = date.fromtimestamp(obj.date)
        if self._obj.pending != 'done':
            self._pending = "pending"
            self._dust = abs(date.today() - self._last_modified).days
        else:
            self._pending = "done"
            self._dust = abs(self._date - self._last_modified).days
        today = date.today()
        self._age = abs(today - self._date).days

    def status(self):
        return self._pending

    def date(self):
        return self._date

    def last_modified(self):
        return self._last_modified

    def subject(self):
        return self._obj.subject

    def bug_number(self):
        return self._obj.bug_num

    def age(self):
        return self._age

    def dust(self):
        return self._dust

    def __str__(self):
        return "{} subject: {} age:{} dust:{}".format(
            self._obj.bug_num, self._obj.subject, self._age, self._dust)


if __name__ == "__main__":

    bugi = server.get_bugs("package", "sponsorship-requests", "status", "open")
    buglist = [RFS(b.value) for b in server.get_status(bugi).item]
    buglist_sorted_by_dust = sorted(buglist, key=lambda x: x.dust(), reverse=False)
    print("Age  Dust Number  Title")
    for i in buglist_sorted_by_dust:
        print("{:<4} {:<4} {:<7} {}".format(i.age(), i.dust(),
                                            i.bug_number(), i.subject()))

Jaldhar Vyas: Scotland: Vote NO

18 September, 2014 - 12:21
        _  __<;
      </_/ _/__   
     /> >  7   )  
     ~;</7    /   
     /> /   _*<---- Perth    
     ~ </7  7~\_  
        </7     \ 
         /_ _ _ | 

If you don't, the UK will have to rename itself the K. And that's just silly.

Also vote yes on whether Alex Trebek should keep his mustache.

Steve Kemp: If this goes well I have a new blog engine

18 September, 2014 - 01:23

Assuming this post shows up, I'll have successfully migrated from Chronicle to a temporary replacement.

Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)

Unfortunately, though, there is a problem with chronicle: it suffers from a performance problem which has gradually become more and more vexing as the number of entries I have has grown.

When chronicle runs:

  • It reads each post into a complex data-structure.
  • Then it walks this multiple times.
  • Finally it outputs a whole bunch of posts.

In the general case you rebuild a blog because you've made a new entry, or received a new comment. There is some code which tries to use memcached for caching, but in general chronicle just isn't fast, and it is certainly memory-bound if you have a couple of thousand entries.

Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.

So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that for writing your output pages? Instead of the complex data-structure in RAM and the need to parse a zillion files, you'd have a standard/simple SQL structure you could use to build a tag-cloud, an archive, etc. If you store the contents of the parsed blog, along with the mtime of the source file, you can update it if the entry changes in the future, as I sometimes make typos which I only spot once I've run make steve on my blog sources.
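A minimal sketch of that mtime-keyed cache (the table layout and function names are my own invention, not the actual chronicle2 schema): parse a post only when its file is newer than the stored copy.

```python
import os
import sqlite3


def ensure_schema(db):
    db.execute("""CREATE TABLE IF NOT EXISTS posts (
                    path  TEXT PRIMARY KEY,
                    mtime REAL,
                    title TEXT,
                    body  TEXT)""")


def update_post(db, path, parse):
    """Re-parse a post only if its source file is newer than the cached copy.

    parse: callable taking a path and returning (title, body).
    Returns True if the post was (re)parsed, False if the cache was fresh.
    """
    mtime = os.path.getmtime(path)
    row = db.execute("SELECT mtime FROM posts WHERE path = ?", (path,)).fetchone()
    if row is not None and row[0] >= mtime:
        return False
    title, body = parse(path)
    db.execute("INSERT OR REPLACE INTO posts VALUES (?, ?, ?, ?)",
               (path, mtime, title, body))
    return True
```

Building the tag cloud or archive then becomes a plain SELECT over the posts table instead of another walk of the in-RAM structure.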

Not surprisingly the newer code is significantly faster if you have 2000+ posts. If you've imported the posts into SQLite the most recent entries are updated in 3 seconds. If you're starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch the build time is still less than 10 seconds.

The downside is that I've removed some features, though obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously, there is only a single theme, which is the one used on this site.

In conclusion: last night I wrote something that is a stepping stone between the current chronicle and chronicle2, which will appear in due course.

PS. This entry was written in markdown, just because I wanted to be sure it worked.


Creative Commons License: the copyright of each article belongs to its author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.