Planet Debian

Planet Debian - http://planet.debian.org/

Richard Hartmann: Release Critical Bug report for Week 37

13 September, 2014 - 04:08

Remember, remember; the fifth of November.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1422
    • Affecting Jessie: 410 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 355 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 52 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 26 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 277 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 55 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 55 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

Jonathan McDowell: Back from DebConf 14

12 September, 2014 - 22:00

I previously forgot to mention that I was planning to attend DebConf14, having missed DebConf13. This year the conference was held in Portland, OR. This is a city I've been to many times before, and enjoy, but I hadn't spent any time wandering around its city centre as a pedestrian. I have to say I really prefer DebConfs that are held in the middle of a city. It always seems a bit of a shame to travel some distance to somewhere new and spend all the time there in a conference venue. Plus these days I have the added lure of going out and playing Ingress in a new location. DebConf14 didn't disappoint in these respects; the location was super easy to get to from the airport via public transportation, all of the evening social events were within reasonable walking distance (I'll tend to default to walking when possible) and the talk venue/accommodation were close to each other and various eating + drinking options. Throw in the fact that Portland managed to produce some excellent weather (modulo my Ingress session on the last Saturday morning, when it rained on me) and it's impossible to fault the physicalities of DebConf this year.

This year the conference format was a bit different; previous years have had a week-long DebCamp before the week of the conference itself. This year went for a 9 day talk schedule (Saturday -> Sunday) with various gaps of hacking time interspersed. I've found it hard to justify a full two weeks away in the past, so this setup worked a lot better from my viewpoint. Also I rarely go to DebConf with a predetermined list of things to do; the stuff I work on naturally falls out of talks I attend and informal discussions I have. Having hack time throughout the conference helped me avoid feeling I was having to trade off hacking vs talks.

Naturally enough a lot of my involvement at DebConf was around OpenPGP. Gunnar and I spent a fair bit of time getting Daniel up to speed with the keyring-maint team (Gunnar more than I, I'll confess). We finally set a hard timeframe for freeing Debian of older 1024 bit keys. I was introduced to the Gnuk, which is a particularly interesting piece of open specification hardware with a completely Free software stack on top of it that implements the OpenPGP smartcard spec. Currently it's limited to 2K keys but it's hoped that 4K support can be added (and I ended up spending a couple of hours after the closing talk hacking on the source and seeing how much needs to change for 4K support, aided by the very patient Niibe). These are the sort of things that really benefit from the face time that DebConf offers to the Debian project. I've said it before, but I think it's worth saying again: Debian is a bit like a huge telecommuting organization and it's my opinion that any such organization should try and ensure its members actually spend some time together on a regular basis. It improves the ability to work remotely a hell of a lot if you can actually put a face to the entity you're emailing / IRCing and have some sort of idea where they're coming from because you've spent some time with them, whether that's in talks or over dinner or just casual hallway chats.

For once I also found myself considering alternative employment while at DebConf and it was incredibly useful to be able to have various conversations with both old friends and people who were there with an eye on recruitment. Thanks to all those whose ears I bent about the subject (and more on the outcome in a future post). Thank you also to the many people involved with the organization of DebConf; I've been on the periphery a few times over the years and it's given me a glimpse into the amount of hard work all of the volunteers (be they global team, local organizing team, video team or just random volunteers) put into making DebConf one of my must-attend yearly conferences. If you're at all involved in Debian and haven't attended I strongly urge you to do so - I'll see you all next year at DebConf15 in Heidelberg!

Dariusz Dwornikowski: forwarding messages with attachments in mutt

12 September, 2014 - 20:30

This is a pain for every mutt user. I do not know why this solution is so hard to find. Just add these two lines to your .muttrc.

set mime_forward
set mime_forward_rest=yes

This will forward an email with all the attachments, no scripts needed, no fancy tagging or reediting.

Elena 'valhalla' Grandi: Sometimes apologies are the worst part

12 September, 2014 - 19:53
I am sick of hearing apologies directed to me just after a dirty joke.

Usually, I don't mind dirty jokes in themselves: I *do* have a dirty mind and make my share of those, or laugh for the ones my friends with even dirtier minds make.

There are of course times and places where they aren't appropriate, but I'd rather live in a world where they are the exception (although a common one) rather than the norm.

Then there is the kind of dirty jokes strongly based on the assumptions that women are sexual objects, a nuisance that must be tolerated because of sex or both: of course I don't really find them fun, but that's probably because I'm a nuisance and I don't even give them reason to tolerate me. :)

Even worse than those, however, is being singled out as the only woman in the group with an empty apology for the joke (often of the latter kind): I know you are not really sorry, you probably only want to show that your parents taught you not to speak that way in front of women, but since I'm intruding in a male-only group I have to accept that you are going to talk as appropriate for it.

P.S. no, the episode that triggered this post didn't happen in a Debian environment, it happened in Italy, in a tech-related environment (and not in a wanking club for penis owners, where they would have every right to consider me an intruder).

Dariusz Dwornikowski: profanity and libstrophe status in Debian

12 September, 2014 - 15:10

profanity is a great console-based XMPP client written in ncurses and C by James Booth. The code is of great quality, and upstream is super collaborative and willing, so packaging should have been pretty straightforward. This post will show that this was not the case here.

The first obstacle was that profanity depended on libstrophe, an XMPP library, which was not in Debian. As it turned out, libstrophe's upstream was not responsive, so any changes that were needed to prepare libstrophe for high-quality packaging could not be made.

  1. First of all libstrophe's build system (automake and friends) built only a static library.
  2. The second problem was that libstrophe did not tag releases on GitHub, which was needed to make the Debian watch file work (see the watch file sketch after this list).
  3. A third, smaller problem was the presence of a debian/ directory in upstream's source. It can be neglected most of the time, since you can tell git-import-orig to delete it.
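
For reference, a debian/watch file that follows GitHub tags typically looks something like the sketch below (the project path and version regex here are assumptions and may need adjusting for the real repository):

# debian/watch - sketch only
version=3
https://github.com/metajack/libstrophe/tags .*/v?(\d\S*)\.tar\.gz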

To solve those three problems I created a pull request fixing the build system to also build a shared library, deleting the debian/ directory, and politely asking for releases to be tagged. You can see my pull request here, dated April 26th. There was no answer from libstrophe's upstream, but I had some support from profanity's developers and other users wanting to make those changes. Finally metajack (libstrophe's upstream) gave us rights to the repo and we could merge the pull request on August 6th. The lesson learned - be patient and know autotools (a great tutorial is here).

With profanity there were fewer changes to make. The most important one was that it linked against OpenSSL, and due to the license incompatibility with the GPL it could not go into Debian as it was. Fortunately upstream added the OpenSSL exception, and profanity could finally be packaged.

Now both profanity and libstrophe are in the NEW queue and hopefully they will be accepted by the ftp masters. When they are, there will be plenty to do with them in the future: upstream has closed some bugs, and new upstream versions have been tagged.

John Goerzen: The Thrill and Stress of Too Many Hobbies

12 September, 2014 - 11:10

Today, 4PM. Jacob and Oliver excitedly peer at the box in our kitchen – a really big box, taller than them. Inside is the first model airplane I’d ever purchased. The three of us hunkered down on the kitchen floor, opened the box, unpacked the parts, examined the controller, and found the manual with cryptic assembly directions. Oliver turned some screws while Jacob checked out the levers on the controllers. Then they both left for a bit to play with their toy buses.

A little while later, the three of us went outside. It was too windy to fly. I had never flown an RC plane before — only RC quadcopters (much easier to fly), and some practice time on an RC simulator. But the excitement was too much. So out we went, and the plane took off perfectly, climbed, flew over the trees, and circled above our heads at my command. I even managed a good landing in the wind, despite about 5 aborted attempts due to coming in too high, wrong angle, too fast, or last-minute gusts of wind throwing everything off. I am not sure how I pulled that all off on my first flight, but somehow I did! It was thrilling!

I’ve had a lot of hobbies in my life. Computers have run through many of them; I learned Pascal (a programming language) at about the same time I learned cursive handwriting and started with C at around age 10. It was all fun. I’ve been a Debian developer for some 18 years now, and have written a lot of code, and even books about code, over the years.

Photography, music, literature, history, philosophy, and theology have been interests for quite some time as well. In the last few years, I’ve picked up amateur radio, model aircraft, etc. And last month, Laura led me into Ada’s Technical Books during our visit to Seattle, resulting in me getting interested in Arduino. (The boys and I have already built a light-activated crossing gate for their HO-gauge model trains, and Jacob can now say he’s edited a few characters of C!)

Sometimes I find ways to merge hobbies; I’ve set up all sorts of amateur radio systems on Linux, take aerial photographs, and set up systems to stream music in my house.

But I also have a lot less time for hobbies overall than I once did; other things in life, such as my children, are more important. Some of the code I once worked on actively I no longer use or maintain, and I feel guilty about that when people send bug reports that I have no interest in fixing anymore.

Sometimes I feel a need to cut down, and perhaps have; and then, I get an interest in RC aircraft and find an airplane that is great for a beginner and fairly inexpensive.

Perhaps it is the curse of being a curious person living in an interesting world. Do any of the rest of you have a large number of hobbies? How do you feel about that?

Stefano Zacchiroli: debsources bugs and easy hacks

12 September, 2014 - 01:31

My ongoing quest for lowering the barrier for contributing to Debsources continues.
In this chapter:

  • I've migrated bug reports from the previous ad-hoc text file in the Git repo to the Debian BTS, under the umbrella of the qa.debian.org pseudo-package.
    From now on this is the recommended (and documented) way of reporting bugs against http://sources.debian.net (a minimal reportbug sketch follows this list).

    Look ma, it also has one of those newfangled short URLs: http://deb.li/debsrcbugs!

  • While at it, I've also properly tagged the current easy hacks on Debsources using the gift tag. There are definitely opportunities for new contributors there, and there might be more if you submit your own Debsources' pet peeves to the BTS.

    Again, mandatory mnemonic/short URL: http://deb.li/debsrceasy.
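
If you have never filed a bug against a pseudo-package before, a minimal sketch with reportbug could look like this (the exact subject convention is an assumption, so check the documentation linked above):

# bugs about sources.debian.net go against the qa.debian.org pseudo-package
reportbug qa.debian.org
# mention "debsources" or sources.debian.net in the subject/body so the
# report reaches the right people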

What's your excuse for not contributing to Debsources, again?

Dirk Eddelbuettel: pkgKitten 0.1.2: Still creating R Packages that purr

11 September, 2014 - 22:41

A brown-bag release 0.1.2 of pkgKitten is now on CRAN, following yesterday's 0.1.1 upload.

Next time I'll try to remember that when I have parameters name and path, it won't work so well to call them as path and name ...

Changes in version 0.1.2 (2014-09-11)
  • Brown-bag fix of calling the new helper function with the correct order of arguments.

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Enrico Zini: suspend-i-really-mean-it

11 September, 2014 - 20:32
Laptop, I demand that you suspend!

Dear Lazyweb,

Sometimes some application prevents suspend on my laptop. I want to disable that feature: how?

I understand that there may exist some people who like that feature. I, on the other hand, consider a scenario like this inconceivable:

  1. I'm on a plane working with my laptop, the captain announces preparations for landing, so I quickly hit the suspend button (or close the lid) on my laptop and stow it away.
  2. One connecting flight later, I pick up my backpack, I feel it unusually hot and realise that my laptop has been on all along, and is now dead from either running out of battery or thermal protection.
  3. I think things that, if spoken aloud in front of a pentacle, might invoke major lovecraftian horrors.

I do not want this scenario to ever be possible. I want my suspend button to suspend the laptop no matter what. If a process does not agree, I'm fine with suspending it anyway, or killing it.

If I want my laptop to suspend, I generally have a good enough real-world reason for it, and I cannot conceive that any software could ever be allowed to override my command.

How do I change this? I don't know if I should look into systemd, upowerd, pm-utils, the kernel, the display manager or something else entirely. I worry that I cannot even figure where to start looking for a solution.

This happened to me multiple times already, and I consider it ridiculous. I know that it can cause me data loss. I know that it can cause me serious trouble in case I was relying on having some battery or state left at my arrival. I know that depending on what is in my backpack, this could also be physically dangerous.

So, what knob do I tweak for this? How do I make suspend reliable?
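
(One candidate knob, assuming systemd-logind is what handles the suspend key and lid switch rather than the desktop environment, would be the IgnoreInhibited options in /etc/systemd/logind.conf - a sketch, not a tested recipe:)

# /etc/systemd/logind.conf
[Login]
HandleSuspendKey=suspend
HandleLidSwitch=suspend
# ignore applications' inhibitor locks when the key is pressed / the lid is closed
SuspendKeyIgnoreInhibited=yes
LidSwitchIgnoreInhibited=yes
# then: systemctl restart systemd-logind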

Sylvestre Ledru: Rebuild of Debian using Clang 3.5.0

11 September, 2014 - 20:17

Clang 3.5.0 has just been released. A new rebuild has been done to highlight the progress in getting Debian built with clang.

tl;dr: Great progress. We decreased from 9.5% to 5.7% of failures. Full results are available on http://clang.debian.net

At the time of the rebuild with 3.4.2, we had 2040 packages failing to build with clang. With 3.5.0, this dropped to 1261 packages.

Fixes

With Arthur Marble and Alexander Ovchinnikov, both GSoC students, we worked on various ways to decrease the number of errors.

Upstream fixes

First, the most obvious way, we fixed programming bugs/mistakes in upstream sources. Basically, we took categories of failure and fixed issues one after the other. We started with simple bugs like 'Wrong main declaration', 'non-void function should return a value' or 'Void function should not return a value'.
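
To make those categories concrete, here is a tiny made-up C file that gcc traditionally accepts with warnings but clang rejects with errors along exactly these lines:

/* hypothetical example of the 'simple' bug categories listed above */

void usage(void)
{
    return 1;        /* clang: error: void function 'usage' should not return a value */
}

int get_flag(int flag)
{
    if (!flag)
        return;      /* clang: error: non-void function 'get_flag' should return a value */
    return flag;
}

void main(void)      /* clang: error: 'main' must return 'int' (a wrong main declaration) */
{
    usage();
    get_flag(1);
}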

They are trivial to fix. We continued with harder fixes like 'Undefined reference' or 'Variable length array for a non-POD (plain old data) element'.

So, besides these ones, we worked on several other categories of errors.


In total, we reported 295 bugs with patches. 85 of them have been fixed (meaning that the Debian maintainer uploaded a new version with the fix).

In parallel, I think that the switch by FreeBSD and Mac OS X to Clang also helped to fix various issues by upstreams.

Hacking in clang

As a parallel approach, we started to implement a suggestion from Linus Torvalds and a few others. Instead of trying to fix everything upstream, we tried, where we could, to update clang to improve its gcc compatibility.

gcc has many flags to disable or enable optimizations. Some of them are legacy, others make no sense in clang, etc. Instead of failing in clang with an error, we created a new category of warnings (showing "optimization flag '%0' is not supported") and moved all relevant flags into it. Some examples: r212805, r213365, r214906 or r214907.
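
In practice the change means something along these lines (the flag name is invented purely for illustration, and the output is paraphrased from the warning text quoted above rather than copied from a real build log):

$ clang -c -fsome-gcc-only-optimization foo.c
clang: warning: optimization flag '-fsome-gcc-only-optimization' is not supported

whereas previously the unknown flag was a hard error and the package build aborted.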

We also updated clang to silence some useless arguments like -finput_charset=UTF-8 (r212110), clang being UTF-8 compliant.

Finally, we worked on the forwarding of linker flags. Clang and gcc have very different behavior here: when gcc does not know an argument, it forwards the argument to the linker. Clang, in this case, rejects the argument and fails with an error. In clang, we have to explicitly declare which arguments are going to be transferred to the linker. Of course, the correct way to pass arguments to the linker is to use -Xlinker or -Wl, but the Debian rebuild proved that these shortcuts are used. Two of these arguments are now forwarded (see the sketch after this list):

  • -z keyword - r213198
  • -u Force symbol to be entered in the output file as an undefined symbol - r211756. This one fixed most of the haskell build failures. It fixed the most common issue that we had (701 occurrences, but this does not mean that all these packages build fine now; some haskell-based packages are failing later in the process).
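
As a rough sketch, both invocations below should now be accepted by clang, even though only the first one is the "proper" spelling (the file and symbol names are made up):

# explicit forwarding to the linker - always worked
clang foo.o -Wl,-z,relro -Wl,-u,some_symbol -o foo
# gcc-style shortcuts - now forwarded to the linker by clang as well
clang foo.o -z relro -u some_symbol -o foo
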
New errors

Just like in other releases, new warnings were added in clang. With (bad) usage of -Werror by upstream software, this causes new build failures.

I also took the opportunity to add some further categorizations in the list of errors.

Next steps

With the Debile project being close to ready thanks to Clément Schreiner's GSoC, we will soon have an automatic and transparent way to rebuild packages using clang.

Conclusion

As stated, we can see a huge drop in terms of the number of failures over time.

Hopefully, with Clang getting better and better, and with more and more projects adopting it as the default compiler or as a base for plugin/extension development, this percentage will continue to decrease.
Having some kind of release goal with clang for Jessie+1 can now be considered potentially reachable.

Want to help?

There are several things which can be done to help:

  • Point me to common error patterns in the "Not categorized" list of errors so that new categories can be created
  • Report and fix packages
  • As an upstream, integrate clang as part of your continuous integration system
  • Hack on cqa-scanlogs, the error detection tool to detect error patterns (example: Undetected error). This tool is used also for the regular rebuilds of the archive.
  • Improve clang.debian.net website
Acknowledgments

Thanks to David Suarez for the rebuilds of the archive, Arthur Marble and Alexander Ovchinnikov for their GSoC works and Nicolas Sévelin-Radiguet for the few fixes.

Original post blogged on b2evolution.

Matthias Klumpp: Listaller: Back to the future!

11 September, 2014 - 16:14

It is time for another report on Listaller, the cross-distro 3rd-party package installer, which has now been in development for – depending on how you count – 5-6 years. This will become a longer post, so you might want to grab some coffee or tea.

The original idea

The Listaller project was initially started with the goal to make application deployment on Linux distributions as simple as possible, by providing a unified package installation format and tools which make building apps for multiple distributions easier and deployment of updates simple. The key ideas were:

  • Seamless integration of all installation steps into the system – users shouldn’t care about the origin of their application, they just handle all installed apps with the same tool and update all apps with the same interface they use for updating the system.
  • Out-of-the-box sandboxing for all 3rd-party apps
  • Easy signing and key-validation for Listaller packages
  • Simple creation of updates for developers
  • Resource-sharing: It should always be clear which application uses which library, duplicates should be avoided. The distribution-provided software should take priority, since it is often well-maintained and receives security updates.
The current state

The current release of Listaller handles all of this with a plugin for PackageKit, the cross-distro package-management abstraction layer. It hooks into PackageKit and reads information passing through to the native distributor backend, and if it encounters Listaller software, it handles it appropriately. It can also inject update information. This results in all Listaller software being shown in any PackageKit frontends, and people can work with it just like if the packages were native packages. Listaller package installations are controlled by a machine policy, so the administrator can decide that e.g. only packages from a trusted source (= GPG signature in trusted database) can be installed. Dependencies can be pulled from the distributor’s repositories, or optionally from external sources, like the PyPI.

This sounds good on paper, but the current implementation has various problems.

The issues

The current Listaller approach has some problems. The biggest one lies in the future: Soon, there will be no PackageKit plugins anymore! PackageKit 1.0 will remove support for them, because they appear to be a major source of crashes; even the in-tree plugins cause problems. Also, the PackageKit service itself is currently being trimmed of unneeded features and less-used code. These changes in PackageKit are great and needed for the project (and I support these efforts), but they cause a pretty huge problem for Listaller: The project relies on the PackageKit plugin – if used without it, you lose the system-integration part, which is one of the key concepts of Listaller, and a primary goal.

But this issue is not the only one. There are more. One huge problem for Listaller is dependency-solving: It needs to know where to get software from in case it isn’t installed already. And that has to be done in a cross-distributional way. This is an incredibly complex task, and Listaller contains lots of workarounds for various quirks. It contains so many hacks for distro-specific stuff that it became really hard to understand. The Listaller dependency model also became very complex, because it tried to handle many corner-cases. This is bad, of course. But the workarounds weren’t added for fun, but because it was assumed to be easier than fixing the root cause, which would have required collaboration between distributors and some changes on the stack, which seemed unlikely to happen at the time the code was written.

The systemd effort

Another thing which affects Listaller is the latest push from the systemd team to allow cross-distro 3rd-party installations to happen. I definitely recommend reading the linked blogpost from Lennart, if you have some spare time! The identified problems are the same as for Listaller, but the solution they propose is completely different, and about three orders of magnitude more invasive than whatever the Listaller project had in mind (I make these numbers up, so don’t ask!). There are also a few issues I see with Lennart's approach, which I will probably go into in detail in another blogpost (e.g. it requires multiple copies of a library lying around, where one version might have a security vulnerability and another one doesn’t – it’s hard to ensure everything is up to date and secure that way, even if you have a top-notch sandbox). I have great respect for the systemd crew and especially Lennart, and I hope they succeed with their efforts. However, I also think Listaller can achieve similar things with a less invasive solution, at least for 3rd-party app installations (Listaller is one of the partial-fix solutions with strict focus, so not a direct competitor to the holistic systemd approach. Both solutions could happily live together.)

A step into the future

Some might have guessed it already: There are some bigger changes coming to Listaller! The most important one is that there will be no Listaller anymore, at least not in its old form.

Since the current code relies heavily on the PackageKit plugin, and contains some ugly workarounds, it doesn’t make much sense to continue working on it.

Instead, I started the Listaller.NEXT project, which is a rewrite of Listaller in C. There are some goals for the rewrite:

  • No stupid hacks and workarounds: We will not add any workaround. If there is a problem, we will fix it at its source, even if that might be more invasive.
  • Trimmed down project: The new incarnation of Listaller will only support installations of statically linked software at the beginning. We will start with a very small, robust core, and then add more features (like dependency-solving) gradually, but only if they are useful. There will be no feature-creep like in the previous version.
  • Faster development cycle: Releases will happen much faster, not only two or three times a year
  • Integration: Since there is no PackageKit plugin anymore, but integration is still one of Listaller’s key concepts, we will integrate Listaller into downstream tools, ranging from Apper to GNOME-Software. Richard Hughes will help with the integration and user interfaces, so Listaller applications get displayed properly.
  • AppStream-first: AppStream is the ultimate tool for Listaller to detect dependencies. With the 0.6 release, the Listaller component-concept was merged into it, which makes it a very powerful and non-hackish solution for dependency-detection. We will advance the use of its metadata, and probably use it exclusively, which would restrict Listaller to only work properly on distributions which ship AppStream metadata.
  • No desktop-only focus: The previous Listaller was focused only on desktop GUI apps. The new version will be developed with a much larger target audience in mind, including server deployments (“Can I use it to deploy my server app?” is one of the most frequently asked questions about Listaller – with the new version, the answer is yes)
  • We will continue to improve the static-linking and cross-distro development toolchain (libuild, with ligcc, lig++ and binreloc), to make building portable apps easier.

I made a last release of the 0.5.x series of Listaller, to work with PackageKit 0.9.x – the future lies in the C port.

If you are using Listaller (and I know of people who do, for example some deploy statically-linked stuff on internal test-setups with it), stay tuned. The packaging format will stay mostly compatible with the current version, so you will not see many changes there (the plan is to freeze it very soon, so no backwards-incompatible changes are made anymore). The 0.5.x series will receive critical bugfixes if necessary.

Help needed!

As always, there is help needed! Writing C is not that difficult. But user feedback is welcome as well, in case you have an idea. The new code will be hosted on GitHub in the new listaller-next branch (currently not that much to find there). Long-term, we will completely migrate away from Launchpad.

You can expect more blogposts about the Listaller concepts and progress in the next months (as soon as I am done with some AppStream-related things, which take priority).

Steve Kemp: A small email utility and other updates.

11 September, 2014 - 15:09

Last night I was looking for an image I knew a model had mailed me a few months ago, as we were talking about rescheduling a shoot at the weekend. I couldn't find it, even with my awesome mail client and filing system.

With some free time I figured I could write a little utility to dump all attachments from email folders, and find it that way.

It did cross my mind that there is the simple mail-utility for dumping headers, etc, called formail, which is distributed alongside procmail, but it doesn't handle attachments ..

I was tempted to write a general purpose script to dump attachments, email header values, etc, etc but given the lack of time I merely solved my own problem.

I suspect there is room for a "mail utilities" package, similar to Joey's "moreutils" and my "". However I note that there is a GNU Mailutils which does things differently than I'd expect - i.e. it contains a POP3 server.

Still if you want to dump attachments from emails, have GMIME installed, and want to filter by attachment-name, or MIME-type, you might look at my trivial attachment-dump program.

Related to that I spent some time last night updating my photography site, so the animals & pets section has updated images at least.

During the course of that I found a bug in my static-site generator, templer, which stopped it from automatically populating image height/widths when called in a glob:

Title: Pets & Animals
Images: file_glob( "*.jpg" )
---

This is the page body, it now has access to a variable called 'images'
which is a HTML::Template loop-structure containing name/height/width/etc
for each image in the current directory.

That should now be resolved, and life should once again be good.

Dirk Eddelbuettel: pkgKitten 0.1.1: Still creating R Packages that purr

11 September, 2014 - 09:12

A maintenance release 0.1.1 of pkgKitten is now on CRAN.

It has only one small change: the function playWithPerPackageHelpPage() was factored out of the main function kitten() as I happened to be needing something just like playWithPerPackageHelpPage() to make packages created by the Rcpp function Rcpp.package.skeleton() a little nicer.

We also added a NEWS.Rd file which restates major release features. As it is so short, we include it in its entirety.

Changes in version 0.1.1 (2014-09-09)
  • New (exported) function playWithPerPackageHelpPage() which lets other packages create a non-complaint-generating package help page

Changes in version 0.1.0 (2014-06-13)
  • Initial public version and CRAN upload

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Lucas Nussbaum: Will the packages you rely on be part of Debian Jessie?

11 September, 2014 - 03:28

The start of the jessie freeze is quickly approaching, so now is a good time to ensure that packages you rely on will be part of the upcoming release. Thanks to automated removals, the number of release critical bugs has been kept low, but this was achieved by removing many packages from jessie: 841 packages from unstable are not part of jessie, and won’t be part of the release if things don’t change.

It is actually simple to check if you have packages installed locally that are part of those 841 packages:

  1. apt-get install how-can-i-help (available in backports if you don’t use testing or unstable; see the example after this list)
  2. how-can-i-help --old
  3. Look at packages listed under Packages removed from Debian ‘testing’ and Packages going to be removed from Debian ‘testing’
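
Putting it together, a minimal session might look like this (the -t target assumes you are on wheezy with backports enabled; on testing or unstable a plain install is enough):

# on testing/unstable
apt-get install how-can-i-help
# on stable (wheezy), with backports enabled
apt-get install -t wheezy-backports how-can-i-help
# list packages removed from, or about to be removed from, testing
how-can-i-help --old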

Then, please fix all the bugs :-) Seriously, not all RC bugs are hard to fix. A good starting point to understand why a package is not part of jessie is tracker.d.o.

On my laptop, the two packages that are not part of jessie are the geeqie image viewer (which looks likely to be fixed in time), and josm, the OpenStreetMap editor, due to three RC bugs. It seems much harder to fix… If you fix it in time for jessie, I’ll offer you a $drink!

Raphaël Hertzog: Freexian’s first report about Debian Long Term Support

10 September, 2014 - 19:30

When we set up Freexian’s offer to bring together funding from multiple companies in order to sponsor the work of multiple developers on Debian LTS, one of the rules that I imposed is that all paid contributors must provide a public monthly report of their paid work.

While the LTS project officially started in June, the first month where contributors were actually paid has been July. Freexian sponsored Thorsten Alteholz and Holger Levsen for 10.5 hours each in July and for 16.5 hours each in August. Here are their reports:

It’s worth noting that Freexian sponsored Holger’s work to fix the security tracker to support squeeze-lts. It’s my belief that using the money of our sponsors to make it easier for everybody to contribute to Debian LTS is money well spent.

As evidenced by the progress bar on Freexian’s offer page, we have not yet reached our minimal goal of funding the equivalent of a half-time position. And it shows in the results: the dla-needed.txt file still lists around 30 open issues. This is slightly better than the state two months ago, but we can improve a lot on the average time to push out a security update…

To have an idea of the relative importance of the contributions of the paid developers, I counted the number of uploads made by Thorsten and Holger since July: of 40 updates, they took care of 19 of them, so about half.

I also looked at the other contributors: Raphaël Geissert stands out with 9 updates (I believe that he is contracted by Électricité de France for doing this) and most of the other contributors look like regular Debian maintainers taking care of their own packages (Paul Gevers with cacti, Christoph Berg with postgresql, Peter Palfrader with tor, Didier Raboud with cups, Kurt Roeckx with openssl, Balint Reczey with wireshark) except Matt Palmer and Luciano Bello who (likely) are benevolent members of the LTS team.

There are multiple things to learn here:

  1. Paid contributors already handle almost 70% of the updates. Counting only on volunteers would not have worked.
  2. Quite a few companies that promised help (and got mentioned in the press release) have not delivered the promised help yet (neither through Freexian nor directly).

Last but not least, this project wouldn’t exist without the support of multiple companies and organizations. Many thanks to them:

Petter Reinholdtsen: Good bye subkeys.pgp.net, welcome pool.sks-keyservers.net

10 September, 2014 - 19:10

Yesterday, I had the pleasure of attending a talk with the Norwegian Unix User Group about the OpenPGP keyserver pool sks-keyservers.net, and was very happy to learn that there is a large set of publicly available key servers to use when looking for people's public keys. So far I have used subkeys.pgp.net, and sometimes wwwkeys.nl.pgp.net when the former was misbehaving, but those days are over. The servers I have used up until yesterday have been slow and sometimes unavailable. I hope those problems are gone now.

Behind the round robin DNS entry of the sks-keyservers.net service there is a pool of more than 100 keyservers which are checked every day to ensure they are well connected and up to date. It must be better than what I have used so far. :)

Yesterday's speaker told me that the service is the default keyserver in GnuPG's default configuration, but this does not seem to be used in Debian. Perhaps it should be?

Anyway, I've updated my ~/.gnupg/options file to now include this line:

keyserver pool.sks-keyservers.net

With GnuPG version 2 one can also locate the keyserver using SRV entries in DNS. Just for fun, I did just that at work, so now every user of GnuPG at the University of Oslo should find an OpenPGP keyserver automatically should they need it:

% host -t srv _pgpkey-http._tcp.uio.no
_pgpkey-http._tcp.uio.no has SRV record 0 100 11371 pool.sks-keyservers.net.
%
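
The zone-file entry behind that lookup presumably looks something like this (a sketch reconstructed from the output above, not copied from the actual uio.no zone):

_pgpkey-http._tcp.uio.no.  IN  SRV  0 100 11371 pool.sks-keyservers.net.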

Now if only the HKP lookup protocol supported finding signature paths, I would be very happy. It can look up a given key or search for a user ID, but what I normally want is to find a trust path from my key to another key. Given a user ID or key ID, I would like to find (and download) the keys representing a signature path from my key to the key in question, to be able to get a trust path between the two keys. This is, as far as I can tell, not possible today. Perhaps something for a future version of the protocol?

Ian Donnelly: New Release: Elektra 0.8.8

10 September, 2014 - 17:19

Hi Everybody!

Great news! I am very happy to announce that we have reached a new milestone for Elektra and released a new version, 0.8.8! This release comes right on the tail of the 0.8.7 release and it might just be our biggest release yet! We already have a great article covering all the changes from the previous release on our News documentation on GitHub. I just wanted to focus on a few of those changes on this blog, especially the ones that pertain to my Google Summer of Code Project.

First of all, Felix has worked to greatly improve the ini plug-in. This is the plug-in I used in my technical demo for mounting Samba’s smb.conf file. It now works even better with complex ini files such as smb.conf which means the automatic merging of files like smb.conf is even better now! That really goes to show one of the greatest strengths of the design of Elektra. Just by improving plug-ins, all the functions of Elektra can improve as well. The merge code was not changed in this release, yet because of an updated plug-in, the merge has improved.

Secondly, there have been some good improvements to the kdb command-line tool. Many of these improvements were used in my technical demo, but now they are actually a part of release (and a little more refined from then). We added a new command called kdb remount which allows a user (or script) to mount a file to the Elektra Key Database using an existing backend. An example of this command is:
kdb remount [new filename] [new path] [existing mountpoint]
This command mounts the new file to the new path in the Key Database using an existing backend. This works with the conffile merging by allowing us to mount the various versions of the conffile without having to specify which backend to use (it will use the same backend as the currently used conffile). Additionally, the umount command was updated to allow users to umount using the current mountpath (allowing commands such as kdb umount system/smb.conf) as opposed to backends. Moreover, we added an option to the kdb import command to specify a merge strategy using the -s option. Now you can import a file into the Key Database and merge the content of that file with the current Keys in the Database.
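
As a small illustration of the syntax described above (the file name and mount paths are made up for the example):

# mount a new version of a conffile, reusing the backend of the existing mountpoint
kdb remount smb.conf.new system/smb.conf.new system/smb.conf
# later, unmount it again by its mount path
kdb umount system/smb.conf.new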

Thirdly, we added some new scripts to Elektra to help with the ucf integration. These scripts were used in my technical demo, but now they are part of the release. elektra-mount and elektra-umount are wrappers for the commands kdb mount and kdb umount respectively. They are designed to be used in Debian package scripts and are adapted for easier use than the generic commands. For instance, running elektra-mount will check to see if a file is already mounted at that location in the Key Database. Similarly, elektra-umount will not produce an error if the file was already unmounted. This is because maintainer scripts can be run multiple times in a row and producing an error will stop dpkg even when it shouldn’t. Additionally, we added a script called elektra-merge which can be used as a method for ucf to merge configuration files. This script acts as a liaison between ucf and Elektra, allowing automatic merges to be done by ucf using Elektra’s merge features in a seamless manner. For information on how these scripts work, check out my tutorial on integrating elektra-merge into a Debian package.

The last bit of news I would like to share is the great progress of the Debian package. Thanks to Pino Toscano, version 0.8.7-4 of Elektra is now available in the Debian testing repo! This is great news as we are now that much closer to replacing the outdated Elektra 0.7 versions that are currently the latest versions of Elektra in the stable repo. Once the 0.8.X versions of Elektra make it to stable it will be much easier for us to keep the latest versions of Elektra in Debian, and that’s key to allowing Elektra to help improve users’ lives.

You can download the release from:
http://www.markus-raab.org/ftp/elektra/releases/elektra-0.8.8.tar.gz

• size: 1644441
• md5sum: fe11c6704b0032bdde2d0c8fa5e1c7e3
• sha1: 16e43c63cd6d62b9fce82cb0a33288c390e39d12
• sha256: ae75873966f4b5b5300ef5e5de5816542af50f35809f602847136a8cb21104e2

And the API-Documentation can be found here:

http://doc.libelektra.org/api/0.8.8/html/

Hope you enjoy the new release!

Sincerely,
Ian S. Donnelly

Steve Kemp: kvm-hosting will be ceasing, soon.

10 September, 2014 - 16:17

Seven years ago I wanted to move on from the small virtual machine I had to a larger one. Looking at the options available, it seemed the best approach was to rent a big host and divide it up into virtual machines myself.

Renting a machine with 8GB of RAM and 500GB of disk space, then dividing that into eighths, would give a decent spec, and assuming that I found enough users to pay for the other slots/shares it would be economically viable too.

After a few weeks I took the plunge, advertised here, and found users.

I had six users:

  • 1/8th for me.
  • 1/8th left empty/idle for the host machine.
  • 6/8th for other users.

There were some niggles, one user seemed to suffer from connectivity problems more than the others, but on the whole the experiment worked out well.

These days, thanks to BigV, Digital Ocean, and all the newcomers, there is less need for this kind of thing, so last December I announced that the service would cease - and gave all current users 1 year of free service to give them time to migrate away.

The service was due to terminate in December, but triggered by some upcoming downtime where our host would have been moved, in the back of a van, from Manchester to York, I've taken the decision to stop it early.

It was a fun experiment, it provided me with low cost hosting (subsidized by the other paying users), and provided some other people with hosting of their own that was set up nicely.

The only outstanding question is what to do with the domain-names? I could let them expire, I could try to sell them, or I could donate them to other people running hosting setups.

If anybody reading this has a use for kvm-hosting.org, kvm-hosting.net, or kvm-hosting.com, then do feel free to get in touch. No promises, obviously, but it'd be a shame for them to end up hosting adverts in a year or two's time..

Holger Levsen: 20140909-lts-august-2014

10 September, 2014 - 05:34
Debian LTS - feedback about the feedback from my LTS talk at DebConf14

So, I'm more or less back from dc14 and today, five days later, I think I might have mostly overcome jetlag. Probably...

So, at DebConf14 I gave a talk about LTS and while I'm sorry that I was that tired, I'm more or less happy with how the talk went. Thankfully at least I was calm and relaxed...

There are a couple of things I learned from the talk: a.) LTS has been really, really well received, b.) it fits a demand, c.) people already take it for granted (eg they plan for Wheezy LTS), and d.) people expect the same non-intrusive changes as currently done for security updates.

To explain the last point: when I explained the - so far - rather theoretical problem that squeeze-lts has no gatekeeper mechanisms whatsoever (eg no proposed-updates, no NEW queue...), the reaction in the audience was basically "something like this should exist, else how can we deploy this at large scale / on important setups?!". Also, currently there is no well-documented, easy-to-find policy for what kind of updates are acceptable. I said that we basically follow the same rules as for debian-security updates, but this should really be documented properly. This doesn't seem very hard to fix; just like many things it "just" needs someone to do the work.

IOW: we explain how to use LTS, we explain how to contribute to LTS (through uploads or financially), but we lack a simple explanation of what LTS is and what kind of updates to expect. It's kinda self-evident, but only kinda.

So since giving the talk I changed one thing in my personal usage of LTS: I don't use my personal LTS repo anymore, where I made sure only good packages got in. This is for two reasons: a.) I had to add new packages too often and b.) if it really is a problem that LTS has no gatekeeping mechanism (which I'm not sure anymore it is, after all, the updates are prepared by reasonable people with a common goal...) then I want to suffer this first hand, so I can build solutions which benefit everyone, not just me. That personal LTS repo only helped me.

On the technical side I prepared five DLAs, for lzo2, libwpd, squid3, lua5.1 and bind9. Not much to see here, they all were very smooth. I still enjoyed the challenge of digging in unknown sourcecode, as described in my previous post.

Then, more interestingly, and with the help of Raphael Geissert and Salvatore Bonaccorso, I fixed the security-tracker to also know about oldstable, after waiting for more than 8 weeks for someone else to do it. I'm very glad that this is done now, as without it, it was really tedious to check which issues applied to oldstable.

Oh, and another afterthought from giving the talk: currently at least parts of the security-tracker codebase assume that there won't ever be support for oldoldstable, but once jessie has been released this won't be true anymore. Then we will support stable, oldstable and oldoldstable. And oldstable will be wheezy, not squeeze. We have something like 6 months to fix this, hopefully we won't have much more time... Oh, and surely there are other places than just the security-tracker which will need to be taught about this.

Daniel Pocock: xTupleCon WebRTC talk schedule change, new free event

10 September, 2014 - 02:51

As mentioned in my earlier blog, I'm visiting several events in the US and Canada in October and November. The first of these, the talk about WebRTC in CRM at xTupleCon, has moved from the previously advertised timeslot to Wednesday, 15 October at 14:15.

WebRTC meeting, Norfolk, VA

Later that day, there will be a WebRTC/JavaScript meetup in Norfolk hosted at the offices of xTuple. It is not part of xTupleCon and is free to attend. Please register using the Eventbrite page created by xTuple.

This will be a hands on event for developers and other IT professionals, especially those in web development, network administration and IP telephony. Please bring laptops and mobile devices with the latest versions of both Firefox and Chrome to experience WebRTC.

Free software developers at xTupleCon

If you do want to attend xTupleCon itself, please contact xTuple directly through this form for details about the promotional tickets for free software developers.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.