Planet Debian

Planet Debian - http://planet.debian.org/

Francois Marier: Outsourcing your webapp maintenance to Debian

31 August, 2014 - 05:45

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem, and of course one way to avoid it is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, that lesson has not propagated to the web, where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack:

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
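
To make the lookup concrete, here is a minimal Python sketch of the client-side hashing, assuming the Gravatar-compatible scheme the examples above use (an MD5 hex digest of the trimmed, lowercased address); the SRV lookup that would swap in a per-domain base URL is left out:

import hashlib

def libravatar_url(email, base="http://cdn.libravatar.org/avatar/"):
    # MD5 of the trimmed, lowercased email address, hex-encoded.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return base + digest

print(libravatar_url("francois@debian.org"))
# http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070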

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh (see the sketch after this list).
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.
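
As a rough illustration of that fabric setup, here is a minimal sketch; the host and package names are hypothetical, not Libravatar's actual ones:

from fabric.api import env, sudo

env.hosts = ["master.example.org", "mirror1.example.org"]

def upgrade():
    """Refresh apt and pull in our packages from the reprepro repository."""
    sudo("apt-get update")
    sudo("apt-get install -y libravatar-master libravatar-common")

Running "fab upgrade" then updates every host in one pass.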

Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

The upgrade from squeeze to wheezy was also somewhat scary, given the number of packages that were going to get bumped to a new major version. It turned out however that it was surprisingly easy to upgrade to wheezy's versions of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every two years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that, because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal of the minified version of jQuery from that package. In our setup, there is no way to minify JavaScript files that are provided by other packages, so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. you either notice, because you're already tracking releases pretty well, and can therefore help Debian with backporting and/or testing of fixes;
  2. or you don't notice, because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent users of Fedora (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

John Goerzen: 2AM to Seattle

31 August, 2014 - 01:11

Monday morning, 1:45AM.

Laura and I walk into the boys’ room. We turn on the light. Nothing happens. (They’re sound sleepers.)

“Boys, it’s time to get up to go get on the train!”

Four eyes pop open. “Yay! Oh I’m so excited!”

And then, “Meow!” (They enjoy playing with their stuffed cats that Laura got them for Christmas.)

Before long, it was out the door to the train station. We even had time to stop at a donut shop along the way.

We climbed into our family bedroom (a sleeping car room on Amtrak specifically designed for families of four), and as the train started to move, the excitement of what was going on crept in. Yes, it’s 2:42AM, but these are two happy boys:

Jacob and Oliver love trains, and this was the beginning of a 3-day train trip from Newton to Seattle that would take us through Kansas, Colorado, the Rocky Mountains of New Mexico, Arizona, Los Angeles, up the California coast, through the Cascades, and on to Seattle. Whew!

Here we are later that morning before breakfast:

Here’s our train at a station stop in La Junta, CO:

And at the beautiful small mountain town of Raton, NM:

Some of the passing scenery in New Mexico:

Through it all, we found many things to pass the time. I don’t think anybody was bored. I took the boys “exploring the train” several times — we’d walk from one end to the other and see what all was there. There was always the dining car for our meals, the lounge car for watching the passing scenery, and on the Coast Starlight, the Pacific Parlor Car.

Here we are getting ready for breakfast one morning.

Getting to select meals and order in the “train restaurant” was a big deal for the boys.

Laura brought one of her origami books, which even managed to pull the boys away from the passing scenery in the lounge car for quite some time.

Origami is serious business:

They had some fun wrapping themselves around my feet and challenging me to move. And they were delighted when I could move even though they were trying to weigh me down!

Several games of Uno were played, but even those sometimes couldn’t compete with the passing scenery:

The Coast Starlight features the Pacific Parlor Car, which was built over 50 years ago for the Santa Fe Hi-Level trains. They’ve been updated; the upper level is a lounge and small restaurant, and the lower level has been turned into a small theater. They show movies in there twice a day, but most of the time, the place is empty. A great place to go with little boys to run around and play games.

The boys and I sort of invented a new game: roadrunner and coyote, loosely based on the old Looney Tunes cartoons. Jacob and Oliver would be roadrunners, running around and yelling “MEEP MEEP!” Meanwhile, I was the coyote, who would try to catch them — even briefly succeeding sometimes — but ultimately fail in some hilarious way. It burned a lot of energy.

And, of course, the parlor car was good for scenery-watching too:

We were right along the Pacific Ocean for several hours – sometimes there would be a highway or a town between us and the beach, but usually there was nothing at all between us and the coast. It was beautiful to watch the jagged coastline go by, to gaze out onto the ocean, watching the birds — apparently so beautiful that I didn’t even think to take some photos.

Laura’s parents live in California, and took a connecting train. I had arranged for them to have a sleeping car room near ours, so for the last day of the trip, we had a group of 6. Here are the boys with their grandparents at lunch Wednesday:

We stepped off the train in Seattle into beautiful King Street Station.

Our first day in Seattle was a quiet day of not too much. Laura’s relatives live near Lake Washington, so we went out there to play. The boys enjoyed gathering black rocks along the shore.

We went blackberry picking after that – filled up buckets for a cobbler.

The next day, we rode the Seattle Monorail. The boys have been talking about this for months — a kind of train they’ve never been on. That was the biggest thing in their minds that they were waiting for. They got to ride in the very front, by the operator.

Nice view from up there.

We walked through the Pike Market — I hadn’t been in such a large and crowded place since I was in Guadalajara:

At the Seattle Aquarium, we all had a great time checking out all the exhibits. The “please touch” one was a particular hit.

Walking underneath the salmon tank was fun too.

We spent a couple of days doing things closer to downtown. Laura’s cousin works at MOHAI, the Museum of History and Industry, so we spent a morning there. The boys particularly enjoyed the old periscope mounted to the top of the building, and the exhibit on chocolate (of course!)

They love any kind of transportation, so of course we had to get a ride on the Seattle Streetcar that comes by MOHAI.

All weekend long, we had been noticing the seaplanes taking off from Lake Washington and Lake Union (near MOHAI). So finally I decided to investigate, and one morning while Laura was doing things with her cousin, the boys and I took a short seaplane ride from one lake to another, and then rode every method of transportation we could except for ferries (we did that the next day). Here is our Kenmore Air plane:

The view of Lake Washington from 1000 feet was beautiful:

I think we got a better view than the Space Needle, and it probably cost about the same anyhow.

After splashdown, we took the streetcar to a place where we could eat lunch right by the monorail tracks. Then we rode the monorail again. Then we caught a train (it went underground a bit so it was a “subway” to them!) and rode it a few blocks.

There is even scenery underground, it seems.

We rode a bus back, and saved one last adventure for the next day: a ferry to Bainbridge Island.

Laura and I even got some time to ourselves to go have lunch at an amazing Greek restaurant to celebrate a year since we got engaged. It’s amazing to think that, by now, it’s only a few months until our wedding anniversary too!

There are many special memories of the weekend I could mention — visiting with Laura’s family, watching the boys play with her uncle’s pipe organ (it’s in his house!), watching the boys play with their grandparents, having all six of us on the train for a day, flying paper airplanes off the balcony, enjoying the cool breeze on the ferry and the beautiful mountains behind the lake. One of my favorites is waking up to high-pitched “Meow? Meow meow meow! Wake up, brother!” sorts of sounds. There was so much cat-play on the trip, and it was cute to hear. I have the feeling we won’t hear things like that much more.

So many times on the trip I heard, “Oh dad, I am so excited!” I never get tired of hearing that. And, of course, I was excited, too.

Joachim Breitner: DebConf 14

30 August, 2014 - 23:10

I’m writing this blog post on the plane from Portland towards Europe (which I now can!), using the remaining battery life after having watched one of the DebConf talks that I missed. (It was the systemd talk, which was good and interesting, but maybe I should have watched one of the power management talks, as my battery is running down faster than it should be, I believe.)

I mostly enjoyed this year’s DebConf. I must admit that I did not come very prepared: I had neither something urgent to hack on, nor important things to discuss with the other attendees, so in a way I had a slow start. I also felt a bit out of touch with the project, both personally and technically: In previous DebConfs, I had more interest in many different corners of the project, and also came with more naive enthusiasm. After more than 10 years in the project, I see a few things more realistically and am also more relaxed, and don’t react to every “wouldn’t it be cool to have <crazy idea>” as easily any more. And these days I mostly focus on Haskell packaging (and related tooling, which sometimes is also relevant and useful to others), which is not very interesting to most others.

But in the end I did get to do some useful hacking, heard a few interesting talks and even got a bit excited: I created a new tool to schedule binNMUs for Haskell packages which is quite generic (configured by just a regular expression), so that it can and will be used by the OCaml team as well, and who knows who else will start using hash-based virtual ABI packages in the future... It runs via a cron job on people.debian.org to produce output for Haskell and for OCaml, based on data pulled via HTTP. If you are a Debian developer and want up-to-date results, log into wuiet.debian.org and run ~nomeata/binNMUs --sql; it then uses the projectb and wanna-build databases directly. Thanks to the ftp team for opening up incoming.debian.org, by the way!

Unsurprisingly, I also held a talk on Haskell and Debian (slides available). I talked a bit too long and we had too little time for discussion, but in any case not all discussion would have fitted in 45 minutes. The question of which packages from Hackage should be added to Debian and which not is still undecided (which means we carry on packaging what we happen to want in Debian for whatever reason). I guess the better our tooling gets (see the next section), the more easily we can support more and more packages.

I am quite excited by and supportive of Enrico’s agenda to remove boilerplate data from the debian/ directories and to rely on autodebianization tools. We have such a tool for Haskell packages, cabal-debian, but it is unofficial, i.e. neither created by us nor fully endorsed. I want to change that, so I got in touch with the upstream maintainer and we want to get it into shape for producing perfect Debian packages, as long as the upstream-provided metadata is perfect. I’d like to see the Debian Haskell Group follow Enrico’s plan to its extreme conclusion, and this way drive innovation in Debian in general. We’ll see how that goes.

Besides all the technical program I enjoyed the obligatory games of Mao and Werewolves. I also got to dance! On Saturday night, I found a small but welcoming Swing-In-The-Park event where I could dance a few steps of Lindy Hop. And on Tuesday night, Vagrant Cascadian took us (well, three of us) to a blues dancing night, which I greatly enjoyed: The style was so improvisation-friendly that despite having missed the introduction and never having danced Blues before I could jump right in. And in contrast to social dances in Germany, where it is often announced that the girls are also invited to ask the boys, but then it is still mostly the boys who have to ask, here it took only half a minute of standing at the side until I got asked to dance. In retrospect I should have skipped the HP reception and gone there directly...

I’m not heading home at the moment, but will travel directly to Göteborg to attend ICFP 2014. I hope the (usually worse) west-to-east jet lag will not prevent me from enjoying that as much as I could.

Gergely Nagy: Happy

30 August, 2014 - 14:16

For the past decade or so, I wasn't exactly happy. I struggled with low self-esteem, and likely bordered on depression at times. I disappointed friends, family and most of all, myself. There were times I not only disliked the person I was, but hated it. This wasn't healthy, nor forward-looking; I knew that all along, and that made the situation even worse. I tried to maintain a more enthusiastic mask, pretending that nothing was wrong. Being fully aware that there actually was nothing terribly wrong, while still feeling worthless, just added insult to injury.

In the past few years, things started to improve. I had a job, things to care about, things to feel passionate about, people around me who knew nothing about the shadows on my heart, yet still smiled, still supported me. But years of self-loathing do not disappear overnight.

Then one day, some six months ago, my world turned upside down. Years of disappointment, hate and loathing - poof, gone. Today, I'm happy. This is something I have not been able to tell myself in all honesty in this century yet (except maybe for very brief periods of time, when I was happy for someone else).

A little over six months ago, I met someone, someone I could open up to. I still remember the first hour, where we talked about our own shortcomings and bad habits. At the end of the day, when She ordered me a crazy-pancake (a pancake with half a dozen random fillings), I felt happy. She is everything I could ever wish for, and more. She isn't just the woman I love, with whom I'll say the words in a couple of months. She's much more than a partner: a friend and a soul-mate combined in one person. She is my inspiration, my role model and my Guardian Angel too.

I no longer feel worthless, nor inadequate. I am disappointed with myself no more. I do not hate, I do not loathe, and past mistakes, past feelings seem so far away! I can share everything with Her, She does not judge, nor condemn: she supports and helps. With Her, I am happy. With Her, I am who I wanted myself to be. With Her, I am complete.

Thank You.

Matthew Palmer: Chromium tabs crashing and not rendering correctly?

30 August, 2014 - 11:45

If you’ve noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you’re not alone. Thankfully, it’s relatively easy to workaround. It will hit people who keep their browser open for a long time, or who have lots of tabs (or if you’re like me, and do both).

To tell if you’re suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:

[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)]
Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy
failed: Too many open files

And

[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)]
Cannot create shared memory buffer

If you see those errors, congratulations! The rest of this blog post will be of use to you.

There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.

System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:

* - nofile 65535

You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect the next time you log in, or perhaps even only when you restart X (or, at worst, your entire machine).

This doesn’t help you if you’ve already got Chromium open, though. If you’d like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record), you can use a magical tool called prlimit.

The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, and running at least glibc 2.13. You’ll have a prlimit command line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
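
If you do have the util-linux version of prlimit, the equivalent invocation should look something like this (double-check prlimit(1) on your system for the exact syntax):

prlimit --pid <PID> --nofile=65535:65535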

And now, you can go back to using your tabs as ersatz bookmarks, like I do.

Dirk Eddelbuettel: BH release 1.54.0-4

30 August, 2014 - 10:02

Another small new release of our BH package, which provides Boost headers for use by R, is now on CRAN. This one brings a one-file change: the file any.hpp comprising the Boost.Any library --- as requested by a fellow package maintainer needing it for a pending upload to CRAN.

No other changes were made.

Changes in version 1.54.0-4 (2014-08-29)
  • Added Boost Any requested by Greg Jeffries for his nabo package

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve Kemp: Migration of services and hosts

29 August, 2014 - 21:28

Yesterday I carried out the upgrade of a Debian host from Squeeze to Wheezy for a friend. I like doing odd-jobs like this as they're generally painless, and when there are problems it is a fun learning experience.

I accidentally forgot to check on the status of the MySQL server on that particular host, which was a little embarrassing, but I later put together a reasonably thorough serverspec recipe describing how the machine should be set up, which will avoid that problem in the future - Introduction/tutorial here.

The more I use serverspec the more I like it. My own personal servers have good rules now:

shelob ~/Repos/git.steve.org.uk/server/testing $ make
..
Finished in 1 minute 6.53 seconds
362 examples, 0 failures

Slow, but comprehensive.

In other news I've now migrated every single one of my personal mercurial repositories over to git. I didn't have a particular reason for doing that, but I've started using git more and more for collaboration with others and using two systems felt like an annoyance.

That means I no longer have to host two different kinds of repositories, and I can use the excellent gitbucket software on my git repository host.

Needless to say I wrote a policy for this host too:

#
#  The host should be wheezy.
#
describe command("lsb_release -d") do
  its(:stdout) { should match /wheezy/ }
end


#
# Our gitbucket instance should be running, under runit.
#
describe supervise('gitbucket') do
  its(:status) { should eq 'run' }
end

#
# nginx will proxy to our back-end
#
describe service('nginx') do
  it { should be_enabled   }
  it { should be_running   }
end
describe port(80) do
  it { should be_listening }
end

#
#  Host should resolve
#
describe host("git.steve.org.uk" ) do
  it { should be_resolvable.by('dns') }
end

Simple stuff, but being able to trigger all these kinds of tests, on all my hosts, with one command, is very reassuring.

Jakub Wilk: More spell-checking

29 August, 2014 - 19:51

Have you ever wanted to use Lintian's spell-checker against arbitrary files? Now you can do it with spellintian:

$ zrun spellintian --picky /usr/share/doc/RFC/best-current-practice/rfc*
/tmp/0qgJD1Xa1Y-rfc1917.txt: amoung -> among
/tmp/kvZtN435CE-rfc3155.txt: transfered -> transferred
/tmp/o093khYE09-rfc3481.txt: unecessary -> unnecessary
/tmp/4P0ux2cZWK-rfc6365.txt: charater -> character

mwic (Misspelled Words In Context) takes a different approach. It uses classic spell-checking libraries (via Enchant), but it groups misspellings and shows them in their contexts. That way you can quickly filter out false-positives, which are very common in technical texts, using visual grep:

$ zrun mwic /usr/share/doc/debian/social-contract.txt.gz
DFSG:
| …an Free Software Guidelines (DFSG)
| …an Free Software Guidelines (DFSG) part of the
                                ^^^^

Perens:
|    Bruce Perens later removed the Debian-spe…
| by Bruce Perens, refined by the other Debian…
           ^^^^^^

Ean, Schuessler:
| community" was suggested by Ean Schuessler. This document was drafted
                              ^^^ ^^^^^^^^^^

GPL:
| The "GPL", "BSD", and "Artistic" lice…
       ^^^

contrib:
| created "contrib" and "non-free" areas in our…
           ^^^^^^^

CDs:
| their CDs. Thus, although non-free wor…
        ^^^

Antti-Juhani Kaijanaho: Licentiate Thesis is now publicly available

29 August, 2014 - 16:45

My recently accepted Licentiate Thesis, which I posted about a couple of days ago, is now available in JyX.

Here is the abstract again for reference:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

Daniel Pocock: Welcoming libphonenumber to Debian and Ubuntu

29 August, 2014 - 16:02

Google's libphonenumber is a universal library for parsing, validating, identifying and formatting phone numbers. It works quite well for numbers from just about anywhere. Here is a Java code sample (C++ and JavaScript also supported) from their web site:


// Imports for the snippet (from the libphonenumber jar):
import com.google.i18n.phonenumbers.NumberParseException;
import com.google.i18n.phonenumbers.PhoneNumberUtil;
import com.google.i18n.phonenumbers.PhoneNumberUtil.PhoneNumberFormat;
import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber;

String swissNumberStr = "044 668 18 00";
PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance();
PhoneNumber swissNumberProto = null;  // declared here so it stays in scope below
try {
  swissNumberProto = phoneUtil.parse(swissNumberStr, "CH");
} catch (NumberParseException e) {
  System.err.println("NumberParseException was thrown: " + e.toString());
}
boolean isValid = phoneUtil.isValidNumber(swissNumberProto); // returns true
// Produces "+41 44 668 18 00"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.INTERNATIONAL));
// Produces "044 668 18 00"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.NATIONAL));
// Produces "+41446681800"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.E164));

This is particularly useful for anybody working with international phone numbers. This is a common requirement in the world of VoIP where people mix-and-match phones and hosted PBXes in different countries and all their numbers have to be normalized.

About the packages

The new libphonenumber package provides support for C++ and Java users. Upstream also supports JavaScript but that hasn't been packaged yet.

Using libphonenumber from Evolution and other software

Lumicall, the secure SIP/ZRTP client for Android, has had libphonenumber from the beginning. It is essential when converting dialed numbers into E.164 format to make ENUM queries and it is also helpful to normalize all the numbers before passing them to VoIP gateways.
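
As a small illustration of that normalization step, here is a minimal sketch using the Python port of the library (python-phonenumbers, which is not one of the packages discussed in this post):

import phonenumbers

# Normalize a nationally-formatted Swiss number to E.164, e.g. for an ENUM lookup.
number = phonenumbers.parse("044 668 18 00", "CH")
print(phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.E164))
# +41446681800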

Debian includes the GNOME Evolution suite and it will use libphonenumber to improve handling of phone numbers in contact records if enabled at compile time. Fredrik has submitted a patch for that in Debian.

Many more applications can potentially benefit from this too. libphonenumber is released under an Apache license so it is compatible with the Mozilla license and suitable for use in Thunderbird plugins.

Improving libphonenumber

It is hard to keep up with the changes in dialing codes around the world. Phone companies and sometimes even whole countries come and go from time to time. Numbering plans change to add extra digits. New prefixes are created for new mobile networks. libphonenumber contains metadata for all the countries and telephone numbers that the authors are aware of but they also welcome feedback through their mailing list for anything that is not quite right.

Now that libphonenumber is available as a package, it may be helpful for somebody to try and find a way to split the metadata from the code so that metadata changes could be distributed through the stable updates catalog along with other volatile packages such as anti-virus patterns.

Robert Collins: Test processes as servers

29 August, 2014 - 12:10

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the upside, you have a one-way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive – particularly in test suites with tens of thousands of tests.

Now, for use in the development edit-execute loop, this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is passing a user’s debugging session through to child workers in a chain, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new-process (and more importantly complete-enumeration) overhead *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (lets say stdin)
  3. On startup it might eager load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so lets stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be stdin – a command providing a packet of stdin, used for interacting with debuggers (see the sketch after this list)
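
To make the shape of this concrete, here is a rough, hypothetical sketch of such a command loop – the command names follow the proposal above, and the loader/runner objects are stand-ins rather than any existing subunit API:

import sys

def serve(loader, runner):
    # One command per line on stdin; results go to stdout, as today.
    for line in sys.stdin:
        cmd, _, arg = line.strip().partition(" ")
        if cmd == "list-tests":
            for test_id in loader.test_ids():
                sys.stdout.write(test_id + "\n")
        elif cmd == "run-tests":
            # arg is a (possibly large) space-separated set of test ids,
            # letting the runner keep fixtures like DB servers alive.
            runner.run(loader.filter(arg.split()))
        elif cmd == "stdin":
            runner.forward_stdin(arg)  # hand a packet to a debugger
        sys.stdout.flush()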

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey-patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…


Russ Allbery: git-pbuilder 1.33

29 August, 2014 - 07:37

git-pbuilder is a script to integrate git-buildpackage with pbuilder or cowbuilder. My long-term hope is to rewrite it in Python and just incorporate it into git-buildpackage's source base directly, but in the meantime I've been maintaining it as a separate script.

This new release is a relatively minor bug fix. git-pbuilder has always removed the *_source.changes file built as part of the pbuilder process, since this file is normally useless. It's used to generate the source package to move into the chroot, but then the build in the chroot normally regenerates the Debian source package. The *_source.changes file hangs around with invalid checksums and just confuses matters, so git-pbuilder has always deleted it.

However, Debian is now increasing support for source-only uploads, which means that source-only builds might now be interesting. One can do a source-only build with gbp buildpackage -S. But that also generates a *_source.changes file, one that's actually useful, and git-pbuilder was deleting that as well. This release, thanks to work by Guido Günther, refrains from deleting this file when doing a source-only build.
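
For context, a source-only build through git-pbuilder can be requested roughly like this (assuming a checked-out packaging repository; see the gbp buildpackage documentation for the full set of options):

$ gbp buildpackage --git-pbuilder -S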

You can get the latest release of git-pbuilder from my scripts distribution page.

Bernhard R. Link: Where key expiry dates are useful and where they are not.

29 August, 2014 - 02:57

Some recent blog posts (here and here) suggest short key expiry times.

They also highlight something many people forget: the expiry time of a key can be changed at any time with just a new self-signature. In particular, that can be done retroactively (you cannot avoid this if you allow changing the expiry time at all: nothing would stop an attacker from just changing the clock of one of his computers).

(By the way: did you know you can also reduce the validity time of a key? If you look at the resulting packets in your key, this is simply a revocation packet of the previous self-signature followed by a new self-signature with a shorter expiration date.)

In my eyes that fact has a very simple consequence: An expiry date on your gpg main key is almost totally worthless.

If you, for example, lose your private key and have no revocation certificate for it, then an expiry time will not help at all: once someone else gets the private key (for example by brute-forcing it, as computers get faster over the years, or because they somehow obtained a backup and brute-forced its pass-phrase), they can just extend the expiry date and make it look like the key is still valid. (And if they do not have the private key, there is nothing they can do anyway.)

There is one place where expiration dates make much more sense, though: subkeys.

As the expiration date of a subkey is part of the signature of that subkey with the main key, someone having access to only the subkey cannot change the date.

This also makes it feasible to use new subkeys over the time, as you can let the previous subkey expire and use a new one. And only someone having the private main key (hopefully you), can extend its validity (or sign a new one).

(I generally suggest always having a signing subkey and never ever using the main key except off-line, to sign subkeys or other keys. The fact that it can sign other keys makes the main key too precious to operate on-line (even if it is on some smartcard, the reader cannot show you what you just signed).)
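
For reference, adding such a signing subkey is done from gpg's interactive edit menu; a minimal session looks roughly like this (the exact prompts vary between GnuPG versions, and 0xYOURKEYID is a placeholder):

$ gpg --edit-key 0xYOURKEYID
gpg> addkey
(choose "RSA (sign only)" and set a validity period, e.g. 1y)
gpg> save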

Gunnar Wolf: Ongoing crypto handling discussions

28 August, 2014 - 23:04

I love to see that there is a lot of crypto discussion going on at DebConf. Maybe I'm skewed by my role as keyring-maint, but I have been involved in more than one discussion every day on what signatures do/should mean, on best key handling practices, on ideas to make key maintenance better, on how the OpenPGPv4 format lays out a key and its components on disk, all that. I enjoy that some of those discussions pose questions that leave me thinking, as I am quite far from having all the answers.

Discussions should be had face to face, but some start online and deserve to be answered online (and also pose an opportunity to become documentation). Simon Josefsson blogs about The case for short OpenPGP key validity periods. This will be an important issue to tackle, as we will soon require keys in the Debian keyring to have a set expiration date (surprise surprise!), and I agree with Simon: setting an expiration date far in the future means very little.

There is a caveat with using, as he suggests, very short expiry periods: We have a human factor sitting in the middle. Keyring updates in Debian are done approximately once a month, and I do not see the period shortening. That means, only once a month we (currently Jonathan McDowell and myself, and we expect to add Daniel Kahn Gillmor soon) take the full changeset and compile a new keyring that replaces the active one in Debian.

This means that if you have, as Simon suggests, a 100-day validity key, you have to remember to update it at least every 70 days, or you might be locked out during the days it takes us to process it.

I set my expiration period to two years, although I might shorten it to only one. I expect to add checks+notifications before we enable this requirement project-wide (so that Debian servers will mail you when your key is close to expiry); I think that mail can be sent at approximately [expiry date - 90 days] to give you time both to you and to us to act. Probably the optimal expiration periods under such conditions would be between 180 and 365 days.

But, yes, this is by far not yet a ruling, but a point in the discussion. We still have some days of DebConf, and I'll enjoy revising this point. And Simon, even if we correct some bits for these details, I'd like to have your permission to use this fine blog post as part of our documentation!

(And on completely unrelated news: Congratulations to our dear and very much missed friend Bubulle for completely losing his sanity and running for 28 hours and a half straight! He briefly describes this adventure when it was about to start, and we all want him to tell us how it was. Mr. Running French Guy, you are amazing!)

Lior Kaplan: The importance of close integration between distribution and upstream

28 August, 2014 - 22:22

Many package maintainers need to decide when to upload a new version to Debian. Should the upload be done only after the official release, or is there a place for uploads during the development process? In the latter case there’s a need to balance the benefit of early testing and feedback against stability, so as not to completely break users’ environments (and package relationships) too often.

With the coming PHP 5.6.0 release, Debian kept being on the cutting edge. Thanks to Ondřej, the new version has been available in experimental since alpha1 and in unstable/testing since beta3. Considering the timing of the PHP release relative to the Debian freeze, I’m happy we started early and did the transition to PHP 5.6 a few months ago.

But just following the development releases (betas, RCs) isn’t enough. Both Ondřej and myself are part of the PHP community, and know the planned timelines, the current status and what the critical points are. Such knowledge was very useful this week, when we knew 5.6.0 was pending final tagging before release (after RC4). This made us take the report of Debian bug #759381: “php5: TLS connections broken in 5.6.0 RC4” seriously and contact the release managers.

First it was a “heads up” and then a real problem. After a quick discussion (both private mails by me and on github by Ondřej), the relevant commit was reverted by the release managers (Julien Pauli & Ferenc Kovacs), and 5.6.0 was retagged. The issue will get more checks towards 5.6.1 without any time pressure.

Although 5.6.0 isn’t in production for anyone (yet), and like any major release can have issues, the close connection between everyone spared the PHP users and ecosystem a round of complaints. I can’t imagine the issue being sorted out so quickly otherwise – it was resolved 16 hours after being reported. This was also helped by the bug being reported as a difference between two close releases (a regression in RC4 compared to RC3).

To close the circle: if we had uploaded 5.6.0 only after the final release, the report would have been of a regression between 5.5.x and 5.6.0, which is obviously much harder to pinpoint. So, I’m not sure I have a good answer for the question at the beginning of the post, but for this case our policy proved itself.


Filed under: Debian GNU/Linux, PHP

Tim Retout: Pump.io update 1

28 August, 2014 - 21:59

[The story so far: I'm packaging pump.io for Debian.]

4 packages uploaded to NEW:
  • node-webfinger
  • validator.js
  • websocket-driver
  • node-openid
2 packages eliminated as not needed:
  • set-immediate - deprecated
  • crypto-cacerts - not needed on Debian
1 package in progress:
  • node-databank
Got my eye on:
  • oauth-evanp - this is a fork with two patches, so I need to investigate the status of those.
  • node-iconv-lite - needs files downloaded from the internet, so I'm considering how to add them to the source package
  • dateformat/moment - there's an open discussion about combining Node.js modules, and I'm wondering if these are affected.
Thoughts

Currently I'm averaging around one package upload a day, I think? Which would mean ~1 month to go? But there may be challenges around getting packages through the NEW queue in time to build-depend on them.

Someone has asked my temporary Twitter account whether I have a pump.io account. Technically, yes, I do - but I don't post anything on it, because I want to run my own server in the long term. As part of running my own server, I always find that easier if I'm installing software from Debian packages. Hence this work. Sledgehammer, meet nut.

Simon Josefsson: The Case for Short OpenPGP Key Validity Periods

27 August, 2014 - 02:11

After I moved to a new OpenPGP key (see key transition statement) I have received comments about the short lifetime of my new key. When I created the key (see my GnuPG setup) I set it to expire after 100 days. Some people assumed that I would have to create a new key then, and therefore wondered what value there is in signing a key that will expire in two months. It doesn’t work like that, and below I will explain how OpenPGP key expiration works; how to extend the expiration time of your key; and argue why having a relatively short validity period can be a good thing.

The OpenPGP message format has a sub-packet called the Key Expiration Time, quoting the RFC:

5.2.3.6. Key Expiration Time

   (4-octet time field)

   The validity period of the key.  This is the number of seconds after
   the key creation time that the key expires.  If this is not present
   or has a value of zero, the key never expires.  This is found only on
   a self-signature.

You can print the sub-packets in your OpenPGP key with gpg --list-packets. See below an output for my key, and notice the “created 1403464490” (which is Unix time for 2014-06-22 21:14:50) and the “subpkt 9 len 4 (key expires after 100d0h0m)”, which adds up to an expiration on 2014-09-30 (100 days after the key creation time; the gpg --edit-key listing further down shows the same date). Don’t confuse the creation time of the key (“created 1403464321”) with when the signature was created (“created 1403464490”).

jas@latte:~$ gpg --export 54265e8c | gpg --list-packets |head -20
:public key packet:
	version 4, algo 1, created 1403464321, expires 0
	pkey[0]: [3744 bits]
	pkey[1]: [17 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
	version 4, created 1403464490, md5len 0, sigclass 0x13
	digest algo 10, begin of digest be 8e
	hashed subpkt 27 len 1 (key flags: 03)
	hashed subpkt 9 len 4 (key expires after 100d0h0m)
	hashed subpkt 11 len 7 (pref-sym-algos: 9 8 7 13 12 11 10)
	hashed subpkt 21 len 4 (pref-hash-algos: 10 9 8 11)
	hashed subpkt 30 len 1 (features: 01)
	hashed subpkt 23 len 1 (key server preferences: 80)
	hashed subpkt 2 len 4 (sig created 2014-06-22)
	hashed subpkt 25 len 1 (primary user ID)
	subpkt 16 len 8 (issuer key ID 0664A76954265E8C)
	data: [3743 bits]
:signature packet: algo 1, keyid EDA21E94B565716F
	version 4, created 1403466403, md5len 0, sigclass 0x10
jas@latte:~$ 
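
As a quick sanity check of that arithmetic, here is a minimal Python sketch using the timestamps from the listing above:

from datetime import datetime, timedelta

created = datetime.utcfromtimestamp(1403464321)  # key creation time
print(created.date())                            # 2014-06-22
print((created + timedelta(days=100)).date())    # 2014-09-30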

So the key will simply stop being valid after that time? No. It is possible to update the key expiration time value, re-sign the key, and distribute the key to people you communicate with, directly or indirectly via OpenPGP keyservers. Since that date is a couple of weeks away, now felt like the perfect opportunity to go through the exercise of taking out my offline master key, booting from a Debian LiveCD, and extending the key’s expiry time. See my earlier writeup for LiveCD and USB stick conventions.

user@debian:~$ export GNUPGHOME=/media/FA21-AE97/gnupghome
user@debian:~$ gpg --edit-key 54265e8c
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  3744R/54265E8C  created: 2014-06-22  expires: 2014-09-30  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:47:48 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 1

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:05 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 2

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 1

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:14 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 3

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 2

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:23 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2015-01-23  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> save
user@debian:~$ gpg -a --export 54265e8c > /media/KINGSTON/updated-key.txt
user@debian:~$ 

I remove the “transport” USB stick from the “offline” computer, and back on my laptop I can inspect the new updated key, using the same command as before. The key creation time is the same (“created 1403464321”), of course, but the signature packet has a new timestamp (“created 1409064478”) since it was signed just now. Notice “created 1409064478” and “subpkt 9 len 4 (key expires after 214d19h35m)”: the expiration time is computed relative to when the key was created, not when the signature packet was generated. You may want to double-check the pref-sym-algos, pref-hash-algos and other sub-packets so that you don’t accidentally change anything else. (Btw, re-signing your key is also how you would modify those preferences over time.)

jas@latte:~$ cat /media/KINGSTON/updated-key.txt |gpg --list-packets | head -20
:public key packet:
	version 4, algo 1, created 1403464321, expires 0
	pkey[0]: [3744 bits]
	pkey[1]: [17 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
	version 4, created 1409064478, md5len 0, sigclass 0x13
	digest algo 10, begin of digest 5c b2
	hashed subpkt 27 len 1 (key flags: 03)
	hashed subpkt 11 len 7 (pref-sym-algos: 9 8 7 13 12 11 10)
	hashed subpkt 21 len 4 (pref-hash-algos: 10 9 8 11)
	hashed subpkt 30 len 1 (features: 01)
	hashed subpkt 23 len 1 (key server preferences: 80)
	hashed subpkt 25 len 1 (primary user ID)
	hashed subpkt 2 len 4 (sig created 2014-08-26)
	hashed subpkt 9 len 4 (key expires after 214d19h35m)
	subpkt 16 len 8 (issuer key ID 0664A76954265E8C)
	data: [3744 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
jas@latte:~$ 
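
If you want to double-check that arithmetic yourself, feeding the timestamps above to GNU date(1) works; note that gpg truncates the displayed duration to whole minutes, so the result may be a minute or so off from gpg’s own prompt:

$ date -u -d @1403464321
Sun Jun 22 19:12:01 UTC 2014
$ date -u -d "@$((1403464321 + 214*86400 + 19*3600 + 35*60))"
Fri Jan 23 14:47:01 UTC 2015

That indeed lands on the 23 January 2015 expiry reported during the edit session.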

Being happy with the new key, I import it and send it to key servers out there.

jas@latte:~$ gpg --import /media/KINGSTON/updated-key.txt 
gpg: key 54265E8C: "Simon Josefsson " 5 new signatures
gpg: Total number processed: 1
gpg:         new signatures: 5
jas@latte:~$ gpg --send-keys 54265e8c
gpg: sending key 54265E8C to hkp server keys.gnupg.net
jas@latte:~$ gpg --keyserver keyring.debian.org  --send-keys 54265e8c
gpg: sending key 54265E8C to hkp server keyring.debian.org
jas@latte:~$ 
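
Keyserver propagation is not instantaneous, so as an optional final check you can later fetch the key back and make sure the new expiration dates show up:

$ gpg --keyserver keys.gnupg.net --recv-keys 0x54265E8C
$ gpg --list-keys 0x54265E8C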

Finally: why go through this hassle rather than simply setting the key to expire in 50 years? Some reasons are:

  1. I don’t trust myself to keep track of a private key (or revocation cert) for 50 years.
  2. I want people to notice my revocation certificate as quickly as possible.
  3. I want people to notice other changes to my key (e.g., cipher preferences) as quickly as possible.

Let’s look into the first reason a bit more. What would happen if I lost both the master key and the revocation cert for a key that’s valid for 50 years? I would start from scratch and create a new key that I upload to keyservers. Then there would be two valid keys out there that identify me, both carrying a set of signatures, and neither of them revoked. If I happened to lose the new key as well, there would be three valid keys out there with signatures on them. You may argue that this shouldn’t be a problem, and that nobody should use any key other than the latest one I want used, but that is a technical argument; at this point we have moved into usability, and that is a trickier area. Asking users to pick the right one out of several apparently valid keys for me is simply not going to work well.

The second reason is more subtle, but considerably more important. If people retrieve my key from keyservers today and it expires in 50 years, there will be no need to refresh it from key servers. If for some reason I have to publish my revocation certificate, there will be people who never see it. If instead I set a short validity period, people will have to refresh my key once in a while, and will then either get an updated expiration time or get the revocation certificate. This amounts to a CRL/OCSP-like model.
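
For the people relying on my key, that refresh can be as simple as the following (any keyserver will do, and the cron line is just one way to automate it):

$ gpg --refresh-keys 0x54265E8C

# or refresh the entire keyring weekly from cron:
0 3 * * 0 gpg --batch --refresh-keys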

The third reason is similar to the second, but deserves to be mentioned on its own. Because the cipher preferences are expressed (and signed) in my key, and ciphers come and go, I expect to modify those preferences during the lifetime of my long-term key. If my key had a long validity period, people would not refresh it from key servers, and would keep encrypting messages to me with ciphers I may no longer want used.

The downside of having a short validity period is that I have to do slightly more work to get out the offline master key once in a while (which I have to do anyway, since I sign other people’s keys) and that others need to refresh my key from the key servers. Can anyone identify other disadvantages? Also, having to explain why I use a short validity period used to be a downside, but with this writeup posted that won’t be the case any more.

Daniel Pocock: GSoC talks at DebConf 14 today

27 August, 2014 - 00:33

This year I mentored two students doing work in support of Debian and free software (as well as those I mentored for Ganglia).

Both of them are presenting details about their work at DebConf 14 today.

While Juliana's work has been widely publicised already, mainly because it is accessible to every individual DD, Andrew's work is also quite significant and creates many possibilities to advance awareness of free software.

The Java project that is not just about Java

Andrew's project is about recursively building Java dependencies from third party repositories such as the Maven Central Repository. It matches up well with the wonderful new maven-debian-helper tool in Debian and will help us to fill out /usr/share/maven-repo on every Debian system.

Firstly, this is not just about Java. On a practical level, some aspects of the project are useful for many other purposes. One of those is scanning a repository for non-free artifacts, making a Git mirror or clone containing a dfsg branch for generating repackaged upstream source, and then testing whether it still builds.

Then there is the principle of software freedom. The Maven Central repository now requires that people publish a sources JAR and license metadata with each binary artifact they upload. They do not, however, demand that the sources JAR be complete or that the binary can be rebuilt by somebody else from the published sources. The license data must be specified, but it does not appear to be verified in the same way as packages inspected by Debian's legendary FTP masters.

Thanks to the transitive dependency magic of Maven, it is quite possible that many Java applications that are officially promoted as free software can't trace the source code of every dependency or build plugin.
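
To get a feel for how deep that rabbit hole goes on any particular project, the stock Maven dependency plugin (not Andrew's tooling) will print the full transitive tree:

$ mvn dependency:tree

Every node in that output is a binary artifact whose buildability from published sources is, in general, unverified.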

Many organizations are becoming more alarmed about the risk of depending on some rogue component. Maybe they will be hit with a lawsuit from a vendor stating that his plugin was only free for the first 3 months. Maybe some binary dependency JAR contains a nasty trojan for harvesting data about their corporate network.

People familiar with the principles of software freedom are in the perfect position to address these concerns, and Andrew's work helps us build a cleaner alternative. It obviously can't rebuild every JAR, for the very reason that some of them are not really free; however, it does give us the opportunity to build a heat-map of trouble spots and also to create a fast track to packaging for those hierarchies of JARs that are truly free.

Making WebRTC accessible to more people

Juliana set out to update rtc.debian.org and this involved working on JSCommunicator, the HTML5/JavaScript softphone based on WebRTC.

People attending the session today or participating remotely are advised to set up their RTC / VoIP password at db.debian.org well in advance so the server will allow them to log in and try it during the session. It can take 30 minutes or so for the passwords to be replicated to the SIP proxy and TURN server.

Please also check my previous comments about what works and what doesn't and in particular, please be aware that Iceweasel / Firefox 24 on wheezy is not suitable unless you are on the same LAN as the person you are calling.

Christian Perrier: [life] Follow bubulle running adventures....

26 August, 2014 - 14:09
Just in case some of my free software friends care and want to understand why, for the first time since 2004, I'm not attending DebConf...

Starting tomorrow at 07:00 CEST (so, 22:00 PDT for DebConfers), I'll be running the "TDS" race of the Ultra-Trail du Mont-Blanc races.

Ultra-Trail du Mont-Blanc (UTMB) is one of the world's most famous long-distance mountain trail races. It takes place in Chamonix, just below Mont Blanc, France's and Europe's highest mountain. The race is indeed simple: "go around Mont Blanc in a big circle, 160 km long, with 10,000 metres of cumulative positive climb over about 10 high passes between 2000 and 2700 metres altitude".

"My" race is a shortened version of UTMB that does half of the full loop, from Courmayeur in Italy (just "the other side" of Mont-Blanc, from Chamonix) and goes back to Chamonix. It is "only" 120 kilometers long with 7200 meters of positive climb. Some of these are however know as more difficult than UTMB itself.

There are many firsts for me in this race: first "over 100 km", first "over 24 hours running". Still, I trained hard for this, completed a very tough race in early July (60 km, 5000 m climb) with a very good result, and I expect to do well.

Top runners complete this in 17 hours... the last arrivals are expected after 33 hours of "running" (often fast walking, indeed). I plan to complete the race in 28 hours but, honestly, I have no idea. :-)

So, in case you're bored in a night hacklab, or just want to draw your attention away from IRC, or don't have any package to polish... or just want to spare a thought for an old friend, you can use the following link to follow all this live: http://utmb.livetrail.net/coureur.php?rech=6384&lang=en

Race start: 7am CEST, Wednesday Aug 27th. bubulle's arrival: Thursday Aug 28th, between 10am and 4pm (best projection: 11am).

And there will be cheese at pit stops....
