Planet Debian

Planet Debian - http://planet.debian.org/

Lars Wirzenius: New APT signing key for code.liw.fi/debian

19 June, 2016 - 17:51

For those who use my code.liw.fi/debian APT repository, please be advised that I've today replaced the signing key for the repository. The new key has the following fingerprint:

8072 BAD4 F68F 6BE8 5F01  9843 F060 2201 12B6 1C1F

I've signed the key with my primary key and sent the new key with signature to the key servers. You can also download it at http://code.liw.fi/apt.asc.
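For anyone who wants to double-check before trusting it, a minimal sketch of fetching and inspecting the key might look like this (the gpg call only prints the key's fingerprint for comparison; adapt paths and commands to your own setup):

wget -O /tmp/liw-apt.asc http://code.liw.fi/apt.asc
gpg --with-fingerprint /tmp/liw-apt.asc    # compare the output against the fingerprint above
sudo apt-key add /tmp/liw-apt.asc          # only if the fingerprint matches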

Debian Java Packaging Team: Wheezy LTS and the switch to OpenJDK 7

19 June, 2016 - 05:00

Wheezy's LTS period started a few weeks ago and the LTS team had to make an early support decision concerning the Java ecosystem, since Wheezy ships two Java runtime environments: OpenJDK 6 and OpenJDK 7. (To be fair, there are actually three, but gcj was superseded by OpenJDK a long time ago and the latter should be preferred whenever possible.)

OpenJDK 6 is currently maintained by Red Hat and we mostly rely on their upstream work as well as on package updates from Debian's maintainer Matthias Klose and Tiago Stürmer Daitx from Ubuntu. We already knew that both intend to support OpenJDK 6 until April 2017, when Ubuntu 12.04 will reach its end of life. Thus we had basically two options: supporting OpenJDK 6 for another twelve months or dropping support right from the start. One of my first steps was to ask for feedback and advice on debian-java, since supporting only one JDK seemed to be the more reasonable solution. We agreed on warning users via various channels about the intended change, especially about possible incompatibilities with OpenJDK 7. Even Andrew Haley, OpenJDK 6 project lead, participated in the discussion and confirmed that, while still supported, OpenJDK 6 security releases are "always the last in the queue when there is urgent work to be done".

I informed debian-lts about my findings and issued a call for tests later.

Eventually we decided to concentrate our efforts on OpenJDK 7 because we are confident that, for the majority of our users, one Java implementation is sufficient during a stable release cycle. An immediate positive effect of making OpenJDK 7 the default is that resources can be reallocated to more pressing issues. On the other hand we were also forced to make compromises. The switch to a newer default implementation usually triggers a major transition with dozens of FTBFS bugs, and the OpenJDK 7 transition was no exception. I weighed the usefulness of fixing all these bugs again for Wheezy LTS against focusing on runtime issues instead, and finally decided that the latter was both more reasonable and more economical.

Unlike regular default Java changes, users will still be able to use OpenJDK 6 to compile their packages, and the security impact for development systems is in general negligible. It was more important to avoid runtime installations of OpenJDK 6. I identified eighteen packages that strictly depended on the now obsolete JRE and fixed those issues on 4 May 2016, together with an update of java-common, and announced the switch to OpenJDK 7 in a Debian NEWS file.
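For the curious, such reverse dependencies can be listed with apt-cache; this is shown purely as an illustration, not necessarily the method used for the actual audit:

apt-cache rdepends openjdk-6-jre    # lists packages that depend on the old JRE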

If you are not a regular reader of Debian news and also not subscribed to debian-lts, debian-lts-announce or debian-java, remember 26 June 2016 is the day when OpenJDK 7 will be made the default Java implementation in Wheezy LTS. Of course there is no need to wait. You can switch right now:

sudo update-alternatives --config java
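If you prefer a non-interactive switch, update-alternatives can also set the alternative directly; the path below is the usual location on amd64 and should be treated as an example rather than a guarantee:

sudo update-alternatives --set java /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
java -version    # should now report OpenJDK 7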

Elena 'valhalla' Grandi: StickerConstructorSpec compliant swirl

19 June, 2016 - 04:35
StickerConstructorSpec compliant swirl

This evening I've played around a bit with the Sticker Constructor Specification (https://github.com/terinjokes/StickerConstructorSpec) and its hexagonal sticker template (https://github.com/terinjokes/StickerConstructorSpec/blob/master/assets/hexagonal-sticker_template.svg), and this is the result:

http://social.gl-como.it/photos/valhalla/image/e8ee069635b2823e168c358d2a753f14


Now I just have to:

* find somebody in Europe who prints good stickers and doesn't require Illustrator (or other proprietary software) to submit files for non-rectangular shapes
* find out which Debian team I should contact to submit the files so that they can be used by everybody interested.

But neither will happen today, nor probably tomorrow, because lazy O:-)

Source svg:

Dominique Dumont: An improved Perl API for cme and Config::Model

19 June, 2016 - 00:38

Hello

While hacking on a script to update build dependencies on a Debian package, it occurred to me that using Config::Model in a Perl program should be no more complicated than using cme from a shell script. That was an itch I scratched immediately.

Fast forward a few days: Config::Model now features new cme() and modify() functions that behave like the cme modify command.

For instance, the following program is enough to update popcon’s configuration file:

use strict; # let's not forget best practices ;-)
use warnings;
use Config::Model qw(cme); # cme function must be imported
cme('popcon')->modify("PARTICIPATE=yes");

The object returned by cme() is a Config::Model::Instance. All its methods are available for finer control. For instance:

my $instance = cme('popcon');
$instance->load("PARTICIPATE=yes");
$instance->apply_fixes;
$instance->say_changes; 
$instance->save;

When run as root, the script above shows:

Changes applied to popcon configuration:
- PARTICIPATE: 'no' -> 'yes'

If need be, you can also retrieve the root node of the configuration tree to use Config::Model::Node methods:

my $root_node = cme('popcon')->config_root;
say "is popcon active ? ",$root_node->fetch_element_value('PARTICIPATE');

In summary, using cme in a Perl program is now as easy as using cme from a shell script.

To provide feedback, comments, ideas, or patches, or to report problems, please follow the instructions on the CONTRIBUTING page on GitHub.

All the best


Tagged: config-model, configuration, Perl

Manuel A. Fernandez Montecelo: More work on aptitude

18 June, 2016 - 23:54

The last few months have been a bit of a crazy period of ups and downs, with a tempest of events beneath the apparently and deceptively calm surface waters of being unemployed (still at it).

The daily grind

Chief activities are, of course, those related to the daily grind of job-hunting, sending applications, and preparing and attending interviews.

It is demoralising when one searches for many days or weeks without seeing anything suitable for one's skills or interests, or other more general life expectations. And it takes a lot of time and effort to put one's best into the applications for positions that one is really, really interested in ─ and even into the ones which are meh, for a variety of reasons (e.g. one is not very suitable for what the offer demands).

After that, not being invited to interviews (or doing very badly at them) is bad, of course, but quick and not very painful. A swift, merciful end to the process.

But it's all the more draining when waiting for many weeks ─when not a few months─ with the uncertainty of not knowing if one is going to be lucky enough to be summoned for an interview; harbouring some hope ─one has to appear enthusiastic in the interviews, after all─, while trying to keep it contained ─lest it grow too much─; then in the interview hearing good words and some praises, and getting the impression that one will fit in, that one did nicely and that chances are good ─letting the hope grow again─; starting to think about life changes that the job will require ─to make a quick decision should the offer finally arrive─; perhaps making some choices and compromises based on the uncertain result; then waiting for a week or two after the interview to know the result...

... only to end up being unsuccessful.

All the effort and hopes finally get squashed with a cold, short email or automatic response, or more often than not, complete radio silence from prospective employers, as an end to a multi-month-long process. An emotional roller coaster [1], which happened to me several times in the last few months.

All in a day's work

The months of preparing and waiting for a new job often imply an impasse that puts many other things that one cares about on hold, and one makes plans that will never come to pass.

All in a day's (half-year's?) work of an unemployed poor soul.

But not all is bad.

This period was also a busy time doing some plans about life, mid- and long-term; the usual ─and some really unusual!─ family events; visits to and from friends, old and new; attending nice little local Debian gatherings or the bigger gathering of Debian SunCamp2016, and other work for side projects or for other events that will happen soon...

And amidst all that, I managed to get some work done on aptitude.

Two pictures worth (less than) a thousand bugs

To be precise, worth 709 bugs ─ 488 bugs in the first graph, plus 221 in the second.

On 2015-11-15 (link to the post Work on aptitude):

On 2016-06-18:

Numbers

The BTS numbers for aptitude right now are:

  • 221 bugs in total (259 if counting all merged bugs independently)
  • 1 Release Critical (but it is an artificial bug to keep it from migrating to testing)
  • 43 (55 unmerged) with severity Important or Normal
  • 160 (182 unmerged) with severity Minor or Wishlist
  • 17 (21 unmerged) marked as Forwarded or Pending
Highlights

Beyond graphs and stats, I am especially happy about two achievements in the last year:

  1. To have aptitude working today, first and foremost

    Apart from the abandonment it suffered in previous years, I mean specifically the critical step of getting it through the troubles of last summer, with the GCC-5/C++11 transition in parallel with a transition of the Boost library (explained in more detail in Work on aptitude).

    Without that, possibly aptitude would not have survived until today.

  2. Improvements to the suggestions of the resolver

    In version 0.8 there were a lot of changes related to improving the order of the suggestions from the resolver when it finds conflicts or other problems with the planned actions.

    Historically, but especially in the last few years, there have been many complaints about nonsensical or dangerous suggestions from the resolver. The first solution offered by the resolver was very often regarded as highly undesirable (for example, removal of many packages), with preferable solutions, like upgrades of one or only a handful of packages, offered only after many removals, and “keeps” offered only as a last resort.

Perhaps these changes don't get a lot of attention, given that in the first case it's just about keeping things working (with few people realising that it could have collapsed on the spot if left unattended), and the second can probably go unnoticed because “it just works” or “it started to work more smoothly” doesn't get as much immediate attention as “it suddenly broke!”.

Still, I wanted to mention them, because I am quite proud of those.

Thanks

Even though I put a lot of work into aptitude in the last year, the results in the graphs and numbers have not been achieved by me alone.

Special thanks go to Axel Beckert (abe / XTaran) and the apt team, David Kalnischkies and Julian Andres Klode ─ who, despite the claim on that page, no longer works mostly on python-apt, but also on the main tools.

They helped by fixing some of the issues directly, changing things in apt that benefit aptitude, testing changes, triaging bugs or commenting on them, patiently explaining to me why something in libapt doesn't do what I think it does, and by being good company in general.

Not least, for holding impromptu BTS group therapy / support meetings, for those cases when prolonged exposure to BTS activity starts to induce very bad feelings.

Thanks also to the people who sent translation updates, notified me about corrections, sent or tested patches, submitted bugs, or tried to help in other ways. See the change logs for details.

Notes

[1] ^ It's even an example on the Cambridge Dictionaries Online website, in the entry for roller coaster:

He was on an emotional roller coaster for a while when he lost his job.

Kevin Avignon: GSOC 2015 : From NRefactory 6 to RefactoringEssentials

18 June, 2016 - 23:20
Hey guys, in the spirit of my withdrawal from the Google Summer of Code program this summer, I thought I’d do a piece on the project I successfully completed last summer. So what brought me to the program last year? I spent a few weeks working on a new thing in .NET called Roslyn. […]

Sune Vuorela: R is for Randa

18 June, 2016 - 16:23

This week I have been gathered with 38 KDE people in Randa, Switzerland. Randa is a place in a valley in the middle of the Alps, close to various peaks like the Matterhorn. It has been a week of intense hacking, bugfixing, brainstorming and a bit of enjoying nature.

R is for Reproducible builds

I spent the first couple of days trying to get the Qt documentation generation tool to generate documentation reproducibly. Some of the fixes were of the usual ‘put data in a randomized data structure, then iterate over it and create output’ kind, where the fix is similarly well known: sort the data structure first. Others were more severe bugs that led to the documentation shuffling around the ‘obsolete’ bit and the inheritance chains. Most of these fixes have been reviewed and submitted to the Qt 5.6 branch; one is still pending review, but that hopefully gets fixed soon. Then most of Qt (except things containing copies of (parts of) webkit and derivatives) should be reproducible.

R is for Roaming around in the mountains

Sleeping, hacking and dining in the same building sometimes leads to an enormous desire

Matthias Klumpp: A few words about the future of the Limba project

18 June, 2016 - 03:24

I have been meaning to write this blog post since April, and even announced it in two previous posts, but never got around to actually writing it until now. And with the recent events in Snappy and Flatpak land, I can not defer this post any longer (unless I want to answer the same questions over and over on IRC ^^).

As you know, I have been developing the Limba 3rd-party software installer since 2014 (see this LWN article explaining the project better than I could), a spiritual successor to the Listaller project, which had been in development since roughly 2008. Limba got some competition from Flatpak and Snappy, so it’s natural to ask what the project’s next steps will be.

Meeting with the competition

At last FOSDEM and at the GNOME Software sprint this year in April, I met with Alexander Larsson and we discussed the rather unfortunate situation we got into, with Flatpak and Limba being in competition.

Both Alex and I have been experimenting with 3rd-party app distribution for quite some time, with me working on Listaller and him working on Glick and Glick2. None of these projects ever went anywhere. Around the time I started Limba, fixing design mistakes made with Listaller, Alex started a new attempt at software distribution, this time with sandboxing added to the mix and a new OSTree-based design of the software-distribution mechanism. It wasn’t at all clear that XdgApp, later renamed to Flatpak, would get huge backing from GNOME and later Red Hat, becoming a very promising candidate for a truly cross-distro software distribution system.

The main difference between Limba and Flatpak is that Limba allows modular runtimes, with things like the toolkit, helper libraries and programs being separate modules which can be updated independently. Flatpak, on the other hand, allows just one static runtime and requires everything that is not already in the runtime to be bundled with the actual application. So, while a Limba bundle might depend on multiple other individual bundles, Flatpak bundles only have one fixed dependency on a runtime. Getting a compromise between those two concepts is not possible, and since the modular vs. static approaches in Limba and Flatpak were fundamental, conscious design decisions, merging the projects was not possible either.

Alex and I had very productive discussions, and except for the modularity issue, we were pretty much on the same page in every other aspect regarding the sandboxing and app-distribution matters.

Sometimes stepping out of the way is the best way to achieve progress

So, what to do now? Obviously, I can continue to push Limba forward, but given all the other projects I maintain, this seems to be a waste of resources (Limba eats a lot of my spare time). Now that Flatpak and Snappy are available, I am basically competing with Canonical and Red Hat, who can make much more progress faster than I can as a single developer. Also, Flatpak’s bigger base of contributors compared to Limba is a clear sign of which project the community favors more.

Furthermore, I started the Listaller and Limba projects to scratch an itch. When I was new to Linux, it was very annoying to see some applications only being made available in compiled form for one distribution, and sometimes one that I didn’t use. Getting software was incredibly hard for me as a newbie, and using the package manager was also unusual back then (no software center apps existed, only package lists). If you wanted to update one app, you usually needed to update your whole distribution, sometimes even to a development version or rolling-release channel, sacrificing stability.

So, if this issue now gets solved by someone else in a good way, there is no point in pushing my solution hard. I developed a tool to solve a problem, and it looks like another tool will fix that issue before mine does, which is fine, because this longstanding problem will finally be solved. And that’s the thing I actually care about most.

I still think Limba is the superior solution for app distribution, but it is also the one that is most complex and requires additional work by upstream projects to use it properly. Which is something most projects don’t want, and that’s completely fine.

And that being said: I think Flatpak is a great project. Alex has much experience in this matter, and the design of Flatpak is sound. It solves many issues 3rd-party app development faces in a pretty elegant way, and I like it very much for that. Also the focus on sandboxing is great, although that part will need more time to become really useful. (Aside from that, working with Alexander is a pleasure, and he really cares about making Flatpak a truly cross-distributional, vendor independent project.)

Moving forward

So, what I will do now is not to stop Limba development completely, but keep it going as a research project. Maybe one can use Limba bundles to create Flatpak packages more easily. We also discussed having Flatpak launch applications installed by Limba, which would allow both systems to coexist and benefit from each other. Since Limba (unlike Listaller) was also explicitly designed for web-applications, and therefore has a slightly wider scope than Flatpak, this could make sense.

In any case though, I will invest much less time in the Limba project. This is good news for all the people out there using the Tanglu Linux distribution, AppStream-metadata-consuming services, PackageKit on Debian, etc. – those will receive more attention.

An integral part of Limba is a web service called “LimbaHub” to accept new bundles, do QA on them and publish them in a public repository. I will likely rewrite it to be a service using Flatpak bundles, maybe even supporting Flatpak bundles and Limba bundles side-by-side (and if useful, maybe also support AppImageKit and Snappy). But this project is still on the drawing board.

Let’s see

P.S.: If you come to DebConf in Cape Town, make sure not to miss my talks about AppStream and bundling.

Kevin Avignon: Augmented Tactics: A role playing game prototype

18 June, 2016 - 03:23
Hi, today I’ll be talking about a special project I have not only designed from A to Z but also implemented. Well, it’s not as big as this sentence lets it sound. I worked on developing a prototype of a mobile game in augmented reality. Being a huge fan of tactical RPGs, I found it […]

Shirish Agarwal: Medical awareness, terrorism, racism and Debconf

17 June, 2016 - 23:59
Hi to all the souls on planet.debian.org. I hope to meet many of you at DebConf16, which is being held at UCT (University of Cape Town), Rondebosch, South Africa, and am excited to be part of it. I pushed a blog post about my journey to DebConf so far. I express my sympathy and […]

John Goerzen: Mud, Airplanes, Arduino, and Fun

16 June, 2016 - 11:00

The last few weeks have been pretty hectic in their way, but I’ve also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud

For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty.

Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then…. water. They put the hose over the slide and made a “water slide” (more mud slide maybe).

Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours.

Flying to Petit Jean, Arkansas

Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it’s rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain.

And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning:

And here’s a view of one of the trails:

The sunset views were pretty nice, too:

And finally, the plane we flew out in, parked all by itself on the ramp:

It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison

Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too.

Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River.

I was amused to find this hanging in the county historical society museum:

One fascinating find is a Regina Music Box, popular in the late 1800s and early 1900s. It operates on the same principles as the cylindrical ones you might have seen. But I am particularly impressed with the effort that would go into developing these discs in the pre-computer era, as of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum:

An Arduino Project with Jacob

One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his “police station”, so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we go. Then he noticed an LCD in my Arduino kit. I hadn’t ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off:

Reproducible builds folks: Reproducible builds: week 59 in Stretch cycle

16 June, 2016 - 06:27

What happened in the Reproducible Builds effort between June 5th and June 11th 2016:

Media coverage

Ed Maste gave a talk at BSDCan 2016 on reproducible builds (slides, video).

GSoC and Outreachy updates

Weekly reports by our participants:

  • Scarlett Clark worked on making some packages reproducible, focusing on KDE backend and utility programs.
  • Ceridwen published an initial design for the interface for reprotest, including a discussion on different types of build variations and the difficulties of specifying certain types of variations.
  • Valerie Young improved documentation for building our tests website, began migrating Debian-specific pages into a new namespace, and planned future work around its navigation.
Documentation update
  • Ximin Luo added some proposed text to our source-date-epoch-spec repository explaining FORCE_SOURCE_DATE.

    Some upstream build tools (e.g. TeX, see below) have expressed a desire to control which cases of embedded timestamps should obey SOURCE_DATE_EPOCH. They were not convinced by our arguments on why this is a bad idea, so we agreed on an environment variable FORCE_SOURCE_DATE for them to implement their desired behaviour - neutrally-named, so that at least we can set it centrally. For more details, see the text just linked. However, we strongly urge most build tools not to use this, and instead obey SOURCE_DATE_EPOCH unconditionally in all cases.
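For readers unfamiliar with these variables: in a Debian-style package build they are typically just environment variables exported before the build starts. A rough illustration follows; the dpkg-parsechangelog call is one common way to derive the timestamp, and FORCE_SOURCE_DATE is only relevant for tools such as TeX that require the extra opt-in:

export SOURCE_DATE_EPOCH=$(date -d "$(dpkg-parsechangelog -SDate)" +%s)
export FORCE_SOURCE_DATE=1    # opt-in for tools that will not honour SOURCE_DATE_EPOCH unconditionally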

Toolchain fixes
  • TeX Live 2016 released with SOURCE_DATE_EPOCH support for all engines except LuaTeX and original TeX.
  • Continued discussion (alternative archive) with TeX upstream, about SOURCE_DATE_EPOCH corner cases, eventually resulting in the FORCE_SOURCE_DATE proposal from above.
  • gcc-5/5.4.0-4 by Matthias Klose now avoids storing -fdebug-prefix-map in DW_AT_producer, thanks to original patch by Daniel Kahn Gillmor.
  • sphinx/1.4.3-1 by Dmitry Shachnev now drops Debian-specific patches relating to SOURCE_DATE_EPOCH applied upstream, original patch by Alexis Bienvenüe.
  • asciidoctor/1.5.4-2 by Cédric Boutillier now supports SOURCE_DATE_EPOCH, thanks to original patch by Alexis Bienvenüe.
  • dh-python/1.5.4-2 by Piotr Ożarowski now behaves better in some cases, thanks to original patch by Chris Lamb.
Packages fixed

The following 16 packages have become reproducible due to changes in their build-dependencies: apertium-dan-nor apertium-swe-nor asterisk-prompt-fr-armelle blktrace canl-c code-saturne coinor-symphony dsc-statistics frobby libphp-jpgraph paje.app proxycheck pybit spip tircd xbs

The following 5 packages are new in Debian and appear to be reproducible, but we'd need more data to be sure: golang-github-bowery-prompt golang-github-pkg-errors golang-gopkg-dancannon-gorethink.v2 libtask-kensho-perl sspace

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #806331 against xz-utils by Ximin Luo: make the selected POSIX shell stable across build environments
  • #806494 against gnupg by intrigeri: Make man pages not embed a build-time dependent timestamp
  • #806945 against bash by Reiner Herrmann and Ximin Luo: Use the system man2html, and set PGRP_PIPE unconditionally.
  • #825857 against python-setuptools by Anton Gladky: sort libs in native_libs.txt
  • #826408 against brainparty by Reiner Herrmann: Sort object files for deterministic linking order
  • #826416 against blockout2 by Reiner Herrmann: Sort the list of source files
  • #826418 against xgalaga++ by Reiner Herrmann: Sort source files to get a deterministic linking order
  • #826423 against kraptor by Reiner Herrmann: Sort source files for deterministic linking order
  • #826431 against traceroute by Reiner Herrmann: Sort lists of libraries/source/object files
  • #826544 against doc-debian by intrigeri: make the created files stable regardless of the locale
  • #826676 against python-openstackclient by Chris Lamb: make the build reproducible
  • #826677 against cadencii by Chris Lamb: make the build reproducible
  • #826760 against dctrl-tools by Reiner Herrmann: Sort object files for deterministic linking order
  • #826951 against slicot by Alexis Bienvenüe: please make the build reproducible (fileordering)
  • #826982 against hoichess by Reiner Herrmann: Sort object files for deterministic linking order
Package reviews

68 reviews have been added, 19 have been updated and 28 have been removed this week. New and updated issues:

26 FTBFS bugs have been reported by Chris Lamb, 1 by Santiago Vila and 1 by Sascha Steinbiss.

diffoscope development
  • Mattia Rizzolo uploaded diffoscope/54 to jessie-backports.
strip-nondeterminism development
  • Mattia uploaded strip-nondeterminism/0.018-1 to jessie-backports, to support a debhelper backport.
  • Andrew Ayer uploaded strip-nondeterminism/0.018-2 fixing #826700, a packaging improvement for Multi-Arch to ease cross-build situations.
  • 2 days later Andrew released strip-nondeterminism/0.019; now strip-nondeterminism is able to:
    • recursively normalize JAR files embedded within JAR files (#823917)
    • clamp the timestamp, the same way tar 1.18 can (for now available only for gzip archives)
disorderfs development
  • Andrew Ayer released disorderfs/0.4.3, fixing an issue with umask handling (#826891)
tests.reproducible-builds.org
  • Valerie Young moved Debian pages to /debian/ namespace, with redirects to keep the old URLs working.
  • Holger Levsen improved the reliability of build jobs: the availability of both build nodes (for a given build) is now being tested when a build job is started, to better cope when one of the 25 build nodes goes down for some reason.
  • Ximin Luo improved the index of identified issues to include the total popcon scores of each issue, which is now also used for sorting that page.
Misc.

Steven Chamberlain submitted a patch to FreeBSD's makefs to allow reproducible builds of the kfreebsd installer.

Ed Maste committed a patch to FreeBSD's binutils to enable deterministic archives by default in GNU ar.

Helmut Grohne experimented with cross+native reproductions of dash with some success, using rebootstrap.

This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen, Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC.

Enrico Zini: Verifying gpg keys

16 June, 2016 - 02:47

Suppose you have a gpg keyid like 9F6C6333 that corresponds to both key 1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333 and 88BB08F633073D7129383EE71EA37A0C9F6C6333, and you don't know which of the two to use.

You go to http://pgp.cs.uu.nl/ and find out that the site uses short key IDs, so the two keys are indistinguishable.

Building on Clint's hopenpgp-tools, I made a script that screenscrapes http://pgp.cs.uu.nl/ for trust paths, downloads all the potentially connecting keys into a temporary keyring, and runs hkt findpaths on it:

$ ./verify-trust-paths 1793D6AB75663E6BF104953A634F4BD1E7AD5568 1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333
hkt (hopenpgp-tools) 0.18
Copyright (C) 2012-2016  Clint Adams
hkt comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions.
(4,[1,4,3,6])

(1,1793D6AB75663E6BF104953A634F4BD1E7AD5568)
(3,F8921D3A7404C86E11352215C7197699B29B232A)
(4,C331BA3F75FB723B5873785B06EAA066E397832F)
(6,1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333)

$ ./verify-trust-paths 1793D6AB75663E6BF104953A634F4BD1E7AD5568 88BB08F633073D7129383EE71EA37A0C9F6C6333
hkt (hopenpgp-tools) 0.18
Copyright (C) 2012-2016  Clint Adams
hkt comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions.
(0,[])

This is a start: it could look in the local keyring for all ultimately trusted key fingerprints and use those as starting points. It could just take a short keyid as an argument and automatically check all matching fingerprints.
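As a sketch of the first idea (not yet part of the script), the ultimately trusted fingerprints could be read from the local keyring like this; ownertrust value 6 means ultimate trust:

gpg --export-ownertrust | awk -F: '$2 == 6 { print $1 }'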

I'm currently quite busy with https://nm.debian.org and at the moment verify-trust-paths scratches enough of my itch that I can move on with my other things.

Please send patches, or take it over: I'd like to see this grow.

Steve Kemp: So I should document the purple server a little more

15 June, 2016 - 23:15

I should probably document the purple server I hacked together in Perl and mentioned in my last post. In short, it allows you to centralise notifications: send "alerts" to it, and when they are triggered they will be routed from that central location. Only a primitive notifier is included, which sends data to the console, but there are sample stubs for sending by email/pushover, and for escalation.

In brief you create alerts by sending a JSON object via HTTP-POST. These objects contain a bunch of fields, but the two most important are:

  • id
    • A human-name for the alert. e.g. "disk-space", "heartbeat", or "unread-mail".
  • raise
    • When to raise the alert. e.g. "now", "+5m", "1466006086".

When an update is received any existing alert has its values updated, which makes heartbeat alerts trivial. Send a message with:

{ "id": "heartbeat", "raise": "+5m", .. }

The existing alert will be updated each time such an event is submitted, which means that the time at which that alert will raise is pushed back by five minutes. If you send this every 60 seconds then you'll be informed of an outage five minutes after your server explodes (because the "+5m" will have been turned into an absolute time, and that time will eventually be in the past, triggering a notification).
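As an illustration, such a heartbeat sent from cron could be a simple HTTP POST; the endpoint URL here is purely hypothetical, since it depends on where and how the server is deployed:

curl -X POST -H 'Content-Type: application/json' \
     -d '{"id": "heartbeat", "raise": "+5m"}' \
     http://alerts.example.com/events    # hypothetical endpoint; run e.g. every 60 seconds from cron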

Alerts are keyed on the source IP which sent the submission and the id field, meaning you can send the same update from multiple hosts without causing any problems.

Notifications can be viewed in a reasonably pretty web UI, so you can clear raised alerts, see the pending ones, and suppress further notifications on something that has been raised. (By default notifications are issued every sixty seconds until the alert is cleared. There is support for raising an alert only once, which is useful for services you might deliver events via, such as pushover, which repeat notifications themselves.)

Anyway this is a fun project, which is a significantly simplified and less scalable version of a project which is open-sourced already and used at Bytemark.

Andrew Shadura: Migrate to systemd without a reboot

15 June, 2016 - 18:51

Yesterday I was fixing an issue with one of the servers behind kallithea-scm.org: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket stopped working. Until yesterday, that server was using Debian’s flavour of System V init and djb’s dæmontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by dæmontools, so that concurrency issues would be solved by it. However, I didn’t implement any timeouts, so when wget froze last week while pulling Weblate’s hook, there was nothing to interrupt it, and the hook stopped working since dæmontools thought it was already running and wouldn’t re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening again in the future.

I’ve been using systemd at work for the last year, so I am now confident I’m happier with systemd than with dæmontools, and I decided to switch the server to systemd. Not surprisingly, I prepared unit files in about 5 minutes without having to look into the manuals again, while with dæmontools I had to check things every time I needed to change something. The tricky thing was the switch itself. It is a virtual server, presumably running in Xen, and I don’t have access to the console, so if I break something, I need to summon Bradley Kuhn or someone from Conservancy, which has kindly donated the server to the project. In any case, I decided to attempt the upgrade without a reboot, so that I would have more options to roll back my changes in case things went wrong.

After studying the manpages of both systemd’s init and sysvinit’s init, I realised I could install systemd as /sbin/init and ask the already running System V init to re-exec. However, systemd’s init can’t talk to System V init, so before installing systemd I made a backup of it. It’s also important to stop all running services (except probably ssh) to make sure systemd doesn’t start second instances of each. And then: /tmp/init u — and we’re running systemd! A couple of additional checks, and it’s safe to reboot.
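Condensed into commands, the sequence was roughly the following; this is a reconstruction from the description above, and the package names are an assumption since the post doesn’t say exactly how systemd was installed:

cp /sbin/init /tmp/init                  # keep the sysvinit binary around as an escape hatch
apt-get install systemd systemd-sysv     # systemd-sysv makes /sbin/init point at systemd
# stop the running services (except ssh) so systemd doesn't start second instances
/tmp/init u                              # ask PID 1 (still sysvinit) to re-exec /sbin/init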

Only when I had done all that did I realise that, if systemd had not worked, I would probably not have been able to undo my changes if my connection had been interrupted. So, even though in the end it worked, it’s probably not a good idea to perform such manipulations when you don’t have an alternative way to connect to the server :)

Enrico Zini: On discomfort and new groups

15 June, 2016 - 15:14

I recently wrote:

When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop.

Last night I asked a group of friends what they do if they start feeling uncomfortable when they are alone in a group of people they don't know. Here are the replies:

  • Wait outside the group until I figure the group out.
  • Find someone to talk to for a while until you get comfortable.
  • If a person is making things uncomfortable for you, let them know, and leave if nobody cares.
  • Sit there in silence.
  • Work around unwelcome people by bearing them for a bit while trying to integrate with others.
  • Some people are easy to bribe into friendship, just bring cake.
  • While you don't know what is going on, you try to replicate what others are doing.
  • Spend time trying to get a feeling of what are the consequences of taking actions.
  • Purposefully disagree with people in a new environment to figure out if having a different opinion is accepted.
  • Once I was new and I was asked to be the person that invites everyone for lunch, that forced me to talk to everyone, and integrate.
  • When you are the first one to point something out, you'll probably soon find out you're not alone.
  • The reaction on the first time something is exposed, influences how often similar cases will be reported.

I think a lot of these points are good food for thought about barriers to entry, and about safety nets that a group has or might want to have.

Russ Allbery: Review: Matter

15 June, 2016 - 10:37

Review: Matter, by Iain M. Banks

Publisher: Orbit
Copyright: February 2008
ISBN: 0-316-00536-3
Format: Hardcover
Pages: 593

Sursamen is an Arithmetic, Mottled, Disputed, Multiply Inhabited, Multi-million Year Safe, and Godded Shellworld. It's a constructed world with multiple inhabitable levels, each lit by thermonuclear "suns" on tracks, each level supported above the last by giant pillars. Before the recorded history of the current Involved Species, a culture called the Veil created the shellworlds with still-obscure technology for some unknown purpose, and then disappeared. Now, they're inhabited by various transplants and watched over by a hierarchy of mentor and client species. In the case of Sursamen, both the Aultridia and the Oct claim jurisdiction (hence "Disputed"), and are forced into an uneasy truce by the Nariscene, a more sophisticated species that oversees them both.

On Sursamen, on level eight to be precise, are the Sarl, a culture with an early industrial level of technology in the middle of a war of conquest to unite their level (and, they hope, the next level down). Their mentors are the Oct, who claim descent from the mysterious Veil. The Deldeyn, the next level down, are mentored by the Aultridia, a species that evolved from a parasite on Xinthian Tensile Aranothaurs. Since a Xinthian, treated by the Sarl as a god, lives in the heart of Sursamen (hence "Godded"), tensions between the Sarl and the Aultridians run understandably high.

The ruler of the Sarl had three sons and a daughter. The oldest was killed by the people he is conquering as Matter starts. The middle son is a womanizer and a fop who, as the book opens, watches a betrayal that he's entirely unprepared to deal with. The youngest is a thoughtful, bookish youth pressed into a position that he also is not well-prepared for.

His daughter left the Sarl, and Sursamen itself, fifteen years previously. Now, she's a Special Circumstances agent for the Culture.

Matter is the eighth Culture novel, although (like most of the series) there's little need to read the books in any particular order. The introduction to the Culture here is a bit scanty, so you'll have more background and understanding if you've read the previous novels, but it doesn't matter a great deal for the story.

Sharp differences in technology levels have turned up in previous Culture novels (although the most notable example is a minor spoiler), but this is the first Culture novel I recall where those technological differences were given a structure. Usually, Culture novels have Special Circumstances meddling in, from their perspective, "inferior" cultures. But Sursamen is not in Culture space or directly the Culture's business. The Involved Species that governs Sursamen space is the Morthenveld: an aquatic species roughly on a technology level with the Culture themselves. The Nariscene are their client species; the Oct and Aultridia are, in turn, client species (well, mostly) of the Nariscene, while meddling with the Sarl and Deldeyn.

That part of this book reminded me of Brin's Uplift universe. Banks's Involved Species aren't the obnoxious tyrants of Brin's universe, and mentoring doesn't involve the slavery of the Uplift universe. But some of the politics are a bit similar. And, as with Uplift, all the characters are aware, at least vaguely, of the larger shape of galactic politics. Even the Sarl, who themselves have no more than early industrial technology. When Ferbin flees the betrayal to try to get help, he ascends out of the shellworld to try to get assistance from an Involved species, or perhaps his sister (which turns out to be the same thing). Banks spends some time here, mostly through Ferbin and his servant (who is one of the better characters in this book), trying to imagine what it would be like to live in a society that just invented railroads while being aware of interstellar powers that can do practically anything.

The plot, like the world on which it's set, proceeds on multiple levels. There is court intrigue within the Sarl, war on their level and the level below, and Ferbin's search for support and then justice. But the Sarl live in an artifact with some very mysterious places, including the best set piece in the book: an enormous waterfall that's gradually uncovering a lost city on the level below the Sarl, and an archaeological dig that proceeds under the Deldeyn and Sarl alike. Djan Seriy decides to return home when she learns of events in Sarl, originally for reasons of family loyalty and obligation, but she's a bit more in touch with the broader affairs of the galaxy, including the fact that the Oct are acting very strangely. There's something much greater at stake on Sursamen than tedious infighting between non-Involved cultures.

As always with Banks, the set pieces and world building are amazing, the scenery is jaw-dropping, and I have some trouble warming to the characters. Dramatic flights across tower-studded landscapes seeking access to forbidden world-spanning towers largely, but not entirely, make up for not caring about most of the characters for most of the book. This did change, though: although I never particularly warmed to Ferbin, I started to like his younger brother, and I really liked his sister and his servant by the end of the book.

Unfortunately, the end of Matter is, if not awful, at least exceedingly abrupt. As is typical of Banks, we get a lot of sense of wonder but not much actual explanation, and the denouement is essentially nonexistent. (There is a coy epilogue hiding after the appendices, but it mostly annoyed me and provides only material for extrapolation about the characters.) Another SF author would have written a book about the Xinthian, the Veil, the purpose of the shellworlds, and the deep history of the galaxy. I should have known going in that Banks isn't that sort of SF author, but it was still frustrating.

Still, Banks is an excellent writer and this is a meaty, complex, enjoyable story with some amazing moments of wonder and awe. If you like Culture novels in general, you will like this. If you like set-piece-heavy SF on a grand scale, such as that of Alastair Reynolds or Kim Stanley Robinson, you will probably like this. Recommended.

Rating: 8 out of 10

Joey Hess: second system

15 June, 2016 - 04:15

Five years ago I built this, and it's worked well, but is old and falling down now.

mark I outdoor shower

The replacement is more minimalist and like any second system tries to improve on the design of the first. No wood to rot away, fully adjustable height. It's basically a shower swing, suspended from a tree branch.

mark II outdoor shower

Probably will turn out to have its own new problems, as second systems do.

Olivier Grégoire: Fourth week at GSoC: let's integrate this new window!

14 June, 2016 - 23:57

I began by trying to integrate my window as a new view. That was not a good idea, because my UI is actually integrated in the call view. So, I tried to integrate it like the buttons present on the view. That was not a really good idea either; I had some issues with Clutter doing strange things. Finally, I worked in a new window. I think that's a good idea because, when you want to debug Ring, it's easier with a separate window.

I ran into some conflicts between my clean installation from the official website and my test installation. In response to this issue, I began installing my test build in another folder.

Simon McVittie: GTK versioning and distributions

14 June, 2016 - 08:56

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed.

The problem we're trying to address here is the two sides of a trade-off:

  • Without new development, a library (or an OS) can't go anywhere new
  • New development sometimes breaks existing applications

Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.)

In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing).

The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu.

In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable.

The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice.

How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt.

For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop.

At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long.

Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.