Planet Debian

Planet Debian - http://planet.debian.org/

Scarlett Clark: KDE: Debian: *ubuntu snappy: Reproducible builds, Randa! and much more…

22 June, 2016 - 23:47

#Randa2016 KDE Sprint

Debian:

I am very late on this post due to travel, flu, and jetlag. Sorry!

choqok:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825322
For this I was able to come up with a patch for kconfig_compiler to encode generated files to utf-8.
Review request is here:
https://git.reviewboard.kde.org/r/128102/
This has been approved and I will be pushing it as soon as I patch the qt5 frameworks version.

Both the kde4libs and KF5 kconfig patches have been pushed upstream to KDE.

kxmlgui:

WIP: this has been a steep learning curve! According to the notes it was an easy embedded kernel version; that was not the case! After grueling hours of trying to sort out randomness in the debug output, I finally narrowed it down to cases where QStringLiteral was used with non-letter characters, e.g. (" <"). These were causing debug symbols to be generated with ( lambda() ), which caused unreproducible symbol/debug files. It is now a case of fixing all of these in the code; using QString::fromUtf8 instead seems to fix it. I am working on a mega patch for upstream, and it should be ready early in the week.

This last week I spent a large portion making my way through that mega patch for kxmlgui, when it was suggested to me to write a small Qt app to test QStringLiteral in isolation, and sure enough two builds were byte-for-byte identical. So QStringLiteral may not be the issue at all. With some more assistance I am going to expand my test app with several QStringLiterals of varying lengths; we suspect it is a padding issue, which complicates things.

I am still fighting with this one and will set it aside to simmer for now, as I have no idea how to fix padding issues.
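
For illustration, the byte-for-byte check itself can be as simple as hashing everything in two independently built trees; a minimal sketch (illustrative only, not the actual tooling used here):

    # A sketch of a byte-for-byte comparison of two build trees
    # (illustrative only, not the actual tooling used here): hash
    # every file in both trees and report any artifact that differs.
    import hashlib
    import os
    import sys

    def tree_hashes(root):
        hashes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                with open(path, 'rb') as f:
                    hashes[rel] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    first = tree_hashes(sys.argv[1])    # first build
    second = tree_hashes(sys.argv[2])   # second build
    for rel in sorted(set(first) | set(second)):
        if first.get(rel) != second.get(rel):
            print('differs:', rel)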

extra-cmake-modules:
I am testing a patch to fix umask issues for anyone who uses the kapptemplate generation macro. Thank you, Simon, for pointing me to this.
known affected:
plasma-framework

kdevplatform:
The patch fixing users/groups and umask in kapptemplate generation has been pushed upstream.
https://bugs.kde.org/show_bug.cgi?id=363615

KDE Randa!:
Despite coming down with a terrible flu, I accomplished more than I would have at home, thanks to the awesome devs who helped me out!

  • I have delegated the Windows backend to Hannah and Kevin; if emerge is successful on Windows, we will implement it on OSX as well.
  • Android docker image is up and running.
  • Several snappy packages done. Improved the snapcraft.yaml creation automation scripts started by Harald. Got help from David (he even made a patch!) with some issues we were facing with kio.
  • KDE CI DSL adjustments for 5 new platforms
  • Port tools/* python scripts to python3

CI TODO:

  • The Python automation scripts can no longer find any projects except qt5… I need to get help from Ben, as these are originally his.
  • Finish yaml CI files

Randa as usual was an amazing experience. Yes it is very hard work, but you have the beauty of the Swiss Alps at your fingertips! Not to mention all the
friendly faces and collaboration. A big thank you to all supporters and the Randa team!

Please help make KDE better by supporting the very important Randa Sprint:
https://www.kde.org/fundraisers/randameetings2016/

Have a great day.

Paul Wise: DebCamp16 day -1

22 June, 2016 - 23:34

Landed late due to technical delays. Mountains! Mountains are everywhere! Beautiful sunny day with clear blue skies. Ran into Valessio as I was shown to my room. Wandered around the campus for a bunch of hours. Ate an all you can eat yum buffet lunch at the pub. Wandered down the hill and ended up on the train and wandering around a lake with lilies in a park. Arriving back at UCT we ran into a beer mission along with some wonderful arriving folks. The warm DebConf nervous centre was quite inviting and soon had plentiful beer, pizza and discussion.

Joey Hess: twenty years of free software -- part 3 myrepos

22 June, 2016 - 23:24

myrepos is kind of just an elaborate foreach (@myrepos) loop, but its configuration and extension, in a sort of hybrid between an .ini file and a shell script, is quite nice, and plenty of other people have found it useful.

I had to write myrepos when I switched from subversion to git, because git's submodules are too limited to meet my needs, and I needed a tool to check out and update many repositories, not necessarily all using the same version control system.

It was called "mr" originally, but I renamed the package because it's impossible to google for "mr". This is the only software I've ever renamed.

Next: twenty years of free software -- part 4 ikiwiki-hosting

Andrew Cater: "But I'm a commercial developer / a government employee"

22 June, 2016 - 22:48
Following on:

Having seen some posts about this elsewhere on the 'Net:

  • Your copyright remains your own unless you assign it
  • Establish what you are being paid for: are you being paid for:
  1. Your specific area of FLOSS expertise (or)
  2. Your time / hours in an area unrelated to your FLOSS expertise (or)
  3. A job that has no impact or bearing on your FLOSS expertise (or)
  4. Your time / hours only - and negotiate accordingly
Your employer may be willing to negotiate / grant you an opt-out clause to protect your FLOSS expertise / accept an additional non-exclusive licence to your FLOSS code / be prepared to sign an assignment, e.g.:

"You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright
interest in the program `Gnomovision'
(which makes passes at compilers) written
by James Hacker.

signature of Ty Coon, 1 April 1989
Ty Coon, President of Vice"
 
[http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html]

If none of the above is feasible: don't contribute anything that crosses the streams and mingles commercial and FLOSS expertise, however much you're offered to do so.

Patents / copyrights

"In the 1980s I had not yet realized how confusing it was to speak of “the issue” of “intellectual property”. That term is obviously biased; more subtle is the fact that it lumps together various disparate laws which raise very different issues. Nowadays I urge people to reject the term “intellectual property” entirely, lest it lead others to suppose that those laws form one coherent issue. The way to be clear is to discuss patents, copyrights, and trademarks separately. See further explanation of how this term spreads confusion and bias."
 [http://www.gnu.org/gnu/manifesto.en.html - footnote 8.]

If you want to assert a patent - it's probably not FLOSS. Go away :)

If you want to assert a trademark of your own - it's probably not FLOSS. Go away :)
 [Trademarks may ordinarily be outside the scope of normal FLOSS legal considerations - but should be acknowledged wherever they occur both as a matter of law and as a matter of courtesy]

Copyright gives legal standing (locus standi in the terminology of English common law) to sue for infringement - that's the basis of licence enforcement actions.

Employees of governments and those doing government work
  • Still have the right to own authorship and copyrights and to negotiate accordingly
  • May need to establish more clearly what they're being paid for
  • May be able to advise, influence or direct policy towards FLOSS in their own respective national jurisdiction
  • Should, ideally, be primarily acknowledged as individuals, holding and maintaining an individual reputation, and only secondarily as contractors/employees/others associated with government work.
  • Contribution to national / international standards, international agreements and shared working practices should be informed in the light of FLOSS work.
This is complex: some FLOSS contributors see a significant amount of this as immaterial to them, in the same way that some indigenous populations do not acknowledge imposed colonial legal structures as valid - but both value systems can co-exist.




Andrew Cater: How to share collaboratively

22 June, 2016 - 22:19
Following on:

When contributing to mailing lists and fora:
  • Contribute constructively - no one likes to be told "You've got a REALLY ugly baby there" or equivalent.
  • Think through what you post: check references and check that it reads clearly and is spelled correctly
  • Add value
When contributing bug reports:
  • Provide as full details of hardware and software as you have
  • Answer questions carefully: Ask questions the smart way: http://www.catb.org/esr/faqs/smart-questions.html
  • Be prepared to follow up queries / provide sufficient evidence to reproduce behaviour or provide pathological test cases 
  • Provide a patch if possible: even if it's only pseudocode
When adding to / modifying FLOSS software:
  • Keep pristine sources that you have downloaded
  • Maintain patch series against pristine source
  • Talk to the originators of the software / current maintainers elsewhere
  • Follow upstream style if feasible / a consistent house style if not
  • Be generous in what you accept: be precise in what you put out
  • Don't produce licence conflicts - check and check again that your software can be distributed.
  • Don't apply inconsistent copyrights
When writing new FLOSS software / "freeing" prior commercial/closed code under a FLOSS licence:
  • Make permissions explicit and publish under a well established FLOSS licence 
  • Be generous to potential contributors and collaborators: render them every assistance so that they can help you better
  • Be generous in what you accept: be precise in what you put out
  • Don't produce licence conflicts - check and check again that your software can be distributed.
  • Don't apply inconsistent copyrights: software you write is your copyright at the outset until you assign it elsewhere
  • Contribute documentation / examples
  • Maintain a bugtracker and mailing lists for your software
If you are required to sign a contributor license agreement [CLA]:
  • Ensure that you have the rights you purport to assign
  • Assign the minimum of rights necessary - if you can continue to allow full and free use of your code, do so
  • Meet any required code of conduct [CoC] stipulations in addition to the CLA
Always remember in all of this: just because you understand your code and your working practices doesn't mean that anyone else will.
There is no automatic right to contribution nor any necessary assumption or precondition that collaborators will come forward.
Just because you love your own code doesn't mean that it merits anyone else's interest or that anyone else should value it thereby.
"Just because it scratches your itch doesn't mean that it scratches anyone else's - or that it's actually any good / any use to anyone else"

Andrew Cater: Why share / why collaborate? - Some useful sources outside Debian.

22 June, 2016 - 21:07
"We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris."
[Larry Wall, Programming Perl, O'Reilly Assoc. (and expanded at http://c2.com/cgi/wiki?LazinessImpatienceHubris) ]

Because "A mind is a terrible thing to waste"
 [The above copyright Young and Rubicam, advertisers, for UNC Fund, 1960s]
"Why I Must Write GNUI consider that the Golden Rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. I cannot in good conscience sign a nondisclosure agreement or a software license agreement. ... "
[rms, GNU Manifesto copyright 1985-2014 Free Software Foundation Inc. https://www.gnu.org/gnu/manifesto.html]

"La pédagogie, l’information, la culture et le débat d’opinion sont le seul fait des utilisateurs, des webmestres indépendants et des initiatives universitaires et associatives."
 Education, information, culture and debate can only come from users, independent webmasters, academic or associative organizations.
[le minirézo http://www.uzine.net/article60.html]

We value:
  1. Contributors and facilitators over ‘editors’ and ‘authors’
  2. Collaboration over individualised production
  3. Here and now production over sometime soon production
  4. Meaningful credit for all contributors over single author attribution
[https://github.com/greyscalepress/manifestos - from which many of the above quotations were abstracted - Manifestos for the Internet Age, Greyscale Press, ISBN-13: 978-2-940561-02-5]

So - collaboration matters. Not repeating needless make-work that someone else has already done matters. Giving due credit, sharing, doing and "do-ocracy" matter above all.

Perversely, acknowledging prior work and prior copyright correctly is the beginning and end of the law. Only by doing this conscientiously and sharing in giving due credit can any of us truly participate.

It seems clear to me, at least, that only by contributing openly and freely, and by allowing others to make use of your expertise, opinions and prior experience, can anyone progress in good conscience.

Accordingly, I recommend to my work colleagues and those I advise that they only consider FLOSS licences, that they do not make use of code snippets or random, unlicensed code culled from GitHub, and that they contribute back wherever they can.








Satyam Zode: GSoC 2016 Week 4 and 5: Reproducible Builds in Debian

22 June, 2016 - 17:47

This is a brief report on my last two weeks of work with Debian Reproducible Builds.

In week 4, I mostly worked on designing interfaces and tackling different issues related to the argument completion feature of diffoscope, and in week 5 I worked on hiding .buildinfo files from .changes files.

Update for last week’s activities
  • I researched different diffoscope outputs. In the reproducible-builds testing framework, only the differences of .buildinfo files are given, but I needed diffoscope outputs for .changes files. Hence, I had to build packages locally using our experimental toolchain. My goal was to generate different outputs and to see how I can hide .buildinfo files from .changes.
  • I updated the argument completion patch as per suggestions given by Paul Wise (pabs). The patch has been reviewed by Mattia Rizzolo and Holger Levsen, and merged by Reiner Herrmann (deki) into diffoscope master. This patch closes #826711. Thanks all for the support. (A sketch of how this style of completion hook works appears after this list.)

  • For ignoring .buildinfo files when comparing .changes files, we finally decided to enable this by default, without adding any command-line option to hide them.

  • Last week I researched more on .changes and .buildinfo files. After getting guidelines from Lunar, I was able to understand the need for this feature. I am in the middle of implementing it.
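
For reference, argument completion for an argparse-based tool is typically wired up with the argcomplete library; a minimal sketch of the pattern (the program and option names below are illustrative, not diffoscope's real interface):

    #!/usr/bin/env python3
    # PYTHON_ARGCOMPLETE_OK -- marker that argcomplete's global
    # completion mechanism looks for in executable scripts
    import argparse

    try:
        import argcomplete  # packaged as python3-argcomplete
    except ImportError:
        argcomplete = None  # completion stays optional at runtime

    parser = argparse.ArgumentParser(prog='mytool')
    parser.add_argument('--text', metavar='OUTPUT', help='write text output')
    parser.add_argument('file1')
    parser.add_argument('file2')
    if argcomplete is not None:
        # Answer the shell's completion request before parsing; the user
        # needs `eval "$(register-python-argcomplete mytool)"` in their shell.
        argcomplete.autocomplete(parser)
    args = parser.parse_args()
    print(args.file1, args.file2)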

Goal for upcoming week:
  • Finish the implementation of hiding .buildinfo files from .changes
  • Start thinking about interfaces and discuss different use cases.

I am thankful to Shirish Agarwal for helping me through the visa process. Unfortunately, I won't get a visa until 5th July, so I don't think I will make it to DebConf this year; I will certainly attend DebConf 2017. The good news is that I have passed the mid-term evaluations of Google Summer of Code 2016. I will continue my work to improve Debian, and I even have post-GSoC plans ready for the Debian project ;)

Have a nice day :)

Andrew Cater: Why I must use Free Software - and why I tell others to do so

22 June, 2016 - 16:43
My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS software trumps commercial software and how this is the only way forward - and this for the last 20-odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, and why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here
 ...
 In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - Declaration of the independence of cyberspace  1996  https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other, or should do if I had my way.
Digging through my emails since then on the various mailing lists: some of them are deeply technical, though fewer these days; some are Debian politics; most are trying to help people with problems / report successes or, occasionally, offer thanks and social chit-chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community in which we share and in which some of us have multiple identities in working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.




Martin-&#201;ric Racine: Batch photo manipulation via free software tools?

22 June, 2016 - 15:12

I have a need for batch-processing pictures. My requirements are fairly simple:

  • Resize the image to fit Facebook's preferred 960 pixel box.
  • Insert Copyright, Byline and Bylinetitle into the EXIF data.
  • Optionally, paste my watermark onto a predefined corner of the image.
  • Optionally, adjust the white balance.
  • Rename the file according to a specific syntax.
  • Save the result to a predefined folder.

Until recently, I was using Phatch to perform all of this. Unfortunately, it cannot edit the EXIF data from my current Lumix camera, whose JPEG files it claims are MPO. I am thus forced to look for other options. Ideally, I would do this via a script inside gThumb (which is my main photo editing software), but I cannot seem to find adequate documentation on how to achieve this.

I am thus very interested in hearing about other options to achieve the same result. Ideas, anyone?
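
For what it's worth, here is a minimal sketch of one possible replacement pipeline, using Pillow for the image work and exiftool for the metadata; both tool choices are assumptions, the tag values and paths are illustrative, and the white-balance step is not covered:

    # A sketch of the batch pipeline described above, assuming Pillow
    # and exiftool are installed; file names, tag values and paths are
    # illustrative. White balance is not handled here.
    import glob
    import os
    import subprocess
    from PIL import Image

    OUTDIR = os.path.expanduser('~/Pictures/facebook')
    WATERMARK = Image.open('watermark.png')  # RGBA watermark image

    os.makedirs(OUTDIR, exist_ok=True)
    for n, src in enumerate(sorted(glob.glob('*.JPG')), start=1):
        img = Image.open(src).convert('RGB')
        img.thumbnail((960, 960))            # fit the 960-pixel box
        # paste the watermark into the bottom-right corner
        pos = (img.width - WATERMARK.width, img.height - WATERMARK.height)
        img.paste(WATERMARK, pos, WATERMARK)
        dst = os.path.join(OUTDIR, 'photo-%04d.jpg' % n)  # rename scheme
        img.save(dst, quality=90)
        # write the credit fields with exiftool
        subprocess.run(['exiftool', '-overwrite_original',
                        '-IPTC:By-line=My Name',
                        '-IPTC:By-lineTitle=Photographer',
                        '-EXIF:Copyright=(c) 2016 My Name',
                        dst], check=True)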

Clint Adams: Only in San Francisco would one brag about this

22 June, 2016 - 13:46

“I dated Appelbaum!” she said.

“I gotta go,” I said.

Gunnar Wolf: Answering to a CACM «Viewpoint»: on the patent review process

22 June, 2016 - 11:40

I am submitting a comment to Wen Wen and Chris Forman's Viewpoint in the Communications of the ACM, titled Economic and business dimensions: Do patent commons and standards-setting organizations help navigate patent thickets?. I believe my comment is worth sharing a bit more openly, so here it goes. Nevertheless, please refer to the original article; it makes very interesting and valid points, and my comment should be taken only as an extra note on a great text!

I was very happy to see an article with this viewpoint published. This article, however, mentions some points I believe should be further stressed as problematic and important. Namely, still in the introduction, after mentioning that patents «are intended to provide incentives for innovation by granting to inventors temporary monopoly rights», the next paragraph continues, «The presence of patent thickets may create challenges for ICT producers. When introducing a new product, a firm must identify patents its product may infringe upon.»

The authors continue explaining the needed process — but this simple statement should be enough to explain how the patent system is broken and needs repair.

A requisite for patenting an invention was originally that it be «inventive» and «non-obvious»: anything worth being granted a patent should be inventive enough that it is non-obvious to an expert in the field.

When we see huge bodies of awarded (and upheld) patents falling into the case the authors mention, it becomes clear that the patent applications were not thoroughly researched prior to the patent grant. Sadly, long gone are the days when the United States Patent and Trademark Office employed minds such as Albert Einstein's; nowadays, the office is more a rubber-stamping bureaucracy where most patents are awarded, and this very important requisite is left open to litigation: if somebody is found in breach of a patent, they might choose to argue that the patent was obvious to an expert. But, of course, that will probably cost more in legal fees than settling for an agreement with the patent holder.

The fact that in our line of work we must take care to search for patents before releasing any work speaks volumes about the process. Patents are too easily granted. They should be way stricter; the occurrence of an independent developer mistakenly (and innocently!) breaching a patent should be most unlikely, as patents should only be awarded to truly non-obvious solutions.

Matthew Garrett: I've bought some more awful IoT stuff

22 June, 2016 - 06:11
I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there are lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That, plus the fact that every payload length was a multiple of 16 bytes, strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and each chunk is encrypted with the same key, so the same plaintext block will always result in the same encrypted output. This implied that the underlying plaintexts were substantially similar and that the encryption key was static.
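
The tell-tale is easy to demonstrate; a small sketch using pycryptodome (the key here is arbitrary):

    # ECB's tell-tale: identical 16-byte plaintext blocks encrypt to
    # identical ciphertext blocks under the same key. A sketch using
    # pycryptodome (pip install pycryptodome); the key is arbitrary.
    from Crypto.Cipher import AES

    key = b'0123456789abcdef'            # static 128-bit key
    cipher = AES.new(key, AES.MODE_ECB)

    # Two messages differing only in the middle 16-byte block,
    # mirroring the near-identical packets seen in tcpdump.
    msg1 = b'A' * 16 + b'B' * 16 + b'C' * 16
    msg2 = b'A' * 16 + b'X' * 16 + b'C' * 16

    ct1, ct2 = cipher.encrypt(msg1), cipher.encrypt(msg2)
    for i in range(0, len(ct1), 16):
        print(i, ct1[i:i+16] == ct2[i:i+16])
    # prints: 0 True / 16 False / 32 True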

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.
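
In C, calling rand() without srand() behaves as if the generator were seeded with srand(1), so every boot replays the same sequence. A Python illustration of the same failure mode (not the plug's actual code):

    # Python illustration (not the plug's actual C code) of the
    # rand()-without-srand() failure: a PRNG started from a fixed
    # seed yields the same "random" challenges after every reboot,
    # so recorded responses can simply be replayed.
    import random

    def challenges_after_reboot(n=5):
        prng = random.Random(1)    # fixed seed, as if srand() were never called
        return [prng.randrange(2**32) for _ in range(n)]

    print(challenges_after_reboot())   # first boot
    print(challenges_after_reboot())   # after reboot: identical sequence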

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.


Ian Wienand: Zuul and Ansible in OpenStack CI

22 June, 2016 - 05:16

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview here is the same, but more focused on the launchers that run the tests.

  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their part; the developer simply waits for the results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it, "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on. (A sketch of such a submission appears after this list.)

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code, this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
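
As mentioned in step 3, putting a job request into gearman can be sketched with the gear Python library that Zuul itself builds on; the server address, job name and node type below are illustrative, not OpenStack's actual configuration:

    # Submitting a (job-name, node-type) request to a gearman server
    # with the 'gear' library (pip install gear). All names here are
    # illustrative; they are not OpenStack's real configuration.
    import json
    import gear

    client = gear.Client()
    client.addServer('gearman.example.org', 4730)
    client.waitForServer()               # block until connected

    params = {'node_type': 'ubuntu-trusty'}      # where to run it
    job = gear.Job(b'build:gate-mytool-pep8',    # what to run
                   json.dumps(params).encode('utf-8'))
    client.submitJob(job)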

Work will continue within OpenStack Infrastructure to further enhance Zuul; including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Gunnar Wolf: Relax and breathe...

22 June, 2016 - 02:30

Time passes. I had left several (too many?) pending things to be done in the quiet weeks between the end of the teaching semester and the beginning of my Summer trip to Winter. But Saturday gets closer every moment... And our long trip to the South begins.

Among many other things, I wanted to advance with some Debian stuff - both packaging and WRT keyring analysis. I want to contact some people I left pending interactions with, but honestly, that will only come face to face in Cape Town.

As to "real life", I have too many pending issues at work to even begin with; I hope to get some time in South Africa to do some decent UNAM sysadmining. Also, I want to play with the idea of using Git for my students' workflow (handing in projects and assignments, at least)... This could be interesting to talk about with the Debian colleagues, actually.

As a Masters student, I'm making good advances, and will probably finish my class work next semester, six months ahead of schedule, but my thesis work so far has progressed way slower than I'd like. I at least have a better defined topic and approach, so I'll start the writing phase soon.

And the personal life? Family? I am more complete and happy than ever before. My life is completely different from two years ago. Yes, that was obvious. But it's also the only thing I can come up with. Having twin babies (when will they make the transition from "babies" to "kids"? No idea... We will find out as it comes) is more than beautiful, more than great. Our life has changed in every possible aspect. And yes, I admire my loved Regina for all of the energy and love she puts into the babies... Life is asymmetric, I am out for most of the day... Mommy is always there.

As I said, happier than ever before.

Joey Hess: twenty years of free software -- part 2 etckeeper

21 June, 2016 - 22:24

etckeeper was a sleeper success for me. I created it, wrote one blog post about it, installed it on all my computers, and mostly forgot about it, except when I needed to look something up in the git history of /etc it helpfully maintains. It's a minor project.

Then I started getting patches porting it to many other version control systems, and other linux distributions, and fixing problems, and adding features. Mountains and mountains of patches over time.

And then I started hearing about distributions that install it by default. (Though Debian for some reason never did, so I keep having to install it everywhere by hand.)

Writing this blog post, I noticed etckeeper had accumulated enough patches from other people to warrant a new release. That happens pretty regularly.

So it's still a minor project as far as I'm concerned, but quite a few people seem to be finding it useful. So it goes with free software.

Reproducible builds folks: Reproducible builds: week 60 in Stretch cycle

21 June, 2016 - 15:24

What happened in the Reproducible Builds effort between June 12th and June 18th 2016:

Media coverage

GSoC and Outreachy updates

Weekly reports by our participants:

Toolchain fixes
  • texlive-bin/2016.20160513.41080-3 has been uploaded to unstable, featuring support for FORCE_SOURCE_DATE. See the last post for details on it.
  • doxygen/1.4.4-1 has been uploaded to unstable, fixing #822197 (upstream bug), which caused some generated html file to contain unreproducible memory addresses of Python objects used at build time.
  • debhelper/9.20160618 has been uploaded to unstable, fixing #824490, which instructs ant to not save the username of the build user in the generated files. Original patch by Emmanuel Bourg.
  • HW42 reported a long-known (although only internally) bug in our dpkg-buildinfo. This particular bug doesn't affect our current infrastructure, but it's a blocker for having .buildinfo support merged upstream.
  • epydoc/3.0.1+dfsg-13 and 3.0.1+dfsg-14 have been uploaded by Kenneth J. Pronovici which fixes nondeterministic ordering issues in generated documentation and removes memory addresses. Original patches (#825968 and #827416) by Sascha Steinbiss.

With this upload of texlive-bin we decided to stop keeping our patched fork of it, as most of the patches for SOURCE_DATE_EPOCH support had been integrated upstream already, and the last one (making FORCE_SOURCE_DATE default to 1) had been refused. So, we are now going to let the archive be rebuilt against unstable's texlive-bin and see how many packages become unreproducible with this change; once enough data has been collected we will ponder whether FORCE_SOURCE_DATE should be exported by helper tools (such as debhelper) or manually exported by every package that needs it.

(For those wondering: we still recommend always following SOURCE_DATE_EPOCH, and we don't recommend that other projects implement FORCE_SOURCE_DATE…)

With the drop of texlive-bin we now have only three modified packages in our experimental repository.
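
For those unfamiliar with the variable, the pattern a build tool is expected to follow is tiny; a sketch in Python:

    # The usual SOURCE_DATE_EPOCH pattern, sketched in Python: when the
    # variable is set, use it for any timestamp embedded in build output,
    # so rebuilding the same source gives identical results.
    import os
    import time

    def build_timestamp():
        sde = os.environ.get('SOURCE_DATE_EPOCH')
        if sde is not None:
            return int(sde)          # reproducible: pinned by the environment
        return int(time.time())      # fallback: current, unreproducible time

    print(time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(build_timestamp())))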

Reproducible work in other projects

Packages fixed

The following 12 packages have become reproducible due to changes in their build dependencies: django-floppyforms flask-restful hy jets3t kombu llvm-toolchain-3.8 moap python-bottle python-debtcollector python-django-debug-toolbar python-osprofiler stevedore

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Uploads with reproducibility fixes that currently fail to build:

  • ruby2.3/2.3.1-3 by Christian Hofstaedtler, avoids unreproducible rbconfig.rb files by always using bash for building.

Patches submitted that have not made their way to the archive yet:

  • #827109 against asciijump by Reiner Herrmann: sort source files for deterministic linking order.
  • #827112 against boswars by Reiner Herrmann: sort source files for deterministic linking order.
  • #827114 against overgod by Reiner Herrmann: use C locale for sorting source files.
  • #827115 against netpbm by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH while generating output.
  • #827124 against funguloids by Reiner Herrmann: use C locale for sorting files.
  • #827145 against scummvm by Reiner Herrmann: don't embed extra fields in zip archive and build with proper host architecture.
  • #827150 against netpanzer by Reiner Herrmann: sort source files for deterministic linking order.
  • #827172 against reaver by Alexis Bienvenüe: sort object files for deterministic linking order.
  • #827187 against latex2html by Alexis Bienvenüe: iterate deterministically over Perl hashes; honour SOURCE_DATE_EPOCH for output; strip username from output; sort index keys.
  • #827313 against cherrypy3 by Sascha Steinbiss: prevent memory addresses in output.
  • #827361 against matplotlib by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH in output and sort keys while iterating over dict.
  • #827382 against dwarfutils by Reiner Herrmann: fix array size, which caused memory from outside a table to be embedded into output.
  • #827384 against skytools3 by Sascha Steinbiss: use stable sorting order and remove timestamps from documentation.
  • #827419 against ldaptor by Sascha Steinbiss: sort list of input files and prevent home directory from leaking into documentation.
  • #827546 against git-buildpackage by Sascha Steinbiss: replace timestamps in documentation with changelog date; prevent temporary paths in documentation.
  • #827572 against xprobe by Reiner Herrmann: sort list of object files in static library archives.
Package reviews

36 reviews have been added, 12 have been updated and 31 have been removed this week.

17 FTBFS bugs have been reported by Chris Lamb, Santiago Vila and Dominic Hargreaves.

diffoscope development

Satyam worked on argument completion (#826711) for diffoscope.

strip-nondeterminism development

Mattia Rizzolo uploaded strip-nondeterminism 0.019-1~bpo8+1 to jessie-backports.

reprotest development

Ceridwen filed an Intent To Package (ITP) bug for reprotest as #827293.

tests.reproducible-builds.org
  • Mattia Rizzolo uploaded pbuilder 0.225 to unstable, providing built-in support for eatmydata. We're planning to use it in armhf and i386 builders where we don't build in tmpfs, to increase the build speed some more.
  • Valery Young reworked the appearance of the package page, hopefully making them more intuitive and usable. In the process she changed the script generating them to use a real templating system, thus improving maintenance for the future.
  • Holger adjusted the scheduler to reschedule packages in state 'depwait' after two days instead of three.
  • Mattia added the bug title next to the bug numbers in the notes.
Misc.

This week's edition was written by Mattia Rizzolo, Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Hideki Yamane: Debian developers reference update (3.4.18)

21 June, 2016 - 11:16
Thanks to our lovely translators, the Debian developers reference has now added Russian and Italian versions. I hope people who use those languages can look into Debian development more deeply.

I have also added fancy CSS for the pages; hope you like it...

We have a looong queue in the BTS, so please help us make it better with patches (or just articles), not only opinions. I cannot squash those myself since I'm not good at English... ;-) so please lend a hand.

And maybe it'd be better to use dates for versioning, IMHO (not 3.4.18 but 20160621).

Russ Allbery: Review: Furiously Happy

21 June, 2016 - 10:26

Review: Furiously Happy, by Jenny Lawson

Publisher: Flatiron
Copyright: September 2015
ISBN: 1-250-07700-1
Format: Hardcover
Pages: 329

Jenny Lawson, who blogs as The Bloggess, is best known for a combination of being extremely funny and being extremely open about her struggles with mental illness (a combination of anxiety and depression, alongside a few other problems). Her first book primarily told the story of her family, childhood, and husband. Furiously Happy is a more random and disconnected book, but insofar as it has a theme, it's about surviving depression, anxiety, and other manifestations of your brain being an asshole.

I described Lawson's previous book, Let's Pretend This Never Happened, as the closest thing I've found to a stand-up comedy routine in book form. Furiously Happy is very similar, but it lacks the cohesiveness of a routine. Instead, it feels like a blog collection: a pile of essays with only some vague themes and not a lot of continuity from essay to essay.

This doesn't surprise me. Second books are very different than first books, particularly second books by someone whose writing focus is not writing books, and particularly for non-fiction. I feel like Let's Pretend This Never Happened benefited from drawing on Lawson's full life experience to form the best book she could write. When that became wildly popular, everyone of course wanted a second book, including me. But when the writing is this personal, the second book is, out of necessity, partly leftovers. Lawson's recent experiences don't generate as much material as her whole life up to the point of the first book.

That said, there is a bit of a theme, and the title fits it. Early in the book, Lawson describes how, after the death of a friend and a bout of depression, she decided to be furiously, vehemently happy to get back at the universe, to spite its attempts to destroy her mood. It's one of the best bits in this book. The surrounding philosophy is about embracing the moment, enjoying the hell out of everything that one can enjoy, and taking a lot of chances.

A lot of the stories in this book come after the beginning of Lawson's fame and popularity. She has book tours, a vacation tour of Australia, and a community of people from her blog. That, of course, doesn't make the depression and anxiety any better; indeed, it provides a lot of material for her anxiety to work with. Lawson talks a lot about surviving, about how important that community is to her, about not believing your brain when it lies to you. This isn't as uniformly funny as her first book, and sometimes it feels a bit too much like an earnest pep talk. But there are also moments of insightful life philosophy mixed into the madcap adventures and stream-of-consciousness wild associations.

Lawson also does for anxiety what Allie Brosh does for depression: make the severe form of it relatable to people who have not suffered from it. I was particularly struck by her description of flying: the people around her are getting nervous and anxious as the plane starts to take off, and she's finally able to relax because her anxiety focused on all the things she had to do in order to get onto the right plane at the right time. Once she didn't have to make any decisions or do anything other than sit in one place, her anxiety let go. I don't have any type of clinical anxiety, but I was able to identify with that moment of relief and its contrast with anxiety in a deeper way than with other descriptions.

Furiously Happy is a bit more serious and earnest, and I'm not sure it worked as well. I liked Lawson's first book better; this felt more like a blog archive. But she's still funny, entertaining, and delightful, and I'm happy to support her with a book purchase. Start with either her blog or Let's Pretend This Never Happened if you're new to Lawson, but if you're already a fan, here's more of her writing.

Rating: 7 out of 10

Joey Hess: twenty years of free software -- part 1 ikiwiki

21 June, 2016 - 01:50

I'm forty years old. I've been developing free software for twenty years.

A decade ago, I wrote a series of posts about my first ten years of free software, looking back over projects I'd developed. These retrospectives seem even more valuable in retrospect; there are things in the old posts that jog my memory, and other details I've forgotten by now.

So, I'm doing it again. Over the next two weeks (with a week off in the middle for summer vacation), I'll be writing one post each day about a free software project I've developed in the past decade.

We begin with Ikiwiki. I started it 10 years ago, and still use it to this day; it's the engine that builds this website, and nearly all my other websites, as well as wikis and websites belonging to tons and tons of other projects and users, like NetBSD, X.org, Freedesktop.org and FreedomBox.

Indeed, I'm often reading a website and find myself wondering "hey.. is this using Ikiwiki?", and glance at the html and find that yes, it is. So, Ikiwiki is a reasonably successful and widely used piece of software, at least in its niche.

More important to me, it was a static site generator before we knew what those were. It wasn't the first, but it broke plenty of new ground. I'm particularly proud of the way it combines a wiki with blogging support seamlessly, and the incremental updating of the static pages including updating wikilinks and backlinks. Some of these and other features are still pretty unique to Ikiwiki despite the glut of other static site generators available now.

Ikiwiki is written in Perl, which was great for getting lots of other contributions (including many of its 113 plugins), but has also held it back some lately. There are fewer Perl programmers these days. And over the past decade, concurrency has become very important, but Ikiwiki's implementation is stubbornly single-threaded, and multithreading such a Perl program is a losing proposition. I occasionally threaten to rewrite it in Haskell, but I doubt I will.

Ikiwiki has several developers now, and I'm the least active of them. I stepped back because I can't write Perl very well anymore, and am mostly very happy with how Ikiwiki works, so only pop up now and then when something annoys me.

Simon McVittie: GTK Hackfest 2016

21 June, 2016 - 01:37

I'm back from the GTK hackfest in Toronto, Canada and mostly recovered from jetlag, so it's time to write up my notes on what we discussed there.

Despite the hackfest's title, I was mainly there to talk about non-GUI parts of the stack, and technologies that fit more closely in what could be seen as the freedesktop.org platform than they do in GNOME. In particular, I'm interested in Flatpak as a way to deploy self-contained "apps" in a freedesktop-based, sandboxed runtime environment layered over the Universal Operating System and its many derivatives, with both binary and source compatibility with other GNU/Linux distributions.

I'm mainly only writing about discussions I was directly involved in: lots of what sounded like good discussion about the actual graphics toolkit went over my head completely :-) More notes, mostly from Matthias Clasen, are available on the GNOME wiki.

In no particular order:

Thinking with portals

We spent some time discussing Flatpak's portals, mostly on Tuesday. These are the components that expose a subset of desktop functionality as D-Bus services that can be used by contained applications: they are part of the security boundary between a contained app and the rest of the desktop session. Android's intents are a similar concept seen elsewhere. While the portals are primarily designed for Flatpak, there's no real reason why they couldn't be used by other app-containment solutions such as Canonical's Snap.

One major topic of discussion was their overall design and layout. Most portals will consist of a UX-independent part in Flatpak itself, together with a UX-specific implementation of any user interaction the portal needs. For example, the portal for file selection has a D-Bus service in Flatpak, which interacts with some UX-specific service that will pop up a standard UX-specific "Open" dialog — for GNOME and probably other GTK environments, that dialog is in (a branch of) GTK.

A design principle that was reiterated in this discussion is that the UX-independent part should do as much as possible, with the UX-specific part only carrying out the user interactions that need to comply with a particular UX design (in the GTK case, GNOME's design). This minimizes the amount of work that needs to be redone for other desktop or embedded environments, while still ensuring that the other environments can have their chosen UX design. In particular, it's important that, as much as possible, the security- and performance-sensitive work (such as data transport and authentication) is shared between all environments.

The aim is for portals to get the user's permission to carry out actions, while keeping it as implicit as possible, avoiding an "are you sure?" step where feasible. For example, if an application asks to open a file, the user's permission is implicitly given by them selecting the file in the file-chooser dialog and pressing OK: if they do not want this application to open a file at all, they can deny permission by cancelling. Similarly, if an application asks to stream webcam data, the UX we expect is for GNOME's Cheese app (or a similar non-GNOME app) to appear, open the webcam to provide a preview window so they can see what they are about to send, but not actually start sending the stream to the requesting app until the user has pressed a "Start" button. When defining the API "contracts" to be provided by applications in that situation, we will need to be clear about whether the provider is expected to obtain confirmation like this: in most cases I would anticipate that it is.

One security trade-off here is that we have to have a small amount of trust in the providing app. For example, continuing the example of Cheese as a webcam provider, Cheese could (and perhaps should) be a contained app itself, whether via something like Flatpak, an LSM like AppArmor or both. If Cheese is compromised somehow, then whenever it is running, it would be technically possible for it to open the webcam, stream video and send it to a hostile third-party application. We concluded that this is an acceptable trade-off: each application needs to be trusted with the privileges that it needs to do its job, and we should not put up barriers that are easy to circumvent or otherwise serve no purpose.

The main (only?) portal so far is the file chooser, in which the contained application asks the wider system to show an "Open..." dialog, and if the user selects a file, it is returned to the contained application through a FUSE filesystem, the document portal. The reference implementation of the UX for this is in GTK, and is basically a GtkFileChooserDialog. The intention is that other environments such as KDE will substitute their own equivalent.
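
As a concrete illustration (mine, not from the hackfest notes), here is roughly what the contained app's side of that conversation looks like using GDBus. The bus name, object path, interface and method shown are the ones used by the in-development portal code at the time of writing, and could still change; error handling and the asynchronous reply are abbreviated:

    /* Minimal sketch: ask the file-chooser portal for an "Open" dialog.
     * A real app would subscribe to the returned Request object's
     * Response signal, which eventually delivers the chosen file as a
     * document-portal (FUSE) path the sandboxed app is allowed to read. */
    #include <gio/gio.h>

    int
    main (void)
    {
      GDBusConnection *bus;
      GVariant *reply;
      GError *error = NULL;

      bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &error);
      g_assert_no_error (error);

      /* OpenFile (s parent_window, s title, a{sv} options) -> (o handle) */
      reply = g_dbus_connection_call_sync (bus,
                                           "org.freedesktop.portal.Desktop",
                                           "/org/freedesktop/portal/desktop",
                                           "org.freedesktop.portal.FileChooser",
                                           "OpenFile",
                                           g_variant_new ("(ssa{sv})",
                                                          "", "Open", NULL),
                                           G_VARIANT_TYPE ("(o)"),
                                           G_DBUS_CALL_FLAGS_NONE,
                                           -1, NULL, &error);
      g_assert_no_error (error);
      g_variant_unref (reply);
      g_object_unref (bus);
      return 0;
    }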

Other planned portals include:

  • image capture (scanner/camera)
  • opening a specified URI
    • this needs design feedback on how it should work for non-http(s) URIs
  • sharing content, for example on social networks (like Android's Sharing menu)
  • proxying joystick/gamepad input (perhaps via Wayland or FUSE, or perhaps by modifying libraries like SDL with a new input source)
  • network proxies (GProxyResolver) and availability (GNetworkMonitor)
  • contacts/address book, probably vCard-based
  • notifications, probably based on freedesktop.org Notifications
  • video streaming (perhaps using Pinos, analogous to PulseAudio but for video)

Environment variables

GNOME on Wayland currently has a problem with environment variables: there are some traditional ways to set environment variables for X11 sessions or login shells using shell script fragments (/etc/X11/Xsession.d, /etc/X11/xinit/xinitrc.d, /etc/profile.d), but these do not apply to Wayland, or to noninteractive login environments like cron and systemd --user. We are also keen to avoid requiring a Turing-complete shell language during session startup, because it's difficult to reason about and potentially rather inefficient.

Some uses of environment variables can be dismissed as unnecessary or even unwanted, similar to the statement in Debian Policy §9.9: "A program must not depend on environment variables to get reasonable defaults." However, there are two common situations where environment variables can be necessary for proper OS integration: search-paths like $PATH, $XDG_DATA_DIRS and $PYTHONPATH (particularly necessary for things like Flatpak), and optionally-loaded modules like $GTK_MODULES and $QT_ACCESSIBILITY where a package influences the configuration of another package.

There is a stopgap solution in GNOME's gdm display manager, /usr/share/gdm/env.d, but this is gdm-specific and insufficiently expressive to provide the functionality needed by Flatpak: "set XDG_DATA_DIRS to its specified default value if unset, then add a couple of extra paths".

pam_env comes closer — PAM is run at every transition from "no user logged in" to "user can execute arbitrary code as themselves" — but it doesn't support .d fragments, which are required if we want distribution packages to be able to extend search paths. pam_env also turns off per-user configuration by default, citing security concerns.

I'll write more about this when I have a concrete proposal for how to solve it. I think the best solution is probably a PAM module similar to pam_env but supporting .d directories, either by modifying pam_env directly or out-of-tree, combined with clarifying what the security concerns for per-user configuration are and how they can be avoided.
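
As a purely hypothetical illustration of the sort of fragment I have in mind (the .d directory path is invented; the OVERRIDE= keyword and ${VAR} expansion are pam_env's existing syntax), a distribution package might install something like:

    # /etc/security/environment.d/60-flatpak.conf (hypothetical path)
    # Prepend Flatpak's system-wide exports to the data search path.
    # ${XDG_DATA_DIRS} expands to the empty string when unset, so this
    # cannot express "fall back to the default, then prepend"; closing
    # that gap is part of what a real design would need to do.
    XDG_DATA_DIRS OVERRIDE=/var/lib/flatpak/exports/share:${XDG_DATA_DIRS}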

Relocatable binary packages

On Windows and OS X, various GLib APIs automatically discover where the application binary is located and use search paths relative to that; for example, if C:\myprefix\bin\app.exe is running, GLib might put C:\myprefix\share into the result of g_get_system_data_dirs(), so that the application can ask to load app/data.xml from the data directories and get C:\myprefix\share\app\data.xml. We would like to be able to do the same on Linux, for example so that the apps in a Flatpak or Snap package can be constructed from RPM or dpkg packages without needing to be recompiled for a different --prefix, and so that other third-party software packages like the games on Steam and gog.com can easily locate their own resources.
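
To make that concrete, here is a sketch (mine, using only GLib API that already exists) of the lookup pattern an application would use; app/data.xml is the invented file name from the example above:

    /* Look for app/data.xml in the system data directories, whatever
     * prefix the application was installed to.  On Windows and OS X
     * GLib already fills in a prefix-relative entry here; the goal
     * described above is for Linux to gain an equivalent. */
    #include <glib.h>

    static gchar *
    find_app_data (void)
    {
      const gchar * const *dirs = g_get_system_data_dirs ();
      gsize i;

      for (i = 0; dirs[i] != NULL; i++)
        {
          gchar *path = g_build_filename (dirs[i], "app", "data.xml", NULL);

          if (g_file_test (path, G_FILE_TEST_EXISTS))
            return path;  /* caller frees with g_free() */

          g_free (path);
        }

      return NULL;
    }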

Relatedly, there are currently no well-defined semantics for what happens when a .desktop file or a D-Bus .service file has Exec=./bin/foo. The meaning of Exec=foo is well-defined (it searches $PATH) and the meaning of Exec=/opt/whatever/bin/foo is obvious. When this came up in D-Bus previously, my assertion was that the meaning should be the same as in .desktop files, whatever that is.

We agreed to propose that the meaning of a non-absolute path in a .desktop or .service file should be interpreted relative to the directory where the .desktop or .service file was found: for example, if /opt/whatever/share/applications/foo.desktop says Exec=../../bin/foo, then /opt/whatever/bin/foo would be the right thing to execute. While preparing a mail to the freedesktop and D-Bus mailing lists proposing this, I found that I had proposed the same thing almost 2 years ago... this time I hope I can actually make it happen!
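
Spelling the proposal out as a hypothetical on-disk example (all paths invented for illustration):

    # /opt/whatever/share/applications/foo.desktop
    [Desktop Entry]
    Type=Application
    Name=Foo
    # Under the proposal, resolved relative to this file's directory:
    # /opt/whatever/share/applications/../../bin/foo
    Exec=../../bin/foo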

Flatpak and OSTree bug fixing

On the way to the hackfest, and while the discussion moved to topics that I didn't have useful input on, I spent some time fixing up the Debian packaging for Flatpak and its dependencies. In particular, I did my first upload as a co-maintainer of bubblewrap, uploaded ostree to unstable (with the known limitation that the grub, dracut and systemd integration is missing for now since I haven't been able to test it yet), got most of the way through packaging Flatpak 0.6.5 (which I'll upload soon), cherry-picked the right patches to make ostree compile on Debian 8 in an effort to make backports trivial, and spent some time disentangling a flatpak test failure which was breaking the Debian package's installed-tests. I'm still looking into ostree test failures on little-endian MIPS, which I was able to reproduce on a Debian porterbox just before the end of the hackfest.

OSTree + Debian

I also had some useful conversations with developers from Endless, who recently opened up a version of their OSTree build scripts for public access. Hopefully that information brings me a bit closer to being able to publish a walkthrough for how to deploy a simple Debian derivative using OSTree (help with that is very welcome of course!).

GTK life-cycle and versioning

The life-cycle of GTK releases has already been mentioned here and elsewhere, and there are some interesting responses in the comments on my earlier blog post.

It's important to note that what we discussed at the hackfest is only a proposal: a hackfest discussion between a subset of the GTK maintainers and a small number of other GTK users (I am in the latter category) doesn't, and shouldn't, set policy for all of GTK or for all of GNOME. I believe the intention is that the GTK maintainers will discuss the proposals further at GUADEC, and make a decision after that.

As I said before, I hope that being more realistic about API and ABI guarantees can avoid GTK going too far towards either of the possible extremes: either becoming unable to advance because it's too constrained by compatibility, or breaking applications because it isn't constrained enough. The current situation, where it is meant to be compatible within the GTK 3 branch but in practice applications still sometimes break, doesn't seem ideal for anyone, and I hope we can do better in future.

Acknowledgements

Thanks to everyone involved, particularly:

  • Matthias Clasen, who organised the hackfest and took a lot of notes
  • Allison Lortie, who provided on-site cat-herding and led us to some excellent restaurants
  • Red Hat Inc., who provided the venue (a conference room in their Toronto office), snacks, a lot of coffee, and several participants
  • my employers Collabora Ltd., who sponsored my travel and accommodation
