Planet Debian

Planet Debian - http://planet.debian.org/

Norbert Preining: Debian/TeX Live 2016.20161130-1

5 December, 2016 - 21:58

As we are moving closer to the Debian release freeze, I am shipping out a new set of packages. Nothing spectacular here, just the regular updates and a security fix that was only reported internally. Add sugar and a few minor bug fixes.

I have been silent for quite some time, busy at my new job, busy with my little monster, writing papers, caring for visitors, living. I have quite a lot of things I want to write, but not enough time, so this will only be a very short one.

Enjoy.

New packages

awesomebox, baskervillef, forest-quickstart, gofonts, iscram, karnaugh-map, tikz-optics, tikzpeople, unicode-bidi.

Updated packages

acmart, algorithms, aomart, apa, apa6, appendix, apxproof, arabluatex, asymptote, background, bangorexam, beamer, beebe, biblatex-gb7714-2015, biblatex-mla, biblatex-morenames, bibtexperllibs, bidi, bookcover, bxjalipsum, bxjscls, c90, cals, cell, cm, cmap, cmextra, context, cooking-units, ctex, cyrillic, dirtree, ekaia, enotez, errata, euler, exercises, fira, fonts-churchslavonic, formation-latex-ul, german, glossaries, graphics, handout, hustthesis, hyphen-base, ipaex, japanese, jfontmaps, kpathsea, l3build, l3experimental, l3kernel, l3packages, latex2e-help-texinfo-fr, layouts, listofitems, lshort-german, manfnt, mathastext, mcf2graph, media9, mflogo, ms, multirow, newpx, newtx, nlctdoc, notes, patch, pdfscreen, phonenumbers, platex, ptex, quran, readarray, reledmac, shapes, showexpl, siunitx, talk, tcolorbox, tetex, tex4ht, texlive-en, texlive-scripts, texworks, tikz-dependency, toptesi, tpslifonts, tracklang, tugboat, tugboat-plain, units, updmap-map, uplatex, uspace, wadalab, xecjk, xellipsis, xepersian, xint.

Reproducible builds folks: Reproducible Builds: week 84 in Stretch cycle

5 December, 2016 - 19:31

What happened in the Reproducible Builds effort between Sunday November 27 and Saturday December 3 2016:

Reproducible work in other projects

Media coverage, etc.
  • There was a Reproducible Builds hackathon in Boston with contributions from Dafydd, Valerie, Clint, Harlen, Anders, Robbie and Ben. (See the "Bugs filed" section below for the results).

  • Distrowatch mentioned Webconverger's reproducible status.

Bugs filed

Chris Lamb:

Clint Adams:

Dafydd Harries:

Daniel Shahaf:

Reiner Herrmann:

Valerie R Young:

Reviews of unreproducible packages

15 package reviews have been added, 4 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (5)
  • Lucas Nussbaum (8)
  • Santiago Vila (1)
diffoscope development

It is available now in Debian, Archlinux and on PyPI.

strip-nondeterminism development
  • At the Reproducible Builds Boston hackathon, Anders Kaseorg filed #846895 (“treat .par files as Zip archives”), including a patch which was merged into master.
reprotest development

tests.reproducible-builds.org
  • Holger made a couple of changes:

    • Group all "done" and all "open" usertagged bugs together in the bugs graphs and move the "done bugs" away from the bottom of these graphs.
    • Update list of packages installed on .debian.org machines.
    • Made the maintenance jobs run every 2h instead of 3h.
    • Various bug fixes and minor improvements.
  • After thorough review by Mattia, some patches by Valerie were merged in preparation of the switch from sqlite to Postgresql, most notably a conversion to the sqlalchemy expression language.

  • Holger gave a talk at Profitbricks about how Debian is using 168 cores, 503 GB RAM and 5 TB storage to make jenkins.debian.net and tests.reproducible-builds.org run. Many thanks to Profitbricks for supporting jenkins.debian.net since August 2012!

  • Holger created a Jenkins job to build reprotest from git master branch.

  • Finally, the Jenkins Naginator plugin was installed to retry git cloning in case of Alioth/network failures; this will benefit all jobs using Git on jenkins.debian.net.

Misc.

This week's edition was written by Chris Lamb, Valerie Young, Vagrant Cascadian, Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Markus Koschany: My Free Software Activities in November 2016

5 December, 2016 - 07:48

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Android
  • Chris Lamb was kind enough to send in a patch for apktool to make the build reproducible (#845475). Although this was not enough to fix the issue, it set me on the right path to eventually resolve bug #845475.
Debian Games
  • I packaged a couple of new upstream releases for extremetuxracer, fifechan, fife, unknown-horizons, freeciv, atanks and armagetronad. Most notably, fifechan was accepted by the FTP team, which allowed me to package new versions of fife and unknown-horizons which are both back in testing again. I expect that upstream will make their final release sometime in December. Atanks was orphaned a while ago, and since upstream is still active and I kinda like the game I decided to adopt it. I also uploaded a backport of Freeciv 2.5.6 to jessie-backports.
  • In November we received a bunch of RC bug reports again because, hey, it is almost time for the Freeze, let’s break some packages. Thus I spent some time fixing freeorion (#843132), pokerth (#843078), simutrans (#828545), freeciv (#844198) and warzone2100 (#844870).
  • I also updated the debian-games blend, we are at version 1.6 now, and made some smaller adjustments. The most important change was adding a new binary package, games-all, that installs... well, all! I know this will make at least one person on this planet happy. Actually I was kind of forced into adding it because blends-dev automatically creates it as a requirement for choosing blends with the Debian Installer. But don’t be afraid: games-all only recommends games-finest; the rest is suggested.
  • Last but not least I worked on performous and could close a wishlist bug report (#425898). The submitter asked to suggest some free song packages for this karaoke game.
Debian Java
  • I sponsored uncommons-watchmaker for Kai-Chung and also reviewed libnative-platform-java and granted upload rights to him.
  • I packaged new upstream releases of lombok-patcher, electric, undertow, sweethome3d and sweethome3d-furniture-editor.
  • I spent quite some time on reviewing (especially the copyright review took most of the time) and improving the packaging for tycho (#816604) which is a precondition for packaging the latest upstream release of Eclipse, a popular Java IDE. Luca Vercelli has been working on it for the last couple of months and he did most of the initial packaging. Unfortunately I was only able to upload the package last week which means that the chances for updating Eclipse for Stretch are slim.
  • Due to time constraints I could not finish the Netbeans update in time which I had started back in October. This is on my priority list for December now.
  • Several security issues were reported against Tomcat{6,7,8}. I helped with reviewing some of the patches that Emmanuel prepared for Jessie and worked on fixing the same bugs in Wheezy.
Debian LTS

This was my ninth month as a paid contributor and I have been paid to work 11 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 14 November until 21 November I was in charge of our LTS frontdesk. I triaged bugs in teeworlds, libdbd-mysql-perl, bash, libxml2, tiff, firefox-esr, drupal7, moin, libgc, w3m and sniffit.
  • DLA-715-1. Issued a security update for drupal7 fixing 2 CVEs.
  • DLA-717-1. Issued a security update for moin fixing 2 CVEs.
  • DLA-728-1. Issued a security update for tomcat6 fixing 8 CVEs. (Debian bug #845385 was assigned a CVE later.)
  • DLA-729-1. Issued a security update for tomcat7 fixing 8 CVEs. (Debian bug #845385 was assigned a CVE later.)
  • The patches and the subsequent testing for CVE-2016-0762 and CVE-2016-6816, in particular, required most of the time.
Non-maintainer uploads
  • I uploaded an NMU for angband to fix #837394. The patch was kindly prepared by Adrian Bunk.

It is already that time of the year again. See you next year for another report.

Ben Hutchings: Linux Kernel Summit 2016, part 2

5 December, 2016 - 07:01

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions. This is the second and last part of my notes; part 1 is here.

Updated: I corrected the description of which Intel processors support SMEP.

Kernel Hardening

Kees Cook presented the ongoing work on upstream kernel hardening, also known as the Kernel Self-Protection Project or KSPP.

GCC plugins

The kernel build system can now build and use GCC plugins to implement some protections. This requires gcc 4.5 and the plugin headers installed. It has been tested on x86, arm, and arm64. It is disabled by CONFIG_COMPILE_TEST because CI systems using allmodconfig/allyesconfig probably don't have those installed, but this ought to be changed at some point.

There was a question as to how plugin headers should be installed for cross-compilers or custom compilers, but I didn't hear a clear answer to this. Kees has been prodding distribution gcc maintainers to package them. Mark Brown mentioned the Linaro toolchain being widely used; Kees has not talked to its maintainers yet.

Probabilistic protections

These protections are based on hidden state that an attacker will need to discover in order to make an effective attack; they reduce the probability of success but don't prevent it entirely.

Kernel address space layout randomisation (KASLR) has now been implemented on x86, arm64, and mips for the kernel image. (Debian enables this.) However there are still lots of information leaks that defeat this. This could theoretically be improved by relocating different sections or smaller parts of the kernel independently, but this requires re-linking at boot. Aside from software information leaks, the branch target predictor on (common implementations of) x86 provides a side channel to find addresses of branches in the kernel.

Page and heap allocation, etc., is still quite predictable.

struct randomisation (RANDSTRUCT plugin from grsecurity) reorders members in (a) structures containing only function pointers (b) explicitly marked structures. This makes it very hard to attack custom kernels where the kernel image is not readable. But even for distribution kernels, it increases the maintenance burden for attackers.

Deterministic protections

These protections block a class of attacks completely.

Read-only protection of kernel memory is either mandatory or enabled by default on x86, arm, and arm64. (Debian enables this.)

Protections against execution of user memory in kernel mode are now implemented in hardware on x86 (SMEP, in Intel processors from Broadwell onward) and on arm64 (PXN, from ARMv8.1). But Broadwell is not available in high-end server variants and ARMv8.1 is not yet implemented at all! s390 always had this protection.

It may be possible to 'emulate' this using other hardware protections. arm (v7) and arm64 now have this, but x86 doesn't. Linus doesn't like the overhead of previously proposed implementations for x86. It is possible to do this using PCID (in Intel processors from Sandy Bridge onward), which has already been done in PaX - and this should be fast enough.

Virtually mapped stacks protect against stack overflow attacks. They were implemented as an option for x86 only in 4.9. (Debian enables this.)

Copies to or from user memory sometimes use a user-controlled size that is not properly bounded. Hardened usercopy, implemented as an option in 4.8 for many architectures, protects against this. (Debian enables this.)

Memory wiping (zero on free) protects against some information leaks and use-after-free bugs. It was already implemented as a debug feature with a non-zero poison value, but at some performance cost. Zeroing can be cheaper since it allows the allocator to skip zeroing on reallocation. That was implemented as an option in 4.6. (Debian does not currently enable this but we might do if the performance cost is low enough.)

Constification (with the CONSTIFY gcc plugin) reduces the amount of static data that can be written to. As with RANDSTRUCT, this is applied to function pointer tables and explicitly marked structures. Instances of some types need to be modified very occasionally. In PaX/Grsecurity this is done with pax_{open,close}_kernel() which globally disable write protection temporarily. It would be preferable to override write protection in a more directed way, so that the permission to write doesn't leak into any other code that interrupts this process. The feature is not in mainline yet.

Atomic wrap detection protects against reference-counting bugs which can result in a use-after-free. Overflow and underflow are trapped and result in an 'oops'. There is no measurable performance impact. It would be applied to all operations on the atomic_t type, but there needs to be an opt-out for atomics that are not ref-counters - probably by adding an atomic_wrap_t type for them. This has been implemented for x86, arm, and arm64 but is not in mainline yet.

Kernel Freezer Hell

For the second year running, Jiri Kosina raised the problem of 'freezing' kthreads (kernel-mode threads) in preparation for system suspend (suspend to RAM, or hibernation). What are the semantics? What invariants should be met when a kthread gets frozen? They are not defined anywhere.

Most freezable threads don't actually need to be quiesced. Also many non-freezable threads are pointlessly calling try_to_freeze() (probably due to copying code without understanding it).

At a system level, what we actually need is I/O and filesystem consistency. This should be achieved by:

  • Telling mounted filesystems to freeze. They can quiesce any kthreads they created.
  • Device drivers quiescing any kthreads they created, from their PM suspend implementation.

The system suspend code should not need to directly freeze threads.

Kernel Documentation

Jon Corbet and Mauro Carvalho presented the recent work on kernel documentation.

The kernel's documentation system was a house of cards involving DocBook and a lot of custom scripting. Both the DocBook templates and plain text files are gradually being converted to reStructuredText format, processed by Sphinx. However, manual page generation is currently 'broken' for documents processed by Sphinx.

There are about 150 files at the top level of the documentation tree that are being gradually moved into subdirectories. The most popular files, which are likely to be referenced in external documentation, have been replaced by placeholders.

Sphinx is highly extensible and this has been used to integrate kernel-doc. It would be possible to add extensions that parse and include the MAINTAINERS file and Documentation/ABI/ files, which have their own formats, but the documentation maintainers would prefer not to add extensions that can't be pushed to Sphinx upstream.

There is lots of obsolete documentation, and patches to remove it would be welcome.

Linus objected to PDF files recently added under the Documentation/media directory - they are not the source format so should not be there! They should be generated from the corresponding SVG or image files at build time.

Issues around Tracepoints

Steve Rostedt and Shuah Khan led a discussion about tracepoints. Currently each maintainer decides which tracepoints to create. The cost of each added tracepoint is minimal, but the cost of very many tracepoints is more substantial. So there is such a thing as too many tracepoints, and we need a policy to decide when they are justified. They advised not to create tracepoints just in case, since kprobes can be used for tracing (almost) anywhere dynamically.

There was some support for requiring documentation of each new tracepoint. That may dissuade introduction of obscure tracepoints, but also creates a higher expectation of stability.

Tools such as bcc and IOVisor are now being created that depend on specific tracepoints or even function names (through kprobes). Should we care about breaking them?

Linus said that we should strive to be polite to developers and users relying on tracepoints, but if it's too painful to maintain a tracepoint then we should go ahead and change it. Where the end users of the tool are themselves developers it's more reasonable to expect them to upgrade the tool and we should care less about changing it. In some cases tracepoints could provide dummy data for compatibility (as is done in some places in procfs).

Niels Thykier: Piuparts integration in britney

4 December, 2016 - 18:06

As of today, britney now fetches reports from piuparts.debian.org and uses them as a part of her evaluation for package migration. As with her RC bug check, we are only preventing (known) regressions from migrating.

The messages (subject to change) look something like:

  • Piuparts tested OK
  • Rejected due to piuparts regression
  • Ignoring piuparts failure (Not a regression)
  • Cannot be tested by piuparts (not a blocker)

If you want to do machine parsing of the Britney excuses, we also provide an excuses.yaml. In there, you are looking for “excuses[X].policy_info.piuparts.test-results”, which will be one of:

  • pass
  • regression
  • failed
  • cannot-be-tested
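
If you just want the verdict programmatically, here is a minimal sketch in Python with PyYAML. The top-level "sources" key and the "item-name" field are assumptions about the file layout; the policy_info.piuparts.test-results path and its values are the ones listed above.

import yaml

# Load britney's excuses.yaml and print packages blocked by a piuparts regression.
with open("excuses.yaml") as fh:
    doc = yaml.safe_load(fh)

for excuse in doc.get("sources", []):  # "sources" is an assumed top-level key
    piuparts = excuse.get("policy_info", {}).get("piuparts", {})
    if piuparts.get("test-results") == "regression":
        # "item-name" is an assumed per-excuse field
        print("%s: rejected due to piuparts regression" % excuse.get("item-name", "?"))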

Enjoy.

Filed under: Debian, Release-Team

Jo Shields: A quick introduction to Flatpak

4 December, 2016 - 17:44

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is that, as of the upcoming 0.8.0 release of Flatpak, going from a clean install of the flatpak package to a working MonoDevelop is a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There are some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, or “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs.  I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Ben Hutchings: Linux Kernel Summit 2016, part 1

4 December, 2016 - 06:54

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions.

Stable process

Jiri Kosina, in his role as a distribution maintainer, sees too many unsuitable patches being backported - e.g. a fix for a bug that wasn't present or a change that depends on an earlier semantic change so that when cherry-picked it still compiles but isn't quite right. He thinks the current review process is insufficient to catch them.

As an example, a recent fix for a minor information leak (CVE-2016-9178) depended on an earlier change to page fault handling. When backported by itself, it introduced a much more serious security flaw (CVE-2016-9644). This could have been caught very quickly by a system call fuzzer.

Possible solutions: require 'Fixes' field, not just 'Cc: stable'. Deals with 'bug wasn't present', but not semantic changes.
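
For reference, the two hints look like this as trailers in a commit message (a sketch; the hash and subject line are placeholders, not a real commit):

Fixes: 0123456789ab ("subsys: subject of the commit that introduced the bug")
Cc: stable@vger.kernel.org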

There was some disagreement whether 'Fixes' without 'Cc: stable' should be sufficient for inclusion in stable. Ted Ts'o said he specifically does that in some cases where he thinks backporting is risky. Greg Kroah-Hartman said he takes it as a weaker hint for inclusion in stable.

Is it a good idea to keep 'Cc: stable' given the risk of breaking embargo? On balance, yes, it only happened once.

Sometimes it's hard to know exactly how/when the bug was introduced. Linus doesn't want people to guess and add incorrect 'Fixes' fields. There is still the option to give some explanation and hints for stable maintainers in the commit message. Ideally the upstream developer should provide a test case for the bug.

Is Linus happy?

Linus complained about minor fixes coming later in the release cycle. After rc2, all fixes should either be for new code introduced in the current release cycle or for important bugs. However, new, production-ready drivers without new infrastructure dependencies are welcome at almost any point in the release cycle.

He was unhappy about some big changes in RDMA, but I'm not sure what those were.

Bugzilla and bug tracking

Laura Abbott started a discussion of bugzilla.kernel.org, talking about subsystems where maintainers ignore it and any responses come from random people giving bad advice. This is a terrible experience for users. Several maintainers are actively opposed to using it, and the email bridge no longer works (or not well?). She no longer recommends that Fedora bug submitters file reports there.

Are there any alternatives? None were proposed.

Someone asked whether Bugzilla could tell reporters to use email for certain products/components instead of continuing with the bug entry process.

Konstantin Ryabitsev talked about the difficulty of upgrading a customised instance of Bugzilla. Much customisation requires patches which don't apply to the next version (maybe due to limitations of the extension mechanism?). He has had to drop many such patches.

Email is hard to track when a bug is handed over from one maintainer to another. Email archives are very unreliable. Linus: I'll take Bugzilla over mail-archive.

No-one is currently keeping track of bugs across the kernel and making sure they get addressed by an appropriate maintainer. It's (at least) a full-time job but no individual company has a business case for paying for this. Konstantin suggested (I think) that CII might pay for this.

There was some discussion of what information should be included in a bug report. The "Cut here" line in oops messages was said to be a mistake because there are often relevant messages before it. The model of computer is often important. Beyond that, there was not much interest in the automated information gathering that distributions do. Distribution maintainers should curate bugs before forwarding upstream.

There was a request for custom fields per component in Bugzilla. Konstantin says this is doable (possibly after upgrade to version 5); it doesn't require patches.

The future of the Kernel Summit

The kernel community is growing, and the invitation list for the core day is too small to include all the right people for technical subjects. For 2017, the core half-day will have an even smaller invitation list, only ~30 subsystem maintainers that Linus pulls from. The entire technical track will be open (I think).

Kernel Summit 2017 and some mini-summits will be held in Prague alongside Open Source Summit Europe (formerly LinuxCon Europe) and Embedded Linux Conference Europe. There were some complaints that LinuxCon is not that interesting to kernel developers, compared to Linux Plumbers Conference (which followed this year's Kernel Summit). However, the Linux Foundation is apparently soliciting more hardcore technical sessions.

Kernel Summit and Linux Plumbers Conference are quite small, and it's apparently hard to find venues for them in cities that also have major airports. It might be more practical to co-locate them both with Open Source Summit in future.

time_t and 2038

On 32-bit architectures the kernel's representation of real time (time_t etc.) will break in early 2038. Fixing this in a backward-compatible way is a complex problem.
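
To see where "early 2038" comes from: a signed 32-bit time_t counts seconds since the Unix epoch and overflows at 2^31 - 1. A quick sketch (Python, purely for illustration):

from datetime import datetime, timedelta, timezone

# A signed 32-bit time_t overflows 2**31 - 1 seconds after 1970-01-01 00:00:00 UTC.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=2**31 - 1))  # 2038-01-19 03:14:07+00:00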

Arnd Bergmann presented the current status of this process. There has not yet been much progress in mainline, but more fixes have been prepared. The changes to struct inode and to input events are proving to be particularly problematic. There is a need to add new system calls, and he intends to add these for all (32-bit) architectures at once.

Copyright retention

James Bottomley talked about how developers can retain copyright on their contributions. It's hard to renegotiate within an existing employment; much easier to do this when preparing to sign a new contract.

Some employers expect you to fill in a document disclosing 'prior inventions' you have worked on. Depending on how it's worded, this may require the employer to negotiate with you again whenever they want you to work on that same software.

It's much easier for contractors to retain copyright on their work - customers expect to have a custom agreement and don't expect to get copyright on contractor's software.

Vincent Bernat: Build-time dependency patching for Android

4 December, 2016 - 05:20

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.

buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.+'
        classpath 'com.android.tools.build:transform-api:1.5.+'
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}

Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context§

This section adds some context to the example. Feel free to skip it.

Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime which brings an up-to-date web engine, even for older versions of Android [1].

Recently, a security vulnerability has been spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:

Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.

The default behavior is specific to the Crosswalk webview: the Android built-in one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore [2].

Dashkiosk comes with an option to ignore TLS errors [3]. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior [4].

Simple method replacement§

Let’s replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:

// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}
Transform registration§

Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:

import com.android.build.api.transform.Context
import com.android.build.api.transform.QualifiedContent
import com.android.build.api.transform.Transform
import com.android.build.api.transform.TransformException
import com.android.build.api.transform.TransformInput
import com.android.build.api.transform.TransformOutputProvider
import org.gradle.api.logging.Logger

class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    @Override
    String getName() {
        return "PatchXWalk"
    }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    @Override
    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    @Override
    boolean isIncremental() {
        return true
    }

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}

// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))

The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources.

The getScopes() method should return a set of scopes for the transform. In our case, we are only interested by the external libraries. It’s also possible to transform our own classes.

The isIncremental() method returns true because we support incremental builds.

The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We haven’t implemented this method yet; as it stands, this causes the removal of all external dependencies from the application.

Noop transform§

To keep all external dependencies unmodified, we must copy them:

@Override
void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName, 
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR);
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
                FileUtils.delete(dest)
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}

We also need two additional imports:

import com.android.build.api.transform.Status
import org.apache.commons.io.FileUtils

Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching§

When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):

if ("${src}" ==~ ".*/org.xwalk/xwalk_core.*/classes.jar") {
    def pool = new ClassPool()
    pool.insertClassPath("${src}")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') // ❸

    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) // ❹

    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error) {
    return false;
}
""", ctc)) // ❺

    def sslUtilBytecode = ctc.toBytecode() // ❻

    // Write back the JAR file
    // …
} else {
    logger.info("Copy ${src}")
    FileUtils.copyFile(src, dest)
}

We also need the following additional imports to use Javassist:

import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod

Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❸). We locate the appropriate method and delete it (❹). Then, we add our custom method using the same name (❺). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❻.

The remaining step is to rebuild the JAR file:

def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))

// ❼
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
        s.close()
    }
}

// ❽
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)

output.close()

We need the following additional imports:

import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils

There are two steps. In ❼, all classes are copied to the new JAR, except the SslUtil class. In ❽, the modified bytecode for SslUtil is added to the JAR.

That’s all! You can view the complete example on GitHub.

More complex method replacement§

In the above example, the new method doesn’t use any external dependency. Let’s suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:

import org.chromium.net.NetError;
import android.net.http.SslCertificate;
import android.net.http.SslError;

// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
        default:
            break;
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}

The major difference with the previous example is that we need to import some additional classes.

Android SDK import§

The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:

androidJar = "${android.getSdkDirectory().getAbsolutePath()}/platforms/" +
             "${android.getCompileSdkVersion()}/android.jar"

We need to load it before adding the new method into SslUtil class:

def pool = new ClassPool()
pool.insertClassPath(androidJar)
pool.insertClassPath("${src}")
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
// …
External dependency import§

We must also import org.chromium.net.NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.

def pool = new ClassPool()
pool.insertClassPath(androidJar)
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
pool.importPackage('org.chromium.net.NetError');
ctc.addMethod(CtNewMethod.make("…"))
// Then, rebuild the JAR...

Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on. 

  2. I hope to have this fixed in later versions. 

  3. This may seem harmful and you are right. However, if you have an internal CA, it is currently not possible to provide its own trust store to a webview. Moreover, the system trust store is not used either. You also may want to use TLS for authentication only with client certificates, a feature supported by Dashkiosk

  4. Crosswalk being an opensource project, an alternative would have been to patch Crosswalk source code and recompile it. However, Crosswalk embeds Chromium and recompiling the whole stuff consumes a lot of resources. 

Ross Gammon: My Open Source Contributions June – November 2016

3 December, 2016 - 18:52

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian
  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared a new node-chroma-js package, but this is unfortunately blocked by several out-of-date & missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.
Ubuntu
  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible, but before I could chase up what to do about the fact that ansible-fireball was no longer part of the Ansible package, someone else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik here for the initial Numix Blue theme work here (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu which has a bug which is regularly reported during ISO testing, because the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.
Other
  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux. I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted using the HTTrack Website Copier. Now, I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll-based theme, as the old theming was pretty complex. I pushed the code to my Github repository for safe keeping.
Plan for December

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

Ubuntu
  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.
Other
  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.

Shirish Agarwal: Air Congestion and Politics

2 December, 2016 - 22:20

Confession time first – I am not a frequent flyer at all. My first flight was in late 2006. It was a 2-hour flight from Bombay (BOM) to Bengaluru (formerly Bangalore, BLR). I still remember the trepidation, the nervousness and the excitement the first time I took to the air. I still remember the flight very vividly.

It was a typically humid day for Bombay/Mumbai and we (a friend and I) had gone to Sahar (the domestic airport) to take the flight in the evening. Before departure the sky had turned golden-orange and I was wondering how I would feel once I was in the air. We started at around 20:00 hours and, as it was a clear night, were able to see the Queen’s Necklace (Marine Drive) in all her glory.

The photographs on the Wikipedia page don’t really do justice to how beautiful the whole boulevard looks at night, especially from up there. While we were watching, it seemed the pilot had banked at a 45-degree angle so we could have the best view of the necklace, or maybe the pilot wanted to take a photo, or maybe it was just me in overdrive (like Robin Williams’ Russian immigrant in Moscow on the Hudson the first time he goes to the mall ;)).

Either way, it was an experience I will never forget for the rest of my life. I remember I didn’t move an inch (not even to go to the loo) as I didn’t want to let go of the whole experience. Although I came back after 3-4 days, for a whole month I kept re-experiencing and re-imagining the flights each time I went to sleep.

While I can’t say flying has become routine, I have been lucky to have the opportunity to fly domestically around the country, primarily for work. After the initial romanticism wears off, you try to understand the various aspects of the flight that are happening around you.

These experiences are what led me to write and share today’s blog post. Yesterday, Ms. Mamata Banerjee, one of the leaders of the Opposition, cried wolf because her aircraft was circling the airport. Because she is the Chief Minister she feels she should have been given precedence, or at least that seems to be the way the story unfolded on TV.

I have flown about 15-20 times in the last decade, for work or leisure. On almost all of those flights it has been routine for the aircraft to circle the airport for 15-20 minutes before landing. This is ‘routine’. I have seen aircraft being stacked (remember the scene from Die Hard 2 where Holly McClane, John McClane’s wife, looks at different aircraft at different altitudes from her window seat); this is what an airport has to do when it doesn’t have enough runways. In fact, I read just a few days back that MIAL is going for an emergency expansion as they weren’t expecting as many passengers as they got this year and last. The same day there was also a near-miss between two aircraft at Mumbai airport itself. Because of Ms. Mamata’s belligerence, that story didn’t even get a mention in the mainstream TV media.

The point I want to underscore is that this is a fact of life, and not just in India; the world over, hubs seem to be busier than ever. For instance, Heathrow has also been a busy bee and will have to rework its air operations, as per a recent article.

In India, Kolkata is also one of the busier airports. If anything, I hope this teaches her about the issues that plague most Indian airports, and that she works with the Government at the Centre so the airport can expand further. They only got a new terminal three years back.

It is to address these issues that the Indian Government has come up with the ‘Regional Connectivity Scheme’.

Lastly, a bit of welcome news for people thinking of visiting India: the Government of the day is facilitating easier visa norms to increase tourism and trade. I hope this is beneficial to any and all Debian Developers who want to come and visit India.


Filed under: Miscellaneous Tagged: # Domestic Flights, #Air Congestion, #Airport Expansion, #Kolkata, #near-miss, #Visa for tourists

Raphaël Hertzog: My Free Software Activities in November 2016

2 December, 2016 - 18:45

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8 fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time to review all the entries in dla-needed.txt. I wanted to get rid of some misleading/no longer applicable comments and at the same time help Olaf who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough) such as those affecting dwarfutils, dokuwiki, irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1 fixing CVE-2016-9427. The patch was large and had to be manually backported as it was not applying cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp which was going to be removed from testing due to #841581. After looking at that bug, it turns out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package that had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem did not yet manage to pull out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish in order to prettify the Grub menu, but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.
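For anyone curious what such a fix typically involves, the sketch below is not the actual Distro Tracker patch, just an illustration of defensively normalising the Referer header in a Django view so that stray byte sequences cannot make later string handling raise (the helper name is made up):

    def safe_referer(request):
        # Return the Referer header as clean text, never raising on odd bytes.
        referer = request.META.get('HTTP_REFERER', '')
        if isinstance(referer, bytes):
            # Some WSGI stacks hand headers over as raw bytes; decode leniently.
            referer = referer.decode('utf-8', errors='replace')
        return referer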

Thanks

See you next month for a new summary of my activities.


Matthew Garrett: Ubuntu still isn't free software

2 December, 2016 - 16:37
Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.

The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu.

This remains incompatible with the principles of free software. The freedom to take someone else's work and redistribute it is a vital part of the four freedoms. It's legitimate for Canonical to insist that you not pass it off as their work when doing so, but their IP policy continues to insist that you remove all references to Canonical's trademarks even if their use would not infringe trademark law.

If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.

Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.

[1] And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise


Thorsten Alteholz: My Debian Activities in November 2016

2 December, 2016 - 04:33

FTP assistant

This month I marked 377 packages for accept and rejected 36 packages. I also sent 13 emails to maintainers asking questions.

Debian LTS

This was my twenty-ninth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 11 hours. During that time I did uploads of

  • [DLA 696-1] bind9 security update for one CVE
  • [DLA 711-1] curl security update for nine CVEs

The upload of curl started as an embargoed one but the discussion about one fix took some time and the upload was a bit delayed.

I also prepared a test package for jasper which takes care of nine CVEs and is available here. If you are interested in jasper, please download it and check whether everything is working in your environment. As upstream only takes care of CVEs/bugs at the moment, maybe we should not upload the old version with patches but the new version with all fixes. Any comments?

Other stuff

As it is again that time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. As in past years, the Debian Med team is running a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug will be closed. Don’t hesitate, start to squash :-).

In November I also uploaded new versions of libmatthew-java, node-array-find-index, node-ejs, node-querystringify, node-require-dir, node-setimmediate and libkeepalive.
Further, I added node-json5, node-emojis-list, node-big.js and node-eslint-plugin-flowtype to the NEW queue, sponsored an upload of node-lodash, adopted gnupg-pkcs11-scd, reverted the -fPIC patch in libctl and fixed RC bugs in alljoyn-core-1504, alljoyn-core-1509 and alljoyn-core-1604.

Daniel Pocock: Using a fully free OS for devices in the home

1 December, 2016 - 20:11

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices, and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately include spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.
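For reference, the underlying odhcpd/dnsmasq configuration can express dual-stack service even where the web UI cannot; my understanding is that something along these lines in /etc/config/dhcp does it by hand, though treat the exact options as an assumption rather than a verified recipe:

    config dhcp 'lan'
            option interface 'lan'
            option start '100'
            option limit '150'
            option leasetime '12h'
            option dhcpv6 'server'   # DHCPv6 service from odhcpd
            option ra 'server'       # router advertisements on the same interface

Having to drop to the shell for something this basic rather defeats the point of an appliance UI, which only reinforces the conclusion above.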

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
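To illustrate the point about Shorewall's syntax being easier to reason about than a web UI, a simple two-zone home router policy reads roughly like this (a simplified sketch, not a drop-in configuration):

    # /etc/shorewall/zones
    fw      firewall
    net     ipv4
    loc     ipv4

    # /etc/shorewall/policy  -- default behaviour between zones
    loc     net     ACCEPT
    $FW     all     ACCEPT
    net     all     DROP    info
    all     all     REJECT  info

One line per zone pair, and the intent is visible at a glance.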

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example when upgrading a broadband connection to Gigabit, or when an older NAS gets a noisy fan or struggles with SSD performance; in these cases, the decision about what to buy will be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?
Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

Carl Chenet: My Free Software activities in November 2016

1 December, 2016 - 06:00

My monthly report for November 2016 gives an extended list of my Free Software related activities during the month.

Personal projects: Journal du hacker:

The Journal du hacker is a French-speaking, Hacker News-like website dedicated to the French-speaking Free and Open Source Software community.

That’s all folks! See you next month!

Enrico Zini: Links for December 2016

1 December, 2016 - 06:00
German Cities Are Solving The Age-Old Public Toilet Problem [map]
«City governments are paying local businesses to open up their restrooms to the public. … The program, called Nette Toilette or Nice Toilet, is active in 210 cities and has been running since 2000. Cities pay from $34 to $112 per month to a business, and it puts a sticker in its window to tell people that they can come in and pee for free. … Bremen, a city with a population of over half a million people, reckons it saves $1 million per year by using the network, which costs it $168,000 per year. So successful is the scheme that it has given Bremen the best ratio of public toilets to citizens in Germany.»

Joey Hess: drought

1 December, 2016 - 05:09

Drought here since August. The small cistern ran dry a month ago, which has never happened before. The large cistern was down to some 900 gallons. I don't use anywhere near the national average of 400 gallons per day. More like 10 gallons. So could have managed for a few more months. Still, this was worrying, especially as the area moved from severe to extreme drought according to the US Drought Monitor.

Two days of solid rain fixed it, yay! The small cistern has already refilled, and the large will probably be full by tomorrow.

The winds preceding that same rain storm fanned the flames that destroyed Gatlinburg. Earlier, fire got within 10 miles of here, although that may have been some kind of controlled burn.

Climate change is leading to longer duration weather events in this area. What tended to be a couple of dry weeks in the fall, has become multiple months of drought and weeks of fire. What might have been a few days of winter weather and a few inches of snow before the front moved through has turned into multiple weeks of arctic air, with multiple 1 ft snowfalls. What might have been a few scorching summer days has become a week of 100-110 degree temperatures. I've seen all that over the past several years.

After this, I'm adding "additional, larger cistern" to my todo list. Also "larger fire break around house".

Chris Lamb: Free software activities in November 2016

1 December, 2016 - 04:18

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Started work on a Python API to the UK Postbox mail scanning and forwarding service. (repo)
  • Lots of improvements to buildinfo.debian.net, my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them, including making GPG signatures mandatory (#7), updating jenkins.debian.net to sign them and moving to SSL.
  • Improved the Django client to the KeyError error tracking software, enlarging the test coverage and additionally adding support for grouping errors using a context manager.
  • Made a number of improvements to travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change:
    • Install build-dependencies with debugging output. Thanks to @waja. (#31)
    • Install Lintian by default. Thanks to @freeekanayaka. (#33).
    • Call mktemp with --dry-run to avoid having to delete it later. (commit)
  • Submitted a pull request to Wheel (a utility to package Python libraries) to make the output of METADATA files reproducible. (#73)
  • Submitted some miscellaneous documentation updates to the Tails operating system. (patches)
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
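In practice that consensus can be checked with very ordinary tools: build the same source twice (or obtain an independent builder's binaries) and compare the results bit for bit, reaching for diffoscope when they differ. A rough sketch of the idea, with an illustrative package name:

    # First build
    dpkg-buildpackage -us -uc
    cp ../hello_2.10-1_amd64.deb first.deb
    # Second build, ideally in a fresh environment or on another machine
    dpkg-buildpackage -us -uc
    cp ../hello_2.10-1_amd64.deb second.deb
    # Identical checksums mean the build reproduced; diffoscope explains any differences
    sha256sum first.deb second.deb
    diffoscope first.deb second.deb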


This month:


My work in the Reproducible Builds project was also covered in our weekly reports (#80, #81, #82 & #83).


Toolchain issues

I submitted the following patches to fix reproducibility-related toolchain issues with Debian:


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.


jenkins.debian.net

jenkins.debian.net runs our comprehensive testing framework.

  • buildinfo.debian.net has moved to SSL. (ac3b9e7)
  • Submit signing keys to keyservers after generation. (bdee6ff)
  • Various cosmetic changes, including
    • Prefer if X not in Y over if not X in Y. (bc23884)
    • No need for a dictionary; let's just use a set. (bf3fb6c)
    • Avoid DRY violation by using a for loop. (4125ec5)

I also submitted 9 patches to fix specific reproducibility issues in apktool, cairo-5c, lava-dispatcher, lava-server, node-rimraf, perlbrew, qsynth, tunnelx & zp.

Debian
Debian LTS

This month I have been paid to work 11 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 697-1 for bsdiff fixing an arbitrary write vulnerability.
  • Issued DLA 705-1 for python-imaging correcting a number of memory overflow issues.
  • Issued DLA 713-1 for sniffit where a buffer overflow allowed a specially-crafted configuration file to provide a root shell.
  • Issued DLA 723-1 for libsoap-lite-perl preventing a Billion Laughs XML expansion attack.
  • Issued DLA 724-1 for mcabber fixing a roster push attack.
Uploads
  • redis:
    • 3.2.5-2 — Tighten permissions of /var/{lib,log}/redis. (#842987)
    • 3.2.5-3 & 3.2.5-4 — Improve autopkgtest tests and install upstream's MANIFESTO and README.md documentation.
  • gunicorn (19.6.0-9) — Adding autopkgtest tests.
  • libfiu:
    • 0.94-1 — Add autopkgtest tests.
    • 0.95-1, 0.95-2 & 0.95-3 — New upstream release and improve autopkgtest coverage.
  • python-django (1.10.3-1) — New upstream release.
  • aptfs (0.8-3, 0.8-4 & 0.8-5) — Adding and subsequently improving the autopkgtest tests.


I performed the following QA uploads:



Finally, I also made the following non-maintainer uploads:

  • libident (0.22-3.1) — Move from obsolete Source-Version substvar to binary:Version. (#833195)
  • libpcl1 (1.6-1.1) — Move from obsolete Source-Version substvar to binary:Version. (#833196)
  • pygopherd (2.0.18.4+nmu1) — Move from obsolete Source-Version substvar to ${source:Version}. (#833202)
Debian bugs filed
RC bugs

I also filed 59 FTBFS bugs against arc-gui-clients, asyncpg, blhc, civicrm, d-feet, dpdk, fbpanel, freeciv, freeplane, gant, golang-github-googleapis-gax-go, golang-github-googleapis-proto-client-go, haskell-cabal-install, haskell-fail, haskell-monadcatchio-transformers, hg-git, htsjdk, hyperscan, jasperreports, json-simple, keystone, koji, libapache-mod-musicindex, libcoap, libdr-tarantool-perl, libmath-bigint-gmp-perl, libpng1.6, link-grammar, lua-sql, mediatomb, mitmproxy, ncrack, net-tools, node-dateformat, node-fuzzaldrin-plus, node-nopt, open-infrastructure-system-images, open-infrastructure-system-images, photofloat, ppp, ptlib, python-mpop, python-mysqldb, python-passlib, python-protobix, python-ttystatus, redland, ros-message-generation, ruby-ethon, ruby-nokogiri, salt-formula-ceilometer, spykeviewer, sssd, suil, torus-trooper, trash-cli, twisted-web2, uftp & wide-dhcpv6.

FTP Team

As a Debian FTP assistant I ACCEPTed 70 packages: bbqsql, coz-profiler, cross-toolchain-base, cross-toolchain-base-ports, dgit-test-dummy, django-anymail, django-hstore, django-html-sanitizer, django-impersonate, django-wkhtmltopdf, gcc-6-cross, gcc-defaults, gnome-shell-extension-dashtodock, golang-defaults, golang-github-btcsuite-fastsha256, golang-github-dnephin-cobra, golang-github-docker-go-events, golang-github-gogits-cron, golang-github-opencontainers-image-spec, haskell-debian, kpmcore, libdancer-logger-syslog-perl, libmoox-buildargs-perl, libmoox-role-cloneset-perl, libreoffice, linux-firmware-raspi3, linux-latest, node-babel-runtime, node-big.js, node-buffer-shims, node-charm, node-cliui, node-core-js, node-cpr, node-difflet, node-doctrine, node-duplexer2, node-emojis-list, node-eslint-plugin-flowtype, node-everything.js, node-execa, node-grunt-contrib-coffee, node-grunt-contrib-concat, node-jquery-textcomplete, node-js-tokens, node-json5, node-jsonfile, node-marked-man, node-os-locale, node-sparkles, node-tap-parser, node-time-stamp, node-wrap-ansi, ooniprobe, policycoreutils, pybind11, pygresql, pysynphot, python-axolotl, python-drizzle, python-geoip2, python-mockupdb, python-pyforge, python-sentinels, python-waiting, pythonmagick, r-cran-isocodes, ruby-unicode-display-width, suricata & voctomix-outcasts.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against node-cliui, node-core-js, node-cpr & node-grunt-contrib-concat.

Jonas Meurer: debian lts report 2016.11

1 December, 2016 - 02:34
Debian LTS report for November 2016

November 2016 was my third month as a Debian LTS team member. I was allocated 11 hours and had 1.75 hours left over from October, making a total of 12.75 hours. In November I spent all 12.75 hours (and even a bit more) preparing security updates for spip, memcached and monit.

In particular, the updates of spip and monit took a lot of time (each one more than six hours). The patches for both packages were horrible to backport as the affected codebase changed a lot between the Wheezy versions and current upstream versions. Still it was great fun and I learned a lot during the backporting work. Due to the intrusive nature of the patches, I also did much more extensive testing before uploading the packages, which took quite a bit of time as well.

Monit 5.4-2+deb7u1 has not been uploaded to wheezy-security yet, as I decided to ask for further review and testing on the debian-lts mailing list first.

Below is the list of items I worked on in November, in the well-known format:

  • DLA 695-1: several XSS, CSRF and code execution flaws fixed in spip 2.1.17-1+deb7u6
  • DLA 701-1: integer overflows, buffer over-read fixed in memcached 1.4.13-0.2+deb7u2
  • CVE-2016-7067: backported CSRF protection to monit 5.4-2+deb7u1
