Planet Debian

Planet Debian - http://planet.debian.org/

Stig Sandbeck Mathisen: MIME types and applications

9 December, 2016 - 06:00

On a Linux system with ‘desktop-file-utils’ installed, the default application used to open a file from a file manager, from a web browser, or with “xdg-open” on the command line is not static. The last installed or upgraded application that can handle the file type becomes the default.

For example: after installing gimp, that application will be used to open any of the many types of files it supports. This lasts until another application which can open those MIME types is installed or upgraded.

If I later install or upgrade “mupdf”, that application will be used for PDF, until the next application able to open PDF is installed or upgraded, and so on.

There are several bug reports filed for this confusing behaviour:

Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=525077

Ubuntu: https://bugs.launchpad.net/ubuntu/+source/gimp/+bug/574342

Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=727422

Components

/usr/bin/update-desktop-database

…is a command in the package ‘desktop-file-utils’

This command is run from the package postinst script, and is also triggered by writes to /usr/share/applications, where .desktop files are installed.

/usr/share/applications

This directory contains a list of applications (files ending with .desktop). These desktop files list the MIME types they are able to work with.

The ‘mupdf.desktop’ example shows it is able to work with (among others) application/pdf:

[Desktop Entry]
Encoding=UTF-8
Name=MuPDF
GenericName=PDF file viewer
Comment=PDF file viewer
Exec=mupdf %f
TryExec=mupdf
Icon=mupdf
Terminal=false
Type=Application
MimeType=application/pdf;application/x-pdf;
Categories=Viewer;Graphics;
NoDisplay=true

[Desktop Action View]
Exec=mupdf %f

The gimp.desktop application entry shows it is more capable:

[Desktop Entry]
Version=1.0
Type=Application
Name=GNU Image Manipulation Program
# [...]
MimeType=image/bmp;image/g3fax;image/gif;image/x-fits;image/x-pcx;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-psd;image/x-sgi;image/x-tga;image/x-xbitmap;image/x-xwindowdump;image/x-xcf;image/x-compressed-xcf;image/x-gimp-gbr;image/x-gimp-pat;image/x-gimp-gih;image/tiff;image/jpeg;image/x-psp;application/postscript;image/png;image/x-icon;image/x-xpixmap;image/svg+xml;application/pdf;image/x-wmf;image/x-xcursor;

However, I’m quite sure I do not want ‘gimp’ to be the default viewer for all those file types.

/usr/share/applications/mimeinfo.cache

This is a list of MIME types, with a list of applications able to open them. The first entry in the list is the default application.

Examples:

With ‘gimp.desktop’ first, “xdg-open test.pdf” will use gimp:

[MIME Cache]
# [...]
application/pdf=gimp.desktop;mupdf.desktop;evince.desktop;libreoffice-draw.desktop;

After uninstalling and reinstalling mupdf, “mupdf.desktop” is first in the list, and “xdg-open test.pdf” will use mupdf:

[MIME Cache]
# [...]
application/pdf=mupdf.desktop;gimp.desktop;evince.desktop;libreoffice-draw.desktop;

The order of .desktop files in mimeinfo.cache is the reverse of the order in which they were added to that directory: the last installed utility is first in the list.
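
To see which application “xdg-open” would currently pick for a given MIME type, the xdg-mime tool from the ‘xdg-utils’ package can be queried; a quick check, assuming mupdf.desktop currently sits first in the cache:

$ xdg-mime query default application/pdf
mupdf.desktop

A per-user default can also be pinned with “xdg-mime default mupdf.desktop application/pdf”, which records the choice in the user’s mimeapps.list and takes precedence over the mimeinfo.cache ordering.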

Application Trace

This was fun to dig into. I’ve just had some training which included a better look at auditd. Auditd is a nice hammer, and this problem was a good nail.

I ran the command under “autrace”, and then looked for the order of reads from each run.

When “mupdf” is installed last, mupdf.desktop is read last, and placed first in the list of applications:

root@laptop:~# autrace /usr/bin/update-desktop-database
Waiting to execute: /usr/bin/update-desktop-database
Cleaning up...
Trace complete. You can locate the records with 'ausearch -i -p 13507'

root@laptop:~# ausearch -p 13507 | aureport --file | egrep 'gimp|mupdf'
389. 12/09/2016 17:35:37 /usr/share/applications/gimp.desktop 4 yes /usr/bin/update-desktop-database 1000 8002
390. 12/09/2016 17:35:37 /usr/share/applications/gimp.desktop 2 yes /usr/bin/update-desktop-database 1000 8003
391. 12/09/2016 17:35:37 /usr/share/applications/mupdf.desktop 4 yes /usr/bin/update-desktop-database 1000 8010
392. 12/09/2016 17:35:37 /usr/share/applications/mupdf.desktop 2 yes /usr/bin/update-desktop-database 1000 8011

root@laptop:~# grep application/pdf /usr/share/applications/mimeinfo.cache
application/pdf=mupdf.desktop;gimp.desktop;evince.desktop;libreoffice-draw.desktop;

Reinstalling “gimp” puts that first in the entry for application/pdf:

root@laptop:~# apt install --reinstall gimp
[...]
Preparing to unpack .../gimp_2.8.18-1_amd64.deb ...
Unpacking gimp (2.8.18-1) over (2.8.18-1) ...
Processing triggers for mime-support (3.60) ...
Processing triggers for desktop-file-utils (0.23-1) ...
Setting up gimp (2.8.18-1) ...
Processing triggers for gnome-menus (3.13.3-8) ...
[...]

root@laptop:~# autrace /usr/bin/update-desktop-database
Waiting to execute: /usr/bin/update-desktop-database
Cleaning up...
Trace complete. You can locate the records with 'ausearch -i -p 15043'

root@laptop:~# ausearch -p 15043 | aureport --file | egrep 'gimp|mupdf'
389. 12/09/2016 17:39:53 /usr/share/applications/mupdf.desktop 4 yes /usr/bin/update-desktop-database 1000 9550
390. 12/09/2016 17:39:53 /usr/share/applications/mupdf.desktop 2 yes /usr/bin/update-desktop-database 1000 9551
391. 12/09/2016 17:39:53 /usr/share/applications/gimp.desktop 4 yes /usr/bin/update-desktop-database 1000 9556
392. 12/09/2016 17:39:53 /usr/share/applications/gimp.desktop 2 yes /usr/bin/update-desktop-database 1000 9557

root@laptop:~# grep application/pdf /usr/share/applications/mimeinfo.cache
application/pdf=gimp.desktop;mupdf.desktop;evince.desktop;libreoffice-draw.desktop;

Alessio Treglia: The new professionals of the interconnected world

8 December, 2016 - 21:02

There is an empty chair at the conference table of business professionals, an unassigned place that increasingly calls for the presence of a new type of integration manager. The ever-increasing specialization imposed by the modern world is bringing out, with great emphasis, the need for an interdisciplinary professional who understands the demands of specialists and who is able to coordinate and link actions and decisions. This need, often still ignored, is a direct result of the growing complexity of the modern world and of the fast communications inside the network.

“Complexity” is undoubtedly the most suitable paradigm to characterize the historical and social model of today’s world, in which the interactions and connections between the various areas now form an inextricable network of relations. Since the ’60s and ’70s a large group of scholars – including the chemist Ilya Prigogine and the physicist Murray Gell-Mann – began to study what would become a true Science of Complexity.

Yet this is not an entirely new concept: the term means “composed of several parts connected to each other and dependent on each other”, exactly like reality, nature, society, and the environment around us. A “complex” mode of thought integrates and considers all contexts, interconnections, and interrelationships between the different realities as part of the vision.

What is professionalism? And who are professionals? What can define a professional? <…>

<Read More…[by Fabio Marzocca]>

Dirk Eddelbuettel: RcppAPT 0.0.3

8 December, 2016 - 08:19

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- is now on CRAN.

We changed the package to require C++11 compilation as newer Debian systems with g++-6 and the current libapt-pkg-dev library cannot build under the C++98 standard which CRAN imposes (and let's not get into why ...). Once set to C++11 we have no issues. We also added more examples to the manual pages, and turned on code coverage.

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Shirish Agarwal: Day trip in Cape Town, part 2

8 December, 2016 - 04:10

The post continues from the last post shared.

Let me first get some interesting tit-bits not related to the day-trip out of the way –

I don’t know whether we had full access to see all parts of Fuller Hall or not. For a couple of days I was wandering around Fuller Hall, specifically next to where clothes were pressed. I came to know of the laundry service pretty late, but it was still useful. Umm… next to where the ladies/gentlemen pressed our clothes, there is a stairway which goes down. In fact, even on the opposite side there is a stairway which goes down. I dunno if other people explored them or not.

I was surprised and shocked to see bars in each room as well as on connecting walkways etc. I felt a bit sad, confused and curious, and went on to find more places like that. After a while I came up to the ground level and enquired with some of the ladies therein. I was shocked to learn that UCT, some years ago (they were not specific), had been a jail. I couldn’t imagine that a place which has so much warmth (in people, not climate) could have been ‘evil’ in a sense. I was not able to get much information out of them about the nature of the jail; maybe it is a dark past that nobody wants to open up about, dunno. There were also two *important* aspects of UCT which Bernelle either forgot or didn’t share, and which I came to know of only via the Wikipedia page:

1. MeerKAT – Apparently quite a bit of the technology was built at UCT itself. This would have been interesting for geeks and wanna-be geeks like me.

2. The OpenContent Initiative by UCT – This would have been also something worth exploring.

One more interesting thing which I saw was the French consulate in Cape Town, from the outside –

I would urge you to look at the picture in the gallery, as the picture I shared doesn’t really show all the details. For e.g. the typical large French windows which are the hallmark of French architecture don’t show their glory, but if you look at the 1306×2322 original picture instead of the 202×360 reproduction you will see that. You will also see the insignia of the French Imperial Eagle, whose history I came to know only after I looked it up on the Wikipedia page that day. It seemed fascinating, and probably carries the same pride as the State Emblem of India does for Indians, with the four Asiatic Lions standing in a circle protecting each other.

I also liked the palm tree and the way the French consulate seemed little and yet had character amid all the big buildings.

What was also interesting was that there wasn’t any scare/fear build-up and we could take photos from outside, unlike what I had seen and experienced in Doha, Qatar as far as photography near Western embassies/consulates was concerned.

One of the very eye-opening moments for me came while researching flights from India to South Africa. While perhaps unconsciously I might have known that the Middle East is close to India, it was only during the search that I became aware that most places in the Middle East are only an hour or two away by flight. This was shocking, as there is virtually no mention of these neighbours of ours even though they are a source of large-scale remittances every year. I mean, this should be in our history and geography books, but most do not dwell on the subject. It was only during and after the trip that I could understand Mr. Modi’s interactions and trade policies with the Middle East.

Another interesting bit was seeing a bar in a Springbok bus –

While admittedly it is not the best picture of the bar, I was surprised to find a bar at the back of a bus. By bar I mean a machine which can serve anything from juices to alcoholic drinks depending upon what is stocked. What was also interesting in the same bus is that the bus also had a middle entrance-and-exit.

This is something I hadn’t seen in most Indian buses. Some of the Volvo buses have one, but it is rarely used (except in emergencies). An exhaustive showcase of local buses can be seen here. I find the hand-drawn/CAD depictions of all the buses by Amit Pense true to a T.

This is also something which I have not observed in Indian inter-city buses (an axe to break the window in case of accident, and breakable glass which doesn’t hurt anyone, I presume), whether they are State-Transport or the high-end Volvos. Either it’s part of South African road regulations or something that Springbok buses do for their customers. All of these queries about the different facets I wanted to put to the bus driver and the attendant/controller, but in the excitement of seeing and recording new things I couldn’t ask.

In fact, one of the more interesting things I looked at, and could look at day and night, is the variety of vehicles on display in Cape Town. In hindsight, I should have bought a couple of 128 GB MMC cards for my mobile rather than the 64 GB one. It was just plain inadequate to capture all that was new and interesting.

This truck I had seen some 100 metres from the Auditorium on Upper Campus. The truck’s design and paint were something I had never seen before. It is/was similar to the casket trucks seen in movies, but the way it was painted and everything made it special.

What was interesting was to see the gamut of different vehicles. For instance, I saw no bicycles in most places. There were mostly Japanese/Italian bikes and all sorts of trucks. If I had known before, I would definitely have bought an SD card specifically to take snaps of all the different types of trucks, cars etc. that I saw therein.

The adage/phrase “I should stop in any one place and the whole world will pass me by” seemed true on quite a few South African roads. While the roads were on par with or a shade better than India’s, many of them were wide roads. Seeing those, I was left imagining how the Autobahn in Germany and other high-speed expressways would look and feel.

India has also been doing that with the Pune-Mumbai Expressway and projects like the Yamuna Expressway and now its extension, the Agra-Lucknow Expressway, but doing this all over India would take probably a decade or more. We have been at it for a decade and a half. NHDP and PMGSY are two projects which are still ongoing to better the roads. We have been having arguments over whether to have tolls or no tolls, but that is a discussion for some other time.

One of the more interesting sights I saw was the high-arched gothic-styled church from outside. This is near Longstreet as well.

I have seen something similar in Goa and Pondicherry, but not with such high arches. I did try a couple of times to gain entry, but one time it was closed and the other time some repair/construction work was going on. I would have loved to see it from the inside, and hopefully they would have had an organ (music) as well. I could imagine to some extent the sort of music that would have come out.

Seafood enthusiasts, lovers, aficionados and pescatarians would have a ball of a time in Goa. Goa is on the Konkan coast, and while I’m veggie, those who enjoy seafood really have a ball of a time there. Fouthama’s Festival, which happens in February, is particularly attractive, as Goan homes are thrown open for people to come and sample their food, exchange recipes and the like. This happens around 2 weeks before the Goan Carnival and is very much a part of the mish-mashed Konkani-Bengali-Parsi-Portuguese culture.

I better stop here about the Goa otherwise I’ll get into reminiscing mode.

To put the story and events back on track from where we left off (no fiction hereon), Nicholas was in constant communication with base, i.e. UCT, as well as with another group who were hiking from UCT to Table Mountain. We waited for the other group to join us; then at 13:00 hrs it was decided to move along, as they had lost their way somewhere in between.

We came down by the same cable car and then ventured on towards Houtbay. Houtbay has it all: a fisherman’s wharf, actual boats with tough, mean-looking men with tattoos working on them while puffing cigars/pipes, a gaggle of sea-gulls, the whole scene. Sharing a few pictures of the way in between.

I just now had a quick look at the restaurant and it seems they had options for veggies too. Unfortunately the rating leaves a bit to be desired, but then dunno, as Indian flavouring is something that takes time to get used to. Zomato doesn’t give any idea of how long a restaurant has been in business, and it has too few reviews, so it is not easy to know.

Notice the pattern, the pattern of small houses I saw all the way to Houtbay and back. I do vaguely remember starting a discussion about it, but don’t really remember how it went. I have seen (on TV) cities like Miami, Dubai or Hong Kong which have big buildings, but both in Konkan and in Houtbay there were small buildings. I guess a combination of zoning regulations, a feel of community, and fear of being flooded all play into beaches being the way they are.

Also, this probably is good as less stress on the environment.

The Audi – a rare car to be seen in India. This car has been associated with Ravi Shastri since he won one in 1985. I was young, but I still get goosebumps remembering those days.

First glance of Houtbay beach and pier. Notice how clean and white the beach is.

You can see the wharf grill restaurant in the distance (side view), and the back of the hop-on hop-off bus (a concept which was unknown to me till then). Once I came back and explored, I came to know this concept is prevalent in many touristy places around the world. Umm… by sheer happenstance I also captured a beautiful-looking Indian female.

In Hindi, we would call this picture ‘virodabhas’ or ‘contradiction’. This was in the afternoon, around 1430 hrs. You have the sun, the clouds, the mountains, the x number of boats, the pier, the houses, the cars, the shops. It was all crazy and beautiful at the same time. The biggest contradiction is seeing the mountain, the beach and the sea in the same picture. It baffles the mind.

We were supposed to go on a short cruise to the seal/dolphin island, but as we were late (having waited for the other group) we didn’t go and instead just loitered there.

IIRC the lookout bar is situated just next to Houtbay Search and Rescue, although I was curious whether the lookout tower was used in case of disappearances.

Seal jumping over water, what a miracle!

It looked like the boat we could have been on. I clicked it as I especially liked the name: Calypso and Calypso. I shared the two links as the mythologies and interpretations differ a bit between Greek and Hollywood culture.

You can see a few Debian folks in the foreground, next to the pole, and a bit of the area around.

I don’t know anything about water sports, and after some time he came out. I was left wondering, though, how safe he was in that water. While he was close to the pier and was just paddling, and there weren’t big waves, I still felt a bit of concern.

While the act was not at the level we see in the movies, still, for the time I hung around, I saw him showing attitude for his younger audiences, eating out of their hands and making funny sounds. Btw, he farted a few times; whether that was a put-on or not I can’t really say, but it produced a few guffaws from his audience.

I dunno what the birds came down for. Mr. Seal was being fed oily small fish parts; dunno if the oil was secreted by the fish themselves or whatever, it just looked oily from a distance.

There wasn’t much activity at the time we went. It probably would have been different at sunrise, and would be at sunset. The only activity I saw was on this boat, where they were busy fixing and disentangling the lines. I came up with 5-15 different ideas for a story but rejected them because –

a. Probably all of them have been tried. People have been fishing since the beginning of time, and modern fishing for probably 200-odd years or so. I have read accounts of fishing companies from the early 1800s onwards, so probably everything must have been tried.

b. The more dangerous one: if there is a unique idea, it becomes more dangerous, as writing is an all-consuming process. Writing a blog post (bad or good) takes lots of time. I constantly read, re-read, and try to improvise for as long as I can, or till my patience runs out. In a book you simply can’t have such luxuries.

A no parking/tow zone in/near the Houtbay Search and Rescue. Probably to get emergency vehicles out once something untoward happens.

Saved 54 lives, towed 154 boats – salut, Houtbay Sea Rescue!

The only small criticism is for Houtbay – there wasn’t a single public toilet. We had to ask a favour at kraal kraft to use their toilet, and there could have been accidents: it wasn’t well lit and water was spilled around.

For us, because we were late, we missed both the boat cruise as well as some street shops selling trinkets. Other than that it was all well. We should have stayed till sunset – I am sure the view would have been breathtaking – but we hadn’t booked till evening.

Overall it was an interesting day: we had explored part of Table Mountain, seen the somewhat outrageously priced trinkets there, and explored the Houtbay seaside as well.


Filed under: Miscellenous Tagged: #Audi, #Cape Town, #Cruises, #Debconf16, #French Council, #Geography, #Houtbay Sea Rescue, #Jail, #Middle East, #Springbok Atlas, #Vehicles

Tianon Gravi: My Docker Install Process

7 December, 2016 - 14:00

I’ve had several requests recently for information about how I personally set up a new machine for running Docker (especially since I don’t use the infamous curl get.docker.com | sh), so I figured I’d outline the steps I usually take.

For the purposes of simplicity, I’m going to assume Debian (specifically stretch, the upcoming Debian stable release), but these should generally be easily adjustable to jessie or Ubuntu.

These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key

The way I do this is probably a bit unconventional, but the basic gist is something like this:

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
gpg --export --armor 58118E89F3A912897C070ADBF76221572C52609D | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc
rm -rf "$GNUPGHOME"

(On jessie or another release whose APT doesn’t support .asc files in /etc/apt/trusted.gpg.d, I’d drop --armor and the .asc and go with simply /.../docker.gpg.)

This creates me a new GnuPG directory to work with (so my personal ~/.gnupg doesn’t get cluttered with this new key), downloads Docker’s signing key from the keyserver gossip network (verifying the fetched key via the full fingerprint I’ve provided), exports the key into APT’s keystore, then cleans up the leftovers.
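
Spelled out, the jessie-compatible variant from the note above would look something like this (same key and steps, just a binary-format keyring; the output redirect is only there to keep raw key data off the terminal):

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
gpg --export 58118E89F3A912897C070ADBF76221572C52609D | sudo tee /etc/apt/trusted.gpg.d/docker.gpg > /dev/null
rm -rf "$GNUPGHOME"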

For completeness, other popular ways to fetch this include:

sudo apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

(worth noting that man apt-key discourages the use of apt-key adv)

wget -qO- 'https://apt.dockerproject.org/gpg' | sudo apt-key add -

(no verification of the downloaded key)

Here’s the relevant output of apt-key list on a machine where I’ve got this key added in the way I outlined above:

$ apt-key list
...

/etc/apt/trusted.gpg.d/docker.gpg.asc
-------------------------------------
pub   rsa4096 2015-07-14 [SCEA]
      5811 8E89 F3A9 1289 7C07  0ADB F762 2157 2C52 609D
uid           [ unknown] Docker Release Tool (releasedocker) <docker@docker.com>

...
add Docker’s APT source

If you prefer to fetch sources via HTTPS, install apt-transport-https, but I’m personally fine with simply doing GPG verification of fetched packages, so I forgo that in favor of fewer packages installed. YMMV.

echo 'deb http://apt.dockerproject.org/repo debian-stretch main' | sudo tee /etc/apt/sources.list.d/docker.list

Hopefully it’s obvious, but debian-stretch in that line should be replaced by debian-jessie, ubuntu-xenial, etc. as desired. It’s also worth pointing out that this will not include Docker’s release candidates. If you want those as well, add testing after main, i.e. ... debian-stretch main testing' | ....
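
In full, the release-candidate-enabled line would then read:

echo 'deb http://apt.dockerproject.org/repo debian-stretch main testing' | sudo tee /etc/apt/sources.list.d/docker.list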

At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
...
Hit:1 http://apt.dockerproject.org/repo debian-stretch InRelease
...
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)

configure Docker

This step could be done after Docker’s installed (and indeed, that’s usually when I do it because I forget that I should until I’ve got Docker installed and realize that my configuration is suboptimal), but doing it before ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules – if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid – no jessie-backports option).

If my kernel is 4.x+, I’m likely going to choose overlay2 (or if that errors out, the older overlay driver).

Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options – they’re just the two I personally use most often).

{
	"storage-driver": "overlay2"
}
configure boot parameters

I usually set a few boot parameters as well (in /etc/default/grub’s GRUB_CMDLINE_LINUX_DEFAULT option – run sudo update-grub after adding these, space-separated).

  • cgroup_enable=memory – enable “memory accounting” for containers (allows docker run --memory for setting hard memory limits on containers)
  • swapaccount=1 – enable “swap accounting” for containers (allows docker run --memory-swap for setting hard swap memory limits on containers)
  • systemd.legacy_systemd_cgroup_controller=yes – newer versions of systemd may disable the legacy cgroup interfaces Docker currently uses; this instructs systemd to keep those enabled (for more details, see systemd/systemd#4628, opencontainers/runc#1175, docker/docker#28109)
  • vsyscall=emulate – allow older binaries to run (debian:wheezy, etc.; see docker/docker#28705)

All together:

...
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
...
install Docker!

Finally, the time has come.

$ sudo apt-get install -V docker-engine
...

$ sudo docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:45:16 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:45:16 2016
 OS/Arch:      linux/amd64

$ sudo usermod -aG docker "$(id -un)"

(Reboot or logout/login to update your session to include docker group membership and thus no longer require sudo for using docker commands.)
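
Once the daemon is running, it doesn’t hurt to confirm that the storage driver configured earlier was actually picked up; a quick sanity check (the value shown will match whatever your daemon.json says):

$ sudo docker info | grep 'Storage Driver'
Storage Driver: overlay2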

Hope this is useful to someone! If nothing else, it’ll serve as a concise single-page reference for future-tianon. 😇

Sylvain Le Gall: Release of OASIS 0.4.8

7 December, 2016 - 06:17

I am happy to announce the release of OASIS v0.4.8.

OASIS is a tool to help OCaml developers integrate configure, build and install systems into their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.
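
For readers new to the tool: OASIS is driven by a declarative "_oasis" file at the root of the project, from which the standard entry points are generated. A minimal sketch of such a file (hypothetical project name and field values, not taken from this release):

OASISFormat: 0.4
Name:        myproject
Version:     0.1.0
Synopsis:    Example project described with OASIS
Authors:     Jane Doe
License:     MIT
BuildTools:  ocamlbuild

Executable "myexe"
  Path:   src
  MainIs: main.ml

Running "oasis setup" then generates setup.ml and the other build system entry points.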

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Fix various parsing problems present in OASIS 0.4.7 (extraneous whitespace, handling of ocamlbuild arguments...)
  • Enable creation of OASIS plugins and OASIS command line plugins.
  • Various fixes for the "omake" plugin.
  • Create 2 branches to pin OASIS with OPAM, making it easier for contributors to test the development version.

Thanks to Edwin Török, Yuri D. Lensky and Gerd Stolpmann for their contributions.

Jonas Meurer: On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

6 December, 2016 - 21:21

On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of CyberSecurity UPV Research Group and got CVE-2016-4484 assigned.

In this post I'll try to reflect a bit on:

What CVE-2016-4484 is all about

Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful

The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row.

Indeed the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined number of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() in the local initramfs script.

In general, this behaviour is considered a feature: if the root device hasn't shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin a way to debug and fix things herself.

Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:

It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not.

There are many "levels" of physical access. [...]

In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations.

And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.

While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu or modify/replace arbitrary hardware parts.

The required scenario to make the initramfs rescue shell an additional attack vector is indeed very special: locked down hardware, password protected BIOS and bootloader but still local keyboard (or serial console) access are required at least.

Hector and Ismael argue that the default should be changed for enhanced security:

[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.

For the reasons explained above, I tend to disagree with Hector's and Ismael's opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue, and others agree.

But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security by a locked down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well.

To make it clear: if one wants to lock down the boot process, bootloader and initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB as well as the boot parameter 'panic=1', which disables the spawning of a rescue shell in initramfs. A sketch of both follows below.
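
As a rough sketch of what such a lockdown could look like (the 'admin' user name and the choice of /etc/grub.d/40_custom are illustrative, not prescribed here):

# generate a PBKDF2 hash of the chosen GRUB password
grub-mkpasswd-pbkdf2

# in e.g. /etc/grub.d/40_custom:
#   set superusers="admin"
#   password_pbkdf2 admin grub.pbkdf2.sha512.10000.<hash-from-above>

# in /etc/default/grub, disable the initramfs rescue shell:
#   GRUB_CMDLINE_LINUX_DEFAULT="... panic=1"

# regenerate the GRUB configuration
update-grub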

But as mentioned, I don't agree that these would be sane defaults. The vast majority of Debian systems out there gain no security from a locked down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact in my opinion.

For the few setups which require the added security of a locked down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual.

After discussing the topic with initramfs-tools maintainers today, I finally decided to not change any defaults.

2. tries=n option ignored, local brute-force slightly cheaper

Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when they were processed during the initramfs stage. The attack vector here was that local brute-force attacks became a bit cheaper: instead of having to reboot after the maximum number of tries was reached, one could simply keep trying passwords.

Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup, this clearly is a real bug.

The reason for the bug was twofold:

  • First, the condition in setup_mapping() responsible for making the function fail when the maximum number of allowed attempts is reached was never met:

    setup_mapping()
    {
      [...]
      # Try to get a satisfactory password $crypttries times
      count=0
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          export CRYPTTAB_TRIED="$count"
          count=$(( $count + 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then
          message "cryptsetup: maximum number of tries exceeded for $crypttarget"
          return 1
      fi
      [...]
    }

    As one can see, the while loop runs only while $count -lt $crypttries, and $count is incremented once per iteration, so when the loop exits $count never exceeds $crypttries. Thus the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:

    setup_mapping()
    {
      [...]
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          [...]
          # decrease $count by 1, apparently last try was successful.
          count=$(( $count - 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
          [...]
      fi
      [...]
    }
    

    Christian Lamparter already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: the patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about maximum tries exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detecting exceeded maximum tries, I changed the condition back to the wrong original $count -gt $crypttries that again was never met. Apparently I didn't test the fix properly back then. I definitely should do better in future!

  • Second, back in December 2013, I added a cryptroot initramfs local-block script as suggested by Goswin von Brederlow in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block device stacks.

    In fact, the numberless options of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all we need to support setups like rootfs on top of LVM with two separate encrypted PVs or rootfs on top of LVM on top of dm-crypt on top of MD raid.

    The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function.

    The guys who discovered the bug suggested a simple and good solution: when the maximum number of attempts is detected (by the second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force options for local attackers - even rebooting after the maximum number of attempts would be faster. A minimal sketch of this follows after the list.
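
A minimal sketch of that mitigation, applied to the corrected condition from above (the message wording mirrors the snippet above; 60 seconds is the value suggested by the researchers):

    if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
        message "cryptsetup: maximum number of tries exceeded for $crypttarget"
        sleep 60   # make local brute-forcing slower than a reboot
        return 1
    fi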

About disclosure, wording and clickbaiting

I'm happy that Hector and Ismael brought up the topic and made their argument about the security impact of an initramfs rescue shell, even though I have to admit that I was rather astonished that they got a CVE assigned.

Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which put me in the loop in turn. Also Hector and Ismael were open and responsive when it came to discussing their proposed fixes.

But unfortunately the way they advertised their finding was not very helpful. They announced a speech about this topic at the DeepSec 2016 in Vienna with the headline Abusing LUKS to Hack the System.

Honestly, this headline is misleading - if not wrong - in several ways:

  • First, the whole issue is not about LUKS, nor is it about cryptsetup itself. It's about Debian's integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term hack the system suggests that an exploit to break into the system is revealed. This is not true. The device encryption is not endangered at all.
  • Third - as shown above - very special prerequisites need to be met in order to make the mere existence of a LUKS encrypted device the relevant factor in being able to spawn a rescue shell during initramfs.

Unfortunately, the way this issue was published led to even worse articles in the tech news press. Headlines like Major security hole found in Cryptsetup script for LUKS disk encryption or Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems suggest that a major security vulnerability was revealed, one that compromised the protection that cryptsetup and LUKS offer.

If these articles did anything at all, it was causing damage to the cryptsetup project, which is not affected by the whole issue at all.

After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That's a bit sad.


Mirco Bauer: Secure USB boot with Debian

6 December, 2016 - 20:28
Foreword

The moment you leave your laptop, say in a hotel room, you can no longer trust your system, as it could have been modified while you were away. Think you are safe because you have an encrypted disk? Well, if the boot partition is on the laptop itself, it can be manipulated and you will not notice, because the boot partition can't be encrypted. The BIOS needs to access the MBR and boot loader, and these load the Linux kernel, all unencrypted. There have been some reports lately that Linux cryptsetup is insecure because you can spawn a root shell by hitting the enter key for 70 seconds. This is not the real threat to your system, really. If someone has physical access to your hardware, he can get a root shell in less than a second by passing init=/bin/bash as a parameter to the Linux kernel in the boot loader, regardless of whether cryptsetup is used or not! The attacker can also use other ways, like booting a live system from CD/USB etc. The real insecurity here is the unencrypted boot partition, not some script that gets executed from it. So how do you prevent this physical access attack vector? Just keep reading this guide.

This guide explains how to install Debian securely on your laptop using an external USB boot disk. The disk inside the laptop should not contain your /boot partition, since that is an easy target for manipulation. An attacker could, for example, change the boot scripts inside the initrd image to capture the passphrase of your encrypted volume. With a USB boot partition, you can unplug the USB stick after the operating system has booted. Best practice here is to keep the USB stick together with your bunch of keys. That way you will disconnect your USB stick soon after the boot has finished, so you can put it back into your pocket.

Secure Hardware Assumptions

We have to assume here that the hardware you are using to download and verify the install media is safe to use. The same applies to the hardware where you are doing the fresh Debian install. That is, the hardware does not contain any malware in the form of code in EFI or other manipulation attempts that would influence the behavior of the operating system we are going to install.

Download Debian Install ISO

Feel free to use any Debian mirror and install flavor. For this guide I am using the download mirror in Germany and the DVD install flavor.

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/debian-8.6.0-amd64-DVD-1.iso
Verify hashsum of ISO file

To know whether the ISO file was downloaded without modification we have to check the hashsum of the file. The hashsum file can be found in the same directory as the ISO file on the download mirror. With hashsums, if a single bit differs in the file, the resulting SHA512 sum will be completely different.

Obtain the hashsum file using:

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/SHA512SUMS

Calculate a local hashsum from the downloaded ISO file:

sha512sum debian-8.6.0-amd64-DVD-1.iso

Now you need to compare the hashsum with the one in the SHA512SUMS file. Since the SHA512SUMS file contains the hashsums of all files that are in the same directory, you need to find the right one first. grep can do this for you:

grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS

Both commands executed after each other should show the following output:

$ sha512sum debian-8.6.0-amd64-DVD-1.iso
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso
$ grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso

As you can see the hashsum found in the SHA512SUMS file matches with the locally generated hashsum using the sha512sum command.

At this point we are not finished yet. These 2 matching hashsums just mean that whatever was on the download server matches what we have received and stored locally on disk. The ISO file and the SHA512SUMS file could still be modified versions!

And this is where GPG signatures chime in, covered in the next section.

Download GPG Signature File

GPG signature files usually have the .sign file name extension but could also be named .asc. Download the signature file using wget:

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/SHA512SUMS.sign
Obtain GPG Key of Signer

Letting gpg verify the signature will fail at this point as we don't have the public key of the signer:

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: Can't check signature: No public key

Downloading a key is trivial with gpg, but more importantly we need to verify that this key (DA87E80D6294BE9B) is trustworthy, as it could also be a key of the infamous man-in-the-middle.

Here you can find the GPG fingerprints of the official signing keys used by Debian. The ending of the "Key fingerprint" line should match the key ID we found in the signature file above.

gpg:                using RSA key DA87E80D6294BE9B

Key fingerprint = DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

DA87E80D6294BE9B matches Key fingerprint = DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B

To download and import this key run:

$ gpg --keyserver keyring.debian.org --recv-keys DA87E80D6294BE9B

Verify GPG Signature of Hashsum File

Ok, we are almost there. Now we can run the command which checks if the signature of the hashsum file we have, was not modified by anyone and matches what Debian has generated and signed.

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "Debian CD signing key <debian-cd@lists.debian.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

The important line in this output is the "Good signature from ..." one. It still shows a warning since we never certified (signed) that Debian key. This can be ignored at this point though.

Write ISO Image to Install Media

With a verified pristine ISO file we can finally start the install by writing it to a USB stick or blank DVD. So use your favorite tool to write the ISO to your install media and boot from it. I have used dd with a USB stick attached as /dev/sdb.

dd if=debian-8.6.0-amd64-DVD-1.iso of=/dev/sdb bs=1M oflag=sync
Install Debian on an Encrypted Volume with a USB Boot Partition

I am not explaining each step of the Debian install here. The Debian handbook is a good resource for covering each install step.

Follow the steps until the installer wants to partition your disk.

There you need to select the "Guided, use entire disk and set up encrypted LVM" option. After that select the built-in disk of your laptop, which usually is sda but double check this before you go ahead, as it will overwrite the data! The 137 GB disk in this case is the built-in disk and the 8 GB is the USB stick.

It makes no difference at this point if you select "All files in one partition" or "Separate /home partition". The USB boot partition can be selected at a later step.

Confirm that you want to overwrite your built-in disk, shown as sda. It will take a while, as it will write random data to the disk to ensure there is no unencrypted data left on the disk from previous installations, for example.

Now you need to enter the passphrase that will be used to protect the private key of the encrypted volume. Choose something long enough, like a sentence, and don't forget the passphrase, or else you can no longer access your data! Don't save the passphrase on any computer, smartphone or password manager. If you want to make a backup of your passphrase, then use a ball pen and paper and store the paper backup in a secure location.

The installer will show you a summary of the partitioning as shown above, but we need to make the change for the USB boot disk. At the moment it wants to put /boot on sda, which is the built-in disk, while our USB stick is sdb. Select /boot and hit enter; after that select "Delete this partition".

After /boot has been deleted, we can create /boot on the USB stick, shown as sdb. Select sdb and hit enter. It will ask if you want to create an empty partition table. Confirm that question with yes.

The partition summary shows sdb with no partitions on it. Select FREE SPACE and select "Create a new partition". Confirm the suggested partition size. Confirm the partition type to be "Primary".

It is time to tell the installer to use this new partition on the USB stick (sdb1) as /boot partition. Select "Mount point: /home" and in the next dialog select "/boot - static files of the boot loader" as shown below:

Confirm the made changes by selecting "Done setting up the partition".

The final partitioning should look now like the following screenshot:

If the partition summary looks good, go ahead with the installation by selecting "Finish partitioning and write changes to disk".

When the installer asks if it should force EFI, then select no, as EFI is not going to protect you.

Finish the installation as usual, select your preferred desktop environment etc.

GRUB Boot Loader

Confirm the dialog that wants to install GRUB to the master boot record. Here it is important to install it to the USB stick and not your built-in SATA/SSD disk! So select sdb (the USB stick) in the next dialog.

First Boot from USB

Once everything is installed, you can boot from your USB stick. As a simple test, you can unplug your USB stick; the boot should then fail with a "no operating system found" or similar error message from the BIOS. If it doesn't boot even though the USB stick is connected, then most likely your BIOS is not configured to boot from USB media. A blank screen with nothing happening usually also means the BIOS can't find a boot device. You need to change the boot settings in your BIOS. As the steps are very different for each BIOS, I can't provide a detailed step-by-step list here.

Usually you can enter the BIOS using F1, F2 or F12 after powering on your computer. In the BIOS there is a menu to configure the boot order. In that list it should show USB disk/storage in the first position. After you have made the changes, save and exit the BIOS. Now it will boot from your USB stick first, and GRUB will show up and proceed with the boot process till it asks for your passphrase to unlock the encrypted volume.

Unmount /boot partition after Boot

If you boot your laptop from the USB stick, we want to remove the stick after it has finished booting. This will prevent an attacker from making modifications to your USB stick. To avoid data loss, we should not simply unplug the USB stick, but unmount /boot first and then unplug the stick. The good news is that we can automate the unmounting, and you just need to unplug the stick after the laptop has finished booting to your login screen.

Just add this line to your /etc/rc.local file (before the final exit 0, if your rc.local ends with one):

umount /boot

After booting you can verify that it automatically unmounted /boot for you by running:

mount | grep /boot

If that command produces no output, then /boot is not mounted and you can safely unplug the USB stick.

Final Words

From time to time you will of course need to upgrade your Linux kernel, which lives on the /boot partition. This can still be done the regular way using apt-get upgrade, except that you need to mount /boot before the upgrade and unmount it again afterwards.
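
A sketch of that routine (run as root, with the USB stick plugged back in; the package commands are the same as for any regular upgrade):

mount /boot
apt-get update && apt-get upgrade
umount /boot
# now the USB stick can be unplugged and go back on the key ring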

Enjoy your secured laptop. Now you can leave it in a hotel room without the possibility of someone trying to obtain your passphrase by putting a key logger in your boot partition. All the attacker will see is a fully encrypted harddisk. If he tries to mess with your encrypted disk, you will notice, as the decryption will fail.

Disclaimer: there are still other attack vectors possible, but they are much harder to pull off. Your hardware or BIOS can still be modified. But not by holding down the enter key for 70 seconds or by booting a live system.

Shirish Agarwal: The Anti-Pollito squad – arrest and confession

6 December, 2016 - 00:01

Disclaimer – This is an attempt at humor and hence entirely fictional in nature. While some incidents depicted are true, the context and the story woven around them are by yours truly. None of the mascots of Debian were hurt during the writing of this blog post. I also disavow any responsibility for any hurt (real or imagined) to any past, current and future mascots. The attempt should not be looked upon as demeaning people who are accused of false crimes, tortured and have confessions eked out of them, as this happens quite a lot (in India for sure, but I guess it’s the same the world over in varying degrees). The idea is loosely inspired by Chocolate: Deep Dark Secrets (2005).

On a more positive note, let’s start –

It being a Sunday morning, I woke up late to find incessant knocking on the door; incidentally, mum was not at home. Opening the door, I found two official-looking gentlemen. They asked my name, checked my credentials, tortured me and arrested me for “Group conspiracy of Malicious Mischief in the second and third degrees”.

The torture was done by means of making me forcefully watch endless reruns of ‘Norbit’. While I do love Eddie Murphy, this was one of his movies he could have done without. I guess for many people watching it once was torture enough. I *think* it was nominated for Razzie awards; dunno if it won or not, but this is beside the point.

Unlike the 20 years it takes for a typical case to reach its conclusion even in the smallest court in India, thanks to the endless torture I was made to confess and was given summary judgement. The judgement was/is as follows –

a. Do 100 hours of Community service in Debian in 2017. This could be done via blog posts, raising tickets in the Debian BTS or in whichever way I could be helpful to Debian.

b. Write a confessional with some photographic evidence sharing/detailing some of the other members who were part of the conspiracy in view of the reduced sentence.

So now, I have been forced to write this confession –

As you all know, I won a bursary this year for debconf16. What is not known by most people is that I also got an innocuous-looking e-mail titled ‘Pollito for DPL’. I can’t name all the names, as the investigation is still ongoing about how far-reaching the conspiracy is. The email was purportedly written by members of the ‘cabal within the cabal’ which exists in Debian. I looked at the email header to see if it was genuine and whether I could trace the origin, but was left none the wiser, as obviously these people are far too technically advanced to fall for simple tricks like this –

Anyways, secretly happy that I have been invited to be part of these elites, I did the visa thing, packed my bags and came to Debconf16.

At this juncture, I had no idea whether it was real or I had imagined the whole thing. Then, to my surprise, I saw this –

Just like the Illuminati, the conspiracy was in plain sight for all those who knew about it. Most people were thinking of it as a joke, but those like me who had got the e-mails knew better. I knew that the thing was real; now I only needed to bide my time, and I knew that the opportunity would present itself.

And a few days later, sure enough, there was a trip planned for ‘Table Mountain, Cape Town’. A few people planned to hike up the mountain, while a few chose to take the cable car to the top.

Quite a few people came along with us and bought tickets for the cable car ride to the mountain and back.

Incidentally, I was wondering whether the South African Govt. was getting its tax or not. If you look at the ticket, there is just a bar-code. In India as well as the U.S. there is a TIN – Tax Identification Number –

A few links to share what it is all about. While these should be on all invoices, one needs to specially check when buying high-value items. In India, as shared in the article, the awareness and knowledge leave a bit to be desired. While I’m drifting from the incident, it would be nice if somebody from SA could share how things work there.

Moving on, we boarded the cable car. It was quite a spacious cable car, with I guess around 30-40 people or more who were able to see everything, along with the operator.

It was a pleasant cacophony of almost two dozen or more nationalities in this 360-degree moving chamber. I was a little worried though, as it essentially is a bucket, and there is always the possibility that a severe wind could damage it. Later somebody did share that some frightful incidents had occurred not too long ago on the cable car.

It took about 20-25 odd minutes to get to the top of table mountain and we were presented with views such as below –

The picture I am sharing is actually when we were going down as all the pictures of going up via the cable car were over-exposed. Also, it was pretty crowded on the way up then on the way down so handling the mobile camera was not so comfortable.

Once we reached up, the wind was blowing at incredible speeds. Even with my jacket and everything I was feeling cold. Most of the group around 10-12 people looked around if we could find a place to have some refreshments and get some of the energy in the body. So we all ventured to a place and placed our orders –

I was introduced to Irish Coffee few years back and have had some incredible Irish Coffees in Pune and elsewhere. I do hope to be able to make Irish Coffee at home if and when I have my own house. This is hotter than brandy and is perfect if you are suffering from cold etc if done right, really needs some skills. This is the only drink which I wanted in SA which I never got right . As South Africa was freezing for me, this would have been the perfect antidote but the one there as well as elsewhere were all …bleh.

What was interesting though, was the coffee caller besides it. It looked like a simple circuit mounted on a PCB board with lights, vibrations and RFID and it worked exactly like that. I am guessing as and when the order is ready, there is an interrupt signal sent via radio waves which causes the buzzer to light and vibrate. Here’s the back panel if somebody wants to take inspiration and try it as a fun project –

Once we were somewhat strengthened by the snacks, chai, coffee etc., we made our move to see the mountain. The only way to describe it is that it’s similar to Raigad Fort, but the plateau seemed bigger. The Wikipedia page on Table Mountain attempts to describe it, but I guess it’s more clearly envisioned through one of the pictures shared therein.

I have to say, Table Mountain is beautiful and haunting, with scenes like these –

There is something there which pulls you, which reminds you of a long-lost past. I could have simply sat there for hours, but as I was part of the group I had to keep up with them. Not that I minded.

The moment I saw this, I was transported to memories of the Himalayas from about 20-odd years ago. In that previous life, I had the opportunity to be with some of the most beautiful women, and to be in the most happening of places, the Himalayas. Years before, I had shared some of the experiences I had in the Himalayas; I discontinued it as I didn’t have a decent camera at that point in time. While I don’t want to digress, I would challenge anybody to experience the Himalayas and then compare. It is just something inexplicable. The beauty and the rawness that the Himalayas show make you feel insignificant and yet part of the whole cosmos. What Paulo Coelho expressed in The Valkyries is something that can be felt in the Himalayas. Leh, Ladakh, Himachal, Garhwal, Kumaon. The list could go on forever, as there are so many places, each more beautiful than the last. Most places are also extremely backpacker-friendly, so if you ask around you can get some awesome deals if you want to spend more than a few days in one place.

Moving on: while making small talk, @olasd, or Nicolas Dandrimont, the headmaster of our trip, chatted with each of us and eked out of all of us the admission that we wanted Pollito as our DPL (Debian Project Leader) for 2017. A few pictures are shared below as supporting evidence –

While I do not know who was further up the coup than Nicolas, the idea was this –

If the current DPL steps down, we would take all and any necessary actions to make Pollito our DPL.

This has been taken from Pollito’s adventure

Being a responsible journalist, I also enquired about Pollito’s true history, as the story would not have been complete without it. This is the e-mail I got from Gunnar Wolf, a friend and DD from Mexico –

Turns out, Valessio has just spent a week staying at my house. And
in any case, if somebody in Debian knows about Pollito’s
childhood… That is me.

Pollito came to our lives when we went to Congreso Internacional de
Software Libre (CISOL) in Zacatecas city. I was strolling around the
very beautiful city with my wife Regina and our friend Alejandro
Miranda, and at a shop at either Ramón López Velarde or Vicente
Guerrero, we found a flock of pollitos.

http://www.openstreetmap.org/#map=17/22.77111/-102.57145

Even if this was comparable to a slave market, we bought one from
them, and adopted it as our own.

Back then, we were a young couple… Well, we were not that young
anymore. I mean, we didn’t have children. Anyway, we took Pollito with
us on several road trips, such as the only time I have crossed an
international border driving: We went to Encuentro Centroamericano de
Software Libre at Guatemala city in 2012 (again with Alejandro), and
you can see several Pollito pics at:

http://gwolf.org/album/road-trip-ecsl-2012-guatemala-0

Pollito likes travelling. Of course, when we were to Nicaragua for
DebConf, Pollito tagged along. It was his first flight as a passenger
(we never asked about his previous life in slavery; remember, Pollito
trust no one).

Pollito felt much welcome with the DebConf crowd. Of course, as
Pollito is a free spirit, we never even thought about forcing him to
come back with us. Pollito went to Switzerland, and we agreed to meet
again every year or two. It’s always nice to have a chat with him.

Hugs!

So with that backdrop, I would urge fellow Debianites to take up the slogans –

LONG LIVE THE DPL !

LONG LIVE POLLITO !

LONG LIVE POLLITO THE DPL !

The first step to making Pollito the DPL is to ensure he has a @debian.org address (pollito@debian.org).

We also need him to be made a DD, because only then can he become DPL.

In solidarity and in peace


Filed under: Miscellenous Tagged: #caller, #confession, #Debconf16, #debian, #Fiction, #history, #Pollito, #Pollito as DPL, #Table Mountain, Cabal, memories, south africa

Norbert Preining: Debian/TeX Live 2016.20161130-1

5 December, 2016 - 21:58

As we are moving closer to the Debian release freeze, I am shipping out a new set of packages. Nothing spectacular here, just the regular updates and a security fix that was only reported internally. Add sugar and a few minor bug fixes.

I have been silent for quite some time: busy at my new job, busy with my little monster, writing papers, caring for visitors, living. I have quite a lot of things I want to write about, but not enough time, so this one will be very short.

Enjoy.

New packages

awesomebox, baskervillef, forest-quickstart, gofonts, iscram, karnaugh-map, tikz-optics, tikzpeople, unicode-bidi.

Updated packages

acmart, algorithms, aomart, apa, apa6, appendix, apxproof, arabluatex, asymptote, background, bangorexam, beamer, beebe, biblatex-gb7714-2015, biblatex-mla, biblatex-morenames, bibtexperllibs, bidi, bookcover, bxjalipsum, bxjscls, c90, cals, cell, cm, cmap, cmextra, context, cooking-units, ctex, cyrillic, dirtree, ekaia, enotez, errata, euler, exercises, fira, fonts-churchslavonic, formation-latex-ul, german, glossaries, graphics, handout, hustthesis, hyphen-base, ipaex, japanese, jfontmaps, kpathsea, l3build, l3experimental, l3kernel, l3packages, latex2e-help-texinfo-fr, layouts, listofitems, lshort-german, manfnt, mathastext, mcf2graph, media9, mflogo, ms, multirow, newpx, newtx, nlctdoc, notes, patch, pdfscreen, phonenumbers, platex, ptex, quran, readarray, reledmac, shapes, showexpl, siunitx, talk, tcolorbox, tetex, tex4ht, texlive-en, texlive-scripts, texworks, tikz-dependency, toptesi, tpslifonts, tracklang, tugboat, tugboat-plain, units, updmap-map, uplatex, uspace, wadalab, xecjk, xellipsis, xepersian, xint.

Reproducible builds folks: Reproducible Builds: week 84 in Stretch cycle

5 December, 2016 - 19:31

What happened in the Reproducible Builds effort between Sunday November 27 and Saturday December 3 2016:

Reproducible work in other projects

Media coverage, etc.
  • There was a Reproducible Builds hackathon in Boston with contributions from Dafydd, Valerie, Clint, Harlen, Anders, Robbie and Ben. (See the "Bugs filed" section below for the results).

  • Distrowatch mentioned Webconverger's reproducible status.

Bugs filed

Chris Lamb:

Clint Adams:

Dafydd Harries:

Daniel Shahaf:

Reiner Herrmann:

Valerie R Young:

Reviews of unreproducible packages

15 package reviews have been added, 4 have been updated and 26 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (5)
  • Lucas Nussbaum (8)
  • Santiago Vila (1)
diffoscope development

It is available now in Debian, Arch Linux and on PyPI.

strip-nondeterminism development
  • At the Reproducible Builds Boston hackathon Anders Kaseorg filed #846895 treat .par files as Zip archives, including a patch which was merged into master.
reprotest development

tests.reproducible-builds.org
  • Holger made a couple of changes:

    • Group all "done" and all "open" usertagged bugs together in the bugs graphs and move the "done" bugs to the bottom of these graphs.
    • Update list of packages installed on .debian.org machines.
    • Made the maintenance jobs run every 2h instead of 3h.
    • Various bug fixes and minor improvements.
  • After thorough review by Mattia, some patches by Valerie were merged in preparation for the switch from SQLite to PostgreSQL, most notably a conversion to the SQLAlchemy expression language.

  • Holger gave a talk at Profitbricks about how Debian is using 168 cores, 503 GB RAM and 5 TB storage to make jenkins.debian.net and tests.reproducible-builds.org run. Many thanks to Profitbricks for supporting jenkins.debian.net since August 2012!

  • Holger created a Jenkins job to build reprotest from git master branch.

  • Finally, the Jenkins Naginator plugin was installed to retry git cloning in case of Alioth/network failures; this will benefit all jobs using Git on jenkins.debian.net.

Misc.

This week's edition was written by Chris Lamb, Valerie Young, Vagrant Cascadian, Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Markus Koschany: My Free Software Activities in November 2016

5 December, 2016 - 07:48

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Android
  • Chris Lamb was so kind as to send in a patch for apktool to make the build reproducible (#845475). Although this was not enough to fix the issue, it set me on the right path to eventually resolving bug #845475.
Debian Games
  • I packaged a couple of new upstream releases for extremetuxracer, fifechan, fife, unknown-horizons, freeciv, atanks and armagetronad. Most notably, fifechan was accepted by the FTP team, which allowed me to package new versions of fife and unknown-horizons, which are both back in testing again. I expect that upstream will make their final release sometime in December. Atanks was orphaned a while ago, and since upstream is still active and I kinda like the game, I decided to adopt it. I also uploaded a backport of Freeciv 2.5.6 to jessie-backports.
  • In November we received a bunch of RC bug reports again because, hey, it is almost time for the Freeze, let’s break some packages. Thus I spent some time fixing freeorion (#843132), pokerth (#843078), simutrans (#828545), freeciv (#844198) and warzone2100 (#844870).
  • I also updated the debian-games blend (we are at version 1.6 now) and made some smaller adjustments. The most important change was adding a new binary package, games-all, that installs… well, all! I know this will make at least one person on this planet happy. Actually, I was kind of forced into adding it because blends-dev automatically creates it as a requirement for choosing blends with the Debian Installer. But don’t be afraid: games-all only recommends games-finest; the rest is suggested.
  • Last but not least, I worked on performous and could close a wishlist bug report (#425898). The submitter had asked us to suggest some free song packages for this karaoke game.
Debian Java
  • I sponsored uncommons-watchmaker for Kai-Chung and also reviewed libnative-platform-java and granted upload rights to him.
  • I packaged new upstream releases of lombok-patcher, electric, undertow, sweethome3d and sweethome3d-furniture-editor.
  • I spent quite some time on reviewing (especially the copyright review took most of the time) and improving the packaging for tycho (#816604) which is a precondition for packaging the latest upstream release of Eclipse, a popular Java IDE. Luca Vercelli has been working on it for the last couple of months and he did most of the initial packaging. Unfortunately I was only able to upload the package last week which means that the chances for updating Eclipse for Stretch are slim.
  • Due to time constraints I could not finish the Netbeans update in time which I had started back in October. This is on my priority list for December now.
  • Several security issues were reported against Tomcat{6,7,8}. I helped with reviewing some of the patches that Emmanuel prepared for Jessie and worked on fixing the same bugs in Wheezy.
Debian LTS

This was my ninth month as a paid contributor and I have been paid to work 11 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 14 November until 21 November I was in charge of our LTS frontdesk. I triaged bugs in teeworlds, libdbd-mysql-perl, bash, libxml2, tiff, firefox-esr, drupal7, moin, libgc, w3m and sniffit.
  • DLA-715-1. Issued a security update for drupal7 fixing 2 CVEs.
  • DLA-717-1. Issued a security update for moin fixing 2 CVEs.
  • DLA-728-1. Issued a security update for tomcat6 fixing 8 CVEs. (Debian bug #845385 was assigned a CVE later.)
  • DLA-729-1. Issued a security update for tomcat7 fixing 8 CVEs. (Debian bug #845385 was assigned a CVE later.)
  • The patches and the subsequent testing for CVE-2016-0762 and CVE-2016-6816 in particular required most of the time.
Non-maintainer uploads
  • I uploaded an NMU for angband to fix #837394. The patch was kindly prepared by Adrian Bunk.

It is that time of the year again. See you next year for another report.

Ben Hutchings: Linux Kernel Summit 2016, part 2

5 December, 2016 - 07:01

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions. This is the second and last part of my notes; part 1 is here.

Updated: I corrected the description of which Intel processors support SMEP.

Kernel Hardening

Kees Cook presented the ongoing work on upstream kernel hardening, also known as the Kernel Self-Protection Project or KSPP.

GCC plugins

The kernel build system can now build and use GCC plugins to implement some protections. This requires gcc 4.5 and the plugin headers installed. It has been tested on x86, arm, and arm64. It is disabled by CONFIG_COMPILE_TEST because CI systems using allmodconfig/allyesconfig probably don't have those installed, but this ought to be changed at some point.

There was a question as to how plugin headers should be installed for cross-compilers or custom compilers, but I didn't hear a clear answer to this. Kees has been prodding distribution gcc maintainers to package them. Mark Brown mentioned the Linaro toolchain being widely used; Kees has not talked to its maintainers yet.

Probabilistic protections

These protections are based on hidden state that an attacker will need to discover in order to make an effective attack; they reduce the probability of success but don't prevent it entirely.

Kernel address space layout randomisation (KASLR) has now been implemented on x86, arm64, and mips for the kernel image. (Debian enables this.) However there are still lots of information leaks that defeat this. This could theoretically be improved by relocating different sections or smaller parts of the kernel independently, but this requires re-linking at boot. Aside from software information leaks, the branch target predictor on (common implementations of) x86 provides a side channel to find addresses of branches in the kernel.

Page and heap allocation, etc., is still quite predictable.

struct randomisation (RANDSTRUCT plugin from grsecurity) reorders members in (a) structures containing only function pointers (b) explicitly marked structures. This makes it very hard to attack custom kernels where the kernel image is not readable. But even for distribution kernels, it increases the maintenance burden for attackers.

Deterministic protections

These protections block a class of attacks completely.

Read-only protection of kernel memory is either mandatory or enabled by default on x86, arm, and arm64. (Debian enables this.)

Protections against execution of user memory in kernel mode are now implemented in hardware on x86 (SMEP, in Intel processors from Broadwell onward) and on arm64 (PXN, from ARMv8.1). But Broadwell is not available in high-end server variants and ARMv8.1 is not yet implemented at all! s390 always had this protection.

It may be possible to 'emulate' this using other hardware protections. arm (v7) and arm64 now have this, but x86 doesn't. Linus doesn't like the overhead of previously proposed implementations for x86. It is possible to do this using PCID (in Intel processors from Sandy Bridge onward), which has already been done in PaX - and this should be fast enough.

Virtually mapped stacks protect against stack overflow attacks. They were implemented as an option for x86 only in 4.9. (Debian enables this.)

Copies to or from user memory sometimes use a user-controlled size that is not properly bounded. Hardened usercopy, implemented as an option in 4.8 for many architectures, protects against this. (Debian enables this.)

Memory wiping (zero on free) protects against some information leaks and use-after-free bugs. It was already implemented as a debug feature with a non-zero poison value, but at some performance cost. Zeroing can be cheaper since it allows the allocator to skip zeroing on reallocation. That was implemented as an option in 4.6. (Debian does not currently enable this but we might do so if the performance cost is low enough.)

Constification (with the CONSTIFY gcc plugin) reduces the amount of static data that can be written to. As with RANDSTRUCT, this is applied to function pointer tables and explicitly marked structures. Instances of some types need to be modified very occasionally. In PaX/Grsecurity this is done with pax_{open,close}_kernel() which globally disable write protection temporarily. It would be preferable to override write protection in a more directed way, so that the permission to write doesn't leak into any other code that interrupts this process. The feature is not in mainline yet.

Atomic wrap detection protects against reference-counting bugs which can result in a use-after-free. Overflow and underflow are trapped and result in an 'oops'. There is no measurable performance impact. It would be applied to all operations on the atomic_t type, but there needs to be an opt-out for atomics that are not ref-counters - probably by adding an atomic_wrap_t type for them. This has been implemented for x86, arm, and arm64 but is not in mainline yet.

Kernel Freezer Hell

For the second year running, Jiri Kosina raised the problem of 'freezing' kthreads (kernel-mode threads) in preparation for system suspend (suspend to RAM, or hibernation). What are the semantics? What invariants should be met when a kthread gets frozen? They are not defined anywhere.

Most freezable threads don't actually need to be quiesced. Also, many non-freezable threads are pointlessly calling try_to_freeze() (probably due to copying code without understanding it).

At a system level, what we actually need is I/O and filesystem consistency. This should be achieved by:

  • Telling mounted filesystems to freeze. They can quiesce any kthreads they created.
  • Device drivers quiescing any kthreads they created, from their PM suspend implementation.

The system suspend code should not need to directly freeze threads.

Kernel Documentation

Jon Corbet and Mauro Carvalho presented the recent work on kernel documentation.

The kernel's documentation system was a house of cards involving DocBook and a lot of custom scripting. Both the DocBook templates and plain text files are gradually being converted to reStructuredText format, processed by Sphinx. However, manual page generation is currently 'broken' for documents processed by Sphinx.

There are about 150 files at the top level of the documentation tree, which are gradually being moved into subdirectories. The most popular files, those likely to be referenced in external documentation, have been replaced by placeholders.

Sphinx is highly extensible and this has been used to integrate kernel-doc. It would be possible to add extensions that parse and include the MAINTAINERS file and Documentation/ABI/ files, which have their own formats, but the documentation maintainers would prefer not to add extensions that can't be pushed to Sphinx upstream.

There is lots of obsolete documentation, and patches to remove those would be welcome.

Linus objected to PDF files recently added under the Documentation/media directory - they are not the source format so should not be there! They should be generated from the corresponding SVG or image files at build time.

Issues around Tracepoints

Steve Rostedt and Shuah Khan led a discussion about tracepoints. Currently each maintainer decides which tracepoints to create. The cost of each added tracepoint is minimal, but the cost of very many tracepoints is more substantial. So there is such a thing as too many tracepoints, and we need a policy to decide when they are justified. They advised not to create tracepoints just in case, since kprobes can be used for tracing (almost) anywhere dynamically.

There was some support for requiring documentation of each new tracepoint. That may dissuade introduction of obscure tracepoints, but also creates a higher expectation of stability.

Tools such as bcc and IOVisor are now being created that depend on specific tracepoints or even function names (through kprobes). Should we care about breaking them?

Linus said that we should strive to be polite to developers and users relying on tracepoints, but if it's too painful to maintain a tracepoint then we should go ahead and change it. Where the end users of the tool are themselves developers it's more reasonable to expect them to upgrade the tool and we should care less about changing it. In some cases tracepoints could provide dummy data for compatibility (as is done in some places in procfs).

Niels Thykier: Piuparts integration in britney

4 December, 2016 - 18:06

As of today, britney now fetches reports from piuparts.debian.org and uses them as part of her evaluation for package migration. As with her RC bug check, we are only preventing (known) regressions from migrating.

The messages (subject to change) look something like:

  • Piuparts tested OK
  • Rejected due to piuparts regression
  • Ignoring piuparts failure (Not a regression)
  • Cannot be tested by piuparts (not a blocker)

If you want to do machine parsing of the Britney excuses, we also provide an excuses.yaml. In there, you are looking for “excuses[X].policy_info.piuparts.test-results”, which will be one of the values below (a minimal parsing sketch follows the list):

  • pass
  • regression
  • failed
  • cannot-be-tested
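
For illustration, here is a minimal Python sketch of such machine parsing. Beyond the "policy_info.piuparts.test-results" field named above, the file layout it assumes (a top-level "sources" list whose items carry an "item-name" key) is my assumption, not something stated in this post.

# Hypothetical sketch: list source packages whose piuparts result is a
# regression. The "sources" and "item-name" keys are assumptions.
import yaml  # PyYAML

with open("excuses.yaml") as f:
    excuses = yaml.safe_load(f)

for item in excuses.get("sources", []):
    # Drill down to the piuparts policy verdict for this excuse
    piuparts = item.get("policy_info", {}).get("piuparts", {})
    if piuparts.get("test-results") == "regression":
        print(item.get("item-name"), "is blocked by a piuparts regression")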

Enjoy.

Filed under: Debian, Release-Team

Jo Shields: A quick introduction to Flatpak

4 December, 2016 - 17:44

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.
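
To give a flavour of the format, here is a minimal hypothetical manifest. The app id, module, version numbers, URL and checksum are all illustrative inventions; only the choice of a freedesktop runtime comes from this post, and the exact runtime/sdk ids may differ from what MonoDevelop actually uses. (As far as I know, flatpak-builder tolerates C-style comments in its JSON manifests.)

{
    /* Hypothetical ids, versions, URL and checksum throughout */
    "app-id": "com.example.HelloWorld",
    "runtime": "org.freedesktop.Platform",
    "runtime-version": "1.4",
    "sdk": "org.freedesktop.Sdk",
    "command": "hello",
    "modules": [
        {
            "name": "hello",
            "sources": [
                {
                    "type": "archive",
                    "url": "https://example.org/hello-1.0.tar.gz",
                    "sha256": "0000000000000000000000000000000000000000000000000000000000000000"
                }
            ]
        }
    ]
}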

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is, as of the upcoming 0.8.0 release of Flatpak, from a clean install of the flatpak package to having a working MonoDevelop is a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref 

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There’s some lingering experience issues due the sandbox which are on my radar. “Run on external console” doesn’t work, for example, or “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs.  I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Ben Hutchings: Linux Kernel Summit 2016, part 1

4 December, 2016 - 06:54

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions.

Stable process

Jiri Kosina, in his role as a distribution maintainer, sees too many unsuitable patches being backported - e.g. a fix for a bug that wasn't present or a change that depends on an earlier semantic change so that when cherry-picked it still compiles but isn't quite right. He thinks the current review process is insufficient to catch them.

As an example, a recent fix for a minor information leak (CVE-2016-9178) depended on an earlier change to page fault handling. When backported by itself, it introduced a much more serious security flaw (CVE-2016-9644). This could have been caught very quickly by a system call fuzzer.

Possible solution: require a 'Fixes' field, not just 'Cc: stable'. That deals with the 'bug wasn't present' case, but not with semantic changes.

There was some disagreement whether 'Fixes' without 'Cc: stable' should be sufficient for inclusion in stable. Ted Ts'o said he specifically does that in some cases where he thinks backporting is risky. Greg Kroah-Hartman said he takes it as a weaker hint for inclusion in stable.

Is it a good idea to keep 'Cc: stable', given the risk of breaking an embargo? On balance, yes: it has only happened once.

Sometimes it's hard to know exactly how/when the bug was introduced. Linus doesn't want people to guess and add incorrect 'Fixes' fields. There is still the option to give some explanation and hints for stable maintainers in the commit message. Ideally the upstream developer should provide a test case for the bug.

Is Linus happy?

Linus complained about minor fixes coming later in the release cycle. After rc2, all fixes should either be for new code introduced in the current release cycle or for important bugs. However, new, production-ready drivers without new infrastructure dependencies are welcome at almost any point in the release cycle.

He was unhappy about some big changes in RDMA, but I'm not sure what those were.

Bugzilla and bug tracking

Laura Abbott started a discussion of bugzilla.kernel.org, talking about subsystems where maintainers ignore it and any responses come from random people giving bad advice. This is a terrible experience for users. Several maintainers are actively opposed to using it, and the email bridge no longer works (or at least not well). She no longer recommends that Fedora bug reporters submit reports there.

Are there any alternatives? None were proposed.

Someone asked whether Bugzilla could tell reporters to use email for certain products/components instead of continuing with the bug entry process.

Konstantin Ryabitsev talked about the difficulty of upgrading a customised instance of Bugzilla. Much customisation requires patches which don't apply to the next version (maybe due to limitations of the extension mechanism?). He has had to drop many such patches.

Email is hard to track when a bug is handed over from one maintainer to another. Email archives are very unreliable. Linus: I'll take Bugzilla over mail-archive.

No-one is currently keeping track of bugs across the kernel and making sure they get addressed by an appropriate maintainer. It's (at least) a full-time job but no individual company has business case for paying for this. Konstantin suggested (I think) that CII might pay for this.

There was some discussion of what information should be included in a bug report. The 'Cut here' line in oops messages was said to be a mistake, because there are often relevant messages before it. The model of computer is often important. Beyond that, there was not much interest in the automated information gathering that distributions do. Distribution maintainers should curate bugs before forwarding upstream.

There was a request for custom fields per component in Bugzilla. Konstantin says this is doable (possibly after upgrade to version 5); it doesn't require patches.

The future of the Kernel Summit

The kernel community is growing, and the invitation list for the core day is too small to include all the right people for technical subjects. For 2017, the core half-day will have an even smaller invitation list, only ~30 subsystem maintainers that Linus pulls from. The entire technical track will be open (I think).

Kernel Summit 2017 and some mini-summits will be held in Prague alongside Open Source Summit Europe (formerly LinuxCon Europe) and Embedded Linux Conference Europe. There were some complaints that LinuxCon is not that interesting to kernel developers, compared to Linux Plumbers Conference (which followed this year's Kernel Summit). However, the Linux Foundation is apparently soliciting more hardcore technical sessions.

Kernel Summit and Linux Plumbers Conference are quite small, and it's apparently hard to find venues for them in cities that also have major airports. It might be more practical to co-locate them both with Open Source Summit in future.

time_t and 2038

On 32-bit architectures the kernel's representation of real time (time_t etc.) will break in early 2038. Fixing this in a backward-compatible way is a complex problem.

Arnd Bergmann presented the current status of this process. There has not yet been much progress in mainline, but more fixes have been prepared. The changes to struct inode and to input events are proving to be particularly problematic. There is a need to add new system calls, and he intends to add these for all (32-bit) architectures at once.

Copyright retention

James Bottomley talked about how developers can retain copyright on their contributions. It's hard to renegotiate within an existing employment; much easier to do this when preparing to sign a new contract.

Some employers expect you to fill in a document disclosing 'prior inventions' you have worked on. Depending on how it's worded, this may require the employer to negotiate with you again whenever they want you to work on that same software.

It's much easier for contractors to retain copyright on their work - customers expect to have a custom agreement and don't expect to get copyright on contractor's software.

Vincent Bernat: Build-time dependency patching for Android

4 December, 2016 - 05:20

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.

buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.+'
        classpath 'com.android.tools.build:transform-api:1.5.+'
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}

Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context§

This section adds some context to the example. Feel free to skip it.

Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime, which brings an up-to-date web engine, even for older versions of Android1.

Recently, a security vulnerability was spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:

Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.

The default behavior is specific to Crosswalk webview: the Android builtin one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore2.

Dashkiosk comes with an option to ignore TLS errors3. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior4.

Simple method replacement§

Let’s replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:

// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}
Transform registration§

Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:

import com.android.build.api.transform.Context
import com.android.build.api.transform.QualifiedContent
import com.android.build.api.transform.Transform
import com.android.build.api.transform.TransformException
import com.android.build.api.transform.TransformInput
import com.android.build.api.transform.TransformOutputProvider
import org.gradle.api.logging.Logger

class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    @Override
    String getName() {
        return "PatchXWalk"
    }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    @Override
    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    @Override
    boolean isIncremental() {
        return true
    }

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}

// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))

The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources.

The getScopes() method should return a set of scopes for the transform. In our case, we are only interested by the external libraries. It’s also possible to transform our own classes.

The isIncremental() method returns true because we support incremental builds.

The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We haven’t implemented this method yet; as written, it causes the removal of all external dependencies from the application.

Noop transform§

To keep all external dependencies unmodified, we must copy them:

@Override
void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName, 
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR);
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
                FileUtils.delete(dest)
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}

We also need two additional imports:

import com.android.build.api.transform.Status
import org.apache.commons.io.FileUtils

Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling incremental builds: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching§

When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):

if ("${src}" ==~ ".*/org.xwalk/xwalk_core.*/classes.jar") {
    def pool = new ClassPool()
    pool.insertClassPath("${src}")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') // ❸

    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) // ❹

    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error) {
    return false;
}
""", ctc)) // ❺

    def sslUtilBytecode = ctc.toBytecode() // ❻

    // Write back the JAR file
    // …
} else {
    logger.info("Copy ${src}")
    FileUtils.copyFile(src, dest)
}

We also need the following additional imports to use Javassist:

import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod

Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❸). We locate the appropriate method and delete it (❹). Then, we add our custom method using the same name (❺). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❻.

The remaining step is to rebuild the JAR file:

def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))

// ❼
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
        s.close()
    }
}

// ❽
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)

output.close()

We need the following additional imports:

import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils

There are two steps. In ❼, all classes are copied to the new JAR, except the SslUtil class. In ❽, the modified bytecode for SslUtil is added to the JAR.

That’s all! You can view the complete example on GitHub.

More complex method replacement§

In the above example, the new method doesn’t use any external dependency. Let’s suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:

import org.chromium.net.NetError;
import android.net.http.SslCertificate;
import android.net.http.SslError;

// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
        default:
            break;
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}

The major difference with the previous example is that we need to import some additional classes.

Android SDK import§

The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:

androidJar = "${android.getSdkDirectory().getAbsolutePath()}/platforms/" +
             "${android.getCompileSdkVersion()}/android.jar"

We need to load it before adding the new method into SslUtil class:

def pool = new ClassPool()
pool.insertClassPath(androidJar)
pool.insertClassPath("${src}")
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
// …
External dependency import§

We must also import org.chromium.net.NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.

def pool = new ClassPool()
pool.insertClassPath(androidJar)
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
pool.importPackage('org.chromium.net.NetError');
ctc.addMethod(CtNewMethod.make("…"))
// Then, rebuild the JAR...

Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on. 

  2. I hope to have this fixed in later versions. 

  3. This may seem harmful, and you are right. However, if you have an internal CA, it is currently not possible to provide your own trust store to a webview. Moreover, the system trust store is not used either. You may also want to use TLS for authentication only, with client certificates, a feature supported by Dashkiosk. 

  4. Crosswalk being an open-source project, an alternative would have been to patch the Crosswalk source code and recompile it. However, Crosswalk embeds Chromium, and recompiling the whole thing consumes a lot of resources. 

Ross Gammon: My Open Source Contributions June – November 2016

3 December, 2016 - 18:52

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian
  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared a new node-chroma-js package, but this is unfortunately blocked by several out-of-date and missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.
Ubuntu
  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday: I had had no time to chase endorsements from previous sponsors, and the application did not make clear that I was not actually applying for upload permission yet. No matter; I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship, if anyone reading this can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible, but before I could chase up what to do about ansible-fireball no longer being part of the Ansible package, someone else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC, due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik for the initial Numix Blue theme work (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu, which had a bug that was regularly reported during ISO testing: the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.
Other
  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux, so I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted, using the HTTrack Website Copier. Now I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll-based theme, as the old theming was pretty complex. I pushed the code to my Github repository for safe keeping.
Plan for December

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

Ubuntu
  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.
Other
  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.
