Planet Debian


Lucy Wayland: Diversity and Inclusion

4 October, 2016 - 02:55

So this morning, along with a few other members of staff, I was filmed for a Diversity and Inclusion video for Ada Lovelace Day at work. It was a very positive experience, and I was wearing my rainbow chain mail necklace made by the wonderful Rosemary Warner, and a safety pin, which I had to explain the meaning of to the two peeps doing the filming. We all read the same script, and they are going to paste it together with each of us saying one sentence at a time. The script was not just about gender; it also mentioned age, skills, sexual orientation and physical ability among other things (I cannot remember the entire list). I was very happy and proud to take part.

Markus Koschany: My Free Software Activities in September 2016

4 October, 2016 - 01:12

Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android

Debian Games
  • I packaged a new upstream release of hyperrogue, a roguelike game set in a non-Euclidean world, fixing one RC bug (#811991). I uploaded two more revisions later that addressed build failures on arm64 and hppa.
  • I fixed more RC bugs (build failures with GCC-6) in torus-trooper (#835712) and fife (#811858).
  • I packaged new upstream releases of pygame-sdl2, renpy, freeorion, netrek-client-cow, redeclipse, redeclipse-data, hitori, atomix, adonthell and adonthell-data.
  • I updated gtkballs and fixed a documentation bug (#820588) but also a /usr/share/locale issue that prevented the actual use of the translations.
  • I raised the severity of #797998 to grave in unknown-horizons because the game currently cannot be started. In order to fix this issue I packaged a new build-dependency, fifechan, which is currently awaiting approval by the FTP team. As soon as fifechan has been accepted, I will upload new upstream releases of fife and unknown-horizons.
  • I released debian-games 1.5, a Debian blend and collection of games metapackages.
  • Hardening-wrapper has been deprecated for some time, and this issue has now become release-critical. I updated cookietool, alex4 and netrek-client-cow to use dpkg-buildflags instead.
  • Together with Russell Coker I packaged a new upstream release of warzone2100. This package would benefit from a new regular uploader. If you are interested in it, please get involved. (Same story for hyperrogue, redeclipse, renpy, unknown-horizons and many other games.)
  • I started a new Bullet transition (#839243). The package is currently waiting in the NEW queue and I hope to complete this work in October.
  • I triaged #838199 and reassigned the issue to fonts-roboto. Initially I prepared an NMU but eventually the maintainer uploaded a new revision himself. It is now possible to install the hinted and unhinted versions of fonts-roboto together which also resolved former installation problems with kodi and freeorion.
Debian Java
  • I packaged new upstream releases of undertow, activemq and jackrabbit.
  • I fixed RC bugs in libphonenumber (#836768), wagon2 (#837022) and activemq (#839244).
  • I updated syncany in experimental and simplified the packaging a little. Unfortunately upstream has been on hiatus for the past year and we haven’t seen new releases in the meantime. Nevertheless give it a try: even though it is still alpha software, it is a useful cloud-storage and synchronization tool.
  • I sponsored a new upstream release of freeplane for Felix Natter.
  • I prepared and uploaded security updates for jackrabbit and zookeeper in Jessie.
Debian LTS

This was my eighth month as a paid contributor and I have been paid to work 12.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 12 September until 19 September I was in charge of our LTS frontdesk. I triaged bugs in tiff3, mysql-5.5, curl, dropbear, mantis, icu, dwarfutils, jackrabbit, zendframework, zookeeper and graphicsmagick. For the latter I skimmed through all commits since the last version to identify the patches that fix the recent issues in graphicsmagick. I also answered questions on the mailing list and contacted Diego Biurrun again about his progress with libav. It is now anticipated that Hugo Lefeuvre and Diego will issue a new libav security release this month.
  • I reviewed and tested a patch by Raphaël Hertzog for roundcube.
  • DLA-629-1. Issued a security update for jackrabbit fixing 1 CVE.
  • DLA-630-1. Issued a security update for zookeeper fixing 1 CVE.
  • DLA-633-1. Issued a security update for wordpress fixing 7 CVEs. This one also required backports of certain functions from newer releases and a database upgrade that needed careful testing.
  • I also issued DLA-622-1 and DLA-623-1 for two security issues that I already mentioned last month. It was discovered that Debian’s versions of Tomcat were vulnerable to a root privilege escalation. However, exploiting it also required another vulnerability, for instance in a web application, that could be used to gain write access as the tomcat user. Former security issues had already been fixed and no new ones are known. Nevertheless, since a zero-day exploit could not be ruled out, the issue was embargoed for a month to give other distributions time to fix it as well.
Non-maintainer uploads

Misc
  • I packaged a new upstream release of MediathekView.
  • I uploaded a new revision of xarchiver and applied a patch from Helmut Grohne that made it possible to cross-build the package.

Matthew Garrett: The importance of paying attention in building community trust

4 October, 2016 - 00:14
Trust is important in any kind of interpersonal relationship. It's inevitable that there will be cases where something you do will irritate or upset others, even if only to a small degree. Handling small cases well helps build trust that you will do the right thing in more significant cases, whereas ignoring things that seem fairly insignificant (or saying that you'll do something about them and then failing to do so) suggests that you'll also fail when there's a major problem. Getting the small details right is a major part of creating the impression that you'll deal with significant challenges in a responsible and considerate way.

This isn't limited to individual relationships. Something that distinguishes good customer service from bad customer service is getting the details right. There are many industries where significant failures happen infrequently, but minor ones happen a lot. Would you prefer to give your business to a company that handles those small details well (even if they're not overly annoying) or one that just tells you to deal with them?

And the same is true of software communities. A strong and considerate response to minor bug reports makes it more likely that users will be patient with you when dealing with significant ones. Handling small patch contributions quickly makes it more likely that a submitter will be willing to do the work of making more significant contributions. These things are well understood, and most successful projects have actively worked to reduce barriers to entry and to be responsive to user requests in order to encourage participation and foster a feeling that they care.

But what's often ignored is that this applies to other aspects of communities as well. Failing to use inclusive language may not seem like a big thing in itself, but it leaves people with the feeling that you're less likely to do anything about more egregious exclusionary behaviour. Allowing a baseline level of sexist humour gives the impression that you won't act if there are blatant displays of misogyny. The more examples of these "insignificant" issues people see, the more likely they are to choose to spend their time somewhere else, somewhere they can have faith that major issues will be handled appropriately.

There's a more insidious aspect to this. Sometimes we can believe that we are handling minor issues appropriately, that we're acting in a way that handles people's concerns, while actually failing to do so. If someone raises a concern about an aspect of the community, it's important to discuss solutions with them. Putting effort into "solving" a problem without ensuring that the solution has the desired outcome is not only a waste of time, it alienates those affected even more - they're now not only left with the feeling that they can't trust you to respond appropriately, but that you will actively ignore their feelings in the process.

It's not always possible to satisfy everybody's concerns. Sometimes you'll be left in situations where you have conflicting requests. In that case the best thing you can do is to explain the conflict and why you've made the choice you have, and demonstrate that you took this issue seriously rather than ignoring it. Depending on the issue, you may still alienate some number of participants, but it'll be fewer than if you just pretend that it's not actually a problem.

One warning, though: while building trust in this way enhances people's willingness to join your community, it also builds expectations. If a significant issue does arise, and if you fail to handle it well, you'll burn a lot of that trust in the process. The fact that you've built that trust in the first place may be what saves your community from disintegrating completely, but people will feel even more betrayed if you don't actively work to rebuild it. And if there's a pattern of mishandling major problems, no amount of getting the details right will matter.

Communities that ignore these issues are, long term, likely to end up weaker than communities that pay attention to them. Making sure you get this right in the first place, and setting expectations that you will pay attention to your contributors, is a vital part of building a meaningful relationship between your community and its members.


Lars Wirzenius: A tiny PC as a router

3 October, 2016 - 20:47

We needed a router and wifi access point in the office, and simultaneously both I and my co-worker Ivan needed such a thing at our respective homes. After some discussion, and after reading articles in Ars Technica about building PCs to act as routers, we decided to do just that.

  • The PC solution seems to offer better performance, but this was actually not a major reason for us.

  • We want to have systems we understand and can hack. A standard x86 PC running Debian sounds ideal for this.

  • Why not a cheap commercial router? They tend to be opaque and mysterious, and can't be managed with standard tooling such as Ansible. They may or may not have good security support. Also, they may or may not have sufficient functionality to do nice things, such as DNS for local machines, or the full power of iptables for firewalling.

  • Why not OpenWRT? Some models of commercial routers are supported by OpenWRT. Finding good hardware that is also supported by OpenWRT is a task in itself, and not the kind of task I especially like to do. Even if one goes this route, the environment isn't quite a standard Linux system, because of various hardware limitations. (OpenWRT is a worthy project, just not our preference.)

We got some hardware:

Component       Model                                                               Cost
Barebone        Qotom Q190G4, VGA, 2x USB 2.0, 134x126x36mm, fanless                130€
CPU             Intel J1900, 2-2.4GHz quad-core                                     -
NIC             Intel WG82583, 4x 10/100/1000                                       -
Memory          Crucial CT102464BF160B, 8GB DDR3L-1600 SODIMM 1.35V CL11            40€
SSD             Kingston SSDNow mS200, 60GB mSATA                                   42€
WLAN            AzureWave AW-NU706H, Ralink RT3070L, 300M 802.11b/g/n, half mPCIe   17€
mPCIe adapter   Half to full mPCIe adapter                                          3€
Antennas        2x 2.4/5GHz 6dBi, RP-SMA, U.FL cables                               7€

These were bought at various online shops, including AliExpress.

After assembling the hardware, we installed Debian on them:

  • Connect the PC to a monitor (VGA) and keyboard (USB), as well as power.

  • I built a "factory image" to be put on the SSD, and a USB stick installer image, which includes the factory one. Write the installer image to a USB stick, boot off that, then copy the factory image to the SSD and reboot off the SSD.

  • The router now runs a very bare-bones, stripped-down Debian system, which runs a DHCP server on eth3 (marked LAN4 on the box). You can log in as root on the console (no password), or via ssh, but for ssh you need to replace the /home/ansible/.ssh/authorized_keys file with one that contains only your public ssh key.

  • Connect a laptop to the Ethernet port marked LAN4, and get an IP address with DHCP.

  • Log in with ssh as the ansible user, and verify that sudo id works without a password. (You can't do this unless you have put your ssh key in the authorized_keys file mentioned above.)

  • Git clone the ansible playbooks, adjust their parameters in minipc-router.yml as wanted, and run the playbook. Then reboot the router again.

  • You should now have wifi and routing (with NAT), and be generally speaking able to do networking.

There are a lot of limitations and problems:

  • There's no web UI for managing anything. If you're not comfortable doing sysadmin via ssh (with or without ansible), this isn't for you.

  • No IPv6. We didn't want to enable it yet, until we understand it better. You can, if you want to.

  • No real firewalling, but adjust roles/router/files/ferm.conf as you wish.

  • The router factory image is 4 GB in size, and our SSD is 60 GB. That's a lot of wasted space.

  • The router factory image embeds our public keys in the ansible user's authorized keys file for ssh. This is because we built this for ourselves first. If there's interest by others in using the images, we'll solve this.

  • Probably a lot of stupid things. Feel free to tell us what they are.

If you'd like to use the images and Ansible playbooks, please do. We'd be happy to get feedback, bug reports, and patches. Send them to me or to my ticketing system.

Shirish Agarwal: Using JOSM and gpx tracks

3 October, 2016 - 20:18

This will be a longish post. I had bought a Samsung Galaxy J-5/500 just a few days before Debconf16, which I had written a bit about earlier as well. As can be seen in the specs, there isn't much to explore other than A-GPS. There were a couple of temperature apps which I wanted to explore before buying the smartphone, but as there were budget constraints and there weren't any good budget smartphones with environmental sensors built in, I had to let go of those features.

I was looking for a free app with OSM support and support for the gpx format, and came across osmand.

I was planning to use osmand in South Africa, but due to the overwhelming nature of meeting people, seeing places and just being, I didn't actually get the time and place to try it.

Came back home and a month and a half passed. In between I had done some simple small tracks but nothing major. This weekend I got the opportunity, as I got some free data balance from my service provider (a princely 50 MB) as well as an opportunity to go about 40-odd km from the city. I had read about osmand and wanted to see whether the offline mode worked or not. From the webpage:

• Works online (fast) or offline (no roaming charges when you are abroad)

So armed with a full battery I started the journey, which took about an hour and a half even though technically it was a holiday. On the way back, I took a different route and recorded that as well. The app worked flawlessly. I was able to get the speed of the vehicle and everything. The only thing I haven't understood to date is how to select waypoints, but other than that I got the whole route on my mobile.

Just for fun I also looked at the gpx file after copying it from mobile to hdd (an extract)

While it's not a complete extract, what was interesting for me to note here is that the time was in UTC. What was also interesting is that in the gpx tracks I also saw some entries about speed, as can be seen in the paste above. It doesn't say whether it is in km/h or mph; I believe it probably is km/h, as that is the unit I defined in the app.
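Since a gpx track records timestamps and coordinates, the unit question can also be settled by computing the speed directly from consecutive track points. Below is a minimal Python sketch; the two sample points (coordinates and timestamps) are made up for illustration, since the actual extract is not reproduced here. The derived value is in m/s, then converted to km/h.

```python
# Compute the speed between two GPX track points from their coordinates
# and UTC timestamps.  The sample points below are hypothetical.
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical consecutive track points: (lat, lon, ISO-8601 UTC time)
pts = [
    (18.5204, 73.8567, "2016-10-01T02:41:00Z"),
    (18.5213, 73.8570, "2016-10-01T02:41:10Z"),
]
(la1, lo1, t1), (la2, lo2, t2) = pts
fmt = "%Y-%m-%dT%H:%M:%SZ"
dt = (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).total_seconds()
speed_ms = haversine_m(la1, lo1, la2, lo2) / dt
print(round(speed_ms * 3.6, 1), "km/h")  # m/s converted to km/h
```

Cross-checking a few segments this way against what the app shows would reveal whether the recorded speed values are m/s, km/h or mph.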

Anyways, the next step was trying to see which tool was good enough to show me the tracks with tiles underneath and labels of places, paths etc.

I tried three tools –

1. jmapviewer – this didn’t work at all.
2. gnome-maps – this worked remarkably well but has numerous gtk3.0 warnings –

┌─[shirish@debian] - [~/osmand] - [10149]
└─[$] gnome-maps 2016-10-01_08-11_Sat.gpx

(gnome-maps:21017): Gtk-WARNING **: Theme parsing error: gtk.css:63:28: The :prelight pseudo-class is deprecated. Use :hover instead.

(gnome-maps:21017): Gtk-WARNING **: Theme parsing error: gtk.css:73:35: The :prelight pseudo-class is deprecated. Use :hover instead.

(gnome-maps:21017): Gtk-WARNING **: Theme parsing error: application.css:14:30: The style property GtkButton:image-spacing is deprecated and shouldn't be used anymore. It will be removed in a future version

(gnome-maps:21017): Gtk-WARNING **: Theme parsing error: application.css:15:31: The style property GtkWidget:interior-focus is deprecated and shouldn't be used anymore. It will be removed in a future version

(gnome-maps:21017): Gdk-WARNING **: /build/gtk+3.0-Tod2iD/gtk+3.0-3.22.0/./gdk/x11/gdkwindow-x11.c:5554 drawable is not a native X11 window

[the Gdk warning above appeared six times in total]

(gnome-maps:21017): Gtk-WARNING **: GtkClutterOffscreen 0x4c4f3f0 is drawn without a current allocation. This should not happen.

(gnome-maps:21017): Gtk-WARNING **: GtkImage 0x4ed4140 is drawn without a current allocation. This should not happen.

Now I'm not sure whether all of those are gtk3+ issues or due to me running them under Debian MATE. I know that there are issues with MATE and gtk3+, as has been told/shared a few times on p.d.o.

Anyways, one of the issues I encountered is that gnome-maps doesn't work in offline mode. I also looked at ~/.cache/champlain/osm-mapquest, and the listing underneath is gibberish in the sense that you don't know what it is meant to do –

┌─[shirish@debian] - [~/.cache/champlain/osm-mapquest] - [10163]
└─[$] ll -h

drwx------ 6 shirish shirish 4.0K Jun 11 2015 10
drwx------ 26 shirish shirish 4.0K Oct 24 2014 11
drwx------ 10 shirish shirish 4.0K Jun 11 2015 12
drwx------ 11 shirish shirish 4.0K Jun 11 2015 13
drwx------ 12 shirish shirish 4.0K Jun 11 2015 14
drwx------ 12 shirish shirish 4.0K Jun 11 2015 15
drwx------ 27 shirish shirish 4.0K Oct 24 2014 16
drwx------ 25 shirish shirish 4.0K Oct 24 2014 17
drwx------ 4 shirish shirish 4.0K Mar 4 2014 3
drwx------ 5 shirish shirish 4.0K Mar 4 2014 8
drwx------ 9 shirish shirish 4.0K Mar 29 2014 9

What was/is interesting to see things like this –

As I was in a moving vehicle, it isn't easy to know whether the imagery is at fault, or the app, or the sensor of my mobile.

Did see but as can be seen that requires more effort from my side.

The last tool proved to be the most problematic.

3. JOSM – getting the tracks into josm was easily done. While firing up josm, I came across an issue which I subsequently filed.

One of the other things, which has been a major irritant for a long time, is that JOSM is, for lack of a better term, ugly. See the interface, especially the preferences dialog – it all looks cluttered – and specifically see the plugins corner/tab –

The part about it being ugly, I dunno, but most java apps I have seen are a bit ugly. It is a bit of a generalisation, I know, but that has been my experience with whatever few java apps I have used.

I don't know what the reasons for that are; maybe because java is known/rumoured to use a lot of memory, which seems true in my case as well, or because it doesn't have toolkits like gtk3+ or qt quick. Although I have to say that the looks have improved from when I last used it some years ago –

┌─[shirish@debian] - [~] - [10340]
└─[$] ps -eo size,pid,user,command | awk '{ hr=$1/1024 ; printf("%13.6f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | grep josm

0.324219 Mb /bin/sh /usr/bin/josm
419.468750 Mb /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djosm.restart=true -jar /usr/share/josm/josm.jar

This is when I’m just opening josm and have not added any tracks or done any work.

Now I wanted to explore the routing in a good amount of detail in josm. This was easier said than done. When trying to get imagery I got the 'Download area too large' issue/defect. Multiple tries didn't get me anywhere. Then, hunting on the web, I came across the continuous-download plugin, which is part of the plugin infrastructure. This I found to be a very good tool. It downloads the tiles and puts them in ~/.josm/cache/tiles

┌─[shirish@debian] - [~/.josm/cache/tiles] - [10147]
└─[$] ll -h

total 28M
-rw-r--r-- 1 shirish shirish 28M Oct 2 02:13
-rw-r--r-- 1 shirish shirish 290K Oct 3 12:59 TMS_BLOCK_v2.key
-rw-r--r-- 1 shirish shirish 4 Oct 3 12:59 WMS_BLOCK_v2.key
-rw-r--r-- 1 shirish shirish 4 Oct 3 12:59 WMTS_BLOCK_v2.key

While I unfortunately cannot see into this or make sense of it, I'm guessing it is some sort of database with key and data files.

What did become apparent is that OSM needs lots more love if it is to become something which can be used every day. In the end I had to convert the open-source gpx track file to a Google Maps kml file to be able to make sense of it, as there are whole areas which need to be named, numbered etc.

One of the newbie mistakes I made was trying to pan the slippy map in josm (using Openstreetmap/Mapnik at the back-end) with the left mouse button. It took me quite some time to figure out that it is the right mouse button that pans the slippy map. This is different from almost all maps: gnome-maps uses the traditional left button, and Google Maps does the same. I have filed this upstream.

So, at least in these rounds, it is gnome-maps which has kind of won, even though it doesn't do any of the things that josm claims to do.

I am sure there are some interesting tricks and tips that people might have to share about mapping.

Filed under: Miscellaneous Tagged: #Debconf16, #GNOME-MAPS, #JOSM, OSM

Bálint Réczey: Harden Debian with PIE and bindnow!

3 October, 2016 - 19:14

Shipping Position Independent Executables and using a read-only Global Offset Table was already possible for packages, but package maintainers needed to opt in for each package (see the Hardening wiki) using the "pie" and "bindnow" dpkg hardening flags.

Many critical packages have enabled the extra flags, but there are still many more left out, according to the Lintian hardening-no-bindnow and hardening-no-pie warnings.

Now we can change that. We can make those hardening flags the default for every package.
We already have the needed patches for GCC (#835148) and dpkg (#835146, #835149). We already have all packages rebuilt once to test which breaks (Thanks to Lucas Nussbaum!). The Release Team already asked porters if they feel their ports ready for enabling PIE and most ports tentatively opted-in (Thanks to Niels Thykier for pushing this!).

What is left is fixing the ~75 open bugs found during the test rebuilds, and this is where you can help, too! Please check if your packages are affected, or give a helping hand to other maintainers who need it. (See the PIEByDefaultTransition wiki for hints on fixing the bugs.) Many thanks to those who already fixed their packages!

If we can get past those last bugs we can enable those badly needed security features and make Stretch the most secure release ever!

Russell Coker: 10 Years of Glasses

3 October, 2016 - 18:13

10 years ago I first blogged about getting glasses [1]. I've just ordered my 4th pair of glasses. When you buy new glasses, the first step is to scan your old glasses to use as a base point for assessing your eyes: instead of going in cold and trying lots of different lenses, they can just try small variations on your current glasses. Any good optometrist will give you a print-out of the specs of your old glasses and your new prescription after you buy glasses; they may be hesitant to do so if you don't buy, because some people get a prescription at an optometrist and then buy cheap glasses online. Here are the specs of my new glasses, the ones I'm wearing now that are about 4 years old, and the ones before that which are probably about 8 years old:

        New    4 Years Old  Really Old
R-SPH   0.00   0.00         -0.25
R-CYL  -1.50  -1.50         -1.50
R-AXS   180    179           180
L-SPH   0.00  -0.25         -0.25
L-CYL  -1.00  -1.00         -1.00
L-AXS   5      10            179

The Specsavers website has a good description of what this means [2]. In summary, SPH is whether you are long-sighted (positive) or short-sighted (negative). CYL is for astigmatism, which is where the focal lengths for horizontal and vertical aren't equal. AXS is the angle of the astigmatism. There are other fields, which you can read about on the Specsavers page, but they aren't relevant for me.
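How the cylinder interacts with its angle can be sketched with the standard approximation from optics: the effective power along a meridian at angle θ from the cylinder axis is SPH + CYL·sin²(θ), so at the axis only SPH applies and at 90 degrees the full CYL applies. A small Python illustration, using the left-lens values from the prescription above:

```python
# Effective lens power along a meridian at angle theta (degrees) from the
# cylinder axis, using the standard SPH + CYL * sin^2(theta) approximation.
import math

def effective_power(sph, cyl, theta_deg):
    return sph + cyl * math.sin(math.radians(theta_deg)) ** 2

# Left lens from the prescription: SPH 0.00, CYL -1.00
print(effective_power(0.0, -1.0, 0))    # along the axis: SPH only
print(effective_power(0.0, -1.0, 90))   # perpendicular: full cylinder applies
```

This is why rotating the glasses 90 degrees makes things worse: the correction ends up applied along the wrong meridian.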

The first thing I learned when I looked at these numbers is that until recently I was apparently slightly short-sighted. In a way this isn’t a great surprise given that I spend so much time doing computer work and very little time focusing on things further away. What is a surprise is that I don’t recall optometrists mentioning it to me. Apparently it’s common to become more long-sighted as you get older so being slightly short-sighted when you are young is probably a good thing.

Astigmatism is the reason why I wear glasses (the Wikipedia page has a very good explanation of this [3]). For the configuration of my web browser and GUI (which I believe to be default in terms of fonts for Debian/Unstable running KDE and Google-Chrome on a Thinkpad T420 with 1600×900 screen) I can read my blog posts very clearly while wearing glasses. Without glasses I can read it with my left eye, but it is fuzzy, and reading it with my right eye is like reading the last line of an eye test: something I can do if I concentrate a lot for test purposes, but would never do by choice. If I turn my glasses 90 degrees (so that they make my vision worse, not better) then my ability to read the text with my left eye is worse than with my right eye without glasses. This is as expected, as the 1.00 level of astigmatism in my left eye is doubled when I use the lens in my glasses at 90 degrees to its intended angle.

The AXS numbers are for the angle of astigmatism. I don’t know why some of them are listed as 180 degrees or why that would be different from 0 degrees (if I turn my glasses so that one lens is rotated 180 degrees it works in exactly the same way). The numbers from 179 degrees to 5 degrees may be just a measurement error.

Related posts:

  1. more on vision I had a few comments on my last so I...
  2. right-side visual migraine This afternoon I had another visual migraine. It was a...
  3. New Portslave release after 5 Years I’ve just uploaded Portslave version 2010.03.30 to Debian, it replaces...

Kees Cook: security things in Linux v4.7

3 October, 2016 - 14:47

Onward to security things I found interesting in Linux v4.7:

KASLR text base offset for MIPS

Matt Redfearn added text base address KASLR to MIPS, similar to what’s available on x86 and arm64. As done with x86, MIPS attempts to gather entropy from various build-time, run-time, and CPU locations in an effort to find reasonable sources during early-boot. MIPS doesn’t yet have anything as strong as x86′s RDRAND (though most have an instruction counter like x86′s RDTSC), but it does have the benefit of being able to use Device Tree (i.e. the “/chosen/kaslr-seed” property) like arm64 does. By my understanding, even without Device Tree, MIPS KASLR entropy should be as strong as pre-RDRAND x86 entropy, which is more than sufficient for what is, similar to x86, not a huge KASLR range anyway: default 8 bits (a span of 16MB with 64KB alignment), though CONFIG_RANDOMIZE_BASE_MAX_OFFSET can be tuned to the device’s memory, giving a maximum of 11 bits on 32-bit, and 15 bits on EVA or 64-bit.
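The entropy figures above can be sanity-checked with a little arithmetic: n bits of entropy at 64 KiB alignment randomize the base over 2**n slots, giving the spans quoted in the paragraph.

```python
# Arithmetic check of the KASLR ranges described above: n bits of entropy
# with 64 KiB slot alignment gives a randomization span of 2**n * 64 KiB.
ALIGN = 64 * 1024  # 64 KiB alignment

def kaslr_span_mib(entropy_bits):
    return (2 ** entropy_bits) * ALIGN // (1024 * 1024)

# default 8 bits; maximum 11 bits on 32-bit; 15 bits on EVA or 64-bit
for bits in (8, 11, 15):
    print(bits, "bits ->", kaslr_span_mib(bits), "MiB span")
```

The 8-bit default indeed corresponds to the 16 MB span mentioned in the text.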

SLAB freelist ASLR

Thomas Garnier added CONFIG_SLAB_FREELIST_RANDOM to make slab allocation layouts less deterministic with a per-boot randomized freelist order. This raises the bar for successful kernel slab attacks. Attackers will need to either find additional bugs to help leak slab layout information or will need to perform more complex grooming during an attack. Thomas wrote a post describing the feature in more detail here: Randomizing the Linux kernel heap freelists. (SLAB is done in v4.7, and SLUB in v4.8.)
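Conceptually, the feature replaces a predictable allocation order with a per-boot shuffle of the freelist. This toy Python sketch (not kernel code; the seed stands in for boot-time entropy) illustrates why an attacker can no longer predict which slab object is handed out next:

```python
# Conceptual sketch of a randomized slab freelist: objects are handed out
# in a shuffled, per-boot order instead of a predictable sequence.
import random

def build_freelist(n_objects, seed):
    order = list(range(n_objects))
    rng = random.Random(seed)  # stands in for per-boot entropy
    rng.shuffle(order)
    return order

boot_a = build_freelist(8, seed=1)  # hypothetical boot #1
boot_b = build_freelist(8, seed=2)  # hypothetical boot #2
print(boot_a)
print(boot_b)
```

Each "boot" still uses every object exactly once, but the order differs, which is exactly what forces the extra grooming work described above.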

eBPF JIT constant blinding

Daniel Borkmann implemented constant blinding in the eBPF JIT subsystem. With strong kernel memory protections (CONFIG_DEBUG_RODATA) in place, and with the segregation of user-space memory execution from kernel (i.e SMEP, PXN, CONFIG_CPU_SW_DOMAIN_PAN), having a place where user-space can inject content into an executable area of kernel memory becomes very high-value to an attacker. The eBPF JIT was exactly such a thing: the use of BPF constants could result in the JIT producing instruction flows that could include attacker-controlled instructions (e.g. by directing execution into the middle of an instruction with a constant that would be interpreted as a native instruction). The eBPF JIT already uses a number of other defensive tricks (e.g. random starting position), but this added randomized blinding to any BPF constants, which makes building a malicious execution path in the eBPF JIT memory much more difficult (and helps block attempts at JIT spraying to bypass other protections).
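The blinding idea can be illustrated outside the kernel: rather than emitting an attacker-chosen constant K verbatim into JITed memory, the JIT emits K ^ R for a random R and reconstructs K at runtime, so the raw bytes of K never appear in executable memory. A toy Python sketch (not the actual eBPF JIT code; the seed and constant are hypothetical):

```python
# Conceptual sketch of constant blinding: the attacker-supplied constant K
# is stored as (K ^ R), and only the XOR with R at runtime recovers K.
import random

def blind(constant, rng):
    r = rng.getrandbits(32)
    blinded = constant ^ r
    return blinded, r

rng = random.Random(1234)   # stands in for kernel entropy
k = 0xdeadbeef              # hypothetical attacker-supplied BPF constant
blinded, r = blind(k, rng)
assert blinded ^ r == k     # runtime reconstruction yields the original
print(hex(blinded), hex(r))
```

Because the bytes actually written into the JIT buffer depend on R, an attacker cannot arrange for a chosen instruction encoding to appear at a known offset.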

Elena Reshetova updated a 2012 proof-of-concept attack to succeed against modern kernels to help provide a working example of what needed fixing in the JIT. This serves as a thorough regression test for the protection.

The cBPF JITs that exist in ARM, MIPS, PowerPC, and Sparc still need to be updated to eBPF, but when they do, they'll gain all these protections immediately.

The bottom line is that if you enable the (disabled-by-default) bpf_jit_enable sysctl, be sure to also set the bpf_jit_harden sysctl to 2 (to perform blinding even for root).

fix brk ASLR weakness on arm64 compat

There have been a few ASLR fixes recently (e.g. ET_DYN, x86 32-bit unlimited stack), and while reviewing some suggested fixes to arm64 brk ASLR code from Jon Medhurst, I noticed that arm64′s brk ASLR entropy was slightly too low (less than 1 bit) for 64-bit and noticeably lower (by 2 bits) for 32-bit compat processes when compared to native 32-bit arm. I simplified the code by using literals for the entropy. Maybe we can add a sysctl some day to control brk ASLR entropy like was done for mmap ASLR entropy.

LoadPin LSM

LSM stacking is well-defined since v4.2, so I finally upstreamed a “small” LSM that implements a protection I wrote for Chrome OS several years back. On systems with a static root of trust that extends to the filesystem level (e.g. Chrome OS’s coreboot+depthcharge boot firmware chaining to dm-verity, or a system booting from read-only media), it’s redundant to sign kernel modules (you’ve already got the modules on read-only media: they can’t change). The kernel just needs to know they’re all coming from the correct location. (And this solves loading known-good firmware too, since there is no convention for signed firmware in the kernel yet.) LoadPin requires that all modules, firmware, etc come from the same mount (and assumes that the first loaded file defines which mount is “correct”, hence load “pinning”).
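The pinning logic is simple to sketch in userspace terms: remember where the first file was loaded from (here approximated by the stat device number of the backing filesystem), and reject later loads from anywhere else. A toy Python illustration, not the kernel implementation:

```python
# Conceptual sketch of LoadPin: the first loaded file "pins" its backing
# filesystem, and subsequent loads are only allowed from the same one.
import os
import tempfile

class LoadPin:
    def __init__(self):
        self.pinned_dev = None

    def check(self, path):
        dev = os.stat(path).st_dev   # identifies the backing mount/device
        if self.pinned_dev is None:
            self.pinned_dev = dev    # first load pins the source
        return dev == self.pinned_dev

# Demonstrate with two files that are guaranteed to share a filesystem:
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "mod_a.ko")   # hypothetical module file
    b = os.path.join(d, "fw_b.bin")   # hypothetical firmware file
    for p in (a, b):
        open(p, "w").close()
    pin = LoadPin()
    print(pin.check(a))  # first load: pins this filesystem
    print(pin.check(b))  # same filesystem -> allowed
```

A file on any other mount would yield a different device number and be refused, which is the "pinning" behaviour described above.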

That’s it for v4.7. Prepare yourself for v4.8 next!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Russ Allbery: Review: Winds of Fury

3 October, 2016 - 08:18

Review: Winds of Fury, by Mercedes Lackey

Series: Mage Winds #3 Publisher: DAW Copyright: August 1994 ISBN: 0-88677-612-0 Format: Mass market Pages: 427

This is the concluding book of the Mage Winds trilogy and a direct sequel to Winds of Change. This series doesn't make sense to read out of order.

In traditional fantasy trilogies of this type, the third book is often the best. The author can stop developing characters, building the world, and setting the scene and can get to the meat of the story. All the guns on the mantels go off, all the twists the author has been saving can be used, and there's usually a lot more action than in the second book of a trilogy. That is indeed the case here. I'm not sure Winds of Fury rises to the level of a good book, but if you're invested in the Valdemar story, it works better than the previous books of this series.

As one might expect, the protagonists do return to Valdemar, finally. The method of that return makes sense of some things that happened in Winds of Change and is an entertaining surprise, although I wish more had been done with it through the rest of the book and we'd gotten more world-building details. I also like how Lackey handles the Valdemar reactions to the returning protagonists, and their own reactions to Valdemar. Lackey's characters might fit some heavy-handed stereotypes a bit too neatly, but they do grow and change over the course of a series, and the return home is a good technique for showing that.

Lackey also throws in her final twist for the villains of this series at the start of this book, and it's a good one (if typical Lackey; she does love her abused youth characters). The villains are still far too one-dimensional and far too stereotyped evil, but the twist (which I'll avoid spoiling) does make that dynamic a bit more interesting. And she manages to get the reader to root for one evil over another, since one of them is at least competent.

You'd think from the direction the series was taking that Winds of Fury would culminate in another epic war of magic, but not this book. Lackey takes a more personal and targeted approach, heavier on characterization and individual challenge. This gives Firesong a chance to grow into an almost-likable character and earn some of that empathy and insight that he'd gotten for free. Unfortunately, it sidelines Nyara a lot, and pushes her back into a stereotyped role, which made me sad. The series otherwise emphasizes the importance of magic users who know their own limitations and can thoughtfully use the power they have, but rarely extends that to Nyara herself. I would have much rather seen her play a role like that in the final climax instead of the one she played.

This is partly made up for by centering Need in the story and having her play the key connecting role between two different threads of effort. Those are probably the best parts of this book. I wish the entire series had been told from Need's perspective, with a heavy helping of exasperated grumbling, although I don't think Lackey could have written that series. But what we get of her is a delight. The gryphons, sadly, are relegated to fairly minor roles, but we get a few more tantalizing hints about the Companions to make up for it. (Although not enough to figure out their great mystery, which isn't revealed until future books.)

This series is nowhere near as good as I remembered, sadly, but I did enjoy bits of it. If you like Valdemar in general, the world building is fairly important and reveals quite a lot about the underpinnings and power dynamics of Lackey's universe. I'm not sure that makes up for some tedious characters, poor communication, and some uncomfortable and dragging sections, but if you're trying to get the whole Valdemar story, the events here are rather important. And this ending was at least entertaining, if not great fiction.

Rating: 6 out of 10

Thorsten Alteholz: My Debian Activities in September 2016

3 October, 2016 - 04:37

FTP assistant

This month I was rather busy with other stuff and only marked 191 packages for accept and rejected 21 packages. I also sent 6 emails to maintainers asking questions.

Debian LTS

This was my twenty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 12.25h. During that time I uploaded php5, fixing 17 CVEs and two additional bugs, and uploaded mactelnet, fixing one CVE. I also prepared a package for testing of zendframework, which will fix one CVE. Unfortunately my bind9 upload needed to be postponed, as Florian Weimer found an incomplete patch of a previous CVE. I am trying to fix that as well. I also made some progress with the asterisk CVEs, and of course the next round of php5 patches is waiting…

This month I also had a few days of frontdesk work at the beginning of the month and a few days at the end.

Other stuff

For the Alljoyn framework I uploaded alljoyn-services-1604 and as I forgot a Conflict:, I had to take care of RC-bugs: #836717, #836718 and #836719. Thanks a lot to Ralf Treinen for his automatic installation tests.

As mentioned earlier, openzwave is on its way to the Debian archive. While it is still in non-free, the author of a library it uses gave permission to relicense that code, so the way to main is paved now.

Gregor Herrmann: RC bugs 2016/38-39

3 October, 2016 - 03:40

the last two weeks have seen the migration of perl 5.24 into testing, and most of the bugs I worked on were related to it. additionally a few more build dependencies on tzdata were needed. – here's the list:

  • #784845 – libdevel-gdb-perl: "libdevel-gdb-perl: FTBFS: t/expect.t #8 sometimes fails"
    skip brittle test (pkg-perl)
  • #825629 – src:libgd-perl: "libgd-perl: FTBFS: Could not find gdlib-config in the search path. "
    add patch to use pkg-config instead of the removed gdlib-config (pkg-perl)
  • #832840 – src:license-reconcile: "license-reconcile: FTBFS: dh_auto_test: perl Build test --verbose 1 returned exit code 255"
    sponsor upload prepared by gfa (pkg-perl)
  • #838310 – keyboard-configuration: "keyboard-configuration: user configuration lost + error message from setupcon"
    propose a patch
  • #838851 – libcoro-perl: "libcoro-perl: FTBFS with Perl 5.24: panic: corrupt saved stack index -144185424"
    resurrect parts of the removed patch (pkg-perl)
  • #838933 – libio-compress-lzma-perl: "libio-compress-lzma-perl: uninstallable and unbuildable with Perl 5.24"
    fix dependencies (pkg-perl)
  • #838934 – libperl-apireference-perl: "libperl-apireference-perl: FTBFS with Perl 5.24.1"
    add support for 5.24.1 (pkg-perl)
  • #839187 – sa-compile: "sa-compile: failed make after perl upgraded to 5.24.1~rc3-3 on testing"
    close on suggestion of submitter after investigation
  • #839442 – src:libtime-parsedate-perl: "libtime-parsedate-perl: FTBFS: Tests failures"
    add build dependency on tzdata (pkg-perl)
  • #839477 – src:libposix-strftime-compiler-perl: "libposix-strftime-compiler-perl: FTBFS: dh_auto_test: perl Build test --verbose 1 returned exit code 5"
    add build dependency on tzdata (pkg-perl)
  • #839513 – src:libapache-logformat-compiler-perl: "libapache-logformat-compiler-perl: FTBFS: dh_auto_test: perl Build test --verbose 1 returned exit code 8"
    add build dependency on tzdata (pkg-perl)
  • #839516 – src:libclass-date-perl: "libclass-date-perl: FTBFS: Tests failures"
    add build dependency on tzdata (pkg-perl)

Steinar H. Gunderson: SNMP MIB setup

2 October, 2016 - 18:15

If you just install the “snmp” package out of the box, you won't get the MIBs, so it's pretty much useless for anything vendor-specific without some setup. I'm sure this is documented somewhere, but I have to figure it out afresh every single time, so this time I'm writing it down; I can't possibly be the only one getting confused.

First, install snmp-mibs-downloader from non-free. You'll need to work around bug #839574 to get the Cisco MIBs right:

# cp /usr/share/doc/snmp-mibs-downloader/examples/cisco.conf /etc/snmp-mibs-downloader/
# gzip -cd /usr/share/doc/snmp-mibs-downloader/examples/ciscolist.gz > /etc/snmp-mibs-downloader/ciscolist

Now you can download the Cisco MIBs:

# download-mibs cisco

However, this only downloads them; you will need to modify snmp.conf to actually use them. Comment out the line that says “mibs :”, and then add:

mibdirs +/var/lib/snmp/mibs/cisco/

Voila! Now you can use snmpwalk with e.g. -m AIRESPACE-WIRELESS-MIB to get the full range of Cisco WLC objects (and the first time you do so as root or the Debian-snmp user, the MIBs will be indexed in /var/lib/snmp/mib_indexes/.)
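Put together, the relevant portion of /etc/snmp/snmp.conf then looks like this:

```
# mibs :
mibdirs +/var/lib/snmp/mibs/cisco/
```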

Russell Coker: Hostile Web Sites

2 October, 2016 - 12:06

I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general.

Wget Overview

Some spam messages are designed to attack the recipient’s computer. They can exploit bugs in the MUA, applications that may be launched to process attachments (EG MS Office), or a web browser. Wget is a very simple command-line program to download web pages, it doesn’t attempt to interpret or display them.

As with any network facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to have a web server that detects the client and has attacks for multiple HTTP clients including wget.

In practice wget is a very simple program and simplicity makes security easier. A large portion of security flaws in web browsers are related to plugins such as flash, rendering the page for display on a GUI system, and javascript – features that wget lacks.

The Profit Motive

An attacker that aims to compromise online banking accounts probably isn’t going to bother developing or buying an exploit against wget. The number of potential victims is extremely low and the potential revenue benefit from improving attacks against other web browsers is going to be a lot larger than developing an attack on the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users.

However if the attacker doesn’t have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team [1]). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won’t look like regular spam. For more information about targeted attacks Brian Krebs’ article about CEO scams is worth reading [2].

Privilege Separation

If you run wget in a regular Xterm in the same session you use for reading email etc then if there is an exploitable bug in wget then it can be used to access all of your secret data. But it is very easy to run wget from another account. You can run “ssh otheraccount@localhost” and then run the wget command so that it can’t attack you. Don’t run “su - otheraccount” as it is possible for a compromised program to escape from that.

I think that most Linux distributions have supported a “switch user” functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs.

It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack.

Browser Features

Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet but prevents random web sites from using Flash and Java which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google black list before connecting. When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam, if a user sent out a URL that’s on Google’s blacklist I would lock their account without doing any further checks.


I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems.

Using wget when you have reason to be concerned is a possibility, but not only is it slightly inconvenient, it also often won’t download the content that you want (EG in the case of HTML frames).


Russ Allbery: Review: Winds of Change

2 October, 2016 - 09:31

Review: Winds of Change, by Mercedes Lackey

Series: Mage Winds #2 Publisher: DAW Copyright: August 1993 ISBN: 0-88677-563-9 Format: Mass market Pages: 475

Winds of Change is a direct sequel to Winds of Fate. This is a more closely connected trilogy than the previous Valdemar books. It's not the sort of thing you want to read out of order.

The events of Winds of Fate predictably left the multiple protagonists united and with some breathing space, but none of their problems are resolved. The Heartstone is still a mess; in fact, it may be getting worse. Elspeth needs to learn how to wield the magical power she apparently has. And there are a lot of interpersonal tensions, lingering hurt feelings, and (in the case of Elspeth and Darkwind) a truly prodigious quantity of whining that has to be worked through before the protagonists can feel safe and happy.

Winds of Change is the training montage book, and wow did my memory paper over a lot of flaws in this series. This is 475 pages of not much happening, occasionally in very irritating ways. Yes, we do finally meander to a stronger conclusion than the last book, and there is much resolving of old hurts and awkward interactions, as well as a bit of discovery of true love (this is Lackey, after all). But far, far too much of the book is Elspeth and Darkwind sniping at each other, being immature, not communicating, and otherwise being obnoxious while all the people around them try to gently help. Lackey's main characterization flaw for me is that she tends to default into generating characters who badly need to be smacked upside the head, and then does so in ways and for things at odd angles to the reasons why I think they should be smacked. It can make for frustrating reading.

The introduction of Firesong as a character about halfway through this book does not help. Firesong is a flamboyant, amazingly egotistical, and stunningly arrogant show-off who also happens to be a magical super-genius and hence has "earned" his arrogance. This is an intentional character design, not my idiosyncratic reaction to the character, since every other character in the book finds him insufferable as well at first. But he's also a deeply insightful healing Adept by, honestly, authorial fiat, so by the end of the novel he's helped patch up everyone's problems and the other characters have accepted his presentation as a character quirk.


So, okay, one doesn't read popcorn fantasy for its deep characterization or realistic grasp of human subtlety. But this is just way more than I can swallow. Lackey's concept of a healing Adept (which I like a great deal as a bit of world-building) necessarily involves both deep knowledge and deep empathy and connection with other people. Firesong is so utterly full of himself that there's simply no way that he could have the empathy required to do what he is shown to do here. (Lackey does try to explain this away in the book, but the explanation didn't work for me.) Every time he successfully intervenes in other people's emotional lives, he does so with a sudden personality change, some stunning insight that he previously showed no evidence of ability to understand, and somehow only enough arrogance in his presentation to prickle but not to close people's mind to whatever he's trying to say.

That's not how this works. That's not how any of this works. Lackey always treats psychology as a bit of a blunt instrument, and one either learns to tolerate that or gives up on her series, but Firesong is flatly the most unbelievable emotional mentor figure in any of her books I've read. (One of the more satisfying, if slight, bits of this series comes up in the next book, where Firesong runs into someone else who can do the same thing but has actually earned the empathy the hard way, and is a bit taken aback by it.)

My other complaint with this book is that Lackey adds more chapters from the viewpoint of the big bad of the series. These are deeply unpleasant, since he's a deeply unpleasant person, and seem largely unnecessary. It's vaguely interesting to follow the magical maneuverings from both sides, but there are more of these scenes than strictly necessary for that purpose, and the sheer unmitigated evil of Lackey's evil characters is a bit hard to take. Also, he somehow has vast resources of staff and assistants, and much suspension of disbelief is required to believe that anyone would continue working for this person. It's one thing to imagine people being drawn to a charismatic Hitler type; it's quite another when the boss is a brooding, imperious asshole who roams the hallways and tortures random people to death whenever he's bored. Fear and magic only go so far in maintaining a large following when you do that, and he generates dead bodies at a remarkable rate.

The best characters in this series continue to be Nyara, Need, and the gryphons. I'd rather read a book just about them. Need does use a bit too much of Lackey's tough love technique (another recurring theme of this larger series), but from Need that's wholly believable; her gruff and dubious empathy is in line with her character and history and fits a talking sword extremely well. But they, despite having a bit of their own training montage, are a side story here. The climax of the story is moderately satisfying, but the book takes far too long to get to it.

I remember liking this series when I first read it, and I still like some aspects of Lackey's world-building and a few of the characters, but it's much weaker than I had remembered. I can't really recommend it.

Followed by Winds of Fury.

Rating: 5 out of 10

Dirk Eddelbuettel: RcppAnnoy 0.0.8

1 October, 2016 - 23:47

A new version, 0.0.8, of RcppAnnoy, our Rcpp-based R integration of the nifty Annoy library by Erik, is now on CRAN. Annoy is a small, fast, and lightweight C++ template header library for approximate nearest neighbours.

This release pulls in a few suggested changes which had piled up since the last release.

Changes in this version are summarized here:

Changes in version 0.0.8 (2016-10-01)
  • New functions getNNsByItemList and getNNsByVectorList, by Michael Phan-Ba in #12

  • Added destructor (PR #14 by Michael Phan-Ba)

  • Extended templatization (PR #11 by Dan Dillon)

  • Switched to for Travis (PR #17)

  • Added test for admissible value to addItem (PR #18 closing issue #13)

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

David Moreno: Thanks Debian

1 October, 2016 - 20:06

I sent this email to debian-private a few days ago, on the 10th anniversary of my Debian account creation:

Date: Fri, 14 Aug 2015 19:37:20 +0200
From: David Moreno 
Subject: Retiring from Debian
User-Agent: Mutt/1.5.23 (2014-03-12)

[-- PGP output follows (current time: Sun 23 Aug 2015 06:18:36 PM CEST) --]
gpg: Signature made Fri 14 Aug 2015 07:37:20 PM CEST using RSA key ID 4DADEC2F
gpg: Good signature from "David Moreno "
gpg:                 aka "David Moreno "
gpg:                 aka "David Moreno (1984-08-08) "
[-- End of PGP output --]

[-- The following data is signed --]


Ten years ago today (2005-08-14) my account was created:

Today, I don't feel like Debian represents me and neither do I represent the
project anymore.

I had tried over the last couple of years to retake my involvement but lack of
motivation and time always got on the way, so the right thing to do for me is
to officially retire and gtfo.

I certainly learned a bunch from dozens of Debian people over these many years,
and I'm nothing but grateful with all of them; I will for sure carry the project
close to my heart — as I carry it with the Debian swirl I still have tattooed
on my back ;)

I have three packages left that have not been updated in forever and you can
consider orphaned now: gcolor2, libperl6-say-perl and libxml-treepp-perl.

With all best wishes,
David Moreno.

[-- End of signed data --]

I received a couple of questions about my decision here. I basically don’t feel like Debian represents my interests and neither do I represent the project – this doesn’t mean I don’t believe in free software, to the contrary. I think some of the best software advancements we’ve made as society are thanks to it. I don’t necessarily believe on how the project has evolved itself, whether that has been the right way, to regain relevancy and dominance, and if it’s remained primarily a way to feed dogmatism versus pragmatism. This is the perfect example of a tragic consequence. I was very happy to learn that the current Debian Conference being held in Germany got the highest attendance ever, hopefully that can be utilized in a significant and useful way.

Regardless, my contributions to Debian were never noteworthy so it’s also not that big of a deal. I just need to close cycles myself and move forward, and the ten year anniversary looked like a significant mark for that.

Poke me in case you wanna discuss some more. I’ll always be happy to. Specially over beer :)


Vincent Sanders: Paul Hollywood and the pistoris stone

1 October, 2016 - 19:39
There has been a great deal of comment among my friends recently about a particularly British cookery program called "The Great British Bake Off". There has been some controversy as the program is moving from the BBC to a commercial broadcaster.

Part of this discussion comes from all the presenters, excepting Paul Hollywood, declining to sign with the new broadcaster and partly because of speculation the BBC might continue with a similar format show with a new name.

Rob Kendrick provided the start to this conversation by passing on a satirical link suggesting Samuel L Jackson might host "cakes on a plane"

This caused a large number of suggestions for alternate names which I will be reporting but Rob Kendrick, Vivek Das Mohapatra, Colin Watson, Jonathan McDowell, Oki Kuma, Dan Alderman, Dagfinn Ilmari Mannsåke, Lesley Mitchell and Daniel Silverstone are the ones to blame.

  • Strictly come baking
  • Stars and their pies
  • Baking with the stars
  • Bake/Off.
  • Blind Cake
  • Cake or no cake?
  • The cake is a lie
  • Bake That.
  • Bake Me On
  • Bake On Me
  • Bakin' Stevens.
  • The Winner Bakes It All
  • Bakerloo
  • Bake Five
  • Every breath you bake
  • Every bread you bake
  • Unbake my heart
  • Knead and let prove
  • Bake me up before you go-go
  • I want to bake free
  • Another bake bites the dust
  • Cinnamon whorl is not enough
  • The pie who loved me
  • The yeast you can do.
  • Total collapse of the tart
  • Bake and deliver
  • You Gotta Bake
  • Bake's Seven
  • Natural Born Bakers
  • Bake It Or Leaven It
  • Driving the last pikelet
  • Pie crust on the dancefloor
  • Tomorrow never pies
  • Murder on the pie crust
  • The pie who came in from the cold.
  • You only bake twice (Everybody has to make one sweet and one savoury dish).

So that is our list, anyone else got better ideas?

Ritesh Raj Sarraf: GNOME Shell Extensions and Chromium

1 October, 2016 - 18:04

Most GNOME users may be using one or more extensions for the GNOME Shell. These extensions extend the shell's functionality, or modify its default behavior, to suit the tastes of the many users who may want more than the default. Having the flexibility to customize the desktop to one's personal needs is a great feature, and the extensions help achieve that.

The GNOME Shell Extensions distribution mechanism is primarily through the web. I think they aspire to achieve something similar to Chrome's Web Store. Until recently, the ways to install those Shell Extensions were: a) through your distribution channel, where your distribution may have packaged some of the extensions, or b) through the Firefox web browser; as far as I'm aware, installation from the web only worked with Firefox.

With the chrome-gnome-shell package, which is now available in Debian, Debian GNOME users should be able to use the Chromium browser for managing their GNOME Shell Extensions.

  1. Install package chrome-gnome-shell
  2. Open Chromium Browser and go to Web Store
  3. Install Chrome Shell Integration extension for Chromium
  4. Point your browser to:


In future releases, the plan is to automate the installation of the browser extension (step 3), when the package is installed. This feature is Chromium specific and will be achieved using a system-wide chromium browser policy, which can be set/overridden by an administrator.



Jonas Meurer: debian lts report 2016 09

1 October, 2016 - 15:34
Debian LTS Report for September 2016

September 2016 was my first month as a paid Debian LTS Team member. After doing two small uploads to wheezy-security in August and getting to know the LTS Team workflow, this month I was allocated 9 hours by Freexian. I spent all 9 hours working on security updates to Debian Wheezy.

In particular, I worked on the following issues:

  • DLA 612-1: libtomcrypt PKCS#1 RSA signature verification
  • DLA 617-1: libarchive out of bounds and denial of service
  • DLA 625-1: libcurl escape/unescape integer overflows
  • DLA 627-1: pdns qname's length>255b, missing zone size limits
  • worked on mat issue with embedded images in PDFs (#826101)

For reference, these were the issues I worked on in August:

  • DLA 584-1: libsys-syslog-perl opportunistic loading of optional modules
  • DLA 589-1: mupdf out of bounds write access to memory locations

Kees Cook: security things in Linux v4.6

1 October, 2016 - 14:45

The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.

seccomp support for parisc

Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working.

x86 32-bit mmap ASLR vs unlimited stack fixed

Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. “ulimit -s unlimited”) would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, if a system sees collisions between unlimited stack and mmap ASLR, it can just adjust the 32-bit ASLR entropy instead.

x86 execute-only memory

Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for “execute only” memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I’m looking forward to either emulated QEmu support or access to one of these fancy CPUs.

CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86

Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel.

On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar’s suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we’ll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it’s reasonable to continue to leave an “out” for developers that find themselves tripping over it.

arm64 KASLR text base offset

Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the “/chosen/kaslr-seed” property) or from UEFI (via EFI_RNG_PROTOCOL), so if you’re building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.

zero-poison after free

Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity’s PAX_MEMORY_SANITIZE feature. This feature means that memory is cleared at free, wiping any sensitive data so it doesn’t have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called “sanity checking”), which can catch another small subset of flaws.
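The free/alloc poisoning cycle can be sketched with a toy allocator (purely illustrative; the kernel does this inside the page and slab allocators, not in Python):

```python
class ToyAllocator:
    """Toy model of zero-poison-on-free plus sanity-check-on-alloc."""

    def __init__(self, nbytes):
        self.mem = bytearray(nbytes)               # all zero = "poisoned"

    def free(self, off, length):
        self.mem[off:off + length] = bytes(length)  # wipe (zero-poison) on free

    def alloc(self, off, length):
        # sanity check: nothing may have written here since free()
        if any(self.mem[off:off + length]):
            raise RuntimeError("use-after-free write detected")
        return memoryview(self.mem)[off:off + length]

a = ToyAllocator(16)
a.free(0, 16)
buf = a.alloc(0, 8)          # passes: memory is still poisoned
a.free(0, 8)
a.mem[2] = 0x41              # simulate a stray use-after-free write
try:
    a.alloc(0, 8)
except RuntimeError as e:
    print(e)                 # → use-after-free write detected
```

Wiping on free removes leaked data; checking on allocation catches writes that happened in between.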

To understand the pieces of this, it’s worth describing that the kernel’s higher level allocator, the “page allocator” (e.g. __get_free_pages()) is used by the finer-grained “slab allocator” (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator).

Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn’t worth the performance trade-off.

Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you’re feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well.

To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:

CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=y
CONFIG_PAGE_POISONING_ZERO=y
CONFIG_SLUB_DEBUG=y

and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:

page_poison=on slub_debug=P

To add sanity-checking, build with CONFIG_PAGE_POISONING_NO_SANITY=n instead, and add “F” to slub_debug, as “slub_debug=PF”.

read-only after init

I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity’s KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above).

Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared “const” in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked “__init”) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc.

As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it’s trivial to declare a new data section (“.data..ro_after_init“) and add it to the existing read-only data section (“.rodata“). Kernel structures can be annotated with the new section (via the “__ro_after_init” macro), and they’ll become read-only once boot has finished.
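A userspace analogue of the pattern (this is just a sketch of the same idea using mprotect(), not the kernel mechanism; the structure and function names are invented) is to fill a structure during initialization and then drop write permission on its page:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Userspace sketch of the __ro_after_init idea: a function-pointer
 * table is written exactly once during "init", then its page is made
 * read-only for the rest of the process lifetime. */
struct ops {
    int (*handler)(int);
};

static int double_it(int x) { return x * 2; }

static struct ops *init_ops(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    void *mem = NULL;

    if (posix_memalign(&mem, pagesz, pagesz) != 0)
        return NULL;
    struct ops *ops = mem;
    ops->handler = double_it;                   /* the one-time init write */
    if (mprotect(mem, pagesz, PROT_READ) != 0)  /* read-only from here on */
        return NULL;
    return ops;     /* any later write to *ops now faults */
}
```

After init_ops() returns, an attacker-controlled write that tries to redirect ops->handler would fault instead of silently succeeding.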

The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel’s attack surface can be made read-only for the majority of its lifetime.

As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).

That’s it for v4.6, next up will be v4.7!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

