Planet Debian

Planet Debian - http://planet.debian.org/
Updated: 1 week 5 days ago

Richard Hartmann: Accuracy

24 February, 2015 - 06:21

Even if you disregard how amazing this is, this quote blows my proverbial mind:

The test rig is carefully designed to remove any possible sources of error. Even the lapping of waves in the Gulf of Mexico 25 miles away every three to four seconds would have showed up on the sensors, so the apparatus was floated pneumatically to avoid any influence. The apparatus is completely sealed, with power and signals going through liquid metal contacts to prevent any force being transmitted through cables.

Andrew Cater: Cubietruck now running Debian :)

24 February, 2015 - 06:17
Following a debootstrap build of sid on one machine to complete the cross-compilation of mainline u-boot, I managed to get vanilla Debian installed on my Cubietruck.

A USB-serial cable is a must for the install and for any subsequent major reconfiguration, as the stock Debian installer does not have drivers for the video / audio. Various Cubietruck derivative distributions do - but the Sunxi kernel appears flaky.

All was fine for a few days, then I decided to try to configure the WiFi by hand, editing /etc/network/interfaces and the wpasupplicant files. I managed to break network connectivity by doing things in a hurry and typing blind. I'd already put the board into its closed metal case, so I was rather stuck.
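For reference, the sort of stanza I was trying to get right is short; a sketch with placeholder SSID and passphrase:

# /etc/network/interfaces: wireless interface handed off to wpasupplicant
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-ssid MyNetwork
    wpa-psk mysecretpassphrase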

A friend carefully took the case apart by easing off the metal cover plates, removed the two screws holding the whole thing together, and precision-drilled the metal cover plates on one side so that four screws can be undone and the entire inner part of the case can slide out as one while the other metal cover plate remains captive. He will follow this procedure with his own two later.

Very pleased with the way it's turned out. The WiFi driver has non-free firmware but I now have a tiny, silent machine, drawing about 3W tops and both interfaces are now working.


Simon Josefsson: Laptop Buying Advice?

24 February, 2015 - 05:49

My current Lenovo X201 laptop has been with me for over four years. I’ve been looking at new laptop models over the years thinking that I should upgrade. Every time, after checking performance numbers, I’ve always reached the conclusion that it is not worth it. The most performant Intel Broadwell processor is the Core i7 5600U, and it offers only about 1.5 times the performance of my current Intel Core i7 620M. Meanwhile disk performance has increased more rapidly, but changing the disk on a laptop is usually simple. Two years ago I upgraded to the Samsung 840 Pro 256GB disk, and this year I swapped that for the Samsung 850 Pro 1TB, and both have been good investments.

Recently my laptop usage patterns have changed slightly, and instead of carrying one laptop around, I have decided to aim for multiple semi-permanent laptops at different locations, coupled with a mobile device that right now is just my phone. The X201 will remain one of my normal work machines.

What remains is to decide on a new laptop, and there the fun begins. My requirements are relatively easy to summarize. The laptop will run a GNU/Linux distribution like Debian, so it has to work well with that. I’ve decided that my preferred CPU is the Intel Core i7 5600U. The screen size, keyboard and mouse are mostly irrelevant, as I never work for longer periods of time directly on the laptop. Even though the laptop will be semi-permanent, I know there will be times when I take it with me. Thus it has to be as lightweight as possible. If there were significant advantages in going with a heavier laptop, I might reconsider, but as far as I can see the only advantages of a heavier machine are a bigger/better screen and keyboard (all of which I find irrelevant) and maximum memory capacity (which I would find useful, but not a strong enough argument for me). The only sub-1.5kg laptops with the 5600U CPU on the market right now appear to be:

  • Lenovo X250: 1.42kg, 12.5″, 1366×768
  • Lenovo X1 Carbon (3rd gen): 1.44kg, 14″, 2560×1440
  • Dell Latitude E7250: 1.25kg, 12.5″, 1366×768
  • Dell XPS 13: 1.26kg, 13.3″, 3200×1800
  • HP EliteBook Folio 1040 G2: 1.49kg, 14″, 1920×1080
  • HP EliteBook Revolve 810 G3: 1.4kg, 11.6″, 1366×768

I find it interesting that Lenovo, Dell and HP each have two models that meet my 5600U/sub-1.5kg criteria. Regarding screens, models with other resolutions possibly exist. The XPS 13, HP 810 and X1 models I looked at had touch screens; the others did not. As the screen is not important to me, I didn’t evaluate this further.

I think all of them would suffice, and there are only subtle differences. All except the XPS 13 can be connected to peripherals using one cable, which I find convenient to avoid a cable mess. All of them have DisplayPort, but HP uses full-size DisplayPort and the rest use miniDP. The E7250 and X1 have HDMI output. The X250 boasts a 15-pin VGA connector; none of the others have one, and I’m not sure if that is an advantage or a disadvantage these days. All of them have 2 USB 3.0 ports except the E7250, which has 3 ports. The HP 1040, XPS 13 and X1 Carbon do not have RJ45 Ethernet connectors, which is a significant disadvantage to me. Ironically, only the smallest one of these, the HP 810, can be memory-upgraded to 12GB, with the others being stuck at 8GB. The HP models and the E7250 support NFC, although Debian support is not certain. The E7250 and X250 have a smartcard reader, and again, Debian support is not certain. The X1, X250 and 810 have a 3G/4G card.

Right now, I’m leaning towards rejecting the XPS 13, X1 and HP 1040 because of the lack of an RJ45 Ethernet port. That leaves me with the E7250, X250 and the 810. Of these, the E7250 seems like the winner: lightest, one extra USB port, HDMI, NFC, smartcard reader. However, it has no 3G/4G card and no memory upgrade options. Looking for compatibility problems, it seems you have to be careful not to end up with the “Dell Wireless” card, and the E7250 appears to come in docking and non-docking variants, but I’m not sure what that means.

Are there other models I should consider? Other thoughts?

Enrico Zini: akonadi-client-example

23 February, 2015 - 21:44
Akonadi client example

After many failed attempts I have managed to build a C++ akonadi client. It has felt like one of the most frustrating programming experiences of my whole life, so I'm sharing the results hoping to spare others all the suffering.

First things first: the akonadi client libraries are not in libakonadi-dev but in kdepimlibs5-dev, even though kdepimlibs5-dev does not show up in apt-cache search akonadi.
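In other words, something like this is what is actually needed (a sketch; note that the first command really does come back empty):

$ apt-cache search akonadi | grep kdepimlibs5-dev    # no output
$ sudo apt-get install kdepimlibs5-dev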

Then, kdepimlibs is built with Qt4. If your application uses Qt5 (mine did), you need to port it back to Qt4 if you want to talk to Akonadi.

Then, kdepimlibs does not seem to support qmake and does not ship pkg-config .pc files; if you want to use kdepimlibs, your build system needs to be cmake. I ported my code from qmake to cmake, and now qtcreator wants me to run cmake by hand every time I change the CMakeLists.txt file, and it has stopped allowing me to add, rename or delete sources.
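For orientation only (and with the caveat below about internet snippets firmly in mind), a minimal KDE4-era CMakeLists.txt for an Akonadi client would look roughly like this sketch; it assumes the KdepimLibs CMake config shipped with kdepimlibs, and the names may need adjusting:

cmake_minimum_required(VERSION 2.8)
project(akonadi-client)

# Both KDE4 and kdepimlibs ship CMake configs needed for an Akonadi client
find_package(KDE4 REQUIRED)
find_package(KdepimLibs REQUIRED)
include_directories(${KDE4_INCLUDES} ${KDEPIMLIBS_INCLUDE_DIRS})

# KDE4's wrapper around add_executable
kde4_add_executable(client main.cpp)
target_link_libraries(client ${KDE4_KDECORE_LIBS} ${KDEPIMLIBS_AKONADI_LIBS})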

Finally, most of the code / build system snippets found on the internet seem flawed in one way or another, because the build toolchain of Qt/KDE applications has undergone several redesigns over time, and the network is littered with examples from different eras. The way to obtain template code to start a Qt/KDE project is to use kapptemplate. I have found no getting-started tutorial on the internet that said "do not just copy the snippets from here, run kapptemplate instead so you get them up to date".

kapptemplate supports building an "Akonadi Resource" and an "Akonadi Serializer", but it does not support generating template code for an akonadi client. That left me with the feeling that I was dealing with some software that wants to be developed but does not want to be used.

Anyway, an example of how to interrogate Akonadi now exists on the internet. I hope that all the tears of blood I cried this morning have not been cried in vain.

Enrico Zini: akonadi-build-hth

23 February, 2015 - 17:36
The wonders of missing documentation

Update: I have managed to build an example Akonadi client application.

I'm new here, and I want to make a simple C++ GUI app that pops up a QCalendarWidget showing the days on which my local Akonadi has appointments.

I open qtcreator, create a new app, hack away for a while, then of course I get undefined references for all Akonadi symbols, since I didn't tell the build system that I'm building with akonadi. Ok.

How do I tell the build system that I'm building with akonadi? After 20 minutes of frantic looking around the internet, I still have no idea.

There is a package called libakonadi-dev which does not seem to have anything to do with this. That page mentions everything about making applications with Akonadi except how to build them.

There is a package called kdepimlibs5-dev which looks promising: it has no .a files, but it does have headers and cmake files. However, qtcreator only integrates with qmake, and I would really like the handholding of an IDE at this stage.

I put something together naively, doing just what looked right, and I managed to get an application that segfaults before main() is even called. First the C++ source (wtf.cpp), then the qmake project file (wtf.pro):

/*
 * Copyright © 2015 Enrico Zini <enrico@enricozini.org>
 *
 * This work is free. You can redistribute it and/or modify it under the
 * terms of the Do What The Fuck You Want To Public License, Version 2,
 * as published by Sam Hocevar. See the COPYING file for more details.
 */
#include <QDebug>

int main(int argc, char *argv[])
{
    qDebug() << "BEGIN";
    return 0;
}
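
# wtf.pro: the qmake project file that accompanies the wtf.cpp above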
QT       += core gui widgets
CONFIG += c++11

TARGET = wtf
TEMPLATE = app

LIBS += -lkdecore -lakonadi-kde

SOURCES += wtf.cpp

I didn't achieve what I wanted, but I feel like I achieved something magical and beautiful after all.

I shall now perform some haruspicy on those obscure cmake files to see if I can figure something out. But seriously, people?

Dirk Eddelbuettel: drat Tutorial: Publishing a package

23 February, 2015 - 09:04
Introduction

The drat package was released earlier this month, and described in a first blog post. I received some helpful feedback about what works and what doesn't. For example, Jenny Bryan pointed out that I was not making a clear enough distinction between the role of using drat to publish code, and using drat to receive/install code. A very fair point, and somewhat tricky, as R aims to blur the line between being a user and a developer of statistical analyses, and hence packages. Many of us are both. But the main point is well taken, and this note aims to clarify the issue a little by focusing on the former.

Another point made by Jenny concerns the double use of "repository". And indeed, I conflated repository (in the sense of a GitHub code repository) with repository for a package store used by a package manager. The former, a GitHub repository, is something we use to implement a personal drat with: a GitHub repository happens to be uniquely identifiable just by its account name, and given an (optional) gh-pages branch also offers a stable and performant webserver we use to deliver packages for R. A (personal) package repository, on the other hand, is something we implement somewhere---possibly via drat, which supports local directories, network shares, as well as anywhere web-accessible, e.g. via a GitHub repository. It is a little confusing, but I will aim to make the distinction clearer.

Just once: Setting up a drat repository

So let us for the remainder of this post assume the role of a code publisher. Assume you have a package you would like to make available, which may not be on CRAN and for which you would like to make installation by others easier via drat. The example below will use an interim version of drat which I pushed out yesterday (after fixing a bug noticed when pushing the very new RcppAPT package).

For the following, all we assume (apart from having a package to publish) is that you have a drat directory setup within your git / GitHub repository. This is not an onerous restriction. First off, you don't have to use git or GitHub to publish via drat: local file stores and other web servers work just as well (and are documented). GitHub simply makes it easiest. Second, bootstrapping one is trivial: just fork my drat GitHub repository and then create a local clone of the fork.

There is one additional requirement: you need a gh-pages branch. Using the fork-and-clone approach ensures this. Otherwise, if you know your way around git you already know how to create a gh-pages branch.
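If you do want to create one by hand, an empty gh-pages branch can be had with something like this sketch:

$ git checkout --orphan gh-pages
$ git rm -rf .                      # clears the index; the files remain on master
$ git commit --allow-empty -m "empty gh-pages branch"
$ git push -u origin gh-pages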

Enough of the prerequisites, and on towards the real fun. Let's ensure we are in the gh-pages branch:

edd@max:~/git/drat(master)$ git checkout gh-pages
Switched to branch 'gh-pages'
Your branch is up-to-date with 'origin/gh-pages'.
edd@max:~/git/drat(gh-pages)$ 
Publish: Run one drat command to insert a package

Now, let us assume you have a package to publish. In my case this was version 0.0.1.2 of drat itself, as it contains a fix for the very command I am showing here. So if you want to run this, ensure you have this version of drat, as the CRAN version is currently behind at release 0.0.1 (though I plan to correct that in the next few days).

To publish an R package into a code repository created via drat running on a drat GitHub repository, just run insertPackage(packagefile), which we show here with the optional commit=TRUE. The path to the package can be absolute or relative; the easiest is often to go up one directory from the sources to where R CMD build ... has created the package file.

edd@max:~/git$ Rscript -e 'library(drat); insertPackage("drat_0.0.1.2.tar.gz", commit=TRUE)'
[gh-pages 0d2093a] adding drat_0.0.1.2.tar.gz to drat
 3 files changed, 2 insertions(+), 2 deletions(-)
 create mode 100644 src/contrib/drat_0.0.1.2.tar.gz
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 7.37 KiB | 0 bytes/s, done.
Total 7 (delta 1), reused 0 (delta 0)
To git@github.com:eddelbuettel/drat.git
   206d2fa..0d2093a  gh-pages -> gh-pages
edd@max:~/git$ 

You can equally well run this as insertPackage("drat_0.0.1.2.tar.gz"), then inspect the repo and only then run the git commands add, commit and push. Also note that future versions of drat will most likely support git operations directly by relying on the very promising git2r package. But this just affects package internals; the user-facing call of e.g. insertPackage("drat_0.0.1.2.tar.gz", commit=TRUE) will remain unchanged.

And in a nutshell that really is all there is to it. With the newly drat-ed package pushed to your GitHub repository (with a single function call), it is available via the automatically-provided gh-pages webserver to anyone in the world. All they need to do is point R's package management code (which is built into R itself and used for e.g. CRAN and BioConductor package repositories) to the new repo, and that too is just a single drat command. We showed this in the first blog post and may expand on it again in a follow-up.
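In sketch form, the consumer side (shown with this repository's GitHub account name) is just:

R> drat::addRepo("eddelbuettel")    # registers the gh-pages-served drat repo
R> install.packages("drat")         # now resolves against that repo as well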

So in summary, that really is all there is to it. After a one-time setup / ensuring you are on the gh-pages branch, all it takes is a single function call from the drat package to publish your package to your drat GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Rogério Brito: User-Agent strings and privacy

23 February, 2015 - 06:54

I just had my hands on some mobile devices (a Samsung Galaxy Tab S 8.4", an Apple iPad mini 3, and my no-name tablet that runs Android).

I got curious to see how the different browsers identify themselves to the world via their User-Agent strings, and I must say that each browser's string reveals a lot about both the browser makers and their philosophies regarding user privacy.
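(If you want to collect these yourself, one quick way is to point each browser at a throwaway listener on your machine and read the request headers it sends; a sketch with traditional netcat, where 192.168.0.10 stands in for your machine's address:)

$ nc -l -p 8080        # OpenBSD netcat takes "nc -l 8080" instead
GET / HTTP/1.1
Host: 192.168.0.10:8080
User-Agent: Mozilla/5.0 (Android; Tablet; rv:35.0.1) Gecko/35.0.1 Firefox/35.0.1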

Here is a simple table that I compiled with the information that I collected (sorry if it gets too wide):

  • Samsung Galaxy Tab S, Firefox 35.0: Mozilla/5.0 (Android; Tablet; rv:35.0) Gecko/35.0 Firefox/35.0
  • Samsung Galaxy Tab S, Firefox 35.0.1: Mozilla/5.0 (Android; Tablet; rv:35.0.1) Gecko/35.0.1 Firefox/35.0.1
  • Samsung Galaxy Tab S, Android 4.4.2 stock browser: Mozilla/5.0 (Linux; Android 4.4.2; en-gb; SAMSUNG SM-T700 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.5 Chrome/28.0.1500.94 Safari/537.36
  • Samsung Galaxy Tab S, updated Chrome: Mozilla/5.0 (Linux; Android 4.4.2; SM-T700 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.109 Safari/537.36
  • Vanilla tablet, Android 4.1.1 stock browser: Mozilla/5.0 (Linux; U; Android 4.1.1; en-us; TB1010 Build/JRO03H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Safari/534.30
  • Vanilla tablet, Firefox 35.0.1: Mozilla/5.0 (Android; Tablet; rv:35.0.1) Gecko/35.0.1 Firefox/35.0.1
  • iPad, Safari from iOS 8.1.3: Mozilla/5.0 (iPad; CPU OS 8_1_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B466 Safari/600.1.4
  • Notebook, Debian's Iceweasel 35.0.1: Mozilla/5.0 (X11; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0 Iceweasel/35.0.1

So, briefly looking at the table above, you can tell that the stock Android browser reveals quite a bit of information: the model of the device (e.g., SAMSUNG SM-T700 or TB1010) and even the build number (e.g., Build/KOT49H or Build/JRO03H)! This is super handy for malicious websites and I would say that it leaks a lot of possibly undesired information.

The iPad is similar, with Safari revealing the version of the iOS that it is running. It doesn't reveal, though, the language that the user is using via the UA string (it probably does via other HTTP fields).

Chrome is similar to the stock Android browser here, but, at least, it doesn't reveal the language of the user. It does reveal the version of Android, including the patch-level (that's a bit too much, IMVHO).

I would say that, respecting users' privacy, the winner among the browsers I tested is Firefox: it conveys just the bare minimum, not differentiating between a high-end tablet (Samsung's Galaxy Tab S with 8 cores) and a vanilla tablet (with 2 cores). Like Chrome, Firefox still reveals a bit too much in the form of the patch-level. It should be sufficient to say that it is version 35.0 even if the user has 35.0.1 installed.

A bonus point for Firefox is that it is also available on F-Droid, in two versions: as Firefox itself and as Fennec.

Hideki Yamane: New laptop ThinkPad E450

22 February, 2015 - 12:55
I've got a new laptop, a Lenovo ThinkPad E450.

  • CPU: Intel Core i5 (upgraded)
  • Mem: 8GB (upgraded, one empty slot, can go up to 16GB)
  • HDD: 500GB
  • LCD: FHD (1920x1080, upgraded)
  • wifi: 802.11ac (upgraded, Intel 7265 BT ACBGN)
Nice, it was less than $600; in fact, less than $500.

Well, you probably know about the Superfish issue with Lenovo laptops, but it didn't affect me because the first thing I did when I got it was replace the HDD with an empty one and do a fresh install of Debian Jessie (of course).

Francesca Ciceri: Dudes in dresses, girls in trousers

22 February, 2015 - 01:03

"As long as people still think of people like me as "a dude in a dress" there is a lot work to do to fight transphobia and gain tolerance and acceptance."

This line in Rhonda's most recent blogpost broke my heart a little, and sparked an interesting conversation with her about the (perceived?) value of clothes, respect and identity.

So, guess what? Here's a pic of a "girl in trousers". Just because.

(Sorry for the quality: couldn't find my camera and had to use a phone. Also, I don't own a binder, so I used a very light binding)

Dominique Dumont: Performance improvement for ‘cme check dpkg’

21 February, 2015 - 22:09

Hello

Thanks to Devel::NYTProf, I realized that Module::CoreList was used in a suboptimal way (to say the least) in Config::Model::Dpkg::Dependency when checking dependencies between Perl packages. (Note that only Perl packages with many dependencies were affected by this performance problem.)
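(For reference, getting such a profile out of a command like cme boils down to something like the following sketch, using the nytprofhtml tool shipped with Devel::NYTProf:)

$ perl -d:NYTProf $(which cme) check dpkg   # writes profile data to ./nytprof.out
$ nytprofhtml                               # renders an HTML report into ./nytprof/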

After the rework, performance is much better. Here's an example comparing check times before and after the modification of libconfig-model-dpkg-perl.

With libconfig-model-dpkg-perl 2.059:
$ time cme check dpkg
Using Dpkg
loading data
Reading package lists... Done
Building dependency tree
Reading state information... Done
checking data
check done

real 0m10.235s
user 0m10.136s
sys 0m0.088s

With libconfig-model-dpkg-perl 2.060:
$ time cme check dpkg
Using Dpkg
loading data
Reading package lists... Done
Building dependency tree
Reading state information... Done
checking data
check done

real 0m1.565s
user 0m1.468s
sys 0m0.092s

All in all, an 8x performance improvement on the dependency check.

Note that, due to the freeze, the new version of libconfig-model-dpkg-perl is available only in experimental.

All the best


Tagged: Config::Model, debian, dpkg, package

Dirk Eddelbuettel: RcppAPT 0.0.1

21 February, 2015 - 21:33

Over the last few days I put together a new package RcppAPT which interfaces the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their GUI-based brethren.

The package currently implements two functions which permit searching for package information via a regular expression, as well as a (vectorised) check by package name. More to come, and contributions would be very welcome.

A few examples follow, just to illustrate.

R> hasPackages(c("r-cran-rcpp", "r-cran-rcppapt"))
   r-cran-rcpp r-cran-rcppapt 
          TRUE          FALSE 

This shows that Rcpp is (of course) available as a binary, but this (very new) package is (unsurprisingly) not yet available pre-built.

We can search by regular expression:

R> library(RcppAPT)
R> getPackages("^r-base-c.")
          Package      Installed       Section
1 r-base-core-dbg 3.1.2-1utopic0 universe/math
2 r-base-core-dbg           <NA> universe/math
3     r-base-core 3.1.2-1utopic0 universe/math
4     r-base-core           <NA> universe/math
R> 

With the (default) expression catching everything, we see a lot of packages:

R> dim(getPackages())
[1] 104431      3
R> 

A bit more information is on the package page here, as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Vasudev Kamath: Running Plan9 using 9vx - using vx32 sandboxing library

21 February, 2015 - 14:02

Nowadays I'm more and more attracted to Plan9, an operating system meant to be the successor of UNIX, created by the same people who created the original UNIX. I'm always baffled by the simplicity of Plan9. Sadly, Plan9 never took off, for whatever reasons.

I've been trying to run Plan9 for a while. I ran Plan9 on a Raspberry Pi Model B using 9pi, but I couldn't experiment with it much due to some restrictions in my home setup.

I installed the original Plan9 4th Edition from Bell Labs (now part of Alcatel-Lucent); I will write about that in a different post. But running a virtual machine on my system is again a PITA, as the system is already old (three and a half years). Then I came across 9vx, a port of Plan9 to FreeBSD, Linux and Mac OS X by Russ Cox.

I downloaded the original 9vx version 0.9.12 from Russ's page linked above. The archive contains a Plan9 rootfs along with precompiled 9vx binaries for Linux, FreeBSD and Mac OS X. I ran the Linux binary, but it crashed:

./9vx.Linux -u glenda

I saw an illegal instruction error in dmesg and didn't bother to investigate further.

A bit of googling showed me Arch Linux's wiki page on 9vx. I got errors trying to compile the original vx32 from rsc's repository, but later saw that the AUR 9vx package is built from a different repository, forked from rsc's, found here.

I cloned the repository locally and compiled it. I don't really remember whether I had to install any additional packages, but if you get an error you will know what else is required. After compilation, the 9vx binary is found at src/9vx/9vx. The build boiled down to something like the following sketch (the repository URL, not spelled out here, is the one linked from the AUR package):
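$ git clone <forked-vx32-repository> vx32   # URL as linked from the AUR 9vx package
$ cd vx32/src
$ make                                      # produces the binary at src/9vx/9vx

I then used this newly compiled 9vx to run the rootfs I had downloaded from Russ's website: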

9vx -u glenda -r /path/to/extracted/9vx-0.9.12/

This launches Plan9 and lets you work inside it. The good part is that it's not resource hungry, and it still looks like you have a VM running Plan9.

But there seems to be a better way to do this directly from the Plan9 ISO from Bell Labs; it can be found on the 9fans list. Now I'm going to try that out too :-). In my next post I will share my experience of using Plan9 on Qemu.

Richard Hartmann: Release Critical Bug report for Week 08

21 February, 2015 - 02:32

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1069 (Including 188 bugs affecting key packages)
    • Affecting Jessie: 147 (key packages: 114) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 96 (key packages: 81) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 23 bugs are tagged 'patch'. (key packages: 19) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 2 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 71 bugs are neither tagged patch, nor marked done. (key packages: 62) Help make a first step towards resolution!
      • Affecting Jessie only: 51 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 34 bugs are in packages that are unblocked by the release team. (key packages: 22)
        • 17 bugs are in packages that are not unblocked. (key packages: 11)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)   274 (189+85)
49    256 (180+76)   360 (216+155)   226 (147+79)
50    204 (148+56)   339 (195+144)   ???
51    178 (124+54)   323 (190+133)   189 (134+55)
52    115 (78+37)    289 (190+99)    147 (112+35)
1     93 (60+33)     287 (171+116)   140 (104+36)
2     82 (46+36)     271 (162+109)   157 (124+33)
3     25 (15+10)     249 (165+84)    172 (128+44)
4     14 (8+6)       244 (176+68)    187 (132+55)
5     2 (0+2)        224 (132+92)    175 (124+51)
6     release!       212 (129+83)    161 (109+52)
7     release+1      194 (128+66)    147 (106+41)
8     release+2      206 (144+62)    147 (96+51)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats, thanks to azhag.

Rhonda D'Vine: Queer-Positive Songs

20 February, 2015 - 23:05

Just recently I stumbled upon one of these songs again and thought to myself: are there more out there? By these songs I mean songs that could, from their lyrics, be considered queer-positive: lyrics that contain parts that speak about queer topics. To give you an idea of what I mean, here are three songs as examples:

  • Saft by Die Fantastischen Vier: The excerpt from the lyrics I am referring to is: "doch im Grunde sucht jeder Mann eine Frau // Wobei so mancher Mann besser mit Männern kann // und so manche Frau lässt lieber Frauen ran" ("but basically every man looks for a woman // though some men prefer men // and some women prefer women").
  • Liebe schmeckt gut by Grossstadtgeflüster: Here the lyrics go like "Manche lieben sich selber // manche lieben unerkannt // manche drei oder fünf" ("some love themselves // some love in secrecy // some three or five"). For a stereo sound version of the song watch this video instead; but I love the video. :)
  • Mein schönstes Kleid by Früchte des Zorns: This song is so much me. It starts off with "Eines Tages werd ich aus dem Haus geh'n und ich trag mein schönstes Kleid" ("One day I'll go out and I'll wear my most beautiful dress", sung by a male voice). I was made aware of it after the Poetry Night at debconf12 in Nicaragua. As long as people still think of people like me as "a dude in a dress" there is a lot of work to do to fight transphobia and gain tolerance and acceptance.

Do you have further examples for me? I know that I already mentioned another one in my blog entry about Garbage, for a start. I am aware that there probably are dedicated bands that, out of their own history, do a lot of songs in that direction, but I also want to hear about songs in which it is only mentioned in a side note and not made the central topic of the whole song, making it an absolutely normal random by-note.

Like always, enjoy—and I'm looking forward to your suggestions!


David Bremner: Dear Lenovo, it's not me, it's you.

20 February, 2015 - 21:00

I've been a mostly happy Thinkpad owner for almost 15 years. My first Thinkpad was a 570, followed by an X40, an X61s, and an X220. There might have been one more in there, my archives only go back a decade. Although it's lately gotten harder to buy Thinkpads at UNB as Dell gets better contracts with our purchasing people, I've persevered, mainly because I'm used to the Trackpoint, and I like the availability of hardware service manuals. Overall I've been pleased with the engineering of the X series.

Over the last few days I learned about the installation of the superfish malware on new Lenovo systems, and Lenovo's completely inadequate response to the revelation. I don't use Windows, so this malware would not have directly affected me (unless I had the misfortune to use this system to download installation media for some GNU/Linux distribution). Nonetheless, how can I trust the firmware installed by a company that seems to value its users' security and privacy so little?

Unless Lenovo can show some sign of understanding the gravity of this mistake, and undertake not to repeat it, then I'm afraid you will be joining Sony on my list of vendors I used to consider buying from. Sure, it's only a gross income loss of $500 a year or so, if you assume I'm alone in this reaction. I don't think I'm alone in being disgusted and angered by this incident.

Wouter Verhelst: LOADays 2015

20 February, 2015 - 17:47

Looks like I'll be speaking at LOADays again. This time around, at the suggestion of one of the organisers, I'll be speaking about the Belgian electronic ID card, for which I'm currently employed as a contractor to help maintain the end-user software. While this hasn't been officially confirmed yet, I've been hearing some positive signals from some of the organisers.

So, under the assumption that my talk will be accepted, I've started working on my slides. The intent is to explain how the eID middleware works (in general terms), how the Linux support is supposed to work, and what to do when things fail.

If my talk doesn't get rejected at the final hour, I will continue my uninterrupted "speaker at loadays" streak, which has run since loadays' first edition...

MJ Ray: Rebooting democracy? The case for a citizens constitutional convention.

20 February, 2015 - 11:03

I’m getting increasingly cynical about our largest organisations and their voting-centred approach to democracy. You vote once, for people rather than programmes; then you’re meant to leave them to it for up to three years until they stand for re-election, and in most systems their actions aren’t compared in any way with what they said they’d do.

I have this concern about Cooperatives UK too, but then its CEO publishes http://www.uk.coop/blog/ed-mayo/2015-02-18/rebooting-democracy-case-citizens-constitutional-convention and I think there may be hope for it yet. Well worth a read if you want to organise better groups.

Matthew Garrett: It has been 0 days since the last significant security failure. It always will be.

20 February, 2015 - 02:43
So blah blah Superfish blah blah trivial MITM everything's broken.

Lenovo deserve criticism. The level of incompetence involved here is so staggering that it wouldn't be a gross injustice for the company to go under as a result[1]. But let's not pretend that this is some sort of isolated incident. As an industry, we don't care about user security. We will gladly ship products with known security failings and no plans to update them. We will produce devices that are locked down such that it's impossible for anybody else to fix our failures. We will hide behind vague denials, we will obfuscate the impact of flaws and we will deflect criticisms with announcements of new and shinier products that will make everything better.

It'd be wonderful to say that this is limited to the proprietary software industry. I would love to be able to argue that we respect users more in the free software world. But there are too many cases that demonstrate otherwise, even where we should have the opportunity to prove the benefits of open development. An obvious example is the smartphone market. Hardware vendors will frequently fail to provide timely security updates, and will cease to update devices entirely after a very short period of time. Fortunately there's a huge community of people willing to produce updated firmware. Your phone's manufacturer is never going to fix the latest OpenSSL flaw? As long as your phone can be unlocked, there's a reasonable chance that there's an updated version on the internet.

But this is let down by a kind of callous disregard for any deeper level of security. Almost every single third-party Android image is either unsigned or signed with the "test keys", a set of keys distributed with the Android source code. These keys are publicly available, and as such anybody can sign anything with them. If you configure your phone to allow you to install these images, anybody with physical access to your phone can replace your operating system. You've gained some level of security at the application level by giving up any real ability to trust your operating system.

This is symptomatic of our entire ecosystem. We're happy to tell people to disable security features in order to install third-party software. We're happy to tell people to download and build source code without providing any meaningful way to verify that it hasn't been tampered with. Install methods for popular utilities often still start with "curl | sudo bash". This isn't good enough.

We can laugh at proprietary vendors engaging in dreadful security practices. We can feel smug about giving users the tools to choose their own level of security. But until we're actually making it straightforward for users to choose freedom without giving up security, we're not providing something meaningfully better - we're just providing the same shit sandwich on different bread.

[1] I don't see any way that they will, but it wouldn't upset me


Niels Thykier: Partial rewrite of lintian’s reporting setup

20 February, 2015 - 01:54

I had the mixed pleasure of doing a partial rewrite of lintian’s reporting framework.  It started as a problem with generating the graphs, which turned out to be “not enough memory”. On the plus side, I am actually quite pleased with the end result.  I managed to “scope-creep” myself quite a bit and I ended up getting rid of a lot of old issues.

The major changes in summary:

  • A lot of logic was moved out of harness, meaning it is now closer to becoming a simple “dumb” task scheduler.  With the logic moved out into separate processes, harness now hogs vastly less of the memory that I cannot convince perl to release to the OS.  On lilburn.debian.org, “vastly less” is on the order of reducing “700ish MB” to “32 MB”.
  • All important metadata was moved into the “harness state-cache”, which is a simple YAML file. This means that the “Lintian laboratory” is no longer a data store. This change causes a lot of very positive side effects. (A sketch of what such a cache might look like follows after this list.)
  • With all metadata now stored in a single file, we can now do atomic updates of the data store. That said, this change itself does not enable us to run multiple lintian’s in parallel.
  • As the lintian laboratory is no longer a data store, we can now do our processing in “throw away laboratories” like the regular lintian user does.  As the permanent laboratory is the primary source of failure, this removes an entire class of possible problems.
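To give an idea of the shape of that state-cache, here is a purely illustrative sketch; the actual field names in lintian's YAML file may well differ:

# harness state-cache (illustration only; field names are hypothetical)
groups:
  coreutils/8.23-3:
    last-processed: 2015-02-19 14:02:11
    out-of-date: false
  gcc-4.9/4.9.2-10:
    out-of-date: true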

There are also some nice minor “features”:

  • Packages can now be “up to date” in the generated reports.  Previously, they would always be listed as “out of date” even if they were up to date.  This is the only end user/website-visitor visible change in all of this (besides the graphs now working again \o/).
  • The size of the harness work list is no longer based on the number of changes to the archive.
  • The size of the harness work list can now be changed with a command line option and is no longer hard coded to 1024.  However, the “time limit” remains hard coded for now.
  • The “full run” (and “clean run”) now simply marks everything “out-of-date” and processes its (new) backlog over the next (many) harness runs.  Accordingly, a full-run no longer causes lintian to run 5-7 days on lilburn.d.o before getting an update to the website.  Instead we now get incremental updates.
  • The “harness.log” now features status updates from lintian as they happen with “processed X successfully” or “error processing Y” plus a little wall time benchmark.  With this little feature I filed no less than 3 bugs against lintian – 2 of which are fixed in git.  The last remains unfixed but can only be triggered in Debian stable.
  • It is now possible with throw-away labs to terminate the lintian part of a reporting run early with minimal lost processing.  Since the lintian-harness is regular fed status updates from lintian, we can now mark successfully completed entries as done even if lintian does not complete its work list.  Caveat: There may be minor inaccuracies in the generated report for the particular package lintian was processing when it was interrupted.  This will fix itself when the package is reprocessed again.
  • It is now vastly easier to collect new metadata to be used in the reports.  Previously, it had to be included in the laboratory and extracted from there.  Now, we just have to fit it into a YAML file.  In fact, I have been considering adding the “wall time” and making a “top X slowest” page.
  • It is now possible to generate the html pages with only a “state-cache” and the “lintian.log” file.  Previously, it also required a populated lintian laboratory.

As you can probably tell, I am quite pleased with the end result.  The reporting framework lags behind in development, since it just “sits there and takes care of itself”.  With the complete lack of testing, it also suffers from the “if it is not broken, then do not fix it” paradigm (because we will not notice we broke it until it is too late).

Of course, I managed to break the setup a couple of times in the process.  However, a bonus feature of the reporting setup is that if you break it, it simply leaves an outdated report on the website.

Anyway, enjoy. :)

Filed under: Debian, Lintian

Michal Čihař: Weblate 2.2

20 February, 2015 - 00:00

Weblate 2.2 has been released today. It comes with improved search, user interface cleanup and various other fixes.

Full list of changes for 2.2:

  • Performance improvements.
  • Fulltext search on location and comments fields.
  • New SVG/javascript based activity charts.
  • Support for Django 1.8.
  • Support for deleting comments.
  • Added own SVG badge.
  • Added support for Google Analytics.
  • Improved handling of translation file names.
  • Added support for monolingual JSON translations.
  • Record component locking in a history.
  • Support for editing source (template) language for monolingual translations.
  • Added basic support for Gerrit.

You can find more information about Weblate on http://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user.

Weblate is also being used at https://hosted.weblate.org/ as an official translating service for phpMyAdmin, Gammu, Weblate itself and other projects.

If you are a free software project that would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

PS: The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either by commenting or by providing a bounty for them.

Filed under: English phpMyAdmin SUSE Weblate


Creative Commons License: the copyright of each article belongs to that article's author.
This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.