Planet Debian

Planet Debian - http://planet.debian.org/

Matthew Garrett: Actions have consequences (or: why I'm not fixing Intel's bugs any more)

3 October, 2014 - 00:40
A lot of the kernel work I've ended up doing has involved dealing with bugs on Intel-based systems - figuring out interactions between their hardware and firmware, reverse engineering features that they refuse to document, improving their power management support, handling platform integration stuff for their GPUs and so on. Some of this I've been paid for, but a bunch has been unpaid work in my spare time[1].

Recently, as part of the anti-women #GamerGate campaign[2], a set of awful humans convinced Intel to terminate an advertising campaign because the site hosting the campaign had dared to suggest that the sexism present throughout the gaming industry might be a problem. Despite being awful humans, it is absolutely their right to request that a company choose to spend its money in a different way. And despite it being a dreadful decision, Intel is obviously entitled to spend their money as they wish. But I'm also free to spend my unpaid spare time as I wish, and I no longer wish to spend it doing unpaid work to enable an abhorrently-behaving company to sell more hardware. I won't be working on any Intel-specific bugs. I won't be reverse engineering any Intel-based features[3]. If the backlight on your laptop with an Intel GPU doesn't work, the number of fucks I'll be giving will fail to register on even the most sensitive measuring device.

On the plus side, this is probably going to significantly reduce my gin consumption.

[1] In the spirit of full disclosure: in some cases this has resulted in me being sent laptops in order to figure stuff out, and I was not always asked to return those laptops. My current laptop was purchased by me.

[2] I appreciate that there are some people involved in this campaign who earnestly believe that they are working to improve the state of professional ethics in games media. That is a worthy goal! But you're allying yourself to a cause that disproportionately attacks women while ignoring almost every other conflict of interest in the industry. If this is what you care about, find a new way to do it - and perhaps deal with the rather more obvious cases involving giant corporations, rather than obsessing over indie developers.

For avoidance of doubt, any comments arguing this point will be replaced with the phrase "Fart fart fart".

[3] Except for the purposes of finding entertaining security bugs


Raphaël Hertzog: My Free Software Activities in September 2014

3 October, 2014 - 00:20

This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (26.6 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Django 1.7

Since Django 1.7 was released in early September, I updated the package in experimental and continued to push for its inclusion in unstable. I sent a few more patches to multiple reverse build dependencies whose maintainers had asked for help (python-django-bootstrap-form, horizon, lava-server) and then sent the package to unstable. At that time, I bumped the severity of all bugs filed against packages that were no longer building with Django 1.7.

Later in the month, I made sure that the package migrated to testing; this only required a temporary removal of mumble-django (see #763087). Quite a few packages got updated since then (remaining bugs here).

Debian Long Term Support

I have worked towards keeping Debian Squeeze secure, see the dedicated article: My Debian LTS report for September 2014.

Distro Tracker

The pace of development on tracker.debian.org slowed down a bit this month, with only 30 new commits in the repository, closing 6 bugs. Some of the changes are noteworthy though: the news items now contain real links for bug numbers, CVEs and plain URLs (example here). I have also fixed a serious issue with the way users were identified when they used their Alioth account credentials to log in via sso.debian.org.

On the development side, we’re now able to generate the test suite code coverage which is quite helpful to identify parts of the code that are clearly missing some tests (see bin/gen-coverage.sh in the repository).

Misc packaging

Publican. I have been lagging behind in packaging new upstream versions of Publican and, with the freeze approaching, I decided to take care of it. Unfortunately, it wasn't as easy as I had hoped and I found numerous issues that I have filed upstream (invalid public identifier, PDF build fails with noNumberLines function available, build of the manual requires the network). Most of those have been fixed upstream in the meantime, but the last issue seems to be a problem in the way we manage our Docbook XML catalogs in Debian. I have thus filed #763598 (docbook-xml: xmllint fails to identify local copy of docbook entities file), which is still awaiting an answer from the maintainer.

Package sponsorship. I have sponsored new uploads of dolibarr (RC bug fix), tcpdf (RC bug fix), tryton-server (security update) and django-ratelimit.

GNOME 3.14. With the arrival of GNOME 3.14 in unstable, I took care of updating gnome-shell-timer and also filed some tickets for extensions that I use: https://github.com/projecthamster/shell-extension/issues/79 and https://github.com/olebowle/gnome-shell-timer/issues/25

git-buildpackage. I filed multiple bugs on git-buildpackage for little issues that have been irking me since I started using this tool: #761160 (gbp pq export/switch should be smarter), #761161 (gbp pq import+export should preserve patch filenames), #761641 (gbp import-orig should be less fragile and more idempotent).

Thanks

See you next month for a new summary of my activities.


Julian Andres Klode: Acer Chromebook 13 (FHD): Initial impressions

3 October, 2014 - 00:10

Today, I received my Acer Chromebook 13, in the glorious FullHD variant with 4GB RAM. For those of you who don't know it, the Acer Chromebook 13 is a 13.3-inch Chromebook powered by a Tegra K1 CPU.

This version cannot currently be ordered; only pre-orders were shipped yesterday (at least here in Germany). I cannot even review it on Amazon (despite having bought it there), as they have not enabled reviews for it yet.

The device feels solidly built and looks good. It comes in all-white matte plastic and is slightly reminiscent of the old white MacBooks. The keyboard is horrible: there is no well-defined pressure point, and it feels like you're typing on a pillow. The display is OK, though an IPS panel would be a lot nicer to work with. Oh, and it could be brighter; I do not think that using it outside on a sunny day would be a good idea. The speakers are loud and clear compared to my ThinkPad X230.

The performance of the device is just about acceptable (unfortunately, I do not have any comparison in this device class). Even when typing this blog post in the visual WordPress editor, I notice some sluggishness. Opening the app launcher or loading the new tab page while music is playing makes the music stop or skip for a few ms (20-50 ms if I had to guess). Running a benchmark in parallel or browsing does not usually cause this stuttering, though.

There are still some bugs in Chrome OS: loading the Play Books library for the first time resulted in some rendering issues. The “Browser” process always consumes at least 10% CPU, even when idling with no page open; this might cause some of the sluggishness I mentioned above. Also, watching Flash videos used more CPU than I expected, given that playback is hardware-accelerated.

Finally, Netflix did not work out of the box, despite the Chromebook shipping with a special Netflix plugin. I always get some unexpected issue-type page. Setting the user agent to Chrome 38 from Windows, thus forcing the use of the EME video player instead of the Netflix plugin, makes it work.

I reported these software issues to Google via Alt+Shift+I. The issues appeared on the current version of the stable channel, 37.0.2062.120.

What’s next? I don’t know.



Sha Liu: Back again

2 October, 2014 - 20:50

I cannot believe I’ll ever be here again.


Joachim Breitner: 11 ways to write your last Haskell program

2 October, 2014 - 20:00

At my university, we recently held an exam that covered a bit of Haskell, and a simple warm-up question at the beginning asked the students to implement last :: [a] -> a. We did not demand a specific behaviour for last [].

This is a survey of various solutions, only covering those that are actually correct. I elided some variation in syntax (e.g. guards vs. if-then-else).

Most wrote the naive and straightforward code:

last [x] = x
last (x:xs) = last xs

Then quite a few seemed to be uncomfortable with pattern-matching and used conditional expressions. There was some variety in finding out whether a list is empty:

last (x:xs)
  | null xs == True = x
  | otherwise       = last xs

last (x:xs)
  | length (x:xs) == 1 = x
  | otherwise          = last xs

last (x:xs)
  | length xs == 0 = x
  | otherwise      = last xs

last xs
  | length xs > 1 = last (tail xs)
  | otherwise     = head xs

last xs
  | length xs == 1 = head xs
  | otherwise      = last (tail xs)

last (x:xs)
  | xs == []  = x
  | otherwise = last xs

The last one is not really correct, as it has the stricter type Eq a => [a] -> a. Also we did not expect our students to avoid the quadratic runtime caused by using length in every step.

The next class of answers used length to pick out the right element, either using (!!) directly, or simulating it with head and drop:

last xs = xs !! (length xs - 1)

last xs = head (drop (length xs - 1) xs)

There were two submissions that spelled out an explicit left-folding recursion (essentially foldl written by hand):

last (x:xs) = lastHelper x xs
  where
    lastHelper z [] = z
    lastHelper z (y:ys) = lastHelper y ys

And finally there are a few code-golfers that just plugged together some other functions:

last x = head (reverse x)

Quite a lot of ways to write last!

Michal Čihař: Merging Weblate instances

2 October, 2014 - 18:00

For quite some time, I have been running a translation server for projects I am involved in at l10n.cihar.com. Historically this used Pootle, but when we had more and more problems with it, I wrote Weblate and started to use it there.

As Weblate became more popular and I got requests to help people run it, I realized that it might be a good idea to run a server where I could host translations for other projects. This is how Hosted Weblate was born.

After some time, I realized that it makes little sense to run and maintain separate servers for these sets of projects, so I decided to move all translations from l10n.cihar.com to hosted.weblate.org. Today this move was completed by moving the translations for phpMyAdmin.


Thorsten Alteholz: My Debian Activities in September 2014

2 October, 2014 - 02:16

FTP assistant

Starting an article with self-laudation might be bad style, but this month I was busy as a bee and could accept 312 packages, 75 more than last month. I contacted maintainers 34 times to ask a question and had to reject a package 51 times. These numbers remain constant.

The number of packages in NEW dropped to about 180. If you want your package included in Jessie, please double-check it and upload an improved version.

Squeeze LTS

This was my third month that I did some work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

All in all I got assigned a workload of 11h for September and I spent these hours to upload new versions of

  • [DLA 43-1] eglibc security update
  • [DLA 64-1] curl security update
  • [DLA 67-1] php5 security update
  • [DLA 68-1] fex security update

I further tried to upload a new version of python-django. Unfortunately I could not figure out why some of the internal tests of the package failed, so I forwarded the package to Raphael, who was able to resolve all issues.

The Squeeze version of PHP5 contains 140 patches. According to quilt, 47 of them are identified as already being in 5.3.29 and 48 patches need to be revised. Some of them are really big, rather old and not really supported in the new 5.3.n version.
As nobody will be talking about Squeeze LTS in a few months, I had better avoid the hassle of preparing a point release and concentrate only on security patches from now on.

Other packages

This month I uploaded a new version of net-dns-fingerprint, which closes an RC bug. Unfortunately the package does not work with all DNS servers anymore. Patches, or hints about what happened, are very welcome.

Support

If you would like to support my Debian work you could either be part of the Freexian initiative (see above) or consider sending some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at donation@alteholz.eu if you prefer another way to donate. Every kind of support is most appreciated.

Matt Zimmerman: Join me in supporting The Ada Initiative

2 October, 2014 - 00:30

When I first read that Linux kernel developer Valerie Aurora would be changing careers to work full-time on behalf of women in open source communities, I never imagined it would lead so far so fast. Today, The Ada Initiative is a non-profit organization with global reach, whose programs have helped create positive change for women in a wide range of communities beyond open source. Building on this foundation, imagine how much more they can do in the next four years! That’s why I’m pledging my continuing support, and asking you to join me.

For the next 7 days, I will personally match your donations up to $4,096. My employer, Heroku (Salesforce.com), will match my donations too, so every dollar you contribute will be tripled!

My goal is that together we will raise over $12,000 toward The Ada Initiative’s 2014 fundraising drive.

Since about 1999, I had been working in open source communities like Debian and Ubuntu, where women are vastly underrepresented even compared to the professional software industry. Like other men in these communities, I had struggled to learn what I could do to change this. Such a severe imbalance can only be addressed by systemic change, and I hardly knew where to begin. I worked to raise awareness by writing and speaking, and joined groups like Debian Women, Ubuntu Women and Geek Feminism. I worked on my own bias and behavior to avoid being part of the problem myself. But it never felt like enough, and sometimes felt completely hopeless.

Perhaps worst of all, I saw too many women burning out from trying to change the system. It was often taxing just to participate as a woman in a male-dominated community, and the extra burden of activism seemed overwhelming. They were all volunteers, doing this work in evenings and weekends around work or study, and it took a lot of time, energy and emotional reserve to deal with the backlash they faced for speaking out about sexism. Valerie Aurora and Mary Gardiner helped me to see that an activist organization with full-time staff could be part of the solution. I joined the Ada Initiative advisory board in February 2011, and the board of directors in April.

Today, The Ada Initiative is making a difference not only in my community, but in my workplace as well. When I joined Heroku in 2012, none of the engineers were women, and we clearly had a lot of work to do to change that. In 2013, I attended AdaCamp SF along with my colleague Peter van Hardenberg, joining the first “allies track”, open to participants of any gender, for people who wanted to learn the skills to support the women around them. We’ve gone on to host two ally skills workshops of our own for Heroku employees, one taught by Ada Initiative staff and another by a member of our team, security engineer Leigh Honeywell. These workshops taught interested employees simple, everyday ways to take positive action to challenge sexism and create a better workplace for women. The Ada Initiative also helped us establish a policy for conference sponsorship which supports our gender diversity efforts. Today, Heroku engineering includes about 10% women and growing. The Ada Initiative’s programs are helping us to become the kind of company we want to be.

I attended the workshop with a group of Heroku colleagues, and it was a powerful experience to see my co-workers learning tactics to support women and intervene in sexist situations. Hearing them discuss power and privilege in the workplace, and the various “a-ha!” moments people had, was very encouraging and made me feel heard and supported.
– Leigh Honeywell

If you want to see more of these programs from The Ada Initiative, please contribute now:


Craig Small: IPv6 and bridges

1 October, 2014 - 20:54

I’ve reported a bug on bridge-utils, but perhaps someone has already seen this and has a fix. My virtual machines lose IPv6 connectivity from time to time. Tracking this down, it seems that the router sends Neighbor Solicitations (basically the IPv6 equivalent of ARP requests). The physical interface of the bridge group receives them, but the vnet0 one does not.

Using tshark I can see only the pings on vnet0, while on br0 and eth1 I see both the ping requests and the NS packets. So there is something odd going on with the bridge interface.

If I remove the vnet0 interface from the bridge group and add it back, the connectivity returns.

Holger Levsen: 20141001-lts-september-2014

1 October, 2014 - 17:04
My LTS September

In the beginning of September I spent quite some time fixing bugs in the Debian Security Tracker, which now, thanks to the awesome CSS from Ulrike, looks really good and professional! There are still some bugs to fix and features I'd like to add, e.g. the ability to in- and exclude (old)oldstable/lts/backports/nodsa/EOL everywhere. It was fun to squash #742382 #642987 #742855 #762214 #479727 #610220 #611163 and #755800!

And then I also discovered dgit, as in "I've used it for the first time". It was so great, I immediately did a backport of it and uploaded it to wheezy-backports.

So during the last month I made these uploads to squeeze-lts:

  • DLA 56-1 for wordpress, fixing CVE-2014-2053 CVE-2014-5204 CVE-2014-5205 CVE-2014-5240 CVE-2014-5265 CVE-2014-5266
  • DLA 57-1 for libstruts1.2-java, fixing CVE-2014-0114
  • DLA 60-1 for icinga, fixing CVE-2013-7108 and CVE-2014-1878
  • DLA 61-1 for libplack-perl, fixing CVE-2014-5269
  • DLA 62-1 for nss, fixing CVE-2014-1568
  • DLA 66-1 for apache2, fixing CVE-2013-6438 CVE-2014-0118 CVE-2014-0226 CVE-2014-0231

Plus I filed #762715, asking the devscripts maintainers to 'add an --lts option to dch' and #763339 against lintian: please 'recognize "squeeze-lts" as suite'.

Here are three things you could do to contribute to Debian LTS:

Thanks to everybody supporting LTS already!

Keith Packard: chromium-dri3

1 October, 2014 - 14:51
Chromium (the browser) and DRI3

I got a note on IRC a week ago that Chromium was crashing with DRI3.

The Google team working on Chromium eventually sent me a link to the bug report. That's secret Google stuff, so you won't be able to follow the link, even though it's a bug in a free software application when running on free software drivers.

There's a bug report in the freedesktop bugzilla which looks the same to me.

In both cases, the recommended “fix” was to switch from DRI3 back to DRI2. That's not exactly a great plan, given that DRI3 offers better security between GPU-using applications, which seems like a pretty nice thing to have when you're running random GL applications from the web.

Chromium Sandboxing

I'm not entirely sure how it works, but Chromium creates a process separate from the main browser engine to talk to the GPU. That process has very limited access to the operating system via some fancy library adventures. Presumably, the hope is that security bugs in the GL driver would be harder to leverage into a remote system exploit.

Debugging in this environment is a bit tricky, as you can't simply run chromium under gdb and expect to be able to set breakpoints in the GL driver. Instead, you have to run chromium with a magic flag that causes the GPU process to pause before loading the driver so you can connect to it with gdb and debug from there, along with a flag that lets you see crashes within the GPU process, and the usual flag that causes chromium to ignore the GPU blacklist (which seems to always include the Intel driver for one reason or another):

$ chromium --gpu-startup-dialog --disable-gpu-watchdog --ignore-gpu-blacklist

Once Chromium starts up, it will print out a message telling you to attach gdb to the GPU process and send that process a SIGUSR1 to continue it. Now you can happily debug and get a stack trace when the crash occurs.

Locating the Bug

The bug manifested with a segfault at the first access to a DRI3-allocated buffer within the application. We've seen this problem in the past; whenever buffer allocation fails for some reason, the driver ignores the problem and attempts to de-reference through the (NULL) buffer pointer, causing a segfault. In this case, Chromium called glClear, which tried (and failed) to allocate a back buffer causing the i965 driver to subsequently segfault.

We should probably go fix the i965 driver to not segfault when buffer allocation fails, but that wouldn't provide a lot of additional information. What I have done is add some error messages in the DRI3 buffer allocation path which at least tell you why the buffer allocation failed. That patch has been merged to Mesa master, and should also get merged to the Mesa stable branch for the next stable release.

Once I had added the error messages, it was pretty easy to see what happened:

$ chromium --ignore-gpu-blacklist
[10618:10643:0930/200525:ERROR:nss_util.cc(856)] After loading Root Certs, loaded==false: NSS error code: -8018
libGL: pci id for fd 12: 8086:0a16, driver i965
libGL: OpenDriver: trying /local-miki/src/mesa/mesa/lib/i965_dri.so
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL error: DRI3 Fence object allocation failure Operation not permitted

The first two errors were just the sandbox preventing Mesa from using my GL configuration file. I'm not sure how that's a security problem, but it shouldn't harm the driver much.

The last error is where the problem lies. In Mesa, the DRI3 implementation uses a chunk of shared memory to hold a fence object that lets Mesa know when buffers are idle without using the X connection. That shared memory segment is allocated by creating a temporary file using the O_TMPFILE flag:

fd = open("/dev/shm", O_TMPFILE|O_RDWR|O_CLOEXEC|O_EXCL, 0666);

This call “cannot fail” as /dev/shm is used by glibc for shared memory objects, and must therefore be world writable on any glibc system. However, with the Chromium sandbox enabled, it returns EPERM.
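A more defensive allocation path could fall back to a named temporary file that is unlinked immediately when O_TMPFILE is refused. What follows is only a sketch of that idea, not the actual Mesa code, and the helper name is hypothetical:

#define _GNU_SOURCE         /* for O_TMPFILE and mkostemp */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical sketch: allocate an anonymous shared-memory fd,
   falling back to the classic create-and-unlink dance when the
   sandbox denies O_TMPFILE (as the Chromium GPU sandbox does). */
static int fence_alloc_fd(void)
{
    int fd = open("/dev/shm", O_TMPFILE|O_RDWR|O_CLOEXEC|O_EXCL, 0666);
    if (fd < 0) {
        char name[] = "/dev/shm/fence-XXXXXX";
        fd = mkostemp(name, O_CLOEXEC);
        if (fd >= 0)
            unlink(name);   /* keep the fd, drop the name */
    }
    return fd;
}

Of course, whether such a fallback gets past a given sandbox profile is another question entirely; it depends on exactly which open(2) modes the sandbox whitelists.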

Running Without a Sandbox

Now that the bug appears to be in the sandboxing code, we can re-test with the GPU sandbox disabled:

$ chromium --ignore-gpu-blacklist --disable-gpu-sandbox

And, indeed, without the sandbox getting in the way of allocating a shared memory segment, Chromium appears happy to use the Intel driver with DRI3.

Final Thoughts

I looked briefly at the Chromium sandbox code. It looks like it needs to know intimate details of the OpenGL implementation for every possible driver it runs on; it seems to contain a fixed list of all possible files and modes that the driver will pass to open(2). That seems incredibly fragile to me, especially when used in a general Linux desktop environment. Minor changes in how the GL driver operates can easily cause the browser to stop working.

Vincent Sanders: It is a bad plan that admits of no modification

1 October, 2014 - 09:05
I find it somewhat interesting that, thousands of years later, our society still uses the sententiae of Publilius Syrus, though I imagine the tendency to leave well enough alone means such phrases stay in usage.

One weekend Steve McIntyre asked me if I could find a source of some 40mm fans for some systems with pretty strict requirements. They needed to be long-life and shift a lot of air to combat a persistent overheating issue.

I sat with him and went through Farnell's utterly hateful parametric web interface and eventually came up with a couple of options, which were very expensive. Only then did I stop and ask what the actual problem was.

Steve showed me one of the Debian ARM buildd boxes which are Marvell development machines. These systems are powerful quad core machines housed in compact steel enclosures.

There is a single 40mm fan trying to provide cooling for the entire enclosure. When the units are placed horizontally and used intermittently this proves adequate. Unfortunately, when the systems are arranged vertically in a rack and run at full load continuously, they often overheat and have to be restarted. In addition, the small high-speed fans need replacing frequently as their bearings wear out quickly.

This was obviously causing some issues for the ARM Debian ports which Steve wanted to rectify. After talking the problem through for a while we came to the conclusion we could use much larger 60mm fans to blow air directly through the top of the case onto the cpu heatsink.

Larger fans can be run much more slowly to move a similar volume of air to the smaller 40mm fans which gives a much longer service life.

Steve proceeded to order enough parts to allow us to modify all the Debian systems; this worked out cheaper than a single "special" 40mm high-volume fan.

I acquired a rather large steel hole punch. I chose this tool because it produces a much superior finish to a hole cutter, and this project demanded a high level of finish (not to mention that I loved having a valid excuse to own and use a huge Allen key!).

If we had simply been modifying a single case I would have measured and marked up by hand. With the prospect of altering at least eight, I laser-cut a template from plywood, which Andy Simpkins took great glee in excessively annotating.

We also used the opportunity to add bolt holes to securely attach the 2.5 inch SATA drives instead of using sticky pads.

Steve and I modified a single system to begin with, both to check our alignment and the efficacy of the change. We were pleasantly surprised to discover that hoiby could now repeatedly do kernel compiles with all four cores flat out, which was not possible before. The measured CPU temperature, which had previously been around 90°C, did not rise above 40°C.

Steve, Andy and I then arranged a day where we took all the remaining units out of the rack at ARM, modified and returned them. We used the facilities at the Cambridge Makespace where I am a member to do the modifications.

I broke two 3mm drill bits and dulled a 4mm bit drilling all the holes. Roger Smith was good enough to loan us his "Christmas tree bit" to ream the fan hole out to 16mm so we could thread the hole punch and cut the 60mm fan aperture out.

We managed to get quite an assembly line going and, in my opinion, the results look pretty professional.

It has been several months since we did this work and these systems continue to run without issue. To complete the story we can see some graphs courtesy of the DSA munin instance.

You can clearly see the huge drop in temperature at the end of Week 25 despite the continuously high CPU load. Also, there is only a single gap in the data after the changes (gaps indicate crashes where data was not recorded), whereas before there were frequent and extensive periods when the systems were simply unusable.

One reason I continue to enjoy Debian so much is the wide variety of ways in which I can contribute, not only by maintaining my packages. Sometimes this kind of work does not receive the credit it deserves; hopefully this post highlights a small part of the frantic paddling that goes on under the serene surface of the Debian project to keep things "just working".

Junichi Uekawa: Start of fourth quarter this year.

1 October, 2014 - 09:04
Start of fourth quarter this year. How is everything going ?

Lisandro Damián Nicanor Pérez Meyer: Qt5 in Jessie: we will release with 5.3.2

1 October, 2014 - 01:08
Qt 5.3.2 entered testing a few hours ago. This will be the version of Qt we release with Debian Jessie, and that is a nice coincidence, because upstream focused on stability for the 5.3 branch.

I'll now focus on fixing as many bugs as possible and on backporting Qt5 to Wheezy.

Let me warn you: if you are an upstream for a Qt4-based project, be sure to be ready to switch to Qt5. If you are a maintainer of a Qt4-based project, you had better start asking your upstream to be ready for it :)

Gunnar Wolf: Diego Gómez: Imprisoned for sharing

30 September, 2014 - 22:01

I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... Not only what I do day to day, but what I promote and believe to be right: Sharing academic articles.

Diego is a Colombian working towards his Master's degree in conservation and biodiversity in Costa Rica. He is now facing up to eight years of imprisonment for... Sharing on Scribd a scholarly article he did not author.

Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people will hope for the best and expect academic publishers to be fundamentally good, not to send legal threats just for the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and as the saying goes, they should benefit the public. And the public is everybody, is all of us.

And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide early last year... It is exactly the same thing. Thankfully (although, sadly, after the fact), thousands of people strongly stood on Aaron's side in that demand. Please sign the EFF petition to help Diego, share this, and try to spread the word on the real-world need for Open Access mandates for academics!

Some links with further information:

Russell Coker: Links September 2014

30 September, 2014 - 21:55

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should set up DNSSEC on my own zones.

Paul Wayper has some insightful comments about the Liberal party’s nasty policies towards the unemployed [2]. We really need a Basic Income in Australia.

Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good I’m dubious of his claims about twitter. When used skillfully twitter can provide short insights into topics and teasers for linked articles.

Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can’t summarise it well in a paragraph, I recommend reading it all.

Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about “controversial” issues.

George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6].

Mychal Denzel Smith wrote an insightful article “How to know that you hate women” [7].

Sam Byford wrote an informative article about Google’s plans to develop and promote cheap Android phones for developing countries [8]. That’s a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries.

Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix.

David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10].

Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised.

Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about “good people with guns” are lies, the NRA are evil people with guns.


Raphaël Hertzog: My Debian LTS report for September

30 September, 2014 - 21:24

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said that it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security-wise. It looks like we're going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple Java packages.
  • I reviewed many CVEs affecting dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we estimate that no security update is needed, usually because the problem is minor and/or fixing it is more likely to introduce a regression than to help).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVEs concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look for a clear explanation of the security issue, a list of vulnerable versions, and patches for the versions we have in Debian, in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).

Once I was over that first pass of triaging, I had already spent more than the 11 hours paid but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.


Mario Lang: A simple C++11 concurrent workqueue

30 September, 2014 - 20:20

For a little toy project of mine (a Wikipedia XML dump word counter) I wrote a little C++11 helper class to distribute work to all available CPU cores. It took me many years to overcome my fear of threading: in the past, whenever I toyed with threaded code, I ended up with a lot of deadlocks and general confusion. It appears that I have finally understood enough of this craziness to be able to come up with the small helper class below.

It makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: you provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining work is processed and all background threads are joined.

The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9; earlier tests had shown std::thread::hardware_concurrency() always returning 1. I don't know exactly when GCC (or libstdc++, actually) started to support this, but at least since GCC 4.9 it is usable. A prerequisite on Linux is a mounted /proc.

The maximum number of queued items per thread defaults to 1. If the queue is full, calls to the function operator will block.

So the most basic usage example is probably something like:

#include <iostream>
#include <string>
#include <utility>

int main() {
  typedef std::string item_type;
  distributor<item_type> process([](item_type &item) {
    // do work on one item, e.g. count the words in this line
  });

  std::string item;
  while (std::getline(std::cin, item)) process(std::move(item));

  return 0;
}

That is about as simple as it can get, IMHO.
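One caveat worth noting: the standard allows std::thread::hardware_concurrency() to return 0 when the core count cannot be determined, and the constructor below rejects a concurrency of zero with an exception. A minimal defensive sketch, assuming the distributor class below is in scope (the lambda body is just a placeholder):

#include <iostream>
#include <string>
#include <thread>

int main() {
  // hardware_concurrency() may legally return 0 ("unknown");
  // fall back to a single worker thread in that case.
  unsigned int workers = std::thread::hardware_concurrency();
  if (workers == 0) workers = 1;

  distributor<std::string> process([](std::string &item) {
    std::cout << item << '\n';  // placeholder for real per-item work
  }, workers);

  process("hello");
  process("world");
  return 0;
}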

The code can be found in the GitHub project mentioned above. However, since the class is relatively short, here it is.

#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;

public:
  template<typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be positive and non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be positive and non-zero");

    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }

  distributor(distributor &&) = default;
  distributor(distributor const &) = delete;
  distributor& operator=(distributor const &) = delete;

  ~distributor() {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
    }
    notify_all();
    std::for_each(threads.begin(), threads.end(),
                  std::mem_fun_ref(&std::thread::join));
  }

  void operator()(Type &&value) {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_one();
  }

private:
  template <typename Function>
  void consume(Function process) {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        lock.unlock();
        notify_one();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};

If you have any comments regarding the implementation, please drop me a mail.

Francois Marier: Encrypted mailing list on Debian and Ubuntu

30 September, 2014 - 13:30

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.

Requirements

What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and encrypt it individually for each subscriber. This protects the emails while in transit, but it is vulnerable to the list server itself being compromised, since every list email transits through there in plain text at some point.

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.

Then you can test it by sending an email to LISTNAME-sendkey@example.org. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

Dirk Eddelbuettel: Rcpp 0.11.3

30 September, 2014 - 09:39

A new release 0.11.3 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package has been uploaded too.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 273 packages on CRAN depend on Rcpp for making analyses go faster and further.

This release brings a fairly large number of continued enhancements, fixes and polishing to Rcpp. These were provided by a total of seven different contributors---which is a new record as well.

See below for a detailed list of changes extracted from the NEWS file, but some highlights included in this release are

  • Several API cleanups, polishes and a pre-announced code removal
  • New InternalFunction interface, and new Timer functionality (see the sketch just after this list).
  • More robust functionality of Rcpp Attributes as well as a new dryRun option.
  • The Rcpp FAQ was updated, as was the main Description: in the DESCRIPTION file.
  • Rcpp.package.skeleton() can now deploy functionality from pkgKitten to create Rcpp packages that purr.
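As a rough illustration of the new Timer functionality mentioned above, here is a hedged sketch following the pattern from the Rcpp Gallery; the function name, step labels and workload are made up, but the snippet should be usable via sourceCpp():

#include <Rcpp.h>
#include <Rcpp/Benchmark/Timer.h>

using namespace Rcpp;

// [[Rcpp::export]]
NumericVector timeWorkload(int n) {
    Timer timer;                  // the origin is recorded at construction

    timer.step("start");          // as of 0.11.3, steps record time from the origin
    double sum = 0;
    for (int i = 0; i < n; i++)   // a made-up workload
        sum += R::runif(0, 1);
    timer.step("uniform-draws");

    NumericVector res(timer);     // timings, in nanoseconds
    res.attr("sum") = sum;        // keep the workload from being optimised away
    return res;
}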

One sore point, however, is that we missed that packages using Rcpp Modules appear to require a rebuild. We are sorry for the inconvenience; this has highlighted a shortcoming in our fairly robust and extensive tests. While we test our packages against all known CRAN dependents, such tests check for the ability to compile and run freshly and not whether previously built packages still run. We intend to augment our testing in this direction to avoid a repeat occurrence of such a misfeature.

Changes in Rcpp version 0.11.3 (2014-09-27)
  • Changes in Rcpp API:

    • The deprecation of RCPP_FUNCTION_* which was announced with release 0.10.5 last year is proceeding as planned, and the file macros/preprocessor_generated.h has been removed.

    • Timer no longer records time between steps, but times from the origin. It also gains a get_timers(int) method that creates a vector of Timer objects that have the same origin. This is modelled on the Rcpp11 implementation and is more useful for situations where we use timers in several threads. Timer also gains a constructor taking a nanotime_t to use as its origin, and an origin method. This can be useful for situations where the number of threads is not known in advance but we still want to track what goes on in each thread.

    • A cast to bool was removed in the vector proxy code as inconsistent behaviour between clang and g++ compilations was noticed.

    • A missing update(SEXP) method was added thanks to pull request by Omar Andres Zapata Mesa.

    • A proxy for DimNames was added.

    • A no_init option was added for Matrices and Vectors.

    • The InternalFunction class was updated to work with std::function (provided a suitable C++11 compiler is available) via a pull request by Christian Authmann.

    • A new_env() function was added to Environment.h

    • The return value of range eraser for Vectors was fixed in a pull request by Yixuan Qiu.

  • Changes in Rcpp Sugar:

    • In ifelse(), the returned NA type was corrected for operator[].

  • Changes in Rcpp Attributes:

    • Include LinkingTo in DESCRIPTION fields scanned to confirm that C++ dependencies are referenced by package.

    • Add dryRun parameter to sourceCpp.

    • Corrected issue with relative path and R chunk use for sourceCpp.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was updated with respect to OS X issues.

    • A new entry in the Rcpp-FAQ clarifies the use of licenses.

    • Vignettes build results no longer copied to /tmp to please CRAN.

    • The Description in DESCRIPTION has been shortened.

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function will now use pkgKitten package, if available, to create a package which passes R CMD check without warnings. A new Suggests: has been added for pkgKitten.

    • The modules=TRUE case for Rcpp.package.skeleton() has been improved and now runs without complaints from R CMD check as well.

  • Changes in Rcpp unit test functions:

    • Functions from the RUnit package are now prefixed with RUnit::

    • The testRcppModule and testRcppClass sample packages now pass R CMD check --as-cran cleanly with no NOTES or WARNINGS

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Creative Commons License. The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.