Planet Debian

Planet Debian - http://planet.debian.org/

Dirk Eddelbuettel: random 0.2.3

9 January, 2015 - 09:50

A new release of my random package for truly (hardware-based) random numbers as provided by random.org is now on CRAN.

The main change is a switch to the curl() function from the eponymous package by Jeroen Ooms. This was caused by random.org now using https instead of http, and the fact that the url() function from R does not cope well with the redirect. Besides this (enforced) change, everything else remains the same.
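The redirect itself is easy to observe from the command line (a quick illustration, not part of the package):

# random.org now answers plain-http requests with a redirect to its https site
curl -sI http://www.random.org/ | head -n 3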

Courtesy of CRANberries comes a diffstat report for this release. Current and previous releases are available here as well as on CRAN.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Peter Eisentraut: Directing output to multiple files with zsh

9 January, 2015 - 08:00

Normally, this doesn’t work as one might naively expect:

program > firstfile > secondfile

The second redirection will override the first one. You’d have to use an external tool to make this work, maybe something like:

program | tee firstfile secondfile

But with zsh, this type of thing actually works. It will duplicate the output and write it to multiple files.

This feature also works with a combination of redirections and pipes. For example

ls > foo | grep bar

will write the complete directory listing into file foo and print out files matching bar to the terminal.

That’s great, but this feature pops up in unexpected places.

I have a shell function that checks whether a given command produces any output on stderr:

! myprog "$arg" 2>&1 >/dev/null | grep .

The effect of this is:

  • If no stderr is produced, the exit code is 0.
  • If stderr is produced, the exit code is 1 and the stderr is shown.

(Note the ordering of 2>&1 >/dev/null to redirect stderr to stdout and silence the original stdout, as opposed to the more common incantation of >/dev/null 2>&1, which silences both stderr and stdout.)
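The difference is easy to demonstrate with any command that writes to both streams; ls with one existing and one missing argument will do:

ls /etc/passwd /nonexistent 2>&1 >/dev/null   # only the error message survives
ls /etc/passwd /nonexistent >/dev/null 2>&1   # both streams are silenced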

The reason for this is that myprog has a bug that causes it to print errors but not produce a proper exit status in some cases.

Now how will my little shell function snippet behave under zsh? Well, it’s quite confusing at first, but the following happens. If there is stderr output, then only stderr is printed. If there is no stderr output, then stdout is passed through instead. But that’s not what I wanted.

This can be reproduced simply:

ls --bogus 2>&1 >/dev/null | grep .

prints an error message, as expected, but

ls 2>&1 >/dev/null | grep .

prints a directory listing. That’s because zsh redirects stdout to both /dev/null and the pipe, which makes the redirection to /dev/null pointless.

Note that in bash, the second command prints nothing.

This behavior can be changed by turning off the MULTIOS option (see zshmisc man page), and my first instinct was to do that, but options are not lexically scoped (I think), so this would break again if the option was somehow changed somewhere else. Also, I think I kind of like that option for interactive use.
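As an aside, zsh can confine an option change to a function body via setopt localoptions, which restores the previous option state when the function returns. A minimal sketch, assuming the check is wrapped in a function (the name check_stderr is mine):

check_stderr() {
    setopt localoptions nomultios   # MULTIOS comes back when the function returns
    ! myprog "$1" 2>&1 >/dev/null | grep .
}

Even so, an approach that doesn't depend on shell option state at all avoids the question entirely.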

My workaround is to use a subshell:

! ( myprog "$arg" 2>&1 >/dev/null ) | grep .

The long-term fix will probably be to write an external shell script in bash or plain POSIX shell.
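For reference, the whole check wrapped up as a function using the subshell workaround (a sketch; the function name is mine):

check_stderr() {
    # exit 0 if the command writes nothing to stderr; otherwise show
    # that stderr and exit 1 (the subshell sidesteps MULTIOS)
    ! ( myprog "$1" 2>&1 >/dev/null ) | grep .
}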

Jonathan McDowell: Cup!

8 January, 2015 - 20:58

I got a belated Christmas present today. Thanks Jo + Simon!

Wouter Verhelst: ExtreMon example

8 January, 2015 - 19:47

About a month ago, I blogged about extremon. As a reminder, ExtreMon is a monitoring tool that allows you to view things as they are happening, rather than with the ~5 minute delay that munin gives you, and also avoiding the quad-state limitation of Nagios' "good", "bad", "ugly", and "unknown" states. No, they're not really called that. Yes, I know you knew that.

Anyway. In my blog post, I explained how you can set up ExtreMon, and I also set up a fairly limited demo version on my own server. But I have since realized that while it is functional, it doesn't actually show why ExtreMon is so great. In an effort to remedy that, I present to you an example of what ExtreMon can do.

Let's start with a screenshot of the ExtreMon console at the customer for which I spent time trying to figure out how to get it up and running:

Click for full sized version. You'll note that even in that full-sized version, many things are unreadable. This is because the ExtreMon console allows one to move around (right mouse button drag for zoom; left mouse button drag for moving around; control+RMB for rotate; center mouse button to reset to default); so what matters is that everything fits on the screen, not whether it is all readable (if you need to read, you zoom).

The image shows 18 rectangles. Each rectangle represents a single machine in this particular customer's HPC cluster. The top three rectangles are the cluster's file servers; the rest are its high performance nodes.

You'll note that the left fileserver has 8 processor cores (top row), 8 network cards (bottom row, left part), and it also shows information on its memory usage (bottom row, small rectangle in the middle) as well as its NFS client and server procedure calls (bottom row, slightly larger rectangles to the right). This file server is the one on which I installed ZFS a while back; hence the large number of disks visible in the middle row. The leftmost disk is the root filesystem (which is an ext4 off a hardware RAID1); the two rightmost "disks" are the PCIe-attached SSDs which are used for the ZFS L2ARC and write log. The other disks in this file server nicely show how ZFS does write load balancing over all its disks.

The second file server has a hardware RAID1 on which it stores all its data; as such, there is only one disk graph there. It is also somewhat more limited in network, as it has only two NICs. It does, however, also have 8 cores.

The last file server has no more than four processor cores; in addition, it also does not have a hardware RAID controller, so it must use software RAID over its four hard disks. This server is used for archival purposes, mostly, since it is insufficient for most anything else.

As said, the other nodes are the "compute nodes", where the hard work is done. Most of these compute nodes have 16 cores each; two have 12 instead. When this particular screenshot was taken, four of the nodes (the ones showing red in their processor graphs) were hard at work; the others seem to have been mostly idling. In addition to the familiar memory, NFS (client only), network, and processor graphs, these nodes also show a "swap space" graph (just below the memory one), which seems fine for most nodes, except for the bottom left one (which shows a few bars that are coloured yellow rather than green).

The green/yellow/red stuff is supposed to represent the "ok", "warning", "bad" states that would be familiar from Nagios. In this particular case, however, where "processor is busy all the time" is actually a wanted state, a low amount of idleness on the part of the processor isn't actually a problem, on the contrary. I did consider, therefore, modifying the ExtreMon configuration so that the processor graphs would not show red when the system was under high load; however, I found that differences in colour like this actually make it easier to see, at a glance, which machines are busy -- and that's one of the main reasons why we wanted to set this up.

If you look carefully, you can find a particular processor core in the graph which shows 100% usage for "idle", "system", and "softirq", at the same time. Obviously that can't be the case, so there's a bug somewhere. Frank seems to believe it is a bug in CollectD; I haven't verified that. At any rate, though, this isn't usually a problem, due to the high update frequency of ExtreMon.

The amount of data that's flowing through ExtreMon is amazing:

  • 22 values for NFS (times two for the file servers) per server: 22x2x3+22x15
  • 4 values for memory: 4x18
  • 3 values for swap: 3x15
  • 8 values per CPU core: 8x8x2+8x4+8x12x2+8x16x13
  • 2 values per disk: 2x25+2+2x4
  • 2 values per NIC: 2x8x12+2x2x2+2x4x4

Which renders a grand total of 2887 data points shown in this particular screenshot; and that's not even counting all the intermediate values, some of which also pass through ExtreMon. Nor am I counting the extra bits which have since been added (this screenshot is a few days old now, and I'm still finetuning things). Yet even so, ExtreMon manages to update those values once every few seconds, in the worst case. As a result, the display isn't static for a moment; it's constantly moving and updating data, so that what you see is never out of date for more than a second or two.

Awesome.

Stefano Zacchiroli: JeSuisCharlie RIP Bernard Maris

8 January, 2015 - 19:21
R.I.P. Bernard Maris and his thoughts on research and the sharing economy

via Le Monde, 16 Sep 2014:

Le Monde: What should a left-wing policy be? A regulation of capitalism, or a policy of radical rupture with this economic system?

B.M.: […] We are indeed moving towards an economy of sharing, of the free-of-charge, of free software. The central figure of tomorrow will be the researcher who, when he gives something to the community, does not lose it. The researcher answers the fundamental needs of man: creation, curiosity, change, progress. He is obliged to cooperate. Cooperation channels violence, which liberalism hoped to channel through gentle commerce! What lies beyond capitalism will be an economy of solidarity and fraternity. Today, the unavoidable question concerns the nature of work.[…]

Bernard Maris
23 Sep 1946 - 7 Jan 2015
#JeSuisCharlie

Russell Coker: Conference Suggestions

8 January, 2015 - 19:02

LCA 2015 is next week so it seems like a good time to offer some suggestions for other delegates based on observations of past LCAs. There’s nothing LCA specific about the advice, but everything is based on events that happened at past LCAs.

Don’t Oppose a Lecture

Question time at the end of a lecture isn’t the time to demonstrate that you oppose everything about the lecture. Discussion time between talks at a mini-conf isn’t a time to demonstrate that you oppose the entire mini-conf. If you think a lecture or mini-conf is entirely wrong then you shouldn’t attend.

The conference organisers decide which lectures and mini-confs are worthy of inclusion and the large number of people who attend the conference are signalling their support for the judgement of the conference organisers. The people who attend the lectures and mini-confs in question want to learn about the topics in question and people who object should be silent. If someone gives a lecture about technology which appears to have a flaw then it might be OK to ask one single question about how that issue is resolved, apart from that the lecture hall is for the lecturer to describe their vision.

The worst example of this was between talks at the Haecksen mini-conf last year when an elderly man tried at great length to convince me that everything about feminism is wrong. I'm not sure to what degree the Haecksen mini-conf is supposed to be a feminist event, but I think it's quite obviously connected to feminism – which of course is why he wanted to pull that stunt. After he discovered that I was not going to be convinced and that I wasn't at all interested in the discussion, he went to the front of the room to make a sexist joke and left.

Consider Your Share of Conference Resources

I’ve previously written about the length of conference questions [1]. Question time after a lecture is a resource that is shared among all delegates. Consider whether you are asking more questions than the other delegates and whether the questions are adding benefit to other people. If not then send email to the speaker or talk to them after their lecture.

Note that good questions can add significant value to the experience of most delegates. For example when a lecturer appears to be having difficulty in describing their ideas to the audience then good questions can make a real difference, but it takes significant skill to ask such questions.

Dorm Walls Are Thin

LCA is one of many conferences that is typically held at a university with dorm rooms offered for delegates. Dorm rooms tend to have thinner walls than hotel rooms so it's good to avoid needless noise at night. If one of your devices is going to make sounds at night please check the volume settings before you start it. At one LCA I was startled at about 2AM by the sound of a very loud porn video from a nearby dorm room; the volume was reduced within a few seconds, but it's difficult to get to sleep quickly after that sort of surprise.

If you set an alarm then try to avoid waking other people. If you set an early alarm and then just get up then other people will get back to sleep, but pressing “snooze” repeatedly for several hours (as has been done in the past) is anti-social. Generally I think that an alarm should be at a low volume unless it is set for less than an hour before the first lecture – in which case waking people in other dorm rooms might be doing them a favor.

Phones in Lectures

Do I need to write about this? Apparently I do because people keep doing it!

Phones can easily be turned to vibrate mode; most people who I've observed taking calls in LCA lectures have managed this, but it's worth noting for those who don't.

There are very few good reasons for actually taking a call when in a lecture. If the hospital calls to tell you that they have found a matching organ donor then it’s a good reason to take the call, but I can’t think of any other good example.

Many LCA delegates do system administration work and get calls at all times of the day and night when servers have problems. But that isn't an excuse for having a conversation in the middle of the lecture hall while the lecture is in progress (as has been done). If you press the green button on a phone you can then walk out of the lecture hall before talking; it's expected that mobile phone calls sometimes have signal problems at the start of the call, so no-one is going to be particularly surprised if it takes 10 seconds before you say hello.

As an aside, I think that the requirement for not disturbing other people depends on the number of people who are there to be disturbed. In tutorials there are fewer people and the requirements for avoiding phone calls are less strict. In BoFs the requirements are less strict again. But the above is based on behaviour I’ve witnessed in mini-confs and main lectures.

Smoking

It is the responsibility of people who consume substances to ensure that their actions don’t affect others. For smokers that means smoking far enough away from lecture halls that it’s possible for other delegates to attend the lecture without breathing in smoke. Don’t smoke in the lecture halls or near the doorways.

Also, using an e-cigarette is still smoking; don't do it in a lecture hall.

Photography

Unwanted photography can be harassment. I don't think there's a need to ask for permission to photograph people who harass others or break the law, but photographing people who merely break the social agreement as to what should be done in a lecture probably isn't justified. At a previous LCA a man wanted to ask so many questions at a keynote lecture that he had a page of written notes (seriously); that was obviously outside the expected range of behaviour – but it probably didn't justify the many people who photographed him.

A Final Note

I don’t think that LCA is in any way different from other conferences in this regard. Also I don’t think that there’s much that conference organisers can or should do about such things.

Related posts:

  1. A Linux Conference as a Ritual Sociological Images has an interesting post by Jay Livingston PhD...
  2. Suggestions and Thanks One problem with the blog space is that there is...
  3. Length of Conference Questions After LCA last year I wrote about “speaking stacks” and...

Tanguy Ortolo: Proof of address: use common sense!

8 January, 2015 - 18:54

As I have just moved to a new home, I had to declare my new address to all my providers, including banks and administrations which require a proof of address, which can be a phone, DSL or electricity bill.

Well, this is just stupid, as, by definition, one will only have such a bill after at least a month. Until then, the bank will keep an outdated address on file, and the mail they send may not reach the customer.

Now, bankers and employees of similar administrations, if you could use some common sense, I have some information for you: when someone moves to a new home, unless he is hosted by someone else, he is either a renter or an owner. Well, you should know that a renter has a contract that proves it, which is called a lease. And an owner has a paper that proves it, which is called a title or, before it has been issued by the administration, a certificate of sale. Now if you do not accept that as a proof of address, you just suck.

Besides, such zeal in checking one's address is pointless, as it is easy to get a proof of address without waiting for a phone, DSL or electricity bill (or to prove a false address, actually…) by just faking one. And as a reminder, at least in France, forgery is punishable by law but is defined as an alteration of truth which can cause a prejudice, which means that modifying an old electricity bill to prove your actual address is not considered forgery (but using the same means to prove a false address is, of course!).

MJ Ray: Social Network Wishlist

8 January, 2015 - 11:10

All I want for 2015 is a Free/Open Source Software social network which is:

  • easy to register on (no reCaptcha disability-discriminator or similar, a simple openID, activation emails that actually arrive);
  • has an email help address or online support or phone number or something other than the website which can be used if the registration system causes a problem;
  • can email when things happen that I might be interested in;
  • can email me summaries of what’s happened last week/month in case they don’t know what they’re interested in;
  • doesn’t email me too much (but this is rare);
  • interacts well with other websites (allows long-term members to post links, sends trackbacks or pingbacks to let the remote site know we’re talking about them, makes it easy for us to dent/tweet/link to the forum nicely, and so on);
  • isn’t full of spam (has limits on link-posting, moderators are contactable/accountable and so on, and the software gives them decent anti-spam tools);
  • lets me back up my data;
  • is friendly and welcoming and trolls are kept in check.

Is this too much to ask for? Does it exist already?

Chris Lamb: Web scraping: Let's move on

8 January, 2015 - 01:53

Every few days, someone publishes a new guide, tutorial, library or framework about web scraping, the practice of extracting information from websites where an API is either not provided or is otherwise incomplete.

However, I find these resources fundamentally deceptive — the arduous parts of "real world" scraping simply aren't in the parsing and extraction of data from the target page, the typical focus of these articles.

The difficulties are invariably in "post-processing": working around incomplete data on the page, handling errors gracefully and retrying in some (but not all) situations, keeping on top of layout/URL/data changes to the target site, not hitting your target site too often, logging into the target site if necessary and rotating credentials and IP addresses, respecting robots.txt, the target site being utterly braindead, keeping users meaningfully informed of scraping progress if they are waiting for it, the target site adding and removing data resulting in a null-leaning database schema, sane parallelisation in the presence of prioritisation of important requests, difficulties in monitoring a scraping system due to its implicitly non-deterministic nature, and general problems associated with long-running background processes in web stacks.

Et cetera.

In other words, extracting the right text on the page is by far the easiest and most trivial part, with little practical difference between using an admittedly cute jQuery-esque parsing library and just reaching for a blunt regular expression.
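To make that concrete, the extraction step really can be this blunt (a throwaway sketch; the URL and the markup pattern are invented):

# pull every <h2 class="title"> heading out of a hypothetical listing page
curl -s https://example.org/listing \
    | grep -oE '<h2 class="title">[^<]*</h2>' \
    | sed 's/<[^>]*>//g'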

It would be quixotic to simply retort that sites should provide "proper" APIs but I would love to see more attempts at solutions that go beyond the superficial.

Dirk Eddelbuettel: RcppCNPy 0.2.4

7 January, 2015 - 09:56

A new release of the RcppCNPy package is now on CRAN.

This release mostly solidifies and fixes things. Support for saving integer objects, which was expanded in release 0.2.3, was not entirely correct. Operations on big-endian systems were not up to snuff either.

Wush Wu helped in getting this right with very diligent testing and patching, particularly on big-endian hardware. We also got a pull request from Romain to improve const correctness on the Rcpp side of things. Last but not least, we were obliged by the CRAN maintainers not to assume one can call gzip from a system() call because, well, you guessed it.

Changes in version 0.2.4 (2015-01-05)
  • Support for saving integer objects was not correct and has been fixed.

  • Support for loading and saving on 'big endian' systems was incomplete, has been greatly expanded and corrected, thanks in large part to very diligent testing as well as patching by Wush Wu.

  • The implementation now uses const iterators, thanks to a pull request by Romain Francois.

  • The vignette no longer assumes that one can call gzip via system as the world's leading consumer OS may disagree.

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome, and the rcpp-devel mailing list off the R-Forge page for Rcpp may be the best place to start a discussion. GitHub issue tickets are also welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steve McIntyre: Bootstrapping arm64 in Debian

7 January, 2015 - 00:03

I promised to write about this a long time ago, oops... :-)

Another ARM port in Debian - yay!

arm64 is officially a release architecture for Jessie, aka Debian version 8. That's taken a lot of manual porting and development effort over the last couple of years, and it's also taken a lot of CPU time - there are ~21,000 source packages in Debian Jessie! As is often the case for a brand new architecture like arm64 (or AArch64, to use ARM's own terminology), hardware can be really difficult to get hold of. In time this will cease to be an issue as hardware becomes more commoditised, but in Debian we really struggled to get hold of equipment for a very long time during the early part of the port.

First bring-up in Debian Ports

To start with, we could use ARM's own AArch64 software models to build the first few packages. This worked, but only very slowly. Then Chen Baozi and the folks running the Tianhe-2 supercomputer project in Guangzhou, China contacted us to offer access to some arm64 hardware, and this is what Wookey used for bootstrapping the new port in the unofficial Debian Ports archive. This has now become the normal way for new architectures to get into Debian. We got most of the archive built in debian-ports this way, and we could then use those results to seed the initial core set of packages in the main Debian archive.

Second bring-up - moving into the main Debian archive

By the time that first Debian bring-up was done, ARM was starting to produce its own "Juno" development boards, and with the help of my boss^4 James McNiven we managed to acquire a couple of those machines for use as official Debian build machines. The existing machines in China were faster, but for various reasons quite difficult to maintain as official Debian machines. So I set up the Junos as buildds just before going to DebConf in August 2014. They ran very well, and (for dev boards!) were very fast and stable. They built a large chunk of the Debian archive, but as the release freeze for Jessie grew close we weren't quite there. There was a small but persistent backlog of un-built packages that were causing us issues, plus the Juno machines are/were not quite suitable as porter boxes for Debian developers all over the world to use for debugging their packages on the new architecture.

More horsepower - Linaro machines

This is where Linaro came to our aid. Linaro's goal is to help improve Free and Open Source Software on ARM, and one of the more recent projects in Linaro is a cluster of servers that are made available for software developers to use to get early access to ARMv8 (arm64) hardware. It's a great way for people who are interested in this new architecture to try things out, port their software or indeed just help with the general porting effort.

As Debian is seen as such an important part of the FLOSS ecosystem, we managed to negotiate dedicated access to three of the machines in that cluster for Debian's use and we set those up in October, shortly before the freeze for Jessie. Andy Doan spent a lot of his time getting these machines going for us, and then I set up two of them as build machines and one as the porter box we were still needing.

With these extra machines available, we quickly caught up with the ever-busy "Needs-Build" queue and we've got sufficient build power now to keep things going for the Jessie release. We were officially added to the list of release architectures at the Cambridge mini-Debconf in November, and all is looking good now!

And in the future?

I've organised the loan of another arm64 machine from AMD for Debian to use for further porting and/or building. We're also expecting that more and more machines will be coming out soon as vendors move on from prototyping to producing real customer equipment. Once that's happened, more kit will be available and everybody will be able to have arm64-powered computers in the server room, on their desk and even inside their laptop! Mine will be running Debian Jessie... :-)

Thanks!

There's been a lot of people involved in the Debian arm64 bootstrapping at various stages, so many that I couldn't possibly credit them all! I'll highlight some, though. :-)

First of all, Wookey's life has revolved around this port for the last few years, tirelessly porting, fixing and hacking out package builds to get us going. We've had loads of help from other teams in Debian, particularly the massive patience of the DSA folks with getting early machines up and running and the prodding of the ftpmaster, buildd and release teams when we've been grinding our way through ever more package builds and dependency loops. We've also had really good support from toolchain folks in Debian and ARM, fixing bugs as we've found them by stressing new code and new machines. We've had a number of other people helping by filing bugs and posting patches to help us get things built and working. And (last but not least!) thanks to all the folks who've helped us beg and borrow the hardware to make the Debian arm64 port a reality.

Rumours of even more ARM ports coming soon are entirely scurrilous... *grin*

Holger Levsen: 20150106-lts-december-2014

6 January, 2015 - 23:34
My LTS December

In December 2014 I spent 11h on Debian LTS work and managed to get six DLAs released and another one almost done... I did:

  • Release DLA 103-1, which had previously been prepared by Ben, Raphael and myself. So while for this release in December I only had to review one patch, I also had to build the package, provide preliminary .debs, ask for feedback, do some final smoke tests, write the announcement and do the upload. In total this still took 2.5h to "just release it"...
  • Doing DLA 114-1 for bind9 was rather straightforward,
  • As was DLA 116-1 for ntp, which I managed to release within one hour of the DSA for wheezy, despite having to rework the patch to apply cleanly due to some openssl differences...
  • I mentioned the bit about openssl because no one ever made a mistake with such patches. Seriously, I mean: I would welcome a public review system for security fixes. We are all humans and we all make mistakes. I do think my ntp patching was safe, but... mistakes happen.
  • DLA 118-1 was basically "just" a new 2.6.32.65 kernel update, which I almost released on my own, until (thankfully) Ben helped me with one patch from .65 not applying (a fix for a wrong fix which Debian had already fixed correctly), which was due to a patch not being removed correctly because of line-number changes. And while I was still wrapping my head around applying and de-applying these very similar looking patches, Ben had already committed the fix. I'm quite happy with this sharing of the work, due to the following benefits: a.) Ben can spend more time on important tasks and b.) LTS users get more kernel security fixes faster.
  • DLA 119-1 for subversion was a rather straightforward take from the wheezy DSAs again; I just had to make sure to also include the second, regression-fixing DSA.
  • And then, I failed to finish my work on a jqueryui update before 31c3 started. And 31c3 really only ended yesterday, when I helped put stuff on trucks and cleaned the big hall... So that's also why I'm only writing this blog post now, and not two weeks ago, as I probably should have. Anyway, according to the security-tracker jqueryui is affected by two CVEs, and that's wrong: CVE-2012-6662 does not affect the squeeze version. CVE-2010-5312 on the other hand does affect the squeeze version; I know how to fix it, I just lacked a quiet moment to prepare my fix properly and test it, and so I rather postponed doing so during 31c3... so, expect a DLA for jqueryui very soon now!

Thanks to everyone who is supporting Squeeze LTS in whatever form! Even just expressing that you, or the company or project you're working with, is using LTS is useful, as it's always nice to hear that one's work is used and appreciated. If you can contribute more, please do so. If you can't, that's also fine. It's free software after all.

Tiago Bortoletto Vaz: A few excerpts from The Invisible Committee's latest article

6 January, 2015 - 22:29

Just sharing some points from "2. War against all things smart!" and "4. Techniques against Technology" in The Invisible Committee's "Fuck off Google" article. You may want to get the "Fuck off Google" PDF and watch the recent talk at 31C3.

"...predicts The New Digital Age, “there will be people who resist adopting and using technology, people who want nothing to do with virtual profiles, online data systems or smart phones. Yet a government might suspect that people who opt out completely have something to hide and thus are more likely to break laws, and as a counterterrorism measure, that government will build the kind of ‘hidden people’ registry we described earlier. If you don’t have any registered social-networking profiles or mobile subscriptions, and on-line references to you are unusually hard to find, you might be considered a candidate for such a registry. You might also be subjected to a strict set of new regulations that includes rigorous airport screening or even travel restrictions.”"

I was introduced to the following observations about 5 years ago when reading "The Immaterial" by André Gorz. Now The Invisible Committee makes them even clearer in very few words:

"Technophilia and technophobia form a diabolical pair joined together by a central untruth: that such a thing as the technical exists. [...] Techniques can’t be reduced to a collection of equivalent instruments any one of which Man, that generic being, could take up and use without his essence being affected."

"[...] In this sense capitalism is essentially technological; it is the profitable organization of the most productive techniques into a system. Its cardinal figure is not the economist but the engineer. The engineer is the specialist in techniques and thus the chief expropriator of them, one who doesn’t let himself be affected by any of them, and spreads his own absence from the world every where he can. He’s a sad and servile figure. The solidarity between capitalism and socialism is confirmed there: in the cult of the engineer. It was engineers who drew up most of the models of the neoclassical economy like pieces of contemporary trading software."

Bálint Réczey: Kodi from Debian

6 January, 2015 - 19:13

The well known XBMC Media Center has been renamed to Kodi with the 14.0 "Helix" release, and following upstream's decision the xbmc packages are renamed to kodi as well. Debian ships a slightly changed version of XBMC under the "XBMC from Debian" name, and following that tradition, ladies and gentlemen, let me introduce to you "Kodi from Debian":

Kodi from Debian main screen

As of today, Kodi from Debian uses the FFmpeg packages instead of the Libav ones which were used by XBMC from Debian. The reason for the switch was upstream's decision to drop the Libav compatibility code, combined with FFmpeg becoming available again as a Debian package (thanks to Andreas Cadhalpun). It is worth noting that while upstream Kodi 14.0 downloads and builds FFmpeg 2.4.4 by default, Debian already ships FFmpeg 2.5.1, and FFmpeg under Kodi will be updated independently of Kodi thanks to the packaging mechanism.

The new kodi packages have been uploaded to the NEW queue and are waiting to be accepted by the FTP Masters, who are busy preparing Jessie for release (many thanks to them for their hard work!), but in the meantime you can install kodi from https://people.debian.org/~rbalint/ppa/xbmc-ffmpeg/.

Happy recovery from the holidays!

Steve McIntyre: UEFI Debian installer work for Jessie, part 4

6 January, 2015 - 18:18

Time for another update on my work for UEFI improvements in Jessie!

I now have a mixed 32- and 64-bit UEFI netinst up and running right now, which will boot and install on the Asus X205TA machine I have. Since the last build, I've added 64-bit (amd64) support and added CONFIG_EFI_MIXED in the kernel so that the 64-bit kernel will also work with a 32-bit UEFI firmware. Visit http://cdimage.debian.org/cdimage/unofficial/efi-development/jessie-upload2/ to download and test the image. There are a few other missing pieces yet for a complete solution, but I'm getting there...!
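If you want to confirm that a given kernel build has the mixed-mode support, the option shows up in the usual Debian kernel config file (a quick check, assuming the standard /boot/config-* layout):

grep EFI_MIXED /boot/config-"$(uname -r)"
# CONFIG_EFI_MIXED=y means the 64-bit kernel can start from 32-bit UEFI firmware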

WARNING: this CD is provided for testing only. Use at your own risk! If you have appropriate (U)EFI hardware, please try this image and let me know how you get on, via the debian-cd and debian-boot mailing lists.

Russ Allbery: Review: Code Complete, Second Edition

6 January, 2015 - 12:09

Review: Code Complete, Second Edition, by Steve McConnell

Publisher: Microsoft
Copyright: June 2004
ISBN: 0-7356-1967-0
Format: Kindle
Pages: 960

As mentioned in the title, this is a review of the second edition of Code Complete, published in 2004. There doesn't appear to be a later edition at the time of this writing.

I should say, as a prefix to this review, that I'm the sort of person who really likes style guides. When learning a language, a style guide is usually the second or third document I read. I enjoy debates over the clearest way to express a concept in code, trying to keep all the code in a large project consistent, and discussing the subtle trade-offs that sit on the boundary between mechanical style issues and the expressiveness of programming. I try to spend some time reading good code and getting better at expressing myself in code.

Presumably, therefore, I'm the target audience for this book. It sounded good from the descriptions, so I picked it up during one of the Microsoft Press sales. The stated goal of Code Complete is to collect in one place as much as possible of the oral tradition and lore of the programming field, to try to document and communicate the techniques and approaches that make someone a good programmer. The table of contents sounds like a style guide, with entire sections on variables and statements in addition to topics like how to improve existing code and how to design a new program.

If you're starting to think that a 960 page style guide sounds like a bad idea, you're wiser than I. (In my defense, I grabbed this as an ebook and didn't realize how large it was before I bought it.)

I have not actually finished this book. I hate to do this: I don't like reviewing books I haven't finished (this will be the first), and I hate starting books and not finishing them. (This is probably not particularly wise, since some books aren't worth finishing, but I've gotten into a rhythm of reading and reviewing that works for me, so I try not to mess with it.) But I've been trying to finish this book off and on for about a year, I don't think it's worth the time investment, and I think I've gotten far enough into it to provide some warnings for others who might be deceived by the very high ratings it gets on Amazon and other places.

The primary problem with Code Complete is its sheer, mind-numbing comprehensiveness. It tries to provide a set of guidelines and a checklist to think about at each level of writing code. This is one of those ideas that might sound good on paper, but which completely doesn't work for me. There is no way I'm going to keep this many rules in my head, in the form of rules, while programming. Much of good style has to be done by feel, and the book I'm looking for is one that improves my feel and my sense of taste for code.

What Code Complete seems to provide instead is a compilation of every thought that McConnell has ever had about programming. There's a lot of basic material, a few thoughtful statements, a ton of style advice, an endless compilation of trade-offs and concepts that one should keep in mind, and just a massive, overwhelming pile of stuff.

Each chapter (and there are a lot of chapters) ends in a checklist of things that you should think about when doing a particular programming task. To give you a feel for the overwhelming level of trivia here, this is the checklist at the end of the chapter where I stopped reading, on quality assurance in software. This is one picked at random; a lot of them are longer than this.

  • Have you identified specific quality characteristics that are important to your project?

  • Have you made others aware of the project's quality objectives?

  • Have you differentiated between external and internal quality characteristics?

  • Have you thought about the ways in which some characteristics might compete with or complement others?

  • Does your project call for the use of several different error-detection techniques suited to finding several different kinds of errors?

  • Does your project include a plan to take steps to assure software quality during each stage of software development?

  • Is the quality measured in some way so that you can tell whether it's improving or degrading?

  • Does management understand that quality assurance incurs additional costs up front in order to save costs later?

I'm not saying those are bad things to think about with quality assurance, but you may notice a few issues immediately. They're very general and vague, they're not phrased in a particularly compelling or memorable way, and there are a lot of them. This falls between two stools: it's too much for the programmer who is thinking about quality as part of an overall project but not focusing on it (particularly when you consider that the book is full of checklists like this for everything from variable naming to how to structure if statements to program debugging), but it's not nearly specific or actionable enough for someone who is focusing on quality assurance.

It's not that the information isn't organized: there's a lot of structure here. And there are bits and pieces here that are occasionally interesting. McConnell is very data-driven and tries to back up recommendations with research on error rates and similar concrete measurements. It's just insufficiently filtered and without elegant or memorable summary. There is far too much here, an overwhelming quantity, and hopelessly mixed between useful tidbits and obvious observations that anyone who has been programming for a while would pick up, all presented in the same earnest but dry tone.

It didn't help that there's a lot here I didn't agree with. Some of that is to be expected: I've never agreed completely with any style guide. But McConnell kept advocating variable and function naming conventions that I find rather ugly and tedious, and the general style of code he advocates feels very "bureaucratic" to me. It's not exactly wrong, but one of the things that I look for in style discussions is to be inspired by the elegant and simple way someone finds to phrase something in code. A lot of the code in this book just felt mind-numbing. It's functional, but uninteresting; perfectly adequate for a large project, but not the sort of discussion that inspires me to improve the quality of my craft.

So, I didn't finish this. I gave up about halfway through. It's frustrating, since I was occasionally finding an interesting nugget of information. But they were too few and far between, and the rest of the book was mostly... boring. It's possible that I just know too much about programming to be the person for whom that McConnell was writing this book. It's certainly true that the book has not aged particularly well; it's focused on fairly old-school languages (C, C++, Java, and Visual Basic) and says almost nothing about modern language techniques, although it does have a bit about extreme programming. But whatever the reason is, it didn't work for me at all. I would rate it as one of the worst books about programming I've tried to read. And that's notably different enough from its reviews that it seems worth throwing this out there as a warning.

I'm quite disappointed, since I'd heard nothing but praise for this book before picking it up. But it's not for me, and I'm now dubious of its value for any programmer outside of a fairly narrow, large-team, waterfall development process involving large numbers of people writing very large quantities of code in languages that aren't very expressive. And, well, in that situation I think one would get more benefit from changing that environment than reading this book.

Rating: unfinished

Joey Hess: a bug in my ear

6 January, 2015 - 08:36

True story: Two days ago, as I was about to drift off to sleep at 2 am, a tiny little bug flew into my ear. Right down to my eardrum, which it fluttered against with its wings.

It was a tiny little moth-like bug, the kind you don't want to find in a bag of flour, and it had been beating against my laptop screen a few minutes before.

This went on for 20 minutes, in which I failed to get it out with a Q-tip and by shaking my head. It is very weird to have a bug flapping in your head.

I finally gave up and put in eardrops, and stopped the poor thing flapping. I happen to know these little creatures mass almost nothing, and rapidly break up into nearly powder when dead. So while I've not had any bug bits come out, I'm going by the way my ear felt a little stopped up yesterday, and just fine today, and guessing it'll be ok. Oh, and I've been soaking it in the tub and putting in eardrops for good measure.

If I've seemed a little distracted lately, now you know why!

Steve Kemp: Here we go again.

6 January, 2015 - 07:00

Once upon a time I worked from home for seven years, for a company called Bytemark. Then, due to a variety of reasons, I left. I struck out for adventures and pastures new, enjoyed the novelty of wearing clothes every day, and left the house a bit more often.

Things happened. A year passed.

Now I'm working for Bytemark again, although there are changes and the most obvious one is that I'm working in a shared-space/co-working setup, renting a room in a building near my house instead of being in the house.

Shame I have to get dressed, but otherwise it seems to be OK.

Dirk Eddelbuettel: BH release 1.55.0-3

6 January, 2015 - 06:16

Right on the heels of yesterday's BH release 1.55.0-2 bringing Boost Fusion, we now have release 1.55.0-3 bringing Boost Graph. To recap, BH is our CRAN package providing (a large part of the) Boost C++ libraries as a set of template headers for use by R and of course Rcpp.

And as a small project I am working on--one which should now be that much closer to release--needed not only Boost Fusion but also Boost Graph, I had to bother the CRAN maintainers twice in two days. This new version closes issue ticket 9, and may be of interest to other packages such as the venerable RBGL, formerly on CRAN and now a BioConductor package, which includes its own copy of this graph library (plus depends).

A brief summary of changes from the NEWS file is below.

Changes in version 1.55.0-3 (2015-01-04)
  • Added Boost Graph requested in GH ticket #9 by Dirk for RcppStreams

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Raphaël Hertzog: My Free Software Activities for December 2014

5 January, 2015 - 20:43

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 20 hours on Debian LTS. I did the following tasks:

  • CVE triage: I pushed 47 commits to the security tracker this month. Due to this, I submitted two wishlist bugs against the security tracker: #772927 and #772961.
  • I released DLA-106-1 which had been prepared by Osamu Aoki.
  • I released DLA-111-1 fixing one CVE on cpio.
  • I released DLA-113-1 and DLA-114-1 on bsd-mailx/heirloom-mailx fixing one CVE for the former and two CVE for the latter.
  • I released DLA-120-1 on xorg-server. This update alone took more than 6h to backport all the patches, fixing a massive set of 12 CVE.

Not in the paid hours, but still related to Debian LTS, I kindly asked Linux Weekly News to cover Debian LTS in their security page and this is now live. You will see DLA on the usual security page and there’s also a dedicated page tracking this: http://lwn.net/Alerts/Debian-LTS/

I modified the LTS wiki page to have a dedicated Funding sub-page. This avoids having a direct link to Freexian's offer on the main LTS page (which surprised a few people), allows giving some more background information, and makes it possible for other persons/companies to get listed in the same way (since there's no exclusive relationship between Debian and Freexian here!).

And I also answered some questions from Nguyen Cong (a new LTS contributor, employed by Toshiba with explicit permission to contribute to LTS during work hours! \o/) on IRC, on ask.debian.net (again) and on the mailing list! It's great to see the LTS project expanding beyond current members of the Debian project.

Distro Tracker

I want to again give some more priority to Distro Tracker, at least to complete the transition from the old PTS to this new service… last month was a bit better than November, but not by much.

I reviewed a patch in #771604 (about displaying long descriptions), I merged another patch in #757443 (fixing bad markup which rendered the page unusable with Konqueror), and I fixed #760382, where packages that had gone through NEW would never lose their version in NEW.

Kali related contributions

I’m not covering my Kali work here but only some things which got contributed upstream (or to Debian).

First I ensured that we could build the Kali ISO with live-build 4.x in jessie. This resulted in multiple patches merged in the Debian live project (1 2 3 4). I also submitted a patch for a regression in the handling of conditionals in package lists; it got dropped and the regression was fixed differently instead. I also filed #772651 to report a problem in how live-build decided the variant of the live-config package to install.

Kali has forked the sysvinit package to be able to disable services by default, and I was investigating how to port this feature to the new systemd world. It turns out systemd has such a feature natively: it's called preset files. Unfortunately it's not usable in Debian because Debian does not call systemctl preset during package installation. I filed bug #772555 to get this fixed (in Stretch; it's too late for Jessie :-().
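For illustration, a preset file is just an ordered list of enable/disable rules, with the first match winning; a minimal sketch of a "disable everything by default" policy (the file name and the whitelisted service are made up):

# whitelist what you want, then disable everything else
cat > /etc/systemd/system-preset/00-disable-all.preset <<'EOF'
enable ssh.service
disable *
EOF
systemctl preset-all    # apply the presets to all installed units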

Saltstack

I’m using salt to automate some administration task in Kali, at home and at work. I discovered recently that the project tries to collect “Salt Formulas”: those are ready to use instructions for as many services as possibles.

I started using this for some simple services and quickly felt the need to extend “salt-formula”, the set of states used to configure salt with salt. I submitted 5 pull requests (#73 and #74 to configure salt in standalone mode, #75 to enable the upstream package repositories, #76 to automatically download and enable the desired salt formulas, #77 for some bugfixes) and they have all been merged in less than 24 hours (that’s the kind of thing that motivates you to contribute again in the future!).

I also submitted a bug fix for samba-formula and a bug report in salt itself (#19180).

BTW I have some salt states to set up schroot and sbuild. I will try to package those as proper salt formulas in the future…

Misc stuff

Mailing list governance. In Debian, we often complain about meta-discussion on mailing lists (i.e. discussions about how we discuss together) and at the same time we need to have that kind of discussion from time to time. So I suggested hosting those discussions in a new mailing list; to get this new list set up, our rules require other people to be interested in having it. The idea had some support when we discussed it on debian-private, so I relaunched it on debian-project while filing the official request in the BTS: #772645. Unfortunately, only one person seconded the request. So if you're interested in pursuing this idea, speak up now…

Sponsorship. I sponsored another Galette plugin this month: galette-plugin-fullcard. Thanks to François-Régis Vuillemin for his work.

Publican. Following one of my bug reports against Publican, and with the help of the upstream author, we identified the problem and I submitted a patch.

Thanks

See you next month for a new summary of my activities.
