Planet Debian

Planet Debian - http://planet.debian.org/

Laura Arjona: New GPG Key!

14 July, 2014 - 03:47

Achievement unlocked: I have a new GPG key:

0xF22674467E4AF4A3

pub   4096R/7E4AF4A3 2014-07-13 [expires: 2016-07-12]
Fingerprint = 445E 3AD0 3690 3F47 E19B  37B2 F226 7446 7E4A F4A3
uid                  Laura Arjona Reina <laura.arjona@upm.es>
uid                  Laura Arjona Reina <larjona@fsfe.org>
uid                  Laura Arjona Reina <larjona99@gmail.com>
sub   3072R/CC706B74 2014-07-13 [expires: 2016-07-12]
sub   3072R/7E51465F 2014-07-13 [expires: 2016-07-12]
sub   4096R/74C23D6E 2014-07-13 [expires: 2016-07-12]

The master key is 4096-bit and stored in a safe place; the two 3072-bit subkeys are stored on an FSFE smartcard (which cannot hold 4096-bit keys).

I have carefully followed the FSFE SmartCard Howto and “Creating the perfect GPG keypair” by Alex Cabal for strengthening hash preferences and creating a revocation certificate.
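As an aside for anyone comparing listings like the one above: the key IDs shown are just the tail of the fingerprint. For an OpenPGP v4 key, the long (64-bit) key ID is the last 16 hex digits of the fingerprint, and the short ID on the pub line is the last 8, which can be checked against the fingerprint quoted above:

```python
# For OpenPGP v4 keys the key ID is derived from the fingerprint:
# the long ID is its last 16 hex digits, the short ID the last 8.
fingerprint = "445E 3AD0 3690 3F47 E19B  37B2 F226 7446 7E4A F4A3"

hexdigits = fingerprint.replace(" ", "")
long_id = "0x" + hexdigits[-16:]   # the 0xF22674467E4AF4A3 quoted above
short_id = hexdigits[-8:]          # the 7E4AF4A3 on the pub line

print(long_id, short_id)
```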

It seems everything works as intended. Passphrase is strong and this time I will not forget it.

As a first celebration, half a litre of ice cream is waiting for me after dinner :)

If you know me and are around Madrid, please send me an encrypted mail as a test or for normal communication, and ping me to meet and sign keys :)

One more step towards involvement in Debian and free software, controlling my digital life and communications, and becoming familiar with these technologies so I can teach them to my son as ‘the natural way’.

Yay!



Steve Kemp: A brief twitter experiment

14 July, 2014 - 03:08

So I've recently posted a few links on Twitter, and I see followers clicking them. But I also see random hits.

Tonight I posted a link to http://transient.email/, a domain I use for "anonymous" emailing, specifically to see which bots hit the URL.

Within two minutes I had 15 visitors, the first few of which were:

IP              User-Agent                                                                               Request
199.16.156.124  Twitterbot/1.0;                                                                          GET /robots.txt
199.16.156.126  Twitterbot/1.0;                                                                          GET /robots.txt
54.246.137.243  python-requests/1.2.3 CPython/2.7.2+ Linux/3.0.0-16-virtual                              HEAD /
74.112.131.243  Mozilla/5.0 ();                                                                          GET /
50.18.102.132   Google-HTTP-Java-Client/1.17.0-rc (gzip)                                                 HEAD /
50.18.102.132   Google-HTTP-Java-Client/1.17.0-rc (gzip)                                                 HEAD /
199.16.156.125  Twitterbot/1.0;                                                                          GET /robots.txt
185.20.4.143    Mozilla/5.0 (compatible; TweetmemeBot/3.0; +http://tweetmeme.com/)                       GET /
23.227.176.34   MetaURI API/2.0 +metauri.com                                                             GET /
74.6.254.127    Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp);     GET /robots.txt

So what jumps out? The Twitterbot makes several requests for /robots.txt, but never actually fetches the page itself - which is interesting, because there is indeed a prohibition in the supplied /robots.txt file.
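The post doesn't quote the file, but a blanket prohibition in /robots.txt - the kind the Twitterbot was apparently checking for - looks like this:

```
User-agent: *
Disallow: /
```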

A surprise was that both Google and Yahoo seem to follow Twitter links in almost real-time. Though the Yahoo spider parsed and honoured /robots.txt, the Google spider seemed to only make HEAD requests - and never actually looked for the content or the robots file.

In addition to this a bunch of hosts from the Amazon EC2 space made requests, which was perhaps not a surprise. Some automated processing, and classification, no doubt.

Anyway beer. It's been a rough weekend.

Dimitri John Ledkov: Hacking on launchpadlib

13 July, 2014 - 20:32
So here is a quick sample of my progress playing around with launchpadlib using lp-shell from lptools:
In [1]: lp
Out[1]: <launchpadlib.launchpad.Launchpad at 0x7f49ecc649b0>

In [2]: lp.distributions
Out[2]: <launchpadlib.launchpad.DistributionSet at 0x7f49ddf0e630>

In [3]: lp.distributions['ubuntu']
Out[3]: <distribution at https://api.launchpad.net/1.0/ubuntu>

In [4]: lp.distributions['ubuntu'].display_name
Out[4]: 'Ubuntu'

In [5]: lp.distributions['ubuntu'].summary
Out[5]: 'Ubuntu is a complete Linux-based operating system, freely available with both community and professional support.'

In [7]: import sys; print(sys.version)
3.4.1 (default, Jun 9 2014, 17:34:49)
[GCC 4.8.3]

There is not much yet, but it's a start. A python3 port of launchpadlib is coming soon. It has been attempted a few times before and I am leveraging that work. Porting this stack has proven to be the most difficult python3 port I have ever done. But there is always python-libvirt that still needs porting ;-)

Some of the above exists only as merge proposals against launchpadlib & lazr.restfulclient, and requires modules that are not yet packaged in the archive. When trying it out, I'm still getting a lot of run-time assertions and failures that haven't been picked up by e.g. pyflakes3, and none of it is unit-tested yet.

Russ Allbery: Review: Neptune's Brood

13 July, 2014 - 12:54

Review: Neptune's Brood, by Charles Stross

Series: Freyaverse #2
Publisher: Ace
Copyright: July 2013
ISBN: 1-101-62453-1
Format: Kindle
Pages: 325

Neptune's Brood is set in the same universe as Saturn's Children, but I wouldn't call it a sequel. It takes place considerably later, after substantial expansion of the robot civilization to the stars, and features entirely different characters (or, if there was overlap, I didn't notice). It also represents a significant shift in tone: while Saturn's Children is clearly a Heinlein pastiche and parody, Neptune's Brood takes its space opera more seriously. There is some situational humor — assault auditors, for example — but this book is played mostly straight, and I detected little or no Heinlein. This is Stross fleshing out his own space opera concept.

This being Stross, that concept is not exactly conventional. This is a space opera about economics. Specifically, it's a space opera about interstellar economics, a debt pyramid, and a very interesting remapping of the continual growth requirements of capitalism to the outward expansion of colonization. The first-person protagonist comes from a "family" (as in Saturn's Children, the concept exists but involves rather more aggressive control of the instantiated "children") of bankers, but she is a forensic accountant and historian who specializes in analysis of financial scams. As you might expect, this is a significant clue about the plot.

Neptune's Brood opens with Krina in search of her sister. She is supposed to be an itinerant scholar, moving throughout colonized space to spend some time with various scattered sisters, spreading knowledge and expanding her own. But it's clear from the start of the book that something else is going on, even before an assassin with Krina's face appears on her trail. Unfortunately, it takes roughly a third of the book to learn just what is happening beneath the surface, and most of that time is spent in a pointless interlude on a flying cathedral run by religious fanatics.

The religion is a callback to Saturn's Children: robots who are trying to spread original humanity (the Fragiles) to the stars. This mostly doesn't go well, and is going particularly poorly for the ship that Krina works for passage on. But this is a brief gag that I thought went on much too long. The plot happens to Krina for this first section of the book rather than the other way around, little of lasting significance other than some character introductions occurs, and the Church itself, while playing a minor role in the later plot, is not worth the amount of attention that it gets. The best parts of the early book are the interludes in which Krina explains major world concepts to the reader. These are absolutely blatant infodumping, and I'm not sure how Stross gets away with them, but somehow he does, at least for me. They remind me of some of his blog posts, except tighter and fit into an interesting larger structure.

Thankfully, once Krina finally arrives on Shin-Tethys, the plot improves considerably. There was a specific moment for me when the book became interesting: when Krina finds her sib's quarters in Shin-Tethys and analyzes what she finds there. It's the first significant thing in the book that she does rather than have done to her or thrust upon her, and she's a much better character when she's making decisions. This is also about the point where Stross starts fully explaining slow money, which is key to both the economics and the plot, and the plot starts to unwind its various mysteries and identify the motives of the players.

Even then, Krina suffers from a lack of agency. Only at rare intervals does she get a chance to affect the story. Most of what she did of relevance to this book she did in the past, and while those descriptions of the backstory are interesting, they don't entirely make up for a passive protagonist. Thankfully, the other characters are varied and interesting enough, and the political machinations and cascading revelations captivating enough, that the last part of the book was very satisfying even with Krina just along for the ride.

This is a Stross novel, so it's full of two-dollar technical words mixed with technobabble. However, it shares with Saturn's Children the recasting of robots as the norm and fleshy humans as the exception, which means much of the technobabble is a straight substitution for our normal babble about meaty bodies and often works as an alienation technique. That makes it a bit more tolerable for me, although I still wished Stross would turn down the manic vocabulary in places. This bothers some people more than others; if you had no trouble with Accelerando, Neptune's Brood will pose no problems.

I don't think the first section of this book was successful, but I liked the rest well enough to recommend it. If you like your space opera with a heavy dose of economics, a realistic attitude towards deep space exploration without faster-than-light technology, and a realistic perspective on the hostility of alien planets to Earth life, Neptune's Brood is a good choice. And any book that quotes David Graeber's Debt, and whose author has clearly paid attention to its contents, wins bonus points from me.

Rating: 8 out of 10

Dirk Eddelbuettel: RcppArmadillo 0.4.320.0

13 July, 2014 - 07:45
While I was out at the (immensely impressive and equally enjoyable) useR! 2014 conference at UCLA, Conrad provided a bug-fix release 4.320 of Armadillo, the nifty templated C++ library for linear algebra. I quickly rolled that into RcppArmadillo release 0.4.320.0 which has been on CRAN and in Debian for a good week now.

This release fixes some minor things with sparse and dense eigensolvers (as well as one RNG issue, probably of lesser interest to R users deploying the RNGs from R) as shown in the NEWS entry below.

Changes in RcppArmadillo version 0.4.320.0 (2014-07-03)
  • Upgraded to Armadillo release Version 4.320 (Daintree Tea Raider)

    • expanded eigs_sym() and eigs_gen() to use an optional tolerance parameter

    • expanded eig_sym() to automatically fall back to standard decomposition method if divide-and-conquer fails

    • automatic installer enables use of C++11 random number generator when using gcc 4.8.3+ in C++11 mode

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Paul Tagliamonte: Saturday's the new Sunday

12 July, 2014 - 08:41

Hello, World!

For those of you who enforce my Sundays on me (keep doing that, thank you!), I’ll be swapping my Saturdays with my Sundays.

That’s right! In this brave new world, I’ll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday, now!

This means, as a logical result, I will not be around tomorrow, Saturday.

Much love.

Matt Brown: GPG Key Management Rant

12 July, 2014 - 08:17

2014 and it’s still annoyingly hard to find a reasonable GPG key management system for personal use… All I want is to keep the key material isolated from any Internet connected host, without requiring me to jump through major inconvenience every time I want to use the key.

An HSM/Smartcard of some sort is an obvious choice, but they all suck in their own ways:
* FSFE smartcard – it’s a smartcard, so it requires a reader, which are generally not particularly portable compared to a USB stick.
* Yubikey Neo – restricted to 2048 bits, doesn’t allow imports of primary keys (only subkeys), so you either generate on device and have no backup, or maintain some off-device primary key with only subkeys on the Neo, negating the main benefits of it in the first place.
* Smartcard HSM – similar problems to the Neo, plus not well supported by GPG (needs 2.0 with specific supporting module versions).
* Cryptostick – made by some Germans, sounds potentially great, but perpetually out of stock.

Which leaves basically only the “roll your own” dm-crypt+LUKS usb stick approach. It obviously works well, and is what I currently use, but it’s a bunch of effort to maintain, particularly if you decide, as I have, that the master key material can never touch a machine with a network connection. The implication is that you now need to keep an airgapped machine around, and maintain a set of subkeys that are OK for use on network connected machines to avoid going mad playing sneakernet for every package upload.

The ideal device would be a USB form factor, supporting import of 4096 bit keys, across all GPG capabilities, but with all crypto ops happening on-device, so the key material never leaves the stick once imported. Ideally also cheap enough (e.g. ~100ish currency units) that I can acquire two for redundancy.

As far as I can tell, such a device does not exist on this planet. It’s almost enough to make a man give up on Debian and go live a life of peace and solitude with the remaining 99.9% of the world who don’t know or care about this overly complicated mess of encryption we’ve wrought for ourselves.

end rant.

Steve Kemp: A partial perl-implementation of Redis

12 July, 2014 - 04:36

So recently I got into trouble running Redis on a host, because the data no longer fits into RAM.

As an interim measure I fixed this by bumping the RAM allocated to the guest, but a real solution was needed. I figure there are three real alternatives:

  • Migrate to Postgres, MySQL, or similar.
  • Use an alternative Redis implementation.
  • Do something creative.

Looking around I found a couple of Redis-alternatives, but I was curious to see how hard it would be to hack something useful myself, as a creative solution.

This evening I spotted Protocol::Redis, which is a perl module for decoding/encoding data to/from a Redis server.

Thinking "Ahah" I wired this module up to AnyEvent::Socket. The end result was predis - a Perl implementation of Redis.

It's a limited implementation which stores data in an SQLite database, and currently has support for:

  • get/set
  • incr/decr
  • del/ping/info

It isn't hugely fast, but it is fast enough, and it should be possible to use alternative backends in the future.

I suspect I'll not add sets/hashes, but it could be done if somebody was keen.
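predis itself is Perl (Protocol::Redis doing the wire format, AnyEvent::Socket doing the networking), but the Redis protocol it has to speak is simple enough to sketch. Purely as an illustration - this is not Steve's code - here is what the framing looks like in Python:

```python
def encode_command(*args):
    """Encode a client command as a RESP array of bulk strings."""
    out = [f"*{len(args)}\r\n"]
    for arg in args:
        s = str(arg)
        out.append(f"${len(s)}\r\n{s}\r\n")
    return "".join(out).encode()

def encode_reply(value):
    """Encode a server reply: None -> null bulk string,
    int -> integer reply, anything else -> bulk string."""
    if value is None:
        return b"$-1\r\n"
    if isinstance(value, int):
        return f":{value}\r\n".encode()
    s = str(value)
    return f"${len(s)}\r\n{s}\r\n".encode()

# A SET command as it appears on the wire, and a GET reply:
print(encode_command("SET", "foo", "bar"))
print(encode_reply("bar"))
```

A server like predis just has to parse the incoming arrays, look the key up in its backend (SQLite here), and emit replies in this format.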

Russ Allbery: Net::Duo 1.00

12 July, 2014 - 04:22

This is the first release of Net::Duo, which provides an object-oriented Perl interface for the Duo Security REST APIs. It attempts to abstract some of the API details and provide an object-oriented view of the returned objects in order to make use of the API in Perl code more natural than dealing with JSON data structures directly. Currently, some parts of the Auth and Admin APIs are implemented, along with generic methods to call any of the JSON-based APIs.

The approach I took with this module was a bit of a science experiment, and I'm still not entirely sure what I think about the results. Duo Security offers sample Perl code that provides the equivalent of the call and call_json Net::Duo methods but stops there. One sends in data structures and gets back data structures from JSON and manipulates everything in that format.

I prefer a more object-oriented style, and want the module to do a bit more of the work for me, so this implementation wraps some of the APIs in objects with method calls. For updates, there are setters for the object itself and then a commit method to push the changes to Duo. This requires more implementation effort, and each API that should get richer treatment has to be modelled, but the resulting code looks like more natural object-oriented code.

I wasn't completely sure going in whether the effort-to-reward tradeoff made sense, and having finished the module sufficiently for Stanford's immediate needs, I'm still not sure. It was certainly more effort to write the base module this way, but on the other hand it also meant that I could map Perl notions of true and false to Duo's and provide much simpler methods for common operations. I still think this will make the code more maintainable in the long run, but I think it's within the margin of difference of opinion.
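As a sketch of the pattern described above - setters that accumulate changes locally, plus a commit method that pushes them - here is a hypothetical rendering in Python (Net::Duo itself is Perl, and the class, client, and endpoint names here are made up):

```python
# Hypothetical sketch of the setter-plus-commit style described above.
# "client" is assumed to be any object with a post(url, body) method.
class User:
    def __init__(self, client, data):
        self._client = client
        self._data = dict(data)   # state as fetched from the API
        self._changes = {}        # setters accumulate pending changes here

    def set_name(self, name):
        self._changes["name"] = name

    def commit(self):
        # Push only the accumulated changes, then fold them into the
        # local copy so the object reflects the server state.
        self._client.post(f"/admin/v1/users/{self._data['user_id']}",
                          self._changes)
        self._data.update(self._changes)
        self._changes = {}
```

Nothing touches the network until commit() is called, which is what makes the object-oriented style feel natural compared to shipping raw JSON structures around.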

Regardless, you can get the latest version from the Net::Duo distribution page and shortly from CPAN as well.

Joseph Bisch: GSoC Update

12 July, 2014 - 00:30

It has been a couple of months since my last GSoC update and a lot has happened since then.

You can view the web interface at metrics.debian.net. There is a dynamic interface viewable with JavaScript enabled, and a static interface viewable with JS disabled. I am currently working on the dynamic interface.

In the weeks since my last blog post I have accomplished a lot:

  • There is support for pull and push metrics. Pull metrics run on the debmetrics server and pull data from some source; push metrics run on a remote server and push data to the debmetrics server via HTTP POST.
  • The static interface portion of the flask code uses a general route, so the flask code does not need to be modified every time a metric is added.
  • There is a makefile that runs manifest2index.py and manifest2orm.py to generate the index page of the web interface and the SQLAlchemy model code respectively.
  • There is a config file that allows the user to set the location of the manifest, pull_scripts, and graph_scripts directories.
  • There is a debmetrics.wsgi file for easy deployment of debmetrics.

There is a single layout.html that is the base layout for all the other pages.

Nosetests is used for tests. Sphinx and readthedocs are used for the documentation. I wrote a minified_grabber script that downloads all the JS and css files that are not included with debmetrics itself. That helps make deployment easy.

If you want to take a look at the code you can view the repository.

I still have more work to do before the end of GSoC, mostly with the dynamic web interface and packaging of debmetrics.

To keep up to date on my progress, you can either read this blog, or you can read the soc-coordination mailing list.

Russell Coker: Improving Computer Reliability

11 July, 2014 - 12:51

In a comment on my post about Taxing Inferior Products [1] Ben pointed out that most crashes are due to software bugs. Both Ben and I work on the Debian project and have had significant experience of software causing system crashes for Debian users.

But I still think that the widespread adoption of ECC RAM is a good first step towards improving the reliability of the computing infrastructure.

Currently when software developers receive bug reports they always wonder whether the bug was caused by defective hardware. So when bugs can’t be reproduced (or can’t be reproduced in a way that matches the bug report) they often get put in a list of random crash reports and no further attention is paid to them.

When a system has ECC RAM and a filesystem that uses checksums for all data and metadata, we can have greater confidence that random bugs aren’t due to hardware problems. For example, if a user reports an unrepeatable file corruption bug that occurred when using the Ext3 filesystem on a typical desktop PC, I’ll wonder about the reliability of the storage and RAM in their system. If the same bug report came from someone who had ECC RAM and used the ZFS filesystem, then I would be more likely to consider it a software bug.

The current situation is that every part of a typical PC is unreliable. When a bug can be attributed to one of several pieces of hardware, the OS kernel and even malware (in the case of MS Windows) it’s hard to know where to start in tracking down a bug. Most users have given up and accepted that crashing periodically is just what computers do. Even experienced Linux users sometimes give up on trying to track down bugs properly because it’s sometimes very difficult to file a good bug report. For the typical computer user (who doesn’t have the power that a skilled Linux user has) it’s much worse, filing a bug report seems about as useful as praying.

One of the features of ECC RAM is that the motherboard can inform the user (either at boot time, after a NMI reboot, or through system diagnostics) of the problem so it can be fixed. A feature of filesystems such as ZFS and BTRFS is that they can inform the user of drive corruption problems, sometimes before any data is lost.

My recommendation of BTRFS in regard to system integrity does have a significant caveat: currently the reliability decrease due to crashes outweighs the reliability increase due to checksums. This isn’t all bad, because at least when BTRFS crashes you know what the problem is, and BTRFS is rapidly improving in this regard. When I discuss BTRFS in posts like this one I’m considering the theoretical issues related to the design, not the practical issues of software bugs. That said, I’ve twice had a BTRFS filesystem seriously corrupted by a faulty DIMM on a system without ECC RAM.


Steve Kemp: Blogspam moved, redis alternatives being examined

10 July, 2014 - 16:45

As my previous post suggested I'd been running a service for a few years, using Redis as a key-value store.

Redis is lovely. If your dataset will fit in RAM. Otherwise it dies hard.

Inspired by Memcached, which is a simple key=value store, Redis allows for more operations: using sets, using hashes, etc.

As it transpires I mostly set keys to values, so it crossed my mind last night that, rather than rewriting the service to use a non-RAM-constrained store, an alternative might be to swap Redis out and replace it with something else.

If it were possible to have a Redis-compatible API which secretly stored the data in LevelDB, SQLite, or even Berkeley DB, then that would solve my problem of RAM constraints, and also be useful.

Looking around there are a few projects in this area: the nds fork of redis, ssdb, etc.

I was hoping to find a Perl Redis::Server module, but sadly nothing exists. I should look at the various node.js stub-servers which exist as they might be easy to hack too.

Anyway, the short version is that this might be a way forward; the real solution might be to use SQLite or Postgres, but that would take a few days' work. For the moment the service has been moved to a donated guest with 2GB of RAM instead of the paltry 512MB it was running on previously.

Happily the server is installed/maintained by my slaughter tool, so reinstalling took about ten minutes - the only hard part was migrating the Redis contents, and that's trivial thanks to the integrated "slaveof" support. (I should write that up regardless though.)
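For reference, the "slaveof" mechanism used for the migration is standard Redis replication: point the new, empty instance at the old one, wait for the initial sync, then promote it. On a 2014-era Redis that is a one-line config directive (the hostname here is a placeholder):

```
# redis.conf on the new host: replicate everything from the old instance
slaveof old-host.example.com 6379
```

Once the dataset has synced, running SLAVEOF NO ONE on the new instance promotes it to a standalone master.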

Russell Coker: A Linux Conference as a Ritual

10 July, 2014 - 16:00

Sociological Images has an interesting post by Jay Livingston PhD about a tennis final as a ritual [1]. The main point is that you can get a much better view of the match on your TV at home with more comfort and less inconvenience, so what you get for the price of the ticket (and all the effort of getting there) is participating in the event as a spectator.

It seems to me that the same idea applies to community Linux conferences (such as LCA) and some Linux users group meetings. In terms of watching a lecture there are real benefits to downloading it after the conference so that you can pause it and study related web sites or repeat sections that you didn’t understand. Also wherever you might sit at home to watch a video of a conference lecture you will be a lot more comfortable than a university lecture hall. Some people don’t attend conferences and users’ group meetings because they would rather watch a video at home.

Benefits of Attending (Apart from a Ritual)

One of the benefits of attending a lecture is the ability to ask questions, but that seems to mostly apply to the high-status people who ask most of the questions. I’ve previously written about speaking stacks and my observations about who asks questions vs the number that can reasonably be asked [2].

I expect that most delegates ask no questions for the entire conference. I created a SurveyMonkey survey to discover how many questions people ask [3]. I count LCA as a 3 day conference because I am only counting the days where there are presentations that have been directly approved by the papers committee, approving a mini-conf (and thus delegating the ability to approve speeches) is different.

Another benefit of attending is the so-called “hallway track” where people talk to random other people. But that seems to be of most benefit to people who have some combination of high status in the community and good social skills. In the past I’ve attended the “Professional Delegates Networking Session” which is an event for speakers and people who pay the “Professional” registration fee. Sometimes at such events there has seemed to be a great divide between speakers (who mostly knew each other before the conference) and “Professional Delegates” which diminishes the value of the event to anyone who couldn’t achieve similar benefits without it.

How to Optimise a Conference as a Ritual

To get involvement of people who have the ritualistic approach one could emphasise the issue of being part of the event. For example to get people to attend the morning keynote speeches (which are sometimes poorly attended due to partying the night before) one could emphasise that anyone who doesn’t attend the keynote isn’t really attending the conference.

Conference shirts seem to be strongly correlated with the ritual aspect of conferences, the more “corporate” conferences don’t seem to offer branded clothing to delegates. If an item of branded schwag was given out before each keynote then that would increase the attendance by everyone who follows the ritual aspect (as well as everyone who just likes free stuff).

Note that I’m not suggesting that organisers of LCA or other conferences go to the effort of giving everyone schwag before the morning keynote, that would be a lot of work. Just telling people that anyone who misses the keynote isn’t really attending the conference would probably do.

I’ve always wondered why conference organisers want people to attend the keynotes and award prizes to random delegates who attend them. Is a keynote lecture a ritual that is incomplete if the attendance isn’t good enough?


Russell Coker: Taxing Inferior Products

10 July, 2014 - 10:48

I recently had a medical appointment cancelled due to a “computer crash”. Apparently the reception computer crashed and lost all bookings for a day and they just made new bookings for whoever called – and anyone who had a previous booking just missed out. I’ll probably never know whether they really had a computer problem or just used computer problems as an excuse when they made a mistake. But even if it wasn’t a real computer problem the fact that computers are so unreliable overall that “computer crash” is an acceptable excuse indicates a problem with the industry.

The problem of unreliable computers is a cost to everyone, it’s effectively a tax on all business and social interactions that involve computers. While I spent the extra money on a server with ECC RAM for my home file storage I have no control over the computers purchased by all the companies I deal with – which are mostly the cheapest available computers. I also have no option to buy a laptop with ECC RAM because companies like Lenovo have decided not to manufacture them.

It seems to me that the easiest way of increasing the overall reliability of computers would be to use ECC RAM everywhere. In the early ’90s all IBM-compatible PCs had parity RAM; that meant that for each byte there was one extra bit, which would report 100% of single-bit errors and 50% of errors that involved random memory corruption. Then manufacturers decided to save a tiny amount of money by using 8/9 of the number of memory chips in desktop/laptop systems, and probably to make more money by selling servers with ECC RAM. If the government were to impose a 20% tax on computers that lack ECC RAM then manufacturers would immediately start using it everywhere, and the end result would be no overall price increase, as it’s cheaper to design desktop systems and servers with the same motherboards – apparently some desktop systems have motherboard support for ECC RAM but don’t ship with suitable RAM or advertise the support for such RAM.

This principle applies to many other products too. One obvious example is cars, a car manufacturer can sell cheap cars with few safety features and then when occupants of those cars and other road users are injured the government ends up paying for medical expenses and disability pensions. If there was a tax for every car that has a poor crash test rating and a tax for every car company that performs badly in real world use then it would give car companies some incentive to manufacture safer vehicles.

Now there are situations where design considerations preclude such features. For example, implementing ECC RAM in mobile phones might involve technical difficulties (particularly for 32bit phones), and making some trucks and farm equipment safer might be difficult. But when a company produces multiple similar products that differ significantly in quality, such as PCs with and without ECC RAM or cars with and without air-bags, there would be no difficulty in making all of them higher quality.

I don’t think that we will have a government that implements such ideas any time soon, it seems that our government is more interested in giving money to corporations than taxing them. But one thing that could be done is to adopt a policy of only giving money to companies if they produce high quality products. If a car company is to be given hundreds of millions of dollars for not closing a factory then that factory should produce cars with all possible safety features. If a computer company is going to be given significant tax breaks for doing R&D then they should be developing products that won’t crash.


Paul Tagliamonte: Dell XPS 13

10 July, 2014 - 10:38

More hardware adventures.

I got my Dell XPS13. Amazing.

The good news: this MacBook Air clone is clearly an Air competitor, and slightly better in nearly every regard except for the battery.


The bad news is that the Intel wireless card needs non-free firmware (I’ll be replacing the card shortly), and the touchpad’s driver isn’t fully implemented until kernel 3.16. I’m currently building a 3.14 kernel with the patch to send to the kind Debian kernel people. We’ll see if that works. Ubuntu Trusty already has the patch, but it didn’t get upstreamed. That kinda sucks.

It also shipped with UEFI disabled, defaulting to boot in ‘legacy’ mode. It shipped with Ubuntu, so I was a bit disappointed not to see Ubuntu keys on the machine.

The touchscreen works; in short: stunning. I think I found my new travel buddy. Debian unstable runs great; stable had some issues.

Mike Gabriel: Cooperation between X2Go and TheQVD

10 July, 2014 - 01:51

I recently got in contact with Nicolas Arenas Alonso and Nito Martinez from the Quindel group (located in Spain) [1].

Those guys develop a software product called TheQVD (The Quality Virtual Desktop) [2]. The project does similar things to what X2Go does. In fact, they use NX 3.5 from NoMachine internally, just as we do in X2Go. Already a year ago I noticed their activity on TheQVD and thought... "Ahaaa!?!".

Now, a couple of weeks back we received a patch for libxcomp3 that fixes an FTBFS (fails to build from source) for nx-libs-lite against Android [3].


Christoph Berg: New urxvt tab in current directory

10 July, 2014 - 01:13

Following Enrico's terminal-emulators comparison, I wanted to implement "start a new terminal tab in my current working directory" for rxvt-unicode aka urxvt. As Enrico notes, this functionality is somewhere between "rather fragile" and non-existent, so I went and implemented it myself. Martin Pohlack had the right hint, so here's the patch:

--- /usr/lib/urxvt/perl/tabbed  2014-05-03 21:37:37.000000000 +0200
+++ ./tabbed    2014-07-09 18:50:26.000000000 +0200
@@ -97,6 +97,16 @@
       $term->resource (perl_ext_2 => $term->resource ("perl_ext_2") . ",-tabbed");
    };
 
+   if (@{ $self->{tabs} }) {
+      # Get the working directory of the current tab and append a -cd to the command line
+      my $pid = $self->{cur}{pid};
+      my $pwd = readlink "/proc/$pid/cwd";
+      #print "pid $pid pwd $pwd\n";
+      if ($pwd) {
+         push @argv, "-cd", $pwd;
+      }
+   }
+
    push @urxvt::TERM_EXT, urxvt::ext::tabbed::tab::;
 
    my $term = new urxvt::term
@@ -312,6 +322,12 @@
    1
 }
 
+sub tab_child_start {
+   my ($self, $term, $pid) = @_;
+   $term->{pid} = $pid;
+   1;
+}
+
 sub tab_start {
    my ($self, $tab) = @_;
 
@@ -402,7 +418,7 @@
 # simply proxies all interesting calls back to the tabbed class.
 
 {
-   for my $hook (qw(start destroy key_press property_notify)) {
+   for my $hook (qw(start destroy key_press property_notify child_start)) {
       eval qq{
          sub on_$hook {
             my \$parent = \$_[0]{term}{parent}

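The heart of the patch is the procfs lookup: on Linux, `/proc/<pid>/cwd` is a symlink to a process's current working directory. A minimal sketch of the same trick, using the shell's own PID (`$$`) in place of a tab's child PID:

```shell
# /proc/<pid>/cwd points at that process's current working directory.
# Here we read our own, just as the patch reads the current tab's.
pwd_via_proc="$(readlink "/proc/$$/cwd")"
echo "$pwd_via_proc"
```

Note that this resolves symlinks, so the result is the physical path (what `pwd -P` prints), which is fine for passing to `-cd`.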
Sune Vuorela: CMake and library properties

9 July, 2014 - 14:30

When writing libraries with CMake, you need to set a couple of properties, especially the VERSION and SOVERSION properties. For library libbar, it could look like:

set_property(TARGET bar PROPERTY VERSION "0.0.0")
set_property(TARGET bar PROPERTY SOVERSION 0)

This will give you a libbar.so => libbar.so.0 => libbar.so.0.0.0 symlink chain with a SONAME of libbar.so.0 encoded into the library.

The SOVERSION target property controls the number in the middle part of the symlink chain as well as the numeric part of the SONAME encoded into the library. The VERSION target property controls the last part of the last element of the symlink chain.

This also means that the first part of VERSION should match what you put in SOVERSION to avoid surprises for others and for the future you.

Both these properties control technical parts and should be looked at from a technical perspective. They should not be used for the "version of the software", but purely for the technical versioning of the library.
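The resulting on-disk layout can be sketched with plain symlinks (hypothetical file names in a temporary directory, purely for illustration; CMake creates these for you at build time):

```shell
# Recreate by hand the chain libbar.so => libbar.so.0 => libbar.so.0.0.0
# that VERSION "0.0.0" and SOVERSION 0 would produce.
dir="$(mktemp -d)"
touch "$dir/libbar.so.0.0.0"              # real file, named from VERSION
ln -s libbar.so.0.0.0 "$dir/libbar.so.0"  # runtime link, named from SOVERSION (matches the SONAME)
ln -s libbar.so.0 "$dir/libbar.so"        # development link, used at link time
ls -l "$dir"
```

The dynamic linker only ever looks for the SONAME (`libbar.so.0`); the unversioned `libbar.so` exists for the compile-time linker.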

In the kdeexamples git repository, it is handled like this:

set(BAR_VERSION_MAJOR 1)
set(BAR_VERSION_MINOR 2)
set(BAR_VERSION_PATCH 3)
set(BAR_VERSION ${BAR_VERSION_MAJOR}.${BAR_VERSION_MINOR}.${BAR_VERSION_PATCH})

And a bit later:

set_target_properties(bar PROPERTIES VERSION ${BAR_VERSION}
SOVERSION ${BAR_VERSION_MAJOR} )

which is a fine way to ensure that things actually match.

Oh, and these components are not something that should be inherited from other external projects.

So people, please be careful to use these correctly.

Junichi Uekawa: Having fun with lisp.

9 July, 2014 - 08:18
Having fun with lisp. I was writing a lisp interpreter in C++ using boost::spirit. I am happy that my eval can do lambda. It took me a long time to figure out what was wrong with the different types. The data structure was recursive, so I needed to make a recursive type. make_recursive_variant works, but it's not obvious when it doesn't.

Wouter Verhelst: HP printers require systemd, apparently

9 July, 2014 - 02:15
printer-driver-postscript-hp Depends: hplip
hplip Depends: policykit-1
policykit-1 Depends: libpam-systemd
libpam-systemd Depends: systemd (= 204-14)

Since the last dependency above is versioned, you can't use systemd-shim to satisfy it.

I do think we should migrate to systemd. However, it's unfortunate that this change is being rushed like this. I want to migrate my personal laptop to systemd—but not before I have the time to deal with any fallout that might result, and to make sure I can properly migrate my configuration.

Workaround (for now): hold policykit-1 at 0.105-3 rather than have it upgrade to 0.105-6. That version doesn't have a dependency on libpam-systemd.
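One way to implement that hold (a sketch assuming apt-mark is available; `dpkg --set-selections` or an apt pinning file would work too):

```shell
# Pin policykit-1 so it stays at the currently installed 0.105-3 and the
# upgrade to 0.105-6 (which pulls in libpam-systemd) is blocked.
# Run as root.
apt-mark hold policykit-1

# Later, to resume normal upgrades:
# apt-mark unhold policykit-1
```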

Off-hand questions:

  • Why does one need to log in to an init system? (yes, yes, it's probably a session PAM module, not an auth or password module. Still)
  • What does policykit do that can't be solved with proper use of Unix domain sockets and plain old unix groups?

All this feels like another case of overengineering, like most of the *Kit thingies.

Update: so the systemd package doesn't actually cause systemd to be run; other packages do that, and systemd-shim can be installed. I misread things. Of course, the package name is somewhat confusing... but that's no excuse.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.