Planet Debian

Planet Debian - http://planet.debian.org/

Gunnar Wolf: Still more e-voting related rants

10 May, 2013 - 01:45

Some weeks ago, I contacted Rosa Martínez, a tech journalist, with some questions about what I regarded as a trick interview with an e-voting salesman. Not only did she offer to publish my answer to that interview, she also invited me to write another article for a second site she works with.

So, I accepted. Being quite short on time, I managed to send her the first answer quickly, by April 22, but only sent the second article last night.

Anyway, here are the links. The texts are published in Spanish:

Niels Thykier: Wheezy was brought to you by …

9 May, 2013 - 20:36

During the Wheezy freeze, the Debian release team deployed 3254 hints[1].  This number may include some duplicates (i.e. where two members of the team hinted the same package); it certainly does not include a lot of rejected requests <insert more disclaimers here>.

The top hinter was *drum roll*… Adam, who did 1799 hints (that is, 55% of all hints during the freeze).  For comparison, the second and third runners-up combined did 1023 hints (or 31.4%).  Put differently, on average Julien Cristau and I would each add about 1.5 hints per day, while Adam on his own would add 5.6 hints a day.

Of course, this is not intended to diminish the work of the rest of the team.  Reviewing and unblocking packages is not all there is to a release.  In particular, a great thanks to Cyril Brulebois for his hard work on the Debian Installer (without which Debian Wheezy could not have been released at all).

Enjoy!

[1] Determined by:

  egrep -c 'done 201(3|2-?(07|08|09|10|11|12))' $HINT_FILE

It does not count hints, but the little “header” we add above our hints.  One header can apply to multiple hints (which is good, because udeb unblocks are usually paired with a regular unblock and we don’t want to count them as two separate hints).


Bernhard R. Link: gnutls and valgrind

9 May, 2013 - 19:14

Memo to myself (as I tend to forget it): if you develop gnutls-using applications, recompile gnutls with --disable-hardware-acceleration so you can test them under valgrind without being flooded with false positives.
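
A minimal sketch of such a rebuild (the source package name, the build-tree path and the final valgrind invocation are assumptions for illustration, not part of the original note):

  apt-get source gnutls26                       # gnutls source package in wheezy (assumed name)
  cd gnutls26-*/
  ./configure --disable-hardware-acceleration  # the flag mentioned above
  make
  # then run the application under valgrind against the freshly built library,
  # e.g. (library path and ./my-gnutls-app are placeholders):
  LD_LIBRARY_PATH=$PWD/lib/.libs valgrind ./my-gnutls-app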

Hideki Yamane: 48GB mem machine for $600

9 May, 2013 - 12:12
A Dell PowerEdge T320 is sold for $250 and 8GB of memory is $50, so 250 + 50 × 6 = $550; that is $550-600 for a 48GB machine. Wow.

Christine Spang: Debian on an X1 Carbon

9 May, 2013 - 09:09

Installing fresh hot Debian 7.0 on a shiny new ThinkPad X1 Carbon laptop turns out to be a piece of cake. You just need to make sure to grab the wifi firmware from unstable instead of the all-in-one firmware tarballs, which contain a version that is missing a couple of required files.

wget http://cdimage.debian.org/debian-cd/7.0.0/multi-arch/iso-cd/debian-7.0.0-amd64-i386-netinst.iso
dd if=debian-7.0.0-amd64-i386-netinst.iso of=/dev/sdb

(Make sure /dev/sdb is really the usb stick you want to overwrite with the installer!)

wget http://ftp.us.debian.org/debian/pool/non-free/f/firmware-nonfree/firmware-iwlwifi_0.38_all.deb

And put that on a second USB stick for the installer to load the firmware from.

As far as I can tell, everything works. (Did not mess around with the fingerprint reader, don't care.)

Paul Tagliamonte: Hylang updates

9 May, 2013 - 09:05

We’ve got all sorts of spiffy changes lined up in another major release: Hy version 0.9.7 is out. The website has been updated with the latest cut of hylang, and y’all should check it out.


Sadly, we’ve not attracted any women interested in hacking on Hy. I’d like to reiterate that I’m quite disappointed to see that, and encourage female hackers to check out the source and see what they can do with it.

As always, the source is over at https://github.com/hylang/hy - star it, hack it, fork it, use it!

Lior Kaplan: Debian in space

9 May, 2013 - 05:10

One more step toward being the universal operating system: getting Debian into space.

Specifically, the International Space Station astronauts will be using computers running Debian 6.



Ingo Juergensmann: Is GSOC a whitewashing project?

9 May, 2013 - 04:50

"The same procedure as last year, Ms. Sophie?" - "The same procedure as every year, James!" - at least when summer is coming, every year Google starts its "Google Summer of Code" (GSoC). This contest is a yearly event since 2005. Wikipedia states: 

The Google Summer of Code (GSoC) is an annual program, first held from May to August 2005,[1] in which Google awards stipends (of 5,000 USD, as of 2013)[2] to hundreds of students who successfully complete a requested free and open-source software coding project during the summer. The program is open to students aged 18 or over – the closely related Google Code-In is intended for students under the age of 18.

[...]

The program invites students who meet their eligibility criteria to post applications that detail the software-coding project they wish to perform. These applications are then evaluated by the corresponding mentoring organization. Every participating organization must provide mentors for each of the project ideas received, if the organization is of the opinion that the project would benefit from them. The mentors then rank the applications and decide among themselves which proposals to accept. Google then decides how many projects each organization gets, and asks the organizations to mark at most that many projects accordingly.

Sounds nice, eh? Submit a nice project, do some cool coding and get US$5,000 for having some sort of fun!

When writing Open Source software (FLOSS/Libre Software), there's often no money in it. It's a volunteer effort, done for the benefit of creating a better world. A little bit, at least. Doing some coding on FLOSS and getting paid for it is great, eh?

But think twice! Maybe Google is not the friendly company it always claims to be? First and foremost, Google is a company and wants to earn money. And it has a mantra: "Don't be evil!" But the company's main purpose is to earn money, and it will do anything to achieve this.

Think of GSoC as a cheap marketing project for Google: a contest for whitewashing Google's image. They can say: "Hey, look! We are supporting the FLOSS community! We are not evil!" And you can look at GSoC as a cheap recruitment program for Google. Overall, it appears that Google benefits more from GSoC than the individual participants or the FLOSS community as a whole do. There is a danger that the community gets pocketed by Google instead of upholding FLOSS standards and staying as independent as possible.

Sure, you need to pay bills, get something to eat and so on, but do you really want to help Google whitewash its image as a monopolistic company? Or would it be worth trying out some sort of crowdfunding when you have a great idea for a program you want to write?
 


Joey Hess: faster dh

9 May, 2013 - 03:18

With wheezy released, the floodgates are opened on a lot of debhelper changes that have been piling up. Most of these should be pretty minor, but I released one yesterday that will affect all users of dh. Hopefully in a good way.

I made dh smarter about selecting which debhelper commands it runs. It can tell when a package does not use the stuff done by a particular command, and skips running the command entirely.

So, running debian/rules binary for a package using dh will now often look like this:

dh binary
   dh_testroot
   dh_prep
   dh_auto_install
   dh_installdocs
   dh_installchangelogs
   dh_perl
   dh_link
   dh_compress
   dh_fixperms
   dh_installdeb
   dh_gencontrol
   dh_md5sums
   dh_builddeb

This is pretty close to the optimal hand-crafted debian/rules file (and just about as fast, too), but with the benefit that if you later add, say, cron job files, dh_installcron will automatically start being run too.

Hopefully this will not result in any behavior changes, other than packages building faster and with less noise. If there is a bug it'll probably be something missing in the specification of when a command needs to be run.

Beyond speed, I hope that this will help to lower the bar to adding new commands to debhelper, and to the default dh sequences. Before, every such new command slowed things down and was annoying. Now more special-purpose commands won't get in the way of packages that don't need them.

The way this works is that debhelper commands can include a "PROMISE" directive. An example from dh_installexamples:

# PROMISE: DH NOOP WITHOUT examples

Mostly this specifies the files in debian/ that are used by the command, and whose presence triggers the command to run. There is also a syntax to specify items that can be present in the package build directory to trigger the command to run.

(Unfortunately, dh_perl can't use this. There's no good way to specify when dh_perl needs to run, short of doing nearly as much work as dh_perl would do when run. Oh well.)

Note that third-party dh_ commands can include these directives too, if that makes sense.

I'm happy how this turned out, but I could be happier about the implementation. The PROMISE directives need to be maintained along with the code of the command. If another config file is added, they obviously must be updated. Other changes to a command can invalidate the PROMISE directive, and cause unexpected bugs.

What would be ideal is to not repeat the inputs of the command in these directives, but instead write the command such that its inputs can be automatically extracted. I played around with some code like this:

$behavior = main_behavior("docs tmp(usr/share/doc/)", sub {
       my $package=shift;
       my $docs=shift;
       my $docdir=shift;

       install($docs, $docdir);
});
$behavior->($package);

But refactoring all debhelper commands to be written in this style would be a big job. And I was not happy enough with the flexibility and expressiveness of this approach to continue with it.

I can, however, dream about what this would look like if debhelper were written in Haskell. Then I would have a Debhelper monad, within which each command executes.

main = runDebhelperIO installDocs

installDocs :: Monad a => Debhelper a
installDocs = do
    docs <- configFile "docs"
    docdir <- tmpDir "usr/share/doc"
    lift $ install docs docdir

To run the command, runDebhelperIO would loop over all the packages and run the action, in the Debhelper IO monad.

But, this also allows making an examineDebhelper that takes an action like installDocs, and runs it in a Debhelper Writer monad. That would accumulate a list of all the inputs used by the action, and return it, without performing any side effecting IO actions.

It's been 15 years since I last changed the language debhelper was written in. I did that for smaller gains than this, really. (The issue back then was that shell getopt sucked.) IIRC it was not very hard, and only took a few days. Still, I don't really anticipate reimplementing debhelper in Haskell any time soon.

For one thing, individual Haskell binaries are quite large, statically linking all Haskell libraries they use, and so the installed size of debhelper would go up quite a bit. I hope that forthcoming changes will move things toward dynamically linked Haskell libraries, and make Haskell more appealing for projects that involve a lot of small commands.

So, just a thought experiment for now..

Ben Hutchings: Warning: Debian 7.0 'wheezy' on VIA C3 and Cyrix III systems

8 May, 2013 - 22:07

The 'longhaul' module may cause instability or even hardware damage on some systems. Unfortunately, it is now being auto-loaded on all systems with a compatible CPU (VIA C3 or Cyrix III). See bug #707047.

Before upgrading one of these systems to Debian 7.0 'wheezy', if you do not currently use the 'longhaul' module, you should blacklist it by creating e.g. /etc/modprobe.d/blacklist-longhaul.conf containing the line:

  blacklist longhaul
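
For instance, a one-line sketch of creating that file (run as root):

  echo "blacklist longhaul" > /etc/modprobe.d/blacklist-longhaul.conf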

I would advise against attempting a fresh installation on these systems at present.

Raphael Geissert: Almost one million requests per day

8 May, 2013 - 15:20
In the first 48 hours after its log files were rotated last Sunday, http.debian.net handled almost 2 million requests, for an average of 11 requests per second.

In the last weeks before the release of Debian wheezy the number of requests had dropped slightly below 2 million per week.

Debian is alive.

Steve Langasek: Plymouth is not a bootsplash

8 May, 2013 - 13:50

Congrats to the Debian release team on the new release of Debian 7.0 (wheezy)!

Leading up to the release, a meme making the rounds on Planet Debian has been to play a #newinwheezy game, calling out some of the many new packages in 7.0 that may be interesting to users. While upstart as a package is nothing new in wheezy, the jump to upstart 1.6.1 from 0.6.6 is quite a substantial change. It does bring with it a new package, mountall, which by itself isn't terribly interesting because it just provides an upstart-ish replacement for some core scripts from the initscripts package (essentially, /etc/rcS.d/*mount*). Where things get interesting (and, typically, controversial) is the way in which mountall leverages plymouth to achieve this.

What is plymouth?

There is a great deal of misunderstanding around plymouth, a fact I was reminded of again while working to get a modern version of upstart into wheezy. When Ubuntu first started requiring plymouth as an essential component of the boot infrastructure, there was a lot of outrage from users, particularly from Ubuntu Server users, who believed this was an attempt to force pretty splash screen graphics down their throats. Nothing could be further from the truth.

Plymouth provides a splash screen, but that's not what plymouth is. What plymouth is, is a boot-time I/O multiplexer. And why, you ask, would upstart - or mountall, whose job is just to get the filesystem mounted at boot - need a boot-time I/O multiplexer?

Why use plymouth?

The simple answer is that, like everything else in a truly event-driven boot system, filesystem mounting is handled in parallel - with no defined order. If a filesystem is missing or fails an fsck, mountall may need to interact with the user to decide how to handle it. And if there's more than one missing or broken filesystem, and these are all being found in parallel, there needs to be a way to associate each answer from the user to the corresponding question from mountall, to avoid crossed signals... and lost data.

One possible way to handle this would be for mountall to serialize the fsck's / mounts. But this is a pretty unsatisfactory answer; all other things (that is, boot reliability) being equal, admins would prefer their systems to boot as fast as possible, so that they can get back to being useful to users. So we reject the idea of solving the problem of serializing prompts by making mountall serialize all its filesystem checks.

Another option would be to have mountall prompt directly on the console, doing its own serialization of the prompts (even though successful mounts / fscks continue to be run in parallel). This, too, is not desirable in the general case, both because some users actually would like to have pretty splash screens at boot time, and this would be incompatible with direct console prompting; and because mountall is not the only piece of software that needs to prompt at boot time (see also: cryptsetup).

Plymouth: not just a pretty face

Enter plymouth, which provides the framework for serializing requests to the user while booting. It can provide a graphical boot splash, yes; ironically, even its own homepage suggests that this is its purpose. But it can also provide a text-only console interface, which is what you get automatically when booting without a splash boot argument, or even handle I/O over a serial console.

Which is why, contrary to the initial intuitions of the s390 porters upon seeing this package, plymouth is available for all of Debian's Linux architectures in wheezy, s390 and s390x included, providing a consistent architecture for boot-time I/O for systems that need it - which is any machine using a modern boot system, such as upstart or systemd.

Room for improvement

Now, having a coherent architecture for your boot I/O is one thing; having a bug-free splash screen is another. The experience of plymouth in Ubuntu has certainly not been bug-free, with plymouth making significant demands of the kernel video layer. Recently, the binary video driver packages in Ubuntu have started to blacklist the framebuffer kernel driver entirely due to stability concerns, making plymouth splash screens a non-starter for users of these drivers and regressing the boot experience.

One solution for this would be to have plymouth offload the video handling complexity to something more reliable and better tested. Plymouth does already have an X backend, but we don't use that in Ubuntu because even if we do have an X server, it normally starts much later than when we would want to display the splash screen. With Mir on the horizon for Ubuntu, however, and its clean separation between system and session compositors, it's possible that using a Mir backend - that can continue running even after the greeter has started, unlike the current situation where plymouth has to cede the console to the display manager when it starts - will become an appealing option.

This, too, is not without its downsides. Needing to load plymouth when using crypted root filesystems already makes for a bloated initramfs; adding a system compositor to the initramfs won't make it any better, and introduces further questions about how to hand off between initramfs and root fs. Keeping your system compositor running from the initramfs post-boot isn't really ideal, particularly for low-memory systems; whereas killing the system compositor and restarting it will make it harder to provide a flicker-free experience. But for all that, it does have its architectural appeal, as it lets us use plymouth as long as we need to after boot. As the concept of static runlevels becomes increasingly obsolete in the face of dynamic systems, we need to design for the world where the distinction between "booting" and "booted" doesn't mean what it once did.

Antoine Beaupré: Debian Wheezy and Debian Québec this Saturday!

8 May, 2013 - 10:41

As announced by Fabian, we are launching the Debian Québec group this Saturday, with a release party at UQAM. I may not be present, as I was planning to go camping this weekend, but since the forecast calls for four days of rain, I may cancel.

Evgeni Golov: Wheezy, ejabberd, Pidgin and SRV records

8 May, 2013 - 04:57

TL;DR: {fqdn, "jabber.die-welt.net"}.

So, how many servers do you have that are still running Squeeze? I count one, mostly because I have not yet figured out a proper upgrade path from OpenVZ to something else, but that is a different story.

This post is about the upgrade of my “communication” machine, dengon.die-welt.net. It runs my private XMPP and IRC servers. I upgraded it to Wheezy, checked that my irssi and my BitlBee could still connect, and left for work. There I noticed that Pidgin could only connect to one of the two XMPP accounts I have on that server. sargentd@jabber.die-welt.net worked just fine, while evgeni@golov.de failed to connect.

ejabberd was logging a failed authentication:
I(<0.1604.0>:ejabberd_c2s:802) : ({socket_state,tls,{tlssock,#Port<0.5130>,#Port<0.5132>},<0.1603.0>}) Failed authentication for evgeni@golov.de

While Pidgin was just throwing “Not authorized” errors.

I checked the password in Pidgin (even if it did not change). I tried different (new) accounts: anything@jabber.die-welt.net worked, nothing@golov.de did not and somethingdifferent@jabber.<censored>.de worked too. So where was the difference between the three vhosts? jabber.die-welt.net and jabber.<censored>.de point directly (A/CNAME) to dengon.die-welt.net. golov.de has SRV records for XMPP pointing to jabber.die-welt.net.
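
One quick way to see that difference from a shell (a small sketch; _xmpp-client._tcp is the standard SRV name for XMPP client connections):

  dig +short _xmpp-client._tcp.golov.de SRV   # SRV record points at jabber.die-welt.net
  dig +short jabber.die-welt.net              # resolves (A/CNAME) to dengon.die-welt.net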

Let’s ask Google about “ejabberd pidgin srv”. There are some bugs. But they are marked as fixed in Wheezy.

Mhh… Let’s read again… Okay, I have to set {fqdn, "<my_srv_record_name>"}. when it does not match my hostname. Edit /etc/ejabberd/ejabberd.cfg, add {fqdn, "jabber.die-welt.net"}. (do not forget the dot at the end) and restart ejabberd. Pidgin can connect again. Yeah.

Steve Kemp: So progress is going well on lumail

8 May, 2013 - 02:40

A massive marathon has resulted in my lumail mail client working well.

Functionally the application looks little different to the previous C-client, but it is a lot cleaner, neater, and nicer internally.

The configuration file luamail.lua gives a good flavour of the code, and the github repository has brief instructions.

Initially I decided that the navigation/index stuff was easy and the rest of the program would be hard: dealing with GPG signatures, MIME parts, etc.

But I'm stubborn enough to keep going.

If I can get as far as reading messages, with MIME handled properly, and replying then I can switch to using it immediately which will spur further development.

I'm really pleased with the keybinding code, and implementing the built-in REPL-like prompt was a real revelation. Worth it for that alone.

The domain name lumail.org was available. So I figured why not?

Matthew Garrett: A short introduction to TPMs

8 May, 2013 - 01:18
I've been working on TPMs lately. It turns out that they're moderately awful, but what's significantly more awful is basically all the existing documentation. So here's some of what I've learned, presented in the hope that it saves someone else some amount of misery.

What is a TPM?

TPMs are devices that adhere to the Trusted Computing Group's Trusted Platform Module specification. They're typically microcontrollers[1] with a small amount of flash, and attached via either i2c (on embedded devices) or LPC[2] (on PCs). While designed for performing cryptographic tasks, TPMs are not cryptographic accelerators - in almost all situations, carrying out any TPM operations on the CPU instead would be massively faster[3]. So why use a TPM at all?

Keeping secrets with a TPM

TPMs can encrypt and decrypt things. They're not terribly fast at doing so, but they have one significant benefit over doing it on the CPU - they can do it with keys that are tied to the TPM. All TPMs have something called a Storage Root Key (or SRK) that's generated when the TPM is initially configured. You can ask the TPM to generate a new keypair, and it'll do so, encrypt them with the SRK (or another key descended from the SRK) and hand it back to you. Other than the SRK (and another key called the Endorsement Key, which we'll get back to later), these keys aren't actually kept on the TPM - the running OS stores them on disk. If the OS wants to encrypt or decrypt something, it loads the key into the TPM and asks it to perform the desired operation. The TPM decrypts the key and then goes to work on the data. For small quantities of data, the secret can even be stored in the TPM's nvram rather than on disk.

All of this means that the keys are tied to a system, which is great for security. An attacker can't obtain the decrypted keys, even if they have a keylogger and full access to your filesystem. If I encrypt my laptop's drive and then encrypt the decryption key with the TPM, stealing my drive won't help even if you have my passphrase - any other TPM simply doesn't have the keys necessary to give you access.

That's fine for keys which are system specific, but what about keys that I might want to use on multiple systems, or keys that I want to carry on using when I need to replace my hardware? Keys can optionally be flagged as migratable, which makes it possible to export them from the TPM and import them to another TPM. This seems like it defeats most of the benefits, but there's a couple of features that improve security here. The first is that you need the TPM ownership password, which is something that's set during initial TPM setup and then not usually used afterwards. An attacker would need to obtain this somehow. The other is that you can set limits on migration when you initially import the key. In this scenario the TPM will only be willing to export the key by encrypting it with a pre-configured public key. If the private half is kept offline, an attacker is still unable to obtain a decrypted copy of the key.

So I just replace the OS with one that steals the secret, right?

Say my root filesystem is encrypted with a secret that's stored on the TPM. An attacker can replace my kernel with one that grabs that secret once the TPM's released it. How can I avoid that?

TPMs have a series of Platform Configuration Registers (PCRs) that are used to record system state. These all start off programmed to zero, but applications can extend them at runtime by writing a sha1 hash into them. The new hash is concatenated to the existing PCR value and another sha1 calculated, and then this value is stored in the PCR. The firmware hashes itself and various option ROMs and adds those values to some PCRs, and then grabs the bootloader and hashes that. The bootloader then hashes its configuration and the files it reads before executing them.
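
Conceptually, an extend is just new_pcr = SHA1(old_pcr || measurement). A rough shell sketch of that computation (purely illustrative; it does not talk to a real TPM, and the measured file is an arbitrary example):

  old_pcr=0000000000000000000000000000000000000000          # PCRs start at zero
  measurement=$(sha1sum /boot/vmlinuz | cut -d' ' -f1)      # hash of the component being measured
  new_pcr=$(printf '%s%s' "$old_pcr" "$measurement" | xxd -r -p | sha1sum | cut -d' ' -f1)
  echo "$new_pcr"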

This chain of trust means that you can verify that no prior system component has been modified. If an attacker modifies the bootloader then the firmware will calculate a different hash value, and there's no way for the attacker to force that back to the original value. Changing the kernel or the initrd will result in the same problem. Other than replacing the very low level firmware code that controls the root of trust, there's no way an attacker can replace any fundamental system components without changing the hash values.

TPMs support using these hash values to decide whether or not to perform a decryption operation. If an attacker replaces the initrd, the PCRs won't match and the TPM will simply refuse to hand over the secret. You can actually see this in use on Windows devices using Bitlocker - if you do anything that would change the PCR state (like booting into recovery mode), the TPM won't hand over the key and Bitlocker has to prompt for a recovery key. Choosing which PCRs to care about is something of a balancing act. Firmware configuration is typically hashed into PCR 1, so changing any firmware configuration options will change it. If PCR 1 is listed as one of the values that must match in order to release the secret, changing any firmware options will prevent the secret from being released. That's probably overkill. On the other hand, PCR 0 will normally contain the firmware hash itself. Including this means that the user will need to recover after updating their firmware, but failing to include it means that an attacker can subvert the system by replacing the firmware.

What about using TPMs for DRM?

In theory you could populate TPMs with DRM keys for media playback, and seal them such that the hardware wouldn't hand them over. In practice this is probably too easily subverted or too user-hostile - changing default boot order in your firmware would result in validation failing, and permitting that would allow fairly straightforward subverted boot processes. You really need a finer grained policy management approach, and that's something that the TPM itself can't support.

This is where Remote Attestation comes in. Rather than keep any secrets on the local TPM, the TPM can assert to a remote site that the system is in a specific state. The remote site can then make a policy determination based on multiple factors and decide whether or not to hand over session decryption keys. The idea here is fairly straightforward. The remote site sends a nonce and a list of PCRs. The TPM generates a blob with the requested PCR values, sticks the nonce on, encrypts it and sends it back to the remote site. The remote site verifies that the reply was encrypted with an actual TPM key, makes sure that the nonce matches and then makes a policy determination based on the PCR state.

But hold on. How does the remote site know that the reply was encrypted with an actual TPM? When TPMs are built, they have something called an Endorsement Key (EK) flashed into them. The idea is that the only way to have a valid EK is to have a TPM, and that the TPM will never release this key to anything else. There's a couple of problems here. The first is that proving you have a valid EK to a remote site involves having a chain of trust between the EK and some globally trusted third party. Most TPMs don't have this - the only ones I know of that do are recent Infineon and STMicro parts. The second is that TPMs only have a single EK, and so any site performing remote attestation can cross-correlate you with any other site. That's a pretty significant privacy concern.

There's a theoretical solution to the privacy issue. TPMs never actually sign PCR quotes with the EK. Instead, TPMs can generate something called an Attestation Identity Key (AIK) and sign it with the EK. The OS can then provide this to a site called a PrivacyCA, which verifies that the AIK is signed by a real EK (and hence a real TPM). When a third party site requests remote attestation, the TPM signs the PCRs with the AIK and the third party site asks the PrivacyCA whether the AIK is real. You can have as many AIKs as you want, so you can provide each service with a different AIK.

As long as the PrivacyCA only keeps track of whether an AIK is valid and not which EK it was signed with, this avoids the privacy concerns - nobody would be able to tell that multiple AIKs came from the same TPM. On the other hand, it makes any PrivacyCA a pretty attractive target. Compromising one would not only allow you to fake up any remote attestation requests, it would let you violate user privacy expectations by seeing that (say) the TPM being used to attest to HolyScriptureVideos.com was also being used to attest to DegradingPornographyInvolvingAnimals.com.

Perhaps unsurprisingly (given the associated liability concerns), there are no public and trusted PrivacyCAs yet, and even if there were, (a) many computers are still being sold without TPMs and (b) even those with TPMs often don't have the EK certificate that would be required to make remote attestation possible. So while remote attestation could theoretically be used to impose DRM in a way that would require you to be running a specific OS, practical concerns make it pretty difficult for anyone to deploy that at any point in the near future.

Is this just limited to early OS components?

Nope. The Linux kernel has support for measuring each binary run or each module loaded and extending PCRs accordingly. This makes it possible to ensure that the running binaries haven't been modified on disk. There's not a lot of distribution infrastructure for setting this up, but in theory a distribution could deploy an entirely signed userspace and allow the user to opt into only executing correctly signed binaries. Things get more interesting when you add interpreted scripts to the mix, so there's still plenty of work to do there.

So what can I actually use a TPM for?

Drive encryption is probably the best example (Bitlocker does it on Windows, and there's a LUKS-based implementation for Linux here) - while in theory you could do things like use your TPM as a factor in two-factor authentication or tie your GPG key to it, there's not a lot of existing infrastructure for handling all of that. For the majority of people, the most useful feature of the TPM is probably the random number generator. rngd has support for pulling numbers out of it and stashing them in /dev/random, and it's probably worth doing that unless you have an Ivy Bridge or other CPU with an RNG.

Things get more interesting in more niche cases. Corporations can bind VPN keys to corporate machines, making it possible to impose varying security policies. Intel use the TPM as part of their anti-theft technology on education-oriented devices like the Classmate. And in the cloud, projects like Trusted Computing Pools use remote attestation to verify that compute nodes are in a known good state before scheduling jobs on them.

Is there a threat to freedom?

At the moment, probably not. The lack of any workable general purpose remote attestation makes it difficult for anyone to impose TPM-based restrictions on users, and any local code is obviously under the user's control - got a program that wants to read the PCR state before letting you do something? LD_PRELOAD something that gives it the desired response, or hack it so it ignores failure. It's just far too easy to circumvent.

Summary?

TPMs are useful for some very domain-specific applications, such as drive encryption and random number generation. The current state of the technology doesn't make them useful for imposing practical limitations on end-user freedom.

[1] Ranging from 8-bit things that are better suited to driving washing machines, up to full ARM cores
[2] "Low Pin Count", basically ISA without the slots.
[3] Loading a key and decrypting a 5 byte payload takes 1.5 seconds on my laptop's TPM.


Gunnar Wolf: Talking about Debian while Debian was getting released

8 May, 2013 - 00:59

Last Saturday, I was invited to talk about Debian to Hackerspace DF, a group that is starting to work at a very nice place together with other collectives, in quite a central location (Colonia Obrera). I know several of the people in the group (I visited them a couple of times in the space's previous incarnation), and wish them great luck in this new hackerspace!

Anyway — I was invited to give an informal talk about Debian. And of course, I was there. And so was Alfredo, who recorded (most of) it.

So, in case you want to see me talking about how Debian works, it is mostly about the social organization level (but also covers some technical details). Of course, given that the talk was completely informal (it started with me standing there, asking, "OK, any questions?"), I managed to mix up some names and stuff... But I hope that, in the end, the participants understood better what Debian means than when we started.

Oh, and by the end of the talk, we were all much happier. Not only because I was about to shut up, but because during my talk we got word that Debian 7.0 "Wheezy" had been released.

Anyway — If you want to see me talking for ~1hr, you can download the video or watch it on YouTube.

Jo Shields: Windows 8: Blood from a Stone

7 May, 2013 - 23:52

Ordinarily, I’m a big believer that it is important to keep up to date with what every piece of software which competes with yours is doing, to remain educated on the latest concepts. Sometimes, there are concepts that get added which are definitely worth ripping off. We’ve ripped off plenty of the better design choices from Windows or Mac OS, over the years, for use in the Free Desktop.

So, what about Windows 8, the hip new OS on everyone’s lips?

Well, here’s the thing… I’ve been using it on and off for a few months now for running legacy apps, and I can’t for the life of me find anything worth stealing.

Let’s take the key change – Windows 8 has apps built with a new design paradigm which definitely isn’t called Metro. Metro apps don’t really have “windows” in the traditional sense – they’re more modeled on full-screen apps from smartphones or tablets than on Windows 1.0 -> 7. Which is fine, really, if you’re running Windows 8 on a tablet or touchscreen device. But what if you’re not? What about the normal PC user?

As Microsoft themselves ask:

The answer to that is, well, you sorta don’t.

Metro apps can exist in three states – fullscreen, almost fullscreen, or vertical stripe. You’re allowed to have two apps at most at the same time – one mostly full screen, and one vertical stripe. So what happens if you try to *use* that? Let’s take a fairly common thing I do – watch a video and play Minesweeper. In this example, the video player is the current replacement for Windows Media Player, and ships by default. The Minesweeper game isn’t installed by default, but is the only Minesweeper game in the Windows 8 app store which is gratis and by Microsoft Game Studios.

Here’s option A:

And for contrast, here’s option B:

Which of these does a better job of letting me play Minesweeper and watch a video at the same time?

Oh, here’s option C, dumping Microsoft’s own software, and using a third-party video player and third party Minesweeper implementation:

It’s magical – almost as if picking my own window sizes makes the experience better.

So, as you can see above, the “old” OS is still hiding there, in the form of a Windows 8 app called “Desktop”. Oh, sorry, didn’t I say? Metro apps, and non-Metro apps, are segregated. You can run both (the Desktop app can also be almost-fullscreen or a vertical strip), but they get their own lists of apps when multitasking. Compare the list on the left with the list at the bottom:

And it’s even more fun for apps like Internet Explorer, which can be started in both modes (and you often need both modes). Oh, and notice how the Ribbon interface from Office 2007 has invaded Explorer, filling the view with large buttons to do things you never want to do under normal circumstances.

So, that’s a short primer on why Windows 8 is terrible.

Is there really nothing here worth stealing? Actually, yes, there is! After much research, I have discovered Windows 8’s shining jewel:

The new Task Manager is lovely. I want it on my Linux systems. But that’s it.

Hideki Yamane: meet to openSUSE folks (OBS dojo)

7 May, 2013 - 19:18
On 4th May, I went to Shimokitazawa (Tokyo) to take part in an OBS dojo held by openSUSE developers. OBS, the Open Build Service (formerly known as the openSUSE Build Service), is something like buildd in Debian, except that anyone can use it after signing up on its site. @ftake (http://www.slideshare.net/ftake) explains it in his slides.

And the osc package/command is like pbuilder: it downloads dependency packages from the Internet and builds the package in a chroot (or an LXC container, KVM, or Xen). (Disadvantage: osc is not usable if build.opensuse.org is not in a good state, e.g. overloaded. It's rare, but it has happened ;))

I want to learn more about OBS and osc.



Timo Jyrinki: Qt 5 in Debian and Ubuntu, patches upstreaming

7 May, 2013 - 15:05
Packages

I quite like the current status of Qt 5 in Debian and Ubuntu (the links are to the qtbase packages; there are ca. 15 other modules as well). Despite Qt 5 being bleeding edge and Ubuntu having had the need to use it before even the first stable release came out in December, the co-operation with Debian has gone well. Debian now has the first Qt 5 uploads done to experimental and later on to unstable. The work I have contributed to pkg-kde git on the modules has been welcomed, and even though more work has been done there by others, there haven't been drastic changes that would cause too big transition problems on the Ubuntu side. It has of course helped to ask others what they want, like the whole usage of qtchooser. Now with Qt 5.0.2 I've been able to mostly re-sync all newer changes / fixes to my packaging from Debian to Ubuntu and vice versa.

There will remain some delta, as pkg-kde plans to ask for a complete transition to qtchooser so that all Qt-using packages would declare the Qt version either via the QT_SELECT environment variable (preferable) or a package dependency (qt5-default or qt4-default). As a temporary change related to that, Debian will have a debhelper modification that defaults QT_SELECT to qt4 for the duration of the transition. Meanwhile, Ubuntu already shipped the 13.04 release with Qt 5, and a shortcut was taken there instead to prevent any Qt 4 package breakage. However, after the transition period in Debian is over, that small delta can again be removed.
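
For illustration, selecting the Qt version with qtchooser from a shell looks roughly like this (a small sketch based on the description above):

  export QT_SELECT=qt5           # qtchooser-wrapped tools now resolve to Qt 5
  qmake -version
  QT_SELECT=qt4 qmake -version   # or pick Qt 4 for a single invocation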

I will also need to continue pushing any useful packaging I do to Debian. I pushed qtimageformats and qtdoc last week, but I know I'm still behind with some "possibly interesting" git snapshot modules like qtsensors and qtpim.

Patches

More delta exists in the form of multiple patches related to the recent Ubuntu Touch efforts. I do not think they are of immediate interest to Debian – let's start packaging Qt 5 apps for Debian first. However, nearly all of those patches have already been upstreamed to be part of Qt 5.1 or Qt 5.2, or will be later on. Some already were, for 5.0.2.

A couple of months ago Ubuntu did have some patches hanging around with no clear author information. This was a result of the heated preparation for the Ubuntu Touch launches, and the fact that patches flew (too) quickly into place in various PPAs. I started hunting down the authors, and the situation turned out to be better than I thought. About half of the patches were already upstreamed, and work on properly upstreaming the other ones was swiftly started after my initial contact. Proper DEP3 fields do help in understanding the overall situation. There are now 10 Canonical individuals in the upstream group of contributors, and at last week's sprint it turned out more people will be joining them to upstream their future patches.

Nowadays almost all the requests I get for including patches from developers are for things that were already upstreamed, like the XEmbed support in qtbase. This is how it should be.

One big patch that is still Ubuntu-only is the Unity appmenu support. There was a temporary solution for 13.04 that forward-ported the Qt 4 way of doing it. This will however be removed from the first 13.10 ('saucy') upload, as it's not upstreamable (the old way of supporting Unity appmenus was deliberately dropped from Qt 5). A re-implementation via QPA plugin support is on its way, but it may be that development version users will be without appmenu support for a while. Another big patch is related to qtwebkit's device pixel ratio, which will need to be fixed. Apart from these two areas of work that need to be followed through, the patch situation is quite nice, as mentioned.

Conclusion

Free software will do world domination, and I'm happy to be part of it.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.