The Shim code does a couple of things. The obvious one is to measure the second-stage bootloader into PCR 9. The perhaps less expected one is to measure the contents of the MokList and MokSBState UEFI variables into PCR 14. This means that if you're happy simply running a system with your own set of signing keys and just want to ensure that your secure boot configuration hasn't been compromised, you can simply seal to PCR 7 (which will contain the UEFI Secure Boot state as defined by the UEFI spec) and PCR 14 (which will contain the additional state used by Shim) and ignore all the others.
The grub code is a little more complicated because there are more ways to get it to execute code. Right now I've gone for a fairly extreme implementation. On BIOS systems, the grub stage 1 and stage 2 will be measured into PCR 9. That's the only BIOS-specific part of things. From then on, any grub modules that are loaded will also be measured into PCR 9. The full kernel image will be measured into PCR 10, and the full initramfs will be measured into PCR 11. The command line passed to the kernel is measured into PCR 12. Finally, each command executed by grub (including those in the config file) is measured into PCR 13.
That's quite a lot of measurement, and there are probably fairly reasonable circumstances under which you won't want to pay attention to all of those PCRs. But you've probably also noticed that several different things may be measured into the same PCR, and that makes it more difficult to figure out what's going on. Thankfully, the spec designers have a solution to this in the form of the TPM measurement log.
Rather than merely extending a PCR with a new hash, software can extend the measurement log at the same time. This is stored outside the TPM and so isn't directly cryptographically protected. In the simplest form, it contains a hash and some form of description of the event associated with that hash. If you replay those hashes you should end up with the same value that's in the TPM, so for attestation purposes you can perform that verification and then merely check that specific log values you care about are correct. This makes it possible to have a system perform an attestation to a remote server that contains a full list of the grub commands that it ran and for that server to make its attestation decision based on a subset of those.
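To make the replay step concrete, here is a minimal sketch in Python of how a verifier can recompute a PCR value from a measurement log. The log entries and their descriptions are hypothetical; the extend operation itself (new PCR = SHA-1 of the old PCR value concatenated with the measurement hash, starting from an all-zero PCR) is standard TPM 1.2 behaviour.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new PCR = SHA-1(old PCR || measurement hash)."""
    return hashlib.sha1(pcr + measurement).digest()

def replay(log):
    """Replay a measurement log (a list of (description, data) events)
    and return the PCR value the TPM should hold afterwards."""
    pcr = b"\x00" * 20          # PCRs start out zeroed
    for _description, data in log:
        pcr = extend(pcr, hashlib.sha1(data).digest())
    return pcr

# Hypothetical log of grub commands measured into PCR 13:
log = [("grub_cmd", b"set root=(hd0,1)"),
       ("grub_cmd", b"linux /vmlinuz ro quiet")]
replayed = replay(log)
print(replayed.hex())
```

An attestation server would compare `replayed` against the quoted PCR 13 value, and only then inspect the individual log descriptions it cares about.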
No promises as yet about PCR allocation being final or these patches ever going anywhere in their current form, but it seems reasonable to get them out there so people can play. Let me know if you end up using them!
The code for this is derived from the old Trusted Grub patchset, by way of Sirrix AG's Trusted Grub 2 tree.
Back in early 2012 I had been helping with system administration of a number of Debian/Ubuntu-based machines, and the odd Solaris machine, for a couple of years at $DAYJOB. We had a combination of hand-written scripts, documentation notes that we cut’n’paste’d from during installation, and some locally maintained Debian packages for pulling in dependencies and providing some configuration files. As the number of people and machines involved grew, I realized that I wasn’t happy with how these machines were being administered. If one of these machines were to disappear in flames, it would take time (and, more importantly, non-trivial manual labor) to get its services up and running again. I wanted a system that could automate the complete configuration of any Unix-like machine. It should require minimal human interaction. I wanted the configuration files to be version controlled. I wanted good security properties. I did not want to rely on a centralized server that would be a single point of failure. It had to be portable and easy to get working on new (and very old) platforms. It should be easy to modify a configuration file and get it deployed. It should be easy to start using on an existing server, and it should allow for incremental adoption. Surely this must exist, I thought.
During January 2012 I evaluated the existing configuration management systems, such as CFEngine, Chef, and Puppet. I don’t recall my reasons for rejecting each individual project, but needless to say I did not find what I was looking for. The reasons ranged from centralization concerns (single-point-of-failure central servers) and bad security (no OpenPGP signing integration) to the feeling that the projects were too complex and hence fragile. I’m sure there were other reasons too.
In February I went back to my original needs and tried to see if I could abstract something from the knowledge that was in all these notes, script snippets and local dpkg packages. I realized that the essence of what I wanted was one shell script per machine, OpenPGP signed, in a Git repository. I could check out that Git repository on every new machine that I wanted to configure, verify the OpenPGP signature of the shell script, and invoke the script. The script would do everything needed to get the machine back into an operational state, including package installation and configuration file changes. Since I would usually want to modify configuration files on a system even after its initial installation (hey, not everyone is perfect), it was natural to extend this idea to a cron job that did ‘git pull’, verified the OpenPGP signature, and ran the script. The script would then have to be a bit more clever and not redo everything every time.
Since we had many machines, it was obvious that there would be huge code duplication between scripts. It felt natural to split the shell script into a directory of many smaller shell scripts, invoked in turn. Think of the /etc/init.d/ hierarchy and how it worked with System V init. This would allow re-use of useful snippets across several machines. The next realization was that large parts of the shell script would be creating configuration files, such as /etc/network/interfaces. It would be easier to modify the content of those files if they were stored as files in a separate directory, an “overlay” stored in a sub-directory overlay/, and copied into the file system’s hierarchy with rsync. The final realization was that it made sense to run one set of scripts before rsync’ing in the configuration files (to install packages or set things up so the configuration files make sense), and one set of scripts after the rsync (to perform tasks that require some package to be installed and configured). These sets of scripts were called the “pre-tasks” and “post-tasks” respectively, and stored in the sub-directories pre-tasks.d/ and post-tasks.d/.
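The ordering described above can be sketched in a few lines of Python. This is not Cosmos itself: the directory names follow the post, everything else (function names, the plan representation) is my own illustration.

```python
import os

def plan_run(model_dir: str):
    """Return the ordered steps of a Cosmos-style run: every script in
    pre-tasks.d/ (sorted, like System V init), then an rsync of overlay/
    onto the file system root, then every script in post-tasks.d/."""
    def scripts(sub):
        d = os.path.join(model_dir, sub)
        if os.path.isdir(d):
            return [os.path.join(d, f) for f in sorted(os.listdir(d))]
        return []

    steps = [("run", s) for s in scripts("pre-tasks.d")]
    overlay = os.path.join(model_dir, "overlay")
    if os.path.isdir(overlay):
        steps.append(("rsync", overlay + "/ -> /"))
    steps += [("run", s) for s in scripts("post-tasks.d")]
    return steps
```

A real run would execute each script and actually invoke rsync; this sketch only computes the plan, which makes the sorted pre/overlay/post ordering easy to see and test.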
I started putting what would become Cosmos together during February 2012. Incidentally, I had been using etckeeper on our machines, and I had been reading its source code, which greatly inspired the internal design of Cosmos. The git history shows well how the ideas evolved — even that Cosmos was initially called Eve but in retrospect I didn’t like the religious connotations — and there were a couple of rewrites on the way, but on the 28th of February I pushed out version 1.0. It was in total 778 lines of code, with at least 200 of those lines being the license boilerplate at the top of each file. Version 1.0 had a debian/ directory; I built the dpkg file and started to deploy it on some machines. There were a couple of small fixes in the next few days, but development stopped on March 5th 2012. We started to use Cosmos, and converted more and more machines to it, and I quickly converted all of my home servers to use it as well. And even my laptops. It took until September 2014 to discover the first bug (the fix is a one-liner). Since then there haven’t been any real changes to the source code. It is in daily use today.
The README that comes with Cosmos gives a more hands-on approach to using it, which I hope will serve as a starting point if the above introduction sparked some interest. I hope to cover more about how to use Cosmos in a later blog post. Since Cosmos does so little on its own, you want to see a Git repository with machine models to make sense of how to use it. If you want to see how the Git repository for my own machines looks, have a look at the sjd-cosmos repository. Don’t miss its README at the bottom. In particular, its global/ sub-directory contains some of the foundation, such as OpenPGP key trust handling.
The automotive industry has been in the spotlight after a massive scandal at Volkswagen, using code hidden in the engine management software to cheat emissions tests.

What else is hidden in your new car's computer?
In a large piece of equipment like a car, there are many opportunities for computerization. In most cases, consumers aren't even given a choice whether or not they want software in their car.
It has long been known that such software is spying on the habits of the driver and this data is extracted from the car when it is serviced and uploaded to the car company. Car companies are building vast databases about the lifestyles, habits and employment activities of their customers.

Computers aren't going away, so what can be done?
Most people realize that computers aren't going to go away any time soon. That doesn't mean that people have to put up with these deceptions and intrusions on our lives.
For years, many leading experts in the software engineering world have been promoting the benefits and principles of free software.
What we mean by free is that users, regulators and other independent experts should have the freedom to see and modify the source code in the equipment that we depend on as part of modern life. In fact, experts generally agree that there is no means other than software freedom to counter the might of corporations like Volkswagen and their potential to misuse that power, as demonstrated in the emissions testing scandal.
If Governments and regulators want to be taken seriously and protect society, isn't it time that they insisted that the car industry replaces all hidden code with free and open source software?
Review: Getting Back to Full Employment, by Dean Baker & Jared Bernstein

Publisher: CEPR
Copyright: 2013
ISBN: 0-615-91836-0
Format: Kindle
Pages: 116
Getting Back to Full Employment is more of a policy paper than a book. The authors both work for economic think tanks; one of those, the Center for Economic and Policy Research (co-founded by Baker), is the publisher of this book. It's also free on the Kindle, another sign that the purpose is more about educating and convincing the public than about selling a non-fiction analysis.
I found out about this through Paul Krugman, and if you're a regular reader of Krugman's columns and blog, not much here will be a surprise. Baker and Bernstein are advocating what I would call conventional-liberal economic policy by US standards (which means that it's not really that liberal). The short version is that full employment is vital for improving the economic position of the average person, inflation is nowhere near as much of a risk as people claim, and the best economic action the US government can take at present is to aggressively pursue a full employment strategy without worrying excessively about inflation.
If you're at all familiar with this debate, you probably already have an opinion about everything in that summary. I certainly do. If you follow economics at all, this debate has been raging ever since the 2008 financial crisis (and earlier versions of the debate were raging before then). Since I'm a regular Krugman reader, you probably know my opinion: the people who have been crying wolf about the dangers of inflation have been systematically wrong about everything for seven years solid, and it's amazing that anyone is still giving them the time of day. Baker and Bernstein make a similar argument at greater length, and with a more academic tone.
The risk of any policy paper like this is that reactions will be almost entirely decided by one's pre-existing opinion about the political actions that each economic policy implies. If you (like me) have already decided that government intervention in the economy is useful to prevent concentration of wealth in the hands of a few and provide support for the most vulnerable, everything Baker and Bernstein say here is going to sound quite convincing. If you've already decided that government intervention in the economy almost always leads to tears, and that a large government with the ability to take economic action is far more dangerous than any of the effects of economic downturns and unemployment, it would surprise me if Baker and Bernstein will change your mind. In other words, I'm skeptical that policy papers like this accomplish much.
That said, there are a lot of facts and data here, which at least help ground this ongoing argument in more specifics. The authors show you the math: the evidence for moderate inflation not leading to any serious impact for economies, the very strong correlations between low unemployment rates and rising median wages, the theoretical basis for lower bounds on unemployment rates and our extensive past history of assuming that lower bound is much higher than it actually is, and some pointed comparisons to countries like Germany that have taken many of the specific policy recommendations included here and have a far healthier labor market as a result. All the statistics, charts, and discussion of analysis techniques is fairly dry, but I think the data is compelling. Of course, I was already convinced by earlier reading.
There are a few bits here that I hadn't seen before, or at least hadn't internalized. One is a subtle but very important statistical limitation: any uncertainty in measuring the inflation rate will lead to incorrect statistical findings that inflation slows growth. The explanation for this takes several pages and is somewhat technical; the short and somewhat inaccurate version is that growth is measured in inflation-adjusted GDP, so if you get inflation wrong, you magnify the correlation between GDP and inflation. An inaccurately high measure of the inflation rate simultaneously leads to underestimating GDP growth for that country; an inflation rate that's inaccurate on the low side leads to overestimating GDP growth. This is inherent in how this comparison is done, and it's hard to see how to avoid it. It's therefore worth taking any analysis of the growth impact from higher inflation with a grain of salt. (Please note here that we're talking about moderate inflation — the typical rates between 1% and 8% that you see in developed economies.)
Another bit that was new to me was the analysis of the connection between trade deficits and full employment. I typically dismiss any international trade analysis in macroeconomic policy for the US since international trade is a much smaller part of our economy than most people think it is, and because protectionism has been rightfully discarded as a policy approach. But Baker and Bernstein make an interesting argument here about the effects of a strong currency on increasing unemployment; even with the relatively small amount of trade, reducing the deficit would have a noticeable effect. (Increasing the volume of trade but not changing the balance would not.) The authors certainly don't recommend protectionism; they do recommend being willing to let the value of the dollar drop against other currencies as a good way to reduce the deficit, and to be aware that other central banks have intentional policies of propping up the dollar that we're largely ignoring. They also mention that this is a zero-sum game for the world economy as a whole, although I would have preferred a bit more emphasis on that. You can't get very far via beggar-thy-neighbor economic policy. But it's a good reminder that the obsession in some parts with not "devaluing the currency" is economic nonsense and directly hurts employment.
The end of the book discusses a variety of sensible policy recommendations, ranging from international currency policy to Germany's work-sharing program, in which the government subsidizes reductions in worker hours the same way that we already subsidize unemployment. They all seemed reasonable to me, although, sadly, I have little faith that the dysfunctional US government will consider any of them seriously, particularly the more direct approaches such as the government taking on the role of employer of last resort. But it's fun to read about them and imagine a US in which the government actually did sensible things like that.
Getting Back to Full Employment is pretty dry, and it's hard to shake the feeling that writing things like this is basically pointless, so I'm not sure I can really recommend it. But if you like reading Krugman's columns and are interested in similar material from different people at a longer length, that's basically what this is.
Rating: 6 out of 10
[As You Like It, Act III, Scene V]
I thought of you on the Fourth of July; which does not, in and of itself, distinguish the day from any other since I met you. It was remarkable only in that it was justifiable, given our conversation about fireworks displays. Of course, I'm happy to take the flimsiest of displays as an excuse to mark you.
I'm home alone right now, after watching The Bad Seed. Truth be told, the shadows keep spooking me. I can't seem to stop myself from imagining precocious blonde murderers in them. It's a manageable silliness, but made a little less so by the fact that I forgot to lock the door. Less troublesome than the night I spent after Ringu (the original Japanese version of The Ring). I finished watching it in the wee hours of the morning, and I wanted to go to sleep. I was in half a stupor, but the incessant inner critic in me kept imagining all the changes that could have been made to make the movie more truly unsettling until visions of Obake were swimming around me.
Ordinarily I doubt I'd be bothered particularly by a 50's classic, but ··· went back up to Boston this afternoon. ·····'s at a conference in Finland, so I invited him down to visit while I have the apartment all to myself. It was strange to have a visitor actually in my home for whom I didn't have to play at being contented. At any rate, being around ··· for three days essentially meant carrying on a three-day-long conversation, and the abrupt drop of sociability makes me feel my isolation a little more acutely.
We watched The Way We Were together, and agreed that it should be remade with casting that actually works. I hadn't seen it before, and was surprised to find it unusually nuanced and substantial, yet still not good. It was nice to have someone around who would dissect it with me afterward. It's been a while since I actually discussed a film with someone. That's partly my own fault; I don't always enjoy verbalizing my opinions of movies immediately after watching them if I've found them in the least bit moving. I guess I consider the aftertaste part of the experience. In this case, we both felt the film had missed its emotional mark so it wasn't so much of an issue. On the other hand, I don't find most movies moving. I find them frustratingly flawed, and by the time they end I'm raring to rant about their petty contradictions and failures of logic. I think it might give people the impression that I don't actually make any effort to tease out the messages filmmakers weave into their work. Or maybe I'm just making excuses for having uninteresting friends. Either way, it was pleasant to be in the company of someone eager to tolerate the convolutions of my thought process.
On Wednesday night, I have a date to meet up with some former co-workers/friends that I've been passively avoiding for several years now. Every time I fail to carefully manage my visibility, people seem to come flooding back into my life. This time the culprit was a day spent logged into instant messaging without stringent privacy settings. I should feel lucky for that, I suppose. I'm not sure how I actually feel. One of the formers is a woman I was very close to, as far as most of the world—her included—could discern. The other is a Boston boy I admired for the touch of golden child in the air that hung about him. The main theme of his life was (and I suspect still is) getting to drinks with friends at one of his regular bars at the end of every evening. Which did not at all stop him from being productive, interested in the world, and bright. If he had been a girl I would have been hateful with envy. Instead he's always stood out in my memory as the only person I've had a bit of a crush on despite not finding him particularly intellectually stimulating. A month or so ago he sold his company to google. Now he spends a lot of time out of town giving lectures. I suspect I may be generally happy for him, and I'm not quite sure what to do with that.
Thursday I'm leaving for a few days in Denver. I wish I hadn't scheduled it for a time when I could have been the sole occupant of my domicile, but other than that I'm looking forward to it. I've no idea what I'll do there, but at least that means I really am going someplace that wouldn't occur to me outside of a peculiar set of constraints. I think it would be advisable to work out the transportation system before I depart, though.
I hope this letter finds you… relatively satisfied, at a minimum. I don't actually need to tell you how much I miss talking to you, do I?
Affectionately, as always,
This is the first major release of INN in several years, and incorporates lots of updates to the build system, protocol support, and the test suite. Major highlights and things to watch out for:
Cleanups of header handling in nnrpd, including using the new standardized headers for information about the post origin and rejecting many other obsolete headers.
nnrpd now treats IHAVE identically to POST when configured to accept IHAVE (a compatibility hack that's only necessary when dealing with some weird news implementations that can only do IHAVE).
innd authentication now requires a username be sent first, matching the NNTP protocol.
There are also tons of accumulated bug fixes, including (of course) all the fixes that were in INN 2.5.5. There are a lot of other changes, so I won't go into all the details here. If you're curious, take a look at the NEWS file.
As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!
Review: Half Life, by S.L. Huang

Series: Russell's Attic #2
Publisher: S.L. Huang
Copyright: 2014
ISBN: 0-9960700-5-2
Format: Kindle
Pages: 314
This is a sequel to Zero Sum Game and the second book about Cas Russell, a mercenary superhero (in a world without the concept of superheroes) with preternatural ability to analyze anything about her surroundings with mathematics. While it reuses some personal relationships from the first book and makes a few references to the villains, it's a disconnected story. It would be possible to start here if you wanted to.
Cas is now in the strange and unexpected situation of having friends, and they're starting to complicate her life. First, Arthur has managed to trigger some unexpected storehouse of morals and gotten her to try to stop killing people on jobs. That conscience may have something to do with her willingness to take a job from an apparently crazy man who claims a corporation has stolen his daughter, a daughter who appears nowhere in any official records. And when her other friend, Checker, gets in trouble with the mob, Cas tries to protect him in her own inimitable way, which poses a serious risk of shortening her lifespan.
Even more than the first book, the story in Half Life is a mix of the slightly ridiculous world of superheroes with gritty (and bloody) danger. It features hit men, armed guards, lots of guns, and quite a lot of physical injury and blood. A nasty corporation that's obviously hiding serious secrets shares pages with the matriarch of a mob family who considers Checker sleeping with her daughter to be an affront to her honor. The story eventually escalates into more outlandish bits of technology, an uncanny little girl, and a plot that would feel at home in a Batman comic. I like books that don't take themselves too seriously, but the contrast between the brutal treatment Cas struggles through and the outrageous mad-scientist villain provokes a bit of cognitive whiplash.
That said, the villains of Half Life are neither as freakish nor as disturbing as those in Zero Sum Game, which I appreciated. Huang packs in several plot twists, some non-obvious decisions and disagreements between Russell and her friends about appropriate strategy, and Cas's discovery that there are certain things she cares very strongly about other than money and having jobs. Cas goes from a barely moral, very dark hero in the first book to something closer to a very grumbly chaotic good who insists she's not as good as she actually is. It's a standard character type, but Huang does a good job with it.
Huang also adds a couple of supporting cast members in this book that I hope will stick around. Pilar starts as a receptionist at one of the companies Cas breaks into, and at first seems like she might be comic relief. But she ends up being considerably more competent than she first appears (or that she seems to realize); by the end of the book, I had a lot of respect for her. And Miri makes only a few appearances, but her unflappable attitude is a delight. I hope to see more of her.
The biggest drawback to this book for me is that Cas gets hurt a lot. At times, the story falls into one of the patterns of urban fantasy: the protagonist gets repeatedly beaten up and abused until they figure out what's going on, and spends most of the story up against impossible odds and feeling helpless. That's not a plot pattern I'm fond of. I don't enjoy reading about physical pain, and I had trouble at some points in the story with the constant feeling of dread. Parts of the book I read in short bursts, putting it aside to look at something else. But the sense of dread falls off towards the end of the book, as Cas figures out what's actually going on, and none of it is as horrible as it felt it could be. If you have a similar problem with some urban fantasy tropes, I think it's safe to stick with the story.
This was a fun story, but it doesn't develop much in the way of deeper themes in the series. There's essentially no Rio, no further discoveries about the villains of the first book, and no further details on what makes Cas tick or why she seems to be the only, or at least one of the few, super-powered people in this world. The advance publicity for the third book seems to indicate that's coming next. I'm curious enough now that I'll keep reading this series.
Recommended if you liked the first book. Half Life is very similar, but I think slightly better.
Followed by Root of Unity.
Rating: 7 out of 10
Going back to Android recently, I saw that all the tools binaries from the Android project are now click-wrapped by a quite ugly proprietary license that includes, among other things, an anti-fork clause (details below). Apparently those T&C are years old, but the click-wrapping is newer.
This applies to the SDK, the NDK, Android Studio, and all the essentials you download through the Android SDK Manager.
Since I keep my hands clean of smelly EULAs, I'm working on rebuilding the Android tools I need.
We're talking about hours-long, quad-core + 8GB-RAM + 100GB-disk-eating builds here, so I'd like to publish them as part of a project that cares.
(Replicant is currently stuck to a 2013 code base though.)
I also have in-progress instructions on my hard drive to rebuild various newer versions of the SDK/API levels, and for the NDK, whose releases are quite hard to reproduce (no git tags, fixes required that were only committed after the release, updates that are partial rebuilds, etc.). Not to mention that Google doesn't publish the source code until after the official release (closed development), and in some cases, like the Android Support Repository [not Library], I didn't even find the proper source code, only an old prebuilt.
Would you be interested in contributing, and would you recommend a structure that would promote Free, rebuilt Android *DK?

The legalese
3.4 You agree that you will not take any actions that may cause or result in the fragmentation of Android, including but not limited to distributing, participating in the creation of, or promoting in any way a software development kit derived from the SDK.
So basically the source is Apache 2 + GPL, but the binaries are non-free. By the way this is not a GPL violation because right after:
3.5 Use, reproduction and distribution of components of the SDK licensed under an open source software license are governed solely by the terms of that open source software license and not this License Agreement.
Still, AFAIU by clicking "Accept" to get the binary you still accept the non-free "Terms and Conditions".
(Incidentally, if Google wanted SDK forks to spread and increase fragmentation, introducing an obnoxious EULA is probably the first thing I'd have recommended. What was its legal team thinking?)
12.1 To the maximum extent permitted by law, you agree to defend, indemnify and hold harmless Google, its affiliates and their respective directors, officers, employees and agents from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorneys fees) arising out of or accruing from (a) your use of the SDK, (b) any application you develop on the SDK that infringes any copyright, trademark, trade secret, trade dress, patent or other intellectual property right of any person or defames any person or violates their rights of publicity or privacy, and (c) any non-compliance by you with this License Agreement.
3.1 Subject to the terms of this License Agreement, Google grants you a limited, worldwide, royalty-free, non-assignable and non-exclusive license to use the SDK solely to develop applications to run on the Android platform.
3.3 You may not use the SDK for any purpose not expressly permitted by this License Agreement. Except to the extent required by applicable third party licenses, you may not: (a) copy (except for backup purposes), modify, adapt, redistribute, decompile, reverse engineer, disassemble, or create derivative works of the SDK or any part of the SDK; or (b) load any part of the SDK onto a mobile handset or any other hardware device except a personal computer, combine any part of the SDK with other software, or distribute any software or device incorporating a part of the SDK.
If you know the URLs, you can still direct-download some of the binaries which don't embed the license, but all this feels fishy. The GNU licensing folks haven't answered me (yet). Maybe debian-legal has an opinion?
In any case, the difficulty to reproduce the *DK builds is worrying enough to warrant an independent rebuild.
Did you notice this?
Three months after installing an automatic decrufter in DAK, it:

- has removed 689 cruft items from unstable and experimental, an average removal rate just shy of 230 cruft items/month
- has become the “top 11th remover”
- is expected to become top 10 in 6 days from now and top 9 in 10 days, assuming a continued average removal rate of 7.6 cruft items per day
On a related note, the FTP masters have removed 28861 items between 2001 and now, an average of 2061 items a year (not accounting for the current year still being open). Though, intriguingly, in 2013 and 2014 the FTP masters removed 3394 and 3342 items respectively. With the (albeit limited) stats from the auto-decrufter, we can estimate that about 2700 of those were cruft items.
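The averages above can be double-checked with a few lines of arithmetic. The figures are the ones quoted in the post; the 14-year span (2001 through 2014) is my assumption, chosen to match the stated yearly average.

```python
# Figures quoted in the post.
cruft_removed = 689           # auto-decrufter removals in its first 3 months
months = 3

per_month = cruft_removed / months   # just shy of 230 items/month
per_day = per_month / 30.4           # roughly 7.6 items/day (30.4 days/month)

total_manual = 28861          # FTP master removals since 2001
years = 14                    # assumed full years, 2001 through 2014

print(round(per_month), round(per_day, 1), total_manual // years)
# -> 230 7.6 2061
```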
One could certainly also check the removal messages and check for the common “tags” used in cruft removals. I leave that as an exercise to the curious readers, who are not satisfied with my estimate. :)
Filed under: Debian, Release-Team
I have a Dell E7240. I’m pretty happy with it - my main complaint is that it has a very shiny screen, and that seems to be because it’s the touchscreen variant. While I don’t care about that feature, I do care about the fact it means I get FullHD in 12.5”.
Anyway. I’ve had issues with using a dock and an external monitor with the laptop for some time, including getting as far as mentioning the problems on the appropriate bug tracker. I’ve also had discussions with a friend who has the same laptop with the same issues, and who has spent some time trying to get it to work reliably. However, up until this week I haven’t had a desk I’m sitting at for any length of time to use the laptop, so it’s always been low priority for me. Today I sat down to try and figure out if there had been any improvement.
Firstly I knew the dock wasn’t at fault. A Dell E6330 works just fine with multiple monitors on the same dock. The E6330 is Ivybridge, while the E7240 is Haswell, so I thought potentially there might be an issue going on there. Further digging revealed another wrinkle I hadn’t previously been aware of; there is a DisplayPort Multi-Stream Transport (MST) hub in play, in particular a Synaptics VMM2320. Dell have a knowledge base article about Multiple external display issues when docked with a Latitude E7440/E7240 which suggests a BIOS update (I was already on A15) and a firmware update for the MST HUB. Sadly the firmware update is Windows only, so I had to do a suitable dance to be able to try and run it. I then discovered that the A05 update refused to work, complaining I had an invalid product ID. The A04 update did the same. The A01 update thankfully succeeded and told me it was upgrading from 2.00.002 to 2.15.000. After that had completed (and I’d power cycled to switch to the new firmware) I tried A05 again and this time it worked and upgraded me to 2.22.000.
Booting up Linux again I got further than before; it was definitely detecting that there was a monitor but it was very unhappy with lots of [drm:intel_dp_start_link_train] *ERROR* too many full retries, give up errors being logged. This was with 4.2, and as I’d been meaning to try 4.3-rc2 I thought this was a good time to give it a try. Lo and behold, it worked! Even docking and undocking does what I’d expect, with the extra monitor appearing / disappearing as you’d expect.
Now, I’m not going to assume this means it’s all happy, as I’ve seen this sort-of work in the past, but the clue about MST, the upgrade of that firmware (and noticing that it made things better under Windows as well) and the fact that there have been improvements in the kernel’s MST support according to the post 4.2 log gives me some hope that things will be better from here on.
On Friday, the reSIProcate community released the latest beta of reSIProcate 1.10.0. One of the key features of the 1.10.x release series is support for presence (buddy/status lists) over SIP, the very thing that is currently out of action in Skype. This is just more proof that free software developers are always anticipating users' needs in advance.
Unlike Skype, reSIProcate is genuine free software. You are free to run it yourself, on your own domain or corporate network, using the same service levels and support strategies that are important for you. That is real freedom.
Not sure where to start?
If you have deployed web servers and mail servers but you are not quite sure where to start deploying your own real-time communications system, please check out the RTC Quick Start Guide. You can read it online or download the PDF e-book.
Is your community SIP and XMPP enabled?
The Debian community has a federated SIP service, supporting standard SIP and WebRTC at rtc.debian.org for all Debian Developers. XMPP support was tested at DebConf15 and will be officially announced very soon now.
Would you like to extend this concept to other free software and non-profit communities that you are involved in? If so, please feel free to contact me personally for advice about how you can replicate these successful initiatives. If your community has a Drupal web site, then you can install everything using packages and the DruCall module.
Comment and discuss
Please join the Free-RTC mailing list to discuss or comment
What happened in the reproducible builds effort this week:
Media coverage
Nathan Willis covered our DebConf15 status update in Linux Weekly News. Non-LWN subscribers will get access on Thursday the 24th.
Linux Journal published a more general piece last Tuesday.
Unexpected praise for reproducible builds appeared this week in the form of several iOS applications being identified as including spyware. The malware went undetected by Apple's screening because application developers had simply downloaded a trojaned version of Xcode through an unofficial source. While reproducible builds can't really help users of non-free software, this is exactly the kind of attack that we are trying to prevent in our systems.
Toolchain fixes
- Mathieu Malaterre uploaded abi-compliance-checker/1.99.11-1 which drops the timestamps from the generated HTML reports and makes the generated .abi.tar.gz files reproducible. Original patches by Chris Lamb.
The following 24 packages became reproducible due to changes in their build dependencies: apache-curator, checkbox-ng, gant, gnome-clocks, hawtjni, jackrabbit, jersey1, libjsr305-java, mathjax-docs, mlpy, moap, octave-geometry, paste, pdf.js, pyinotify, pytango, python-asyncssh, python-mock, python-openid, python-repoze.who, shadow, swift, tcpwatch-httpproxy, transfig.
The following packages became reproducible after getting fixed:
- apparmor/2.10-2 uploaded by intrigeri, fixed upstream by Christian Boltz, with the same change suggested by Reiner Herrmann.
- ardour/1:4.2~dfsg-2 by IOhannes m zmölnig.
- dcmtk/3.6.1~20150629-1 uploaded by Andreas Tille, original patch by akira.
- deap/1.0.1-4 by Daniel Stender.
- firebird2.5/18.104.22.168856.ds4-2 by Damyan Ivanov.
- gamera/3.4.2+svn1437-1 by Daniel Stender.
- genometools/1.5.7-1 by Sascha Steinbiss.
- golang-github-go-xorm-core/0.4.4-1 by Alexandre Viau.
- klibc/2.0.4-4 by Ben Hutchings.
- libgtk2-perl/2:1.2496-3 by intrigeri.
- lsof/4.89+dfsg-0.1 uploaded by Laurent Bigonville, original patch by Lunar.
- monotone/1.1-6 by Markus Wanner.
- ndisc6/1.0.1-4 by Santiago Vila.
- privoxy/3.0.23-4 by Roland Rosenfeld.
- ruby-flexmock/2.0.0~rc1-1 by Antonio Terceiro.
- ruby-html2haml/2.0.0-1 by Lunar.
- tunnelx/20140102-3 uploaded by Wookey, original patch by Chris Lamb.
- wtforms/2.0.2-1 by Orestis Ioannou, original patch by Juan Picca.
Some uploads fixed some reproducibility issues but not all of them:
Patches submitted which have not made their way to the archive yet:
- #783152 on kmod by Lunar: export SOURCE_DATE_EPOCH in debian/rules.
- #799010 on 389-ds-base by Chris Lamb: use SOURCE_DATE_EPOCH value as the build date.
- #799206 on python-sqlalchemy-utils by Chris Lamb: sort the list of extra requirements.
- #799330 on cappuccino by Chris Lamb: pass a fixed seed to polygen.
- #799410 on segment by Chris Lamb: use date of the latest debian/changelog entry as build date.
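Several of the patches above rely on the SOURCE_DATE_EPOCH convention: instead of stamping files with the current time, the build exports a fixed epoch derived from the latest debian/changelog entry. A minimal shell sketch of the idea (real packages obtain the date via dpkg-parsechangelog; the entry date here is a hard-coded, hypothetical example):

```shell
# Turn a changelog-style date into SOURCE_DATE_EPOCH so build tools
# can embed a deterministic timestamp instead of "now".
CHANGELOG_DATE="Mon, 21 Sep 2015 00:00:00 +0000"   # hypothetical changelog entry date
SOURCE_DATE_EPOCH="$(date -u -d "$CHANGELOG_DATE" +%s)"
export SOURCE_DATE_EPOCH
echo "$SOURCE_DATE_EPOCH"
```

Two builds of the same source then agree on every embedded timestamp, which is exactly what the 389-ds-base and segment patches aim for.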
Python 3 offers new features (namely yield from and concurrent.futures) that could help implement parallel processing. The clear separation of bytes and unicode strings is also likely to reduce encoding related issues.
The rest of the code has been moved to the point where only incompatibilities between Python 2.7 and Python 3.4 had to be changed. The commit stream still requires some cleanups, but all tests are now passing under Python 3.
Documentation update
The documentation on how to assemble the weekly reports has been updated. (Lunar)
The example on how to use SOURCE_DATE_EPOCH with CMake has been improved. (Ben Boeckel, Daniel Kahn Gillmor)
The solution for timestamps in man pages generated by Sphinx now uses SOURCE_DATE_EPOCH. (Mattia Rizzolo)
Package reviews
45 reviews have been removed, 141 added and 62 updated this week.
67 new FTBFS reports have been filed by Chris Lamb, Niko Tyni, and Lisandro Damián Nicanor Pérez Meyer.
The prebuilder script is now properly testing umask variations again.
Santiago Vila started a discussion on debian-devel on how binNMUs would work for reproducible builds.
I stumbled over this CD a few weeks ago, and immediately ordered it from a second-hand dealer in the US: International Sad Hits – Volume 1: Altaic Group. Four artists from different countries (2x Japan, Korea, Turkey) with very different musical styles, but connected by one thing: they don’t fit into the happy-peppy culture of AKB48, JPOP, KPOP and the like, but are singers and songwriters who probe the depths of sadness. With two of my favorite Japanese artists appearing in the list (Tomokawa Kazuki and Mikami Kan), there was no way I could not buy this CD.
The four artists combined on this excellent CD are: Fikret Kızılok, a Turkish musician, singer and songwriter. Quoting from the pamphlet:
However, in 1983 Kızılok returned with one of his most important and best albums: Zaman Zaman (Time to time). […] These albums presented an alternative to the horrible pop scene emerging in Turkey — they criticized the political situation, the so-called intellectuals, and the pop stars.
The Korean artist is 김두수 (Kim Doo Soo), who was the great surprise of the CD for me. The sadness and beauty that is transmitted through his music is special. The pamphlet of the CD states:
The strictness of his home atmosphere suffocated him, and in defiance against his father he dropped out, and walked along the roads. He said later that, “Hatred towards the world, and the emptiness of life overwhelmed me. I lived my life with alcohol every day.”
Problems arising from the political crisis, and the fierce reactions to his song “Bohemian” (included here), made him disappear into the mountains for 10 years, only to return with even better music.
Author Tatematsu Wahei has described Tomokawa as “a man standing naked, his sensibility utterly exposed and tingling.” It’s an accurate response to a creativity that seems unmediated by embarrassment, voraciously feeding off the artist’s personal concern.
The fourth artist is again from Japan: Mikami Kan, a well-known wild man of the Japanese music scene. After his debut at the Nakatsugawa All-Japan Folk Jamboree in 1971, he was something of a star for some years, but without a record deal his popularity steadily decreased.
During this period his songwriting gradually altered, becoming more dense, surreal and uncompromisingly personal.
Every song on this CD is a masterpiece in itself, but despite my being a great fan of Tomokawa, my favorite here is Kim Doo Soo, with songs that grip your heart and soul, stunningly beautiful and sad at the same time.
Then I found this article describing the implementation of a bridge between the Echo and Belkin Wemo switches, cunningly called Fauxmo. The Echo already supports controlling Wemo switches, and the code in question simply implements enough of the Wemo API to convince the Echo that there's a bunch of Wemo switches on your network. When the Echo sends a command to them asking them to turn on or off, the code executes an arbitrary callback that integrates with whatever API you want.
This seemed like a good starting point. There's a free implementation of the LIFX bulb API called Lazylights, and with a quick bit of hacking I could use the Echo to turn my bulb on or off. But the Echo's Hue support also allows dimming of lights, and that seemed like a nice feature to have. Tcpdump showed that asking the Echo to look for Hue devices resulted in similar UPnP discovery requests to it looking for Wemo devices, so extending the Fauxmo code seemed plausible. I signed up for the Philips developer program and then discovered that the terms and conditions explicitly forbade using any information on their site to implement any kind of Hue-compatible endpoint. So that was out. Thankfully enough people have written their own Hue code at various points that I could figure out enough of the protocol by searching Github instead, and now I have a branch of Fauxmo that supports searching for LIFX bulbs and presenting them as Hues.
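For reference, the UPnP discovery traffic mentioned above is SSDP: the Echo multicasts an M-SEARCH probe and a Fauxmo-style bridge answers it for each emulated device. A sketch of what such a probe looks like in a tcpdump capture (the header values here are illustrative assumptions, not a verbatim capture):

```shell
# Sketch of an SSDP M-SEARCH discovery request, sent to the standard
# SSDP multicast address and port; a bridge that answers probes like
# this makes its emulated switches/bulbs discoverable.
MSEARCH='M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 3
ST: urn:Belkin:device:**'
printf '%s\n' "$MSEARCH"
```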
Running this on a machine on my local network is enough to keep the Echo happy, and I can now dim my bedroom light in addition to turning it on or off. But it demonstrates a somewhat awkward situation. Right now vendors have no real incentive to offer any kind of compatibility with each other. Instead they're all trying to define their own ecosystems with their own incompatible protocols with the aim of forcing users to continue buying from them. Worse, they attempt to restrict developers from implementing any kind of compatibility layers. The inevitable outcome is going to be either stacks of discarded devices speaking abandoned protocols or a cottage industry of developers writing bridge code and trying to avoid DMCA takedowns.
The dystopian future we're heading towards isn't Gibsonian giant megacorporations engaging in physical warfare, it's one where buying a new toaster means replacing all your lightbulbs or discovering that the code making your home alarm system work is now considered a copyright infringement. Is there a market where I can invest in IP lawyers?
 It also requires an additional phrase at the beginning of a request to indicate which third party app you want your query to go to, so it's much more clumsy to make those requests compared to using a built-in app.
 I only have one bulb, so as yet I haven't added any support for groups.
Review: Shady Characters, by Keith Houston
Publisher: W.W. Norton
Copyright: 2013
ISBN: 0-393-34972-1
Format: Trade paperback
Pages: 250
Subtitled The Secret Life of Punctuation, Symbols & Other Typographical Marks, Shady Characters is one of those delightfully quirky books that examines the history of something you would not normally connect to history. It's an attempt to document, and in some cases reconstruct, the history of several specific punctuation marks. If you've read and enjoyed Lynne Truss's Eats, Shoots & Leaves, this is a near-perfect complement, focusing on more obscure marks and adding in a more detailed and coherent history.
Houston has some interest in the common and quotidian punctuation marks, with chapters on the hyphen, dash, and quotation mark, but he avoids giving a full-chapter treatment to the most obvious periods and commas. (Although one learns quite a bit about them as asides in other histories.) The rest of the book focuses on the newly-popular (the at symbol), the recognizable but less common (the hash mark, the asterisk and dagger symbols used for footnotes, and the ampersand), and the historical but now obscure (the pilcrow or paragraph mark, and the manicule or pointing finger). He even devotes two chapters to unsuccessful invented punctuation: the interrobang and the long, failed history of irony and sarcasm punctuation.
And this is an actual history, not just a collection of anecdotes and curious facts. Houston does a great job of keeping the text light, engaging, and readable, but he's serious about his topic. There are many reproductions of ancient documents showing early forms of punctuation, several serious attempts to adjudicate between competing theories of origin, a few well-marked and tentative forays into guesswork, and an open acknowledgment of several areas where we simply don't know. Along the way, the reader gets a deeper feel for the chaos and arbitrary personal stylistic choices of the days of hand-written manuscripts and the transformation of punctuation by technology. So much of what we now use in punctuation was shaped and narrowed by the practicalities of typesetting. And then modern technology revived some punctuation, such as the now-ubiquitous at sign, or the hash mark that graces every telephone touchpad.
I think my favorite part of this history was using punctuation as perspective from which to track the changing relationship between people and written material. It's striking how much early punctuation was primarily used for annotations and commentary on the text, as opposed to making the text itself more readable. Much early punctuation was added after the fact, and then slowly was incorporated into the original manuscript, first via recopying and then via intentional authorial choice. Texts started including their own pre-emptive commentary, and brought the corresponding marks along with the notes. And then printing forced a vast simplification and standardization of punctuation conventions.
Full compliments to W.W. Norton, as well, for putting the time and effort into making the paper version of this book a beautiful artifact. Punctuation is displayed in red throughout when it is the focus of the text. Endnotes are properly split from footnotes, with asides and digressions presented properly at the bottom of the page, and numbered endnotes properly reserved solely for citations. There is a comprehensive index, list of figures, and a short and curated list of further readings. I'm curious how much of the typesetting care was preserved in the ebook version, and dubious that all of it would have been possible given the current state of ebook formatting.
Typesetting, obscure punctuation, and this sort of popular history of something quotidian are all personal interests of mine, so it's unsurprising I found this book delightful. But it's so well-written and engaging that I can recommend it even to people less interested in those topics. The next time you're in the mood for learning more about a corner of human history that few people consider, I recommend Shady Characters to your attention.
Rating: 8 out of 10
Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.
This release contains small upstream improvements:
Changes in RcppArmadillo version 0.5.600.2.0 (2015-09-19)
Upgraded to Armadillo 5.600.2 ("Molotov Cocktail Deluxe")
expanded .each_col() and .each_row() to handle out-of-place operations
added .each_slice() for repeated matrix operations on each slice of a cube
faster handling of compound expressions by join_rows() and join_cols()
Courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.
Changes in RQuantLib code:
A simple shiny application is now included in the directory shiny/DiscountCurve/ and accessible via the new demo function ShinyDiscountCurve.
The option surface plotting example in arrays.R now checks for rgl by using requireNamespace.
The files NAMESPACE and DESCRIPTION have been updated to reflect all the suggestions of R CMD check.
The Travis CI tests now use binary Debian packages for all package dependencies making the tests a little faster.
Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.
While looking at NEW I sometimes see a package and think “wow this is great, you have to try this”.
Current weather in Chemnitz => 12 °C ☔ – Wind => 2.52 m/s WNW – Humidity => 80 % – Pressure => 1014 hPa
Now I see that I need a coat and an umbrella for my walk. Or better, I don’t go outside and continue looking at other stuff in NEW.
Weblate 2.4 has been released today. It comes with extended support for various file formats, extended hook scripts, better keyboard shortcuts and dozens of bug fixes.
Full list of changes for 2.4:
- Improved support for PHP files.
- Ability to add ACL to anonymous user.
- Improved configurability of import_project command.
- Added CSV dump of history.
- Avoid copy/paste errors with whitespace chars.
- Added support for Bitbucket webhooks.
- Tighter control on fuzzy strings on translation upload.
- Several URLs have changed, you might have to update your bookmarks.
- Hook scripts are executed with VCS root as current directory.
- Hook scripts are executed with environment variables describing the current component.
- Add management command to optimize fulltext index.
- Added support for error reporting to Rollbar.
- Projects now can have multiple owners.
- Project owners can manage themselves.
- Support for adding new translations in XLIFF.
- Improved file format autodetection.
- Extended keyboard shortcuts.
- Improved dictionary matching for several languages.
- Improved layout of most of pages.
- Support for adding words to dictionary while translating.
- Added support for filtering languages to be managed by Weblate.
- Added support for translating and importing CSV files.
- Rewritten handling of static files.
- Direct login/registration links to third party service if that's the only one.
- Commit pending changes on account removal.
- Add management command to change site name.
- Add option to configure default committer.
- Add hook after adding new translation.
- Add option to specify multiple files to add to commit.
If you are upgrading from older version, please follow our upgrading instructions.
You can find more information about Weblate on http://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. You can login there with demo account using demo password or register your own user.
If you are free software project which would like to use Weblate, I'm happy to help you with set up or even host Weblate for you.
Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!
In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.
The only requirement is the ThinkPad ACPI kernel module, which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.
Hooking into the events
event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"

event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"
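Assuming these are acpid event stanzas (which the event=/action= format suggests), a stock acpid setup reads one stanza per file from /etc/acpi/events/; a sketch of the layout, with a hypothetical file name:

```shell
# /etc/acpi/events/thinkpad-dock  (hypothetical file name)
event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"
```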
then restart udev:
sudo service udev restart
Finding the right events
To make sure the events are the right ones, lift them off of:
and ensure that your script is actually running by adding:
logger "ACPI event: $*"
at the beginning of it and then looking in /var/log/syslog for lines like:
logger: external-monitor undock
logger: external-monitor dock
If that doesn't work for some reason, try using an ACPI event script like this:
event=ibm/hotkey action=logger %e
to see which event you should hook into.
Using xrandr inside an ACPI event script
Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:
#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1