Planet Debian

Planet Debian - http://planet.debian.org/

Dirk Eddelbuettel: R and Docker

26 September, 2014 - 10:57

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting up the meeting, and for providing a pretty thorough intro talk on CoreOS and Docker.

My slides are now up on my presentations page.

Steve Kemp: Today I mostly removed python

26 September, 2014 - 03:11

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had python installed and just thought "Oh, yeah, I must have python utilities running". It turns out, though, that on 16 out of the 19 servers I control python was installed solely for the lsb_release script!

So I hacked up a horrible replacement for `lsb_release` in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
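
The idea is roughly this (a minimal sketch of the approach, not the actual script I deployed; the output formatting is approximate):

#!/bin/sh
# Poor man's lsb_release: everything comes from /etc/os-release, which is
# shell-sourceable on Wheezy and later.
. /etc/os-release

case "$1" in
    -i|--id)          echo "Distributor ID: ${ID}" ;;
    -d|--description) echo "Description:    ${PRETTY_NAME}" ;;
    -r|--release)     echo "Release:        ${VERSION_ID}" ;;
    -c|--codename)    echo "Codename:       ${VERSION_CODENAME:-n/a}" ;;
    *)                echo "Usage: $0 -i|-d|-r|-c" >&2; exit 1 ;;
esac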

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed that package.

So now 1 of my 19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read: "once or twice in the past year") and that, as it happened, I had no python scripts installed, I see no reason to keep it around on the off-chance.

My biggest surprise of the day was that, even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

Aigars Mahinovs: Distributing third party applications via Docker?

26 September, 2014 - 02:54

Recently the discussions around how to distribute third party applications for "Linux" have become a topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution ship a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end-users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.

For me the topic really hit home at DebConf 14, where Linus voiced his frustrations with app distribution problems, and some of that was also touched on by Valve. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.

There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has semantics for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play out a simple scenario of how third party applications could work on Linux.

A third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in the "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a Docker image is built from this compiled folder, it is based on the "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in whatever way is convenient for them.
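
A rough sketch of that developer-side flow with the plain Docker command line (all image and path names here, including the "debian-app-dev:wheezy" and "debian-app:wheezy" bases, are illustrative assumptions):

# Build stage: ./build/Dockerfile is "FROM debian-app-dev:wheezy" plus RUN
# steps that compile the game into /opt/mygame.
docker build -t mygame-build ./build
# Create a throwaway container so the compiled files can be copied out.
docker run --name mygame-compile mygame-build true
docker cp mygame-compile:/opt/mygame ./runtime/mygame

# Runtime stage: ./runtime/Dockerfile is "FROM debian-app:wheezy" and only
# copies in the compiled game resources.
docker build -t mygame:1.0 ./runtime

# Export the finished image into a single distributable file.
docker save mygame:1.0 > mygame-1.0.docker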

The end user would download the game file (through an app store app, an app store website or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUI to launch on double-click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio, or whether to mount a folder into the container where the game would be able to keep its save files, or even whether the application would be able to access X (or Wayland) at all.
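
On the user side the same flow could look roughly like this (again only a sketch; in practice the launcher GUI would pick the docker run options from the app's metadata rather than having the user type them):

# Import the downloaded game file into the local Docker instance.
docker load < mygame-1.0.docker

# Launch it with only the access the metadata asks for - here the X socket
# and a host folder for save games (paths are illustrative).
docker run --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "$HOME/.local/share/mygame:/home/game/saves" \
    mygame:1.0 /usr/games/mygame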

Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing side, not only is our third party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they come from a bunch of different processes. The window manager can then also track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute a file of the same size as before, or, if the purchase is going via some kind of app-store application, only the layers that actually changed can be rsynced over individually, creating a much faster update experience. Images with the same base can share data; this would encourage the creation of higher level base images, such as "debian-app-gamegl:wheezy", that all GL game developers could use, thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, without requiring a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything, or to push everything - and now package management too - into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows them to be recombined in different ways for different purposes, or to have a part replaced to create a system with radically different functionality.

Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing from replacing System V init towards jumping all the way to "System D" - rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions can compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

Jan Wagner: Redis HA with Redis Sentinel and VIP

26 September, 2014 - 01:56

For a current project we decided to use Redis for several reasons. As availability is a critical part of it, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave.

Setting up the Redis replication was straightforward, and so was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass).

The more interesting part is how to migrate clients over to the new master in case of a failover. While Redis Sentinel could also be used as a configuration provider, we decided not to use this feature, as the application would need to request the current master node from Redis Sentinel rather often, which might have a performance impact.
The first idea was to use some kind of VRRP, implemented in keepalived or something similar. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':

# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
# 
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
# 
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.

This looks pretty good, and as a <role> is provided, I thought it would be a good idea to just call a script which evaluates this value and, based on it, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this was not working, as <role> did not reliably return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying and I was close to giving up.
Fortunately I stumbled upon a (possibly Chinese) post about Redis Sentinel where the same thing had been attempted. On a second look I recognized that the decision there was made on ${6}, which is <to-ip>, nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script and, after some other pitfalls, this strategy worked out well.
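
The core of the script boils down to something like this (a simplified sketch; the VIP, interface and master name are placeholders, and the script is hooked up in sentinel.conf via 'sentinel client-reconfig-script mymaster /path/to/this/script'):

#!/bin/sh
# client-reconfig-script: argument 6 is <to-ip>, the address of the newly
# elected master. If that address is configured on this host, claim the VIP,
# otherwise make sure it is released.
VIP="192.0.2.100/24"   # placeholder virtual IP
IFACE="eth0"           # placeholder interface
NEW_MASTER_IP="$6"

if ip -o addr show dev "$IFACE" | grep -q " ${NEW_MASTER_IP}/"; then
    ip addr add "$VIP" dev "$IFACE" 2>/dev/null || true   # we are the new master
else
    ip addr del "$VIP" dev "$IFACE" 2>/dev/null || true   # the master is elsewhere
fi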

Some notes about convergence: it currently takes roughly 6-7 seconds for the VIP to be migrated over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we expect this to happen rarely, we need to design the application using our Redis setup to handle this (hopefully) rare scenario.

Gunnar Wolf: #bananapi → On how compressed files should be used

26 September, 2014 - 00:37

I am among the lucky people who got back home from DebConf with a brand new computer: a Banana Pi. Despite the name similarity, it is not affiliated with the very well known Raspberry Pi, although it is a very comparable (although much better) machine: a dual-core ARM A7 system with 1GB RAM, several more on-board connectors, and the same form factor.

I have not yet been able to get it to boot, even from the images distributed on their site (although I cannot complain, I have not devoted more than an hour or so to the process!), but I do have a gripe about how the images are distributed.

I downloaded some images to play with: Bananian, Raspbian, a Scratch distribution, and Lubuntu. I know I have a long way to learn in order to contribute to Debian's ARM port, but if I can learn by doing... ☻

So, what is my gripe? That the images are downloaded as archive files:

0 gwolf@mosca『9』~/Download/banana$ ls -hl bananian-latest.zip \
> Lubuntu_For_BananaPi_v3.1.1.tgz Raspbian_For_BananaPi_v3.1.tgz \
> Scratch_For_BananaPi_v1.0.tgz
-rw-r--r-- 1 gwolf gwolf 222M Sep 25 09:52 bananian-latest.zip
-rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz
-rw-r--r-- 1 gwolf gwolf 1.3G Sep 25 10:01 Raspbian_For_BananaPi_v3.1.tgz
-rw-r--r-- 1 gwolf gwolf 1.2G Sep 25 10:05 Scratch_For_BananaPi_v1.0.tgz

Now... that is quite an odd way to distribute image files! Especially when looking at their contents:

0 gwolf@mosca『14』~/Download/banana$ unzip -l bananian-latest.zip
Archive:  bananian-latest.zip
    Length      Date    Time    Name
  ---------  ---------- -----   ----
 2032664576  2014-09-17 15:29   bananian-1409.img
  ---------                     -------
 2032664576                     1 file
0 gwolf@mosca『15』~/Download/banana$ for i in Lubuntu_For_BananaPi_v3.1.1.tgz \
> Raspbian_For_BananaPi_v3.1.tgz Scratch_For_BananaPi_v1.0.tgz
> do tar tzvf $i; done
-rw-rw-r-- bananapi/bananapi 3670016000 2014-08-06 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img
-rwxrwxr-x bananapi/bananapi 3670016000 2014-08-08 04:30 Raspbian_For_BananaPi_v3_1.img
-rw------- bananapi/bananapi 3980394496 2014-05-27 01:54 Scratch_For_BananaPi_v1_0.img

And what is bad about them? That they force me to either have heaps of disk space available (2GB or 4GB for each image) or to spend valuable time extracting before recording the image each time.

Why not just compress the image file, without archiving it? That is,

0 gwolf@mosca『7』~/Download/banana$ tar xzf Lubuntu_For_BananaPi_v3.1.1.tgz
0 gwolf@mosca『8』~/Download/banana$ xz Lubuntu_1404_For_BananaPi_v3_1_1.img
0 gwolf@mosca『9』~/Download/banana$ ls -hl Lubun*
-rw-r--r-- 1 gwolf gwolf 606M Aug 6 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img.xz
-rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz

Now, wouldn't we need to decompress said files as well? Yes, but thanks to the magic of shell redirections, we can just do it on the fly. That is, instead of having 3×4GB + 1×2GB files sitting on my hard drive, I just need to keep several files ranging between 145M and, I guess, ~1GB. Then, it's as easy as doing:

0 gwolf@mosca『8』~/Download/banana$ dd if=<(xzcat bananian-1409.img.xz) of=/dev/sdd

And the result should be the same: a fresh new card with Bananian ready to fly. Right, right, people using these files need to have xz installed on their systems, but... as it stands now, I suppose current prospective users of a Banana Pi won't fret about facing a standard Unix tool!

(Yes, I'll forward this rant to the Banana people, it's not just bashing on my blog :-P )

Marco d'Itri: CVE-2014-6271 fix for Debian sarge and etch

25 September, 2014 - 23:01

Very old Debian releases like sarge (3.1) and etch (4.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug):

http://ftp.linux.it/pub/People/md/bash/

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

Julian Andres Klode: hardlink 0.3.0 released; xattr support

25 September, 2014 - 20:42

Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

hardlink 0.3 now features support for extended attributes (xattrs), contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

I also added support for a --minimum-size option.
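
Usage is along these lines (a sketch; I'm assuming the option takes a size in bytes, so check --help for the exact format):

# Deduplicate identical files under /srv/mirror, ignoring anything below ~1 MB.
hardlink --minimum-size 1048576 /srv/mirror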

Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.



Petter Reinholdtsen: Suddenly I am the new upstream of the lsdvd command line tool

25 September, 2014 - 17:20

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc., in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and had a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.
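
For those who have not seen it, typical usage is simply (a quick sketch; the XML output flag is how I remember it, so check lsdvd(1)):

# Human readable summary of the disc in the drive.
lsdvd /dev/dvd
# The same information as XML, handy for scripting.
lsdvd -Ox /dev/dvd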

I've been in touch with a Gentoo developer and the Debian maintainer, who are interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it in too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

Mike Hommey: So, hum, bash…

25 September, 2014 - 15:43

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo”, “xterm” and “echo this is a test” are left as an exercise to the reader.

Russ Allbery: Review: Turn the Ship Around!

25 September, 2014 - 11:16

Review: Turn the Ship Around!, by L. David Marquet

Publisher: Portfolio Copyright: 2012 ISBN: 1-101-62369-1 Format: Kindle Pages: 272

Turn the Ship Around! (yes, complete with the irritating exclamation point in the title) is marketed to the business and management non-fiction market, which is clogged with books claiming to provide simple techniques to be a great manager or fix an organization. If you're like me, this is a huge turn-off. The presentation of the books is usually just shy of the click-bait pablum of self-help books. Many of the books are written by famous managers best known for doing horrible things to their staff (*cough* Jack Welch). It's hard to get away from the feeling that this entire class of books is an ocean of bromides covering a small core of outright evil.

This book is not like that, and Marquet is not one of those managers. It can seem that way at times: it is presented in a format that caters to short attention spans, with summaries of primary points at the end of every short chapter and occasionally annoying questions sprinkled throughout. I'm capable of generalizing information to my own life without being prompted by study questions, thanks. But that's just form. The core of this book is a surprisingly compelling story of Marquet's attempt to introduce a novel management approach into one of the most conservative and top-down of organizations: a US Navy nuclear submarine.

I read this book as an individual employee, and someone who has no desire to ever be a manager. But I recently changed jobs and significantly disrupted my life because of a sequence of really horrible management decisions, so I have strong opinions about, at least, the type of management that's bad for employees. A colleague at my former employer recommended this book to me while talking about the management errors that were going on around us. It did such a good job of reinforcing my personal biases that I feel like I should mention that as a disclaimer. When one agrees with a book this thoroughly, one may not have sufficient distance from it to see the places where its arguments are flawed.

At the start of the book, Marquet is assigned to take over as captain of a nuclear submarine that's struggling. It had a below-par performance rating, poor morale, and the worst re-enlistment rate in the fleet, and was not advancing officers and crew to higher ranks at anywhere near the level of other submarines. Marquet brought to this assignment some long-standing discomfort with the normal top-down decision-making processes in the Navy, and decided to try something entirely different: a program of radical empowerment, bottom-up decision-making, and pushing responsibility as far down the chain of command as possible. The result (as you might expect given that you can read a book about it) was one of the best-performing submarines in the fleet, with retention and promotion rates well above average.

There's a lot in here about delegated decision-making and individual empowerment, but Turn the Ship Around! isn't only about that. Those are old (if often ignored) rules of thumb about how to manage properly. I think the most valuable part of this book is where Marquet talks frankly about his own thought patterns, his own mistakes, and the places where he had to change his behavior and attitude in order to make his strategy successful. It's one thing to say that individuals should be empowered; it's quite another to stop empowering them (which is still a top-down strategy) and start allowing them to be responsible. To extend trust and relinquish control, even though you're the one who will ultimately be held responsible for the performance of the people reporting to you. One of the problems with books like this is that they focus on how easy the techniques presented in the book are. Marquet does a more honest job in showing how difficult they are. His approach was not complex, but it was emotionally quite difficult, even though he was already biased in favor of it.

The control, hierarchy, and authority parts of the book are the most memorable, but Marquet also talks about, and shows through specific examples from his command, some accompanying principles that are equally important. If everyone in an organization can make decisions, everyone has to understand the basis for making those decisions and understand the shared goals, which requires considerable communication and open discussion (particularly compared to a Navy ideal of an expert and taciturn captain). It requires giving people the space to be wrong, and requires empowering people to correct each other without blame. (There's a nice bit in here about the power of deliberate action, and while Marquet's presentation is more directly applicable to the sorts of physical actions taken in a submarine, I was immediately reminded of code review.) Marquet also has some interesting things to say about the power of, for lack of a better term, esprit de corps, how to create it, and the surprising power of acting like you have it until you actually develop it.

As mentioned, this book is very closely in alignment with my own biases, so I'm not exactly an impartial reviewer. But I found it fascinating the degree to which the management situation I left was the exact opposite of the techniques presented in this book in nearly every respect. I found it quite inspiring during my transition period, and there are bits of it that I want to read again to keep some of the techniques and approaches fresh in my mind.

There is a fair bit of self-help-style packaging and layout here, some of which I found irritating. If, like me, you don't like that style of book, you'll have to wade through a bit of it. I would have much preferred a more traditional narrative story from which I could draw my own conclusions. But it's more of a narrative than most books of this sort, and Marquet is humble enough to show his own thought processes, tensions, and mistakes, which adds a great deal to the presentation. I'm not sure how directly helpful this would be for a manager, since I've never been in that role, but it gave me a lot to think about when analyzing successful and unsuccessful work environments.

Rating: 8 out of 10

Laura Arjona: 10 short steps to contribute translations to free software for Android

25 September, 2014 - 07:14

This small guide assumes that you know how to create a public repository with git (or another version control system). Some projects may use a different VCS, Subversion or whatever; the process would be similar, although the commands will of course be different.

If you don't want to use any VCS, you can just download the corresponding file, translate it, and send it by email or to the BTS of the project, but the commands required are very easy and you'll soon see that using git (or any VCS) is quite comfortable and less scary than it seems.

So, you were going to recommend a nice app that you use or found in F-Droid to a friend, but she does not understand English. Why not translate the app for her? And for everybody? It's a job that can be done in 15 minutes or so (Android apps have very short strings, few menus, and so on). Let's go!

1.- Search the app in the F-Droid website

You can do it going to the URL:

https://f-droid.org/repository/browse/?fdfilter=wordofappname

Example: https://f-droid.org/repository/browse/?fdfilter=pomodoro

Then, open the details of the app, and learn where’s the source code.

2.- Clone the source code

If you have an account in that forge, fork/clone the project into your account, and then clone your fork/clone locally.

If you haven't got an account in that forge, clone the project locally.

git clone URLofTheProjectOrYourClone

3.- Locally, create a new branch and check it out.


git checkout -b Spanish

4.- Then, copy the “/res/values” folder into “res/values-XX” folder (where XX is your language code)

cd nameofrepo
cp -R ./res/values ./res/values-es

5.- Translate

Edit the “strings.xml” file that is in the “res/values-XX” folder, and change the English strings to your language (respect the XML format).

6.- Translate other files, or delete them

If there are more files in that folder (e.g. "arrays.xml"), review them to see whether they have "translatable" strings. If they do, translate them. If not, delete those files.
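
A quick way to spot entries that never need translating is to look for the standard translatable="false" attribute (just a helper idea, not part of the original steps):

# Files in your copy that declare non-translatable strings; those entries
# can simply be removed from res/values-es.
grep -rl 'translatable="false"' res/values-es/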

7.- Commit

When you are finished, commit your changes:

git add res/values-es/*
git commit -a

(Message can be “Spanish translation” or so)

8.- Push your changes to your public repo

If you didn't create a public clone of the repo in your forge, create a public repo and push your local work there.

git push --all

9.- Request a merge to the original repo

(Use the web interface of the forge if it is the same for the original repo and your clone, or send an email or create an issue providing the URL of your repo.)

For example, open a new issue in the project’s BTS

Title: Spanish translation available for merging
Body: Hi everybody. Thanks for your work in "nameofapp".
I have completed a Spanish translation, it's available for review/merge
in the Spanish branch of my repo:

https://urlofyourclone

Best regards

10.- Congratulations!

Translations are new features, and having a new feature in your app for free is a great thing, so probably the app developer(s) will merge your translation soon.

Share your joy with your friends, so they begin to use the app you translated, and maybe become translators too!

Comments?

You can comment on this post in this pump.io thread.



Julian Andres Klode: APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

25 September, 2014 - 05:06

Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

There are three things that we looked at:

  1. Reducing privileges by setting a new user and group
  2. chroot()
  3. seccomp-bpf sandbox

Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
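
A quick way to check that the switch is in effect on a system running 1.1~exp3 (just a verification sketch, nothing official):

# The list and archive directories should now be owned by _apt ...
stat -c '%U %n' /var/lib/apt/lists /var/cache/apt/archives
# ... and while an "apt-get update" is running, the fetcher methods
# should show up under that user.
ps -eo user,comm | grep '^_apt'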

Known issues:

  • We drop all groups except the primary gid of the user
  • copy breaks if that group has no read access to the files

We plan to also add chroot() and seccomp sandboxing later on, to reduce the attack surface on untrusted files and protocol parsing.



Vincent Sanders: I wanted to go to Portland because it's a really good book town.

24 September, 2014 - 17:37
Patti Smith is right: more than any other US city I have visited, Portland feels different. Although living in Cambridge, which sometimes feels like the place where books were invented, might give me a warped sense of a place.

I have visited Portland a few times previously and I feel comfortable every time I arrive at PDX. Sure, the place still suffers from the American obsession with the car, but, similar to New York, you can rely on public transport to get about.

On this occasion my visit was for the Debian Conference, which I was excited to attend having missed the previous one in Switzerland. This time the conference changed its format to being 10 days long, mixing the developer time in with the more formal sessions.

The opening session gave Steve McIntyre and myself the opportunity to present a small token of our appreciation to Russ. The keynote speakers that afternoon were all very interesting, with both Stefano Zacchiroli and Gabriella Coleman giving food for thought on two very different subjects.

Several conferences in the past have experienced issues with sponsored accommodation and food, I am very pleased to report that both were very good this time. The room I was in had a small kitchen area, en-suite bathroom, desks and most importantly comfortable beds.

The food provision was in the form of a buffet in the Ondine facility. The menu was not greatly varied but catered to all requirements including vegetarian and gluten free diets.

Some of us went on a visit to the Evergreen air and space museum to look at some rare aircraft and rockets. I can thoroughly recommend a visit if you are in the area.

These are just the highlights of the week though; the time in the hack-labs was productive too, with several practical achievements including:
  • Uploading new packages, reducing the bug count
  • Sorting out getting an updated key into the Debian keyring

Overall I had a thoroughly enjoyable time and got a lot out of the conference this year. The new format suited me surprisingly well and as usual the social side was as valuable as the practical.

I hope the organisers have recovered enough to appreciate just how good a job they did, and do not get hung up on the small number of things that went wrong when the majority of things went perfectly to plan.

Russell Coker: Cheap 3G Data in Australia

24 September, 2014 - 15:06
The Request

I was asked for advice about cheap 3G data plans. One of the people who asked me has a friend with no home Internet access, the friend wants access but doesn’t want to pay too much. I don’t know whether the person in question can’t use ADSL/Cable (maybe they are about to move house) or whether they just don’t want to pay for it.

3G data in urban areas in Australia is fast enough for most Internet use. But it’s not good for online games or VOIP. It’s also not very useful for Youtube and other online video. There is a variety of 3G speed testing apps for Android phones and there are presumably similar apps for the iPhone. Before signing up for 3G at home it’s probably best to get a friend who’s on the network in question to test Internet speed at your house, it would be annoying to sign up for an annual contract and then discover that your home is in a 3G dead spot.

Cheapest Offers

The best offer at the moment for moderate data use seems to be Amaysim with 10G for $99.90 and an expiry time of 365 days [1]. 10G in a year isn’t a lot, but it’s pre-paid so the user can buy another 10G of data whenever they want. At the moment $10 for 1G of data in a month and $20 for 2G of data in a month seem to be common offerings for 3G data in Australia. If you use exactly 1G per month then Amaysim isn’t any better than a number of other telcos, but if your usage varies (as it does with most people) then spreading the data use over several months offers significant savings without the need to save big downloads for the last day of the month.

For more serious Internet use Virgin has pre-paid offerings of 6G for $30 and 12G for $40 which has to be used in a month [2]. Anyone who uses an average of more than 3G per month will get better value from the Virgin offers.

If anyone knows of cheaper options than Amaysim and Virgin then please let me know.

Better Coverage

Both Amaysim and Virgin use the Optus network which covers urban areas quite well. I used Virgin a few years ago (and presume that it has only improved since then) and my wife uses Amaysim now. I haven’t had any great problems with either telco. If you need better coverage than the Optus network provides then Telstra is the only option. Telstra have a number of prepaid offers, the most interesting is $100 for 10G of data that expires in 90 days [3].

That Telstra offer is the same price as the Amaysim offer and only slightly more expensive than Virgin if you average 3.3G per month. It’s a really good deal if you average 3.3G per month as you can expect it to be faster and have better coverage.

Which One to Choose?

I think that the best option for someone who is initially connecting their home via 3G is to start with Amaysim. Amaysim is the cheapest for small usage and they have an Amaysim Android app and a web page for tracking usage. After using a few gig of data on Amaysim it should be possible to determine which plan is going to be most economical in the long term.

Connecting to the Internet

To get the best speed you need a 4G AKA LTE connection. But given that 3G is already fast enough to use expensive amounts of data, that doesn't seem necessary to me. I've done a lot of work over the Internet with 3G from Virgin, Kogan, Aldi, and Telechoice and haven't felt a need to pay for anything faster.

I think that the best thing to do is to use an old phone running Android 2.3 or iOS 4.3 as a Wifi access point. The cost of a dedicated 3G Wifi AP is enough to significantly change the economics of such Internet access and most people have access to old smart phones.


Matthew Garrett: My free software will respect users or it will be bullshit

24 September, 2014 - 14:59
I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


Robert Collins: what-poles-for-the-tent

24 September, 2014 - 13:11

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That's the only explanation I can come up with for needing a Ring 0 - because it's still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don't think that's true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we'll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that. “

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it's the only compute API in the project today). We don't need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don't need Neutron (Nova can do that itself). We don't need cinder (use local volumes). We need Glance. We don't need Designate. We don't need a tonne of stuff that Nova has in it (e.g. quotas) - end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there's a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API - not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won't be a problem for Ring 0 is that we'll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition be able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and its happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

We could even run multiple such teams - as hinted at by Dean Troyer in one of the email thread posts. Running with that, I'd then be suggesting:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that we have large-N, N^2 communication issues - I think I'm proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don't burn out as easily, *and* so that it is easier to predict the impact of changes - clear contracts and APIs help a huge amount there.


Russ Allbery: Review: 2014 Hugos: Short Story Nominees

24 September, 2014 - 11:46

Review: 2014 Hugos: Short Story Nominees, edited by Loncon 3

Publisher: Loncon 3 Copyright: 2014 Format: Kindle

This is a bit of a weird "book review," since this is not a book. Rather, it's the collection of Hugo-nominated short stories for the 2014 Hugos (given for works published in 2013) at Loncon 3, the 2014 Worldcon. As such, the "editor" is the pool of attendees and supporting members who chose to nominate works, all of which had been previously edited by other editors in their original publication.

This is also not something that someone else can acquire; if you were not a supporting or attending member, you didn't get the voting packet. But I believe all of the stories here are available on-line for free in some form, a short search away.

"If You Were a Dinosaur, My Love" by Rachel Swirsky: The most common complaint about this story is that it's not really a story, and I have to agree. It's a word image of an alternate world in which the narrator's love is a human-sized dinosaur, starting with some surreal humor and then slowly shifting tone as it reveals the horrible event that's happened to the narrator's actual love, and that's sparked the wish for her love to have claws and teeth. It's reasonably good at what it's trying to do, but I wanted more of a story. The narrator's imagination didn't do much for me. (5)

"The Ink Readers of Doi Saket" by Thomas Olde Heuvelt: At least for me, this story suffered from being put in the context of a Hugo nominee. It's an okay enough story about a Thai village downstream from a ritual that involves floating wishes down the river, often with offerings in the improvised small boats. The background of the story is somewhat cynical: the villagers make some of the wishes come true, sort of, while happily collecting the offerings and trying to spread the idea that the wishes with better offerings are more likely to come true. The protagonist follows a familiar twist: he actually can make wishes come true, maybe, but is very innocent about his role in the world.

This is not a bad story, although stories written by people with western-sounding names about non-western customs worry me, and there were a few descriptions and approaches here (such as the nickname translations in footnotes and the villager archetypes) that made my teeth itch. But it is not a story that belongs on the Hugo nomination slate, at least in my opinion. It's either cute or mildly irritating, depending on one's mood when one meets it, not horribly original, and very forgettable. (5)

"Selkie Stories Are for Losers" by Sofia Samatar: I really liked this story for much of its length. It features a couple of young, blunt, and bitter women, and focuses on the players in the typical selkie story that don't get much attention. The selkie's story is one of captivity or freedom; her lover's story is the inverse, the captor or the lover. But I don't recall a story about the children before, and I think Samatar got the tone right. It has the bitterness of divorce and abandonment mixed with the disillusionment of fantasy turned into pain.

My problem with this story is the ending, or rather, the conclusion, since the story doesn't so much end as stop. There's a closing paragraph that gives some hint of the shape to come, but it gave me almost no closure, and it didn't answer any of the emotional questions that the rest of the story raised for me. I wanted something more, some sort of epiphany or clearer determination. (7)

"The Water That Falls on You from Nowhere" by John Chu: This was by far my favorite of the nominees, which is convenient since it won. I thought it was the only nominee that felt in the class of stories I would expect to win a Hugo.

I think this story needs one important caveat up front. The key conceit of the story is that, in this world, water falls on you out of nowhere if you tell any sort of lie. It does not explore the practical impact on that concept for the broader world. That didn't bother me; for some reason, I wasn't really expecting it to do so. But it did bother several other people I've seen comment on this story. They were quite frustrated that the idea was used primarily to shape a personal and family emotional dilemma, not to explore the impact on the world. So, go into this with the right expectations: if you want world-building or deep exploration of a change in physical laws, you will want a different story.

This story, instead, is a beautiful gem about honesty in relationships, about communication about very hard things and very emotional things, about coming out, about trusting people, and about understanding people. I thought it was beautiful. If you read Captain Awkward, or other discussion of how to deal with difficult families and the damage they cause to relationships, seek this one out. It surprised me, and delighted me, and made me cry in places, and I loved the ending. It's more fantasy than science fiction, and it uses the conceit as a trigger for a story about people instead of a story about worlds and technology, but I'm still very happy to see it win. (9)

Rating: 7 out of 10

Junichi Uekawa: Sending a GCM message from nodejs.

24 September, 2014 - 08:46
Sending a GCM message from nodejs. I wanted to send a message from emacs, but it seemed to be relatively difficult to send an HTTPS POST request from emacs, so I just decided to use handy nodejs. The attached 'data' is added to the intent as an extra, so it can be extracted with intent.getExtras().getString("message") in IntentService#onHandleIntent().
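
For reference, the request itself is just an authenticated HTTPS POST; roughly this with curl (a sketch of the classic GCM HTTP endpoint - the API key and registration ID are placeholders):

# POST a data message to GCM; the "data" payload ends up in the intent extras.
curl -s https://android.googleapis.com/gcm/send \
    -H "Authorization: key=YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"registration_ids": ["DEVICE_REGISTRATION_ID"],
         "data": {"message": "hello from the command line"}}'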

Steve Kemp: Waiting for features upstream

24 September, 2014 - 04:42

I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.

One thing that has always bothered me was the fact that when books are imported, their ratings are too. If I receive a small sample of ebooks from a friend, their ratings are added to my collection.

I've always regarded ratings as things personal to me, rather than attributes of a book itself; as my tastes might not match yours, and vice-versa.

On that basis, the last time I was importing a small number of books and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it. I started hacking and put together a simple Calibre plugin to automatically zero ratings when books are imported to the collection (i.e. set the rating to zero).

Sadly this work wasn't painless, despite the small size, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git-snapshot and it works well.

Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU findutils package doesn't allow you to do the obvious thing:

$ find ~/foo -xattr user.comment
/home/skx/foo/bar/t.txt
/home/skx/foo/bar/xc.txt
/home/skx/foo/bar/x.txt

There are several xattr patches floating around, but I had to bundle my own in debian/patches to get support for finding files that have particular attribute names.
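
In the meantime, stock findutils plus getfattr from the attr package can fake it, albeit clumsily (a workaround sketch, not the patched syntax shown above):

# Print files under ~/foo that carry a user.comment attribute.
find ~/foo -type f -exec sh -c \
    'getfattr -n user.comment --only-values -- "$1" >/dev/null 2>&1 && printf "%s\n" "$1"' _ {} \;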

Maybe one day extended attributes will be taken seriously. (rsync, cp, etc will preserve them. I'm hazy on the compatibility with tar, but most things seem to be working.)

Gunnar Wolf: Can printing be so hard‽

24 September, 2014 - 02:23

Dear lazyweb,

I am tired of finding how to get my users to happily print again. Please help.

Details follow.

Several years ago, I configured our Institute's server to provide easy, nifty printing support for all of our users. Using Samba+CUPS, I automatically provided drivers to Windows client machines, integration with our network user scheme (allowing for group authorization - that means you can only print to your designated printer), and flexible printer management (i.e. I can change printers on the server side without the users even noticing - great when we get new hardware or printers get sent off for repairs!)...

Then, this year the people in charge of client machines in the institute decided to finally ditch WinXP licenses and migrate to Windows 7. Sweet! How can it hurt?

Oh, it can hurt. Terribly.

Windows 7 uses a different driver model, and after quite a bit of hair loss, I was not able to convince Samba to deliver drivers to Win7 (FWIW, I think we are mostly using 64 bit versions). Not only that, it also barfs when we try to install drivers manually and print to a share. And of course, it barfs in the least useful way, so it took me quite a bit of debugging and Web reading to find out it was not only my fault.

So, many people have told me that Samba (or rather, Windows-type networking) is no longer regarded as a good idea for printing. The future is here, and it's called IPP. And it is simpler, because Windows can talk directly to CUPS! Not only that, CUPS allows me to set valid users and groups for each printer. So, what's there to lose?

Besides time, that is. It took me some more hair pulling to find out that Windows 7 is shipped by default (at least in the version I'm using) with the Internet Printing Server feature disabled. Duh. OK, enable it, and... Ta-da! It works with CUPS! Joy, happiness!

Only that... It works only when I use it with no authentication.
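
For reference, the per-printer authorization I am after is a one-liner per queue on the CUPS side (a sketch with placeholder printer and group names):

# Restrict a queue to members of one Unix group and share it over IPP.
lpadmin -p accounting-laser -u allow:@accounting
lpadmin -p accounting-laser -o printer-is-shared=true
cupsctl --share-printers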

Windows has an open issue, with its corresponding hotfix even, because Win7 and 2008 fail to provide user credentials to print servers...

So, yes, I can provide site-wide printing capabilities, but I still cannot provide per-user or per-group authorization and accounting, which are needed here.

I cannot believe this issue cannot be solved under Windows 7, several years after it hit the market. Or am I just too blunt and cannot find an obvious solution?

Dear lazyweb, I did my homework. Please help me!
