Planet Debian

Planet Debian - http://planet.debian.org/

Hideki Yamane: Now we don't have do-release-upgrade/fedup

10 May, 2015 - 00:30
We've added some instructions to the release notes, but the next step is probably to provide an upgrade tool like do-release-upgrade (Ubuntu) or fedup (Fedora). I hope such a cool tool will be available before the Stretch release.
Or, if someone has already started on one, please tell me about it.

John Goerzen: First impressions and review of OwnCloud

9 May, 2015 - 08:57

In my recent post (I give up on Google), a lot of people suggested using OwnCloud as a replacement for several Google services. I’ve been playing around with it for a few days, and it is something of a mix of awesome and disappointing, in my opinion.

Files

OwnCloud started as a file-sync tool, somewhat akin to Google Drive and Dropbox. It has clients for every platform, and it is also a client for every platform: you can have subfolders of your OwnCloud installation stored on WebDAV, FTP, Google Drive, Dropbox, you name it. It is a pretty nice integrator of other storage services, and provides the only way to use some of them on Linux (*cough* Google Drive *cough*).

One particularly interesting feature is the live editing in the browser of ODT, DOCX, and TXT files. This is somewhat similar to Google Docs and the only such thing I’ve seen in Open Source software. It writes changes directly back to the documents and, in my limited testing, seems to work well. A very nice feature!

I’ve tested the syncing only on Linux so far, but it looks solid.

There are two surprising issues, however: there is no deduplication and no delta-uploads. Add 10 bytes to the end of a 1GB file, and you re-upload the 1GB file. Thankfully the OwnCloud GUI client is smart enough to use inotify to notice an mv, but my guess is — and I haven’t tested this, but apparently OwnCloud doesn’t use hashes at all — that the CLI client would require a reupload after any mv, because it doesn’t run continuously.

In some situations, Syncany may be a useful work-around for this, as it does chunk-based dedup and client-side encryption. However, you would lose a lot of the sharing features inside OwnCloud by doing this, and the integration with the OwnCloud “apps” for photos, videos, and music.

The Android/mobile apps support all the usual auto-upload options.

Calendar

A lot of people report using OwnCloud as a calendar server, and it does indeed use CalDAV. With a program like DAVDroid or Mozilla Lightning, this makes, in theory, a fully functional calendar-syncing tool. There is, of course, also a web interface to the calendar. It, sadly, is limited. Or shall we say, VERY limited. Even something like sending an invite is missing — and in fact, the GUI for sharing an event is baffling. You can share it with someone, they get no say in whether or not it shows up, it shows up on their calendar on the web only (not on synced copies), and they have no way to remove it!

Sharing calendars is similar; you can hide the display of any one of your calendars on the web interface, but not of any calendars shared with you. Baffling.

Address Book

I haven’t tested this yet, but there’s not much to test, I suspect. It can be shared with others, which I could see as a nice feature.

Bookmarks

An interesting bookmarks manager, though mysteriously not with Firefox sync support. There is Chrome sync support, and a separate Mozilla Sync support, but it doesn’t provide cross-browser syncing, apparently.

Music

It is designed to present an interface to music that is stored in Files. It provides an Ampache-compatible API, so there are a lot of clients that can stream music. It has very few options, not even for transcoding, so I don’t see how it would be useful for my FLAC collection.

Pictures

Sort of a gallery view of photos synced up with Files. Very basic. Has a sharing button to share a link to an entire folder, but no option to embed photos in blog posts at a lower resolution or shortcut to sharing individual photos.

Notes, Tasks, etc.

I haven’t had the chance to look at this much. Some of them sync to various clients. The Notes are saved as HTML files that get synced down.

Clients overall

There is a very helpful page that lists all the sync clients for OwnCloud — not just for files, but also for calendars, contacts, etc. The list is extensive!

Other options

The two other Open Source options mentioned on my blog post were Kolab and Sogo, and there is also Zimbra, which has a community edition. The Debian Groupware page lists a number of other groupware options as well. Citadel caught my eye (wow, it’s still around!). Sogo has ActiveSync support, which might make phone integration a lot easier. It is not dead-simple to set up like OwnCloud is, though, so I haven’t tried it out yet, but I will probably be looking at it and Citadel next.

Daniel Kahn Gillmor: Cheers to audacity!

9 May, 2015 - 01:43
When paultag recently announced a project to try to move debian infrastructure to python3, my first thought was how large that undertaking would likely be. It seems like a classic engineering task, full of work and nit-picky details to get right, useful/necessary in the long-term, painful in the short-term, and if you manage to pull it off successfully, the best you can usually hope for is that no one will notice that it was done at all.

I always find that kind of task a little off-putting and difficult to tackle, but I was happy to see someone driving the project, since it does need to get done. Debian is potentially also in a position to help the upstream python community, because we have a pretty good view of what things are being used, at least within our own ecosystem.

I'm happy to say that i also missed one of the other great benefits of paultag's audacious proposal, which is how it has engaged people who already knew about debian but who aren't yet involved. Evidence of this engagement is already visible on the py3porters-devel mailing list. But if that wasn't enough, I ran into a friend recently who told me, "Hey, I found a way to contribute to debian finally!" and pointed me to the py3-porters project. People want to contribute to the project, and are looking for ways in.

So cheers to the people who propose audacious projects and make them inviting to everyone, newcomers included. And cheers to the people who step up to potentially daunting work, stake out a task, roll up their sleeves, and pitch in. Even if the py3porters project doesn't move all of debian's python infrastructure to python3 as fast as paultag wants it to, i think it's already a win for the project as a whole. I am looking forward to seeing what comes out of it (and it's reminding me i need to port some of my own python work, too!)

The next time you stumble over something big that needs doing in debian, even something that might seem impossible, please make it inviting, and dive in. The rest of the project will grow and improve from the attempt.

Tags: py3-porters

Gunnar Wolf: Guests in the Classroom: Felipe Esquivel (@felipeer) on the applications of parallelism, focusing on 3D animation

8 May, 2015 - 22:44

I love having guests give my classes :)

This time, we had Felipe Esquivel, a good friend whom I had invited to the Faculty once before, about two years ago. And it was due time to invite him again!

Yes, this is the same Felipe I recently blogged about — To give my blog some credibility, you can refer to Felipe's entry in IMDb and, of course, to the Indiegogo campaign page for Natura.

Felipe knows his way around the different aspects of animation. For this class (2015-04-15), he explained how traditional ray-tracing techniques work, and showed clear evidence of the promises and limits of parallelism — relating back to my subject and to academic rigor, he clearly showed how quickly we run into Amdahl's Law, which limits the efficiency of parallelization past a certain point for a given program construct, counterpointed against Gustafson's law, by which a problem can be solved in better detail given more processing power (and will thus not hit Amdahl's hard ceiling).
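For reference, the two laws he contrasted can be written down compactly in their usual textbook form, with p the parallelizable fraction of the program and N the number of processors:

S_{\mathrm{Amdahl}}(N) = \frac{1}{(1 - p) + p/N} \le \frac{1}{1 - p}, \qquad S_{\mathrm{Gustafson}}(N) = (1 - p) + pN

Amdahl's bound caps the speedup of a fixed-size problem no matter how many processors you add, while Gustafson's form describes how much larger a problem you can tackle in the same time as N grows.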

A nice and entertaining talk. But I know you are looking for the videos! Get them, either at my server or at archive.org.

Eddy Petrișor: Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1

8 May, 2015 - 07:24
About 2 months ago I set a goal to run some kind of BSD on the spare Linksys NSLU2 I had. This was driven mostly by curiosity, after listening to a few BSDNow episodes and becoming a regular listener. It turned out to be a really interesting experience (and also a somewhat frustrating one, mostly due to missing documentation and proprietary code).

Looking for documentation on how to install any BSD flavour on the Linksys NSLU2, I have found what appears to be some too-incomplete-to-be-useful-for-a-BSD-newbie information about installing FreeBSD, no information about OpenBSD and some very detailed information about NetBSD on the Linksys NSLU2.

I was very impressed by the NetBSD build.sh script, which can be used to cross-compile the entire NetBSD system - the kernel and the base system - building the appropriate toolchain along the way, even when run on a Linux host. Having some experience with cross-compilation for GNU/Linux embedded systems, I can honestly say this is immensely impressive. Well done, NetBSD!

After a few failed attempts to properly follow the instructions and many hours of (re)building, I had the kernel and the sets (the NetBSD system is split into several parts grouped by functionality; these are the sets), so I then had to set things up to be able to net boot - kernel loading via TFTP and the root filesystem on NFS.

But it wouldn't be challenging if the instructions were followed to the letter, so the first thing I wanted to change was that I didn't want to run dhcpd just to pass the DHCP boot configuration to the NSLU2; that seemed like a waste of resources since I already had dnsmasq running.

After some effort and struggling with missing documentation, I managed to use dnsmasq both to pass the DHCP boot parameters to the slug and to act as the TFTP server - I later documented this on my blog for future reference.
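The relevant configuration boils down to just a few dnsmasq options. This is only a rough sketch (the interface, addresses, paths and file name here are illustrative, not my actual setup), but it shows the idea of one daemon handing out the DHCP boot parameters and serving the kernel over TFTP:

# /etc/dnsmasq.conf (illustrative excerpt)
interface=eth0
dhcp-range=192.168.0.100,192.168.0.150,12h
# file name handed to netbooting clients in the DHCP offer
dhcp-boot=netbsd-nfs.bin
# serve that file with dnsmasq's built-in TFTP server
enable-tftp
tftp-root=/srv/tftp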

Setting up NFS wasn't a problem, but when trying to boot I found that I had misread some of the NSLU2-related information on the NetBSD wiki at least 3 or 4 times. To be able to debug what was happening, I concluded the slug should have a serial console attached, which helped a lot.

Still the result was that I wasn't able to boot the trunk version of the NetBSD code on my NSLU2.

Long story short, with the help of some people from the #netbsd IRC channel on Freenode and from the port-arm NetBSD mailing list I found out that I might have a better chance with specific older versions. In practice what really worked was the code from the netbsd_6_1 branch.

Discussions on the port-arm mailing list, some digging into the (recently found) PRs (problem reports), and a successful execution of the trunk kernel (at the time, version 7.99.4) together with the 6.1.5 userspace led me to the conclusion that the NetBSD userspace for armbe was broken in the trunk branch.

And since I concluded this would be a good occasion to learn a few details about NetBSD, I set out to git bisect through the trunk history to identify when this happened. But that meant being able to easily load kernels and run them from TFTP, which was not how the RedBoot bootloader flashed into the slug behaves by default.

By default, the RedBoot bootloader flashed into the NSLU2 waits 2 seconds for manual interaction (a ^C) on the serial console or on the telnet RedBoot prompt; if no such event happens, it copies the Linux image it has in flash starting at address 0x50060000 into RAM at address 0x01d00000 (after stripping the Sercomm header) and then executes the copied code from RAM.

Of course, this is not a very handy way to try to boot things from TFTP, so my first idea to overcome this limitation was to use a second stage bootloader which would do the loading via TFTP of the NetBSD kernel, then execute it from RAM. Flashing this second stage bootloader instead of the Linux kernel at 0x50060000 would make sure that no manual intervention except power on would be necessary when a new kernel+userspace pair is ready to be tested.

Another advantage was that I would not risk bricking the NSLU2 since I would not be changing RedBoot, the original bootloader.

I knew Apex was used as the second stage bootloader in Debian, so I started configuring my own version of the APEX bootloader to make it work for the netbsd-nfs.bin image to be loaded via TFTP.

My first disappointment was that Apex did not support receiving the boot parameters via DHCP, only via RARP (it was clearly less tested with BOOTP or DHCP), and TFTP was documented in the code as being problematic. That meant I would have to hard-code the boot configuration or configure RARP, but that wasn't too bad.

Later I found out that I had wasted time on that avenue, because the network driver in Apex was some Intel code (the NPE Access Library) which can't be freely distributed, but could have been downloaded from Intel's site back in 2008-2009. The bad news was that current versions did not work at all with the old patchwork that had been done in Apex to let the driver, written for Linux, compile in a world of its own so it could be incorporated into Apex.

I was stuck, and the only options I saw were:
  1. Fight with the available Intel code and make it compile in Apex
  2. Incorporate the NPE driver from NetBSD into a rump kernel which would be included in Apex, since I knew the NetBSD driver only needed a very easily obtainable binary blob, instead of the entire driver as had been in Apex before
  3. Hack together an Apex version that simulates the typing of the necessary commands to load the netbsd-nfs.bin image inside RedBoot, or in other words, call from Apex the RedBoot functions necessary to load from TFTP and execute NetBSD.
Option 1 did not look that appealing after looking into the horrible Intel build system and its endless dependencies on a specific Linux kernel version.

Option 2 was more appealing, but since I didn't know NetBSD and had only tried once to build and run a NetBSD rump kernel, it seemed like a project doable only by an experienced NetBSD developer, or at least an experienced NetBSD user, which I was not.

So I was left with option 3, which meant I had to do some reverse engineering of the code, because, although RedBoot is GPL, Linksys did not publish the source from which the running RedBoot was built.
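For context, the interactive RedBoot session that option 3 would have to reproduce looks roughly like the following - the addresses and flags here are quoted from memory and the NetBSD wiki, so take them as illustrative rather than authoritative. The idea is: set the local and TFTP server addresses, pull the kernel into RAM, and jump to it.

RedBoot> ip_address -l 192.168.0.2 -h 192.168.0.1
RedBoot> load -r -v -b 0x200000 netbsd-nfs.bin
RedBoot> go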


(to be continued)

Ben Hutchings: Debian LTS work, April 2015

7 May, 2015 - 17:14

This was my fifth month working on Debian LTS. I was assigned 16 hours by Freexian's Debian LTS initiative. I worked on several packages but haven't uploaded updates yet.

linux-2.6

There is a steady stream of security issues in all Linux kernel versions. As we don't want to prompt reboots too often, these won't necessarily result in an update every month; it depends on the severity of the issue(s). However, I triaged the current issues and committed the available fixes to version control.

dpkg

The PGP signature validation in dpkg-source has a number of bugs, including CVE-2015-0840 which allows the validation to be completely subverted. I backported fixes for these from the 1.16 (wheezy) branch to the 1.15 (squeeze) branch. dpkg has a test suite and the fixes all came with new regression tests, so this was easy to verify.

As dpkg is a native package (i.e. there is little separation between the 'upstream' and Debian packaging), I preferred that the dpkg maintainers would turn this into an 'upstream' release that I could upload. Guillem Jover has now reviewed and (semi-)released my changes, so I will complete this update soon.

tiff

A large number of bugs have been found in the venerable tiff library and tools, mostly by 'fuzzing' them with afl. The bugs have been unhelpfully grouped into a smaller number of CVE IDs, although it's not clear that these groups were introduced at the same time and they certainly weren't fixed at the same time.

Most of the upstream bug reports come with samples to reproduce the bug (crash or memory corruption detectable with valgrind), but many of these did not turn out to be reproducible (either upstream or with the version in 'squeeze'). Where they were reproducible, I've verified that the patches fix the issue.

Unfortunately tiff does not have a test suite, so I made a call for testing on the debian-lts list. If I don't get any responses soon, I'll run my own basic tests with programs that use libtiff before uploading.

Benjamin Mako Hill: Books Room

7 May, 2015 - 11:11

Is the locked “books room” at McMahon Hall at UW a metaphor for DRM in the academy? Could it be, like so many things in Seattle, sponsored by Amazon?

Mika noticed the room several weeks ago but felt that today’s International Day Against DRM was an opportune time to raise the questions in front of a wider audience.

Benjamin Mako Hill: DRM on Streaming Services

7 May, 2015 - 10:43

For the 2015 International Day Against DRM, I wrote a short essay on DRM for streaming services posted on the Defective by Design website. I’m republishing it here.

Between 2003 and 2009, most music purchased through Apple’s iTunes store was locked using Apple’s FairPlay digital restrictions management (DRM) software, which is designed to prevent users from copying music they purchased. Apple did not seem particularly concerned by the fact that FairPlay was never effective at stopping unauthorized distribution and was easily removed with publicly available tools. After all, FairPlay was effective at preventing most users from playing their purchased music on devices that were not made by Apple.

No user ever requested FairPlay. Apple did not build the system because music buyers complained that CDs purchased from Sony would play on Panasonic players or that discs could be played on an unlimited number of devices (FairPlay allowed five). Like all DRM systems, FairPlay was forced on users by a recording industry paranoid about file sharing and, perhaps more importantly, by technology companies like Apple, who were eager to control the digital infrastructure of music distribution and consumption. In 2007, Apple began charging users 30 percent extra for music files not processed with FairPlay. In 2009, after lawsuits were filed in Europe and the US, and after several years of protests, Apple capitulated to their customers’ complaints and removed DRM from the vast majority of the iTunes music catalog.

Fundamentally, DRM for downloaded music failed because it is what I’ve called an antifeature. Like features, antifeatures are functionality created at enormous cost to technology developers. That said, unlike features which users clamor to pay extra for, users pay to have antifeatures removed. You can think of antifeatures as a technological mob protection racket. Apple charges more for music without DRM and independent music distributors often use “DRM-free” as a primary selling point for their products.

Unfortunately, after being defeated a half-decade ago, DRM for digital music is becoming the norm again through the growth of music streaming services like Pandora and Spotify, which nearly all use DRM. Impressed by the convenience of these services, many people have forgotten the lessons we learned in the fight against FairPlay. Once again, the justification for DRM is both familiar and similarly disingenuous. Although the stated goal is still to prevent unauthorized copying, tools for “stripping” DRM from services continue to be widely available. Of course, the very need for DRM on these services is reduced because users don’t normally store copies of music and because the same music is now available for download without DRM on services like iTunes.

We should remember that, like ten years ago, the real effect of DRM is to allow technology companies to capture value by creating dependence in their customers and by blocking innovation and competition. For example, DRM in streaming services blocks third-party apps from playing music from services, just as FairPlay ensured that iTunes music would only play on Apple devices. DRM in streaming services means that listening to music requires one to use special proprietary clients. For example, even with a premium account, a subscriber cannot listen to music from their catalog using an alternative or modified music player. It means that their television, car, or mobile device manufacturer must cut deals with their service to allow each paying customer to play the catalog they have subscribed to. Although streaming services are able to capture and control value more effectively, this comes at the cost of reduced freedom, choice, and flexibility for users and at higher prices paid by subscribers.

A decade ago, arguments against DRM for downloaded music focused on the claim that users should have control over the music they purchase. Although these arguments may not seem to apply to subscription services, it is worth remembering that DRM is fundamentally a problem because it means that we do not have control of the technology we use to play our music, and because the firms aiming to control us are using DRM to push antifeatures, raise prices, and block innovation. In all of these senses, DRM in streaming services is exactly as bad as FairPlay, and we should continue to demand better.

Steve Kemp: On de-duplicating uploaded file-content.

7 May, 2015 - 07:00

This evening I've been mostly playing with removing duplicate content. I've had this idea for the past few days about object-storage, and obviously in that context if you can handle duplicate content cleanly that's a big win.

The naive implementation of object-storage involves splitting uploaded files into chunks, storing them separately, and writing database-entries such that you can reassemble the appropriate chunks when the object is retrieved.

If you store chunks on-disk, by the hash of their contents, then things are nice and simple.

The end result is that you might upload the file /etc/passwd, split that into four-byte chunks, and then hash each chunk using SHA256.

This leaves you with some database-entries, and a bunch of files on-disk:

/tmp/hashed/ef267892ee080862c96a8d2d05de62f48e20f0875f27379e7d58c73ea4455bf1
/tmp/hashed/a378977155fb42bb006496321cbe31f74cbda803c3f6ca590f30e76d1afad921
..
/tmp/hashed/3805b0245bc8375be7125ae228eef711552ac082ffb9bf8756e2964a2393a9de
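A minimal shell sketch of the idea (the paths and the 4096-byte chunk size here are purely illustrative; my toy code is Ruby and records the block IDs in SQLite rather than in a manifest file):

#!/bin/sh
# Split an upload into fixed-size chunks and store each chunk under the
# SHA256 hash of its contents; identical chunks are stored only once.
file="$1"
store=/tmp/hashed
mkdir -p "$store"
tmp=$(mktemp -d)
split -b 4096 "$file" "$tmp/chunk."
for c in "$tmp"/chunk.*; do
    h=$(sha256sum "$c" | awk '{print $1}')
    [ -e "$store/$h" ] || cp "$c" "$store/$h"
    echo "$h"
done > "$(basename "$file").manifest"
rm -r "$tmp"
# Reassembly is just concatenating the chunks listed in the manifest, in order:
#   while read h; do cat "$store/$h"; done < passwd.manifest > passwd.restored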

In my toy-code I wrote out the data in 4-byte chunks, which is grossly inefficient. But the value of using such small pieces is that there are liable to be a lot of collisions, and that means we save space. It is a trade-off.

So the main thing I was experimenting with was the size of the chunks. If you make them too small you lose I/O due to the overhead of writing out so many small files, but you gain because collisions are common.

The rough testing I did involved using chunks of 16, 32, 128, 255, 512, 1024, 2048, and 4096 bytes. As the sizes went up the overhead shrank, but so did the collisions.

Unless you could count on users uploading a lot of files like /bin/ls, which are going to collide 100% of the time with prior uploads, using larger chunks just didn't win as much as I thought it would.

I wrote a toy server using Sinatra & Ruby, which handles the splitting/hashing/and stored block-IDs in SQLite. It's not so novel given that it took only an hour or so to write.

The downside of my approach is also immediately apparent. All the data must live on a single machine - so that reassembly works in the simple fashion. That's possible, even with lots of content, if you use GlusterFS or similar, but it's probably not a great approach in general. If you have large-capacity storage available locally then this might work well enough for storing backups, etc, but .. yeah.

Matthieu Caneill: Debsources got swag and continuous integration

7 May, 2015 - 05:00

Debsources (http://sources.debian.net) is still under active development. We recently had a Gnome Outreachy intern, Jingjie Jiang, and we're about to work with 2 GSoC students, Clément Schreiner and Orestis Ioannou.

I will present here the GitHub mirror we've set up in order to allow external pull requests to be submitted, and to use the continuous integration service provided by Travis-CI.

GitHub and Travis-CI

Debsources' source code is hosted on Debian's git servers, and from there it is mirrored to GitHub. Every time a commit is pushed (to master or other branches) or a pull request is opened, the test suite is automatically run on Travis-CI, and the result (tests pass or fail) is displayed on GitHub. This allows us to quickly filter external contributions (when they are submitted on GitHub) and be sure everything works with our setup before reviewing the work.

Travis-CI runs the tests on OpenVZ containers. The complete infrastructure was a bit challenging to set up, but as we now have a Docker recipe to quickly begin hacking on Debsources, most of the work could be done using the Dockerfile instructions.

On average, a run on Travis-CI (which includes cloning the code and test data, setting up the server, and running the test suite) takes 7 minutes, which in my opinion is an acceptable amount of time to wait before submitting a pull request.
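For illustration only - this is not Debsources' actual configuration, and the install/test commands are placeholders - a Travis-CI setup for a Python project has roughly this shape:

# .travis.yml (illustrative sketch)
language: python
python:
  - "2.7"
install:
  - pip install -r requirements.txt   # placeholder for the project's real dependencies
script:
  - ./run-testsuite                   # placeholder for the project's real test entry point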

Bugs discovered in the process

Setting up this continuous integration infrastructure made me discover a few bugs.

Python magic does black magic

Debsources runs fine on Debian (not surprisingly), but I got tricked by black magic when I tried to run it on Ubuntu (which is the OS run in Travis-CI's containers).

We use the magic library to guess the type of files we're dealing with, for instance when we need to decide between rendering a file (for text files) or downloading it (for binary files).

Here comes the tricky part: the Python bindings for libmagic are not the same in Debian and on PyPI. Debsources uses the Debian package python-magic, which is not in Ubuntu 12.04. Moreover, there's no Python egg for it on PyPI, which does however have another package (called magic) that provides a different API.

I solved this with a dirty hack, using the fact that python-magic lives in a single file:

mkdir /tmp/python-magic && \
  wget https://raw.githubusercontent.com/file/file/master/python/magic.py \
    -O /tmp/python-magic/magic.py && \
  export PYTHONPATH=/tmp/python-magic/:$PYTHONPATH

It simply downloads the library, saves it in a temporary folder, and adds it to the Python path. Let's see how long it works before everything breaks!

Size of a directory

One test in the suite was ensuring the information returned by ls -l on a directory and stored in the DB was the right information. Inode metadata was tested, such as name, permissions, type, or size.

Interestingly enough, the size of a directory was tested, and expected to be 4096 bytes. The size of a directory actually depends on the filesystem in use, and on the number of files this directory contains. We often see 4096 because it's the size of a not-too-big directory on ext4.

Travis-CI doesn't use ext4:

$ df -T
Filesystem            Type     1K-blocks      Used Available Use% Mounted on
/vz/private/209140041 simfs    125829120 103460612  22368508  83% /
none                  devtmpfs   1572864         8   1572856   1% /dev
none                  tmpfs       314576        56    314520   1% /run
none                  tmpfs         5120         4      5116   1% /run/lock
none                  tmpfs      1572864         0   1572864   0% /run/shm
/dev/null             tmpfs       786432    171584    614848  22% /var/ramfs

Simfs is a container filesystem for OpenVZ, on which directories have different sizes than on ext4:

$ ls -al /
total 0
drwxr-xr-x 23 root     root      480 Feb  4 18:08 .
drwxr-xr-x 23 root     root      480 Feb  4 18:08 ..
drwxr-xr-x  2 root     root     2480 Feb  4 18:20 bin
drwxr-xr-x  2 root     root       40 Apr 19  2012 boot
drwxr-xr-x  5 root     root      660 Apr 30 13:56 dev
drwxr-xr-x 99 root     root     3560 Apr 30 13:56 etc
-rw-r--r--  1 root     root        0 Feb  4 17:56 fastboot
drwxr-xr-x  3 root     root       80 Feb  4 17:57 home
[...]

Directory sizes are not even powers of 2.

Hence I changed the test to not check directory sizes. Hopefully this will help to make Debsources work on more filesystems!

An empty file is hiding

Last but not least - because this bug is still open in the wild - a file which appears to be empty is not taken into account by Debsources' updater. This file is sources/non-free/m/make-doc-non-dfsg/4.0-2/.pc/applied-patches. It is present in the filesystem in the container and is not the only empty file there, yet it still doesn't appear in the database, which makes the test that counts files fail.

The test has been commented out (booooooh), so that we can still use Travis-CI's platform for our GSoC students until it's fixed.

Conclusion

Making Debsources run automatically on a platform different from the one we usually use allowed us to spot bugs, write dirty hacks, and expand the set of filesystems it can run on.

Now, let's hope the continuous integration will help our GSoC students, and let's wish them good luck!

C.J. Adams-Collier: A harrowing upgrade

7 May, 2015 - 03:16

I just dist-upgraded the blog server to jessie. There were a couple of errors from code that checks which modules are enabled from .htaccess (deprecated and now obsolete behavior). Nothing all that rocky. Go team Debian!

John Goerzen: I Give Up on Google: Free is Too Expensive

7 May, 2015 - 01:26

I am really tired of things Google has done lately.

The most recent example being retiring Classic Maps. That’s a problem, because the current Maps mysteriously doesn’t show most of my saved (“starred”) places. Google has known about this since at least 2013. There are posts all over their forums about it going back to when what is now “regular” Google Maps was beta. Google employees even knew about it and did nothing. For someone that made heavy use of it, this was quite annoying.

But there have been plenty of others:

  • Removing My Places and My Maps from Maps for Android. Those features were used to, for instance, plan trips, highlight routes, add campground possibilities, etc. (They eventually brought this feature back months/years later, in limited form.)
  • Removed the 7-day and month views from Calendar for Android, claiming this was “better” for users. Finally re-added those views a few months later after many complaints. I even participated in a survey process with them where they were clearly struggling to understand why anybody wanted to see 7 days at once, when that feature had been there for years…
  • Removing the XMPP capabilities in Google Talk/Hangouts.
  • Picasaweb pretty much shut down, with very strong redirects to Google+ Photos. Which still to this day doesn’t have a handy feature for embedding in a blog post or anything that’s not, well, Google+.
  • General creeping crapification of everything they touch. It’s almost like Microsoft in the 90s all over again. All of a sudden my starred places stop showing up in Google Maps, but show up in Google Drive — shared with the whole world. What? I never wanted them in Google Drive to start with.
  • All the products that are all-but-dead — Google Groups and the sad state of the Deja News archives. Maybe Google+ itself goes on this list soon?
  • Looks like they’re trying to kill off Google Voice and merge it into hangouts, but I can’t send a text from the web with Hangouts.
  • And this massive list of discontinued services and products. Yeowch. Remember when Google Code was hot, and then they didn’t touch it at all for years?
  • And they still haven’t fixed some really basic things, such as letting people change their email address when they get married.
  • Dropping SIP from Grand Central, ActiveSync from Apps, etc.

I even used to use Flickr, then moved to Picasa when Yahoo stopped investing in Flickr. Now I’m back to Flickr, because Google stopped investing in Picasa.

The takeaway is that you can’t really rely on Google for anything. Counting on something being there for an upcoming trip and then having it be suddenly yanked away is a level of frustration that just makes the service not so useful. Never knowing when obvious things (7-day calendar view) will be removed means you just can’t depend on it.

So, are there good alternatives? Things I’m thinking of include:

  • Alternative calendar applications. Ideally it would support shared calendars for multiple people in a family, an Android app that lets you easily view some or all calendars, etc. I wonder if outlook.com is really the only competitor here? Last I looked — a few years ago — none of the Open Source options really worked well.
  • Alternative mapping applications. Must-haves include directions, navigation in the car, saving points of interest, and offline storage on Android. Nice-to-haves would include restaurant review integration, etc. Looks like Nokia (HERE.com) and Mapquest, plus a few OSM spinoffs, are the leading contenders here.
  • Email is easily enough found elsewhere, and I’ve never used Gmail much anyhow.

Anybody else moving off Google?

Martin Pitt: autopkgtest 3.14 “now twice as rebooty”

6 May, 2015 - 17:15

Almost every new autopkgtest release brings some small improvements, but 3.14 got some reboot related changes worth pointing out.

First of all, I simplified and unified the implementation of rebooting across all runners that support it (ssh, lxc, and qemu). If you use a custom setup script for adt-virt-ssh you might have to update it: Previously, the setup script needed to respond to a reboot function to trigger a reboot, wait for the testbed to go down, and come back up. This got split into issuing the actual reboot system command directly by adt-run itself on the testbed, and the “wait for go down and back up” part. The latter now has a sensible default implementation: it simply waits for the ssh port to become unavailable, and then waits for ssh to respond again; most testbeds should be fine with that. You only need to provide the new wait-reboot function in your ssh setup script if you need to do anything else (such as re-enabling ssh after reboot). Please consult the manpage and the updated SKELETON for details.
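The waiting logic itself is small. A custom setup script that needs its own wait-reboot could do something along these lines (a sketch only - the exact calling convention and a complete example are in the manpage and the updated SKELETON):

# fragment of a hypothetical adt-virt-ssh setup script; $host is assumed to
# come from the script's own configuration
case "$1" in
    wait-reboot)
        # wait for sshd to go away ...
        while nc -z "$host" 22 2>/dev/null; do sleep 1; done
        # ... and then to answer again
        until nc -z "$host" 22 2>/dev/null; do sleep 5; done
        ;;
esac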

The ssh runner gained a new --reboot option to indicate that the remote testbed can be rebooted. This will automatically declare the reboot testbed capability and thus you can now run rebooting tests without having to use a setup script. This is very useful for running tests on real iron.

Finally, in testbeds which support rebooting, your tests will now find a new /tmp/autopkgtest-reboot-prepare command. Like /tmp/autopkgtest-reboot it takes an arbitrary “marker”, saves the current state, restores it after reboot and re-starts your test with the marker; however, it will not trigger the actual reboot but expects the test to do that. This is useful if you want to test a piece of software which does a reboot as part of its operation, such as a system-image upgrade. Another use case is testing kernel crashes, kexec or another “nonstandard” way of rebooting the testbed. README.package-tests shows an example of how this looks.
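As a rough illustration (the ADT_REBOOT_MARK variable name and the way the reboot gets triggered are my assumptions here; README.package-tests has the canonical example), such a test could be structured like this:

#!/bin/sh
# hypothetical test: phase 1 starts an operation that reboots the machine by
# itself, phase 2 runs after that reboot
set -e
case "${ADT_REBOOT_MARK:-}" in
    '')
        echo "phase 1: kicking off the system-image upgrade"
        /tmp/autopkgtest-reboot-prepare upgraded
        # the software under test performs the actual reboot at this point
        ;;
    upgraded)
        echo "phase 2: verifying the system after its own reboot"
        ;;
esac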

3.14 is now available in Debian unstable and Ubuntu wily. As usual, for older releases you can just grab the deb and install it; it works on all supported Debian and Ubuntu releases.

Enjoy, and let me know if you run into troubles or have questions!

Michal Čihař: This was Sri Lanka

6 May, 2015 - 17:00

We spent beautiful weeks in Sri Lanka in January and February. Along the way we saw many different places - ancient cities, mountains, natural parks and beaches. Here is a selection of the photos I like most.

Filed under: English Photography Travelling

Charles Plessy: Long life to Jessie!

6 May, 2015 - 12:57

Debian published Jessie last month. Big congratulations to the release team and all the contributors; quality is again at the rendez-vous!

This time, I could not contribute much, being busy with parental activities (hello my son, if you read me), apart from making sure that my packages are ready for the D Day. This work was made easy by the automated removals of buggy packages set up by the release team. Many thanks for that development.

I hope to use Jessie for a long time without the need of upgrading to Testing. At the moment, it provides everything I need, including the input of Japanese characters together with the possibility to switch between Japanese (or American) and multilingual Canadian keyboard layouts, which was not easy to do anymore in Wheezy.

Many thanks again.

Junichi Uekawa: Mail configuration.

6 May, 2015 - 07:30
Mail configuration. Sometimes I send mail to root@some-domain-I-don't-own from cron or sudo or whatever system and wonder why that happened.

Miriam Ruiz: SuperTuxKart 0.9: The other side of the story

6 May, 2015 - 06:15

I approached the SuperTuxKart community fearing some backlash due to last week’s discussion about their release 0.9, and found instead a nice, friendly and welcoming community. I have already had some very nice talks with them since then, and they have patiently explained to me the sequence of events that led to the situation that I mentioned and that, for the sake of fairness, I consider that I have to share here too. You can read the log of the first conversation I had with them (the log has been edited and cleaned up for clarity and readability). I seriously recommend reading it; it’s an honest, friendly conversation, and it’s first hand.

For those who don’t already know the game:

This whole story seems to have started with the complaint of a 6-year-old girl, a close relative of one of the developers and an STK user, who explained that she always felt that Mario Kart was better because there was a princess in it. I’m not particularly happy with princesses as role models for girls, but one thing I have always said is that we have to listen to kids and take their opinions into account, and I know that if I had such a request from one of the kids closer to me, I probably would have fulfilled it too. In any case, Free Software projects based on volunteer work are essentially a do-ocracy, and it is assumed that whoever does the work gets to decide about it.

So that is how Princess Sara was added to the game. I was assured that, while developing her, they took extra care that her proportions were somewhat realistic, and not as distorted as we’re used to seeing in Barbie or many Disney films. Sara is inspired by an OpenGameArt wizard and is not supposed to be a weak damsel in distress, but in fact a powerful character in the game’s universe.

Sara is not the only playable female character. There are a few others: Suzanne (a monkey, Blender’s mascot), Xue (XFCE’s mouse) and Amanda (a panda, the mascot of Window Maker). Sara happens to be the only playable human character, male or female. While it has been argued that by adding that character a player might get the impression that the rest of the characters are male by default, I have been told that the intention is exactly the opposite, and that the fact that the only playable human character in the game is female should make it more attractive to girls. To some, at least.

Here are some images of Sara:

So the fact is that they have invested a lot of time in developing Sara’s model. I’m not an artist myself, so I don’t know first hand how much time and effort it takes to make such a model, but in any case it seems to be quite a lot. When they designed the beach track Gran Paradiso, they wanted to add people to the beach. That track is, in fact, inspired by a real place: Princess Juliana Airport. They were running out of time and wanted to publish a version with what they already had, so they used Sara’s model in a bikini on the beach, with the intention of adding more people, male and female, later. The overall view of the beach would be:

This is how that track shows when the players are driving in it:

Now, about the poster for version 0.9: it is supposed to be inspired by the previous poster for version 0.8.1, only this time themed around Carnival (which is, in fact, a celebration in which sexualization of both genders is a core part). I know that there are accusations of cultural appropriation, but I couldn’t know, as my white privilege probably shields me from seeing that. Up to now, no one has said anything about that, only Gunnar explaining his point of view as a non-native Mexican: “While the poster does not strike as the most cautious possible, I do not see it as culturally offensive. It does not attempt to set a scene portraiting what were the cultures really like; the portrait it paints is similar to so many fantasy recreations”. In my opinion, even when the model is done in good taste, with no super-big breasts and no unrealistic waist, it’s still depicting a girl without much clothing as the main element of the scene, with an attire, a posture and an attitude that clearly resemble carnival and, thus, inevitably convey a message of sexualization. Even though I can’t deny that it’s a cute poster, it’s one I wouldn’t be happy to see, for example, in a school, if someone wanted to promote the game there.

The author of the poster, anyway, tells me that he had a totally different intention when doing it, and he wanted to depict a powerful princess, in the center of SuperTuxKart’s universe, celebrating the new engine.

 

About the panties showing every now and then, I’ve been told that it’s something so hard to see that in fact you would really have to open the model itself to view them. I’m not saying that I like them though, I think it would have been better if Sara would have had short pants under the skirt, if she was going to drive the snowmobile with a dress, but I’m not sure if that’s something important enough to condemn the game. The original girl mentioned at the beginning of this post seems to have found the animation funny, started laughing, and said that Sara is very silly, and that was all. It’s probably something more silly than naughty, I guess. Even though, as I said, it’s something I don’t like too much. I don’t have to agree with STK developers in everything. I guess.

There’s one thing I would like to highlight about my conversations with the developers of SuperTuxKart, though. I like them. They seem to be as concerned about the wellbeing of kids as I am, they have their own ethical norms of what’s acceptable and what’s not, and they want to do something to be proud of. Sometimes, many of these conflicts arise from a lack of trust. When I first saw the screenshots with the girl in bikini and the panties showing, I was honestly concerned about the direction the project was taking. After having talked with the developers, I am calmer about it, because they seem to have their hearts in the right place, they care, they are motivated and they work hard. I don’t know if a princess would be my first choice for a main female character, but at least their intention seems to be to give some girls a sensible role model in the game with whom they can identify.

 

Jamie McClelland: Docker networking... private range or not?

6 May, 2015 - 02:37

Am I missing something?

I installed docker and noticed that it created a virtual interface named docker0 with the IP address 172.17.42.1. This behavior is consistent with the Docker networking documentation. However, I was confused by this statement:

It randomly chooses an address and subnet from the private range defined by RFC 1918 that are not in use on the host machine, and assigns it to docker0. Docker made the choice 172.17.42.1/16 when I started it a few minutes ago...

It seems like RFC 1918 defines:

  • 10.0.0.0 - 10.255.255.255 (10/8 prefix)
  • 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
  • 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

How is 172.17.42.1/16 from the private ranges listed above? Is 172.17.42.1 a potentially public IP address?

Raphaël Hertzog: My Free Software Activities in April 2015

5 May, 2015 - 17:00

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 26.25 hours on Debian LTS. In that time I did the following:

  • CVE triage: I pushed 52 commits to the security tracker. I finished a new helper script (bin/lts-cve-triage.py) that builds on the JSON output that Holger implemented recently. It helps to triage some issues more quickly, based on the triaging work already done by the Debian Security team.
  • I filed #783005 to clarify the situation of libhtp and suricata in unstable (discovered this problem while triaging issues affecting those packages).
  • I reviewed and sponsored DLA-197-1 for Nguyen Cong fixing 5 CVE on libvncserver.
  • I released DLA-199-1 fixing one CVE on libx11. I also used codesearch.debian.net to identify all packages that had to be rebuilt with the fixed macro and uploaded them all (there were 11 of them).
  • I sponsored DLA-207-1 for James McCoy fixing 7 CVE on subversion.
  • I released DLA-210-1 fixing 5 CVE on qt4-x11.
  • I released DLA-213-1 fixing 7 CVE on openjdk-6.
  • I released DLA-214-1 fixing 1 CVE on libxml-libxml-perl.
  • I released DLA-215-1 fixing 1 CVE on libjson-ruby. This backport was non-trivial but luckily included some non-regression tests.
  • I filed #783800 about the security-tracker not handling correctly squeeze-lts/non-free.

Now, still related to Debian LTS, but on unpaid hours I did quite a few other things:

Other Debian work

Feature request in update-alternatives. After a discussion with Josselin Mouette during the Mini-DebConf in Lyon, I filed #782493 to request the possibility to override at a system-wide level the default priority of alternatives recorded in update-alternatives. This would make it easier for derivatives to make different choices than Debian.

Sponsored a dnsjava NMU. This NMU introduced a new upstream version which is needed by jitsi. I also notified the MIA team that the dnsjava maintainers have disappeared.

python-crcmod bug fix and uploads to *-backports. A member of the Google Cloud team wanted this package (with its C extension) to be available to Wheezy users so I NMUed the package in unstable (to fix #782379) and prepared backports for wheezy-backports and jessie-backports (the latter only once the release team rejected a fix in jessie proper, see #782766).

Old and new PTS updates for Jessie’s release. I took care to update tracker.debian.org and packages.qa.debian.org to take into account Jessie’s release (which, most notably, introduced the “oldoldstable” suite as the new name for Squeeze until its end of life).

Received thanks with pleasure. This is not something that I did but I enjoyed reading so many spontaneous thanks in response to Guillem’s terse and thankless notification of me stepping down from dpkg maintenance. I love the Debian community. Thank you.

Thanks

See you next month for a new summary of my activities.



Creative Commons License: The copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.