Planet Debian

Planet Debian - http://planet.debian.org/

Richard Hartmann: Release Critical Bug report for Week 15

15 April, 2013 - 00:00

This week: Monday 47, Tuesday 47, Wednesday 41, Thursday 39, Friday 37

Addendum: the bug count was at 37 around Friday noon and that's what I am tracking with the short stats above.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 717
    • Affecting wheezy: 39. That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting wheezy and unstable: 32. Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 11 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 2 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 19 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting wheezy only: 7. Those are already fixed in unstable, but the fix still needs to migrate to wheezy. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 2 bugs are in packages that are unblocked by the release team.
        • 5 bugs are in packages that are not unblocked.

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy         Diff
43    284 (213+71)   468 (332+136)  +184 (+119/+65)
44    261 (201+60)   408 (265+143)  +147 (+64/+83)
45    261 (205+56)   425 (291+134)  +164 (+86/+78)
46    271 (200+71)   401 (258+143)  +130 (+58/+72)
47    283 (209+74)   366 (221+145)  +83 (+12/+71)
48    256 (177+79)   378 (230+148)  +122 (+53/+69)
49    256 (180+76)   360 (216+155)  +104 (+36/+79)
50    204 (148+56)   339 (195+144)  +135 (+47/+90)
51    178 (124+54)   323 (190+133)  +145 (+66/+79)
52    115 (78+37)    289 (190+99)   +174 (+112/+62)
1     93 (60+33)     287 (171+116)  +194 (+111/+83)
2     82 (46+36)     271 (162+109)  +189 (+116/+73)
3     25 (15+10)     249 (165+84)   +224 (+150/+74)
4     14 (8+6)       244 (176+68)   +230 (+168/+62)
5     2 (0+2)        224 (132+92)   +222 (+132/+90)
6     release!       212 (129+83)   +212 (+129/+83)
7     release+1      194 (128+66)   +194 (+128/+66)
8     release+2      206 (144+62)   +206 (+144/+62)
9     release+3      174 (105+69)   +174 (+105/+69)
10    release+4      120 (72+48)    +120 (+72/+48)
11    release+5      115 (74+41)    +115 (+74/+41)
12    release+6      93 (47+46)     +93 (+47/+46)
13    release+7      50 (24+26)     +50 (+24/+26)
14    release+8      51 (32+19)     +51 (+32/+19)
15    release+9      39 (32+7)      +39 (+32/+7)
16    release+10
17    release+11
18    release+12

Graphical overview of bug stats thanks to azhag:

Vasudev Kamath: Handling C++ symbols that vary across architectures

14 April, 2013 - 21:15
Disclaimer

I'm no symbols expert, and this is the first time since I started packaging that I have had to deal with symbols files. What I did here is based on some suggestions I got on #debian-mentors.

If you think what I did was wrong please enlighten me :-).

Problem

Recently two of my library packages, pugixml and ctpp2, got accepted into the Debian archive, and when the buildds tried to build them on architectures other than the one I uploaded for (amd64), the builds failed. This was expected, as the symbols files I generated were for amd64. As usual, I got two serious bug reports, #704718 and #705135.

Solution

At first I was not sure how to handle this. I read rra's article on handling symbols files [1] and tried to use the pkgkde-symbolshelper tool, only to quickly figure out that I would need to use the pkgkde_symbolshelper addon for the dh sequencer. But this was not possible for me, as I was using CDBS for packaging.

I had a quick chat on #debian-mentors and someone suggested tagging the symbols which vary across architectures with the (c++) tag. At first I was not sure what that meant, but after reading the dpkg-gensymbols man page I understood that I needed to replace each mangled symbol line with its demangled version, tagged with (c++) at the beginning.
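For illustration (the symbol name and version numbers below are made-up examples, not taken from the real packages), a plain symbols file entry such as

 _ZNK4pugi8xml_node4nameEv@Base 1.2

gets replaced by its demangled, tagged form:

 (c++)"pugi::xml_node::name() const@Base" 1.2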

But searching for each deleted symbol and replacing it by hand was a hectic job, so I thought of writing a script to do it. After struggling for 3 days (yeah, I was a bit dumb that I didn't read the manual first), I updated the pugixml and ctpp2 packages using it, which are now waiting for Jonas to upload.

Here is the script

#!/bin/bash
# Replace mangled C++ symbols that vary across architectures with their
# demangled, (c++)-tagged equivalents in a symbols file, based on the
# missing-symbol lines found in failed buildd logs.

if [ $# -lt 3 ]; then
    echo "Usage: $0 failed_buildlogs_directory symbols_file package_version"
    exit 2
fi

BUILD_LOG_DIRECTORY=$1
SYMBOLS_FILE=$2
PACKAGE_VERSION=$3
VERSION_TO_REPLACE=" $PACKAGE_VERSION\""

for LOGFILE in "$BUILD_LOG_DIRECTORY"/*.build; do
    # Lines starting with "- _Z..." are the symbols dpkg-gensymbols
    # reported as missing on that architecture.
    for i in $(grep '^-\s*_Z' "$LOGFILE" | perl -pe 's/^-\s*//;'); do
        # Each matched line splits into "symbol version"; skip the
        # version tokens, we only want the mangled names.
        if [ "$i" = "$PACKAGE_VERSION" ]; then
            continue
        fi
        # Demangle the symbol and wrap it in the quotes required by the
        # tagged symbols file syntax.
        demangled_version="\"$(echo "$i $PACKAGE_VERSION" | c++filt)\""
        tagged_version="(c++)${demangled_version%$VERSION_TO_REPLACE}\" $PACKAGE_VERSION"
        # Escape '&' so sed does not expand it to the whole match.
        escaped_tagged_version=$(echo "$tagged_version" | sed 's/\&/\\\&/')
        sed -i "s#$i $PACKAGE_VERSION#$escaped_tagged_version#" "$SYMBOLS_FILE"
    done
done

So basically, to make this work we need all the build logs downloaded from the buildds. Again, this was easy - thanks to rra for developing pkgkde-getbuildlogs :-).

Once you have the build logs directory, run the above script as follows:

cppsymbol_replace.sh path_to_buildlogs path_to_symbol_file upstream_version

After replacing the symbols I tried to build the package in an i386 chroot and the build passed successfully, but lintian told me that there are symbols which have the Debian version appended to them, and this might lead to problems, so it was back to mentors :-).

It was at this time that I really understood the concept of mangled names generated by the compiler and why they vary across architectures ;-).

This time someone with the nick pochu suggested passing the -v option with the package version to dpkg-gensymbols, to make it generate symbols with the upstream package version and not the Debian version.

The following probably needs to be done if the package uses the dh sequencer, but I'm not sure, as I've not tested it. If it's wrong, please correct me.

override_dh_makeshlibs:
    dh_makeshlibs -- -v$(PACKAGE_VERSION)  # package version needs to be either extracted using dpkg-parsechangelog or fed in manually
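A minimal sketch of that extraction in debian/rules (untested, and the variable name is mine) would be to take the upstream version from dpkg-parsechangelog; note that the recipe line has to be indented with a tab, as usual in a makefile:

# Untested sketch: strip the Debian revision from the changelog version
# and hand the upstream part to dpkg-gensymbols via dh_makeshlibs.
PACKAGE_VERSION := $(shell dpkg-parsechangelog | sed -n 's/^Version: //p' | sed 's/-[^-]*$$//')

override_dh_makeshlibs:
	dh_makeshlibs -- -v$(PACKAGE_VERSION)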

If you are using CDBS, this is pretty simple. Just add the following to debian/rules:

DEB_DH_MAKESHLIBS_ARGS_$(pkgname) += -- -v$(DEB_UPSTREAM_VERSION)

I noticed that when I provide a (c++) tagged demangled name, dpkg-gensymbols simply replaces it with the proper mangled name for the architecture being built, and that replacement doesn't trigger an error in dpkg-gensymbols.

This script allowed me to replace 128 symbols in ctpp2 - symbols which were very tricky and long - with their demangled and tagged versions, so I hope it should work across different packages without any problem. The only silly mistake I made was not expecting the occurrence of the & symbol in function signatures, which made sed go mad and took me one full day to debug :facepalm:.

So that's it folks, if you see something wrong in what I did please let me know through the comments.

[1] http://www.eyrie.org/~eagle/journal/2012-01/008.html

Bits from Debian: DPL election is over, congratulations Lucas Nussbaum!

14 April, 2013 - 18:25

The Debian Project Leader election has concluded and the winner is Lucas Nussbaum. Of a total of 988 developers, 390 developers voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2013 page.

The new term for the project leader will start on April 17th and expire on April 17th 2014.

Andreas Metzler: balance sheet snowboarding season 2012/13

14 April, 2013 - 16:57

All in all a below average season. Although we had lots of snow in December, my first day on-piste was December 22nd. Riding in a snowstorm or thick fog is just not my kind of thing. The Christmas holiday season was absurdly warm, getting rid of most of the snow again. I managed 7 snow-days from December 22nd to January 1st, but this was more sport than fun. Really sunny days were rare in the whole winter. Due to minor injury and minor illness I had to take long breaks from snowboarding (just two snow days in the period from January 8th to February 15th!). Early Easter cut the season short.

This year I ended up in skiline.cc's top-100 list both for most meters of altitude in a single day and for the whole season, which shows that other people had a short season, too.

On the plus side, we had enough snow, and good snow. This is also evident from the balance sheet below: I almost always went to Diedamskopf, where there is almost no artificial snow.

Here is the balance sheet:

                          2005/06  2006/07  2007/08  2008/09  2009/10  2010/11  2011/12  2012/13
number of (partial) days       25       17       29       37       30       30       25       23
Damüls                         10       10        5       10       16       23       10        4
Diedamskopf                    15        4       24       23       13        4       14       19
Warth/Schröcken                 0        3        0        4        1        3        1        0
total meters of altitude   124634    74096   219936   226774   202089   203918   228588   203562
highscore                  10247m    8321m   12108m   11272m   11888m   10976m   13076m   13885m
# of runs                     309      189      503      551      462      449      516      468

Andreas Metzler: New Toy

14 April, 2013 - 02:09

Ski-lifts are closing tomorrow, but I have the perfect thing to look at for the next 8 months:

Steve Kemp: A mixed week with minor tweaks

14 April, 2013 - 00:23

As previously mentioned I was looking to package pwsafe for Wheezy, as this is one of the few tools that I rely upon which isn't present.

There are now packages available, with the source on github.

I've also been doing some minor scripting because I've run into a few common problems recently:

run-parts

run-parts is a simple utility which will run every executable in a directory, more or less.

In Debian-land run-parts is the mechanism for /etc/cron.daily and /etc/cron.hourly - and that is where I've had problems recently.

Imagine you run a backup via cron.daily. Further imagine that you run a post-backup rsync and that this might take many many hours. If your backup takes >=24 hours you're screwed.

To that end I've patched my run-parts tool to alert and exit if a prior invocation is still running.
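His change is to run-parts itself, but the same idea can be sketched for a single cron job with flock(1); this is an untested illustration, not his patch, and the paths are placeholders:

#!/bin/sh
# Untested sketch: flock -n takes an exclusive lock on the lock file and
# exits non-zero without running the command if a previous invocation
# still holds it, so overlapping runs simply bail out.
exec flock -n /var/lock/nightly-backup.lock /usr/local/sbin/nightly-backup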

silent-run

I think everybody has this script - hide all output when running a command, unless the command fails. Looking today I see chronic from Joey's excellent moreutils does this. D'oh.

I think I've done more, but I cannot remember. In conclusion software is both easy and hard - easy because these two trivial changes were within my reach, but hard because years after encountering GNU/Linux we still have to add in the missing pieces.

Still could be worse, I spent four/five hours yesterday evening fighting with MS-SQL server, and that is time I'm never going to get back.

Andrew Shadura: A bit more on TrueCrypt

13 April, 2013 - 22:01

A few days ago I was discussing my last blog post with a colleague of mine, Lukaš Tvrdý, and he mentioned that it actually is possible to harmlessly decrypt the TrueCrypted filesystem in online mode, resize it, and encrypt it back again. Well, I decided to try, because I have had some problems with that setup. Apart from the init script reordering I had to do (not covered in the post, as I did it later and haven't found time yet to write about that), I ran into trouble while trying to enable swapping. As the only partition I had was formatted as NTFS, placing a swap file there isn't the best idea ever. I actually tried doing so, but after the two deadlocks I had in a few hours I had to rethink that. Indeed, as NTFS is implemented in user space using ntfs-3g, as soon as this process gets swapped out, the system is left helpless. The swap is on a file system whose driver is in the swap, which is on a file system whose driver is… well, you get the idea.

Basically, I got rid of ext3-on-the-loop, and now I have real ext3 in a LUKS container, plus a separate swap partition. Now the other problem is: I still have an encrypted NTFS filesystem which I occasionally need to access, but I don’t want to type the pass phrase twice… Something to think about.

Charles Plessy: Umegaya now hosted at Branchable

13 April, 2013 - 18:50

Thanks to Joey's present, umegaya is now hosted at Branchable. Thanks !

Paul Tagliamonte: Hy’s got a new home (and team!) As some of you interested...

13 April, 2013 - 11:47


Hy’s got a new home (and team!)

As some of you interested folks know, I’ve been hacking on Hy, and I’m proud to announce its new home at hylang.org and future deb repo at gethy.org.

I’ve posted this as an image for one very important reason - Hy is now a full community-based project. There’s been an insane reaction to Hy, and I’d like to keep it sustainable.

So, come hack with us on #hy on freenode, or just help us come up with more puns.

As always, try hy, and see if you don’t have some feature requests!

Craig Small: Pie Charts in TurboGears

13 April, 2013 - 07:40

You might have looked at Ralph Bean’s tutorial on graphs and thought, that’s nice but I’d like something different.  The great thing about ToscaWidgets using the jqPlot library is that pretty much anything you can do in jqPlot, you can do in ToscaWidgets and, by extension, in TurboGears.  You want different graph types? jqPlot has heaps!

My little application needed a Pie Chart to display the overall status of attributes. There are 6 states an attribute can be in: Up, Alert, Down, Testing, Admin Down and Unknown.  The Pie Chart would show them all with some sort of colour representing the status. For this example I’ve used hard-coded data but you would normally extract it from the database using a TurboGears model and possibly some bucket sorting code.
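For what it's worth, that bucket sorting could be a rough sketch along these lines (DBSession, the Attribute model and its status_name column are assumptions standing in for whatever your model actually provides); the controller shown later could then call attribute_status_data() instead of using the hard-coded list:

from collections import Counter

from myapp.model import DBSession, Attribute   # assumed model names

STATES = ['Up', 'Alert', 'Down', 'Admin Down', 'Testing', 'Unknown']

def attribute_status_data():
    """Count attributes per state and shape them the way jqPlot wants."""
    counts = Counter(attr.status_name for attr in DBSession.query(Attribute))
    # A pie chart is a single series of [label, value] pairs, hence the
    # extra level of list nesting.
    return [[[state, counts.get(state, 0)] for state in STATES]]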

I’ve divided my code into the widget-specific part, which is found in myapp/widgets/attribute.py, and the controller code, found unsurprisingly in myapp/controllers/attribute.py.  Also note that some versions of ToscaWidgets have a bug which means any jqPlot widget won’t work; version 2.0.4 has the fix for issue 80, which briefly explains the bug.

The widget code looks like this:

from tw2.jqplugins.jqplot import JQPlotWidget
from tw2.jqplugins.jqplot.base import pieRenderer_js
import tw2.core as twc
 
class AttributeStatusPie(JQPlotWidget):
    """
    Pie Chart of the Attributes' Status """
    id = 'attribute-status-pie'
    resources = JQPlotWidget.resources + [
            pieRenderer_js,
            ]
 
    options = {
            'seriesColors': [ "#468847", "#F89406", "#B94A48", "#999999", "#3887AD", "#222222"], 
            'seriesDefaults' : {
                'renderer': twc.js_symbol('$.jqplot.PieRenderer'),
                },
            'legend': {
                'show': True, 
                'location': 'e',
                },
            }

Some important things to note are:

  • resources are the way of pulling in the JavaScript includes that actually do the work; generally, if you have something like a renderer referenced via js_symbol further on, it needs to be listed in the resources too.
  • seriesColors is how you make a specific data item a specific colour, or perhaps change the range of colours.  It’s not required if you use the default set, which is defined in the jqPlot options.
  • The renderer tells jqPlot what sort of graph we want; the line above says we want a pie chart.

 

Next the controller needs to be defined:

from tg import expose

from myapp.widgets.attribute import AttributeStatusPie

    # inside your existing attributes controller class:
    @expose('myapp.templates.widget')
    def statuspie(self):
        data = [[
                ['Up', 20], ['Alert', 7], ['Down', 12], ['Admin Down', 3], ['Testing', 1], ['Unknown', 4],
                ]]
        pie = AttributeStatusPie(data=data)
        return dict(w=pie)

And that is about it, we now have a controller path attributes/statuspie which shows us the pie chart.
My template is basically a bare template with a ${w.display | n} in it to just show the widget for testing.

Pie Chart in Turbogears

 

Keith Packard: dri3k first steps

13 April, 2013 - 06:40
DRI3K — First Steps

Here’s an update on DRI3000. I’ll start by describing what I’ve managed to get working and then summarize discussions that happened on the xorg-devel mailing list.

Private Back Buffers

One of the big goals for DRI3000 is to finish the job of moving buffer management out of the X server and into applications. The only thing still allocated by DRI2 in the X server are back buffers; everything else moved to the client side. Yes, I know, this breaks the GLX requirement for sharing buffers between applications, but we just don’t care anymore.

As a quick hack, I figured out how to do this with DRI2 today — allocate our back buffers separately by creating X pixmaps for them, and then using the existing DRI2GetBuffersWithFormat request to get a GEM handle for them.

Of course, now that all I’ve got is a pixmap, I can’t use the existing DRI2 swap buffer support, so for now I’m just using CopyArea to get stuff on the screen. But, that works fine, as long as you don’t care about synchronization.

Handling Window Resize

The biggest pain in DRI2 has been dealing with window resize. When the window resizes in the X server, a new back buffer is allocated and the old one discarded. An event is delivered to ‘invalidate’ the old back buffer, but anything done between the time the back buffer is discarded and when the application responds to the event is lost.

You can easily see this with any GL application today — resize the window and you’ll see occasional black frames.

By allocating the back buffer in the application, the application handles the resize within GL; at some point in the rendering process the resize is discovered, and GL creates a new buffer, copies the existing data over, and continues rendering. So, the rendered data are never lost, and every frame gets displayed on the screen (although, perhaps at the wrong size).

The puzzle here was how to tell that the window was resized. Ideally, we’d have the application tell us when it received the X configure notify event and was drawing the frame at the new size. We thought of a cute hack that might do this; track GL calls to change the viewport and make sure the back buffer could hold the viewport contents. In theory, the application would receive the X configure notify event, change the viewport and render at the new size.

Tracking the viewport settings for an entire frame and constructing their bounding box should describe the size of the window; at least it should describe the intended size of the window.

There’s at least one serious problem with this plan — applications may well call glClear before calling glViewport, and as glClear does not use the current viewport, instead clearing the “whole” window, we couldn’t use the viewport as an indication of the current window size.

However, what this exercise did lead us to realize was that we don’t care what size the window actually is, we only care what size the application thinks it is. More accurately, the GL library just needs to be aware of any window configuration changes before the application, so that it will construct a buffer that is not older than the application knowledge of the window size.

I came up with two possible mechanisms here; the first was to construct a shared memory block between application and X server where the X server would store window configuration changes and signal the application by incrementing a sequence number in the shared page; the GL library would simply look at the sequence number and reallocate buffers when it changed.

The problem with the shared memory plan was that it wouldn’t work across the network, and we have a future project in mind to replace GLX indirect rendering with local direct rendering and PutImage which still needs accurate window size tracking. More about that project in a future post though…

X Events to the Rescue

So, I decided to just have the X server send me events when the window size changed. I could simply use the existing X configure notify events, but that would require a huge infrastructure change in the application so that my GL library could get those events and have the application also see them. Not knowing what the application is up to, we’d have to track every ChangeWindowAttributes call and make sure the event_mask included the right bits. Ick.

Fortunately, there’s another reason to use a new event — we need more information than is provided in the ConfigureNotify event; as you know, the Swap extension wants to have applications draw their content within a larger buffer that can have the window decorations placed around it to avoid a copy from back buffer to window buffer. So, our new ConfigureNotify event would also contain that information.

Making sure that ConfigureNotify event is delivered before the core ConfigureNotify event ensures that the GL library should always be able to know about window size changes before the application.

Splitting the XCB Event Stream

Ok, so I’ve got these new events coming from the X server. I don’t want the application to have to receive them and hand them down to the GL library; that would mean changing every application on the planet, something which doesn’t seem very likely at all.

Xlib does this kind of thing by allowing applications to stick themselves into the middle of the event processing code with a callback to filter out the events they’re interested in before they hit the main event queue. That’s how DRI2 captures Invalidate events, and it “works”, but using callbacks from the middle of the X event processing code creates all kinds of locking nightmares.

As discussed above, I don’t care when GL sees the configure events, as long as it gets them before the application finds out about the window size change. So, we don’t need to synchronously handle these events, we just need to be able to know they’ve arrived and then handle them on the next call to a GL drawing function.

What I’ve created as a prototype is the ability to identify specific events and place them in a separate event queue, and when events are placed in that event queue, to bump a ‘sequence number’ so that the application can quickly identify that there’s something to process.

Making the Event Mask Per-API Instead of Per-Client

The problem described above about using the core ConfigureNotify events made me think about how to manage multiple APIs all wanting to track window configuration. For core events, the selection of which events to receive is all based on the client; each client has a single event mask, and each client receives one copy of each event.

Monolithic applications work fine with this model; there’s one place in the application selecting for events and one place processing them. However, modern applications end up using different APIs for 3D, 2D and media. Getting those libraries to cooperate and use a common API for event management seems pretty intractable. Making the X server treat each API as a separate entity seemed a whole lot easier; if two APIs want events, just have them register separately and deliver two events flagged for the separate APIs.

So, the new DRI3 configure notify events are created with their own XID to identify the client-side owner of the event. Within the X server, this required a tiny change; we already needed to allocate an XID for each event selection so that it could be automatically cleaned up when the client exited, so the only change was to use the one provided by the client instead of allocating one in the server.

On the wire, the event includes this new XID so that the library can use it to sort out which event queue to stick the event in using the new XCB event stream splitting code.

Current Status

The above section describes the work that I’ve got running; with it, I can run GL applications and have them correctly track window size changes without losing a frame. It’s all available on the ‘dri3’ branches of my various repositories for xcb proto, libxcb, dri3proto and the X server.

Future Directions

The first obvious change needed is to move the configuration events from the DRI3 extension to the as-yet-unspecified new ‘Swap’ extension (which I may rename as ‘Present’, as in ‘please present this pixmap in this window’). That’s because they aren’t related to direct rendering, but rather to tracking window sizes for off-screen rendering, either direct, indirect or even with the CPU to memory.

DRI3 and Fences

Right now, I’m not synchronizing the direct rendering with the CopyArea call; that means the X server will end up with essentially random contents as the application may be mid-way through the next frame before it processes the CopyArea. A simple XSync call would suffice to fix that, but I want a more efficient way of doing this.

With the current Linux DRI kernel APIs, it is sufficient to serialize calls that post rendering requests to the kernel to ensure that the rendering requests are themselves serialized. So, all I need to do is have the application wait until the X server has sent the CopyArea request down to the kernel.

I could do that by having the X server send me an X event, but I think there’s a better way that will extend to systems that don’t offer the kernel serialization guarantee. James Jones and Aaron Plattner put together a proposal to add Fences to the X Sync extension. In the X world, those offer a method to serialize rendering between two X applications, but of course the real goal is to expose those fences to GL applications through the various GL sync extensions (including GL_ARB_sync and GL_NV_fence).

With the current Linux DRI implementation, I think it would be pretty easy to implement these fences using pthread semaphores in a block of memory shared between the server and application. That would be DRI-specific; other direct rendering interfaces would use alternate means to share the fences between X server and application.

Swap/Present — The Second Extension

By simply using CopyArea for my application presentation step, I think I’ve neatly split this problem into manageable pieces. Once I’ve got the DRI3 piece working, I’ll move on to fixing the presentation issue.

By making that depend solely on existing core Pixmap objects as the source of data to present, I can develop that without any reference to DRI. This will make the extension useful to existing X applications that currently have only CopyArea for this operation.

Presentation of application contents occurs in two phases; the first is to identify which objects are involved in the presentation. The second is to perform the presentation operation, either using CopyArea, or by swapping pages or the entire frame buffer. For offscreen objects, these can occur at the same time. For onscreen, the presentation will likely be synchronized with the scanout engine.

The second form will mean that the Fences that mark when the presentation has occurred will need to be signaled only once the operation completes.

A CopyArea operation means that the source pixmap is “ready” immediately after the Copy has completed. Doing the presentation by using the source pixmap as the new front buffer means that the source pixmap doesn’t become “ready” until after the next swap completes.

What I don’t know now is whether we’ll need to report up-front whether the presentation will involve a copy or a swap. At this point, I don’t think so — the application will need two back buffers in all cases to avoid blocking between the presentation request and the presentation execution. Yes, it could use a fence for this, but that still sticks a bubble in the 3D hardware where it’s blocked waiting for vblank instead of starting on the next frame immediately.

Plan of Attack

Right now, I’m working on finishing up the DRI3 piece:

  • Replace the DRI2 buffer allocation kludge with actual local buffer allocation, mapping them into pixmaps using FD passing.

  • Replace the DRI2 authentication scheme with having the X server open the DRI object, preparing it for rendering and passing it back to the application.

  • Work on the XCB pieces to get the split event-queue stuff landed upstream.

  • Implement the fencing stuff to correctly serialize access to the pixmap.

The first three seem fairly straightforward. The fencing stuff will involve working with James and Aaron to integrate their XSync changes into the server.

After that, I’ll start working on the presentation piece. Foremost there is figuring out the right name for this new extension; I started with the name ‘Swap’ as that’s the GL call it implements. However, ‘Swap’ is quite misleading as to the actual functionality; a name more like ‘Present’ might provide a better indication of what it actually does. Of course, ‘Present’ is both a verb and a noun, with very different connotations. Suggestions on this most complicated part of the project are welcome!

Michal Čihař: Hackweek 9 is over

12 April, 2013 - 23:37

Hackweek 9 is over and it's time to share what I've done on Weblate during that.

I think everything went quite well and Weblate is now ready for 1.5 release. I'm slowly deploying it on my installations (unfortunately this release migration will need some noticeable downtime for bigger installations) and everything seems to work fine so far. I believe this is possible thanks to massive test coverage - all important code is covered by testcases.

So what can you expect in the 1.5 release? The most visible change is probably the new machine translation support, which supports way more backends and allows you to plug in your own services as well. The other changes include word counting (which might give you a better idea of how much work is remaining) and fancy progress bars in all places (they used to be available for translations only).

On the other side, Weblate can now run custom scripts to pre-process translations before committing, which can be used for various things, from generating byte-compiled files to sorting or cleaning up the translation files.

Also, Weblate should now be much faster - there were dozens of optimizations done, leading to much lower pressure on the database server.

If you want to see more detailed work progress, check Hackweek project page or Weblate changelog.

PS: In case no problems appear, Weblate 1.5 should be released on Sunday.


Daniel Pocock: Switzerland Schilthorn with 007 (leg 2)

12 April, 2013 - 21:07

What has DebConf13 got in common with James Bond? The connection with Switzerland, of course. Bond creator Ian Fleming actually spent time living in Switzerland, and this beautiful alpine country was featured in more than one of his books/movies, including Goldfinger.

On Her Majesty's Secret Service took Commander Bond to the summit of the Schilthorn, a distinctive peak of 2,970m altitude in the center of Switzerland.

To follow up on my recent Glacier Express and Rigi Bahn rack railway videos, here is a video of the second leg of the dramatic four-stage cable car journey to the top of the Schilthorn:

[Embedded video of the cable car ride up the Schilthorn]

Download and watch later:
.m4v: http://video.danielpocock.com/VID_20121025_121324.m4v
.webm: http://video.danielpocock.com/VID_20121025_121324.webm

Some observations about Schilthorn visits:

  • There is a revolving restaurant at the top - to get a seat by the window, which is by far the best place to sit, ring and make a reservation the day before. Meal prices are no more extraordinary than any other Swiss restaurant, so it is worthwhile spending some time up there enjoying lunch and the views.
  • Jungfrau is higher - but from Schilthorn, you can see Jungfrau, which is a spectacular view, yet without paying the extraordinary price of a ticket for Jungfrau. The ride to Jungfrau (high altitude railway) is also slightly less spectacular, as much of it is in a tunnel through the rock, while the ride to Schilthorn is all cable car.
  • Almost all rail ticket passes give a 50% discount on the Schilthorn cable car.
  • The SBB sometimes offers an extra 30% discount on Schilthorn tickets; see the monthly `railaway' offers, which can be combined with the 50% discount. This type of ticket can't be purchased in the same region (e.g. within about 20km of the Schilthorn).
  • An easy way to save another 50% on the ticket is to walk down from the top. If you do this, plan to start very early, but it is a spectacular walk (requires hiking boots and protective clothing, snowy/slippery at most times of the year)
  • All those discounts are cumulative: so if you use them all and walk down, you only pay 0.5 * 0.7 * 0.5 (which is 17.5%) of the full price.

The next leg of the cable car journey will appear soon...

Olivier Berger: Slides + Manual + programs generated from single org-mode source

12 April, 2013 - 05:22

I’ve been working on maintaining lecture slides and a manual, by writing a single source org-mode file.

From a single source I want to be able to generate different output PDFs, only changing a few switches:

  • slides deck
  • a manual document
  • source files for examples

The slides may contain notes.

Here’s an archive that contains an example document and complementary files. See this documentation document for more details (itself maintained with such an .org source).

Daniel Pocock: reSIProcate and reTURN come to Fedora

11 April, 2013 - 16:19

This week, I've released the first official reSIProcate packages for Fedora. EPEL packages for RHEL5 and RHEL6 are on the way too. This is exceptionally good news for Debian, Ubuntu, OpenWRT routers and every other platform that is already carrying these packages.

reSIProcate is not just SIP: it also contains the powerful STUN/TURN server, reTurn Server. The TURN protocol is the IETF standard that solves many of the difficulties faced using VoIP/RTC from behind NAT. It is equally useful for SIP or XMPP (Jabber) and is supported in various ways by Empathy, Jitsi and Lumicall, and it is a mandatory element of WebRTC, which is expected to rise rapidly in prominence during 2013.

Metcalfe's law tells us that the success of a communications platform grows quadratically in proportion to the number of users. In simple terms, doubling the number of potential SIP users gives four times the benefit to existing users.

The tide lifts all ships equally

The significance of these developments should not be underestimated. The success of Free software and open standards is intricately linked with the success of Free and open communications technology. In the world of RTC, projects don't exist in an ideological bubble, they have to be implemented in the real world and connected together for personal use, small business and large enterprise. People committed to free and open technology need to win all those domains.

Conversely, the peer pressure of proprietary softphone technology is one of the strongest barriers faced in the deployment of all forms of Free software on the desktop. Offering an alternative is a high-priority campaign of the FSF.

Martin Pitt: New fatrace released, Debian package coming

11 April, 2013 - 15:16

Paul Wise poked me this morning about uploading fatrace (“file access trace”, see the original announcement for details) to Debian, thanks for the reminder!

So I filed an Intent To Package, and will upload it in a few days, unless some discussion evolves.

I also took the opportunity to do some modernization: The power-usage-report script now uses the current PowerTop 2.x instead of the old 1.13, uses Python 3 now, and includes the “process device activity” in the report. I released this as 0.5. The actual fatrace binary didn’t change its behaviour, it just got some code optimizations; thanks to Yann Droneaud for those.

Russ Allbery: Hugo nominee haul

11 April, 2013 - 13:36

I need to write up new Kindle books, of which there are now quite a few due to various sales plus the Hugo nominee slate, but I got another set of paper books and they're sitting in front of me. So here's a list.

Saladin Ahmed — Throne of the Crescent Moon (sff)
Elizabeth Bear — Shattered Pillars (sff)
Ta-Nehisi Coates — The Beautiful Struggle (non-fiction)
Guy Gavriel Kay — River of Stars (sff)
Jenny Lawson — Let's Pretend This Never Happened (non-fiction)
Domenica Ruta — With or Without You (non-fiction)
Jay Wexler — The Odd Clauses (non-fiction)

Ahmed's book is the remaining Hugo nominee that I didn't already pick up. I'm delighted to see the diversity on the Hugo and particularly the Nebula slate this year, and I'm curious to see the spin that Ahmed brings to the epic fantasy genre.

Bear's book is the sequel to Range of Ghosts, another book I'm very much looking forward to reading but haven't yet. I'm rather behind on reading Bear's work right now.

Let's Pretend This Never Happened, With or Without You, and The Beautiful Struggle are all memoirs, of varying degrees of seriousness. I've gotten hooked on Coates's writing at The Atlantic, and I highly recommend it if you've not yet seen it. Domenica Ruta's memoir is the latest Indiespensible selection.

Jay Wexler's book was recommended by Lowering the Bar, a legal humor blog whose entire archives I'm slowly reading.

But the highlight of this order is Kay's River of Stars, which is a sequel of sorts to one of my favorite books ever. This is probably the book I'll read during my next vacation.

Matthew Palmer: RSpec the easy way

11 April, 2013 - 13:00

Anyone that has a fondness for good ol’ RSpec knows that there’s a fair number of matchers and predicates and whatnot involved. Life isn’t helped by the recent (as of 2.11) decision to switch to using expect everywhere instead of should (apparently should will be going away entirely at some point in the future).

There is a good looking RSpec cheatsheet out on the ‘net, but it dates from 2006, and things have changed since then. We’re using RSpec at work a lot at the moment, though, so our tech writer kindly updated it for the new-style syntax, gave it a nice blue tint, and put it out there for the world at large to use. Here is our updated RSpec cheatsheet for anyone who is interested.

I can tell you for certain that a double-sided, laminated version of this sucker looks very nice, and is a handy addition to the desk-of-many-things.

Andrew Pollock: [life/repatexpat] Differences on how one purchases petrol

11 April, 2013 - 12:22

The differences between how one fuels one's car are quite pronounced, between California and Australia.

Firstly, if you're paying with plastic, it's a given that you can pay at the pump. I could count on my hands the number of times I've had to walk into a gas station to pay for gas in the US. Having a small child, I was not looking forward to having to either leave her in the car so I could pay for my petrol, or having to deal with all of the rigmarole of getting her out of her car seat, just so she can accompany me inside the petrol station to make a brief transaction and then have to get her back into her car seat again.

Not to mention how it drags out the whole process. Yesterday I had to wait for a pump while everyone left their car, queued inside to pay a single cashier, and then returned to their car and drove away. It'd be an interesting Productivity Commission report to see how much time is lost, just so people can be tempted by the high-margin items inside.

Then there's pumping the petrol. California, being all hippy, requires all the fuel nozzles to have these fandangled "vapour recovery" things, which basically fit over the part of the pump that goes inside the fuel tank and does some sort of, well, vapour recovery. The upside, you're not sniffing fumes while you're pumping your petrol.

The other fabulous thing about Californian fuel pumps is you can lock the handle down, so you don't have to stand there like a shag on a rock squeezing the handle while a $100 trickles into your car. You can get back in your car and listen to the radio. Or clean your windscreen. Or entertain your kid. I'd love to know why Australian pumps don't lock on any more. I have memories from my early childhood of them locking on.

So yeah, I think Australians lose out quite badly when it comes to the petrol station experience.

I was pleased to discover that the Woolworths branded Caltex petrol stations seem to have some sort of pay at the pump infrastructure, it just requires you to have their specific credit card or something. I need to do more research, because if I can pay at the pump, I will.

Russ Allbery: Review: Familiar

11 April, 2013 - 12:06

Review: Familiar, by J. Robert Lennon

Publisher: Greywolf
Copyright: 2012
ISBN: 1-55597-535-6
Format: Hardcover
Pages: 205

This is the first book of an experiment. I'm fairly well-read in science fiction and fantasy and increasingly well-read in non-fiction of interest (although there's always far more of that than I'll get to in a lifetime), but woefully unfamiliar with what's called "mainstream" literature. Under the principle that things people are excited about are probably exciting, I've wanted to read more and understand the appeal.

Powell's, which I like to support anyway, has a very nice (albeit somewhat expensive) book club called Indiespensable, which sends its subscribers very nice editions of new works that Powell's thinks are interesting, with a special focus on independent publishers. So I signed up and hope to stick with it for at least a year. (The trick will be fitting these books in amongst my regular reading.) Familiar is the first feature selection I received.

Elisa Brown is a mother with a dead son and a living one, a failing marriage, an affair, and a life that is, in short, falling apart. Then, while driving back from her annual pilgrimage to her son's grave, the world seems to twist and change. She finds herself dressed for business, wearing a nametag and apparently coming back from a work-related convention, driving a car that's entirely unfamiliar to her. When she gets home, everything else has changed too: her marriage seems to be on firmer ground, but based on rules she doesn't understand. She has a different job, different clothes, a different body in some subtle ways. And both of her sons are alive.

I'm going to have to have a long argument with myself about where to (meaninglessly) categorize this on my review site, since in construction it is an alternate reality story and therefore a standard SF trope. Any SF reader is going to immediately assume Elisa has somehow been transported into an alternate reality with a key divergence point from her own. But that's not Lennon's focus. He stays ambiguous on the question of whether this is really happening or whether Elisa had some sort of nervous breakdown, and while some amount of investigation of the situation does take place, it's the sort of investigation that an average person with no access to special resources or scientific knowledge and a completely unbelievable story would be able to do: Internet conspiracy chatrooms and some rather dodgy characters. The focus is instead on Elisa's reaction to the situation, her choices about how to treat this new life, and on how she processes her complex emotions about her family and herself.

I had profoundly mixed feelings about this book when I finished it, and revisiting it to review it, I still do. The writing is excellent: spare, evocative, and enjoyable to read. Lennon has a knack for subjective description of emotion and physical experience. The reader feels Elisa's deep discomfort with her changed body and her changed car, her swings between closed-off emotions and sudden emotional connection with a specific situation, and her struggle with the baffling question of how to come to terms with a whole new life. The part of the book from about the middle to nearly the end is excellent. Video games make an appearance and are handled surprisingly well. And when Elisa starts being blunt with people, I found myself both liking her and caring about what happens to her.

On the other hand, Familiar also has some serious problems, and one of the biggest is the reaction I feared I'd have to mainstream literature: until Elisa started opening up and taking action, I found it extremely difficult to care about anyone in this book. They're all so profoundly petty, so closed off and engrossed in what seem like depressing and utterly boring lives. I'm sure that some of this is intentional and is there to lay the groundwork for Elisa's own self-discovery, but even towards the end of that self-discovery, everything here is so relentlessly middle-class suburbia that I felt stifled just reading about it. I think it's telling that no one in this book ever seems to have any substantial problem with money, or even with work. Elisa walks into a job that she's never done before and within a few weeks is doing it so well that she can take large amounts of time to wander around for plot purposes.

This is a book about highly privileged people being miserable in a bubble. While those people certainly do exist, and I can believe that they act like this, I'm not sure how much I want to read about them. Thankfully, the plot does lead Elisa to poke some holes in that bubble, even if she never gets out of it entirely.

This is also another one of those stories in which every character has massive communication problems. Now, this deserves some caveats: Elisa's communication problems with her husband are part of the problem that starts the book and are clearly intentional, as are her communication difficulties with her children. And she's not really close enough to anyone to confide in them. But even with those caveats, no one in this book really talks to anyone else. It's amazing that anyone forms any connections at all, given how many walls and barriers they have around themselves. As someone with a bit of a thing for communication, this drove me nuts to read about, particularly in the first half of the book.

But the worst problem is that Lennon completely blows the ending. And by that I don't just mean that I disliked the ending. I mean the ending is so unbelievable and so contrary to the entire rest of the book, at least the way I was reading and understanding it, that I think Familiar is a much better novel if you just remove the final scene entirely. It was such a bizarre and unnecessary twist that I found it infuriating.

I don't want to spoil an ending, even a bad ending, so I'll only say this: it felt to me like Lennon just wasn't comfortable with his setting and plot driver and couldn't leave it alone. I think an experienced SF author wouldn't have made this mistake. There were two obvious possible conclusions to draw from the setting, plus a few interesting combinations, and I think someone comfortable with this sort of alternate reality story would have taken one of those options, any of which would have been a reasonable dismount for the plot. Alternately, they could have left it entirely ambiguous to the end and explored why the explanation may not actually matter. But Lennon seemed to me to have a tin ear for plausibility and for the normal flow of this sort of story and seems to have taken it as license for arbitrary events, thus completely violating the internal consistency and emergent rules that he'd spent the rest of the book building.

I've mostly talked about my reactions to the characters and the writing and have not said much about the plot. That's somewhat intentional, since figuring out where the story will go is one of the best parts of this book. It's surprisingly tense and well-crafted for not having that much inherent dramatic tension. The excellent writing kept me reading through the first part, when I hated everyone in the story, and then Elisa started taking responsibility for her own life and actions and I started really enjoying the book while being constantly surprised. I think it's the sort of story that's best to take without too much foreknowledge of where it's going.

I'm going to call this first experiment a qualified success. Familiar was certainly interesting to read, and quite different from what I normally read despite the SF premise. If it weren't for the ending, I'd be recommending it to other people.

Rating: 6 out of 10


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.