Planet Debian

Planet Debian - http://planet.debian.org/
Updated: 11 months 2 weeks ago

Chris Lamb: Thames Turbo Sprint Triathlon 2013 Race 1

6 April, 2013 - 18:21

On Monday I took part in my first triathlon, a "sprint" distance race organised by Thames Turbo Triathlon Club.

Swim
Distance
428m (+ pool exit chute)
Time
10:23 (2:26/100m)

I was glad to be given a low race number as it would reduce the time spent standing around in 0°C air in swimming gear. The 28°C heated pool was a welcome relief and I was quickly on my way, although I was perhaps overcooking it in the first half.

Overall, I was quite pleased with my swim time as I had cut it by well over 50% during March; there are perhaps 1-2 minutes of fairly easy gains still to be made as well.

T1 (time: 5:32)

Due to the cold temperature and forecast for snow, the race director insisted that competitors add extra layers covering their arms and torso on the bike. An additional "free" 5 minutes was also granted for T1 so that clothing was not skipped at the expense of time. I eventually wore a long-sleeve base layer, tights, a short-sleeved jersey, arm warmers and two pairs of gloves.

Bike
Distance
13.8 miles (+ mounting/dismounting chutes)
Time
39:59 (19.5 mph)

I overtook 10-15 riders in the first half of the bike leg, a couple of them even seeking me out later to congratulate me on looking so fast. I was thus more than a little disappointed when I discovered my final bike split, as I was hoping for at least a sub-39 minute time.

Analysing my GPS later I can see my average speed was over 1.5mph slower in the second half, yet I cannot recall being limited by fatigue or headwind in the last 6 miles. I was either overly reliant on other riders to "reel in" during the first half (but had run out of people to catch in the latter stage) or I simply was not giving 100%.

T2 (time: 3:30)

Leaving T2 I failed to start both my GPS and stopwatch correctly so I do not have splits of my time. I also did not pick up my running gloves resulting in distractingly cold hands in Bushy Park.

Run
Distance
5k
Time
24:21 (4:52/km)

Throughout the first kilometer my legs were extremely sluggish and I simply could not get them to do what I wanted - typical triathlon "brick legs". I did not know it at the time but I was also running up a slight—but extended—incline.

I struggled to get to the 1k marker and briefly stopped to mentally regroup. I joined another runner a few moments later and we ran the rest of the 5k at a fairly easy pace.

It's clear that the lack of serious brick workouts compromised my run time, as did my inability to dial down the pace once it was clearly too optimistic. Knowing the elevation profile of the run beforehand would also have adjusted my expectations, leading to a better overall time.

Overall
Total time
1:18:44
Position
127/261
Position male
109/202
Position 25-29 male
16/25

Despite finishing just to the left of the overall bell curve, it is difficult to avoid feeling disappointed with my time as it was hindered by well-documented and avoidable errors. The sub-optimal pacing throughout meant I still had something left "in the tank" at the end, adding to a general feeling of under-achieving on the way home.

Preparation was also less than ideal - two days before the race I had twinged my back in a 70-mile reconnaissance ride (!) by overusing my clip-on bars, although this only affected me mentally in the race. In retrospect, I also ate too much before and during the race.

Nevertheless, there must be some value in learning these lessons first-hand and early on.

(Full results)

Daniel Pocock: Switzerland Rigi-Bahn railway, Swiss rail travel

6 April, 2013 - 16:25

With DebConf13 registrations opening very soon, it's about time to share some more about Swiss travel.

For all those who enjoyed the Glacier Express video, now there is another. This is the Rigi Bahn rack railway which goes from Vitznau up to the top of Mount Rigi (often just referred to as Rigi). At the top, it meets another train which goes to the junction of main railway lines at Arth-Goldau, and a cable car to Weggis.

The video starts at the station in Vitznau (beside the ferry terminal) and rises up rapidly in a breathtaking ride to the snowy peak at approximately 2,000 meters. There are spectacular views of mountains to the south across the lake, and look out for the wild animals on the way. You need at least a 2Mbit connection to stream this video:

<video controls="" height="340" poster="http://danielpocock.com/sites/danielpocock.com/files/rigi-poster.jpg" width="560">
<source src="http://video.danielpocock.com/DSC_1789.mp4" type="video/mp4"></source>
<source src="http://video.danielpocock.com/DSC_1789.webm" type="video/webm"></source>
</video>

and here are download links for those who prefer to download now and watch later (about 250MB):

Swiss railway tickets

When looking at the price of rail passes, keep in mind that a Swiss resident never pays more than about 8 CHF per day, because that is the cost of the unlimited annual ticket that many people have. People who don't have the unlimited pass have what is called a `half price card', which costs 150 CHF per year and allows them to buy any ticket at half the advertised price. This means that only the tourists pay the full advertised price, which is usually quite exorbitant. It is an extremely effective system of discrimination. A tourist pays 8 CHF just for a 1 day pass for the center of a city like Zurich.
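The arithmetic behind the half-price card is simple enough to sketch (prices as quoted above; the function name is mine):

```python
def half_fare_net_saving(full_fare_total_chf, card_price_chf=150.0):
    """Net saving from the 150 CHF half-price card: every ticket costs
    half the advertised price, so the card pays for itself once planned
    full-price travel exceeds 300 CHF."""
    return full_fare_total_chf / 2.0 - card_price_chf

print(half_fare_net_saving(300))  # 0.0 -- the break-even point
print(half_fare_net_saving(500))  # 100.0 CHF saved
```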

Switzerland does offer some short-term unlimited passes that provide discounts to visitors, but it is necessary to think carefully about how many days you want to travel to get the optimal package:

  • The Swiss Travel System web site offers the passes for all of Switzerland. Visitors can buy a multi-day unlimited pass, and also get several days of half-price travel as well. For 5 days of unlimited travel within 1 month, the cost is 364 CHF, which includes half-price travel for all other days in the month. Shorter and longer periods are available.
  • The Tell Pass, named after William Tell, is significantly cheaper, offering unlimited travel just within the central mountain area. The Tell Pass includes use of the Rigi Bahn train shown in this blog, and is valid for travel to Andermatt, where you have to buy an extra ticket to ride on the Glacier Express train from the other blog. That is not so bad though, because you can buy a one-way ticket from Andermatt up to Natschen and then walk back down. The region covered by the Tell Pass is really quite big: centered around Luzern, it goes all the way west to Interlaken and south-east to Andermatt in the high mountains. The Tell Pass costs 246 CHF for 15 days of half-price travel including 5 days of unlimited travel. Shorter periods are available too.
  • More regional passes like the Tell Pass are available for people who want to spend less money by just visiting one area of Switzerland.
  • The SBB.ch web site has all the timetables for the trains, ferries, Postbuses and other services.

Steve Kemp: So I have a wheezy desktop

6 April, 2013 - 12:27

I look after a bunch of servers, which, working for Bytemark, is not a surprise, but I only touch a very small number of desktop systems.

precious - My desktop

This is the machine upon which I develop, check my personal mail, play my music & etc.

steve - My work machine

To keep the working from home separation going I have a machine I only use for work purposes.

travel/travel2 - EEPC box

I have two EEPC machines, a personal 701 and a work-provided 901.

Honestly these rarely get used. One is for when I'm on holiday or traveling, the second for when I'm on-call.

Yesterday I got round to upgrading both the toy EEPC machines to wheezy. The good news? Both of them upgraded/reinstalled easily. Hardware was all detected, sleeping, hibernation, wifi, etc all "just worked".

Unfortunately I am now running GNOME 3.x and the experience is unpleasant. This is a shame, because I've enjoyed GNOME 2.x & bluetile for the past few years.

The only other concern is that pwsafe appears to be scheduled for removal from Debian GNU/Linux - the list of open bugs shows some cause, but there are bugs there that are trivial to fix.

For the moment I've rebuilt the package and if I cannot find a suitable alternative - available for squeeze and wheezy - then I will host the package on my package repository.

In conclusion: Debian, you did good. GNOME, I've loved and appreciated you for years, but you might not be the desktop I want these days. It's not you, it's me.

Andrew Pollock: [life/repatexpat] Day #6 of repatriation -- the crash continues

6 April, 2013 - 12:22

I was really not doing well by yesterday; I had developed quite the runny nose. I've discovered that it's nigh on impossible (from my sample set of two pharmacies) to get pseudoephedrine over the counter in this country. In the US, you have to provide ID and they report all purchases to the government, and if you start buying too much, they come and kick down the door of your meth lab. Here, you seem to need a prescription. One pharmacy told me that 1 in 10 pharmacies will sell it over the counter. I ended up with the Australian equivalent of Afrin, which I don't particularly like, but it at least dried up my nose. Discussions on Facebook suggest that I may have been dealing with second-rate pharmacies, and the "big ones" would be more useful. I was also advised to try begging and pleading for Claritin-D. The damn meth labs have ruined it for everybody. It's too bad they can't come up with an additive that is safe to ingest, but would fuck up the meth cooking process.

Not content with only two marathon shopping days, Kristy came back for a third day of driving me all around town, as my quest for a sofa bed and a dining table continued.

It turns out that one does not simply walk into a furniture store and walk out with a sofa bed (or a dining table, for that matter). These things all seem to be on boats from China, or at best in interstate warehouses, and most places can sell you something they know is in transit, but they're loath to sell floor stock (for obvious reasons), and they seem to not have anything in a Brisbane warehouse (plenty of stuff was in Sydney or Melbourne and they'd ship it up). Plush had a chaise sofa bed that had a nice sprung mattress, and was due in late this month or early next, and they would lend me something in the meantime, so they got my business. I look forward to having something to sit on.

We then had an epic time at Bunnings getting all sorts of random household stuff, with the obligatory sausage sizzle before and after. Oh, how I have missed proper sausages! It turns out I'm looking for something that doesn't seem to exist over here: Rubbermaid don't seem to make the plastic "shed" cupboards in Australia, so I'll have to look elsewhere (Clark Rubber seems to make something approximately like what I'm looking for).

Then I picked up some towels from Westfield Chermside and resumed the search for a dining table. I was really liking the idea of at least one bench seat, and we finally found a matching table, a bench seat, some shelving and a coffee table that would work as an entertainment unit, at OZ Design Furniture. They had a 20% off sale that made it all fairly reasonable. The entertainment unit was available immediately, and the rest of the stuff should be delivered in a couple of weeks. That just leaves finding some dining chairs that will go with it.

OZ Design Furniture had the most unusual delivery charging system. They charge by the flight of stairs. Living on the 2nd floor does have its disadvantages. At least I won't be moving out of here any time soon.

By the end of the day, I was totally done, but very happy to have finally sorted out the elusive remaining bits of furniture. I had my first night sleeping in my new home.

Matthew Garrett: Update on leaked UEFI signing keys - probably no significant risk

6 April, 2013 - 07:21
According to the update here, the signing keys are supposed to be replaced by the hardware vendor. If vendors do that, this ends up being uninteresting from a security perspective - you could generate a signed image, but nothing would trust it. It should be easy enough to verify, though. Just download a firmware image from someone using AMI firmware, pull apart the capsule file, decompress everything and check whether the leaked public key is present in the binaries.
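The verification described above boils down to a byte search once the capsule is unpacked (file names below are hypothetical; the post only outlines the idea):

```python
def blob_contains_key(firmware_bytes, key_bytes):
    """Return True if the raw public-key blob appears verbatim in the
    decompressed firmware image -- a naive but sufficient first check."""
    return key_bytes in firmware_bytes

# Hypothetical usage, after pulling apart and decompressing a capsule file:
# fw = open("decompressed_firmware.bin", "rb").read()
# key = open("leaked_pubkey.der", "rb").read()
# print(blob_contains_key(fw, key))
```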

The real risk here is that even if most vendors have replaced that key, some may not have done. There's certainly an argument that shipping test keys at all increases the probability that a vendor will accidentally end up using those rather than generating their own, and it's difficult to rule out the possibility that that's happened.


Lior Kaplan: The commit police: reference your bugs properly

6 April, 2013 - 05:54

Besides my RTL work on LibreOffice, every once in a while I just go over the regular commit log to see what has changed. I don't necessarily understand each line in the commit, but I do get the general idea from the commit message. Being more dependent on the commit messages makes me review them more thoroughly (hence the topic of this post).

Like many projects, LibreOffice posts notifications of commits to the related bug reports when the bug number is properly mentioned in the commit message. This is very useful for other developers and also for QA people. After verifying that a bug is fixed, I often use the commits listed on the bug report to cherry-pick them to an older branch. Going to search for an unreferenced commit isn't much fun.

One of the things I notice is the different ways people reference bugs – from not referencing them at all to referencing them in various ways, like linking to the bug system, just writing the number (without the fdo# prefix) and other creative ways… This is also true for first-time contributors, who might not know the standards or the "rules". When I see such a case I usually put a manual notification in the bug report, and mail the author about it. For new or sporadic contributors this is also an opportunity to welcome and thank them for the commit and even encourage future contributions.

I have been asked why that info isn't on the wiki, so I went looking and found out the info is in the right place under the "Development/GetInvolved" page. The relevant part is:

When you type a commit message:

  • start the first line with a bug reference like fdo#12345, if you have one for your commit (see details below)
  • use the rest of the first line as a concise summary of your changes
  • the 2nd line remains empty
  • and starting on the 3rd line you can explain how and what changes have been made for what reasons.
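Put together, a commit message following these rules might look like this (bug number and wording invented for illustration):

```
fdo#12345 avoid crash when pasting into an empty table

The clipboard content was dereferenced before checking that the
target table had any cells; add the missing check.
```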

Thanks in advance and happy coding (:


Filed under: LibreOffice

Richard Hartmann: Release Critical Bug report for Week 14

6 April, 2013 - 04:28

Over the week, around noon CEST, we had:

  • Monday: 44 total
  • Tuesday: 43 total
  • Wednesday: 49 total
  • Thursday: 53 total

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 718
    • Affecting wheezy: 51. That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting wheezy and unstable: 32. Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 9 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 1 bug is marked as done, but still affects unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 22 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting wheezy only: 19. Those are already fixed in unstable, but the fix still needs to migrate to wheezy. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 9 bugs are in packages that are unblocked by the release team.
        • 10 bugs are in packages that are not unblocked.
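The breakdown above is internally consistent, which a few lines of Python can confirm (numbers copied from the lists above):

```python
# Bugs affecting both wheezy and unstable: tagged patch + marked done + the rest
wheezy_and_unstable = 9 + 1 + 22      # = 32, as stated above
# Bugs affecting wheezy only: unblocked + not yet unblocked
wheezy_only = 9 + 10                  # = 19, as stated above
affecting_wheezy = wheezy_and_unstable + wheezy_only
print(affecting_wheezy)               # 51, the headline number
```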

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy          Diff
43    284 (213+71)   468 (332+136)   +184 (+119/+65)
44    261 (201+60)   408 (265+143)   +147 (+64/+83)
45    261 (205+56)   425 (291+134)   +164 (+86/+78)
46    271 (200+71)   401 (258+143)   +130 (+58/+72)
47    283 (209+74)   366 (221+145)   +83 (+12/+71)
48    256 (177+79)   378 (230+148)   +122 (+53/+69)
49    256 (180+76)   360 (216+155)   +104 (+36/+79)
50    204 (148+56)   339 (195+144)   +135 (+47/+90)
51    178 (124+54)   323 (190+133)   +145 (+66/+79)
52    115 (78+37)    289 (190+99)    +174 (+112/+62)
1     93 (60+33)     287 (171+116)   +194 (+111/+83)
2     82 (46+36)     271 (162+109)   +189 (+116/+73)
3     25 (15+10)     249 (165+84)    +224 (+150/+74)
4     14 (8+6)       244 (176+68)    +230 (+168/+62)
5     2 (0+2)        224 (132+92)    +222 (+132/+90)
6     release!       212 (129+83)    +212 (+129/+83)
7     release+1      194 (128+66)    +194 (+128/+66)
8     release+2      206 (144+62)    +206 (+144/+62)
9     release+3      174 (105+69)    +174 (+105/+69)
10    release+4      120 (72+48)     +120 (+72/+48)
11    release+5      115 (74+41)     +115 (+74/+41)
12    release+6      93 (47+46)      +93 (+47/+46)
13    release+7      50 (24+26)      +50 (+24/+26)
14    release+8      51 (32+19)      +51 (+32/+19)
15    release+9
16    release+10
17    release+11
18    release+12

Graphical overview of bug stats thanks to azhag:

Phil Hands: AllTrials campaign

6 April, 2013 - 04:14

Ben Goldacre (Bad Science), is running a campaign for all medical trials to be published.

The best current evidence indicates that about half of the trials for treatments currently in use remain unpublished.

That's pretty astounding when you think about it. Given that many of the trials that are published only compare new treatments against placebo, rather than against the best available existing treatment, and often barely show them as being more effective than homeopathy^Wsugar, one has to wonder how dire the results that are not being published must be.

If you prefer to have science done in the open, then I'd encourage you to visit www.alltrials.net and sign the petition and/or give them a donation.

The fix is pretty straightforward: simply insist that clinical trials have to register before they start in order to be considered valid, and that all registered trials have to report their results. It need not be expensive to do -- a git repo shared across the industry, containing a file for each trial in a standard format, would probably do the trick. I imagine they'll come up with a more expensive solution, but in comparison to the cost of running a decent-sized trial, it'll still be pocket change.
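As a sketch of how little would be needed, a per-trial file in such a shared repository could be as simple as this (every field name and value here is invented for illustration):

```
trial_id: EXAMPLE-2013-001
registered: 2013-04-06
sponsor: Example Pharma Ltd
intervention: drug X versus best available existing treatment
primary_outcome: all-cause mortality at 12 months
status: results_pending
```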

(A note from the campaign, sent to existing petition signatories, is what prompted this blog post -- please read it and be inspired to help)

Matthew Garrett: Leaked UEFI signing keys

6 April, 2013 - 03:18
(See here for an update to this)

A hardware vendor apparently had a copy of an AMI private key on a public FTP site. This is concerning, but it's not immediately obvious how dangerous this is, for a few reasons. The first is that this is apparently the firmware signing key, not any of the Secure Boot keys. That means it can't be used to sign a UEFI executable or bootloader, so can't be used to sidestep Secure Boot directly. The second is that it's AMI's key, not a board vendor's - we don't (yet) know if this key is used to sign any actual shipping firmware images, or whether it's effectively a reference key. And, thirdly, the code apparently dates from early 2012 - even if it was an actual signing key, it may have been replaced before any firmware based on this code shipped.

But there's still the worst case scenario that this key is used to sign most (or all) AMI-based vendor firmware. Can this be used to subvert Secure Boot? Plausibly. The attack would involve producing a new, signed firmware image with Secure Boot either disabled or with an additional key installed, and then to reflash that firmware. Firmware images are very board-specific, so unless you're engaging in a very targeted attack you either need a large repository of firmware for every board you want to attack, or you need to perform in-place modification.

Taking a look at the firmware update tool used for AMI systems, the latter might be possible. It seems that the AMI firmware driver allows you to dump the existing ROM to a file. It'd then be a matter of pulling apart the firmware image, modifying the key database, putting it back together, signing it and flashing it. It looks like doing this does require that the user enter the firmware password if one's set, so the simplest mitigation strategy would be to do that.

So. If this key is used by most vendors shipping AMI-based firmware, and if it's a current (rather than test) key, then it may well be possible for it to be deployed in an automated malware attack that subverts the Secure Boot trust model on systems running AMI-based firmware. The obvious lesson here is that handing out your private keys to third parties that you don't trust is a pretty bad idea, as is including them in source repositories.

(Wow, was this really as long ago as 2004? How little things change)


Benjamin Mako Hill: Students for Free Culture Conference FCX2013

5 April, 2013 - 23:13

On the weekend of April 20-21, Students for Free Culture is going to be holding its annual conference, FCX2013, at New York Law School in New York City. As a long-time SFC supporter and member, I am enormously proud to be giving the opening keynote address.

Although the program for Sunday is still shaping up, the published Saturday schedule looks great. If previous years are any indication, the conference can serve as an incredible introduction to free culture, free software, wikis, remixing, copyright, patent and trademark reform, and participatory culture. For folks that are already deeply involved, FCX is among the best places I know to connect with other passionate, creative, people working on free culture issues.

I’ve been closely following and involved with SFC for years and I am particularly excited about the group that is driving the organization forward this year. If you will be in or near New York that weekend — or if you can make the trip — you should definitely try to attend.

FCX2013 is pay what you can with a $15 suggested donation. You can register online now. Travel assistance — especially for members of active SFC chapters — may still be available. I hope to see you there!

Clint Adams: And the men who hold high places

5 April, 2013 - 22:13

Antonia came into the room. “Where's Alieta?” she asked.

“We don't know,” replied Luda. It was true; we hadn't seen Alieta in quite some time, and we had no idea where she was.

Antonia left. Varvara eyed me mischievously and said, “You could call her.”

“How?” I asked.

“Touch the grandmother clock and you will figure it out.”

I touched the clock with trepidation. Nothing happened. I opened my mouth to speak, but the word I had been about to say resonated in my mind as if spoken through a comb filter. I concentrated. “Alieta,” dozens of human-like voices sang over and over in different harmonies. I let go.

“What was that?” I demanded.

“Clock,” Varvara said dismissively.

“Come on, let's do laundry,” Luda grunted.

We carried our laundry out to the garage, which is where the washer and dryer lived. Alieta slipped in through one of the open garage doors, and surreptitiously pulled Varvara over for a whispered conversation.

Antonia wandered in and squinted in Alieta's general direction. Luda quickly pulled Varvara in front of her. “It's just Varvara,” she announced.

Antonia's eyes glazed over and she went back into the house. Alieta pursed her lips and studied the wall as if there might be some reflective object that had alerted her mother to her presence. Then she grabbed Varvara by the hand, pulled her over to push the buttons that closed the garage doors, and dragged her into the house.

I watched the garage doors close. Then I watched the one on the left reverse direction. I opened my mouth to notify someone of this, but the doors reversed direction again, and proceeded to partially open and close as if possessed. I tried to hide behind Luda, without success.

Now she was watching as well, as reality seemed to shimmer and bend. The roof became arched, and there were loud mechanical noises as our environs transformed. When it finished, the place where the garage doors had been was now a wall of screens displaying video as you might encounter in a television store. More disconcertingly, we were in front of a railing which overlooked what appeared to be a car dealership showroom.

Luda sucked in her breath. “Benjamin Snow, you were into some serious shit,” she muttered.

Michal Čihař: Unknown phpMyAdmin features - server monitoring

5 April, 2013 - 18:00

phpMyAdmin has in the last year received various useful features which are not that well known. I've decided to give them some promotion before releasing phpMyAdmin 4.0.

The server monitoring part is already present since phpMyAdmin 3.5, but some of the parts were further improved in 4.0.

The server monitor (as you can see in the picture above or on the demo server) allows you to follow server status in real time. Besides predefined charts, you can choose to follow any of the MySQL server status variables or some system parameters.

If you see something weird in the charts, you can select an interval and inspect the slow or general query log (if you have enabled it). This can help you find the most problematic queries for your server.
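The slow query log is off by default in MySQL; it can be enabled with a couple of server settings (the threshold value here is illustrative):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1
```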

The Advisor (in the picture above or on the demo server) is another way to improve server performance - it comes with an extensive set of rules which can help you tune performance for your workload. Your server has to be running for a significant time to give reasonable recommendations (so don't expect these on the demo server, which is restarted quite often). However it is still recommended to read the server documentation before making any adjustments, as a setting might have side effects which affect your workload as well.

Of course these are no magic pill to cure your unresponsive server, but they can help you a lot in finding possible bottlenecks.

Filed under: English Phpmyadmin

Olivier Berger: “Using RDF metadata for traceability among projects and distributions” presented at DistroRecipes 2013

5 April, 2013 - 17:12

I’ve given a lightning talk at Distro Recipes 2013 about what I’ve been working on for several months: adding Semantic meta-data to Debian PTS, and stuff.

Here are my slides:
<iframe allowfullscreen="allowfullscreen" frameborder="0" height="356" marginheight="0" marginwidth="0" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="http://www.slideshare.net/slideshow/embed_code/18183576" style="border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px" webkitallowfullscreen="webkitallowfullscreen" width="427"> </iframe>

And in PDF.

Thanks to the Hupstream staff and other participants of Distro Recipes for the interesting discussions and contacts. Looking forward to participating in the next edition.

Daniel Pocock: Quickstart: Integrating SIP support into any application with reSIProcate

5 April, 2013 - 14:26

Today's applications are more dynamic and interactive than ever before, typically offering the user many ways to collaborate or get live data about their community or environment as they need it.

Two open standards exist with a broad focus on real-time messaging and session control: SIP and XMPP (Jabber). Both of them have been supercharged by the development of technologies like WebRTC that make every browser into a communications endpoint. Here, we look at integrating SIP into any C++ application (either client or server). We use the industry standard SIP stack reSIProcate.

The quickest way to get started is with a Debian, Ubuntu or Fedora development environment. reSIProcate is conveniently packaged on all those platforms (for Fedora, it is not yet in yum and must be built from the tarball using rpmbuild).

Getting the environment set up

Once you have Debian or Ubuntu installed, just do:

# apt-get build-dep resiprocate
# apt-get install libresiprocate-1.8-dev

This will install all the necessary tools, libraries and header files. Headers are under /usr/include/resip.

Writing some code

The object-oriented nature of C++ is heavily leveraged by reSIProcate, and this makes it easy to code. The stack is based on a callback paradigm: you create some objects based on well-defined interfaces, give your objects to the stack, and put the stack into a loop. The stack will call your object when some event occurs (e.g. a new incoming call, or a new text message).

The easiest way to see this in practice is to browse the test cases. The test cases under the resip/dum (mirrored on GitHub) section of the source tree provide good high-level examples of the type of code an application writes when using reSIProcate in an existing application.

Sending a SIP text message

Based on the basicMessage test case, I've created a trivial piece of code below to send a text message over SIP. The whole demo project is uploaded to GitHub; see the README for instructions on how to build and run.

Here we have a quick look at the one object in the demo; it is notified of the progress of the SIP text message:

class ClientMessageHandler : public ClientPagerMessageHandler {
public:
   ClientMessageHandler()
      : finished(false),
        successful(false)
   {
   };

   virtual void onSuccess(ClientPagerMessageHandle, const SipMessage& status)
   {
      InfoLog(<<"ClientMessageHandler::onSuccess\n");
      successful = true;
      finished = true;
   }

   virtual void onFailure(ClientPagerMessageHandle, const SipMessage& status, std::auto_ptr<Contents> contents)
   {
      ErrLog(<<"ClientMessageHandler::onFailure\n");
      successful = false;
      finished = true;
   }

   bool isFinished() { return finished; };
   bool isSuccessful() { return successful; };

private:
   bool finished;
   bool successful;
};


int main(int argc, char *argv[])
{
   // boilerplate initialisation code...
...
   NameAddr naTo(to.c_str());
   ClientPagerMessageHandle cpmh = clientDum.makePagerMessage(naTo);

   Data messageBody("Hello world!");
   std::auto_ptr<Contents> content(new PlainContents(messageBody));
   cpmh.get()->page(content);
...
}
Going further with reSIProcate

Andrew Pollock: [life/repatexpat] Day #5 of repatriation -- the crash

5 April, 2013 - 12:22

The jet lag, the lack of sleep, and the general pace of the week has caught up with me. I'm feeling decidedly run down today.

Leah volunteered to drive me around a bit today, and it was great to catch up with her. I decided to check out a white 2004 Forester that I'd found on carsales.com.au the night before.

I took it for a test drive, and it seemed fine. I transferred my NRMA membership back to the RACQ and upped it to something decent, and arranged for them to do an inspection on Monday. Depending on when the inspection report gets to me, I'll head back there with a bank cheque and I'll have a car.

I had another half-hearted look at furniture after lunch (I really wasn't feeling it) and then headed over to Woolloongabba to take a look at the condition of Sarah's apartment.

The lowlight of the day was leaving my packet of car-related paperwork (including my temporary driver's licence) on the roof of her car as we left Woolloongabba. It should only be mildly inconvenient, but I was annoyed with myself for being so dumb.

Tomorrow should be pretty quiet. I just have my bed getting delivered at 8:30am, then I'll stay in my apartment from then on. I think I'll just take it easy.

Andrew Pollock: [life/repatexpat] Day #3 of repatriation

5 April, 2013 - 07:44

Not content with just one day of driving all over town, Kristy came back for another day of it.

I decided that rather than specifically shopping, I needed to do some of the more bureaucratic stuff, so that I could ensure things like the electricity could get put on in my name. I determined that in order to get the electricity in my name, I'd need to provide either a driver's licence number or Medicare card number. Unfortunately I remember separating my expired ACT licence and Medicare cards from my other random cards while I was packing up my temporary apartment in the US, but I can't for the life of me recall what I did with them, so our first port of call was the Department of Transport in Zillmere to get a new licence.

Talk about a painless experience. The most annoying thing was that on Wednesdays they open at 9:30am instead of 8:30am. We got there at 9:10am. I had Zoe with me, and Kristy had her daughter, and they happily played while we waited outside.

I just had to fill out a fairly simple form, and I was called up promptly and there were no problems at all. In under 30 minutes of walking in the door, we were driving away with a temporary licence. Vastly different from my experience with the DMV. I'll get the genuine article in the mail in a few weeks.

Then we headed over to Westfield Chermside to go to Medicare, Medibank Private, and as I was growing frustrated with how long my mobile phone number was taking to port from Telstra to Internode, a Telstra Shop to try and get a replacement Telstra SIM.

This is where I ran into more of a bureaucratic brick wall.

For Medicare, I wanted to get my own Medicare card (and number) again, instead of a shared one, and so I essentially had to re-apply. They wanted more than just a passport entry stamp and something with an address on it. They wanted specific documents with an address on them, and an offer letter to show I was employed, so I had to leave there empty handed.

Medibank Private was even worse. In a "shut up and take my money!" kind of moment, they told me that to prematurely unsuspend a suspended policy, I needed to request a document from the Department of Immigration showing my international movements, to confirm that I was indeed back in the country. I found this somewhat ironic, given that I was sitting in front of a Medibank Private employee while they were telling me this, trying to give them money.

So I left there empty handed as well.

I grabbed some cutlery and crockery from Big W.

The Telstra Shop had a 45 minute wait, and as I didn't want to over-stretch Zoe, we headed back to my parents'. I took the opportunity to open an electricity account with AGL, now that I had a driver's licence number. Zoe declined to take a nap, and was having a good time playing with Kristy's daughter, and they both wanted to stay at my parents' place, so we left them there and dashed over to Ikea to rectify the bed slat issue.

While we were at Ikea, my number finally ported across and my phone started working, which was a huge relief. Being uncontactable during a period of many interactions with random people was highly frustrating for me. Not having mobile Internet access for a few days of extreme mobility showed how much my phone is an extension of my brain.

I also picked up a bunch of other random stuff from Ikea, stools that hadn't been in stock the day before, drinking glasses, that sort of thing. We then dropped all that off at the apartment before heading back to my parents' place.

So it was another busy day of running all over town, and again, I'm very grateful to Kristy for volunteering her time to make it happen. Most notable accomplishments: complete bed for Zoe, electricity, and a working mobile phone.

Andrew Pollock: [life/repatexpat] Day #2 of repatriation

5 April, 2013 - 07:44

This was the first "normal" (i.e. not part of the Easter "long weekend") day. As it happened, Easter Monday was surprisingly retail-friendly anyway.

My friend Kristy picked me up in the morning, and we dropped past the real estate agent for my apartment to see if the tenants had happened to drop the keys back yet (they hadn't), and then we went to Ikea and bought a bed for Zoe and a bunch of other random stuff. I also bought a bed frame and mattress while we were at the neighbouring Logan MegaCentre (it's not a bad shopping centre).

The bed frame is wooden and is taking 6-8 weeks to be made, so in the meantime, because I bought a mattress from them, they're lending me a mattress base. That's getting delivered on Saturday.

We then headed over to Harvey Norman in Fortitude Valley. We'd just got started there when the real estate agent called to say the tenants had dropped the keys back, so we stopped and headed back over to meet the property manager at the apartment and get the keys.

After that, we headed back to Harvey Norman and bought a fridge, TV, and a bunch of small appliances, and then headed back to the apartment to do some Ikea assembly. We'd just about finished putting Zoe's bed together when we discovered we'd gotten the wrong width slats for her bed. It turns out there are two widths of the "Sultan Lade" slats, and they're right next to each other in the warehouse. We'd picked up from the correct location, but I think the piles had become jumbled. Lesson learned: cross check the SKU as well as the pickup location.

It was a long day, and I was enormously grateful to Kristy for driving me all over the place, and generally helping me shop. I think it ended up being a 15 hour day for her.

Daniel Silverstone: Adventures in Haskell — LLVM for BF…

5 April, 2013 - 03:43

I didn’t publicise it very widely when I uploaded it, but I made a Haskell video in which I write an LLVM compiler for BF. You guys should go learn about Haskell and LLVM. YAY

Bernhard R. Link: Git package workflows

5 April, 2013 - 02:39

Given the recent discussions on planet.debian.org I use the opportunity to describe how you can handle upstream history in a git-dpm workflow.

One of the primary points of git-dpm is that you should be able to just check out the Debian branch, get the .orig.tar file(s) (for example using pristine-tar, by git-dpm prepare or by just downloading them) and then calling dpkg-buildpackage.

Thus the contents of the Debian branch need to be clean from dpkg-source's point of view: they must not contain any files that the .orig.tar file(s) do not contain, nor any modified files.

The easy way

The easiest way to get there is by importing the upstream tarball(s) as a git commit, which one will usually do with git-dpm import-new-upstream as that also does some of the bookkeeping.

This new git commit will (by default) have the previous upstream commit and any parent you give with -p as parents (i.e. with -p it will be a merge commit), and its content will be the contents of the tarball (with multiple orig files it gets more complicated).

The idea is of course that you give the upstream tag/commit belonging to this release tarball with -p so that it becomes part of your history and so git blame can find those commits.

Thus you get a commit with the exact orig contents (so pristine-tar can more easily create small deltas) and the history combined.


Sometimes there are files in the upstream tarball that you do not want in your Debian branch (because you remove them in debian/rules clean). With this method those files remain in the upstream branch, but you delete them in the Debian branch. (This is why git-dpm merge-patched, the operation that merges a new upstream + patches branch with your previous debian/ directory, looks at which files were deleted relative to the previous upstream branch and by default deletes them in the newly merged branch as well.)
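For concreteness, the easy way boils down to a short command sequence. This is only a sketch; the package name, tarball path and upstream tag are hypothetical:

```
# record the new tarball as the upstream branch, linking in
# upstream's release tag as a parent so git blame can see it
git-dpm import-new-upstream -p upstream/1.2.0 ../foo_1.2.0.orig.tar.gz
# then build from the checked-out Debian branch
dpkg-buildpackage -us -uc
```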

The complicated way

There is also a way without importing the .orig.tar file(s), though it is a bit more complicated. The idea is that if your upstream's git repository contains all the files needed to build your Debian package (for example if you call autoreconf in your Debian package and clean all the generated files in the clean target, or if upstream has a less sophisticated release process and their .tar contains only files from the git repository), you can use the upstream git commit directly as the base for your Debian branch.

Thus you can make upstream's commit/tag your upstream branch by recording it with git-dpm new-upstream together with the .orig.tar it belongs to. (Be careful: git-dpm does not check whether that branch contains any files different from your .orig.tar, and it could not tell whether the branch is missing files you need to build even if it tried.)

Once that is merged with the debian/ directory to create the Debian branch, you run dpkg-buildpackage, which calls dpkg-source, which compares your working directory with the contents of the .orig.tar with the patches applied. As it will only see files that are missing, but no files modified or added (if everything was done correctly), you can work directly in the git checkout without importing the .orig.tar files at all (although the pristine-tar deltas might get a bit bigger).

Jon Dowland: awk

5 April, 2013 - 00:21

Recently, I've been using awk in shell scripts more and more often.

When I saw Thomas' blog I reeled a bit at the make/shell quoted within. (Sorry Thomas! It's still a thought-provoking blog post):

DEBVERS         ?= $(shell dpkg-parsechangelog | sed -n -e 's/^Version: //p')
VERSION         ?= $(shell echo '$(DEBVERS)' | sed -e 's/^[[:digit:]]*://' -e 's/[-].*//')
DEBFLAVOR       ?= $(shell dpkg-parsechangelog | grep -E ^Distribution: | cut -d" " -f2)
DEBPKGNAME      ?= $(shell dpkg-parsechangelog | grep -E ^Source: | cut -d" " -f2)
DEBIAN_BRANCH   ?= $(shell cat debian/gbp.conf | grep debian-branch | cut -d'=' -f2 | awk '{print $1}')
GIT_TAG         ?= $(shell echo '$(VERSION)' | sed -e 's/~/_/')

I couldn't help but re-write it to be more efficient (and in the case of DEBIAN_BRANCH, more correct):

DEBVERS        := $(shell dpkg-parsechangelog | awk '/^Version:/ {print $$2}')
VERSION        := $(shell echo '$(DEBVERS)' | sed -e 's/^[0-9]*://' -e 's/-.*//')
DEBFLAVOR      := $(shell dpkg-parsechangelog | awk '/^Distribution:/ {print $$2}')
DEBPKGNAME     := $(shell dpkg-parsechangelog | awk '/^Source:/ {print $$2}')
DEBIAN_BRANCH  := $(shell awk 'BEGIN{FS="[= ]+"} /debian-branch/ {print $$2}' debian/gbp.conf)
GIT_TAG        := $(subst ~,_,$(VERSION))
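To see the difference concretely, here is the same field extraction run on sample input outside of make (the changelog fields and gbp.conf line below are made up for illustration):

```shell
# Stand-in for `dpkg-parsechangelog` output (sample data, not a real package):
fields='Source: foo
Version: 1:2.3-4
Distribution: unstable'

# awk both filters the line and picks the field in one process:
DEBVERS=$(echo "$fields" | awk '/^Version:/ {print $2}')
VERSION=$(echo "$DEBVERS" | sed -e 's/^[0-9]*://' -e 's/-.*//')
DEBPKGNAME=$(echo "$fields" | awk '/^Source:/ {print $2}')
# FS="[= ]+" splits "debian-branch = debian/sid" on the '=' and spaces:
BRANCH=$(echo 'debian-branch = debian/sid' | awk 'BEGIN{FS="[= ]+"} /debian-branch/ {print $2}')
echo "$DEBVERS $VERSION $DEBPKGNAME $BRANCH"   # prints: 1:2.3-4 2.3 foo debian/sid
```

Each awk invocation replaces a grep | cut (or sed) pipeline with a single process, which is where the efficiency gain comes from.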


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.