Planet Debian

Planet Debian - http://planet.debian.org/

Keith Packard: Shared Memory Fences

27 April, 2013 - 07:28
Shared Memory Fences

In our last adventure, dri3k first steps, one of the ‘future work’ items was to deal with synchronization between the direct rendering application and the X server. DRI2 “handles” this by performing a round trip each time the application starts using a buffer that was being used by the X server.

As DRI3 manages buffer allocation within the application, there’s really no reason to talk to the server, so this implicit serialization point just isn’t available to us. As I mentioned last time, James Jones and Aaron Plattner added an explicit GPU serialization system to the Sync extension. These SyncFences serialize rendering between two X clients, and within the server there are hooks provided for the driver to use hardware-specific serialization primitives.

The existing Linux DRM interfaces queue rendering to the GPU in the order requests are made to the kernel, so we don’t need the ability to serialize within the GPU, we just need to serialize requests to the kernel. Simple CPU-based serialization gating access to the GPU will suffice here, at least for the current set of drivers. GPU access which is not mediated by the kernel will presumably require serialization that involves the GPU itself. We’ll leave that for a future adventure though; the goal today is to build something that works with the current Linux DRM interfaces.

SyncFence Semantics

The semantics required by SyncFences is for multiple clients to block on a fence which a single client then triggers. All of the blocked clients start executing requests immediately after the trigger fires.

There are four basic operations on SyncFences:

  • Trigger. Mark the fence as ready and wake up all waiting clients

  • Await. Block until the fence is ready.

  • Query. Retrieve the current state of the fence.

  • Reset. Unset the fence; future Await requests will block.

SyncFences are the same as Events as provided by Python and other systems. Of course all of the names have been changed to keep things interesting. I’ll call them Fences here, to be consistent with the current X usage.

Using Pthread Primitives

One fact about pthreads that I recently learned is that the synchronization primitives (mutexes, barriers and semaphores) are actually supposed to work across process boundaries, if those objects are in shared memory mapped by each process. That seemed like a great simplification for this project: allocate a page of shared memory, map it into the X server and the direct rendering application, and use the existing pthreads APIs.

Alas, the pthread objects are architecture specific. I’m pretty sure that when that spec was written, no-one ever thought of running multiple architectures within the same memory space. I went and looked at the code to check, and found that each of these objects has a different size and structure on x86 and x86_64 architectures. That makes it pretty hard to use this API within X as we often have both 32- and 64- bit applications talking to the same (presumably 64-bit) X server.

As a last resort, I read through a bunch of articles on using futexes directly within applications and decided that it was probably possible to implement what I needed in an architecture-independent fashion.

Futexes

Linux Futexes live in this strange limbo of being a not-quite-public kernel interface. Glibc uses them internally to implement locking primitives, but it doesn’t export any direct interface to the system call. Certainly they’re easy to use incorrectly, but it’s unusual in the Linux space to have our fundamental tools locked away ‘for our own safety’.

Fortunately, we can still get at futexes by creating our own syscall wrappers.

#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>
static inline long sys_futex(void *addr1, int op, int val1,
                 struct timespec *timeout, void *addr2, int val3)
{
    return syscall(SYS_futex, addr1, op, val1, timeout, addr2, val3);
}

For this little exercise, I created two simple wrappers, one to block on a futex:

static inline int futex_wait(int32_t *addr, int32_t value) {
    return sys_futex(addr, FUTEX_WAIT, value, NULL, NULL, 0);
}

and one to wake up all futex waiters:

static inline int futex_wake(int32_t *addr) {
    /* INT_MAX (from <limits.h>) asks the kernel to wake every waiter */
    return sys_futex(addr, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
}
Atomic Memory Operations

I need atomic memory operations to keep separate cores from seeing different values of the fence value. GCC defines a few such primitives, and I picked __sync_bool_compare_and_swap and __sync_val_compare_and_swap. I also need fetch and store operations that the compiler won’t shuffle around:

#define barrier() __asm__ __volatile__("": : :"memory")

static inline void atomic_store(int32_t *f, int32_t v)
{
    barrier();
    *f = v;
    barrier();
}

static inline int32_t atomic_fetch(int32_t *a)
{
    int32_t v;
    barrier();
    v = *a;
    barrier();
    return v;
}

If your machine doesn’t make these two operations atomic, then you would redefine these as needed.
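
As one possible redefinition (my own sketch, not from the original post), a sufficiently recent GCC (4.7 or later) provides the __atomic builtins, which order both the compiler and the hardware:

/* Hypothetical alternative definitions using GCC's __atomic builtins
 * (GCC >= 4.7).  These are at least as strong as the barrier()-based
 * versions above, since they also constrain CPU reordering, not just
 * the compiler. */
static inline void atomic_store(int32_t *f, int32_t v)
{
    __atomic_store_n(f, v, __ATOMIC_SEQ_CST);
}

static inline int32_t atomic_fetch(int32_t *a)
{
    return __atomic_load_n(a, __ATOMIC_SEQ_CST);
}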

Futex-based Fences

The wake-all semantics of Fences greatly simplify reasoning about the operation: there’s no need to ensure that only a single thread runs past Await; the only requirement is that no thread passes the Await operation until the fence is triggered.

A Fence is defined by a single 32-bit integer which can take one of three values:

  • 0 - The fence is not triggered, and there are no waiters.
  • 1 - The fence is triggered (there can be no waiters at this point).
  • -1 - The fence is not triggered, and there are waiters (one or more).

With those, I built the fence operations as follows. Here’s Await:

int fence_await(int32_t *f)
{
    while (__sync_val_compare_and_swap(f, 0, -1) != 1) {
        if (futex_wait(f, -1)) {
            if (errno != EWOULDBLOCK)
                return -1;
        }
    }
    return 0;
}

The basic requirement that the thread not run until the fence is triggered is met by fetching the current value of the fence and comparing it with 1. Until it is signaled, that comparison will return false.

The compare-and-swap operation makes sure the fence is -1 before the thread calls futex_wait: either it was already -1 because there were other waiters, or it was 0 and is now -1 because this is the first waiter. This needs to be an atomic operation so that the trigger operation will see the fence value as -1 if there are any threads in the syscall.

The futex_wait call will return once the value is no longer -1; it also ensures that the thread won’t block if the trigger occurs between the swap and the syscall.

Here’s the Trigger function:

int fence_trigger(int32_t *f)
{
    if (__sync_val_compare_and_swap(f, 0, 1) == -1) {
        atomic_store(f, 1);
        if (futex_wake(f) < 0)
            return -1;
    }
    return 0;
}

The atomic compare-and-swap operation will make sure that no Await thread swaps the 0 for a -1 while the trigger is changing the value from 0 to 1; either the Await switches from 0 to -1 or the Trigger switches from 0 to 1.

If the value before the compare-and-swap was -1, then there may be threads waiting on the Fence. An atomic store, constructed with two memory barriers and a regular store operation, to mark the Fence triggered is followed by the futex_wake call to unblock all Awaiting threads.

The Query function is just an atomic fetch:

int fence_query(int32_t *f)
{
    return atomic_fetch(f) == 1;
}

Reset requires a compare-and-swap so that it doesn’t disturb things if the fence has already been reset and there are threads waiting on it:

void fence_reset(int32_t *f)
{
    __sync_bool_compare_and_swap(f, 1, 0);
}
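
To make the cross-process behaviour concrete, here's a minimal test harness of my own (not from the original post). It assumes the futex wrappers and fence functions above are compiled in the same file; it maps a shared anonymous page to hold the fence, forks a child that blocks on it, and then triggers it from the parent.

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    /* One shared, zero-initialised page holds the 32-bit fence. */
    int32_t *fence = mmap(NULL, sysconf(_SC_PAGESIZE),
                          PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (fence == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: block until the parent triggers the fence. */
        fence_await(fence);
        printf("child: fence triggered, query says %d\n", fence_query(fence));
        _exit(0);
    }

    sleep(1);              /* crude way to let the child reach Await first */
    fence_trigger(fence);  /* wake all waiters */
    waitpid(pid, NULL, 0);
    fence_reset(fence);    /* ready for reuse */
    return 0;
}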
A Request for Review

Ok, so we’ve all tried to create synchronization primitives only to find that our ‘obvious’ implementations were full of holes. I’d love to hear from you if you’ve identified any problems in the above code, or if you can figure out how to use the existing glibc primitives for this operation.

Russ Allbery: WebAuth 4.5.0

27 April, 2013 - 07:09

Time to finally release three months of work!

This is a very large feature release for the multifactor support in WebAuth and for the integration with the user information service, which is the way that WebAuth interacts with local site policy. (Currently, you still have to write your own to a well-defined protocol, although we do hope to provide Perl modules to help with this in the future.)

The major driving motivation for this release is to add the infrastructure required to set long-lived persistent cookies in a browser that contribute factors to authentications, which allows WebAuth to support "remember this device" functionality and to only require multifactor from devices from which a user has not previously authenticated. This includes a new token type, new attributes the user information service can send (including a way to invalidate such tokens), and new data that's sent back to the user information service. The user information service also now has the ability to add arbitrary additional factors to the current authentication, something that is intended to provide a hook for a local help desk to bypass multifactor for a user for some time if required.

This release also contains substantial contributions by Benjamin Coddington at UVM to improve multifactor interactions, including sending the OTP type back to the user information service if WebLogin knows it, a mechanism for the user information service to communicate a message to the user that's displayed on the multifactor login page, opaque state that can be sent back and forth between WebLogin and the user information service, and the ability for the user information service to add specific authentication factors to the required set for a particular authentication.

Other improvements in multifactor handling include the ability to set a lifetime on factors obtained via OTP login, a fix for a long-standing bug where an initial multifactor factor would satisfy a session requirement for random multifactor, and logging of even ignored errors when contacting the user information service.

There are other changes too. This release touches almost every part of WebAuth. The change to WebAuthForceLogin in 4.4.0 was reverted since, on further consideration, the original semantics seemed more useful. Password change handling in WebLogin was fixed (it's been broken for some time). Apache 2.4 error logging for all modules is much improved, and mod_webauth and mod_webkdc now produce better error logs for all versions of Apache. And WebLogin now communicates password expiration times to its templates in seconds since epoch in addition to a pre-formatted English time for better localization support.

William Orr contributed a new WebAuthLdapOperationalAttribute directive for mod_webauthldap that allows it to query operational attributes and include them in the environment.

There are two backward-incompatible changes for WebLogin. First, WebAuth now supports a user checkbox indicating either to remember their login on that device or to not remember their login (local site templates can present it either way). However, proper implementation of this matching the normal expected wording of "remember me on this device" required changing the default, so a straight upgrade from an earlier version will result in no single sign-on. To preserve behavior, either a template change to add the checkbox (checked by default) or a configuration change are required.

Second, support for getting password expiration times directly with remctl to a kadmin-remctl backend has been removed in favor of using data from the user information service by way of the WebKDC.

Finally, I got to do a lot of cleanup of the API, fix diagnosis of undef passed to Perl XS functions, and fix a compilation error with Heimdal.

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

Thorsten Glaser: mksh R45 released

27 April, 2013 - 06:24

The MirBSD Korn Shell R45 has been released today, and R44 has been named the new stable/bugfix-only series. (That’s version 45.1, not 0.45, dear Homebrew/MacOSX packagers.)

Packagers rejoice: the -DMKSH_GCC55009 dance is no longer needed, and even the run-time check for integer division is gone. Why? Because I realised one cannot use signed integers in C, at all, and rewrote the mksh(1) arithmetics code to use unsigned integers only. Special thanks to the people from musl libc and, to a lesser extent, Natureshadow for providing me with ideas about which algorithms to replace some functionality with (signed shell arithmetic is, of course, still usable; it is just emulated using unsigned C integers now).

The following entertainment…

	tg@blau:~ $ echo foo >/bar\ baz
	/bin/mksh: can't create /bar baz: Permission denied
	1|tg@blau:~ $ doch
	tg@blau:~ $ cat /bar\ baz
	foo
 

… was provided by Tonnerre Lombard. Like Swedish, German has a number of words that cannot be expressed in English, so I don’t feel up to the task of explaining this to people who don’t know the German word “doch”; just rest assured it calls the last input line (be careful, this is literally a line, so don’t use backslash-newline sequences) using sudo(8).

Richard Hartmann: Release Critical Bug report for Week 17

27 April, 2013 - 05:46

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 728
    • Affecting wheezy: 24. That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting wheezy and unstable: 19. Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 0 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 1 bug is marked as done, but still affects unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 18 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting wheezy only: 5. Those are already fixed in unstable, but the fix still needs to migrate to wheezy. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 5 bugs are in packages that are unblocked by the release team.
        • 0 bugs are in packages that are not unblocked.

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy          Diff
43    284 (213+71)   468 (332+136)   +184 (+119/+65)
44    261 (201+60)   408 (265+143)   +147 (+64/+83)
45    261 (205+56)   425 (291+134)   +164 (+86/+78)
46    271 (200+71)   401 (258+143)   +130 (+58/+72)
47    283 (209+74)   366 (221+145)   +83 (+12/+71)
48    256 (177+79)   378 (230+148)   +122 (+53/+69)
49    256 (180+76)   360 (216+155)   +104 (+36/+79)
50    204 (148+56)   339 (195+144)   +135 (+47/+90)
51    178 (124+54)   323 (190+133)   +145 (+66/+79)
52    115 (78+37)    289 (190+99)    +174 (+112/+62)
1     93 (60+33)     287 (171+116)   +194 (+111/+83)
2     82 (46+36)     271 (162+109)   +189 (+116/+73)
3     25 (15+10)     249 (165+84)    +224 (+150/+74)
4     14 (8+6)       244 (176+68)    +230 (+168/+62)
5     2 (0+2)        224 (132+92)    +222 (+132/+90)
6     release!       212 (129+83)    +212 (+129/+83)
7     release+1      194 (128+66)    +194 (+128/+66)
8     release+2      206 (144+62)    +206 (+144/+62)
9     release+3      174 (105+69)    +174 (+105/+69)
10    release+4      120 (72+48)     +120 (+72/+48)
11    release+5      115 (74+41)     +115 (+74/+41)
12    release+6      93 (47+46)      +93 (+47/+46)
13    release+7      50 (24+26)      +50 (+24/+26)
14    release+8      51 (32+19)      +51 (+32/+19)
15    release+9      39 (32+7)       +39 (+32/+7)
16    release+10     20 (12+8)       +20 (+12/+8)
17    release+11     24 (19+5)       +24 (+19/+5)
18    release+12

Graphical overview of bug stats thanks to azhag:

Vincent Sanders: When you make something, cleaning it out of structural debris is one of the most vital things you do.

27 April, 2013 - 05:20
Collabora recently had a problem with a project's ARM build farm. In a nice change of pace it was not that the kernel was crashing, nor indeed any of the software or hardware.
The Problem

Instead, our problem was that our build farm could best be described as "a pile of stuff", and we wanted to add more systems to it and have switched power control for automated testing.

Which is kinda where the Christopher Alexander quote comes into this. I suggested that I might be able to come up with a better, or at least cleaner, solution.
The Idea

Previous experience had exposed me to the idea of using 19 inch subracks for mounting circuits inside submodules.

I originally envisaged the dev boards individually mounted inside these boxes. However preliminary investigation revealed that the enclosures were both expensive and used a lot of space which would greatly increase the rack space required to house these systems.

I decided to instead look at eurocard type subracks with carriers for the systems. Using my 3D printer I came up with a carrier design for the imx53 QSB and printed it. I used the basic eurocard size of 100mm x 160mm which would allow the cards to be used within a 3U subrack.

Once assembled it became apparent that each carrier would be able to share resources like power supply, ethernet port and serial console via USB just as the existing setup did and that these would need to be housed within the subrack.
The Prototype

The carrier prototype generated enough interest to allow me to move on to the next phase of the project. I purchased a Schroff 24563-194 subrack kit and three packs of guide rails from Farnell and assembled it.

Initially I had envisaged acquiring additional horizontal rails from Schroff which would enable constructing an area suitable for mounting the shared components behind the card area.

Unfortunately Schroff have no suitable horizontal profiles in their catalog and are another of those companies who seem to not want to actually sell products to end users but rather deal with wholesalers who do not have their entire product range!

Undaunted by this I created my own horizontal rail profile and 3D printed some lengths. The profile is designed to allow a 3mm thick rear cover sheet attached with M2.5 mounting bolts and fit rack sides in the same way the other profiles do.

At this point I should introduce some information on how these subracks are dimensioned. A standard 19 inch rack (as defined in IEC 60297) has a width of 17.75 inches (450.85 mm) between the vertical posts. The height is measured in U (1.75 inches).

A subrack must obviously fit in the horizontal gap while providing as much internal space as possible. A subrack is generally either 3 or 6 U high. The width within a subrack is defined in units called HP (Horizontal Pitch), which are 0.2 inches (5.08 mm), and subracks like the Schroff generally list 84 usable HP.

However we must be careful (or actually just learn from me stuffing this up ;-) as the usable HP is not the same thing as the actual length of the horizontal rails! The enclosures actually leave an additional 0.1 inch at either end, giving a total internal width of 85HP (17 inches, 431.8 mm), which leaves 0.75 inches for the subrack sides and some clearance.

The Schroff subrack allows eurocards to be slotted into rails where the card centre line is on HP boundaries, hence we describe the width of a card in the slot in terms of HP.

I cannot manufacture aluminium extrusions (I know, it is a personal failing) nor produce more than a 100 mm long length of the plastic profile on my printer.

Even if full lengths are purchased from a commercial service (120 euros for a pair including tax and shipping) the plastic does not have sufficient mechanical strength.

The solution I came up with was somewhat innovative: as an alternative to an M5 bolt into a thread in the aluminium extrusion, I used a 444mm long length of 4mm threaded rod with nuts at either end. This arrangement puts the extrusion under compression and gives it a great deal of additional mechanical strength, as the steel threaded rod is very strong.

Additionally to avoid having to print enough extrusion for the entire length I used some 6mm aluminium tube as a spacer between 6HP(30.48mm) wide sections of the printed extrusion.

The intention was to use a standard modular PC power supply, which at 150mm wide is pretty close to 30HP (6 inches), so it was decided to have a 6HP section of rail at that point to allow a rear mounting plate for the PSU to be attached.

This gives 6HP of profile, 21HP (106.68mm) of tube spacer, 6HP of profile, 46HP (233.68 mm) of tube spacer and a final 6HP of profile, summing to our total of 85HP. Of course this would be unnecessary if a full continuous 85HP rail had been purchased, but six 6HP-long pieces of profile cost only 51 euro, a saving of 70 euro.

To provide a flat area on which to mount the power switching, Ethernet switch and USB hubs I ordered a 170 x 431 mm sheet of 3mm thick aluminium from inspiredsteel who, while being an ebay company, were fast, cheap and the cutting was accurate.

Do be sure to mention you would prefer it if any error made the sheet smaller rather than larger, or it might not fit; for me, though, they were accurate to a tenth of a mm! If you would prefer the rear section of the rack to be enclosed when you are finished, buy a second sheet for the top. For my prototype I only purchased a 170 x 280mm sheet as I was unsure if I wanted a surface under the PSU (you do, buy the longer sheet).

Mounting the PSU was a simple case of constructing a 3 mm thick plate with the correct cutouts and mounting holes for an ATX supply. Although the images show the PSU mounted on the left hand side of the rack this was later reversed to improve cable management.

The subrack needed to provide Ethernet switch ports to all the systems. A TP-Link TL-SF1016DS 16-Port 10/100Mbps Switch  was acquired and the switch board removed from its enclosure. The switch selected has an easily removed board and is powered by a single 3.3V input which is readily available from the ATX PSU.

Attention now returned to the eurocard carriers for the systems; the boards to be housed were the i.MX53 QSB and the i.MX6 SABRE Lite, plus a Raspberry Pi control system to act as USB serial console etc.

The carriers for both main boards needed to be 8HP wide, comprised of:
  • Combined USB and Ethernet Jack on both boards was 30 mm tall 
  • PCB width of 2mm
  • underside components of 4mm
  • clearance between boards of 2mm
Although this is only 38 mm (7.5HP), fractions of an HP are not possible with the selected subrack.
With 8HP wide modules this would allow for ten slots, within the 84 usable HP, and an eleventh 4HP wide in which the Raspberry Pi system fits.
Carrier designs for both the i.MX53 QSB and the i.MX6 SABRE Lite boards were created and fabricated at a professional 3D print shop which gave a high quality finish product and removed the perceived risk of relying on a personal 3D printer for a quantity of parts.
This resulted in changes in the design to remove as much material as possible as commercial 3D services charge by the cubic cm. This Design For Manufacture (DFM) step removed almost 50% from the price of the initial design. 
The i.MX6 design underwent a second iteration to allow for the heatsink to be mounted and not mechanically interfere with the hard drive (although the prototype carrier has been used successfully for a system that does not require a hard drive). The lesson learned here is to be aware that a design iteration or two is likely and that it is not without cost.
The initial installation was to have six i.MX53 and two i.MX6 boards; this later changed to a five/four split. However, the carrier solution allows for almost any combination. The only caveat (discovered later) is that the i.MX53 carriers should be at the right hand side, with the small 4HP gap at that end, as they have a JTAG connector underneath the board which would otherwise foul the hard drive of the next carrier.
A wiring loom was constructed for each board giving them a connector tail long enough to allow them to be removed. This was the wrong approach! If you implement this design (or when I do it again), the connector tails on the wiring loom should present all the connections to the rear at the same depth as the Ethernet connection.

The rack cables themselves should be long enough to allow the slides to be removed but importantly it is not desirable to have the trailing cable on the cards. I guess the original eurocard designers figured this out as they designed the cards around the standard fixed DIN connectors at the back of the card slots.

We will now briefly examine a misjudgement that caused the initially deployed solution to be reimplemented. As the design was going to use USB serial converters to access the serial console a USB connected relay board was selected to switch the power to each slot. I had previously used serial controlled relay boards with a USB serial convertor however these were no longer available.

All the available USB relay boards were HID controlled, this did not initially seem to be an issue and Linux software was written to provide a reasonable interface. However it soon became apparent that the firmware on the purchased board was very buggy and crashed the host computer's USB stack multiple times.
Deployed solution

Once it became apparent that the USB controlled power board was not viable, a new design was conceived. As the Ethernet switch had ports available, Ethernet controlled relay boards were acquired.

It did prove necessary to design and print some PCB support posts with M3 nut traps to allow the relay boards to be easily mounted using double sided adhesive pads.

By stacking the relay boards face to face and the Ethernet switch on top separated using nylon spacers it was possible to reduce the cable clutter and provide adequate cable routing space.

A busbar for Ground (black) and unswitched 12V (yellow) was constructed from two lengths of 5A chock block.

An issue with power supply stability was noted so a load resistor was added to the 12V supply and an adhesive thermal pad used to attach it to the aluminium base plate.

It was most fortunate that the ethernet switch mounting holes lined up very well with the relay board mounting holes allowing for a neat stack.

This second edition is the one currently in use; it has proved reliable in operation and has been successfully updated with additional carriers.

The outstanding issues are mainly centered around the Raspberry Pi control board:
  • Needs its carrier fitting. It is currently just stuck to the subrack end plate.
  • Needs its Ethernet cable replacing. The existing one has developed a fault post installation.
  • Needs the USB hub supply separating from the device cable. The current arrangement lets the hub power the Pi which means you cannot power cycle it.
  • Connect its switched supply separately to the USB hub/devices.
Shopping list

The final bill of materials (excluding labour and workshop costs) might be useful to anyone hoping to build their own version.

Prices are in GBP currency converted where appropriate and include tax at 20% and delivery to Cambridge UK and were correct as of April 2013.

The purchasing was not optimised and for example around 20GBP could be saved just by ordering all the shapeways parts in one order.
Base subrack

Item | Supplier | Quantity | Line Price (GBP)
Schroff 24563-194 subrack kit | Farnell | 1 | 41.28
Schroff 24560-351 guide rails | Farnell | 3 | 13.65
Schroff rack rear horizontal rail | Shapeways | 2 | 100.00
1000mm length of 4mm threaded rod | B and Q | 1 | 1.48
170mm x 431mm x 3mm Aluminium sheet | inspired steel | 2 | 40.00
PSU mounting plate | Shapeways | 1 | 35.42
PCB standoff | Shapeways | 4 | 22.30
160mm Deep Modular PC supply | CCL | 1 | 55.76
TP-Link TL-SF1016DS 16-Port 10/100Mbps Switch | CCL | 1 | 23.77
8 Channel 16A Relay Board Controlled Via Ethernet | Rapid | 2 | 126.00
Raspberry Pi | Farnell | 1 | 26.48
USB Serial converters | CCL | 10 | 37.40
10 port strip style USB HUB | Ebay | 1 | 7.00
Parts for custom Ethernet cables | RS | 13 | 26.00
Parts for custom molex power cables (salvaged from scrap ATX PSU) | Workshop | 11 | 11.00
33R 10W wirewound resistor for dummy load | RS | 1 | 1.26
24pin ATX female connector pre-wired | Maplin | 1 | 2.99
Akasa double sided thermal pad | Maplin | 1 | 5.00
Small cable tie bases | Maplin | 1 | 6.49
Miscellaneous cable, connectors, nylon standoffs, solder, heatshrink, zip ties, nuts, washers etc. | Workshop | 1 | 20.00
Total for subrack | | | 603.28
The carriers are similarly not optimally priced as over five GBP each can be saved by combining shipping on orders alone. Also the SSD drive selection was made some time ago and a newer model may be more suitable.
i.MX53 QSB carrier

Item | Supplier | Quantity | Line Price (GBP)
i.MX53 QSB | Farnell | 1 | 105.52
Intel 320 SSD 80G | CCL | 1 | 111.83
Carrier board | Shapeways | 1 | 30.00
Combined sata data and power (15 to 20cm version) | EBay | 1 | 5.00
Low profile right angle 5.5mm x 2.1mm barrel jack | EBay | 1 | 0.25
Parts for 9pin serial cable extension | RS | 1 | 5.00
Miscellaneous solder, heatshrink, nylon nuts, bolts and washers | Workshop | 1 | 5.00
Total for carrier | | | 262.60
i.MX6 SABRE Lite carrier

Item | Supplier | Quantity | Line Price (GBP)
i.MX6 SABRE Lite | Farnell | 1 | 128.06
Intel 320 SSD 80G | CCL | 1 | 111.83
Carrier board | Shapeways | 1 | 35.00
Combined sata data and power (15 to 20cm version) | EBay | 1 | 5.00
Low profile right angle 5.5mm x 2.1mm barrel jack | EBay | 1 | 0.25
Parts for 9pin serial cable modification | RS | 1 | 2.00
Miscellaneous solder, heatshrink, nylon nuts, bolts and washers | Workshop | 1 | 5.00
Total for carrier | | | 287.14

Conclusion

The solution works: in a 3U high, 355mm deep subrack, ten ARM development boards can be racked complete with local ethernet switching, power control and serial consoles.

The solution is neat and provides the flexibility, density and reproducibility the "pile of stuff" solution failed to offer.

For the current prototype with nine filled slots, the total cost was around 3000GBP, or around 330GBP per slot, which indicates a 100GBP per slot overhead over the "pile of stuff" solution. These figures omit the costs of the engineer and workshop time, which are estimated at an additional 1500GBP. Therefore a completed rack, fully filled with i.MX6 carriers, costs around 5000GBP.

Density could be increased if boards with lower height requirements were used; however, above twelve units, Ethernet switch, power switch and USB port availability become a factor. For example, the 16 port Ethernet switch requires a port for uplink, one for each relay board and one for the console server, which leaves only 12 ports for systems.

Addressing the outstanding issues would result in a much more user friendly solution. As the existing unit is in full time use and downtime is not easily scheduled for all ten systems, the issues are not likely to be fixed on the prototype and would have to be solved on a new build.
The solution is probably not suitable for turning into a product but that was not really the original aim. A commercial ARM blade server using this format would almost certainly use standard DIN connectors and a custom PCB design rather than adapting existing boards.

Ian Campbell: Generating Random MAC Addresses

27 April, 2013 - 04:49

Working with virtualisation I find myself occasionally needing to generate a random MAC address for use with a virtual machine. I have a stupid little local script which spits one out, which is all fine and dandy for my purposes. However, this week one of my colleagues was writing a wiki page and needed to include instructions on how to set the MAC address on the Arndale board. He wanted to include a shell snippet to generate one, and I suggested that there must be websites which will generate a suitable address for you. But when I came to look, we found that not a single one of the half a dozen sites which I looked at handled the locally administered or multicast bits correctly, meaning that they would randomly generate either multicast addresses or addresses using an assigned OUI (or both). Using a random OUI may not cause too much trouble in practice, but using a multicast address is sure to lead to strange behaviour, and in any case it's kind of lame.

So last night I sat down and wrote a quick hack cgi script to generate random addresses and tonight I deployed it at: http://www.hellion.org.uk/cgi-bin/randmac.pl.
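
For the record, the bit-twiddling itself is tiny. Here's a rough sketch of my own in C (not Ian's CGI script): take six random bytes, clear the multicast bit (the least significant bit of the first octet) and set the locally administered bit (the next bit up).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    unsigned char mac[6];
    int i;

    srand(time(NULL));          /* crude seed, good enough for this job */
    for (i = 0; i < 6; i++)
        mac[i] = rand() & 0xff;

    mac[0] &= 0xfe;             /* clear bit 0: unicast, not multicast */
    mac[0] |= 0x02;             /* set bit 1: locally administered, no real OUI */

    printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
           mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    return 0;
}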

This is where the world and his dog leaves a comment pointing me to the existing service my searches missed...

Steve Kemp: Modern console mail clients?

27 April, 2013 - 03:51

I've recently started staging upgrades from Squeeze to Wheezy. One unpleasant surprise was that the mutt-patched package available to Debian doesn't contain the "sidebar-new-only" patch.

This means I need to maintain it myself again, which I'd rather avoid. Over time I've been slowly moving to standard Debian systems, trying to not carry too many local perversions around.

Unfortunately, if you've kept all your mail since 1994 you have many mailboxes. Having mutt-patched available at all, with the sidebar patch, is a great timesaver. But I don't want to see mailboxes I'm never going to touch; just mailboxes with new mail in them.

Also I find the idea of having to explicitly define mailboxes a pain. Just run inotify on ~/Maildir and discover the damn things yourself. Please computer, compute!

If you divide up "mail client" into distinct steps it doesn't seem so hard:

  • Show a list of folders: all, new-mail-containing only.
  • Viewing a list of mail-messages: all in folder, or folders.
  • Compose a new mail.
  • Reply to a mail.

Obviously there is more to it than that. Sending mail? exec( sendmail ). Filtering mail? procmail/sieve/etc. Editing mail? exec(vim).

I'm sure that if I were to start on the core of a program, suitable for myself, it would be simple enough. Maybe with lua, maybe with javascript, but with a real language at the core.

Anyway I've thought this before, and working with quilt and some ropy patches has always seemed like the way to go. Maybe it still is, but I can dream.

(PS. Sup + Notmuch both crash on my archives. I do not wish to examine them further. Still, some interesting ideas. It should be possible to say "maildirs are tags": view ~/Maildir/.livejournal.2003 and ~/Maildir/.livejournal.2007 at the same time. Why just a single directory in the index-view? So 1994.)

Disjointed posts R Us.

Obquote: "How hard could it be?" -- Patrick.

Francesca Ciceri: And the winner is...

26 April, 2013 - 23:03

I totally forgot it, but as the DPL elections are now done, we have a winner for the #DPL game.

Of the (more or less) fifteen people who participated in the game (thank you!), only four received points for having at least one of their Fantastic Four running for DPL:

As Lucas is now the new DPL, our one and only winner of the DPL game is...

... Mehdi Dogguy! Congrats!

Bits from Debian: Last days for the DebConf13 matching fund!

26 April, 2013 - 21:30

As part of the DebConf13 fundraising efforts, Brandorr Group is funding a matching initiative for DebConf13, which will be in place for 4 more days (through April 30th).

You can donate here!

Please consider donating $100, or even $5 or any amount in between, as we can use all the help we can get to reach our fundraising target. The rules are simple:

  • for each dollar donated by an individual to DebConf13 through this mechanism, Brandorr Group will donate another dollar;
  • individual donations will be matched only up to 100USD each;
  • only donations in USD will be matched;
  • Brandorr Group will match the donated funds up to a maximum total of 5000 USD.

This generous offer will only stay in place through the end of April 30th.

Please act quickly, and help spread the word!

Daniel Pocock: An RTC infrastructure for debian.org

26 April, 2013 - 18:58

A number of potential Summer of Code students are now looking at the question of how to improve RTC in Debian. Making RTC available as a service to the Debian community may be part of the solution, as this would give the community a foundation to test things against and a useful tool for communication.

One core component would be a TURN server to help people traverse NAT. There are two free TURN servers already packaged in Debian, and a third one that will potentially be packaged in the near future. Challenge for SoC students: how to make the TURN authentication mechanism work with the existing passwords of people with Debian accounts or alioth accounts? All of the TURN servers would require some code changes or development of new administration tools to link the authentication systems and support LDAP.

Another component would be a SIP or XMPP proxy. A very basic implementation would need to allow anybody with a debian.org account to register, make and receive calls, send and receive chat messages. Most SIP proxies and XMPP servers can do this. Challenges for SoC students: how to link up the authentication systems for user passwords? How/where can Debian/alioth users set preferences for call forwarding (adapting the code for the web interface)? How can these tools be integrated with other infrastructure such as the bug tracking system or IRC?

There are many additional services that are possible with RTC. A common example is conference calling. Challenge for students: how to set up one of the free conferencing solutions so that Debian Developers can quickly set up conferences using SIP, XMPP or WebRTC? Would it be administered by a web interface? Would it generate emails inviting people to join a conference with a WebRTC link to click?

Having RTC services always available on debian.org would enable RTC/VoIP packages to be automatically tested against the central infrastructure using the Jenkins continuous integration system. This would ensure that packages are always interoperable. Challenges for students: pick some of the packages and develop unit testing code for them that allows end-to-end tests against any other VoIP client software. Make it all run under Jenkins.

Lior Kaplan: The commit police: learning from recent reference mistakes

26 April, 2013 - 17:22

Continuing the previous post about commits and bugs, I’d like to review some mistakes I saw recently. Mistakes do happen, but mentioning them here is meant to teach others and hopefully to reduce similar ones in the future. This post isn’t about shaming the authors/committers. Also, the points I highlight are what I consider mistakes or problems; other people might think differently.

  1. Mentioning two bugs in one commit message, which our system doesn’t support right now. So the second bug doesn’t get the commit notification, and that should be done manually (example: commit 2933935 and fdo#53278).
  2. Referencing gerrit changes as part of the commit message (example: commit 87f185d). Giving references as part of the commit is great and helpful, but I would prefer to see the reference to the actual commit and not to its review process. This is meaningful when you search the log. If the follow-up change is suggested while the first change is still in review, it should be combined; otherwise it already has a commit to reference.
  3. Following up a commit, and not mentioning the bug it references (example: commit 87f185d). In this specific case, we’re talking about a meta bug for translating comments from German to English (fdo#39468), so no harm done. But this is important when you want to cherry-pick a fix for a bug to other branches and might forget the follow-up commits. It’s also relevant to more technical meta bugs (example: fdo#62096).
  4. Referencing a mailing list which references a commit and a bug instead of referencing them directly (commit 21fea27, fdo#60534). This bug shows very well why correct referencing is important: a commit was made to fix the bug, a follow-up commit was done without proper reference, and then the first commit was reverted. No one involved in trying to fix that specific bug knew about the follow-up commit as it wasn’t documented anywhere.
  5. Referencing bugs through the full bug URL instead of the right format (example: commit e1f6dac). Also, the bug is referenced in the body of the commit message instead of the first line (header), which is more visible.
  6. Referencing non-existent bugs (example: commit 3a4534b), which got a manual notification in the bug by the committer (fdo#33091).
  7. Using URL-shortening services as part of commit messages (example: commit 86f8fba). There’s no way to know what the reference is without using the service, which in this case led to a post on one of the project’s mailing lists. There isn’t any problem giving the direct URL to the list’s archive. It’s an interesting question whether we should link to our own URLs and whether it is “OK” to use other external services which also archive our mailing lists (example: commit e83990a).

To conclude, having references in commit messages is really helpful, but please reference the right resource to help people find it easily and to let our automated services parse it.


Filed under: LibreOffice

Petter Reinholdtsen: First alpha release of Debian Edu / Skolelinux based on Debian Wheezy

26 April, 2013 - 14:30

The Debian Edu / Skolelinux project is still going strong and made its first Wheezy based release today. This is the release announcement:

New features for Debian Edu ~7.0.0 alpha0 released 2013-04-26

These are the release notes for Debian Edu / Skolelinux ~7.0.0 edu alpha0, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation, a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network.

This is the first test release based on Wheezy (which is not released yet). Basically this is an updated and slightly improved version compared to the Squeeze release.

Software updates

  • Everything which is new in Debian Wheezy, eg:
    • Linux kernel 3.2.x
    • Desktop environments KDE "Plasma" 4.8.4, GNOME 3.4, and LXDE 4 (KDE is installed by default; to choose GNOME or LXDE: see manual.)
    • Web browser Iceweasel 10 ESR
    • LibreOffice 3.5.4
    • LTSP 5.4.2
    • GOsa 2.7.4
    • CUPS print system 1.5.3
    • Educational toolbox GCompris 12.01
    • Music creator Rosegarden 12.04
    • Image editor Gimp 2.8.2
    • Virtual universe Celestia 1.6.1
    • Virtual stargazer Stellarium 0.11.3
    • Scratch visual programming environment 1.4.0.6
    • New version of debian-installer from Debian Wheezy, see installation manual for more details.
    • Debian Wheezy includes about 37000 packages available for installation.
    • More information about Debian Wheezy 7.0 is provided in the release notes and the installation manual.

Documentation

  • The (English) Debian Edu Wheezy Manual is fully translated to German, French, Italian and Danish. Partly translated versions exist for Norwegian Bokmal and Spanish.

LDAP related changes

  • Slight changes to some objects and acls to have more types to choose from when adding systems in GOsa. Now systems can be of type server, workstation, printer, terminal or netdevice.

Other changes

  • Whether LTSP clients start as diskless workstations or thin clients can be configured via a command line argument, or individually by adding an entry in lts.conf or LDAP.
  • GOsa gui: Now some options that seemed to be available, but are non functional, are greyed out (or are not clickable). Some tabs are completely hidden to the end user, others even to the GOsa admin.

Regressions

  • No mass import of user account data in GOsa (ldif or csv) available yet.

No updated artwork

  • Updated artwork which is visible during installation, in the login screen and as desktop wallpaper is still missing or the same as we had for our Squeeze based release.

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: c5e773ddafdaa4f48c409c682f598b6c

The SHA1SUM of this image is: 25934fabb9b7d20235499a0a51f08ce6c54215f2

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

David Bremner: Exporting Debian packaging patches from git, (redux)*

26 April, 2013 - 00:58
(Debian) packaging and Git.

The big picture is as follows. In my view, the most natural way to work on a packaging project in version control [1] is to have an upstream branch which either tracks upstream Git/Hg/Svn, or imports of tarballs (or some combination thereof), and a Debian branch where both modifications to upstream source and commits to stuff in ./debian are added [2]. Deviations from this are mainly motivated by a desire to export source packages, a version control neutral interchange format that still preserves the distinction between upstream source and distro modifications. Of course, if you're happy with the distro modifications as one big diff, then you can stop reading now: gitpkg $debian_branch $upstream_branch and you're done. The other easy case is if your changes don't touch upstream; then 3.0 (quilt) packages work nicely with ./debian in a separate tarball.

So the tension is between my preferred integration style, and making source packages with changes to upstream source organized in some nice way, preferably in logical patches like, uh, commits in a version control system. At some point we may be able to use some form of version control repo as a source package, but the issues with that are for another blog post. At the moment then we are stuck with trying to bridge the gap between a git repository and a 3.0 (quilt) source package. If you don't know the details of Debian packaging, just imagine a patch series like you would generate with git format-patch or apply with (surprise) quilt.

From Git to Quilt.

The most obvious (and the most common) way to bridge the gap between git and quilt is to export patches manually (or using a helper like gbp-pq) and commit them to the packaging repository. This has the advantage of not forcing anyone to use git or specialized helpers to collaborate on the package. On the other hand it's quite far from the vision of using git (or your favourite VCS) to do the integration that I started with.

The next level of sophistication is to maintain a branch of upstream-modifying commits. Roughly speaking, this is the approach taken by git-dpm, by gitpkg, and with some additional friction from manually importing and exporting the patches, by gbp-pq. There are some issues with rebasing a branch of patches, mainly it seems to rely on one person at a time working on the patch branch, and it forces the use of specialized tools or workflows. Nonetheless, both git-dpm and gitpkg support this mode of working reasonably well [3].

Lately I've been working on exporting patches from (an immutable) git history. My initial experiments with marking commits with git notes more or less worked [4]. I put this on the back-burner for two reasons, first sharing git notes is still not very well supported by git itself [5], and second Gitpkg maintainer Ron Lee convinced me to automagically pick out what patches to export. Ron's motivation (as I understand it) is to have tools which work on any git repository without extra metadata in the form of notes.

Linearizing History on the fly.

After a few iterations, I arrived at the following specification.

  • The user supplies two refs upstream and head. upstream should be suitable for export as a .orig.tar.gz file [6], and it should be an ancestor of head.

  • At source package build time, we want to construct a series of patches that

    1. Is guaranteed to apply to upstream
    2. Produces the same work tree as head, outside ./debian
    3. Does not touch ./debian
    4. As much as possible, matches the git history from upstream to head.

Condition (4) suggests we want something roughly like git format-patch upstream..head, removing those patches which are only about Debian packaging. Because of (3), we have to be a bit careful about commits that touch upstream and ./debian. We also want to avoid outputting patches that have been applied (or worse partially applied) upstream. git patch-id can help identify cherry-picked patches, but not partial application.

Eventually I arrived at the following strategy.

  1. Use git-filter-branch to construct a copy of the history upstream..head with ./debian (and for technical reasons .pc) excised.

  2. Filter these commits to remove e.g. those that are present exactly upstream, or those that introduce no changes, or changes unrepresentable in a patch.

  3. Try to revert the remaining commits, in reverse order. The idea here is twofold. First, a patch that occurs twice in history because of merging will only revert the most recent one, allowing earlier copies to be skipped. Second, the state of the temporary branch after all successful reverts represents the difference from upstream not accounted for by any patch.

  4. Generate a "fixup patch" accounting for any remaining differences, to be applied before any of the "nice" patches.

  5. Cherry-pick each "nice" patch on top of the fixup patch, to ensure we have a linear history that can be exported to quilt. If any of these cherry-picks fail, abort the export.

Yep, it seems over-complicated to me too.

TL;DR: Show me the code.

You can clone my current version from

git://pivot.cs.unb.ca/gitpkg.git

This provides a script "git-debcherry" which does the history linearization discussed above. In order to test out how/if this works in your repository, you could run

git-debcherry --stat $UPSTREAM

For actual use, you probably want to use something like

git-debcherry -o debian/patches

There is a hook in hooks/debcherry-deb-export-hook that does this at source package export time.

I'm aware this is not that fast; it does several expensive operations. On the other hand, you know what Don Knuth says about premature optimization, so I'm more interested in reports of when it does and doesn't work. In addition to crashing, generating a multi-megabyte "fixup patch" probably counts as failure.

Notes
  1. This first part doesn't seem too Debian or git specific to me, but I don't know much concrete about other packaging workflows or other version control systems.

  2. Another variation is to have a patched upstream branch and merge that into the Debian packaging branch. The trade-off here that you can simplify the patch export process a bit, but the repo needs to have taken this disciplined approach from the beginning.

  3. git-dpm merges the patched upstream into the Debian branch. This makes the history a bit messier, but seems to be more robust. I've been thinking about trying this out (semi-manually) for gitpkg.

  4. See e.g. exporting. Although I did not then know the many surprising and horrible things people do in packaging histories, so it probably didn't work as well as I thought it did.

  5. It's doable, but one ends up spending a bunch of lines of code duplicating basic git functionality; e.g. there is no real support for tags of notes.

  6. Since as far as I know quilt has no way of deleting files except to list the content, this means in particular exporting upstream should yield a DFSG Free source tree.

Julien Danjou: OpenStack Design Summit Havana, from a Ceilometer point of view

25 April, 2013 - 22:49

Last week was the OpenStack Design Summit in Portland, OR where we, developers, discussed and designed the new OpenStack release (Havana) coming up.

The summit was wonderful. It was my first OpenStack design summit -- even more so as a PTL -- and bumping into various people I had never met before and had only worked with online was a real pleasure!

[Photo: Me and Nick ready to talk about Ceilometer new features.]

Nick Barcet from eNovance, our dear previous Ceilometer PTL, and myself talked about Ceilometer and presented the work that has been done for Grizzly, with some previews of what we'd like to see done for the Havana release. You can take a look at the slides if you're curious.

Design sessions

Ceilometer had its design sessions during the last days of the summit. We noted a lot of things and commented during the sessions in our Etherpad instances.

The first session was a description of the Ceilometer core architecture for interested people, and was a wonderful success, with a packed room. Our Doug Hellmann did a wonderful job introducing people to Ceilometer and answering questions.

[Photo: Doug explaining Ceilometer architecture.]

The next session was about getting feedback from our users. We were quite surprised to discover wonderful real use-cases and deployments, like CERN using Ceilometer and generating 2 GB of data per day!

The following sessions ran on Thursday and were much more about new feature discussion. A lot of already existing blueprints were discussed and quickly validated during the first morning session. Then, Sandy Walsh introduced the architecture they use inside StackTach, so we can start thinking about getting things from it into Ceilometer.

API improvements were discussed without surprises and with a good consensus on what needs to be done. The four following sessions that occupied a lot of the day were related to alarming. All were led by Eoghan Glynn, from RedHat, who did an amazing job presenting the possible architectures with their pros and cons. Actually, all we had to do was to nod to his designs and acknowledge the plan on how to build this.

The last two sessions were about discussing advanced models for billing, where we got some interesting feedback from Daniel Dyer from HP, and then a quick follow-up of the StackTach presentation from the morning session.

Havana roadmap

The list of blueprints targeting Havana is available and should be finished by next week. If you want to propose blueprints, you're free to do so and inform us about it so we can validate it. The same applies if you wish to implement one of them!

API extension

I do think the API version 2 is going to be heavily extended during this release cycle. We need more features, like the group-by functionality.

Healthnmon

In parallel with the design sessions, discussions took place in the unconference room with the Healthnmon developers to figure out a plan in order to merge some of their efforts into Ceilometer. They should provide a component to help Ceilometer support more hypervisors than it currently does.

Alarming

Alarming is definitely going to be the next big project for Ceilometer. Today, Eoghan and I started building blueprints on alarming, centralized in a general blueprint.

We know this is going to happen for real and very soon, thanks to the engagement of eNovance and RedHat, who are committing resources to this amazing project!

Wouter Verhelst: Dear supplier,

25 April, 2013 - 19:17

I don't usually blog about work, but this time around, you maxed it.

When I specifically ask you to not ship goods, I have good reasons for that. Specifically:

  • I'm not at my office all the time. Yes, I'm often there, but I'm also often at a customer's place (you know, so I can actually make money). When I'm not at a customer, I tend to be at the office in a noon-through-evening schedule, rather than a morning-through-late-afternoon one (I hate getting out of bed if I don't have to). Since we don't have any employees, this likely means your logistics partner will find a closed door with nobody answering the bell.
  • The result of that is that we'll often find that your shipments end up at your logistics partner's warehouse. Since I have to drive to a warehouse anyway, I might as well choose to drive to the warehouse that is closest—yours.
  • For this "service" of shipping goods away from interesting locations, you charge €20+.

Not Happy(tm)

Jeff Licquia: Linux Is Hard, Except When It Isn’t

25 April, 2013 - 12:47

Online tech news site Ars Technica (which I recommend, by the way) recently reviewed the Dell XPS 13 Developer Edition.  Its unique feature: it ships with Ubuntu Linux as the default operating system.  This preload deal had a few unique properties:

  • It’s from a major system vendor, not a no-name or third-party integrator.
  • It’s a desktop-oriented product, not a server.
  • Most notably, the vendor actually put effort into making it work well.

That last point deserves some explanation.  A few vendors have grabbed a Windows computer they sell and allowed the option to preload Linux on it, but without support; you’re on your own if it doesn’t work in some way, which is likely.  Essentially, they save you the time of wiping Windows off the box and doing a fresh install, but not much more.  But this laptop comes out of Dell’s Project Sputnik, a project to put out Linux machines for developers with a “DevOps” flavor, and they felt the machine had to work as well as their regular products.  So they actually put effort and testing into getting the laptop to run Ubuntu well, with all the drivers configured properly and tweaked to support the machine’s quirks, just like they do for Windows.

And so, the review is surprised to learn that Ubuntu on the XPS 13, well, just works!  It’s even in the title of the review.  Here are reviewer Lee Hutchinson’s observations:

I’ve struggled before with using Linux as my full-time operating environment both at work and at home. I did it for years at work, but it was never quite as easy as I wanted it to be—on an older Dell laptop, keeping dual monitor support working correctly across updates required endless fiddling with xorg.conf, and whether or not it was Nvidia’s fault was totally irrelevant to swearing, cursing Past Lee, trying desperately to get his monitors to display images so he could make his 10am conference call without having to resort to running the meeting on the small laptop screen.

And thence comes the astonishment: on this Linux laptop, everything just works.  Most of the review is spent on the kinds of hardware features that distinguish this from other laptops: the keyboard is like this, the screen is that resolution, it has this CPU and this much RAM and so on.  Some space is devoted to impressions of the default Ubuntu 12.04 install, and some space is given to the special “DevOps” software, which helps the developer reproduce the software environment on the laptop when deploying apps.

But before all that, Hutchinson has to put in a dig:

It’s an impressive achievement, and it’s also a sad comment on the overall viability of Linux as a consumer-facing operating system for normal people. I don’t think anyone is arguing that Linux hasn’t earned its place in the data center—it most certainly has—but there’s no way I’d feel comfy installing even newbie-friendly Ubuntu or Mint on my parents’ computers. The XPS 13 DE shows the level of functionality and polish possible with extra effort, and that effort and polish together means this kind of Linux integration is something we won’t see very often outside of boutique OEMs.

Of course, Windows is actually worse than Linux on the hardware front–when you don’t get it pre-installed.  Imagine if more vendors put as much effort into preinstalled Linux as they did into preinstalled Windows.  In that alternate reality, I imagine people would react more like this:

“Isn’t that what you’re looking for in a mainstream product?” Rick chided. “In 1996 it was: ‘Wow look at this, I got Linux running on xxxxxxxx.’ Even in 2006 that was at times an accomplishment… When was the last time you turned on an Apple or Windows machine and marveled that it ‘just worked?’ It should be boring.”

Which was, of course, the reaction Hutchinson got when discussing the review with a Linux-using friend.

With Microsoft being less of a friend to the hardware vendors every day, here’s a case study more of them should be paying attention to.

Jon Dowland: Debian Day #13

25 April, 2013 - 04:54

Wow, my first "Debian day" in 2013, but it's a bit of a misnomer because I didn't do any Debian work tonight.

I spent some time poking at geary, Yorba's new email client, which looks promising. I haven't managed to get it working much, though. Someone packaged an old version in Debian, but it wouldn't work with either my home or my work IMAP/SMTP servers for various reasons. I did get a git checkout to build in March or April, but that stopped working on Debian when they dropped "Precise" support (it seems they were backporting various GIR/Webkit bits and pieces into their own code and didn't want to carry that around any more). They've currently got a fundraiser going; they're aiming for $100k and (at the time of writing) still need half of that with only 10 hours to go.

Recently I've also been trying to fix a rockbox bug on the Sansa Fuze v1 which really cripples writing onto the Fuze. The original firmware does not support microsd capacities >32G, which rules it out for me. Sadly the v1 version of the Fuze is uncommon enough that I don't think this bug is getting much attention. My investigations have been limited to fairly dumb bisect-like approaches, combined with lots of writing onto a microsd card (which will probably be killed dead by all this). My initial attempts at a bisect have failed, as it turns out the problem is the inverse of a regression: it bizarrely seems as if the bug was fixed in a version branch, rather than introduced in one, so it has never been squashed in the main branch at all. Furthermore, the commit that seems to fix the problem is a totally benign version bump, and almost certainly hasn't fixed the bug. So with sufficient further testing I'll probably just prove that this bug has never been fixed. It might be time to switch tack and look into diagnosing the bug rather than trying to side-step it. Or perhaps just spend £15 on a Sansa Clip Zip and forget all about the Fuze v1. Either way I hope someone releases a 128G microsd card this year!

Every now and then I wonder whether it would make sense to package rockbox in Debian in some way. Probably not.

Bits from Debian: Release date for Wheezy announced

25 April, 2013 - 03:04

Neil McGovern, on behalf of the Debian Release Team, announced the target date of the weekend of 4th/5th May for the release of Debian 7.0 "Wheezy".

Now it's time to organize some Wheezy release parties to celebrate the event and show all your Debian love!

Joachim Breitner: The carbondioxide footprint of Debian's Haskell packages

24 April, 2013 - 19:36

By now, Debian ships quite a lot of Haskell packages (~600). Because of GHC's ABI volatility, whenever we upload a new version of a library, we have to rebuild all libraries that depend on it. In particular, if we upload a new version of the compiler itself, we have to rebuild all Haskell library packages. So we have to rebuild stuff a lot. Luckily, Debian has a decent autobuilding setup, so I just need to tell it what to rebuild and the rest happens automatically (including figuring out the actual order to build things).
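
That ordering step is essentially a topological sort of the dependency graph. Here is a minimal sketch of the idea in Python; it is not the code the Debian autobuilders actually run, and the package names and dependencies are made up.

# Minimal sketch of ordering rebuilds by dependencies. The dependency
# map and package names are made-up examples, not real Debian metadata.
from graphlib import TopologicalSorter  # Python 3.9+

depends_on = {
    "haskell-text":   [],
    "haskell-parsec": ["haskell-text"],
    "haskell-pandoc": ["haskell-parsec", "haskell-text"],
}

# Dependencies are rebuilt first, their dependents afterwards.
for pkg in TopologicalSorter(depends_on).static_order():
    print("rebuild", pkg)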

I was curious how heavily we use the buildd system compared to other packages, and also how long the builders are busy building Haskell packages. All the data is in a postgresql database on buildd.debian.org, so with some python and javascript code I can visualize this. The first graph shows the number of all uploads by the autobuilders on the amd64 architecture, with Haskell uploads specially marked; the second graph does the same for the build time. You can select time ranges and get aggregate statistics for that time span.
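
The extraction side amounts to a query along these lines. This is only a sketch, and the table and column names are assumptions rather than the actual schema on buildd.debian.org.

# Sketch of pulling per-package build times for amd64 out of the buildd
# database. Table and column names are assumptions, not the real schema.
import psycopg2

conn = psycopg2.connect("dbname=buildd host=buildd.debian.org")  # assumed DSN
cur = conn.cursor()
cur.execute("""
    SELECT package, build_time
    FROM builds                       -- assumed table name
    WHERE architecture = 'amd64'
""")

haskell_time = other_time = 0.0
for package, build_time in cur:
    if package.startswith("haskell-") or package.startswith("ghc"):
        haskell_time += build_time
    else:
        other_time += build_time

print("Haskell share of build time: %.0f%%"
      % (100.0 * haskell_time / (haskell_time + other_time)))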

During the last four days a complete rebuild was happening, due to the upload of GHC 7.6.3. During these 2 days and 18 hours, building 537 packages took 48 hours of build time and produced 15 kg of CO2. That is 94% of all uploads and 91% of the total build time. The numbers are lower for the whole of last year: 52% of uploads, 31% of build time and 57 kg of CO2. (The CO2 numbers are very rough estimates.)
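
For what it is worth, the 15 kg figure is consistent with a simple back-of-the-envelope calculation; the power draw and grid carbon intensity below are my own illustrative assumptions, not measured values for the build machines.

# Back-of-the-envelope CO2 estimate for 48 hours of build time.
# Both constants are assumptions chosen for illustration only.
build_hours = 48
machine_power_kw = 0.625        # assumed average draw of a build machine
grid_kg_co2_per_kwh = 0.5       # assumed grid carbon intensity

energy_kwh = build_hours * machine_power_kw
co2_kg = energy_kwh * grid_kg_co2_per_kwh
print("%.0f kWh, roughly %.0f kg CO2" % (energy_kwh, co2_kg))
# -> 30 kWh, roughly 15 kg CO2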

Note that amd64 is a bit special, as most packages are uploaded on this architecture by the developers themselves, so no automatic builds happen for them. On other architectures, every upload of an (arch:any) package is built, so the share of Haskell packages will be lower there. Unfortunately, at the moment the database does not provide me with a table across all architectures (and I have been too lazy to make it configurable yet).

Ulrich Dangel: Visualizing DPL votes

24 April, 2013 - 07:00

Axel ‘XTaran’ Beckert recently asked in #debian-devel for a visualization of the Debian Project Leader Election results. Unfortunately there seems to be none, except the table in the mail. As I am currently trying to use ggplot2 for more things, I thought I would give it a try: convert the data into a csv file and process it with R.

To convert the original data I used vim to do a little text processing and created a csv file. Running the following code then produces a nice, simple graphic:

library("ggplot2")
votes <- read.csv("dpl-2013.csv")
votes2 <- melt(subset(votes, select=c("Year", "DDs", "unique")), id="Year")

p <- ggplot(votes2, aes(Year, value, group=variable, fill=variable)) + ylab('DD #')
p <- p + geom_point() + geom_line() + geom_area(position='identity')
p <- p + scale_fill_manual("Values", values=c(alpha('#d70a53', 0.5),
alpha("blue", 0.5)), breaks=c("DDs", "unique"), labels=c("Developer count", "Voters"))

print(p)

I also created a little script for automatically processing the csv file and creating a similar plot. Feel free to fork/clone and extend this script.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.