Planet Debian

Planet Debian - http://planet.debian.org/

Christian Perrier: Developers per country (July 2014)

27 July, 2014 - 15:37
It is time again for my annual report about the number of developers per country.

This is now the sixth edition of this report. Former editions:

So, here we are with the July 2014 version, sorted by the ratio of *active* developers per million population for each country.

Act: number of active developers
Dev: total number of developers
A/M: number of active devels per million pop.
D/M: number of devels per million pop.
2009: rank in 2009
2010: rank in 2010
2011: rank in 2011 (June)
2012: rank in 2012 (June)
2013: rank in 2013 (July)
2014: rank now
Code  Name             Population   Act  Dev   A/M   D/M   2009 2010 2011 2012 2013 2014
fi    Finland             5259250    19   31  3,61  5,89     1    1    1    1    1    1
ie    Ireland             4670976    13   17  2,78  3,64    13    9    6    2    2    2
nz    New Zealand         4331600    11   15  2,54  3,46     4    3    5    7    7    3
mq    Martinique           396404     1    1  2,52  2,52     -    -    3    4    4    4
se    Sweden              9088728    22   37  2,42  4,07     3    6    7    5    5    5
ch    Switzerland         7870134    19   29  2,41  3,68     2    2    2    3    3    6
no    Norway              4973029    11   14  2,21  2,82     5    4    4    6    6    7
at    Austria             8217280    18   29  2,19  3,53     6    8   10   10   10    8
de    Germany            81471834   164  235  2,01  2,88     7    7    9    9    8    9
lu    Luxemburg            503302     1    1  1,99  1,99     8    5    8    8    9   10
fr    France             65350000   101  131  1,55  2       12   12   11   11   11   11
au    Australia          22607571    32   60  1,42  2,65     9   10   12   12   12   12
be    Belgium            11071483    14   17  1,26  1,54    10   11   13   13   13   13
uk    United-Kingdom     62698362    77  118  1,23  1,88    14   14   14   14   14   14
nl    Netherlands        16728091    18   40  1,08  2,39    11   13   15   15   15   15
ca    Canada             33476688    34   63  1,02  1,88    15   15   17   16   16   16
dk    Denmark             5529888     5   10  0,9   1,81    17   17   16   17   17   17
es    Spain              46754784    34   56  0,73  1,2     16   16   19   18   18   18
it    Italy              59464644    36   52  0,61  0,87    23   22   22   19   19   19
hu    Hungary            10076062     6   12  0,6   1,19    18   25   26   20   24   20
cz    Czech Rep          10190213     6    6  0,59  0,59    21   20   21   21   20   21
us    USA               313232044   175  382  0,56  1,22    19   21   25   24   22   22
il    Israel              7740900     4    6  0,52  0,78    24   24   24   25   23   23
hr    Croatia             4290612     2    2  0,47  0,47    20   18   18   26   25   24
lv    Latvia              2204708     1    1  0,45  0,45    26   26   27   27   26   25
bg    Bulgaria            7364570     3    3  0,41  0,41    25   23   23   23   27   26
sg    Singapore           5183700     2    2  0,39  0,39     -    -    -   33   33   27
uy    Uruguay             3477778     1    2  0,29  0,58    22   27   28   28   28   28
pl    Poland             38441588    11   15  0,29  0,39    29   29   30   30   30   29
jp    Japan             127078679    36   52  0,28  0,41    30   28   29   29   29   30
lt    Lithuania           3535547     1    1  0,28  0,28    28   19   20   22   21   31
gr    Greece             10787690     3    4  0,28  0,37    33   38   34   35   35   32
cr    Costa Rica          4301712     1    1  0,23  0,23    31   30   31   31   31   33
by    Belarus             9577552     2    2  0,21  0,21    35   36   39   39   32   34
ar    Argentina          40677348     8   10  0,2   0,25    34   33   35   32   37   35
pt    Portugal           10561614     2    4  0,19  0,38    27   32   32   34   34   36
sk    Slovakia            5477038     1    1  0,18  0,18    32   31   33   36   36   37
rs    Serbia              7186862     1    1  0,14  0,14     -    -    -    -   38   38
tw    Taiwan             23040040     3    3  0,13  0,13    37   34   37   37   39   39
br    Brazil            192376496    18   21  0,09  0,11    36   35   38   38   40   40
cu    Cuba               11241161     1    1  0,09  0,09     -   38   41   41   41   41
co    Colombia           45566856     4    5  0,09  0,11    41   44   46   47   46   42
kr    South Korea        48754657     4    6  0,08  0,12    39   39   42   42   42   43
gt    Guatemala          13824463     1    1  0,07  0,07     -    -    -    -   43   44
ec    Ecuador            15007343     1    1  0,07  0,07     -   40   43   43   45   45
cl    Chile              16746491     1    2  0,06  0,12    42   41   44   44   47   46
za    South Africa       50590000     3   10  0,06  0,2     38   48   48   48   48   47
ru    Russia            143030106     8    9  0,06  0,06    43   42   47   45   49   48
mg    Madagascar         21281844     1    1  0,05  0,05    44   37   40   40   50   49
ro    Romania            21904551     1    2  0,05  0,09    45   43   45   46   51   50
ve    Venezuela          28047938     1    1  0,04  0,04    40   45   50   49   44   51
my    Malaysia           28250000     1    1  0,04  0,04     -    -   49   50   52   52
pe    Peru               29907003     1    1  0,03  0,03    46   46   51   51   53   53
tr    Turkey             74724269     2    2  0,03  0,03    47   47   52   52   54   54
ua    Ukraine            45134707     1    1  0,02  0,02    48   53   58   59   55   55
th    Thailand           66720153     1    2  0,01  0,03    50   50   54   54   56   56
eg    Egypt              80081093     1    3  0,01  0,04    51   51   55   55   57   57
mx    Mexico            112336538     1    1  0,01  0,01    49   49   53   53   58   58
cn    China            1344413526    10   14  0,01  0,01    53   53   57   56   59   59
in    India            1210193422     8    9  0,01  0,01    52   52   56   57   60   60
sv    El Salvador         7066403     0    1  0     0,14     -    -   36   58   61   61

Total: 969 active developers, 1561 developers overall (62,08% active)

A few interesting facts:
  • New Zealand bumps from rank 7 to rank 3, thanks to one new active developer
  • Switzerland loses one developer and goes down to rank 6
  • Norway also slightly goes down by losing one developer
  • With two more developers, Austria climbs up to rank 8 and overtakes Germany...;-)
  • Hungary climbs a little bit by gaining one developer
  • Singapore doubles its number of developers from 1 to 2 and bumps from 33 to 27
  • One rank up too for Poland that gained one developer
  • Down to rank 31 for Lithuania by losing one developer
  • Up to rank 32 for Greece with 4 developers instead of 3
  • Argentina goes up by having two more developers (it lost 2 last year)
  • Up from 46 to 42 for Colombia by winning one more developer
  • One more developer and Russia climbs from 49 to 48
  • One less for Venezuela that has only one developer left...:-(
  • No new country this year. Less movement towards "the universal OS"?
  • We have 12 more active Debian developers and 26 more developers overall. Less progress than last year
  • The ratio of active developers is nearly stable, though slightly decreasing

Holger Levsen: 20140726-the-future-is-now

26 July, 2014 - 20:08
Do you remember the future?

Unless you are over 60, you weren't promised flying cars. You were promised an oppressive cyberpunk dystopia. Here you go.

(Source: found in the soup)

Luckily the future today is still unwritten. Shape it well.

Richard Hartmann: Release Critical Bug report for Week 30

26 July, 2014 - 05:58

I have been asked to publish bug stats from time to time. Not exactly sure about the schedule yet, but I will try and stick to Fridays, as in the past; this is for the obvious reason that it makes historical data easier to compare. "Last Friday of each month" may or may not be too much. Time will tell.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1511
    • Affecting Jessie: 431 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 383 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 44 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 319 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

Juliana Louback: Extending an XTuple Business Object

25 July, 2014 - 23:06

xTuple is in my opinion incredibly well designed; the code is clean and the architecture adherent to a standardized structure. All this makes working with xTuple software quite a breeze.

I wanted to integrate JSCommunicator into the web-based xTuple version. JSCommunicator is a SIP communication tool, so my first step was to create an extension for the SIP account data. Luckily for me, the xTuple development team published an awesome tutorial for writing an xTuple extension.

xTuple cleverly uses model based business objects for the various features available. This makes customizing xTuple very straightforward. I used the tutorial mentioned above for writing my extension, but soon noticed my goals were a little different. A SIP account has 3 data fields, these being the SIP URI, the account password and an optional display name. xTuple currently has a business object in the core code for a User Account and it would make a lot more sense to simply add my 3 fields to this existing business object rather than create another business object. The tutorial very clearly shows how to extend a business object with another business object, but not how to extend a business object with only new fields (not a whole new object).

Now maybe I’m just a whole lot slower than most people, but I had a ridiculously hard time figuring this out. Mind you, this is because I’m slow; the xTuple documentation and code are understandable and as self-explanatory as it gets. I think it just takes a bit to get used to. Either way, I thought this just might be useful to others, so here is how I went about it.

Setup

First you’ll have to set up your xTuple development environment and fork the xtuple and xtuple-extensions repositories as shown in this handy tutorial. A footnote I’d like to add is: please verify that your version of Vagrant (and anything else you install) is the one listed in the tutorial. I think I spent two entire days or more on a wild goose (bug) chase trying to set up my environment when the cause of all the errors was that I had somehow installed an older version of Vagrant - 1.5.4 instead of 1.6.3. Please don’t make the same mistake I did. Actually, if for some reason you get the following error when you try using node:

<<ERROR 2014-07-10T23:52:46.948Z>> Unrecoverable exception. Cannot call method 'extend' of undefined

    at /home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:37:39

    at Object.<anonymous> (/home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:1364:3)
    ...

chances are, you have the wrong version. That’s what happened to me. The Vagrant Virtual Development Environment automatically installs and configures everything you need; it’s ready to go. So if you find yourself installing and updating and running apt-get and so on, you probably did something wrong.

Coding

So by now we should have the Vagrant Virtual Development Environment set up and the web app up and running and accessible at localhost:8443. So far so good.

Disclaimer: You will note that much of this is similar to xTuple’s tutorial but there are some small but important differences. Other Disclaimer: I’m describing how I did it, which may or may not be ‘up to snuff’. Works for me though.

Schema

First let’s make a schema for the table we will create with the new custom fields. Be sure to create the correct directory structure, aka /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>/database/source or in my case /path/to/xtuple-extensions/source/sip_account/database/source, and create the file create_sa_schema.sql, where ‘sa’ is the name of my schema. This file will contain the following lines:

do $$
  /* Only create the schema if it hasn't been created already */
  var res, sql = "select schema_name from information_schema.schemata where schema_name = 'sa'",
  res = plv8.execute(sql);
  if (!res.length) {
    sql = "create schema sa; grant all on schema sa to group xtrole;"
    plv8.execute(sql);
  }
$$ language plv8;

Of course, feel free to replace ‘sa’ with your schema name of choice. All the code described here can be found in my xtuple-extensions fork, on the sip_ext branch.

Table

We’ll create a table containing your custom fields and a link to an existing table - the table for the existing business object you want to extend. If you’re wondering why make a whole new table for a few extra fields, here’s a good explanation, the case in question is adding fields to the Contact business object.

You need to first figure out what table you want to link to. This might not be uber easy. I think the best way to go about it is to look at the ORMs. The xTuple ORMs are a JSON mapping between the SQL tables and the object-oriented world above the database; they’re .json files found at path/to/xtuple/node_modules/xtuple/enyo-client/database/orm/models for the core business objects and at path/to/xtuple/enyo-client/extensions/source/<EXTENSION NAME>/database/orm/models for extension business objects. I’ll give two examples. If you look at contact.json you will see that the Contact business object refers to the table “cntct”. Look for the “type”: “Contact” on the line above, so we know it’s the “Contact” business object. In my case, I wanted to extend the UserAccount and UserAccountRelation business objects, so check out user_account.json. The table listed for UserAccount is xt.usrinfo and the table listed for UserAccountRelation is xt.usrlite. A closer look at the sql files for these tables (usrinfo.sql and usrlite.sql) revealed that usrinfo is in fact a view and usrlite is ‘A light weight table of user information used to avoid punishingly heavy queries on the public usr view’. I chose to refer to xt.usrlite - that, or I received error messages when trying the other table names.

Now I’ll make the file /path/to/xtuple-extensions/source/sip_account/database/source/usrlitesip.sql, to create a table with my custom fields plus the link to the usrlite table. Don’t quote me on this, but I’m under the impression that this is the norm for naming the sql file joining tables: the name of the table you are referring to (‘usrlite’ in this case) and your extension’s name. Content of usrlitesip.sql:

select xt.create_table('usrlitesip', 'sa');

select xt.add_column('usrlitesip','usrlitesip_id', 'serial', 'primary key', 'sa');
select xt.add_column('usrlitesip','usrlitesip_usr_username', 'text', 'references xt.usrlite (usr_username)', 'sa');
select xt.add_column('usrlitesip','usrlitesip_uri', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_name', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_password', 'text', '', 'sa');

comment on table sa.usrlitesip is 'Joins User with SIP account';

Breaking it down, line 1 creates the table named ‘usrlitesip’ (no duh), line 2 is for the primary key (self-explanatory). You can then add any columns you like, just be sure to add one that references the table you want to link to. I checked usrlite.sql and saw the primary key is usr_username, be sure to use the primary key of the table you are referencing.
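
If you are more used to plain SQL, here is roughly what those helper calls boil down to. This is only a sketch: xt.create_table and xt.add_column may also handle grants and other bookkeeping, so use the helpers rather than running this directly.

-- Sketch: the approximate equivalent of the helper calls above
create table sa.usrlitesip (
    usrlitesip_id           serial primary key,
    usrlitesip_usr_username text references xt.usrlite (usr_username),
    usrlitesip_uri          text,
    usrlitesip_name         text,
    usrlitesip_password     text
);
comment on table sa.usrlitesip is 'Joins User with SIP account';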

You can check what you made by executing the .sql files like so:

$ cd /path/to/xtuple-extensions/source/sip_account/database/source
$ psql -U admin -d dev -f create_sa_schema.sql
$ psql -U admin -d dev -f usrlitesip.sql

After which you will see the table with the columns you created if you enter:

$ psql -U admin -d dev -c "select * from sa.usrlitesip;"

Now create the file /path/to/xtuple-extensions/source/sip_account/database/source/manifest.js to put the files together and in the right order. It should contain:

{
  "name": "sip_account",
  "version": "1.4.1",
  "comment": "Sip Account extension",
  "loadOrder": 999,
  "dependencies": ["crm"],
  "databaseScripts": [
    "create_sa_schema.sql",
    "usrlitesip.sql",
    "register.sql"
  ]
}

I think the “name” has to be the same you named your extension directory as in /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>. I think the “comment” can be anything you like and you want your “loadOrder” to be high so it’s the last thing installed (as it’s an add on.) So far we are doing exactly what’s instructed in the xTuple tutorial. It’s repetitive, but I think you can never have too many examples to compare to. In “databaseScripts” you will list the two .sql files you just created for the schema and the table, plus another file to be made in the same directory named register.sql.

I’m not sure why you have to make the register.sql or even if you indeed have to. If you leave the file empty, there will be a build error, so put a ‘;’ in the register.sql or remove the line “register.sql” from manifest.js as I think for now we are good without it.

Now let’s update the database with our new extension:

$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ psql -U admin -d dev -c "select * from xt.ext;"

That last command should display a table with a list of extensions; the ones already in xtuple like ‘crm’ and ‘billing’ and some others plus your new extension, in this case ‘sip_account’. When you run build_app.js you’ll probably see a message along the lines of “<Extension name> has no client code, not building client code” and that’s fine because yeah, we haven’t worked on the client code yet.

ORM

Here’s where things start getting different. So ORMs link your object to an SQL table. But we DON’T want to make a new business object, we want to extend an existing business object, so the ORM we will make will be a little different than the xTuple tutorial. Steve Hackbarth kindly explained this new business object/existing business object ORM concept here.

First we’ll create the directory /path/to/xtuple-extensions/source/sip_account/database/orm/ext, according to xTuple convention. ORMs for new business objects would be put in /path/to/xtuple-extensions/source/sip_account/database/orm/models. Now we’ll create the .json file /path/to/xtuple-extensions/source/sip_account/database/orm/ext/user_account.json for our ORM. Once again, don’t quote me on this, but I think the name of the file should be the name of the business object you are extending, as is done in the tutorial example extending the Contact object. In our case, UserAccount is defined in user_account.json and that’s what I named my extension ORM too. Here’s what you should place in it:

 1 [
 2   {
 3     "context": "sip_account",
 4     "nameSpace": "XM",
 5     "type": "UserAccount",
 6     "table": "sa.usrlitesip",
 7     "isExtension": true,
 8     "isChild": false,
 9     "comment": "Extended by Sip",
10     "relations": [
11       {
12         "column": "usrlitesip_usr_username",
13         "inverse": "username"
14       }
15     ],
16     "properties": [
17       {
18         "name": "uri",
19         "attr": {
20           "type": "String",
21           "column": "usrlitesip_uri",
22           "isNaturalKey": true
23         }
24       },
25       {
26         "name": "displayName",
27         "attr": {
28           "type": "String",
29           "column": "usrlitesip_name"
30         }
31       },
32       {
33         "name": "sipPassword",
34         "attr": {
35           "type": "String",
36           "column": "usrlitesip_password"
37         }
38       }
39     ],
40     "isSystem": true
41   },
42   {
43     "context": "sip_account",
44     "nameSpace": "XM",
45     "type": "UserAccountRelation",
46     "table": "sa.usrlitesip",
47     "isExtension": true,
48     "isChild": false,
49     "comment": "Extended by Sip",
50     "relations": [
51       {
52         "column": "usrlitesip_usr_username",
53         "inverse": "username"
54       }
55     ],
56     "properties": [
57       {
58         "name": "uri",
59         "attr": {
60           "type": "String",
61           "column": "usrlitesip_uri",
62           "isNaturalKey": true
63         }
64       },
65       {
66         "name": "displayName",
67         "attr": {
68           "type": "String",
69           "column": "usrlitesip_name"
70         }
71       },
72       {
73         "name": "sipPassword",
74         "attr": {
75           "type": "String",
76           "column": "usrlitesip_password"
77         }
78       }
79     ],
80     "isSystem": true
81   }
82 ]

Note the “context” is my extension name, because the context + nameSpace + type combo has to be unique. We already have a UserAccount and UserAccountRelation object in the “XM” namespace in the “xtuple” context in the original user_account.json; now we will have a UserAccount and UserAccountRelation object in the “XM” namespace in the “sip_account” context. What else is important? Note that “isExtension” is true on lines 7 and 47 and the “relations” item contains the “column” of the foreign key we referenced.

This is something you might want to verify: “column” (lines 12 and 52) is the name of the attribute on your table. When we made a reference to the primary key usr_username from the xt.usrlite table we named that column usrlitesip_usr_username. But the “inverse” is the attribute name associated with the original sql column in the original ORM. Did I lose you? I had a lot of trouble with this silly thing. In the original ORM that created a new UserAccount business object, the primary key attribute is named “username”, as can be seen here. That is what should be used for the “inverse” value. Not the sql column name (usr_username) but the object attribute name (username). I’m emphasizing this because I made that mistake and if I can spare you the pain I will.

If we rebuild our extension everything should come along nicely, but you won’t see any changes just yet in the web app because we haven’t created the client code.

Client

Create the directory /path/to/xtuple-extensions/source/sip_account/client which is where we’ll keep all the client code.

Extend Workspace View

I want the fields I added to show up on the form to create a new User Account, so I need to extend the view for the User Account workspace. I’ll start by creating a directory /path/to/xtuple-extensions/source/sip_account/client/views and in it creating a file named ‘workspace.js’ containing this code:

XT.extensions.sip_account.initWorkspace = function () {

	var extensions = [
  	{kind: "onyx.GroupboxHeader", container: "mainGroup", content: "_sipAccount".loc()},
  	{kind: "XV.InputWidget", container: "mainGroup", attr: "uri" },
  	{kind: "XV.InputWidget", container: "mainGroup", attr: "displayName" },
  	{kind: "XV.InputWidget", container: "mainGroup", type:"password", attr: "sipPassword" }
	];

	XV.appendExtension("XV.UserAccountWorkspace", extensions);
};

So I’m initializing my workspace and creating an array of items to add (append) to view XV.UserAccountWorkspace. The first ‘item’ is this onyx.GroupboxHeader which is a pretty divider for my new form fields, the kind you find in the web app at Setup > User Accounts, like ‘Overview’. I have no idea what other options there are for container other than “mainGroup”, so let’s stick to that. I’ll explain content: “_sipAccount”.loc() in a bit. Next I created three input fields of the XV.InputWidget kind. This also confused me a bit as there are different kinds of input to be used, like dropdowns and checkboxes. The only advice I can give is snoop around the webapp, find an input you like and look up the corresponding workspace.js file to see what was used.

What we just did is (should be) enough for the new fields to show up on the User Account form. But before we see things change, we have to package the client. Create the file /path/to/xtuple-extensions/source/sip_account/client/views/package.js. This file is needed to ‘package’ groups of files and indicates the order the files should be loaded (for more on that, see this). For now, all the file will contain is:

enyo.depends(
"workspace.js"
);

You also need to package the ‘views’ directory containing workspace.js, so create the file /path/to/xtuple-extensions/source/sip_account/client/package.js and in it show that the directory ‘views’ and its contents must be part of the higher level package:

enyo.depends(
"views"
);

I like to think of it as a box full of smaller boxes.

This will sound terrible, but apparently you also need to create the file /path/to/xtuple-extensions/source/sip_account/client/core.js containing this line:

XT.extensions.sip_account = {};

I don’t know why. As soon as I find out I’ll be sure to inform you.

As we’ve added a file to the client directory, be sure to update /path/to/xtuple-extensions/source/sip_account/client/package.js so it includes the new file:

enyo.depends(
"core.js",
"views"
);

Translations

Remember “_sipAccount”.loc() in our workspace.js file? xTuple has great internationalization support and it’s easy to use. Just create the directory and file /path/to/xtuple-extensions/source/sip_account/client/en/strings.js and in it put key-value pairs for labels and their translation, like this:

(function () {
  "use strict";

  var lang = XT.stringsFor("en_US", {
    "_sipAccount": "Sip Account",
    "_uri": "Sip URI",
    "_displayName": "Display Name",
    "_sipPassword": "Password"
  });

  if (typeof exports !== 'undefined') {
    exports.language = lang;
  }
}());

So far I included all the labels I used in my Sip Account form. If you write the wrong label (key) or forget to include a corresponding key-value pair in strings.js, xTuple will simply name your label “_labelName”, underscore and all.

Now build your extension and start up the server:

$ cd /path/to/xtuple 
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ node node-datasource/main.js

If the server is already running, just stop it and restart it to reflect your changes.

Now if you go to Setup > User Accounts and click the “+” button, you should see a nice little addition to the form with a ‘Sip Account’ divider and three new fields. Nice, eh?

Extend Parameters

Currently you can search your User Accounts list using any of the User Account fields. It would be nice to be able to search with the Sip account fields we added as well. To do that, let’s create the directory /path/to/xtuple-extensions/source/sip_account/client/widgets and there create the file parameter.js to extend XV.UserAccountListParameters. Once again, you’ll have to look this up. In the xTuple code you’ll find the application’s parameter.js in /path/to/xtuple/enyo-client/application/source/widgets. Search for the business object you are extending (for example, XV.UserAccount) and look for some combination of the business object name and ‘Parameters’. If there’s more than one, try different ones. Not a very refined method, but it worked for me. Here’s the content of our parameter.js:

XT.extensions.sip_account.initParameterWidget = function () {

  var extensions = [
    {kind: "onyx.GroupboxHeader", content: "_sipAccount".loc()},
    {name: "uri", label: "_uri".loc(), attr: "uri", defaultKind: "XV.InputWidget"},
    {name: "displayName", label: "_displayName".loc(), attr: "displayName", defaultKind: "XV.InputWidget"}
  ];

  XV.appendExtension("XV.UserAccountListParameters", extensions);
};

Note that I didn’t include a search field for the password attribute for obvious reasons. Now once again, we package this new code addition by creating a /path/to/xtuple-extensions/source/sip_account/client/widgets/package.js file:

enyo.depends(
"parameter.js"
);

We also have to update /path/to/xtuple-extensions/source/sip_account/client/package.js:

enyo.depends(
"core.js",
"widgets"
"views"
);

Rebuild the extension (and restart the server) and go to Setup > User Accounts. Press the magnifying glass button on the upper left side of the screen and you’ll see many options for filtering the User Accounts, among them the SIP Uri and Display Name.

Extend List View

You might want your new fields to show up on the list of User Accounts. I figured out a way to do this that looks strange and kind of incorrect, but it’s apparently working.

Create the file /path/to/xtuple-extensions/source/sip_account/client/views/list.js and add the following:

Steve Kemp: The selfish programmer

25 July, 2014 - 21:16

Once upon a time I wrote a piece of software for scheduling the classes available to a college.

There was a bug in the scheduler: Students who happened to be named 'Steve Kemp' had a significantly higher chance (>=80% IIRC) of being placed in lessons where the class makeup was more than 50% female.

This bug was never fixed. Which was nice, because I spent several hours both implementing and disguising this feature.

I was a bad coder when I was a teenager.

These days I'm still a bad coder, but in different ways.

Wouter Verhelst: Multiarchified eID libraries for Debian

25 July, 2014 - 19:44

A few weeks back, I learned that some government web interfaces require users to download a PDF file, sign it with their eID, and upload the signed PDF document. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.

Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eid middleware packages were not multiarch ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".

For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...

... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com containing some old versions, but nothing more recent than a 5.x version.

Well, I suppose that settles that, then.

Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:

  • run dpkg --add-architecture i386 if you haven't yet enabled multiarch
  • Install the eid-archive package, as usual
  • Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
  • run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
  • run apt-get update
  • run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
  • run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version.
  • run your 32-bit application and sign things.

You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...

Tim Retout: London.pm's July 2014 tech meeting

25 July, 2014 - 15:36

Last night, I went to the London.pm tech meeting, along with a couple of colleagues from CV-Library. The talks, combined with the unusually hot weather we're having in the UK at the moment and my holiday all last week, make it feel like I'm at a software conference. :)

The highlight for me was Thomas Klausner's talk about OX (and AngularJS). We bought him a drink at the pub later to pump him for information about using Bread::Board, with some success. It was worth the long, late commute back to Southampton.

All very enjoyable, and I hope they have more technical meetings soon. I'm planning to attend the London Perl Workshop later in the year.

Gunnar Wolf: Nice read: «The Fasinatng … Frustrating … Fascinating History of Autocorrect»

25 July, 2014 - 11:18

A long time ago, I did some (quite minor!) work on natural language parsing. Most of what I got was the very basic rudiments on what needs to be done to begin with. But I like reading some texts on the subject every now and then.

I am also a member of the ACM — Association for Computing Machinery. Most of you will be familiar with it, it's one of the main scholarly associations for the field of computing. One of the basic perks of being an ACM member is the subscription to a very nice magazine, Communications of the ACM. And, of course, although I enjoy the physical magazine, I like reading some columns and articles as they appear along the month using the RSS feeds. They also often contain pointers to interesting reads on other media — As happened today. I found quite a nice article, I think, worth sharing with whoever thinks I have interesting things to say.

They published a very short blurb titled The Fasinatng … Frustrating … Fascinating History of Autocorrect. I was somewhat skeptical when I saw that it links to an identically named article published in Wired, but I gave it a shot anyway...

The article follows a style that's often abused and not very amusing, but I think was quite well done: The commented interview. Rather than just drily following through an interview, the writer tells us a story about that interview. And this is the story of Gideon Lewis-Kraus interviewing Dean Hachamovitch, the creator of the much hated (but very much needed) autocorrect feature that appeared originally in Microsoft Word.

The story of Hachamovitch's work (and its derivations, to the much maligned phone input predictors) over the last twenty-something years is very light to read, very easy to enjoy. I hope you find it as interesting as I did.

Craig Small: PHP uniqid() not always a unique ID

24 July, 2014 - 20:17

For quite some time modern versions of JFFNMS have had a problem. In large installations hosts would randomly appear as down with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that running a ping program needs root access to a socket, and doing that is far too difficult and scary in PHP, which is what JFFNMS is written in.

To capture the output of fping, the program is executed and the output captured to a temporary file. For my tiny setup this worked fine; for a lot of small setups this was also fine. For larger setups, it was not fine at all. Random failed interfaces and, most bizarrely of all, a file seemingly disappearing. The program checked that the file existed and then ran stat in a loop to see if data was there. The file-exists check worked, but the stat said the file was not found.

At first I thought it was some odd load related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said “Are these numbers supposed to be the same?”

The numbers he was referring to were the filename IDs of the temporary files. They were most DEFINITELY not supposed to be the same. They were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the time.  Large setups often have large numbers of child processes for polling devices. As the number of poller children increases, the chance that two child processes start the reachability poll at the same time and have the same uniqid increases. It’s why the problem happened, but not all the time.

The stat error was another symptom of this bug, what would happen was:

  • Child 1 starts the poll, temp filename abc123
  • Child 2 starts the poll in the same microsecond, temp filename is also abc123
  • Child 1's and child 2's wait pollers start, see that the temp file exists and go into a loop of stat and wait until there is a result
  • Child 1 finishes, grabs the details, deletes the temporary file
  • Child 2 loops, tries to run stat but finds no file

Who finishes first is entirely dependent on how quickly the fping returns, and that is dependent on how quickly the remote host responds to pings, so it's kind of random.

A minor patch switches the code to tempnam() instead of uniqid(), adding the interface ID into the mix for good measure (no two children will poll the same interface; the parent’s scheduler makes sure of that). The initial response is that it is looking good.
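
To make the idea concrete, here is a minimal sketch (not the actual JFFNMS patch; the interface ID, target address and fping invocation are placeholders) of how tempnam() gives each child its own file:

<?php
// Sketch only: create a collision-free temporary file for one reachability poll.
// $interface_id is a placeholder for the interface being polled.
$interface_id = 42;
// tempnam() creates the file atomically, so two children can never get the same name.
$tmpfile = tempnam(sys_get_temp_dir(), 'fping_' . $interface_id . '_');

// Capture the fping output into a file no other child can be using.
exec('fping -c 3 192.0.2.1 > ' . escapeshellarg($tmpfile) . ' 2>&1');
$output = file_get_contents($tmpfile);
unlink($tmpfile);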

 

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

24 July, 2014 - 17:38

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Matthew Palmer: First Step with Clojure: Terror

24 July, 2014 - 08:30
$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]

Wait… what? lein downloads some random JARs from a website over HTTP1, with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.

It gets better, though. The repo1.maven.org site is served by the fastly.net2 pseudo-CDN3, which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.
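
For what it's worth, the detached signature can be fetched and checked by hand; a sketch, using the artifact from the transcript above:

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar.asc
$ gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar

Of course, as noted above, the key that made that signature doesn't chain to anything trustworthy, so at best this tells you the file matches some signature, not that you should trust it.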

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.

  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

  3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

Russ Allbery: WebAuth 4.6.1

24 July, 2014 - 06:59

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

In other, more minor bug fixes, build issues when remctl support is disabled have been fixed, expiring password warnings are shown in WebLogin after any POST-based authentication, the confirmation page is forced if authorization identity switching is available, the username field is verified before multifactor authentication to avoid subsequent warnings, newlines and tabs are allowed in the XML sent from the WebKDC for user messages, empty RT and ST parameters are correctly diagnosed, and there are some documentation improvements.

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.
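
As a rough sketch of how that might look in the Apache configuration (the directive name comes from the release notes; the ticket cache path here is purely illustrative, so check the mod_webkdc documentation for the exact expected form):

# Hypothetical mod_webkdc snippet; the cache path is an assumption.
WebKdcFastArmorCache /var/lib/webkdc/armor.cache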

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

Lior Kaplan: Testing PHPNG on Debian/Ubuntu

24 July, 2014 - 05:01

We (at Zend) want to help people get more involved in testing PHPNG (PHP next generation), so we’ve started to provide binaries for it, although it’s still a branch on top of PHP’s master branch. See more details about PHPNG on Zeev Suraski’s blog post.

The binaries (64bit) are compatible with Debian testing/unstable and Ubuntu Trusty (14.04) and up. The mod_php is built for Apache 2.4 which all three flavors have.

The repository is at http://repos.zend.com/zend-server/early-access/phpng/

Installation instructions:

# wget http://repos.zend.com/zend.key -O- 2> /dev/null | apt-key add -
# echo "deb http://repos.zend.com/zend-server/early-access/phpng/ trusty zend" > /etc/apt/sources.list.d/phpng.list
# apt-get update
# apt-get install php5

For the task of providing these binaries, I had the pleasure of combining my experience as a member of the Debian PHP team and a Debian Developer with stuff more internal to the PHP development process. Using the already existing Debian packaging enabled me to test more build scenarios easily (and report problems accordingly). Hopefully this could also be translated back into providing more experimental packages for Debian and making sure Debian packages are ready for the PHP release after PHP 5.6.


Filed under: Debian GNU/Linux, PHP

Petter Reinholdtsen: 98.6 percent done with the Norwegian draft translation of Free Culture

24 July, 2014 - 04:40

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally completed translating the book text. There are still some foot/end notes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proof reading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects during the last year, and the progress graph of the translation shows this very well:

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML version available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

Michael Prokop: Book Review: The Docker Book

24 July, 2014 - 04:16

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and thanks to being on holidays I already had a few hours to read it AND blog about it. (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, jey.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
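
For readers who haven't seen the trick, a tiny hypothetical Dockerfile shows the idea: bumping the ENV value invalidates Docker's build cache for every instruction that follows it, forcing those steps to re-run on the next build:

FROM debian:wheezy
# Change this value whenever the steps below should be re-executed
# instead of being taken from the build cache.
ENV REFRESHED_AT 2014-07-24
RUN apt-get update && apt-get install -y nginx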

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend to have network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, being another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

Tanguy Ortolo: GNU/Linux graphic sessions: suspending your computer

23 July, 2014 - 20:45

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically: only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

$ dbus-send --system --print-reply \
            --dest='org.freedesktop.UPower' \
            /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with
signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
                        /usr/sbin/pm-suspend-hybrid, \
                        /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
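
As an illustration, a minimal ~/.xbindkeysrc along these lines would do (assuming your keyboard has an XF86Sleep key; the bindings are only examples, adjust them to taste):

# suspend with the dedicated sleep key
"sudo pm-suspend"
    XF86Sleep

# hibernate with Mod4+F12 if there is no dedicated key
"sudo pm-hibernate"
    Mod4 + F12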

Francesca Ciceri: Adventures in Mozillaland #3

23 July, 2014 - 19:04

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put in comments basic information on how to better debug their component/product, but trust me: this will make you happy in the long run.
A wiki page with basic information on how debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs and what information to ask of the reporter to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

Steinar H. Gunderson: The sad state of Linux Wi-Fi

23 July, 2014 - 18:45

I've been using 802.11 on Linux now for over a decade, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care, and try to dig deeper, it always ends up in the answer “this is just crap”.

I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:

  • Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz—this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
  • Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello “iwl-agn-rs”, I'm looking at you.)
  • Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast. RFC4436 describes some of the tricks they're using, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
  • Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. “there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine”, which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.

With now a billion mobile devices running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.

Augh.

Andrew Pollock: [tech] Going solar

23 July, 2014 - 13:36

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is Energex has to come out to replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

Matthew Palmer: Per-repo update hooks with gitolite

23 July, 2014 - 12:45

Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured using configuration stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.

In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself – something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”. Then for each repo you can pick which of those hooks to execute. It’s neat.

There’s one glaring problem, though – you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; they’re stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.

The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.

The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.