Planet Debian

Planet Debian - http://planet.debian.org/

Cyril Brulebois: Mark a mail as read across maildirs

12 August, 2014 - 02:20
Problem

Discussions are sometimes started by mailing a few different mailing lists so that all relevant parties have a chance to be aware of a new topic. It’s all nice when people can agree on a single venue to send their replies to, but that doesn’t happen every time.

Case in point, I’m getting 5 copies of a bunch of mails, through the following debian-* lists: accessibility, boot, cd, devel, project.

Needless to say, reading or marking a given mail as read once per maildir rapidly becomes a burden.

Solution

I know some people use a duplicate killer at procmail time (hello gregor) but I’d rather keep all mails in their relevant maildirs.

So here’s mark-read-everywhere.pl which seems to do the job just fine for my particular setup: all maildirs below ~/mails/* with the usual cur, new, tmp subdirectories.

Basically, given a mail piped from mutt, compute a hash on various headers, look at all new mails (new subdirectories), and mark the matching ones as read (move to the nearby cur subdirectories, and change suffix from , to ,S).
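
The script itself isn't reproduced here, but the idea is simple enough to sketch. The following is a rough Python equivalent of the approach described above, not Cyril's Perl script: it matches on Message-ID rather than a hash over several headers, and assumes maildirs under ~/mails/*.

#!/usr/bin/env python3
# Rough sketch only: match on Message-ID instead of a header hash,
# and assume all maildirs live under ~/mails/*.
import email
import sys
from pathlib import Path

def message_id(fp):
    return email.message_from_binary_file(fp).get("Message-ID")

wanted = message_id(sys.stdin.buffer)
if wanted is None:
    sys.exit(1)

for new_mail in Path.home().glob("mails/*/new/*"):
    with open(new_mail, "rb") as fp:
        if message_id(fp) != wanted:
            continue
    # Mark as read: move from new/ to cur/ and add the maildir "seen" flag.
    new_mail.rename(new_mail.parent.parent / "cur" / (new_mail.name + ":2,S"))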

Mutt key binding (where X is short for cross post):

macro index X "<pipe-message>~/bin/mark-as-read-everywhere.pl<enter>"

This isn’t pretty or bulletproof but it already started saving time!

Now to wonder: was it worth the time to automate that?

Cyril Brulebois: How to serve Perl source files

12 August, 2014 - 01:45

I noticed a while ago that a Perl script file included on my blog wasn't served properly: the charset wasn't announced, so web browsers didn't display it correctly. The received file was still valid UTF-8 (hello, little © character), at least!

First, wrong intuition

Reading Apache's /etc/apache2/conf.d/charset, it looks like the following directive might help:

AddDefaultCharset UTF-8

but comments there suggest reading the documentation! And indeed that alone isn’t sufficient since this would only affect text/plain and text/html. The above directive would have to be combined with something like this in /etc/apache2/mods-enabled/mime.conf:

AddType text/plain .pl

Real solution

To avoid any side effects on other file types, the easiest way forward seems to be to avoid setting AddDefaultCharset and to associate the UTF-8 charset with .pl files instead, keeping the text/x-perl MIME type, with this single directive (again in /etc/apache2/mods-enabled/mime.conf):

AddCharset UTF-8 .pl

Looking at response headers (wget -d) we’re moving from:

Content-Type: text/x-perl

to:

Content-Type: text/x-perl; charset=utf-8

Conclusion

Nothing really interesting, or new. Just a small reminder that tweaking options too hastily is sometimes a bad idea. In other news, another Perl script is coming up soon. :)

Juliana Louback: JSCommunicator - Setup and Architecture

11 August, 2014 - 23:06

Preface

During Google Summer of Code 2014, I got to work on the Debian WebRTC portal under the mentorship of Daniel Pocock. Now I had every intention of meticulously documenting my progress at each step of development in this blog, but I was a bit late in getting the blog up and running. I’ll now be publishing a series of posts to recap all I’ve done during GSoC. Better late than never.

Intro

JSCommunicator is a SIP communication tool developed in HTML and JavaScript. The code was designed to make integrating JSCommunicator with a website or web app as simple as possible. It’s quite easy, really. However, I do think a more detailed explanation on how to set things up and a guide to the JSCommunicator architecture could be of use, particularly for those wanting to modify the code in any way.

Setup

To begin, please fork the JSCommunicator repo.

If you are new to git, feel free to follow the steps in the "Setup" and "Clone" sections of this post.

If you read the README file (which you always should), you'll see that JSCommunicator needs a SIP proxy that supports SIP over WebSocket transport. Some options are Kamailio and repro.

I didn't find a tutorial for Kamailio setup, but I did find one for repro setup. And as a bonus, here is a great tutorial on how to set up and configure your SIP proxy AND your TURN server.

In your project’s root directory, you’ll see a file named config-sample.js. Make a copy of that file named config.js. The config-sample.js file has comments that are very instructive. In sum, the only thing you have to modify is the turn_servers property and the websocket property. In my case, debrtc.org was the domain registered for testing my project, so my config file has:

turn_servers: [
	{ server:"turn:debrtc.org" }     
],

Note that unlike the sample, I didn’t set a username and password so SIP credentials will be used.

Now fill in the websocket property – here we use sip5060.net.

websocket: {
    servers: 'wss://ws.sip5060.net',
    connection_recovery_min_interval: 2,
    connection_recovery_max_interval: 30,
  },

I'm pretty sure you can set the connection_recovery properties to whatever you like. Everything else is optional. If you set the user property, specifically display_name and uri, that will be used to fill in the Login form and takes precedence over any 'Remember me' data. If you also set sip_auth_password, JSCommunicator will automatically log in.

All the other properties are for other optional functionalities and are well explained.

You'll need some third-party JavaScript that is not included in the JSCommunicator git repo, namely jQuery version 1.4 or higher and ArbiterJS version 1.0. Download jQuery here and ArbiterJS here and place the .js files in your project's root directory. Do make sure that you are including the correct filename in your html file. For example, in phone-dev.shtml, a file named jquery.js is included. The file you downloaded will likely have version numbers in its file name, so rename the downloaded file or change the content of src in your includes. This is uber trivial, but I've made the mistake several times.

You’ll also need JsSIP.js which can be downloaded here. Same naming care must be taken as is the case for jQuery and ArbiterJS. The recently added Instant Messaging component and some of the new features need jQuery-UI - so far version 1.11.* is known to work. From the downloaded .zip all you need is the jquery-ui-...js file and the jquery-ui-...css file, also to be placed in the project’s root directory. If you’ll be using the internationalization support you’ll also need jquery.i18n.properties.js.

To try out JSCommunicator, deploy the website locally by copying your project directory to the Apache document root directory (provided you are using Apache, which is a good idea). You'll likely have to restart your server before this works. Now, the demo .shtml pages only have a header with all the necessary includes, then a Server Side Include for the body of the page, with all the JSCommunicator html. The html content is in the file jscommunicator.inc. You can enable SSI on Apache, OR you can simply copy and paste the content of jscommunicator.inc into phone-dev.shtml. Start up Apache, open a browser and navigate to localhost/jscommunicator/phone-dev.shtml and you should see:

Actually, pending updates to JSCommunicator, you should see a brand new UI! But all the core stuff will be the same.

Architecture

Disclaimer: I'm explaining my view of the JSCommunicator architecture, which currently may not be 100% correct. But so far it's been enough for me to make my modifications and additions to the code, so it could be of use. Once I get a JSCommunicator Founding Father's stamp of approval, I'll be sure to confirm the accuracy.

To explain how the JSCommunicator code interacts, I'll describe the purpose of each code 'item', ignoring all the html and css, which will vary according to how you choose to use JSCommunicator. I'm also not going to explain jQuery, which is a dependency but not specific to WebRTC. The core JSCommunicator code is the following:

  • config.js
  • jssip-helper.js
  • parseuri.js
  • webrtc-check.js
  • JSCommUI.js
  • JSCommManager.js
  • make-release
  • init.js
  • Arbiter.js
  • JsSIP.js

Each of these files will be presented in what I hope is an intuitive order.

  • config.js - As expected, this file contains your custom configuration specifications, i.e. the servers being used to direct traffic, authentication credentials, and a series of properties to enable/disable optional functionalities in JSCommunicator.

The next three files could be considered ‘utils’:

  • jssip-helper.js - This will load the data from config.js necessary to run JSCommunicator, such as configurations relating to servers, websockets, connection specifications (timeout, recovery intervals), and user credentials. Properties for optional features are ignored of course.

  • parseuri.js - Contains a function to split a URI into its component parts.

  • webrtc-check.js - Verifies whether the browser is WebRTC compatible by checking if it's possible to enable camera and microphone access.

These two are where the magic happens:

  • JSCommUI.js - Responsible for the UI component, controlling what becomes visible to the user, audio effects, client-side error handling, and gathering the data that will be fed to JSCommManager.js.

  • JSCommManager.js - Initializes the SIP User Agent to manage the session including (but not limited to) beginning/terminating the session, sending/receiving SIP messages and calls, and signaling the state of the session and other important notifications such as incoming calls, messages received, etc.

Now for some extras:

  • make-release - Combines the main .js files into one big file. It takes jssip-helper.js, parseuri.js, webrtc-check.js, JSCommUI.js and JSCommManager.js and spits out a new file, JSComm.js, with all that content. Now you understand why phone-dev.shtml includes each of the 5 files mentioned above whereas phone.shtml includes only JSComm.js, which didn't exist until you ran make-release. That confused me a bit.

  • init.js - On page load, calls JSCommManager.js’ init function, which eventually calls JSCommUI.js’ init function. In other words, starts up the JSCommunicator app. I guess it’s only used to show how to start up the app. This could be done directly on the page you’re embedding JSCommunicator from. So maybe not entirely needed.

Third party code:

  • Arbiter.js - JavaScript implementation of the publish/subscribe pattern, written by Matt Kruse. In JSCommunicator it's used in JSCommManager.js to publish signals that direct the app's behavior. For example, JSCommManager will publish a message indicating that the user received a call from a certain URI. In event-demo.js we subscribe to this kind of message, and when said message is received, an action can be performed such as adding to the app's call history. Very cool.

  • JsSIP.js - Implements the SIP WebSocket transport in JavaScript. This ensures the transport of data adheres to the WebSocket protocol. In JSCommManager.js we initialize a SIP User Agent based on the implementation in JsSIP.js. The User Agent will 'translate' all of the JSCommunicator actions into SIP WebSocket format. For example, when sending an IM, the JSCommunicator app will collect the essential data, such as the origin URI, destination URI and an IM message, while the User Agent is in charge of formatting the data so that it can be transported in a SIP message unit. A SIP message contains a lot more information than just the sender, receiver and message. Of course, a lot of the info in a SIP message is irrelevant to the user, and in JSCommUI.js we filter through all that data and only display what the user needs to see.

Here’s a diagram of sorts to help you visualize how the code interacts:

In sum: 1 - JSCommUI.js handles what is displayed in the UI and feeds data to JSCommManager.js; 2 - JSCommManager.js actually does stuff, feeding data to be displayed to JSCommUI.js; 3 - JSCommManager.js calls functions from the three 'utils', parseuri.js, webrtc-check.js and jssip-helper.js, the last of which organizes the data from config.js; 4 - JSCommManager.js initializes a SIP User Agent based on the implementation in JsSIP.js.

When making any changes to JSCommunicator, you will likely only be working with JSCommUI.js and JSCommManager.js.


Sylvestre Ledru: clang 3.4, 3.5 and 3.6 are now coinstallable in Debian

11 August, 2014 - 13:47

Clang is finally co-installable on Debian: 3.4, 3.5 and the current trunk (snapshot) can be installed together.

So, just like gcc, the different versions can be invoked as clang-3.4, clang-3.5 or clang-3.6.

/usr/bin/clang, /usr/bin/clang++, /usr/bin/scan-build and /usr/bin/scan-view are now handled through the llvm-defaults package.

llvm-defaults is also now managing clang-check, clang-tblgen, c-index-test, clang-apply-replacements, clang-tidy, pp-trace and clang-query.

Changes are also available on llvm.org/apt/.
The next step will be to also manage llvm-defaults on llvm.org/apt to simplify the transition for people using these packages.

So, with:

# /etc/apt/sources.list
deb http://llvm.org/apt/unstable/ llvm-toolchain main
deb http://llvm.org/apt/unstable/ llvm-toolchain-3.4 main
deb http://llvm.org/apt/unstable/ llvm-toolchain-3.5 main
$ apt-get install clang-3.4 clang-3.5 clang-3.6

$ clang-3.4 --version
Debian clang version 3.4.2 (branches/release_34) (based on LLVM 3.4.2)
Target: x86_64-pc-linux-gnu
Thread model: posix


$ clang-3.5 --version
Debian clang version 3.5.0-+rc2-1~exp1 (tags/RELEASE_350/rc2) (based on LLVM 3.5.0)
Target: x86_64-pc-linux-gnu
Thread model: posix


$ clang-3.6 --version
Debian clang version 3.6.0-svn214990-1~exp1 (trunk) (based on LLVM 3.6.0)
Target: x86_64-pc-linux-gnu
Thread model: posix

Original post blogged on b2evolution.

Paul Tagliamonte: DebConf 14

11 August, 2014 - 09:06

I’ll be giving a short talk on Debian and Docker!

I’ll prepare some slides to give a brief talk about Debian and Docker, then open it up to have a normal session to talk over what Docker is and isn’t, and how we can use it in Debian better.

Hope to see y’all in Portland!

Ian Donnelly: dpkg Woes

11 August, 2014 - 08:01

Hi Everybody,

The original plan for my Google Summer of Code project involved creating a merge tool for Elektra, including a kdb merge command; that part is all finished and already integrated into the newest release of Elektra, 0.8.7. The next step was to patch dpkg to add an option for conffile merging. The basic overview of how this would work is:

  • Original versions of conffiles would be saved somewhere on the system to serve as a base file
  • The user's current version of the conffile would serve as ours
  • The package maintainer's version would be used as theirs
  • The result of a three-way merge would overwrite the user's current version and be saved as a new base
  • dpkg would need an option to perform a three-way merge with a hook that allowed any tool to be used for the task, not just Elektra (a rough sketch of such a hook follows below)
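
To make the hook idea concrete, here is a minimal sketch of what such a merge hook could do, assuming GNU diff3 is available; this is purely illustrative and is not the patch discussed above, nor Elektra's merge tool.

#!/usr/bin/env python3
# Illustrative conffile merge hook: ours = user's copy, theirs = maintainer's
# version, base = the version saved at the last install/upgrade.
import shutil
import subprocess
import sys

def merge_conffile(ours, base, theirs):
    # GNU diff3 -m prints a merged file; a non-zero exit status means conflicts.
    proc = subprocess.run(["diff3", "-m", ours, base, theirs],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return False  # conflict: fall back to dpkg's usual conffile prompt
    with open(ours, "w") as f:
        f.write(proc.stdout)      # overwrite the user's current version
    shutil.copyfile(ours, base)   # and save the result as the new base
    return True

if __name__ == "__main__":
    sys.exit(0 if merge_conffile(*sys.argv[1:4]) else 1)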

Obviously, all of these things require patching dpkg, although not in too large a way. Luckily, we found a previous attempt to patch dpkg with a three-way merge hook; however, it is a few years old and was never included in dpkg. Felix decided to update this patch to work with the new versions of dpkg and cleaned up a bit of redundancy; he then resubmitted it to see if the dpkg maintainers would be interested in such a patch, and to start a dialogue about including all the patches we need. Unfortunately there has been no response from the dpkg team, and many bugs on their tracker have been sitting unanswered for years. Additionally, dpkg's source can be quite complex and it would require a ton of effort for us to make all these patches with no guarantee of integration. As a result, we have decided to go another route (which I will be posting about soon).

I just wanted to update anybody interested on the progress of my Google Summer of Code Project by letting them know the experience with dpkg. We have come up with a new solution that is already looking great and you will hear a lot more about it this final week of Google Summer of Code.

Stay Tuned!
- Ian S. Donnelly

Matthew Garrett: Birthplace

11 August, 2014 - 07:44
For tedious reasons, I will at this stage point out that I was born in Galway, Ireland.


Ian Wienand: Finding out if you're a Rackspace instance

11 August, 2014 - 07:30

Different hosting providers do things slightly differently, so it's sometimes handy to be able to figure out where you are. Rackspace is based on Xen, and their provided images should have the xenstore-ls command available. xenstore-ls vm-data will give you handy provider and even region fields to let you know where you are.

function is_rackspace {
  # xenstore-ls is only present on Xen guests with the Xen tools installed.
  if [ ! -f /usr/bin/xenstore-ls ]; then
      return 1
  fi

  # On Rackspace instances the vm-data tree carries a provider field.
  /usr/bin/xenstore-ls vm-data | grep -q "Rackspace"
}

if is_rackspace; then
  echo "I am on Rackspace"
fi

Other reading about how this works:

Simon Josefsson: Wifi on S3 with Replicant

11 August, 2014 - 02:02

I'm using Replicant on my main phone. As I've written before, I didn't get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I've created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don't want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/

Cyril Brulebois: Why is my package blocked?

11 August, 2014 - 01:45

A bit of history: a while ago, udeb-producing packages were getting frozen on a regular basis whenever a d-i release was about to be cut. While I wasn't looking at the time, I can easily understand the reasons behind that: d-i is built upon many components, it takes some time to make sure it's basically in shape for a release, and it's very annoying when a regression sneaks in right before the installation images get built.

I took over d-i release maintenance in May 2012 and only a few uploads happened before the wheezy freeze. I was only discovering the job at the time, and I basically released whatever was in testing then. The freeze began right after that (end of June), so I started double checking things affecting d-i (in addition to or instead of the review performed by other release team members), and unblocking packages when changes seemed safe, or once they were tested.

A few uploads happened after the wheezy release, and there's already a Jessie Alpha 1 release. I was about to release Jessie Beta 1: after some fair bits of testing and a debian-installer upload, the only remaining bits were building installation images (hello Steve) and of course communication (mail announcement and website update).

Unfortunately a new upstream release reached testing in the meantime, breaking the installer in several ways. I'll give details below, of course not because I want to point fingers at the maintainer, but to illustrate the ramifications that a single package's migration to testing can induce.

  • parted 3.2-1 was uploaded on 2014-07-30 and migrated on 2014-08-05.

  • parted 3.2-2 fixed a regression reported in Ubuntu only (LP#1352252) which I also hit with images built locally after that migration.

  • I then built some images locally using fixed parted packages but then discovered that auto-lvm was still broken, which I reported in #757417.

  • After some investigation Colin confirmed some behavioral changes in this new parted release, which imply the need for an update of several other partman-* components: #757661, #757662, #757663, #757664, #757665, #757666.

  • Thankfully fixes have been added for all of those (except partman-zfs at the time of writing), but more testing is needed, before possibly urgenting those packages so that they get into testing as soon as possible.

Since I'd like to avoid such an experience in the future, I'll probably reintroduce the old method and freeze all udeb-producing packages during the next d-i releases.

So you know why it might happen. Your next question might be: “What to do when your package is getting caught in that net?”. In that case, please get in touch with both debian-release@ and debian-boot@ asking for an unblock. I’ll then review your particular package, and either let it migrate to testing, or delay it until after the release.

Russ Allbery: Orphaning some software packages

10 August, 2014 - 12:23

As mentioned in my previous message, I'm orphaning various software packages that I was maintaining as part of my job at Stanford, or that for some other reason I'm no longer using directly. My goal is to free up some time and space for new work projects at my new employer, for other hobbies, and to take better care of other projects that I'm not orphaning.

The following software packages are now orphaned, and marked as such on their web pages:

I'm also stepping down from Debian package maintenance for the OpenAFS and Shibboleth packages, and have already notified the relevant communities. For the Debian packages, and for the above software packages, I will continue to provide security support until someone else can take them over.

WebAuth is going to be in a state of transition as noted on its page:

My successor at Stanford will be continuing maintenance and development, but that person hasn't been hired yet, and it will take some time for them to ramp up and start making new releases (although there may be at least one interim release with work that I'm finishing now). It's therefore not strictly orphaned, but it's noted that way on my software pages until someone else at Stanford picks it up.

Development of the other packages that I maintain should continue as normal, with a small handful of exceptions. The following packages are currently in limbo, since I'm not sure if I'll have continued use for them:

I'm not very happy with the current design of either kadmin-remctl or wallet, so if I do continue to maintain them (and have time to work on them), I am likely to redesign them considerably.

For all of my packages, I've been adding clones of the repository to GitHub as an additional option besides my personal Git repository server. I'm of two minds about using (and locking myself into) more of the GitHub infrastructure, but repository copies on GitHub felt like it might be useful for anyone who wanted to fork or take over maintenance. I will be adding links to the GitHub repositories to the software packages for things that are in Git.

If you want to take over any of the orphaned software packages, feel free. When you're ready for the current software page to redirect to its new home, let me know.

Ian Donnelly: How-To: kdb merge

10 August, 2014 - 10:30

Hi Everybody,

As you may know, part of my Google Summer of Code project involved the creation of merge tools for Elektra. The one I am going to focus on today is kdb merge. The kdb tool allows users to access and perform functions on the Elektra Key Database from the command line. We added a new command to this very useful tool, the merge command. This command allows a user to perform a three-way merge of KeySets from the kdb tool.
The command to use this tool is:
kdb merge [options] ourpath theirpath basepath resultpath

The standard naming scheme for a three-way merge consists of ours, theirs, and base. Ours refers to the local copy of a file, theirs refers to a remote copy, and base refers to their common ancestor. This works very similarly for KeySets, especially ones that consist of mounted conffiles. For mounted conffiles, ours should be the user's copy, theirs would be the maintainer's copy, and base would be the conffile as it was during the last package upgrade or during the package install. If you are just trying to merge any two KeySets that derive from the same base, ours and theirs can be interchanged. In kdb merge, ourpath, theirpath, and basepath work just like ours, theirs, and base, except each one represents the root of a KeySet. Resultpath is pretty self-explanatory: it is just where you want the result of the merge to be saved.

As for the options, there are a few basic ones and one option, strategy, that is very important. The basic options are:

  • -H --help, which prints the help text
  • -i --interactive, which attempts the merge in an interactive way
  • -t --test, which tests the proposed merge and informs you about possible conflicts
  • -v --verbose, which runs the merge in verbose mode
  • -V --version, which prints info about the version

The other option, strategy, is:

  • -s --strategy, which is used to specify a strategy to use in case of a conflict

The current list of strategies is:

  • preserve: the merge will fail if a conflict is detected
  • ours: the merge will use our version during a conflict
  • theirs: the merge will use their version during a conflict
  • base: the merge will use the base version during a conflict

If no strategy is specified, the merge will default to the preserve strategy so as not to risk making the wrong decision. If any of the other strategies is specified, then when a conflict is detected, merge will use the Key specified by the strategy (ours, theirs, or base) for the resulting Key.
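
To illustrate the ours/theirs/base idea and the strategies above, here is a rough conceptual sketch in Python of how a per-key three-way merge could be resolved. This is only an illustration, not Elektra's actual implementation; KeySets are modelled as plain dictionaries.

# Conceptual sketch of a per-key three-way merge with the strategies above.
# This is NOT Elektra's implementation; KeySets are modelled as plain dicts.

class MergeConflict(Exception):
    pass

def merge_keysets(ours, theirs, base, strategy="preserve"):
    result = {}
    for key in set(ours) | set(theirs) | set(base):
        o, t, b = ours.get(key), theirs.get(key), base.get(key)
        if o == t:                 # both sides agree (including both deleted)
            value = o
        elif o == b:               # only "theirs" changed: take theirs
            value = t
        elif t == b:               # only "ours" changed: take ours
            value = o
        else:                      # both sides changed differently: conflict
            if strategy == "preserve":
                raise MergeConflict(f"conflict on key {key!r}")
            value = {"ours": o, "theirs": t, "base": b}[strategy]
        if value is not None:      # None means the key was deleted
            result[key] = value
    return result

# Example: only the conflicting key needs the strategy.
base_ks =   {"hosts/localhost": "127.0.0.1", "hosts/web": "10.0.0.1"}
ours_ks =   {"hosts/localhost": "127.0.0.1", "hosts/web": "10.0.0.2"}
theirs_ks = {"hosts/localhost": "127.0.0.1", "hosts/web": "10.0.0.3"}
print(merge_keysets(ours_ks, theirs_ks, base_ks, strategy="ours"))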

An example of using the kdb merge command:
kdb merge -s ours system/hosts/ours system/hosts/theirs system/hosts/base system/hosts/result

-Ian S. Donnelly

Russell Coker: Being Obviously Wrong About Autism

10 August, 2014 - 00:01

I'm watching a Louis Theroux documentary about Autism (here's the link to the BBC web site [1]). The main thing that strikes me so far (after watching 7.5 minutes of it) is the bad design of the DLC-Warren school for Autistic kids in New Jersey [2].

A significant portion of people on the Autism Spectrum have problems with noisy environments; whether most Autistic people have problems with noise depends on what degree of discomfort is considered a problem. But I think it's safest to assume that the majority of kids on the Autism Spectrum will behave better in a quiet environment. So any environment that is noisy will cause more difficult behavior in most Autistic kids, and the kids who don't have problems with the noise will have problems with the way the other kids act. Any environment that is more prone to noise pollution than is strictly necessary is hostile to most people on the Autism Spectrum and all groups of Autistic people.

The school that is featured in the start of the documentary is obviously wrong in this regard. For starters I haven’t seen any carpet anywhere. Carpeted floors are slightly more expensive than lino but the cost isn’t significant in terms of the cost of running a special school (such schools are expensive by private-school standards). But carpet makes a significant difference to ambient noise.

Most of the footage from that school included obvious echos even though they had an opportunity to film when there was the least disruption – presumably noise pollution would be a lot worse when a class finished.

It’s not difficult to install carpet in all indoor areas in a school. It’s also not difficult to install rubber floors in all outdoor areas in a school (it seems that most schools are doing this already in play areas for safety reasons). For a small amount of money spent on installing and maintaining noise absorbing floor surfaces the school could achieve better educational results. The next step would be to install noise absorbing ceiling tiles and wallpaper, that might be a little more expensive to install but it would be cheap to maintain.

I think that the hallways in a school for Autistic kids should be as quiet as the lobby of a 5 star hotel. I don’t believe that there is any technical difficulty in achieving that goal, making a school look as good as an expensive hotel would be expensive but giving it the same acoustic properties wouldn’t be difficult or expensive.

How do people even manage to be so wrong about such things? Do they never seek any advice from any adult on the Autism Spectrum about how to run their school? Do they avoid doing any of the most basic Google searches for how to create a good environment for Autistic people? Do they just not care at all and create an environment that looks good to NTs? If they are just trying to impress NTs then why don’t they have enough pride to care that people like me will know how bad they are? These aren’t just rhetorical questions, I’d like to know what’s wrong with those people that makes them do their jobs in such an amazingly bad way.


Steve Kemp: Rebooting the CMS

9 August, 2014 - 16:59

I run a cluster for the Debian Administration website, and the code is starting to show its age. Unfortunately the code is not so modern, and has evolved a lot of baggage.

Given the relatively clean separation between the logical components I'm interested in trying something new. In brief the current codebase allows:

  • Posting of articles, blog-entries, and polls.
  • The manipulation of the same.
  • User-account management.

It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.

If we have a JSON endpoint that will allow:

  • GET /article/32
  • POST /article/ [create]
  • GET /articles/offset/number [get the most recent]

Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
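
As a rough sketch of what one of these mini-servers could look like, here is a minimal article service using only the Python standard library. It is purely illustrative: the routes mirror the list above, but the storage, port and field names are assumptions, and this is not the code described in the next paragraph.

# Minimal sketch of an internal article service (illustrative only).
import json
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

ARTICLES = {32: {"id": 32, "title": "Hello", "body": "Example article."}}

class ArticleHandler(BaseHTTPRequestHandler):
    def _send_json(self, code, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        # GET /article/<id> returns one article as JSON.
        m = re.fullmatch(r"/article/(\d+)", self.path)
        if m and int(m.group(1)) in ARTICLES:
            self._send_json(200, ARTICLES[int(m.group(1))])
        else:
            self._send_json(404, {"error": "not found"})

    def do_POST(self):
        # POST /article/ creates a new article from the JSON request body.
        if self.path == "/article/":
            length = int(self.headers.get("Content-Length", 0))
            article = json.loads(self.rfile.read(length))
            article["id"] = max(ARTICLES, default=0) + 1
            ARTICLES[article["id"]] = article
            self._send_json(201, article)
        else:
            self._send_json(404, {"error": "not found"})

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ArticleHandler).serve_forever()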

At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.

There is a tiny presentation server written to use those back-end servers and it seems like an approach that might work. Of course deployment might be a pain..

It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.

Russ Allbery: A new challenge

9 August, 2014 - 12:01

Those reading this journal may have noticed that my rate of posting has dropped a bit in the past few years, and quite a lot in the past year. One of the major reasons for this was work, which had been getting more bureaucratic, more stressful, less trusting, and more fearful. After this got drastically worse in the past six months, I finally decided enough was enough and took advantage of a good opportunity to do something different.

I will be joining Dropbox's site reliability engineering team in a week and a half (which means that I'll be working on their servers, not on the product itself). It will take a few months to settle in, but hopefully this will mean a significant improvement to my stress levels and a lot of interesting projects to work on.

I'm taking advantage of this change to inventory the various things I'm currently committed to and let go of some projects to make more space in my life. There are also a variety of software projects that I was maintaining as part of my job at Stanford, and I will be orphaning many of those packages. I'll make another journal post about that a bit later.

For Debian folks, I am going to be at Debconf, and hope to meet many of you there. (It's going to sort of be my break between jobs.) In the long run, I'm hoping this move will let me increase my Debian involvement.

In the long run, I expect most of my free software work, my reviews, and the various services I run to continue as before, or even improve as my stress drops. But I've been at Stanford for a very long time, so this is quite the leap into the unknown, and it's going to take a while before I'm sure what new pattern my life will fall into.

Clint Adams: The politically-correct term is a juvenile cricket

9 August, 2014 - 04:16

Normally I'm disgusted by fangirling of jwz, but it seems that he finally wrote something I like.

Daniel Pocock: Help needed reviewing Ganglia GSoC changes

9 August, 2014 - 04:14

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R or just an enthusiastic C or Python developer, you may be able to use and provide feedback for the students while benefitting from the cool new features they have been working on.

  • Chandrika Parimoo (Python, Nagios and some Syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Jan Wagner: Monitoring Plugins Debian packages

9 August, 2014 - 04:03

You may wonder why the good old nagios-plugins are not up to date in Debian unstable and testing.

Since the people behind and maintaining the plugins (<= 1.5) were forced to rename the software project to Monitoring Plugins, some work behind the scenes and much QA work was necessary to release the software in a proper state. This happened 4 weeks ago with the release of version 2.0 of the Monitoring Plugins.

With one day of delay the package was uploaded into unstable, but hit the Debian NEW queue due to the changed package name(s). Now we (and maybe you) are waiting to get them reviewed by ftp-master. This will hopefully happen before the jessie freeze.

Until that happens, you may grab packages for wheezy from the 'wheezy-backports' suite at ftp.cyconet.org/debian/ or the 'debmon-wheezy' suite at debmon.org. Feedback is much appreciated.

Richard Hartmann: RFC 7194

9 August, 2014 - 02:42

On a positive note, RFC 7194 has been published.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.