Planet Debian

Planet Debian - http://planet.debian.org/

Vasudev Kamath: Stop messing with my settings Network Manager

20 July, 2014 - 03:09

I use a laptop with an Atheros wifi card using the ath9k driver. I use hostapd to turn the laptop's wifi into an AP (access point) so I can share the network with my Nexus 7 and Kindle. This had been working fine for quite some time, until my recent update.

After a recent system update (I use Debian Sid), I could for some reason no longer convert my wifi into an AP for my devices to connect to. I couldn't find anything useful for troubleshooting the issue in the logs or in hostapd's debug messages. Every time I start the laptop, the wifi card is blocked by RF-KILL and I have to manually unblock it (both hard and soft). The script I use to convert my wifi into an AP is below:

#!/bin/bash
# Run as root. $1 = wifi interface for the AP, $2 = upstream interface to share.

# Initial wifi interface configuration
ifconfig "$1" up 192.168.2.1 netmask 255.255.255.0
sleep 2

# Start DHCP/DNS for the AP clients
systemctl restart dnsmasq.service

# Reset the firewall and NAT traffic from the AP out through "$2"
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables -t nat -A POSTROUTING -o "$2" -j MASQUERADE
iptables -A FORWARD -i "$1" -j ACCEPT

# Enable IPv4 forwarding
sysctl -w net.ipv4.ip_forward=1

# Start hostapd in the background
hostapd /etc/hostapd/hostapd.conf &> /dev/null &
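
For reference, the manual unblocking mentioned above is done with the rfkill utility:

rfkill list          # show which devices are soft- or hard-blocked
rfkill unblock all   # clears soft blocks; a hard block needs the hardware switch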

I tried rebooting the laptop, and for a while I managed to convert my wifi into an AP. I noticed at the same time that NetworkManager was not started once the laptop booted; yes, this also started happening after the recent upgrade, which I guess is the black magic of systemd. After some time I noticed the wifi had gone down, and now I couldn't bring it up because it was blocked by RF-KILL. After checking the syslog I noticed the following lines:

Jul 18 23:09:30 rudra kernel: [ 1754.891060] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): using nl80211 for WiFi device control
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): driver supports Access Point (AP) mode
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): new 802.11 WiFi device (driver: 'ath9k' ifindex: 10)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): exported as /org/freedesktop/NetworkManager/Devices/8
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): preparing device
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> devices added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> device added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0): no ifupdown configuration found.
Jul 18 23:09:33 rudra ModemManager[891]: <warn>  Couldn't find support for device at '/sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0': not supported by any plugin

Well, I couldn't figure out much, but it looks like NetworkManager came up, saw mon.wlan0 (a monitoring interface created by hostapd to monitor the AP), went mad, and tried to do something with it. I have no clue what it was doing and don't have enough patience to debug it. Probably some expert can give me hints on this.

So as a last resort I purged NetworkManager completely from the system, settled back on good old wicd, and rebooted. After the reboot the wifi card is happy and is not blocked by RF-KILL, and now I can convert it to an AP and use it as long as I wish without any problems. Wicd is not a great tool, but it's good enough to get the job done, and unlike NetworkManager it does only what it is asked to do.

So in short

NetworkManager please stop f***ing with my settings and stop acting oversmart.

Ian Donnelly: How-To: Write a Plug-in

19 July, 2014 - 03:49

Hi Everybody!

I wanted to write a how-to on writing an Elektra plug-in. Plug-ins are what allow Elektra to translate regular configuration files into the Elektra key database, and allow keys to be converted back into regular configuration files. For example, the hosts plug-in is used to transcribe a hosts file into a meaningful KeySet in the Elektra Key Database. This plug-in is what allows the kdb tool to work with hosts files, as in our mount tutorial.
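
As a taste, mounting a hosts file with the hosts plugin looks roughly like this (a sketch; see the mount tutorial for the exact syntax):

kdb mount /etc/hosts system/hosts hosts
kdb ls system/hosts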

While there are already a good number of plug-ins written for Elektra, we obviously don't cover all the different types of configuration files. If you want to be able to use the features of Elektra, you may have to write a plug-in for your type of configuration file. Luckily, it's not hard to do. Over the next few weeks I will be writing a tutorial on how to write your very own plug-in, as well as explaining all the components of an Elektra plug-in. I will link the tutorials below as I finish them for easy reference, or if you already follow my blog I am sure you will notice them as they get published.

Dominique Dumont: Looking for help to package Perl6, moar and others for Debian

19 July, 2014 - 01:32

Let's face reality: I cannot find the time to properly maintain the Perl6-related packages for Debian. Given the recent surge in rakudo's popularity, it would be a shame to let these packages rot.

Instead of throwing in the towel, I'd rather call for help to maintain these packages. You don't need to be a Debian Developer or Maintainer: I will gladly review and upload packages.

The following packages are looking for a maintainer:

  • rakudo (currently RC-buggy)
  • moar (needs to be packaged; some work has been done by Daniel Dehennin)
  • parrot (up to date)
  • nqp (needs to be updated; the current version no longer compiles on all architectures)

The next step to help Perl6 on Debian is to join:

All the best

Tagged: debian, package, Perl6

Osamu Aoki: Debian does not boot ...Crucial/Micron RealSSD m4/C400/P400

18 July, 2014 - 23:50

Today, my PC did not boot into Debian as usual. The BIOS could not find my /dev/sda and was looking for the netboot image. I restarted my PC and got into the BIOS boot setting menu. Hmmm... my first SSD (/dev/sda) was missing. My second HDD (/dev/sdb) was there, but I did not put the GRUB boot loader there. No wonder it did not boot.

I have a 32GB USB3 stick with a full Debian system. It is not a live-CD-image USB stick but an HDD-formatted, encrypted system. Though it is not the fastest system, it is very light and usable. I plugged it in and powered up. It booted OK, but /dev/sda was still missing. While it booted, I saw "ata1: COMRESET failed (errno=-16)". So this ata1 SSD could be accessed neither from the BIOS nor from Linux. Sigh ...

Looking around the web from the USB stick system, I saw some people saying that a loose Serial ATA cable sometimes causes this message. Since my PC is a laptop, it has no flexible cable, but it does have an on-board connector inside for the SSD.

Hoping my problem was just a bad connection, I cracked open the back panel of my PC. The SSD looked fine. I unplugged it from the connector and reinserted it. After repeating this several times to be sure, I closed the back panel and booted.

It booted into Debian as expected, and everything looks fine. The SMART error log is clean:
  SMART Error Log Version: 1
  No Errors Logged
Good.
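
For reference, that SMART error log comes from smartmontools; assuming the SSD is /dev/sda, something like this prints it:

  smartctl -l error /dev/sda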

If you have a boot problem like mine, please reseat your SSD in its connector like I did before you panic.

Good luck.

Osamu

PS: This Crucial/Micron RealSSD m4/C400/P400 M4-CT256M4SSD2 previously had a problem: a firmware bug made it read-only. The firmware update fixed my Debian system on this SSD. I could fix this without a Win*** OS, since the firmware update came as a bootable disk image file.

Mario Lang: Crowdsourced accessibility: Self-made digital menus

18 July, 2014 - 22:50

Something straight out of the real world: menu cards in restaurants are not nice to deal with if you are blind. It is an old problem we have grown used to ignoring over time, but still something that can be quite nagging.

There are a lot of psychological issues involved in this one. Of course, you can ask for the menu to be read out to you by the staff. While they usually do their best, you end up missing out on some things most of the time.

First of all, depending on the current workload in the restaurant, the staff will usually try to save some time and not read everything to you. What they usually do is try to understand what type of meal you are interested in, and just read the choices from that category to you. While this can be considered a service in some situations (human preprocessing), there are situations where you will definitely miss a highlight on the menu that you would have liked to choose if you had known it was there.

And even if the staff decides to read the complete menu to you (which is rare), you are confronted with the 7-things-in-my-head-at-once problem. It is usually rather hard to decide amongst a list of more than 7 items, because our short-term memory is rather limited. What sighted restaurant-goers do is skip back and forth between the available options until they hit a decisive moment. True, that can take a while, but it is definitely a lot easier if you can perform "random access reads" on the list of choices yourself. However, if someone presents a substantial number of choices to you in a row, as sequential speech, you lose the random-access ability. You either remember every choice from the beginning and do your choosing mentally (if you have extraordinary mental abilities), or you end up asking the staff to read previous items aloud again. This can work, but usually it doesn't. At some point, you do not want to bother the staff anymore, and you even start to feel stupid for asking again and again, while this is something totally normal for every sighted person; it's just that "they" do their "random access browsing" on their own, so "they" have no need to feel bad about how long it takes them to decide, minus the typical social pressure that arises after a certain time for everyone, at least if you are dining in a group.

In very rare cases, you happen to meet staff that is truly "awake", doing their best not to let you feel that they might be pressed for time, and really taking as much time as necessary to help you make the perfect decision. This is rare, but if it happens, it is almost a magical moment, one of those moments where there are no "artificial" barriers between humans communicating. Anyway, I am drifting away.

The perfect solution to this problem is to provide random-access browsing of a restaurant menu with the help of digital devices. Trying to make braille menus available in all restaurants is a goal which is not realistically reachable. Menus go out of date and need changing, and getting a physical braille copy updated and reprinted is considerably more involved than with digital media. Restaurant owners will also likely not see the benefit of providing a braille menu for a very small circle of customers. With a digital online menu, that is a completely different story.

These days, almost every blind person in at least my social circles owns an iOS (or similar) device. These devices have speech synthesis and web browsers.

Of course, some restaurants especially in urban areas do already have a menu online. I have found them manually with google and friends sometimes in the past, which has already given me the ability to actually sit back, and really comfortably choose amongst the available offerings myself, without having to bother a human, and without having to feel bad about (ab)using their time.

However, the case where a restaurant really has their menu online is rather rare still in the area where I am. And, it can be tedious to google for a restaurant website. Sometimes, the website itself is just marginally accessible, which makes it even more frustrating to get a relaxed dinner-experience.

I have discovered a location-based solution for the restaurant-menu problem recently. Foursquare offers the ability to provide a direct link to the menu in a restaurant entry. I figured, since all you need to do is write a single webpage where the (common) menu items are listed per restaurant, that I could begin to create restaurant menus for my favourite locations on my own. Well, not quite, but almost. I will sometimes need help from others to get the menu digitized, but that's just a one-time piece of work I can hopefully outsource :-). Once the actual content is in my INBOX, I create a nice HTML page listing the menu in a rather speech-based-browser-friendly way.
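
A hypothetical skeleton of such a page, with plain semantic HTML and one heading per category so that screen-reader users can jump between sections (the dishes are made up):

<h1>Menu - Example Restaurant</h1>
<h2>Starters</h2>
<ul>
  <li>Tomato soup - EUR 4.50</li>
</ul>
<h2>Main courses</h2>
<ul>
  <li>Wiener Schnitzel with potato salad - EUR 12.90</li>
</ul>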

I have begun to do this today, with the menu of a restaurant just about 500 meters away from my apartment. Unterm goldenen Dachl now has a menu online, and the foursquare change request to publish the corresponding URL is already pending. I don't fully understand how the Foursquare change review process works yet, but I hope the URL will be published in the upcoming days/weeks.

I am using Foursquare because it is the backend of a rather popular mobile navigation app for blind people, called BlindSquare. BlindSquare lets you comfortably use OpenStreetMap and Foursquare data to get an overview of your surroundings. If a food place has a menu entry listed in Foursquare, BlindSquare conveniently shows it to you and opens a browser if you tap it. So there is no need to actually search for the restaurant; you can just use the location-based search of BlindSquare to discover the restaurant entry and its menu link directly from within BlindSquare. Actually, you could even find a restaurant by accident and, with a little luck, find the menu for it by location, without even knowing what the restaurant is called. Isn't that neat? Yeah, that's how it is supposed to work; that's as much independence as you can get.

And it is, as the title suggests, crowdsourced accessibility. Because while it is nice if restaurant owners care to publish their menu themselves, if they haven't, you can do it yourself: either as a user of assistive technologies, to scratch your own itch, or as a friend of a person with a need for assistive technologies. Next time you go to lunch with your blind friend, consider making the menu available to them digitally in advance, instead of reading it. Other people will likely thank you for that, and you will have actually achieved something today. And if you happen to put a menu online, make sure to submit a change request to Foursquare. Many blind people are using BlindSquare these days, which makes it super easy for them to discover the menu.

Elena 'valhalla' Grandi: Reducing useless noise from irssi

18 July, 2014 - 20:24

Yesterday I missed a query from a friend (with the answer to a question *I* had asked in the first place) because it ended up in window 30-something and my statusbar was full of dim numbers from channels where people had just joined/left.

This morning I've set
activity_hide_level = JOINS PARTS QUITS
and my world is a much neater place :)

(I may have to add NICKS and possibly MODES, but they are rare enough and I'm still not sure I don't care about them, especially the latter.)
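
(For anyone else wanting to try this: the same change can be made from inside a running irssi, along the lines of

/set activity_hide_level JOINS PARTS QUITS
/save

which also writes it to the config file.)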

Jonathan McDowell: On the state of Free VoIP

18 July, 2014 - 06:00

Every now and then I decide I'll try and sort out my VoIP setup. And then I give up. Today I tried again. I really didn't think I was aiming that high. I thought I'd start by making my email address work as a SIP address. Seems reasonable, right? I threw in the extra constraints of wanting some security (so TLS, not UDP) and a soft client that would work on my laptop (I have a Grandstream hardphone and would like an Android client as well, but I figure those are the easy cases while the "I have my laptop and I want to remain connected" case is a bit trickier). I had a suitable Internet connected VM, access to control my DNS fully (so I can do SRV records) and time to read whatever HOWTOs required. And oh my ghod the state of the art is appalling.
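
For reference, the SRV records I mean look roughly like this hypothetical zone snippet, advertising SIP over TLS on port 5061 and plain TCP on 5060 (swap example.org for your own domain):

_sips._tcp.example.org. 3600 IN SRV 10 0 5061 sip.example.org.
_sip._tcp.example.org.  3600 IN SRV 20 0 5060 sip.example.org.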

Let's start with getting a SIP server up and running. I went with repro, which seemed to be a reasonably well-recommended SIP server to register against. Mostly, getting it up and running and registering against it is fine, until you try to make a TLS SIP call through it (to a sip5060.net test address). Problem the first: the StartCom free SSL certs are not suitable, because they don't advertise TLS Client. So I switched to CAcert. And then I got bitten by the whole question of whether the common name on the cert should be the server name or the domain name of the SIP address (it's the domain name of the SIP address, apparently, though that might make your SIP client complain).

That gets the SIP side working. Of course RTP is harder. repro looks like it's doing the right thing. The audio never happens. I capitulate at this point, and install Lumicall on my phone. That registers correctly and I can call the sip:test.time@sip5060.net test number and hear the time. So the server is functioning, it's the client that's a problem. I try the following (Debian/testing):

  • jitsi - Registers fine, seems to lack any sort of TURN/STUN support.
  • ekiga - No sign of TLS registration support.
  • twinkle - Not in testing. A recompile leads to no sign of an actual client starting up when executed.
  • sflphone - Fails to start (Debian bug #745695).
  • Empathy - Fails to connect. Doesn't show any useful debug.
  • linphone - No TLS connect (Debian bug #743494).

I'm bored at this point. Can I "dial" my debian.org SIP address from Lumicall? Of course not; I get a "Codecs incompatible" (SIP 488 Not Acceptable Here) response. I have no idea what that means. I seem to have all of the options on Lumicall enabled. Is it a NAT thing? A codec thing? Did I sacrifice the wrong colour of goat?

At some point during this process I get a Skype call from some friends, which I answer. Up comes a video call with them, their newborn, perfect audio, and no hassle. I have a conversation with them that doesn't involve me cursing technology at all. And then I go back to fighting with SIP.

Gunnar makes the comment about Skype creating a VoIP solution 10 years ago when none was to be found. I believe they're still the market leader. It just works. I'm running the Linux client, and they're maintaining it (a little behind the curve, but close enough), and it works for text chat, voice chat and video calls. I've spent half a day trying to get a Free equivalent working and failing. I need something that works behind NAT, because it's highly likely when I'm on the move that's going to be the case. I want something that lets my laptop be the client, because I don't want to rely on my mobile phone. I want my email address to also be my VoIP address. I want some security (hell, I'm not even insisting on SRTP, though I'd like to). And the state of the Open VoIP stack just continues to make me embarrassed.

I haven't given up yet, but I'd appreciate some pointers. And Skype, if you're hiring, drop me a line. ;)

Daniel Pocock: MH17 and the elephant in the room

18 July, 2014 - 03:09

Just last week, air passengers were told of intrusive new checks on their electronic devices when flying.

For years, passengers have also suffered bans on basic essentials like drinking water, and excesses like the patting down of babies that even Jimmy Savile would find offensive.

Of course, all this is being done for public safety.

So if western leaders claim the safety and security of their citizens is really their number one priority, just how is it that a passenger aircraft can be flying through a war zone where two other planes were shot down this month? When it comes to aviation security, this really is the elephant in the room. The MH17 tragedy today demonstrates that terror always finds a way. It is almost like the terrorists can have their cake and eat it too: they force "free" countries to give up their freedoms and public decency and then they still knock the occasional plane out of the sky anyway.

History in the making?

It is 100 years since the assassination of Austrian Archduke Franz Ferdinand started World War I, and just over 50 years since the Cuban missile crisis. Will this incident achieve similar notoriety in history? The downing of MH17 may well have been a "mistake", but the casualties are real and very tragic indeed. I've flown with Malaysia Airlines many times, including on the same route as MH17, and feel a lot of sympathy for the people who have been affected.

Tiago Bortoletto Vaz: HOPE X ical for schedule

18 July, 2014 - 01:05

As the Adirondack (the MTL-NYC train line) is not Internet-friendly, I can't use my ~11h of travelling to check this huge schedule via its RSS feed in the way I want to (= having a timetable view including room, description and speakers). HOPE X has just released a PDF and an XLS (wtf??), but these contain only titles and rooms.

So I've coded an ics generator to process their feed. The result file is available at http://acaia.ca/hopex.ics and should be up to date with the original RSS.

Craig Small: No more dspam, now what?

17 July, 2014 - 20:33

I was surprised at first to see that a long-standing bug in dspam had been fixed. Until, that is, I realised the mail was from the Debian FTP masters, and the reason the bug was closing was that dspam was being removed from the Debian archive.

Damn!

So, now what? What is a good replacement for dspam that is actually maintained? I don't need anti-virus, because mutt just ignores those sorts of things, and besides, youbankdetails.zip.exe doesn't run too well on Debian. dspam basically used tokens to find common patterns of spam and ham, with you bouncing misses so it learnt from its mistakes. I've already got postgrey running for greylisting, so what I really need is something that does the Bayesian filtering.

Some initial comments:

  • bogofilter looks interesting and seems the closest fit so far
  • cluebringer aka policyd seems to be a policy- and blocklist-type spam filter, not Bayesian
  • I’ve heard crm114 is good but hard to use
  • spamassassin – I used to use this; not sure why I stopped

There is really only me on the mail server, with a pretty light load, so there's no need to worry about efficiency. Not sure if it matters, but my MTA is postfix and I already use procmail for delivery.
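
If bogofilter wins, the procmail glue looks like it would be simple enough. An untested sketch, adapted from the bogofilter documentation:

# classify each message (and register its tokens), adding an X-Bogosity header
:0fw
| bogofilter -uep

# file anything marked as spam into a 'spam' folder
:0:
* ^X-Bogosity: Spam
spam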


Juliana Louback: Contribute a JSCommunicator Translation

17 July, 2014 - 16:06

To those who don’t know, JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging. We’ve recently added i18n (internationalization) support for English, Portuguese, Spanish, French and now Hebrew. We would like to have support for other languages, and it’s a pretty straightforward process to add a translation, so even if you do not have a tech background you can contribute!

Setup

First, create an account on Github. Github is a hosting service for git repositories, and open source projects get to use Github for FREE! As a result, many open source project repositories are hosted on Github, including but not limited to JSCommunicator.

Once you have created your git account, follow steps 1-4 in the section Setting up git listed in Github’s setup tutorial. This will help you install git and configure anything you need.

Fork

Now fork the JSCommunicator repo. I really like the word ‘fork’. Great term. If this is your first fork, know that this will make a copy of the JSCommunicator repo and all its code in your Github account. Here’s how we go about it:

  1. Sign in to Github

  2. Browse to https://github.com/JLouback/jscommunicator

  3. Somewhere around the upper right corner, you will see a button labeled ‘Fork’. Click that.

You should now have your own JSCommunicator repository on your Github account. On your main page select the tab ‘Repositories’. It should be the first on the list. The url will be https://github.com/YOUR-USERNAME/jscommunicator. My Github username is JLouback, so my jscommunicator repo can be found at https://github.com/JLouback/jscommunicator.

Clone

Now you should download all that code to your local machine so you can edit it. On your JSCommunicator page, somewhere around the lower right corner, you’ll see a field entitled ‘HTTPS clone URL’. I don’t know if this is correct, but you can also just copy the URL from your browser. I do that.

Now, on your terminal, navigate to the directory where you’d like to place your repository and enter

git clone https://github.com/YOUR-USERNAME/jscommunicator

The current directory should now have a folder named ‘jscommunicator’ and in it is all the JSCommunicator code.

Switch branches

The JSCommunicator repo has a branch for the i18n support. A branch is basically a copy of the code that you can modify without changing the original code. It’s great for adding new features: once everything is done and tested, you can merge the new feature into the original code. A more detailed explanation of branches can be found here.

On the terminal, enter

cd jscommunicator
git branch -a

This will list all the branches in the repository, both local and remote. There should be a branch named ‘remotes/origin/i18n-support’. Switch to this branch by entering

git checkout --track origin/i18n-support

This will create a local branch named i18n-support with all the contents of the remote i18n-support branch in https://github.com/opentelecoms-org/jscommunicator.

Create a .properties file

Now your jscommunicator directory will have some new files for the internationalization functionality. Among them you should find an INTERNATIONALIZATION_README file with instructions for adding a JSCommunicator translation. It’s a less detailed version of this post, but it’s worth a read in case I’ve missed something here.

Go to the directory ‘internationalization’. You’ll see a few .properties files with different language codes. Choose the language of your preference as the translation base and make a copy of it in the internationalization directory. Your copy should be named ‘Messages_LANGUAGECODE.properties’. For example, the language code for German is ‘de’, so the German file should be named ‘Messages_de.properties’. If you’re not sure about the code for your language, please check out this list of ISO 639-1 language codes so you’re using the same code as (most) browsers.

The .properties file is a list of key-value pairs: a word or a few words joined by ‘_’ (the key), the equals sign ‘=’, then a word or phrase (the value). Do not change anything to the left of the equals sign; translate only what is on the right of it.
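
For example, a couple of entries from a German file might look like this (the keys here are made up; use the ones from the base file you copied):

welcome_message=Willkommen
video_call=Videoanruf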

Modify available_languages.xml

Go back to the root directory (jscommunicator) and open the file named ‘available_languages.xml’. This XML file has a series of language elements. After the last ‘</language>’, add a new language element with your language’s display name and code, following the pattern of the previous language elements. You can actually put your language element anywhere, as long as it’s after the opening root tag and before its closing tag, and doesn’t cut into any of the other language elements. Your display name should be the name of the language in that language itself; for example, for a French translation we’ll put ‘Français’ (I think). And the code is the code we used for the .properties file. Here’s how it should look:

<language>
    <display>Deutsch</display>
    <code>de</code>
</language>

Save your changes and voila!

Test your work

Once you’ve made a .properties file, JSCommunicator will automatically load your translation if you’ve set your browser preference to that language. If your browser preference is French, it will load Messages_fr.properties. If your browser preference is a language we don’t have a translation for (let’s say German), JSCommunicator will load the default Messages.properties file, which is in English.

The available_languages.xml is used to build a language selection menu. If you add a language element without a corresponding .properties file, JSCommunicator will throw a JS error and load the default (English) translation. The same happens if you use different language codes in the .properties file name and in available_languages.xml. The error won’t disturb your use of JSCommunicator, it just won’t load the language you selected. No harm, no foul. But if you want others to use your translation, do take care to do this correctly.

If you read the INTERNATIONALIZATION_README, you will have seen that we need the jquery.i18n.properties.js file that’s included in the .html pages. You can download that code here. Be sure to place it in the jscommunicator directory. This is only necessary if you want to see your addition at work; you don’t need this file to contribute your translation. There are other 3rd-party code dependencies needed to run JSCommunicator too, as you may have noticed. Just creating the .properties file and adding a language element to available_languages.xml, if done correctly, is enough to make a pull request.

Commit

Here’s the part where we push all your local changes to your remote repository. In the terminal, navigate to the root directory (jscommunicator) and enter

git add internationalization/Messages_de.properties
git add available_languages.xml

Of course, please replace ‘Messages_de.properties’ with the name of your new .properties file. And mind you, we don’t have a german translation yet!

Then enter

git commit -m "German translation"

You can enter whatever you’d like in the “ “ part, it’s a good idea to explain what language you are adding. Now we can push the changes:

git push -u origin i18n-support

You should now see the changes you made on your Github account page at github.com/YOUR_USER/jscommunicator. The message you put in double quotations (“ ”) in the commit will be beside each modified file. This push also creates a new branch in your jscommunicator repository; now you will have a ‘master’ and an ‘i18n-support’ branch.

Pull

Now it’s time to share your translation with everyone else, if you feel so inclined. Your version of JSCommunicator has the new translation, but the official version doesn’t yet. First navigate to your jscommunicator repo at github.com/YOUR_USER/jscommunicator and make sure you are on the i18n-support branch, as that will contain your changes.

Click the green button directly to the left of the branch dropdown menu. This will create a pull request to add your new code to the official jscommunicator code. Once you click that button, you should see the pull request page.

Make sure that the base repo (the first on the line above the green ‘Create pull request’ button) is opentelecoms-org:i18n-support and not opentelecoms-org:master or another branch name. If it isn’t, just click ‘Edit’ on the right and you’ll be able to select the branch from a dropdown.

If the first repo is opentelecoms-org:i18n-support and the second is YOUR_USER:i18n-support, and you’ve verified (scroll down) that there are 2 files changed (your new .properties file and the added element in available_languages.xml), go ahead and press ‘Create pull request’. It’ll open a window with a text box where you can add an explanation, with one final ‘Create pull request’ button. Click that and you’re done! You’ve kindly contributed a translation to JSCommunicator!

Russ Allbery: wallet 1.1

17 July, 2014 - 08:16

Wallet is the secure credential management infrastructure that we use at Stanford, primarily for keytabs but increasingly for any sort of security keys that have to be stored somewhere and retrieved by specific systems or people.

The primary goal of this release is to add Duo support. This is currently somewhat preliminary, with only a single Duo integration object type that creates a UNIX integration. (Well, technically it can create any type of integration, but the integration information is returned in the format expected by the UNIX integration.) I expect a later release to rename all existing "duo" object types to "duo-unix" and add additional object types for the various other types of integrations that one wants to support, but that work will have to wait for another day.

Since it's been over a year since the previous release, there are also other accumulated bug fixes and improvements. I also tried to merge or address as many issues or patches that had been sent to me over the past year as I could, although many larger patches or improvements had to be deferred. Highlights:

  • The owner and getacl commands now return the name of the ACL instead of its numeric ID, as they probably should have from the beginning.

  • The date passed to expires can now be in any format Date::Parse supports. (On a related note, Date::Parse is now required.)

  • wallet-rekey now works properly on keytabs containing multiple principals. I had for some reason assumed that one could form a keytab containing multiple principals by just concatenating several together, but that definitely does not work. wallet-rekey now appends new keys to the end of the existing keytab. Unfortunately, I didn't get a chance to implement purging of old keys, for the folks stuck with MIT Kerberos ktutil instead of Heimdal's.

There are also multiple other bug fixes and general improvements, such as using DateTime objects uniformly for all database access that involves date fields, and recording ACL renames in the ACL history table. Both the API and the database layer are still kind of a mess, and I'd love to rewrite them with the benefit of experience and more knowledge, but that's a project for another day.

You can get the latest release from the wallet distribution page.

Steve Kemp: So what can I do for Debian?

17 July, 2014 - 04:49

So I recently announced my intention to rejoin the Debian project, having been a member between 2002 & 2011 (inclusive).

In the past I resigned mostly due to lack of time, and what has changed is that these days I have more free time - primarily because my wife works in accident & emergency and has "funny shifts". This means we spend many days and evenings together, then she might work 8pm-8am for three nights in a row, which then becomes Steve-time, and can involve lots of time browsing reddit, coding obsessively, and watching bad TV (currently watching "Lost Girl". Shades of Buffy/Blood Ties/similar. Not bad, but not great.)

My NM-progress can be tracked here, and once accepted I have a plan for my activities:

  • I will minimally audit every single package running upon any of my personal systems.
  • I will audit as many of the ITP-packages I can manage.
  • I may, or may not, actually package software.

I believe this will be useful, even though there will be limits - I've no patience for PHP and will just ignore it, along with its ecosystem, for example.

As progress today, I reported #754899 / CVE-2014-4978 against Rawstudio, and discussed some issues with ITP: tiptop (the program seems semi-expected to be installed setuid(0), but if it is, then it will allow arbitrary files to be truncated/overwritten via "tiptop -W /path/to/file").

(ObRandom: still waiting for a CVE identifier for #749846/TS-2867...)

And now sleep.

Matthias Klumpp: AppStream 0.7 specification and library released

16 July, 2014 - 23:03

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, any other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion percentage for each language. This allows software centers to apply smart filtering on applications to highlight the ones which are available in the user's native language.
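
In the distribution XML this looks roughly like the following sketch (percentages and locales are illustrative):

<languages>
  <lang percentage="96">de</lang>
  <lang percentage="100">en_GB</lang>
</languages>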

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.

The <provides/> tag gained a new dbus item-type to expose D-Bus interface names the component provides to the outside world. This means in future it will be possible to search for components providing a specific dbus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the cli tool)
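
The corresponding metadata entry is a sketch along these lines (the interface name is illustrative):

<provides>
  <dbus type="system">org.freedesktop.PackageKit</dbus>
</provides>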

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.

Timestamps in <release/> tags must now be UNIX epochs; YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).
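
A release entry therefore looks roughly like this (a sketch; the timestamp is an arbitrary example epoch):

<releases>
  <release version="0.7" timestamp="1405461600"/>
</releases>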

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package-tuning (which can become difficult if a <provides/> tag needs to be satisfied).

As a small sidenote: the multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive crosslinking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have “Quickstart pages” in the documentation, which are rich in examples and contain the most important subset of the information you need to write a good metadata file. These quickstart guides already exist for desktop applications and addons; more will follow in future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: we switched from LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 family of GPL licenses – I like it for its tivoization protection, its explicit compatibility with some important other licenses, and cosmetic details like entities not losing their right to use the software forever after a license violation. However, an LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be usable by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request of people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.

Appstream-Qt

A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that’s it with the changes for now! Thanks to everyone who helped make 0.7 ready, be it through feedback, contributions to the documentation, translations or code. You can get the release tarballs at Freedesktop. Have fun!

Dirk Eddelbuettel: Introducing RcppParallel: Getting R and C++ to work (some more) in parallel

16 July, 2014 - 22:56

A common theme over the last few decades was that we could afford to simply sit back and let computer (hardware) engineers take care of increases in computing speed thanks to Moore's law. That same line of thought now frequently points out that we are getting closer and closer to the physical limits of what Moore's law can do for us.

So the new best hope is (and has been) parallel processing. Even our smartphones have multiple cores, and most if not all retail PCs now possess two, four or more cores. Real computers, aka somewhat decent servers, can be had with 24, 32 or more cores as well, and all that is before we even consider GPU coprocessors or other upcoming changes.

And sometimes our tasks are embarrassingly parallel, as is the case with many data-parallel jobs: we can use higher-level operations such as those offered by the base R package parallel to spawn multiple processing tasks and gather the results. I covered all this in some detail in previous talks on High Performance Computing with R (and you can also consult the Task View on High Performance Computing with R which I edit).

But sometimes we can't use data-parallel approaches. Hence we have to redo our algorithms. Which is really hard. R itself has been relying on the (fairly mature) OpenMP standard for some of its operations. Luke Tierney's (awesome) keynote in May at our (sixth) R/Finance conference mentioned some of the issues related to OpenMP. One which matters is that OpenMP works really well on Linux, and either not so well (Windows) or not at all (OS X, due to the usual issue with the gcc/clang switch enforced by Apple; but the good news is that the OpenMP toolchain is expected to make it to OS X in some more performant form "soon"). R is still expected to make wider use of OpenMP in future versions.

Another tool which has been around for a few years, and which can be considered to be equally mature is the Intel Threaded Building Blocks library, or TBB. JJ recently started to wrap this up for use by R. The first approach resulted in a (now superseded, see below) package TBB. But hardware and OS issues bite once again, as the Intel TBB is not really building that well for the Windows toolchain used by R (and based on MinGW).

(And yes, there are two more options. But Boost Threads requires linking which precludes easy use as e.g. via our BH package. And C++11 with its threads library (based on Boost Threads) is not yet as widely available as R and Rcpp which means that it is not a real deployment option yet.)

Now, JJ, being as awesome as he is, went back to the drawing board and integrated a second threading toolkit: TinyThread++, a small header-only library without further dependencies. Not as feature-rich as Intel Threaded Building Blocks, but at least available everywhere. So a new package RcppParallel, so far only on GitHub, wraps around both TinyThread++ and Intel Threaded Building Blocks and offers a consistent interface available on all platforms used by R.
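
To give a flavour of the interface, here is a minimal sketch of the Worker / parallelFor pattern (in the style of the Gallery pieces, not JJ's actual code):

#include <Rcpp.h>
// [[Rcpp::depends(RcppParallel)]]
#include <RcppParallel.h>
#include <cmath>
#include <algorithm>

// A Worker that computes sqrt() over a slice of the input vector
struct SquareRoot : public RcppParallel::Worker {
  const RcppParallel::RVector<double> input;
  RcppParallel::RVector<double> output;

  SquareRoot(const Rcpp::NumericVector input, Rcpp::NumericVector output)
    : input(input), output(output) {}

  // operate on the slice [begin, end)
  void operator()(std::size_t begin, std::size_t end) {
    std::transform(input.begin() + begin, input.begin() + end,
                   output.begin() + begin, ::sqrt);
  }
};

// [[Rcpp::export]]
Rcpp::NumericVector parallelVectorSqrt(Rcpp::NumericVector x) {
  Rcpp::NumericVector output(x.size());
  SquareRoot worker(x, output);
  RcppParallel::parallelFor(0, x.size(), worker);  // splits the range across threads
  return output;
}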

Better still, JJ also authored several pieces demonstrating this new package for the Rcpp Gallery:

All four are interesting and demonstrate different aspects of parallel computing via RcppParallel. But the last article is key. Based on a question by Jim Bullard, and then written with Jim, it shows how a particular matrix distance metric (which is missing from R) can be implemented in a serial manner in both R, and also via Rcpp. The key implementation, however, uses both Rcpp and RcppParallel and thereby achieves a truly impressive speed gain as the gains from using compiled code (via Rcpp) and from using a parallel algorithm (via RcppParallel) are multiplicative! Between JJ's and my four-core machines the gain was between 200 and 300 fold---which is rather considerable. For kicks, I also used a much bigger machine at work which came in at an even larger speed gain (but gains become clearly sublinear as the number of cores increases; there are however some tuning parameters).

So these are exciting times. I am sure there will be lots more to come. For now, head over to the RcppParallel package and start playing. Further contributions to the Rcpp Gallery are not only welcome but strongly encouraged.

Vincent Sanders

16 July, 2014 - 19:04

It is no great secret that my colleagues at Collabora have been doing work with the Raspberry Pi Foundation.

My desk is very near Marco and I often see him working with the various Pi boards. Recently he obtained one of the new B+ units for testing and I thought it looked a little sad sat naked on his desk.

To remedy this bare-board problem I designed and laser-cut a case for him, and now that the B+ has been publicly released I can make the design freely available.

The design is completely original, though it is inspired by several other plastic "clip"-type designs I have seen. Originally I created and debugged the case design for my Parallella, though tweaking it for the Pi was pretty easy.

The design is under a CC attribution licence, and I ought to say that my employer is in no way responsible for this; it's all my own fault.

Wouter Verhelst: Reprepro for RPM

16 July, 2014 - 18:43

Dear lazyweb,

reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do all that reprepro does by hand—by calling things like cp and dpkg-scanpackages and gpg and other things by hand—but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.
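
(For comparison, the Debian side needs little more than a conf/distributions file; a minimal sketch:

Origin: example.org
Label: Example repository
Codename: wheezy
Architectures: amd64 source
Components: main
SignWith: yes

followed by a call like "reprepro includedeb wheezy foo.deb".)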

At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my above point, apparently I forgot to do a few things there, meaning some of the RPM repositories didn't actually work correctly, and my testing didn't catch it.

Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, running createrepo manually isn't too much of a problem. When it gets beyond one's own systems, however, and when you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.

So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?

Please don't say "custom scripts"

Juliana Louback: Become an Open-source Contributor Video Conference

16 July, 2014 - 18:06

A couple weeks ago I was contacted by Yehuda Korotkin through one of the Debian mailing lists. Yehuda is a tech professor at one of Israel’s leading colleges for women. He proposed a video conference to present the open source community to his class and explain how they can contribute to open source projects. I volunteered to participate and last Monday (July 14th 2014) we had our virtual meeting, which I hope was the first of many.

Shauna G. also participated from Boston, making it an Israel - USA - Brazil meetup. Pretty impressive, eh? The conference was hosted on Google Hangouts; the video is available for viewing here. Run time is around 2.5 hours, so to save you time I’ve kindly posted a summary of the conference! To be frank, I look ghastly half the time so I’d much prefer that you read this post.

First there was a Q&A period which was more of a discussion panel, followed by a hands-on session where the girls made their first contribution to an open source project. Here’s the gist of it:

Q&A

  • What is open source [software]? Software that allows free access to its source code (ergo ‘open’), permitting analysis and any modifications to the original code. All these permissions are contained in a license included in the software. Note that the term ‘open source’ is now applied to a lot of things other than software, and I assume there’s a different definition for those cases.

  • Who is the community behind open source software? Shauna pointed out that there are many kinds of projects and many kinds of communities backing them. As an example of a for-profit model, Shauna cited RedHat, which offers Linux (an open source OS) for free and profits from support services. Some projects are supported by NGOs (example: Center for Open Science) with a specific objective; a series of other projects are volunteer-based or are the result of a hobby project. The contributors vary accordingly; they are not only programmers but also people with non-technical backgrounds. In sum, there’s a lot of variety in the open source community as a whole, and the details of structure vary case to case.

  • Why is it important that women get involved in technology? Men and women are different for a series of reasons, among them our biological composition, the influences of society, and life experiences. Technology is for everyone, male or female, and we need to be able to design products that will suit everyone well. If the development team is composed of only one sex, it’s likely that factors will be overlooked or ignored, and the result is not a democratic experience. Shauna gave some examples of this, such as voice-powered location search software that could easily identify stereotypically ‘male’ keywords but not many of interest to women.

    In addition to this, it’s known that there is a severe workforce deficiency in the tech industry. One proposed solution is to encourage more women to follow a career in tech. The argument is based on the assumption that any boy who is remotely interested in tech has a high likelihood of studying tech, but not every girl. Often girls are not encouraged to study STEM-related topics, and in many places tech is considered inappropriate for women. I’ve read that tech companies’ engineering teams are 12% female and 88% male. That number is consistent with the stats of female college grads in STEM majors. I think boys who love tech will keep getting into tech, but there are a lot of girls who could be great engineers if they only gave it a shot. Women should be encouraged to pursue a tech career not only so they’ll get to work in such an incredible field, but also so they can help close the gap between supply and demand in the tech workforce today.

  • Why is open source important? We don’t always get everything we want/need from off-the-shelf software. The great thing about open source is that you are free to tweak the code at will and add whatever features you’d like. Sometimes there are even more important necessities, such as security requirements. With open source you don’t need to just take the manufacturer’s word that the code is bullet-proof; open it and check it out yourself. Another factor that increases the value of open source is its stability. As Linus’s Law (coined by Eric Raymond) says, “given enough eyeballs, all bugs are shallow”. With so many developers opening, studying and testing the code, bugs are quickly identified and removed. The fact that open source software is ‘free’ doesn’t reflect negatively on the quality; to the contrary, plenty of open source software is the best that’s out there.

  • Why is it important to contribute to open source? To start, it’s not written anywhere that you have to contribute. That said, it sure is nice if you do. I think there are two main reasons why people contribute. One is they want to ‘give back’ to the open source community. The other is they want to add some missing detail, feature or entire product. Of course, usually if you need something, someone else does as well, so by supplying your own need you end up helping many people along the way. But one thing that I think applies especially to students is that it’s a great way to gain experience. You have to fit in some practical learning along with all that theory, and one can only be so enthusiastic about the CS projects done in school that are usually discarded at the end of the semester. By contributing to an open source project, you are doing something that’s real and that people will actually use. You also have access to an incredible network of mentors who are real pros and more than willing to help you out. Maybe I’ve just been lucky, but they are really nice people who won’t snicker if you ask a stupid question. Well, maybe they do. But I haven’t seen it happen and I’m the queen of stupid questions. Many times this is their pet project, so they’re thrilled to have your contribution. Another great thing is that you can choose what you want to work with. It’s easy to find a tech internship out there; what’s not easy is finding an interesting internship. You might work with something you dislike or (gasp) end up just bringing people coffee. But we do that stuff cause we need the work experience. With open source you have a huge array of projects you can work on; one of them is certain to suit your interests.

  • How to become a contributor? When using open source software, you might notice things you’d like to change, or actual bugs. Let the development team know about it, and be sure to also let them know if you think you can figure out the fix. And if that hasn’t happened yet, try exploring Github’s open source repositories and check out the ‘Issues’ option (upper right corner). There’s usually a list of bugs or wishlist features that you just might be the man/woman for.

Hands on

For the practical part of our conference, Yehuda’s students contributed to JSCommunicator’s Issue #5, which requests i18n support for major languages. JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging.

First we set up git on the students’ local machines and forked the JSCommunicator repo. We learned the basic git commands needed to push changes to their remote repositories and to create a pull request. This is explained in further detail here.

At the end of the conference with nearly an hour overtime due to my tendency to be overly verbose, Yehuda’s class made their first contribution with a Hebrew translation for Jscommunicator!

All in all, it was great to virtually meet Yehuda and his epic students, who all seem to be very clever engineers. I’m looking forward to our next meetup, once I gain some time-management skills.

Raphaël Hertzog: Spotify migrates 5000 servers from Debian to Ubuntu

16 July, 2014 - 16:07

Or yet another reason why it’s really important that we succeed with Debian LTS. Last year we heard of Dreamhost switching to Ubuntu because they can maintain a stable Ubuntu release for longer than a Debian stable release (and this despite the fact that Ubuntu only supports software in its main section, which misses a lot of popular software).

A few days ago, we just learned that Spotify took a similar decision:

A while back we decided to move onto Ubuntu for our backend server deployment. The main reasons for this was a predictable release cycle and long term support by upstream (this decision was made before the announcement that the Debian project commits to long term support as well.) With the release of the Ubuntu 14.04 LTS we are now in the process of migrating our ~5000 servers to that distribution.

This is just a supplementary proof that we have to provide long term support for Debian releases if we want to stay relevant in big deployments.

But the task is daunting and it’s difficult to find volunteers to do the job. That’s why I believe that our best answer is to get companies to contribute financially to Debian LTS.

We managed to convince a handful of companies already, and July is the first month where paid contributors have joined the effort, for a modest participation of 21 work hours (watch out for Thorsten Alteholz and Holger Levsen on debian-lts and debian-lts-announce). But we need to multiply this figure by at least 5 or 6 to do a proper job of maintaining Debian 6.

So grab the subscription form and have a chat with your management. It’s time to convince your company to join the initiative. Don’t hesitate to get in touch if you have questions or if you prefer that I contact a representative of your company. Thank you!


Jon Dowland: Mac

16 July, 2014 - 15:18

My job exposes me to a large variety of computing systems and I regularly use Mac, Windows and Linux desktops. My main desktop environment at home and work has been Debian GNU/Linux for over 10 years. However, every now and then I take a little "holiday" and use something else for a few weeks. Often I'm spurred on by some niggle or other on the GNOME desktop, or burn-out with whatever the contentious issue of the moment is in Debian. Usually I switched to Windows and used it as an excuse to play some computer games.

Last November I had just such an excuse to take a holiday but this time I opted to go for Mac. I had a back-log of Mac issues to investigate at work anyway.

I haven't looked back.

It appears I have switched for good. I've been meaning to write about this for some time, but I couldn't quite get the words right. I doubted I could express my frustrations in a constructive, helpful way; even if I think that my experiences are useful and my discoveries valuable, perhaps I would put them across in a way that seemed inciteful rather than insightful. I wasn't sure anyone cared. Certainly the GNOME community doesn't seem interested in feedback.

It turns out that one person who doesn't care is me: I didn't realise just how broken the F/OSS desktop is. The straw that broke the camel's back was the file manager replacing type-ahead find with a search, but (to seamlessly switch metaphor) it turns out I'd been cut a thousand times already. I'm not just on the other side of the fence; I'm several fields away.

Sometimes community people write about their concerns with whether they're going in the right direction, or how to tell the difference between legitimate complaints, trolls and whiners. When I look at conferences now, the sea of Thinkpads was replaced with a sea of Apple Macs a long time ago now, and the Thinkpads haven't come back. I'd suggest: don't worry about the whiners. Worry about the leavers.

What does this mean for my Debian involvement? Well, you can't help but have noticed that I've done very little this year. I've written nearly exclusively about music so far. The good news is: I still regularly use Debian, and I still intend to stay involved, just not on the desktop. I'm essentially only maintaining two packages now, lhasa and squishyball. I might pick up a few more (possibly archivemail if the situation doesn't improve), but I'm happy with a low package load; I'd like to make sure the ones I do maintain are maintained well. The sum of all my Debian efforts this year has been to get these two (or three) ship-shape. I have a bunch of other things I'd like to achieve in Debian which are not packages, and a larger package load would just distract from them. (We really are too package-oriented in Debian.)


Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.