Planet Debian

Planet Debian - http://planet.debian.org/

Gunnar Wolf: Activities facing the next round of Trans-Pacific Partnership negotiations ( #yaratpp #tpp #internetesnuestra )

1 May, 2013 - 05:31

Excuse me for the rush and lack of organization... But this kind of thing doesn't always allow for proper planning. So, please bear with my chaos ;-)

What is the Trans-Pacific Partnership?

Yet another secretly negotiated international agreement that, among many chapters, aims at pushing a free-market-based economy, as defined by a very select few — Most important to me, and to many of my readers: It includes important chapters on intellectual property and online rights.

Hundreds of thousands of us around the world took part in different ways in the (online and "meat-space") demonstrations against the SOPA/PIPA laws back in February 2012. We knew back then that a similar project would attempt to bite us back: Well, here it is. Only this time, it's not only covering copyright, patents, trademark, reverse engineering, etc. — TPP is basically a large-scale free trade agreement on steroids. The issue that we care about now is just one of its aspects. Thus, it's far less likely we can get a full stop on TPP as we did for SOPA. But we have to get it on the minds of as many people as possible!

Learn more with this infographic distributed by the EFF.

Which countries?

The countries currently part of TPP are Chile, Peru, New Zealand, Australia, Malaysia, Brunei, Singapore, Vietnam — And, of course, the USA.

Mexico, Canada and Japan are in the process of joining the partnership. A group of Mexican senators is travelling to Lima to take part in this round.

What are we doing about it?

As much as possible!

I tried to tune in with Peru's much more organized call — The next round of negotiations will be in Lima, Peru, between May 14 and 24. Their activities are wildly more organized than ours: They are planning a weekend-long Camping for Internet freedom, with 28 hours' worth of activities.

As for us, our activities will be far more limited, but I still hope to have an interesting session:

This Friday, we will meet at the Aula Magna, Facultad de Ingeniería, UNAM, México DF, from 10 AM until 3 PM. We do not have a fixed speakers program, as the organization was quite rushed. I have invited several people who I know will be interesting to hear, and I expect a good part of the discussion to be a round table. I expect we will:

  1. Introduce people working on different corners of this topic
  2. Explain in some more detail what TPP is about
  3. Come up with actions we can take to influence Mexico's joining of TPP
  4. And, since this will be at the Facultad de Ingeniería, bring the topic closer to the students, which is of course another explicit goal of this session!
We want you!

So... I am posting this message also as a plea for help. Do you think you can participate here? Were you among the local organizers of the anti-SOPA movement? Do you have some insight on TPP you can share? Do you have some gear to film and encode the talks? (They will surely be interesting!) Or is the topic just interesting to you? Well, please come and join us!

Some more informative links

BE THERE!

So, again: Friday, 2013-05-03, 10:00-15:00

Attachments:
  • Poster. Design by Gacela — Thanks! (457.49 KB)
  • Infographic about TPP distributed by the EFF (392.45 KB)
  • "Here, let me sign this for you". Image by Colin Beardon. (27.08 KB)

Leo 'costela' Antunes: Deprecation of $(ARCH)-geomirror.debian.net

1 May, 2013 - 04:04

After the announcement of http.debian.net some months back I imagined the few people using my older $(ARCH)-geomirror.debian.net DNS redirector would quickly jump ship to the newer solution, it being superior in basically every aspect. However, it seems I had greatly underestimated the usage of my little hack. According to the server logs there is still a considerable number of genuine-looking queries being made (around 600 unique IPs in the last 3 days), and even if a sizable fraction of them are generated by bots, this still leaves a pretty big number of potential users out there.

So I guess it’s only common courtesy to let these potential users know in a slightly more public place that I plan on pulling the plug by the end of the year. If you’re one of the people making use of the service, please migrate to http.debian.net.
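For apt users the switch is a one-line change in /etc/apt/sources.list; a hedged example for a wheezy system (the old geomirror line is a made-up instance of the $(ARCH) scheme):

# old: deb http://amd64-geomirror.debian.net/debian wheezy main
deb http://http.debian.net/debian wheezy main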

Note however that this has nothing to do with cdn.debian.net, besides being based on a similar idea.

Steve Kemp: After you've started it seems like a bad idea?

30 April, 2013 - 23:03

To recap: given the absence of other credible alternatives I had two options:

  • Re-hack mutt to give me a sidebar that will show only folders containing new messages.
  • Look at writing a "simple mail client". Haha. Ha. Hah.

I think there is room for a new console client, because mutt is showing its age and does feel like it should have a real extension language - be it guile, lisp, javascript(!), Lua, or something else.

So I distilled what I thought I wanted into three sections:

  • mode-ful. There would be a "folder-browsing mode", a "message-browsing mode" and a "read-a-single-message" mode.
  • There would be scripting. Real scripting. I chose Lua.
  • You give it ~/Maildir as the configuration. Nothing else. If the damn computer cannot find your mailboxes something is wrong.

So how did I do? I wrote an ncurses-based client which has Lua baked into it. You can fully explore the sidebar-mode, which lets you select multiple folders.

From there you can view the messages in a list.

What you can't do is anything "real":

  • Update a message's flags: new -> read, etc.
  • GPG-validation.
  • MIME-handling.
  • Attachment viewing.

For a two-day hack it is remarkably robust, and the scripting support already shows its promise. Consider this:

--
-- show all folders in the Maildir-list.
--
function all()
   -- ensure that the sidebar displays all folders
   sidebar_mode = "all";
   -- we're going to be in "maildir browsing mode"
   cmail_mode = "sidebar";
   reset_sidebar();
   refresh_screen();
end

--
-- Test code, show that the pattern-searching works.
--
-- To use this press ":" to enter the prompt, then enter "livejournal".
--
-- OR press "l" when in the sidebar-mode.
--
function livejournal()
   sidebar_pattern = "/.livejournal.2";
   sidebar_mode = "pattern";
   reset_sidebar();
   refresh_screen();
end

--
-- There is a different table for each mode.
--
keymap = {}
keymap['sidebar'] = {}
keymap['index']   = {}
keymap['message'] = {}

--
-- In the sidebar-mode "b" toggles the sidebar <-> index.
--
-- ":" invokes the evaluator.
-- "q" quits the browser and goes to the index-mode.
-- "Q" quits the program entirely.
--
keymap['sidebar'][':'] = "prompt-eval"
keymap['sidebar']['b'] = "toggle"
keymap['sidebar']['q'] = "toggle"
keymap['sidebar']['Q'] = "exit"

-- show all/unread/livejournal folders
keymap['sidebar']['a'] = "all"
keymap['sidebar']['u'] = "unread"
keymap['sidebar']['l'] = "livejournal"

Neat, huh? See the cmail.lua file on github for more details.

My decision hasn't really progressed any further, though I can see that if this client were complete I'd love to use it. It's just that the remaining parts are the fiddly ones.

I guess I'll re-hack mutt, and keep this on the back-burner.

The code is ropey in places, but it is up on github should you wish to view it.

And damn C is kicking my ass.

Wouter Verhelst: Linux 3.9

30 April, 2013 - 04:57

... has been released yesterday, apparently. This wouldn't be very special, except that it carries a 'patch' by yours truly. It isn't earthshattering, but hey, I can run 'git log' and find myself, now, in a released kernel.

If that isn't nice.

Wouter Verhelst: New in wheezy: NBD named exports and installer support

30 April, 2013 - 03:57

Just after the release of squeeze, I released nbd 2.9.17, which had a new feature that required a backwards-incompatible change: the ability to specify an export by name, rather than by port number. Obviously, that means wheezy will be the first release to ship with support for such named exports (although a backport with that support was uploaded to squeeze-backports). After all, names are a much more obvious way to specify an export than a meaningless number. The init scripts and root-on-NBD support were updated, although a bugfix was denied for r0 (it will hopefully get into r1).
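For illustration, a named export looks roughly like this on the server side (a sketch; the export name, path and host are made up):

# /etc/nbd-server/config
[generic]
    # global server options go here
[webroot]
    exportname = /srv/nbd/webroot.img

# and on the client, a name instead of a meaningless number:
nbd-client -N webroot my.server.example /dev/nbd0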

In addition, during the wheezy cycle I finally finished the partman-nbd support in the installer. With this, it is possible to install Debian to an NBD device on diskless systems, which is nice.

Bdale Garbee: Removing LiPo Protection Boards

30 April, 2013 - 01:00

In my post about Batteries and Pyro Circuits, one of my suggestions was to remove the protection circuit board from LiPo cells used with Altus Metrum flight computers. To help out folks who want to do this themselves, I put together and posted a how-to with photos in the Documents section of our web site.

Thomas Goirand: Jenkins: building debian packages after a “git push” (my 2cents of a howto)

29 April, 2013 - 22:45

What follows is written in the hope that it will be helpful to my fellow DDs.

Why use “build after push”?

Simple answer: to save time, to always use a clean build environment, to automate more tests.

Real answer: because you are lazy, and tired of always having to type these build commands, and because watching the IRC channel is more fun than watching the build process.

Other less important answers: building takes some CPU time, and makes your computer run slower for other tasks. It is really nice that building doesn’t consume CPU cycles on your workstation/laptop, and that a server does the work, not disturbing you while you are working. It is also super nice that it can maintain a Debian repository for you after a successful build, available for everyone to use and test, which would be harder to achieve on your work machine (which may be behind a router doing NAT, or even sometimes turned off, etc.). It’s also kind of fun to have an IRC robot telling everyone when a build is successful, so that you don’t have to tell them; they can see it and start testing your work.

Install a SID box that can build with cowbuilder

  • Setup a SID machine / server.
  • Install a build environment with git-buildpackage, pbuilder and cowbuilder (apt-get install all of these).
  • Initialize your cowbuilder with: cowbuilder --create.
  • Make sure that, outside of your chroot, you can run ./debian/rules clean for all of your packages, because that will be called before moving into the cowbuilder chroot. This means you have to install all the build-dependencies involved in the clean process of your packages outside the base.cow of cowbuilder as well. In my case, this means “apt-get install openstack-pkg-tools python-setuptools python3-setuptools debhelper po-debconf python-setuptools-git”. This part is the most boring one, but remember you can solve these problems as you see them (no need to worry too much until you see a build error).
  • Edit /etc/git-buildpackage/gbp.conf, and make sure that under [DEFAULT] you have a line showing builder=git-pbuilder, so that cowbuilder is used by default in the system when using git-buildpackage (and therefore, by Jenkins as well).
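To recap the list above in concrete commands and configuration (a sketch; the clean-dependencies for your own packages will differ):

# install the build tools and create the base chroot
apt-get install git-buildpackage pbuilder cowbuilder
cowbuilder --create

# /etc/git-buildpackage/gbp.conf
[DEFAULT]
builder = git-pbuilder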

Install Jenkins

WARNING: Before really installing, you should probably read what’s below (e.g. Securing Jenkins).

Simply apt-get install jenkins from experimental (the SID version has some security issues, and has been removed from Wheezy at the request of the maintainer).

Normally, after installing jenkins, you can access it through:

http://<ip-of-your-server>:8080/

There is no auth by default, so anyone will be able to access your Jenkins web GUI and start any script as the jenkins user (sic!).

Jenkins auth

Before doing anything else, you have to enable Jenkins auth; otherwise everything is accessible from the outside, meaning that, more or less, anyone browsing your Jenkins server is allowed to run any command. It might sound simple, but in fact Jenkins auth is tricky to activate, and it is very easy to get yourself locked out, with no working web access. So here are the steps:

1. Click on “Manage Jenkins”, then on “Configure system”.

2. Check the “enable security” checkbox.

3. Under “security realm”, select “Jenkins’s own user database” and leave “allow users to sign up” enabled. Important: leave “Anyone can do anything” for the moment (otherwise, you will lock yourself out).

4. At the bottom of the screen, click on the SAVE button.

5. On the top right, click to login / create an account. Create yourself an account, and stay logged in.

6. Once logged in, go back to “Manage Jenkins” -> “Configure system”, under security.

7. Switch to “Project-based matrix authorization strategy”. Under “User/group to add”, enter the new login you’ve just created, and click on “Add”.

8. Select absolutely all checkboxes for that user, so that you make yourself an administrator.

9. For the Anonymous user, under Job, check Read, Build and Workspace. Under “Overall”, select Read.

10. At the bottom of the screen, hit save again.

Now, anonymous (i.e. not logged-in) users should be able to see all projects, and to click on the “build now” button. Note that if you lock yourself out, the way to fix it is to turn off Jenkins, edit config.xml, remove the “useSecurity” element and everything in “authorizationStrategy” and “securityRealm”, then restart Jenkins. I had to do that multiple times until I got it right (as it isn’t really obvious that you have to leave Jenkins completely insecure while creating a new user).
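For the record, that recovery procedure boils down to something like this (a sketch; paths are the Debian defaults):

service jenkins stop
# in /var/lib/jenkins/config.xml, remove (or set to false):
#   <useSecurity>true</useSecurity>
# and delete the whole <authorizationStrategy> and <securityRealm> elements
service jenkins start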

Securing Jenkins: Proxy Jenkins through apache to use it over SSL

When doing a quick $search-engine search, you will see lots of tutorials on using apache as a proxy, which seems to be the standard way to run Jenkins. Add the following to /etc/apache2/sites-available/default-ssl:

ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
ProxyRequests Off

Then perform the following commands on the shell:

htpasswd -c /etc/apache2/jenkins_htpasswd <your-jenkins-username>
a2enmod proxy
a2enmod proxy_http
a2enmod ssl
a2ensite default-ssl
a2dissite default
apache2ctl restart
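Note that the htpasswd file created above is not used by the proxy directives by themselves; to have apache actually ask for the password, something like this would go into the same default-ssl vhost (a hedged sketch):

<Location />
    AuthType Basic
    AuthName "Jenkins"
    AuthUserFile /etc/apache2/jenkins_htpasswd
    Require valid-user
</Location>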

Then disable access to Jenkins’ port 8080 from the outside:

iptables -I INPUT -d <ip-of-your-server> -p tcp --dport 8080 -j REJECT

Of course, this doesn’t mean you shouldn’t take the steps to activate Jenkins’ own authentication, which is disabled by default (sic!).

Build a script to build packages in a cowbuilder

I thought it would be hard. In fact it was not. All together, this was kind of fun to hack. Yes, hack. What I did is yet another kind-of-10km-long ugly shell script. The way to use it is simply: build-openstack-pkg <package-name>. On my build server, I have put that script in /usr/bin, so that it is accessible from the default path. Ugly, but it does the job!

Jenkins build script for openstack

At the end of the script, scan_repo() generates the necessary files for a Debian repository to work under /home/ftp. I use pure-ftpd to serve it. /home/ftp must be owned by jenkins:jenkins so that the build script can copy packages into it.

This build script is by no means state of the art, and in fact it’s quite hack-ish (so I’m not particularly proud of it, but it does its job…). If I am showing it in this blog post, it is just to give an example of what can be done. It is left as an exercise to the reader to create another build script adapted to their own needs, and to write something cleaner and more modular.
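For the curious, the scan_repo() part does not need to be fancy; a rough sketch using apt-ftparchive (the flat repository layout under /home/ftp is an assumption):

scan_repo() {
    # regenerate the indices of a flat package repository under /home/ftp
    cd /home/ftp || return 1
    apt-ftparchive packages . > Packages
    gzip -9c Packages > Packages.gz
    apt-ftparchive release . > Release
}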

Dependency building

Let’s say that you are using the Built-Using: field, and that package B needs to be rebuilt if package A changes. Well, Jenkins can be configured to do that. Simply edit the configuration of project B (you will find it, it’s easy…).

My use case: In my case, for building Glance, Heat, Keystone, Nova, Quantum, Cinder and Ceilometer, which are all components of Openstack, I have written a small (about 500 lines) library of shell functions, and an also small (90 lines) Makefile, which are packaged in “openstack-pkg-tools” (so Nova, Glance, etc. all build-depend on openstack-pkg-tools). The shell functions are included in each package's maintainer scripts (debian/*.config and debian/*.postinst mainly) to avoid having some pre-depends that would break the debconf flow. The Makefile of openstack-pkg-tools is included in the debian/rules of each package.

In such a case, trying to manage the build process by hand is boring and time-consuming (spending your time watching the build of package A, so that you can manually start the build of package B, then wait again…). But it is also error-prone: it is easy to make a mistake in the build order, you can forget to dpkg -i the new version of package A, etc.

But that’s not all. Probably at some point, you will want Jenkins to rebuild everything. Well, that’s easy to do. Simply create a dummy project, and have the other projects build after that one. The build step can simply be echo “Dummy project” as a shell script (I’m not even sure that’s needed…).

Configuring git to start a build on each push

In Jenkins, hover your mouse over the “Build now” link to see its URL. We just need to wget that URL from a hook in your Alioth repository. A small script is better than a long explanation:

for i in `ls /git/openstack` ; do
    # the Jenkins job name is assumed to match the repository name
    echo "wget -q --no-check-certificate \
    https://<ip-of-your-server>/job/${i}/build?delay=0sec \
    -O /dev/null" >/git/openstack/${i}/hooks/post-receive \
        && chmod 0770 /git/openstack/${i}/hooks/post-receive;
done

The chmod 0770 is necessary if you don’t want every Alioth user to be able to read the hook, and with it the htpasswd password that you may have embedded in the URL (I’m not covering that, but it is fairly easy to add such protection). Note that all members of your Alioth group will still have access to this post-receive hook, containing the password of your htaccess, so you must trust everyone in your Alioth group not to do nasty things with your Jenkins.

Bonus point: IRC robot

If you would like to see the result of your build “published” on IRC, Jenkins can do that. Click on “Manage Jenkins”, then on “Manage Plugins”. Then click on “Available” and check the box in front of “IRC plugin”. Go to the bottom of the screen and click on “Add”. Then check the box to restart Jenkins automatically. Once it has restarted, go again to “Manage Jenkins”, then “Configure system”. Select “IRC Notification” and configure it to join the network and the channel you want. Click on “Advanced” to select the IRC nickname of your bot, and make sure you change the port (by default Jenkins has 194, while IRC normally uses 6667). Be patient when waiting for the IRC robot to connect / disconnect; this can take some time.

Now, for each Jenkins job, you can tick the “IRC Notification” option.

Doing piuparts after build

One nice thing about automated builds is that most of the time, you don’t need to sit staring at them. So you can add as many tests as you want; the Jenkins IRC robot will let you know the result of your build sooner or later anyway. So adding piuparts tests to the build script seems the correct thing to do. That is still on my todo list though, so maybe it will be the subject of my next blog post.
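For what it’s worth, the extra step at the end of the build script would look something like this (a sketch, since it is not implemented yet):

# after a successful build, test install/upgrade/removal in a clean chroot
# (piuparts needs root and manages its own chroot)
piuparts -d sid ../*.changes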

Guido Günther: Bits from the 6th Debian groupware meeting

29 April, 2013 - 22:25

The sixth Debian Groupware Meeting was held in the LinuxHotel, Essen, Germany. We had one remote hacker from NYC, which brings the number of attendees up to nine - an all-time high! This is a short summary of what happened during the weekend:

  • Sogo 2.0.5 got uploaded to unstable and work on OpenChange support is advancing.

  • D-push (the rebranded version of z-push) got split and restructured to better handle different backends. An update in experimental is pending.

  • Giraffe's git (the rebranded version of Zarafa) got updated to 7.1. There's some more work needed to get this into the archive though.

  • There's been progress on getting Kolab3 into Debian starting with libkolabxml 0.8.4 and libkolab 0.4.2 (also needed by kde-pim).

  • mozilla-devscripts now handles Iceowl as well, so packaging calendaring extensions like this one becomes even easier.

  • Icedove and Iceowl bugsquashing. New versions were uploaded to unstable and experimental. The bug count trends look quite promising but we really need help for #658664.

  • We did some davmail <-> iceowl/icedove interop testing.

  • The Groupware Page got updated

  • We had some discussions about calendar and addressbook detection and autoconfiguration. For CalDAV/CardDAV there's a specification available already, but there seems to be no client implementation in Debian, so we started to work on it.

  • We had a nice barbecue enjoying the first warm days.

Wouter Verhelst: New in Wheezy: PMW

29 April, 2013 - 21:47

One of the things I do with computers is "do stuff with music". I'm not a professional musician by any means, but I do sometimes have a need for some software to do some music editing.

In the past, that meant using GNU LilyPond; and while that's certainly an interesting piece of software, it has some idiosyncrasies that have made me dislike it in the past. So when I learned about PMW, written by Philip Hazel (of PCRE and Exim fame), I was intrigued.

PMW has several advantages over lilypond, in my opinion. To name but two: its syntax is less silly, and it takes far less time to convert something from source to graphic, to the extent that I've considered creating an editor which would update the result after every keystroke, something that just isn't possible with lilypond.

The decision to upload pmw into Debian was a no-brainer, and it has already saved me some time since. Enjoy!

Richard Hartmann: #newinwheezy

29 April, 2013 - 19:26

There's a #newinwheezy game which basically presents a few of the 4451 new source packages in Debian/wheezy to a wider audience.

My own entry is, obviously, vcsh.

Quoth the manpage

vcsh - manage config files in $HOME via fake bare git repositories

You can also consult the (somewhat outdated) readme or just clone it.
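If you want a feel for how it works, day-to-day usage looks roughly like this (a minimal sketch; the repository and file names are made up):

vcsh init dotfiles
vcsh dotfiles add ~/.bashrc
vcsh dotfiles commit -m 'track .bashrc'
vcsh dotfiles remote add origin git@example.com:dotfiles.git
vcsh dotfiles push -u origin master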

In related news, I have been asked to give my "Gitify your life" talk at LinuxTag 2013. This talk can be found here. While the text on that page is in German, all slides will be in English and I will happily use either language based on what the audience prefers.

Michael Prokop: The #newinwheezy game: new forensic packages in Debian/wheezy

29 April, 2013 - 17:55

Debian/wheezy includes a bunch of packages for people interested in digital forensics. The packages maintained within the Debian Forensics team that ship in a Debian stable release for the first time with the upcoming Debian/wheezy are:

  • dc3dd: patched version of GNU dd with forensic features (see the example after this list)
  • extundelete: utility to recover deleted files from ext3/ext4 partitions
  • rephrase: specialized passphrase recovery tool for GnuPG
  • rkhunter: rootkit, backdoor, sniffer and exploit scanner (see comments)
  • rsakeyfind: locates BER-encoded RSA private keys in memory images
  • undbx: tool to extract, recover and undelete e-mail messages from .dbx files
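As a small taste of these tools, imaging a drive with dc3dd while hashing it on the fly looks roughly like this (a hedged example; the device and file names are made up):

dc3dd if=/dev/sdb of=evidence.dd hash=sha256 log=evidence.log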

Join the #newinwheezy game and present packages which are new in Debian/wheezy.

Martín Ferrari: Setting up my server: netfilter

29 April, 2013 - 08:45

I was going to start this series by explaining how I did the remote set-up, but instead I will share something that happened today.

One of the first things you want to do when putting a server directly on the Internet is to set up some filtering. You don't want to have an application listening on the network by mistake, so a simple netfilter firewall is a good way to ensure you are only accepting connections on ports you explicitly allowed.

I have been a long-time user of ferm, a simple tool that will read a configuration file written in a special structured syntax, and generates iptables commands from it. I have used it successfully to build very complex firewalls in previous jobs, and it had the huge benefit of keeping your firewall description readable and easy to modify by other people.

This time I thought I might go with something simpler, as I only wanted a handful of very simple netfilter rules. I looked at Shorewall, and briefly browsed a few others. But in the end I decided against them: either I would need to learn the tool's concepts about different parts of the network, or they were more slanted towards command-line use, so the actual configuration ends up as some files in /var/lib, totally managed by the tool. With ferm, I just need to write a very small configuration file, which reads almost like iptables commands, and that's it.

In fact, the default configuration installed by the Debian package already did 90% of what I wanted: accept incoming SSH connections and ICMP packets, and reject everything else. I took the example IPv6 configuration from /usr/share/doc/ferm/examples/ipv6.ferm and in 10 minutes it was ready:

table filter {
    chain INPUT {
        policy DROP;
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;

        interface lo ACCEPT;
        proto icmp ACCEPT; 

        # allow IPsec
        proto udp dport 500 ACCEPT;
        proto (esp ah) ACCEPT;

        proto tcp dport ssh ACCEPT;
        proto tcp dport (http https) ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
    chain FORWARD policy DROP;
}

domain ip6 table filter {
    chain INPUT {
        policy DROP;
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;

        interface lo ACCEPT;
        proto ipv6-icmp ACCEPT;

        proto tcp dport ssh ACCEPT;
        proto tcp dport (http https) ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
    chain FORWARD policy DROP;
}

It is important to note that when doing this kind of thing on a remote machine, you want to make sure you don't get locked out by accident. My method is that before activating any dangerous change, I drop an at job to disable the firewall in a few minutes:

# echo /etc/init.d/ferm stop | at now +10min
warning: commands will be executed using /bin/sh
job 4 at Mon Apr 29 02:47:00 2013

And if everything goes well, I just remove the job:

# atrm 4

Update: As paravoid pointed out in the comments, ferm now (read: since many years ago, but I had never noticed) has a --interactive mode which will revert the changes if you get locked out, much like the screen resolution changing dialog in Gnome.

Another thing that you definitely want to do is to have some kind of protection against the almost constant influx of brute-force attacks against SSH. Apart from the obvious PermitRootLogin=no setting, there are a couple of popular methods to stop people probing random username/password combinations (I am assuming here that you actually have sensible passwords, or no passwords at all): running SSH on a non-standard port, and the great fail2ban daemon.

Since I don't like non-standard stuff, I installed fail2ban, which by default will inspect /var/log/auth.log for SSH login failures and insert netfilter rules to block the offenders.

Problem is, I don't much like how fail2ban inserts rules and chains into the very tidy netfilter configuration I had just created. So I added an "action" to do things my way: only create a service-related chain and insert rules there; I call that chain from my main ferm.conf, as sketched below. Ferm runs early in the boot sequence, so this won't be a problem during normal operation. The only caveat is that after changing the configuration in ferm, I need to restart fail2ban so it will recreate the netfilter chains and rules, which were wiped by ferm.
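The hook in ferm.conf then boils down to declaring the entry chain and jumping to it from INPUT; roughly like this (an untested sketch; the exact ferm syntax for declaring the empty fail2ban chain may need adjusting):

table filter {
    # created empty here; fail2ban inserts its rules into it at run-time
    chain fail2ban;
    chain INPUT {
        # jump early, before the ACCEPT rules shown earlier
        jump fail2ban;
    }
}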

This is my fail2ban configuration; note that I am ignoring the port and protocol: the whole IP is blocked for a few minutes.

# cat /etc/fail2ban/jail.local 
[DEFAULT]
action = iptables-fixed[name=%(__name__)s]

# cat /etc/fail2ban/action.d/iptables-fixed.conf
[Definition]
actionstart = iptables -N fail2ban-<name>
              iptables -I fail2ban -j fail2ban-<name>
actionstop = iptables -D fail2ban -j fail2ban-<name>
             iptables -F fail2ban-<name>
             iptables -X fail2ban-<name>
actioncheck = iptables -n -L | grep -q fail2ban-<name>
actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP

[Init]
name = default

Martín Ferrari: Moving my stuff away from home

29 April, 2013 - 07:50

TL;DR version: I want to get rid of the small server running at home; here I tell you about the service I've chosen, and why I like it. In following posts, I'll explain how I set it up remotely.

Disclaimer: I am in no way affiliated with the companies I mention here (except for Picasa, as I am a Google employee), and don't get any bonuses for this post. I am only sharing this because I think it might be useful information for other people.

Being a frequent migrant means possessions are a burden. In my previous place of residence (in France), I originally intended to only stay for 6 months, and so I arrived with just a couple of suitcases, and in the end that was enough for me to live for almost 2 years.

The last time, on the other hand, I was removing my stuff completely from Argentina. I emptied my house, gave away some stuff, sent some boxes to my parents' place, and carried the rest with me. That was a lot of stuff, but since the company was paying for the relocation, it was not much of a problem.

Later I realised my mistake, and knowing that my time in Ireland is limited, I started to try and get rid of stuff I don't need. I know I will just sell or give away much of my stuff when I finally leave, but there are some things that are not so easy to part with. The main one being my home server, which hosts this website, my VCS repositories, pictures, and many other things I need to have on the net.

This all used to be located in a home-made PC tucked in a data centre, co-located by a friendly company. But that computer died almost 2 years ago, and so canterville became abhean, and my stuff started being hosted on my ADSL connection. It worked well for some time, but now I realised I had to revert that change.

With this in mind I set off to find a cheap place to host my stuff. I had a few requirements:

  • The total cost has to be cheap enough, for some value of cheap.
  • It needs to have enough local storage to be able to host my photos, as I don't want to host them in Picasa or Flickr; Facebook is totally out of the question.
  • The data transfer limits should not be too low, as I will be performing periodic back-ups of all that data.

I don't have that many photos, nor are they too big, but these requirements made it clear that most VPS offerings were not going to work for me. For some reason I fail to understand, local storage in VPS offerings is usually prohibitively expensive. This is OK for most use cases, but not for mine.

A friend of mine, with a similar use case, is a happy VPS customer. He told me his trick: he only hosts in the server low-quality versions of the pictures, and keeps the originals (and back-ups) at home. This was a great idea, but with two fatal flaws: I want to only carry around a laptop and one or two external hard drives; and I want to have back-ups that are not physically with me.

I was starting to think about hosting my files in Amazon S3 or something like that, since most dedicated servers are way too expensive. But then I heard about two French companies offering dirt-cheap servers: OVH and Online.net.

Both of them offered small servers for about 12€ a month, cheaper than most VPS offerings! Online seems to mainly cater to the French market, and for some silly reason, they charge a 50€ set-up fee to customers outside of France. OVH, on the other hand, has many local branches, including an Irish one, so I went with them.

The offering is a low-cost line called Kimsufi, and the smallest one is still very decent for a personal server:

  • 64-bit Atom 230 processor at 1.6 GHz (no VT).
  • 2 GB RAM.
  • 500 GB hard drive.
  • Bandwidth guaranteed at 100 Mbps up to 5 TB of monthly traffic, 10 Mbps afterwards.
  • One IPv4 address, and a /64 IPv6 block (yay, working IPv6!)

Once I had paid the fee for one month, it took a while for the server to be activated (their payment system is pretty bad), but it was finally enabled about 24 hours later.

Then the real fun started. On one hand, I was happy to see a wide selection of operating systems to choose from, including Debian stable and testing, and a web console with many functionalities, including some basic monitoring; but on the other hand, I realised that the installed image was not pristine, the online docs are not very good, and the web application is a bit buggy and really awkward to navigate.

Having sub-par docs is not something I would usually care much about, but it made it a bit more difficult for me to understand some of the very cool functionalities their system offers (more on that in a bit), and more importantly, it made it clear that I shouldn't trust their image: the procedures detailed there were not exactly best practices, and they allow themselves to log in as root into my server.

I want to describe here what I think are their most interesting features, which made it possible for me to do risky operations, like encrypting the root partition and setting up a firewall, and to fix problems that would usually require physical access.

These are found in their web console: a hardware reset, and configurable netboot support with many offered images, including a rescue image based on Ubuntu and one that serves as a virtual KVM. (It is surprising that these servers don't have a serial console, but at least the kernel does not detect any.)

With these in hand, I didn't have to fear being locked out of my server forever. Just set up a netboot image and hard-reboot the machine! It also made it very simple to install my system from scratch with debootstrap.

The virtual KVM is a very interesting trick. It is a netboot image that runs some tests and fires up a web server. You get an email with the URL and a password to access it, and then you open a page that offers you what is basically a Qemu connected to a VNC server, which will boot from your real hard drive.

It is super slow, but it gives you console access to your server, which can be very handy to debug booting problems, unless the issue lies in the real hardware. It also offers the possibility of downloading an ISO image off the network and booting that, so it can be used to run a stock installer CD too.

In another post I'll describe how I reinstalled my server remotely, and some of the pitfalls that I've encountered in the process.

Steinar H. Gunderson: Precise cache miss monitoring with perf

29 April, 2013 - 04:57

This should have been obvious, but seemingly it's not (perf is amazingly undocumented, and has this huge lex/yacc grammar for its command-line parsing), so here goes:

If you want precise cache miss data from perf (where “precise” means using PEBS, so that it gets attributed to the actual load and not some random instruction a few cycles later), you cannot use “cache-misses:pp” since “cache-misses” on Intel maps to some event that's not PEBS-capable. Instead, you'll have to use “perf record -e r10cb:pp”. The trick is, apparently, that “perf list” very much suggests that what you want is rcb10 and not r10cb, but that's not the way it's really encoded.

FWIW, this is LLC misses, so it's really things that go to either another socket (less likely), or to DRAM (more likely). You can change the 10 to something else (see “perf list”) if you want e.g. L2 hits.
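Putting it together, a complete session looks something like this (the workload is a placeholder, obviously):

perf record -e r10cb:pp ./your-workload
perf report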

Christian Perrier: [life] Running update: January-April...and more.

28 April, 2013 - 23:40
It seems that I haven't sent any update to my international friends for quite a while, at least when it comes to my running activities.

So, in short, I ran A LOT during the first months of 2013. I mean it. As of now (April 28th), I have accumulated 1424 kilometers, with a peak of 375 kilometers in March. That's an average of 12 kilometers per day.

How did I achieve this? Among many other things, by doing part of my commute to work by running, which means 14 kilometers in one day, with a backpack containing everything needed to dress "normally" while working, plus my laptop, my rain jacket, etc. ... so up to 4 kilograms on my shoulders. And, yes, I can use a shower at work and I don't stink all day long!

This alone already makes fairly good training. Of course, all alone it wouldn't be really fun, so I, of course, add some runs during the weekend, mostly trail running, enjoying the nature around our place.

Official races have mostly been trail races during these months. The only road race was a half-marathon in Bullion, back in February (the 4th year in a row I've run this one, which is traditionally the "resume road races" competition in the area). I completed it in 1h39, quite close to my PB, even if... it was meant to be training only.

In February, still as a preparation race for the Paris Ecotrail, I ran a 20km trail in Auffargis (a neat small village in the neighbourhood of our place), again completing it with great success, with a big negative split (for non-hard-core runners, a negative split happens when one runs the second half of a race faster than the first).

All this was in preparation for the Paris Ecotrail, my second time at this 80km race that ends at the Eiffel Tower. Last year, it being my first attempt at such a distance, I completed it in 11 hours 5 minutes. To make it short, this year I finished 579th out of more than 2000 runners, in 9h36. That was indeed a really great result, in line with my 8h15 time back in November for the 70km "Le Puy-Firminy" night race.

Moreover, I recovered very quickly: the race was run on a Saturday and I resumed my "commute runs" on Wednesday.

The next targets were two trail races in April: I originally planned a 44km race on April 21st and finally ended up adding a 35km trail race on April 7th (the day of the Paris Marathon), only 3 weeks after the Ecotrail..:-)

And I completed both of these with huge success. They were indeed my best trail races ever, again with two negative splits and also very good placings. Indeed, I now usually finish races close to the very first women...:-)

So, on April 7th, the Trail du Josas (35km, 800m positive climb, which means about the equivalent of a marathon) was completed in 3h40... and last Sunday, the Trail des Lavoirs (44km, 1100m positive climb) was completed in 4h40, with the last 2 kilometers run above 13km/h. Describing how one feels when "flying" in the very last kilometers of such a long run is just... impossible. Great, great, great memories.

Then, during the week following the Trail des Lavoirs, I ran 101km in 6 days, confirming that recovery was perfect.

So, definitely, I am stunned by what I have achieved during these months, without injury, without big pain. Just good training and good results, without suffering, and with giant pleasure.

Yes, running is definitely a drug and I'm deeply addicted. Well, the result is, in short, that I feel good and well, so I think I won't stop soon...:-)

Next challenge: the Mont-Blanc marathon, in Chamonix: 42.195km... and 2500 meters positive climb, with 1500 meters negative. Start in Chamonix at 1050m altitude and finish at Planpraz (2050m), facing Mont Blanc, with a maximum altitude of 2267m during the race. Quite an interesting "marathon", isn't it? It will be my first race in real mountains... and, I guess, not the last one. Target time: 6 hours. Secret wish: 5h30.

During the summer, I will mostly be preparing for the second part of the year... but I'll certainly enjoy the neighbourhood of Vaumarcus, Switzerland, where I'll attend DebConf. Challenge: combine running, hacking and cheese eating, and fit all this into 24 hours every day.

For the end of the year, challenges should peak between October and December:

  • October 27th: Toulouse marathon. A "real" one, this time... and possibly my first attempt at a Boston qualifying time (I need 3h30).
  • November 17th: Le Puy-Firminy night race: 70km, half road, half (easy) trails, with 1200m positive climb. Challenge: beat my best time there and complete it for the 3rd year in a row.
  • December 8th: Saintélyon, a 75km night race between my birth city of St-Étienne and the nearby city of Lyon, through the hills... and often with snow. Challenge: finish it..:-)
So, well, see you soon on this blog for another update after the Mont-Blanc marathon. Let's hope I'll have good news for you.

Vasudev Kamath: Tribute to Beloved Teacher

28 April, 2013 - 13:05

Today I'm writing this blog with a saddened heart. My mentor and best friend Dr. Ashokkumar is no more. He died yesterday after fighting lymph node cancer.

Ashokkumar, or "Ashok sir" as we students used to address him, was a Professor of Information Science Engineering at NMAM Institute of Technology, recently transferred to the Computer Science Engineering Department. I last met him in December last year, when he looked every bit okay, other than knee pain that kept him from walking freely. I never imagined that it would be my last meeting with him.

Ashok sir was also behind the FLOSS events that took place at NMAM Institute of Technology, including MiniDebconf 2011, which saw two foreign DDs, Christian Perrier and Jonas Smedegaard.

It was at Linux Habba, the first FLOSS event he organized, where I volunteered, that I entered the FLOSS world. It is because of him that I started my FLOSS journey and reached my current level. It was also his motivation that got me to start writing this blog, which I continue to this day.

I wholeheartedly thank Ashok sir for teaching me, guiding me and motivating me during my difficult times. You will always be remembered throughout my life. May your soul rest in peace.

Here are two pictures of Ashok sir taken during the MiniDebconf (Credits: Christian Perrier and Kartik Mistry)

Good bye Sir :-(

Vagrant Cascadian: intro

28 April, 2013 - 11:55

For many years, I've been meaning to write about various things in some sort of online journal. I'd like to tie together various parts of my life, such as my technical work in Debian, LTSP and other Free Software, Software Libre, and Open Source Software projects, but also my passion for Aikido, and other ideas that inspire me, such as pleaching.

As with many new things, I struggled to name this journal. I even struggled with calling it a journal as opposed to a blog, though "blog" always sounded a bit... brief, and kind of blunt for my liking. So journal it is. But I still didn't really have a name for it.

Recently introduced to the concept of pleaching, essentially weaving living, growing branches together, I was drawn to the idea, but admittedly, the sound of "pleach" or "pleaching" carried some hint of harshness. Looking into synonyms: weaving, plait, braid... braid, yes, I typically wear my hair in a pair of braids... It has enough meaning to get by on... But that's too plain... so I borrowed from Spanish to come up with "trenza".

So, thankfully for all involved, most of these posts will probably not be about picking names for things. I have a bit of a backlog of technical projects I've worked on that I'd like to write about, so I'll start with a few of those...

Paul Tagliamonte: Recent Hy developments

28 April, 2013 - 08:51

With some new patches from oodles of interested hackers, there are some hot new changes, including a major mode for emacs and a hot new bot.

I’m currently blocking new stuff while I’m in the middle of compiler work, but new feature development will be wide open soon!

Check out Hy, play with the source, install it with pip, and consider doing awesome stuff!
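If you just want to see it run, something like this should do (a minimal sketch):

pip install hy
echo '(print "Hello from Hy!")' > hello.hy
hy hello.hy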

Mike Gabriel: Unity Greeter with X2Go Remote Login Support

27 April, 2013 - 19:42

For the Danish company Fleten.net [1] (with my X2Go [2] developer hat on), I have recently developed X2Go integration for the Unity Greeter [3] theme of LightDM [4] in Ubuntu. Fleten.net, as a Canonical Partner, provides FOSS-based IT services to schools and municipalities in Denmark and Norway, based on Ubuntu and X2Go.



Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.