Planet Debian

Planet Debian - http://planet.debian.org/

Joachim Breitner: Talk and Article on Monads for Reverse Engineering

16 April, 2015 - 04:45

In a recent project of mine, a tool to analyze and create files for the Ravensburger Tiptoi pen, I used two interesting Monads with good results:

  • A parser monad that, while parsing, remembers which parts of the file were used for what and can provide, for example, an annotated hex dump.
  • A binary writer monad that allows you to reference and write out offsets to positions in the file that are only determined “later” in the monad, using MonadFix.

As that’s quite neat, I wrote a blog post about it for the German blog funktionale-programmierung.de, and also gave a talk at the Karlsruhe functional programmers group. If you know some German, enjoy; if not, wait until I have a reason to hold the talk in English. (As a matter of fact, I did hold the talk in English, but only spontaneously, so the text is in German only so far.)

Rhonda D'Vine: HollySiz

15 April, 2015 - 17:57

Sometimes one stumbles upon stuff that touches one deeply. Granted, the topic of the first video from the artist I want to present to you now touched me naturally. But it made me take a closer look. This is about HollySiz. Yes, yet another French singer, but fortunately (for me) she sings mostly in English. :)

So here are the songs:

  • The Light: At first I wasn't even aware it's a music video. And the story is strong. I'm not sure whether the story of Nils Pickert inspired the video, but it's lovely to see people getting it right. The parents' job is to support their kid in finding their own identity instead of defining it for them.
  • Better Than Yesterday: In the light of The Light everything else looks antique. So what's better than a video that actually does look antique? ;)
  • Tricky Game (feat. Sianna): I somehow like this version of the song better because it contains rap. But that might be just me. A catchy beat anyway.

Like always, enjoy! And take good care of your kids if you happen to have some.


Michal Čihař: Packaging python-gammu

15 April, 2015 - 17:00

After Monday's release of separate Gammu and python-gammu, the obvious task was to get the new packages into distributions.

First I started with the Debian packages, which was quite easy: from a quite complex CMake + Python package it is now purely CMake, so it was mostly about removing stuff. Soon the updated Gammu package was uploaded to experimental. Once that was ready, I also updated the backports for Ubuntu, and these are available in the Gammu PPA. Creating the new python-gammu package was a bit harder, as this is the first Python 3 compatible package I've created, but it's now ready and sitting in the NEW queue.

While working on the python-gammu package, I realized that some of the data used in the testsuite were missing from the tarball. While not critical, this is definitely not nice, so I decided to release python-gammu 2.1 today. It also includes fixes for some corner cases found by Coverity.

For openSUSE the packaging was quite easy as well: stripping out the unneeded parts of the Gammu package went smoothly, and it's now in the hardware project with an SR to Factory pending. python-gammu turned out to be much harder, as the testsuite failed there with some strange error coming out of libdbi. After looking deeper into it, the problem is a new return type available in the Git snapshot openSUSE is shipping. Fortunately producing a fix was quite easy, so the next Gammu upstream release will handle that properly, and the package in the hardware project is already patched. You can now use python-python-gammu from devel:languages:python, and an SR to Factory is pending as well.


Raphaël Hertzog: Looking back at the Debian Long Term Support project

15 April, 2015 - 15:46

On Sunday I gave a talk about Debian LTS during the Mini-DebConf in Lyon. Obviously I presented the project and the way it’s organized, but I also took the opportunity to compute some statistics.

You can watch the presentation (thanks to the video team!) or have a look at the slides to learn more.

Here are some extracts of the statistics I collected:

The number of uploads per “affiliation” (known affiliations are recorded in the LTS/Team wiki page) is displayed on the graph below. “None” corresponds to package maintainers taking care of their own packages, “Debian Security” corresponds to members of the security team who also contributed to LTS, and “Debian LTS” corresponds to individual members of the LTS team without any explicit affiliation. “Freexian” in fact represents 29 financial sponsors (see detail here).

Top 12 contributors (in number of uploads):

  • Thorsten Alteholz: 66
  • Holger Levsen: 27
  • Raphaël Hertzog: 14
  • Raphaël Geissert: 13
  • Thijs Kinkhorst: 8
  • Kurt Roeckx: 7
  • Christoph Biedl: 7
  • Nguyen Cong: 6
  • Ben Hutchings: 6
  • Michael Vogt: 5
  • Moritz Mühlenhoff: 4
  • Matt Palmer: 4

The talk also contains explanations about the current funding setup. Hopefully this clears things up for people who were still wondering how the LTS project is working.


Petter Reinholdtsen: Debian Edu interview: Shirish Agarwal

15 April, 2015 - 14:20

It was a surprise to me to learn that the project to create a complete computer system for schools that I'm involved in, Debian Edu / Skolelinux, was being used in India. But apparently it is, and I managed to get an interview with one of the friends of the project there, Shirish Agarwal.

Who are you, and how do you spend your days?

My name is Shirish Agarwal. I am based in the educational and historical city of Pune, in the western state of Maharashtra, India. I earn my bread by giving training, giving policy tips and doing free software installations for mom and pop shops in fields ranging from desktop publishing to retail, as well as working with a few software start-ups.

How did you get in contact with the Skolelinux / Debian Edu project?

It started innocently enough. I have been using Debian for a few years, and at one local minidebconf / debutsav I was asked if there was anything for schools or education. I had worked and played with free educational software such as Gcompris and Stellarium for my many nieces and nephews, so I researched and found Debian Edu, or Skolelinux as it was known then. Since then I have started using the various education meta-packages provided by the project.

What do you see as the advantages of Skolelinux / Debian Edu?

It's the closest I have seen to a bundle packed full of educational software which is free and open (both literally and figuratively). Even if I take the simplest software, which is gcompris, the number of activities therein is amazing. Another piece of software that I have liked for a long time is stellarium. Even pysycache is cool, except for a couple of issues I encountered, #781841 and #781842.

I prefer software installed on the system over web based solutions, as a web site can disappear any time but the software on disk has the possibility of a longer life span. Of course, with both it's more a question of whether there are enough users to make it fun or sustainable (or both) for the developer.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I do see that the Debian Edu team seems to be short-handed, and I think more effort should be made to make it popular and to ask for and take help from people and the larger community wherever possible.

I don't see any disadvantage to using Skolelinux apart from the fact that most apps are generic, which is good or bad depending on how you see it. That said, I do acknowledge that the canvas is pretty big and there are a lot of interesting ideas that could be implemented but, for reasons unknown, have not been, or if they have, I don't know about them. Let me share some of the ideas (these are more upstream-based, but still) I have had for a long time:

1. The classical maths question of two trains travelling in opposing directions, each running at x kmph/mph and starting y distance apart: when will they meet, how far will each travel, and similar questions like these.

The computer is a fantastic system where questions like these can be drawn, animated, and the methodology and answers teased out in an interactive manner. While sites such as the Ask Dr. Math FAQ on The Two Trains problem (as an example or point of inspiration) can be used, there is a lot more that can be done. I don't know if there is a free software application which does something like this. The idea is a blend of objects + animation + interaction which does this. The whole interaction could be gamified with points or sounds or a colourful celebration whenever the user gets even part of the question and/or methodology right. That would help reinforce good behaviour. This understanding could be used to share and showcase everything from how the first wheel came to be, to evolution, to how astronomy started, to physics, and everything in-between.

One specific idea in the train part was having the Linux mascot on one train and the BSD or GNU mascot on the other train, with them meeting somewhere in-between. Characters from the Blender movies could also be used.

2. Loads of crossword puzzles with reference to subjects: We have enormous data sets in Wikipedia and Wiktionary. I don't think it should be a big job to design crossword puzzles. Using categories and sub-categories it should be doable to generate Q&A single-word answers from the existing data sets. What would make it easy or hard could be the length of the word plus the existence of many or few vowels, depending on the user's input.

3. Jigsaw puzzles - We already have a great piece of software called palapeli, with a number of slicers that make it pretty interesting. What needs to be done is to download a large number of public domain and copyleft images, use IPTC tags to categorise them into nature, history etc., and let it loose. This could turn into a really huge collection of images. One source could be commons.wikimedia.org; others could be the huge collections of royalty-free stock photos. The potential is immense.

Apart from this, free software suffers in two directions: we lag a lot both in development (of new features per se) and in maintenance. This is more so in educational software, as these applications need to be timely and the opportunity cost of missing deadlines is immense. If we are able to solve the issues of funding for development and maintenance of such software, I don't see any big difficulties. I know of a few start-ups in and around India who would love to develop and maintain such software if the funding issues could be solved.

Which free software do you use daily?

That would be a huge list. Some of the software I use daily is obviously apt, aptitude, debdelta, leafpad, the shell of course (zsh nowadays), and quassel for IRC. In games I use shisen-sho, while card games are split evenly between kpat and Aisleriot. In desktops it's a tie between gnome-flashback and mate.

Which strategy do you believe is the right one to use to get schools to use free software?

I think it should start with using specific FOSS apps in whatever environment they already have. If it's MS-Windows or Mac, so be it. Once they are used to the apps and there is buy-in from the school management, then it could be installed anywhere. Most people now understand the concept of a repository because of the various online stores, so it isn't hard to convince them on that front.

What is harder is having enough people with the technical skills and passion to service them. If you get buy-in from one or two teachers, then ideas like the ones above could also be assigned as projects.

I think where we fall short more than anything is in marketing. For instance, Debian has this whole range of fonts in its archive, but there isn't even a page where newcomers could try out all those different fonts with Lorem Ipsum text.

One of the issues faced constantly in installations is with updates and upgrades. People have this myth that each update and upgrade means the user interface will, or has to, change. I have seen this innumerable times. That is perhaps one of the reasons why browsers like Iceweasel / Firefox change user interfaces so much: not because it might be needed or functional, but because people believe that changed user interfaces are better. This can easily be seen in the user interfaces changing with almost every MS-Windows and Mac OS release.

The problems with Debian Edu for deployment are many. The biggest is the huge gap between what is taught in schools and what Debian Edu is aimed at.

My friends and I taught on weekends in a government school for around 2 years, and gathered some experience there. Some of the things we learnt and discovered there were:

  1. Most of the teachers are very territorial about their subjects and they do not want you to teach anything outside the portion/syllabus given.
  2. They want any activity on the system to be in accordance with whatever is in the syllabus.
  3. There are huge barriers both with the English language and at times with the objects used. For example, let's say in gcompris you have objects falling down and you have to name them, and the falling object is a hat or a fedora hat: this would not be as recognizable as, say, a Puneri Pagdi, so there is a need to inject local objects and words wherever possible. Especially for word games there are so many Hindi words which have become part of English vocabulary (for instance in parley); those could be made into a Hinglish collection or something, but that is something for upstream to do.

Mike Hommey: Announcing git-cinnabar 0.2.1

15 April, 2015 - 09:20

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.
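
Usage, for reference, looks like a normal git workflow with an hg:: URL prefix. This is a minimal sketch of my own, not taken from the announcement, and the repository URL is only an illustration:

# Clone a mercurial repository through the git-cinnabar remote helper
# (illustrative URL); pulling and pushing then use the usual git commands.
git clone hg::https://hg.mozilla.org/mozilla-central
cd mozilla-central
git pull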

Get it on github.

What’s new since 0.2.0?

Not much, but this felt important enough to warrant a release, even though the issue has been there since before 0.1.0:

Mercurial can be slower when cloning or pulling a list of “heads” that contains non-topological heads. On repositories like the mercurial repository, it’s not so much of a big deal, taking 7s instead of 4s. But on big repositories like mozilla-central, it’s taking 23 minutes instead of 2 minutes and 20s (on my machine). And that’s with 100% CPU use on the server side.

The problem is that mozilla-central recently merged some old closed heads, such that it now has branch heads that aren’t topological heads. Git-cinnabar, until this release, would request those branch heads, leading the server to use the slow path mentioned above. This release works around the issue.

It also fixes an issue pushing to a remote empty mercurial repository.

Steinar H. Gunderson: DVswitch is dead

15 April, 2015 - 06:00

I'm seemingly late to the party, but DVswitch was declared dead last month. I'm not surprised at all; it was effectively dead a long time ago, with basically nobody except DVswitch users using DV, the standard being long since abandoned by manufacturers.

I'm a bit curious what the replacements look like; gst-switch doesn't look all that compelling to me (high complexity for a very limited feature set), and then there's Snowmix (which I haven't tried) and Open Broadcaster Software, which at least is somewhat mature, although maybe for a somewhat different use case.

And no, I'm not making one myself, for two simple reasons:

  1. I don't really need it; it's more of a side interest.
  2. I am deeply skeptical of any such software that isn't made by someone with extensive experience of sitting behind the controls of a real hardware video mixer, and I wouldn't meet that requirement myself.

I am fairly certain that if I did something like this, though, one of the first areas I would think deeply about would be sound processing (mixing, compression, possibly noise suppression) and monitoring. After all, “video is 90% about audio”—who cares about the picture from a talk if you can't hear what the speaker says?

My recommendation for software mixing would probably be getting a Windows machine or a Mac and then run Wirecast. Sorry. :-)

Santiago García Mantiñán: Hello Debian Planet and Jessie's question

15 April, 2015 - 05:15

This was just meant to say hello to the Debian Planet readers, but I'll end it with a Jessie related question, so...

Intro

For those who don't know me, I was born in Betanzos, A Coruña, Galicia, in the North-West of Spain, and I currently live in A Coruña. I've been a Debian developer since the year 2000, when I was quite a bit more involved than I am currently (life changes), but I'm always hoping to be able to dedicate more time to the project; I hope this will happen when my two children grow up a little bit.

I had been wanting to send my blog's Debian related posts to the planet but always failed to do so. Yesterday I found the planet wiki page and I said... it's so easy that I don't have any excuse not to do it, so here I am.

Oh, BTW... if I ever comment on Debian's anniversary (16th of August) that at Betanzos we are launching a really huge paper balloon, it is not to commemorate Debian's date but in honour of San Roque, even though maybe we should talk to the Pita family to have Debian's logo on it for our 25th anniversary :-)

Jessie's question

In Jessie we no longer have update-notifier-common, which had the /etc/kernel/postinst.d/update-notifier script that allowed us to automatically reboot on a kernel update. I have apt-file searched for something similar but haven't found it, so... who is now responsible for echoing to /var/run/reboot-required.pkgs on a kernel upgrade, so that the system reboots itself if we have configured unattended-upgrades to do so?

I really miss this feature. I don't know if it should live in the kernel packages, in unattended-upgrades or somewhere else, but now that we have whatmaps... we need this feature to round it all off.
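
For reference, the old behaviour can be recreated with a small kernel hook. The sketch below is my own guess at what such a script could look like; the file name zz-reboot-required and its contents are hypothetical, not an existing package's script:

#!/bin/sh
# Hypothetical /etc/kernel/postinst.d/zz-reboot-required hook (a sketch only).
# Kernel postinst hooks are called with the new kernel version as $1.
set -e
version="$1"
touch /var/run/reboot-required
echo "linux-image-${version}" >> /var/run/reboot-required.pkgs

With something like this in place, a kernel upgrade would again create the flag file that unattended-upgrades checks when automatic reboots are enabled.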

End

Well, to finish I just want to say that I'm very happy to be a part of the Debian community and that I enjoy reading you guys on the planet. Thanks a lot to all the Debian folks for making Debian not only a great OS, but also a great community.

Mark Brown: Flashing an AT91SAM9G20-EK from bare metal

15 April, 2015 - 01:04

Since I just had cause to do this and it was harder than it needed to be due to bitrot in the public documentation I could find I thought I’d write up how to get a modern bootloader onto older Atmel boards. These instructions are written for the AT91SAM9G20-EK though they should also apply to other Atmel boards of a similar generation.

These instructions are for booting from NAND since it’s the default thing for the board, for this J34 should be fitted to enable the chip select and J33 disconnected to disable the dataflash. If there is something broken programmed into flash then booting while holding down BP4 should cause the second stage bootloader to trash itself and ensure the ROM bootloader puts itself into recovery mode, or just removing both J33 and J34 during power on will also ensure no second stage bootloader is found.

There is a ROM bootloader but it just loads a small region from the boot media and jumps into it which isn’t enough for u-boot so there is a second stage bootloader called AT91Bootstrap. Download sources for current versions from github. If it (or a more sensibly written equivalent) is not yet merged upstream you’ll need to apply this patch to get it to build with a modern compiler, or you could use an old toolchain (which you’ll need in the next step anyway):

diff --git a/board/at91sam9g20ek/board.mk b/board/at91sam9g20ek/board.mk
index 45f59b1822a6..b8251ca2fbad 100644
--- a/board/at91sam9g20ek/board.mk
+++ b/board/at91sam9g20ek/board.mk
@@ -1,7 +1,7 @@
 CPPFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft
 
 ASFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft

Once that’s done you can build with:

make at91sam9g20eknf_uboot_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

producing binaries/at91sam9g20ek-nandflashboot-uboot-${VERSION}.bin. This configuration will look for u-boot at 0x40000 in the flash so we need a u-boot binary. Unfortunately modern compilers seem to produce binaries that fail with no output. This is normally a sign that they need the ABI specifying more clearly as above, but I got fed up trying to spot what was missing so I used an old CodeSourcery 2013.05 release instead; hopefully future versions of u-boot will be able to build for this target with modern toolchains. Grab a recent release (I used 2015.01) and build with:

cd ${UBOOT}
make at91sam9g20ek_nandflash_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

to get u-boot.bin.

These can then be flashed using the Atmel flashing tool SAM-BA. Start it and connect to the target (there is a Linux version, though it appears to rely on old versions of TCL/TK so if you get trouble starting it the easiest thing is to use the sacrificial Windows laptop you’ve obtained in order to run the “entertaining” flashing tools companies sometimes provide without risking a real system, or in my case your shiny new laptop that you’ve not yet installed Linux on). Start it then:

  1. Connect SAM-BA to the device following the dialog on start.
  2. Make sure you’ve selected “NandFlash” in the memory type tabs in the center of the window.
  3. Run the “Enable NandFlash” script.
  4. Run the “Erase All” script.
  5. Run the “Send Boot File” script and provide the at91bootstrap binary.
  6. Set “Send File Name” to be the u-boot binary you built earlier and “Address” to be 0x40000.
  7. Click “Send File”
  8. Press the reset button

which should result in at91bootstrap output followed by u-boot output on the serial console. A similar process works for the AT91SAM9263, there the jumper you need is J19 (sadly u-boot does not flash pictures of cute animals or forested shorelines on the screen as the default “Basic LCD Project 1.4″ firmware does, I’m not sure this “full operating system” thing is really delivering improved functionality).

Neil Williams: Extending an existing ARMMP initramfs

14 April, 2015 - 19:27

The actual use of this extension is still in development and the log files are not currently publicly visible, but it may still be useful for people to know the what and why …

The Debian ARMMP kernel can be used for multiple devices, just changing the DTB. I’ve already done tests with this for Cubietruck and Beaglebone-Black, iMX.53 was one of the original test devices too. Whilst these tests can deploy a full image (there are examples of building such images in the vmdebootstrap package), it is a lot quicker to do simple tests of a kernel using a ramdisk. The default Debian initramfs has a focused objective but still forms a useful base for extension. In particular, I want to be able to test one initramfs on multiple boards (so multiple dtbs) with the same kernel image. I then want to be able, on selected boards, to mount a SATA drive or write an image to eMMC or a USB stick or whatever. LAVA (via the ongoing refactoring, not necessarily in the current dispatcher code) can automate such tests, e.g. to allow me to boot a Cubietruck into a standard Debian ARMMP armhf install on the SATA drive but using a modified (or updated) ARMMP kernel over TFTP without needing to install it on the device itself. That same kernel image can then be tested on multiple boards to see if the changes have benefitted one board at the expense of another. Automating all of that could be of a lot of benefit to the ARM kernel developers in Debian and outside Debian.

So, the start point. Install Debian onto a Cubietruck – in my case, with a SATA drive attached. All well and good so far, standard Debian Jessie ARMMP. (Cubietruck uses the LPAE kernel flavour but that won’t matter for the initramfs.)

Rather than building the initramfs manually, this approach provides a shortcut – at some point I may investigate how to do this in QEMU but for now, it’s just as quick to SSH onto the Cubietruck and update.

I’ve already written a little script to download the relevant linux-image package for ARMMP, unpack it and pull out the vmlinuz, the dtbs and a selected list of modules. The list is selective because TFTP has a 32MB download limit and there are more modules than that. So I borrowed a snippet from the Xen folks (already shown previously here). The script is in a support repository for LAVA but can be used anywhere. (You’ll need to edit the package name in the script to choose between ARMMP and ARMMP LPAE.)
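
In rough outline the script does something like the following; this is a simplified sketch under my own assumptions about package name and paths, not the actual script from the LAVA support repository:

# Download and unpack the ARMMP (or ARMMP LPAE) linux-image package, then
# pull out the pieces needed for TFTP booting.
PKG=linux-image-3.16.0-4-armmp-lpae
apt-get download "$PKG"
dpkg-deb -x "$PKG"_*.deb extracted/
mkdir -p dtbs
cp extracted/boot/vmlinuz-* .
cp extracted/usr/lib/linux-image-*/*.dtb dtbs/
# Copy only a selected subset of extracted/lib/modules/ so that the result
# stays below the 32MB TFTP limit.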

Steps
  1. Get a working initramfs from an installed device running Debian ARMMP and copy some files for use later. Note: use the name of the symlink in the copy so that the file in /tmp/ is the actual file, using the name of the symlink as the filename. This is important later as it saves a step of having to make the (unnecessary) symlink inside the initramfs. Also, mkinitramfs, which built this initrd.img file in the first place, uses the same shared libraries as the main system, so copying these into the initramfs still works. (This is really useful when you get your ramdisk to support the attached secondary storage, allowing you to simply mount the original Debian install and fixup the initramfs by copying files off the main Debian install.) The relevant files are to support DNS lookup inside the initramfs which then allows a test to download a new image to put onto the attached media before rebooting into it.
    cp /boot/initrd.img-3.16.0-4-armmp-lpae /tmp/
    cp /lib/arm-linux-gnueabihf/libresolv.so.2 /tmp/
    cp /lib/arm-linux-gnueabihf/libnss_dns.so.2 /tmp/
    

    Copy these off the device for local adjustment:

    scp IP_ADDR:/tmp/FILE .
    
  2. Decompress the initrd.img:
    cp initrd.img-3.16.0-4-armmp-lpae initrd.img-3.16.0-4-armmp-lpae.gz
    gunzip -f initrd.img-3.16.0-4-armmp-lpae.gz
    
  3. Make a new empty directory
    mkdir initramfs
    cd initramfs
    
  4. Unpack:
    sudo cpio -id < ../initrd.img-3.16.0-4-armmp-lpae
    
  5. Remove the old modules (LAVA can add these later, allowing tests to use an updated build with updated modules):
    sudo rm -rf ./lib/modules/*
    
  6. Start to customise - need a script for udhcpc and two of the libraries from the installed system to allow the initramfs to do DNS lookups successfully.
    sudo cp ../libresolv.so.2 ./lib/arm-linux-gnueabihf/
    sudo cp ../libnss_dns.so.2 ./lib/arm-linux-gnueabihf/
    
  7. Copy the udhcpc default script into place:
    sudo mkdir ./etc/udhcpc/
    sudo cp ../udhcpc.d ./etc/udhcpc/default.script
    sudo chmod 0755 ./etc/udhcpc/default.script
    
  8. Rebuild the cpio archive:
    find . | cpio -H newc -o > ../initrd.img-armmp.cpio
    
  9. Recompress:
    cd ..
    gzip initrd.img-armmp.cpio
    
  10. If using u-boot, add the UBoot header:
    mkimage -A arm -T ramdisk -C none -d initrd.img-armmp.cpio.gz initrd.img-armmp.cpio.gz.u-boot
    
  11. Checksum the final file so that you can check that against the LAVA logs.
    md5sum initrd.img-armmp.cpio.gz.u-boot
    

Each type of device will need a set of modules modprobed before tests can start. With the refactoring code, I can use an inline YAML and use dmesg -n 5 to reduce the kernel message noise. The actual module names here are just those for the Cubietruck but by having these only in the job submission, it makes it easier to test particular combinations and requirements.

- dmesg -n 5
- lava-test-case udevadm --shell udevadm hwdb --update
- lava-test-case depmod --shell depmod -a
- lava-test-case sata-mod --shell modprobe -a stmmac ahci_sunxi sd_mod sg ext4
- lava-test-case ifconfig --shell ifconfig eth0 up
- lava-test-case udhcpc --shell udhcpc
- dmesg -n 7

In due course, this will be added to the main LAVA documentation to allow others to keep the initramfs up to date and to support further test development.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2015

14 April, 2015 - 15:37

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 61 work hours have been equally split among 4 paid contributors. Their reports are available:

The remaining hours of Ben and Holger have been redispatched to other contributors for April (during which Mike Gabriel joins the set of paid contributors). BTW, if you want to join the team of paid contributors, read this and apply!

Evolution of the situation

April has seen no change in terms of sponsored hours but we have two new sponsors in the pipe and May should hopefully have a few more sponsored hours.

For the needs of an LTS presentation I gave during the Mini-DebConf in Lyon, I prepared a small graph showing the evolution of the hours sponsored through Freexian:

The growth is rather slow and it will take years to reach our goal of funding the equivalent of a full-time position (176 hours per month). Even the intermediary goal of funding the equivalent of a half-time position (88h/month) is more than 6 months away given the current growth rate. But the prospect of Wheezy-LTS should help us to convince more organizations and hopefully we will reach that goal sooner. If you want to sponsor the project, check out this page.

In terms of security updates waiting to be handled, the situation looks similar to last month: the dla-needed.txt file lists 40 packages awaiting an update (exactly like last month), the list of open vulnerabilities in Squeeze shows about 56 affected packages in total (2 less than last month).

Thanks to our sponsors

The new sponsors of the month are in bold (none this month).


Mario Lang: Bjarne Stroustrup talking about organisations that can raise expectations

14 April, 2015 - 15:13

At time index 22:35, Bjarne Stroustrup explains in this video what he thinks is very special about organisations like Cambridge or Bell Labs. When I heard him explain this, I couldn't help but think of Debian. This is exactly how I felt (and actually still do) when I joined Debian as a Developer in 2002. This is what makes Debian, amongst other things, very special to me.

If you don't want to watch the video, here is the excerpt I am talking about:

One of the things that Cambridge could do, and later Bell Labs could do, is somehow raise people's expectations of themselves. Raise the level that is considered acceptable. You walk in and you see what people are doing, you see how people are doing, you see how apparently easily they do it, and you see how nice they are while doing it, and you realize, I better sharpen up my game. This is something where you have to, you just have to get better. Because, what is acceptable has changed. And some organisations can do that, and well, most can't, to that extent. And I am very very lucky to be in a couple places that actually can increase your level of ambition, in some sense, level of what is a good standard.

Michal Čihař: Hacking Gammu

14 April, 2015 - 13:30

I've spent the first day of SUSE Hackweek on Gammu. There are quite a lot of tasks to be done and I wanted to complete at least some of them.

First I started with the website. I did not really like the old layout and aggressive colors, and while touching its code it was a good idea to make the website work well on mobile devices. I started with a conversion to Bootstrap and it turned out to be quite an easy task. The next step was making the pages simpler, as in many places there was too much information hidden in the sidebar. While doing the content cleanup, I removed some features which really don't make much sense these days (such as mirror selection). Anyway, read more in the news entry on the site itself.

The second big task was to add support for Python 3 in python-gammu. It seems that the world is finally slowly moving towards Python 3, and people have started to request that python-gammu be available there as well. The porting itself took quite some time, but I had mostly completed it before Hackweek. Yesterday, some time was spent on polishing and releasing the standalone python-gammu and Gammu without the Python bindings. Now you can build python-gammu using distutils or install it using pip install python-gammu.
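
As a quick smoke test after installation (my own example, not from the post; gammu.Version() simply reports the library and module version strings):

# Install the standalone python-gammu module and check that it imports.
pip install python-gammu
python3 -c "import gammu; print(gammu.Version())"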


Steve Kemp: Subject - Verb Agreement

14 April, 2015 - 07:00

There's pretty much no way that I can describe the act of cutting a live, 240V mains-voltage, wire in half with a pair of scissors which doesn't make me look like an idiot.

Yet yesterday evening that is exactly what I did.

There were mitigating circumstances, but trying to explain them would make little sense unless you could see the scene.

In conclusion: I'm alive, although I almost wasn't.

My scissors? They have a hole in them.

Mehdi Dogguy: DPL campaign 2015

14 April, 2015 - 05:28
This year's DPL campaign is over and the voting period is also almost over. Many have not voted yet and they really should consider doing so. This is meant as a reminder for them. If you didn't have time to dig into debian-vote's archives and read the questions and answers, here is the list with links to the candidates' replies:
Compared to past years, we had a comparable number of questions. The questions did not start big threads, as sometimes used to be the case in the past :-) The good side of this is that we are trolling DPL candidates less than we used to :-P

Now, if you still didn't vote, it is really time to do so. The voting period ends on Tuesday, April 14th, 23:59:59 UTC, 2015. You have only a few hours left!

Santiago García Mantiñán: haproxy as a very very overloaded sslh

14 April, 2015 - 01:38

After using haproxy at work for some time I realized that it can be configured for a lot of things, for example: it knows about SNI (which on ssl is the method we use to know what host the client is trying to reach, so that we know which certificate to present and thus can multiplex several virtual hosts on the same ssl IP:port), and it also knows how to make transparent proxy connections (the connections go through haproxy but the destination server will think they are arriving directly from the client, as it will see the client's IP as the source IP of the packets).

With these two little features, which are available in haproxy 1.5 (Jessie's version has them both), I thought I could try substituting sslh with haproxy, giving me a lot of possibilities that sslh cannot offer.

With this in mind I thought I could multiplex several ssl services, not only https but also openvpn or similar, on port 443, and also allow these services to arrive transparently at the final server. Thus what I wanted was not to mimic sslh (which can be done with haproxy) but to get the semantics I needed, which are similar to sslh but with more power and slightly different behaviour, because I liked it that way.

There is however one caveat that I don't like about this setup: to achieve the transparency one has to run haproxy as root, which is not really something one likes :-( So having transparency is great, but we'll be taking some risks here which I personally don't like; to me it isn't worth it.

Anyway, here is the setup. It basically consists of an haproxy configuration, but if we want transparency we'll have to add a routing and iptables setup to it; I'll describe the whole setup here.

Here is what you need to define on /etc/haproxy/haproxy.cfg:

frontend ft_ssl
    bind 192.168.0.1:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl sslvpn req_ssl_sni -i vpn.example.net
    use_backend bk_sslvpn if sslvpn
    use_backend bk_web if { req_ssl_sni -m found }
    default_backend bk_ssh

backend bk_sslvpn
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvvpn vpnserver:1194

backend bk_web
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvhttps webserver:443

backend bk_ssh
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvssh sshserver:22

An example of a transparent setup can be found here but it lacks some details; for example, if you need to redirect the traffic to the local haproxy you'll want to use xt_TPROXY, and there is a better doc for that at squid's wiki. Anyway, if you are playing just with your own machine, like we typically do with sslh, you won't need the TPROXY power, as packets will come straight to your 443, so haproxy will be able to get them without any problem. The problem will come if you are using transparency (source 0.0.0.0 usesrc clientip), because then packets coming out of haproxy will be carrying the IP of the real client, and thus the answers of the backend will go to that client (but with different ports and other tcp data), so it will not work. We'll have to get those packets back to haproxy; for that what we'll do is mark the packets with iptables and then route them to the loopback interface using advanced routing. This is where all the examples will tell you to use iptables' mangle table with rules marking on PREROUTING, but that won't work out if you have all the setup (frontend and backends) in just one box; instead you'll have to write those rules to work on the OUTPUT chain of the mangle table, having something like this:

*mangle
:PREROUTING ACCEPT
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
:DIVERT -
-A OUTPUT -s public_ip -p tcp --sport 22 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 443 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 1194 -o public_iface -j DIVERT
-A DIVERT -j MARK --set-mark 1
-A DIVERT -j ACCEPT
COMMIT

Take that just as an example; better suggestions on how to decide what traffic to send to DIVERT are welcome. The point here is that if you are sending the service to some other box you can do it on PREROUTING, but if you are sending the service to the very same box as haproxy you'll have to mark the packets on the OUTPUT chain.

Once we have the packets marked we just need to route them, something like this will work out perfectly:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

And that's all for this crazy setup. Of course, if, like me, you don't like the root implication of the transparent setup, you can remove the "source 0.0.0.0 usesrc clientip" lines on the backends and forget about transparency (connections to the backend will come from your local IP), but you'll be able to run haproxy with dropped privileges and you'll just need the plain haproxy.cfg setup and not the weird iptables and advanced routing setup.

Hope you like the article. BTW, I'd like to point out the main difference between this setup and sslh: I'm only sending the packets to the ssl providers if the client is sending SNI info, otherwise I'm sending them to the ssh server, while sslh will also send ssl clients without SNI to the ssl provider. If your setup mimics sslh and you want to comment on it, feel free to do so.

Andrew Shadura: UI translation tools and version control

13 April, 2015 - 20:59

Today I decided to try some translation tools I could install on my laptop locally to translate Kallithea, so I’d not need to be on-line to use Michal Čihař’s wonderful Weblate.

The first tool I tried was Gtranslator. I edited about 5 strings, and then wanted to commit my changes. To my surprise, the diff was huge. Apart from obvious changes in the file header, like changing the team address or X-Generator field, Gtranslator has reformatted almost every other entry in the file, adding meaningless line breaks or reflowing the strings I didn’t edit.

@@ -3092,8 +3093,8 @@ msgstr ""
 
 #: kallithea/templates/admin/permissions/permissions_globals.html:72
 msgid ""
-"Write permission to a repository group allows creating repositories "
-"inside that group."
+"Write permission to a repository group allows creating repositories inside "
+"that group."
 msgstr ""
 
 #: kallithea/templates/admin/permissions/permissions_globals.html:77

Apart from that it has quite a dumb user interface, so I most probably won’t ever use it again unless things improve.

Well, I thought, I need to try Lokalize which I understand is a Qt4 port of KBabel, which I remember was quite a reasonable translation tool.

Just as with Gtranslator, I created a project, edited one line and hit ‘Save’. As I expected, Lokalize updated the file header, and also changed the formatting of some entries, though the number of changes was significantly lower.

Yet the winner of this competition is Weblate, which indeed avoids unnecessary changes as much as it can, just as advertised. Probably, I’ll just stick with it, setting up a local instance.

Dirk Eddelbuettel: inline 0.3.14

13 April, 2015 - 19:02

The inline package facilitates writing code in-line in simple string expressions or short files. The package is mature and in maintenance mode: Rcpp used it greatly for several years but then moved on to Rcpp Attributes, so the need for extensions to inline is now much more limited.

But we now have a new inline version 0.3.14. It brings a few minor code updates since the last release in 2013, as well as new extensions to better support Fortran (for several flavours including f95) and to make working with dynamic library files easier. These were contributed by long-time R author Karline Soetaert, who thereby became a package co-author. Also, the package moved to GitHub sometime last year and now lives in this repo.

See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.14 (2015-04-11)
  • Removed call to Rcpp::RcppLdFlags() which is no longer needed

  • With move of repository to GitHub, added a .travis.yml file and corresponding entry in .Rbuildignore

  • Replaced calls to require() with calls to requireNamespace(); also updated one call

  • Much improved support for Fortran and Fortran95 thanks to Karline Soetaert who became a package co-author

  • New helper functions writeDynLib and readDynLib as well as new methods print and code (also by Karline)

Courtesy of CRANberries, there is a comparison to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: Zero Sum Game

13 April, 2015 - 13:02

Review: Zero Sum Game, by S.L. Huang

Series: Russell's Attic #1 Publisher: S.L. Huang Copyright: 2014 ISBN: 0-9960700-1-X Format: Kindle Pages: 326

Cas Russell finds things for people, often involving some strategic violence. She belongs to that genre of action novel protagonists who have a rough code of ethics, but who don't exactly follow the law. Her friend Rio is much worse: a functioning sociopath with his own code of ethics, the derivation of which is a key plot point. When the book opens, Cas is on a mission to find a person and rescue them from a drug cartel — not her normal mission, but her contact said she was referred by Rio. Oh, and Rio is currently hitting her in the face.

This setup matches any number of present-day thrillers. The SFnal twist is that Cas is very good at numbers, in a completely unrealistic action hero way. (I found it unsurprising that Huang was inspired by the superhero genre; that's the right model to have in mind when reading about Cas.) She can calculate where bullets are going to go, knows just the right angle and velocity with which to throw things (and, even more notably, can get her body to do that), and, in one particularly memorable scene, sets up an eavesdropping sound concentrator by changing the angles of available random surfaces in the neighborhood, like trash cans. This ability is not without drawbacks. When she's not in the middle of a job, with something to focus on, she usually ends up drinking herself into a stupor to get her brain to stop working. But it's an extremely useful ability that requires the villains of this book to go to great efforts to try to kill her.

The plot starts out as fairly typical thriller material, involving threats and dire consequences to those Cas loves (or at least likes a lot) and an unfolding sense that this retrieval of a kidnapped woman is the tip of a very deep iceberg. The expected counterpart, a private investigator with a less casual attitude towards killing than Cas, shows up early on. (Rio does not play that role in the story. His role is much more complex.) But the superhero inspirations show up in the villains as well, in a twist that many on-line summaries spoil, but which I will leave unmentioned.

Mostly, Zero Sum Game is a fast-moving story with lots of violence, lots of guns, shadowy conspiracies, and a hypercompetent protagonist. (Female, refreshingly, particularly since she doesn't fall in love with any of the other characters in the book.) It's a recipe for enjoyable brain candy, and I think that's the best attitude to bring to it. However, a couple of things set it apart for me.

First, Cas spends quite a bit of time really thinking about her life and questioning her decisions, rather than just blithely enjoying her world of stress and violence. There's more introspection here than in the typical thriller plot, but she stops short of wallowing in angst and stays decisive. I liked that balance: a bit of inner discomfort, and a few hard ethical decisions, but not to the point of paralyzing her.

Second, her relationship with Rio is something special. Rio himself is a character type that I've seen before in books like this, but I don't think I've seen the dynamic with a character like that handled this well before. I particularly liked that the focus of the book stayed on Cas, not on Rio, and the reader was encouraged to see that relationship as a reflection on Cas and her sense of internal ethics. Seeing Rio through Cas's eyes, and then seeing other characters react to him and react to their relationship, touched some chords that I really enjoyed reading.

Unfortunately, the villains weren't as successful, at least for me. Partly this is a personal quirk: the nature of the threat posed (not revealed for about half the book) is a kind that I dislike reading about. It makes my skin crawl in a way that I don't enjoy. But, even putting that aside, the story ends on a very odd and disturbing anti-climax. It's clearly the first book of an ongoing series, and I hope later books will salvage this. (I certainly liked it well enough to read on.) But the ending left me unsettled and rather irritated at the author. Huang plays fair, and the ending is consistent with what we know by the end of the book, but I read this sort of action-thriller story for catharsis and the glory of competent people doing what they do well.

I got deeply engrossed in this book and had a hard time putting it down. Both Cas and Rio are great characters, as are most of the supporting cast. I wish the ending wasn't quite as much of a letdown so that I could recommend it more strongly. But it's still a fun superhero thriller. If you're looking for something with unrealistic superpowers, a large helping of competence, and a high body count, this may be worth picking up.

(And no, I don't know what the series title means. I know what the series title refers to, but I haven't yet figured out what connection it or the Axiom of Choice has to the plot.)

Followed by Half Life.

Rating: 7 out of 10

Mark Brown: Acer Aspire E11

13 April, 2015 - 01:52

Recently I was in Seoul in the middle of three weeks of travel and my laptop died on me.  Since I had some work that needed doing fairly urgently I took myself over to Yongsan Electronics Market and got myself a cheap replacement to tide myself over.

What I ended up with was an Acer Aspire E11. There’s a bunch of different models all with very similar plastics, I got one which has a N2940 SoC, 2G of RAM (upgraded to 4G in store), a 500G hard disk and no fans for just over 200000 Korean Won, or about $200. As you’d expect at that price it’s got shortcomings but overall I’ve been extremely happy with it, it’s worth looking at if you need something cheap.

The keyboard in particular is probably the nicest I’ve used on a laptop in a long time with a good, definite but not excessive click feel as you press. Battery life is about 5 hours as advertised which is not wonderful but basically fine for me most of the time, and while not exactly Retina it’s clear with good viewing angles and generally pleasant to look at. Everything is plastic but feels very solid and robust, better than a lot of more expensive devices I’ve used, and there’s not much bezel around the screen which means it’s the first laptop I’ve had which has been comfortable to use in a standard economy seat on a plane.

The biggest drawback is performance – it’s a little slow opening applications sometimes and kernel builds crawl with an x86 allmodconfig taking about one and three quarter hours. For e-mail and web browsing there’s no problem at all, I did have to move from offlineimap to mbsync to get my mail to sync in a reasonable time but that’s more to do with the performance of offlineimap than that of the system. Overall in use it feels like the Dell I was using from about 2008-2011 or so, comfortable in use outside of builds, and I do appreciate having a system with no fans.

There were a couple of small tricks getting Debian installed – this is the first system I’ve seen with secure boot enabled by default which took me a few moments to work out (but is really good to see). Once that was disabled the install was smooth other than being bitten by Debian bug#778810 which meant I needed a manual fixup to actually get it to boot from the disk. It’s also got a Broadcom WiFi module which means it doesn’t work at all with mainline but it looked like that was on a standard mini PCI Express module so easily replaceable (I happened to have a USB dongle handy so haven’t bothered) and the wired ethernet just worked.

Like I say I’ve been very happy with it, there’s a bunch of other models with different specs for everything except the case (some touchscreen, some with small 32G eMMC drives) as well. Were it not for my need to do kernel builds I’d probably be keeping it as my primary laptop.


Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.