Planet Debian


Lars Wirzenius: 20 years ago I became a Debian developer

16 August, 2016 - 22:47

Today it is 23 years since Ian Murdock published his intention to develop a new Linux distribution, Debian. It is also about 20 years since I became a Debian developer and made my first package upload.

In the time since:

  • I've retired a couple of times, to pursue other interests, and then un-retired.

  • I've maintained a bunch of different packages, most importantly the PGP2 software in the 90s. (I now only maintain software for which I'm also upstream, in order to make jokes about my upstream being an uncooperative jerk, and my packager being unhelpful in the extreme.)

  • Got kicked out from the Debian mailing lists for insulting another developer. Not my proudest moment. I was allowed back later, and I've tried to be polite ever since. (See also rule 6.)

  • I've been to a few Debconfs (3, 5, 6, 9, 10, 15). I'm looking forward to going to many more in the future. It's clear that seeing many project members at least every now and then has a very big impact on project cohesion.

  • I had a gig where I was paid to improve the technical quality of Debian. After a few months of bug fixing (which isn't my favourite pastime), I wrote piuparts in order to find new bugs. (I gave that project away many years ago, but it seems to still be going strong.)

  • I've almost run for DPL twice, but I'm glad I didn't actually do it. I've carefully avoided any positions of power or responsibility in the project. (I live in fear that someone will decide to nominate me for something where I'd actually have to make important decisions.)

    Not being responsible means I can just ignore the project for a while when something annoying happens. (Or retire again.) With such a large project, eventually something really annoying does happen.

  • Came up with the DEP process with Zack and Dato. I also ran the second half of the DEP5 process to get the machine-readable debian/copyright format accepted. (I'm no longer involved, though, and I don't think DEP is used much now.)

  • I've taught several workshops about Debian packaging, including online for Debian-Women. It's always fun when others "get" how easy packaging really is, despite all the effort that the large variety of tooling and random web pages put into obscuring the fundamental simplicity.

  • Over the years I've enjoyed many of the things developed within Debian (without claiming any credit for myself):

    • the policy manual, perhaps the most important technical achievement of the project

    • the social contract and Debian free software guidelines, unarguably the most important non-technical achievements of the project

    • the whole package management system, but especially apt

    • debhelper's dh, which made the work of packaging simple cases so easy it's nearly a no-brainer

    • d-i made me not hate installing Debian (although I think the time is getting ripe to replace d-i with something new; catch me in a talkative mood at a party to hear more)

    • Debian-Women made an almost immediate improvement to the culture of the larger project (even if there are still far too few women developers)

    • the diversity statement made me a lot happier about being a project member.

    I'd like to thank everyone who's worked on these and made them happen. These are important milestones in Debian.

  • I've opened my mouth in a lot of places over the years, which means a lot of people know of me, but nobody can actually point at anything useful I've actually done. Which is why when I've given talks at, say, FOSDEM, I get introduced as "the guy who shared an office with Linus Torvalds a long time ago".

  • I've made a number of friends via participation in Debian. I've found jobs via contacts in Debian, and have even started a side business with someone.

It's been a good twenty years. And the fun ain't over yet.

Bits from Debian: Debian turns 23!

16 August, 2016 - 19:30

Today is Debian's 23rd anniversary. If you are close to any of the cities celebrating Debian Day 2016, you're very welcome to join the party!

If not, there's still time for you to organize a little celebration or contribution to Debian. For example, you can have a look at the Debian timeline and learn about the history of the project. If you notice that some piece of information is still missing, feel free to add it to the timeline.

Or you can scratch your creative itch and suggest a wallpaper to be part of the artwork for the next release.

Our favorite operating system is the result of all the work we have done together. Thanks to everybody who has contributed in these 23 years, and happy birthday Debian!

Michal Čihař: Gammu 1.37.4

16 August, 2016 - 16:00

It has been almost three months since the last Gammu release, and it's time to push fixes out to users. This time the amount of fixes is quite small, covering Huawei devices and text mode for sending SMS.

Full list of changes in 1.37.4:

  • Improved support for Huawei E3131.
  • Fixed SMS support for MULTIBAND 900E.
  • Fixed SMS created in text mode.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

Keith Packard: udevwrap

16 August, 2016 - 13:32
Wrapping libudev using LD_PRELOAD

Peter Hutterer and I were chasing down an X server bug which was exposed when running the libinput test suite against the X server with a separate thread for input. This was crashing deep inside libudev, which led us to suspect that libudev was getting run from multiple threads at the same time.

I figured I'd be able to tell by wrapping all of the libudev calls from the server and checking to make sure we weren't ever calling it from both threads at the same time. My first attempt was a simple set of cpp macros, but that failed when I discovered that libwacom was calling libgudev, which was calling libudev.

Instead of recompiling the world with my magic macros, I created a new library which exposes all of the (public) symbols in libudev. Each of these functions does a bit of checking and then simply calls down to the 'real' function.

Finding the real symbols

Here's the snippet which finds the real symbols:

static void *udev_symbol(const char *symbol)
{
    static void *libudev;
    static pthread_mutex_t  find_lock = PTHREAD_MUTEX_INITIALIZER;
    void *sym;

    pthread_mutex_lock(&find_lock);
    if (!libudev) {
        /* the original hard-codes the full name of a specific libudev
           version here; the generic soname below is just a stand-in */
        libudev = dlopen("libudev.so.1", RTLD_LOCAL | RTLD_NOW);
    }
    sym = dlsym(libudev, symbol);
    pthread_mutex_unlock(&find_lock);
    return sym;
}

Yeah, the libudev version is hard-coded into the source; I didn't want to accidentally load the wrong one. This could probably be improved...

Checking for re-entrancy

As mentioned above, we suspected that the bug was caused when libudev got called from two threads at the same time. So, our checks are pretty simple; we just count the number of calls into any udev function (to handle udev calling itself). If there are other calls in process, we make sure the thread ID for those is the same as the current thread.

/* state shared by the checks (declarations reconstructed) */
static int udev_running;            /* depth of in-progress udev calls */
static pthread_t udev_thread;       /* thread currently inside libudev */
static const char *udev_func[64];   /* names of the in-progress calls */

static void udev_enter(const char *func) {
    assert (udev_running == 0 || udev_thread == pthread_self());
    udev_thread = pthread_self();
    udev_func[udev_running] = func;
    udev_running++;
}

static void udev_exit(void) {
    udev_running--;
    if (udev_running == 0)
        udev_thread = 0;
    udev_func[udev_running] = 0;
}

Wrapping functions

Now, the ugly part -- libudev exposes 93 different functions, with a wide variety of parameters and return types. I constructed a hacky macro, calls for which could be constructed pretty easily from the prototypes found in libudev.h, and which would construct our stub function:

#define make_func(type, name, formals, actuals)         \
    type name formals {                     \
    type ret;                       \
    static void *f;                     \
    if (!f)                         \
        f = udev_symbol(__func__);              \
    udev_enter(__func__);                   \
    ret = ((typeof (&name)) f) actuals;         \
    udev_exit();                        \
    return ret;                     \
    }
There are 93 invocations of this macro (or a variant for void functions) which look much like:

make_func(struct udev *,
      udev_ref,
      (struct udev *udev),
      (udev))

Using udevwrap

To use udevwrap, simply stick the filename of the .so in LD_PRELOAD and run your program normally:

# LD_PRELOAD=/usr/local/lib/ Xorg 
Source code

I stuck udevwrap in my git repository:;a=summary

You can clone it using

$ git clone git://

Shirish Agarwal: The road to TOR

15 August, 2016 - 17:31

Happy Independence Day to all. I had been looking forward to this day so I could use it to share with my brothers and sisters what little I know about TOR. Independence means many things to many people. For me, it means having freedom, valuing it, and using it to benefit not just ourselves but people at large. And for that to happen, at least on the web, it has to rise above censorship if we are to get there at all. I am 40 years old, and if I can't read whatever I want to read without asking the state-military-corporate trinity, then be damned with that. Debconf was instrumental, as it helped me understand and share many of the privacy concerns that we all have. This blog post is partly a tribute to being part of a community and being part of Debconf16.

So, in that search for privacy a couple of years ago, I came across TOR. TOR stands for 'The Onion Router'. Explaining tor is simple. Let us take the standard way in which we approach a website using a browser or any other means.

a. We type out a site name, say in the URL/URI bar .
b. Now the first thing the browser does is look into its DNS cache to see if the name/URL has been used before. If it is something that has been used before, the entry is *fresh*, and there is content already, it serves the content from the cache itself.
c. If it's not, or the content is stale, a DNS lookup is generated through the various routing tables till the IP address is found and the information relayed to the browser.
d. The browser takes the IP address and opens a TCP connection to the server; the handshake happens, and after that it's business as usual.
e. If it doesn't work, you could get errors like 'Could not connect to server xyz' or some special errors with error codes.

This is a much simplified version of what happens or goes through normally with most/all of the browsers.
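Step (b) above — serving a cached result only while it is fresh — can be sketched as a tiny cache with expiry. This is a hypothetical illustration of the idea only (the hostnames and addresses are made up, and real browsers and resolvers are far more involved):

```python
import time

class DnsCache:
    """Toy DNS cache: returns a cached address only while it is fresh."""

    def __init__(self):
        self._entries = {}  # hostname -> (address, expiry timestamp)

    def put(self, name, address, ttl):
        # Remember the resolved address for ttl seconds.
        self._entries[name] = (address, time.monotonic() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None                 # never resolved: a real lookup is needed
        address, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._entries[name]     # stale: force a fresh lookup
            return None
        return address

cache = DnsCache()
cache.put("www.example.org", "192.0.2.10", ttl=300)
print(cache.get("www.example.org"))  # fresh: served from the cache
print(cache.get("www.example.net"))  # miss: None, so a real lookup happens
```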

One good way to see how the whole thing happens is to use traceroute and use the whois service.

For e.g. –

[$] traceroute

and then

[$] whois | grep inetnum
inetnum: -

Just using whois with the IP address gives much more. I shared a short version because I find it interesting that Debian has booked all 255 possible IP addresses, but speculating on that would probably be a job for a different day.

Now the differences when using TOR are these –

a. The conversation is encrypted (somewhat like using https, but encrypted through the relays)
b. The conversation is relayed over 2-3 relays, so the DNS server at the other end sees a somewhat different identity.
c. It is only at the end-points that the conversation will be in plain text.

For e.g. the TOR connection I'm using atm goes from me – France (relay) – Switzerland (relay) – Germany (relay) – . So wordpress thinks that all the connection is happening via Germany while I'm here in India. It would also tell them that I'm running some version of MS-Windows and a different browser, while I'm actually somewhere in India, on Debian, using another browser altogether.

There are various motivations for doing that. For myself, I'm just a private person and do not need or want any other person, or even the State, looking over my shoulder at what I'm doing. And the argument that we need to spy on citizens because terrorists are out there doesn't hold water with me. There are many ways in which they can pass messages even without tor or the web. The Government-Corporate-Military trinity just gets more powerful if and when it knows what common people think, do, eat, etc.

So the question is how do you install tor if you are a private sort of person. If you are on a Debian machine, you are one step closer to doing that.

So the first thing that you need to do is install the following –

$ sudo aptitude install ooniprobe python-certifi tor tor-geoipdb torsocks torbrowser-launcher

Once the above is done, then run torbrowser-launcher. This is how it would work out the first time it is run –

[$] torbrowser-launcher

Tor Browser Launcher
By Micah Lee, licensed under MIT
version 0.2.6
Creating GnuPG homedir /home/shirish/.local/share/torbrowser/gnupg_homedir
Downloading and installing Tor Browser for the first time.
Latest version: 6.0.3
Verifying signature
Extracting tor-browser-linux64-6.0.3_en-US.tar.xz
Running /home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/start-tor-browser.desktop
Launching './Browser/start-tor-browser --detach'...

As can be seen above, you basically download the tor browser from the remote website. Obviously, for this, port 80 needs to be open.

One of the more interesting things is that it tells you where it installs the browser.

/home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/start-tor-browser and then detaches.

The first time the TOR browser actually runs it looks something similar to this –

Torbrowser picture

Additionally it would give you 4 choices. Depending on your need for safety, security and convenience you make a choice and live with it.

Now the only thing remaining to do is to create an alias for your torbrowser. So I made

[$] alias tor


It is suggested that you do not use the same usernames on the onion network.

Also apart from the regular URL addresses such as ‘’ you will also see sites such as (fictional address)

Now there would be others who would want to use the same or similar settings as they have in their Mozilla Firefox installation.

To do that do the following steps –

a. First close down both Torbrowser and Mozilla Firefox .
b. Open your file browser and go to where your mozilla profile details are. In typical Debian installations it is at


In the next tab, navigate to –


c. Now copy the following files over from your mozilla profile to your tor browser profile and you can resume where you left off.

    logins.json (Firefox 32 and above)
    signons3.txt (if exists)

and the following folders/directories

    chrome (if it exists)
    searchplugins (if it exists)

Once the above is done, fire up your torbrowser with the alias shared above. The alias usually goes in your .bashrc file or, depending on which terminal interpreter you use, wherever its config file is.

Welcome to the world of TOR. Now, after a time, if you benefit from tor and would like to give back to the tor community, you should look up tor bridges and relays. As this blog post has become long enough, I will end it now, and hopefully we can talk about tor bridges and relays some other day.

Filed under: Miscellenous Tagged: #anonymity, #Debconf16, #debian, #tor, #torbrowser, GNU, Linux, Privacy

Russ Allbery: Review: Winds of Fate

15 August, 2016 - 08:11

Review: Winds of Fate, by Mercedes Lackey

Series: Mage Winds #1
Publisher: DAW
Copyright: 1991
Printing: July 1992
ISBN: 0-88677-516-7
Format: Mass market
Pages: 460

As a kid working my way through nearly everything in the children's section of the library, I always loved book series, since it meant I could find a lot more of something I liked. But children's book series tended to be linear, with a well-defined order. When I moved into the adult SF section, I encountered a new type of series: one that moves backwards and forwards in time to fill in a broader story.

I mention that here because Winds of Fate, although well into the linked series that make up Valdemar, was one of the first Valdemar books I read. (I think it was the first, but my memory is hazy.) Therefore, in my brain, this is where the story of Valdemar "begins": with Elspeth, a country that has other abilities but has forgotten about magic, a rich world full of various approaches to magic, and very pushy magic horses. Talia's story, and particularly Vanyel's, were always backstory, the events that laid the groundwork for Elspeth's story. (I didn't encounter Tarma and Kethry until somewhat later.)

Read now in context, this is obviously not the case. The Mage Winds trilogy, of which this is the first book, are clearly sequels to the Arrows of the Queen trilogy. Valdemar was victorious in the first round of war with Ancar, but the Heralds have slowly (and with great difficulty) become aware of their weakness against magic and their surprising lack of it. Elspeth has grown into the role of heir, but she's also one of the few who find it easy to talk about and think about magic (perhaps due to her long association with Kerowyn, who came into Valdemar from the outside world in By the Sword). She therefore takes on the mission of finding an Adept who can return to Valdemar, solve the mystery of whatever is keeping magic out of the kingdom, and start training mages for the kingdom again.

Meanwhile, we get the first viewpoint character from the Tayledras: the elf-inspired mages who work to cleanse the Pelagiris forests from magic left over from a long-ago war. They appeared briefly in Vanyel's story, since his aunt was friends with a farther-north tribe of them and Valdemar of the time had contact with mages. Darkwind and his people are far to the south, up against the rim of the Dhorisha crater. Something has gone horribly wrong with the Heartstone of k'Sheyna, his tribe: it cracked when being drained, killing most of the experienced mages including Darkwind's mother, and now it is subtly wrong, twisting and undermining the normal flow of magic inside their Vale. In the aftermath of that catastrophe, Darkwind has forsworn magic and become a scout, putting him sharply at odds with his father. And it's only a matter of time before less savory magic users in the area realize how vulnerable k'Sheyna is.

Up to this point in the Valdemar series, Lackey primarily did localized world-building to support the stories and characters she was writing about. Valdemar and its Heralds and Companions have been one of the few shared elements, and only rarely did the external magic-using world encounter them. Here, we get the first extended contact between the fairly naive Heralds and experienced mages who understand how they and their Companions fit into the broader system of magic. We also finally get the origin of the Dhorisha Plains and the Tayledras and Shin'a'in, and a much better sense of the broader history of this world. And Need, which started as Kethry's soul-bonded sword and then became Kerowyn's, joins the story in a much more active way.

The world-building is a definite feature if you like this sort of thing. It doesn't withstand too much thinking about the typical sword and sorcery lack of technology, but for retroactive coherence constructed from originally scattered stories, it's pretty fun. (I suspect part of why I like the Valdemar world-building is that it feels a lot like large shared universe world-building in comics.) And Need is the high point of the story: she brings a much-needed cynical stubbornness to the cast and is my favorite character in this book.

What is not a feature, unfortunately, is the characterization. Darkwind is okay but largely unremarkable here, more another instance of the troubled but ethical Tayledras type than a clearly defined character. But Elspeth is just infuriating, repeatedly making decisions and taking hard positions that seem entirely unwarranted by the recorded events of the book. This is made worse by how frequently she's shown to be correct in ways that seem like authorial cheating. At times, it feels like she's the heroine by authorial fiat, not because she's doing a particularly good job. I can muster some sympathy for not wanting to follow the plan of the Companions when it became clear they were acting entirely out of character and actively intervening, but she expresses that with petulant, childish insistence rather than reasoned argument. And she suddenly decides Skif is in love with her and is treating her like a fragile princess, on the basis of absolutely no action that occurs on camera in this book so far as I can tell, and proceeds to treat him like dirt for large sections of the book. That Skif then lives down to this suddenly negative impression doesn't help.

This book also has quite a lot of the U-shaped story arc in which everything gets worse and more horrific and more hopeless for the heroes over the course of the book until it turns into torture, and only then do they manage to claw their way back out. I've come to dislike this way of building tension. It's undeniably effective, but the parts of the story near the bottom of the U are difficult and painful reading. I prefer a bit more evenly-spread hurt/comfort storytelling in my popcorn fantasy reading.

Winds of Fate is, sadly, not a very good book. Most of the characterization is intensely irritating, the writing is a bit uneven, and the middle section of the book is rather painful to read. For me, though, that's balanced by the world-building and the sense of broadened scope, by Need's abrasive decisiveness, and by some really entertaining reactions to the combination of Elspeth, Need, and her Companion walking naive into the broader world. I still have a fond spot in my heart for it, but I'm hoping the remaining books of the trilogy are better.

Rating: 6 out of 10

Paul Tagliamonte: Minica - lightweight TLS for everyone!

15 August, 2016 - 07:40

A while back, I found myself in need of some TLS certificates set up and issued for a testing environment.

I remembered there was some code for issuing TLS certs in Docker, so I yanked some of that code and made a sensible CLI API over it.

Thus was born minica!

Something as simple as minica tag@domain.tld domain.tld will issue two TLS certs (one with a Client EKU, and one server cert), both from a single CA.

Next time you’re in need of a few TLS keys (without having to worry about stuff like revocation or anything), this might be the quickest way out!

Russ Allbery: Summer haul

15 August, 2016 - 06:08

The reality is that I've just not been reading very much, as you can probably tell from the lack of reviews here. Lots of other things have been occupying my time, including rather too much on-line political reading eating into my off-line book reading. But hope springs eternal, so more books have been acquired in the interim. Since I use these posts to keep myself from buying duplicates, in the absence of a real database that I've not yet written or set up, here they are:

Mishell Baker — Borderline (sff)
Curtis C. Chen — Waypoint Kangaroo (sff)
Mark Forster — Secrets of Productive People (nonfiction)
Yoon Ha Lee — Ninefox Gambit (sff)
Seanan McGuire — Every Heart a Doorway (sff)
Don Norman — The Design of Everyday Things (nonfiction)
Kristina Rizga — Mission High (nonfiction)
John Scalzi — Lock In (sff)

This is a pretty random collection of things from authors I know I like, non-fiction that looked really interesting from on-line reviews, the next book for book club reading for work (The Design of Everyday Things, which I've somehow never managed to read), and the first SF novel by an old college friend of mine (Waypoint Kangaroo by Curtis Chen).

Reproducible builds folks: Reproducible Builds: week 68 in Stretch cycle

15 August, 2016 - 05:38

What happened in the Reproducible Builds effort between Sunday August 7 and Saturday August 13 2016:

GSoC and Outreachy updates Reproducible work in other projects

Thomas Schmitt implemented a new -as mkisofs option:

--set_all_file_dates timestring

Set mtime, atime, and ctime of all files and directories to the given time.

Valid timestring formats are: 'Nov 8 14:51:13 CET 2007', 110814512007.13, 2007110814511300. See also --modification-date= and man xorriso, Examples of input timestrings.

This action stays delayed until mkisofs emulation ends. Up to then it can be revoked by --set_all_file_dates with empty timestring. In any case files which get into the ISO after mkisofs emulation ended will not be affected, unless another mkisofs emulation applies --set_all_file_date again.

LEDE developer Jonas Gorski submitted a patch to fix build times in their kernel:

kernel: allow reproducable builds

Similar how we fix the file times in the filesystems, fix the build time
of the kernel, and make the build number static. This should allow the
kernel build to be reproducable when combined with setting the
KERNEL_BUILD_USER and _DOMAIN in case of different machines.

The reproducability only applies to non-initramfs kernels, those still
require additional changes.
Signed-off-by: Jonas Gorski <>
Packages reviewed and fixed, and bugs filed

Patches have been submitted by:

Package reviews

28 reviews have been added, 4 have been updated and 7 have been removed in this week, adding to our knowledge about identified issues.

Issue types have been added/updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (23)
  • shirish शिरीष (1)
diffoscope development

strip-nondeterminism development
  • schedule testing/i386 more often than unstable+experimental, in order to see the results of building with build path variation. (h01ger)
  • spectranaut wrote a patch for using sqlalchemy which has yet to be merged.
  • Our build path variation tests on testing/i386 have brought the first results: the build essential package set is now 43% unreproducible, compared to "only" 26% on amd64. One conclusion from this is probably that the build essential maintainers should merge our patches; another is that build path variation is still a distant goal, which can also be seen "nicely" now in the general suite graph showing the impact of build path variation introduced last week. (h01ger)
  • Chris Lamb wrote a patch to improve the top-level navigation so that we can always get back to "home" of a package.
  • Chris Lamb also wrote a patch to explicitly log when a build was successful instead of it being implicit.

Chris started to ping old bugs with patches and no maintainer reaction so far.

This week's edition was written by Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

Steinar H. Gunderson: Linear interpolation, source alignment, and Debian's embedding policy

15 August, 2016 - 04:57

At some point back when the dinosaurs roamed the Earth and I was in high school, I borrowed my first digital signal processing book from a friend. I later went on to an engineering education and a master's thesis about DSP, but the very basics of DSP never cease to fascinate me. Today, I wanted to write something about one of them and how it affects audio processing in Nageru (and finally, how Debian's policies put me in a bit of a bind on this issue).

DSP texts tend to obscure profound truths with large amounts of maths, so I'll try to present a somewhat less general result that doesn't require going into the mathematical details. That rule is: Adding a signal to weighted, delayed copies of itself is a filtering operation. (It's simple, but ignoring it will have sinister effects, as we'll see later.)

Let's see exactly what that means with a motivating example. Let's say that I have a signal where I want to get rid of (or rather, reduce) high frequencies. The simplest way I can think of is to add each sample to its neighbor; that is, set y[n] = x[n] + x[n-1]. For each sample, we add the previous sample, ie., the signal as it was one sample ago. (We ignore what happens at the edges; the common convention is to assume signals extend out to infinity with zeros.)

What effect will this have? We can figure it out with some trigonometry, but let's just demonstrate it by plotting instead: We assume 48 kHz sample rate (which means that our one-sample delay is 20.83 µs) and a 22 kHz note (definitely treble!), and plot the signal with one-sample delay (the x axis is sample number):

As you can see, the resulting signal is a new signal of the same frequency (which is always true; linear filtering can never create new frequencies, just boost or dampen existing ones), but with much lower amplitude. The signal and the delayed version of it mostly cancel each other out. Also note that the signal has changed phase; the resulting signal has been delayed a bit compared to the original.

Now let's look at a 50 Hz signal (turn on your bass face). We need to zoom out a bit to see full 50 Hz cycles:

The original signal and the delayed one overlap almost exactly! For a lower frequency, the one-sample delay means almost nothing (since the waveform is varying so slowly), and thus, in this case, the resulting signal is amplified, not dampened. (The signal has changed phase here, too—actually exactly as much in terms of real time—but we don't really see it, because we've zoomed out.)

Real signals are not pure sines, but they can be seen as sums of many sines (another fundamental DSP result), and since filtering is a linear operation, it affects those sines independently. In other words, we now have a very simple filter that will amplify low frequencies and dampen high frequencies (and delay the entire signal a little bit). We can do this for all frequencies from 0 to 24000 Hz; let's ask Octave to do it for us:

(Of course, in a real filter, we'd probably multiply the result with 0.5 to leave the bass untouched instead of boosting it, but it doesn't really change anything. A real filter would have a lot more coefficients, though, and they wouldn't all be the same!)
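Since the plots can't be reproduced here, the same dampening-vs-boosting behavior can be checked with the closed-form magnitude response of the one-sample sum, |1 + e^{-jω}| = 2|cos(ω/2)| with ω = 2πf/fs. This is my own quick sketch, not from the original post:

```python
import math

def gain(f, fs=48000.0):
    """Magnitude response of y[n] = x[n] + x[n-1] at frequency f (Hz)."""
    w = 2.0 * math.pi * f / fs            # normalized angular frequency
    return abs(2.0 * math.cos(w / 2.0))   # |1 + exp(-jw)| = 2|cos(w/2)|

print(round(gain(22000), 3))  # treble at 22 kHz: dampened to about 0.26
print(round(gain(50), 3))     # bass at 50 Hz: boosted to about 2.0
print(round(gain(24000), 3))  # Nyquist: cancelled (essentially zero)
```

The 22 kHz note comes out at roughly a quarter of its amplitude, while the 50 Hz note is almost exactly doubled, matching the two plots described above.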

Let's now turn to a problem that will at first seem different: Combining audio from multiple different time sources. For instance, when mixing video, you could have input from two different cameras or sound cards and would want to combine them (say, a source playing music and then some audience sound from a camera). However, unless you are lucky enough to have a professional-grade setup where everything runs off the same clock (and separate clock source cables run to every device), they won't be in sync; sample clocks are good, but they are not perfect, and they have e.g. some temperature variance. Say we have really good clocks and they only differ by 0.01%; this means that after an hour of streaming, we have 360 ms delay, completely ruining lip sync!
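The arithmetic behind that 360 ms figure, as a throwaway check (not project code):

```python
drift_fraction = 0.0001          # a 0.01% clock rate mismatch
seconds_streamed = 3600.0        # one hour of streaming
drift_ms = seconds_streamed * drift_fraction * 1000.0
print(round(drift_ms))           # 360 milliseconds of accumulated offset
```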

This means we'll need to resample at least one of the sources to match the other; that is, play one of them faster or slower than it came in originally. There are two problems here: How do you determine how much to resample the signals, and how do we resample them?

The former is a difficult problem in its own right; just about every algorithm not backed by solid control theory is doomed to fail in one way or another, and when it fails, it's extremely annoying to listen to. Nageru follows a 2012 paper by Fons Adriaensen; GStreamer does… well, something else. It fails pretty badly in a number of cases; see e.g. this 2015 master's thesis that tries to patch it up. However, let's ignore this part of the problem for now and focus on the resampling.

So let's look at the case where we've determined we have a signal and need to play it 0.01% faster (or slower); in a real situation, this number would vary a bit (clocks are not even consistently wrong). This means that at some point, we want to output sample number 3000 and that corresponds to input sample number 3000.3, ie., we need to figure out what's between two input samples. As with so many other things, there's a way to do this that's simple, obvious and wrong, namely linear interpolation.

The basis of linear interpolation is to look at the two neighboring samples and weigh them according to the position we want. If we need sample 3000.3, we calculate y = 0.7 x[3000] + 0.3 x[3001] (don't switch the two coefficients!), or, if we want to save one multiplication and get better numerical behavior, we can use the equivalent y = x[3000] + 0.3 (x[3001] - x[3000]). And if we need sample 5000.5, we take y = 0.5 x[5000] + 0.5 x[5001]. And after a while, we'll be back on integer samples; output sample 10001 corresponds to x[10000] exactly.
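That scheme can be sketched in a few lines. This is a hypothetical illustration of the naive approach described above, not Nageru's or zita-resampler's code:

```python
def resample_linear(x, rate):
    """Naive linear-interpolation resampler; rate > 1 plays x faster.

    This is the simple, obvious, wrong approach the text warns about:
    it behaves as a time-varying low-pass filter on the signal.
    """
    out = []
    pos = 0.0                      # fractional read position in x
    while pos <= len(x) - 1:
        i = int(pos)
        frac = pos - i
        if frac == 0.0:
            out.append(x[i])       # landed exactly on an input sample
        else:
            # y = x[i] + frac * (x[i+1] - x[i])
            out.append(x[i] + frac * (x[i + 1] - x[i]))
        pos += rate
    return out

print(resample_linear([0, 1, 2, 3], 0.5))  # -> [0, 0.5, 1, 1.5, 2, 2.5, 3]
```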

By now, I guess it should be obvious what's going on: We're creating a filter! Linear interpolation will inevitably dampen high frequencies; even worse, we are creating a time-varying filter, which means that the amount of dampening varies over time. This manifests itself as a kind of high-frequency “flutter”, where the amount of flutter depends on the relative resampling frequencies. There's also cubic resampling (which can mean any of several different algorithms), but it only reduces the problem; it doesn't solve it.

The proper way of interpolating depends a lot on exactly what you want (e.g., whether you intend to change the rate quickly or not); this paper lays out a bunch of them, and was the paper that originally made me understand why linear interpolation is so bad. Nageru outsources this problem to zita-resampler, again by Fons Adriaensen; it yields extremely high-quality resampling under controlled delay, through a relatively common technique known as polyphase filters.
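
As a rough illustration of what such an interpolator looks like inside, here is a toy windowed-sinc evaluator in Python; a real polyphase resampler like zita-resampler precomputes these kernels per fractional phase, and the tap count and Hann window here are assumptions for the example, not zita-resampler's actual design:

```python
import math

def _sinc(u):
    # Normalized sinc: the ideal band-limited interpolation kernel.
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def sinc_interp(x, t, taps=32):
    """Evaluate x at fractional position t with a Hann-windowed sinc kernel.
    Toy stand-in for a polyphase resampler's inner loop; illustrative only."""
    i = int(math.floor(t))
    frac = t - i
    half = taps // 2
    acc = 0.0
    for k in range(-half + 1, half + 1):
        u = k - frac                                         # distance from target
        window = 0.5 * (1.0 + math.cos(math.pi * u / half))  # Hann window
        acc += x[i + k] * window * _sinc(u)
    return acc

# Interpolate a slow sine at position 1000.3 and compare with the true value;
# unlike linear interpolation, the error here is far below audibility.
x = [math.sin(2 * math.pi * 0.01 * n) for n in range(2000)]
true_val = math.sin(2 * math.pi * 0.01 * 1000.3)
approx = sinc_interp(x, 1000.3)
```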

Unfortunately, doing these calculations takes CPU. Not a lot of CPU, but Nageru runs in rather CPU-challenged environments (ultraportable laptops where the GPU wants most of the TDP, and the CPU has to go down to the lowest frequency), and it is moving in a direction where it needs to resample many more channels (more on that later), so every bit of CPU helps. So I coded up an SSE optimization of the inner loop for a particularly common case (stereo signals) and sent it in for upstream inclusion. (It made the code 2.7 times as fast without any structural changes or reduced precision, which is pretty much what you can expect from SSE.)

Unfortunately, after a productive discussion, upstream suddenly went silent. I tried pinging, pinging again, and after half a year pinging yet again, but to no avail. I filed the patch in Debian's BTS, but the maintainer is understandably reluctant to carry a delta against upstream.

I also can't embed a copy; Debian policy would dictate that I build against the system's zita-resampler. I could work around it by rewriting zita-resampler until it looks nothing like the original, which might be a good idea anyway if I wanted to squeeze out the last drops of speed; there are AVX optimizations to be had in addition to SSE, and the structure as-is isn't ideal for SSE optimizations (although some of the changes I have in mind would have to be offset against increased L1 cache footprint, so careful benchmarking would be needed). But in a sense, it feels like just working around a policy that's there for good reason. So like I said, I'm in a bit of a bind. Maybe I should just buy a faster laptop.

Oh, and how does GStreamer solve this? Well, it doesn't use linear interpolation. It does something even worse—it uses nearest neighbor. Gah.

Chris Lamb: CLI client

15 August, 2016 - 01:43

One criminally-unknown new UNIX tool is diffoscope, a diff "on steroids" that will not only recursively unpack archives but will transform binary formats into human-readable forms in order to compare them instead of simply showing the raw difference in hexadecimal.

In an attempt to remedy its underuse, in December 2015 I created the service so that I—and hopefully others—could use diffoscope without necessarily installing the multitude of third-party tools that using it can require. It also enables trivial sharing of the HTML reports in bugs or on IRC.

To make this even easier, I've now introduced a command-line client to the web service:

 $ apt-get install trydiffoscope
 Setting up trydiffoscope (57) ...

 $ trydiffoscope /etc/hosts.allow /etc/hosts.deny
 --- a/hosts.allow
 +++ b/hosts.deny
│ @@ -1,10 +1,17 @@
│ -# /etc/hosts.allow: list of hosts that are allowed to access the system.
│ -#                   See the manual pages hosts_access(5) and hosts_options(5).
│ +# /etc/hosts.deny: list of hosts that are _not_ allowed to access the system.
│ +#                  See the manual pages hosts_access(5) and hosts_options(5).

You can also install it from PyPI with:

$ pip install trydiffoscope

Mirroring the original diffoscope command, you can save the output locally in an even more-readable HTML report format by appending "--html output.html".

In addition, if you specify the --webbrowser (or -w) argument:

$ trydiffoscope -w /etc/hosts.allow /etc/hosts.deny

... this will automatically open your default browser to view the results.

Dirk Eddelbuettel: rfoaas 1.0.0

15 August, 2016 - 01:16

The big 1.0.0 is here! Following the unsurpassed lead of the FOAAS project, we have arrived at a milestone: Release 1.0.0 is now on CRAN.

The rfoaas package provides an interface for R to the most excellent FOAAS service--which itself provides a modern, scalable and RESTful web service for the frequent need to tell someone to f$#@ off.

Release 1.0.0 brings fourteen (!!) new access points: back(), bm(), gfy(), think(), keep(), single_(), look(), looking(), no(), give(), zero(), pulp(), sake(), and anyway(). All with documentation and testing.

Even more neatly, thanks to a very nice pull request by Tyler Hunt, we can now issue random FOs via the getRandomFO() function!

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc. should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sven Hoexter: handling html mails with mutt and convincing Icedove to open http/https links in Firefox

15 August, 2016 - 01:05

... or the day I fixed my mail clients running on Debian/stretch.

First of all, mutt failed to open html mails or the html multipart stuff in Firefox. I found some interesting hints in a recent thread on debian-user. So now my "~/.mailcap" looks like this:

text/html; /usr/bin/firefox --new-tab %s;
text/html; /usr/bin/elinks -force-html -dump %s; copiousoutput

and I added the proposed "~/.muttrc" addition verbatim:

bind  attach  <return>  view-mailcap
alternative_order text/plain text/html
unauto_view *
auto_view text/html

For work-related mails, where the use of html crap mails is a sad reality I cannot avoid, I stick to Icedove. But besides the many crashes everyone has encountered recently, it also crashes when I try to reach "Preferences -> Advanced -> Config Editor". So no chance to adjust the handling of http/https links in the UI. Luckily that configuration is still text, well, XML, in a file called mimeTypes.rdf in the profile directory. So I manually replaced "/usr/bin/iceweasel" with "/usr/bin/firefox", and a restart later, clicking on http and https links works again. Yay.

David Moreno: Ruby and libv8: Exactly my feelings

15 August, 2016 - 00:01

Thanks to my coworker Dan for making a whole bunch of these based on our day job adventures :)

Jaminy Prabaharan: GSoC-2016 Journey (In brief)

14 August, 2016 - 23:21

Three months of coding are about to end. They officially began on May 23rd, and we are nearing the final submission deadline on August 15th. The following is a timeline of what we have gone through over these three months.

You can also checkout my Debian wiki page to know more about myself.

I have worked on improving voice, video and chat communication (Real Time Communication) with free software, the RTC project for Debian (the universal OS).

You can read about my project at the following link.

My mentors were Iain R. Learmonth and Daniel Pocock. Both of them were dedicated, and I learned many new things from them within these three months. I contacted my mentors through personal mail, the Debian outreach mailing lists and IRC (#debian-data and #debian-soc). They were very responsive to my queries. Thank you, Iain and Daniel, for improving and enlightening me.

My initial task was e-mail mining: allow the client to log in to a mailbox using IMAP, extract the “To”, “From” and “CC” headers of every email message in the folder, and then scan the body of each message for phone numbers, SIP addresses and XMPP addresses. These extracted details should also be written to a CSV file, and the extracted phone numbers, SIP addresses, XMPP addresses and ham call signs should be made into clickable links using Flask.

I also attended DebConf16 (the conference of Debian developers, contributors and users) in Cape Town in the middle of the three months (from July 2nd to July 9th). I gave a talk on my in-progress GSoC project, and apart from my GSoC project I learnt many new things about Debian, its projects and its packages. I also met some of the fellow GSoC students.

I have written previous blog posts related to GSoC-2016 at the following link.

I have also sent my weekly reports, up to the last week (i.e. week 11), to the mailing list.

E-mail mining is the repository I have created on GitHub to work on my project.

I divided the work into tasks, coded each one individually, and then combined them. The snippets folder in the repository contains the code for each task.

Following are the commits I have made in the repository.

My tasks were then extended: add a gravatar on the page listing details for each email address, maintain a counter for each hour of the day in the scraper for each mail, show a list of other people that have been involved in email conversations, and make the contact information on the detail pages machine-readable.

My mentor suggested I work on at least three issues before the final submission. I have worked on each of them individually in the snippets folder, except the last one, which I will work on after GSoC. The final script combines all the snippets into one.

Three pending pull requests to be merged after the confirmation from my mentor.

This is a brief summary of what I have done within these three months.

It was an amazing and thrilling coding ride.

Stay tuned for the elaborated blog posts with DebConf experience and many more.



David Moreno: One year

14 August, 2016 - 21:52

A year ago today, I started at my current job. I wish I could say that the time has flown by, but it really hasn't. It has been one hell of a ride on all fronts: work, learning experiences, friends, but especially at home, since working for this company didn't come without a complete life change. So far so good, and for that I'm grateful :)

David Moreno: Twitter feed for Planet Perl Iron Man

14 August, 2016 - 21:13

I like to read Planet Perl Iron Man, but since I’m less and less of a user of Google Reader these days, I prefer to follow interesting feeds on Twitter instead: I don’t have to digest all of the content on my timeline, only when I’m in the mood to see what’s going on out there. So, if you wanna follow the account, find it as @perlironman. If interested, you can also follow me. That is all.

David Moreno: Feedbag released under MIT license

14 August, 2016 - 21:13

I was contacted by Pivotal Labs regarding licensing of Feedbag. I guess releasing open source software as GPL only makes sense if you continue to live under a rock. I’ve bumped the version to 0.9 and released it under MIT license.

Feedbag 1.0, which I plan to work on during the following days will bring in a brand new shiny backend powered by Nokogiri, instead of Hpricot (I mean, give me a break, I’m trying to catch up with the Ruby community, after all I’m primarily a Perl guy :D) and hopefully I will be able to recreate most of the feed auto-discovery test suite that Mark Pilgrim retired (410 Gone) when he committed infosuicide.

Have a good weekend!

David Moreno: Deploying a Dancer app on Heroku

14 August, 2016 - 21:08

There are a few different posts out there on how to run Perl apps, such as Mojolicious-based ones, on Heroku, but I’d like to show how to deploy a Perl Dancer application on Heroku.

The startup script of a Dancer application (bin/ can be used as a PSGI file. With that in mind, I was able to take the good work of Miyagawa’s Heroku buildpack for general PSGI apps and hack it a little bit to use Dancer’s, specifically. What I like about Miyagawa’s approach is that it uses the fantastic cpanm and makes it available within your application, instead of the monotonous cpan, to solve dependencies.

Let’s make a simple Dancer app to show how to make this happen:

/tmp $ dancer -a heroku
+ heroku
+ heroku/bin
+ heroku/bin/
+ heroku/config.yml
+ heroku/environments
+ heroku/environments/development.yml
+ heroku/environments/production.yml
+ heroku/views
+ heroku/views/
+ heroku/views/layouts
+ heroku/views/layouts/
+ heroku/lib
+ heroku/lib/
+ heroku/public
+ heroku/public/css
+ heroku/public/css/style.css
+ heroku/public/css/error.css
+ heroku/public/images
+ heroku/public/500.html
+ heroku/public/404.html
+ heroku/public/dispatch.fcgi
+ heroku/public/dispatch.cgi
+ heroku/public/javascripts
+ heroku/public/javascripts/jquery.js
+ heroku/t
+ heroku/t/002_index_route.t
+ heroku/t/001_base.t
+ heroku/Makefile.PL

Now, you already know that by firing up perl bin/ you can get your development server up and running. So I’ll just proceed to show how to make this work on Heroku; you should already have your development environment configured for it:

/tmp $ cd heroku/
/tmp/heroku $ git init
Initialized empty Git repository in /private/tmp/heroku/.git/
/tmp/heroku :master $ git add .
/tmp/heroku :master $ git commit -a -m 'Dancer on Heroku'
[master (root-commit) 6c0c55a] Dancer on Heroku
22 files changed, 809 insertions(+), 0 deletions(-)
create mode 100644 MANIFEST
create mode 100644 MANIFEST.SKIP
create mode 100644 Makefile.PL
create mode 100755 bin/
create mode 100644 config.yml
create mode 100644 environments/development.yml
create mode 100644 environments/production.yml
create mode 100644 lib/
create mode 100644 public/404.html
create mode 100644 public/500.html
create mode 100644 public/css/error.css
create mode 100644 public/css/style.css
create mode 100755 public/dispatch.cgi
create mode 100755 public/dispatch.fcgi
create mode 100644 public/favicon.ico
create mode 100644 public/images/perldancer-bg.jpg
create mode 100644 public/images/perldancer.jpg
create mode 100644 public/javascripts/jquery.js
create mode 100644 t/001_base.t
create mode 100644 t/002_index_route.t
create mode 100644 views/
create mode 100644 views/layouts/
/tmp/heroku :master $

And now, run heroku create, please note the buildpack URL,

/tmp/heroku :master $ heroku create --stack cedar --buildpack
Creating blazing-beach-7280... done, stack is cedar |
Git remote heroku added
/tmp/heroku :master $

And just push:

/tmp/heroku :master $ git push heroku master
Counting objects: 34, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (30/30), done.
Writing objects: 100% (34/34), 40.60 KiB, done.
Total 34 (delta 3), reused 0 (delta 0)

-----> Heroku receiving push
-----> Fetching custom buildpack... done
-----> Perl/PSGI Dancer! app detected
-----> Bootstrapping cpanm
Successfully installed JSON-PP-2.27200
Successfully installed CPAN-Meta-YAML-0.008
Successfully installed Parse-CPAN-Meta-1.4404 (upgraded from 1.39)
Successfully installed version-0.99 (upgraded from 0.77)
Successfully installed Module-Metadata-1.000009
Successfully installed CPAN-Meta-Requirements-2.122
Successfully installed CPAN-Meta-2.120921
Successfully installed Perl-OSType-1.002
Successfully installed ExtUtils-CBuilder-0.280205 (upgraded from 0.2602)
Successfully installed ExtUtils-ParseXS-3.15 (upgraded from 2.2002)
Successfully installed Module-Build-0.4001 (upgraded from 0.340201)
Successfully installed App-cpanminus-1.5015
12 distributions installed
-----> Installing dependencies
Successfully installed ExtUtils-MakeMaker-6.62 (upgraded from 6.55_02)
Successfully installed YAML-0.84
Successfully installed Test-Simple-0.98 (upgraded from 0.92)
Successfully installed Try-Tiny-0.11
Successfully installed HTTP-Server-Simple-0.44
Successfully installed HTTP-Server-Simple-PSGI-0.14
Successfully installed URI-1.60
Successfully installed Test-Tester-0.108
Successfully installed Test-NoWarnings-1.04
Successfully installed Test-Deep-0.110
Successfully installed LWP-MediaTypes-6.02
Successfully installed Encode-Locale-1.03
Successfully installed HTTP-Date-6.02
Successfully installed HTML-Tagset-3.20
Successfully installed HTML-Parser-3.69
Successfully installed Compress-Raw-Bzip2-2.052 (upgraded from 2.020)
Successfully installed Compress-Raw-Zlib-2.054 (upgraded from 2.020)
Successfully installed IO-Compress-2.052 (upgraded from 2.020)
Successfully installed HTTP-Message-6.03
Successfully installed HTTP-Body-1.15
Successfully installed MIME-Types-1.35
Successfully installed HTTP-Negotiate-6.01
Successfully installed File-Listing-6.04
Successfully installed HTTP-Daemon-6.01
Successfully installed Net-HTTP-6.03
Successfully installed HTTP-Cookies-6.01
Successfully installed WWW-RobotRules-6.02
Successfully installed libwww-perl-6.04
Successfully installed Dancer-1.3097
29 distributions installed
-----> Installing Starman
Successfully installed Test-Requires-0.06
Successfully installed Hash-MultiValue-0.12
Successfully installed Devel-StackTrace-1.27
Successfully installed Test-SharedFork-0.20
Successfully installed Test-TCP-1.16
Successfully installed Class-Inspector-1.27
Successfully installed File-ShareDir-1.03
Successfully installed Filesys-Notify-Simple-0.08
Successfully installed Devel-StackTrace-AsHTML-0.11
Successfully installed Plack-0.9989
Successfully installed Net-Server-2.006
Successfully installed HTTP-Parser-XS-0.14
Successfully installed Data-Dump-1.21
Successfully installed Starman-0.3001
14 distributions installed
-----> Discovering process types
Procfile declares types -> (none)
Default types for Perl/PSGI Dancer! -> web
-----> Compiled slug size is 2.7MB
-----> Launching... done, v4 deployed to Heroku

* [new branch] master -> master
/tmp/heroku :master $

And you can confirm it works:

Please note that the environment it runs on is “deployment”. The backend server it uses is the great Starman, also by the great Miyagawa.

Now, if you add or change dependencies on Makefile.PL, next time you push, those will get updated.

Very cool, right? :)

Jamie McClelland: Noam use Gnome

14 August, 2016 - 07:41

I don't quite remember when I read John Goerzen's post about teaching a 4-year-old to use the Linux command line with audio on Planet Debian. According to the byline it was published nearly 2 years before Noam was born, but I seem to remember reading it in the weeks after his birth, when I was both thrilled at the prospect of teaching my kid to use the command line and, in my sleepless stupor, not entirely convinced he would ever be old enough.

Well, the time came this morning. He found an old USB keyboard and discovered that a green light came on when he plugged it in. He was happily hitting the keys when Meredith suggested we turn on the monitor and open a program so he could see the letters appear on the screen and try to spell his name.

After 10 minutes in LibreOffice I remembered John's blog and was inspired to start writing a bash script in my head (I would have to stop the fun with LibreOffice to write it, so the pressure was on...). In the end it only took a few minutes, and I came up with:


#!/bin/bash
# Read a line, speak it aloud, repeat forever (Ctrl-C to quit).
while true; do
  read -p "What shall I say? "
  espeak "$REPLY"
done

It was a hit. He said what he wanted to hear and hit the keys, my job was to spell for him.

Oh, also: he discovered key combinations that did things that were unsurprising to me (like taking the screen grab above) and also things that I'm still scratching my head about (like causing a prompt on the screen that said: "Downloading shockwave plugin." No thanks. And how did he do that?)


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.