Planet Debian


Soeren Sonnenburg: CfP: Shogun Machine Learning Workshop, July 12-14, Berlin, Germany

18 April, 2013 - 02:37
CALL FOR PARTICIPATION: Shogun Machine Learning Workshop, Berlin, Germany, July 12-14, 2013

Data Science and Big Data are omnipresent terms, documenting the need for automated tools to analyze the ever-growing wealth of data. To this end, we invite practitioners, researchers and students to participate in the first Shogun machine learning workshop. While the workshop is centered around the development and use of the Shogun machine learning toolbox, it will also feature general machine learning subjects.

General Information

The workshop will include:
  • A general introduction to machine learning given by Gunnar Raetsch.
  • Introductory talks on, e.g., dimension reduction techniques, kernel statistical testing, Gaussian processes, and structured output learning.
  • Contributed talks, a poster session, and a poster spotlight.
  • A discussion panel.
  • A hands-on session on July 13-14.

Do not miss the chance to familiarize yourself with the Shogun machine learning toolbox for solving various data analysis tasks, and to talk to its authors and contributors. The workshop program will cover basic to advanced topics in machine learning and how to approach them using Shogun, which makes it suitable for anyone, whether you are a senior researcher or practitioner with many years of experience, or a junior student eager to discover much more. Interested?

A tentative schedule is available at

Call for contributions

The organizing committee is seeking workshop contributions. The committee will select several submitted contributions for 15-minute talks and poster presentations. The accepted contributions will also be published on the workshop web site.

Amongst other topics, we encourage submissions that

  • are applications / publications utilizing Shogun
  • are highly relevant to practitioners in the field
  • are of broad general interest
  • are extensions to Shogun
Submission Guidelines

Send an abstract of your talk/contribution to before June 1. Notifications will be given on June 7.


Workshop registration is free of charge. However, only a limited number of seats are available. First-come, first-served! Register by filling out the registration form.

Location and Timeline

The main workshop will take place at c-base Berlin on July 12. It is followed by an additional two-day hands-on session held at TU Berlin on July 13-14.

About the Shogun machine learning toolbox

Shogun is designed for unified large-scale learning for a broad range of feature types and learning settings, such as classification, regression, and exploratory data analysis. Further information is available at

Daniel Pocock: Bitcoin grabbing headlines

18 April, 2013 - 01:40

Seeking Alpha has published my contribution rebutting Paul Krugman's recent comments on Bitcoin. While it makes no prediction about what the future holds for Bitcoin prices (and offers no investment advice), the compelling message is that Bitcoin does have technical value as a distributed messaging system for electronic payment. This may be compared to the value of other intangible services, like a mobile phone contract that provides a communications service. There are still risks: a better solution (faster, with backwards compatibility) could be developed by a powerful company like Google, and Bitcoins would become redundant. Or somebody could crack the algorithm. However, crypto-currency as a concept deserves a much deeper analysis of how it could be useful to society.

Daniel Pocock: Switzerland Schilthorn with 007 (leg 3)

17 April, 2013 - 21:40

Friday's video of the 2nd leg of the cable car journey was quite popular; the Ganglia stats for the video server can't be wrong:

The first leg was skipped because of the low cloud cover. Weather reports in Switzerland often show a distinction between low clouds and high clouds, because many Swiss mountain resorts are high enough to enjoy ample sunshine by virtue of their altitude even when cities like Zurich and Geneva are overcast.

Now for the third leg, rising up to Berg. In the video it is hard to gauge the size of the rocky cliffs along the route, but looking at the cable car's shadow gives a clue. As the cable car approaches the station in Berg, it feels like it takes forever to complete the journey; that is because the rocks are so much bigger than they appear from a distance.

<video controls="" height="340" poster="" width="560">
<source src="" type="video/mp4"></source>
<source src="" type="video/webm"></source>
</video>

The final leg to the Schilthorn summit will appear shortly. I understand many people are starting to contemplate their DebConf13 travel and summer holidays - please feel free to email me if you'd like me to post a video of some other feature in Switzerland.

Attachment: ganglia-schilthorn2.png (8.71 KB)

Gerfried Fuchs: Pentatonix

17 April, 2013 - 18:01

I know it's been ages since I last blogged anything at all. To some degree I had a down phase, but I hope to get out of it. It's nice to see that there are people out there who give me a prod every now and then and don't let me drown. Thanks! Most of you probably know that I mean exactly you; and in case you are uncertain, you probably are meant if you contact me every now and then.

Anyway, if you remember that I blogged about Lindsey Stirling last year and started to follow her, you might have already stumbled upon the next band I'd like to present: Pentatonix. These five humans have terrific voices, which they use in a very special way that is quite unique.

I don't want to delay the songs I want to present to you any longer, so here they are:

As always, enjoy! And hopefully you'll be reading me more regularly again.

/music | permanent link | Comments: 1

Joerg Jaspert: tmux - tm update

17 April, 2013 - 16:48

Just did a small update to my tm helper script.

There is now a new variable, TMSESSHOST. If that is set to false, the hostname of the current machine won’t be prepended to the generated session name. true is the default, as that was the behaviour in the past.

It now can attach to any session, not just the ones it created itself, thus nearly entirely removing my need of ever calling tmux directly.

And cosmetically - window names for sessions defined in the .tmux.d directory are no longer plain “Multisession”, but the session name as defined in the file.
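Based on that description, the session-naming behaviour presumably looks something like this (a sketch, not the actual tm code; the exact variable handling is an assumption):

```shell
# Hypothetical sketch of tm's session naming as described above.
name="mysession"
if [ "${TMSESSHOST:-true}" = "true" ]; then
    # default: prepend the current machine's hostname, as in older versions
    session="$(hostname -s)_${name}"
else
    session="${name}"
fi
echo "$session"
```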

If you are interested, you can get it over here from my git repository of misc stuff.

Joey Hess: Template Haskell on impossible architectures

17 April, 2013 - 14:41

Imagine you had an excellent successful Kickstarter campaign, and during it a lot of people asked for an Android port to be made of the software. Which is written in Haskell. No problem, you'd think -- the user interface can be written as a local webapp, which will be nicely platform agnostic and so make it easy to port. Also, it's easy to promise a lot of stuff during a Kickstarter campaign. Keeps the graph going up. What could go wrong?

So, rather later you realize there is no Haskell compiler for Android. At all. But surely there will be eventually. And so you go off and build the webapp. Since Yesod seems to be the pinnacle of type-safe Haskell web frameworks, you use it. Hmm, there's this Template Haskell stuff that it uses a lot, but it only makes compiles a little slow, and the result is cool, so why not.

Then, about half-way through the project, it seems time to get around to this Android port. And, amazingly, a Haskell compiler for Android has appeared in the meantime. Like the Haskell community has your back. (Which they generally seem to.) It's early days and rough, lots of libraries need to be hacked to work, but it only takes around 2 weeks to get a port of your program that basically works.

But, no webapp. Cause nobody seems to know how to make a cross-compiling Haskell compiler do Template Haskell. (Even building a fully native compiler on Android doesn't do the trick. Perhaps you missed something though.)

At this point you can give up and write a separate Android UI (perhaps using these new Android JNI bindings for Haskell that have also appeared in the meantime). Or you can procrastinate for a while, and mull it over; consider rewriting the webapp to not use Yesod but some other framework that doesn't need Template Haskell.

Eventually you might think this: If I run ghc -ddump-splices when I'm building my Yesod code, I can see all the thousands of lines of delicious machine generated Haskell code. I just have to paste that in, in place of the Template Haskell that generated it, and I'll get a program I can build on Android! What could go wrong?

And you even try it, and yeah, it seems to work. For small amounts of code that you paste in and carefully modify and get working. Not a whole big, constantly improving webapp where every single line of html gets converted to types and lambdas that are somehow screamingly fast.

So then, let's automate this pasting. And so the EvilSplicer is born!

That's a fairly general-purpose Template Haskell splicer. First do a native build with -ddump-splices output redirected to a log file. Run the EvilSplicer to fix up the code. Then run an Android cross-compile.

But oh, the caveats. There are so many ways this can go wrong..

  • The first and most annoying problem you'll encounter is that often Template Haskell splices refer to hidden symbols that are not exported from the modules that define the splices. This lets the splices use those symbols, but prevents them being used in your code.

    This does not seem like a good part of the Template Haskell design, to be honest. It would be better if it required all symbols used in splices to be exported.

    But it can be worked around. Just use trial and error to find every Haskell library that does this, and then modify them to export all the symbols they use. And after each one, rebuild all libraries that depend on it.

    You're very unlikely to end up with more than 9 thousand lines of patches. Because that's all it took me..

  • The next problem (and the next one, and the next ...) is that while the code output by GHC's -ddump-splices (and indeed by GHC error messages, etc.) looks like valid Haskell code to the casual viewer, it's often not.

    To start with, it often has symbols qualified with the package and module name. ghc-prim:GHC.Types.: does not work well where code originally contained :.

    And then there's fun with multi-line strings, which sometimes cannot be parsed back in by GHC in the form it outputs them.

    And then there's the strange way GHC outputs case expressions, which is not valid Haskell at all. (It's missing some semicolons.)

    Oh, and there's the lambda expressions that GHC outputs with insufficient parentheses, leading to type errors at compile time.

    And so much more fun. Enough fun to give one the idea that this GHC output has never really been treated as code that could be run again. Because that would be a dumb thing to need to do.

  • Just to keep things interesting, the Haskell libraries used by your native GHC and your Android GHC need to be pretty much identical versions. Maybe a little wiggle room, but any version skew could cause unwanted results. Probably, most of the time, unwanted results in the form of a 3 screen long type error message.

    (My longest GHC error message seen on this odyssey was actually a full 500+ kilobytes in size. It included the complete text of jQuery and Bootstrap. At times like these you notice that GHC outputs its error messages o.n.e . c.h.a.r.a.c.t.e.r . a.t . a . t.i.m.e.)
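As a concrete illustration of the first of those parse problems, the package/module qualifications can be stripped with a simple textual rewrite. This is a rough sed sketch of the idea, not the real EvilSplicer logic, which handles many more cases:

```shell
# Strip "pkg:Module.Path." qualifications that -ddump-splices emits,
# e.g. turn "ghc-prim:GHC.Types.:" back into ":".
# Rough approximation only; it ignores strings, comments, etc.
fix_quals() {
    sed 's/[a-z0-9-]*:\([A-Z][A-Za-z0-9]*\.\)\{1,\}//g'
}

echo 'x ghc-prim:GHC.Types.: xs' | fix_quals    # → x : xs
```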

Anyway, if you struggle with it, or pay me vast quantities of money, your program will, eventually, link. And that's all I can promise for now.

PS, I hope nobody will ever find this blog post useful in their work.

PPS, Also, if you let this put you off Haskell in any way .. well, don't. You just might want to wait a year or so before doing Haskell on Android.

Aigars Mahinovs: China 6 – the Internet

17 April, 2013 - 05:24

China is famous not only for the stone wall in the north of the country, but also for the Great Chinese Firewall around the entire country's Internet, which blocks everything in sight and slows down everything else. My first personal encounter with this "service" happened already at Shanghai airport, where it quickly turned out that not only Facebook but also Twitter is blocked in the country, which considerably complicated my options for quickly and easily letting everyone know that I was still alive and well. After a couple of experiments it turned out that, although the Google+ web page is not reachable from the phone and the Google+ (and WhatsApp) Android apps cannot be downloaded, both services keep working from the phone if they are already installed. So I started writing my travel notes on Google+, and a few days into the trip I even managed to configure the If This Then That service to take my Google+ posts and turn them into Twitter posts (which then spread through other channels to Facebook and Draugiem, and also show up as a weekly summary on this blog).

Google+ has its pluses, but also its minuses. The main minus I noticed on this trip is that the Google+ Android application cannot keep several draft posts (ideally each with its own geolocation): without Internet you can write only one post, and that post's GPS coordinates will be those of the place where the Internet reappears. I have already written to Google about this problem.

The main plus of Google Plus (no pun intended) is Instant Upload: if you take photos with an Android phone, they are automatically uploaded and appear in the new-post interface, where they can be attached to a post with one click and no waiting. Unfortunately this does not work with normal cameras. For now...

But I would not be a real computer scientist if I did not try to crack or work around this little China problem, right?

The simplest way to get around the Great Chinese Firewall is to use any VPN that not only gives access to the VPN network's resources, but also allows routing all traffic through the VPN connection. Such VPN accounts can be bought, or (if you have a Linux server or router outside China) you can set one up yourself. In my case it was OpenVPN, enabled with one click on the Fonera router sitting in my home.

Unfortunately, China is ... peculiar. The list of blocked pages, ports and protocols varies between regions, depends on whether the Internet is mobile, wifi or wired, and simply changes from day to day. In many cases VPN connections end up on the blocked list as well, quite often private ones too. I somehow doubt that my home IP address is on the Chinese firewall's lists, yet sometimes I could not connect even to that VPN.

And in such situations, if you want to watch some YouTube video, only one brilliant solution remains: sshuttle! This brilliant tool creates something like a VPN connection over the ordinary SSH port and protocol. On the local machine you need Python and root rights, but on the server you only need permission to run Python programs. sshuttle sends itself to the server and starts itself there, then encrypts and forwards all connections, and also DNS requests if you ask it to. You can forward specific networks or all traffic. And the speed: in my experience it was even faster than a regular VPN.
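For reference, a typical invocation matching that setup might look like this (the host name is a placeholder):

```shell
# Tunnel all IPv4 traffic, plus DNS lookups, through an SSH connection
# to a server outside the firewall. Needs root locally and only
# Python + an SSH login on the remote end.
sshuttle --dns -r user@server.example.org 0.0.0.0/0
```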

Overall, the Internet blockade and the Internet's general slowness are one very strong minus for China. Jumping a bit ahead in the story, I'll say that Hong Kong has no such problem: the Internet there is excellent! Consider that a teaser.

Steve Kemp: sysadmin tools

17 April, 2013 - 03:31

This may be useful, may become useful, or may not:

Michal Čihař: Weblate 1.5

16 April, 2013 - 23:30

Weblate 1.5 has been released today. It comes with a lot of improvements, especially in performance, reporting, and support for machine translations.

Full list of changes for 1.5:

  • Please check the manual for upgrade instructions.
  • Added public user pages.
  • Better naming of plural forms.
  • Added support for TBX export of glossary.
  • Added support for Bitbucket notifications.
  • Activity charts are now available for each translation, language or user.
  • Extended options of import_project admin command.
  • Compatible with Django 1.5.
  • Avatars are now shown using libravatar.
  • Added possibility to pretty print JSON export.
  • Various performance improvements.
  • Indicate failing checks or fuzzy strings in progress bars for projects or languages as well.
  • Added support for custom pre-commit hooks and committing additional files.
  • Rewritten search for better performance and user experience.
  • New interface for machine translations.
  • Added support for monolingual po files.
  • Extend amount of cached metadata to improve speed of various searches.
  • Now shows word counts as well.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Ready-to-run appliances will soon be available in SUSE Studio Gallery.

Weblate is also being used as official translating service for phpMyAdmin, Gammu, Weblate itself and others.

If you are a free software project that would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Filed under: English Phpmyadmin Suse Weblate | 0 comments

Stefano Zacchiroli: bits from the DPL for March-April 2013

16 April, 2013 - 22:52

Dear Project Members,

   "Now that I have your attention, I would like to make the following

... nah, scrap that. On my last day in office I first of all owe you a report of DPL activities for the last reporting period of this term, i.e. March 8th until today. Here it is!

  • At LibrePlanet (see below) I've discussed at length with Karen Sandler as GNOME representative the possibility of Debian participation in the FOSS Outreach Program for Women. I've then proposed that we do participate and, as you might have read on d-d-a, we're now doing that. Many thanks to the volunteer co-organizers for Debian participation in the program: Mònica Ramírez Arceda, Ana Guerrero López, and Patty Langasek.

  • A couple of years of work with the auditors has come together. At you can now find Debian monetary transactions for the 2010-2013 period. Note that:

    1) They are not all of our transactions, most notably because we haven't yet managed to get access to all our bank transactions at SPI (while we do have access to other transactions there, e.g. donations). Given the relevance of the missing transactions for our budget, this is a blocker for producing meaningful public periodic reports of Debian finances. This is clearly annoying, but I'm confident that our feedback to SPI over the past years has helped them better understand our needs and improve. I hope this could be finally solved during the next DPL term. And,

    2) Donor names have been anonymized in the ledger files, pending a donation system that allows donors to express privacy preferences. Complete donation information is available in the companion ledger.git repository, which is accessible to auditors only.

    I'd like to thank the Debian auditors, and in particular Martin Michlmayr, for their amazing work on this over the past 3 years.

  • As you might have noticed, we now have an official Debian Project blog, finally entering the brave new Web 1.5 era. I've only helped "politically" here and there with this over the years, and I'm happy to see it live. Your thanks for this should go to the blog editors, Francesca Ciceri and Ana Guerrero Lopez, and to DSA for making it real. A proper delegation for the editors is pending, and I'm confident the next DPL will pick it up.


Over the past month or so I've attended and spoken on behalf of Debian in the following occasions:

  • "Debian: 20 years and counting" talk at the New York Linux User Group; slides are available. Many thanks to Brian Gupta and Tom Limoncelli for inviting me.

  • "Debian and GNU" talk at LibrePlanet 2013; slides are available. Many thanks to John Sullivan and the Free Software Foundation for inviting me to talk at their main conference. My presence there has also been a chance to reassess the status of collaboration with FSF (see John's brief summary) and discuss further technical collaboration with the Trisquel maintainers.

  • "Legal issues from a radical community angle" keynote at the yearly workshop of FSFE's European Legal Network; slides are available. The talk has also been covered by a LWN feature article last week (the link should become unembargoed for non-LWN subscribers starting tomorrow).


I've approved the budget for the following forthcoming sprints:

Also, we've bought a 3-year warranty pack for the disk array that powers ftp-master.d.o (~900 USD).

On the income side, Brian Gupta has started an interesting matching fund experiment, in order to raise funds for the forthcoming DebConf13. The matching fund will be open until April 30th, so your help in spreading news would be welcome. Many thanks to Brian for the idea and to his company, Brandorr Group, for funding it.

DPL helpers

Three more DPL helpers IRC meetings have been held; minutes are available at the usual place.

Legal Spring Cleaning

I've finally cleaned up the pile of pending legal matters (but I'm sure new ones will show up for the delight of the next DPL :-P)

  • one is merely internal to trademark@d.o: our procedures for (n)acks on incoming requests have now been vetted by our legal advisors

  • the second one is relevant for our service: one of the blockers to making it official has historically been DMCA-related concerns. We now have a DMCA policy for (wannabe) mentors.d.o, which I've shared with the service maintainers and DSA. This specific part should no longer be a concern.

  • the last one concerns the possibility of playing DVDs with Debian. We now have legal guidelines on how to include installer packages that allow doing so; that should give us a decent solution for our users in the Jessie time frame.

Once again, I'd like to thank SFLC for the pro bono and very high quality legal advice they keep on offering to Debian.

  • I mentioned last month that, as a Debian representative, I've joined a working group of the Italian public administration (PA) that should define procurement rules for software in the PA at large, together with representatives of other well-known FOSS initiatives (e.g. KDE, FSFE). The first meetings have now been held and I've participated in some, for the moment on my own budget. I'll check with the next DPL the feasibility of keeping this up in the future.

Now, before I get sentimental, let me thank Gergely, Lucas, and Moray for running in the recently concluded DPL election. Even considering a run, and then going through a campaign, denotes a very high commitment to the Project; we should all be thankful to them.

Then I'd like to congratulate Lucas for his election. I've known him for a long time, and I can testify about his clear vision of the role Debian has to play in Free Software and on what Debian needs to improve to do so. Best wishes for the term ahead, Lucas!

Finally, I'd like to thank you all for the support you've shown me over the past 3 years. Serving as DPL is a great honor, but also a very demanding job. Thanks to you all, and to how cool Debian is, it has been an incredibly rewarding experience for me. I had no idea what I was doing when I embarked on this adventure, but in hindsight I don't regret any of it. See you around, as I don't plan to be anywhere far away from Debian anytime soon.


PS the day-to-day activity logs for March and April 2013 are available at the usual place master:/srv/leader/news/bits-from-the-DPL.txt.20130{3,4}

Wouter Verhelst: The devops "installer"

16 April, 2013 - 21:57

puppet and similar things are very nice tools for managing your systems. Rather than doing the same thing 100 times, it's much more interesting to spend some time describing how to do something, and then have it done hundreds of times automatically.

However, I'm starting to see a distressing trend lately. I'd like to say that puppet is not an installer, nor is it a replacement for one. Here's why:

  • By giving me a script that calls puppet to install your software from the interwebz, you're making it very hard for me to install your software in case I'm running behind a paranoid firewall that I don't have access to.
  • By giving me a script that calls puppet to install your software, and sets up things so it will try to run puppet agent against your server (presumably to keep my installation up-to-date), you're making it much more difficult for me to manage that system as part of my larger network, which is already being managed with puppet.

Please, pretty please, with sugar on top: just provide packages. If that can't be done, just provide clear installation instructions and/or a puppet module that I can include or something. Don't assume I'm a system administrator not worth his paycheck.


note: the gitorious example above is just that, an example. I've seen more cases of people doing similar things, not just gitorious. Surprisingly, it's mostly from people who're on the ruby kool-aid. I hope that's not related...

Petter Reinholdtsen: First Debian Edu / Skolelinux developer gathering in 2013 takes place in Trondheim

16 April, 2013 - 21:00

This year's first Skolelinux / Debian Edu developer gathering takes place this coming weekend in Trondheim. Details about the gathering can be found on the FRiSK wiki. The dates are the 19th-21st of April 2013; online participation for those unable to make it in person is very welcome, and I plan to participate online myself as I cannot leave Oslo this weekend.

The focus of the gathering is to work on the web pages and project infrastructure, and to continue the work on the Wheezy based Debian Edu release.

See you on IRC, #debian-edu on, then?

Craig Small: Removing itools

16 April, 2013 - 19:35

For very many years I have been running a set of tools on my website that basically run whois or nslookup queries and present them in a standard format. I have decided today to shut this part of the website down, as the code running those components is very old and I've not maintained it for years. Back when I initially wrote the tools, in 1995 or so, there weren't many alternatives to this site, but that has long changed.

So thanks to those who emailed me over the years; it's been an interesting journey.

Michal Čihař: phpMyAdmin translations status

16 April, 2013 - 18:00

phpMyAdmin 4.0-rc2 is out, and if you want to have your language in the final release, it's the last moment to start working on the translation.

So let's look at which translations are at 100% right now (new ones are bold):

Almost complete:

As you can see, there are still a lot of languages missing; this might be your opportunity to contribute to phpMyAdmin. You are also welcome to translate phpMyAdmin 4.0 using the translation server.

If your language is already fully translated and you want to help further, you can also translate our documentation.

Filed under: English Phpmyadmin | 0 comments

NOKUBI Takatsugu: 8th Kernel/VM Explorers

16 April, 2013 - 14:20

I attended the 8th Kernel/VM Explorers (Japanese) on 13 April. The event focuses on OS kernels and/or virtual machine architecture.

I think the figures and graphs will be interesting to see.


Johannes Schauer: botch - the debian bootstrap software has a name

16 April, 2013 - 13:44

After nearly one year of development, the "Port bootstrap build-ordering tool" now finally has a name: botch. It stands for Bootstrap/Build Order Tool CHain. Now we don't have to call it "Port bootstrap build-ordering tool" anymore (nobody did that anyway) and also don't have to refer to it as "the tools" in email, IRC or in my master's thesis, which is due in a few weeks. With this, the issue also no longer blocks the creation of a Debian package. Since only a handful of people have a clone of it anyway, I also renamed the gitorious git repository URL (and updated all links in this blog and left a text file informing about the name change in the old location). The new URL is:

Further improvements since my last post in January include:

  • greatly improved speed of partial order calculation
  • calculation of strong edges and strong subgraphs in build graphs and source graphs
  • calculation of source graphs from scratch or from build graphs
  • calculation of strong bridges and strong articulation points
  • calculate betweenness of vertices and edges
  • find self-cycles in the source graph
  • add webselfcycle code to generate an HTML page of self-cycles
  • allow to collapse all strongly connected components of the source graph into a single vertex to make it acyclic
  • allow to create a build order for cyclic graphs (by collapsing SCC)
  • allow to specify custom installation sets
  • add more feedback arc set heuristics (eades, insertion sort, sifting, moving)
  • improve the cycle heuristic

In the meantime, a paper about the theoretical aspects of this topic, which Pietro Abate and I submitted to the CBSE 2013 conference, got accepted (hurray!), and I will travel to the conference in Canada in June.

Botch (yeey, I can call it by a name!) is also an integral part of one of the proposals for a Debian Google Summer of Code project this year, mentored by Wookey and co-mentored by myself. Let's hope it gets accepted and produces good results by the end of the summer!

Hideki Yamane: make package modern → 40% faster

16 April, 2013 - 08:05
Does your package still use the old debhelper style? How about making it modern?
I tried cleaning up the fontforge package and converting it to dh style, since it fails to build with the -j4 option. As a result, its debian/rules file shrank to a third of its size (187→53 lines) and build time dropped 40% with the parallel option.
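For reference, a modern dh-style debian/rules can be as short as this (a generic sketch; the actual fontforge rules file will carry package-specific overrides on top):

```makefile
#!/usr/bin/make -f
# Minimal dh-style rules: dh sequences all the debhelper commands,
# and --parallel lets the upstream build system honour -jN.
%:
	dh $@ --parallel
```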
  • before
    • real 5m13.605s
  • after
    • real 3m24.121s
(both cowbuilder on tmpfs)

Matthew Palmer: splunkd, Y U NO FOREGROUND?!?

15 April, 2013 - 09:17

I am led to believe that splunkd (some agent for feeding log entries into the Grand Log Analysis Tool Of Our Age™) has no capability for running itself in the foreground. This is stupid. Do not make these sorts of assumptions about how the user will want to run your software. Some people use sane service-management systems that are capable of handling the daemonisation for you and automatically restart the managed process on crash. These systems are typically much easier to configure and debug, and they don’t need bloody PID files and the arguments about where to put them (tmpfs, inside or outside chroots… oh my) and who should update them and how to reliably detect that they’re out of date when they crash without causing race conditions and whether non-root-running processes should place their PID files in the same place and how do you deal with the permissions issues and… bugger that for a game of skittles.

In short, if you provide a service daemon and do not provide some well-documented means of saying “don’t background”, I will hurt you. This goes double if your shitware is not open source.

Steinar H. Gunderson: TG and VLC scalability

15 April, 2013 - 07:14

With The Gathering 2013 well behind us, I wanted to write a followup to the posts I had on video streaming earlier.

Some of you might recall that we identified an issue at TG12, where the video streaming (to external users) suffered from us simply having too fast a network: bursting frames to users at 10 Gbit/sec overloads buffers in the down-conversion to lower speeds, causing packet loss, which triggers new bursts, sending the TCP connection into a spiral of death.

Since the Linux kernel lacked proper TCP pacing, the workaround was simple but rather ugly: set up a bunch of HTB buckets (literally thousands), put each client in a different bucket, and shape each bucket to approximately the stream bitrate (plus some wiggle room for retransmits and bitrate peaks, although the latter are kept under control by the encoder settings). This requires a fair amount of cooperation from VLC, which we use as both encoder and reflector: it needs to assign a unique mark (fwmark) to each connection, which tc can then use to put the client into the right HTB bucket.
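A sketch of that kind of per-client shaping setup (the device name, rate, and bucket count here are invented; the real scripts are per-network):

```shell
# Illustrative only: DEV, RATE and the bucket count are made up.
DEV=eth0
RATE=3mbit          # ~stream bitrate + headroom for retransmits/peaks

tc qdisc add dev "$DEV" root handle 1: htb
for MARK in $(seq 1 2000); do
    # One HTB class per client (classid minor numbers are hex)...
    tc class add dev "$DEV" parent 1: \
        classid 1:$(printf '%x' "$MARK") htb rate "$RATE"
    # ...selected by the fwmark that VLC sets on that client's socket.
    tc filter add dev "$DEV" parent 1: protocol ip \
        handle "$MARK" fw flowid 1:$(printf '%x' "$MARK")
done
```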

Although we didn't collect systematic user experience data (apart from my own tests done earlier, streaming from Norway to Switzerland), it's pretty clear that the effect was as hoped for: Users who had reported quality for a given stream as “totally unusable” now reported it as “perfect”. (Well, at first it didn't seem to have much effect, but that was due to packet loss caused by a faulty switch supervisor module. Only shows that real-world testing can be very tricky. :-) )

However, suddenly this happened on the stage:

[photo in the original post]

which led to this happening to the stream load:

[stream-load graph in the original post]

and users, especially ones external to the hall, reported things breaking up again. It was obvious that the load (1300 clients, or about 2.1 Gbit/sec) had something to do with it, but the server wasn't out of CPU—in fact, we killed a few other streams and hung processes, freeing up three or so cores, without any effect. So what was going on?

At the time, we didn't get to dig deep enough into it before the load had lessened; perf didn't give an obvious answer (even though HTB is known to be a CPU hog, it didn't figure high on the list), and the little tuning we tried (including removing HTB) didn't really help.

It wasn't until this weekend, when I finally got access to a lab with 10gig equipment (thanks, Michael!), that I could verify my suspicion: VLC's HTTP server is single-threaded, and not particularly efficient at that. In fact, on the lab server, which is a bit slower than what we had at TG (4x2.0GHz Nehalem versus 6x3.2GHz Sandy Bridge), the most I could get out of VLC was 900 Mbit/sec, not 2.1 Gbit/sec! Clearly we were lucky both with our hardware and in having more than one stream (VLC vs. Flash) to spread the load across. HTB was not the culprit: this test ran entirely without HTB, and the server wasn't doing anything else at all.

(It should be said that this test is nowhere near 100% exact, since the server was only talking to one other machine, connected directly to the same switch, but it would seem a very likely bottleneck, so in lieu of $100k worth of testing equipment and/or a very complex netem setup, I'll accept it as the explanation until proven otherwise. :-) )

So, how far can you go without switching streaming platforms entirely? The answer comes in the form of Cubemap, a replacement reflector I've been writing over the last week or so. It's multi-threaded, much more efficient (using epoll and sendfile; yes, sendfile), and also more robust by virtue of being less intelligent: VLC needs to demux and remux the entire signal to reflect it, which doesn't always go well for more esoteric signals; in particular, we've seen issues with the Flash video mux.

Running Cubemap on the same server, with the same test client (which is somewhat more powerful), gives a result of 12 Gbit/sec; clearly better than 900 Mbit/sec! (Each machine has two Intel 10Gbit/sec NICs connected with LACP to the switch, load-balanced on TCP port number.) Granted, if you did this kind of test with real users, I doubt they'd get a very good experience; it was dropping bytes like crazy since it couldn't get them to the client quickly enough (and I don't think the client was the problem, although that machine was also clearly very heavily loaded). At this point, the problem is almost entirely about kernel scalability; less than 1% of the time is spent in userspace, and you need a fair amount of mucking around with multiple NIC queues to get the right packets to the right processor without them stepping too much on each other's toes. (Check out Documentation/networking/scaling.txt in the kernel source for some essential tips here.)
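The kind of tuning scaling.txt describes boils down to a handful of sysfs/procfs writes like these (the device name, CPU masks, and IRQ numbers below are illustrative; they need root and vary per machine):

```shell
# RPS: let CPUs 0-3 (bitmask 0xf) share receive processing for rx queue 0.
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# XPS: have transmit queue 0 served only by CPU 0 (bitmask 0x1).
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus

# Pin each NIC queue's IRQ to its own CPU; look the IRQ numbers up first:
grep eth0 /proc/interrupts
echo 2 > /proc/irq/64/smp_affinity    # e.g. queue 1 -> CPU 1 (mask 0x2)
```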

And now, finally, what happens if you enable our HTB setup? Unfortunately, it doesn't go well; the nice 12 Gbit/sec drops to 3.5–4 Gbit/sec! Some of this is just the increased amount of packet processing (for instance, the two iptables rules we need to mark non-video traffic alone take the speed down from 12 to 8 Gbit/sec), but it also pretty much shows that HTB doesn't scale: a lot of time is spent in locking routines, probably the different CPUs fighting over locks on the HTB buckets. In a sense, it's maybe not so surprising when you look at what HTB really does; you can't process each packet independently, since the entire point is to delay packets based on other packets. A more welcome result is that setting up a single fq_codel qdisc on the interface hardly mattered at all; it went down from 12 to 11.7 or so, but inter-run variation was so high that this is basically just noise. I have no idea whether it actually had any effect at all, but it's at least good to know that it doesn't do any harm.
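For reference, the fq_codel comparison point was just the stock one-line setup (device name illustrative):

```shell
tc qdisc replace dev eth0 root fq_codel
```

Being a single classless qdisc, it does no per-client classification at all, which fits the measurement above that it cost essentially nothing.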

So, the conclusion is: Using HTB to shape works well, but it doesn't scale. (Nevertheless, I'll eventually post our scripts and the VLC patch here. Have some patience, though; there's a lot of cleanup to do after TG, and only so much time/energy.) Also, VLC only scales up to a thousand clients or so; after that, you want Cubemap. Or Wowza. Or Adobe Media Server. Or nginx-rtmp, if you want RTMP. Or… or… or… My head spins.

Gregor Herrmann: RC bugs 2013/14-15

15 April, 2013 - 01:48

2 weeks passed, & I worked on exactly 2 RC bugs *cough*

  • #656166 – firmware-b43legacy-installer: "firmware-b43legacy-installer: unowned files after purge (policy 6.8)"
    ship empty directories in packages, upload to DELAYED/5
  • #704870 – opus: "opus: cve-2013-0899"
    add pointers to upstream bugs/commits, later NMUd by another DD


Creative Commons License: the copyright of each article belongs to its author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.