Planet Debian

Elena 'valhalla' Grandi: Candy from Strangers

7 September, 2016 - 01:46
Candy from Strangers

A few days ago I gave a talk at ESC about some reasons why I think that using software, and especially libraries, from the packages of a community-managed distribution is important and much better than alternatives such as PyPI, npm, etc. This article is a translation of what I planned to say, before I forgot bits of it and luckily added them back as an answer to a question :)

When I was young, my parents taught me not to accept candy from strangers, unless they were present and approved of it, because there was a small risk of very bad things happening. It was of course a simplistic rule, but it had to be easy enough to follow for somebody who wasn't proficient (yet) in the subtleties of social interactions.

One of the reasons why it worked well was that following it wasn't a big burden: at home candy was plentiful and actual offers were rare. I only remember missing one piece of candy because of it, and while it may have been a great one, the ones I could have at home were also good.

Contrary to candy, offers of gratis software from random strangers are quite common: from suspicious looking websites to legit and professional looking ones, to platforms that are explicitly designed to allow developers to publish their own software with little or no checks.

Just like with candy, there is also a source of trusted software: the Linux distributions, especially those led by a community. I mention mostly Debian because it's the one I know best, but the same principles apply to Fedora and, to some measure, to most of the other distributions. Like good parents, distributions can be wrong, and they do leave room for older children (and proficient users) to make their own choices, but they still provide a safe default.

Among the unsafe sources there are many different cases, and while they share some of the risks, they have different targets with different issues; for brevity the scope of this article is limited to the ones that mostly concern software developers: language-specific package managers and software distribution platforms like PyPI, npm, rubygems, etc.

These platforms are extremely convenient both for the writers of libraries, who can publish their work with minor hassle, and for the people who use such libraries, because they provide an easy way to install and use a huge amount of code. They are of course also an excellent place for distributions to find new libraries to package and distribute, and this, I agree, is a good thing.

What I however believe is that getting code from such sources and using it without carefully checking it is even more risky than accepting candy from a random stranger on the street in an unfamiliar neighbourhood.

The risks aren't trivial: while you probably won't be taken hostage for ransom, your data could be, or your devices and the ones that run your programs could be used in some criminal act, causing at least some monetary damage both to yourself and to society at large.

If you're writing code that should be maintained over time there are also other risks, even when no malice is involved, because each package on these platforms has a different policy with regard to updates, their backwards compatibility and what can be expected in case an old version is found to have security issues.

The very fact that everybody can publish anything on such platforms is both their biggest strength and their main source of vulnerability: while most of the people who publish their libraries do so with good intentions, attacks have been described and publicly tested, such as the fun typo-squatting one (archived URL) that published harmless proof-of-concept code under common typos for famous libraries.

Contrast this with Debian, where everybody can contribute, but before they are allowed full unsupervised access to the archive they have to establish a relationship with the rest of the community, which includes meeting other developers in real life, at the very least to get their gpg keys signed.

This doesn't prevent malicious people from introducing software, but it significantly raises the effort required to do so, and once caught, people can usually be prevented from repeating it much more effectively than by a simple ban on an online-only account.

It is true that not every Debian maintainer does a full code review of everything that they allow into the archive, and in some cases it would be unreasonable to expect it, but in most cases they are at least familiar enough with the code to do bug triage, and most importantly they are in an excellent position to establish a relationship of mutual trust with the upstream authors.

Additionally, package maintainers don't work in isolation: a growing number of packages are maintained by a team of people, and most importantly there are aspects that potentially involve the whole community, from the fact that new packages entering the distribution are publicly announced on a mailing list to the various distribution-wide QA efforts.

Going back to the language-specific distribution platforms, sometimes even the people who manage the platform can't be fully trusted to do the right thing: I believe everybody in the field remembers the npm fiasco, where a lawyer's letter requesting the removal of a package started a series of events that resulted in potentially breaking a huge number of automated build systems.

Here some of the problems were caused by technical policies that made the whole ecosystem especially vulnerable, but one big issue was the fact that the managers of the npm platform are a private entity with no oversight from the user community.

Here not all distributions are equal, but contrast this with Debian, where the distribution is managed by a community that is based on a social contract and is governed via democratic procedures established in its constitution.

Additionally, the long history of the distribution model means that many issues have already been met, the mistakes have already been made, and there are established technical procedures to deal with them in a better way.

So, should we not use language-specific distribution platforms at all? No! As developers we aren't children; we are adults who have the skills to distinguish between safe and unsafe libraries just as well as the average distribution maintainer can. What I believe we should do is stop treating them as a safe source that can be used blindly, reserve that status for actually trusted sources like Debian, and fall back to the language-specific platforms only when strictly needed, and in that case:

actually check carefully what we are using, both by reading the code and by analysing the development and community practices of the authors;
if possible, share that work by becoming maintainers of that library in our favourite distribution ourselves, to prevent duplication of effort and to give back to the community whose work we benefit from.

Guido Günther: Debian Fun in August 2016

7 September, 2016 - 01:08
Debian LTS

August marked the sixteenth month I contributed to Debian LTS under the Freexian umbrella. I spent 9 hours (of 8 allocated) mostly on Rails related CVEs, which resulted in DLA-603-1 and DLA-604-1, fixing 6 CVEs and marking others as not affecting the packages. The hardest part was proper testing since the split packages in Wheezy don't allow running the upstream test suite as is. There's still CVE-2016-0753, for which I need to check whether it affects activerecord or activesupport.

Additionally I had one relatively quiet week of LTS frontdesk work triaging 10 CVEs.

Other Debian stuff
  • I uploaded git-buildpackage 0.8.2 to experimental and 0.8.3 to unstable. The latter brings all the enhancements and bugfixes since DebConf 16 to sid and testing.
  • The usual bunch of libvirt related uploads

Andrew Shadura: Manual control of OpenEmbedded -dbg packages

6 September, 2016 - 19:28

In December last year, OpenEmbedded introduced automatic debug packages. Prior to that, you'd need to construct the FILES_${PN}-dbg variable in your recipe manually. If you need to retain manual control over precisely what goes into debug packages, set the undocumented NOAUTOPACKAGEDEBUG variable to 1, the same way the Qt recipe does:

FILES_${PN}-dev = "${includedir}/${QT_DIR_NAME}/Qt/*"
FILES_${PN}-dbg = "/usr/src/debug/"
FILES_${QT_BASE_NAME}-demos-doc = "${docdir}/${QT_DIR_NAME}/qch/qt.qch"
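
For completeness, and as an assumption on my part since the variable is undocumented, the switch itself would sit in the recipe alongside those FILES assignments:

NOAUTOPACKAGEDEBUG = "1"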

P.S. Knowing this would have saved me and my colleagues days of work.

Norbert Preining: Yukio Mishima: Patriotism (憂国)

6 September, 2016 - 18:59

A masterpiece by Yukio Mishima – Patriotism – the story of love and death. A short story about the double suicide of a Lieutenant and his wife following the Ni Ni Roku Incident, in which parts of the military tried to overthrow government and military leaders. Although Lieutenant Takeyama wasn't involved in the coup, because his friends wanted to safeguard him and his new wife, he found himself facing a fight against his friends and their execution. Not being able to cope with this situation, he commits suicide, followed by his wife.

Written in 1960 by one of the most interesting writers of modern Japanese history, Yukio Mishima, this book, and the movie made by Mishima himself, are very disturbing images of the relation between human and state.

Although the English title says Patriotism, the Japanese one is 憂国 (Yukoku), which is closer to Concern for one's own country. It is this concern, and the feeling of devotion to the Imperial system and the country, that leads the two to their deed. We are guided through the whole book and movie by a large scroll with 至誠 (shisei, devotion) written on it.

But indeed, Patriotism is a good title, I think – patriotism being one of the most dangerous concepts mankind has brought forth. If patriotism were only the love for one's own country, all would be fine. But reality shows that patriotism unfailingly brings along xenophobia and a feeling of superiority.

Coming from a small and unimportant country, I never felt even the slightest temptation to be patriotic in the bad sense. And looking at the world and the people around me, I often have the feeling that it is mainly big countries that produce the biggest and worst style of patriotism. This is obvious in countries like China, but having recently learned that US pupils have to recite (obviously without understanding) the Pledge of Allegiance, the shock of how badly patriotism can start washing the brains of even the smallest kids in a seemingly free country is still present.

But back to the book: Here the patriotism is exhibited by the presence of the Imperial images and shrine in the entrance, in front of which the two pray the last time before executing themselves.

Not only is the book a masterpiece by itself, the movie too is a special piece of art: filmed in silent-movie style with text inserts, the whole story takes place on a Noh stage. This is particularly interesting as Mishima was one of the few, if not the only, modern Noh playwrights; he has written several Noh plays.

Another very impressive scene for me was when, after her husband's suicide, Reiko returns from putting on her final make-up to the central room. Her kimono is already blood-soaked, and the trailing kimono leaves traces on the Noh stage resembling the strokes of a calligraphy, as if her movement, too, is guided by 至誠.

The final scene of the movie shows the two of them in a Zen stone garden, forming the stone, the unreachable island of happiness.

Very impressive, both the book as well as the movie.

Clint Adams: Can't put your arms around a memory

6 September, 2016 - 08:41

“I think it stems from employing people who are capable of telling you what BGP stands for,” he said. “Watching my DevOps team in action is an infuriating mix of ‘Damn, that's a slick CI/CD process you’ve built,’ and ‘What do you mean you don't know what the output of netstat means?’”

Joachim Breitner: The new CIS-194

6 September, 2016 - 01:09

The Haskell minicourse at the University of Pennsylvania, also known as CIS-194, has always had a reach beyond the students of Penn, at least since 2013, when Brent Yorgey gave the course, wrote extensive lecture notes and eventually put the material on GitHub.

This year, it is my turn to give the course. I could not resist making some changes, at least to the first few weeks: instead of starting with a locally installed compiler, doing exercises that revolve mostly around arithmetic and lists, I send my students to CodeWorld, which is a web programming environment created by Chris Smith [1].

This greatly lowers the initial hurdle of having to set up the local toolchain, and is inclusive towards those who have had little exposure to the command line before. Not that I do not expect my students to handle that, but it does not hurt to move that towards later in the course.

But more importantly: CodeWorld comes with a nicely designed, simple API to create vector graphics, to animate these graphics and even to create interactive programs. This means that instead of having to come up with yet another set of exercises revolving around lists and numbers, I can have the students create Haskell programs that are visual. I believe that this is more motivating and stimulating, and will nudge the students to spend more time programming and thus practicing.
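
To give a flavour, here is a minimal sketch of such a visual program, assuming the codeworld-api package and its drawingOf, solidCircle, colored, translated and (&) primitives (my own illustration, not an excerpt from the course material):

import CodeWorld

-- Two circles side by side: pictures are plain values, combined
-- with (&) and positioned with translated.
scene :: Picture
scene = translated (-3) 0 (colored red   (solidCircle 1))
      & translated   3  0 (colored green (solidCircle 1))

main :: IO ()
main = drawingOf scene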

In fact, the goal is that in their third homework assignment, the students will implement a fully functional, interactive Sokoban game. And all that before talking about the built-in lists or tuples, just with higher-order functions and custom datatypes. (Click on the picture above, which is part of the second week's homework. You can use the arrow keys to move the figure around and press the escape key to reset the game. Boxes cannot be moved yet -- that will be part of homework 3.)

If this sounds interesting to you, and you always wanted to learn Haskell from scratch, feel free to tag along. The lecture notes should be elaborate enough to learn from that, and with the homework problems, you should be able to tell whether you have solved it yourself. Just do not publish your solutions before the due date. Let me know if you have any comments about the course so far.

Eventually, I will move to local compilation, use of the interpreter and text-based IO and then start using more of the material of previous iterations of the course, which were held by Richard Eisenberg in 2014 and by Noam Zilberstein in 2015.

  [1] Chris has been very helpful in making sure CodeWorld works in a way that suits my course, thanks for that!

Chris Lamb: How to write your first Lintian check

5 September, 2016 - 22:33

Lintian's humble description of "Debian package checker" belies its importance within the Debian GNU/Linux project. An extensive static analysis tool, it is not only used by the vast majority of developers; falling foul of some of its checks can even cause uploads to be automatically rejected by the archive maintenance software.

As you may have read in my recent monthly report, I've recently been hacking on Lintian itself. In particular:

  • #798983: Check for libjs-* binary package name outside of the web section
  • #814326: Warn if filenames contain wildcard characters
  • #829744: Add new-package-should-not-package-python2-module tag
  • #831864: Warn about Python packages that ship information
  • #832096: Check for common typos in debian/rules target names
  • #832099: Check for unnecessary SOURCE_DATE_EPOCH assignments
  • #832771: Warn about systemd .service files with a missing Install key

However, the rest of this post will go through the steps needed to start contributing yourself.

To demonstrate this I will be walking through submitting a patch for bug #831864, which warns about Python packages that ship .coverage files generated by python-coverage.

Getting started

First, let's obtain the Lintian sources and create a branch for our work:

$ git clone
$ cd lintian
$ git checkout -b warn-about-dotcoverage-files
Switched to a new branch 'warn-about-dotcoverage-files'

The most interesting files are under checks/*:

$ ls -l checks/ | head -n 9
total 1356
-rw-r--r-- 1 lamby lamby  6393 Jul 29 14:19 apache2.desc
-rw-r--r-- 1 lamby lamby  8619 Jul 29 14:19 apache2.pm
-rw-r--r-- 1 lamby lamby  1956 Jul 29 14:19 application-not-library.desc
-rw-r--r-- 1 lamby lamby  3285 Jul 29 14:19 application-not-library.pm
-rw-r--r-- 1 lamby lamby   544 Jul 29 14:19 automake.desc
-rw-r--r-- 1 lamby lamby  1354 Jul 29 14:19 automake.pm
-rw-r--r-- 1 lamby lamby 19506 Jul 29 14:19 binaries.desc
-rw-r--r-- 1 lamby lamby 25204 Jul 29 14:19 binaries.pm
-rw-r--r-- 1 lamby lamby 15641 Aug 24 21:42 changelog-file.desc
-rw-r--r-- 1 lamby lamby 19606 Jul 29 14:19 changelog-file.pm

Note that the files come in pairs: a foo.desc file containing the descriptions of the tags, and a sibling foo.pm Perl module that actually performs the checks.

Let's add our new tag before we go any further. After poking around, it looks like files.{pm,desc} would be most appropriate, so we'll add our new tag definition to files.desc:

Tag: package-contains-python-coverage-file
Severity: normal
Certainty: certain
Info: The package contains a file that looks like output from the Python
 coverage tool.  These are generated by python{,3}-coverage during a test
 run, noting which parts of the code have been executed.  They can then be
 subsequently analyzed to identify code that could have been executed but was
 not.  As they are unlikely to be of any utility to end-users, these files
 should be removed from the package.

The Severity and Certainty fields are documented in the manual. Note the convention of using double spaces after full stops in the Info section.

Extending the testsuite

Lintian has many moving parts based on regular expressions and other subtle logic, so it's especially important to provide tests in order to handle edge cases and to catch any regressions in the future.

We create tests by combining a tiny Debian package that will deliberately violate our check, along with some metadata and the expected output of running Lintian against this package.

The tests themselves are stored under t/tests. There may be an existing test that it would be more appropriate to extend, but I've gone with creating a new directory called files-python-coverage:

$ mkdir -p t/tests/files-python-coverage
$ cd t/tests/files-python-coverage

First, we create a simple package:

$ mkdir -p debian/debian
$ printf '#!/usr/bin/make -f\n\n%%:\n\tdh $@\n' > debian/debian/rules
$ chmod +x debian/debian/rules

… and then we install a dummy file to trigger the check:

$ touch debian/.coverage
$ echo ".coverage /usr/share/files-python-coverage" > debian/debian/install

We then add the aforementioned metadata to t/tests/files-python-coverage/desc:

Testname: files-python-coverage
Sequence: 6000
Version: 1.0
Description: Check for Python .coverage files
Test-For: package-contains-python-coverage-file

… and the expected warning to t/tests/files-python-coverage/tags:

$ echo "W: files-python-coverage: package-contains-python-coverage-file" \
      "usr/share/files-python-coverage/.coverage" > tags

When we run the testsuite, it should fail because we don't emit the tag yet:

$ cd $(git rev-parse --show-toplevel)
$ debian/rules runtests onlyrun=tag:package-contains-python-coverage-file
--- t/tests/files-python-coverage/tags
+++ debian/test-out/tests/files-python-coverage/tags.files-python-coverage
@@ -1 +0,0 @@
-W: files-python-coverage: package-contains-python-coverage-file usr/share/files-python-coverage/.coverage
fail tests::files-python-coverage: output differs!

Failed tests (1)
debian/rules:48: recipe for target 'runtests' failed
make: *** [runtests] Error 1

$ echo $?

Specifying onlyrun= means we only run the tests that are designed to trigger this tag rather than the whole testsuite. This is controlled by the Test-For key in our desc file, not by scanning the tags files.

This recipe for creating a testcase could be used when submitting a regular bug against Lintian — providing a failing testcase not only clarifies misunderstandings resulting from the use of natural language, it also makes it easier, quicker and safer to correct the offending code itself.

Emitting the tag

Now, let's actually implement the check:

             tag 'package-installs-python-egg', $file;

+        # ---------------- .coverage (python-coverage output)
+        if ($fname =~ m,\.coverage$,o) {
+            tag 'package-contains-python-coverage-file', $file;
+        }

         # ---------------- /usr/lib/site-python

Our testsuite now passes:

$ debian/rules runtests onlyrun=tag:package-contains-python-coverage-file
.... running tests ....
mkdir -p "debian/test-out"
t/runtests -k -j 9 t "debian/test-out" tag:package-contains-python-coverage-file
pass tests::files-python-coverage
if [ "tag:package-contains-python-coverage-file" = "" ]; then touch runtests; fi

$ echo $?

Submitting the patch

Lastly, we create a patch for submission to the bug tracking system:

$ git commit -a -m "c/files: Warn about Python packages which ship" \
      " python-coverage information. (Closes: #831864)"

$ git format-patch HEAD~

… and we finally attach it to the existing bug:


tags 831864 + patch

Patch attached.



I hope this post will encourage some extra contributions towards this important tool.

(Be aware that I'm not a Lintian maintainer, so you should not treat anything here as gospel; expect this post to be edited over time if clarifications arise.)

Bits from Debian: DebConf17 organization started

5 September, 2016 - 19:00

DebConf17 will take place in Montreal, Canada from August 6 to August 12, 2017. It will be preceded by DebCamp, July 31 to August 4, and Debian Day, August 5.

We invite everyone to join us in organizing DebConf17. There are different areas where your help could be very valuable, and we are always looking forward to your ideas.

The DebConf content team is open to suggestions for invited speakers. If you'd like to propose somebody who is not a regular DebConf attendee, follow the details in the call for speaker proposals blog post.

We are also beginning to contact potential sponsors from all around the globe. If you know any organization that could be interested, please consider handing them the sponsorship brochure or contact the fundraising team with any leads.

The DebConf team is holding IRC meetings every two weeks. Have a look at the DebConf17 website and wiki page, and engage in the IRC channels and the mailing list.

Let’s work together, as every year, on making the best DebConf ever!

Raphaël Hertzog: My Free Software Activities in August 2016

5 September, 2016 - 17:03

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

This month is rather light since I was away on vacation for two weeks.

Kali related work

The new pkg-security team is working full steam and I reviewed/sponsored many packages during the month: polenum, accheck, braa, t50, ncrack, websploit.

I filed bug #834515 against sbuild since sbuild-createchroot was no longer usable for kali-rolling due to the embedded dash. That misfeature has been reverted and implemented through an explicit option.

I brought the attention of ftpmasters on #832163 since we had unexpected packages in the standard section (they have been discovered in the Kali live ISO while we did not want them).

I uploaded two fontconfig NMUs to finally push to Debian a somewhat cleaner fix for the problem of various captions being displayed as squares after a font upgrade (see #828037 and #835142).

I tested (twice) a live-build patch from Adrian Gibanel Lopez implementing EFI boot with grub and merged it into the official git repository (see #731709).

I filed bug #835983 on python-pypdf2 since it has an invalid dependency forbidding co-installation with python-pypdf.

I orphaned splint since its maintainer was missing in action (MIA) and immediately made a QA upload to fix the RC bug which kicked it out of testing (this package is a build dependency of a Kali package).


I wrote a patch to make python-django-jsonfield compatible with Django 1.10 (#828668) and I committed that patch in the upstream repository.

Distro Tracker

I made some changes to make the codebase compatible with Django 1.10 (and added Django 1.10 to the tox test matrix). I added a “Debian Maintainer Dashboard” link next to people's names at the request of Lucas Nussbaum (#830548).

I made a preliminary review of Paul Wise's patch to add multiarch hints (#833623) and improved the handling of the mailbot when it gets MIME headers referencing an unknown charset (like “cp-850”; Python only knows of “cp850”).

I also helped Peter Palfrader to enable a .onion address for tracker.debian.org; see onion.debian.org for the full list of services available over Tor.

Misc stuff

I updated my salt formula to work with the latest upstream version (0.2.0).

I merged updated translations for the Debian Administrator's Handbook from Weblate and uploaded a new version to Debian.


See you next month for a new summary of my activities.

Paul Tagliamonte: go-haversine

5 September, 2016 - 09:52

In the spirit of blogging about some of the code I've written in the past year or two, I wrote a small utility library called go-haversine, which uses the Haversine formula to compute the distance between two points.

This is super helpful when working with GPS data - but remember, this assumes everything’s squarely on the face of the planet.
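
For the curious, the formula itself is compact enough to sketch in a few lines of Go. This is a self-contained illustration of the haversine computation on a spherical Earth, not the go-haversine API itself:

package main

import (
        "fmt"
        "math"
)

const earthRadiusKm = 6371.0 // mean Earth radius: the "squarely spherical" assumption

// haversine returns the great-circle distance in km between two
// latitude/longitude pairs given in degrees.
func haversine(lat1, lon1, lat2, lon2 float64) float64 {
        toRad := func(d float64) float64 { return d * math.Pi / 180 }
        dLat := toRad(lat2 - lat1)
        dLon := toRad(lon2 - lon1)
        a := math.Sin(dLat/2)*math.Sin(dLat/2) +
                math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
        return earthRadiusKm * 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))
}

func main() {
        // Paris to Berlin: roughly 880 km.
        fmt.Printf("%.1f km\n", haversine(48.8566, 2.3522, 52.5200, 13.4050))
}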

Dirk Eddelbuettel: Rcpp 0.12.7: More updates

5 September, 2016 - 09:13

The seventh update in the 0.12.* series of Rcpp just arrived on the CRAN network for GNU R as well as in Debian. This 0.12.7 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, and the 0.12.6 release in July --- making it the eleventh release at the steady bi-monthly release frequency. Keeping with the established pattern, this is again more of a maintenance release which addresses small bugs, nuisances or documentation issues without adding any major new features. One issue that got to a few people was our casual use of NORET in the definition of Rcpp::stop(). We had (ahem) overlooked that NORET is only defined by R 3.2.0 or later, and several folks trying to build on older releases of R (why?) got bitten. Well, at least we have a new record for most frequently reported bug ... Kidding aside, this is now fixed.
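
The fix, per the first changelog bullet below, amounts to the classic conditional-definition guard. A hedged sketch of the pattern in C (my illustration, not the exact Rcpp code):

/* Define NORET ourselves only if R (3.2.0 or later) has not already
   done so; older R headers simply lack the macro. */
#ifndef NORET
# if defined(__GNUC__) && __GNUC__ >= 3
#  define NORET __attribute__((noreturn))
# else
#  define NORET
# endif
#endif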

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 759 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by well over fifty packages since the last release in mid-July!

We are once again fortunate to have a number of pull requests, from first-timers to regulars. James "coatless" Balamuta in particular relentlessly pushed for better documentation and the cleanup of numerous dangling issue tickets. Artem Klevtsov also contributed again. Qiang, Kevin and I also got some changes in for the Rcpp Core team. More details are again below.

Changes in Rcpp version 0.12.7 (2016-09-04)
  • Changes in Rcpp API:

    • The NORET macro is now defined if it was not already defined by R itself (Kevin fixing issue #512).

    • Environment functions get() & find() now accept a Symbol (James Balamuta in #513 addressing issue #326).

    • Several uses of Rf_eval were replaced by the preferred Rcpp::Rcpp_eval (Qiang in PR #523 closing #498).

    • Improved Autogeneration Warning for RcppExports (James Balamuta in #528 addressing issue #526).

    • Fixed invalid C++ prefix identifiers in auto-generated code (James Balamuta in #528 and #531 addressing issue #387; Simon Dirmeier in #548).

    • String constructors now set default UTF-8 encoding (Qiang Kou in #529 fixing #263).

    • Add variadic variants of the RCPP_RETURN_VECTOR and RCPP_RETURN_MATRIX macro when C++11 compiler used (Artem Klevtsov in #537 fixing #38).

  • Changes in Rcpp build system

    • Travis CI is now driven via our fork, and deploys all packages as .deb binaries using our PPA where needed (Dirk in #540 addressing issue #517).

  • Changes in Rcpp unit tests

    • New unit tests for random number generators in the R namespace which call the standalone Rmath library (James Balamuta in #514 addressing issue #28).

  • Changes in Rcpp Examples:

    • Examples that used cxxfunction() from the inline package have been rewritten to use either sourceCpp() or cppFunction() (James Balamuta in #541, #535, #534, and #532 addressing issue #56).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Carl Chenet: Retweet 0.9: Automatically retweet & like

5 September, 2016 - 05:00

Retweet 0.9, a self-hosted Python app to automatically retweet and like tweets from another user-defined Twitter account, was released on September 2nd.

Retweet 0.9 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website, LinuxJobs.fr, a French-speaking job board, and this very blog.

What’s the purpose of Retweet?

Let’s face it, it’s more and more difficult to communicate about our projects. Even writing an awesome app is not enough any more. If you don’t appear on a regular basis on social networks, everybody thinks you quit or that the project has stalled.

But what if you have already built an audience on Twitter for, let’s say, your personal account, and now you want to automatically retweet and like all tweets from the account of your new project, to push it forward?

Sure, you can do it manually, like in the good old ’90s… or you can use Retweet!

Twitter Out Of The Browser

Have a look at my Github account for my other Twitter automation tools:

  • Feed2tweet, an RSS-to-Twitter command-line tool
  • db2twitter, which gets data from a SQL database (several are supported), builds tweets and sends them to Twitter
  • Twitterwatch, which monitors the activity of your Twitter timeline and warns you if no new tweets appear

What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.

Iustin Pop: Usefulness of real-time data during cycling

5 September, 2016 - 03:27

Wow, a mouthful of a title, for a much simpler post.

Up until earlier this year, I had only one sport GPS device, and that was a watch. Made sense, as my "main" sport was running (not that I did it very consistently). Having upgraded through a few generations of Garmins, I mainly used it to record my track and statistics (pace, heart rate, etc.). The newest generation of watch and heart rate monitor even gives more statistics (left-right leg balance, ground contact time, and so on).

Most of this data can be looked at while running, but only as an exception; after all, it's very hard to run with one hand up in front of your face. The other useful features—guided workouts and alerts during normal runs—I've used, but not consistently.

So when I started biking a bit more seriously, I wondered whether it would make sense to buy a bike computer. The feature intersection between watch and bike GPS is quite large, so clearly this is a "want" not a "need" by far. Reading forum posts showed the same question repeated multiple times… What convinced me to nevertheless buy such a bike GPS were the mapping/routing features. A bike GPS with good routing features, or even just maps on which tracks can be overlayed, can certainly ease the discovery of new routes.

After a few months of use, the most useful feature is one that I didn't expect. While the mapping is useful and I do use it, the thing that is significantly better than my watch is the large display with data fields that I can trivially check almost continuously while road biking, and during technically easy climbing sections when mountain biking.

My setup looks like this:

It's a 9-field setup; the Edge 1000 can go to 10, but I like the "headline" field. The watch can only go to four, and is basically not usable during rides, unless one were to use a quick-release kit for mounting it on the handlebar.

This setup has allowed me to learn my physical capabilities much better: why I sometimes run out of energy, and how the terrain affects me. Random things that I learned:

  • Gradient: on a road bike, +2% grade is just fun, -2% grade is excellent; on a mountain bike, -2% is somewhat felt but not so much. Going above 6-8% on a mountain bike is tiring, and above 15% means I can bike but I will dip too much into my reserves. Not sure yet what the numbers are on a road bike…
  • Cadence: on flatter routes, my most efficient cadence is 102-108 RPM; between 98-102 I feel I need to push extra, and below 98 I know (now) my muscles will get tired too early; on significant ascents, I don't have enough gearing to sustain such an RPM, and that tires me faster. On medium-distance flat rides (~70 km), I usually average ~100 over the whole distance.
  • Heart rate: below ~140 is recovery, ~140-150 is sustained effort, ~150-160 is short-duration pushes, and anything above ~160 will eat through my anaerobic budget, which means I'd better stop soon or my performance for the rest of the ride will suffer; this, surprisingly, matches quite well with my latest run lactate threshold (as computed by my watch), which was 161bpm.
  • Condition: when cruising without pedalling or when stopping, I can ballpark my current condition very easily by seeing how fast my heart rate goes down.
  • Total ascent: useful for two things: to make me proud of how much I've already climbed, and—if I know the total ascent for the route—either make me despair at how much I have left, or make me happy that the climbs are done.

Seeing all this data only after the ride is less useful since you don't remember exactly how you felt during specific points in the ride. But as real-time data, one can easily see/feel the correlation between how the body feels and what the data says.

One thing that would also be useful is real-time power data (3-second average, not instantaneous) to correlate even better with the body state. I now use heart rate and cadence as a proxy for that, but being able to see actual power numbers would increase the usefulness of the data even more.

Unfortunately, none of this makes the climbs easier. But at least it allows me to understand why one climb feels hard and another easy (relatively speaking). I wonder if, and how, this could be applied to running; maybe with smart glasses?

Conclusion: yes, I do recommend a bike computer with a large display (to be able to see many fields at once). Just in case one has disposable income at hand and doesn't know which hobby to spend it on.

Junichi Uekawa: Tried setting up chrome remote desktop on Debian.

4 September, 2016 - 18:50
Tried setting up Chrome Remote Desktop on Debian. Lack of logging made it almost impossible, but after realizing that the chrome browser dumps some logs to standard output and that it was waiting for me to enter a password, I made progress, and I can use Chrome Remote Desktop now. Awesome.

Steinar H. Gunderson: Multitrack audio in Nageru 1.4.0

4 September, 2016 - 07:00

Even though the Chess Olympiad is taking up some attention right now, development on Nageru (my live video mixer) has continued steadily since the 1.3.0 release. I wanted to take a little time to talk about the upcoming 1.4.0 release, and why things are as they are; writing things down often makes them a bit clearer.

Every major release of Nageru has had a specific primary focus: 1.0.0 was about just getting everything to work, 1.1.0 was about NVIDIA support for more oomph, 1.2.0 was about stabilization and polish (and added support for Blackmagic's PCI cards as a nice little bonus), and 1.3.0 was about x264 integration. For 1.4.0, I wanted to work on multitrack audio and mixing.

Audio has always been a clear focus for Nageru, and for good reason; video is 90% about audio, and it's sorely neglected in most amateur productions (not to mention that processing tools are nearly non-existent in most free or cheap software video mixers). Right from the get-go, it's had a chain with proper leveling, compressors and most importantly visual monitoring, so that you know when things are not as they should be. However, it was also written with the assumption that there would be a single audio input source (one of the cameras), and that's going to change.

Single-input is less of a showstopper than one would think at first; you can work around it by buying a mixer, plugging everything into that and then feeding that signal into the computer. However, there are a few downsides: if you want camera audio, you'll need to pull more cable from each camera (or have SDI de-embedders). Your mixer is likely to require an extra power supply, and that means yet more cable (any decent USB video capture card can be powered over USB, so why shouldn't your audio?). You'll need to buy and transport yet another device. And so on. (If you already have a PA mixer, of course you can use it, but just reusing the PA mix as a stream mix rarely gives the best results, and mixing on an aux bus gives very little flexibility.)

So for 1.4.0, I wanted something to get essentially the processing equivalent of a mid-range mixer. But even though my education is in DSP, my experience with mixers is rather limited, so I did the only reasonable thing and went over to a friend who's also an (award-winning) audio engineer. (It turns out that everything on a mixer is the way it is for a pretty good reason, tuned through 50+ years of collective audio experience. If you just try to make up something on your own without understanding what's going on, you have a 0.001% chance of stumbling upon some genius new way of doing things by accident, and a significantly larger chance than that of messing things up.) After some back and forth, we figured out a reasonable set of basic tools that would be useful in the right hands, and not too confusing for a beginner. So let's have a look at the new controls you get:

There's one set of these controls for each bus. (This is the expanded view; there's also a compact view that has only the meters and the fader, which is what you'll typically want to use during the run itself—the expanded view is for setup and tweaking.) A bus in Nageru is a pair of channels (left/right), sourced from a video capture or ALSA card. The channel mapping is flexible; my USB sound card has 18 channels, for instance, and you can use that to make several buses. Each bus has a name (here I named it very creatively “Main”, but in a real setting you might want something like “Blue microphone” or “Speaker PC”), which is just for convenience; it doesn't mean much.

The most important parts of the mix are given the most screen real estate, so even though the way through the signal chain is left-to-right, top-to-bottom, I'll go over it in the opposite direction. By far the most important part is the audio level, so the fader naturally is very prominent. (Note that the scale is nonlinear; you want more resolution in the most important area.) Changing a fader with the mouse or keyboard is possible, and probably most people will be doing that, but Nageru will also support USB faders. These usually speak MIDI, for historical reasons, and there are some UI challenges when they're all so different, but you can get really small ones if you want that tactile feel without blowing up your budget or getting a bigger backpack.

Then there's the meter to the left of that. Nageru already has R128 level meters in the mastering section (not shown here, but generally unchanged from 1.3.0), and those are kept as-is, but for each bus, you don't want to know loudness; you want to know recording levels, so you want a peak meter, not a loudness meter. In particular, you don't want the bus to send clipped data to the master (which would happen if you set it too high); Nageru can handle this situation pretty well (unlike most digital mixers, it mixes in full 32-bit floating-point so there's no internal clipping, and there's a limiter on the master by default), but it's still not a good place to be in, so you can see that being marked in red in this example. The meter doubles as an input peak check during setup; if you turn off all the effects and set the fader to neutral, you can see if the input hits peak or not, and then adjust it down. (Also, you can see here that I only have audio in the left channel; I'd better check my connections, or perhaps just use mono, by setting the right channel on the bus mapping to the same input as the left one.)
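
As an aside, the difference between the two meter types is easy to state in code. A minimal sketch in C++ (my own illustration, not Nageru's actual implementation): a peak meter just tracks the largest absolute sample value and reports it in dBFS, where anything at or above 0 dBFS means clipping on output:

#include <algorithm>
#include <cmath>
#include <vector>

// Peak level of a block of float samples, in dBFS.
// 0.0 or higher means the signal would clip on conversion.
float peak_dbfs(const std::vector<float> &samples)
{
        float peak = 0.0f;
        for (float s : samples) {
                peak = std::max(peak, std::fabs(s));
        }
        return 20.0f * std::log10(std::max(peak, 1e-10f));  // clamp to avoid -inf
}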

The compressor (now moved from the mastering section to each bus) should be well-known for those using 1.3.0, but in this view, it also has a reduction meter, so that you can see whether it kicks in or not. Most casual users would want to just leave the gain staging and compressor settings alone, but a skilled audio engineer will know how to adjust these to each speaker's antics (some speak at a pretty even volume and thus can get a bit of headroom, while some are much more variable and need tighter settings).

Finally (or, well, first), there's the EQ section. The lo-cut is again well-known from 1.3.0 (the cutoff frequency is the same across all buses), but there's now also a simple three-band EQ per bus. Simply ask the speaker to talk normally for a bit, and tweak the controls until it sounds good. People have different voices and different ways of holding the microphone, and if you have a reasonable ear, you can use the EQ to your advantage to make them sound a little more even on the stream. Either that, or just put it in neutral, and the entire EQ code will be bypassed.

The code is making pretty good progress; all the DSP stuff is done (save for some optimizations I want to do in zita-resampler, now that the discussion flow has started again), and in theory, one could use it already as-is. However, there's a fair amount of gnarly support code that still needs to be written: in particular, I need to do some refactoring to support ALSA hotplug (you don't want your entire stream to go down just because a USB soundcard dropped out for a split second), and similarly some serialization for saving/loading bus mappings. It's not exactly rocket science, but all the code still needs to be written, and there are a number of corner cases to think of.

If you want to peek, the code is in the multichannel_audio branch, but beware; I rebase/squash it pretty frequently, so if you pull from it, expect frequent git breakage.

Antonio Terceiro: testing build reproducibility with debrepro

3 September, 2016 - 23:58

Earlier today I was handling a reproducibility bug and decided I had to try a reproducibility test by myself. I tried reprotest, but I was being hit by a disorderfs issue and I was not sure whether the problem was with reprotest or not (at this point I cannot reproduce that anymore).

So I decided to hack up a simple script to do that, and it works. I even included it in devscripts after writing a manpage. Of course reprotest is more complete, extensible, and supports arbitrary virtualization backends for doing the more dangerous/destructive variations (such as changing the hostname and other things that require root), but for quick tests debrepro does the job.

Usage examples:

$ debrepro                                 # builds current directory
$ debrepro /path/to/sourcepackage          # builds package there
$ gbp-buildpackage --git-builder=debrepro  # can be used with vcs wrappers as well

debrepro will do two builds with a few variations between them, including $USER, $PATH, timezone, locale, umask and current time, and will even build under disorderfs if available. Build path variation is also performed, because by definition the two builds are done in different directories. If diffoscope is installed, it will be used for a deep comparison of non-matching binaries. The underlying idea is sketched below.
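
A hedged sketch of the technique (not debrepro's actual code, and "hello-1.0" is a placeholder source tree): build the same source twice under deliberately different environments, then compare the results:

mkdir one two
cp -a hello-1.0 one/ && cp -a hello-1.0 two/

(cd one/hello-1.0 && dpkg-buildpackage -b -us -uc)
(cd two/hello-1.0 && env TZ=GMT+12 LC_ALL=fr_CH.UTF-8 USER=builder2 \
    sh -c 'umask 0002; dpkg-buildpackage -b -us -uc')

# deep comparison of the two results (paths illustrative)
diffoscope one/hello_1.0_amd64.deb two/hello_1.0_amd64.deb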

If you are interested and don’t want to build devscripts from source or wait for the next release, you can just grab the script, save it as “debrepro” somewhere on your $PATH and make it executable.

Thorsten Alteholz: Openzwave in Debian

3 September, 2016 - 21:04

It was a real surprise when I saw activity on #791965, which is my ITP bug to package openzwave.

As Ralph wrote, the legal status of the Z-Wave standard has changed. According to a press release from Sigma Designs, the Z-Wave standard has now been put into the public domain.

As even the specification of the Z-Wave S2 security application framework is now available, the openzwave community is finally able to create a really compatible application which might also pass Z-Wave certification. Thus there is new hope that there will be an openzwave package in Debian.

Thorsten Alteholz: My Debian Activities in August 2016

3 September, 2016 - 20:38

FTP assistant

This month I marked 257 packages for accept and rejected only 26. It seems that I mostly picked the high quality packages this month. I also sent 12 emails to maintainers asking questions.

Debian LTS

This was the twenty-sixth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14.75h. Again, most of the time I chose packages where, in the end, the vulnerable code of the corresponding CVE was not present in the Wheezy version. So I could mark several CVEs for lshell and wget as not-affected without doing an upload. Unfortunately I had to give up working on chicken; my Scheme abilities appear to be rather rusty.

Further, I uploaded a test version of php5 that takes care of 17 CVEs and, as requested by the LTS users, two additional bugs. After all tests have passed, I will do the real upload with a DLA.

This month I also had another term of frontdesk work.

Other stuff

For the Alljoyn framework I fixed a compile issue with gcc 6 and could close RC bugs #831127, #831091, and #831198. My patch was also accepted upstream.

Unfortunately a bug in gtest resulted in #833636.

As gcc 6 is now the default compiler in testing, I could also close RC bug #831106.

Bits from Debian: New Debian Developers and Maintainers (July and August 2016)

3 September, 2016 - 17:00

The following contributors got their Debian Developer accounts in the last two months:

  • Edward John Betts (edward)
  • Holger Wansing (holgerw)
  • Timothy Martin Potter (tpot)
  • Martijn van Brummelen (mvb)
  • Stéphane Blondon (sblondon)
  • Bertrand Marc (bmarc)
  • Jochen Sprickerhof (jspricke)
  • Ben Finney (bignose)
  • Breno Leitao (leitao)
  • Zlatan Todoric (zlatan)
  • Ferenc Wágner (wferi)
  • Matthieu Caneill (matthieucan)
  • Steven Chamberlain (stevenc)

The following contributors were added as Debian Maintainers in the last two months:

  • Jonathan Cristopher Carter
  • Reiner Herrmann
  • Michael Jeanson
  • Jens Reyer
  • Jerome Benoit
  • Frédéric Bonnard
  • Olek Wojnar



Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.