Planet Debian

Planet Debian - http://planet.debian.org/

Raphaël Hertzog: My Debian LTS report for September

30 September, 2014 - 21:24

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should be marked as unsupported security-wise. It looks like we’re going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple Java packages.
  • I reviewed many CVEs on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don’t believe a security update is needed, usually because the problem is minor and/or fixing it is more likely to introduce a regression than to help).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVEs concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look for a clear explanation of the security issue, for a list of vulnerable versions, and for patches for the versions we have in Debian in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).
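
As a concrete illustration, the manual source check described above often boils down to something like this rough sketch (the package name and search pattern are hypothetical, and a deb-src entry for oldstable is assumed):

apt-get source foo/squeeze                 # fetch the source we actually ship
grep -rn "vulnerable_function" foo-1.2.3/  # look for the code touched by the upstream fix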

Once I was done with that first pass of triaging, I had already spent more than the 11 paid hours, but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.


Mario Lang: A simple C++11 concurrent workqueue

30 September, 2014 - 20:20

For a little toy project of mine (a wikipedia XML dump word counter) I wrote a little C++11 helper class to distribute work to all available CPU cores. It took me many years to overcome my fear of threading: In the past, whenever I toyed with threaded code, I ended up having a lot of deadlocks, and generally being confused. It appears that I finally have understood enough of this craziness to be able to come up with the small helper class below.

It makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: You provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining work is processed and all background threads are joined.

The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9; with earlier versions, my tests showed std::thread::hardware_concurrency() always returning 1. I don't know exactly when GCC (or libstdc++, rather) started to support this, but at least since GCC 4.9 it is usable. A prerequisite on Linux is a mounted /proc.

The maximum number of items per thread in the queue defaults to 1. If the queue is full, calls to the function operator will block.

So the most basic usage example is probably something like:

int main() {
  typedef std::string item_type;
  distributor<item_type> process([](item_type &item) {
    // do work
  });

  while (/* input */) process(std::move(/* item */));

  return 0;
}

That is about as simple as it can get, IMHO.

The code can be found in the GitHub project mentioned above. However, since the class is relatively short, here it is.

#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;

public:
  template<typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be positive and non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be positive and non-zero");

    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }

  distributor(distributor &&) = default;
  distributor(distributor const &) = delete;
  distributor& operator=(distributor const &) = delete;

  ~distributor() {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
    }
    notify_all();
    std::for_each(threads.begin(), threads.end(),
                  std::mem_fun_ref(&std::thread::join));
  }

  void operator()(Type &&value) {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_one();
  }

private:
  template <typename Function>
  void consume(Function process) {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        lock.unlock();
        notify_one();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};
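
To make the usage pattern concrete, here is a minimal sketch of a word counter built on the class above (hypothetical, not the actual wikipedia dump counter; it assumes the distributor definition is in scope):

#include <atomic>
#include <iostream>
#include <sstream>
#include <string>

int main() {
  std::atomic<std::size_t> words{0};
  {
    distributor<std::string> count([&words](std::string &line) {
      std::istringstream stream{line};
      std::string word;
      std::size_t n = 0;
      while (stream >> word) ++n;
      words += n;                        // atomic add, safe from worker threads
    });

    std::string line;
    while (std::getline(std::cin, line)) count(std::move(line));
  }                                      // destructor drains the queue and joins all threads

  std::cout << words << " words" << std::endl;
  return 0;
}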

If you have any comments regarding the implementation, please drop me a mail.

Francois Marier: Encrypted mailing list on Debian and Ubuntu

30 September, 2014 - 13:30

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.

Requirements

What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

The list then decrypts the email and re-encrypts it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised since every list email transits through there at some point in plain text.

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.
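
The exact form of the transport entry is described in the schleuder documentation shipped with the package; as a rough, unverified sketch, a per-list entry in /etc/postfix/transports maps the list address to the schleuder transport defined in master.cf, along these lines:

list@example.org    schleuder: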

Then you can test it by sending an email to LISTNAME-sendkey@example.org. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

Dirk Eddelbuettel: Rcpp 0.11.3

30 September, 2014 - 09:39

A new release 0.11.3 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package has been uploaded too.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 273 packages on CRAN depend on Rcpp for making analyses go faster and further.
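
For readers who have not used it, a minimal example in the style of the Rcpp Attributes vignette (the standard introductory snippet, not something specific to this release) exports a C++ function to R like so:

#include <Rcpp.h>

// [[Rcpp::export]]
Rcpp::NumericVector timesTwo(Rcpp::NumericVector x) {
  return x * 2;   // vectorised arithmetic, returned to R as a numeric vector
}

Compiled from R with Rcpp::sourceCpp("timesTwo.cpp"), the function can then be called as timesTwo(1:5).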

This release brings a fairly large number of continued enhancements, fixes and polishing to Rcpp. These were provided by a total of seven different contributors---which is a new record as well.

See below for a detailed list of changes extracted from the NEWS file, but some highlights included in this release are

  • Several API cleanups, polishes and a pre-announced code removal
  • New InternalFunction interface, and new Timer functionality.
  • More robust functionality of Rcpp Attributes as well as a new dryRun option.
  • The Rcpp FAQ was updated, as was the main Description: in the DESCRIPTION file.
  • Rcpp.package.skeleton() can now deploy functionality from pkgKitten to create Rcpp packages that purr.

One sore point, however, is that we missed that packages using Rcpp Modules appear to require a rebuild. We are sorry for the inconvenience; this has highlighted a shortcoming in our fairly robust and extensive tests. While we test our packages against all known CRAN dependents, such tests check for the ability to compile and run freshly and not whether previously built packages still run. We intend to augment our testing in this direction to avoid a repeat occurrence of such a misfeature.

Changes in Rcpp version 0.11.3 (2014-09-27)
  • Changes in Rcpp API:

    • The deprecation of RCPP_FUNCTION_* which was announced with release 0.10.5 last year is proceeding as planned, and the file macros/preprocessor_generated.h has been removed.

    • Timer no longer records time between steps, but times from the origin. It also gains a get_timers(int) method that creates a vector of Timers that have the same origin. This is modelled on the Rcpp11 implementation and is more useful for situations where we use timers in several threads. Timer also gains a constructor taking a nanotime_t to use as its origin, and an origin method. This can be useful for situations where the number of threads is not known in advance but we still want to track what goes on in each thread.

    • A cast to bool was removed in the vector proxy code as inconsistent behaviour between clang and g++ compilations was noticed.

    • A missing update(SEXP) method was added thanks to pull request by Omar Andres Zapata Mesa.

    • A proxy for DimNames was added.

    • A no_init option was added for Matrices and Vectors.

    • The InternalFunction class was updated to work with std::function (provided a suitable C++11 compiler is available) via a pull request by Christian Authmann.

    • A new_env() function was added to Environment.h

    • The return value of range eraser for Vectors was fixed in a pull request by Yixuan Qiu.

  • Changes in Rcpp Sugar:

    • In ifelse(), the returned NA type was corrected for operator[].

  • Changes in Rcpp Attributes:

    • Include LinkingTo in DESCRIPTION fields scanned to confirm that C++ dependencies are referenced by package.

    • Add dryRun parameter to sourceCpp.

    • Corrected issue with relative path and R chunk use for sourceCpp.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was updated with respect to OS X issues.

    • A new entry in the Rcpp-FAQ clarifies the use of licenses.

    • Vignettes build results no longer copied to /tmp to please CRAN.

    • The Description in DESCRIPTION has been shortened.

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function will now use pkgKitten package, if available, to create a package which passes R CMD check without warnings. A new Suggests: has been added for pkgKitten.

    • The modules=TRUE case for Rcpp.package.skeleton() has been improved and now runs without complaints from R CMD check as well.

  • Changes in Rcpp unit test functions:

    • Functions from the RUnit package are now prefixed with RUnit::

    • The testRcppModule and testRcppClass sample packages now pass R CMD check --as-cran cleanly without NOTES or WARNINGS

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: Letter to Starburst magazine

29 September, 2014 - 16:51

I recently read a few issues of Starburst magazine which is good fun, but a brief mention of the Man Booker prize in issue 404 stoked the fires of the age old SF-versus-mainstream argument, so I wrote the following:

Dear Starburst,

I found it perplexing that, in "Brave New Words", issue 404, whilst covering the Man-Booker shortlist, Ed Fortune tried to simultaneously argue that genre readers "read broadly" yet only Howard Jacobson's novel would be of passable interest. Aside from the obvious logical contradiction, he is sadly overlooking David Mitchell's critically lauded and indisputably SF&F novel "The Bone Clocks", which it turned out was also overlooked by the short-listers. Still, Jacobson's novel made it, meaning SF&F represents 16% of the shortlist. Not too bad I'd say.

All the best & keep up the good work!

As it happens I'm currently struggling through "J". I'm at around the half-way mark.

DebConf team: DebConf15 dates are set, come and join us! (Posted by DebConf15 team)

29 September, 2014 - 15:40

At DebConf14 in Portland, Oregon, USA, next year’s DebConf team presented their conference plans and announced the conference dates: DebConf15 will take place from 15 to 22 August 2015 in Heidelberg, Germany. On the Opening Weekend on 15/16 August, we invite members of the public to participate in our wide offering of content and events, before we dive into the more technical part of the conference during the following week. DebConf15 will also be preceded by DebCamp, a time and place for teams to gather for intensive collaboration.

A set of slides from a quick show-case during the DebConf14 closing ceremony provide a quick overview of what you can expect next year. For more in-depth information, we invite you to watch the video recording of the full session, in which the team provides detailed information on the preparations so far, location and transportation to the venue at Heidelberg, the different rooms and areas at the Youth Hostel (for accommodation, hacking, talks, and social activities), details about the infrastructure that are being worked on, and the plans around the conference schedule.

We invite everyone to join us in organising this conference. There are different areas where your help could be very valuable, and we are always looking forward to your ideas. Have a look at our wiki page, join our IRC channels and subscribe to our mailing lists.

We are also contacting potential sponsors from all around the globe. If you know any organisation that could be interested, please consider handing them our sponsorship brochure or contact the fundraising team with any leads.

Let’s work together, as every year, on making the best DebConf ever!

Ean Schuessler: RoboJuggy at JavaOne

29 September, 2014 - 07:14

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson.  I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward-looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and with David’s friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results.

Bruno is an avid puppeteer, a global organizer of java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden’s RSMedia and various versions of David’s hobby servo based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture we had a great and fun idea for a JavaOne talk.

Hunting and gathering

I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered and the new RaspberryPi B+ seemed like the perfect low power brain to make that happen. I like the Debian based Raspbian distributions but had lately started using the “netinst” Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don’t need. I’d recommend anyone interested in duplicating our work to start their journey there:

Raspbian UA Net Installer

Robots seem like an embedded application but ROS only ships packages for Ubuntu. I was pleasantly surprised that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions:

Setting up ROS Hydro on the Raspberry Pi

Building from source means that your whole install ends up being “isolated” (in ROS speak) and your file locations and build instructions end up being subtly different. As explained in the linked article, this process is also very time consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way if you make a mess of things in later steps you can restore your install to a pristine Raspbian+ROS install. If your SD drive was on /dev/sdb you might use something like this to do the job:

sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz
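
To restore that backup later (assuming the card is still at /dev/sdb and imageDDMMYY.gz is the file produced above), the reverse pipeline would be something like:

gunzip -c /home/your_username/imageDDMMYY.gz | sudo dd bs=4M of=/dev/sdb
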
Getting Java in the mix

Once you have your Pi all set up with minimal Raspbian and ROS you are going to want a Java VM. The Pi runs an ARM CPU so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and I had some issues with that. I will work on resolving that in the future because I would like to have a 100% Free Software kit for this, but since this was for JavaOne I also wanted JDK8, which isn’t available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM.

Java 8 JDK for ARM

At this point you are ready to start installing the ROS Java packages. I’m pretty sure the way I did this initially is wrong but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for Raspberry Pi. I started by following these directions for ROS Java but with a few exceptions (you have to click the “install from source” link on the page to see the right stuff):

Installing ROS Java on Hydro

Now these instructions are good but this is a Pi running Debian and not an Ubuntu install. You won’t run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace and we really want these packages all in one workspace. You can apparently “chain” workspaces in ROS but I didn’t understand this well enough to get it working so what I did was this:

> mkdir -p ~/rosjava
> wstool init -j4 ~/rosjava/src https://raw.github.com/rosjava/rosjava/hydro/rosjava.rosinstall
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava
> # Make sure we've got all rosdeps and msg packages.
> rosdep update
> rosdep install --from-paths src -i -y

and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src. Once those were copied over I was able to run a standard build.

> catkin_make_isolated --install

Like the main ROS install this process will take a little while. The Java gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, docker, whatever) and do these same “build from source” installs there. This will let you try your steps out on a much faster system before you try them out in the Pi. That can be a big time saver.

Putting together the pieces

Around this time my RSMedia had finally showed up from Alaska. At first I thought I had a broken unit because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch out for that mistake. Here is a picture of the RSMedia when it first came out of the box (sorry that it’s rotated, I need to fix my WordPress install):

 

Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan and tilt camera rig. I had previously bought a set of 10 motors for goofing around so I bought the pan and tilt rig by itself for about $5(!) but you can buy a complete set for around $25 from a number of EBay stores.

Complete pan and tilt rig with motors for $25

A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy to use and high level serial API. It is probably also possible to control these servos directly from the Pi and eliminate this board but that will be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won’t regret buying it.

Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately a lot of great information about the RSMedia has floated away since it was in its heyday 5 years ago but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge based website here at http://rsmediadevkit.sourceforge.net.

That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you are going because they don’t take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit and that helped me get things back together later.  Another important note is that if all you want to do is solder onto the control board and not replace the head then it’s feasible to solder the board in place without completely disassembling the unit. Here are some photos of the dis-assembly:

Now I also had to start adjusting the puppet head, building an armature for the motors to control it and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have also learned about cardboard is that if you get something going with it and you need it to be a little more production-strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough because it soaks through the fibers of the cardboard and hardens around them. You will want to do this in a well ventilated area but it’s a great way to build super tough prototypes.

Another prototyping trick I can suggest is using a combination of Velcro and zipties to hook things together. The result is surprisingly strong and still easy to take apart if things aren’t working out. Velcro self-adhesive pads stick to rubber like magic and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here:

Since the puppet head had come all the way from Brazil I decided to cook some chicken hearts in the churrascaria style while I worked on them in the garage. This may sound gross but I’m telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress.

 

As I was eating my chicken hearts I was also connecting the pan and tilt armature onto the puppet’s jaw and eye assembly. It took me most of the evening to get all this going but by about one in the morning things were starting to look good!

I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not “built in”. I had tried to follow the provided instructions but have not (then or since) been able to get that working.

Using “unofficial” messages with ROS Java

I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there are a set of pins that provide a serial interface to the ARM based embedded Linux computer that controls the robot. To do that I followed these excellent instructions:

Connecting to the RSMedia Linux Console Port

After some sweaty time bent over a magnifying glass I had success:

I had previously purchased the USB-TTL232 accessory described in the article from the awesome Tanner Electronics store in Dallas. If you are a geek I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge) Jim Tanner.

It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn’t get it to respond to commands but I quickly realized I had flow control turned on. Once turned off I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource which let me get all kinds of body movements going:

A collection of useful commands for the RSMedia

At this point, I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry’s latest and greatest robotics framework that could run on the RSMedia without being tethered to power, and I had most of a connection to Java going.  Now I just had to get all those pieces working together. The only problem was that time was running out: I only had a couple of days until my talk and still had to pack and square things away at work.

The last day was spent doing things that I wouldn’t be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature. He used a mix of oil paint and rubber cement, which stuck to the mask beautifully.

I bought battery packs for the USB Pi power and the 6v motor control and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything. Luckily during all this my beautiful and ever so supportive girlfriend Becca had helped me get packed or I probably wouldn’t have made it out the door.

Welcome to San Francisco

THIS ARTICLE IS STILL BEING WRITTEN

 

Jonathan Dowland: Puppet and filesystem mounts

29 September, 2014 - 02:50

Well, not long after writing my last post I've found some time to write up some of my puppet adventures, sooner than I imagined...

Outside work, I sys-admin a VPS instance that is shared by a few friends. We recently embarked in a project to migrate to a different VPS instance and I took the opportunity to revisit how we managed home directories.

I've got all the disk space allocated to the VM set up as LVM physical volumes. This has proven very useful for later expansion: we can do it all live. Each user on the VM may have one or more UNIX accounts that they use. Therefore, in the old scheme, for the jon user, we mounted an allocation of disk space at /home/jons, put the account home directories under it at e.g. /home/jons/jon, symlinked /home/jon -> /home/jons/jon for brevity, and set that as the home field in the passwd entry. This worked surprisingly well, but I was always uncomfortable with having a symlink in the home path (and some software was, too.)

For the new machine, I decided to try bind mounts. Short story: they just work. However, the mtab (and df output) can look a little cluttered, and mount order becomes quite important. To manage the set-up, I wrote a few puppet snippets. First, a convenience definition to make the actual bind-mounts a little less verbose.

define bindmount($device) {
  mount { $name:
    device  => $device,
    ensure  => mounted,
    fstype  => 'none',
    options => 'bind',
    dump    => 0,
    pass    => 2,
    require => File[$device],
  }
}

Once that was in place, we then needed to ensure that the directories to which the LV was to be mounted, and to which the user's home would be bind-mounted, actually exist; we also need to mount the underlying LV and set up the bind mount. The dependency chain is actually a graph, but with the majority of dependencies quite linear:

define bindmounthome() {
  file { ["/home/${name}s", "/home/${name}"]:
    ensure  => directory,
  } -> # depended upon by
  mount { "/home/${name}s":
    device  => "LABEL=${name}",
    ensure  => mounted,
    fstype  => 'ext4',
    options => 'defaults',
    dump    => 0,
    pass    => 2,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device  => "/home/${name}s/${name}",
  }
  file { "/home/${name}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${name}s"]],
  }
}

That covers the underlying mounts and the "primary" accounts. However, the point of this exercise was to support the secondary accounts for each user. There's a bit of repetition here, and with some refactoring both this and the preceding bindmounthome definition could be a bit shorter, but I'm not sure whether that would be at the expense of legibility:

define seconduser($parent) {
  file { "/home/${name}":
    ensure => directory,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device => "/home/${parent}s/${name}",
  }
  file { "/home/${parent}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${parent}s"]],
  }
}

I had to re-read the above a couple of times just now to convince myself that I hadn't missed the dependencies between the mount invocations towards the bottom, but they're there: so, puppet will always run the mount for /home/jons before /home/jons/jon. Since puppet is writing to the fstab, this means that the ordering is correct and a sequential start-up will work.
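
Tying these together, a hypothetical node manifest (usernames invented for illustration, assuming the corresponding User and Group resources and a filesystem labelled after the primary user exist elsewhere) might look something like:

# primary account 'jon', with its LV mounted at /home/jons
bindmounthome { 'jon': }

# secondary account 'jon-work' living under jon's allocation
seconduser { 'jon-work':
  parent => 'jon',
}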

If you want anything cleverer than serialised, one-at-a-time mounting at boot, I think one would have to use something other than trusty-old fstab for the job. I'm planning to look at Systemd's mount unit type, but there's no rush as this particular host is still running sysvinit for the time being.

Clint Adams: Banana Pi is a real thing

29 September, 2014 - 02:13

Now that I've almost caught up with life after an extended stint on the West Coast, it's time to play.

Like Gunnar, I acquired a Banana Pi courtesy of LeMaker.

My GuruPlug (courtesy me) and my Excito B3 (courtesy the lovely people at Tor) are giving me a bit of trouble in different ways, so my intent is to decommission and give away the GuruPlug and Excito B3, leaving my DreamPlug and the Banana Pi to provide the services currently performed by the GuruPlug, Excito B3, and DreamPlug.

The Banana Pi is presently running Bananian on a 32G SDHC (Class 10) card. This is close to wheezy, and appears to have a mostly-sane default configuration, but I am not going to trust some random software downloaded off the Internet on my home network, so I need to be able to run Debian on it instead.

My preliminary belief is that the two main obstacles are Linux and U-Boot. Bananian 14.09 comes with Linux 3.4.90+ #1 SMP PREEMPT Fri Sep 12 18:13:45 CEST 2014 armv7l GNU/Linux, whatever that is, and U-Boot SPL 2014.04-10694-g2ae8b32 (Sep 03 2014 - 20:53:14). I don't yet know what the status of mainline/Debian support is.

Someone gave me a wooden cigar box to use as a case, which is not working out quite as hoped. I also found that my hack to power a 3.5" SATA drive does not work, so I'll either need to hammer on that some more or resolve to use a 2.5" drive instead.

memory:

Mem:        993700      36632     957068          0       2248      11136
-/+ buffers/cache:      23248     970452
Swap:       524284       1336     522948

cpu:

Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 1192.96

processor       : 1
BogoMIPS        : 1197.05

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 4

Hardware        : sun7i
Revision        : 0000
Serial          : 03c32de75055484880485278165166c9

Jonathan Dowland: What have I been up to?

29 September, 2014 - 01:59

It's been a little while since I've written about what I've been up to. The truth is I've been busy with moving house - and I'll write a bit more about that at another time. But asides from that there have been some bits and bobs.

I use a little tool called archivemail to tidy up old listmail (my policy is to retain 30 days of listmail for most lists). If I unsubscribe from a list, then eventually I end up with an empty mail folder corresponding to that list. I decided it would be nice to extend archivemail to delete mailboxes if, after the archiving has taken place, the mailbox is empty. Doing this properly means adding delete routines to Python's "mailbox" library, which is part of the Python standard library. I've therefore started work on a patch for Python.
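
As a rough illustration of the gap (this is not the actual patch), deleting an empty mbox today means stepping around the mailbox module:

import mailbox, os

path = os.path.expanduser("~/Mail/some-list")  # hypothetical folder path
box = mailbox.mbox(path)
if len(box) == 0:                              # nothing left after archiving
    box.close()
    os.remove(path)                            # what a mailbox-level delete routine would wrap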

Since this is an enhancement, Python would only accept a patch for Python 3. Therefore, eventually, I would also have to port archivemail from Python 2 to 3. "archivemail" is basically abandonware at the moment, and the principal Debian maintainer is MIA. There was a release critical bug filed against it, so I joined the Debian Python team to co-maintain archivemail in Debian. I've worked around the RC bug but a proper fix is still to come.

In other Debian news, I've been mostly quiet. A small patch for squishyball to get it to build on Hurd, and a temporary fix patch for lhasa to get it to build on the build daemons for all architectures (problems with the test suite). All three of lhasa, squishyball and archivemail need a little bit of love to get them into shape before the jessie freeze.

I've had plans to write up some of the more interesting technical things I've been up to at work, but with the huge successes of the School we've been so busy I haven't had time. Hopefully you can soon look forward to some of our further adventures with puppet, including evaluating Shibboleth modules, some stuff about handling user directories, bind mounts and LVM volumes and actually publishing some of our more useful internal modules; I hope we will also (soon) have some useful data to go with our experiments with Linux LXC containers versus KVM-powered virtual machines in some of our use-cases. I've also got a few bits and pieces on Systemd to write up.

Benjamin Mako Hill: Community Data Science Workshops Post-Mortem

28 September, 2014 - 13:02

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. With overwhelming interest, we were ultimately constrained by the number of mentors who volunteered. Unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criteria, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

DebConf team: Wrapping up DebConf14 (Posted by Paul Wise, Donald Norwood)

28 September, 2014 - 03:40

The annual Debian developer meeting took place in Portland, Oregon, 23 to 31 August 2014. DebConf14 attendees participated in talks, discussions, workshops and programming sessions. Video teams captured a lot of the main talks and discussions, both for streaming to interactive remote attendees and for the Debian video archive.

Beyond the video, presentations, and handouts, coverage also came from attendees in blogs, posts, and project updates. We’ve gathered a few articles for your reading pleasure:

Gregor Herrmann and a few members of the Debian Perl group had an informal unofficial pkg-perl micro-sprint and were very productive.

Vincent Sanders shared an inspired gift in the form of a plaque given to Russ Allbery in thanks for his tireless work of keeping sanity in the Debian mailing lists. Pictures of the plaque and design scheme are linked in the post. Vincent also shared his experiences of the conference and hopes the organisers have recovered.

Noah Meyerhans’ adventuring to Debian by train (Inter)netted some interesting IPv6 data for future road and rail warriors.

Hideki Yamane sent a gentle reminder for English speakers to speak more slowly.

Daniel Pocock posted of GSoC talks at DebConf14, highlights include the Java Project Dependency Builder and the WebRTC JSCommunicator.

Thomas Goirand gives us some insight into a working task list of accomplishments and projects he was able to complete at DebConf14, from the OpenStack discussion to tasksel talks, and completion of some things started last year at DebConf13.

Antonio Terceiro blogged about debci and the Debian Continuous Integration project, Ruby, Redmine, and Noosfero. His post also shares the atmosphere of being able to interact directly with peers once a year.

Stefano Zacchiroli blogged about a talk he did on debsources which now has its own HACKING file.

Juliana Louback penned: DebConf 2014 and How I Became a Debian Contributor.

Elizabeth Krumbach Joseph’s in-depth summary of DebConf14 is a great read. She discussed Debian Validation & CI, debci and the Continuous Integration project, Automated Validation in Debian using LAVA, and Outsourcing webapp maintenance.

Lucas Nussbaum, by way of a blog post, released the very first version of Debian Trivia, modelled after the TCP/IP Drinking Game.

François Marier shares additional information and further discussion on Outsourcing your webapp maintenance to Debian.

Joachim Breitner gave a talk on Haskell and Debian, created a new tool for binNMUs for Haskell packages which runs via cron job. The output is available for Haskell and for OCaml, and he still had a small amount of time to go dancing.

Jaldhar Harshad Vyas was not able to attend DebConf this year, but he did tune in to the videos made available by the video team and gives an insightful viewpoint to what was being seen.

Jérémy Bobbio posted about Reproducible builds in Debian in his recap of DebConf14. One of the topics at hand involved defining a canonical path where packages must be built and a BOF discussion on reproducible builds from where the conversation moved to discussions in both Octave and Groff. New helpers dh_fixmtimes and dh_genbuildinfo were added to BTS. The .buildinfo format has been specified on the wiki and reviewed. Lots of work is being done in the project, interested parties can help with the TODO list or join the new IRC channel #debian-reproducible on irc.debian.org.

Steve McIntyre posted a Summary from the d-i / debian-cd BoF at DC14, with some of the session video available online. The current jessie D-I needs some help with testing on less common architectures and languages, and release scheduling could be improved. Future plans: switching to a GUI by default for jessie, a default desktop and desktop choice, artwork, bug fixes and new architecture support. debian-cd: things are working well. Improvement discussions are on selecting which images to make (i.e. netinst, DVD, et al.), debian-cd work in progress with http download support, and regular live test builds. Other discussions and questions revolve around which ARM platforms to support, specially-designed images, multi-arch CDs, and cloud-init based images. There is also a call for help as the team needs help with testing, bug-handling, and translations.

Holger Levsen reports on feedback about the feedback from his LTS talk at DebConf14. LTS has been perceived well, fits a demand, and people are expecting it to continue; however, this is not without a few issues as Holger explains in greater detail the lacking gatekeeper mechanisms, and how contributions are needed from finance to uploads. In other news the security-tracker is now fixed to know about old stable. Time is short for that fix as once jessie is released the tracker will need to support stable, oldstable which will be wheezy, and oldoldstable.

Jonathan McDowell’s summary of DebConf14 includes a fair perspective of the host city and the benefits of planning of a good DebConf14 location. He also talks about the need for facetime in the Debian project as it correlates with and improves everyone’s ability to work together. DebConf14 also provided the chance to set up a hard time frame for removing older 1024 bit keys from Debian keyrings.

Steve McIntyre posted a Summary from the “State of the ARM” BoF at DebConf14 with updates on the 3 current ports armel, armhf and arm64. armel, which targets the ARM EABI soft-float ARMv4t processor, may eventually be going away, while armhf, which targets the ARM EABI hard-float ARMv7, is doing well as the cross-distro standard. Debian has moved to a single armmp kernel flavour using Device Tree Blobs and should be able to run on a large range of ARMv7 hardware. The arm64 port recently entered the main archive and it is hoped to release with jessie with 2 official builds hosted at ARM. There is talk of laptop development with an arm64 CPU. Buildds and hardware are mentioned with acknowledgements for donated new machines, Banana Pi boards, and software by way of ARM’s DS-5 Development Studio - free for all Debian Developers. Help is needed! Join #debian-arm on irc.debian.org and/or the debian-arm mailing list. There is an upcoming Mini-DebConf in November 2014 hosted by ARM in Cambridge, UK.

Tianon Gravi posted about the atmosphere and contrast between an average conference and a DebConf.

Joseph Bisch posted about meeting his GSOC mentors, attending and contributing to a keysigning event and did some work on debmetrics which is powering metrics.debian.net. Debmetrics provides a uniform interface for adding, updating, and viewing various metrics concerning Debian.

Harlan Lieberman-Berg’s DebConf Retrospective shared the feel of DebConf, and detailed some of the work on debugging a build failure, work with the pkg-perl team on a few uploads, and work on a javascript slowdown issue on codeeditor.

Ana Guerrero López reflected on Ten years contributing to Debian.

Ritesh Raj Sarraf: Laptop Mode Tools 1.66

27 September, 2014 - 17:09

I am pleased to announce the release of Laptop Mode Tools at version 1.66.

This release fixes an important bug in the way Laptop Mode Tools is invoked: now, when users disable it in the config file, the tool will actually be disabled. Thanks to bendlas@github for narrowing it down. The GUI configuration tool has been improved, thanks to Juan. And there is a new power saving module for users with ATI Radeon cards. Thanks to M. Ziebell for submitting the patch.

Laptop Mode Tools development can be tracked @ GitHub


Niels Thykier: Lintian – Upcoming API making it easier to write correct and safe code

27 September, 2014 - 15:08

The upcoming version of Lintian will feature a new set of API that attempts to promote safer code. It is hardly a “ground-breaking discovery”, just a much needed feature.

The primary reason for this API is that writing safe and correct code is complicated enough that people get it wrong (including yours truly on occasion).  The second reason is that I feel it is a waste having to repeat myself when reviewing patches for Lintian.

Fortunately, the issues this kind of mistake creates are usually minor information leaks, often with no chance of exploiting them remotely without the owner reviewing the output first[0].

Part of the complexity of writing correct code originates from the fact that Lintian must assume Debian packages to be hostile until otherwise proven[1]. Consider a simplified case where we want to read a file (e.g. the copyright file):

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  # BAD: This is an example of doing it wrong
  open(my $fd, '<', $info->unpacked($filename));
  ...;
  close($fd);
  return;
}

This has two trivial vulnerabilities[2].

  1. Any part of the path (usr, usr/share, …) can be a symlink to “somewhere else”, like /
    1. Problem: Access to potentially any file on the system with the credentials of the user running Lintian.  But even then, Lintian generally never write to those files and the user has to (usually manually) disclose the report before any information leak can be completed.
  2. The target path can point to a non-file.
    1. Problem: Minor inconvenience by DoS of Lintian.  Examples include a named pipe, where Lintian will get stuck until a signal kills it.


Of course, we can do this right[3]:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
use Lintian::Util qw(is_ancestor_of);
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $root = $info->unpacked;
  my $path = $info->unpacked($filename);
  if ( -f $path and is_ancestor_of($root, $path)) {
    open(my $fd, '<', $path);
    ...;
    close($fd);
  }
  return;
}

Where “is_ancestor_of” is the only available utility to assist you currently.  It hides away some 10-12 lines of code to resolve the two paths and correctly asserting that $path is (an ancestor of) $root.  Prior to Lintian 2.5.12, you would have to do that ancestor check by hand in each and every check[4].

In the new version, the correct code would look something like this:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $path = $info->index_resolved_path($filename);
  if ($path and $path->is_open_ok) {
    my $fd = $path->open;
    ...;
    close($fd);
  }
  return;
}

Now, you may wonder how that promotes safer code.  At first glance, the checking code is not a lot simpler than the previous “correct” example.  However, the new code has the advantage of being safer even if you forget the checks.  The reasons are:

  1. The return value is entirely based on the “file index” of the package (think: tar vtf data.tar.gz).  At no point does it use the file system to resolve the path.  Whether your malicious package triggers an undef warning based on the return value of index_resolved_path leaks nothing about the host machine.
    1. However, it does take safe symlinks into account and resolves them for you.  If you ask for ‘foo/bar’ and ‘foo’ is a symlink to ‘baz’ and ‘baz/bar’ exists in the package, you will get ‘baz/bar’.  If ‘baz/bar’ happens to be a symlink, then it is resolved as well.
    2. Bonus: You are much more likely to trigger the undef warning during regular testing, since it also happens if the file is simply missing.
  2. If you attempt to call “$path->open” without calling “$path->is_open_ok” first, Lintian can now validate the call for you and stop it on unsafe actions.

It also has the advantage of centralising the code for asserting safe access, so bugs in it only needs to be fixed in one place.  Of course, it is still possible to write unsafe code.  But at least, the new API is safer by default and (hopefully) more convenient to use.

 

[0] Lintian.debian.org being the primary exception here.

[1] This is in contrast to e.g. piuparts, which very much trusts its input packages by handing the package root access (albeit chroot’ed, but still).

[2] And also a bug.  Not all binary packages have a copyright file – instead some will have a symlink to another package.

[3] The code is hand-typed into the blog without prior testing (not even compile testing it).  The code may be subject to typos, brown-paper-bag bugs etc. which are all disclaimed (of course).

[4] Fun fact, our documented example for doing it “correctly” prior to implementing is_ancestor_of was in fact not correct.  It used the root path in a regex (without quoting the root path) – fortunately, it just broke lintian when your TMPDIR / LINTIAN_LAB contained certain regex meta-characters (which is pretty rare).


Richard Hartmann: Release Critical Bug report for Week 39

27 September, 2014 - 04:45

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1393
    • Affecting Jessie: 408. That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 360 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 50 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 290 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

Steve Kemp: Next week I shall be mostly in Kraków

27 September, 2014 - 01:20

Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.

It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.

In other news I've been rationalising DNS entries and domain names recently, all being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.

Jakub Wilk: Pet peeves: debhelper build-dependencies (redux)

26 September, 2014 - 20:05
$ zcat Sources.gz | grep -o -E 'debhelper [(]>= 9[.][0-9]{,7}([^0-9)][^)]*)?[)]' | sort | uniq -c | sort -rn
    338 debhelper (>= 9.0.0)
     70 debhelper (>= 9.0)
     18 debhelper (>= 9.0.0~)
     10 debhelper (>= 9.0~)
      2 debhelper (>= 9.2)
      1 debhelper (>= 9.2~)
      1 debhelper (>= 9.0.50~)

Is it a way to protest against debhelper's current version scheme?

Holger Levsen: 20140925-reproducible-builds

26 September, 2014 - 18:34
Reproducible builds? I never did any - manually

I've never done a reproducible build attempt of any package, manually, ever. But what I have done now is set up reproducible builds on jenkins.debian.net, which will build hundreds or thousands of packages, hopefully reproducibly, regularly in the future. Thanks to Lunar's and many other people's work, this was actually rather easy. If you want to do this manually, it should take you just a few minutes to set up a suitable build environment.
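For anyone equally new to this: the basic test behind reproducible builds is simply to build the same source package twice and verify that the resulting binary packages are bit-for-bit identical. Here is a minimal sketch of that final comparison step, written in Perl to match the other code on this page; the two file names are placeholders for .deb files produced by two independent builds:

#!/usr/bin/perl
# Compare two builds of the same package by checksum; identical digests
# are the whole point of a reproducible build.
use strict;
use warnings;
use Digest::SHA;

die "usage: $0 first.deb second.deb\n" unless @ARGV == 2;
my ($first, $second) = @ARGV;
my $sum1 = Digest::SHA->new(256)->addfile($first)->hexdigest;
my $sum2 = Digest::SHA->new(256)->addfile($second)->hexdigest;
print $sum1 eq $sum2
    ? "reproducible: checksums match\n"
    : "NOT reproducible:\n  $sum1\n  $sum2\n";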

So three days ago, when I wasn't exactly bored, I decided that it was a good moment to implement some reproducible build jobs on jenkins.d.n, so I gave it a try, and two hours later the basic implementation was working; then it was an evening and a morning of fine tuning until I was mostly satisfied. Since then there has been some polishing, but the basic setup is done and has been working ever since.

What's the result? One job, reproducible_setup, will just create a suitable environment for pbuilding reproducible packages, as documented so well on the Debian wiki. And as that job only runs for 3.5 minutes (to debootstrap from scratch), it's run daily.

And then there are currently 16 other jobs, which test reproducible builds in different areas: d-i, core, some six major desktops and some selected desktop applications, some security + privacy related packages, some build chains we have in Debian, libreoffice and X.org. Most of these jobs run several hours, but luckily not days. And they discover packages which still fail to build reproducibly, which has already caused some bugs to be filed, e.g. #762732 "libdebian-installer: please do not write timestamps in Doxygen generated documentation".

So this is the output from testing the reproducibility of all debian-installer packages: 72 packages were successfully built reproducibly, while 6 packages failed to do so. I was quite impressed by these numbers, as AFAIK no one had tried to build d-i reproducibly before.

72 packages successfully built reproducibly: userdevfs user-setup usb-discover udpkg tzsetup rootskel rootskel-gtk rescue preseed pkgsel partman-xfs partman-target partman-partitioning partman-nbd partman-multipath partman-md partman-lvm partman-jfs partman-iscsi partman-ext3 partman-efi partman-crypto partman-btrfs partman-basicmethods partman-basicfilesystems partman-base partman-auto partman-auto-raid partman-auto-lvm partman-auto-crypto partconf os-prober oldsys-preseed nobootloader network-console netcfg net-retriever mountmedia mklibs media-retriever mdcfg main-menu lvmcfg lowmem localechooser live-installer lilo-installer kickseed kernel-wedge kbd-chooser iso-scan installation-report installation-locale hw-detect grub-installer finish-install efi-reader dh-di debian-installer-utils debian-installer-netboot-images debian-installer-launcher clock-setup choose-mirror cdrom-retriever cdrom-detect cdrom-checker cdebconf-terminal cdebconf-entropy bterm-unifont base-installer apt-setup anna 
6 packages failed to build reproducibly: win32-loader libdebian-installer debootstrap console-setup cdebconf busybox

What's also impressive: all packages for the newly introduced Cinnamon Desktop build reproducibly from the start!

The jenkins setup is configured via just three small files:

That's it, and that's enough to keep several cores busy for days. But as each job only takes a few hours, each is scheduled twice a month, and more jobs and packages shall be added in the future (with some heuristics to schedule known-good packages less often...).

I guess it's an appropriate opportunity to say "many thanks to Profitbricks", who have been donating the powerful virtual machine jenkins.debian.net is running on since October 2012. I also want to say "many many thanks to Helmut" (Grohne) who has recently joined me in maintaining this jenkins setup. And then I'd like to thank "the KGB trio" (Gregor, Tincho and Dam!) for providing those KGB bots on IRC, which are very helpful for providing notifications on IRC channels and last but not least thanks to everybody who contributed so that reproducible builds got this far! Keep up the jolly good work!

And if you happen to know of failing packages not included in job-cfg/reproducible.yaml, I'd like to hear about them, so they'll get regularly tested and appear on the radar, until finally bugs are filed, fixed and migrated to stable. So one day all binary packages in Debian stable will be built reproducibly. An important step on this road is probably to have this defined as a release goal for Jessie+1. And then for Jessie+1 hopefully the first 10k packages will build reproducibly? Or a whopping 23k maybe? And maybe release Jessie+2 with 100%?!? We will see! Even Jessie already has quite some packages (someone needs to count them...) which build reproducibly with just modified dpkg(-dev) and debhelper packages alone...

So let's fix all the bugs! That said, an easier start for most of you is probably the list of useful things you (yes, you!) can do!

Oh, and last but surely not least in my book: many thanks too to the nice people hosting me so friendly in the last days! Keep on rockin'!

Petter Reinholdtsen: How to test Debian Edu Jessie despite some fatal problems with the installer

26 September, 2014 - 18:20

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, a user database, user home directories, central login, and PXE boot of both diskless clients and the installer for putting Debian Edu onto machines with a disk (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still a heap of work left. Some fatal problems block testing by breaking the installer, but it is possible to work around them and get going anyway. Here is a recipe for getting the installation limping along.

First, download the test ISO via ftp, http or rsync (use ftp.skolelinux.org::cd-edu-testing-nolocal-netinst/debian-edu-amd64-i386-NETINST-1.iso). The ISO build was broken on Tuesday, so we do not get a new ISO every 12 hours or so, but thankfully the ISO we already have can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2), run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.

When you get the grub question at the end, answer /dev/sda (or, if this does not work, figure out what the correct value would be; all my test machines need /dev/sda, so I have no advice if that does not fit your setup).

If you installed a profile including a graphical desktop, log in as root after the initial boot from hard drive, and install the education-desktop-XXX metapackage. XXX can be kde, gnome, lxde, xfce or mate. If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once the education-tasks package version 1.801 enters testing in two days.

I believe the ISO build will start working in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled; the proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie based installation ready before the distribution freeze in a month.

Dirk Eddelbuettel: R and Docker

26 September, 2014 - 10:57

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting up the meeting, and for providing a pretty thorough intro talk regarding CoreOS and Docker.

My slides are now up on my presentations page.
