Planet Debian

Planet Debian - http://planet.debian.org/

Jonathan Dowland: Puppet and filesystem mounts

29 September, 2014 - 02:50

Well, not long after writing my last post I've found some time to write up some of my puppet adventures, sooner than I imagined...

Outside work, I sys-admin a VPS instance that is shared by a few friends. We recently embarked on a project to migrate to a different VPS instance, and I took the opportunity to revisit how we managed home directories.

I've got all the disk space allocated to the VM set up as LVM physical volumes. This has proven very useful for later expansion: we can do it all live. Each user on the VM may have one or more UNIX accounts that they use. Therefore, in the old scheme, for the jon user, we mounted an allocation of disk space at /home/jons, put the account home directories under it at e.g. /home/jons/jon, symlinked /home/jon -> /home/jons/jon for brevity, and set that as the home field in the passwd entry. This worked surprisingly well, but I was always uncomfortable with having a symlink in the home path (and so was some software).
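
Spelled out as commands, the old scheme for the jon user amounted to roughly the following (a reconstruction from the description above; the VG/LV names and size are made up, not copied from the real setup):

lvcreate -n jons -L 20G vg0        # one LV per user's allocation (names/size made up)
mkfs.ext4 /dev/vg0/jons
mount /dev/vg0/jons /home/jons     # the allocation, mounted at /home/jons
mkdir /home/jons/jon               # the account's real home directory
ln -s /home/jons/jon /home/jon     # convenience symlink
usermod -d /home/jon jon           # home field in the passwd entry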

For the new machine, I decided to try bind mounts. Short story: they just work. However, the mtab (and df output) can look a little cluttered, and mount order becomes quite important. To manage the set-up, I wrote a few puppet snippets. First, a convenience definition to make the actual bind-mounts a little less verbose.

define bindmount($device) {
  mount { $name:
    device  => $device,
    ensure  => mounted,
    fstype  => 'none',
    options => 'bind',
    dump    => 0,
    pass    => 2,
    require => File[$device],
  }
}

Once that was in place, we then needed to ensure that the directories to which the LV was to be mounted, and to which the user's home would be bind-mounted, actually exist; we also needed to mount the underlying LV and set up the bind mount. The dependency chain is actually a graph, but with the majority of dependencies quite linear:

define bindmounthome() {
  file { ["/home/${name}s", "/home/${name}"]:
    ensure  => directory,
  } -> # depended upon by
  mount { "/home/${name}s":
    device  => "LABEL=${name}",
    ensure  => mounted,
    fstype  => 'ext4',
    options => 'defaults',
    dump    => 0,
    pass    => 2,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device  => "/home/${name}s/${name}",
  }
  file { "/home/${name}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${name}s"]],
  }
}

That covers the underlying mounts and the "primary" accounts. However, the point of this exercise was to support the secondary accounts for each user. There's a bit of repetition here, and with some refactoring both this and the preceding bindmounthome definition could be a bit shorter, but I'm not sure whether that would be at the expense of legibility:

define seconduser($parent) {
  file { "/home/${name}":
    ensure => directory,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device => "/home/${parent}s/${name}",
  }
  file { "/home/${parent}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${parent}s"]],
  }
}

I had to re-read the above a couple of times just now to convince myself that I hadn't missed the dependencies between the mount invocations towards the bottom, but they're there: so, puppet will always run the mount for /home/jons before the bind mount of /home/jons/jon onto /home/jon. Since puppet is writing to the fstab, this means that the ordering is correct and a sequential start-up will work.
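
For the jon example, the fstab entries Puppet ends up writing should look roughly like this (reconstructed from the definitions above, not copied from a live system):

# /etc/fstab fragment: the LV mount line comes before the bind mount line
LABEL=jon       /home/jons  ext4  defaults  0 2
/home/jons/jon  /home/jon   none  bind      0 2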

If you want anything cleverer than serialised, one-at-a-time mounting at boot, I think one would have to use something other than trusty-old fstab for the job. I'm planning to look at Systemd's mount unit type, but there's no rush as this particular host is still running sysvinit for the time being.

Clint Adams: Banana Pi is a real thing

29 September, 2014 - 02:13

Now that I've almost caught up with life after an extended stint on the West Coast, it's time to play.

Like Gunnar, I acquired a Banana Pi courtesy of LeMaker.

My GuruPlug (courtesy me) and my Excito B3 (courtesy the lovely people at Tor) are giving me a bit of trouble in different ways, so my intent is to decommission and give away the GuruPlug and Excito B3, leaving my DreamPlug and the Banana Pi to provide the services currently performed by the GuruPlug, Excito B3, and DreamPlug.

The Banana Pi is presently running Bananian on a 32G SDHC (Class 10) card. This is close to wheezy, and appears to have a mostly-sane default configuration, but I am not going to trust some random software downloaded off the Internet on my home network, so I need to be able to run Debian on it instead.

My preliminary belief is that the two main obstacles are Linux and U-Boot. Bananian 14.09 comes with Linux 3.4.90+ #1 SMP PREEMPT Fri Sep 12 18:13:45 CEST 2014 armv7l GNU/Linux, whatever that is, and U-Boot SPL 2014.04-10694-g2ae8b32 (Sep 03 2014 - 20:53:14). I don't yet know what the status of mainline/Debian support is.

Someone gave me a wooden cigar box to use as a case, which is not working out quite as hoped. I also found that my hack to power a 3.5" SATA drive does not work, so I'll either need to hammer on that some more or resolve to use a 2.5" drive instead.

memory:

             total       used       free     shared    buffers     cached
Mem:        993700      36632     957068          0       2248      11136
-/+ buffers/cache:      23248     970452
Swap:       524284       1336     522948

cpu:

Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 1192.96

processor       : 1
BogoMIPS        : 1197.05

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 4

Hardware        : sun7i
Revision        : 0000
Serial          : 03c32de75055484880485278165166c9

Jonathan Dowland: What have I been up to?

29 September, 2014 - 01:59

It's been a little while since I've written about what I've been up to. The truth is I've been busy with moving house - and I'll write a bit more about that at another time. But aside from that there have been some bits and bobs.

I use a little tool called archivemail to tidy up old listmail (my policy is to retain 30 days of listmail for most lists). If I unsubscribe from a list, then eventually I end up with an empty mail folder corresponding to that list. I decided it would be nice to extend archivemail to delete mailboxes if, after the archiving has taken place, the mailbox is empty. Doing this properly means adding delete routines to Python's "mailbox" library, which is part of the Python standard library. I've therefore started work on a patch for Python.

Since this is an enhancement, Python would only accept a patch for Python 3. Therefore, eventually, I would also have to port archivemail from Python 2 to 3. "archivemail" is basically abandonware at the moment, and the principal Debian maintainer is MIA. There was a release critical bug filed against it, so I joined the Debian Python team to co-maintain archivemail in Debian. I've worked around the RC bug but a proper fix is still to come.

In other Debian news, I've been mostly quiet. A small patch for squishyball to get it to build on Hurd, and a temporary fix patch for lhasa to get it to build on the build daemons for all architectures (problems with the test suite). All three of lhasa, squishyball and archivemail need a little bit of love to get them into shape before the jessie freeze.

I've had plans to write up some of the more interesting technical things I've been up to at work, but with the huge successes of the School we've been so busy that I haven't had time. Hopefully you can soon look forward to some of our further adventures with puppet, including evaluating Shibboleth modules, some stuff about handling user directories, bind mounts and LVM volumes, and actually publishing some of our more useful internal modules; I hope we will also (soon) have some useful data to go with our experiments with Linux LXC containers versus KVM-powered virtual machines in some of our use-cases. I've also got a few bits and pieces on Systemd to write up.

Benjamin Mako Hill: Community Data Science Workshops Post-Mortem

28 September, 2014 - 13:02

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. Interest was overwhelming, and we were ultimately constrained by the number of mentors who volunteered; unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criterion, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

DebConf team: Wrapping up DebConf14 (Posted by Paul Wise, Donald Norwood)

28 September, 2014 - 03:40

The annual Debian developer meeting took place in Portland, Oregon, 23 to 31 August 2014. DebConf14 attendees participated in talks, discussions, workshops and programming sessions. Video teams captured a lot of the main talks and discussions, both for streaming to remote attendees and for the Debian video archive.

Beyond the videos, presentations, and handouts, coverage also came from the attendees in blogs, posts, and project updates. We’ve gathered a few articles for your reading pleasure:

Gregor Herrmann and a few members of the Debian Perl group had an informal unofficial pkg-perl micro-sprint and were very productive.

Vincent Sanders shared an inspired gift in the form of a plaque given to Russ Allbery in thanks for his tireless work of keeping sanity in the Debian mailing lists. Pictures of the plaque and design scheme are linked in the post. Vincent also shared his experiences of the conference and hopes the organisers have recovered.

Noah Meyerhans’ adventuring to DebConf by train (Inter)netted some interesting IPv6 data for future road and rail warriors.

Hideki Yamane sent a gentle reminder for English speakers to speak more slowly.

Daniel Pocock posted about the GSoC talks at DebConf14; highlights include the Java Project Dependency Builder and the WebRTC JSCommunicator.

Thomas Goirand gives us some insight into a working task list of accomplishments and projects he was able to complete at DebConf14, from the OpenStack discussion to tasksel talks, and completion of some things started last year at DebConf13.

Antonio Terceiro blogged about debci and the Debian Continuous Integration project, Ruby, Redmine, and Noosfero. His post also shares the atmosphere of being able to interact directly with peers once a year.

Stefano Zacchiroli blogged about a talk he did on debsources which now has its own HACKING file.

Juliana Louback penned: DebConf 2014 and How I Became a Debian Contributor.

Elizabeth Krumbach Joseph’s in-depth summary of DebConf14 is a great read. She discussed Debian Validation & CI, debci and the Continuous Integration project, Automated Validation in Debian using LAVA, and Outsourcing webapp maintenance.

Lucas Nussbaum, by way of a blog post, released the very first version of Debian Trivia, modelled after the TCP/IP Drinking Game.

François Marier shares additional information and further discussion on Outsourcing your webapp maintenance to Debian.

Joachim Breitner gave a talk on Haskell and Debian, and created a new tool for binNMUs of Haskell packages which runs via a cron job. The output is available for Haskell and for OCaml, and he still had a small amount of time to go dancing.

Jaldhar Harshad Vyas was not able to attend DebConf this year, but he did tune in to the videos made available by the video team and gives an insightful viewpoint on what he saw.

Jérémy Bobbio posted about Reproducible builds in Debian in his recap of DebConf14. One of the topics at hand involved defining a canonical path where packages must be built and a BOF discussion on reproducible builds from where the conversation moved to discussions in both Octave and Groff. New helpers dh_fixmtimes and dh_genbuildinfo were added to BTS. The .buildinfo format has been specified on the wiki and reviewed. Lots of work is being done in the project, interested parties can help with the TODO list or join the new IRC channel #debian-reproducible on irc.debian.org.

Steve McIntyre posted a Summary from the d-i / debian-cd BoF at DC14, with some of the session video available online. The current jessie D-I needs some help with testing on less common architectures and languages, and release scheduling could be improved. Future plans: switching to a GUI by default for jessie, a default desktop and desktop choice, artwork, bug fixes and new architecture support. debian-cd: things are working well. Improvement discussions covered selecting which images to make (i.e. netinst, DVD, et al.), work in progress on http download support in debian-cd, and regular live test builds. Other discussions and questions revolved around which ARM platforms to support, specially-designed images, multi-arch CDs, and cloud-init based images. There is also a call for help, as the team needs help with testing, bug-handling, and translations.

Holger Levsen reports on feedback about the feedback from his LTS talk at DebConf14. LTS has been perceived well, fits a demand, and people are expecting it to continue; however, this is not without a few issues, as Holger explains in greater detail: gatekeeper mechanisms are lacking, and contributions are needed in everything from finances to uploads. In other news, the security-tracker is now fixed to know about oldstable. Time is short for that fix, as once jessie is released the tracker will need to support stable, oldstable (which will be wheezy), and oldoldstable.

Jonathan McDowell’s summary of DebConf14 includes a fair perspective of the host city and the benefits of planning a good DebConf location. He also talks about the need for face time in the Debian project, as it correlates with and improves everyone’s ability to work together. DebConf14 also provided the chance to set up a hard time frame for removing older 1024-bit keys from Debian keyrings.

Steve McIntyre posted a Summary from the “State of the ARM” BoF at DebConf14, with updates on the 3 current ports armel, armhf and arm64. armel, which targets the ARM EABI soft-float ARMv4t processor, may eventually be going away, while armhf, which targets the ARM EABI hard-float ARMv7, is doing well as the cross-distro standard. Debian has moved to a single armmp kernel flavour using Device Tree Blobs and should be able to run on a large range of ARMv7 hardware. The arm64 port recently entered the main archive, and it is hoped it will release with jessie, with 2 official builds hosted at ARM. There is talk of laptop development with an arm64 CPU. Buildds and hardware are mentioned, with acknowledgements for donated new machines, Banana Pi boards, and software by way of ARM’s DS-5 Development Studio - free for all Debian Developers. Help is needed! Join #debian-arm on irc.debian.org and/or the debian-arm mailing list. There is an upcoming Mini-DebConf in November 2014 hosted by ARM in Cambridge, UK.

Tianon Gravi posted about the atmosphere and contrast between an average conference and a DebConf.

Joseph Bisch posted about meeting his GSoC mentors, attending and contributing to a keysigning event, and doing some work on debmetrics, which is powering metrics.debian.net. Debmetrics provides a uniform interface for adding, updating, and viewing various metrics concerning Debian.

Harlan Lieberman-Berg’s DebConf Retrospective shared the feel of DebConf, and detailed some of the work on debugging a build failure, work with the pkg-perl team on a few uploads, and work on a javascript slowdown issue on codeeditor.

Ana Guerrero López reflected on Ten years contributing to Debian.

Ritesh Raj Sarraf: Laptop Mode Tools 1.66

27 September, 2014 - 17:09

I am pleased to announce the release of Laptop Mode Tools at version 1.66.

This release fixes an important bug in the way Laptop Mode Tools is invoked: now, when users disable it in the config file, the tool will actually be disabled. Thanks to bendlas@github for narrowing it down. The GUI configuration tool has been improved, thanks to Juan. And there is a new power saving module for users with ATI Radeon cards. Thanks to M. Ziebell for submitting the patch.
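
For reference, the "disable it in the config file" case refers to the main switch in the configuration file; a rough sketch, with the path and option name assumed from the stock layout rather than taken from the release notes:

# /etc/laptop-mode/laptop-mode.conf (assumed stock location)
# With the fix, setting this to 0 really does disable the tool.
ENABLE_LAPTOP_MODE_TOOLS=0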

Laptop Mode Tools development can be tracked @ GitHub


Niels Thykier: Lintian – Upcoming API making it easier to write correct and safe code

27 September, 2014 - 15:08

The upcoming version of Lintian will feature a new set of APIs that attempt to promote safer code. It is hardly a “ground-breaking discovery”, just a much needed feature.

The primary reason for this API is that writing safe and correct code is complicated enough that people get it wrong (including yours truly on occasion). The second reason is that I feel it is a waste having to repeat myself when reviewing patches for Lintian.

Fortunately, the issues this kind of mistake creates are usually minor information leaks, often with no chance of exploiting them remotely without the owner reviewing the output first[0].

Part of the complexity of writing correct code originates from the fact that Lintian must assume Debian packages to be hostile until otherwise proven[1]. Consider a simplified case where we want to read a file (e.g. the copyright file):

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  # BAD: This is an example of doing it wrong
  open(my $fd, '<', $info->unpacked($filename));
  ...;
  close($fd);
  return;
}

This has two trivial vulnerabilities[2].

  1. Any part of the path (usr, usr/share, …) can be a symlink to “somewhere else” like /
    1. Problem: Access to potentially any file on the system with the credentials of the user running Lintian.  But even then, Lintian generally never writes to those files and the user has to (usually manually) disclose the report before any information leak can be completed.
  2. The target path can point to a non-file.
    1. Problem: Minor inconvenience via a DoS of Lintian.  Examples include a named pipe, where Lintian will get stuck until a signal kills it.


Of course, we can do this right[3]:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
use Lintian::Util qw(is_ancestor_of);
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $root = $info->unpacked;
  my $path = $info->unpacked($filename);
  if ( -f $path and is_ancestor_of($root, $path)) {
    open(my $fd, '<', $path);
    ...;
    close($fd);
  }
  return;
}

Where “is_ancestor_of” is the only available utility to assist you currently.  It hides away some 10-12 lines of code to resolve the two paths and correctly assert that $root is (an ancestor of) $path.  Prior to Lintian 2.5.12, you would have to do that ancestor check by hand in each and every check[4].

In the new version, the correct code would look something like this:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $path = $info->index_resolved_path($filename);
  if ($path and $path->is_open_ok) {
    my $fd = $path->open;
    ...;
    close($fd);
  }
  return;
}

Now, you may wonder how that promotes safer code.  At first glance, the checking code is not a lot simpler than the previous “correct” example.  However, the new code has the advantage of being safer even if you forget the checks.  The reasons are:

  1. The return value is entirely based on the “file index” of the package (think: tar vtf data.tar.gz).  At no point does it use the file system to resolve the path.  Whether your malicious package triggers an undef warning based on the return value of index_resolved_path leaks nothing about the host machine.
    1. However, it does take safe symlinks into account and resolves them for you.  If you ask for ‘foo/bar’ and ‘foo’ is a symlink to ‘baz’ and ‘baz/bar’ exists in the package, you will get ‘baz/bar’.  If ‘baz/bar’ happens to be a symlink, then it is resolved as well.
    2. Bonus: You are much more likely to trigger the undef warning during regular testing, since it also happens if the file is simply missing.
  2. If you attempt to call “$path->open” without calling “$path->is_open_ok” first, Lintian can now validate the call for you and stop it on unsafe actions.

It also has the advantage of centralising the code for asserting safe access, so bugs in it only need to be fixed in one place.  Of course, it is still possible to write unsafe code.  But at least, the new API is safer by default and (hopefully) more convenient to use.

 

[0] Lintian.debian.org being the primary exception here.

[1] This is in contrast to e.g. piuparts, which very much trusts its input packages by handing the package root access (albeit chroot’ed, but still).

[2] And also a bug.  Not all binary packages have a copyright file – instead some will have a symlink to another package.

[3] The code is hand-typed into the blog without prior testing (not even compile testing it).  The code may be subject to typos, brown-paper-bag bugs etc. which are all disclaimed (of course).

[4] Fun fact, our documented example for doing it “correctly” prior to implementing is_ancestor_of was in fact not correct.  It used the root path in a regex (without quoting the root path) – fortunately, it just broke lintian when your TMPDIR / LINTIAN_LAB contained certain regex meta-characters (which is pretty rare).


Richard Hartmann: Release Critical Bug report for Week 39

27 September, 2014 - 04:45

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1393
    • Affecting Jessie: 408 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 360 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 50 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 290 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

Steve Kemp: Next week I shall be mostly in Kraków

27 September, 2014 - 01:20

Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.

It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.

In other news I've been rationalising DNS entries and domain names recently, all being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.

Jakub Wilk: Pet peeves: debhelper build-dependencies (redux)

26 September, 2014 - 20:05
$ zcat Sources.gz | grep -o -E 'debhelper [(]>= 9[.][0-9]{,7}([^0-9)][^)]*)?[)]' | sort | uniq -c | sort -rn
    338 debhelper (>= 9.0.0)
     70 debhelper (>= 9.0)
     18 debhelper (>= 9.0.0~)
     10 debhelper (>= 9.0~)
      2 debhelper (>= 9.2)
      1 debhelper (>= 9.2~)
      1 debhelper (>= 9.0.50~)

Is it a way to protest against the current debhelper's version scheme?

Holger Levsen: 20140925-reproducible-builds

26 September, 2014 - 18:34
Reproducible builds? I never did any - manually

I've never done a reproducible build attempt of any package, manually, ever. But what I've done now is set up reproducible builds on jenkins.debian.net, which will build hundreds or thousands of packages, hopefully reproducibly, regularly in the future. Thanks to Lunar's and many other people's work, this was actually rather easy. If you want to do this manually, it should take you just a few minutes to set up a suitable build environment.

So three days ago, when I wasn't exactly bored, I decided that it was a good moment to implement some reproducible build jobs on jenkins.d.n, so I gave it a try, and two hours later the basic implementation was working. Then it was an evening and morning of fine tuning until I was mostly satisfied. Since then there has been some polishing, but the basic setup is done and has been working since.

What's the result? One job, reproducible_setup, will just create a suitable environment for pbuilding reproducible packages, as documented so well on the Debian wiki. And as that job only takes 3.5 minutes to run (to debootstrap from scratch), it's run daily.

And then there are currently 16 other jobs, which test reproducible builds in different areas: d-i, core, some six major desktops and some selected desktop applications, some security + privacy related packages, some build chains we have in Debian, libreoffice and X.org. Most of these jobs run several hours, but luckily not days. And they discover packages which still fail to build reproducibly, which has already caused some bugs to be filed, e.g. #762732 "libdebian-installer: please do not write timestamps in Doxygen generated documentation".

So this is the output from testing the reproducibility of all debian-installer packages: 72 packages were successfully built reproducibly, while 6 packages failed to do so. I was quite impressed by these numbers, as AFAIK no one had tried to build d-i reproducibly before.

72 packages successfully built reproducibly: userdevfs user-setup usb-discover udpkg tzsetup rootskel rootskel-gtk rescue preseed pkgsel partman-xfs partman-target partman-partitioning partman-nbd partman-multipath partman-md partman-lvm partman-jfs partman-iscsi partman-ext3 partman-efi partman-crypto partman-btrfs partman-basicmethods partman-basicfilesystems partman-base partman-auto partman-auto-raid partman-auto-lvm partman-auto-crypto partconf os-prober oldsys-preseed nobootloader network-console netcfg net-retriever mountmedia mklibs media-retriever mdcfg main-menu lvmcfg lowmem localechooser live-installer lilo-installer kickseed kernel-wedge kbd-chooser iso-scan installation-report installation-locale hw-detect grub-installer finish-install efi-reader dh-di debian-installer-utils debian-installer-netboot-images debian-installer-launcher clock-setup choose-mirror cdrom-retriever cdrom-detect cdrom-checker cdebconf-terminal cdebconf-entropy bterm-unifont base-installer apt-setup anna 
6 packages failed to build reproducibly: win32-loader libdebian-installer debootstrap console-setup cdebconf busybox

What's also impressive: all packages for the newly introduced Cinnamon Desktop build reproducibly from the start!

The jenkins setup is configured via just three small files:

That's it, and that's enough to keep several cores busy for days. But as each job only takes a few hours, each is scheduled twice a month, and more jobs and packages shall be added in the future (with some heuristics to schedule known good packages less often...)

I guess it's an appropriate opportunity to say "many thanks to Profitbricks", who have been donating the powerful virtual machine jenkins.debian.net is running on since October 2012. I also want to say "many many thanks to Helmut" (Grohne) who has recently joined me in maintaining this jenkins setup. And then I'd like to thank "the KGB trio" (Gregor, Tincho and Dam!) for providing those KGB bots on IRC, which are very helpful for providing notifications on IRC channels and last but not least thanks to everybody who contributed so that reproducible builds got this far! Keep up the jolly good work!

And if you happen to know of failing packages not included in job-cfg/reproducible.yaml, I'd like to hear about those, so they'll get regularly tested and appear on the radar, until finally bugs are filed, fixed and migrated to stable. So one day all binary packages in Debian stable will be built reproducibly. An important step on this road is probably to have this defined as a release goal for jessie+1. And then for jessie+1 hopefully the first 10k packages will build reproducibly? Or a whopping 23k maybe? And maybe release jessie+2 with 100%?!? We will see! Even jessie already has quite some packages (someone needs to count them...) which build reproducibly with just modified dpkg(-dev) and debhelper packages alone...

So let's fix all the bugs! That said, an easier start for most of you is probably the list of useful things you (yes, you!) can do!

Oh, and last but surely not least in my book: many thanks too to the nice people hosting me so friendly in the last days! Keep on rockin'!

Petter Reinholdtsen: How to test Debian Edu Jessie despite some fatal problems with the installer

26 September, 2014 - 18:20

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, a user database, user home directories, central login, and PXE boot of both diskless clients and the installer used to install Debian Edu on machines with disks (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still heaps of work left. Some fatal problems break the installer and block testing, but it is possible to work around them and get an installation anyway. Here is a recipe on how to get the installation limping along.

First, download the test ISO via ftp, http or rsync (use ftp.skolelinux.org::cd-edu-testing-nolocal-netinst/debian-edu-amd64-i386-NETINST-1.iso). The ISO build was broken on Tuesday, so we do not get a new ISO every 12 hours or so, but thankfully the ISO we already have can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2), run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.
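
If you prefer a non-interactive edit over nano, something along these lines should achieve the same thing (assuming sed is available in the installer environment):

# Insert 'exit 0' as the new second line, neutering the eatmydata wrapper.
sed -i '2i exit 0' /usr/bin/edu-eatmydata-install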

When you get the grub question at the end, answer /dev/sda (or, if this does not work, figure out what your correct value would be; all my test machines need /dev/sda, so I have no advice if it does not fit your needs).

If you installed a profile including a graphical desktop, log in as root after the initial boot from hard drive, and install the education-desktop-XXX metapackage. XXX can be kde, gnome, lxde, xfce or mate. If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once the education-tasks package version 1.801 enters testing in two days.
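
For example, to get the Xfce variant (the other desktops listed work the same way), run as root:

apt-get update
apt-get install education-desktop-xfce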

I believe the ISO build will start working in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled; the proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie based installation ready before the distribution freeze in a month.

Dirk Eddelbuettel: R and Docker

26 September, 2014 - 10:57

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting up the meeting, and for providing a pretty thorough intro talk regarding CoreOS and Docker.

My slides are now up on my presentations page.

Steve Kemp: Today I mostly removed python

26 September, 2014 - 03:11

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had python installed and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of 19 servers I control I had python installed solely for the lsb_release script!

So I hacked up a horrible replacement for lsb_release in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
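
Something in the spirit of that replacement might look like the following minimal sketch (not the actual script, and only covering a few fields):

#!/bin/sh
# Minimal lsb_release stand-in: take everything from /etc/os-release,
# which is a shell-sourceable file (see os-release(5)).
. /etc/os-release
echo "Distributor ID: ${ID}"
echo "Description:    ${PRETTY_NAME}"
echo "Release:        ${VERSION_ID}"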

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed that package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read "once or twice in the past year") and by accident I had no python-scripts installed I see no reason to keep it on the off-chance.

My biggest surprise of the day was that now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

Aigars Mahinovs: Distributing third party applications via Docker?

26 September, 2014 - 02:54

Recently the discussion around how to distribute third party applications for "Linux" has become a new topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution ship a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end-users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.

For me the topic really hit home at DebConf 14, where Linus voiced his frustrations with app distribution problems, and some of that was also touched on by Valve. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.

There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has semantics for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play through a simple scenario of how third party applications should work on Linux.

Third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in a "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a Docker image is built from this compiled folder, it is based on the "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in the way that is comfortable for them.
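
A rough sketch of that developer-side flow with the docker CLI; the image and file names are illustrative, and only the debian-app-dev:wheezy / debian-app:wheezy bases come from the scenario above:

# Build container based on debian-app-dev:wheezy; compile into ./dist
docker build -t mygame-build ./build
docker run --rm -v "$PWD/dist:/dist" mygame-build make install DESTDIR=/dist
# Runtime image based on debian-app:wheezy, containing only the compiled game
docker build -t mygame ./dist
# Export a single file that can be handed to end users
docker save mygame > mygame.docker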

The end user would download the game file (either through an app store app, app store website or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUI to launch it on double click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio or whether to mount a folder into the container to where the game would be able to save its save files. Or even whether the application would be able to access X (or Wayland) at all.
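
The matching end-user side, again purely as an illustration of the idea; the mounts shown (a save directory and a PulseAudio socket path) stand in for the permissions such a GUI would mediate:

# Import the shipped image into the local Docker instance
docker load < mygame.docker
# Launch it with only the resources the app declared
docker run --rm \
  -v "$HOME/.local/share/mygame:/save" \
  -v /run/user/1000/pulse:/run/user/1000/pulse \
  mygame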

Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing part, not only is our third party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. Also the window manager can now track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute the same size file as before, or, if the purchase is going via some kind of app-store application, the layers that actually changed can be rsynced over individually thus creating a much faster update experience. Images with the same base can share data, this would encourage creation of higher level base images, such as "debian-app-gamegl:wheezy" that all GL game developers could use thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, but that would not require a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything, or to push everything (and now package management too) into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows recombining them in a different way for a different purpose or replacing some part to create a system with a radically different functionality.

Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I am sometimes getting an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D and rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions can compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

Jan Wagner: Redis HA with Redis Sentinel and VIP

26 September, 2014 - 01:56

For a current project we decided to use Redis for several reasons. As availability is a critical part, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave.

Setting up the Redis replication was straightforward, and so was setting up Sentinel. Please keep in mind, if you configure Redis to require an authentication password, you even need to provide that for the replication process (masterauth) and for the Sentinel connection (auth-pass).

The more interesting part is how to migrate the clients over to the new master in case of a failover. While Redis Sentinel could also be used as a configuration provider, we decided not to use this feature, as the application would need to request the current master node from Redis Sentinel quite often, which might have a performance impact.
The first idea was to use some kind of VRRP, implemented in keepalived or something like that. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':

# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
# 
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
# 
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.

This looks pretty good, and as a <role> is provided, I thought it would be a good idea to just call a script which evaluates this value and, based on it, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this was not working, as <role> didn't reliably return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying and I was close to giving up.
Fortunately I stumbled upon a (maybe) Chinese post about Redis Sentinel where the same thing was attempted. On second look I recognized that the decision was made on ${6}, which is <to-ip>, nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well.
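
The rewritten script is not reproduced here, but the idea boils down to something like the sketch below: Sentinel calls the configured client-reconfig-script with the seven arguments documented above, so ${6} carries the IP of the newly promoted master. The VIP, interface and paths are purely illustrative.

#!/bin/sh
# Hooked up in sentinel.conf along the lines of:
#   sentinel client-reconfig-script mymaster /usr/local/bin/redis-vip.sh
VIP="192.0.2.10/24"                         # illustrative virtual IP
IFACE="eth0"                                # illustrative interface
MY_IP="$(hostname -I | awk '{print $1}')"

if [ "${6}" = "${MY_IP}" ]; then
    # We are the newly elected master: take over the VIP.
    ip addr add "${VIP}" dev "${IFACE}" 2>/dev/null || true
else
    # The new master is elsewhere: make sure we do not hold the VIP.
    ip addr del "${VIP}" dev "${IFACE}" 2>/dev/null || true
fi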

Some notes about convergence: it currently takes around 6-7 seconds to have the VIP migrated over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we expect this to happen only rarely, we need to design the application using our Redis setup to handle this (hopefully) rare scenario.

Gunnar Wolf: #bananapi → On how compressed files should be used

26 September, 2014 - 00:37

I am among the lucky people who got back home from DebConf with a brand new computer: a Banana Pi. Despite the name similarity, it is not affiliated with the very well known Raspberry Pi, although it is a very comparable (though much better) machine: a dual-core ARM A7 system with 1GB RAM, several more on-board connectors, and the same form factor.

I have not yet been able to get it to boot, even from the images distributed on their site (although I cannot complain, I have not devoted more than an hour or so to the process!), but I do have a gripe about how the images are distributed.

I downloaded some images to play with: Bananian, Raspbian, a Scratch distribution, and Lubuntu. I know I have a long way to learn in order to contribute to Debian's ARM port, but if I can learn by doing... ☻

So, what is my gripe? That the images are downloaded as archive files:

0 gwolf@mosca『9』~/Download/banana$ ls -hl bananian-latest.zip \
> Lubuntu_For_BananaPi_v3.1.1.tgz Raspbian_For_BananaPi_v3.1.tgz \
> Scratch_For_BananaPi_v1.0.tgz
-rw-r--r-- 1 gwolf gwolf 222M Sep 25 09:52 bananian-latest.zip
-rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz
-rw-r--r-- 1 gwolf gwolf 1.3G Sep 25 10:01 Raspbian_For_BananaPi_v3.1.tgz
-rw-r--r-- 1 gwolf gwolf 1.2G Sep 25 10:05 Scratch_For_BananaPi_v1.0.tgz

Now... that is quite an odd way to distribute image files! Especially when looking at their contents:

0 gwolf@mosca『14』~/Download/banana$ unzip -l bananian-latest.zip
Archive:  bananian-latest.zip
    Length      Date    Time    Name
 ----------  ---------- -----   ----
 2032664576  2014-09-17 15:29   bananian-1409.img
 ----------                     -------
 2032664576                     1 file
0 gwolf@mosca『15』~/Download/banana$ for i in Lubuntu_For_BananaPi_v3.1.1.tgz \
> Raspbian_For_BananaPi_v3.1.tgz Scratch_For_BananaPi_v1.0.tgz
> do tar tzvf $i; done
-rw-rw-r-- bananapi/bananapi 3670016000 2014-08-06 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img
-rwxrwxr-x bananapi/bananapi 3670016000 2014-08-08 04:30 Raspbian_For_BananaPi_v3_1.img
-rw------- bananapi/bananapi 3980394496 2014-05-27 01:54 Scratch_For_BananaPi_v1_0.img

And what is bad about them? That they force me to either have heaps of disk space available (2GB or 4GB for each image) or to spend valuable time extracting before recording the image each time.

Why not just compress the image file without archiving it? That is,

0 gwolf@mosca『7』~/Download/banana$ tar xzf Lubuntu_For_BananaPi_v3.1.1.tgz
0 gwolf@mosca『8』~/Download/banana$ xz Lubuntu_1404_For_BananaPi_v3_1_1.img
0 gwolf@mosca『9』~/Download/banana$ ls -hl Lubun*
-rw-r--r-- 1 gwolf gwolf 606M Aug  6 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img.xz
-rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz

Now, wouldn't we need to decompress said files as well? Yes, but thanks to the magic of shell redirections, we can just do it on the fly. That is, instead of having 3×4GB+1×2GB files sitting on my hard drive, I just need to have several files ranging between 145M and I guess ~1GB. Then, it's as easy as doing:

0 gwolf@mosca『8』~/Download/banana$ dd if=<(xzcat bananian-1409.img.xz) of=/dev/sdd

And the result should be the same: A fresh new card with Bananian ready to fly. Right, right, people using these files need to have xz installed on their systems, but... As it stands now, I can suppose current prospective users of a Banana Pi won't fret about facing a standard Unix tool!

(Yes, I'll forward this rant to the Banana people, it's not just bashing on my blog :-P )

Marco d'Itri: CVE-2014-6271 fix for Debian sarge and etch

25 September, 2014 - 23:01

Very old Debian releases like sarge (3.1) and etch (4.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug):

http://ftp.linux.it/pub/People/md/bash/

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

Julian Andres Klode: hardlink 0.3.0 released; xattr support

25 September, 2014 - 20:42

Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

hardlink 0.3 now features xattr support, contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

I also added support for a --minimum-size option.

Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.



Petter Reinholdtsen: Suddenly I am the new upstream of the lsdvd command line tool

25 September, 2014 - 17:20

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc, in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and has a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer, who are interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around on the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it in too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.