Planet Debian


Niels Thykier: debhelper 10.5.1 now available in unstable

27 June, 2017 - 04:59

Earlier today, I uploaded debhelper version 10.5.1 to unstable.  The following are some highlights compared to version 10.2.5:

  • debhelper now supports the “meson+ninja” build system. Kudos to Michael Biebl.
  • Better cross building support in the “makefile” build system (PKG_CONFIG is set to the multi-arched version of pkg-config). Kudos to Helmut Grohne.
  • New dh_missing helper to take over dh_install --list-missing/--fail-missing while being able to see files installed by other helpers. Kudos to Michael Stapelberg.
  • dh_installman now logs what files it has installed so the new dh_missing helper can “see” them as installed.
  • Improve documentation (e.g. compare and contrast the dh_link config file with ln(1) to assist people who are familiar with ln(1))
  • Avoid triggering a race condition with libtool by ensuring that dh_auto_install runs make with -j1 when libtool is detected (see Debian bug #861627)
  • Optimizations and parallel processing (more on this later)

There are also some changes to the upcoming compat 11:

  • Use “/run” as “run state dir” for autoconf
  • dh_installman will now guess the language of a manpage from the path name before using the extension.
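The path-first guessing could look roughly like this (a hypothetical Python sketch for illustration, not debhelper's actual Perl code):

```python
import re

def guess_manpage_language(path):
    """Guess a manpage's language: look at the install path first
    (e.g. usr/share/man/de/man1/), then fall back to the file
    extension (e.g. foo.de.1). Hypothetical helper, for illustration."""
    # A language component in the path, e.g. .../man/de/man1/foo.1 -> "de"
    m = re.search(r'/man/([a-z]{2}(?:_[A-Z]{2})?)/man\d', path)
    if m:
        return m.group(1)
    # Otherwise a language extension, e.g. foo.de.1 -> "de"
    m = re.search(r'\.([a-z]{2})\.\d[^/]*$', path)
    if m:
        return m.group(1)
    return None  # untranslated

print(guess_manpage_language("usr/share/man/de/man1/foo.1"))  # de
print(guess_manpage_language("build/foo.fr.1"))               # fr
```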


Filed under: Debhelper, Debian

Joey Hess: DIY solar upgrade complete-ish

27 June, 2017 - 04:44

Success! I received the Tracer4215BN charge controller, which UPS accidentally-on-purpose delivered to a neighbor, got it connected up, and had the battery bank rewired to 24V in a couple of hours.

Here it's charging the batteries at 220 watts, and that picture was taken at 5 pm, when the light hits the panels at nearly a 90 degree angle. Compare with the old panels, where the maximum I ever recorded at high noon was 90 watts. I've made more power since 4:30 pm than I used to be able to make in a day! \o/

Alexander Wirt: Stretch Backports available

27 June, 2017 - 04:00

With the release of stretch we are pleased to open the doors for stretch-backports and jessie-backports-sloppy. \o/

As usual with a new release we will change a few things for the backports service.

What to upload where

As a reminder, uploads to a release-backports pocket are to be taken from release + 1, uploads to a release-backports-sloppy pocket are to be taken from release + 2. Which means:

Source Distribution   Backports Distribution   Sloppy Distribution
buster                stretch-backports        jessie-backports-sloppy
stretch               jessie-backports         -

Deprecation of LTS support for backports

We started supporting backports for as long as there is LTS support as an experiment. Unfortunately it didn’t work out: most maintainers didn’t want to support oldoldstable-backports (squeeze) for the lifetime of LTS. So things started to rot in squeeze and most packages didn’t receive updates. After long discussions we decided to deprecate LTS support for backports. From now on squeeze-backports(-sloppy) is closed and will not receive any updates. Expect it to be removed from the mirrors and moved to the archive in the near future.

BSA handling

We - the backports team - didn’t scale well in processing BSA requests. To handle them better in the future we decided to change the process a little bit. If you upload a package which fixes security problems, please fill out the BSA template and create a ticket in the rt tracker (see for details).

Stretching the rules

From time to time it’s necessary to deviate from the backports rules, for example when a package needs to be in testing or a version needs to be in Debian. If you think you have one of those cases, please talk to us on the list before uploading the package.


Thanks have to go out to all the people making backports possible: first and foremost the backporters themselves, who upload the packages and track and update them on a regular basis, but also the buildd team for making the autobuilding possible and the ftp masters for creating the suites in the first place.

We wish you a happy stretch :) Alex, on behalf of the Backports Team

Jonathan Dowland: Coming in from the cold

26 June, 2017 - 23:18

I've been using a Mac day-to-day since around 2014, initially as a refreshing break from the disappointment I felt with GNOME3, but since then a few coincidences have kept me on the platform. Something happened earlier in the year that made me start to think about a move back to Linux on the desktop. My next work hardware refresh is due in March next year, which gives me about nine months to "un-plumb" myself from the Mac ecosystem. Off the top of my head, here are the things I'm going to have to address:

  • the command modifier key (⌘). It's a small thing but its use on the Mac platform is very consistent, and since it's not used at all within terminals, there's never a clash between window management and terminal applications. Compared to the morass of modifier keys on Linux, I will miss it. It's possible if I settle on a desktop environment and spend some time configuring it I can get to a similarly comfortable place. Similarly, I'd like to stick to one clipboard, and if possible, ignore the select-to-copy, middle-click-to-paste one entirely. This may be an issue for older software.

  • The Mac hardware trackpad and gestures are honestly fantastic. I still have some residual muscle memory of using the Thinkpad trackpoint, and so I'm weaning myself off the trackpad by using an external thinkpad keyboard with the work Mac, and increasingly using a x61s where possible.

  • SizeUp. I wrote about this in useful mac programs. It's a window management helper that lets you use keyboard shortcuts to move and resize windows. I may need something similar, depending on what desktop environment I settle on. (I'm currently evaluating Awesome WM.)

  • 1Password. These days I think a password manager is an essential piece of software, and 1Password is a very, very good example of one. There are several other options now, but sadly none that seem remotely as nice as 1Password. Ryan C Gordon wrote 1pass, a Linux-compatible tool to read a 1Password keychain, but it's quite raw and needs some love. By coincidence that's currently his focus, and one can support him in this work via his Patreon.

  • Font rendering. Both monospace and regular fonts look fantastic out of the box on a Mac, and it can be quite hard to switch back and forth between a Mac and Linux due to the difference in quality. I think this is a mixture of ensuring the font rendering software on Linux is configured properly, but also that I install a reasonable selection of fonts.

I think that's probably it: not a big list! Notably, I'm not locked into iTunes, which I avoid where possible; Apple's Photo app (formerly iPhoto) which is a bit of a disaster; nor Time Machine, which is excellent, but I have a backup system for other things in place that I can use.

Andreas Bombe: PDP-8/e Replicated — Introduction

26 June, 2017 - 01:58

I am creating a replica of the DEC PDP-8/e architecture in an FPGA from schematics of the original hardware. So how did I end up with a project like this?

The story begins with me wanting to have a computer with one of those front panels that have many, many lights where you can really see, in real time, what the computer is doing while it is executing code. Not because I am nostalgic for a prior experience with any of those — I was born a bit too late for that and my first computer as a kid was a Commodore 64.

Now, the front panel era ended around 40 years ago with the advent of microprocessors, and computers of that age and older that are complete and working are hard to find and not cheap. And even if you do find one, there’s the issue of weight, size (complete systems with peripherals fill at least a rack) and power consumption. So what to do — build myself a small one with modern technology, of course.

While there are many computer architectures of that era to choose from, the various PDP machines by DEC are significant and well known (and documented) due to their large numbers. The most important are probably the 12 bit PDP-8, the 16 bit PDP-11 and the 36 bit PDP-10. While the PDP-11 is enticing because of the possibility to run UNIX, I wanted to start with something simpler, so I chose the PDP-8.

My implementation on display next to a real PDP-8/e at VCFe 18.0

The Original

DEC started the PDP-8 line of computers (Programmed Data Processors designed as low cost machines) in 1965. It is a quite minimalist 12 bit architecture based on the earlier PDP-5, and by minimalist I mean seriously minimal. If you are familiar with early 8 bit microprocessors like the 6502 or 8080, you will find them luxuriously equipped in comparison.

The PDP-8 base architecture has a program counter (PC) and an accumulator (AC)1. That’s it. There are no pointer or index registers2. There is no stack. It has addition and AND instructions but subtractions and OR operations have to be manually coded. The optional Extended Arithmetic Element adds the MQ register but that’s really it for visible registers. The Wikipedia page on the PDP-8 has a good detailed description.

Regarding technology, the PDP-8 series has been in production long enough to get the whole range of implementations from discrete transistor logic to microprocessors. The 8/e which I target was right in the middle, implemented in TTL logic where each IC contains multiple logic elements. This allowed the CPU itself (including timing generator) to fit on three large circuit boards plugged into a backplane. Complete systems would have at least another board for the front panel and multiple boards for the core memory, then additional boards for whatever options and peripherals were desired.

Design Choices and Comparisons

I’m not the only one who had the idea to build something like that, of course. Among the other modern PDP-8 implementations with a front panel, probably the most prominent project is the Spare Time Gizmos SBC6120, which is a PDP-8 single board computer built around the Harris/Intersil HD-6120 microprocessor, which implements the PDP-8 architecture, combined with a nice front panel. Another is the PiDP-8/I, which is another nice front panel (modeled after the 8/i, which has even more lights) driven by the simh simulator running under Linux on a Raspberry Pi.

My goal is to get front panel lights that appear exactly like the real ones in operation. This necessitates driving the lights at full speed, as they change with every instruction or even within instructions for some display selections. For example, if you run a tight loop that does nothing but increment AC while displaying that register, it would appear that all lights are lit at equal but less than full brightness. The reason is that the loop runs at such a high speed that even the most significant bit, which is blinking the slowest, is too fast to see flicker. Hence they are all effectively on 50% of the time, just at different frequencies, and appear to be constantly lit at the same brightness.
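The equal-brightness effect is easy to check in simulation. This short sketch (illustrative only) counts how often each bit of a free-running 12-bit counter is lit:

```python
def bit_duty_cycles(width, steps):
    """Fraction of time each bit of a free-running counter is on.
    Models displaying AC while a tight increment loop runs."""
    on_counts = [0] * width
    for t in range(steps):
        value = t % (1 << width)          # the counter wraps at 2^width
        for bit in range(width):
            if value & (1 << bit):
                on_counts[bit] += 1
    return [c / steps for c in on_counts]

# Over whole wrap-arounds, every bit is on exactly half the time; the bits
# only differ in blink frequency, hence the equal apparent brightness.
print(set(bit_duty_cycles(12, 2 * 4096)))  # {0.5}
```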

This is where the other projects lack what I am looking for. The PiDP-8/I has a multiplexed display which updates at something like 30 Hz or 60 Hz, taking whatever value is current in the simulation software at the time. All the states the lights took in between are lost, and consequently there is flickering where there shouldn’t be. On the SBC6120, at least the address lines appear to update at full speed, as these are the actual RAM address lines. However, the 6120 microprocessor used does not make the data required for the indicator display available externally. Instead, the SBC6120 runs an interrupt at 30 Hz to trap into its firmware/monitor program, which then reads the current state and writes it to the front panel display, which is essentially just another peripheral. A different considerable problem with the SBC6120 is its use of the 6100 microprocessor family ICs, which are themselves long out of production and not trivial (or cheap) to come by.

Given that the way to go is to drive all lights in step with every cycle3, this can be done by software running on a dedicated microcontroller — which is how I started — or by implementing a real CPU with all the needed outputs in an FPGA — which is the project I am writing about.

In the next post I will give an overview of the hardware I built so far and some of the features that are yet to be implemented.

  1. With an associated link bit which is a little different from a carry bit in that it is treated as a thirteenth bit, i.e. it will be flipped rather than set when a carry occurs. [return]
  2. Although there are 8 specially treated memory addresses that will pre-increment when used in indirect addressing. [return]
  3. Basic cycles on the PDP-8/e are 1.4 µs for memory modifying cycles and fast cycles of 1.2 µs for everything else. Instructions can be one to three cycles long. [return]

Steinar H. Gunderson: Frame queue management in Nageru 1.6.1

25 June, 2017 - 22:45

Nageru 1.6.1 is on its way, and what was intended to only be a release centered around monitoring improvements (more specifically a full set of native Prometheus metrics) actually ended up getting a fairly substantial change to how Nageru manages its frame queues. To understand what's changing and why, it's useful to first understand the history of Nageru's queue management. Nageru 1.0.0 started out with a fairly simple scheme, but with some basics that are still relevant today: one of the input cards was deemed the master card, and whenever it delivers a frame, the master clock ticks and an output frame is produced. (There are some subtleties about dropped frames and/or the master card changing frame rates, but I'm going to ignore them, since they're not important to the discussion.)

To this end, every card keeps a preallocated frame queue; when a card delivers a frame, it's put into the queue, and when the master clock ticks, it tries picking out one frame from each of the other cards' queues to mix together. Note that “mix” here could be as simple as picking one input and throwing all the other ones away; the queueing algorithm doesn't care, it just feeds all of them to the theme and lets that run whatever GPU code it needs to match the user's preferences.

The only thing that really keeps the queues bounded is that the frames in them are preallocated (in GPU memory), so if one queue gets longer than 16 frames, Nageru starts dropping frames from it. But is 16 the right number? There are two conflicting demands here, ignoring memory usage:

  • You want to keep the latency down.
  • You don't want to run out of frames in the queue if you can avoid it; if you drop too aggressively, you could find yourself at the next frame with nothing in the queue, because the input card hasn't delivered it yet when the master card ticks. (You could argue one should delay the output in this case, but for how long? And if you're using HDMI/SDI output, you have no such luxury.)

The 1.0.0 scheme does about as well as one could possibly hope in never dropping frames, but unfortunately, it can be pretty poor at latency. For instance, if your master card runs at 50 Hz and you have a 60 Hz card, the latter will eventually build up a delay of 16 * 16.7 ms = 266.7 ms—clearly unacceptable, and rather unneeded.

You could ask the user to specify a queue length, but the user probably doesn't know, and also shouldn't really have to care—more knobs to twiddle are a bad thing, and even more so knobs the user is expected to twiddle. Thus, Nageru 1.2.0 introduced queue autotuning; it keeps a running estimate on how big the queue needs to be to avoid underruns, simply based on experience. If we've been dropping frames on a queue and then there's an underrun, the “safe queue length” is increased by one, and if the queue has been having excess frames for more than a thousand successive master clock ticks, we reduce it by one again. Whenever the queue has more than this “safe” number, we drop frames.
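The autotuning heuristic can be modeled in a few lines (a toy sketch, not Nageru's actual C++ code; the names and the shrink threshold are illustrative):

```python
class QueueAutotuner:
    """Toy model of the 1.2.0-style heuristic: grow the "safe" queue length
    on underrun, shrink it again after a long run of master clock ticks
    with excess frames."""

    def __init__(self, shrink_after=1000):
        self.safe_len = 0          # current "safe queue length" estimate
        self.excess_ticks = 0      # successive ticks with more than safe_len
        self.shrink_after = shrink_after

    def on_master_tick(self, queue_len):
        """Called once per master clock tick; returns frames to drop."""
        if queue_len == 0:
            # Underrun: we dropped too aggressively, be more lenient.
            self.safe_len += 1
            self.excess_ticks = 0
            return 0
        if queue_len > self.safe_len:
            self.excess_ticks += 1
            if self.excess_ticks >= self.shrink_after and self.safe_len > 0:
                self.safe_len -= 1     # sustained excess: tighten the bound
                self.excess_ticks = 0
            # Trim to the safe length, always keeping the frame consumed now.
            return max(0, queue_len - max(self.safe_len, 1))
        self.excess_ticks = 0
        return 0
```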

This was simple, effective and largely fixed the problem. However, when adding metrics, I noticed a peculiar effect: Not all of my devices have equally good clocks. In particular, when setting up for 1080p50, my output card's internal clock (which assumes the role of the master clock when using HDMI/SDI output) seems to tick at about 49.9998 Hz, and my simple home camcorder delivers frames at about 49.9995 Hz. Over the course of an hour, this means it produces one more frame than you should have… which should of course be dropped. Having an SDI setup with synchronized clocks (blackburst/tri-level) would of course fix this problem, but most people are not so lucky with their cameras, not to mention the price of PC graphics cards with SDI outputs!

However, this happens very slowly, which means that for a significant amount of time, the two clocks will very nearly be in sync, and thus racing. Who ticks first is determined largely by luck in the jitter (normal is maybe 1ms, but occasionally, you'll see delayed delivery of as much as 10 ms), and this means that the “1000 frames” estimate is likely to be thrown off, and the result is hundreds of dropped frames and underruns in that period. Once the clocks have diverged enough again, you're off the hook, but again, this isn't a good place to be.

Thus, Nageru 1.6.1 changes the algorithm yet again, by incorporating more data to build an explicit jitter model. 1.5.0 was already timestamping each frame to be able to measure end-to-end latency precisely (now also exposed in Prometheus metrics), but from 1.6.1, these timestamps are actually used in the queueing algorithm. I ran several eight- to twelve-hour tests and simply stored all the event arrivals to a file, and then simulated a few different algorithms (including the old algorithm) to see how they fared in measures such as latency and number of drops/underruns.

I won't go into the full details of the new queueing algorithm (see the commit if you're interested), but the gist is: Based on the last 5000 frames, it tries to estimate the maximum possible jitter for each input (i.e., how late the frame could possibly be). Based on this as well as clock offsets, it determines whether it's really sure that there will be an input frame available on the next master tick even if it drops the queue, and then trims the queue to fit.
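As a rough illustration of the idea (not the actual algorithm; safe_queue_length is a hypothetical helper, and real inputs would also need clock-offset handling), one could bound the queue from the observed lateness of recent arrivals:

```python
import math

def safe_queue_length(lateness_samples, window=5000):
    """Illustrative sketch: from the lateness of recent frame arrivals,
    measured in frame periods relative to their ideal arrival times,
    bound the worst-case jitter and derive how many frames must be queued
    so the next master tick is guaranteed to find one."""
    recent = lateness_samples[-window:]
    max_jitter = max(recent)                 # worst lateness observed
    # One queued frame per whole frame period of possible jitter, plus the
    # frame consumed on the tick itself.
    return math.ceil(max_jitter) + 1

print(safe_queue_length([0.1, 0.4, 1.6, 0.2]))  # 3
```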

The result is pretty satisfying; here's the end-to-end latency of my camera being sent through to the SDI output:

As you can see, the latency goes up, up, up until Nageru figures it's now safe to drop a frame, and then does it in one clean drop event; no more hundreds of drops involved. There are very late frame arrivals involved in this run—two extra frame drops, to be precise—but the algorithm simply determines immediately that they are outliers, and drops them without letting them linger in the queue. (Immediate dropping is usually preferred to sticking around for a bit and then dropping later, as it means you only get one disturbance event in your stream as opposed to two. Of course, you can only do it if you're reasonably sure it won't lead to more underruns later.)

Nageru 1.6.1 will ship before Solskogen, as I intend to run it there :-) And there will probably be lovely premade Grafana dashboards from the Prometheus data. Although it would have been a lot nicer if Grafana were more packaging-friendly, so I could pick it up from stock Debian and run it on armhf. Hrmf. :-)

Lars Wirzenius: Obnam 1.22 released (backup application)

25 June, 2017 - 19:41

I've just released version 1.22 of Obnam, my backup application. It is the first release for this year. Packages are available on and in Debian unstable, and source is in git. A summary of the user-visible changes is below.

For those interested in living dangerously and accidentally on purpose deleting all their data, the link below shows the status and roadmap for FORMAT GREEN ALBATROSS.

Version 1.22, released 2017-06-25
  • Lars Wirzenius made Obnam log the full text of an Obnam exception/error message when it has more than one line. In particular this applies to encryption error messages, which now log the gpg output.

  • Lars Wirzenius made obnam restore require absolute paths for files to be restored.

  • Lars Wirzenius made obnam forget use a little less memory. The amount depends on the number of generations and the chunks they refer to.

  • Jan Niggemann updated the German translation of the Obnam manual to match recent changes in the English version.

  • SanskritFritz and Ian Cambell fixed the kdirstat plugin.

  • Lars Wirzenius changed Obnam to hide a Python stack trace when there's a problem with the SSH connection (e.g., failure to authenticate, or existing connection breaks).

  • Lars Wirzenius made the Green Albatross version of obnam forget actually free chunks that are no longer used.

Shirish Agarwal: Dreams don’t cost a penny, mumma’s boy :)

25 June, 2017 - 11:52

This one I promise will be short

After the last two updates, I thought it was time for something positive to share. While I’m doing the hard work (of physiotherapy, which is gruelling), the only luxury I have nowadays is falling into dreams, which are also rare. The dream I’m going to share is totally unrealistic, as my mum hates travel, but I’m sure many sons and daughters would identify with it. Almost all her travel outside of visiting relatives has been because I dragged her into it.

In the dream, I am going to a Debconf, get a bursary and the conference is being held somewhere in Europe, maybe Paris (2019 probably). I reach and attend the conference, present and generally have a great time sharing and learning from my peers. As we all do in these uncertain times, I too phone home every couple of days assuring her that I’m well and in the best of health. The weather is mild like it was in South Africa, and this time I had come packed with woollens, so all was well.

Just the day before the conference is to end, I call up mum and she tells about a specific hotel/hostel which I should check out. I am somewhat surprised that she knows of a specific hotel/hostel in a specific place but I accede to her request. I go there to find a small, quiet, quaint bed & breakfast place (dunno if Paris has such kind of places), something which my mum would like. Intuitively, I ask at the reception to look in the register. After seeing my passport, he too accedes to my request and shows me the logbook of all visitors registering to come in the last few days (for some reason privacy is not a concern then) . I am surprised to find my mother’s name in the register and she had turned up just a day or two before.

I again request the reception to be able to go to room abcd without being announced and to have some sort of key (with maintenance or laundry as an excuse). The reception calls the manager, and after looking at a copy of my mother’s passport and mine they somehow accede to the request, with help located nearby just in case something goes wrong.

I disguise my voice and announce as either Room service or Maintenance and we are both surprised and elated to see each other. After talking a while, I go back to the reception and register myself as her guest.

The next week is a whirlwind as I come to know of hop-on, hop-off buses offering a similar service in Paris as there was in South Africa. I buy a small booklet and we go through all the museums, the vineyards or whatever it is that Paris has to offer. IIRC there is also a lock-and-key ritual on a famous bridge. We also do that. She is constantly surprised at the different activities the city shows her and the mother-son bond becomes much better.

I had shared the same dream with her and she laughed. In reality, she is happy and comfortable in the confines of her home.

Still I hope some mother-daughter, son-father, son-mother or any other combination of parent and child takes this entry and enriches each other by travelling together.

Filed under: Miscellaneous Tagged: #dream, #mother, #planet-debian, travel

Lisandro Damián Nicanor Pérez Meyer: Qt 5.7 submodules that didn't make it to Stretch but will be in testing

25 June, 2017 - 05:41
There are two Qt 5.7 submodules that we could not package in time for Stretch but are/will be available in their 5.7 versions in testing. These are qtdeclarative-render2d-plugin and qtvirtualkeyboard.

declarative-render2d-plugin makes use of the Raster paint engine instead of OpenGL to render the contents of a scene graph, thus making it useful when Qt Quick 2 applications are run on a system without OpenGL 2 enabled hardware. Using it might require tweaking Debian's /etc/X11/Xsession.d/90qt5-opengl. On Qt 5.9 and newer this plugin is merged into Qt GUI, so there should be no need to perform any action on the user's behalf.

Debian's VirtualKeyboard currently has a gotcha: we are not building it with the embedded code it ships. Upstream ships 3rd party code but lacks a way to detect and use the system versions of it. See QTBUG-59594; patches are welcome. Please note that we prefer patches sent directly upstream to the current dev revision; we will be happy to backport patches if necessary.
Yes, this means no hunspell, openwnn, pinyin, tcime nor lipi-toolkit/t9write support.

Steve Kemp: Linux security modules, round two.

25 June, 2017 - 04:00

So recently I wrote a Linux Security Module (LSM) which would deny execution of commands, unless an extended attribute existed upon the filesystem belonging to the executables.

The whitelist-LSM worked well, but it soon became apparent that it was a little pointless. Most security changes are pointless unless you define what you're defending against - your "threat model".

In my case it was written largely as a learning experience, but also because I figured it could be useful. However it wasn't actually that useful, because you soon realize that you have to whitelist too much:

  • The redis-server binary must be executable, to the redis-user, otherwise it won't run.
  • The apache binary must be executable to the www-data user.
  • /usr/bin/git must be executable.

In short there comes a point where user alice must run executable blah. If alice can run it, then so can mallory. At which point you realize the exercise is not so useful.

Taking a step back, I realized that what I wanted to prevent was the execution of unknown/unexpected and malicious binaries. How do you identify known-good binaries? Well, hashes & checksums are good. So for my second attempt I figured I'd not look for a mere "flag" on a binary, but instead look for a valid hash.

Now my second LSM is invoked for every binary that is executed by a user:

  • When a binary is executed, the SHA1 hash of the file's contents is calculated.
  • If that matches the value stored in an extended attribute the execution is permitted.
    • If the extended-attribute is missing, or the checksum doesn't match, then the execution is denied.

In practice this is the same behaviour as the previous LSM - a binary is either executable, because there is a good hash, or it is not, because it is missing or bogus. If somebody deploys a binary rootkit this will definitely stop it from executing, but of course there is a huge hole - scripting-languages:
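The decision logic can be modeled in user space like this (a sketch only; the real check happens in kernel space on every exec, and the attribute name here is made up):

```python
import hashlib
import os

def file_sha1(path):
    """SHA-1 of a file's contents, streamed in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def execution_allowed(path, xattr_name="user.checksum"):
    """Deny unless the stored extended attribute matches the current hash.
    (The attribute name is illustrative; the LSM does this in the kernel.)"""
    try:
        expected = os.getxattr(path, xattr_name).decode()
    except OSError:
        return False               # attribute missing: deny execution
    return file_sha1(path) == expected
```

A modified binary changes its hash and stops matching the stored attribute, so it is denied just like a binary that never had the attribute at all.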

  • If /usr/bin/perl is whitelisted then /usr/bin/perl /tmp/ will succeed.
  • If /usr/bin/python is whitelisted then the same applies.

Despite that, the project was worthwhile: I can clearly describe what it is designed to achieve ("Deny the execution of unknown binaries" and "Deny binaries that have been modified"), and I learned how to hash a file from kernel-space - which was surprisingly simple.

(Yes I know about IMA and EVM - this was a simple project for learning purposes. Public-key signatures will be something I'll look at next/soon/later. :)

Perhaps the only other thing to explore is the complexity in allowing/denying actions based on the user - in a human-readable fashion, not via UIDs. So www-data can execute some programs, alice can run a different set of binaries, and git can only run /usr/bin/git.

Of course down that path lies AppArmor, SELinux, and madness..

Ingo Juergensmann: Upgrade to Debian Stretch - GlusterFS fails to mount

25 June, 2017 - 01:45

Before I upgraded from Jessie to Stretch, everything worked like a charm with glusterfs in Debian. But after I upgraded the first VM to Debian Stretch, I discovered that glusterfs-client was unable to mount the storage on Jessie servers. I got this in the glusterfs log:

[2017-06-24 12:51:53.240389] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server= --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/
[2017-06-24 12:51:54.534826] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 12:51:54.534896] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
[2017-06-24 12:51:56.668254] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-24 12:51:56.671649] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-06-24 12:51:56.671669] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/le)
[2017-06-24 12:51:57.014502] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/ [0x7fbea36c4a20] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x55fbbaed06f4] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55fbbaeca444] ) 0-: received signum (0), shutting down
[2017-06-24 12:51:57.014564] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/etc/'.
[2017-06-24 16:44:45.501056] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server= --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/
[2017-06-24 16:44:45.504038] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 16:44:45.504084] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount

After some searching on the Internet I found Debian #858495, but no solution for my problem. Some search results recommended setting "option rpc-auth-allow-insecure on", but this didn't help. In the end I joined #gluster on Freenode and got some hints there:

JoeJulian | ij__: debian breaks apart ipv4 and ipv6. You'll need to remove the ipv6 ::1 address from localhost in /etc/hosts or recombine your ip stack (it's a sysctl thing)
JoeJulian | It has to do with the decisions made by the debian distro designers. All debian versions should have that problem. (yes, server side).

Removing ::1 from /etc/hosts and from the lo interface did the trick, and I could mount glusterfs storage from Jessie servers in my Stretch VMs again. However, when I upgraded the glusterfs storages to Stretch as well, this "workaround" didn't work anymore. Some more searching on the Internet led me to this posting on the glusterfs mailing list:

We had seen a similar issue and Rajesh has provided a detailed explanation on why at [1]. I'd suggest you to not to change glusterd.vol but execute "gluster volume set <volname> transport.address-family inet" to allow Gluster to listen on IPv4 by default.

Setting this option instantly fixed my issues with mounting glusterfs storages.

So, whatever is wrong with glusterfs in Debian, it seems to have something to do with IPv4 and IPv6. When disabling IPv6 in glusterfs, it works. I added information to #858495.

Kategorie: Debian Tags: Debian, GlusterFS, Software, Server

Norbert Preining: Calibre 3 for Debian

24 June, 2017 - 10:42

I have updated my Calibre Debian repository to include packages of the current Calibre 3.1.1. As with the previous packages, I kept RAR support in to allow me to read comic books. I also have forwarded my changes to the maintainer of Calibre in Debian so maybe we will have soon official packages, too.

The repository location hasn’t changed, see below.

deb calibre main
deb-src calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13.


Joachim Breitner: The perils of live demonstrations

24 June, 2017 - 06:54

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I had given the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <>
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.
    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly2. In the end, I could save face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or in the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.

  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

Joey Hess: PV array is hot

24 June, 2017 - 03:43

Only took a couple hours to wire up and mount the combiner box.

Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to.

And the new PV array is hot! This should read close to 112 V at solar noon; so for 4 pm, 66.8 V seems reasonable.

Riku Voipio: Cross-compiling with debian stretch

23 June, 2017 - 23:25
Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick exact-steps guide. But first – while you could do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch
echo "stretch_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now that we have a nice build environment, let's cross-build something more complicated: qemu.

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
Now that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests, or some of the build-dependencies may not be multiarch-enabled. So work continues :)
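As a sketch of the "nocheck" case, using dpkg's standard build option and profile mechanism (the package being built is whatever you unpacked above):

```shell
# Skip build-time unit tests, which usually cannot run on the build
# machine when cross-compiling for a foreign architecture.
export DEB_BUILD_OPTIONS=nocheck
export DEB_BUILD_PROFILES=nocheck
dpkg-buildpackage -aarm64 -j6 -b
```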

Jonathan Dowland: WD drive head parking update

23 June, 2017 - 21:28

An update for my post on Western Digital Hard Drive head parking: disabling the head-parking completely stopped the Load_Cycle_Count S.M.A.R.T. attribute from incrementing. This is probably at the cost of power usage, but I am not able to assess the impact of that as I'm not currently monitoring the power draw of the NAS (Although that's on my TODO list).
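For reference, a hedged sketch of the kind of commands involved, assuming a WD drive at /dev/sda and the idle3-tools and smartmontools packages (check your drive's documentation before changing timers):

```shell
# Inspect and disable the WD idle3 (head-parking) timer with idle3-tools.
idle3ctl -g /dev/sda    # show the current idle3 timer
idle3ctl -d /dev/sda    # disable it (takes effect after a power cycle)

# Watch the S.M.A.R.T. attribute that the timer drives up.
smartctl -A /dev/sda | grep Load_Cycle_Count
```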

Bits from Debian: Hewlett Packard Enterprise Platinum Sponsor of DebConf17

23 June, 2017 - 21:15

We are very pleased to announce that Hewlett Packard Enterprise (HPE) has committed support to DebConf17 as a Platinum sponsor.

"Hewlett Packard Enterprise is excited to support Debian's annual developer conference again this year", said Steve Geary, Senior Director R&D at Hewlett Packard Enterprise. "As Platinum sponsors and member of the Debian community, HPE is committed to supporting Debconf. The conference, community and open distribution are foundational to the development of The Machine research program and will bring our Memory-Driven Computing agenda to life."

HPE is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

With this additional commitment as Platinum Sponsor, HPE contributes to make possible our annual conference, and directly supports the progress of Debian and Free Software helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Hewlett Packard Enterprise, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through, and visit the DebConf17 website at

Arturo Borrero González: Backup router/switch configuration to a git repository

23 June, 2017 - 15:10

Most routers/switches out there store their configuration in plain text, which is nice for backups. I’m talking about Cisco, Juniper, HPE, etc. The configuration of our routers is changed several times a day by the operators, and we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version perhaps they include changes-tracking mechanisms as well. However, they mostly don’t integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-full for what we required: simply fetch the plain text config and put it into a git repo.

It is worth noting that the most important drawback of not triggering the change-tracking from the router/switch itself is that we have to follow a polling approach: logging into each device, fetching the plain-text config and then committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the sync behaviour and won’t see any changes until the next cron run.

In most cases, we lose authorship information as well. But that is not important for us right now; in the future this is something we will have to solve.

Also, some routers/switches lack basic SSH security improvements, like public-key authentication, so we end up having to hard-code user/pass in our worker script.

Since we have several devices of the same type, we just iterate over their names.

For example, this is what we use for HP Comware devices:

#!/bin/sh
# run this script by cron

DEVICES="device1 device2 device3 device4"


TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
	echo "E: no temp dir created" >&2
	exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
	echo "E: no git binary" >&2
	exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
	echo "E: no scp binary" >&2
	exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
	echo "E: no sshpass binary" >&2
	exit 1
fi

# clone git repo ($GIT holds the repository URL)
cd "$TMP_DIR"
$GIT_BIN clone $GIT repo
cd repo

for device in $DEVICES; do
	mkdir -p $device
	cd $device

	# fetch cfg (password hard-coded, see note above; the remote
	# path is device-specific, HP Comware shown here)
	$SSHPASS_BIN -p "xxxxx" $SCP_BIN git@${device}:flash:/startup.cfg .

	# commit (only creates a commit if changes were detected)
	$GIT_BIN add -A .
	$GIT_BIN commit -m "${device}: configuration change" \
		-m "A configuration change was detected" \
		--author="cron <>"

	$GIT_BIN push -f
	cd ..
done

# cleanup
rm -rf $TMP_DIR

You should create a read-only user ‘git’ on the devices. And beware that each device model stores the config file in a different place.

For reference, in HP comware, the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit

In Junos/Juniper, the file you should scp is /config/juniper.conf.gz, and the script should gunzip the data before committing. For the read-only user, try something like this:

system {
	login {
		class git {
			permissions maintenance;
			allow-commands "scp.*";
		}
		user git {
			uid xxx;
			class git;
			authentication {
				encrypted-password "xxx";
			}
		}
	}
}
The file to scp in HP procurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Having the device controlled directly by git (i.e. commit -> git hook -> device update), or at least having the device commit the changes to git by itself. I’m open to suggestions :-)

Elena 'valhalla' Grandi: On brokeness, the live installer and being nice to people

23 June, 2017 - 14:40

This morning I've read this

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

Steve McIntyre: -1, Trolling

23 June, 2017 - 04:59

Here's a nice comment I received by email this morning. I guess somebody was upset by my last post?

From: Tec Services <>
Date: Wed, 21 Jun 2017 22:30:26 -0700
Subject: its time for you to retire from debian...unbelievable..your
         the quality guy and fucked up the installer!

i cant ever remember in the hostory of computing someone releasing an installer
that does not work!!


you need to be retired...due to being retarded..

and that this was dedicated to ian...what a should be ashames..he is probably roling in his grave from shame
right now....

It's nice to be appreciated.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.