Planet Debian


Jonathan Carter: Debconf 2016 to be hosted in Cape Town

14 February, 2015 - 04:35

Long story short, we put in a bid to host Debconf 16 in Cape Town, and we got it!

Back at Debconf 12 (Nicaragua), many people asked me when we're hosting a Debconf in South Africa. I just laughed and said "Who knows, maybe some day". During the conference I talked to Stefano Rivera (tumbleweed), who said that many people had asked him too. We came to the conclusion that we'd both really, really want to do it but just didn't have enough time at that stage. I wanted to get to a point where I could take 6 months off for it and suggested that we prepare a bid for 2019. Stefano thought that this was quite funny; I think at some point we managed to get that estimate down to 2017-2018.

That date crept even closer with great people like Allison Randal and Bernelle Verster joining our team, along with other locals Graham Inggs, Raoul Snyman, Adrianna Pińska, Nigel Kukard, Simon Cross, Marc Welz, Neill Muller, Jan Groenewald, and our international mentors such as Nattie Mayer-Hutchings, Martin Krafft and Hannes von Haugwitz. Now we're having that Debconf next year. It's almost hard to believe, and I'm not sure how I'll sleep tonight. We've waited so long for this and we've got a mountain of work ahead of us, but we've got a strong team, and I think Debconf 2016 attendees are in for a treat!

Since I happened to live close to Montréal back in 2012, I supported the idea of a Debconf bid for Montréal first, and then for Cape Town afterwards. Little did I know then that, 3 years later, those two would be the only cities bidding against each other. I think both cities are superb locations to host a Debconf, and I'm supporting Montréal's bid for 2017.

Want to get involved? We have a mailing list and IRC channel: #debconf16-capetown on oftc. Thanks again for all the great support from everyone involved so far!

Richard Hartmann:

14 February, 2015 - 03:59

Here's to a happy, successful, and overall quite awesome DebConf16 in Cape Town, South Africa.

As a very welcome surprise, the Montreal team is already planning a mini-DC and already has a strong bid for DC17.

Update: Well, that was quick...

Richard Hartmann: Release Critical Bug report for Week 07

14 February, 2015 - 03:16

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1071 (Including 192 bugs affecting key packages)
    • Affecting Jessie: 147 (key packages: 110) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 106 (key packages: 82) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 25 bugs are tagged 'patch'. (key packages: 23) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 4 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 77 bugs are neither tagged patch, nor marked done. (key packages: 59) Help make a first step towards resolution!
      • Affecting Jessie only: 41 (key packages: 28) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 11 bugs are in packages that are unblocked by the release team. (key packages: 6)
        • 30 bugs are in packages that are not unblocked. (key packages: 22)
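
The breakdown above is internally consistent; here is a quick sanity check over the quoted figures (nothing in it beyond the numbers in this report):

```python
# Bugs affecting both Jessie and unstable: patch-tagged + marked-done + neither.
assert 25 + 4 + 77 == 106
# Bugs affecting Jessie only: unblocked + not unblocked.
assert 11 + 30 == 41
# Together they give all bugs affecting Jessie.
assert 106 + 41 == 147
# The key-package counts follow the same split.
assert 23 + 0 + 59 == 82
assert 82 + (6 + 22) == 110
print("totals consistent")
```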

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy         Jessie
  43  284 (213+71)   468 (332+136)  319 (240+79)
  44  261 (201+60)   408 (265+143)  274 (224+50)
  45  261 (205+56)   425 (291+134)  295 (229+66)
  46  271 (200+71)   401 (258+143)  427 (313+114)
  47  283 (209+74)   366 (221+145)  342 (260+82)
  48  256 (177+79)   378 (230+148)  274 (189+85)
  49  256 (180+76)   360 (216+155)  226 (147+79)
  50  204 (148+56)   339 (195+144)  ???
  51  178 (124+54)   323 (190+133)  189 (134+55)
  52  115 (78+37)    289 (190+99)   147 (112+35)
   1  93 (60+33)     287 (171+116)  140 (104+36)
   2  82 (46+36)     271 (162+109)  157 (124+33)
   3  25 (15+10)     249 (165+84)   172 (128+44)
   4  14 (8+6)       244 (176+68)   187 (132+55)
   5  2 (0+2)        224 (132+92)   175 (124+51)
   6  release!       212 (129+83)   161 (109+52)
   7  release+1      194 (128+66)   147 (106+41)
   8  release+2      206 (144+62)
   9  release+3      174 (105+69)
  10  release+4      120 (72+48)
  11  release+5      115 (74+41)
  12  release+6      93 (47+46)
  13  release+7      50 (24+26)
  14  release+8      51 (32+19)
  15  release+9      39 (32+7)
  16  release+10     20 (12+8)
  17  release+11     24 (19+5)
  18  release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Olivier Berger: Testing the RuneStone interactive Python courses server in docker

13 February, 2015 - 21:36

I’ve been working on setting up a Docker container environment for testing the RuneStone Interactive server.

RuneStone Interactive allows the publication of courses containing interactive Python examples. While most of the content is static (the Python examples run inside a Python interpreter implemented in JavaScript, hence locally in the JS VM of the web browser), the tool also offers an environment for monitoring the progress of learners in a course, which is dynamic and is queried by the browser over AJAX APIs.

That’s the part I wanted to be able to operate for test purposes. As it is a web2py application, it’s not exactly obvious how to gather all dependencies and run it locally. Well, in fact it is, but I want to understand the architecture of the tool, and thereby its deployment constraints, so building a Docker image helps with that.

The result is the following:

Now it’s easier to test the writing of a new course (yet another container on top of the former) and directly test it for real.

Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910 - part II

13 February, 2015 - 18:40

In my recent attempt to set up a motion detection camera I was disappointed that my camera, which should be able to record at 30 fps in 720p mode, only reached 10 fps using the software motion. Now I've gotten a bit further. This seems to be an issue with the format used by motion. I've checked the output of v4l2-ctl ...

$ v4l2-ctl -d /dev/video1 --list-formats-ext
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
Size: Discrete 1280x720
Interval: Discrete 0.100s (10.000 fps)

Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)

Index : 1
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : MJPEG
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)

Interval: Discrete 0.042s (24.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)

... and motion:

$ motion
[1] [NTC] [VID] v4l2_set_pix_format: Config palette index 17 (YU12) doesn't work.
[1] [NTC] [VID] v4l2_set_pix_format: Supported palettes:
[1] [NTC] [VID] v4l2_set_pix_format: (0) YUYV (YUV 4:2:2 (YUYV))
[1] [NTC] [VID] v4l2_set_pix_format: 0 - YUV 4:2:2 (YUYV) (compressed : 0) (0x56595559)
[1] [NTC] [VID] v4l2_set_pix_format: (1) MJPG (MJPEG)
[1] [NTC] [VID] v4l2_set_pix_format: 1 - MJPEG (compressed : 1) (0x47504a4d)

[1] [NTC] [VID] v4l2_set_pix_format Selected palette YUYV
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YUYV (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YUYV (1280x720) bytesperlines 2560 sizeimage 1843200 colorspace 00000008

Ok, so both formats, YUYV and MJPG, are supported and recognized, and I can choose either via the v4l2_palette configuration variable; citing motion.conf:

# v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
# Values :
# V4L2_PIX_FMT_SN9C10X : 0 'S910'
# V4L2_PIX_FMT_SBGGR16 : 1 'BYR2'
# V4L2_PIX_FMT_SBGGR8 : 2 'BA81'
# V4L2_PIX_FMT_SPCA561 : 3 'S561'
# V4L2_PIX_FMT_PAC207 : 6 'P207'
# V4L2_PIX_FMT_RGB24 : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420 : 17 'YU12'
v4l2_palette 17

Now motion uses YUYV as its default mode, as shown by its output. So it seems all I have to do is choose MJPEG in my motion.conf:

v4l2_palette 8

Testing again ...

$ motion
[1] [NTC] [VID] vid_v4lx_start: Using V4L2
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 25 (ret 0 )
Corrupt JPEG data: 5 extraneous bytes before marker 0xd6
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 14 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 36 (ret 0 )
Corrupt JPEG data: 3 extraneous bytes before marker 0xd2
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 58 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 80 (ret 0 )
Corrupt JPEG data: 4 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [ERR] [ALL] motion_init: Error capturing first image
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 16 items
Corrupt JPEG data: 4 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 11 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 3 extraneous bytes before marker 0xd4
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 7 extraneous bytes before marker 0xd1

... and another issue turns up :( The output above goes on and on and on, and there is no video capturing. According to $searchengine, the above happens to a lot of people. I found one often-suggested fix: pre-loading from libv4l-0:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/ motion

But the problem persists and I'm out of ideas :( So at the moment it looks like I cannot use the MJPEG format and won't get 30 fps at 1280x720 pixels. While writing this, I then discovered a solution by good old trial and error: leaving the v4l2_palette variable at its default value 17 (YU12) and pre-loading makes motion use YU12, and the framerate at least rises to 24 fps:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/ motion
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YU12 (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YU12 (1280x720) bytesperlines 1280 sizeimage 1382400 colorspace 00000008
[1] [NTC] [EVT] event_new_video FPS 24

Finally! :) The results are nice. It might even be a good idea to limit the framerate a bit, e.g. to 20. So this is a tested configuration for the Logitech C910 running at a resolution of 1280x720 pixels:

v4l2_palette 17
width 1280
height 720
framerate 20
minimum_frame_time 0
pre_capture 10 # 0.5 seconds pre-recording
post_capture 50 # 2.5 seconds post-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

Now all this made me curious about which framerate is possible at a resolution of 1920x1080 pixels and what the results look like. Although I get 24 fps there too, the resulting movie suffers from jumps every few frames. So here I got pretty good results with a more conservative setting. When increasing the framerate - tested up to 15 fps with good results - pre_capture needed to be decreased accordingly, to values between 1 and 3, to minimize jumps:

v4l2_palette 17
width 1920
height 1080
framerate 12
minimum_frame_time 0
pre_capture 6 # 0.5 seconds pre-recording
post_capture 30 # 2.5 seconds post-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

Both configurations lead to satisfying results. Of course the latter will easily fill your hard drive :)


I guess the results can be optimized further by playing around with ffmpeg_bps and ffmpeg_variable_bitrate. Maybe then it is possible to record without jumps at higher framerates too(?). I also didn't test the various norm settings (PAL, NTSC, etc.).

Steve McIntyre: Linaro VLANd v0.2

13 February, 2015 - 13:53

I've been working on this for too long without really talking about it, so let's fix that now!

VLANd is a simple (hah!) Python program intended to make it easy to manage port-based VLAN setups across multiple switches in a network. It is designed to be vendor-agnostic, with a clean pluggable driver API to allow a wide range of different switches to be controlled together.

There's more information in the README file. I've just released v0.2, with a lot of changes included since the last release:

  • Massive numbers of bugfixes and code cleanups
  • Improve how we talk to the Cisco switches - disable paging on long output
  • Switch from "print" to "" for messages, and add logfile support
  • Improved test suite coverage, and added core test scripts for the lab environment
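
A pluggable driver API of the kind described might look roughly like the sketch below. This is a hypothetical illustration, not VLANd's actual interface; the class and method names (SwitchDriver, create_vlan, set_port_vlan, DummyDriver) are invented for the example:

```python
from abc import ABC, abstractmethod

class SwitchDriver(ABC):
    """Hypothetical vendor-agnostic switch driver interface."""

    @abstractmethod
    def create_vlan(self, vlan_id: int, name: str) -> None:
        """Create a VLAN on the switch."""

    @abstractmethod
    def set_port_vlan(self, port: str, vlan_id: int) -> None:
        """Assign a port to an (already created) VLAN."""

class DummyDriver(SwitchDriver):
    """In-memory stand-in for a real vendor backend, useful for tests."""

    def __init__(self):
        self.vlans = {}
        self.ports = {}

    def create_vlan(self, vlan_id, name):
        self.vlans[vlan_id] = name

    def set_port_vlan(self, port, vlan_id):
        if vlan_id not in self.vlans:
            raise ValueError("unknown VLAN %d" % vlan_id)
        self.ports[port] = vlan_id

driver = DummyDriver()
driver.create_vlan(100, "lab-network")
driver.set_port_vlan("ge-0/0/1", 100)
print(driver.ports)  # {'ge-0/0/1': 100}
```

The point of such an interface is that the higher-level VLAN management logic only talks to the abstract API, so supporting a new switch vendor means writing one new driver class.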

I've demonstrated this code today in Hong Kong at the Linaro Connect event, and now I'm going on vacation for 4 weeks. Australia here I come! :-)

Neil Williams: OpenTAC mailing list

13 February, 2015 - 09:10

After the OpenTAC session at Linaro Connect, we do now have a mailing list to support any and all discussions related to OpenTAC. Thanks to Daniel Silverstone for the list.

List archive:

More information on OpenTAC:

Richard Hartmann: A Dance with Dragons

13 February, 2015 - 04:51

Yesterday, I went to the Federal Office for Information Security (BSI) on an invitation to their "expert round-table on SDN".

While the initial mix of industry attendees was of.. varied technical knowledge.. I was pleasantly surprised by the level of preparation by the BSI. None of them were networkers, but they did have a clear agenda and a pretty good idea of what they wanted to know.

During the first round-table, they went through

  • This is our idea of what we think SDN is
  • Is SDN a fad or here to stay?
  • What does the industry think about SDN?
  • What are the current, future, and potential benefits of SDN?
  • What are the current, future, and potential risks of SDN?
  • How can SDN improve the security of critical infrastructure?
  • How can you ensure that the whole stack from hardware through data plane to control plane can be trusted?
  • How can critical parts of the SDN stack be developed in, or strongly influenced from, players in Germany or at least Europe?

Yes, some of those questions are rather basic and/or generic, but that was on purpose. The mix of clear expectations and open-ended questions was quite effective at getting at what they wanted to know.

During lunch, we touched on the more general topic of how to reach and interact with technical audiences, with regard to both networks and software. The obvious answer for initial contact regarding networks was DENOG, which they didn't know about.

With software, the answer is not quite as simple. My suggestion was to engage in a positive way and thus build trust over time. Their clear advantage is that, contrary to most other services, their raison d'être is purely defensive and non-military so they can focus on audits, support of key pieces of software, and, most important of all, talk about their results. No idea if they will actually pursue this, but here's to hoping; we could all use more government players on the good side.

Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910

13 February, 2015 - 02:02

Winter time is a good time for some nature observation. Yesterday I had a woodpecker (picture) in front of my kitchen window. During the recent weeks there were long-tailed tits, a wren and other rarely seen birds. So I thought it might be a good idea to capture some of these events :) I still own a Logitech C910 USB camera which allows HD video capturing up to 1080p. So I checked the web for software that would begin video capturing in case of motion detection and found motion, already available for Debian users. So I gave it a try. I tested all available resolutions of the camera together with the capturing results. I found that the resulting framerate of both the live stream and the captured video is highly dependent on the resolution and a few configuration options. Below is a summary of my tests and the results I've achieved so far.

Logitech C910 HD camera

Just a bit of data regarding the camera. AFAIK it allows for fluent video streams up to 720p.

$ dmesg
usb 7-3: new high-speed USB device number 5 using ehci-pci
usb 7-3: New USB device found, idVendor=046d, idProduct=0821
usb 7-3: New USB device strings: Mfr=0, Product=0, SerialNumber=1
usb 7-3: SerialNumber: 91CF80A0
usb 7-3: current rate 0 is different from the runtime rate 16000
usb 7-3: current rate 0 is different from the runtime rate 32000
uvcvideo: Found UVC 1.00 device (046d:0821)
input: UVC Camera (046d:0821) as /devices/pci0000:00/0000:00:1a.7/usb7/7-3/7-3:1.2/input/input17

$ lsusb
Bus 007 Device 005: ID 046d:0821 Logitech, Inc. HD Webcam C910

$ v4l2-ctl -V -d /dev/video1
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line: 2560
Size Image : 1843200
Colorspace : SRGB

Also the uvcvideo kernel module is loaded and the user in question is part of the video group.

Installation and start

Installation of the software is as easy as always:

apt-get install motion

It is possible to run the software as a service, but for testing I copied /etc/motion/motion.conf to ~/.motion/motion.conf, fixed its permissions (you cannot copy the file as a regular user - it's not world-readable) and disabled the daemon mode.

daemon off

Note that in my case the correct device is /dev/video1, because the laptop has a built-in camera, which is /dev/video0. Also, the target directory should be writable by my user:

videodevice /dev/video1
target_dir ~/Videos

Then running motion from the command line ...

[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video1 input -1
[1] [NTC] [VID] v4l2_get_capability:
cap.driver: "uvcvideo"
cap.card: "UVC Camera (046d:0821)"
cap.bus_info: "usb-0000:00:1a.7-1"
[1] [NTC] [VID] v4l2_get_capability: - VIDEO_CAPTURE
[1] [NTC] [VID] v4l2_get_capability: - STREAMING
[1] [NTC] [VID] v4l2_select_input: name = "Camera 1", type 0x00000002, status 00000000
[1] [NTC] [VID] v4l2_select_input: - CAMERA
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items

... will begin to capture motion detection events and also output a live stream. CTRL+C will stop it again.

Live stream

The live stream is available by pointing the browser to localhost:8081. However, the stream seems to run at 1 fps (frame per second), and indeed it does. The stream gains quality with this configuration:

stream_motion on
stream_maxrate 100

The first option causes the stream to run at only 1 fps as long as there is no motion detection event; otherwise the framerate increases to its maximum value, which is either the one given by stream_maxrate or the camera limit. The quality of the stream picture can be increased a bit further by raising the stream_quality value. Because I need neither the stream nor the control feed, I disabled both:

stream_port 0
webcontrol_port 0

Picture capturing

By default there is video and picture capturing if a motion event is detected. I'm not interested in these pictures, so I turned them off:

output_pictures off

FYI: If you want good picture quality, then the value of quality should very probably be increased.

Video capturing

This is the really interesting part :) Of course, if I am going to "shoot" some birds (with the camera), then a small image of say 320x240 pixels is not enough. The camera allows a capture resolution of up to 1920x1080 pixels (1080p) and is advertised for fluent video streams up to 720p (1280x720 pixels). So I tried the following resolutions: 320x240, 640x480, 800x600, 640x360 (360p), 1280x720 (720p) and 1920x1080 (1080p). These are easily configured via the width and height variables. For example, the following configures motion for 1280x720 pixels (720p):

width 1280
height 720

The result was really disappointing. No event is captured with more than 20 fps. At higher resolutions the framerate drops even further, and at the highest resolution of 1920x1080 pixels it is only two(!) fps. Also, every created video runs much too fast, and even faster when the framerate variable is increased. Of course its default value of 2 (fps) is not enough for fluent videos. AFAIK the C910 can run at 30 fps at 1280x720 pixels. So increasing the value of framerate, the maximum framerate recorded, is a must. (If you want to test yourself, check the log output for the value following event_new_video FPS.)

The solution to videos running too fast, however, is to increase the pre_capture value, the number of pre-captured (buffered) pictures from before motion was detected. Even small values like 3-5 result in a distinct improvement of the situation, though increasing the value further didn't have any additional effect. So the values below should get almost the most out of the camera and result in videos at normal speed.

framerate 100
pre_capture 5

Videos at 1280x720 pixels are still captured at 10 fps and I don't know why. Running guvcview, the same camera allows 30 fps at this resolution (even 60 fps at lower resolutions). However, even if the framerate could be higher, the resulting video runs fluently. Still, the quality is just moderate (or, to be honest, still disappointing). It looks "pixelated". Only static pictures are sharp. It took me a while to fix this too, because at first I thought the reason was the camera or missing hardware support. It is not :) The reason is that ffmpeg is configured to produce a moderate(?)-quality video. The relevant variables are ffmpeg_bps and ffmpeg_variable_bitrate. I got the best results by just changing the latter:

ffmpeg_variable_bitrate 2

Finally the resulting video quality is promising. I'll start with this configuration setting up an observation camera for the bird feeding ground.

There is one more tweak for me. I got even better results by enabling the auto_brightness feature.

auto_brightness on

Complete configuration

So the complete configuration looks like this (only the options changed from the original config file are shown):

daemon off
videodevice /dev/video1
width 1280
height 720
framerate 100
auto_brightness on
ffmpeg_variable_bitrate 2
target_dir /home/user/Videos
stream_port 0 #8081
stream_motion on
stream_maxrate 100
webcontrol_port 0 #8080

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2015

13 February, 2015 - 01:24

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 48 work hours have been equally split among 4 paid contributors. Their reports are available:

Evolution of the situation

During the last month, the number of paid work hours has made a noticeable jump: we’re now at 58 hours per month. At this rate, we would need 3 more months to reach our minimal goal of funding the equivalent of a half-time position. Unfortunately, the number of new sponsors currently in the process of joining is not likely to be enough to produce a similar rise next month.

So, as usual, we are looking for more sponsors.

In terms of security updates waiting to be handled, the situation looks a bit worse than last month: the dla-needed.txt file lists 37 packages awaiting an update (7 more than last month), and the list of open vulnerabilities in Squeeze shows about 63 affected packages in total (7 more than last month).

The increase is not too worrying, but the waiting time before an issue is dealt with is sometimes more problematic. To be able to deal with all incoming issues in a timely manner, the LTS team needs more resources: some months will have more issues than usual, some issues will take longer to handle than others, etc.

Thanks to our sponsors

The new sponsors of the month are in bold.


Sven Hoexter: Out of the comfort zone: What I learned from my net-snmp backport on CentOS 5 and 6

12 February, 2015 - 22:19

This is a short roundup of things I learned after I've rolled out the stuff I wrote about here.

Bugs and fighting with multiarch rpm/yum

One oversight led me to not special-case the perl-devel dependency of the net-snmp-devel package for CentOS 5 to depend on perl instead. That was easily fixed, but afterwards a yum install net-snmp-devel still failed, because it tried to install the stock CentOS 5 net-snmp-devel package and its dependencies. Closer investigation showed that it did so because I only provided x86_64 packages in our internal repository, but it wanted to install both i386 and x86_64 packages.

Looking around, this issue is documented in the Fedora Wiki. So the modern way to deal with it is to make the dependency architecture-dependent with the


macro. That is evaluated at build time of the src.rpm, and then the dependency is hardcoded together with the architecture. Compare

$ rpm -qRp net-snmp-devel-5.7.2-18.el6.x86_64.rpm|grep perl


$ rpm -qRp net-snmp-devel-5.7.2-19.el5.centos.x86_64.rpm |grep perl

As you can see, it's missing for the el5 package, and that's because it's too old; or, more specifically, rpm is too old to know about it.
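
For reference, the idiom documented in the Fedora Wiki is the %{?_isa} macro; a hypothetical spec-file fragment (not the actual net-snmp spec) would look like this:

```spec
# Hypothetical spec fragment for illustration. On rpm versions that know
# the macro, %{?_isa} expands at build time to the architecture suffix
# (e.g. "(x86-64)"), so the built package ends up depending on
# perl-devel(x86-64) instead of perl-devel of any architecture.
Requires: perl-devel%{?_isa}
```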

The workaround we use for now is to explicitly install only the x86_64 version of net-snmp-devel on CentOS 5 when required. That's a

yum install net-snmp-devel.x86_64

Another possible workaround is to remove i386 from your repository or just blacklist it in your yum configuration. I've read that's possible but did not try it.

steal time still missing on CentOS 6

One of the motivations for this backport was support for steal time reporting via SNMP. For CentOS 5 that's solved with our net-snmp backport, but for CentOS 6 we still do not see steal time. A short look around showed that it's also missing in top, vmstat and friends. Could it be a kernel issue?

Since we're already running on the CentOS Xen hypervisor, we gave the Linux 3.10.x packages a spin in a domU, and now we also have steal time reporting. Yes, a kernel issue with the stock CentOS 6/RHEL 6 kernel. I'm not sure where and how to file it, since Red Hat moved to KVM.

Sven Hoexter: Comparing apples and oranges: IPv6 adoption vs HTTP/2 adoption

12 February, 2015 - 21:34

I have not yet read the HTTP/2 draft, but I read the SPDY draft about 2.5 years ago, so I might be wrong here with my assumptions.

We're now about 17 years into the IPv6 migration and still we do not have widespread IPv6 adoption on the client side. That's mostly an issue of CPE device rotation at the customer side and updating provisioning solutions. For the server side we're more or less done. Operating system support is there, commercial firewall and router vendors picked it up in the last ten years, datacenter providers are ready, so we're waiting for the users to put some pressure on providing a service via IPv6.

Looking at HTTP/2, it's also a different protocol. The gap might be somewhat smaller than the one between IPv4 and IPv6, but it's still huge. Now I'd bet that in the case of HTTP/2 we'll also see a really slow adoption, but this time it's not the client side that's holding it back. For HTTP/2 I have no doubts about fast adoption on the client side. Google and Mozilla nowadays provide a kind of continuous delivery of new features to the end user on a monthly basis (surprisingly, that even works for mobile devices!). So the web browser near you will soon have an HTTP/2 implementation. Even Microsoft is switching to a new development model of rolling updates with Windows 10 and the Internet Explorer successor. But looking at the server side, I doubt we'll have widespread HTTP/2 support within the next 5 years. Maybe in 10. But I doubt even that.

With all those reverse proxies, interception devices for header enrichment at mobile carriers, application servers, self-written HTTP implementations, load balancers and so on, I doubt we'll have a fast and smooth migration ahead of us. But maybe we're all lucky and I'm wrong. I'd really love to be wrong here.

Maybe we can provide working IPv4-IPv6 dual-stack setups for everyone in the meantime.

Daniel Leidert: Setting up a network buildd with pbuilder ... continued

12 February, 2015 - 21:15

Last year I described my setup of a local network buildd using pbuilder, ccache, inoticoming and NFS. One then-still-open goal was to support different Debian releases. This is especially necessary for backports of e.g. bluefish. The rules for contributing backports require including all changelog entries since the last version on debian-backports, or since stable if it's the first backported version of the package. Therefore one needs to know the last version in e.g. wheezy-backports. Because I'm not typing the command myself (the source package just gets uploaded and inoticoming starts the build process), I was looking for a way to automatically retrieve that version and add the relevant -vX.Y-Z switch to dpkg-buildpackage.

The solution I found requires aptitude and a sources.list entry for the relevant release. If you are only interested in the solution, just jump to the end :)

I'm going to add the version switch to the DEBBUILDOPTS variable of pbuilder. In my setup I have a common (shared) snippet called /etc/ and one configuration file per release and architecture, say /etc/pbuilderrc.amd64.stable. Now the former already contains ...

DEBBUILDOPTS="-us -uc -j2"

... and DEBBUILDOPTS can be extended in the latter:


Because the config file is parsed pretty early in the process, the package name has not yet been assigned to any variable. But the last argument to pbuilder is the .dsc file, so I take the last argument and parse that file to retrieve the source package name.

cat ${@: -1} | grep -e ^Source | awk -F\  '{ print $2 }'

The solution above works because pbuilder is a BASH script; otherwise it might need some tweaking. I use the source package name because it is unique and there is just one :) Now with this name I check for all versions in wheezy* and stable* and sort them. The sort order of aptitude is from low to high, so the last line should contain the highest version. This covers both the case that there has not yet been a backport and the case that there is one:

aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version \
"?narrow(?source-package(^PACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" |\
tail -n 1 | sed -e 's#~bpo.*$##g'

The sed part is necessary because otherwise dh_genchanges would add a superfluous changelog entry (the last one of the previous upload). To make things easier, I assign the name and version to variables. So this is the complete solution:

MYPACKAGE="`cat ${@: -1} | grep -e ^Source | awk -F\ '{ print $2 }'`"
MYBPOVERS="`aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version "?narrow(?source-package(^$MYPACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" | tail -n 1 | sed -e 's#~bpo.*$##g'`"

log "I: Package is $MYPACKAGE and last stable/bpo version is $MYBPOVERS"
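The one piece not shown above is the actual DEBBUILDOPTS extension. A minimal sketch, assuming dpkg-buildpackage's -v option and using example values for the variables computed above:

```shell
DEBBUILDOPTS="-us -uc -j2"   # from the shared pbuilderrc snippet
MYBPOVERS="2.2.6-1"          # example value, as computed above

# Append dpkg-buildpackage's -v option so that all changelog entries
# newer than the last stable/backports version end up in the .changes file.
DEBBUILDOPTS="${DEBBUILDOPTS} -v${MYBPOVERS}"
echo "$DEBBUILDOPTS"
# -> -us -uc -j2 -v2.2.6-1
```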


I've recently built a new bluefish backport. The last backport version is 2.2.6-1~bpo70+1 and the stable version is 2.2.3-4. So the version I need is 2.2.6-1 (note that 2.2.6-1~bpo70+1 < 2.2.6-1!). Checking the log, it works:

I: Package is bluefish and last stable/bpo version is 2.2.6-1
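The tilde ordering can be double-checked without aptitude: GNU sort -V follows essentially the same rule as dpkg here, sorting a ~ suffix lower than anything else, so a backport sorts below the version it is based on:

```shell
# Sort the three versions from the bluefish example ascending and pick
# the highest; '2.2.6-1~bpo70+1' correctly sorts below the plain '2.2.6-1'.
printf '%s\n' '2.2.3-4' '2.2.6-1~bpo70+1' '2.2.6-1' | sort -V | tail -n 1
# -> 2.2.6-1
```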

A different example is rsync. I recently rebuilt it locally for a stable system (I wanted to make use of the --chown switch). There is no backport yet, so the version I would need is 3.0.9-4. Checking the logs again, it works too:

I: Package is rsync and last stable/bpo version is 3.0.9-4

Feedback appreciated ...


Christian Perrier: Bug #777777

12 February, 2015 - 12:57
Who is going to report bug #777777? :-)

John Goerzen: Reactions to “Has modern Linux lost its way?” and the value of simplicity

12 February, 2015 - 06:39

Apparently I touched a nerve with my recent post about the growing complexity of modern Linux.

There were quite a few good comments, which I’ll mention below. They’ve provided me some clarity on the problem, in fact, so I’ll try to distill a few more thoughts.

The value of simplicity and predictability

The best software, whether it’s operating systems or anything else, is predictable. You read the documentation, or explore the interface, and you can make a logical prediction that “when I do action X, the result will be Y.” grep and cat are perfect examples of this.

The more complex the rules in the software, the harder it is for us to predict its behavior. It leads to bugs, and it leads to inadvertent security holes. Worse, it leads to people being unable to fix things themselves — one of the key freedoms that Free Software is supposed to provide. The more complex software is, the fewer people will be able to fix it by themselves.

Now, I want to clarify: I hear a lot of talk about “ease of use.” Gnome removes options in my print dialog box to make it “easier to use.” (This is why I do not use Gnome. It actually makes it harder to use, because now I have to go find some obscure way to just make the darn thing print.) A lot of people conflate ease of use with ease of learning, but in reality, I am talking about neither.

I am talking about ease of analysis. The Linux command line may not have pointy-clicky icons, but — at least at one time — once you understood ls -l and how groups, users, and permission bits interacted, you could fairly easily conclude who had access to what on a system. Now we have a situation where the answer to this is quite unclear in terms of desktop environments (apparently some distros ship network-manager so that all users on the system share the wifi passwords they enter. A surprise, eh?)

I don’t mind reading a manpage to learn about something, so long as the manpage was written to inform.

With this situation of dbus/cgmanager/polkit/etc, here’s what it feels like. This, to me, is the crux of the problem:

It feels like we are in a twisty maze, every passage looks alike, and our flashlight ran out of batteries in 2013. The manpages, to the extent they exist for things like cgmanager and polkit, describe the texture of the walls in our cavern, but don’t give us a map to the cave. Therefore, we are each left to piece it together little bits at a time, but there are traps that keep moving around, so it’s slow going.

And it’s a really big cave.

Other user perceptions

There are a lot of comments on the blog about this. It is clear that the problem is not specific to Debian. For instance:

  • Christopher writes that on Fedora, “annoying, niggling problems that used to be straightforward to isolate, diagnose and resolve by virtue of the platform’s simple, logical architecture have morphed into a morass that’s worse than the Windows Registry.” Alessandro Perucchi adds that he’s been using Linux for decades, and now his wifi doesn’t work, suspend doesn’t work, etc. in Fedora and he is surprisingly unable to fix it himself.
  • Nate Bargman writes, in a really insightful comment, “I do feel like as though I’m becoming less a master of and more of a slave to the computing software I use. This is not a good thing.”
  • Singh makes the valid point that this stuff is in such a state of flux that even if a person is one of the few dozen in the world who understand what goes into a session today, the knowledge will be outdated in 6 months. (HAL, anyone?)

This stuff is really important, folks. People being able to maintain their own software, work with it themselves, etc. is one of the core reasons that Free Software exists in the first place. It is a fundamental value of our community. For decades, we have been struggling for survival, for relevance. When I started using Linux, it was both a question and an accomplishment to have a useable web browser on many platforms. (Netscape Navigator was closed source back then.) Now we have succeeded. We have GPL-licensed and BSD-licensed software running on everything from our smartphones to cars.

But we are snatching defeat from the jaws of victory, because just as we are managing to remove the legal roadblocks that kept people from true mastery of their software, we are erecting technological ones that make the step into the Free Software world so much more difficult than it needs to be.

We no longer have to craft Modelines for X, or compile a kernel with just the right drivers. This is progress. Our hardware is mostly auto-detected and our USB serial dongles work properly more often on Linux than on Windows. This is progress. Even our printers and scanners work pretty darn well. This is progress, too.

But in the place of all these things, we now have userspace mucking it up. We have people with mysterious errors who can’t be easily assisted by the elders in the community, because the elders are just as mystified. We have bugs crop up that would once have been shallow, but are now non-obvious. We are going to leave a sour taste in people’s mouths, and stir repulsion instead of interest among those just checking it out.

The ways out

It’s a nasty predicament, isn’t it? What are the ways out of that cave without being eaten by a grue?

Obviously the best bet is to get rid of the traps and the grues. Somehow the people that are working on this need to understand that elegance is a feature — a darn important feature. Sadly I think this ship may have already sailed.

Software diagnosis tools like Enrico Zini’s seat-inspect idea can also help. If we have something like an “ls for polkit” that can reduce all the complexity to something more manageable, that’s great.

The next best thing is a good map — good manpages, detailed logs, good error messages. If software would be more verbose about the permission errors, people could get a good clue about where to look. If manpages for software didn’t just explain the cavern wall texture, but explain how this room relates to all the other nearby rooms, it would be tremendously helpful.

At present, I am unsure whether our problem is one of very poor documentation, or whether the underlying design is so complex that it defies being documented in anything smaller than a book (in which case, our ship has not just sailed but is taking on water).

Counter-argument: progress

One theme that came up often in the comments is that this is necessary for progress. To a certain extent, I buy that. I get why udev is important. I get why we want the DE software to interact well. But here’s my thing: this already worked well in wheezy. Gnome, XFCE, and KDE software all could mount/unmount my drives. I am truly still unsure what problem all this solved.

Yes, cloud companies have demanding requirements about security. I work for one. Making security more difficult to audit doesn’t do me any favors, I can assure you.

The systemd angle

To my surprise, systemd came up quite often in the discussion, despite the fact that I mentioned I wasn’t running systemd-sysv. It seems like the new desktop environment ecosystem is “the systemd ecosystem” in a lot of people’s minds. I’m not certain this is justified; systemd was not my first choice, but as I said in an earlier blog post, “jessie will still boot”.

A final note

I still run Debian on all my personal boxes and I’m not going to change. It does awesome things. For under $100, I built a music-playing system, with Raspberry Pis, fully synced throughout my house, using a little scripting and software. The same thing from Sonos would have cost thousands. I am passionate about this community and its values. Even when jessie releases with polkit and all the rest, I’m still going to use it, because it is still a good distro from good people.

Michal Čihař: Hosted Weblate welcomes new projects

12 February, 2015 - 00:00

In the past few days, several new free software projects have been added to Hosted Weblate. If you are interested in translating your project there, just follow the instructions on our website.

The new projects include:


Daniel Leidert: Blogger RSS feed and category URLs with combined labels/tags

11 February, 2015 - 23:07

Run a blog on Blogger? Maybe you've made it bilingual? Maybe you blog on different topics? Wondering what the URL for an RSS feed for, say, two labels looks like? Asking how to see all articles matching two tags (labels)? Or how to find a keyword under one or more labels? Many things are possible. I'll show a few examples below. Maybe this is even interesting for the planet Debian folks. I happen to blog mostly in English about Debian topics, but sometimes I also want to post something in German only (e.g. about German tax software). It is discouraged to put the latter on planet-debian; instead it can be published in the language-specific planet feed. So instead of adding new tags, one can easily combine two labels: the one for the language of the feed and the one for Debian-related posts (e.g. debian+english or debian+german). Therefore this post goes to the Debian planet.

Search for combined labels/tags

Say I want to view all postings related to the topics FOO and BAR. Then it is:

http://domain.tld/search/label/FOO+BAR or http://domain.tld/search?q=label:FOO+label:BAR

Be aware that labels are case sensitive and that more labels can be added. For example, to show all postings related to the topics debian and n54l and xbmc:

http://domain.tld/search/label/debian+n54l+xbmc
It is also possible to search for all posts related to the topics FOO or BAR:

http://domain.tld/search?q=label:FOO|label:BAR

Say, for example, you want to see all postings related to the topics logitech or toshiba; then it is: http://domain.tld/search?q=label:logitech|label:toshiba
Feed URLs

To get back to the first example, let's say the feed shall contain all posts related to the topics FOO and BAR. Then it is:

http://domain.tld/feeds/posts/default/-/FOO/BAR/ or http://domain.tld/feeds/posts/default?q=label:FOO+label:BAR

Respectively, to show a feed of all posts related to either of those topics, use:

http://domain.tld/feeds/posts/default?q=label:FOO|label:BAR

A feed of the example topics shown above then would be: http://domain.tld/feeds/posts/default?q=label:logitech|label:toshiba
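The URL patterns above are easy to compose mechanically. A small sketch (domain.tld is the same placeholder used in the examples; the q= form for OR is inferred from the examples above):

```shell
blog="domain.tld"

# AND over labels in the archive view: /search/label/A+B
printf 'http://%s/search/label/%s\n' "$blog" 'debian+english'

# AND over labels in the posts feed: /feeds/posts/default/-/A/B/
printf 'http://%s/feeds/posts/default/-/%s/%s/\n' "$blog" 'debian' 'english'

# OR over labels via the search query: /search?q=label:A|label:B
printf 'http://%s/search?q=label:%s|label:%s\n' "$blog" 'logitech' 'toshiba'
```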

Coming back to planet Debian, a multi-lingual planet contribution (if both language-specific planets existed) could then combine the feeds http://domain.tld/feeds/posts/default/-/debian/english/ and http://domain.tld/feeds/posts/default/-/debian/german/.
Advanced ...

There is much more possible. I'll just show two more examples. AND and OR can be combined ...

http://domain.tld/search?q=label:FOO+(label:logitech|label:toshiba)

... and a keyword search can be added too:

http://domain.tld/search?q=KEYWORD+(label:FOO+(label:logitech|label:toshiba))

Rhonda D'Vine: Zaz

11 February, 2015 - 16:43

It is time for some more music. This woman was introduced to me by a friend who actually understands what she sings about, because she sings in French. Regardless, her voice and feeling for music touched me deeply, so today I want to present to you Zaz (her homepage seems to be French only). As mentioned, she sings in French, and her connection with the Chanson genre has brought her (deserved) comparisons with the great Édith Piaf.

Without further ado, here are the songs:

  • La Fée: I think this was the first song I heard from her, and it caught me.
  • Je Veux: This live version of the song shows how much of a charming person she is and that she simply enjoys music. :)
  • Eblouie Par La Nuit: This song simply gives me goose bumps. Pure emotions.

I hope you can enjoy her as much as I do.


Mike Hommey: Announcing git-cinnabar 0.1.0

11 February, 2015 - 14:51

As you may or may not know, I have been working on this project for quite some time, but I really started the most critical parts only a couple of months ago. After having looked for (and chosen) a new name for what was a prototype project, it’s now time for a very first release.

So what is this all about?

Cinnabar is the common natural form in which mercury can be found on Earth. It is mercury sulfide, and its powder is used to make the vermillion pigment.

What does that have to do with git?

Hint: mercury.

Git-cinnabar is a git remote helper (you can think of it as a plugin) to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Numerous such tools already exist. Where git-cinnabar stands out is that it doesn’t use a local mercurial clone under the hood (unlike all the other existing tools), and it is close to an order of magnitude faster at cloning a repository like mozilla-central than the git-remote-hg that used to be shipped as a contrib to git.

I won’t claim it is free of problems and limitations, which is why it’s not a 1.0. I’m however confident enough in its state to make the first “official” release.

Get it on github.

If you’ve been using the prototype, you can actually continue to use that clone and update it. Github conveniently keeps things working after a rename. You can update the remote url if you feel like it, though.

If you are a Gecko developer, you can take a look at a possible workflow.

Enrico Zini: systemd-default-rescue

11 February, 2015 - 00:06

Four months ago I wrote this somewhere:

Seeing a DD saying "this new dbus stuff scares me" would make most debian users scared. Seeing a DD who has an idea of what is going on, and who can explain it, would be an interesting and exciting experience.

So, let's be exemplary, competent and patient. Or at least, competent. Some may like or not like the changes, but do we all understand what is going on? Will we all be able to support our friends and customers running jessie?

I confess that although I understand the need for it, I don't feel competent enough to support systemd-based machines right now.

So, are we maybe in need of help, cheat sheets, arsenals of one-liners, diagnostic tools?

Maybe a round of posts on -planet like "one debian package a day" but with new features that jessie will have, and how to understand them and take advantage of them?

That was four months ago. In the meantime, I did some work, and it got better for me.

Yesterday, however, I've seen an experienced Linux person frustrated because the shutdown function of the desktop was doing nothing whatsoever. Today I found John Goerzen's post on planet.

I felt like some more diagnostic tools were needed, so I spent the day making seat-inspect.

seat-inspect tries to make the status of the login/seat system visible, to help with understanding and troubleshooting.

The intent of running the code is to have an overview of the system status, both to see what the new facilities are about, and to figure out if there is something out of place.

The intent of reading the code is to have an idea of how to use these facilities: the code has been written to be straightforward and is annotated with relevant bits from the logind API documentation.

seat-inspect is not a finished tool, but a starting point. I put it on github hoping that people will fork it and add their own extra sanity checks and warnings, so that it can grow into a standard thing to run if a system acts weird.

As it is now, it should be able to issue warnings if some bits are missing for network-manager or shutdown functions to work correctly. I haven't really tested that, though, because I don't have a system at hand where they are currently not working fine.

Another nice thing about it is that running seat-inspect -v gives you a dump of what logind/consolekit think about your system. I found it an interesting way to explore the new functionality that we recently grew. The same can be done, and in more detail, with loginctl calls, but I lacked a summary.

After writing this I feel a bit more competent, probably enough to sit at somebody's computer and poke into loginctl bits. I highly recommend the experience.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.