Planet Debian

Planet Debian - http://planet.debian.org/

Giuseppe Iuculano: apt-get purge chromium

13 October, 2014 - 00:10

As you may know, I was the Debian chromium maintainer for many years. Some weeks ago, I decided to stop working on the chromium package because it is no longer possible to contribute and work within the team. In fact, Michael Gilbert started to work in a manner that prevents other people from helping to maintain the package.

Lately, the git repository was rarely updated, and my requests were systematically ignored. An up-to-date git repository is mandatory for a big package like Chromium: if you don’t push your changes, other people will waste their time.

Now, after deciding to stop maintaining Chromium, I also decided to purge it and switch to the Google Chrome binary. Why? Chromium is a pain: huge commits, undocumented in the changelog, that caused stupid bugs because no one could double-check them.
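
For anyone making the same switch, the steps are roughly these (a minimal sketch; Google's .deb configures its own apt repository on installation):

apt-get purge chromium
# download and install the google-chrome-stable .deb from google.com/chrome;
# it sets up Google's apt repository, roughly:
#   deb http://dl.google.com/linux/chrome/deb/ stable main
apt-get update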

At this moment we have an unusable [1] [2] [3] version of Chromium in testing, because the maintainer demoted grave bugs with the recommendation to rm -rf ./config/chromium … and nobody can understand the sense of the latest commits.

Mario Lang: soundCLI works again

12 October, 2014 - 21:30

I recently ranted about my frustration with GStreamer in a SoundCloud command-line client written in Ruby.

Well, it turns out that there was quite a bit of confusion going on. I still haven't figured out why my initial tries resulted in an error regarding $DISPLAY not being set. But now that I have played a bit with gst-launch-1.0, I can positively confirm that this was very likely not the fault of GStreamer.
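
A check along those lines can be done from a console without X (the stream URL below is just a placeholder):

unset DISPLAY
gst-launch-1.0 playbin uri=https://example.com/track.ogg   # any audio URL or file:// URI

If that plays, audio-only playbin clearly works without a display.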

The actual issue is that ruby-gstreamer assumes gstreamer-1.0, while soundCLI was still written against the gstreamer-0.10 API.

Since the ruby gst module doesn't have the Gstreamer API version in its name, and since Ruby is a dynamic language that only detects most errors at runtime, this led to all sorts of cascaded errors.

It turns out I only had to correct the use of query_position, query_duration, and get_state, as well as switching from playbin2 to playbin.

soundCLI is now running in the background and playing my SoundCloud stream. A pull request against soundCLI has also been opened.

On a somewhat related note, I found a GCC bug (ICE SIGSEGV) this weekend. My first one. It is related to C++11 bracketed initializers. Given that I have heard GCC 5.0 aims to remove the experimental nature of C++11 (and maybe also C++14), this seems like a good time to hit this one. I guess that means I should finally isolate the C++ regex (runtime) segfault I recently stumbled across.

Guido Günther: Testing a NetworkManager VPN plugin password dialog

12 October, 2014 - 19:55

Testing the password dialog of a NetworkManager VPN plugin is as simple as:

echo -e 'DATA_KEY=foo\nDATA_VAL=bar\nDONE\nQUIT\n' | ./auth-dialog/nm-iodine-auth-dialog -n test -u $(uuid) -i

The above is for the iodine plugin when run from the built source tree. This makes it possible to test these dialogs even if you haven't seen them in ages, since GNOME Shell uses the external UI mode to query for the password.

This blog is flattr enabled.

John Goerzen: First impressions of systemd, and they’re not good

12 October, 2014 - 11:18

Well, I finally bit the bullet. My laptop, which runs jessie, got dist-upgraded for the first time in a few months. My brightness keys stopped working, and it would no longer suspend to RAM when the lid was closed. After chasing things down from XFCE to policykit, it eventually appeared that major parts of the desktop suddenly break without systemd in jessie. Sigh.

So apt-get install systemd-sysv (and watch sysvinit-core get uninstalled) and reboot.

Only, my system doesn’t come back up. In fact, over several hours of trying to make it boot with systemd, it failed in numerous spectacular and hilarious (or, would be hilarious if my laptop would boot) ways. I had text obliterating the cryptsetup password prompt almost every time. Sometimes there were two processes trying to read a cryptsetup password at once. Sometimes a process was trying to read that while another one was trying to read an emergency shell password. Many times it tried to write to /var and /tmp before they were mounted, meaning they *wouldn’t* mount because there was stuff there.

I noticed it not doing much with ZFS, complaining of a dependency loop between zfs-mount and $local-fs. I fixed that, but it still wouldn’t boot. In fact, it simply hung after writing something about wall passwords.

I’ve dug into systemd, finding a “unit generator for fstab” (whatever the heck that is; it’s not at all made clear by systemd-fstab-generator(8)).

In some cases, there’s info in journalctl, but if I can’t even get to an emergency mode prompt, the practice of hiding all stdout and stderr output is not all that pleasant.

I remember thinking “what’s all the flaming about?” systemd wasn’t my first choice (I always thought “if it ain’t broke, don’t fix it” about sysvinit), but basically ignored the thousands of messages, thinking whatever happens, jessie will still boot.

Now I’m not so sure. Even if the thing boots out of the box, it seems like the boot process with systemd is colossally fragile.

For now, at least zfs rollback can undo upgrades to 800 packages in about 2 seconds. But I can’t stay at some early jessie checkpoint forever.
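
That safety net, for the record, is nothing more than (dataset name illustrative):

zfs snapshot rpool/ROOT/debian@pre-upgrade   # taken before the dist-upgrade
zfs rollback rpool/ROOT/debian@pre-upgrade   # the near-instant undo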

Have we made a vast mistake that can’t be undone? (If things like even *brightness keys* now require systemd…)

Dirk Eddelbuettel: RPushbullet 0.1.0 with a lot more awesome

11 October, 2014 - 11:29

A new release 0.1.0 of the RPushbullet package (interfacing the neat Pushbullet service) landed on CRAN today.

It brings a number of goodies relative to the first release 0.0.2 of a few months ago:

  • pushing of files is now supported thanks to a nice pull request by Mike Birdgeneau
  • a default device can be designated in the ~/.rpushbullet.json file or options
  • initialization has been rewritten to use recipients, which can be indices, device names or, if missing entirely, the (new) default device
  • alternatively, email is supported as another recipient option, in which case the Pushbullet service will send an email to the given address
  • pbGetDevices() now returns a proper S3 object with corresponding print() and summary() methods
  • the documentation regarding package initialization, and setting of key, devices, etc. has been expanded
  • more examples have been added to the documentation
  • various minor cleanups, fixes, and corrections throughout

There is a whole boat load of more wickedness in the Pushbullet API so if anybody feels compelled to add it, fire off pull requests at GitHub.

More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Matthias Klumpp: Listaller + Glick: Some new ideas

10 October, 2014 - 20:59

As you might know, due to invasive changes in PackageKit, I am currently rewriting the 3rd-party application installer Listaller. Since I am not the only one looking at the 3rd-party app-installation issue (there is a larger effort going on at GNOME, based on Lennart's ideas), it makes sense to redesign some concepts of Listaller.

Currently, dependencies and applications are installed into directories in /opt, and Listaller contains some logic to make applications find dependencies, and to talk to the package manager to install missing things. This has some drawbacks, like the need to install an application before using it, the need for applications to be relocatable, and application-installations being non-atomic.

Glick2

There is/was another 3rd-party app installer approach on the GNOME side, by Alexander Larsson, called Glick2. Glick uses application bundles (do you remember Klik from back in the days?) mounted via FUSE. This allows some neat features, like atomic installations and software upgrades, no need for relocatable apps and no need to install the application.

However, it also has disadvantages. Quoting the introduction document for Glick2:

“Bundling isn’t perfect, there are some well known disadvantages. Increased disk footprint is one, although current storage space size makes this not such a big issues. Another problem is with security (or bugfix) updates in bundled libraries. With bundled libraries its much harder to upgrade a single library, as you need to find and upgrade each app that uses it. Better tooling and upgrader support can lessen the impact of this, but not completely eliminate it.”

This is what Listaller does better, since it was designed with great effort to avoid duplication of code.

Also, Glick currently doesn’t have support for updates and software repositories, which Listaller had.

Combining Listaller and Glick ideas

So, why not combine the ideas of Listaller and Glick? In order to have Glick share resources, the system needs to know which shared resources are available. This is not possible if there is one huge Glick bundle containing all of the application’s dependencies. So I modularized Glick bundles to contain just one software component, which is e.g. GTK+ or Qt, GStreamer or could even be a larger framework (e.g. “GNOME 3.14 Platform”). These components are identified using AppStream XML metadata, which allows them to be installed from the distributor’s software repositories as well, if that is wanted.

If you now want to deploy your application, you first create a Glick bundle for it. Then, in a second step, you pack your application bundle together with its dependencies in one larger tarball, which can also be GPG-signed and can contain additional metadata.

The resulting “metabundle” will look like this:

[diagram: the application’s own bundle packed together with the bundles of its dependencies in one signed archive]

This doesn’t look like we share resources yet, right? The dependencies are still bundled with the application requiring them. The trick lies in the “installation” step: while the application above can be executed right away without installing it, there will also be an option to install it. For the user, this will mean that the application shows up in GNOME Shell’s overview or KDE’s Plasma launcher, gets properly registered with mimetypes and is – if installed for all users – available system-wide.

Technically, this will mean that the application’s main bundle is extracted and moved to a special location on the file system, as are the dependency bundles. If bundles already exist, they will not be installed again, and the new application will simply use the existing software. Since the bundles contain information about their dependencies, the system is able to determine which software is needed and which can simply be deleted from the installation directories.

If the application is started now, the bundles are combined and mounted, so the application can see the libraries it depends on.

Additionally, this concept allows secure updates of applications and shared resources. The bundle metadata contains a URL which points to a bundle repository. If new versions are released, the system’s auto-updater can automatically pick these up and install them – this means e.g. the Qt bundle will receive security updates, even if the developer who shipped it with his/her app didn’t think of updating it.

Conclusion

So far, no productive code exists for this – I just have a proof-of-concept here. But I pretty much like the idea, and I am thinking about going further in that direction, since it allows deploying applications on the Linux desktop as well as deploying software on servers in a way which plays nice with the native package manager, and which does not duplicate much code (less risk of having not-updated libraries with security flaws around).

However, there might be issues I haven’t thought about yet. Also, it makes sense to look at GNOME to see how the whole “3rd-party app deployment” issue develops. In case I go further with Listaller-NEXT, it is highly likely that it will make use of the ideas sketched above (comments and feedback are more than welcome!).

Mario Lang: GStreamer and the command-line?

10 October, 2014 - 17:00

I was recently looking for a command-line client for SoundCloud. soundCLI on GitHub appeared to be what I wanted. But wait, there is a problem with its implementation.

soundCLI uses gstreamer's playbin2 to play audio data. But that apparently requires $DISPLAY to be set.

So no, soundCLI is not a command-line client. It is a command-line client for X11 users.

Ahem.

A bit of research on Stack Overflow and related sites did not tell me how to modify the playbin2 usage such that it does not require X11 while only playing AUDIO data.

What the HECK is going on here? Are the graphical people trying to silently take over the world? Is Linux becoming the new Windows? The distinction between CLI and GUI has become more and more blurry in recent years. I fear for my beloved platform.

If you know how to patch soundCLI to not require X11, please let me know. My current work-around is to replace all GStreamer usage with a simple "system" call to vlc. That works, but it does not give me comment display (since soundCLI no longer knows the playback position) and it hangs after every track, requiring me to enter "quit" manually at the VLC prompt. I really would have liked to use mplayer2 for this, but alas, mplayer2 does not support https. Oh well, why would it need to, in this day and age where everyone seems to switch to https by default.

Martin Pitt: Running autopkgtests in the cloud

10 October, 2014 - 15:25

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
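
For reference, the usual OpenStack credentials look like this and are typically sourced from an rc file (all values below are placeholders):

  $ cat ~/.novarc
  export OS_AUTH_URL=https://keystone.example.com:5000/v2.0   # placeholder endpoint
  export OS_USERNAME=myuser
  export OS_PASSWORD=secret
  export OS_TENANT_NAME=myproject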

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.
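
Spelled out, the steps such a script automates come down to something like this (instance and image names are illustrative):

  $ nova boot --image "$PRISTINE_IMAGE" --flavor m1.small --key-name mykey adt-image-build
  $ # once the instance is up, prepare it as a testbed:
  $ ssh ubuntu@$INSTANCE_IP sudo /usr/share/autopkgtest/adt-setup-vm
  $ nova image-create adt-image-build adt-utopic-amd64
  $ nova delete adt-image-build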

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above, using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Ingo Juergensmann: Buildd.Net: update-buildd.net v0.99 released

10 October, 2014 - 03:29

Buildd.Net offers a buildd-centric view of autobuilder networks, such as formerly Debian's autobuilder network and nowadays the autobuilder network of debian-ports.org. The policy of debian-ports.org requires the GPG key the buildd uses to sign packages for upload to be valid for 1 year. Buildd admins are usually lazy people; at least they are running a buildd instead of building all those packages manually. Being a lazy buildd admin, it might happen that you forget to renew your GPG key, which will render your buildd unable to upload newly built packages.

When participating in Buildd.Net you need to run update-buildd.net, a small script that transmits some statistical data about your package building. I have now added a GPG key expiry check to that script, which will warn the buildd admin by mail and with a notice on the Buildd.Net arch status page, such as the one for m68k. So either your client updates automatically to the new version, or you can download the script yourself.
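
Such a check can be done with plain gpg; a minimal sketch of the idea (the key ID is a placeholder, and the actual script may do this differently):

# field 7 of the pub record in colon output is the expiry date (epoch seconds)
expiry=$(gpg --with-colons --list-keys "$KEYID" | awk -F: '/^pub/ {print $7; exit}')
# GNU date assumed for the relative date
[ -n "$expiry" ] && [ "$expiry" -lt "$(date -d '30 days' +%s)" ] && echo "GPG key expires within 30 days"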


Lisandro Damián Nicanor Pérez Meyer: Qt 5.3.2 in Wheezy-backports: just a few hours away

10 October, 2014 - 02:50
In more or less 24 hours most of Qt 5.3.2 will be available as a Wheezy backport. That means that if you are using Debian stable you don't need to wait for Jessie: just wait a few hours, add the wheezy-backports repo to your sources.list and get it :)
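
The usual backports recipe applies (the mirror and the package picked here are just examples):

# /etc/apt/sources.list — example mirror
deb http://http.debian.net/debian wheezy-backports main

apt-get update
apt-get -t wheezy-backports install qtbase5-dev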

The rest of Qt 5 will arrive soon.

This is the same version that will be shipped in Jessie, so whatever you develop with it will work with the next Debian stable release :)

Don't forget: you better start porting your Qt4 apps to Qt5!

Chris Lamb: London—Paris—London 2014

10 October, 2014 - 01:19

I've wanted to ride to Paris for a few months now but was put off by the hassle of taking a bicycle on the Eurostar, as well as having a somewhat philosophical and aesthetic objection to taking a bike on a train in the first place. After all, if one is already in possession of a mode of transport...

My itinerary was straightforward:

  • Friday 12h00: London → Newhaven
  • Friday 23h00: Newhaven → Dieppe (ferry)
  • Saturday 04h00: Dieppe → Paris
  • Saturday 23h00: (sleep)
  • Sunday 07h00: Paris → Dieppe
  • Sunday 18h00: Dieppe → Newhaven (ferry)
  • Sunday 21h00: Newhaven → Peacehaven
  • Sunday 23h00: (sleep)
  • Monday 07h00: Peacehaven → London

Packing list
  • Ferry ticket (unnecessary in the end)
  • Passport
  • Credit card
  • USB A male → mini A male (charges phone, battery pack & front light)
  • USB A male → mini B male (for charging or connecting to Edge 800)
  • USB mini A male → OTG A female (for Edge 800 uploads via phone)
  • Waterproof pocket
  • Sleeping mask for ferry (probably unnecessary)
  • Battery pack

Not pictured:

  • Castelli Gabba Windstopper short-sleeve jersey
  • Castelli Velocissimo bib shorts
  • Castelli Nanoflex arm warmers
  • Castelli Squadra rain jacket
  • Garmin Edge 800
  • Phone
  • Front light: Lezyne Macro Drive
  • Rear lights: Knog Gekko (on bike), Knog Frog (on helmet)
  • Inner tubes (X2), Lezyne multitool, tire levers, hand pump

Day 1: London → Newhaven

Tower Bridge.

Many attempt to go from Tower Bridge → Eiffel Tower (or Marble Arch → Arc de Triomphe) in less than 24 hours. This would have been quite easy if I had left a couple of hours later.

Fanny's Farm Shop, Merstham, Surrey.

Plumpton, East Sussex.

West Pier, Newhaven.

Leaving Newhaven on the 23h00 ferry.


Day 2: Dieppe → Paris

Beauvoir-en-Lyons, Haute-Normandie.

Sérifontaine, Picardie.

La tour Eiffel, Paris.

Champ de Mars, Paris.

Pont de Grenelle, Paris.


Day 3: Paris → Dieppe

Cormeilles-en-Vexin, Île-de-France.

Gisors, Haute-Normandie.

Paris-Brest, Gisors, Haute-Normandie.

Wikipedia: This pastry was created in 1910 to commemorate the Paris–Brest bicycle race begun in 1891. Its circular shape is representative of a wheel. It became popular with riders on the Paris–Brest cycle race, partly because of its energizing high caloric value, and is now found in pâtisseries all over France.

Gournay-en-Bray, Haute-Normandie.

Début de l'Avenue Verte, Forges-les-Eaux, Haute-Normandie.

Mesnières-en-Bray, Haute-Normandie.

Dieppe, Haute-Normandie.

«La Mancha».


Day 4: Peacehaven → London

Peacehaven, East Sussex.

Highbrook, West Sussex.

London weather.


Summary
Distance: 588.17 km
Pedal turns: ~105,795

My only non-obvious tips would be to buy a disposable blanket in the Newhaven Co-Op to help you sleep on the ferry and, as the food on the ferry is good enough, to get to the terminal only one hour before departure, avoiding time on your feet in unpicturesque Newhaven.

In terms of equipment, I would bring another light for the 4AM start on «L'Avenue Verte» if only as a backup and I would have checked I could arrive at my Parisian Airbnb earlier in the day - I had to hang around for five hours in the heat before I could have a shower, properly relax, etc.

I had been warned not to rely on being able to obtain enough water en route on Sunday but whilst most shops were indeed shut I saw a bustling tabac or boulangerie at least once every 20km so one would never be truly stuck.

Route-wise, the suburbs of London and Paris are both equally dismal and unmotivating, and there is about 50km of rather uninspiring and exposed riding on the D915.

However, «L'Avenue Verte» is fantastic even in the pitch-black and the entire trip was worth it simply for the silent and beautiful Normandy sunrise. I will be back.

Jan Wagner: Updated Monitoring Plugins Version is coming soon

9 October, 2014 - 17:46

Three months ago version 2.0 of Monitoring Plugins was released. Since then many changes were integrated. You can find a quick overview in the upstream NEWS.

Now it's time to move forward, and a new release is expected soon. It would be very welcome if you could give the latest source snapshot a try. You can also give the Debian packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/. Right after the stable release, the new packages will be uploaded into Debian unstable. All the packaging changes can be observed in the changelog.

Feedback is very appreciated via Issue tracker or the Monitoring Plugins Development Mailinglist.

Update: The official call for testing is available.

Jonathan Dowland: Ansible

9 October, 2014 - 04:12

I've just recently built the large bulk of VMs that we use for first semester teaching. This year that was 112. We use the same general approach for these as our others: get a generic base image up and running, with just enough configuration complete so a puppet client starts up; get it talking to our master; let puppet take it from there.

There are pragmatic balances between how much we do in the kickstart versus how much we do in puppet, but also between when we build a new VM from scratch versus when we clone an existing image, and how much specialisation we do in the clone image.

Unfortunately this year we ended up in a situation where our clone image wouldn't talk to our puppet master out of the box, due to some changes we'd made to our master setup since the clone image was prepared. We didn't really have enough time to re-clone the entire set of VMs from a fixed base image, and instead needed to fix them whilst up. However, we couldn't rely on puppet to do that, since they wouldn't talk to the puppet master.

We needed to manually reset the puppet client state per VM and then re-establish a trust relationship with the correct master (which is not the default master hostname in our environment anymore). Luckily, we deploy a local account with a known passphrase via the kickstart, which also has sudo access, as an interim measure before puppet strips it back out again and sets up proper LDAP and Kerberos authentication. So we can at least get into the boxes. However logging into 112 VMs by hand is not a particularly pleasant task.
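
What needed doing on each box boils down to something like this (paths are those of puppet 3.x, and the master hostname is a placeholder):

sudo rm -rf /var/lib/puppet/ssl        # throw away the old client certificate state
sudo puppet agent --onetime --no-daemonize --verbose \
    --server puppetmaster.example.org  # placeholder master hostname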

In the past I might have tried to achieve this using something like clusterssh but this year I decided to give ansible a try instead.

Ansible started life, I believe, as a tool that would let you run arbitrary commands on remote hosts, including navigating ssh and sudo as required, without needing any agent software on the remote end. It has since grown into an enterprise product in its own right, seemingly in competition with puppet, chef, cfengine et al.

Looking at the Ansible website now, I'd be rather put off by just how "enterprisey" it has become - much as I am by the puppet website, if I'm honest - but if you persevere past the webinars, testimonials, etc. etc., you can find your way to the documentation, and running an arbitrary command is as simple as:

  • defining a list of hosts
  • running an ansible command line referencing some or all of those hosts

The hosts file format is simple:

[somehosts]
host1
host2
...
[otherhosts]
host3

The command line can be a little bit more complex, especially if you need to use one username for ssh, another for sudo, and you don't want to use ssh key auth:

ansible -i ./hostsfile somehosts -k -u someuser \
    --sudo -K -a 'puppet agent --onetime --no-daemonize --verbose'

"all" would work where I've used somehosts in the example above.

So there you go: using one configuration management system to bootstrap another. I'm sure I've reserved myself a special place in hell for this.

Steve Kemp: Writing your own e-books is useful

9 October, 2014 - 03:03

Before our recent trip to Poland I took the time to create my own e-book, containing the names/addresses of people to whom we wanted to send postcards.

Authoring e-books is simple, and this was a genuinely useful application. (Ordinarily I'd have my contacts on my phone, but I deliberately left it at home ..)

I did mean to copy and paste some notes from wikipedia about transport, tourist destinations, etc, into a brief guide. But I forgot.

In other news, the toy virtual machine I hacked together got a decent series of updates, allowing you to embed it and add your own custom opcode(s) easily. That was neat, and fell out naturally from the switch to using function pointers for the opcode implementations.

Elena 'valhalla' Grandi: New gpg subkey

9 October, 2014 - 02:42
The GPG subkey I keep for daily use was going to expire, and this time I decided to create a new one instead of changing the expiration date.

Doing so I found out that gnupg does not support importing just a private subkey for a key it already has (on IRC I've heard that there may be more information on this on the gpg-users mailing list), so I've written a few notes on what I had to do on my website, so that I can remember them next year.

The short version is:

* Create your subkey (in the full keyring, the one with the private master key)
* Export every subkey (including the expired ones, if you want to keep them available), but not the master key
* (Copy the exported keys from the offline computer to the online one)
* Delete your private key from your regular-use keyring
* Import back the private keys you exported before
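
In gpg commands, that recipe looks roughly like this (the key ID is a placeholder):

# on the offline machine, in the full keyring:
gpg --edit-key 0xDEADBEEF                          # placeholder key ID; addkey ... save
gpg --export-secret-subkeys 0xDEADBEEF > subkeys.sec
# on the online machine:
gpg --delete-secret-keys 0xDEADBEEF
gpg --import subkeys.sec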

Ian Donnelly: A Comparison of Elektra Merge and Git Merge

9 October, 2014 - 02:02

Hi everybody,

We have gotten some inquiries about how Elektra’s merge functionality compares to the merge functionality built into git: git merge-file. I am glad to say that Elektra outperforms git’s merge functionality in the same way it outperforms diff3 when applied to configuration files. Obviously, git’s merge functionality does a much better job with source code, as that is not the goal of Elektra. In that previous example I showed that diff3 is line-based, whereas Elektra is not (unless you mount a file using the line plugin). The example I used before, and will go over again, uses smb.conf and in-line comments.

Many of our storage plug-ins understand the difference between comments and actual configuration data. So if a configuration file has an inline comment like so:
max log size = 10000 ; Controls the size of the log file (in KiB)
we can compare the actual key-value pairs between versions (max log size = 10000) and deal with the comments separately.

As a result, if we have a base:
max log size = 1000 ; Size in KiB

Ours:
max log size = 10000 ; Size in KiB

Theirs:
max log size = 1000 ; Controls the size of the log file (in KiB)

The result using elektra-merge would be:
max log size = 10000 ; Controls the size of the log file (in KiB)

Just like diff3, git merge-file can only compare these lines as lines, and thus there is a conflict. When running git merge-file smb.conf.ours smb.conf.base smb.conf.theirs we get the following output showing a conflict:
<<<<<<< smb.conf.ours
max log size = 10000 ; Size in KiB
=======
max log size = 1000 ; Controls the size of the log file (in KiB)
>>>>>>> smb.conf.theirs
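
For comparison, the same three-way merge through Elektra could be invoked along these lines, assuming the three versions are first mounted into the key database (the mountpoints and the choice of the ini plugin are illustrative; check the kdb documentation for the exact syntax):

# mount the three versions (plugin choice illustrative)
kdb mount smb.conf.base system/sw/samba/base ini
kdb mount smb.conf.ours system/sw/samba/ours ini
kdb mount smb.conf.theirs system/sw/samba/theirs ini
# three-way merge: ours, theirs, base, result
kdb merge system/sw/samba/ours system/sw/samba/theirs system/sw/samba/base system/sw/samba/result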

This really shows the strength of Elektra’s plugin system and why merging is an obvious use-case for Elektra. I hope this example makes it clear why using Elektra’s merge functionality is advantageous over git’s. Once again I would like to stress the importance of quality storage plug-ins for Elektra: the more quality plugins we have, the more powerful Elektra can be. If you are interested in plugins and would like to help us add functionality to Elektra by creating a new plug-in, be sure to read my basic tutorial on how to do so.

Sincerely,
Ian Donnelly

EvolvisForge blog: PSA: #shellshock still unfixed except in Debian unstable

8 October, 2014 - 21:57

I just installed, for work, Hanno Böck’s bashcheck utility on our monitoring system, and watched all¹ systems go blue.

① All but two. One is not executing remote scripts from the monitoring for security reasons, the other is my desktop which runs Debian “sid” (unstable).

This means that all those distributions still have unfixed #shellshock bugs.
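
bashcheck itself is a small standalone shell script, so you can fetch it, read it, and then run it as an unprivileged user (repository URL as of this writing):

git clone https://github.com/hannob/bashcheck.git
cd bashcheck
./bashcheck    # run unprivileged; inspect the script first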

  • lenny (with Md’s packages): bash (3.2-4.2) = 3.2.53(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • squeeze (LTS): bash (4.1-3+deb6u2) = 4.1.5(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • wheezy (stable-security): bash (4.2+dfsg-0.1+deb7u3) = 4.2.37(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • jessie (testing): bash (4.3-10) = 4.3.27(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • sid (unstable): bash (4.3-11) = 4.3.30(1)-release
    • none
  • CentOS 5.5: bash-3.2-24.el5 = 3.2.25(1)-release
    • extra-vulnerable (function import active)
    • CVE-2014-6271 (original shellshock)
    • CVE-2014-7169 (taviso bug)
    • CVE-2014-7186 (redir_stack bug)
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.6: bash-3.2-24.el5 = 3.2.25(1)-release
    • extra-vulnerable (function import active)
    • CVE-2014-6271 (original shellshock)
    • CVE-2014-7169 (taviso bug)
    • CVE-2014-7186 (redir_stack bug)
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.8: bash-3.2-33.el5_10.4 = 3.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.9: bash-3.2-33.el5_10.4 = 3.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.10: bash-3.2-33.el5_10.4 = 3.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 6.4: bash-4.1.2-15.el6_5.2.x86_64 = 4.1.2(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 6.5: bash-4.1.2-15.el6_5.2.x86_64 = 4.1.2(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • lucid (10.04): bash (4.1-2ubuntu3.4) = 4.1.5(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • precise (12.04): bash (4.2-2ubuntu2.5) = 4.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • quantal (12.10): bash (4.2-5ubuntu1) = 4.2.37(1)-release
    • extra-vulnerable (function import active)
    • CVE-2014-6271 (original shellshock)
    • CVE-2014-7169 (taviso bug)
    • CVE-2014-7186 (redir_stack bug)
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • trusty (14.04): bash (4.3-7ubuntu1.4) = 4.3.11(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)

I don’t know if/when all distributions will have patched their packages ☹ but thought you’d want to know the hysteria isn’t over yet…

… however, I hope you were not stupid enough to follow the advice of this site, which suggests you download some random file over the ’net and execute it with superuser permissions, unchecked. (I think the Ruby people were the first to spread this extremely insecure, stupid and reprehensible technique.)

Thanks to tarent for letting me do this work during $dayjob time!

Michal Čihař: Wammu 0.37

8 October, 2014 - 18:00

It has been more than three years since the last release of Wammu, and I've decided it's time to push the changes made in the Git repos out to users. So here comes Wammu 0.37.

The list of changes is not really huge, but in total it amounts to 1470 commits in git (most of them translations):

  • Translation updates (Indonesian, Spanish, ...).
  • Add export of contact to XML.
  • Add Get all menu option.
  • Added appdata metadata.

I will not make any promises for future releases (if there will be any) as the tool is not really in active development.


Thorsten Glaser: mksh R50d released

7 October, 2014 - 23:54

The last MirBSD Korn Shell update broke update-initramfs because I accidentally introduced a regression in field splitting while fixing other bugs – sorry!

mksh R50d was just released to fix that, and a small NULL pointer dereference found by Goodbox on IRC. Thanks to my employer tarent for a bit of time to work on it.
