Planet Debian

Planet Debian - http://planet.debian.org/

Ian Donnelly: How-To: Integrate elektra-merge Into a Debian Package

17 August, 2014 - 02:43

Hi Everybody,

So I already explained that we decided to go in a new direction and patch ucf to allow automatic configuration file merging with any custom command. Today I wanted to explain how to use ucf’s new `--threeway-merge-command` functionality in conjunction with Elektra, utilizing Elektra’s powerful tools to allow automatic three-way merges of your package’s configuration during upgrades in a way that is more reliable than a diff3 merge. This guide assumes that you are already familiar with ucf and are just trying to implement the --threeway-merge-command option using Elektra.

The addition of the --threeway-merge-command option was part of my Google Summer of Code project. This option takes the form:

--threeway-merge-command command [New File] [Destination]

Where command is the command you would like to use for the merge. New File and Destination are the same as always.

We added a new script to Elektra called elektra-merge for use with this new option in ucf. This script acts as a liaison between ucf and Elektra, allowing a regular ucf command to run a kdb merge even though ucf commands only pass New File and Destination whereas kdb merge requires ourpath, theirpath, basepath, and resultpath. Since ucf already performs a three-way merge, it keeps track of all the necessary files to do so, even though it only takes in New File and Destination.

In order to use elektra-merge, the current configuration file must be mounted to KDB to serve as ours in the merge. The script automatically mounts theirs, base, and result using the kdb remount command in order to use the same backend as ours (since all versions of the same file should use the same backend anyway), and this way users don’t need to worry about specifying the backend for each version of the file. Then the script attempts a merge on the newly mounted KeySets. Once this is finished, whether successful or not, the script finishes by unmounting all but our copy of the file to clean up KDB. Then, if the merge was successful, ucf will replace ours with the result, providing the package with an automatically merged configuration which will also be updated in KDB itself.
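To make that flow concrete, here is a rough sketch of the sequence of kdb commands described above (the mountpoint names and file paths are hypothetical; the real elektra-merge derives them from the arguments ucf passes and from where ours is already mounted):

# ours is assumed to be mounted at system/app already
kdb remount /etc/app.conf.theirs system/merge/theirs system/app    # reuse ours' backend
kdb remount /etc/app.conf.base   system/merge/base   system/app
kdb remount /etc/app.conf.result system/merge/result system/app
kdb merge system/app system/merge/theirs system/merge/base system/merge/result
kdb umount system/merge/theirs    # cleanup: unmount everything but ours
kdb umount system/merge/base
kdb umount system/merge/result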

Additionally, we added two other scripts, elektra-mount and elektra-umount, which act as simple wrappers for kdb mount and kdb umount. They work identically but are more script-friendly.

The full command to use elektra-merge to perform a three-way merge on a file managed by ucf is:

ucf --three-way --threeway-merge-command elektra-merge [New File] [Destination]

That’s it! As described above, elektra-merge is smart enough to run the whole merge from the information in that command, and it utilizes the new kdb remount command to do so.

Integrating elektra-merge into a package that already uses ucf is very easy! In postinst you should have a line similar to:

ucf [New File] [Destination]

or perhaps:

ucf --three-way [New File] [Destination]

All you must do is mount the config file to Elektra in postinst, when it is run with the configure option:

elektra-mount [New File] [Mounting Destination] [Backend]

Next, you must update the line containing ucf with the options --three-way and --threeway-merge-command like so:

ucf --three-way --threeway-merge-command elektra-merge [New File] [Destination]

Then, in your postrm script, during a purge, you must unmount the config file before deleting it:

elektra-umount [Name]

That’s it! With those small changes you can use Elektra to perform automatic three-way merges on any files that your package uses ucf to handle!
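For reference, the relevant maintainer-script fragments might end up looking something like this (the package name, mountpoint, and backend here are placeholders, not taken from a real package):

# postinst fragment: mount the config file, then let ucf merge via Elektra
if [ "$1" = "configure" ]; then
    elektra-mount "$NEWFILE" system/myapp ini
    ucf --three-way --threeway-merge-command elektra-merge "$NEWFILE" "$DEST"
fi

# postrm fragment: unmount the file from KDB before it is purged
if [ "$1" = "purge" ]; then
    elektra-umount system/myapp
    ucf --purge "$DEST"
fi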

I just wanted to show a quick example. Below is a diff representing the changes we made to the samba-common package in order to allow automatic configuration merging for smb.conf using Elektra. We chose this package because it already uses ucf to handle smb.conf, but it frequently requires users to manually merge changes across versions. Here is the patch showing what we changed:

diff samba_orig/samba-3.6.6/debian/samba-common.postinst samba/samba-3.6.6/debian/samba-common.postinst
92c92,93
< ucf --three-way --debconf-ok "$NEWFILE" "$CONFIG"
---
> elektra-mount "$CONFIG" system/samba/smb ini
> ucf --three-way --threeway-merge-command elektra-merge --debconf-ok "$NEWFILE" "$CONFIG"
Only in samba/samba-3.6.6/debian/: samba-common.postinst~
diff samba_orig/samba-3.6.6/debian/samba-common.postrm samba/samba-3.6.6/debian/samba-common.postrm
4a5
> elektra-umount system/samba/smb

As you can see, all we had to do was add the line to mount smb.conf during install, update the ucf command to include the new --threeway-merge-command option, and unmount system/samba/smb during a purge. It really is that easy!

Sincerely,
Ian S. Donnelly

Daniel Pocock: WebRTC: what works, what doesn't

16 August, 2014 - 23:49

With the release of the latest rtc.debian.org portal update, there are numerous improvements but there are still some known problems too.

The good news is that if you have a web browser, you can probably make successful WebRTC calls from one developer to another without any need to install or configure anything else.

The bad news is that not every permutation of browser and client will work. Here I list some of the limitations so people won't waste time on them.

The SIP proxy supports any SIP client

Just about any SIP client can connect to the proxy server and register. This does not mean that every client will be able to call each other. Generally speaking, modern WebRTC clients will be able to call each other. Standalone softphones or deskphones will call each other. Calling from a normal softphone or deskphone to a WebRTC browser, or vice-versa, will not work though.

Some softphones, like Jitsi, have implemented most of the protocols to communicate with WebRTC but they are yet to put the finishing touches on it.

Chat should just work for any combination of clients

The new WebRTC frontend supports SIP chat messaging.

There is no presence or buddy list support yet.

You can even use a tool like sipsak to accept or send SIP chats from a script.
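For example, firing off a chat message from a script might look something like this (the addresses are placeholders; see the sipsak man page for registration and authentication options):

sipsak -M -B "Build finished OK" -c sip:buildbot@example.org -s sip:developer@rtc.debian.org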

Chat works for any client new or old. Although a WebRTC user can't call a softphone user, for example, they can send chats to each other.

WebRTC support in Iceweasel 24 on wheezy systems is very limited

On a wheezy system, the most recent Iceweasel update is version 24.7.

This version supports most of WebRTC but does not support TURN relay servers to help you out of a NAT network.

If you call between two wheezy machines on the same NAT network it will work. If the call has to traverse a NAT boundary it will not work.

Wheezy users need to either download a newer Firefox version or use Chromium.

JsSIP doesn't handle ICE elegantly

Interactive Connectivity Establishment (ICE, RFC 5245) is meant to prevent calls from being answered with missing audio or video streams.

ICE is a mandatory part of WebRTC.

When correctly implemented, the JavaScript application will exchange ICE candidates and run the connectivity checks before alerting anybody that a call is ringing. If the checks fail (for example, with Iceweasel 24 and NAT), the caller should be told the call can't be made and the callee shouldn't be disturbed at all.

JsSIP is not operating in this manner though. It alerts the callee before telling the browser to start the connectivity checks. Then it even waits for the callee to answer. Only then does it tell the browser to start checking connectivity. This is not a fault with the ICE standard or the browser, it is an implementation problem.

Therefore, until this is fully fixed, people may still see some calls that appear to answer but don't have any media stream. After this is fixed, such calls really will be a thing of the past.

Debian RTC testing is more than just a pipe dream

Although these glitches are not ideal for end users, there is a clear roadmap to resolve them.

There is also a growing collection of workarounds to minimize the inconvenience. For example, JSCommunicator has a hack to detect when somebody is using Iceweasel 24 and just refuse to make the call. See the option require_relay_candidate in the config.js settings file. This also ensures that it will refuse to make a call if the TURN server is offline. Better to give the user a clear error than a call without any audio or video stream.

require_relay_candidate is enabled on freephonebox.net because it makes life easier for end users. It is not enabled on rtc.debian.org because some DDs may be willing to tolerate this issue when testing on a local LAN.

Matthias Klumpp: AppStream/DEP-11 Debian progress

16 August, 2014 - 22:50

There hasn’t been a progress report on DEP-11 for some time, but that doesn’t mean no work has been going on.

DEP-11 is Debian’s implementation of AppStream, as well as an effort to enhance the metadata available about software in Debian. While AppStream was initially only about applications, DEP-11 was designed with a larger scope, to collect data about libraries, binaries and things like Python modules. Now, since AppStream 0.6, DEP-11 and AppStream have essentially the same scope, with the difference of DEP-11 metadata being described in YAML, while official AppStream data is XML. That was due to a request by our ftpmasters team, which doesn’t like XML (which is also not used anywhere in Debian, as opposed to YAML). But this doesn’t mean that people will have to deal with the YAML file format: The libappstream library will just take DEP-11 data as another data source for its Xapian database, allowing anything using libappstream to access that data just like the XML stuff. Richard’s libappstream-glib will also receive support for the DEP-11 format soon, filling its in-memory data cache and enabling the use of GNOME-Software on Debian.

So, what has been done so far? Over the past months, my Google Summer of Code student, Abhishek Bhattacharjee, was working hard to integrate DEP-11 support into dak, the Debian Archive Kit, which maintains the whole Debian archive. The result will be an additional metadata table in our internal Postgres database, storing detailed information about the software available in a Debian package, as well as “Components-<arch>.yml.gz” files in the Debian repositories. Dak will also produce an application icon cache and a screenshots repository. During the SoC, Abhishek focused mainly on the applications part of things, and less on the other components (like extracting data about Python modules or libraries) – these things can easily be implemented later.
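To give a rough idea of what such a Components file contains, a single application entry might look something like this (an illustrative sketch only; the field names follow the AppStream spec, and the exact DEP-11 layout may differ in detail):

Type: desktop-app
ID: gedit.desktop
Package: gedit
Name:
  C: gedit
Summary:
  C: Edit text files
Categories:
  - GNOME
  - TextEditor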

The remaining steps will be to polish the code and make it merge-ready for Debian’s dak (as soon as it has received enough testing, we will likely give it a try on the Tanglu Debian derivative). Following that, Apt will be extended to fetch the DEP-11 data on demand on systems where it is useful (currently mostly desktop systems) – if you want to save a little bit of space, you will be able to disable downloading this extra metadata in Apt. From there, libappstream will take the data for its Xapian db. This will lead to the removal of the much-hated (by ftpmasters and maintainers) app-install-data package, which has not been updated for two years and only contains a small fraction of the metadata provided by DEP-11.

What Debian will ultimately gain from this effort is support for software centers like GNOME-Software, and improved support for tools like Apper and Muon in displaying applications. Long-term, with more metadata being available, it would be cool to add support for it to “specialized package managers”, like Python’s pip, npm or gem, to make them fetch information about available distribution software and install that instead of their own copies from 3rd-party repositories, if possible. This should ultimately lead to less code duplication across distributions and will likely result in fewer security issues, since the officially maintained and integrated distribution packages can easily be used, if possible. This is no attempt to make tools like pip obsolete, but an attempt to have the different tools installing software on your machine communicate better, instead of creating parallel worlds in terms of software management. Another nice side effect of more metadata will be options to search the software repos for tools handling a given mimetype (in case you can’t open a file), smart software centers installing missing firmware, and automatic suggestions for developers on which software they need to install in order to build a specific software package. The data also allows us to match software across distributions; I will have some news about that soon (not sure how soon though, as I am currently in thesis-writing mode and therefore don’t have much spare time). Since the goal is to have these features available on all distributions supporting AppStream, it will take longer to realize – but we are on a good way.

So, if you want some more information about my student’s awesome work, you can read his blog post about it. He will also be at DebConf’14 (Portland). (I can’t make it this time, but I surely won’t miss the next DebConf.)

Sadly, I see only a very small chance of having the basic DEP-11 stuff land in time for Jessie (lots of review work needs to be done, and some more code needs to be written), but we will definitely have it in Jessie+1.

A small example of how this data will look can be found here – a larger, actual file is available here. Any questions and feedback are highly appreciated.

Bits from Debian: Debian turns 21!

16 August, 2014 - 17:45

Today is Debian's 21st anniversary. Plenty of cities are celebrating Debian Day. If you are not close to any of those cities, there's still time for you to organize a little celebration!

Happy 21st birthday Debian!

Paul Tagliamonte: PyGotham 2014

16 August, 2014 - 02:54

I’ll be there this year!

The talks look amazing, and the event looks really well organized! The schedule has a bunch of talks I want to hit; I hope they’re recorded so I can watch them later!

If anyone’s heading to PyGotham, let me know, I’ll be there both days, likely floating around the talks.

Aurelien Jarno: Intel about to disable TSX instructions?

16 August, 2014 - 00:02

Last time I changed my desktop computer I bought a CPU from the Intel Haswell family, the one available on the market at that time. I carefully selected the CPU to make sure it supports as many instruction extensions as possible in this family (Intel likes segmentation, even high-end CPUs like the Core i7-4770k do not support all possible instructions). I ended up choosing the Core i7-4771 as it supports the “Transactional Synchronization Extensions” (Intel TSX) instructions, which provide transactional memory support. Support for it has been recently added in the GNU libc, and has been activated in Debian. By choosing this CPU, I wanted to be sure that I can debug this support in case of bug report, like for example in bug#751147.

Recently some computing websites started to mention that the TSX instructions have bugs on Xeon E3 v3 family (and likely on Core i7-4771 as they share the same silicon and stepping), quoting this Intel document. Indeed one can read on page 49:

HSW136. Software Using Intel TSX May Result in Unpredictable System Behavior

Problem: Under a complex set of internal timing conditions and system events, software using the Intel TSX (Transactional Synchronization Extensions) instructions may result in unpredictable system behavior.
Implication: This erratum may result in unpredictable system behavior.
Workaround: It is possible for the BIOS to contain a workaround for this erratum.

And later on page 51:

Due to Erratum HSw136, TSX instructions are disabled and are only supported for software development. See your Intel representative for details.

The same websites report that Intel is going to disable the TSX instructions via a microcode update. I hope it won’t be the case and that they are going to be able to find a microcode fix. Otherwise it would mean I will have to upgrade my desktop computer earlier than expected. It’s a bit expensive to upgrade it every year, and that’s the reason why I skipped the Ivy Bridge generation, which didn’t bring much from the instruction-set point of view. Alternatively I can also skip microcode and BIOS updates, in the hope I won’t need another fix from them at some point.

Steinar H. Gunderson: Blenovo part III

15 August, 2014 - 20:16

I just had to add this to the saga:

I got an email from Lenovo Germany today, saying they couldn't reach me (and that the case would be closed after two days if I don't contact them back). I last sent them the documents they asked for on July 3rd.

I am speechless.

Steve Kemp: A tale of two products

15 August, 2014 - 20:14

This is a random post inspired by recent purchases. Some things we buy are practical, others are a little arbitrary.

I tend to avoid buying things for the sake of it, and have explicitly started decluttering our house over the past few years. That said, sometimes things just seem sufficiently "cool" that they get bought without too much thought.

This entry is about two things.

A couple of years ago my bathroom was ripped apart and refitted. Gone was the old and nasty room, and in its place was a glorious space. There was only one downside to the new bathroom - you turn on the light and the fan comes on too.

When your wife works funny shifts at the hospital you can find that the (quiet) fan sounds very loud in the middle of the night and wakes you up.

So I figured we could buy a couple of LED lights and scatter them around the place - when it is dark the movement sensors turn on the lights.

These things are amazing. We have one sat on a shelf, one velcroed to the bottom of the sink, and one on the floor, just hidden underneath the toilet.

Due to the shiny-white walls of the room they're all you need in the dark.

By contrast my second purchase was a mistake - The Logitech Harmony 650 Universal Remote Control should be great. It clearly has the features I want - able to power:

  • Our TV.
  • Our Sky-box.
  • Our DVD player.

The problem is solely due to the horrific software. You program the device via an application/website which works only under Windows.

I had to resort to installing Windows in a virtual machine to make it run:

# Get the Bus/ID for the USB device (strip only *leading* zeros;
# `tr -d 0` would also delete embedded zeros, e.g. in device 104)
bus=$(lsusb | grep -i Harmony | awk '{print $2}' | sed 's/^0*//')
id=$(lsusb | grep -i Harmony | awk '{print $4}' | sed 's/://; s/^0*//')

# pass to kvm
kvm -localtime ..  -usb -device usb-host,hostbus=$bus,hostaddr=$id ..

That allows the device to be passed through to windows, though you'll later have to jump onto the Qemu console to re-add the device as the software disconnects and reconnects it at random times, and the bus changes. Sigh.

I guess I can pretend it works, and has cut down on the number of remotes sat on our table, but .. The overwhelmingly negative setup and configuration process has really soured me on it.

There is a linux application which will take a configuration file and squirt it onto the device, when attached via a USB cable. This software, which I found during research prior to buying it, is useful but not as much as I'd expected. Why? Well the software lets you upload the config file, but to get a config file you must fully complete the setup on Windows. It is impossible to configure/use this device solely using GNU/Linux.

(Apparently there is MacOS software too, I don't use macs. *shrugs*)

In conclusion - Motion-activated LED lights, more useful than expected, but Harmony causes Discord.

Juliana Louback: JSCommunicator 2.0 (Beta) is Live!

15 August, 2014 - 07:06

This is the last week of Google Summer of Code 2014 - all good things must come to an end. To wrap things up, I’ve merged all my work on JSCommunicator into a new version with all the added features. You can now demo the new and improved (or at least so I hope) JSCommunicator on rtc.debian.org!

JSCommunicator 2.0 has an assortment of new add-ons, the most important new features are the Instant Messaging component and the internationalization support.

The UI has been reorganized but we are currently not using a skin for color scheme - will be posting about that in a bit. The idea is to have a more neutral look that can be easily customized and integrated with other web apps.

A chat session is automatically opened when you begin a call with someone - unless you already started a chat session with said someone. Sound alerts for new incoming messages are optional in the config file; visual alerts occur when an inactive chat tab receives a new message. Future work includes multi-user chat sessions and adapting the layout to a large number of chat tabs. Currently it only handles 6. (Should I allow more? Who chats with more than 6 people at once? 14-year-old me would, but now I just can’t handle that. Anyway, I welcome advice on how to go about this. Should we allow infinite tabs, or if not, what’s the cut-off?)

About internationalization, I’m uber proud to say we currently run in 6 languages! They are English (default), Spanish, French, Portuguese, Hebrew and German. One thing I must mention is that since I added new features to JSCommunicator, some of the new content doesn’t have a translation yet. I took care of the Portuguese translation and Yehuda Korotkin quickly turned in the Hebrew translation, but we are still missing updates for Spanish, French and German. If you can contribute, please do. There are about 10 new labels to translate; you can fix the issue here. Or, if you’re short on time, shoot me an email with the translation of what’s on the right side of the ‘=’:

welcome = Welcome,
call = Call
chat = Chat
enter_contact = Enter contact
type_to_chat = type to chat…
start_chat = start chat
me = me
logout = Logout
no_contact = Please enter a contact.
remember_me = Remember me

I’ll merge it myself but I’ll be sure to add you to the authors list - or maybe I’ll just take all the glory and pretend to be a polyglot.

Gregor Herrmann: RC bugs 2014/13 - 2014/33

15 August, 2014 - 04:32

perl 5.20 got uploaded to debian unstable a few minutes ago; be prepared for some glitches when upgrading sid machines/chroots in the next days, while all 557 reverse dependencies are rebuilt via binNMUs.

how does this relate to this blog post's title? it does, since during the last weeks I was mostly trying to help with the preparation of this transition. & we managed to fix quite a few bugs while they were not bumped to serious yet, otherwise the list below would be a bit longer :)

anyway, here are the RC bugs I've worked on in the last 20 or so weeks:

  • #711614 – src:libscriptalicious-perl: "libscriptalicious-perl: FTBFS with perl 5.18: test hang"
    upload new upstream release (pkg-perl)
  • #711616 – src:libtest-refcount-perl: "libtest-refcount-perl: FTBFS with perl 5.18: test failures"
    build-depend on fixed version (pkg-perl)
  • #719835 – libdevel-findref-perl: "libdevel-findref-perl: crash in XS_Devel__FindRef_find_ on Perl 5.18"
    upload new upstream release (pkg-perl)
  • #720021 – src:libhtml-template-dumper-perl: "libhtml-template-dumper-perl: FTBFS with perl 5.18: test failures"
    mark fragile test as TODO (pkg-perl)
  • #720271 – src:libnet-jabber-perl: "libnet-jabber-perl: FTBFS with perl 5.18: test failures"
    add patch to sort hash (pkg-perl)
  • #726948 – libmath-bigint-perl: "libmath-bigint-perl: uninstallable in sid - obsoleted by perl 5.18"
    upload new upstream release (pkg-perl)
  • #728634 – src:fusesmb: "fusesmb: FTBFS: configure: error: Please install libsmbclient header files."
    finally upload to DELAYED/2 with patch from November (using pkg-config)
  • #730936 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS: Tests errors"
    upload new upstream release (pkg-perl)
  • #737434 – src:libmojomojo-perl: "[src:libmojomojo-perl] Sourceless file (minified)"
    add unminified version of javascript file to source package (pkg-perl)
  • #739505 – libcgi-application-perl: "libcgi-application-perl: CVE-2013-7329: information disclosure flaw"
    upload with patch prepared by carnil (pkg-perl)
  • #739809 – src:libgtk2-perl: "libgtk2-perl: FTBFS: Test failure"
    add patch from Colin Watson (pkg-perl)
  • #743086 – src:libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: Tests failures"
    add patch from CPAN RT (pkg-perl)
  • #743099 – src:libclass-refresh-perl: "libclass-refresh-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #745792 – encfs: "[PATCH] Fixing FTBFS on i386 and kfreebsd-i386"
    use DEB_HOST_MULTIARCH to find libraries, upload to DELAYED/2
  • #746148 – src:redshift: "redshift: FTBFS: configure: error: missing dependencies for VidMode method"
    add missing build dependency, upload to DELAYED/2
  • #747771 – src:bti: "bti: FTBFS: configure: line 3571: syntax error near unexpected token `PKG_CHECK_MODULES'"
    add missing build dependency
  • #748996 – libgd-securityimage-perl: "libgd-securityimage-perl: should switch to use libgd-perl"
    update (build) dependency (pkg-perl)
  • #749509 – src:visualvm: "visualvm: FTBFS: debian/visualvm/...: Directory nonexistent"
    use override_dh_install-indep in debian/rules (pkg-java)
  • #749825 – src:libtime-parsedate-perl: "libtime-parsedate-perl: trying to overwrite '/usr/share/man/man3/Time::ParseDate.3pm.gz', which is also in package libtime-modules-perl 2011.0517-1"
    add missing Breaks/Replaces (pkg-perl)
  • #749938 – libnet-ssh2-perl: "libnet-ssh2-perl: FTBFS: libgcrypt20 vs. libcrypt11"
    upload package with fixed build-dep, prepared by Daniel Lintott (pkg-perl)
  • #750276 – libhttp-async-perl: "libhttp-async-perl: FTBFS: Tests failures"
    upload new upstream release prepared by Daniel Lintott (pkg-perl)
  • #750283 – src:xacobeo: "xacobeo: FTBFS: Tests failures when network is accessible"
    add missing build dependency (pkg-perl)
  • #750305 – src:libmoosex-app-cmd-perl: "libmoosex-app-cmd-perl: FTBFS: Tests failures"
    add patch to fix test regexps (pkg-perl)
  • #750325 – src:libtemplate-plugin-latex-perl: "libtemplate-plugin-latex-perl: FTBFS: Tests failures"
    upload new upstream releases prepared by Robert James Clay (pkg-perl)
  • #750341 – src:cpanminus: "cpanminus: FTBFS: Trying to write outside builddir"
    set HOME for tests (pkg-perl)
  • #750564 – obexftp: "missing license in debian/copyright"
    add missing license to debian/copyright, QA upload
  • #750770 – libsereal-decoder-perl: "libsereal-decoder-perl: FTBFS on various architectures"
    upload new upstream development release (pkg-perl)
  • #751044 – packaging-tutorial: "packaging-tutorial: FTBFS - File `bxcjkjatype.sty' not found."
    send a patch (updated build-depends) to the BTS
  • #751563 – src:tuxguitar: "tuxguitar: depends on xulrunner which is no more"
    do some triaging (pkg-java)
  • #752171 – src:pcp: "pcp: Build depends on autoconf"
    upload NMU prepared by Xilin Sun, adding missing build dependency
  • #752347 – highlight: "highlight: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752349 – src:nflog-bindings: "nflog-bindings: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752469 – clearsilver: "clearsilver: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752470 – ekg2: "ekg2: hardcodes /usr/lib/perl5"
    calculate perl lib path at build time, QA upload
  • #752472 – fwknop: "fwknop: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules (see the sketch after this list), upload to DELAYED/5
  • #752476 – handlersocket: "handlersocket: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, QA upload
  • #752704 – lcgdm: "lcgdm: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, upload to DELAYED/5
  • #752705 – libbuffy-bindings: "libbuffy-bindings: hardcodes /usr/lib/perl5"
    pass value of $Config{vendorarch} to dh_install in debian/rules, upload to DELAYED/5
  • #752710 – liboping: "liboping: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752714 – lockdev: "lockdev: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752716 – ming: "ming: hardcodes /usr/lib/perl5"
    NMU with the minimal changes from the next release
  • #752799 – obexftp: "obexftp: hardcodes /usr/lib/perl5"
    calculate perl lib path at build time, QA upload
  • #752810 – src:razor: "razor: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752812 – src:redland-bindings: "redland-bindings: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752815 – src:stfl: "stfl: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, upload to DELAYED/5
  • #752924 – libdbix-class-perl: "libdbix-class-perl: FTBFS: Failed test 'Cascading delete on Ordered has_many works'"
    add patch from upstream git (pkg-perl)
  • #752928 – libencode-arabic-perl: "libencode-arabic-perl: FTBFS with newer Encode: Can't locate object method "export_to_level" via package "Encode""
    add patch from Niko Tyni (pkg-perl)
  • #752982 – src:libwebservice-musicbrainz-perl: "libwebservice-musicbrainz-perl: hardcodes /usr/lib/perl5"
    pass create_packlist=0 to Build.PL, upload to DELAYED/5
  • #752988 – libnet-dns-resolver-programmable-perl: "libnet-dns-resolver-programmable-perl: broken with newer Net::DNS"
    add patch from CPAN RT (pkg-perl)
  • #752989 – libio-callback-perl: "libio-callback-perl: FTBFS with Perl 5.20: alternative dependencies"
    versioned close (pkg-perl)
  • #753026 – libje-perl: "libje-perl: FTBFS with Perl 5.20: test failures"
    upload new upstream release (pkg-perl)
  • #753038 – libplack-test-anyevent-perl: "libplack-test-anyevent-perl: FTBFS with Perl 5.20: alternative dependencies"
    versioned close (pkg-perl)
  • #753057 – libinline-java-perl: "libinline-java-perl: broken symlinks when built under perl 5.20"
    fix symlinks to differing paths in perl 5.18 vs. 5.20 (pkg-perl)
  • #753144 – src:net-snmp: "net-snmp: FTBFS on kfreebsd-amd64 - 'struct kinfo_proc' has no member named 'kp_eproc'"
    add patch from Niko Tyni, upload to DELAYED/5, later rescheduled to 0-day with maintainer's approval
  • #753214 – src:license-reconcile: "license-reconcile: FTBFS: Tests failures"
    make (build) dependency versioned (pkg-perl)
  • #753237 – src:libcgi-application-plugin-ajaxupload-perl: "libcgi-application-plugin-ajaxupload-perl: Tests failures"
    make (build) dependency versioned (pkg-perl)
  • #754125 – libimager-perl: "libimager-perl: FTBFS on s390x"
    close bug, package builds again after libpng upload (pkg-perl)
  • #754691 – src:libio-interface-perl: "libio-interface-perl: FTBFS on kfreebsd-*: invalid storage class for function 'XS_IO__Interface_if_flags'"
    add patch which adds a missing } (pkg-perl)
  • #754993 – libdevice-usb-perl: "libdevice-usb-perl: FTBFS with newer Inline(::C)"
    workaround an Inline bug in debian/rules
  • #755028 – src:libtk-tablematrix-perl: "libtk-tablematrix-perl: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #755324 – src:pinto: "pinto: FTBFS: Tests failures"
    add patch to "use" required module (pkg-perl)
  • #755332 – src:libdevel-nytprof-perl: "libdevel-nytprof-perl: FTBFS: Tests failures"
    mark failing tests temporarily as TODO (pkg-perl)
  • #757754 – obexftp: "obexftp: FTBFS: format not a string literal and no format arguments [-Werror=format-security]"
    add patch with format argument, QA upload
  • #757774 – src:libwx-glcanvas-perl: "libwx-glcanvas-perl: hardcodes /usr/lib/perl5"
    build-depend on new libwx-perl (pkg-perl)
  • #757855 – libwx-perl: "libwx-perl: embeds exact wxWidgets version, needs stricter dependencies"
    use virtual package provided by alien-wxwidgets (pkg-perl)
  • #758127 – src:libwx-perl: "libwx-perl: FTBFS on arm*"
    report and try to debug new build failure (pkg-perl)

p.s.: & now, go & enjoy the new perl 5.20 features :)

Daniel Pocock: Bug tracker or trouble ticket system?

14 August, 2014 - 13:04

One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.

There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.

Support request or bug?

One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.

Some systems, like the Github issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".

At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team but there is overhead in running multiple systems and having staff doing cut and paste.

Will people use it?

Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task and the tool asks them to answer twenty questions about their system and application version, submit log files, and satisfy other requirements, then they won't use it at all and may just revert to sending emails or making phone calls.

Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, then the system may guide the support engineer to make sure the ticket includes attributes that a ticket in the developers' queue should have.

Beyond Perl

Some of the most well known systems in this space are Bugzilla, Request Tracker and OTRS. All of these solutions are developed in Perl.

These days, Python, JavaScript/Node.JS and Java have taken more market share and Perl is chosen less frequently for new projects. Perl skills are declining and younger developers have usually encountered Python as their main scripting language at university.

My personal perspective is that this hinders the ability of Perl projects to attract new blood or leverage the benefits of new Python modules that don't exist in Perl at all.

Bugzilla has fallen out of the Debian and Ubuntu distributions after squeeze due to its complexity. In contrast, Fedora carries the Bugzilla packages and also uses it as their main bug tracker.

Evaluation

I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.

Some of the trends that appear:

  • Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers while still providing useful features for the subset of users who are doing development?
  • A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
  • Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
  • Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.

Questions

This leaves me with some of the following questions:

  • Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
  • For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
  • Which are more extendable with modern programming practices, such as Python scripting and using Git?
  • Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
  • Which of them are suitable for the public internet and which are only considered suitable for private access?

Ian Donnelly: The New Deal: ucf Integration

14 August, 2014 - 03:29

Hi Everybody,

A few days ago I posted an entry on this blog called dpkg Woes where I explained that due to a lack of response, we were abandoning our plan to patch dpkg for my Google Summer of Code project, and I explained that we had a new solution. Well today I would like to tell you about that solution. Instead of patching dpkg, which would take a long time and seemed like it would never make it upstream, we have added some new features to ucf which will allow my Google Summer of Code project to be realized.

If you don’t know, ucf, which stands for Update Configuration File, is a popular Debian package whose goal is to “preserve user changes to config files.” It is meant to act as an alternative to considering a configuration file a conffile on systems that use dpkg. Instead, package maintainers can use ucf to handle these files in a conffile-like way. Where conffiles must work on all systems, because they are shipped with the package, configuration files that use ucf can be handled by maintainer scripts and can vary between systems. ucf exists as a script that allows conffile-like handling of non-conffile configuration files and allows much more flexibility than dpkg’s conffile system. In fact, ucf even includes an option to perform a three-way merge on files it manages, though it currently only uses diff3 for the task.

As you can see, ucf has a goal that, while different from ours, seems naturally compatible with our goal of automatic conffile merging. Obviously, since ucf is a different tool than dpkg, we had to re-think how we were going to integrate with it. Luckily, integration with ucf proved to be much simpler than integration with dpkg. All we had to do was add a generic hook that attempts a three-way merge using any tool created for the task, such as Elektra and kdb merge. Felix submitted a pull request with the exact code almost a week ago, and we have talked with Manoj Srivastava, the developer of ucf, who seemed to really like the idea. The only changes we made are to add an option for a three-way merge command; if one is present, the merge is attempted using the specified command. It’s all pretty simple really.

Since we decided to include a generic hook for a three-way merge command instead of an Elektra-specific one (which would be less open and would create a dependency on Elektra), we also had to add functionality to Elektra to work with this hook. We ended up writing a new script, called elektra-merge, which is now included in our repository. All this script does is act as a liaison between the ucf --threeway-merge-command option and Elektra itself. The script automatically mounts the correct files for theirs, base, and result using the new remount command.

Since the only parameters that are passed to the ucf merge command are the paths of ours, theirs, base and result, we were missing vital information on how to mount these files. Our solution was to create the remount command which mirrors the backend configuration of an existing mountpoint to create a new mountpoint using a new file. So if ours is mounted to system/ours using ini, kdb remount /etc/theirs system/theirs system/ours will mount /etc/theirs to system/theirs using the same backend as ours. Since theirs, base, and result should all have the same backend as ours, we can use remount to mount these files even if all we know is their paths.

Now, package maintainers can edit their scripts to utilize this new feature. If they want, package maintainers can specify a command to use to merge files using ucf during package upgrades. I will soon be posting a tutorial about how to integrate this feature into a package and how to use Elektra in your scripts in order to allow for automatic three-way merges during package upgrade. I will post a link to the tutorial here once it is published.

Sincerely,
Ian S. Donnelly

Richard Hartmann: Slave New World

14 August, 2014 - 02:39

Ubiquitous surveillance is a given these days, and I am not commenting on the crime or the level of stupidity of the murderer, but the fact that the iPhone even logs when you turn your flashlight on and off is scary.

Very, very scary in all its myriad of implications.

But at least it's not as if both your phone and your carrier wouldn't log your every move anyway.

Because Enhanced 911 and its ability to silently tell the authorities your position was not enough :)

Daniel Pocock: WebRTC in CRM/ERP solutions at xTupleCon 2014

14 August, 2014 - 02:29

In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.

Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites including rtc.debian.org, the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.

xTupleCon discounts for developers

xTuple has advised that they will offer a discount to other open source developers and contributors who wish to attend any part of their event. For details, please contact xTuple directly through this form. Please note it is getting close to their deadline for registration and discounted hotel bookings.

Potential WebRTC / JavaScript meet-up in Norfolk area

For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.

Riku Voipio: Booting Linaro ARMv8 OE images with Qemu

13 August, 2014 - 22:36

A quick update - Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:

$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#

Quick benchmarking with age-old ByteMark nbench:

Index     Qemu    Foundation   Host
Memory    4.294   0.712        44.534
Integer   6.270   0.686        41.983
Float     1.463   1.065        59.528

Baseline (LINUX): AMD K6/233*

Qemu is up to 8x faster than the Foundation model on integers, but only 50% faster on floating point. Meanwhile, the host PC is 7-40x faster running native instructions than emulating ARMv8.

Ian Donnelly: How-To: kdb import

13 August, 2014 - 04:29

Hi everybody,

Today I wanted to go over what I think is a very useful command in the kdb tool, kdb import. As you know, the kdb tool allows users to interact with the Elektra Key Database (KDB) via the command line. Today I would like to explain the import function of kdb.

The command to use kdb import is:

kdb import [options] destination [format]

In this command, destination is the Key below which the imported Keys will be stored. For instance, kdb import system/imported would store all the imported Keys below system/imported. This command takes Keys from stdin to store them into KDB. Typically, this command is used with a pipe to read in the Keys from a file.

The format argument you see above can be a very powerful option to use with kdb import. The format argument allows a user to specify which plug-in is used to import the Keys into the Key Database. The user can specify any storage plug-in to serve as the format for the Keys to be imported. For instance, if a user wanted to import an /etc/hosts file into KDB without mounting it, they could use the command cat /etc/hosts | kdb import system/hosts hosts. This command would essentially copy the current hosts file into KDB, like mounting it. Unlike mounting it, changes to the Keys would not be reflected in the hosts file and vice versa.

If no format is specified, the dump format will be used instead. The dump format is the standard way of expressing Keys and all their relevant information. This format is intended to be used only within Elektra. The dump format is a good means of backing up Keys from the Key Database for later use with Elektra, such as reimporting them. As of this writing, dump is the only way to fully preserve all parts of a KeySet.

It is very important to note that dump does not rename keys, by design. If a user exports a KeySet using dump with a command such as kdb export system/backup > backup.ecf, they can only import that KeySet back into system/backup, using a command like cat backup.ecf | kdb import system/backup.

The kdb import command only takes one special option:

-s --strategy

which is used to specify a strategy to use if Keys already exist in the specified destination.
The current strategies are:

  • preserve: any keys already in the destination will not be overwritten
  • overwrite: any keys already in the destination will be overwritten if a new key has the same name
  • cut: all keys already in the destination will be removed, then new keys will be imported

If no strategy is specified, the command defaults to the preserve strategy, so as not to destroy any previous keys.
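For instance, to replace any clashing keys instead of preserving them, the strategy can be passed explicitly (reusing the backup file from the example below):

cat backup.ecf | kdb import -s overwrite system/backup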

An example of using kdb import is as follows:

cat backup.ecf | kdb import system/backup

This command would import all keys stored in the file backup.ecf into the Key Database under system/backup.

In this example, backup.ecf was exported from the KeySet using the dump format by using the command:
kdb export system/backup > backup.ecf

backup.ecf contains all the information about the keys below system/backup:

$ cat backup.ecf
kdbOpen 1
ksNew 3
keyNew 19 0
system/backup/key1
keyMeta 7 1
binary
keyEnd
keyNew 19 0
system/backup/key2
keyMeta 7 1
binary
keyEnd
keyNew 19 0
system/backup/key3
keyMeta 7 1
binary
keyEnd
ksEnd

Before the import command, system/backup does not exist and contains no keys.
After the import command, running the command kdb ls system/backup prints:

system/backup/key1
system/backup/key2
system/backup/key3

As you can see, the kdb import command is a very useful tool included as part of Elektra. I also wrote a tutorial on the kdb export command. Please go read that as well, because those two commands go hand in hand and allow some very powerful usage of Elektra.

Sincerely,
Ian S. Donnelly

Ian Donnelly: How-To: kdb export

13 August, 2014 - 04:29

Hi everybody,

I recently posted a tutorial on the kdb import command. Well, I also wanted to go over its sibling, kdb export. These two commands work very similarly, but there are some differences.

First of all, the command to use kdb export is:

kdb export [options] source [format]

In this command, source is the root key below which Keys will be exported. For instance, kdb export system/export would export all the keys below system/export. Additionally, this command exports keys under the system/elektra directory by default. It does this so that information about the keys stored under this directory will be included if the Keys are later imported into an Elektra Key Database. This command writes the exported Keys to stdout. Typically, the export command is used with redirection to write the Keys to a file.

As we discussed already, the format argument can be a very powerful option to use with kdb export. Just like with kdb import, the format argument allows a user to specify which plug-in is used to export the Keys from the Key Database. The user can specify any storage plug-in to serve as the format for the exported Keys. For instance, if a user mounted their hosts file to system/hosts using kdb mount /etc/hosts system/hosts hosts, they would be able to export these Keys in the hosts format using the command kdb export system/hosts hosts > hosts.ecf. This command would essentially create a backup of their current /etc/hosts file in a valid format for /etc/hosts.

If no format is specified, the dump format will be used instead. The dump format is the standard way of expressing Keys and all their relevant information. This format is intended to be used only within Elektra. The dump format is a good means of backing up Keys from the Key Database for later use with Elektra, such as reimporting them. As of this writing, dump is the only way to fully preserve all parts of a KeySet.

The kdb export command also takes one special option, but it’s different from the one for kdb import:

-E --without-elektra

which omits the system/elektra directory of keys.
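For example, the following sketch exports a KeySet while leaving out the system/elektra keys:

kdb export -E system/backup > backup.ecf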

An example of using kdb export is as follows:

kdb export system/backup > backup.ecf

This command would export all keys stored under system/backup, along with relevant Keys in system/elektra, into a file called backup.ecf.

As you can see, the kdb export command is a very useful tool, just like its sibling, kdb import. If you haven’t yet, please go read the tutorial I wrote for kdb import, because these two commands are best used together and can enable some really great features of Elektra.

Sincerely,
Ian S. Donnelly

Cyril Brulebois: Mark a mail as read across maildirs

12 August, 2014 - 02:20

Problem

Discussions are sometimes started by mailing a few different mailing lists so that all relevant parties have a chance to be aware of a new topic. It’s all nice when people can agree on a single venue to send their replies to, but that doesn’t happen every time.

Case in point, I’m getting 5 copies of a bunch of mails, through the following debian-* lists: accessibility, boot, cd, devel, project.

Needless to say: Reading, or marking a given mail as read once per maildir rapidly becomes a burden.

Solution

I know some people use a duplicate killer at procmail time (hello gregor) but I’d rather keep all mails in their relevant maildirs.

So here’s mark-read-everywhere.pl which seems to do the job just fine for my particular setup: all maildirs below ~/mails/* with the usual cur, new, tmp subdirectories.

Basically, given a mail piped from mutt, the script computes a hash on various headers, looks at all new mails (new subdirectories), and marks the matching ones as read (moving them to the nearby cur subdirectories and changing the suffix from , to ,S).
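The same idea translates fairly directly into a few lines of shell. Here is a rough equivalent (a sketch, simplified to match on Message-ID only rather than a hash over several headers, and assuming formail from the procmail package):

#!/bin/sh
# read one mail on stdin and mark every copy of it under ~/mails/*/new/ as seen
msgid=$(formail -zx Message-ID:)
[ -n "$msgid" ] || exit 1
for f in ~/mails/*/new/*; do
    [ -e "$f" ] || continue
    if [ "$(formail -zx Message-ID: < "$f")" = "$msgid" ]; then
        # maildir convention: move to cur/ and append the S (seen) flag
        mv "$f" "$(dirname "$f")/../cur/$(basename "$f"):2,S"
    fi
done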

Mutt key binding (where X is short for cross post):

macro index X "<pipe-message>~/bin/mark-read-everywhere.pl<enter>"

This isn’t pretty or bulletproof but it already started saving time!

Now to wonder: was it worth the time to automate that?

Cyril Brulebois: How to serve Perl source files

12 August, 2014 - 01:45

I noticed a while ago a Perl script file included on my blog wasn’t served properly, since the charset wasn’t announced and web browsers didn’t display it properly. The received file was still valid UTF-8 (hello, little © character), at least!

First, wrong intuition

Reading Apache’s /etc/apache2/conf.d/charset, it looks like the following directive might help:

AddDefaultCharset UTF-8

but comments there suggest reading the documentation! And indeed that alone isn’t sufficient since this would only affect text/plain and text/html. The above directive would have to be combined with something like this in /etc/apache2/mods-enabled/mime.conf:

AddType text/plain .pl

Real solution

To avoid any side effects on other file types, the easiest way forward seems to be to avoid setting AddDefaultCharset and to associate the UTF-8 charset with .pl files instead, keeping the text/x-perl MIME type, with this single directive (again in /etc/apache2/mods-enabled/mime.conf):

AddCharset UTF-8 .pl

Looking at response headers (wget -d) we’re moving from:

Content-Type: text/x-perl

to:

Content-Type: text/x-perl; charset=utf-8

Conclusion

Nothing really interesting, or new. Just a small reminder that tweaking options too hastily is sometimes a bad idea. In other news, another Perl script is coming up soon. :)


Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.