Planet Debian

Planet Debian - http://planet.debian.org/

Russ Allbery: A new challenge

9 August, 2014 - 12:01

Those reading this journal may have noticed that my rate of posting has dropped a bit in the past few years, and quite a lot in the past year. One of the major reasons for this was work, which had been getting more bureaucratic, more stressful, less trusting, and more fearful. After this got drastically worse in the past six months, I finally decided enough was enough and took advantage of a good opportunity to do something different.

I will be joining Dropbox's site reliability engineering team in a week and a half (which means that I'll be working on their servers, not on the product itself). It will take a few months to settle in, but hopefully this will mean a significant improvement to my stress levels and a lot of interesting projects to work on.

I'm taking advantage of this change to inventory the various things I'm currently committed to and let go of some projects to make more space in my life. There are also a variety of software projects that I was maintaining as part of my job at Stanford, and I will be orphaning many of those packages. I'll make another journal post about that a bit later.

For Debian folks, I am going to be at Debconf, and hope to meet many of you there. (It's going to sort of be my break between jobs.) In the long run, I'm hoping this move will let me increase my Debian involvement.

In the long run, I expect most of my free software work, my reviews, and the various services I run to continue as before, or even improve as my stress drops. But I've been at Stanford for a very long time, so this is quite the leap into the unknown, and it's going to take a while before I'm sure what new pattern my life will fall into.

Clint Adams: The politically-correct term is a juvenile cricket

9 August, 2014 - 04:16

Normally I'm disgusted by fangirling of jwz, but it seems that he finally wrote something I like.

Daniel Pocock: Help needed reviewing Ganglia GSoC changes

9 August, 2014 - 04:14

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R or just an enthusiastic C or Python developer, you may be able to use and provide feedback for the students while benefitting from the cool new features they have been working on.

  • Chandrika Parimoo (Python, Nagios and some Syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Jan Wagner: Monitoring Plugins Debian packages

9 August, 2014 - 04:03

You may wonder why the good old nagios-plugins are not up to date in Debian unstable and testing.

Since the people behind and maintaining the plugins up to version 1.5 were forced to rename the software project to Monitoring Plugins, some work behind the scenes and much QA work was necessary to release the software in a proper state. This happened four weeks ago with the release of version 2.0 of the Monitoring Plugins.

One day later the package was uploaded to unstable, but it landed in the Debian NEW queue due to the changed package name(s). Now we (and maybe you) are waiting for it to be reviewed by the ftp-masters. This will hopefully happen before the jessie freeze.

Until then, you can grab packages for wheezy from the 'wheezy-backports' suite at ftp.cyconet.org/debian/ or the 'debmon-wheezy' suite at debmon.org. Feedback is much appreciated.

Richard Hartmann: RFC 7194

9 August, 2014 - 02:42

On a positive note, RFC 7194 has been published.

Tiago Bortoletto Vaz: New gadget

9 August, 2014 - 02:16

Solid, energy-efficient, nice UI, wireless, multiple output formats and hmm... can you smell it? :)

Ian Donnelly: The Line Plug-In

9 August, 2014 - 01:19

Hi Everybody,

As you may have noticed, I wrote a new plug-in for Elektra called “line”. I used it for a lot of examples in my tutorial, How-To: Write a Plug-In. The line plug-in is a very simple storage plug-in for Elektra. It stores files into the Elektra Key Database, creating a new Key for each line and setting the string value of each Key to the content of that line. So if we have a file called “hello.txt”:

Hello
World!

And we mount it into kdb like so: kdb mount ~/line.txt system/hello_line line. The output of kdb ls system/hello_line would then be:

system/hello_line/#1
system/hello_line/#2

The values returned by keyGetString for #1 and #2 would be Hello and World! respectively. If this seems like a very simple plug-in, that’s because it is. Obviously, this plug-in isn’t a great showcase for the robustness of Elektra; any data structure could store a file line by line relatively easily, so why did we add a line plug-in at all?
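
To make that concrete, here is a minimal sketch (my own illustration, not part of the original post) of reading those keys back through Elektra's C API, assuming the mount point from the example above and kdb.h:

#include <kdb.h>
#include <stdio.h>

int main (void)
{
    /* the parent key marks where the line plug-in was mounted */
    Key *parentKey = keyNew ("system/hello_line", KEY_END);
    KDB *handle = kdbOpen (parentKey);
    KeySet *ks = ksNew (0, KS_END);

    kdbGet (handle, ks, parentKey);

    /* look up the keys the line plug-in created, one per line of the file */
    Key *first = ksLookupByName (ks, "system/hello_line/#1", 0);
    Key *second = ksLookupByName (ks, "system/hello_line/#2", 0);
    if (first) printf ("%s\n", keyString (first));   /* Hello */
    if (second) printf ("%s\n", keyString (second)); /* World! */

    ksDel (ks);
    kdbClose (handle, parentKey);
    keyDel (parentKey);
    return 0;
}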

The answer is that we included a line plug-in to allow any line-based file to use functions of Elektra, particularly the new Merge function. My Google Summer of Code project is to allow automatic three-way merges of Debian conffiles during package upgrades, as opposed to the current prompt and manual merging a user must do if a conffile has been edited. Using Elektra and the new merge code, we can mount a conffile with the best plug-in for it (the ini plug-in for Samba’s smb.conf, for instance), and that allows for a very powerful merging ability with a lot more success than a simple diff merge. However, there are a lot of conffiles that don’t use any particular standard (such as ini, xml, or JSON) to store data. That is where the line plug-in comes in. We can still mount these files using the line plug-in and attempt a merge. Of course it is much more likely to have conflicts, and this type of merge is still susceptible to many of the same flaws as regular file merges (such as not being able to detect when a line has been moved), but in simple cases the merge may succeed, which would reduce the overall number of times a user is prompted during an upgrade.

Basically, I wrote a line plug-in for Elektra as a fallback for conffile merges when we can’t mount the conffile in any more meaningful way. While merges using KeySets that were mounted with line are more likely to fail than ones using other, more specialized plug-ins, there are cases where these merges will succeed and the user will not have to deal with a confusing prompt. The whole point of my Google Summer of Code project is to make upgrading packages and dealing with conffiles much smoother and easier than it is now by including a three-way merge, and this line plug-in will help with that goal.

Sincerely,
Ian S. Donnelly

Richard Hartmann: Microsoft Linux: Debian

8 August, 2014 - 19:32

Huh...

Source

(Yes, I am on Debian's trademark team and no, I have no idea what that means. Yet.)

Sune Vuorela: Fun and joy with .bat files

8 August, 2014 - 15:16

Occasionally, one gets in touch with kind of ‘foreign’ technologies and needs to get stuff working anyways.

Recently, I had to do various bits of hacking with and around .bat files. Bat files are a kind of script file for Microsoft Windows.

Calling external commands

Imagine you need to call some other command, let’s say git diff. So from a cmd prompt, you would write

git diff

similar to writing shell scripts on unixes. But there is a catch. If the thing you want to call is another bat script, just calling it means it ‘replaces’ the current script and never returns. So you need

call git diff

if the command you want to run is a bat file and you want to return to your script.

Calling an external helper next to your script
If you for some reason need to call an external helper placed next to your script, there is a helpful way to do that as well. Imagine your helper is called helper.bat:

call %~dp0helper.bat

is the very self-explanatory way of doing that. (%~dp0 expands to the drive and path of the currently running script.)

Stopping execution of your script

If you somehow encounter some condition in your script that requires you to stop it, the ‘exit’ command comes in handy. It even takes an argument for the error code to return.

exit 2

stops your script with return code 2. But it also has the nice added feature that if you do it in a script you run by hand in a terminal, it also exits the terminal.

Luckily there is also a fix for that:

exit /b 2

which doesn’t exit your interactive terminal, while still setting the %ERRORLEVEL% variable to the exit code.

Fortunately, the fun doesn’t stop here.

If the script is run non-interactively, exit /b doesn’t set the exit code seen by, for example, perl’s system() call. You need to use exit without /b for that. So now you need two scripts: one for “interactive” use that calls exit /b, and a similar one using exit for use by other apps/scripts.

Or, we can combine some of our knowledge and add an extra layer of indirection.

  • write your script for interactive use (with exit /b) and let’s call it script.bat
  • create a simple wrapper script
    call %~dp0script.bat
    exit %ERRORLEVEL%

  • call the wrapper for non-interactive use

and then success.

Oh, and on an unrelated note: Windows can’t schedule tasks for users that aren’t logged in and don’t have a password set. The response “Access Denied” is the only clue given.

Ian Wienand: Bash arithmetic evaluation and errexit trap

8 August, 2014 - 13:30

In the "traps for new players" category:

count=0
things="0 1 0 0 1"

for i in $things;
do
   if [ $i == "1" ]; then
       (( count++ ))
   fi
done

echo "Count is ${count}"

Looks fine? I've probably written this many times. There's a small gotcha:

((expression))
The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION. If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1. This is exactly equivalent to let "expression".

When you run this script with -e or errexit enabled -- probably because the script has become too big to be reliable without it -- the first (( count++ )) evaluates to 0 (post-increment yields the old value), so per the above its return status is 1 and the script stops. A definite trap to watch out for!

Ian Donnelly: How-To: Write a Plug-In (Part 3, Coding)

8 August, 2014 - 12:13

Hi Everybody!

Hope you have been enjoying my tutorial on writing plug-ins so far. In Part 1 we covered the basic overview of a plug-in. Part 2 covered a plug-in’s contract and the best way to write one. Now, for Part 3, we are going to cover the meat of a plug-in, the actual coding. As you should know from reading Part 1, there are five main functions used for plug-ins: elektraPluginOpen, elektraPluginClose, elektraPluginGet, elektraPluginSet, and ELEKTRA_PLUGIN_EXPORT(Plugin), where Plugin should be replaced with the name of your plug-in. We are going to start this tutorial by focusing on elektraPluginGet, because it is usually the most critical function.

As we discussed before, elektraPluginGet is the function responsible for turning information from a file into a usable KeySet. This function usually differs quite a lot between plug-ins. It should be of type int; it returns 1 on success and a negative number on an error. The function takes in a Key, usually called parentKey, whose string value contains the path to the file that is mounted. For instance, if you run the command kdb mount /etc/linetest system/linetest line, then keyString(parentKey) should be equal to “/etc/linetest”. At this point, you generally want to open the file so you can begin saving it into keys. Here is the trickier part to explain. Basically, at this point you will want to iterate through the file and create keys and store string values inside of them according to what your plug-in is supposed to do. I will give a few examples from different plug-ins to better explain.

My line plug-in was written to read files into a KeySet line by line, using the newline character as a delimiter and naming the keys by their line number, such as (#1, #2, .. #_22) for a file with 22 lines. So once I open the file given by parentKey, every time I read a line I create a new key, let’s call it new_key, using keyDup(parentKey). Then I set new_key’s name to lineNN (where NN is the line number) using keyAddBaseName and store the string value of the line into the key using keySetString. Once the key is initialized, I append it to the KeySet that was passed into the elektraPluginGet function, let’s call it returned for now, using ksAppendKey(returned, new_key). Now the KeySet will contain new_key with the name lineNN properly saved where it should be according to the kdb mount command (in this case, system/linetest/lineNN), and a string value equal to the contents of that line in the file. My plug-in repeats these steps as long as it hasn’t reached the end of the file, thus saving the whole file into a KeySet line by line.
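
Putting those steps together, a stripped-down version of such a get function might look roughly like the sketch below. This is my own simplification rather than the real line plug-in (which is linked later in this post); it follows the lineNN naming used in the paragraph above, skips the contract handling from Part 2, and assumes the usual plugin header kdbplugin.h:

#include <kdbplugin.h>
#include <stdio.h>
#include <string.h>

int elektraLineGet (Plugin *handle, KeySet *returned, Key *parentKey)
{
    (void) handle; /* unused in this simple plug-in */

    /* the path of the mounted file is the string value of parentKey */
    FILE *fp = fopen (keyString (parentKey), "r");
    if (!fp) return -1;

    char line[1024];
    int lineNumber = 1;
    while (fgets (line, sizeof (line), fp))
    {
        line[strcspn (line, "\n")] = '\0'; /* strip the trailing newline */

        /* new key below parentKey, named after the line number,
           holding the line's content as its string value */
        Key *new_key = keyDup (parentKey);
        char name[32];
        snprintf (name, sizeof (name), "line%02d", lineNumber++);
        keyAddBaseName (new_key, name);
        keySetString (new_key, line);

        ksAppendKey (returned, new_key);
    }

    fclose (fp);
    return 1; /* success */
}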

The simpleini plug-in works similarly, but it parses ini files instead of just going line by line. At their most simple level, ini files are in the format name=value, with each pair taking one line. So for this plug-in, it makes a lot of sense to name each Key in the KeySet after the string to the left of the “=” sign and to store the string to the right of it as the key’s value. For instance, the name of the key would be “name” and keyGetString(name) would return “value”.
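
A sketch of the corresponding read loop for such a name=value format could look like the fragment below (again my own illustration, meant to replace the fgets loop in the previous sketch; it assumes one pair per line and no comments or escaping):

    char name[1024];
    char value[1024];
    while (fscanf (fp, "%1023[^=]=%1023[^\n]\n", name, value) == 2)
    {
        /* the key name comes from the left-hand side, the value from the right */
        Key *read = keyDup (parentKey);
        keyAddBaseName (read, name);
        keySetString (read, value);
        ksAppendKey (returned, read);
    }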

As you may have noticed, the simpleini and line plug-ins work very similarly; they just parse the files differently. The simpleini plug-in parses the file in a way that is more natural for ini files (setting the key’s name to the left side of the equals sign and the value to the right side). The elektraPluginGet function is the heart of a storage plug-in: it’s what allows Elektra to store configurations in its database. This function isn’t just run when a file is first mounted; whenever the file gets updated, it is run again to bring the Elektra Key Database back in sync.

We also gave a brief overview of the elektraPluginSet function. This function is basically the opposite of elektraPluginGet: where elektraPluginGet reads information from a file into the Elektra Key Database, elektraPluginSet writes information from the database back into the mounted file.

First have a look at the signature of elektraLineSet:

elektraLineSet(Plugin *handle ELEKTRA_UNUSED, KeySet *toWrite, Key *parentKey)

Let's start with the most important parameters, the KeySet and the parentKey. The KeySet supplied is the KeySet that is going to be persisted in the file. In our case it would contain the Keys representing the lines. The parentKey is the topmost Key of the KeySet and serves several purposes. First, it contains the filename of the destination file as its value. Second, errors and warnings can be emitted via the parentKey. We will discuss error handling in more detail later. The Plugin handle can be used to persist state information in a threadsafe way with elektraPluginSetData. As our plugin is not stateful and therefore does not use the handle, it is marked as unused in order to suppress compiler warnings.

Basically the implementation of elektraLineSet can be described with the following pseudocode:

open the file
if (error)
{
    ELEKTRA_SET_ERROR(74, parentKey, keyString(parentKey));
}
for each key
{
    write the key value together with a newline
}
close the file

The full code can be found at https://github.com/ElektraInitiative/libelektra/blob/master/src/plugins/line/line.c

As you can see, all elektraLineSet does is open a file, take each Key from the KeySet (remember they are named #1, #2 … #_22) in order, and write each one as its own line in the file. Since we don’t care about the name of the Key in this case (other than for order), we just write the value of keyString for each Key as a new line in the file. That’s it. Now, each time the mounted KeySet is modified, elektraPluginSet will be called and the mounted file will be updated.
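
Filled in as real code, that pseudocode could look roughly like the following sketch. It is my own simplification of the linked implementation rather than the real thing; the error number 74 comes from the pseudocode above, and the plugin headers are assumed:

#include <kdbplugin.h>
#include <kdberrors.h>
#include <stdio.h>

int elektraLineSet (Plugin *handle ELEKTRA_UNUSED, KeySet *toWrite, Key *parentKey)
{
    /* the destination file name is the string value of parentKey */
    FILE *fp = fopen (keyString (parentKey), "w");
    if (!fp)
    {
        ELEKTRA_SET_ERROR (74, parentKey, keyString (parentKey));
        return -1;
    }

    /* write every key's value as its own line, in KeySet order */
    Key *cur;
    ksRewind (toWrite);
    while ((cur = ksNext (toWrite)) != 0)
    {
        fprintf (fp, "%s\n", keyString (cur));
    }

    fclose (fp);
    return 1; /* success */
}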

We haven’t discussed ELEKTRA_SET_ERROR yet. Because Elektra is a library, printing errors to stderr wouldn’t be a good idea. Instead, errors and warnings can be appended to a key in the form of metadata. This is what ELEKTRA_SET_ERROR does. Because the parentKey always exists, even if a critical error occurs, we append the error to the parentKey. The first parameter is an id specifying the general error that occurred. A listing of existing errors, together with a short description and a categorization, can be found at https://github.com/ElektraInitiative/libelektra/blob/master/src/liberror/specification. The third parameter can be used to provide additional information about the error. In our case we simply supply the filename of the file that caused the error. The kdb tools will interpret this error and print it in a pretty way. Notice that this can be used in any plugin function where the parentKey is available.

The elektraPluginOpen and elektraPluginClose functions are not commonly used for storage plug-ins, but they can be useful and are worth reviewing. The elektraPluginOpen function runs before elektraPluginGet and is useful for doing any initialization the plug-in needs. On the other hand, elektraPluginClose is run after the other functions of the plug-in and can be useful for freeing up resources.

The last function, one that is always needed in a plug-in, is ELEKTRA_PLUGIN_EXPORT. This function is responsible for letting Elektra know that the plug-in exists and which methods it implements. The code from my line plug-in is a good example and pretty self-explanatory:

Plugin *ELEKTRA_PLUGIN_EXPORT(line)
{
    return elektraPluginExport("line",
        ELEKTRA_PLUGIN_GET, &elektraLineGet,
        ELEKTRA_PLUGIN_SET, &elektraLineSet,
        ELEKTRA_PLUGIN_END);
}

There you have it! This is the last part of my tutorial on writing a storage plug-in for Elektra. Hopefully you now have a good understanding of how Elektra plug-ins work and you’ll be able to add some great functionality into Elektra through development of new plug-ins. I hope you enjoyed this tutorial and if you have any questions just leave a comment!

Happy coding!
Ian S. Donnelly

Jordi Mallach: A pile of reasons why GNOME should be Debian jessie’s default desktop environment

8 August, 2014 - 05:58

GNOME has, for some reason or another, always been the default desktop environment in Debian, ever since the installer became able to install a full desktop environment by default. Release after release, Debian has been shipping different versions of GNOME, first based on the venerable 1.2/1.4 series, then moving to the time-based GNOME 2.x series, and finally to the newly designed 3.4 series for the last stable release, Debian 7 ‘wheezy’.

During the final stages of wheezy’s development, it was pointed out that the first install CD image would no longer hold all of the packages required to install a full GNOME desktop environment. There was lots of discussion surrounding this bug or fact, and there were two major reactions to it. The Debian GNOME team rebuilt some key packages so they would be compressed using xz instead of gzip, saving the few megabytes that were needed to squeeze everything onto the first CD. In parallel, the tasksel maintainer decided switching to Xfce as the default desktop was another obvious fix. This change, unannounced and two days before the freeze, was heavily contested and spurred the usual massive debian-devel threads. In the end, and after a few default desktop flip-flops, it was agreed that GNOME would remain the default for the already frozen wheezy release, and this issue would be revisited later on during jessie’s development.

And indeed, some months ago, Xfce was again reinstated as Debian’s default desktop for jessie as announced:

Change default desktop to xfce.

This will be re-evaluated before jessie is frozen. The evaluation will
start around the point of DebConf (August 2014). If at that point gnome
looks like a better choice, it’ll go back as the default.

Some criteria for that choice will include:

* Popcon numbers for gnome on jessie. If gnome installations continue to
  rise fast enough despite xfce being the default (compared with, say
  kde installations), then we’ll know that users prefer gnome.
  Currently we have no data about how many users would choose gnome when
  it’s not the default. Part of the reason for switching to xfce now
  is to get such data.

* The state of accessability support, particularly for the blind.

* How well the UI works for both new and existing users. Gnome 3
  seems to be adding back many gnome 2 features that existing users
  expect, as well as making some available via addons. If it feels
  comfortable to gnome 2 (and xfce) users, that would go a long way
  toward switching back to it as the default. Meanwhile, Gnome 3 is also
  breaking new ground in its interface; if the interface seems more
  welcoming to new users, or works better on mobile devices, etc, that
  would again point toward switching back.

* Whatever size constraints exist for CD or other images at the time.

--

Hello to all the tech journalists out there. This is pretty boring.
Why don’t you write a story about monads instead?

― Joey Hess in dfca406eb694e0ac00ea04b12fc912237e01c9b5.

Suffice to say that the Debian GNOME team participants have never been thrilled about how the whole issue is being handled, and we’ve been wondering if we should be doing anything about it, or just move along and enjoy the smaller amount of bug reports against GNOME packages that this change would bring us, if it finally made it through to the final release. During our real life meet-ups in FOSDEM and the systemd+GNOME sprint in Antwerp, most members of the team did feel Debian would not be delivering a graphical environment with the polish we think our users deserve, and decided we at least should try to convince the rest of the Debian project and our users that Debian will be best suited by shipping GNOME 3.12 by default. Power users, of course, can and know how to get around this default and install KDE, Xfce, Cinnamon, MATE or whatever other choice they have. For the average user, though, we think we should be shipping GNOME by default, and tasksel should revert the above commit again. Some of our reasons are:

  • Accessibility: GNOME continues to be the only free desktop environment that provides full accessibility coverage, right from the login screen. While it’s true GNOME 3.0 was lacking in many areas, and GNOME 3.4 (which we shipped in wheezy) was just barely acceptable thanks to some last minute GDM fixes, GNOME 3.12 should have ironed out all of the issues, and our non-expert understanding is that a11y support is now on par with what GNOME 2.30 from squeeze offered.
  • Downstream health: The number of active members in the team taking care of GNOME in Debian is around 5-10 persons, while it is 1-2 in the case of Xfce. Being the default desktop draws a lot of attention (and bug reports) that only a bigger team might have the resources to handle.
  • Upstream health: While GNOME is still committed to its time-based release schedule and ships new versions every 6 months, Xfce upstream is, unfortunately, struggling a bit more to keep up with new plumbing technology. Only very recently has it regained support for suspend/hibernate via logind, or gained support for BlueZ 5.x, for example.
  • Community: GNOME is one of the biggest free software projects, and is lucky to have created an ecosystem of developers, documenters, translators and users that interact regularly in a live social community. Users and developers gather in hackfests and big, annual conferences like GUADEC, the Boston Summit, or GNOME.Asia. Only KDE has a comparable community, the rest of the free desktop projects don’t have the userbase or manpower to sustain communities like this.
  • Localization: Localization is more extensive and complete in GNOME. Xfce has 18 languages above 95% of coverage, and 2 at 100% (excluding English). GNOME has 28 languages above 95%, 9 of them being complete (excluding English).
  • Documentation: Documentation coverage is extensive in GNOME, with most of the core applications providing localized, up to date and complete manuals, available in an accessible format via the Help reader.
  • Integration: The level of integration between components is very high in GNOME. For example, instant messaging, agenda and accessibility components are an integral part of the desktop. GNOME is closely integrated to NetworkManager, PulseAudio, udisks and upower so that the user has access to all the plumbing in a single place. GNOME also integrates easily with online accounts and services (ownCloud, Google, MS Exchange…).
  • Hardware: GNOME 3.12 will be one of the few desktop environments to support HiDPI displays, now very common on some laptop models. Lack of support for HiDPI means non-technical users will get an unreadable desktop by default, and no hints on how to fix that.
  • Security: GNOME is more secure. There are no processes launched with root permissions on the user’s session. All everyday operations (package management, disk partitioning and formatting, date/time configuration…) are accomplished through PolicyKit wrappers.
  • Privacy: One of the latest focuses of GNOME development is improving privacy, and work is being done to make it easy to run GNOME applications in isolated containers, integrate Tor seamlessly in the desktop experience, better disk encryption support and other features that should make GNOME a more secure desktop environment for end users.
  • Popularity: One of the metrics discussed by the tasksel change proponents mentioned popcon numbers. 8 months after the desktop change, Xfce does not seem to have made a dent on install numbers. The Debian GNOME team doesn’t feel popcon’s data is any better than a random online poll though, as it’s an opt-in service which the vast majority of users don’t enable.
  • systemd embracing: One of the reasons to switch to Xfce was that it didn’t depend on systemd. But now that systemd is the default, that shouldn’t be a problem. Also given ConsoleKit is deprecated and dead upstream, KDE and Xfce are switching or are planning to switch to systemd/logind.
  • Adaptation: Debian forced a big desktop change with the wheezy release (switching from the traditional GNOME 2.x to the new GNOME Shell environment). Switching again would mean more adaptation for users when they’ve had two years to get used to GNOME 3.4. Furthermore, GNOME 3.12 represents two years of improvements and polishing on top of GNOME 3.4, which should help with some of the rough edges found in the GNOME release shipped with wheezy.
  • Administration: GNOME is easy to administrate. All the default settings can be defined by administrators, and mandatory settings can be forced to users, which is required in some companies and administrations; Xfce cannot do that. The close integration with freedesktop components (systemd, NM, PulseAudio…) also gives access to specific and useful administration tools.

In short, we think defaulting to GNOME is the best option for the Debian release; in contrast, shipping Xfce as the default desktop could mean delivering a desktop experience that has some incomplete or rough edges, and is not on par with Debian quality standards for a stable release. We believe tasksel should again revert the change and be uploaded as soon as possible, in order to get people testing images with GNOME as early as possible, with the freeze only two months away.

We would also like changes of this nature to be, in the future, not just announced in a git commit log but widely discussed on debian-project and the other usual development/decision channels, as happened recently with the change of init system. Whatever the final decision is, we will continue to package GNOME with great care to ensure our users get the best possible desktop experience Debian can offer.

Niels Thykier: Recent improvements to Britney2

8 August, 2014 - 04:27

As mentioned by Raphaël Hertzog, I have been spending some time on improving Britney2. Just the other day I submitted a second branch for review that I expect to merge early next week. I also have another set of patches coming up soon. Currently, none of them are really user visible, so unless you are hosting your own version of Britney, these patches are probably not all that interesting to you.

 

The highlights:

  1. Reduce the need for backtracking by finding semantically equivalent packages.
  2. Avoid needing to set up a backtrack point in some cases.
    • This has the side-effect of eliminating some O(e^n) runtime cases.
  3. Optimise “installability” testing of packages affected by a hinted migration.
    • This has the side-effect of avoiding some O(e^n) runtime cases when the “post-hint” state does not trigger said behaviour.
    • There is a follow-up patch for this one coming in the third series to fix a possible bug for a corner-case (causing a valid hint to be incorrectly rejected when it removed an “uninstallable” package).
  4. Reduce the number of affected packages to test when migrating items by using knowledge about semantically equivalent packages.
    • In some cases, Britney can now do “free” migrations when all binaries being updated replace semantically equivalent packages.
  5. (Merge pending) Avoid many redundant calls to “sort_actions()”, which exhibits at least O(n^2) runtime in some cases.
    • For the dataset Raphaël submitted, this patch shaves off over 30 minutes runtime.  In the particular case, each call to sort_actions takes 3+ minutes and it was called at least 10 times, where it was not needed.
    • That said, sort_actions has a vastly lower runtime in the runs for Debian (and presumably also Ubuntu, since no one has complained from their side so far).

 

The results so far:

After the first patch series was merged, the Kali dataset (from Raphaël) could be processed in “only” ~2 hours. With the second patch series merged, the runtime for that dataset drops by another 30-50 minutes (most of which is thanks to the change mentioned in highlight #5).

The third patch series currently does not have any mention-worthy performance-related changes. It will probably be limited to bug fixes and some refactoring.

 

Reflections:

The first 3 highlights only affect the “new” installability tester, meaning that the Britney2 instances at Ubuntu and Tanglu should be mostly unaffected by the O(n^2) runtime. Although those cases will probably just fail with several “AIEEE“s. :) The 5th highlight should be equally interesting to all Britney2 instances though.

 

For me, the most interesting part is that we have never observed the O(n^2) behaviour in a daily “sid -> testing” run.  The dataset from Raphaël was basically a “stable -> testing/sid” run, which is a case I do not think we have ever done before.  Despite our current updates, there is still room for improvements on that particular use case.

In particular, I was a bit disheartened at how poorly our auto hinter(s) performed on this dataset. Combined, they only assisted with the migration of something like 28 “items”. For comparison, the “main run” migrated ~7100 “items” and 9220 items were unable to migrate. Furthermore, the “Original” auto hinter spent the better part of 15 minutes computing hints – at least it resulted in 10 “items” migrating.

 

Links to the patches:

 


Sylvestre Ledru: Debian Twitter accounts are back

8 August, 2014 - 03:46

After some downtime due to the identi.ca changes, the Debian Twitter accounts are now back.

New Twitter feed ideas are welcome.

Original post blogged on b2evolution.

Lars Wirzenius: On ticketing systems

8 August, 2014 - 02:14

I don't really like any of the ticketing systems I've ever needed to use, whether they've been used as bug tracking systems, user support issue management systems, or something else. Some are not too bad. I currently rely most on debbugs and ikiwiki.

debbugs is the Debian bug tracking system. See https://www.debian.org/Bugs/ for an entry point. It's mostly mail based, with a read-only web interface. You report a bug by sending an email to the submission address, and (preferably) include a few magic "pseudo-headers" at the top of your message body to identify the package and version. There are tools to make this easier, but mostly it's just about sending an e-mail. All replies are via e-mail as well. Effectively, each bug becomes its own little dedicated mailing list.

This is important. A ticket, whether it is a bug report or a support request, is all about the discussion. "Hey I have this problem..." followed by "Have you tried..." and so forth. Anything that makes that discussion easier and faster to have is better.

It is my very strong opinion, and long experience, that the best way to have such a discussion is over e-mail. A lot of modern ticketing systems are web based. They might have an e-mail mode, perhaps read-only, but that's mostly an afterthought. It's a thing bolted onto the side of the system because people like me whinge otherwise.

I like e-mail for this for several reasons.

  • E-mail is push, not pull. I don't need to go look at a web page to be notified that something's happened.

  • E-mail requires no extra usernames and passwords to manage. I don't need to create a new account every time I encounter a new ticketing system instance.

  • E-mail makes it very easy to respond. I can just reply to a message. I don't need to go to a web site, log in, and find a reply button.

  • I already have archives of my e-mail, so referring to old messages (or finding them) is easy and quick. (Mutt, offlineimap, and notmuch is my particular set of choices. But I'm not locked to them, and you can use whatever you like, too.)

  • E-mail is a very rich format. Discussions are inherently threaded, and various character sets, languages, attachments, and other such things just work.

For these reasons, I strongly prefer ticketing systems in which e-mails are the primary form of discussions, and e-mail is a first class citizen. I don't mind if there's other ways to participate in the discussion, but if I have to use something else than e-mail, I tend not to be happy.

I use ikiwiki to provide a distributed, shared notebook on bugs. It's a bit cumbersome, and doesn't work well for discussions.

I think we can improve on the way debbugs works, however. I've been thinking about ticketing systems for Obnam (my backup program), since it is gaining enough users that it's getting hard to keep track of discussions with just an e-mail client.

Here's what I want:

  • Obnam users do not need to care about there being a ticketing system. They report a problem by e-mailing the support mailing list, and they keep the list in cc when conducting the discussion. This is very similar to debbugs, with the distinction that there's no ticket numbers that must be kept in the replies.

  • The support staff (that's me, but hopefully others as well) have access to the ticketing system, which automatically sorts incoming messages into tickets. Tickets have sufficient metadata that it's possible to track which ones have been dealt with, or still need work, and perhaps other things. Each ticket contains a Maildir with all the e-mails belonging to that ticket.

  • The ticketing system is distributed. I need to be able to work on tickets offline, and to synchronise instances between different computers. Just like git. It's not enough to have an offline mode (e.g., queuing e-mails on my laptop for sending to debbugs when I'm back online).

  • There is a reasonably powerful search engine that can quickly find the relevant tickets, and messages, based on various criteria.

I will eventually have this. I'm not saying I'm working on this, since I don't have enough free time to do that, but there's a git repository, and some code, and it imports e-mails automatically now.

Some day there may even be a web interface.

(This has been a teaser.)

Hideki Yamane: New Debian T-shirts (2014 summer)

7 August, 2014 - 22:37
Every 4 or 5 years, Jun Nogata makes Debian T-shirts, and today I got the 2014 summer version (thanks! :-). It looks good.


I'll take 2 or 3 of the Japanese Large-size ones to DebConf14 in Portland. Please let me know if you want one.

Dirk Eddelbuettel: Rcpp now used by 250 CRAN packages

7 August, 2014 - 20:03

Rcpp reached a nice round milestone yesterday: 250 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations).

The graph on the left depicts the growth of Rcpp over time. Or at least since I started to write down some usage numbers: first informally, then via a script.

Also displayed is the relative proportion of CRAN packages using it. Rcpp cleared the four per-cent hurdle just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk.

250 is a pretty impressive, and rather humbling, number.

From everybody behind Rcpp, I would like to say a heartfelt Thank You! to all the users and of course contributors.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Craig Small: WordPress 3.9.2 for Debian

7 August, 2014 - 16:43

WordPress today released security release 3.9.2, which fixes several security issues, including a denial of service issue around XML. The corresponding Debian package 3.9.2+dfsg-1 is currently being uploaded to the Debian ftp-master server as I write this and should be available on the mirrors soon.

Unfortunately at the time of writing, there are no CVE identifiers to match these problems up with, but refer to the wordpress page for details about these bugs.

Andrew Nacin from WordPress has kindly outlined which versions are susceptible, and it looks like the Debian squeeze (3.6.1+dfsg-1~deb6u4) and wheezy (3.6.1+dfsg-1~deb7u3) packages are vulnerable to at least some of these bugs, which means for me it's patch-reading and back-porting time.

 


Ian Donnelly: How-To: Write a Plug-In (Part 2, The Contract)

7 August, 2014 - 08:14

Hi Everybody!

This is the second installment of my tutorial, How-To: Write a Plug-In. Previously, in Part 1, I went over the basic interface of an Elektra Storage Plug-In. For this installment, I will be focusing on one of the most important aspects of a plug-in, its contract.

The contract describes how a plug-in may behave. The main part of the contract describes to Elektra how it will modify the KeySet returned. This is what allows Elektra to work with various types of plug-ins. This guide will focus on one type of plug-in: the storage plug-in.

We already have a published specification for plug-ins in our repository, but I will reiterate the details here. The first part of the contract you will need is the system/elektra/modules/plugin key (where plugin represents the name of your plug-in). All this key does is let Elektra know that your plug-in exists. The next key is system/elektra/modules/plugin/exports. This key lets Elektra know that your plug-in exports symbols to Elektra. Below this key you need to create keys for each function you want to export. In my line plug-in I implemented functions for get and set, so I create two keys as follows:
keyNew ("system/elektra/modules/line/exports/get",
KEY_FUNC, elektraLineGet, KEY_END),
keyNew ("system/elektra/modules/line/exports/set",
KEY_FUNC, elektraLineSet, KEY_END),
So far we have told Elektra that our plugin exists and will be exporting functions for getting and setting keys.

The next set of keys are the system/elektra/modules/plugin/infos keys. The first three children of this key, author, licence, and description, are pretty self-explanatory. The infos/provides key tells Elektra the type of plug-in that is being used; this will be “storage” for any storage plug-in (which is what this tutorial focuses on). To be precise, it tells Elektra which abstract functionality is provided. This allows other plugins to reference abstract functionality instead of declaring a hard dependency on one specific plugin. For example, plugins could state that they want to be placed before any other plugin providing the abstract functionality “conversion”, instead of referencing, say, the keytometa plugin directly.

The infos/placements key tells Elektra which functions of the plug-in it should call. Since I implemented get and set in the line plug-in, I would set this key to “getstorage setstorage”. Next are infos/needs and infos/recommends. infos/needs should contain a list of plug-ins or providers that your plug-in needs to work (see the infos/provides key above); otherwise this key should be left blank (“”). infos/recommends works exactly like needs, except that a plug-in that needs another plug-in won’t form a valid backend if the other plug-in doesn’t exist, while a plug-in that only recommends another plug-in will still work in its absence.

Since Elektra 0.8.7, the best practice for creating a plug-in involves storing the info keys at the very top of a file named README.md in the same directory as the plugin source. For instance, in my line plugin, the first few lines of README.md are as follows:
- infos = Information about LINE plugin is in keys below
- infos/author = Ian Donnelly <ian.s.donnelly@gmail.com>
- infos/licence = BSD
- infos/needs =
- infos/provides = storage
- infos/placements = getstorage setstorage
- infos/description = Very simple storage plug-in which stores each line from a file as a key

The rest of the README.md file should contain an English description of the plug-in. You can include an introduction to the plug-in, its purpose, any shortcomings it may have, and examples of how to use it. This README.md file is standard for GitHub, which will automatically display it when a user is browsing your plug-in’s directory in the repo. It is written in Markdown.

In order to generate the contract from this file you must edit the CMakeLists.txt file in your plug-in source directory to include the following lines:
generate_readme (plugin)
add_includes (elektra-full ${CMAKE_CURRENT_BINARY_DIR})
include_directories (${CMAKE_CURRENT_BINARY_DIR})
where you substitute plugin with the name of your plugin. This will generate a file called readme_plugin.c (once again, substitute plugin) in your plugin’s build directory when you run the make command.

The best way to declare the contract is within its own function, elektraPluginContract. This makes it easier to manage the contract and is much more readable. This is the contract declaration for my line plug-in:

static inline KeySet *elektraLineContract()
{
    return ksNew (30,
        keyNew ("system/elektra/modules/line",
            KEY_VALUE, "line plugin waits for your orders", KEY_END),
        keyNew ("system/elektra/modules/line/exports", KEY_END),
        keyNew ("system/elektra/modules/line/exports/get",
            KEY_FUNC, elektraLineGet, KEY_END),
        keyNew ("system/elektra/modules/line/exports/set",
            KEY_FUNC, elektraLineSet, KEY_END),
#include "readme_line.c"
        keyNew ("system/elektra/modules/line/infos/version",
            KEY_VALUE, PLUGINVERSION, KEY_END),
        KS_END);
}

Next, you will need to make sure to actually set the contract. This is always done in the elektraPluginGet function since this is technically the only REQUIRED function in a plug-in. We just create a new KeySet, set it to our contract, and append it to the returned KeySet. You can see how this is done for line (note that this code is at the very beginning of elektraLineGet):

if (!strcmp (keyName(parentKey), "system/elektra/modules/line"))
{
    KeySet *moduleConfig = elektraLineContract();
    ksAppend(returned, moduleConfig);
    ksDel(moduleConfig);
    return 1;
}

Note that the plugin contract is nothing special, but just a KeySet interpreted by the resolver forming an Elektra backend. Elektra will request the keys below system/elektra/modules/plugin (where plugin is the pluginname) as soon as it wants to know something about that plugin.

Notice also the line #include "readme_line.c". This line pulls in the contents of readme_line.c from my plug-in’s build directory, which, thanks to the lines we added to CMakeLists.txt, will contain the infos part of the contract as defined at the top of README.md. Any changes to the keys in README.md will be reflected in the plug-in the next time it is built.
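
To give an idea of the mechanism (this is my own illustration, not the exact output of the generator), readme_line.c ends up containing keyNew entries for the infos lines of README.md, roughly like the following, which slot straight into the ksNew call of the contract above:

keyNew ("system/elektra/modules/line/infos",
    KEY_VALUE, "Information about LINE plugin is in keys below", KEY_END),
keyNew ("system/elektra/modules/line/infos/author",
    KEY_VALUE, "Ian Donnelly <ian.s.donnelly@gmail.com>", KEY_END),
keyNew ("system/elektra/modules/line/infos/licence",
    KEY_VALUE, "BSD", KEY_END),
keyNew ("system/elektra/modules/line/infos/provides",
    KEY_VALUE, "storage", KEY_END),
keyNew ("system/elektra/modules/line/infos/placements",
    KEY_VALUE, "getstorage setstorage", KEY_END),
keyNew ("system/elektra/modules/line/infos/description",
    KEY_VALUE, "Very simple storage plug-in which stores each line from a file as a key", KEY_END),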

Part 2 of this tutorial has focused on the plug-in contract, explaining what each element does and how to implement a contract. For more information about plug-ins and more detailed examples, I highly suggest reading Markus’ thesis on Elektra. Chapter 4, which starts on page 61, goes into much more detail on plug-ins and explains the various features of the contracts. Look for Part 3 soon!


Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.