Planet Debian

Planet Debian - http://planet.debian.org/

Gregor Herrmann: GDAC 2014/12

13 December, 2014 - 01:12

debian is again taking part in the OPW, & this afternoon I happened to read the backlog of the first weekly IRC meeting (in #debian-qa) between the mentors & the mentee for one of the projects. it was great to see that the participant's first patch is already merged & deployed, & that she closed her first bug report & is really getting into this debian world. – yay to great mentoring & increasing diversity!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Jingjie Jiang: Week1

12 December, 2014 - 21:37
Down the rabbit hole

Starting from this week, my OPW period officially begins.
I am very grateful for this chance. For one, I get to contribute to a useful, working, meaningful, real-world piece of software. For another, I can pick up a lot of experience and design philosophy from my mentors zack and matthieu. :)

This week my fix was for bug #763921. It basically makes the folder page render more information, specifically in the ls -l format. This offers information such as file type, permissions, size, etc.

I learned something new from “man 2 stat”, and also got more familiar (actually, more confident) with front-end stuff, namely CSS.
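
For illustration only (this is not the debsources code, and some-file is just a placeholder): the pieces of information that ls -l shows can all be read from the stat(2) fields, for example with GNU stat on the command line:

stat -c '%A %U %G %s %n' some-file   # type+permissions, owner, group, size in bytes, name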

I also got myself familiar with Python testing (and coverage). Next week, I will try to increase the test coverage a bit. Tests are an essential part of software: they ensure correctness and robustness. And, more importantly, with tests in place we can debug the software more easily. As the saying goes: sharpening the axe does not delay the cutting of firewood.

The Trello cards for next week are interesting. (In case you don't know the site, it's here: http://trello.com)

Let’s see it.


Daniel Leidert: Issues with Server4You vServer running Debian Stable (Wheezy)

12 December, 2014 - 17:51

I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot any device in backup mode and first install a fresh Debian copy using debootstrap over the provided image, to get a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)

Cron jobs not running

I installed unattended-upgrades and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update, although, looking at the system, I could see updates waiting. So I checked with

# run-parts --list /etc/cron.daily

and apt was not listed, although /etc/cron.daily/apt was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit and thus did not run. So it seems that, for whatever reason, the image authors have tampered with file permissions, and of course not by using dpkg-statoverride :( It was easy to fix the file permissions for everything under /etc/cron*, but it still leaves a very bad feeling that there are more files that have been tampered with! I'm not speaking about customizations. Those are easy to find using debsums. I'm speaking about file permissions and ownership.
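
For reference, restoring the bits boiled down to something like this (just a sketch; have a look at what actually lives in those directories before running it):

# chmod +x /etc/cron.hourly/* /etc/cron.daily/* /etc/cron.weekly/* /etc/cron.monthly/*
# run-parts --list /etc/cron.daily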

There seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get a list of all packages installed on the system, install them into a chroot environment, and collect all permission and ownership information from that fresh system. Then compare the file permissions/ownership of the installed system against this list. Not fun.
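
If you really want to go down that road, a rough (and completely untested) sketch of the comparison could look like this, assuming the freshly debootstrapped reference system lives in a chroot under /srv/ref:

dpkg-query -W -f '${Package}\n' \
  | xargs dpkg -L | sort -u \
  | while read -r f; do
      [ -e "$f" ] && [ -e "/srv/ref$f" ] || continue
      a=$(stat -c '%a %U %G' "$f")           # mode, owner, group on the vServer
      b=$(stat -c '%a %U %G' "/srv/ref$f")   # the same file in the reference chroot
      [ "$a" = "$b" ] || echo "differs: $f ($a vs $b)"
    done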

init from testing / upstart on hold

Today I discovered that apt-get wanted to install the init package. Of course I was curious why unattended-upgrades hadn't already done so. It turns out init is only in testing/unstable, and essential there. I purged it, but apt-get keeps bugging me to install this package. I really began to wonder what is going on here, because this is a plain stable system:

  • no sources listed for backports, volatile, multimedia etc.
  • sources listed for testing and unstable
  • only packages from stable/stable-updates installed
  • sets APT::Default-Release "stable";

First I checked with aptitude:

# aptitude why init
Unable to find a reason to install init.

Ok, so why:

# apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

JFTR: I see a stable system bugging me to install systemd for no obvious reason. The issue might be similar! I'm still investigating.

Now I tried to debug this:

# apt-get -o  Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) upstart [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
Added sysvinit:amd64 to the remove list
Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

Eh, upstart?

# apt-cache policy upstart
upstart:
Installed: 1.6.1-1
Candidate: 1.6.1-1
Version table:
1.11-5 0
500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
*** 1.6.1-1 0
990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
100 /var/lib/dpkg/status
# dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=============================-===================-===================-===============================================================
hi upstart 1.6.1-1 amd64 event-based init daemon

Ok, at least one package is on hold. This is another questionable customization, but in this case an easy one to fix. But I still don't understand apt-get's behaviour here, and how it differs from aptitude's. Can someone please enlighten me?
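
Getting rid of the hold itself is the easy part; either of the following should do the trick on wheezy:

# apt-mark unhold upstart
# echo "upstart install" | dpkg --set-selections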

Customized files

This isn't really an issue, but just for completeness: several files have been customized. debsums easily shows which ones:

# debsums -ac
(I don't have the original list anymore; please check yourself.)

EvolvisForge blog: Tip of the day: don’t use --purge when cross-grading

12 December, 2014 - 16:04

A surprise to see my box booting up with the default GRUB 2.x menu, followed by “cannot find a working init”.

What happened?

Well, grub:i386 and grub:x32 are distinct packages, so APT helpfully decided to purge the GRUB config. OK. Manual boot menu entry editing later, re-adding GRUB_DISABLE_SUBMENU=y and GRUB_CMDLINE_LINUX="syscall.x32=y" to /etc/default/grub, removing "quiet" again from GRUB_CMDLINE_LINUX_DEFAULT, and uncommenting GRUB_TERMINAL=console… and don’t forget to run "sudo update-grub". There. This should work.
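
For the record, the relevant part of /etc/default/grub ends up looking roughly like this (just a sketch; the empty GRUB_CMDLINE_LINUX_DEFAULT is simply what remains after dropping "quiet"):

GRUB_DISABLE_SUBMENU=y
GRUB_CMDLINE_LINUX="syscall.x32=y"
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_TERMINAL=console

…followed by the "sudo update-grub" mentioned above.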

On the plus side, nvidia-driver:i386 seems to work… but not with boinc-client:x32 (why, again? I swear, its GPU detection has been driving me nuts on >¾ of all systems I installed it on, already!).

On the minus side, I now have to figure out why…

tglase@tglase:~ $ sudo ifup -v tap1
Configuring interface tap1=tap1 (inet)
run-parts --exit-on-error --verbose /etc/network/if-pre-up.d
run-parts: executing /etc/network/if-pre-up.d/bridge
run-parts: executing /etc/network/if-pre-up.d/ethtool
ip addr add 192.168.0.3/255.255.255.255 broadcast 192.168.0.3 peer 192.168.0.4 dev tap1 label tap1
Cannot find device "tap1"
Failed to bring up tap1.

… this happens. This used to work before the cktN kernels.

Joey Hess: a brainfuck monad

12 December, 2014 - 12:02

Inspired by "An ASM Monad", I've built a Haskell monad that produces brainfuck programs. The code for this monad is available on hackage, so cabal install brainfuck-monad.

Here's a simple program written using this monad. See if you can guess what it might do:

import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
        add 31
        forever constants $ do
                add 1
                output

Here's the brainfuck code that demo generates: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>++++++++++++++++++++++++++++++++<<<<<<<<[>>>>>>>>+.<<<<<<<<]

If you feed that into a brainfuck interpreter (I'm using hsbrainfuck for my testing), you'll find that it loops forever and prints out each character, starting with space (32), in ASCIIbetical order.

The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)

Having the data pointer address available allows writing some useful utility functions like this one, which uses the next (brainfuck opcode >) and prev (brainfuck opcode <) instructions.

-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
        a <- addr
        if a > n
                then prev >> setAddr n
                else if a < n
                        then next >> setAddr n
                        else return ()

Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to use this to build anything useful..

-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
        open
        a
        close

To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)

I wrote a little brainfuckConstants that runs a BrainFuck program with these constants set up at the beginning. It just generates the brainfuck code for a series of ASCII art fishes: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>

With the fishes^Wconstants in place, it's possible to write a more useful loop. Notice how the data pointer location is saved at the beginning, and restored inside the loop body. This ensures that the provided BrainFuck action doesn't stomp on our constants.

-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
    here <- addr
    setAddr 1
    loopUnless0 $ do
        setAddr here
        a

I haven't bothered to make sure that the constants are really constant, but that could be done. It would just need a Control.Monad.BrainFuck.Safe module, that uses a different monad, in which incr and decr and input don't do anything when the data pointer is pointing at a constant. Or, perhaps this could be statically checked at the type level, with type level naturals. It's Haskell, we can make it safer if we want to. ;)

So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.

Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.

Dirk Eddelbuettel: RProtoBuf 0.4.2

12 December, 2014 - 09:19

A new release 0.4.2 of RProtoBuf is now on CRAN. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

Murray and Jeroen did almost all of the heavy lifting. Many changes were triggered by two helpful referee reports, and we are slowly getting to the point where we will resubmit a much improved paper. Full details are below.

Changes in RProtoBuf version 0.4.2 (2014-12-10)
  • Address changes suggested by anonymous reviewers for our Journal of Statistical Software submission.

  • Make Descriptor and EnumDescriptor objects subsettable with "[[".

  • Add length() method for Descriptor objects.

  • Add names() method for Message, Descriptor, and EnumDescriptor objects.

  • Clarify order of returned list for descriptor objects in as.list documentation.

  • Correct the definition of as.list for EnumDescriptors to return a proper list instead of a named vector.

  • Update the default print methods to use cat() with fill=TRUE instead of show() to eliminate the confusing [1] since the classes in RProtoBuf are not vectorized.

  • Add support for serializing function, language, and environment objects by falling back to R's native serialization with serialize_pb and unserialize_pb to make it easy to serialize into a Protocol Buffer all of the more than 100 datasets which come with R.

  • Use normalizePath instead of creating a temporary file with file.create when getting absolute path names.

  • Add unit tests for all of the above.

CRANberries also provides a diff to the previous release. More information is on the RProtoBuf page, which has a draft package vignette, a 'quick' overview vignette, and a unit test summary vignette. Questions, comments etc. should go to the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gregor Herrmann: GDAC 2014/11

12 December, 2014 - 03:45

is enthusiasm contagious? I think so. a recent example: another advent posting. – ¡gracias!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Enrico Zini: ssl-protection

11 December, 2014 - 21:35
SSL "protection"

In my experience with my VPS, setting up pretty much any service exposed to the internet, even something as simple as getting a calendar onto my phone, requires an SSL certificate, which costs money, which needs to be given to some corporation or another.

When the only way to get protection from a threat is to give money to some big fish, I feel like I'm being forced to pay protection money.

I look forward to this.

Raphaël Hertzog: Freexian’s fourth report about Debian Long Term Support

11 December, 2014 - 18:32

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November 42.5 work hours have been equally split among 3 paid contributors. Their reports are available:

  • Thorsten Alteholz did his share as usual.
  • Raphaël Hertzog worked 18 hours (catching up the remaining 4 hours of October).
  • Holger Levsen did his share but did not manage to catch up with the backlog of the previous months. As such, those unused work hours have been redistributed among the other contributors for the month of December.
New paid contributors

Last month we mentioned the possibility of recruiting more paid contributors to better share the workload, and this has already happened: Ben Hutchings and Mike Gabriel have joined the list of paid contributors.

Ben, as a kernel maintainer, will obviously take care of releasing Linux security updates. We are glad to have him on board, because backporting kernel fixes really requires skills that nobody else within the team of paid contributors had.

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 45.7 hours per month), but we are in the process of adding a few more sponsors: Roche Diagnostics International AG, Misal-System, and Bitfolk LTD. And we are still in contact with a couple of other companies which have announced their willingness to contribute but which are waiting for the new fiscal year.

But even with those new sponsors, we still have some way to go to reach our minimal goal of funding the equivalent of a half-time position. So consider asking your company representative to join this project!

In terms of security updates waiting to be handled, the situation looks better than last month: the dla-needed.txt file lists 27 packages awaiting an update (6 fewer than last month), and the list of open vulnerabilities in Squeeze shows about 58 affected packages in total. Like last month, we’re a bit behind with CVE triaging, and there are still many packages using SSLv3 for which we have no clear plan (in response to the POODLE issues).

The good news is that even though the kernel update took a large chunk of Holger's and Raphaël's time, we still managed to further reduce the backlog of security issues.

Thanks to our sponsors


Steve Kemp: An anniversary and a retirement

11 December, 2014 - 17:56

On this day last year we got married.

This morning my wife cooked me breakfast in bed for the second time in her life, the first being this time last year. In thanks I will cook a three course meal this evening.

 

In unrelated news the BlogSpam service will be retiring the XML/RPC API come 1st January 2015.

This means that any/all plugins which have not been updated to use the JSON API will start to fail.

Fingers crossed nobody will hate me too much..

Dirk Eddelbuettel: digest 0.6.6 (and 0.6.5)

11 December, 2014 - 08:27

A new release 0.6.6 of the digest package is now on CRAN and in Debian.

This release brings the xxHash non-cryptographic hash function by Yann Collet, thanks to several pull requests by Jim Hester. After the upload of version 0.6.5 we uncovered another lovely non-standardness of Windoze: you cannot format unsigned long long via printf() format strings. Great. Luckily Jim found a quick (and portable) fix via the inttypes.h header, and that went into the 0.6.6 release.

The release also contains an earlier extension for hmac() to also cover crc32 hashes, kindly provided by Suchen Jin.

I also made a number of small internal changes such as

  • switching (compiled) function registration to package load via the useDynLib() declaration in NAMESPACE,
  • (finally!!) formatting code to proper four-space indentation,
  • adding some documentation around Jim's pull request, and
  • adding a few GPL copyright headers.

CRANberries provides the usual summary of changes to the previous version.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RcppRedis 0.1.3

11 December, 2014 - 07:59

A very minor bugfix release of RcppRedis is now on CRAN. The zcount function now returns the correct type.

Changes in version 0.1.3 (2014-12-10)
  • Bug fix setting correct return type of zcount

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gregor Herrmann: GDAC 2014/10

11 December, 2014 - 04:28

debian has a bigger role than "just" providing a free operating system to our users (& derivatives), it's also an important player in the free software world at large. a recent indication of this is the composition of the FSF's High Priority Projects Committee: if I'm counting correctly, there are two active DDs & one former DD listed as members; oh, & the contact person is yet another DD :) – great to see many debianistas active all around!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Clint Adams: In Uganda, a popular marbles game is called dool.

11 December, 2014 - 03:13

Sophie stood before me. “I'm leaving with that guy,” she gestured.

“Yes, I thought that would happen,” I chuckled.

She hugged me. The guy, whose name we managed to never utter, did not hug me, though he usually does. They went home together.

That was the last time I saw Sophie.

The rest of us sat down, finished our drinks, and split up. I went with Sophie's ex-girlfriend and the guy who sometimes serves as her ironic beard.

They smoked their disgusting light cigarettes, the kind with very little tobacco but lots of horrible chemicals that make me cough and hopefully fail to give me lung cancer, because watching someone else die of that was excruciating enough.

So we get to our next destination and there is a Peruvian girl sitting on a stool and shopping for shoes on her phone. I am fascinated. Phone app developers had told me that people actually did this but I thought it was just wishful thinking on their part.

The Peruvian girl, who is named something that sounds like it was uttered accidentally by Tommy Gnosis, complains to Sophie's ex-girlfriend that some guy keeps harassing her. We instinctively form a human barrier to shield her from this alleged transgressor, who, it turns out, is the pompous drug dealer with whom Sophie's ex-girlfriend is just about to conduct business.

“I'll be right back,” she says. “Hit on her.”

“What‽ Why‽” I shout after her. There is no response.

Sophie's ex-girlfriend and the drug dealer return from the darkness, having swapped possessions.

The drug dealer is a blowhard and proceeds to regale us with stories of so little interest to me that I can't even remember what they were about, but as drug dealers are wont to do, he abuses the power of his possession to maintain the delusion that people would tolerate his presence even if he didn't have illegal commodities to sell them.

When the beard and Sophie's ex-girlfriend go out for a smoke break, I went home.

Chris Lamb: Starting IPython automatically from zsh

11 December, 2014 - 01:07

Instead of a calculator, I tend to use IPython for those quotidian bits of "mental" arithmetic:

In  [1]: 17 * 22.2
Out [1]: 377.4

However, I often forget to actually start IPython, resulting in me running the following in my shell:

$ 17 * 22
zsh: command not found: 17

Whilst I could learn to do this maths within Zsh itself, I would prefer to dump myself into IPython instead — being able to use "_" and Python modules generally is just too useful.

After following this pattern too many times, I put together the following snippet that will detect whether I have prematurely attempted a calculation inside zsh and pretend that I ran it in IPython all along:

zmodload zsh/pcre

math_regex='^[\d\.\s\+\*\/\-]+$'

function math_precmd() {
    if [ "${?}" = 0 ]
    then
        return
    fi

    if [ -z "${math_command}" ]
    then
        return
    fi

    if whence -- "$math_command" 2>&1 >/dev/null
    then
        return
    fi

    if [ "${math_command}" -pcre-match "${math_regex}" ]
    then
        echo
        ipython -i -c "_=${math_command}; print _"
    fi
}

function math_preexec() {
    typeset -g math_command="${1}"
}

typeset -ga precmd_functions
typeset -ga preexec_functions

precmd_functions+=math_precmd
preexec_functions+=math_preexec

For example:

lamby@seriouscat:~% 17 * 22.2
zsh: command not found: 17

377.4

In  [1]: _ + 1
Out [1]: 378.4

(Canonical version from my zshrc.d)

Dirk Eddelbuettel: Wilco!!

10 December, 2014 - 07:47

With a bit of luck, due to a colleague having a spare ticket, I managed to make it to an awesome Wilco show at The Riviera in Uptown.

This concert was part of a set of six shows. Tweedy and the band were fast, and loose, and wonderful, and totally beloved by the home crowd. A truly outstanding show, and a great evening.

Also: I should get out more often. Last blog entry about Wilco was from 2005. Ouch.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RcppAnnoy 0.0.4

10 December, 2014 - 07:29

A few weeks ago, RcppAnnoy had its initial release 0.0.2 and subsequent update in release 0.0.3. The latter brought Windows support, thanks to a neat pull request by Qiang Kou.

RcppAnnoy wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify. RcppAnnoy uses Rcpp Modules to offer the exact same functionality as the Python module wrapped around Annoy.

In the 0.0.3 release, I overlooked one thing: that with builds on Windows, we would also get builds against what CRAN calls R-oldrel: the previous release, which cannot turn on C++11 via the simple CXX_STD = CXX11 declaration in src/Makevars (and which we need because use of Boost brings in long long which R can only cope with under C++11 ...).

So this new release 0.0.4 does nothing more than add a Depends: R (>= 3.1.0) constraint to avoid builds that cannot turn on C++11.

Courtesy of CRANberries, there is also a diffstat report for this release. More detailed information is on the RcppAnnoy page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gregor Herrmann: GDAC 2014/9

10 December, 2014 - 03:37

today, I again had a pleasant experience around an RC bug, featuring a diligent patch submitter, & a maintainer showing his appreciation for the help. – motivating!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Joey Hess: podcasts that don't suck, 2014 edition

10 December, 2014 - 02:05
  • The Memory Palace: This is the way history should be taught, but rarely is. Nate DiMeo takes past events and puts you in the middle of them, in a way that makes you empathise so much with people from the past. Each episode is a little short story, and they're often only a few minutes long. A great example is this description of when Niagara Falls stopped. I have listened to the entire back archive, and want more. The only downside is that it's a looong time between new episodes.

  • The Haskell Cast: Panel discussion with a guest; there is a lot of expertise among them and I'm often scrambling to keep up with the barrage of ideas. If this seems too tame, check out The Type Theory Podcast instead.

  • Benjamen Walker's Theory of Everything: Only caught 2 episodes so far, but they've both been great. Short, punchy, quirky, geeky. Astoundingly good production values.

  • Lightspeed magazine and Escape Pod blur together for me. Both feature 20-50 minute science fiction short stories, and occasionally other genre fictions. They seem to get all the award-winning short stories. I sometimes fall asleep to these which can make for strange dreams. Two strongly contrasting examples: "Observations About Eggs from the Man Sitting Next to Me on a Flight from Chicago, Illinois to Cedar Rapids, Iowa" and "Pay Phobetor"

  • Serial: You probably already know about this high profile TAL spinoff. If you didn't before: You're welcome. :) Nuff said.

  • Redecentralize: Interviews with creators of decentralized internet tools like Tahoe-LAFS, Ethereum, Media Goblin, TeleHash. I just wish it went into more depth on protocols and how they work.

  • Love and Radio: This American Life squared and on acid.

  • Debian & Stuff: My friend Asheesh and that guy I ate Thai food with once in Portland in a marvelously unfocused podcast that somehow connects everything up in the end. Only one episode so far; what are you guys waiting on? :P

  • Hacker Public Radio: Anyone can upload an episode, and multiple episodes are published each week, which makes this a grab bag to pick and choose from occasionally. While mostly about Linux and Free Software, the best episodes are those that veer far afield, such as the 40 minute river swim recording featured in Wildswimming in France.

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars.

PS: A nice podcatcher for the technically inclined is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
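
(A minimal invocation, assuming a feeds.txt with one podcast feed URL per line, might look something like this:)

xargs git annex importfeed < feeds.txt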

Wouter Verhelst: Playing with ExtreMon

10 December, 2014 - 01:43

Munin is a great tool. If you can script it, you can monitor it with munin. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high munin missed it. It also comes with a great webinterfacefrontendthing that allows you to dig deep in the history of what you've been monitoring.

By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too.

Maybe use CollectD then? Rather than checking once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements of accomplishing that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most of those I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out somewhat.

And collectd doesn't do the nagios thing of actually telling you when things go down.

So what if you could see it when things go bad?

At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me getting things set up for myself so I could play with it, I got a reply along the lines of...

well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?

which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code.

As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you to express things along the lines of "if the system's load gets above X, set load.status to warning"); and you'd run at least one output plugin so that you can actually see the damn data somewhere.

While setting up ExtreMon as is isn't as easy as one would like, I did manage to get it to work. Here's what I had to do.

You will need:

  • A monitor with a FullHD (or better) resolution. Currently, the display frontend of ExtreMon assumes it has a FullHD display at all times. Even if you have a lower resolution. Or a higher one.
  • Python3
  • OpenJDK 6 (or better)

First, we clone the ExtreMon git repository:

git clone https://github.com/m4rienf/ExtreMon.git extremon
cd extremon

There's a README there which explains the bare necessities on getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to

  • install CollectD (which is required for its types.db)
  • Configure CollectD to have a line like Hostname "com.example.myhost" rather than the (usual) FQDNLookup true. This is because extremon uses the java-style reverse hostname, rather than the internet-style FQDN.
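
In collectd.conf terms (Debian ships it as /etc/collectd/collectd.conf), that second point means something like:

Hostname "com.example.myhost"
#FQDNLookup true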

Make sure the dump.py script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory; so you don't need to do anything special when updating things.

Next, we build the java libraries (which we'll need for the display thing later on):

cd java/extremon
mvn install
cd ../client/
mvn install

This will download half the Internet, build some java sources, and drop the precompiled .jar files in your $HOME/.m2/repository.

We'll now build the display frontend. This is maintained in a separate repository:

cd ../..
git clone https://github.com/m4rienf/ExtreMon-Display.git display
cd display
mvn install

This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. Patch (and pull request) on github.

With that patch it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:

  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.

If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:

keytool -genkeypair -alias extremontest

after you enter all the information that keytool asks for, it will generate a self-signed code signing certificate, valid for six months, called extremontest. Producing a code signing certificate with a longer validity and/or one which is signed by an actual CA is left as an exercise for the reader.
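
(If six months is too short for your taste, keytool will happily do longer; something along these lines should work:)

keytool -genkeypair -alias extremontest -keyalg RSA -keysize 2048 -validity 3650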

Now, we will sign the .jar file:

jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest

There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course.

You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory.

Download this JNLP file, and store it on the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to barbershop.grep.be by the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication—chalice_in_http is an input mechanism.

Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring.

Now run javaws https://url/x3console.jnlp to start Extremon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would tell me simply that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself in.

Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.

The ExtreMon github repository comes with a bunch of extra plugins; some are special-case for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out.

Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.

The example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.

Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the dialog behind Ctrl+Shift+X (the XML editor). A lot.

Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.