Planet Debian

Planet Debian - http://planet.debian.org/

Richard Hartmann: Release Critical Bug report for Week 52

27 December, 2014 - 15:29

Sadly, I am a day late.

This post brought to you by download speeds of ~2-9kb/s and upload speeds of 1 kb/s.

Even though I am only a few kilometers away from Munich, I have a worse Internet connection here than I had in the middle of nowhere in Finland.

Also, the bug count jumped up by about 40 between Thursday and today; otherwise, we would have been ahead of squeeze.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1088 (Including 171 bugs affecting key packages)
    • Affecting Jessie: 147 (key packages: 95) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 112 (key packages: 72) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 24 bugs are tagged 'patch'. (key packages: 16) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 7 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 81 bugs are neither tagged patch, nor marked done. (key packages: 56) Help make a first step towards resolution!
      • Affecting Jessie only: 35 (key packages: 23) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 21 bugs are in packages that are unblocked by the release team. (key packages: 14)
        • 14 bugs are in packages that are not unblocked. (key packages: 9)

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy          Jessie
43    284 (213+71)   468 (332+136)   319 (240+79)
44    261 (201+60)   408 (265+143)   274 (224+50)
45    261 (205+56)   425 (291+134)   295 (229+66)
46    271 (200+71)   401 (258+143)   427 (313+114)
47    283 (209+74)   366 (221+145)   342 (260+82)
48    256 (177+79)   378 (230+148)   274 (189+85)
49    256 (180+76)   360 (216+155)   226 (147+79)
50    204 (148+56)   339 (195+144)   ???
51    178 (124+54)   323 (190+133)   189 (134+55)
52    115 (78+37)    289 (190+99)    147 (112+35)
1     93 (60+33)     287 (171+116)
2     82 (46+36)     271 (162+109)
3     25 (15+10)     249 (165+84)
4     14 (8+6)       244 (176+68)
5     2 (0+2)        224 (132+92)
6     release!       212 (129+83)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

Joey Hess: shell monad day 3

27 December, 2014 - 10:18

I have been hard at work on the shell-monad ever since it was born on Christmas Eve. It's now up to 820 lines of code, and has nearly comprehensive coverage of all shell features.

Time to use it for something interesting! Let's make a shell script and a haskell program that both speak a simple protocol. This kind of thing could be used by propellor when it's deploying itself to a new host. The haskell program can ssh to a remote host and run the shell program, and talk back and forth over stdio with it, using the protocol they both speak.

abstract beginnings

First, we'll write a data type for the commands in the protocol.

data Proto
    = Foo String
    | Bar
    | Baz Integer
    deriving (Show)

Now, let's go type class crazy!

class Monad t => OutputsProto t where
    output :: Proto -> t ()

instance OutputsProto IO where
    output = putStrLn . fromProto

So far, nothing interesting; this makes the IO monad an instance of the OutputsProto type class, and gives a simple implementation to output a line of the protocol.

instance OutputsProto Script where
    output = cmd "echo" . fromProto

Now it gets interesting. The Script monad is now also an instance of OutputsProto. To output a line of the protocol, it just uses echo. Yeah -- shell code is a member of a haskell type class. Awesome -- most abstract shell code evar!

Similarly, we can add another type class for monads that can input the protocol:

class Monad t => InputsProto t p where
    input :: t p

instance InputsProto IO Proto where
    input = toProto <$> readLn

instance InputsProto Script Var where
    input = do
        v <- newVar ()
        readVar v
        return v

While the IO version reads and deserializes a line back to a Proto, the shell script version of this returns a Var, which has the newly read line in it, not yet deserialized. Why the difference? Well, Haskell has data types, and shell does not ...

speaking the protocol

Now we have enough groundwork to write haskell code in the IO monad that speaks the protocol in arbitrary ways. For example:

protoExchangeIO :: Proto -> IO Proto
protoExchangeIO p = do
    output p
    input

fooIO :: IO ()
fooIO = do
    resp <- protoExchangeIO (Foo "starting up")
    -- etc

But that's trivial and uninteresting. Anyone who has read to here certainly knows how to write haskell code in the IO monad. The interesting part is making the shell program speak the protocol, including doing various things when it receives the commands.

foo :: Script ()
foo = do
    stopOnFailure True
    handler <- func (NamedLike "handler") $
        handleProto =<< input
    output (Foo "starting up")
    handler
    output Bar
    handler

handleFoo :: Var -> Script ()
handleFoo v = toStderr $ cmd "echo" "yay, I got a Foo" v

handleBar :: Script ()
handleBar = toStderr $ cmd "echo" "yay, I got a Bar"

handleBaz :: Var -> Script ()
handleBaz num = forCmd (cmd "seq" (Val (1 :: Int)) num) $
    toStderr . cmd "echo" "yay, I got a Baz"

serialization

I've left out a few serialization functions. fromProto is used in both instances of OutputsProto. The haskell program and the script will both use this to serialize Proto.

fromProto :: Proto -> String
fromProto (Foo s) = pFOO ++ " " ++ s
fromProto Bar = pBAR ++ " "
fromProto (Baz i) = pBAZ ++ " " ++ show i

pFOO, pBAR, pBAZ :: String
(pFOO, pBAR, pBAZ) = ("FOO", "BAR", "BAZ")

And here's the haskell function to convert the other direction, which was also used earlier.

toProto :: String -> Proto
toProto s = case break (== ' ') s of
    (w, ' ':rest)
        | w == pFOO -> Foo rest
        | w == pBAR && null rest -> Bar
        | w == pBAZ -> Baz (read rest)
        | otherwise -> error $ "unknown protocol command: " ++ w
    (_, _) -> error "protocol splitting error"

We also need a version of that written in the Script monad. Here it is. Compare and contrast the function below with the one above. They're really quite similar. (Sadly, not similar enough to allow refactoring out a common function..)

handleProto :: Var -> Script ()
handleProto v = do
    w <- getProtoCommand v
    let rest = getProtoRest v
    caseOf w
        [ (quote (T.pack pFOO), handleFoo =<< rest)
        , (quote (T.pack pBAR), handleBar)
        , (quote (T.pack pBAZ), handleBaz =<< rest)
        , (glob "*", do
            toStderr $ cmd "echo" "unknown protocol command" w
            cmd "false"
          )
        ]

Both toProto and handleProto split the incoming line apart into the first word and the rest of the line, then match the first word against the commands in the protocol, and dispatch to the appropriate actions. So, how do we split a variable apart like that in the Script monad? Like this...

getProtoCommand :: Var -> Script Var
getProtoCommand v = trimVar LongestMatch FromEnd v (glob " *")

getProtoRest :: Var -> Script Var
getProtoRest v = trimVar ShortestMatch FromBeginning v (glob "[! ]*[ ]")

(This could probably be improved by using a DSL to generate the globs too..)
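For reference, here is roughly how those two trims translate into POSIX parameter expansion (an illustrative sketch; the variable name is made up, but the patterns match the generated script shown at the end):

# Illustrative only: how the two trimVar calls map to shell parameter expansion.
_v='FOO starting up'
cmd=${_v%%\ *}          # remove longest " *" match from the end     -> FOO
rest=${_v#[!\ ]*[\ ]}   # remove shortest "[! ]*[ ]" match from front -> starting up
echo "$cmd" "$rest"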

conclusion

And finally, here's a main to generate the shell script!

main :: IO ()
main = T.writeFile "protocol.sh" $ script foo

The pretty-printed shell script that this produces is not very interesting, but I'll include it at the end for completeness. More interestingly for the purposes of sshing to a host and running the command there, we can use linearScript to generate a version of the script that's all contained on a single line. Also included below.

I could easily have written the pretty-printed version of the shell script in twice the time that it took to write the haskell program that generates it and also speaks the protocol itself.

I would certainly have had to test the hand-written shell script repeatedly. Code like for _x in $(seq 1 "${_v#[!\ ]*[\ ]}") doesn't just write and debug itself. (Until now!)

But, the generated script worked 100% on the first try! Well, it worked as soon as I got the Haskell program to compile...

But the best part is that the Haskell program and the shell script don't just speak the same protocol. They both rely on the same definition of Proto. So this is fairly close to the kind of type-safe protocol serialization that Fay provides, when compiling Haskell to javascript.

I'm getting the feeling that I won't be writing too many nontrivial shell scripts by hand anymore! :)

the complete haskell program

Is here, all 99 lines of it.

the pretty-printed shell program
#!/bin/sh
set -x
_handler () { :
    _v=
    read _v
    case "${_v%%\ *}" in FOO) :
        echo 'yay, I got a Foo' "${_v#[!\ ]*[\ ]}" >&2
    ;; BAR) :
        echo 'yay, I got a Bar' >&2
    ;; BAZ) :
        for _x in $(seq 1 "${_v#[!\ ]*[\ ]}")
        do :
            echo 'yay, I got a Baz' "$_x" >&2
        done
    ;; *) :
        echo 'unknown protocol command' "${_v%%\ *}" >&2
        false
    ;; esac
}
echo 'FOO starting up'
_handler
echo 'BAR '
_handler

the one-liner shell program
set -p; _handler () { :;    _v=;    read _v;    case "${_v%%\ *}" in FOO) :;        echo 'yay, I got a Foo' "${_v#[!\ ]*[\ ]}" >&2;     ;; BAR) :;      echo 'yay, I got a Bar' >&2;    ;; BAZ) :;      for _x in $(seq 1 "${_v#[!\ ]*[\ ]}");      do :;           echo 'yay, I got a Baz' "$_x" >&2;      done;   ;; *) :;        echo 'unknown protocol command' "${_v%%\ *}" >&2;       false;  ;; esac; }; echo 'FOO starting up'; _handler; echo 'BAR '; _handler

Sven Hoexter: on call one-liner: Who is sitting in my swap space?

27 December, 2014 - 03:24

A "nearly out of swap"-alarm¹ during on call duty led me to quickly assemble a one-liner to grab a list of PIDs and the amount of memory swapped out from /proc/[pid]/smaps. That one-liner later got a bit polishing from my colleague H. to look like this:

for x in $(ps -eo pid h); do s=/proc/${x}/smaps; [ -e ${s} ] && awk -vp=${x} 'BEGIN { sum=0 } /Swap:/ { sum+=$2 } END { if (sum!=0) print sum " PID " p}' ${s}; done | sort -rg
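The same logic written out over multiple lines, for readability (an equivalent sketch of the one-liner above):

# Sum the per-mapping "Swap:" values (kB) from /proc/[pid]/smaps for each process.
for pid in $(ps -eo pid h); do
    smaps=/proc/${pid}/smaps
    # skip processes that exited between the ps call and now
    [ -e "${smaps}" ] || continue
    awk -v p="${pid}" '/^Swap:/ { sum += $2 } END { if (sum) print sum " PID " p }' "${smaps}"
done | sort -rg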

After I shared this one with some friends, V. came up with a faster version (and properly formatted output :) that relies on the "VmSwap" value in /proc/[pid]/status. Since the one-liner above has to add up one "Swap" value per memory segment, it's obvious why it's very slow on systems with many processes and a lot of memory.

awk 'BEGIN{printf "%-7s %-16s %s (KB)\n", "PID","COMM","VMSWAP"} {
if($1 == "Name:"){n=$2}
if($1 == "Pid:"){p=$2}
if($1 == "VmSwap:" && $2 != "0"){printf "%-7s %-16s %s\n", p,n,$2 | "sort -nrk3"}
}' /proc/*/status

A drawback of this second version is that it relies on the "VmSwap" value, which is only available in Linux 2.6.34 and later. It also ended up in the 2.6.32 based kernel of RHEL 6, so this one should work from RHEL 6 and Debian/wheezy onwards. The first version also works on RHEL 5 and Debian/squeeze since both have a (default) kernel with "CONFIG_PROC_PAGE_MONITOR" enabled, which is what you need to enable /proc/[pid]/smaps.
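A quick way to check whether the running kernel exposes that field (a small sketch):

# If this prints a VmSwap line, the faster version above will work.
grep -m1 '^VmSwap:' /proc/self/status || echo 'no VmSwap field, use the smaps version'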

¹ The usefulness of swap and why there is no automatic ressource assignment check and bookkeeping, based on calculations around things like the JVM -Xmx settings and other input, is a story on its own. There is room for improvement. A CMDB (Chapter 10.7) would be a starting point.

Ritesh Raj Sarraf: Linux Containers and Productization

27 December, 2014 - 00:16

Linux has improved many, many things over the last couple of years. Of the many improvements, the one that I've started leveraging the most today is Control Groups.

In the past, when there was a need to build a prototype for a solution, we needed hardware.

Then came the virtualization richness to Linux. It came in 2 major flavors: KVM (Full Virtualization) and Xen (Para-Virtualization). Over the years, the difference between para and full, for both implementations, has become almost none. KVM now has support for Para-Virtualization, with para-virtualized drivers for most resource-intensive tasks, like network and I/O. Similarly, Xen has Full Virtualization support with the help of Qemu-KVM.

But, if you had to build a prototype implementation comprising a multi-node setup, virtualization could still be resource hungry. And if your focus was an application (say, a web framework), virtualization was overkill anyway.

Thanks to Linux Containers, prototyping application-based solutions is now a breeze on Linux. The LXC project is very well designed, and well balanced in terms of features (as compared to the recently introduced Docker implementation).

From an application's point of view, Linux containers provide namespace, network and resource isolation, which fulfils more than 90% of an application's needs. For some apps, where a dependency on the kernel is needed, Linux containers will not serve the need.

Beyond the defaults provided by the distribution, I like to create a base container with my customizations, as a template. This allows me to quickly create environments, without too much housekeeping to do for the initial setup.

My base config, looks like:

rrs@learner:~$ sudo cat /var/lib/lxc/deb-template/config
[sudo] password for rrs:
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# CPU
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234

# Mem
lxc.cgroup.memory.limit_in_bytes = 2000M
lxc.cgroup.memory.soft_limit_in_bytes = 1500M

# Network
lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:0c:c5:d4
lxc.network.flags = up
lxc.network.link = lxcbr0

# Root file system
lxc.rootfs = /var/lib/lxc/deb-template/rootfs

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.mount = /var/lib/lxc/deb-template/fstab
lxc.utsname = deb-template
lxc.arch = amd64

# For apt
lxc.mount.entry = /var/cache/apt/archives var/cache/apt/archives none defaults,bind 0 0
23:07 ♒♒♒   ☺    
rrs@learner:~$

Some of the important settings to have in the template are the mount point, pointing to your local apt cache, and the CPU and memory limits.

If there was one feature request to ask the LXC developers, I'd ask them to provide a util-lxc tools suite. Currently, to know the memory (soft/hard) allocation for the container, one needs to do the following:

rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ cat memory.soft_limit_in_bytes memory.limit_in_bytes
1572864000
2097152000
23:21 ♒♒♒   ☺    
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.

1572864000/1024/1024
1500
quit
23:21 ♒♒♒   ☺    
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$

Tools like lxc-cpuinfo and lxc-free would be much better.
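A tiny wrapper along those lines would already cover the lxc-free case (a sketch; the cgroup path matches the session above, the output format is made up):

# Print a container's soft and hard memory limits in MiB.
container=deb-template
cg=/sys/fs/cgroup/memory/lxc/${container}
for f in memory.soft_limit_in_bytes memory.limit_in_bytes; do
    printf '%-30s %s MiB\n' "${f}" "$(( $(cat ${cg}/${f}) / 1024 / 1024 ))"
done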

Finally, there's been a lot of buzz about Docker. Docker is an alternative product offering, like LXC, for Linux Containers. From what I have briefly looked at, Docker doesn't seem to provide any groundbreaking new interface beyond what is already possible with LXC. It does take all the tidbit tools and present you with a unified docker interface. But other than that, I couldn't find it very appealing. And the assumption that the profiles should be pulled off the internet (GitHub?) is not very exciting. I am hoping they do have other options, where dependence on the network is not really required.


Ritesh Raj Sarraf: Linux Desktop in 2014

26 December, 2014 - 23:32

We are almost at the end of 2014. While 2014 has been a year with many mixed experiences, I think it does warrant one blog entry ;-)

Recently, I've again started spending more time on Linux products / solutions, rather than focusing on a specific subsystem. This change has been good. It has allowed me to recap all the advancements that have happened in the Linux world, umm... in the last 5 years.

Once upon a time, the Linux kernel sucked on the Desktop. It led to many desktop improvement related initiatives. Many were accepted into the kernel, while others remain out-of-tree to this day. Over the years, many people have advocated for such out-of-tree features, for example the -ck patchset, claiming better performance. Most of the time, these are patches not carried by your distribution vendor, which leads you to alternative sources if you want to try them. Having some spare time, I tried the Alternative Kernel project. It is nothing but a bunch of patchsets applied on top of the stock kernel.

After trying it out, I must say that these patchsets are out-of-tree for good reason. I could hardly make out any performance gain, but I did notice a considerable increase in power consumption. On my stock Debian kernel, the power consumption lies around 15-18 W. That increased to 20+ W on the alternative kernels. I guess most advocates for the out-of-tree patchsets only measure the 1-2% performance gain, while completely neglecting the fact that the kernel sleeps less often.

But back to the generic Linux kernel performance problem......

In the last 2 years, the performance suckiness of the Linux kernel has hardly been noticed. So what changed?

The last couple of years have seen a rise in high capacity RAM, at affordable consumer price. 8 - 16 GiB of RAM is common on laptops these days.

If you go and look at the sucky bug report linked above, it is marked as closed, justified as Working as Designed. The core problem with the bug reported has to do with slow media. The Linux scheduler is (in?)efficient: it works hard to give you the best throughput and performance (for server workloads). I/O threads are a high-priority task in the Linux kernel. Now map this scene to the typical Linux desktop. If you end up doing too much buffered I/O, thus exhausting all your available cache and triggering paging, you are in for some sweet experience.

Given that the kernel highly prioritizes I/O tasks, if your underlying persistent storage device is slow (which is common if you have an external USB disk, or even an internal rotating magnetic disk), you end up blocking all your CPU cycles against the slow media. That in turn leaves no CPU cycles available for your other desktop tasks. Hence, when you do I/O at such a level, you find your desktop goes terribly sluggish.

It is not that your CPU is slow or incapable. It is just that all your CPU slices are blocked, waiting for your write() to report completion.

So what exactly changed that we don't notice that problem any more?

  1. RAM - The increase in RAM has let more I/O be accommodated in cache. The best way to see this in action is to copy a large file, something almost equivalent to the amount of RAM you have (but make sure it is less than the overall amount). For example, if you have 4 GiB of RAM, try copying a file of size 3.5 GiB in your graphical file manager. At the same time, on the terminal, keep triggering the `sync` command and check how long each `sync` takes to complete (a rough sketch of this experiment follows the list). By being able to cache large amounts of data, the Linux kernel has been better at improving the overall performance in the eyes of the user.
  2. File System - But RAM is not alone. The file system has played a very important role too. Earlier, with the ext3 file system, we had a commit interval of (5?) 30 seconds. That led to the above mentioned `sync` equivalent getting triggered every 30 secs. It was a safety measure to ensure that, at worst, you lose 30 secs worth of data, but it did hinder performance. With ext4 came delayed allocation. Delayed allocation allowed the write() to return immediately while the data was in cache, and deferred the actual write to the file system. This allowed the allocator to find the best contiguous slot for the data to be written. Thus it improved the file system. It also brought corruption for some of the apps. :-)
  3. Solid State Drives - The file system and RAM alone aren't the sole factors that led to the drastic improvement in the overall experience of the Linux desktop. If you read through the bug report linked in this article, you'll find the core root cause to be slow persistent storage devices. Could the allocator have been improved (like Windows) to not be so punishing on the Linux desktop? Maybe, yes. But that was a decision for the kernel devs, and they believed (and believe) in keeping those numbers to a minimum. Thus for I/O, as of today, you have 3 schedulers and for CPU, just 1. What dramatically improved the overall Linux Desktop performance was the general availability of solid state drives. These devices are really fast, which in effect made the write() calls return immediately, and did not block the CPU.
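A rough sketch of the experiment from point 1 (the file name and size are illustrative; pick something a bit smaller than your RAM):

# Start a large buffered copy, then see how long flushing the dirty cache takes.
cp big-3.5G.img /mnt/backup/ &
time sync
time sync    # repeat while the copy is still running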

So, it was the advancement in both hardware and software that led to better overall desktop performance. Does the above mentioned bug still exist? Yes. It's just that it is much harder to trigger now. You'll have to ensure that you max out your cache and trigger paging, and then try to ask for some CPU cycles.

Not that we didn't use Linux on the desktop / laptop back then. It sure did suck more than, say, Windows. But hey, sometimes we have to eat our own dog food. Even then, there were some efforts to overcome the limitations of the time. The first and obvious one is the out-of-tree patchsets. But there were also some other efforts to improve the situation.

The first such effort that I can recollect was ulatency. With Linux adding support for Control Groups, there were multiple avenues open for tackling and taming the resource starvation problem. The crux of the problem was that Linux gave way too much priority to I/O tasks. I still wish Linux had a profile mechanism, whereby on the kernel command line we could specify which profile Linux should boot into. Anyway, with ulatency we saw improvements in the Linux Desktop experience. ulatency had built-in policies to whitelist / blacklist a set of profiles. For example, KDE was a profile. Thus, ulatency would club all KDE processes into a group and give that group a higher precedence, to ensure that it had its fair share of CPU cycles.

Today, at almost the end of 2014, there are many more consumers of Linux's control groups. Prominent names would be LXC and systemd.

ulatency has hardly seen much development in the last year. Probably it is time for systemd to take over.

systemd is expected to bring lots of features to the Linux world, thus bridging the (many) gaps Linux has had on the desktop. It makes extensive use of Control Groups for a variety of (good) reasons, which has led it to be a Linux-only product. I think it should never have marketed itself as the init daemon. It fits better when called the System Management Daemon.

The path for the Linux Desktop looks much brighter in 2015 and beyond, thanks to all the advancements that have happened so far. The other important players who should be thanked are the mobile and low-latency products (Android, ChromeBook), whose engagement in productizing Linux has led to better features overall.


Lars Wirzenius: 'It Will Never Work in Theory' and software engineering research

26 December, 2014 - 16:35

It Will Never Work in Theory is a web site that blogs, though slowly, about important research and findings on software development. It's one of the most interesting sites I've found recently, possibly for a long time.

I disagree with the term "software engineering" to describe the software development that happens today. I don't think it's accurate, and indeed I think the concept's too much of a fantasy for the term to be used seriously about what practicing developers do. For software development to be an engineering discipline, it needs a strong foundation based on actual research. In short, we need to know what works, what doesn't work, and preferably why in both cases. We don't have much of that.

This website is one example of how that's now changing, and that's good. As a practicing software developer, I want to know, for example, whether code review actually helps improve software quality, the speed of software development, and the total cost of a software project, and also what the limits of code review are, how it should be done well, and what kinds of review don't work. Once I know that, I can decide whether and how to do reviews in my development teams.

The software development field is full of anecdotal evidence about these things. It's also full of people who've done something once, and then want to sell books, seminars, and lectures about it. That's not been working too well: it makes research be mostly about fads, and that's no way to build a strong foundation.

Now I just need the time to read everything, and the brain to understand big words.

Francois Marier: Making Firefox Hello work with NoScript and RequestPolicy

26 December, 2014 - 11:40

Firefox Hello is a new beta feature in Firefox 34 which gives users the ability to do plugin-free video-conferencing without leaving the browser (using WebRTC technology).

If you cannot get it to work after adding the Hello button to the toolbar, this post may help.

Preferences to check

There are a few preferences to check in about:config:

  • media.peerconnection.enabled should be true
  • network.websocket.enabled should be true
  • loop.enabled should be true
  • loop.throttled should be false

NoScript

If you use the popular NoScript add-on, you will need to whitelist the following hosts:

  • firefox.com
  • loop.services.mozilla.com
  • opentok.com
  • tokbox.com

RequestPolicy

If you use the less popular but equally annoying RequestPolicy add-on, then you will need to whitelist the following destination hosts:

  • mozilla.com
  • opentok.com
  • tokbox.com

I have unfortunately not been able to find a way to restrict the above to a set of (source, destination) pairs. I suspect that the use of websockets confuses RequestPolicy.

Russ Allbery: pam-krb5 4.7

26 December, 2014 - 11:01

It's been a long, long time since the last upstream release. Rather too long, as the changes to the portability and test framework were larger than the changes to the module itself. But there are a few bug fixes here and one new feature.

The new feature is a new option, no_update_user, which disables the normal update of PAM_USER for the rest of the PAM stack to the canonicalized local username. This allows users to do things like enter Kerberos principals into the login prompt and have the right thing happen, but sometimes it's important to keep the authentication credentials as originally entered and not canonicalize, even if there's a local canonicalization available. This new option allows that.
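For example, a PAM stack entry using the new option might look like this (a sketch only; the control flag, module path and other options will differ per site):

auth  sufficient  pam_krb5.so minimum_uid=1000 no_update_user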

In the bug-fix department, the module now suppresses spurious password prompts from Heimdal while using PKINIT and understands more Kerberos errors for purposes of try_first_pass support and returning better PAM errors.

The documentation now notes next to each option the version of pam-krb5 at which it was introduced with its current meaning.

You can get the latest version from the pam-krb5 distribution page.

Vasudev Kamath: Notes: LXC How-To

26 December, 2014 - 09:41

LXC - Linux Containers allows us to run multiple isolated Linux systems on the same control host. This is useful for testing applications without changing our existing system.

To create an LXC container we use the lxc-create command. It accepts a template option, with which we can choose the OS we would like to run in the isolated virtual environment. On a Debian system I see the following templates supported:

[vasudev@rudra: ~/ ]% ls /usr/share/lxc/templates
lxc-alpine*    lxc-archlinux*  lxc-centos*  lxc-debian*    lxc-fedora*  lxc-openmandriva*  lxc-oracle*  lxc-sshd*    lxc-ubuntu-cloud*
lxc-altlinux*  lxc-busybox*    lxc-cirros*  lxc-download*  lxc-gentoo*  lxc-opensuse*      lxc-plamo*   lxc-ubuntu*

For my application testing I wanted to create a Debian container. By default, the template provided by the lxc package creates a Debian stable container. This can be changed by passing an option to debootstrap after -- as shown below.

sudo MIRROR=http://localhost:9999/debian lxc-create -t debian \
     -f   container.conf -n container-name -- -r sid

The -r switch is used to specify the release, and the MIRROR environment variable is used to choose the required Debian mirror. I wanted to use my own local approx installation, so I can save some bandwidth.

container.conf is the configuration file used for creating the LXC container; in my case it contains basic information on how container networking should be set up. The configuration is basically taken from the LXC Debian wiki.

lxc.utsname = aptoffline-dev
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0.1
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.3.2/24
lxc.network.veth.pair = vethvm1

I'm using VLAN setup described in Debian wiki: LXC VLAN Networking page. Below is my interfaces file.

iface eth0.1 inet manual

iface br0.1  inet manual
   bridge_ports eth0.1
   bridge_fd 0
   bridge_maxwait 0

Before launching the LXC container, make sure you run the commands below:

sudo ifup eth0.1
sudo ifup br0.1

# Also give ip to bridge in same subnet as lxc.network.ipv4
sudo ip addr add 192.168.3.1/24 dev br0.1

I'm giving an IP address to the bridge so that I can communicate with the container from my control host, once it comes up.

Now start the container using the command below:

sudo lxc-start -n container -d -c tty8

We are starting the container in daemon mode and attaching it to console tty8. If you want, you can drop the -d and -c options to start it in the foreground. But it's better to start it in the background and attach using the lxc-console command shown below.

sudo lxc-console -n container -t tty8

You can detach from the console using the Ctrl+a q combination and let the container execute in the background.

It's also possible to simply ssh into the running container, since we have enabled networking.

Stopping the container should be done using the lxc-stop command, but without the -k switch (kill) this command never returned for me. Even with a timeout, the container is not stopped.

sudo lxc-stop -n container

The -r switch can be used to reboot the container. Since I couldn't get a clean shutdown, I normally attach the console and issue a halt command in the container itself, as in the sketch below. Not sure if this is the right way, but it gets the thing done.
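Roughly, that workaround looks like this (using the same container name as earlier; the forced kill is only for when a clean shutdown does not matter):

sudo lxc-console -n container -t tty8   # attach to the container console
# ... log in inside the container, then run:
halt
# or, from the host, force-stop it:
sudo lxc-stop -n container -k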

I consider Linux containers a better alternative for spawning a virtual Linux environment than running a full-blown VM like VirtualBox or VMware.

Russ Allbery: rra-c-util 5.6

26 December, 2014 - 07:28

rra-c-util is my personal collection of infrastructure for C and Perl packages. This release has a quite-large accumulation of small fixes, mostly from work Julien ÉLIE has done on merging it into INN and testing INN's portability on a wide variety of platforms.

The highlights:

  • Add a new module-version.t test and helper script for Perl modules that checks that all modules included in a distribution have the correct version.

  • Add a new portable/socket-unix.h portability layer that includes sys/un.h and defines SUN_LEN if the implementation does not do so. (Use in combination with the RRA_MACRO_SUN_LEN Autoconf macro.)

  • Check for integer overflow in vector_join and vector_cjoin.

  • Avoid strlcpy in the getnameinfo and setenv replacements. I'm slowly trying to eliminate strlcpy and strlcat in my code in favor of asprintf and other cleaner replacements following an extensive discussion on the glibc mailing lists.

  • Fix network_addr_match with the empty string on AIX 7.1, whose inet_aton accepts the empty string.

  • network_connect, network_connect_host, and network_client_create all accept "any" as a synonym for "all" as a source, for parallelism with other functions.

  • Add PIPE_READ and PIPE_WRITE macros to util/macros.h to name the elements of the array passed to pipe.

  • Fix Windows portability for socket functions that, on UNIX, want to return an error code of EINVAL.

  • Fix visibility of some of the utility and portability functions.

You can get the latest version from the rra-c-util distribution page.

Dirk Eddelbuettel: rfoaas 0.0.4.20141225 -- not on CRAN

26 December, 2014 - 05:34

A new version of rfoaas was prepared for CRAN, but refused on the grounds of having been updated within 24 hours. Oh well.

To recap, the rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

And having seen the Christmas Eve (i.e. December 24) update, upstream immediately and rather kindly added a new xmas(name, from) function -- so now we can do rfoaas::xmas("CRAN Maintainers") to wish the CRAN Maintainers a very Merry Christmas.

So for once, there is no CRANberries report as the package is not on CRAN :-/ There is of course always GitHub.

Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: C TAP Harness 3.2

26 December, 2014 - 04:38

This is a fairly minor release, mostly to have a release of files that I'm updating in rra-c-util. Most of the changes are typos and other cosmetic fixes. But I hate releasing a new version of something without a little bit of new functionality, so I implemented comment support in test lists. The runtests driver now ignores comments (lines starting with #) and blank lines in test list files, and leading whitespace in front of test names.
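For instance, a test list using the new syntax might look like this (the test names are illustrative):

# comment lines and blank lines are now ignored
util/basic

    util/network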

You can get the latest version from the C TAP Harness distribution page.

Russ Allbery: Review: Bad Pharma

25 December, 2014 - 13:03

Review: Bad Pharma, by Ben Goldacre

Publisher: Faber and Faber
Copyright: 2012, 2013
Printing: 2014
ISBN: 0-86547-806-6
Format: Trade paperback
Pages: 397

If you have previously read Bad Science by Ben Goldacre, it's doubtful you need me to do anything other than point out that he has a new book. Oh, and it's subtitled How Drug Companies Mislead Doctors and Harm Patients, and it's written with the same indignant determination, clear explanation, and appreciation for real science as Bad Science.

If you haven't read Bad Science, I recommend it. You don't have to read it before Bad Pharma, but it's a more approachable start, and a funnier book. Bad Science opens with some obviously horrible examples, and slowly develops the tools required to analyze more advanced and deceptive quackery. Bad Pharma jumps straight into the deep end of statistical biases in data, how and why they're introduced, and why they undermine your health care.

This is a more serious book than Bad Science, since it lacks the medical quackery that is so ludicrous it's funny. Everything in Bad Science is successfully sold to someone — there are people who believe in candling — but most readers of the book can laugh a bit that anyone would believe in such things before getting to the parts of the book where the quackery is widespread and kills people. Bad Pharma focuses on the mainstream pharmaceutical industry and the way that it distorts the scientific process, which leads to serious consequences quite quickly.

Goldacre is more pointedly on a mission here than in his previous book. This is not just an exposé. It includes detailed recommendations for how to make the drug evaluation process better, and some specific ethical recommendations for doctors to avoid being unduly influenced by drug company marketing. But before that, he provides the best detailed explanation of how the drug research and approval process is supposed to work that I've seen. There's also a lot of good information about how to detect trials and related studies that haven't been done properly, and how to separate marketing language from scientific evidence. Goldacre wisely does not get into all the details of how to do a trial properly, which would be way beyond the scope of this book, but he does provide valuable rules of thumb and red flags that indicate when someone is probably not doing the trial properly.

A point that's both frustrating and enlightening, and made very well by this book, is that fixing many of the problems with the drug approval process is not that difficult. Fixing all of them would be exceedingly difficult, of course, since they involve humans, complex financial motivations, the effectiveness of propaganda, and talented people who are paid well to create favorable impressions for new drugs. But we could get quite significant benefits from a few straightforward enhancements of the existing drug approval process, such as requiring that all clinical trials be pre-registered and all trial results published as a prerequisite for any drug approval. Goldacre makes his case for these changes forcefully and persuasively, and with justified anger. The current situation is bad enough for people like me who are only potential patients with no urgent medical issues. For a practicing doctor like Goldacre, it's infuriating to be actively denied the information required to effectively save people's lives.

Normally, I find books like this interesting, but depressing and frustratingly limited. It's easy to write books about abuses that currently exist, or the limitations of the current approach. It's much harder to do what Goldacre does here: clearly describe the goals and ideals of drug testing for the lay reader, detail how the current approaches fall short of those goals, and then propose practical and concrete ways to correct the situation. And he keeps the details interesting and entertaining enough that I enjoyed reading every page of a nearly 400 page book.

This is not a book that will fill you with trust or enthusiasm for the medical establishment (anywhere in the world). But it's still oddly comforting: we do know how to do these things properly, and occasionally we even act on that knowledge. These problems are serious, but they should provoke outrage partly because they're correctable and yet aren't being corrected. And there are doctors like Goldacre who are trying to push medicine towards proper use of evidence, research, and knowledge, instead of commercial manipulation. Recommended if you have any interest at all in how medicine or scientific research is actually done, and what its pitfalls look like.

Rating: 8 out of 10

Dirk Eddelbuettel: rfoaas 0.0.4.20141224

25 December, 2014 - 07:43

A new version of rfoaas is now on CRAN. The rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

This is a minor update, affecting only the rfoaas side without changes on the FOAAS backend side. We documented that the result may need an encoding update (at least on the World's leading consumer OS), and now provide a default value for from so that many services can be called argument-less, or at least with one argument less than before. Both of these were suggested by Richie Cotton via issue tickets at the GitHub repo.

CRANberries also provides a diff to the previous release. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Chris Lamb: find(1), trailing slashes and symbolic links to directories

25 December, 2014 - 06:06

Here's a nice little "gotcha" in find(1).

First, let's create a directory and a symlink to that directory. We'll add an empty file just underneath to illustrate what is going on:

$ mkdir a
$ ln -s a b
$ touch a/file

If we invoke find with a trailing slash, everything works as expected:

$ find a/
a/
a/file

$ find b/
b/
b/file

... but if we omit the trailing slash, find does not traverse the symlink:

$ find a
a
a/file

$ find b
b

This implies that any normal-looking invocation of find such as:

find /path/to/dir -name 'somefile.ext' ...

... is subtly buggy as it won't accommodate the sysadmin replacing that path with a symlink.

This is, of course, well-covered in the find(1) manpage (spoiler: the safest option is to specify -H, or simply to append the trailing slash), but I would still class this as a "gotcha" because of the subtle difference between the trailing and non-trailing slash variants.
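For example, with -H the symlink given on the command line is followed (a quick check against the directories created above):

$ find -H b
b
b/file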

Putting it another way, it's completely reasonable that find doesn't follow symlinks, but when this behaviour is based on the presence of a trailing slash—a usually meaningless syntactic distinction—it crosses the Rubicon into being counter-intuitive.

Joey Hess: generating shell scripts from haskell using a shell monad

25 December, 2014 - 04:55

Shell script is the lingua franca of Unix; it's available everywhere and often the only reasonable choice to Get Stuff Done. But it's also clumsy, and it's easy to write unsafe shell scripts that forget to quote variables, typo names of functions, etc.

Wouldn't it be nice if we could write code in some better language, that generated nicely formed shell scripts and avoided such gotchas? Today, I've built a Haskell monad that can generate shell code.

Here's a fairly involved example. This demonstrates several features, including the variadic cmd, the ability to define shell functions, to bind and use shell variables, to build pipes (with the -|- operator), and to factor out generally useful haskell functions like pipeLess and promptFor ...

santa = script $ do
    hohoho <- func $
        cmd "echo" "Ho, ho, ho!" "Merry xmas!"
    hohoho

    promptFor "What's your name?" $ \name -> pipeLess $ do
        cmd "echo" "Let's see what's in" (val name <> quote "'s") "stocking!"
        forCmd (cmd "ls" "-1" (quote "/home/" <> val name)) $ \f -> do
            cmd "echo" "a shiny new" f
            hohoho

    cmd "rm" "/table/cookies" "/table/milk"
    hohoho

pipeLess :: Script () -> Script ()
pipeLess c = c -|- cmd "less"

promptFor :: T.Text -> (Var -> Script ()) -> Script ()
promptFor prompt cont = do
    cmd "printf" (prompt <> " ")
    var <- newVar "prompt"
    readVar var
    cont var

When run, that haskell program generates this shell code, which, while machine-generated, has nice indentation and is generally pretty readable.

#!/bin/sh
f1 () { :
    echo 'Ho, ho, ho!' 'Merry xmas!'
}
f1
printf 'What'"'"'s your name?  '
read '_prompt1'
(
    echo 'Let'"'"'s see what'"'"'s in' "$_prompt1"''"'"'s' 'stocking!'
    for _x1 in $(ls '-1' '/home/'"$_prompt1")
    do :
        echo 'a shiny new' "$_x1"
        f1
    done
) | (
    less
)
rm '/table/cookies' '/table/milk'
f1

Santa has already uploaded shell-monad to hackage and git.

There's a lot of things that could be added to this library (if, while, redirection, etc), but I can already see using it in various parts of propellor and git-annex that need to generate shell code.

Chris Lamb: 8:20AM. Sunday, 22 December 2013

25 December, 2014 - 03:39

Running east into the sun, he hadn't seen another human for over half an hour. He navigates Westferry Circus and heads south, cutting from the road through to the riverside pathway. He keeps his breathing steady - no reason to hurry.

The wind catches him from Westminster. It smells slightly salty but it's an ersatz attempt, nowhere near bracing enough to be a real sea breeze.

Pressing on, the vacant citadel of Canary Wharf disappears behind him. But as the peninsula curves around, a man appears in the distance. Even half a mile away he looks out of place, or rather—given the hour—time. He's walking purposefully, but it doesn't feel the kind of route someone would be taking to work. He's not wearing quite enough clothes for the weather either, and homeless people are rarely made to feel welcome in the Docklands. His supermarket denim visibly flaps in the breeze. "Relaxed fit", they call it.

When he gets within earshot the man cocks his head, not expecting to hear the regular cadence of approaching footsteps. He turns slightly to reveal he's cradling a large bottle of Coca-Cola, meekly wrapped in the swathing bands of two anonymously blue corner-shop plastic bags.

The runner eyes the Coke greedily but can quickly see that it has already been opened, tainted. Although only a mouthful or so has gone, a brown froth sloshes against the top of the container. Amateur, he thinks. He'll regret that later.

He looks back up to the man, who is now smiling at him. His left hand becomes visible as he strides: a four-pack of Carling. The man laughs.

"Oh, you and me mate are worlds apart!" the man shouts.

It's immediately friendly. He starts to raise his Carling but thinks better of it. It's momentarily awkward.

"Worlds apart mate", the man continues. "Have a good one!"

The runner smiles back.

Only in time, the runner thinks. They both can't stop.

Gunnar Wolf: More on Debian as a social, quasi-familiar project

24 December, 2014 - 23:49

I have long wanted to echo Gregor's beautiful Debian Advent Calendar posts. Gregor is a dear project member and a dear friend to many of us Debianers, who has shown an amount of stamina and care for the project that inspires everybody; this year, after many harsh flamefests in the project (despite which we are moving at a great rate towards a great release!), many people have felt the need to echo how Debian –even as often seen from the outside as a hostile mass of blabbering geeks– is actually a great place to work together and to create a deep, strong social fabric — And that's quite probably what binds the project together and ensures it will continue existing and excelling for a long time.

As for the personal part: This year, my Debian involvement has –once again– been reduced. Not because I care less about Debian, much to the contrary, but because I have taken on several responsibilities which require my attention and time. Technically, I'm basically maintaining a couple of PHP-based packages I use for work (most prominently, Drupal7). I have stepped back from most of my DebConf responsibilities, although I stay (and will stay, as it's an area of the project I deeply enjoy) involved. And, of course, my current main area of involvement is keyring-maint (for which I have posted several status updates here).

I have to say that we expected a much harder time (read: stronger opposition and discussions) regarding the expiry of 1024D keys. Of course, many people do have a hard time connecting anew to the web of trust, and we will still face quite a bit of work after January 1st, but the migration has been mostly pleasant (although clearly intensive) work. Jonathan has just told me we are down to only 306 1024D keys in the keyring (which almost exactly matches the "200-300" I expected back at DC14).

Anyway: People predicting doomsday scenarios for Debian do it because they are not familiar with how deep the project runs in us, how important it is socially, almost at a family level, to us that have been long involved in it. Debian is stronger than a technical or political discussion, no matter how harsh it is.

And, as a personal thank-you: Gregor, your actions (the GDAC, the RC bug reports) inspire us to stay active, to do our volunteer work better, and remind us of how great is it to be a part of a global, distributed will to Do It Right. Thanks a lot!

Gregor Herrmann: GDAC 2014/24

24 December, 2014 - 22:23

the last year hasn't been an easy one for debian. we've seen lots of fights, unproductive discussions, & in general behaviour which contributed to what enrico in his brilliant blog post called the "stink in the kitchen". people's feelings were hurt, some became less active, others resigned from a specific position or retired completely.

it happened to me as well that reading through threads full of trolling, aggressiveness, finger-pointing, disrespectful or abusive behaviour, etc. made me frustrated, or sad, or angry, or all kinds of other negative feelings.

but then I usually told myself: this is not the debian project as I know it; this is only a small part – a part which urgently needs improvement! –; but if we only look at it alone, our picture of debian at large is distorted.

the much bigger part of the debian life I know doesn't happen on this handful of high-profile mailing lists; it happens on dozens of specialized mailing lists, in many small IRC channels, in the BTS, & in in-person meetings. – & what I see there is most of the time constructive, collaborative, respectful communication, & committed, helpful, funny, awesome people.

the idea of this advent calendar was to give examples of some of my recent experiences which demonstrate what the bright side of debian is in my opinion & what contributes to my fun in debian. with the aim of sharing my impression that the proverbial "stink" is, if we take a step back, only one piece of the bigger picture, & that we shouldn't let ourselves get demotivated by only staring at this puzzle piece. – thanks to all the awesome contributors for polishing existing & adding new shiny puzzle pieces to our common picture every day!

finally: thanks for reading this series of posts, & especially thank you to all who provided positive feedback – this in turn motivated me to follow through!

& now: back to work, & don't forget to help cleaning the kitchen :)

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Gunnar Wolf: On rogue states disrupting foreign networks

24 December, 2014 - 11:47

Much ink has been spilled lately (well, more likely, lots of electrons have changed their paths lately — as most of these communications have surely been electronic) on the effects, blame, assurance and everything related to the (allegedly) North Korean attack on Sony's networks. And yes, the list of affected components is huge. Technically it was a very interesting feat, but it's even more interesting socially. Take the not-so-few people wanting to wipe North Korea from the face of the Earth, as... Well, how did such a puny nation dare touch a private company that's based in the USA?

Of course, there's no strong evidence the attack did originate in (or was funded by) North Korea.

And... I have read very few people talking about the parallels to the infamous Stuxnet, malware written by USA and Israel (not officially admitted, but with quite a bit of evidence pointing to it, and no denial attempts after quite a wide media exposure). In 2010, this worm derailed Iran's nuclear program. Iran, a sovereign nation. Yes, many people doubt such a nuclear program would be used "for good, not for evil" — But since when have those two words had an unambiguous meaning? And when did it become accepted as international law to operate based on hunches and a "everybody knows" mentality?

So, how can the same people repudiate NK's alleged actions and applaud Stuxnet as a perfect weapon for peace?


Creative Commons License: The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.