Planet Debian

Planet Debian - http://planet.debian.org/

Sven Hoexter: python-ipcalc bumped from 0.3 to 1.1.3

21 January, 2015 - 02:16

I've helped a friend to get started with Debian packaging and he has now adopted python-ipcalc. Since I've no prior experience with packaging of Python modules and there were five years of upstream development in between, I've uploaded to experimental to give it some exposure.

So if you still use the python-ipcalc package, which is part of all current Debian releases and the upcoming jessie release, please check out the package from experimental. I think the only reverse dependency within Debian is sshfp, which of course also requires some testing.

Raphael Geissert: Edit Debian, with iceweasel

20 January, 2015 - 14:00
Soon after publishing the chromium/chrome extension that allows you to edit Debian online, Moez Bouhlel sent a pull request to the extension's git repository: all the changes needed to make a Firefox extension!

After another session of browser extension discovery, I merged the commits and generated the xpi. So now you can go download the Debian online editing Firefox extension and hack the world, the Debian world.

Install it and start contributing to Debian from your browser. There's no excuse now.

Daniel Pocock: jSMPP project update, 2.1.1 and 2.2.1 releases

20 January, 2015 - 04:29

The jSMPP project on Github stopped processing pull requests over a year ago and appeared to be needing some help.

I've recently started hosting it under https://github.com/opentelecoms-org/jsmpp and tried to merge some of the backlog of pull requests myself.

There have been new releases:

  • 2.1.1 works in any project already using 2.1.0; it introduces bug fixes only.
  • 2.2.1 introduces some new features, API changes, and bigger bug fixes.

The new versions are easily accessible for Maven users through the central repository service.

Apache Camel has already updated to use 2.1.1.

Thanks to all those people who have contributed to this project throughout its history.

Daniel Pocock: Storage controllers for small Linux NFS networks

19 January, 2015 - 20:59

While contemplating the disk capacity upgrade for my Microserver at home, I've also been thinking about adding a proper storage controller.

Currently I just use the controller built into the Microserver, an AMD SB820M SATA controller, which is a bottleneck for SSD IOPS.

On the disks, I prefer to use software RAID (such as md or BtrFs) and not become dependent on the metadata format of any specific RAID controller. The RAID controllers don't offer the checksumming feature that is available in BtrFs and ZFS.

The use case is NFS for a small number of workstations. NFS synchronous writes block the client while the server ensures data really goes onto the disk. This creates a performance bottleneck. It is actually slower than if clients are writing directly to their local disks through the local OS caches.
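Whether writes are synchronous is controlled by the server's export options; a minimal sketch of an export entry (hypothetical path and subnet, not from the original post) illustrating the trade-off:

```
# /etc/exports -- 'sync' makes the server commit data to stable storage
# before replying to the client (safe, but the bottleneck described above);
# 'async' replies immediately (faster, but risks data loss on a crash).
# 'sync' is the default in current nfs-utils, so it is merely explicit here.
/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)
```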

SSDs on an NFS server offer some benefit because they can complete write operations more quickly and the NFS server can then tell the client the operation is complete. The more performant solution (albeit with a slight risk of data corruption) is to use a storage controller with a non-volatile (battery-backed or flash-protected) write cache.

Many RAID controllers have non-volatile write caches. Some online discussions of BtrFs and ZFS have suggested staying away from full RAID controllers though, amongst other things, to avoid the complexities of RAID controllers adding their metadata to the drives.

This brings me to the first challenge though: are there suitable storage controllers that have a non-volatile write cache but without having other RAID features?

Or a second possibility: out of the various RAID controllers that are available, do any provide first-class JBOD support?

Observations

I looked at specs and documentation for various RAID controllers and identified some of the following challenges:

Next steps

Are there other options to look at, for example, alternatives to NFS?

If I just add in a non-RAID HBA to enable faster IO to the SSDs will this be enough to make a noticeable difference on the small number of NFS clients I'm using?

Or is it inevitable that I will have to go with one of the solutions that involves putting a vendor's volume metadata onto JBOD volumes? If I do go that way, which of the vendors' metadata formats are most likely to be recognized by free software utilities in the future if I ever connect the disk to a generic non-RAID HBA?

Thanks to all those people who provided comments about choosing drives for this type of NAS usage.

Jonathan Wiltshire: Alcester BSP, day three

19 January, 2015 - 05:27

We have had a rather more successful weekend than I feared, as you can see from our log on the wiki page. Steve reproduced and wrote patches for several installer/bootloader bugs, and Neil and I spent significant time in a maze of twisty zope packages (we have managed to provide more diagnostics on the bug, even if we couldn’t resolve it). Ben and Adam have ploughed through a mixture of bugs and maintenance work.

I wrongly assumed we would only be able to touch a handful of bugs, since they are now mostly quite difficult, so it was rather pleasant to recap our progress this evening and see that it’s not all bad after all.

Alcester BSP, day three is a post from: jwiltshire.org.uk | Flattr

Gregor Herrmann: RC bugs 2014/51-2015/03

19 January, 2015 - 04:41

I have to admit that I was a bit lazy when it came to working on RC bugs in the last few weeks. Here's my not-so-stellar summary:

  • #729220 – pdl: "pdl: problems upgrading from wheezy due to triggers"
    investigate (unsuccessfully), later fixed by maintainer
  • #772868 – gxine: "gxine: Trigger cycle causes dpkg to fail processing"
    switch trigger from "interest" to "interest-noawait", upload to DELAYED/2
  • #774584 – rtpproxy: "rtpproxy: Deamon does not start as init script points to wrong executable path"
    adjust path in init script, upload to DELAYED/2
  • #774791 – src:xine-ui: "xine-ui: Creates dpkg trigger cycle via libxine2-ffmpeg, libxine2-misc-plugins or libxine2-x"
    add trigger patch from Michael Gilbert, upload to DELAYED/2
  • #774862 – ciderwebmail: "ciderwebmail: unhandled symlink to directory conversion: /usr/share/ciderwebmail/root/static/images/mimeicons"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion (pkg-perl)
  • #774867 – lirc-x: "lirc-x: unhandled symlink to directory conversion: /usr/share/doc/PACKAGE"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion, upload to DELAYED/2
  • #775640 – src:libarchive-zip-perl: "libarchive-zip-perl: FTBFS in jessie: Tests failures"
    start to investigate (pkg-perl)

Mark Brown: Heating the Internet of Things

19 January, 2015 - 04:23

The Internet of Things seems to be trendy these days: people like the shiny apps for controlling things, and typically there are claims that the devices will perform better than their predecessors by offloading things to the cloud – but this makes some people worry about potential security issues, and it’s not always clear that internet usage actually delivers benefits over something local. One of the more widely deployed applications is smart thermostats for central heating, which is something I’ve been playing with. I’m using Tado; there’s also at least Nest and Hive who do similar things, all relying on being connected to the internet for operation.

The main thing I’ve noticed has been that the temperature regulation in my flat is better: my previous thermostat allowed the temperature to vary by a couple of degrees around the target in winter, which got noticeable, while with this one the temperature generally seems to vary by a fraction of a degree at most. It does use the internet connection to get the temperature outside, though I’m fairly sure that most of the improvement is just down to a better algorithm (the thermostat monitors how quickly the flat heats up and uses this to decide when to turn off the heating, rather than waiting for the temperature to hit the target and then seeing it rise further as the radiators cool down) and performance would still be substantially improved without it.

The other thing that these systems deliver which does benefit much more from the internet connection is that it’s easy to control them remotely. This in turn makes it a lot easier to do things like turn the heating off when it’s not needed – you can do it remotely, and you can turn the heating back on without being in the flat so that you don’t need to remember to turn it off before you leave or come home to a cold building. The smarter ones do this automatically based on location detection from smartphones so you don’t need to think about it.

For example, when I started this post I was sitting in a coffee shop, so the heating had been turned off based on me taking my phone with me, and as a result the temperature had gone down a bit. By the time I got home the flat was back up to normal temperature, all without any meaningful intervention on my part. This is particularly attractive for me given that I work from home – I can’t easily set a schedule to turn the heating off during the day like someone who works in an office, so the heating would be on a lot of the time. Tado and Nest will to varying extents try to do this automatically; I don’t know about Hive. The Tado one at least works very well; I can’t speak to the others.

I’ve not had a bill for a full winter yet but I’m fairly sure looking at the meter that between the two features I’m saving a substantial amount of energy (and hence money and/or the environment depending on what you care about) and I’m also seeing a more constant temperature within the flat, my guess would be that most of the saving is coming from the heating being turned off when I leave the flat. For me at least this means that having the thermostat internet connected is worthwhile.

Dirk Eddelbuettel: Running UBSAN tests via clang with Rocker

19 January, 2015 - 00:37

Every now and then we get reports from CRAN about our packages failing a test there. A challenging one concerns UBSAN, or Undefined Behaviour Sanitizer. For background on UBSAN, see this RedHat blog post for gcc and this one from LLVM about clang.

I had written briefly about this before in a blog post introducing the sanitizers package for tests, as well as on the corresponding package page for sanitizers; both clearly predate our follow-up Rocker.org repo / project, described in this initial announcement and again when we became the official R container for Docker.

Rocker had support for SAN testing, but UBSAN was not working yet. So following a recent CRAN report against our RcppAnnoy package, I was unable to replicate the error and asked for help on r-devel in this thread.

Martyn Plummer and Jan van der Laan kindly sent their configurations in the same thread and off-list; Jeff Horner did so too following an initial tweet offering help. None of these worked for me, but further trials eventually led me to the (already mentioned) RedHat blog post with its mention of -fno-sanitize-recover, needed to actually have an error abort a test. That flag, coupled with the settings used by Martyn, is what worked for me: clang-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover.

This is now part of the updated Dockerfile of the R-devel-SAN-Clang repo behind the r-devel-ubsan-clang container. It contains these settings, as well as a new support script check.r for littler, which enables testing right out of the box.

Here is a complete example:

docker                          # run Docker (any recent version, I use 1.2.0)
  run                           # launch a container 
    --rm                        # remove Docker temporary objects when done
    -ti                         # use a terminal and interactive mode 
    -v $(pwd):/mnt              # mount the current dir. as /mnt in the container
    rocker/r-devel-ubsan-clang  # using the rocker/r-devel-ubsan-clang container
  check.r                       # launch the check.r command from littler (in container)
    --setwd /mnt                # with a setwd() to the /mnt directory
    --install-deps              # installing all package dependencies before the test
    RcppAnnoy_0.0.5.tar.gz      # and test this tarball

I know. It's a mouthful. But it really is merely the standard practice of running Docker to launch a single command. And while I frequently make this the /bin/bash command (hence the -ti options I always use) to work and explore interactively, here we do one better thanks to the (pretty useful so far) check.r script I wrote over the last two days.

check.r does about the same as R CMD check. If you look inside check.r you will see a call to a (non-exported) function from the (R base-internal) tools package. We call the same function here. But to make things more interesting we also first install the package under test to really ensure we have all build-dependencies from CRAN met. (And we plan to extend check.r to support additional apt-get calls in case other libraries etc. are needed.) We use the dependencies=TRUE option to have R smartly install Suggests: as well, but only one level deep (see help(install.packages) for details). With that prerequisite out of the way, the test can proceed as if we had done R CMD check (and an additional R CMD INSTALL as well). The result for this (known-bad) package:

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.tar.gz 
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’

trying URL 'http://cran.rstudio.com/src/contrib/Rcpp_0.11.3.tar.gz'
Content type 'application/x-gzip' length 2169583 bytes (2.1 MB)
opened URL
==================================================
downloaded 2.1 MB

trying URL 'http://cran.rstudio.com/src/contrib/BH_1.55.0-3.tar.gz'
Content type 'application/x-gzip' length 7860141 bytes (7.5 MB)
opened URL
==================================================
downloaded 7.5 MB

trying URL 'http://cran.rstudio.com/src/contrib/RUnit_0.4.28.tar.gz'
Content type 'application/x-gzip' length 322486 bytes (314 KB)
opened URL
==================================================
downloaded 314 KB

trying URL 'http://cran.rstudio.com/src/contrib/RcppAnnoy_0.0.4.tar.gz'
Content type 'application/x-gzip' length 25777 bytes (25 KB)
opened URL
==================================================
downloaded 25 KB

* installing *source* package ‘Rcpp’ ...
** package ‘Rcpp’ successfully unpacked and MD5 sums checked
** libs
clang++-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover -I/usr/local/lib/R/include -DNDEBUG -I../inst/include/ -I/usr/local/include    -fpic  -pipe -Wall -pedantic -g  -c Date.cpp -o Date.o

[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’
 ERROR
Running the tests in ‘tests/runUnitTests.R’ failed.
Last 13 lines of output:
  +     if (getErrors(tests)$nFail > 0) {
  +         stop("TEST FAILED!")
  +     }
  +     if (getErrors(tests)$nErr > 0) {
  +         stop("TEST HAD ERRORS!")
  +     }
  +     if (getErrors(tests)$nTestFunc < 1) {
  +         stop("NO TEST FUNCTIONS RUN!")
  +     }
  + }
  
  
  Executing test function test01getNNsByVector  ... ../inst/include/annoylib.h:532:40: runtime error: index 3 out of bounds for type 'int const[2]'
* checking PDF version of manual ... OK
* DONE

Status: 1 ERROR, 2 WARNINGs, 1 NOTE
See
  ‘/tmp/RcppAnnoy/..Rcheck/00check.log’
for details.
root@a7687c014e55:/tmp/RcppAnnoy# 

The log shows that thanks to check.r, we first download and then install the required packages Rcpp, BH, RUnit and RcppAnnoy itself (in the CRAN release). Rcpp is installed first; we then cut out the middle until we get to ... the failure we set out to confirm.

Now having a tool to confirm the error, we can work on improved code.

One such fix currently under inspection in a non-release version 0.0.5.1 then passes with the exact same invocation (but pointing at RcppAnnoy_0.0.5.1.tar.gz):

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.1.tar.gz
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’
[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’
 OK
* checking PDF version of manual ... OK
* DONE

Status: 1 WARNING
See
  ‘/mnt/RcppAnnoy.Rcheck/00check.log’
for details.

edd@max:~/git$

This proceeds the same way from the same pristine, clean container for testing. It first installs the four required packages, and then proceeds to test the new and improved tarball, which passes the test that failed above with no issues. Good.

So we now have an "appliance" container anybody can download for free from the Docker Hub and deploy, as we did here, in order to have a fully automated, one-command setup for testing for UBSAN errors.

UBSAN is a very powerful tool. We are only beginning to deploy it. There are many more useful configuration settings. I would love to hear from anyone who would like to work on building this out via the R-devel-SAN-Clang GitHub repo. Improvements to the littler scripts are similarly welcome (and I plan on releasing an updated littler package "soon").

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

EvolvisForge blog: Debian/m68k hacking weekend cleanup

18 January, 2015 - 23:16

OK, time to clean up ↳ tarent so people can work again tomorrow.

Not much to clean though (the participants were nice and cleaned up after themselves ☺), so it’s mostly putting stuff back to where it belongs. Oh, and drinking more of the cool Belgian beer Geert (Linux upstream) brought ☻

We were productive, reporting and fixing kernel bugs, fixing hardware, swapping and partitioning discs, upgrading software, getting buildds (mostly Amiga) back to work, trying X11 (kdrive) on a bare metal Atari Falcon (and finding a window manager that works with it), etc. – I hope someone else writes a report; for now we have a photo and a screenshot (made with trusty xwd). Watch the debian-68k mailing list archives for things to come.

I think that, issues with electric cars aside, everyone liked the food places too ;-)

Andreas Metzler: Another new toy

18 January, 2015 - 22:35

Given that snow is still a little bit sparse for snowboarding and the weather could be improved on, I have made myself a late Christmas present:

It is a rather sporty rodel (Torggler TS 120 Tourenrodel Spezial 2014/15, 9 kg weight, with fast (non-stainless) "racing rails" and a 22° angle of the runners) but not a competition model. I wish I had bought this years ago. It is a lot more comfortable than a classic sled ("Davoser Schlitten"), since one is sitting in instead of on top of the sled, somewhat like in a hammock. Being able to steer without putting a foot into the snow has the nice side effect that the snow stays on the ground instead of ending up in my face. Obviously it is also faster, which is a huge improvement even for recreational riding, since it makes the difference between riding the sledge and pulling it on flattish stretches. Strongly recommended.

FWIW I ordered this via rodelfuehrer.de (they started with a guidebook of luge tracks, which translates to "Rodelführer"), where I would happily order again.

Chris Lamb: Adjusting a backing track with SoX

18 January, 2015 - 19:28

Earlier today I came across some classical sheet music that included a "playalong" CD, just like a regular recording except it omits the solo cello part. After a quick listen it became clear there were two problems:

  • The recording was made at A=442, rather than the more standard A=440.
  • The tempi of the movements were not to my taste, either too fast or too slow.

SoX, the "Swiss Army knife of sound processing programs", can easily adjust the latter, but to remedy the former it must be provided with a dimensionless "cent" unit – i.e. 1/100th of a semitone – rather than the 442 Hz and 440 Hz reference frequencies.

First, we calculate the cent difference: 1200 × log2(442/440) ≈ 7.85 cents.
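A quick way to compute this on the command line (the 7.85 passed as a negative pitch shift to sox below):

```shell
# cents = 1200 * log2(f_actual / f_reference); awk's log() is natural log
awk 'BEGIN { printf "%.2f cents\n", 1200 * log(442/440) / log(2) }'
# prints: 7.85 cents
```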

Next, we rip the material from the CD:

$ sudo apt-get install ripit flac
[..]
$ ripit --coder 2 --eject --nointeraction
[..]

And finally we adjust the tempo and pitch:

$ apt-get install sox libsox-fmt-mp3
[..]
$ sox 01.flac 01.mp3 pitch -7.85 tempo 1.00 # (Tuning notes)
$ sox 02.flac 02.mp3 pitch -7.85 tempo 0.95 # Too fast!
$ sox 03.flac 03.mp3 pitch -7.85 tempo 1.01 # Close..
$ sox 04.flac 04.mp3 pitch -7.85 tempo 1.03 # Too slow!

(I'm converting to MP3 at the same time as it'll be more convenient on my phone.)

Ian Campbell: Using Grub 2 as a bootloader for Xen PV guests on Debian Jessie

18 January, 2015 - 16:23

I recently wrote a blog post on using grub 2 as a Xen PV bootloader for work. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.

Rather than repeat the whole thing here I'll just briefly cover the stuff which is of interest for Debian users (if you want the full background and the stuff on building grub from source etc. then see the original post).

TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests then in your guest configuration, depending on whether you want a 32- or 64-bit PV guest write either:

kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"

or

kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"

(instead of bootloader = ... or any other kernel = ...; also omit ramdisk = ... and any command-line related stuff such as root = ..., extra = ... or cmdline = ...) and your guests will boot using Grub 2, much like on native.
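Putting it together, a minimal 64-bit guest configuration might look like this (guest name, disk and network details are placeholders, not from the original post):

```
# /etc/xen/jessie-guest.cfg -- minimal sketch for a 64-bit PV guest
name   = "jessie-guest"
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
memory = 1024
vcpus  = 2
disk   = [ "phy:/dev/vg0/jessie-guest,xvda,w" ]
vif    = [ "bridge=xenbr0" ]
# no ramdisk, root, extra or cmdline lines -- grub-xen inside the
# guest finds and boots the kernel itself
```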

In slightly more detail:

The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).

The package grub-xen-host contains grub binaries configured for the host; these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian or can be installed by hand.

The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates this into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integration with the kernel packages (i.e. running update-grub at the right times), so it is the grub-xen which should be installed in Debian guests.

At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).

Guido Günther: whatmaps 0.0.9

18 January, 2015 - 16:17

I have released whatmaps 0.0.9, a tool to check which processes map shared objects of a certain package. It can integrate into apt to automatically restart services after a security upgrade.

This release fixes the integration with recent systemd (as in Debian Jessie), makes logging more consistent and eases integration into downstream distributions. It's available in Debian Sid and Jessie and will show up in Wheezy-backports soon.

This blog is flattr enabled.

Rogério Brito: Uploading SICP to Youtube

18 January, 2015 - 08:52
Intro

I am not alone in considering Harold Abelson and Gerald Jay Sussman's recorded lectures, based on their book "Structure and Interpretation of Computer Programs", a masterpiece.

There are many things to like about the content of the lectures, beginning with some pearls of wisdom about the craft of writing software (even though this is not really a "software engineering" book), the clarity with which the concepts are described, the Freedom-friendly attitude of the authors regarding the material that they produced, the breadth of the subjects covered, and much more.

The videos, their length, and splitting them

The course consists of 20 video files and they are all uploaded on Youtube already.

There is one thing, though: while the lectures are naturally divided into segments (the instructors took a break after every 30 minutes or so of lecturing), the videos corresponding to each lecture have all the segments concatenated.

To better watch them, accounting for the possibility of putting a few of the lectures on a mobile device, and to avoid fast-forwarding long videos from my NAS when I am watching them on my TV (among other factors), I decided to sit down, take notes for each video of where the breaks were, and write a simple Python script to help split the videos into segments and then reencode them.

I decided not to take the videos from Youtube to perform my splitting activities, but, instead, to operate on one of the "sources" that the authors once had in their homepage (videos encoded in DivX and audio in MP3). The videos are still available as a torrent file (with a magnet link for the hash 650704e4439d7857a33fe4e32bcfdc2cb1db34db), with some very good souls still seeding it (I can seed it too, if desired). Alas, I have not found a source for the higher quality MPEG1 videos, but I think that the videos are legible enough to avoid bothering with a larger download.

I soon found out that there are some beneficial side-effects of splitting the videos, like not having to edit/equalize the entire audio of the videos when only a segment was bad (which is understandable, as these lectures were recorded almost 30 years ago and technology was not as advanced as things are today).

So, since I already have the split videos lying around here, I figured out that, perhaps, other people may want to download them, as they may be more convenient to watch (say, during commutes or whatever/whenever/wherever it best suits them).

Of course, uploading all the videos is going to take a while and I would only do it if people would really benefit from them. If you think so, let me know here (or if you know someone who would like the split version of the videos, spread the word).

Jonathan Wiltshire: Alcester BSP, day two

18 January, 2015 - 06:02

Neil has abandoned his reputation as an RM machine, and instead concentrated on making the delayed queue as long as he can. I’m reliably informed that it’s now at a 3-year high. Steve is delighted that his reining-in work is finally having an effect.

Alcester BSP, day two is a post from: jwiltshire.org.uk | Flattr

Tim Retout: CPAN PR Challenge - January - IO-Digest

18 January, 2015 - 05:01

I signed up to the CPAN Pull Request Challenge - apparently I'm entrant 170 of a few hundred.

My assigned dist for January was IO-Digest - this seems a fairly stable module. To get the ball rolling, I fixed the README, but this was somehow unsatisfying. :)

To follow up, I added Travis-CI support, with a view to validating the other open pull request - but that one looks likely to be a platform-specific problem.

Then I extended the Travis file to generate coverage reports, and separately realised the docs weren't quite fully complete, so fixed this and added a test.

Two of these have already been merged by the author, who was very responsive.

Part of me worries that Github is a centralized, proprietary platform that we now trust most of our software source code to. But activities such as this are surely a good thing - how much harder would it be to co-ordinate 300 volunteers to submit patches in a distributed fashion? I suppose you could do something similar with the list of Debian source packages and metadata about the upstream VCS, say...

Ulrike Uhlig: Updating a profile in Debian’s apparmor-profiles-extra package

17 January, 2015 - 22:00

I have gotten my first patch to the Pidgin AppArmor profile accepted upstream. One of my mentors thus suggested that I patch the updated profile into the Debian package myself. This is fairly easy and simply requires that one knows how to use Git.

If you want to get write access to the apparmor-profiles-extra package in Debian, you first need to request access to the Collaborative Maintenance Alioth project, collab-maint in short. This also requires setting up an account on Alioth.

Once all is set up, one can export the apparmor-profiles-extra Git repository.
If you simply want to submit a patch, it’s sufficient to clone this repository anonymously.
Otherwise, one should use the “--auth” parameter with “debcheckout”. The “debcheckout” command is part of the “devscripts” package:

debcheckout --auth apparmor-profiles-extra

Go into the apparmor-profiles-extra folder and create a new working branch:

git branch workingtitle
git checkout workingtitle

Get the latest version of profiles from upstream. In “profiles”, one can edit the profiles.

Test.

The debian/README.Debian file should be edited: add what relevant changes one just imported from upstream.

Then, one could either push the branch to collab-maint:

git commit -a
git push origin workingtitle

or simply submit a patch to the Debian Bug Tracking System against the apparmor-profiles-extra package.

The Debian AppArmor packaging team mailing list will receive a notification of this commit. This way, commits can be peer reviewed and merged by the team.

Guido Günther: krb5-auth-dialog 3.15.4

17 January, 2015 - 16:42

To keep up with GNOME’s schedule I’ve released krb5-auth-dialog 3.15.4. The changes in 3.15.1 and 3.15.4 include, among updated translations, the replacement of deprecated GTK+ widgets, minor UI cleanups and bug fixes, a header bar fix that makes us only use header bar buttons iff the desktop environment has them enabled.

This makes krb5-auth-dialog better integrated into other desktops again, thanks to mclasen's awesome work.

This blog is flattr enabled.

Diego Escalante Urrelo: Link Pack #03

17 January, 2015 - 07:45

What’s that? The third edition of Link Pack of course!

Playing with Power (7 minutes, Vimeo)
A super awesome story about a stop motion animator that turned a Nintendo Power Glove into the perfect animation tool. It’s a fun, inspiring video :-). I love the Power Glove, it’s so bad.

The Power Glove – Angry Video Game Nerd – Episode 14 (12 minutes, YouTube)
On the topic of the Power Glove, here’s the now classic Angry Video Game Nerd video about it. James Rolfe is funny.

Ship Your Enemies Glitter
A rising star in the internet business landscape. You pay them $9.99 and they send an envelope full of glitter to your worst enemy. They promise it will jump into everything, as usual. Damn you glitter.

A Guide to Practical Contentment
Be happy with what you have, but understand why:

(…) if you start in this place of fixing what’s wrong with you, you keep looking for what else is wrong with you, what else you need to improve. So maybe now feel like you don’t have enough muscles, or six pack abs, or you think your calves don’t look good, or if it’s not about your body, you’ll find something else.

So it’s this never-ending cycle for your entire life. You never reach it. If you start with a place of wanting to improve yourself and feeling stuck, even if you’re constantly successful and improving, you’re always looking for happiness from external sources. You don’t find the happiness from within, so you look to other things.

The Comments Section For Every Video Where Someone Does A Pushup
Comments. From YouTube. Enough said.

“These are dips. Not pushups. In the entire history of the world, no one has ever successfully performed a pushup. They’re all just dips.”

“STOP DRIVING WITH YOUR HIPS. IF YOU’RE DOING A PUSHUP CORRECTLY, YOUR HIPS SHOULD CEASE TO EXIST.”

“You could do 100 pushups like this and it wouldn’t improve your strength at all. You’re just bending your arms.”

Self-Taught Chinese Street Photographer Tao Liu Has an Eye for Peculiar Moments
This Chinese photog uses his lunch break to snap interesting street photography. Funny selection by PetaPixel, his Flickr page has even more stuff. Even more in his photoblog.

By Liu Tao. https://www.flickr.com/photos/58083590@N05/14613273495/

Enrique Castro-Mendivil’s Agua Dulce photo set
Another interesting photo link, this time of the most popular beach in Lima; with most visitors coming from low-income neighborhoods, it shows how fragmented the city is.

By Enrique Castro-Mendivil. http://www.castromendivilphoto.com/index.php/component/content/article/11-work/69-agua-dulce

Jonathan Wiltshire: Alcester BSP, day one

17 January, 2015 - 07:25

Perhaps I should say evening one, since we didn’t get going until nine or so. I have mostly been processing unblocks – 13 in all. We have a delayed upload and a downgrade in the pipeline, plus a tested diff for Django. Predictably, Neil had the one and only removal request so far.

Alcester BSP, day one is a post from: jwiltshire.org.uk | Flattr

Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.